de-francophones committed
Commit e667c31
1 Parent(s): 6b08210

293dfd8662c77747009db14dc626367aff1f98e8b55fda17f56741da6901afa1
en/5544.html.txt ADDED
@@ -0,0 +1,25 @@
+
+
+ Over 30 extinct genera, 6 extant, see text.
+
+ Suidae is a family of artiodactyl mammals commonly called pigs, hogs, or boars. In addition to numerous fossil species, 18 extant species are currently recognized (or 19 counting domestic pigs and wild boars separately), classified into between four and eight genera. Within this family, the genus Sus includes the domestic pig, Sus scrofa domesticus or Sus domesticus, and many species of wild pig from Europe to the Pacific. Other genera include babirusas and warthogs. All suids, or swine, are native to the Old World, ranging from Asia to Europe and Africa.
+
+ The earliest fossil suids date from the Oligocene epoch in Asia, and their descendants reached Europe during the Miocene.[1] Several fossil species are known and show adaptations to a wide range of different diets, from strict herbivory to possible carrion-eating (in Tetraconodontinae).[2]
+
+ Suids belong to the order Artiodactyla, and are generally regarded as the living members of that order most similar to the ancestral form. Unlike most other members of the order, they have four toes on each foot, although they walk only on the middle two digits, with the others staying clear of the ground. They also have a simple stomach, rather than the more complex ruminant stomach found in most other artiodactyl families.[3]
+
+ They are small to medium-sized animals, varying in size from 58 to 66 cm (23 to 26 in) in length and 6 to 9 kg (13 to 20 lb) in weight in the case of the pygmy hog, to 130–210 cm (4.3–6.9 ft) and 100–275 kg (220–606 lb) in the giant forest hog.[4] They have large heads and short necks, with relatively small eyes and prominent ears. Their heads have a distinctive snout, ending in a disc-shaped nose. Suids typically have a bristly coat, and a short tail ending in a tassel.[citation needed] The males possess a corkscrew-shaped penis, which fits into a similarly shaped groove in the female's cervix.[5][6][7]
+
+ Suids have a well-developed sense of hearing, and are vocal animals, communicating with a series of grunts, squeals, and similar sounds. They also have an acute sense of smell. Many species are omnivorous, eating grass, leaves, roots, insects, worms, and even frogs or mice. Other species are more selective and purely herbivorous.[3]
+
+ Their teeth reflect their diet, and suids retain the upper incisors, which are lost in most other artiodactyls. The canine teeth are enlarged to form prominent tusks, used for rooting in moist earth or undergrowth, and in fighting. They have only a short diastema. The number of teeth varies between species, but the general dental formula is 1–3.1.2–4.3 / 3.1.2.3 (upper/lower); a worked example of reading such a formula follows this file's diff.
+
+ Suids are intelligent and adaptable animals. Adult females (sows) and their young travel in a group (sounder; see List of animal names), while adult males (boars) are either solitary, or travel in small bachelor groups. Males generally are not territorial, and come into conflict only during the mating season.
+
+ Litter size varies between one and twelve, depending on the species. The mother prepares a grass nest or similar den, which the young leave after about ten days. Suids are weaned at around three months, and become sexually mature at 18 months. In practice, however, male suids are unlikely to gain access to sows in the wild until they have reached their full physical size, at around four years of age. In all species, the male is significantly larger than the female, and possesses more prominent tusks.[3]
+
+ The following 18 extant species of suid are currently recognised:[8]
+
+
+
+ A partial list of genera, with extinct taxa marked with a dagger "†",[2] is:
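The dental formula quoted above is compact shorthand: the four figures give the number of incisors, canines, premolars and molars on one side of the jaw, with the upper row written before (or over) the lower row, and the total tooth count is twice the sum of both rows. As a worked illustration, here is the count for the domestic pig, Sus scrofa domesticus, whose adult formula of 3.1.4.3 / 3.1.4.3 is a standard textbook case (an editorial example, not a figure taken from the file above):

\[
2 \times \left( \underbrace{(3+1+4+3)}_{\text{upper row}} + \underbrace{(3+1+4+3)}_{\text{lower row}} \right) = 2 \times (11 + 11) = 44 \ \text{teeth}
\]

The ranges in the family-level formula, such as 1–3, simply reflect variation between species in how many teeth of each type are present.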
en/5545.html.txt ADDED
@@ -0,0 +1,25 @@
+
+
+ Over 30 extinct genera, 6 extant, see text.
+
+ Suidae is a family of artiodactyl mammals commonly called pigs, hogs, or boars. In addition to numerous fossil species, 18 extant species are currently recognized (or 19 counting domestic pigs and wild boars separately), classified into between four and eight genera. Within this family, the genus Sus includes the domestic pig, Sus scrofa domesticus or Sus domesticus, and many species of wild pig from Europe to the Pacific. Other genera include babirusas and warthogs. All suids, or swine, are native to the Old World, ranging from Asia to Europe and Africa.
+
+ The earliest fossil suids date from the Oligocene epoch in Asia, and their descendants reached Europe during the Miocene.[1] Several fossil species are known and show adaptations to a wide range of different diets, from strict herbivory to possible carrion-eating (in Tetraconodontinae).[2]
+
+ Suids belong to the order Artiodactyla, and are generally regarded as the living members of that order most similar to the ancestral form. Unlike most other members of the order, they have four toes on each foot, although they walk only on the middle two digits, with the others staying clear of the ground. They also have a simple stomach, rather than the more complex ruminant stomach found in most other artiodactyl families.[3]
+
+ They are small to medium-sized animals, varying in size from 58 to 66 cm (23 to 26 in) in length and 6 to 9 kg (13 to 20 lb) in weight in the case of the pygmy hog, to 130–210 cm (4.3–6.9 ft) and 100–275 kg (220–606 lb) in the giant forest hog.[4] They have large heads and short necks, with relatively small eyes and prominent ears. Their heads have a distinctive snout, ending in a disc-shaped nose. Suids typically have a bristly coat, and a short tail ending in a tassel.[citation needed] The males possess a corkscrew-shaped penis, which fits into a similarly shaped groove in the female's cervix.[5][6][7]
+
+ Suids have a well-developed sense of hearing, and are vocal animals, communicating with a series of grunts, squeals, and similar sounds. They also have an acute sense of smell. Many species are omnivorous, eating grass, leaves, roots, insects, worms, and even frogs or mice. Other species are more selective and purely herbivorous.[3]
+
+ Their teeth reflect their diet, and suids retain the upper incisors, which are lost in most other artiodactyls. The canine teeth are enlarged to form prominent tusks, used for rooting in moist earth or undergrowth, and in fighting. They have only a short diastema. The number of teeth varies between species, but the general dental formula is 1–3.1.2–4.3 / 3.1.2.3 (upper/lower).
+
+ Suids are intelligent and adaptable animals. Adult females (sows) and their young travel in a group (sounder; see List of animal names), while adult males (boars) are either solitary, or travel in small bachelor groups. Males generally are not territorial, and come into conflict only during the mating season.
+
+ Litter size varies between one and twelve, depending on the species. The mother prepares a grass nest or similar den, which the young leave after about ten days. Suids are weaned at around three months, and become sexually mature at 18 months. In practice, however, male suids are unlikely to gain access to sows in the wild until they have reached their full physical size, at around four years of age. In all species, the male is significantly larger than the female, and possesses more prominent tusks.[3]
+
+ The following 18 extant species of suid are currently recognised:[8]
+
+
+
+ A partial list of genera, with extinct taxa marked with a dagger "†",[2] is:
en/5546.html.txt ADDED
@@ -0,0 +1,320 @@
+
+
+
+
+ in Europe (green and dark grey)
+
+ Switzerland, officially the Swiss Confederation, is a country situated at the confluence of Western, Central, and Southern Europe.[10][note 4] It is a federal republic composed of 26 cantons, with federal authorities based in Bern.[1][2][note 1] Switzerland is a landlocked country bordered by Italy to the south, France to the west, Germany to the north, and Austria and Liechtenstein to the east. It is geographically divided among the Swiss Plateau, the Alps, and the Jura, spanning a total area of 41,285 km2 (15,940 sq mi) and a land area of 39,997 km2 (15,443 sq mi). While the Alps occupy the greater part of the territory, the Swiss population of approximately 8.5 million is concentrated mostly on the plateau, where the largest cities and economic centres are located, among them Zürich, Geneva and Basel, where multiple international organisations are domiciled (such as FIFA, the UN's second-largest Office, and the Bank for International Settlements) and where Switzerland's main international airports are located.
+
+ The establishment of the Old Swiss Confederacy dates to the late medieval period, resulting from a series of military successes against Austria and Burgundy. Swiss independence from the Holy Roman Empire was formally recognized in the Peace of Westphalia in 1648. The Federal Charter of 1291 is considered the founding document of Switzerland, which is celebrated on Swiss National Day. Since the Reformation of the 16th century, Switzerland has maintained a strong policy of armed neutrality; it has not fought an international war since 1815 and did not join the United Nations until 2002. Nevertheless, it pursues an active foreign policy and is frequently involved in peace-building processes around the world.[11] Switzerland is the birthplace of the Red Cross, one of the world's oldest and best-known humanitarian organisations, and is home to numerous international organisations, including the United Nations Office at Geneva, which is its second-largest in the world. It is a founding member of the European Free Trade Association, but notably not part of the European Union, the European Economic Area or the Eurozone. However, it participates in the Schengen Area and the European Single Market through bilateral treaties.
+
+ Switzerland occupies the crossroads of Germanic and Romance Europe, as reflected in its four main linguistic and cultural regions: German, French, Italian and Romansh. Although the majority of the population are German-speaking, Swiss national identity is rooted in a common historical background, shared values such as federalism and direct democracy,[12] and Alpine symbolism.[13][14] Due to its linguistic diversity, Switzerland is known by a variety of native names: Schweiz [ˈʃvaɪts] (German);[note 5] Suisse [sɥis(ə)] (French); Svizzera [ˈzvittsera] (Italian); and Svizra [ˈʒviːtsrɐ, ˈʒviːtsʁɐ] (Romansh).[note 6] On coins and stamps, the Latin name, Confoederatio Helvetica – frequently shortened to "Helvetia" – is used instead of the four national languages.
+
+ The sovereign state is one of the most developed countries in the world, with the highest nominal wealth per adult[15] and the eighth-highest per capita gross domestic product.[16][17] It ranks at or near the top in several international metrics, including economic competitiveness and human development. Zürich, Geneva and Basel have been ranked among the top ten cities in the world in terms of quality of life, with Zürich ranked second globally.[18] In 2019, IMD placed Switzerland first in attracting skilled workers.[19] The World Economic Forum ranks it the 5th most competitive country globally.[20]
+
+ The English name Switzerland is a compound containing Switzer, an obsolete term for the Swiss, which was in use during the 16th to 19th centuries.[21] The English adjective Swiss is a loan from French Suisse, also in use since the 16th century. The name Switzer is from the Alemannic Schwiizer, in origin an inhabitant of Schwyz and its associated territory, one of the Waldstätte cantons which formed the nucleus of the Old Swiss Confederacy. The Swiss began to adopt the name for themselves after the Swabian War of 1499, used alongside the term for "Confederates", Eidgenossen (literally: comrades by oath), used since the 14th century. The data code for Switzerland, CH, is derived from Latin Confoederatio Helvetica (English: Helvetic Confederation).
+
+ The toponym Schwyz itself was first attested in 972, as Old High German Suittes, ultimately perhaps related to swedan ‘to burn’ (cf. Old Norse svíða ‘to singe, burn’), referring to the area of forest that was burned and cleared for building.[22] The name was extended to the area dominated by the canton, and after the Swabian War of 1499 gradually came to be used for the entire Confederation.[23][24]
+ The Swiss German name of the country, Schwiiz, is homophonous with that of the canton and the settlement, but distinguished by the use of the definite article (d'Schwiiz for the Confederation,[25] but simply Schwyz for the canton and the town).[26] The long [iː] of Swiss German is historically and still often today spelled ⟨y⟩ rather than ⟨ii⟩, preserving the original identity of the two names even in writing.
+
+ The Latin name Confoederatio Helvetica was neologized and introduced gradually after the formation of the federal state in 1848, harking back to the Napoleonic Helvetic Republic, appearing on coins from 1879, inscribed on the Federal Palace in 1902 and after 1948 used in the official seal.[27] For example, the ISO banking code "CHF" for the Swiss franc and the country top-level domain ".ch" are both taken from the state's Latin name. Helvetica is derived from the Helvetii, a Gaulish tribe living on the Swiss plateau before the Roman era.
+
+ Helvetia appears as a national personification of the Swiss confederacy in the 17th century with a 1672 play by Johann Caspar Weissenbach.[28]
+
+ Switzerland has existed as a state in its present form since the adoption of the Swiss Federal Constitution in 1848. The precursors of Switzerland established a protective alliance at the end of the 13th century (1291), forming a loose confederation of states which persisted for centuries.
+
+ The oldest traces of hominid existence in Switzerland date back about 150,000 years.[29] The oldest known farming settlements in Switzerland, which were found at Gächlingen, have been dated to around 5300 BC.[29]
+
+ The earliest known cultural tribes of the area were members of the Hallstatt and La Tène cultures, named after the archaeological site of La Tène on the north side of Lake Neuchâtel. La Tène culture developed and flourished during the late Iron Age from around 450 BC,[29] possibly under some influence from the Greek and Etruscan civilisations. One of the most important tribal groups in the Swiss region was the Helvetii. Steadily harassed by the Germanic tribes, in 58 BC the Helvetii decided to abandon the Swiss plateau and migrate to western Gallia, but Julius Caesar's armies pursued and defeated them at the Battle of Bibracte, in today's eastern France, forcing the tribe to move back to its original homeland.[29] In 15 BC, Tiberius, who would one day become the second Roman emperor, and his brother Drusus, conquered the Alps, integrating them into the Roman Empire. The area occupied by the Helvetii—the namesakes of the later Confoederatio Helvetica—first became part of Rome's Gallia Belgica province and then of its Germania Superior province, while the eastern portion of modern Switzerland was integrated into the Roman province of Raetia. Sometime around the start of the Common Era, the Romans maintained a large legionary camp called Vindonissa, now a ruin at the confluence of the Aare and Reuss rivers, near the town of Windisch, an outskirt of Brugg.
+
+ The first and second centuries AD were an age of prosperity for the population living on the Swiss plateau. Several towns, like Aventicum, Iulia Equestris and Augusta Raurica, reached a remarkable size, while hundreds of agricultural estates (Villae rusticae) were founded in the countryside.
+
+ Around 260 AD, the fall of the Agri Decumates territory north of the Rhine transformed today's Switzerland into a frontier land of the Empire. Repeated raids by the Alamanni tribes provoked the ruin of the Roman towns and economy, forcing the population to find shelter near Roman fortresses, like the Castrum Rauracense near Augusta Raurica. The Empire built another line of defence at the north border (the so-called Donau-Iller-Rhine-Limes), but at the end of the fourth century the increased Germanic pressure forced the Romans to abandon the linear defence concept, and the Swiss plateau was finally open to the settlement of Germanic tribes.
+
+ In the Early Middle Ages, from the end of the 4th century, the western extent of modern-day Switzerland was part of the territory of the Kings of the Burgundians. The Alemanni settled the Swiss plateau in the 5th century and the valleys of the Alps in the 8th century, forming Alemannia. Modern-day Switzerland was therefore then divided between the kingdoms of Alemannia and Burgundy.[29] The entire region became part of the expanding Frankish Empire in the 6th century, following Clovis I's victory over the Alemanni at Tolbiac in 504 AD, and later Frankish domination of the Burgundians.[31][32]
+
+ Throughout the rest of the 6th, 7th and 8th centuries the Swiss regions continued under Frankish hegemony (Merovingian and Carolingian dynasties). But after its extension under Charlemagne, the Frankish Empire was divided by the Treaty of Verdun in 843.[29] The territories of present-day Switzerland became divided into Middle Francia and East Francia until they were reunified under the Holy Roman Empire around 1000 AD.[29]
+
+ By 1200, the Swiss plateau comprised the dominions of the houses of Savoy, Zähringer, Habsburg, and Kyburg.[29] Some regions (Uri, Schwyz, Unterwalden, later known as Waldstätten) were accorded the Imperial immediacy to grant the empire direct control over the mountain passes. With the extinction of its male line in 1263, the Kyburg dynasty fell in 1264; the Habsburgs under King Rudolph I (Holy Roman Emperor in 1273) then laid claim to the Kyburg lands and annexed them, extending their territory to the eastern Swiss plateau.[31]
+
+ In March 2017, the remains of a woman who died in about 200 BC were found buried in a carved tree trunk during a construction project at the Kern school complex in Aussersihl. Archaeologists revealed that she was approximately 40 years old when she died and likely carried out little physical labor when she was alive. A sheepskin coat, a belt chain, a fancy wool dress, a scarf and a pendant made of glass and amber beads were also discovered with the woman.[33][34][35][36][37]
+
+ The Old Swiss Confederacy was an alliance among the valley communities of the central Alps. The Confederacy, governed by nobles and patricians of various cantons, facilitated management of common interests and ensured peace on the important mountain trade routes. The Federal Charter of 1291 agreed between the rural communes of Uri, Schwyz, and Unterwalden is considered the confederacy's founding document, even though similar alliances are likely to have existed decades earlier.[38][39]
+
+ By 1353, the three original cantons had joined with the cantons of Glarus and Zug and the Lucerne, Zürich and Bern city states to form the "Old Confederacy" of eight states that existed until the end of the 15th century. The expansion led to increased power and wealth for the confederation.[39] By 1460, the confederates controlled most of the territory south and west of the Rhine to the Alps and the Jura mountains, particularly after victories against the Habsburgs (Battle of Sempach, Battle of Näfels), over Charles the Bold of Burgundy during the 1470s, and the success of the Swiss mercenaries. The Swiss victory in the Swabian War against the Swabian League of Emperor Maximilian I in 1499 amounted to de facto independence within the Holy Roman Empire.[39] In 1501, Basel and Schaffhausen joined the Old Swiss Confederacy.
+
+ The Old Swiss Confederacy had acquired a reputation of invincibility during these earlier wars, but expansion of the confederation suffered a setback in 1515 with the Swiss defeat in the Battle of Marignano. This ended the so-called "heroic" epoch of Swiss history.[39] The success of Zwingli's Reformation in some cantons led to inter-cantonal religious conflicts in 1529 and 1531 (Wars of Kappel). It was not until more than one hundred years after these internal wars that, in 1648, under the Peace of Westphalia, European countries recognised Switzerland's independence from the Holy Roman Empire and its neutrality.[31][32]
+
+ During the Early Modern period of Swiss history, the growing authoritarianism of the patriciate families combined with a financial crisis in the wake of the Thirty Years' War led to the Swiss peasant war of 1653. In the background to this struggle, the conflict between Catholic and Protestant cantons persisted, erupting in further violence at the First War of Villmergen, in 1656, and the Toggenburg War (or Second War of Villmergen), in 1712.[39]
+
+ In 1798, the revolutionary French government conquered Switzerland and imposed a new unified constitution.[39] This centralised the government of the country, effectively abolishing the cantons: moreover, Mülhausen joined France and the Valtellina valley became part of the Cisalpine Republic, separating from Switzerland. The new regime, known as the Helvetic Republic, was highly unpopular. It had been imposed by a foreign invading army and destroyed centuries of tradition, making Switzerland nothing more than a French satellite state. The fierce French suppression of the Nidwalden Revolt in September 1798 was an example of the oppressive presence of the French Army and the local population's resistance to the occupation.
+
+ When war broke out between France and its rivals, Russian and Austrian forces invaded Switzerland. The Swiss refused to fight alongside the French in the name of the Helvetic Republic. In 1803 Napoleon organised a meeting of the leading Swiss politicians from both sides in Paris. The result was the Act of Mediation which largely restored Swiss autonomy and introduced a Confederation of 19 cantons.[39] Henceforth, much of Swiss politics would concern balancing the cantons' tradition of self-rule with the need for a central government.
+
+ In 1815 the Congress of Vienna fully re-established Swiss independence and the European powers agreed to permanently recognise Swiss neutrality.[31][32][39] Swiss troops still served foreign governments until 1860 when they fought in the Siege of Gaeta. The treaty also allowed Switzerland to increase its territory, with the admission of the cantons of Valais, Neuchâtel and Geneva. Switzerland's borders have not changed since, except for some minor adjustments.[40]
+
+ The restoration of power to the patriciate was only temporary. After a period of unrest with repeated violent clashes, such as the Züriputsch of 1839, civil war (the Sonderbundskrieg) broke out in 1847 when some Catholic cantons tried to set up a separate alliance (the Sonderbund).[39] The war lasted for less than a month, causing fewer than 100 casualties, most of which were through friendly fire. Yet however minor the Sonderbundskrieg appears compared with other European riots and wars in the 19th century, it nevertheless had a major impact on both the psychology and the society of the Swiss and of Switzerland.
+
+ The war convinced most Swiss of the need for unity and strength towards their European neighbours. Swiss people from all strata of society, whether Catholic or Protestant, from the liberal or conservative current, realised that the cantons would profit more if their economic and religious interests were merged.
+
+ Thus, while the rest of Europe saw revolutionary uprisings, the Swiss drew up a constitution which provided for a federal layout, much of it inspired by the American example. This constitution provided for a central authority while leaving the cantons the right to self-government on local issues. Giving credit to those who favoured the power of the cantons (the Sonderbund Kantone), the national assembly was divided between an upper house (the Council of States, two representatives per canton) and a lower house (the National Council, with representatives elected from across the country). Referendums were made mandatory for any amendment of this constitution.[32] This new constitution also brought a legal end to nobility in Switzerland.[41]
+
+ A single system of weights and measures was introduced, and in 1850 the Swiss franc became the single Swiss currency. Article 11 of the constitution forbade sending troops to serve abroad, with the exception of serving the Holy See, though the Swiss were still obliged to serve Francis II of the Two Sicilies with Swiss Guards present at the Siege of Gaeta in 1860, marking the end of foreign service.
+
+ An important clause of the constitution was that it could be re-written completely if this was deemed necessary, thus enabling it to evolve as a whole rather than being modified one amendment at a time.[42]
+
+ This need soon proved itself when the rise in population and the Industrial Revolution that followed led to calls to modify the constitution accordingly. An early draft was rejected by the population in 1872 but modifications led to its acceptance in 1874.[39] It introduced the facultative referendum for laws at the federal level. It also established federal responsibility for defence, trade, and legal matters.
+
+ In 1891, the constitution was revised with unusually strong elements of direct democracy, which remain unique even today.[39]
+
+ Switzerland was not invaded during either of the world wars. During World War I, Switzerland was home to Vladimir Ilyich Ulyanov (Vladimir Lenin), who remained there until 1917.[43] Swiss neutrality was seriously questioned by the Grimm–Hoffmann Affair in 1917, but it was short-lived. In 1920, Switzerland joined the League of Nations, which was based in Geneva, on condition that it was exempt from any military requirements.
+
+ During World War II, detailed invasion plans were drawn up by the Germans,[44] but Switzerland was never attacked.[39] Switzerland was able to remain independent through a combination of military deterrence, concessions to Germany, and good fortune as larger events during the war delayed an invasion.[32][45] Under General Henri Guisan, appointed the commander-in-chief for the duration of the war, a general mobilisation of the armed forces was ordered. The Swiss military strategy was changed from one of static defence at the borders to protect the economic heartland, to one of organised long-term attrition and withdrawal to strong, well-stockpiled positions high in the Alps known as the Reduit. Switzerland was an important base for espionage by both sides in the conflict and often mediated communications between the Axis and Allied powers.[45]
+
+ Switzerland's trade was blockaded by both the Allies and the Axis. Economic cooperation and extension of credit to the Third Reich varied according to the perceived likelihood of invasion and the availability of other trading partners. Concessions reached a peak after a crucial rail link through Vichy France was severed in 1942, leaving Switzerland (together with Liechtenstein) entirely isolated from the wider world by Axis-controlled territory. Over the course of the war, Switzerland interned over 300,000 refugees[46] and the International Red Cross, based in Geneva, played an important part during the conflict. Strict immigration and asylum policies as well as the financial relationships with Nazi Germany raised controversy, but not until the end of the 20th century.[47]
+
+ During the war, the Swiss Air Force engaged aircraft of both sides, shooting down 11 intruding Luftwaffe planes in May and June 1940, then forcing down other intruders after a change of policy following threats from Germany. Over 100 Allied bombers and their crews were interned during the war. Between 1940 and 1945, Switzerland was bombed by the Allies, causing fatalities and property damage.[45] Among the cities and towns bombed were Basel, Brusio, Chiasso, Cornol, Geneva, Koblenz, Niederweningen, Rafz, Renens, Samedan, Schaffhausen, Stein am Rhein, Tägerwilen, Thayngen, Vals, and Zürich. Allied forces explained that the bombings, which violated the 96th Article of War, resulted from navigation errors, equipment failure, weather conditions, and errors made by bomber pilots. The Swiss expressed fear and concern that the bombings were intended to put pressure on Switzerland to end economic cooperation and neutrality with Nazi Germany.[48] Court-martial proceedings took place in England, and the U.S. Government paid 62,176,433.06 Swiss francs in reparations for the bombings.
+
+ Switzerland's attitude towards refugees was complicated and controversial; over the course of the war it admitted as many as 300,000 refugees[46] while refusing tens of thousands more,[49] including Jews who were severely persecuted by the Nazis.
+
+ After the war, the Swiss government exported credits through the charitable fund known as the Schweizerspende and also donated to the Marshall Plan to help Europe's recovery, efforts that ultimately benefited the Swiss economy.[50]
+
+ During the Cold War, Swiss authorities considered the construction of a Swiss nuclear bomb.[51] Leading nuclear physicists at the Federal Institute of Technology Zürich such as Paul Scherrer made this a realistic possibility. In 1988, the Paul Scherrer Institute was founded in his name to explore the therapeutic uses of neutron scattering technologies. Financial problems with the defence budget and ethical considerations prevented the substantial funds from being allocated, and the Nuclear Non-Proliferation Treaty of 1968 was seen as a valid alternative. All remaining plans for building nuclear weapons were dropped by 1988.[52]
+
+ Switzerland was the last Western republic to grant women the right to vote. Some Swiss cantons approved this in 1959, while at the federal level it was achieved in 1971[39][53] and, after resistance, in the last canton Appenzell Innerrhoden (one of only two remaining Landsgemeinde, along with Glarus) in 1990. After obtaining suffrage at the federal level, women quickly rose in political significance, with the first woman on the seven-member Federal Council executive being Elisabeth Kopp, who served from 1984 to 1989,[39] and the first female president being Ruth Dreifuss in 1999.
+
+ Switzerland joined the Council of Europe in 1963.[32] In 1979 areas from the canton of Bern attained independence from the Bernese, forming the new canton of Jura. On 18 April 1999 the Swiss population and the cantons voted in favour of a completely revised federal constitution.[39]
+
+ In 2002 Switzerland became a full member of the United Nations, leaving the Vatican City as the last widely recognised state without full UN membership. Switzerland is a founding member of the EFTA, but is not a member of the European Economic Area. An application for membership in the European Union was sent in May 1992, but not advanced since the EEA was rejected in December 1992[39] when Switzerland was the only country to launch a referendum on the EEA. There have since been several referendums on the EU issue; due to opposition from the citizens, the membership application has been withdrawn. Nonetheless, Swiss law is gradually being adjusted to conform with that of the EU, and the government has signed a number of bilateral agreements with the European Union. Switzerland, together with Liechtenstein, has been completely surrounded by the EU since Austria's entry in 1995. On 5 June 2005, Swiss voters agreed by a 55% majority to join the Schengen treaty, a result that was regarded by EU commentators as a sign of support by Switzerland, a country that is traditionally perceived as independent and reluctant to enter supranational bodies.[32]
+
+ Extending across the north and south sides of the Alps in west-central Europe, Switzerland encompasses a great diversity of landscapes and climates on a limited area of 41,285 square kilometres (15,940 sq mi).[54] The population is about 8 million, resulting in an average population density of around 195 people per square kilometre (500/sq mi).[54][55] The more mountainous southern half of the country is far more sparsely populated than the northern half.[54] In the largest canton, Graubünden, which lies entirely in the Alps, population density falls to 27/km² (70/sq mi).[56]
+
+ Switzerland lies between latitudes 45° and 48° N, and longitudes 5° and 11° E. It contains three basic topographical areas: the Swiss Alps to the south, the Swiss Plateau or Central Plateau, and the Jura mountains on the west. The Alps are a high mountain range running across the central-south of the country, constituting about 60% of the country's total area. The majority of the Swiss population live in the Swiss Plateau. Among the high valleys of the Swiss Alps many glaciers are found, totalling an area of 1,063 square kilometres (410 sq mi). From these originate the headwaters of several major rivers, such as the Rhine, Inn, Ticino and Rhône, which flow in the four cardinal directions into the whole of Europe. The hydrographic network includes several of the largest bodies of freshwater in Central and Western Europe, among which are included Lake Geneva (also called le Lac Léman in French), Lake Constance (known as Bodensee in German) and Lake Maggiore. Switzerland has more than 1500 lakes, and contains 6% of Europe's stock of fresh water. Lakes and glaciers cover about 6% of the national territory. The largest lake is Lake Geneva, in western Switzerland, shared with France. The Rhône is both the main source and outflow of Lake Geneva. Lake Constance is the second largest Swiss lake and, like Lake Geneva, an intermediate step of the Rhine at the border with Austria and Germany. While the Rhône flows into the Mediterranean Sea at the French Camargue region and the Rhine flows into the North Sea at Rotterdam in the Netherlands, about 1,000 kilometres (620 miles) apart, their sources in the Swiss Alps are only about 22 kilometres (14 miles) apart from each other.[54][57]
+
+ Forty-eight of Switzerland's mountains are 4,000 metres (13,000 ft) above sea level or higher.[54] At 4,634 m (15,203 ft), Monte Rosa is the highest, although the Matterhorn (4,478 m or 14,692 ft) is often regarded as the most famous. Both are located within the Pennine Alps in the canton of Valais, on the border with Italy. The section of the Bernese Alps above the deep glacial Lauterbrunnen valley, containing 72 waterfalls, is well known for the Jungfrau (4,158 m or 13,642 ft), Eiger and Mönch, and the many picturesque valleys in the region. In the southeast the long Engadin Valley, encompassing the St. Moritz area in canton of Graubünden, is also well known; the highest peak in the neighbouring Bernina Alps is Piz Bernina (4,049 m or 13,284 ft).[54]
+
+ The more populous northern part of the country, constituting about 30% of the country's total area, is called the Swiss Plateau. It has more open and hilly landscapes, partly forested, partly open pastures, usually with grazing herds, or vegetable and fruit fields, but it is still hilly. Large lakes are found here, and the biggest Swiss cities are in this area of the country.[54]
+
+ Within Switzerland there are two small enclaves: Büsingen belongs to Germany, and Campione d'Italia belongs to Italy.[58] Switzerland has no exclaves in other countries.
+
+ The Swiss climate is generally temperate, but can vary greatly between the localities,[59] from glacial conditions on the mountaintops to the often pleasant near-Mediterranean climate at Switzerland's southern tip. There are some valley areas in the southern part of Switzerland where some cold-hardy palm trees are found. Summers tend to be warm and humid at times, with periodic rainfall, so they are ideal for pastures and grazing. The less humid winters in the mountains may see long intervals of stable conditions for weeks, while the lower lands tend to suffer from inversion during these periods, thus seeing no sun for weeks.
+
+ A weather phenomenon known as the föhn (with an identical effect to the chinook wind) can occur at all times of the year and is characterised by an unexpectedly warm wind, bringing air of very low relative humidity to the north of the Alps during rainfall periods on the southern face of the Alps. This works both ways across the Alps but is more efficient if blowing from the south due to the steeper step for oncoming wind from the south. Valleys running south to north trigger the best effect.
+ The driest conditions persist in all inner alpine valleys that receive less rain because arriving clouds lose a lot of their content while crossing the mountains before reaching these areas. Large alpine areas such as Graubünden remain drier than pre-alpine areas, and, as in the main valley of the Valais, wine grapes are grown there.[60]
+
+ The wettest conditions persist in the high Alps and in the Ticino canton, which has much sun yet heavy bursts of rain from time to time.[60] Precipitation tends to be spread moderately throughout the year, with a peak in summer. Autumn is the driest season, and winter receives less precipitation than summer, yet the weather patterns in Switzerland are not in a stable climate system and can be variable from year to year with no strict and predictable periods.
+
+ Switzerland's ecosystems can be particularly fragile, because the many delicate valleys separated by high mountains often form unique ecologies. The mountainous regions themselves are also vulnerable, with a rich range of plants not found at other altitudes, and experience some pressure from visitors and grazing. The climatic, geological and topographical conditions of the alpine region make for a very fragile ecosystem that is particularly sensitive to climate change.[59][62] Nevertheless, according to the 2014 Environmental Performance Index, Switzerland ranks first among 132 nations in safeguarding the environment, due to its high scores on environmental public health, its heavy reliance on renewable sources of energy (hydropower and geothermal energy), and its control of greenhouse gas emissions.[63]
+
+ However, access to biocapacity in Switzerland is far lower than the world average. In 2016, Switzerland had 1.0 global hectares[64] of biocapacity per person within its territory, 40 percent less than the world average of 1.6 global hectares per person. In contrast, in 2016, Swiss residents used 4.6 global hectares of biocapacity per person, their ecological footprint of consumption. This means they used about 4.6 times as much biocapacity as Switzerland contains. The remainder comes from imports and overusing the global commons (such as the atmosphere through greenhouse gas emissions). As a result, Switzerland is running a biocapacity deficit.[64] (A brief arithmetic check of these figures appears at the end of this excerpt.)
+
+ The Federal Constitution adopted in 1848 is the legal foundation of the modern federal state.[65] A new Swiss Constitution was adopted in 1999, but did not introduce notable changes to the federal structure. It outlines basic and political rights of individuals and citizen participation in public affairs, divides the powers between the Confederation and the cantons and defines federal jurisdiction and authority. There are three main governing bodies on the federal level:[66] the bicameral parliament (legislative), the Federal Council (executive) and the Federal Court (judicial).
+
+ The Swiss Parliament consists of two houses: the Council of States, which has 46 representatives (two from each canton and one from each half-canton) who are elected under a system determined by each canton, and the National Council, which consists of 200 members who are elected under a system of proportional representation, depending on the population of each canton. Members of both houses serve for 4 years and only serve as members of parliament part-time (so-called Milizsystem or citizen legislature).[67] When both houses are in joint session, they are known collectively as the Federal Assembly. Through referendums, citizens may challenge any law passed by parliament and, through initiatives, introduce amendments to the federal constitution, thus making Switzerland a direct democracy.[65]
+
+ The Federal Council constitutes the federal government, directs the federal administration and serves as collective Head of State. It is a collegial body of seven members, elected for a four-year mandate by the Federal Assembly, which also exercises oversight over the Council. The President of the Confederation is elected by the Assembly from among the seven members, traditionally in rotation and for a one-year term; the President chairs the government and assumes representative functions. However, the president is a primus inter pares with no additional powers, and remains the head of a department within the administration.[65]
+
+ The Swiss government has been a coalition of the four major political parties since 1959, each party having a number of seats that roughly reflects its share of the electorate and its representation in the federal parliament.
+ The classic distribution of 2 CVP/PDC, 2 SPS/PSS, 2 FDP/PRD and 1 SVP/UDC as it stood from 1959 to 2003 was known as the "magic formula". Following the 2015 Federal Council elections, the seven seats in the Federal Council were distributed as follows:
+
+ The function of the Federal Supreme Court is to hear appeals against rulings of cantonal or federal courts. The judges are elected by the Federal Assembly for six-year terms.[68]
+
+ Direct democracy and federalism are hallmarks of the Swiss political system.[69] Swiss citizens are subject to three legal jurisdictions: the municipality, canton and federal levels. The 1848 and 1999 Swiss Constitutions define a system of direct democracy (sometimes called half-direct or representative direct democracy because it is aided by the more commonplace institutions of a representative democracy). The instruments of this system at the federal level, known as popular rights (German: Volksrechte, French: droits populaires, Italian: diritti popolari),[70] include the right to submit a federal initiative and a referendum, both of which may overturn parliamentary decisions.[65][71]
+
+ By calling a federal referendum, a group of citizens may challenge a law passed by parliament if they gather 50,000 signatures against the law within 100 days. If so, a national vote is scheduled where voters decide by a simple majority whether to accept or reject the law. Any 8 cantons together can also call a constitutional referendum on a federal law.[65]
+
+ Similarly, the federal constitutional initiative allows citizens to put a constitutional amendment to a national vote if 100,000 voters sign the proposed amendment within 18 months.[note 8] The Federal Council and the Federal Assembly can supplement the proposed amendment with a counter-proposal, and then voters must indicate a preference on the ballot in case both proposals are accepted. Constitutional amendments, whether introduced by initiative or in parliament, must be accepted by a double majority of the national popular vote and the cantonal popular votes.[note 9][69]
+
+ The Swiss Confederation consists of 26 cantons:[65][72]
+
+ *These cantons are known as half-cantons.
+
+ The cantons are federated states, have a permanent constitutional status and, in comparison with the situation in other countries, a high degree of independence. Under the Federal Constitution, all 26 cantons are equal in status, except that 6 (referred to often as the half-cantons) are represented by only one councillor (instead of two) in the Council of States and have only half a cantonal vote with respect to the required cantonal majority in referendums on constitutional amendments. Each canton has its own constitution, and its own parliament, government, police and courts.[72] However, there are considerable differences between the individual cantons, most particularly in terms of population and geographical area. Their populations vary between 16,003 (Appenzell Innerrhoden) and 1,487,969 (Zürich), and their area between 37 km2 (14 sq mi) (Basel-Stadt) and 7,105 km2 (2,743 sq mi) (Grisons).
+
+ The cantons comprise a total of 2,222 municipalities as of 2018.
+
+ Traditionally, Switzerland avoids alliances that might entail military, political, or direct economic action and has been neutral since the end of its expansion in 1515. Its policy of neutrality was internationally recognised at the Congress of Vienna in 1815.[73][74] Only in 2002 did Switzerland become a full member of the United Nations[73] and it was the first state to join it by referendum. Switzerland maintains diplomatic relations with almost all countries and historically has served as an intermediary between other states.[73] Switzerland is not a member of the European Union; the Swiss people have consistently rejected membership since the early 1990s.[73] However, Switzerland does participate in the Schengen Area.[75] Swiss neutrality has been questioned at times.[76][77][78][79][80]
+
+
+
+ Many international institutions have their seats in Switzerland, in part because of its policy of neutrality. Geneva is the birthplace of the Red Cross and Red Crescent Movement and the Geneva Conventions and, since 2006, hosts the United Nations Human Rights Council. Even though Switzerland is one of the most recent countries to have joined the United Nations, the Palace of Nations in Geneva is the second biggest centre for the United Nations after New York, and Switzerland was a founding member and home to the League of Nations.
+
+ Apart from the United Nations headquarters, the Swiss Confederation is host to many UN agencies, like the World Health Organization (WHO), the International Labour Organization (ILO), the International Telecommunication Union (ITU), the United Nations High Commissioner for Refugees (UNHCR) and about 200 other international organisations, including the World Trade Organization and the World Intellectual Property Organization.[73] The annual meetings of the World Economic Forum in Davos bring together top international business and political leaders from Switzerland and foreign countries to discuss important issues facing the world, including health and the environment. Additionally, the headquarters of the Bank for International Settlements (BIS) have been located in Basel since 1930.
+
+ Furthermore, many sport federations and organisations are located throughout the country, such as the International Handball Federation in Basel, the
+ International Basketball Federation in Geneva, the Union of European Football Associations (UEFA) in Nyon, the International Federation of Association Football (FIFA) and the International Ice Hockey Federation both in Zürich, the International Cycling Union in Aigle, and the International Olympic Committee in Lausanne.[82]
+
+ The Swiss Armed Forces, including the Land Forces and the Air Force, are composed mostly of conscripts, male citizens aged from 20 to 34 (in special cases up to 50) years. Being a landlocked country, Switzerland has no navy; however, on lakes bordering neighbouring countries, armed military patrol boats are used. Swiss citizens are prohibited from serving in foreign armies, except for the Swiss Guards of the Vatican, or if they are dual citizens of a foreign country and reside there.
+
+ The structure of the Swiss militia system stipulates that the soldiers keep their Army-issued equipment, including all personal weapons, at home. Some organisations and political parties find this practice controversial.[83] Women can serve voluntarily. Men usually receive military conscription orders for training at the age of 18.[84] About two thirds of the young Swiss are found suited for service; for those found unsuited, various forms of alternative service exist.[85] Annually, approximately 20,000 persons are trained in recruit centres for a duration of 18 to 21 weeks. The reform "Army XXI" was adopted by popular vote in 2003; it replaced the previous model "Army 95", reducing the effective force from 400,000 to about 200,000. Of those, 120,000 are active in periodic Army training and 80,000 are non-training reserves.[86]
+
+ Overall, three general mobilisations have been declared to ensure the integrity and neutrality of Switzerland. The first one was held on the occasion of the Franco-Prussian War of 1870–71. The second was in response to the outbreak of the First World War in August 1914. The third mobilisation of the army took place in September 1939 in response to the German attack on Poland; Henri Guisan was elected as the General-in-Chief.
+
+ Because of its neutrality policy, the Swiss army does not currently take part in armed conflicts in other countries, but is part of some peacekeeping missions around the world. Since 2000 the armed forces department has also maintained the Onyx intelligence gathering system to monitor satellite communications.[87] Switzerland decided not to sign the Nuclear Weapon Ban Treaty.[88]
+
+ Following the end of the Cold War there have been a number of attempts to curb military activity or even abolish the armed forces altogether. A notable referendum on the subject, launched by an anti-militarist group, was held on 26 November 1989. It was defeated, with about two thirds of the voters against the proposal.[89][90] A similar referendum, called for before, but held shortly after, the 11 September attacks in the US, was defeated by over 78% of voters.[91]
+
+ Gun politics in Switzerland are unique in Europe in that 29% of citizens are legally armed. The large majority of firearms kept at home are issued by the Swiss army, but ammunition is no longer issued.[92][93]
+
+ Until 1848 the rather loosely coupled Confederation had no central political organisation; instead, representatives, mayors, and Landammänner met several times a year in the capital of the Lieu presiding over the Confederal Diet for one year.
+
+ Until 1500 the legates met most of the time in Lucerne, but also in Zürich, Baden, Bern, Schwyz etc., and sometimes also at places outside of the confederation, such as Constance. From the Swabian War in 1499 onwards until the Reformation, most conferences met in Zurich. Afterwards the town hall at Baden, where the annual accounts of the common people had been held regularly since 1426, became the most frequent, but not the sole, place of assembly. After 1712 Frauenfeld gradually replaced Baden as the place of assembly. From 1526, the Catholic conferences were held mostly in Lucerne, the Protestant conferences from 1528 mostly in Aarau, and the one for the legitimation of the French Ambassador in Solothurn. At the same time the syndicate for the Ennetbirgischen Vogteien located in the present Ticino met from 1513 in Lugano and Locarno.[94]
+
+ After the Helvetic Republic and during the Mediation from 1803 until 1815, the Confederal Diet of the 19 Lieus met at the capitals of the directoral cantons Fribourg, Berne, Basel, Zurich, Lucerne and Solothurn.[94]
+
+ After the Long Diet, which met in Zurich from 6 April 1814 to 31 August 1815 to replace the constitution and to enlarge the Confederation to 22 cantons by admitting the cantons of Valais, Neuchâtel and Geneva as full members, the directoral cantons of Lucerne, Zurich and Berne took over the Diet in two-year turns.[94]
+
+ In 1848, the federal constitution provided that details concerning the federal institutions, such as their locations, should be taken care of by the Federal Assembly (BV 1848 Art. 108). Thus on 28 November 1848, the Federal Assembly voted by majority to locate the seat of government in Berne and, as a prototypical federal compromise, to assign other federal institutions, such as the Federal Polytechnical School (1854, the later ETH), to Zurich, and other institutions, such as the later SUVA (1912) and the Federal Insurance Court (1917), to Lucerne. In 1875, a law (RS 112) fixed the compensation owed by the city of Bern for the federal seat.[1] In line with these federalist sentiments, further federal institutions were subsequently assigned to Lausanne (Federal Supreme Court in 1872, and EPFL in 1969), Bellinzona (Federal Criminal Court, 2004), and St. Gallen (Federal Administrative Court and Federal Patent Court, 2012).
+
+ The new 1999 constitution, however, does not contain anything concerning a Federal City. In 2002 a tripartite committee was asked by the Swiss Federal Council to prepare the "creation of a federal law on the status of Bern as a Federal City", and to evaluate the positive and negative aspects for the city and the canton of Bern if this status were awarded. After a first report, the work of this committee was suspended in 2004 by the Swiss Federal Council, and work on this subject has not resumed since.[95]
+
+ Thus, as of today, no city in Switzerland has the official status either of capital or of Federal City; nevertheless, Berne is commonly referred to as the "Federal City" (German: Bundesstadt, French: ville fédérale, Italian: città federale).
172
+
173
+ Switzerland has a stable, prosperous and high-tech economy and enjoys great wealth, being ranked as the wealthiest country in the world per capita in multiple rankings, while at the same time being one the least corrupt countries in the world.[97][98][99] It has the world's twentieth largest economy by nominal GDP and the thirty-eighth largest by purchasing power parity. It is the seventeenth largest exporter. Zürich and Geneva are regarded as global cities, ranked as Alpha and Beta respectively. Switzerland has the highest European rating in the Index of Economic Freedom 2010, while also providing large coverage through public services.[100] The nominal per capita GDP is higher than those of the larger Western and Central European economies and Japan.[101] In terms of GDP per capita adjusted for purchasing power, Switzerland was ranked 5th in the world in 2018 by World Bank[102] and estimated at 9th by the IMF in 2020[103], as well as 11th by the CIA World Factbook in 2017[104].
174
+
175
+ The World Economic Forum's Global Competitiveness Report currently ranks Switzerland's economy as the most competitive in the world,[106] while ranked by the European Union as Europe's most innovative country.[107][108] It is a relatively easy place to do business, currently ranking 20th of 189 countries in the Ease of Doing Business Index. The slow growth Switzerland experienced in the 1990s and the early 2000s has brought greater support for economic reforms and harmonisation with the European Union.[109][110]
176
+
177
+ For much of the 20th century, Switzerland was the wealthiest country in Europe by a considerable margin (by GDP – per capita).[111] Switzerland also has one of the world's largest account balances as a percentage of GDP.[112] In 2018, the canton of Basel-City had the highest GDP per capita in the country, ahead of the cantons of Zug and Geneva.[113] According to Credit Suisse, only about 37% of residents own their own homes, one of the lowest rates of home ownership in Europe. Housing and food price levels were 171% and 145% of the EU-25 index in 2007, compared to 113% and 104% in Germany.[114]
178
+
179
+ Origin of the capital at the 30 biggest Swiss corporations, 2018[115]
180
+
181
+ Switzerland is home to several large multinational corporations. The largest Swiss companies by revenue are Glencore, Gunvor, Nestlé, Novartis, Hoffmann-La Roche, ABB, Mercuria Energy Group and Adecco.[116] Also, notable are UBS AG, Zurich Financial Services, Credit Suisse, Barry Callebaut, Swiss Re, Tetra Pak, The Swatch Group and Swiss International Air Lines. Switzerland is ranked as having one of the most powerful economies in the world.[111]
182
+
183
+ Switzerland's most important economic sector is manufacturing. Manufacturing consists largely of the production of specialist chemicals, health and pharmaceutical goods, scientific and precision measuring instruments and musical instruments. The largest exported goods are chemicals (34% of exported goods), machines/electronics (20.9%), and precision instruments/watches (16.9%).[114] Exported services amount to a third of exports.[114] The service sector – especially banking and insurance, tourism, and international organisations – is another important industry for Switzerland.
184
+
185
+ Agricultural protectionism—a rare exception to Switzerland's free trade policies—has contributed to high food prices. Product market liberalisation is lagging behind many EU countries according to the OECD.[109] Nevertheless, domestic purchasing power is one of the best in the world.[117][118][119] Apart from agriculture, economic and trade barriers between the European Union and Switzerland are minimal and Switzerland has free trade agreements worldwide. Switzerland is a member of the European Free Trade Association (EFTA).
186
+
187
+ Switzerland has an overwhelmingly private-sector economy and low tax rates by Western standards; overall taxation is among the lowest of developed countries. The Swiss federal budget was 62.8 billion Swiss francs in 2010, equivalent to 11.35% of the country's GDP in that year; however, the regional (cantonal) budgets and the budgets of the municipalities are not counted as part of the federal budget, and the total rate of government spending is closer to 33.8% of GDP. The main sources of income for the federal government are the value-added tax (33%) and the direct federal tax (29%), and the main expenditures are in the areas of social welfare and finance & tax. The expenditures of the Swiss Confederation grew from 7% of GDP in 1960 to 9.7% in 1990 and to 10.7% in 2010. While the shares of the social welfare and finance & tax sectors grew from 35% in 1990 to 48.2% in 2010, a significant reduction of expenditures occurred in the sectors of agriculture and national defence, from 26.5% to 12.4% (estimate for the year 2015).[120][121]
188
+
189
+ Slightly more than 5 million people work in Switzerland;[122] about 25% of employees belonged to a trade union in 2004.[123] Switzerland has a more flexible job market than neighbouring countries and the unemployment rate is very low. The unemployment rate increased from a low of 1.7% in June 2000 to a peak of 4.4% in December 2009.[124] It decreased to 3.2% in 2014 and held steady at that level for several years,[125] before falling further to 2.5% in 2018 and 2.3% in 2019.[126] Population growth from net immigration is quite high, at 0.52% of the population in 2004; it increased in the following years before falling back to 0.54% in 2017.[114][127] The foreign citizen population was 28.9% in 2015, about the same as in Australia. GDP per hour worked is the world's 16th highest, at 49.46 international dollars in 2012.[128]
190
+
191
+ In 2016, the median gross salary in Switzerland was 6,502 francs per month (equivalent to US$6,597 per month), which is just enough to cover the high cost of living. After rent, taxes and social security contributions, plus spending on goods and services, the average household has about 15% of its gross income left for savings. Though 61% of the population made less than the average income, income inequality is relatively low, with a Gini coefficient of 29.7 placing Switzerland among the top 20 countries for income equality.
192
+
193
+ About 8.2% of the population live below the national poverty line, defined in Switzerland as earning less than CHF3,990 per month for a household of two adults and two children, and a further 15% are at risk of poverty. Single-parent families, those with no post-compulsory education and those who are out of work are among the most likely to be living below the poverty line. Although getting a job is considered a way out of poverty, among the gainfully employed, some 4.3% are considered working poor. One in ten jobs in Switzerland is considered low-paid and roughly 12% of Swiss workers hold such jobs, many of them women and foreigners.
194
+
195
+ Education in Switzerland is very diverse because the constitution of Switzerland delegates the authority for the school system to the cantons.[129] There are both public and private schools, including many private international schools. The minimum age for primary school is about six years in all cantons, but most cantons provide a free "children's school" starting at four or five years old.[129] Primary school continues until grade four, five or six, depending on the school. Traditionally, the first foreign language in school was always one of the other national languages, although recently (2000) English was introduced first in a few cantons.[129]
196
+
197
+ At the end of primary school (or at the beginning of secondary school), pupils are separated according to their capacities in several (often three) sections. The fastest learners are taught advanced classes to be prepared for further studies and the matura,[129] while students who assimilate a little more slowly receive an education more adapted to their needs.
198
+
199
+ There are 12 universities in Switzerland, ten of which are maintained at cantonal level and usually offer a range of non-technical subjects. The first university in Switzerland was founded in 1460 in Basel (with a faculty of medicine) and has a long tradition of chemical and medical research. It is listed 87th on the 2019 Academic Ranking of World Universities.[130] The largest university in Switzerland is the University of Zurich, with nearly 25,000 students.[citation needed] The Swiss Federal Institute of Technology Zurich (ETHZ) and the University of Zurich are listed 20th and 54th respectively on the 2015 Academic Ranking of World Universities.[131][132][133]
200
+
201
+ The two institutes sponsored by the federal government are the Swiss Federal Institute of Technology Zurich (ETHZ) in Zürich, founded in 1855, and the EPFL in Lausanne, founded as such in 1969, having formerly been an institute associated with the University of Lausanne.[note 10][134][135]
202
+
203
+ In addition, there are various Universities of Applied Sciences. In business and management studies, the University of St. Gallen (HSG) is ranked 329th in the world according to the QS World University Rankings,[136] and the International Institute for Management Development (IMD) was ranked first in open programmes worldwide by the Financial Times.[137] Switzerland has the second highest rate (almost 18% in 2003) of foreign students in tertiary education, after Australia (slightly over 18%).[138][139]
204
+
205
+ As might befit a country that is home to innumerable international organisations, the Graduate Institute of International and Development Studies, located in Geneva, is not only continental Europe's oldest graduate school of international and development studies, but also widely believed to be one of its most prestigious.[140][141]
206
+
207
+ Many Nobel Prize laureates have been Swiss scientists. They include the world-famous physicist Albert Einstein,[142] who developed his special theory of relativity while working in Bern. More recently Vladimir Prelog, Heinrich Rohrer, Richard Ernst, Edmond Fischer, Rolf Zinkernagel, Kurt Wüthrich and Jacques Dubochet received Nobel Prizes in the sciences. In total, 114 Nobel Prize winners in all fields have a connection to Switzerland,[143][note 11] and the Nobel Peace Prize has been awarded nine times to organisations residing in Switzerland.[144]
208
+
209
+ Geneva and the nearby French department of Ain co-host the world's largest laboratory, CERN,[146] dedicated to particle physics research. Another important research centre is the Paul Scherrer Institute. Notable inventions include lysergic acid diethylamide (LSD), diazepam (Valium), the scanning tunnelling microscope (Nobel Prize) and Velcro. Some technologies enabled the exploration of new worlds, such as Auguste Piccard's pressurised balloon and the bathyscaphe that permitted Jacques Piccard to reach the deepest point of the world's oceans.
210
+
211
+ The Swiss Space Agency, the Swiss Space Office, has been involved in various space technologies and programmes. In addition, it was one of the 10 founders of the European Space Agency in 1975 and is the seventh largest contributor to the ESA budget. In the private sector, several companies are involved in the space industry, such as Oerlikon Space[147] and Maxon Motors,[148] which provide spacecraft structures.
212
+
213
+ Switzerland voted against membership in the European Economic Area in a referendum in December 1992 and has since maintained and developed its relationships with the European Union (EU) and European countries through bilateral agreements. In March 2001, the Swiss people refused in a popular vote to start accession negotiations with the EU.[149] In recent years, the Swiss have brought their economic practices largely into conformity with those of the EU in many ways, in an effort to enhance their international competitiveness. The economy grew at 3% in 2010, 1.9% in 2011, and 1% in 2012.[150] EU membership was a long-term objective of the Swiss government, but there was and remains considerable popular sentiment against membership, which is opposed by the conservative SVP party, the largest party in the National Council, and not currently supported or proposed by several other political parties. The application for membership of the EU was formally withdrawn in 2016, having long been frozen. The western French-speaking areas and the urban regions of the rest of the country tend to be more pro-EU, but these represent far from a significant share of the population.[151][152]
214
+
215
+ The government has established an Integration Office under the Department of Foreign Affairs and the Department of Economic Affairs. To minimise the negative consequences of Switzerland's isolation from the rest of Europe, Bern and Brussels signed seven bilateral agreements to further liberalise trade ties. These agreements were signed in 1999 and took effect in 2001. This first series of bilateral agreements included the free movement of persons. A second series covering nine areas was signed in 2004 and has since been ratified, which includes, among others, the Schengen Treaty and the Dublin Convention.[153] The two sides continue to discuss further areas for cooperation.[154]
216
+
217
+ In 2006, Switzerland approved 1 billion francs of investment in the poorer Southern and Central European countries in support of cooperation and positive ties to the EU as a whole. A further referendum will be needed to approve 300 million francs to support Romania and Bulgaria following their recent admission. The Swiss have also been under EU and sometimes international pressure to reduce banking secrecy and to raise tax rates to parity with the EU. Preparatory discussions are being opened in four new areas: opening up the electricity market, participation in the European GNSS project Galileo, cooperating with the European centre for disease prevention and recognising certificates of origin for food products.[155]
218
+
219
+ On 27 November 2008, the interior and justice ministers of the European Union in Brussels announced Switzerland's accession to the Schengen passport-free zone from 12 December 2008. The land border checkpoints remain in place only for goods movements and do not carry out controls on people, though people entering the country had their passports checked until 29 March 2009 if they originated from a Schengen nation.[156]
220
+
221
+ On 9 February 2014, Swiss voters narrowly approved, by 50.3%, a ballot initiative launched by the national conservative Swiss People's Party (SVP/UDC) to restrict immigration, thus reintroducing a quota system on the influx of foreigners. This initiative was mostly backed by rural areas (57.6% approval), suburban agglomerations (51.2% approval) and isolated towns (51.3% approval), as well as by a strong majority (69.2% approval) in the canton of Ticino, while metropolitan centres (58.5% rejection) and the French-speaking part of Switzerland (58.5% rejection) rejected it.[157] Some news commentators claim that this proposal de facto contradicts the bilateral agreements with the EU on the free movement of persons.[158][159]
222
+
223
+ In December 2016, a compromise with the European Union was attained, effectively cancelling quotas on EU citizens while still allowing for favourable treatment of Swiss-based job applicants.[160]
224
+
225
+ Electricity generated in Switzerland is 56% from hydroelectricity and 39% from nuclear power, resulting in a nearly CO2-free electricity-generating network. On 18 May 2003, two anti-nuclear initiatives were turned down: Moratorium Plus, aimed at forbidding the building of new nuclear power plants (41.6% supported and 58.4% opposed),[161] and Electricity Without Nuclear (33.7% supported and 66.3% opposed) after a previous moratorium expired in 2000.[162] However, as a reaction to the Fukushima nuclear disaster, the Swiss government announced in 2011 that it plans to end its use of nuclear energy in the next 2 or 3 decades.[163] In November 2016, Swiss voters rejected a proposal by the Green Party to accelerate the phaseout of nuclear power (45.8% supported and 54.2% opposed).[164] The Swiss Federal Office of Energy (SFOE) is the office responsible for all questions relating to energy supply and energy use within the Federal Department of Environment, Transport, Energy and Communications (DETEC). The agency is supporting the 2000-watt society initiative to cut the nation's energy use by more than half by the year 2050.[165]
226
+
227
+ The densest rail network in Europe,[53] at 5,250 kilometres (3,260 mi), carries over 596 million passengers annually (as of 2015).[166] In 2015, each Swiss resident travelled on average 2,550 kilometres (1,580 mi) by rail, which makes the Swiss the keenest rail users.[166] Virtually 100% of the network is electrified. The majority (60%) of the network is operated by the Swiss Federal Railways (SBB CFF FFS). Besides the second largest standard-gauge railway company, BLS AG, two railway companies operate on narrow-gauge networks: the Rhaetian Railway (RhB) in the southeastern canton of Graubünden, which includes some World Heritage lines,[167] and the Matterhorn Gotthard Bahn (MGB), which together with the RhB operates the Glacier Express between Zermatt and St. Moritz/Davos. On 31 May 2016 the world's longest and deepest railway tunnel and the first flat, low-level route through the Alps, the 57.1-kilometre long (35.5 mi) Gotthard Base Tunnel, opened as the largest part of the New Railway Link through the Alps (NRLA) project after 17 years of construction. It began daily passenger service on 11 December 2016, replacing the old, mountainous, scenic route over and through the St Gotthard Massif.
228
+
229
+ Switzerland has a publicly managed road network without road tolls that is financed by highway permits as well as vehicle and gasoline taxes. The Swiss autobahn/autoroute system requires the purchase of a vignette (toll sticker), which costs 40 Swiss francs for one calendar year, in order to use its roadways, for both passenger cars and trucks. The Swiss autobahn/autoroute network has a total length of 1,638 km (1,018 mi) (as of 2000) and, relative to the country's area of 41,290 km2 (15,940 sq mi), also has one of the highest motorway densities in the world.[168] Zurich Airport is Switzerland's largest international flight gateway; it handled 22.8 million passengers in 2012.[169] The other international airports are Geneva Airport (13.9 million passengers in 2012),[170] EuroAirport Basel Mulhouse Freiburg, which is located in France, Bern Airport, Lugano Airport, St. Gallen-Altenrhein Airport and Sion Airport. Swiss International Air Lines is the flag carrier of Switzerland. Its main hub is Zürich, but it is legally domiciled in Basel.
230
+
231
+ Switzerland has one of the best environmental records among nations in the developed world;[171] it was one of the countries to sign the Kyoto Protocol in 1998 and ratified it in 2003. With Mexico and the Republic of Korea it forms the Environmental Integrity Group (EIG).[172] The country is heavily active in recycling and anti-littering regulations and is one of the top recyclers in the world, with 66% to 96% of recyclable materials being recycled, depending on the area of the country.[173] The 2014 Global Green Economy Index ranked Switzerland among the top 10 green economies in the world.[174]
232
+
233
+ Switzerland developed an efficient system to recycle most recyclable materials.[175] Publicly organised collection by volunteers and economical railway transport logistics started as early as 1865 under the leadership of the notable industrialist Hans Caspar Escher (Escher Wyss AG) when the first modern Swiss paper manufacturing plant was built in Biberist.[176]
234
+
235
+ Switzerland also has an economical system for garbage disposal, which is based mostly on recycling and energy-producing incinerators, due to a strong political will to protect the environment.[177] As in other European countries, the illegal disposal of garbage is not tolerated and is heavily fined. In almost all Swiss municipalities, stickers or dedicated garbage bags must be purchased that allow for the identification of disposable garbage.[178]
236
+
237
+ In 2018, Switzerland's population slightly exceeded 8.5 million. In common with other developed countries, the Swiss population increased rapidly during the industrial era, quadrupling between 1800 and 1990, and has continued to grow. Like most of Europe, Switzerland faces an ageing population, albeit with consistent annual growth projected into 2035, due mostly to immigration and a fertility rate close to replacement level.[179] Switzerland consequently has one of the oldest populations in the world, with an average age of 42.5 years.[180]
238
+
239
+ As of 2019[update], resident foreigners make up 25.2% of the population, one of the largest proportions in the developed world.[5] Most of these (64%) were from European Union or EFTA countries.[181] Italians were the largest single group of foreigners, with 15.6% of total foreign population, followed closely by Germans (15.2%), immigrants from Portugal (12.7%), France (5.6%), Serbia (5.3%), Turkey (3.8%), Spain (3.7%), and Austria (2%). Immigrants from Sri Lanka, most of them former Tamil refugees, were the largest group among people of Asian origin (6.3%).[181]
240
+
241
+ Additionally, the figures from 2012 show that 34.7% of the permanent resident population aged 15 or over in Switzerland (around 2.33 million), had an immigrant background. A third of this population (853,000) held Swiss citizenship. Four fifths of persons with an immigration background were themselves immigrants (first generation foreigners and native-born and naturalised Swiss citizens), whereas one fifth were born in Switzerland (second generation foreigners and native-born and naturalised Swiss citizens).[182]
242
+
243
+ In the 2000s, domestic and international institutions expressed concern about what was perceived as an increase in xenophobia, particularly in some political campaigns. In reply to one critical report, the Federal Council noted that "racism unfortunately is present in Switzerland", but stated that the high proportion of foreign citizens in the country, as well as the generally unproblematic integration of foreigners, underlined Switzerland's openness.[183]
244
+ A follow-up study conducted in 2018 found that 59% considered racism a serious problem in Switzerland.[184]
245
+
246
+ The proportion of the population that reports having been targeted by racial discrimination has increased in recent years, from 10% in 2014 to almost 17% in 2018, according to the Federal Statistical Office.[185]
247
+
248
+ Switzerland has four national languages: mainly German (spoken by 62.8% of the population in 2016); French (22.9%) in the west; and Italian (8.2%) in the south.[187][186] The fourth national language, Romansh (0.5%), is a Romance language spoken locally in the southeastern trilingual canton of Grisons, and is designated by Article 4 of the Federal Constitution as a national language along with German, French, and Italian, and in Article 70 as an official language if the authorities communicate with persons who speak Romansh. However, federal laws and other official acts do not need to be decreed in Romansh.
249
+
250
+ In 2016, the languages most spoken at home among permanent residents aged 15 and older were Swiss German (59.4%), French (23.5%), Standard German (10.6%), and Italian (8.5%). Other languages spoken at home included English (5.0%), Portuguese (3.8%), Albanian (3.0%), Spanish (2.6%) and Serbian and Croatian (2.5%). 6.9% reported speaking another language at home.[188] In 2014 almost two-thirds (64.4%) of the permanent resident population indicated speaking more than one language regularly.[189]
251
+
252
+ The federal government is obliged to communicate in the official languages, and in the federal parliament simultaneous translation is provided from and into German, French and Italian.[190]
253
+
254
+ Aside from the official forms of their respective languages, the four linguistic regions of Switzerland also have their local dialectal forms. The role played by dialects in each linguistic region varies dramatically: in the German-speaking regions, Swiss German dialects have become ever more prevalent since the second half of the 20th century, especially in the media, such as radio and television, and are used as an everyday language for many, while the Swiss variety of Standard German is almost always used instead of dialect for written communication (c.f. diglossic usage of a language).[191] Conversely, in the French-speaking regions the local dialects have almost disappeared (only 6.3% of the population of Valais, 3.9% of Fribourg, and 3.1% of Jura still spoke dialects at the end of the 20th century), while in the Italian-speaking regions dialects are mostly limited to family settings and casual conversation.[191]
255
+
256
+ The principal official languages (German, French, and Italian) have terms not used outside of Switzerland, known as Helvetisms. German Helvetisms are, roughly speaking, a large group of words typical of Swiss Standard German that appear neither in Standard German nor in other German dialects. These include terms from Switzerland's surrounding language cultures (German Billett[192] from French) and terms built from similar terms in another language (Italian azione used not only as act but also as discount, from German Aktion).[193] The French spoken in Switzerland has similar terms, which are equally known as Helvetisms. The most frequent characteristics of Helvetisms are in vocabulary, phrases, and pronunciation, but certain Helvetisms are also distinctive in syntax and orthography. Duden, the comprehensive German dictionary, contains about 3000 Helvetisms.[193] Current French dictionaries, such as the Petit Larousse, include several hundred Helvetisms.[194]
257
+
258
+ Learning one of the other national languages at school is compulsory for all Swiss pupils, so many Swiss are supposed to be at least bilingual, especially those belonging to linguistic minority groups.[195]
259
+
260
+ Swiss residents are universally required to buy health insurance from private insurance companies, which in turn are required to accept every applicant. While the cost of the system is among the highest, it compares well with other European countries in terms of health outcomes; patients have been reported as being, in general, highly satisfied with it.[196][197][198] In 2012, life expectancy at birth was 80.4 years for men and 84.7 years for women[199] — the highest in the world.[200][201] However, spending on health is particularly high at 11.4% of GDP (2010), on par with Germany and France (11.6%) and other European countries, but notably less than spending in the USA (17.6%).[202] From 1990, a steady increase can be observed, reflecting the high costs of the services provided.[203] With an ageing population and new healthcare technologies, health spending will likely continue to rise.[203] Drug use is comparable to other developed countries with 14% of men and 6.5% of women between 20-24 saying they had consumed cannabis in the past 30 days[204] and 5 Swiss cities were listed among the top 10 European cities for cocaine use as measured in wastewater.[205][206]
261
+
262
+ Between two thirds and three quarters of the population live in urban areas.[207][208] Switzerland has gone from a largely rural country to an urban one in just 70 years. Since 1935 urban development has claimed as much of the Swiss landscape as it did during the previous 2,000 years. This urban sprawl affects not only the plateau but also the Jura and the Alpine foothills,[209] and there are growing concerns about land use.[210] However, since the beginning of the 21st century, population growth in urban areas has been higher than in the countryside.[208]
263
+
264
+ Switzerland has a dense network of towns, where large, medium and small towns are complementary.[208] The plateau is very densely populated, with about 450 people per km2, and the landscape continually shows signs of human presence.[211] The weight of the largest metropolitan areas, which are Zürich, Geneva–Lausanne, Basel and Bern, tends to increase.[208] In international comparison the importance of these urban areas is greater than their number of inhabitants suggests.[208] In addition the three main centres of Zürich, Geneva and Basel are recognised for their particularly high quality of life.[212]
265
+
266
+ Switzerland has no official state religion, though most of the cantons (except Geneva and Neuchâtel) recognise official churches, which are either the Roman Catholic Church or the Swiss Reformed Church. These churches, and in some cantons also the Old Catholic Church and Jewish congregations, are financed by official taxation of adherents.[214]
267
+
268
+ Christianity is the predominant religion of Switzerland (about 67% of resident population in 2016-2018[3] and 75% of Swiss citizens[215]), divided between the Roman Catholic Church (35.8% of the population), the Swiss Reformed Church (23.8%), further Protestant churches (2.2%), Eastern Orthodoxy (2.5%), and other Christian denominations (2.2%).[3] Immigration has established Islam (5.3%) as a sizeable minority religion.[3]
269
+
270
+ 26.3% of Swiss permanent residents are not affiliated with any religious community (Atheism, Agnosticism, and others).[3]
271
+
272
+ As of the 2000 census other Christian minority communities included Neo-Pietism (0.44%), Pentecostalism (0.28%, mostly incorporated in the Schweizer Pfingstmission), Methodism (0.13%), the New Apostolic Church (0.45%), Jehovah's Witnesses (0.28%), other Protestant denominations (0.20%), the Old Catholic Church (0.18%), other Christian denominations (0.20%). Non-Christian religions are Hinduism (0.38%), Buddhism (0.29%), Judaism (0.25%) and others (0.11%); 4.3% did not make a statement.[216]
273
+
274
+ The country was historically about evenly balanced between Catholic and Protestant, with a complex patchwork of majorities over most of the country. Switzerland played an exceptional role during the Reformation as it became home to many reformers. Geneva converted to Protestantism in 1536, just before John Calvin arrived there. In 1541, he founded the Republic of Geneva on his own ideals. It became known internationally as the Protestant Rome, and housed such reformers as Theodore Beza, William Farel and Pierre Viret. Zürich became another stronghold around the same time, with Huldrych Zwingli and Heinrich Bullinger taking the lead there. The Anabaptists Felix Manz and Conrad Grebel also operated there. They were later joined by the fleeing Peter Martyr Vermigli and Hans Denck. Other centres included Basel (Andreas Karlstadt and Johannes Oecolampadius), Berne (Berchtold Haller and Niklaus Manuel), and St. Gallen (Joachim Vadian). One canton, Appenzell, was officially divided into Catholic and Protestant sections in 1597. The larger cities and their cantons (Bern, Geneva, Lausanne, Zürich and Basel) used to be predominantly Protestant. Central Switzerland, the Valais, the Ticino, Appenzell Innerrhodes, the Jura, and Fribourg are traditionally Catholic. The Swiss Constitution of 1848, shaped by the then-recent clashes between Catholic and Protestant cantons that culminated in the Sonderbundskrieg, consciously defines a consociational state, allowing the peaceful co-existence of Catholics and Protestants. A 1980 initiative calling for the complete separation of church and state was rejected by 78.9% of the voters.[217] Some traditionally Protestant cantons and cities nowadays have a slight Catholic majority, not because Catholic membership has grown, quite the contrary, but only because since about 1970 a steadily growing minority has become unaffiliated with any church or other religious body (21.4% in Switzerland, 2012), especially in traditionally Protestant regions such as Basel-City (42%), the canton of Neuchâtel (38%), the canton of Geneva (35%), the canton of Vaud (26%), and Zürich city (city: >25%; canton: 23%).[218]
275
+
276
+ Three of Europe's major languages are official in Switzerland. Swiss culture is characterised by diversity, which is reflected in a wide range of traditional customs.[219] A region may be in some ways strongly culturally connected to the neighbouring country that shares its language, the country itself being rooted in western European culture.[220] The linguistically isolated Romansh culture in Graubünden in eastern Switzerland constitutes an exception; it survives only in the upper valleys of the Rhine and the Inn and strives to maintain its rare linguistic tradition.
277
+
278
+ Switzerland is home to many notable contributors to literature, art, architecture, music and the sciences. In addition, the country attracted a number of creative people during times of unrest or war in Europe.[221]
279
+ Some 1,000 museums are distributed throughout the country; the number has more than tripled since 1950.[222] Among the most important cultural events held annually are the Paléo Festival, the Lucerne Festival,[223] the Montreux Jazz Festival,[224] the Locarno International Film Festival and Art Basel.[225]
280
+
281
+ Alpine symbolism has played an essential role in shaping the history of the country and the Swiss national identity.[13][226] Nowadays some concentrated mountain areas have a strong, highly energetic ski resort culture in winter, and a hiking (German: das Wandern) or mountain biking culture in summer. Other areas have a recreational culture that caters to tourism throughout the year, yet the quieter seasons are spring and autumn, when there are fewer visitors. A traditional farmer and herder culture also predominates in many areas, and small farms are omnipresent outside the towns. Folk art is kept alive in organisations all over the country. In Switzerland it is mostly expressed in music, dance, poetry, wood carving and embroidery. The alphorn, a trumpet-like musical instrument made of wood, has become, alongside yodeling and the accordion, an epitome of traditional Swiss music.[227][228]
282
+
283
+ As the Confederation, from its foundation in 1291, was almost exclusively composed of German-speaking regions, the earliest forms of literature are in German. In the 18th century, French became the fashionable language in Bern and elsewhere, while the influence of the French-speaking allies and subject lands was more marked than before.[230]
284
+
285
+ Among the classic authors of Swiss German literature are Jeremias Gotthelf (1797–1854) and Gottfried Keller (1819–1890). The undisputed giants of 20th-century Swiss literature are Max Frisch (1911–91) and Friedrich Dürrenmatt (1921–90), whose repertoire includes Die Physiker (The Physicists) and Das Versprechen (The Pledge), released in 2001 as a Hollywood film.[231]
286
+
287
+ Famous French-speaking writers were Jean-Jacques Rousseau (1712–1778) and Germaine de Staël (1766–1817). More recent authors include Charles Ferdinand Ramuz (1878–1947), whose novels describe the lives of peasants and mountain dwellers set in a harsh environment, and Blaise Cendrars (born Frédéric Sauser, 1887–1961).[231] Italian- and Romansh-speaking authors have also contributed to the Swiss literary landscape, but generally in more modest ways given their small number.
288
+
289
+ Probably the most famous Swiss literary creation, Heidi, the story of an orphan girl who lives with her grandfather in the Alps, is one of the most popular children's books ever and has come to be a symbol of Switzerland. Her creator, Johanna Spyri (1827–1901), wrote a number of other books on similar themes.[231]
290
+
291
+ The freedom of the press and the right to free expression are guaranteed in the federal constitution of Switzerland.[232] The Swiss News Agency (SNA) broadcasts information around the clock in three of the four national languages—on politics, economics, society and culture. The SNA supplies almost all Swiss media and a couple dozen foreign media services with its news.[232]
292
+
293
+ Switzerland has historically boasted the greatest number of newspaper titles published in proportion to its population and size.[233] The most influential newspapers are the German-language Tages-Anzeiger and Neue Zürcher Zeitung NZZ, and the French-language Le Temps, but almost every city has at least one local newspaper. The cultural diversity accounts for a variety of newspapers.[233]
294
+
295
+ The government exerts greater control over broadcast media than print media, especially due to finance and licensing.[233] The Swiss Broadcasting Corporation, whose name was recently changed to SRG SSR, is charged with the production and broadcast of radio and television programmes. SRG SSR studios are distributed throughout the various language regions. Radio content is produced in six central and four regional studios while the television programmes are produced in Geneva, Zürich, and Lugano. An extensive cable network also allows most Swiss to access the programmes from neighbouring countries.[233]
296
+
297
+ Skiing, snowboarding and mountaineering are among the most popular sports in Switzerland, the nature of the country being particularly suited for such activities.[234] Winter sports have been practised by natives and tourists since the second half of the 19th century, with the invention of bobsleigh in St. Moritz.[235] The first world ski championships were held in Mürren (1931) and St. Moritz (1934). The latter town hosted the second Winter Olympic Games in 1928 and the fifth edition in 1948. Among the most successful skiers and world champions are Pirmin Zurbriggen and Didier Cuche.
298
+
299
+ The most prominently watched sports in Switzerland are football, ice hockey, Alpine skiing, "Schwingen", and tennis.[236]
300
+
301
+ The headquarters of international football's and ice hockey's governing bodies, the International Federation of Association Football (FIFA) and the International Ice Hockey Federation (IIHF), are located in Zürich. Many other international sports federations also have their headquarters in Switzerland; for example, the International Olympic Committee (IOC), the IOC's Olympic Museum and the Court of Arbitration for Sport (CAS) are located in Lausanne.
302
+
303
+ Switzerland hosted the 1954 FIFA World Cup, and was the joint host, with Austria, of the UEFA Euro 2008 tournament. The Swiss Super League is the nation's professional football club league. Europe's highest football pitch, at 2,000 metres (6,600 ft) above sea level, is located in Switzerland and is named the Ottmar Hitzfeld Stadium.[237]
304
+
305
+ Many Swiss also follow ice hockey and support one of the 12 teams of the National League, which is the most attended league in Europe.[239] In 2009, Switzerland hosted the IIHF World Championship for the 10th time.[240] The national team also finished as world runner-up in 2013 and 2018. The numerous lakes make Switzerland an attractive place for sailing. The largest, Lake Geneva, is the home of the sailing team Alinghi, which was the first European team to win the America's Cup, in 2003, and which successfully defended the title in 2007. Tennis has become an increasingly popular sport, and Swiss players such as Martina Hingis, Roger Federer, and Stanislas Wawrinka have won multiple Grand Slams.
306
+
307
+ Motorsport racecourses and events were banned in Switzerland following the 1955 Le Mans disaster, with the exception of events such as hillclimbing. During this period, the country still produced successful racing drivers such as Clay Regazzoni, Sébastien Buemi, Jo Siffert, Dominique Aegerter, successful World Touring Car Championship driver Alain Menu, 2014 24 Hours of Le Mans winner Marcel Fässler and 2015 24 Hours Nürburgring winner Nico Müller. Switzerland also won the A1GP World Cup of Motorsport in 2007–08 with driver Neel Jani. Swiss motorcycle racer Thomas Lüthi won the 2005 MotoGP World Championship in the 125cc category. In June 2007 the Swiss National Council, one house of the Federal Assembly of Switzerland, voted to overturn the ban; however, the other house, the Swiss Council of States, rejected the change, and the ban remains in place.[241][242]
308
+
309
+ Traditional sports include Swiss wrestling or "Schwingen", an old tradition from the rural central cantons that is considered the national sport by some. Hornussen is another indigenous Swiss sport, which is like a cross between baseball and golf.[243] Steinstossen is the Swiss variant of the stone put, a competition in throwing a heavy stone. Practised only among the alpine population since prehistoric times, it is recorded to have taken place in Basel in the 13th century. It is also central to the Unspunnenfest, first held in 1805, whose symbol is the 83.5 kg stone named Unspunnenstein.[244]
310
+
311
+ The cuisine of Switzerland is multifaceted. While some dishes, such as fondue, raclette and rösti, are omnipresent throughout the country, each region developed its own gastronomy according to differences in climate and language.[245][246] Traditional Swiss cuisine uses ingredients similar to those in other European countries, as well as unique dairy products and cheeses such as Gruyère or Emmental, produced in the valleys of Gruyères and Emmental. The number of fine-dining establishments is high, particularly in western Switzerland.[247][248]
312
+
313
+ Chocolate has been made in Switzerland since the 18th century, but it gained its reputation at the end of the 19th century with the invention of modern techniques such as conching and tempering, which enabled production at a consistently high level of quality. The invention of solid milk chocolate in 1875 by Daniel Peter was another breakthrough. The Swiss are the world's largest consumers of chocolate.[249][250]
314
+
315
+ In response to the popularisation of processed foods at the end of the 19th century, Swiss health food pioneer Maximilian Bircher-Benner created the first nutrition-based therapy in the form of the well-known rolled-oats cereal dish called Birchermüesli.
316
+
317
+ The most popular alcoholic drink in Switzerland is wine. Switzerland is notable for the variety of grapes grown because of the large variations in terroirs, with their specific mixes of soil, air, altitude and light. Swiss wine is produced mainly in Valais, Vaud (Lavaux), Geneva and Ticino, with a small majority of white wines. Vineyards have been cultivated in Switzerland since the Roman era, even though certain traces can be found of a more ancient origin. The most widespread varieties are the Chasselas (called Fendant in Valais) and Pinot noir. The Merlot is the main variety produced in Ticino.[251][252]
318
+
319
+ Coordinates: 46°50′N 8°20′E / 46.833°N 8.333°E / 46.833; 8.333
320
+
en/5547.html.txt ADDED
@@ -0,0 +1,378 @@
1
+
2
+
3
+
4
+
5
+ Location of Finland in Europe (green and dark grey) and in the European Union (green)
6
+
7
+ Finland (Finnish: Suomi [ˈsuo̯mi]; Swedish: Finland [ˈfɪ̌nland], Finland Swedish: [ˈfinlɑnd]), officially the Republic of Finland (Finnish: Suomen tasavalta, Swedish: Republiken Finland),[note 1] is a country located in the Nordic region of Europe. Bounded by the Baltic Sea to its southwest, the Gulf of Bothnia to the west, and the Gulf of Finland to the south, it shares land borders with Sweden to the west, Russia to the east, and Norway to the north. Helsinki, the capital of Finland, and Tampere are the largest cities and urban areas in the country.
8
+
9
+ Finland was first inhabited around the end of the most recent ice age, approximately 9000 BC.[7] The Comb Ceramic culture introduced pottery in 5200 BC, and the Corded Ware culture coincided with the start of agriculture between 3000 and 2500 BC. The Bronze Age and Iron Age were characterised by extensive contacts with other cultures in Fennoscandia and the Baltic region. At the time Finland had three main cultural areas – Southwest Finland, Tavastia and Karelia.[8] From the late 13th century, Finland gradually became an integral part of Sweden as a consequence of the Northern Crusades and the Swedish colonisation of coastal Finland, a legacy reflected in the prevalence of the Swedish language and its official status.
10
+
11
+ In 1809, Finland was annexed by the Russian Empire as the autonomous Grand Duchy of Finland. In 1906, Finland became the first European state to grant all adult citizens the right to vote, and the first in the world to give all adult citizens the right to run for public office.[9][10] Following the 1917 Russian Revolution, Finland declared itself independent from the empire. In 1918, the fledgling state was divided by the Finnish Civil War, with the Bolshevik-leaning Red Guards, supported by the Russian Soviet Federative Socialist Republic, fighting against the White Guard, supported by the German Empire. After a brief attempt to establish a kingdom, the country became a republic. During World War II, Finland fought the Soviet Union in the Winter War and the Continuation War, and lost part of its territory, but maintained independence.
12
+
13
+ Finland largely remained an agrarian country until the 1950s. After World War II, war reparations demanded by the Soviet Union, amounting to $300 million (equivalent to $5,449 million in 2018), forced Finland to industrialise. The country rapidly developed an advanced economy, while building an extensive welfare state based on the Nordic model, resulting in widespread prosperity and a high per capita income.[11] Finland is a top performer in numerous metrics of national performance, including education, economic competitiveness, civil liberties, quality of life, and human development.[12][13][14][15] In 2015, Finland was ranked first in the World Human Capital[16] and the Press Freedom Index, ranked as the most stable country in the world during 2011–2016 in the Fragile States Index,[17] and ranked second in the Global Gender Gap Report.[18] It also ranked first in the World Happiness Report for 2018, 2019 and 2020.[19][20]
14
+
15
+ Finland joined the United Nations in 1955 and adopted an official policy of neutrality. The Finno-Soviet Treaty of 1948 gave the Soviet Union some leverage in Finnish domestic politics during the Cold War. Finland joined the OECD in 1969, the NATO Partnership for Peace in 1994,[21] the European Union in 1995, the Euro-Atlantic Partnership Council in 1997,[21] and the Eurozone at its inception in 1999.
16
+
17
+ Finland has a population of approximately 5.5 million, making it the 25th-most populous country in Europe. The majority of its population live in the centre and south of the country and speak Finnish, a Finnic language of the Uralic language family, which is unrelated to the Scandinavian languages.[22] Swedish is the second official language of Finland, and is mainly spoken in certain coastal areas of the country and on Åland. Finland is the eighth-largest country in Europe in terms of area, and the most sparsely populated country in the European Union. It is a parliamentary republic consisting of 310 municipalities,[23] and includes one autonomous region, the Åland Islands. Over 1.4 million people live in the Greater Helsinki metropolitan area, which produces a third of the country's GDP. A large majority of Finns are members of the Evangelical Lutheran Church.[24]
18
+
19
+ The earliest written appearances of the name Finland are thought to be on three runestones. Two were found in the Swedish province of Uppland and bear the inscription finlonti (U 582). The third was found in Gotland; it bears the inscription finlandi (G 319) and dates back to the 13th century.[25] The name is assumed to be related to the tribal name Finns, which is first mentioned in AD 98 (its meaning is disputed).
20
+
21
+ The name Suomi (Finnish for 'Finland') has uncertain origins, but a candidate for a source is the Proto-Baltic word *źemē, meaning "land". In addition to the close relatives of Finnish (the Finnic languages), this name is also used in the Baltic languages Latvian and Lithuanian. Alternatively, the Indo-European word *gʰm-on "man" (cf. Gothic guma, Latin homo) has been suggested, being borrowed as *ćoma. Earlier theories suggested derivation from suomaa (fen land) or suoniemi (fen cape), but these are now considered outdated. Some have suggested common etymology with saame (Sami, a Finno-Ugric people in Lapland) and Häme (a province in the inland), but that theory is uncertain.[26]
22
+
23
+ The earliest attested use of the word Suomi is from 811, in the Royal Frankish Annals, where it is used as a personal name connected to a peace treaty.[27][28]
24
+
25
+ In the earliest historical sources, from the 12th and 13th centuries, the term Finland refers to the coastal region around Turku from Perniö to Uusikaupunki. This region later became known as Finland Proper in distinction from the country name Finland. Finland became a common name for the whole country in a centuries-long process that started when the Catholic Church established a missionary diocese in Nousiainen in the northern part of the province of Suomi possibly sometime in the 12th century.[29]
26
+
27
+ The devastation of Finland during the Great Northern War (1714–1721) and the Russo-Swedish War (1741–1743) caused Sweden to begin major efforts to defend its eastern half from Russia. These 18th-century experiences created a sense of shared destiny that, when combined with the unique Finnish language, led to the adoption of an expanded concept of Finland.[30]
28
+
29
+ If the archeological finds from Wolf Cave are the result of Neanderthal activity, the first people inhabited Finland approximately 120,000–130,000 years ago.[31] The area that is now Finland was settled, at the latest, around 8,500 BC during the Stone Age, towards the end of the last glacial period. The artifacts the first settlers left behind present characteristics that are shared with those found in Estonia, Russia, and Norway.[32] The earliest people were hunter-gatherers, using stone tools.[33]
30
+
31
+ The first pottery appeared in 5200 BC, when the Comb Ceramic culture was introduced.[34] The arrival of the Corded Ware culture in Southern coastal Finland between 3000 and 2500 BC may have coincided with the start of agriculture.[35] Even with the introduction of agriculture, hunting and fishing continued to be important parts of the subsistence economy.
32
+
33
+ In the Bronze Age, permanent all-year-round cultivation and animal husbandry spread, but the cold climate phase slowed the change.[36] Cultures in Finland shared common features in pottery, and axes also had similarities, though local features existed. The Seima-Turbino phenomenon brought the first bronze artifacts to the region and possibly also the Finno-Ugric languages.[36][37] Commercial contacts, which had so far mostly been with Estonia, started to extend to Scandinavia. Domestic manufacture of bronze artifacts started around 1300 BC with Maaninka-type bronze axes. Bronze was imported from the Volga region and from southern Scandinavia.[38]
34
+
35
+ In the Iron Age, the population grew, especially in the Häme and Savo regions. Finland Proper was the most densely populated area. Cultural contacts with the Baltics and Scandinavia became more frequent. Commercial contacts in the Baltic Sea region grew and extended during the 8th and 9th centuries.
36
+
37
+ The main exports from Finland were furs, slaves, castoreum, and falcons to European courts. Imports included silk and other fabrics, jewelry, Ulfberht swords and, to a lesser extent, glass. Production of iron started in approximately 500 BC.[39]
38
+
39
+ At the end of the 9th century, the indigenous artifact culture, especially women's jewelry and weapons, had more local features in common than ever before. This has been interpreted as expressing a common Finnish identity, born from an image of common origin.[40]
40
+
41
+ An early form of the Finnic languages spread to the Baltic Sea region approximately 1900 BC with the Seima-Turbino phenomenon. A common Finnic language was spoken around the Gulf of Finland 2,000 years ago. The dialects from which the modern-day Finnish language developed came into existence during the Iron Age.[41] Although distantly related, the Sami retained the hunter-gatherer lifestyle longer than the Finns. The Sami cultural identity and the Sami language have survived in Lapland, the northernmost province, but the Sami have been displaced or assimilated elsewhere.
42
+
43
+ The 12th and 13th centuries were a violent time in the northern Baltic Sea. The Livonian Crusade was ongoing, and the Finnish tribes such as the Tavastians and Karelians were in frequent conflict with Novgorod and with each other. Also, during the 12th and 13th centuries several crusades from the Catholic realms of the Baltic Sea area were made against the Finnish tribes. According to historical sources, Danes waged two crusades on Finland, in 1191 and in 1202,[42] and Swedes possibly waged the so-called second crusade to Finland in 1249 against the Tavastians and the third crusade to Finland in 1293 against the Karelians. The so-called first crusade to Finland, possibly in 1155, most likely never took place. It is also possible that Germans carried out violent conversions of Finnish pagans in the 13th century.[43] According to a papal letter from 1241, the king of Norway was also fighting against "nearby pagans" at that time.[44]
44
+
45
+ As a result of the crusades and the colonisation of some Finnish coastal areas by a Christian Swedish population during the Middle Ages,[45] including the old capital Turku, Finland gradually became part of the kingdom of Sweden and the sphere of influence of the Catholic Church. Due to the Swedish conquest, the Finnish upper class lost its position and lands to the new Swedish and German nobility and to the Catholic Church.[46] In Sweden, even in the 17th and 18th centuries, it was clear that Finland was a conquered country and its inhabitants could be treated arbitrarily. Swedish kings rarely visited Finland, and in contemporary Swedish texts Finns were portrayed as primitive and their language as inferior.[47]
46
+
47
+ Swedish became the dominant language of the nobility, administration, and education; Finnish was chiefly a language for the peasantry, clergy, and local courts in predominantly Finnish-speaking areas. During the Protestant Reformation, the Finns gradually converted to Lutheranism.[48]
48
+
49
+ In the 16th century, Mikael Agricola published the first written works in Finnish, and Finland's current capital city, Helsinki, was founded by Gustav I of Sweden.[49] The first university in Finland, the Royal Academy of Turku, was established in 1640. Finland suffered a severe famine in 1696–1697, during which about one third of the Finnish population died,[50] and a devastating plague a few years later.
50
+
51
+ In the 18th century, wars between Sweden and Russia twice led to the occupation of Finland by Russian forces, periods known to the Finns as the Greater Wrath (1714–1721) and the Lesser Wrath (1742–1743).[50] It is estimated that almost an entire generation of young men was lost during the Greater Wrath, due mainly to the destruction of homes and farms and to the burning of Helsinki.[51] By this time Finland was the predominant term for the whole area from the Gulf of Bothnia to the Russian border.[citation needed]
52
+
53
+ Two Russo-Swedish wars in twenty-five years served as reminders to the Finnish people of the precarious position between Sweden and Russia. An increasingly vocal elite in Finland soon determined that Finnish ties with Sweden were becoming too costly, and following the Russo-Swedish War of 1788–1790, the Finnish elite's desire to break with Sweden only heightened.[52]
54
+
55
+ Even before the war there were conspiring politicians, among them Colonel G. M. Sprengtporten, who had supported Gustav III's coup in 1772. Sprengtporten fell out with the king and resigned his commission in 1777. In the following decade he tried to secure Russian support for an autonomous Finland, and later became an adviser to Catherine II.[52] In the spirit of the notion of Adolf Ivar Arwidsson (1791–1858), "we are not Swedes, we do not want to become Russians, let us therefore be Finns", the Finnish national identity started to become established.[citation needed]
56
+
57
+ Notwithstanding the efforts of Finland's elite and nobility to break ties with Sweden, there was no genuine independence movement in Finland until the early 20th century. In fact, at this time the Finnish peasantry was outraged by the actions of their elite and almost exclusively supported Gustav's actions against the conspirators. (The High Court of Turku condemned Sprengtporten as a traitor c. 1793.)[52] The Swedish era ended with the Finnish War in 1809.
58
+
59
+ On 29 March 1809, having been taken over by the armies of Alexander I of Russia in the Finnish War, Finland became an autonomous Grand Duchy in the Russian Empire until the end of 1917. In 1811, Alexander I incorporated the Russian Vyborg province into the Grand Duchy of Finland. During the Russian era, the Finnish language began to gain recognition. From the 1860s onwards, a strong Finnish nationalist movement known as the Fennoman movement grew. Milestones included the publication of what would become Finland's national epic – the Kalevala – in 1835, and the Finnish language's achieving equal legal status with Swedish in 1892.
60
+
61
+ The Finnish famine of 1866–1868 killed 15% of the population, making it one of the worst famines in European history. The famine led the Russian Empire to ease financial regulations, and investment rose in the following decades. Economic and political development was rapid.[54] The gross domestic product (GDP) per capita was still half that of the United States and a third that of Britain.[54]
62
+
63
+ In 1906, universal suffrage was adopted in the Grand Duchy of Finland. However, the relationship between the Grand Duchy and the Russian Empire soured when the Russian government made moves to restrict Finnish autonomy. For example, universal suffrage was, in practice, virtually meaningless, since the tsar did not have to approve any of the laws adopted by the Finnish parliament. Desire for independence gained ground, first among radical liberals[55] and socialists.
64
+
65
+ After the 1917 February Revolution, the position of Finland as part of the Russian Empire was questioned, mainly by Social Democrats. Since the head of state was the tsar of Russia, it was not clear who the chief executive of Finland was after the revolution. The Parliament, controlled by social democrats, passed the so-called Power Act to give the highest authority to the Parliament. This was rejected by the Russian Provisional Government which decided to dissolve the Parliament.[56]
+
+ New elections were conducted, in which right-wing parties won with a slim majority. Some social democrats refused to accept the result and still claimed that the dissolution of the parliament (and thus the ensuing elections) was extralegal. The two nearly equally powerful political blocs, the right-wing parties and the social democratic party, were highly antagonistic towards each other.
+
+ The October Revolution in Russia changed the geopolitical situation anew. Suddenly, the right-wing parties in Finland started to reconsider their decision to block the transfer of highest executive power from the Russian government to Finland, as the Bolsheviks took power in Russia. Rather than acknowledge the authority of the Power Act of a few months earlier, the right-wing government declared independence on 6 December 1917.
+
+ On 27 January 1918, the official opening shots of the civil war were fired in two simultaneous events. The government started to disarm the Russian forces in Pohjanmaa, and the Social Democratic Party staged a coup.[failed verification] The latter gained control of southern Finland and Helsinki, but the White government continued in exile from Vaasa. This sparked the brief but bitter civil war. The Whites, who were supported by Imperial Germany, prevailed over the Reds.[57] After the war, tens of thousands of Reds and suspected sympathizers were interned in camps, where thousands died by execution or from malnutrition and disease. Deep social and political enmity was sown between the Reds and Whites and would last until the Winter War and beyond; even today, the civil war remains a sensitive and emotional topic.[58][59] The civil war and activist expeditions into Soviet Russia strained Eastern relations.
+
+ After a brief experimentation with monarchy, Finland became a presidential republic, with Kaarlo Juho Ståhlberg elected as its first president in 1919. The Finnish–Russian border was determined by the Treaty of Tartu in 1920, largely following the historic border but granting Pechenga (Finnish: Petsamo) and its Barents Sea harbour to Finland. Finnish democracy did not see any Soviet coup attempts and survived the anti-Communist Lapua Movement. The relationship between Finland and the Soviet Union was tense. Army officers were trained in France, and relations with Western Europe and Sweden were strengthened.
+
+ In 1917, the population was 3 million. Credit-based land reform was enacted after the civil war, increasing the proportion of the population that owned capital.[54] About 70% of workers were occupied in agriculture and 10% in industry.[60] The largest export markets were the United Kingdom and Germany.
+
+ Finland fought the Soviet Union in the Winter War of 1939–1940, after the Soviet Union attacked Finland, and in the Continuation War of 1941–1944, when Finland aligned with Germany following Operation Barbarossa, Germany's invasion of the Soviet Union. For 872 days, the German army, aided indirectly by Finnish forces, besieged Leningrad, the USSR's second-largest city.[61] After Finnish resistance brought a major Soviet offensive in June/July 1944 to a standstill, Finland reached an armistice with the Soviet Union. This was followed by the Lapland War of 1944–1945, when Finland fought retreating German forces in northern Finland.
+
+ The treaties signed in 1947 and 1948 with the Soviet Union included Finnish obligations, restraints, and reparations—as well as further Finnish territorial concessions in addition to those in the Moscow Peace Treaty of 1940. As a result of the two wars, Finland ceded most of Finnish Karelia, Salla, and Petsamo, which amounted to 10% of its land area and 20% of its industrial capacity, including the ports of Vyborg (Viipuri) and the ice-free Liinakhamari (Liinahamari). Almost the whole population, some 400,000 people, fled these areas. The former Finnish territory now constitutes part of Russia's Republic of Karelia. Finland was never occupied by Soviet forces and it retained its independence, but at a loss of about 93,000 soldiers.
+
+ Finland rejected Marshall aid, in apparent deference to Soviet desires. However, the United States provided secret development aid and helped the Social Democratic Party, in hopes of preserving Finland's independence.[62] Establishing trade with the Western powers, such as the United Kingdom, and paying reparations to the Soviet Union produced a transformation of Finland from a primarily agrarian economy to an industrialised one. Valmet was founded to create materials for war reparations. After the reparations had been paid off, Finland continued to trade with the Soviet Union in the framework of bilateral trade.
+
+ In 1950, 46% of Finnish workers worked in agriculture and a third lived in urban areas.[63] The new jobs in manufacturing, services, and trade quickly attracted people to the towns. The average number of births per woman declined from a baby boom peak of 3.5 in 1947 to 1.5 in 1973.[63] When baby-boomers entered the workforce, the economy did not generate jobs quickly enough, and hundreds of thousands emigrated to the more industrialized Sweden, with emigration peaking in 1969 and 1970.[63] The 1952 Summer Olympics brought international visitors. Finland took part in trade liberalization in the World Bank, the International Monetary Fund and the General Agreement on Tariffs and Trade.
+
+ Officially claiming to be neutral, Finland lay in the grey zone between the Western countries and the Soviet Union. The YYA Treaty (Finno-Soviet Pact of Friendship, Cooperation and Mutual Assistance) gave the Soviet Union some leverage in Finnish domestic politics. This was extensively exploited by president Urho Kekkonen against his opponents. He maintained an effective monopoly on Soviet relations from 1956 on, which was crucial for his continued popularity. In politics, there was a tendency of avoiding any policies and statements that could be interpreted as anti-Soviet. This phenomenon was given the name "Finlandization" by the West German press.
+
+ Despite close relations with the Soviet Union, Finland maintained a market economy. Various industries benefited from trade privileges with the Soviets, which explains the widespread support that pro-Soviet policies enjoyed among business interests in Finland. Economic growth was rapid in the postwar era, and by 1975 Finland's GDP per capita was the 15th-highest in the world. In the 1970s and '80s, Finland built one of the most extensive welfare states in the world. Finland negotiated with the European Economic Community (EEC, a predecessor of the European Union) a treaty that mostly abolished customs duties towards the EEC starting from 1977, although Finland did not fully join. In 1981, President Urho Kekkonen's failing health forced him to retire after holding office for 25 years.
+
+ Finland reacted cautiously to the collapse of the Soviet Union, but swiftly began increasing integration with the West. On 21 September 1990, Finland unilaterally declared the Paris Peace Treaty obsolete, following the German reunification decision nine days earlier.[64]
+
+ Miscalculated macroeconomic decisions, a banking crisis, the collapse of its largest trading partner (the Soviet Union), and a global economic downturn caused a deep early 1990s recession in Finland. The depression bottomed out in 1993, and Finland saw steady economic growth for more than ten years.[65] Like other Nordic countries, Finland has decentralised its economy since the late 1980s. Financial and product market regulation was loosened. Some state enterprises have been privatized and there have been some modest tax cuts.[citation needed] Finland joined the European Union in 1995, and the Eurozone in 1999. Much of the late 1990s economic growth was fueled by the success of the mobile phone manufacturer Nokia, which held a unique position of representing 80% of the market capitalization of the Helsinki Stock Exchange.
+
+ Lying approximately between latitudes 60° and 70° N, and longitudes 20° and 32° E, Finland is one of the world's northernmost countries. Of world capitals, only Reykjavík lies more to the north than Helsinki. The distance from the southernmost point – Hanko in Uusimaa – to the northernmost – Nuorgam in Lapland – is 1,160 kilometres (720 mi).
+
+ Finland has about 168,000 lakes (of area larger than 500 m2 or 0.12 acres) and 179,000 islands.[66] Its largest lake, Saimaa, is the fourth largest in Europe. The Finnish Lakeland is the area with the most lakes in the country; many of the major cities in the area, most notably Tampere, Jyväskylä and Kuopio, are located in the immediate vicinity of the large lakes. The greatest concentration of islands is found in the southwest, in the Archipelago Sea between continental Finland and the main island of Åland.
+
+ Much of the geography of Finland is a result of the Ice Age. The glaciers were thicker and lasted longer in Fennoscandia compared with the rest of Europe. Their eroding effects have left the Finnish landscape mostly flat with few hills and fewer mountains. Its highest point, the Halti at 1,324 metres (4,344 ft), is found in the extreme north of Lapland at the border between Finland and Norway. The highest mountain whose peak is entirely in Finland is Ridnitšohkka at 1,316 m (4,318 ft), directly adjacent to Halti.
+
+ The retreating glaciers have left the land with morainic deposits in formations of eskers. These are ridges of stratified gravel and sand, running northwest to southeast, where the ancient edge of the glacier once lay. Among the biggest of these are the three Salpausselkä ridges that run across southern Finland.
+
+ Having been compressed under the enormous weight of the glaciers, terrain in Finland is rising due to the post-glacial rebound. The effect is strongest around the Gulf of Bothnia, where land steadily rises about 1 cm (0.4 in) a year. As a result, the old sea bottom turns little by little into dry land: the surface area of the country is expanding by about 7 square kilometres (2.7 sq mi) annually.[67] Relatively speaking, Finland is rising from the sea.[68]
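+
+ As a back-of-the-envelope illustration of the uplift figures quoted above, the short sketch below simply extrapolates the stated present-day rates (about 1 cm of uplift and about 7 square kilometres of new land per year) linearly into the future. This is only an assumption made for illustration; the rebound actually slows over time.
+
+ # Linear extrapolation of Finland's post-glacial rebound, using only the
+ # rates quoted in the text and assuming (for simplicity) they stay constant.
+ UPLIFT_CM_PER_YEAR = 1.0        # uplift near the Gulf of Bothnia
+ NEW_LAND_KM2_PER_YEAR = 7.0     # new land emerging from the sea
+
+ def rebound_after(years):
+     """Return cumulative uplift (in metres) and new land area (in km^2)."""
+     return UPLIFT_CM_PER_YEAR * years / 100.0, NEW_LAND_KM2_PER_YEAR * years
+
+ for horizon in (100, 1000):
+     uplift_m, new_land_km2 = rebound_after(horizon)
+     print(f"after {horizon} years: ~{uplift_m:.1f} m uplift, ~{new_land_km2:.0f} km^2 of new land")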
+
+ The landscape is covered mostly by coniferous taiga forests and fens, with little cultivated land. Of the total area 10% is lakes, rivers and ponds, and 78% forest. The forest consists of pine, spruce, birch, and other species.[69] Finland is the largest producer of wood in Europe and among the largest in the world. The most common type of rock is granite. It is a ubiquitous part of the scenery, visible wherever there is no soil cover. Moraine or till is the most common type of soil, covered by a thin layer of humus of biological origin. Podzol profile development is seen in most forest soils except where drainage is poor. Gleysols and peat bogs occupy poorly drained areas.
+
+ Phytogeographically, Finland is shared between the Arctic, central European, and northern European provinces of the Circumboreal Region within the Boreal Kingdom. According to the WWF, the territory of Finland can be subdivided into three ecoregions: the Scandinavian and Russian taiga, Sarmatic mixed forests, and Scandinavian Montane Birch forest and grasslands. Taiga covers most of Finland from northern regions of southern provinces to the north of Lapland. On the southwestern coast, south of the Helsinki-Rauma line, forests are characterized by mixed forests, that are more typical in the Baltic region. In the extreme north of Finland, near the tree line and Arctic Ocean, Montane Birch forests are common.
+
+ Similarly, Finland has a diverse and extensive range of fauna. There are at least sixty native mammalian species, 248 breeding bird species, over 70 fish species, and 11 reptile and frog species present today, many of which migrated from neighboring countries thousands of years ago.
+ Large and widely recognized wildlife mammals found in Finland are the brown bear (the national animal), gray wolf, wolverine, and elk. Three of the more striking birds are the whooper swan, a large European swan and the national bird of Finland; the Western capercaillie, a large, black-plumaged member of the grouse family; and the Eurasian eagle-owl. The latter is considered an indicator of old-growth forest connectivity, and has been declining because of landscape fragmentation.[70] The most common breeding birds are the willow warbler, common chaffinch, and redwing.[71] Of some seventy species of freshwater fish, the northern pike, perch, and others are plentiful. Atlantic salmon remains the favourite of fly rod enthusiasts.
+
+ The endangered Saimaa ringed seal, one of only three lake seal species in the world, exists only in the Saimaa lake system of southeastern Finland, down to only 390 seals today.[72] It has become the emblem of the Finnish Association for Nature Conservation.[73]
+
+ The main factor influencing Finland's climate is the country's geographical position between the 60th and 70th northern parallels in the Eurasian continent's coastal zone. In the Köppen climate classification, the whole of Finland lies in the boreal zone, characterized by warm summers and freezing winters. Within the country, the temperateness varies considerably between the southern coastal regions and the extreme north, showing characteristics of both a maritime and a continental climate. Finland is near enough to the Atlantic Ocean to be continuously warmed by the Gulf Stream. The Gulf Stream combines with the moderating effects of the Baltic Sea and numerous inland lakes to explain the unusually warm climate compared with other regions that share the same latitude, such as Alaska, Siberia, and southern Greenland.[74]
+
+ Winters in southern Finland (when mean daily temperature remains below 0 °C or 32 °F) are usually about 100 days long, and in the inland the snow typically covers the land from about late November to April, and on the coastal areas such as Helsinki, snow often covers the land from late December to late March.[75] Even in the south, the harshest winter nights can see the temperatures fall to −30 °C (−22 °F) although on coastal areas like Helsinki, temperatures below −30 °C (−22 °F) are rare. Climatic summers (when mean daily temperature remains above 10 °C or 50 °F) in southern Finland last from about late May to mid-September, and in the inland, the warmest days of July can reach over 35 °C (95 °F).[74] Although most of Finland lies on the taiga belt, the southernmost coastal regions are sometimes classified as hemiboreal.[76]
+
+ In northern Finland, particularly in Lapland, the winters are long and cold, while the summers are relatively warm but short. The most severe winter days in Lapland can see the temperature fall to −45 °C (−49 °F). The winter of the north lasts for about 200 days with permanent snow cover from about mid-October to early May. Summers in the north are quite short, only two to three months, but can still see maximum daily temperatures above 25 °C (77 °F) during heat waves.[74] No part of Finland has Arctic tundra, but Alpine tundra can be found on the fells of Lapland.[76]
+
+ The Finnish climate is suitable for cereal farming only in the southernmost regions, while the northern regions are suitable for animal husbandry.[77]
+
+ A quarter of Finland's territory lies within the Arctic Circle and the midnight sun can be experienced for more days the farther north one travels. At Finland's northernmost point, the sun does not set for 73 consecutive days during summer, and does not rise at all for 51 days during winter.[74]
+
+ Finland consists of 19 regions, called maakunta in Finnish and landskap in Swedish. The regions are governed by regional councils which serve as forums of cooperation for the municipalities of a region. The main tasks of the regions are regional planning and development of enterprise and education. In addition, the public health services are usually organized on the basis of regions. Currently, the only region where a popular election is held for the council is Kainuu. Other regional councils are elected by municipal councils, each municipality sending representatives in proportion to its population.
+
+ In addition to inter-municipal cooperation, which is the responsibility of regional councils, each region has a state Employment and Economic Development Centre which is responsible for the local administration of labour, agriculture, fisheries, forestry, and entrepreneurial affairs. The Finnish Defence Forces regional offices are responsible for the regional defence preparations and for the administration of conscription within the region.
+
+ Regions represent dialectal, cultural, and economic variations better than the former provinces, which were purely administrative divisions of the central government. Historically, regions are divisions of historical provinces of Finland, areas which represent dialects and culture more accurately.
+
+ Six Regional State Administrative Agencies were created by the state of Finland in 2010, each of them responsible for one of the regions called alue in Finnish and region in Swedish; in addition, Åland was designated a seventh region. These take over some of the tasks of the earlier Provinces of Finland (lääni/län), which were abolished.[78]
+
+ The region of Eastern Uusimaa (Itä-Uusimaa) was consolidated with Uusimaa on 1 January 2011.[81]
+
+ The fundamental administrative divisions of the country are the municipalities, which may also call themselves towns or cities. They account for half of public spending. Spending is financed by municipal income tax, state subsidies, and other revenue. As of 2020[update], there are 310 municipalities,[23] and most have fewer than 6,000 residents.
+
+ In addition to municipalities, two intermediate levels are defined. Municipalities co-operate in seventy sub-regions and nineteen regions. These are governed by the member municipalities and have only limited powers. The autonomous province of Åland has a permanent democratically elected regional council. Sami people have a semi-autonomous Sami native region in Lapland for issues on language and culture.
+
+ In the following chart, the number of inhabitants includes those living in the entire municipality (kunta/kommun), not just in the built-up area. The land area is given in km², and the density in inhabitants per km² (land area). The figures are as of 31 January 2019. The capital region – comprising Helsinki, Vantaa, Espoo and Kauniainen – forms a continuous conurbation of over 1.1 million people. However, common administration is limited to voluntary cooperation of all municipalities, e.g. in Helsinki Metropolitan Area Council.
+
+ The Constitution of Finland defines the political system; Finland is a parliamentary republic within the framework of a representative democracy. The Prime Minister is the country's most powerful person. The current version of the constitution was enacted on 1 March 2000, and was amended on 1 March 2012. Citizens can run and vote in parliamentary, municipal, presidential and European Union elections.
+
+ The head of state of Finland is the President of the Republic of Finland (in Finnish: Suomen tasavallan presidentti; in Swedish: Republiken Finlands president). Finland has had for most of its independence a semi-presidential system, but in the last few decades the powers of the President have been diminished. Constitutional amendments that came into effect in the early 1990s, together with the newly drafted constitution of 2000 (amended in 2012), have made the President's position primarily a ceremonial office. However, the President still leads the nation's foreign politics together with the Council of State and is the commander-in-chief of the Defence Forces.[1] The position still entails some powers, including responsibility for foreign policy (excluding affairs related to the European Union) in cooperation with the cabinet, being the head of the armed forces, some decree and pardoning powers, and some appointive powers. Direct, one- or two-stage elections are used to elect the president for a term of six years and for a maximum of two consecutive terms. The current president is Sauli Niinistö; he took office on 1 March 2012. Former presidents were K. J. Ståhlberg (1919–1925), L. K. Relander (1925–1931), P. E. Svinhufvud (1931–1937), Kyösti Kallio (1937–1940), Risto Ryti (1940–1944), C. G. E. Mannerheim (1944–1946), J. K. Paasikivi (1946–1956), Urho Kekkonen (1956–1982), Mauno Koivisto (1982–1994), Martti Ahtisaari (1994–2000), and Tarja Halonen (2000–2012).
+
+ The current president was elected from the ranks of the National Coalition Party for the first time since 1946. The presidency between 1946 and the present was instead held by a member of the Social Democratic Party or the Centre Party.
+
+ The 200-member unicameral Parliament of Finland (Finnish: Eduskunta, Swedish: Riksdag) exercises supreme legislative authority in the country. It may alter the constitution and ordinary laws, dismiss the cabinet, and override presidential vetoes. Its acts are not subject to judicial review; the constitutionality of new laws is assessed by the parliament's constitutional law committee. The parliament is elected for a term of four years by proportional representation, using the D'Hondt method with open lists in multi-seat constituencies. Various parliament committees listen to experts and prepare legislation. The speaker of the parliament is Anu Vehviläinen (Centre Party).[84]
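+
+ To make the seat-allocation rule concrete, the following minimal sketch (in Python) implements the D'Hondt highest-averages calculation for a single multi-seat constituency. The party names and vote totals are invented purely for illustration and are not taken from any Finnish election.
+
+ # Minimal sketch of D'Hondt seat allocation in one multi-seat constituency.
+ # Party names and vote totals below are hypothetical.
+ def dhondt(votes, seats):
+     """Allocate `seats` among parties using the D'Hondt highest-averages method."""
+     allocation = {party: 0 for party in votes}
+     for _ in range(seats):
+         # Each party's comparison figure is votes / (seats already won + 1);
+         # the next seat goes to the party with the highest figure.
+         winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
+         allocation[winner] += 1
+     return allocation
+
+ example_votes = {"Party A": 53000, "Party B": 31000, "Party C": 16000}
+ print(dhondt(example_votes, seats=10))   # -> {'Party A': 6, 'Party B': 3, 'Party C': 1}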
+
+ Since universal suffrage was introduced in 1906, the parliament has been dominated by the Centre Party (former Agrarian Union), the National Coalition Party, and the Social Democrats. These parties have enjoyed approximately equal support, and their combined vote has totalled about 65–80% of all votes. Their lowest common total of MPs, 121, was reached in the 2011 elections. For a few decades after 1944, the Communists were a strong fourth party. Due to the electoral system of proportional representation, and the relative reluctance of voters to switch their support between parties, the relative strengths of the parties have commonly varied only slightly from one election to another. However, there have been some long-term trends, such as the rise and fall of the Communists during the Cold War; the steady decline into insignificance of the Liberals and its predecessors from 1906 to 1980; and the rise of the Green League since 1983. In the 2011 elections, the Finns Party achieved exceptional success, increasing its representation from 5 to 39 seats, surpassing the Centre Party.[85]
+
+ The autonomous province of Åland, which forms a federacy with Finland, elects one member to the parliament, who traditionally joins the parliamentary group of the Swedish People's Party of Finland. (The province also holds elections for its own permanent regional council, and in the 2011 elections, Åland Centre was the largest party.)
+
+ The Parliament can be dissolved by a recommendation of the Prime Minister, endorsed by the President. This procedure has never been used, although the parliament was dissolved eight times under the pre-2000 constitution, when this action was the sole prerogative of the president.
+
+ After the parliamentary elections on 19 April 2015, the seats were divided among eight parties as follows:[86]
+
+ After parliamentary elections, the parties negotiate among themselves on forming a new cabinet (the Finnish Government), which then has to be approved by a simple majority vote in the parliament. The cabinet can be dismissed by a parliamentary vote of no confidence, although this rarely happens (the last time in 1957), as the parties represented in the cabinet usually make up a majority in the parliament.[87]
+
+ The cabinet exercises most executive powers, and originates most of the bills that the parliament then debates and votes on. It is headed by the Prime Minister of Finland, and consists of him or her, of other ministers, and of the Chancellor of Justice. The current prime minister is Sanna Marin (Social Democratic Party). Each minister heads his or her ministry, or, in some cases, has responsibility for a subset of a ministry's policy. After the prime minister, the most powerful minister is the minister of finance. The incumbent Minister of Finance is Matti Vanhanen.
+
+ As no one party ever dominates the parliament, Finnish cabinets are multi-party coalitions. As a rule, the post of prime minister goes to the leader of the biggest party and that of the minister of finance to the leader of the second biggest.
+
+ The judicial system of Finland is a civil law system divided between courts with regular civil and criminal jurisdiction and administrative courts with jurisdiction over litigation between individuals and the public administration. Finnish law is codified and based on Swedish law and in a wider sense, civil law or Roman law. The court system for civil and criminal jurisdiction consists of local courts (käräjäoikeus, tingsrätt), regional appellate courts (hovioikeus, hovrätt), and the Supreme Court (korkein oikeus, högsta domstolen). The administrative branch of justice consists of administrative courts (hallinto-oikeus, förvaltningsdomstol) and the Supreme Administrative Court (korkein hallinto-oikeus, högsta förvaltningsdomstolen). In addition to the regular courts, there are a few special courts in certain branches of administration. There is also a High Court of Impeachment for criminal charges against certain high-ranking officeholders.
+
+ Around 92% of residents have confidence in Finland's security institutions.[88] The overall crime rate of Finland is not high in the EU context. Some crime types are above average, notably the homicide rate, which is high for Western Europe.[89] A day fine system is in effect and is also applied to offenses such as speeding.
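+
+ The day fine mentioned above scales the penalty with the offender's income rather than using a flat amount. The sketch below uses a deliberately simplified, assumed rule (one day fine roughly equal to half of daily disposable income, with a small minimum); the actual statutory formula includes deductions and thresholds not modelled here, so the numbers are only indicative.
+
+ # Simplified, hypothetical day-fine calculation for illustration only;
+ # the real Finnish formula applies statutory deductions not modelled here.
+ def day_fine_total(monthly_net_income_eur, number_of_day_fines, minimum_day_fine_eur=6.0):
+     """Total fine = number of day fines * value of one day fine."""
+     daily_disposable = monthly_net_income_eur / 30.0                  # rough daily income
+     one_day_fine = max(daily_disposable / 2.0, minimum_day_fine_eur)  # assumed ~half of daily income
+     return number_of_day_fines * one_day_fine
+
+ # The same offence (say, a speeding violation set at 12 day fines) costs more for a higher earner:
+ for income in (1500, 3000, 9000):
+     print(income, round(day_fine_total(income, 12), 2))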
+
+ Finland has successfully fought against government corruption, which was more common in the 1970s and '80s.[90][verification needed] For instance, economic reforms and EU membership introduced stricter requirements for open bidding and many public monopolies were abolished.[90] Today, Finland has a very low number of corruption charges; Transparency International ranks Finland as one of the least corrupt countries in Europe.
+
+ In 2008, Transparency International criticized the lack of transparency of the system of Finnish political finance.[91] According to GRECO in 2007, corruption should be better taken into account in the Finnish system of election funding.[92] A scandal revolving around campaign finance of the 2007 parliamentary elections broke out in spring 2008. Nine government ministers, and an even greater number of members of parliament, submitted incomplete funding reports. The law provides no punishment for elected politicians who submit false funding reports.
+
+ According to the 2012 constitution, the president (currently Sauli Niinistö) leads foreign policy in cooperation with the government, except that the president has no role in EU affairs.[93]
+
+ In 2008, former president Martti Ahtisaari was awarded the Nobel Peace Prize.[94] Finland was considered a cooperative model state, and Finland did not oppose proposals for a common EU defence policy.[95] This was reversed in the 2000s, when Tarja Halonen and Erkki Tuomioja made it Finland's official policy to resist other EU members' plans for common defence.[95]
+
+ Finland has one of the world's most extensive welfare systems, one that guarantees decent living conditions for all residents: Finns, and non-citizens. Since the 1980s the social security has been cut back, but still the system is one of the most comprehensive in the world. Created almost entirely during the first three decades after World War II, the social security system was an outgrowth of the traditional Nordic belief that the state was not inherently hostile to the well-being of its citizens, but could intervene benevolently on their behalf. According to some social historians, the basis of this belief was a relatively benign history that had allowed the gradual emergence of a free and independent peasantry in the Nordic countries and had curtailed the dominance of the nobility and the subsequent formation of a powerful right wing. Finland's history has been harsher than the histories of the other Nordic countries, but not harsh enough to bar the country from following their path of social development.[96]
+
+ The Finnish Defence Forces consist of a cadre of professional soldiers (mainly officers and technical personnel), currently serving conscripts, and a large reserve. The standard readiness strength is 34,700 people in uniform, of which 25% are professional soldiers. A universal male conscription is in place, under which all male Finnish nationals above 18 years of age serve for 6 to 12 months of armed service or 12 months of civilian (non-armed) service.
+ Voluntary post-conscription overseas peacekeeping service is popular, and troops serve around the world in UN, NATO, and EU missions. Approximately 500 women choose voluntary military service every year.[97] Women are allowed to serve in all combat arms including front-line infantry and special forces.
+ The army consists of a highly mobile field army backed up by local defence units. The army defends the national territory and its military strategy employs the use of the heavily forested terrain and numerous lakes to wear down an aggressor, instead of attempting to hold the attacking army on the frontier.
+
+ Finnish defence expenditure per capita is one of the highest in the European Union.[98] The Finnish military doctrine is based on the concept of total defence. The term total means that all sectors of the government and economy are involved in the defence planning. The armed forces are under the command of the Chief of Defence (currently General Jarmo Lindberg), who is directly subordinate to the president in matters related to military command. The branches of the military are the army, the navy, and the air force. The border guard is under the Ministry of the Interior but can be incorporated into the Defence Forces when required for defence readiness.
+
+ Although Finland has not joined the North Atlantic Treaty Organization, the country has joined the NATO Response Force, the EU Battlegroup,[99] and the NATO Partnership for Peace, and in 2014 signed a NATO memorandum of understanding,[100][101] thus forming a practical coalition.[21] In 2015, the Finland-NATO ties were strengthened with a host nation support agreement allowing assistance from NATO troops in emergency situations.[102] Finland has been an active participant in operations in Afghanistan and Kosovo.[103][104] Recently, Finland has been more willing to discuss its current and planned roles in Syria, Iraq and the war against ISIL.[105] On 21 December 2012, Finnish military officer Atte Kaleva was reported to have been kidnapped in Yemen and later released for ransom. At first he was reported to be a casual student of Arabic; only later was it revealed that his studies concerned jihadists and terrorism, and that he was employed by the military.[106][107] In response to a French request for solidarity, the Finnish defence minister commented in November that Finland could, and was willing to, offer intelligence support.[108]
+
+ In May 2015, the Finnish military sent nearly one million letters to all relevant males in the country, informing them of their roles in a potential war effort. There was international speculation that Finland was preparing for war; Finland stated that this was a standard procedure, albeit one never carried out before in Finnish history.[109] Assistant Chief of Staff Hannu Hyppönen, however, said that this was not an isolated case but was bound to the European security dilemma.[109] The NATO memorandum of understanding signed earlier imposes obligations, e.g. to report on internal capabilities and their availability to NATO.[101]
+
+ The economy of Finland has a per capita output equal to that of other European economies such as those of France, Germany, Belgium, or the UK. The largest sector of the economy is the service sector at 66% of GDP, followed by manufacturing and refining at 31%. Primary production represents 2.9%.[110] With respect to foreign trade, the key economic sector is manufacturing. The largest industries in 2007[111] were electronics (22%); machinery, vehicles, and other engineered metal products (21.1%); forest industry (13%); and chemicals (11%). The gross domestic product peaked in 2008. As of 2015[update], the country's economy is at the 2006 level.[112][113]
+
+ Finland has significant timber, mineral (iron, chromium, copper, nickel, and gold), and freshwater resources. Forestry, paper factories, and the agricultural sector (on which taxpayers spend[clarification needed] around €3 billion annually) are important for rural residents so any policy changes affecting these sectors are politically sensitive for politicians dependent on rural votes. The Greater Helsinki area generates around one third of Finland's GDP. In a 2004 OECD comparison, high-technology manufacturing in Finland ranked second largest after Ireland. Knowledge-intensive services have also resulted in the smallest and slow-growth sectors – especially agriculture and low-technology manufacturing – being ranked the second largest after Ireland.[114] The overall short-term outlook was good and GDP growth has been above that of many EU peers.[citation needed]
+
+ Finland is highly integrated into the global economy, and international trade produces one third of GDP.[citation needed] Trade with the European Union makes up 60% of Finland's total trade.[citation needed] The largest trade flows are with Germany, Russia, Sweden, the United Kingdom, the United States, the Netherlands, and China. Trade policy is managed by the European Union, where Finland has traditionally been among the free trade supporters, except for agricultural policy.[115] Finland is the only Nordic country to have joined the Eurozone.
+
+ Finland's climate and soils make growing crops a particular challenge. The country lies between the latitudes 60°N and 70°N, and it has severe winters and relatively short growing seasons that are sometimes interrupted by frost. However, because the Gulf Stream and the North Atlantic Drift Current moderate the climate, Finland contains half of the world's arable land north of 60° north latitude. Annual precipitation is usually sufficient, but it occurs almost exclusively during the winter months, making summer droughts a constant threat. In response to the climate, farmers have relied on quick-ripening and frost-resistant varieties of crops, and they have cultivated south-facing slopes as well as richer bottomlands to ensure production even in years with summer frosts. Most farmland was originally either forest or swamp, and the soil has usually required treatment with lime and years of cultivation to neutralize excess acid and to improve fertility. Irrigation has generally not been necessary, but drainage systems are often needed to remove excess water. Finland's agriculture has been efficient and productive—at least when compared with farming in other European countries.[96]
+
+ Forests play a key role in the country's economy, making it one of the world's leading wood producers and providing raw materials at competitive prices for the crucial wood-processing industries. As in agriculture, the government has long played a leading role in forestry, regulating tree cutting, sponsoring technical improvements, and establishing long-term plans to ensure that the country's forests continue to supply the wood-processing industries. To maintain the country's comparative advantage in forest products, Finnish authorities moved to raise lumber output toward the country's ecological limits. In 1984, the government published the Forest 2000 plan, drawn up by the Ministry of Agriculture and Forestry. The plan aimed at increasing forest harvests by about 3% per year, while conserving forestland for recreation and other uses.[96]
+
+ Private sector employees number 1.8 million, of whom around a third have tertiary education. The average cost of a private sector employee per hour was €25.10 in 2004.[116] As of 2008[update], average purchasing power-adjusted income levels are similar to those of Italy, Sweden, Germany, and France.[117] In 2006, 62% of the workforce worked for enterprises with fewer than 250 employees and they accounted for 49% of total business turnover and had the strongest rate of growth.[118] The female employment rate is high. Gender segregation between male-dominated professions and female-dominated professions is higher than in the US.[119] The proportion of part-time workers was one of the lowest in OECD in 1999.[119] In 2013, the 10 largest private sector employers in Finland were Itella, Nokia, OP-Pohjola, ISS, VR, Kesko, UPM-Kymmene, YIT, Metso, and Nordea.[120]
+
+ The unemployment rate was 9.4% in 2015, having risen from 8.7% in 2014.[121] The youth unemployment rate rose from 16.5% in 2007 to 20.5% in 2014.[122] A fifth of residents are outside the job market at the age of 50 and less than a third are working at the age of 61.[123] In 2014, nearly one million people were living on minimal wages or on unemployment benefits that were not enough to cover their costs of living.[124]
+
+ As of 2006[update], 2.4 million households reside in Finland. The average size is 2.1 persons; 40% of households consist of a single person, 32% two persons and 28% three or more persons. Residential buildings total 1.2 million, and the average residential space is 38 square metres (410 sq ft) per person. The average residential property without land costs €1,187 per sq metre and residential land €8.60 per sq metre. 74% of households had a car. There are 2.5 million cars and 0.4 million other vehicles.[125]
+
+ Around 92% have a mobile phone and 83.5% (2009) Internet connection at home. The average total household consumption was €20,000, out of which housing consisted of about €5,500, transport about €3,000, food and beverages (excluding alcoholic beverages) at around €2,500, and recreation and culture at around €2,000.[126] According to Invest in Finland, private consumption grew by 3% in 2006 and consumer trends included durables, high-quality products, and spending on well-being.[127]
+
+ In 2017, Finland's GDP reached €224 billion. However, the second quarter of 2018 saw slower economic growth. The unemployment rate fell to a near one-decade low in June, supporting much higher growth in private consumption.[128]
+
+ Finland has the highest concentration of cooperatives relative to its population.[129] The country's largest retailer, S-Group, which is also its largest private employer, and its largest bank, OP-Group, are both cooperatives.
+
+ The free and largely privately owned financial and physical Nordic energy markets, traded on the NASDAQ OMX Commodities Europe and Nord Pool Spot exchanges, have provided competitive prices compared with other EU countries. As of 2007[update], Finland has roughly the lowest industrial electricity prices in the EU-15 (equal to France).[131]
+
+ In 2006, the energy market was around 90 terawatt hours and the peak demand around 15 gigawatts in winter. This means that the energy consumption per capita is around 7.2 tons of oil equivalent per year. Industry and construction consumed 51% of total consumption, a relatively high figure reflecting Finland's industries.[132][133] Finland's hydrocarbon resources are limited to peat and wood. About 10–15% of the electricity is produced by hydropower,[134] which is low compared with more mountainous Sweden or Norway. In 2008, renewable energy (mainly hydropower and various forms of wood energy) was high at 31% compared with the EU average of 10.3% in final energy consumption.[135] Russia supplies more than 75% of Finland's oil imports and 100% of total gas imports.[136][137]
+
+ Finland has four privately owned nuclear reactors producing 18% of the country's energy[139] and one research reactor (decommissioned 2018 [140]) at the Otaniemi campus. The fifth AREVA-Siemens-built reactor – the world's largest at 1600 MWe and a focal point of Europe's nuclear industry – has faced many delays and is currently scheduled to be operational by 2018–2020, a decade after the original planned opening.[141] A varying amount (5–17%) of electricity has been imported from Russia (at around 3 gigawatt power line capacity), Sweden and Norway.
+
+ The Onkalo spent nuclear fuel repository is currently under construction at the Olkiluoto Nuclear Power Plant in the municipality of Eurajoki, on the west coast of Finland, by the company Posiva.[142] Energy companies are about to increase nuclear power production, as in July 2010 the Finnish parliament granted permits for two additional reactors.
+
+ Finland's road system carries most internal cargo and passenger traffic. The state's annual expenditure of around €1 billion on the road network is paid for with vehicle and fuel taxes, which amount to around €1.5 billion and €1 billion, respectively.
+
+ The main international passenger gateway is Helsinki Airport, which handled about 17 million passengers in 2016. Oulu Airport is the second largest, whilst another 25 airports have scheduled passenger services.[143] The Helsinki Airport-based Finnair, Blue1, and Nordic Regional Airlines, as well as Norwegian Air Shuttle, sell air services both domestically and internationally. Helsinki has an optimal location for great circle (i.e. the shortest and most efficient) routes between Western Europe and the Far East.
+
+ Despite the country's low population density, the Government annually spends around €350 million to maintain the 5,865-kilometre-long (3,644 mi) network of railway tracks. Rail transport is handled by the state owned VR Group, which has a 5% passenger market share (out of which 80% are from urban trips in Greater Helsinki) and 25% cargo market share.[144] Since 12 December 2010, Karelian Trains, a joint venture between Russian Railways and VR Group, has been running Alstom Pendolino operated high-speed services between Saint Petersburg's Finlyandsky and Helsinki's Central railway stations. These services are branded as "Allegro" trains. The journey from Helsinki to Saint Petersburg takes only three and a half hours. A high-speed rail line is planned between Helsinki and Turku, with a line from the capital to Tampere also proposed.[145] Helsinki opened the world's northernmost metro system in 1982, which has also served the neighbouring city of Espoo since 2017.
+
+ The majority of international cargo shipments are handled at ports. Vuosaari Harbour in Helsinki is the largest container port in Finland; others include Kotka, Hamina, Hanko, Pori, Rauma, and Oulu. There is passenger traffic from Helsinki and Turku, which have ferry connections to Tallinn, Mariehamn, Stockholm and Travemünde. The Helsinki-Tallinn route – one of the busiest passenger sea routes in the world – has also been served by a helicopter line, and the Helsinki-Tallinn Tunnel has been proposed to provide railway services between the two cities.[146] Largely following the example of the Øresund Bridge between Sweden and Denmark, the Kvarken Bridge connecting Umeå in Sweden and Vaasa in Finland to cross the Gulf of Bothnia has also been planned for decades.[147]
+
+ Finland rapidly industrialized after World War II, achieving GDP per capita levels comparable to that of Japan or the UK in the beginning of the 1970s. Initially, most of the economic development was based on two broad groups of export-led industries, the "metal industry" (metalliteollisuus) and "forest industry" (metsäteollisuus). The "metal industry" includes shipbuilding, metalworking, the automotive industry, engineered products such as motors and electronics, and production of metals and alloys including steel, copper and chromium. Many of the world's biggest cruise ships, including MS Freedom of the Seas and the Oasis of the Seas have been built in Finnish shipyards.[148]
+ The "forest industry" includes forestry, timber, pulp and paper, and is often considered a logical development based on Finland's extensive forest resources, as 73% of the area is covered by forest.[149] In the pulp and paper industry, many major companies are based in Finland; Ahlstrom-Munksjö, Metsä Board, and UPM are all Finnish forest-based companies with revenues exceeding €1 billion. However, in recent decades, the Finnish economy has diversified, with companies expanding into fields such as electronics (Nokia), metrology (Vaisala), petroleum (Neste), and video games (Rovio Entertainment), and is no longer dominated by the two sectors of metal and forest industry. Likewise, the structure has changed, with the service sector growing and manufacturing declining in importance; agriculture remains a minor part. Despite this, production for export is still more prominent than in Western Europe, thus making Finland possibly more vulnerable to global economic trends.
+
+ In 2017, the Finnish economy was estimated to consist of approximately 2.7% agriculture, 28.2% manufacturing and 69.1% services.[150] In 2019, the per-capita income of Finland was estimated to be $48,869. In 2020, Finland was ranked 20th on the ease of doing business index, among 190 jurisdictions.
+
+ Finnish politicians have often emulated the other Nordic countries and the Nordic model.[151] The Nordic countries have been free-trading and relatively welcoming to skilled migrants for over a century, though in Finland immigration is relatively new. The level of protection in commodity trade has been low, except for agricultural products.[151]
+
+ Finland has top levels of economic freedom in many areas.[clarification needed] Finland is ranked 16th in the 2008 global Index of Economic Freedom and 9th in Europe.[152] While the manufacturing sector is thriving, the OECD points out that the service sector would benefit substantially from policy improvements.[153]
+
+ The 2007 IMD World Competitiveness Yearbook ranked Finland 17th most competitive.[154] The World Economic Forum 2008 index ranked Finland the 6th most competitive.[155] In both indicators, Finland's performance was next to Germany, and significantly higher than most European countries. In the Business competitiveness index 2007–2008 Finland ranked third in the world.
+
+ Economists attribute much growth to reforms in the product markets. According to the OECD, only four EU-15 countries have less regulated product markets (UK, Ireland, Denmark and Sweden) and only one has less regulated financial markets (Denmark). Nordic countries were pioneers in liberalizing energy, postal, and other markets in Europe.[151] The legal system is clear and business bureaucracy is lighter than in most countries.[152] Property rights are well protected and contractual agreements are strictly honoured.[152] Finland is rated the least corrupt country in the world in the Corruption Perceptions Index[156] and 13th in the Ease of doing business index. Finland ranks high for ease of cross-border trading (5th), contract enforcement (7th), and business closure (5th), but lower for tax payment (83rd) and employing workers (127th).[157]
+
+ Finnish law requires all workers to comply with the national collective agreements that are drafted every few years for each profession and seniority level. The agreement becomes universally enforceable provided that more than 50% of the employees support it, in practice by being a member of a relevant trade union. The unionization rate is high (70%), especially in the middle class (AKAVA—80%). A lack of a national agreement in an industry is considered an exception.[114][151]
+
+ In 2017, tourism in Finland grossed approximately €15.0 billion with a 7% increase from the previous year. Of this, €4.6 billion (30%) came from foreign tourism.[161] In 2017, there were 15.2 million overnight stays of domestic tourists and 6.7 million overnight stays of foreign tourists.[162] Much of the sudden growth can be attributed to the globalisation of the country as well as a rise in positive publicity and awareness. While Russia remains the largest market for foreign tourists, the biggest growth came from Chinese markets (35%).[162] Tourism contributes roughly 2.7% to Finland's GDP, making it comparable to agriculture and forestry.[163]
+
+ Commercial cruises between major coastal and port cities in the Baltic region, including Helsinki, Turku, Mariehamn, Tallinn, Stockholm, and Travemünde, play a significant role in the local tourism industry. By passenger counts, the Port of Helsinki is the busiest port in the world.[164] The Helsinki-Vantaa International Airport is the fourth busiest airport in the Nordic countries in terms of passenger numbers,[165] and about 90% of Finland's international air traffic passes through the airport.[166]
+
+ Lapland has the highest tourism consumption of any Finnish region.[163] Above the Arctic Circle, in midwinter, there is a polar night, a period when the sun does not rise for days or weeks, or even months, and correspondingly, midnight sun in the summer, with no sunset even at midnight (for up to 73 consecutive days, at the northernmost point). Lapland is so far north that the aurora borealis, fluorescence in the high atmosphere due to solar wind, is seen regularly in the fall, winter, and spring. Finnish Lapland is also locally regarded as the home of Saint Nicholas or Santa Claus, with several theme parks, such as Santa Claus Village and Santa Park in Rovaniemi.[167]
+
+ Tourist attractions in Finland include the natural landscape found throughout the country as well as urban attractions. Finland is covered with thick pine forests, rolling hills, and lakes. Finland contains 40 national parks, from the southern shores of the Gulf of Finland to the high fells of Lapland. Outdoor activities include Nordic skiing, golf, fishing, yachting, lake cruises, hiking, and kayaking, among many others. Bird-watching is popular among those fond of avifauna; hunting is also popular. Elk and hare are common game in Finland. Finland also has urbanised regions with many cultural events and activities. Tourist attractions in Helsinki include the Helsinki Cathedral and the Suomenlinna sea fortress. Olavinlinna in Savonlinna hosts the annual Savonlinna Opera Festival,[168] and the medieval milieus of the cities of Turku, Rauma and Porvoo also attract curious spectators.[169]
+
+ Population by ethnic background in 2017[170][171]
+
+ The population of Finland is currently about 5.5 million and is aging, with the birth rate at 10.42 births per 1,000 population per year, or a fertility rate of 1.49 children born per woman,[172] one of the lowest in the world and below the replacement rate of 2.1; it remains considerably below the high of 5.17 children born per woman in 1887.[173] Finland consequently has one of the oldest populations in the world, with an average age of 42.6 years.[174] Approximately half of voters are estimated to be over 50 years old.[175][63][176][22] Finland has an average population density of 18 inhabitants per square kilometre. This is the third-lowest population density of any European country, behind those of Norway and Iceland, and the lowest population density in the EU. Finland's population has always been concentrated in the southern parts of the country, a phenomenon that became even more pronounced during 20th-century urbanisation. Two of the three largest cities in Finland are situated in the Greater Helsinki metropolitan area—Helsinki and Espoo. Tampere holds third place, while Vantaa, which also neighbours Helsinki, is fourth. Other cities with population over 100,000 are Turku, Oulu, Jyväskylä, Kuopio, and Lahti.
+
+ As of 2018[update], there were 402,619 people with a foreign background living in Finland (7.3% of the population), most of whom are from Russia, Estonia, Somalia, Iraq and former Yugoslavia.[177] The children of foreigners are not automatically given Finnish citizenship, as Finnish nationality law maintains a jus sanguinis policy, under which only children born to at least one Finnish parent are granted citizenship. If they are born in Finland and cannot get citizenship of any other country, they become citizens.[178] Additionally, certain persons of Finnish descent who reside in countries that were once part of the Soviet Union retain the right of return, a right to establish permanent residency in the country, which would eventually entitle them to qualify for citizenship.[179] In 2018, 387,215 people in Finland had been born in another country, representing 7% of the population. The 10 largest foreign born groups are (in order) from Russia, Estonia, Sweden, Iraq, Somalia, China, Thailand, Serbia, Vietnam and Turkey.[180]
+
+ The immigrant population is growing. By 2035, the three largest cities in Finland are projected to have a foreign speaking[clarification needed] population percentage of over a quarter each, with Helsinki rising to 26%, Espoo to 30% and Vantaa to 34%. The Helsinki region alone will have 437,000 foreign speakers, up by 236,000.[181]
+
+ Finnish and Swedish are the official languages of Finland. Finnish predominates nationwide while Swedish is spoken in some coastal areas in the west and south and in the autonomous region of Åland, which is the only monolingual Swedish speaking region in Finland.[183] The native language of 87.3% of the population is Finnish,[184][185] which is part of the Finnic subgroup of the Uralic languages. The language is one of only four official EU languages not of Indo-European origin. Finnish is closely related to Karelian and Estonian and more remotely to the Sami languages and Hungarian. Swedish is the native language of 5.2% of the population (Swedish-speaking Finns).[186]
+
+ The Nordic languages and Karelian are also specially treated in some contexts.
+
+ Finnish Romani is spoken by some 5,000–6,000 people; it and Finnish Sign Language are also recognized in the constitution. There are two sign languages: Finnish Sign Language, spoken natively by 4,000–5,000 people,[187] and Finland-Swedish Sign Language, spoken natively by about 150 people. Tatar is spoken by a Finnish Tatar minority of about 800 people whose ancestors moved to Finland mainly during Russian rule from the 1870s to the 1920s.[188]
+
+ The Sami languages have official status in the north, in parts of Lapland where the Sami people, who number around 7,000[189] and are recognized as an indigenous people, predominate. About a quarter of them speak a Sami language as their mother tongue.[190] The Sami languages that are spoken in Finland are Northern Sami, Inari Sami, and Skolt Sami.[note 2]
+
+ The rights of minority groups (in particular Sami, Swedish speakers, and Romani people) are protected by the constitution.[191]
+
+ The largest immigrant languages are Russian (1.4%), Estonian (0.9%), Arabic (0.5%), Somali (0.4%) and English (0.4%).[186] English is studied by most pupils as a compulsory subject from the first grade (at seven years of age) in the comprehensive school (in some schools other languages can be chosen instead),[192][193] as a result of which Finns' English language skills have been significantly strengthened over several decades.[194][195] German, French, Spanish and Russian can be studied as second foreign languages from the fourth grade (at 10 years of age; some schools may offer other options).[196]
+
+ 93% of Finns can speak a second language.[197] The figures in this section should be treated with caution, as they come from the official Finnish population register. People can only register one language, so the language competencies of bilingual or multilingual users are not properly captured. A citizen of Finland who is bilingual in Finnish and Swedish will often be registered as a Finnish-only speaker in this system. Similarly, "old domestic language" is a category applied to some languages and not others for political rather than linguistic reasons, for example Russian.[198]
+
+ Religions in Finland (2019)[199]
268
+
269
+ With 3.9 million members,[200] the Evangelical Lutheran Church of Finland is one of the largest Lutheran churches in the world and is also by far Finland's largest religious body; at the end of 2019, 68.7% of Finns were members of the church.[201] In recent years the church has seen its share of the country's population decline by roughly one percent annually.[201] The decline has been due to both church membership resignations and falling baptism rates.[202][203] The second-largest group, accounting for 26.3% of the population in 2017,[201] has no religious affiliation. The irreligious share rose quickly from just below 13% in the year 2000. A small minority belongs to the Finnish Orthodox Church (1.1%). Other Protestant denominations and the Roman Catholic Church are significantly smaller, as are the Jewish and other non-Christian communities (totalling 1.6%). The Pew Research Center estimated the Muslim population at 2.7% in 2016.[204] The main Lutheran and Orthodox churches are national churches of Finland with special roles, such as in state ceremonies and schools.[205]
270
+
271
+ In 1869, Finland was the first Nordic country to disestablish its Evangelical Lutheran church by introducing the Church Act, followed by the Church of Sweden in 2000. Although the church still maintains a special relationship with the state, it is not described as a state religion in the Finnish Constitution or other laws passed by the Finnish Parliament.[206] Finland's state church was the Church of Sweden until 1809. As an autonomous Grand Duchy under Russia from 1809 to 1917, Finland retained the Lutheran state church system, and a state church separate from Sweden, later named the Evangelical Lutheran Church of Finland, was established. It was detached from the state as a separate judicial entity when the new church law came into force in 1869. After Finland gained independence in 1917, religious freedom was declared in the constitution of 1919 and in a separate law on religious freedom in 1922. Through this arrangement, the Evangelical Lutheran Church of Finland lost its position as a state church but gained a constitutional status as a national church alongside the Finnish Orthodox Church, whose position, however, is not codified in the constitution.
272
+
273
+ In 2016, 69.3% of Finnish children were baptized,[207] and in 2012, 82.3% were confirmed at the age of 15;[208] over 90% of funerals are Christian. However, the majority of Lutherans attend church only for special occasions like Christmas ceremonies, weddings, and funerals. The Lutheran Church estimates that approximately 1.8% of its members attend church services weekly.[209] The average number of church visits per year by church members is approximately two.[210]
274
+
275
+ According to a 2010 Eurobarometer poll, 33% of Finnish citizens responded that "they believe there is a God"; 42% answered that "they believe there is some sort of spirit or life force"; and 22% that "they do not believe there is any sort of spirit, God, or life force".[211] According to ISSP survey data (2008), 8% consider themselves "highly religious", and 31% "moderately religious". In the same survey, 28% reported themselves as "agnostic" and 29% as "non-religious".[212]
276
+
277
+ Life expectancy has increased from 71 years for men and 79 years for women in 1990 to 79 years for men and 84 years for women in 2017.[213] The under-five mortality rate has decreased from 51 per 1,000 live births in 1950 to 2.3 per 1,000 live births in 2017, ranking Finland's rate among the lowest in the world.[214] The fertility rate in 2014 stood at 1.71 children born per woman and has been below the replacement rate of 2.1 since 1969.[215] With a low birth rate, women also become mothers at a later age; the mean age at first live birth was 28.6 in 2014.[215] A 2011 study published in The Lancet medical journal found that Finland had the lowest stillbirth rate out of 193 countries, including the UK, France and New Zealand.[216]
278
+
279
+ There has been a slight increase or no change in welfare and health inequalities between population groups in the 21st century. Lifestyle-related diseases are on the rise. More than half a million Finns suffer from diabetes, with type 1 diabetes being globally the most common in Finland. Many children are diagnosed with type 2 diabetes. The number of musculoskeletal diseases and cancers is increasing, although the cancer prognosis has improved. Allergies and dementia are also growing health problems in Finland. One of the most common causes of work disability is mental disorders, in particular depression.[217] Treatment for depression has improved, and as a result the historically high suicide rates have declined to 13 per 100,000 in 2017, closer to the North European average.[218] Suicide rates are nevertheless still among the highest of the developed countries in the OECD.[219]
280
+
281
+ There are 307 residents for each doctor.[220] About 19% of health care is funded directly by households and 77% by taxation.
282
+
283
+ In April 2012, Finland was ranked 2nd in Gross National Happiness in a report published by The Earth Institute.[221] Since 2012, Finland has ranked in the top 5 of the world's happiest countries every year in the annual World Happiness Report by the United Nations,[222][223][224] and was ranked the happiest country in 2018.[225]
284
+
285
+ Most pre-tertiary education is arranged at the municipal level. Even though many or most schools were started as private schools, today only around 3 percent of students are enrolled in private schools (mostly specialist language and international schools), much less than in Sweden and most other developed countries.[226] Pre-school education is rare compared with other EU countries, and formal education usually starts at the age of 7. Primary school normally takes six years and lower secondary school three years. Most schools are managed by municipal officials.
286
+
287
+ The flexible curriculum is set by the Ministry of Education and the Education Board. Education is compulsory between the ages of 7 and 16. After lower secondary school, graduates may either enter the workforce directly, or apply to trade schools or gymnasiums (upper secondary schools). Trade schools offer a vocational education: approximately 40% of an age group choose this path after the lower secondary school.[227] Academically oriented gymnasiums have higher entrance requirements and specifically prepare for Abitur and tertiary education. Graduation from either formally qualifies for tertiary education.
288
+
289
+ In tertiary education, two mostly separate and non-interoperating sectors are found: the profession-oriented polytechnics and the research-oriented universities. Education is free, and living expenses are to a large extent financed by the government through student benefits. There are 15 universities and 24 Universities of Applied Sciences (UAS) in the country.[228][229] The University of Helsinki is ranked 75th in the Top University Ranking of 2010.[230] The World Economic Forum ranks Finland's tertiary education No. 1 in the world.[231] Around 33% of residents have a tertiary degree, similar to the other Nordic countries and more than in most other OECD countries except Canada (44%), the United States (38%) and Japan (37%).[232] The proportion of foreign students is 3% of all tertiary enrollments, one of the lowest in the OECD, while in advanced programs it is 7.3%, still below the OECD average of 16.5%.[233]
290
+
291
+ More than 30% of tertiary graduates are in science-related fields. Forest improvement, materials research, environmental sciences, neural networks, low-temperature physics, brain research, biotechnology, genetic technology, and communications showcase fields of study where Finnish researchers have had a significant impact.[234]
292
+
293
+ Finland has a long tradition of adult education, and by the 1980s nearly one million Finns were receiving some kind of instruction each year. Forty percent of them did so for professional reasons. Adult education appeared in a number of forms, such as secondary evening schools, civic and workers' institutes, study centres, vocational course centres, and folk high schools. Study centres allowed groups to follow study plans of their own making, with educational and financial assistance provided by the state. Folk high schools are a distinctly Nordic institution. Originating in Denmark in the 19th century, folk high schools became common throughout the region. Adults of all ages could stay at them for several weeks and take courses in subjects that ranged from handicrafts to economics.[96]
294
+
295
+ Finland is highly productive in scientific research. In 2005, Finland had the fourth most scientific publications per capita of the OECD countries.[235] In 2007, 1,801 patents were filed in Finland.[236]
296
+
297
+ In addition, 38 percent of Finland's population has a university or college degree, which is among the highest percentages in the world.[237][238]
298
+
299
+ In 2010, a new law concerning the universities was enacted, which defined 16 of them and removed them from the public sector to become autonomous legal and financial entities, while still enjoying special status in the legislation.[239] As a result, many former state institutions were driven to collect funding from private-sector contributions and partnerships. The change caused deep-rooted discussions in academic circles.[240]
300
+
301
+ The English language is important in Finnish education. A number of degree programmes are taught in English, which attracts thousands of degree and exchange students every year.
302
+
303
+ In December 2017 the OECD reported that Finnish fathers spend an average of eight minutes a day more with their school-aged children than mothers do.[241][242]
304
+
305
+ Written Finnish could be said to have existed since Mikael Agricola translated the New Testament into Finnish during the Protestant Reformation, but few notable works of literature were written until the 19th century and the beginning of a Finnish national Romantic Movement. This prompted Elias Lönnrot to collect Finnish and Karelian folk poetry and arrange and publish them as the Kalevala, the Finnish national epic. The era saw a rise of poets and novelists who wrote in Finnish, notably Aleksis Kivi, Eino Leino and Johannes Linnankoski. Many writers of the national awakening wrote in Swedish, such as the national poet Johan Ludvig Runeberg and Zachris Topelius.
306
+
307
+ After Finland became independent, there was a rise of modernist writers, most famously the Finnish-speaking Mika Waltari and Swedish-speaking Edith Södergran. Frans Eemil Sillanpää was awarded the Nobel Prize in Literature in 1939. World War II prompted a return to more national interests in comparison to a more international line of thought, characterized by Väinö Linna. Besides Kalevala and Waltari, the Swedish-speaking Tove Jansson is the most translated Finnish writer. Popular modern writers include Arto Paasilinna, Ilkka Remes, Kari Hotakainen, Sofi Oksanen, and Jari Tervo, while the prestigious Finlandia Prize is awarded annually to the best novel.
308
+
309
+ The visual arts in Finland started to form their individual characteristics in the 19th century, when Romantic nationalism was rising in autonomous Finland. The best known of Finnish painters, Akseli Gallen-Kallela, started painting in a naturalist style, but moved to national romanticism. Finland's best-known sculptor of the 20th century was Wäinö Aaltonen, remembered for his monumental busts and sculptures. Finns have made major contributions to handicrafts and industrial design: among the internationally renowned figures are Timo Sarpaneva, Tapio Wirkkala and Ilmari Tapiovaara. Finnish architecture is famous around the world, and has contributed significantly to several styles internationally, such as Jugendstil (or Art Nouveau), Nordic Classicism and Functionalism. Among the top 20th-century Finnish architects to gain international recognition are Eliel Saarinen and his son Eero Saarinen. Architect Alvar Aalto is regarded as among the most important 20th-century designers in the world;[243] he helped bring functionalist architecture to Finland, but soon became a pioneer in its development towards an organic style.[244] Aalto is also famous for his work in furniture, lamps, textiles and glassware, which were usually incorporated into his buildings.
310
+
311
+ Much of Finland's classical music is influenced by traditional Karelian melodies and lyrics, as embodied in the Kalevala. Karelian culture is perceived as the purest expression of the Finnic myths and beliefs, less touched by Germanic influence than the Nordic folk dance music that largely replaced the kalevaic tradition. Finnish folk music has undergone a roots revival in recent decades, and has become a part of popular music.
312
+
313
+ The people of northern Finland, Sweden, and Norway, the Sami, are known primarily for highly spiritual songs called joik. The same word sometimes refers to lavlu or vuelie songs, though this is technically incorrect.
314
+
315
+ The first Finnish opera was written by the German-born composer Fredrik Pacius in 1852. Pacius also wrote the music to the poem Maamme/Vårt land (Our Country), Finland's national anthem. In the 1890s Finnish nationalism based on the Kalevala spread, and Jean Sibelius became famous for his vocal symphony Kullervo. He soon received a grant to study runo singers in Karelia and continued his rise as the first prominent Finnish musician. In 1899 he composed Finlandia, which played an important role in Finland gaining its independence. He remains one of Finland's most popular national figures and is a symbol of the nation.
316
+
317
+ Today, Finland has a very lively classical music scene and many of Finland's important composers are still alive, such as Magnus Lindberg, Kaija Saariaho, Kalevi Aho, and Aulis Sallinen. The composers are accompanied by a large number of great conductors such as Esa-Pekka Salonen, Osmo Vänskä, Jukka-Pekka Saraste, and Leif Segerstam. Some of the internationally acclaimed Finnish classical musicians are Karita Mattila, Soile Isokoski, Pekka Kuusisto, Olli Mustonen, and Linda Lampenius.
318
+
319
+ Iskelmä (coined directly from the German word Schlager, meaning "hit") is a traditional Finnish word for a light popular song.[245] Finnish popular music also includes various kinds of dance music; tango, a style of Argentine music, is also popular.[246] The light music in Swedish-speaking areas has more influences from Sweden. Modern Finnish popular music includes a number of prominent rock bands, jazz musicians, hip hop performers, dance music acts, etc.[citation needed]
320
+
321
+ During the early 1960s, the first significant wave of Finnish rock groups emerged, playing instrumental rock inspired by groups such as The Shadows. Around 1964, Beatlemania arrived in Finland, resulting in further development of the local rock scene. During the late 1960s and '70s, Finnish rock musicians increasingly wrote their own music instead of translating international hits into Finnish. During the decade, some progressive rock groups such as Tasavallan Presidentti and Wigwam gained respect abroad but failed to make a commercial breakthrough outside Finland. This was also the fate of the rock and roll group Hurriganes. The Finnish punk scene produced some internationally acknowledged names including Terveet Kädet in the 1980s. Hanoi Rocks was a pioneering 1980s glam rock act that inspired the American hard rock group Guns N' Roses, among others.[247]
322
+
323
+ Many Finnish metal bands have gained international recognition. HIM and Nightwish are some of Finland's most internationally known bands. HIM's 2005 album Dark Light went gold in the United States. Apocalyptica are an internationally famous Finnish group who are most renowned for mixing strings-led classical music with classic heavy metal. Other well-known metal bands are Amorphis, Children of Bodom, Ensiferum, Finntroll, Impaled Nazarene, Insomnium, Korpiklaani, Moonsorrow, Reverend Bizarre, Sentenced, Sonata Arctica, Stratovarius, Swallow the Sun, Turisas, Waltari, and Wintersun.[248]
324
+
325
+ After Finnish hard rock/heavy metal band Lordi won the 2006 Eurovision Song Contest, Finland hosted the competition in 2007.[249] Alternative rock band Poets of the Fall, formed in 2003, have released eight studio albums and have toured widely.[250]
326
+
327
+ In the film industry, notable directors include Aki Kaurismäki, Mauritz Stiller, Spede Pasanen, and Hollywood film director and producer Renny Harlin. Around twelve feature films are made each year.[251]
328
+
329
+ Finland's most internationally successful TV shows are the backpacking travel documentary series Madventures and the reality TV show The Dudesons, about four childhood friends who perform stunts and play pranks on each other (in a similar vein to the American TV show Jackass).
330
+
331
+ Thanks to its emphasis on transparency and equal rights, Finland's press has been rated the freest in the world.[252]
332
+
333
+ Today, there are around 200 newspapers, 320 popular magazines, 2,100 professional magazines, 67 commercial radio stations, three digital radio channels and one nationwide and five national public service radio channels.
334
+
335
+ Each year, around 12,000 book titles are published and 12 million records are sold.[251]
336
+
337
+ Sanoma publishes the newspaper Helsingin Sanomat (its circulation of 412,000[253] making it the largest), the tabloid Ilta-Sanomat, the commerce-oriented Taloussanomat and the television channel Nelonen. The other major publisher Alma Media publishes over thirty magazines, including the newspaper Aamulehti, tabloid Iltalehti and commerce-oriented Kauppalehti. Worldwide, Finns, along with other Nordic peoples and the Japanese, spend the most time reading newspapers.[254]
338
+
339
+ Yle, the Finnish Broadcasting Company, operates five television channels and thirteen radio channels in both national languages. Yle is funded through a mandatory television license and fees for private broadcasters. All TV channels are broadcast digitally, both terrestrially and on cable. The commercial television channel MTV3 and commercial radio channel Radio Nova are owned by Nordic Broadcasting (Bonnier and Proventus Industrier).
340
+
341
+ With regard to telecommunications infrastructure, Finland is the highest-ranked country in the World Economic Forum's Network Readiness Index (NRI), an indicator of the development level of a country's information and communication technologies. Finland ranked 1st overall in the 2014 NRI ranking, unchanged from the year before.[255] This is reflected in the technology's penetration among the country's population. Around 79% of the population use the Internet.[256] Finland had around 1.52 million broadband Internet connections by the end of June 2007, or around 287 per 1,000 inhabitants.[257] All Finnish schools and public libraries have Internet connections and computers, and most residents have a mobile phone. Value-added services are rare.[258] In October 2009, Finland's Ministry of Transport and Communications committed to ensuring that every person in Finland would be able to access the Internet at a minimum speed of one megabit per second beginning July 2010.[259]
342
+
343
+ Finnish cuisine is notable for generally combining traditional country fare and haute cuisine with contemporary style cooking. Fish and meat play a prominent role in traditional Finnish dishes from the western part of the country, while the dishes from the eastern part have traditionally included various vegetables and mushrooms. Refugees from Karelia contributed to foods in eastern Finland.
344
+
345
+ Finnish foods often use wholemeal products (rye, barley, oats) and berries (such as bilberries, lingonberries, cloudberries, and sea buckthorn). Milk and its derivatives like buttermilk are commonly used as food, drink, or in various recipes. Various turnips were common in traditional cooking, but were replaced with the potato after its introduction in the 18th century.
346
+
347
+ According to statistics, red meat consumption has risen, but Finns still eat less beef than many other nations, and more fish and poultry. This is mainly because of the high cost of meat in Finland.
348
+
349
+ Finland has the world's highest per capita consumption of coffee.[260] Milk consumption is also high, at an average of about 112 litres (25 imp gal; 30 US gal) per person per year,[261] even though 17% of Finns are lactose intolerant.[262]
350
+
351
+ All official holidays in Finland are established by Acts of Parliament. Christian holidays include Christmas, New Year's Day, Epiphany, Easter, Ascension Day, Pentecost, Midsummer Day (St. John's Day), and All Saints' Day, while secular holidays include May Day, Independence Day, New Year's Day, and Midsummer. Christmas is the most extensively celebrated, and at least 24 to 26 December is taken as a holiday.
352
+
353
+ Various sporting events are popular in Finland. Pesäpallo, resembling baseball, is the national sport of Finland, although the most popular sport in terms of spectators is ice hockey; 69% of Finnish people watched the 2016 Ice Hockey World Championships final between Finland and Canada on TV.[263] Other popular sports include athletics, cross-country skiing, ski jumping, football, volleyball and basketball.[264] While ice hockey is the most popular sport in terms of attendance at games, association football is the most played team sport in terms of the number of players in the country and is also the most appreciated sport in Finland.[265][266]
354
+
355
+ In terms of medals and gold medals won per capita, Finland is the best performing country in Olympic history.[267] Finland first participated as a nation in its own right at the Olympic Games in 1908, while still an autonomous Grand Duchy within the Russian Empire. At the 1912 Summer Olympics, great pride was taken in the three gold medals won by the original "Flying Finn" Hannes Kolehmainen.
356
+
357
+ Finland was one of the most successful countries at the Olympic Games before World War II. At the 1924 Summer Olympics, Finland, a nation then of only 3.2 million people, came second in the medal count. In the 1920s and '30s, Finnish long-distance runners dominated the Olympics, with Paavo Nurmi winning a total of nine Olympic gold medals between 1920 and 1928 and setting 22 official world records between 1921 and 1931. Nurmi is often considered the greatest Finnish sportsman and one of the greatest athletes of all time.
358
+
359
+ For over 100 years, Finnish male and female athletes have consistently excelled at the javelin throw. The event has brought Finland nine Olympic gold medals, five world championships, five European championships, and 24 world records.
360
+
361
+ In addition to Kolehmainen and Nurmi, some of Finland's most internationally well-known and successful sportspeople are long-distance runners Ville Ritola and Lasse Virén; ski-jumpers Matti Nykänen and Janne Ahonen; cross-country skiers Veikko Hakulinen, Eero Mäntyranta, Marja-Liisa Kirvesniemi and Mika Myllylä; rower Pertti Karppinen; gymnast Heikki Savolainen; professional skateboarder Arto Saari; ice hockey players Kimmo Timonen, Jari Kurri, Teemu Selänne, and Saku Koivu; football players Jari Litmanen and Sami Hyypiä; basketball player Hanno Möttölä; alpine skiers Kalle Palander and Tanja Poutiainen; Formula One world champions Keke Rosberg, Mika Häkkinen and Kimi Räikkönen; four-time World Rally champions Juha Kankkunen and Tommi Mäkinen; 13-time World Enduro Champion Juha Salminen, seven-time champion Kari Tiainen, and five-time champions Mika Ahola and Samuli Aro; and biathlete Kaisa Mäkäräinen. Finland is also one of the most successful nations in bandy, being the only nation beside Russia and Sweden to win a Bandy World Championship.
362
+
363
+ The 1952 Summer Olympics were held in Helsinki. Other notable sporting events held in Finland include the 1983 and 2005 World Championships in Athletics.
364
+
365
+ Finland also has a notable history in figure skating. Finnish skaters have won 8 world championships and 13 junior world cups in synchronized skating, and Finland is considered one of the best countries at the sport.
366
+
367
+ Some of the most popular recreational sports and activities include floorball, Nordic walking, running, cycling, and skiing (alpine skiing, cross-country skiing, and ski jumping).
368
+ Floorball, in terms of registered players, occupies third place after football and ice hockey. According to the Finnish Floorball Federation, floorball is the most popular school, youth, club and workplace sport.[268] As of 2016, the total number of licensed players reached 57,400.[269]
369
+
370
+ Especially since the 2014 FIBA Basketball World Cup, Finland's national basketball team has received widespread public attention. More than 8,000 Finns travelled to Spain to support their team. Overall, they chartered more than 40 airplanes.[270]
371
+
372
+ Government
373
+
374
+ Maps
375
+
376
+ Travel
377
+
378
+ Coordinates: 64°N 26°E
en/5548.html.txt ADDED
@@ -0,0 +1,81 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ In geology, a supercontinent is the assembly of most or all of Earth's continental blocks or cratons to form a single large landmass.[2][3][1] However, some earth scientists use a different definition: "a grouping of formerly dispersed continents", which leaves room for interpretation and is easier to apply to Precambrian times,[4] although a minimum of about 75% of the continental crust then in existence has been proposed as a limit to separate supercontinents from other groupings.[5]
2
+
3
+ Supercontinents have assembled and dispersed multiple times in the geologic past (see table). According to the modern definitions, a supercontinent does not exist today.[2] The supercontinent Pangaea is the collective name describing all of the continental landmasses when they were most recently near to one another. The positions of continents have been accurately determined back to the early Jurassic, shortly before the breakup of Pangaea (see animated image).[6] The earlier continent Gondwana is not considered a supercontinent under the first definition, since the landmasses of Baltica, Laurentia and Siberia were separate at the time.[4]
4
+
5
+ The following table names reconstructed ancient supercontinents, using Bradley's 2011 looser definition,[4] with an approximate timescale of millions of years ago (Ma).
6
+
7
+ There are two contrasting models for supercontinent evolution through geological time. The first model theorizes that at least two separate supercontinents existed, comprising Vaalbara (from ~3636 to 2803 Ma) and Kenorland (from ~2720 to 2450 Ma). The Neoarchean supercontinent consisted of Superia and Sclavia. These parts of Neoarchean age broke off at ~2480 and 2312 Ma and portions of them later collided to form Nuna (Northern Europe–North America) (~1820 Ma). Nuna continued to develop during the Mesoproterozoic, primarily by lateral accretion of juvenile arcs, and at ~1000 Ma Nuna collided with other land masses, forming Rodinia.[4] Between ~825 and 750 Ma Rodinia broke apart.[7] However, before completely breaking up, some fragments of Rodinia had already come together to form Gondwana (also known as Gondwanaland) by ~608 Ma. Pangaea formed by ~336 Ma through the collision of Gondwana, Laurasia (Laurentia and Baltica), and Siberia.
8
+
9
+ The second model (Kenorland-Arctica) is based on both palaeomagnetic and geological evidence and proposes that the continental crust comprised a single supercontinent from ~2.72 Ga until break-up during the Ediacaran Period after ~0.573 Ga. The reconstruction[8] is derived from the observation that palaeomagnetic poles converge to quasi-static positions for long intervals between ~2.72–2.115, 1.35–1.13, and 0.75–0.573 Ga with only small peripheral modifications to the reconstruction.[9] During the intervening periods, the poles conform to a unified apparent polar wander path. Because this model shows that exceptional demands on the paleomagnetic data are satisfied by prolonged quasi-integrity, it must be regarded as superseding the first model proposing multiple diverse continents, although the first phase (Protopangea) essentially incorporates Vaalbara and Kenorland of the first model. The explanation for the prolonged duration of the Protopangea-Paleopangea supercontinent appears to be that lid tectonics (comparable to the tectonics operating on Mars and Venus) prevailed during Precambrian times. Plate tectonics as seen on the contemporary Earth became dominant only during the latter part of geological times.[9]
10
+
11
+ The Phanerozoic supercontinent Pangaea began to break up 215 Ma and is still doing so today. Because Pangaea is the most recent of Earth's supercontinents, it is the most well known and understood. Contributing to Pangaea's popularity in the classroom is the fact that its reconstruction is almost as simple as fitting the present continents bordering the Atlantic-type oceans like puzzle pieces.[4]
12
+
13
+ A supercontinent cycle is the break-up of one supercontinent and the development of another, which takes place on a global scale.[4] Supercontinent cycles are not the same as the Wilson cycle, which is the opening and closing of an individual oceanic basin. The Wilson cycle rarely synchronizes with the timing of a supercontinent cycle.[2] However, supercontinent cycles and Wilson cycles were both involved in the creation of Pangaea and Rodinia.[6]
14
+
15
+ Secular trends such as carbonatites, granulites, eclogites, and greenstone belt deformation events are all possible indicators of Precambrian supercontinent cyclicity, although the Protopangea-Paleopangea solution implies that Phanerozoic style of supercontinent cycles did not operate during these times. Also there are instances where these secular trends have a weak, uneven or absent imprint on the supercontinent cycle; secular methods for supercontinent reconstruction will produce results that have only one explanation, and each explanation for a trend must fit in with the rest.[4]
16
+
17
+ The causes of supercontinent assembly and dispersal are thought to be driven by convection processes in the Earth's mantle.[2] Approximately 660 km into the mantle, a discontinuity occurs, affecting the surface crust through processes like plumes and "superplumes". When a slab of subducted crust is denser than the surrounding mantle, it sinks to the discontinuity. Once the slabs build up, they will sink through to the lower mantle in what is known as a "slab avalanche". This displacement at the discontinuity will cause the lower mantle to compensate and rise elsewhere. The rising mantle can form a plume or superplume.
18
+
19
+ Besides having compositional effects on the upper mantle by replenishing the large-ion lithophile elements, volcanism affects plate movement.[2] The plates will be moved towards a geoidal low perhaps where the slab avalanche occurred and pushed away from the geoidal high that can be caused by the plumes or superplumes. This causes the continents to push together to form supercontinents and was evidently the process that operated to cause the early continental crust to aggregate into Protopangea.[10] Dispersal of supercontinents is caused by the accumulation of heat underneath the crust due to the rising of very large convection cells or plumes, and a massive heat release resulted in the final break-up of Paleopangea.[11] Accretion occurs over geoidal lows that can be caused by avalanche slabs or the downgoing limbs of convection cells. Evidence of the accretion and dispersion of supercontinents is seen in the geological rock record.
20
+
21
+ The influence of known volcanic eruptions does not compare to that of flood basalts. The timing of flood basalts has corresponded with large-scale continental break-up. However, due to a lack of data on the time required to produce flood basalts, the climatic impact is difficult to quantify. The timing of a single lava flow is also undetermined. These are important factors on how flood basalts influenced paleoclimate.[6]
22
+
23
+ Global paleogeography and plate interactions as far back as Pangaea are relatively well understood today. However, the evidence becomes more sparse further back in geologic history. Marine magnetic anomalies, passive margin match-ups, geologic interpretation of orogenic belts, paleomagnetism, paleobiogeography of fossils, and distribution of climatically sensitive strata are all methods to obtain evidence for continent locality and indicators of environment throughout time.[4]
24
+
25
+ The Phanerozoic (541 Ma to present) and the Precambrian (4.6 Ga to 541 Ma) had primarily passive margins and detrital zircons (and orogenic granites), whereas the tenure of Pangaea contained few. Matching edges of continents are where passive margins form. The edges of these continents may rift. At this point, seafloor spreading becomes the driving force. Passive margins are therefore born during the break-up of supercontinents and die during supercontinent assembly. Pangaea's supercontinent cycle is a good example of the efficiency of using the presence, or lack, of these entities to record the development, tenure, and break-up of supercontinents. There is a sharp decrease in passive margins between 500 and 350 Ma, during the timing of Pangaea's assembly. The tenure of Pangaea is marked by a low number of passive margins during 336 to 275 Ma, and its break-up is indicated accurately by an increase in passive margins.[4]
26
+
27
+ Orogenic belts can form during the assembly of continents and supercontinents. The orogenic belts present on continental blocks are classified into three different categories and have implications of interpreting geologic bodies.[2] Intercratonic orogenic belts are characteristic of ocean basin closure. Clear indicators of intercratonic activity contain ophiolites and other oceanic materials that are present in the suture zone. Intracratonic orogenic belts occur as thrust belts and do not contain any oceanic material. However, the absence of ophiolites is not strong evidence for intracratonic belts, because the oceanic material can be squeezed out and eroded away in an intercratonic environment. The third kind of orogenic belt is a confined orogenic belt which is the closure of small basins. The assembly of a supercontinent would have to show intercratonic orogenic belts.[2] However, interpretation of orogenic belts can be difficult.
28
+
29
+ The collision of Gondwana and Laurasia occurred in the late Palaeozoic. This collision created the Variscan mountain range along the equator.[6] This 6000-km-long mountain range is usually referred to in two parts: the Hercynian mountain range of the late Carboniferous makes up the eastern part, while the western part is called the Appalachians, uplifted in the Early Permian. (The existence of a flat elevated plateau like the Tibetan Plateau is under much debate.) The locality of the Variscan range made it influential to both the northern and southern hemispheres. The elevation of the Appalachians would greatly influence global atmospheric circulation.[6]
30
+
31
+ Continents affect the climate of the planet drastically, with supercontinents having a larger, more prevalent influence. Continents modify global wind patterns, control ocean current paths and have a higher albedo than the oceans.[2] Winds are redirected by mountains, and albedo differences cause shifts in onshore winds. Higher elevation in continental interiors produce cooler, drier climate, the phenomenon of continentality. This is seen today in Eurasia, and rock record shows evidence of continentality in the middle of Pangaea.[2]
32
+
33
+ The term glacio-epoch refers to a long episode of glaciation on Earth over millions of years.[12] Glaciers have major implications on the climate particularly through sea level change. Changes in the position and elevation of the continents, the paleolatitude and ocean circulation affect the glacio-epochs. There is an association between the rifting and breakup of continents and supercontinents and glacio-epochs.[12] According to the first model for Precambrian supercontinents described above the breakup of Kenorland and Rodinia were associated with the Paleoproterozoic and Neoproterozoic glacio-epochs, respectively. In contrast, the second solution described above shows that these glaciations correlated with periods of low continental velocity and it is concluded that a fall in tectonic and corresponding volcanic activity was responsible for these intervals of global frigidity.[9] During the accumulation of supercontinents with times of regional uplift, glacio-epochs seem to be rare with little supporting evidence. However, the lack of evidence does not allow for the conclusion that glacio-epochs are not associated with collisional assembly of supercontinents.[12] This could just represent a preservation bias.
34
+
35
+ During the late Ordovician (~458.4 Ma), the particular configuration of Gondwana may have allowed for glaciation and high CO2 levels to occur at the same time.[13] However, some geologists disagree and think that there was a temperature increase at this time. This increase may have been strongly influenced by the movement of Gondwana across the South Pole, which may have prevented lengthy snow accumulation. Although late Ordovician temperatures at the South Pole may have reached freezing, there were no ice sheets during the Early Silurian (~443.8 Ma) through the late Mississippian (~330.9 Ma).[6] Agreement can be met with the theory that continental snow can occur when the edge of a continent is near the pole. Therefore, Gondwana, although located tangent to the South Pole, may have experienced glaciation along its coast.[13]
36
+
37
+ Though precipitation rates during monsoonal circulations are difficult to predict, there is evidence for a large orographic barrier within the interior of Pangaea during the late Paleozoic (~251.902 Ma). The possibility of the SW-NE trending Appalachian-Hercynian Mountains makes the region's monsoonal circulations potentially relatable to present day monsoonal circulations surrounding the Tibetan Plateau, which is known to positively influence the magnitude of monsoonal periods within Eurasia. It is therefore somewhat expected that lower topography in other regions of the supercontinent during the Jurassic would negatively influence precipitation variations. The breakup of supercontinents may have affected local precipitation.[14] When any supercontinent breaks up, there will be an increase in precipitation runoff over the surface of the continental land masses, increasing silicate weathering and the consumption of CO2.[7]
38
+
39
+ Even though solar radiation was reduced by 30 percent during the Archaean and by six percent at the Cambrian-Precambrian boundary, the Earth has only experienced three ice ages throughout the Precambrian.[6] Erroneous conclusions are more likely to be made when models are limited to one climatic configuration (which is usually present day).[14]
40
+
41
+ Cold winters in continental interiors arise because the rate of radiative cooling exceeds the rate of heat transport from continental rims. To raise winter temperatures within continental interiors, the rate of heat transport must increase to become greater than the rate of radiative cooling. Through climate models, alterations in atmospheric CO2 content and ocean heat transport are not comparatively effective.[14]
42
+
43
+ CO2 models suggest that values were low in the late Cenozoic and Carboniferous-Permian glaciations, although early Paleozoic values were much larger (more than ten times that of today). This may be due to high seafloor spreading rates after the breakup of Precambrian supercontinents and the lack of land plants as a carbon sink.[13]
44
+
45
+ During the late Permian, it is expected that seasonal Pangaean temperatures varied drastically. Subtropical summer temperatures were warmer than those of today by as much as 6–10 degrees, and mid-latitude winter temperatures were less than −30 degrees Celsius. These seasonal changes within the supercontinent were influenced by the large size of Pangaea. And, just like today, coastal regions experienced much less variation.[6]
46
+
47
+ During the Jurassic, summer temperatures did not rise above zero degrees Celsius along the northern rim of Laurasia, which was the northernmost part of Pangaea (the southernmost portion of Pangaea was Gondwana). Ice-rafted dropstones sourced from Russia are indicators of this northern boundary. The Jurassic is thought to have been approximately 10 degrees Celsius warmer along 90 degrees East paleolongitude compared to the present temperature of today's central Eurasia.[14]
48
+
49
+ Many studies of the Milankovitch fluctuations during supercontinent time periods have focused on the Mid-Cretaceous. Present amplitudes of Milankovitch cycles over present day Eurasia may be mirrored in both the southern and northern hemispheres of the supercontinent Pangaea. Climate modeling shows that summer fluctuations varied 14–16 degrees Celsius on Pangaea, which is similar or slightly higher than summer temperatures of Eurasia during the Pleistocene. The largest-amplitude Milankovitch cycles are expected to have been at mid- to high-latitudes during the Triassic and Jurassic.[14]
50
+
51
+ Granites and detrital zircons have notably similar and episodic appearances in the rock record. Their fluctuations correlate with Precambrian supercontinent cycles. The U–Pb zircon dates from orogenic granites are among the most reliable aging determinants. Some issues exist with relying on granite sourced zircons, such as a lack of evenly globally sourced data and the loss of granite zircons by sedimentary coverage or plutonic consumption. Where granite zircons are less adequate, detrital zircons from sandstones appear and make up for the gaps. These detrital zircons are taken from the sands of major modern rivers and their drainage basins.[4] Oceanic magnetic anomalies and paleomagnetic data are the primary resources used for reconstructing continent and supercontinent locations back to roughly 150 Ma.[6]
52
+
53
+ Plate tectonics and the chemical composition of the atmosphere (specifically greenhouse gases) are the two most prevailing factors present within the geologic time scale. Continental drift influences both cold and warm climatic episodes. Atmospheric circulation and climate are strongly influenced by the location and formation of continents and megacontinents. Therefore, continental drift influences mean global temperature.[6]
54
+
55
+ Oxygen levels of the Archaean Eon were negligible and today they are roughly 21 percent. It is thought that the Earth's oxygen content has risen in stages: six or seven steps that are timed very closely to the development of Earth's supercontinents.[15]
56
+
57
+ The process of Earth's increase in atmospheric oxygen content is theorized to have started with the continent-continent collision of huge land masses forming supercontinents, and therefore possibly supercontinent mountain ranges (supermountains). These supermountains would have eroded, and the massive amounts of nutrients, including iron and phosphorus, would have washed into oceans, just as we see happening today. The oceans would then be rich in nutrients essential to photosynthetic organisms, which would then be able to release large amounts of oxygen. There is an apparent direct relationship between orogeny and the atmospheric oxygen content. There is also evidence for increased sedimentation concurrent with the timing of these mass oxygenation events, meaning that the organic carbon and pyrite at these times were more likely to be buried beneath sediment and therefore unable to react with the free oxygen. This sustained the atmospheric oxygen increases.[15]
58
+
59
+ During this time, at 2.65 Ga, there was an increase in molybdenum isotope fractionation. It was temporary, but supports the increase in atmospheric oxygen because molybdenum isotopes require free oxygen to fractionate. Between 2.45 and 2.32 Ga, the second period of oxygenation occurred; it has been called the 'Great Oxygenation Event'. There are many pieces of evidence that support the existence of this event, including the appearance of red beds at 2.3 Ga (meaning that Fe3+ was being produced and became an important component in soils). The third oxygenation stage, approximately 1.8 Ga, is indicated by the disappearance of iron formations. Neodymium isotopic studies suggest that iron formations are usually from continental sources, meaning that dissolved Fe and Fe2+ had to be transported during continental erosion. A rise in atmospheric oxygen prevents Fe transport, so the lack of iron formations may have been due to an increase in oxygen. The fourth oxygenation event, roughly 0.6 Ga, is based on modeled rates of sulfur isotopes from marine carbonate-associated sulfates. An increase (near doubled concentration) of sulfur isotopes, which is suggested by these models, would require an increase in the oxygen content of the deep oceans. Between 650 and 550 Ma there were three increases in ocean oxygen levels; this period is the fifth oxygenation stage. One of the reasons indicating this period to be an oxygenation event is the increase in redox-sensitive molybdenum in black shales. The sixth event occurred between 360 and 260 Ma and was identified by models suggesting shifts in the balance of 34S in sulfates and 13C in carbonates, which were strongly influenced by an increase in atmospheric oxygen.[15][16]
60
+
61
+ Africa
62
+
63
+ Antarctica
64
+
65
+ Asia
66
+
67
+ Australia
68
+
69
+ Europe
70
+
71
+ North America
72
+
73
+ South America
74
+
75
+ Afro-Eurasia
76
+
77
+ America
78
+
79
+ Eurasia
80
+
81
+ Oceania
en/5549.html.txt ADDED
@@ -0,0 +1,52 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ The UEFA Super Cup is an annual super cup football match organised by UEFA and contested by the reigning champions of the two main European club competitions, the UEFA Champions League and the UEFA Europa League. It is not recognised as one of UEFA's 'major' competitions.[1][2]
4
+
5
+ From 1972 to 1999, the UEFA Super Cup was contested between the winners of the European Cup/UEFA Champions League and the winners of the UEFA Cup Winners' Cup. After the discontinuation of the UEFA Cup Winners' Cup, it has been contested by the winners of the UEFA Champions League and the winners of the UEFA Cup, which was renamed the UEFA Europa League in 2009.
6
+
7
+ The current holders are Liverpool, who won 5–4 after penalties against Chelsea in the 2019 edition. The most successful teams in the competition are Barcelona and Milan, who have won the trophy five times each.
8
+
9
+ The European Super Cup was created in 1971 by Anton Witkamp, a reporter and later sports editor of Dutch newspaper De Telegraaf. The idea came to him in a time when Dutch total football was Europe's finest and Dutch football clubs were enjoying their golden era (especially Ajax). Witkamp was looking for something new to definitely decide which was the best team in Europe and also to further test Ajax's legendary team, led by their star player Johan Cruyff. It was then proposed that the winner of the European Cup would face the winner of the European Cup Winners' Cup.
10
+ All was set for a new competition to be born. However, when Witkamp tried to get an official endorsement for his competition, the UEFA president turned it down.
11
+
12
+ The 1972 final between Ajax and Scotland's Rangers is considered unofficial by UEFA,[3] as Rangers were banned from European competition due to the behaviour of their fans during the 1972 UEFA Cup Winners' Cup Final. As a result, UEFA refused to endorse the competition until the following season.[4] It was played in two legs and was financially supported by De Telegraaf. Ajax defeated Rangers 6–3 on aggregate and won the first (albeit unofficial) European Super Cup.
13
+
14
+ The 1973 final, in which Ajax defeated Milan 6–1 on aggregate, was the first Super Cup officially recognised and supported by UEFA.
15
+
16
+ Although the two-legged format was kept until 1997, the Super Cup was decided in one single match because of schedule issues or political problems in 1984, 1986, and 1991. In 1974, 1981 and 1985, the Super Cup was not played at all: 1974's competition was abandoned because Bayern Munich and Magdeburg could not find a mutually convenient date, 1981's was abandoned when Liverpool could not make space to meet Dinamo Tbilisi, while 1985's was abandoned due to a ban on English clubs' participation preventing Everton from playing Juventus.[3][5]
17
+
18
+ In the 1992–93 season, the European Cup was renamed the UEFA Champions League, and the winners of this competition would face the winners of the Cup Winners' Cup in the European Super Cup. In the 1994–95 season, the European Cup Winners' Cup was renamed the UEFA Cup Winners' Cup. The following season, the Super Cup was also renamed the UEFA Super Cup.
19
+
20
+ After the 1998–99 season, the Cup Winners' Cup was discontinued by UEFA. The 1999 Super Cup was the last one contested by the winners of the Cup Winners' Cup. Lazio, winners of the 1998–99 UEFA Cup Winners' Cup, defeated Manchester United, winners of the 1998–99 UEFA Champions League, 1–0.
21
+
22
+ Since then, the UEFA Super Cup has been contested between the winners of the UEFA Champions League and the winners of the UEFA Cup. The 2000 Super Cup was the first one contested by the winners of the UEFA Cup. Galatasaray, winners of the 1999–2000 UEFA Cup, defeated Real Madrid, winners of the 1999–2000 UEFA Champions League, 2–1.
23
+
24
+ In the 2009–10 season, the UEFA Cup was renamed the UEFA Europa League and the winners of this competition would continue to face the winners of the Champions League in the UEFA Super Cup.
25
+
26
+ Chelsea is the first club to contest the Super Cup as holders of all three UEFA club honours, having entered as holders of the Cup Winners' Cup (1998), Champions League (2012), and Europa League (2013 and 2019). Manchester United shared this honour in 2017 after their Europa League win, having qualified as Cup Winners' Cup holders in 1991.
27
+
28
+ After 15 consecutive Super Cups were played at the Stade Louis II in Monaco between 1998 and 2012, the Super Cup is now played at various stadiums (similar to the finals of the Champions League and the Europa League). This practice started with the 2013 edition, which was played at the Eden Stadium in Prague, Czech Republic.[6]
29
+
30
+ Starting in 2014, the date of the UEFA Super Cup was moved from Friday in late August, to Tuesday in mid-August, following the removal of the August international friendly date in the new FIFA International Match Calendar.[7]
31
+
32
+ The competition was originally played over two legs, one at each participating club's stadium, except in exceptional circumstances; for instance, in 1991 Red Star Belgrade were not permitted to play their leg in their native Yugoslavia because of the war taking place at the time, so only Manchester United's home leg was played. Since 1998, the Super Cup has been played as a single match at a neutral venue.[8] Between 1998 and 2012, the Super Cup was played at the Stade Louis II in Monaco. Since 2013 various stadiums have been used.
33
+
34
+ The UEFA Super Cup trophy is retained by UEFA at all times. A full-size replica trophy is awarded to the winning club. Forty gold medals are presented to the winning club and forty silver medals to the runners-up.[16]
35
+
36
+ The Super Cup trophy has undergone several changes in its history. The first trophy was presented to Ajax in 1973. In 1977, the original trophy was replaced by a plaque with a gold UEFA emblem. The trophy introduced in 1987 was the smallest and lightest of all the European club trophies, weighing 5 kg (11 lb) and measuring 42.5 cm (16.7 in) in height (the UEFA Champions League trophy weighs 8 kg (18 lb) and the UEFA Europa League trophy 15 kg (33 lb)). The new model, which is a larger version of the previous trophy, was introduced in 2006; it weighs 12.2 kg (27 lb) and measures 58 cm (23 in) in height.[17]
37
+
38
+ Until 2008, a team which won three times in a row or five in total received an original copy of the trophy and a special mark of recognition. Since then, the original trophy has been kept exclusively by UEFA. Milan and Barcelona have achieved this honour, winning a total of five times each, but the Italian team is the only one to have been awarded the official trophy permanently, in 2007.
39
+
40
+ As of 2020, the fixed amount of prize money paid to the clubs is as follows:
41
+
42
+ Currently, the rules of the UEFA Super Cup are that it is a single match, contested at a neutral venue. The match consists of two periods of 45 minutes each, known as halves. If the scores are level at the end of 90 minutes, two additional 15-minute periods of extra time are played. If there is no winner at the end of the second period of extra time, a penalty shoot-out determines the winner. Each team names 23 players, 11 of whom start the match. Of the 12 remaining players, a total of 3 may be substituted throughout the match; a fourth substitute is, however, permitted if the match enters extra time. Each team may wear its first-choice kit; if these clash, however, the previous year's Europa League winning team must wear an alternative colour. If a club refuses to play or is ineligible to play, it is replaced by the runner-up of the competition through which it qualified. If the field is unfit for play due to bad weather, the match must be played the next day.[16]
43
+
44
+ The UEFA Super Cup's sponsors are the same as the sponsors for the UEFA Champions League. The tournament's current main sponsors are[18]
45
+
46
+ Adidas is a secondary sponsor and supplies the official match ball and referee uniform.
47
+
48
+ Individual clubs may wear jerseys with advertising, even if such sponsors conflict with those of the Europa League; however, only one sponsorship is permitted per jersey (plus that of the manufacturer). Exceptions are made for non-profit organisations, which can feature on the front of the shirt, incorporated with the main sponsor, or on the back, either below the squad number or between the player name and the collar.
49
+
50
+ 60% of the stadium capacity is reserved for the visiting clubs. The remaining seats are sold by UEFA through an online auction. There is no limit to the number of ticket applications each individual can make, and a 5 euro administration fee is deducted from each applicant.[27]
51
+
52
+ Media related to UEFA Super Cup at Wikimedia Commons
en/555.html.txt ADDED
@@ -0,0 +1,216 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Coordinates: 13°10′12″N 59°33′09″W
4
+
5
+ Barbados (/bɑːrˈbeɪdɒs/ or /-doʊs/) is an island country in the Lesser Antilles of the West Indies, in the Caribbean region of North America. It is 34 kilometres (21 miles) in length and up to 23 km (14 mi) in width, covering an area of 432 km2 (167 sq mi). It is situated in the western area of the North Atlantic, 100 km (62 mi) east of the Windward Islands and the Caribbean Sea;[7] it thus lies east of the Windwards, as part of the Lesser Antilles, at roughly 13°N of the equator. It is about 168 km (104 mi) east of both the countries of Saint Lucia and Saint Vincent and the Grenadines, 180 km (110 mi) south-east of Martinique and 400 km (250 mi) north-east of Trinidad and Tobago. Barbados is outside the principal Atlantic hurricane belt. Its capital and largest city is Bridgetown.
6
+
7
+ Inhabited by Kalinago people since the 13th century, and prior to that by other Amerindians, Barbados was visited by Spanish navigators in the late 15th century and claimed for the Spanish Crown. It first appeared on a Spanish map in 1511.[8]
8
+ The Portuguese Empire claimed the island between 1532 and 1536, but abandoned it in 1620, leaving behind only the wild boars they had introduced, which provided a ready supply of meat for ships that called at the island to replenish their freshwater. An English ship, the Olive Blossom, arrived in Barbados on 14 May 1625; its men took possession of it in the name of King James I. In 1627, the first permanent settlers arrived from England, and it became an English and later British colony.[9] As a wealthy sugar colony, it became an English centre of the African slave trade until that trade was outlawed in 1807, with the final emancipation of slaves in Barbados occurring over a period of years from 1833.
9
+
10
+ On 30 November 1966, Barbados became an independent state and Commonwealth realm with Elizabeth II as its queen.[10] It has a population of 287,010 people, predominantly of African descent. Despite being classified as an Atlantic island, Barbados is considered to be a part of the Caribbean, where it is ranked as a leading tourist destination. Of the tourists, 40% come from the UK, with the US and Canada making up the next largest groups of visitors to the island.[11]
11
+
12
+ The name "Barbados" is from either the Portuguese term os barbudos or the Spanish equivalent, los barbudos, both meaning "the bearded ones". It is unclear whether "bearded" refers to the long, hanging roots of the bearded fig-tree (Ficus citrifolia), indigenous to the island, or to the allegedly bearded Caribs who once inhabited the island, or, more fancifully, to a visual impression of a beard formed by the sea foam that sprays over the outlying reefs. In 1519, a map produced by the Genoese mapmaker Visconte Maggiolo showed and named Barbados in its correct position. Furthermore, the island of Barbuda in the Leewards is very similar in name and was once named "Las Barbudas" by the Spanish.
13
+
14
+ The original name for Barbados in the Pre-Columbian era was Ichirouganaim, according to accounts by descendants of the indigenous Arawakan-speaking tribes in other regional areas, with possible translations including "Red land with white teeth"[12] or "Redstone island with teeth outside (reefs)"[13] or simply "Teeth".[14][15][16]
15
+
16
+ Colloquially, Barbadians refer to their home island as "Bim" or other nicknames associated with Barbados, including "Bimshire". The origin is uncertain, but several theories exist. The National Cultural Foundation of Barbados says that "Bim" was a word commonly used by slaves, and that it derives from the Igbo term bém from bé mụ́ meaning 'my home, kindred, kind',[17] the Igbo phoneme [e] in the Igbo orthography is very close to /ɪ/.[18] The name could have arisen due to the relatively large percentage of enslaved Igbo people from modern-day southeastern Nigeria arriving in Barbados in the 18th century.[19][20] The words 'Bim' and 'Bimshire' are recorded in the Oxford English Dictionary and Chambers Twentieth Century Dictionaries. Another possible source for 'Bim' is reported to be in the Agricultural Reporter of 25 April 1868, where the Rev. N. Greenidge (father of one of the island's most famous scholars, Abel Hendy Jones Greenidge) suggested the listing of Bimshire as a county of England. Expressly named were "Wiltshire, Hampshire, Berkshire and Bimshire".[17] Lastly, in the Daily Argosy (of Demerara, i.e. Guyana) of 1652, there is a reference to Bim as a possible corruption of 'Byam', the name of a Royalist leader against the Parliamentarians. That source suggested the followers of Byam became known as 'Bims' and that this became a word for all Barbadians.[17]
17
+
18
+ Archeological evidence suggests humans may have first settled or visited the island circa 1600 BC.[21][22] More permanent Amerindian settlement of Barbados dates to about the 4th to 7th centuries AD, by a group known as the Saladoid-Barrancoid.[23] The two main groups were the Arawaks from South America, who became dominant around 800–1200 AD, and the more warlike Kalinago (Island Caribs), who arrived from South America in the 12th–13th centuries.[21]
19
+
20
+ It is uncertain which European nation arrived first in Barbados, most likely at some point in the 15th or 16th century. One lesser-known source, citing earlier works that predate the usual accounts, suggests it could have been the Spanish.[8] Many, if not most, believe the Portuguese, en route to Brazil,[24][25] were the first Europeans to come upon the island. The island was largely ignored by Europeans, though Spanish slave raiding is thought to have reduced the native population, with many fleeing to other islands.[21][26]
21
+
22
+ The first English ship, which arrived on 14 May 1625, was captained by John Powell. The first settlement began on 17 February 1627, near what is now Holetown (formerly Jamestown),[28] by a group led by John Powell's younger brother, Henry, consisting of 80 settlers and 10 English indentured laborers.[29] Some sources state that some Africans were amongst these first settlers.[21]
23
+
24
+ The settlement was established as a proprietary colony and funded by Sir William Courten, a City of London merchant who acquired the title to Barbados and several other islands. The first colonists were therefore effectively tenants, and much of the profit of their labour returned to Courten and his company.[30] Courten's title was later transferred to James Hay, 1st Earl of Carlisle, in what was called the "Great Barbados Robbery."[citation needed] Carlisle then chose as governor Henry Hawley, who established the House of Assembly in 1639, in an effort to appease the planters, who might otherwise have opposed his controversial appointment.[21][31]
25
+
26
+ In the period 1640–60, the West Indies attracted over two-thirds of the total number of English emigrants to the Americas.[citation needed] By 1650 there were 44,000 settlers in the West Indies, as compared to 12,000 on the Chesapeake and 23,000 in New England.[citation needed] Most English arrivals were indentured. After five years of labour, they were given "freedom dues" of about £10, usually in goods. Before the mid-1630s, they also received 5 to 10 acres (2 to 4 hectares) of land, but after that time the island filled and there was no more free land.[citation needed] During the Cromwellian era (1650s) this included a large number of prisoners-of-war, vagrants and people who were illicitly kidnapped, who were forcibly transported to the island and sold as servants. These last two groups were predominantly Irish, as several thousand were infamously rounded up by English merchants and sold into servitude in Barbados and other Caribbean islands during this period, a practice that came to be known as being Barbadosed.[31][32] Cultivation of tobacco, cotton, ginger and indigo was thus handled primarily by European indentured labour until the start of the sugar cane industry in the 1640s and the growing reliance on and importation of enslaved Africans.
27
+
28
+ Life in the young colony was not easy: parish registers from the 1650s show that, for the white population, there were four times as many deaths as marriages.[citation needed] The mainstay of the infant colony's economy was the growing and export of tobacco, but tobacco prices eventually fell in the 1630s as Chesapeake production expanded.[31]
29
+
30
+ Around the same time, fighting during the War of the Three Kingdoms and the Interregnum spilled over into Barbados and Barbadian territorial waters. The island was not involved in the war until after the execution of Charles I, when the island's government fell under the control of Royalists (ironically the Governor, Philip Bell, remaining loyal to Parliament while the Barbadian House of Assembly, under the influence of Humphrey Walrond, supported Charles II).[citation needed] To try to bring the recalcitrant colony to heel, the Commonwealth Parliament passed an act on 3 October 1650 prohibiting trade between England and Barbados, and because the island also traded with the Netherlands, further navigation acts were passed prohibiting any but English vessels trading with Dutch colonies. These acts were a precursor to the First Anglo-Dutch War.[citation needed] The Commonwealth of England sent an invasion force under the command of Sir George Ayscue, which arrived in October 1651. After some skirmishing, the Royalists in the House of Assembly led by Lord Willoughby surrendered. The conditions of the surrender were incorporated into the Charter of Barbados (Treaty of Oistins), which was signed at the Mermaid's Inn, Oistins, on 17 January 1652.[33]
31
+
32
+ Starting with Cromwell, a large percentage of the white laborer population were indentured servants and involuntarily transported people from Ireland. Irish servants in Barbados were often treated poorly, and Barbadian planters gained a reputation for cruelty.[34]:55 The decreased appeal of an indenture on Barbados, combined with enormous demand for labor caused by sugar cultivation, led to the use of involuntary transportation to Barbados as a punishment for crimes, or for political prisoners, and also to the kidnapping of laborers who were sent to Barbados involuntarily.[34]:55 Irish indentured servants were a significant portion of the population throughout the period when white servants were used for plantation labor in Barbados, and while a "steady stream" of Irish servants entered Barbados throughout the seventeenth century, Cromwellian efforts to pacify Ireland created a "veritable tidal wave" of Irish laborers who were sent to Barbados during the 1650s.[34]:56 Due to inadequate historical records, the total number of Irish laborers sent to Barbados is unknown, and estimates have been "highly contentious."[34]:56 While one historical source estimated that as many as 50,000 Irish people were transported to either Barbados or Virginia unwillingly during the 1650s, this estimate is "quite likely exaggerated."[34]:56 Another estimate that 12,000 Irish prisoners had arrived in Barbados by 1655 has been described as "probably exaggerated" by historian Richard B. Sheridan.[35]:236 According to historian Thomas Bartlett, it is "generally accepted" that approximately 10,000 Irish were sent to the West Indies involuntarily, and approximately 40,000 came as voluntary indentured servants, while many also traveled as voluntary, un-indentured emigrants.[36]:256
33
+
34
+ The introduction of sugar cane from Dutch Brazil in 1640 completely transformed society, the economy and the physical landscape. Barbados eventually had one of the world's biggest sugar industries.[37] One group instrumental in ensuring the early success of the industry was the Sephardic Jews, who had originally been expelled from the Iberian peninsula and ended up in Dutch Brazil.[37] As the effects of the new crop increased, so did the shift in the ethnic composition of Barbados and surrounding islands.[31] The workable sugar plantation required a large investment and a great deal of heavy labour. At first, Dutch traders supplied the equipment, financing, and enslaved Africans, in addition to transporting most of the sugar to Europe.[31][21] In 1644 the population of Barbados was estimated at 30,000, of which about 800 were of African descent, with the remainder mainly of English descent. These English smallholders were eventually bought out and the island filled up with large sugar plantations worked by enslaved Africans.[21] By 1660 there was near parity, with 27,000 blacks and 26,000 whites. By 1666 at least 12,000 white smallholders had been bought out, died, or left the island, many choosing to emigrate to Jamaica or the American Colonies (notably the Carolinas).[21] As a result, Barbados enacted a slave code as a way of legislatively controlling its black enslaved population.[38] This inhumane document was later copied in colonies as far away as Virginia and became the template for slavery in the United States.
35
+
36
+ By 1680 there were 20,000 free whites and 46,000 enslaved Africans;[21] by 1724, there were 18,000 free whites and 55,000 enslaved Africans.[31]
37
+
38
+ The harsh conditions endured by the slaves resulted in several planned slave rebellions, the largest of which was Bussa's rebellion in 1816, which was suppressed by British troops.[21] Growing opposition to slavery led to its abolition in the British Empire in 1834.[21] However, the white plantocracy retained control of the political and economic situation on the island, with most workers living in relative poverty.[21]
39
+
40
+ The 1780 hurricane killed over 4,000 people on Barbados.[39][40] In 1854, a cholera epidemic killed over 20,000 inhabitants.[41]
41
+
42
+ Deep dissatisfaction with the situation on Barbados led many to emigrate.[21][42] Things came to a head in the 1930s during the Great Depression, as Barbadians began demanding better conditions for workers, the legalisation of trade unions and a widening of the franchise, which at that point was limited to male property owners.[21] As a result of the increasing unrest the British sent a commission (The West Indies Royal Commission, or Moyne Commission) in 1938, which recommended enacting many of the requested reforms on the islands.[21] As a result, Afro-Barbadians began to play a much more prominent role in the colony's politics, with universal suffrage being introduced in 1950.[21]
43
+
44
+ Prominent among these early activists was Grantley Herbert Adams, who helped found the Barbados Labour Party (BLP) in 1938.[43] He became the first Premier of Barbados in 1953, followed by fellow BLP-founder Hugh Gordon Cummins from 1958–1961. A group of left-leaning politicians who advocated swifter moves to independence broke off from the BLP and founded the Democratic Labour Party (DLP) in 1955.[44][45] The DLP subsequently won the 1961 Barbadian general election and their leader Errol Barrow became premier.
45
+
46
+ Full internal self-government was enacted in 1961.[21] Barbados was a member of the short-lived West Indies Federation from 1958 to 1962, and gained full independence on 30 November 1966.[21] Errol Barrow became the country's first Prime Minister. Barbados opted to remain within the British Commonwealth, retaining Queen Elizabeth as monarch, represented locally by a Governor-General.
47
+
48
+ The Barrow government sought to diversify the economy away from agriculture, seeking to boost industry and the tourism sector. Barbados was also at the forefront of regional integration efforts, spearheading the creation of CARIFTA and CARICOM.[21] The DLP lost the 1976 Barbadian general election to the BLP under Tom Adams. Adams adopted a more conservative and strongly pro-Western stance, allowing the Americans to use Barbados as the launchpad for their invasion of Grenada in 1983.[46] Adams died in office in 1985 and was replaced by Harold Bernard St. John; however, St. John lost the 1986 Barbadian general election, which saw the return of the DLP under Errol Barrow, who had been highly critical of the US intervention in Grenada. Barrow too died in office, and was replaced by Lloyd Erskine Sandiford, who remained Prime Minister until 1994.
49
+
50
+ Owen Arthur of the BLP won the 1994 Barbadian general election, remaining Prime Minister until 2008. Arthur was a strong advocate of republicanism, though a planned referendum to replace Queen Elizabeth as Head of State in 2008 never took place.[47] The DLP won the 2008 Barbadian general election; however, the new Prime Minister, David Thompson, died in 2010 and was replaced by Freundel Stuart. The BLP returned to power in 2018 under Mia Mottley, who became Barbados's first female Prime Minister.[48]
51
+
52
+ Barbados is situated in the Atlantic Ocean, east of the other West Indies islands, and is the easternmost island in the Lesser Antilles. It is flat in comparison with its island neighbours to the west, the Windward Islands. The island rises gently to the central highland region known as the Scotland District, the nation's high point being Mount Hillaby, at 340 m (1,120 ft) above sea level.[21]
53
+
54
+ In the parish of Saint Michael lies Barbados's capital and main city, Bridgetown, containing 1/3 of the country's population.[21] Other major towns scattered across the island include Holetown, in the parish of Saint James; Oistins, in the parish of Christ Church; and Speightstown, in the parish of Saint Peter.
55
+
56
+ Barbados lies on the boundary of the South American and Caribbean Plates.[49] The subduction of the South American plate beneath the Caribbean plate scrapes sediment from the South American plate and deposits it above the subduction zone, forming an accretionary prism. This deposition of material causes Barbados to rise at a rate of about 25 mm (1 in) per 1,000 years.[50] As a result of this process, the island is geologically composed of coral roughly 90 m (300 ft) thick, formed by reefs growing above the sediment. The land slopes in a series of "terraces" in the west and goes into an incline in the east. A large proportion of the island is circled by coral reefs.[21]
57
+
58
+ The erosion of limestone in the northeast of the island, in the Scotland District, has resulted in the formation of various caves and gullies. On the Atlantic east coast of the island, coastal landforms, including stacks, have been created due to the limestone composition of the area. Also notable on the island is the rocky cape known as Pico Teneriffe[51] or Pico de Tenerife, so named because, according to local belief, the island of Tenerife in Spain is the first land east of Barbados.
59
+
60
+ The country generally experiences two seasons, one of which includes noticeably higher rainfall. Known as the "wet season", this period runs from June to December. By contrast, the "dry season" runs from December to May. Annual precipitation ranges between 1,000 and 2,300 mm (40 and 90 in).
61
+ From December to May the average temperatures range from 21 to 31 °C (70 to 88 °F), while between June and November, they range from 23 to 31 °C (73 to 88 °F).[52]
62
+
63
+ On the Köppen climate classification scale, much of Barbados is regarded as a tropical monsoon climate (Am). However, breezes of 12 to 16 km/h (7 to 10 mph) abound throughout the year and give Barbados a climate which is moderately tropical.
64
+
65
+ Infrequent natural hazards include earthquakes, landslips, and hurricanes. Barbados is often spared the worst effects of the region's tropical storms and hurricanes during the rainy season. Its location in the south-east of the Caribbean region puts the country just outside the principal hurricane strike zone. On average, a major hurricane strikes about once every 26 years. The last hurricane to cause severe damage to Barbados was Hurricane Janet in 1955; in 2010 the island was struck by Hurricane Tomas, but this caused only minor damage across the country, as the system was only at tropical storm strength when it passed.[53]
66
+
67
+ Barbados is susceptible to environmental pressures. As one of the world's most densely populated isles, the government worked during the 1990s[54] to aggressively integrate the growing south coast of the island into the Bridgetown Sewage Treatment Plant to reduce contamination of offshore coral reefs.[55][56] As of the first decade of the 21st century, a second treatment plant has been proposed along the island's west coast. Being so densely populated, Barbados has made great efforts to protect its underground aquifers.[57]
68
+
69
+ As a coral-limestone island, Barbados is highly permeable to seepage of surface water into the earth. The government has placed great emphasis on protecting the catchment areas that lead directly into the huge network of underground aquifers and streams.[57] On occasion illegal squatters have breached these areas, and the government has removed squatters to preserve the cleanliness of the underground springs which provide the island's drinking water.[58]
70
+
71
+ The government has placed a huge emphasis on keeping Barbados clean with the aim of protecting the environment and preserving offshore coral reefs which surround the island. Many initiatives to mitigate human pressures on the coastal regions of Barbados and seas come from the Coastal Zone Management Unit (CZMU).[59][60] Barbados has nearly 90 kilometres (56 miles) of coral reefs just offshore and two protected marine parks have been established off the west coast.[61] Overfishing is another threat which faces Barbados.[62]
72
+
73
+ Although on the opposite side of the Atlantic, and some 4,800 kilometres (3,000 miles) west of Africa, Barbados is one of many places in the American continent that experience heightened levels of mineral dust from the Sahara Desert.[63] Some particularly intense dust episodes have been blamed partly for the impacts on the health of coral reefs[64] surrounding Barbados or asthmatic episodes,[65] but evidence has not wholly supported the former such claim.[66]
74
+
75
+ Access to biocapacity in Barbados is much lower than the world average. In 2016, Barbados had 0.17 global hectares[67] of biocapacity per person within its territory, much less than the world average of 1.6 global hectares per person.[68] In the same year Barbados used 0.84 global hectares of biocapacity per person, its ecological footprint of consumption. This means the country uses approximately five times as much biocapacity as it contains. As a result, Barbados is running a biocapacity deficit.[67]
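+ As a quick check of the arithmetic above, a minimal sketch in Python (the figures are simply the 2016 values quoted in this paragraph; the variable names are ours):

```python
# Rough check of the biocapacity arithmetic quoted above (2016 figures).
biocapacity_per_person = 0.17   # global hectares available within Barbados, per person
footprint_per_person = 0.84     # global hectares consumed per person

ratio = footprint_per_person / biocapacity_per_person
print(f"Consumption is about {ratio:.1f}x local biocapacity")   # ~4.9, i.e. roughly five times
print(f"Per-person deficit: {footprint_per_person - biocapacity_per_person:.2f} gha")
```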
76
+
77
+ Barbados is host to four species of nesting turtles (green turtles, loggerheads, hawksbill turtles, and leatherbacks) and has the second-largest hawksbill turtle-breeding population in the Caribbean.[69] The driving of vehicles on beaches can crush nests buried in the sand and such activity is discouraged in nesting areas.[70]
78
+
79
+ Barbados is also the host to the green monkey. The green monkey is found in West Africa from Senegal to the Volta River. It has been introduced to the Cape Verde islands off north-western Africa, and the West Indian islands of Saint Kitts, Nevis, Saint Martin, and Barbados. It was introduced to the West Indies in the late 17th century when slave trade ships travelled to the Caribbean from West Africa.
80
+
81
+ The 2010 national census conducted by the Barbados Statistical Service reported a resident population of 277,821, of which 144,803 were female and 133,018 were male.[71]
82
+
83
+ The life expectancy for Barbados residents as of 2019[update] is 79 years. The average life expectancy is 83 years for females and 78 years for males (2019).[1] Barbados and Japan have the highest per capita occurrences of centenarians in the world.[72]
84
+
85
+ The crude birth rate is 12.23 births per 1,000 people, and the crude death rate is 8.39 deaths per 1,000 people. The infant mortality rate is 11.63 infant deaths per 1,000 live births.
86
+
87
+ Close to 90% of all Barbadians (also known colloquially as "Bajan") are of Afro-Caribbean descent ("Afro-Bajans") and mixed descent. The remainder of the population includes groups of Europeans ("Anglo-Bajans" / "Euro-Bajans") mainly from the United Kingdom and Ireland, and Asians, predominantly Chinese and Indians (both Hindu and Muslim).
88
+ Other groups in Barbados include people from the United Kingdom, United States and Canada. Barbadians who return after years of residence in the United States and children born in America to Bajan parents are called "Bajan Yankees", a term considered derogatory by some.[73] Generally, Bajans recognise and accept all "children of the island" as Bajans, and refer to each other as such.
89
+
90
+ The biggest communities outside the Afro-Caribbean community are:
91
+
92
+ English is the official language of Barbados, and is used for communications, administration, and public services all over the island. In its capacity as the official language of the country, the standard of English tends to conform to the vocabulary, pronunciations, spellings, and conventions akin to, but not exactly the same as, those of British English.
93
+
94
+ An English-based creole language, referred to locally as Bajan, is spoken by most Barbadians in everyday life, especially in informal settings.[21] In its full-fledged form, Bajan sounds markedly different from the Standard English heard on the island. The degree of intelligibility between Bajan and general English, for the general English speaker, depends on the level of creolised vocabulary and idioms. A Bajan speaker may be completely unintelligible to an English speaker from another country.
95
+
96
+ Religion in Barbados (2000)[81]
97
+
98
+ Most Barbadians of African and European descent are Christians (95%), the largest denomination being Anglican (40%).[21] Other Christian denominations with significant followings in Barbados are the Catholic Church (administered by Roman Catholic Diocese of Bridgetown), Pentecostals, Jehovah's Witnesses, the Seventh-day Adventist Church and Spiritual Baptists.[21] The Church of England was the official state religion until its legal disestablishment by the Parliament of Barbados following independence.[82]
99
+
100
+ Other religions in Barbados include Hinduism, Islam, Bahá'í,[83] Judaism.[21]
101
+
102
+ Barbados has been an independent country since 30 November 1966.[84] It functions as a constitutional monarchy and parliamentary democracy modelled on the British Westminster system. The Queen of Barbados, Elizabeth II, is head of state and is represented locally by the Governor-General of Barbados—presently Sandra Mason. Both are advised on matters of the Barbadian state by the Prime Minister of Barbados, who is head of government. There are 30 representatives within the House of Assembly.
103
+
104
+ The Constitution of Barbados is the supreme law of the nation.[85] The Attorney General heads the independent judiciary. New Acts are passed by the Barbadian Parliament and require royal assent by the governor-general to become law.
105
+
106
+ During the 1990s at the suggestion of Trinidad and Tobago's Patrick Manning, Barbados attempted a political union with Trinidad and Tobago and Guyana. The project stalled after the then prime minister of Barbados, Lloyd Erskine Sandiford, became ill and his Democratic Labour Party lost the next general election.[86][87] Barbados continues to share close ties with Trinidad and Tobago and with Guyana, claiming the highest number of Guyanese immigrants after the United States, Canada and the United Kingdom.
107
+
108
+ Barbados functions as a two-party system, the dominant parties being the Democratic Labour Party (DLP) and the incumbent Barbados Labour Party (BLP). Since independence on 30 November 1966, the DLP has governed from 1966 to 1976, from 1986 to 1994, and from 2008 to 2018, while the BLP has governed from 1976 to 1986, from 1994 to 2008, and from 2018 to the present. Errol Barrow, the first Premier and later the first Prime Minister of Barbados, led DLP governments for three successive terms from 4 December 1961 to 2 September 1976, and again from 28 May 1986 until his sudden death in office on 1 June 1987; Sir Lloyd Sandiford, the fourth Prime Minister, then led the DLP government from 1 June 1987 to 6 September 1994. For the BLP, Tom Adams served as Prime Minister from 2 September 1976 until his sudden death in office on 11 March 1985; Sir Harold St. John, the third Prime Minister, then led the BLP government from 11 March 1985 to 28 May 1986, and the party held power again from 6 September 1994 to 15 January 2008. The DLP returned to office under the sixth Prime Minister, David Thompson, from 15 January 2008 until his death in office on 23 October 2010, after which the seventh Prime Minister, Freundel Stuart, governed from 23 October 2010 to 24 May 2018. All of Barbados's Prime Ministers except Freundel Stuart have also held the finance portfolio. The BLP has held power under the eighth and current Prime Minister, Mia Mottley, since 24 May 2018, with the next general election due by 24 May 2023.
109
+
110
+ Barbados follows a policy of nonalignment and seeks cooperative relations with all friendly states. Barbados is a full and participating member of the Caribbean Community (CARICOM), the CARICOM Single Market and Economy (CSME), the Association of Caribbean States (ACS),[88] the Organization of American States (OAS), the Commonwealth of Nations, and the Caribbean Court of Justice (CCJ). In 2005 the Parliament of Barbados voted on a measure replacing the UK's Judicial Committee of the Privy Council with the Caribbean Court of Justice, based in Port of Spain, Trinidad and Tobago.
111
+
112
+ Barbados is an original member (1995) of the World Trade Organization (WTO) and participates actively in its work. It grants at least MFN treatment to all its trading partners. European Union relations and cooperation with Barbados are carried out both on a bilateral and a regional basis. Barbados is party to the Cotonou Agreement, through which, as of December 2007, it is linked by an Economic Partnership Agreement with the European Commission. The pact involves the Caribbean Forum (CARIFORUM) subgroup of the African, Caribbean and Pacific Group of States (ACP). CARIFORUM is the only part of the wider ACP bloc that has concluded the full regional trade pact with the European Union. There are also ongoing EU-Community of Latin American and Caribbean States (CELAC) and EU-CARIFORUM dialogues.[89]
113
+
114
+ Trade policy has also sought to protect a small number of domestic activities, mostly food production, from foreign competition, while recognising that most domestic needs are best met by imports.
115
+
116
+ On 6 July 1994, at the Sherbourne Conference Centre, St. Michael, Barbados, representatives of eight (8) countries signed the Double Taxation Relief (CARICOM) Treaties 1994. The countries which were represented were: Antigua and Barbuda, Belize, Grenada, Jamaica, St. Kitts and Nevis, St. Lucia, St. Vincent and the Grenadines and Trinidad and Tobago.[90]
117
+
118
+ On 19 August 1994 a representative of the Government of Guyana signed a similar treaty.
119
+
120
+ The Barbados Defence Force has roughly 600 members. Within it, 12- to 18-year-olds make up the Barbados Cadet Corps. The defence preparations of the island nation are closely tied to defence treaties with the United Kingdom, the United States, and the People's Republic of China.[91]
121
+
122
+ The Royal Barbados Police Force is the sole law enforcement agency on the island of Barbados.
123
+
124
+ Barbados is divided into 11 parishes:
125
+
126
+ St. George and St. Thomas are in the middle of the country and are the only parishes without coastlines.
127
+
128
+ Barbados is the 53rd richest country in the world in terms of GDP (gross domestic product) per capita,[92] has a well-developed mixed economy, and a moderately high standard of living. According to the World Bank, Barbados is classified as one of the world's 66 high-income economies.[93][failed verification] Despite this, a 2012 self-study in conjunction with the Caribbean Development Bank revealed that 20% of Barbadians live in poverty, and nearly 10% cannot meet their basic daily food needs.[94]
129
+
130
+ Historically, the economy of Barbados had been dependent on sugarcane cultivation and related activities, but since the late 1970s and early 1980s it has diversified into the manufacturing and tourism sectors.[21] Offshore finance and information services have become important foreign exchange earners, and there is a healthy light manufacturing sector. Since the 1990s the Barbados Government has been seen as business-friendly and economically sound.[95][citation needed] The island saw a construction boom, with the development and redevelopment of hotels, office complexes, and homes, partly due to the staging of the 2007 Cricket World Cup.[96] This slowed during the 2008 to 2012 world economic crisis and the recession.[97]
131
+
132
+ The economy was strong between 1999 and 2000, but went into recession in 2001 and 2002 due to slowdowns in tourism and consumer spending and the impact of the 11 September 2001 attacks in the United States and the 7 July 2005 London bombings in the United Kingdom. It rebounded in 2003 and showed growth from 2004 through 2008. The economy went into recession again from 2008 to 2013 before showing growth from 2014 to 2017, and then entered another downturn from 2017 to 2019, during which Standard & Poor's and Moody's issued 23 credit-rating downgrades across 2016, 2017 and 2018. After three upgrades by Standard & Poor's and Moody's in December 2019, the economy had begun to grow in the first quarter of 2020, but this recovery was cut short by the global COVID-19 recession. Traditional trading partners include Canada, the Caribbean Community (especially Trinidad and Tobago), the United Kingdom and the United States. Recent government administrations have continued efforts to reduce unemployment, encourage foreign direct investment, and privatise remaining state-owned enterprises. Unemployment was reduced to 10.7% in 2003,[1] but had increased to 11.9% by the second quarter of 2015.[98]
133
+
134
+ The European Union is assisting Barbados with a €10 million program of modernisation of the country's International Business and Financial Services Sector.[99]
135
+
136
+ Barbados maintains the third largest stock exchange in the Caribbean region. As of 2009[update], officials at the stock exchange were investigating the possibility of augmenting the local exchange with an International Securities Market (ISM) venture.[100]
137
+
138
+ By May 2018, Barbados' outstanding debt had climbed to US$7.5 billion, more than 1.7 times the country's GDP. In June 2018 the government defaulted on its sovereign debt when it failed to make a coupon payment on Eurobonds maturing in 2035. Outstanding bond debt of Barbados reached US$4.4 billion.[101]
139
+
140
+ In October 2019, Barbados concluded restructuring negotiations with a creditor group including the investment funds Eaton Vance Management, Greylock Capital Management, Teachers Advisors and Guyana Bank for Trade and Industry. Creditors will exchange existing bonds for a new debt series maturing in 2029. The new bonds involve a principal "haircut" of approximately 26% and include a clause allowing for deferment of principal and capitalization of interest in the event of a natural disaster.[102][103]
141
+
142
+ See Health in Barbados
143
+
144
+ The literacy rate of Barbados is close to 100%.[104][21] The mainstream public education system of Barbados is fashioned after the British model. The government of Barbados spent 6.7% of its GDP on education in 2008.[1]
145
+
146
+ All young people in the country must attend school until age 16. Barbados has over 70 primary schools and over 20 secondary schools throughout the island. There are a number of private schools, including Montessori schools and schools offering the International Baccalaureate. Student enrolment at these schools represents less than 5% of the total enrolment of the public schools.
147
+
148
+ Certificate-, diploma- and degree-level education in the country is provided by the Barbados Community College, the Samuel Jackman Prescod Institute of Technology, Codrington College, and the Cave Hill campus and Open Campus of the University of the West Indies. Barbados is also home to several overseas medical schools, such as Ross University School of Medicine and the American University of Integrative Sciences, School of Medicine.
149
+
150
+ Barbados Secondary School Entrance Examination: Children who are 11 years old but under 12 years old on 1 September in the year of the examination are required to write the examination as a means of allocation to secondary school.
151
+
152
+ Caribbean Secondary Education Certificate (CSEC) examinations are usually taken by students after five years of secondary school and mark the end of standard secondary education. The CSEC examinations are equivalent to the Ordinary Level (O-Levels) examinations and are targeted toward students 16 and older.
153
+
154
+ Caribbean Advanced Proficiency Examinations (CAPE) are taken by students who have completed their secondary education and wish to continue their studies. Students who sit for the CAPE usually possess CSEC or an equivalent certification. The CAPE is equivalent to the British Advanced Levels (A-Levels), voluntary qualifications that are intended for university entrance.[105]
155
+
156
+ Barbados is a blend of West African, Portuguese, Creole, Indian and British cultures. Citizens are officially called Barbadians. The term "Bajan" (pronounced BAY-jun) may have come from a localised pronunciation of the word Barbadian, which at times can sound more like "Bar-bajan"; or, more likely, from English bay ("bayling"), Portuguese baiano.
157
+
158
+ The largest carnival-like cultural event that takes place on the island is the Crop Over festival, which was established in 1974. As in many other Caribbean and Latin American countries, Crop Over is an important event for many people on the island, as well as the thousands of tourists who flock there to participate in the annual events.[21] The festival includes musical competitions and other traditional activities, and features the majority of the island's homegrown calypso and soca music for the year. The male and female Barbadians who harvested the most sugarcane are crowned as the King and Queen of the crop.[106] Crop Over gets under way at the beginning of July and ends with the costumed parade on Kadooment Day, held on the first Monday of August. New calypso/soca music is usually released and played more frequently from the beginning of May to coincide with the start of the festival.[citation needed]
159
+
160
+ Bajan cuisine is a mixture of African, Indian, Irish, Creole and British influences. A typical meal consists of a main dish of meat or fish, normally marinated with a mixture of herbs and spices, hot side dishes, and one or more salads. The meal is usually served with one or more sauces.[107] The national dish of Barbados is cou-cou and flying fish with spicy gravy.[108] Another traditional meal is "pudding and souse", a dish of pickled pork with spiced sweet potatoes.[109] A wide variety of seafood and meats are also available.
161
+
162
+ The Mount Gay Rum visitors centre in Barbados claims to be the world's oldest remaining rum company, with earliest confirmed deed from 1703. Cockspur Rum and Malibu are also from the island. Barbados is home to the Banks Barbados Brewery, which brews Banks Beer, a pale lager, as well as Banks Amber Ale.[110] Banks also brews Tiger Malt, a non-alcoholic malted beverage. 10 Saints beer is brewed in Speightstown, St. Peter in Barbados and aged for 90 days in Mount Gay 'Special Reserve' Rum casks. It was first brewed in 2009 and is available in certain Caricom nations.[111]
163
+
164
+ In music, nine-time Grammy Award winner Rihanna (born in Saint Michael) is one of Barbados's best-known artists and one of the best selling music artists of all time, selling over 200 million records worldwide. In 2009 she was appointed as an Honorary Ambassador of Youth and Culture for Barbados by the late Prime Minister, David Thompson.[112]
165
+
166
+ Singer-songwriters Rayvon and Shontelle, the band Cover Drive, the musician Rupee, and Mark Morrison, singer of the Top 10 hit "Return of the Mack", also originate from Barbados. Grandmaster Flash (born Joseph Saddler in Bridgetown in 1958) is a hugely influential musician of Barbadian origin, who pioneered hip-hop DJing, cutting, and mixing in 1970s New York. The Merrymen are a well-known calypso band based in Barbados, performing from the 1960s into the 2010s.
167
+
168
+ As in other Caribbean countries of British colonial heritage, cricket is very popular on the island. The West Indies cricket team usually includes several Barbadian players. In addition to several warm-up matches and six "Super Eight" matches, the country hosted the final of the 2007 Cricket World Cup. Barbados has produced many great cricketers including Sir Garfield Sobers, Sir Frank Worrell, Sir Clyde Walcott, Sir Everton Weekes, Gordon Greenidge, Wes Hall, Charlie Griffith, Joel Garner, Desmond Haynes and Malcolm Marshall.
169
+
170
+ Rugby is also popular in Barbados.
171
+
172
+ Horse racing takes place at the Historic Garrison Savannah close to Bridgetown. Spectators can pay for admission to the stands, or else can watch races from the public "rail", which encompasses the track.
173
+
174
+ Basketball is an increasingly popular sport, played at school or college. Barbados's national team has produced some unexpected results, having in the past beaten teams from much larger countries.
175
+
176
+ Polo is very popular amongst the rich elite on the island and the "High-Goal" Apes Hill team is based at the St James's Club.[113] It is also played at the private Holders Festival ground.
177
+
178
+ In golf, the Barbados Open, played at Royal Westmoreland Golf Club, was an annual stop on the European Seniors Tour from 2000 to 2009. In December 2006 the WGC-World Cup took place at the country's Sandy Lane resort on the Country Club course, an 18-hole course designed by Tom Fazio. The Barbados Golf Club is another course on the island. It has hosted the Barbados Open on several occasions.
179
+
180
+ Volleyball is also popular and is mainly played indoors.
181
+
182
+ Tennis is gaining popularity, and Barbados is home to Darian King, currently ranked 270th in the world and the second-highest ranked player in the Caribbean.
183
+
184
+ Motorsports also play a role, with Rally Barbados occurring each summer and being listed on the FIA NACAM calendar. Also, the Bushy Park Circuit hosted the Race of Champions and Global RallyCross Championship in 2014.
185
+
186
+ The presence of the trade winds along with favourable swells make the southern tip of the island an ideal location for wave sailing (an extreme form of the sport of windsurfing).
187
+
188
+ Netball is also popular with women in Barbados.
189
+
190
+ Barbadian team The Flyin' Fish, are the 2009 Segway Polo World Champions.[114]
191
+
192
+ Although Barbados is about 34 km (21 mi) across at its widest point, a car journey from Six Cross Roads in St. Philip (south-east) to North Point in St. Lucy (north-central) can take one and a half hours or longer due to traffic. Barbados has half as many registered cars as citizens.
193
+
194
+ Transport on the island is relatively convenient with "route taxis" called "ZRs" (pronounced "Zed-Rs") travelling to most points on the island. These small buses can at times be crowded, as passengers are generally never turned down regardless of the number. They will usually take the more scenic routes to destinations. They generally depart from the capital Bridgetown or from Speightstown in the northern part of the island.
195
+
196
+ Including the ZRs, there are three bus systems running seven days a week (though less frequently on Sundays). There are ZRs, the yellow minibuses and the blue Transport Board buses. A ride on any of them costs Bds$ 3.5.[115] The smaller buses from the two privately owned systems ("ZRs" and "minibuses") can give change; the larger blue buses from the government-operated Barbados Transport Board system cannot, but do give receipts. The Barbados Transport Board buses travel in regular bus routes and scheduled timetables across Barbados. Schoolchildren in school uniform including some Secondary schools ride for free on the government buses and for Bds$ 2.5 on the ZRs. Most routes require a connection in Bridgetown. Barbados Transport Board's headquarters are located at Kay's House, Roebuck Street, St. Michael, and the bus depots and terminals are located in the Fairchild Street Bus Terminal in Fairchild Street and the Princess Alice Bus Terminal (which was formerly the Lower Green Bus Terminal in Jubilee Gardens, Bridgetown, St. Michael) in Princess Alice Highway, Bridgetown, St. Michael; the Speightstown Bus Terminal in Speightstown, St. Peter; the Oistins Bus Depot in Oistins, Christ Church; and the Mangrove Bus Depot in Mangrove, St. Philip.
197
+
198
+ Some hotels also provide visitors with shuttles to points of interest on the island from outside the hotel lobby. There are several locally owned and operated vehicle rental agencies in Barbados but there are no multi-national companies.
199
+
200
+ The island's lone airport is Grantley Adams International Airport. It receives daily flights by several major airlines from points around the globe, as well as several smaller regional commercial airlines and charters. The airport serves as the main air-transportation hub for the eastern Caribbean. It underwent a US$100 million upgrade and expansion between February 2003 and August 2005.
201
+
202
+ There was also a helicopter shuttle service, which offered air taxi services to a number of sites around the island, mainly on the west coast tourist belt. Air and maritime traffic was regulated by the Barbados Port Authority. Private luxury helicopter tours operated from Spencers, Christ Church, next to the Barbados Concorde Experience, from its opening in September 2007 until April 2010. Bajan Helicopters operated from April 1989 until April 2009, closing because of the economic crisis and recession facing Barbados.
203
+
204
+ General information
205
+
206
en/5550.html.txt ADDED
@@ -0,0 +1,325 @@
1
+ Area is the quantity that expresses the extent of a two-dimensional figure or shape or planar lamina, in the plane. Surface area is its analog on the two-dimensional surface of a three-dimensional object. Area can be understood as the amount of material with a given thickness that would be necessary to fashion a model of the shape, or the amount of paint necessary to cover the surface with a single coat.[1] It is the two-dimensional analog of the length of a curve (a one-dimensional concept) or the volume of a solid (a three-dimensional concept).
2
+
3
+ The area of a shape can be measured by comparing the shape to squares of a fixed size.[2] In the International System of Units (SI), the standard unit of area is the square metre (written as m2), which is the area of a square whose sides are one metre long.[3] A shape with an area of three square metres would have the same area as three such squares. In mathematics, the unit square is defined to have area one, and the area of any other shape or surface is a dimensionless real number.
4
+
5
+ There are several well-known formulas for the areas of simple shapes such as triangles, rectangles, and circles. Using these formulas, the area of any polygon can be found by dividing the polygon into triangles.[4] For shapes with curved boundary, calculus is usually required to compute the area. Indeed, the problem of determining the area of plane figures was a major motivation for the historical development of calculus.[5]
6
+
7
+ For a solid shape such as a sphere, cone, or cylinder, the area of its boundary surface is called the surface area.[1][6][7] Formulas for the surface areas of simple shapes were computed by the ancient Greeks, but computing the surface area of a more complicated shape usually requires multivariable calculus.
8
+
9
+ Area plays an important role in modern mathematics. In addition to its obvious importance in geometry and calculus, area is related to the definition of determinants in linear algebra, and is a basic property of surfaces in differential geometry.[8] In analysis, the area of a subset of the plane is defined using Lebesgue measure,[9] though not every subset is measurable.[10] In general, area in higher mathematics is seen as a special case of volume for two-dimensional regions.[1]
10
+
11
+ Area can be defined through the use of axioms, defining it as a function from a collection of certain plane figures to the set of real numbers. It can be proved that such a function exists.
12
+
13
+ An approach to defining what is meant by "area" is through axioms. "Area" can be defined as a function from a collection M of plane figures of a special kind (termed measurable sets) to the set of real numbers, which satisfies the following properties:
14
+
15
+ It can be proved that such an area function actually exists.[11]
16
+
17
+ Every unit of length has a corresponding unit of area, namely the area of a square with the given side length. Thus areas can be measured in square metres (m2), square centimetres (cm2), square millimetres (mm2), square kilometres (km2), square feet (ft2), square yards (yd2), square miles (mi2), and so forth.[12] Algebraically, these units can be thought of as the squares of the corresponding length units.
18
+
19
+ The SI unit of area is the square metre, which is considered an SI derived unit.[3]
20
+
21
+ Calculation of the area of a square whose length and width are 1 metre would be:
22
+
23
+ 1 metre × 1 metre = 1 m2
24
+
25
+ and so, a rectangle with different sides (say length of 3 metres and width of 2 metres) would have an area in square units that can be calculated as:
26
+
27
+ 3 metres × 2 metres = 6 m2. This is equivalent to 6 million square millimetres. Other useful conversions are:
28
+
29
+ In non-metric units, the conversion between two square units is the square of the conversion between the corresponding length units.
30
+
31
+ For example, the relationship between square feet and square inches is 1 square foot = 144 square inches,
32
+
33
+ where 144 = 12² = 12 × 12. Similarly:
34
+
35
+ In addition, conversion factors include:
36
+
37
+ There are several other common units for area. The are was the original unit of area in the metric system, with:
38
+
39
+ Though the are has fallen out of use, the hectare is still commonly used to measure land:[12]
40
+
41
+ Other uncommon metric units of area include the tetrad, the hectad, and the myriad.
42
+
43
+ The acre is also commonly used to measure land areas, where
44
+
45
+ An acre is approximately 40% of a hectare.
46
+
47
+ On the atomic scale, area is measured in units of barns, such that:[12]
48
+
49
+ The barn is commonly used in describing the cross-sectional area of interaction in nuclear physics.[12]
50
+
51
+ In India,
52
+
53
+ In the 5th century BCE, Hippocrates of Chios was the first to show that the area of a disk (the region enclosed by a circle) is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates,[13] but did not identify the constant of proportionality. Eudoxus of Cnidus, also in the 5th century BCE, also found that the area of a disk is proportional to its radius squared.[14]
54
+
55
+ Subsequently, Book I of Euclid's Elements dealt with equality of areas between two-dimensional figures. The mathematician Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, in his book Measurement of a Circle. (The circumference is 2πr, and the area of a triangle is half the base times the height, yielding the area πr2 for the disk.) Archimedes approximated the value of π (and hence the area of a unit-radius circle) with his doubling method, in which he inscribed a regular triangle in a circle and noted its area, then doubled the number of sides to give a regular hexagon, then repeatedly doubled the number of sides as the polygon's area got closer and closer to that of the circle (and did the same with circumscribed polygons).
56
+
57
+ Swiss scientist Johann Heinrich Lambert in 1761 proved that π, the ratio of a circle's area to its squared radius, is irrational, meaning it is not equal to the quotient of any two whole numbers.[15] In 1794 French mathematician Adrien-Marie Legendre proved that π² is irrational; this also proves that π is irrational.[16] In 1882, German mathematician Ferdinand von Lindemann proved that π is transcendental (not the solution of any polynomial equation with rational coefficients), confirming a conjecture made by both Legendre and Euler.[15]:p. 196
58
+
59
+ Heron (or Hero) of Alexandria found what is known as Heron's formula for the area of a triangle in terms of its sides, and a proof can be found in his book, Metrica, written around 60 CE. It has been suggested that Archimedes knew the formula over two centuries earlier,[17] and since Metrica is a collection of the mathematical knowledge available in the ancient world, it is possible that the formula predates the reference given in that work.[18]
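+ A minimal sketch of Heron's formula in Python (not taken from Metrica; the function name is ours):

```python
import math

def heron_area(a: float, b: float, c: float) -> float:
    """Area of a triangle from its three side lengths, using Heron's formula."""
    s = (a + b + c) / 2                      # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# A 3-4-5 right triangle has area (3 * 4) / 2 = 6.
print(heron_area(3, 4, 5))                   # 6.0
```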
60
+
61
+ In 499 Aryabhata, a great mathematician-astronomer from the classical age of Indian mathematics and Indian astronomy, expressed the area of a triangle as one-half the base times the height in the Aryabhatiya (section 2.6).
62
+
63
+ A formula equivalent to Heron's was discovered by the Chinese independently of the Greeks. It was published in 1247 in Shushu Jiuzhang ("Mathematical Treatise in Nine Sections"), written by Qin Jiushao.
64
+
65
+ In the 7th century CE, Brahmagupta developed a formula, now known as Brahmagupta's formula, for the area of a cyclic quadrilateral (a quadrilateral inscribed in a circle) in terms of its sides. In 1842 the German mathematicians Carl Anton Bretschneider and Karl Georg Christian von Staudt independently found a formula, known as Bretschneider's formula, for the area of any quadrilateral.
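+ For illustration, a small sketch of Brahmagupta's formula (the function name is ours, and it assumes the quadrilateral really is cyclic):

```python
import math

def brahmagupta_area(a: float, b: float, c: float, d: float) -> float:
    """Area of a cyclic quadrilateral with side lengths a, b, c, d."""
    s = (a + b + c + d) / 2                  # semi-perimeter
    return math.sqrt((s - a) * (s - b) * (s - c) * (s - d))

# A 3 x 4 rectangle is cyclic, so its area should come out as 12.
print(brahmagupta_area(3, 4, 3, 4))          # 12.0
```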
66
+
67
+ The development of Cartesian coordinates by René Descartes in the 17th century allowed the development of the surveyor's formula for the area of any polygon with known vertex locations by Gauss in the 19th century.
68
+
69
+ The development of integral calculus in the late 17th century provided tools that could subsequently be used for computing more complicated areas, such as the area of an ellipse and the surface areas of various curved three-dimensional objects.
70
+
71
+ For a non-self-intersecting (simple) polygon whose n vertices have known Cartesian coordinates (x_i, y_i), i = 0, 1, ..., n−1, the area is given by the surveyor's formula:[19]
+
+ A = ½ |Σ_{i=0}^{n−1} (x_i y_{i+1} − x_{i+1} y_i)|
95
+
96
+ where, when i = n−1, the index i+1 is taken modulo n and so refers to 0.
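+ A short sketch of the surveyor's (shoelace) formula as stated above (the function name is ours):

```python
def polygon_area(vertices):
    """Area of a simple polygon from its vertices (x_i, y_i), listed in order."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x_i, y_i = vertices[i]
        x_next, y_next = vertices[(i + 1) % n]   # the index i+1 is taken modulo n
        total += x_i * y_next - x_next * y_i
    return abs(total) / 2

# Unit square, vertices listed counter-clockwise: area 1.
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))   # 1.0
```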
97
+
98
+ The most basic area formula is the formula for the area of a rectangle. Given a rectangle with length l and width w, the formula for the area is:[2][20]
99
+
100
+ That is, the area of the rectangle is the length multiplied by the width. As a special case, since l = w for a square, the area of a square with side length s is given by the formula:[1][2][21]
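+ (The formulas referred to here, in standard notation, are A = lw for the rectangle and A = s² for the square.)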
101
+
102
+ The formula for the area of a rectangle follows directly from the basic properties of area, and is sometimes taken as a definition or axiom. On the other hand, if geometry is developed before arithmetic, this formula can be used to define multiplication of real numbers.
103
+
104
+ Most other simple formulas for area follow from the method of dissection.
105
+ This involves cutting a shape into pieces, whose areas must sum to the area of the original shape.
106
+
107
+ For example, any parallelogram can be subdivided into a trapezoid and a right triangle, as shown in the figure to the left. If the triangle is moved to the other side of the trapezoid, then the resulting figure is a rectangle. It follows that the area of the parallelogram is the same as the area of the rectangle:[2]
108
+
109
+ However, the same parallelogram can also be cut along a diagonal into two congruent triangles, as shown in the figure to the right. It follows that the area of each triangle is half the area of the parallelogram:[2]
110
+
111
+ Similar arguments can be used to find area formulas for the trapezoid[22] as well as more complicated polygons.[23]
112
+
113
+ The formula for the area of a circle (more properly called the area enclosed by a circle or the area of a disk) is based on a similar method. Given a circle of radius r, it is possible to partition the circle into sectors, as shown in the figure to the right. Each sector is approximately triangular in shape, and the sectors can be rearranged to form an approximate parallelogram. The height of this parallelogram is r, and the width is half the circumference of the circle, or πr. Thus, the total area of the circle is πr²:[2]
114
+
115
+ Though the dissection used in this formula is only approximate, the error becomes smaller and smaller as the circle is partitioned into more and more sectors. The limit of the areas of the approximate parallelograms is exactly πr², which is the area of the circle.[24]
116
+
117
+ This argument is actually a simple application of the ideas of calculus. In ancient times, the method of exhaustion was used in a similar way to find the area of the circle, and this method is now recognized as a precursor to integral calculus. Using modern methods, the area of a circle can be computed using a definite integral:
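+ (In standard notation, the integral in question is {\displaystyle A=2\int _{-r}^{r}{\sqrt {r^{2}-x^{2}}}\,dx=\pi r^{2}}, integrating the height of the circle, 2√(r² − x²), across its diameter.)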
118
+
119
+ The formula for the area enclosed by an ellipse is related to the formula of a circle; for an ellipse with semi-major and semi-minor axes x and y the formula is:[2]
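+ (In standard notation, the formula is A = πxy, which reduces to πr² when x = y = r.)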
120
+
121
+ Most basic formulas for surface area can be obtained by cutting surfaces and flattening them out. For example, if the side surface of a cylinder (or any prism) is cut lengthwise, the surface can be flattened out into a rectangle. Similarly, if a cut is made along the side of a cone, the side surface can be flattened out into a sector of a circle, and the resulting area computed.
122
+
123
+ The formula for the surface area of a sphere is more difficult to derive: because a sphere has nonzero Gaussian curvature, it cannot be flattened out. The formula for the surface area of a sphere was first obtained by Archimedes in his work On the Sphere and Cylinder. The formula is:[6]
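+ (In standard notation, the formula is A = 4πr² for a sphere of radius r.)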
124
+
125
+ (see Green's theorem) or the z-component of
126
+
127
+ To find the bounded area between two quadratic functions, we subtract one from the other to write the difference as
128
+
129
+ where f(x) is the quadratic upper bound and g(x) is the quadratic lower bound. Define the discriminant of f(x)-g(x) as
130
+
131
+ By simplifying the integral formula between the graphs of two functions (as given in the section above) and using Vieta's formula, we can obtain[26][27]
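+ Writing f(x) − g(x) = ax² + bx + c with discriminant Δ = b² − 4ac, the standard closed form is A = Δ√Δ / (6a²) = (b² − 4ac)^{3/2} / (6a²). The Python sketch below is our own illustration (not from the source) and simply checks this closed form against direct numerical integration.
+
+ import numpy as np
+
+ def area_between_quadratics(a, b, c):
+     """Closed-form area between two curves whose difference is f(x) - g(x) = a*x**2 + b*x + c."""
+     disc = b * b - 4 * a * c  # discriminant of f - g; must be positive for two crossing points
+     return disc ** 1.5 / (6 * a * a)
+
+ # Check on f(x) - g(x) = -x**2 + 1 (a = -1, b = 0, c = 1); the exact area is 4/3.
+ a, b, c = -1.0, 0.0, 1.0
+ x1, x2 = sorted(np.roots([a, b, c]).real)         # crossing points x = -1 and x = 1
+ xs = np.linspace(x1, x2, 100001)
+ numeric = np.trapz(a * xs**2 + b * xs + c, xs)    # direct numerical integration
+ print(area_between_quadratics(a, b, c), numeric)  # both ≈ 1.3333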
132
+
133
+ The above remains valid if one of the bounding functions is linear instead of quadratic.
134
+
135
+ The general formula for the surface area of the graph of a continuously differentiable function {\displaystyle z=f(x,y),} where {\displaystyle (x,y)\in D\subset \mathbb {R} ^{2}} and {\displaystyle D} is a region in the xy-plane with the smooth boundary:
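+ (In standard notation, the formula is {\displaystyle A=\iint _{D}{\sqrt {f_{x}^{2}+f_{y}^{2}+1}}\,dx\,dy}, where f_x and f_y are the partial derivatives of f.)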
186
+
187
+ An even more general formula for the area of the graph of a parametric surface in the vector form {\displaystyle \mathbf {r} =\mathbf {r} (u,v),} where {\displaystyle \mathbf {r} } is a continuously differentiable vector function of {\displaystyle (u,v)\in D\subset \mathbb {R} ^{2}} is:[8]
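+ (In standard notation, the formula is {\displaystyle A=\iint _{D}\left|{\frac {\partial \mathbf {r} }{\partial u}}\times {\frac {\partial \mathbf {r} }{\partial v}}\right|\,du\,dv}, the integral of the magnitude of the cross product of the partial derivatives of r.)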
244
+
245
+ The above calculations show how to find the areas of many common shapes.
246
+
247
+ The areas of irregular polygons can be calculated using the "Surveyor's formula".[24]
248
+
249
+ The isoperimetric inequality states that, for a closed curve of length L (so the region it encloses has perimeter L) and for area A of the region that it encloses,
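+ (In standard notation, the inequality is 4πA ≤ L².)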
250
+
251
+ and equality holds if and only if the curve is a circle. Thus a circle has the largest area of any closed figure with a given perimeter.
252
+
253
+ At the other extreme, a figure with given perimeter L could have an arbitrarily small area, as illustrated by a rhombus that is "tipped over" arbitrarily far so that two of its angles are arbitrarily close to 0° and the other two are arbitrarily close to 180°.
254
+
255
+ For a circle, the ratio of the area to the circumference (the term for the perimeter of a circle) equals half the radius r. This can be seen from the area formula πr2 and the circumference formula 2πr.
256
+
257
+ The area of a regular polygon is half its perimeter times the apothem (where the apothem is the distance from the center to the nearest point on any side).
258
+
259
+ Doubling the edge lengths of a polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the dimension of the space the polygon resides in). But if the one-dimensional lengths of a fractal drawn in two dimensions are all doubled, the spatial content of the fractal scales by a power of two that is not necessarily an integer. This power is called the fractal dimension of the fractal.[29]
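+ As an illustration not in the source text: for the Sierpinski triangle, doubling the edge lengths produces a figure containing three copies of the original, so its content scales by a factor of 3 = 2^d. Its fractal dimension is therefore d = log 3 / log 2 ≈ 1.585, strictly between 1 and 2. A two-line Python check:
+
+ import math
+ print(round(math.log(3) / math.log(2), 3))  # 1.585, the fractal (similarity) dimension of the Sierpinski triangle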
261
+
262
+ There are an infinitude of lines that bisect the area of a triangle. Three of them are the medians of the triangle (which connect the sides' midpoints with the opposite vertices), and these are concurrent at the triangle's centroid; indeed, they are the only area bisectors that go through the centroid. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter (the center of its incircle). There are either one, two, or three of these for any given triangle.
263
+
264
+ Any line through the midpoint of a parallelogram bisects the area.
265
+
266
+ All area bisectors of a circle or other ellipse go through the center, and any chords through the center bisect the area. In the case of a circle they are the diameters of the circle.
267
+
268
+ Given a wire contour, the surface of least area spanning ("filling") it is a minimal surface. Familiar examples include soap films.
269
+
270
+ The question of the filling area of the Riemannian circle remains open.[30]
271
+
272
+ The circle has the largest area of any two-dimensional object having the same perimeter.
273
+
274
+ A cyclic polygon (one inscribed in a circle) has the largest area of any polygon with a given number of sides of the same lengths.
275
+
276
+ A version of the isoperimetric inequality for triangles states that the triangle of greatest area among all those with a given perimeter is equilateral.[31]
277
+
278
+ The triangle of largest area of all those inscribed in a given circle is equilateral; and the triangle of smallest area of all those circumscribed around a given circle is equilateral.[32]
279
+
280
+ The ratio of the area of the incircle to the area of an equilateral triangle, {\displaystyle {\frac {\pi }{3{\sqrt {3}}}}}, is larger than that of any non-equilateral triangle.[33]
302
+
303
+ The ratio of the area to the square of the perimeter of an equilateral triangle, {\displaystyle {\frac {1}{12{\sqrt {3}}}},} is larger than that for any other triangle.[31]
en/5551.html.txt ADDED
@@ -0,0 +1,5 @@
 
 
 
 
 
 
1
+ Deafness has varying definitions in cultural and medical contexts. When used as a cultural label especially within the culture, the word deaf is often written with a capital D and referred to as "big D Deaf" in speech and sign. When used as a label for the audiological condition, it is written with a lower case d.[1][2]
2
+
3
+ In a cultural context, Deaf culture refers to a tight-knit cultural group of people whose primary language is signed, and who practice social and cultural norms which are distinct from those of the surrounding hearing community. This community does not automatically include all those who are clinically or legally deaf, nor does it exclude every hearing person. According to Baker and Padden, it includes any person or persons who "identifies him/herself as a member of the Deaf community, and other members accept that person as a part of the community,"[3] an example being children of deaf adults with normal hearing ability. It includes the set of social beliefs, behaviors, art, literary traditions, history, values, and shared institutions of communities that are influenced by deafness and which use sign languages as the main means of communication.[1][2] Members of the Deaf community tend to view deafness as a difference in human experience rather than a disability or disease.[4][5]
4
+
5
+ In a medical context, deafness is defined as a degree of hearing loss such that a person is unable to understand speech, even in the presence of amplification.[6] In profound deafness, even the highest intensity sounds produced by an audiometer (an instrument used to measure hearing by producing pure tone sounds through a range of frequencies) may not be detected. In total deafness, no sounds at all, regardless of amplification or method of production, can be heard.
en/5552.html.txt ADDED
@@ -0,0 +1,215 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Surfing is a surface water pastime in which the wave rider, referred to as a surfer, rides on the forward part, or face, of a moving wave, which usually carries the surfer towards the shore. Waves suitable for surfing are primarily found in the ocean, but can also be found in lakes or rivers in the form of a standing wave or tidal bore. However, surfers can also utilize artificial waves such as those from boat wakes and the waves created in artificial wave pools.
4
+
5
+ The term surfing usually refers to the act of riding a wave using a board, regardless of the stance. There are several types of boards. The native peoples of the Pacific, for instance, surfed waves on alaia, paipo, and other such craft, and did so on their belly and knees. The modern-day definition of surfing, however, most often refers to a surfer riding a wave standing on a surfboard; this is also referred to as stand-up surfing.
6
+
7
+ Another prominent form of surfing is body boarding, in which a surfer rides the wave on a bodyboard, either lying on their belly, drop knee (one foot and one knee on the board), or sometimes even standing up on a body board. Other types of surfing include knee boarding, surf matting (riding inflatable mats), and using foils. Body surfing, where the wave is surfed without a board, using the surfer's own body to catch and ride the wave, is very common and is considered by some to be the purest form of surfing. The form of body surfing closest to using a board employs a handboard, which normally has one strap over it to fit one hand in.
8
+
9
+ Three major subdivisions within stand-up surfing are stand-up paddling, long boarding and short boarding with several major differences including the board design and length, the riding style, and the kind of wave that is ridden.
10
+
11
+ In tow-in surfing (most often, but not exclusively, associated with big wave surfing), a motorized water vehicle, such as a personal watercraft, tows the surfer into the wave front, helping the surfer match a large wave's speed, which is generally a higher speed than a self-propelled surfer can produce. Surfing-related sports such as paddle boarding and sea kayaking do not require waves, and other derivative sports such as kite surfing and windsurfing rely primarily on wind for power, yet all of these platforms may also be used to ride waves. Recently, with the use of V-drive boats, wakesurfing, in which one surfs on the wake of a boat, has emerged. The Guinness Book of World Records recognized a 23.8 m (78 ft) wave ride by Garrett McNamara at Nazaré, Portugal as the largest wave ever surfed.[1]
12
+
13
+ For hundreds of years, surfing was a central part of Polynesian culture. Surfing may have been observed by British explorers at Tahiti in 1767. Samuel Wallis and the crew members of HMS Dolphin were the first Britons to visit the island in June of that year. Another candidate is the botanist Joseph Banks[2] being part of the first voyage of James Cook on HMS Endeavour, who arrived on Tahiti on 10 April 1769. Lieutenant James King was the first person to write about the art of surfing on Hawaii when he was completing the journals of Captain James Cook upon Cook's death in 1779.
14
+
15
+ When Mark Twain visited Hawaii in 1866 he wrote,
16
+
17
+ In one place we came upon a large company of naked natives, of both sexes and all ages, amusing themselves with the national pastime of surf-bathing.[3]
18
+
19
+ References to surf riding on planks and single canoe hulls are also verified for pre-contact Samoa, where surfing was called fa'ase'e or se'egalu (see Augustin Krämer, The Samoa Islands[4]), and Tonga, far pre-dating the practice of surfing by Hawaiians and eastern Polynesians by over a thousand years.
20
+
21
+ In July 1885, three teenage Hawaiian princes took a break from their boarding school, St. Mathew's Hall in San Mateo, and came to cool off in Santa Cruz, California. There, David Kawānanakoa, Edward Keliʻiahonui and Jonah Kūhiō Kalanianaʻole surfed the mouth of the San Lorenzo River on custom-shaped redwood boards, according to surf historians Kim Stoner and Geoff Dunn.[5] In 1890 the pioneer in agricultural education John Wrightson reputedly became the first British surfer when instructed by two Hawaiian students at his college.[6][7][8]
22
+
23
+ George Freeth (8 November 1883 – 7 April 1919) is often credited as being the "Father of Modern Surfing". He is thought to have been the first modern surfer.
24
+
25
+ In 1907, the eclectic interests of the land baron Henry E. Huntington brought the ancient art of surfing to the California coast. While on vacation, Huntington had seen Hawaiian boys surfing the island waves. Looking for a way to entice visitors to the area of Redondo Beach, where he had heavily invested in real estate, he hired a young Hawaiian to ride surfboards. George Freeth decided to revive the art of surfing, but had little success with the huge 500 cm (16 ft) hardwood boards that were popular at that time. When he cut them in half to make them more manageable, he created the original "Long board", which made him the talk of the islands. To the delight of visitors, Freeth exhibited his surfing skills twice a day in front of the Hotel Redondo. Another native Hawaiian, Duke Kahanamoku, spread surfing to both the U.S. and Australia, riding the waves after displaying the swimming prowess that won him Olympic gold medals in 1912 and 1920.
26
+
27
+ In 1975, a professional tour started.[9] That year Margo Oberg became the first female professional surfer.[9]
28
+
29
+ Swell is generated when the wind blows consistently over a large area of open water, called the wind's fetch. The size of a swell is determined by the strength of the wind, the length of its fetch, and its duration. Because of this, the surf tends to be larger and more prevalent on coastlines exposed to large expanses of ocean traversed by intense low pressure systems.
30
+
31
+ Local wind conditions affect wave quality since the surface of a wave can become choppy in blustery conditions. Ideal conditions include a light to moderate "offshore" wind, because it blows into the front of the wave, making it a "barrel" or "tube" wave. Waves are left-handed or right-handed depending upon the breaking formation of the wave.
32
+
33
+ Waves are generally recognized by the surfaces over which they break.[10] For example, there are beach breaks, reef breaks and point breaks.
34
+
35
+ The most important influence on wave shape is the topography of the seabed directly behind and immediately beneath the breaking wave. The contours of the reef or bar front become stretched by diffraction. Each break is different since each location's underwater topography is unique. At beach breaks, sandbanks change shape from week to week. Surf forecasting is aided by advances in information technology. Mathematical modeling graphically depicts the size and direction of swells around the globe.
36
+
37
+ Swell regularity varies across the globe and throughout the year. During winter, heavy swells are generated in the mid-latitudes, when the North and South polar fronts shift toward the Equator. The predominantly Westerly winds generate swells that advance Eastward, so waves tend to be largest on West coasts during winter months. However, an endless train of mid-latitude cyclones cause the isobars to become undulated, redirecting swells at regular intervals toward the tropics.
38
+
39
+ East coasts also receive heavy winter swells when low-pressure cells form in the sub-tropics, where slow moving highs inhibit their movement. These lows produce a shorter fetch than polar fronts, however, they can still generate heavy swells since their slower movement increases the duration of a particular wind direction. The variables of fetch and duration both influence how long wind acts over a wave as it travels since a wave reaching the end of a fetch behaves as if the wind died.
40
+
41
+ During summer, heavy swells are generated when cyclones form in the tropics. Tropical cyclones form over warm seas, so their occurrence is influenced by El Niño & La Niña cycles. Their movements are unpredictable.
42
+
43
+ Surf travel and some surf camps offer surfers access to remote, tropical locations, where tradewinds ensure offshore conditions. Since winter swells are generated by mid-latitude cyclones, their regularity coincides with the passage of these lows. Swells arrive in pulses, each lasting for a couple of days, with a few days between each swell.
44
+
45
+ The availability of free model data from the NOAA has allowed the creation of several surf forecasting websites.
46
+
47
+ Tube shape is defined by length to width ratio. A perfectly cylindrical vortex has a ratio of 1:1. Other forms include:
48
+
49
+ Tube speed is defined by angle of peel line.
50
+
51
+ The value of good surf in attracting surf tourism has prompted the construction of artificial reefs and sand bars. Artificial surfing reefs can be built with durable sandbags or concrete, and resemble a submerged breakwater. These artificial reefs not only provide a surfing location, but also dissipate wave energy and shelter the coastline from erosion. Ships such as Seli 1 that have accidentally stranded on sandy bottoms, can create sandbanks that give rise to good waves.[11]
52
+
53
+ An artificial reef known as Chevron Reef was constructed in El Segundo, California in hopes of creating a new surfing area. However, the reef failed to produce any quality waves and was removed in 2008. In Kovalam, South West India, an artificial reef has, however, successfully provided the local community with a quality lefthander, stabilized coastal soil erosion, and provided good habitat for marine life.[12] ASR Ltd., a New Zealand-based company, constructed the Kovalam reef and is working on another reef in Boscombe, England.
54
+
55
+ Even with artificial reefs in place, a tourist's vacation time may coincide with a "flat spell", when no waves are available. Completely artificial Wave pools aim to solve that problem by controlling all the elements that go into creating perfect surf, however there are only a handful of wave pools that can simulate good surfing waves, owing primarily to construction and operation costs and potential liability. Most wave pools generate waves that are too small and lack the power necessary to surf. The Seagaia Ocean Dome, located in Miyazaki, Japan, was an example of a surfable wave pool. Able to generate waves with up to 3 m (10 ft) faces, the specialized pump held water in 20 vertical tanks positioned along the back edge of the pool. This allowed the waves to be directed as they approach the artificial sea floor. Lefts, Rights, and A-frames could be directed from this pump design providing for rippable surf and barrel rides. The Ocean Dome cost about $2 billion to build and was expensive to maintain.[13] The Ocean Dome was closed in 2007. In England, construction is nearing completion on the Wave,[14] situated near Bristol, which will enable people unable to get to the coast to enjoy the waves in a controlled environment, set in the heart of nature.
56
+
57
+ There are two main types of artificial waves that exist today. One is the artificial or stationary wave, which simulates a moving, breaking wave by pumping a layer of water against a smooth structure mimicking the shape of a breaking wave. Because of the velocity of the rushing water, the wave and the surfer can remain stationary while the water rushes by under the surfboard. Artificial waves of this kind provide the opportunity to try surfing and learn its basics in a moderately small and controlled environment near or far from locations with natural surf.
58
+
59
+ Another artificial wave can be made through use of a wave pool. These wave pools strive to make a wave that replicates a real ocean wave more than the stationary wave does. In 2018, the first professional surfing tournament in a wave pool was held.[15]
60
+
61
+ Surfers represent a diverse culture based on riding the waves. Some people practice surfing as a recreational activity while others make it the central focus of their lives. Surfing culture is most dominant in Hawaii and California because these two states offer the best surfing conditions. However, waves can be found wherever there is coastline, and a tight-knit yet far-reaching subculture of surfers has emerged throughout America. Some historical markers of the culture included the woodie, the station wagon used to carry surfers' boards, as well as boardshorts, the long swim shorts typically worn while surfing. Surfers also wear wetsuits in colder regions.
62
+
63
+ The sport is also a significant part of Australia's eastern coast sub-cultural life,[16] especially in New South Wales, where the weather and water conditions are most favourable for surfing.
64
+
65
+ During the 1960s, as surfing caught on in California, its popularity spread through American pop culture. Several teen movies, starting with the Gidget series in 1959, transformed surfing into a dream life for American youth. Later movies, including Beach Party (1963), Ride the Wild Surf (1964), and Beach Blanket Bingo (1965) promoted the California dream of sun and surf. Surf culture also fueled the early records of the Beach Boys.
66
+
67
+ The sport of surfing now represents a multibillion-dollar industry especially in clothing and fashion markets. The World Surf League (WSL) runs the championship tour, hosting top competitors in some of the best surf spots around the globe. A small number of people make a career out of surfing by receiving corporate sponsorships and performing for photographers and videographers in far-flung destinations; they are typically referred to as freesurfers. Sixty-six surfboarders on a 13 m (42 ft) long surfboard set a record in Huntington Beach, California for most people on a surfboard at one time. Dale Webster consecutively surfed for 14,641 days, making it his main life focus.
68
+
69
+ When the waves were flat, surfers persevered with sidewalk surfing, which is now called skateboarding. Sidewalk surfing has a similar feel to surfing and requires only a paved road or sidewalk. To create the feel of the wave, surfers even sneak into empty backyard swimming pools to ride in, known as pool skating. Eventually, surfing made its way to the slopes with the invention of the Snurfer, later credited as the first snowboard. Many other board sports have been invented over the years, but all can trace their heritage back to surfing.
70
+
71
+ Many surfers claim to have a spiritual connection with the ocean, describing surfing, the surfing experience, both in and out of the water, as a type of spiritual experience or a religion.[17]
72
+
73
+ Standup surfing begins when the surfer paddles toward shore in an attempt to match the speed of the wave (the same applies whether the surfer is standup paddling, bodysurfing, boogie-boarding or using some other type of watercraft, such as a waveski or kayak). Once the wave begins to carry the surfer forward, the surfer stands up and proceeds to ride the wave. The basic idea is to position the surfboard so it is just ahead of the breaking part (whitewash) of the wave. A common problem for beginners is being able to catch the wave at all.
74
+
75
+ Surfers' skills are tested by their ability to control their board in difficult conditions, riding challenging waves, and executing maneuvers such as strong turns and cutbacks (turning board back to the breaking wave) and carving (a series of strong back-to-back maneuvers). More advanced skills include the floater (riding on top of the breaking curl of the wave), and off the lip (banking off the breaking wave). A newer addition to surfing is the progression of the air whereby a surfer propels off the wave entirely up into the air, and then successfully lands the board back on the wave.
76
+
77
+ The tube ride is considered to be the ultimate maneuver in surfing. As a wave breaks, if the conditions are ideal, the wave will break in an orderly line from the middle to the shoulder, enabling the experienced surfer to position themselves inside the wave as it is breaking. This is known as a tube ride. Viewed from the shore, the tube rider may disappear from view as the wave breaks over the rider's head. The longer the surfer remains in the tube, the more successful the ride. This is referred to as getting tubed, barrelled, shacked or pitted. Some of the world's best known waves for tube riding include Pipeline on the North shore of Oahu, Teahupoo in Tahiti and G-Land in Java. Other names for the tube include "the barrel", and "the pit".
78
+
79
+ Hanging ten and hanging five are moves usually specific to long boarding. Hanging Ten refers to having both feet on the front end of the board with all of the surfer's toes off the edge, also known as nose-riding. Hanging Five is having just one foot near the front, with five toes off the edge.
80
+
81
+ Cutback: Generating speed down the line and then turning back to reverse direction.
82
+
83
+ Floater: Suspending the board atop the wave. Very popular on small waves.
84
+
85
+ Top-Turn: Turn off the top of the wave. Sometimes used to generate speed and sometimes to shoot spray.
86
+
87
+ Bottom Turn: A turn at the bottom or mid face of the wave; this maneuver is used to set up other maneuvers such as the top turn, cutback and even aerials.
88
+
89
+ Airs/Aerials: These maneuvers have become more and more prevalent in the sport, in both competition and free surfing. An air is when the surfer achieves enough speed and approaches a section of the wave that acts as a ramp, launching the surfer above the lip line of the wave, “catching air”, and landing either in the transition of the wave or in the whitewash when hitting a close-out section.
90
+
91
+ Airs can either be straight airs or rotational airs. Straight airs have minimal rotation if any, but definitely no more rotation than 90 degrees. Rotational airs require a rotation of 90 degrees or more depending on the level of the surfer.
92
+
93
+ Types of rotations:
94
+
95
+ The Glossary of surfing includes some of the extensive vocabulary used to describe various aspects of the sport of surfing as described in literature on the subject.[18][19] In some cases terms have spread to a wider cultural use. These terms were originally coined by people who were directly involved in the sport of surfing.
96
+
97
+ Many popular surfing destinations have surf schools and surf camps that offer lessons. Surf camps for beginners and intermediates are multi-day lessons that focus on surfing fundamentals. They are designed to take new surfers and help them become proficient riders. All-inclusive surf camps offer overnight accommodations, meals, lessons and surfboards. Most surf lessons begin with instruction and a safety briefing on land, followed by instructors helping students into waves on longboards or "softboards". The softboard is considered the ideal surfboard for learning, due to the fact it is safer, and has more paddling speed and stability than shorter boards. Funboards are also a popular shape for beginners as they combine the volume and stability of the longboard with the manageable size of a smaller surfboard.[20] New and inexperienced surfers typically learn to catch waves on softboards around the 210 to 240 cm (7 to 8 ft) funboard size. Due to the softness of the surfboard the chance of getting injured is substantially minimized.
98
+
99
+ Typical surfing instruction is best performed one-on-one, but can also be done in a group setting. The most popular surf locations offer perfect surfing conditions for beginners, as well as challenging breaks for advanced students. The ideal conditions for learning would be small waves that crumble and break softly, as opposed to the steep, fast-peeling waves desired by more experienced surfers. When available, a sandy seabed is generally safer.
100
+
101
+ Surfing can be broken into several skills: paddling strength, positioning to catch the wave, timing, and balance. Paddling out requires strength, but also the mastery of techniques to break through oncoming waves (duck diving, eskimo roll). Take-off positioning requires experience at predicting the wave set and where the waves will break. The surfer must pop up quickly as soon as the wave starts pushing the board forward. Preferred positioning on the wave is determined by experience at reading wave features including where the wave is breaking.[21] Balance plays a crucial role in standing on a surfboard. Thus, balance training exercises are a good preparation. Practicing with a balance board or swing boarding helps novices master the art.
102
+
103
+ The repetitive cycle of paddling, popping up, and balancing requires stamina, explosivity, and near-constant core stabilization. Having a proper warm up routine can help prevent injuries.[22]
104
+
105
+ Surfing can be done on various equipment, including surfboards, longboards, stand up paddle boards (SUPs), bodyboards, wave skis, skimboards, kneeboards, surf mats and macca's trays. Surfboards were originally made of solid wood and were large and heavy (often up to 370 cm (12 ft) long and having a mass of 70 kg (150 lb)). Lighter balsa wood surfboards (first made in the late 1940s and early 1950s) were a significant improvement, not only in portability, but also in increasing maneuverability.
106
+
107
+ Most modern surfboards are made of fiberglass foam (PU), with one or more wooden strips or "stringers", fiberglass cloth, and polyester resin (PE). An emerging board material is epoxy resin and Expanded Polystyrene foam (EPS) which is stronger and lighter than traditional PU/PE construction. Even newer designs incorporate materials such as carbon fiber and variable-flex composites in conjunction with fiberglass and epoxy or polyester resins. Since epoxy/EPS surfboards are generally lighter, they will float better than a traditional PU/PE board of similar size, shape and thickness. This makes them easier to paddle and faster in the water. However, a common complaint of EPS boards is that they do not provide as much feedback as a traditional PU/PE board. For this reason, many advanced surfers prefer that their surfboards be made from traditional materials.
108
+
109
+ Other equipment includes a leash (to stop the board from drifting away after a wipeout, and to prevent it from hitting other surfers), surf wax, traction pads (to keep a surfer's feet from slipping off the deck of the board), and fins (also known as skegs) which can either be permanently attached (glassed-on) or interchangeable. Sportswear designed or particularly suitable for surfing may be sold as boardwear (the term is also used in snowboarding). In warmer climates, swimsuits, surf trunks or boardshorts are worn, and occasionally rash guards; in cold water surfers can opt to wear wetsuits, boots, hoods, and gloves to protect them against lower water temperatures. A newer introduction is a rash vest with a thin layer of titanium to provide maximum warmth without compromising mobility. In recent years, there have been advancements in technology that have allowed surfers to pursue even bigger waves with added elements of safety. Big wave surfers are now experimenting with inflatable vests or colored dye packs to help decrease their odds of drowning.[23]
110
+
111
+ There are many different surfboard sizes, shapes, and designs in use today. Modern longboards, generally 270 to 300 cm (9 to 10 ft) in length, are reminiscent of the earliest surfboards, but now benefit from modern innovations in surfboard shaping and fin design. Competitive longboard surfers need to be competent at traditional walking manoeuvres, as well as the short-radius turns normally associated with shortboard surfing. The modern shortboard began life in the late 1960s and has evolved into today's common thruster style, defined by its three fins, usually around 180 to 210 cm (6 to 7 ft) in length. The thruster was invented by Australian shaper Simon Anderson.
112
+
113
+ Midsize boards, often called funboards, provide more maneuverability than a longboard, with more flotation than a shortboard. While many surfers find that funboards live up to their name, providing the best of both surfing modes, others are critical.
114
+
115
+ There are also various niche styles, such as the Egg, a longboard-style short board targeted at people who want to ride a shortboard but need more paddle power. The Fish is a board that is typically shorter, flatter, and wider than a normal shortboard, often with a split tail (known as a swallow tail). The Fish often has two or four fins and is specifically designed for surfing smaller waves. For big waves there is the Gun, a long, thick board with a pointed nose and tail (known as a pin tail) specifically designed for big waves.
116
+
117
+ The physics of surfing involves the physical oceanographic properties of wave creation in the surf zone, the characteristics of the surfboard, and the surfer's interaction with the water and the board.
118
+
119
+ Ocean waves are defined as a collection of dislocated water parcels that undergo a cycle of being forced past their normal position and being restored back to their normal position.[25] Wind-caused ripples and eddies form waves that gradually gain speed and distance (fetch). Waves increase in energy and speed, and then become longer and stronger.[26] The fully developed sea has the strongest wave action, experiencing storms lasting 10 hours and creating 15-meter wave heights in the open ocean.[25]
120
+
121
+ The waves created in the open ocean are classified as deep-water waves. Deep-water waves have no bottom interaction and the orbits of these water molecules are circular; their wavelength is short relative to water depth and the velocity decays before reaching the bottom of the water basin.[25] Deep-water waves occur at depths greater than ½ their wavelengths. Wind forces waves to break in the deep sea.
122
+
123
+ Deep-water waves travel to shore and become shallow-water waves. Shallow-water waves have depths less than ½ of their wavelength. Shallow-water waves' wavelengths are long relative to water depth and their orbitals are elliptical. The wave velocity affects the entire water basin. The water interacts with the bottom as it approaches shore and has a drag interaction. The drag interaction pulls on the bottom of the wave, causes refraction, increases the height, decreases the celerity (or the speed of the wave form), and the top (crest) falls over. This phenomenon happens because the velocity of the top of the wave is greater than the velocity of the bottom of the wave.[25]
124
+
125
+ The surf zone is a place of convergence of multiple wave types, creating complex wave patterns. A wave suitable for surfing results from maximum speeds of 5 meters per second. This speed is relative, because local onshore winds can cause waves to break.[26] In the surf zone, shallow water waves are carried by global winds to the beach and interact with local winds to make surfing waves.[26][27]
126
+
127
+ Different onshore and off-shore wind patterns in the surf zone create different types of waves. Onshore winds cause random wave breaking patterns and are more suitable for experienced surfers.[26][27] Light offshore winds create smoother waves, while strong direct offshore winds cause plunging or large barrel waves.[26] Barrel waves are large because the water depth is small when the wave breaks. Thus, the breaker intensity (or force) increases, and the wave speed and height increase.[26] Off-shore winds produce non-surfable conditions by flattening a weak swell. Weak swell is made from surface gravity forces and has long wavelengths.[26][28]
128
+
129
+ Surfing waves can be analyzed using the following parameters: breaking wave height, wave peel angle (α), wave breaking intensity, and wave section length. The breaking wave height has two measurements, the relative heights estimated by surfers and the exact measurements done by physical oceanographers. Measurements done by surfers were 1.36 to 2.58 times higher than the measurements done by scientists. The scientifically concluded wave heights that are physically possible to surf are 1 to 20 meters.[26]
130
+
131
+ The wave peel angle is one of the main constituents of a potential surfing wave. The wave peel angle measures the angle between the peel line and the line tangent to the breaking crest line. This angle controls the speed of the wave crest. The speed of the wave is an addition of the propagation velocity vector (Vw) and the peel velocity vector (Vp), which results in the overall velocity of the wave (Vs).[26]
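+ A minimal sketch of this vector addition (ours, not from the source), assuming for illustration that the propagation velocity points toward shore and the peel velocity points along the crest:
+
+ import math
+
+ Vw = (4.0, 0.0)  # assumed propagation velocity toward shore, m/s
+ Vp = (0.0, 3.0)  # assumed peel velocity along the crest, m/s
+ Vs = (Vw[0] + Vp[0], Vw[1] + Vp[1])    # overall wave velocity as the vector sum
+ print(Vs, round(math.hypot(*Vs), 2))   # (4.0, 3.0) 5.0 -- the speed a surfer must match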
132
+
133
+ Wave breaking intensity measures the force of the wave as it breaks, spills, or plunges (a plunging wave is termed by surfers as a "barrel wave"). Wave section length is the distance between two breaking crests in a wave set. Wave section length can be hard to measure because local winds, non-linear wave interactions, island sheltering, and swell interactions can cause multifarious wave configurations in the surf zone.[26]
134
+
135
+ The parameters breaking wave height, wave peel angle (α), wave breaking intensity, and wave section length are important because they are standardized by past oceanographers who researched surfing; these parameters have been used to create a guide that matches the type of wave formed to the skill level of the surfer.[26]
136
+
137
+ Table 1 shows a relationship of smaller peel angles correlating with a higher skill level of surfer. Smaller wave peel angles increase the velocities of waves. A surfer must know how to react and paddle quickly to match the speed of the wave to catch it. Therefore, more experience is required to catch a low peel angle wave. More experienced surfers can handle longer section lengths, increased velocities, and higher wave heights.[26] Different locations offer different types of surfing conditions for each skill level.
138
+
139
+ A surf break is an area with an obstruction or an object that causes a wave to break. Surf breaks entail multiple scale phenomena. Wave section creation has micro-scale factors of peel angle and wave breaking intensity. The micro-scale components influence wave height and variations on wave crests. The mesoscale components of surf breaks are the ramp, platform, wedge, or ledge that may be present at a surf break. Macro-scale processes are the global winds that initially produce offshore waves. Types of surf breaks are headlands (point break), beach break, river/estuary entrance bar, reef breaks, and ledge breaks.[26]
140
+
141
+ A headland or point break interacts with the water by causing refraction around the point or headland. The point absorbs the high frequency waves and long period waves persist, which are easier to surf. Examples of locations that have headland or point break induced surf breaks are Dunedin (New Zealand), Raglan, Malibu (California), Rincon (California), and Kirra (Australia).[26]
142
+
143
+ A beach break happens where waves break from offshore waves, and onshore sandbars and rips. Wave breaks happen successively at beach breaks. Example locations are Tairua and Aramoana Beach (New Zealand) and the Gold Coast (Australia).[26]
144
+
145
+ A river or estuary entrance bar creates waves from the ebb tidal delta, sediment outflow, and tidal currents. An ideal estuary entrance bar exists in Whangamata Bar, New Zealand.[26]
146
+
147
+ A reef break is conducive to surfing because large waves consistently break over the reef. The reef is usually made of coral, and because of this, many injuries occur while surfing reef breaks. However, the waves that are produced by reef breaks are some of the best in the world. Famous reef breaks are present in Padang Padang (Indonesia), Pipeline (Hawaii), Uluwatu (Bali), and Teahupo'o (Tahiti).[26][29] When surfing a reef break, the depth of the water needs to be considered as surfboards have fins on the bottom of the board.
148
+
149
+ A ledge break is formed by steep rock ledges that make intense waves, because the waves travel through deeper water then abruptly reach shallower water at the ledge. Shark Island, Australia is a location with a ledge break. Ledge breaks create difficult surfing conditions, sometimes leaving body surfing as the only feasible way to confront the waves.[26]
150
+
151
+ Jetties are added to bodies of water to regulate erosion, preserve navigation channels, and make harbors. Jetties are classified into four different types and have two main controlling variables: the type of delta and the size of the jetty.[30]
152
+
153
+ The first classification is a type 1 jetty. This type of jetty is significantly longer than the surf zone width and the waves break at the shore end of the jetty. The effect of a Type 1 jetty is sediment accumulation in a wedge formation on the jetty. These waves are large and increase in size as they pass over the sediment wedge formation. An example of a Type 1 jetty is Mission Beach, San Diego, California. This 1000-meter jetty was installed in 1950 at the mouth of Mission Bay. The surf waves happen north of the jetty, are longer waves, and are powerful. The bathymetry of the sea bottom in Mission Bay has a wedge shape formation that causes the waves to refract as they become closer to the jetty.[30] The waves converge constructively after they refract and increase the sizes of the waves.
154
+
155
+ A type 2 jetty occurs in an ebb tidal delta, a delta transitioning between high and low tide. This area has shallow water, refraction, and distinctive seabed shapes that create large wave heights.[30]
156
+
157
+ An example of a type 2 jetty is called "The Poles" in Atlantic Beach, Florida. Atlantic Beach is known to have flat waves, with exceptions during major storms. However, "The Poles" has larger than normal waves due to a 500-meter jetty that was installed on the south side of the St. Johns River mouth. This jetty was built to make a deep channel in the river. It formed a delta at "The Poles". This is a special area because the jetty increases wave size for surfing, when comparing pre-conditions and post-conditions of the southern St. Johns River mouth area.[30]
158
+
159
+ The wave size at "The Poles" depends on the direction of the incoming water. When easterly waters (from 55°) interact with the jetty, they create waves larger than southern waters (from 100°). When southern waves (from 100°) move toward "The Poles", one of the waves breaks north of the southern jetty and the other breaks south of the jetty. This does not allow for merging to make larger waves. Easterly waves, from 55°, converge north of the jetty and unite to make bigger waves.[30]
160
+
161
+ A type 3 jetty is in an ebb tidal area with an unchanging seabed that has naturally created waves. An example of a type 3 jetty occurs at “Southside” Tamarack, Carlsbad, California.[30]
162
+
163
+ A type 4 jetty is one that no longer functions nor traps sediment. The waves are created from reefs in the surf zone. A type 4 jetty can be found in Tamarack, Carlsbad, California.[30]
164
+
165
+ Rip currents are fast, narrow currents that are caused by onshore transport within the surf zone and the successive return of the water seaward.[31][32] The wedge bathymetry makes a convenient and consistent rip current of 5–10 meters that brings the surfers to the “take off point” then out to the beach.[30]
166
+
167
+ Oceanographers have two theories on rip current formation. The wave interaction model assumes that two edges of waves interact, create differing wave heights, and cause longshore transport of nearshore currents. The Boundary Interaction Model assumes that the topography of the sea bottom causes nearshore circulation and longshore transport; the result of both models is a rip current.[31]
168
+
169
+ Rip currents can be extremely strong and narrow as they extend out of the surf zone into deeper water, reaching speeds from 0.5 m/s (1.6 ft/s) and up to 2.5 m/s (8.2 ft/s),[32][33] which is faster than any human can swim. The water in the jet is sediment rich, bubble rich, and moves rapidly.[32] The rip head of the rip current has long shore movement. Rip currents are common on beaches with mild slopes that experience sizeable and frequent oceanic swell.[33]
170
+
171
+ The vorticity and inertia of rip currents have been studied. From a model of the vorticity of a rip current done at Scripps Institute of Oceanography, it was found that as a fast rip current extends away from shallow water, the vorticity of the current increases and the width of the current decreases.[33][34] This model also acknowledges that friction plays a role and waves are irregular in nature.[34] From data from Sector-Scanning Doppler Sonar at Scripps Institute of Oceanography, it was found that rip currents in La Jolla, CA lasted several minutes, recurred one to four times per hour, and created a wedge with a 45° arch and a radius of 200–400 meters.[32]
172
+
173
+ A longer surfboard of 300 cm (10 ft) causes more friction with the water; therefore, it will be slower than a smaller and lighter board with a length of 180 cm (6 ft). Longer boards are good for beginners who need help balancing. Smaller boards are good for more experienced surfers who want to have more control and maneuverability.[28]
174
+
175
+ When practicing the sport of surfing, the surfer paddles out past the wave break to wait for a wave. When a surfable wave arrives, the surfer must paddle extremely fast to match the velocity of the wave so the wave can accelerate him or her.[28]
176
+
177
+ When the surfer is at wave speed, the surfer must quickly pop up, stay low, and stay toward the front of the wave to become stable and prevent falling as the wave steepens. The acceleration is less toward the front than toward the back. The physics behind the surfing of the wave involves the horizontal acceleration force (F·sinθ) and the vertical force (F·cosθ=mg). Therefore, the surfer should lean forward to gain speed, and lean on the back foot to brake. Also, to increase the length of the ride of the wave, the surfer should travel parallel to the wave crest.[28]
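+ A small worked example of the force balance just described (ours, not from the source; the numbers are assumptions for illustration): if the vertical component satisfies F·cosθ = mg, then the horizontal component F·sinθ produces an acceleration of g·tanθ.
+
+ import math
+
+ g = 9.81                  # gravitational acceleration, m/s^2
+ theta = math.radians(20)  # assumed effective wave-face angle
+ m = 75.0                  # assumed mass of surfer plus board, kg
+
+ F = m * g / math.cos(theta)             # from the vertical balance F*cos(theta) = m*g
+ a_horizontal = F * math.sin(theta) / m  # equals g*tan(theta)
+ print(round(a_horizontal, 2))           # ≈ 3.57 m/s^2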
178
+
179
+ Surfing, like all water sports, carries the inherent risk of drowning.[35] Anyone at any age can learn to surf, but should have at least intermediate swimming skills. Although the board assists a surfer in staying buoyant, it can become separated from the user.[36] A leash, attached to the ankle or knee, can keep a board from being swept away, but does not keep a rider on the board or above water. In some cases, possibly including the drowning of professional surfer Mark Foo, a leash can even be a cause of drowning by snagging on a reef or other object and holding the surfer underwater.[37] By keeping the surfboard close to the surfer during a wipeout, a leash also increases the chances that the board may strike the rider, which could knock him or her unconscious and lead to drowning. A fallen rider's board can become trapped in larger waves, and if the rider is attached by a leash, he or she can be dragged for long distances underwater.[37] Surfers should be careful to remain in smaller surf until they have acquired the advanced skills and experience necessary to handle bigger waves and more challenging conditions. However, even world-class surfers have drowned in extremely challenging conditions.[38]
180
+
181
+ Under the wrong set of conditions, anything that a surfer's body can come in contact with is a potential hazard, including sand bars, rocks, small ice, reefs, surfboards, and other surfers.[39] Collisions with these objects can sometimes cause injuries such as cuts and scrapes and in rare instances, death.
182
+
183
+ A large number of injuries, up to 66%,[40] are caused by collision with a surfboard (nose or fins). Fins can cause deep lacerations and cuts,[41] as well as bruising. While these injuries can be minor, they can open the skin to infection from the sea; groups like Surfers Against Sewage campaign for cleaner waters to reduce the risk of infections. Local bugs and disease can be risk factors when surfing around the globe.[42]
184
+
185
+ Falling off a surfboard or colliding with others is commonly referred to as a wipeout.
186
+
187
+ Sea life can sometimes cause injuries and even fatalities. Animals such as sharks,[43] stingrays, Weever fish, seals and jellyfish can sometimes present a danger.[44] Warmer-water surfers often do the "stingray shuffle" as they walk out through the shallows, shuffling their feet in the sand to scare away stingrays that may be resting on the bottom.[45]
188
+
189
+ Rip currents are water channels that flow away from the shore. Under the wrong circumstances these currents can endanger both experienced and inexperienced surfers. Since a rip current appears to be an area of flat water, tired or inexperienced swimmers or surfers may enter one and be carried out beyond the breaking waves. Although many rip currents are much smaller, the largest rip currents have a width of forty or fifty feet. The flow of water moving out towards the sea in a rip will be stronger than most swimmers, making swimming back to shore difficult, however, by paddling parallel to the shore, a surfer can easily exit a rip current. Alternatively, some surfers actually ride on a rip current because it is a fast and effortless way to get out beyond the zone of breaking waves.[46]
190
+
191
+ The seabed can pose a risk for surfers. If a surfer falls while riding a wave, the wave tosses and tumbles the surfer around, often in a downwards direction. At reef breaks and beach breaks, surfers have been seriously injured and even killed, because of a violent collision with the sea bed, the water above which can sometimes be very shallow, especially at beach breaks or reef breaks during low tide. Cyclops, Western Australia, for example, is one of the biggest and thickest reef breaks in the world, with waves measuring up to 10 m (33 ft) high, but the reef below is only about 2 m (7 ft) below the surface of the water.
192
+
193
+ A January 2018 study by the University of Exeter called the "Beach Bum Survey" found surfers and bodyboarders to be three times as likely as non-surfers to harbor antibiotic-resistant E. coli and four times as likely to harbor other bacteria capable of easily becoming antibiotic resistant. The researchers attributed this to the fact that surfers swallow roughly ten times as much seawater as swimmers.[47][48]
194
+
195
+ Surfers should use ear protection such as ear plugs to avoid surfer's ear, inflammation of the ear or other damage. Surfer's ear is where the bone near the ear canal grows after repeated exposure to cold water, making the ear canal narrower. The narrowed canal makes it harder for water to drain from the ear. This can result in pain, infection and sometimes ringing of the ear. If surfer's ear develops it does so after repeated surfing sessions. Yet, damage such as inflammation of the ear can occur after only surfing once. This can be caused by repeatedly falling off the surfboard into the water and having the cold water rush into the ears, which can exert a damaging amount of pressure. Those with sensitive ears should therefore wear ear protection, even if they are not planning to surf very often.[49]
196
+
197
+ Ear plugs designed for surfers, swimmers and other water athletes are primarily made to keep water out of the ear, thereby letting a protective pocket of air stay inside the ear canal. They can also block cold air, dirt and bacteria. Many designs are made to let sound through, and either float and/or have a leash in case the plug accidentally gets bumped out.[50][51]
198
+
199
+ Surfer's eye (Pterygium (conjunctiva)) is a gradual tissue growth on the cornea of the eye which ultimately can lead to vision loss. The cause of the condition is unclear, but appears to be partly related to long-term exposure to UV light, dust and wind. Prevention may include wearing sunglasses and a hat if in an area with strong sunlight. Surfers and other water-sport athletes should therefore wear eye protection that blocks 100% of the UV rays from the water, as is often used by snow-sport athletes. Surf goggles often have a head strap and ventilation to avoid fogging.[52][53]
200
+
201
+ Users of contact lenses should take extra care, and may consider wearing surfing goggles. Some risks of exposing contact lenses to the elements that can cause eye damage or infections are sand or organisms in the sea water getting between the eye and contact lens, or that lenses might fold.[54][55]
202
+
203
+ Surfer's myelopathy is a rare spinal cord injury causing paralysis of the lower extremities, caused by hyperextension of the back. This is due to one of the main blood vessels of the spine becoming kinked, depriving the spinal cord of oxygen. In some cases the paralysis is permanent. Although any activity where the back is arched can cause this condition (i.e. yoga, pilates, etc.), this rare phenomenon has most often been seen in those surfing for the first time. According to DPT Sergio Florian, some recommendations for preventing myelopathy are a proper warm-up, limiting the session length, and sitting on the board while waiting for waves, rather than lying on it.[56]
204
+
205
+ Beginner surfer in Pacific Beach, California
206
+
207
+ Surfer in Pacific Beach, California
208
+
209
+ A day of big surf in La Jolla, California
210
+
211
+ Big surf in La Jolla, California
212
+
213
+ Very big surf in La Jolla, California
214
+
215
+ Oops, La Jolla, California
en/5553.html.txt ADDED
@@ -0,0 +1,220 @@
1
+ Coordinates: 4°N 56°W
2
+
3
+ Location of Suriname in South America (grey)
4
+
5
+ Suriname (/ˈsjʊərɪnæm/, US also /-nɑːm/, sometimes spelled Surinam), officially known as the Republic of Suriname (Dutch: Republiek Suriname [reːpyˌblik syːriˈnaːmə]), is a country on the northeastern Atlantic coast of South America. It is bordered by the Atlantic Ocean to the north, French Guiana to the east, Guyana to the west and Brazil to the south. At just under 165,000 square kilometers (64,000 square miles), it is the smallest sovereign state in South America.[note 1] Suriname has a population of approximately 575,990,[8][9] most of whom live on the country's north coast, in and around the capital and largest city, Paramaribo.
6
+
7
+ Suriname is a tropical country whose geography is dominated by forests. These forests are central to Suriname's contribution to mitigating climate change as the first country to reach carbon neutrality. The economy of Suriname is heavily dependent on the exports of bauxite, gold, petroleum and agricultural products.
8
+
9
+ Suriname was long inhabited by various indigenous people before being invaded and contested by European powers from the 16th century, eventually coming under Dutch rule in the late 17th century. As the chief sugar colony during the Dutch colonial period, it was primarily a plantation economy dependent on African slaves and, after abolition of slavery in 1863, indentured servants from Asia. Suriname was ruled by the Dutch-chartered company Society of Suriname between 1683 and 1795. In 1954, Suriname became one of the constituent countries of the Kingdom of the Netherlands. On 25 November 1975, the country of Suriname left the Kingdom of the Netherlands to become an independent state, nonetheless maintaining close economic, diplomatic, and cultural ties to its former colonizer.
10
+
11
+ Suriname is considered to be a culturally Caribbean country, and is a member of the Caribbean Community (CARICOM). While Dutch is the official language of government, business, media, and education,[13] Sranan Tongo, an English-based creole language, is a widely used lingua franca. Suriname is the only sovereign nation outside Europe where Dutch is spoken by a majority of the population. The people of Suriname are among the most diverse in the world, spanning a multitude of ethnic, religious, and linguistic groups.
12
+
13
+ The name Suriname may derive from an indigenous people called Surinen, who inhabited the area at the time of European contact.[14] It may also be derived from a corruption of the name "Surryham" which was the name given to the Suriname River by Lord Willoughby in honour of the Earl of Surrey when an English colony was established under a grant from King Charles II.[15][16][17]
14
+
15
+ British settlers, who founded the first European colony at Marshall's Creek[18] along the Suriname River, spelled the name as "Surinam".
16
+
17
+ When the territory was taken over by the Dutch, it became part of a group of colonies known as Dutch Guiana. The official spelling of the country's English name was changed from "Surinam" to "Suriname" in January 1978, but "Surinam" can still be found in English. A notable example is Suriname's national airline, Surinam Airways. The older English name is reflected in the English pronunciation, /ˈsjʊərɪnæm, -nɑːm/. In Dutch, the official language of Suriname, the pronunciation is [ˌsyriˈnaːmə], with the main stress on the third syllable and a schwa terminal vowel.
18
+
19
+ Indigenous settlement of Suriname dates back to 3,000 BC. The largest tribes were the Arawak, a nomadic coastal tribe that lived by hunting and fishing. They were the first inhabitants in the area. The Carib also settled in the area and conquered the Arawak by using their superior sailing ships. They settled in Galibi (Kupali Yumï, meaning "tree of the forefathers") at the mouth of the Marowijne River. While the larger Arawak and Carib tribes lived along the coast and savanna, smaller groups of indigenous people lived in the inland rainforest, such as the Akurio, Trió, Warrau, and Wayana.
20
+
21
+ Beginning in the 16th century, French, Spanish and English explorers visited the area. A century later, Dutch and English settlers established plantation colonies along the many rivers in the fertile Guiana plains. The earliest documented colony in Guiana was an English settlement named Marshall's Creek along the Suriname River.[18] After that there was another short-lived English colony called Willoughbyland that lasted from 1650 to 1674.
22
+
23
+ Disputes arose between the Dutch and the English for control of this territory. In 1667, during negotiations leading to the Treaty of Breda, the Dutch decided to keep the nascent plantation colony of Suriname they had gained from the English. The English were able to keep New Amsterdam, the main city of the former colony of New Netherland in North America on the mid-Atlantic coast. Already a cultural and economic hub in those days, the city was renamed after the Duke of York: New York City.
24
+
25
+ In 1683, the Society of Suriname was founded by the city of Amsterdam, the Van Aerssen van Sommelsdijck family, and the Dutch West India Company. The society was chartered to manage and defend the colony. The planters of the colony relied heavily on African slaves to cultivate, harvest and process the commodity crops of coffee, cocoa, sugar cane and cotton plantations along the rivers. Planters' treatment of the slaves was notoriously bad[19]—historian C. R. Boxer wrote that "man's inhumanity to man just about reached its limits in Surinam"[20]—and many slaves escaped the plantations. In November 1795, the Society was nationalized by the Batavian Republic and from then on, the Batavian Republic and its legal successors (the Kingdom of Holland and the Kingdom of the Netherlands) governed the territory as a national colony, barring a period of British occupation between 1799 and 1802, and between 1804 and 1816.
26
+
27
+ With the help of the native South Americans living in the adjoining rain forests, these runaway slaves established a new and unique culture in the interior that was highly successful in its own right. They were known collectively in English as Maroons, in French as Nèg'Marrons (literally meaning "brown negroes", that is "pale-skinned negroes"), and in Dutch as Marrons. The Maroons gradually developed several independent tribes through a process of ethnogenesis, as they were made up of slaves from different African ethnicities. These tribes include the Saramaka, Paramaka, Ndyuka or Aukan, Kwinti, Aluku or Boni, and Matawai.
28
+
29
+ The Maroons often raided plantations to recruit new members from the slaves and capture women, as well as to acquire weapons, food and supplies. They sometimes killed planters and their families in the raids; colonists built defenses, which were so important they were shown on 18th-century maps, but these were not sufficient.[21]
30
+
31
+ The colonists also mounted armed campaigns against the Maroons, who generally escaped through the rain forest, which they knew much better than did the colonists. To end hostilities, in the 18th century the European colonial authorities signed several peace treaties with different tribes. They granted the Maroons sovereign status and trade rights in their inland territories, giving them autonomy.
32
+
33
+ From 1861 to 1863, with the American Civil War underway, and enslaved people escaping to Southern territory controlled by the Union, United States President Abraham Lincoln and his administration looked abroad for places to relocate people who were freed from enslavement and who wanted to leave the United States. The administration opened negotiations with the Dutch government regarding African-American emigration to and colonization of the Dutch colony of Suriname. Nothing came of the idea, which was dropped after 1864.[22]
34
+
35
+ The Netherlands abolished slavery in Suriname in 1863, under a gradual process that required enslaved people to work on plantations for 10 transition years for minimal pay, which was considered partial compensation for their masters. After 1873, most freedmen abandoned the plantations where they had worked for several generations in favor of the capital city, Paramaribo.
36
+
37
+ As a plantation colony, Suriname had an economy dependent on labor-intensive commodity crops. To make up for a shortage of labor, the Dutch recruited and transported contract or indentured laborers from the Dutch East Indies (modern Indonesia) and India (the latter through an arrangement with the British, who then ruled the area). In addition, during the late 19th and early 20th centuries, small numbers of laborers, mostly men, were recruited from China and the Middle East.
38
+
39
+ Although Suriname's population remains relatively small, because of this complex colonization and exploitation, it is one of the most ethnically and culturally diverse countries in the world.[23][24]
40
+
41
+ During World War II, on 23 November 1941, under an agreement with the Netherlands government-in-exile, the United States occupied Suriname to protect the bauxite mines to support the Allies' war effort.[25] In 1942, the Dutch government-in-exile began to review the relations between the Netherlands and its colonies in terms of the post-war period.
42
+
43
+ In 1954, Suriname became one of the constituent countries of the Kingdom of the Netherlands, along with the Netherlands Antilles and the Netherlands. In this construction, the Netherlands retained control of its defense and foreign affairs. In 1974, the local government, led by the National Party of Suriname (NPS) (whose membership was largely Creole, meaning ethnically African or mixed African-European) started negotiations with the Dutch government leading towards full independence, which was granted on 25 November 1975. A large part of Suriname's economy for the first decade following independence was fueled by foreign aid provided by the Dutch government.
44
+
45
+ The first President of the country was Johan Ferrier, the former governor, with Henck Arron (the then leader of the NPS) as Prime Minister. In the years leading up to independence, nearly one-third of the population of Suriname emigrated to the Netherlands, amidst concern that the new country would fare worse under independence than it had as a constituent country of the Kingdom of the Netherlands. Surinamese politics did degenerate into ethnic polarisation and corruption soon after independence, with the NPS using Dutch aid money for partisan purposes. Its leaders were accused of fraud in the 1977 elections, in which Arron won a further term, and the discontent was such that a large portion of the population fled to the Netherlands, joining the already significant Surinamese community there.[26]
46
+
47
+ On 25 February 1980, a military coup overthrew Arron's government. It was initiated by a group of 16 sergeants, led by Dési Bouterse.[13] Opponents of the military regime attempted counter-coups in April 1980, August 1980, 15 March 1981, and again on 12 March 1982. The first counter attempt was led by Fred Ormskerk,[27] the second by Marxist-Leninists,[28] the third by Wilfred Hawker, and the fourth by Surendre Rambocus.
48
+
49
+ Hawker escaped from prison during the fourth counter-coup attempt, but he was captured and summarily executed. Between 2 am and 5 am on 7 December 1982, the military, under the leadership of Dési Bouterse, rounded up 13 prominent citizens who had criticized the military dictatorship and held them at Fort Zeelandia in Paramaribo.[29] The dictatorship had all these men executed over the next three days, along with Rambocus and Jiwansingh Sheombar (who was also involved in the fourth counter-coup attempt).
50
+
51
+ National elections were held in 1987. The National Assembly adopted a new constitution that allowed Bouterse to remain in charge of the army. Dissatisfied with the government, Bouterse summarily dismissed the ministers in 1990, by telephone. This event became popularly known as the "Telephone Coup". His power began to wane after the 1991 elections.
52
+
53
+ The brutal civil war between the Suriname army and Maroons loyal to rebel leader Ronnie Brunswijk, begun in 1986, continued and its effects further weakened Bouterse's position during the 1990s. Due to the civil war, more than 10,000 Surinamese, mostly Maroons, fled to French Guiana in the late 1980s.[30]
54
+
55
+ In 1999, the Netherlands tried Bouterse in absentia on drug smuggling charges. He was convicted and sentenced to prison but remained in Suriname.[31]
56
+
57
+ On 19 July 2010, the former dictator Dési Bouterse returned to power when he was elected as the new President of Suriname.[32] Before his election in 2010, he, along with 24 others, had been charged with the murders of 15 prominent dissidents in the December murders. However, in 2012, two months before the verdict in the trial, the National Assembly extended its amnesty law and provided Bouterse and the others with amnesty from these charges. He was reelected on 14 July 2015.[33] However, Bouterse was convicted by a Surinamese court on 29 November 2019 and given a 20-year sentence for his role in the 1982 killings.[34]
58
+
59
+ After winning the 2020 elections,[35] Chan Santokhi was the sole nominee for President of Suriname.[36] On 13 July, Santokhi was elected President by acclamation in an uncontested election.[37] He was inaugurated on 16 July in a ceremony held without the public due to the COVID-19 pandemic.[38]
60
+
61
+ The Republic of Suriname is a representative democratic republic, based on the Constitution of 1987. The legislative branch of government consists of a 51-member unicameral National Assembly, simultaneously and popularly elected for a five-year term.
62
+
63
+ In the elections held on Tuesday, 25 May 2010, the Megacombinatie won 23 of the National Assembly seats, followed by the Nationale Front with 20 seats. A much smaller number, important for coalition-building, went to the "A‑combinatie" and to the Volksalliantie. The parties held negotiations to form coalitions. Elections were held on 25 May 2015, and the National Assembly again elected Dési Bouterse as President.[39]
64
+
65
+ The President of Suriname is elected for a five-year term by a two-thirds majority of the National Assembly. If at least two-thirds of the National Assembly cannot agree to vote for one presidential candidate, a People's Assembly is formed from all National Assembly delegates and regional and municipal representatives who were elected by popular vote in the most recent national election. The president may be elected by a majority of the People's Assembly called for the special election.
66
+
67
+ As head of government, the president appoints a sixteen-minister cabinet. A vice president is normally elected for a five-year term at the same time as the president, by a simple majority in the National Assembly or People's Assembly. There is no constitutional provision for removal or replacement of the president, except in the case of resignation.
68
+
69
+ The judiciary is headed by the High Court of Justice of Suriname (Supreme Court). This court supervises the magistrate courts. Members are appointed for life by the president in consultation with the National Assembly, the State Advisory Council, and the National Order of Private Attorneys.
70
+
71
+ President Dési Bouterse was convicted and sentenced in the Netherlands to 11 years of imprisonment for drug trafficking. He is the main suspect in the court case concerning the 'December murders,' the 1982 assassination of opponents of military rule in Fort Zeelandia, Paramaribo. These two cases still strain relations between the Netherlands and Suriname.[40]
72
+
73
+ Due to Suriname's Dutch colonial history, Suriname had a long-standing special relationship with the Netherlands. The Dutch government has stated that it will only maintain limited contact with the president.[40]
74
+
75
+ Bouterse was elected as president of Suriname in 2010. The Netherlands in July 2014 dropped Suriname as a member of its development program.[41]
76
+
77
+ Since 1991, the United States has maintained positive relations with Suriname. The two countries work together through the Caribbean Basin Security Initiative (CBSI) and the U.S. President's Emergency Plan for AIDS Relief (PEPFAR). Suriname also receives military funding from the U.S. Department of Defense.[42]
78
+
79
+ European Union relations and cooperation with Suriname are carried out both on a bilateral and a regional basis. There are ongoing EU-Community of Latin American and Caribbean States (CELAC) and EU-CARIFORUM dialogues. Suriname is party to the Cotonou Agreement, the partnership agreement among the members of the African, Caribbean and Pacific Group of States and the European Union.[43]
80
+
81
+ On 17 February 2005, the leaders of Barbados and Suriname signed the "Agreement for the deepening of bilateral cooperation between the Government of Barbados and the Government of the Republic of Suriname."[44] On 23–24 April 2009, both nations formed a Joint Commission in Paramaribo, Suriname, to improve relations and to expand into various areas of cooperation.[45] They held a second meeting toward this goal on 3–4 March 2011, in Dover, Barbados. Their representatives reviewed issues of agriculture, trade, investment, as well as international transport.[46]
82
+
83
+ In the late 2000s, Suriname intensified development cooperation with other developing countries. China's South-South cooperation with Suriname has included a number of large-scale infrastructure projects, including port rehabilitation and road construction. Brazil signed agreements to cooperate with Suriname in education, health, agriculture, and energy production.[47]
84
+
85
+ The Armed Forces of Suriname have three branches: the Army, the Air Force, and the Navy. The President of the Republic, Chan Santokhi, is the Supreme Commander-in-Chief of the Armed Forces (Opperbevelhebber van de Strijdkrachten). The President is assisted by the Minister of Defence. Beneath the President and Minister of Defence is the Commander of the Armed Forces (Bevelhebber van de Strijdkrachten). The Military Branches and regional Military Commands report to the Commander.
86
+
87
+ After the creation of the Statute of the Kingdom of the Netherlands, the Royal Netherlands Army was entrusted with the defense of Suriname, while the defense of the Netherlands Antilles was the responsibility of the Royal Netherlands Navy. The army set up a separate Troepenmacht in Suriname (Forces in Suriname, TRIS). Upon independence in 1975, this force was turned into the Surinaamse Krijgsmacht (SKM), the Surinamese Armed Forces. On 25 February 1980, a group of 15 non-commissioned officers and one junior SKM officer, under the leadership of sergeant major Dési Bouterse, overthrew the Government. Subsequently, the SKM was rebranded as the Nationaal Leger (NL), or National Army.
88
+
89
+ In 1965 the Dutch and Americans used Suriname's Coronie site for multiple Nike Apache sounding rocket launches.[48]
90
+
91
+ The country is divided into ten administrative districts, each headed by a district commissioner appointed by the president, who also has the power of dismissal. Suriname is further subdivided into 62 resorts (ressorten).
92
+
93
+ Suriname is the smallest independent country in South America. Situated on the Guiana Shield, it lies mostly between latitudes 1° and 6°N, and longitudes 54° and 58°W. The country can be divided into two main geographic regions. The northern, lowland coastal area (roughly above the line Albina-Paranam-Wageningen) has been cultivated, and most of the population lives here. The southern part consists of tropical rainforest and sparsely inhabited savanna along the border with Brazil, covering about 80% of Suriname's land surface.
94
+
95
+ The two main mountain ranges are the Bakhuys Mountains and the Van Asch Van Wijck Mountains. Julianatop is the highest mountain in the country at 1,286 metres (4,219 ft) above sea level. Other mountains include Tafelberg at 1,026 metres (3,366 ft), Mount Kasikasima at 718 metres (2,356 ft), Goliathberg at 358 metres (1,175 ft) and Voltzberg at 240 metres (790 ft).
96
+
97
+ Suriname's forest cover is 90.2%, the highest of any nation in the world.
98
+
99
+ Suriname is situated between French Guiana to the east and Guyana to the west. The southern border is shared with Brazil and the northern border is the Atlantic coast. The southernmost borders with French Guiana and Guyana are disputed by these countries along the Marowijne and Corantijn rivers, respectively, while a part of the disputed maritime boundary with Guyana was arbitrated by a tribunal convened under the rules set out in Annex VII of the United Nations Convention on the Law of the Sea on 20 September 2007.[50][51]
100
+
101
+ Lying 2 to 5 degrees north of the equator, Suriname has a very hot and wet tropical climate, and temperatures do not vary much throughout the year. Average relative humidity is between 80% and 90%. Its average temperature ranges from 29 to 34 degrees Celsius (84 to 93 degrees Fahrenheit). Due to the high humidity, the perceived temperature may feel up to 6 degrees Celsius (11 degrees Fahrenheit) hotter than the recorded temperature. The year has two wet seasons, from April to August and from November to February. It also has two dry seasons, from August to November and February to April.
102
+
103
+ Suriname is already seeing the effects of climate change, including warming temperatures and more extreme weather events. As a relatively poor country, its contributions to climate change have been limited; moreover, because of the large forest cover the country has been running a carbon-negative economy since 2014.[52]
104
+
105
+ Located in the upper Coppename River watershed, the Central Suriname Nature Reserve has been designated a UNESCO World Heritage Site for its unspoiled forests and biodiversity. There are many national parks in the country, including Galibi National Reserve along the coast; Brownsberg Nature Park and Eilerts de Haan Nature Park in central Suriname; and the Sipaliwini Nature Reserve on the Brazilian border. In all, 16% of the country's land area is national parks and lakes, according to the UNEP World Conservation Monitoring Centre.[54]
106
+
107
+ Suriname's democracy gained some strength after the turbulent 1990s, and its economy became more diversified and less dependent on Dutch financial assistance. Bauxite (aluminium ore) mining used to be a strong revenue source. The discovery and exploitation of oil and gold has added substantially to Suriname's economic independence. Agriculture, especially rice and bananas, remains a strong component of the economy, and ecotourism is providing new economic opportunities. More than 80% of Suriname's land-mass consists of unspoiled rain forest; with the establishment of the Central Suriname Nature Reserve in 1998, Suriname signalled its commitment to conservation of this precious resource. The Central Suriname Nature Reserve became a World Heritage Site in 2000.
108
+
109
+ The economy of Suriname was dominated by the bauxite industry, which accounted for more than 15% of GDP and 70% of export earnings up to 2016. Other main export products include rice, bananas and shrimp. Suriname has recently started exploiting some of its sizeable oil[55] and gold[56] reserves. About a quarter of the people work in the agricultural sector. The Surinamese economy is very dependent on commerce, its main trade partners being the Netherlands, the United States, Canada, and Caribbean countries, mainly Trinidad and Tobago and the islands of the former Netherlands Antilles.[57]
110
+
111
+ After assuming power in the fall of 1996, the Wijdenbosch government ended the structural adjustment program of the previous government, claiming it was unfair to the poorer elements of society. Tax revenues fell as old taxes lapsed and the government failed to implement new tax alternatives. By the end of 1997, the allocation of new Dutch development funds was frozen as Surinamese Government relations with the Netherlands deteriorated. Economic growth slowed in 1998, with decline in the mining, construction, and utility sectors. Rampant government expenditures, poor tax collection, a bloated civil service, and reduced foreign aid in 1999 contributed to the fiscal deficit, estimated at 11% of GDP. The government sought to cover this deficit through monetary expansion, which led to a dramatic increase in inflation. It takes longer on average to register a new business in Suriname than virtually any other country in the world (694 days or about 99 weeks).[58]
112
+
113
+ According to the 2012 census, Suriname had a population of 541,638 inhabitants.[5] The Surinamese populace is characterized by its high level of diversity, wherein no particular demographic group constitutes a majority. This is a legacy of centuries of Dutch rule, which entailed successive periods of forced, contracted, or voluntary migration by various nationalities and ethnic groups from around the world.
114
+
115
+ The largest ethnic group are the Afro-Surinamese which form about 37% of the population, and are usually divided into two groups: the Creoles and the Maroons. Surinamese Maroons, whose ancestors are mostly runaway slaves that fled to the interior, comprise 21.7% of the population; they are divided into six main groups: Ndyuka (Aucans), Saramaccans, Paramaccans, Kwinti, Aluku (Boni) and Matawai. Surinamese Creoles, mixed people descending from African slaves and mostly Dutch Europeans, form 15.7% of the population. East Indians, who form 27% of the population, are the second largest group. They are descendants of 19th-century contract workers from India, hailing mostly from the modern Indian states of Bihar, Jharkhand, and Eastern Uttar Pradesh along the Nepali border. Javanese make up 14% of the population, and like the East Indians, descend largely from workers contracted from the island of Java in the former Dutch East Indies (modern Indonesia).[59] 13.4% of the population identifies as being of mixed ethnic heritage.
116
+
117
+ Other sizeable groups include the Chinese, originating from 19th-century contract workers and some recent migration, who number over 40,000 as of 2011[update]; Lebanese, primarily Maronites; Jews of Sephardic and Ashkenazi origin, whose center of population was the community of Jodensavanne; and Brazilians, many of them laborers mining for gold.[60]
118
+
119
+ A small but influential number of Europeans remain in the country, comprising about 1 percent of the population. They are descended mostly from Dutch 19th-century immigrant farmers, known as "Boeroes" (derived from boer, the Dutch word for "farmer"), and to a lesser degree other European groups, such as Portuguese from Madeira. Many Boeroes left after independence in 1975.
120
+
121
+ Various indigenous peoples make up 3.7% of the population, with the main groups being the Akurio, Arawak, Kalina (Caribs), Tiriyó and Wayana. They live mainly in the districts of Paramaribo, Wanica, Para, Marowijne and Sipaliwini.[citation needed]
122
+
123
+ The vast majority of Suriname's inhabitants (about 90%) live in Paramaribo or on the coast.
124
+
125
+ The choice of becoming Surinamese or Dutch citizens in the years leading up to Suriname's independence in 1975 led to a mass migration to the Netherlands. This migration continued in the period immediately after independence and during military rule in the 1980s and for largely economic reasons extended throughout the 1990s. The Surinamese community in the Netherlands numbered 350,300 as of 2013[update] (including children and grandchildren of Suriname migrants born in The Netherlands); this is compared to approximately 566,000[13] Surinamese in Suriname itself.
126
+
127
+ According to the International Organization for Migration, around 272,600 people from Suriname lived in other countries in the late 2010s, in particular in the Netherlands (ca 192,000), the French Republic (ca 25,000, most of them in French Guiana),[note 2] the United States (ca 15,000), Guyana (ca 5,000), Aruba (ca 1,500), and Canada (ca 1,000).[61]
128
+
129
+ Suriname's religious makeup is heterogeneous and reflective of the country's multicultural character.
130
+
131
+ According to the 2012 census, 48.4% were Christians;[7] 26.7% of Surinamese were Protestants (11.18% Pentecostal, 11.16% Moravian, and 4.4% of various other Protestant denominations) and 21.6% were Catholics. Hindus formed the second-largest religious group in Suriname, comprising 22.3% of the population,[7] the third largest proportion of any country in the Western Hemisphere after Guyana and Trinidad and Tobago, both of which also have large proportions of Indians. Almost all practitioners of Hinduism are found among the Indo-Surinamese population. Muslims constitute 13.9% of the population, the highest proportion of Muslims in the Americas; they are largely of Javanese or Indian descent.[7] Other religious groups include Winti (1.8%),[7] an Afro-American religion practiced mostly by those of Maroon ancestry; Javanism (0.8%),[7] a syncretic faith found among some Javanese Surinamese; and various indigenous folk traditions that are often incorporated into one of the larger religions (usually Christianity). In the 2012 census, 7.5% of the population declared they had "no religion", while a further 3.2% left the question unanswered.[7]
132
+
133
+ Dutch is the sole official language, and is the language of education, government, business, and the media.[13] Over 60% of the population speaks Dutch as a mother tongue,[62] and most of the rest speaks it as a second language. In 2004, Suriname became an associate member of the Dutch Language Union.[63] It is the only Dutch-speaking country in South America as well as the only independent nation in the Americas where Dutch is spoken by a majority of the population, and one of the two non-Romance-speaking countries in South America, the other being English-speaking Guyana.
134
+
135
+ In Paramaribo, Dutch is the main home language in two-thirds of the households.[4] The recognition of "Surinaams-Nederlands" ("Surinamese Dutch") as a national dialect equal to "Nederlands-Nederlands" ("Dutch Dutch") and "Vlaams-Nederlands" ("Flemish Dutch") was expressed in 2009 by the publication of the Woordenboek Surinaams Nederlands (Surinamese–Dutch Dictionary).[64] It is the most commonly spoken language in urban areas; only in the interior of Suriname (namely parts of Sipaliwini and Brokopondo) is Dutch seldom spoken.
136
+
137
+ Sranan Tongo, a local creole language originally spoken by the Creole population group, is the most widely used vernacular language in day-to-day life and business. It and Dutch are considered to be the two principal languages of Surinamese diglossia; both are further influenced by other languages spoken primarily within ethnic communities. Sranan Tongo is often used interchangeably with Dutch depending on the formality of the setting, where Dutch is seen as a prestige dialect and Sranan Tongo the common vernacular.[65]
138
+
139
+ Caribbean Hindustani or Sarnami, a dialect of Bhojpuri, is the fourth-most used language (after English), spoken by the descendants of South Asian contract workers from then British India. The Javanese language is used by the descendants of Javanese contract workers, and is common in Suriname. The Maroon languages, somewhat intelligible with Sranan, include Saramaka, Paramakan, Ndyuka (also called Aukan), Kwinti and Matawai. Amerindian languages, spoken by Amerindians, include Carib and Arawak. Hakka and Cantonese are spoken by the descendants of the Chinese contract workers. Mandarin is spoken by some recent Chinese immigrants. English, Spanish, and Portuguese are also used as second languages.
140
+
141
+ The national capital, Paramaribo, is by far the dominant urban area, accounting for nearly half of Suriname's population and most of its urban residents; indeed, its population is greater than the next nine largest cities combined. Most municipalities are located within the capital's metropolitan area, or along the densely populated coastline.
142
+
143
+ Owing to the country's multicultural heritage, Suriname celebrates a variety of distinct ethnic and religious festivals.
144
+
145
+ There are several Hindu and Islamic national holidays, such as Diwali (Deepavali), Phagwa, Eid ul-Fitr and Eid-ul-adha. These holidays do not have fixed dates on the Gregorian calendar, as they are based on the Hindu and Islamic calendars, respectively. As of 2020, Eid-ul-adha is a national holiday and is treated as equal to a Sunday.[67]
146
+
147
+ There are several holidays which are unique to Suriname. These include the Indian, Javanese and Chinese arrival days. They celebrate the arrival of the first ships with their respective immigrants.
148
+
149
+ New Year's Eve in Suriname is called Oud jaar, Owru Yari, or "old year". It is during this period that the Surinamese population goes to the city's commercial district to watch "demonstrational fireworks". The bigger stores invest in these firecrackers and display them out in the streets. Every year the lengths of the firecracker ribbons are compared, and high praise goes to the company that has imported the longest one.
150
+
151
+ These celebrations start at 10 in the morning and finish the next day. The day is usually filled with laughter, dance, music, and drinking. When the night starts, the big street parties are already at full capacity. The most popular fiesta is the one that is held at café 't Vat in the main tourist district. The parties there stop between 10 and 11 at night, after which people go home to light their pagaras (red-firecracker-ribbons) at midnight.
152
+ After 12, the parties continue and the streets fill again until daybreak.[68]
153
+
154
+ The major sports in Suriname are football, basketball, and volleyball. The Suriname Olympic Committee is the national governing body for sports in Suriname. The major mind sports are chess, draughts, bridge and troefcall.
155
+
156
+ Many Suriname-born football players and Dutch-born football players of Surinamese descent, like Gerald Vanenburg, Ruud Gullit, Frank Rijkaard, Edgar Davids, Clarence Seedorf, Patrick Kluivert, Aron Winter, Georginio Wijnaldum, Virgil van Dijk and Jimmy Floyd Hasselbaink have turned out to play for the Dutch national team. In 1999, Humphrey Mijnals, who played for both Suriname and the Netherlands, was elected Surinamese footballer of the century.[69] Another famous player is André Kamperveen, who captained Suriname in the 1940s and was the first Surinamese to play professionally in the Netherlands.
157
+
158
+ The most famous international track & field athlete from Suriname is Letitia Vriesde, who won a silver medal at the 1995 World Championships behind Ana Quirot in the 800 metres, the first medal won by a South American female athlete in World Championship competition. In addition, she also won a bronze medal at the 2001 World Championships and won several medals in the 800 and 1500 metres at the Pan-American Games and Central American and Caribbean Games. Tommy Asinga also received acclaim for winning a bronze medal in the 800 metres at the 1991 Pan American Games.
159
+
160
+ Swimmer Anthony Nesty is the only Olympic medalist for Suriname. He won gold in the 100-meter butterfly at the 1988 Summer Olympics in Seoul and he won bronze in the same discipline at the 1992 Summer Olympics in Barcelona. Originally from Trinidad and Tobago, he now lives in Gainesville, Florida, and is a swimming coach at the University of Florida, mainly coaching distance swimmers.
161
+
162
+ Cricket is popular in Suriname to some extent, influenced by its popularity in the Netherlands and in neighbouring Guyana. The Surinaamse Cricket Bond is an associate member of the International Cricket Council (ICC). Suriname and Argentina are the only ICC associates in South America, although Guyana is represented on the West Indies Cricket Board, a full member. The national cricket team was ranked 47th in the world and sixth in the ICC Americas region as of June 2014, and competes in the World Cricket League (WCL) and ICC Americas Championship. Iris Jharap, born in Paramaribo, played women's One Day International matches for the Dutch national side, the only Surinamese to do so.[70]
163
+
164
+ In the sport of badminton, the local heroes are Virgil Soeroredjo, Mitchel Wongsodikromo and Crystal Leefmans, all of whom have won medals for Suriname at the Carebaco Caribbean Championships, the Central American and Caribbean Games (CACSO Games)[71] and the South American Games, better known as the ODESUR Games. Virgil Soeroredjo also participated for Suriname at the 2012 London Summer Olympics, only the second badminton player, after Oscar Brandon, from Suriname to achieve this.[72] Current National Champion Sören Opti was the third Surinamese badminton player to participate at the Summer Olympics, in 2016.
165
+
166
+ Multiple time K-1 kickboxing world champions Ernesto Hoost and Remy Bonjasky were born in Suriname or are of Surinamese descent. Other kickboxing world champions include Rayen Simson, Melvin Manhoef, Tyrone Spong and Regian Eersel.
167
+
168
+ Suriname also has a national korfball team, with korfball being a Dutch sport. Vinkensport is also practised.
169
+
170
+ Suriname, along with neighboring Guyana, is one of only two countries on the mainland South American continent that drive on the left, although many vehicles are left hand drive as well as right hand drive.[73] One explanation for this practice is that at the time of its colonization of Suriname, the Netherlands itself used left-hand traffic, also introducing the practice in the Dutch East Indies, now Indonesia.[74] Another is that Suriname was first colonized by the British, and for practical reasons, this was not changed when it came under Dutch administration.[75] Although the Netherlands converted to driving to the right at the end of the 18th century,[74] Suriname did not.
171
+
172
+ Airlines with departures from Suriname:
173
+
174
+ Airlines with arrivals in Suriname:
175
+
176
+ Other national companies with an air operator certification:
177
+
178
+ The Global Burden of Disease Study provides an on-line data source for analyzing updated estimates of health for 359 diseases and injuries and 84 risk factors from 1990 to 2017 in most of the world's countries.[76] Comparing Suriname with other Caribbean nations shows that in 2017 the age-standardized death rate for all causes was 793 (males 969, females 641) per 100,000, far below the 1219 of Haiti, somewhat below the 944 of Guyana, but considerably above the 424 of Bermuda. In 1990 the death rate was 960 per 100,000. Life expectancy in 2017 was 72 years (males 69, females 75). The death rate for children under 5 years was 581 per 100,000, compared to 1308 in Haiti and 102 in Bermuda. In 1990 and 2017, the leading causes of age-standardized death rates were cardiovascular disease, cancer and diabetes/chronic kidney disease.
179
+
180
+ Education in Suriname is compulsory until the age of 12,[77] and the nation had a net primary enrollment rate of 94% in 2004.[78] Literacy is very common, particularly among men.[78] The main university in the country is the Anton de Kom University of Suriname.
181
+
182
+ From elementary school to high school there are 13 grades. The elementary school has six grades, middle school four grades and high school three grades. Students take a test at the end of elementary school to determine whether they will go to the MULO (secondary modern school) or a lower-level middle school such as the LBO. Students from the elementary school wear a green shirt with jeans, while middle school students wear a blue shirt with jeans.
183
+
184
+ Students going from the second grade of middle school to the third grade have to choose between the business or science courses. This will determine what their major subjects will be. In order to go on to study math and physics, the student must have a total of 12 points. If the student has fewer points, he/she will go into the business courses or fail the grade.[citation needed]
185
+
186
+ Due to the variety of habitats and temperatures, biodiversity in Suriname is considered high.[79] In October 2013, 16 international scientists researching the ecosystems during a three-week expedition in Suriname's Upper Palumeu River Watershed catalogued 1,378 species and found 60—including six frogs, one snake, and 11 fish—that may be previously unknown species.[80][81][82][83] According to the environmental non-profit Conservation International, which funded the expedition, Suriname's ample supply of fresh water is vital to the biodiversity and healthy ecosystems of the region.[84]
187
+
188
+ Snakewood (Brosimum guianense), a shrub-like tree, is native to this tropical region of the Americas. Customs in Suriname report that snakewood is often illegally exported to French Guiana, thought to be for the crafts industry.[85]
189
+
190
+ On 21 March 2013, Suriname's REDD+ Readiness Preparation Proposal (R-PP 2013) was approved by the member countries of the Participants Committee of the Forest Carbon Partnership Facility (FCPF).[86]
191
+
192
+ As in other parts of Central and South America, indigenous communities have increased their activism to protect their lands and preserve habitat. In March 2015, the "Trio and Wayana communities presented a declaration of cooperation to the National Assembly of Suriname that announces an indigenous conservation corridor spanning 72,000 square kilometers (27,799 square miles) of southern Suriname. The declaration, led by these indigenous communities and with the support of Conservation International (CI) and World Wildlife Fund (WWF) Guianas, comprises almost half of the total area of Suriname."[87] This area includes large forests and is considered "essential for the country's climate resilience, freshwater security, and green development strategy."[87]
193
+
194
+ Traditionally, De Ware Tijd was the major newspaper of the country, but since the '90s Times of Suriname, De West and Dagblad Suriname have also been well-read newspapers; all publish primarily in Dutch.[88]
195
+
196
+ Suriname has twenty-four radio stations, most of them also broadcast through the Internet. There are twelve television sources:
197
+ ABC (Ch. 4–1, 2), RBN (Ch. 5–1, 2), Rasonic TV (Ch. 7), STVS (Ch. 8–1, 2, 3, 4, 5, 6), Apintie (Ch. 10–1), ATV (Ch. 12–1, 2, 3, 4), Radika (Ch. 14), SCCN (Ch. 17–1, 2, 3), Pipel TV (Ch. 18–1, 2), Trishul (Ch. 20–1, 2, 3, 4), Garuda (Ch. 23–1, 2, 3), Sangeetmala (Ch. 26), Ch. 30, Ch. 31, Ch.32, Ch.38, SCTV (Ch. 45). Also listened to is mArt, a broadcaster from Amsterdam founded by people from Suriname. Kondreman is one of the popular cartoons in Suriname.
198
+
199
+ There are also three major news sites: Starnieuws, Suriname Herald and GFC Nieuws.
200
+
201
+ In 2012, Suriname was ranked joint 22nd with Japan in the worldwide Press Freedom Index by the organization Reporters Without Borders.[89] This was ahead of the US (47th), the UK (28th), and France (38th).
202
+
203
+ Most tourists visit Suriname for the biodiversity of the Amazonian rain forests in the south of the country, which are noted for their flora and fauna. The Central Suriname Nature Reserve is the biggest and one of the most popular reserves, along with the Brownsberg Nature Park which overlooks the Brokopondo Reservoir, one of the largest man-made lakes in the world. In 2008, the Berg en Dal Eco & Cultural Resort opened in Brokopondo.[90] Tonka Island in the reservoir is home to a rustic eco-tourism project run by the Saramaccaner Maroons.[91] Pangi wraps and bowls made of calabashes are the two main products manufactured for tourists. The Maroons have learned that colorful and ornate pangis are popular with tourists.[92] Other popular decorative souvenirs are hand-carved purple-hardwood made into bowls, plates, canes, wooden boxes, and wall decors.
204
+
205
+ There are also many waterfalls throughout the country. Raleighvallen, or Raleigh Falls, is a 56,000-hectare (140,000-acre) nature reserve on the Coppename River, rich in bird life. Others are the Blanche Marie Falls on the Nickerie River and the Wonotobo Falls. Tafelberg Mountain in the centre of the country is surrounded by its own reserve – the Tafelberg Nature Reserve – around the source of the Saramacca River, as is the Voltzberg Nature Reserve further north on the Coppename River at Raleighvallen. In the interior are many Maroon and Amerindian villages, many of which have their own reserves that are generally open to visitors.
206
+
207
+ Suriname is one of the few countries in the world where at least one of each biome that the state possesses has been declared a wildlife reserve. Around 30% of the total land area of Suriname is protected by law as reserves.
208
+
209
+ Other attractions include plantations such as Laarwijk, which is situated along the Suriname River. This plantation can be reached only by boat via Domburg, in the north central Wanica District of Suriname.
210
+
211
+ Crime rates continue to rise in Paramaribo, and armed robberies are not uncommon. According to the U.S. Department of State Travel Advisory current at the date of the 2018 report's publication, Suriname was assessed as Level 1: exercise normal precautions.[93]
212
+
213
+ The Jules Wijdenbosch Bridge is a bridge over the river Suriname between Paramaribo and Meerzorg in the Commewijne district. The bridge was built during the tenure of President Jules Albert Wijdenbosch (1996–2000) and was completed in 2000. The bridge is 52 metres (171 ft) high, and 1,504 metres (4,934 ft) long. It connects Paramaribo with Commewijne, a connection which previously could only be made by ferry. The purpose of the bridge was to facilitate and promote the development of the eastern part of Suriname. The bridge consists of two lanes (one lane each way) and is not accessible to pedestrians.
214
+
215
+ The construction of the Sts. Peter and Paul Cathedral started on 13 January 1883. Before it became a cathedral it was a theatre. The theatre was built in 1809 and burned down in 1820.
216
+
217
+ Suriname is one of the few countries in the world where a synagogue is located next to a mosque.[94]
218
+ The two buildings are located next to each other in the centre of Paramaribo and have been known to share a parking facility during their respective religious rites, should they happen to coincide with one another.
219
+
220
+ A relatively new landmark is the Hindu Arya Dewaker temple in the Johan Adolf Pengelstraat in Wanica, Paramaribo, which was inaugurated in 2001. A special characteristic of the temple is that it does not have images of the Hindu divinities, as they are forbidden in the Arya Samaj, the Hindu movement to which the people who built the temple belong. Instead, the building is covered by many texts derived from the Vedas and other Hindu scriptures. The beautiful architecture makes the temple a tourist attraction.
en/5554.html.txt ADDED
@@ -0,0 +1,220 @@
1
+ Coordinates: 4°N 56°W / 4°N 56°W / 4; -56
2
+
3
+ in South America (grey)
4
+
5
+ Suriname (/ˈsjʊərɪnæm/, US also /-nɑːm/, sometimes spelled Surinam), officially known as the Republic of Suriname (Dutch: Republiek Suriname [reːpyˌblik syːriˈnaːmə]), is a country on the northeastern Atlantic coast of South America. It is bordered by the Atlantic Ocean to the north, French Guiana to the east, Guyana to the west and Brazil to the south. At just under 165,000 square kilometers (64,000 square miles), it is the smallest sovereign state in South America.[note 1] Suriname has a population of approximately 575,990,[8][9] most of whom live on the country's north coast, in and around the capital and largest city, Paramaribo.
6
+
7
+ The country is a tropical country, whose geography is dominated by forests. These forests are important to Suriname's contribution to mitigating climate change, as the first country to reach carbon neutrality. The economy of Suriname is heavily dependent on the exports of bauxite, gold, petroleum and agricultural products.
8
+
9
+ Suriname was long inhabited by various indigenous people before being invaded and contested by European powers from the 16th century, eventually coming under Dutch rule in the late 17th century. As the chief sugar colony during the Dutch colonial period, it was primarily a plantation economy dependent on African slaves and, after abolition of slavery in 1863, indentured servants from Asia. Suriname was ruled by the Dutch-chartered company Society of Suriname between 1683 and 1795. In 1954, Suriname became one of the constituent countries of the Kingdom of the Netherlands. On 25 November 1975, the country of Suriname left the Kingdom of the Netherlands to become an independent state, nonetheless maintaining close economic, diplomatic, and cultural ties to its former colonizer.
10
+
11
+ Suriname is considered to be a culturally Caribbean country, and is a member of the Caribbean Community (CARICOM). While Dutch is the official language of government, business, media, and education,[13] Sranan Tongo, an English-based creole language, is a widely used lingua franca. Suriname is the only sovereign nation outside Europe where Dutch is spoken by a majority of the population. The people of Suriname are among the most diverse in the world, spanning a multitude of ethnic, religious, and linguistic groups.
12
+
13
+ The name Suriname may derive from an indigenous people called Surinen, who inhabited the area at the time of European contact.[14] It may also be derived from a corruption of the name "Surryham" which was the name given to the Suriname River by Lord Willoughby in honour of the Earl of Surrey when an English colony was established under a grant from King Charles II.[15][16][17]
14
+
15
+ British settlers, who founded the first European colony at Marshall's Creek[18] along the Suriname River, spelled the name as "Surinam".
16
+
17
+ When the territory was taken over by the Dutch, it became part of a group of colonies known as Dutch Guiana. The official spelling of the country's English name was changed from "Surinam" to "Suriname" in January 1978, but "Surinam" can still be found in English. A notable example is Suriname's national airline, Surinam Airways. The older English name is reflected in the English pronunciation, /ˈsjʊərɪnæm, -nɑːm/. In Dutch, the official language of Suriname, the pronunciation is [ˌsyriˈnaːmə], with the main stress on the third syllable and a schwa terminal vowel.
18
+
19
+ Indigenous settlement of Suriname dates back to 3,000 BC. The largest tribes were the Arawak, a nomadic coastal tribe that lived from hunting and fishing. They were the first inhabitants in the area. The Carib also settled in the area and conquered the Arawak by using their superior sailing ships. They settled in Galibi (Kupali Yumï, meaning "tree of the forefathers") at the mouth of the Marowijne River. While the larger Arawak and Carib tribes lived along the coast and savanna, smaller groups of indigenous people lived in the inland rainforest, such as the Akurio, Trió, Warrau, and Wayana.
20
+
21
+ Beginning in the 16th century, French, Spanish and English explorers visited the area. A century later, Dutch and English settlers established plantation colonies along the many rivers in the fertile Guiana plains. The earliest documented colony in Guiana was an English settlement named Marshall's Creek along the Suriname River.[18] After that there was another short-lived English colony called Willoughbyland that lasted from 1650 to 1674.
22
+
23
+ Disputes arose between the Dutch and the English for control of this territory. In 1667, during negotiations leading to the Treaty of Breda, the Dutch decided to keep the nascent plantation colony of Suriname they had gained from the English. The English were able to keep New Amsterdam, the main city of the former colony of New Netherland in North America on the mid-Atlantic coast. Already a cultural and economic hub in those days, they renamed it after the Duke of York: New York City.
24
+
25
+ In 1683, the Society of Suriname was founded by the city of Amsterdam, the Van Aerssen van Sommelsdijck family, and the Dutch West India Company. The society was chartered to manage and defend the colony. The planters of the colony relied heavily on African slaves to cultivate, harvest and process the commodity crops of coffee, cocoa, sugar cane and cotton plantations along the rivers. Planters' treatment of the slaves was notoriously bad[19]—historian C. R. Boxer wrote that "man's inhumanity to man just about reached its limits in Surinam"[20]—and many slaves escaped the plantations. In November 1795, the Society was nationalized by the Batavian Republic and from then on, the Batavian Republic and its legal successors (the Kingdom of Holland and the Kingdom of the Netherlands) governed the territory as a national colony, barring a period of British occupation between 1799 and 1802, and between 1804 and 1816.
26
+
27
+ With the help of the native South Americans living in the adjoining rain forests, these runaway slaves established a new and unique culture in the interior that was highly successful in its own right. They were known collectively in English as Maroons, in French as Nèg'Marrons (literally meaning "brown negroes", that is "pale-skinned negroes"), and in Dutch as Marrons. The Maroons gradually developed several independent tribes through a process of ethnogenesis, as they were made up of slaves from different African ethnicities. These tribes include the Saramaka, Paramaka, Ndyuka or Aukan, Kwinti, Aluku or Boni, and Matawai.
28
+
29
+ The Maroons often raided plantations to recruit new members from the slaves and capture women, as well as to acquire weapons, food and supplies. They sometimes killed planters and their families in the raids; colonists built defenses, which were so important they were shown on 18th-century maps, but these were not sufficient.[21]
30
+
31
+ The colonists also mounted armed campaigns against the Maroons, who generally escaped through the rain forest, which they knew much better than did the colonists. To end hostilities, in the 18th century the European colonial authorities signed several peace treaties with different tribes. They granted the Maroons sovereign status and trade rights in their inland territories, giving them autonomy.
32
+
33
+ From 1861 to 1863, with the American Civil War underway, and enslaved people escaping to Southern territory controlled by the Union, United States President Abraham Lincoln and his administration looked abroad for places to relocate people who were freed from enslavement and who wanted to leave the United States. It opened negotiations with the Dutch government regarding African-American emigration to and colonization of the Dutch colony of Suriname. Nothing came of the idea, and the idea was dropped after 1864.[22]
34
+
35
+ The Netherlands abolished slavery in Suriname in 1863, under a gradual process that required enslaved people to work on plantations for 10 transition years for minimal pay, which was considered as partial compensation for their masters. After 1873, most freedmen largely abandoned the plantations where they had worked for several generations in favor of the capital city, Paramaribo.
36
+
37
+ As a plantation colony, Suriname had an economy dependent on labor-intensive commodity crops. To make up for a shortage of labor, the Dutch recruited and transported contract or indentured laborers from the Dutch East Indies (modern Indonesia) and India (the latter through an arrangement with the British, who then ruled the area). In addition, during the late 19th and early 20th centuries, small numbers of laborers, mostly men, were recruited from China and the Middle East.
38
+
39
+ Although Suriname's population remains relatively small, because of this complex colonization and exploitation, it is one of the most ethnically and culturally diverse countries in the world.[23][24]
40
+
41
+ During World War II, on 23 November 1941, under an agreement with the Netherlands government-in-exile, the United States occupied Suriname to protect the bauxite mines to support the Allies' war effort.[25] In 1942, the Dutch government-in-exile began to review the relations between the Netherlands and its colonies in terms of the post-war period.
42
+
43
+ In 1954, Suriname became one of the constituent countries of the Kingdom of the Netherlands, along with the Netherlands Antilles and the Netherlands. In this arrangement, the Netherlands retained control of defense and foreign affairs. In 1974, the local government, led by the National Party of Suriname (NPS), whose membership was largely Creole (meaning ethnically African or mixed African-European), started negotiations with the Dutch government leading towards full independence, which was granted on 25 November 1975. A large part of Suriname's economy for the first decade following independence was fueled by foreign aid provided by the Dutch government.
44
+
45
+ The first President of the country was Johan Ferrier, the former governor, with Henck Arron (the then leader of the NPS) as Prime Minister. In the years leading up to independence, nearly one-third of the population of Suriname emigrated to the Netherlands, amidst concern that the new country would fare worse under independence than it had as a constituent country of the Kingdom of the Netherlands. Surinamese politics did degenerate into ethnic polarisation and corruption soon after independence, with the NPS using Dutch aid money for partisan purposes. Its leaders were accused of fraud in the 1977 elections, in which Arron won a further term, and the discontent was such that a large portion of the population fled to the Netherlands, joining the already significant Surinamese community there.[26]
46
+
47
+ On 25 February 1980, a military coup overthrew Arron's government. It was initiated by a group of 16 sergeants, led by Dési Bouterse.[13] Opponents of the military regime attempted counter-coups in April 1980, August 1980, 15 March 1981, and again on 12 March 1982. The first counter-coup attempt was led by Fred Ormskerk,[27] the second by Marxist-Leninists,[28] the third by Wilfred Hawker, and the fourth by Surendre Rambocus.
48
+
49
+ Hawker escaped from prison during the fourth counter-coup attempt, but he was captured and summarily executed. Between 2 am and 5 am on 7 December 1982, the military, under the leadership of Dési Bouterse, rounded up 13 prominent citizens who had criticized the military dictatorship and held them at Fort Zeelandia in Paramaribo.[29] The dictatorship had all these men executed over the next three days, along with Rambocus and Jiwansingh Sheombar (who was also involved in the fourth counter-coup attempt).
50
+
51
+ National elections were held in 1987. The National Assembly adopted a new constitution that allowed Bouterse to remain in charge of the army. Dissatisfied with the government, Bouterse summarily dismissed the ministers in 1990, by telephone. This event became popularly known as the "Telephone Coup". His power began to wane after the 1991 elections.
52
+
53
+ The brutal civil war between the Suriname army and Maroons loyal to rebel leader Ronnie Brunswijk, begun in 1986, continued and its effects further weakened Bouterse's position during the 1990s. Due to the civil war, more than 10,000 Surinamese, mostly Maroons, fled to French Guiana in the late 1980s.[30]
54
+
55
+ In 1999, the Netherlands tried Bouterse in absentia on drug smuggling charges. He was convicted and sentenced to prison but remained in Suriname.[31]
56
+
57
+ On 19 July 2010, the former dictator Dési Bouterse returned to power when he was elected as the new President of Suriname.[32] Before his election in 2010, he, along with 24 others, had been charged with the murders of 15 prominent dissidents in the December murders. However, in 2012, two months before the verdict in the trial, the National Assembly extended its amnesty law and provided Bouterse and the others with amnesty from these charges. He was reelected on 14 July 2015.[33] Bouterse was nevertheless convicted by a Surinamese court on 29 November 2019 and given a 20-year sentence for his role in the 1982 killings.[34]
58
+
59
+ After winning the 2020 elections,[35] Chan Santokhi was the sole nominee for President of Suriname.[36] On 13 July, Santokhi was elected President by acclamation in an uncontested election.[37] He was inaugurated on 16 July in a ceremony closed to the public due to the COVID-19 pandemic.[38]
60
+
61
+ The Republic of Suriname is a representative democratic republic, based on the Constitution of 1987. The legislative branch of government consists of a 51-member unicameral National Assembly, simultaneously and popularly elected for a five-year term.
62
+
63
+ In the elections held on Tuesday, 25 May 2010, the Megacombinatie won 23 of the National Assembly seats, followed by the Nationale Front with 20 seats. A much smaller number, important for coalition-building, went to the "A‑combinatie" and to the Volksalliantie. The parties held negotiations to form coalitions. Elections were held on 25 May 2015, and the National Assembly again elected Dési Bouterse as President.[39]
64
+
65
+ The President of Suriname is elected for a five-year term by a two-thirds majority of the National Assembly. If at least two-thirds of the National Assembly cannot agree to vote for one presidential candidate, a People's Assembly is formed from all National Assembly delegates and regional and municipal representatives who were elected by popular vote in the most recent national election. The president may be elected by a majority of the People's Assembly called for the special election.
66
+
67
+ As head of government, the president appoints a sixteen-minister cabinet. A vice president is normally elected for a five-year term at the same time as the president, by a simple majority in the National Assembly or People's Assembly. There is no constitutional provision for removal or replacement of the president, except in the case of resignation.
68
+
69
+ The judiciary is headed by the High Court of Justice of Suriname (Supreme Court). This court supervises the magistrate courts. Members are appointed for life by the president in consultation with the National Assembly, the State Advisory Council, and the National Order of Private Attorneys.
70
+
71
+ President Dési Bouterse was convicted and sentenced in the Netherlands to 11 years of imprisonment for drug trafficking. He is the main suspect in the court case concerning the 'December murders,' the 1982 assassination of opponents of military rule in Fort Zeelandia, Paramaribo. These two cases still strain relations between the Netherlands and Suriname.[40]
72
+
73
+ Due to its Dutch colonial history, Suriname has had a long-standing special relationship with the Netherlands. The Dutch government has stated that it will maintain only limited contact with the president.[40]
74
+
75
+ Bouterse was elected president of Suriname in 2010. In July 2014, the Netherlands dropped Suriname as a member of its development program.[41]
76
+
77
+ Since 1991, the United States has maintained positive relations with Suriname. The two countries work together through the Caribbean Basin Security Initiative (CBSI) and the U.S. President's Emergency Plan for AIDS Relief (PEPFAR). Suriname also receives military funding from the U.S. Department of Defense.[42]
78
+
79
+ European Union relations and cooperation with Suriname are carried out both on a bilateral and a regional basis. There are ongoing EU-Community of Latin American and Caribbean States (CELAC) and EU-CARIFORUM dialogues. Suriname is party to the Cotonou Agreement, the partnership agreement among the members of the African, Caribbean and Pacific Group of States and the European Union.[43]
80
+
81
+ On 17 February 2005, the leaders of Barbados and Suriname signed the "Agreement for the deepening of bilateral cooperation between the Government of Barbados and the Government of the Republic of Suriname."[44] On 23–24 April 2009, both nations formed a Joint Commission in Paramaribo, Suriname, to improve relations and to expand into various areas of cooperation.[45] They held a second meeting toward this goal on 3–4 March 2011, in Dover, Barbados. Their representatives reviewed issues of agriculture, trade, investment, as well as international transport.[46]
82
+
83
+ In the late 2000s, Suriname intensified development cooperation with other developing countries. China's South-South cooperation with Suriname has included a number of large-scale infrastructure projects, including port rehabilitation and road construction. Brazil signed agreements to cooperate with Suriname in education, health, agriculture, and energy production.[47]
84
+
85
+ The Armed Forces of Suriname have three branches: the Army, the Air Force, and the Navy. The President of the Republic, Chan Santokhi, is the Supreme Commander-in-Chief of the Armed Forces (Opperbevelhebber van de Strijdkrachten). The President is assisted by the Minister of Defence. Beneath the President and Minister of Defence is the Commander of the Armed Forces (Bevelhebber van de Strijdkrachten). The Military Branches and regional Military Commands report to the Commander.
86
+
87
+ After the creation of the Statute of the Kingdom of the Netherlands, the Royal Netherlands Army was entrusted with the defense of Suriname, while the defense of the Netherlands Antilles was the responsibility of the Royal Netherlands Navy. The army set up a separate Troepenmacht in Suriname (Forces in Suriname, TRIS). Upon independence in 1975, this force was turned into the Surinaamse Krijgsmacht (SKM), the Surinamese Armed Forces. On 25 February 1980, a group of 15 non-commissioned officers and one junior SKM officer, under the leadership of sergeant major Dési Bouterse, overthrew the government. Subsequently, the SKM was rebranded as the Nationaal Leger (NL), the National Army.
88
+
89
+ In 1965 the Dutch and Americans used Suriname's Coronie site for multiple Nike Apache sounding rocket launches.[48]
90
+
91
+ The country is divided into ten administrative districts, each headed by a district commissioner appointed by the president, who also has the power of dismissal. Suriname is further subdivided into 62 resorts (ressorten).
92
+
93
+ Suriname is the smallest independent country in South America. Situated on the Guiana Shield, it lies mostly between latitudes 1° and 6°N, and longitudes 54° and 58°W. The country can be divided into two main geographic regions. The northern, lowland coastal area (roughly above the line Albina-Paranam-Wageningen) has been cultivated, and most of the population lives here. The southern part consists of tropical rainforest and sparsely inhabited savanna along the border with Brazil, covering about 80% of Suriname's land surface.
94
+
95
+ The two main mountain ranges are the Bakhuys Mountains and the Van Asch Van Wijck Mountains. Julianatop is the highest mountain in the country at 1,286 metres (4,219 ft) above sea level. Other mountains include Tafelberg at 1,026 metres (3,366 ft), Mount Kasikasima at 718 metres (2,356 ft), Goliathberg at 358 metres (1,175 ft) and Voltzberg at 240 metres (790 ft).
96
+
97
+ Suriname's forest cover is 90.2%, the highest of any nation in the world.
98
+
99
+ Suriname is situated between French Guiana to the east and Guyana to the west. The southern border is shared with Brazil and the northern border is the Atlantic coast. The southernmost borders with French Guiana and Guyana are disputed by these countries along the Marowijne and Corantijn rivers, respectively, while a part of the disputed maritime boundary with Guyana was arbitrated by a tribunal convened under the rules set out in Annex VII of the United Nations Convention on the Law of the Sea on 20 September 2007.[50][51]
100
+
101
+ Lying 2 to 5 degrees north of the equator, Suriname has a very hot and wet tropical climate, and temperatures do not vary much throughout the year. Average relative humidity is between 80% and 90%. Its average temperature ranges from 29 to 34 degrees Celsius (84 to 93 degrees Fahrenheit). Due to the high humidity, the apparent temperature can feel up to 6 degrees Celsius (11 degrees Fahrenheit) hotter than the recorded temperature. The year has two wet seasons, from April to August and from November to February. It also has two dry seasons, from August to November and February to April.
102
+
103
+ Suriname is already seeing the effects of climate change, including warming temperatures and more extreme weather events. As a relatively poor country, Suriname has contributed little to climate change; moreover, because of its large forest cover, the country has been running a carbon-negative economy since 2014.[52]
104
+
105
+ Located in the upper Coppename River watershed, the Central Suriname Nature Reserve has been designated a UNESCO World Heritage Site for its unspoiled forests and biodiversity. There are many national parks in the country, including Galibi National Reserve along the coast; Brownsberg Nature Park and Eilerts de Haan Nature Park in central Suriname; and the Sipaliwini Nature Reserve on the Brazilian border. In all, 16% of the country's land area is national parks and lakes, according to the UNEP World Conservation Monitoring Centre.[54]
106
+
107
+ Suriname's democracy gained some strength after the turbulent 1990s, and its economy became more diversified and less dependent on Dutch financial assistance. Bauxite (aluminium ore) mining used to be a strong revenue source. The discovery and exploitation of oil and gold has added substantially to Suriname's economic independence. Agriculture, especially rice and bananas, remains a strong component of the economy, and ecotourism is providing new economic opportunities. More than 80% of Suriname's land-mass consists of unspoiled rain forest; with the establishment of the Central Suriname Nature Reserve in 1998, Suriname signalled its commitment to conservation of this precious resource. The Central Suriname Nature Reserve became a World Heritage Site in 2000.
108
+
109
+ The economy of Suriname was dominated by the bauxite industry, which accounted for more than 15% of GDP and 70% of export earnings up to 2016. Other main export products include rice, bananas and shrimp. Suriname has recently started exploiting some of its sizeable oil[55] and gold[56] reserves. About a quarter of the people work in the agricultural sector. The Surinamese economy is very dependent on commerce, its main trade partners being the Netherlands, the United States, Canada, and Caribbean countries, mainly Trinidad and Tobago and the islands of the former Netherlands Antilles.[57]
110
+
111
+ After assuming power in the fall of 1996, the Wijdenbosch government ended the structural adjustment program of the previous government, claiming it was unfair to the poorer elements of society. Tax revenues fell as old taxes lapsed and the government failed to implement new tax alternatives. By the end of 1997, the allocation of new Dutch development funds was frozen as Surinamese Government relations with the Netherlands deteriorated. Economic growth slowed in 1998, with decline in the mining, construction, and utility sectors. Rampant government expenditures, poor tax collection, a bloated civil service, and reduced foreign aid in 1999 contributed to the fiscal deficit, estimated at 11% of GDP. The government sought to cover this deficit through monetary expansion, which led to a dramatic increase in inflation. It takes longer on average to register a new business in Suriname than in virtually any other country in the world (694 days, or about 99 weeks).[58]
112
+
113
+ According to the 2012 census, Suriname had a population of 541,638 inhabitants.[5] The Surinamese populace is characterized by its high level of diversity, wherein no particular demographic group constitutes a majority. This is a legacy of centuries of Dutch rule, which entailed successive periods of forced, contracted, or voluntary migration by various nationalities and ethnic groups from around the world.
114
+
115
+ The largest ethnic group are the Afro-Surinamese which form about 37% of the population, and are usually divided into two groups: the Creoles and the Maroons. Surinamese Maroons, whose ancestors are mostly runaway slaves that fled to the interior, comprise 21.7% of the population; they are divided into six main groups: Ndyuka (Aucans), Saramaccans, Paramaccans, Kwinti, Aluku (Boni) and Matawai. Surinamese Creoles, mixed people descending from African slaves and mostly Dutch Europeans, form 15.7% of the population. East Indians, who form 27% of the population, are the second largest group. They are descendants of 19th-century contract workers from India, hailing mostly from the modern Indian states of Bihar, Jharkhand, and Eastern Uttar Pradesh along the Nepali border. Javanese make up 14% of the population, and like the East Indians, descend largely from workers contracted from the island of Java in the former Dutch East Indies (modern Indonesia).[59] 13.4% of the population identifies as being of mixed ethnic heritage.
116
+
117
+ Other sizeable groups include the Chinese, originating from 19th-century contract workers and some recent migration, who number over 40,000 as of 2011[update]; Lebanese, primarily Maronites; Jews of Sephardic and Ashkenazi origin, whose center of population was the community of Jodensavanne; and Brazilians, many of them laborers mining for gold.[60]
118
+
119
+ A small but influential number of Europeans remain in the country, comprising about 1 percent of the population. They are descended mostly from Dutch 19th-century immigrant farmers, known as "Boeroes" (derived from boer, the Dutch word for "farmer"), and to a lesser degree other European groups, such as Portuguese from Madeira. Many Boeroes left after independence in 1975.
120
+
121
+ Various indigenous peoples make up 3.7% of the population, with the main groups being the Akurio, Arawak, Kalina (Caribs), Tiriyó and Wayana. They live mainly in the districts of Paramaribo, Wanica, Para, Marowijne and Sipaliwini.[citation needed]
122
+
123
+ The vast majority of Suriname's inhabitants (about 90%) live in Paramaribo or on the coast.
124
+
125
+ The choice of becoming Surinamese or Dutch citizens in the years leading up to Suriname's independence in 1975 led to a mass migration to the Netherlands. This migration continued in the period immediately after independence and during military rule in the 1980s and for largely economic reasons extended throughout the 1990s. The Surinamese community in the Netherlands numbered 350,300 as of 2013[update] (including children and grandchildren of Suriname migrants born in The Netherlands); this is compared to approximately 566,000[13] Surinamese in Suriname itself.
126
+
127
+ According to the International Organization for Migration, around 272,600 people from Suriname lived in other countries in the late 2010s, in particular in the Netherlands (ca 192,000), the French Republic (ca 25,000, most of them in French Guiana),[note 2] the United States (ca 15,000), Guyana (ca 5,000), Aruba (ca 1,500), and Canada (ca 1,000).[61]
128
+
129
+ Suriname's religious makeup is heterogeneous and reflective of the country's multicultural character.
130
+
131
+ According to the 2012 census, 48.4% were Christians;[7] 26.7% of Surinamese were Protestants (11.18% Pentecostal, 11.16% Moravian, and 4.4% of various other Protestant denominations) and 21.6% were Catholics. Hindus formed the second-largest religious group in Suriname, comprising 22.3% of the population,[7] the third largest proportion of any country in the Western Hemisphere after Guyana and Trinidad and Tobago, both of which also have large proportions of Indians. Almost all practitioners of Hinduism are found among the Indo-Surinamese population. Muslims constitute 13.9% of the population, the highest proportion of Muslims in the Americas; they are largely of Javanese or Indian descent.[7] Other religious groups include Winti (1.8%),[7] an Afro-American religion practiced mostly by those of Maroon ancestry; Javanism (0.8%),[7] a syncretic faith found among some Javanese Surinamese; and various indigenous folk traditions that are often incorporated into one of the larger religions (usually Christianity). In the 2012 census, 7.5% of the population declared they had "no religion", while a further 3.2% left the question unanswered.[7]
132
+
133
+ Dutch is the sole official language, and is the language of education, government, business, and the media.[13] Over 60% of the population speaks Dutch as a mother tongue,[62] and most of the rest speaks it as a second language. In 2004, Suriname became an associate member of the Dutch Language Union.[63] It is the only Dutch-speaking country in South America as well as the only independent nation in the Americas where Dutch is spoken by a majority of the population, and one of the two non-Romance-speaking countries in South America, the other being English-speaking Guyana.
134
+
135
+ In Paramaribo, Dutch is the main home language in two-thirds of the households.[4] The recognition of "Surinaams-Nederlands" ("Surinamese Dutch") as a national dialect equal to "Nederlands-Nederlands" ("Dutch Dutch") and "Vlaams-Nederlands" ("Flemish Dutch") was expressed in 2009 by the publication of the Woordenboek Surinaams Nederlands (Surinamese–Dutch Dictionary).[64] It is the most commonly spoken language in urban areas; only in the interior of Suriname (namely parts of Sipaliwini and Brokopondo) is Dutch seldom spoken.
136
+
137
+ Sranan Tongo, a local creole language originally spoken by the Creole population group, is the most widely used vernacular language in day-to-day life and business. It and Dutch are considered to be the two principal languages of Surinamese diglossia; both are further influenced by other spoken languages which are spoken primarily within ethnic communities. Sranan Tongo is often used interchangeably with Dutch depending on the formality of the setting, where Dutch is seen as a prestige dialect and Sranan Tongo the common vernacular.[65]
138
+
139
+ Caribbean Hindustani or Sarnami, a dialect of Bhojpuri, is the fourth-most used language (after English), spoken by the descendants of South Asian contract workers from what was then British India. The Javanese language is used by the descendants of Javanese contract workers, and is common in Suriname. The Maroon languages, somewhat intelligible with Sranan, include Saramaka, Paramakan, Ndyuka (also called Aukan), Kwinti and Matawai. Amerindian languages, spoken by Amerindians, include Carib and Arawak. Hakka and Cantonese are spoken by the descendants of the Chinese contract workers. Mandarin is spoken by some recent Chinese immigrants. English, Spanish, and Portuguese are also used as second languages.
140
+
141
+ The national capital, Paramaribo, is by far the dominant urban area, accounting for nearly half of Suriname's population and most of its urban residents; indeed, its population is greater than the next nine largest cities combined. Most municipalities are located within the capital's metropolitan area, or along the densely populated coastline.
142
+
143
+ Owing to the country's multicultural heritage, Suriname celebrates a variety of distinct ethnic and religious festivals.
144
+
145
+ There are several Hindu and Islamic national holidays, such as Diwali (Deepavali) and Phagwa, and Eid ul-Fitr and Eid ul-Adha. These holidays do not have fixed dates on the Gregorian calendar, as they are based on the Hindu and Islamic calendars, respectively. As of 2020, Eid ul-Adha is a national holiday, treated as equivalent to a Sunday.[67]
146
+
147
+ There are several holidays which are unique to Suriname. These include the Indian, Javanese and Chinese arrival days. They celebrate the arrival of the first ships with their respective immigrants.
148
+
149
+ New Year's Eve in Suriname is called Oud jaar, Owru Yari, or "old year". It is during this period that the Surinamese population goes to the city's commercial district to watch "demonstrational fireworks". The bigger stores invest in these firecrackers and display them in the streets. Every year the lengths of the firecracker ribbons are compared, and high praise goes to the company that has imported the longest ribbon.
150
+
151
+ These celebrations start at 10 in the morning and finish the next day. The day is usually filled with laughter, dance, music, and drinking. When the night starts, the big street parties are already at full capacity. The most popular fiesta is the one that is held at café 't Vat in the main tourist district. The parties there stop between 10 and 11 at night, after which people go home to light their pagaras (red-firecracker-ribbons) at midnight.
152
+ After 12, the parties continue and the streets fill again until daybreak.[68]
153
+
154
+ The major sports in Suriname are football, basketball, and volleyball. The Suriname Olympic Committee is the national governing body for sports in Suriname. The major mind sports are chess, draughts, bridge and troefcall.
155
+
156
+ Many Suriname-born football players and Dutch-born football players of Surinamese descent, like Gerald Vanenburg, Ruud Gullit, Frank Rijkaard, Edgar Davids, Clarence Seedorf, Patrick Kluivert, Aron Winter, Georginio Wijnaldum, Virgil van Dijk and Jimmy Floyd Hasselbaink have turned out to play for the Dutch national team. In 1999, Humphrey Mijnals, who played for both Suriname and the Netherlands, was elected Surinamese footballer of the century.[69] Another famous player is André Kamperveen, who captained Suriname in the 1940s and was the first Surinamese to play professionally in the Netherlands.
157
+
158
+ The most famous international track & field athlete from Suriname is Letitia Vriesde, who won a silver medal at the 1995 World Championships behind Ana Quirot in the 800 metres, the first medal won by a South American female athlete in World Championship competition. In addition, she also won a bronze medal at the 2001 World Championships and won several medals in the 800 and 1500 metres at the Pan-American Games and Central American and Caribbean Games. Tommy Asinga also received acclaim for winning a bronze medal in the 800 metres at the 1991 Pan American Games.
159
+
160
+ Swimmer Anthony Nesty is the only Olympic medalist for Suriname. He won gold in the 100-meter butterfly at the 1988 Summer Olympics in Seoul and bronze in the same discipline at the 1992 Summer Olympics in Barcelona. Originally from Trinidad and Tobago, he now lives in Gainesville, Florida, and is a swimming coach at the University of Florida, mainly coaching distance swimmers.
161
+
162
+ Cricket is popular in Suriname to some extent, influenced by its popularity in the Netherlands and in neighbouring Guyana. The Surinaamse Cricket Bond is an associate member of the International Cricket Council (ICC). Suriname and Argentina are the only ICC associates in South America, although Guyana is represented on the West Indies Cricket Board, a full member. The national cricket team was ranked 47th in the world and sixth in the ICC Americas region as of June 2014, and competes in the World Cricket League (WCL) and ICC Americas Championship. Iris Jharap, born in Paramaribo, played women's One Day International matches for the Dutch national side, the only Surinamese to do so.[70]
163
+
164
+ In the sport of badminton, the local heroes are Virgil Soeroredjo, Mitchel Wongsodikromo and Crystal Leefmans, all of whom have won medals for Suriname at the Carebaco Caribbean Championships, the Central American and Caribbean Games (CACSO Games)[71] and the South American Games, better known as the ODESUR Games. Virgil Soeroredjo also competed for Suriname at the 2012 London Summer Olympics, only the second badminton player from Suriname, after Oscar Brandon, to achieve this.[72] Current national champion Sören Opti was the third Surinamese badminton player to participate in the Summer Olympics, in 2016.
165
+
166
+ Multiple time K-1 kickboxing world champions Ernesto Hoost and Remy Bonjasky were born in Suriname or are of Surinamese descent. Other kickboxing world champions include Rayen Simson, Melvin Manhoef, Tyrone Spong and Regian Eersel.
167
+
168
+ Suriname also has a national korfball team, with korfball being a Dutch sport. Vinkensport is also practised.
169
+
170
+ Suriname, along with neighboring Guyana, is one of only two countries on the mainland South American continent that drive on the left, although many vehicles are left hand drive as well as right hand drive.[73] One explanation for this practice is that at the time of its colonization of Suriname, the Netherlands itself used left-hand traffic, also introducing the practice in the Dutch East Indies, now Indonesia.[74] Another is that Suriname was first colonized by the British, and for practical reasons, this was not changed when it came under Dutch administration.[75] Although the Netherlands converted to driving to the right at the end of the 18th century,[74] Suriname did not.
171
+
172
+ Airlines with departures from Suriname:
173
+
174
+ Airlines with arrivals in Suriname:
175
+
176
+ Other national companies with an air operator certification:
177
+
178
+ The Global Burden of Disease Study provides an online data source for analyzing updated estimates of health for 359 diseases and injuries and 84 risk factors from 1990 to 2017 in most of the world's countries.[76] Comparing Suriname with other Caribbean nations shows that in 2017 the age-standardized death rate for all causes was 793 (males 969, females 641) per 100,000, far below the 1,219 of Haiti and somewhat below the 944 of Guyana, but considerably above the 424 of Bermuda. In 1990 the death rate was 960 per 100,000. Life expectancy in 2017 was 72 years (males 69, females 75). The death rate for children under 5 years was 581 per 100,000, compared to 1,308 in Haiti and 102 in Bermuda. In 1990 and 2017, the leading causes of death by age-standardized rate were cardiovascular disease, cancer and diabetes/chronic kidney disease.
179
+
180
+ Education in Suriname is compulsory until the age of 12,[77] and the nation had a net primary enrollment rate of 94% in 2004.[78] Literacy is very common, particularly among men.[78] The main university in the country is the Anton de Kom University of Suriname.
181
+
182
+ From elementary school to high school there are 13 grades. Elementary school has six grades, middle school four grades and high school three grades. Students take a test at the end of elementary school to determine whether they will go to the MULO (secondary modern school) or to a lower-level middle school such as the LBO. Elementary school students wear a green shirt with jeans, while middle school students wear a blue shirt with jeans.
183
+
184
+ Students going from the second grade of middle school to the third grade have to choose between the business or science courses. This will determine what their major subjects will be. In order to go on to study math and physics, the student must have a total of 12 points. If the student has fewer points, he/she will go into the business courses or fail the grade.[citation needed]
185
+
186
+ Due to the variety of habitats and temperatures, biodiversity in Suriname is considered high.[79] In October 2013, 16 international scientists researching the ecosystems during a three-week expedition in Suriname's Upper Palumeu River Watershed catalogued 1,378 species and found 60—including six frogs, one snake, and 11 fish—that may be previously unknown species.[80][81][82][83] According to the environmental non-profit Conservation International, which funded the expedition, Suriname's ample supply of fresh water is vital to the biodiversity and healthy ecosystems of the region.[84]
187
+
188
+ Snakewood (Brosimum guianense), a shrub-like tree, is native to this tropical region of the Americas. Customs in Suriname report that snakewood is often illegally exported to French Guiana, thought to be for the crafts industry.[85]
189
+
190
+ On 21 March 2013, Suriname's REDD+ Readiness Preparation Proposal (R-PP 2013) was approved by the member countries of the Participants Committee of the Forest Carbon Partnership Facility (FCPF).[86]
191
+
192
+ As in other parts of Central and South America, indigenous communities have increased their activism to protect their lands and preserve habitat. In March 2015, the "Trio and Wayana communities presented a declaration of cooperation to the National Assembly of Suriname that announces an indigenous conservation corridor spanning 72,000 square kilometers (27,799 square miles) of southern Suriname. The declaration, led by these indigenous communities and with the support of Conservation International (CI) and World Wildlife Fund (WWF) Guianas, comprises almost half of the total area of Suriname."[87] This area includes large forests and is considered "essential for the country's climate resilience, freshwater security, and green development strategy."[87]
193
+
194
+ Traditionally, De Ware Tijd was the major newspaper of the country, but since the '90s Times of Suriname, De West and Dagblad Suriname have also been well-read newspapers; all publish primarily in Dutch.[88]
195
+
196
+ Suriname has twenty-four radio stations, most of which also broadcast over the Internet. There are twelve television sources:
197
+ ABC (Ch. 4–1, 2), RBN (Ch. 5–1, 2), Rasonic TV (Ch. 7), STVS (Ch. 8–1, 2, 3, 4, 5, 6), Apintie (Ch. 10–1), ATV (Ch. 12–1, 2, 3, 4), Radika (Ch. 14), SCCN (Ch. 17–1, 2, 3), Pipel TV (Ch. 18–1, 2), Trishul (Ch. 20–1, 2, 3, 4), Garuda (Ch. 23–1, 2, 3), Sangeetmala (Ch. 26), Ch. 30, Ch. 31, Ch.32, Ch.38, SCTV (Ch. 45). Also listened to is mArt, a broadcaster from Amsterdam founded by people from Suriname. Kondreman is one of the popular cartoons in Suriname.
198
+
199
+ There are also three major news sites: Starnieuws, Suriname Herald and GFC Nieuws.
200
+
201
+ In 2012, Suriname was ranked joint 22nd with Japan in the worldwide Press Freedom Index by the organization Reporters Without Borders.[89] This was ahead of the US (47th), the UK (28th), and France (38th).
202
+
203
+ Most tourists visit Suriname for the biodiversity of the Amazonian rain forests in the south of the country, which are noted for their flora and fauna. The Central Suriname Nature Reserve is the biggest and one of the most popular reserves, along with the Brownsberg Nature Park which overlooks the Brokopondo Reservoir, one of the largest man-made lakes in the world. In 2008, the Berg en Dal Eco & Cultural Resort opened in Brokopondo.[90] Tonka Island in the reservoir is home to a rustic eco-tourism project run by the Saramaccaner Maroons.[91] Pangi wraps and bowls made of calabashes are the two main products manufactured for tourists. The Maroons have learned that colorful and ornate pangis are popular with tourists.[92] Other popular decorative souvenirs are hand-carved purple-hardwood made into bowls, plates, canes, wooden boxes, and wall decors.
204
+
205
+ There are also many waterfalls throughout the country. Raleighvallen, or Raleigh Falls, is a 56,000-hectare (140,000-acre) nature reserve on the Coppename River, rich in bird life. Others are the Blanche Marie Falls on the Nickerie River and the Wonotobo Falls. Tafelberg Mountain in the centre of the country is surrounded by its own reserve – the Tafelberg Nature Reserve – around the source of the Saramacca River, as is the Voltzberg Nature Reserve further north on the Coppename River at Raleighvallen. In the interior are many Maroon and Amerindian villages, many of which have their own reserves that are generally open to visitors.
206
+
207
+ Suriname is one of the few countries in the world in which at least one example of each of its biomes has been declared a wildlife reserve. Around 30% of the total land area of Suriname is protected by law as reserves.
208
+
209
+ Other attractions include plantations such as Laarwijk, which is situated along the Suriname River. This plantation can be reached only by boat via Domburg, in the north central Wanica District of Suriname.
210
+
211
+ Crime rates continue to rise in Paramaribo, and armed robberies are not uncommon. According to the U.S. Department of State Travel Advisory current at the date of the 2018 report's publication, Suriname was assessed as Level 1: exercise normal precautions.[93]
212
+
213
+ The Jules Wijdenbosch Bridge is a bridge over the river Suriname between Paramaribo and Meerzorg in the Commewijne district. The bridge was built during the tenure of President Jules Albert Wijdenbosch (1996–2000) and was completed in 2000. The bridge is 52 metres (171 ft) high, and 1,504 metres (4,934 ft) long. It connects Paramaribo with Commewijne, a connection which previously could only be made by ferry. The purpose of the bridge was to facilitate and promote the development of the eastern part of Suriname. The bridge consists of two lanes (one lane each way) and is not accessible to pedestrians.
214
+
215
+ The construction of the Sts. Peter and Paul Cathedral started on 13 January 1883. Before it became a cathedral it was a theatre. The theatre was built in 1809 and burned down in 1820.
216
+
217
+ Suriname is one of the few countries in the world where a synagogue is located next to a mosque.[94]
218
+ The two buildings are located next to each other in the centre of Paramaribo and have been known to share a parking facility during their respective religious rites, should they happen to coincide with one another.
219
+
220
+ A relatively new landmark is the Hindu Arya Dewaker temple in the Johan Adolf Pengelstraat in Wanica, Paramaribo, which was inaugurated in 2001. A special characteristic of the temple is that it does not have images of the Hindu divinities, as they are forbidden in the Arya Samaj, the Hindu movement to which the people who built the temple belong. Instead, the building is covered by many texts derived from the Vedas and other Hindu scriptures. The beautiful architecture makes the temple a tourist attraction.
en/5555.html.txt ADDED
@@ -0,0 +1,181 @@
1
+
2
+
3
+ Obesity is a medical condition in which excess body fat has accumulated to an extent that it may have a negative effect on health.[1] People are generally considered obese when their body mass index (BMI), a measurement obtained by dividing a person's weight by the square of the person's height, is over 30 kg/m2; the range 25–30 kg/m2 is defined as overweight.[1] Some East Asian countries use lower values.[8] Obesity increases the likelihood of various diseases and conditions, particularly cardiovascular diseases, type 2 diabetes, obstructive sleep apnea, certain types of cancer, osteoarthritis, and depression.[2][3]
4
+
5
+ Obesity is most commonly caused by a combination of excessive food intake, lack of physical activity, and genetic susceptibility.[1][4] A few cases are caused primarily by genes, endocrine disorders, medications, or mental disorder.[9] The view that obese people eat little yet gain weight due to a slow metabolism is not medically supported.[10] On average, obese people have a greater energy expenditure than their normal counterparts due to the energy required to maintain an increased body mass.[10][11]
6
+
7
+ Obesity is mostly preventable through a combination of social changes and personal choices.[1] Changes to diet and exercising are the main treatments.[2] Diet quality can be improved by reducing the consumption of energy-dense foods, such as those high in fat or sugars, and by increasing the intake of dietary fiber.[1] Medications can be used, along with a suitable diet, to reduce appetite or decrease fat absorption.[5] If diet, exercise, and medication are not effective, a gastric balloon or surgery may be performed to reduce stomach volume or length of the intestines, leading to feeling full earlier or a reduced ability to absorb nutrients from food.[6][12]
8
+
9
+ Obesity is a leading preventable cause of death worldwide, with increasing rates in adults and children.[1][13] In 2015, 600 million adults (12%) and 100 million children were obese in 195 countries.[7] Obesity is more common in women than men.[1] Authorities view it as one of the most serious public health problems of the 21st century.[14] Obesity is stigmatized in much of the modern world (particularly in the Western world), though it was seen as a symbol of wealth and fertility at other times in history and still is in some parts of the world.[2][15] In 2013, several medical societies, including the American Medical Association and the American Heart Association, classified obesity as a disease.[16][17][18]
10
+
11
+ Obesity is a medical condition in which excess body fat has accumulated to the extent that it may have an adverse effect on health.[20] It is defined by body mass index (BMI) and further evaluated in terms of fat distribution via the waist–hip ratio and total cardiovascular risk factors.[21][22] BMI is closely related to both percentage body fat and total body fat.[23]
12
+ In children, a healthy weight varies with age and sex. Obesity in children and adolescents is defined not as an absolute number but in relation to a historical normal group, such that obesity is a BMI greater than the 95th percentile.[24] The reference data on which these percentiles were based date from 1963 to 1994, and thus have not been affected by the recent increases in weight.[25] BMI is defined as the subject's weight in kilograms divided by the square of their height in metres.
13
+
14
+ BMI is usually expressed in kilograms of weight per metre squared of height. To convert from pounds per inch squared multiply by 703 (kg/m2)/(lb/sq in).[26]
15
+
16
+ The most commonly used definitions, established by the World Health Organization (WHO) in 1997 and published in 2000, set the following values: a BMI below 18.5 kg/m2 is underweight, 18.5–24.9 is normal weight, 25.0–29.9 is overweight, and 30.0 or above is obese, with obesity further divided into class I (30.0–34.9), class II (35.0–39.9) and class III (40.0 and above).[27][28]
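To make the arithmetic concrete, here is a minimal illustrative sketch in Python (not part of the source article; the function names and example figures are chosen for illustration only). It computes BMI as weight in kilograms divided by the square of height in metres, converts from pounds and inches using the factor of 703 mentioned above, and classifies an adult value against the WHO cut-offs just listed; it is not meant for children, whose obesity is defined by age- and sex-specific percentiles.

def bmi_metric(weight_kg, height_m):
    # BMI = weight in kilograms divided by the square of height in metres (kg/m2).
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb, height_in):
    # Same quantity from pounds and inches, using the 703 (kg/m2)/(lb/sq in) factor.
    return 703.0 * weight_lb / height_in ** 2

def who_adult_category(bmi):
    # Standard WHO adult cut-offs; children and adolescents are instead classified
    # by age- and sex-specific BMI percentiles.
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal weight"
    if bmi < 30.0:
        return "overweight"
    if bmi < 35.0:
        return "obese, class I"
    if bmi < 40.0:
        return "obese, class II"
    return "obese, class III"

# Example: 95 kg at 1.75 m gives a BMI of about 31.0, which falls in obese class I.
print(round(bmi_metric(95, 1.75), 1), who_adult_category(bmi_metric(95, 1.75)))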
17
+
18
+ Some modifications to the WHO definitions have been made by particular organizations.[29] The surgical literature breaks down class II and III obesity into further categories whose exact values are still disputed.[30]
19
+
20
+ As Asian populations develop negative health consequences at a lower BMI than Caucasians, some nations have redefined obesity; Japan has defined obesity as any BMI greater than 25 kg/m2[8] while China uses a BMI of greater than 28 kg/m2.[29]
21
+
22
+ Excessive body weight is associated with various diseases and conditions, particularly cardiovascular diseases, diabetes mellitus type 2, obstructive sleep apnea, certain types of cancer, osteoarthritis,[2] and asthma.[2][31] As a result, obesity has been found to reduce life expectancy.[2]
23
+
24
+ Obesity is one of the leading preventable causes of death worldwide.[33][34][35] A number of reviews have found that mortality risk is lowest at a BMI of 20–25 kg/m2[36][37][38] in non-smokers and at 24–27 kg/m2 in current smokers, with risk increasing along with changes in either direction.[39][40] This appears to apply in at least four continents.[38] In contrast, a 2013 review found that grade 1 obesity (BMI 30–35) was not associated with higher mortality than normal weight, and that overweight (BMI 25–30) was associated with "lower" mortality than was normal weight (BMI 18.5–25).[41] Other evidence suggests that the association of BMI and waist circumference with mortality is U- or J-shaped, while the association between waist-to-hip ratio and waist-to-height ratio with mortality is more positive.[42] In Asians the risk of negative health effects begins to increase between 22–25 kg/m2.[43] A BMI above 32 kg/m2 has been associated with a doubled mortality rate among women over a 16-year period.[44] In the United States, obesity is estimated to cause 111,909 to 365,000 deaths per year,[2][35] while 1 million (7.7%) of deaths in Europe are attributed to excess weight.[45][46] On average, obesity reduces life expectancy by six to seven years,[2][47] a BMI of 30–35 kg/m2 reduces life expectancy by two to four years,[37] while severe obesity (BMI > 40 kg/m2) reduces life expectancy by ten years.[37]
25
+
26
+ Obesity increases the risk of many physical and mental conditions. These comorbidities are most commonly shown in metabolic syndrome,[2] a combination of medical disorders which includes: diabetes mellitus type 2, high blood pressure, high blood cholesterol, and high triglyceride levels.[48]
27
+
28
+ Complications are either directly caused by obesity or indirectly related through mechanisms sharing a common cause such as a poor diet or a sedentary lifestyle. The strength of the link between obesity and specific conditions varies. One of the strongest is the link with type 2 diabetes. Excess body fat underlies 64% of cases of diabetes in men and 77% of cases in women.[49]
29
+
30
+ Health consequences fall into two broad categories: those attributable to the effects of increased fat mass (such as osteoarthritis, obstructive sleep apnea, social stigmatization) and those due to the increased number of fat cells (diabetes, cancer, cardiovascular disease, non-alcoholic fatty liver disease).[2][50] Increases in body fat alter the body's response to insulin, potentially leading to insulin resistance. Increased fat also creates a proinflammatory state,[51][52] and a prothrombotic state.[50][53]
31
+
32
+ Although the negative health consequences of obesity in the general population are well supported by the available evidence, health outcomes in certain subgroups seem to be improved at an increased BMI, a phenomenon known as the obesity survival paradox.[75] The paradox was first described in 1999 in overweight and obese people undergoing hemodialysis,[75] and has subsequently been found in those with heart failure and peripheral artery disease (PAD).[76]
33
+
34
+ In people with heart failure, those with a BMI between 30.0 and 34.9 had lower mortality than those with a normal weight. This has been attributed to the fact that people often lose weight as they become progressively more ill.[77] Similar findings have been made in other types of heart disease. People with class I obesity and heart disease do not have greater rates of further heart problems than people of normal weight who also have heart disease. In people with greater degrees of obesity, however, the risk of further cardiovascular events is increased.[78][79] Even after cardiac bypass surgery, no increase in mortality is seen in the overweight and obese.[80] One study found that the improved survival could be explained by the more aggressive treatment obese people receive after a cardiac event.[81] Another study found that if one takes into account chronic obstructive pulmonary disease (COPD) in those with PAD, the benefit of obesity no longer exists.[76]
35
+
36
+ At an individual level, a combination of excessive food energy intake and a lack of physical activity is thought to explain most cases of obesity.[82] A limited number of cases are due primarily to genetics, medical reasons, or psychiatric illness.[9] In contrast, increasing rates of obesity at a societal level are felt to be due to an easily accessible and palatable diet,[83] increased reliance on cars, and mechanized manufacturing.[84][85]
37
+
38
+ A 2006 review identified ten other possible contributors to the recent increase of obesity: (1) insufficient sleep, (2) endocrine disruptors (environmental pollutants that interfere with lipid metabolism), (3) decreased variability in ambient temperature, (4) decreased rates of smoking, because smoking suppresses appetite, (5) increased use of medications that can cause weight gain (e.g., atypical antipsychotics), (6) proportional increases in ethnic and age groups that tend to be heavier, (7) pregnancy at a later age (which may cause susceptibility to obesity in children), (8) epigenetic risk factors passed on generationally, (9) natural selection for higher BMI, and (10) assortative mating leading to increased concentration of obesity risk factors (this would increase the number of obese people by increasing population variance in weight).[86] According to the Endocrine Society, there is "growing evidence suggesting that obesity is a disorder of the energy homeostasis system, rather than simply arising from the passive accumulation of excess weight".[87]
39
+
40
+
41
+
42
+
43
+
44
+ A 2016 review supported excess food as the primary factor.[89][90] Dietary energy supply per capita varies markedly between different regions and countries. It has also changed significantly over time.[88] From the early 1970s to the late 1990s the average food energy available per person per day (the amount of food bought) increased in all parts of the world except Eastern Europe. The United States had the highest availability with 3,654 calories (15,290 kJ) per person in 1996.[88] This increased further in 2003 to 3,754 calories (15,710 kJ).[88] During the late 1990s Europeans had 3,394 calories (14,200 kJ) per person, in the developing areas of Asia there were 2,648 calories (11,080 kJ) per person, and in sub-Saharan Africa people had 2,176 calories (9,100 kJ) per person.[88][91] Total food energy consumption has been found to be related to obesity.[92]
45
+
46
+ The widespread availability of nutritional guidelines[93] has done little to address the problems of overeating and poor dietary choice.[94] From 1971 to 2000, obesity rates in the United States increased from 14.5% to 30.9%.[95] During the same period, an increase occurred in the average amount of food energy consumed. For women, the average increase was 335 calories (1,400 kJ) per day (1,542 calories (6,450 kJ) in 1971 and 1,877 calories (7,850 kJ) in 2004), while for men the average increase was 168 calories (700 kJ) per day (2,450 calories (10,300 kJ) in 1971 and 2,618 calories (10,950 kJ) in 2004). Most of this extra food energy came from an increase in carbohydrate consumption rather than fat consumption.[96] The primary sources of these extra carbohydrates are sweetened beverages, which now account for almost 25 percent of daily food energy in young adults in America,[97] and potato chips.[98] Consumption of sweetened drinks such as soft drinks, fruit drinks, iced tea, and energy and vitamin water drinks is believed to be contributing to the rising rates of obesity[99][100] and to an increased risk of metabolic syndrome and type 2 diabetes.[101] Vitamin D deficiency is related to diseases associated with obesity.[102]
47
+
48
+ As societies become increasingly reliant on energy-dense, large-portion fast-food meals, the association between fast-food consumption and obesity becomes more concerning.[103] In the United States, consumption of fast-food meals tripled and food energy intake from these meals quadrupled between 1977 and 1995.[104]
49
+
50
+ Agricultural policy and techniques in the United States and Europe have led to lower food prices. In the United States, subsidization of corn, soy, wheat, and rice through the U.S. farm bill has made the main sources of processed food cheap compared to fruits and vegetables.[105] Calorie count laws and nutrition facts labels attempt to steer people toward making healthier food choices, including awareness of how much food energy is being consumed.
51
+
52
+ Obese people consistently under-report their food consumption as compared to people of normal weight.[106] This is supported both by tests of people carried out in a calorimeter room[107] and by direct observation.
53
+
54
+ A sedentary lifestyle plays a significant role in obesity.[108] Worldwide there has been a large shift towards less physically demanding work,[109][110][111] and currently at least 30% of the world's population gets insufficient exercise.[110] This is primarily due to increasing use of mechanized transportation and a greater prevalence of labor-saving technology in the home.[109][110][111] In children, there appear to be declines in levels of physical activity due to less walking and physical education.[112] World trends in active leisure time physical activity are less clear. The World Health Organization indicates people worldwide are taking up less active recreational pursuits, while a study from Finland[113] found an increase and a study from the United States found leisure-time physical activity has not changed significantly.[114] A 2011 review of physical activity in children found that it may not be a significant contributor.[115]
55
+
56
+ In both children and adults, there is an association between television viewing time and the risk of obesity.[116][117][118] A review found 63 of 73 studies (86%) showed an increased rate of childhood obesity with increased media exposure, with rates increasing proportionally to time spent watching television.[119]
57
+
58
+ Like many other medical conditions, obesity is the result of an interplay between genetic and environmental factors.[121] Polymorphisms in various genes controlling appetite and metabolism predispose to obesity when sufficient food energy is present. As of 2006, more than 41 of these sites on the human genome have been linked to the development of obesity when a favorable environment is present.[122] People with two copies of a risk variant of the FTO gene (fat mass and obesity associated gene) have been found on average to weigh 3–4 kg more and to have a 1.67-fold greater risk of obesity compared with those without the risk allele.[123] The proportion of the variation in BMI between people that is due to genetics varies, depending on the population examined, from 6% to 85%.[124]
59
+
60
+ Obesity is a major feature in several syndromes, such as Prader–Willi syndrome, Bardet–Biedl syndrome, Cohen syndrome, and MOMO syndrome. (The term "non-syndromic obesity" is sometimes used to exclude these conditions.)[125] In people with early-onset severe obesity (defined by an onset before 10 years of age and body mass index over three standard deviations above normal), 7% harbor a single point DNA mutation.[126]
61
+
62
+ Studies that have focused on inheritance patterns rather than on specific genes have found that 80% of the offspring of two obese parents were also obese, in contrast to less than 10% of the offspring of two parents who were of normal weight.[127] Different people exposed to the same environment have different risks of obesity due to their underlying genetics.[128]
63
+
64
+ The thrifty gene hypothesis postulates that, due to dietary scarcity during human evolution, people are prone to obesity. Their ability to take advantage of rare periods of abundance by storing energy as fat would be advantageous during times of varying food availability, and individuals with greater adipose reserves would be more likely to survive famine. This tendency to store fat, however, would be maladaptive in societies with stable food supplies.[129] This theory has received various criticisms, and other evolutionarily-based theories such as the drifty gene hypothesis and the thrifty phenotype hypothesis have also been proposed.[130][131]
65
+
66
+ Certain physical and mental illnesses and the pharmaceutical substances used to treat them can increase risk of obesity. Medical illnesses that increase obesity risk include several rare genetic syndromes (listed above) as well as some congenital or acquired conditions: hypothyroidism, Cushing's syndrome, growth hormone deficiency,[132] and some eating disorders such as binge eating disorder and night eating syndrome.[2] However, obesity is not regarded as a psychiatric disorder, and therefore is not listed in the DSM-IVR as a psychiatric illness.[133] The risk of overweight and obesity is higher in patients with psychiatric disorders than in persons without psychiatric disorders.[134]
67
+
68
+ Certain medications may cause weight gain or changes in body composition; these include insulin, sulfonylureas, thiazolidinediones, atypical antipsychotics, antidepressants, steroids, certain anticonvulsants (phenytoin and valproate), pizotifen, and some forms of hormonal contraception.[2]
69
+
70
+ While genetic influences are important to understanding obesity, they cannot explain the current dramatic increase seen within specific countries or globally.[135] Though it is accepted that energy consumption in excess of energy expenditure leads to obesity on an individual basis, the cause of the shifts in these two factors on the societal scale is much debated. There are a number of theories as to the cause, but most researchers believe it is a combination of various factors.
71
+
72
+ The correlation between social class and BMI varies globally. A review in 1989 found that in developed countries women of a high social class were less likely to be obese. No significant differences were seen among men of different social classes. In the developing world, women, men, and children from high social classes had greater rates of obesity.[136] An update of this review carried out in 2007 found the same relationships, but they were weaker. The decrease in strength of correlation was felt to be due to the effects of globalization.[137] Among developed countries, levels of adult obesity, and percentage of teenage children who are overweight, are correlated with income inequality. A similar relationship is seen among US states: more adults, even in higher social classes, are obese in more unequal states.[138]
73
+
74
+ Many explanations have been put forth for associations between BMI and social class. It is thought that in developed countries, the wealthy are able to afford more nutritious food, are under greater social pressure to remain slim, and have more opportunities along with greater expectations for physical fitness. In undeveloped countries the ability to afford food, high energy expenditure with physical labor, and cultural values favoring a larger body size are believed to contribute to the observed patterns.[137] Attitudes toward body weight held by people in one's life may also play a role in obesity. A correlation in BMI changes over time has been found among friends, siblings, and spouses.[139] Stress and perceived low social status appear to increase risk of obesity.[138][140][141]
75
+
76
+ Smoking has a significant effect on an individual's weight. Over ten years, men who quit smoking gain an average of 4.4 kilograms (9.7 lb) and women gain an average of 5.0 kilograms (11.0 lb).[142] However, changing rates of smoking have had little effect on the overall rates of obesity.[143]
77
+
78
+ In the United States the number of children a person has is related to their risk of obesity. A woman's risk increases by 7% per child, while a man's risk increases by 4% per child.[144] This could be partly explained by the fact that having dependent children decreases physical activity in Western parents.[145]
79
+
80
+ In the developing world, urbanization is playing a role in increasing rates of obesity. In China overall rates of obesity are below 5%; however, in some cities rates of obesity are greater than 20%.[146]
81
+
82
+ Malnutrition in early life is believed to play a role in the rising rates of obesity in the developing world.[147] Endocrine changes that occur during periods of malnutrition may promote the storage of fat once more food energy becomes available.[147]
83
+
84
+ Consistent with cognitive epidemiological data, numerous studies confirm that obesity is associated with cognitive deficits.[148][149]
85
+
86
+ Whether obesity causes cognitive deficits, or vice versa, is unclear at present.
87
+
88
+ The study of the effect of infectious agents on metabolism is still in its early stages. Gut flora has been shown to differ between lean and obese people. There is an indication that gut flora can affect the metabolic potential. This apparent alteration is believed to confer a greater capacity to harvest energy contributing to obesity. Whether these differences are the direct cause or the result of obesity has yet to be determined unequivocally.[150] The use of antibiotics among children has also been associated with obesity later in life.[151][152]
89
+
90
+ An association between viruses and obesity has been found in humans and several different animal species. The amount that these associations may have contributed to the rising rate of obesity is yet to be determined.[153]
91
+
92
+ A number of reviews have found an association between short duration of sleep and obesity.[154][155] Whether one causes the other is unclear.[154] Even if short sleep does increase weight gain, it is unclear whether this occurs to a meaningful degree or whether increasing sleep would be of benefit.[156]
93
+
94
+ Certain aspects of personality are associated with being obese.[157] Neuroticism, impulsivity, and sensitivity to reward are more common in people who are obese while conscientiousness and self-control are less common in people who are obese.[157][158] Loneliness is also a risk factor.[159]
95
+
96
+ There are many possible pathophysiological mechanisms involved in the development and maintenance of obesity.[160] This field of research had been almost unapproached until the leptin gene was discovered in 1994 by J. M. Friedman's laboratory.[161] While leptin and ghrelin are produced peripherally, they control appetite through their actions on the central nervous system. In particular, they and other appetite-related hormones act on the hypothalamus, a region of the brain central to the regulation of food intake and energy expenditure. There are several circuits within the hypothalamus that contribute to its role in integrating appetite, the melanocortin pathway being the most well understood.[160] The circuit begins with an area of the hypothalamus, the arcuate nucleus, that has outputs to the lateral hypothalamus (LH) and ventromedial hypothalamus (VMH), the brain's feeding and satiety centers, respectively.[162]
97
+
98
+ The arcuate nucleus contains two distinct groups of neurons.[160] The first group coexpresses neuropeptide Y (NPY) and agouti-related peptide (AgRP) and has stimulatory inputs to the LH and inhibitory inputs to the VMH. The second group coexpresses pro-opiomelanocortin (POMC) and cocaine- and amphetamine-regulated transcript (CART) and has stimulatory inputs to the VMH and inhibitory inputs to the LH. Consequently, NPY/AgRP neurons stimulate feeding and inhibit satiety, while POMC/CART neurons stimulate satiety and inhibit feeding. Both groups of arcuate nucleus neurons are regulated in part by leptin. Leptin inhibits the NPY/AgRP group while stimulating the POMC/CART group. Thus a deficiency in leptin signaling, either via leptin deficiency or leptin resistance, leads to overfeeding and may account for some genetic and acquired forms of obesity.[160]
99
+
100
+ The World Health Organization (WHO) predicts that overweight and obesity may soon replace more traditional public health concerns such as undernutrition and infectious diseases as the most significant cause of poor health.[163] Obesity is a public health and policy problem because of its prevalence, costs, and health effects.[164] The United States Preventive Services Task Force recommends screening for all adults followed by behavioral interventions in those who are obese.[165] Public health efforts seek to understand and correct the environmental factors responsible for the increasing prevalence of obesity in the population. Solutions look at changing the factors that cause excess food energy consumption and inhibit physical activity. Efforts include federally reimbursed meal programs in schools, limiting direct junk food marketing to children,[166] and decreasing access to sugar-sweetened beverages in schools.[167] The World Health Organization recommends the taxing of sugary drinks.[168] When constructing urban environments, efforts have been made to increase access to parks and to develop pedestrian routes.[169] There is low quality evidence that nutritional labelling with energy information on menus can help to reduce energy intake while dining in restaurants.[170]
101
+
102
+
103
+
104
+ Many organizations have published reports pertaining to obesity. In 1998, the first US Federal guidelines were published, titled "Clinical Guidelines on the Identification, Evaluation, and Treatment of Overweight and Obesity in Adults: The Evidence Report".[171] In 2006 the Canadian Obesity Network, now known as Obesity Canada, published the "Canadian Clinical Practice Guidelines (CPG) on the Management and Prevention of Obesity in Adults and Children". This is a comprehensive evidence-based guideline to address the management and prevention of overweight and obesity in adults and children.[82]
105
+
106
+ In 2004, the United Kingdom Royal College of Physicians, the Faculty of Public Health and the Royal College of Paediatrics and Child Health released the report "Storing up Problems", which highlighted the growing problem of obesity in the UK.[172] The same year, the House of Commons Health Select Committee published its "most comprehensive inquiry [...] ever undertaken" into the impact of obesity on health and society in the UK and possible approaches to the problem.[173] In 2006, the National Institute for Health and Clinical Excellence (NICE) issued a guideline on the diagnosis and management of obesity, as well as policy implications for non-healthcare organizations such as local councils.[174] A 2007 report produced by Derek Wanless for the King's Fund warned that unless further action was taken, obesity had the capacity to cripple the National Health Service financially.[175]
107
+
108
+ Comprehensive approaches are being looked at to address the rising rates of obesity. The Obesity Policy Action (OPA) framework divides measures into 'upstream', 'midstream', and 'downstream' policies. 'Upstream' policies look at changing society, 'midstream' policies try to alter individuals' behavior to prevent obesity, and 'downstream' policies try to treat currently afflicted people.[176]
109
+
110
+ The main treatment for obesity consists of weight loss via dieting and physical exercise.[16][82][177][178] Dieting, as part of a lifestyle change, produces sustained weight loss, despite slow weight regain over time.[16][179][180][181] Intensive behavioral interventions combining both dietary changes and exercise are recommended.[16][177][182]
111
+
112
+ Several diets are effective.[16] In the short term, low-carbohydrate diets appear better than low-fat diets for weight loss.[183] In the long term, however, all types of low-carbohydrate and low-fat diets appear equally beneficial.[183][184] A 2014 review found that the heart disease and diabetes risks associated with different diets appear to be similar.[185] Promotion of the Mediterranean diet among the obese may lower the risk of heart disease.[183] Decreased intake of sweet drinks is also related to weight loss.[183] Success rates of long-term weight loss maintenance with lifestyle changes are low, ranging from 2–20%.[186] Dietary and lifestyle changes are effective in limiting excessive weight gain in pregnancy and improve outcomes for both the mother and the child.[187] Intensive behavioral counseling is recommended in those who are both obese and have other risk factors for heart disease.[188]
113
+
114
+ Five medications have evidence for long-term use: orlistat, lorcaserin, liraglutide, phentermine–topiramate, and naltrexone–bupropion.[189] They result in weight loss after one year of 3.0 to 6.7 kg (6.6–14.8 lb) over placebo.[189] Orlistat, liraglutide, and naltrexone–bupropion are available in both the United States and Europe, whereas phentermine–topiramate is available only in the United States.[190] European regulatory authorities rejected lorcaserin and phentermine–topiramate, in part because of associations of heart valve problems with lorcaserin and more general heart and blood vessel problems with phentermine–topiramate.[190] Lorcaserin was available in the United States but was removed from the market in 2020 due to its association with cancer.[191] Orlistat use is associated with high rates of gastrointestinal side effects,[192] and concerns have been raised about negative effects on the kidneys.[193] There is no information on how these drugs affect longer-term complications of obesity such as cardiovascular disease or death.[5]
115
+
116
+ The most effective treatment for obesity is bariatric surgery.[6][16] The types of procedures include laparoscopic adjustable gastric banding, Roux-en-Y gastric bypass, vertical-sleeve gastrectomy, and biliopancreatic diversion.[189] Surgery for severe obesity is associated with long-term weight loss, improvement in obesity-related conditions,[194] and decreased overall mortality. One study found a weight loss of between 14% and 25% (depending on the type of procedure performed) at 10 years, and a 29% reduction in all cause mortality when compared to standard weight loss measures.[195] Complications occur in about 17% of cases and reoperation is needed in 7% of cases.[194] Due to its cost and risks, researchers are searching for other effective yet less invasive treatments including devices that occupy space in the stomach.[196] For adults who have not responded to behavioral treatments with or without medication, the US guidelines on obesity recommend informing them about bariatric surgery.[177]
117
+
118
+
119
+
120
+
121
+
122
+
123
+
124
+
125
+
126
+ In earlier historical periods obesity was rare and achievable only by a small elite, although it was already recognised as a problem for health. But as prosperity increased in the Early Modern period, it affected increasingly larger groups of the population.[199]
127
+
128
+ In 1997 the WHO formally recognized obesity as a global epidemic.[97] As of 2008 the WHO estimates that at least 500 million adults (greater than 10%) are obese, with higher rates among women than men.[200] The percentage of adults affected in the United States as of 2015–2016 is about 39.6% overall (37.9% of males and 41.1% of females).[201]
129
+
130
+ The rate of obesity also increases with age, at least up to 50 or 60 years old,[202] and severe obesity in the United States, Australia, and Canada is increasing faster than the overall rate of obesity.[30][203][204] The OECD has projected an increase in obesity rates until at least 2030, especially in the United States, Mexico and England, with rates reaching 47%, 39% and 35%, respectively.[205]
131
+
132
+ Obesity was once considered a problem only of high-income countries, but rates are rising worldwide, affecting both the developed and developing world.[45] These increases have been felt most dramatically in urban settings.[200] The only remaining region of the world where obesity is not common is sub-Saharan Africa.[2]
133
+
134
+ Obesity is from the Latin obesitas, which means "stout, fat, or plump". Ēsus is the past participle of edere (to eat), with ob (over) added to it.[206] The Oxford English Dictionary documents its first usage in 1611 by Randle Cotgrave.[207]
135
+
136
+ Ancient Greek medicine recognized obesity as a medical disorder, and recorded that the Ancient Egyptians saw it in the same way.[199] Hippocrates wrote that "Corpulence is not only a disease itself, but the harbinger of others".[2] The Indian surgeon Sushruta (6th century BCE) related obesity to diabetes and heart disorders.[209] He recommended physical work to help cure it and its side effects.[209] For most of human history mankind struggled with food scarcity.[210] Obesity has thus historically been viewed as a sign of wealth and prosperity. It was common among high officials in Europe in the Middle Ages and the Renaissance[208] as well as in Ancient East Asian civilizations.[211] In the 17th century, English medical author Tobias Venner is credited with being one of the first to refer to obesity as a societal disease in a published English-language book.[199][212]
137
+
138
+ With the onset of the Industrial Revolution it was realized that the military and economic might of nations was dependent on both the body size and strength of their soldiers and workers.[97] Increasing the average body mass index from what is now considered underweight to what is now the normal range played a significant role in the development of industrialized societies.[97] Height and weight thus both increased through the 19th century in the developed world. During the 20th century, as populations reached their genetic potential for height, weight began increasing much more than height, resulting in obesity.[97] In the 1950s increasing wealth in the developed world decreased child mortality, but as body weight increased, heart and kidney disease became more common.[97][213]
139
+ During this time period, insurance companies realized the connection between weight and life expectancy and increased premiums for the obese.[2]
140
+
141
+ Many cultures throughout history have viewed obesity as the result of a character flaw. The obesus or fat character in Ancient Greek comedy was a glutton and figure of mockery. During Christian times food was viewed as a gateway to the sins of sloth and lust.[15] In modern Western culture, excess weight is often regarded as unattractive, and obesity is commonly associated with various negative stereotypes. People of all ages can face social stigmatization, and may be targeted by bullies or shunned by their peers.[214]
142
+
143
+ Public perceptions in Western society regarding healthy body weight differ from those regarding the weight that is considered ideal  – and both have changed since the beginning of the 20th century. The weight that is viewed as an ideal has become lower since the 1920s. This is illustrated by the fact that the average height of Miss America pageant winners increased by 2% from 1922 to 1999, while their average weight decreased by 12%.[215] On the other hand, people's views concerning healthy weight have changed in the opposite direction. In Britain, the weight at which people considered themselves to be overweight was significantly higher in 2007 than in 1999.[216] These changes are believed to be due to increasing rates of adiposity leading to increased acceptance of extra body fat as being normal.[216]
144
+
145
+ Obesity is still seen as a sign of wealth and well-being in many parts of Africa. This has become particularly common since the HIV epidemic began.[2]
146
+
147
+ The first sculptural representations of the human body 20,000–35,000 years ago depict obese females. Some attribute the Venus figurines to the tendency to emphasize fertility while others feel they represent "fatness" in the people of the time.[15] Corpulence is, however, absent in both Greek and Roman art, probably in keeping with their ideals regarding moderation. This continued through much of Christian European history, with only those of low socioeconomic status being depicted as obese.[15]
148
+
149
+ During the Renaissance some of the upper class began flaunting their large size, as can be seen in portraits of Henry VIII of England and Alessandro dal Borro.[15] Rubens (1577–1640) regularly depicted full-bodied women in his pictures, from which derives the term Rubenesque. These women, however, still maintained the "hourglass" shape with its relationship to fertility.[217] During the 19th century, views on obesity changed in the Western world. After centuries of obesity being synonymous with wealth and social status, slimness began to be seen as the desirable standard.[15]
150
+
151
+ In addition to its health impacts, obesity leads to many problems including disadvantages in employment[218][219] and increased business costs. These effects are felt by all levels of society from individuals, to corporations, to governments.
152
+
153
+ In 2005, the medical costs attributable to obesity in the US were an estimated $190.2 billion or 20.6% of all medical expenditures,[220][221][222] while the cost of obesity in Canada was estimated at CA$2 billion in 1997 (2.4% of total health costs).[82] The total annual direct cost of overweight and obesity in Australia in 2005 was A$21 billion. Overweight and obese Australians also received A$35.6 billion in government subsidies.[223] The estimate range for annual expenditures on diet products is $40 billion to $100 billion in the US alone.[224]
154
+
155
+ The Lancet Commission on Obesity in 2019 called for a global treaty, modelled on the WHO Framework Convention on Tobacco Control, committing countries to address obesity and undernutrition while explicitly excluding the food industry from policy development. They estimate the global cost of obesity at $2 trillion a year, about 2.8% of world GDP.[225]
156
+
157
+ Obesity prevention programs have been found to reduce the cost of treating obesity-related disease. However, the longer people live, the more medical costs they incur. Researchers, therefore, conclude that reducing obesity may improve the public's health, but it is unlikely to reduce overall health spending.[226]
158
+
159
+ Obesity can lead to social stigmatization and disadvantages in employment.[218] When compared to their normal weight counterparts, obese workers on average have higher rates of absenteeism from work and take more disability leave, thus increasing costs for employers and decreasing productivity.[228] A study examining Duke University employees found that people with a BMI over 40 kg/m2 filed twice as many workers' compensation claims as those whose BMI was 18.5–24.9 kg/m2. They also had more than 12 times as many lost work days. The most common injuries in this group were due to falls and lifting, thus affecting the lower extremities, wrists or hands, and backs.[229] The Alabama State Employees' Insurance Board approved a controversial plan to charge obese workers $25 a month for health insurance that would otherwise be free unless they take steps to lose weight and improve their health. These measures started in January 2010 and apply to those state workers whose BMI exceeds 35 kg/m2 and who fail to make improvements in their health after one year.[230]
160
+
161
+ Some research shows that obese people are less likely to be hired for a job and are less likely to be promoted.[214] Obese people are also paid less than their non-obese counterparts for an equivalent job; obese women on average make 6% less and obese men make 3% less.[231]
162
+
163
+ Specific industries, such as the airline, healthcare and food industries, have special concerns. Due to rising rates of obesity, airlines face higher fuel costs and pressures to increase seating width.[232] In 2000, the extra weight of obese passengers cost airlines US$275 million.[233] The healthcare industry has had to invest in special facilities for handling severely obese patients, including special lifting equipment and bariatric ambulances.[234] Costs for restaurants are increased by litigation accusing them of causing obesity.[235] In 2005 the US Congress discussed legislation to prevent civil lawsuits against the food industry in relation to obesity; however, it did not become law.[235]
164
+
165
+ With the American Medical Association's 2013 classification of obesity as a chronic disease,[17] it is thought that health insurance companies will more likely pay for obesity treatment, counseling and surgery, and the cost of research and development of fat treatment pills or gene therapy treatments should be more affordable if insurers help to subsidize their cost.[236] The AMA classification is not legally binding, however, so health insurers still have the right to reject coverage for a treatment or procedure.[236]
166
+
167
+ In 2014, The European Court of Justice ruled that morbid obesity is a disability. The Court said that if an employee's obesity prevents him from "full and effective participation of that person in professional life on an equal basis with other workers", then it shall be considered a disability and that firing someone on such grounds is discriminatory.[237]
168
+
169
+ The principal goal of the fat acceptance movement is to decrease discrimination against people who are overweight and obese.[238][239] However, some in the movement are also attempting to challenge the established relationship between obesity and negative health outcomes.[240]
170
+
171
+ A number of organizations exist that promote the acceptance of obesity. They have increased in prominence in the latter half of the 20th century.[241] The US-based National Association to Advance Fat Acceptance (NAAFA) was formed in 1969 and describes itself as a civil rights organization dedicated to ending size discrimination.[242]
172
+
173
+ The International Size Acceptance Association (ISAA) is a non-governmental organization (NGO) which was founded in 1997. It has more of a global orientation and describes its mission as promoting size acceptance and helping to end weight-based discrimination.[243] These groups often argue for the recognition of obesity as a disability under the US Americans With Disabilities Act (ADA). The American legal system, however, has decided that the potential public health costs exceed the benefits of extending this anti-discrimination law to cover obesity.[240]
174
+
175
+ In 2015 the New York Times published an article on the Global Energy Balance Network, a nonprofit founded in 2014 that advocated for people to focus on increasing exercise rather than reducing calorie intake to avoid obesity and to be healthy. The organization was founded with at least $1.5M in funding from the Coca-Cola Company, and the company has provided $4M in research funding to the two founding scientists Gregory A. Hand and Steven N. Blair since 2008.[244][245]
176
+
177
+ The healthy BMI range varies with the age and sex of the child. Obesity in children and adolescents is defined as a BMI greater than the 95th percentile.[24] The reference data that these percentiles are based on is from 1963 to 1994 and thus has not been affected by the recent increases in rates of obesity.[25] Childhood obesity has reached epidemic proportions in the 21st century, with rising rates in both the developed and the developing world. Rates of obesity in Canadian boys have increased from 11% in the 1980s to over 30% in the 1990s, while during this same time period rates increased from 4 to 14% in Brazilian children.[246] In the UK, there were 60% more obese children in 2005 compared to 1989.[247] In the US, the percentage of overweight and obese children increased to 16% in 2008, a 300% increase over the prior 30 years.[248]
178
+
179
+ As with obesity in adults, many factors contribute to the rising rates of childhood obesity. Changing diet and decreasing physical activity are believed to be the two most important causes for the recent increase in the incidence of child obesity.[249] Antibiotics in the first 6 months of life have been associated with excess weight at seven to twelve years of age.[152] Because childhood obesity often persists into adulthood and is associated with numerous chronic illnesses, children who are obese are often tested for hypertension, diabetes, hyperlipidemia, and fatty liver disease.[82] Treatments used in children are primarily lifestyle interventions and behavioral techniques, although efforts to increase activity in children have had little success.[250] In the United States, medications are not FDA approved for use in this age group.[246] Multi-component behaviour change interventions that include changes to diet and physical activity may reduce BMI in the short term in children aged 6 to 11 years, although the benefits are small and quality of evidence is low.[251]
180
+
181
+ Obesity in pets is common in many countries. In the United States, 23–41% of dogs are overweight, and about 5.1% are obese.[252] The rate of obesity in cats was slightly higher at 6.4%.[252] In Australia the rate of obesity among dogs in a veterinary setting has been found to be 7.6%.[253] The risk of obesity in dogs is related to whether or not their owners are obese; however, there is no similar correlation between cats and their owners.[254]
en/5556.html.txt ADDED
@@ -0,0 +1,145 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+
4
+
5
+ The wild boar (Sus scrofa), also known as the "wild swine",[3] "common wild pig",[4] or simply "wild pig",[5] is a suid native to much of the Palearctic, as well as being introduced in the Nearctic, Neotropic, Oceania, the Caribbean islands, and Southeast Asia. Human intervention has spread it further, making the species one of the widest-ranging mammals in the world, as well as the most widespread suiform.[4] It has been assessed as least concern on the IUCN Red List due to its wide range, high numbers, and adaptability to a diversity of habitats.[1] It has become an invasive species in part of its introduced range. Wild boars probably originated in Southeast Asia during the Early Pleistocene[6] and outcompeted other suid species as they spread throughout the Old World.[7]
6
+
7
+ As of 1990, up to 16 subspecies are recognized, which are divided into four regional groupings based on skull height and lacrimal bone length.[2] The species lives in matriarchal societies consisting of interrelated females and their young (both male and female). Fully grown males are usually solitary outside the breeding season.[8] The grey wolf is the wild boar's main predator in most of its natural range except in the Far East and the Lesser Sunda Islands, where it is replaced by the tiger and Komodo dragon respectively.[9][10] The wild boar has a long history of association with humans, having been the ancestor of most domestic pig breeds and a big-game animal for millennia. Boars have also re-hybridized in recent decades with feral pigs; these boar–pig hybrids have become a serious pest wild animal in the Americas and Australia.
8
+
9
+ As true wild boars became extinct in Great Britain before the development of Modern English, the same terms are often used for both true wild boar and pigs, especially large or semi-wild ones. The English 'boar' stems from the Old English bar, which is thought to be derived from the West Germanic *bairaz, of unknown origin.[11] Boar is sometimes used specifically to refer to males, and may also be used to refer to male domesticated pigs, especially breeding males that have not been castrated.
10
+
11
+ 'Sow', the traditional name for a female, again comes from Old English and Germanic; it stems from Proto-Indo-European and is related to the Latin sus and the Greek hus, and more closely to the New High German Sau. The young may be called 'piglets'.
12
+
13
+ The animals' specific name scrofa is Latin for 'sow'.[12]
14
+
15
+ In hunting terminology, boars are given different designations according to their age:[13]
16
+
17
+ MtDNA studies indicate that the wild boar originated from islands in Southeast Asia such as Indonesia and the Philippines, and subsequently spread onto mainland Eurasia and North Africa.[6] The earliest fossil finds of the species come from both Europe and Asia, and date back to the Early Pleistocene.[14] By the late Villafranchian, S. scrofa largely displaced the related S. strozzii, a large, possibly swamp-adapted suid ancestral to the modern S. verrucosus throughout the Eurasian mainland, restricting it to insular Asia.[7] Its closest wild relative is the bearded pig of Malacca and surrounding islands.[3]
18
+
19
+ As of 2005,[2] 16 subspecies are recognised, which are divided into four regional groupings:
20
+
21
+ sahariensis (Heim de Balzac, 1937)
22
+
23
+ nipponicus (Heude, 1899)
24
+
25
+ mediterraneus (Ulmansky, 1911)
26
+ reiseri (Bolkay, 1925)
27
+
28
+ sardous (Ströbel, 1882)
29
+
30
+ With the exception of domestic pigs in Timor and Papua New Guinea (which appear to be of Sulawesi warty pig stock), the wild boar is the ancestor of most pig breeds.[16][24] Archaeological evidence suggests that pigs were domesticated from wild boar as early as 13,000–12,700 BCE in the Near East in the Tigris Basin,[25] being managed in the wild in a way similar to the way they are managed by some modern New Guineans.[26] Remains of pigs have been dated to earlier than 11,400 BCE in Cyprus. Those animals must have been introduced from the mainland, which suggests domestication in the adjacent mainland by then.[27] There was also a separate domestication in China, which took place about 8,000 years ago.[28][29]
31
+
32
+ DNA evidence from sub-fossil remains of teeth and jawbones of Neolithic pigs shows that the first domestic pigs in Europe had been brought from the Near East. This stimulated the domestication of local European wild boars, resulting in a third domestication event with the Near Eastern genes dying out in European pig stock. Modern domesticated pigs have involved complex exchanges, with European domesticated lines being exported in turn to the ancient Near East.[30][31] Historical records indicate that Asian pigs were introduced into Europe during the 18th and early 19th centuries.[28] Domestic pigs tend to have much more developed hindquarters than their wild boar ancestors, to the point where 70% of their body weight is concentrated in the posterior, which is the opposite of wild boar, where most of the muscles are concentrated on the head and shoulders.[32]
33
+
34
+ The wild boar is a bulky, massively built suid with short and relatively thin legs. The trunk is short and robust, while the hindquarters are comparatively underdeveloped. The region behind the shoulder blades rises into a hump and the neck is short and thick to the point of being nearly immobile. The animal's head is very large, taking up to one-third of the body's entire length.[3] The structure of the head is well suited for digging. The head acts as a plough, while the powerful neck muscles allow the animal to upturn considerable amounts of soil:[33] it is capable of digging 8–10 cm (3.1–3.9 in) into frozen ground and can upturn rocks weighing 40–50 kg (88–110 lb).[9] The eyes are small and deep-set and the ears long and broad. The species has well developed canine teeth, which protrude from the mouths of adult males. The middle hooves are larger and more elongated than the lateral ones and are capable of quick movements.[3] The animal can run at a maximum speed of 40 km/h (25 mph) and jump at a height of 140–150 cm (55–59 in).[9]
35
+
36
+ Sexual dimorphism is very pronounced in the species, with males being typically 5–10% larger and 20–30% heavier than females. Males also sport a mane running down the back, which is particularly apparent during autumn and winter.[34] The canine teeth are also much more prominent in males and grow throughout life. The upper canines are relatively short and grow sideways early in life, though they gradually curve upwards. The lower canines are much sharper and longer, with the exposed parts measuring 10–12 cm (3.9–4.7 in) in length. In the breeding period, males develop a coating of subcutaneous tissue, which may be 2–3 cm (0.79–1.18 in) thick, extending from the shoulder blades to the rump, thus protecting vital organs during fights. Males sport a roughly egg-sized sack near the opening of the penis, which collects urine and emits a sharp odour. The function of this sack is not fully understood.[3]
37
+
38
+ Adult size and weight is largely determined by environmental factors; boars living in arid areas with little productivity tend to attain smaller sizes than their counterparts inhabiting areas with abundant food and water. In most of Europe, males average 75–100 kg (165–220 lb) in weight, 75–80 cm (30–31 in) in shoulder height and 150 cm (59 in) in body length, whereas females average 60–80 kg (130–180 lb) in weight, 70 cm (28 in) in shoulder height and 140 cm (55 in) in body length. In Europe's Mediterranean regions, males may reach average weights as low as 50 kg (110 lb) and females 45 kg (99 lb), with shoulder heights of 63–65 cm (25–26 in). In the more productive areas of Eastern Europe, males average 110–130 kg (240–290 lb) in weight, 95 cm (37 in) in shoulder height and 160 cm (63 in) in body length, while females weigh 95 kg (209 lb), reach 85–90 cm (33–35 in) in shoulder height and 145 cm (57 in) in body length. In Western and Central Europe, the largest males weigh 200 kg (440 lb) and females 120 kg (260 lb). In Northeastern Asia, large males can reach brown bear-like sizes, weighing 270 kg (600 lb) and measuring 110–118 cm (43–46 in) in shoulder height. Some adult males in Ussuriland and Manchuria have been recorded to weigh 300–350 kg (660–770 lb) and measure 125 cm (49 in) in shoulder height. Adults of this size are generally immune from wolf predation.[35] Such giants are rare in modern times, due to past overhunting preventing animals from attaining their full growth.[3]
39
+
40
+ The winter coat consists of long, coarse bristles underlaid with short brown downy fur. The length of these bristles varies along the body, with the shortest being around the face and limbs and the longest running along the back. These back bristles form the aforementioned mane prominent in males and stand erect when the animal is agitated. Colour is highly variable; specimens around Lake Balkhash are very lightly coloured, and can even be white, while some boars from Belarus and Ussuriland can be black. Some subspecies sport a light-coloured patch running backward from the corners of the mouth. Coat colour also varies with age, with piglets having light brown or rusty-brown fur with pale bands extending from the flanks and back.[3]
41
+
42
+ The wild boar produces a number of different sounds which are divided into three categories:
43
+
44
+ Its sense of smell is very well developed to the point that the animal is used for drug detection in Germany.[37] Its hearing is also acute, though its eyesight is comparatively weak,[3] lacking color vision[37] and being unable to recognise a standing human 10–15 metres (33–49 ft) away.[9]
45
+
46
+ Pigs are one of four known mammalian species which possess mutations in the nicotinic acetylcholine receptor that protect against snake venom. Mongooses, honey badgers, hedgehogs, and pigs all have modifications to the receptor pocket which prevents the snake venom α-neurotoxin from binding. These represent four separate, independent mutations.[38]
47
+
48
+ Boars are typically social animals, living in female-dominated sounders consisting of barren sows and mothers with young led by an old matriarch. Male boars leave their sounder at the age of 8–15 months, while females either remain with their mothers or establish new territories nearby. Subadult males may live in loosely knit groups, while adult and elderly males tend to be solitary outside the breeding season.[8][a]
49
+
50
+ The breeding period in most areas lasts from November to January, though most mating only lasts a month and a half. Prior to mating, the males develop their subcutaneous armour in preparation for confronting rivals. The testicles double in size and the glands secrete a foamy yellowish liquid. Once ready to reproduce, males travel long distances in search of a sounder of sows, eating little on the way. Once a sounder has been located, the male drives off all young animals and persistently chases the sows. At this point, the male fiercely fights potential rivals.[3] A single male can mate with 5–10 sows.[9] By the end of the rut, males are often badly mauled and have lost 20% of their body weight,[3] with bite-induced injuries to the penis being common.[40] The gestation period varies according to the age of the expecting mother. For first-time breeders, it lasts 114–130 days, while it lasts 133–140 days in older sows. Farrowing occurs between March and May, with litter sizes depending on the age and nutrition of the mother. The average litter consists of 4–6 piglets, with the maximum being 10–12.[3][b] The piglets are whelped in a nest constructed from twigs, grasses and leaves. Should the mother die prematurely, the piglets are adopted by the other sows in the sounder.[42]
51
+
52
+ Newborn piglets weigh around 600–1,000 grams, lacking underfur and bearing a single milk incisor and canine on each half of the jaw.[3] There is intense competition between the piglets over the most milk-rich nipples, as the best-fed young grow faster and have stronger constitutions.[42] The piglets do not leave the lair for their first week of life. Should the mother be absent, the piglets lie closely pressed to each other. By two weeks of age, the piglets begin accompanying their mother on her journeys. Should danger be detected, the piglets take cover or stand immobile, relying on their camouflage to keep them hidden. The neonatal coat fades after three months, with adult colouration being attained at eight months. Although the lactation period lasts 2.5–3.5 months, the piglets begin displaying adult feeding behaviours at the age of 2–3 weeks. The permanent dentition is fully formed by 1–2 years. With the exception of the canines in males, the teeth stop growing during the middle of the fourth year. The canines in old males continue to grow throughout their lives, curving strongly as they age. Sows attain sexual maturity at the age of one year, with males attaining it a year later. However, estrus usually first occurs after two years in sows, while males begin participating in the rut after 4–5 years, as they are not permitted to mate by the older males.[3] The maximum lifespan in the wild is 10–14 years, though few specimens survive past 4–5 years.[43] Boars in captivity have lived for 20 years.[9]
53
+
54
+ The wild boar inhabits a diverse array of habitats from boreal taigas to deserts.[3] In mountainous regions, it can even occupy alpine zones, occurring up to 1,900 m (6,200 ft) in the Carpathians, 2,600 m (8,500 ft) in the Caucasus and up to 3,600–4,000 m (11,800–13,100 ft) in the mountains in Central Asia and Kazakhstan.[3] In order to survive in a given area, wild boars require a habitat fulfilling three conditions: heavily brushed areas providing shelter from predators, water for drinking and bathing purposes and an absence of regular snowfall.[44]
55
+
56
+ The main habitats favored by boars in Europe are deciduous and mixed forests, with the most favorable areas consisting of forest composed of oak and beech enclosing marshes and meadows. In the Białowieża Forest, the animal's primary habitat consists of well-developed broad-leaved and mixed forests, along with marshy mixed forests, with coniferous forests and undergrowths being of secondary importance. Forests made up entirely of oak groves and beeches are used only during the fruit-bearing season. This is in contrast to the Caucasian and Transcaucasian mountain areas, where boars will occupy such fruit-bearing forests year-round. In the mountainous areas of the Russian Far East, the species inhabits nutpine groves, hilly mixed forests where Mongolian oak and Korean pine are present, swampy mixed taiga and coastal oak forests. In Transbaikalia, boars are restricted to river valleys with nut pine and shrubs. Boars are regularly encountered in pistachio groves in winter in some areas of Tajikistan and Turkmenistan, while in spring they migrate to open deserts; boar have also colonized deserts in several areas they have been introduced to.[3][44][45]
57
+
58
+ On the islands of Komodo and Rinca, the boar mostly inhabits savanna or open monsoon forests, avoiding heavily forested areas unless pursued by humans.[10] Wild boar are known to be competent swimmers, capable of covering long distances. In 2013, one boar was reported to have completed the 11-kilometre (7 mi) swim from France to Alderney in the Channel Islands. Due to concerns about disease, it was shot and incinerated.[46]
59
+
60
+ Wild boar rest in shelters, which contain insulating material like spruce branches and dry hay. These resting places are occupied by whole families (though males lie separately) and are often located in the vicinity of streams, in swamp forests and in tall grass or shrub thickets. Boars never defecate in their shelters and will cover themselves with soil and pine needles when irritated by insects.[9]
61
+
62
+ The wild boar is a highly versatile omnivore, whose diversity in choice of food is comparable to that of humans.[33] Their foods can be divided into four categories:
63
+
64
+ A 50 kg (110 lb) boar needs around 4,000–4,500 calories of food per day, though this required amount increases during winter and pregnancy,[33] with the majority of its diet consisting of food items dug from the ground, like underground plant material and burrowing animals.[3] Acorns and beechnuts are invariably its most important food items in temperate zones,[47] as they are rich in the carbohydrates necessary for the buildup of fat reserves needed to survive lean periods.[33] In Western Europe, underground plant material favoured by boars includes bracken, willow herb, bulbs, meadow herb roots and bulbs and the bulbs of cultivated crops. Such food is favoured in early spring and summer, but may also be eaten in autumn and winter during beechnut and acorn crop failures. Should regular wild foods become scarce, boars will eat tree bark and fungi, as well as visit cultivated potato and artichoke fields.[3] Boar soil disturbance and foraging have been shown to facilitate invasive plants.[48][49] Boars of the vittatus subspecies in Ujung Kulon National Park in Java differ from most other populations by their primarily frugivorous diet, which consists of 50 different fruit species, especially figs, thus making them important seed dispersers.[4] The wild boar can consume numerous genera of poisonous plants without ill effect, including Aconitum, Anemone, Calla, Caltha, Ferula and Pteridium.[9]
65
+
66
+ Boars may occasionally prey on small vertebrates like newborn deer fawns, leporids and galliform chicks.[33] Boars inhabiting the Volga Delta and near some lakes and rivers of Kazakhstan have been recorded to feed extensively on fish like carp and Caspian roach. Boars in the former area will also feed on cormorant and heron chicks, bivalved molluscs, trapped muskrats and mice.[3] There is at least one record of a boar killing and eating a bonnet macaque in southern India's Bandipur National Park, though this may have been a case of intraguild predation, brought on by interspecific competition for human handouts.[50]
67
+
68
+ Piglets are vulnerable to attack from medium-sized felids like Eurasian lynx, jungle cats and snow leopards and other carnivorans like brown bears and yellow-throated martens.[3]
69
+
70
+ The grey wolf is the main predator of wild boar throughout most of its range. A single wolf can kill around 50 to 80 boars of differing ages in one year.[3] In Italy[51] and Belarus' Belovezhskaya Pushcha National Park, boars are the wolf's primary prey, despite an abundance of alternative, less powerful ungulates.[51] Wolves are particularly threatening during the winter, when deep snow impedes the boars' movements. In the Baltic regions, heavy snowfall can allow wolves to eliminate boars from an area almost completely. Wolves primarily target piglets and subadults and only rarely attack adult sows. Adult males are usually avoided entirely.[3] Dholes may also prey on boars, to the point of keeping their numbers down in northwestern Bhutan, despite there being many more cattle in the area.[52]
71
+
72
+ Leopards are predators of wild boar in the Caucasus (particularly Transcaucasia), the Russian Far East, India, China[53] and Iran. In most areas, boars constitute only a small part of the leopard's diet. However, in Iran's Sarigol National Park, boars are the second most frequently targeted prey species after mouflon, though adult individuals are generally avoided, as they are above the leopard's preferred weight range of 10–40 kg (22–88 lb).[54] This dependence on wild boar is largely due in part to the local leopard subspecies' large size.[55]
73
+
74
+ Boars of all ages were once the primary prey of tigers in Transcaucasia, Kazakhstan, Middle Asia and the Far East up until the late 19th century. In modern times, tiger numbers are too low to have a limiting effect on boar populations. A single tiger can systematically destroy an entire sounder by preying on its members one by one, before moving on to another sounder. Tigers have been noted to chase boars for longer distances than with other prey. In two rare cases, boars were reported to gore a small tiger and a tigress to death in self-defense.[56] In the Amur region, wild boars are one of the two most important prey species for tigers alongside the Manchurian wapiti, with the two species collectively comprising roughly 80% of the felid's prey.[57] In Sikhote Alin, a tiger can kill 30–34 boars a year.[9] Studies of tigers in India indicate that boars are usually secondary in preference to various cervids and bovids,[58] though when boars are targeted, healthy adults are caught more frequently than young and sick specimens.[59]
75
+
76
+ On the islands of Komodo, Rinca and Flores, the boar's main predator is the Komodo dragon.[10]
77
+
78
+ The species originally occurred in North Africa and much of Eurasia; from the British Isles to Korea and the Sunda Islands. The northern limit of its range extended from southern Scandinavia to southern Siberia and Japan. Within this range, it was only absent in extremely dry deserts and alpine zones. It was once found in North Africa along the Nile valley up to Khartum and north of the Sahara. The species occurs on a few Ionian and Aegean Islands, sometimes swimming between islands.[60] The reconstructed northern boundary of the animal's Asian range ran from Lake Ladoga (at 60°N) through the area of Novgorod and Moscow into the southern Urals, where it reached 52°N. From there, the boundary passed Ishim and farther east the Irtysh at 56°N. In the eastern Baraba steppe (near Novosibirsk) the boundary turned steep south, encircled the Altai Mountains and went again eastward including the Tannu-Ola Mountains and Lake Baikal. From here, the boundary went slightly north of the Amur River eastward to its lower reaches at the Sea of Okhotsk. On Sakhalin, there are only fossil reports of wild boar. The southern boundaries in Europe and Asia were almost invariably identical to the seashores of these continents. It is absent in the dry regions of Mongolia from 44–46°N southward, in China westward of Sichuan and in India north of the Himalayas. It is absent in the higher elevations of the Pamir and the Tien Shan, though they do occur in the Tarim basin and on the lower slopes of the Tien Shan.[3]
79
+
80
+ In recent centuries, the range of wild boar has changed dramatically, largely due to hunting by humans and more recently because of captive wild boar escaping into the wild. Prior to the 20th century, boar populations had declined in numerous areas, with British populations probably becoming extinct during the 13th century.[61] In the warm period after the ice age, wild boar lived in the southern parts of Sweden and Norway and north of Lake Ladoga in Karelia.[62] It was previously thought that the species did not live in Finland during prehistory because no prehistoric wild boar bones had been found within the borders of the country.[63][64] It was not until 2013, when a wild boar bone was found in Askola, that the species was found to have lived in Finland more than 8,000 years ago. It is believed, however, that man prevented its establishment by hunting.[65][66] In Denmark, the last boar was shot at the beginning of the 19th century, and by 1900 they were absent in Tunisia and Sudan and large areas of Germany, Austria and Italy. In Russia, they were extirpated in wide areas by the 1930s.[3] The last boar in Egypt reportedly died on 20 December 1912 in the Giza Zoo, with wild populations having disappeared by 1894–1902. Prince Kamal el Dine Hussein attempted to repopulate Wadi El Natrun with boars of Hungarian stock, but they were quickly exterminated by poachers.[67]
81
+
82
+ A revival of boar populations began in the middle of the 20th century. By 1950, wild boar had once again reached their original northern boundary in many parts of their Asiatic range. By 1960, they reached Leningrad and Moscow and by 1975, they were to be found in Archangelsk and Astrakhan. In the 1970s they again occurred in Denmark and Sweden, where captive animals escaped and now survive in the wild. In England, wild boar populations re-established themselves in the 1990s, after escaping from specialist farms that had imported European stock.[61]
83
+
84
+ Wild boars were apparently already becoming rare by the 11th century, since a 1087 forestry law enacted by William the Conqueror punished the unlawful killing of a boar with blinding. Charles I attempted to reintroduce the species into the New Forest, though this population was exterminated during the Civil War.
85
+
86
+ Between their medieval extinction and the 1980s, when wild boar farming began, only a handful of captive wild boar, imported from the continent, were present in Britain. Occasional escapes of wild boar from wildlife parks have occurred as early as the 1970s, but since the early 1990s significant populations have re-established themselves after escapes from farms, the number of which has increased as the demand for meat from the species has grown. A 1998 MAFF (now DEFRA) study confirmed the presence of two populations of wild boar living in Britain; one in Kent/East Sussex and another in Dorset.[61] Another DEFRA report, in February 2008,[68] confirmed the existence of these two sites as 'established breeding areas' and identified a third in Gloucestershire/Herefordshire, in the Forest of Dean/Ross on Wye area. A 'new breeding population' was also identified in Devon. There is another significant population in Dumfries and Galloway. Population estimates were as follows:
87
+
88
+ Population estimates for the Forest of Dean are disputed: at the time the DEFRA population estimate was 100, a photo of a boar sounder in the forest near Staunton with over 33 animals visible was published, and at about the same time over 30 boar were seen in a field near the original escape location of Weston under Penyard, many kilometres away. In early 2010 the Forestry Commission embarked on a cull,[70] with the aim of reducing the boar population from an estimated 150 animals to 100. By August it was stated that efforts were being made to reduce the population from 200 to 90, but that only 25 had been killed.[71] The failure to meet cull targets was confirmed in February 2011.[72]
89
+
90
+ Wild boars have crossed the River Wye into Monmouthshire, Wales. Iolo Williams, the BBC Wales wildlife expert, attempted to film Welsh boar in late 2012.[73] Many other sightings, across the UK, have also been reported.[74] The effects of wild boar on the U.K.'s woodlands were discussed with Ralph Harmer of the Forestry Commission on BBC Radio's Farming Today programme in 2011. The programme prompted activist writer George Monbiot to propose a thorough population study, followed by the introduction of permit-controlled culling.[75]
91
+
92
+ Wild boars are an invasive species in the Americas and cause problems including out-competing native species for food, destroying the nests of ground-nesting species, killing fawns and young domestic livestock, destroying agricultural crops, eating tree seeds and seedlings, destroying native vegetation and wetlands through wallowing, damaging water quality, coming into violent conflict with humans and pets and carrying pig and human diseases including brucellosis, trichinosis and pseudorabies. In some jurisdictions, it is illegal to import, breed, release, possess, sell, distribute, trade, transport, hunt, or trap Eurasian boars. Hunting and trapping is done systematically, to increase the chance of eradication and to remove the incentive to illegally release boars, which have mostly been spread deliberately by sport hunters.[76]
93
+
94
+ While domestic pigs, both captive and feral (popularly termed "razorbacks"), have been in North America since the earliest days of European colonization, pure wild boars were not introduced into the New World until the 19th century. The suids were released into the wild by wealthy landowners as big game animals. The initial introductions took place in fenced enclosures, though several escapes occurred, with the escapees sometimes intermixing with already established feral pig populations.
95
+
96
+ The first of these introductions occurred in New Hampshire in 1890. Thirteen wild boars from Germany were purchased by Austin Corbin from Carl Hagenbeck and released into a 9,500-hectare (23,000-acre) game preserve in Sullivan County. Several of these boars escaped, though they were quickly hunted down by locals. Two further introductions were made from the original stocking, with several escapes taking place due to breaches in the game preserve's fencing. These escapees have ranged widely, with some specimens having been observed crossing into Vermont.[77]
97
+
98
+ In 1902, 15–20 wild boar from Germany were released into a 3,200-hectare (7,900-acre) estate in Hamilton County, New York. Several specimens escaped six years later, dispersing into the William C. Whitney Wilderness Area, with their descendants surviving for at least 20 years.[77]
99
+
100
+ The most extensive boar introduction in the US took place in western North Carolina in 1912, when 13 boars of undetermined European origin were released into two fenced enclosures in a game preserve in Hooper Bald, Graham County. Most of the specimens remained in the preserve for the next decade, until a large-scale hunt caused the remaining animals to break through their confines and escape. Some of the boars migrated to Tennessee, where they intermixed with both free-ranging and feral pigs in the area. In 1924, a dozen Hooper Bald wild pigs were shipped to California and released in a property between Carmel Valley and the Los Padres National Forest. These hybrid boar were later used as breeding stock on various private and public lands throughout the state, as well as in other states like Florida, Georgia, South Carolina, West Virginia and Mississippi.[77]
101
+
102
+ Several wild boars from Leon Springs and the San Antonio, Saint Louis and San Diego Zoos were released in the Powder Horn Ranch in Calhoun County, Texas, in 1939. These specimens escaped and established themselves in surrounding ranchlands and coastal areas, with some crossing the Espiritu Santo Bay and colonizing Matagorda Island. Descendants of the Powder Horn Ranch boars were later released onto San José Island and the coast of Chalmette, Louisiana.[77]
103
+
104
+ Wild boar of unknown origin were stocked in a ranch in the Edwards Plateau in the 1940s, only to escape during a storm and hybridize with local feral pig populations, later spreading into neighboring counties.[77]
105
+
106
+ Starting in the mid-1980s, several boars purchased from the San Diego Zoo and Tierpark Berlin were released into the United States. A decade later, more specimens from farms in Canada and Białowieża Forest were let loose. In recent years, wild pig populations have been reported in 44 states within the US, most of which are likely wild boar–feral hog hybrids. Pure wild boar populations may still be present, but are extremely localized.[77]
107
+
108
+ Wild boars are known to host at least 20 different parasitic worm species, with maximum infections occurring in summer. Young animals are vulnerable to helminths like Metastrongylus, which are consumed by boars through earthworms and cause death by parasitising the lungs. Wild boar also carry parasites known to infect humans, including Gastrodiscoides, Trichinella spiralis, Taenia solium, Balantidium coli and Toxoplasma gondii.[78] Wild boar in southern regions are frequently infested with ticks (Dermacentor, Rhipicephalus, and Hyalomma) and hog lice. The species also suffers from blood-sucking flies, which it escapes by bathing frequently or hiding in dense shrubs.[3]
109
+
110
+ Swine plague spreads very quickly in wild boar, with epizootics being recorded in Germany, Poland, Hungary, Belarus, the Caucasus, the Far East, Kazakhstan and other regions. Foot-and-mouth disease can also take on epidemic proportions in boar populations. The species occasionally, though rarely, contracts pasteurellosis, hemorrhagic sepsis, tularemia, and anthrax. Wild boar may on occasion contract swine erysipelas through rodents or hog lice and ticks.[3]
111
+
112
+ The wild boar features prominently in the cultures of Indo-European people, many of which saw the animal as embodying warrior virtues. Cultures throughout Europe and Asia Minor saw the killing of a boar as proof of one's valor and strength. Neolithic hunter-gatherers depicted reliefs of ferocious wild boars on their temple pillars at Göbekli Tepe some 11,600 years ago.[80][81] Virtually all heroes in Greek mythology fight or kill a boar at one point. The demigod Herakles' third labour involves the capture of the Erymanthian Boar, Theseus slays the wild sow Phaea, and a disguised Odysseus is recognised by his handmaiden Eurycleia by the scars inflicted on him by a boar during a hunt in his youth.[82] To the mythical Hyperboreans, the boar represented spiritual authority.[79] Several Greek myths use the boar as a symbol of darkness, death and winter. One example is the story of the youthful Adonis, who is killed by a boar and is permitted by Zeus to depart from Hades only during the spring and summer period. This theme also occurs in Irish and Egyptian mythology, where the animal is explicitly linked to the month of October, and therefore to autumn. This association likely arose from aspects of the boar's actual nature. Its dark colour was linked to the night, while its solitary habits, proclivity to consume crops and nocturnal nature were associated with evil.[83] The foundation myth of Ephesus has the city being built over the site where Prince Androklos of Athens killed a boar.[84] Boars were frequently depicted on Greek funerary monuments alongside lions, representing gallant losers who have finally met their match, as opposed to lions, which represent victorious hunters. The theme of the doomed, yet valorous boar warrior also occurred in Hittite culture, where it was traditional to sacrifice a boar alongside a dog and a prisoner of war after a military defeat.[82]
113
+
114
+ The boar as a warrior also appears in Scandinavian, Germanic and Anglo-Saxon culture, with its image having been frequently engraved on helmets, shields and swords. According to Tacitus, the Baltic Aesti featured boars on their helmets and may have also worn boar masks (see for example the Guilden Morden boar). The boar and pig were held in particularly high esteem by the Celts, who considered them to be their most important sacred animal. Some Celtic deities linked to boars include Moccus and Veteris. It has been suggested that some early myths surrounding the Welsh hero Culhwch involved the character being the son of a boar god.[82] Nevertheless, the importance of the boar as a culinary item among Celtic tribes may have been exaggerated in popular culture by the Asterix series, as wild boar bones are rare among Celtic archaeological sites and the few that do occur show no signs of butchery, having probably been used in sacrificial rituals.[85]
115
+
116
+ The boar also appears in Vedic and Hindu mythology. A story present in the Brahmanas has the god Indra slaying an avaricious boar, who has stolen the treasure of the asuras, then giving its carcass to the god Vishnu, who offers it as a sacrifice to the gods. In the story's retelling in the Charaka Samhita, the boar is described as a form of Prajapati and is credited with having raised the Earth from the primeval waters. In the Ramayana and the Puranas, the same boar is portrayed as Varaha, an avatar of Vishnu.[86]
117
+
118
+ In Japanese culture, the boar is widely seen as a fearsome and reckless animal, to the point that several words and expressions in Japanese referring to recklessness include references to boars. The boar is the last animal of the Oriental zodiac, with people born during the year of the Pig being said to embody the boar-like traits of determination and impetuosity. Among Japanese hunters, the boar's courage and defiance is a source of admiration, and it is not uncommon for hunters and mountain people to name their sons after the animal, inoshishi (猪). Boars are also seen as symbols of fertility and prosperity; in some regions, it is thought that boars are drawn to fields owned by families including pregnant women, and hunters with pregnant wives are thought to have greater chances of success when boar hunting. The animal's link to prosperity was illustrated by its inclusion on the ¥10 note during the Meiji period, and it was once believed that a man could become wealthy by keeping a clump of boar hair in his wallet.[87]
119
+
120
+ In the folklore of the Mongol Altai Uriankhai tribe, the wild boar was associated with the watery underworld, as it was thought that the spirits of the dead entered the animal's head, to be ultimately transported to the water.[88] Prior to the conversion to Islam, the Kyrgyz people believed that they were descended from boars and thus did not eat pork. In Buryat mythology, the forefathers of the Buryats descended from heaven and were nourished by a boar.[89] In China, the boar is the emblem of the Miao people.[79]
121
+
122
+ The boar (sanglier) is frequently displayed in English, Scottish and Welsh heraldry. As with the lion, the boar is often shown as armed and langued. As with the bear, Scottish and Welsh heraldry displays the boar's head with the neck cropped, unlike the English version, which retains the neck.[90] The white boar served as the badge of King Richard III of England, who distributed it among his northern retainers during his tenure as Duke of Gloucester.[91]
123
+
124
+ Humans have been hunting boar for millennia, the earliest artistic depictions of such activities dating back to the Upper Paleolithic.[82] The animal was seen as a source of food among the Ancient Greeks, as well as a sporting challenge and source of epic narratives. The Romans inherited this tradition, with one of its first practitioners being Scipio Aemilianus. Boar hunting became particularly popular among the young nobility during the 3rd century BC as preparation for manhood and battle. A typical Roman boar hunting tactic involved surrounding a given area with large nets, then flushing the boar with dogs and immobilizing it with smaller nets. The animal would then be dispatched with a venabulum, a short spear with a crossguard at the base of the blade. More than their Greek predecessors, the Romans extensively took inspiration from boar hunting in their art and sculpture. With the ascension of Constantine the Great, boar hunting took on Christian allegorical themes, with the animal being portrayed as a "black beast" analogous to the dragon of Saint George. Boar hunting continued after the fall of the Western Roman Empire, though the Germanic tribes considered the red deer to be a more noble and worthy quarry. The post-Roman nobility hunted boar as their predecessors did, but primarily as training for battle rather than sport. It was not uncommon for medieval hunters to deliberately hunt boars during the breeding season when the animals were more aggressive. During the Renaissance, when deforestation and the introduction of firearms reduced boar numbers, boar hunting became the sole prerogative of the nobility, one of many charges brought up against the rich during the German Peasants' War and the French Revolution.[92] During the mid-20th century, 7,000–8,000 boars were caught in the Caucasus, 6,000–7,000 in Kazakhstan and about 5,000 in Central Asia during the Soviet period, primarily through the use of dogs and beats.[3] In Nepal, farmers and poachers eliminate boars by baiting balls of wheat flour containing explosives with kerosene oil, with the animals' chewing motions triggering the devices.[93]
125
+
126
+ Wild boar can thrive in captivity, though piglets grow slowly and poorly without their mothers. Products derived from wild boar include meat, hide and bristles.[3] Apicius devotes a whole chapter to the cooking of boar meat, providing 10 recipes involving roasting and boiling, and advising on which sauces to use. The Romans usually served boar meat with garum.[94] Boar's head was the centrepiece of most medieval Christmas celebrations among the nobility.[95] Although growing in popularity as a captive-bred source of food, the wild boar takes longer to mature than most domestic pigs, and it is usually smaller and produces less meat. Nevertheless, wild boar meat is leaner and healthier than pork,[96] being of higher nutritional value and having a much higher concentration of essential amino acids.[97] Most meat-dressing organizations agree that a boar carcass should yield 50 kg (110 lb) of meat on average. Large specimens can yield 15–20 kg (33–44 lb) of fat, with some giants yielding 30 kg (66 lb) or more. A boar hide can measure 300 dm2 (4,700 sq in) and can yield 350–1,000 grams (12–35 oz) of bristle and 400 grams (14 oz) of underwool.[3]
127
+
128
+ Roman relief of a dog confronting a boar, Cologne
129
+
130
+ Southern Indian depiction of boar hunt, c. 1540
131
+
132
+ Pig-sticking in British India
133
+
134
+ Boar shot in Volgograd Oblast, Russia
135
+
136
+ The Boar Hunt – Hans Wertinger, c. 1530, the Danube Valley
137
+
138
+ Boars can be damaging to agriculture in situations where their natural habitat is sparse. Populations living on the outskirts of towns or farms can dig up potatoes and damage melons, watermelons and maize. However, they generally only encroach upon farms when natural food is scarce. In the Belovezh Forest, for example, 34–47% of the local boar population will enter fields in years of moderate availability of natural foods. While the role of boars in damaging crops is often exaggerated,[3] boar depredations have been known to cause famines, as in Hachinohe, Japan in 1749, where 3,000 people died of what became known as the "wild boar famine". Still, within Japanese culture, the boar's status as vermin is expressed through its title as "king of pests" and the popular saying (addressed to young men in rural areas) "When you get married, choose a place with no wild boar."[87][98]
139
+
140
+ In Central Europe, farmers typically repel boars through distraction or fright, while in Kazakhstan it is usual to employ guard dogs in plantations. Although large boar populations can play an important role in limiting forest growth, they are also useful in keeping pest populations such as June bugs under control.[3] The growth of urban areas and the corresponding decline in natural boar habitats has led to some sounders entering human habitations in search of food. As in natural conditions, sounders in peri-urban areas are matriarchal, though males tend to be much less represented and adults of both sexes can be up to 35% heavier than their forest-dwelling counterparts. As of 2010, at least 44 cities in 15 countries have experienced problems of some kind relating to the presence of habituated wild boar.[99]
141
+
142
+ Actual attacks on humans are rare, but can be serious, resulting in penetrating injuries to the lower part of the body. They generally occur during the boars' rutting season from November to January, in agricultural areas bordering forests or on paths leading through forests. The animal typically attacks by charging and pointing its tusks towards the intended victim, with most injuries occurring on the thigh region. Once the initial attack is over, the boar steps back, takes position and attacks again if the victim is still moving, only ending once the victim is completely incapacitated.[100][101]
143
+
144
+ Boar attacks on humans have been documented since the Stone Age, with one of the oldest depictions being a cave painting in Bhimbetaka, India. The Romans and Ancient Greeks wrote of these attacks (Odysseus was wounded by a boar and Adonis was killed by one). A 2012 study compiling recorded attacks from 1825 to 2012 found accounts of 665 human victims of both wild boars and feral pigs, with the largest share (19%) of attacks in the animal's native range occurring in India. Most of the attacks occurred in rural areas during the winter months in non-hunting contexts and were committed by solitary males.[102]
145
+
en/5557.html.txt ADDED
@@ -0,0 +1,128 @@
1
+
2
+
3
+ – in Europe (green & dark grey) – in Norway (green)
4
+
5
+ Svalbard (/ˈsvɑːlbɑːr/ SVAHL-bar,[3] Urban East Norwegian: [ˈsvɑ̂ːlbɑr]; prior to 1925 known as Spitsbergen, or Spitzbergen, Russian: Шпицберген) is a Norwegian archipelago in the Arctic Ocean. Situated north of mainland Europe, it is about midway between continental Norway and the North Pole. The islands of the group range from 74° to 81° north latitude, and from 10° to 35° east longitude. The largest island is Spitsbergen, followed by Nordaustlandet and Edgeøya. While part of the Kingdom of Norway since 1925, Svalbard is not part of geographical Norway proper; administratively, the archipelago is not part of any Norwegian county, but forms an unincorporated area administered by a governor appointed by the Norwegian government, and a special jurisdiction subject to the Svalbard Treaty that is, unlike Norway proper, outside of the Schengen Area, the Nordic Passport Union and the European Economic Area.
6
+
7
+ Since 2002, Svalbard's main settlement, Longyearbyen, has had an elected local government, somewhat similar to mainland municipalities. Other settlements include the Russian mining community of Barentsburg, the research station of Ny-Ålesund, and the mining outpost of Sveagruva. Other settlements are farther north, but are populated only by rotating groups of researchers.
8
+
9
+ The islands were first used as a whaling base by whalers who sailed far north in pursuit of whales for blubber in the 17th and 18th centuries, after which they were abandoned. Coal mining started at the beginning of the 20th century, and several permanent communities were established. The Svalbard Treaty of 1920 recognizes Norwegian sovereignty, and the 1925 Svalbard Act made Svalbard a full part of the Kingdom of Norway. They also established Svalbard as a free economic zone and a demilitarized zone. The Norwegian Store Norske and the Russian Arktikugol remain the only mining companies in place. Research and tourism have become important supplementary industries, with the University Centre in Svalbard (UNIS) and the Svalbard Global Seed Vault playing critical roles. No roads connect the settlements; instead snowmobiles, aircraft and boats serve inter-community transport. Svalbard Airport, Longyear serves as the main gateway.
10
+
11
+ The archipelago features an Arctic climate, although with significantly higher temperatures than other areas at the same latitude. The flora take advantage of the long period of midnight sun to compensate for the polar night. Svalbard is a breeding ground for many seabirds, and also features polar bears, reindeer, the Arctic fox, and certain marine mammals. Seven national parks and twenty-three nature reserves cover two-thirds of the archipelago, protecting the largely untouched, yet fragile, natural environment. Approximately 60% of the archipelago is covered with glaciers, and the islands feature many mountains and fjords.
12
+
13
+ Svalbard and Jan Mayen are collectively assigned the ISO 3166-1 alpha-2 country code "SJ". Both areas are administered by Norway, though they are separated by a distance of over 950 kilometres (590 miles; 510 nautical miles) and have very different administrative structures.
14
+
15
+ The name Svalbard comes from an older native name for the archipelago, Svalbarð, composed of the well-attested Old Norse words svalr ("cold") and barð ("edge; ridge"). The name Spitsbergen originated with Dutch navigator and explorer Willem Barentsz, who described the "pointed mountains" or, in Dutch, spitse bergen that he saw on the west coast of the main island, Spitsbergen. Barentsz did not recognize that he had discovered an archipelago, and consequently the name Spitsbergen long remained in use both for the main island and for the archipelago as a whole.[4]
16
+
17
+ The Svalbard Treaty of 1920[5] defines Svalbard as all islands, islets and skerries from 74° to 81° north latitude, and from 10° to 35° east longitude.[6][7] The land area is 61,022 km2 (23,561 sq mi), and dominated by the island of Spitsbergen, which constitutes more than half the archipelago, followed by Nordaustlandet and Edgeøya.[8] All settlements are located on Spitsbergen, except the meteorological outposts on Bjørnøya and Hopen.[5] The Norwegian state took possession of all unclaimed land, or 95.2% of the archipelago, at the time the Svalbard Treaty entered into force; Store Norske, a Norwegian coal mining company, owns 4%, Arktikugol, a Russian coal mining company, owns 0.4%, while other private owners hold 0.4%.[9]
18
+
19
+ Since Svalbard is located north of the Arctic Circle, it experiences midnight sun in summer and polar night in winter. At 74° north, the midnight sun lasts 99 days and polar night 84 days, while the respective figures at 81° are 141 and 128 days.[10] In Longyearbyen, midnight sun lasts from 20 April until 23 August, and polar night lasts from 26 October to 15 February.[6] In winter, the combination of full moon and reflective snow can give additional light.[10] Due to the Earth's tilt and the high latitude, Svalbard has extensive twilights: on the first and last days of the polar night, Longyearbyen has seven and a half hours of twilight, and the period of perpetual light lasts two weeks longer than the midnight sun.[11][12] At the summer solstice, the sun's elevation bottoms out at about 12° in the middle of the night, far higher than in the midnight-sun areas of mainland Norway.[13] Even so, its maximum daytime elevation is only about 35°.
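+ As a rough check (assuming a solstice solar declination of about δ ≈ 23.4°, a latitude for Longyearbyen of about φ ≈ 78° N, and ignoring atmospheric refraction), the solar altitude at solar midnight and at solar noon on the solstice is approximately
+
+ \[ h_{\text{midnight}} \approx \varphi + \delta - 90^\circ \approx 78^\circ + 23.4^\circ - 90^\circ \approx 11^\circ\text{–}12^\circ, \qquad h_{\text{noon}} \approx 90^\circ - \varphi + \delta \approx 90^\circ - 78^\circ + 23.4^\circ \approx 35^\circ, \]
+
+ which is consistent with the 12° and 35° figures quoted above.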
20
+
21
+ Glacial ice covers 36,502 km2 (14,094 sq mi) or 60% of Svalbard; 30% is barren rock while 10% is vegetated.[14] The largest glacier is Austfonna (8,412 km2 or 3,248 sq mi) on Nordaustlandet, followed by Olav V Land and Vestfonna. During summer, it is possible to ski from Sørkapp in the south to the north of Spitsbergen, with only a short distance not being covered by snow or glacier. Kvitøya is 99.3% covered by glacier.[15]
22
+
23
+ The landforms of Svalbard were created through repeated ice ages, when glaciers cut the former plateau into fjords, valleys, and mountains.[16] The tallest peak is Newtontoppen (1,717 m or 5,633 ft), followed by Perriertoppen (1,712 m or 5,617 ft), Ceresfjellet (1,675 m or 5,495 ft), Chadwickryggen (1,640 m or 5,380 ft), and Galileotoppen (1,637 m or 5,371 ft). The longest fjord is Wijdefjorden (108 km or 67 mi), followed by Isfjorden (107 km or 66 mi), Van Mijenfjorden (83 km or 52 mi), Woodfjorden (64 km or 40 mi), and Wahlenbergfjorden (46 km or 29 mi).[17] Svalbard is part of the High Arctic Large Igneous Province,[18] and experienced Norway's strongest earthquake on 6 March 2009, which hit a magnitude of 6.5.[19]
24
+
25
+ The Dutchman Willem Barentsz made the first discovery of the archipelago in 1596, when he sighted the coast of the island of Spitsbergen while searching for the Northern Sea Route.[20]
26
+
27
+ The first recorded landing on the islands of Svalbard dates to 1604, when an English ship landed at Bjørnøya, or Bear Island, and started hunting walrus. Annual expeditions soon followed, and Spitsbergen became a base for hunting the bowhead whale from 1611.[21][22] Because of the lawless nature of the area, English, Danish, Dutch, and French companies and authorities tried to use force to keep out other countries' fleets.[23][24]
28
+
29
+ Smeerenburg was one of the first settlements, established by the Dutch in 1619.[25] Smaller bases were also built by the English, Danish, and French. At first the outposts were merely summer camps, but from the early 1630s, a few individuals started to overwinter. Whaling at Spitsbergen lasted until the 1820s, when the Dutch, British, and Danish whalers moved elsewhere in the Arctic.[26] By the late 17th century, Russian hunters arrived; they overwintered to a greater extent and hunted land mammals such as the polar bear and fox.[27]
30
+
31
+ After the Anglo-Russian War in 1809, Russian activity on Svalbard diminished, and ceased by the 1820s.[28] Norwegian hunting—mostly for walrus—started in the 1790s. The first Norwegian citizens to reach Spitsbergen proper were a number of Coast Sámi people from the Hammerfest region, who were hired as part of a Russian crew for an expedition in 1795.[29] Norwegian whaling was abandoned about the same time as the Russians left,[30] but whaling continued around Spitsbergen until the 1830s, and around Bjørnøya until the 1860s.[31]
32
+
33
+ By the 1890s, Svalbard had become a destination for Arctic tourism, coal deposits had been found and the islands were being used as a base for Arctic exploration.[32] The first mining was along Isfjorden by Norwegians in 1899; by 1904, British interests had established themselves in Adventfjorden and started the first all-year operations.[33] Production in Longyearbyen, by American interests, started in 1908;[34] and Store Norske established itself in 1916, as did other Norwegian interests during the war, in part by buying American interests.[35]
34
+
35
+ Discussions to establish the sovereignty of the archipelago commenced in the 1910s,[36] but were interrupted by World War I.[37] On 9 February 1920, following the Paris Peace Conference, the Svalbard Treaty was signed, granting full sovereignty to Norway. However, all signatory countries were granted non-discriminatory rights to fishing, hunting, and mineral resources.[38] The treaty took effect on 14 August 1925, at the same time as the Svalbard Act regulated the archipelago and the first governor, Johannes Gerckens Bassøe, took office.[39] The archipelago has traditionally been known as Spitsbergen, and the main island as West Spitsbergen. From the 1920s, Norway renamed the archipelago Svalbard, and the main island became Spitsbergen.[40] Kvitøya, Kong Karls Land, Hopen, and Bjørnøya were not regarded as part of the Spitsbergen archipelago.[41] Russians have traditionally called the archipelago Grumant (Грумант).[42] The Soviet Union retained the name Spitsbergen (Шпицберген) to support undocumented claims that Russians were the first to discover the island.[43][44] In 1928, Italian explorer Umberto Nobile and the crew of the airship Italia crashed on the icepack off the coast of Foyn Island. The subsequent rescue attempts were covered extensively in the press and Svalbard received short-lived fame as a result.
36
+
37
+ Svalbard, known to both British and Germans as Spitsbergen, was little affected by the German invasion of Norway in April 1940. The settlements continued to operate as before, mining coal and monitoring the weather.
38
+ In July 1941, following the German invasion of the Soviet Union, the Royal Navy reconnoitred the islands with a view to using them as a base of operations to facilitate sending supplies to north Russia, but the idea was rejected as too impractical.[45] Instead, with the agreement of the Soviets and the Norwegian government in exile, in August 1941 the Norwegian and Soviet settlements on Svalbard were evacuated, and facilities there destroyed, in Operation Gauntlet.[46][47]
39
+ However, the Norwegian government in exile decided it would be politically important to establish a garrison in the islands, which was done in May 1942 during Operation Fritham.[48]
40
+
41
+ Meanwhile, the Germans had responded to the destruction of the weather stations by establishing a reporting station of their own, codenamed "Banso", in October 1941.[49] Its crew was chased away in November by a visit from four British warships, but later returned. A second station, "Knospel", was established at Ny-Ålesund in 1941, remaining until 1942. In May 1942, after the arrival of the Fritham force, the German unit at Banso was evacuated.
42
+
43
+ In September 1943, in Operation Zitronella, a German task force, which included the battleship Tirpitz, was sent to attack the garrison and destroy the settlements at Longyearbyen and Barentsburg.[50] This was achieved, but had little long-term effect: after the Germans' departure the Norwegians returned and re-established their presence.[51]
44
+
45
+ In September 1944, the Germans set up their last weather station, Operation Haudegen, on Nordaustlandet; it remained in operation until after the German surrender. On 4 September 1945, the soldiers were picked up by a Norwegian seal hunting vessel and surrendered to its captain. This group of men were the last German troops to surrender at the end of the Second World War.
46
+
47
+ After the war, the Soviet Union proposed common Norwegian and Soviet administration and military defence of Svalbard. This was rejected in 1947 by Norway, which two years later joined NATO. The Soviet Union retained high civilian activity on Svalbard, in part to ensure that the archipelago was not used by NATO.[52]
48
+
49
+ After the war, Norway re-established operations at Longyearbyen and Ny-Ålesund,[53] while the Soviet Union established mining in Barentsburg, Pyramiden and Grumant.[54] The mine at Ny-Ålesund had several fatal accidents, killing 71 people while it was in operation from 1945 to 1954 and from 1960 to 1963. The Kings Bay Affair, caused by the 1962 accident killing 21 workers, forced Gerhardsen's Third Cabinet to resign.[55][56] From 1964, Ny-Ålesund became a research outpost, and a facility for the European Space Research Organisation.[57] Petroleum test drilling was started in 1963 and continued until 1984, but no commercially viable fields were found.[58] From 1960, regular charter flights were made from the mainland to a field at Hotellneset;[59] in 1975, Svalbard Airport, Longyear opened, allowing year-round services.[60]
50
+
51
+ During the Cold War, the Soviet Union retained about two-thirds of the population on the islands (with a third being Norwegians) with the archipelago's population slightly under 4,000.[54] Russian activity has diminished considerably since then, falling from 2,500 to 450 people from 1990 to 2010.[61][62] Grumant was closed after it was depleted in 1962.[54] Pyramiden was closed in 1998.[63] Coal exports from Barentsburg ceased in 2006 because of a fire,[64] but resumed in 2010.[65] The Russian community has also experienced two air accidents, Vnukovo Airlines Flight 2801, which killed 141 people,[66] and the Heerodden helicopter accident, which killed three people.[67]
52
+
53
+ Longyearbyen remained purely a company town until 1989, when utilities, culture, and education were separated into Svalbard Samfunnsdrift.[68] In 1993, it was sold to the national government and the University Centre was established.[69] Through the 1990s, tourism increased and the town developed an economy independent of Store Norske and the mining.[70] Longyearbyen was incorporated on 1 January 2002, receiving a community council.[68]
54
+
55
+ In 2016, Svalbard had a population of 2,667, of which 423 were Russian and Ukrainian, 10 Polish, and 322 non-Norwegians living in Norwegian settlements.[8] The largest non-Norwegian groups in Longyearbyen in 2005 were from Russia, Ukraine, Poland, Germany, Sweden, Denmark, and Thailand.[62]
56
+
57
+ Longyearbyen is the largest settlement on the archipelago, the seat of the governor and the only town to be incorporated. The town features a hospital, primary and secondary school, university, sports center with a swimming pool, library, culture center, cinema,[64] bus transport, hotels, a bank,[71] and several museums.[72] The newspaper Svalbardposten is published weekly.[73] Only a small fraction of the mining activity remains at Longyearbyen; instead, workers commute to Sveagruva (or Svea) where Store Norske operates a mine. Sveagruva is a dormitory town, with workers commuting from Longyearbyen weekly.[64]
58
+
59
+ Ny-Ålesund is a permanent settlement based entirely around research. Formerly a mining town, it is still a company town operated by the Norwegian state-owned Kings Bay. While there is some tourism there, Norwegian authorities limit access to the outpost to minimize impact on the scientific work.[64] Ny-Ålesund has a winter population of 35 and a summer population of 180.[74] The Norwegian Meteorological Institute has outposts at Bjørnøya and Hopen, with respectively ten and four people stationed. Both outposts can also house temporary research staff.[64] Poland operates the Polish Polar Station at Hornsund, with ten permanent residents.[64]
60
+
61
+ Barentsburg is the only permanently inhabited Russian settlement after Pyramiden was abandoned in 1998. It is a company town: all facilities are owned by Arktikugol, which operates a coal mine. In addition to the mining facilities, Arktikugol has opened a hotel and souvenir shop, catering for tourists taking day trips or hikes from Longyearbyen.[64] The village features facilities such as a school, library, sports center, community center, swimming pool, farm, and greenhouse. Pyramiden features similar facilities; both are built in typical post-World War II Soviet architectural and planning style and contain the world's two most northerly Lenin statues and other socialist realism artwork.[75] As of 2013, a handful of workers are stationed in the largely abandoned Pyramiden to maintain the infrastructure and run the hotel, which has been re-opened for tourists.
62
+
63
+ Most of the population is Christian and affiliated with the Church of Norway. Catholics on the archipelago are pastorally served by the Territorial Prelature of Tromsø.[76]
64
+
65
+ The Svalbard Treaty of 1920 established full Norwegian sovereignty over the archipelago. The islands are, unlike the Norwegian Antarctic Territory, a part of the Kingdom of Norway and not a dependency. The treaty came into effect in 1925, following the Svalbard Act. All forty signatory countries of the treaty have the right to conduct commercial activities on the archipelago without discrimination, although all activity is subject to Norwegian legislation. The treaty limits Norway's right to collect taxes to that of financing services on Svalbard. Therefore, Svalbard has a lower income tax than mainland Norway, and there is no value added tax. There is a separate budget for Svalbard to ensure compliance. Svalbard is a demilitarized zone, as the treaty prohibits the establishment of military installations. Norwegian military activity is limited to fishery surveillance by the Norwegian Coast Guard as the treaty requires Norway to protect the natural environment.[7][77]
66
+
67
+ There are no restrictions on foreigners migrating in, and hence no visa requirement.[78][79]
68
+
69
+ The Svalbard Act established the institution of the Governor of Svalbard (Norwegian: Sysselmannen), who holds the responsibility as both county governor and chief of police, as well as holding other authority granted from the executive branch. Duties include environmental policy, family law, law enforcement, search and rescue, tourism management, information services, contact with foreign settlements, and judge in some areas of maritime inquiries and judicial examinations—albeit never in the same cases as acting as police.[80][81] Since 2015, Kjerstin Askholt has been governor; she is assisted by a staff of 26 professionals. The institution is subordinate to the Ministry of Justice and the Police, but reports to other ministries in matters within their portfolio.[82]
70
+
71
+ Since 2002, Longyearbyen Community Council has had many of the same responsibilities of a municipality, including utilities, education, cultural facilities, fire department, roads, and ports.[70] No care or nursing services are available, nor is welfare payment available. Norwegian residents retain pension and medical rights through their mainland municipalities.[83] The hospital is part of University Hospital of North Norway, while the airport is operated by state-owned Avinor. Ny-Ålesund and Barentsburg remain company towns with all infrastructure owned by Kings Bay and Arktikugol, respectively.[70] Other public offices with presence on Svalbard are the Norwegian Directorate of Mining, the Norwegian Polar Institute, the Norwegian Tax Administration, and the Church of Norway.[84] Svalbard is subordinate to Nord-Troms District Court and Hålogaland Court of Appeal, both located in Tromsø.[85]
72
+
73
+ Although Norway is part of the European Economic Area (EEA) and the Schengen Agreement, Svalbard is not part of the Schengen Area or the EEA.[86] Non-EU and non-Nordic Svalbard residents do not need Schengen visas, but are prohibited from reaching Svalbard from mainland Norway without one. People without a source of income can be rejected by the governor.[87] No visa or residence permit is required for Svalbard itself, and anyone can live and work there indefinitely regardless of citizenship: the Svalbard Treaty grants treaty nationals the same right of abode as Norwegian nationals, and non-treaty nationals have so far been admitted visa-free as well, although the "Regulations concerning rejection and expulsion from Svalbard" remain in force.[88][89] Russia retains a consulate in Barentsburg.[90]
74
+
75
+ In September 2010, a treaty was made between Russia and Norway fixing the boundary between the Svalbard archipelago and the Novaya Zemlya archipelago. Increased interest in petroleum exploration in the Arctic raised interest in a resolution of the dispute. The agreement takes into account the relative positions of the archipelagos, rather than being based simply on northward extension of the continental border of Norway and Russia.[91]
76
+
77
+ The three main industries on Svalbard are coal mining, tourism, and research. In 2007, 484 people worked in the mining sector, 211 in the tourism sector, and 111 in the education sector. The same year, mining generated revenue of 2.008 billion Norwegian kroner (US$227,791,078), tourism 317 million kroner ($35,967,202), and research 142 million kroner ($16,098,404).[70][93] In 2006, the average income for economically active people was 494,700 kroner, 23% higher than on the mainland.[94] Almost all housing is owned by the various employers and institutions and rented to their employees; there are only a few privately owned houses, most of which are recreational cabins. Because of this, it is nearly impossible to live on Svalbard without working for an established institution.[87]
78
+
79
+ Since the resettlement of Svalbard in the early 20th century, coal mining has been the dominant commercial activity. Store Norske Spitsbergen Kulkompani, a subsidiary of the Norwegian Ministry of Trade and Industry, operates Svea Nord in Sveagruva and Mine 7 in Longyearbyen. The former produced 3.4 million tonnes in 2008, while the latter uses 35% of its output to fuel the Longyearbyen Power Station. Since 2007, there has not been any significant mining by the Russian state-owned Arktikugol in Barentsburg. Test drilling for petroleum has previously been carried out on land, but the results were not good enough for permanent operation. The Norwegian authorities do not allow offshore petroleum activities for environmental reasons, and the land formerly drilled on has since been protected as nature reserves or national parks.[70] In 2011, a 20-year plan to develop offshore oil and gas resources around Svalbard was announced.[95]
80
+
81
+ Svalbard has historically been a base for both whaling and fishing. Norway claimed a 200-nautical-mile (370 km; 230 mi) exclusive economic zone (EEZ) around Svalbard in 1977,[9] with 31,688 square kilometres (12,235 sq mi) of internal waters and 770,565 square kilometres (297,517 sq mi) of EEZ.[96] Norway retains a restrictive fisheries policy in the zone,[9] and the claims are disputed by Russia.[5] Tourism is focused on the environment and is centered on Longyearbyen. Activities include hiking, kayaking, walks through glacier caves, and snowmobile and dog-sled safari. Cruise ships generate a significant portion of the traffic, including both stops by offshore vessels and expeditionary cruises starting and ending in Svalbard. Traffic is strongly concentrated between March and August; overnights have quintupled from 1991 to 2008, when there were 93,000 guest-nights.[70]
82
+
83
+ Research on Svalbard centers on Longyearbyen and Ny-Ålesund, the most accessible areas in the high Arctic. The treaty grants permission for any nation to conduct research on Svalbard, resulting in the Polish Polar Station and the Chinese Arctic Yellow River Station, plus Russian facilities in Barentsburg.[97] The University Centre in Svalbard in Longyearbyen offers undergraduate, graduate, and postgraduate courses to 350 students in various arctic sciences, particularly biology, geology, and geophysics. Courses are provided to supplement studies at the mainland universities; there are no tuition fees and courses are held in English, with Norwegian and international students equally represented.[69]
84
+
85
+ The Svalbard Global Seed Vault is a seedbank to store seeds from as many of the world's crop varieties and their botanical wild relatives as possible. A cooperation between the government of Norway and the Global Crop Diversity Trust, the vault is cut into rock near Longyearbyen, keeping it at a natural −6 °C (21 °F) and refrigerating the seeds to −18 °C (0 °F).[98][99]
86
+
87
+ The Svalbard Undersea Cable System is a 1,440 km (890 mi) fibre optic line from Svalbard to Harstad, needed for communicating with polar orbiting satellites through Svalbard Satellite Station and installations in Ny-Ålesund.[100][101]
88
+
89
+ One source of income for the area was, until 2015, visiting cruise ships. The Norwegian government became concerned about large numbers of cruise ship passengers suddenly landing at small settlements such as Ny-Ålesund, which is conveniently close to the barren-yet-picturesque Magdalena Fjord. With the increasing size of the larger ships, up to 2,000 people can potentially appear in a community that normally numbers less than 40. As a result, the government severely restricted the size of cruise ships that may visit.[102]
90
+
91
+ Unemployment is effectively banned, and there is no welfare system.[79]
92
+
93
+ Within Longyearbyen, Barentsburg, and Ny-Ålesund, there are road systems, but they do not connect with each other. Off-road motorized transport is prohibited on bare ground, but snowmobiles are used extensively during winter—both for commercial and recreational activities. Transport from Longyearbyen to Barentsburg (45 km or 28 mi) and Pyramiden (100 km or 62 mi) is possible by snowmobile in winter, or by ship all year round. All settlements have ports and Longyearbyen has a bus system.[103]
94
+
95
+ Svalbard Airport, Longyear, located 3 kilometres (2 mi) from Longyearbyen, is the only airport offering air transport off the archipelago. Scandinavian Airlines has daily scheduled services to Tromsø and Oslo. Low-cost carrier Norwegian Air Shuttle also has a service between Oslo and Svalbard, operating three or four times a week; there are also irregular charter services to Russia.[104] Finnair planned a service from Helsinki three times per week between June and August 2016, but Norwegian authorities did not allow the route, citing the 1978 bilateral agreement on air traffic between Finland and Norway.[105][106][107] Lufttransport provides regular corporate charter services from Longyearbyen to Ny-Ålesund Airport and Svea Airport for Kings Bay and Store Norske; these flights are generally not available to the public.[108] There are heliports in Barentsburg and Pyramiden, and helicopters are frequently used by the governor and to a lesser extent the mining company Arktikugol.[109]
96
+
97
+ The climate of Svalbard is dominated by its high latitude, with the average summer temperature at 4 to 6 °C (39 to 43 °F) and January averages at −16 to −12 °C (3 to 10 °F).[110] The West Spitsbergen Current, the northernmost branch of the North Atlantic Current system, moderates Svalbard's temperatures, particularly during winter. Winter temperatures in Svalbard are up to 20 °C (36 °F) higher than those at similar latitudes in Russia and Canada. The warm Atlantic water keeps the surrounding waters open and navigable most of the year. The interior fjord areas and valleys, sheltered by the mountains, have larger temperature differences than the coast, giving about 2 °C (4 °F) warmer summer temperatures and 3 °C (5 °F) colder winter temperatures. In the south of Spitsbergen, the temperature is slightly higher than further north and west. During winter, the temperature difference between south and north is typically 5 °C (9 °F), and about 3 °C (5 °F) in summer. Bear Island has average temperatures even higher than the rest of the archipelago.[111]
98
+
99
+ Svalbard is where cold polar air from the north and mild, wet sea air from the south meet, creating low pressure, changeable weather and strong winds, particularly in winter; in January, a strong breeze is registered 17% of the time at Isfjord Radio, but only 1% of the time in July. In summer, particularly away from land,[clarification needed] fog is common, with visibility under 1 kilometre (0.6 mi) registered 20% of the time in July and 1% of the time in January, at Hopen and Bjørnøya.[112] Precipitation is frequent, but falls in small quantities, typically less than 400 millimetres (16 in) per year in western Spitsbergen. More rain falls on the uninhabited east side, where there can be more than 1,000 millimetres (39 in).[112]
100
+
101
+ 2016 was the warmest year on record at Svalbard Airport, with a remarkable mean temperature of 0.0 °C (32.0 °F), 7.5 °C (13.5 °F) above the 1961–90 average, and more comparable to a location at the Arctic Circle. The coldest temperature of the year was as high as −18 °C (0 °F), warmer than the mean minimum in a normal January, February or March. In the same year, the number of days with rainfall equalled the number of days with snowfall, a significant deviation from the usual pattern, whereby there would be at least twice as many snow days.[113]
102
+
103
+ In addition to humans, three primarily terrestrial mammalian species inhabit the archipelago: the Arctic fox, the Svalbard reindeer, and accidentally introduced southern voles, which are found only in Grumant.[114] Attempts to introduce the Arctic hare and the muskox have both failed.[115] There are 15 to 20 types of marine mammals, including whales, dolphins, seals, walruses, and polar bears.[114]
104
+
105
+ Polar bears are the iconic symbol of Svalbard, and one of the main tourist attractions.[116] The animals are protected and people moving outside the settlements are required to have appropriate scare devices to ward off attacks. They are also advised to carry a firearm for use as a last resort.[117][118] A British schoolboy was killed by a polar bear in 2011.[119] In July 2018, a polar bear was shot dead after it attacked and injured a polar bear guard leading tourists off a cruise ship.[120][121] Svalbard and Franz Josef Land share a common population of 3,000 polar bears, with Kong Karls Land being the most important breeding ground.
106
+
107
+ The Svalbard reindeer (R. tarandus platyrhynchus) is a distinct subspecies; although it was previously almost extinct, it can be legally hunted (as can Arctic fox).[114] There are limited numbers of domesticated animals in the Russian settlements.[122]
108
+
109
+ About eighty species of bird are found on Svalbard, most of which are migratory.[123] The Barents Sea is among the areas in the world with the most seabirds, with about 20 million individuals during late summer. The most common are the little auk, northern fulmar, thick-billed murre, and black-legged kittiwake. Sixteen species are on the IUCN Red List. Bjørnøya, Storfjorden, Nordvest-Spitsbergen, and Hopen are particularly important breeding grounds for seabirds. The Arctic tern has the furthest migration, all the way to Antarctica.[114] Only two songbirds migrate to Svalbard to breed: the snow bunting and the wheatear. The rock ptarmigan is the only bird to overwinter.[124] Remains of Predator X (Pliosaurus funkei) from the Jurassic period were discovered here; it is one of the largest dinosaur-era marine reptiles ever found.[125]
110
+
111
+ Svalbard has permafrost and tundra, with low, middle, and high Arctic vegetation. 165 species of plants have been found on the archipelago.[114] Only those areas which thaw in the summer have vegetation, which accounts for about 10% of the archipelago.[126] Vegetation is most abundant in Nordenskiöld Land, around Isfjorden and in areas affected by guano.[127] While there is little precipitation, giving the archipelago a steppe climate, plants still have good access to water because the cold climate reduces evaporation.[112][114] The growing season is very short, and may last only a few weeks.[128]
112
+
113
+ There are seven national parks in Svalbard: Forlandet, Indre Wijdefjorden, Nordenskiöld Land, Nordre Isfjorden Land, Nordvest-Spitsbergen, Sassen-Bünsow Land and Sør-Spitsbergen.[129] The archipelago has fifteen bird sanctuaries, one geotopic protected area and six nature reserves—with Nordaust-Svalbard and Søraust-Svalbard both being larger than any of the national parks. Most of the nature reserves and three of the national parks were created in 1973, with the remaining areas gaining protection in the 2000s.[130] All human traces dating from before 1946 are automatically protected.[117] The protected areas make up 65% of the archipelago.[94] Svalbard is on Norway's tentative list for nomination as a UNESCO World Heritage Site.[131]
114
+
115
+ The total solar eclipse of 20 March 2015 included only Svalbard and the Faroe Islands in the band of totality. Many scientists and tourists observed it.
116
+
117
+ Longyearbyen School serves ages 6–18. It is the primary/secondary school in the northernmost location on Earth. Once pupils reach ages 16 or 17, most families move to mainland Norway.[132] Barentsburg has its own school serving the Russian community; by 2014 it had three teachers, and its welfare funds had declined.[133] A primary school served the community of Pyramiden in the pre-1998 period.[134]
118
+
119
+ There is a tertiary educational institution in Longyearbyen that offers courses but not full degrees,[132] the University Centre in Svalbard (UNIS), the northernmost tertiary school on Earth.[135]
120
+
121
+ Longyearbyen School
122
+
123
+ Barentsburg School
124
+
125
+ University Centre in Svalbard (UNIS)
126
+
127
+ Association football is the most popular sport in Svalbard. There are three football pitches (one at Barentsburg), but no stadiums because of the small population.[136]
128
+ There is also an indoor hall adapted for multiple sports, including indoor football.[137]
en/5558.html.txt ADDED
@@ -0,0 +1,197 @@
1
+
2
+
3
+ – in Africa (light blue) – in the African Union (light blue)
4
+
5
+ Eswatini (/ˌɛswɑːˈtiːni/ ESS-wah-TEE-nee; Swazi: eSwatini [ɛswáˈtʼiːni]), officially the Kingdom of Eswatini (Swazi: Umbuso weSwatini) and also known as Swaziland (/ˈswɑːzilænd/ SWAH-zee-land; officially renamed in 2018),[10][11] is a landlocked country in Southern Africa. It is bordered by Mozambique to its northeast and South Africa to its north, west, and south. At no more than 200 kilometres (120 mi) north to south and 130 kilometres (81 mi) east to west, Eswatini is one of the smallest countries in Africa; despite this, its climate and topography are diverse, ranging from a cool and mountainous highveld to a hot and dry lowveld.
6
+
7
+ The population is composed primarily of ethnic Swazis. The language is Swazi (siSwati in native form). The Swazis established their kingdom in the mid-18th century under the leadership of Ngwane III.[12] The country and the Swazi take their names from Mswati II, the 19th-century king under whose rule Swazi territory was expanded and unified; the present boundaries were drawn up in 1881 in the midst of the Scramble for Africa.[13] After the Second Boer War, the kingdom, under the name of Swaziland, was a British protectorate from 1903 until it regained its independence on 6 September 1968.[14] In April 2018, the official name was changed from Kingdom of Swaziland to Kingdom of Eswatini, mirroring the name commonly used in Swazi.[15][16][11]
8
+
9
+ The government is an absolute monarchy, ruled by King Mswati III since 1986.[17][18] Elections are held every five years to determine the House of Assembly and the Senate majority. The current constitution was adopted in 2005. Umhlanga, the reed dance held in August/September,[19] and incwala, the kingship dance held in December/January, are the nation's most important events.[20]
10
+
11
+ Eswatini is a developing country with a small economy. With a GDP per capita of $4,145.97, it is classified as a lower-middle-income country.[21] Eswatini is a member of the Southern African Customs Union (SACU) and the Common Market for Eastern and Southern Africa (COMESA), and its main local trading partner is South Africa; to ensure economic stability, Eswatini's currency, the lilangeni, is pegged to the South African rand. Eswatini's major overseas trading partners are the United States[22] and the European Union.[23] The majority of the country's employment is provided by its agricultural and manufacturing sectors. Eswatini is a member of the Southern African Development Community (SADC), the African Union, the Commonwealth of Nations, and the United Nations.
12
+
13
+ The Swazi population faces major health issues: HIV/AIDS and (to a lesser extent) tuberculosis are widespread.[24][25] It is estimated that 26% of the adult population is HIV-positive. As of 2018, Eswatini has the 12th-lowest life expectancy in the world, at 58 years.[26] The population of Eswatini is young, with a median age of 20.5 years and people aged 14 years or younger constituting 37.5% of the country's total population.[27] The present population growth rate is 1.2%.
14
+
15
+ Artifacts indicating human activity dating back to the early Stone Age, around 200,000 years ago, have been found in Eswatini. Prehistoric rock art paintings dating from as far back as c. 27,000 years ago, to as recent as the 19th century, can be found in various places around the country.[28]
16
+
17
+ The earliest known inhabitants of the region were Khoisan hunter-gatherers. They were largely replaced by the Nguni during the great Bantu migrations. These peoples originated from the Great Lakes regions of eastern and central Africa. Evidence of agriculture and iron use dates from about the 4th century. People speaking languages ancestral to the current Sotho and Nguni languages began settling no later than the 11th century.[29]
18
+
19
+ The Swazi settlers, then known as the Ngwane (or bakaNgwane) before entering Eswatini, had been settled on the banks of the Pongola River. Before that, they were settled in the area of the Tembe River near present-day Maputo, Mozambique. Continuing conflict with the Ndwandwe people pushed them further north, with Ngwane III establishing his capital at Shiselweni at the foot of the Mhlosheni hills.[29]
20
+
21
+ Under Sobhuza I, the Ngwane people eventually established their capital at Zombodze in the heartland of present-day Eswatini. In this process, they conquered and incorporated the long-established clans of the country known to the Swazi as Emakhandzambili.[29]
22
+
23
+ Eswatini derives its name from a later king named Mswati II. KaNgwane, named for Ngwane III, is an alternative name for Eswatini, the surname of whose royal house remains Nkhosi Dlamini. Nkhosi literally means "king". Mswati II was the greatest of the fighting kings of Eswatini, and he greatly extended the area of the country to twice its current size. The Emakhandzambili clans were initially incorporated into the kingdom with wide autonomy, often including grants of special ritual and political status. The extent of their autonomy, however, was drastically curtailed by Mswati, who attacked and subdued some of them in the 1850s.[29]
24
+
25
+ With his power, Mswati greatly reduced the influence of the Emakhandzambili while incorporating more people into his kingdom either through conquest or by giving them refuge. These later arrivals became known to the Swazis as Emafikamuva. The clans who accompanied the Dlamini kings were known as the Bemdzabuko or true Swazi.[citation needed]
26
+
27
+ The autonomy of the Swazi nation was influenced by British and Dutch rule of southern Africa in the 19th and early 20th centuries. In 1881, the British government signed a convention recognising Swazi independence despite the Scramble for Africa that was taking place at the time. This independence was also recognised in the London Convention of 1884.[30]
28
+
29
+ Because of controversial land/mineral rights and other concessions, Swaziland had a triumviral administration in 1890 following the death of King Mbandzeni in 1889. This government represented the British, the Dutch republics, and the Swazi people. In 1894, a convention placed Swaziland under the South African Republic as a protectorate. This continued under the rule of Ngwane V until the outbreak of the Second Boer War in October 1899.[citation needed]
30
+
31
+ King Ngwane V died in December 1899, during incwala, after the outbreak of the Second Boer War. His successor, Sobhuza II, was four months old. Swaziland was indirectly involved in the war with various skirmishes between the British and the Boers occurring in the country until 1902.[citation needed]
32
+
33
+ In 1903, after the British victory in the Second Boer War, Swaziland became a British protectorate. Much of its early administration (for example, postal services) was carried out from South Africa until 1906 when the Transvaal Colony was granted self-government. Following this, Swaziland was partitioned into European and non-European (or native reserves) areas with the former being two-thirds of the total land. Sobhuza's official coronation was in December 1921 after the regency of Labotsibeni, after which he led an unsuccessful deputation to the Privy Council of the United Kingdom in London in 1922 regarding the issue of the land.[31]
34
+
35
+ In the period between 1923 and 1963, Sobhuza II established the Swazi Commercial Amadoda which was to grant licences to small businesses on the Swazi reserves and also established the Swazi National School to counter the dominance of the missions in education. His stature grew with time and the Swazi royal leadership was successful in resisting the weakening power of the British administration and the incorporation of Swaziland into the Union of South Africa.[31]
36
+
37
+ The constitution for independent Swaziland was promulgated by Britain in November 1963 under the terms of which legislative and executive councils were established. This development was opposed by the Swazi National Council (liqoqo). Despite such opposition, elections took place and the first Legislative Council of Swaziland was constituted on 9 September 1964. Changes to the original constitution proposed by the Legislative Council were accepted by Britain and a new constitution providing for a House of Assembly and Senate was drawn up. Elections under this constitution were held in 1967.[citation needed]
38
+
39
+ Following the 1967 elections, Swaziland was a protected state until independence was regained in 1968.[32]
40
+
41
+ Following the elections of 1973, the constitution of Swaziland was suspended by King Sobhuza II who thereafter ruled the country by decree until his death in 1982. At this point, Sobhuza II had ruled Swaziland for almost 83 years, making him the longest-reigning monarch in history.[33] A regency followed his death, with Queen Regent Dzeliwe Shongwe being head of state until 1984 when she was removed by the Liqoqo and replaced by Queen Mother Ntfombi Tfwala.[33] Mswati III, the son of Ntfombi, was crowned king on 25 April 1986 as King and Ingwenyama of Swaziland.[34]
42
+
43
+ The 1990s saw a rise in student and labour protests pressuring the king to introduce reforms.[35] Thus, progress toward constitutional reforms began, culminating with the introduction of the current Swazi constitution in 2005. This happened despite objections by political activists. The current constitution does not clearly deal with the status of political parties.[36]
44
+
45
+ The first election under the new constitution took place in 2008. Members of parliament were elected from 55 constituencies (also known as tinkhundla). These MPs served five-year terms which ended in 2013.[36]
46
+
47
+ In 2011, Swaziland suffered an economic crisis due to reduced SACU receipts, which caused the government to request a loan from neighbouring South Africa. However, Swaziland did not agree to the conditions of the loan, which included political reforms.[37]
48
+
49
+ During this period, there was increased pressure on the Swazi government to carry out more reforms. Public protests by civic organisations and trade unions became more common. Starting in 2012, improvements in SACU receipts have eased the fiscal pressure on the Swazi government. A new parliament, the second since promulgation of the constitution, was elected on 20 September 2013. At this time the king reappointed Sibusiso Dlamini as prime minister for the third time.[38]
50
+
51
+ On 19 April 2018, King Mswati III announced that the Kingdom of Swaziland had renamed itself the Kingdom of Eswatini, reflecting the extant Swazi name for the state eSwatini, to mark the 50th anniversary of Swazi independence. The new name, Eswatini, means "land of the Swazis" in the Swazi language and was partially intended to prevent confusion with the similarly named Switzerland.[10][11]
52
+
53
+ Eswatini workers began anti-government protests against low salaries on 19 September 2018. They went on a three-day strike organised by the Trade Union Congress of Swaziland (TUCOSWA) that resulted in widespread disruption.[39]
54
+
55
+ Eswatini is an absolute monarchy with constitutional provisions and Swazi law and custom.[40] The head of state is the king or Ngwenyama (lit. Lion), currently King Mswati III, who ascended to the throne in 1986 after the death of his father King Sobhuza II in 1982 and a period of regency. According to the country's constitution, the Ingwenyama is a symbol of unity and the eternity of the Swazi nation.[41]
56
+
57
+ By tradition, the king reigns along with his mother (or a ritual substitute), the Ndlovukati (lit. She-Elephant). The former was viewed as the administrative head of state and the latter as a spiritual and national head of state, with real power counterbalancing that of the king, but, during the long reign of Sobhuza II, the role of the Ndlovukati became more symbolic.[citation needed]
58
+
59
+ The king appoints the prime minister from the legislature and also appoints a minority of legislators to both chambers of the Libandla (parliament) with help from an advisory council. The king is allowed by the constitution to appoint some members to parliament to represent special interests. These special interests are citizens who may have stood as electoral candidates without being elected, or who did not stand at all. This is done to balance views in parliament. Special interests could be people of a particular gender or race, people with disabilities, the business community, civil society, scholars, and chiefs.[citation needed]
60
+
61
+ The Swazi bicameral Parliament, or Libandla, consists of the Senate (30 seats; 10 members appointed by the House of Assembly and 20 appointed by the monarch; to serve five-year terms) and the House of Assembly (65 seats; 10 members appointed by the monarch and 55 elected by popular vote; to serve five-year terms). The elections are held every five years after dissolution of parliament by the king. The last elections were held on 18 August and 21 September 2018.[42][43] The balloting is done in a non-partisan manner. All election procedures are overseen by the Elections and Boundaries Commission.[44]
62
+
63
+ At independence on 6 September 1968, Swaziland adopted a Westminster-style constitution. On 12 April 1973, King Sobhuza II annulled it by decree, assuming supreme powers in all executive, judicial, and legislative matters.[45] The first non-party elections for the House of Assembly were held in 1978; they were conducted on the basis of tinkhundla, electoral constituencies determined by the King, and were supervised by an Electoral Committee appointed by the King.[45]
64
+
65
+ Until the 1993 election, the ballot was not secret, voters were not registered, and they did not elect representatives directly. Instead, voters elected an electoral college by passing through a gate designated for the candidate of choice while officials counted them.[45] Later on, a constitutional review commission was appointed by King Mswati III in July 1996, comprising chiefs, political activists, and unionists to consider public submissions and draft proposals for a new constitution.[46]
66
+
67
+ Drafts were released for comment in May 1999 and November 2000. These were strongly criticised by civil society organisations in Swaziland and human rights organisations elsewhere. A 15-member team was announced in December 2001 to draft a new constitution; several members of this team were reported to be close to the royal family.[47]
68
+
69
+ In 2005, the constitution was put into effect. There is still much debate in the country about the constitutional reforms. From the early seventies, there was active resistance to the royal hegemony.[citation needed]
70
+
71
+ Nominations take place at the chiefdoms. On the day of nomination, the name of the nominee is put forward by a show of hands and the nominee is given an opportunity to indicate whether he or she accepts the nomination. If he or she accepts it, he or she must be supported by at least ten members of that chiefdom. The nominations are for the position of Member of Parliament, Constituency Headman (Indvuna), and the Constituency Executive Committee (Bucopho). The minimum number of nominees is four and the maximum is ten.[48]
72
+
73
+ Primary elections also take place at the chiefdom level and are conducted by secret ballot. During the primary elections, voters are given an opportunity to elect the member of the executive committee (Bucopho) for that particular chiefdom. Aspiring members of parliament and the constituency headman are also elected from each chiefdom. The secondary and final elections take place at the various constituencies, called tinkhundla.[48]
74
+
75
+ Candidates who won the primary elections in the chiefdoms are considered nominees for the secondary elections at inkhundla (constituency) level. The nominees who receive the most votes become members of parliament or constituency headmen.[49][50]
76
+
77
+ Eswatini is a member of the United Nations, the Commonwealth of Nations, the African Union, the Common Market for Eastern and Southern Africa, and the Southern African Development Community.[51][52][53][54][55]
78
+
79
+ The judicial system in Eswatini is a dual system. The 2006 constitution established a court system based on the Western model consisting of four regional Magistrates Courts, a High Court, and a Court of Appeal (the Supreme Court), which are independent of crown control. In addition, traditional courts (Swazi Courts or Customary Courts) deal with minor offenses and violations of traditional Swazi law and custom.[56]
80
+
81
+ Judges are appointed by the King and are usually expatriates from South Africa.[57] The Supreme Court, which replaced the previous Court of Appeal, consists of the Chief Justice and at least four other Supreme Court judges. The High Court consists of the Chief Justice and at least four High Court judges.[58]
82
+
83
+ The military of Eswatini (Umbutfo Eswatini Defence Force) is used primarily during domestic protests, with some border and customs duties. The military has never been involved in a foreign conflict.[62] The king is the Commander-in-Chief of the Defence Force and the substantive Minister of the Ministry of Defence.[63]
84
+
85
+ There are approximately 3,000 personnel in the defence force, with the army being the largest component.[64] There is a small air force, which is mainly used for transporting the king as well as cargo and personnel, surveying land with search and rescue functions, and mobilising in case of a national emergency.[65]
86
+
87
+ Eswatini is divided into four regions: Hhohho, Lubombo, Manzini, and Shiselweni. In each of the four regions, there are several tinkhundla (singular inkhundla). The regions are managed by a regional administrator, who is aided by elected members in each inkhundla.[66]
88
+
89
+ The local government is divided into differently structured rural and urban councils depending on the level of development in the area. Although there are different political structures to the local authorities, effectively the urban councils are municipalities and the rural councils are the tinkhundla. There are twelve municipalities and 55 tinkhundla.[citation needed]
90
+
91
+ There are three tiers of government in the urban areas: city councils, town councils and town boards, depending on the size of the town or city. Equally, there are three tiers in the rural areas: regional administration at the regional level, tinkhundla and chiefdoms. Decisions are made by the full council based on recommendations made by the various sub-committees. The town clerk is the chief advisor in each local council or town board.[citation needed]
92
+
93
+ There are twelve declared urban areas, comprising two city councils, three town councils and seven town boards. The main cities and towns in Eswatini are Manzini, Mbabane, Nhlangano and Siteki, which are also regional capitals. The first two have city councils and the latter two have town councils. Other small towns or urban areas with substantial populations are Ezulwini, Matsapha, Hlatikhulu, Pigg's Peak, Simunye, and Big Bend.[citation needed]
94
+
95
+ As noted above, there are 55 tinkhundla in Eswatini and each elects one representative to the House of Assembly of Eswatini. Each inkhundla has a development committee (bucopho) elected from the various constituency chiefdoms in its area for a five-year term. Bucopho bring to the inkhundla all matters of interest and concern to their various chiefdoms, and take back to the chiefdoms the decisions of the inkhundla. The chairman of the bucopho is elected at the inkhundla and is called indvuna ye nkhundla.[citation needed]
96
+
97
+ Eswatini lies across a fault which runs from the Drakensberg Mountains of Lesotho, north through the Eastern highlands of Zimbabwe, and forms the Great Rift Valley of Kenya.[citation needed]
98
+
99
+ A small, landlocked kingdom, Eswatini is bordered to the north, west and south by the Republic of South Africa and to the east by Mozambique. Eswatini has a land area of 17,364 km2 (6,704 sq mi) and four separate geographical regions, which run from north to south and are determined by altitude. Eswatini lies at approximately 26°30'S, 31°30'E.[67] The country has a wide variety of landscapes, from the mountains along the Mozambican border to savannas in the east and rain forest in the northwest. Several rivers flow through the country, such as the Great Usutu River.[68][citation needed]
100
+
101
+ Along the eastern border with Mozambique is the Lubombo, a mountain ridge at an altitude of around 600 metres (2,000 ft). The mountains are broken by the canyons of three rivers, the Ngwavuma, the Usutu and the Mbuluzi. This is cattle ranching country. The western border of Eswatini, with an average altitude of 1,200 metres (3,900 ft), lies on the edge of an escarpment. Between the mountains, rivers rush through deep gorges. Mbabane, the capital, is on the Highveld.[citation needed]
102
+
103
+ The Middleveld, lying at an average of 700 metres (2,300 ft) above sea level, is the most densely populated region of Eswatini, with lower rainfall than the mountains. Manzini, the principal commercial and industrial city, is situated in the Middleveld.[citation needed]
104
+
105
+ The Lowveld of Eswatini, at around 250 metres (820 ft), is less populated than other areas and presents a typical African bush country of thorn trees and grasslands. Development of the region was inhibited, in early days, by the scourge of malaria.[citation needed]
106
+
107
+ Eswatini is divided into four climatic regions: the Highveld, Middleveld, Lowveld and Lubombo plateau. The seasons are the reverse of those in the Northern Hemisphere with December being mid-summer and June mid-winter. Generally speaking, rain falls mostly during the summer months, often in the form of thunderstorms.[citation needed]
108
+
109
+ Winter is the dry season. Annual rainfall is highest on the Highveld in the west, between 1,000 and 2,000 mm (39.4 and 78.7 in) depending on the year. The further east, the less rain, with the Lowveld recording 500 to 900 mm (19.7 to 35.4 in) per annum.[citation needed]
110
+
111
+ Variations in temperature are also related to the altitude of the different regions. The Highveld temperature is temperate and seldom uncomfortably hot, while the Lowveld may record temperatures around 40 °C (104 °F) in summer.[citation needed]
112
+
113
+ The average temperatures at Mbabane, according to season:
114
+
115
+ Climate change in Eswatini is mainly evident in changing precipitation, including greater variability, persistent drought and heightened storm intensity. In turn, this leads to desertification, increased food insecurity and reduced river flows. Despite being responsible for a negligible portion of total global greenhouse gas emissions, Eswatini is vulnerable to the impacts of climate change. The government of Eswatini has expressed concern that climate change is exacerbating existing social challenges such as poverty, high HIV prevalence and food insecurity, and that it will drastically restrict the country's ability to develop, as envisaged in Vision 2022.[69] Economically, climate change has already adversely affected Eswatini. For instance, the 2015-2016 drought reduced exports of sugar and soft drink concentrate, Eswatini's largest export category. Many of Eswatini's major exports are raw agricultural products and are therefore vulnerable to a changing climate.[70]
116
+
117
+ There are known to be 507 bird species in Eswatini, including 11 globally threatened species and four introduced species, and 107 mammal species native to Eswatini, including the critically endangered South-central black rhinoceros and seven other endangered or vulnerable species.[citation needed]
118
+
119
+ Protected areas of Eswatini include seven nature reserves, four frontier conservation areas and three wildlife or game reserves. Hlane Royal National Park, the largest park in Eswatini, is rich in bird life, including white-backed vultures, white-headed, lappet-faced and Cape vultures, raptors such as martial eagles, bateleurs, and long-crested eagles, and the southernmost nesting site of the marabou stork.[71]
120
+
121
+ Eswatini's economy is diverse, with agriculture, forestry and mining accounting for about 13% of GDP, manufacturing (textiles and sugar-related processing) representing 37% of GDP, and services (led by government services) constituting 50% of GDP. Title Deed Lands (TDLs), where the bulk of high-value crops (sugar, forestry, and citrus) are grown, are characterised by high levels of investment and irrigation, and high productivity.[citation needed]
122
+
123
+ About 75% of the population is employed in subsistence agriculture on Swazi Nation Land (SNL). In contrast with the commercial farms, Swazi Nation Land suffers from low productivity and investment. This dual nature of the Swazi economy, with high productivity in textile manufacturing and in the industrialised agricultural TDLs on the one hand, and subsistence agriculture of declining productivity (on SNL) on the other, may well explain the country's overall low growth, high inequality and unemployment.[citation needed]
124
+
125
+ Economic growth in Eswatini has lagged behind that of its neighbours. Real GDP growth since 2001 has averaged 2.8%, nearly 2 percentage points lower than growth in other Southern African Customs Union (SACU) member countries. Low agricultural productivity in the SNLs, repeated droughts, the devastating effect of HIV/AIDS and an overly large and inefficient government sector are likely contributing factors. Eswatini's public finances deteriorated in the late 1990s following sizeable surpluses a decade earlier. A combination of declining revenues and increased spending led to significant budget deficits.[citation needed]
126
+
127
+ The considerable spending did not lead to more growth and did not benefit the poor. Much of the increased spending has gone to current expenditures related to wages, transfers, and subsidies. The wage bill today constitutes over 15% of GDP and 55% of total public spending; these are some of the highest levels on the African continent. The recent rapid growth in SACU revenues has, however, reversed the fiscal situation, and sizeable surpluses have been recorded since 2006. SACU revenues today account for over 60% of total government revenues. On the positive side, the external debt burden has declined markedly over the last 20 years, and domestic debt is almost negligible; external debt as a percentage of GDP was less than 20% in 2006.[citation needed]
128
+
129
+ Eswatini's economy is very closely linked to the economy of South Africa, from which it receives over 90% of its imports and to which it sends about 70% of its exports. Eswatini's other key trading partners are the United States and the EU, from whom the country has received trade preferences for apparel exports (under the African Growth and Opportunity Act, or AGOA, to the US) and for sugar (to the EU). Under these agreements, both apparel and sugar exports did well, with rapid growth and a strong inflow of foreign direct investment. Textile exports grew by over 200% between 2000 and 2005, and sugar exports increased by more than 50% over the same period.[citation needed]
130
+
131
+ The continued vibrancy of the export sector is threatened by the removal of trade preferences for textiles, the extension of similar preferences to East Asian countries, and the phasing out of preferential prices for sugar on the EU market. Eswatini will thus have to face the challenge of remaining competitive in a changing global environment. A crucial factor in addressing this challenge is the investment climate.[citation needed]
132
+
133
+ The recently concluded Investment Climate Assessment provides some positive findings in this regard, namely that Eswatini firms are among the most productive in Sub-Saharan Africa, although they are less productive than firms in the most productive middle-income countries in other regions. They compare more favourably with firms from lower middle income countries, but are hampered by inadequate governance arrangements and infrastructure.[citation needed]
134
+
135
+ Eswatini's currency, the lilangeni, is pegged to the South African rand, effectively subordinating Eswatini's monetary policy to that of South Africa. Customs duties from the Southern African Customs Union, which may equal as much as 70% of government revenue in some years, and worker remittances from South Africa substantially supplement domestically earned income. Eswatini is not poor enough to merit an IMF programme; however, the country is struggling to reduce the size of the civil service and control costs at public enterprises. The government is trying to improve the atmosphere for foreign direct investment.[citation needed]
136
+
137
+ The majority of Eswatini's population is ethnically Swazi, mixed with a small number of Zulu and White Africans, mostly people of British and Afrikaner descent. Traditionally Swazi have been subsistence farmers and herders, but most now mix such activities with work in the growing urban formal economy and in government. Some Swazi work in the mines in South Africa.[citation needed]
138
+
139
+ Eswatini also received Portuguese settlers and African refugees from Mozambique. Christianity in Eswatini is sometimes mixed with traditional beliefs and practices. Many traditionalists believe that most Swazi ascribe a special spiritual role to the monarch.[citation needed]
140
+
141
+ This is a list of major cities and towns in Eswatini. The table below also includes the population and region.
142
+
143
+ SiSwati[72] (also known as Swati, Swazi or Siswati) is a Bantu language of the Nguni Group, spoken in Eswatini and South Africa. It has 2.5 million speakers and is taught in schools. It is an official language of Eswatini, along with English,[73] and one of the official languages of South Africa. English is the medium of communication in schools, in business, and in the press.[citation needed]
144
+
145
+ About 76,000 people in the country speak Zulu.[74] Tsonga, which is spoken by many people throughout the region, is spoken by about 19,000 people in Eswatini. Afrikaans is also spoken by some residents of Afrikaner descent. Portuguese has been introduced as a third language in the schools, due to the large community of Portuguese speakers from Mozambique[citation needed] or Northern and Central Portugal.[75]
146
+
147
+ Eighty-three percent of the total population adheres to Christianity in Eswatini. Anglican, Protestant and indigenous African churches, including African Zionist, constitute the majority of Christians (40%), followed by Roman Catholicism at 6% of the population. On 18 July 2012, Ellinah Wamukoya was elected Anglican Bishop of Swaziland, becoming the first woman to be a bishop in Africa. Fifteen percent of the population follows traditional religions; other non-Christian religions practised in the country include Islam (2%[76]), the Bahá'í Faith (0.5%), and Hinduism (0.2%).[77] There were 14 Jewish families in 2013.[78]
148
+
149
+ The Kingdom of Eswatini does not recognise non-civil marriages such as Islamic-rite marriage contracts.[79]
150
+
151
+ As of 2016, Eswatini has the highest prevalence of HIV among adults aged 15 to 49 in the world (27.2%).[80][81]
152
+
153
+ Education in Eswatini begins with pre-school education for infants, primary, secondary and high school education for general education and training (GET), and universities and colleges at the tertiary level. Pre-school education is usually for children aged 5 years or younger; after that, students can enroll in a primary school anywhere in the country. In Eswatini, early childhood care and education (ECCE) centres take the form of preschools or neighbourhood care points (NCPs). In the country, 21.6% of preschool-age children have access to early childhood education.[82]
154
+
155
+ Primary education in Eswatini begins at the age of six. It is a seven-year programme that culminates in the end-of-primary-school examination (SPC) in grade 7, a locally based assessment administered by the Examinations Council through schools. Primary education runs from grade 1 to grade 7.[83]
156
+
157
+ The secondary and high school education system in Eswatini is a five-year programme divided into three years junior secondary and two years senior secondary. There is an external public examination (Junior Certificate) at the end of the junior secondary that learners have to pass to progress to the senior secondary level. The Examinations Council of Swaziland (ECESWA) administers this examination. At the end of the senior secondary level, learners sit for a public examination, the Swaziland General Certificate of Secondary Education (SGCSE) and International General Certificate of Secondary Education (IGCSE) which is accredited by the Cambridge International Examination (CIE). A few schools offer the Advanced Studies (AS) programme in their curriculum.[84]
158
+
159
+ There are 830 public schools in Eswatini, including primary, secondary and high schools.[85] There are also 34 recognised private schools, with an additional 14 unrecognised. The largest number of schools is in the Hhohho region.[85] As of 2009, education in Eswatini is free at primary level, mainly from the first through the fourth grade and also for orphaned and vulnerable children, but it is not compulsory.[86]
160
+
161
+ In 1996, the net primary school enrollment rate was 90.8%, with gender parity at the primary level.[86] In 1998, 80.5% of children reached grade five.[86] Eswatini is home to a United World College. In 1963, Waterford School, later named Waterford Kamhlaba United World College of Southern Africa, was founded as southern Africa's first multiracial school. In 1981, Waterford Kamhlaba joined the United World Colleges movement as the first United World College on the African continent; it remained the only African UWC until 2019, when UWC East Africa was established.[87]
162
+
163
+ Adult and non-formal education centres are Sebenta National Institute for adult basic literacy and Emlalatini Development Centre, which provides alternative educational opportunities for school children and young adults who have not been able to complete their schooling.[citation needed]
164
+
165
+ The University of Eswatini, Southern African Nazarene University and Swaziland Christian University (SCU) are the institutions that offer university education in the country. A campus of Limkokwing University of Creative Technology can be found at Sidvwashini (Sidwashini), a suburb of the capital Mbabane. Ngwane Teacher's College and William Pitcher College are the country's teaching colleges. The Good Shepherd Hospital in Siteki is home to the College for Nursing Assistants.[88][89]
166
+
167
+ The University of Eswatini is the national university, established in 1982 by act of parliament, and is headquartered at Kwaluseni with additional campuses in Mbabane and Luyengo.[90] The Southern African Nazarene University (SANU) was established in 2010 as a merger of the Nazarene College of Nursing, the College of Theology and the Nazarene Teachers College; it is in Manzini, next to the Raleigh Fitkin Memorial Hospital, and produces the most nurses in the country. SANU comprises three faculties: the Faculty of Theology at Siteki, and the Faculties of Education and Health Sciences in Manzini.[91][92]
168
+
169
+ The SCU, focusing on medical education, was established in 2012 and is Eswatini's newest university.[93] It is in Mbabane.[94] The campus of Limkokwing University was opened at Sidvwashini in Mbabane in 2012.[95]
170
+
171
+ The main centre for technical training in Eswatini is the Swaziland College of Technology (SCOT) which is slated to become a full university.[96] It aims to provide high quality training in technology and business studies in collaboration with the commercial, industrial and public sectors.[97] Other technical and vocational institutions include the Gwamile Vocational and Commercial Training Institute in Matsapha, the Manzini Industrial and Training Centre (MITC) in Manzini, Nhlangano Agricultural Skills Training Centre, and Siteki Industrial Training Centre.
172
+
173
+ In addition to these institutions, the kingdom also has the Swaziland Institute of Management and Public Administration (SIMPA) and Institute of Development Management (IDM). SIMPA is a government-owned management and development institute and IDM is a regional organisation in Botswana, Lesotho, and Eswatini, providing training, consultancy, and research in management. North Carolina State University's Poole College of Management is a sister school of SIMPA.[98] The Mananga Management Centre was established at Ezulwini as Mananga Agricultural Management Centre in 1972 as an international management development centre offering training of middle and senior managers.[99]
174
+
175
+ The principal Swazi social unit is the homestead, a traditional beehive hut thatched with dry grass. In a polygamous homestead, each wife has her own hut and yard surrounded by reed fences. There are three structures for sleeping, cooking, and storage (brewing beer). In larger homesteads there are also structures used as bachelors' quarters and guest accommodation.
176
+
177
+ Central to the traditional homestead is the cattle byre, a circular area enclosed by large logs, interspaced with branches. The cattle byre has ritual as well as practical significance as a store of wealth and symbol of prestige. It contains sealed grain pits. Facing the cattle byre is the great hut which is occupied by the mother of the headman.
178
+
179
+ The headman is central to all homestead affairs and he is often polygamous. He leads through example and advises his wives on all social affairs of the home as well as seeing to the larger survival of the family. He also spends time socialising with the young boys, who are often his sons or close relatives, advising them on the expectations of growing up and manhood.
180
+
181
+ The Sangoma is a traditional diviner chosen by the ancestors of that particular family. The training of the Sangoma is called "kwetfwasa". At the end of the training, a graduation ceremony takes place where all the local sangoma come together for feasting and dancing. The diviner is consulted for various purposes, such as determining the cause of sickness or even death. His diagnosis is based on "kubhula", a process of communication, through trance, with the natural superpowers. The Inyanga (a medical and pharmaceutical specialist in western terms) possesses the bone throwing skill ("kushaya ematsambo") used to determine the cause of the sickness.
182
+
183
+ The most important cultural event in Eswatini is the Incwala ceremony. It is held on the fourth day after the full moon nearest the longest day, 21 December. Incwala is often translated in English as "first fruits ceremony", but the King's tasting of the new harvest is only one aspect among many in this long pageant. Incwala is best translated as "Kingship Ceremony": when there is no king, there is no Incwala. It is high treason for any other person to hold an Incwala.
184
+
185
+ Every Swazi may take part in the public parts of the Incwala. The climax of the event is the fourth day of the Big Incwala. The key figures are the King, Queen Mother, royal wives and children, the royal governors (indunas), the chiefs, the regiments, and the "bemanti" or "water people".
186
+
187
+ Eswatini's most well-known cultural event is the annual Umhlanga Reed Dance. In the eight-day ceremony, girls cut reeds and present them to the queen mother and then dance. (There is no formal competition.) It is done in late August or early September. Only childless, unmarried girls can take part. The aims of the ceremony are to preserve girls' chastity, provide tribute labour for the Queen mother, and to encourage solidarity by working together. The royal family appoints a commoner maiden to be "induna" (captain) of the girls and she announces the dates of the annual ceremony over the radio. The chosen induna is expected to be an expert dancer and knowledgeable on royal protocol. One of the King's daughters acts as her counterpart during the ceremony.
188
+
189
+ The Reed Dance today is not an ancient ceremony but a development of the old "umchwasho" custom. In "umchwasho", all young girls were placed in a female age-regiment. If any girl became pregnant outside of marriage, her family paid a fine of one cow to the local chief. After a number of years, when the girls had reached a marriageable age, they would perform labour service for the Queen Mother, ending with dancing and feasting. The country was under the chastity rite of "umchwasho" until 19 August 2005.
190
+
191
+ Eswatini is also known for a strong presence in the handcrafts industry. The formalised handcraft businesses of Eswatini employ over 2,500 people, many of whom are women (per TechnoServe Swaziland Handcrafts Impact Study, February 2011). The products are unique and reflect the culture of Eswatini, ranging from housewares, to artistic decorations, to complex glass, stone, or wood artwork.
192
+
193
+ Princess Sikhanyiso Dlamini at the reed dance (umhlanga) festival
194
+
195
+ A traditional Swazi homestead
196
+
197
+ Swazi warriors at the incwala ceremony
en/5559.html.txt ADDED
@@ -0,0 +1,149 @@
1
+ A syllable is a unit of organization for a sequence of speech sounds. It is typically made up of a syllable nucleus (most often a vowel) with optional initial and final margins (typically, consonants). Syllables are often considered the phonological "building blocks" of words.[1] They can influence the rhythm of a language, its prosody, its poetic metre and its stress patterns. Speech can usually be divided up into a whole number of syllables: for example, the word ignite is composed of two syllables: ig and nite.
2
+
3
+ Syllabic writing began several hundred years before the first letters. The earliest recorded syllables are on tablets written around 2800 BC in the Sumerian city of Ur. This shift from pictograms to syllables has been called "the most important advance in the history of writing".[2]
4
+
5
+ A word that consists of a single syllable (like English dog) is called a monosyllable (and is said to be monosyllabic). Similar terms include disyllable (and disyllabic; also bisyllable and bisyllabic) for a word of two syllables; trisyllable (and trisyllabic) for a word of three syllables; and polysyllable (and polysyllabic), which may refer either to a word of more than three syllables or to any word of more than one syllable.
6
+
7
+ Syllable is an Anglo-Norman variation of Old French sillabe, from Latin syllaba, from Koine Greek συλλαβή syllabḗ (Greek pronunciation: [sylːabɛ̌ː]). συλλαβή means "what is taken together", referring to letters that are taken together to make a single sound.[3]
8
+
9
+ συλλαβή is a verbal noun from the verb συλλαμβάνω syllambánō, a compound of the preposition σύν sýn "with" and the verb λαμβάνω lambánō "take".[4] The noun uses the root λαβ-, which appears in the aorist tense; the present tense stem λαμβάν- is formed by adding a nasal infix ⟨μ⟩ ⟨m⟩ before the β b and a suffix -αν -an at the end.[5]
10
+
11
+ In the International Phonetic Alphabet (IPA), the period ⟨.⟩ marks syllable breaks, as in the word "astronomical" ⟨/ˌæs.trəˈnɒm.ɪk.əl/⟩.
12
+
13
+ In practice, however, IPA transcription is typically divided into words by spaces, and often these spaces are also understood to be syllable breaks. In addition, the stress mark ⟨ˈ⟩ is placed immediately before a stressed syllable, and when the stressed syllable is in the middle of a word, the stress mark also marks a syllable break, for example in the word "understood" ⟨/ʌndərˈstʊd/⟩.
14
+
15
+ When a word space comes in the middle of a syllable (that is, when a syllable spans words), a tie bar ⟨‿⟩ can be used for liaison, as in the French combination les amis ⟨/le.z‿a.mi/⟩. The liaison tie is also used to join lexical words into phonological words, for example hot dog ⟨/ˈhɒt‿dɒɡ/⟩.
16
+
17
+ A Greek sigma, ⟨σ⟩, is used as a wild card for 'syllable', and a dollar/peso sign, ⟨$⟩, marks a syllable boundary where the usual period might be misunderstood. For example, ⟨σσ⟩ is a pair of syllables, and ⟨V$⟩ is a syllable-final vowel.
18
+
19
+ In the typical theory[citation needed] of syllable structure, the general structure of a syllable (σ) consists of three segments: the onset, the nucleus and the coda. These segments are grouped into two components: the onset, and the rime (the nucleus together with the coda).
20
+
21
+ The syllable is usually considered right-branching, i.e. nucleus and coda are grouped together as a "rime" and are only distinguished at the second level.
22
+
23
+ The nucleus is usually the vowel in the middle of a syllable. The onset is the sound or sounds occurring before the nucleus, and the coda (literally 'tail') is the sound or sounds that follow the nucleus. They are sometimes collectively known as the shell. The term rime covers the nucleus plus coda. In the one-syllable English word cat, the nucleus is a (the sound that can be shouted or sung on its own), the onset c, the coda t, and the rime at. This syllable can be abstracted as a consonant-vowel-consonant syllable, abbreviated CVC. Languages vary greatly in the restrictions on the sounds making up the onset, nucleus and coda of a syllable, according to what is termed a language's phonotactics.
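To make the onset/nucleus/coda decomposition above concrete, here is a minimal Python sketch; the letter-based vowel set, helper names and example words are illustrative assumptions rather than anything defined in the article, and a real analysis would operate on phonemes rather than spelling.

```python
# Minimal sketch: split a simple one-syllable word into onset, nucleus and coda,
# and abstract it to a CV shape (e.g. "cat" -> CVC). Toy letter-based model only.

VOWELS = set("aeiou")  # illustrative assumption; real analyses use phonemes

def split_syllable(syllable: str):
    """Return (onset, nucleus, coda) for a single toy syllable."""
    vowel_positions = [i for i, ch in enumerate(syllable) if ch in VOWELS]
    first, last = vowel_positions[0], vowel_positions[-1]
    onset = syllable[:first]             # consonants before the nucleus
    nucleus = syllable[first:last + 1]   # the vowel (or vowel sequence)
    coda = syllable[last + 1:]           # consonants after the nucleus
    return onset, nucleus, coda

def cv_shape(syllable: str) -> str:
    """Abstract a syllable to its consonant/vowel pattern."""
    return "".join("V" if ch in VOWELS else "C" for ch in syllable)

if __name__ == "__main__":
    for word in ["cat", "at", "flat"]:
        onset, nucleus, coda = split_syllable(word)
        # the rime is the nucleus plus the coda
        print(word, "->", onset, nucleus, coda,
              "| rime:", nucleus + coda, "| shape:", cv_shape(word))
```

Running the sketch on cat reproduces the decomposition given in the text: onset c, nucleus a, coda t, rime at, shape CVC.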
24
+
25
+ Although every syllable has supra-segmental features, these are usually ignored if not semantically relevant, e.g. in tonal languages.
26
+
27
+
28
+
29
+ In Chinese syllable structure, the onset is replaced with an initial, and a semivowel or liquid forms another segment, called the medial. These four segments are grouped into two slightly different components: the initial, and the final (the medial together with the rime).[example needed]
30
+
31
+ In many languages of the Mainland Southeast Asia linguistic area, such as Chinese, the syllable structure is expanded to include an additional, optional segment known as a medial, which is located between the onset (often termed the initial in this context) and the rime. The medial is normally a semivowel, but reconstructions of Old Chinese generally include liquid medials (/r/ in modern reconstructions, /l/ in older versions), and many reconstructions of Middle Chinese include a medial contrast between /i/ and /j/, where the /i/ functions phonologically as a glide rather than as part of the nucleus. In addition, many reconstructions of both Old and Middle Chinese include complex medials such as /rj/, /ji/, /jw/ and /jwi/. The medial groups phonologically with the rime rather than the onset, and the combination of medial and rime is collectively known as the final.
32
+
33
+ Some linguists, especially when discussing the modern Chinese varieties, use the terms "final" and "rime/rhyme" interchangeably. In historical Chinese phonology, however, the distinction between "final" (including the medial) and "rime" (not including the medial) is important in understanding the rime dictionaries and rime tables that form the primary sources for Middle Chinese, and as a result most authors distinguish the two according to the above definition.
34
+
35
+
36
+
37
+ In some theories of phonology, syllable structures are displayed as tree diagrams (similar to the trees found in some types of syntax). Not all phonologists agree that syllables have internal structure; in fact, some phonologists doubt the existence of the syllable as a theoretical entity.[8]
38
+
39
+ There are many arguments for a hierarchical relationship, rather than a linear one, between the syllable constituents. One hierarchical model groups the syllable nucleus and coda into an intermediate level, the rime. The hierarchical model accounts for the role that the nucleus+coda constituent plays in verse (i.e., rhyming words such as cat and bat are formed by matching both the nucleus and coda, or the entire rime), and for the distinction between heavy and light syllables, which plays a role in phonological processes such as, for example, sound change in Old English scipu and wordu.[9][further explanation needed]
40
+
41
+ In some traditional descriptions of certain languages such as Cree and Ojibwe, the syllable is considered left-branching, i.e. onset and nucleus group below a higher-level unit, called a "body" or "core". This contrasts with the coda.
42
+
43
+ The rime or rhyme of a syllable consists of a nucleus and an optional coda. It is the part of the syllable used in most poetic rhymes, and the part that is lengthened or stressed when a person elongates or stresses a word in speech.
44
+
45
+ The rime is usually the portion of a syllable from the first vowel to the end. For example, /æt/ is the rime of all of the words at, sat, and flat. However, the nucleus does not necessarily need to be a vowel in some languages. For instance, the rime of the second syllables of the words bottle and fiddle is just /l/, a liquid consonant.
46
+
47
+ Just as the rime branches into the nucleus and coda, the nucleus and coda may each branch into multiple phonemes. The limit for the number of phonemes which may be contained in each varies by language. For example, Japanese and most Sino-Tibetan languages do not have consonant clusters at the beginning or end of syllables, whereas many Eastern European languages can have more than two consonants at the beginning or end of the syllable. In English, the onset, nucleus, and coda may all have two phonemes, as in the word flouts: [fl] in the onset, the diphthong [aʊ] in the nucleus, and [ts] in the coda.
48
+
49
+ Rime and rhyme are variants of the same word, but the rarer form rime is sometimes used to mean specifically syllable rime to differentiate it from the concept of poetic rhyme. This distinction is not made by some linguists and does not appear in most dictionaries.
50
+
51
+ A heavy syllable is generally one with a branching rime, i.e. it is either a closed syllable that ends in a consonant, or a syllable with a branching nucleus, i.e. a long vowel or diphthong. The name is a metaphor, based on the nucleus or coda having lines that branch in a tree diagram.
52
+
53
+ In some languages, heavy syllables include both VV (branching nucleus) and VC (branching rime) syllables, contrasted with V, which is a light syllable.
54
+ In other languages, only VV syllables are considered heavy, while both VC and V syllables are light.
55
+ Some languages distinguish a third type of superheavy syllable, which consists of VVC syllables (with both a branching nucleus and rime) or VCC syllables (with a coda consisting of two or more consonants) or both.
56
+
57
+ In moraic theory, heavy syllables are said to have two moras, while light syllables are said to have one and superheavy syllables are said to have three. Japanese phonology is generally described this way.
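As a rough illustration of the mora counts just described, the sketch below (a toy model, not a claim about any particular language) assigns one mora to each nucleus vowel and each coda consonant of an abstract shape such as CV, CVC or CVVC, and classifies the result as light, heavy or superheavy.

```python
# Toy mora counter over abstract syllable shapes written as strings of 'C' and 'V'.
# Simplified assumption: onset consonants carry no weight; each vowel in the
# nucleus and each coda consonant contributes one mora. Languages differ on
# whether coda consonants are moraic, so this is only one possible rule set.

def moras(shape: str) -> int:
    onset_len = len(shape) - len(shape.lstrip("C"))  # leading consonants = onset
    rime = shape[onset_len:]                         # nucleus + coda
    return len(rime)

def weight(shape: str) -> str:
    m = moras(shape)
    return {1: "light", 2: "heavy"}.get(m, "superheavy")

if __name__ == "__main__":
    for s in ["CV", "CVC", "CVV", "CVVC", "CVCC"]:
        print(s, "->", moras(s), "mora(s),", weight(s))
```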
58
+
59
+ Many languages forbid superheavy syllables, while a significant number forbid any heavy syllable. Some languages strive for constant syllable weight; for example, in stressed, non-final syllables in Italian, short vowels co-occur with closed syllables while long vowels co-occur with open syllables, so that all such syllables are heavy (not light or superheavy).
60
+
61
+ The difference between heavy and light frequently determines which syllables receive stress – this is the case in Latin and Arabic, for example. The system of poetic meter in many classical languages, such as Classical Greek, Classical Latin, Old Tamil and Sanskrit, is based on syllable weight rather than stress (so-called quantitative rhythm or quantitative meter).
62
+
63
+ Syllabification is the separation of a word into syllables, whether spoken or written. In most languages, the actually spoken syllables are the basis of syllabification in writing too. Due to the very weak correspondence between sounds and letters in the spelling of modern English, for example, written syllabification in English has to be based mostly on etymological, i.e. morphological, principles rather than on phonetic ones. English written syllables therefore do not correspond to the actually spoken syllables of the living language.
64
+
65
+ Phonotactic rules determine which sounds are allowed or disallowed in each part of the syllable. English allows very complicated syllables; syllables may begin with up to three consonants (as in string or splash), and occasionally end with as many as four (as in prompts). Many other languages are much more restricted; Japanese, for example, only allows /ɴ/ and a chroneme in a coda, and theoretically has no consonant clusters at all, as the onset is composed of at most one consonant.[10]
66
+
67
+ There can be disagreement about the location of some divisions between syllables in spoken language. The problems of dealing with such cases have been most commonly discussed with relation to English. In the case of a word such as "hurry", the division may be /hʌr.i/ or /hʌ.ri/, neither of which seems a satisfactory analysis for a non-rhotic accent such as RP (British English): /hʌr.i/ results in a syllable-final /r/, which is not normally found, while /hʌ.ri/ gives a syllable-final short stressed vowel, which is also non-occurring. Arguments can be made in favour of one solution or the other: Wells (2002)[11] proposes a general rule that "Subject to certain conditions ..., consonants are syllabified with the more strongly stressed of two flanking syllables", while many other phonologists prefer to divide syllables with the consonant or consonants attached to the following syllable wherever possible. However, an alternative that has received some support is to treat an intervocalic consonant as ambisyllabic, i.e. belonging both to the preceding and to the following syllable: /hʌṛi/. This is discussed in more detail in English phonology § Phonotactics.
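The preference mentioned above for attaching intervocalic consonants to the following syllable wherever possible can be sketched as a simple "maximal onset" procedure. Everything in the snippet below, the letter-based vowel set, the whitelist of legal onsets and the example words, is an illustrative assumption; it is not an analysis of any particular accent, and it deliberately ignores ambisyllabicity.

```python
# Toy "maximal onset" syllabifier: consonants between two nuclei are assigned to
# the following syllable as long as they still form a permitted onset.
# Vowel set and onset whitelist are illustrative assumptions, not real phonotactics.

VOWELS = set("aeiou")
LEGAL_ONSETS = {"", "b", "c", "d", "f", "g", "h", "k", "l", "m", "n",
                "p", "r", "s", "t", "v", "w", "z",
                "pl", "pr", "tr", "st", "str"}  # note: "tl" is deliberately absent

def syllabify(word: str):
    segments = list(word)
    nuclei = [i for i, s in enumerate(segments) if s in VOWELS]
    boundaries = []
    for prev, nxt in zip(nuclei, nuclei[1:]):
        cluster = segments[prev + 1:nxt]              # consonants between two nuclei
        split = 0
        for k in range(len(cluster) + 1):             # move the longest tail that
            if "".join(cluster[k:]) in LEGAL_ONSETS:  # still forms a legal onset
                split = k
                break
        boundaries.append(prev + 1 + split)
    pieces, start = [], 0
    for b in boundaries + [len(segments)]:
        pieces.append("".join(segments[start:b]))
        start = b
    return pieces

if __name__ == "__main__":
    print(syllabify("april"))   # ['a', 'pril']  ("pr" is a legal onset)
    print(syllabify("atlas"))   # ['at', 'las']  ("tl" is not, so only "l" moves)
```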
68
+
69
+ The onset (also known as anlaut) is the consonant sound or sounds at the beginning of a syllable, occurring before the nucleus. Most syllables have an onset. Syllables without an onset may be said to have a zero onset – that is, nothing where the onset would be.
70
+
71
+ Some languages restrict onsets to be only a single consonant, while others allow multiconsonant onsets according to various rules. For example, in English, onsets such as pr-, pl- and tr- are possible but tl- is not, and sk- is possible but ks- is not. In Greek, however, both ks- and tl- are possible onsets, while contrarily in Classical Arabic no multiconsonant onsets are allowed at all.
72
+
73
+ Some languages forbid null onsets. In these languages, words beginning in a vowel, like the English word at, are impossible.
74
+
75
+ This is less strange than it may appear at first, as most such languages allow syllables to begin with a phonemic glottal stop (the sound in the middle of English "uh-oh" or, in some dialects, the double T in "button", represented in the IPA as /ʔ/). In English, a word that begins with a vowel may be pronounced with an epenthetic glottal stop when following a pause, though the glottal stop may not be a phoneme in the language.
76
+
77
+ Few languages make a phonemic distinction between a word beginning with a vowel and a word beginning with a glottal stop followed by a vowel, since the distinction will generally only be audible following another word. However, Maltese and some Polynesian languages do make such a distinction, as in Hawaiian /ahi/ "fire" and /ʔahi/ "tuna".
78
+
79
+ Hebrew and Arabic forbid empty onsets. The names Israel, Abel, Abraham, Iran, Omar, Abdullah, and Iraq appear not to have onsets in the first syllable, but in the original Hebrew and Arabic forms they actually begin with various consonants: the semivowel /j/ in yisrāʔēl, the glottal fricative /h/ in heḅel, the glottal stop /ʔ/ in ʔaḅrāhām and ʔīrān, or the pharyngeal fricative /ʕ/ in ʕumar, ʕabduḷḷāh, and ʕirāq. Conversely, the Arrernte language of central Australia may prohibit onsets altogether; if so, all syllables have the underlying shape VC(C).[12]
80
+
81
+ The difference between a syllable with a null onset and one beginning with a glottal stop is often purely a difference of phonological analysis, rather than the actual pronunciation of the syllable. In some cases, the pronunciation of a (putatively) vowel-initial word when following another word – particularly, whether or not a glottal stop is inserted – indicates whether the word should be considered to have a null onset. For example, many Romance languages such as Spanish never insert such a glottal stop, while English does so only some of the time, depending on factors such as conversation speed; in both cases, this suggests that the words in question are truly vowel-initial. But there are exceptions here, too. For example, standard German (excluding many southern accents) and Arabic both require that a glottal stop be inserted between a word and a following, putatively vowel-initial word. Yet such words are said to begin with a vowel in German but a glottal stop in Arabic. The reason for this has to do with other properties of the two languages. For example, a glottal stop does not occur in other situations in German, e.g. before a consonant or at the end of word. On the other hand, in Arabic, not only does a glottal stop occur in such situations (e.g. Classical /saʔala/ "he asked", /raʔj/ "opinion", /dˤawʔ/ "light"), but it occurs in alternations that are clearly indicative of its phonemic status (cf. Classical /kaːtib/ "writer" vs. /maktuːb/ "written", /ʔaːkil/ "eater" vs. /maʔkuːl/ "eaten").
82
+
83
+ The writing system of a language may not correspond with the phonological analysis of the language in terms of its handling of (potentially) null onsets. For example, in some languages written in the Latin alphabet, an initial glottal stop is left unwritten;[example needed] on the other hand, some languages written using non-Latin alphabets such as abjads and abugidas have a special zero consonant to represent a null onset. As an example, in Hangul, the alphabet of the Korean language, a null onset is represented with ㅇ at the left or top section of a grapheme, as in 역 "station", pronounced yeok, where the diphthong yeo is the nucleus and k is the coda.
84
+
85
+
86
+
87
+ The nucleus is usually the vowel in the middle of a syllable. Generally, every syllable requires a nucleus (sometimes called the peak), and the minimal syllable consists only of a nucleus, as in the English words "eye" or "owe". The syllable nucleus is usually a vowel, in the form of a monophthong, diphthong, or triphthong, but sometimes is a syllabic consonant.
88
+
89
+ In most Germanic languages, lax vowels can occur only in closed syllables. Therefore, these vowels are also called checked vowels, as opposed to the tense vowels that are called free vowels because they can occur even in open syllables.
90
+
91
+ The notion of syllable is challenged by languages that allow long strings of obstruents without any intervening vowel or sonorant. By far the most common syllabic consonants are sonorants like [l], [r], [m], [n] or [ŋ], as in English bottle, church (in rhotic accents), rhythm, button and lock 'n key. However, English allows syllabic obstruents in a few para-verbal onomatopoeic utterances such as shh (used to command silence) and psst (used to attract attention). All of these have been analyzed as phonemically syllabic. Obstruent-only syllables also occur phonetically in some prosodic situations when unstressed vowels elide between obstruents, as in potato [pʰˈteɪɾəʊ] and today [tʰˈdeɪ], which do not change in their number of syllables despite losing a syllabic nucleus.
92
+
93
+ A few languages have so-called syllabic fricatives, also known as fricative vowels, at the phonemic level. (In the context of Chinese phonology, the related but non-synonymous term apical vowel is commonly used.) Mandarin Chinese is famous for having such sounds in at least some of its dialects, for example the pinyin syllables sī shī rī, sometimes pronounced [sź̩ ʂʐ̩́ ʐʐ̩́] respectively. Though, like the nucleus of rhotic English church, there is debate over whether these nuclei are consonants or vowels.
94
+
95
+ Languages of the northwest coast of North America, including Salishan, Wakashan and Chinookan languages, allow stop consonants and voiceless fricatives as syllables at the phonemic level, in even the most careful enunciation. An example is Chinook [ɬtʰpʰt͡ʃʰkʰtʰ] 'those two women are coming this way out of the water'. Linguists have analyzed this situation in various ways, some arguing that such syllables have no nucleus at all and some arguing that the concept of "syllable" cannot clearly be applied at all to these languages.
96
+
97
+ Other examples:
98
+
99
+ In Bagemihl's survey of previous analyses, he finds that the Bella Coola word /t͡sʼktskʷt͡sʼ/ 'he arrived' would have been parsed into 0, 2, 3, 5, or 6 syllables depending on which analysis is used. One analysis would consider all vowel and consonant segments as syllable nuclei, another would consider only a small subset (fricatives or sibilants) as nuclei candidates, and another would simply deny the existence of syllables completely. However, when working with recordings rather than transcriptions, the syllables can be obvious in such languages, and native speakers have strong intuitions as to what the syllables are.
100
+
101
+ This type of phenomenon has also been reported in Berber languages (such as Indlawn Tashlhiyt Berber), Mon–Khmer languages (such as Semai, Temiar, Khmu) and the Ōgami dialect of Miyako, a Ryukyuan language.[14]
102
+
103
+ The coda (also known as auslaut) comprises the consonant sounds of a syllable that follow the nucleus. The sequence of nucleus and coda is called a rime. Some syllables consist of only a nucleus, only an onset and a nucleus with no coda, or only a nucleus and coda with no onset.
104
+
105
+ The phonotactics of many languages forbid syllable codas. Examples are Swahili and Hawaiian. In others, codas are restricted to a small subset of the consonants that appear in onset position. At a phonemic level in Japanese, for example, a coda may only be a nasal (homorganic with any following consonant) or, in the middle of a word, gemination of the following consonant. (On a phonetic level, other codas occur due to elision of /i/ and /u/.) In other languages, nearly any consonant allowed as an onset is also allowed in the coda, even clusters of consonants. In English, for example, all onset consonants except /h/ are allowed as syllable codas.
106
+
107
+ If the coda consists of a consonant cluster, the sonority decreases from left to right, as in the English word help. This is called the sonority profile.[17] English onset and coda clusters are therefore different. The onset str in strengths does not appear as a coda in any English word, and likewise the coda ngths does not appear as an onset in any word.
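+
+ As a concrete illustration of the sonority profile just described, the following minimal Python sketch (not part of the source article) checks whether a coda cluster falls in sonority from left to right. It assumes a simplified, hypothetical sonority scale (stops < fricatives < nasals < liquids < glides); real analyses use finer-grained scales and must handle exceptions such as English /s/-clusters.
+
+ # Simplified, assumed sonority scale: higher numbers are more sonorous.
+ SONORITY = {
+     "p": 1, "b": 1, "t": 1, "d": 1, "k": 1, "g": 1,  # stops
+     "f": 2, "v": 2, "s": 2, "z": 2,                  # fricatives
+     "m": 3, "n": 3,                                  # nasals
+     "l": 4, "r": 4,                                  # liquids
+     "j": 5, "w": 5,                                  # glides
+ }
+
+ def coda_sonority_falls(coda):
+     """Return True if sonority never rises from left to right in the coda."""
+     values = [SONORITY[c] for c in coda]
+     return all(a >= b for a, b in zip(values, values[1:]))
+
+ print(coda_sonority_falls(["l", "p"]))  # True: the coda of 'help' falls in sonority
+ print(coda_sonority_falls(["p", "l"]))  # False: rising sonority, not a possible English coda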
108
+
109
+
110
+
111
+ A coda-less syllable of the form V, CV, CCV, etc. (V = vowel, C = consonant) is called an open syllable or free syllable, while a syllable that has a coda (VC, CVC, CVCC, etc.) is called a closed syllable or checked syllable. Note that they have nothing to do with open and close vowels, but are defined according to the phoneme that ends the syllable: a vowel (open syllable) or a consonant (closed syllable). Almost all languages allow open syllables, but some, such as Hawaiian, do not have closed syllables.
112
+
113
+ When a syllable is not the last syllable in a word, the nucleus normally must be followed by two consonants in order for the syllable to be closed. This is because a single following consonant is typically considered the onset of the following syllable. For example, Spanish casar "to marry" is composed of an open syllable followed by a closed syllable (ca-sar), whereas cansar "to get tired" is composed of two closed syllables (can-sar). When a geminate (double) consonant occurs, the syllable boundary occurs in the middle, e.g. Italian panna "cream" (pan-na); cf. Italian pane "bread" (pa-ne).
114
+
115
+ English words may consist of a single closed syllable, with the nucleus denoted by ν and the coda denoted by κ:
116
+
117
+ English words may also consist of a single open syllable, ending in a nucleus, without a coda:
118
+
119
+ A list of examples of syllable codas in English is found at English phonology: Coda.
120
+
121
+ Some languages, such as Hawaiian, forbid codas, so that all syllables are open.
122
+
123
+ The domain of suprasegmental features is the syllable, not a specific sound; that is to say, they affect all the segments of a syllable:
124
+
125
+ Sometimes syllable length is also counted as a suprasegmental feature; for example, in some Germanic languages, long vowels may only exist with short consonants and vice versa. However, syllables can be analyzed as compositions of long and short phonemes, as in Finnish and Japanese, where consonant gemination and vowel length are independent.
126
+
127
+ In most languages, the pitch or pitch contour in which a syllable is pronounced conveys shades of meaning such as emphasis or surprise, or distinguishes a statement from a question. In tonal languages, however, the pitch affects the basic lexical meaning (e.g. "cat" vs. "dog") or grammatical meaning (e.g. past vs. present). In some languages, only the pitch itself (e.g. high vs. low) has this effect, while in others, especially East Asian languages such as Chinese, Thai or Vietnamese, the shape or contour (e.g. level vs. rising vs. falling) also needs to be distinguished.
128
+
129
+ Syllable structure often interacts with stress or pitch accent. In Latin, for example, stress is regularly determined by syllable weight, a syllable counting as heavy if it has at least one of the following:
130
+
131
+ In each case the syllable is considered to have two morae.
132
+
133
+ The first syllable of a word is the initial syllable and the last syllable is the final syllable.
134
+
135
+ In languages accented on one of the last three syllables, the last syllable is called the ultima, the next-to-last is called the penult, and the third syllable from the end is called the antepenult. These terms come from Latin ultima "last", paenultima "almost last", and antepaenultima "before almost last".
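+
+ To make the interaction of weight and position concrete, here is a minimal Python sketch (an illustration, not part of the source article) of the traditional Latin rule: stress falls on the penult if it is heavy, otherwise on the antepenult, and disyllables stress the penult. Syllable weights are supplied by the caller rather than computed from spelling.
+
+ def latin_stress(weights):
+     """weights: one boolean per syllable, True if that syllable is heavy.
+     Returns 1 for stress on the ultima, 2 for the penult, 3 for the antepenult."""
+     n = len(weights)
+     if n == 1:
+         return 1                    # a monosyllable carries the stress itself
+     if n == 2:
+         return 2                    # disyllables stress the penult
+     return 2 if weights[-2] else 3  # heavy penult is stressed, otherwise the antepenult
+
+ print(latin_stress([False, True, False]))   # 2: amicus (a-mi-cus), heavy penult
+ print(latin_stress([False, False, False]))  # 3: dominus (do-mi-nus), light penult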
136
+
137
+ In Ancient Greek, there are three accent marks (acute, circumflex, and grave), and terms were used to describe words based on the position and type of accent. Some of these terms are used in the description of other languages.
138
+
139
+ Guilhem Molinier, a member of the Consistori del Gay Saber (the first literary academy in the world, which held the Floral Games and awarded the violeta d'aur top prize to the best troubadour), gave a definition of the syllable in his Leys d'amor (1328–1337), a book aimed at regulating then-flourishing Occitan poetry:
140
+
141
+ Sillaba votz es literals.
142
+ Segon los ditz gramaticals.
143
+ En un accen pronunciada.
144
+ Et en un trag: d'una alenada.
145
+
146
+ A syllable is the sound of several letters,
147
+ According to those called grammarians,
148
+ Pronounced in one accent
149
+ And uninterruptedly: in one breath.
en/556.html.txt ADDED
@@ -0,0 +1,232 @@
1
+ A beard is the hair that grows on the chin, upper lip, cheeks and neck of humans and some non-human animals. In humans, usually only pubescent or adult males are able to grow beards. Some women with hirsutism, a hormonal condition of excessive hairiness, may develop a beard.
2
+
3
+ Throughout the course of history, societal attitudes toward male beards have varied widely depending on factors such as prevailing cultural-religious traditions and the current era's fashion trends. Some religions (such as Islam, Traditional Christianity,[vague] Orthodox Judaism[citation needed] and Sikhism) have considered a full beard to be absolutely essential for all males able to grow one and mandate it as part of their official dogma. Other cultures, even while not officially mandating it, view a beard as central to a man's virility, exemplifying such virtues as wisdom, strength, sexual prowess and high social status. In cultures where facial hair is uncommon (or currently out of fashion), beards may be associated with poor hygiene or an uncivilized, dangerous demeanor. In countries with colder climates, beards help protect the wearer's face from the elements.
4
+
5
+ The beard develops during puberty. Beard growth is linked to stimulation of hair follicles in the area by dihydrotestosterone, which continues to affect beard growth after puberty. Dihydrotestosterone also promotes balding. Dihydrotestosterone is produced from testosterone, the levels of which vary with season. Beard growth rate is also genetic.[1]
6
+
7
+ Biologists characterize beards as a secondary sexual characteristic because they are unique to one sex, yet do not play a direct role in reproduction. Charles Darwin first suggested a possible evolutionary explanation of beards in his work The Descent of Man, which hypothesized that the process of sexual selection may have led to beards.[2] Modern biologists[who?] have reaffirmed the role of sexual selection in the evolution of beards, concluding that there is evidence that a majority of women find men with beards more attractive than men without beards.[3][4][5]
8
+
9
+ Evolutionary psychology explanations for the existence of beards include signalling sexual maturity and signalling dominance by increasing the perceived size of the jaw; clean-shaven faces are rated as less dominant than bearded ones.[6] Some scholars assert that it is not yet established whether the sexual selection leading to beards is rooted in attractiveness (inter-sexual selection) or dominance (intra-sexual selection).[7] A beard can be explained as an indicator of a male's overall condition.[8] The degree of facial hairiness appears to influence male attractiveness.[9][10] The presence of a beard makes the male vulnerable in hand-to-hand fights (it provides an easy way to grab and hold the opponent's head), which is costly, so biologists have speculated that there must be other evolutionary benefits that outweigh that drawback.[11] Excess testosterone evidenced by the beard may indicate mild immunosuppression, which may support spermatogenesis.[12][13]
10
+
11
+ The Phoenicians, the ancient Semitic civilization situated on the western, coastal part of the Fertile Crescent and centered on the coastline of modern Lebanon, gave great attention to the hair and beard. The beard mostly bears a strong resemblance to that affected by the Assyrians, familiar to us from their sculptures. It is arranged in three, four, or five rows of small tight curls, and extends from ear to ear around the cheeks and chin. Sometimes, however, in lieu of the many rows, we find one row only, the beard falling in tresses, which are curled at the extremity. There is no indication of the Phoenicians having cultivated mustachios.[14]
12
+
13
+ Israelite society placed a special importance on the beard. Many religious male figures are recorded to have had facial hair; for example, numerous prophets mentioned in the Tanakh were known to grow beards. The Torah forbids certain shaving practices altogether, in particular Leviticus 19:27 states, "You shall not round off the side-growth on your head, or destroy the side-growth of your beard."[15] The Mishnah interprets this as a prohibition on using a razor on the beard.[16] This prohibition is further expanded upon in kabbalistic literature.[17] The prohibition carries to modern Judaism to this day, with rabbinic opinion forbidding the use of a razor to shave between the "five corners of the beard" — although there is no uniform consensus on where these five vertices are located.
14
+
15
+ According to biblical scholars, the shaving of hair, particularly of the corners of the beard, was originally a mourning custom;[18] the behaviour appears, from the Book of Jeremiah, to also have been practiced by other Semitic tribes,[19][20][21] although some ancient manuscripts of the text read live in remote places rather than clip the corners of their hair. Biblical scholars think that the regulations against shaving hair may be an attack on the practice of offering hair to the dead, which was performed in the belief that it would obtain protection in sheol.[22] The prohibition may also have been an attempt to distinguish the appearance of Israelites from that of the surrounding nations, and likewise reduce the influence of foreign religions;[23] The Hittites and Elamites were clean-shaven, and the Sumerians were also frequently without a beard;[24] conversely, the Egyptians and Libyans shaved the beard into very stylised elongated goatees.[24] Maimonides criticises the shaving of the beard as being the custom of idolatrous priests.[25]
16
+
17
+ Mesopotamian civilizations (Sumerian, Assyrians, Babylonians, Chaldeans and Medians) devoted great care to oiling and dressing their beards, using tongs and curling irons to create elaborate ringlets and tiered patterns.
18
+
19
+ The highest ranking Ancient Egyptians grew hair on their chins which was often dyed or hennaed (reddish brown) and sometimes plaited with interwoven gold thread. A metal false beard, or postiche, which was a sign of sovereignty, was worn by queens and kings. This was held in place by a ribbon tied over the head and attached to a gold chin strap, a fashion existing from about 3000 to 1580 BC.
20
+
21
+ Ancient Indian warriors with various types of beards, circa 480 BCE.
22
+
23
+ Chhatrapati Shivaji of the Maratha Empire with a trimmed beard.
24
+
25
+ Maharaja Ranjit Singh of the Sikh Empire with a long beard.
26
+
27
+ Indian warrior Kunwar Singh of the Indian Rebellion of 1857 with a standard beard.
28
+
29
+ In ancient India, the beard was allowed to grow long, a symbol of dignity and of wisdom (cf. sadhu). The nations in the east generally treated their beards with great care and veneration, and the punishment for licentiousness and adultery was to have the beard of the offending parties publicly cut off. They had such a sacred regard for the preservation of their beards that a man might pledge it for the payment of a debt.
30
+
31
+ Confucius held that the human body was a gift from one's parents to which no alterations should be made. Aside from abstaining from body modifications such as tattoos, Confucians were also discouraged from cutting their hair, fingernails or beards. To what extent people could actually comply with this ideal depended on their profession; farmers or soldiers could probably not grow long beards as it would have interfered with their work.
32
+
33
+ Only a certain percentage of East Asian men are capable of growing a full beard. Some can grow facial hair only in a very specific pattern, in which hair grows above the lip, below the lip and on the chin, with no growth on the cheeks or jaw; others can grow facial hair in some combination of the two.
34
+
35
+ This growth pattern can be seen on the clay soldiers in the Terracotta Army.
36
+
37
+ Close-up of one of the Lamassu beard reliefs in the Gate of All Nations in Persepolis (southern Iran).
38
+
39
+ Fath-Ali Shah, the second Qajar Shah of Persia, had a long beard.
40
+
41
+ The Iranians were fond of long beards, and almost all the Iranian kings had a beard. In Travels by Adam Olearius, a King of Iran commands his steward's head to be cut off, and on its being brought to him, remarks, "what a pity it was, that a man possessing such fine mustachios, should have been executed." Men in the Achaemenid era wore long beards, with warriors adorning theirs with jewelry. Men also commonly wore beards during the Safavid and Qajar eras.
42
+
43
+ The ancient Greeks regarded the beard as a badge or sign of virility; in the Homeric epics it had almost sanctified significance, so that a common form of entreaty was to touch the beard of the person addressed.[26] It was only shaven as a sign of mourning, though in this case it was instead often left untrimmed. A smooth face was regarded as a sign of effeminacy.[27] The Spartans punished cowards by shaving off a portion of their beards. From the earliest times, however, the shaving of the upper lip was not uncommon. Greek beards were also frequently curled with tongs.
44
+
45
+ In the time of Alexander the Great the custom of smooth shaving was introduced.[28] Alexander strongly promoted shaving during his reign because he believed it looked tidier.[29] Reportedly, Alexander ordered his soldiers to be clean-shaven, fearing that their beards would serve as handles for their enemies to grab and to hold the soldier as he was killed. The practice of shaving spread from the Macedonians, whose kings are represented on coins, etc. with smooth faces, throughout the whole known world of the Macedonian Empire. Laws were passed against it, without effect, at Rhodes and Byzantium; and even Aristotle conformed to the new custom,[30] unlike the other philosophers, who retained the beard as a badge of their profession. A man with a beard after the Macedonian period implied a philosopher,[31] and there are many allusions to this custom of the later philosophers in such proverbs as: "The beard does not make the sage."[32]
46
+
47
+ Shaving seems to have not been known to the Romans during their early history (under the kings of Rome and the early Republic). Pliny tells us that P. Ticinius was the first who brought a barber to Rome, which was in the 454th year from the founding of the city (that is, around 299 BC). Scipio Africanus (236–183 BC) was apparently the first among the Romans who shaved his beard. However, after that point, shaving seems to have caught on very quickly, and soon almost all Roman men were clean-shaven; being clean-shaven became a sign of being Roman and not Greek. Only in the later times of the Republic did the Roman youth begin shaving their beards only partially, trimming it into an ornamental form; prepubescent boys oiled their chins in hopes of forcing premature growth of a beard.[33]
48
+
49
+ Still, beards remained rare among the Romans throughout the Late Republic and the early Principate. In a general way, in Rome at this time, a long beard was considered a mark of slovenliness and squalor. The censors L. Veturius and P. Licinius compelled M. Livius, who had been banished, on his restoration to the city, to be shaved, to lay aside his dirty appearance, and then, but not until then, to come into the Senate.[34] The first occasion of shaving was regarded as the beginning of manhood, and the day on which this took place was celebrated as a festival.[35] Usually, this was done when the young Roman assumed the toga virilis. Augustus did it in his twenty-fourth year, Caligula in his twentieth. The hair cut off on such occasions was consecrated to a god. Thus Nero put his into a golden box set with pearls, and dedicated it to Jupiter Capitolinus.[36] The Romans, unlike the Greeks, let their beards grow in time of mourning; so did Augustus for the death of Julius Caesar.[37] Other occasions of mourning on which the beard was allowed to grow were, appearance as a reus, condemnation, or some public calamity. On the other hand, men of the country areas around Rome in the time of Varro seem not to have shaved except when they came to market every eighth day, so that their usual appearance was most likely a short stubble.[38]
50
+
51
+ In the second century AD the Emperor Hadrian, according to Dio Cassius, was the first of all the Caesars to grow a full beard; Plutarch says that he did it to hide scars on his face. This was a period in Rome of widespread imitation of Greek culture, and many other men grew beards in imitation of Hadrian and the Greek fashion. Until the time of Constantine the Great the emperors appear in busts and coins with beards; but Constantine and his successors until the reign of Phocas, with the exception of Julian the Apostate, are represented as beardless.
52
+
53
+ Late Hellenistic sculptures of Celts[39] portray them with long hair and mustaches but beardless. Caesar reported the Britons wore no beard except upon the upper lip.
54
+
55
+ The Anglo-Saxons on arrival in Britain wore beards and continued to do so for a considerable time after.[40]
56
+
57
+ Among the Gaelic Celts of Scotland and Ireland, men typically let their facial hair grow into a full beard, and it was often seen as dishonourable for a Gaelic man to have no facial hair.[41][42][43]
58
+
59
+ Tacitus states that among the Catti, a Germanic tribe (perhaps the Chatten), a young man was not allowed to shave or cut his hair until he had slain an enemy. The Lombards derived their name from the great length of their beards (Longobards – Long Beards). When Otto the Great said anything serious, he swore by his beard, which covered his breast.
60
+
61
+ In Medieval Europe, a beard displayed a knight's virility and honour.
62
+ The Castilian knight El Cid is described in The Lay of the Cid as "the one with the flowery beard".
63
+ Holding somebody else's beard was a serious offence that had to be righted in a duel.
64
+
65
+ While most noblemen and knights were bearded, the Catholic clergy were generally required to be clean-shaven. This was understood as a symbol of their celibacy.
66
+
67
+ In pre-Islamic Arabia men would apparently keep mustaches but shave the hair on their chins[citation needed]. The prophet Muhammad encouraged his followers to do the opposite, long chin hair but trimmed mustaches, to signify their break with the old religion. This style of beard subsequently spread along with Islam during the Muslim expansion in the Middle Ages.
68
+
69
+ Friedrich Engels exhibiting a full moustache and beard that was a common style among Europeans of the 19th century.
70
+
71
+ Johann Strauss II with a large beard, moustache, and sideburns.
72
+
73
+ Maryland Governor Thomas Swann with a long goatee. Such beards were common around the time of the American Civil War.
74
+
75
+ Emperor Meiji of Japan wore a full beard and moustache during most of his reign.
76
+
77
+ Johannes Brahms with a large beard and moustache.
78
+
79
+ Walt Whitman with a large beard and moustache.
80
+
81
+ English cricketer W. G. Grace with his trademark beard.
82
+
83
+ Cuban revolutionaries Che Guevara (left) and Fidel Castro (right) with a full beard.
84
+
85
+ The Ned Kelly beard was named after the bushranger, Ned Kelly.
86
+
87
+ Most Chinese emperors of the Ming dynasty (1368-1644) appear with beards or mustaches in portraits.
88
+
89
+ In the 15th century, most European men were clean-shaven. 16th-century beards were allowed to grow to an amazing length (see the portraits of John Knox, Bishop Gardiner, Cardinal Pole and Thomas Cranmer). Some beards of this time were the Spanish spade beard, the English square cut beard, the forked beard, and the stiletto beard. In 1587 Francis Drake claimed, in a figure of speech, to have singed the King of Spain's beard.
90
+
91
+ During the Chinese Qing dynasty (1644-1911), the ruling Manchu minority were either clean-shaven or at most wore mustaches, in contrast to the Han majority who still wore beards in keeping with the Confucian ideal.
92
+
93
+ In the beginning of the 17th century, the size of beards decreased in urban circles of Western Europe. In the second half of the century, being clean-shaven gradually became more common again, so much so that in 1698, Peter the Great of Russia ordered men to shave off their beards, and in 1705 levied a tax on beards in order to bring Russian society more in line with contemporary Western Europe.[44]
94
+
95
+ During the early 19th century most men, particularly amongst the nobility and upper classes, went clean-shaven. There was, however, a dramatic shift in the beard's popularity during the 1850s, with it becoming markedly more popular.[45] Consequently, beards were adopted by many leaders, such as Alexander III of Russia, Napoleon III of France and Frederick III of Germany, as well as many leading statesmen and cultural figures, such as Benjamin Disraeli, Charles Dickens, Giuseppe Garibaldi, Karl Marx, and Giuseppe Verdi. This trend can be recognised in the United States of America, where the shift can be seen amongst the post-Civil War presidents. Before Abraham Lincoln, no President had a beard;[46] after Lincoln until Woodrow Wilson, every President except Andrew Johnson and William McKinley had either a beard or a moustache.
96
+
97
+ The beard became linked in this period with notions of masculinity and male courage.[45] The resulting popularity has contributed to the stereotypical Victorian male figure in the popular mind, the stern figure clothed in black whose gravitas is added to by a heavy beard.
98
+
99
+ In China, the revolution of 1911 and subsequent May Fourth Movement of 1919 led the Chinese to idealize the West as more modern and progressive than themselves. This included the realm of fashion, and Chinese men began shaving their faces and cutting their hair short.
100
+
101
+ By the early-twentieth century, beards began a slow decline in popularity. Although retained by some prominent figures who were young men in the Victorian period (like Sigmund Freud), most men who retained facial hair during the 1920s and 1930s limited themselves to a moustache or a goatee (such as with Marcel Proust, Albert Einstein, Vladimir Lenin, Leon Trotsky, Adolf Hitler, and Joseph Stalin). In the United States, meanwhile, popular movies portrayed heroes with clean-shaven faces and "crew cuts". Concurrently, the psychological mass marketing of Edward Bernays and Madison Avenue was becoming prevalent. The Gillette Safety Razor Company was one of these marketers' early clients. These events conspired to popularize short hair and clean-shaven faces as the only acceptable style for decades to come. The few men who wore the beard or portions of the beard during this period were usually either old, Central European, members of a religious sect that required it, or in academia.
102
+
103
+ The beard was reintroduced to mainstream society by the counterculture, firstly with the "beatniks" in the 1950s, and then with the hippie movement of the mid-1960s. Following the Vietnam War, beards exploded in popularity. In the mid-late 1960s and throughout the 1970s, beards were worn by hippies and businessmen alike. Popular musicians like The Beatles, Barry White, The Beach Boys, Jim Morrison (lead singer of The Doors) and the male members of Peter, Paul, and Mary, among many others, wore full beards. The trend of seemingly ubiquitous beards in American culture subsided in the mid-1980s.
104
+
105
+ By the end of the 20th century, the closely clipped Verdi beard, often with a matching integrated moustache, had become relatively common. From the 1990s onward, fashion in the United States has generally trended toward either a goatee, Van Dyke, or a closely cropped full beard undercut on the throat. By 2010, the fashionable length approached a "two-day shadow".[47] The 2010s decade also saw the full beard become fashionable again amongst young men and a huge increase in the sales of male grooming products.[48]
106
+
107
+ One stratum of American society where facial hair has long been rare is government and politics. The last President of the United States to wear any type of facial hair was William Howard Taft, who was in office from 1909 to 1913,[49][50] and the last Vice President to do so was Charles Curtis, in office from 1929 to 1933; both wore moustaches. The last President of the United States to wear a beard was Benjamin Harrison, who was in office from 1889 to 1893. The last member of the United States Supreme Court with a full beard was Chief Justice Charles Evans Hughes, who served on the Court until 1941. Since 2015 a growing number of male political figures have worn beards in office, including Speaker of the House Paul Ryan and Senators Ted Cruz and Tom Cotton.
108
+
109
+ Beards also play an important role in some religions.
110
+
111
+ In Greek mythology and art, Zeus and Poseidon are always portrayed with beards, but Apollo never is. A bearded Hermes was replaced with the more familiar beardless youth in the 5th century BC. Zoroaster, the 11th/10th century BC era founder of Zoroastrianism is almost always depicted with a beard.
112
+ In Norse mythology, Thor the god of thunder is portrayed wearing a red beard.
113
+
114
+ Basilios Bessarion's beard contributed to his defeat in the papal conclave of 1455.[51]
115
+
116
+ Pope Paul III
117
+
118
+ Thomas Cranmer, Archbishop of Canterbury and architect of the English Reformation, wore a long beard in his later years.
119
+
120
+ Thomas Bramwell Welch was a Methodist minister.
121
+
122
+ Roman Catholic Capuchin friar, Blessed Solanus Casey (1870-1957).
123
+
124
+ Russian Orthodox Archbishop Saint Luka (Voyno-Yasenetsky) (1877-1961).
125
+
126
+ An Amish man with a Shenandoah beard.
127
+
128
+ Iconography and art dating from the 4th century onward almost always portray Jesus with a beard. In paintings and statues most of the Old Testament Biblical characters such as Moses and Abraham and Jesus' New Testament disciples such as St Peter appear with beards, as does John the Baptist. However, Western European art generally depicts John the Apostle as clean-shaven, to emphasize his relative youth. Eight of the figures portrayed in the painting entitled The Last Supper by Leonardo da Vinci are bearded. Mainstream Christianity holds Isaiah Chapter 50: Verse 6 as a prophecy of Christ's crucifixion, and as such, as a description of Christ having his beard plucked by his tormentors.
129
+
130
+ In Eastern Christianity, members of the priesthood and monastics often wear beards, and religious authorities at times have recommended or required beards for all male believers.[52]
131
+
132
+ Traditionally, Syrian Christians from Kerala wear long beards. Some view it as a necessity for men in the Malayali Syrian Christian community because icons of Christ and the saints with beards were depicted from the 3rd century AD. Syrian Christian priests and monastics are obliged to wear beards.[citation needed]
133
+
134
+ In the 1160s Burchardus, abbot of the Cistercian monastery of Bellevaux in the Franche-Comté, wrote a treatise on beards.[53] He regarded beards as appropriate for lay brothers, but not for the priests among the monks.
135
+
136
+ At various times in its history and depending on various circumstances, the Catholic Church in the West permitted or prohibited facial hair (barbae nutritio – literally meaning "nourishing a beard") for clergy.[54] A decree of the beginning of the 6th century in either Carthage or the south of Gaul forbade clerics to let their hair and beards grow freely. The phrase "nourishing a beard" was interpreted in different ways, either as imposing a clean-shaven face or only excluding a too-lengthy beard. In relatively modern times, the first pope to wear a beard was Pope Julius II, who in 1511–12 did so for a while as a sign of mourning for the loss of the city of Bologna. Pope Clement VII let his beard grow at the time of the Sack of Rome (1527) and kept it. All his successors did so until the death in 1700 of Pope Innocent XII. Since then, no pope has worn a beard. Most Latin-rite clergy are now clean-shaven, but Capuchins and some others are bearded. Present Canon law is silent on the matter.[55]
137
+
138
+ Although most Protestant Christians regard the beard as a matter of choice, some have taken the lead in fashion by openly encouraging its growth as "a habit most natural, scriptural, manly, and beneficial" (C. H. Spurgeon).[56] Amish and Hutterite men shave until they marry, then grow a beard and are never thereafter without one, although it is a particular form of a beard (see Visual markers of marital status). Some Messianic Jews also wear beards to show their observance of the Old Testament.[citation needed]
139
+
140
+ Diarmaid MacCulloch, professor of history of the Church at University of Oxford, writes: "There is no doubt that Cranmer mourned the dead king (Henry VIII)",[57] and it was said that he showed his grief by growing a beard. However, MacCulloch also states that during the Reformation Era, many Protestant Reformers decided to grow their beards in order to emphasize their break with the Catholic tradition:
141
+
142
+ "it was a break from the past for a clergyman to abandon his clean-shaven appearance which was the norm for late medieval priesthood; with Luther providing a precedent [during his exile period], virtually all the continental reformers had deliberately grown beards as a mark of their rejection of the old church, and the significance of clerical beards as an aggressive anti-Catholic gesture was well recognised in mid-Tudor England."
143
+
144
+ Since the mid-twentieth century, The Church of Jesus Christ of Latter-day Saints (LDS Church) has encouraged men to be clean-shaven,[58] particularly those that serve in ecclesiastical leadership positions.[59] The church's encouragement of men's shaving has no theological basis, but stems from the general waning of facial hair's popularity in Western society during the twentieth century and its association with the hippie and drug culture aspects of the counterculture of the 1960s,[60] and has not been a permanent rule.[58]
145
+
146
+ After Joseph Smith, many of the early presidents of the LDS Church, such as Brigham Young and Lorenzo Snow, wore large beards. Since David O. McKay became church president in 1951, most LDS Church leaders have been clean-shaven. The church maintains no formal policy on facial hair for its general membership.[61] However, formal prohibitions against facial hair are currently enforced for young men providing two-year missionary service.[62] Students and staff of the church-sponsored higher education institutions, such as Brigham Young University (BYU), are required to adhere to the Church Educational System Honor Code,[63] which states in part: "Men are expected to be clean-shaven; beards are not acceptable", although male BYU students are permitted to wear a neatly groomed moustache.[60][64] A beard exemption is granted for "serious skin conditions",[65] and for approved theatrical performances, but until 2015 no exemption was given for any other reason, including religious convictions.[66] In January 2015, BYU clarified that students who want a beard for religious reasons, like Muslims or Sikhs, may be granted permission after applying for an exemption.[67][68][69][70]
147
+
148
+ BYU students led a campaign to loosen the beard restrictions in 2014,[60][71][72][73][74] but it had the opposite effect at Church Educational System schools: some who had previously been granted beard exemptions were found no longer to qualify, and for a brief period the LDS Business College required students with a registered exemption to wear a "beard badge", which was likened to a "badge of shame". Some students also joined in shaming their fellow beard-wearing students, even those with registered exemptions.[75]
149
+
150
+ The ancient Hindu texts regarding beards depend on the Deva and other teachings, varying according to whom the devotee worships or follows. Many Sadhus, Yogis, or Yoga practitioners keep beards, and represent all situations of life. Shaivite ascetics generally have beards, as they are not permitted to own anything, which would include a razor. The beard is also a sign of a nomadic and ascetic lifestyle.
151
+
152
+ Vaishnava men, typically of the ISKCON sect, are often clean-shaven as a sign of cleanliness.
153
+
154
+ In Sunni Islam, allowing the beard (Lihyah in Arabic) to grow and trimming the moustache is ruled as mandatory according to the sunnah by scholarly consensus,[76] and is considered part of the fitra[77] (i.e., the way man was created).
155
+
156
+ Sahih Bukhari, Book 72, Hadith #781 Narrated by Ibn 'Umar: Allah's Apostle said, "Cut the moustaches short and leave the beard (as it is)."[78]
157
+
158
+ Ibn Hazm reported that there was scholarly consensus that it is an obligation to trim the moustache and let the beard grow. He quoted a number of hadiths as evidence, including the hadith of Ibn 'Umar quoted above, and the hadith of Zayd ibn Arqam in which Mohammed said: "Whoever does not remove any of his moustache is not one of us."[76] Ibn Hazm said in al-Furoo': "This is the way of our colleagues [i.e., group of scholars]."[76]
159
+
160
+ The extent of the beard is from the cheekbones, level with the channel of the ears, until the bottom of the face. It includes the hair that grows on the cheeks. Hair on the neck is not considered a part of the beard and can be removed.[79]
161
+
162
+ In the Islamic tradition, God commanded Abraham to keep his beard, shorten his moustache, clip his nails, shave the hair around his genitals, and epilate his armpit hair.
163
+
164
+ According to the official position of the Shafi'i school of thought, shaving the beard is only makruh (disliked) but not haram (forbidden).[80][81]
165
+
166
+ According to the Hadith no man will have a beard in paradise except the Prophet Moses.[82][83][84]
167
+
168
+ According to the Twelver Shia scholars, as per Sunnah, the length of a beard should not exceed the width of a fist. Trimming of facial hair is allowed; however, shaving it is haram (religiously forbidden).[85][86][87] Ayatollahs are allowed longer facial hair.
169
+
170
+ The Hebrew Bible states in Leviticus 19:27 that "You shall not round the corners of your heads, neither shalt thou mar the corners of thy beard." Talmudic tradition explains this to mean that a man may not shave his beard with a razor with a single blade, since the cutting action of the blade against the skin "mars" the beard. Because scissors have two blades, some opinions in halakha (Jewish law) permit their use to trim the beard, as the cutting action comes from contact of the two blades and not the blade against the skin. For this reason, some poskim (Jewish legal deciders) rule that Orthodox Jews may use electric razors to remain clean-shaven, as such shavers cut by trapping the hair between the blades and the metal grating, halakhically a scissorlike action. Other poskim like Zokon Yisrael KiHilchso,[88] maintain that electric shavers constitute a razor-style action and consequently prohibit their use.
171
+
172
+ The Zohar, one of the primary sources of Kabbalah (Jewish mysticism), attributes holiness to the beard, specifying that hairs of the beard symbolize channels of subconscious holy energy that flows from above to the human soul. Therefore, most Hasidic Jews, for whom Kabbalah plays an important role in their religious practice, traditionally do not remove or even trim their beards.
173
+
174
+ Traditional Jews refrain from shaving, trimming the beard, and haircuts during certain times of the year like Passover, Sukkot, the Counting of the Omer and the Three Weeks. Cutting the hair is also restricted during the 30-day mourning period after the death of a close relative, known in Hebrew as the Shloshim (thirty).
175
+
176
+ Portrait of Sikh Guru Gobind Singh, escorted by Sikhs with beards.
177
+
178
+ A Sikh man with a full beard next to the Golden Temple at Amritsar, India.
179
+
180
+ Soldiers of the Indian Army's Sikh Light Infantry regiment with various beards.
181
+
182
+ Guru Gobind Singh, the tenth Sikh Guru, commanded the Sikhs to maintain unshorn hair, recognizing it as a necessary adornment of the body by Almighty God as well as a mandatory Article of Faith. Sikhs consider the beard to be part of the nobility and dignity of their manhood. Sikhs also refrain from cutting their hair and beards out of respect for the God-given form. Kesh, uncut hair, is one of the Five Ks, five compulsory articles of faith for a baptized Sikh. As such, a Sikh man is easily identified by his turban and uncut hair and beard.
183
+
184
+ Male Rastafarians wear beards in conformity with injunctions given in the Bible, such as Leviticus 21:5, which reads "They shall not make any baldness on their heads, nor shave off the edges of their beards, nor make any cuts in their flesh." The beard is a symbol of the covenant between God (Jah or Jehovah in Rastafari usage) and his people.
185
+
186
+ In Greco-Roman antiquity the beard was "seen as the defining characteristic of the philosopher; philosophers had to have beards, and anyone with a beard was assumed to be a philosopher."[89] While one may be tempted to think that Socrates and Plato sported "philosopher's beards", such is not the case. Shaving was not widespread in Athens during the fifth and fourth centuries BCE, and so they would not have been distinguished from the general populace for having a beard. The popularity of shaving did not rise in the region until the example of Alexander the Great near the end of the fourth century BCE, and it did not spread to Rome until the end of the third century BCE following its acceptance by Scipio Africanus. In Rome shaving's popularity grew to the point that for a respectable Roman citizen it was seen almost as compulsory.
187
+
188
+ The idea of the philosopher's beard gained traction when in 155 BCE three philosophers arrived in Rome as Greek diplomats: Carneades, head of the Platonic Academy; Critolaus of Aristotle's Lyceum; and the head of the Stoics, Diogenes of Babylon. "In contrast to their beautifully clean-shaven Italian audience, these three intellectuals all sported magnificent beards."[90] Thus the connection of beards and philosophy caught hold of the Roman public imagination.
189
+
190
+ The importance of the beard to Roman philosophers is best seen by the extreme value that the Stoic philosopher Epictetus placed on it. As historian John Sellars puts it, Epictetus "affirmed the philosopher's beard as something almost sacred...to express the idea that philosophy is no mere intellectual hobby but rather a way of life that, by definition, transforms every aspect of one's behavior, including one's shaving habits. If someone continues to shave in order to look the part of a respectable Roman citizen, it is clear that they have not yet embraced philosophy conceived as a way of life and have not yet escaped the social customs of the majority...the true philosopher will only act according to reason or according to nature, rejecting the arbitrary conventions that guide the behavior of everyone else."[90]
191
+
192
+ Epictetus saw his beard as an integral part of his identity and held that he would rather be executed than submit to any force demanding he remove it. In his Discourses 1.2.29, he puts forward such a hypothetical confrontation: "'Come now, Epictetus, shave your beard'. If I am a philosopher, I answer, I will not shave it off. 'Then I will have you beheaded'. If it will do you any good, behead me."[90] The act of shaving "would be to compromise his philosophical ideal of living in accordance with nature and it would be to submit to the unjustified authority of another."[90]
193
+
194
+ This was not theoretical in the age of Epictetus, for the Emperor Domitian had the hair and beard forcibly shaven off of the philosopher Apollonius of Tyana "as punishment for anti-State activities."[90] This disgraced Apollonius while avoiding making him a martyr like Socrates. Well before his declaration of "death before shaving" Epictetus had been forced to flee Rome when Domitian banished all philosophers from Italy under threat of execution.
195
+
196
+ Roman philosophers sported different styles of beards to distinguish which school they belonged to. Cynics wore long, dirty beards to indicate their "strict indifference to all external goods and social customs";[90] Stoics occasionally trimmed and washed their beards, in accordance with their view "that it is acceptable to prefer certain external goods so long as they are never valued above virtue";[90] and Peripatetics took great care of their beards, believing in accordance with Aristotle that "external goods and social status were necessary for the good life together with virtue".[90] To a Roman philosopher of this era, both having a beard and its condition indicated a commitment to live in accordance with one's philosophy.
197
+
198
+ Professional airline pilots are required to be shaven to facilitate a tight seal with auxiliary oxygen masks.[91] However, some airlines have recently lifted such bans in light of modern studies.[92] Similarly, firefighters may also be prohibited from full beards to obtain a proper seal with SCBA equipment.[93] This restriction is also fairly common in the oil & gas industry for the same reason in locations where hydrogen sulfide gas is a common danger.[citation needed] Other jobs may prohibit beards as necessary to wear masks or respirators.[94]
199
+
200
+ Isezaki city in Gunma prefecture, Japan, decided to ban beards for male municipal employees on May 19, 2010.[95]
201
+
202
+ The U.S. Court of Appeals for the Eighth Circuit has found requiring shaving to be discriminatory.[96][97]
203
+
204
+ The International Boxing Association prohibits the wearing of beards by amateur boxers, although the Amateur Boxing Association of England allows exceptions for Sikh men, on condition that the beard be covered with a fine net.[98]
205
+
206
+ The Cincinnati Reds baseball team had a longstanding enforced policy where all players had to be completely clean-shaven (no beards, long sideburns or moustaches). However, this policy was abolished following the sale of the team by Marge Schott in 1999.
207
+
208
+ Under owner George Steinbrenner, the New York Yankees baseball team had a strict appearance policy that prohibited long hair and facial hair below the lip; the regulation was continued under Hank and Hal Steinbrenner when control of the Yankees was transferred to them after the 2008 season. Willie Randolph and Joe Girardi, both former Yankee assistant coaches, adopted a similar clean-shaven policy for their ballclubs: the New York Mets and Miami Marlins, respectively. Fredi Gonzalez, who replaced Girardi as the Marlins' manager, dropped that policy when he took over after the 2006 season.
209
+
210
+ The playoff beard is a tradition common among teams in the National Hockey League, and now in other leagues, in which players allow their beards to grow from the beginning of the playoff season until the playoffs are over for their team.
211
+
212
+ In 2008, some members of the Tyrone Gaelic football team vowed not to shave until the end of the season. They went on to win the All-Ireland football championship, some of them sporting impressive beards by that stage.
213
+
214
+ Canadian Rugby Union flanker Adam Kleeberger attracted much media attention before, during, and after the 2011 Rugby World Cup in New Zealand. Kleeberger was known, alongside teammates Jebb Sinclair and Hubert Buydens as one of "the beardoes". Fans in the stands could often be seen wearing fake beards and "fear the beard" became a popular expression during the team's run in the competition. Kleeberger, who became one of Canada's star players in the tournament, later used the publicity surrounding his beard to raise awareness for two causes; Christchurch earthquake relief efforts and prostate cancer. As part of this fundraising, his beard was shaved off by television personality Rick Mercer and aired on national television. The "Fear the Beard" expression was coined by the NBA's Oklahoma City Thunder fans and is now used by Houston Rockets fans to support James Harden.
215
+
216
+ Los Angeles Dodgers relief pitcher Brian Wilson, who claims not to have shaved since the 2010 All-Star Game, has grown a big beard that has become popular in MLB and with its fans. MLB Fan Cave presented a "Journey Inside Brian Wilson's Beard", which was an interactive screenshot of Wilson's beard, where one can click on different sections to see various fictional activities performed by small "residents" of the beard. The hosts on sports shows sometimes wear replica beards, and the Giants gave them away to fans as a promo.[citation needed]
217
+
218
+ The 2013 Boston Red Sox featured at least 12 players[100] with varying degrees of facial hair, ranging from the closely trimmed beard of slugger David Ortiz to the long shaggy looks of Jonny Gomes and Mike Napoli. The Red Sox used their beards as a marketing tool, offering a Dollar Beard Night,[101] where all fans with beards (real or fake) could buy a ticket for $1.00; and also as means of fostering team camaraderie.[102]
219
+
220
+ Beards have also become a source of competition between athletes. Examples of athlete "beard-offs" include NBA players DeShawn Stevenson and Drew Gooden in 2008,[103] and WWE wrestler Daniel Bryan and Oakland Athletics outfielder Josh Reddick in 2013.[104]
221
+
222
+ Depending on the country and period, facial hair was either prohibited in the army or an integral part of the uniform.
223
+
224
+ Beard hair is most commonly removed by shaving or by trimming with the use of a beard trimmer. If only the area above the upper lip is left unshaven, the resulting facial hairstyle is known as a moustache; if hair is left only on the chin, the style is a goatee.
225
+
226
+ For appearance and cleanliness, some people maintain their beards by exfoliating the skin, using soap or shampoo and sometimes conditioner, and afterward applying oils for softness.
227
+
228
+ The term "beard" is also used for a collection of stiff, hairlike feathers on the centre of the breast of turkeys. Normally, the turkey's beard remains flat and may be hidden under other feathers, but when the bird is displaying, the beard becomes erect and protrudes several centimetres from the breast.
229
+
230
+ Many goats possess a beard. The male sometimes urinates on his own beard as a marking behaviour during rutting.[citation needed]
231
+
232
+ Several animals are termed "bearded" as part of their common name. Sometimes a beard of hair on the chin or face is prominent but for some others, "beard" may refer to a pattern or colouring of the pelage reminiscent of a beard.
en/5560.html.txt ADDED
@@ -0,0 +1,182 @@
1
+
2
+
3
+
4
+
5
+ The periodic table, also known as the periodic table of elements, is a tabular display of the chemical elements, which are arranged by atomic number, electron configuration, and recurring chemical properties. The structure of the table shows periodic trends. The seven rows of the table, called periods, generally have metals on the left and nonmetals on the right. The columns, called groups, contain elements with similar chemical behaviours. Six groups have accepted names as well as assigned numbers: for example, group 17 elements are the halogens; and group 18 are the noble gases. Also displayed are four simple rectangular areas or blocks associated with the filling of different atomic orbitals.
6
+
7
+ The elements from atomic numbers 1 (hydrogen) through 118 (oganesson) have all been discovered or synthesized, completing seven full rows of the periodic table.[1][2] The first 94 elements, hydrogen through plutonium, all occur naturally, though some are found only in trace amounts and a few were discovered in nature only after having first been synthesized.[n 1] Elements 95 to 118 have only been synthesized in laboratories, nuclear reactors, or nuclear explosions.[3] The synthesis of elements having higher atomic numbers is currently being pursued: these elements would begin an eighth row, and theoretical work has been done to suggest possible candidates for this extension. Numerous synthetic radioisotopes of naturally occurring elements have also been produced in laboratories.
8
+
9
+ The organization of the periodic table can be used to derive relationships between the various element properties, and also to predict chemical properties and behaviours of undiscovered or newly synthesized elements. Russian chemist Dmitri Mendeleev published the first recognizable periodic table in 1869, developed mainly to illustrate periodic trends of the then-known elements. He also predicted some properties of unidentified elements that were expected to fill gaps within the table. Most of his forecasts proved to be correct. Mendeleev's idea has been slowly expanded and refined with the discovery or synthesis of further new elements and the development of new theoretical models to explain chemical behaviour. The modern periodic table now provides a useful framework for analyzing chemical reactions, and continues to be widely used in chemistry, nuclear physics and other sciences. Some discussion remains ongoing regarding the placement and categorisation of specific elements, the future extension and limits of the table, and whether there is an optimal form of the table.
10
+
11
+
12
+
13
+ In the table graphic, the colour of the atomic number shows the element's state of matter at 0 °C and 1 atm: 1 (red) = gas, 3 (black) = solid, 80 (green) = liquid, 109 (gray) = unknown.
14
+
15
+ Background color shows subcategory in the metal–metalloid–nonmetal trend:
16
+
17
+ Each chemical element has a unique atomic number (Z) representing the number of protons in its nucleus.[n 2] Most elements have differing numbers of neutrons among different atoms, with these variants being referred to as isotopes. For example, carbon has three naturally occurring isotopes: all of its atoms have six protons and most have six neutrons as well, but about one per cent have seven neutrons, and a very small fraction have eight neutrons. Isotopes are never separated in the periodic table; they are always grouped together under a single element. Elements with no stable isotopes have the atomic masses of their most stable isotopes, where such masses are shown, listed in parentheses.[7]
18
+
19
+ In the standard periodic table, the elements are listed in order of increasing atomic number Z. A new row (period) is started when a new electron shell has its first electron. Columns (groups) are determined by the electron configuration of the atom; elements with the same number of electrons in a particular subshell fall into the same columns (e.g. oxygen and selenium are in the same column because they both have four electrons in the outermost p-subshell). Elements with similar chemical properties generally fall into the same group in the periodic table, although in the f-block, and to some extent in the d-block, the elements in the same period tend to have similar properties as well. Thus, it is relatively easy to predict the chemical properties of an element if one knows the properties of the elements around it.[8]
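+
+ Because a new period begins each time a new electron shell receives its first electron, an element's period can be read off from the atomic numbers of the noble gases that close each row (2, 10, 18, 36, 54, 86 and 118). The short Python sketch below is only an illustration of that correspondence, not a description of any particular software.
+
+ import bisect
+
+ # Atomic numbers of the elements that close periods 1-7 (He, Ne, Ar, Kr, Xe, Rn, Og).
+ PERIOD_ENDS = [2, 10, 18, 36, 54, 86, 118]
+
+ def period_of(z):
+     """Return the period (row) of the element with atomic number z (1 <= z <= 118)."""
+     if not 1 <= z <= 118:
+         raise ValueError("no confirmed element with this atomic number")
+     return bisect.bisect_left(PERIOD_ENDS, z) + 1
+
+ print(period_of(8))    # 2: oxygen is in period 2
+ print(period_of(26))   # 4: iron is in period 4
+ print(period_of(118))  # 7: oganesson closes period 7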
20
+
21
+ Since 2016, the periodic table has 118 confirmed elements, from element 1 (hydrogen) to 118 (oganesson). Elements 113, 115, 117 and 118, the most recent discoveries, were officially confirmed by the International Union of Pure and Applied Chemistry (IUPAC) in December 2015. Their proposed names, nihonium (Nh), moscovium (Mc), tennessine (Ts) and oganesson (Og) respectively, were made official in November 2016 by IUPAC.[9][10][11][12]
22
+
23
+ The first 94 elements occur naturally; the remaining 24, americium to oganesson (95–118), occur only when synthesized in laboratories. Of the 94 naturally occurring elements, 83 are primordial and 11 occur only in decay chains of primordial elements.[3] No element heavier than einsteinium (element 99) has ever been observed in macroscopic quantities in its pure form, nor has astatine (element 85); francium (element 87) has been only photographed in the form of light emitted from microscopic quantities (300,000 atoms).[13]
24
+
25
+ A group or family is a vertical column in the periodic table. Groups usually have more significant periodic trends than periods and blocks, explained below. Modern quantum mechanical theories of atomic structure explain group trends by proposing that elements within the same group generally have the same electron configurations in their valence shell.[14] Consequently, elements in the same group tend to have a shared chemistry and exhibit a clear trend in properties with increasing atomic number.[15] In some parts of the periodic table, such as the d-block and the f-block, horizontal similarities can be as important as, or more pronounced than, vertical similarities.[16][17][18]
26
+
27
+ Under an international naming convention, the groups are numbered numerically from 1 to 18 from the leftmost column (the alkali metals) to the rightmost column (the noble gases).[19] Previously, they were known by roman numerals. In America, the roman numerals were followed by either an "A" if the group was in the s- or p-block, or a "B" if the group was in the d-block. The roman numerals used correspond to the last digit of today's naming convention (e.g. the group 4 elements were group IVB, and the group 14 elements were group IVA). In Europe, the lettering was similar, except that "A" was used if the group was before group 10, and "B" was used for groups including and after group 10. In addition, groups 8, 9 and 10 used to be treated as one triple-sized group, known collectively in both notations as group VIII. In 1988, the new IUPAC naming system was put into use, and the old group names were deprecated.[20]
28
+
29
+ Some of these groups have been given trivial (unsystematic) names, as seen in the table below, although some are rarely used. Groups 3–10 have no trivial names and are referred to simply by their group numbers or by the name of the first member of their group (such as "the scandium group" for group 3),[19] since they display fewer similarities and/or vertical trends.
30
+
31
+ Elements in the same group tend to show patterns in atomic radius, ionization energy, and electronegativity. From top to bottom in a group, the atomic radii of the elements increase. Since there are more filled energy levels, valence electrons are found farther from the nucleus. From the top, each successive element has a lower ionization energy because it is easier to remove an electron since the atoms are less tightly bound. Similarly, a group has a top-to-bottom decrease in electronegativity due to an increasing distance between valence electrons and the nucleus.[21] There are exceptions to these trends: for example, in group 11, electronegativity increases farther down the group.[22]
32
+
33
+ A period is a horizontal row in the periodic table. Although groups generally have more significant periodic trends, there are regions where horizontal trends are more significant than vertical group trends, such as the f-block, where the lanthanides and actinides form two substantial horizontal series of elements.[24]
34
+
35
+ Elements in the same period show trends in atomic radius, ionization energy, electron affinity, and electronegativity. Moving left to right across a period, atomic radius usually decreases. This occurs because each successive element has an added proton and electron, which draws the electrons closer to the nucleus.[25] This decrease in atomic radius also causes the ionization energy to increase when moving from left to right across a period: the more tightly bound an element's electrons are, the more energy is required to remove one. Electronegativity increases in the same manner as ionization energy because of the pull exerted on the electrons by the nucleus.[21] Electron affinity also shows a slight trend across a period. Metals (left side of a period) generally have a lower electron affinity than nonmetals (right side of a period), with the exception of the noble gases.[26]
36
+
37
+ Specific regions of the periodic table can be referred to as blocks in recognition of the sequence in which the electron shells of the elements are filled. Elements are assigned to blocks by what orbitals their valence electrons or vacancies lie in.[27] The s-block comprises the first two groups (alkali metals and alkaline earth metals) as well as hydrogen and helium. The p-block comprises the last six groups, which are groups 13 to 18 in IUPAC group numbering (3A to 8A in American group numbering) and contains, among other elements, all of the metalloids. The d-block comprises groups 3 to 12 (or 3B to 2B in American group numbering) and contains all of the transition metals. The f-block, often offset below the rest of the periodic table, has no group numbers and comprises most of the lanthanides and actinides. A hypothetical g-block is expected to begin around element 121, a few elements away from what is currently known.[28]
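As a rough illustration of the block assignments just described, the short Python sketch below maps an IUPAC group number to its block. The special-casing of helium and the error raised for f-block elements follow the paragraph above; the function name and signature are illustrative assumptions rather than any standard API.

```python
def block_for(group: int, element: str = "") -> str:
    """Rough block assignment by IUPAC group number, per the description above.

    Helium sits in group 18 but its valence electrons are in an s orbital,
    so it is special-cased; the f-block elements carry no group number.
    """
    if element == "He":
        return "s"
    if group in (1, 2):
        return "s"        # alkali metals, alkaline earth metals (and hydrogen)
    if 3 <= group <= 12:
        return "d"        # the transition metals
    if 13 <= group <= 18:
        return "p"        # includes, among others, all of the metalloids
    raise ValueError("f-block elements have no group number")

print(block_for(14))          # p
print(block_for(18, "He"))    # s
```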
38
+
39
+ According to their shared physical and chemical properties, the elements can be classified into the major categories of metals, metalloids and nonmetals. Metals are generally shiny, highly conducting solids that form alloys with one another and salt-like ionic compounds with nonmetals (other than noble gases). A majority of nonmetals are coloured or colourless insulating gases; nonmetals that form compounds with other nonmetals feature covalent bonding. In between metals and nonmetals are metalloids, which have intermediate or mixed properties.[29]
40
+
41
+ Metals and nonmetals can be further classified into subcategories that show a gradation from metallic to non-metallic properties when going left to right across the rows. The metals may be subdivided into the highly reactive alkali metals, through the less reactive alkaline earth metals, lanthanides and actinides, via the archetypal transition metals, and ending in the physically and chemically weak post-transition metals. Nonmetals may be simply subdivided into the polyatomic nonmetals, which, being nearest to the metalloids, show some incipient metallic character; the essentially nonmetallic diatomic nonmetals; and the almost completely inert, monatomic noble gases. Specialized groupings such as refractory metals and noble metals, which are subsets of the transition metals, are also known[30] and occasionally denoted.[31]
42
+
43
+ Placing elements into categories and subcategories based just on shared properties is imperfect. There is a large disparity of properties within each category with notable overlaps at the boundaries, as is the case with most classification schemes.[32] Beryllium, for example, is classified as an alkaline earth metal although its amphoteric chemistry and tendency to mostly form covalent compounds are both attributes of a chemically weak or post-transition metal. Radon is classified as a nonmetallic noble gas yet has some cationic chemistry that is characteristic of metals. Other classification schemes are possible such as the division of the elements into mineralogical occurrence categories, or crystalline structures. Categorizing the elements in this fashion dates back to at least 1869 when Hinrichs[33] wrote that simple boundary lines could be placed on the periodic table to show elements having shared properties, such as metals, nonmetals, or gaseous elements.
44
+
45
+ The electron configuration or organisation of electrons orbiting neutral atoms shows a recurring pattern or periodicity. The electrons occupy a series of electron shells (numbered 1, 2, and so on). Each shell consists of one or more subshells (named s, p, d, f and g). As atomic number increases, electrons progressively fill these shells and subshells more or less according to the Madelung rule or energy ordering rule, as shown in the diagram. The electron configuration for neon, for example, is 1s2 2s2 2p6. With an atomic number of ten, neon has two electrons in the first shell, and eight electrons in the second shell; there are two electrons in the s subshell and six in the p subshell. In periodic table terms, the first time an electron occupies a new shell corresponds to the start of each new period, these positions being occupied by hydrogen and the alkali metals.[34][35]
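To make the filling order concrete, here is a minimal Python sketch of the Madelung (n + ℓ) ordering described above. It deliberately ignores the known exceptions to the rule (chromium and copper, for example) and simply fills idealized subshells in sequence; the function names are illustrative only.

```python
def madelung_order(max_n: int = 7):
    """Subshells (n, l) sorted by increasing n + l, ties broken by lower n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(z: int) -> str:
    """Idealized ground-state electron configuration for atomic number z."""
    parts, remaining = [], z
    for n, l in madelung_order():
        if remaining == 0:
            break
        capacity = 2 * (2 * l + 1)      # s holds 2, p 6, d 10, f 14 electrons
        filled = min(capacity, remaining)
        parts.append(f"{n}{'spdfg'[l]}{filled}")
        remaining -= filled
    return " ".join(parts)

print(configuration(10))   # neon: 1s2 2s2 2p6, as quoted in the text
print(configuration(26))   # iron (idealized): 1s2 2s2 2p6 3s2 3p6 4s2 3d6
```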
46
+
47
+ Since the properties of an element are mostly determined by its electron configuration, the properties of the elements likewise show recurring patterns or periodic behaviour, some examples of which are shown in the diagrams below for atomic radii, ionization energy and electron affinity. It is this periodicity of properties, manifestations of which were noticed well before the underlying theory was developed, that led to the establishment of the periodic law (the properties of the elements recur at varying intervals) and the formulation of the first periodic tables.[34][35] The periodic law may then be successively clarified as: depending on atomic weight; depending on atomic number; and depending on the total number of s, p, d, and f electrons in each atom. The cycles last 2, 6, 10, and 14 elements respectively.[36]
48
+
49
+ There is additionally an internal "double periodicity" that splits the shells in half; this arises because the first half of the electrons going into a particular type of subshell fill unoccupied orbitals, but the second half have to fill already occupied orbitals, following Hund's rule of maximum multiplicity. The second half thus suffer additional repulsion that causes the trend to split between first-half and second-half elements; this is for example evident when observing the ionisation energies of the 2p elements, in which the triads B-C-N and O-F-Ne show increases, but oxygen actually has a first ionisation energy slightly lower than that of nitrogen as it is easier to remove the extra, paired electron.[36]
50
+
51
+ Atomic radii vary in a predictable and explainable manner across the periodic table. For instance, the radii generally decrease along each period of the table, from the alkali metals to the noble gases; and increase down each group. The radius increases sharply between the noble gas at the end of each period and the alkali metal at the beginning of the next period. These trends of the atomic radii (and of various other chemical and physical properties of the elements) can be explained by the electron shell theory of the atom; they provided important evidence for the development and confirmation of quantum theory.[37]
52
+
53
+ The electrons in the 4f-subshell, which is progressively filled from lanthanum (element 57) to ytterbium (element 70),[n 4] are not particularly effective at shielding the increasing nuclear charge from the sub-shells further out. The elements immediately following the lanthanides have atomic radii that are smaller than would be expected and that are almost identical to the atomic radii of the elements immediately above them.[39] Hence lutetium has virtually the same atomic radius (and chemistry) as yttrium, hafnium has virtually the same atomic radius (and chemistry) as zirconium, and tantalum has an atomic radius similar to niobium, and so forth. This is an effect of the lanthanide contraction: a similar actinide contraction also exists. The effect of the lanthanide contraction is noticeable up to platinum (element 78), after which it is masked by a relativistic effect known as the inert pair effect.[40] The d-block contraction, which is a similar effect between the d-block and p-block, is less pronounced than the lanthanide contraction but arises from a similar cause.[39]
54
+
55
+ Such contractions exist throughout the table, but are chemically most relevant for the lanthanides with their almost constant +3 oxidation state.[41]
56
+
57
+ The first ionization energy is the energy it takes to remove one electron from an atom, the second ionization energy is the energy it takes to remove a second electron from the atom, and so on. For a given atom, successive ionization energies increase with the degree of ionization. For magnesium as an example, the first ionization energy is 738 kJ/mol and the second is 1450 kJ/mol. Electrons in the closer orbitals experience greater forces of electrostatic attraction; thus, their removal requires increasingly more energy. Ionization energy becomes greater up and to the right of the periodic table.[40]
58
+
59
+ Large jumps in the successive molar ionization energies occur when removing an electron from a noble gas (complete electron shell) configuration. For magnesium again, the first two molar ionization energies of magnesium given above correspond to removing the two 3s electrons, and the third ionization energy is a much larger 7730 kJ/mol, for the removal of a 2p electron from the very stable neon-like configuration of Mg2+. Similar jumps occur in the ionization energies of other third-row atoms.[40]
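The "large jump" argument above can be turned into a tiny calculation. The sketch below assumes only the three magnesium ionization energies quoted in the text; it finds the largest relative increase between successive values and reads off the number of valence electrons. It is an illustration of the reasoning, not a general-purpose method.

```python
# Successive molar ionization energies of magnesium, in kJ/mol, as quoted above.
mg_ionization_energies = [738, 1450, 7730]

# Ratio of each ionization energy to the previous one: roughly [1.96, 5.33].
ratios = [later / earlier
          for earlier, later in zip(mg_ionization_energies,
                                    mg_ionization_energies[1:])]

# The largest jump comes after the second ionization, which is consistent
# with magnesium having two valence (3s) electrons outside a neon-like core.
valence_electrons = ratios.index(max(ratios)) + 1
print(valence_electrons)   # 2
```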
60
+
61
+ Electronegativity is the tendency of an atom to attract a shared pair of electrons.[42] An atom's electronegativity is affected by both its atomic number and the distance between the valence electrons and the nucleus. The higher its electronegativity, the more an element attracts electrons. It was first proposed by Linus Pauling in 1932.[43] In general, electronegativity increases on passing from left to right along a period, and decreases on descending a group. Hence, fluorine is the most electronegative of the elements,[n 5] while caesium is the least, at least of those elements for which substantial data is available.[22]
62
+
63
+ There are some exceptions to this general rule. Gallium and germanium have higher electronegativities than aluminium and silicon respectively because of the d-block contraction. Elements of the fourth period immediately after the first row of the transition metals have unusually small atomic radii because the 3d-electrons are not effective at shielding the increased nuclear charge, and smaller atomic size correlates with higher electronegativity.[22] The anomalously high electronegativity of lead, particularly when compared to thallium and bismuth, is an artifact of electronegativity varying with oxidation state: its electronegativity conforms better to trends if it is quoted for the +2 state instead of the +4 state.[44]
64
+
65
+ The electron affinity of an atom is the amount of energy released when an electron is added to a neutral atom to form a negative ion. Although electron affinity varies greatly, some patterns emerge. Generally, nonmetals have more positive electron affinity values than metals. Chlorine most strongly attracts an extra electron. The electron affinities of the noble gases have not been measured conclusively, so they may or may not have slightly negative values.[47]
66
+
67
+ Electron affinity generally increases across a period. This is caused by the filling of the valence shell of the atom; a group 17 atom releases more energy than a group 1 atom on gaining an electron because it obtains a filled valence shell and is therefore more stable.[47]
68
+
69
+ A trend of decreasing electron affinity going down groups would be expected. The additional electron will be entering an orbital farther away from the nucleus. As such this electron would be less attracted to the nucleus and would release less energy when added. In going down a group, around one-third of elements are anomalous, with heavier elements having higher electron affinities than their next lighter congeners. Largely, this is due to the poor shielding by d and f electrons. A uniform decrease in electron affinity only applies to group 1 atoms.[48]
70
+
71
+ The lower the values of ionization energy, electronegativity and electron affinity, the more metallic character the element has. Conversely, nonmetallic character increases with higher values of these properties.[49] Given the periodic trends of these three properties, metallic character tends to decrease going across a period (or row) and, with some irregularities (mostly) due to poor screening of the nucleus by d and f electrons, and relativistic effects,[50] tends to increase going down a group (or column or family). Thus, the most metallic elements (such as caesium) are found at the bottom left of traditional periodic tables and the most nonmetallic elements (such as neon) at the top right. The combination of horizontal and vertical trends in metallic character explains the stair-shaped dividing line between metals and nonmetals found on some periodic tables, and the practice of sometimes categorizing several elements adjacent to that line, or elements adjacent to those elements, as metalloids.[51][52]
72
+
73
+ With some minor exceptions, oxidation numbers among the elements show four main trends according to their periodic table geographic location: left; middle; right; and south. On the left (groups 1 to 4, not including the f-block elements, and also niobium, tantalum, and probably dubnium in group 5), the highest oxidation number, equal to the group number, is the most stable, with lower oxidation states being less stable. In the middle (groups 3 to 11), higher oxidation states become more stable going down each group. Group 12 is an exception to this trend; its elements behave as if they were located on the left side of the table. On the right, higher oxidation states tend to become less stable going down a group.[53] The shift between these trends is continuous: for example, group 3 also has lower oxidation states most stable in its lightest member (scandium, with CsScCl3 for example known in the +2 state),[54] and group 12 is predicted to have copernicium more readily showing oxidation states above +2.[55]
74
+
75
+ The lanthanides positioned along the south of the table are distinguished by having the +3 oxidation state in common; this is their most stable state. The early actinides show a pattern of oxidation states somewhat similar to those of their period 6 and 7 transition metal congeners; the later actinides are more similar to the lanthanides, though the last ones (excluding lawrencium) have an increasingly important +2 oxidation state that becomes the most stable state for nobelium.[56]
76
+
77
+ From left to right across the four blocks of the long- or 32-column form of the periodic table are a series of linking or bridging groups of elements, located approximately between each block. In general, groups at the peripheries of blocks display similarities to the groups of the neighbouring blocks as well as to the other groups in their own blocks, as expected as most periodic trends are continuous.[57] These groups, like the metalloids, show properties in between, or that are a mixture of, groups to either side. Chemically, the group 3 elements, lanthanides, and heavy group 4 and 5 elements show some behaviour similar to the alkaline earth metals[58] or, more generally, s block metals[59][60][61] but have some of the physical properties of d block transition metals.[62] In fact, the metals all the way up to group 6 are united by being class-A cations ("hard" acids) that form more stable complexes with ligands whose donor atoms are the most electronegative nonmetals nitrogen, oxygen, and fluorine; metals later in the table form a transition to class-B cations ("soft" acids) that form more stable complexes with ligands whose donor atoms are the less electronegative heavier elements of groups 15 through 17.[63]
78
+
79
+ Meanwhile, lutetium behaves chemically as a lanthanide (with which it is often classified) but shows a mix of lanthanide and transition metal physical properties (as does yttrium).[64][65] Lawrencium, as an analogue of lutetium, would presumably display like characteristics.[n 6] The coinage metals in group 11 (copper, silver, and gold) are chemically capable of acting as either transition metals or main group metals.[68] The volatile group 12 metals, zinc, cadmium and mercury are sometimes regarded as linking the d block to the p block. Notionally they are d block elements but they have few transition metal properties and are more like their p block neighbors in group 13.[69][70] The relatively inert noble gases, in group 18, bridge the most reactive groups of elements in the periodic table—the halogens in group 17 and the alkali metals in group 1.[57]
80
+
81
+ The 1s, 2p, 3d, 4f, and 5g shells are each the first to have their value of ℓ, the azimuthal quantum number that determines a subshell's orbital angular momentum. This gives them some special properties[71] that have been referred to as kainosymmetry (from Greek καινός "new").[36][72] Elements filling these orbitals are usually less metallic than their heavier homologues, prefer lower oxidation states, and have smaller atomic and ionic radii.[72]
82
+
83
+ The above contractions may also be considered to be a general incomplete shielding effect in terms of how they impact the properties of the succeeding elements. The 2p, 3d, or 4f shells have no radial nodes and are smaller than expected. They therefore screen the nuclear charge incompletely, and therefore the valence electrons that fill immediately after the completion of such a core subshell are more tightly bound by the nucleus than would be expected. 1s is an exception, providing nearly complete shielding. This is in particular the reason why sodium has a first ionisation energy of 495.8 kJ/mol that is only slightly smaller than that of lithium, 520.2 kJ/mol, and why lithium acts as less electronegative than sodium in simple σ-bonded alkali metal compounds; sodium suffers an incomplete shielding effect from the preceding 2p elements, but lithium essentially does not.[71]
84
+
85
+ Kainosymmetry also explains the specific properties of the 2p, 3d, and 4f elements. The 2p subshell is small and of a similar radial extent as the 2s subshell, which facilitates orbital hybridisation. This does not work as well for the heavier p elements: for example, silicon in silane (SiH4) shows approximate sp2 hybridisation, whereas carbon in methane (CH4) shows an almost ideal sp3 hybridisation. The bonding in these nonorthogonal heavy p element hydrides is weakened; this situation worsens with more electronegative substituents as they magnify the difference in energy between the s and p subshells. The heavier p elements are often more stable in their higher oxidation states in organometallic compounds than in compounds with electronegative ligands. This follows Bent's rule: s character is concentrated in the bonds to the more electropositive substituents, while p character is concentrated in the bonds to the more electronegative substituents. Furthermore, the 2p elements prefer to participate in multiple bonding (observed in O=O and N≡N) to eliminate Pauli repulsion from the otherwise close s and p lone pairs: their π bonds are stronger and their single bonds weaker. The small size of the 2p shell is also responsible for the extremely high electronegativities of the 2p elements.[71]
86
+
87
+ The 3d elements show the opposite effect; the 3d orbitals are smaller than would be expected, with a radial extent similar to the 3p core shell, which weakens bonding to ligands because they cannot overlap with the ligands' orbitals well enough. These bonds are therefore stretched and therefore weaker compared to the homologous ones of the 4d and 5d elements (the 5d elements show an additional d-expansion due to relativistic effects). This also leads to low-lying excited states, which is probably related to the well-known fact that 3d compounds are often coloured (the light absorbed is visible). This also explains why the 3d contraction has a stronger effect on the following elements than the 4d or 5d ones do. As for the 4f elements, the difficulty that 4f has in being used for chemistry is also related to this, as are the strong incomplete screening effects; the 5g elements may show a similar contraction, but it is likely that relativistic effects will partly counteract this, as they would tend to cause expansion of the 5g shell.[71]
88
+
89
+ Another consequence is the increased metallicity of the following elements in a block after the first kainosymmetric orbital, along with a preference for higher oxidation states. This is visible comparing H and He (1s) with Li and Be (2s); N–F (2p) with P–Cl (3p); Fe and Co (3d) with Ru and Rh (4d); and Nd–Dy (4f) with U–Cf (5f). As kainosymmetric orbitals appear in the even rows (except for 1s), this creates an even–odd difference between periods from period 2 onwards: elements in even periods are smaller and have more oxidising higher oxidation states (if they exist), whereas elements in odd periods differ in the opposite direction.[72]
90
+
91
+ In 1789, Antoine Lavoisier published a list of 33 chemical elements, grouping them into gases, metals, nonmetals, and earths.[73] Chemists spent the following century searching for a more precise classification scheme. In 1829, Johann Wolfgang Döbereiner observed that many of the elements could be grouped into triads based on their chemical properties. Lithium, sodium, and potassium, for example, were grouped together in a triad as soft, reactive metals. Döbereiner also observed that, when arranged by atomic weight, the second member of each triad was roughly the average of the first and the third.[74] This became known as the Law of Triads.[75] German chemist Leopold Gmelin worked with this system, and by 1843 he had identified ten triads, three groups of four, and one group of five. Jean-Baptiste Dumas published work in 1857 describing relationships between various groups of metals. Although various chemists were able to identify relationships between small groups of elements, they had yet to build one scheme that encompassed them all.[74] In 1857, German chemist August Kekulé observed that carbon often has four other atoms bonded to it. Methane, for example, has one carbon atom and four hydrogen atoms.[76] This concept eventually became known as valency, where different elements bond with different numbers of atoms.[77]
92
+
93
+ In 1862, the French geologist Alexandre-Émile Béguyer de Chancourtois published an early form of the periodic table, which he called the telluric helix or screw. He was the first person to notice the periodicity of the elements. With the elements arranged in a spiral on a cylinder by order of increasing atomic weight, de Chancourtois showed that elements with similar properties seemed to occur at regular intervals. His chart included some ions and compounds in addition to elements. His paper also used geological rather than chemical terms and did not include a diagram. As a result, it received little attention until the work of Dmitri Mendeleev.[78]
94
+
95
+ In 1864, Julius Lothar Meyer, a German chemist, published a table with 28 elements. Realizing that an arrangement according to atomic weight did not exactly fit the observed periodicity in chemical properties he gave valency priority over minor differences in atomic weight. A missing element between Si and Sn was predicted with atomic weight 73 and valency 4.[79] Concurrently, English chemist William Odling published an arrangement of 57 elements, ordered on the basis of their atomic weights. With some irregularities and gaps, he noticed what appeared to be a periodicity of atomic weights among the elements and that this accorded with "their usually received groupings".[80] Odling alluded to the idea of a periodic law but did not pursue it.[81] He subsequently proposed (in 1870) a valence-based classification of the elements.[82]
96
+
97
+ English chemist John Newlands produced a series of papers from 1863 to 1866 noting that when the elements were listed in order of increasing atomic weight, similar physical and chemical properties recurred at intervals of eight. He likened such periodicity to the octaves of music.[83][84] This so termed Law of Octaves was ridiculed by Newlands' contemporaries, and the Chemical Society refused to publish his work.[85] Newlands was nonetheless able to draft a table of the elements and used it to predict the existence of missing elements, such as germanium.[86] The Chemical Society only acknowledged the significance of his discoveries five years after they credited Mendeleev.[87]
98
+
99
+ In 1867, Gustavus Hinrichs, a Danish born academic chemist based in America, published a spiral periodic system based on atomic spectra and weights, and chemical similarities. His work was regarded as idiosyncratic, ostentatious and labyrinthine and this may have militated against its recognition and acceptance.[88][89]
100
+
101
+ Russian chemistry professor Dmitri Mendeleev and German chemist Julius Lothar Meyer independently published their periodic tables in 1869 and 1870, respectively.[90] Mendeleev's table, dated March 1 [O.S. February 17] 1869,[91] was his first published version. That of Meyer was an expanded version of his (Meyer's) table of 1864.[92] They both constructed their tables by listing the elements in rows or columns in order of atomic weight and starting a new row or column when the characteristics of the elements began to repeat.[93]
102
+
103
+ The recognition and acceptance afforded to Mendeleev's table came from two decisions he made. The first was to leave gaps in the table when it seemed that the corresponding element had not yet been discovered.[94] Mendeleev was not the first chemist to do so, but he was the first to be recognized as using the trends in his periodic table to predict the properties of those missing elements, such as gallium and germanium.[95] The second decision was to occasionally ignore the order suggested by the atomic weights and switch adjacent elements, such as tellurium and iodine, to better classify them into chemical families.
104
+
105
+ Mendeleev published his table in 1869, organizing the elements by atomic weight, a quantity determinable to fair precision in his time. Atomic weight worked well enough to allow Mendeleev to accurately predict the properties of missing elements.
106
+
107
+ Mendeleev took the unusual step of naming missing elements using the Sanskrit numerals eka (1), dvi (2), and tri (3) to indicate that the element in question was one, two, or three rows removed from a lighter congener. It has been suggested that Mendeleev, in doing so, was paying homage to ancient Sanskrit grammarians, in particular Pāṇini, who devised a periodic alphabet for the language.[96]
108
+
109
+ Following the discovery of the atomic nucleus by Ernest Rutherford in 1911, it was proposed that the integer count of the nuclear charge is identical to the sequential place of each element in the periodic table. In 1913, English physicist Henry Moseley using X-ray spectroscopy confirmed this proposal experimentally. Moseley determined the value of the nuclear charge of each element and showed that Mendeleev's ordering actually places the elements in sequential order by nuclear charge.[97] Nuclear charge is identical to proton count and determines the value of the atomic number (Z) of each element. Using atomic number gives a definitive, integer-based sequence for the elements. Moseley predicted, in 1913, that the only elements still missing between aluminium (Z = 13) and gold (Z = 79) were Z = 43, 61, 72, and 75, all of which were later discovered. The atomic number is the absolute definition of an element and gives a factual basis for the ordering of the periodic table.[98]
110
+
111
+ In 1871, Mendeleev published his periodic table in a new form, with groups of similar elements arranged in columns rather than in rows, and those columns numbered I to VIII corresponding with the element's oxidation state. He also gave detailed predictions for the properties of elements he had earlier noted were missing, but should exist.[99] These gaps were subsequently filled as chemists discovered additional naturally occurring elements.[100] It is often stated that the last naturally occurring element to be discovered was francium (referred to by Mendeleev as eka-caesium) in 1939, but it was technically only the last element to be discovered in nature as opposed to by synthesis.[101] Plutonium, produced synthetically in 1940, was identified in trace quantities as a naturally occurring element in 1971.[102]
112
+
113
+ The popular[103] periodic table layout, also known as the common or standard form (as shown at various other points in this article), is attributable to Horace Groves Deming. In 1923, Deming, an American chemist, published short (Mendeleev style) and medium (18-column) form periodic tables.[104][n 7] Merck and Company prepared a handout form of Deming's 18-column medium table, in 1928, which was widely circulated in American schools. By the 1930s Deming's table was appearing in handbooks and encyclopedias of chemistry. It was also distributed for many years by the Sargent-Welch Scientific Company.[105][106][107]
114
+
115
+ With the development of modern quantum mechanical theories of electron configurations within atoms, it became apparent that each period (row) in the table corresponded to the filling of a quantum shell of electrons. Larger atoms have more electron sub-shells, so later tables have required progressively longer periods.[108]
116
+
117
+ In 1945, Glenn Seaborg, an American scientist, made the suggestion that the actinide elements, like the lanthanides, were filling an f sub-level. Before this time the actinides were thought to be forming a fourth d-block row. Seaborg's colleagues advised him not to publish such a radical suggestion as it would most likely ruin his career. As Seaborg considered he did not then have a career to bring into disrepute, he published anyway. Seaborg's suggestion was found to be correct and he subsequently went on to win the 1951 Nobel Prize in chemistry for his work in synthesizing actinide elements.[109][110][n 8]
118
+
119
+ Although minute quantities of some transuranic elements occur naturally,[3] they were all first discovered in laboratories. Their production has expanded the periodic table significantly, the first of these being neptunium, synthesized in 1939.[111] Because many of the transuranic elements are highly unstable and decay quickly, they are challenging to detect and characterize when produced. There have been controversies concerning the acceptance of competing discovery claims for some elements, requiring independent review to determine which party has priority, and hence naming rights.[112] In 2010, a joint Russia–US collaboration at Dubna, Moscow Oblast, Russia, claimed to have synthesized six atoms of tennessine (element 117), making it the most recently claimed discovery. It, along with nihonium (element 113), moscovium (element 115), and oganesson (element 118), are the four most recently named elements, whose names all became official on 28 November 2016.[113]
120
+
121
+ The modern periodic table is sometimes expanded into its long or 32-column form by reinstating the footnoted f-block elements into their natural position between the s- and d-blocks, as proposed by Alfred Werner.[114] Unlike the 18-column form, this arrangement results in "no interruptions in the sequence of increasing atomic numbers".[115] The relationship of the f-block to the other blocks of the periodic table also becomes easier to see.[116] William B. Jensen advocates a form of table with 32 columns on the grounds that the lanthanides and actinides are otherwise relegated in the minds of students as dull, unimportant elements that can be quarantined and ignored.[117] Despite these advantages, the 32-column form is generally avoided by editors on account of its undue rectangular ratio compared to a book page ratio,[118] and the familiarity of chemists with the modern form, as introduced by Seaborg.[119]
122
+
123
+ Color of the atomic number shows the state of matter at 0 °C and 1 atm: red = gas (e.g. element 1, hydrogen), black = solid (e.g. element 3, lithium), green = liquid (e.g. element 80, mercury), gray = unknown (e.g. element 109, meitnerium).
124
+
125
+ Background color shows the subcategory in the metal–metalloid–nonmetal trend.
126
+
127
+ Within 100 years of the appearance of Mendeleev's table in 1869, Edward G. Mazurs had collected an estimated 700 different published versions of the periodic table.[117][122][123] As well as numerous rectangular variations, other periodic table formats have been shaped, for example,[n 9] like a circle, cube, cylinder, building, spiral, lemniscate,[124] octagonal prism, pyramid, sphere, or triangle. Such alternatives are often developed to highlight or emphasize chemical or physical properties of the elements that are not as apparent in traditional periodic tables.[123]
128
+
129
+ A popular[125] alternative structure is that of Otto Theodor Benfey (1960). The elements are arranged in a continuous spiral, with hydrogen at the centre and the transition metals, lanthanides, and actinides occupying peninsulas.[126]
130
+
131
+ Most periodic tables are two-dimensional;[3] three-dimensional tables are known from as far back as 1862 (pre-dating Mendeleev's two-dimensional table of 1869). More recent examples include Courtines' Periodic Classification (1925),[127] Wringley's Lamina System (1949),[128]
132
+ Giguère's Periodic helix (1965)[129] and Dufour's Periodic Tree (1996).[130] Going one further, Stowe's Physicist's Periodic Table (1989)[131] has been described as being four-dimensional (having three spatial dimensions and one colour dimension).[132]
133
+
134
+ The various forms of periodic tables can be thought of as lying on a chemistry–physics continuum.[133] Towards the chemistry end of the continuum can be found, as an example, Rayner-Canham's "unruly"[134] Inorganic Chemist's Periodic Table (2002),[135] which emphasizes trends and patterns, and unusual chemical relationships and properties. Near the physics end of the continuum is Janet's Left-Step Periodic Table (1928). This has a structure that shows a closer connection to the order of electron-shell filling and, by association, quantum mechanics.[136] A somewhat similar approach has been taken by Alper,[137] albeit criticized by Eric Scerri as disregarding the need to display chemical and physical periodicity.[138] Somewhere in the middle of the continuum is the ubiquitous common or standard form of periodic table. This is regarded as better expressing empirical trends in physical state, electrical and thermal conductivity, and oxidation numbers, and other properties easily inferred from traditional techniques of the chemical laboratory.[139] Its popularity is thought to be a result of this layout having a good balance of features in terms of ease of construction and size, and its depiction of atomic order and periodic trends.[81][140]
135
+
136
+ Simply following electron configurations, hydrogen (electronic configuration 1s1) and helium (1s2) should be placed in groups 1 and 2, above lithium (1s22s1) and beryllium (1s22s2).[141] While such a placement is common for hydrogen, it is rarely used for helium outside of the context of electron configurations: when the noble gases (then called "inert gases") were first discovered around 1900, they were known as "group 0", reflecting the fact that no chemical reactivity had been observed for these elements at that point, and helium was placed at the top of that group, since it shared the extreme chemical inertness seen throughout the group. As the group changed its formal number, many authors continued to assign helium directly above neon, in group 18; one example of such placement is the current IUPAC table.[142]
137
+
138
+ The position of hydrogen in group 1 is reasonably well settled. Its usual oxidation state is +1 as is the case for its heavier alkali metal congeners. Like lithium, it has a significant covalent chemistry.[143][144]
139
+ It can stand in for alkali metals in typical alkali metal structures.[145] It is capable of forming alloy-like hydrides, featuring metallic bonding, with some transition metals.[146]
140
+
141
+ Nevertheless, it is sometimes placed elsewhere. A common alternative is at the top of group 17[138] given hydrogen's strictly univalent and largely non-metallic chemistry, and the strictly univalent and non-metallic chemistry of fluorine (the element otherwise at the top of group 17). Sometimes, to show hydrogen has properties corresponding to both those of the alkali metals and the halogens, it is shown at the top of the two columns simultaneously.[147] Another suggestion is above carbon in group 14: placed that way, it fits well into the trends of increasing ionization potential values and electron affinity values, and is not too far from the electronegativity trend, even though hydrogen cannot show the tetravalence characteristic of the heavier group 14 elements.[148] Finally, hydrogen is sometimes placed separately from any group; this is based on its general properties being regarded as sufficiently different from those of the elements in any other group.
142
+
143
+ The other period 1 element, helium, is most often placed in group 18 with the other noble gases, as its extraordinary inertness is extremely close to that of the other light noble gases neon and argon.[149] Nevertheless, it is occasionally placed separately from any group as well.[150] The property that distinguishes helium from the rest of the noble gases is that in its closed electron shell, helium has only two electrons in the outermost electron orbital, while the rest of the noble gases have eight. Some authors, such as Henry Bent (the eponym of Bent's rule), Wojciech Grochala, and Felice Grandinetti, have argued that helium would be correctly placed in group 2, over beryllium; Charles Janet's left-step table also contains this assignment. The normalized ionization potentials and electron affinities show better trends with helium in group 2 than in group 18; helium is expected to be slightly more reactive than neon (which breaks the general trend of reactivity in the noble gases, where the heavier ones are more reactive); predicted helium compounds often lack neon analogues even theoretically, but sometimes have beryllium analogues; and helium over beryllium better follows the trend of first-row anomalies in the table (s >> p > d > f).[151][152][153]
144
+
145
+ Although scandium and yttrium are always the first two elements in group 3, the identity of the next two elements is not completely settled. They are commonly lanthanum and actinium, and less often lutetium and lawrencium. The two variants originate from historical difficulties in placing the lanthanides in the periodic table, and arguments as to where the f block elements start and end.[154][n 10][n 11] It has been claimed that such arguments are proof that, "it is a mistake to break the [periodic] system into sharply delimited blocks".[156] A third variant shows the two positions below yttrium as being occupied by the lanthanides and the actinides. A fourth variant shows group 3 bifurcating after Sc-Y, into an La-Ac branch, and an Lu-Lr branch.[29]
146
+
147
+ Chemical and physical arguments have been made in support of lutetium and lawrencium[157][158] but the majority of authors seem unconvinced.[159] Most working chemists are not aware there is any controversy.[160] In December 2015 an IUPAC project was established to make a recommendation on the matter.[161]
148
+
149
+ Lanthanum and actinium are commonly depicted as the remaining group 3 members.[162][n 12] It has been suggested that this layout originated in the 1940s, with the appearance of periodic tables relying on the electron configurations of the elements and the notion of the differentiating electron. The configurations of caesium, barium and lanthanum are [Xe]6s1, [Xe]6s2 and [Xe]5d16s2. Lanthanum thus has a 5d differentiating electron and this establishes it "in group 3 as the first member of the d-block for period 6".[163] A consistent set of electron configurations is then seen in group 3: scandium [Ar]3d14s2, yttrium [Kr]4d15s2 and lanthanum [Xe]5d16s2. Still in period 6, ytterbium was assigned an electron configuration of [Xe]4f135d16s2 and lutetium [Xe]4f145d16s2, "resulting in a 4f differentiating electron for lutetium and firmly establishing it as the last member of the f-block for period 6".[163] Later spectroscopic work found that the electron configuration of ytterbium was in fact [Xe]4f146s2. This meant that ytterbium and lutetium—the latter with [Xe]4f145d16s2—both had 14 f-electrons, "resulting in a d- rather than an f- differentiating electron" for lutetium and making it an "equally valid candidate" with [Xe]5d16s2 lanthanum, for the group 3 periodic table position below yttrium.[163] Lanthanum has the advantage of incumbency since the 5d1 electron appears for the first time in its structure whereas it appears for the third time in lutetium, having also made a brief second appearance in gadolinium.[164]
150
+
151
+ In terms of chemical behaviour,[165] and trends going down group 3 for properties such as melting point, electronegativity and ionic radius,[166][167] scandium, yttrium, lanthanum and actinium are similar to their group 1–2 counterparts. In this variant, the number of f electrons in the most common (trivalent) ions of the f-block elements consistently matches their position in the f-block.[168] For example, the f-electron counts for the trivalent ions of the first three f-block elements are Ce 1, Pr 2 and Nd 3.[169]
152
+
153
+ In other tables, lutetium and lawrencium are the remaining group 3 members.[n 13] Early techniques for chemically separating scandium, yttrium and lutetium relied on the fact that these elements occurred together in the so-called "yttrium group" whereas La and Ac occurred together in the "cerium group".[163] Accordingly, lutetium rather than lanthanum was assigned to group 3 by some chemists in the 1920s and 30s.[n 14] Several physicists in the 1950s and '60s favoured lutetium, in light of a comparison of several of its physical properties with those of lanthanum.[163] This arrangement, in which lanthanum is the first member of the f-block, is disputed by some authors since lanthanum lacks any f-electrons. It has been argued that this is not a valid concern given other periodic table anomalies—thorium, for example, has no f-electrons yet is part of the f-block.[170] As for lawrencium, its gas phase atomic electron configuration was confirmed in 2015 as [Rn]5f147s27p1. Such a configuration represents another periodic table anomaly, regardless of whether lawrencium is located in the f-block or the d-block, as the only potentially applicable p-block position has been reserved for nihonium with its predicted configuration of [Rn]5f146d107s27p1.[27][n 15]
154
+
155
+ Chemically, scandium, yttrium and lutetium (and presumably lawrencium) behave like trivalent versions of the group 1–2 metals.[172] On the other hand, trends going down the group for properties such as melting point, electronegativity and ionic radius, are similar to those found among their group 4–8 counterparts.[163] In this variant, the number of f electrons in the gaseous forms of the f-block atoms usually matches their position in the f-block. For example, the f-electron counts for the first five f-block elements are La 0, Ce 1, Pr 3, Nd 4 and Pm 5.[163]
156
+
157
+ A few authors position all thirty lanthanides and actinides in the two positions below yttrium (usually via footnote markers).
158
+ This variant, which is stated in the 2005 Red Book to be the IUPAC-agreed version as of 2005 (a number of later versions exist, and the last update is from 1 December 2018),[173][n 16] emphasizes similarities in the chemistry of the 15 lanthanide elements (La–Lu), possibly at the expense of ambiguity as to which elements occupy the two group 3 positions below yttrium, and a 15-column wide f block (there can only be 14 elements in any row of the f block).[n 17] However, this similarity does not extend to the 15 actinide elements (Ac–Lr), which show a much wider variety in their chemistries.[175] This form moreover reduces the f-block to a degenerate branch of group 3 of the d-block; it dates back to the 1920s when the lanthanides were thought to have their f electrons as core electrons, which is now known to be false. It is also false for the actinides, many of which show stable oxidation states above +3.[176]
159
+
160
+ In this variant, group 3 bifurcates after Sc-Y into a La-Ac branch, and a Lu-Lr branch. This arrangement is consistent with the hypothesis that arguments in favour of either Sc-Y-La-Ac or Sc-Y-Lu-Lr based on chemical and physical data are inconclusive.[177] As noted, trends going down Sc-Y-La-Ac match trends in groups 1−2[178] whereas trends going down Sc-Y-Lu-Lr better match trends in groups 4−10.[163]
161
+
162
+ The bifurcation of group 3 is a throwback to Mendeleev's eight-column form, in which seven of the main groups each have two subgroups. Tables featuring a bifurcated group 3 have been periodically proposed since that time.[n 18]
163
+
164
+ The definition of a transition metal, as given by IUPAC in the Gold Book, is an element whose atom has an incomplete d sub-shell, or which can give rise to cations with an incomplete d sub-shell.[179] By this definition all of the elements in groups 3–11 are transition metals. The IUPAC definition therefore excludes group 12, comprising zinc, cadmium and mercury, from the transition metals category. However, the 2005 IUPAC nomenclature as codified in the Red Book gives both the group 3–11 and group 3–12 definitions of the transition metals as alternatives.
165
+
166
+ Some chemists treat the categories "d-block elements" and "transition metals" interchangeably, thereby including groups 3–12 among the transition metals. In this instance the group 12 elements are treated as a special case of transition metal in which the d electrons are not ordinarily given up for chemical bonding (they can sometimes contribute to the valence bonding orbitals even so, as in zinc fluoride).[180] The 2007 report of mercury(IV) fluoride (HgF4), a compound in which mercury would use its d electrons for bonding, has prompted some commentators to suggest that mercury can be regarded as a transition metal.[181] Other commentators, such as Jensen,[182] have argued that the formation of a compound like HgF4 can occur only under highly abnormal conditions; indeed, its existence is currently disputed. As such, mercury could not be regarded as a transition metal by any reasonable interpretation of the ordinary meaning of the term.[182]
167
+
168
+ Still other chemists further exclude the group 3 elements from the definition of a transition metal. They do so on the basis that the group 3 elements do not form any ions having a partially occupied d shell and do not therefore exhibit properties characteristic of transition metal chemistry.[183] In this case, only groups 4–11 are regarded as transition metals. This categorisation is however not one of the alternatives considered by IUPAC. Though the group 3 elements show few of the characteristic chemical properties of the transition metals, the same is true of the heavy members of groups 4 and 5, which also are mostly restricted to the group oxidation state in their chemistry. Moreover, the group 3 elements show characteristic physical properties of transition metals (on account of the presence in each atom of a single d electron).[62]
169
+
170
+ Although all elements up to oganesson have been discovered, of the elements above hassium (element 108), only copernicium (element 112), nihonium (element 113), and flerovium (element 114) have known chemical properties, and conclusive categorisation at present has not been reached.[55] Some of these may behave differently from what would be predicted by extrapolation, due to relativistic effects; for example, copernicium and flerovium have been predicted to possibly exhibit some noble-gas-like properties, even though neither is placed in group 18 with the other noble gases.[55][184] The current experimental evidence still leaves open the question of whether copernicium and flerovium behave more like metals or noble gases.[55][185] At the same time, oganesson (element 118) is expected to be a solid semiconductor at standard conditions, despite being in group 18.[186]
171
+
172
+ Currently, the periodic table has seven complete rows, with all spaces filled in with discovered elements. Future elements would have to begin an eighth row. Nevertheless, it is unclear whether new eighth-row elements will continue the pattern of the current periodic table, or require further adaptations or adjustments. Seaborg expected the eighth period to follow the previously established pattern exactly, so that it would include a two-element s-block for elements 119 and 120, a new g-block for the next 18 elements, and 30 additional elements continuing the current f-, d-, and p-blocks, culminating in element 168, the next noble gas.[188] More recently, physicists such as Pekka Pyykkö have theorized that these additional elements do not exactly follow the Madelung rule, which predicts how electron shells are filled and thus affects the appearance of the present periodic table. There are currently several competing theoretical models for the placement of the elements of atomic number less than or equal to 172. In all of these it is element 172, rather than element 168, that emerges as the next noble gas after oganesson, although these must be regarded as speculative as no complete calculations have been done beyond element 123.[189][190]
173
+
174
+ The number of possible elements is not known. A very early suggestion made by Elliot Adams in 1911, and based on the arrangement of elements in each horizontal periodic table row, was that elements of atomic weight greater than circa 256 (which would equate to between elements 99 and 100 in modern-day terms) did not exist.[191] A higher, more recent estimate is that the periodic table may end soon after the island of stability,[192] whose centre is predicted to lie between element 110 and element 126, as the extension of the periodic and nuclide tables is restricted by proton and neutron drip lines as well as decreasing stability towards spontaneous fission.[193][194] Other predictions of an end to the periodic table include at element 128 by John Emsley,[3] at element 137 by Richard Feynman,[195] at element 146 by Yogendra Gambhir,[196] and at element 155 by Albert Khazan.[3][n 19]
175
+
176
+ The Bohr model exhibits difficulty for atoms with atomic number greater than 137, as any element with an atomic number greater than 137 would require 1s electrons to be travelling faster than c, the speed of light.[197] Hence the non-relativistic Bohr model is inaccurate when applied to such an element.
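The origin of the number 137 in the Bohr-model argument can be made explicit with a short, standard back-of-the-envelope relation (stated here as a check, not as part of the original article): in the non-relativistic Bohr model the speed of a 1s electron is proportional to the nuclear charge,

```latex
\[
  v_{1s} = Z \,\alpha\, c ,
  \qquad
  \alpha \approx \frac{1}{137.036} ,
  \qquad
  v_{1s} > c \;\Longleftrightarrow\; Z > \frac{1}{\alpha} \approx 137 ,
\]
```

where α is the fine-structure constant; this is why the non-relativistic model breaks down just above Z = 137.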
177
+
178
+ The relativistic Dirac equation has problems for elements with more than 137 protons. For such elements, the wave function of the Dirac ground state is oscillatory rather than bound, and there is no gap between the positive and negative energy spectra, as in the Klein paradox.[198] More accurate calculations taking into account the effects of the finite size of the nucleus indicate that the binding energy first exceeds the limit for elements with more than 173 protons. For heavier elements, if the innermost orbital (1s) is not filled, the electric field of the nucleus will pull an electron out of the vacuum, resulting in the spontaneous emission of a positron.[199] This does not happen if the innermost orbital is filled, so that element 173 is not necessarily the end of the periodic table.[195]
179
+
180
+ The many different forms of periodic table have prompted the question of whether there is an optimal or definitive form of periodic table.[200] The answer to this question is thought to depend on whether the chemical periodicity seen to occur among the elements has an underlying truth, effectively hard-wired into the universe, or if any such periodicity is instead the product of subjective human interpretation, contingent upon the circumstances, beliefs and predilections of human observers. An objective basis for chemical periodicity would settle the questions about the location of hydrogen and helium, and the composition of group 3. Such an underlying truth, if it exists, is thought to have not yet been discovered. In its absence, the many different forms of periodic table can be regarded as variations on the theme of chemical periodicity, each of which explores and emphasizes different aspects, properties, perspectives and relationships of and among the elements.[n 20]
181
+
182
+ In celebration of the periodic table's 150th anniversary, the United Nations declared the year 2019 as the International Year of the Periodic Table, celebrating "one of the most significant achievements in science".[203]
en/5561.html.txt ADDED
@@ -0,0 +1,106 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ In Greek mythology and later Roman mythology, the Cyclopes (/saɪˈkloʊpiːz/ sy-KLOH-peez; Greek: Κύκλωπες, Kýklōpes, "Circle-eyes" or "Round-eyes";[1] singular Cyclops /ˈsaɪklɒps/ SY-klops; Κύκλωψ, Kýklōps) are giant one-eyed creatures.[2] Three groups of Cyclopes can be distinguished. In Hesiod's Theogony, they are the brothers: Brontes, Steropes, and Arges, who provided Zeus with his weapon the thunderbolt. In Homer's Odyssey, they are an uncivilized group of shepherds, the brethren of Polyphemus encountered by Odysseus. Cyclopes were also famous as the builders of the Cyclopean walls of Mycenae and Tiryns.
2
+
3
+ The fifth-century BC playwright Euripides wrote a satyr play entitled Cyclops, about Odysseus' encounter with Polyphemus. The Hesiodic and the wall-builder Cyclopes also figure in his plays. The third-century BC poet Callimachus makes the Hesiodic Cyclopes the assistants of smith-god Hephaestus. So does Virgil in his Latin epic Aeneid, where he seems to equate the Hesiodic and Homeric Cyclopes.
4
+
5
+ From at least the fifth-century BC, Cyclopes have been associated with the island of Sicily and the volcanic Aeolian Islands.
6
+
7
+ Three groups of Cyclopes can be distinguished: the Hesiodic, the Homeric and the wall-builders.[3] In Hesiod's Theogony, the Cyclopes are the three brothers: Brontes, Steropes, and Arges, sons of Uranus and Gaia, who made for Zeus his characteristic weapon, the thunderbolt. In Homer's Odyssey, the Cyclopes are an uncivilized group of shepherds, one of whom, Polyphemus, the son of Poseidon, is encountered by Odysseus. Cyclopes were also said to have been the builders of the Cyclopean walls of Mycenae and Tiryns.[4] A scholiast, quoting the fifth-century BC historian Hellanicus, tells us that, in addition to the Hesiodic Cyclopes (whom the scholiast describes as "the gods themselves"), and the Homeric Cyclopes, there was a third group of Cyclopes: the builders of the walls of Mycenae.[5]
8
+
9
+ Hesiod, in the Theogony (c. 700 BC), described three Cyclopes: Brontes, Steropes, and Arges, who were the sons of Uranus (Sky) and Gaia (Earth), and the brothers of the Titans and Hundred-Handers, and who had a single eye set in the middle of their foreheads.[6] They made for Zeus his all-powerful thunderbolt, and in so doing, the Cyclopes played a key role in the Greek succession myth, which told how the Titan Cronus overthrew his father Uranus, and how in turn Zeus overthrew Cronus and his fellow Titans, and how Zeus was eventually established as the final and permanent ruler of the cosmos.[7] The names that Hesiod gives them: Arges (Bright), Brontes (Thunder), and Steropes (Lightning), reflect their fundamental role as thunderbolt makers.[8] As early as the late seventh-century BC, the Cyclopes could be used by the Spartan poet Tyrtaeus to epitomize extraordinary size and strength.[9]
10
+
11
+ According to the accounts of Hesiod and the mythographer Apollodorus, the Cyclopes had been imprisoned by their father Uranus.[10] Zeus later freed the Cyclopes, and they repaid him by giving him the thunderbolt.[11] The Cyclopes provided for Hesiod, and other theogony-writers, a convenient source of heavenly weaponry, since the smith-god Hephaestus—who would eventually take over that role—had not yet been born.[12] According to Apollodorus, the Cyclopes also provided Poseidon with his trident and Hades with his cap of invisibility,[13] and the gods used these weapons to defeat the Titans.
12
+
13
+ Although the primordial Cyclopes of the Theogony were presumably immortal (as were their brothers the Titans), the sixth-century BC Hesiodic Catalogue of Women has them being killed by Apollo.[14] Later sources tell us why: Apollo's son Asclepius had been killed by Zeus' thunderbolt, and Apollo killed the Cyclopes, the makers of the thunderbolt, in revenge.[15] According to a scholiast on Euripides' Alcestis, the fifth-century BC mythographer Pherecydes supplied the same motive, but said that Apollo, rather than killing the Cyclopes, killed their sons (one of whom he named Aortes) instead.[16] No other source mentions any offspring of the Cyclopes.[17] A Pindar fragment suggests that Zeus himself killed the Cyclopes to prevent them from making thunderbolts for anyone else.[18]
14
+
15
+ The Cyclopes' prowess as craftsmen is stressed by Hesiod, who says "strength and force and contrivances were in their works."[19] Because they were such skilled craftsmen of great size and strength, later poets, beginning with the third-century BC poet Callimachus, imagine these Cyclopes, the primordial makers of Zeus' thunderbolt, becoming the assistants of the smith-god Hephaestus at his forge in Sicily, underneath Mount Etna, or perhaps the nearby Aeolian Islands.[20] In his Hymn to Artemis, Callimachus has the Cyclopes, on the Aeolian island of Lipari, working "at the anvils of Hephaestus", make the bows and arrows used by Apollo and Artemis.[21] The first-century BC Latin poet Virgil, in his epic Aeneid, has the Cyclopes "Brontes and Steropes and bare-limbed Pyracmon"[22] toil under the direction of Vulcan (Hephaestus), in caves underneath Mount Etna and the Aeolian islands.[23] Virgil describes the Cyclopes in Vulcan's smithy forging iron, making a thunderbolt, a chariot for Mars, and Pallas's Aegis, with Vulcan interrupting their work to command them to fashion arms for Aeneas.[24] The later Latin poet Ovid also has the Hesiodic Cyclopes Brontes and Steropes (along with a third Cyclops named Acmonides) work at forges in Sicilian caves.[25]
16
+
17
+ According to a Hellenistic astral myth, the Cyclopes were the builders of the first altar. The myth was a catasterism, which explained how the constellation the Altar (Ara) came to be in the heavens. According to the myth, the Cyclopes built an altar upon which Zeus and the other gods swore alliance before their war with the Titans. After their victory, "the gods placed the altar in the sky in commemoration", and thus began the practice, according to the myth, of men swearing oaths upon altars "as a guarantee of their good faith".[26]
18
+
19
+ According to the second-century geographer Pausanias, there was a sanctuary called the "altar of the Cyclopes" on the Isthmus of Corinth at a place sacred to Poseidon, where sacrifices were offered to the Cyclopes.[27] There is no evidence for any other cult associated with the Cyclopes.[28] According to a version of the story in the Iliad scholia (found nowhere else), when Zeus swallowed Metis, she was pregnant with Athena by the Cyclops Brontes.[29]
20
+
21
+ Although described by Hesiod as having "very violent hearts" (ὑπέρβιον ἦτορ ἔχοντας),[30] and while their extraordinary size and strength would have made them capable of great violence, there is no indication of the Hesiodic Cyclopes having behaved in any other way than as dutiful servants of the gods.[31]
22
+
23
+ Walter Burkert suggests that groups or societies of lesser gods, like the Hesiodic Cyclopes, "mirror real cult associations (thiasoi) ... It may be surmised that smith guilds lie behind Cabeiri, Idaian Dactyloi, Telchines, and Cyclopes."[32]
24
+
25
+ In an episode of Homer's Odyssey (c. 700 BC), the hero Odysseus encounters the Cyclops Polyphemus, the son of Poseidon, a one-eyed man-eating giant who lives with his fellow Cyclopes in a distant land.[33] The relationship between these Cyclopes and Hesiod's Cyclopes is unclear.[34] Homer described a very different group of Cyclopes than the skilled and subservient craftsmen of Hesiod.[35] Homer's Cyclopes live in the "world of men" rather than among the gods, as they presumably do in the Theogony.[36] The Homeric Cyclopes are presented as uncivilized shepherds, who live in caves, savages with no regard for Zeus. They have no knowledge of agriculture, ships or craft. They live apart and lack any laws.[37]
26
+
27
+ The fifth-century BC playwright Euripides also told the story of Odysseus' encounter with Polyphemus in his satyr play Cyclops. Euripides' Cyclopes, like Homer's, are uncultured cave-dwelling shepherds. They have no agriculture, no wine, and live on milk, cheese and the meat of sheep. They live solitary lives, and have no government. They are inhospitable to strangers, slaughtering and eating all who come to their land.[38] While Homer does not say if the other Cyclopes are like Polyphemus in their appearance and parentage, Euripides makes it explicit, calling the Cyclopes "Poseidon's one-eyed sons".[39] And while Homer is vague as to their location, Euripides locates the land of the Cyclopes on the island of Sicily near Mount Etna.[40]
28
+
29
+ Like Euripides, Virgil has the Cyclopes of Polyphemus live on Sicily near Etna. For Virgil, apparently, these Homeric Cyclopes are members of the same race of Cyclopes as Hesiod's Brontes and Steropes, who live nearby.[41]
30
+
31
+ Cyclopes were also said to have been the builders of the so-called 'Cyclopean' walls of Mycenae, Tiryns, and Argos.[42] Although they can be seen as being distinct, the Cyclopean wall-builders share several features with the Hesiodic Cyclopes: both groups are craftsmen of supernatural skill, possessing enormous strength, who lived in primordial times.[43] These builder Cyclopes were apparently used to explain the construction of the stupendous walls at Mycenae and Tiryns, composed of massive stones that seemed too large and heavy to have been moved by ordinary men.[44]
32
+
33
+ These master builders were famous in antiquity from at least the fifth century BC onwards.[45] The poet Pindar has Heracles driving the cattle of Geryon through the "Cyclopean portal" of the Tirynthian king Eurystheus.[46] The mythographer Pherecydes says that Perseus brought the Cyclopes with him from Seriphos to Argos, presumably to build the walls of Mycenae.[47] Proetus, the mythical king of ancient Argos, was said to have brought a group of seven Cyclopes from Lycia to build the walls of Tiryns.[48]
34
+
35
+ The late fifth and early fourth-century BC comic poet Nicophon wrote a play called either Cheirogastores or Encheirogastores (Hands-to-Mouth), which is thought to have been about these Cyclopean wall-builders.[49] Ancient lexicographers explained the title as meaning "those who feed themselves by manual labour", and, according to Eustathius of Thessalonica, the word was used to describe the Cyclopean wall-builders, while "hands-to-mouth" was one of the three kinds of Cyclopes distinguished by scholia to Aelius Aristides.[50] Similarly, possibly deriving from Nicophon's comedy, the first-century Greek geographer Strabo says these Cyclopes were called "Bellyhands" (gasterocheiras) because they earned their food by working with their hands.[51]
36
+
37
+ The first-century natural philosopher Pliny the Elder, in his Natural History, reported a tradition, attributed to Aristotle, that the Cyclopes were the inventors of masonry towers.[52] In the same work Pliny also mentions the Cyclopes, as being among those credited with being the first to work with iron,[53] as well as bronze.[54] In addition to walls, other monuments were attributed to the Cyclopes. For example, Pausanias says that at Argos there was "a head of Medusa made of stone, which is said to be another of the works of the Cyclopes".[55]
38
+
39
+ According to the Theogony of Hesiod, Uranus (Sky) mated with Gaia (Earth) and produced eighteen children.[56] First came the twelve Titans, next came the three one-eyed Cyclopes:
40
+
41
+ Then [Gaia] bore the Cyclopes, who have very violent hearts, Brontes (Thunder) and Steropes (Lightning) and strong-spirited Arges (Bright), those who gave thunder to Zeus and fashioned the thunderbolt. These were like the gods in other regards, but only one eye was set in the middle of their foreheads;[57] and they were called Cyclopes (Circle-eyed) by name, since a single circle-shaped eye was set in their foreheads. Strength and force and contrivances were in their works.[58]
42
+
43
+ Following the Cyclopes, Gaia next gave birth to three more monstrous brothers, the Hecatoncheires, or Hundred-Handed Giants. Uranus hated his monstrous children,[59] and as soon as each was born, he imprisoned them underground, somewhere deep inside Gaia.[60] Eventually Uranus' son, the Titan Cronus, castrated Uranus, becoming the new ruler of the cosmos, but he did not release his brothers, the Cyclopes and the Hecatoncheires, from their imprisonment in Tartarus.[61]
44
+
45
+ For this failing, Gaia foretold that Cronus would eventually be overthrown by one of his children, as he had overthrown his own father. To prevent this, as each of his children were born, Cronus swallowed them whole; as gods they were not killed, but imprisoned within his belly. His wife, Rhea, sought her mother's advice to avoid losing all of her children in this way, and Gaia advised her to give Cronus a stone wrapped in swaddling clothes. In this way, Zeus was spared the fate of his elder siblings, and was hidden away by his mother. When he was grown, Zeus forced his father to vomit up his siblings, who rebelled against the Titans. Zeus released the Cyclopes and Hecatoncheires, who became his allies. While the Hundred-Handed Giants fought alongside Zeus and his siblings, the Cyclopes gave Zeus his great weapon, the thunderbolt, with the aid of which he was eventually able to overthrow the Titans, establishing himself as the ruler of the cosmos.[62]
46
+
47
+ In Book 9 of the Odyssey, Odysseus describes to his hosts the Phaeacians his encounter with the Cyclops Polyphemus.[63] Having just left the land of the Lotus-eaters, Odysseus says "Thence we sailed on, grieved at heart, and we came to the land of the Cyclopes".[64]
48
+ Homer had already (Book 6) described the Cyclopes as "men overweening in pride who plundered [their neighbors the Phaeacians] continually",[65] driving the Phaeacians from their home. In Book 9, Homer gives a more detailed description of the Cyclopes as:
49
+
50
+ an overweening and lawless folk, who, trusting in the immortal gods, plant nothing with their hands nor plough; but all these things spring up for them without sowing or ploughing, wheat, and barley, and vines, which bear the rich clusters of wine, and the rain of Zeus gives them increase. Neither assemblies for council have they, nor appointed laws, but they dwell on the peaks of lofty mountains in hollow caves, and each one is lawgiver to his children and his wives, and they reck nothing one of another.[66]
51
+
52
+ According to Homer, the Cyclopes have no ships, nor ship-wrights, nor other craftsmen, and know nothing of agriculture.[67] They have no regard for Zeus or the other gods, for the Cyclopes hold themselves to be "better far than they".[68]
53
+
54
+ Homer says that "godlike" Polyphemus, the son of Poseidon and the nymph Thoosa, the daughter of Phorcys, is the "greatest among all the Cyclopes".[69] Homer describes Polyphemus as a shepherd who:
55
+
56
+ mingled not with others, but lived apart, with his heart set on lawlessness. For he was fashioned a wondrous monster, and was not like a man that lives by bread, but like a wooded peak of lofty mountains, which stands out to view alone, apart from the rest,[70] ... [and as] a savage man that knew naught of justice or of law.[71]
57
+
58
+ Although Homer does not say explicitly that Polyphemus is one-eyed, for the account of his blinding to make sense he must be.[72] If Homer meant for the other Cyclopes to be assumed (as they usually are) to be like Polyphemus, then they too will be one-eyed sons of Poseidon; however Homer says nothing explicit about either the parentage or appearance of the other Cyclopes.[73]
59
+
60
+ The Hesiodic Cyclopes (the makers of Zeus' thunderbolts), the Homeric Cyclopes (the brothers of Polyphemus), and the Cyclopean wall-builders all figure in the plays of the fifth-century BC playwright Euripides. In his play Alcestis, we are told that the Cyclopes who forged Zeus' thunderbolts were killed by Apollo. The prologue of that play has Apollo explain:
61
+
62
+ House of Admetus! In you I brought myself to taste the bread of menial servitude, god though I am. Zeus was the cause: he killed my son Asclepius, striking him in the chest with the lightning bolt, and in anger at this I slew the Cyclopes who forged Zeus’s fire. As my punishment for this Zeus compelled me to be a serf in the house of a mortal.[74]
63
+
64
+ Euripides' satyr play Cyclops tells the story of Odysseus' encounter with the Cyclops Polyphemus, famously told in Homer's Odyssey. It takes place on the island of Sicily near the volcano Mount Etna where, according to the play, "Poseidon’s one-eyed sons, the man-slaying Cyclopes, dwell in their remote caves."[75] Euripides describes the land where Polyphemus' brothers live as having no "walls and city battlements", and as a place where "no men dwell".[76] The Cyclopes have no rulers and no government, "they are solitaries: no one is anyone’s subject."[77] They grow no crops, living only "on milk and cheese and the flesh of sheep."[78] They have no wine, "hence the land they dwell in knows no dancing".[79] They show no respect for the important Greek value of Xenia ("guest friendship"). When Odysseus asks "are they god-fearing and hospitable toward strangers" (φιλόξενοι δὲ χὤσιοι περὶ ξένους), he is told: "most delicious, they maintain, is the flesh of strangers ... everyone who has come here has been slaughtered."[80]
65
+
66
+ Several of Euripides' plays also make reference to the Cyclopean wall-builders. Euripides calls their walls "heaven-high" (οὐράνια),[81] describes "the Cyclopean foundations" of Mycenae as "fitted snug with red plumbline and mason’s hammer",[82] and calls Mycenae "O hearth built by the Cyclopes".[83] He calls Argos "the city built by the Cyclopes",[84] refers to "the temples the Cyclopes built"[85] and describes the "fortress of Perseus" as "the work of Cyclopean hands".[86]
67
+
68
+ For the third-century BC poet Callimachus, the Hesiodic Cyclopes Brontes, Steropes and Arges become assistants at the forge of the smith-god Hephaestus. Callimachus has the Cyclopes make Artemis' bow, arrows and quiver, just as they had (apparently) made those of Apollo.[87] Callimachus locates the Cyclopes on the island of Lipari, the largest of the Aeolian Islands in the Tyrrhenian Sea off the northern coast of Sicily, where Artemis finds them "at the anvils of Hephaestus" making a horse-trough for Poseidon:
69
+
70
+ And the nymphs were affrighted when they saw the terrible monsters like unto the crags of Ossa: all had single eyes beneath their brows, like a shield of fourfold hide for size, glaring terribly from under; and when they heard the din of the anvil echoing loudly, and the great blast of the bellows and the heavy groaning of the Cyclopes themselves. For Aetna cried aloud, and Trinacia cried, the seat of the Sicanians, cried too their neighbour Italy, and Cyrnos therewithal uttered a mighty noise, when they lifted their hammers above their shoulders and smote with rhythmic swing the bronze glowing from the furnace or iron, labouring greatly. Wherefore the daughters of Oceanus could not untroubled look upon them face to face nor endure the din in their ears. No shame to them! on those not even the daughters of the Blessed look without shuddering, though long past childhood’s years. But when any of the maidens doth disobedience to her mother, the mother calls the Cyclopes to her child—Arges or Steropes; and from within the house comes Hermes, stained with burnt ashes. And straightway he plays bogey to the child and she runs into her mother’s lap, with her hands upon her eyes. But thou, Maiden, even earlier, while yet but three years old, when Leto came bearing thee in her arms at the bidding of Hephaestus that he might give thee handsel and Brontes set thee on his stout knees—thou didst pluck the shaggy hair of his great breast and tear it out by force. And even unto this day the mid part of his breast remains hairless, even as when mange settles on a man’s temples and eats away the hair.[88]
71
+
72
+ And Artemis asks:
73
+
74
+ Cyclopes, for me too fashion ye a Cydonian bow and arrows and a hollow casket for my shafts; for I also am a child of Leto, even as Apollo. And if I with my bow shall slay some wild creature or monstrous beast, that shall the Cyclopes eat.[89]
75
+
76
+ The first-century BC Roman poet Virgil seems to combine the Cyclopes of Hesiod with those of Homer, having them live alongside each other in the same part of Sicily.[90] In his Latin epic Aeneid, Virgil has the hero Aeneas follow in the footsteps of Odysseus, the hero of Homer's Odyssey. Approaching Sicily and Mount Etna, in Book 3 of the Aeneid, Aeneas manages to survive the dangerous Charybdis, and at sundown comes to the land of the Cyclopes, while "near at hand Aetna thunders".[91] The Cyclopes are described as being "in shape and size like Polyphemus ... a hundred other monstrous Cyclopes [who] dwell all along these curved shores and roam the high mountains."[92] After narrowly escaping from Polyphemus, Aeneas tells how, responding to the Cyclops' "mighty roar":
77
+
78
+ the race of the Cyclopes, roused from the woods and high mountains, rush to the harbour and throng the shores. We see them, standing impotent with glaring eye, the Aetnean brotherhood, their heads towering to the sky, a grim conclave: even as when on a mountaintop lofty oaks or cone-clad cypresses stand in mass, a high forest of Jove or grove of Diana.[93]
79
+
80
+ Later, in Book 8 of the same poem, Virgil has the Hesiodic Cyclopes Brontes and Steropes, along with a third Cyclops, whom he names Pyracmon, work in an extensive network of caverns stretching from Mount Etna to the Aeolian Islands.[94] As the assistants of the smith-god Vulcan, they forge various items for the gods: thunderbolts for Jupiter, a chariot for Mars, and armor for Minerva:
81
+
82
+ In the vast cave the Cyclopes were forging iron—Brontes and Steropes and bare-limbed Pyracmon. They had a thunderbolt, which their hands had shaped, like the many that the Father hurls down from all over heaven upon earth, in part already polished, while part remained unfinished. Three shafts of twisted hail they had added to it, three of watery cloud, three of ruddy flame and the winged South Wind; now they were blending into the work terrifying flashes, noise, and fear, and wrath with pursuing flames. Elsewhere they were hurrying on for Mars a chariot and flying wheels, with which he stirs up men and cities; and eagerly with golden scales of serpents were burnishing the awful aegis, armour of wrathful Pallas, the interwoven snakes, and on the breast of the goddess the Gorgon herself, with neck severed and eyes revolving.[95]
83
+
84
+ The mythographer Apollodorus gives an account of the Hesiodic Cyclopes similar to Hesiod's, but with some differences and additional details.[96] According to Apollodorus, the Cyclopes were born after the Hundred-Handers, but before the Titans (unlike in Hesiod, who makes the Titans the eldest and the Hundred-Handers the youngest).[97]
85
+
86
+ Uranus bound the Hundred-Handers and the Cyclopes, and cast them all into Tartarus, "a gloomy place in Hades as far distant from earth as earth is distant from the sky." But the Titans are, apparently, allowed to remain free (unlike in Hesiod).[98] When the Titans overthrew Uranus, they freed the Hundred-Handers and Cyclopes (unlike in Hesiod, where they apparently remained imprisoned), and made Cronus their sovereign.[99] But Cronus once again bound the six brothers, and reimprisoned them in Tartarus.[100]
87
+
88
+ As in Hesiod's account, Rhea saved Zeus from being swallowed by Cronus, and Zeus was eventually able to free his siblings, and together they waged war against the Titans.[101] According to Apollodorus, in the tenth year of that war, Zeus learned from Gaia, that he would be victorious if he had the Hundred-Handers and the Cyclopes as allies. So Zeus slew their warder Campe (a detail not found in Hesiod) and released them, and in addition to giving Zeus his thunderbolt (as in Hesiod), the Cyclopes also gave Poseidon his trident, and Hades a helmet (presumably the same cap of invisibility which Athena borrowed in the Iliad), and "with these weapons the gods overcame the Titans".[102]
89
+
90
+ Apollodorus also mentions a tomb of Geraestus, "the Cyclops" at Athens upon which, in the time of king Aegeus, the Athenians sacrificed the daughters of Hyacinth.[103]
91
+
92
+ Dionysiaca, composed in the 4th or 5th century AD, is the longest surviving poem from antiquity – 20,426 lines. It was written by the poet Nonnus in the Homeric dialect, and its main subject is the life of Dionysus. It describes a war that occurred between Dionysus' troops and those of the Indian king Deriades. In book 28 of the Dionysiaca the Cyclopes join the Dionysian troops, and they prove to be great warriors and crush most of the Indian king's troops.[104]
93
+
94
+ Depictions of the Cyclops Polyphemus have differed radically, depending on the literary genres in which he has appeared, and have given him an individual existence independent of the Homeric herdsman encountered by Odysseus. In the epic he was a man-eating monster dwelling in an unspecified land. Some centuries later, a dithyramb by Philoxenus of Cythera, followed by several episodes by the Greek pastoral poets, created of him a comedic and generally unsuccessful lover of the water nymph Galatea. In the course of these he woos his love to the accompaniment of either a cithara or the pan-pipes. Such episodes take place on the island of Sicily, and it was here that the Latin poet Ovid also set the tragic love story of Polyphemus and Galatea recounted in the Metamorphoses.[105] Still later tradition made him the eventually successful husband of Galatea and the ancestor of the Celtic and Illyrian races.[106]
95
+
96
+ From at least the fifth-century BC onwards, Cyclopes have been associated with the island of Sicily, or the volcanic Aeolian islands just off Sicily's north coast. The fifth-century BC historian Thucydides says that the "earliest inhabitants" of Sicily were reputed to be the Cyclopes and Laestrygones (another group of man-eating giants encountered by Odysseus in Homer's Odyssey).[107] Thucydides also reports the local belief that Hephaestus (along with his Cyclopean assistants?) had his forge on the Aeolian island of Vulcano.[108]
97
+
98
+ Euripides locates Odysseus' Cyclopes on the island of Sicily, near the volcano Mount Etna,[109] and in the same play addresses Hephaestus as "lord of Aetna".[110] The poet Callimachus locates the Cyclopes' forge on the island of Lipari, the largest of the Aeolians.[111] Virgil associates both the Hesiodic and the Homeric Cyclopes with Sicily. He has the thunderbolt makers: "Brontes and Steropes and bare-limbed Pyracmon", work in vast caverns extending underground from Mount Etna to the island of Vulcano,[112] while the Cyclops brethren of Polyphemus live on Sicily where "near at hand Aetna thunders".[113]
99
+
100
+ As Thucydides notes, in the case of Hephaestus' forge on Vulcano,[114] locating the Cyclopes' forge underneath active volcanoes provided an explanation for the fire and smoke often seen rising from them.[115]
101
+
102
+ For the ancient Greeks the name "Cyclopes" meant "Circle-eyes" or "Round-eyes",[116] derived from the Greek kúklos ("circle")[117] and ops ("eye").[118] This meaning can be seen as early as Hesiod's Theogony (8th–7th century BC),[119] which explains that the Cyclopes were called that "since a single circle-shaped eye was set in their foreheads".[120] Adalbert Kuhn, expanding on Hesiod's etymology, proposed a connection between the first element kúklos (which can also mean "wheel")[121] and the "wheel of the sun", producing the meaning "wheel (of the sun)-eyes".[122] Other etymologies have been proposed which derive the second element of the name from the Greek klops ("thief")[123] producing the meanings "wheel-thief" or "cattle-thief".[124] Although Walter Burkert has described Hesiod's etymology as "not too attractive",[125] Hesiod's explanation still finds acceptance by modern scholars.[126]
103
+
104
+ A possible origin for one-eyed Cyclopes was advanced by the palaeontologist Othenio Abel in 1914.[127] Abel proposed that fossil skulls of Pleistocene dwarf elephants, commonly found in coastal caves of Italy and Greece, may have given rise to the Polyphemus story. Abel suggested that the large, central nasal cavity (for the trunk) in the skull might have been interpreted as a large single eye-socket.[128]
105
+
106
+ A rare birth defect can result in foetuses (both human and animal) which have a single eye located in the middle of their foreheads.[129] Students of teratology have raised the possibility of a link between this deformity and the myth of the one-eyed Cyclopes.[130] However, in the case of humans with a single eye, they have a nose above the single eye,[131] rather than below, as in ancient Greek depictions of the Cyclops Polyphemus.[132]
en/5562.html.txt ADDED
@@ -0,0 +1,217 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ A synagogue (/ˈsɪnəɡɒɡ/; from Ancient Greek συναγωγή, synagogē, 'assembly'; Hebrew: בית כנסת bet knesset, 'house of assembly', or בית תפילה bet tefila, "house of prayer"; Yiddish: שול shul, Ladino: אשנוגה esnoga, 'bright as fire'; or קהל kahal) is a Jewish or Samaritan house of worship. Synagogues have a large place for prayer (the main sanctuary) and may also have smaller rooms for study and sometimes a social hall and offices. Some have a separate room for Torah study, called the בית מדרש beth midrash, lit. "house of study".
2
+
3
+ Synagogues are consecrated spaces used for the purpose of prayer, reading of the Tanakh (the entire Hebrew Bible, including the Torah), study and assembly; however, a synagogue is not necessary for worship. Halakha holds that communal Jewish worship can be carried out wherever ten Jews (a minyan) assemble. Worship can also be carried out alone or with fewer than ten people assembled together. However, halakha considers certain prayers as communal prayers and therefore they may be recited only by a minyan. In terms of its specific ritual and liturgical functions, the synagogue does not replace the long-since destroyed Temple in Jerusalem.
4
+
5
+ In the New Testament, the word appears 56 times, mostly in the Synoptic Gospels, but also in the Gospel of John (John 9:22; 18:20) and the Book of Revelation (Rev. 2:9; 3:9). It is used in the sense of 'assembly' in the Epistle of James (James 2:2).
6
+
7
+ Israelis use the Hebrew term beyt knesset "house of assembly". Ashkenazi Jews have traditionally used the Yiddish term shul (cognate with the German Schule, 'school') in everyday speech. Sephardi Jews and Romaniote Jews generally use the term kal (from the Hebrew Ḳahal, meaning "community"). Spanish Jews call the synagogue a sinagoga and Portuguese Jews call it an esnoga. Persian Jews and some Karaite Jews also use the term kenesa, which is derived from Aramaic, and some Mizrahi Jews use kenis. Some Reform, Reconstructionist, and Conservative Jews use the word temple. The Greek word synagogue is used in English (and in German, French and most Romance languages) to cover all of the preceding possibilities.[1]
8
+
9
+ Although synagogues existed a long time before the destruction of the Second Temple in 70 CE, communal worship, while the Temple still stood, centered on the korbanot ("sacrificial offerings") brought by the kohanim ("priests") in the Temple in Jerusalem. The all-day Yom Kippur service, in fact, was an event in which the congregation both observed the movements of the kohen gadol ("high priest") as he offered the day's sacrifices and prayed for his success.
10
+
11
+ According to Jewish tradition, the men of the Great Assembly (around 5th century BCE) formalized and standardized the language of the Jewish prayers.[2] Prior to that people prayed as they saw fit, with each individual praying in his or her own way, and there were no standard prayers that were recited.
12
+
13
+ Johanan ben Zakai, one of the leaders at the end of the Second Temple era, promulgated the idea of creating individual houses of worship in whatever locale Jews found themselves. This contributed to the continuity of the Jewish people by maintaining a unique identity and a portable way of worship despite the destruction of the Temple, according to many historians.[citation needed]
14
+
15
+ Synagogues in the sense of purpose-built spaces for worship, or rooms originally constructed for some other purpose but reserved for formal, communal prayer, however, existed long before the destruction of the Second Temple.[3][unreliable source?] The earliest archaeological evidence for the existence of very early synagogues comes from Egypt, where stone synagogue dedication inscriptions dating from the 3rd century BCE prove that synagogues existed by that date.[4][unreliable source?] More than a dozen Jewish (and possibly Samaritan) Second Temple era synagogues have been identified by archaeologists in Israel and other countries belonging to the Hellenistic world.[3]
16
+
17
+ Any Jew or group of Jews can build a synagogue. Synagogues have been constructed by ancient Jewish kings, by wealthy patrons, as part of a wide range of human institutions including secular educational institutions, governments, and hotels, by the entire community of Jews living in a particular place, or by sub-groups of Jews arrayed according to occupation, ethnicity (i.e. the Sephardic, Polish or Persian Jews of a town), style of religious observance (i.e., a Reform or an Orthodox synagogue), or by the followers of a particular rabbi.
18
+
19
+ It has been theorized that the synagogue became a place of worship in the region upon the destruction of the Second Temple during the First Jewish–Roman War; however, others speculate that there had been places of prayer, apart from the Temple, during the Hellenistic period. The popularization of prayer over sacrifice during the years prior to the destruction of the Second Temple in 70 CE[5] had prepared the Jews for life in the diaspora, where prayer would serve as the focus of Jewish worship.[6]
20
+
21
+ Despite the possibility[dubious – discuss] of synagogue-like spaces prior to the First Jewish–Roman War, the synagogue emerged as a stronghold for Jewish worship upon the destruction of the Temple. For Jews living in the wake of the Revolt, the synagogue functioned as a "portable system of worship". Within the synagogue, Jews worshiped by way of prayer rather than sacrifices, which had previously served as the main form of worship within the Second Temple.[7]
22
+
23
+ A number of synagogues have been excavated that pre-date the destruction of the Jerusalem Temple in 70 CE.
24
+
25
+ First century synagogue at Gamla
26
+
27
+ First century synagogue at Masada
28
+
29
+ First century synagogue at Magdala
30
+
31
+ First century synagogue at Herodium
32
+
33
+ The rabbi and philosopher Maimonides (1138–1204) described the various customs in his day with respect to local synagogues:
34
+
35
+ Synagogues and houses of study must be treated with respect. They are swept and sprinkled [with water] to lay the dust. In Spain and the Maghreb, in Babylonia and in the Holy Land, it is customary to kindle lamps in the synagogues and to spread mats on the floor upon which the worshippers sit. In the lands of Edom (Christendom), they sit in synagogues upon chairs [or benches].[13]
36
+
37
+ Mosaic in the Tzippori Synagogue
38
+
39
+ Ruins of the ancient synagogue of Kfar Bar'am
40
+
41
+ The Samaritan house of worship is also called a synagogue.[14] During the 3rd and 2nd centuries BCE, during the Hellenistic period, the Greek word used in the Diaspora by Samaritans and Jews was the same: proseuchē (literally, a place of prayer); a later inscription, of the 3rd or 4th century CE, uses a similar Greek term: euktērion (prayer house).[14] The oldest Samaritan synagogue discovered so far is from Delos in the Aegean Islands, with an inscription dated between 250 and 175 BCE, while most Samaritan synagogues excavated in the wider Land of Israel, and in ancient Samaria in particular, were built during the 4th–7th centuries, at the very end of the Roman and throughout the Byzantine period.[14]
42
+
43
+ The elements which distinguish Samaritan synagogues from contemporary Jewish ones are:
44
+
45
+ Ancient Samaritan synagogues are mentioned by literary sources or have been found by archaeologists in the Diaspora, in the wider Holy Land, and specifically in Samaria.[14]
46
+
47
+ During the first Christian centuries, Jewish-Christians used houses of worship known in academic literature as synagogue-churches. Scholars have claimed to have identified such houses of worship of the Jews who had accepted Jesus as the Messiah in Jerusalem[15] and Nazareth.[16][17]
48
+
49
+ There is no set blueprint for synagogues and the architectural shapes and interior designs of synagogues vary greatly. In fact, the influence from other local religious buildings can often be seen in synagogue arches, domes and towers.
50
+
51
+ Historically, synagogues were built in the prevailing architectural style of their time and place. Thus, the synagogue in Kaifeng, China looked very like Chinese temples of that region and era, with its outer wall and open garden in which several buildings were arranged. The styles of the earliest synagogues resembled the temples of other cults of the Eastern Roman Empire. The surviving synagogues of medieval Spain are embellished with mudéjar plasterwork. The surviving medieval synagogues in Budapest and Prague are typical Gothic structures.
52
+
53
+ With the emancipation of Jews in Western European countries, which not only enabled Jews to enter fields of enterprise from which they were formerly barred, but gave them the right to build synagogues without needing special permissions, synagogue architecture blossomed. Large Jewish communities wished to show not only their wealth but also their newly acquired status as citizens by constructing magnificent synagogues. These were built across Western Europe and in the United States in all of the historicist or revival styles then in fashion. Thus there were Neoclassical, Neo-Byzantine, Romanesque Revival, Moorish Revival, Gothic Revival, and Greek Revival synagogues. There are Egyptian Revival synagogues and even one Mayan Revival synagogue. In the 19th century and early 20th century heyday of historicist architecture, however, most historicist synagogues, even the most magnificent ones, did not attempt a pure style, or even any particular style, and are best described as eclectic.
54
+
55
+ In the post-war era, synagogue architecture abandoned historicist styles for modernism.
56
+
57
+ Central Synagogue of Aleppo, Aleppo, Syria (5th century)
58
+
59
+ Paradesi Synagogue, Kochi, India (1568)
60
+
61
+ Sarajevo Synagogue, Sarajevo, Bosnia and Herzegovina (1902)
62
+
63
+ Sofia Synagogue, Sofia, Bulgaria (1909)
64
+
65
+ Beth Sholom Congregation, Elkins Park, USA (1959)
66
+
67
+ Ohel Jakob synagogue, Munich, Germany (2006)
68
+
69
+ All synagogues contain a Bimah, a large, raised, reader's platform (called teḇah (reading dais) by Sephardim), where the Torah scroll is placed to be read. In Sephardi synagogues it is also used as the prayer leader's reading desk.[18]
70
+
71
+ Bimah of the Saluzzo Synagogue, Saluzzo, Italy
72
+
73
+ Bimah of the Touro Synagogue in Newport, Rhode Island, USA
74
+
75
+ Cast-iron Bimah of the Old Synagogue in Kraków, Poland
76
+
77
+ In Ashkenazi synagogues, the Torah was read on a reader's table located in the center of the room, while the leader of the prayer service, the hazzan, stood at his own lectern or table, facing the Ark. In Sephardic synagogues, the table for reading the Torah (reading dais) was commonly placed at the opposite side of the room from the Torah Ark, leaving the center of the floor empty for the use of a ceremonial procession carrying the Torah between the Ark and the reading table.[19] Most contemporary synagogues feature a lectern for the rabbi.[20]
78
+
79
+ The Torah Ark, called in Hebrew ארון קודש Aron Kodesh or 'holy chest', and alternatively called the heikhal—היכל or 'temple' by Sephardic Jews, is a cabinet in which the Torah scrolls are kept.
80
+
81
+ The ark in a synagogue is almost always positioned in such a way that those who face it are facing towards Jerusalem. Thus, sanctuary seating plans in the Western world generally face east, while those east of Israel face west. Sanctuaries in Israel face towards Jerusalem. Occasionally synagogues face other directions for structural reasons; in such cases, some individuals might turn to face Jerusalem when standing for prayers, but the congregation as a whole does not.
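The east/west pattern described above follows from simple spherical geometry: for a congregation west of Israel the great-circle direction to Jerusalem points roughly east, and for one east of Israel it points roughly west. The short Python sketch below (not part of the article; the coordinates are approximate and the helper name is a hypothetical choice) computes the initial compass bearing from a given latitude and longitude to Jerusalem.

import math

# Illustrative sketch only: initial great-circle bearing from a location to Jerusalem.
# Coordinates are approximate; the function name is an assumption for this example.
JERUSALEM_LAT, JERUSALEM_LON = 31.778, 35.235  # degrees

def bearing_to_jerusalem(lat_deg, lon_deg):
    """Initial compass bearing in degrees (0 = north, 90 = east, 270 = west)."""
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    lat2, lon2 = math.radians(JERUSALEM_LAT), math.radians(JERUSALEM_LON)
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

# A congregation in New York faces roughly east-northeast, one in Delhi roughly west-northwest.
print(round(bearing_to_jerusalem(40.713, -74.006)))  # New York
print(round(bearing_to_jerusalem(28.614, 77.209)))   # Delhi

In practice, of course, most congregations simply follow the traditional convention (facing east in the Western world) rather than an exact computed bearing.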
82
+
83
+ The Ark is reminiscent of the Ark of the Covenant, which held the tablets inscribed with the Ten Commandments. This is the holiest spot in a synagogue, equivalent to the Holy of Holies. The Ark is often closed with an ornate curtain, the parochet פרוכת, which hangs outside or inside the ark doors.
84
+
85
+ Other traditional features include a continually lit lamp or lantern, usually electric in contemporary synagogues, called the ner tamid (נר תמיד), the "Eternal Light", used as a way to honor the Divine Presence.[21]
86
+
87
+ A synagogue may be decorated with artwork, but in the Rabbinic and Orthodox tradition, three-dimensional sculptures and depictions of the human body are not allowed as these are considered akin to idolatry.[citation needed]
88
+
89
+ Originally, synagogues were made devoid of much furniture, the Jewish congregants in Spain, the Maghreb (North Africa), Babylonia, the Land of Israel and Yemen having a custom to sit upon the floor, which had been strewn with mats and cushions, rather than upon chairs or benches. In other European towns and cities, however, Jewish congregants would sit upon chairs and benches.[22] Today, the custom has spread in all places to sit upon chairs and benches.[citation needed]
90
+
91
+ Until the 19th century, in an Ashkenazi synagogue, all seats most often faced the Torah Ark. In a Sephardic synagogue, seats were usually arranged around the perimeter of the sanctuary, but when the worshipers stood up to pray, everyone faced the Ark.[citation needed]
92
+
93
+ Many current synagogues have an elaborate chair named for the prophet Elijah, which is only sat upon during the ceremony of Brit milah.[23]
94
+
95
+ In ancient synagogues, a special chair placed on the wall facing Jerusalem and next to the Torah Shrine was reserved for the prominent members of the congregation and for important guests.[24] Such a stone-carved and inscribed seat was discovered at archaeological excavations in the synagogue at Chorazin in Galilee and dates from the 4th–6th century;[25] another one was discovered at the Delos Synagogue, complete with a footstool.
96
+
97
+ In Yemen, the Jewish custom was to remove one's shoes immediately prior to entering the synagogue, a custom that had been observed by Jews in other places in earlier times.[26] The same practice of removing one's shoes before entering the synagogue was also largely observed among Jews in Morocco in the early 20th-century. Today, the custom of removing one's shoes is no longer practiced in Israel.[citation needed]
98
+
99
+ In Orthodox synagogues, men and women do not sit together. The synagogue features a partition (mechitza) dividing the men's and women's seating areas, or a separate women's section located on a balcony.[27]
100
+
101
+ The German-Jewish Reform movement, which arose in the early 19th century, made many changes to the traditional look of the synagogue, in keeping with its desire to simultaneously stay Jewish yet be accepted by the surrounding culture.
102
+
103
+ The first Reform synagogue, which opened in Hamburg in 1811, introduced changes that made the synagogue look more like a church. These included: the installation of an organ to accompany the prayers (even on Shabbat, when musical instruments are proscribed by halakha), a choir to accompany the hazzan, and vestments for the synagogue rabbi to wear.[28]
104
+
105
+ In following decades, the central reader's table, the Bimah, was moved to the front of the Reform sanctuary—previously unheard-of in Orthodox synagogues.[citation needed]
106
+
107
+ Gender separation was also removed.[citation needed]
108
+
109
+ Synagogues often take on a broader role in modern Jewish communities and may include additional facilities such as a catering hall, kosher kitchen, religious school, library, day care center and a smaller chapel for daily services.
110
+
111
+ Since many Orthodox and some non-Orthodox Jews prefer to collect a minyan (a quorum of ten) rather than pray alone, they commonly assemble at pre-arranged times in offices, living rooms, or other spaces when these are more convenient than formal synagogue buildings. A room or building that is used this way can become a dedicated small synagogue or prayer room. Among Ashkenazi Jews they are traditionally called shtiebel (שטיבל, pl. shtiebelekh or shtiebels, Yiddish for "little house"), and are found in Orthodox communities worldwide.
112
+
113
+ Another type of communal prayer group, favored by some contemporary Jews, is the chavurah (חבורה, pl. chavurot, חבורות), or prayer fellowship. These groups meet at a regular place and time, either in a private home or in a synagogue or other institutional space. In antiquity, the Pharisees lived near each other in chavurot and dined together to ensure that none of the food was unfit for consumption.[29]
114
+
115
+ Some synagogues bear the title "great synagogue".[dubious – discuss]
116
+
117
+
118
+
119
+ The Old Synagogue (Erfurt) is the oldest intact synagogue building in Europe
120
+
121
+ The Synagogue in the Gerard Doustraat in Amsterdam, Netherlands.
122
+
123
+ The New Synagogue in Berlin, Germany.
124
+
125
+ The Great Synagogue of Basel in Basel, Switzerland.
126
+
127
+ The Choral Synagogue in Moscow, Russia.
128
+
129
+ The Great Synagogue of Santiago, Chile.
130
+
131
+ The Portuguese Synagogue in Amsterdam, Netherlands.
132
+
133
+ The Dohány Street Synagogue in Budapest, Hungary.
134
+
135
+ The Great Synagogue of Plzeň, Czech Republic.
136
+
137
+ The main synagogue of the city of Frankfurt am Main (Germany) before the Kristallnacht.
138
+
139
+ The Roonstrasse Synagogue in Cologne, Germany.
140
+
141
+ The Lesko Synagogue in Lesko, Poland.
142
+
143
+ The Bobowa Synagogue in Bobowa, Poland.
144
+
145
+ Sukkat Shalom Synagogue in Belgrade, Serbia.
146
+
147
+ Jakab and Komor Square Synagogue in Subotica, Serbia.
148
+
149
+ The Jewish Street Synagogue in Novi Sad, Serbia.
150
+
151
+ Kadoorie Synagogue in Porto, Portugal. The largest synagogue in the Iberian Peninsula.
152
+
153
+ The Baal Shem Tov's shul in Medzhybizh, Ukraine (c. 1915), destroyed and recently rebuilt.
154
+
155
+ The Belzer synagogue of Belz, Ukraine. It no longer exists.
156
+
157
+ The Cymbalista Synagogue and Jewish Heritage Center at Tel Aviv University.
158
+
159
+ The synagogue of Kherson, Ukraine.
160
+
161
+ Or Zaruaa Synagogue, Jerusalem, Israel founded in 1926.
162
+
163
+ The Hurva Synagogue towered over the Jewish Quarter of Jerusalem from 1864 until 1948, when it was destroyed in war
164
+
165
+ The remains of the Hurva Synagogue as they appeared from 1977 to 2003. The synagogue was rebuilt in 2010.
166
+
167
+ The Ashkenazi Synagogue of Istanbul, Turkey, founded in 1900
168
+
169
+ The interior of a Karaite synagogue
170
+
171
+ The Central Synagogue on Lexington Avenue in Manhattan, New York City
172
+
173
+ Temple Emanu-El, Neo-Byzantine style synagogue in Miami Beach, Florida
174
+
175
+ The Grand Choral Synagogue of St. Petersburg, Russia
176
+
177
+ The Paradesi Synagogue in Kochi, Kerala, India
178
+
179
+ The Great Choral Synagogue in Podil, Kiev, Ukraine
180
+
181
+ Great Synagogue of Rome, Italy
182
+
183
+ Abuhav synagogue, Israel
184
+
185
+ Ari Ashkenazi Synagogue, Israel
186
+
187
+ Santa María la Blanca, Spain
188
+
189
+ Córdoba Synagogue, Spain
190
+
191
+ El Transito Synagogue, Spain
192
+
193
+ Székesfehérvár Neolog synagogue, Hungary (1869; photo: c. 1930s). It no longer exists, however, the memorial plaques were moved to a building at the city's Jewish cemetery.
194
+
195
+ Sofia Synagogue, Bulgaria
196
+
197
+ Synagogue of Târgu Mureș, Romania.
198
+
199
+ Interior of a "caravan shul" (synagogue housed in a trailer-type facility), Neve Yaakov, Jerusalem
200
+
201
+ Ohev Sholom - The National Synagogue (opened 1960), mid-century building with expressionist overtones; Washington, D.C.
202
+
203
+ Beth Yaakov Synagogue, Switzerland
204
+
205
+ Sanctuary ark, Lincoln Square Synagogue, New York City (2013), created by David Ascalon
206
+
207
+ Synagogue, Szombathely, Hungary
208
+
209
+ Bevis Marks Synagogue, City of London, the oldest synagogue in the United Kingdom
210
+
211
+ The Choral Temple, Bucharest, Romania.
212
+
213
+ Stockholm Synagogue, Sweden
214
+
215
+ Brisbane Synagogue, Brisbane, Australia
216
+
217
+ Gothic interior of the 13th-century Old New Synagogue of Prague
en/5563.html.txt ADDED
@@ -0,0 +1,199 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+
4
+
5
+ Human immunodeficiency virus infection and acquired immune deficiency syndrome (HIV/AIDS) is a spectrum of conditions caused by infection with the human immunodeficiency virus (HIV).[9][10][11] Following initial infection a person may not notice any symptoms, or may experience a brief period of influenza-like illness.[4] Typically, this is followed by a prolonged period with no symptoms.[5] If the infection progresses, it interferes more with the immune system, increasing the risk of developing common infections such as tuberculosis, as well as other opportunistic infections, and tumors which are otherwise rare in people who have normal immune function.[4] These late symptoms of infection are referred to as acquired immunodeficiency syndrome (AIDS).[5] This stage is often also associated with unintended weight loss.[5]
6
+
7
+ HIV is spread primarily by unprotected sex (including anal and oral sex), contaminated blood transfusions, hypodermic needles, and from mother to child during pregnancy, delivery, or breastfeeding.[12] Some bodily fluids, such as saliva, sweat and tears, do not transmit the virus.[13] HIV is a member of the group of viruses known as retroviruses.[14]
8
+
9
+ Methods of prevention include safe sex, needle exchange programs, treating those who are infected, and pre- & post-exposure prophylaxis.[4] Disease in a baby can often be prevented by giving both the mother and child antiretroviral medication.[4] There is no cure or vaccine; however, antiretroviral treatment can slow the course of the disease and may lead to a near-normal life expectancy.[5][6] Treatment is recommended as soon as the diagnosis is made.[15] Without treatment, the average survival time after infection is 11 years.[7]
10
+
11
+ In 2018, about 37.9 million people were living with HIV and it resulted in 770,000 deaths.[8] An estimated 20.6 million of these live in eastern and southern Africa.[16] Between the time that AIDS was identified (in the early 1980s) and 2018, the disease caused an estimated 32 million deaths worldwide.[8] HIV/AIDS is considered a pandemic—a disease outbreak which is present over a large area and is actively spreading.[17] HIV made the jump from other primates to humans in west-central Africa in the early-to-mid 20th century.[18] AIDS was first recognized by the United States Centers for Disease Control and Prevention (CDC) in 1981 and its cause—HIV infection—was identified in the early part of the decade.[19]
12
+
13
+ HIV/AIDS has had a large impact on society, both as an illness and as a source of discrimination.[20] The disease also has large economic impacts.[20] There are many misconceptions about HIV/AIDS, such as the belief that it can be transmitted by casual non-sexual contact.[21] The disease has become subject to many controversies involving religion, including the Catholic Church's position not to support condom use as prevention.[22] It has attracted international medical and political attention as well as large-scale funding since it was identified in the 1980s.[23]
14
+
15
+ There are three main stages of HIV infection: acute infection, clinical latency, and AIDS.[1][24]
16
+
17
+ The initial period following the contraction of HIV is called acute HIV, primary HIV or acute retroviral syndrome.[24][25] Many individuals develop an influenza-like illness or a mononucleosis-like illness 2–4 weeks after exposure, while others have no significant symptoms.[26][27] Symptoms occur in 40–90% of cases and most commonly include fever, large tender lymph nodes, throat inflammation, a rash, headache, tiredness, and/or sores of the mouth and genitals.[25][27] The rash, which occurs in 20–50% of cases, is classically maculopapular and appears on the trunk.[28] Some people also develop opportunistic infections at this stage.[25] Gastrointestinal symptoms, such as vomiting or diarrhea, may occur.[27] Neurological symptoms of peripheral neuropathy or Guillain–Barré syndrome may also occur.[27] The duration of the symptoms varies, but is usually one or two weeks.[27]
18
+
19
+ Owing to their nonspecific character, these symptoms are not often recognized as signs of HIV infection. Even cases that do get seen by a family doctor or a hospital are often misdiagnosed as one of the many common infectious diseases with overlapping symptoms. Thus, it is recommended that HIV be considered in people presenting with an unexplained fever who may have risk factors for the infection.[27]
20
+
21
+ The initial symptoms are followed by a stage called clinical latency, asymptomatic HIV, or chronic HIV.[1] Without treatment, this second stage of the natural history of HIV infection can last from about three years[29] to over 20 years[30] (on average, about eight years).[31] While typically there are few or no symptoms at first, near the end of this stage many people experience fever, weight loss, gastrointestinal problems and muscle pains.[1] Between 50% and 70% of people also develop persistent generalized lymphadenopathy, characterized by unexplained, non-painful enlargement of more than one group of lymph nodes (other than in the groin) for over three to six months.[24]
22
+
23
+ Although most HIV-1 infected individuals have a detectable viral load and in the absence of treatment will eventually progress to AIDS, a small proportion (about 5%) retain high levels of CD4+ T cells (T helper cells) without antiretroviral therapy for more than five years.[27][32] These individuals are classified as "HIV controllers" or long-term nonprogressors (LTNP).[32] Another group consists of those who maintain a low or undetectable viral load without anti-retroviral treatment, known as "elite controllers" or "elite suppressors". They represent approximately 1 in 300 infected persons.[33]
24
+
25
+ Acquired immunodeficiency syndrome (AIDS) is defined as an HIV infection with either a CD4+ T cell count below 200 cells per µL or the occurrence of specific diseases associated with HIV infection.[27] In the absence of specific treatment, around half of people infected with HIV develop AIDS within ten years.[27] The most common initial conditions that alert to the presence of AIDS are pneumocystis pneumonia (40%), cachexia in the form of HIV wasting syndrome (20%), and esophageal candidiasis.[27] Other common signs include recurrent respiratory tract infections.[27]
26
+
27
+ Opportunistic infections may be caused by bacteria, viruses, fungi, and parasites that are normally controlled by the immune system.[34] Which infections occur depends partly on what organisms are common in the person's environment.[27] These infections may affect nearly every organ system.[35]
28
+
29
+ People with AIDS have an increased risk of developing various viral-induced cancers, including Kaposi's sarcoma, Burkitt's lymphoma, primary central nervous system lymphoma, and cervical cancer.[28] Kaposi's sarcoma is the most common cancer, occurring in 10% to 20% of people with HIV.[36] The second-most common cancer is lymphoma, which is the cause of death of nearly 16% of people with AIDS and is the initial sign of AIDS in 3% to 4%.[36] Both these cancers are associated with human herpesvirus 8 (HHV-8).[36] Cervical cancer occurs more frequently in those with AIDS because of its association with human papillomavirus (HPV).[36] Conjunctival cancer (of the layer that lines the inner part of eyelids and the white part of the eye) is also more common in those with HIV.[37]
30
+
31
+ Additionally, people with AIDS frequently have systemic symptoms such as prolonged fevers, sweats (particularly at night), swollen lymph nodes, chills, weakness, and unintended weight loss.[38] Diarrhea is another common symptom, present in about 90% of people with AIDS.[39] They can also be affected by diverse psychiatric and neurological symptoms independent of opportunistic infections and cancers.[40]
32
+
33
+ HIV is spread by three main routes: sexual contact, significant exposure to infected body fluids or tissues, and from mother to child during pregnancy, delivery, or breastfeeding (known as vertical transmission).[12] There is no risk of acquiring HIV if exposed to feces, nasal secretions, saliva, sputum, sweat, tears, urine, or vomit unless these are contaminated with blood.[49] It is also possible to be co-infected by more than one strain of HIV—a condition known as HIV superinfection.[50]
34
+
35
+ The most frequent mode of transmission of HIV is through sexual contact with an infected person.[12] However, an HIV-positive person who has an undetectable viral load as a result of long-term treatment has effectively no risk of transmitting HIV sexually.[51][52] The existence of functionally noncontagious HIV-positive people on antiretroviral therapy was controversially publicized in the 2008 Swiss Statement, and has since become accepted as medically sound.[53]
36
+
37
+ Globally, the most common mode of HIV transmission is via sexual contacts between people of the opposite sex;[12] however, the pattern of transmission varies among countries. As of 2017[update], most HIV transmission in the United States occurred among men who had sex with men (82% of new HIV diagnoses among males aged 13 and older and 70% of total new diagnoses).[54][55] In the US, gay and bisexual men aged 13 to 24 accounted for an estimated 92% of new HIV diagnoses among all men in their age group and 27% of new diagnoses among all gay and bisexual men.[56] About 15% of gay and bisexual men have HIV, while 28% of transgender women test positive in the US.[56][57]
38
+
39
+ With regard to unprotected heterosexual contacts, estimates of the risk of HIV transmission per sexual act appear to be four to ten times higher in low-income countries than in high-income countries.[58] In low-income countries, the risk of female-to-male transmission is estimated as 0.38% per act, and of male-to-female transmission as 0.30% per act; the equivalent estimates for high-income countries are 0.04% per act for female-to-male transmission, and 0.08% per act for male-to-female transmission.[58] The risk of transmission from anal intercourse is especially high, estimated as 1.4–1.7% per act in both heterosexual and homosexual contacts.[58][59] While the risk of transmission from oral sex is relatively low, it is still present.[60] The risk from receiving oral sex has been described as "nearly nil";[61] however, a few cases have been reported.[62] The per-act risk is estimated at 0–0.04% for receptive oral intercourse.[63] In settings involving prostitution in low-income countries, risk of female-to-male transmission has been estimated as 2.4% per act, and of male-to-female transmission as 0.05% per act.[58]
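One simplified way to put these per-act figures in perspective is to treat each exposure as an independent event with the same per-act probability, so that the cumulative risk over n acts is 1 - (1 - p)^n. The Python sketch below illustrates that calculation using two of the per-act estimates quoted above; the independence assumption and the function name are illustrative simplifications, not a claim from the cited studies.

# Simplified illustration: cumulative transmission risk over repeated exposures,
# assuming each act carries the same, independent per-act probability. Actual risk
# varies with viral load, co-infections and the other factors described in the text.

def cumulative_risk(per_act_risk, acts):
    """Probability of at least one transmission over `acts` independent exposures."""
    return 1.0 - (1.0 - per_act_risk) ** acts

# Per-act estimates quoted above for low-income countries
female_to_male = 0.0038  # 0.38% per act
male_to_female = 0.0030  # 0.30% per act

for label, p in [("female-to-male", female_to_male), ("male-to-female", male_to_female)]:
    print(f"{label}: {cumulative_risk(p, 100):.1%} cumulative risk over 100 acts")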
40
+
41
+ Risk of transmission increases in the presence of many sexually transmitted infections[64] and genital ulcers.[58] Genital ulcers appear to increase the risk approximately fivefold.[58] Other sexually transmitted infections, such as gonorrhea, chlamydia, trichomoniasis, and bacterial vaginosis, are associated with somewhat smaller increases in risk of transmission.[63]
42
+
43
+ The viral load of an infected person is an important risk factor in both sexual and mother-to-child transmission.[65] During the first 2.5 months of an HIV infection a person's infectiousness is twelve times higher due to the high viral load associated with acute HIV.[63] If the person is in the late stages of infection, rates of transmission are approximately eightfold greater.[58]
44
+
45
+ Commercial sex workers (including those in pornography) have an increased likelihood of contracting HIV.[66][67] Rough sex can be a factor associated with an increased risk of transmission.[68] Sexual assault is also believed to carry an increased risk of HIV transmission as condoms are rarely worn, physical trauma to the vagina or rectum is likely, and there may be a greater risk of concurrent sexually transmitted infections.[69]
46
+
47
+ The second-most frequent mode of HIV transmission is via blood and blood products.[12] Blood-borne transmission can be through needle-sharing during intravenous drug use, needle-stick injury, transfusion of contaminated blood or blood product, or medical injections with unsterilized equipment. The risk from sharing a needle during drug injection is between 0.63% and 2.4% per act, with an average of 0.8%.[70] The risk of acquiring HIV from a needle stick from an HIV-infected person is estimated as 0.3% (about 1 in 333) per act and the risk following mucous membrane exposure to infected blood as 0.09% (about 1 in 1000) per act.[49] This risk may, however, be up to 5% if the introduced blood was from a person with a high viral load and the cut was deep.[71] In the United States intravenous drug users made up 12% of all new cases of HIV in 2009,[72] and in some areas more than 80% of people who inject drugs are HIV-positive.[12]
48
+
49
+ HIV is transmitted in about 90% of blood transfusions using infected blood.[41] In developed countries the risk of acquiring HIV from a blood transfusion is extremely low (less than one in half a million) where improved donor selection and HIV screening is performed;[12] for example, in the UK the risk is reported at one in five million[73] and in the United States it was one in 1.5 million in 2008.[74] In low-income countries, only half of transfusions may be appropriately screened (as of 2008),[75] and it is estimated that up to 15% of HIV infections in these areas come from transfusion of infected blood and blood products, representing between 5% and 10% of global infections.[12][76] It is possible to acquire HIV from organ and tissue transplantation, although this is rare because of screening.[77]
50
+
51
+ Unsafe medical injections play a role in HIV spread in sub-Saharan Africa. In 2007, between 12% and 17% of infections in this region were attributed to medical syringe use.[78] The World Health Organization estimates the risk of transmission as a result of a medical injection in Africa at 1.2%.[78] Risks are also associated with invasive procedures, assisted delivery, and dental care in this area of the world.[78]
52
+
53
+ People giving or receiving tattoos, piercings, and scarification are theoretically at risk of infection but no confirmed cases have been documented.[79] It is not possible for mosquitoes or other insects to transmit HIV.[80]
54
+
55
+ HIV can be transmitted from mother to child during pregnancy, during delivery, or through breast milk, resulting in the baby also contracting HIV.[81][12] As of 2008, vertical transmission accounted for about 90% of cases of HIV in children.[82] In the absence of treatment, the risk of transmission before or during birth is around 20%, and in those who also breastfeed 35%.[82] Treatment decreases this risk to less than 5%.[83]
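+
+ To make the quoted percentages more concrete, the sketch below applies them to a hypothetical cohort of 1,000 HIV-positive mothers; the cohort size and the assumption that each mother falls cleanly into a single scenario are illustrative simplifications, not figures from the cited sources.
+
+ # Expected infant infections per 1,000 births under the rates quoted above
+ # (hypothetical cohort; "with treatment" uses the 5% upper bound).
+ births = 1000
+ rates = {
+     "no treatment, no breastfeeding": 0.20,
+     "no treatment, with breastfeeding": 0.35,
+     "with treatment": 0.05,
+ }
+ for scenario, risk in rates.items():
+     print(f"{scenario}: about {births * risk:.0f} infected infants per {births} births")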
56
+
57
+ Antiretrovirals when taken by either the mother or the baby decrease the risk of transmission in those who do breastfeed.[84] If blood contaminates food during pre-chewing it may pose a risk of transmission.[79] If a woman is untreated, two years of breastfeeding results in an HIV/AIDS risk in her baby of about 17%.[85] Due to the increased risk of death without breastfeeding in many areas in the developing world, the World Health Organization recommends either exclusive breastfeeding or the provision of safe formula.[85] All women known to be HIV-positive should be taking lifelong antiretroviral therapy.[85]
58
+
59
+ HIV is the cause of the spectrum of disease known as HIV/AIDS. HIV is a retrovirus that primarily infects components of the human immune system such as CD4+ T cells, macrophages and dendritic cells. It directly and indirectly destroys CD4+ T cells.[86]
60
+
61
+ HIV is a member of the genus Lentivirus,[87] part of the family Retroviridae.[88] Lentiviruses share many morphological and biological characteristics. Many species of mammals are infected by lentiviruses, which are characteristically responsible for long-duration illnesses with a long incubation period.[89] Lentiviruses are transmitted as single-stranded, positive-sense, enveloped RNA viruses. Upon entry into the target cell, the viral RNA genome is converted (reverse transcribed) into double-stranded DNA by a virally encoded reverse transcriptase that is transported along with the viral genome in the virus particle. The resulting viral DNA is then imported into the cell nucleus and integrated into the cellular DNA by a virally encoded integrase and host co-factors.[90] Once integrated, the virus may become latent, allowing the virus and its host cell to avoid detection by the immune system.[91] Alternatively, the virus may be transcribed, producing new RNA genomes and viral proteins that are packaged and released from the cell as new virus particles that begin the replication cycle anew.[92]
62
+
63
+ HIV is now known to spread between CD4+ T cells by two parallel routes: cell-free spread and cell-to-cell spread, i.e. it employs hybrid spreading mechanisms.[93] In the cell-free spread, virus particles bud from an infected T cell, enter the blood/extracellular fluid and then infect another T cell following a chance encounter.[93] HIV can also disseminate by direct transmission from one cell to another by a process of cell-to-cell spread.[94][95] The hybrid spreading mechanisms of HIV contribute to the virus's ongoing replication against antiretroviral therapies.[93][96]
64
+
65
+ Two types of HIV have been characterized: HIV-1 and HIV-2. HIV-1 is the virus that was originally discovered (and initially referred to also as LAV or HTLV-III). It is more virulent, more infective,[97] and is the cause of the majority of HIV infections globally. The lower infectivity of HIV-2 as compared with HIV-1 implies that fewer people exposed to HIV-2 will be infected per exposure. Because of its relatively poor capacity for transmission, HIV-2 is largely confined to West Africa.[98]
66
+
67
+ After the virus enters the body there is a period of rapid viral replication, leading to an abundance of virus in the peripheral blood. During primary infection, the level of HIV may reach several million virus particles per milliliter of blood.[99] This response is accompanied by a marked drop in the number of circulating CD4+ T cells. The acute viremia is almost invariably associated with activation of CD8+ T cells, which kill HIV-infected cells, and subsequently with antibody production, or seroconversion. The CD8+ T cell response is thought to be important in controlling virus levels, which peak and then decline, as the CD4+ T cell counts recover. A good CD8+ T cell response has been linked to slower disease progression and a better prognosis, though it does not eliminate the virus.[100]
68
+
69
+ Ultimately, HIV causes AIDS by depleting CD4+ T cells. This weakens the immune system and allows opportunistic infections. T cells are essential to the immune response and without them, the body cannot fight infections or kill cancerous cells. The mechanism of CD4+ T cell depletion differs in the acute and chronic phases.[101] During the acute phase, HIV-induced cell lysis and killing of infected cells by CD8+ T cells accounts for CD4+ T cell depletion, although apoptosis may also be a factor. During the chronic phase, the consequences of generalized immune activation coupled with the gradual loss of the ability of the immune system to generate new T cells appear to account for the slow decline in CD4+ T cell numbers.[102]
70
+
71
+ Although the symptoms of immune deficiency characteristic of AIDS do not appear for years after a person is infected, the bulk of CD4+ T cell loss occurs during the first weeks of infection, especially in the intestinal mucosa, which harbors the majority of the lymphocytes found in the body.[103] The reason for the preferential loss of mucosal CD4+ T cells is that the majority of mucosal CD4+ T cells express the CCR5 protein which HIV uses as a co-receptor to gain access to the cells, whereas only a small fraction of CD4+ T cells in the bloodstream do so.[104] A specific genetic change that alters the CCR5 protein when present in both chromosomes very effectively prevents HIV-1 infection.[105]
72
+
73
+ HIV seeks out and destroys CCR5 expressing CD4+ T cells during acute infection.[106] A vigorous immune response eventually controls the infection and initiates the clinically latent phase. CD4+ T cells in mucosal tissues remain particularly affected.[106] Continuous HIV replication causes a state of generalized immune activation persisting throughout the chronic phase.[107] Immune activation, which is reflected by the increased activation state of immune cells and release of pro-inflammatory cytokines, results from the activity of several HIV gene products and the immune response to ongoing HIV replication. It is also linked to the breakdown of the immune surveillance system of the gastrointestinal mucosal barrier caused by the depletion of mucosal CD4+ T cells during the acute phase of disease.[108]
74
+
75
+ HIV/AIDS is diagnosed via laboratory testing and then staged based on the presence of certain signs or symptoms.[25] HIV screening is recommended by the United States Preventive Services Task Force for all people 15 years to 65 years of age, including all pregnant women.[110] Additionally, testing is recommended for those at high risk, which includes anyone diagnosed with a sexually transmitted illness.[28][110] In many areas of the world, a third of HIV carriers only discover they are infected at an advanced stage of the disease when AIDS or severe immunodeficiency has become apparent.[28]
76
+
77
+ Most people infected with HIV develop specific antibodies (i.e. seroconvert) within three to twelve weeks after the initial infection.[27] Diagnosis of primary HIV before seroconversion is done by measuring HIV-RNA or p24 antigen.[27] Positive results obtained by antibody or PCR testing are confirmed either by a different antibody or by PCR.[25]
78
+
79
+ Antibody tests in children younger than 18 months are typically inaccurate, due to the continued presence of maternal antibodies.[111] Thus HIV infection can only be diagnosed by PCR testing for HIV RNA or DNA, or via testing for the p24 antigen.[25] Much of the world lacks access to reliable PCR testing, and people in many places simply wait until either symptoms develop or the child is old enough for accurate antibody testing.[111] In sub-Saharan Africa between 2007 and 2009, between 30% and 70% of the population were aware of their HIV status.[112] In 2009, between 3.6% and 42% of men and women in sub-Saharan countries were tested;[112] this represented a significant increase compared to previous years.[112]
80
+
81
+ Two main clinical staging systems are used to classify HIV and HIV-related disease for surveillance purposes: the WHO disease staging system for HIV infection and disease,[25] and the CDC classification system for HIV infection.[113] The CDC's classification system is more frequently adopted in developed countries. Since the WHO's staging system does not require laboratory tests, it is suited to the resource-restricted conditions encountered in developing countries, where it can also be used to help guide clinical management. Despite their differences, the two systems allow comparison for statistical purposes.[24][25][113]
82
+
83
+ The World Health Organization first proposed a definition for AIDS in 1986.[25] Since then, the WHO classification has been updated and expanded several times, with the most recent version being published in 2007.[25] The WHO system uses the following categories:
84
+
85
+ The United States Center for Disease Control and Prevention also created a classification system for HIV, and updated it in 2008 and 2014.[113][114] This system classifies HIV infections based on CD4 count and clinical symptoms, and describes the infection in five groups.[114] In those greater than six years of age it is:[114]
86
+
87
+ For surveillance purposes, the AIDS diagnosis still stands even if, after treatment, the CD4+ T cell count rises to above 200 per µL of blood or other AIDS-defining illnesses are cured.[24]
88
+
89
+ Consistent condom use reduces the risk of HIV transmission by approximately 80% over the long term.[115] When condoms are used consistently by a couple in which one person is infected, the rate of HIV infection is less than 1% per year.[116] There is some evidence to suggest that female condoms may provide an equivalent level of protection.[117] Application of a vaginal gel containing tenofovir (a reverse transcriptase inhibitor) immediately before sex seems to reduce infection rates by approximately 40% among African women.[118] By contrast, use of the spermicide nonoxynol-9 may increase the risk of transmission due to its tendency to cause vaginal and rectal irritation.[119]
90
+
91
+ Circumcision in Sub-Saharan Africa "reduces the acquisition of HIV by heterosexual men by between 38% and 66% over 24 months".[120] Owing to these studies, both the World Health Organization and UNAIDS recommended male circumcision in 2007 as a method of preventing female-to-male HIV transmission in areas with high rates of HIV.[121] However, whether it protects against male-to-female transmission is disputed,[122][123] and whether it is of benefit in developed countries and among men who have sex with men is undetermined.[124][125][126] The International Antiviral Society, however, does recommend it for all sexually active heterosexual males and that it be discussed as an option with men who have sex with men.[127] Some experts fear that a lower perception of vulnerability among circumcised men may cause more sexual risk-taking behavior, thus negating its preventive effects.[128]
92
+
93
+ Programs encouraging sexual abstinence do not appear to affect subsequent HIV risk.[129] Evidence of any benefit from peer education is equally poor.[130] Comprehensive sexual education provided at school may decrease high-risk behavior.[131][132] A substantial minority of young people continues to engage in high-risk practices despite knowing about HIV/AIDS, underestimating their own risk of becoming infected with HIV.[133] Voluntary counseling and testing people for HIV does not affect risky behavior in those who test negative but does increase condom use in those who test positive.[134] Enhanced family planning services appear to increase the likelihood of women with HIV using contraception, compared to basic services.[135] It is not known whether treating other sexually transmitted infections is effective in preventing HIV.[64]
94
+
95
+ Antiretroviral treatment among people with HIV whose CD4 count is ≤ 550 cells/µL is a very effective way to prevent HIV infection of their partner (a strategy known as treatment as prevention, or TASP).[136] TASP is associated with a 10- to 20-fold reduction in transmission risk.[136][137] Pre-exposure prophylaxis (PrEP) with a daily dose of the medications tenofovir, with or without emtricitabine, is effective in people at high risk including men who have sex with men, couples where one is HIV-positive, and young heterosexuals in Africa.[118][138] It may also be effective in intravenous drug users, with a study finding a decrease in risk from 0.7 to 0.4 infections per 100 person-years.[139] The USPSTF, in 2019, recommended PrEP in those who are at high risk.[140]
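+
+ The incidence figures quoted for intravenous drug users imply a relative risk reduction that can be computed directly; the short calculation below is purely illustrative arithmetic based on the two rates given above, not an analysis of the underlying study.
+
+ # Relative risk reduction implied by the incidence rates quoted above
+ # (illustrative arithmetic only; rates are per 100 person-years).
+ baseline = 0.7   # infections per 100 person-years without PrEP
+ with_prep = 0.4  # infections per 100 person-years with PrEP
+ relative_reduction = (baseline - with_prep) / baseline
+ print(f"Relative risk reduction: {relative_reduction:.0%}")  # roughly 43%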
96
+
97
+ Universal precautions within the health care environment are believed to be effective in decreasing the risk of HIV.[141] Intravenous drug use is an important risk factor, and harm reduction strategies such as needle-exchange programs and opioid substitution therapy appear effective in decreasing this risk.[142][143]
98
+
99
+ A course of antiretrovirals administered within 48 to 72 hours after exposure to HIV-positive blood or genital secretions is referred to as post-exposure prophylaxis (PEP).[144] The use of the single agent zidovudine reduces the risk of an HIV infection five-fold following a needle-stick injury.[144] As of 2013[update], the prevention regimen recommended in the United States consists of three medications—tenofovir, emtricitabine and raltegravir—as this may reduce the risk further.[145]
100
+
101
+ PEP treatment is recommended after a sexual assault when the perpetrator is known to be HIV-positive, but is controversial when their HIV status is unknown.[146] The duration of treatment is usually four weeks[147] and is frequently associated with adverse effects—where zidovudine is used, about 70% of cases result in adverse effects such as nausea (24%), fatigue (22%), emotional distress (13%) and headaches (9%).[49]
102
+
103
+ Programs to prevent the vertical transmission of HIV (from mothers to children) can reduce rates of transmission by 92–99%.[82][142] This primarily involves the use of a combination of antiviral medications during pregnancy and after birth in the infant, and potentially includes bottle feeding rather than breastfeeding.[82][148] If replacement feeding is acceptable, feasible, affordable, sustainable and safe, mothers should avoid breastfeeding their infants; however, exclusive breastfeeding is recommended during the first months of life if this is not the case.[149] If exclusive breastfeeding is carried out, the provision of extended antiretroviral prophylaxis to the infant decreases the risk of transmission.[150] In 2015, Cuba became the first country in the world to eradicate mother-to-child transmission of HIV.[151]
104
+
105
+ Currently there is no licensed vaccine for HIV or AIDS.[6] The most effective vaccine trial to date, RV 144, was published in 2009; it found a partial reduction in the risk of transmission of roughly 30%, stimulating some hope in the research community of developing a truly effective vaccine.[152] Further trials of the RV 144 vaccine are ongoing.[153][154]
106
+
107
+ There is currently no cure, nor an effective HIV vaccine. Treatment consists of highly active antiretroviral therapy (HAART), which slows progression of the disease.[155] As of 2010[update] more than 6.6 million people were receiving this in low- and middle-income countries.[156] Treatment also includes preventive and active treatment of opportunistic infections. As of March 2020[update], two persons have been successfully cleared of HIV.[157] Rapid initiation of anti-retroviral therapy within one week of diagnosis appears to improve treatment outcomes in low- and middle-income settings.[158]
108
+
109
+ Current HAART options are combinations (or "cocktails") consisting of at least three medications belonging to at least two types, or "classes", of antiretroviral agents.[159] Initially, treatment is typically a non-nucleoside reverse transcriptase inhibitor (NNRTI) plus two nucleoside analog reverse transcriptase inhibitors (NRTIs).[160] Typical NRTIs include: zidovudine (AZT) or tenofovir (TDF) and lamivudine (3TC) or emtricitabine (FTC).[160] As of 2019, dolutegravir/lamivudine/tenofovir is listed by the World Health Organization as the first-line treatment for adults, with tenofovir/lamivudine/efavirenz as an alternative.[161] Combinations of agents that include protease inhibitors (PI) are used if the above regimen loses effectiveness.[159]
110
+
111
+ The World Health Organization and the United States recommend antiretrovirals in people of all ages (including pregnant women) as soon as the diagnosis is made, regardless of CD4 count.[15][127][162] Once treatment is begun, it is recommended that it be continued without breaks or "holidays".[28] Many people are diagnosed only after treatment ideally should have begun.[28] The desired outcome of treatment is a long-term plasma HIV-RNA count below 50 copies/mL.[28] Viral load measurements to determine whether treatment is effective are initially recommended after four weeks; once levels fall below 50 copies/mL, checks every three to six months are typically adequate.[28] Inadequate control is deemed to be greater than 400 copies/mL.[28] Based on these criteria, treatment is effective in more than 95% of people during the first year.[28]
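+
+ The monitoring thresholds described above (below 50 copies/mL as the treatment goal, above 400 copies/mL as inadequate control) amount to a simple decision rule. The sketch below is only a minimal illustration of that rule, not a clinical tool; the label used for the intermediate range is an assumption added for readability.
+
+ # Minimal illustration of the viral-load thresholds quoted above.
+ # Not a clinical tool; the "intermediate" label is an editorial assumption.
+ def interpret_viral_load(copies_per_ml):
+     if copies_per_ml < 50:
+         return "suppressed (treatment goal met)"
+     if copies_per_ml > 400:
+         return "inadequate control"
+     return "intermediate - continue monitoring"
+
+ for value in (20, 180, 1200):
+     print(value, "copies/mL:", interpret_viral_load(value))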
112
+
113
+ Benefits of treatment include a decreased risk of progression to AIDS and a decreased risk of death.[163] In the developing world, treatment also improves physical and mental health.[164] With treatment, there is a 70% reduced risk of acquiring tuberculosis.[159] Additional benefits include a decreased risk of transmission of the disease to sexual partners and a decrease in mother-to-child transmission.[159][165] The effectiveness of treatment depends to a large part on compliance.[28] Reasons for non-adherence to treatment include poor access to medical care,[166] inadequate social supports, mental illness and drug abuse.[167] The complexity of treatment regimens (due to pill numbers and dosing frequency) and adverse effects may reduce adherence.[168] Even though cost is an important issue with some medications,[169] 47% of those who needed them were taking them in low- and middle-income countries as of 2010[update],[156] and the rate of adherence is similar in low-income and high-income countries.[170]
114
+
115
+ Specific adverse events are related to the antiretroviral agent taken.[171] Some relatively common adverse events include: lipodystrophy syndrome, dyslipidemia, and diabetes mellitus, especially with protease inhibitors.[24] Other common symptoms include diarrhea,[171][172] and an increased risk of cardiovascular disease.[173] Newer recommended treatments are associated with fewer adverse effects.[28] Certain medications may be associated with birth defects and therefore may be unsuitable for women hoping to have children.[28]
116
+
117
+ Treatment recommendations for children are somewhat different from those for adults. The World Health Organization recommends treating all children less than 5 years of age; children above 5 are treated like adults.[174] The United States guidelines recommend treating all children less than 12 months of age and all those with HIV RNA counts greater than 100,000 copies/mL between one year and five years of age.[175]
118
+
119
+ Measures to prevent opportunistic infections are effective in many people with HIV/AIDS. In addition to improving current disease, treatment with antiretrovirals reduces the risk of developing additional opportunistic infections.[171] Adults and adolescents who are living with HIV (even on anti-retroviral therapy) with no evidence of active tuberculosis in settings with high tuberculosis burden should receive isoniazid preventive therapy (IPT); the tuberculin skin test can be used to help decide if IPT is needed.[176] Vaccination against hepatitis A and B is advised for all people at risk of HIV before they become infected; however, it may also be given after infection.[177] Trimethoprim/sulfamethoxazole prophylaxis between four and six weeks of age, and ceasing breastfeeding of infants born to HIV-positive mothers, is recommended in resource-limited settings.[178] It is also recommended to prevent PCP when a person's CD4 count is below 200 cells/uL and in those who have or have previously had PCP.[179] People with substantial immunosuppression are also advised to receive prophylactic therapy for toxoplasmosis and MAC.[180] Appropriate preventive measures reduced the rate of these infections by 50% between 1992 and 1997.[181] Influenza vaccination and pneumococcal polysaccharide vaccine are often recommended in people with HIV/AIDS with some evidence of benefit.[182][183]
120
+
121
+ The World Health Organization (WHO) has issued recommendations regarding nutrient requirements in HIV/AIDS.[184] A generally healthy diet is promoted. Dietary intake of micronutrients at RDA levels by HIV-infected adults is recommended by the WHO; higher intake of vitamin A, zinc, and iron can produce adverse effects in HIV-positive adults, and is not recommended unless there is documented deficiency.[184][185][186][187] Dietary supplementation for people who are infected with HIV and who have inadequate nutrition or dietary deficiencies may strengthen their immune systems or help them recover from infections; however, evidence indicating an overall benefit in morbidity or reduction in mortality is not consistent.[188]
122
+
123
+ Evidence for supplementation with selenium is mixed, with some tentative evidence of benefit.[189] For pregnant and lactating women with HIV, multivitamin supplementation improves outcomes for both mothers and children.[190] If the pregnant or lactating mother has been advised to take anti-retroviral medication to prevent mother-to-child HIV transmission, multivitamin supplements should not replace these treatments.[190] There is some evidence that vitamin A supplementation in children with an HIV infection reduces mortality and improves growth.[191]
124
+
125
+ In the US, approximately 60% of people with HIV use various forms of complementary or alternative medicine,[192] whose effectiveness has not been established.[193] There is not enough evidence to support the use of herbal medicines.[194] There is insufficient evidence to recommend or support the use of medical cannabis to try to increase appetite or weight gain.[195]
126
+
127
+ HIV/AIDS has become a chronic rather than an acutely fatal disease in many areas of the world.[196] Prognosis varies between people, and both the CD4 count and viral load are useful for predicting outcomes.[27] Without treatment, average survival time after infection with HIV is estimated to be 9 to 11 years, depending on the HIV subtype.[7] After the diagnosis of AIDS, if treatment is not available, survival ranges between 6 and 19 months.[197][198] HAART and appropriate prevention of opportunistic infections reduce the death rate by 80%, and raise the life expectancy for a newly diagnosed young adult to 20–50 years.[196][199][200] This is between two-thirds[199] and nearly all of the life expectancy of the general population.[28][201] If treatment is started late in the infection, prognosis is not as good:[28] for example, if treatment is begun following the diagnosis of AIDS, life expectancy is approximately 10–40 years.[28][196] Half of infants born with HIV die before two years of age without treatment.[178]
128
+
129
+ The primary causes of death from HIV/AIDS are opportunistic infections and cancer, both of which are frequently the result of the progressive failure of the immune system.[181][202] Risk of cancer appears to increase once the CD4 count is below 500/μL.[28] The rate of clinical disease progression varies widely between individuals and has been shown to be affected by a number of factors, such as a person's susceptibility and immune function,[203] their access to health care, the presence of co-infections,[197][204] and the particular strain (or strains) of the virus involved.[205][206]
130
+
131
+ Tuberculosis co-infection is one of the leading causes of sickness and death in those with HIV/AIDS, being present in a third of all HIV-infected people and causing 25% of HIV-related deaths.[207] HIV is also one of the most important risk factors for tuberculosis.[208] Hepatitis C is another very common co-infection, in which each disease accelerates the progression of the other.[209] The two most common cancers associated with HIV/AIDS are Kaposi's sarcoma and AIDS-related non-Hodgkin's lymphoma.[202] Other cancers that are more frequent include anal cancer, Burkitt's lymphoma, primary central nervous system lymphoma, and cervical cancer.[28][210]
132
+
133
+ Even with anti-retroviral treatment, over the long term HIV-infected people may experience neurocognitive disorders,[211] osteoporosis,[212] neuropathy,[213] cancers,[214][215] nephropathy,[216] and cardiovascular disease.[172] Some conditions, such as lipodystrophy, may be caused both by HIV and its treatment.[172]
134
+
135
+ HIV/AIDS is a global pandemic.[218] As of 2016[update] approximately 36.7 million people worldwide have HIV, the number of new infections that year being about 1.8 million.[219] This is down from 3.1 million new infections in 2001.[220] Slightly over half the infected population are women and 2.1 million are children.[219] It resulted in about 1 million deaths in 2016, down from a peak of 1.9 million in 2005.[219]
136
+
137
+ Sub-Saharan Africa is the region most affected. In 2010, an estimated 68% (22.9 million) of all HIV cases and 66% of all deaths (1.2 million) occurred in this region.[221] This means that about 5% of the adult population is infected[222] and it is believed to be the cause of 10% of all deaths in children.[223] Here, in contrast to other regions, women comprise nearly 60% of cases.[221] South Africa has the largest population of people with HIV of any country in the world at 5.9 million.[221] Life expectancy has fallen in the worst-affected countries due to HIV/AIDS; for example, in 2006 it was estimated that it had dropped from 65 to 35 years in Botswana.[17] Mother-to-child transmission in Botswana and South Africa, as of 2013[update], has decreased to less than 5%, with improvement in many other African nations due to improved access to antiretroviral therapy.[224]
138
+
139
+ South and Southeast Asia is the second-most affected region; in 2010 it contained an estimated 4 million cases, or 12% of all people living with HIV, resulting in approximately 250,000 deaths.[222] Approximately 2.4 million of these cases are in India.[221]
140
+
141
+ During 2008 in the United States, approximately 1.2 million people were living with HIV, resulting in about 17,500 deaths. The US Centers for Disease Control and Prevention estimated that in that year, 20% of infected Americans were unaware of their infection.[225] As of 2016[update] about 675,000 people have died of HIV/AIDS in the US since the beginning of the HIV epidemic.[57] In the United Kingdom as of 2015[update], there were approximately 101,200 cases, which resulted in 594 deaths.[226] In Canada as of 2008, there were about 65,000 cases causing 53 deaths.[227] Between the first recognition of AIDS (in 1981) and 2009, the disease led to nearly 30 million deaths.[228] Rates of HIV are lowest in North Africa and the Middle East (0.1% or less), East Asia (0.1%), and Western and Central Europe (0.2%).[222] The worst-affected European countries, in 2009 and 2012 estimates, are Russia, Ukraine, Latvia, Moldova, Portugal and Belarus, in decreasing order of prevalence.[229]
142
+
143
+ The first news story on the disease appeared May 18, 1981 in the gay newspaper New York Native.[230][231] AIDS was first clinically reported on June 5, 1981, with five cases in the United States.[36][232] The initial cases were a cluster of injecting drug users and homosexual men with no known cause of impaired immunity who showed symptoms of Pneumocystis carinii pneumonia (PCP), a rare opportunistic infection that was known to occur in people with very compromised immune systems.[233] Soon thereafter, an unexpected number of homosexual men developed a previously rare skin cancer called Kaposi's sarcoma (KS).[234][235] Many more cases of PCP and KS emerged, alerting U.S. Centers for Disease Control and Prevention (CDC) and a CDC task force was formed to monitor the outbreak.[236]
144
+
145
+ In the early days, the CDC did not have an official name for the disease, often referring to it by way of diseases associated with it, such as lymphadenopathy, the disease after which the discoverers of HIV originally named the virus.[237][238] They also used Kaposi's sarcoma and opportunistic infections, the name by which a task force had been set up in 1981.[239] At one point the CDC coined it the "4H disease", as the syndrome seemed to affect heroin users, homosexuals, hemophiliacs, and Haitians.[240][241] In the general press the term GRID, which stood for gay-related immune deficiency, had been coined.[242] However, after determining that AIDS was not isolated to the gay community,[239] it was realized that the term GRID was misleading, and the term AIDS was introduced at a meeting in July 1982.[243] By September 1982 the CDC started referring to the disease as AIDS.[244]
146
+
147
+ In 1983, two separate research groups led by Robert Gallo and Luc Montagnier declared that a novel retrovirus may have been infecting people with AIDS, and published their findings in the same issue of the journal Science.[245][246] Gallo claimed a virus which his group had isolated from a person with AIDS was strikingly similar in shape to other human T-lymphotropic viruses (HTLVs) that his group had been the first to isolate. Gallo's group called their newly isolated virus HTLV-III. At the same time, Montagnier's group isolated a virus from a person presenting with swelling of the lymph nodes of the neck and physical weakness, two characteristic symptoms of AIDS. Contradicting the report from Gallo's group, Montagnier and his colleagues showed that core proteins of this virus were immunologically different from those of HTLV-I. Montagnier's group named their isolated virus lymphadenopathy-associated virus (LAV).[236] As these two viruses turned out to be the same, in 1986, LAV and HTLV-III were renamed HIV.[247]
148
+
149
+ Both HIV-1 and HIV-2 are believed to have originated in non-human primates in West-central Africa and were transferred to humans in the early 20th century.[18] HIV-1 appears to have originated in southern Cameroon through the evolution of SIV(cpz), a simian immunodeficiency virus (SIV) that infects wild chimpanzees (HIV-1 descends from the SIVcpz endemic in the chimpanzee subspecies Pan troglodytes troglodytes).[248][249] The closest relative of HIV-2 is SIV(smm), a virus of the sooty mangabey (Cercocebus atys atys), an Old World monkey living in coastal West Africa (from southern Senegal to western Ivory Coast).[98] New World monkeys such as the owl monkey are resistant to HIV-1 infection, possibly because of a genomic fusion of two viral resistance genes.[250] HIV-1 is thought to have jumped the species barrier on at least three separate occasions, giving rise to the three groups of the virus, M, N, and O.[251]
150
+
151
+ There is evidence that humans who participate in bushmeat activities, either as hunters or as bushmeat vendors, commonly acquire SIV.[252] However, SIV is a weak virus which is typically suppressed by the human immune system within weeks of infection. It is thought that several transmissions of the virus from individual to individual in quick succession are necessary to allow it enough time to mutate into HIV.[253] Furthermore, due to its relatively low person-to-person transmission rate, SIV can only spread throughout the population in the presence of one or more high-risk transmission channels, which are thought to have been absent in Africa before the 20th century.
152
+
153
+ Specific proposed high-risk transmission channels, allowing the virus to adapt to humans and spread throughout the society, depend on the proposed timing of the animal-to-human crossing. Genetic studies of the virus suggest that the most recent common ancestor of the HIV-1 M group dates back to c. 1910.[254] Proponents of this dating link the HIV epidemic with the emergence of colonialism and growth of large colonial African cities, leading to social changes, including a higher degree of sexual promiscuity, the spread of prostitution, and the accompanying high frequency of genital ulcer diseases (such as syphilis) in nascent colonial cities.[255] While transmission rates of HIV during vaginal intercourse are low under regular circumstances, they are increased manyfold if one of the partners suffers from a sexually transmitted infection causing genital ulcers. Early 1900s colonial cities were notable for their high prevalence of prostitution and genital ulcers, to the degree that, as of 1928, as many as 45% of female residents of eastern Kinshasa were thought to have been prostitutes, and, as of 1933, around 15% of all residents of the same city had syphilis.[255]
154
+
155
+ An alternative view holds that unsafe medical practices in Africa after World War II, such as unsterile reuse of single-use syringes during mass vaccination, antibiotic and anti-malaria treatment campaigns, were the initial vector that allowed the virus to adapt to humans and spread.[253][256][257]
156
+
157
+ The earliest well-documented case of HIV in a human dates back to 1959 in the Congo.[258] The earliest retrospectively described case of AIDS is believed to have been in Norway beginning in 1966.[259] In July 1960, in the wake of Congo's independence, the United Nations recruited Francophone experts and technicians from all over the world to assist in filling administrative gaps left by Belgium, which did not leave behind an African elite to run the country. By 1962, Haitians made up the second-largest group of well-educated experts (out of the 48 national groups recruited), a group that totaled around 4500 in the country.[260][261] Dr. Jacques Pépin, a Quebecer author of The Origins of AIDS, stipulates that Haiti was one of HIV's entry points to the United States and that one of these returning Haitian professionals may have carried HIV back across the Atlantic in the 1960s.[261] Although the virus may have been present in the United States as early as 1966,[262] the vast majority of infections occurring outside sub-Saharan Africa (including the U.S.) can be traced back to a single unknown individual who became infected with HIV in Haiti and then brought the infection to the United States at some time around 1969.[263] The epidemic then rapidly spread among high-risk groups (initially, sexually promiscuous men who have sex with men). By 1978, the prevalence of HIV-1 among homosexual male residents of New York City and San Francisco was estimated at 5%, suggesting that several thousand individuals in the country had been infected.[263]
158
+
159
+ AIDS stigma exists around the world in a variety of ways, including ostracism, rejection, discrimination and avoidance of HIV-infected people; compulsory HIV testing without prior consent or protection of confidentiality; violence against HIV-infected individuals or people who are perceived to be infected with HIV; and the quarantine of HIV-infected individuals.[20] Stigma-related violence or the fear of violence prevents many people from seeking HIV testing, returning for their results, or securing treatment, possibly turning what could be a manageable chronic illness into a death sentence and perpetuating the spread of HIV.[265]
160
+
161
+ AIDS stigma has been further divided into the following three categories:
162
+
163
+ Often, AIDS stigma is expressed in conjunction with one or more other stigmas, particularly those associated with homosexuality, bisexuality, promiscuity, prostitution, and intravenous drug use.[268]
164
+
165
+ In many developed countries, there is an association between AIDS and homosexuality or bisexuality, and this association is correlated with higher levels of sexual prejudice, such as anti-homosexual or anti-bisexual attitudes.[269] There is also a perceived association between AIDS and all male-male sexual behavior, including sex between uninfected men.[266] However, the dominant mode of spread worldwide for HIV remains heterosexual transmission.[270]
166
+
167
+ In 2003, as part of an overall reform of marriage and population legislation, it became legal for people with AIDS to marry in China.[271]
168
+
169
+ In 2013, the U.S. National Library of Medicine developed a traveling exhibition titled Surviving and Thriving: AIDS, Politics, and Culture;[272] this covered medical research, the U.S. government's response, and personal stories from people with AIDS, caregivers, and activists.[273]
170
+
171
+ HIV/AIDS affects the economics of both individuals and countries.[223] The gross domestic product of the most affected countries has decreased due to the lack of human capital.[223][274] Without proper nutrition, health care and medicine, large numbers of people die from AIDS-related complications. Before death they will not only be unable to work, but will also require significant medical care. It is estimated that as of 2007 there were 12 million AIDS orphans.[223] Many are cared for by elderly grandparents.[275]
172
+
173
+ Returning to work after beginning treatment for HIV/AIDS is difficult, and affected people often work less than the average worker. Unemployment in people with HIV/AIDS also is associated with suicidal ideation, memory problems, and social isolation. Employment increases self-esteem, sense of dignity, confidence, and quality of life for people with HIV/AIDS. Anti-retroviral treatment may help people with HIV/AIDS work more, and may increase the chance that a person with HIV/AIDS will be employed (low-quality evidence).[276]
174
+
175
+ By affecting mainly young adults, AIDS reduces the taxable population, in turn reducing the resources available for public expenditures such as education and health services not related to AIDS, resulting in increasing pressure on the state's finances and slower growth of the economy. This causes a slower growth of the tax base, an effect that is reinforced if there are growing expenditures on treating the sick, training (to replace sick workers), sick pay and caring for AIDS orphans. This is especially true if the sharp increase in adult mortality shifts the responsibility from the family to the government in caring for these orphans.[275]
176
+
177
+ At the household level, AIDS causes both loss of income and increased spending on healthcare. A study in Côte d'Ivoire showed that households having a person with HIV/AIDS spent twice as much on medical expenses as other households. This additional expenditure also leaves less income to spend on education and other personal or family investment.[277]
178
+
179
+ The topic of religion and AIDS has become highly controversial, primarily because some religious authorities have publicly declared their opposition to the use of condoms.[278][279] The religious approach to prevent the spread of AIDS, according to a report by American health expert Matthew Hanley titled The Catholic Church and the Global AIDS Crisis, argues that cultural changes are needed, including a re-emphasis on fidelity within marriage and sexual abstinence outside of it.[279]
180
+
181
+ Some religious organizations have claimed that prayer can cure HIV/AIDS. In 2011, the BBC reported that some churches in London were claiming that prayer would cure AIDS, and the Hackney-based Centre for the Study of Sexual Health and HIV reported that several people stopped taking their medication, sometimes on the direct advice of their pastor, leading to a number of deaths.[280] The Synagogue Church Of All Nations advertised an "anointing water" to promote God's healing, although the group denies advising people to stop taking medication.[280]
182
+
183
+ One of the first high-profile cases of AIDS was that of the American actor Rock Hudson, a gay man who had been married and divorced earlier in life, who died on October 2, 1985, having announced on July 25 of that year that he was suffering from the disease. He had been diagnosed during 1984.[281] A notable British casualty of AIDS that year was Nicholas Eden, a gay politician and son of the late prime minister Anthony Eden.[282] On November 24, 1991, the virus claimed the life of British rock star Freddie Mercury, lead singer of the band Queen, who died from an AIDS-related illness, having revealed the diagnosis only on the previous day.[283] However, he had been diagnosed as HIV-positive in 1987. One of the first high-profile heterosexual cases of the virus was American tennis player Arthur Ashe. He was diagnosed as HIV-positive on August 31, 1988, having contracted the virus from blood transfusions during heart surgery earlier in the 1980s. Further tests within 24 hours of the initial diagnosis revealed that Ashe had AIDS, but he did not tell the public about his diagnosis until April 1992.[284] He died of AIDS-related complications on February 6, 1993, aged 49.[285]
184
+
185
+ Therese Frare's photograph of gay activist David Kirby, as he lay dying from AIDS while surrounded by family, was taken in April 1990. Life magazine said the photo became the one image "most powerfully identified with the HIV/AIDS epidemic." The photo was displayed in Life, was the winner of the World Press Photo, and acquired worldwide notoriety after being used in a United Colors of Benetton advertising campaign in 1992.[286]
186
+
187
+ Criminal transmission of HIV is the intentional or reckless infection of a person with the human immunodeficiency virus (HIV). Some countries or jurisdictions, including some areas of the United States, have laws that criminalize HIV transmission or exposure.[287] Others may charge the accused under laws enacted before the HIV pandemic.
188
+
189
+ In 1996, Ugandan-born Canadian Johnson Aziga was diagnosed with HIV; he subsequently had unprotected sex with 11 women without disclosing his diagnosis. By 2003, seven had contracted HIV; two died from complications related to AIDS.[288][289] Aziga was convicted of first-degree murder and sentenced to life imprisonment.[290]
190
+
191
+ There are many misconceptions about HIV and AIDS. Three of the most common are that AIDS can spread through casual contact, that sexual intercourse with a virgin will cure AIDS,[291][292][293] and that HIV can infect only gay men and drug users. In 2014, some among the British public wrongly thought one could get HIV from kissing (16%), sharing a glass (5%), spitting (16%), a public toilet seat (4%), and coughing or sneezing (5%).[294] Other misconceptions are that any act of anal intercourse between two uninfected gay men can lead to HIV infection, and that open discussion of HIV and homosexuality in schools will lead to increased rates of AIDS.[295][296]
192
+
193
+ A small group of individuals continue to dispute the connection between HIV and AIDS,[297] the existence of HIV itself, or the validity of HIV testing and treatment methods.[298][299] These claims, known as AIDS denialism, have been examined and rejected by the scientific community.[300] However, they have had a significant political impact, particularly in South Africa, where the government's official embrace of AIDS denialism (1999–2005) was responsible for its ineffective response to that country's AIDS epidemic, and has been blamed for hundreds of thousands of avoidable deaths and HIV infections.[301][302][303]
194
+
195
+ Several discredited conspiracy theories have held that HIV was created by scientists, either inadvertently or deliberately. Operation INFEKTION was a worldwide Soviet active measures operation to spread the claim that the United States had created HIV/AIDS. Surveys show that a significant number of people believed—and continue to believe—in such claims.[304]
196
+
197
+ HIV/AIDS research includes all medical research which attempts to prevent, treat, or cure HIV/AIDS, along with fundamental research about the nature of HIV as an infectious agent, and about AIDS as the disease caused by HIV.
198
+
199
+ Many governments and research institutions participate in HIV/AIDS research. This research includes behavioral health interventions such as sex education, and drug development, such as research into microbicides for sexually transmitted diseases, HIV vaccines, and antiretroviral drugs. Other medical research areas include the topics of pre-exposure prophylaxis, post-exposure prophylaxis, and circumcision and HIV. Public health officials, researchers, and programs can gain a more comprehensive picture of the barriers they face, and the efficacy of current approaches to HIV treatment and prevention, by tracking standard HIV indicators.[305] Use of common indicators is an increasing focus of development organizations and researchers.[306][307]
en/5564.html.txt ADDED
@@ -0,0 +1,37 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ A synonym is a word, morpheme, or phrase that means exactly or nearly the same as another word, morpheme, or phrase in the same language. For example, the words begin, start, commence, and initiate are all synonyms of one another; they are synonymous. The standard test for synonymy is substitution: one form can be replaced by another in a sentence without changing its meaning. Words are considered synonymous in one particular sense: for example, long and extended in the context long time or extended time are synonymous, but long cannot be used in the phrase extended family. Synonyms with exactly the same meaning share a seme or denotational sememe, whereas those with inexactly similar meanings share a broader denotational or connotational sememe and thus overlap within a semantic field. The former are sometimes called cognitive synonyms and the latter, near-synonyms,[2] plesionyms[3] or poecilonyms.[4]
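+
+ For readers who want to explore synonym sets computationally, lexical databases such as WordNet group words into synsets of (near-)synonymous lemmas. The sketch below uses the NLTK interface to WordNet to list lemmas sharing a synset with begin; neither NLTK nor WordNet is mentioned in this article, so this is only one possible tool, and the snippet assumes the WordNet corpus has already been downloaded.
+
+ # Illustrative: list WordNet synonyms of "begin" via NLTK.
+ # Assumes: pip install nltk, then nltk.download("wordnet") has been run.
+ from nltk.corpus import wordnet as wn
+
+ synonyms = set()
+ for synset in wn.synsets("begin", pos=wn.VERB):
+     for lemma in synset.lemmas():
+         synonyms.add(lemma.name().replace("_", " "))
+ print(sorted(synonyms))  # typically includes "start", "commence", "initiate"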
2
+
3
+ Some lexicographers claim that no synonyms have exactly the same meaning (in all contexts or social levels of language) because etymology, orthography, phonic qualities, connotations, ambiguous meanings, usage, and so on make them unique. Different words that are similar in meaning usually differ for a reason: feline is more formal than cat; long and extended are only synonyms in one usage and not in others (for example, a long arm is not the same as an extended arm). Synonyms are also a source of euphemisms.
4
+
5
+ Metonymy can sometimes be a form of synonymy: the White House is used as a synonym of the administration in referring to the U.S. executive branch under a specific president.[5] Thus a metonym is a type of synonym, and the word metonym is a hyponym of the word synonym.[citation needed]
6
+
7
+ The analysis of synonymy, polysemy, hyponymy, and hypernymy is inherent to taxonomy and ontology in the information-science senses of those terms.[6] It has applications in pedagogy and machine learning, because they rely on word-sense disambiguation.[7]
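+
+ Because a word form is only synonymous with another in a particular sense, such applications usually run word-sense disambiguation first. The sketch below shows the classic Lesk algorithm as implemented in NLTK, which picks the WordNet sense whose dictionary gloss best overlaps the surrounding context; NLTK is an assumed tool here, not one prescribed by the article, and the WordNet corpus must already be downloaded.
+
+ # Illustrative word-sense disambiguation with the Lesk algorithm (NLTK).
+ # Assumes: pip install nltk, then nltk.download("wordnet") has been run.
+ from nltk.wsd import lesk
+
+ context = "he sat on the bank of the river and watched the water".split()
+ sense = lesk(context, "bank", pos="n")
+ print(sense, "-", sense.definition() if sense else "no sense selected")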
8
+
9
+ The word is borrowed from Latin synōnymum, in turn borrowed from Ancient Greek synōnymon (συνώνυμον), composed of sýn (σύν 'together, similar, alike') and -ōnym- (-ωνυμ-), a form of onoma (ὄνομα 'name').[8]
10
+
11
+ Synonyms often come from the different strata making up a language. For example, in English, Norman French superstratum words and Old English substratum words continue to coexist.[9] Thus, today we have synonyms like the Norman-derived people, liberty and archer, and the Saxon-derived folk, freedom and bowman. For more examples, see the list of Germanic and Latinate equivalents in English.
12
+
13
+ Loanwords are another rich source of synonyms, often from the language of the dominant culture of a region. Thus most European languages have borrowed from Latin and ancient Greek, especially for technical terms, but the native terms continue to be used in non-technical contexts. In East Asia, borrowings from Chinese in Japanese, Korean, and Vietnamese often double native terms. In Islamic cultures, Arabic and Persian are large sources of synonymous borrowings.
14
+
15
+ For example, in Turkish, kara and siyah both mean 'black', the first being a native Turkish word, and the second being a borrowing from Persian. In Ottoman Turkish, there were often three synonyms: water can be su (Turkish), âb (Persian), or mâ (Arabic): "such a triad of synonyms exists in Ottoman for every meaning, without exception". As always with synonyms, there are nuances and shades of meaning or usage.[10]
16
+
17
+ In English, similarly, we often have Latin and Greek terms synonymous with Germanic ones: thought, notion (L), idea (Gk); ring, circle (L), cycle (Gk). English often uses the Germanic term only as a noun, but has Latin and Greek adjectives: hand, manual (L), chiral (Gk); heat, thermal (L), caloric (Gk). Sometimes the Germanic term has become rare, or restricted to special meanings: tide, time/temporal, chronic.[11][12]
18
+
19
+ Many bound morphemes in English are borrowed from Latin and Greek and are synonyms for native words or morphemes: fish, pisci- (L), ichthy- (Gk).
20
+
21
+ Another source of synonyms is coinages, which may be motivated by linguistic purism. Thus the English word foreword was coined to replace the Romance preface. In Turkish, okul was coined to replace the Arabic-derived mektep and medrese, but those words continue to be used in some contexts.[13]
22
+
23
+ Synonyms often express a nuance of meaning or are used in different registers of speech or writing.
24
+
25
+ Different technical fields may appropriate synonyms for specific technical meanings.
26
+
27
+ Some writers avoid repeating the same word in close proximity, and prefer to use synonyms: this is called elegant variation. Many modern style guides criticize this.
28
+
29
+ Synonyms can be any part of speech, as long as both words belong to the same part of speech. Examples:
30
+
31
+ Synonyms are defined with respect to certain senses of words: pupil as the aperture in the iris of the eye is not synonymous with student. Similarly, he expired means the same as he died, yet my passport has expired cannot be replaced by my passport has died.
32
+
33
+ A thesaurus or synonym dictionary lists similar or related words; these are often, but not always, synonyms.
34
+
35
+ Tools which graph word relations:
36
+
37
+ Plain-word synonym finders:
en/5565.html.txt ADDED
@@ -0,0 +1,306 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Coordinates: 35°N 38°E
4
+
5
+ Syria (Arabic: سوريا‎, romanized: Sūriyā), officially the Syrian Arab Republic (Arabic: الجمهورية العربية السورية‎, romanized: al-Jumhūrīyah al-ʻArabīyah as-Sūrīyah), is a country in Western Asia, bordering Lebanon to the southwest, the Mediterranean Sea to the west, Turkey to the north, Iraq to the east, Jordan to the south, and Israel to the southwest. A country of fertile plains, high mountains, and deserts, Syria is home to diverse ethnic and religious groups, including Syrian Arabs, Kurds, Turkmens, Assyrians, Armenians, Circassians,[8] Mandeans[9] and Greeks. Religious groups include Sunnis, Christians, Alawites, Druze, Isma'ilis, Mandeans, Shiites, Salafis, Yazidis, and Jews. Arabs are the largest ethnic group, and Sunnis the largest religious group.
6
+
7
+ Syria is a unitary republic consisting of 14 governorates and is the only country that politically espouses Ba'athism. It is a member of one international organization other than the United Nations, the Non-Aligned Movement; it was suspended from the Arab League in November 2011[10] and the Organisation of Islamic Cooperation,[11] and self-suspended from the Union for the Mediterranean.[12]
8
+
9
+ The name "Syria" historically referred to a wider region, broadly synonymous with the Levant, and known in Arabic as al-Sham. The modern state encompasses the sites of several ancient kingdoms and empires, including the Eblan civilization of the 3rd millennium BC. Aleppo and the capital city Damascus are among the oldest continuously inhabited cities in the world.[13] In the Islamic era, Damascus was the seat of the Umayyad Caliphate and a provincial capital of the Mamluk Sultanate in Egypt. The modern Syrian state was established in the mid-20th century after centuries of Ottoman rule and a brief period of French mandate, and represented the largest Arab state to emerge from the formerly Ottoman-ruled Syrian provinces. It gained de jure independence as a parliamentary republic on 24 October 1945, when the Republic of Syria became a founding member of the United Nations, an act which legally ended the former French Mandate, although French troops did not leave the country until April 1946. The post-independence period was tumultuous, with many military coups and coup attempts shaking the country from 1949 to 1971. In 1958, Syria entered a brief union with Egypt called the United Arab Republic, which was terminated by the 1961 Syrian coup d'état. The republic was renamed as the Arab Republic of Syria in late 1961 after the December 1 constitutional referendum of that year, and was increasingly unstable until the 1963 Ba'athist coup d'état, since which the Ba'ath Party has maintained its power. Syria was under Emergency Law from 1963 to 2011, effectively suspending most constitutional protections for citizens.
10
+
11
+ Bashar al-Assad has been president since 2000 and was preceded by his father Hafez al-Assad,[14] who was in office from 1971 to 2000. Throughout his rule, Syria and the ruling Ba'ath Party have been condemned and criticized for various human rights abuses, including frequent executions of citizens and political prisoners, and massive censorship.[15] Since March 2011, Syria has been embroiled in an armed conflict, with a number of countries in the region and beyond involved militarily or otherwise. As a result, a number of self-proclaimed political entities have emerged on Syrian territory, including the Syrian opposition, Rojava, Tahrir al-Sham and Islamic State of Iraq and the Levant. Syria was ranked last on the Global Peace Index from 2016 to 2018,[16] making it the most violent country in the world due to the war. The conflict has killed more than 570,000 people,[17] caused 7.6 million internally displaced people (July 2015 UNHCR estimate) and over 5 million refugees (July 2017 registered by UNHCR),[18] making population assessment difficult in recent years.
12
+
13
+ As of June 2020, the national currency, the Syrian pound, has plunged to new lows, and massive protests are occurring due to the economic downturn; in addition, punitive sanctions about to be applied by the United States via the Caesar Act are raising concerns about the possible collapse of the Syrian government, which could result in massive instability and conflict throughout the region.[19]
14
+
15
+ Several sources indicate that the name Syria is derived from the 8th century BC Luwian term "Sura/i", and the derivative ancient Greek name: Σύριοι, Sýrioi, or Σύροι, Sýroi, both of which originally derived from Aššūrāyu (Assyria) in northern Mesopotamia.[20][21] However, from the Seleucid Empire (323–150 BC), this term was also applied to The Levant, and from this point the Greeks applied the term without distinction between the Assyrians of Mesopotamia and Arameans of the Levant.[22][23] Mainstream modern academic opinion strongly favors the argument that the Greek word is related to the cognate Ἀσσυρία, Assyria, ultimately derived from the Akkadian Aššur.[24] The Greek name appears to correspond to Phoenician ʾšr "Assur", ʾšrym "Assyrians", recorded in the 8th century BC Çineköy inscription.[25]
16
+
17
+ The area designated by the word has changed over time. Classically, Syria lies at the eastern end of the Mediterranean, between Arabia to the south and Asia Minor to the north, stretching inland to include parts of Iraq, and having an uncertain border to the northeast that Pliny the Elder describes as including, from west to east, Commagene, Sophene, and Adiabene.[26]
18
+
19
+ By Pliny's time, however, this larger Syria had been divided into a number of provinces under the Roman Empire (but politically independent from each other): Judaea, later renamed Palaestina in AD 135 (the region corresponding to modern-day Israel, the Palestinian Territories, and Jordan) in the extreme southwest; Phoenice (established in AD 194) corresponding to modern Lebanon, Damascus and Homs regions; Coele-Syria (or "Hollow Syria") south of the Eleutheris river, and Iraq.[27]
20
+
21
+ Since approximately 10,000 BC, Syria was one of the centers of Neolithic culture (known as Pre-Pottery Neolithic A), where agriculture and cattle breeding appeared for the first time in the world. The following Neolithic period (PPNB) is represented by the rectangular houses of the Mureybet culture. At the time of the pre-pottery Neolithic, people used vessels made of stone, gypsum and burnt lime (Vaisselle blanche). Finds of obsidian tools from Anatolia are evidence of early trade relations. The cities of Hamoukar and Emar played an important role during the late Neolithic and Bronze Age. Archaeologists have demonstrated that civilization in Syria was one of the most ancient on earth, perhaps preceded only by that of Mesopotamia.
22
+
23
+ The earliest recorded indigenous civilization in the region was the Kingdom of Ebla[28] near present-day Idlib, northern Syria. Ebla appears to have been founded around 3500 BC,[29][30][31][32][33] and gradually built its fortune through trade with the Mesopotamian states of Sumer, Assyria, and Akkad, as well as with the Hurrian and Hattian peoples to the northwest, in Asia Minor.[34] Gifts from Pharaohs, found during excavations, confirm Ebla's contact with Egypt.
24
+
25
+ One of the earliest written texts from Syria is a trading agreement between Vizier Ibrium of Ebla and an ambiguous kingdom called Abarsal c. 2300 BC.[35][36] Scholars believe the language of Ebla to be among the oldest known written Semitic languages after Akkadian. Recent classifications of the Eblaite language have shown that it was an East Semitic language, closely related to the Akkadian language.[37]
26
+
27
+ Ebla was weakened by a long war with Mari, and the whole of Syria became part of the Mesopotamian Akkadian Empire after Sargon of Akkad and his grandson Naram-Sin's conquests ended Eblan domination over Syria in the first half of the 23rd century BC.[38][39]
28
+
29
+ By the 21st century BC, Hurrians settled the northeastern parts of Syria while the rest of the region was dominated by the Amorites; Syria was called the Land of the Amurru (Amorites) by their Assyro-Babylonian neighbors. The Northwest Semitic language of the Amorites is the earliest attested of the Canaanite languages. Mari reemerged during this period, and saw renewed prosperity until conquered by Hammurabi of Babylon. Ugarit also arose during this time, circa 1800 BC, close to modern Latakia. Ugaritic was a Semitic language loosely related to the Canaanite languages, and its speakers developed the Ugaritic alphabet,[41] considered to be the world's earliest known alphabet. The Ugaritic kingdom survived until its destruction at the hands of the marauding Indo-European Sea Peoples in the 12th century BC, in what was known as the Late Bronze Age Collapse, which saw similar kingdoms and states suffer the same destruction at the hands of the Sea Peoples.
30
+
31
+ Yamhad (modern Aleppo) dominated northern Syria for two centuries,[42] although Eastern Syria was occupied in the 19th and 18th centuries BC by the Old Assyrian Empire ruled by the Amorite Dynasty of Shamshi-Adad I, and by the Babylonian Empire which was founded by Amorites. Yamhad was described in the tablets of Mari as the mightiest state in the Near East and as having more vassals than Hammurabi of Babylon.[42] Yamhad imposed its authority over Alalakh,[43] Qatna,[44] the Hurrian states and the Euphrates Valley down to the borders with Babylon.[45] The army of Yamhad campaigned as far away as Dēr on the border of Elam (modern Iran).[46] Yamhad was conquered and destroyed, along with Ebla, by the Indo-European Hittites from Asia Minor circa 1600 BC.[47]
32
+
33
+ From this time, Syria became a battleground for various foreign empires: the Hittite Empire, Mitanni Empire, Egyptian Empire, Middle Assyrian Empire, and to a lesser degree Babylonia. The Egyptians initially occupied much of the south, while the Hittites and the Mitanni occupied much of the north. However, Assyria eventually gained the upper hand, destroying the Mitanni Empire and annexing huge swathes of territory previously held by the Hittites and Babylon.
34
+
35
+ Around the 14th century BC, various Semitic peoples appeared in the area, such as the semi-nomadic Suteans, who came into an unsuccessful conflict with Babylonia to the east, and the West Semitic-speaking Arameans, who subsumed the earlier Amorites. They too were subjugated by Assyria and the Hittites for centuries. The Egyptians fought the Hittites for control over western Syria; the fighting reached its zenith in 1274 BC with the Battle of Kadesh.[50][51] The west remained part of the Hittite empire until its destruction c. 1200 BC,[52] while eastern Syria largely became part of the Middle Assyrian Empire,[53] which also annexed much of the west during the reign of Tiglath-Pileser I (1114–1076 BC).
36
+
37
+ With the destruction of the Hittites and the decline of Assyria in the late 11th century BC, the Aramean tribes gained control of much of the interior, founding states such as Bit Bahiani, Aram-Damascus, Hamath, Aram-Rehob, Aram-Naharaim, and Luhuti. From this point, the region became known as Aramea or Aram. There was also a synthesis between the Semitic Arameans and the remnants of the Indo-European Hittites, with the founding of a number of Syro-Hittite states centered in north central Aram (Syria) and south central Asia Minor (modern Turkey), including Palistin, Carchemish and Sam'al.
38
+
39
+ A Canaanite group known as the Phoenicians came to dominate the coasts of Syria, (and also Lebanon and northern Palestine) from the 13th century BC, founding city states such as Amrit, Simyra, Arwad, Paltos, Ramitha and Shuksi. From these coastal regions, they eventually spread their influence throughout the Mediterranean, including building colonies in Malta, Sicily, the Iberian peninsula (modern Spain and Portugal), and the coasts of North Africa and most significantly, founding the major city state of Carthage (in modern Tunisia) in the 9th century BC, which was much later to become the center of a major empire, rivaling the Roman Empire.
40
+
41
+ Syria and the western half of the Near East then fell to the vast Neo-Assyrian Empire (911 BC – 605 BC). The Assyrians introduced Imperial Aramaic as the lingua franca of their empire. This language was to remain dominant in Syria and the entire Near East until after the Arab Islamic conquest in the 7th and 8th centuries AD, and was to be a vehicle for the spread of Christianity. The Assyrians named their colonies of Syria and Lebanon Eber-Nari. Assyrian domination ended after the Assyrians greatly weakened themselves in a series of brutal internal civil wars, followed by attacks from the Medes, Babylonians, Chaldeans, Persians, Scythians and Cimmerians. During the fall of Assyria, the Scythians ravaged and plundered much of Syria. The last stand of the Assyrian army was at Carchemish in northern Syria in 605 BC.
42
+
43
+ The Assyrian Empire was followed by the Neo-Babylonian Empire (605 BC – 539 BC). During this period, Syria became a battle ground between Babylonia and another former Assyrian colony, that of Egypt. The Babylonians, like their Assyrian relations, were victorious over Egypt.
44
+
45
+ The Achaemenid Empire, founded by Cyrus the Great, annexed Syria along with Babylonia to its empire in 539 BC. The Persians retained Imperial Aramaic as one of the diplomatic languages of the Achaemenid Empire (539 BC – 330 BC), as well as the Assyrian name for the new satrapy of Aram/Syria, Eber-Nari.
46
+
47
+ Syria was conquered by the Greek Macedonian Empire, ruled by Alexander the Great, circa 330 BC, and consequently became the Coele-Syria province of the Greek Seleucid Empire (323 BC – 64 BC), with the Seleucid kings styling themselves 'King of Syria' and the city of Antioch being its capital starting from 240 BC.
48
+
49
+ Thus, it was the Greeks who introduced the name "Syria" to the region. Originally an Indo-European corruption of "Assyria" in northern Mesopotamia, the Greeks used this term to describe not only Assyria itself but also the lands to the west which had for centuries been under Assyrian dominion.[54] Thus in the Greco-Roman world both the Arameans of Syria and the Assyrians of Mesopotamia to the east were referred to as "Syrians" or "Syriacs", despite these being distinct peoples in their own right, a confusion which would continue into the modern world. Eventually parts of southern Seleucid Syria were taken by Judean Hasmoneans upon the slow disintegration of the Hellenistic Empire.
50
+
51
+ Syria briefly came under Armenian control from 83 BC, with the conquests of the Armenian king Tigranes the Great, who was welcomed as a savior from the Seleucids and Romans by the Syrian people. However, the Roman general Pompey the Great rode to Syria, captured Antioch, its capital, and turned Syria into a Roman province in 64 BC, thus ending Armenian control over the region, which had lasted two decades. Syria prospered under Roman rule, being strategically located on the Silk Road, which gave it massive wealth and importance and made it a battleground for the rivaling Romans and Persians.
52
+
53
+ Palmyra, a rich and sometimes powerful native Aramaic-speaking kingdom, arose in northern Syria in the 2nd century; the Palmyrenes established a trade network that made the city one of the richest in the Roman empire. Eventually, in the late 3rd century AD, the Palmyrene king Odaenathus defeated the Persian emperor Shapur I and controlled the entirety of the Roman East, while his successor and widow Zenobia established the Palmyrene Empire, which briefly conquered Egypt, Syria, Palestine, much of Asia Minor, Judah and Lebanon, before being finally brought under Roman control in 273 AD.
54
+
55
+ The northern Mesopotamian Assyrian kingdom of Adiabene controlled areas of north east Syria between 10 AD and 117 AD, before it was conquered by Rome.[55]
56
+
57
+ The Aramaic language has been found as far afield as Hadrian's Wall in ancient Britain,[56] with an inscription written by a Palmyrene emigrant at the site of Fort Arbeia.[57]
58
+
59
+ Control of Syria eventually passed from the Romans to the Byzantines, with the split in the Roman Empire.[34]
60
+
61
+ The largely Aramaic-speaking population of Syria during the heyday of the Byzantine Empire was probably not exceeded again until the 19th century. Prior to the Arab Islamic Conquest in the 7th century AD, the bulk of the population were Arameans, but Syria was also home to Greek and Roman ruling classes, Assyrians still dwelt in the northeast, Phoenicians along the coasts, and Jewish and Armenian communities were also extant in major cities, with Nabateans and pre-Islamic Arabs such as the Lakhmids and Ghassanids dwelling in the deserts of southern Syria. Syriac Christianity had taken hold as the major religion, although others still followed Judaism, Mithraism, Manicheanism, Greco-Roman Religion, Canaanite Religion and Mesopotamian Religion. Syria's large and prosperous population made it one of the most important of the Roman and Byzantine provinces, particularly during the 2nd and 3rd centuries (AD).[58]
62
+
63
+ Syrians held considerable amounts of power during the Severan dynasty. The matriarch of the family and Empress of Rome as wife of emperor Septimius Severus was Julia Domna, a Syrian from the city of Emesa (modern-day Homs), whose family held hereditary rights to the priesthood of the god El-Gabal. Her great-nephews, also Arabs from Syria, would become Roman emperors: the first was Elagabalus and the second his cousin, Alexander Severus. Another Roman emperor who was a Syrian was Philip the Arab (Marcus Julius Philippus), who was born in Roman Arabia. He was emperor from 244 to 249,[58] and ruled briefly during the Crisis of the Third Century. During his reign, he focused on his home town of Philippopolis (modern-day Shahba) and began many construction projects to improve the city, most of which were halted after his death.
64
+
65
+ Syria is significant in the history of Christianity; Saulus of Tarsus, better known as the Apostle Paul, was converted on the Road to Damascus and emerged as a significant figure in the Christian Church at Antioch in ancient Syria, from which he left on many of his missionary journeys. (Acts 9:1–43)
66
+
67
+ Muhammad's first interaction with the people and tribes of Syria was during the Invasion of Dumatul Jandal in July 626[59] where he ordered his followers to invade Duma, because Muhammad received intelligence that some tribes there were involved in highway robbery and preparing to attack Medina itself.[60]
68
+
69
+ William Montgomery Watt claims that this was the most significant expedition Muhammad ordered at the time, even though it received little notice in the primary sources. Dumat Al-Jandal was 800 kilometres (500 mi) from Medina, and Watt says that there was no immediate threat to Muhammad, other than the possibility of his communications to Syria and supplies to Medina being interrupted. Watt says "It is tempting to suppose that Muhammad was already envisaging something of the expansion which took place after his death", and that the rapid march of his troops must have "impressed all those who heard of it".[61]
70
+
71
+ William Muir also believes that the expedition was important, as Muhammad, followed by 1,000 men, reached the confines of Syria, where distant tribes had now learnt his name, while the political horizon of Muhammad was extended.[59]
72
+
73
+ By AD 640, Syria was conquered by the Arab Rashidun army led by Khalid ibn al-Walid. In the mid-7th century, the Umayyad dynasty, then rulers of the empire, placed the capital of the empire in Damascus. The country's power declined during later Umayyad rule; this was mainly due to totalitarianism, corruption and the resulting revolutions. The Umayyad dynasty was then overthrown in 750 by the Abbasid dynasty, which moved the capital of the empire to Baghdad.
74
+
75
+ Arabic – made official under Umayyad rule[62] – became the dominant language, replacing Greek and Aramaic of the Byzantine era. In 887, the Egypt-based Tulunids annexed Syria from the Abbasids, and were later replaced by the Egypt-based Ikhshidids and still later by the Hamdanids of Aleppo, founded by Sayf al-Dawla.[63]
76
+
77
+ Sections of Syria were held by French, English, Italian and German overlords between 1098 and 1189 AD during the Crusades and were known collectively as the Crusader states among which the primary one in Syria was the Principality of Antioch. The coastal mountainous region was also occupied in part by the Nizari Ismailis, the so-called Assassins, who had intermittent confrontations and truces with the Crusader States. Later in history when "the Nizaris faced renewed Frankish hostilities, they received timely assistance from the Ayyubids."[64]
78
+
79
+ After a century of Seljuk rule, Syria was largely conquered (1175–1185) by the Kurdish liberator Salah ad-Din, founder of the Ayyubid dynasty of Egypt. Aleppo fell to the Mongols of Hulegu in January 1260, and Damascus in March, but then Hulegu was forced to break off his attack to return to China to deal with a succession dispute.
80
+
81
+ A few months later, the Mamluks arrived with an army from Egypt and defeated the Mongols in the Battle of Ain Jalut in Galilee. The Mamluk leader, Baibars, made Damascus a provincial capital. When he died, power was taken by Qalawun. In the meantime, an emir named Sunqur al-Ashqar had tried to declare himself ruler of Damascus, but he was defeated by Qalawun on 21 June 1280, and fled to northern Syria. Al-Ashqar, who had married a Mongol woman, appealed for help from the Mongols. The Mongols of the Ilkhanate took the city, but Qalawun persuaded Al-Ashqar to join him, and they fought against the Mongols on 29 October 1281, in the Second Battle of Homs, which was won by the Mamluks.[65]
82
+
83
+ In 1400, the Muslim Turco-Mongol conqueror Timur Lenk (Tamurlane) invaded Syria, sacked Aleppo and captured Damascus after defeating the Mamluk army. The city's inhabitants were massacred, except for the artisans, who were deported to Samarkand. Timur-Lenk also conducted specific massacres of the Aramean and Assyrian Christian populations, greatly reducing their numbers.[66][67] By the end of the 15th century, the discovery of a sea route from Europe to the Far East ended the need for an overland trade route through Syria.
84
+
85
+ In 1516, the Ottoman Empire invaded the Mamluk Sultanate of Egypt, conquering Syria, and incorporating it into its empire. The Ottoman system was not burdensome to Syrians because the Turks respected Arabic as the language of the Quran, and accepted the mantle of defenders of the faith. Damascus was made the major entrepot for Mecca, and as such it acquired a holy character to Muslims, because of the beneficial results of the countless pilgrims who passed through on the hajj, the pilgrimage to Mecca.[68]
86
+
87
+ Ottoman administration followed a system that led to peaceful coexistence. Each ethno-religious minority—Arab Shia Muslim, Arab Sunni Muslim, Aramean-Syriac Orthodox, Greek Orthodox, Maronite Christians, Assyrian Christians, Armenians, Kurds and Jews—constituted a millet.[69] The religious heads of each community administered all personal status laws and performed certain civil functions as well.[68] In 1831, Ibrahim Pasha of Egypt renounced his loyalty to the Empire and overran Ottoman Syria, capturing Damascus. His short-term rule over the domain attempted to change the demographics and social structure of the region: he brought thousands of Egyptian villagers to populate the plains of Southern Syria, rebuilt Jaffa and settled it with veteran Egyptian soldiers aiming to turn it into a regional capital, and he crushed peasant and Druze rebellions and deported non-loyal tribesmen. By 1840, however, he had to surrender the area back to the Ottomans.
88
+
89
+ From 1864, Tanzimat reforms were applied in Ottoman Syria, carving out the provinces (vilayets) of Aleppo, Zor, Beirut and Damascus Vilayet; the Mutasarrifate of Mount Lebanon was created as well, and soon after
90
+ the Mutasarrifate of Jerusalem was given a separate status.
91
+
92
+ During World War I, the Ottoman Empire entered the conflict on the side of Germany and the Austro-Hungarian Empire. It ultimately suffered defeat and loss of control of the entire Near East to the British Empire and French Empire. During the conflict, genocide against indigenous Christian peoples was carried out by the Ottomans and their allies in the form of the Armenian Genocide and Assyrian Genocide, with Deir ez-Zor in Ottoman Syria being the final destination of these death marches.[70] In the midst of World War I, two Allied diplomats (Frenchman François Georges-Picot and Briton Mark Sykes) secretly agreed on the post-war division of the Ottoman Empire into respective zones of influence in the Sykes-Picot Agreement of 1916. Initially, the two territories were separated by a border that ran in an almost straight line from Jordan to Iran. However, the discovery of oil in the region of Mosul just before the end of the war led to yet another negotiation with France in 1918 to cede this region to the British zone of influence, which was to become Iraq. The fate of the intermediate province of Zor was left unclear; its occupation by Arab nationalists resulted in its attachment to Syria. This border was recognized internationally when Syria became a League of Nations mandate in 1920[71] and has not changed to date.
93
+
94
+ In 1920, a short-lived independent Kingdom of Syria was established under Faisal I of the Hashemite family. However, his rule over Syria ended after only a few months, following the Battle of Maysalun. French troops occupied Syria later that year after the San Remo conference proposed that the League of Nations put Syria under a French mandate. General Gouraud had, according to his secretary de Caix, two options: "Either build a Syrian nation that does not exist... by smoothing the rifts which still divide it" or "cultivate and maintain all the phenomena, which require our arbitration that these divisions give". De Caix added "I must say only the second option interests me". This is what Gouraud did.[72][73]
95
+
96
+ In 1925, Sultan al-Atrash led a revolt that broke out in the Druze Mountain and spread to engulf the whole of Syria and parts of Lebanon. Al-Atrash won several battles against the French, notably the Battle of al-Kafr on 21 July 1925, the Battle of al-Mazraa on 2–3 August 1925, and the battles of Salkhad, al-Musayfirah and Suwayda. France sent thousands of troops from Morocco and Senegal, leading the French to regain many cities, although resistance lasted until the spring of 1927. The French sentenced Sultan al-Atrash to death, but he had escaped with the rebels to Transjordan and was eventually pardoned. He returned to Syria in 1937 after the signing of the Syrian-French Treaty.
97
+
98
+ Syria and France negotiated a treaty of independence in September 1936, and Hashim al-Atassi was the first president to be elected under the first incarnation of the modern republic of Syria. However, the treaty never came into force because the French Legislature refused to ratify it. With the fall of France in 1940 during World War II, Syria came under the control of Vichy France until the British and Free French occupied the country in the Syria-Lebanon campaign in July 1941. Continuing pressure from Syrian nationalists and the British forced the French to evacuate their troops in April 1946, leaving the country in the hands of a republican government that had been formed during the mandate.[74]
99
+
100
+ Upheaval dominated Syrian politics from independence through the late 1960s. In May 1948, Syrian forces invaded Palestine, together with other Arab states, and immediately attacked Jewish settlements.[75] Their president Shukri al-Quwwatli instructed his troops at the front "to destroy the Zionists".[76][77] The invasion's purpose was to prevent the establishment of the State of Israel.[78] Defeat in this war was one of several trigger factors for the March 1949 Syrian coup d'état by Col. Husni al-Za'im, described as the first military overthrow of the Arab World[78] since the start of the Second World War. This was soon followed by another overthrow, by Col. Sami al-Hinnawi, who was himself quickly deposed by Col. Adib Shishakli, all within the same year.[78]
101
+
102
+ Shishakli eventually abolished multipartyism altogether, but was himself overthrown in a 1954 coup and the parliamentary system was restored.[78] However, by this time, power was increasingly concentrated in the military and security establishment.[78] The weakness of Parliamentary institutions and the mismanagement of the economy led to unrest and the influence of Nasserism and other ideologies. There was fertile ground for various Arab nationalist, Syrian nationalist, and socialist movements, which represented disaffected elements of society. Notably included were religious minorities, who demanded radical reform.[78]
103
+
104
+ In November 1956, as a direct result of the Suez Crisis,[79] Syria signed a pact with the Soviet Union. This gave a foothold for Communist influence within the government in exchange for military equipment.[78] Turkey then became worried about this increase in the strength of Syrian military technology, as it seemed feasible that Syria might attempt to retake İskenderun. Only heated debates in the United Nations lessened the threat of war.[80]
105
+
106
+ On 1 February 1958, Syrian President Shukri al-Quwatli and Egypt's Nasser announced the merging of Egypt and Syria, creating the United Arab Republic, and all Syrian political parties, as well as the communists therein, ceased overt activities.[74] Meanwhile, a group of Syrian Ba'athist officers, alarmed by the party's poor position and the increasing fragility of the union, decided to form a secret Military Committee; its initial members were Lieutenant-Colonel Muhammad Umran, Major Salah Jadid and Captain Hafez al-Assad. Syria seceded from the union with Egypt on 28 September 1961, after a coup.
107
+
108
+ The ensuing instability following the 1961 coup culminated in the 8 March 1963 Ba'athist coup. The takeover was engineered by members of the Arab Socialist Ba'ath Party, led by Michel Aflaq and Salah al-Din al-Bitar. The new Syrian cabinet was dominated by Ba'ath members.[74][78]
109
+
110
+ On 23 February 1966, the Military Committee carried out an intra-party overthrow, imprisoned President Amin Hafiz and designated a regionalist, civilian Ba'ath government on 1 March.[78] Although Nureddin al-Atassi became the formal head of state, Salah Jadid was Syria's effective ruler from 1966 until November 1970,[81] when he was deposed by Hafez al-Assad, who at the time was Minister of Defense.[82] The coup led to a split within the original pan-Arab Ba'ath Party: an Iraqi-led Ba'ath movement (which ruled Iraq from 1968 to 2003) and a Syrian-led Ba'ath movement were established.
111
+
112
+ In the first half of 1967, a low-key state of war existed between Syria and Israel. Conflict over Israeli cultivation of land in the Demilitarized Zone led to 7 April pre-war aerial clashes between Israel and Syria.[83] When the Six-Day War broke out between Egypt and Israel, Syria joined the war and attacked Israel as well. In the final days of the war, Israel turned its attention to Syria, capturing two-thirds of the Golan Heights in under 48 hours.[84] The defeat caused a split between Jadid and Assad over what steps to take next.[85]
113
+
114
+ Disagreement developed between Jadid, who controlled the party apparatus, and Assad, who controlled the military. The 1970 retreat of Syrian forces sent to aid the PLO during the "Black September" hostilities with Jordan reflected this disagreement.[86] The power struggle culminated in the November 1970 Syrian Corrective Revolution, a bloodless military overthrow that installed Hafez al-Assad as the strongman of the government.[82]
115
+
116
+ On 6 October 1973, Syria and Egypt initiated the Yom Kippur War against Israel. The Israel Defense Forces reversed the initial Syrian gains and pushed deeper into Syrian territory.[87]
117
+
118
+ In early 1976, Syria entered Lebanon, beginning its thirty-year military presence.[88] Over the following 15 years of civil war, Syria fought for control over Lebanon, and it remained in the country until 2005.
119
+
120
+ In the late 1970s, an Islamist uprising by the Muslim Brotherhood was directed against the government. Islamists attacked civilians and off-duty military personnel, leading security forces to also kill civilians in retaliatory strikes. The uprising reached its climax in the 1982 Hama massacre,[89] when some 10,000–40,000 people were killed by regular Syrian Army troops.
121
+
122
+ In a major shift in relations with both other Arab states and the Western world, Syria participated in the US-led Gulf War against Saddam Hussein. Syria participated in the multilateral Madrid Conference of 1991, and during the 1990s engaged in negotiations with Israel. These negotiations failed, and there have been no further direct Syrian-Israeli talks since President Hafez al-Assad's meeting with then President Bill Clinton in Geneva in March 2000.[90]
123
+
124
+ Hafez al-Assad died on 10 June 2000. His son, Bashar al-Assad, was elected president in an election in which he ran unopposed.[74] His election saw the birth of the Damascus Spring and hopes of reform, but by autumn 2001, the authorities had suppressed the movement, imprisoning some of its leading intellectuals.[91] Instead, reforms have been limited to some market reforms.[14][92][93]
125
+
126
+ On 5 October 2003, Israel bombed a site near Damascus, claiming it was a terrorist training facility for members of Islamic Jihad.[94] In March 2004, Syrian Kurds and Arabs clashed in the northeastern city of al-Qamishli. Signs of rioting were seen in the cities of Qamishli and Hasakeh.[95] In 2005, Syria ended its military presence in Lebanon.[96][97] On 6 September 2007, foreign jet fighters, suspected as Israeli, reportedly carried out Operation Orchard against a suspected nuclear reactor under construction by North Korean technicians.[98]
127
+
128
+ The ongoing Syrian Civil War was inspired by the Arab Spring revolutions. It began in 2011 as a chain of peaceful protests, followed by an alleged crackdown by the Syrian Army.[99] In July 2011, Army defectors declared the formation of the Free Syrian Army and began forming fighting units. The opposition is dominated by Sunni Muslims, whereas the leading government figures are generally associated with Alawites.[100] The war also involves rebel groups (IS and al-Nusra) and interference by various foreign countries, to the extent that it can be described as a proxy war in Syria.[101]
129
+
130
+ According to various sources, including the United Nations, up to 100,000 people had been killed by June 2013,[102][103][104] including 11,000 children.[105] To escape the violence, 4.9 million[106] Syrian refugees have fled to neighboring countries of Jordan,[107] Iraq,[108] Lebanon, and Turkey.[109][110] An estimated 450,000 Syrian Christians have fled their homes.[111][needs update] By October 2017, an estimated 400,000 people had been killed in the war according to the UN.[112]
131
+
132
+ On 10 June 2020, hundreds of protesters returned to the streets of Sweida for the fourth consecutive day, rallying against the collapse of the country's economy, as the Syrian pound plummeted to 3,000 to the dollar within the previous week.[113]
133
+
134
+ On 11 June, Prime Minister Imad Khamis was dismissed by President Bashar al-Assad, amid anti-government protests over deteriorating economic conditions.[114] The new lows for the Syrian currency and the dramatic increase in sanctions raised new concerns about the survival of the Assad government.[115][116][117]
135
+
136
+ Analysts noted that a resolution to the current banking crisis in Lebanon might be crucial to restoring stability in Syria.[118]
137
+
138
+ Some analysts began to raise concerns that Assad might be on the verge of losing power, but warned that any such collapse of the regime might cause conditions to worsen, as the result might be mass chaos rather than an improvement in political or economic conditions.[119][120][121] Russia continued to expand its influence and military role in the areas of Syria where the main military conflict was occurring.[122]
139
+
140
+ Analysts noted that the upcoming implementation of new heavy sanctions under the US Caesar Act could devastate the Syrian economy, ruin any chances of recovery, and destabilize the entire region.[123]
141
+
142
+ The first new sanctions will take effect on 17 June; additional sanctions will be implemented in August, in three different groups. There are increasing reports that food is becoming difficult to find, the country's economy is under severe pressure, and the whole regime could collapse due to the sanctions.
143
+ [124]
144
+
145
+ Syria lies between latitudes 32° and 38° N, and longitudes 35° and 43° E. The climate varies from the humid Mediterranean coast, through a semiarid steppe zone, to arid desert in the east. The country consists mostly of arid plateau, although the northwest part bordering the Mediterranean is fairly green. Al-Jazira in the northeast and Hawran in the south are important agricultural areas. The Euphrates, Syria's most important river, crosses the country in the east. Syria is one of the fifteen states that comprise the so-called "cradle of civilization".[125] Its land straddles the "northwest of the Arabian plate".[126]
146
+
147
+ Petroleum in commercial quantities was first discovered in the northeast in 1956. The most important oil fields are those of Suwaydiyah, Qaratshui, Rumayian, and Tayyem, near Dayr az–Zawr. The fields are a natural extension of the Iraqi fields of Mosul and Kirkuk. Petroleum became Syria's leading natural resource and chief export after 1974. Natural gas was discovered at the field of Jbessa in 1940.[74]
148
+
149
+ Syria is formally a unitary republic. The current constitution of Syria, adopted in 2012, effectively transformed the country into a semi-presidential republic due to the constitutional right for the election of individuals who do not form part of the National Progressive Front.[127] The President is Head of State and the Prime Minister is Head of Government.[128] The legislature, the People's Council, is the body responsible for passing laws, approving government appropriations and debating policy.[129] In the event of a vote of no confidence by a simple majority, the Prime Minister is required to tender the resignation of their government to the President.[130] Two alternative governments formed during the Syrian Civil War, the Syrian Interim Government (formed in 2013) and the Syrian Salvation Government (formed in 2017), control portions of the north-west of the country and operate in opposition to the Syrian Arab Republic.
150
+
151
+ The executive branch consists of the president, two vice presidents, the prime minister, and the Council of Ministers (cabinet). The constitution requires the president to be a Muslim[131] but does not make Islam the state religion. On 31 January 1973, Hafez al-Assad implemented a new constitution, which led to a national crisis. Unlike previous constitutions, this one did not require that the President of Syria be a Muslim, leading to fierce demonstrations in Hama, Homs and Aleppo organized by the Muslim Brotherhood and the ulama. They labelled Assad the "enemy of Allah" and called for a jihad against his rule.[132] The government survived a series of armed revolts by Islamists, mainly members of the Muslim Brotherhood, from 1976 until 1982.
152
+
153
+ The constitution gives the president the right to appoint ministers, to declare war and state of emergency, to issue laws (which, except in the case of emergency, require ratification by the People's Council), to declare amnesty, to amend the constitution, and to appoint civil servants and military personnel.[133] According to the 2012 constitution, the president is elected by Syrian citizens in a direct election.
154
+
155
+ Syria's legislative branch is the unicameral People's Council. Under the previous constitution, Syria did not hold multi-party elections for the legislature,[133] with two-thirds of the seats automatically allocated to the ruling coalition.[134] On 7 May 2012, Syria held its first elections in which parties outside the ruling coalition could take part. Seven new political parties took part in the elections, of which Popular Front for Change and Liberation was the largest opposition party. The armed anti-government rebels, however, chose not to field candidates and called on their supporters to boycott the elections.
156
+
157
+ As of 2008 the President is the Regional Secretary of the Ba'ath party in Syria and leader of the National Progressive Front governing coalition. Outside of the coalition are 14 illegal Kurdish political parties.[135]
158
+
159
+ Syria's judicial branches include the Supreme Constitutional Court, the High Judicial Council, the Court of Cassation, and the State Security Courts. Islamic jurisprudence is a main source of legislation and Syria's judicial system has elements of Ottoman, French, and Islamic laws. Syria has three levels of courts: courts of first instance, courts of appeals, and the constitutional court, the highest tribunal. Religious courts handle questions of personal and family law.[133] The Supreme State Security Court (SSSC) was abolished by President Bashar al-Assad by legislative decree No. 53 on 21 April 2011.[136]
160
+
161
+ The Personal Status Law 59 of 1953 (amended by Law 34 of 1975) is essentially a codified sharia.[137] Article 3(2) of the 1973 constitution declares Islamic jurisprudence a main source of legislation. The Code of Personal Status is applied to Muslims by sharia courts.[138]
162
+
163
+ As a result of the ongoing civil war, various alternative governments were formed, including the Syrian Interim Government, the Democratic Union Party and localized regions governed by sharia law. Representatives of the Syrian Interim Government were invited to take up Syria's seat at the Arab League on 28 March 2013,[139] and it was recognised as the "sole representative of the Syrian people" by several nations including the United States, United Kingdom and France.[140][141][142]
164
+
165
+ Parliamentary elections were held on 13 April 2016 in the government-controlled areas of Syria, for all 250 seats of Syria's unicameral legislature, the Majlis al-Sha'ab, or the People's Council of Syria.[143] Even before results had been announced, several nations, including Germany, the United States and the United Kingdom, had declared their refusal to accept the results, largely citing them as "not representing the will of the Syrian people".[144] However, representatives of the Russian Federation have voiced their support of this election's results. Syria's system of government is considered to be non-democratic by the North American NGO Freedom House.[145]
166
+
167
+ The situation for human rights in Syria has long been a significant concern among independent organizations such as Human Rights Watch, who in 2010 referred to the country's record as "among the worst in the world."[146] The US State Department-funded Freedom House[147] ranked Syria "Not Free" in its annual Freedom in the World survey.[148]
168
+
169
+ The authorities are accused of arresting democracy and human rights activists, censoring websites, detaining bloggers, and imposing travel bans. Arbitrary detention, torture, and disappearances are widespread.[149] Although Syria's constitution guarantees gender equality, critics say that personal status laws and the penal code discriminate against women and girls; moreover, the penal code grants leniency for so-called 'honour killings'.[149] As of 9 November 2011, during the uprising against President Bashar al-Assad, the United Nations reported that of the over 3,500 total deaths, over 250 were children as young as two years old, and that boys as young as 11 years old had been gang-raped by security services officers.[150][151]
170
+ People opposing President Assad's rule claim that more than 200 people, mostly civilians, were massacred and about 300 injured in Hama in shelling by government forces on 12 July 2012.[152]
171
+
172
+ In August 2013, the government was suspected of using chemical weapons against its civilians. US Secretary of State John Kerry said it was "undeniable" that chemical weapons had been used in the country and that President Bashar al-Assad's forces had committed a "moral obscenity" against his own people. "Make no mistake," Kerry said. "President Obama believes there must be accountability for those who would use the world's most heinous weapon against the world's most vulnerable people. Nothing today is more serious, and nothing is receiving more serious scrutiny".[153]
173
+
174
+ The Emergency Law, effectively suspending most constitutional protections, was in effect from 1963 until 21 April 2011.[136] It was justified by the government in the light of the continuing war with Israel over the Golan Heights.
175
+
176
+ In August 2014, UN Human Rights chief Navi Pillay criticized the international community over its "paralysis" in dealing with the more than 3-year-old civil war gripping the country, which by 30 April 2014 had resulted in 191,369 deaths, with war crimes, according to Pillay, being committed with total impunity on all sides of the conflict. Minority Alawites and Christians are being increasingly targeted by Islamists and other groups fighting in the Syrian civil war.[154][155]
177
+
178
+ In April 2017, the U.S. Navy carried out a missile attack against a Syrian air base[156] which had allegedly been used to conduct a chemical weapons attack on Syrian civilians, according to the US government.[157]
179
+
180
+ The President of Syria is commander in chief of the Syrian armed forces, comprising some 400,000 troops upon mobilization. The military is a conscripted force; males serve in the military upon reaching the age of 18.[158] The obligatory military service period has been decreased over time: in 2005 from two and a half years to two years, in 2008 to 21 months, and in 2011 to a year and a half.[159] About 20,000 Syrian soldiers were deployed in Lebanon until 27 April 2005, when the last of Syria's troops left the country after three decades.[158]
181
+
182
+ The breakup of the Soviet Union—long the principal source of training, material, and credit for the Syrian forces—may have slowed Syria's ability to acquire modern military equipment. It has an arsenal of surface-to-surface missiles. In the early 1990s, Scud-C missiles with a 500-kilometre (310-mile) range were procured from North Korea, and Scud-D, with a range of up to 700 kilometres (430 miles), is allegedly being developed by Syria with the help of North Korea and Iran, according to Zisser.[160]
183
+
184
+ Syria received significant financial aid from Arab states of the Persian Gulf as a result of its participation in the Persian Gulf War, with a sizable portion of these funds earmarked for military spending.
185
+
186
+ Ensuring national security, increasing influence among its Arab neighbors, and securing the return of the Golan Heights have been the primary goals of Syria's foreign policy. At many points in its history, Syria has seen virulent tension with its geographic and cultural neighbors, such as Turkey, Israel, Iraq, and Lebanon. Syria enjoyed an improvement in relations with several of the states in its region in the 21st century, prior to the Arab Spring and the Syrian Civil War.
187
+
188
+ Since the ongoing civil war of 2011, and associated killings and human rights abuses, Syria has been increasingly isolated from the countries in the region, and the wider international community. Diplomatic relations have been severed with several countries including: Britain, Canada, France, Italy, Germany, Tunisia, Egypt, Libya, the United States, Belgium, Spain, and the Arab states of the Persian Gulf.[161]
189
+
190
+ Within the Arab League, Syria continues to maintain diplomatic relations with Algeria, Egypt, Iraq, Lebanon, Sudan and Yemen. Syria's violence against civilians has also seen it suspended from the Arab League and the Organisation of Islamic Cooperation in 2012. Syria continues to foster good relations with its traditional allies, Iran and Russia, who are among the few countries which have supported the Syrian government in its conflict with the Syrian opposition.
191
+
192
+ Syria is included in the European Union's European Neighbourhood Policy (ENP) which aims at bringing the EU and its neighbors closer.
193
+
194
+ In 1939, while Syria was still a French mandate, the French ceded the Sanjak of Alexandretta to Turkey as part of a treaty of friendship in World War II. To facilitate this, a flawed election was held in which ethnic Turks who were originally from the Sanjak but lived in Adana and other areas near the border in Turkey came to vote in the elections, shifting the result in favor of secession. Through this, the Hatay Province of Turkey was formed. The move by the French was very controversial in Syria, and only five years later Syria became independent.[162]
195
+
196
+ The western two-thirds of Syria's Golan Heights region have been occupied by Israel since 1967 and were effectively annexed by Israel in 1981,[163][164] whereas the eastern third is controlled by Syria, with the UNDOF maintaining a buffer zone in between to implement the ceasefire of the Purple Line. Israel's 1981 Golan annexation law is not recognised in international law. The UN Security Council condemned it in Resolution 497 (1981) as "null and void and without international legal effect." Since then, General Assembly resolutions on "The Occupied Syrian Golan" reaffirm the illegality of Israeli occupation and annexation.[165] The Syrian government continues to demand the return of this territory.[citation needed] The only remaining land Syria has in the Golan is a strip of territory which contains the abandoned city of Quneitra, the governorate's de facto capital Madinat al-Baath and many small villages, mostly populated by Circassians, such as Beer Ajam and Hader.[dubious – discuss] In March 2019, U.S. President Donald Trump announced that the United States would recognize Israel's annexation of the Golan Heights.[166]
197
+
198
+ The Syrian occupation of Lebanon began in 1976 as a result of the civil war and ended in April 2005 in response to domestic and international pressure after the assassination of the former Lebanese Prime Minister Rafik Hariri.
199
+
200
+ Another disputed territory is the Shebaa farms, located at the intersection of the Lebanese-Syrian border and the Israeli-occupied Golan Heights. The farms, which are 11 km long and about 3 kilometers wide, were occupied by Israel in 1981, along with the rest of the Golan Heights.[167] Yet following Syrian army advances, the Israeli occupation ended and Syria became the de facto ruling power over the farms. After the Israeli withdrawal from Lebanon in 2000, Hezbollah claimed that the withdrawal was not complete because Shebaa was on Lebanese – not Syrian – territory.[168] After studying 81 different maps, the United Nations concluded that there is no evidence of the abandoned farmlands being Lebanese.[169] Nevertheless, Lebanon has continued to claim ownership of the territory.
201
+
202
+ Syria is divided into 14 governorates, which are sub-divided into 61 districts, which are further divided into sub-districts. The Democratic Federation of Northern Syria, while de facto autonomous, is not recognized by the Syrian Arab Republic as such.
203
+
204
+ Agrarian reform measures were introduced in Syria, consisting of three interrelated programs: legislation regulating the relationship between agricultural laborers and landowners; legislation governing the ownership and use of private and state domain land and directing the economic organization of peasants; and measures reorganizing agricultural production under state control.[170] Despite high levels of inequality in land ownership, these reforms allowed for more progress in the redistribution of land from 1958 to 1961 than any other reforms in Syria's history since independence.
205
+
206
+ The first law (Law 134, passed 4 September 1958) was enacted in response to concern about peasant mobilization and expanding peasants' rights.[171] It was designed to strengthen the position of sharecroppers and agricultural laborers in relation to landowners.[171] This law led to the creation of the Ministry of Labor and Social Affairs, which announced the implementation of new laws that would allow the regulation of working conditions, especially for women and adolescents, set hours of work, and introduce the principle of a minimum wage for paid laborers and an equitable division of the harvest for sharecroppers.[172] Furthermore, it obligated landlords to honor both written and oral contracts, established collective bargaining, and contained provisions for workers' compensation, health, housing, and employment services.[171] Law 134 was not designed strictly to protect workers. It also acknowledged the rights of landlords to form their own syndicates.[171]
207
+
208
+ Telecommunications in Syria are overseen by the Ministry of Communications and Technology.[173] In addition, Syrian Telecom plays an integral role in the distribution of government internet access.[174] The Syrian Electronic Army serves as a pro-government military faction in cyberspace and has been long considered an enemy of the hacktivist group Anonymous.[175] Because of internet censorship laws, 13,000 internet activists were arrested between March 2011 and August 2012.[176]
209
+
210
+ As of 2015[update], the Syrian economy relies upon inherently unreliable revenue sources such as dwindling customs and income taxes, which are heavily bolstered by lines of credit from Iran.[177] Iran is believed to have spent between US$6 billion and US$20 billion a year on Syria during the Syrian Civil War.[178] The Syrian economy has contracted 60% and the Syrian pound has lost 80% of its value, with the economy becoming part state-owned and part war economy.[179] At the outset of the ongoing Syrian Civil War, Syria was classified by the World Bank as a "lower middle income country."[180] In 2010, Syria remained dependent on the oil and agriculture sectors.[181] The oil sector provided about 40% of export earnings.[181] Offshore exploration has indicated that large reserves of oil exist on the Mediterranean Sea floor between Syria and Cyprus.[182] The agriculture sector contributes to about 20% of GDP and 20% of employment. Oil reserves are expected to decrease in the coming years and Syria has already become a net oil importer.[181] Since the civil war began, the economy shrank by 35%, and the Syrian pound has fallen to one-sixth of its prewar value.[183] The government increasingly relies on credit from Iran, Russia and China.[183]
211
+
212
+ The economy is highly regulated by the government, which has increased subsidies and tightened trade controls to assuage protesters and protect foreign currency reserves.[184] Long-run economic constraints include foreign trade barriers, declining oil production, high unemployment, rising budget deficits, and increasing pressure on water supplies caused by heavy use in agriculture, rapid population growth, industrial expansion, and water pollution.[184] The UNDP announced in 2005 that 30% of the Syrian population lives in poverty and 11.4% live below the subsistence level.[74]
213
+
214
+ Syria's share in global exports has eroded gradually since 2001.[185] The real per capita GDP growth was just 2.5% per year in the 2000–2008 period.[185] Unemployment is high at above 10%. Poverty rates have increased from 11% in 2004 to 12.3% in 2007.[185] In 2007, Syria's main exports included crude oil, refined products, raw cotton, clothing, fruits, and grains. The bulk of Syrian imports are raw materials essential for industry, vehicles, agricultural equipment, and heavy machinery. Earnings from oil exports as well as remittances from Syrian workers are the government's most important sources of foreign exchange.[74]
215
+
216
+ Political instability poses a significant threat to future economic development.[186] Foreign investment is constrained by violence, government restrictions, economic sanctions, and international isolation. Syria's economy also remains hobbled by state bureaucracy, falling oil production, rising budget deficits, and inflation.[186]
217
+
218
+ Prior to the civil war in 2011, the government hoped to attract new investment in the tourism, natural gas, and service sectors to diversify its economy and reduce its dependence on oil and agriculture. The government began to institute economic reforms aimed at liberalizing most markets, but those reforms were slow and ad hoc, and have been completely reversed since the outbreak of conflict in 2011.[187]
219
+
220
+ As of 2012[update], because of the ongoing Syrian civil war, the value of Syria's overall exports has been slashed by two-thirds, from the figure of US$12 billion in 2010 to only US$4 billion in 2012.[188] Syria's GDP declined by over 3% in 2011,[189] and is expected to further decline by 20% in 2012.[190]
221
+
222
+ As of 2012[update], Syria's oil and tourism industries in particular have been devastated, with US$5 billion lost to the ongoing conflict of the civil war.[188] Reconstruction needed because of the ongoing civil war will cost as much as US$10 billion.[188] Sanctions have sapped the government's finance. US and European Union bans on oil imports, which went into effect in 2012, are estimated to cost Syria about $400 million a month.[191]
223
+
224
+ Revenues from tourism have dropped dramatically, with hotel occupancy rates falling from 90% before the war to less than 15% in May 2012.[192] Around 40% of all employees in the tourism sector have lost their jobs since the beginning of the war.[192]
225
+
226
+ In May 2015, ISIS captured Syria's phosphate mines, one of the Syrian government's last chief sources of income.[193] The following month, ISIS blew up a gas pipeline to Damascus that was used to generate heating and electricity in Damascus and Homs; "the name of its game for now is denial of key resources to the regime," an analyst stated.[194] In addition, ISIS was closing in on the Shaer gas field and three other facilities in the area—Hayan, Jihar and Ebla—with the loss of these western gas fields having the potential to cause Iran to further subsidize the Syrian government.[195]
227
+
228
+ Syria's petroleum industry has been subject to sharp decline. In September 2014, ISIS was producing more oil than the government at 80,000 bbl/d (13,000 m3/d) compared to the government's 17,000 bbl/d (2,700 m3/d) with the Syrian Oil Ministry stating that by the end of 2014, oil production had plunged further to 9,329 bbl/d (1,483.2 m3/d); ISIS has since captured a further oil field, leading to a projected oil production of 6,829 bbl/d (1,085.7 m3/d).[177] In the third year of the Syrian Civil War, the deputy economy minister Salman Hayan stated that Syria's two main oil refineries were operating at less than 10% capacity.[196]
229
+
230
+ Historically, the country has produced heavy-grade oil from fields located in the northeast since the late 1960s. In the early 1980s, light-grade, low-sulphur oil was discovered near Deir ez-Zor in eastern Syria. Syria's rate of oil production decreased dramatically from a peak close to 600,000 barrels per day (95,000 m3/d) (bpd) in 1995 down to less than 182,500 bbl/d (29,020 m3/d) in 2012.[197] Since 2012, production has decreased even further, reaching 32,000 barrels per day (5,100 m3/d) (bpd) in 2014. Official figures put production in 2015 at 27,000 barrels per day (4,300 m3/d), but those figures must be treated with caution because it is difficult to estimate the oil currently being produced in rebel-held areas.
231
+
232
+ Prior to the uprising, more than 90% of Syrian oil exports were to EU countries, with the remainder going to Turkey.[192] Oil and gas revenues constituted in 2012 around 20% of total GDP and 25% of total government revenue.[192]
233
+
234
+ On 27 January 2020, the Baniyas oil refinery of Syria was attacked by militants using explosives placed on underwater pipelines. It was the third attack on Syria's oil and gas industry in less than a year, and was aimed at preventing oil imports into the country.[198]
235
+
236
+ Syria has four international airports (Damascus, Aleppo, Lattakia and Kamishly), which serve as hubs for Syrian Air and are also served by a variety of foreign carriers.[citation needed]
237
+
238
+ The majority of Syrian cargo is carried by Syrian Railways (the Syrian railway company), which links up with Turkish State Railways (the Turkish counterpart). For a relatively underdeveloped country, Syria's railway infrastructure is well maintained with many express services and modern trains.[199]
239
+
240
+ The road network in Syria is 69,873 kilometres (43,417 miles) long, including 1,103 kilometres (685 miles) of expressways. The country also has 900 kilometres (560 miles) of navigable but not economically significant waterways.[200]
241
+
242
+ Syria is a semiarid country with scarce water resources. The largest water consuming sector in Syria is agriculture. Domestic water use stands at only about 9% of total water use.[201] A big challenge for Syria is its high population growth with a rapidly increasing demand for urban and industrial water. In 2006 the population of Syria was 19.4 million with a growth rate of 2.7%.[202]
243
+
244
+ Most people live in the Euphrates River valley and along the coastal plain, a fertile strip between the coastal mountains and the desert. Overall population density in Syria is about 99 per square kilometre (258 per square mile). According to the World Refugee Survey 2008, published by the U.S. Committee for Refugees and Immigrants, Syria hosted a population of refugees and asylum seekers numbering approximately 1,852,300. The vast majority of this population was from Iraq (1,300,000), but sizeable populations from Palestine (543,400) and Somalia (5,200) also lived in the country.[205]
245
+
246
+ In what the UN has described as "the biggest humanitarian emergency of our era",[206] about 9.5 million Syrians, half the population, have been displaced since the outbreak of the Syrian Civil War in March 2011;[207] 4 million are outside the country as refugees.[208]
247
+
248
+ Syrians are an overall indigenous Levantine people, closely related to their immediate neighbors, such as Lebanese, Palestinians, Jordanians and Jews.[209][210] Syria has a population of approximately 18,500,000 (2019 estimate). Syrian Arabs, together with some 600,000 Palestinians (and not counting the roughly 6 million refugees outside the country), make up roughly 74% of the population.[184]
249
+
250
+ The indigenous Assyrians and Western Aramaic-speakers number around 400,000 people,[211] with the Western Aramaic-speakers living mainly in the villages of Ma'loula, Jubb'adin and Bakh'a, while the Assyrians mainly reside in the north and northeast (Homs, Aleppo, Qamishli, Hasakah). Many (particularly the Assyrian group) still retain several Neo-Aramaic dialects as spoken and written languages.[212]
251
+
252
+ The second-largest ethnic group in Syria are the Kurds. They constitute about 9%[213] to 10%[214] of the population, or approximately 1.6 million people (including 40,000 Yazidis[214]). Most Kurds reside in the northeastern corner of Syria and most speak the Kurmanji variant of the Kurdish language.[213]
253
+
254
+ The third largest ethnic group are the Turkish-speaking Syrian Turkmen/Turkoman. There are no reliable estimates of their total population, with estimates ranging from several hundred thousand to 3.5 million.[215][216][217]
255
+
256
+ The fourth largest ethnic group are the Assyrians (3–4%),[214] followed by the Circassians (1.5%)[214] and the Armenians (1%),[214] most of whom are descendants of refugees who arrived in Syria during the Armenian Genocide. Syria holds the 7th largest Armenian population in the world. They are mainly gathered in Aleppo, Qamishli, Damascus and Kesab.
257
+
258
+ There are also smaller ethnic minority groups, such as the Albanians, Bosnians, Georgians, Greeks, Persians, Pashtuns and Russians.[214] However, most of these ethnic minorities have become Arabized to some degree, particularly those who practice the Muslim faith.[214]
259
+
260
+ Syria was once home to a substantial population of Jews, with large communities in Damascus, Aleppo, and Qamishli. Due to a combination of persecution in Syria and opportunities elsewhere, the Jews began to emigrate in the second half of the 19th century to Great Britain, the United States, and Israel. The process was completed with the establishment of the State of Israel in 1948. Today only a few Jews remain in Syria.
261
+
262
+ The largest concentration of the Syrian diaspora outside the Arab world is in Brazil, which has millions of people of Arab and other Near Eastern ancestries.[218] Brazil is the first country in the Americas to offer humanitarian visas to Syrian refugees.[219] The majority of Arab Argentines are from either Lebanese or Syrian background.[220]
263
+
264
+ Religion in Syria (est. 2019)[221]
265
+
266
+ Sunni Muslims make up between 69% and 74% of Syria's population,[221] and Sunni Arabs account for 59–60% of the population. Most Kurds (8.5%)[222] and most Turkoman (3%)[222] are Sunni and account for the difference between Sunnis and Sunni Arabs, while 13% of Syrians are Shia Muslims (mainly Alawites, along with Twelvers and Ismailis, and including Arabs, Kurds and Turkoman), 10% are Christian[221] (the majority Antiochian Greek Orthodox, the rest Syrian Orthodox, Greek Catholic and other Catholic rites, Assyrian Church of the East, Armenian Orthodox, Protestants and other denominations), and 3% are Druze.[221] Druze number around 500,000 and are concentrated mainly in the southern area of Jabal al-Druze.[223]
267
+
268
+ President Bashar al-Assad's family is Alawite and Alawites dominate the government of Syria and hold key military positions.[224] In May 2013, SOHR stated that out of 94,000 killed during the Syrian Civil War, at least 41,000 were Alawites.[225]
269
+
270
+ Christians (1.2 million), a sizable number of whom are found among Syria's population of Palestinian refugees, are divided into several sects: Chalcedonian Antiochian Orthodox make up 45.7% of the Christian population; the Catholics (Melkite, Armenian Catholic, Syriac Catholic, Maronite, Chaldean Catholic and Latin) make up 16.2%; the Armenian Apostolic Church 10.9%; the Syriac Orthodox 22.4%; the Assyrian Church of the East and several smaller Christian denominations account for the remainder. Many Christian monasteries also exist. Many Christian Syrians belong to a high socio-economic class.[226]
271
+
272
+ Arabic is the official language of the country. Several modern Arabic dialects are used in everyday life, most notably Levantine in the west and Mesopotamian in the northeast. According to The Encyclopedia of Arabic Language and Linguistics, in addition to Arabic, the following languages are spoken in the country, in order of the number of speakers: Kurdish,[227] Turkish,[227] Neo-Aramaic (four dialects),[227] Circassian,[227] Chechen,[227] Armenian,[227] and finally Greek.[227] However, none of these minority languages have official status.[227]
273
+
274
+ Aramaic was the lingua franca of the region before the advent of Arabic, and is still spoken among Assyrians, and Classical Syriac is still used as the liturgical language of various Syriac Christian denominations. Most remarkably, Western Neo-Aramaic is still spoken in the village of Ma'loula as well as two neighboring villages, 56 km (35 mi) northeast of Damascus.
275
+
276
+ English and French are widely spoken as second languages, but English is more often used.
277
+
278
+ Syria is a traditional society with a long cultural history.[228] Importance is placed on family, religion, education, self-discipline and respect. Syrians' taste for the traditional arts is expressed in dances such as the al-Samah, the Dabkeh in all their variations, and the sword dance. Marriage ceremonies and the births of children are occasions for the lively demonstration of folk customs.[229]
279
+
280
+ The literature of Syria has contributed to Arabic literature and has a proud tradition of oral and written poetry. Syrian writers, many of whom migrated to Egypt, played a crucial role in the nahda or Arab literary and cultural revival of the 19th century. Prominent contemporary Syrian writers include, among others, Adonis, Muhammad Maghout, Haidar Haidar, Ghada al-Samman, Nizar Qabbani and Zakariyya Tamer.
281
+
282
+ Ba'ath Party rule, since the 1966 coup, has brought about renewed censorship. In this context, the genre of the historical novel, spearheaded by Nabil Sulayman, Fawwaz Haddad, Khyri al-Dhahabi and Nihad Siris, is sometimes used as a means of expressing dissent, critiquing the present through a depiction of the past. Syrian folk narrative, as a subgenre of historical fiction, is imbued with magical realism, and is also used as a means of veiled criticism of the present. Salim Barakat, a Syrian émigré living in Sweden, is one of the leading figures of the genre. Contemporary Syrian literature also encompasses science fiction and futuristic utopias (Nuhad Sharif, Talib Umran), which may also serve as media of dissent.
283
+
284
+ The Syrian music scene, in particular that of Damascus, has long been among the Arab world's most important, especially in the field of classical Arab music. Syria has produced several pan-Arab stars, including Asmahan, Farid al-Atrash and singer Lena Chamamyan. The city of Aleppo is known for its muwashshah, a form of Andalous sung poetry popularized by Sabri Moudallal, as well as for popular stars like Sabah Fakhri.
285
+
286
+ Television was introduced to Syria and Egypt in 1960, when both were part of the United Arab Republic. It broadcast in black and white until 1976. Syrian soap operas have considerable market penetration throughout the eastern Arab world.[230]
287
+
288
+ Nearly all of Syria's media outlets are state-owned, and the Ba'ath Party controls nearly all newspapers.[231] The authorities operate several intelligence agencies,[232] among them Shu'bat al-Mukhabarat al-'Askariyya, employing many operatives.[233] During the Syrian Civil War many of Syria's artists, poets, writers and activists have been incarcerated, and some have been killed, including famed cartoonist Akram Raslam.[234]
289
+
290
+ The most popular sports in Syria are football, basketball, swimming, and tennis. Damascus was home to the fifth and seventh Pan Arab Games.
291
+
292
+ Syrian cuisine is rich and varied in its ingredients, linked to the regions of Syria where a specific dish has originated. Syrian food mostly consists of Southern Mediterranean, Greek, and Southwest Asian dishes. Some Syrian dishes also evolved from Turkish and French cooking: dishes like shish kebab, stuffed zucchini/courgette, and yabraʾ (stuffed grape leaves, the word yabraʾ deriving from the Turkish word yaprak, meaning leaf).
293
+
294
+ The main dishes that form Syrian cuisine are kibbeh, hummus, tabbouleh, fattoush, labneh, shawarma, mujaddara, shanklish, pastırma, sujuk and baklava. Baklava is made of filo pastry filled with chopped nuts and soaked in honey. Syrians often serve selections of appetizers, known as meze, before the main course. Za'atar, minced beef, and cheese manakish are popular hors d'œuvres. The Arabic flatbread khubz is always eaten together with meze.
295
+
296
+ Drinks in Syria vary, depending on the time of day and the occasion. Arabic coffee is the most well-known hot drink, usually prepared in the morning at breakfast or in the evening. It is usually served for guests or after food. Arak, an alcoholic drink, is a well-known beverage, served mostly on special occasions. Other Syrian beverages include ayran, jallab, white coffee, and a locally manufactured beer called Al Shark.[235]
297
+
298
+ Education is free and compulsory from ages 6 to 12. Schooling consists of 6 years of primary education followed by a 3-year general or vocational training period and a 3-year academic or vocational program. The second 3-year period of academic training is required for university admission. Total enrollment at post-secondary schools is over 150,000. The literacy rate of Syrians aged 15 and older is 90.7% for males and 82.2% for females.[236][237]
299
+
300
+ Since 1967, all schools, colleges, and universities have been under close government supervision by the Ba'ath Party.[238]
301
+
302
+ There are 6 state universities in Syria[239] and 15 private universities.[240] The top two state universities are Damascus University (210,000 students as of 2014)[241] and University of Aleppo.[242] The top private universities in Syria are: Syrian Private University, Arab International University, University of Kalamoon and International University for Science and Technology. There are also many higher institutes in Syria, like the Higher Institute of Business Administration, which offer undergraduate and graduate programs in business.[243]
303
+
304
+ According to the Webometrics Ranking of World Universities, the top-ranking universities in the country are Damascus University (3540th worldwide), the University of Aleppo (7176th) and Tishreen University (7968th).[244]
305
+
306
+ In 2010, spending on healthcare accounted for 3.4% of the country's GDP. In 2008, there were 14.9 physicians and 18.5 nurses per 10,000 inhabitants.[245] The life expectancy at birth was 75.7 years in 2010, or 74.2 years for males and 77.3 years for females.[246]
en/5566.html.txt ADDED
@@ -0,0 +1,49 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Hearing, or auditory perception, is the ability to perceive sounds by detecting vibrations,[1] changes in the pressure of the surrounding medium through time, through an organ such as the ear. The academic field concerned with hearing is auditory science.
4
+
5
+ Sound may be heard through solid, liquid, or gaseous matter.[2] It is one of the traditional five senses; partial or total inability to hear is called hearing loss.
6
+
7
+ In humans and other vertebrates, hearing is performed primarily by the auditory system: mechanical waves, known as vibrations, are detected by the ear and transduced into nerve impulses that are perceived by the brain (primarily in the temporal lobe). Like touch, audition requires sensitivity to the movement of molecules in the world outside the organism. Both hearing and touch are types of mechanosensation.[3][4]
8
+
9
+ There are three main components of the human auditory system: the outer ear, the middle ear, and the inner ear.
10
+
11
+ The outer ear includes the pinna, the visible part of the ear, as well as the ear canal, which terminates at the eardrum, also called the tympanic membrane. The pinna serves to focus sound waves through the ear canal toward the eardrum. Because of the asymmetrical character of the outer ear of most mammals, sound is filtered differently on its way into the ear depending on the location of its origin. This gives these animals the ability to localize sound vertically. The eardrum is an airtight membrane, and when sound waves arrive there, they cause it to vibrate following the waveform of the sound. Cerumen (ear wax) is produced by ceruminous and sebaceous glands in the skin of the human ear canal, protecting the ear canal and tympanic membrane from physical damage and microbial invasion.[5]
12
+
13
+ The middle ear consists of a small air-filled chamber that is located medial to the eardrum. Within this chamber are the three smallest bones in the body, known collectively as the ossicles, which include the malleus, incus, and stapes (also known as the hammer, anvil, and stirrup, respectively). They aid in the transmission of the vibrations from the eardrum into the inner ear, the cochlea. The purpose of the middle ear ossicles is to overcome the impedance mismatch between air waves and cochlear waves, by providing impedance matching.
14
+
15
+ Also located in the middle ear are the stapedius muscle and tensor tympani muscle, which protect the hearing mechanism through a stiffening reflex. The stapes transmits sound waves to the inner ear through the oval window, a flexible membrane separating the air-filled middle ear from the fluid-filled inner ear. The round window, another flexible membrane, allows for the smooth displacement of the inner ear fluid caused by the entering sound waves.
16
+
17
+ The inner ear consists of the cochlea, which is a spiral-shaped, fluid-filled tube. It is divided lengthwise by the organ of Corti, which is the main organ of mechanical to neural transduction. Inside the organ of Corti is the basilar membrane, a structure that vibrates when waves from the middle ear propagate through the cochlear fluid – endolymph. The basilar membrane is tonotopic, so that each frequency has a characteristic place of resonance along it. Characteristic frequencies are high at the basal entrance to the cochlea, and low at the apex. Basilar membrane motion causes depolarization of the hair cells, specialized auditory receptors located within the organ of Corti.[6] While the hair cells do not produce action potentials themselves, they release neurotransmitter at synapses with the fibers of the auditory nerve, which does produce action potentials. In this way, the patterns of oscillations on the basilar membrane are converted to spatiotemporal patterns of firings which transmit information about the sound to the brainstem.[7]
18
+
19
+ The sound information from the cochlea travels via the auditory nerve to the cochlear nucleus in the brainstem. From there, the signals are projected to the inferior colliculus in the midbrain tectum. The inferior colliculus integrates auditory input with limited input from other parts of the brain and is involved in subconscious reflexes such as the auditory startle response.
20
+
21
+ The inferior colliculus in turn projects to the medial geniculate nucleus, a part of the thalamus where sound information is relayed to the primary auditory cortex in the temporal lobe. Sound is believed to first become consciously experienced at the primary auditory cortex. Around the primary auditory cortex lies Wernicke's area, a cortical area involved in interpreting sounds that is necessary to understand spoken words.
22
+
23
+ Disturbances (such as stroke or trauma) at any of these levels can cause hearing problems, especially if the disturbance is bilateral. In some instances it can also lead to auditory hallucinations or more complex difficulties in perceiving sound.
24
+
25
+ Hearing can be measured by behavioral tests using an audiometer. Electrophysiological tests of hearing can provide accurate measurements of hearing thresholds even in unconscious subjects. Such tests include auditory brainstem evoked potentials (ABR), otoacoustic emissions (OAE) and electrocochleography (ECochG). Technical advances in these tests have allowed hearing screening for infants to become widespread.
26
+
27
+ Hearing can also be measured by mobile applications that include an audiological hearing test function or a hearing aid application. These applications allow the user to measure hearing thresholds at different frequencies (an audiogram). Despite possible errors in measurement, hearing loss can be detected.[8][9]
28
+
29
+ There are several different types of hearing loss: conductive hearing loss, sensorineural hearing loss and mixed types.
30
+
31
+ There are defined degrees of hearing loss:[10][11]
32
+
33
+ Hearing protection is the use of devices designed to prevent Noise-Induced Hearing Loss (NIHL), a type of post-lingual hearing impairment. The various means used to prevent hearing loss generally focus on reducing the levels of noise to which people are exposed. One way this is done is through environmental modifications such as acoustic quieting, which may be achieved with as basic a measure as lining a room with curtains, or as complex a measure as employing an anechoic chamber, which absorbs nearly all sound. Another means is the use of devices such as earplugs, which are inserted into the ear canal to block noise, or earmuffs, objects designed to cover a person's ears entirely.
34
+
35
+ The loss of hearing, when it is caused by neural loss, cannot presently be cured. Instead, its effects can be mitigated by the use of audioprosthetic devices, i.e. hearing assistive devices such as hearing aids and cochlear implants. In a clinical setting, this management is offered by otologists and audiologists.
36
+
37
+ Hearing loss is associated with Alzheimer's disease and dementia, with a greater degree of hearing loss tied to a higher risk.[12] There is also an association between type 2 diabetes and hearing loss.[13]
38
+
39
+ Hearing threshold and the ability to localize sound sources are reduced underwater in humans, but not in aquatic animals, including whales, seals, and fish which have ears adapted to process water-borne sound.[14][15]
40
+
41
+ Not all sounds are normally audible to all animals. Each species has a range of normal hearing for both amplitude and frequency. Many animals use sound to communicate with each other, and hearing in these species is particularly important for survival and reproduction. In species that use sound as a primary means of communication, hearing is typically most acute for the range of pitches produced in calls and speech.
42
+
43
+ Frequencies capable of being heard by humans are called audio or sonic. The range is typically considered to be between 20 Hz and 20,000 Hz.[16] Frequencies higher than audio are referred to as ultrasonic, while frequencies below audio are referred to as infrasonic. Some bats use ultrasound for echolocation while in flight. Dogs are able to hear ultrasound, which is the principle of 'silent' dog whistles. Snakes sense infrasound through their jaws, and baleen whales, giraffes, dolphins and elephants use it for communication. Some fish have the ability to hear more sensitively due to a well-developed, bony connection between the ear and their swim bladder. This "aid to the deaf" for fishes appears in some species such as carp and herring.[17]
44
+
45
+ Even though they do not have ears, invertebrates have developed other structures and systems to decode vibrations traveling through the air, or "sound". The zoologist Charles Henry Turner was the first scientist to formally demonstrate this phenomenon, through rigorously controlled experiments in ants.[18] Turner ruled out the detection of ground vibration and suggested that other insects likely have auditory systems as well.
46
+
47
+ Many insects detect sound through the way air vibrations deflect hairs along their body. Some insects have even developed specialized hairs tuned to particular frequencies; certain caterpillar species, for example, have evolved hairs that resonate most strongly with the sound of buzzing wasps, warning them of the presence of natural enemies.[19]
48
+
49
+ Some insects possess a tympanal organ. These are "eardrums" that cover air-filled chambers on the legs. As in vertebrate hearing, the eardrums react to sound waves. Receptors placed on the inside translate the oscillation into electric signals and send them to the brain. Several groups of flying insects that are preyed upon by echolocating bats can perceive the ultrasound emissions this way and reflexively practice ultrasound avoidance.
en/5567.html.txt ADDED
@@ -0,0 +1,232 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ An operating system (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs.
4
+
5
+ Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.
6
+
7
+ For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware,[1][2] although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers.
8
+
9
+ The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%. macOS by Apple Inc. is in second place (13.23%), and the varieties of Linux are collectively in third place (1.57%).[3] In the mobile sector (including smartphones and tablets), Android's share was up to 70% in 2017.[4] According to third quarter 2016 data, Android's share on smartphones is dominant at 87.5 percent, with a growth rate of 10.3 percent per year, followed by Apple's iOS at 12.1 percent, with a 5.2 percent per-year decrease in market share, while other operating systems amount to just 0.3 percent.[5] Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications.
10
+
11
+ A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner, as in the sketch below. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking.
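+
+ As a rough, hedged illustration of the cooperative model (a toy sketch, not code from any real operating system): the C program below runs two tasks by relying on each one to return control voluntarily, whereas a preemptive kernel would instead interrupt them on a timer tick.
+
+ #include <stdio.h>
+
+ /* Toy cooperative "scheduler": each task does a small unit of work and
+    then returns, voluntarily yielding the CPU. If one task never returned,
+    every other task would starve - the key weakness of cooperative
+    multitasking that preemption solves. */
+
+ static void task_a(void) { puts("task A: doing a slice of work"); }
+ static void task_b(void) { puts("task B: doing a slice of work"); }
+
+ int main(void) {
+     void (*tasks[])(void) = { task_a, task_b };
+     const int num_tasks = 2;
+
+     for (int round = 0; round < 3; round++) {   /* a few scheduling rounds */
+         for (int i = 0; i < num_tasks; i++) {
+             tasks[i]();                         /* runs until it "yields" by returning */
+         }
+     }
+     return 0;
+ }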
12
+
13
+ Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem.[6] A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources to multiple users.
14
+
15
+ A distributed operating system manages a group of distinct, networked computers and makes them appear to be a single computer, as all computations are distributed (divided amongst the constituent computers).[7]
16
+
17
+ In the distributed and cloud computing context of an OS, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses.[8]
18
+
19
+ Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines with less autonomy (e.g. PDAs). They are very compact and extremely efficient by design, and are able to operate with a limited amount of resources. Windows CE and Minix 3 are some examples of embedded operating systems.
20
+
21
+ A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Such an event-driven system switches between tasks based on their priorities or external events, whereas time-sharing operating systems switch tasks based on clock interrupts.
22
+
23
+ A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space, machine image that can be deployed to cloud or embedded environments.
24
+
25
+ Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their modern and more complex forms until the early 1960s.[9] Hardware features were added that enabled use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers.
26
+
27
+ In the 1940s, the earliest electronic digital systems had no operating systems. Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plugboards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards. After programmable general purpose computers were invented, machine languages (consisting of strings of the binary digits 0 and 1 on punched paper tape) were introduced that sped up the programming process (Stern, 1981).[full citation needed]
28
+
29
+ In the early 1950s, a computer could execute only one program at a time. Each user had sole use of the computer for a limited period and would arrive at a scheduled time with their program and data on punched paper cards or punched tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a front panel using toggle switches and panel lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the universal Turing machine.[9]
30
+
31
+ Later machines came with libraries of programs, which would be linked to a user's program to assist in operations such as input and output and compiling (generating machine code from human-readable symbolic code). This was the genesis of the modern-day operating system. However, machines still ran a single job at a time. At Cambridge University in England, the job queue was at one time a washing line (clothes line) from which tapes were hung with different colored clothes-pegs to indicate job priority.[citation needed]
32
+
33
+ An improvement was the Atlas Supervisor. Introduced with the Manchester Atlas in 1962, it is considered by many to be the first recognisable modern operating system.[10] Brinch Hansen described it as "the most significant breakthrough in the history of operating systems."[11]
34
+
35
+ Through the 1950s, many major features were pioneered in the field of operating systems on mainframe computers, including batch processing, input/output interrupting, buffering, multitasking, spooling, runtime libraries, link-loading, and programs for sorting records in files. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications. In 1959, the SHARE Operating System was released as an integrated utility for the IBM 704, and later in the 709 and 7090 mainframes, although it was quickly supplanted by IBSYS/IBJOB on the 709, 7090 and 7094.
36
+
37
+ During the 1960s, IBM's OS/360 introduced the concept of a single OS spanning an entire product line, which was crucial for the success of the System/360 machines. IBM's current mainframe operating systems are distant descendants of this original system and modern machines are backwards-compatible with applications written for OS/360.[citation needed]
38
+
39
+ OS/360 also pioneered the concept that the operating system keeps track of all of the system resources that are used, including program and data space allocation in main memory and file space in secondary storage, and file locking during updates. When a process is terminated for any reason, all of these resources are re-claimed by the operating system.
40
+
41
+ The alternative CP-67 system for the S/360-67 started a whole line of IBM operating systems focused on the concept of virtual machines. Other operating systems used on IBM S/360 series mainframes included systems developed by IBM: COS/360 (Compatibility Operating System), DOS/360 (Disk Operating System), TSS/360 (Time Sharing System), TOS/360 (Tape Operating System), BOS/360 (Basic Operating System), and ACP (Airline Control Program), as well as a few non-IBM systems: MTS (Michigan Terminal System), MUSIC (Multi-User System for Interactive Computing), and ORVYL (Stanford Timesharing System).
42
+
43
+ Control Data Corporation developed the SCOPE operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. Plato was remarkably innovative for its time, featuring real-time chat, and multi-user graphical games.
44
+
45
+ In 1961, Burroughs Corporation introduced the B5000 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages with no machine language or assembler; indeed, the MCP was the first OS to be written exclusively in a high-level language (ESPOL, a dialect of ALGOL). MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS/400, IBM made an approach to Burroughs to license MCP to run on the AS/400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys company's ClearPath/MCP line of computers.
46
+
47
+ UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems[citation needed]. Like all early main-frame systems, this batch-oriented system managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.
48
+
49
+ General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed General Comprehensive Operating System (GCOS).
50
+
51
+ Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Before the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community. RT-11 was a single-user real-time OS for the PDP-11 class minicomputer, and RSX-11 was the corresponding multi-user OS.
52
+
53
+ From the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying computer architectures to appear to be the same as others in a series. In fact, most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations.
54
+
55
+ The enormous investment in software for these systems made since the 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. Notable supported mainframe operating systems include:
56
+
57
+ The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk operating system was CP/M, which was supported on many early microcomputers and was closely imitated by Microsoft's MS-DOS, which became widely popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS). In the 1980s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative graphical user interface (GUI) to the Mac OS.
58
+
59
+ The introduction of the Intel 80386 CPU chip in October 1985,[12] with 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the NEXTSTEP operating system. NEXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD as the core of Mac OS X (macOS after latest name change).
60
+
61
+ The GNU Project was started by activist and programmer Richard Stallman with the goal of creating a complete free software replacement to the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991, Finnish computer science student Linus Torvalds, with cooperation from volunteers collaborating over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU user space components and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to as simply "Linux" by the software industry, a naming convention that Stallman and the Free Software Foundation remain opposed to, preferring the name GNU/Linux. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.
62
+
63
+ Unix was originally written in assembly language.[13] Ken Thompson wrote B, mainly based on BCPL, based on his experience in the MULTICS project. B was replaced by C, and Unix, rewritten in C, developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system (see History).
64
+
65
+ The Unix-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group which licenses it for use with any operating system that has been shown to conform to their definitions. "UNIX-like" is commonly used to refer to the large set of operating systems which resemble the original UNIX.
66
+
67
+ Unix-like systems run on a wide variety of computer architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free UNIX variants, such as Linux and BSD, are popular in these areas.
68
+
69
+ Four operating systems are certified by The Open Group (holder of the Unix trademark) as Unix. HP's HP-UX and IBM's AIX are both descendants of the original System V Unix and are designed to run only on their respective vendor's hardware. In contrast, Sun Microsystems's Solaris can run on multiple types of hardware, including x86 and Sparc servers, and PCs. Apple's macOS, a replacement for Apple's earlier (non-Unix) Mac OS, is a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD.
70
+
71
+ Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.
72
+
73
+ A subgroup of the Unix family is the Berkeley Software Distribution family, which includes FreeBSD, NetBSD, and OpenBSD. These operating systems are most commonly found on webservers, although they can also function as a personal computer OS. The Internet owes much of its existence to BSD, as many of the protocols now commonly used by computers to connect, send and receive data over a network were widely implemented and refined in BSD. The World Wide Web was also first demonstrated on a number of computers running an OS based on BSD called NeXTSTEP.
74
+
75
+ In 1974, University of California, Berkeley installed its first Unix system. Over time, students and staff in the computer science department there began adding new programs to make things easier, such as text editors. When Berkeley received new VAX computers in 1978 with Unix installed, the school's undergraduates modified Unix even more in order to take advantage of the computer's hardware possibilities. The Defense Advanced Research Projects Agency of the US Department of Defense took interest, and decided to fund the project. Many schools, corporations, and government organizations took notice and started to use Berkeley's version of Unix instead of the official one distributed by AT&T.
76
+
77
+ Steve Jobs, upon leaving Apple Inc. in 1985, formed NeXT Inc., a company that manufactured high-end computers running on a variation of BSD called NeXTSTEP. One of these computers was used by Tim Berners-Lee as the first webserver to create the World Wide Web.
78
+
79
+ Developers like Keith Bostic encouraged the project to replace any non-free code that originated with Bell Labs. Once this was done, however, AT&T sued. After two years of legal disputes, the BSD project spawned a number of free derivatives, such as NetBSD and FreeBSD (both in 1993), and OpenBSD (from NetBSD in 1995).
80
+
81
+ macOS (formerly "Mac OS X" and later "OS X") is a line of open core graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. macOS is the successor to the original classic Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, macOS is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997.
82
+ The operating system was first released in 1999 as Mac OS X Server 1.0, followed in March 2001 by a client version (Mac OS X v10.0 "Cheetah"). Since then, six more distinct "client" and "server" editions of macOS have been released, until the two were merged in OS X 10.7 "Lion".
83
+
84
+ Prior to its merging with macOS, the server edition – macOS Server – was architecturally identical to its desktop counterpart and usually ran on Apple's line of Macintosh server hardware. macOS Server included work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others. With Mac OS X v10.7 Lion, all server aspects of Mac OS X Server have been integrated into the client version and the product re-branded as "OS X" (dropping "Mac" from the name). The server tools are now offered as an application.[14]
85
+
86
+ The Linux kernel originated in 1991, as a project of Linus Torvalds, while a university student in Finland. He posted information about his project on a newsgroup for computer students and programmers, and received support and assistance from volunteers who succeeded in creating a complete and functional kernel.
87
+
88
+ Linux is Unix-like, but was developed without any Unix code, unlike BSD and its variants. Because of its open license model, the Linux kernel code is available for study and modification, which resulted in its use on a wide range of computing machinery from supercomputers to smart-watches. Although estimates suggest that Linux is used on only 1.82% of all "desktop" (or laptop) PCs,[15] it has been widely adopted for use in servers[16] and embedded systems[17] such as cell phones. Linux has superseded Unix on many platforms and is used on most supercomputers including the top 385.[18] Many of the same computers are also on Green500 (but in different order), and Linux runs on the top 10. Linux is also commonly used on other small energy-efficient computers, such as smartphones and smartwatches. The Linux kernel is used in some popular distributions, such as Red Hat, Debian, Ubuntu, Linux Mint and Google's Android, Chrome OS, and Chromium OS.
89
+
90
+ Microsoft Windows is a family of proprietary operating systems designed by Microsoft Corporation and primarily targeted to Intel architecture based computers, with an estimated 88.9 percent total usage share on Web connected computers.[15][19][20][21] The latest version is Windows 10.
91
+
92
+ In 2011, Windows 7 overtook Windows XP as most common version in use.[22][23][24]
93
+
94
+ Microsoft Windows was first released in 1985, as an operating environment running on top of MS-DOS, which was the standard operating system shipped on most Intel architecture personal computers at the time. In 1995, Windows 95 was released which only used MS-DOS as a bootstrap. For backwards compatibility, Win9x could run real-mode MS-DOS[25][26] and 16-bit Windows 3.x[27] drivers. Windows ME, released in 2000, was the last version in the Win9x family. Later versions have all been based on the Windows NT kernel. Current client versions of Windows run on IA-32, x86-64 and 32-bit ARM microprocessors.[28] In addition Itanium is still supported in older server version Windows Server 2008 R2. In the past, Windows NT supported additional architectures.
95
+
96
+ Server editions of Windows are widely used. In recent years, Microsoft has expended significant capital in an effort to promote the use of Windows as a server operating system. However, Windows' usage on servers is not as widespread as on personal computers as Windows competes against Linux and BSD for server market share.[29][30]
97
+
98
+ ReactOS is a Windows-alternative operating system, which is being developed on the principles of Windows – without using any of Microsoft's code.
99
+
100
+ There have been many operating systems that were significant in their day but are no longer so, such as AmigaOS; OS/2 from IBM and Microsoft; classic Mac OS, the non-Unix precursor to Apple's macOS; BeOS; XTS-300; RISC OS; MorphOS; Haiku; BareMetal and FreeMint. Some are still used in niche markets and continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard. Yet other operating systems are used almost exclusively in academia, for operating systems education or to do research on operating system concepts. A typical example of a system that fulfills both roles is MINIX, while for example Singularity is used purely for research. Another example is the Oberon System designed at ETH Zürich by Niklaus Wirth, Jürg Gutknecht and a group of students at the former Computer Systems Institute in the 1980s. It was used mainly for research, teaching, and daily work in Wirth's group.
101
+
102
+ Other operating systems have failed to win significant market share, but have introduced innovations that have influenced mainstream operating systems, not least Bell Labs' Plan 9.
103
+
104
+ The components of an operating system all exist in order to make the different parts of a computer work together. All user software needs to go through the operating system in order to use any of the hardware, whether it be as simple as a mouse or keyboard or as complex as an Internet component.
105
+
106
+ With the aid of the firmware and device drivers, the kernel provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in the RAM, it determines which programs get access to which hardware resources, it sets up or resets the CPU's operating states for optimal operation at all times, and it organizes the data for long-term non-volatile storage with file systems on such media as disks, tapes, flash memory, etc.
107
+
108
+ The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. Executing an application program involves the creation of a process by the operating system kernel which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program which then interacts with the user and with hardware devices.
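+
+ On a Unix-like system this sequence is visible to programs through the POSIX process API. The following sketch is only an illustration under that assumption (it uses the ls program purely as an example): it asks the kernel to create a new process, replace its image with another program, and then waits for it to finish, while the kernel performs the memory allocation, loading and scheduling behind these calls.
+
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <sys/wait.h>
+ #include <unistd.h>
+
+ int main(void) {
+     pid_t pid = fork();               /* ask the kernel to create a new process */
+     if (pid < 0) {
+         perror("fork");
+         return EXIT_FAILURE;
+     }
+     if (pid == 0) {
+         /* child: ask the kernel to load and run another program image */
+         execlp("ls", "ls", "-l", (char *)NULL);
+         perror("execlp");             /* reached only if the exec failed */
+         _exit(EXIT_FAILURE);
+     }
+     int status;
+     waitpid(pid, &status, 0);         /* parent: wait until the child terminates */
+     printf("child exited with status %d\n", WEXITSTATUS(status));
+     return 0;
+ }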
109
+
110
+ Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative – having the operating system "watch" the various sources of input for events (polling) that require action – can be found in older systems with very small stacks (50 or 60 bytes) but is unusual in modern systems with large stacks. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts, and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place.
111
+
112
+ When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or the running program.
113
+
114
+ When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code. The amount of code being run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone). The processing of hardware interrupts is a task that is usually delegated to software called a device driver, which may be part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means.
115
+
116
+ A program may also trigger an interrupt to the operating system. If a program wishes to access hardware, for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel then processes the request. If a program wishes additional resources (or wishes to shed resources) such as memory, it triggers an interrupt to get the kernel's attention.
117
+
118
+ Modern microprocessors (CPU or MPU) support multiple modes of operation. CPUs with this capability offer at least two modes: user mode and supervisor mode. In general terms, supervisor mode operation allows unrestricted access to all machine resources, including all MPU instructions. User mode operation sets limits on instruction use and typically disallows direct access to machine resources. CPUs might have other modes similar to user mode as well, such as the virtual modes in order to emulate older processor types, such as 16-bit processors on a 32-bit one, or 32-bit processors on a 64-bit one.
119
+
120
+ At power-on or reset, the system begins in supervisor mode. Once an operating system kernel has been loaded and started, the boundary between user mode and supervisor mode (also known as kernel mode) can be established.
121
+
122
+ Supervisor mode is used by the kernel for low level tasks that need unrestricted access to hardware, such as controlling how memory is accessed, and communicating with devices such as disk drives and video display devices. User mode, in contrast, is used for almost everything else. Application programs, such as word processors and database managers, operate within user mode, and can only access machine resources by turning control over to the kernel, a process which causes a switch to supervisor mode. Typically, the transfer of control to the kernel is achieved by executing a software interrupt instruction, such as the Motorola 68000 TRAP instruction. The software interrupt causes the microprocessor to switch from user mode to supervisor mode and begin executing code that allows the kernel to take control.
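+
+ As a hedged, Linux-specific illustration of that transfer of control: the sketch below issues the write system call once through the ordinary C library wrapper and once through the generic syscall() interface; in both cases the library ultimately executes the architecture's system-call (trap) instruction, the processor switches to supervisor mode, and the kernel performs the I/O before returning to user mode.
+
+ #define _GNU_SOURCE
+ #include <string.h>
+ #include <sys/syscall.h>   /* SYS_write - Linux-specific header */
+ #include <unistd.h>
+
+ int main(void) {
+     const char *msg = "hello from user mode\n";
+
+     /* Ordinary wrapper: the C library performs the trap on our behalf. */
+     write(STDOUT_FILENO, msg, strlen(msg));
+
+     /* The same request made via the generic syscall() interface; the
+        switch to kernel (supervisor) mode happens inside this call too. */
+     syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));
+
+     return 0;
+ }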
123
+
124
+ In user mode, programs usually have access to a restricted set of microprocessor instructions, and generally cannot execute any instructions that could potentially cause disruption to the system's operation. In supervisor mode, instruction execution restrictions are typically removed, allowing the kernel unrestricted access to all machine resources.
125
+
126
+ The term "user mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally causes a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting, for example, by forcibly terminating ("killing") the program).
127
+
128
+ Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory.
129
+
130
+ Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen any more, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program to crash the system.
131
+
132
+ Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which doesn't exist in all computers.
133
+
134
+ In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt, which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short, and since it is both difficult to assign a meaningful result to such an operation and usually a sign of a misbehaving program, the kernel generally resorts to terminating the offending program and reporting the error.
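A hedged user-space illustration of this enforcement, assuming a POSIX system with mmap() and mprotect(): once a page has been marked read-only, a write to it is trapped by the MMU and reported by the kernel to the process (on most systems as a SIGSEGV, which by default terminates the program).

```c
/* Page-level memory protection: after mprotect() makes the page
 * read-only, any write to it is trapped by the hardware and reported by
 * the kernel as a segmentation violation. */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    char *page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return 1;

    page[0] = 'x';                        /* allowed: page is writable   */

    mprotect(page, pagesize, PROT_READ);  /* revoke write permission     */
    printf("page is now read-only; a write would raise SIGSEGV\n");
    /* page[0] = 'y';   <- uncommenting this line crashes the program    */

    munmap(page, pagesize);
    return 0;
}
```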
135
+
136
+ Windows versions 3.1 through ME had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
137
+
138
+ The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
139
+
140
+ If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel is interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault.
141
+
142
+ When the kernel detects a page fault it generally adjusts the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.
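A small demonstration of that discretion, assuming Linux-style demand paging of anonymous mappings: mmap() hands back a large virtual range immediately, and physical pages are only assigned, one page fault at a time, when the program first touches them.

```c
/* Demand paging: mmap() only reserves address space.  Physical memory
 * is assigned page by page, by the kernel's page-fault handler, the
 * first time each page is actually touched. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = (size_t) 1 << 30;        /* reserve 1 GiB of address space */
    char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED)
        return 1;

    /* Touch only a few pages; only these fault in and get real memory. */
    for (size_t off = 0; off < len; off += 256u * 1024 * 1024)
        region[off] = 1;

    printf("1 GiB reserved, but only a handful of pages are resident\n");
    munmap(region, len);
    return 0;
}
```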
143
+
144
+ In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
145
+
146
+ "Virtual memory" provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.[31]
147
+
148
+ Multitasking refers to the running of multiple independent computer programs on the same computer, giving the appearance that the computer is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute.
149
+
150
+ An operating system kernel contains a scheduling program which determines how much time each process spends executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This passing of control between the kernel and applications is called a context switch.
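As a rough way to observe this from user space, POSIX getrusage() reports how often the calling process has been context-switched; the field names below are those used on Linux and the BSDs.

```c
/* Counting context switches: ru_nvcsw counts switches where the process
 * gave up the CPU voluntarily (e.g. blocking on I/O); ru_nivcsw counts
 * switches forced on it when its time slice expired. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 200000000UL; i++)
        sink += i;                        /* burn some CPU time */

    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0)
        printf("voluntary: %ld, involuntary: %ld context switches\n",
               ru.ru_nvcsw, ru.ru_nivcsw);
    return 0;
}
```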
151
+
152
+ An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.
153
+
154
+ Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well.
155
+
156
+ The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.)
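A user-space analogue of that kernel timer, assuming a POSIX system with setitimer(): the interval timer below keeps interrupting a busy loop that never yields, which is essentially how the scheduler's timer interrupt reclaims the CPU from an uncooperative program.

```c
/* User-space analogue of the scheduler's timer interrupt: the interval
 * timer fires while the busy loop runs, and control is forcibly
 * transferred to the handler even though the loop never yields. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks = 0;

static void on_tick(int signo)
{
    (void) signo;
    ticks++;                              /* we were handed the CPU back */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_tick;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGVTALRM, &sa, NULL);

    struct itimerval tv;
    tv.it_interval.tv_sec  = 0;
    tv.it_interval.tv_usec = 10000;       /* fire every 10 ms of CPU time */
    tv.it_value = tv.it_interval;
    setitimer(ITIMER_VIRTUAL, &tv, NULL);

    while (ticks < 100)
        ;                                 /* "misbehaving" busy loop      */

    printf("interrupted %d times despite never yielding\n", (int) ticks);
    return 0;
}
```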
157
+
158
+ On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. The AmigaOS is an exception, having preemptive multitasking from its very first version. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).
159
+
160
+ Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree.
161
+
162
+ Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system.
163
+
164
+ While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. An operating system such as UNIX supports a wide array of storage devices, regardless of their design or file systems, allowing them to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers.
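A short, hedged illustration of that uniform interface on a Unix-like system: the same open()/read() sequence works whether the path names an ordinary file or a device node, so the helper below is called once on a regular file and once on /dev/urandom (both paths are merely assumptions about the machine it runs on).

```c
/* The virtual file system presents one interface: the same open/read
 * calls work on a regular file and on a device node alike. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static long read_some(const char *path)
{
    char buf[16];
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    long n = (long) read(fd, buf, sizeof buf);
    close(fd);
    return n;
}

int main(void)
{
    printf("/etc/hostname: %ld bytes read\n", read_some("/etc/hostname"));
    printf("/dev/urandom:  %ld bytes read\n", read_some("/dev/urandom"));
    return 0;
}
```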
165
+
166
+ A connected storage device, such as a hard drive, is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX, this is the language of block devices.
167
+
168
+ When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates.
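For instance, on a POSIX system the directory walk below asks the kernel, through the file system driver, for each entry's name and metadata; readdir() supplies the names, and stat() the size, permission bits and modification time.

```c
/* Listing the current directory and querying each entry's metadata
 * (size, permission bits, modification time) through stat(). */
#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(void)
{
    DIR *dir = opendir(".");
    if (!dir)
        return 1;

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        struct stat st;
        if (stat(entry->d_name, &st) != 0)
            continue;
        printf("%10lld bytes  mode %04o  %-20s %s",
               (long long) st.st_size,
               (unsigned) (st.st_mode & 07777),
               entry->d_name,
               ctime(&st.st_mtime));      /* ctime() ends with a newline */
    }
    closedir(dir);
    return 0;
}
```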
169
+
170
+ Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes make the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ext3 and ReiserFS in Linux. However, in practice, third party drivers are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through third-party software).
171
+
172
+ Support for file systems is highly varied among modern operating systems, although there are several common file systems which almost all operating systems include support and drivers for. Operating systems vary on file system support and on the disk formats they may be installed on. Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. It is possible to install Linux onto many types of file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the media it is stored in, whether it is a hard drive, a disc (CD, DVD...), a USB flash drive, or even contained within a file located on another file system.
173
+
174
+ A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device, through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and on the other end, the requisite interfaces to the operating system and software applications. It is a specialized, hardware-dependent and operating-system-specific program that enables another program (typically the operating system, an applications package, or a program running under the kernel) to interact transparently with a hardware device, and it usually provides the interrupt handling required for asynchronous, time-dependent hardware interfaces.
175
+
176
+ The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Manufacturers also release newer models that provide more reliable or better performance, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these operating-system-mandated function calls into device-specific calls. In theory, a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver ensures that the device appears to operate as usual from the operating system's point of view.
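The usual way to express that translation in C is a table of function pointers that the operating system defines and each driver fills in. The structure and names below are invented purely for illustration, but the idea mirrors real interfaces such as Linux's struct file_operations.

```c
/* Illustrative only (struct and names are invented): the OS fixes the
 * set of operations, and each driver supplies its own implementations. */
#include <stdio.h>
#include <string.h>

struct block_driver_ops {
    int (*read_block)(unsigned block, void *buf);
    int (*write_block)(unsigned block, const void *buf);
};

/* One hypothetical driver: a tiny RAM-backed "disk". */
static char ramdisk[64][512];

static int ram_read(unsigned block, void *buf)
{
    if (block >= 64) return -1;
    memcpy(buf, ramdisk[block], 512);
    return 0;
}

static int ram_write(unsigned block, const void *buf)
{
    if (block >= 64) return -1;
    memcpy(ramdisk[block], buf, 512);
    return 0;
}

static const struct block_driver_ops ram_driver = { ram_read, ram_write };

int main(void)
{
    char sector[512] = "hello, disk";

    /* "Kernel" code only ever goes through the ops table, so any driver
     * that fills in the same table can be used interchangeably. */
    ram_driver.write_block(0, sector);
    ram_driver.read_block(0, sector);
    printf("%s\n", sector);
    return 0;
}
```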
177
+
178
+ Under versions of Windows before Vista and versions of Linux before 2.6, all driver execution was co-operative, meaning that if a driver entered an infinite loop it would freeze the system. More recent revisions of these operating systems incorporate kernel preemption, where the kernel interrupts the driver to give it tasks, and then separates itself from the process until it receives a response from the device driver, or gives it more tasks to do.
179
+
180
+ Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH which allows networked users direct access to a computer's command line interface.
181
+
182
+ Client/server networking allows a program on a computer, called a client, to connect via a network to another computer, called a server. Servers offer (or host) various services to other network computers and users. These services are usually provided through ports or numbered access points beyond the server's IP address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port. A daemon, being a user program, can in turn access the local hardware resources of that computer by passing requests to the operating system kernel.
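A minimal, hedged example of the client side, using the POSIX sockets API: the program resolves a host name, connects to a numbered port and exchanges a little data with whatever service is listening there. The host example.com and port 80 are placeholders.

```c
/* Minimal TCP client: resolve a host name, connect to a numbered port,
 * send a request, and read part of the reply.  Host and port are
 * placeholders for whatever service the client actually needs. */
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;        /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;      /* TCP          */

    if (getaddrinfo("example.com", "80", &hints, &res) != 0)
        return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
        return 1;

    const char req[] = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    write(fd, req, strlen(req));

    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);
    }
    close(fd);
    freeaddrinfo(res);
    return 0;
}
```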
183
+
184
+ Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific tasks may also be supported such as NFS for file access. Protocols like ESound, or esd can be easily extended over the network to provide sound from local applications, on a remote system's sound hardware.
185
+
186
+ A computer being secure depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel.[citation needed]
187
+
188
+ The operating system must be capable of distinguishing between requests which should be allowed to be processed, and others which should not be processed. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). Also covered by the concept of requester identity is authorization; the particular services and resources accessible by the requester once logged into a system are tied to either the requester's user account or to the variously configured groups of users to which the requester belongs.[citation needed]
189
+
190
+ In addition to the allow or disallow model of security, a system with a high level of security also offers auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?"). Internal security, or security from an already running program, is only possible if all possibly harmful requests are carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured.[citation needed]
191
+
192
+ External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select trusted operating systems being considered for the processing, storage and retrieval of sensitive or classified information.
193
+
194
+ Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and be running an insecure service, such as Telnet or FTP, and not have to be threatened by a security breach because the firewall would deny all traffic trying to connect to the service on that port.
195
+
196
+ An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is where the operating system is not running user programs as native code, but instead either emulates a processor or provides a host for a p-code based system such as Java.
197
+
198
+ Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, including bypassing auditing.
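On a POSIX multi-user system the simplest building block for such private files is the permission mode given when a file is created; the sketch below creates a file readable and writable only by its owner (the file name is arbitrary).

```c
/* Creating a file with mode 0600, so only its owner can read or write
 * it; other users of the same machine are refused by the kernel. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("secrets.txt", O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd < 0)
        return 1;
    write(fd, "owner-only data\n", 16);
    close(fd);
    return 0;
}
```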
199
+
200
+ Every computer that is to be operated by an individual requires a user interface. The user interface is usually referred to as a shell and is essential if human interaction is to be supported. The user interface views the directory structure and requests services from the operating system that will acquire data from input hardware devices, such as a keyboard, mouse or credit card reader, and requests operating system services to display prompts, status messages and such on output hardware devices, such as a video monitor or printer. The two most common forms of a user interface have historically been the command-line interface, where computer commands are typed out line-by-line, and the graphical user interface, where a visual environment (most commonly a WIMP) is present.
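A toy version of the command-line form, assuming a POSIX system, makes the division of labour visible: the shell itself only reads the line, while the actual work of creating a process and running the program is requested from the kernel with fork(), execvp() and waitpid(). No argument parsing is attempted here.

```c
/* A toy shell: read a command name, ask the kernel to run it in a new
 * process, and wait for that process to finish. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char line[256];

    while (printf("toysh> "), fflush(stdout),
           fgets(line, sizeof line, stdin) != NULL) {
        line[strcspn(line, "\n")] = '\0';
        if (line[0] == '\0')
            continue;
        if (strcmp(line, "exit") == 0)
            break;

        pid_t pid = fork();
        if (pid == 0) {                    /* child: become the command */
            char *argv[] = { line, NULL };
            execvp(argv[0], argv);
            perror("exec");
            _exit(127);
        }
        waitpid(pid, NULL, 0);             /* parent: wait for it       */
    }
    return 0;
}
```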
201
+
202
+ Most modern computer systems support graphical user interfaces (GUIs), and often include them. In some computer systems, such as the original implementation of the classic Mac OS, the GUI is integrated into the kernel.
203
+
204
+ While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel and the operating system. In the 1980s, UNIX, VMS and many other operating systems were built this way. Linux and macOS are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user-space; however the graphics drawing routines of versions between Windows NT 4.0 and Windows Server 2003 exist mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.
205
+
206
+ Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE Plasma 5 is a commonly found setup on most Unix and Unix-like (BSD, Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.
207
+
208
+ Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, though an effort in the 1990s to standardize on COSE and CDE failed for various reasons, and they were eventually eclipsed by the widespread adoption of GNOME and K Desktop Environment. Prior to free software-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).
209
+
210
+ Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 1999.[32]
211
+
212
+ A real-time operating system (RTOS) is an operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.
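On POSIX systems that implement the real-time scheduling extensions, a deadline-driven task is typically given a fixed-priority policy and woken on a strict period; the hedged sketch below uses SCHED_FIFO and clock_nanosleep() and normally needs elevated privileges to take effect.

```c
/* A periodic task under the POSIX SCHED_FIFO real-time policy: fixed
 * priority, woken on a strict 10 ms period by an absolute-time sleep.
 * If the scheduler request is refused (no privileges), the program
 * simply continues under the normal time-sharing policy. */
#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int cycle = 0; cycle < 5; cycle++) {
        /* ... time-critical work for this cycle would go here ... */
        printf("cycle %d\n", cycle);

        next.tv_nsec += 10 * 1000 * 1000;          /* next deadline: +10 ms */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}
```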
213
+
214
+ An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.
215
+
216
+ Embedded systems that have fixed deadlines use a real-time operating system such as VxWorks, PikeOS, eCos, QNX, MontaVista Linux and RTLinux. Windows CE is a real-time operating system that shares similar APIs to desktop Windows but shares none of desktop Windows' codebase.[33] Symbian OS also has an RTOS kernel (EKA2) starting with version 8.0b.
217
+
218
+ Some embedded systems use operating systems such as Palm OS, BSD, and Linux, although such operating systems do not support real-time computing.
219
+
220
+ Operating system development is one of the most complicated activities in which a computing hobbyist may engage.[citation needed] A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system, and has few users and active developers.[34]
221
+
222
+ In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is his/her own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests.
223
+
224
+ Examples of a hobby operating system include Syllable and TempleOS.
225
+
226
+ Application software is generally written for use on a specific operating system, and sometimes even for specific hardware.[citation needed] When porting the application to run on another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.) requiring the application to be adapted, changed, or otherwise maintained.
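A small, hedged example of what such adaptation often looks like in practice: pausing for a number of milliseconds is spelled Sleep() on Windows and nanosleep() on POSIX systems, so a portable program wraps the difference behind one helper.

```c
/* Same functionality, different operating system APIs: Windows exposes
 * Sleep(ms); POSIX systems use nanosleep().  Porting means wrapping the
 * difference behind a single helper. */
#ifdef _WIN32
#include <windows.h>
static void sleep_ms(unsigned ms) { Sleep(ms); }
#else
#include <time.h>
static void sleep_ms(unsigned ms)
{
    struct timespec ts = { ms / 1000, (long) (ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}
#endif

int main(void)
{
    sleep_ms(250);     /* pause for a quarter of a second on either OS */
    return 0;
}
```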
227
+
228
+ Unix was among the first operating systems not written in assembly language, making it very portable to systems different from its native PDP-11.[35]
229
+
230
+ This cost in supporting operating systems diversity can be avoided by instead writing applications against software platforms such as Java or Qt. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries.
231
+
232
+ Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.
en/5568.html.txt ADDED
@@ -0,0 +1,232 @@
1
+
2
+
3
+ An operating system (OS) is system software that manages computer hardware and software resources, and provides common services for computer programs.
4
+
5
+ Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.
6
+
7
+ For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware,[1][2] although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers.
8
+
9
+ The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%. macOS by Apple Inc. is in second place (13.23%), and the varieties of Linux are collectively in third place (1.57%).[3] In the mobile sector (including smartphones and tablets), Android's share is up to 70% in the year 2017.[4] According to third quarter 2016 data, Android's share on smartphones is dominant with 87.5 percent with also a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent with per year decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.[5] Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications.
10
+
11
+ A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking.
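As a small, hedged illustration of time-sharing on a POSIX system: after fork() the parent and child below each loop independently, and the scheduler interleaves them so that both appear to make progress at once.

```c
/* Two processes appearing to run at the same time: after fork(), the
 * parent and the child loop independently and the scheduler interleaves
 * their execution. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    const char *who = (pid == 0) ? "child " : "parent";

    for (int i = 0; i < 5; i++) {
        printf("%s step %d\n", who, i);
        usleep(100000);               /* 0.1 s: let the other run too */
    }

    if (pid > 0)
        wait(NULL);                   /* parent reaps the child */
    return 0;
}
```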
12
+
13
+ Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem.[6] A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources to multiple users.
14
+
15
+ A distributed operating system manages a group of distinct, networked computers and makes them appear to be a single computer, as all computations are distributed (divided amongst the constituent computers).[7]
16
+
17
+ In the distributed and cloud computing context of an OS, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses.[8]
18
+
19
+ Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines with less autonomy (e.g. PDAs). They are very compact and extremely efficient by design, and are able to operate with a limited amount of resources. Windows CE and Minix 3 are some examples of embedded operating systems.
20
+
21
+ A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Such an event-driven system switches between tasks based on their priorities or external events, whereas time-sharing operating systems switch tasks based on clock interrupts.
22
+
23
+ A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space, machine image that can be deployed to cloud or embedded environments.
24
+
25
+ Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their modern and more complex forms until the early 1960s.[9] Hardware features were added that enabled use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers.
26
+
27
+ In the 1940s, the earliest electronic digital systems had no operating systems. Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plugboards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards. After programmable general purpose computers were invented, machine languages (consisting of strings of the binary digits 0 and 1 on punched paper tape) were introduced that sped up the programming process (Stern, 1981).[full citation needed]
28
+
29
+ In the early 1950s, a computer could execute only one program at a time. Each user had sole use of the computer for a limited period and would arrive at a scheduled time with their program and data on punched paper cards or punched tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a front panel using toggle switches and panel lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the universal Turing machine.[9]
30
+
31
+ Later machines came with libraries of programs, which would be linked to a user's program to assist in operations such as input and output and compiling (generating machine code from human-readable symbolic code). This was the genesis of the modern-day operating system. However, machines still ran a single job at a time. At Cambridge University in England, the job queue was at one time a washing line (clothes line) from which tapes were hung with different colored clothes-pegs to indicate job priority.[citation needed]
32
+
33
+ An improvement was the Atlas Supervisor. Introduced with the Manchester Atlas in 1962, it is considered by many to be the first recognisable modern operating system.[10] Brinch Hansen described it as "the most significant breakthrough in the history of operating systems."[11]
34
+
35
+ Through the 1950s, many major features were pioneered in the field of operating systems on mainframe computers, including batch processing, input/output interrupting, buffering, multitasking, spooling, runtime libraries, link-loading, and programs for sorting records in files. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications. In 1959, the SHARE Operating System was released as an integrated utility for the IBM 704, and later in the 709 and 7090 mainframes, although it was quickly supplanted by IBSYS/IBJOB on the 709, 7090 and 7094.
36
+
37
+ During the 1960s, IBM's OS/360 introduced the concept of a single OS spanning an entire product line, which was crucial for the success of the System/360 machines. IBM's current mainframe operating systems are distant descendants of this original system and modern machines are backwards-compatible with applications written for OS/360.[citation needed]
38
+
39
+ OS/360 also pioneered the concept that the operating system keeps track of all of the system resources that are used, including program and data space allocation in main memory and file space in secondary storage, and file locking during updates. When a process is terminated for any reason, all of these resources are re-claimed by the operating system.
40
+
41
+ The alternative CP-67 system for the S/360-67 started a whole line of IBM operating systems focused on the concept of virtual machines. Other operating systems used on IBM S/360 series mainframes included systems developed by IBM: COS/360 (Compatibility Operating System), DOS/360 (Disk Operating System), TSS/360 (Time Sharing System), TOS/360 (Tape Operating System), BOS/360 (Basic Operating System), and ACP (Airline Control Program), as well as a few non-IBM systems: MTS (Michigan Terminal System), MUSIC (Multi-User System for Interactive Computing), and ORVYL (Stanford Timesharing System).
42
+
43
+ Control Data Corporation developed the SCOPE operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. Plato was remarkably innovative for its time, featuring real-time chat, and multi-user graphical games.
44
+
45
+ In 1961, Burroughs Corporation introduced the B5000 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages with no machine language or assembler; indeed, the MCP was the first OS to be written exclusively in a high-level language (ESPOL, a dialect of ALGOL). MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS/400, IBM made an approach to Burroughs to license MCP to run on the AS/400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys company's ClearPath/MCP line of computers.
46
+
47
+ UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems[citation needed]. Like all early main-frame systems, this batch-oriented system managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.
48
+
49
+ General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed General Comprehensive Operating System (GCOS).
50
+
51
+ Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Before the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community. RT-11 was a single-user real-time OS for the PDP-11 class minicomputer, and RSX-11 was the corresponding multi-user OS.
52
+
53
+ From the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying computer architectures to appear to be the same as others in a series. In fact, most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations.
54
+
55
+ The enormous investment in software for these systems made since the 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. Notable supported mainframe operating systems include:
56
+
57
+ The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk operating system was CP/M, which was supported on many early microcomputers and was closely imitated by Microsoft's MS-DOS, which became widely popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS). In the 1980s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative graphical user interface (GUI) to the Mac OS.
58
+
59
+ The introduction of the Intel 80386 CPU chip in October 1985,[12] with 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the NEXTSTEP operating system. NEXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD, as the core of Mac OS X (macOS after the latest name change).
60
+
61
+ The GNU Project was started by activist and programmer Richard Stallman with the goal of creating a complete free software replacement to the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991, Finnish computer science student Linus Torvalds, with cooperation from volunteers collaborating over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU user space components and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to as simply "Linux" by the software industry, a naming convention that Stallman and the Free Software Foundation remain opposed to, preferring the name GNU/Linux. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.
62
+
63
+ Unix was originally written in assembly language.[13] Ken Thompson wrote B, mainly based on BCPL, based on his experience in the MULTICS project. B was replaced by C, and Unix, rewritten in C, developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system (see History).
64
+
65
+ The Unix-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group which licenses it for use with any operating system that has been shown to conform to their definitions. "UNIX-like" is commonly used to refer to the large set of operating systems which resemble the original UNIX.
66
+
67
+ Unix-like systems run on a wide variety of computer architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free UNIX variants, such as Linux and BSD, are popular in these areas.
68
+
69
+ Four operating systems are certified by The Open Group (holder of the Unix trademark) as Unix. HP's HP-UX and IBM's AIX are both descendants of the original System V Unix and are designed to run only on their respective vendor's hardware. In contrast, Sun Microsystems's Solaris can run on multiple types of hardware, including x86 and Sparc servers, and PCs. Apple's macOS, a replacement for Apple's earlier (non-Unix) Mac OS, is a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD.
70
+
71
+ Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.
72
+
73
+ A subgroup of the Unix family is the Berkeley Software Distribution family, which includes FreeBSD, NetBSD, and OpenBSD. These operating systems are most commonly found on webservers, although they can also function as a personal computer OS. The Internet owes much of its existence to BSD, as many of the protocols now commonly used by computers to connect, send and receive data over a network were widely implemented and refined in BSD. The World Wide Web was also first demonstrated on a number of computers running an OS based on BSD called NeXTSTEP.
74
+
75
+ In 1974, University of California, Berkeley installed its first Unix system. Over time, students and staff in the computer science department there began adding new programs to make things easier, such as text editors. When Berkeley received new VAX computers in 1978 with Unix installed, the school's undergraduates modified Unix even more in order to take advantage of the computer's hardware possibilities. The Defense Advanced Research Projects Agency of the US Department of Defense took interest, and decided to fund the project. Many schools, corporations, and government organizations took notice and started to use Berkeley's version of Unix instead of the official one distributed by AT&T.
76
+
77
+ Steve Jobs, upon leaving Apple Inc. in 1985, formed NeXT Inc., a company that manufactured high-end computers running on a variation of BSD called NeXTSTEP. One of these computers was used by Tim Berners-Lee as the first webserver to create the World Wide Web.
78
+
79
+ Developers like Keith Bostic encouraged the project to replace any non-free code that originated with Bell Labs. Once this was done, however, AT&T sued. After two years of legal disputes, the BSD project spawned a number of free derivatives, such as NetBSD and FreeBSD (both in 1993), and OpenBSD (from NetBSD in 1995).
80
+
81
+ macOS (formerly "Mac OS X" and later "OS X") is a line of open core graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. macOS is the successor to the original classic Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, macOS is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997.
82
+ The operating system was first released in 1999 as Mac OS X Server 1.0, followed in March 2001 by a client version (Mac OS X v10.0 "Cheetah"). Since then, six more distinct "client" and "server" editions of macOS have been released, until the two were merged in OS X 10.7 "Lion".
83
+
84
+ Prior to its merging with macOS, the server edition – macOS Server – was architecturally identical to its desktop counterpart and usually ran on Apple's line of Macintosh server hardware. macOS Server included work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others. With Mac OS X v10.7 Lion, all server aspects of Mac OS X Server have been integrated into the client version and the product re-branded as "OS X" (dropping "Mac" from the name). The server tools are now offered as an application.[14]
85
+
86
+ The Linux kernel originated in 1991, as a project of Linus Torvalds, while a university student in Finland. He posted information about his project on a newsgroup for computer students and programmers, and received support and assistance from volunteers who succeeded in creating a complete and functional kernel.
87
+
88
+ Linux is Unix-like, but was developed without any Unix code, unlike BSD and its variants. Because of its open license model, the Linux kernel code is available for study and modification, which resulted in its use on a wide range of computing machinery from supercomputers to smart-watches. Although estimates suggest that Linux is used on only 1.82% of all "desktop" (or laptop) PCs,[15] it has been widely adopted for use in servers[16] and embedded systems[17] such as cell phones. Linux has superseded Unix on many platforms and is used on most supercomputers including the top 385.[18] Many of the same computers are also on Green500 (but in different order), and Linux runs on the top 10. Linux is also commonly used on other small energy-efficient computers, such as smartphones and smartwatches. The Linux kernel is used in some popular distributions, such as Red Hat, Debian, Ubuntu, Linux Mint and Google's Android, Chrome OS, and Chromium OS.
89
+
90
+ Microsoft Windows is a family of proprietary operating systems designed by Microsoft Corporation and primarily targeted to Intel architecture based computers, with an estimated 88.9 percent total usage share on Web connected computers.[15][19][20][21] The latest version is Windows 10.
91
+
92
+ In 2011, Windows 7 overtook Windows XP as most common version in use.[22][23][24]
93
+
94
+ Microsoft Windows was first released in 1985, as an operating environment running on top of MS-DOS, which was the standard operating system shipped on most Intel architecture personal computers at the time. In 1995, Windows 95 was released which only used MS-DOS as a bootstrap. For backwards compatibility, Win9x could run real-mode MS-DOS[25][26] and 16-bit Windows 3.x[27] drivers. Windows ME, released in 2000, was the last version in the Win9x family. Later versions have all been based on the Windows NT kernel. Current client versions of Windows run on IA-32, x86-64 and 32-bit ARM microprocessors.[28] In addition, Itanium is still supported in the older server version Windows Server 2008 R2. In the past, Windows NT supported additional architectures.
95
+
96
+ Server editions of Windows are widely used. In recent years, Microsoft has expended significant capital in an effort to promote the use of Windows as a server operating system. However, Windows' usage on servers is not as widespread as on personal computers as Windows competes against Linux and BSD for server market share.[29][30]
97
+
98
+ ReactOS is a Windows-alternative operating system, which is being developed on the principles of Windows – without using any of Microsoft's code.
99
+
100
+ There have been many operating systems that were significant in their day but are no longer so, such as AmigaOS; OS/2 from IBM and Microsoft; classic Mac OS, the non-Unix precursor to Apple's macOS; BeOS; XTS-300; RISC OS; MorphOS; Haiku; BareMetal and FreeMint. Some are still used in niche markets and continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard. Yet other operating systems are used almost exclusively in academia, for operating systems education or to do research on operating system concepts. A typical example of a system that fulfills both roles is MINIX, while for example Singularity is used purely for research. Another example is the Oberon System designed at ETH Zürich by Niklaus Wirth, Jürg Gutknecht and a group of students at the former Computer Systems Institute in the 1980s. It was used mainly for research, teaching, and daily work in Wirth's group.
101
+
102
+ Other operating systems have failed to win significant market share, but have introduced innovations that have influenced mainstream operating systems, not least Bell Labs' Plan 9.
103
+
104
+ The components of an operating system all exist in order to make the different parts of a computer work together. All user software needs to go through the operating system in order to use any of the hardware, whether it be as simple as a mouse or keyboard or as complex as an Internet component.
105
+
106
+ With the aid of the firmware and device drivers, the kernel provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in the RAM, it determines which programs get access to which hardware resources, it sets up or resets the CPU's operating states for optimal operation at all times, and it organizes the data for long-term non-volatile storage with file systems on such media as disks, tapes, flash memory, etc.
107
+
108
+ The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. Executing an application program involves the creation of a process by the operating system kernel which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program which then interacts with the user and with hardware devices.
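Those steps are what a process-creation request hands over to the kernel. As a hedged sketch on a POSIX system, posix_spawnp() below asks the kernel to create a process, load and run a program (echo is simply assumed to be installed), and the parent then waits for it to finish.

```c
/* Running a program: the kernel creates a new process, loads the
 * program binary, schedules it, and lets the parent wait for the
 * result.  "echo" is assumed to be installed on the system. */
#include <spawn.h>
#include <stdio.h>
#include <sys/wait.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *argv[] = { "echo", "hello from a new process", NULL };

    if (posix_spawnp(&pid, "echo", NULL, NULL, argv, environ) != 0)
        return 1;

    int status;
    waitpid(pid, &status, 0);
    printf("child %d exited with status %d\n", (int) pid,
           WEXITSTATUS(status));
    return 0;
}
```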
109
+
110
+ Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative – having the operating system "watch" the various sources of input for events (polling) that require action – can be found in older systems with very small stacks (50 or 60 bytes) but is unusual in modern systems with large stacks. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts, and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place.
111
+
112
+ When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or the running program.
113
+
114
+ When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code. The amount of code being run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone). The processing of hardware interrupts is a task that is usually delegated to software called a device driver, which may be part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means.
115
+
116
+ A program may also trigger an interrupt to the operating system. If a program wishes to access hardware, for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel then processes the request. If a program wishes additional resources (or wishes to shed resources) such as memory, it triggers an interrupt to get the kernel's attention.
117
+
118
+ Modern microprocessors (CPU or MPU) support multiple modes of operation. CPUs with this capability offer at least two modes: user mode and supervisor mode. In general terms, supervisor mode operation allows unrestricted access to all machine resources, including all MPU instructions. User mode operation sets limits on instruction use and typically disallows direct access to machine resources. CPUs might have other modes similar to user mode as well, such as the virtual modes in order to emulate older processor types, such as 16-bit processors on a 32-bit one, or 32-bit processors on a 64-bit one.
119
+
120
+ At power-on or reset, the system begins in supervisor mode. Once an operating system kernel has been loaded and started, the boundary between user mode and supervisor mode (also known as kernel mode) can be established.
121
+
122
+ Supervisor mode is used by the kernel for low level tasks that need unrestricted access to hardware, such as controlling how memory is accessed, and communicating with devices such as disk drives and video display devices. User mode, in contrast, is used for almost everything else. Application programs, such as word processors and database managers, operate within user mode, and can only access machine resources by turning control over to the kernel, a process which causes a switch to supervisor mode. Typically, the transfer of control to the kernel is achieved by executing a software interrupt instruction, such as the Motorola 68000 TRAP instruction. The software interrupt causes the microprocessor to switch from user mode to supervisor mode and begin executing code that allows the kernel to take control.
123
+
124
+ In user mode, programs usually have access to a restricted set of microprocessor instructions, and generally cannot execute any instructions that could potentially cause disruption to the system's operation. In supervisor mode, instruction execution restrictions are typically removed, allowing the kernel unrestricted access to all machine resources.
125
+
126
+ The term "user mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally causes a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting, for example, by forcibly terminating ("killing") the program).
127
+
128
+ Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory.
129
+
130
+ Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen any more, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program to crash the system.
131
+
132
+ Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which doesn't exist in all computers.
133
+
134
+ In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt, which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short, and since it is both difficult to assign a meaningful result to such an operation and usually a sign of a misbehaving program, the kernel generally resorts to terminating the offending program and reports the error.
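On UNIX-like systems the termination is visible to the program as a SIGSEGV signal, which a process may catch just long enough to report the problem (a minimal sketch; deliberately dereferencing a null pointer is undefined behaviour and is used here only to provoke the fault):

  #include <signal.h>
  #include <unistd.h>

  static void on_segv(int sig) {
      /* The kernel delivers SIGSEGV after the hardware-raised fault;
         by default the process would simply be killed. */
      (void)sig;
      write(2, "segmentation violation\n", 23);
      _exit(1);
  }

  int main(void) {
      signal(SIGSEGV, on_segv);
      int *bad = (int *)0;
      *bad = 42;               /* access outside any mapped region */
      return 0;
  }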
135
+
136
+ Windows versions 3.1 through ME had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
137
+
138
+ The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
139
+
140
+ If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel is interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault.
141
+
142
+ When the kernel detects a page fault it generally adjusts the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.
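The effect can be observed from user space: a large anonymous mapping costs almost nothing to create, and each page is supplied by the kernel only when it is first touched, via exactly this page-fault path (a sketch for a UNIX-like system; the 64 MiB size is arbitrary):

  #include <stddef.h>
  #include <sys/mman.h>

  int main(void) {
      size_t len = 64 * 1024 * 1024;   /* 64 MiB of address space */
      char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED)
          return 1;
      /* Nothing is committed yet.  Each first write below raises a
         page fault; the kernel resolves it by mapping in a zeroed
         page and transparently resuming the program. */
      for (size_t i = 0; i < len; i += 4096)
          p[i] = 1;
      munmap(p, len);
      return 0;
  }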
143
+
144
+ In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
145
+
146
+ "Virtual memory" provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.[31]
147
+
148
+ Multitasking refers to the running of multiple independent computer programs on the same computer, giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute.
149
+
150
+ An operating system kernel contains a scheduling program which determines how much time each process spends executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This passing of control between the kernel and applications is called a context switch.
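The decision itself can be pictured as a walk over the run queue. The following deliberately simplified round-robin sketch is purely illustrative; the process structure and the switch_to() stand-in are hypothetical and not any real kernel's API:

  #include <stdio.h>

  struct process {
      int pid;
      int runnable;               /* 1 if ready to use the CPU */
      struct process *next;       /* circular run queue */
  };

  /* Stand-in for the architecture-specific code that saves the current
     register context and restores the next process's context. */
  static void switch_to(struct process *next) {
      printf("context switch to pid %d\n", next->pid);
  }

  static void schedule(struct process *current) {
      struct process *p = current->next;
      while (!p->runnable)        /* skip processes that are blocked */
          p = p->next;
      if (p != current)
          switch_to(p);
  }

  int main(void) {
      struct process a = {1, 1, NULL}, b = {2, 0, NULL}, c = {3, 1, NULL};
      a.next = &b; b.next = &c; c.next = &a;
      schedule(&a);               /* picks pid 3, because pid 2 is blocked */
      return 0;
  }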
151
+
152
+ An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.
153
+
154
+ Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well.
155
+
156
+ The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.)
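The mechanism can be imitated in user space, where a periodic POSIX timer signal plays the role of the kernel's timer interrupt and regains control no matter how busy the code is (a sketch only; a real kernel programs the hardware timer directly):

  #include <signal.h>
  #include <sys/time.h>

  static volatile sig_atomic_t ticks = 0;

  /* Plays the role of the timer interrupt handler. */
  static void on_tick(int sig) { (void)sig; ticks++; }

  int main(void) {
      struct sigaction sa = {0};
      sa.sa_handler = on_tick;
      sigaction(SIGALRM, &sa, NULL);

      /* Fire every 10 ms, starting 10 ms from now. */
      struct itimerval tv = { {0, 10000}, {0, 10000} };
      setitimer(ITIMER_REAL, &tv, NULL);

      while (ticks < 100)   /* busy work standing in for a program; it is
                               interrupted regardless of what it does */
          ;
      return 0;
  }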
157
+
158
+ On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. The AmigaOS is an exception, having preemptive multitasking from its very first version. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).
159
+
160
+ Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree.
161
+
162
+ Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system.
163
+
164
+ While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. An operating system such as UNIX supports a wide array of storage devices, regardless of their design or file systems, allowing them to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers.
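Thanks to the VFS, the same few calls work regardless of which file system or device actually holds the file; the code below would read equally well from a local ext4 partition, an NTFS volume handled by a driver, or a network mount (a sketch; the path /mnt/data/example.txt is hypothetical):

  #include <fcntl.h>
  #include <unistd.h>

  int main(void) {
      char buf[256];
      /* The kernel's VFS layer routes these generic calls to whichever
         file system driver owns the mount point. */
      int fd = open("/mnt/data/example.txt", O_RDONLY);
      if (fd < 0)
          return 1;
      ssize_t n = read(fd, buf, sizeof buf);
      close(fd);
      return n >= 0 ? 0 : 1;
  }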
165
+
166
+ A connected storage device, such as a hard drive, is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX, this is the language of block devices.
167
+
168
+ When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates.
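On UNIX-like systems that per-file information is retrieved with calls such as stat(); a short sketch (the file name is arbitrary):

  #include <stdio.h>
  #include <sys/stat.h>

  int main(void) {
      struct stat st;
      if (stat("example.txt", &st) != 0) {
          perror("stat");
          return 1;
      }
      /* The file system driver fills these in, whatever the on-disk
         format happens to be. */
      printf("size: %lld bytes\n", (long long)st.st_size);
      printf("permissions: %o\n", (unsigned)(st.st_mode & 0777));
      printf("modified (seconds since epoch): %lld\n", (long long)st.st_mtime);
      return 0;
  }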
169
+
170
+ Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes make the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ext3 and ReiserFS in Linux. However, in practice, third party drivers are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through third-party software).
171
+
172
+ Support for file systems is highly varied among modern operating systems, although there are several common file systems which almost all operating systems include support and drivers for. Operating systems vary on file system support and on the disk formats they may be installed on. Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. It is possible to install Linux onto many types of file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the media it is stored in, whether it is a hard drive, a disc (CD, DVD...), a USB flash drive, or even contained within a file located on another file system.
173
+
174
+ A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device, through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and on the other end, the requisite interfaces to the operating system and software applications. It is a specialized, hardware-dependent and operating-system-specific computer program that enables another program (typically an operating system, an application software package, or a program running under the operating system kernel) to interact transparently with a hardware device, and it usually provides the interrupt handling required for asynchronous, time-dependent hardware interfacing.
175
+
176
+ The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Manufacturers also release newer models that provide more reliable or better performance, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these operating system mandated function calls into device specific calls. In theory a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver ensures that the device appears to operate as usual from the operating system's point of view.
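A common way of dictating the interface is a table of functions that every driver of a given class must supply; the kernel then calls through the table without knowing the hardware's details. The sketch below is hypothetical and only loosely modelled on how UNIX-like kernels structure block-device drivers:

  /* Interface the OS mandates for a class of device (hypothetical). */
  struct block_ops {
      int (*read_block)(unsigned blockno, void *buf);
      int (*write_block)(unsigned blockno, const void *buf);
  };

  /* One driver translates the generic operations into commands its
     own hardware understands (the hardware access itself is omitted). */
  static int acme_read(unsigned blockno, void *buf)        { (void)blockno; (void)buf; return 0; }
  static int acme_write(unsigned blockno, const void *buf) { (void)blockno; (void)buf; return 0; }

  static const struct block_ops acme_disk_ops = { acme_read, acme_write };

  /* Kernel-side code drives any conforming device the same way. */
  static int kernel_read_block(const struct block_ops *ops, unsigned n, void *buf) {
      return ops->read_block(n, buf);
  }

  int main(void) {
      char sector[512];
      return kernel_read_block(&acme_disk_ops, 0, sector);
  }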
177
+
178
+ Under versions of Windows before Vista and versions of Linux before 2.6, all driver execution was co-operative, meaning that if a driver entered an infinite loop it would freeze the system. More recent revisions of these operating systems incorporate kernel preemption, where the kernel interrupts the driver to give it tasks, and then separates itself from the process until it receives a response from the device driver, or gives it more tasks to do.
179
+
180
+ Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH which allows networked users direct access to a computer's command line interface.
181
+
182
+ Client/server networking allows a program on a computer, called a client, to connect via a network to another computer, called a server. Servers offer (or host) various services to other network computers and users. These services are usually provided through ports, or numbered access points, beyond the server's IP address. Each port number is usually associated with a maximum of one running program, often called a daemon, which is responsible for handling requests to that port. A daemon, being a user program, can in turn access the local hardware resources of that computer by passing requests to the operating system kernel.
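A minimal example of such a daemon, bound to a hypothetical port 9000 and answering one client at a time, might look like this on a POSIX system (error handling omitted for brevity):

  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void) {
      int srv = socket(AF_INET, SOCK_STREAM, 0);       /* TCP socket */
      struct sockaddr_in addr = {0};
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(9000);                     /* hypothetical port */
      bind(srv, (struct sockaddr *)&addr, sizeof addr);
      listen(srv, 8);
      for (;;) {
          int client = accept(srv, NULL, NULL);        /* wait for a request */
          static const char reply[] = "hello from the daemon\n";
          write(client, reply, sizeof reply - 1);      /* service the request */
          close(client);                               /* local hardware access
                                                          happens via kernel calls */
      }
  }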
183
+
184
+ Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific tasks may also be supported such as NFS for file access. Protocols like ESound, or esd can be easily extended over the network to provide sound from local applications, on a remote system's sound hardware.
185
+
186
+ A computer's security depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel.[citation needed]
187
+
188
+ The operating system must be capable of distinguishing between requests which should be allowed to be processed, and others which should not be processed. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be supplied, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). Also covered by the concept of requester identity is authorization; the particular services and resources accessible by the requester once logged into a system are tied to either the requester's user account or to the variously configured groups of users to which the requester belongs.[citation needed]
189
+
190
+ In addition to the allow or disallow model of security, a system with a high level of security also offers auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?"). Internal security, or security from an already running program, is only possible if all possibly harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured.[citation needed]
191
+
192
+ External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC), a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select trusted operating systems being considered for the processing, storage and retrieval of sensitive or classified information.
193
+
194
+ Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have their security compromised. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and be running an insecure service, such as Telnet or FTP, and not have to be threatened by a security breach because the firewall would deny all traffic trying to connect to the service on that port.
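Conceptually, the firewall's per-connection decision reduces to a rule lookup; the toy function below (an entirely hypothetical rule set, not any real firewall's code) conveys the idea of denying everything that is not explicitly allowed:

  /* Toy packet-filter decision: permit only explicitly listed ports. */
  static const unsigned short allowed_ports[] = { 22, 80, 443 };

  static int allow_incoming(unsigned short dest_port) {
      for (unsigned i = 0; i < sizeof allowed_ports / sizeof allowed_ports[0]; i++)
          if (allowed_ports[i] == dest_port)
              return 1;          /* permitted service */
      return 0;                  /* everything else, e.g. Telnet (23), is dropped */
  }

  int main(void) {
      return allow_incoming(23); /* a Telnet connection attempt would be denied */
  }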
195
+
196
+ An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to either emulate a processor or provide a host for a p-code based system such as Java.
197
+
198
+ Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program could otherwise potentially bypass the operating system, including its auditing.
199
+
200
+ Every computer that is to be operated by an individual requires a user interface. The user interface is usually referred to as a shell and is essential if human interaction is to be supported. The user interface provides a view of the directory structure and requests services from the operating system that acquire data from input hardware devices, such as a keyboard, mouse or credit card reader, and that display prompts, status messages and similar output on hardware devices such as a video monitor or printer. The two most common forms of a user interface have historically been the command-line interface, where computer commands are typed out line-by-line, and the graphical user interface, where a visual environment (most commonly a WIMP) is present.
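At its core, a command-line shell is a small loop that reads a line, asks the kernel to run the named program, and waits for it to finish; a bare-bones POSIX sketch (no argument parsing, pipes, or error handling):

  #include <stdio.h>
  #include <string.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void) {
      char line[256];
      for (;;) {
          fputs("$ ", stdout);
          fflush(stdout);
          if (!fgets(line, sizeof line, stdin))
              break;                              /* end of input */
          line[strcspn(line, "\n")] = '\0';       /* strip the newline */
          if (line[0] == '\0')
              continue;
          pid_t pid = fork();                     /* ask the kernel for a new process */
          if (pid == 0) {
              execlp(line, line, (char *)NULL);   /* replace it with the command */
              _exit(127);                         /* exec failed */
          }
          waitpid(pid, NULL, 0);                  /* the shell waits for the command */
      }
      return 0;
  }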
201
+
202
+ Most modern computer systems support graphical user interfaces (GUIs), and often include one. In some computer systems, such as the original implementation of the classic Mac OS, the GUI is integrated into the kernel.
203
+
204
+ While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel and the operating system. In the 1980s, UNIX, VMS and many others were built this way. Linux and macOS are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user-space; however the graphics drawing routines of versions between Windows NT 4.0 and Windows Server 2003 exist mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.
205
+
206
+ Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE Plasma 5 is a commonly found setup on most Unix and Unix-like (BSD, Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.
207
+
208
+ Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, though an effort in the 1990s to standardize on COSE and CDE failed for various reasons, and both were eventually eclipsed by the widespread adoption of GNOME and K Desktop Environment. Prior to free software-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).
209
+
210
+ Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 1999.[32]
211
+
212
+ A real-time operating system (RTOS) is an operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.
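What distinguishes such systems is that work must complete before a deadline. A periodic task written against the POSIX real-time clock interface conveys the flavour (a sketch; the 10 ms period and the control-step placeholder are hypothetical):

  #define _POSIX_C_SOURCE 200809L
  #include <time.h>

  int main(void) {
      struct timespec next;
      clock_gettime(CLOCK_MONOTONIC, &next);
      for (int cycle = 0; cycle < 100; cycle++) {
          /* do_control_step();  hypothetical work that must finish
             before the next 10 ms deadline */
          next.tv_nsec += 10 * 1000 * 1000;       /* advance the deadline by 10 ms */
          if (next.tv_nsec >= 1000000000L) {
              next.tv_nsec -= 1000000000L;
              next.tv_sec += 1;
          }
          /* Sleep until the absolute deadline rather than for a relative
             delay, so timing error does not accumulate. */
          clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
      }
      return 0;
  }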
213
+
214
+ An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.
215
+
216
+ Embedded systems that have fixed deadlines use a real-time operating system such as VxWorks, PikeOS, eCos, QNX, MontaVista Linux and RTLinux. Windows CE is a real-time operating system that shares similar APIs to desktop Windows but shares none of desktop Windows' codebase.[33] Symbian OS also has an RTOS kernel (EKA2) starting with version 8.0b.
217
+
218
+ Some embedded systems use operating systems such as Palm OS, BSD, and Linux, although such operating systems do not support real-time computing.
219
+
220
+ Operating system development is one of the most complicated activities in which a computing hobbyist may engage.[citation needed] A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system, and has few users and active developers.[34]
221
+
222
+ In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is his/her own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests.
223
+
224
+ Examples of a hobby operating system include Syllable and TempleOS.
225
+
226
+ Application software is generally written for use on a specific operating system, and sometimes even for specific hardware.[citation needed] When porting the application to run on another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.) requiring the application to be adapted, changed, or otherwise maintained.
227
+
228
+ Unix, once rewritten in the C language, became one of the first operating systems not written in assembly language, making it very portable to systems different from its native PDP-11.[35]
229
+
230
+ This cost in supporting operating systems diversity can be avoided by instead writing applications against software platforms such as Java or Qt. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries.
231
+
232
+ Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.
en/5569.html.txt ADDED
@@ -0,0 +1,232 @@
1
+
2
+
3
+ An operating system (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs.
4
+
5
+ Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.
6
+
7
+ For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware,[1][2] although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers.
8
+
9
+ The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%. macOS by Apple Inc. is in second place (13.23%), and the varieties of Linux are collectively in third place (1.57%).[3] In the mobile sector (including smartphones and tablets), Android's share was up to 70% in 2017.[4] According to third quarter 2016 data, Android's share on smartphones is dominant at 87.5 percent, with a growth rate of 10.3 percent per year, followed by Apple's iOS at 12.1 percent, with a per-year decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.[5] Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications.
10
+
11
+ A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to be running concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking.
12
+
13
+ Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem.[6] A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources to multiple users.
14
+
15
+ A distributed operating system manages a group of distinct, networked computers and makes them appear to be a single computer, as all computations are distributed (divided amongst the constituent computers).[7]
16
+
17
+ In the distributed and cloud computing context of an OS, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses.[8]
18
+
19
+ Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines with less autonomy (e.g. PDAs). They are very compact and extremely efficient by design, and are able to operate with a limited amount of resources. Windows CE and Minix 3 are some examples of embedded operating systems.
20
+
21
+ A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Such an event-driven system switches between tasks based on their priorities or external events, whereas time-sharing operating systems switch tasks based on clock interrupts.
22
+
23
+ A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space, machine image that can be deployed to cloud or embedded environments.
24
+
25
+ Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their modern and more complex forms until the early 1960s.[9] Hardware features were added that enabled use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers.
26
+
27
+ In the 1940s, the earliest electronic digital systems had no operating systems. Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plugboards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards. After programmable general purpose computers were invented, machine languages (consisting of strings of the binary digits 0 and 1 on punched paper tape) were introduced that sped up the programming process (Stern, 1981).[full citation needed]
28
+
29
+ In the early 1950s, a computer could execute only one program at a time. Each user had sole use of the computer for a limited period and would arrive at a scheduled time with their program and data on punched paper cards or punched tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a front panel using toggle switches and panel lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the universal Turing machine.[9]
30
+
31
+ Later machines came with libraries of programs, which would be linked to a user's program to assist in operations such as input and output and compiling (generating machine code from human-readable symbolic code). This was the genesis of the modern-day operating system. However, machines still ran a single job at a time. At Cambridge University in England, the job queue was at one time a washing line (clothes line) from which tapes were hung with different colored clothes-pegs to indicate job priority.[citation needed]
32
+
33
+ An improvement was the Atlas Supervisor. Introduced with the Manchester Atlas in 1962, it is considered by many to be the first recognisable modern operating system.[10] Brinch Hansen described it as "the most significant breakthrough in the history of operating systems."[11]
34
+
35
+ Through the 1950s, many major features were pioneered in the field of operating systems on mainframe computers, including batch processing, input/output interrupting, buffering, multitasking, spooling, runtime libraries, link-loading, and programs for sorting records in files. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications. In 1959, the SHARE Operating System was released as an integrated utility for the IBM 704, and later in the 709 and 7090 mainframes, although it was quickly supplanted by IBSYS/IBJOB on the 709, 7090 and 7094.
36
+
37
+ During the 1960s, IBM's OS/360 introduced the concept of a single OS spanning an entire product line, which was crucial for the success of the System/360 machines. IBM's current mainframe operating systems are distant descendants of this original system and modern machines are backwards-compatible with applications written for OS/360.[citation needed]
38
+
39
+ OS/360 also pioneered the concept that the operating system keeps track of all of the system resources that are used, including program and data space allocation in main memory and file space in secondary storage, and file locking during updates. When a process is terminated for any reason, all of these resources are re-claimed by the operating system.
40
+
41
+ The alternative CP-67 system for the S/360-67 started a whole line of IBM operating systems focused on the concept of virtual machines. Other operating systems used on IBM S/360 series mainframes included systems developed by IBM: COS/360 (Compatibility Operating System), DOS/360 (Disk Operating System), TSS/360 (Time Sharing System), TOS/360 (Tape Operating System), BOS/360 (Basic Operating System), and ACP (Airline Control Program), as well as a few non-IBM systems: MTS (Michigan Terminal System), MUSIC (Multi-User System for Interactive Computing), and ORVYL (Stanford Timesharing System).
42
+
43
+ Control Data Corporation developed the SCOPE operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. Plato was remarkably innovative for its time, featuring real-time chat, and multi-user graphical games.
44
+
45
+ In 1961, Burroughs Corporation introduced the B5000 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages with no machine language or assembler; indeed, the MCP was the first OS to be written exclusively in a high-level language (ESPOL, a dialect of ALGOL). MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS/400, IBM made an approach to Burroughs to license MCP to run on the AS/400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys company's ClearPath/MCP line of computers.
46
+
47
+ UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems[citation needed]. Like all early main-frame systems, this batch-oriented system managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.
48
+
49
+ General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed General Comprehensive Operating System (GCOS).
50
+
51
+ Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Before the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community. RT-11 was a single-user real-time OS for the PDP-11 class minicomputer, and RSX-11 was the corresponding multi-user OS.
52
+
53
+ From the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying computer architectures to appear to be the same as others in a series. In fact, most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations.
54
+
55
+ The enormous investment in software for these systems made since the 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. Notable supported mainframe operating systems include:
56
+
57
+ The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk operating system was CP/M, which was supported on many early microcomputers and was closely imitated by Microsoft's MS-DOS, which became widely popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS). In the 1980s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative graphical user interface (GUI) to the Mac OS.
58
+
59
+ The introduction of the Intel 80386 CPU chip in October 1985,[12] with 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the NEXTSTEP operating system. NEXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD as the core of Mac OS X (macOS after latest name change).
60
+
61
+ The GNU Project was started by activist and programmer Richard Stallman with the goal of creating a complete free software replacement to the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991, Finnish computer science student Linus Torvalds, with cooperation from volunteers collaborating over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU user space components and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to as simply "Linux" by the software industry, a naming convention that Stallman and the Free Software Foundation remain opposed to, preferring the name GNU/Linux. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.
62
+
63
+ Unix was originally written in assembly language.[13] Ken Thompson wrote B, mainly based on BCPL, based on his experience in the MULTICS project. B was replaced by C, and Unix, rewritten in C, developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system (see History).
64
+
65
+ The Unix-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group which licenses it for use with any operating system that has been shown to conform to their definitions. "UNIX-like" is commonly used to refer to the large set of operating systems which resemble the original UNIX.
66
+
67
+ Unix-like systems run on a wide variety of computer architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free UNIX variants, such as Linux and BSD, are popular in these areas.
68
+
69
+ Four operating systems are certified by The Open Group (holder of the Unix trademark) as Unix. HP's HP-UX and IBM's AIX are both descendants of the original System V Unix and are designed to run only on their respective vendor's hardware. In contrast, Sun Microsystems's Solaris can run on multiple types of hardware, including x86 and Sparc servers, and PCs. Apple's macOS, a replacement for Apple's earlier (non-Unix) Mac OS, is a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD.
70
+
71
+ Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.
72
+
73
+ A subgroup of the Unix family is the Berkeley Software Distribution family, which includes FreeBSD, NetBSD, and OpenBSD. These operating systems are most commonly found on webservers, although they can also function as a personal computer OS. The Internet owes much of its existence to BSD, as many of the protocols now commonly used by computers to connect, send and receive data over a network were widely implemented and refined in BSD. The World Wide Web was also first demonstrated on a number of computers running an OS based on BSD called NeXTSTEP.
74
+
75
+ In 1974, University of California, Berkeley installed its first Unix system. Over time, students and staff in the computer science department there began adding new programs to make things easier, such as text editors. When Berkeley received new VAX computers in 1978 with Unix installed, the school's undergraduates modified Unix even more in order to take advantage of the computer's hardware possibilities. The Defense Advanced Research Projects Agency of the US Department of Defense took interest, and decided to fund the project. Many schools, corporations, and government organizations took notice and started to use Berkeley's version of Unix instead of the official one distributed by AT&T.
76
+
77
+ Steve Jobs, upon leaving Apple Inc. in 1985, formed NeXT Inc., a company that manufactured high-end computers running on a variation of BSD called NeXTSTEP. One of these computers was used by Tim Berners-Lee as the first webserver to create the World Wide Web.
78
+
79
+ Developers like Keith Bostic encouraged the project to replace any non-free code that originated with Bell Labs. Once this was done, however, AT&T sued. After two years of legal disputes, the BSD project spawned a number of free derivatives, such as NetBSD and FreeBSD (both in 1993), and OpenBSD (from NetBSD in 1995).
80
+
81
+ macOS (formerly "Mac OS X" and later "OS X") is a line of open core graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. macOS is the successor to the original classic Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, macOS is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997.
82
+ The operating system was first released in 1999 as Mac OS X Server 1.0, followed in March 2001 by a client version (Mac OS X v10.0 "Cheetah"). Since then, six more distinct "client" and "server" editions of macOS have been released, until the two were merged in OS X 10.7 "Lion".
83
+
84
+ Prior to its merging with macOS, the server edition – macOS Server – was architecturally identical to its desktop counterpart and usually ran on Apple's line of Macintosh server hardware. macOS Server included work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others. With Mac OS X v10.7 Lion, all server aspects of Mac OS X Server have been integrated into the client version and the product re-branded as "OS X" (dropping "Mac" from the name). The server tools are now offered as an application.[14]
85
+
86
+ The Linux kernel originated in 1991, as a project of Linus Torvalds, while a university student in Finland. He posted information about his project on a newsgroup for computer students and programmers, and received support and assistance from volunteers who succeeded in creating a complete and functional kernel.
87
+
88
+ Linux is Unix-like, but was developed without any Unix code, unlike BSD and its variants. Because of its open license model, the Linux kernel code is available for study and modification, which resulted in its use on a wide range of computing machinery from supercomputers to smart-watches. Although estimates suggest that Linux is used on only 1.82% of all "desktop" (or laptop) PCs,[15] it has been widely adopted for use in servers[16] and embedded systems[17] such as cell phones. Linux has superseded Unix on many platforms and is used on most supercomputers including the top 385.[18] Many of the same computers are also on Green500 (but in different order), and Linux runs on the top 10. Linux is also commonly used on other small energy-efficient computers, such as smartphones and smartwatches. The Linux kernel is used in some popular distributions, such as Red Hat, Debian, Ubuntu, Linux Mint and Google's Android, Chrome OS, and Chromium OS.
89
+
90
+ Microsoft Windows is a family of proprietary operating systems designed by Microsoft Corporation and primarily targeted to Intel architecture based computers, with an estimated 88.9 percent total usage share on Web connected computers.[15][19][20][21] The latest version is Windows 10.
91
+
92
+ In 2011, Windows 7 overtook Windows XP as most common version in use.[22][23][24]
93
+
94
+ Microsoft Windows was first released in 1985, as an operating environment running on top of MS-DOS, which was the standard operating system shipped on most Intel architecture personal computers at the time. In 1995, Windows 95 was released which only used MS-DOS as a bootstrap. For backwards compatibility, Win9x could run real-mode MS-DOS[25][26] and 16-bit Windows 3.x[27] drivers. Windows ME, released in 2000, was the last version in the Win9x family. Later versions have all been based on the Windows NT kernel. Current client versions of Windows run on IA-32, x86-64 and 32-bit ARM microprocessors.[28] In addition Itanium is still supported in older server version Windows Server 2008 R2. In the past, Windows NT supported additional architectures.
95
+
96
+ Server editions of Windows are widely used. In recent years, Microsoft has expended significant capital in an effort to promote the use of Windows as a server operating system. However, Windows' usage on servers is not as widespread as on personal computers as Windows competes against Linux and BSD for server market share.[29][30]
97
+
98
+ ReactOS is a Windows-alternative operating system, which is being developed on the principles of Windows – without using any of Microsoft's code.
99
+
100
+ There have been many operating systems that were significant in their day but are no longer so, such as AmigaOS; OS/2 from IBM and Microsoft; classic Mac OS, the non-Unix precursor to Apple's macOS; BeOS; XTS-300; RISC OS; MorphOS; Haiku; BareMetal and FreeMint. Some are still used in niche markets and continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard. Yet other operating systems are used almost exclusively in academia, for operating systems education or to do research on operating system concepts. A typical example of a system that fulfills both roles is MINIX, while for example Singularity is used purely for research. Another example is the Oberon System designed at ETH Zürich by Niklaus Wirth, Jürg Gutknecht and a group of students at the former Computer Systems Institute in the 1980s. It was used mainly for research, teaching, and daily work in Wirth's group.
101
+
102
+ Other operating systems have failed to win significant market share, but have introduced innovations that have influenced mainstream operating systems, not least Bell Labs' Plan 9.
103
+
104
+ The components of an operating system all exist in order to make the different parts of a computer work together. All user software needs to go through the operating system in order to use any of the hardware, whether it be as simple as a mouse or keyboard or as complex as an Internet component.
105
+
106
+ With the aid of the firmware and device drivers, the kernel provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in the RAM, it determines which programs get access to which hardware resources, it sets up or resets the CPU's operating states for optimal operation at all times, and it organizes the data for long-term non-volatile storage with file systems on such media as disks, tapes, flash memory, etc.
107
+
108
+ The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. Executing an application program involves the creation of a process by the operating system kernel which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program which then interacts with the user and with hardware devices.
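On POSIX systems one visible face of this is the process-creation interface, where a single call asks the kernel to set up the address space, load the program image and start it (a sketch; /bin/echo is just an example program):

  #include <spawn.h>
  #include <stdio.h>
  #include <sys/wait.h>

  extern char **environ;

  int main(void) {
      pid_t pid;
      char *argv[] = { "/bin/echo", "started by the kernel", NULL };
      /* The kernel allocates memory for the new process, loads the
         binary, schedules it, and hands back its process id. */
      if (posix_spawn(&pid, "/bin/echo", NULL, NULL, argv, environ) != 0) {
          perror("posix_spawn");
          return 1;
      }
      waitpid(pid, NULL, 0);      /* wait for the child to finish */
      return 0;
  }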
109
+
110
+ Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative – having the operating system "watch" the various sources of input for events (polling) that require action – can be found in older systems with very small stacks (50 or 60 bytes) but is unusual in modern systems with large stacks. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts, and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place.
111
+
112
+ When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or the running program.
113
+
114
+ When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code. The amount of code being run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone). The processing of hardware interrupts is a task that is usually delegated to software called a device driver, which may be part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means.
115
+
116
+ A program may also trigger an interrupt to the operating system. If a program wishes to access hardware, for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel then processes the request. If a program wants additional resources (or wants to release resources) such as memory, it triggers an interrupt to get the kernel's attention.
117
+
118
+ Modern microprocessors (CPU or MPU) support multiple modes of operation. CPUs with this capability offer at least two modes: user mode and supervisor mode. In general terms, supervisor mode operation allows unrestricted access to all machine resources, including all MPU instructions. User mode operation sets limits on instruction use and typically disallows direct access to machine resources. CPUs might have other modes similar to user mode as well, such as virtual modes used to emulate older processor types (for example, 16-bit processors on a 32-bit one, or 32-bit processors on a 64-bit one).
119
+
120
+ At power-on or reset, the system begins in supervisor mode. Once an operating system kernel has been loaded and started, the boundary between user mode and supervisor mode (also known as kernel mode) can be established.
121
+
122
+ Supervisor mode is used by the kernel for low level tasks that need unrestricted access to hardware, such as controlling how memory is accessed, and communicating with devices such as disk drives and video display devices. User mode, in contrast, is used for almost everything else. Application programs, such as word processors and database managers, operate within user mode, and can only access machine resources by turning control over to the kernel, a process which causes a switch to supervisor mode. Typically, the transfer of control to the kernel is achieved by executing a software interrupt instruction, such as the Motorola 68000 TRAP instruction. The software interrupt causes the microprocessor to switch from user mode to supervisor mode and begin executing code that allows the kernel to take control.
123
+
124
+ In user mode, programs usually have access to a restricted set of microprocessor instructions, and generally cannot execute any instructions that could potentially cause disruption to the system's operation. In supervisor mode, instruction execution restrictions are typically removed, allowing the kernel unrestricted access to all machine resources.
125
+
126
+ The term "user mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally cause a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting, for example, by forcibly terminating ("killing") the program.
127
+
128
+ Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory.
129
+
130
+ Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen any more, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program to crash the system.
131
+
132
+ Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which doesn't exist in all computers.
133
+
134
+ In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt, which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short. Since it is difficult to assign a meaningful result to such an operation, and because it is usually a sign of a misbehaving program, the kernel generally resorts to terminating the offending program and reports the error.
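+
+ As an illustration (assuming a POSIX system; the handler and the bad address here are purely for demonstration), a process can observe the kernel's response to a segmentation violation through the SIGSEGV signal:
+
+     #include <signal.h>
+     #include <unistd.h>
+
+     static void on_segv(int sig)
+     {
+         /* Only async-signal-safe calls belong here; write() qualifies. */
+         (void)sig;
+         const char msg[] = "caught SIGSEGV, exiting\n";
+         write(2, msg, sizeof msg - 1);
+         _exit(1);
+     }
+
+     int main(void)
+     {
+         signal(SIGSEGV, on_segv);
+         int *bad = (int *)1;   /* an address this program has no right to use */
+         *bad = 42;             /* the access faults; the kernel steps in      */
+         return 0;
+     }
+
+ If the handler were not installed, the kernel's default action would simply terminate the process and report the error.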
135
+
136
+ Windows versions 3.1 through ME had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
137
+
138
+ The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
139
+
140
+ If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel is interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault.
141
+
142
+ When the kernel detects a page fault it generally adjusts the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.
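+
+ From the program's point of view this is invisible. The sketch below (assuming a POSIX system with anonymous mmap) reserves a range of virtual addresses whose physical pages are typically only assigned when first touched, each first touch being resolved by a page fault that the kernel handles transparently:
+
+     #define _DEFAULT_SOURCE
+     #include <sys/mman.h>
+     #include <stdio.h>
+
+     int main(void)
+     {
+         size_t page = 4096, len = 16 * page;      /* assume 4 KiB pages */
+         char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
+                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+         if (p == MAP_FAILED) { perror("mmap"); return 1; }
+
+         /* The first write to each page raises a page fault; the kernel
+          * allocates a physical frame, maps it, and resumes the program. */
+         for (size_t i = 0; i < len; i += page)
+             p[i] = 1;
+
+         munmap(p, len);
+         return 0;
+     }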
143
+
144
+ In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
145
+
146
+ "Virtual memory" provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.[31]
147
+
148
+ Multitasking refers to the running of multiple independent computer programs on the same computer; giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute.
149
+
150
+ An operating system kernel contains a scheduling program which determines how much time each process spends executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This passing of control between the kernel and applications is called a context switch.
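+
+ A toy sketch of one common scheduling policy, round-robin, is shown below (the task names are hypothetical and this is not any real kernel's scheduler); a real kernel additionally saves and restores CPU registers and other state at each context switch:
+
+     #include <stdio.h>
+
+     #define NTASKS 3
+
+     struct task { const char *name; int runnable; };
+
+     static struct task tasks[NTASKS] = {
+         {"editor", 1}, {"compiler", 1}, {"player", 1},
+     };
+
+     /* Pick the next runnable task in circular order after `current`. */
+     static int pick_next(int current)
+     {
+         for (int i = 1; i <= NTASKS; i++) {
+             int candidate = (current + i) % NTASKS;
+             if (tasks[candidate].runnable)
+                 return candidate;
+         }
+         return current;  /* nothing else to run */
+     }
+
+     int main(void)
+     {
+         int current = 0;
+         for (int slice = 0; slice < 6; slice++) {
+             printf("slice %d: running %s\n", slice, tasks[current].name);
+             current = pick_next(current);   /* the "context switch" */
+         }
+         return 0;
+     }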
151
+
152
+ An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.
153
+
154
+ Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well.
155
+
156
+ The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.)
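+
+ A rough user-space analogy (assuming a POSIX system) uses an interval timer whose signal periodically interrupts the running code, much as a hardware timer lets the kernel preempt a running program:
+
+     #include <signal.h>
+     #include <stdio.h>
+     #include <sys/time.h>
+
+     static volatile sig_atomic_t ticks = 0;
+
+     static void on_tick(int sig) { (void)sig; ticks++; }
+
+     int main(void)
+     {
+         struct sigaction sa = { .sa_handler = on_tick };
+         sigaction(SIGALRM, &sa, NULL);
+
+         /* Fire SIGALRM every 10 ms, and keep firing it periodically. */
+         struct itimerval tv = { { 0, 10000 }, { 0, 10000 } };
+         setitimer(ITIMER_REAL, &tv, NULL);
+
+         while (ticks < 100)   /* busy work stands in for a user program */
+             ;
+         printf("interrupted %d times by the timer\n", (int)ticks);
+         return 0;
+     }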
157
+
158
+ On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. The AmigaOS is an exception, having preemptive multitasking from its very first version. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).
159
+
160
+ Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree.
161
+
162
+ Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system.
163
+
164
+ While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. An operating system such as UNIX supports a wide array of storage devices, regardless of their design or file systems, allowing them to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers.
165
+
166
+ A connected storage device, such as a hard drive, is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX, this is the language of block devices.
167
+
168
+ When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates.
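+
+ For example, a program can create, write, inspect and delete a file purely in terms of names and attributes, leaving the on-disk layout to the file system driver (a minimal sketch assuming a POSIX system; the file name is arbitrary):
+
+     #include <fcntl.h>
+     #include <stdio.h>
+     #include <sys/stat.h>
+     #include <unistd.h>
+
+     int main(void)
+     {
+         int fd = open("example.txt", O_CREAT | O_WRONLY, 0644);
+         if (fd < 0) { perror("open"); return 1; }
+         write(fd, "data\n", 5);
+         close(fd);
+
+         struct stat st;                      /* gather file attributes */
+         if (stat("example.txt", &st) == 0)
+             printf("size: %lld bytes, mode: %o\n",
+                    (long long)st.st_size, (unsigned)st.st_mode & 0777);
+
+         unlink("example.txt");               /* delete the file again */
+         return 0;
+     }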
169
+
170
+ Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes make the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ext3 and ReiserFS in Linux. However, in practice, third party drivers are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through third-party software).
171
+
172
+ Support for file systems is highly varied among modern operating systems, although there are several common file systems which almost all operating systems include support and drivers for. Operating systems vary on file system support and on the disk formats they may be installed on. Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. It is possible to install Linux onto many types of file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the media it is stored in, whether it is a hard drive, a disc (CD, DVD...), a USB flash drive, or even contained within a file located on another file system.
173
+
174
+ A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and, on the other end, the requisite interfaces to the operating system and software applications. It is a specialized, hardware-dependent and operating-system-specific program that enables another program (typically the operating system or an application running under its kernel) to interact transparently with a hardware device, and it usually provides the interrupt handling required for asynchronous, time-dependent hardware interfacing.
175
+
176
+ The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Manufacturers also release newer models that provide more reliable or better performance, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these operating system mandated function calls into device specific calls. In theory a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver ensures that the device appears to operate as usual from the operating system's point of view.
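+
+ A highly simplified sketch of this idea follows; the structure and the "ramdisk" driver are invented for illustration and do not correspond to any particular kernel's real driver interface:
+
+     #include <stdio.h>
+
+     /* The "operating system" defines one set of operations... */
+     struct block_driver {
+         const char *name;
+         int (*read_block)(unsigned block_no, void *buf);
+         int (*write_block)(unsigned block_no, const void *buf);
+     };
+
+     /* ...and each driver fills the table with device-specific code. */
+     static int ramdisk_read(unsigned block_no, void *buf)
+     { (void)block_no; (void)buf; return 0; }
+     static int ramdisk_write(unsigned block_no, const void *buf)
+     { (void)block_no; (void)buf; return 0; }
+
+     static struct block_driver ramdisk = {
+         .name = "ramdisk0",
+         .read_block = ramdisk_read,
+         .write_block = ramdisk_write,
+     };
+
+     /* Generic code only ever sees the common interface. */
+     static int os_read_block(struct block_driver *drv, unsigned n, void *buf)
+     {
+         return drv->read_block(n, buf);
+     }
+
+     int main(void)
+     {
+         char buf[512];
+         os_read_block(&ramdisk, 0, buf);
+         printf("read block 0 from %s\n", ramdisk.name);
+         return 0;
+     }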
177
+
178
+ Under versions of Windows before Vista and versions of Linux before 2.6, all driver execution was co-operative, meaning that if a driver entered an infinite loop it would freeze the system. More recent revisions of these operating systems incorporate kernel preemption, where the kernel interrupts the driver to give it tasks, and then separates itself from the process until it receives a response from the device driver, or gives it more tasks to do.
179
+
180
+ Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH which allows networked users direct access to a computer's command line interface.
181
+
182
+ Client/server networking allows a program on a computer, called a client, to connect via a network to another computer, called a server. Servers offer (or host) various services to other network computers and users. These services are usually provided through ports, numbered access points beyond the server's IP address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port. This program, called a daemon and itself a user program, can in turn access the local hardware resources of that computer by passing requests to the operating system kernel.
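+
+ A minimal sketch of the client side (assuming POSIX sockets; the address 127.0.0.1 and port 8080 are placeholders, not taken from the original text):
+
+     #include <arpa/inet.h>
+     #include <netinet/in.h>
+     #include <stdio.h>
+     #include <sys/socket.h>
+     #include <unistd.h>
+
+     int main(void)
+     {
+         int fd = socket(AF_INET, SOCK_STREAM, 0);
+         if (fd < 0) { perror("socket"); return 1; }
+
+         struct sockaddr_in addr = {0};
+         addr.sin_family = AF_INET;
+         addr.sin_port = htons(8080);                     /* hypothetical port */
+         inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* hypothetical host */
+
+         if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
+             perror("connect");
+             return 1;
+         }
+         const char req[] = "hello\n";
+         write(fd, req, sizeof req - 1);   /* send a request to the server */
+         close(fd);
+         return 0;
+     }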
183
+
184
+ Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific tasks may also be supported such as NFS for file access. Protocols like ESound, or esd can be easily extended over the network to provide sound from local applications, on a remote system's sound hardware.
185
+
186
+ A computer being secure depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel.[citation needed]
187
+
188
+ The operating system must be capable of distinguishing between requests which should be allowed to be processed, and others which should not be processed. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). Also covered by the concept of requester identity is authorization; the particular services and resources accessible by the requester once logged into a system are tied to either the requester's user account or to the variously configured groups of users to which the requester belongs.[citation needed]
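+
+ A small illustration (assuming a POSIX system; the path is just an example): the kernel decides, based on the process's identity, whether a requested kind of access to a resource is authorized:
+
+     #include <stdio.h>
+     #include <unistd.h>
+
+     int main(void)
+     {
+         /* access() asks the kernel whether the real user ID of this
+          * process may read the named file. */
+         if (access("/etc/shadow", R_OK) == 0)
+             puts("this user is authorized to read /etc/shadow");
+         else
+             puts("the operating system denies read access");
+         return 0;
+     }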
189
+
190
+ In addition to the allow or disallow model of security, a system with a high level of security also offers auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?"). Internal security, or security from an already running program, is only possible if all possibly harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured.[citation needed]
191
+
192
+ External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select trusted operating systems being considered for the processing, storage and retrieval of sensitive or classified information.
193
+
194
+ Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and be running an insecure service, such as Telnet or FTP, and not have to be threatened by a security breach because the firewall would deny all traffic trying to connect to the service on that port.
195
+
196
+ An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to either emulate a processor or provide a host for a p-code based system such as Java.
197
+
198
+ Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, inclusive of bypassing auditing.
199
+
200
+ Every computer that is to be operated by an individual requires a user interface. The user interface is usually referred to as a shell and is essential if human interaction is to be supported. The user interface views the directory structure and requests services from the operating system that will acquire data from input hardware devices, such as a keyboard, mouse or credit card reader, and requests operating system services to display prompts, status messages and such on output hardware devices, such as a video monitor or printer. The two most common forms of a user interface have historically been the command-line interface, where computer commands are typed out line-by-line, and the graphical user interface, where a visual environment (most commonly a WIMP) is present.
201
+
202
+ Most modern computer systems support graphical user interfaces (GUIs), and often include them. In some computer systems, such as the original implementation of the classic Mac OS, the GUI is integrated into the kernel.
203
+
204
+ While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel. In the 1980s, UNIX, VMS and many others were built this way. Linux and macOS are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user-space; however the graphics drawing routines of versions between Windows NT 4.0 and Windows Server 2003 exist mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.
205
+
206
+ Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE Plasma 5 is a commonly found setup on most Unix and Unix-like (BSD, Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.
207
+
208
+ Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, though an effort in the 1990s to standardize on COSE and CDE failed for various reasons, and these were eventually eclipsed by the widespread adoption of GNOME and K Desktop Environment. Prior to free software-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).
209
+
210
+ Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 1999.[32]
211
+
212
+ A real-time operating system (RTOS) is an operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.
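+
+ The following sketch shows the shape of a fixed-rate task of the kind an RTOS schedules (assuming a POSIX system with clock_nanosleep; a real RTOS would additionally give the task a priority and guarantee that its deadline is met):
+
+     #define _POSIX_C_SOURCE 200809L
+     #include <stdio.h>
+     #include <time.h>
+
+     int main(void)
+     {
+         struct timespec next;
+         clock_gettime(CLOCK_MONOTONIC, &next);
+
+         for (int i = 0; i < 5; i++) {
+             printf("cycle %d: read sensors, update outputs\n", i);
+
+             next.tv_nsec += 100L * 1000 * 1000;     /* 100 ms period */
+             if (next.tv_nsec >= 1000000000L) {
+                 next.tv_nsec -= 1000000000L;
+                 next.tv_sec += 1;
+             }
+             /* Sleep until the absolute start of the next period. */
+             clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
+         }
+         return 0;
+     }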
213
+
214
+ An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.
215
+
216
+ Embedded systems that have fixed deadlines use a real-time operating system such as VxWorks, PikeOS, eCos, QNX, MontaVista Linux and RTLinux. Windows CE is a real-time operating system that shares similar APIs to desktop Windows but shares none of desktop Windows' codebase.[33] Symbian OS also has an RTOS kernel (EKA2) starting with version 8.0b.
217
+
218
+ Some embedded systems use operating systems such as Palm OS, BSD, and Linux, although such operating systems do not support real-time computing.
219
+
220
+ Operating system development is one of the most complicated activities in which a computing hobbyist may engage.[citation needed] A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system, and has few users and active developers.[34]
221
+
222
+ In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is his/her own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests.
223
+
224
+ Examples of a hobby operating system include Syllable and TempleOS.
225
+
226
+ Application software is generally written for use on a specific operating system, and sometimes even for specific hardware.[citation needed] When porting the application to run on another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.) requiring the application to be adapted, changed, or otherwise maintained.
227
+
228
+ Unix was the first operating system not written in assembly language, making it very portable to systems different from its native PDP-11.[35]
229
+
230
+ This cost in supporting operating systems diversity can be avoided by instead writing applications against software platforms such as Java or Qt. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries.
231
+
232
+ Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.
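+
+ As a small illustration of coding against a standard rather than one operating system (a sketch using only POSIX calls, so the same source should build unchanged on Linux, macOS or the BSDs):
+
+     #include <stdio.h>
+     #include <sys/utsname.h>
+     #include <unistd.h>
+
+     int main(void)
+     {
+         struct utsname info;
+         /* uname() and getpid() are specified by POSIX, not by any one OS. */
+         if (uname(&info) == 0)
+             printf("running on %s %s as process %ld\n",
+                    info.sysname, info.release, (long)getpid());
+         return 0;
+     }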
en/557.html.txt ADDED
@@ -0,0 +1,288 @@
1
+
2
+
3
+ Barcelona (/ˌbɑːrsəˈloʊnə/ BAR-sə-LOH-nə, Catalan: [bəɾsəˈlonə], Spanish: [baɾθeˈlona]) is a city on the coast of northeastern Spain. It is the capital and largest city of the autonomous community of Catalonia, as well as the second most populous municipality of Spain. With a population of 1.6 million within city limits,[7] its urban area extends to numerous neighbouring municipalities within the Province of Barcelona and is home to around 4.8 million people,[3] making it the fifth most populous urban area in the European Union after Paris, the Ruhr area, Madrid, and Milan.[3] It is one of the largest metropolises on the Mediterranean Sea, located on the coast between the mouths of the rivers Llobregat and Besòs, and bounded to the west by the Serra de Collserola mountain range, the tallest peak of which is 512 metres (1,680 feet) high.
4
+
5
+ Founded as a Roman city, in the Middle Ages Barcelona became the capital of the County of Barcelona. After merging with the Kingdom of Aragon, Barcelona continued to be an important city in the Crown of Aragon as an economic and administrative centre of this Crown and the capital of the Principality of Catalonia. Barcelona has a rich cultural heritage and is today an important cultural centre and a major tourist destination. Particularly renowned are the architectural works of Antoni Gaudí and Lluís Domènech i Montaner, which have been designated UNESCO World Heritage Sites. Since 1450, it has been home to the University of Barcelona. The headquarters of the Union for the Mediterranean are located in Barcelona. The city is known for hosting the 1992 Summer Olympics, as well as world-class conferences, expositions and many international sports tournaments.
6
+
7
+ Barcelona is a major cultural, economic, and financial centre in southwestern Europe,[8] as well as the main biotech hub in Spain.[9] As a leading world city, Barcelona's influence in global socio-economic affairs qualifies it for global city status.[10][11]
8
+
9
+ Barcelona is a transport hub, with the Port of Barcelona being one of Europe's principal seaports and busiest European passenger port,[12] an international airport, Barcelona–El Prat Airport, which handles over 50 million passengers per year,[13] an extensive motorway network, and a high-speed rail line with a link to France and the rest of Europe.[14]
10
+
11
+ The name Barcelona comes from the ancient Iberian Barkeno, attested in Iberian script in an ancient coin inscription found on the right side of the coin,[15] in ancient Greek sources as Βαρκινών, Barkinṓn;[16][17] and in Latin as Barcino,[18] Barcilonum[19] and Barcenona.[20][21][22]
12
+
13
+ Some older sources suggest that the city may have been named after the Carthaginian general Hamilcar Barca, who was supposed to have founded the city in the 3rd century BC,[23] but there is no evidence that Barcelona was ever a Carthaginian settlement, or that its name in antiquity, Barcino, had any connection with the Barcid family of Hamilcar.[24]
14
+ During the Middle Ages, the city was variously known as Barchinona, Barçalona, Barchelonaa, and Barchenona.
15
+
16
+ Internationally, Barcelona's name is wrongly abbreviated to 'Barça'. However, this name refers only to FC Barcelona, the football club. The common abbreviated form used by locals is Barna.
17
+
18
+ Another common abbreviation is 'BCN', which is also the IATA airport code of the Barcelona-El Prat Airport.
19
+
20
+ The city is also referred to as the Ciutat Comtal in Catalan, and Ciudad Condal in Spanish (i.e. Comital City or City of Counts), owing to its past as the seat of the Count of Barcelona.[25]
21
+
22
+ The origin of the earliest settlement at the site of present-day Barcelona is unclear. The ruins of an early settlement have been found, including different tombs and dwellings dating to earlier than 5000 BC.[26][27] The founding of Barcelona is the subject of two different legends. The first attributes the founding of the city to the mythological Hercules. The second legend attributes the foundation of the city directly to the historical Carthaginian general, Hamilcar Barca, father of Hannibal, who supposedly named the city Barcino after his family in the 3rd century BC,[28] but there is no historical or linguistic evidence that this is true.[24]
24
+
25
+ In about 15 BC, the Romans redrew the town as a castrum (Roman military camp) centred on the "Mons Taber", a little hill near the contemporary city hall (Plaça de Sant Jaume). Under the Romans, it was a colony with the surname of Faventia,[29] or, in full, Colonia Faventia Julia Augusta Pia Barcino[30] or Colonia Julia Augusta Faventia Paterna Barcino. Pomponius Mela[31] mentions it among the small towns of the district, probably as it was eclipsed by its neighbour Tarraco (modern Tarragona), but it may be gathered from later writers that it gradually grew in wealth and consequence, favoured as it was with a beautiful situation and an excellent harbour.[32] It enjoyed immunity from imperial burdens.[33] The city minted its own coins; some from the era of Galba survive.
26
+
27
+ Important Roman vestiges are displayed in Plaça del Rei underground, as a part of the Barcelona City History Museum (MUHBA); the typically Roman grid plan is still visible today in the layout of the historical centre, the Barri Gòtic (Gothic Quarter). Some remaining fragments of the Roman walls have been incorporated into the cathedral.[34] The cathedral, known very formally by the long name of Catedral Basílica Metropolitana de Barcelona, is also sometimes called La Seu, which simply means cathedral (and see, among other things) in Catalan.[35][36] It is said to have been founded in 343.
28
+
29
+ The city was conquered by the Visigoths in the early 5th century, becoming for a few years the capital of all Hispania. After being conquered by the Arabs in the early 8th century, it was conquered in 801 by Charlemagne's son Louis, who made Barcelona the seat of the Carolingian "Hispanic March" (Marca Hispanica), a buffer zone ruled by the Count of Barcelona.
30
+
31
+ The Counts of Barcelona became increasingly independent and expanded their territory to include all of Catalonia, although on 6 July 985, Barcelona was sacked by the army of Almanzor.[37] The sack was so traumatic that most of Barcelona's population was either killed or enslaved.[38] In 1137, Aragon and the County of Barcelona merged in dynastic union[39][40] by the marriage of Ramon Berenguer IV and Petronilla of Aragon, their titles finally borne by only one person when their son Alfonso II of Aragon ascended to the throne in 1162. His territories were later to be known as the Crown of Aragon, which conquered many overseas possessions and ruled the western Mediterranean Sea with outlying territories in Naples and Sicily and as far as Athens in the 13th century.
32
+
33
+ Barcelona was the leading slave trade centre of the Crown of Aragon up until the 15th century, when it was eclipsed by Valencia.[41] It initially drew on eastern and Balkan slave stock, later drawing from a Maghribian and, ultimately, sub-Saharan pool of slaves.[42]
34
+
35
+ The forging of a dynastic link between the Crowns of Aragon and Castile marked the beginning of Barcelona's decline.[why?] The Bank of Barcelona (Taula de canvi), probably the oldest public bank in Europe, was established by the city magistrates in 1401. It originated from necessities of the state, as did the Bank of Venice (1402) and the Bank of Genoa (1407).[43]
36
+
37
+ The marriage of Ferdinand II of Aragon and Isabella I of Castile in 1469 united the two royal lines. Madrid became the centre of political power whilst the colonisation of the Americas reduced the financial importance (at least in relative terms) of Mediterranean trade. Barcelona was a centre of Catalan separatism, including the Catalan Revolt (1640–52) against Philip IV of Spain. The great plague of 1650–1654 halved the city's population.[44]
38
+
39
+ In the 18th century, a fortress was built at Montjuïc that overlooked the harbour. In 1794, this fortress was used by the French astronomer Pierre François André Méchain for observations relating to a survey stretching to Dunkirk that provided the official basis of the measurement of a metre.[45] The definitive metre bar, manufactured from platinum, was presented to the French legislative assembly on 22 June 1799. Much of Barcelona was negatively affected by the Napoleonic wars, but the start of industrialisation saw the fortunes of the province improve.
40
+
41
+ During the Spanish Civil War, the city, and Catalonia in general, were resolutely Republican. Many enterprises and public services were collectivised by the CNT and UGT unions. As the power of the Republican government and the Generalitat diminished, much of the city was under the effective control of anarchist groups. The anarchists lost control of the city to their own allies, the Communists and official government troops, after the street fighting of the Barcelona May Days. The fall of the city on 26 January 1939, caused a mass exodus of civilians who fled to the French border. The resistance of Barcelona to Franco's coup d'état was to have lasting effects after the defeat of the Republican government. The autonomous institutions of Catalonia were abolished,[48] and the use of the Catalan language in public life was suppressed. Barcelona remained the second largest city in Spain, at the heart of a region which was relatively industrialised and prosperous, despite the devastation of the civil war. The result was a large-scale immigration from poorer regions of Spain (particularly Andalusia, Murcia and Galicia), which in turn led to rapid urbanisation.
42
+
43
+ In 1992, Barcelona hosted the Summer Olympics. The after-effects of this are credited with driving major changes in what had, up until then, been a largely industrial city. As part of the preparation for the games, industrial buildings along the sea-front were demolished and two miles of beach were created. New construction increased the road capacity of the city by 17%, the sewage handling capacity by 27% and the amount of new green areas and beaches by 78%. Between 1990 and 2004, the number of hotel rooms in the city doubled. Perhaps more importantly, the outside perception of the city was changed making, by 2012, Barcelona the 12th most popular city destination in the world and the 5th amongst European cities.[49][50][51][52][53]
44
+
45
+ The death of Franco in 1975 brought on a period of democratisation throughout Spain. Pressure for change was particularly strong in Barcelona, which considered that it had been punished during nearly forty years of Francoism for its support of the Republican government.[54] Massive, but peaceful, demonstrations on 11 September 1977 assembled over a million people in the streets of Barcelona to call for the restoration of Catalan autonomy. It was granted less than a month later.[55]
46
+
47
+ The development of Barcelona was promoted by two events in 1986: Spanish accession to the European Community, and particularly Barcelona's designation as host city of the 1992 Summer Olympics.[56][57] The process of urban regeneration has been rapid, and accompanied by a greatly increased international reputation of the city as a tourist destination. The increased cost of housing has led to a slight decline (−16.6%) in the population over the last two decades of the 20th century as many families move out into the suburbs. This decline has been reversed since 2001, as a new wave of immigration (particularly from Latin America and from Morocco) has gathered pace.[58]
48
+
49
+ In 1987, an ETA car bombing at Hipercor killed 21 people. On 17 August 2017, a van was driven into pedestrians on La Rambla in the city, killing 14 and injuring at least 100, one of whom later died. Other attacks took place elsewhere in Catalonia. The Prime Minister of Spain, Mariano Rajoy, called the attack in Barcelona a jihadist attack. Amaq News Agency attributed indirect responsibility for the attack to the Islamic State of Iraq and the Levant (ISIL).[59][60][61]
50
+
51
+ Barcelona is located on the northeast coast of the Iberian Peninsula, facing the Mediterranean Sea, on a plain approximately 5 km (3 mi) wide limited by the mountain range of Collserola, the Llobregat river to the southwest and the Besòs river to the north.[62] This plain covers an area of 170 km2 (66 sq mi),[62] of which 101 km2 (39.0 sq mi)[63] are occupied by the city itself. It is 120 kilometres (75 miles) south of the Pyrenees and the Catalan border with France.
52
+
53
+ Tibidabo, 512 m (1,680 ft) high, offers striking views over the city[64] and is topped by the 288.4 m (946.2 ft) Torre de Collserola, a telecommunications tower that is visible from most of the city. Barcelona is peppered with small hills, most of them urbanised, that gave their name to the neighbourhoods built upon them, such as Carmel (267 metres or 876 feet), Putget (181 metres or 594 feet) and Rovira (261 metres or 856 feet). The escarpment of Montjuïc (173 metres or 568 feet), situated to the southeast, overlooks the harbour and is topped by Montjuïc Castle, a fortress built in the 17–18th centuries to control the city as a replacement for the Ciutadella. Today, the fortress is a museum and Montjuïc is home to several sporting and cultural venues, as well as Barcelona's biggest park and gardens.
54
+
55
+ The city borders on the municipalities of Santa Coloma de Gramenet and Sant Adrià de Besòs to the north; the Mediterranean Sea to the east; El Prat de Llobregat and L'Hospitalet de Llobregat to the south; and Sant Feliu de Llobregat, Sant Just Desvern, Esplugues de Llobregat, Sant Cugat del Vallès, and Montcada i Reixac to the west. The municipality includes two small sparsely-inhabited exclaves to the north-west.
56
+
57
+ According to the Köppen climate classification, Barcelona has a maritime Mediterranean climate (Csa), with mild winters and warm to hot summers,[65] while the rainiest seasons are autumn and spring. The rainfall pattern is characterised by a short (3 months) dry season in summer, as well as less winter rainfall than in a typical Mediterranean climate. This subtype, labelled as "Portuguese" by the French geographer George Viers after the climate classification of Emmanuel de Martonne[66] and found in the NW Mediterranean area (e.g. Marseille), can be seen as transitional to the humid subtropical climate (Cfa) found in inland areas such as the Po Valley (e.g. Milan), whose rainfall is greater in summer, a feature of continental climates.
58
+
59
+ Its average annual temperature is 21.2 °C (70.2 °F) during the day and 15.1 °C (59.2 °F) at night. The average annual temperature of the sea is about 20 °C (68 °F). In the coldest month, January, the temperature typically ranges from 12 to 18 °C (54 to 64 °F) during the day, 6 to 12 °C (43 to 54 °F) at night and the average sea temperature is 13 °C (55 °F).[67] In the warmest month, August, the typical temperature ranges from 27 to 31 °C (81 to 88 °F) during the day, about 23 °C (73 °F) at night and the average sea temperature is 26 °C (79 °F).[67] Generally, the summer or "holiday" season lasts about six months, from May to October. Two months – April and November – are transitional; sometimes the temperature exceeds 20 °C (68 °F), with an average temperature of 18–19 °C (64–66 °F) during the day and 11–13 °C (52–55 °F) at night. December, January and February are the coldest months, with average temperatures around 15 °C (59 °F) during the day and 9 °C (48 °F) at night. Large fluctuations in temperature are rare, particularly in the summer months. Because of the proximity to the warm sea plus the urban heat island, frosts are very rare in the city of Barcelona. Snow is also very infrequent.
60
+
61
+ Barcelona averages 78 rainy days per year (≥ 1 mm), and annual average relative humidity is 72%, ranging from 69% in July to 75% in October. Rainfall totals are highest in late summer and autumn (September–November) and lowest in early and mid-summer (June–August), with a secondary winter minimum (February–March). Sunshine duration is 2,524 hours per year, from 138 (average 4.5 hours of sunshine a day) in December to 310 (average 10 hours of sunshine a day) in July.[68]
62
+
63
+ According to Barcelona's City Council, Barcelona's population as of 1 January 2016[update] was 1,608,746 people,[70] on a land area of 101.4 km2 (39 sq mi). It is the main component of an administrative area of Greater Barcelona, with a population of 3,218,071 in an area of 636 square kilometres (246 square miles) (density 5,060 inhabitants/km2). The population of the urban area was 4,840,000.[3] It is the central nucleus of the Barcelona metropolitan area, which relies on a population of 5,474,482.[4]
64
+
65
+ Spanish is the most spoken language in Barcelona (according to the linguistic census held by the Government of Catalonia in 2013) and it is understood almost universally. Catalan is also very commonly spoken in the city: it is understood by 95% of the population, while 72.3% can speak it, 79% can read it, and 53% can write it.[71] Knowledge of Catalan has increased significantly in recent decades thanks to a language immersion educational system.
66
+
67
+ In 1900, Barcelona had a population of 533,000 people,[62] which grew steadily but slowly until 1950, when it started absorbing a high number of people from other less-industrialised parts of Spain. Barcelona's population peaked in 1979 with 1,906,998 people, and fell throughout the 1980s and 1990s as more people sought a higher quality of life in outlying cities in the Barcelona Metropolitan Area. After bottoming out in 2000 with 1,496,266 people, the city's population began to rise again as younger people started to return, causing a great increase in housing prices.[72]
68
+
69
+ Note: This text is entirely based on the municipal statistical database provided by the city council.
70
+
71
+ Barcelona is one of the most densely populated cities in Europe. For the year 2008, the city council calculated the population at 1,621,090 living in the 102.2 km2 municipality, giving the city an average population density of 15,926 inhabitants per square kilometre, with Eixample being the most populated district.
72
+
73
+ In the case of Barcelona, though, the land distribution is extremely uneven. Half of the municipality, or 50.2 km2, all of it located on the municipal edge, is made up of the ten least densely populated neighbourhoods containing less than 10% of the city's population, plus the uninhabited Zona Franca industrial area and Montjuïc forest park. This leaves the remaining 90%, or slightly below 1.5 million inhabitants, living on the remaining 52 square kilometres (20 square miles) at an average density close to 28,500 inhabitants per square kilometre.
74
+
75
+ Of the 73 neighbourhoods in the city, 45 had a population density above 20,000 inhabitants per square kilometre with a combined population of 1,313,424 inhabitants living on 38.6 km2 at an average density of 33,987 inhabitants per square km. The 30 most densely populated neighbourhoods accounted for 57.5% of the city population occupying only 22.7% of the municipality, or in other words, 936,406 people living at an average density of 40,322 inhabitants per square kilometre. The city's highest density is found at and around the neighbourhood of la Sagrada Família where four of the city's most densely populated neighbourhoods are located side by side, all with a population density above 50,000 inhabitants per square kilometre.
76
+
77
+ In 1900, almost a third (28.9 percent) of the population were children (aged younger than 14 years); in 2017 this age group constituted only 12.7 percent. In 2017, those aged between 15 and 24 years made up 9 percent, and those aged between 25 and 44 years 30.6 percent. In contrast, in 2017 those aged between 45 and 64 years formed 56.9% of all Barcelonans; and while in 1900 those aged 65 and older were just 6.5 percent, by 2017 they had reached 21.5 percent.[73][74]
78
+
79
+ In 2016, about 59% of the inhabitants of the city were born in Catalonia and 18.5% in the rest of the country. In addition, 22.5% of the population was born outside of Spain, a proportion which has more than doubled since 2001 and more than quintupled since 1996, when it was 8.6% and 3.9% respectively.[70]
80
+
81
+ The most important region of origin of migrants is Europe, with many coming from Italy (26,676) or France (13,506).[70] Moreover, many migrants come from Latin American nations such as Bolivia, Ecuador or Colombia. Since the 1990s, and similar to other migrants, many Latin Americans have settled in northern parts of the city.[76]
82
+
83
+ There is a relatively large Pakistani community in Barcelona, with up to twenty thousand nationals. The community consists of significantly more men than women. Many of the Pakistanis live in Ciutat Vella. The first Pakistani migrants arrived in the 1970s, with numbers increasing in the 1990s.[77]
84
+
85
+ Other significant migrant groups come from Asia as from China and the Philippines.[70] There is a Japanese community clustered in Bonanova, Les Tres Torres, Pedralbes, and other northern neighbourhoods, and a Japanese international school serves that community.[78]
86
+
87
+ Most of the inhabitants state they are Roman Catholic (208 churches).[79] In a 2011 survey conducted by InfoCatólica, 49.5% of Barcelona residents of all ages identified themselves as Catholic.[80] This was the first time that more than half of respondents did not identify themselves as Catholic Christians.[80] The numbers reflect a broader trend in Spain whereby the numbers of self-identified Catholics have declined.[80] In 2019, a survey by Centro de Investigaciones Sociológicas showed that 53.2% of residents in Barcelona identified themselves as Catholic (9.9% practising Catholics, 43.3% non-practising Catholics).[81]
88
+
89
+ The province has the largest Muslim community in Spain: 322,698 people in Barcelona province are of Muslim religion.[82] A considerable number of Muslims live in Barcelona due to immigration (169 places of worship, mostly attended by Moroccans in Spain).[79] In 2014, 322,698 out of 5.5 million people in the province of Barcelona identified themselves as Muslim,[82] which makes 5.6% of the total population.
90
+
91
+ The city also has the largest Jewish community in Spain, with an estimated 3,500 Jews living in the city.[83] There are also a number of other groups, including Evangelical (71 locations, mostly professed by Roma), Jehovah's Witnesses (21 Kingdom Halls), Buddhists (13 locations),[84] and Eastern Orthodox.[85]
92
+
93
+ The Barcelona metropolitan area comprises over 66% of the people of Catalonia, one of the richer regions in Europe and the fourth richest region per capita in Spain, with a GDP per capita amounting to €28,400 (16% more than the EU average). The greater Barcelona metropolitan area had a GDP amounting to $177 billion (equivalent to $34,821 in per capita terms, 44% more than the EU average), making it the 4th most economically powerful city by gross GDP in the European Union, and 35th in the world in 2009.[86] Barcelona city had a very high GDP of €80,894 per head in 2004, according to Eurostat.[87] Furthermore, Barcelona was Europe's fourth best business city and fastest improving European city, with growth improved by 17% per year as of 2009[update].[88]
94
+
95
+ Barcelona was the 24th most "livable city" in the world in 2015 according to lifestyle magazine Monocle.[89] Similarly, according to Innovation Analysts 2thinknow, Barcelona occupies 13th place in the world on Innovation Cities™ Global Index.[90]
96
+
97
+ Barcelona has a long-standing mercantile tradition. Less well known is that the city industrialised early, taking off in 1833, when Catalonia's already sophisticated textile industry began to use steam power. It became the first and most important industrial city in the Mediterranean basin. Since then, manufacturing has played a large role in its history.
98
+
99
+ Borsa de Barcelona (Barcelona Stock Exchange) is the main stock exchange in the northeastern part of the Iberian Peninsula.
100
+
101
+ Barcelona was recognised as the Southern European City of the Future for 2014/15, based on its economic potential,[91] by FDi Magazine in their bi-annual rankings.[92]
102
+
103
+ Drawing upon its tradition of creative art and craftsmanship, Barcelona is known for its award-winning industrial design. It also has several congress halls, notably Fira de Barcelona – the second largest trade fair and exhibition centre in Europe – which hosts a quickly growing number of national and international events each year (at present more than 50). The total exhibition floor space of Fira de Barcelona venues is 405,000 m2 (41 ha), not counting Gran Via centre on the Plaza de Europa. However, the Eurozone crisis and deep cuts in business travel affected the Council's positioning of the city as a convention centre.
104
+
105
+ An important business centre, the World Trade Center Barcelona, is located in Barcelona's Port Vell harbour.
106
+
107
+ The city is known for hosting world-class conferences and expositions, including the 1888 Exposición Universal de Barcelona, the 1929 Barcelona International Exposition (Expo 1929), the 2004 Universal Forum of Cultures and the 2004 World Urban Forum.[93]
108
+
109
+ Barcelona was the 20th-most-visited city in the world by international visitors and the fifth most visited city in Europe after London, Paris, Istanbul and Rome, with 5.5 million international visitors in 2011.[94] By 2015, both Prague and Milan had more international visitors.[95] With its Rambles, Barcelona is ranked the most popular city to visit in Spain.[96]
110
+
111
+ Barcelona is an internationally renowned tourist destination, with numerous recreational areas, one of the best beaches in the world,[97][98] a mild and warm climate, historical monuments including eight UNESCO World Heritage Sites, 519 hotels as of March 2016[update][99] including 35 five-star hotels,[100] and a developed tourist infrastructure.
112
+
113
+ Due to its large influx of tourists each year, Barcelona, like many other tourism capitals, has to deal with pickpockets, with wallets and passports being commonly stolen items. For this reason, most travel guides recommend that visitors take precautions to ensure their possessions' safety, especially inside the metro premises. Despite its moderate pickpocket rate, Barcelona is considered one of the safest cities in terms of health security and personal safety,[101] mainly because of a sophisticated policing strategy that has dropped crime by 32% in just over three years and has led it to be considered the 15th safest city in the world by Business Insider.[102]
114
+
115
+ While tourism produces economic benefits, according to one report[citation needed], the city is "overrun [by] hordes of tourists". In early 2017, over 150,000 protesters warned that tourism is destabilizing the city. Slogans included "Tourists go home", "Barcelona is not for sale" and "We will not be driven out". By then, the number of visitors had increased from 1.7 million in 1990 to 32 million in a city with a population of 1.62 million, increasing the cost of rental housing for residents and overcrowding public spaces. While tourists spent an estimated €30 billion in 2017, they are viewed by some as a threat to Barcelona's identity.[103]
116
+
117
+ A May 2017 article in England's The Telegraph newspaper included Barcelona among the Eight Places That Hate Tourists the Most, and included a comment from Mayor Ada Colau: "We don't want the city to become a cheap souvenir shop [like Venice]". To moderate the problem, the city has stopped issuing licenses for new hotels and holiday apartments; it also fined Airbnb €30,000. The mayor has suggested an additional tourist tax and setting a limit on the number of visitors.[104] One industry insider, Justin Francis, founder of the Responsible Travel agency, stated that steps must be taken to limit the number of visitors that are causing an "overtourism crisis" in several major European cities. "Ultimately, residents must be prioritised over tourists for housing, infrastructure and access to services because they have a long-term stake in the city's success," he said.[105] "Managing tourism more responsibly can help", Francis later told a journalist, "but some destinations may just have too many tourists, and Barcelona may be a case of that".[106]
118
+
119
+ Industry generates 21% of the total gross domestic product (GDP) of the region,[107] with the energy, chemical and metallurgy industries accounting for 47% of industrial production.[108] The Barcelona metropolitan area had 67% of the total number of industrial establishments in Catalonia as of 1997.[109]
120
+
121
+ Barcelona has long been an important European automobile manufacturing centre. Formerly there were automobile factories of AFA, Abadal, Actividades Industriales, Alvarez, America, Artés de Arcos, Balandrás, Baradat-Esteve, Biscúter, J. Castro, Clúa, David, Delfín, Díaz y Grilló, Ebro trucks, Edis [ca], Elizalde, Automóviles España, Eucort, Fenix, Fábrica Hispano, Auto Academia Garriga, Fábrica Española de Automóviles Hebe, Hispano-Suiza, Huracán Motors, Talleres Hereter, Junior SL, Kapi, La Cuadra, M.A., Automóviles Matas, Motores y Motos, Nacional Custals, National Pescara, Nacional RG, Nacional Rubi, Nacional Sitjes, Automóviles Nike, Orix, Otro Ford, Partia, Pegaso, PTV, Ricart, Ricart-España, Industrias Salvador, Siata Española, Stevenson, Romagosa y Compañía, Garaje Storm, Talleres Hereter, Trimak, Automóviles Victoria, Manufacturas Mecánicas Aleu.[110][111]
122
+
123
+ Today, the headquarters and a large factory of SEAT (the largest Spanish automobile manufacturer) are in one of its suburbs. There is also a Nissan factory in the logistics and industrial area of the city.[112] The factory of Derbi, a large manufacturer of motorcycles, scooters and mopeds, also lies near the city.[113]
124
+
125
+ As in other modern cities, the manufacturing sector has long since been overtaken by the services sector, though it remains very important. The region's leading industries are textiles, chemicals, pharmaceuticals, motor vehicles, electronics, printing, logistics, publishing, telecommunications and culture (including the notable Mobile World Congress), and information technology services.
126
+
127
+ The traditional importance of textiles is reflected in Barcelona's drive to become a major fashion centre. There have been many attempts to launch Barcelona as a fashion capital, notably Gaudi Home.[citation needed]
128
+
129
+ Beginning in the summer of 2000, the city hosted the Bread & Butter urban fashion fair until 2009, when its organisers announced that it would be returning to Berlin. This was a hard blow for the city as the fair brought €100 m to the city in just three days.[114][115]
130
+
131
+ Since 2009, The Brandery, an urban fashion show, was held in Barcelona twice a year until 2012. According to the Global Language Monitor's annual ranking of the world's top fifty fashion capitals, Barcelona was named the seventh most important fashion capital of the world in 2015, right after Milan and before Berlin.[116]
132
+
133
+ As the capital of the autonomous community of Catalonia, Barcelona is the seat of the Catalan government, known as the Generalitat de Catalunya; of particular note are the executive branch, the parliament, and the High Court of Justice of Catalonia. The city is also the capital of the Province of Barcelona and the Barcelonès comarca (district).
134
+
135
+ Barcelona is governed by a city council formed by 41 city councillors, elected for a four-year term by universal suffrage. As one of the two biggest cities in Spain, Barcelona is subject to a special law articulated through the Carta Municipal (Municipal Law). A first version of this law was passed in 1960 and amended later, but the current version was approved in March 2006.[117] According to this law, Barcelona's city council is organised at two levels: a political one, with elected city councillors, and an executive one, which administers the programmes and executes the decisions taken at the political level.[118] This law also gives the local government a special relationship with the central government and gives the mayor wider prerogatives by means of municipal executive commissions.[119] It expands the powers of the city council in areas like telecommunications, city traffic, road safety and public safety. It also gives the city's treasury a special economic regime and grants the council an effective veto, since certain matters decided by the central government require a favourable report from the council.[117]
136
+
137
+ The Comissió de Govern (Government Commission) is the executive branch, formed by 24 councillors, led by the Mayor, with 5 lieutenant-mayors and 17 city councillors, each in charge of an area of government, and 5 non-elected councillors.[120] The plenary, formed by the 41 city councillors, has advisory, planning, regulatory and fiscal executive functions.[121] The six Comissions del Consell Municipal (city council commissions) have executive and controlling functions in the field of their jurisdiction; each is composed of a number of councillors proportional to the number of councillors each political party has in the plenary.[122] The city council has jurisdiction in the fields of city planning, transportation, municipal taxes, public highway security through the Guàrdia Urbana (the municipal police), city maintenance, gardens, parks and environment, facilities (such as schools, nurseries, sports centres and libraries), culture, sports, youth and social welfare. Some of these competencies are not exclusive, but shared with the Generalitat de Catalunya or the central Spanish government. In some fields with shared responsibility (such as public health, education or social services), there is a shared agency or consortium between the city and the Generalitat to plan and manage services.[123]
138
+
139
+ The executive branch is led by a Chief Municipal Executive Officer who answers to the Mayor. It is made up of departments which are legally part of the city council and of separate legal entities of two types: autonomous public departments and public enterprises.[124]
140
+
141
+ The seat of the city council is on the Plaça de Sant Jaume, opposite the seat of the Generalitat de Catalunya. From the return of Spanish democracy, Barcelona was governed by the PSC, first with an absolute majority and later in coalition with ERC and ICV. After the May 2007 election, the ERC did not renew the coalition agreement and the PSC governed in a minority coalition with ICV as the junior partner.
142
+
143
+ After 32 years, on 22 May 2011, CiU won a plurality of seats at the municipal election, taking 15 seats to the PSC's 11. The PP held 8 seats, ICV 5 and ERC 2.
144
+
145
+ Since 1987, the city has been divided into 10 administrative districts (districtes in Catalan, distritos in Spanish): Ciutat Vella, Eixample, Sants-Montjuïc, Les Corts, Sarrià-Sant Gervasi, Gràcia, Horta-Guinardó, Nou Barris, Sant Andreu and Sant Martí.
146
+
147
+ The districts are based mostly on historical divisions, and several are former towns annexed by the city of Barcelona in the 19th and 20th centuries that still maintain their own distinct character. Each district has its own council led by a city councillor. The composition of each district council depends on the number of votes each political party received in that district, so a district can be led by a councillor from a different party than the executive council.
148
+
149
+ Barcelona has a well-developed higher education system of public universities. Most prominent among these is the University of Barcelona (established in 1450), a world-renowned research and teaching institution with campuses around the city. Barcelona is also home to the Polytechnic University of Catalonia and the newer Pompeu Fabra University. In the private sector, the EADA Business School, founded in 1957, was the first Barcelona institution to run management training programmes for the business community; other private institutions include the IESE Business School and the largest private educational institution, Ramon Llull University, which encompasses schools and institutes such as the ESADE Business School. The Autonomous University of Barcelona, another public university, is located in Bellaterra, a town in the Metropolitan Area. Toulouse Business School and the Open University of Catalonia (a private Internet-centred open university) are also based in Barcelona.
150
+
151
+ The city has a network of public schools, from nurseries to high schools, under the responsibility of a consortium led by city council (though the curriculum is the responsibility of the Generalitat de Catalunya). There are also many private schools, some of them Roman Catholic. Most such schools receive a public subsidy on a per-student basis, are subject to inspection by the public authorities, and are required to follow the same curricular guidelines as public schools, though they charge tuition. Known as escoles concertades, they are distinct from schools whose funding is entirely private (escoles privades).
152
+
153
+ The language of instruction at public schools and escoles concertades is Catalan, as stipulated by the 2009 Catalan Education Act. Spanish may be used as a language of instruction by teachers of Spanish literature or language, and foreign languages by teachers of those languages. An experimental partial immersion programme adopted by some schools allows for the teaching of a foreign language (English, generally) across the curriculum, though this is limited to a maximum of 30% of the school day. No public school or escola concertada in Barcelona may offer 50% or full immersion programmes in a foreign language, nor does any public school or escola concertada offer International Baccalaureate programmes.
154
+
155
+ Barcelona's cultural roots go back 2000 years. Since the arrival of democracy, the Catalan language (very much repressed during the dictatorship of Franco) has been promoted, both by recovering works from the past and by stimulating the creation of new works. Barcelona is designated as a world-class city by the Globalization and World Cities Study Group and Network.[125] It has also been part of the UNESCO Creative Cities Network as a City of Literature since 2015.[126]
156
+
157
+ Barcelona has many venues for live music and theatre, including the world-renowned Gran Teatre del Liceu opera house, the Teatre Nacional de Catalunya, the Teatre Lliure and the Palau de la Música Catalana concert hall. Barcelona also is home to the Barcelona Symphony and Catalonia National Orchestra (Orquestra Simfònica de Barcelona i Nacional de Catalunya, usually known as OBC), the largest symphonic orchestra in Catalonia. In 1999, the OBC inaugurated its new venue in the brand-new Auditorium (L'Auditori). It performs around 75 concerts per season and its current director is Eiji Oue.[127] It is home to the Barcelona Guitar Orchestra, directed by Sergi Vicente.
158
+ The major thoroughfare of La Rambla is home to mime artists and street performers.
159
+ Yearly, two major pop music festivals take place in the city, the Sónar Festival and the Primavera Sound Festival. The city also has a thriving alternative music scene, with groups such as The Pinker Tones receiving international attention.[128]
160
+
161
+ El Periódico de Catalunya, La Vanguardia and Ara are Barcelona's three major daily newspapers (the first two with Catalan and Spanish editions, Ara only in Catalan) while Sport and El Mundo Deportivo (both in Spanish) are the city's two major sports daily newspapers, published by the same companies. The city is also served by smaller publications such as El Punt Avui (in Catalan), by nationwide newspapers with special Barcelona editions like El País (in Spanish, with an online version in Catalan) and El Mundo (in Spanish), and by several free newspapers like 20 minutos and Què (all bilingual).
162
+
163
+ VilaWeb, Barcelona's main online newspaper, is also the oldest in Europe (with Catalan and English editions).
164
+
165
+ Major FM stations include Catalunya Ràdio, RAC 1, RAC 105 and Cadena SER. Barcelona also has a local TV station, BTV, owned by the city council. The headquarters of Televisió de Catalunya, Catalonia's public network, are located in Sant Joan Despí, in Barcelona's metropolitan area.
166
+
167
+ Barcelona has a long sporting tradition and hosted the highly successful 1992 Summer Olympics as well as several matches during the 1982 FIFA World Cup (at the two stadiums). It has hosted about 30 sports events of international significance.[citation needed]
168
+
169
+ FC Barcelona is a sports club best known worldwide for its football team, one of the largest and the second richest in the world.[129] It has won 74 national trophies (finishing 46 times as runners-up) and 17 continental prizes (having been runners-up 11 times), including five UEFA Champions League trophies out of eight finals and three FIFA Club World Cup wins out of four finals. It is the only male football team in the world to have won six trophies in a calendar year (in 2009). FC Barcelona also has professional teams in other sports, such as FC Barcelona Regal (basketball), FC Barcelona Handbol (handball), FC Barcelona Hoquei (roller hockey), FC Barcelona Ice Hockey (ice hockey), FC Barcelona Futsal (futsal) and FC Barcelona Rugby (rugby union), all at one point winners of the highest national and/or European competitions. The club's museum is the second most visited in Catalonia. The matches against cross-town rivals RCD Espanyol are of particular interest, but there are other Barcelonan football clubs in lower categories, such as CE Europa and UE Sant Andreu. FC Barcelona's basketball team has a noted rivalry in the Liga ACB with nearby Joventut Badalona.
170
+
171
+ Barcelona has three UEFA elite stadiums: FC Barcelona's Camp Nou, the largest stadium in Europe with a capacity of 99,354; the publicly owned Estadi Olímpic Lluís Companys, with a capacity of 55,926, used for the 1992 Olympics; and Estadi Cornellà-El Prat, with a capacity of 40,500. Furthermore, the city has several smaller stadiums such as the Mini Estadi (also owned by FC Barcelona) with a capacity of 15,000, Camp Municipal Narcís Sala with a capacity of 6,563 and Nou Sardenya with a capacity of 7,000. The city has a further three multifunctional venues for sports and concerts: the Palau Sant Jordi with a capacity of 12,000 to 24,000 (depending on use), the Palau Blaugrana with a capacity of 7,500, and the Palau dels Esports de Barcelona with a capacity of 3,500.
172
+
173
+ Barcelona was the host city for the 2013 World Aquatics Championships, which were held at the Palau Sant Jordi.[130]
174
+
175
+ Several road running competitions are organised year-round in Barcelona: the Barcelona Marathon every March (with over 10,000 participants in 2010), the Cursa de Bombers in April, the Cursa de El Corte Inglés in May (with about 60,000 participants each year), the Cursa de la Mercè, the Cursa Jean Bouin, the Milla Sagrada Família and the San Silvestre. There is also the Ultratrail Collserola, which covers 85 kilometres (53 miles) through the Collserola forest. The Open Seat Godó, a 50-year-old ATP World Tour 500 Series tennis tournament, is held annually in the facilities of the Real Club de Tenis Barcelona. Each Christmas, a swimming race across the port is organised.[citation needed] Near Barcelona, in Montmeló, the 107,000-capacity Circuit de Barcelona-Catalunya racetrack hosts the Formula One Spanish Grand Prix, the Catalan motorcycle Grand Prix, the Spanish GT Championship and races in the GP2 Series. Skateboarding and cycling are also very popular in Barcelona; in and around the city there are dozens of kilometres of bicycle paths.[citation needed]
176
+
177
+ Barcelona is also home to numerous social centres and illegal squats that effectively form a shadow society mainly made up of the unemployed, immigrants, dropouts, anarchists, anti-authoritarians and autonomists.[132] Peter Gelderloos estimates that there are around 200 squatted buildings and 40 social centres across the city, with thousands of inhabitants, making it one of the largest squatter movements in the world. He notes that they pirate electricity, internet and water, allowing them to live on less than one euro a day. He argues that these squats embrace an anarcho-communist and anti-work philosophy, often freely fixing up new houses, cleaning, patching roofs, installing windows, toilets, showers, lights and kitchens. In the wake of austerity, the squats have provided a number of social services to the surrounding residents, including bicycle repair workshops, carpentry workshops, self-defence classes, free libraries, community gardens, free meals, computer labs, language classes, theatre groups, free medical care and legal support services.[133] The squats help elderly residents avoid eviction and organise various protests throughout Barcelona. Notable squats include Can Vies and Can Masdeu. Police have repeatedly tried to shut down the squatter movement with waves of evictions and raids, but the movement is still going strong.
178
+
179
+ Barcelona is served by Barcelona-El Prat Airport, about 17 km (11 mi) from the centre of Barcelona. It is the second-largest airport in Spain and the largest on the Mediterranean coast; it handled more than 50.17 million passengers in 2018, continuing an annual upward trend.[134] It is a main hub for Vueling Airlines and Ryanair, and also a focus for Iberia and Air Europa. The airport mainly serves domestic and European destinations, although some airlines offer destinations in Latin America, Asia and the United States. The airport is connected to the city by highway, metro (Airport T1 and Airport T2 stations), commuter train (Barcelona Airport railway station) and scheduled bus service. A new terminal (T1) was built and entered service on 17 June 2009.
180
+
181
+ Some low-cost airlines also use Girona-Costa Brava Airport, about 90 km (56 mi) to the north of the city, Reus Airport, 77 km (48 mi) to the south, or Lleida-Alguaire Airport, about 150 km (93 mi) to the west. Sabadell Airport is a smaller airport in the nearby town of Sabadell, devoted to pilot training, aerotaxi and private flights.
182
+
183
+ The Port of Barcelona has a 2000-year-old history and great contemporary commercial importance. It is Europe's ninth largest container port, with a trade volume of 1.72 million TEUs in 2013.[135] The port is managed by the Port Authority of Barcelona. Its 10 km2 (4 sq mi) are divided into three zones: Port Vell (the old port), the commercial port and the logistics port (Barcelona Free Port). The port is undergoing an enlargement that will double its size, thanks to the diversion of the mouth of the Llobregat river 2 kilometres (1 mile) to the south.[136]
184
+
185
+ Barcelona's harbour is the leading European cruise port and one of the most important Mediterranean turnaround bases. In 2013, 3.6 million cruise passengers used the services of the Port of Barcelona.[135]
186
+
187
+ The Port Vell area also houses the Maremagnum (a commercial mall), a multiplex cinema, the IMAX Port Vell and one of Europe's largest aquariums – Aquarium Barcelona, containing 8,000 fish and 11 sharks in 22 basins filled with 4 million litres of sea water. The Maremagnum, being situated within the confines of the port, is the only commercial mall in the city that can open on Sundays and public holidays.
188
+
189
+ Barcelona is a major hub for RENFE, the Spanish state railway network. The city's main Inter-city rail station is Barcelona Sants railway station, whilst Estació de França terminus serves a secondary role handling suburban, regional and medium distance services. Freight services operate to local industries and to the Port of Barcelona.
190
+
191
+ RENFE's AVE high-speed rail system, which is designed for speeds of 310 km/h (193 mph), was extended from Madrid to Barcelona in 2008 in the form of the Madrid–Barcelona high-speed rail line. A shared RENFE-SNCF high-speed rail connecting Barcelona and France (Paris, Marseilles and Toulouse, through Perpignan–Barcelona high-speed rail line) was launched in 2013. Both these lines serve Barcelona Sants terminal station.[138][139]
192
+
193
+ Barcelona is served by an extensive local public transport network that includes a metro system, a bus network, a regional railway system, trams, funiculars, rack railways, a Gondola lift and aerial cable cars. These networks and lines are run by a number of different operators but they are integrated into a coordinated fare system, administered by the Autoritat del Transport Metropolità (ATM). The system is divided into fare zones (1 to 6) and various Integrated Travel Cards are available.[140]
194
+
195
+ The Barcelona Metro network comprises twelve lines, identified by an "L" followed by the line number as well as by individual colours. The Metro largely runs underground; eight Metro lines are operated on dedicated track by the Transports Metropolitans de Barcelona (TMB), whilst four lines are operated by the Ferrocarrils de la Generalitat de Catalunya (FGC) and some of them share tracks with RENFE commuter lines.
196
+
197
+ In addition to the city Metro, several regional rail lines operated by RENFE's Rodalies de Catalunya run across the city, providing connections to outlying towns in the surrounding region.
198
+
199
+ The city's two modern tram systems, Trambaix and Trambesòs, are operated by TRAMMET.[141] A heritage tram line, the Tramvia Blau, also operates between the metro Line 7 and the Funicular del Tibidabo.[142]
200
+
201
+ Barcelona's metro and rail system is supplemented by several aerial cable cars, funiculars and rack railways that provide connections to mountain-top stations. FGC operates the Funicular de Tibidabo up the hill of Tibidabo and the Funicular de Vallvidrera (FGC), while TMB runs the Funicular de Montjuïc up Montjuïc. The city has two aerial cable cars: the Montjuïc Cable Car, which serves Montjuïc castle, and the Port Vell Aerial Tramway that runs via Torre Jaume I and Torre Sant Sebastià over the port.
202
+
203
+ Buses in Barcelona are a major form of public transport, with extensive local, interurban and night bus networks. Most local services are operated by the TMB, although some other services are operated by a number of private companies, albeit still within the ATM fare structure. A separate private bus line, known as Aerobús, links the airport with the city centre, with its own fare structure.
204
+
205
+ The Estació del Nord (Northern Station), a former railway station which was renovated for the 1992 Olympic Games, now serves as the terminus for long-distance and regional bus services.
206
+
207
+ Barcelona has a metered taxi fleet governed by the Institut Metropolità del Taxi (Metropolitan Taxi Institute), composed of more than 10,000 cars. Most of the licences are in the hands of self-employed drivers. With their black and yellow livery, Barcelona's taxis are easily spotted, and can be caught from one of many taxi ranks, hailed on street, called by telephone or via app.[143][144]
208
+
209
+ On 22 March 2007,[145] Barcelona's City Council started the Bicing service, a bicycle service conceived as a form of public transport. Once users have their user card, they can take a bicycle from any of the more than 400 stations spread around the city, use it anywhere in the urban area, and then leave it at another station.[146] The service has been a success, with 50,000 subscribed users within three months.[147]
210
+
211
+ Barcelona lies on three international routes, including European route E15 that follows the Mediterranean coast, European route E90 to Madrid and Lisbon, and European route E09 to Paris. It is also served by a comprehensive network of motorways and highways throughout the metropolitan area, including A-2, A-7/AP-7, C-16, C-17, C-31, C-32, C-33, C-60.
212
+
213
+ The city is circled by three half ring roads or bypasses, Ronda de Dalt (B-20) (on the mountain side), Ronda del Litoral (B-10) (along the coast) and Ronda del Mig (separated into two parts: Travessera de Dalt in the north and the Gran Via de Carles III), two partially covered[148] fast highways with several exits that bypass the city.
214
+
215
+ The city's main arteries include Diagonal Avenue, which crosses it diagonally, Meridiana Avenue which leads to Glòries and connects with Diagonal Avenue and Gran Via de les Corts Catalanes, which crosses the city from east to west, passing through its centre. The famous boulevard of La Rambla, whilst no longer an important vehicular route, remains an important pedestrian route.
216
+
217
+ The Barri Gòtic (Catalan for "Gothic Quarter") is the centre of the old city of Barcelona. Many of the buildings date from medieval times, some from as far back as the Roman settlement of Barcelona. Catalan modernista architecture (related to the movement known as Art Nouveau in the rest of Europe) developed between 1885 and 1950 and left an important legacy in Barcelona. Several of these buildings are World Heritage Sites. Especially remarkable is the work of architect Antoni Gaudí, which can be seen throughout the city. His best-known work is the immense but still unfinished church of the Sagrada Família, which has been under construction since 1882 and is still financed by private donations. As of 2015, completion is planned for 2026.[149]
218
+
219
+ Barcelona was also home to Mies van der Rohe's Barcelona Pavilion. Designed in 1929 as Germany's pavilion for the International Exposition, it was an iconic building that came to symbolise modern architecture as the embodiment of Mies van der Rohe's aphorisms "less is more" and "God is in the details."[150] The Barcelona Pavilion was intended as a temporary structure and was torn down in 1930, less than a year after it was constructed. However, a re-creation by Spanish architects, constructed in 1986, now stands in Barcelona.
220
+
221
+ Barcelona won the 1999 RIBA Royal Gold Medal for its architecture,[151] the first (and as of 2015, only) time that the winner has been a city rather than an individual architect.
222
+
223
+ Barcelona is the home of many points of interest declared World Heritage Sites by UNESCO:[152]
224
+
225
+ Barcelona has a great number of museums, which cover different areas and eras. The National Museum of Art of Catalonia possesses a well-known collection of Romanesque art, while the Barcelona Museum of Contemporary Art focuses on post-1945 Catalan and Spanish art. The Fundació Joan Miró, Picasso Museum, and Fundació Antoni Tàpies hold important collections of these world-renowned artists, as well as the Can Framis Museum, focused on post-1960 Catalan Art owned by Fundació Vila Casas.
226
+ Several museums cover the fields of history and archaeology, like the Barcelona City History Museum (MUHBA), the Museum of the History of Catalonia, the Archeology Museum of Catalonia, the Barcelona Maritime Museum, the Music Museum of Barcelona and the privately owned Egyptian Museum. The Erotic museum of Barcelona is among the most peculiar ones, while CosmoCaixa is a science museum that received the European Museum of the Year Award in 2006.[citation needed]
227
+
228
+ The Museum of Natural Sciences of Barcelona was founded in 1882 under the name of "Museo Martorell de Arqueología y Ciencias Naturales"[153][154] (Spanish for "Martorell Museum of Archaeology and Natural Sciences"). In 2011 the Museum of Natural Sciences became the result of a merger of five institutions: the Museum of Natural Sciences of Barcelona (the main site, at the Forum Building), the Martorell Museum (the historical seat of the Museum, open to the public from 1924 to 2010 as a geology museum), the Laboratori de Natura at the Castle of the Three Dragons (from 1920 to 2010 the Zoology Museum), the Historical Botanical Garden of Barcelona, founded in 1930, and the Botanical Garden of Barcelona, founded in 1999. The two gardens are also part of the Botanical Institute of Barcelona.
229
+
230
+ The FC Barcelona Museum has been the most visited museum in the city of Barcelona, with 1,506,022 visitors in 2013.[citation needed]
231
+
232
+ Barcelona contains sixty-eight municipal parks, twelve of which are historic, five of which are thematic (botanical), forty-five of which are urban, and six of which are forest.[155] They range from vest-pocket parks to large recreation areas. The urban parks alone cover 10% of the city (549.7 ha or 1,358.3 acres).[63] The total park surface grows by about 10 ha (25 acres) per year,[156] with a proportion of 18.1 square metres (195 sq ft) of park area per inhabitant.[157]
233
+
234
+ Of Barcelona's parks, Montjuïc is the largest, with 203 ha located on the mountain of the same name.[63] It is followed by Parc de la Ciutadella (which occupies the site of the old military citadel and houses the Parliament building, the Barcelona Zoo and several museums; 31 ha or 76.6 acres including the zoo), the Guinardó Park (19 ha or 47.0 acres), Park Güell (designed by Antoni Gaudí; 17.2 ha or 42.5 acres), Oreneta Castle Park (also 17.2 ha or 42.5 acres), Diagonal Mar Park (13.3 ha or 32.9 acres, inaugurated in 2002), Nou Barris Central Park (13.2 ha or 32.6 acres), Can Dragó Sports Park and Poblenou Park (both 11.9 ha or 29.4 acres), and the Labyrinth Park (9.10 ha or 22.5 acres), named after the garden maze it contains.[63] There are also several smaller parks, for example the Parc de les Aigües (2 ha or 4.9 acres). A part of the Collserola Park is also within the city limits. PortAventura World, one of the largest resorts in Europe, with 5,837,509 visitors per year, is located one hour's drive from Barcelona.[158][159] The city also contains the Tibidabo Amusement Park, a smaller amusement park on Plaza del Tibidabo, with the Muntanya Russa amusement ride.
235
+
236
+ Barcelona beach was listed as number one in a list of the top ten city beaches in the world according to National Geographic[97] and Discovery Channel.[160] Barcelona contains seven beaches, totalling 4.5 kilometres (2.8 miles) of coastline. Sant Sebastià, Barceloneta and Somorrostro beaches, each 1,100 m (3,610 ft) in length,[63] are the largest, oldest and most-frequented beaches in Barcelona.
237
+
238
+ The Olympic Harbour separates them from the other city beaches: Nova Icària, Bogatell, Mar Bella, Nova Mar Bella and Llevant. These beaches (ranging from 400 to 640 m (1,310 to 2,100 ft) in length) were opened as a result of the city restructuring to host the 1992 Summer Olympics, when a great number of industrial buildings were demolished. At present, the beach sand is artificially replenished, given that storms regularly remove large quantities of material. The 2004 Universal Forum of Cultures left the city a large concrete bathing zone on the easternmost part of the city's coastline. Most recently, Llevant became the first beach to allow dogs access during the summer season.
239
+
240
+ Santa Maria del Mar church
241
+
242
+ Santa Maria del Pi church
243
+
244
+ The Roman and Medieval walls
245
+
246
+ Can Framis Museum
247
+
248
+ Fabra Observatory
249
+
250
+ The Arc de Triomf
251
+
252
+ Castell dels Tres Dragons
253
+
254
+ Hotel Arts (left) and Torre Mapfre (right)
255
+
256
+ Torre Agbar
257
+
258
+ The Torre de Collserola on Tibidabo
259
+
260
+ Sagrat Cor on Tibidabo
261
+
262
+ The view from Gaudí's Park Güell
263
+
264
+ Port Vell Aerial Tramway
265
+
266
+ Statue of Christopher Columbus
267
+
268
+ W Barcelona (Hotel Vela)
269
+
270
+ Colón building
271
+
272
+ Magic Fountain of Montjuïc
273
+
274
+ The Venetian Towers in Plaça d'Espanya
275
+
276
+ Plaça de Catalunya
277
+
278
+ La Rambla
279
+
280
+ Gothic Quarter
281
+
282
+ Barcelona's old Customs building at Port Vell
283
+
284
+ La Illa de la Discòrdia
285
+
286
+ Barcelona is twinned with the following cities (in chronological order):[161]
287
+
288
+ Other forms of co-operation and city friendship similar to the twin city programmes exist with many cities worldwide.[161]
en/5570.html.txt ADDED
@@ -0,0 +1,111 @@
1
+
2
+
3
+ Digestion is the breakdown of large insoluble food molecules into small water-soluble food molecules so that they can be absorbed into the watery blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. In chemical digestion, enzymes break down food into the small molecules the body can use.
4
+
5
+ In the human digestive system, food enters the mouth and digestion of the food starts with the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work. After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin. As the first two chemicals may damage the stomach wall, mucus is secreted by the stomach, providing a slimy layer that acts as a shield against the damaging effects of the chemicals. While protein digestion is occurring, mechanical mixing occurs by peristalsis, which is waves of muscular contractions that move along the stomach wall. This allows the mass of food to further mix with the digestive enzymes.
6
+
7
+ After some time (typically 1–2 hours in humans, 4–6 hours in dogs, 3–4 hours in house cats),[citation needed] the resulting thick liquid is called chyme. When the pyloric sphincter valve opens, chyme enters the duodenum, where it mixes with digestive enzymes from the pancreas and bile from the liver, and then passes through the small intestine, in which digestion continues. When the chyme is fully digested, it is absorbed into the blood. 95% of nutrient absorption occurs in the small intestine. Water and minerals are reabsorbed back into the blood in the colon (large intestine), where the pH is slightly acidic, about 5.6 to 6.9. Some vitamins, such as biotin and vitamin K (K2MK7), produced by bacteria in the colon, are also absorbed into the blood there. Waste material is eliminated from the rectum during defecation.[1]
8
+
9
+ Digestive systems take many forms. There is a fundamental distinction between internal and external digestion. External digestion developed earlier in evolutionary history, and most fungi still rely on it.[2] In this process, enzymes are secreted into the environment surrounding the organism, where they break down an organic material, and some of the products diffuse back to the organism. Animals have a tube (gastrointestinal tract) in which internal digestion occurs, which is more efficient because more of the broken down products can be captured, and the internal chemical environment can be more efficiently controlled.[3]
10
+
11
+ Some organisms, including nearly all spiders, simply secrete biotoxins and digestive chemicals (e.g., enzymes) into the extracellular environment prior to ingestion of the consequent "soup". In others, once potential nutrients or food is inside the organism, digestion can take place in a vesicle or a sac-like structure, through a tube, or through several specialized organs aimed at making the absorption of nutrients more efficient.
12
+
13
+ Bacteria use several systems to obtain nutrients from other organisms in their environment.
14
+
15
+ In a channel transport system, several proteins form a contiguous channel traversing the inner and outer membranes of the bacterium. It is a simple system, which consists of only three protein subunits: the ABC protein, the membrane fusion protein (MFP), and the outer membrane protein (OMP). This secretion system transports various molecules, from ions and drugs to proteins of various sizes (20–900 kDa). The molecules secreted vary in size from the small Escherichia coli peptide colicin V (10 kDa) to the Pseudomonas fluorescens cell adhesion protein LapA of 900 kDa.[4]
16
+
17
+ In a type III secretion system, a molecular syringe is used through which a bacterium (e.g. certain types of Salmonella, Shigella, Yersinia) can inject nutrients into protist cells. One such mechanism was first discovered in Y. pestis and showed that toxins could be injected directly from the bacterial cytoplasm into the cytoplasm of its host's cells rather than simply be secreted into the extracellular medium.[5]
18
+
19
+ The conjugation machinery of some bacteria (and archaeal flagella) is capable of transporting both DNA and proteins. It was discovered in Agrobacterium tumefaciens, which uses this system to introduce the Ti plasmid and proteins into the host, which develops the crown gall (tumor).[6] The VirB complex of Agrobacterium tumefaciens is the prototypic system.[7]
20
+
21
+ The nitrogen fixing Rhizobia are an interesting case, wherein conjugative elements naturally engage in inter-kingdom conjugation. Such elements as the Agrobacterium Ti or Ri plasmids contain elements that can transfer to plant cells. Transferred genes enter the plant cell nucleus and effectively transform the plant cells into factories for the production of opines, which the bacteria use as carbon and energy sources. Infected plant cells form crown gall or root tumors. The Ti and Ri plasmids are thus endosymbionts of the bacteria, which are in turn endosymbionts (or parasites) of the infected plant.
22
+
23
+ The Ti and Ri plasmids are themselves conjugative. Ti and Ri transfer between bacteria uses an independent system (the tra, or transfer, operon) from that for inter-kingdom transfer (the vir, or virulence, operon). Such transfer creates virulent strains from previously avirulent Agrobacteria.
24
+
25
+ In addition to the use of the multiprotein complexes listed above, Gram-negative bacteria possess another method for release of material: the formation of outer membrane vesicles.[8][9] Portions of the outer membrane pinch off, forming spherical structures made of a lipid bilayer enclosing periplasmic materials. Vesicles from a number of bacterial species have been found to contain virulence factors, some have immunomodulatory effects, and some can directly adhere to and intoxicate host cells. While release of vesicles has been demonstrated as a general response to stress conditions, the process of loading cargo proteins seems to be selective.[10]
26
+
27
+ The gastrovascular cavity functions as a stomach in both digestion and the distribution of nutrients to all parts of the body. Extracellular digestion takes place within this central cavity, which is lined with the gastrodermis, the internal layer of epithelium. This cavity has only one opening to the outside that functions as both a mouth and an anus: waste and undigested matter is excreted through the mouth/anus, which can be described as an incomplete gut.
28
+
29
+ A plant such as the Venus flytrap, which can make its own food through photosynthesis, does not eat and digest its prey for the traditional objectives of harvesting energy and carbon, but mines prey primarily for essential nutrients (nitrogen and phosphorus in particular) that are in short supply in its boggy, acidic habitat.[11]
30
+
31
+ A phagosome is a vacuole formed around a particle absorbed by phagocytosis. The vacuole is formed by the fusion of the cell membrane around the particle. A phagosome is a cellular compartment in which pathogenic microorganisms can be killed and digested. Phagosomes fuse with lysosomes in their maturation process, forming phagolysosomes. In humans, Entamoeba histolytica can phagocytose red blood cells.[12]
32
+
33
+ To aid in the digestion of their food, animals have evolved organs such as beaks, tongues, teeth, crops and gizzards, among others.
34
+
35
+ Birds have bony beaks that are specialised according to the bird's ecological niche. For example, macaws primarily eat seeds, nuts, and fruit, using their impressive beaks to open even the toughest seed. First they scratch a thin line with the sharp point of the beak, then they shear the seed open with the sides of the beak.
36
+
37
+ The mouth of the squid is equipped with a sharp horny beak mainly made of cross-linked proteins. It is used to kill and tear prey into manageable pieces. The beak is very robust, but does not contain any minerals, unlike the teeth and jaws of many other organisms, including marine species.[13] The beak is the only indigestible part of the squid.
38
+
39
+ The tongue is skeletal muscle on the floor of the mouth of most vertebrates, that manipulates food for chewing (mastication) and swallowing (deglutition). It is sensitive and kept moist by saliva. The underside of the tongue is covered with a smooth mucous membrane. The tongue also has a touch sense for locating and positioning food particles that require further chewing. The tongue is utilized to roll food particles into a bolus before being transported down the esophagus through peristalsis.
40
+
41
+ The sublingual region underneath the front of the tongue is a location where the oral mucosa is very thin, and underlain by a plexus of veins. This is an ideal location for introducing certain medications to the body. The sublingual route takes advantage of the highly vascular quality of the oral cavity, and allows for the speedy application of medication into the cardiovascular system, bypassing the gastrointestinal tract.
42
+
43
+ Teeth (singular tooth) are small whitish structures found in the jaws (or mouths) of many vertebrates that are used to tear, scrape and chew food. Teeth are not made of bone, but rather of tissues of varying density and hardness, such as enamel, dentine and cementum. Human teeth have a blood and nerve supply which enables proprioception. This is the ability of sensation when chewing; for example, if we were to bite into something too hard for our teeth, such as a chipped plate mixed in food, our teeth send a message to our brain and we realise that it cannot be chewed, so we stop trying.
44
+
45
+ The shapes, sizes and numbers of types of animals' teeth are related to their diets. For example, herbivores have a number of molars which are used to grind plant matter, which is difficult to digest. Carnivores have canine teeth which are used to kill and tear meat.
46
+
47
+ A crop, or croup, is a thin-walled expanded portion of the alimentary tract used for the storage of food prior to digestion. In some birds it is an expanded, muscular pouch near the gullet or throat. In adult doves and pigeons, the crop can produce crop milk to feed newly hatched birds.[14]
48
+
49
+ Certain insects may have a crop or enlarged esophagus.
50
+
51
+ Herbivores have evolved cecums (or an abomasum in the case of ruminants). Ruminants have a fore-stomach with four chambers. These are the rumen, reticulum, omasum, and abomasum. In the first two chambers, the rumen and the reticulum, the food is mixed with saliva and separates into layers of solid and liquid material. Solids clump together to form the cud (or bolus). The cud is then regurgitated, chewed slowly to completely mix it with saliva and to break down the particle size.
52
+
53
+ Fibre, especially cellulose and hemi-cellulose, is primarily broken down into the volatile fatty acids acetic acid, propionic acid and butyric acid in these chambers (the reticulo-rumen) by microbes (bacteria, protozoa, and fungi). In the omasum, water and many of the inorganic mineral elements are absorbed into the blood stream.
54
+
55
+ The abomasum is the fourth and final stomach compartment in ruminants. It is a close equivalent of a monogastric stomach (e.g., those in humans or pigs), and digesta is processed here in much the same way. It serves primarily as a site for acid hydrolysis of microbial and dietary protein, preparing these protein sources for further digestion and absorption in the small intestine. Digesta is finally moved into the small intestine, where the digestion and absorption of nutrients occurs. Microbes produced in the reticulo-rumen are also digested in the small intestine.
56
+
57
+ Regurgitation has been mentioned above under abomasum and crop, referring to crop milk, a secretion from the lining of the crop of pigeons and doves with which the parents feed their young by regurgitation.[15]
58
+
59
+ Many sharks have the ability to turn their stomachs inside out and evert it out of their mouths in order to get rid of unwanted contents (perhaps developed as a way to reduce exposure to toxins).
60
+
61
+ Other animals, such as rabbits and rodents, practise coprophagia behaviours – eating specialised faeces in order to re-digest food, especially in the case of roughage. Capybara, rabbits, hamsters and other related species do not have a complex digestive system as do, for example, ruminants. Instead they extract more nutrition from grass by giving their food a second pass through the gut. Soft faecal pellets of partially digested food are excreted and generally consumed immediately. They also produce normal droppings, which are not eaten.
62
+
63
+ Young elephants, pandas, koalas, and hippos eat the faeces of their mother, probably to obtain the bacteria required to properly digest vegetation. When they are born, their intestines do not contain these bacteria (they are completely sterile). Without them, they would be unable to get any nutritional value from many plant components.
64
+
65
+ An earthworm's digestive system consists of a mouth, pharynx, esophagus, crop, gizzard, and intestine. The mouth is surrounded by strong lips, which act like a hand to grab pieces of dead grass, leaves, and weeds, with bits of soil to help chew. The lips break the food down into smaller pieces. In the pharynx, the food is lubricated by mucus secretions for easier passage. The esophagus adds calcium carbonate to neutralize the acids formed by food matter decay. Temporary storage occurs in the crop, where food and calcium carbonate are mixed. The powerful muscles of the gizzard churn and mix the mass of food and dirt. When the churning is complete, the glands in the walls of the gizzard add enzymes to the thick paste, which helps chemically break down the organic matter. By peristalsis, the mixture is sent to the intestine, where friendly bacteria continue chemical breakdown. This releases carbohydrates, protein, fat, and various vitamins and minerals for absorption into the body.
66
+
67
+ In most vertebrates, digestion is a multistage process in the digestive system, starting from ingestion of raw materials, most often other organisms. Ingestion usually involves some type of mechanical and chemical processing. Digestion is separated into four steps: ingestion, the mechanical and chemical breakdown of food, the absorption of nutrients, and the elimination of indigestible material (egestion).
68
+
69
+ Underlying the process is muscle movement throughout the system through swallowing and peristalsis. Each step in digestion requires energy, and thus imposes an "overhead charge" on the energy made available from absorbed substances. Differences in that overhead cost are important influences on lifestyle, behavior, and even physical structures. Examples may be seen in humans, who differ considerably from other hominids (lack of hair, smaller jaws and musculature, different dentition, length of intestines, cooking, etc.).
70
+
71
+ The major part of digestion takes place in the small intestine. The large intestine primarily serves as a site for fermentation of indigestible matter by gut bacteria and for resorption of water from digesta before excretion.
72
+
73
+ In mammals, preparation for digestion begins with the cephalic phase in which saliva is produced in the mouth and digestive enzymes are produced in the stomach. Mechanical and chemical digestion begin in the mouth where food is chewed, and mixed with saliva to begin enzymatic processing of starches. The stomach continues to break food down mechanically and chemically through churning and mixing with both acids and enzymes. Absorption occurs in the stomach and gastrointestinal tract, and the process finishes with defecation.[1]
74
+
75
+ The human gastrointestinal tract is around 9 meters long. Food digestion physiology varies between individuals and upon other factors such as the characteristics of the food and size of the meal, and the process of digestion normally takes between 24 and 72 hours.[16]
76
+
77
+ Digestion begins in the mouth with the secretion of saliva and its digestive enzymes. Food is formed into a bolus by mechanical mastication and swallowed into the esophagus, from where it enters the stomach through the action of peristalsis. Gastric juice contains hydrochloric acid and pepsin, which would damage the walls of the stomach, so mucus is secreted for protection. In the stomach, further release of enzymes breaks down the food, and this is combined with the churning action of the stomach. The partially digested food enters the duodenum as a thick semi-liquid chyme. In the small intestine, the larger part of digestion takes place, helped by the secretions of bile, pancreatic juice and intestinal juice. The intestinal walls are lined with villi, and their epithelial cells are covered with numerous microvilli to improve the absorption of nutrients by increasing the surface area of the intestine.
78
+
79
+ In the large intestine the passage of food is slower to enable fermentation by the gut flora to take place. Here water is absorbed and waste material stored as feces to be removed by defecation via the anal canal and anus.
80
+
81
+ Different phases of digestion take place including: the cephalic phase, gastric phase, and intestinal phase.
82
+
83
+ The cephalic phase occurs at the sight, thought and smell of food, which stimulate the cerebral cortex. Taste and smell stimuli are sent to the hypothalamus and medulla oblongata. After this, the signal is routed through the vagus nerve, triggering the release of acetylcholine. Gastric secretion at this phase rises to 40% of the maximum rate. Acidity in the stomach is not buffered by food at this point and thus acts to inhibit parietal cell (acid-secreting) and G cell (gastrin-secreting) activity via D cell secretion of somatostatin.
84
+
85
+ The gastric phase takes 3 to 4 hours. It is stimulated by distension of the stomach, presence of food in stomach and decrease in pH. Distention activates long and myenteric reflexes. This activates the release of acetylcholine, which stimulates the release of more gastric juices. As protein enters the stomach, it binds to hydrogen ions, which raises the pH of the stomach. Inhibition of gastrin and gastric acid secretion is lifted. This triggers G cells to release gastrin, which in turn stimulates parietal cells to secrete gastric acid. Gastric acid is about 0.5% hydrochloric acid (HCl), which lowers the pH to the desired pH of 1–3. Acid release is also triggered by acetylcholine and histamine.
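+ As a rough, illustrative check (not taken from the source text, and assuming undiluted gastric acid at 0.5% w/v HCl with complete dissociation), the quoted concentration and pH are consistent: $[\mathrm{HCl}] \approx \frac{5\ \mathrm{g\,L^{-1}}}{36.46\ \mathrm{g\,mol^{-1}}} \approx 0.14\ \mathrm{mol\,L^{-1}}$, so $\mathrm{pH} = -\log_{10}[\mathrm{H^{+}}] \approx -\log_{10}(0.14) \approx 0.9$. Because the stomach contents are diluted and buffered by food and other secretions, the working pH settles in the quoted range of 1–3 rather than below 1.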
86
+
87
+ The intestinal phase has two parts, the excitatory and the inhibitory. Partially digested food fills the duodenum. This triggers intestinal gastrin to be released. Enterogastric reflex inhibits vagal nuclei, activating sympathetic fibers causing the pyloric sphincter to tighten to prevent more food from entering, and inhibits local reflexes.
88
+
89
+ Protein digestion occurs in the stomach and duodenum in which 3 main enzymes, pepsin secreted by the stomach and trypsin and chymotrypsin secreted by the pancreas, break down food proteins into polypeptides that are then broken down by various exopeptidases and dipeptidases into amino acids. The digestive enzymes however are mostly secreted as their inactive precursors, the zymogens. For example, trypsin is secreted by pancreas in the form of trypsinogen, which is activated in the duodenum by enterokinase to form trypsin. Trypsin then cleaves proteins to smaller polypeptides.
90
+
91
+ Digestion of some fats can begin in the mouth, where lingual lipase breaks down some short-chain lipids into diglycerides. However, fats are mainly digested in the small intestine.[17] The presence of fat in the small intestine produces hormones that stimulate the release of pancreatic lipase from the pancreas and bile from the liver, which helps in the emulsification of fats for absorption of fatty acids.[17] Complete digestion of one molecule of fat (a triglyceride) results in a mixture of fatty acids, mono- and di-glycerides, as well as some undigested triglycerides, but no free glycerol molecules.[17]
92
+
93
+ In humans, dietary starches are composed of glucose units arranged in long chains called amylose, a polysaccharide. During digestion, bonds between glucose molecules are broken by salivary and pancreatic amylase, resulting in progressively smaller chains of glucose. This results in simple sugars glucose and maltose (2 glucose molecules) that can be absorbed by the small intestine.
94
+
95
+ Lactase is an enzyme that breaks down the disaccharide lactose to its component parts, glucose and galactose. Glucose and galactose can be absorbed by the small intestine. Approximately 65 percent of the adult population produce only small amounts of lactase and are unable to eat unfermented milk-based foods. This is commonly known as lactose intolerance. Lactose intolerance varies widely by genetic heritage; more than 90 percent of peoples of east Asian descent are lactose intolerant, in contrast to about 5 percent of people of northern European descent.[18]
96
+
97
+ Sucrase is an enzyme that breaks down the disaccharide sucrose, commonly known as table sugar, cane sugar, or beet sugar. Sucrose digestion yields the sugars fructose and glucose which are readily absorbed by the small intestine.
98
+
99
+ DNA and RNA are broken down into mononucleotides by the nucleases deoxyribonuclease and ribonuclease (DNase and RNase) from the pancreas.
100
+
101
+ Some nutrients are complex molecules (for example vitamin B12) which would be destroyed if they were broken down into their functional groups. To digest vitamin B12 non-destructively, haptocorrin in saliva strongly binds and protects the B12 molecules from stomach acid as they enter the stomach and are cleaved from their protein complexes.[19]
102
+
103
+ After the B12-haptocorrin complexes pass from the stomach via the pylorus to the duodenum, pancreatic proteases cleave haptocorrin from the B12 molecules which rebind to intrinsic factor (IF). These B12-IF complexes travel to the ileum portion of the small intestine where cubilin receptors enable assimilation and circulation of B12-IF complexes in the blood.[20]
104
+
105
+ There are at least five hormones that aid and regulate the digestive system in mammals. There are variations across the vertebrates, as for instance in birds. Arrangements are complex and additional details are regularly discovered. For instance, more connections to metabolic control (largely the glucose-insulin system) have been uncovered in recent years.
106
+
107
+ Digestion is a complex process controlled by several factors. pH plays a crucial role in a normally functioning digestive tract. In the mouth, pharynx and esophagus, pH is typically about 6.8, very weakly acidic. Saliva controls pH in this region of the digestive tract. Salivary amylase is contained in saliva and starts the breakdown of carbohydrates into monosaccharides. Most digestive enzymes are sensitive to pH and will denature in a high or low pH environment.
108
+
109
+ The stomach's high acidity inhibits the breakdown of carbohydrates within it. This acidity confers two benefits: it denatures proteins for further digestion in the small intestines, and provides non-specific immunity, damaging or eliminating various pathogens.[citation needed]
110
+
111
+ In the small intestines, the duodenum provides critical pH balancing to activate digestive enzymes. The liver secretes bile into the duodenum to neutralize the acidic conditions from the stomach, and the pancreatic duct empties into the duodenum, adding bicarbonate to neutralize the acidic chyme, thus creating a neutral environment. The mucosal tissue of the small intestines is alkaline with a pH of about 8.5.[citation needed]
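+ As an illustrative aside (standard acid–base chemistry rather than a detail given here), the neutralization of acidic chyme by the bicarbonate in bile and pancreatic juice follows $\mathrm{H^{+} + HCO_{3}^{-} \rightarrow H_{2}CO_{3} \rightarrow H_{2}O + CO_{2}}$, which removes free hydrogen ions and raises the pH of the chyme toward the near-neutral range in which pancreatic and intestinal enzymes are active.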
en/5571.html.txt ADDED
@@ -0,0 +1,84 @@
1
+ Feudalism was a combination of legal, economic, military and cultural customs that flourished in Medieval Europe between the 9th and 15th centuries. Broadly defined, it was a way of structuring society around relationships that were derived from the holding of land in exchange for service or labour.
2
+ Although it is derived from the Latin word feodum or feudum (fief),[1] which was used during the Medieval period, the term feudalism and the system which it describes were not conceived of as a formal political system by the people who lived during the Middle Ages.[2] The classic definition, by François-Louis Ganshof (1944),[3] describes a set of reciprocal legal and military obligations which existed among the warrior nobility and revolved around the three key concepts of lords, vassals and fiefs.[3]
3
+
4
+ A broader definition of feudalism, as described by Marc Bloch (1939), includes not only the obligations of the warrior nobility but the obligations of all three estates of the realm: the nobility, the clergy, and the peasantry, all of whom were bound by a system of manorialism; this is sometimes referred to as a "feudal society". Since the publication of Elizabeth A. R. Brown's "The Tyranny of a Construct" (1974) and Susan Reynolds's Fiefs and Vassals (1994), there has been ongoing inconclusive discussion among medieval historians as to whether feudalism is a useful construct for understanding medieval society.[4][5][6][7][8][9]
5
+
6
+ There is no commonly accepted modern definition of feudalism, at least among scholars.[4][7] The adjective feudal was coined in the 17th century, and the noun feudalism, often used in a political and propaganda context, was not coined until the 19th century,[4] from the French féodalité (feudality), itself an 18th-century creation.
7
+
8
+ According to a classic definition by François-Louis Ganshof (1944),[3] feudalism describes a set of reciprocal legal and military obligations which existed among the warrior nobility and revolved around the three key concepts of lords, vassals and fiefs,[3] though Ganshof himself noted that his treatment was only related to the "narrow, technical, legal sense of the word".
9
+
10
+ A broader definition, as described in Marc Bloch's Feudal Society (1939),[10] includes not only the obligations of the warrior nobility but the obligations of all three estates of the realm: the nobility, the clergy, and those who lived off their labor, most directly the peasantry which was bound by a system of manorialism; this order is often referred to as a "feudal society", echoing Bloch's usage.
11
+
12
+ Outside its European context,[4] the concept of feudalism is often used by analogy, most often in discussions of feudal Japan under the shoguns, and sometimes in discussions of the Zagwe dynasty in medieval Ethiopia,[11] which had some feudal characteristics (sometimes called "semifeudal").[12][13] Some have taken the feudalism analogy further, seeing feudalism (or traces of it) in places as diverse as China during the Spring and Autumn period, ancient Egypt, the Parthian empire, the Indian subcontinent and the Antebellum and Jim Crow American South.[11] Wu Ta-k'un argued that China's fengjian, being kinship-based and tied to land which was controlled by a king, was entirely distinct from feudalism, despite the fact that in translation fengjian is frequently paired in both directions with feudal.[14]
13
+
14
+ The term feudalism has also been applied—often inappropriately or pejoratively—to non-Western societies where institutions and attitudes which are similar to those which existed in medieval Europe are perceived to prevail.[15] Some historians and political theorists believe that the term feudalism has been deprived of specific meaning by the many ways it has been used, leading them to reject it as a useful concept for understanding society.[4][5]
15
+
16
+ The term "féodal" was used in 17th-century French legal treatises (1614)[16][17] and translated into English legal treatises as an adjective, such as "feodal government".
17
+
18
+ In the 18th century, Adam Smith, seeking to describe economic systems, effectively coined the forms "feudal government" and "feudal system" in his book Wealth of Nations (1776).[18] In the 19th century the adjective "feudal" evolved into a noun: "feudalism".[18] The term feudalism is recent, first appearing in French in 1823, Italian in 1827, English in 1839, and in German in the second half of the 19th century.[18]
19
+
20
+ The term "feudal" or "feodal" is derived from the medieval Latin word feodum. The etymology of feodum is complex with multiple theories, some suggesting a Germanic origin (the most widely held view) and others suggesting an Arabic origin. Initially in medieval Latin European documents, a land grant in exchange for service was called a beneficium (Latin).[19] Later, the term feudum, or feodum, began to replace beneficium in the documents.[19] The first attested instance of this is from 984, although more primitive forms were seen up to one-hundred years earlier.[19] The origin of the feudum and why it replaced beneficium has not been well established, but there are multiple theories, described below.[19]
21
+
22
+ The most widely held theory was proposed by Johan Hendrik Caspar Kern in 1870,[20][21] being supported by, amongst others, William Stubbs[19][22] and Marc Bloch.[19][23][24] Kern derived the word from a putative Frankish term *fehu-ôd, in which *fehu means "cattle" and -ôd means "goods", implying "a moveable object of value".[23][24] Bloch explains that by the beginning of the 10th century it was common to value land in monetary terms but to pay for it with moveable objects of equivalent value, such as arms, clothing, horses or food. This was known as feos, a term that took on the general meaning of paying for something in lieu of money. This meaning was then applied to land itself, in which land was used to pay for fealty, such as to a vassal. Thus the old word feos meaning movable property changed little by little to feus meaning the exact opposite: landed property.[23][24]
23
+
24
+ Another theory was put forward by Archibald R. Lewis.[19] Lewis said the origin of 'fief' is not feudum (or feodum), but rather foderum, the earliest attested use being in Astronomus's Vita Hludovici (840).[25] In that text is a passage about Louis the Pious that says annona militaris quas vulgo foderum vocant, which can be translated as "Louis forbade that military provender (which they popularly call 'fodder') be furnished...".[19]
25
+
26
+ Another theory by Alauddin Samarrai suggests an Arabic origin, from fuyū (the plural of fay, which literally means "the returned", and was used especially for 'land that has been conquered from enemies that did not fight').[19][26] Samarrai's theory is that early forms of 'fief' include feo, feu, feuz, feuum and others, the plurality of forms strongly suggesting origins from a loanword. The first use of these terms is in Languedoc, one of the least Germanic areas of Europe and bordering Muslim Spain. Further, the earliest use of feuum (as a replacement for beneficium) can be dated to 899, the same year a Muslim base at Fraxinetum (La Garde-Freinet) in Provence was established. It is possible, Samarrai says, that French scribes, writing in Latin, attempted to transliterate the Arabic word fuyū (the plural of fay), which was being used by the Muslim invaders and occupiers at the time, resulting in a plurality of forms – feo, feu, feuz, feuum and others – from which eventually feudum derived. Samarrai, however, also advises to handle this theory with care, as Medieval and Early Modern Muslim scribes often used etymologically "fanciful roots" in order to claim the most outlandish things to be of Arabian or Muslim origin.[26]
27
+
28
+ Feudalism, in its various forms, usually emerged as a result of the decentralization of an empire: especially in the Carolingian Empire in the 8th century AD/CE, which lacked the bureaucratic infrastructure[clarification needed] necessary to support cavalry without allocating land to these mounted troops. Mounted soldiers began to secure a system of hereditary rule over their allocated land, and their power over the territory came to encompass the social, political, judicial, and economic spheres.[27]
29
+
30
+ These acquired powers significantly diminished unitary power in these empires. Only when the infrastructure existed to maintain unitary power—as with the European monarchies—did feudalism begin to yield to this new power structure and eventually disappear.[27]
31
+
32
+ The classic François-Louis Ganshof version of feudalism[4][3] describes a set of reciprocal legal and military obligations which existed among the warrior nobility, revolving around the three key concepts of lords, vassals and fiefs. In broad terms a lord was a noble who held land, a vassal was a person who was granted possession of the land by the lord, and the land was known as a fief. In exchange for the use of the fief and protection by the lord, the vassal would provide some sort of service to the lord. There were many varieties of feudal land tenure, consisting of military and non-military service. The obligations and corresponding rights between lord and vassal concerning the fief form the basis of the feudal relationship.[3]
33
+
34
+ Before a lord could grant land (a fief) to someone, he had to make that person a vassal. This was done at a formal and symbolic ceremony called a commendation ceremony, which was composed of the two-part act of homage and oath of fealty. During homage, the lord and vassal entered into a contract in which the vassal promised to fight for the lord at his command, whilst the lord agreed to protect the vassal from external forces. Fealty comes from the Latin fidelitas and denotes the fidelity owed by a vassal to his feudal lord. "Fealty" also refers to an oath that more explicitly reinforces the commitments of the vassal made during homage. Such an oath follows homage.[28]
35
+
36
+ Once the commendation ceremony was complete, the lord and vassal were in a feudal relationship with agreed obligations to one another. The vassal's principal obligation to the lord was to "aid", or military service. Using whatever equipment the vassal could obtain by virtue of the revenues from the fief, the vassal was responsible to answer calls to military service on behalf of the lord. This security of military help was the primary reason the lord entered into the feudal relationship. In addition, the vassal could have other obligations to his lord, such as attendance at his court, whether manorial, baronial, both termed court baron, or at the king's court.[29]
37
+
38
+ It could also involve the vassal providing "counsel", so that if the lord faced a major decision he would summon all his vassals and hold a council. At the level of the manor this might be a fairly mundane matter of agricultural policy, but also included sentencing by the lord for criminal offences, including capital punishment in some cases. Concerning the king's feudal court, such deliberation could include the question of declaring war. These are examples; depending on the period of time and location in Europe, feudal customs and practices varied; see examples of feudalism.
39
+
40
+ In its origin, the feudal grant of land had been seen in terms of a personal bond between lord and vassal, but with time and the transformation of fiefs into hereditary holdings, the nature of the system came to be seen as a form of "politics of land" (an expression used by the historian Marc Bloch). The 11th century in France saw what has been called by historians a "feudal revolution" or "mutation" and a "fragmentation of powers" (Bloch) that was unlike the development of feudalism in England or Italy or Germany in the same period or later:[30] Counties and duchies began to break down into smaller holdings as castellans and lesser seigneurs took control of local lands, and (as comital families had done before them) lesser lords usurped/privatized a wide range of prerogatives and rights of the state, most importantly the highly profitable rights of justice, but also travel dues, market dues, fees for using woodlands, obligations to use the lord's mill, etc.[31] (what Georges Duby called collectively the "seigneurie banale"[31]). Power in this period became more personal.[32]
41
+
42
+ This "fragmentation of powers" was not, however, systematic throughout France, and in certain counties (such as Flanders, Normandy, Anjou, Toulouse), counts were able to maintain control of their lands into the 12th century or later.[33] Thus, in some regions (like Normandy and Flanders), the vassal/feudal system was an effective tool for ducal and comital control, linking vassals to their lords; but in other regions, the system led to significant confusion, all the more so as vassals could and frequently did pledge themselves to two or more lords. In response to this, the idea of a "liege lord" was developed (where the obligations to one lord are regarded as superior) in the 12th century.[34]
43
+
44
+ Most of the military aspects of feudalism effectively ended by about 1500.[35] This was partly because armies shifted from being composed of the nobility to being made up of professional fighters, thus reducing the nobility's claim on power, but also because the Black Death reduced the nobility's hold over the lower classes. Vestiges of the feudal system hung on in France until the French Revolution of the 1790s, and the system lingered on in parts of Central and Eastern Europe as late as the 1850s. Slavery in Romania was abolished in 1856. Russia finally abolished serfdom in 1861.[36][37]
45
+
46
+ Even when the original feudal relationships had disappeared, there were many institutional remnants of feudalism left in place. Historian Georges Lefebvre explains how at an early stage of the French Revolution, on just one night of August 4, 1789, France abolished the long-lasting remnants of the feudal order. It announced, "The National Assembly abolishes the feudal system entirely." Lefebvre explains:
47
+
48
+ Without debate the Assembly enthusiastically adopted equality of taxation and redemption of all manorial rights except for those involving personal servitude—which were to be abolished without indemnification. Other proposals followed with the same success: the equality of legal punishment, admission of all to public office, abolition of venality in office, conversion of the tithe into payments subject to redemption, freedom of worship, prohibition of plural holding of benefices ... Privileges of provinces and towns were offered as a last sacrifice.[38]
49
+
50
+ Originally the peasants were supposed to pay for the release of seigneurial dues; these dues affected more than a quarter of the farmland in France and provided most of the income of the large landowners.[39] The majority refused to pay and in 1793 the obligation was cancelled. Thus the peasants got their land free, and also no longer paid the tithe to the church.[40]
51
+
52
+ The phrase "feudal society" as defined by Marc Bloch[10] offers a wider definition than Ganshof's and includes within the feudal structure not only the warrior aristocracy bound by vassalage, but also the peasantry bound by manorialism, and the estates of the Church. Thus the feudal order embraces society from top to bottom, though the "powerful and well-differentiated social group of the urban classes" came to occupy a distinct position to some extent outside the classic feudal hierarchy.
53
+
54
+ The idea of feudalism was unknown and the system it describes was not conceived of as a formal political system by the people living in the Medieval Period. This section describes the history of the idea of feudalism, how the concept originated among scholars and thinkers, how it changed over time, and modern debates about its use.
55
+
56
+ The concept of a feudal state or period, in the sense of either a regime or a period dominated by lords who possess financial or social power and prestige, became widely held in the middle of the 18th century, as a result of works such as Montesquieu's De L'Esprit des Lois (1748; published in English as The Spirit of the Laws), and Henri de Boulainvilliers’s Histoire des anciens Parlements de France (1737; published in English as An Historical Account of the Ancient Parliaments of France or States-General of the Kingdom, 1739).[18] In the 18th century, writers of the Enlightenment wrote about feudalism to denigrate the antiquated system of the Ancien Régime, or French monarchy. This was the Age of Enlightenment when writers valued reason and the Middle Ages were viewed as the "Dark Ages". Enlightenment authors generally mocked and ridiculed anything from the "Dark Ages" including feudalism, projecting its negative characteristics on the current French monarchy as a means of political gain.[41] For them "feudalism" meant seigneurial privileges and prerogatives. When the French Constituent Assembly abolished the "feudal regime" in August 1789 this is what was meant.
57
+
58
+ Adam Smith used the term "feudal system" to describe a social and economic system defined by inherited social ranks, each of which possessed inherent social and economic privileges and obligations. In such a system wealth derived from agriculture, which was arranged not according to market forces but on the basis of customary labour services owed by serfs to landowning nobles.[42]
59
+
60
+ Karl Marx also used the term in the 19th century in his analysis of society's economic and political development, describing feudalism (or more usually feudal society or the feudal mode of production) as the order coming before capitalism. For Marx, what defined feudalism was the power of the ruling class (the aristocracy) in their control of arable land, leading to a class society based upon the exploitation of the peasants who farm these lands, typically under serfdom and principally by means of labour, produce and money rents.[43] Marx thus defined feudalism primarily by its economic characteristics.
61
+
62
+ He also took it as a paradigm for understanding the power-relationships between capitalists and wage-labourers in his own time: "in pre-capitalist systems it was obvious that most people did not control their own destiny—under feudalism, for instance, serfs had to work for their lords. Capitalism seems different because people are in theory free to work for themselves or for others as they choose. Yet most workers have as little control over their lives as feudal serfs."[44] Some later Marxist theorists (e.g. Eric Wolf) have applied this label to include non-European societies, grouping feudalism together with Imperial Chinese and pre-Columbian Incan societies as 'tributary'.
63
+
64
+ In the late 19th and early 20th centuries, John Horace Round and Frederic William Maitland, both historians of medieval Britain, arrived at different conclusions as to the character of English society before the Norman Conquest in 1066. Round argued that the Normans had brought feudalism with them to England, while Maitland contended that its fundamentals were already in place in Britain before 1066. The debate continues today, but a consensus viewpoint is that England before the Conquest had commendation (which embodied some of the personal elements in feudalism) while William the Conqueror introduced a modified and stricter northern French feudalism to England incorporating (1086) oaths of loyalty to the king by all who held by feudal tenure, even the vassals of his principal vassals (holding by feudal tenure meant that vassals must provide the quota of knights required by the king or a money payment in substitution).
65
+
66
+ In the 20th century, two outstanding historians offered still more widely differing perspectives. The French historian Marc Bloch, arguably the most influential 20th-century medieval historian,[43] approached feudalism not so much from a legal and military point of view but from a sociological one, presenting in Feudal Society (1939; English 1961) a feudal order not limited solely to the nobility. It is his radical notion that peasants were part of the feudal relationship that sets Bloch apart from his peers: while the vassal performed military service in exchange for the fief, the peasant performed physical labour in return for protection – both are a form of feudal relationship. According to Bloch, other elements of society can be seen in feudal terms; all the aspects of life were centered on "lordship", and so we can speak usefully of a feudal church structure, a feudal courtly (and anti-courtly) literature, and a feudal economy.[43]
67
+
68
+ In contradistinction to Bloch, the Belgian historian François-Louis Ganshof defined feudalism from a narrow legal and military perspective, arguing that feudal relationships existed only within the medieval nobility itself. Ganshof articulated this concept in Qu'est-ce que la féodalité? ("What is feudalism?", 1944; translated in English as Feudalism). His classic definition of feudalism is widely accepted today among medieval scholars,[43] though questioned both by those who view the concept in wider terms and by those who find insufficient uniformity in noble exchanges to support such a model.
69
+
70
+ Although he was never formally a student in the circle of scholars around Marc Bloch and Lucien Febvre that came to be known as the Annales School, Georges Duby was an exponent of the Annaliste tradition. In a published version of his 1952 doctoral thesis entitled La société aux XIe et XIIe siècles dans la région mâconnaise (Society in the 11th and 12th centuries in the Mâconnais region), and working from the extensive documentary sources surviving from the Burgundian monastery of Cluny, as well as the dioceses of Mâcon and Dijon, Duby excavated the complex social and economic relationships among the individuals and institutions of the Mâconnais region and charted a profound shift in the social structures of medieval society around the year 1000. He argued that in the early 11th century, governing institutions—particularly comital courts established under the Carolingian monarchy—that had represented public justice and order in Burgundy during the 9th and 10th centuries receded and gave way to a new feudal order wherein independent aristocratic knights wielded power over peasant communities through strong-arm tactics and threats of violence.
71
+
72
+ In 1939 the Austrian historian Theodor Mayer subordinated the feudal state as secondary to his concept of a persons-association state (Personenverbandsstaat [de]), understanding it in contrast to the territorial state.[45] This form of statehood, identified with the Holy Roman Empire, is described as the most complete form of medieval rule, completing the conventional feudal structure of lordship and vassalage with the personal association among the nobility.[46] However, the applicability of this concept to cases outside of the Holy Roman Empire has been questioned, as by Susan Reynolds.[47] The concept has also been questioned and superseded in German historiography because of its bias and reductionism towards legitimating the Führerprinzip.
73
+
74
+ In 1974, the American historian Elizabeth A. R. Brown[5] rejected the label feudalism as an anachronism that imparts a false sense of uniformity to the concept. Having noted the current use of many, often contradictory, definitions of feudalism, she argued that the word is only a construct with no basis in medieval reality, an invention of modern historians read back "tyrannically" into the historical record. Supporters of Brown have suggested that the term should be expunged from history textbooks and lectures on medieval history entirely.[43] In Fiefs and Vassals: The Medieval Evidence Reinterpreted (1994),[6] Susan Reynolds expanded upon Brown's original thesis. Although some contemporaries questioned Reynolds's methodology, other historians have supported it and her argument.[43] Reynolds argues:
75
+
76
+ Too many models of feudalism used for comparisons, even by Marxists, are still either constructed on the 16th-century basis or incorporate what, in a Marxist view, must surely be superficial or irrelevant features from it. Even when one restricts oneself to Europe and to feudalism in its narrow sense it is extremely doubtful whether feudo-vassalic institutions formed a coherent bundle of institutions or concepts that were structurally separate from other institutions and concepts of the time.[48]
77
+
78
+ The term feudal has also been applied to non-Western societies in which institutions and attitudes similar to those of medieval Europe are perceived to have prevailed (See Examples of feudalism). Japan has been extensively studied in this regard.[49] Friday notes that in the 21st century historians of Japan rarely invoke feudalism; instead of looking at similarities, specialists attempting comparative analysis concentrate on fundamental differences.[50] Ultimately, critics say, the many ways the term feudalism has been used have deprived it of specific meaning, leading some historians and political theorists to reject it as a useful concept for understanding society.[43]
79
+
80
+ Richard Abels notes that "Western Civilization and World Civilization textbooks now shy away from the term 'feudalism'."[51]
81
+
82
+ Military:
83
+
84
+ Non-European:
en/5572.html.txt ADDED
@@ -0,0 +1,250 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ The International System of Units (SI, abbreviated from the French Système international (d'unités)) is the modern form of the metric system. It is the only system of measurement with an official status in nearly every country in the world. It comprises a coherent system of units of measurement starting with seven base units, which are the second (the unit of time with the symbol s), metre (length, m), kilogram (mass, kg), ampere (electric current, A), kelvin (thermodynamic temperature, K), mole (amount of substance, mol), and candela (luminous intensity, cd). The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units.[Note 1] Twenty-two derived units have been provided with special names and symbols.[Note 2] The seven base units and the 22 derived units with special names and symbols may be used in combination to express other derived units,[Note 3] which are adopted to facilitate measurement of diverse quantities. The SI system also provides twenty prefixes to the unit names and unit symbols that may be used when specifying power-of-ten (i.e. decimal) multiples and sub-multiples of SI units. The SI is intended to be an evolving system; units and prefixes are created and unit definitions are modified through international agreement as the technology of measurement progresses and the precision of measurements improves.
4
+
5
+ Since 2019, the magnitudes of all SI units have been defined by declaring exact numerical values for seven defining constants when expressed in terms of their SI units. These defining constants are the speed of light in vacuum, c, the hyperfine transition frequency of caesium ΔνCs, the Planck constant h, the elementary charge e, the Boltzmann constant k, the Avogadro constant NA, and the luminous efficacy Kcd. The nature of the defining constants ranges from fundamental constants of nature such as c to the purely technical constant Kcd. Prior to 2019, h, e, k, and NA were not defined a priori but were rather very precisely measured quantities. In 2019, their values were fixed by definition to their best estimates at the time, ensuring continuity with previous definitions of the base units. One consequence of the redefinition of the SI is that the distinction between the base units and derived units is in principle not needed, since any unit can be constructed directly from the seven defining constants.[2]:129
6
+
7
+ The current way of defining the SI system is a result of a decades-long move towards increasingly abstract and idealised formulation in which the realisations of the units are separated conceptually from the definitions. A consequence is that as science and technologies develop, new and superior realisations may be introduced without the need to redefine the unit. One problem with artefacts is that they can be lost, damaged, or changed; another is that they introduce uncertainties that cannot be reduced by advancements in science and technology. The last artefact used by the SI was the International Prototype of the Kilogram, a cylinder of platinum-iridium.
8
+
9
+ The original motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as a result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units (MKS) rather than any variant of the CGS.
10
+
11
+ The International System of Units, the SI,[2]:123 is a decimal[Note 4] and metric[Note 5] system of units established in 1960 and periodically updated since then. The SI has an official status in most countries,[Note 6] including the United States[Note 8] and the United Kingdom, with these two countries being amongst a handful of nations that, to various degrees, continue to resist widespread internal adoption of the SI system. As a consequence, the SI system “has been used around the world as the preferred system of units, the basic language for science, technology, industry and trade.”[2]:123
12
+
13
+ The only other types of measurement system that still have widespread use across the world are the Imperial and US customary measurement systems, and they are legally defined in terms of the SI system.[Note 9] There are other, less widespread systems of measurement that are occasionally used in particular regions of the world. In addition, there are many individual non-SI units that don't belong to any comprehensive system of units, but that are nevertheless still regularly used in particular fields and regions. Both of these categories of unit are also typically defined legally in terms of SI units.[Note 10]
14
+
15
+ The SI was established and is maintained by the General Conference on Weights and Measures (CGPM[Note 11]).[4] In practice, the CGPM follows the recommendations of the Consultative Committee for Units (CCU), which is the actual body conducting technical deliberations concerning new scientific and technological developments related to the definition of units and the SI. The CCU reports to the International Committee for Weights and Measures (CIPM[Note 12]), which, in turn, reports to the CGPM. See below for more details.
16
+
17
+ All the decisions and recommendations concerning units are collected in a brochure called The International System of Units (SI)[Note 13], which is published by the International Bureau of Weights and Measures (BIPM[Note 14]) and periodically updated.
18
+
19
+ The SI selects seven units to serve as base units, corresponding to seven base physical quantities.[Note 15] They are the second, with the symbol s, which is the SI unit of the physical quantity of time; the metre, symbol m, the SI unit of length; kilogram (kg, the unit of mass); ampere (A, electric current); kelvin (K, thermodynamic temperature), mole (mol, amount of substance); and candela (cd, luminous intensity).[2] Note that 'the choice of the base units was never unique, but grew historically and became familiar to users of the SI'.[2]:126 All units in the SI can be expressed in terms of the base units, and the base units serve as a preferred set for expressing or analysing the relationships between units.
20
+
21
+ The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units, possibly with a nontrivial numeric multiplier. When that multiplier is one, the unit is called a coherent derived unit.[Note 16] The base and coherent derived units of the SI together form a coherent system of units (the set of coherent SI units).[Note 17] Twenty-two coherent derived units have been provided with special names and symbols.[Note 18] The seven base units and the 22 derived units with special names and symbols may be used in combination to express other derived units,[Note 19] which are adopted to facilitate measurement of diverse quantities.
22
+
23
+ Like all metric systems, the SI uses metric prefixes to systematically construct, for one and the same physical quantity, a whole set of units of widely different sizes that are decimal multiples of each other.
24
+
25
+ For example, while the coherent unit of length is the metre,[Note 20] the SI provides a full range of smaller and larger units of length, any of which may be more convenient for any given application – for example, driving distances are normally given in kilometres (symbol km) rather than in metres. Here the metric prefix 'kilo-' (symbol 'k') stands for a factor of 1000; thus, 1 km = 1000 m.[Note 21]
26
+
27
+ The current version of the SI provides twenty metric prefixes that signify decimal powers ranging from 10−24 to 1024.[2]:143–4 Apart from the prefixes for 1/100, 1/10, 10, and 100, all the other ones are powers of 1000.
28
+
29
+ In general, given any coherent unit with a separate name and symbol,[Note 22] one forms a new unit by simply adding an appropriate metric prefix to the name of the coherent unit (and a corresponding prefix symbol to the unit's symbol). Since the metric prefix signifies a particular power of ten, the new unit is always a power-of-ten multiple or sub-multiple of the coherent unit. Thus, the conversion between units within the SI is always through a power of ten; this is why the SI system (and metric systems more generally) are called decimal systems of measurement units.[6][Note 23]
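+ The prefix arithmetic described above amounts to multiplying or dividing by a power of ten. A minimal, illustrative Python sketch of this rule (the prefix table below is only a subset of the twenty SI prefixes, and the helper name si_value is an ad hoc choice for this example, not any official tooling):
+
+ # Convert a value quoted with an SI prefix to the corresponding coherent (unprefixed) unit.
+ SI_PREFIXES = {  # prefix symbol -> decimal factor it denotes
+     "G": 1e9, "M": 1e6, "k": 1e3, "h": 1e2, "da": 1e1,
+     "d": 1e-1, "c": 1e-2, "m": 1e-3, "µ": 1e-6, "n": 1e-9,
+ }
+
+ def si_value(value: float, prefix: str) -> float:
+     """Return the numerical value expressed in the coherent (unprefixed) unit."""
+     return value * SI_PREFIXES[prefix]
+
+ print(si_value(3, "k"))    # 3 km  -> 3000.0 m
+ print(si_value(250, "m"))  # 250 mm -> 0.25 m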
30
+
31
+ The grouping formed by a prefix symbol attached to a unit symbol (e.g. 'km', 'cm') constitutes a new inseparable unit symbol. This new symbol can be raised to a positive or negative power and can be combined with other unit symbols to form compound unit symbols.[2]:143 For example, g/cm3 is an SI unit of density, where cm3 is to be interpreted as (cm)3.
32
+
33
+ When prefixes are used with the coherent SI units, the resulting units are no longer coherent, because the prefix introduces a numerical factor other than one.[2]:137 The one exception is the kilogram, the only coherent SI unit whose name and symbol, for historical reasons, include a prefix.[Note 24]
34
+
35
+ The complete set of SI units consists of both the coherent set and the multiples and sub-multiples of coherent units formed by using the SI prefixes.[2]:138 For example, the metre, kilometre, centimetre, nanometre, etc. are all SI units of length, though only the metre is a coherent SI unit. A similar statement holds for derived units: for example, kg/m3, g/dm3, g/cm3, Pg/km3, etc. are all SI units of density, but of these, only kg/m3 is a coherent SI unit.
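+ A worked illustration of why, of these, only kg/m3 is coherent: 1 g/cm3 = (10−3 kg)/(10−2 m)3 = 10−3 kg / 10−6 m3 = 1000 kg/m3, so a density quoted in g/cm3 must be multiplied by a numerical factor of 1000 (introduced by the prefixes) to obtain its value in the coherent unit kg/m3.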
36
+
37
+ Moreover, the metre is the only coherent SI unit of length. Every physical quantity has exactly one coherent SI unit, although this unit may be expressible in different forms by using some of the special names and symbols.[2]:140 For example, the coherent SI unit of linear momentum may be written as either kg⋅m/s or as N⋅s, and both forms are in use (e.g. compare respectively here[7]:205 and here[8]:135).
38
+
39
+ On the other hand, several different quantities may share the same coherent SI unit. For example, the joule per kelvin is the coherent SI unit for two distinct quantities: heat capacity and entropy. Furthermore, the same coherent SI unit may be a base unit in one context, but a coherent derived unit in another. For example, the ampere is the coherent SI unit for both electric current and magnetomotive force, but it is a base unit in the former case and a derived unit in the latter.[2]:140[Note 26]
40
+
41
+ There is a special group of units that are called 'non-SI units that are accepted for use with the SI'.[2]:145 See Non-SI units mentioned in the SI for a full list. Most of these, in order to be converted to the corresponding SI unit, require conversion factors that are not powers of ten. Some common examples of such units are the customary units of time, namely the minute (conversion factor of 60 s/min, since 1 min = 60 s), the hour (3600 s), and the day (86400 s); the degree (for measuring plane angles, 1° = π/180 rad); and the electronvolt (a unit of energy, 1 eV = 1.602176634×10−19 J).
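+ As worked examples of such conversions (the particular quantities are chosen purely for illustration): a duration of 2.5 h is 2.5 × 3600 s = 9000 s; an angle of 30° is 30 × π/180 rad ≈ 0.524 rad; and an energy of 5 MeV is 5×106 × 1.602176634×10−19 J ≈ 8.01×10−13 J.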
42
+
43
+ The SI is intended to be an evolving system; units[Note 27] and prefixes are created and unit definitions are modified through international agreement as the technology of measurement progresses and the precision of measurements improves.
44
+
45
+ Since 2019, the magnitudes of all SI units have been defined in an abstract way, which is conceptually separated from any practical realisation of them.[2]:126[Note 28] Namely, the SI units are defined by declaring that seven defining constants[2]:125–9 have certain exact numerical values when expressed in terms of their SI units. Probably the most widely known of these constants is the speed of light in vacuum, c, which in the SI by definition has the exact value of c = 299792458 m/s. The other six constants are
46
+ ΔνCs, the hyperfine transition frequency of caesium; h, the Planck constant; e, the elementary charge; k, the Boltzmann constant; NA, the Avogadro constant; and Kcd, the luminous efficacy of monochromatic radiation of frequency 540×1012 Hz.[Note 29] The nature of the defining constants ranges from fundamental constants of nature such as c to the purely technical constant Kcd.[2]:128–9. Prior to 2019, h, e, k, and NA were not defined a priori but were rather very precisely measured quantities. In 2019, their values were fixed by definition to their best estimates at the time, ensuring continuity with previous definitions of the base units.
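+ For reference, the exact numerical values fixed for these constants by the 2019 redefinition (not all of which are quoted elsewhere in this article) are: ΔνCs = 9192631770 Hz, h = 6.62607015×10−34 J⋅s, e = 1.602176634×10−19 C, k = 1.380649×10−23 J/K, NA = 6.02214076×1023 mol−1, and Kcd = 683 lm/W, together with c = 299792458 m/s as noted above.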
61
+
62
+ As far as realisations are concerned, what are believed to be the current best practical realisations of units are described in the so-called 'mises en pratique',[Note 30] which are also published by the BIPM.[11] The abstract nature of the definitions of units is what makes it possible to improve and change the mises en pratique as science and technology develop without having to change the actual definitions themselves.[Note 33]
63
+
64
+ In a sense, this way of defining the SI units is no more abstract than the way derived units are traditionally defined in terms of the base units. Let us consider a particular derived unit, say the joule, the unit of energy. Its definition in terms of the base units is kg⋅m2/s2. Even if the practical realisations of the metre, kilogram, and second are available, we do not thereby immediately have a practical realisation of the joule; such a realisation will require some sort of reference to the underlying physical definition of work or energy—some actual physical procedure (a mise en pratique, if you will) for realising the energy in the amount of one joule such that it can be compared to other instances of energy (such as the energy content of gasoline put into a car or of electricity delivered to a household).
65
+
66
+ The situation with the defining constants and all of the SI units is analogous. In fact, purely mathematically speaking, the SI units are defined as if we declared that it is the defining constant's units that are now the base units, with all other SI units being derived units. To make this clearer, first note that each defining constant can be taken as determining the magnitude of that defining constant's unit of measurement;[2]:128 for example, the definition of c defines the unit m/s as 1 m/s = c/299792458 ('the speed of one metre per second is equal to one 299792458th of the speed of light'). In this way, the defining constants directly define the following seven units: the hertz (Hz), a unit of the physical quantity of frequency (note that problems can arise when dealing with frequency or the Planck constant because the units of angular measure (cycle or radian) are omitted in SI.[12][13][14][15][16]); the metre per second (m/s), a unit of speed; joule-second (J⋅s), a unit of action; coulomb (C), a unit of electric charge; joule per kelvin (J/K), a unit of both entropy and heat capacity; the inverse mole (mol−1), a unit of a conversion constant between the amount of substance and the number of elementary entities (atoms, molecules, etc.); and lumen per watt (lm/W), a unit of a conversion constant between the physical power carried by electromagnetic radiation and the intrinsic ability of that same radiation to produce visual perception of brightness in humans. Further, one can show, using dimensional analysis, that every coherent SI unit (whether base or derived) can be written as a unique product of powers of the units of the SI defining constants (in complete analogy to the fact that every coherent derived SI unit can be written as a unique product of powers of the base SI units). For example, the kilogram can be written as kg = (Hz)(J⋅s)/(m/s)2.[Note 34] Thus, the kilogram is defined in terms of the three defining constants ΔνCs, c, and h because, on the one hand, these three defining constants respectively define the units Hz, m/s, and J⋅s,[Note 35] while, on the other hand, the kilogram can be written in terms of these three units, namely, kg = (Hz)(J⋅s)/(m/s)2.[Note 36] True, the question of how to actually realise the kilogram in practice would, at this point, still be open, but that is not really different from the fact that the question of how to actually realise the joule in practice is still in principle open even once one has achieved the practical realisations of the metre, kilogram, and second.
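+ As a worked expansion of the relation kg = (Hz)(J⋅s)/(m/s)2 quoted above, the units fixed by the defining constants can be substituted directly; the algebra below is only a sketch, using the exact fixed values of ΔνCs, c and h, but the final expression agrees with the form in which the kilogram is given in the SI Brochure:
+
+ \mathrm{s}=\frac{9\,192\,631\,770}{\Delta\nu_{\text{Cs}}},\qquad \mathrm{m}=\frac{c}{299\,792\,458}\,\mathrm{s},\qquad \mathrm{kg}=\frac{h}{6.626\,070\,15\times 10^{-34}}\cdot\frac{\mathrm{s}}{\mathrm{m}^{2}}=\frac{(299\,792\,458)^{2}}{(6.626\,070\,15\times 10^{-34})(9\,192\,631\,770)}\cdot\frac{h\,\Delta\nu_{\text{Cs}}}{c^{2}}\approx 1.475\,521\times 10^{40}\,\frac{h\,\Delta\nu_{\text{Cs}}}{c^{2}}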
67
+
68
+ One consequence of the redefinition of the SI is that the distinction between the base units and derived units is in principle not needed, since any unit can be constructed directly from the seven defining constants. Nevertheless, the distinction is retained because 'it is useful and historically well established', and also because the ISO/IEC 80000 series of standards[Note 37] specifies base and derived quantities that necessarily have the corresponding SI units.[2]:129
69
+
70
+ The current way of defining the SI system is the result of a decades-long move towards increasingly abstract and idealised formulation in which the realisations of the units are separated conceptually from the definitions.[2]:126
72
+
73
+ The great advantage of doing it this way is that as science and technologies develop, new and superior realisations may be introduced without the need to redefine the units.[Note 31] Units can now be realised with ‘an accuracy that is ultimately limited only by the quantum structure of nature and our technical abilities but not by the definitions themselves.[Note 32] Any valid equation of physics relating the defining constants to a unit can be used to realise the unit, thus creating opportunities for innovation... with increasing accuracy as technology proceeds.’[2]:122 In practice, the CIPM Consultative Committees provide so-called "mises en pratique" (practical techniques),[11] which are the descriptions of what are currently believed to be best experimental realisations of the units.[19]
74
+
75
+ This system lacks the conceptual simplicity of using artefacts (referred to as prototypes) as realisations of units to define those units: with prototypes, the definition and the realisation are one and the same.[Note 38] However, using artefacts has two major disadvantages that, as soon as it is technologically and scientifically feasible, result in abandoning them as means for defining units.[Note 42] One major disadvantage is that artefacts can be lost, damaged,[Note 44] or changed.[Note 45] The other is that they largely cannot benefit from advancements in science and technology. The last artefact used by the SI was the International Prototype Kilogram (IPK), a particular cylinder of platinum-iridium; from 1889 to 2019, the kilogram was by definition equal to the mass of the IPK. Concerns regarding its stability on the one hand, and progress in precise measurements of the Planck constant and the Avogadro constant on the other, led to a revision of the definition of the base units, put into effect on 20 May 2019.[26] This was the biggest change in the SI system since it was first formally defined and established in 1960,[citation needed] and it resulted in the definitions described above.
76
+
77
+ In the past, there were also various other approaches to the definitions of some of the SI units. One made use of a specific physical state of a specific substance (the triple point of water, which was used in the definition of the kelvin[27]:113–4); others referred to idealised experimental prescriptions[2]:125 (as in the case of the former SI definition of the ampere[27]:113 and the former SI definition (originally enacted in 1979) of the candela[27]:115).
78
+
79
+ In the future, the set of defining constants used by the SI may be modified as more stable constants are found, or if it turns out that other constants can be more precisely measured.[Note 46]
80
+
81
+ The original motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as a result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units (MKS) rather than any variant of the CGS.
82
+
83
+ The SI is regulated and continually developed by three international organisations that were established in 1875 under the terms of the Metre Convention. They are the General Conference on Weights and Measures (CGPM[Note 11]), the International Committee for Weights and Measures (CIPM[Note 12]), and the International Bureau of Weights and Measures (BIPM[Note 14]). The ultimate authority rests with the CGPM, which is a plenary body through which its Member States[Note 48] act together on matters related to measurement science and measurement standards; it usually convenes every four years.[28] The CGPM elects the CIPM, which is an 18-person committee of eminent scientists. The CIPM operates based on the advice of a number of its Consultative Committees, which bring together the world's experts in their specified fields as advisers on scientific and technical matters.[29][Note 49] One of these committees is the Consultative Committee for Units (CCU), which is responsible for matters related to the development of the International System of Units (SI), preparation of successive editions of the SI brochure, and advice to the CIPM on matters concerning units of measurement.[30] It is the CCU which considers in detail all new scientific and technological developments related to the definition of units and the SI. In practice, when it comes to the definition of the SI, the CGPM simply formally approves the recommendations of the CIPM, which, in turn, follows the advice of the CCU.
84
+
85
+ The CCU has the following as members:[31][32] national laboratories of the Member States of the CGPM charged with establishing national standards;[Note 50] relevant intergovernmental organisations and international bodies;[Note 51]
86
+ international commissions or committees;[Note 52]
87
+ scientific unions;[Note 53] personal members;[Note 54]
88
+ and, as an ex officio member of all Consultative Committees, the Director of the BIPM.
89
+
90
+ All the decisions and recommendations concerning units are collected in a brochure called The International System of Units (SI)[2][Note 13], which is published by the BIPM and periodically updated.
91
+
92
+ The International System of Units consists of a set of base units, derived units, and a set of decimal-based multipliers that are used as prefixes.[27]:103–106 The units, excluding prefixed units,[Note 55] form a coherent system of units, which is based on a system of quantities in such a way that the equations between the numerical values expressed in coherent units have exactly the same form, including numerical factors, as the corresponding equations between the quantities. For example, 1 N = 1 kg × 1 m/s2 says that one newton is the force required to accelerate a mass of one kilogram at one metre per second squared, as related through the principle of coherence to the equation relating the corresponding quantities: F = m × a.
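+ A concrete illustration of that coherence (the numbers are arbitrary): a mass of 2 kg accelerated at 3 m/s2 experiences a force F = 2 × 3 = 6 N, with no additional numerical factor; if the mass were instead expressed in the non-coherent unit gram, the same calculation would give 2000 g × 3 m/s2 = 6000 g⋅m/s2, and a conversion factor is then needed because 1 N = 1000 g⋅m/s2.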
93
+
94
+ Derived units apply to derived quantities, which may by definition be expressed in terms of base quantities, and thus are not independent; for example, electrical conductance is the inverse of electrical resistance, with the consequence that the siemens is the inverse of the ohm, and similarly, the ohm and siemens can be replaced with a ratio of an ampere and a volt, because those quantities bear a defined relationship to each other.[Note 56] Other useful derived quantities can be specified in terms of the SI base and derived units that have no named units in the SI system, such as acceleration, which is defined in SI units as m/s2.
95
+
96
+ The SI base units are the building blocks of the system and all the other units are derived from them.
97
+
98
+ The derived units in the SI are formed by powers, products, or quotients of the base units and are potentially unlimited in number.[27]:103[35]:14,16 Derived units are associated with derived quantities; for example, velocity is a quantity that is derived from the base quantities of time and length, and thus the SI derived unit is metre per second (symbol m/s). The dimensions of derived units can be expressed in terms of the dimensions of the base units.
99
+
100
+ Combinations of base and derived units may be used to express other derived units. For example, the SI unit of force is the newton (N), the SI unit of pressure is the pascal (Pa)—and the pascal can be defined as one newton per square metre (N/m2).[38]
101
+
102
+ Prefixes are added to unit names to produce multiples and submultiples of the original unit. All of these are integer powers of ten, and above a hundred or below a hundredth all are integer powers of a thousand. For example, kilo- denotes a multiple of a thousand and milli- denotes a multiple of a thousandth, so there are one thousand millimetres to the metre and one thousand metres to the kilometre. The prefixes are never combined, so for example a millionth of a metre is a micrometre, not a millimillimetre. Multiples of the kilogram are named as if the gram were the base unit, so a millionth of a kilogram is a milligram, not a microkilogram.[27]:122[39]:14 When prefixes are used to form multiples and submultiples of SI base and derived units, the resulting units are no longer coherent.[27]:7
103
+
104
+ The BIPM specifies 20 prefixes for the International System of Units (SI):
105
+
106
+ Many non-SI units continue to be used in the scientific, technical, and commercial literature. Some units are deeply embedded in history and culture, and their use has not been entirely replaced by their SI alternatives. The CIPM recognised and acknowledged such traditions by compiling a list of non-SI units accepted for use with SI:[27]
107
+
108
+ Some units of time, angle, and legacy non-SI units have a long history of use. Most societies have used the solar day and its non-decimal subdivisions as a basis of time and, unlike the foot or the pound, these were the same regardless of where they were being measured. The radian, being 1/(2π) of a revolution, has mathematical advantages but is rarely used for navigation. Further, the units used in navigation around the world are similar. The tonne, litre, and hectare were adopted by the CGPM in 1879 and have been retained as units that may be used alongside SI units, having been given unique symbols. The catalogued units are given below:
109
+
110
+ These units are used in combination with SI units in common units such as the kilowatt-hour (1 kW⋅h = 3.6 MJ).
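+ The quoted equivalence is simple arithmetic on the underlying units: 1 kW⋅h = 1000 W × 3600 s = 3.6×106 J = 3.6 MJ.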
111
+
112
+ The basic units of the metric system, as originally defined, represented common quantities or relationships in nature. They still do – the modern precisely defined quantities are refinements of definition and methodology, but still with the same magnitudes. In cases where laboratory precision may not be required or available, or where approximations are good enough, the original definitions may suffice.[Note 57]
113
+
114
+ The symbols for the SI units are intended to be identical, regardless of the language used,[27]:130–135 but names are ordinary nouns and use the character set and follow the grammatical rules of the language concerned. Names of units follow the grammatical rules associated with common nouns: in English and in French they start with a lowercase letter (e.g., newton, hertz, pascal), even when the unit is named after a person and its symbol begins with a capital letter.[27]:148 This also applies to "degrees Celsius", since "degree" is the beginning of the unit.[48][49] The only exceptions are in the beginning of sentences and in headings and publication titles.[27]:148 The English spelling for certain SI units differs: US English uses the spelling deka-, meter, and liter, whilst International English uses deca-, metre, and litre.
115
+
116
+ Although the writing of unit names is language-specific, the writing of unit symbols and the values of quantities is consistent across all languages and therefore the SI Brochure has specific rules in respect of writing them.[27]:130–135 The guideline produced by the National Institute of Standards and Technology (NIST)[50] clarifies language-specific areas in respect of American English that were left open by the SI Brochure, but is otherwise identical to the SI Brochure.[51]
117
+
118
+ General rules[Note 62] for writing SI units and quantities apply to text that is either handwritten or produced using an automated process:
119
+
120
+ The rules covering printing of quantities and units are part of ISO 80000-1:2009.[53]
121
+
122
+ Further rules[Note 62] are specified in respect of production of text using printing presses, word processors, typewriters, and the like.
123
+
124
+ The CGPM publishes a brochure that defines and presents the SI.[27] Its official version is in French, in line with the Metre Convention.[27]:102 It leaves some scope for local variations, particularly regarding unit names and terms in different languages.[Note 63][35]
125
+
126
+ The writing and maintenance of the CGPM brochure is carried out by one of the committees of the International Committee for Weights and Measures (CIPM).
127
+ The definitions of the terms "quantity", "unit", "dimension" etc. that are used in the SI Brochure are those given in the International vocabulary of metrology.[54]
128
+
129
+ The quantities and equations that provide the context in which the SI units are defined are now referred to as the International System of Quantities (ISQ).
130
+ The ISQ is based on the quantities underlying each of the seven base units of the SI. Other quantities, such as area, pressure, and electrical resistance, are derived from these base quantities by clear non-contradictory equations. The ISQ defines the quantities that are measured with the SI units.[55] The ISQ is formalised, in part, in the international standard ISO/IEC 80000, which was completed in 2009 with the publication of ISO 80000-1,[56] and has largely been revised in 2019–2020 with the remainder being under review.
131
+
132
+ Metrologists carefully distinguish between the definition of a unit and its realisation. The definition of each base unit of the SI is drawn up so that it is unique and provides a sound theoretical basis on which the most accurate and reproducible measurements can be made. The realisation of the definition of a unit is the procedure by which the definition may be used to establish the value and associated uncertainty of a quantity of the same kind as the unit. A description of the mise en pratique[Note 64] of the base units is given in an electronic appendix to the SI Brochure.[58][27]:168–169
133
+
134
+ The published mise en pratique is not the only way in which a base unit can be determined: the SI Brochure states that "any method consistent with the laws of physics could be used to realise any SI unit."[27]:111 In the current (2016) exercise to overhaul the definitions of the base units, various consultative committees of the CIPM have required that more than one mise en pratique shall be developed for determining the value of each unit.[59] In particular:
135
+
136
+ The International Bureau of Weights and Measures (BIPM) has described SI as "the modern form of metric system".[27]:95 Changing technology has led to an evolution of the definitions and standards that has followed two principal strands – changes to SI itself, and clarification of how to use units of measure that are not part of SI but are still nevertheless used on a worldwide basis.
137
+
138
+ Since 1960 the CGPM has made a number of changes to the SI to meet the needs of specific fields, notably chemistry and radiometry. These are mostly additions to the list of named derived units, and include the mole (symbol mol) for an amount of substance, the pascal (symbol Pa) for pressure, the siemens (symbol S) for electrical conductance, the becquerel (symbol Bq) for "activity referred to a radionuclide", the gray (symbol Gy) for ionising radiation, the sievert (symbol Sv) as the unit of dose equivalent radiation, and the katal (symbol kat) for catalytic activity.[27]:156[63][27]:156[27]:158[27]:159[27]:165
139
+
140
+ The range of defined prefixes pico- (10−12) to tera- (1012) was extended to 10−24 to 1024.[27]:152[27]:158[27]:164
141
+
142
+ The 1960 definition of the standard metre in terms of wavelengths of a specific emission of the krypton 86 atom was replaced with the distance that light travels in vacuum in exactly 1/299792458 second, so that the speed of light is now an exactly specified constant of nature.
143
+
144
+ A few changes to notation conventions have also been made to alleviate lexicographic ambiguities. An analysis under the aegis of CSIRO, published in 2009 by the Royal Society, has pointed out the opportunities to finish the realisation of that goal, to the point of universal zero-ambiguity machine readability.[64]
145
+
146
+ After the metre was redefined in 1960, the International Prototype of the Kilogram (IPK) was the only physical artefact upon which base units (directly the kilogram and indirectly the ampere, mole and candela) depended for their definition, making these units subject to periodic comparisons of national standard kilograms with the IPK.[65] During the 2nd and 3rd Periodic Verification of National Prototypes of the Kilogram, a significant divergence had occurred between the mass of the IPK and all of its official copies stored around the world: the copies had all noticeably increased in mass with respect to the IPK. During extraordinary verifications carried out in 2014 preparatory to redefinition of metric standards, continuing divergence was not confirmed. Nonetheless, the residual and irreducible instability of a physical IPK undermined the reliability of the entire metric system to precision measurement from small (atomic) to large (astrophysical) scales.
147
+
148
+ A proposal was therefore made to redefine the base units in terms of fixed defining constants: in particular, that the kilogram, ampere, kelvin and mole be defined by assigning exact values to the Planck constant (h), the elementary charge (e), the Boltzmann constant (k) and the Avogadro constant (NA), with the second, metre and candela remaining tied to the caesium hyperfine frequency ΔνCs, the speed of light c and the luminous efficacy Kcd.
149
+
150
+ The new definitions were adopted at the 26th CGPM on 16 November 2018, and came into effect on 20 May 2019.[66] The change was adopted by the European Union through Directive (EU) 2019/1258.[67]
151
+
152
+ The units and unit magnitudes of the metric system which became the SI were improvised piecemeal from everyday physical quantities starting in the mid-18th century. Only later were they moulded into an orthogonal coherent decimal system of measurement.
153
+
154
+ The degree centigrade as a unit of temperature resulted from the scale devised by Swedish astronomer Anders Celsius in 1742. His scale counter-intuitively designated 100 as the freezing point of water and 0 as the boiling point. Independently, in 1743, the French physicist Jean-Pierre Christin described a scale with 0 as the freezing point of water and 100 the boiling point. The scale became known as the centi-grade, or 100 gradations of temperature, scale.
155
+
156
+ The metric system was developed from 1791 onwards by a committee of the French Academy of Sciences, commissioned to create a unified and rational system of measures.[69] The group, which included preeminent French men of science,[70]:89 used the same principles for relating length, volume, and mass that had been proposed by the English clergyman John Wilkins in 1668[71][72] and the concept of using the Earth's meridian as the basis of the definition of length, originally proposed in 1670 by the French abbot Mouton.[73][74]
157
+
158
+ In March 1791, the Assembly adopted the committee's proposed principles for the new decimal system of measure including the metre defined to be 1/10,000,000 of the length of the quadrant of Earth's meridian passing through Paris, and authorised a survey to precisely establish the length of the meridian. In July 1792, the committee proposed the names metre, are, litre and grave for the units of length, area, capacity, and mass, respectively. The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth and kilo for a thousand.[75]:82
159
+
160
+ Later, during the process of adoption of the metric system, the Latin gramme and kilogramme replaced the former provincial terms gravet (1/1000 grave) and grave. In June 1799, based on the results of the meridian survey, the standard mètre des Archives and kilogramme des Archives were deposited in the French National Archives. Later that year, the metric system was adopted by law in France.[81][82]
161
+ The French system was short-lived due to its unpopularity. Napoleon ridiculed it, and in 1812 introduced a replacement system, the mesures usuelles or "customary measures", which restored many of the old units but redefined them in terms of the metric system.
162
+
163
+ During the first half of the 19th century there was little consistency in the choice of preferred multiples of the base units: typically the myriametre (10000 metres) was in widespread use in both France and parts of Germany, while the kilogram (1000 grams) rather than the myriagram was used for mass.[68]
164
+
165
+ In 1832, the German mathematician Carl Friedrich Gauss, assisted by Wilhelm Weber, implicitly defined the second as a base unit when he quoted the Earth's magnetic field in terms of millimetres, grams, and seconds.[76] Prior to this, the strength of the Earth's magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a suspended magnet of known mass by the Earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions based on mass, length and time to the magnetic field.[Note 65][83]
166
+
167
+ The candlepower as a unit of luminous intensity was originally defined by an 1860 English law as the light produced by a pure spermaceti candle weighing 1⁄6 pound (76 grams) and burning at a specified rate. Spermaceti, a waxy substance found in the heads of sperm whales, was once used to make high-quality candles. At this time the French standard of light was based upon the illumination from a Carcel oil lamp. The unit was defined as that illumination emanating from a lamp burning pure rapeseed oil at a defined rate. It was accepted that ten standard candles were about equal to one Carcel lamp.
168
+
169
+ A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention, also called the Treaty of the Metre, by 17 nations.[Note 66][70]:353–354 Initially the convention only covered standards for the metre and the kilogram. In 1921, the Metre Convention was extended to include all physical units, including the ampere and others, thereby enabling the CGPM to address inconsistencies in the way that the metric system had been used.[77][27]:96
170
+
171
+ A set of 30 prototypes of the metre and 40 prototypes of the kilogram,[Note 67] in each case made of a 90% platinum-10% iridium alloy, were manufactured by the British metallurgy firm Johnson Matthey and accepted by the CGPM in 1889. One of each was selected at random to become the International prototype metre and International prototype kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the remaining prototypes to serve as the national prototype for that country.[84]
172
+
173
+ The treaty also established three international organisations to oversee the keeping of international standards of measurement: the General Conference on Weights and Measures (CGPM), the International Committee for Weights and Measures (CIPM) and the International Bureau of Weights and Measures (BIPM).[85][86]
174
+
175
+
176
+ In the 1860s, James Clerk Maxwell, William Thomson (later Lord Kelvin) and others working under the auspices of the British Association for the Advancement of Science built on Gauss's work and formalised the concept of a coherent system of units with base units and derived units, christened the centimetre–gram–second system of units in 1874. The principle of coherence was successfully used to define a number of units of measure based on the CGS, including the erg for energy, the dyne for force, the barye for pressure, the poise for dynamic viscosity and the stokes for kinematic viscosity.[79]
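+ As an editorial aside (not drawn from the cited reference [79]), the sketch below illustrates the coherence of two of these CGS units and their power-of-ten relation to SI; the constants and names are illustrative only.
+
+ # CGS coherence sketch: dyne = g*cm/s^2, erg = dyn*cm.
+ GRAM, CENTIMETRE = 1e-3, 1e-2          # values of the CGS base units in SI (kg, m)
+ dyne = GRAM * CENTIMETRE               # coherent CGS unit of force, in newtons
+ erg = dyne * CENTIMETRE                # coherent CGS unit of energy, in joules
+ print(dyne, erg)                       # ≈ 1e-05 (N) and ≈ 1e-07 (J)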
177
+
178
+ In 1879, the CIPM published recommendations for writing the symbols for length, area, volume and mass, but it was outside its domain to publish recommendations for other quantities. Beginning in about 1900, physicists who had been using the symbol "μ" (mu) for "micrometre" or "micron", "λ" (lambda) for "microlitre", and "γ" (gamma) for "microgram" started to use the symbols "μm", "μL" and "μg".[87]
179
+
180
+ At the close of the 19th century three different systems of units of measure existed for electrical measurements: a CGS-based system for electrostatic units, also known as the Gaussian or ESU system, a CGS-based system for electromechanical units (EMU), and an International system, based on units defined by the Metre Convention, for electrical distribution systems.[88]
181
+ Attempts to resolve the electrical units in terms of length, mass, and time using dimensional analysis were beset with difficulties: the dimensions depended on whether one used the ESU or EMU system.[80] This anomaly was resolved in 1901 when Giovanni Giorgi published a paper in which he advocated using a fourth base unit alongside the existing three base units. The fourth unit could be chosen to be electric current, voltage, or electrical resistance.[89] Electric current, with the named unit 'ampere', was chosen as the base unit, and the other electrical quantities were derived from it according to the laws of physics. This became the foundation of the MKS system of units.
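+ To illustrate Giorgi's idea, the following editorial sketch (not taken from the cited papers) writes a few electrical units as products of powers of the metre, kilogram, second and ampere; the exponent-dictionary encoding is an arbitrary choice.
+
+ # Hypothetical sketch: derived electrical units in the MKSA scheme,
+ # encoded as exponents of the base units (kg, m, s, A).
+ volt = {"kg": 1, "m": 2, "s": -3, "A": -1}      # V = kg*m^2*s^-3*A^-1 (from P = V*I, P in watts)
+ ohm = {"kg": 1, "m": 2, "s": -3, "A": -2}       # ohm = V/A
+ siemens = {k: -v for k, v in ohm.items()}        # S = 1/ohm
+ print(siemens)                                   # {'kg': -1, 'm': -2, 's': 3, 'A': 2}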
182
+
183
+ In the late 19th and early 20th centuries, a number of non-coherent units of measure based on the gram/kilogram, centimetre/metre, and second, such as the Pferdestärke (metric horsepower) for power,[90][Note 68] the darcy for permeability[91] and "millimetres of mercury" for barometric and blood pressure were developed or propagated, some of which incorporated standard gravity in their definitions.[Note 69]
184
+
185
+ At the end of the Second World War, a number of different systems of measurement were in use throughout the world. Some of these systems were metric system variations; others were based on customary systems of measure, like the U.S customary system and Imperial system of the UK and British Empire.
186
+
187
+ In 1948, the 9th CGPM commissioned a study to assess the measurement needs of the scientific, technical, and educational communities and "to make recommendations for a single practical system of units of measurement, suitable for adoption by all countries adhering to the Metre Convention".[92] The resulting working document was Practical system of units of measurement. Based on this study, the 10th CGPM in 1954 defined an international system derived from six base units, including units of temperature and optical radiation in addition to the MKS system's mass, length, and time units and Giorgi's current unit. Six base units were recommended: the metre, kilogram, second, ampere, degree Kelvin, and candela.
188
+
189
+ The 9th CGPM also approved the first formal recommendation for the writing of symbols in the metric system when the basis of the rules as they are now known was laid down.[93] These rules were subsequently extended and now cover unit symbols and names, prefix symbols and names, how quantity symbols should be written and used, and how the values of quantities should be expressed.[27]:104,130
190
+
191
+ In 1960, the 11th CGPM synthesised the results of the 12-year study into a set of 16 resolutions. The system was named the International System of Units, abbreviated SI from the French name, Le Système International d'Unités.[27]:110[94]
192
+
193
+ When Maxwell first introduced the concept of a coherent system, he identified three quantities that could be used as base units: mass, length, and time. Giorgi later identified the need for an electrical base unit, for which the unit of electric current was chosen for SI. Another three base units (for temperature, amount of substance, and luminous intensity) were added later.
194
+
195
+ The early metric systems defined a unit of weight as a base unit, while the SI defines an analogous unit of mass. In everyday use, these are mostly interchangeable, but in scientific contexts the difference matters. Mass, strictly the inertial mass, represents a quantity of matter. It relates the acceleration of a body to the applied force via Newton's law, F = m × a: force equals mass times acceleration. A force of 1 N (newton) applied to a mass of 1 kg will accelerate it at 1 m/s2. This is true whether the object is floating in space or in a gravity field, e.g. at the Earth's surface. Weight is the force exerted on a body by a gravitational field, and hence its weight depends on the strength of the gravitational field. The weight of a 1 kg mass at the Earth's surface is m × g, mass times the acceleration due to gravity, which comes to about 9.81 newtons at the Earth's surface and about 3.7 newtons at the surface of Mars. Since the acceleration due to gravity is local and varies by location and altitude on the Earth, weight is unsuitable for precision measurements of a property of a body, and this makes a unit of weight unsuitable as a base unit.
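+ The arithmetic in the previous paragraph can be checked directly; the following editorial sketch uses approximate surface-gravity figures and arbitrary variable names.
+
+ # Illustrative check of weight = mass * local gravitational acceleration.
+ mass_kg = 1.0
+ g_earth = 9.81   # approximate acceleration due to gravity at Earth's surface (m/s^2)
+ g_mars = 3.72    # approximate value at the surface of Mars (m/s^2)
+ print(mass_kg * g_earth)   # ≈ 9.81 N on Earth
+ print(mass_kg * g_mars)    # ≈ 3.7 N on Mars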
196
+
197
+ The prior definitions of the various base units were made by a range of authors and authorities.
198
+
199
+ All other definitions result from resolutions by either the CGPM or the CIPM and are catalogued in the SI Brochure.
200
+
201
+ Although the term metric system is often used as an informal alternative name for the International System of Units,[98] other metric systems exist, some of which were in widespread use in the past or are even still used in particular areas. There are also individual metric units such as the sverdrup that exist outside of any system of units. Most of the units of the other metric systems are not recognised by the SI.[Note 71][Note 73] Here are some examples. The centimetre–gram–second (CGS) system was the dominant metric system in the physical sciences and electrical engineering from the 1860s until at least the 1960s, and is still in use in some fields. It includes such SI-unrecognised units as the gal, dyne, erg, barye, etc. in its mechanical sector, as well as the poise and stokes in fluid dynamics. When it comes to the units for quantities in electricity and magnetism, there are several versions of the CGS system. Two of these are obsolete: the CGS electrostatic ('CGS-ESU', with the SI-unrecognised units of statcoulomb, statvolt, statampere, etc.) and the CGS electromagnetic system ('CGS-EMU', with abampere, abcoulomb, oersted, maxwell, abhenry, gilbert, etc.).[Note 74] A 'blend' of these two systems is still popular and is known as the Gaussian system (which includes the gauss as a special name for the CGS-EMU unit maxwell per square centimetre).[Note 75] In engineering (other than electrical engineering), there was formerly a long tradition of using the gravitational metric system, whose SI-unrecognised units include the kilogram-force (kilopond), technical atmosphere, metric horsepower, etc. The metre–tonne–second (mts) system, used in the Soviet Union from 1933 to 1955, had such SI-unrecognised units as the sthène, pièze, etc. Other groups of SI-unrecognised metric units are the various legacy and CGS units related to ionising radiation (rutherford, curie, roentgen, rad, rem, etc.), radiometry (langley, jansky), photometry (phot, nox, stilb, nit, metre-candle,[102]:17 lambert, apostilb, skot, brill, troland, talbot, candlepower, candle), thermodynamics (calorie), and spectroscopy (reciprocal centimetre). The angstrom is still used in various fields. Some other SI-unrecognised metric units that don't fit into any of the already mentioned categories include the are, bar, barn, fermi,[103]:20–21 gradian (gon, grad, or grade), metric carat, micron, millimetre of mercury, torr, millimetre (or centimetre, or metre) of water, millimicron, mho, stere, x unit, γ (unit of mass), γ (unit of magnetic flux density), and λ (unit of volume).[citation needed] In some cases, the SI-unrecognised metric units have equivalent SI units formed by combining a metric prefix with a coherent SI unit. For example, 1 γ (unit of magnetic flux density) = 1 nT, 1 Gal = 1 cm⋅s−2, 1 barye = 1 decipascal, etc. (a related group are the correspondences[Note 74] such as 1 abampere ≘ 1 decaampere, 1 abhenry ≘ 1 nanohenry, etc.[Note 76]). Sometimes it is not even a matter of a metric prefix: the SI-nonrecognised unit may be exactly the same as an SI coherent unit, except for the fact that the SI does not recognise the special name and symbol. For example, the nit is just an SI-unrecognised name for the SI unit candela per square metre and the talbot is an SI-unrecognised name for the SI unit lumen second. Frequently, a non-SI metric unit is related to an SI unit through a power of ten factor, but not one that has a metric prefix, e.g. 1 dyn = 10−5 newton, 1 Å = 10−10 m, etc. 
+ (and correspondences[Note 74] like 1 gauss ≘ 10−4 tesla). Finally, there are metric units whose conversion factors to SI units are not powers of ten, e.g. 1 calorie = 4.184 joules and 1 kilogram-force = 9.80665 newtons. Some SI-unrecognised metric units are still frequently used, e.g. the calorie (in nutrition), the rem (in the U.S.), the jansky (in radio astronomy), the reciprocal centimetre (in spectroscopy), the gauss (in industry) and the CGS-Gaussian units[Note 75] more generally (in some subfields of physics), the metric horsepower (for engine power, in Europe), the kilogram-force (for rocket engine thrust, in China and sometimes in Europe), etc. Others are now rarely used, such as the sthène and the rutherford.
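+ As an editorial aid (not part of the source text), the sketch below collects a few of the conversion factors mentioned above into a small lookup table; the selection of units and the names are arbitrary.
+
+ # A few SI-unrecognised metric units and their values in coherent SI units.
+ to_si = {
+     "dyne (dyn)": 1e-5,                # newtons
+     "erg": 1e-7,                       # joules
+     "angstrom (Å)": 1e-10,             # metres
+     "gal (Gal)": 1e-2,                 # m/s^2 (1 cm/s^2)
+     "calorie (cal)": 4.184,            # joules
+     "kilogram-force (kgf)": 9.80665,   # newtons
+ }
+ # Example: express 250 kcal in megajoules.
+ print(250_000 * to_si["calorie (cal)"] / 1e6)   # 1.046 (MJ)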
202
+
203
+ Organisations
204
+
205
+ Standards and conventions
206
+
207
+ It is therefore the declared policy of the United States-
208
+
209
+ (1) to designate the metric system of measurement as the preferred system of weights and measures for United States trade and commerce;
210
+
211
+ (2) to require that each Federal agency, by a date certain and to the extent economically feasible by the end of the fiscal year 1992, use the metric system of measurement in its procurements, grants, and other business-related activities, except to the extent that such use is impractical or is likely to cause significant inefficiencies or loss of markets to United States firms, such as when foreign competitors are producing competing products in non-metric units;
212
+
213
+ (3) to seek out ways to increase understanding of the metric system of measurement through educational information and guidance and in Government publications; and
214
+
215
+ (4) to permit the continued use of traditional systems of weights and measures in non-business activities.
216
+
217
+ The unit of length is the metre, defined by the distance, at 0°, between the axes of the two central lines marked on the bar of platinum-iridium kept at the Bureau International des Poids et Mesures and declared Prototype of the metre by the 1st Conférence Générale des Poids et Mesures, this bar being subject to standard atmospheric pressure and supported on two cylinders of at least one centimetre diameter, symmetrically placed in the same horizontal plane at a distance of 571 mm from each other.
218
+
219
+ We shall in the first place describe the state of the Standards recovered from the ruins of the House of Commons, as ascertained in our inspection of them made on 1st June, 1838, at the Journal Office, where they are preserved under the care of Mr. James Gudge, Principal Clerk of the Journal Office. The following list, taken by ourselves from inspection, was compared with a list produced by Mr. Gudge, and stated by him to have been made by Mr. Charles Rowland, one of the Clerks of the Journal Office, immediately after the fire, and was found to agree with it. Mr. Gudge stated that no other Standards of Length or Weight were in his custody.
220
+
221
+ No. 1. A brass bar marked “Standard [G. II. crown emblem] Yard, 1758,” which on examination was found to have its right hand stud perfect, with the point and line visible, but with its left hand stud completely melted out, a hole only remaining. The bar was somewhat bent, and discoloured in every part.
222
+
223
+ No. 2. A brass bar with a projecting cock at each end, forming a bed for the trial of yard-measures; discoloured.
224
+
225
+ No. 3. A brass bar marked “Standard [G. II. crown emblem] Yard, 1760,” from which the left hand stud was completely melted out, and which in other respects was in the same condition as No. 1.
226
+
227
+ No. 4. A yard-bed similar to No. 2; discoloured.
228
+
229
+ No. 5. A weight of the form [drawing of a weight] marked [2 lb. T. 1758], apparently of brass or copper; much discoloured.
230
+
231
+ No. 6. A weight marked in the same manner for 4 lbs., in the same state.
232
+
233
+ No. 7. A weight similar to No. 6, with a hollow space at its base, which appeared at first sight to have been originally filled with some soft metal that had been now melted out, but which on a rough trial was found to have nearly the same weight as No. 6.
234
+
235
+ No. 8. A similar weight of 8 lbs., similarly marked (with the alteration of 8 lbs. for 4 lbs.), and in the same state.
236
+
237
+ No. 9. Another exactly like No. 8.
238
+
239
+ Nos. 10 and 11. Two weights of 16 lbs., similarly marked.
240
+
241
+ Nos. 12 and 13. Two weights of 32 lbs., similarly marked.
242
+
243
+ No. 14. A weight with a triangular ring-handle, marked "S.F. 1759 17 lbs. 8 dwts. Troy", apparently intended to represent the stone of 14 lbs. avoirdupois, allowing 7008 troy grains to each avoirdupois pound.
244
+
245
+ It appears from this list that the bar adopted in the Act 5th Geo. IV., cap. 74, sect. 1, for the legal standard of one yard, (No. 3 of the preceding list), is so far injured, that it is impossible to ascertain from it, with the most moderate accuracy, the statutable length of one yard. The legal standard of one troy pound is missing. We have therefore to report that it is absolutely necessary that steps be taken for the formation and legalising of new Standards of Length and Weight.
246
+
247
+ [t]he bronze yard No. 11, which was an exact copy of the British imperial yard both in form and material, had shown changes when compared with the imperial yard in 1876 and 1888 which could not reasonably be said to be entirely due to changes in No. 11. Suspicion as to the constancy of the length of the British standard was therefore aroused.
248
+
249
+ In 1890, as a signatory of the Metre Convention, the US received two copies of the International Prototype Metre, the construction of which represented the most advanced ideas of standards of the time. Therefore it seemed that US measures would have greater stability and higher accuracy by accepting the international metre as fundamental standard, which was formalised in 1893 by the Mendenhall Order.[25]:379–81
250
+
en/5573.html.txt ADDED
@@ -0,0 +1,250 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ The International System of Units (SI, abbreviated from the French Système international (d'unités)) is the modern form of the metric system. It is the only system of measurement with an official status in nearly every country in the world. It comprises a coherent system of units of measurement starting with seven base units, which are the second (the unit of time with the symbol s), metre (length, m), kilogram (mass, kg), ampere (electric current, A), kelvin (thermodynamic temperature, K), mole (amount of substance, mol), and candela (luminous intensity, cd). The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units.[Note 1] Twenty-two derived units have been provided with special names and symbols.[Note 2] The seven base units and the 22 derived units with special names and symbols may be used in combination to express other derived units,[Note 3] which are adopted to facilitate measurement of diverse quantities. The SI system also provides twenty prefixes to the unit names and unit symbols that may be used when specifying power-of-ten (i.e. decimal) multiples and sub-multiples of SI units. The SI is intended to be an evolving system; units and prefixes are created and unit definitions are modified through international agreement as the technology of measurement progresses and the precision of measurements improves.
4
+
5
+ Since 2019, the magnitudes of all SI units have been defined by declaring exact numerical values for seven defining constants when expressed in terms of their SI units. These defining constants are the speed of light in vacuum, c, the hyperfine transition frequency of caesium ΔνCs, the Planck constant h, the elementary charge e, the Boltzmann constant k, the Avogadro constant NA, and the luminous efficacy Kcd. The nature of the defining constants ranges from fundamental constants of nature such as c to the purely technical constant Kcd. Prior to 2019, h, e, k, and NA were not defined a priori but were rather very precisely measured quantities. In 2019, their values were fixed by definition to their best estimates at the time, ensuring continuity with previous definitions of the base units. One consequence of the redefinition of the SI is that the distinction between the base units and derived units is in principle not needed, since any unit can be constructed directly from the seven defining constants.[2]:129
6
+
7
+ The current way of defining the SI system is a result of a decades-long move towards increasingly abstract and idealised formulation in which the realisations of the units are separated conceptually from the definitions. A consequence is that as science and technologies develop, new and superior realisations may be introduced without the need to redefine the unit. One problem with artefacts is that they can be lost, damaged, or changed; another is that they introduce uncertainties that cannot be reduced by advancements in science and technology. The last artefact used by the SI was the International Prototype of the Kilogram, a cylinder of platinum-iridium.
8
+
9
+ The original motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as a result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units (MKS) rather than any variant of the CGS.
10
+
11
+ The International System of Units, the SI,[2]:123 is a decimal[Note 4] and metric[Note 5] system of units established in 1960 and periodically updated since then. The SI has an official status in most countries,[Note 6] including the United States[Note 8] and the United Kingdom, with these two countries being amongst a handful of nations that, to various degrees, continue to resist widespread internal adoption of the SI system. Nevertheless, the SI “has been used around the world as the preferred system of units, the basic language for science, technology, industry and trade.”[2]:123
12
+
13
+ The only other types of measurement system that still have widespread use across the world are the Imperial and US customary measurement systems, and they are legally defined in terms of the SI system.[Note 9] There are other, less widespread systems of measurement that are occasionally used in particular regions of the world. In addition, there are many individual non-SI units that don't belong to any comprehensive system of units, but that are nevertheless still regularly used in particular fields and regions. Both of these categories of unit are also typically defined legally in terms of SI units.[Note 10]
14
+
15
+ The SI was established and is maintained by the General Conference on Weights and Measures (CGPM[Note 11]).[4] In practice, the CGPM follows the recommendations of the Consultative Committee for Units (CCU), which is the actual body conducting technical deliberations concerning new scientific and technological developments related to the definition of units and the SI. The CCU reports to the International Committee for Weights and Measures (CIPM[Note 12]), which, in turn, reports to the CGPM. See below for more details.
16
+
17
+ All the decisions and recommendations concerning units are collected in a brochure called The International System of Units (SI)[Note 13], which is published by the International Bureau of Weights and Measures (BIPM[Note 14]) and periodically updated.
18
+
19
+ The SI selects seven units to serve as base units, corresponding to seven base physical quantities.[Note 15] They are the second, with the symbol s, which is the SI unit of the physical quantity of time; the metre, symbol m, the SI unit of length; kilogram (kg, the unit of mass); ampere (A, electric current); kelvin (K, thermodynamic temperature); mole (mol, amount of substance); and candela (cd, luminous intensity).[2] Note that 'the choice of the base units was never unique, but grew historically and became familiar to users of the SI'.[2]:126 All units in the SI can be expressed in terms of the base units, and the base units serve as a preferred set for expressing or analysing the relationships between units.
20
+
21
+ The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units, possibly with a nontrivial numeric multiplier. When that multiplier is one, the unit is called a coherent derived unit.[Note 16] The base and coherent derived units of the SI together form a coherent system of units (the set of coherent SI units).[Note 17] Twenty-two coherent derived units have been provided with special names and symbols.[Note 18] The seven base units and the 22 derived units with special names and symbols may be used in combination to express other derived units,[Note 19] which are adopted to facilitate measurement of diverse quantities.
22
+
23
+ Like all metric systems, the SI uses metric prefixes to systematically construct, for one and the same physical quantity, a whole set of units of widely different sizes that are decimal multiples of each other.
24
+
25
+ For example, while the coherent unit of length is the metre,[Note 20] the SI provides a full range of smaller and larger units of length, any of which may be more convenient for any given application – for example, driving distances are normally given in kilometres (symbol km) rather than in metres. Here the metric prefix 'kilo-' (symbol 'k') stands for a factor of 1000; thus, 1 km = 1000 m.[Note 21]
26
+
27
+ The current version of the SI provides twenty metric prefixes that signify decimal powers ranging from 10⁻²⁴ to 10²⁴.[2]:143–4 Apart from the prefixes for 1/100, 1/10, 10, and 100, all the other ones are powers of 1000.
28
+
29
+ In general, given any coherent unit with a separate name and symbol,[Note 22] one forms a new unit by simply adding an appropriate metric prefix to the name of the coherent unit (and a corresponding prefix symbol to the unit's symbol). Since the metric prefix signifies a particular power of ten, the new unit is always a power-of-ten multiple or sub-multiple of the coherent unit. Thus, the conversion between units within the SI is always through a power of ten; this is why the SI, like metric systems more generally, is called a decimal system of measurement units.[6][Note 23]
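+ A minimal editorial sketch (arbitrary names, not from the SI Brochure) of what 'conversion through a power of ten' means in practice:
+
+ # Converting between prefixed length units is multiplication by a power of ten.
+ prefix_power = {"k": 3, "": 0, "c": -2, "m": -3, "µ": -6}   # decimal exponents
+ def convert(value, from_prefix, to_prefix):
+     return value * 10 ** (prefix_power[from_prefix] - prefix_power[to_prefix])
+ print(convert(2.5, "k", ""))    # 2.5 km -> 2500.0 m
+ print(convert(42.0, "m", "µ"))  # 42 mm  -> 42000.0 µm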
30
+
31
+ The grouping formed by a prefix symbol attached to a unit symbol (e.g. 'km', 'cm') constitutes a new inseparable unit symbol. This new symbol can be raised to a positive or negative power and can be combined with other unit symbols to form compound unit symbols.[2]:143 For example, g/cm3 is an SI unit of density, where cm3 is to be interpreted as (cm)3.
32
+
33
+ When prefixes are used with the coherent SI units, the resulting units are no longer coherent, because the prefix introduces a numerical factor other than one.[2]:137 The one exception is the kilogram, the only coherent SI unit whose name and symbol, for historical reasons, include a prefix.[Note 24]
34
+
35
+ The complete set of SI units consists of both the coherent set and the multiples and sub-multiples of coherent units formed by using the SI prefixes.[2]:138 For example, the metre, kilometre, centimetre, nanometre, etc. are all SI units of length, though only the metre is a coherent SI unit. A similar statement holds for derived units: for example, kg/m3, g/dm3, g/cm3, Pg/km3, etc. are all SI units of density, but of these, only kg/m3 is a coherent SI unit.
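+ To make the distinction concrete, the following editorial check (arbitrary names) shows that g/cm3 and the coherent kg/m3 measure the same quantity but differ by a numerical factor:
+
+ # 1 g/cm^3 expressed in the coherent SI unit kg/m^3.
+ gram, centimetre = 1e-3, 1e-2          # in kg and m
+ print(gram / centimetre**3)            # ≈ 1000 kg/m^3
+ # The factor of 1000 is exactly why g/cm^3, although an SI unit, is not coherent.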
36
+
37
+ Moreover, the metre is the only coherent SI unit of length. Every physical quantity has exactly one coherent SI unit, although this unit may be expressible in different forms by using some of the special names and symbols.[2]:140 For example, the coherent SI unit of linear momentum may be written as either kg⋅m/s or as N⋅s, and both forms are in use (e.g. compare respectively here[7]:205 and here[8]:135).
38
+
39
+ On the other hand, several different quantities may share the same coherent SI unit. For example, the joule per kelvin is the coherent SI unit for two distinct quantities: heat capacity and entropy. Furthermore, the same coherent SI unit may be a base unit in one context, but a coherent derived unit in another. For example, the ampere is the coherent SI unit for both electric current and magnetomotive force, but it is a base unit in the former case and a derived unit in the latter.[2]:140[Note 26]
40
+
41
+ There is a special group of units that are called 'non-SI units that are accepted for use with the SI'.[2]:145 See Non-SI units mentioned in the SI for a full list. Most of these, in order to be converted to the corresponding SI unit, require conversion factors that are not powers of ten. Some common examples of such units are the customary units of time, namely the minute (conversion factor of 60 s/min, since 1 min = 60 s), the hour (3600 s), and the day (86400 s); the degree (for measuring plane angles, 1° = π/180 rad); and the electronvolt (a unit of energy, 1 eV = 1.602176634×10−19 J).
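+ The conversion factors quoted above can be collected into a small editorial sketch (the selection of units and the names are arbitrary):
+
+ import math
+ # Non-SI units accepted for use with the SI, with their values in SI units.
+ accepted = {
+     "minute": 60.0,                    # seconds
+     "hour": 3600.0,                    # seconds
+     "day": 86400.0,                    # seconds
+     "degree": math.pi / 180.0,         # radians
+     "electronvolt": 1.602176634e-19,   # joules
+ }
+ print(accepted["day"] / accepted["hour"])   # 24.0 hours in a day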
42
+
43
+ The SI is intended to be an evolving system; units[Note 27] and prefixes are created and unit definitions are modified through international agreement as the technology of measurement progresses and the precision of measurements improves.
44
+
45
+ Since 2019, the magnitudes of all SI units have been defined in an abstract way, which is conceptually separated from any practical realisation of them.[2]:126[Note 28] Namely, the SI units are defined by declaring that seven defining constants[2]:125–9 have certain exact numerical values when expressed in terms of their SI units. Probably the most widely known of these constants is the speed of light in vacuum, c, which in the SI by definition has the exact value of c = 299792458 m/s. The other six constants are ΔνCs, the hyperfine transition frequency of caesium; h, the Planck constant; e, the elementary charge; k, the Boltzmann constant; NA, the Avogadro constant; and Kcd, the luminous efficacy of monochromatic radiation of frequency 540×10¹² Hz.[Note 29] The nature of the defining constants ranges from fundamental constants of nature such as c to the purely technical constant Kcd.[2]:128–9 Prior to 2019, h, e, k, and NA were not defined a priori but were rather very precisely measured quantities. In 2019, their values were fixed by definition to their best estimates at the time, ensuring continuity with previous definitions of the base units.
61
+
62
+ As far as realisations are concerned, what are believed to be the current best practical realisations of units are described in the so-called 'mises en pratique',[Note 30] which are also published by the BIPM.[11] The abstract nature of the definitions of units is what makes it possible to improve and change the mises en pratique as science and technology develop without having to change the actual definitions themselves.[Note 33]
63
+
64
+ In a sense, this way of defining the SI units is no more abstract than the way derived units are traditionally defined in terms of the base units. Let us consider a particular derived unit, say the joule, the unit of energy. Its definition in terms of the base units is kg⋅m2/s2. Even if the practical realisations of the metre, kilogram, and second are available, we do not thereby immediately have a practical realisation of the joule; such a realisation will require some sort of reference to the underlying physical definition of work or energy—some actual physical procedure (a mise en pratique, if you will) for realising the energy in the amount of one joule such that it can be compared to other instances of energy (such as the energy content of gasoline put into a car or of electricity delivered to a household).
65
+
66
+ The situation with the defining constants and all of the SI units is analogous. In fact, purely mathematically speaking, the SI units are defined as if we declared that it is the defining constant's units that are now the base units, with all other SI units being derived units. To make this clearer, first note that each defining constant can be taken as determining the magnitude of that defining constant's unit of measurement;[2]:128 for example, the definition of c defines the unit m/s as 1 m/s = c/299792458 ('the speed of one metre per second is equal to one 299792458th of the speed of light'). In this way, the defining constants directly define the following seven units: the hertz (Hz), a unit of the physical quantity of frequency (note that problems can arise when dealing with frequency or the Planck constant because the units of angular measure (cycle or radian) are omitted in SI.[12][13][14][15][16]); the metre per second (m/s), a unit of speed; joule-second (J⋅s), a unit of action; coulomb (C), a unit of electric charge; joule per kelvin (J/K), a unit of both entropy and heat capacity; the inverse mole (mol−1), a unit of a conversion constant between the amount of substance and the number of elementary entities (atoms, molecules, etc.); and lumen per watt (lm/W), a unit of a conversion constant between the physical power carried by electromagnetic radiation and the intrinsic ability of that same radiation to produce visual perception of brightness in humans. Further, one can show, using dimensional analysis, that every coherent SI unit (whether base or derived) can be written as a unique product of powers of the units of the SI defining constants (in complete analogy to the fact that every coherent derived SI unit can be written as a unique product of powers of the base SI units). For example, the kilogram can be written as kg = (Hz)(J⋅s)/(m/s)2.[Note 34] Thus, the kilogram is defined in terms of the three defining constants ΔνCs, c, and h because, on the one hand, these three defining constants respectively define the units Hz, m/s, and J⋅s,[Note 35] while, on the other hand, the kilogram can be written in terms of these three units, namely, kg = (Hz)(J⋅s)/(m/s)2.[Note 36] True, the question of how to actually realise the kilogram in practice would, at this point, still be open, but that is not really different from the fact that the question of how to actually realise the joule in practice is still in principle open even once one has achieved the practical realisations of the metre, kilogram, and second.
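+ The dimensional bookkeeping behind kg = (Hz)(J⋅s)/(m/s)2 can be checked mechanically; the editorial sketch below uses an ad-hoc exponent encoding and confirms that only the kilogram survives.
+
+ from collections import Counter
+ # Units as exponents of the base units kg, m, s.
+ Hz = Counter({"s": -1})
+ J_s = Counter({"kg": 1, "m": 2, "s": -1})        # joule-second: kg*m^2*s^-2 * s
+ m_per_s = Counter({"m": 1, "s": -1})
+ def combine(*terms):
+     total = Counter()
+     for unit, power in terms:
+         for base, exp in unit.items():
+             total[base] += exp * power
+     return {b: e for b, e in total.items() if e != 0}
+ # (Hz)^1 * (J*s)^1 * (m/s)^-2 leaves only the kilogram.
+ print(combine((Hz, 1), (J_s, 1), (m_per_s, -2)))   # {'kg': 1}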
67
+
68
+ One consequence of the redefinition of the SI is that the distinction between the base units and derived units is in principle not needed, since any unit can be constructed directly from the seven defining constants. Nevertheless, the distinction is retained because 'it is useful and historically well established', and also because the ISO/IEC 80000 series of standards[Note 37] specifies base and derived quantities that necessarily have the corresponding SI units.[2]:129
69
+
70
+ The current way of defining the SI system is the result of a decades-long move towards increasingly abstract and idealised formulation in which the realisations of the units are separated conceptually from the definitions.[2]:126
72
+
73
+ The great advantage of doing it this way is that as science and technologies develop, new and superior realisations may be introduced without the need to redefine the units.[Note 31] Units can now be realised with ‘an accuracy that is ultimately limited only by the quantum structure of nature and our technical abilities but not by the definitions themselves.[Note 32] Any valid equation of physics relating the defining constants to a unit can be used to realise the unit, thus creating opportunities for innovation... with increasing accuracy as technology proceeds.’[2]:122 In practice, the CIPM Consultative Committees provide so-called "mises en pratique" (practical techniques),[11] which are the descriptions of what are currently believed to be best experimental realisations of the units.[19]
74
+
75
+ This system lacks the conceptual simplicity of using artefacts (referred to as prototypes) as realisations of units to define those units: with prototypes, the definition and the realisation are one and the same.[Note 38] However, using artefacts has two major disadvantages that, as soon as it is technologically and scientifically feasible, result in abandoning them as means for defining units.[Note 42] One major disadvantage is that artefacts can be lost, damaged,[Note 44] or changed.[Note 45] The other is that they largely cannot benefit from advancements in science and technology. The last artefact used by the SI was the International Prototype Kilogram (IPK), a particular cylinder of platinum-iridium; from 1889 to 2019, the kilogram was by definition equal to the mass of the IPK. Concerns regarding its stability on the one hand, and progress in precise measurements of the Planck constant and the Avogadro constant on the other, led to a revision of the definition of the base units, put into effect on 20 May 2019.[26] This was the biggest change in the SI system since it was first formally defined and established in 1960,[citation needed] and it resulted in the definitions described above.
76
+
77
+ In the past, there were also various other approaches to the definitions of some of the SI units. One made use of a specific physical state of a specific substance (the triple point of water, which was used in the definition of the kelvin[27]:113–4); others referred to idealised experimental prescriptions[2]:125 (as in the case of the former SI definition of the ampere[27]:113 and the former SI definition (originally enacted in 1979) of the candela[27]:115).
78
+
79
+ In the future, the set of defining constants used by the SI may be modified as more stable constants are found, or if it turns out that other constants can be more precisely measured.[Note 46]
80
+
81
+ The original motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as a result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units (MKS) rather than any variant of the CGS.
82
+
83
+ The SI is regulated and continually developed by three international organisations that were established in 1875 under the terms of the Metre Convention. They are the General Conference on Weights and Measures (CGPM[Note 11]), the International Committee for Weights and Measures (CIPM[Note 12]), and the International Bureau of Weights and Measures (BIPM[Note 14]). The ultimate authority rests with the CGPM, which is a plenary body through which its Member States[Note 48] act together on matters related to measurement science and measurement standards; it usually convenes every four years.[28] The CGPM elects the CIPM, which is an 18-person committee of eminent scientists. The CIPM operates based on the advice of a number of its Consultative Committees, which bring together the world's experts in their specified fields as advisers on scientific and technical matters.[29][Note 49] One of these committees is the Consultative Committee for Units (CCU), which is responsible for matters related to the development of the International System of Units (SI), preparation of successive editions of the SI brochure, and advice to the CIPM on matters concerning units of measurement.[30] It is the CCU which considers in detail all new scientific and technological developments related to the definition of units and the SI. In practice, when it comes to the definition of the SI, the CGPM simply formally approves the recommendations of the CIPM, which, in turn, follows the advice of the CCU.
84
+
85
+ The CCU has the following as members:[31][32] national laboratories of the Member States of the CGPM charged with establishing national standards;[Note 50] relevant intergovernmental organisations and international bodies;[Note 51]
86
+ international commissions or committees;[Note 52]
87
+ scientific unions;[Note 53] personal members;[Note 54]
88
+ and, as an ex officio member of all Consultative Committees, the Director of the BIPM.
89
+
90
+ All the decisions and recommendations concerning units are collected in a brochure called The International System of Units (SI)[2][Note 13], which is published by the BIPM and periodically updated.
91
+
92
+ The International System of Units consists of a set of base units, derived units, and a set of decimal-based multipliers that are used as prefixes.[27]:103–106 The units, excluding prefixed units,[Note 55] form a coherent system of units, which is based on a system of quantities in such a way that the equations between the numerical values expressed in coherent units have exactly the same form, including numerical factors, as the corresponding equations between the quantities. For example, 1 N = 1 kg × 1 m/s2 says that one newton is the force required to accelerate a mass of one kilogram at one metre per second squared, as related through the principle of coherence to the equation relating the corresponding quantities: F = m × a.
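+ A small editorial check of the coherence property described above: with coherent units the numerical factor in F = m × a is exactly one, whereas a non-coherent choice of units introduces a conversion factor.
+
+ # Coherent: mass in kg, acceleration in m/s^2, force in N.
+ m, a = 1.0, 1.0
+ print(m * a)                # 1.0 N, no extra numerical factor
+ # Non-coherent choice: mass in grams, force still wanted in newtons,
+ # so a factor of 10^-3 appears and the numerical-value equation changes form.
+ m_grams = 1000.0
+ print(m_grams * 1e-3 * a)   # 1.0 N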
93
+
94
+ Derived units apply to derived quantities, which may by definition be expressed in terms of base quantities, and thus are not independent; for example, electrical conductance is the inverse of electrical resistance, with the consequence that the siemens is the inverse of the ohm, and similarly, the ohm and siemens can be replaced with a ratio of an ampere and a volt, because those quantities bear a defined relationship to each other.[Note 56] Other useful derived quantities have no named units in the SI system and are specified in terms of the SI base and derived units; for example, acceleration is expressed in SI units as m/s2.
95
+
96
+ The SI base units are the building blocks of the system and all the other units are derived from them.
97
+
98
+ The derived units in the SI are formed by powers, products, or quotients of the base units and are potentially unlimited in number.[27]:103[35]:14,16 Derived units are associated with derived quantities; for example, velocity is a quantity that is derived from the base quantities of time and length, and thus the SI derived unit is metre per second (symbol m/s). The dimensions of derived units can be expressed in terms of the dimensions of the base units.
99
+
100
+ Combinations of base and derived units may be used to express other derived units. For example, the SI unit of force is the newton (N), the SI unit of pressure is the pascal (Pa)—and the pascal can be defined as one newton per square metre (N/m2).[38]
101
+
102
+ Prefixes are added to unit names to produce multiples and submultiples of the original unit. All of these are integer powers of ten, and above a hundred or below a hundredth all are integer powers of a thousand. For example, kilo- denotes a multiple of a thousand and milli- denotes a multiple of a thousandth, so there are one thousand millimetres to the metre and one thousand metres to the kilometre. The prefixes are never combined, so for example a millionth of a metre is a micrometre, not a millimillimetre. Multiples of the kilogram are named as if the gram were the base unit, so a millionth of a kilogram is a milligram, not a microkilogram.[27]:122[39]:14 When prefixes are used to form multiples and submultiples of SI base and derived units, the resulting units are no longer coherent.[27]:7
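+ A brief editorial sketch of the naming rules just described, including the kilogram's special status; the helper function and its name are invented for illustration.
+
+ # Multiples of the kilogram are named as if the gram were the base unit.
+ def mass_unit_name(exponent_relative_to_gram):
+     names = {3: "kilogram", 0: "gram", -3: "milligram", -6: "microgram"}
+     return names[exponent_relative_to_gram]
+ # A millionth of a kilogram is 10^-3 gram, i.e. a milligram, never a "microkilogram".
+ print(mass_unit_name(3 - 6))    # milligram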
103
+
104
+ The BIPM specifies 20 prefixes for the International System of Units (SI):
105
+
106
+ Many non-SI units continue to be used in the scientific, technical, and commercial literature. Some units are deeply embedded in history and culture, and their use has not been entirely replaced by their SI alternatives. The CIPM recognised and acknowledged such traditions by compiling a list of non-SI units accepted for use with the SI.[27]
107
+
108
+ Some units of time, angle, and legacy non-SI units have a long history of use. Most societies have used the solar day and its non-decimal subdivisions as a basis of time and, unlike the foot or the pound, these were the same regardless of where they were being measured. The radian, being 1/2π of a revolution, has mathematical advantages but is rarely used for navigation. Further, the units used in navigation around the world are similar. The tonne, litre, and hectare were adopted by the CGPM in 1879 and have been retained as units that may be used alongside SI units, having been given unique symbols. The catalogued units include the minute, hour, day, degree, litre, tonne, and hectare.
109
+
110
+ These units are used in combination with SI units in common units such as the kilowatt-hour (1 kW⋅h = 3.6 MJ).
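+ The kilowatt-hour figure quoted above follows directly (an editorial check):
+
+ # 1 kW⋅h in joules: 1000 W sustained for 3600 s.
+ print(1000 * 3600)   # 3600000 J = 3.6 MJ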
111
+
112
+ The basic units of the metric system, as originally defined, represented common quantities or relationships in nature. They still do – the modern precisely defined quantities are refinements of definition and methodology, but still with the same magnitudes. In cases where laboratory precision may not be required or available, or where approximations are good enough, the original definitions may suffice.[Note 57]
113
+
114
+ The symbols for the SI units are intended to be identical, regardless of the language used,[27]:130–135 but names are ordinary nouns and use the character set and follow the grammatical rules of the language concerned. Names of units follow the grammatical rules associated with common nouns: in English and in French they start with a lowercase letter (e.g., newton, hertz, pascal), even when the unit is named after a person and its symbol begins with a capital letter.[27]:148 This also applies to "degrees Celsius", since "degree" is the beginning of the unit.[48][49] The only exceptions are in the beginning of sentences and in headings and publication titles.[27]:148 The English spelling for certain SI units differs: US English uses the spelling deka-, meter, and liter, whilst International English uses deca-, metre, and litre.
115
+
116
+ Although the writing of unit names is language-specific, the writing of unit symbols and the values of quantities is consistent across all languages and therefore the SI Brochure has specific rules in respect of writing them.[27]:130–135 The guideline produced by the National Institute of Standards and Technology (NIST)[50] clarifies language-specific areas in respect of American English that were left open by the SI Brochure, but is otherwise identical to the SI Brochure.[51]
117
+
118
+ General rules[Note 62] for writing SI units and quantities apply to text that is either handwritten or produced using an automated process:
119
+
120
+ The rules covering printing of quantities and units are part of ISO 80000-1:2009.[53]
121
+
122
+ Further rules[Note 62] are specified in respect of production of text using printing presses, word processors, typewriters, and the like.
123
+
124
+ The CGPM publishes a brochure that defines and presents the SI.[27] Its official version is in French, in line with the Metre Convention.[27]:102 It leaves some scope for local variations, particularly regarding unit names and terms in different languages.[Note 63][35]
125
+
126
+ The writing and maintenance of the CGPM brochure is carried out by one of the committees of the International Committee for Weights and Measures (CIPM).
127
+ The definitions of the terms "quantity", "unit", "dimension" etc. that are used in the SI Brochure are those given in the International vocabulary of metrology.[54]
128
+
129
+ The quantities and equations that provide the context in which the SI units are defined are now referred to as the International System of Quantities (ISQ).
130
+ The ISQ is based on the quantities underlying each of the seven base units of the SI. Other quantities, such as area, pressure, and electrical resistance, are derived from these base quantities by clear non-contradictory equations. The ISQ defines the quantities that are measured with the SI units.[55] The ISQ is formalised, in part, in the international standard ISO/IEC 80000, which was completed in 2009 with the publication of ISO 80000-1,[56] and has largely been revised in 2019–2020 with the remainder being under review.
131
+
132
+ Metrologists carefully distinguish between the definition of a unit and its realisation. The definition of each base unit of the SI is drawn up so that it is unique and provides a sound theoretical basis on which the most accurate and reproducible measurements can be made. The realisation of the definition of a unit is the procedure by which the definition may be used to establish the value and associated uncertainty of a quantity of the same kind as the unit. A description of the mise en pratique[Note 64] of the base units is given in an electronic appendix to the SI Brochure.[58][27]:168–169
133
+
134
+ The published mise en pratique is not the only way in which a base unit can be determined: the SI Brochure states that "any method consistent with the laws of physics could be used to realise any SI unit."[27]:111 In the current (2016) exercise to overhaul the definitions of the base units, various consultative committees of the CIPM have required that more than one mise en pratique shall be developed for determining the value of each unit.[59]
135
+
136
+ The International Bureau of Weights and Measures (BIPM) has described SI as "the modern form of metric system".[27]:95 Changing technology has led to an evolution of the definitions and standards that has followed two principal strands – changes to SI itself, and clarification of how to use units of measure that are not part of SI but are still nevertheless used on a worldwide basis.
137
+
138
+ Since 1960 the CGPM has made a number of changes to the SI to meet the needs of specific fields, notably chemistry and radiometry. These are mostly additions to the list of named derived units, and include the mole (symbol mol) for an amount of substance, the pascal (symbol Pa) for pressure, the siemens (symbol S) for electrical conductance, the becquerel (symbol Bq) for "activity referred to a radionuclide", the gray (symbol Gy) for absorbed dose of ionising radiation, the sievert (symbol Sv) for dose equivalent of radiation, and the katal (symbol kat) for catalytic activity.[63][27]:156,158,159,165
139
+
140
+ The range of defined prefixes, originally pico- (10⁻¹²) to tera- (10¹²), was extended to cover 10⁻²⁴ to 10²⁴.[27]:152,158,164
141
+
142
+ The 1960 definition of the standard metre in terms of wavelengths of a specific emission of the krypton-86 atom was replaced in 1983 by a definition based on the distance that light travels in vacuum in exactly 1/299792458 of a second, so that the speed of light is now an exactly specified constant of nature.
143
+
144
+ A few changes to notation conventions have also been made to alleviate lexicographic ambiguities. An analysis under the aegis of CSIRO, published in 2009 by the Royal Society, pointed out the remaining opportunities to complete that goal, to the point of universal, zero-ambiguity machine readability.[64]
145
+
146
+ After the metre was redefined in 1960, the International Prototype of the Kilogram (IPK) was the only physical artefact upon which base units (directly the kilogram and indirectly the ampere, mole and candela) depended for their definition, making these units subject to periodic comparisons of national standard kilograms with the IPK.[65] During the 2nd and 3rd Periodic Verifications of National Prototypes of the Kilogram, a significant divergence had occurred between the mass of the IPK and all of its official copies stored around the world: the copies had all noticeably increased in mass with respect to the IPK. During extraordinary verifications carried out in 2014 preparatory to redefinition of metric standards, continuing divergence was not confirmed. Nonetheless, the residual and irreducible instability of a physical IPK undermined the suitability of the entire metric system for precision measurement from small (atomic) to large (astrophysical) scales.
147
+
148
+ A proposal was therefore made to redefine the base units in terms of fixed defining constants: in particular, that the kilogram, ampere, kelvin and mole be defined by assigning exact values to the Planck constant (h), the elementary charge (e), the Boltzmann constant (k) and the Avogadro constant (NA), with the second, metre and candela remaining tied to the caesium hyperfine frequency ΔνCs, the speed of light c and the luminous efficacy Kcd.
149
+
150
+ The new definitions were adopted at the 26th CGPM on 16 November 2018, and came into effect on 20 May 2019.[66] The change was adopted by the European Union through Directive (EU) 2019/1258.[67]
151
+
152
+ The units and unit magnitudes of the metric system which became the SI were improvised piecemeal from everyday physical quantities starting in the mid-18th century. Only later were they moulded into an orthogonal coherent decimal system of measurement.
153
+
154
+ The degree centigrade as a unit of temperature resulted from the scale devised by Swedish astronomer Anders Celsius in 1742. His scale counter-intuitively designated 100 as the freezing point of water and 0 as the boiling point. Independently, in 1743, the French physicist Jean-Pierre Christin described a scale with 0 as the freezing point of water and 100 the boiling point. The scale became known as the centi-grade, or 100 gradations of temperature, scale.
155
+
156
+ The metric system was developed from 1791 onwards by a committee of the French Academy of Sciences, commissioned to create a unified and rational system of measures.[69] The group, which included preeminent French men of science,[70]:89 used the same principles for relating length, volume, and mass that had been proposed by the English clergyman John Wilkins in 1668[71][72] and the concept of using the Earth's meridian as the basis of the definition of length, originally proposed in 1670 by the French abbot Mouton.[73][74]
157
+
158
+ In March 1791, the Assembly adopted the committee's proposed principles for the new decimal system of measure including the metre defined to be 1/10,000,000 of the length of the quadrant of Earth's meridian passing through Paris, and authorised a survey to precisely establish the length of the meridian. In July 1792, the committee proposed the names metre, are, litre and grave for the units of length, area, capacity, and mass, respectively. The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth and kilo for a thousand.[75]:82
159
+
160
+ Later, during the process of adoption of the metric system, the Latin gramme and kilogramme replaced the former provincial terms gravet (1/1000 grave) and grave. In June 1799, based on the results of the meridian survey, the standard mètre des Archives and kilogramme des Archives were deposited in the French National Archives. Subsequently, that year, the metric system was adopted by law in France.[81][82]
+ The French system was short-lived due to its unpopularity. Napoleon ridiculed it, and in 1812 introduced a replacement system, the mesures usuelles or "customary measures", which restored many of the old units, but redefined them in terms of the metric system.
162
+
163
+ During the first half of the 19th century there was little consistency in the choice of preferred multiples of the base units: typically the myriametre (10000 metres) was in widespread use in both France and parts of Germany, while the kilogram (1000 grams) rather than the myriagram was used for mass.[68]
164
+
165
+ In 1832, the German mathematician Carl Friedrich Gauss, assisted by Wilhelm Weber, implicitly defined the second as a base unit when he quoted the Earth's magnetic field in terms of millimetres, grams, and seconds.[76] Prior to this, the strength of the Earth's magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a suspended magnet of known mass by the Earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions based on mass, length and time to the magnetic field.[Note 65][83]
166
+
167
+ The candlepower as a unit of luminous intensity was originally defined by an 1860 English law as the light produced by a pure spermaceti candle weighing 1⁄6 pound (76 grams) and burning at a specified rate. Spermaceti, a waxy substance found in the heads of sperm whales, was once used to make high-quality candles. At this time the French standard of light was based upon the illumination from a Carcel oil lamp. The unit was defined as that illumination emanating from a lamp burning pure rapeseed oil at a defined rate. It was accepted that ten standard candles were about equal to one Carcel lamp.
168
+
169
+ A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention, also called Treaty of the Metre, by 17 nations.[Note 66][70]:353–354 Initially the convention only covered standards for the metre and the kilogram. In 1921, the Metre Convention was extended to include all physical units, including the ampere and others thereby enabling the CGPM to address inconsistencies in the way that the metric system had been used.[77][27]:96
170
+
171
+ A set of 30 prototypes of the metre and 40 prototypes of the kilogram,[Note 67] in each case made of a 90% platinum-10% iridium alloy, were manufactured by a British metallurgy specialty firm and accepted by the CGPM in 1889. One of each was selected at random to become the International prototype metre and International prototype kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the remaining prototypes to serve as the national prototype for that country.[84]
172
+
173
+ The treaty also established a number of international organisations to oversee the keeping of international standards of measurement:[85][86]
175
+
176
+ In the 1860s, James Clerk Maxwell, William Thomson (later Lord Kelvin) and others working under the auspices of the British Association for the Advancement of Science, built on Gauss's work and formalised the concept of a coherent system of units with base units and derived units christened the centimetre–gram–second system of units in 1874. The principle of coherence was successfully used to define a number of units of measure based on the CGS, including the erg for energy, the dyne for force, the barye for pressure, the poise for dynamic viscosity and the stokes for kinematic viscosity.[79]
177
+
178
+ In 1879, the CIPM published recommendations for writing the symbols for length, area, volume and mass, but it was outside its domain to publish recommendations for other quantities. Beginning in about 1900, physicists who had been using the symbol "μ" (mu) for "micrometre" or "micron", "λ" (lambda) for "microlitre", and "γ" (gamma) for "microgram" started to use the symbols "μm", "μL" and "μg".[87]
179
+
180
+ At the close of the 19th century, three different systems of units of measure existed for electrical measurements: a CGS-based system for electrostatic units, also known as the Gaussian or ESU system; a CGS-based system for electromechanical units (EMU); and an International system based on units defined by the Metre Convention,[88] used for electrical distribution systems.
+ Attempts to resolve the electrical units in terms of length, mass, and time using dimensional analysis were beset with difficulties—the dimensions depended on whether one used the ESU or EMU systems.[80] This anomaly was resolved in 1901 when Giovanni Giorgi published a paper in which he advocated using a fourth base unit alongside the existing three base units. The fourth unit could be chosen to be electric current, voltage, or electrical resistance.[89] Electric current with named unit 'ampere' was chosen as the base unit, and the other electrical quantities derived from it according to the laws of physics. This became the foundation of the MKS system of units.
182
+
183
+ In the late 19th and early 20th centuries, a number of non-coherent units of measure based on the gram/kilogram, centimetre/metre, and second, such as the Pferdestärke (metric horsepower) for power,[90][Note 68] the darcy for permeability[91] and "millimetres of mercury" for barometric and blood pressure were developed or propagated, some of which incorporated standard gravity in their definitions.[Note 69]
184
+
185
+ At the end of the Second World War, a number of different systems of measurement were in use throughout the world. Some of these systems were metric system variations; others were based on customary systems of measure, like the U.S. customary system and the Imperial system of the UK and British Empire.
186
+
187
+ In 1948, the 9th CGPM commissioned a study to assess the measurement needs of the scientific, technical, and educational communities and "to make recommendations for a single practical system of units of measurement, suitable for adoption by all countries adhering to the Metre Convention".[92] This working document was Practical system of units of measurement. Based on this study, the 10th CGPM in 1954 defined an international system derived from six base units, adding units of temperature and optical radiation to the MKS system's units of mass, length, and time and to Giorgi's current unit. Six base units were recommended: the metre, kilogram, second, ampere, degree Kelvin, and candela.
188
+
189
+ The 9th CGPM also approved the first formal recommendation for the writing of symbols in the metric system when the basis of the rules as they are now known was laid down.[93] These rules were subsequently extended and now cover unit symbols and names, prefix symbols and names, how quantity symbols should be written and used, and how the values of quantities should be expressed.[27]:104,130
190
+
191
+ In 1960, the 11th CGPM synthesised the results of the 12-year study into a set of 16 resolutions. The system was named the International System of Units, abbreviated SI from the French name, Le Système International d'Unités.[27]:110[94]
192
+
193
+ When Maxwell first introduced the concept of a coherent system, he identified three quantities that could be used as base units: mass, length, and time. Giorgi later identified the need for an electrical base unit, for which the unit of electric current was chosen for SI. Another three base units (for temperature, amount of substance, and luminous intensity) were added later.
194
+
195
+ The early metric systems defined a unit of weight as a base unit, while the SI defines an analogous unit of mass. In everyday use, these are mostly interchangeable, but in scientific contexts the difference matters. Mass, strictly the inertial mass, represents a quantity of matter. It relates the acceleration of a body to the applied force via Newton's law, F = m × a: force equals mass times acceleration. A force of 1 N (newton) applied to a mass of 1 kg will accelerate it at 1 m/s2. This is true whether the object is floating in space or in a gravitational field, e.g. at the Earth's surface. Weight is the force exerted on a body by a gravitational field, and hence it depends on the strength of that field. The weight of a 1 kg mass is m × g, mass times the acceleration due to gravity, which comes to about 9.81 newtons at the Earth's surface and about 3.7 newtons at the surface of Mars. Since the acceleration due to gravity is local and varies by location and altitude on the Earth, weight is unsuitable for precision measurements of a property of a body, and this makes a unit of weight unsuitable as a base unit.
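+
+ A minimal Python sketch of the distinction (the gravitational accelerations are rounded, illustrative values):
+
+ # Weight = mass × local gravitational acceleration (W = m·g), in newtons
+ mass_kg = 1.0
+ g_earth = 9.81   # m/s², approximate value at the Earth's surface
+ g_mars  = 3.72   # m/s², approximate value at the surface of Mars
+ print(mass_kg * g_earth)   # ≈ 9.81 N on Earth
+ print(mass_kg * g_mars)    # ≈ 3.72 N on Mars; the mass itself is unchanged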
196
+
197
+ The prior definitions of the various base units in the above table were made by the following authors and authorities:
198
+
199
+ All other definitions result from resolutions by either CGPM or the CIPM and are catalogued in the SI Brochure.
200
+
201
+ Although the term metric system is often used as an informal alternative name for the International System of Units,[98] other metric systems exist, some of which were in widespread use in the past or are even still used in particular areas. There are also individual metric units such as the sverdrup that exist outside of any system of units. Most of the units of the other metric systems are not recognised by the SI.[Note 71][Note 73] Here are some examples.
+
+ The centimetre–gram–second (CGS) system was the dominant metric system in the physical sciences and electrical engineering from the 1860s until at least the 1960s, and is still in use in some fields. It includes such SI-unrecognised units as the gal, dyne, erg, barye, etc. in its mechanical sector, as well as the poise and stokes in fluid dynamics. When it comes to the units for quantities in electricity and magnetism, there are several versions of the CGS system. Two of these are obsolete: the CGS electrostatic ('CGS-ESU', with the SI-unrecognised units of statcoulomb, statvolt, statampere, etc.) and the CGS electromagnetic system ('CGS-EMU', with abampere, abcoulomb, oersted, maxwell, abhenry, gilbert, etc.).[Note 74] A 'blend' of these two systems is still popular and is known as the Gaussian system (which includes the gauss as a special name for the CGS-EMU unit maxwell per square centimetre).[Note 75]
+
+ In engineering (other than electrical engineering), there was formerly a long tradition of using the gravitational metric system, whose SI-unrecognised units include the kilogram-force (kilopond), technical atmosphere, metric horsepower, etc. The metre–tonne–second (mts) system, used in the Soviet Union from 1933 to 1955, had such SI-unrecognised units as the sthène, pièze, etc.
+
+ Other groups of SI-unrecognised metric units are the various legacy and CGS units related to ionising radiation (rutherford, curie, roentgen, rad, rem, etc.), radiometry (langley, jansky), photometry (phot, nox, stilb, nit, metre-candle,[102]:17 lambert, apostilb, skot, brill, troland, talbot, candlepower, candle), thermodynamics (calorie), and spectroscopy (reciprocal centimetre). The angstrom is still used in various fields. Some other SI-unrecognised metric units that don't fit into any of the already mentioned categories include the are, bar, barn, fermi,[103]:20–21 gradian (gon, grad, or grade), metric carat, micron, millimetre of mercury, torr, millimetre (or centimetre, or metre) of water, millimicron, mho, stere, x unit, γ (unit of mass), γ (unit of magnetic flux density), and λ (unit of volume).[citation needed]
+
+ In some cases, the SI-unrecognised metric units have equivalent SI units formed by combining a metric prefix with a coherent SI unit. For example, 1 γ (unit of magnetic flux density) = 1 nT, 1 Gal = 1 cm⋅s−2, 1 barye = 1 decipascal, etc. (a related group are the correspondences[Note 74] such as 1 abampere ≘ 1 decaampere, 1 abhenry ≘ 1 nanohenry, etc.[Note 76]). Sometimes it is not even a matter of a metric prefix: the SI-nonrecognised unit may be exactly the same as an SI coherent unit, except for the fact that the SI does not recognise the special name and symbol. For example, the nit is just an SI-unrecognised name for the SI unit candela per square metre and the talbot is an SI-unrecognised name for the SI unit lumen second. Frequently, a non-SI metric unit is related to an SI unit through a power of ten factor, but not one that has a metric prefix, e.g. 1 dyn = 10−5 newton, 1 Å = 10−10 m, etc. (and correspondences[Note 74] like 1 gauss ≘ 10−4 tesla). Finally, there are metric units whose conversion factors to SI units are not powers of ten, e.g. 1 calorie = 4.184 joules and 1 kilogram-force = 9.806650 newtons.
+
+ Some SI-unrecognised metric units are still frequently used, e.g. the calorie (in nutrition), the rem (in the U.S.), the jansky (in radio astronomy), the reciprocal centimetre (in spectroscopy), the gauss (in industry) and the CGS-Gaussian units[Note 75] more generally (in some subfields of physics), the metric horsepower (for engine power, in Europe), the kilogram-force (for rocket engine thrust, in China and sometimes in Europe), etc. Others are now rarely used, such as the sthène and the rutherford.
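+
+ The power-of-ten and non-power-of-ten conversion factors quoted above can be collected in a small look-up table; the following Python sketch is illustrative (unit names and factors as given in this section):
+
+ # Conversion factors from selected non-SI metric units to SI units
+ to_si = {
+     "dyne":           ("newton", 1e-5),
+     "angstrom":       ("metre",  1e-10),
+     "gal":            ("m/s²",   1e-2),
+     "gauss":          ("tesla",  1e-4),
+     "calorie":        ("joule",  4.184),
+     "kilogram-force": ("newton", 9.80665),
+ }
+ for unit, (si_unit, factor) in to_si.items():
+     print(f"1 {unit} = {factor} {si_unit}")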
202
+
203
+ Organisations
204
+
205
+ Standards and conventions
206
+
207
+ It is therefore the declared policy of the United States-
208
+
209
+ (1) to designate the metric system of measurement as the preferred system of weights and measures for United States trade and commerce;
210
+
211
+ (2) to require that each Federal agency, by a date certain and to the extent economically feasible by the end of the fiscal year 1992, use the metric system of measurement in its procurements, grants, and other business-related activities, except to the extent that such use is impractical or is likely to cause significant inefficiencies or loss of markets to United States firms, such as when foreign competitors are producing competing products in non-metric units;
212
+
213
+ (3) to seek out ways to increase understanding of the metric system of measurement through educational information and guidance and in Government publications; and
214
+
215
+ (4) to permit the continued use of traditional systems of weights and measures in non-business activities.
216
+
217
+ The unit of length is the metre, defined by the distance, at 0°, between the axes of the two central lines marked on the bar of platinum-iridium kept at the Bureau International des Poids et Mesures and declared Prototype of the metre by the 1st Conférence Générale des Poids et Mesures, this bar being subject to standard atmospheric pressure and supported on two cylinders of at least one centimetre diameter, symmetrically placed in the same horizontal plane at a distance of 571 mm from each other.
218
+
219
+ We shall in the first place describe the state of the Standards recovered from the ruins of the House of Commons, as ascertained in our inspection of them made on 1st June, 1838, at the Journal Office, where they are preserved under the care of Mr. James Gudge, Principal Clerk of the Journal Office. The following list, taken by ourselves from inspection, was compared with a list produced by Mr. Gudge, and stated by him to have been made by Mr. Charles Rowland, one of the Clerks of the Journal Office, immediately after the fire, and was found to agree with it. Mr. Gudge stated that no other Standards of Length or Weight were in his custody.
220
+
221
+ No. 1. A brass bar marked “Standard [G. II. crown emblem] Yard, 1758,” which on examination was found to have its right hand stud perfect, with the point and line visible, but with its left hand stud completely melted out, a hole only remaining. The bar was somewhat bent, and discoloured in every part.
222
+
223
+ No. 2. A brass bar with a projecting cock at each end, forming a bed for the trial of yard-measures; discoloured.
224
+
225
+ No. 3. A brass bar marked “Standard [G. II. crown emblem] Yard, 1760,” from which the left hand stud was completely melted out, and which in other respects was in the same condition as No. 1.
226
+
227
+ No. 4. A yard-bed similar to No. 2; discoloured.
228
+
229
+ No. 5. A weight of the form [drawing of a weight] marked [2 lb. T. 1758], apparently of brass or copper; much discoloured.
230
+
231
+ No. 6. A weight marked in the same manner for 4 lbs., in the same state.
232
+
233
+ No. 7. A weight similar to No. 6, with a hollow space at its base, which appeared at first sight to have been originally filled with some soft metal that had been now melted out, but which on a rough trial was found to have nearly the same weight as No. 6.
234
+
235
+ No. 8. A similar weight of 8 lbs., similarly marked (with the alteration of 8 lbs. for 4 lbs.), and in the same state.
236
+
237
+ No. 9. Another exactly like No. 8.
238
+
239
+ Nos. 10 and 11. Two weights of 16 lbs., similarly marked.
240
+
241
+ Nos. 12 and 13. Two weights of 32 lbs., similarly marked.
242
+
243
+ No. 14. A weight with a triangular ring-handle, marked "S.F. 1759 17 lbs. 8 dwts. Troy", apparently intended to represent the stone of 14 lbs. avoirdupois, allowing 7008 troy grains to each avoirdupois pound.
244
+
245
+ It appears from this list that the bar adopted in the Act 5th Geo. IV., cap. 74, sect. 1, for the legal standard of one yard, (No. 3 of the preceding list), is so far injured, that it is impossible to ascertain from it, with the most moderate accuracy, the statutable length of one yard. The legal standard of one troy pound is missing. We have therefore to report that it is absolutely necessary that steps be taken for the formation and legalising of new Standards of Length and Weight.
246
+
247
+ [t]he bronze yard No. 11, which was an exact copy of the British imperial yard both in form and material, had shown changes when compared with the imperial yard in 1876 and 1888 which could not reasonably be said to be entirely due to changes in No. 11. Suspicion as to the constancy of the length of the British standard was therefore aroused.
248
+
249
+ In 1890, as a signatory of the Metre Convention, the US received two copies of the International Prototype Metre, the construction of which represented the most advanced ideas of standards of the time. Therefore it seemed that US measures would have greater stability and higher accuracy by accepting the international metre as fundamental standard, which was formalised in 1893 by the Mendenhall Order.[25]:379–81
250
+
en/5574.html.txt ADDED
@@ -0,0 +1,250 @@
1
+
2
+
3
+ The International System of Units (SI, abbreviated from the French Système international (d'unités)) is the modern form of the metric system. It is the only system of measurement with an official status in nearly every country in the world. It comprises a coherent system of units of measurement starting with seven base units, which are the second (the unit of time with the symbol s), metre (length, m), kilogram (mass, kg), ampere (electric current, A), kelvin (thermodynamic temperature, K), mole (amount of substance, mol), and candela (luminous intensity, cd). The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units.[Note 1] Twenty-two derived units have been provided with special names and symbols.[Note 2] The seven base units and the 22 derived units with special names and symbols may be used in combination to express other derived units,[Note 3] which are adopted to facilitate measurement of diverse quantities. The SI system also provides twenty prefixes to the unit names and unit symbols that may be used when specifying power-of-ten (i.e. decimal) multiples and sub-multiples of SI units. The SI is intended to be an evolving system; units and prefixes are created and unit definitions are modified through international agreement as the technology of measurement progresses and the precision of measurements improves.
4
+
5
+ Since 2019, the magnitudes of all SI units have been defined by declaring exact numerical values for seven defining constants when expressed in terms of their SI units. These defining constants are the speed of light in vacuum, c, the hyperfine transition frequency of caesium ΔνCs, the Planck constant h, the elementary charge e, the Boltzmann constant k, the Avogadro constant NA, and the luminous efficacy Kcd. The nature of the defining constants ranges from fundamental constants of nature such as c to the purely technical constant Kcd. Prior to 2019, h, e, k, and NA were not defined a priori but were rather very precisely measured quantities. In 2019, their values were fixed by definition to their best estimates at the time, ensuring continuity with previous definitions of the base units. One consequence of the redefinition of the SI is that the distinction between the base units and derived units is in principle not needed, since any unit can be constructed directly from the seven defining constants.[2]:129
6
+
7
+ The current way of defining the SI system is a result of a decades-long move towards increasingly abstract and idealised formulation in which the realisations of the units are separated conceptually from the definitions. A consequence is that as science and technologies develop, new and superior realisations may be introduced without the need to redefine the unit. One problem with artefacts is that they can be lost, damaged, or changed; another is that they introduce uncertainties that cannot be reduced by advancements in science and technology. The last artefact used by the SI was the International Prototype of the Kilogram, a cylinder of platinum-iridium.
8
+
9
+ The original motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as a result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units (MKS) rather than any variant of the CGS.
10
+
11
+ The International System of Units, the SI,[2]:123 is a decimal[Note 4] and metric[Note 5] system of units established in 1960 and periodically updated since then. The SI has an official status in most countries,[Note 6] including the United States[Note 8] and the United Kingdom, with these two countries being amongst a handful of nations that, to various degrees, continue to resist widespread internal adoption of the SI system. As a consequence, the SI system “has been used around the world as the preferred system of units, the basic language for science, technology, industry and trade.”[2]:123
12
+
13
+ The only other types of measurement system that still have widespread use across the world are the Imperial and US customary measurement systems, and they are legally defined in terms of the SI system.[Note 9] There are other, less widespread systems of measurement that are occasionally used in particular regions of the world. In addition, there are many individual non-SI units that don't belong to any comprehensive system of units, but that are nevertheless still regularly used in particular fields and regions. Both of these categories of unit are also typically defined legally in terms of SI units.[Note 10]
14
+
15
+ The SI was established and is maintained by the General Conference on Weights and Measures (CGPM[Note 11]).[4] In practice, the CGPM follows the recommendations of the Consultative Committee for Units (CCU), which is the actual body conducting technical deliberations concerning new scientific and technological developments related to the definition of units and the SI. The CCU reports to the International Committee for Weights and Measures (CIPM[Note 12]), which, in turn, reports to the CGPM. See below for more details.
16
+
17
+ All the decisions and recommendations concerning units are collected in a brochure called The International System of Units (SI)[Note 13], which is published by the International Bureau of Weights and Measures (BIPM[Note 14]) and periodically updated.
18
+
19
+ The SI selects seven units to serve as base units, corresponding to seven base physical quantities.[Note 15] They are the second, with the symbol s, which is the SI unit of the physical quantity of time; the metre, symbol m, the SI unit of length; kilogram (kg, the unit of mass); ampere (A, electric current); kelvin (K, thermodynamic temperature), mole (mol, amount of substance); and candela (cd, luminous intensity).[2] Note that 'the choice of the base units was never unique, but grew historically and became familiar to users of the SI'.[2]:126 All units in the SI can be expressed in terms of the base units, and the base units serve as a preferred set for expressing or analysing the relationships between units.
20
+
21
+ The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units, possibly with a nontrivial numeric multiplier. When that multiplier is one, the unit is called a coherent derived unit.[Note 16] The base and coherent derived units of the SI together form a coherent system of units (the set of coherent SI units).[Note 17] Twenty-two coherent derived units have been provided with special names and symbols.[Note 18] The seven base units and the 22 derived units with special names and symbols may be used in combination to express other derived units,[Note 19] which are adopted to facilitate measurement of diverse quantities.
22
+
23
+ Like all metric systems, the SI uses metric prefixes to systematically construct, for one and the same physical quantity, a whole set of units of widely different sizes that are decimal multiples of each other.
24
+
25
+ For example, while the coherent unit of length is the metre,[Note 20] the SI provides a full range of smaller and larger units of length, any of which may be more convenient for any given application – for example, driving distances are normally given in kilometres (symbol km) rather than in metres. Here the metric prefix 'kilo-' (symbol 'k') stands for a factor of 1000; thus, 1 km = 1000 m.[Note 21]
26
+
27
+ The current version of the SI provides twenty metric prefixes that signify decimal powers ranging from 10−24 to 1024.[2]:143–4 Apart from the prefixes for 1/100, 1/10, 10, and 100, all the other ones are powers of 1000.
28
+
29
+ In general, given any coherent unit with a separate name and symbol,[Note 22] one forms a new unit by simply adding an appropriate metric prefix to the name of the coherent unit (and a corresponding prefix symbol to the unit's symbol). Since the metric prefix signifies a particular power of ten, the new unit is always a power-of-ten multiple or sub-multiple of the coherent unit. Thus, the conversion between units within the SI is always through a power of ten; this is why the SI system (and metric systems more generally) are called decimal systems of measurement units.[6][Note 23]
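+
+ A minimal Python sketch of this idea (the selection of prefixes shown is partial and illustrative):
+
+ # Decimal prefix exponents: converting a prefixed unit to the unprefixed unit
+ prefix_exponent = {"k": 3, "c": -2, "m": -3, "µ": -6, "n": -9}
+
+ def to_unprefixed(value, prefix):
+     """Return the numerical value expressed in the unprefixed (coherent) unit."""
+     return value * 10 ** prefix_exponent[prefix]
+
+ print(to_unprefixed(5, "k"))    # 5 km   -> 5000 m
+ print(to_unprefixed(250, "n"))  # 250 ns -> 2.5e-07 s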
30
+
31
+ The grouping formed by a prefix symbol attached to a unit symbol (e.g. 'km', 'cm') constitutes a new inseparable unit symbol. This new symbol can be raised to a positive or negative power and can be combined with other unit symbols to form compound unit symbols.[2]:143 For example, g/cm3 is an SI unit of density, where cm3 is to be interpreted as (cm)3.
32
+
33
+ When prefixes are used with the coherent SI units, the resulting units are no longer coherent, because the prefix introduces a numerical factor other than one.[2]:137 The one exception is the kilogram, the only coherent SI unit whose name and symbol, for historical reasons, include a prefix.[Note 24]
34
+
35
+ The complete set of SI units consists of both the coherent set and the multiples and sub-multiples of coherent units formed by using the SI prefixes.[2]:138 For example, the metre, kilometre, centimetre, nanometre, etc. are all SI units of length, though only the metre is a coherent SI unit. A similar statement holds for derived units: for example, kg/m3, g/dm3, g/cm3, Pg/km3, etc. are all SI units of density, but of these, only kg/m3 is a coherent SI unit.
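+
+ For instance, converting a density given in the prefixed unit g/cm3 to the coherent unit kg/m3 involves only powers of ten, as a small Python sketch shows (the numerical value is illustrative):
+
+ # 1 g = 10⁻³ kg and 1 cm³ = 10⁻⁶ m³, so 1 g/cm³ = 10³ kg/m³
+ density_g_per_cm3 = 1.0                        # e.g. roughly the density of water
+ density_kg_per_m3 = density_g_per_cm3 * 1e-3 / 1e-6
+ print(density_kg_per_m3)                       # 1000.0 kg/m³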
36
+
37
+ Moreover, the metre is the only coherent SI unit of length. Every physical quantity has exactly one coherent SI unit, although this unit may be expressible in different forms by using some of the special names and symbols.[2]:140 For example, the coherent SI unit of linear momentum may be written as either kg⋅m/s or as N⋅s, and both forms are in use (e.g. compare respectively here[7]:205 and here[8]:135).
38
+
39
+ On the other hand, several different quantities may share the same coherent SI unit. For example, the joule per kelvin is the coherent SI unit for two distinct quantities: heat capacity and entropy. Furthermore, the same coherent SI unit may be a base unit in one context, but a coherent derived unit in another. For example, the ampere is the coherent SI unit for both electric current and magnetomotive force, but it is a base unit in the former case and a derived unit in the latter.[2]:140[Note 26]
40
+
41
+ There is a special group of units that are called 'non-SI units that are accepted for use with the SI'.[2]:145 See Non-SI units mentioned in the SI for a full list. Most of these, in order to be converted to the corresponding SI unit, require conversion factors that are not powers of ten. Some common examples of such units are the customary units of time, namely the minute (conversion factor of 60 s/min, since 1 min = 60 s), the hour (3600 s), and the day (86400 s); the degree (for measuring plane angles, 1° = π/180 rad); and the electronvolt (a unit of energy, 1 eV = 1.602176634×10−19 J).
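+
+ A minimal Python sketch of such conversions (factors as quoted above):
+
+ import math
+
+ # Non-SI units accepted for use with the SI, converted to SI units
+ print(2 * 3600)                  # 2 h     -> 7200 s
+ print(90 * math.pi / 180)        # 90°     -> 1.5707... rad
+ print(13.6 * 1.602176634e-19)    # 13.6 eV -> ~2.18e-18 J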
42
+
43
+ The SI is intended to be an evolving system; units[Note 27] and prefixes are created and unit definitions are modified through international agreement as the technology of measurement progresses and the precision of measurements improves.
44
+
45
+ Since 2019, the magnitudes of all SI units have been defined in an abstract way, which is conceptually separated from any practical realisation of them.[2]:126[Note 28] Namely, the SI units are defined by declaring that seven defining constants[2]:125–9 have certain exact numerical values when expressed in terms of their SI units. Probably the most widely known of these constants is the speed of light in vacuum, c, which in the SI by definition has the exact value of c = 299792458 m/s. The other six constants are
46
+
+ ΔνCs, the hyperfine transition frequency of caesium; h, the Planck constant; e, the elementary charge; k, the Boltzmann constant; NA, the Avogadro constant; and Kcd, the luminous efficacy of monochromatic radiation of frequency 540×1012 Hz.[Note 29] The nature of the defining constants ranges from fundamental constants of nature such as c to the purely technical constant Kcd.[2]:128–9 Prior to 2019, h, e, k, and NA were not defined a priori but were rather very precisely measured quantities. In 2019, their values were fixed by definition to their best estimates at the time, ensuring continuity with previous definitions of the base units.
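+
+ For reference, the seven exact values can be written down directly; the following Python sketch simply lists them (the dictionary layout is illustrative):
+
+ # The seven SI defining constants with their exact numerical values (2019 SI)
+ defining_constants = {
+     "ΔνCs": (9_192_631_770,   "Hz"),     # caesium hyperfine transition frequency
+     "c":    (299_792_458,     "m/s"),    # speed of light in vacuum
+     "h":    (6.62607015e-34,  "J·s"),    # Planck constant
+     "e":    (1.602176634e-19, "C"),      # elementary charge
+     "k":    (1.380649e-23,    "J/K"),    # Boltzmann constant
+     "NA":   (6.02214076e23,   "mol⁻¹"),  # Avogadro constant
+     "Kcd":  (683,             "lm/W"),   # luminous efficacy at 540×10¹² Hz
+ }
+ for name, (value, unit) in defining_constants.items():
+     print(f"{name} = {value} {unit}")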
61
+
62
+ As far as realisations, what are believed to be the current best practical realisations of units are described in the so-called 'mises en pratique',[Note 30] which are also published by the BIPM.[11] The abstract nature of the definitions of units is what makes it possible to improve and change the mises en pratique as science and technology develop without having to change the actual definitions themselves.[Note 33]
63
+
64
+ In a sense, this way of defining the SI units is no more abstract than the way derived units are traditionally defined in terms of the base units. Let us consider a particular derived unit, say the joule, the unit of energy. Its definition in terms of the base units is kg⋅m2/s2. Even if the practical realisations of the metre, kilogram, and second are available, we do not thereby immediately have a practical realisation of the joule; such a realisation will require some sort of reference to the underlying physical definition of work or energy—some actual physical procedure (a mise en pratique, if you will) for realising the energy in the amount of one joule such that it can be compared to other instances of energy (such as the energy content of gasoline put into a car or of electricity delivered to a household).
65
+
66
+ The situation with the defining constants and all of the SI units is analogous. In fact, purely mathematically speaking, the SI units are defined as if we declared that it is the defining constant's units that are now the base units, with all other SI units being derived units. To make this clearer, first note that each defining constant can be taken as determining the magnitude of that defining constant's unit of measurement;[2]:128 for example, the definition of c defines the unit m/s as 1 m/s = c/299792458 ('the speed of one metre per second is equal to one 299792458th of the speed of light'). In this way, the defining constants directly define the following seven units: the hertz (Hz), a unit of the physical quantity of frequency (note that problems can arise when dealing with frequency or the Planck constant because the units of angular measure (cycle or radian) are omitted in SI.[12][13][14][15][16]); the metre per second (m/s), a unit of speed; joule-second (J⋅s), a unit of action; coulomb (C), a unit of electric charge; joule per kelvin (J/K), a unit of both entropy and heat capacity; the inverse mole (mol−1), a unit of a conversion constant between the amount of substance and the number of elementary entities (atoms, molecules, etc.); and lumen per watt (lm/W), a unit of a conversion constant between the physical power carried by electromagnetic radiation and the intrinsic ability of that same radiation to produce visual perception of brightness in humans. Further, one can show, using dimensional analysis, that every coherent SI unit (whether base or derived) can be written as a unique product of powers of the units of the SI defining constants (in complete analogy to the fact that every coherent derived SI unit can be written as a unique product of powers of the base SI units). For example, the kilogram can be written as kg = (Hz)(J⋅s)/(m/s)2.[Note 34] Thus, the kilogram is defined in terms of the three defining constants ΔνCs, c, and h because, on the one hand, these three defining constants respectively define the units Hz, m/s, and J⋅s,[Note 35] while, on the other hand, the kilogram can be written in terms of these three units, namely, kg = (Hz)(J⋅s)/(m/s)2.[Note 36] True, the question of how to actually realise the kilogram in practice would, at this point, still be open, but that is not really different from the fact that the question of how to actually realise the joule in practice is still in principle open even once one has achieved the practical realisations of the metre, kilogram, and second.
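+
+ The dimensional bookkeeping behind kg = (Hz)(J⋅s)/(m/s)2 can be checked mechanically; the following Python sketch (an illustrative exponent-counting check, not an official derivation) represents each unit by its exponents over the base units:
+
+ from collections import Counter
+
+ # Each unit as exponents of the base units kg, m, s
+ Hz      = Counter({"s": -1})                   # hertz
+ J_s     = Counter({"kg": 1, "m": 2, "s": -1})  # joule-second = kg·m²·s⁻² × s
+
+ def combine(*terms):
+     """Add exponent dictionaries (i.e. multiply the corresponding units)."""
+     total = Counter()
+     for term in terms:
+         total.update(term)
+     return {dim: exp for dim, exp in total.items() if exp != 0}
+
+ # (Hz)(J·s)/(m/s)²: dividing by (m/s)² means adding the inverse exponents
+ inverse_speed_squared = Counter({"m": -2, "s": 2})
+ print(combine(Hz, J_s, inverse_speed_squared))   # {'kg': 1} -> the kilogram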
67
+
68
+ One consequence of the redefinition of the SI is that the distinction between the base units and derived units is in principle not needed, since any unit can be constructed directly from the seven defining constants. Nevertheless, the distinction is retained because 'it is useful and historically well established', and also because the ISO/IEC 80000 series of standards[Note 37] specifies base and derived quantities that necessarily have the corresponding SI units.[2]:129
69
+
70
+ The current way of defining the SI system is the result of a decades-long move towards increasingly abstract and idealised formulation in which the realisations of the units are separated conceptually from the definitions.[2]:126
72
+
73
+ The great advantage of doing it this way is that as science and technologies develop, new and superior realisations may be introduced without the need to redefine the units.[Note 31] Units can now be realised with ‘an accuracy that is ultimately limited only by the quantum structure of nature and our technical abilities but not by the definitions themselves.[Note 32] Any valid equation of physics relating the defining constants to a unit can be used to realise the unit, thus creating opportunities for innovation... with increasing accuracy as technology proceeds.’[2]:122 In practice, the CIPM Consultative Committees provide so-called "mises en pratique" (practical techniques),[11] which are the descriptions of what are currently believed to be best experimental realisations of the units.[19]
74
+
75
+ This system lacks the conceptual simplicity of using artefacts (referred to as prototypes) as realisations of units to define those units: with prototypes, the definition and the realisation are one and the same.[Note 38] However, using artefacts has two major disadvantages that, as soon as it is technologically and scientifically feasible, result in abandoning them as means for defining units.[Note 42] One major disadvantage is that artefacts can be lost, damaged,[Note 44] or changed.[Note 45] The other is that they largely cannot benefit from advancements in science and technology. The last artefact used by the SI was the International Prototype Kilogram (IPK), a particular cylinder of platinum-iridium; from 1889 to 2019, the kilogram was by definition equal to the mass of the IPK. Concerns regarding its stability on the one hand, and progress in precise measurements of the Planck constant and the Avogadro constant on the other, led to a revision of the definition of the base units, put into effect on 20 May 2019.[26] This was the biggest change in the SI system since it was first formally defined and established in 1960,[citation needed] and it resulted in the definitions described above.
76
+
77
+ In the past, there were also various other approaches to the definitions of some of the SI units. One made use of a specific physical state of a specific substance (the triple point of water, which was used in the definition of the kelvin[27]:113–4); others referred to idealised experimental prescriptions[2]:125 (as in the case of the former SI definition of the ampere[27]:113 and the former SI definition (originally enacted in 1979) of the candela[27]:115).
78
+
79
+ In the future, the set of defining constants used by the SI may be modified as more stable constants are found, or if it turns out that other constants can be more precisely measured.[Note 46]
80
+
81
+ The original motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as a result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units (MKS) rather than any variant of the CGS.
82
+
83
+ The SI is regulated and continually developed by three international organisations that were established in 1875 under the terms of the Metre Convention. They are the General Conference on Weights and Measures (CGPM[Note 11]), the International Committee for Weights and Measures (CIPM[Note 12]), and the International Bureau of Weights and Measures (BIPM[Note 14]). The ultimate authority rests with the CGPM, which is a plenary body through which its Member States[Note 48] act together on matters related to measurement science and measurement standards; it usually convenes every four years.[28] The CGPM elects the CIPM, which is an 18-person committee of eminent scientists. The CIPM operates based on the advice of a number of its Consultative Committees, which bring together the world's experts in their specified fields as advisers on scientific and technical matters.[29][Note 49] One of these committees is the Consultative Committee for Units (CCU), which is responsible for matters related to the development of the International System of Units (SI), preparation of successive editions of the SI brochure, and advice to the CIPM on matters concerning units of measurement.[30] It is the CCU which considers in detail all new scientific and technological developments related to the definition of units and the SI. In practice, when it comes to the definition of the SI, the CGPM simply formally approves the recommendations of the CIPM, which, in turn, follows the advice of the CCU.
84
+
85
+ The CCU has the following as members:[31][32] national laboratories of the Member States of the CGPM charged with establishing national standards;[Note 50] relevant intergovernmental organisations and international bodies;[Note 51] international commissions or committees;[Note 52] scientific unions;[Note 53] personal members;[Note 54] and, as an ex officio member of all Consultative Committees, the Director of the BIPM.
89
+
90
+ All the decisions and recommendations concerning units are collected in a brochure called The International System of Units (SI)[2][Note 13], which is published by the BIPM and periodically updated.
91
+
92
+ The International System of Units consists of a set of base units, derived units, and a set of decimal-based multipliers that are used as prefixes.[27]:103–106 The units, excluding prefixed units,[Note 55] form a coherent system of units, which is based on a system of quantities in such a way that the equations between the numerical values expressed in coherent units have exactly the same form, including numerical factors, as the corresponding equations between the quantities. For example, 1 N = 1 kg × 1 m/s2 says that one newton is the force required to accelerate a mass of one kilogram at one metre per second squared, as related through the principle of coherence to the equation relating the corresponding quantities: F = m × a.
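+
+ A small Python sketch of what coherence buys (the numerical values are illustrative): with coherent units no conversion factor appears, whereas a non-coherent choice of unit drags one in.
+
+ # Coherent units: F (N) = m (kg) × a (m/s²), no extra numerical factor
+ mass_kg = 2.0
+ acceleration = 3.0                    # m/s²
+ print(mass_kg * acceleration)         # 6.0 N
+
+ # Non-coherent choice: mass given in grams needs a factor of 10⁻³
+ mass_g = 2000.0
+ print(mass_g * 1e-3 * acceleration)   # still 6.0 N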
93
+
94
+ Derived units apply to derived quantities, which may by definition be expressed in terms of base quantities, and thus are not independent; for example, electrical conductance is the inverse of electrical resistance, with the consequence that the siemens is the inverse of the ohm, and similarly, the ohm and siemens can be replaced with a ratio of an ampere and a volt, because those quantities bear a defined relationship to each other.[Note 56] Other useful derived quantities can be specified in terms of the SI base and derived units that have no named units in the SI system, such as acceleration, which is defined in SI units as m/s2.
95
+
96
+ The SI base units are the building blocks of the system and all the other units are derived from them.
97
+
98
+ The derived units in the SI are formed by powers, products, or quotients of the base units and are potentially unlimited in number.[27]:103[35]:14,16 Derived units are associated with derived quantities; for example, velocity is a quantity that is derived from the base quantities of time and length, and thus the SI derived unit is metre per second (symbol m/s). The dimensions of derived units can be expressed in terms of the dimensions of the base units.
99
+
100
+ Combinations of base and derived units may be used to express other derived units. For example, the SI unit of force is the newton (N), the SI unit of pressure is the pascal (Pa)—and the pascal can be defined as one newton per square metre (N/m2).[38]
101
+
102
+ Prefixes are added to unit names to produce multiples and submultiples of the original unit. All of these are integer powers of ten, and above a hundred or below a hundredth all are integer powers of a thousand. For example, kilo- denotes a multiple of a thousand and milli- denotes a multiple of a thousandth, so there are one thousand millimetres to the metre and one thousand metres to the kilometre. The prefixes are never combined, so for example a millionth of a metre is a micrometre, not a millimillimetre. Multiples of the kilogram are named as if the gram were the base unit, so a millionth of a kilogram is a milligram, not a microkilogram.[27]:122[39]:14 When prefixes are used to form multiples and submultiples of SI base and derived units, the resulting units are no longer coherent.[27]:7
103
+
104
+ The BIPM specifies 20 prefixes for the International System of Units (SI):
105
+
106
+ Many non-SI units continue to be used in the scientific, technical, and commercial literature. Some units are deeply embedded in history and culture, and their use has not been entirely replaced by their SI alternatives. The CIPM recognised and acknowledged such traditions by compiling a list of non-SI units accepted for use with SI:[27]
107
+
108
+ Some units of time, angle, and legacy non-SI units have a long history of use. Most societies have used the solar day and its non-decimal subdivisions as a basis of time and, unlike the foot or the pound, these were the same regardless of where they were being measured. The radian, being 1/(2π) of a revolution, has mathematical advantages but is rarely used for navigation. Further, the units used in navigation around the world are similar. The tonne, litre, and hectare were adopted by the CGPM in 1879 and have been retained as units that may be used alongside SI units, having been given unique symbols. The catalogued units are given below:
109
+
110
+ These units are used in combination with SI units in common units such as the kilowatt-hour (1 kW⋅h = 3.6 MJ).
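+
+ The kilowatt-hour figure follows directly from the definitions, as a one-line Python check shows:
+
+ # 1 kW·h = 1000 W × 3600 s = 3.6 × 10⁶ J = 3.6 MJ
+ print(1000 * 3600)   # 3600000 J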
111
+
112
+ The basic units of the metric system, as originally defined, represented common quantities or relationships in nature. They still do – the modern precisely defined quantities are refinements of definition and methodology, but still with the same magnitudes. In cases where laboratory precision may not be required or available, or where approximations are good enough, the original definitions may suffice.[Note 57]
113
+
114
+ The symbols for the SI units are intended to be identical, regardless of the language used,[27]:130–135 but names are ordinary nouns and use the character set and follow the grammatical rules of the language concerned. Names of units follow the grammatical rules associated with common nouns: in English and in French they start with a lowercase letter (e.g., newton, hertz, pascal), even when the unit is named after a person and its symbol begins with a capital letter.[27]:148 This also applies to "degrees Celsius", since "degree" is the beginning of the unit.[48][49] The only exceptions are in the beginning of sentences and in headings and publication titles.[27]:148 The English spelling for certain SI units differs: US English uses the spelling deka-, meter, and liter, whilst International English uses deca-, metre, and litre.
115
+
116
+ Although the writing of unit names is language-specific, the writing of unit symbols and the values of quantities is consistent across all languages and therefore the SI Brochure has specific rules in respect of writing them.[27]:130–135 The guideline produced by the National Institute of Standards and Technology (NIST)[50] clarifies language-specific areas in respect of American English that were left open by the SI Brochure, but is otherwise identical to the SI Brochure.[51]
117
+
118
+ General rules[Note 62] for writing SI units and quantities apply to text that is either handwritten or produced using an automated process:
119
+
120
+ The rules covering printing of quantities and units are part of ISO 80000-1:2009.[53]
121
+
122
+ Further rules[Note 62] are specified in respect of production of text using printing presses, word processors, typewriters, and the like.
123
+
124
+ The CGPM publishes a brochure that defines and presents the SI.[27] Its official version is in French, in line with the Metre Convention.[27]:102 It leaves some scope for local variations, particularly regarding unit names and terms in different languages.[Note 63][35]
125
+
126
+ The writing and maintenance of the CGPM brochure is carried out by one of the committees of the International Committee for Weights and Measures (CIPM).
127
+ The definitions of the terms "quantity", "unit", "dimension" etc. that are used in the SI Brochure are those given in the International vocabulary of metrology.[54]
128
+
129
+ The quantities and equations that provide the context in which the SI units are defined are now referred to as the International System of Quantities (ISQ).
130
+ The ISQ is based on the quantities underlying each of the seven base units of the SI. Other quantities, such as area, pressure, and electrical resistance, are derived from these base quantities by clear non-contradictory equations. The ISQ defines the quantities that are measured with the SI units.[55] The ISQ is formalised, in part, in the international standard ISO/IEC 80000, which was completed in 2009 with the publication of ISO 80000-1,[56] and has largely been revised in 2019–2020 with the remainder being under review.
131
+
132
+ Metrologists carefully distinguish between the definition of a unit and its realisation. The definition of each base unit of the SI is drawn up so that it is unique and provides a sound theoretical basis on which the most accurate and reproducible measurements can be made. The realisation of the definition of a unit is the procedure by which the definition may be used to establish the value and associated uncertainty of a quantity of the same kind as the unit. A description of the mise en pratique[Note 64] of the base units is given in an electronic appendix to the SI Brochure.[58][27]:168–169
133
+
134
+ The published mise en pratique is not the only way in which a base unit can be determined: the SI Brochure states that "any method consistent with the laws of physics could be used to realise any SI unit."[27]:111 In the current (2016) exercise to overhaul the definitions of the base units, various consultative committees of the CIPM have required that more than one mise en pratique shall be developed for determining the value of each unit.[59] In particular:
135
+
136
+ The International Bureau of Weights and Measures (BIPM) has described SI as "the modern form of metric system".[27]:95 Changing technology has led to an evolution of the definitions and standards that has followed two principal strands – changes to SI itself, and clarification of how to use units of measure that are not part of SI but are still nevertheless used on a worldwide basis.
137
+
138
+ Since 1960 the CGPM has made a number of changes to the SI to meet the needs of specific fields, notably chemistry and radiometry. These are mostly additions to the list of named derived units, and include the mole (symbol mol) for an amount of substance, the pascal (symbol Pa) for pressure, the siemens (symbol S) for electrical conductance, the becquerel (symbol Bq) for "activity referred to a radionuclide", the gray (symbol Gy) for ionising radiation, the sievert (symbol Sv) as the unit of dose equivalent radiation, and the katal (symbol kat) for catalytic activity.[27]:156[63][27]:156[27]:158[27]:159[27]:165
139
+
140
+ The range of defined prefixes pico- (10−12) to tera- (1012) was extended to 10−24 to 1024.[27]:152[27]:158[27]:164
141
+
142
+ The 1960 definition of the standard metre in terms of wavelengths of a specific emission of the krypton 86 atom was replaced with the distance that light travels in vacuum in exactly 1/299792458 second, so that the speed of light is now an exactly specified constant of nature.
143
+
144
+ A few changes to notation conventions have also been made to alleviate lexicographic ambiguities. An analysis under the aegis of CSIRO, published in 2009 by the Royal Society, has pointed out the opportunities to finish the realisation of that goal, to the point of universal zero-ambiguity machine readability.[64]
145
+
146
+ After the metre was redefined in 1960, the International Prototype of the Kilogram (IPK) was the only physical artefact upon which base units (directly the kilogram and indirectly the ampere, mole and candela) depended for their definition, making these units subject to periodic comparisons of national standard kilograms with the IPK.[65] During the 2nd and 3rd Periodic Verification of National Prototypes of the Kilogram, a significant divergence had occurred between the mass of the IPK and all of its official copies stored around the world: the copies had all noticeably increased in mass with respect to the IPK. During extraordinary verifications carried out in 2014 preparatory to redefinition of metric standards, continuing divergence was not confirmed. Nonetheless, the residual and irreducible instability of a physical IPK undermined the reliability of the entire metric system to precision measurement from small (atomic) to large (astrophysical) scales.
147
+
148
+ A proposal was made that:
149
+
150
+ The new definitions were adopted at the 26th CGPM on 16 November 2018, and came into effect on 20 May 2019.[66] The change was adopted by the European Union through Directive (EU) 2019/1258.[67]
151
+
152
+ The units and unit magnitudes of the metric system which became the SI were improvised piecemeal from everyday physical quantities starting in the mid-18th century. Only later were they moulded into an orthogonal coherent decimal system of measurement.
153
+
154
+ The degree centigrade as a unit of temperature resulted from the scale devised by Swedish astronomer Anders Celsius in 1742. His scale counter-intuitively designated 100 as the freezing point of water and 0 as the boiling point. Independently, in 1743, the French physicist Jean-Pierre Christin described a scale with 0 as the freezing point of water and 100 the boiling point. The scale became known as the centi-grade, or 100 gradations of temperature, scale.
155
+
156
+ The metric system was developed from 1791 onwards by a committee of the French Academy of Sciences, commissioned to create a unified and rational system of measures.[69] The group, which included preeminent French men of science,[70]:89 used the same principles for relating length, volume, and mass that had been proposed by the English clergyman John Wilkins in 1668[71][72] and the concept of using the Earth's meridian as the basis of the definition of length, originally proposed in 1670 by the French abbot Mouton.[73][74]
157
+
158
+ In March 1791, the Assembly adopted the committee's proposed principles for the new decimal system of measure including the metre defined to be 1/10,000,000 of the length of the quadrant of Earth's meridian passing through Paris, and authorised a survey to precisely establish the length of the meridian. In July 1792, the committee proposed the names metre, are, litre and grave for the units of length, area, capacity, and mass, respectively. The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth and kilo for a thousand.[75]:82
159
+
160
+ Later, during the process of adoption of the metric system, the Latin gramme and kilogramme replaced the former provincial terms gravet (1/1000 grave) and grave. In June 1799, based on the results of the meridian survey, the standard mètre des Archives and kilogramme des Archives were deposited in the French National Archives. Later that year, the metric system was adopted by law in France.[81][82]
161
+ The metric system in France was short-lived due to its unpopularity. Napoleon ridiculed it, and in 1812 introduced a replacement system, the mesures usuelles or "customary measures", which restored many of the old units, but redefined them in terms of the metric system.
162
+
163
+ During the first half of the 19th century there was little consistency in the choice of preferred multiples of the base units: for example, the myriametre (10,000 metres) was in widespread use in both France and parts of Germany, while the kilogram (1,000 grams), rather than the myriagram, was used for mass.[68]
164
+
165
+ In 1832, the German mathematician Carl Friedrich Gauss, assisted by Wilhelm Weber, implicitly defined the second as a base unit when he quoted the Earth's magnetic field in terms of millimetres, grams, and seconds.[76] Prior to this, the strength of the Earth's magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a suspended magnet of known mass by the Earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions based on mass, length and time to the magnetic field.[Note 65][83]
166
+
167
+ The candlepower, a unit of luminous intensity, was originally defined by an 1860 English law as the light produced by a pure spermaceti candle weighing 1⁄6 pound (76 grams) and burning at a specified rate. Spermaceti, a waxy substance found in the heads of sperm whales, was once used to make high-quality candles. At this time the French standard of light was based upon the illumination from a Carcel oil lamp. The unit was defined as that illumination emanating from a lamp burning pure rapeseed oil at a defined rate. It was accepted that ten standard candles were about equal to one Carcel lamp.
168
+
169
+ A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention, also called the Treaty of the Metre, by 17 nations.[Note 66][70]:353–354 Initially the convention only covered standards for the metre and the kilogram. In 1921, the Metre Convention was extended to include all physical units, including the ampere and others, thereby enabling the CGPM to address inconsistencies in the way that the metric system had been used.[77][27]:96
170
+
171
+ A set of 30 prototypes of the metre and 40 prototypes of the kilogram,[Note 67] in each case made of a 90% platinum-10% iridium alloy, were manufactured by a British metallurgy specialty firm and accepted by the CGPM in 1889. One of each was selected at random to become the International prototype metre and International prototype kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the remaining prototypes to serve as the national prototype for that country.[84]
172
+
173
+ The treaty also established a number of international organisations to oversee the keeping of international standards of measurement.[85][86]
175
+
176
+ In the 1860s, James Clerk Maxwell, William Thomson (later Lord Kelvin) and others working under the auspices of the British Association for the Advancement of Science, built on Gauss's work and formalised the concept of a coherent system of units with base units and derived units christened the centimetre–gram–second system of units in 1874. The principle of coherence was successfully used to define a number of units of measure based on the CGS, including the erg for energy, the dyne for force, the barye for pressure, the poise for dynamic viscosity and the stokes for kinematic viscosity.[79]
177
+
178
+ In 1879, the CIPM published recommendations for writing the symbols for length, area, volume and mass, but it was outside its domain to publish recommendations for other quantities. Beginning in about 1900, physicists who had been using the symbol "μ" (mu) for "micrometre" or "micron", "λ" (lambda) for "microlitre", and "γ" (gamma) for "microgram" started to use the symbols "μm", "μL" and "μg".[87]
179
+
180
+ At the close of the 19th century three different systems of units of measure existed for electrical measurements: a CGS-based system for electrostatic units, also known as the Gaussian or ESU system; a CGS-based system for electromechanical units (EMU); and an International system, based on units defined by the Metre Convention, for electrical distribution systems.[88]
181
+ Attempts to resolve the electrical units in terms of length, mass, and time using dimensional analysis were beset with difficulties: the dimensions depended on whether one used the ESU or EMU system.[80] This anomaly was resolved in 1901 when Giovanni Giorgi published a paper in which he advocated using a fourth base unit alongside the existing three. The fourth unit could be chosen to be electric current, voltage, or electrical resistance.[89] Electric current, with the named unit 'ampere', was chosen as the base unit, and the other electrical quantities were derived from it according to the laws of physics. This became the foundation of the MKS system of units.
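+
+ A short dimensional sketch of the anomaly Giorgi addressed (standard three-dimensional CGS analysis, not taken from the sources cited above): because the ESU and EMU systems absorb different electromagnetic constants into their definitions, they assign different dimensions to electric charge.
+
+ \text{ESU (Coulomb's law } F = q_1 q_2 / r^2\text{):}\quad [q]_{\mathrm{ESU}} = \mathrm{M}^{1/2}\,\mathrm{L}^{3/2}\,\mathrm{T}^{-1}
+ \text{EMU (force between currents } F/\ell = 2 I_1 I_2 / d\text{):}\quad [I]_{\mathrm{EMU}} = \mathrm{M}^{1/2}\,\mathrm{L}^{1/2}\,\mathrm{T}^{-1},\ \ [q]_{\mathrm{EMU}} = \mathrm{M}^{1/2}\,\mathrm{L}^{1/2}
+ [q]_{\mathrm{ESU}} / [q]_{\mathrm{EMU}} = \mathrm{L}\,\mathrm{T}^{-1}\ \text{(a velocity, numerically equal to the speed of light)}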
182
+
183
+ In the late 19th and early 20th centuries, a number of non-coherent units of measure based on the gram/kilogram, centimetre/metre, and second, such as the Pferdestärke (metric horsepower) for power,[90][Note 68] the darcy for permeability[91] and "millimetres of mercury" for barometric and blood pressure were developed or propagated, some of which incorporated standard gravity in their definitions.[Note 69]
184
+
185
+ At the end of the Second World War, a number of different systems of measurement were in use throughout the world. Some of these systems were metric system variations; others were based on customary systems of measure, like the U.S. customary system and the Imperial system of the UK and British Empire.
186
+
187
+ In 1948, the 9th CGPM commissioned a study to assess the measurement needs of the scientific, technical, and educational communities and "to make recommendations for a single practical system of units of measurement, suitable for adoption by all countries adhering to the Metre Convention".[92] The resulting working document was Practical system of units of measurement. Based on this study, the 10th CGPM in 1954 defined an international system derived from six base units: units of temperature and optical radiation were added to the MKS system's mass, length, and time units and Giorgi's current unit. Six base units were recommended: the metre, kilogram, second, ampere, degree Kelvin, and candela.
188
+
189
+ The 9th CGPM also approved the first formal recommendation for the writing of symbols in the metric system when the basis of the rules as they are now known was laid down.[93] These rules were subsequently extended and now cover unit symbols and names, prefix symbols and names, how quantity symbols should be written and used, and how the values of quantities should be expressed.[27]:104,130
190
+
191
+ In 1960, the 11th CGPM synthesised the results of the 12-year study into a set of 16 resolutions. The system was named the International System of Units, abbreviated SI from the French name, Le Système International d'Unités.[27]:110[94]
192
+
193
+ When Maxwell first introduced the concept of a coherent system, he identified three quantities that could be used as base units: mass, length, and time. Giorgi later identified the need for an electrical base unit, for which the unit of electric current was chosen for SI. Another three base units (for temperature, amount of substance, and luminous intensity) were added later.
194
+
195
+ The early metric systems defined a unit of weight as a base unit, while the SI defines an analogous unit of mass. In everyday use, these are mostly interchangeable, but in scientific contexts the difference matters. Mass, strictly the inertial mass, represents a quantity of matter. It relates the acceleration of a body to the applied force via Newton's law, F = m × a: force equals mass times acceleration. A force of 1 N (newton) applied to a mass of 1 kg will accelerate it at 1 m/s2. This is true whether the object is floating in space or in a gravity field, e.g. at the Earth's surface. Weight is the force exerted on a body by a gravitational field, and hence a body's weight depends on the strength of the gravitational field. The weight of a 1 kg mass at the Earth's surface is m × g, mass times the acceleration due to gravity, which gives about 9.81 newtons at the Earth's surface and about 3.7 newtons at the surface of Mars. Since the acceleration due to gravity is local and varies by location and altitude on the Earth, weight is unsuitable for precision measurements of a property of a body, and this makes a unit of weight unsuitable as a base unit.
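+
+ A minimal numerical sketch of the distinction (the gravitational accelerations below are rounded reference values, not definitions from the SI Brochure):
+
+ # Weight is mass times the local gravitational acceleration (F = m * a),
+ # so the same 1 kg mass weighs different amounts in different places.
+ def weight_newtons(mass_kg, g_m_per_s2):
+     return mass_kg * g_m_per_s2
+
+ G_EARTH = 9.81  # m/s^2, approximate value at the Earth's surface
+ G_MARS = 3.7    # m/s^2, approximate value at the surface of Mars
+
+ mass = 1.0  # kilograms; the mass itself is the same everywhere
+ print(weight_newtons(mass, G_EARTH))  # ~9.81 N on Earth
+ print(weight_newtons(mass, G_MARS))   # ~3.7 N on Mars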
196
+
197
+ The prior definitions of the various base units in the above table were made by the following authors and authorities:
198
+
199
+ All other definitions result from resolutions by either the CGPM or the CIPM and are catalogued in the SI Brochure.
200
+
201
+ Although the term metric system is often used as an informal alternative name for the International System of Units,[98] other metric systems exist, some of which were in widespread use in the past or are even still used in particular areas. There are also individual metric units such as the sverdrup that exist outside of any system of units. Most of the units of the other metric systems are not recognised by the SI.[Note 71][Note 73] Here are some examples.
+
+ The centimetre–gram–second (CGS) system was the dominant metric system in the physical sciences and electrical engineering from the 1860s until at least the 1960s, and is still in use in some fields. It includes such SI-unrecognised units as the gal, dyne, erg, barye, etc. in its mechanical sector, as well as the poise and stokes in fluid dynamics. When it comes to the units for quantities in electricity and magnetism, there are several versions of the CGS system. Two of these are obsolete: the CGS electrostatic ('CGS-ESU', with the SI-unrecognised units of statcoulomb, statvolt, statampere, etc.) and the CGS electromagnetic system ('CGS-EMU', with abampere, abcoulomb, oersted, maxwell, abhenry, gilbert, etc.).[Note 74] A 'blend' of these two systems is still popular and is known as the Gaussian system (which includes the gauss as a special name for the CGS-EMU unit maxwell per square centimetre).[Note 75]
+
+ In engineering (other than electrical engineering), there was formerly a long tradition of using the gravitational metric system, whose SI-unrecognised units include the kilogram-force (kilopond), technical atmosphere, metric horsepower, etc. The metre–tonne–second (mts) system, used in the Soviet Union from 1933 to 1955, had such SI-unrecognised units as the sthène, pièze, etc.
+
+ Other groups of SI-unrecognised metric units are the various legacy and CGS units related to ionising radiation (rutherford, curie, roentgen, rad, rem, etc.), radiometry (langley, jansky), photometry (phot, nox, stilb, nit, metre-candle,[102]:17 lambert, apostilb, skot, brill, troland, talbot, candlepower, candle), thermodynamics (calorie), and spectroscopy (reciprocal centimetre). The angstrom is still used in various fields. Some other SI-unrecognised metric units that don't fit into any of the already mentioned categories include the are, bar, barn, fermi,[103]:20–21 gradian (gon, grad, or grade), metric carat, micron, millimetre of mercury, torr, millimetre (or centimetre, or metre) of water, millimicron, mho, stere, x unit, γ (unit of mass), γ (unit of magnetic flux density), and λ (unit of volume).[citation needed]
+
+ In some cases, the SI-unrecognised metric units have equivalent SI units formed by combining a metric prefix with a coherent SI unit. For example, 1 γ (unit of magnetic flux density) = 1 nT, 1 Gal = 1 cm⋅s−2, 1 barye = 1 decipascal, etc. (a related group are the correspondences[Note 74] such as 1 abampere ≘ 1 decaampere, 1 abhenry ≘ 1 nanohenry, etc.[Note 76]). Sometimes it is not even a matter of a metric prefix: the SI-nonrecognised unit may be exactly the same as an SI coherent unit, except for the fact that the SI does not recognise the special name and symbol. For example, the nit is just an SI-unrecognised name for the SI unit candela per square metre and the talbot is an SI-unrecognised name for the SI unit lumen second. Frequently, a non-SI metric unit is related to an SI unit through a power of ten factor, but not one that has a metric prefix, e.g. 1 dyn = 10−5 newton, 1 Å = 10−10 m, etc. (and correspondences[Note 74] like 1 gauss ≘ 10−4 tesla). Finally, there are metric units whose conversion factors to SI units are not powers of ten, e.g. 1 calorie = 4.184 joules and 1 kilogram-force = 9.806650 newtons.
+
+ Some SI-unrecognised metric units are still frequently used, e.g. the calorie (in nutrition), the rem (in the U.S.), the jansky (in radio astronomy), the reciprocal centimetre (in spectroscopy), the gauss (in industry) and the CGS-Gaussian units[Note 75] more generally (in some subfields of physics), the metric horsepower (for engine power, in Europe), the kilogram-force (for rocket engine thrust, in China and sometimes in Europe), etc. Others are now rarely used, such as the sthène and the rutherford.
202
+
203
+ Organisations
204
+
205
+ Standards and conventions
206
+
207
+ It is therefore the declared policy of the United States-
208
+
209
+ (1) to designate the metric system of measurement as the preferred system of weights and measures for United States trade and commerce;
210
+
211
+ (2) to require that each Federal agency, by a date certain and to the extent economically feasible by the end of the fiscal year 1992, use the metric system of measurement in its procurements, grants, and other business-related activities, except to the extent that such use is impractical or is likely to cause significant inefficiencies or loss of markets to United States firms, such as when foreign competitors are producing competing products in non-metric units;
212
+
213
+ (3) to seek out ways to increase understanding of the metric system of measurement through educational information and guidance and in Government publications; and
214
+
215
+ (4) to permit the continued use of traditional systems of weights and measures in non-business activities.
216
+
217
+ The unit of length is the metre, defined by the distance, at 0°, between the axes of the two central lines marked on the bar of platinum-iridium kept at the Bureau International des Poids et Mesures and declared Prototype of the metre by the 1st Conférence Générale des Poids et Mesures, this bar being subject to standard atmospheric pressure and supported on two cylinders of at least one centimetre diameter, symmetrically placed in the same horizontal plane at a distance of 571 mm from each other.
218
+
219
+ We shall in the first place describe the state of the Standards recovered from the ruins of the House of Commons, as ascertained in our inspection of them made on 1st June, 1838, at the Journal Office, where they are preserved under the care of Mr. James Gudge, Principal Clerk of the Journal Office. The following list, taken by ourselves from inspection, was compared with a list produced by Mr. Gudge, and stated by him to have been made by Mr. Charles Rowland, one of the Clerks of the Journal Office, immediately after the fire, and was found to agree with it. Mr. Gudge stated that no other Standards of Length or Weight were in his custody.
220
+
221
+ No. 1. A brass bar marked “Standard [G. II. crown emblem] Yard, 1758,” which on examination was found to have its right hand stud perfect, with the point and line visible, but with its left hand stud completely melted out, a hole only remaining. The bar was somewhat bent, and discoloured in every part.
222
+
223
+ No. 2. A brass bar with a projecting cock at each end, forming a bed for the trial of yard-measures; discoloured.
224
+
225
+ No. 3. A brass bar marked “Standard [G. II. crown emblem] Yard, 1760,” from which the left hand stud was completely melted out, and which in other respects was in the same condition as No. 1.
226
+
227
+ No. 4. A yard-bed similar to No. 2; discoloured.
228
+
229
+ No. 5. A weight of the form [drawing of a weight] marked [2 lb. T. 1758], apparently of brass or copper; much discoloured.
230
+
231
+ No. 6. A weight marked in the same manner for 4 lbs., in the same state.
232
+
233
+ No. 7. A weight similar to No. 6, with a hollow space at its base, which appeared at first sight to have been originally filled with some soft metal that had been now melted out, but which on a rough trial was found to have nearly the same weight as No. 6.
234
+
235
+ No. 8. A similar weight of 8 lbs., similarly marked (with the alteration of 8 lbs. for 4 lbs.), and in the same state.
236
+
237
+ No. 9. Another exactly like No. 8.
238
+
239
+ Nos. 10 and 11. Two weights of 16 lbs., similarly marked.
240
+
241
+ Nos. 12 and 13. Two weights of 32 lbs., similarly marked.
242
+
243
+ No. 14. A weight with a triangular ring-handle, marked "S.F. 1759 17 lbs. 8 dwts. Troy", apparently intended to represent the stone of 14 lbs. avoirdupois, allowing 7008 troy grains to each avoirdupois pound.
244
+
245
+ It appears from this list that the bar adopted in the Act 5th Geo. IV., cap. 74, sect. 1, for the legal standard of one yard, (No. 3 of the preceding list), is so far injured, that it is impossible to ascertain from it, with the most moderate accuracy, the statutable length of one yard. The legal standard of one troy pound is missing. We have therefore to report that it is absolutely necessary that steps be taken for the formation and legalising of new Standards of Length and Weight.
246
+
247
+ [t]he bronze yard No. 11, which was an exact copy of the British imperial yard both in form and material, had shown changes when compared with the imperial yard in 1876 and 1888 which could not reasonably be said to be entirely due to changes in No. 11. Suspicion as to the constancy of the length of the British standard was therefore aroused.
248
+
249
+ In 1890, as a signatory of the Metre Convention, the US received two copies of the International Prototype Metre, the construction of which represented the most advanced ideas of standards of the time. Therefore it seemed that US measures would have greater stability and higher accuracy by accepting the international metre as fundamental standard, which was formalised in 1893 by the Mendenhall Order.[25]:379–81
250
+
en/5575.html.txt ADDED
@@ -0,0 +1,250 @@
1
+
2
+
3
+ The International System of Units (SI, abbreviated from the French Système international (d'unités)) is the modern form of the metric system. It is the only system of measurement with an official status in nearly every country in the world. It comprises a coherent system of units of measurement starting with seven base units, which are the second (the unit of time with the symbol s), metre (length, m), kilogram (mass, kg), ampere (electric current, A), kelvin (thermodynamic temperature, K), mole (amount of substance, mol), and candela (luminous intensity, cd). The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units.[Note 1] Twenty-two derived units have been provided with special names and symbols.[Note 2] The seven base units and the 22 derived units with special names and symbols may be used in combination to express other derived units,[Note 3] which are adopted to facilitate measurement of diverse quantities. The SI system also provides twenty prefixes to the unit names and unit symbols that may be used when specifying power-of-ten (i.e. decimal) multiples and sub-multiples of SI units. The SI is intended to be an evolving system; units and prefixes are created and unit definitions are modified through international agreement as the technology of measurement progresses and the precision of measurements improves.
4
+
5
+ Since 2019, the magnitudes of all SI units have been defined by declaring exact numerical values for seven defining constants when expressed in terms of their SI units. These defining constants are the speed of light in vacuum, c, the hyperfine transition frequency of caesium ΔνCs, the Planck constant h, the elementary charge e, the Boltzmann constant k, the Avogadro constant NA, and the luminous efficacy Kcd. The nature of the defining constants ranges from fundamental constants of nature such as c to the purely technical constant Kcd. Prior to 2019, h, e, k, and NA were not defined a priori but were rather very precisely measured quantities. In 2019, their values were fixed by definition to their best estimates at the time, ensuring continuity with previous definitions of the base units. One consequence of the redefinition of the SI is that the distinction between the base units and derived units is in principle not needed, since any unit can be constructed directly from the seven defining constants.[2]:129
6
+
7
+ The current way of defining the SI system is a result of a decades-long move towards increasingly abstract and idealised formulation in which the realisations of the units are separated conceptually from the definitions. A consequence is that as science and technologies develop, new and superior realisations may be introduced without the need to redefine the unit. One problem with artefacts is that they can be lost, damaged, or changed; another is that they introduce uncertainties that cannot be reduced by advancements in science and technology. The last artefact used by the SI was the International Prototype of the Kilogram, a cylinder of platinum-iridium.
8
+
9
+ The original motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as a result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units (MKS) rather than any variant of the CGS.
10
+
11
+ The International System of Units, the SI,[2]:123 is a decimal[Note 4] and metric[Note 5] system of units established in 1960 and periodically updated since then. The SI has an official status in most countries,[Note 6] including the United States[Note 8] and the United Kingdom, with these two countries being amongst a handful of nations that, to various degrees, continue to resist widespread internal adoption of the SI system. As a consequence, the SI system “has been used around the world as the preferred system of units, the basic language for science, technology, industry and trade.”[2]:123
12
+
13
+ The only other types of measurement system that still have widespread use across the world are the Imperial and US customary measurement systems, and they are legally defined in terms of the SI system.[Note 9] There are other, less widespread systems of measurement that are occasionally used in particular regions of the world. In addition, there are many individual non-SI units that don't belong to any comprehensive system of units, but that are nevertheless still regularly used in particular fields and regions. Both of these categories of unit are also typically defined legally in terms of SI units.[Note 10]
14
+
15
+ The SI was established and is maintained by the General Conference on Weights and Measures (CGPM[Note 11]).[4] In practice, the CGPM follows the recommendations of the Consultative Committee for Units (CCU), which is the actual body conducting technical deliberations concerning new scientific and technological developments related to the definition of units and the SI. The CCU reports to the International Committee for Weights and Measures (CIPM[Note 12]), which, in turn, reports to the CGPM. See below for more details.
16
+
17
+ All the decisions and recommendations concerning units are collected in a brochure called The International System of Units (SI)[Note 13], which is published by the International Bureau of Weights and Measures (BIPM[Note 14]) and periodically updated.
18
+
19
+ The SI selects seven units to serve as base units, corresponding to seven base physical quantities.[Note 15] They are the second, with the symbol s, which is the SI unit of the physical quantity of time; the metre, symbol m, the SI unit of length; kilogram (kg, the unit of mass); ampere (A, electric current); kelvin (K, thermodynamic temperature), mole (mol, amount of substance); and candela (cd, luminous intensity).[2] Note that 'the choice of the base units was never unique, but grew historically and became familiar to users of the SI'.[2]:126 All units in the SI can be expressed in terms of the base units, and the base units serve as a preferred set for expressing or analysing the relationships between units.
20
+
21
+ The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units, possibly with a nontrivial numeric multiplier. When that multiplier is one, the unit is called a coherent derived unit.[Note 16] The base and coherent derived units of the SI together form a coherent system of units (the set of coherent SI units).[Note 17] Twenty-two coherent derived units have been provided with special names and symbols.[Note 18] The seven base units and the 22 derived units with special names and symbols may be used in combination to express other derived units,[Note 19] which are adopted to facilitate measurement of diverse quantities.
22
+
23
+ Like all metric systems, the SI uses metric prefixes to systematically construct, for one and the same physical quantity, a whole set of units of widely different sizes that are decimal multiples of each other.
24
+
25
+ For example, while the coherent unit of length is the metre,[Note 20] the SI provides a full range of smaller and larger units of length, any of which may be more convenient for any given application – for example, driving distances are normally given in kilometres (symbol km) rather than in metres. Here the metric prefix 'kilo-' (symbol 'k') stands for a factor of 1000; thus, 1 km = 1000 m.[Note 21]
26
+
27
+ The current version of the SI provides twenty metric prefixes that signify decimal powers ranging from 10−24 to 1024.[2]:143–4 Apart from the prefixes for 1/100, 1/10, 10, and 100, all the other ones are powers of 1000.
28
+
29
+ In general, given any coherent unit with a separate name and symbol,[Note 22] one forms a new unit by simply adding an appropriate metric prefix to the name of the coherent unit (and a corresponding prefix symbol to the unit's symbol). Since the metric prefix signifies a particular power of ten, the new unit is always a power-of-ten multiple or sub-multiple of the coherent unit. Thus, the conversion between units within the SI is always through a power of ten; this is why the SI system (and metric systems more generally) are called decimal systems of measurement units.[6][Note 23]
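+
+ A rough illustration of this decimal character (the prefix exponents are standard SI values; the helper itself is only a sketch):
+
+ # Each SI prefix is a fixed power of ten, so converting a prefixed unit
+ # to the coherent unit is always a multiplication by 10**n.
+ PREFIX_EXPONENT = {"k": 3, "c": -2, "m": -3, "u": -6, "n": -9}  # subset of the 20 prefixes
+
+ def to_coherent(value, prefix):
+     """Convert e.g. 5 km -> metres, or 250 ms -> seconds."""
+     return value * 10 ** PREFIX_EXPONENT[prefix]
+
+ print(to_coherent(5, "k"))    # 5 km  -> 5000.0 m
+ print(to_coherent(250, "m"))  # 250 ms -> 0.25 s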
30
+
31
+ The grouping formed by a prefix symbol attached to a unit symbol (e.g. 'km', 'cm') constitutes a new inseparable unit symbol. This new symbol can be raised to a positive or negative power and can be combined with other unit symbols to form compound unit symbols.[2]:143 For example, g/cm3 is an SI unit of density, where cm3 is to be interpreted as (cm)3.
32
+
33
+ When prefixes are used with the coherent SI units, the resulting units are no longer coherent, because the prefix introduces a numerical factor other than one.[2]:137 The one exception is the kilogram, the only coherent SI unit whose name and symbol, for historical reasons, include a prefix.[Note 24]
34
+
35
+ The complete set of SI units consists of both the coherent set and the multiples and sub-multiples of coherent units formed by using the SI prefixes.[2]:138 For example, the metre, kilometre, centimetre, nanometre, etc. are all SI units of length, though only the metre is a coherent SI unit. A similar statement holds for derived units: for example, kg/m3, g/dm3, g/cm3, Pg/km3, etc. are all SI units of density, but of these, only kg/m3 is a coherent SI unit.
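+
+ A small sketch showing how these density units differ from the coherent kg/m3 only by powers of ten (the factors follow directly from the prefix definitions):
+
+ # 1 g/cm^3 = (10**-3 kg) / (10**-2 m)**3 = 10**3 kg/m^3
+ # 1 g/dm^3 = (10**-3 kg) / (10**-1 m)**3 = 1 kg/m^3
+ def to_kg_per_m3(value, mass_factor, length_factor):
+     return value * mass_factor / length_factor ** 3
+
+ print(to_kg_per_m3(1.0, 1e-3, 1e-2))  # 1 g/cm^3 -> 1000.0 kg/m^3
+ print(to_kg_per_m3(1.0, 1e-3, 1e-1))  # 1 g/dm^3 -> 1.0 kg/m^3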
36
+
37
+ Moreover, the metre is the only coherent SI unit of length. Every physical quantity has exactly one coherent SI unit, although this unit may be expressible in different forms by using some of the special names and symbols.[2]:140 For example, the coherent SI unit of linear momentum may be written as either kg⋅m/s or as N⋅s, and both forms are in use (e.g. compare respectively here[7]:205 and here[8]:135).
38
+
39
+ On the other hand, several different quantities may share same coherent SI unit. For example, the joule per kelvin is the coherent SI unit for two distinct quantities: heat capacity and entropy. Furthermore, the same coherent SI unit may be a base unit in one context, but a coherent derived unit in another. For example, the ampere is the coherent SI unit for both electric current and magnetomotive force, but it is a base unit in the former case and a derived unit in the latter.[2]:140[Note 26]
40
+
41
+ There is a special group of units that are called 'non-SI units that are accepted for use with the SI'.[2]:145 See Non-SI units mentioned in the SI for a full list. Most of these, in order to be converted to the corresponding SI unit, require conversion factors that are not powers of ten. Some common examples of such units are the customary units of time, namely the minute (conversion factor of 60 s/min, since 1 min = 60 s), the hour (3600 s), and the day (86400 s); the degree (for measuring plane angles, 1° = π/180 rad); and the electronvolt (a unit of energy, 1 eV = 1.602176634×10−19 J).
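+
+ The conversion factors quoted above can be collected into a small table; a sketch:
+
+ import math
+
+ # SI value of one unit, for the accepted non-SI units quoted above.
+ ACCEPTED_NON_SI = {
+     "min": 60.0,                 # seconds
+     "h":   3600.0,               # seconds
+     "d":   86400.0,              # seconds
+     "deg": math.pi / 180.0,      # radians
+     "eV":  1.602176634e-19,      # joules (exact, by the 2019 definition of e)
+ }
+
+ print(90 * ACCEPTED_NON_SI["deg"])  # 90 degrees -> ~1.5708 rad
+ print(2 * ACCEPTED_NON_SI["h"])     # 2 hours    -> 7200.0 s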
42
+
43
+ The SI is intended to be an evolving system; units[Note 27] and prefixes are created and unit definitions are modified through international agreement as the technology of measurement progresses and the precision of measurements improves.
44
+
45
+ Since 2019, the magnitudes of all SI units have been defined in an abstract way, which is conceptually separated from any practical realisation of them.[2]:126[Note 28] Namely, the SI units are defined by declaring that seven defining constants[2]:125–9 have certain exact numerical values when expressed in terms of their SI units. Probably the most widely known of these constants is the speed of light in vacuum, c, which in the SI by definition has the exact value of c = 299792458 m/s. The other six constants are
46
+
47
+
48
+
49
+ Δ
50
+
51
+ ν
52
+
53
+ Cs
54
+
55
+
56
+
57
+
58
+ {\displaystyle \Delta \nu _{\text{Cs}}}
59
+
60
+ , the hyperfine transition frequency of caesium; h, the Planck constant; e, the elementary charge; k, the Boltzmann constant; NA, the Avogadro constant; and Kcd, the luminous efficacy of monochromatic radiation of frequency 540×1012 Hz.[Note 29] The nature of the defining constants ranges from fundamental constants of nature such as c to the purely technical constant Kcd.[2]:128–9. Prior to 2019, h, e, k, and NA were not defined a priori but were rather very precisely measured quantities. In 2019, their values were fixed by definition to their best estimates at the time, ensuring continuity with previous definitions of the base units.
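+
+ For reference, the exact values fixed in 2019 can be written out as follows (a sketch; the variable names are abbreviations, and the derived check at the end simply restates the definition of the metre in terms of c and the second):
+
+ # Exact 2019 values of the seven defining constants, in their SI units.
+ DEFINING_CONSTANTS = {
+     "delta_nu_Cs": 9_192_631_770,      # Hz
+     "c":           299_792_458,        # m/s
+     "h":           6.626_070_15e-34,   # J*s
+     "e":           1.602_176_634e-19,  # C
+     "k":           1.380_649e-23,      # J/K
+     "N_A":         6.022_140_76e23,    # 1/mol
+     "K_cd":        683,                # lm/W
+ }
+
+ # The metre is the distance light travels in 1/299792458 of a second,
+ # so one metre corresponds to this many seconds of light travel time:
+ print(1 / DEFINING_CONSTANTS["c"])  # ~3.3356e-09 s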
61
+
62
+ As for realisations, what are believed to be the current best practical realisations of units are described in the so-called 'mises en pratique',[Note 30] which are also published by the BIPM.[11] The abstract nature of the definitions of units is what makes it possible to improve and change the mises en pratique as science and technology develop without having to change the actual definitions themselves.[Note 33]
63
+
64
+ In a sense, this way of defining the SI units is no more abstract than the way derived units are traditionally defined in terms of the base units. Let us consider a particular derived unit, say the joule, the unit of energy. Its definition in terms of the base units is kg⋅m2/s2. Even if the practical realisations of the metre, kilogram, and second are available, we do not thereby immediately have a practical realisation of the joule; such a realisation will require some sort of reference to the underlying physical definition of work or energy—some actual physical procedure (a mise en pratique, if you will) for realising the energy in the amount of one joule such that it can be compared to other instances of energy (such as the energy content of gasoline put into a car or of electricity delivered to a household).
65
+
66
+ The situation with the defining constants and all of the SI units is analogous. In fact, purely mathematically speaking, the SI units are defined as if we declared that it is the defining constant's units that are now the base units, with all other SI units being derived units. To make this clearer, first note that each defining constant can be taken as determining the magnitude of that defining constant's unit of measurement;[2]:128 for example, the definition of c defines the unit m/s as 1 m/s = c/299792458 ('the speed of one metre per second is equal to one 299792458th of the speed of light'). In this way, the defining constants directly define the following seven units: the hertz (Hz), a unit of the physical quantity of frequency (note that problems can arise when dealing with frequency or the Planck constant because the units of angular measure (cycle or radian) are omitted in SI.[12][13][14][15][16]); the metre per second (m/s), a unit of speed; joule-second (J⋅s), a unit of action; coulomb (C), a unit of electric charge; joule per kelvin (J/K), a unit of both entropy and heat capacity; the inverse mole (mol−1), a unit of a conversion constant between the amount of substance and the number of elementary entities (atoms, molecules, etc.); and lumen per watt (lm/W), a unit of a conversion constant between the physical power carried by electromagnetic radiation and the intrinsic ability of that same radiation to produce visual perception of brightness in humans. Further, one can show, using dimensional analysis, that every coherent SI unit (whether base or derived) can be written as a unique product of powers of the units of the SI defining constants (in complete analogy to the fact that every coherent derived SI unit can be written as a unique product of powers of the base SI units). For example, the kilogram can be written as kg = (Hz)(J⋅s)/(m/s)2.[Note 34] Thus, the kilogram is defined in terms of the three defining constants ΔνCs, c, and h because, on the one hand, these three defining constants respectively define the units Hz, m/s, and J⋅s,[Note 35] while, on the other hand, the kilogram can be written in terms of these three units, namely, kg = (Hz)(J⋅s)/(m/s)2.[Note 36] True, the question of how to actually realise the kilogram in practice would, at this point, still be open, but that is not really different from the fact that the question of how to actually realise the joule in practice is still in principle open even once one has achieved the practical realisations of the metre, kilogram, and second.
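+
+ The identity kg = (Hz)(J⋅s)/(m/s)2 can be checked mechanically by tracking base-unit exponents; a minimal sketch:
+
+ from collections import Counter
+
+ # Represent each unit as a map from base-unit symbol to exponent.
+ Hz  = Counter({"s": -1})
+ J_s = Counter({"kg": 1, "m": 2, "s": -1})  # J*s = (kg*m^2/s^2) * s
+ m_s = Counter({"m": 1, "s": -1})           # m/s
+
+ def combine(*terms):
+     """Multiply units given as (unit, power) pairs by adding exponents."""
+     total = Counter()
+     for unit, power in terms:
+         for base, exp in unit.items():
+             total[base] += exp * power
+     return {b: e for b, e in total.items() if e != 0}
+
+ print(combine((Hz, 1), (J_s, 1), (m_s, -2)))  # {'kg': 1} -- i.e. the kilogram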
67
+
68
+ One consequence of the redefinition of the SI is that the distinction between the base units and derived units is in principle not needed, since any unit can be constructed directly from the seven defining constants. Nevertheless, the distinction is retained because 'it is useful and historically well established', and also because the ISO/IEC 80000 series of standards[Note 37] specifies base and derived quantities that necessarily have the corresponding SI units.[2]:129
69
+
70
+ The current way of defining the SI system is the result of a decades-long move towards increasingly abstract and idealised formulation in which the
71
+ realisations of the units are separated conceptually from the definitions.[2]:126
72
+
73
+ The great advantage of doing it this way is that as science and technologies develop, new and superior realisations may be introduced without the need to redefine the units.[Note 31] Units can now be realised with ‘an accuracy that is ultimately limited only by the quantum structure of nature and our technical abilities but not by the definitions themselves.[Note 32] Any valid equation of physics relating the defining constants to a unit can be used to realise the unit, thus creating opportunities for innovation... with increasing accuracy as technology proceeds.’[2]:122 In practice, the CIPM Consultative Committees provide so-called "mises en pratique" (practical techniques),[11] which are the descriptions of what are currently believed to be best experimental realisations of the units.[19]
74
+
75
+ This system lacks the conceptual simplicity of using artefacts (referred to as prototypes) as realisations of units to define those units: with prototypes, the definition and the realisation are one and the same.[Note 38] However, using artefacts has two major disadvantages that, as soon as it is technologically and scientifically feasible, result in abandoning them as means for defining units.[Note 42] One major disadvantage is that artefacts can be lost, damaged,[Note 44] or changed.[Note 45] The other is that they largely cannot benefit from advancements in science and technology. The last artefact used by the SI was the International Prototype Kilogram (IPK), a particular cylinder of platinum-iridium; from 1889 to 2019, the kilogram was by definition equal to the mass of the IPK. Concerns regarding its stability on the one hand, and progress in precise measurements of the Planck constant and the Avogadro constant on the other, led to a revision of the definition of the base units, put into effect on 20 May 2019.[26] This was the biggest change in the SI system since it was first formally defined and established in 1960,[citation needed] and it resulted in the definitions described above.
76
+
77
+ In the past, there were also various other approaches to the definitions of some of the SI units. One made use of a specific physical state of a specific substance (the triple point of water, which was used in the definition of the kelvin[27]:113–4); others referred to idealised experimental prescriptions[2]:125 (as in the case of the former SI definition of the ampere[27]:113 and the former SI definition (originally enacted in 1979) of the candela[27]:115).
78
+
79
+ In the future, the set of defining constants used by the SI may be modified as more stable constants are found, or if it turns out that other constants can be more precisely measured.[Note 46]
80
+
81
+ The original motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as a result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units (MKS) rather than any variant of the CGS.
82
+
83
+ The SI is regulated and continually developed by three international organisations that were established in 1875 under the terms of the Metre Convention. They are the General Conference on Weights and Measures (CGPM[Note 11]), the International Committee for Weights and Measures (CIPM[Note 12]), and the International Bureau of Weights and Measures (BIPM[Note 14]). The ultimate authority rests with the CGPM, which is a plenary body through which its Member States[Note 48] act together on matters related to measurement science and measurement standards; it usually convenes every four years.[28] The CGPM elects the CIPM, which is an 18-person committee of eminent scientists. The CIPM operates based on the advice of a number of its Consultative Committees, which bring together the world's experts in their specified fields as advisers on scientific and technical matters.[29][Note 49] One of these committees is the Consultative Committee for Units (CCU), which is responsible for matters related to the development of the International System of Units (SI), preparation of successive editions of the SI brochure, and advice to the CIPM on matters concerning units of measurement.[30] It is the CCU which considers in detail all new scientific and technological developments related to the definition of units and the SI. In practice, when it comes to the definition of the SI, the CGPM simply formally approves the recommendations of the CIPM, which, in turn, follows the advice of the CCU.
84
+
85
+ The CCU has the following as members:[31][32] national laboratories of the Member States of the CGPM charged with establishing national standards;[Note 50] relevant intergovernmental organisations and international bodies;[Note 51]
86
+ international commissions or committees;[Note 52]
87
+ scientific unions;[Note 53] personal members;[Note 54]
88
+ and, as an ex officio member of all Consultative Committees, the Director of the BIPM.
89
+
90
+ All the decisions and recommendations concerning units are collected in a brochure called The International System of Units (SI)[2][Note 13], which is published by the BIPM and periodically updated.
91
+
92
+ The International System of Units consists of a set of base units, derived units, and a set of decimal-based multipliers that are used as prefixes.[27]:103–106 The units, excluding prefixed units,[Note 55] form a coherent system of units, which is based on a system of quantities in such a way that the equations between the numerical values expressed in coherent units have exactly the same form, including numerical factors, as the corresponding equations between the quantities. For example, 1 N = 1 kg × 1 m/s2 says that one newton is the force required to accelerate a mass of one kilogram at one metre per second squared, as related through the principle of coherence to the equation relating the corresponding quantities: F = m × a.
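+
+ Because the units are coherent, the numerical computation mirrors the quantity equation with no extra factor; a sketch (the kilogram-force value 9.80665 N is the conventional standard-gravity figure, used here only for contrast):
+
+ # Coherent SI units: the numerical equation mirrors F = m * a exactly.
+ mass_kg, accel_m_s2 = 2.0, 3.0
+ force_N = mass_kg * accel_m_s2               # 6.0 N, no numerical factor
+
+ # A non-coherent unit such as the kilogram-force needs an extra factor:
+ force_kgf = mass_kg * accel_m_s2 / 9.80665   # ~0.612 kgf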
93
+
94
+ Derived units apply to derived quantities, which may by definition be expressed in terms of base quantities, and thus are not independent; for example, electrical conductance is the inverse of electrical resistance, with the consequence that the siemens is the inverse of the ohm, and similarly, the ohm and siemens can be replaced with a ratio of an ampere and a volt, because those quantities bear a defined relationship to each other.[Note 56] Other useful derived quantities can be specified in terms of the SI base and derived units that have no named units in the SI system, such as acceleration, which is defined in SI units as m/s2.
95
+
96
+ The SI base units are the building blocks of the system and all the other units are derived from them.
97
+
98
+ The derived units in the SI are formed by powers, products, or quotients of the base units and are potentially unlimited in number.[27]:103[35]:14,16 Derived units are associated with derived quantities; for example, velocity is a quantity that is derived from the base quantities of time and length, and thus the SI derived unit is metre per second (symbol m/s). The dimensions of derived units can be expressed in terms of the dimensions of the base units.
99
+
100
+ Combinations of base and derived units may be used to express other derived units. For example, the SI unit of force is the newton (N), the SI unit of pressure is the pascal (Pa)—and the pascal can be defined as one newton per square metre (N/m2).[38]
101
+
102
+ Prefixes are added to unit names to produce multiples and submultiples of the original unit. All of these are integer powers of ten, and above a hundred or below a hundredth all are integer powers of a thousand. For example, kilo- denotes a multiple of a thousand and milli- denotes a multiple of a thousandth, so there are one thousand millimetres to the metre and one thousand metres to the kilometre. The prefixes are never combined, so for example a millionth of a metre is a micrometre, not a millimillimetre. Multiples of the kilogram are named as if the gram were the base unit, so a millionth of a kilogram is a milligram, not a microkilogram.[27]:122[39]:14 When prefixes are used to form multiples and submultiples of SI base and derived units, the resulting units are no longer coherent.[27]:7
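+
+ A sketch of the 'one prefix only' rule in code (the prefix table is a subset of the twenty; choosing exponents in steps of three mimics common engineering practice):
+
+ # Pick a single prefix so that prefixes are never combined
+ # (micrometre rather than millimillimetre, milligram rather than microkilogram).
+ PREFIXES = {9: "G", 6: "M", 3: "k", 0: "", -3: "m", -6: "u", -9: "n"}
+
+ def with_single_prefix(value, unit):
+     exp = 0
+     while abs(value) >= 1000 and exp < 9:
+         value, exp = value / 1000, exp + 3
+     while 0 < abs(value) < 1 and exp > -9:
+         value, exp = value * 1000, exp - 3
+     return f"{value:g} {PREFIXES[exp]}{unit}"
+
+ print(with_single_prefix(0.000004, "m"))  # '4 um', not '4 millimillimetres'
+ print(with_single_prefix(0.000001, "g"))  # mass multiples are named from the gram: '1 ug'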
103
+
104
+ The BIPM specifies 20 prefixes for the International System of Units (SI):
105
+
106
+ Many non-SI units continue to be used in the scientific, technical, and commercial literature. Some units are deeply embedded in history and culture, and their use has not been entirely replaced by their SI alternatives. The CIPM recognised and acknowledged such traditions by compiling a list of non-SI units accepted for use with SI:[27]
107
+
108
+ Some units of time, angle, and legacy non-SI units have a long history of use. Most societies have used the solar day and its non-decimal subdivisions as a basis of time and, unlike the foot or the pound, these were the same regardless of where they were being measured. The radian, being 1/(2π) of a revolution, has mathematical advantages but is rarely used for navigation. Further, the units used in navigation around the world are similar. The tonne, litre, and hectare were adopted by the CGPM in 1879 and have been retained as units that may be used alongside SI units, having been given unique symbols. The catalogued units are given below:
109
+
110
+ These units are used in combination with SI units in common units such as the kilowatt-hour (1 kW⋅h = 3.6 MJ).
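+
+ The kilowatt-hour figure quoted above follows directly from the prefix and the accepted hour; a trivial check:
+
+ kilo, hour_s = 1e3, 3600
+ joules_per_kWh = kilo * hour_s   # 1 kW * 1 h = 3.6e6 J
+ print(joules_per_kWh / 1e6)      # 3.6 (MJ)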
111
+
112
+ The basic units of the metric system, as originally defined, represented common quantities or relationships in nature. They still do – the modern precisely defined quantities are refinements of definition and methodology, but still with the same magnitudes. In cases where laboratory precision may not be required or available, or where approximations are good enough, the original definitions may suffice.[Note 57]
113
+
114
+ The symbols for the SI units are intended to be identical, regardless of the language used,[27]:130–135 but names are ordinary nouns and use the character set and follow the grammatical rules of the language concerned. Names of units follow the grammatical rules associated with common nouns: in English and in French they start with a lowercase letter (e.g., newton, hertz, pascal), even when the unit is named after a person and its symbol begins with a capital letter.[27]:148 This also applies to "degrees Celsius", since "degree" is the beginning of the unit.[48][49] The only exceptions are in the beginning of sentences and in headings and publication titles.[27]:148 The English spelling for certain SI units differs: US English uses the spelling deka-, meter, and liter, whilst International English uses deca-, metre, and litre.
115
+
116
+ Although the writing of unit names is language-specific, the writing of unit symbols and the values of quantities is consistent across all languages and therefore the SI Brochure has specific rules in respect of writing them.[27]:130–135 The guideline produced by the National Institute of Standards and Technology (NIST)[50] clarifies language-specific areas in respect of American English that were left open by the SI Brochure, but is otherwise identical to the SI Brochure.[51]
117
+
118
+ General rules[Note 62] for writing SI units and quantities apply to text that is either handwritten or produced using an automated process:
119
+
120
+ The rules covering printing of quantities and units are part of ISO 80000-1:2009.[53]
121
+
122
+ Further rules[Note 62] are specified in respect of production of text using printing presses, word processors, typewriters, and the like.
123
+
124
+ The CGPM publishes a brochure that defines and presents the SI.[27] Its official version is in French, in line with the Metre Convention.[27]:102 It leaves some scope for local variations, particularly regarding unit names and terms in different languages.[Note 63][35]
125
+
126
+ The writing and maintenance of the CGPM brochure is carried out by one of the committees of the International Committee for Weights and Measures (CIPM).
127
+ The definitions of the terms "quantity", "unit", "dimension" etc. that are used in the SI Brochure are those given in the International vocabulary of metrology.[54]
128
+
129
+ The quantities and equations that provide the context in which the SI units are defined are now referred to as the International System of Quantities (ISQ).
130
+ The ISQ is based on the quantities underlying each of the seven base units of the SI. Other quantities, such as area, pressure, and electrical resistance, are derived from these base quantities by clear non-contradictory equations. The ISQ defines the quantities that are measured with the SI units.[55] The ISQ is formalised, in part, in the international standard ISO/IEC 80000, which was completed in 2009 with the publication of ISO 80000-1,[56] and has largely been revised in 2019–2020 with the remainder being under review.
131
+
132
+ Metrologists carefully distinguish between the definition of a unit and its realisation. The definition of each base unit of the SI is drawn up so that it is unique and provides a sound theoretical basis on which the most accurate and reproducible measurements can be made. The realisation of the definition of a unit is the procedure by which the definition may be used to establish the value and associated uncertainty of a quantity of the same kind as the unit. A description of the mise en pratique[Note 64] of the base units is given in an electronic appendix to the SI Brochure.[58][27]:168–169
133
+
134
+ The published mise en pratique is not the only way in which a base unit can be determined: the SI Brochure states that "any method consistent with the laws of physics could be used to realise any SI unit."[27]:111 In the current (2016) exercise to overhaul the definitions of the base units, various consultative committees of the CIPM have required that more than one mise en pratique shall be developed for determining the value of each unit.[59] In particular:
135
+
136
+ The International Bureau of Weights and Measures (BIPM) has described SI as "the modern form of metric system".[27]:95 Changing technology has led to an evolution of the definitions and standards that has followed two principal strands – changes to SI itself, and clarification of how to use units of measure that are not part of SI but are still nevertheless used on a worldwide basis.
137
+
138
+ Since 1960 the CGPM has made a number of changes to the SI to meet the needs of specific fields, notably chemistry and radiometry. These are mostly additions to the list of named derived units, and include the mole (symbol mol) for an amount of substance, the pascal (symbol Pa) for pressure, the siemens (symbol S) for electrical conductance, the becquerel (symbol Bq) for "activity referred to a radionuclide", the gray (symbol Gy) for ionising radiation, the sievert (symbol Sv) as the unit of dose equivalent radiation, and the katal (symbol kat) for catalytic activity.[27]:156[63][27]:158[27]:159[27]:165
139
+
140
+ The range of defined prefixes pico- (10−12) to tera- (1012) was extended to 10−24 to 1024.[27]:152[27]:158[27]:164
141
+
142
+ The 1960 definition of the standard metre in terms of wavelengths of a specific emission of the krypton 86 atom was replaced with the distance that light travels in vacuum in exactly 1/299792458 second, so that the speed of light is now an exactly specified constant of nature.
143
+
144
+ A few changes to notation conventions have also been made to alleviate lexicographic ambiguities. An analysis under the aegis of CSIRO, published in 2009 by the Royal Society, pointed out opportunities to complete that work, up to the point of universal, zero-ambiguity machine readability.[64]
145
+
146
+ After the metre was redefined in 1960, the International Prototype of the Kilogram (IPK) was the only physical artefact upon which base units (directly the kilogram and indirectly the ampere, mole and candela) depended for their definition, making these units subject to periodic comparisons of national standard kilograms with the IPK.[65] During the 2nd and 3rd Periodic Verification of National Prototypes of the Kilogram, a significant divergence had occurred between the mass of the IPK and all of its official copies stored around the world: the copies had all noticeably increased in mass with respect to the IPK. During extraordinary verifications carried out in 2014 preparatory to redefinition of metric standards, continuing divergence was not confirmed. Nonetheless, the residual and irreducible instability of a physical IPK undermined the reliability of the entire metric system as a basis for precision measurement, from small (atomic) to large (astrophysical) scales.
147
+
148
+ A proposal was made that:
149
+
150
+ The new definitions were adopted at the 26th CGPM on 16 November 2018, and came into effect on 20 May 2019.[66] The change was adopted by the European Union through Directive (EU) 2019/1258.[67]
151
+
152
+ The units and unit magnitudes of the metric system which became the SI were improvised piecemeal from everyday physical quantities starting in the mid-18th century. Only later were they moulded into an orthogonal coherent decimal system of measurement.
153
+
154
+ The degree centigrade as a unit of temperature resulted from the scale devised by Swedish astronomer Anders Celsius in 1742. His scale counter-intuitively designated 100 as the freezing point of water and 0 as the boiling point. Independently, in 1743, the French physicist Jean-Pierre Christin described a scale with 0 as the freezing point of water and 100 the boiling point. The scale became known as the centi-grade, or 100 gradations of temperature, scale.
155
+
156
+ The metric system was developed from 1791 onwards by a committee of the French Academy of Sciences, commissioned to create a unified and rational system of measures.[69] The group, which included preeminent French men of science,[70]:89 used the same principles for relating length, volume, and mass that had been proposed by the English clergyman John Wilkins in 1668[71][72] and the concept of using the Earth's meridian as the basis of the definition of length, originally proposed in 1670 by the French abbot Mouton.[73][74]
157
+
158
+ In March 1791, the Assembly adopted the committee's proposed principles for the new decimal system of measure including the metre defined to be 1/10,000,000 of the length of the quadrant of Earth's meridian passing through Paris, and authorised a survey to precisely establish the length of the meridian. In July 1792, the committee proposed the names metre, are, litre and grave for the units of length, area, capacity, and mass, respectively. The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth and kilo for a thousand.[75]:82
159
+
160
+ Later, during the process of adoption of the metric system, the Latin gramme and kilogramme replaced the former provincial terms gravet (1/1000 grave) and grave. In June 1799, based on the results of the meridian survey, the standard mètre des Archives and kilogramme des Archives were deposited in the French National Archives. Later that year, the metric system was adopted by law in France.[81][82]
161
+ The French system was short-lived due to its unpopularity. Napoleon ridiculed it, and in 1812 introduced a replacement system, the mesures usuelles or "customary measures", which restored many of the old units, but redefined in terms of the metric system.
162
+
163
+ During the first half of the 19th century there was little consistency in the choice of preferred multiples of the base units: for example, the myriametre (10,000 metres) was in widespread use in both France and parts of Germany, while the kilogram (1000 grams) rather than the myriagram was used for mass.[68]
164
+
165
+ In 1832, the German mathematician Carl Friedrich Gauss, assisted by Wilhelm Weber, implicitly defined the second as a base unit when he quoted the Earth's magnetic field in terms of millimetres, grams, and seconds.[76] Prior to this, the strength of the Earth's magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a suspended magnet of known mass by the Earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions based on mass, length and time to the magnetic field.[Note 65][83]
166
+
167
+ The candlepower as a unit of luminous intensity was originally defined by an 1860 English law as the light produced by a pure spermaceti candle weighing 1⁄6 pound (76 grams) and burning at a specified rate. Spermaceti, a waxy substance found in the heads of sperm whales, was once used to make high-quality candles. At this time the French standard of light was based upon the illumination from a Carcel oil lamp. The unit was defined as that illumination emanating from a lamp burning pure rapeseed oil at a defined rate. It was accepted that ten standard candles were about equal to one Carcel lamp.
168
+
169
+ A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention, also called the Treaty of the Metre, by 17 nations.[Note 66][70]:353–354 Initially the convention only covered standards for the metre and the kilogram. In 1921, the Metre Convention was extended to include all physical units, including the ampere and others, thereby enabling the CGPM to address inconsistencies in the way that the metric system had been used.[77][27]:96
170
+
171
+ A set of 30 prototypes of the metre and 40 prototypes of the kilogram,[Note 67] in each case made of a 90% platinum-10% iridium alloy, were manufactured by a specialist British metallurgy firm and accepted by the CGPM in 1889. One of each was selected at random to become the International prototype metre and International prototype kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the remaining prototypes to serve as the national prototype for that country.[84]
172
+
173
+ The treaty also established a number of international organisations to oversee the keeping of international standards of measurement.[85][86]
174
+
175
+
176
+ In the 1860s, James Clerk Maxwell, William Thomson (later Lord Kelvin) and others working under the auspices of the British Association for the Advancement of Science, built on Gauss's work and formalised the concept of a coherent system of units with base units and derived units christened the centimetre–gram–second system of units in 1874. The principle of coherence was successfully used to define a number of units of measure based on the CGS, including the erg for energy, the dyne for force, the barye for pressure, the poise for dynamic viscosity and the stokes for kinematic viscosity.[79]
177
+
178
+ In 1879, the CIPM published recommendations for writing the symbols for length, area, volume and mass, but it was outside its domain to publish recommendations for other quantities. Beginning in about 1900, physicists who had been using the symbol "μ" (mu) for "micrometre" or "micron", "λ" (lambda) for "microlitre", and "γ" (gamma) for "microgram" started to use the symbols "μm", "μL" and "μg".[87]
179
+
180
+ At the close of the 19th century three different systems of units of measure existed for electrical measurements: a CGS-based system for electrostatic units, also known as the Gaussian or ESU system, a CGS-based system for electromechanical units (EMU), and an International system based on units defined by the Metre Convention for electrical distribution systems.[88]
181
+ Attempts to resolve the electrical units in terms of length, mass, and time using dimensional analysis were beset with difficulties—the dimensions depended on whether one used the ESU or EMU systems.[80] This anomaly was resolved in 1901 when Giovanni Giorgi published a paper in which he advocated using a fourth base unit alongside the existing three base units. The fourth unit could be chosen to be electric current, voltage, or electrical resistance.[89] Electric current with named unit 'ampere' was chosen as the base unit, and the other electrical quantities derived from it according to the laws of physics. This became the foundation of the MKS system of units.
182
+
183
+ In the late 19th and early 20th centuries, a number of non-coherent units of measure based on the gram/kilogram, centimetre/metre, and second, such as the Pferdestärke (metric horsepower) for power,[90][Note 68] the darcy for permeability[91] and "millimetres of mercury" for barometric and blood pressure were developed or propagated, some of which incorporated standard gravity in their definitions.[Note 69]
184
+
185
+ At the end of the Second World War, a number of different systems of measurement were in use throughout the world. Some of these systems were metric system variations; others were based on customary systems of measure, like the U.S. customary system and the Imperial system of the UK and British Empire.
186
+
187
+ In 1948, the 9th CGPM commissioned a study to assess the measurement needs of the scientific, technical, and educational communities and "to make recommendations for a single practical system of units of measurement, suitable for adoption by all countries adhering to the Metre Convention".[92] This working document was Practical system of units of measurement. Based on this study, the 10th CGPM in 1954 defined an international system derived from six base units, including units of temperature and optical radiation in addition to the MKS system's mass, length, and time units and Giorgi's current unit. Six base units were recommended: the metre, kilogram, second, ampere, degree Kelvin, and candela.
188
+
189
+ The 9th CGPM also approved the first formal recommendation for the writing of symbols in the metric system when the basis of the rules as they are now known was laid down.[93] These rules were subsequently extended and now cover unit symbols and names, prefix symbols and names, how quantity symbols should be written and used, and how the values of quantities should be expressed.[27]:104,130
190
+
191
+ In 1960, the 11th CGPM synthesised the results of the 12-year study into a set of 16 resolutions. The system was named the International System of Units, abbreviated SI from the French name, Le Système International d'Unités.[27]:110[94]
192
+
193
+ When Maxwell first introduced the concept of a coherent system, he identified three quantities that could be used as base units: mass, length, and time. Giorgi later identified the need for an electrical base unit, for which the unit of electric current was chosen for SI. Another three base units (for temperature, amount of substance, and luminous intensity) were added later.
194
+
195
+ The early metric systems defined a unit of weight as a base unit, while the SI defines an analogous unit of mass. In everyday use, these are mostly interchangeable, but in scientific contexts the difference matters. Mass, strictly the inertial mass, represents a quantity of matter. It relates the acceleration of a body to the applied force via Newton's law, F = m × a: force equals mass times acceleration. A force of 1 N (newton) applied to a mass of 1 kg will accelerate it at 1 m/s². This is true whether the object is floating in space or in a gravity field, e.g. at the Earth's surface. Weight is the force exerted on a body by a gravitational field, and hence weight depends on the strength of that field. The weight of a 1 kg mass at the Earth's surface is m × g, mass times the acceleration due to gravity, giving about 9.81 newtons at the Earth's surface and about 3.5 newtons at the surface of Mars. Since the acceleration due to gravity is local and varies by location and altitude on the Earth, weight is unsuitable for precision measurements of a property of a body, and this makes a unit of weight unsuitable as a base unit.
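+
+ To make the weight arithmetic above concrete, the following minimal Python sketch (added for illustration; the gravitational accelerations are approximate surface values, not SI definitions) computes the weight of a 1 kg mass on Earth and on Mars.
+
+ # Weight is the gravitational force on a mass: W = m * g (a special case of F = m * a).
+ # The g values below are approximate surface accelerations, chosen for illustration only.
+ EARTH_G = 9.81   # m/s^2, approximate acceleration due to gravity at the Earth's surface
+ MARS_G = 3.71    # m/s^2, commonly quoted approximate value for the surface of Mars
+
+ def weight_newtons(mass_kg: float, g: float) -> float:
+     """Return the weight in newtons of a body of the given mass under acceleration g."""
+     return mass_kg * g
+
+ m = 1.0  # kilograms
+ print(f"Weight of {m} kg on Earth: {weight_newtons(m, EARTH_G):.2f} N")  # ~9.81 N
+ print(f"Weight of {m} kg on Mars:  {weight_newtons(m, MARS_G):.2f} N")   # ~3.7 N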
196
+
197
+ The prior definitions of the various base units were made by a number of different authors and authorities over time.
198
+
199
+ All other definitions result from resolutions by either the CGPM or the CIPM and are catalogued in the SI Brochure.
200
+
201
+ Although the term metric system is often used as an informal alternative name for the International System of Units,[98] other metric systems exist, some of which were in widespread use in the past or are even still used in particular areas. There are also individual metric units such as the sverdrup that exist outside of any system of units. Most of the units of the other metric systems are not recognised by the SI.[Note 71][Note 73] Here are some examples.
+
+ The centimetre–gram–second (CGS) system was the dominant metric system in the physical sciences and electrical engineering from the 1860s until at least the 1960s, and is still in use in some fields. It includes such SI-unrecognised units as the gal, dyne, erg, barye, etc. in its mechanical sector, as well as the poise and stokes in fluid dynamics. When it comes to the units for quantities in electricity and magnetism, there are several versions of the CGS system. Two of these are obsolete: the CGS electrostatic ('CGS-ESU', with the SI-unrecognised units of statcoulomb, statvolt, statampere, etc.) and the CGS electromagnetic system ('CGS-EMU', with abampere, abcoulomb, oersted, maxwell, abhenry, gilbert, etc.).[Note 74] A 'blend' of these two systems is still popular and is known as the Gaussian system (which includes the gauss as a special name for the CGS-EMU unit maxwell per square centimetre).[Note 75]
+
+ In engineering (other than electrical engineering), there was formerly a long tradition of using the gravitational metric system, whose SI-unrecognised units include the kilogram-force (kilopond), technical atmosphere, metric horsepower, etc. The metre–tonne–second (mts) system, used in the Soviet Union from 1933 to 1955, had such SI-unrecognised units as the sthène, pièze, etc.
+
+ Other groups of SI-unrecognised metric units are the various legacy and CGS units related to ionising radiation (rutherford, curie, roentgen, rad, rem, etc.), radiometry (langley, jansky), photometry (phot, nox, stilb, nit, metre-candle,[102]:17 lambert, apostilb, skot, brill, troland, talbot, candlepower, candle), thermodynamics (calorie), and spectroscopy (reciprocal centimetre). The angstrom is still used in various fields. Some other SI-unrecognised metric units that don't fit into any of the already mentioned categories include the are, bar, barn, fermi,[103]:20–21 gradian (gon, grad, or grade), metric carat, micron, millimetre of mercury, torr, millimetre (or centimetre, or metre) of water, millimicron, mho, stere, x unit, γ (unit of mass), γ (unit of magnetic flux density), and λ (unit of volume).[citation needed]
+
+ In some cases, the SI-unrecognised metric units have equivalent SI units formed by combining a metric prefix with a coherent SI unit. For example, 1 γ (unit of magnetic flux density) = 1 nT, 1 Gal = 1 cm⋅s⁻², 1 barye = 1 decipascal, etc. (a related group are the correspondences[Note 74] such as 1 abampere ≘ 1 decaampere, 1 abhenry ≘ 1 nanohenry, etc.[Note 76]). Sometimes it is not even a matter of a metric prefix: the SI-unrecognised unit may be exactly the same as an SI coherent unit, except for the fact that the SI does not recognise the special name and symbol. For example, the nit is just an SI-unrecognised name for the SI unit candela per square metre and the talbot is an SI-unrecognised name for the SI unit lumen second. Frequently, a non-SI metric unit is related to an SI unit through a power of ten factor, but not one that has a metric prefix, e.g. 1 dyn = 10⁻⁵ newton, 1 Å = 10⁻¹⁰ m, etc. (and correspondences[Note 74] like 1 gauss ≘ 10⁻⁴ tesla). Finally, there are metric units whose conversion factors to SI units are not powers of ten, e.g. 1 calorie = 4.184 joules and 1 kilogram-force = 9.806650 newtons.
+
+ Some SI-unrecognised metric units are still frequently used, e.g. the calorie (in nutrition), the rem (in the U.S.), the jansky (in radio astronomy), the reciprocal centimetre (in spectroscopy), the gauss (in industry) and the CGS-Gaussian units[Note 75] more generally (in some subfields of physics), the metric horsepower (for engine power, in Europe), the kilogram-force (for rocket engine thrust, in China and sometimes in Europe), etc. Others are now rarely used, such as the sthène and the rutherford.
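+
+ As a concrete illustration of the conversion factors quoted above, here is a minimal Python sketch (added for illustration; the factor table is a small sample, not an exhaustive or authoritative list) that expresses a few SI-unrecognised metric units in SI coherent units.
+
+ # A small sample of the conversion factors mentioned in the text, mapping each
+ # SI-unrecognised unit to (factor, SI unit symbol).
+ TO_SI = {
+     "dyne":           (1e-5,    "N"),   # 1 dyn = 10^-5 newton
+     "angstrom":       (1e-10,   "m"),   # 1 Å = 10^-10 metre
+     "gauss":          (1e-4,    "T"),   # 1 gauss corresponds to 10^-4 tesla
+     "calorie":        (4.184,   "J"),   # 1 calorie = 4.184 joules
+     "kilogram-force": (9.80665, "N"),   # 1 kgf = 9.80665 newtons
+ }
+
+ def to_si(value: float, unit: str) -> str:
+     factor, si_unit = TO_SI[unit]
+     return f"{value} {unit} = {value * factor:g} {si_unit}"
+
+ for value, unit in [(1, "dyne"), (1, "angstrom"), (500, "calorie"), (1, "kilogram-force")]:
+     print(to_si(value, unit))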
202
+
203
+ Organisations
204
+
205
+ Standards and conventions
206
+
207
+ It is therefore the declared policy of the United States-
208
+
209
+ (1) to designate the metric system of measurement as the preferred system of weights and measures for United States trade and commerce;
210
+
211
+ (2) to require that each Federal agency, by a date certain and to the extent economically feasible by the end of the fiscal year 1992, use the metric system of measurement in its procurements, grants, and other business-related activities, except to the extent that such use is impractical or is likely to cause significant inefficiencies or loss of markets to United States firms, such as when foreign competitors are producing competing products in non-metric units;
212
+
213
+ (3) to seek out ways to increase understanding of the metric system of measurement through educational information and guidance and in Government publications; and
214
+
215
+ (4) to permit the continued use of traditional systems of weights and measures in non-business activities.
216
+
217
+ The unit of length is the metre, defined by the distance, at 0°, between the axes of the two central lines marked on the bar of platinum-iridium kept at the Bureau International des Poids et Mesures and declared Prototype of the metre by the 1st Conférence Générale des Poids et Mesures, this bar being subject to standard atmospheric pressure and supported on two cylinders of at least one centimetre diameter, symmetrically placed in the same horizontal plane at a distance of 571 mm from each other.
218
+
219
+ We shall in the first place describe the state of the Standards recovered from the ruins of the House of Commons, as ascertained in our inspection of them made on 1st June, 1838, at the Journal Office, where they are preserved under the care of Mr. James Gudge, Principal Clerk of the Journal Office. The following list, taken by ourselves from inspection, was compared with a list produced by Mr. Gudge, and stated by him to have been made by Mr. Charles Rowland, one of the Clerks of the Journal Office, immediately after the fire, and was found to agree with it. Mr. Gudge stated that no other Standards of Length or Weight were in his custody.
220
+
221
+ No. 1. A brass bar marked “Standard [G. II. crown emblem] Yard, 1758,” which on examination was found to have its right hand stud perfect, with the point and line visible, but with its left hand stud completely melted out, a hole only remaining. The bar was somewhat bent, and discoloured in every part.
222
+
223
+ No. 2. A brass bar with a projecting cock at each end, forming a bed for the trial of yard-measures; discoloured.
224
+
225
+ No. 3. A brass bar marked “Standard [G. II. crown emblem] Yard, 1760,” from which the left hand stud was completely melted out, and which in other respects was in the same condition as No. 1.
226
+
227
+ No. 4. A yard-bed similar to No. 2; discoloured.
228
+
229
+ No. 5. A weight of the form [drawing of a weight] marked [2 lb. T. 1758], apparently of brass or copper; much discoloured.
230
+
231
+ No. 6. A weight marked in the same manner for 4 lbs., in the same state.
232
+
233
+ No. 7. A weight similar to No. 6, with a hollow space at its base, which appeared at first sight to have been originally filled with some soft metal that had been now melted out, but which on a rough trial was found to have nearly the same weight as No. 6.
234
+
235
+ No. 8. A similar weight of 8 lbs., similarly marked (with the alteration of 8 lbs. for 4 lbs.), and in the same state.
236
+
237
+ No. 9. Another exactly like No. 8.
238
+
239
+ Nos. 10 and 11. Two weights of 16 lbs., similarly marked.
240
+
241
+ Nos. 12 and 13. Two weights of 32 lbs., similarly marked.
242
+
243
+ No. 14. A weight with a triangular ring-handle, marked "S.F. 1759 17 lbs. 8 dwts. Troy", apparently intended to represent the stone of 14 lbs. avoirdupois, allowing 7008 troy grains to each avoirdupois pound.
244
+
245
+ It appears from this list that the bar adopted in the Act 5th Geo. IV., cap. 74, sect. 1, for the legal standard of one yard, (No. 3 of the preceding list), is so far injured, that it is impossible to ascertain from it, with the most moderate accuracy, the statutable length of one yard. The legal standard of one troy pound is missing. We have therefore to report that it is absolutely necessary that steps be taken for the formation and legalising of new Standards of Length and Weight.
246
+
247
+ [t]he bronze yard No. 11, which was an exact copy of the British imperial yard both in form and material, had shown changes when compared with the imperial yard in 1876 and 1888 which could not reasonably be said to be entirely due to changes in No. 11. Suspicion as to the constancy of the length of the British standard was therefore aroused.
248
+
249
+ In 1890, as a signatory of the Metre Convention, the US received two copies of the International Prototype Metre, the construction of which represented the most advanced ideas of standards of the time. Therefore it seemed that US measures would have greater stability and higher accuracy by accepting the international metre as fundamental standard, which was formalised in 1893 by the Mendenhall Order.[25]:379–81
250
+
en/5576.html.txt ADDED
@@ -0,0 +1,76 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ A metric system is a system of measurement that succeeded the decimalised system based on the metre introduced in France in the 1790s. The historical development of these systems culminated in the definition of the International System of Units (SI), under the oversight of an international standards body.
4
+
5
+ The historical evolution of metric systems has resulted in the recognition of several principles. Each of the fundamental dimensions of nature is expressed by a single base unit of measure. The definition of base units has increasingly been realised from natural principles, rather than by copies of physical artefacts. For quantities derived from the fundamental base units of the system, units derived from the base units are used–e.g., the square metre is the derived unit for area, a quantity derived from length. These derived units are coherent, which means that they involve only products of powers of the base units, without empirical factors. For any given quantity whose unit has a special name and symbol, an extended set of smaller and larger units is defined that are related by factors of powers of ten. The unit of time should be the second; the unit of length should be either the metre or a decimal multiple of it; and the unit of mass should be the gram or a decimal multiple of it.
6
+
7
+ Metric systems have evolved since the 1790s, as science and technology have evolved, in providing a single universal measuring system. Before and in addition to the SI, some other examples of metric systems are the following: the MKS system of units and the MKSA systems, which are the direct forerunners of the SI; the centimetre–gram–second (CGS) system and its subtypes, the CGS electrostatic (cgs-esu) system, the CGS electromagnetic (cgs-emu) system, and their still-popular blend, the Gaussian system; the metre–tonne–second (MTS) system; and the gravitational metric systems, which can be based on either the metre or the centimetre, and either the gram(-force) or the kilogram(-force).
8
+
9
+ The French Revolution (1789–99) provided an opportunity for the French to reform their unwieldy and archaic system of many local weights and measures. Charles Maurice de Talleyrand championed a new system based on natural units, proposing to the French National Assembly in 1790 that such a system be developed. Talleyrand had ambitions that a new natural and standardised system would be embraced worldwide, and was keen to involve other countries in its development. Great Britain ignored invitations to co-operate, so the French Academy of Sciences decided in 1791 to go it alone and set up a commission for the purpose. The commission decided that the standard of length should be based on the size of the Earth. It defined that standard as the 'metre', with a length of one ten-millionth of the length of a quadrant on the Earth's surface from the equator to the north pole. In 1799, after the length of that quadrant had been surveyed, the new system was launched in France.[1]:145–149
10
+
11
+ The units of the metric system, originally taken from observable features of nature, are now defined by seven physical constants being given exact numerical values in terms of the units. In the modern form of the International System of Units (SI), the seven base units are: metre for length, kilogram for mass, second for time, ampere for electric current, kelvin for temperature, candela for luminous intensity and mole for amount of substance. These, together with their derived units, can measure any physical quantity. Derived units may have their own unit name, such as the watt (J/s) and lux (cd/m²), or may just be expressed as combinations of base units, such as velocity (m/s) and acceleration (m/s²).[2]
12
+
13
+ The metric system was designed to have properties that make it easy to use and widely applicable, including units based on the natural world, decimal ratios, prefixes for multiples and sub-multiples, and a structure of base and derived units. It is also a coherent system, which means that its units do not introduce conversion factors not already present in equations relating quantities. It has a property called rationalisation that eliminates certain constants of proportionality in equations of physics.
14
+
15
+ The metric system is extensible, and new derived units are defined as needed in fields such as radiology and chemistry. For example, the katal, a derived unit for catalytic activity equivalent to one mole per second (1 mol/s), was added in 1999.
16
+
17
+ Although the metric system has changed and developed since its inception, its basic concepts have hardly changed. Designed for transnational use, it consisted of a basic set of units of measurement, now known as base units. Derived units were built up from the base units using logical rather than empirical relationships while multiples and submultiples of both base and derived units were decimal-based and identified by a standard set of prefixes.
18
+
19
+ The base units used in a measurement system must be realisable. Each of the definitions of the base units in the SI is accompanied by a defined mise en pratique [practical realisation] that describes in detail at least one way in which the base unit can be measured.[4] Where possible, definitions of the base units were developed so that any laboratory equipped with proper instruments would be able to realise a standard without reliance on an artefact held by another country. In practice, such realisation is done under the auspices of a mutual acceptance arrangement.[5]
20
+
21
+ In the SI, the standard metre is defined as exactly 1/299,792,458 of the distance that light travels in a second. The realisation of the metre depends in turn on precise realisation of the second. There are both astronomical observation methods and laboratory measurement methods that are used to realise the standard metre. Because the speed of light is now exactly defined in terms of the metre, more precise measurement of the speed of light does not result in a more accurate figure for its velocity in standard units, but rather a more accurate realisation of the metre. The accuracy of the measured speed of light is considered to be within 1 m/s, and the realisation of the metre is within about 3 parts in 1,000,000,000, or a proportion of 0.3×10⁻⁸:1.
22
+
23
+ The kilogram was originally defined as the mass of a man-made artefact of platinum-iridium held in a laboratory in France, until the new definition was introduced in May 2019. Replicas made in 1879 at the time of the artefact's fabrication and distributed to signatories of the Metre Convention serve as de facto standards of mass in those countries. Additional replicas have been fabricated since as additional countries have joined the convention. The replicas were subject to periodic validation by comparison to the original, called the IPK. It became apparent that either the IPK or the replicas or both were deteriorating, and were no longer reliably comparable: they had diverged by 50 μg since fabrication, so, figuratively, the accuracy of the kilogram was no better than 5 parts in a hundred million, or a proportion of 5×10⁻⁸:1. The accepted redefinition of SI base units replaced the IPK with an exact definition of the Planck constant, which defines the kilogram in terms of the second and metre.
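+
+ The relative-precision figures in this and the preceding paragraph follow from simple ratios; the short Python sketch below (added for illustration, using the approximate uncertainty values quoted in the text) reproduces them.
+
+ # Relative precision of the metre realisation, from the ~1 m/s accuracy of the
+ # measured speed of light against the exact defined value.
+ SPEED_OF_LIGHT = 299_792_458          # m/s, exact by definition of the metre
+ speed_uncertainty = 1.0               # m/s, approximate accuracy quoted above
+ print(f"{speed_uncertainty / SPEED_OF_LIGHT:.1e}")   # ~3.3e-09, about 3 parts in 10^9
+
+ # Relative drift of the kilogram, from the ~50 microgram divergence of the prototypes.
+ divergence_kg = 50e-9                 # 50 micrograms expressed in kilograms
+ print(f"{divergence_kg / 1.0:.1e}")   # 5.0e-08, i.e. 5 parts in a hundred million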
24
+
25
+ The metric system base units were originally adopted because they represented fundamental orthogonal dimensions of measurement corresponding to how we perceive nature: a spatial dimension, a time dimension, one for inertia, and later, a more subtle one for the dimension of an "invisible substance" known as electricity or more generally, electromagnetism. One and only one unit in each of these dimensions was defined, unlike older systems where multiple perceptual quantities with the same dimension were prevalent, like inches, feet and yards or ounces, pounds and tons. Units for other quantities like area and volume, which are also spatial dimensional quantities, were derived from the fundamental ones by logical relationships, so that the unit of area, for example, was the unit of length squared.
26
+
27
+ Many derived units were already in use before and during the time the metric system evolved, because they represented convenient abstractions of whatever base units were defined for the system, especially in the sciences. So analogous units were scaled in terms of the units of the newly established metric system, and their names adopted into the system. Many of these were associated with electromagnetism. Other perceptual units, like volume, which were not defined in terms of base units, were incorporated into the system with definitions in the metric base units, so that the system remained simple. It grew in number of units, but the system retained a uniform structure.
28
+
29
+ Some customary systems of weights and measures had duodecimal ratios, which meant quantities were conveniently divisible by 2, 3, 4, and 6. But it was difficult to do arithmetic with things like ​1⁄4 pound or ​1⁄3 foot. There was no system of notation for successive fractions: for example, ​1⁄3 of ​1⁄3 of a foot was not an inch or any other unit. But the system of counting in decimal ratios did have notation, and the system had the algebraic property of multiplicative closure: a fraction of a fraction, or a multiple of a fraction was a quantity in the system, like ​1⁄10 of ​1⁄10 which is ​1⁄100. So a decimal radix became the ratio between unit sizes of the metric system.
30
+
31
+ In the metric system, multiples and submultiples of units follow a decimal pattern.[Note 1]
32
+
33
+ A common set of decimal-based prefixes that have the effect of multiplication or division by an integer power of ten can be applied to units that are themselves too large or too small for practical use. The concept of using consistent classical (Latin or Greek) names for the prefixes was first proposed in a report by the French Revolutionary Commission on Weights and Measures in May 1793.[3]:89–96 The prefix kilo, for example, is used to multiply the unit by 1000, and the prefix milli is to indicate a one-thousandth part of the unit. Thus the kilogram and kilometre are a thousand grams and metres respectively, and a milligram and millimetre are one thousandth of a gram and metre respectively. These relations can be written symbolically as 1 mg = 0.001 g and 1 km = 1000 m.[6]
34
+
35
+ In the early days, multipliers that were positive powers of ten were given Greek-derived prefixes such as kilo- and mega-, and those that were negative powers of ten were given Latin-derived prefixes such as centi- and milli-. However, 1935 extensions to the prefix system did not follow this convention: the prefixes nano- and micro-, for example, have Greek roots.[1]:222–223 During the 19th century the prefix myria-, derived from the Greek word μύριοι (mýrioi), was used as a multiplier for 10000.[7]
36
+
37
+ When applying prefixes to derived units of area and volume that are expressed in terms of units of length squared or cubed, the square and cube operators are applied to the unit of length including the prefix: for example, 1 km² means (1000 m)² = 10⁶ m², and 1 cm³ means (0.01 m)³ = 10⁻⁶ m³, as illustrated in the sketch below.[6]
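+
+ A minimal Python sketch of this behaviour follows (added for illustration; the prefix table is a small sample). The key point is that the prefix binds to the unit of length before squaring or cubing.
+
+ # Decimal prefixes as multipliers of the metre.
+ PREFIXES = {"kilo": 1e3, "centi": 1e-2, "milli": 1e-3}
+
+ def length_in_metres(value: float, prefix: str) -> float:
+     """Convert a prefixed length, e.g. 2.5 'kilo'-metres, to metres."""
+     return value * PREFIXES[prefix]
+
+ print(length_in_metres(1, "kilo"))        # 1000.0 m in one kilometre
+ print(length_in_metres(1, "milli"))       # 0.001 m in one millimetre
+
+ # Area and volume: the prefix applies to the length before the power is taken.
+ print(length_in_metres(1, "kilo") ** 2)   # 1000000.0 square metres in 1 km^2
+ print(length_in_metres(1, "centi") ** 3)  # ~1e-06 cubic metres in 1 cm^3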
38
+
39
+ Prefixes are not usually used to indicate multiples of a second greater than 1; the non-SI units of minute, hour and day are used instead. On the other hand, prefixes are used for multiples of the non-SI unit of volume, the litre (l, L) such as millilitres (ml).[6]
40
+
41
+ Each variant of the metric system has a degree of coherence—the derived units are directly related to the base units without the need for intermediate conversion factors.[8] For example, in a coherent system the units of force, energy and power are chosen so that the equations
42
+ force = mass × acceleration,  energy = force × distance,  power = energy ÷ time
43
+ hold without the introduction of unit conversion factors. Once a set of coherent units has been defined, other relationships in physics that use those units will automatically be true. Therefore, Einstein's mass–energy equation, E = mc², does not require extraneous constants when expressed in coherent units.[9]
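+
+ A short numerical illustration of coherence follows (added for illustration; the input values are arbitrary): combining SI base and derived units through the defining equations introduces no conversion factors, and E = mc² likewise yields joules directly.
+
+ # Coherent SI units: newtons, joules and watts follow from kilograms, metres and seconds
+ # with no extra numerical factors.
+ mass = 2.0          # kg
+ acceleration = 3.0  # m/s^2
+ distance = 4.0      # m
+ time = 5.0          # s
+
+ force = mass * acceleration   # newtons, force = mass x acceleration
+ energy = force * distance     # joules,  energy = force x distance
+ power = energy / time         # watts,   power = energy / time
+ print(force, energy, power)   # 6.0 24.0 4.8
+
+ # Mass-energy equivalence in the same coherent units.
+ c = 299_792_458               # m/s
+ print(1.0 * c ** 2)           # ~8.99e16 joules of rest energy in 1 kg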
44
+
45
+ The CGS system had two units of energy, the erg that was related to mechanics and the calorie that was related to thermal energy; so only one of them (the erg) could bear a coherent relationship to the base units. Coherence was a design aim of SI, which resulted in only one unit of energy being defined – the joule.[10]
46
+
47
+ Maxwell's equations of electromagnetism contained a factor of 4π relating to steradians, representative of the fact that electric charges and magnetic fields may be considered to emanate from a point and propagate equally in all directions, i.e. spherically. This factor appeared awkwardly in many equations of physics dealing with the dimensionality of electromagnetism and sometimes other quantities.
48
+
49
+ A number of different metric systems have been developed, all using the Mètre des Archives and Kilogramme des Archives (or their descendants) as their base units, but differing in the definitions of the various derived units.
50
+
51
+ In 1832, Gauss used the astronomical second as a base unit in defining the gravitation of the earth; this unit, together with the gram and the millimetre, formed the first system of mechanical units.
52
+
53
+ The centimetre–gram–second system of units (CGS) was the first coherent metric system, having been developed in the 1860s and promoted by Maxwell and Thomson. In 1874, this system was formally promoted by the British Association for the Advancement of Science (BAAS).[11] The system's characteristics are that density is expressed in g/cm³, force in dynes and mechanical energy in ergs. Thermal energy was defined in calories, one calorie being the energy required to raise the temperature of one gram of water from 15.5 °C to 16.5 °C. The meeting also recognised two sets of units for electrical and magnetic properties – the electrostatic set of units and the electromagnetic set of units.[12]
54
+
55
+ Several systems of electrical units were defined following discovery of Ohm's law in 1827.
56
+
57
+ The CGS units of electricity were cumbersome to work with. This was remedied at the 1893 International Electrical Congress held in Chicago by defining the "international" ampere and ohm using definitions based on the metre, kilogram and second.[13]
58
+
59
+ During the same period in which the CGS system was being extended to include electromagnetism, other systems were developed, distinguished by their choice of coherent base unit, including the Practical System of Electric Units, or QES (quad–eleventhgram–second) system.[14]:268[15]:17 Here, the base units are the quad, equal to 10⁷ m (approximately a quadrant of the Earth's circumference), the eleventhgram, equal to 10⁻¹¹ g, and the second. These were chosen so that the corresponding electrical units of potential difference, current and resistance had a convenient magnitude.
60
+
61
+ In 1901, Giovanni Giorgi showed that by adding an electrical unit as a fourth base unit, the various anomalies in electromagnetic systems could be resolved. The metre–kilogram–second–coulomb (MKSC) and metre–kilogram–second–ampere (MKSA) systems are examples of such systems.[16]
62
+
63
+ The International System of Units (Système international d'unités or SI) is the current international standard metric system and is also the system most widely used around the world. It is an extension of Giorgi's MKSA system – its base units are the metre, kilogram, second, ampere, kelvin, candela and mole.[10]
64
+ The MKS (metre–kilogram–second) system came into existence in 1889, when artefacts for the metre and kilogram were fabricated according to the Metre Convention. Early in the 20th century, an unspecified electrical unit was added, and the system was called MKSX. When it became apparent that the unit would be the ampere, the system was referred to as the MKSA system, and was the direct predecessor of the SI.
65
+
66
+ The metre–tonne–second system of units (MTS) was based on the metre, tonne and second – the unit of force was the sthène and the unit of pressure was the pièze. It was invented in France for industrial use and from 1933 to 1955 was used both in France and in the Soviet Union.[17][18]
67
+
68
+ Gravitational metric systems use the kilogram-force (kilopond) as a base unit of force, with mass measured in a unit known as the hyl, Technische Masseneinheit (TME), mug or metric slug.[19] Although the CGPM passed a resolution in 1901 defining the standard value of acceleration due to gravity to be 980.665 cm/s², gravitational units are not part of the International System of Units (SI).[20]
69
+
70
+ The International System of Units is the modern metric system. It is based on the metre–kilogram–second–ampere (MKSA) system of units from early in the 20th century. It also includes numerous coherent derived units for common quantities like power (watt) and luminous flux (lumen). Electrical units were taken from the International system then in use. Other units like those for energy (joule) were modelled on those from the older CGS system, but scaled to be coherent with MKSA units. Two additional base units, the degree Kelvin, equivalent to the degree Celsius for thermodynamic temperature, and the candela, roughly equivalent to the international candle unit of luminous intensity, were introduced. Later, another base unit, the mole, a unit of amount of substance equivalent to Avogadro's number of specified elementary entities, was added along with several other derived units.
71
+
72
+ The system was promulgated by the General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM) in 1960. At that time, the metre was redefined in terms of the wavelength of a spectral line of the krypton-86[Note 2] atom, and the standard metre artefact from 1889 was retired.
73
+
74
+ Today, the International System of Units consists of 7 base units and innumerable coherent derived units including 22 with special names. The last new derived unit, the katal for catalytic activity, was added in 1999. Some of the base units are now realised in terms of invariant constants of physics. As a consequence, the speed of light has now become an exactly defined constant, and defines the metre as 1⁄299,792,458 of the distance light travels in a second. Until 2019, the kilogram was defined by a man-made artefact of deteriorating platinum-iridium. The range of decimal prefixes has been extended to those for 10²⁴, yotta, and 10⁻²⁴, yocto, which are unfamiliar because nothing in our everyday lives is that big or that small.
75
+
76
+ The International System of Units has been adopted as the official system of weights and measures by all nations in the world except for Myanmar, Liberia, and the United States, while the United States is the only industrialised country where the metric system is not the predominant system of units.[21]
en/5577.html.txt ADDED
@@ -0,0 +1,111 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ In biology, the nervous system is a highly complex part of an animal that coordinates its actions and sensory information by transmitting signals to and from different parts of its body. The nervous system detects environmental changes that impact the body, then works in tandem with the endocrine system to respond to such events.[1] Nervous tissue first arose in wormlike organisms about 550 to 600 million years ago. In vertebrates it consists of two main parts, the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS consists of the brain and spinal cord. The PNS consists mainly of nerves, which are enclosed bundles of the long fibers or axons, that connect the CNS to every other part of the body. Nerves that transmit signals from the brain are called motor or efferent nerves, while those nerves that transmit information from the body to the CNS are called sensory or afferent. Spinal nerves serve both functions and are called mixed nerves. The PNS is divided into three separate subsystems, the somatic, autonomic, and enteric nervous systems. Somatic nerves mediate voluntary movement. The autonomic nervous system is further subdivided into the sympathetic and the parasympathetic nervous systems. The sympathetic nervous system is activated in cases of emergencies to mobilize energy, while the parasympathetic nervous system is activated when organisms are in a relaxed state. The enteric nervous system functions to control the gastrointestinal system. Both autonomic and enteric nervous systems function involuntarily. Nerves that exit from the cranium are called cranial nerves while those exiting from the spinal cord are called spinal nerves.
4
+
5
+ At the cellular level, the nervous system is defined by the presence of a special type of cell, called the neuron, also known as a "nerve cell". Neurons have special structures that allow them to send signals rapidly and precisely to other cells. They send these signals in the form of electrochemical waves traveling along thin fibers called axons, which cause chemicals called neurotransmitters to be released at junctions called synapses. A cell that receives a synaptic signal from a neuron may be excited, inhibited, or otherwise modulated. The connections between neurons can form neural pathways, neural circuits, and larger networks that generate an organism's perception of the world and determine its behavior. Along with neurons, the nervous system contains other specialized cells called glial cells (or simply glia), which provide structural and metabolic support.
6
+
7
+ Nervous systems are found in most multicellular animals, but vary greatly in complexity.[2] The only multicellular animals that have no nervous system at all are sponges, placozoans, and mesozoans, which have very simple body plans. The nervous systems of the radially symmetric organisms ctenophores (comb jellies) and cnidarians (which include anemones, hydras, corals and jellyfish) consist of a diffuse nerve net. All other animal species, with the exception of a few types of worm, have a nervous system containing a brain, a central cord (or two cords running in parallel), and nerves radiating from the brain and central cord. The size of the nervous system ranges from a few hundred cells in the simplest worms, to around 300 billion cells in African elephants.[3]
8
+
9
+ The central nervous system functions to send signals from one cell to others, or from one part of the body to others and to receive feedback. Malfunction of the nervous system can occur as a result of genetic defects, physical damage due to trauma or toxicity, infection, or simply senescence. The medical specialty of neurology studies disorders of the nervous system and looks for interventions that can prevent or treat them. In the peripheral nervous system, the most common problem is the failure of nerve conduction, which can be due to different causes including diabetic neuropathy and demyelinating disorders such as multiple sclerosis, as well as motor neuron diseases such as amyotrophic lateral sclerosis. Neuroscience is the field of science that focuses on the study of the nervous system.
10
+
11
+ The nervous system derives its name from nerves, which are cylindrical bundles of fibers (the axons of neurons), that emanate from the brain and spinal cord, and branch repeatedly to innervate every part of the body.[4] Nerves are large enough to have been recognized by the ancient Egyptians, Greeks, and Romans,[5] but their internal structure was not understood until it became possible to examine them using a microscope.[6] The author Michael Nikoletseas wrote:[7]
12
+
13
+ "It is difficult to believe that until approximately year 1900 it was not known that neurons are the basic units of the brain (Santiago Ramón y Cajal). Equally surprising is the fact that the concept of chemical transmission in the brain was not known until around 1930 (Henry Hallett Dale and Otto Loewi). We began to understand the basic electrical phenomenon that neurons use in order to communicate among themselves, the action potential, in the 1950s (Alan Lloyd Hodgkin, Andrew Huxley and John Eccles). It was in the 1960s that we became aware of how basic neuronal networks code stimuli and thus basic concepts are possible (David H. Hubel and Torsten Wiesel). The molecular revolution swept across US universities in the 1980s. It was in the 1990s that molecular mechanisms of behavioral phenomena became widely known (Eric Richard Kandel)."
14
+
15
+ A microscopic examination shows that nerves consist primarily of axons, along with different membranes that wrap around them and segregate them into fascicles. The neurons that give rise to nerves do not lie entirely within the nerves themselves—their cell bodies reside within the brain, spinal cord, or peripheral ganglia.[4]
16
+
17
+ All animals more advanced than sponges have nervous systems. However, even sponges, unicellular animals, and non-animals such as slime molds have cell-to-cell signalling mechanisms that are precursors to those of neurons.[8] In radially symmetric animals such as the jellyfish and hydra, the nervous system consists of a nerve net, a diffuse network of isolated cells.[9] In bilaterian animals, which make up the great majority of existing species, the nervous system has a common structure that originated early in the Ediacaran period, over 550 million years ago.[10][11]
18
+
19
+ The nervous system contains two main categories or types of cells: neurons and glial cells.
20
+
21
+ The nervous system is defined by the presence of a special type of cell—the neuron (sometimes called "neurone" or "nerve cell").[4] Neurons can be distinguished from other cells in a number of ways, but their most fundamental property is that they communicate with other cells via synapses, which are membrane-to-membrane junctions containing molecular machinery that allows rapid transmission of signals, either electrical or chemical.[4] Many types of neuron possess an axon, a protoplasmic protrusion that can extend to distant parts of the body and make thousands of synaptic contacts;[12] axons typically extend throughout the body in bundles called nerves.
22
+
23
+ Even in the nervous system of a single species such as humans, hundreds of different types of neurons exist, with a wide variety of morphologies and functions.[12] These include sensory neurons that transmute physical stimuli such as light and sound into neural signals, and motor neurons that transmute neural signals into activation of muscles or glands; however in many species the great majority of neurons participate in the formation of centralized structures (the brain and ganglia) and they receive all of their input from other neurons and send their output to other neurons.[4]
24
+
25
+ Glial cells (named from the Greek for "glue") are non-neuronal cells that provide support and nutrition, maintain homeostasis, form myelin, and participate in signal transmission in the nervous system.[13] In the human brain, it is estimated that the total number of glia roughly equals the number of neurons, although the proportions vary in different brain areas.[14] Among the most important functions of glial cells are to support neurons and hold them in place; to supply nutrients to neurons; to insulate neurons electrically; to destroy pathogens and remove dead neurons; and to provide guidance cues directing the axons of neurons to their targets.[13] A very important type of glial cell (oligodendrocytes in the central nervous system, and Schwann cells in the peripheral nervous system) generates layers of a fatty substance called myelin that wraps around axons and provides electrical insulation which allows them to transmit action potentials much more rapidly and efficiently. Recent findings indicate that glial cells, such as microglia and astrocytes, serve as important resident immune cells within the central nervous system.
26
+
27
+ The nervous system of vertebrates (including humans) is divided into the central nervous system (CNS) and the peripheral nervous system (PNS).[15]
28
+
29
+ The CNS is the major division, and consists of the brain and the spinal cord.[15] The spinal canal contains the spinal cord, while the cranial cavity contains the brain. The CNS is enclosed and protected by the meninges, a three-layered system of membranes, including a tough, leathery outer layer called the dura mater. The brain is also protected by the skull, and the spinal cord by the vertebrae.
30
+
31
+ The peripheral nervous system (PNS) is a collective term for the nervous system structures that do not lie within the CNS.[16] The large majority of the axon bundles called nerves are considered to belong to the PNS, even when the cell bodies of the neurons to which they belong reside within the brain or spinal cord. The PNS is divided into somatic and visceral parts. The somatic part consists of the nerves that innervate the skin, joints, and muscles. The cell bodies of somatic sensory neurons lie in dorsal root ganglia of the spinal cord. The visceral part, also known as the autonomic nervous system, contains neurons that innervate the internal organs, blood vessels, and glands. The autonomic nervous system itself consists of two parts: the sympathetic nervous system and the parasympathetic nervous system. Some authors also include sensory neurons whose cell bodies lie in the periphery (for senses such as hearing) as part of the PNS; others, however, omit them.[17]
32
+
33
+ The vertebrate nervous system can also be divided into areas called gray matter and white matter.[18] Gray matter (which is only gray in preserved tissue, and is better described as pink or light brown in living tissue) contains a high proportion of cell bodies of neurons. White matter is composed mainly of myelinated axons, and takes its color from the myelin. White matter includes all of the nerves, and much of the interior of the brain and spinal cord. Gray matter is found in clusters of neurons in the brain and spinal cord, and in cortical layers that line their surfaces. There is an anatomical convention that a cluster of neurons in the brain or spinal cord is called a nucleus, whereas a cluster of neurons in the periphery is called a ganglion.[19] There are, however, a few exceptions to this rule, notably including the part of the forebrain called the basal ganglia.[20]
34
+
35
+ Sponges have no cells connected to each other by synaptic junctions, that is, no neurons, and therefore no nervous system. They do, however, have homologs of many genes that play key roles in synaptic function. Recent studies have shown that sponge cells express a group of proteins that cluster together to form a structure resembling a postsynaptic density (the signal-receiving part of a synapse).[8] However, the function of this structure is currently unclear. Although sponge cells do not show synaptic transmission, they do communicate with each other via calcium waves and other impulses, which mediate some simple actions such as whole-body contraction.[21]
36
+
37
+ Jellyfish, comb jellies, and related animals have diffuse nerve nets rather than a central nervous system. In most jellyfish the nerve net is spread more or less evenly across the body; in comb jellies it is concentrated near the mouth. The nerve nets consist of sensory neurons, which pick up chemical, tactile, and visual signals; motor neurons, which can activate contractions of the body wall; and intermediate neurons, which detect patterns of activity in the sensory neurons and, in response, send signals to groups of motor neurons. In some cases groups of intermediate neurons are clustered into discrete ganglia.[9]
38
+
39
+ The development of the nervous system in radiata is relatively unstructured. Unlike bilaterians, radiata only have two primordial cell layers, endoderm and ectoderm. Neurons are generated from a special set of ectodermal precursor cells, which also serve as precursors for every other ectodermal cell type.[22]
40
+
41
+ The vast majority of existing animals are bilaterians, meaning animals with left and right sides that are approximate mirror images of each other. All bilateria are thought to have descended from a common wormlike ancestor that appeared in the Ediacaran period, 550–600 million years ago.[10] The fundamental bilaterian body form is a tube with a hollow gut cavity running from mouth to anus, and a nerve cord with an enlargement (a "ganglion") for each body segment, with an especially large ganglion at the front, called the "brain".
42
+
43
+ Even mammals, including humans, show the segmented bilaterian body plan at the level of the nervous system. The spinal cord contains a series of segmental ganglia, each giving rise to motor and sensory nerves that innervate a portion of the body surface and underlying musculature. On the limbs, the layout of the innervation pattern is complex, but on the trunk it gives rise to a series of narrow bands. The top three segments belong to the brain, giving rise to the forebrain, midbrain, and hindbrain.[23]
44
+
45
+ Bilaterians can be divided, based on events that occur very early in embryonic development, into two groups (superphyla) called protostomes and deuterostomes.[24] Deuterostomes include vertebrates as well as echinoderms, hemichordates (mainly acorn worms), and Xenoturbellidans.[25] Protostomes, the more diverse group, include arthropods, molluscs, and numerous types of worms. There is a basic difference between the two groups in the placement of the nervous system within the body: protostomes possess a nerve cord on the ventral (usually bottom) side of the body, whereas in deuterostomes the nerve cord is on the dorsal (usually top) side. In fact, numerous aspects of the body are inverted between the two groups, including the expression patterns of several genes that show dorsal-to-ventral gradients. Most anatomists now consider that the bodies of protostomes and deuterostomes are "flipped over" with respect to each other, a hypothesis that was first proposed by Geoffroy Saint-Hilaire for insects in comparison to vertebrates. Thus insects, for example, have nerve cords that run along the ventral midline of the body, while all vertebrates have spinal cords that run along the dorsal midline.[26]
46
+
47
+ Worms are the simplest bilaterian animals, and reveal the basic structure of the bilaterian nervous system in the most straightforward way. As an example, earthworms have dual nerve cords running along the length of the body and merging at the tail and the mouth. These nerve cords are connected by transverse nerves like the rungs of a ladder. These transverse nerves help coordinate the two sides of the animal. Two ganglia at the head (the "nerve ring") end function similar to a simple brain. Photoreceptors on the animal's eyespots provide sensory information on light and dark.[27]
48
+
49
+ The nervous system of one very small roundworm, the nematode Caenorhabditis elegans, has been completely mapped out in a connectome including its synapses. Every neuron and its cellular lineage has been recorded and most, if not all, of the neural connections are known. In this species, the nervous system is sexually dimorphic; the nervous systems of the two sexes, males and female hermaphrodites, have different numbers of neurons and groups of neurons that perform sex-specific functions. In C. elegans, males have exactly 383 neurons, while hermaphrodites have exactly 302 neurons.[28]
50
+
51
+ Arthropods, such as insects and crustaceans, have a nervous system made up of a series of ganglia, connected by a ventral nerve cord made up of two parallel connectives running along the length of the belly.[29] Typically, each body segment has one ganglion on each side, though some ganglia are fused to form the brain and other large ganglia. The head segment contains the brain, also known as the supraesophageal ganglion. In the insect nervous system, the brain is anatomically divided into the protocerebrum, deutocerebrum, and tritocerebrum. Immediately behind the brain is the subesophageal ganglion, which is composed of three pairs of fused ganglia. It controls the mouthparts, the salivary glands and certain muscles. Many arthropods have well-developed sensory organs, including compound eyes for vision and antennae for olfaction and pheromone sensation. The sensory information from these organs is processed by the brain.
52
+
53
+ In insects, many neurons have cell bodies that are positioned at the edge of the brain and are electrically passive—the cell bodies serve only to provide metabolic support and do not participate in signalling. A protoplasmic fiber runs from the cell body and branches profusely, with some parts transmitting signals and other parts receiving signals. Thus, most parts of the insect brain have passive cell bodies arranged around the periphery, while the neural signal processing takes place in a tangle of protoplasmic fibers called neuropil, in the interior.[30]
54
+
55
+ A neuron is called identified if it has properties that distinguish it from every other neuron in the same animal—properties such as location, neurotransmitter, gene expression pattern, and connectivity—and if every individual organism belonging to the same species has one and only one neuron with the same set of properties.[31] In vertebrate nervous systems very few neurons are "identified" in this sense—in humans, there are believed to be none—but in simpler nervous systems, some or all neurons may be thus unique. In the roundworm C. elegans, whose nervous system is the most thoroughly described of any animal's, every neuron in the body is uniquely identifiable, with the same location and the same connections in every individual worm. One notable consequence of this fact is that the form of the C. elegans nervous system is completely specified by the genome, with no experience-dependent plasticity.[28]
56
+
57
+ The brains of many molluscs and insects also contain substantial numbers of identified neurons.[31] In vertebrates, the best known identified neurons are the gigantic Mauthner cells of fish.[32] Every fish has two Mauthner cells, in the bottom part of the brainstem, one on the left side and one on the right. Each Mauthner cell has an axon that crosses over, innervating neurons at the same brain level and then travelling down through the spinal cord, making numerous connections as it goes. The synapses generated by a Mauthner cell are so powerful that a single action potential gives rise to a major behavioral response: within milliseconds the fish curves its body into a C-shape, then straightens, thereby propelling itself rapidly forward. Functionally this is a fast escape response, triggered most easily by a strong sound wave or pressure wave impinging on the lateral line organ of the fish. Mauthner cells are not the only identified neurons in fish—there are about 20 more types, including pairs of "Mauthner cell analogs" in each spinal segmental nucleus. Although a Mauthner cell is capable of bringing about an escape response individually, in the context of ordinary behavior other types of cells usually contribute to shaping the amplitude and direction of the response.
58
+
59
+ Mauthner cells have been described as command neurons. A command neuron is a special type of identified neuron, defined as a neuron that is capable of driving a specific behavior individually.[33] Such neurons appear most commonly in the fast escape systems of various species—the squid giant axon and squid giant synapse, used for pioneering experiments in neurophysiology because of their enormous size, both participate in the fast escape circuit of the squid. The concept of a command neuron has, however, become controversial, because of studies showing that some neurons that initially appeared to fit the description were really only capable of evoking a response in a limited set of circumstances.[34]
60
+
61
+ At the most basic level, the function of the nervous system is to send signals from one cell to others, or from one part of the body to others. There are multiple ways that a cell can send signals to other cells. One is by releasing chemicals called hormones into the internal circulation, so that they can diffuse to distant sites. In contrast to this "broadcast" mode of signaling, the nervous system provides "point-to-point" signals—neurons project their axons to specific target areas and make synaptic connections with specific target cells.[35] Thus, neural signaling is capable of a much higher level of specificity than hormonal signaling. It is also much faster: the fastest nerve signals travel at speeds that exceed 100 meters per second.
62
+
63
+ At a more integrative level, the primary function of the nervous system is to control the body.[4] It does this by extracting information from the environment using sensory receptors, sending signals that encode this information into the central nervous system, processing the information to determine an appropriate response, and sending output signals to muscles or glands to activate the response. The evolution of a complex nervous system has made it possible for various animal species to have advanced perception abilities such as vision, complex social interactions, rapid coordination of organ systems, and integrated processing of concurrent signals. In humans, the sophistication of the nervous system makes it possible to have language, abstract representation of concepts, transmission of culture, and many other features of human society that would not exist without the human brain.
64
+
65
+ Most neurons send signals via their axons, although some types are capable of dendrite-to-dendrite communication. (In fact, the types of neurons called amacrine cells have no axons, and communicate only via their dendrites.) Neural signals propagate along an axon in the form of electrochemical waves called action potentials, which produce cell-to-cell signals at points where axon terminals make synaptic contact with other cells.[36]
66
+
67
+ Synapses may be electrical or chemical. Electrical synapses make direct electrical connections between neurons,[37] but chemical synapses are much more common, and much more diverse in function.[38] At a chemical synapse, the cell that sends signals is called presynaptic, and the cell that receives signals is called postsynaptic. Both the presynaptic and postsynaptic areas are full of molecular machinery that carries out the signalling process. The presynaptic area contains large numbers of tiny spherical vessels called synaptic vesicles, packed with neurotransmitter chemicals.[36] When the presynaptic terminal is electrically stimulated, an array of molecules embedded in the membrane are activated, and cause the contents of the vesicles to be released into the narrow space between the presynaptic and postsynaptic membranes, called the synaptic cleft. The neurotransmitter then binds to receptors embedded in the postsynaptic membrane, causing them to enter an activated state.[38] Depending on the type of receptor, the resulting effect on the postsynaptic cell may be excitatory, inhibitory, or modulatory in more complex ways. For example, release of the neurotransmitter acetylcholine at a synaptic contact between a motor neuron and a muscle cell induces rapid contraction of the muscle cell.[39] The entire synaptic transmission process takes only a fraction of a millisecond, although the effects on the postsynaptic cell may last much longer (even indefinitely, in cases where the synaptic signal leads to the formation of a memory trace).[12]
68
+
69
+ There are literally hundreds of different types of synapses. In fact, there are over a hundred known neurotransmitters, and many of them have multiple types of receptors.[40] Many synapses use more than one neurotransmitter—a common arrangement is for a synapse to use one fast-acting small-molecule neurotransmitter such as glutamate or GABA, along with one or more peptide neurotransmitters that play slower-acting modulatory roles. Molecular neuroscientists generally divide receptors into two broad groups: chemically gated ion channels and second messenger systems. When a chemically gated ion channel is activated, it forms a passage that allows specific types of ions to flow across the membrane. Depending on the type of ion, the effect on the target cell may be excitatory or inhibitory. When a second messenger system is activated, it starts a cascade of molecular interactions inside the target cell, which may ultimately produce a wide variety of complex effects, such as increasing or decreasing the sensitivity of the cell to stimuli, or even altering gene transcription.
70
+
71
+ According to a rule called Dale's principle, which has only a few known exceptions, a neuron releases the same neurotransmitters at all of its synapses.[41] This does not mean, though, that a neuron exerts the same effect on all of its targets, because the effect of a synapse depends not on the neurotransmitter, but on the receptors that it activates.[38] Because different targets can (and frequently do) use different types of receptors, it is possible for a neuron to have excitatory effects on one set of target cells, inhibitory effects on others, and complex modulatory effects on others still. Nevertheless, it happens that the two most widely used neurotransmitters, glutamate and GABA, each have largely consistent effects. Glutamate has several widely occurring types of receptors, but all of them are excitatory or modulatory. Similarly, GABA has several widely occurring receptor types, but all of them are inhibitory.[42] Because of this consistency, glutamatergic cells are frequently referred to as "excitatory neurons", and GABAergic cells as "inhibitory neurons". Strictly speaking, this is an abuse of terminology—it is the receptors that are excitatory and inhibitory, not the neurons—but it is commonly seen even in scholarly publications.
72
+
73
+ One very important subset of synapses is capable of forming memory traces by means of long-lasting activity-dependent changes in synaptic strength.[43] The best-known form of neural memory is a process called long-term potentiation (abbreviated LTP), which operates at synapses that use the neurotransmitter glutamate acting on a special type of receptor known as the NMDA receptor.[44] The NMDA receptor has an "associative" property: if the two cells involved in the synapse are both activated at approximately the same time, a channel opens that permits calcium to flow into the target cell.[45] The calcium entry initiates a second messenger cascade that ultimately leads to an increase in the number of glutamate receptors in the target cell, thereby increasing the effective strength of the synapse. This change in strength can last for weeks or longer. Since the discovery of LTP in 1973, many other types of synaptic memory traces have been found, involving increases or decreases in synaptic strength that are induced by varying conditions, and last for variable periods of time.[44] The reward system, which reinforces desired behaviour, for example depends on a variant form of LTP that is conditioned on an extra input coming from a reward-signalling pathway that uses dopamine as its neurotransmitter.[46] All these forms of synaptic modifiability, taken collectively, give rise to neural plasticity, that is, to a capability for the nervous system to adapt itself to variations in the environment.
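The associative property described above can be caricatured in a few lines of code: a synaptic weight grows only when presynaptic and postsynaptic activity coincide. This is a toy sketch with invented constants, not a model of real NMDA-receptor kinetics.

```python
# Toy associative rule: the weight increases only when presynaptic and
# postsynaptic activity coincide, loosely mimicking the coincidence detection
# attributed to the NMDA receptor above. All constants are invented.
def update_weight(weight, pre_active, post_active,
                  potentiation=0.05, max_weight=2.0):
    if pre_active and post_active:                # both cells active together
        weight = min(weight + potentiation, max_weight)
    return weight

w = 1.0
trials = [(True, True), (True, False), (False, True), (True, True)]
for pre, post in trials:
    w = update_weight(w, pre, post)
print(round(w, 2))   # 1.1 -> strengthened only on the two coincident trials
```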
74
+
75
+ The basic neuronal function of sending signals to other cells includes a capability for neurons to exchange signals with each other. Networks formed by interconnected groups of neurons are capable of a wide variety of functions, including feature detection, pattern generation and timing,[47] and countless other types of information processing are thought to be possible. Warren McCulloch and Walter Pitts showed in 1943 that even artificial neural networks formed from a greatly simplified mathematical abstraction of a neuron are capable of universal computation.[48]
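The McCulloch–Pitts abstraction reduces a neuron to a weighted sum followed by a threshold. The minimal illustration below, with weights and thresholds chosen purely for demonstration, shows how such units can be composed into arbitrary Boolean functions (here XOR), which is the sense in which networks of them support general computation.

```python
# McCulloch-Pitts style unit: output 1 if the weighted sum of binary inputs
# reaches a threshold, else 0. Composing such units yields any Boolean
# function; XOR below is built from AND, OR and NOT units.
def unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b): return unit([a, b], [1, 1], 2)
def OR(a, b):  return unit([a, b], [1, 1], 1)
def NOT(a):    return unit([a], [-1], 0)

def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

print([XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 0]
```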
76
+
77
+ Historically, for many years the predominant view of the function of the nervous system was as a stimulus-response associator.[49] In this conception, neural processing begins with stimuli that activate sensory neurons, producing signals that propagate through chains of connections in the spinal cord and brain, giving rise eventually to activation of motor neurons and thereby to muscle contraction, i.e., to overt responses. Descartes believed that all of the behaviors of animals, and most of the behaviors of humans, could be explained in terms of stimulus-response circuits, although he also believed that higher cognitive functions such as language were not capable of being explained mechanistically.[50] Charles Sherrington, in his influential 1906 book The Integrative Action of the Nervous System,[49] developed the concept of stimulus-response mechanisms in much more detail, and Behaviorism, the school of thought that dominated Psychology through the middle of the 20th century, attempted to explain every aspect of human behavior in stimulus-response terms.[51]
78
+
79
+ However, experimental studies of electrophysiology, beginning in the early 20th century and reaching high productivity by the 1940s, showed that the nervous system contains many mechanisms for maintaining cell excitability and generating patterns of activity intrinsically, without requiring an external stimulus.[52] Neurons were found to be capable of producing regular sequences of action potentials, or sequences of bursts, even in complete isolation.[53] When intrinsically active neurons are connected to each other in complex circuits, the possibilities for generating intricate temporal patterns become far more extensive.[47] A modern conception views the function of the nervous system partly in terms of stimulus-response chains, and partly in terms of intrinsically generated activity patterns—both types of activity interact with each other to generate the full repertoire of behavior.[54]
80
+
81
+ The simplest type of neural circuit is a reflex arc, which begins with a sensory input and ends with a motor output, passing through a sequence of neurons connected in series.[55] This can be shown in the "withdrawal reflex" causing a hand to jerk back after a hot stove is touched. The circuit begins with sensory receptors in the skin that are activated by harmful levels of heat: a special type of molecular structure embedded in the membrane causes heat to change the electrical field across the membrane. If the change in electrical potential is large enough to pass the given threshold, it evokes an action potential, which is transmitted along the axon of the receptor cell, into the spinal cord. There the axon makes excitatory synaptic contacts with other cells, some of which project (send axonal output) to the same region of the spinal cord, others projecting into the brain. One target is a set of spinal interneurons that project to motor neurons controlling the arm muscles. The interneurons excite the motor neurons, and if the excitation is strong enough, some of the motor neurons generate action potentials, which travel down their axons to the point where they make excitatory synaptic contacts with muscle cells. The excitatory signals induce contraction of the muscle cells, which causes the joint angles in the arm to change, pulling the arm away.
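The chain just described can be caricatured as a sequence of threshold stages. The sketch below is illustrative only; the temperature threshold and the pass-through relays are simplifying assumptions, not physiological values.

```python
# Caricature of the withdrawal reflex as a chain of threshold stages:
# skin receptor -> spinal interneuron -> motor neuron -> muscle.
# The temperature scale and pass-through relays are simplifying assumptions.
def receptor_fires(skin_temp_c, harmful_above=45.0):
    return skin_temp_c > harmful_above

def interneuron_fires(receptor_input):
    return receptor_input                         # simple excitatory relay here

def motor_neuron_fires(interneuron_input):
    return interneuron_input

def withdrawal_reflex(skin_temp_c):
    fired = motor_neuron_fires(interneuron_fires(receptor_fires(skin_temp_c)))
    return "muscle contracts, arm withdraws" if fired else "no response"

print(withdrawal_reflex(60.0))   # muscle contracts, arm withdraws
print(withdrawal_reflex(30.0))   # no response
```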
82
+
83
+ In reality, this straightforward schema is subject to numerous complications.[55] Although for the simplest reflexes there are short neural paths from sensory neuron to motor neuron, there are also other nearby neurons that participate in the circuit and modulate the response. Furthermore, there are projections from the brain to the spinal cord that are capable of enhancing or inhibiting the reflex.
84
+
85
+ Although the simplest reflexes may be mediated by circuits lying entirely within the spinal cord, more complex responses rely on signal processing in the brain.[56] For example, when an object in the periphery of the visual field moves and a person looks toward it, many stages of signal processing are initiated. The initial sensory response, in the retina of the eye, and the final motor response, in the oculomotor nuclei of the brain stem, are not all that different from those in a simple reflex, but the intermediate stages are completely different. Instead of a one- or two-step chain of processing, the visual signals pass through perhaps a dozen stages of integration, involving the thalamus, cerebral cortex, basal ganglia, superior colliculus, cerebellum, and several brainstem nuclei. These areas perform signal-processing functions that include feature detection, perceptual analysis, memory recall, decision-making, and motor planning.[57]
86
+
87
+ Feature detection is the ability to extract biologically relevant information from combinations of sensory signals.[58] In the visual system, for example, sensory receptors in the retina of the eye are only individually capable of detecting "points of light" in the outside world.[59] Second-level visual neurons receive input from groups of primary receptors, higher-level neurons receive input from groups of second-level neurons, and so on, forming a hierarchy of processing stages. At each stage, important information is extracted from the signal ensemble and unimportant information is discarded. By the end of the process, input signals representing "points of light" have been transformed into a neural representation of objects in the surrounding world and their properties. The most sophisticated sensory processing occurs inside the brain, but complex feature extraction also takes place in the spinal cord and in peripheral sensory organs such as the retina.
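A toy version of this hierarchy: first-level units report raw "points of light", and second-level units respond only where neighbouring receptors disagree, that is, at a local contrast edge. The input pattern and the edge rule are invented for illustration.

```python
# Sketch of hierarchical feature extraction: level 1 is raw "points of light",
# level 2 units fire where adjacent receptors differ, signalling a local edge.
light = [0, 0, 1, 1, 1, 0]            # brightness seen by six receptors

def edge_detectors(points, threshold=1):
    # a second-level unit fires when neighbouring receptors differ strongly
    return [abs(points[i + 1] - points[i]) >= threshold
            for i in range(len(points) - 1)]

print(edge_detectors(light))   # [False, True, False, False, True]
```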
88
+
89
+ Although stimulus-response mechanisms are the easiest to understand, the nervous system is also capable of controlling the body in ways that do not require an external stimulus, by means of internally generated rhythms of activity. Because of the variety of voltage-sensitive ion channels that can be embedded in the membrane of a neuron, many types of neurons are capable, even in isolation, of generating rhythmic sequences of action potentials, or rhythmic alternations between high-rate bursting and quiescence. When neurons that are intrinsically rhythmic are connected to each other by excitatory or inhibitory synapses, the resulting networks are capable of a wide variety of dynamical behaviors, including attractor dynamics, periodicity, and even chaos. A network of neurons that uses its internal structure to generate temporally structured output, without requiring a corresponding temporally structured stimulus, is called a central pattern generator.
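A classic minimal example is the half-centre arrangement: two tonically driven units that suppress each other and fatigue while active will alternate with no patterned input at all. The sketch below caricatures mutual inhibition by letting only one side burst at a time; all constants are arbitrary.

```python
# Half-centre caricature: the bursting unit fatigues, the silent (inhibited)
# unit recovers, and the sides swap whenever fatigue cancels the drive.
def half_center(steps=24, drive=1.0, fatigue_step=0.25, recovery_step=0.2):
    active = 0                        # index of the unit currently bursting
    fatigue = [0.0, 0.0]
    pattern = []
    for _ in range(steps):
        fatigue[active] += fatigue_step
        silent = 1 - active
        fatigue[silent] = max(fatigue[silent] - recovery_step, 0.0)
        if drive - fatigue[active] <= 0:          # exhausted: sides swap
            active = silent
        pattern.append("A" if active == 0 else "B")
    return "".join(pattern)

print(half_center())   # alternating runs such as AAABBBBAAAA..., with no input
```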
90
+
91
+ Internal pattern generation operates on a wide range of time scales, from milliseconds to hours or longer. One of the most important types of temporal pattern is circadian rhythmicity—that is, rhythmicity with a period of approximately 24 hours. All animals that have been studied show circadian fluctuations in neural activity, which control circadian alternations in behavior such as the sleep-wake cycle. Experimental studies dating from the 1990s have shown that circadian rhythms are generated by a "genetic clock" consisting of a special set of genes whose expression level rises and falls over the course of the day. Animals as diverse as insects and vertebrates share a similar genetic clock system. The circadian clock is influenced by light but continues to operate even when light levels are held constant and no other external time-of-day cues are available. The clock genes are expressed in many parts of the nervous system as well as many peripheral organs, but in mammals, all of these "tissue clocks" are kept in synchrony by signals that emanate from a master timekeeper in a tiny part of the brain called the suprachiasmatic nucleus.
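The delayed negative-feedback logic of such a genetic clock can be sketched as a toy model in which a protein switches off its own synthesis only after a delay, so its level repeatedly overshoots and decays. The parameters are illustrative and are not tied to any real clock gene; in particular, the period that emerges here is not 24 hours.

```python
# Toy transcription-translation feedback loop: a clock protein shuts off its
# own synthesis, but only after a delay, so the level overshoots and then
# decays, giving a self-sustained rhythm. Parameters are illustrative only.
def clock(hours=72, delay=6, production=1.0, decay=0.2, repression_threshold=3.0):
    protein = [0.0]
    for t in range(hours):
        past = protein[t - delay] if t >= delay else 0.0
        synthesis = production if past < repression_threshold else 0.0
        protein.append(protein[t] + synthesis - decay * protein[t])
    return protein

levels = clock()
peaks = [t for t in range(1, len(levels) - 1)
         if levels[t] > levels[t - 1] and levels[t] >= levels[t + 1]]
print(peaks)   # roughly evenly spaced peaks, i.e. an intrinsic oscillation
```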
92
+
93
+ A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another.[60][61][62] Thus, the neuron "mirrors" the behavior of the other, as though the observer were itself acting. Such neurons have been directly observed in primate species.[63] Birds have been shown to have imitative resonance behaviors and neurological evidence suggests the presence of some form of mirroring system.[63][64] In humans, brain activity consistent with that of mirror neurons has been found in the premotor cortex, the supplementary motor area, the primary somatosensory cortex and the inferior parietal cortex.[65] The function of the mirror system is a subject of much speculation. Many researchers in cognitive neuroscience and cognitive psychology consider that this system provides the physiological mechanism for the perception/action coupling (see the common coding theory).[62] They argue that mirror neurons may be important for understanding the actions of other people, and for learning new skills by imitation. Some researchers also speculate that mirror systems may simulate observed actions, and thus contribute to theory of mind skills,[66][67] while others relate mirror neurons to language abilities.[68] However, to date, no widely accepted neural or computational models have been put forward to describe how mirror neuron activity supports cognitive functions such as imitation.[69] There are neuroscientists who caution that the claims being made for the role of mirror neurons are not supported by adequate research.[70][71]
94
+
95
+ In vertebrates, landmarks of embryonic neural development include the birth and differentiation of neurons from stem cell precursors, the migration of immature neurons from their birthplaces in the embryo to their final positions, outgrowth of axons from neurons and guidance of the motile growth cone through the embryo towards postsynaptic partners, the generation of synapses between these axons and their postsynaptic partners, and finally the lifelong changes in synapses which are thought to underlie learning and memory.[72]
96
+
97
+ All bilaterian animals at an early stage of development form a gastrula, which is polarized, with one end called the animal pole and the other the vegetal pole. The gastrula has the shape of a disk with three layers of cells, an inner layer called the endoderm, which gives rise to the lining of most internal organs, a middle layer called the mesoderm, which gives rise to the bones and muscles, and an outer layer called the ectoderm, which gives rise to the skin and nervous system.[73]
98
+
99
+ In vertebrates, the first sign of the nervous system is the appearance of a thin strip of cells along the center of the back, called the neural plate. The inner portion of the neural plate (along the midline) is destined to become the central nervous system (CNS), the outer portion the peripheral nervous system (PNS). As development proceeds, a fold called the neural groove appears along the midline. This fold deepens, and then closes up at the top. At this point the future CNS appears as a cylindrical structure called the neural tube, whereas the future PNS appears as two strips of tissue called the neural crest, running lengthwise above the neural tube. The sequence of stages from neural plate to neural tube and neural crest is known as neurulation.
100
+
101
+ In the early 20th century, a set of famous experiments by Hans Spemann and Hilde Mangold showed that the formation of nervous tissue is "induced" by signals from a group of mesodermal cells called the organizer region.[72] For decades, though, the nature of neural induction defeated every attempt to figure it out, until finally it was resolved by genetic approaches in the 1990s. Induction of neural tissue requires inhibition of the gene for a so-called bone morphogenetic protein, or BMP. Specifically the protein BMP4 appears to be involved. Two proteins called Noggin and Chordin, both secreted by the mesoderm, are capable of inhibiting BMP4 and thereby inducing ectoderm to turn into neural tissue. It appears that a similar molecular mechanism is involved for widely disparate types of animals, including arthropods as well as vertebrates. In some animals, however, another type of molecule called Fibroblast Growth Factor or FGF may also play an important role in induction.
102
+
103
+ Induction of neural tissues causes formation of neural precursor cells, called neuroblasts.[74] In Drosophila, neuroblasts divide asymmetrically, so that one product is a "ganglion mother cell" (GMC), and the other is a neuroblast. A GMC divides once, to give rise to either a pair of neurons or a pair of glial cells. In all, a neuroblast is capable of generating an indefinite number of neurons or glia.
104
+
105
+ As shown in a 2008 study, one factor common to all bilateral organisms (including humans) is a family of secreted signaling molecules called neurotrophins which regulate the growth and survival of neurons.[75] Zhu et al. identified DNT1, the first neurotrophin found in flies. DNT1 shares structural similarity with all known neurotrophins and is a key factor in the fate of neurons in Drosophila. Because neurotrophins have now been identified in both vertebrates and invertebrates, this evidence suggests that neurotrophins were present in an ancestor common to bilateral organisms and may represent a common mechanism for nervous system formation.
106
+
107
+ The central nervous system is protected by major physical and chemical barriers. Physically, the brain and spinal cord are surrounded by tough meningeal membranes, and enclosed in the bones of the skull and vertebral column, which combine to form a strong physical shield. Chemically, the brain and spinal cord are isolated by the blood–brain barrier, which prevents most types of chemicals from moving from the bloodstream into the interior of the CNS. These protections make the CNS less susceptible to damage in many ways than the PNS; the flip side, however, is that damage to the CNS tends to have more serious consequences.
108
+
109
+ Although nerves tend to lie deep under the skin except in a few places such as the ulnar nerve near the elbow joint, they are still relatively exposed to physical damage, which can cause pain, loss of sensation, or loss of muscle control. Damage to nerves can also be caused by swelling or bruises at places where a nerve passes through a tight bony channel, as happens in carpal tunnel syndrome. If a nerve is completely transected, it will often regenerate, but for long nerves this process may take months to complete. In addition to physical damage, peripheral neuropathy may be caused by many other medical problems, including genetic conditions, metabolic conditions such as diabetes, inflammatory conditions such as Guillain–Barré syndrome, vitamin deficiency, infectious diseases such as leprosy or shingles, or poisoning by toxins such as heavy metals. Many cases have no cause that can be identified, and are referred to as idiopathic. It is also possible for nerves to lose function temporarily, resulting in numbness or stiffness; common causes include mechanical pressure, a drop in temperature, or chemical interactions with local anesthetic drugs such as lidocaine.
110
+
111
+ Physical damage to the spinal cord may result in loss of sensation or movement. If an injury to the spine produces nothing worse than swelling, the symptoms may be transient, but if nerve fibers in the spine are actually destroyed, the loss of function is usually permanent. Experimental studies have shown that spinal nerve fibers attempt to regrow in the same way as peripheral nerve fibers, but in the spinal cord, tissue destruction usually produces scar tissue that cannot be penetrated by the regrowing nerves.
en/5578.html.txt ADDED
@@ -0,0 +1,188 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ In modern politics and history, a parliament is a legislative body of government. Generally, a modern parliament has three functions: representing the electorate, making laws, and overseeing the government via hearings and inquiries. The term is similar to the idea of a senate, synod or congress, and is commonly used in countries that are current or former monarchies, a form of government with a monarch as the head. Some contexts restrict the use of the word parliament to parliamentary systems, although it is also used to describe the legislature in some presidential systems (e.g. the Parliament of Ghana), even where it is not in the official name.
4
+
5
+ Historically, parliaments included various kinds of deliberative, consultative, and judicial assemblies, e.g. medieval parliaments.
6
+
7
+ The English term is derived from Anglo-Norman and dates to the 14th century, coming from the 11th century Old French parlement, from parler, meaning "to talk".[2] The meaning evolved over time, originally referring to any discussion, conversation, or negotiation through various kinds of deliberative or judicial groups, often summoned by a monarch. By the 15th century, in Britain, it had come to specifically mean the legislature.[3]
8
+
9
+ Since ancient times, when societies were tribal, there were councils or a headman whose decisions were assessed by village elders. This is called tribalism.[4] Some scholars suggest that in ancient Mesopotamia there was a primitive democratic government where the kings were assessed by council.[5] The same has been said about ancient India, where some form of deliberative assemblies existed, and therefore there was some form of democracy.[6] However, these claims are not accepted by most scholars, who see these forms of government as oligarchies.[7][8][9][10][11]
10
+
11
+ Ancient Athens was the cradle of democracy.[12] The Athenian assembly (ἐκκλησία, ekklesia) was the most important institution, and every free male citizen could take part in the discussions. Slaves and women could not. However, Athenian democracy was not representative, but rather direct, and therefore the ekklesia was different from the parliamentary system.
12
+
13
+ The Roman Republic had legislative assemblies, which had the final say regarding the election of magistrates, the enactment of new statutes, the carrying out of capital punishment, the declaration of war and peace, and the creation (or dissolution) of alliances.[13] The Roman Senate controlled money, administration, and the details of foreign policy.[14]
14
+
15
+ Some Muslim scholars argue that the Islamic shura (a method of taking decisions in Islamic societies) is analogous to the parliament.[15] However, others highlight what they consider fundamental differences between the shura system and the parliamentary system.[16][17][18]
16
+
17
+ The first recorded signs of a council to decide on different issues in ancient Iran date back to 247 BC, while the Parthian empire was in power. The Parthians established the first Iranian empire since the conquest of Persia by Alexander. In the early years of their rule, an assembly of the nobles called “Mehestan” was formed that made the final decision on serious issues of state.[19]
18
+
19
+ The word "Mehestan" consists of two parts. "Meh", a word of the old Persian origin, which literally means "The Great" and "-Stan", a suffix in the Persian language, which describes an especial place. Altogether Mehestan means a place where the greats come together.[20]
20
+
21
+ The Mehestan Assembly, which consisted of Zoroastrian religious leaders and clan elders, exerted great influence over the administration of the kingdom.[21]
22
+
23
+ One of the most important decisions of the council took place in 208 AD, when a civil war broke out and the Mehestan decided that the empire would be ruled by two brothers simultaneously, Ardavan V and Blash V.[22]
24
+ In 224 AD, following the dissolution of the Parthian empire after over 470 years, the Mehestan council came to an end.
25
+
26
+ Although there are documented councils held in 873, 1020, 1050 and 1063, there was no representation of commoners. What is considered to be the first parliament (with the presence of commoners), the Cortes of León, was held in the Kingdom of León in 1188.[23][24][25] According to UNESCO, the Decreta of León of 1188 is the oldest documentary manifestation of the European parliamentary system. In addition, UNESCO granted the 1188 Cortes of Alfonso IX the title of "Memory of the World", and the city of León has been recognized as the "Cradle of Parliamentarism".[26][27]
27
+
28
+ After coming to power, King Alfonso IX, facing an attack by his two neighbors, Castile and Portugal, decided to summon the "Royal Curia". This was a medieval organization composed of aristocrats and bishops, but because of the seriousness of the situation and the need to maximize political support, Alfonso IX took the decision to also call the representatives of the urban middle class from the most important cities of the kingdom to the assembly.[28] León's Cortes dealt with matters like the right to private property, the inviolability of domicile, the right to appeal to justice opposite the King and the obligation of the King to consult the Cortes before entering a war.[29] Prelates, nobles and commoners met separately in the three estates of the Cortes. In this meeting, new laws were approved to protect commoners against the arbitrary acts of nobles, prelates and the king. This important set of laws is known as the Carta Magna Leonesa.
29
+
30
+ Following this event, new Cortes would appear in the other different territories that would make up Spain: Principality of Catalonia in 1192, the Kingdom of Castile in 1250, Kingdom of Aragon in 1274, Kingdom of Valencia in 1283 and Kingdom of Navarre in 1300.
31
+
32
+ After the union of the Kingdoms of Leon and Castile under the Crown of Castile, their Cortes were united as well in 1258. The Castilian Cortes had representatives from Burgos, Toledo, León, Seville, Córdoba, Murcia, Jaén, Zamora, Segovia, Ávila, Salamanca, Cuenca, Toro, Valladolid, Soria, Madrid, Guadalajara and Granada (after 1492). The Cortes' assent was required to pass new taxes, and could also advise the king on other matters. The comunero rebels intended a stronger role for the Cortes, but were defeated by the forces of Habsburg Emperor Charles V in 1521. The Cortes maintained some power, however, though it became more of a consultative entity. However, by the time of King Philip II, Charles's son, the Castilian Cortes had come under functionally complete royal control, with its delegates dependent on the Crown for their income.[30]
33
+
34
+ The Cortes of the Crown of Aragon kingdoms retained their power to control the king's spending with regard to the finances of those kingdoms. But after the War of the Spanish Succession and the victory of another royal house – the Bourbons – and King Philip V, their Cortes were suppressed (those of Aragon and Valencia in 1707, and those of Catalonia and the Balearic islands in 1714).
35
+
36
+ The very first Cortes representing the whole of Spain (and the Spanish empire of the day) assembled in 1812, in Cadiz, where it operated as a government in exile as at that time most of the rest of Spain was in the hands of Napoleon's army.
37
+
38
+ After its self-proclamation as an independent kingdom in 1139 by Afonso I of Portugal (followed by the recognition by the Kingdom of León in the Treaty of Zamora of 1143), the first historically established Cortes of the Kingdom of Portugal occurred in 1211 in Coimbra by initiative of Afonso II of Portugal. These established the first general laws of the kingdom (Leis Gerais do Reino): protection of the king's property, stipulation of measures for the administration of justice and the rights of his subjects to be protected from abuses by royal officials, and confirming the clerical donations of the previous king Sancho I of Portugal. These Cortes also affirmed the validity of canon law for the Church in Portugal, while introducing the prohibition of the purchase of lands by churches or monasteries (although they can be acquired by donations and legacies).
39
+
40
+ After the conquest of Algarve in 1249, the Kingdom of Portugal completed its Reconquista. In 1254 King Afonso III of Portugal summoned Portuguese Cortes in Leiria, with the inclusion of burghers from old and newly incorporated municipalities. This inclusion establishes the Cortes of Leiria of 1254 as the second example of modern parliamentarism in the history of Europe (after the Cortes of León in 1188). In these Cortes the monetagio was introduced: a fixed sum was to be paid by the burghers to the Crown as a substitute for the septennium (the traditional revision of the face value of coinage by the Crown every seven years). These Cortes also introduced staple[disambiguation needed] laws on the Douro River, favoring the new royal city of Vila Nova de Gaia at the expense of the old episcopal city of Porto.
41
+
42
+ The Portuguese Cortes met again under King Afonso III of Portugal in 1256, 1261 and 1273, always by royal summon. Medieval Kings of Portugal continued to rely on small assemblies of notables, and only summoned the full Cortes on extraordinary occasions. A Cortes would be called if the king wanted to introduce new taxes, change some fundamental laws, announce significant shifts in foreign policy (e.g. ratify treaties), or settle matters of royal succession, issues where the cooperation and assent of the towns was thought necessary. Changing taxation (especially requesting war subsidies), was probably the most frequent reason for convening the Cortes. As the nobles and clergy were largely tax-exempt, setting taxation involved intensive negotiations between the royal council and the burgher delegates at the Cortes.
43
+
44
+ Delegates (procuradores) not only considered the king's proposals, but, in turn, also used the Cortes to submit petitions of their own to the royal council on a myriad of matters, e.g. extending and confirming town privileges, punishing abuses of officials, introducing new price controls, constraints on Jews, pledges on coinage, etc. The royal response to these petitions became enshrined as ordinances and statutes, thus giving the Cortes the aspect of a legislature. These petitions were originally referred to as aggravamentos (grievances) then artigos (articles) and eventually capitulos (chapters). In a Cortes-Gerais, petitions were discussed and voted upon separately by each estate and required the approval of at least two of the three estates before being passed up to the royal council. The proposal was then subject to royal veto (either accepted or rejected by the king in its entirety) before becoming law.
45
+
46
+ Nonetheless, the exact extent of Cortes power was ambiguous. Kings insisted on their ancient prerogative to promulgate laws independently of the Cortes. The compromise, in theory, was that ordinances enacted in Cortes could only be modified or repealed by Cortes. But even that principle was often circumvented or ignored in practice.
47
+
48
+ The Cortes probably had their heyday in the 14th and 15th centuries, reaching their apex when John I of Portugal relied almost wholly upon the bourgeoisie for his power. For a period after the 1383–1385 Crisis, the Cortes were convened almost annually. But as time went on, they became less important. Portuguese monarchs, tapping into the riches of the Portuguese empire overseas, grew less dependent on Cortes subsidies and convened them less frequently. John II (r.1481-1495) used them to break the high nobility, but dispensed with them otherwise. Manuel I (r.1495-1521) convened them only four times in his long reign. By the time of Sebastian (r.1554–1578), the Cortes was practically an irrelevance.
49
+
50
+ Curiously, the Cortes gained a new importance with the Iberian Union of 1581, finding a role as the representative of Portuguese interests to the new Habsburg monarch. The Cortes played a critical role in the 1640 Restoration, and enjoyed a brief period of resurgence during the reign of John IV of Portugal (r.1640-1656). But by the end of the 17th century, it found itself sidelined once again. The last Cortes met in 1698, for the mere formality of confirming the appointment of Infante John (future John V of Portugal) as the successor of Peter II of Portugal. Thereafter, Portuguese kings ruled as absolute monarchs and no Cortes were assembled for over a century. This state of affairs came to an end with the Liberal Revolution of 1820, which set in motion the introduction of a new constitution, and a permanent and proper parliament, that however inherited the name of Cortes Gerais.
51
+
52
+ England has long had a tradition of a body of men who would assist and advise the king on important matters. Under the Anglo-Saxon kings, there was an advisory council, the Witenagemot. The name derives from the Old English ƿitena ȝemōt, or witena gemōt, for "meeting of wise men". The first recorded act of a witenagemot was the law code issued by King Æthelberht of Kent ca. 600, the earliest document which survives in sustained Old English prose; however, the witan was certainly in existence long before this time.[31] The Witan, along with the folkmoots (local assemblies), is an important ancestor of the modern English parliament.[32]
53
+
54
+ As part of the Norman Conquest of England, the new king, William I, did away with the Witenagemot, replacing it with a Curia Regis ("King's Council"). Membership of the Curia was largely restricted to the tenants in chief, the few nobles who "rented" great estates directly from the king, along with ecclesiastics. William brought to England the feudal system of his native Normandy, and sought the advice of the curia regis before making laws. This is the original body from which the Parliament, the higher courts of law, and the Privy Council and Cabinet descend. Of these, the legislature is formally the High Court of Parliament; judges sit in the Supreme Court of Judicature. Only the executive government is no longer conducted in a royal court.
55
+
56
+ Most historians date the emergence of a parliament with some degree of power to which the throne had to defer no later than the rule of Edward I.[33] Like previous kings, Edward called leading nobles and church leaders to discuss government matters, especially finance and taxation. A meeting in 1295 became known as the Model Parliament because it set the pattern for later Parliaments. The significant difference between the Model Parliament and the earlier Curia Regis was the addition of the Commons; that is, the inclusion of elected representatives of rural landowners and of townsmen. In 1307, Edward I agreed not to collect certain taxes without the "consent of the realm" through parliament. He also enlarged the court system.
57
+
58
+ The tenants-in-chief often struggled with their spiritual counterparts and with the king for power. In 1215, they secured from King John of England Magna Carta, which established that the king may not levy or collect any taxes (except the feudal taxes to which they were hitherto accustomed), save with the consent of a council. It was also established that the most important tenants-in-chief and ecclesiastics be summoned to the council by personal writs from the sovereign, and that all others be summoned to the council by general writs from the sheriffs of their counties. Modern government has its origins in the Curia Regis; parliament descends from the Great Council later known as the parliamentum established by Magna Carta.
59
+
60
+ During the reign of King Henry III, 13th-century English Parliaments incorporated elected representatives from shires and towns. These parliaments are, as such, considered forerunners of the modern parliament.[34]
61
+
62
+ In 1265, Simon de Montfort, then in rebellion against Henry III, summoned a parliament of his supporters without royal authorization. The archbishops, bishops, abbots, earls, and barons were summoned, as were two knights from each shire and two burgesses from each borough. Knights had been summoned to previous councils, but it was unprecedented for the boroughs to receive any representation. In 1295, Edward I adopted de Montfort's ideas for representation and election in the so-called "Model Parliament". At first, each estate debated independently; by the reign of Edward III, however, Parliament recognisably assumed its modern form, with authorities dividing the legislative body into two separate chambers.
63
+
64
+ The purpose and structure of Parliament in Tudor England underwent a significant transformation under the reign of Henry VIII. Originally its methods were primarily medieval, and the monarch still possessed a form of inarguable dominion over its decisions. According to Elton, it was Thomas Cromwell, 1st Earl of Essex, then chief minister to Henry VIII, who initiated still other changes within parliament.
65
+
66
+ The Reformation Acts supplied Parliament with unlimited power over the country. This included authority over virtually every matter, whether social, economic, political, or religious[citation needed]; it legalised the Reformation, officially and indisputably. The king had to rule through the council, not over it, and all sides needed to reach a mutual agreement when creating or passing laws, adjusting or implementing taxes, or changing religious doctrines. This was significant: the monarch no longer had sole control over the country. For instance, during the later years of Mary, Parliament exercised its authority in originally rejecting Mary's bid to revive Catholicism in the realm. Later on, the legislative body even denied Elizabeth her request to marry[citation needed]. If Parliament had possessed this power before Cromwell, such as when Wolsey served as secretary, the Reformation may never have happened, as the king would have had to gain the consent of all parliament members before so drastically changing the country's religious laws and fundamental identity[citation needed].
67
+
68
+ The power of Parliament increased considerably after Cromwell's adjustments. It also provided the country with unprecedented stability. More stability, in turn, helped assure more effective management, organisation, and efficiency. Parliament printed statutes and devised a more coherent parliamentary procedure.
69
+
70
+ The rise of Parliament proved especially important in the sense that it limited the repercussions of dynastic complications that had so often plunged England into civil war. Parliament still ran the country even in the absence of suitable heirs to the throne, and its legitimacy as a decision-making body reduced the royal prerogatives of kings like Henry VIII and the importance of their whims. For example, Henry VIII could not simply establish supremacy by proclamation; he required Parliament to enforce statutes and add felonies and treasons. An important liberty for Parliament was its freedom of speech; Henry allowed anything to be spoken openly within Parliament and speakers could not face arrest – a fact which they exploited incessantly. Nevertheless, Parliament in Henry VIII's time offered up very little objection to the monarch's desires. Under his and Edward's reign, the legislative body complied willingly with the majority of the kings' decisions.
71
+
72
+ Much of this compliance stemmed from how the English viewed and traditionally understood authority. As Williams described it, "King and parliament were not separate entities, but a single body, of which the monarch was the senior partner and the Lords and the Commons the lesser, but still essential, members."[citation needed].
73
+
74
+ Although its role in government expanded significantly during the reigns of Henry VIII and Edward VI, the Parliament of England saw some of its most important gains in the 17th century. A series of conflicts between the Crown and Parliament culminated in the execution of King Charles I in 1649. Afterward, England became a commonwealth, with Oliver Cromwell, its lord protector, the de facto ruler. Frustrated with its decisions, Cromwell purged and suspended Parliament on several occasions.
75
+
76
+ A controversial figure accused of despotism, war crimes, and even genocide, Cromwell is nonetheless regarded as essential to the growth of democracy in England.[35] The years of the Commonwealth, coupled with the restoration of the monarchy in 1660 and the subsequent Glorious Revolution of 1688, helped reinforce and strengthen Parliament as an institution separate from the Crown.
77
+
78
+ The Parliament of England met until it merged with the Parliament of Scotland under the Acts of Union. This union created the new Parliament of Great Britain in 1707.
79
+
80
+ From the 10th century the Kingdom of Alba was ruled by chiefs (toisechs) and subkings (mormaers) under the suzerainty, real or nominal, of a High King. Popular assemblies, as in Ireland, were involved in law-making, and sometimes in king-making, although the introduction of tanistry—naming a successor in the lifetime of a king—made the second less than common. These early assemblies cannot be considered "parliaments" in the later sense of the word, and were entirely separate from the later, Norman-influenced, institution.
81
+
82
+ The Parliament of Scotland evolved during the Middle Ages from the King's Council of Bishops and Earls. The unicameral parliament is first found on record, referred to as a colloquium, in 1235 at Kirkliston (a village now in Edinburgh).
83
+
84
+ By the early fourteenth century the attendance of knights and freeholders had become important, and from 1326 burgh commissioners attended. Consisting of the Three Estates (clerics, lay tenants-in-chief and burgh commissioners) sitting in a single chamber, the Scottish parliament acquired significant powers over particular issues. Most obviously it was needed for consent for taxation (although taxation was only raised irregularly in Scotland in the medieval period), but it also had a strong influence over justice, foreign policy, war, and all manner of other legislation, whether political, ecclesiastical, social or economic. Parliamentary business was also carried out by "sister" institutions, before c. 1500 by General Council and thereafter by the Convention of Estates. These could carry out much business also dealt with by Parliament – taxation, legislation and policy-making – but lacked the ultimate authority of a full parliament.
85
+
86
+ The parliament, which is also referred to as the Estates of Scotland, the Three Estates, the Scots Parliament or the auld Scots Parliament (Eng: old), met until the Acts of Union merged the Parliament of Scotland and the Parliament of England, creating the new Parliament of Great Britain in 1707.
87
+
88
+ Following the 1997 Scottish devolution referendum, and the passing of the Scotland Act 1998 by the Parliament of the United Kingdom, the Scottish Parliament was reconvened on 1 July 1999, although with much more limited powers than its 18th-century predecessor. The parliament has sat since 2004 at its newly constructed Scottish Parliament Building in Edinburgh, situated at the foot of the Royal Mile, next to the royal palace of Holyroodhouse.
89
+
90
+ A thing or ting (Old Norse and Icelandic: þing; other modern Scandinavian: ting, ding in Dutch) was the governing assembly in Germanic societies, made up of the free men of the community and presided over by lawspeakers.
91
+
92
+ The thing was the assembly of the free men of a country, province or a hundred (hundare/härad/herred). There were consequently, hierarchies of things, so that the local things were represented at the thing for a larger area, for a province or land. At the thing, disputes were solved and political decisions were made. The place for the thing was often also the place for public religious rites and for commerce.
93
+
94
+ The thing met at regular intervals, legislated, elected chieftains and kings, and judged according to the law, which was memorised and recited by the "law speaker" (the judge).
95
+
96
+ The Icelandic, Faroese and Manx parliaments trace their origins back to the Viking expansion originating from the Petty kingdoms of Norway as well as Denmark, replicating Viking government systems in the conquered territories, such as those represented by the Gulating near Bergen in western Norway.[citation needed]
97
+
98
+ Later national diets with chambers for different estates developed, e.g. in Sweden and in Finland (which was part of Sweden until 1809), each with a House of Knights for the nobility. In both these countries, the national parliaments are now called riksdag (in Finland also eduskunta), a word used since the Middle Ages and equivalent of the German word Reichstag.
99
+
100
+ Today the term lives on in the official names of national legislatures, political and judicial institutions in the North-Germanic countries. In the Yorkshire and former Danelaw areas of England, which were subject to much Norse invasion and settlement, the wapentake was another name for the same institution.
101
+
102
+ The Sicilian Parliament, dating to 1097, evolved as the legislature of the Kingdom of Sicily.[41][42]
103
+
104
+ The Federal Diet of Switzerland was one of the longest-lived representative bodies in history, continuing from the 13th century to 1848.
105
+
106
+ Originally, there was only the Parliament of Paris, born out of the Curia Regis in 1307, and located inside the medieval royal palace, now the Paris Hall of Justice. The jurisdiction of the Parliament of Paris covered the entire kingdom. In the thirteenth century, judicial functions were added. In 1443, following the turmoil of the Hundred Years' War, King Charles VII of France granted Languedoc its own parliament by establishing the Parliament of Toulouse, the first parliament outside of Paris, whose jurisdiction extended over the most part of southern France. From 1443 until the French Revolution several other parliaments were created in some provinces of France (Grenoble, Bordeaux).
107
+
108
+ All the parliaments could issue regulatory decrees for the application of royal edicts or of customary practices; they could also refuse to register laws that they judged contrary to fundamental law or simply as being untimely. Parliamentary power in France was suppressed more so than in England as a result of absolutism, and parliaments were eventually overshadowed by the larger Estates General, up until the French Revolution, when the National Assembly became the lower house of France's bicameral legislature.
109
+
110
+ According to the Chronicles of Gallus Anonymus, the first legendary Polish ruler, Siemowit, who began the Piast Dynasty, was chosen by a wiec. The veche (Russian: вече, Polish: wiec) was a popular assembly in medieval Slavic countries, and in late medieval period, a parliament. The idea of the wiec led in 1182 to the development of the Polish parliament, the Sejm.
111
+
112
+ The term "sejm" comes from an old Polish expression denoting a meeting of the populace. The power of early sejms grew between 1146–1295, when the power of individual rulers waned and various councils and wiece grew stronger. The history of the national Sejm dates back to 1182. Since the 14th century irregular sejms (described in various Latin sources as contentio generalis, conventio magna, conventio solemna, parlamentum, parlamentum generale, dieta or Polish sejm walny) have been called by Polish kings. From 1374, the king had to receive sejm permission to raise taxes. The General Sejm (Polish Sejm Generalny or Sejm Walny), first convoked by the king John I Olbracht in 1493 near Piotrków, evolved from earlier regional and provincial meetings (sejmiks). It followed most closely the sejmik generally, which arose from the 1454 Nieszawa Statutes, granted to the szlachta (nobles) by King Casimir IV the Jagiellonian. From 1493 forward, indirect elections were repeated every two years. With the development of the unique Polish Golden Liberty the Sejm's powers increased.
113
+
114
+ The Commonwealth's general parliament consisted of three estates: the King of Poland (who also acted as the Grand Duke of Lithuania, Russia/Ruthenia, Prussia, Mazovia, etc.), the Senat (consisting of Ministers, Palatines, Castellans and Bishops) and the Chamber of Envoys, consisting of circa 170 nobles (szlachta) acting on behalf of their Lands and sent by Land Parliaments. Representatives of selected cities also attended, but without any voting powers. Since 1573, at a royal election, all peers of the Commonwealth could participate in the Parliament and become the King's electors.
115
+
116
+ The Cossack Rada was the legislative body of a military republic of the Ukrainian Cossacks that grew rapidly in the 15th century from serfs fleeing the more controlled parts of the Polish-Lithuanian Commonwealth. The republic did not regard social origin or nobility and accepted all people who declared themselves to be Orthodox Christians.
117
+
118
+ Originally established at the Zaporizhian Sich, the rada (council) was an institution of Cossack administration in Ukraine from the 16th to the 18th century. With the establishment of the Hetman state in 1648, it was officially known as the General Military Council until 1750.
119
+
120
+ The zemsky sobor (Russian: зе́мский собо́р) was the first Russian parliament of the feudal Estates type, in the 16th and 17th centuries. The term roughly means assembly of the land.
121
+
122
+ It could be summoned by the tsar, by the patriarch, or by the Boyar Duma. Three categories of the population, comparable to the Estates-General of France but with the numbering of the first two Estates reversed, participated in the assembly:
123
+
124
+ Nobility and high bureaucracy, including the Boyar Duma
125
+
126
+ The Holy Sobor of high Orthodox clergy
127
+
128
+ Representatives of merchants and townspeople (third estate)
129
+
130
+ The parliament of the present-day Russian Federation is the Federal Assembly of Russia. The term for its lower house, the State Duma (which is better known than the Federal Assembly itself, and is often mistaken for the entirety of the parliament), comes from the Russian word думать (dumat), "to think". The Boyar Duma was an advisory council to the grand princes and tsars of Muscovy. The Duma was discontinued by Peter the Great, who transferred its functions to the Governing Senate in 1711.
131
+
132
+ The veche was the highest legislature and judicial authority in the republic of Novgorod until 1478. In its sister state, Pskov, a separate veche operated until 1510.
133
+
134
+ After the Novgorod revolution of 1137 ousted the ruling grand prince, the veche became the supreme state authority. After the reforms of 1410, the veche was restructured on a model similar to that of Venice, becoming the Commons chamber of the parliament. An upper Senate-like Council of Lords was also created, with title membership for all former city magistrates. Some sources indicate that veche membership may have become full-time, and that parliament deputies were now called vechniks. It is recounted that the Novgorod assembly could be summoned by anyone who rang the veche bell, although it is more likely that the common procedure was more complex. The bell was a symbol of republican sovereignty and independence. The whole population of the city (boyars, merchants, and common citizens) then gathered at Yaroslav's Court. Separate assemblies could be held in the districts of Novgorod. In Pskov, the veche assembled in the court of the Trinity cathedral.
135
+
136
+ "Conciliarism" or the "conciliar movement", was a reform movement in the 14th and 15th century Roman Catholic Church which held that final authority in spiritual matters resided with the Roman Church as corporation of Christians, embodied by a general church council, not with the pope. In effect, the movement sought – ultimately, in vain – to create an All-Catholic Parliament. Its struggle with the Papacy had many points in common with the struggle of parliaments in specific countries against the authority of Kings and other secular rulers.
137
+
138
+ The development of the modern concept of parliamentary government dates back to the Kingdom of Great Britain (1707–1800) and the parliamentary system in Sweden during the Age of Liberty (1718–1772).
139
+
140
+ [Map legend: greater than 10%,[43] greater than 20%,[44] greater than 30%[45]]
145
+
146
+ The British Parliament is often referred to as the Mother of Parliaments (in fact a misquotation of John Bright, who remarked in 1865 that "England is the Mother of Parliaments") because the British Parliament has been the model for most other parliamentary systems, and its Acts have created many other parliaments.[46] Many nations with parliaments have to some degree emulated the British "three-tier" model. Most countries in Europe and the Commonwealth have similarly organised parliaments with a largely ceremonial head of state who formally opens and closes parliament, a large elected lower house and a smaller, upper house.[47][48]
147
+
148
+ The Parliament of Great Britain was formed in 1707 by the Acts of Union that replaced the former parliaments of England and Scotland. A further union in 1801 united the Parliament of Great Britain and the Parliament of Ireland into a Parliament of the United Kingdom.
149
+
150
+ In the United Kingdom, Parliament consists of the House of Commons, the House of Lords, and the Monarch. The House of Commons is composed of 650 (soon to be 600)[citation needed] members who are directly elected by British citizens to represent single-member constituencies. The leader of a party that wins more than half the seats, or that wins fewer but can gain the support of smaller parties to achieve a majority in the house, is invited by the Monarch to form a government. The House of Lords is a body of long-serving, unelected members: the Lords Temporal, 92 of whom inherit their titles (90 of them elected internally by members of the House to lifetime seats) and 588 of whom have been appointed to lifetime seats, and the Lords Spiritual, 26 bishops who are part of the house while they remain in office.
151
+
152
+ Legislation can originate from either the Lords or the Commons. It is voted on in several distinct stages, called readings, in each house. First reading is merely a formality. Second reading is where the bill as a whole is considered. Third reading is detailed consideration of clauses of the bill.
153
+
154
+ In addition to the three readings, a bill also goes through a committee stage, where it is considered in great detail. Once the bill has been passed by one house, it goes to the other and essentially repeats the process. If, after the two sets of readings, there are disagreements between the versions that the two houses passed, the bill is returned to the first house for consideration of the amendments made by the second. If it passes through the amendment stage, Royal Assent is granted and the bill becomes law as an Act of Parliament.
155
+
156
+ The House of Lords is the less powerful of the two houses as a result of the Parliament Acts 1911 and 1949. These Acts removed the veto power of the Lords over a great deal of legislation. If a bill is certified by the Speaker of the House of Commons as a money bill (i.e. acts raising taxes and similar) then the Lords can only block it for a month. If an ordinary bill originates in the Commons the Lords can only block it for a maximum of one session of Parliament. The exceptions to this rule are things like bills to prolong the life of a Parliament beyond five years.
157
+
158
+ In addition to functioning as the second chamber of Parliament, the House of Lords was also the final court of appeal for much of the law of the United Kingdom—a combination of judicial and legislative function that recalls its origin in the Curia Regis. This changed in October 2009 when the Supreme Court of the United Kingdom opened and acquired the former jurisdiction of the House of Lords.
159
+
160
+ Since 1999, there has been a Scottish Parliament in Edinburgh, and, since 2020, a Welsh Parliament—or Senedd—in Cardiff. However, these national, unicameral legislatures do not have complete power over their respective countries of the United Kingdom, holding only those powers devolved to them by Westminster from 1997. They cannot legislate on defence issues, currency, or national taxation (e.g. VAT, or Income Tax). Additionally, the bodies can be dissolved, at any given time, by the British Parliament without the consent of the devolved government.
161
+
162
+ In Sweden, the half-century period of parliamentary government beginning with Charles XII's death in 1718 and ending with Gustav III's self-coup in 1772 is known as the Age of Liberty. During this period, civil rights were expanded and power shifted from the monarch to parliament.
163
+
164
+ While suffrage did not become universal, the taxed peasantry was represented in Parliament, although with little influence, and commoners without taxed property had no suffrage at all.
165
+
166
+ Many parliaments are part of a parliamentary system of government, in which the executive is constitutionally answerable to the parliament. Some restrict the use of the word parliament to parliamentary systems, while others use the word for any elected legislative body. Parliaments usually consist of chambers or houses, and are usually either bicameral or unicameral although more complex models exist, or have existed (see Tricameralism).
167
+
168
+ In some parliamentary systems, the prime minister is a member of the parliament (e.g. in the United Kingdom), whereas in others they are not (e.g. in the Netherlands). They are commonly the leader of the majority party in the lower house of parliament, but only hold the office as long as the "confidence of the house" is maintained. If members of the lower house lose faith in the leader for whatever reason, they can call a vote of no confidence and force the prime minister to resign.
169
+
170
+ This can be particularly dangerous to a government when the distribution of seats among different parties is relatively even, in which case a new election is often called shortly thereafter. However, in case of general discontent with the head of government, their replacement can be made very smoothly without all the complications that it represents in the case of a presidential system.
171
+
172
+ The parliamentary system can be contrasted with a presidential system, such as the American congressional system, which operates under a stricter separation of powers, whereby the executive does not form part of, nor is it appointed by, the parliamentary or legislative body. In such a system, congresses do not select or dismiss heads of governments, and governments cannot request an early dissolution as may be the case for parliaments. Some states, such as France, have a semi-presidential system which falls between parliamentary and congressional systems, combining a powerful head of state (president) with a head of government, the prime minister, who is responsible to parliament.
173
+
174
+ Australia's States and territories:
175
+
176
+ In the federal (bicameral) kingdom of Belgium, there is a curious asymmetrical constellation serving as directly elected legislatures for three "territorial" regions—Flanders (Dutch), Brussels (bilingual, certain peculiarities of competence, also the only region not comprising any of the 10 provinces) and Wallonia (French)—and three cultural communities—Flemish (Dutch, competent in Flanders and for the Dutch-speaking inhabitants of Brussels), Francophone (French, for Wallonia and for Francophones in Brussels) and German (for speakers of that language in a few designated municipalities in the east of the Walloon Region, living alongside Francophones but under two different regimes):
177
+
178
+ Canada's provinces and territories:
179
+
180
+
181
+
182
+
183
+
184
+ Indian States and Territories Legislative assemblies:
185
+
186
+
187
+
188
+ Indian States Legislative councils
en/5579.html.txt ADDED
@@ -0,0 +1,61 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ The reproductive system of an organism, also known as the genital system, is the biological system made up of all the anatomical organs involved in sexual reproduction. Many non-living substances such as fluids, hormones, and pheromones are also important accessories to the reproductive system.[1] Unlike most organ systems, the sexes of differentiated species often have significant differences. These differences allow for a combination of genetic material between two individuals, which allows for the possibility of greater genetic fitness of the offspring.[2]
2
+
3
+ In mammals, the major organs of the reproductive system include the external genitalia (penis and vulva) as well as a number of internal organs, including the gamete-producing gonads (testicles and ovaries). Diseases of the human reproductive system are very common and widespread, particularly communicable sexually transmitted diseases.[3]
4
+
5
+ Most other vertebrates have generally similar reproductive systems consisting of gonads, ducts, and openings. However, there is a great diversity of physical adaptations as well as reproductive strategies in every group of vertebrates.
6
+
7
+ Vertebrates share key elements of their reproductive systems. They all have gamete-producing organs known as gonads. In females, these gonads are then connected by oviducts to an opening to the outside of the body, typically the cloaca, but sometimes to a unique pore such as a vagina or intromittent organ.
8
+
9
+ The human reproductive system usually involves internal fertilization by sexual intercourse. During this process, the male inserts his erect penis into the female's vagina and ejaculates semen, which contains sperm. The sperm then travels through the vagina and cervix into the uterus or fallopian tubes for fertilization of the ovum. Upon successful fertilization and implantation, gestation of the fetus occurs within the female's uterus for approximately nine months; this process is known as pregnancy in humans. Gestation ends with childbirth, in which delivery follows labor. Labor consists of the muscles of the uterus contracting, the cervix dilating, and the baby passing out of the vagina (the female genital organ). Human babies and children are nearly helpless and require high levels of parental care for many years. One important type of parental care is the use of the mammary glands in the female breasts to nurse the baby.[4]
10
+
11
+ The female reproductive system has two functions: The first is to produce egg cells, and the second is to protect and nourish the offspring until birth. The male reproductive system has one function, and it is to produce and deposit sperm. Humans have a high level of sexual differentiation. In addition to differences in nearly every reproductive organ, numerous differences typically occur in secondary sexual characteristics.
12
+
13
+ The male reproductive system is a series of organs located outside of the body and around the pelvic region of a male that contribute towards the reproduction process. The primary direct function of the male reproductive system is to provide the male sperm for fertilization of the ovum.
14
+
15
+ The major reproductive organs of the male can be grouped into three categories. The first category is sperm production and storage: production takes place in the testes, which are housed in the temperature-regulating scrotum, and immature sperm then travel to the epididymis for development and storage. The second category comprises the ejaculatory-fluid-producing glands, which include the seminal vesicles, prostate, and vas deferens. The final category comprises the organs used for copulation and for the deposition of the spermatozoa (sperm); these include the penis, urethra, vas deferens, and Cowper's gland.
16
+
17
+ Major secondary sexual characteristics include: larger, more muscular stature, deepened voice, facial and body hair, broad shoulders, and development of an Adam's apple. Important sexual hormones of males are androgens, particularly testosterone.
18
+
19
+ The testes release a hormone that controls the development of sperm. This hormone is also responsible for the development of physical characteristics in men such as facial hair and a deep voice.
20
+
21
+ The human female reproductive system is a series of organs primarily located inside the body and around the pelvic region of a female that contribute towards the reproductive process. The human female reproductive system contains three main parts: the vulva, which leads to the vagina, the opening to the uterus; the uterus, which holds the developing fetus; and the ovaries, which produce the female's ova. The breasts are involved during the parenting stage of reproduction, but in most classifications they are not considered to be part of the female reproductive system.
22
+
23
+ The vagina meets the outside at the vulva, which also includes the labia, clitoris and urethra; during intercourse this area is lubricated by mucus secreted by the Bartholin's glands. The vagina is attached to the uterus through the cervix, while the uterus is attached to the ovaries via the fallopian tubes. Each ovary contains hundreds of ova (singular ovum).
24
+
25
+ Approximately every 28 days, the pituitary gland releases a hormone that stimulates some of the ova to develop and grow. One ovum is released and it passes through the fallopian tube into the uterus. Hormones produced by the ovaries prepare the uterus to receive the ovum. The ovum sits in the uterus and awaits sperm for fertilization to occur. When fertilization does not occur, the lining of the uterus, called the endometrium, and unfertilized ova are shed each cycle through the process of menstruation. If the ovum is fertilized by sperm, it attaches to the endometrium and the fetus develops.
26
+
27
+ Most mammal reproductive systems are similar, however, there are some notable differences between the non-human mammals and humans. For instance, most male mammals have a penis which is stored internally until erect, and most have a penis bone or baculum.[5] Additionally, males of most species do not remain continually sexually fertile as humans do. Like humans, most groups of mammals have descended testicles found within a scrotum, however, others have descended testicles that rest on the ventral body wall, and a few groups of mammals, such as elephants, have undescended testicles found deep within their body cavities near their kidneys.[6]
28
+
29
+ The reproductive system of marsupials is unique in that the female has two vaginae, both of which open externally through one orifice but lead to different compartments within the uterus; males usually have a two-pronged penis, which corresponds to the females' two vaginae.[7][8] Marsupials typically develop their offspring in an external pouch containing teats, to which their newborn young (joeys) attach themselves for post-uterine development. Also, marsupials have a unique prepenial scrotum.[9] The newborn joey, about 15 mm (5/8 in) long, instinctively crawls and wriggles the several inches (15 cm) to its mother's pouch while clinging to her fur.
30
+
31
+ The uterus and vagina are unique to mammals with no homologue in birds, reptiles, amphibians, or fish.[citation needed] In place of the uterus the other vertebrate groups have an unmodified oviduct leading directly to a cloaca, which is a shared exit-hole for gametes, urine, and feces. Monotremes (i.e. platypus and echidnas), a group of egg-laying mammals, also lack a uterus and vagina, and in that respect have a reproductive system resembling that of a reptile.
32
+
33
+ In domestic canines, sexual maturity (puberty) occurs between the ages of 6 to 12 months for both males and females, although this can be delayed until up to two years of age for some large breeds.
34
+
35
+ The mare's reproductive system is responsible for controlling gestation, birth, and lactation, as well as her estrous cycle and mating behavior. The stallion's reproductive system is responsible for his sexual behavior and secondary sex characteristics (such as a large crest).
36
+
37
+ Male and female birds have a cloaca, an opening through which eggs, sperm, and wastes pass. Intercourse is performed by pressing the lips of the cloacae together; in some birds this is achieved with an intromittent organ known as a phallus, which is analogous to the mammalian penis. The female lays amniotic eggs in which the young fetus continues to develop after it leaves the female's body. Unlike most vertebrates, female birds typically have only one functional ovary and oviduct.[10] As a group, birds, like mammals, are noted for their high level of parental care.
38
+
39
+ Reptiles are almost all sexually dimorphic, and exhibit internal fertilization through the cloaca. Some reptiles lay eggs while others are ovoviviparous (animals that deliver live young). Reproductive organs are found within the cloaca of reptiles. Most male reptiles have copulatory organs, which are usually retracted or inverted and stored inside the body. In turtles and crocodilians, the male has a single median penis-like organ, while male snakes and lizards each possess a pair of penis-like organs.
40
+
41
+ Most amphibians exhibit external fertilization of eggs, typically within the water, though some amphibians such as caecilians have internal fertilization.[11] All have paired, internal gonads, connected by ducts to the cloaca.
42
+
43
+ Fish exhibit a wide range of different reproductive strategies. Most fish, however, are oviparous and exhibit external fertilization. In this process, females use their cloaca to release large quantities of their gametes, called spawn, into the water, and one or more males release "milt", a white fluid containing many sperm, over the unfertilized eggs. Other species of fish are oviparous but have internal fertilization, aided by pelvic or anal fins that are modified into an intromittent organ analogous to the human penis.[12] A small portion of fish species are either viviparous or ovoviviparous, and are collectively known as livebearers.[13]
44
+
45
+ Fish gonads are typically pairs of either ovaries or testes. Most fish are sexually dimorphic but some species are hermaphroditic or unisexual.[14]
46
+
47
+ Invertebrates have an extremely diverse array of reproductive systems; the only commonality may be that they all lay eggs. Also, aside from cephalopods and arthropods, nearly all other invertebrates are hermaphroditic and exhibit external fertilization.
48
+
49
+ All cephalopods are sexually dimorphic and reproduce by laying eggs. Most cephalopods have semi-internal fertilization, in which the male places his gametes inside the female's mantle cavity or pallial cavity to fertilize the ova found in the female's single ovary.[15] Likewise, male cephalopods have only a single testicle. In the female of most cephalopods the nidamental glands aid in development of the egg.
50
+
51
+ The "penis" in most unshelled male cephalopods (Coleoidea) is a long and muscular end of the gonoduct used to transfer spermatophores to a modified arm called a hectocotylus. That in turn is used to transfer the spermatophores to the female. In species where the hectocotylus is missing, the "penis" is long and able to extend beyond the mantle cavity and transfer the spermatophores directly to the female.
52
+
53
+ Most insects reproduce oviparously, i.e. by laying eggs. The eggs are produced by the female in a pair of ovaries. Sperm, produced by the male in one testis or more commonly two, is transmitted to the female during mating by means of external genitalia. The sperm is stored within the female in one or more spermathecae. At the time of fertilization, the eggs travel along oviducts to be fertilized by the sperm and are then expelled from the body ("laid"), in most cases via an ovipositor.
54
+
55
+ Arachnids may have one or two gonads, which are located in the abdomen. The genital opening is usually located on the underside of the second abdominal segment. In most species, the male transfers sperm to the female in a package, or spermatophore. Complex courtship rituals have evolved in many arachnids to ensure the safe delivery of the sperm to the female.[16]
56
+
57
+ Arachnids usually lay yolky eggs, which hatch into immatures that resemble adults. Scorpions, however, are either ovoviviparous or viviparous, depending on species, and bear live young.
58
+
59
+ Among all living organisms, flowers, which are the reproductive structures of angiosperms, are the most varied physically and show a correspondingly great diversity in methods of reproduction.[17] Plants that are not flowering plants (green algae, mosses, liverworts, hornworts, ferns and gymnosperms such as conifers) also have complex interplays between morphological adaptation and environmental factors in their sexual reproduction. The breeding system, or how the sperm from one plant fertilizes the ovum of another, depends on the reproductive morphology, and is the single most important determinant of the genetic structure of nonclonal plant populations. Christian Konrad Sprengel (1793) studied the reproduction of flowering plants, and through his work it was first understood that the pollination process involves both biotic and abiotic interactions.
60
+
61
+ Fungal reproduction is complex, reflecting the differences in lifestyles and genetic makeup within this diverse kingdom of organisms.[18] It is estimated that a third of all fungi reproduce using more than one method of propagation; for example, reproduction may occur in two well-differentiated stages within the life cycle of a species, the teleomorph and the anamorph.[19] Environmental conditions trigger genetically determined developmental states that lead to the creation of specialized structures for sexual or asexual reproduction. These structures aid reproduction by efficiently dispersing spores or spore-containing propagules.
en/558.html.txt ADDED
@@ -0,0 +1,119 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ A bar is a long raised narrow table or bench designed for dispensing beer or other alcoholic drinks. Bars were originally chest high, and a bar, often brass, ran the length of the table just above floor height for customers to rest a foot on, which gave the table its name. Over many years, the heights of bars were lowered and high stools added, and the brass bar remains today. The name bar also became identified with the business: a bar (also known as a saloon or a tavern, or sometimes as a pub or club, referring to the actual establishment, as in pub bar or club bar) is a retail business establishment that serves alcoholic beverages, such as beer, wine, liquor, and cocktails, as well as other beverages such as mineral water and soft drinks. Bars often also sell snack foods such as crisps (also referred to as potato chips) or peanuts, for consumption on their premises.[1] Some types of bars, such as pubs, may also serve food from a restaurant menu. The term "bar" also refers to the countertop and area where drinks are served. The term "bar" derives from the metal or wooden bar (barrier) that is often located along the length of the "bar".[2]
2
+
3
+ Bars provide stools or chairs that are placed at tables or counters for their patrons. Bars that offer entertainment or live music are often referred to as "music bars", "live venues", or "nightclubs". Types of bars range from inexpensive dive bars[3]
4
+ to elegant places of entertainment, often accompanying restaurants for dining.
5
+
6
+ Many bars operate a discount period, designated a "happy hour" or discount of the day to encourage off-peak-time patronage. Bars that fill to capacity sometimes implement a cover charge or a minimum drink-purchase requirement during their peak hours. Bars may have bouncers to ensure that patrons are of legal age, to eject drunk or belligerent patrons, and to collect cover charges. Such bars often feature entertainment, which may be a live band, vocalist, comedian, or disc jockey playing recorded music.
7
+
8
+ Patrons may sit or stand at the counter and be served by the bartender. Depending on the size of a bar and its approach, alcohol may be served at the bar by bartenders, at tables by servers, or by a combination of the two. The "back bar" is a set of shelves of glasses and bottles behind the counter. In some establishments, the back bar is elaborately decorated with woodwork, etched glass, mirrors, and lights.
9
+
10
+ There have been many different names for public drinking spaces throughout history. In the colonial era of the United States, taverns were an important meeting place, as most other institutions were weak. During the 19th century saloons were very important to the leisure time of the working class.[4] Today, even when an establishment uses a different name, such as "tavern" or "saloon" or, in the United Kingdom, a "pub", the area of the establishment where the bartender pours or mixes beverages is normally called "the bar".
11
+
12
+ The sale and/or consumption of alcoholic beverages was prohibited in the first half of the 20th century in several countries, including Finland, Iceland, Norway, and the United States. In the United States, illegal bars during Prohibition were called "speakeasies", "blind pigs", and "blind tigers".
13
+
14
+ Laws in many jurisdictions prohibit minors from entering a bar. If those under legal drinking age are allowed to enter, as is the case with pubs that serve food, they are not allowed to drink.[citation needed] In some jurisdictions, bars cannot serve a patron who is already intoxicated. Cities and towns usually have legal restrictions on where bars may be located and on the types of alcohol they may serve to their customers. Some bars may have a license to serve beer and wine, but not hard liquor. In some jurisdictions, patrons buying alcohol must also order food. In some jurisdictions, bar owners have a legal liability for the conduct of patrons who they serve (this liability may arise in cases of driving under the influence which cause injuries or deaths).
15
+
16
+ Many Islamic countries prohibit bars as well as the possession or sale of alcohol for religious reasons, while others, including Qatar and the United Arab Emirates, allow bars in some specific areas, but only permit non-Muslims to drink in them.
17
+
18
+ A bar's owners and managers choose the bar's name, décor, drink menu, lighting, and other elements which they think will attract a certain kind of patron. However, they have only limited influence over who patronizes their establishment. Thus, a bar originally intended for one demographic profile can become popular with another. For example, a gay or lesbian bar with a dance or disco floor might, over time, attract an increasingly heterosexual clientele, or a blues bar may become a biker bar if most of its patrons are bikers. Bars can also be an integral part of larger venues. For example, hotels, casinos and nightclubs are usually home to one or several bars.
19
+
20
+ A cocktail lounge is an upscale bar that is typically located within a hotel, restaurant or airport.
21
+
22
+ A full bar serves liquor, cocktails, wine, and beer.
23
+
24
+ A wine bar is a bar that focuses on wine rather than on beer or liquor. Patrons of these bars may taste wines before deciding to buy them. Some wine bars also serve small plates of food or other snacks.
25
+
26
+ A beer bar focuses on beer, particularly craft beer, rather than on wine or liquor. A brew pub has an on-site brewery and serves craft beers.
27
+
28
+ "Fern bar" is an American slang term for an upscale or preppy (or yuppie) bar.
29
+
30
+ A music bar is a bar that presents live music as an attraction, such as a piano bar.
31
+
32
+ A dive bar, often referred to simply as a "dive", is a very informal bar which may be considered by some to be disreputable.
33
+
34
+ A non-alcoholic bar is a bar that does not serve alcoholic beverages.
35
+
36
+ A strip club is a bar with nude entertainers.
37
+
38
+ A bar and grill is also a restaurant.
39
+
40
+ Some persons may designate either a room or an area of a room as a home bar. Furniture and arrangements vary from efficient to full bars that could be suited as businesses.
41
+
42
+ Bars can be categorized by the kind of entertainment they offer:
43
+
44
+ Bars can be categorized by the kind of patrons who frequent them:
45
+
46
+ The counter at which drinks are served by a bartender is called "the bar". This term is applied, as a synecdoche, to drinking establishments called "bars". This counter typically stores a variety of beers, wines, liquors, and non-alcoholic ingredients, and is organized to facilitate the bartender's work.
47
+
48
+ Counters for serving other types of food and drink may also be called bars. Examples of this usage of the word include snack bars, sushi bars, juice bars, salad bars, dairy bars, and ice cream sundae bars.
49
+
50
+ In Australia, the major form of licensed commercial alcohol outlet from the colonial period to the present was the pub, a local variant of the English original. Until the 1970s, Australian pubs were traditionally organised into gender-segregated drinking areas—the "public bar" was only open to men, while the "lounge bar" or "saloon bar" served both men and women (i.e. mixed drinking). This distinction was gradually eliminated as anti-discrimination legislation and women's rights activism broke down the concept of a public drinking area accessible to only men. Where two bars still exist in the one establishment, one (that derived from the "public bar") will be more downmarket while the other (deriving from the "lounge bar") will be more upmarket. Over time, with the introduction of gaming machines into hotels, many "lounge bars" have or are being converted into gaming rooms.
51
+
52
+ Beginning in the mid-1950s, the formerly strict state liquor licensing laws were progressively relaxed and reformed, with the result that pub trading hours were extended. This was in part to eliminate the social problems associated with early closing times—notably the infamous "six o'clock swill"—and the thriving trade in "sly grog" (illicit alcohol sales). More licensed liquor outlets began to appear, including retail "bottle shops" (over-the-counter bottle sales were previously only available at pubs and were strictly controlled). Particularly in Sydney, a new class of licensed premises, the wine bar, appeared; there alcohol could be served on the proviso that it was provided in tandem with a meal. These venues became very popular in the late 1960s and early 1970s and many offered free entertainment, becoming an important facet of the Sydney music scene in that period.
53
+
54
+ In the major Australian cities today there is a large and diverse bar scene with a range of ambiences, modes and styles catering for every echelon of cosmopolitan society.
55
+
56
+ Public drinking began with the establishment of colonial taverns in both the U.S. and Canada. While the term changed to "public house" especially in the U.K., the term "tavern" continued to be used instead of "pub" in both the U.S. and Canada.
57
+ Public drinking establishments were banned by the Prohibition of alcohol, which was (and is) a provincial jurisdiction. Prohibition was repealed, province by province in the 1920s. There was not a universal right to consume alcohol, and only males of legal age were permitted to do so. "Beer parlours" were common in the wake of prohibition, with local laws often not permitting entertainment (such as the playing of games or music) in these establishments, which were set aside for the purpose solely of consuming alcohol.
58
+
59
+ Since the end of the Second World War, when roughly one million Canadian servicemen and women were exposed to the public house traditions common in the UK, those traditions have become more common in Canada. These traditions include the drinking of dark ales and stouts, the "pub" as a social gathering place for both sexes, and the playing of games (such as darts, snooker or pool). Taverns became extremely popular during the 1960s and 1970s, especially for working-class people. Canadian taverns, which can still be found in remote regions of Northern Canada, have long tables with benches lining the sides. Patrons in these taverns often order beer in large quart bottles and drink inexpensive "bar brand" Canadian rye whisky. In some provinces, taverns used to have separate entrances for men and women. Even in a large city like Toronto, the separate entrances existed into the early 1970s.
60
+
61
+ Canada has adopted some of the newer U.S. bar traditions (such as the "sports bar") of the last decades. As a result, the term "bar" has come to be differentiated from the term "pub", in that bars are usually 'themed' and sometimes have a dance floor. Bars with dance floors are usually found in small or suburban communities; in larger cities, bars with large dance floors are usually referred to as clubs and are strictly for dancing. Establishments which call themselves pubs are often much more similar to a British pub in style. Before the 1980s, most "bars" were referred to simply as "taverns".
62
+
63
+ Often, bars and pubs in Canada will cater to supporters of a local sporting team, usually a hockey team. There is a difference between the sports bar and the pub: sports bars focus on TV screens showing games and on showcasing uniforms, equipment, etc. Pubs will generally also show games but do not exclusively focus on them. The tavern was popular until the early 1980s, when American-style bars, as we know them today, became popular. In the 1990s, imitation British- and Irish-style pubs became popular and adopted names like "The Fox and Fiddle" and "The Queen and Beaver" that reflect naming trends in Britain. Tavern- or pub-style mixed food and drink establishments are generally more common than bars in Canada, although both can be found.
64
+
65
+ Legal restrictions on bars are set by the Canadian provinces and territories, which has led to a great deal of variety. While some provinces have been very restrictive with their bar regulation, setting strict closing times and banning the removal of alcohol from the premises, other provinces have been more liberal. Closing times generally run from 2:00 to 4:00 a.m.
66
+
67
+ In Nova Scotia, particularly in Halifax, a very distinct system of gender-based laws was in effect for decades, until the 1980s. Taverns, bars, halls, and other classifications differentiated whether an establishment was exclusively for men or for women, for men with invited women, vice versa, or mixed. After this fell by the wayside, there was the issue of water closets. This led to many taverns adding on "powder rooms"; sometimes they were constructed later, or used parts of kitchens or upstairs halls, if plumbing allowed. This was also true of conversions in former "sitting rooms", for men's facilities.
68
+
69
+ In Italy, a "bar" is a place more similar to a café, where people go during the morning or the afternoon, usually to drink a coffee, a cappuccino, or a hot chocolate and eat some kind of snack such as sandwiches (panini or tramezzini) or pastries. However, any kind of alcoholic beverages are served. Opening hours vary: some establishments are open very early in the morning and close relatively early in the evening; others, especially if next to a theater or a cinema, may be open until late at night. Many larger bars are also restaurants and disco clubs.
70
+ Many Italian bars have introduced a so-called "aperitivo" time in the evening, in which everyone who purchases an alcoholic drink then has free access to a usually abundant buffet of cold dishes such as pasta salads, vegetables, and various appetizers.
71
+
72
+ In modern Polish, a bar is in most cases referred to as a pub (plural puby), a loanword from English. Polish puby serve various kinds of alcoholic drinks as well as other beverages and simple snacks such as crisps, peanuts or pretzel sticks. Most establishments feature loud music and some have frequent live performances. While the Polish word bar can also be applied to this kind of establishment, it is often used to describe any kind of inexpensive restaurant, and therefore can be translated as diner or cafeteria. Both in bary and in puby, the counter at which one orders is called bar, itself another obvious loanword from English.
73
+
74
+ Bar mleczny (literally 'milk bar') is a kind of inexpensive self-service restaurant serving a wide range of dishes, with simple interior design, usually open during breakfast and lunch hours. It is very similar to the Russian столовая in both menu and decor. It can also be compared to what is called a greasy spoon in English-speaking countries. Bary mleczne rarely serve alcoholic beverages.
75
+
76
+ The term bar szybkiej obsługi (lit. 'quick service restaurant') also refers to eating, not drinking, establishments. It is being gradually replaced by the English term fast food. Another name, bar samoobsługowy, may be applied to any kind of self-service restaurant. Some kinds of Polish bar serve only one type of meal. An example is the restaurants serving pasztecik szczeciński, a traditional specialty of the city of Szczecin, which can be consumed at a table or taken away.
77
+
78
+ Bars are common in Spain and form an important part in Spanish culture. In Spain, it is common for a town to have many bars and even to have several lined up on the same street. Most bars have a section of the street or plaza outside with tables and chairs with parasols if the weather allows it. Spanish bars are also known for serving a wide range of sandwiches (bocadillos), as well as snacks called tapas or pinchos.
79
+
80
+ Tapas and pinchos may be offered to customers in two ways: either complimentary with the order of a drink or, in some cases, charged independently. In either case, this is usually clearly indicated to bar customers by information displayed on walls, menus and price lists.
81
+ The anti-smoking law entered into effect on January 1, 2011; since that date smoking has been prohibited in bars and restaurants as well as in all other indoor areas, and enclosed commercial and state-owned facilities are now smoke-free areas.
82
+
83
+ Spain is the country with the highest ratio of bars to population, with almost six bars per thousand inhabitants, three times the UK's ratio and four times Germany's, and it alone has twice as many bars as the oldest of the 28 members of the European Union. The meaning of the word 'bar' in Spain, however, does not have the negative connotation inherent in the same word in many other languages. For Spanish people a bar is essentially a meeting place, and not necessarily a place to engage in the consumption of alcoholic beverages. As a result, children are normally allowed into bars, and it is common to see families in bars during weekends or at the end of the day. In small towns, the 'bar' may constitute the very center of social life, and it is customary that, after social events, people go to bars, including seniors and children alike.
84
+
85
+ In the UK, bars are either areas that serve alcoholic drinks within establishments such as hotels, restaurants, universities, or are a particular type of establishment which serves alcoholic drinks such as wine bars, "style bars", private membership only bars. However, the main type of establishment selling alcohol for consumption on the premises is the pub. Some bars are similar to nightclubs in that they feature loud music, subdued lighting, or operate a dress code and admissions policy, with inner city bars generally having door staff at the entrance.
86
+
87
+ 'Bar' also designates a separate drinking area within a pub. Until recent years most pubs had two or more bars – very often the public bar or tap room and the saloon bar or lounge, where the decor was better and prices were sometimes higher. The designations of the bars varied regionally. In the last two decades, many pub interiors have been opened up into single spaces, which some people regret as it loses the flexibility, intimacy, and traditional feel of a multi-roomed public house.
88
+
89
+ One of the last dive bars in London was underneath the Kings Head Pub in Gerrard Street, Soho.
90
+
91
+ In the United States, legal distinctions often exist between restaurants and bars, and even between types of bars. These distinctions vary from state to state, and even among municipalities. Beer bars (sometimes called taverns or pubs) are legally restricted to selling only beer, and possibly wine or cider. Liquor bars, also simply called bars, also sell hard liquor.
92
+
93
+ Bars are sometimes exempt from smoking bans that restaurants are subject to, even if those restaurants have liquor licenses. The distinction between a restaurant that serves liquor and a bar is usually made by the percentage of revenue earned from selling liquor, although increasingly, smoking bans include bars as well.
94
+
95
+ In most places, bars are prohibited from selling alcoholic beverages to go, and this makes them clearly different from liquor stores. Some brewpubs and wineries can serve alcohol to go, but under the rules applied to a liquor store. In some areas, such as New Orleans and parts of Las Vegas and Savannah, Georgia, open containers of alcohol may be prepared to go. This kind of restriction is usually dependent on an open container law. In Pennsylvania and Ohio, bars may sell six-packs of beer "to-go" in original (sealed) containers by obtaining a take-out license. New Jersey permits all forms of packaged goods to be sold at bars, and permits packaged beer and wine to be sold at any time on-premises sales of alcoholic beverages are allowed.
96
+
97
+ During the 19th century, drinking establishments were called saloons. In the American Old West the most popular establishment in town was usually the Western saloon. Many of these Western saloons survive, though their services and features have changed with the times. Newer establishments have sometimes been built in Western saloon style for a nostalgic effect. In American cities there were also numerous saloons, which allowed only male patrons and were usually owned by one of the major breweries. Drunkenness, fights, and alcoholism made the saloon into a powerful symbol of all that was wrong with alcohol.[8] Saloons were the primary target of the Temperance movement, and the Anti-Saloon League, founded in 1892, was the most powerful lobby in favor of Prohibition. When Prohibition was repealed, President Franklin D. Roosevelt asked the states not to permit the return of saloons.[9]
98
+
99
+ Many Irish- or British-themed "pubs" exist throughout United States and Canada and in some continental European countries.
100
+
101
+ As of May 2014, Pittsburgh, Pennsylvania had the most bars per capita in the United States.[10]
102
+
103
+ In Bosnia and Herzegovina, Croatia, Montenegro and Serbia, modern bars overlap with coffeehouses and larger ones are sometimes also nightclubs. Since the 1980s, they have become similar in social function to the bars of Italy, Spain and Greece, as meeting places for people in a city.
104
+
105
+ Interior of Seth Kinman's Table Bluff Hotel and Saloon in Table Bluff, California, 1889
106
+
107
+ Tourists sit outside a bar in Chiang Mai, Thailand
108
+
109
+ A retro bar in Berlin, Germany
110
+
111
+ The original Drifter's Reef bar at Wake Island
112
+
113
+ A bar in Bristol, England
114
+
115
+ A bartender at work in a pub in Jerusalem
116
+
117
+ A bar in Dire Dawa, Ethiopia
118
+
119
+ Bars are a popular setting for fictional works, and in many cases, authors and other creators have developed imaginary bar locations that have become notable, such as the bar in Cheers, the Cocktails and Dreams bar in the film Cocktail (1988), the Copacabana bar in the crime film Goodfellas, the rough-and-tumble Double Deuce in Road House (1989), The Kit Kat Klub in Cabaret, the Korova Milk Bar in the dystopian novel and film adaptation of A Clockwork Orange, the Mos Eisley cantina-bar in Star Wars Episode IV: A New Hope (1977), and the Steinway Beer Garden from the crime-themed video game Grand Theft Auto IV.
en/5580.html.txt ADDED
@@ -0,0 +1,232 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ An operating system (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs.
4
+
5
+ Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.
6
+
7
+ For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware,[1][2] although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers.
8
+
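+ A minimal sketch of the system-call boundary mentioned above, assuming a POSIX environment (the file name user_hello.c and the message text are illustrative; other operating systems expose different interfaces), is the short C program below. Rather than driving the output device itself, it asks the kernel to perform the I/O through the write() system call.
+
+ /* user_hello.c - a user program requesting an OS service via a system call */
+ #include <string.h>
+ #include <unistd.h>
+
+ int main(void) {
+     const char *msg = "hello from user space\n";
+     /* File descriptor 1 is standard output; the kernel, not this program,
+        performs the actual device I/O and returns the number of bytes written. */
+     ssize_t n = write(1, msg, strlen(msg));
+     return n == (ssize_t) strlen(msg) ? 0 : 1;
+ }
+
+ On such systems a call like this traps into the kernel, which validates the request, carries out the operation on the hardware, and then returns control to the program.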
9
+ The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%. macOS by Apple Inc. is in second place (13.23%), and the varieties of Linux are collectively in third place (1.57%).[3] In the mobile sector (including smartphones and tablets), Android's share was up to 70% in 2017.[4] According to third quarter 2016 data, Android's share on smartphones was dominant at 87.5 percent, with a growth rate of 10.3 percent per year, followed by Apple's iOS at 12.1 percent, with a year-on-year decrease in market share of 5.2 percent, while other operating systems amounted to just 0.3 percent.[5] Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications.
10
+
11
+ A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to be running concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or cooperative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like systems such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking.
12
+
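+ The difference between the two styles can be sketched in a few lines of C. The toy program below is purely illustrative (no real kernel schedules tasks this way, and all names are invented): it shows the cooperative case, in which each task runs only until it voluntarily returns control to a simple round-robin scheduler loop. A preemptive system would instead rely on timer interrupts to take the CPU away even from a task that never yields.
+
+ /* coop_sched.c - toy cooperative round-robin scheduling */
+ #include <stdbool.h>
+ #include <stdio.h>
+
+ #define NTASKS 2
+
+ typedef struct { int step; bool done; } task_state;
+
+ /* Each task does one small unit of work per invocation and then returns,
+    i.e. it yields, so the scheduler can hand the CPU to another task. */
+ static void task(int id, task_state *st) {
+     if (st->step < 3)
+         printf("task %d: step %d\n", id, st->step++);
+     else
+         st->done = true;
+ }
+
+ int main(void) {
+     task_state states[NTASKS] = {{0, false}, {0, false}};
+     bool any_left = true;
+     while (any_left) {                    /* the "scheduler" loop */
+         any_left = false;
+         for (int i = 0; i < NTASKS; i++) {
+             if (!states[i].done) {
+                 task(i, &states[i]);      /* hand a time slice to task i */
+                 any_left = true;
+             }
+         }
+     }
+     return 0;
+ }
+
+ Because the tasks are interleaved one step at a time, the output alternates between them, which is the effect time-sharing aims for; the weakness of the cooperative approach is that a task which never returns would monopolize the processor.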
13
+ Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem.[6] A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources to multiple users.
14
+
15
+ A distributed operating system manages a group of distinct, networked computers and makes them appear to be a single computer, as all computations are distributed (divided amongst the constituent computers).[7]
16
+
17
+ In the distributed and cloud computing context of an OS, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses.[8]
18
+
19
+ Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines with less autonomy (e.g. PDAs). They are very compact and extremely efficient by design, and are able to operate with a limited amount of resources. Windows CE and Minix 3 are some examples of embedded operating systems.
20
+
21
+ A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms to achieve a deterministic pattern of behavior. Such an event-driven system switches between tasks based on their priorities or on external events, whereas time-sharing operating systems switch tasks based on clock interrupts.
22
+
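+ The scheduling decision itself can be illustrated with a short C sketch; the task names, priority values, and the pick_next() helper are invented for this example and are not taken from any real RTOS API. Whenever an event makes a task ready, the ready task with the highest priority is the one dispatched next.
+
+ /* rt_pick.c - illustrative fixed-priority dispatch decision */
+ #include <stdbool.h>
+ #include <stdio.h>
+
+ typedef struct {
+     const char *name;
+     int priority;      /* larger value = more urgent */
+     bool ready;
+ } rt_task;
+
+ /* Return the index of the highest-priority ready task, or -1 if none is ready. */
+ static int pick_next(const rt_task *t, int n) {
+     int best = -1;
+     for (int i = 0; i < n; i++)
+         if (t[i].ready && (best < 0 || t[i].priority > t[best].priority))
+             best = i;
+     return best;
+ }
+
+ int main(void) {
+     rt_task tasks[] = {
+         { "log_writer",     1, true  },
+         { "sensor_poll",    5, true  },
+         { "motor_shutdown", 9, false },  /* becomes ready on an external event */
+     };
+     int n = (int)(sizeof tasks / sizeof tasks[0]);
+
+     printf("next: %s\n", tasks[pick_next(tasks, n)].name);  /* sensor_poll */
+     tasks[2].ready = true;        /* an interrupt signals an emergency */
+     printf("next: %s\n", tasks[pick_next(tasks, n)].name);  /* motor_shutdown */
+     return 0;
+ }
+
+ A deterministic rule of this kind, combined with bounded interrupt latencies, is what lets a real-time system guarantee that the most urgent work always runs by its deadline.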
23
+ A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space, machine image that can be deployed to cloud or embedded environments.
24
+
25
+ Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their modern and more complex forms until the early 1960s.[9] Hardware features were added, that enabled use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers.
26
+
27
+ In the 1940s, the earliest electronic digital systems had no operating systems. Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plugboards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards. After programmable general purpose computers were invented, machine languages (consisting of strings of the binary digits 0 and 1 on punched paper tape) were introduced that sped up the programming process (Stern, 1981).[full citation needed]
28
+
29
+ In the early 1950s, a computer could execute only one program at a time. Each user had sole use of the computer for a limited period and would arrive at a scheduled time with their program and data on punched paper cards or punched tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a front panel using toggle switches and panel lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the universal Turing machine.[9]
30
+
31
+ Later machines came with libraries of programs, which would be linked to a user's program to assist in operations such as input and output and compiling (generating machine code from human-readable symbolic code). This was the genesis of the modern-day operating system. However, machines still ran a single job at a time. At Cambridge University in England, the job queue was at one time a washing line (clothes line) from which tapes were hung with different colored clothes-pegs to indicate job priority.[citation needed]
32
+
33
+ An improvement was the Atlas Supervisor. Introduced with the Manchester Atlas in 1962, it is considered by many to be the first recognisable modern operating system.[10] Brinch Hansen described it as "the most significant breakthrough in the history of operating systems."[11]
34
+
35
+ Through the 1950s, many major features were pioneered in the field of operating systems on mainframe computers, including batch processing, input/output interrupting, buffering, multitasking, spooling, runtime libraries, link-loading, and programs for sorting records in files. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications. In 1959, the SHARE Operating System was released as an integrated utility for the IBM 704, and later in the 709 and 7090 mainframes, although it was quickly supplanted by IBSYS/IBJOB on the 709, 7090 and 7094.
36
+
37
+ During the 1960s, IBM's OS/360 introduced the concept of a single OS spanning an entire product line, which was crucial for the success of the System/360 machines. IBM's current mainframe operating systems are distant descendants of this original system and modern machines are backwards-compatible with applications written for OS/360.[citation needed]
38
+
39
+ OS/360 also pioneered the concept that the operating system keeps track of all of the system resources that are used, including program and data space allocation in main memory and file space in secondary storage, and file locking during updates. When a process is terminated for any reason, all of these resources are re-claimed by the operating system.
40
+
41
+ The alternative CP-67 system for the S/360-67 started a whole line of IBM operating systems focused on the concept of virtual machines. Other operating systems used on IBM S/360 series mainframes included systems developed by IBM: COS/360 (Compatibility Operating System), DOS/360 (Disk Operating System), TSS/360 (Time Sharing System), TOS/360 (Tape Operating System), BOS/360 (Basic Operating System), and ACP (Airline Control Program), as well as a few non-IBM systems: MTS (Michigan Terminal System), MUSIC (Multi-User System for Interactive Computing), and ORVYL (Stanford Timesharing System).
42
+
43
+ Control Data Corporation developed the SCOPE operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. PLATO was remarkably innovative for its time, featuring real-time chat and multi-user graphical games.
44
+
45
+ In 1961, Burroughs Corporation introduced the B5000 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages with no machine language or assembler; indeed, the MCP was the first OS to be written exclusively in a high-level language (ESPOL, a dialect of ALGOL). MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS/400, IBM approached Burroughs to license MCP to run on the AS/400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys company's ClearPath/MCP line of computers.
46
+
47
+ UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems[citation needed]. Like all early mainframe systems, this batch-oriented system managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.
48
+
49
+ General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed General Comprehensive Operating System (GCOS).
50
+
51
+ Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Before the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community. RT-11 was a single-user real-time OS for the PDP-11 class minicomputer, and RSX-11 was the corresponding multi-user OS.
52
+
53
+ From the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying computer architectures to appear to be the same as others in a series. In fact, most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations.
54
+
55
+ The enormous investment in software for these systems made since the 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. Notable supported mainframe operating systems include:
56
+
57
+ The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk operating system was CP/M, which was supported on many early microcomputers and was closely imitated by Microsoft's MS-DOS, which became widely popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS). In the 1980s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer, whose Mac OS featured an innovative graphical user interface (GUI).
58
+
59
+ The introduction of the Intel 80386 CPU chip in October 1985,[12] with 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the NEXTSTEP operating system. NEXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD, as the core of Mac OS X (later renamed macOS).
60
+
61
+ The GNU Project was started by activist and programmer Richard Stallman with the goal of creating a complete free software replacement to the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991, Finnish computer science student Linus Torvalds, with cooperation from volunteers collaborating over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU user space components and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to as simply "Linux" by the software industry, a naming convention that Stallman and the Free Software Foundation remain opposed to, preferring the name GNU/Linux. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.
62
+
63
+ Unix was originally written in assembly language.[13] Ken Thompson wrote B, mainly based on BCPL and drawing on his experience in the MULTICS project. B was replaced by C, and Unix, rewritten in C, developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system (see History).
64
+
65
+ The Unix-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group which licenses it for use with any operating system that has been shown to conform to their definitions. "UNIX-like" is commonly used to refer to the large set of operating systems which resemble the original UNIX.
66
+
67
+ Unix-like systems run on a wide variety of computer architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free UNIX variants, such as Linux and BSD, are popular in these areas.
68
+
69
+ Four operating systems are certified by The Open Group (holder of the Unix trademark) as Unix. HP's HP-UX and IBM's AIX are both descendants of the original System V Unix and are designed to run only on their respective vendor's hardware. In contrast, Sun Microsystems's Solaris can run on multiple types of hardware, including x86 and Sparc servers, and PCs. Apple's macOS, a replacement for Apple's earlier (non-Unix) Mac OS, is a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD.
70
+
71
+ Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.
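+ As a rough illustration of why POSIX matters for interoperability, the short C program below relies only on POSIX interfaces (file-descriptor I/O and getpid), so the same source should compile and behave the same on Linux, the BSDs, macOS, and other conforming systems. This is a minimal sketch, not part of the standard itself.
+
+ /* Minimal sketch: uses only POSIX calls (write, getpid), so the same
+  * source should compile unchanged on any POSIX-conforming system. */
+ #include <unistd.h>     /* write(), getpid() */
+ #include <stdio.h>      /* snprintf() */
+
+ int main(void) {
+     char buf[64];
+     int len = snprintf(buf, sizeof buf, "hello from process %ld\n", (long)getpid());
+     write(STDOUT_FILENO, buf, (size_t)len);   /* POSIX file-descriptor I/O */
+     return 0;
+ }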
72
+
73
+ A subgroup of the Unix family is the Berkeley Software Distribution family, which includes FreeBSD, NetBSD, and OpenBSD. These operating systems are most commonly found on webservers, although they can also function as a personal computer OS. The Internet owes much of its existence to BSD, as many of the protocols now commonly used by computers to connect, send and receive data over a network were widely implemented and refined in BSD. The World Wide Web was also first demonstrated on a number of computers running an OS based on BSD called NeXTSTEP.
74
+
75
+ In 1974, University of California, Berkeley installed its first Unix system. Over time, students and staff in the computer science department there began adding new programs to make things easier, such as text editors. When Berkeley received new VAX computers in 1978 with Unix installed, the school's undergraduates modified Unix even more in order to take advantage of the computer's hardware possibilities. The Defense Advanced Research Projects Agency of the US Department of Defense took interest, and decided to fund the project. Many schools, corporations, and government organizations took notice and started to use Berkeley's version of Unix instead of the official one distributed by AT&T.
76
+
77
+ Steve Jobs, upon leaving Apple Inc. in 1985, formed NeXT Inc., a company that manufactured high-end computers running on a variation of BSD called NeXTSTEP. One of these computers was used by Tim Berners-Lee as the first webserver to create the World Wide Web.
78
+
79
+ Developers like Keith Bostic encouraged the project to replace any non-free code that originated with Bell Labs. Once this was done, however, AT&T sued. After two years of legal disputes, the BSD project spawned a number of free derivatives, such as NetBSD and FreeBSD (both in 1993), and OpenBSD (from NetBSD in 1995).
80
+
81
+ macOS (formerly "Mac OS X" and later "OS X") is a line of open core graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. macOS is the successor to the original classic Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, macOS is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997.
82
+ The operating system was first released in 1999 as Mac OS X Server 1.0, followed in March 2001 by a client version (Mac OS X v10.0 "Cheetah"). Since then, six more distinct "client" and "server" editions of macOS were released before the two were merged in OS X 10.7 "Lion".
83
+
84
+ Prior to its merging with macOS, the server edition – macOS Server – was architecturally identical to its desktop counterpart and usually ran on Apple's line of Macintosh server hardware. macOS Server included work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others. With Mac OS X v10.7 Lion, all server aspects of Mac OS X Server have been integrated into the client version and the product re-branded as "OS X" (dropping "Mac" from the name). The server tools are now offered as an application.[14]
85
+
86
+ The Linux kernel originated in 1991, as a project of Linus Torvalds, while a university student in Finland. He posted information about his project on a newsgroup for computer students and programmers, and received support and assistance from volunteers who succeeded in creating a complete and functional kernel.
87
+
88
+ Linux is Unix-like, but was developed without any Unix code, unlike BSD and its variants. Because of its open license model, the Linux kernel code is available for study and modification, which resulted in its use on a wide range of computing machinery from supercomputers to smart-watches. Although estimates suggest that Linux is used on only 1.82% of all "desktop" (or laptop) PCs,[15] it has been widely adopted for use in servers[16] and embedded systems[17] such as cell phones. Linux has superseded Unix on many platforms and is used on most supercomputers including the top 385.[18] Many of the same computers are also on Green500 (but in different order), and Linux runs on the top 10. Linux is also commonly used on other small energy-efficient computers, such as smartphones and smartwatches. The Linux kernel is used in some popular distributions, such as Red Hat, Debian, Ubuntu, Linux Mint and Google's Android, Chrome OS, and Chromium OS.
89
+
90
+ Microsoft Windows is a family of proprietary operating systems designed by Microsoft Corporation and primarily targeted to Intel architecture based computers, with an estimated 88.9 percent total usage share on Web connected computers.[15][19][20][21] The latest version is Windows 10.
91
+
92
+ In 2011, Windows 7 overtook Windows XP as most common version in use.[22][23][24]
93
+
94
+ Microsoft Windows was first released in 1985, as an operating environment running on top of MS-DOS, which was the standard operating system shipped on most Intel architecture personal computers at the time. In 1995, Windows 95 was released; it used MS-DOS only as a bootstrap. For backwards compatibility, Win9x could run real-mode MS-DOS[25][26] and 16-bit Windows 3.x[27] drivers. Windows ME, released in 2000, was the last version in the Win9x family. Later versions have all been based on the Windows NT kernel. Current client versions of Windows run on IA-32, x86-64 and 32-bit ARM microprocessors.[28] In addition, Itanium is still supported in the older server version Windows Server 2008 R2. In the past, Windows NT supported additional architectures.
95
+
96
+ Server editions of Windows are widely used. In recent years, Microsoft has expended significant capital in an effort to promote the use of Windows as a server operating system. However, Windows' usage on servers is not as widespread as on personal computers as Windows competes against Linux and BSD for server market share.[29][30]
97
+
98
+ ReactOS is a Windows-alternative operating system, which is being developed on the principles of Windows – without using any of Microsoft's code.
99
+
100
+ There have been many operating systems that were significant in their day but are no longer so, such as AmigaOS; OS/2 from IBM and Microsoft; classic Mac OS, the non-Unix precursor to Apple's macOS; BeOS; XTS-300; RISC OS; MorphOS; Haiku; BareMetal and FreeMint. Some are still used in niche markets and continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard. Yet other operating systems are used almost exclusively in academia, for operating systems education or to do research on operating system concepts. A typical example of a system that fulfills both roles is MINIX, while for example Singularity is used purely for research. Another example is the Oberon System designed at ETH Zürich by Niklaus Wirth, Jürg Gutknecht and a group of students at the former Computer Systems Institute in the 1980s. It was used mainly for research, teaching, and daily work in Wirth's group.
101
+
102
+ Other operating systems have failed to win significant market share, but have introduced innovations that have influenced mainstream operating systems, not least Bell Labs' Plan 9.
103
+
104
+ The components of an operating system all exist in order to make the different parts of a computer work together. All user software needs to go through the operating system in order to use any of the hardware, whether it be as simple as a mouse or keyboard or as complex as an Internet component.
105
+
106
+ With the aid of the firmware and device drivers, the kernel provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in the RAM, it determines which programs get access to which hardware resources, it sets up or resets the CPU's operating states for optimal operation at all times, and it organizes the data for long-term non-volatile storage with file systems on such media as disks, tapes, flash memory, etc.
107
+
108
+ The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. Executing an application program involves the creation of a process by the operating system kernel which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program which then interacts with the user and with hardware devices.
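+ On POSIX systems this sequence is typically visible to programs as a fork() followed by an exec-family call; the sketch below is a minimal, hedged illustration of that pattern (the "ls" program is used only as an example child).
+
+ /* Sketch of how a new process is typically requested on a POSIX system:
+  * fork() asks the kernel to create a new process, execvp() asks it to load
+  * a program image into that process, waitpid() collects its exit status. */
+ #include <sys/wait.h>
+ #include <unistd.h>
+ #include <stdio.h>
+
+ int main(void) {
+     pid_t pid = fork();              /* kernel creates a second process */
+     if (pid < 0) { perror("fork"); return 1; }
+     if (pid == 0) {                  /* child: replace its image with "ls" */
+         char *argv[] = { "ls", "-l", NULL };
+         execvp("ls", argv);
+         perror("execvp");            /* only reached if exec failed */
+         _exit(127);
+     }
+     int status;
+     waitpid(pid, &status, 0);        /* parent waits; kernel reclaims resources */
+     if (WIFEXITED(status))
+         printf("child exited with status %d\n", WEXITSTATUS(status));
+     return 0;
+ }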
109
+
110
+ Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative – having the operating system "watch" the various sources of input for events (polling) that require action – can be found in older systems with very small stacks (50 or 60 bytes) but is unusual in modern systems with large stacks. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts, and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place.
111
+
112
+ When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or the running program.
113
+
114
+ When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code. The amount of code being run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone). The processing of hardware interrupts is a task that is usually delegated to software called a device driver, which may be part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means.
115
+
116
+ A program may also trigger an interrupt to the operating system. If a program wishes to access hardware, for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel then processes the request. If a program wishes additional resources (or wishes to shed resources) such as memory, it triggers an interrupt to get the kernel's attention.
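+ As a concrete, hedged illustration (assuming a Linux or BSD style system), the mmap() call below asks the kernel for an additional block of anonymous memory; the library call itself is the trap into the kernel described above.
+
+ /* Sketch: asking the kernel for more memory. mmap() with MAP_ANONYMOUS
+  * traps into the kernel, which extends the process's address space. */
+ #include <sys/mman.h>
+ #include <stdio.h>
+
+ int main(void) {
+     size_t len = 1 << 20;                       /* request 1 MiB */
+     void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
+                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+     if (p == MAP_FAILED) { perror("mmap"); return 1; }
+     ((char *)p)[0] = 42;                        /* use the new memory */
+     munmap(p, len);                             /* another system call returns it */
+     return 0;
+ }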
117
+
118
+ Modern microprocessors (CPU or MPU) support multiple modes of operation. CPUs with this capability offer at least two modes: user mode and supervisor mode. In general terms, supervisor mode operation allows unrestricted access to all machine resources, including all MPU instructions. User mode operation sets limits on instruction use and typically disallows direct access to machine resources. CPUs might have other modes similar to user mode as well, such as virtual modes used to emulate older processor types (16-bit processors on a 32-bit one, or 32-bit processors on a 64-bit one).
119
+
120
+ At power-on or reset, the system begins in supervisor mode. Once an operating system kernel has been loaded and started, the boundary between user mode and supervisor mode (also known as kernel mode) can be established.
121
+
122
+ Supervisor mode is used by the kernel for low level tasks that need unrestricted access to hardware, such as controlling how memory is accessed, and communicating with devices such as disk drives and video display devices. User mode, in contrast, is used for almost everything else. Application programs, such as word processors and database managers, operate within user mode, and can only access machine resources by turning control over to the kernel, a process which causes a switch to supervisor mode. Typically, the transfer of control to the kernel is achieved by executing a software interrupt instruction, such as the Motorola 68000 TRAP instruction. The software interrupt causes the microprocessor to switch from user mode to supervisor mode and begin executing code that allows the kernel to take control.
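+ On Linux, the C library exposes this transition directly through the syscall() wrapper; the sketch below (Linux-specific, assuming glibc) issues a write system call, which executes the trap instruction and switches the CPU into supervisor mode for the duration of the call.
+
+ /* Sketch (Linux-specific): syscall() issues the trap instruction that
+  * switches the CPU to supervisor mode and enters the kernel, here for
+  * the write system call. Equivalent to write(1, msg, sizeof msg - 1). */
+ #include <unistd.h>
+ #include <sys/syscall.h>
+
+ int main(void) {
+     const char msg[] = "entering the kernel via a software interrupt\n";
+     syscall(SYS_write, 1, msg, sizeof msg - 1);
+     return 0;
+ }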
123
+
124
+ In user mode, programs usually have access to a restricted set of microprocessor instructions, and generally cannot execute any instructions that could potentially cause disruption to the system's operation. In supervisor mode, instruction execution restrictions are typically removed, allowing the kernel unrestricted access to all machine resources.
125
+
126
+ The term "user mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally causes a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting, for example, by forcibly terminating ("killing") the program).
127
+
128
+ Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory.
129
+
130
+ Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen any more, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program to crash the system.
131
+
132
+ Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which doesn't exist in all computers.
133
+
134
+ In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt, which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short; since it is both difficult to assign a meaningful result to such an operation and usually a sign of a misbehaving program, the kernel generally resorts to terminating the offending program and reporting the error.
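+ On POSIX systems a process can observe this sequence as a SIGSEGV signal delivered just before it is terminated; the hedged sketch below deliberately writes through an invalid pointer to trigger the fault.
+
+ /* Sketch: a deliberate invalid memory access. The MMU raises a fault, the
+  * kernel turns it into SIGSEGV, and the handler below runs before exit. */
+ #include <signal.h>
+ #include <unistd.h>
+ #include <string.h>
+
+ static void on_segv(int sig) {
+     (void)sig;
+     const char msg[] = "caught SIGSEGV, terminating\n";
+     write(STDERR_FILENO, msg, sizeof msg - 1);   /* async-signal-safe */
+     _exit(1);
+ }
+
+ int main(void) {
+     struct sigaction sa;
+     memset(&sa, 0, sizeof sa);
+     sa.sa_handler = on_segv;
+     sigaction(SIGSEGV, &sa, NULL);
+
+     volatile int *bad = (int *)0x1;              /* not a mapped address */
+     *bad = 0;                                    /* triggers the fault */
+     return 0;
+ }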
135
+
136
+ Windows versions 3.1 through ME had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
137
+
138
+ The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
139
+
140
+ If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel is interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault.
141
+
142
+ When the kernel detects a page fault it generally adjusts the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.
143
+
144
+ In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
145
+
146
+ "Virtual memory" provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.[31]
147
+
148
+ Multitasking refers to the running of multiple independent computer programs on the same computer; giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute.
149
+
150
+ An operating system kernel contains a scheduling program which determines how much time each process spends executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This passing of control between the kernel and applications is called a context switch.
151
+
152
+ An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.
153
+
154
+ Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well.
155
+
156
+ The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.)
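+ A loose user-space analogy of such a timer interrupt, assuming a POSIX system: the sketch below arranges for SIGALRM to fire every 100 ms, interrupting the busy loop regardless of what it is doing, much as the kernel's timer interrupt forcibly regains control from a running task. It only illustrates the idea; it is not how a kernel scheduler is implemented.
+
+ /* User-space analogy of a timer interrupt: setitimer() makes SIGALRM fire
+  * every 100 ms, interrupting the busy loop below no matter what it is doing. */
+ #include <signal.h>
+ #include <sys/time.h>
+ #include <string.h>
+ #include <stdio.h>
+
+ static volatile sig_atomic_t ticks = 0;
+
+ static void on_tick(int sig) {
+     (void)sig;
+     ticks++;                          /* the "scheduler" regains control here */
+ }
+
+ int main(void) {
+     struct sigaction sa;
+     memset(&sa, 0, sizeof sa);
+     sa.sa_handler = on_tick;
+     sigaction(SIGALRM, &sa, NULL);
+
+     struct itimerval tv;
+     tv.it_interval.tv_sec = 0;
+     tv.it_interval.tv_usec = 100000;  /* repeat every 100 ms */
+     tv.it_value = tv.it_interval;     /* first expiry in 100 ms */
+     setitimer(ITIMER_REAL, &tv, NULL);
+
+     while (ticks < 20) {              /* busy work, preempted by the timer */
+         /* spin */
+     }
+     printf("interrupted %d times by the timer\n", (int)ticks);
+     return 0;
+ }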
157
+
158
+ On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. The AmigaOS is an exception, having preemptive multitasking from its very first version. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).
159
+
160
+ Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree.
161
+
162
+ Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system.
163
+
164
+ While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. An operating system such as UNIX supports a wide array of storage devices, regardless of their design or file systems, allowing them to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers.
165
+
166
+ A connected storage device, such as a hard drive, is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX, this is the language of block devices.
167
+
168
+ When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates.
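+ A minimal POSIX sketch of this kind of access: stat() below asks the kernel (through the VFS and the relevant file system driver) for a file's size, permission bits, and modification time, without the program knowing which file system or device holds the file.
+
+ /* Sketch: querying a file's metadata through the kernel's file system
+  * interface, independent of which file system actually stores the file. */
+ #include <sys/stat.h>
+ #include <stdio.h>
+ #include <time.h>
+
+ int main(int argc, char **argv) {
+     const char *path = (argc > 1) ? argv[1] : ".";
+     struct stat st;
+     if (stat(path, &st) != 0) { perror("stat"); return 1; }
+     printf("size: %lld bytes\n", (long long)st.st_size);
+     printf("mode: %o\n", (unsigned)st.st_mode & 07777);
+     printf("modified: %s", ctime(&st.st_mtime));
+     return 0;
+ }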
169
+
170
+ Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes make the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ext3 and ReiserFS in Linux. However, in practice, third party drivers are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through third-party software).
171
+
172
+ Support for file systems is highly varied among modern operating systems, although there are several common file systems which almost all operating systems include support and drivers for. Operating systems vary on file system support and on the disk formats they may be installed on. Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. It is possible to install Linux onto many types of file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the media it is stored in, whether it is a hard drive, a disc (CD, DVD...), a USB flash drive, or even contained within a file located on another file system.
173
+
174
+ A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device, through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and on the other end, the requisite interfaces to the operating system and software applications. It is a specialized, hardware-dependent and operating-system-specific program that enables another program (typically an operating system, an applications software package, or a program running under the operating system kernel) to interact transparently with a hardware device, and it usually provides the interrupt handling required by asynchronous, time-dependent hardware interfaces.
175
+
176
+ The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Manufacturers also release newer models that provide more reliable or better performance, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these operating-system-mandated function calls into device-specific calls. In theory, a new device controlled in a new manner should function correctly if a suitable driver is available. This new driver ensures that the device appears to operate as usual from the operating system's point of view.
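+ The idea can be sketched with a table of function pointers, which is roughly how many kernels expose driver entry points; the structure and names below are invented for illustration and do not correspond to any particular operating system's real driver API.
+
+ /* Hypothetical sketch: the "OS" defines one call table that every driver
+  * must fill in; each driver translates those generic calls into
+  * device-specific ones. All names here are invented for illustration. */
+ #include <stdio.h>
+
+ struct block_driver_ops {                       /* interface dictated by the "OS" */
+     int (*read_block)(unsigned lba, void *buf);
+     int (*write_block)(unsigned lba, const void *buf);
+ };
+
+ /* One concrete driver for an imaginary disk controller. */
+ static int fake_disk_read(unsigned lba, void *buf) {
+     (void)buf;
+     printf("disk: read  block %u\n", lba);
+     return 0;
+ }
+ static int fake_disk_write(unsigned lba, const void *buf) {
+     (void)buf;
+     printf("disk: write block %u\n", lba);
+     return 0;
+ }
+
+ static const struct block_driver_ops fake_disk = { fake_disk_read, fake_disk_write };
+
+ /* Generic OS-level code: works with any driver that fills in the table. */
+ static int os_read(const struct block_driver_ops *drv, unsigned lba, void *buf) {
+     return drv->read_block(lba, buf);
+ }
+
+ int main(void) {
+     char sector[512];
+     os_read(&fake_disk, 0, sector);             /* generic call -> driver-specific call */
+     return 0;
+ }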
177
+
178
+ Under versions of Windows before Vista and versions of Linux before 2.6, all driver execution was co-operative, meaning that if a driver entered an infinite loop it would freeze the system. More recent revisions of these operating systems incorporate kernel preemption, where the kernel interrupts the driver to give it tasks, and then separates itself from the process until it receives a response from the device driver, or gives it more tasks to do.
179
+
180
+ Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH which allows networked users direct access to a computer's command line interface.
181
+
182
+ Client/server networking allows a program on a computer, called a client, to connect via a network to another computer, called a server. Servers offer (or host) various services to other network computers and users. These services are usually provided through ports or numbered access points beyond the server's IP address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port. A daemon, being a user program, can in turn access the local hardware resources of that computer by passing requests to the operating system kernel.
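+ A minimal sketch of the client side of this model, using POSIX sockets: the program connects to a placeholder address and port (127.0.0.1:8080 is only an example), sends a request, and reads whatever the server program listening on that port sends back.
+
+ /* Sketch of a network client: create a socket, connect to a server's IP
+  * address and port number, send a request, read a reply. */
+ #include <arpa/inet.h>
+ #include <netinet/in.h>
+ #include <sys/socket.h>
+ #include <unistd.h>
+ #include <string.h>
+ #include <stdio.h>
+
+ int main(void) {
+     int fd = socket(AF_INET, SOCK_STREAM, 0);
+     if (fd < 0) { perror("socket"); return 1; }
+
+     struct sockaddr_in addr;
+     memset(&addr, 0, sizeof addr);
+     addr.sin_family = AF_INET;
+     addr.sin_port = htons(8080);                    /* the service's port number */
+     inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
+
+     if (connect(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
+         perror("connect");
+         close(fd);
+         return 1;
+     }
+     const char req[] = "ping\n";
+     write(fd, req, sizeof req - 1);                 /* send a request */
+     char reply[256];
+     ssize_t n = read(fd, reply, sizeof reply - 1);  /* read the reply */
+     if (n > 0) { reply[n] = '\0'; printf("server said: %s", reply); }
+     close(fd);
+     return 0;
+ }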
183
+
184
+ Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific tasks may also be supported such as NFS for file access. Protocols like ESound, or esd can be easily extended over the network to provide sound from local applications, on a remote system's sound hardware.
185
+
186
+ Whether a computer is secure depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel.[citation needed]
187
+
188
+ The operating system must be capable of distinguishing between requests which should be allowed to be processed, and others which should not be processed. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). Also covered by the concept of requester identity is authorization; the particular services and resources accessible by the requester once logged into a system are tied to either the requester's user account or to the variously configured groups of users to which the requester belongs.[citation needed]
189
+
190
+ In addition to the allow or disallow model of security, a system with a high level of security also offers auditing options. These would allow tracking of requests for access to resources (such as "who has been reading this file?"). Internal security, or security from an already running program, is only possible if all possibly harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured.[citation needed]
191
+
192
+ External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select trusted operating systems being considered for the processing, storage and retrieval of sensitive or classified information.
193
+
194
+ Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have their security compromised. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and be running an insecure service, such as Telnet or FTP, and not have to be threatened by a security breach because the firewall would deny all traffic trying to connect to the service on that port.
195
+
196
+ An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to either emulate a processor or provide a host for a p-code based system such as Java.
197
+
198
+ Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, inclusive of bypassing auditing.
199
+
200
+ Every computer that is to be operated by an individual requires a user interface. The user interface is usually referred to as a shell and is essential if human interaction is to be supported. The user interface views the directory structure and requests services from the operating system that will acquire data from input hardware devices, such as a keyboard, mouse or credit card reader, and requests operating system services to display prompts, status messages and such on output hardware devices, such as a video monitor or printer. The two most common forms of a user interface have historically been the command-line interface, where computer commands are typed out line-by-line, and the graphical user interface, where a visual environment (most commonly a WIMP) is present.
201
+
202
+ Most modern computer systems support graphical user interfaces (GUIs), and often include them. In some computer systems, such as the original implementation of the classic Mac OS, the GUI is integrated into the kernel.
203
+
204
+ While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel and the rest of the operating system. In the 1980s, UNIX, VMS and many others were built this way. Linux and macOS are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user-space; however the graphics drawing routines of versions between Windows NT 4.0 and Windows Server 2003 exist mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.
205
+
206
+ Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE Plasma 5 is a commonly found setup on most Unix and Unix-like (BSD, Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.
207
+
208
+ Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, though an effort to standardize in the 1990s on COSE and CDE failed for various reasons and was eventually eclipsed by the widespread adoption of GNOME and K Desktop Environment. Prior to free software-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).
209
+
210
+ Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 1999.[32]
211
+
212
+ A real-time operating system (RTOS) is an operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.
213
+
214
+ An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.
215
+
216
+ Embedded systems that have fixed deadlines use a real-time operating system such as VxWorks, PikeOS, eCos, QNX, MontaVista Linux and RTLinux. Windows CE is a real-time operating system that shares similar APIs to desktop Windows but shares none of desktop Windows' codebase.[33] Symbian OS also has an RTOS kernel (EKA2) starting with version 8.0b.
217
+
218
+ Some embedded systems use operating systems such as Palm OS, BSD, and Linux, although such operating systems do not support real-time computing.
219
+
220
+ Operating system development is one of the most complicated activities in which a computing hobbyist may engage.[citation needed] A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system, and has few users and active developers.[34]
221
+
222
+ In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is his/her own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests.
223
+
224
+ Examples of a hobby operating system include Syllable and TempleOS.
225
+
226
+ Application software is generally written for use on a specific operating system, and sometimes even for specific hardware.[citation needed] When porting the application to run on another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.) requiring the application to be adapted, changed, or otherwise maintained.
227
+
228
+ Unix was one of the first operating systems not written in assembly language, making it very portable to systems different from its native PDP-11.[35]
229
+
230
+ This cost of supporting operating system diversity can be avoided by instead writing applications against software platforms such as Java or Qt. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries.
231
+
232
+ Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.
en/5581.html.txt ADDED
@@ -0,0 +1,195 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Sensation is the physical process during which sensory systems respond to stimuli and provide data for perception.[1] A sense is any of the systems involved in sensation. During sensation, sense organs engage in stimulus collection and transduction.[2] Sensation is often differentiated from the related and dependent concept of perception, which processes and integrates sensory information in order to give meaning to and understand detected stimuli, giving rise to subjective perceptual experience, or qualia.[3] Sensation and perception are central to and precede almost all aspects of cognition, behavior and thought.[1]
2
+
3
+ In organisms, a sensory organ consists of a group of related sensory cells that respond to a specific type of physical stimulus. Via cranial and spinal nerves, the different types of sensory receptor cells (mechanoreceptors, photoreceptors, chemoreceptors, thermoreceptors) in sensory organs transduce sensory information from sensory organs towards the central nervous system, to the sensory cortices in the brain, where sensory signals are further processed and interpreted (perceived).[1][4][5] Sensory systems, or senses, are often divided into external (exteroception) and internal (interoception) sensory systems.[6][7] Sensory modalities or submodalities refer to the way sensory information is encoded or transduced.[4] Multimodality integrates different senses into one unified perceptual experience. For example, information from one sense has the potential to influence how information from another is perceived.[2] Sensation and perception are studied by a variety of related fields, most notably psychophysics, neurobiology, cognitive psychology, and cognitive science.[1]
4
+
5
+ Humans have a multitude of sensory systems. Human external sensation is based on the sensory organs of the eyes, ears, skin, inner ear, nose, and mouth. The corresponding sensory systems of the visual system (sense of vision), auditory system (sense of hearing), somatosensory system (sense of touch), vestibular system (sense of balance), olfactory system (sense of smell), and gustatory system (sense of taste) contribute, respectively, to the perceptions of vision, hearing, touch, spatial orientation, smell, and taste (flavor).[2][1] Internal sensation, or interoception, detects stimuli from internal organs and tissues. Many internal sensory and perceptual systems exist in humans, including proprioception (body position) and nociception (pain). Further internal chemoreception and osmoreception based sensory systems lead to various perceptions, such as hunger, thirst, suffocation, and nausea, or different involuntary behaviors, such as vomiting.[6][7][8]
6
+
7
+ Nonhuman animals experience sensation and perception, with varying levels of similarity to and difference from humans and other animal species. For example, mammals, in general, have a stronger sense of smell than humans. Some animal species lack one or more human sensory system analogues, some have sensory systems that are not found in humans, while others process and interpret the same sensory information in very different ways. For example, some animals are able to detect electrical[9] and magnetic fields,[10] air moisture,[11] or polarized light,[12] while others sense and perceive through alternative systems, such as echolocation.[13][14] Recently, it has been suggested that plants and artificial agents may be able to detect and interpret environmental information in an analogous manner to animals.[15][16][17]
8
+
9
+ Sensory modality refers to the way that information is encoded, which is similar to the idea of transduction. The main sensory modalities can be described on the basis of how each is transduced. Listing all the different sensory modalities, which can number as many as 17, involves separating the major senses into more specific categories, or submodalities, of the larger sense. An individual sensory modality represents the sensation of a specific type of stimulus. For example, the general sensation and perception of touch, which is known as somatosensation, can be separated into light pressure, deep pressure, vibration, itch, pain, temperature, or hair movement, while the general sensation and perception of taste can be separated into submodalities of sweet, salty, sour, bitter, spicy, and umami, all of which are based on different chemicals binding to sensory neurons.[4]
10
+
11
+ Sensory receptors are the cells or structures that detect sensations. Stimuli in the environment activate specialized receptor cells in the peripheral nervous system. During transduction, physical stimulus is converted into action potential by receptors and transmitted towards the central nervous system for processing.[5] Different types of stimuli are sensed by different types of receptor cells. Receptor cells can be classified into types on the basis of three different criteria: cell type, position, and function. Receptors can be classified structurally on the basis of cell type and their position in relation to stimuli they sense. Receptors can further be classified functionally on the basis of the transduction of stimuli, or how the mechanical stimulus, light, or chemical changed the cell membrane potential.[4]
12
+
13
+ One way to classify receptors is based on their location relative to the stimuli. An exteroceptor is a receptor that is located near a stimulus of the external environment, such as the somatosensory receptors that are located in the skin. An interoceptor is one that interprets stimuli from internal organs and tissues, such as the receptors that sense the increase in blood pressure in the aorta or carotid sinus.[4]
14
+
15
+ The cells that interpret information about the environment can be either (1) a neuron that has a free nerve ending, with dendrites embedded in tissue that would receive a sensation; (2) a neuron that has an encapsulated ending in which the sensory nerve endings are encapsulated in connective tissue that enhances their sensitivity; or (3) a specialized receptor cell, which has distinct structural components that interpret a specific type of stimulus. The pain and temperature receptors in the dermis of the skin are examples of neurons that have free nerve endings (1). Also located in the dermis of the skin are lamellated corpuscles, neurons with encapsulated nerve endings that respond to pressure and touch (2). The cells in the retina that respond to light stimuli are an example of a specialized receptor (3), a photoreceptor.[4]
16
+
17
+ A transmembrane protein receptor is a protein in the cell membrane that mediates a physiological change in a neuron, most often through the opening of ion channels or changes in the cell signaling processes. Transmembrane receptors are activated by chemicals called ligands. For example, a molecule in food can serve as a ligand for taste receptors. Other transmembrane proteins, which are not accurately called receptors, are sensitive to mechanical or thermal changes. Physical changes in these proteins increase ion flow across the membrane, and can generate an action potential or a graded potential in the sensory neurons.[4]
18
+
19
+ A third classification of receptors is by how the receptor transduces stimuli into membrane potential changes. Stimuli are of three general types. Some stimuli are ions and macromolecules that affect transmembrane receptor proteins when these chemicals diffuse across the cell membrane. Some stimuli are physical variations in the environment that affect receptor cell membrane potentials. Other stimuli include the electromagnetic radiation from visible light. For humans, the only electromagnetic energy that is perceived by our eyes is visible light. Some other organisms have receptors that humans lack, such as the heat sensors of snakes, the ultraviolet light sensors of bees, or magnetic receptors in migratory birds.[4]
20
+
21
+ Receptor cells can be further categorized on the basis of the type of stimuli they transduce. The different functional receptor cell types are mechanoreceptors, photoreceptors, chemoreceptors (osmoreceptors), thermoreceptors, and nociceptors. Physical stimuli, such as pressure and vibration, as well as the sensation of sound and body position (balance), are interpreted through a mechanoreceptor. Photoreceptors convert light (visible electromagnetic radiation) into signals. Chemical stimuli can be interpreted by a chemoreceptor that interprets chemical stimuli, such as an object's taste or smell, while osmoreceptors respond to the solute concentrations of body fluids. Nociception (pain) interprets the presence of tissue damage, from sensory information from mechano-, chemo-, and thermoreceptors.[18] Another physical stimulus that has its own type of receptor is temperature, which is sensed through a thermoreceptor that is either sensitive to temperatures above (heat) or below (cold) normal body temperature.[4]
22
+
23
+ Each sense organ (eyes or nose, for instance) requires a minimal amount of stimulation in order to detect a stimulus. This minimum amount of stimulus is called the absolute threshold.[2] The absolute threshold is defined as the minimum amount of stimulation necessary for the detection of a stimulus 50% of the time.[1] Absolute threshold is measured by using a method called signal detection. This process involves presenting stimuli of varying intensities to a subject in order to determine the level at which the subject can reliably detect stimulation in a given sense.[2]
24
+
25
+ Differential threshold or just noticeable difference (JND) is the smallest detectable difference between two stimuli, or the smallest difference in stimuli that can be judged to be different from each other.[1] Weber's Law is an empirical law that states that the difference threshold is a constant fraction of the comparison stimulus.[1] According to Weber's Law, bigger stimuli require larger differences to be noticed.[2]
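A rough numerical sketch of Weber's Law may help here: the just-noticeable difference ΔI is a constant fraction k of the comparison intensity I (ΔI = k·I). The Python sketch below assumes an illustrative Weber fraction of 0.02 (of the order often quoted for lifted weights); the exact fraction varies by modality and is not taken from this article.

```python
# Illustrative sketch of Weber's Law: delta_I = k * I.
# The Weber fraction k = 0.02 is an assumed, textbook-style value, not a measured one.
def just_noticeable_difference(intensity, weber_fraction=0.02):
    """Smallest change in intensity likely to be noticed for a given comparison stimulus."""
    return weber_fraction * intensity

for grams in (100, 1000, 5000):  # hypothetical weights in grams
    print(f"{grams} g -> JND of roughly {just_noticeable_difference(grams):.0f} g")
# Larger comparison stimuli require proportionally larger differences to be noticed.
```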
26
+
27
+ Magnitude estimation is a psychophysical method in which subjects assign numerical values to the perceived magnitudes of given stimuli. The relationship between stimulus intensity and perceived intensity is described by Stevens' power law.[1]
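Stevens' power law can be written as ψ = k·φᵃ, where ψ is perceived magnitude, φ is stimulus intensity, k is a scaling constant, and a is a modality-dependent exponent. The sketch below uses illustrative exponents (about 0.33 for brightness and about 3.5 for electric shock, figures commonly cited in psychophysics textbooks, not drawn from this article).

```python
# Stevens' power law: perceived magnitude = k * intensity ** a.
# The exponents below are illustrative, assumed values.
def perceived_magnitude(intensity, exponent, k=1.0):
    return k * intensity ** exponent

for intensity in (1, 10, 100):
    brightness = perceived_magnitude(intensity, exponent=0.33)   # compressive: grows slowly
    shock = perceived_magnitude(intensity, exponent=3.5)         # expansive: grows rapidly
    print(intensity, round(brightness, 2), round(shock, 2))
```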
28
+
29
+ Signal detection theory quantifies the experience of the subject to the presentation of a stimulus in the presence of noise. Noise in signal detection can be internal or external. Internal noise originates from static in the nervous system; for example, an individual with closed eyes in a dark room still sees something - a blotchy pattern of grey with intermittent brighter flashes. External noise arises from the environment and can interfere with the detection of the stimulus of interest. Noise is only a problem if its magnitude is large enough to interfere with signal collection. The nervous system calculates a criterion, or an internal threshold, for the detection of a signal in the presence of noise. If a signal is judged to be above the criterion, it is differentiated from the noise and is sensed and perceived. Errors in signal detection can lead to false positives and false negatives. The sensory criterion may be shifted based on the importance of detecting the signal, and shifting the criterion influences the likelihood of false positives and false negatives.[1]
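A standard way to make the criterion idea concrete is the equal-variance Gaussian model of signal detection, in which sensitivity (d′) and the criterion are computed from hit and false-alarm rates. The sketch below is a minimal illustration with made-up rates; it assumes SciPy is available for the normal quantile function.

```python
# Minimal signal-detection sketch (equal-variance Gaussian model):
# sensitivity d' and criterion c from hit and false-alarm rates.
from scipy.stats import norm

def d_prime_and_criterion(hit_rate, false_alarm_rate):
    z_hit = norm.ppf(hit_rate)
    z_fa = norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa               # separation between signal and noise distributions
    criterion = -0.5 * (z_hit + z_fa)    # positive = conservative (fewer false positives)
    return d_prime, criterion

print(d_prime_and_criterion(hit_rate=0.85, false_alarm_rate=0.20))
# Shifting the criterion trades false positives against false negatives
# without changing the underlying sensitivity.
```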
30
+
31
+ Subjective visual and auditory experiences appear to be similar across human subjects. The same cannot be said about taste. For example, there is a molecule called propylthiouracil (PROP) that some humans experience as bitter, some as almost tasteless, while others experience it as somewhere between tasteless and bitter. There is a genetic basis for this difference in perception of the same sensory stimulus. This subjective difference in taste perception has implications for individuals' food preferences, and consequently, health.[1]
32
+
33
+ When a stimulus is constant and unchanging, perceptual sensory adaptation occurs. During this process, the subject becomes less sensitive to the stimulus.[2]
34
+
35
+ Biological auditory (hearing), vestibular and spatial, and visual systems (vision) appear to break down real-world complex stimuli into sine wave components, through the mathematical process called Fourier analysis. Many neurons have a strong preference for certain sine frequency components in contrast to others. The way that simpler sounds and images are encoded during sensation can provide insight into how perception of real-world objects happens.[1]
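The idea of breaking a complex stimulus into sine-wave components can be illustrated with a discrete Fourier transform. The toy signal below mixes two arbitrarily chosen frequencies (5 Hz and 40 Hz) and NumPy's FFT recovers them; this is only a sketch of the mathematics, not a model of any sensory organ.

```python
# Toy illustration: a compound "stimulus" built from two sine waves,
# decomposed back into its frequency components with a discrete Fourier transform.
import numpy as np

sample_rate = 1000                       # samples per second
t = np.arange(0, 1, 1 / sample_rate)     # one second of samples
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# The two strongest components sit at the original 5 Hz and 40 Hz.
strongest = sorted(freqs[np.argsort(spectrum)[-2:]])
print(strongest)
```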
36
+
37
+ Perception occurs when nerves that lead from the sensory organs (e.g. eye) to the brain are stimulated, even if that stimulation is unrelated to the target signal of the sensory organ. For example, in the case of the eye, it does not matter whether light or something else stimulates the optic nerve; that stimulation will result in visual perception, even if there was no visual stimulus to begin with. (To prove this point to yourself (and if you are a human), close your eyes (preferably in a dark room) and press gently on the outside corner of one eye through the eyelid. You will see a visual spot toward the inside of your visual field, near your nose.)[1]
38
+
39
+ All stimuli received by the receptors are transduced to an action potential, which is carried along one or more afferent neurons towards a specific area (cortex) of the brain. Just as different nerves are dedicated to sensory and motor tasks, different areas of the brain (cortices) are similarly dedicated to different sensory and perceptual tasks. More complex processing is accomplished across cortical regions that spread beyond the primary cortices. Every nerve, sensory or motor, has its own signal transmission speed. For example, nerves in the frog's legs have a 90 ft/s (99 km/h) signal transmission speed, while sensory nerves in humans transmit sensory information at speeds between 165 ft/s (181 km/h) and 330 ft/s (362 km/h).[1]
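The conduction-speed figures above are simple unit conversions from feet per second to kilometres per hour; the snippet below just reproduces that arithmetic as a check.

```python
# Converting the quoted conduction speeds from ft/s to km/h.
FT_PER_S_TO_KM_PER_H = 0.3048 * 3.6   # 1 ft = 0.3048 m; 1 m/s = 3.6 km/h

for ft_per_s in (90, 165, 330):
    print(f"{ft_per_s} ft/s is about {ft_per_s * FT_PER_S_TO_KM_PER_H:.0f} km/h")
# Roughly 99, 181 and 362 km/h, matching the figures in the text.
```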
40
+
41
+ Perceptual experience is often multimodal. Multimodality integrates different senses into one unified perceptual experience. Information from one sense has the potential to influence how information from another is perceived.[2] Multimodal perception is qualitatively different from unimodal perception. There has been a growing body of evidence since the mid-1990s on the neural correlates of multimodal perception.[20]
42
+
43
+ Historical inquiries into the underlying mechanisms of sensation and perception led early researchers to subscribe to various philosophical interpretations of perception and the mind, including panpsychism, dualism, and materialism. The majority of modern scientists who study sensation and perception take on a materialistic view of the mind.[1]
44
+
45
+ Some examples of human absolute thresholds for the 9-21 external senses.[21]
46
+
47
+ Humans respond more strongly to multimodal stimuli compared to the sum of each single modality together, an effect called the superadditive effect of multisensory integration.[2] Neurons that respond to both visual and auditory stimuli have been identified in the superior temporal sulcus.[20] Additionally, multimodal “what” and “where” pathways have been proposed for auditory and tactile stimuli.[22]
48
+
49
+ External receptors that respond to stimuli from outside the body are called exteroceptors.[23] Human external sensation is based on the sensory organs of the eyes, ears, skin, vestibular system, nose, and mouth, which contribute, respectively, to the sensory perceptions of vision, hearing, touch, spatial orientation, smell, and taste. Smell and taste are both responsible for identifying molecules and thus both are types of chemoreceptors. Both olfaction (smell) and gustation (taste) require the transduction of chemical stimuli into electrical potentials.[2][1]
50
+
51
+ The visual system, or sense of sight, is based on the transduction of light stimuli received through the eyes and contributes to visual perception. The visual system detects light using photoreceptors in the retina of each eye, which generate electrical nerve impulses for the perception of varying colors and brightness. There are two types of photoreceptors: rods and cones. Rods are very sensitive to light but do not distinguish colors. Cones distinguish colors but are less sensitive to dim light.[4]
52
+
53
+ At the molecular level, visual stimuli cause changes in the photopigment molecule that lead to changes in membrane potential of the photoreceptor cell. A single unit of light is called a photon, which is described in physics as a packet of energy with properties of both a particle and a wave. The energy of a photon is represented by its wavelength, with each wavelength of visible light corresponding to a particular color. Visible light is electromagnetic radiation with a wavelength between 380 and 720 nm. Wavelengths of electromagnetic radiation longer than 720 nm fall into the infrared range, whereas wavelengths shorter than 380 nm fall into the ultraviolet range. Light with a wavelength of 380 nm is blue whereas light with a wavelength of 720 nm is dark red. All other colors fall between red and blue at various points along the wavelength scale.[4]
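The link between wavelength and photon energy is E = hc/λ. The short calculation below, using standard rounded physical constants, shows the photon energies at the two ends of the visible range quoted above.

```python
# Photon energy E = h * c / wavelength at the edges of the visible range.
PLANCK = 6.626e-34        # Planck constant, J*s
LIGHT_SPEED = 3.0e8       # speed of light, m/s
JOULES_PER_EV = 1.602e-19

for nm in (380, 720):
    energy_joules = PLANCK * LIGHT_SPEED / (nm * 1e-9)
    print(f"{nm} nm -> {energy_joules:.2e} J ({energy_joules / JOULES_PER_EV:.2f} eV)")
# Shorter wavelengths carry more energy per photon than longer ones.
```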
54
+
55
+ The three types of cone opsins, being sensitive to different wavelengths of light, provide us with color vision. By comparing the activity of the three different cones, the brain can extract color information from visual stimuli. For example, a bright blue light that has a wavelength of approximately 450 nm would activate the “red” cones minimally, the “green” cones marginally, and the “blue” cones predominantly. The relative activation of the three different cones is calculated by the brain, which perceives the color as blue. However, cones cannot react to low-intensity light, and rods do not sense the color of light. Therefore, our low-light vision is—in essence—in grayscale. In other words, in a dark room, everything appears as a shade of gray. If you think that you can see colors in the dark, it is most likely because your brain knows what color something is and is relying on that memory.[4]
56
+
57
+ There is some disagreement as to whether the visual system consists of one, two, or three submodalities. Neuroanatomists generally regard it as two submodalities, given that different receptors are responsible for the perception of color and brightness. Some argue[citation needed] that stereopsis, the perception of depth using both eyes, also constitutes a sense, but it is generally regarded as a cognitive (that is, post-sensory) function of the visual cortex of the brain where patterns and objects in images are recognized and interpreted based on previously learned information. This is called visual memory.
58
+
59
+ The inability to see is called blindness. Blindness may result from damage to the eyeball, especially to the retina, damage to the optic nerve that connects each eye to the brain, and/or from stroke (infarcts in the brain). Temporary or permanent blindness can be caused by poisons or medications. People who are blind from degradation or damage to the visual cortex, but still have functional eyes, are actually capable of some level of vision and reaction to visual stimuli but not a conscious perception; this is known as blindsight. People with blindsight are usually not aware that they are reacting to visual sources, and instead just unconsciously adapt their behavior to the stimulus.
60
+
61
+ On February 14, 2013, researchers developed a neural implant that gives rats the ability to sense infrared light, which for the first time provides living creatures with new abilities, instead of simply replacing or augmenting existing abilities.[24]
62
+
63
+ Visual Perception in Psychology
64
+
65
+ According to Gestalt psychology, people perceive the whole of something even if parts of it are not there. The Gestalt laws of organization identify seven factors that help to group what is seen into patterns or groups: Common Fate, Similarity, Proximity, Closure, Symmetry, Continuity, and Past Experience.[25]
66
+
67
+ The Law of Common Fate says that elements that move together are grouped together; people follow the trend of motion as the lines or dots flow.[26]
68
+
69
+ The Law of Similarity refers to the grouping of images or objects that are similar to each other in some aspect. This could be due to shade, colour, size, shape, or other qualities you could distinguish.[27]
70
+
71
+ The Law of Proximity states that our minds like to group based on how close objects are to each other. We may see 42 objects in a group, but we can also perceive three groups of two lines with seven objects in each line. [26]
72
+
73
+ The Law of Closure is the idea that we as humans still see a full picture even if there are gaps within that picture. There could be gaps or parts missing from a section of a shape, but we would still perceive the shape as whole.[27]
74
+
75
+ The Law of Symmetry refers to a person's preference to see symmetry around a central point. An example would be when we use parentheses in writing. We tend to perceive all of the words in the parentheses as one section instead of individual words within the parentheses.[27]
76
+
77
+ The Law of Continuity tells us that objects are grouped together by their elements and then perceived as a whole. This usually happens when we see overlapping objects. We will see the overlapping objects with no interruptions.[27]
78
+
79
+ The Law of Past Experience refers to the tendency humans have to categorize objects according to past experiences under certain circumstances. If two objects are usually perceived together or within close proximity of each other, the Law of Past Experience tends to apply.[26]
80
+
81
+ Hearing, or audition, is the transduction of sound waves into a neural signal that is made possible by the structures of the ear. The large, fleshy structure on the lateral aspect of the head is known as the auricle. At the end of the auditory canal is the tympanic membrane, or ear drum, which vibrates after it is struck by sound waves. The auricle, ear canal, and tympanic membrane are often referred to as the external ear. The middle ear consists of a space spanned by three small bones called the ossicles. The three ossicles are the malleus, incus, and stapes, which are Latin names that roughly translate to hammer, anvil, and stirrup. The malleus is attached to the tympanic membrane and articulates with the incus. The incus, in turn, articulates with the stapes. The stapes is then attached to the inner ear, where the sound waves will be transduced into a neural signal. The middle ear is connected to the pharynx through the Eustachian tube, which helps equilibrate air pressure across the tympanic membrane. The tube is normally closed but will pop open when the muscles of the pharynx contract during swallowing or yawning.[4]
82
+
83
+ Mechanoreceptors, located in the inner ear, turn motion into electrical nerve pulses. Since sound is vibration propagating through a medium such as air, the detection of these vibrations, that is the sense of hearing, is a mechanical sense: the vibrations are mechanically conducted from the eardrum through a series of tiny bones to hair-like fibers in the inner ear, which detect mechanical motion of the fibers within a range of about 20 to 20,000 hertz,[28] with substantial variation between individuals. Hearing at high frequencies declines with age. Inability to hear is called deafness or hearing impairment. Sound can also be detected as vibrations conducted through the body by tactition; lower frequencies that can be heard are detected this way. Some deaf people are able to determine the direction and location of vibrations picked up through the feet.[29]
84
+
85
+ Studies pertaining to audition increased in number towards the end of the nineteenth century. During this time, many laboratories in the United States began to create new models, diagrams, and instruments that all pertained to the ear.[30]
86
+
87
+ There is a branch of cognitive psychology dedicated strictly to audition, called auditory cognitive psychology. Its main aim is to understand how humans are able to use sound in thinking without actually saying it aloud.[31]
88
+
89
+ Related to auditory cognitive psychology is psychoacoustics, which is oriented more toward people interested in music.[32] Haptics, a word used to refer to both taction and kinesthesia, has many parallels with psychoacoustics.[32] Most research in these two areas is focused on the instrument, the listener, and the player of the instrument.[32]
90
+
91
+ Somatosensation is considered a general sense, as opposed to the special senses discussed in this section. Somatosensation is the group of sensory modalities that are associated with touch and interoception. The modalities of somatosensation include pressure, vibration, light touch, tickle, itch, temperature, pain, and kinesthesia.[4] Somatosensation, also called tactition (adjectival form: tactile), is a perception resulting from activation of neural receptors, generally in the skin including hair follicles, but also in the tongue, throat, and mucosa. A variety of pressure receptors respond to variations in pressure (firm, brushing, sustained, etc.). The touch sense of itching caused by insect bites or allergies involves special itch-specific neurons in the skin and spinal cord.[33] The loss or impairment of the ability to feel anything touched is called tactile anesthesia. Paresthesia is a sensation of tingling, pricking, or numbness of the skin that may result from nerve damage and may be permanent or temporary.
92
+
93
+ Two types of somatosensory signals that are transduced by free nerve endings are pain and temperature. These two modalities use thermoreceptors and nociceptors to transduce temperature and pain stimuli, respectively. Temperature receptors are stimulated when local temperatures differ from body temperature. Some thermoreceptors are sensitive to just cold and others to just heat. Nociception is the sensation of potentially damaging stimuli. Mechanical, chemical, or thermal stimuli beyond a set threshold will elicit painful sensations. Stressed or damaged tissues release chemicals that activate receptor proteins in the nociceptors. For example, the sensation of heat associated with spicy foods involves capsaicin, the active molecule in hot peppers.[4]
94
+
95
+ Low frequency vibrations are sensed by mechanoreceptors called Merkel cells, also known as type I cutaneous mechanoreceptors. Merkel cells are located in the stratum basale of the epidermis. Deep pressure and vibration is transduced by lamellated (Pacinian) corpuscles, which are receptors with encapsulated endings found deep in the dermis, or subcutaneous tissue. Light touch is transduced by the encapsulated endings known as tactile (Meissner) corpuscles. Follicles are also wrapped in a plexus of nerve endings known as the hair follicle plexus. These nerve endings detect the movement of hair at the surface of the skin, such as when an insect may be walking along the skin. Stretching of the skin is transduced by stretch receptors known as bulbous corpuscles. Bulbous corpuscles are also known as Ruffini corpuscles, or type II cutaneous mechanoreceptors.[4]
96
+
97
+ The heat receptors are sensitive to infrared radiation and can occur in specialized organs, for instance in pit vipers. The thermoceptors in the skin are quite different from the homeostatic thermoceptors in the brain (hypothalamus), which provide feedback on internal body temperature.
98
+
99
+ The vestibular sense, or sense of balance (equilibrium), is the sense that contributes to the perception of balance (equilibrium), spatial orientation, direction, or acceleration (equilibrioception). Along with audition, the inner ear is responsible for encoding information about equilibrium. A similar mechanoreceptor—a hair cell with stereocilia—senses head position, head movement, and whether our bodies are in motion. These cells are located within the vestibule of the inner ear. Head position is sensed by the utricle and saccule, whereas head movement is sensed by the semicircular canals. The neural signals generated in the vestibular ganglion are transmitted through the vestibulocochlear nerve to the brain stem and cerebellum.[4]
100
+
101
+ The semicircular canals are three ring-like extensions of the vestibule. One is oriented in the horizontal plane, whereas the other two are oriented in the vertical plane. The anterior and posterior vertical canals are oriented at approximately 45 degrees relative to the sagittal plane. The base of each semicircular canal, where it meets with the vestibule, connects to an enlarged region known as the ampulla. The ampulla contains the hair cells that respond to rotational movement, such as turning the head while saying “no.” The stereocilia of these hair cells extend into the cupula, a membrane that attaches to the top of the ampulla. As the head rotates in a plane parallel to the semicircular canal, the fluid lags, deflecting the cupula in the direction opposite to the head movement. The semicircular canals contain several ampullae, with some oriented horizontally and others oriented vertically. By comparing the relative movements of both the horizontal and vertical ampullae, the vestibular system can detect the direction of most head movements within three-dimensional (3D) space.[4]
102
+
103
+ The vestibular nerve conducts information from sensory receptors in the three ampullae, which sense motion of fluid in the three semicircular canals caused by three-dimensional rotation of the head. The vestibular nerve also conducts information from the utricle and the saccule, which contain hair-like sensory receptors that bend under the weight of otoliths (small crystals of calcium carbonate), providing the inertia needed to detect head rotation, linear acceleration, and the direction of gravitational force.
104
+
105
+ The gustatory system or the sense of taste is the sensory system that is partially responsible for the perception of taste (flavor).[34] A few recognized submodalities exist within taste: sweet, salty, sour, bitter, and umami. Very recent research has suggested that there may also be a sixth taste submodality for fats, or lipids.[4] The sense of taste is often confused with the perception of flavor, which is the result of the multimodal integration of gustatory (taste) and olfactory (smell) sensations.[35]
106
+
107
+ Within the structure of the lingual papillae are taste buds that contain specialized gustatory receptor cells for the transduction of taste stimuli. These receptor cells are sensitive to the chemicals contained within foods that are ingested, and they release neurotransmitters based on the amount of the chemical in the food. Neurotransmitters from the gustatory cells can activate sensory neurons in the facial, glossopharyngeal, and vagus cranial nerves.[4]
108
+
109
+ Salty and sour taste submodalities are triggered by the cations Na+ and H+, respectively. The other taste modalities result from food molecules binding to a G protein–coupled receptor. A G protein signal transduction system ultimately leads to depolarization of the gustatory cell. The sweet taste is the sensitivity of gustatory cells to the presence of glucose (or sugar substitutes) dissolved in the saliva. Bitter taste is similar to sweet in that food molecules bind to G protein–coupled receptors. The taste known as umami is often referred to as the savory taste. Like sweet and bitter, it is based on the activation of G protein–coupled receptors by a specific molecule.[4]
110
+
111
+ Once the gustatory cells are activated by the taste molecules, they release neurotransmitters onto the dendrites of sensory neurons. These neurons are part of the facial and glossopharyngeal cranial nerves, as well as a component within the vagus nerve dedicated to the gag reflex. The facial nerve connects to taste buds in the anterior two thirds of the tongue. The glossopharyngeal nerve connects to taste buds in the posterior third of the tongue. The vagus nerve connects to taste buds in the extreme posterior of the tongue, verging on the pharynx, which are more sensitive to noxious stimuli such as bitterness.[4]
112
+
113
+ Flavor depends on odor, texture, and temperature as well as on taste. Humans receive tastes through sensory organs called taste buds, or gustatory calyculi, concentrated on the upper surface of the tongue. Other tastes such as calcium[36][37] and free fatty acids[38] may also be basic tastes but have yet to receive widespread acceptance. The inability to taste is called ageusia.
114
+
115
+ There is a rare phenomenon associated with the gustatory sense called lexical-gustatory synesthesia, in which people can “taste” words.[39] Such individuals report flavor sensations when they read, hear, or even imagine words, despite not actually eating anything. They report not only simple flavors, but textures, complex flavors, and temperatures as well.[40]
116
+
117
+ Like the sense of taste, the sense of smell, or the olfactory system, is also responsive to chemical stimuli.[4] Unlike taste, there are hundreds of olfactory receptors (388 according to one source), each binding to a particular molecular feature. Odor molecules possess a variety of features and, thus, excite specific receptors more or less strongly. This combination of excitatory signals from different receptors makes up what humans perceive as the molecule's smell.[41]
118
+
119
+ The olfactory receptor neurons are located in a small region within the superior nasal cavity. This region is referred to as the olfactory epithelium and contains bipolar sensory neurons. Each olfactory sensory neuron has dendrites that extend from the apical surface of the epithelium into the mucus lining the cavity. As airborne molecules are inhaled through the nose, they pass over the olfactory epithelial region and dissolve into the mucus. These odorant molecules bind to proteins that keep them dissolved in the mucus and help transport them to the olfactory dendrites. The odorant–protein complex binds to a receptor protein within the cell membrane of an olfactory dendrite. These receptors are G protein–coupled, and will produce a graded membrane potential in the olfactory neurons.[4]
120
+
121
+ In the brain, olfaction is processed by the olfactory cortex. Olfactory receptor neurons in the nose differ from most other neurons in that they die and regenerate on a regular basis. The inability to smell is called anosmia. Some neurons in the nose are specialized to detect pheromones.[42] Loss of the sense of smell can result in food tasting bland. A person with an impaired sense of smell may require additional spice and seasoning levels for food to be tasted. Anosmia may also be related to some presentations of mild depression, because the loss of enjoyment of food may lead to a general sense of despair. The ability of olfactory neurons to replace themselves decreases with age, leading to age-related anosmia. This explains why some elderly people salt their food more than younger people do.[4]
122
+
123
+ Olfactory dysfunction can be caused by age, exposure to toxic chemicals, viral infections, epilepsy, neurodegenerative disease, head trauma, or another disorder.[5]
124
+
125
+ As studies in olfaction have continued, a positive correlation has been found between olfactory dysfunction or degeneration and early signs of Alzheimer's disease and sporadic Parkinson's disease. Many patients do not notice the decline in smell before being tested. In Parkinson's disease and Alzheimer's disease, an olfactory deficit is present in 85 to 90% of early onset cases.[5] There is evidence that the decline of this sense can precede Alzheimer's or Parkinson's disease by a couple of years. Although the deficit is present in these two diseases, as well as others, it is important to note that the severity or magnitude varies with every disease. This has led to suggestions that olfactory testing could be used in some cases to aid in differentiating between neurodegenerative diseases.[5]
126
+
127
+ Those who were born without a sense of smell, or have a damaged sense of smell, usually report one or more of three problems. First, the olfactory sense serves as a warning against spoiled food; if the sense of smell is damaged or absent, a person may contract food poisoning more often. Second, an impaired sense of smell can strain relationships, or create insecurities within them, because of the person's inability to smell their own body odor. Lastly, smell influences how food and drink taste; when the olfactory sense is damaged, the satisfaction from eating and drinking is diminished.
128
+
129
+ Proprioception, the kinesthetic sense, provides the parietal cortex of the brain with information on the movement and relative positions of the parts of the body. Neurologists test this sense by telling patients to close their eyes and touch their own nose with the tip of a finger. Assuming proper proprioceptive function, at no time will the person lose awareness of where the hand actually is, even though it is not being detected by any of the other senses. Proprioception and touch are related in subtle ways, and their impairment results in surprising and deep deficits in perception and action.[43]
130
+
131
+ Nociception (physiological pain) signals nerve-damage or damage to tissue. The three types of pain receptors are cutaneous (skin), somatic (joints and bones), and visceral (body organs). It was previously believed that pain was simply the overloading of pressure receptors, but research in the first half of the 20th century indicated that pain is a distinct phenomenon that intertwines with all of the other senses, including touch. Pain was once considered an entirely subjective experience, but recent studies show that pain is registered in the anterior cingulate gyrus of the brain.[44] The main function of pain is to attract our attention to dangers and motivate us to avoid them. For example, humans avoid touching a sharp needle, or hot object, or extending an arm beyond a safe limit because it is dangerous, and thus hurts. Without pain, people could do many dangerous things without being aware of the dangers.
132
+
133
+ An internal sensation and perception also known as interoception[45] is "any sense that is normally stimulated from within the body".[46] These involve numerous sensory receptors in internal organs. Interoception is thought to be atypical in clinical conditions such as alexithymia.[47]
134
+ Some examples of specific receptors are:
135
+
136
+ Other living organisms have receptors to sense the world around them, including many of the senses listed above for humans. However, the mechanisms and capabilities vary widely.
137
+
138
+ An example of smell in non-mammals is that of sharks, which combine their keen sense of smell with timing to determine the direction of a smell; they follow the nostril that first detected the smell.[54] Insects have olfactory receptors on their antennae. However, the degree and magnitude to which non-human animals can smell better than humans remains unknown.[55]
139
+
140
+ Many animals (salamanders, reptiles, mammals) have a vomeronasal organ[56] that is connected with the mouth cavity. In mammals it is mainly used to detect pheromones of marked territory, trails, and sexual state. Reptiles like snakes and monitor lizards make extensive use of it as a smelling organ by transferring scent molecules to the vomeronasal organ with the tips of the forked tongue. In reptiles the vomeronasal organ is commonly referred to as Jacobson's organ. In mammals, it is often associated with a special behavior called flehmen characterized by uplifting of the lips. The organ is vestigial in humans, because associated neurons have not been found that give any sensory input in humans.[57]
141
+
142
+ Flies and butterflies have taste organs on their feet, allowing them to taste anything they land on. Catfish have taste organs across their entire bodies, and can taste anything they touch, including chemicals in the water.[58]
143
+
144
+ Cats have the ability to see in low light, which is due to muscles surrounding their irides–which contract and expand their pupils–as well as to the tapetum lucidum, a reflective membrane that optimizes the image.
145
+ Pit vipers, pythons and some boas have organs that allow them to detect infrared light, such that these snakes are able to sense the body heat of their prey. The common vampire bat may also have an infrared sensor on its nose.[59] It has been found that birds and some other animals are tetrachromats and have the ability to see in the ultraviolet down to 300 nanometers. Bees and dragonflies[60] are also able to see in the ultraviolet. Mantis shrimps can perceive both polarized light and multispectral images and have twelve distinct kinds of color receptors, unlike humans which have three kinds and most mammals which have two kinds.[61]
146
+
147
+ Cephalopods have the ability to change color using chromatophores in their skin. Researchers believe that opsins in the skin can sense different wavelengths of light and help the creatures choose a coloration that camouflages them, in addition to light input from the eyes.[62] Other researchers hypothesize that cephalopod eyes in species which only have a single photoreceptor protein may use chromatic aberration to turn monochromatic vision into color vision,[63] explaining pupils shaped like the letter U, the letter W, or a dumbbell, as well as explaining the need for colorful mating displays.[64] Some cephalopods can distinguish the polarization of light.
148
+
149
+ Many invertebrates have a statocyst, which is a sensor for acceleration and orientation that works very differently from the mammalian semicircular canals.
150
+
151
+ In addition, some animals have senses that humans do not, including the following:
152
+
153
+ Magnetoception (or magnetoreception) is the ability to detect the direction one is facing based on the Earth's magnetic field. Directional awareness is most commonly observed in birds, which rely on their magnetic sense to navigate during migration.[65][66][67][68] It has also been observed in insects such as bees. Cattle make use of magnetoception to align themselves in a north–south direction.[69] Magnetotactic bacteria build miniature magnets inside themselves and use them to determine their orientation relative to the Earth's magnetic field.[70][71] There has been some recent (tentative) research suggesting that rhodopsin in the human eye, which responds particularly well to blue light, can facilitate magnetoception in humans.[72]
154
+
155
+ Certain animals, including bats and cetaceans, have the ability to determine orientation to other objects through interpretation of reflected sound (like sonar). They most often use this to navigate through poor lighting conditions or to identify and track prey. There is currently uncertainty about whether this is simply an extremely developed post-sensory interpretation of auditory perceptions or whether it actually constitutes a separate sense. Resolution of the issue will require brain scans of animals while they actually perform echolocation, a task that has proven difficult in practice.
156
+
157
+ Blind people report they are able to navigate and in some cases identify an object by interpreting reflected sounds (especially their own footsteps), a phenomenon known as human echolocation.
158
+
159
+ Electroreception (or electroception) is the ability to detect electric fields. Several species of fish, sharks, and rays have the capacity to sense changes in electric fields in their immediate vicinity. For cartilaginous fish this occurs through a specialized organ called the Ampullae of Lorenzini. Some fish passively sense changing nearby electric fields; some generate their own weak electric fields, and sense the pattern of field potentials over their body surface; and some use these electric field generating and sensing capacities for social communication. The mechanisms by which electroceptive fish construct a spatial representation from very small differences in field potentials involve comparisons of spike latencies from different parts of the fish's body.
160
+
161
+ The only mammals known to demonstrate electroception are the dolphins and the monotremes. Among these mammals, the platypus[73] has the most acute sense of electroception.
162
+
163
+ A dolphin can detect electric fields in water using electroreceptors in vibrissal crypts arrayed in pairs on its snout and which evolved from whisker motion sensors.[74] These electroreceptors can detect electric fields as weak as 4.6 microvolts per centimeter, such as those generated by contracting muscles and pumping gills of potential prey. This permits the dolphin to locate prey from the seafloor where sediment limits visibility and echolocation.
164
+
165
+ Spiders have been shown to detect electric fields to determine a suitable time to extend web for 'ballooning'.[75]
166
+
167
+ Body modification enthusiasts have experimented with magnetic implants to attempt to replicate this sense.[76] However, in general humans (and it is presumed other mammals) can detect electric fields only indirectly by detecting the effect they have on hairs. An electrically charged balloon, for instance, will exert a force on human arm hairs, which can be felt through tactition and identified as coming from a static charge (and not from wind or the like). This is not electroreception, as it is a post-sensory cognitive action.
168
+
169
+ Hygroreception is the ability to detect changes in the moisture content of the environment.[11][77]
170
+
171
+ The ability to sense infrared thermal radiation evolved independently in various families of snakes. Essentially, it allows these reptiles to "see" radiant heat at wavelengths between 5 and 30 μm to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes.[78] It was previously thought that the organs evolved primarily as prey detectors, but it is now believed that it may also be used in thermoregulatory decision making.[79] The facial pit underwent parallel evolution in pitvipers and some boas and pythons, having evolved once in pitvipers and multiple times in boas and pythons.[80] The electrophysiology of the structure is similar between the two lineages, but they differ in gross structural anatomy. Most superficially, pitvipers possess one large pit organ on either side of the head, between the eye and the nostril (Loreal pit), while boas and pythons have three or more comparatively smaller pits lining the upper and sometimes the lower lip, in or between the scales. Those of the pitvipers are the more advanced, having a suspended sensory membrane as opposed to a simple pit structure. Within the family Viperidae, the pit organ is seen only in the subfamily Crotalinae: the pitvipers. The organ is used extensively to detect and target endothermic prey such as rodents and birds, and it was previously assumed that the organ evolved specifically for that purpose. However, recent evidence shows that the pit organ may also be used for thermoregulation. According to Krochmal et al., pitvipers can use their pits for thermoregulatory decision-making while true vipers (vipers who do not contain heat-sensing pits) cannot.
172
+
173
+ In spite of its detection of IR light, the pits' IR detection mechanism is not similar to photoreceptors – while photoreceptors detect light via photochemical reactions, the protein in the pits of snakes is in fact a temperature-sensitive ion channel. It senses infrared signals through a mechanism involving warming of the pit organ, rather than a chemical reaction to light.[81] This is consistent with the thin pit membrane, which allows incoming IR radiation to quickly and precisely warm a given ion channel and trigger a nerve impulse, and with the vascularization of the pit membrane, which rapidly cools the ion channel back to its original "resting" or "inactive" temperature.[81]
174
+
175
+ Pressure detection uses the organ of Weber, a system consisting of three appendages of vertebrae transferring changes in shape of the gas bladder to the middle ear. It can be used to regulate the buoyancy of the fish. Fish like the weather fish and other loaches are also known to respond to low pressure areas but they lack a swim bladder.
176
+
177
+ Current detection is a detection system of water currents, consisting mostly of vortices, found in the lateral line of fish and aquatic forms of amphibians. The lateral line is also sensitive to low-frequency vibrations. The mechanoreceptors are hair cells, the same mechanoreceptors for vestibular sense and hearing. It is used primarily for navigation, hunting, and schooling. The receptors of the electrical sense are modified hair cells of the lateral line system.
178
+
179
+ Polarized light direction/detection is used by bees to orient themselves, especially on cloudy days. Cuttlefish, some beetles, and mantis shrimp can also perceive the polarization of light. Most sighted humans can in fact learn to roughly detect large areas of polarization by an effect called Haidinger's brush; however, this is considered an entoptic phenomenon rather than a separate sense.
180
+
181
+ Slit sensillae of spiders detect mechanical strain in the exoskeleton, providing information on force and vibrations.
182
+
183
+ By using a variety of sense receptors, plants sense light, temperature, humidity, chemical substances, chemical gradients, reorientation, magnetic fields, infections, tissue damage and mechanical pressure. The absence of a nervous system notwithstanding, plants interpret and respond to these stimuli by a variety of hormonal and cell-to-cell communication pathways that result in movement, morphological changes and physiological state alterations at the organism level, that is, result in plant behavior. Such physiological and cognitive functions are generally not believed to give rise to mental phenomena or qualia, however, as these are typically considered the product of nervous system activity. The emergence of mental phenomena from the activity of systems functionally or computationally analogous to that of nervous systems is, however, a hypothetical possibility explored by some schools of thought in the philosophy of mind field, such as functionalism and computationalism.
184
+
185
+ However, plants can perceive the world around them,[15] and might be able to emit airborne sounds similar to "screaming" when stressed. Those noises may not be detectable by human ears, but organisms with a hearing range that extends into ultrasonic frequencies—like mice, bats or perhaps other plants—could hear the plants' cries from as far as 15 feet (4.6 m) away.[82]
186
+
187
+ Machine perception is the capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them.[16][17][83] Computers take in and respond to their environment through attached hardware. Until recently, input was limited to a keyboard, joystick or a mouse, but advances in technology, both in hardware and software, have allowed computers to take in sensory input in a way similar to humans.[16][17]
188
+
189
+ In the time of William Shakespeare, there were commonly reckoned to be five wits or five senses.[85] At that time, the words "sense" and "wit" were synonyms,[85] so the senses were known as the five outward wits.[86][87] This traditional concept of five senses is common today.
190
+
191
+ The traditional five senses are enumerated as the "five material faculties" (pañcannaṃ indriyānaṃ avakanti) in Hindu literature. They appear in allegorical representation as early as in the Katha Upanishad (roughly 6th century BC), as five horses drawing the "chariot" of the body, guided by the mind as "chariot driver".
192
+
193
+ Depictions of the five traditional senses as allegory became a popular subject for seventeenth-century artists, especially among Dutch and Flemish Baroque painters. A typical example is Gérard de Lairesse's Allegory of the Five Senses (1668), in which each of the figures in the main group alludes to a sense: Sight is the reclining boy with a convex mirror, hearing is the cupid-like boy with a triangle, smell is represented by the girl with flowers, taste is represented by the woman with the fruit, and touch is represented by the woman holding the bird.
194
+
195
+ In Buddhist philosophy, Ayatana or "sense-base" includes the mind as a sense organ, in addition to the traditional five. This addition to the commonly acknowledged senses may arise from the psychological orientation involved in Buddhist thought and practice. The mind considered by itself is seen as the principal gateway to a different spectrum of phenomena that differ from the physical sense data. This way of viewing the human sense system indicates the importance of internal sources of sensation and perception that complements our experience of the external world.[citation needed]
en/5582.html.txt ADDED
@@ -0,0 +1,163 @@
1
+
2
+
3
+ The Solar System[b] is the gravitationally bound system of the Sun and the objects that orbit it, either directly or indirectly.[c] Of the objects that orbit the Sun directly, the largest are the eight planets,[d] with the remainder being smaller objects, the dwarf planets and small Solar System bodies. Of the objects that orbit the Sun indirectly—the moons—two are larger than the smallest planet, Mercury.[e]
4
+
5
+ The Solar System formed 4.6 billion years ago from the gravitational collapse of a giant interstellar molecular cloud. The vast majority of the system's mass is in the Sun, with the majority of the remaining mass contained in Jupiter. The four smaller inner planets, Mercury, Venus, Earth and Mars, are terrestrial planets, being primarily composed of rock and metal. The four outer planets are giant planets, being substantially more massive than the terrestrials. The two largest planets, Jupiter and Saturn, are gas giants, being composed mainly of hydrogen and helium; the two outermost planets, Uranus and Neptune, are ice giants, being composed mostly of substances with relatively high melting points compared with hydrogen and helium, called volatiles, such as water, ammonia and methane. All eight planets have almost circular orbits that lie within a nearly flat disc called the ecliptic.
6
+
7
+ The Solar System also contains smaller objects.[f] The asteroid belt, which lies between the orbits of Mars and Jupiter, mostly contains objects composed, like the terrestrial planets, of rock and metal. Beyond Neptune's orbit lie the Kuiper belt and scattered disc, which are populations of trans-Neptunian objects composed mostly of ices, and beyond them a newly discovered population of sednoids. Within these populations, some objects are large enough to have rounded under their own gravity, though there is considerable debate as to how many there will prove to be.[9][10] Such objects are categorized as dwarf planets. The only certain dwarf planet is Pluto; another trans-Neptunian object, Eris, is expected to qualify, and the asteroid Ceres comes at least close to qualifying.[f] In addition to these two regions, various other small-body populations, including comets, centaurs and interplanetary dust clouds, freely travel between regions. Six of the planets, the six largest possible dwarf planets, and many of the smaller bodies are orbited by natural satellites, usually termed "moons" after the Moon. Each of the outer planets is encircled by planetary rings of dust and other small objects.
8
+
9
+ The solar wind, a stream of charged particles flowing outwards from the Sun, creates a bubble-like region in the interstellar medium known as the heliosphere. The heliopause is the point at which pressure from the solar wind is equal to the opposing pressure of the interstellar medium; it extends out to the edge of the scattered disc. The Oort cloud, which is thought to be the source for long-period comets, may also exist at a distance roughly a thousand times further than the heliosphere. The Solar System is located in the Orion Arm, 26,000 light-years from the center of the Milky Way galaxy.
10
+
11
+ For most of history, humanity did not recognize or understand the concept of the Solar System. Most people up to the Late Middle Ages–Renaissance believed Earth to be stationary at the centre of the universe and categorically different from the divine or ethereal objects that moved through the sky. Although the Greek philosopher Aristarchus of Samos had speculated on a heliocentric reordering of the cosmos, Nicolaus Copernicus was the first to develop a mathematically predictive heliocentric system.[11][12]
12
+
13
+ In the 17th century, Galileo discovered that the Sun was marked with sunspots, and that Jupiter had four satellites in orbit around it.[13] Christiaan Huygens followed on from Galileo's discoveries by discovering Saturn's moon Titan and the shape of the rings of Saturn.[14] Edmond Halley realised in 1705 that repeated sightings of a comet were recording the same object, returning regularly once every 75–76 years. This was the first evidence that anything other than the planets orbited the Sun.[15] Around this time (1704), the term "Solar System" first appeared in English.[16] In 1838, Friedrich Bessel successfully measured a stellar parallax, an apparent shift in the position of a star created by Earth's motion around the Sun, providing the first direct, experimental proof of heliocentrism.[17] Improvements in observational astronomy and the use of unmanned spacecraft have since enabled the detailed investigation of other bodies orbiting the Sun.
14
+
15
+ The principal component of the Solar System is the Sun, a G2 main-sequence star that contains 99.86% of the system's known mass and dominates it gravitationally.[18] The Sun's four largest orbiting bodies, the giant planets, account for 99% of the remaining mass, with Jupiter and Saturn together comprising more than 90%. The remaining objects of the Solar System (including the four terrestrial planets, the dwarf planets, moons, asteroids, and comets) together comprise less than 0.002% of the Solar System's total mass.[g]
16
+
17
+ Most large objects in orbit around the Sun lie near the plane of Earth's orbit, known as the ecliptic. The planets are very close to the ecliptic, whereas comets and Kuiper belt objects are frequently at significantly greater angles to it.[22][23] As a result of the formation of the Solar System, planets and most other objects orbit the Sun in the same direction that the Sun is rotating (counter-clockwise, as viewed from above Earth's north pole).[24] There are exceptions, such as Halley's Comet. Most of the larger moons orbit their planets in this prograde direction (with Triton being the largest retrograde exception) and most larger objects rotate themselves in the same direction (with Venus being a notable retrograde exception).
18
+
19
+ The overall structure of the charted regions of the Solar System consists of the Sun, four relatively small inner planets surrounded by a belt of mostly rocky asteroids, and four giant planets surrounded by the Kuiper belt of mostly icy objects. Astronomers sometimes informally divide this structure into separate regions. The inner Solar System includes the four terrestrial planets and the asteroid belt. The outer Solar System is beyond the asteroids, including the four giant planets.[25] Since the discovery of the Kuiper belt, the outermost parts of the Solar System are considered a distinct region consisting of the objects beyond Neptune.[26]
20
+
21
+ Most of the planets in the Solar System have secondary systems of their own, being orbited by planetary objects called natural satellites, or moons (two of which, Titan and Ganymede, are larger than the planet Mercury), and, in the case of the four giant planets, by planetary rings, thin bands of tiny particles that orbit them in unison. Most of the largest natural satellites are in synchronous rotation, with one face permanently turned toward their parent.
22
+
23
+ Kepler's laws of planetary motion describe the orbits of objects about the Sun. Following Kepler's laws, each object travels along an ellipse with the Sun at one focus. Objects closer to the Sun (with smaller semi-major axes) travel more quickly because they are more affected by the Sun's gravity. On an elliptical orbit, a body's distance from the Sun varies over the course of its year. A body's closest approach to the Sun is called its perihelion, whereas its most distant point from the Sun is called its aphelion. The orbits of the planets are nearly circular, but many comets, asteroids, and Kuiper belt objects follow highly elliptical orbits. The positions of the bodies in the Solar System can be predicted using numerical models.
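Kepler's third law makes the "closer objects travel more quickly" point quantitative: in Sun-centred units (periods in years, semi-major axes in AU, planetary masses neglected) it reduces to P² ≈ a³. The sketch below applies that approximation to a few planets; the semi-major axes are rounded values.

```python
# Kepler's third law in Sun-centred units: P^2 ~ a^3 (P in years, a in AU).
def orbital_period_years(semi_major_axis_au):
    return semi_major_axis_au ** 1.5

for name, a in (("Mercury", 0.39), ("Earth", 1.0), ("Jupiter", 5.2), ("Neptune", 30.1)):
    print(f"{name}: a = {a} AU -> P of about {orbital_period_years(a):.1f} years")
# Mercury ~0.2 yr, Earth 1 yr, Jupiter ~12 yr, Neptune ~165 yr:
# the closer to the Sun, the shorter the orbital period.
```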
24
+
25
+ Although the Sun dominates the system by mass, it accounts for only about 2% of the angular momentum.[27][28] The planets, dominated by Jupiter, account for most of the rest of the angular momentum due to the combination of their mass, orbit, and distance from the Sun, with a possibly significant contribution from comets.[27]
26
+
27
+ The Sun, which comprises nearly all the matter in the Solar System, is composed of roughly 98% hydrogen and helium.[29] Jupiter and Saturn, which comprise nearly all the remaining matter, are also primarily composed of hydrogen and helium.[30][31] A composition gradient exists in the Solar System, created by heat and light pressure from the Sun; those objects closer to the Sun, which are more affected by heat and light pressure, are composed of elements with high melting points. Objects farther from the Sun are composed largely of materials with lower melting points.[32] The boundary in the Solar System beyond which those volatile substances could condense is known as the frost line, and it lies at roughly 5 AU from the Sun.[4]
28
+
29
+ The objects of the inner Solar System are composed mostly of rock,[33] the collective name for compounds with high melting points, such as silicates, iron or nickel, that remained solid under almost all conditions in the protoplanetary nebula.[34] Jupiter and Saturn are composed mainly of gases, the astronomical term for materials with extremely low melting points and high vapour pressure, such as hydrogen, helium, and neon, which were always in the gaseous phase in the nebula.[34] Ices, like water, methane, ammonia, hydrogen sulfide, and carbon dioxide,[33] have melting points up to a few hundred kelvins.[34] They can be found as ices, liquids, or gases in various places in the Solar System, whereas in the nebula they were either in the solid or gaseous phase.[34] Icy substances comprise the majority of the satellites of the giant planets, as well as most of Uranus and Neptune (the so-called "ice giants") and the numerous small objects that lie beyond Neptune's orbit.[33][35] Together, gases and ices are referred to as volatiles.[36]
30
+
31
+ The distance from Earth to the Sun is 1 astronomical unit [AU] (150,000,000 km; 93,000,000 mi). For comparison, the radius of the Sun is 0.0047 AU (700,000 km). Thus, the Sun occupies 0.00001% (10⁻⁵ %) of the volume of a sphere with a radius the size of Earth's orbit, whereas Earth's volume is roughly one millionth (10⁻⁶) that of the Sun. Jupiter, the largest planet, is 5.2 astronomical units (780,000,000 km) from the Sun and has a radius of 71,000 km (0.00047 AU), whereas the most distant planet, Neptune, is 30 AU (4.5×10⁹ km) from the Sun.
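The volume figures in this paragraph follow from cubing the ratio of the radii; the snippet below reproduces that arithmetic using the rounded radii quoted above (Earth's radius, not stated in the text, is taken as roughly 6,371 km).

```python
# Volume comparisons behind the figures above (volume scales as radius cubed).
SUN_RADIUS_AU = 0.0047
SUN_RADIUS_KM = 700_000.0
EARTH_RADIUS_KM = 6_371.0   # rounded mean radius of Earth (assumed, not from the text)

sun_fraction_of_orbit_sphere = SUN_RADIUS_AU ** 3                 # vs a sphere of radius 1 AU
earth_fraction_of_sun = (EARTH_RADIUS_KM / SUN_RADIUS_KM) ** 3

print(f"Sun / 1-AU sphere: {sun_fraction_of_orbit_sphere:.1e}")   # ~1e-7, i.e. about 1e-5 %
print(f"Earth / Sun:       {earth_fraction_of_sun:.1e}")          # roughly one millionth
```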
32
+
33
+ With a few exceptions, the farther a planet or belt is from the Sun, the larger the distance between its orbit and the orbit of the next nearer object to the Sun. For example, Venus is approximately 0.33 AU farther out from the Sun than Mercury, whereas Saturn is 4.3 AU out from Jupiter, and Neptune lies 10.5 AU out from Uranus. Attempts have been made to determine a relationship between these orbital distances (for example, the Titius–Bode law),[37] but no such theory has been accepted. The images at the beginning of this section show the orbits of the various constituents of the Solar System on different scales.
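The Titius–Bode law mentioned above is the empirical rule a = 0.4 + 0.3 × 2ⁿ AU (with n = −∞, 0, 1, 2, …). The sketch below, using rounded actual distances, shows why the rule is not accepted as a physical law: it tracks the planets reasonably well out to Uranus and then overshoots badly for Neptune.

```python
# Titius-Bode rule: a_n = 0.4 + 0.3 * 2**n AU, with the first term taken as 0.4 AU.
def titius_bode_au(n):
    return 0.4 if n is None else 0.4 + 0.3 * 2 ** n

actual_au = {"Mercury": 0.39, "Venus": 0.72, "Earth": 1.0, "Mars": 1.52, "Ceres": 2.77,
             "Jupiter": 5.2, "Saturn": 9.5, "Uranus": 19.2, "Neptune": 30.1}

for (name, a), n in zip(actual_au.items(), (None, 0, 1, 2, 3, 4, 5, 6, 7)):
    print(f"{name}: actual {a} AU, rule {titius_bode_au(n):.1f} AU")
# The rule predicts ~19.6 AU for Uranus (close) but ~38.8 AU for Neptune (~30.1 AU actual).
```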
34
+
35
+ Some Solar System models attempt to convey the relative scales involved in the Solar System on human terms. Some are small in scale (and may be mechanical—called orreries)—whereas others extend across cities or regional areas.[38] The largest such scale model, the Sweden Solar System, uses the 110-metre (361 ft) Ericsson Globe in Stockholm as its substitute Sun, and, following the scale, Jupiter is a 7.5-metre (25-foot) sphere at Stockholm Arlanda Airport, 40 km (25 mi) away, whereas the farthest current object, Sedna, is a 10 cm (4 in) sphere in Luleå, 912 km (567 mi) away.[39][40]
36
+
37
+ If the Sun–Neptune distance is scaled to 100 metres, then the Sun would be about 3 cm in diameter (roughly two-thirds the diameter of a golf ball), the giant planets would be all smaller than about 3 mm, and Earth's diameter along with that of the other terrestrial planets would be smaller than a flea (0.3 mm) at this scale.[41]
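The 100-metre comparison is straightforward scaling: every real length is multiplied by 100 m per Sun–Neptune distance (about 30 AU). The snippet below checks the quoted sizes using rounded real diameters.

```python
# Checking the 100-metre scale model: shrink every length by (100 m per 30 AU).
AU_KM = 1.496e8
scale_m_per_km = 100.0 / (30 * AU_KM)   # metres of model per kilometre of reality

diameters_km = {"Sun": 1.39e6, "Jupiter": 1.43e5, "Earth": 1.27e4}  # rounded real diameters
for body, d_km in diameters_km.items():
    print(f"{body}: about {d_km * scale_m_per_km * 1000:.2f} mm across in the model")
# Sun ~31 mm (about 3 cm), Jupiter ~3 mm, Earth ~0.3 mm, matching the text.
```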
38
+
39
+ Distances of selected bodies of the Solar System from the Sun. The left and right edges of each bar correspond to the perihelion and aphelion of the body, respectively, hence long bars denote high orbital eccentricity. The radius of the Sun is 0.7 million km, and the radius of Jupiter (the largest planet) is 0.07 million km, both too small to resolve on this image.
40
+
41
+ The Solar System formed 4.568 billion years ago from the gravitational collapse of a region within a large molecular cloud.[h] This initial cloud was likely several light-years across and probably birthed several stars.[43] As is typical of molecular clouds, this one consisted mostly of hydrogen, with some helium, and small amounts of heavier elements fused by previous generations of stars. As the region that would become the Solar System, known as the pre-solar nebula,[44] collapsed, conservation of angular momentum caused it to rotate faster. The centre, where most of the mass collected, became increasingly hotter than the surrounding disc.[43] As the contracting nebula rotated faster, it began to flatten into a protoplanetary disc with a diameter of roughly 200 AU[43] and a hot, dense protostar at the centre.[45][46] The planets formed by accretion from this disc,[47] in which dust and gas gravitationally attracted each other, coalescing to form ever larger bodies. Hundreds of protoplanets may have existed in the early Solar System, but they either merged or were destroyed, leaving the planets, dwarf planets, and leftover minor bodies.
42
+
43
+ Due to their higher boiling points, only metals and silicates could exist in solid form in the warm inner Solar System close to the Sun, and these would eventually form the rocky planets of Mercury, Venus, Earth, and Mars. Because metallic elements only comprised a very small fraction of the solar nebula, the terrestrial planets could not grow very large. The giant planets (Jupiter, Saturn, Uranus, and Neptune) formed further out, beyond the frost line, the point between the orbits of Mars and Jupiter where material is cool enough for volatile icy compounds to remain solid. The ices that formed these planets were more plentiful than the metals and silicates that formed the terrestrial inner planets, allowing them to grow massive enough to capture large atmospheres of hydrogen and helium, the lightest and most abundant elements. Leftover debris that never became planets congregated in regions such as the asteroid belt, Kuiper belt, and Oort cloud. The Nice model is an explanation for the creation of these regions and how the outer planets could have formed in different positions and migrated to their current orbits through various gravitational interactions.
44
+
45
+ Within 50 million years, the pressure and density of hydrogen in the centre of the protostar became great enough for it to begin thermonuclear fusion.[49] The temperature, reaction rate, pressure, and density increased until hydrostatic equilibrium was achieved: the thermal pressure equalled the force of gravity. At this point, the Sun became a main-sequence star.[50] The main-sequence phase, from beginning to end, will last about 10 billion years for the Sun compared to around two billion years for all other phases of the Sun's pre-remnant life combined.[51] Solar wind from the Sun created the heliosphere and swept away the remaining gas and dust from the protoplanetary disc into interstellar space, ending the planetary formation process. The Sun is growing brighter; early in its main-sequence life its brightness was 70% of what it is today.[52]
46
+
47
+ The Solar System will remain roughly as we know it today until the hydrogen in the core of the Sun has been entirely converted to helium, which will occur roughly 5 billion years from now. This will mark the end of the Sun's main-sequence life. At this time, the core of the Sun will contract with hydrogen fusion occurring along a shell surrounding the inert helium, and the energy output will be much greater than at present. The outer layers of the Sun will expand to roughly 260 times its current diameter, and the Sun will become a red giant. Because of its vastly increased surface area, the surface of the Sun will be considerably cooler (2,600 K at its coolest) than it is on the main sequence.[51] The expanding Sun is expected to vaporize Mercury and render Earth uninhabitable. Eventually, the core will be hot enough for helium fusion; the Sun will burn helium for a fraction of the time it burned hydrogen in the core. The Sun is not massive enough to commence the fusion of heavier elements, and nuclear reactions in the core will dwindle. Its outer layers will move away into space, leaving a white dwarf, an extraordinarily dense object, half the original mass of the Sun but only the size of Earth.[53] The ejected outer layers will form what is known as a planetary nebula, returning some of the material that formed the Sun—but now enriched with heavier elements like carbon—to the interstellar medium.
48
+
49
+ The Sun is the Solar System's star and by far its most massive component. Its large mass (332,900 Earth masses),[54] which comprises 99.86% of all the mass in the Solar System,[55] produces temperatures and densities in its core high enough to sustain nuclear fusion of hydrogen into helium, making it a main-sequence star.[56] This releases an enormous amount of energy, mostly radiated into space as electromagnetic radiation peaking in visible light.[57]
50
+
51
+ The Sun is a G2-type main-sequence star. Hotter main-sequence stars are more luminous. The Sun's temperature is intermediate between that of the hottest stars and that of the coolest stars. Stars brighter and hotter than the Sun are rare, whereas substantially dimmer and cooler stars, known as red dwarfs, make up 85% of the stars in the Milky Way.[58][59]
52
+
53
+ The Sun is a population I star; it has a higher abundance of elements heavier than hydrogen and helium ("metals" in astronomical parlance) than the older population II stars.[60] Elements heavier than hydrogen and helium were formed in the cores of ancient and exploding stars, so the first generation of stars had to die before the Universe could be enriched with these atoms. The oldest stars contain few metals, whereas stars born later have more. This high metallicity is thought to have been crucial to the Sun's development of a planetary system because the planets form from the accretion of "metals".[61]
54
+
55
+ The vast majority of the Solar System consists of a near-vacuum known as the interplanetary medium. Along with light, the Sun radiates a continuous stream of charged particles (a plasma) known as the solar wind. This stream of particles spreads outwards at roughly 1.5 million kilometres per hour,[62] creating a tenuous atmosphere that permeates the interplanetary medium out to at least 100 AU (see § Heliosphere).[63] Activity on the Sun's surface, such as solar flares and coronal mass ejections, disturbs the heliosphere, creating space weather and causing geomagnetic storms.[64] The largest structure within the heliosphere is the heliospheric current sheet, a spiral form created by the actions of the Sun's rotating magnetic field on the interplanetary medium.[65][66]
56
+
57
+ Earth's magnetic field stops its atmosphere from being stripped away by the solar wind.[67] Venus and Mars do not have magnetic fields, and as a result the solar wind is causing their atmospheres to gradually bleed away into space.[68] Coronal mass ejections and similar events blow a magnetic field and huge quantities of material from the surface of the Sun. The interaction of this magnetic field and material with Earth's magnetic field funnels charged particles into Earth's upper atmosphere, where their interactions create aurorae seen near the magnetic poles.
58
+
59
+ The heliosphere and planetary magnetic fields (for those planets that have them) partially shield the Solar System from high-energy interstellar particles called cosmic rays. The density of cosmic rays in the interstellar medium and the strength of the Sun's magnetic field change on very long timescales, so the level of cosmic-ray penetration in the Solar System varies, though by how much is unknown.[69]
60
+
61
+ The interplanetary medium is home to at least two disc-like regions of cosmic dust. The first, the zodiacal dust cloud, lies in the inner Solar System and causes the zodiacal light. It was likely formed by collisions within the asteroid belt brought on by gravitational interactions with the planets.[70] The second dust cloud extends from about 10 AU to about 40 AU, and was probably created by similar collisions within the Kuiper belt.[71][72]
62
+
63
+ The inner Solar System is the region comprising the terrestrial planets and the asteroid belt.[73] Composed mainly of silicates and metals, the objects of the inner Solar System are relatively close to the Sun; the radius of this entire region is less than the distance between the orbits of Jupiter and Saturn. This region is also within the frost line, which is a little less than 5 AU (about 700 million km) from the Sun.[74]
64
+
65
+ The four terrestrial or inner planets have dense, rocky compositions, few or no moons, and no ring systems. They are composed largely of refractory minerals, such as the silicates—which form their crusts and mantles—and metals, such as iron and nickel, which form their cores. Three of the four inner planets (Venus, Earth and Mars) have atmospheres substantial enough to generate weather; all have impact craters and tectonic surface features, such as rift valleys and volcanoes. The term inner planet should not be confused with inferior planet, which designates those planets that are closer to the Sun than Earth is (i.e. Mercury and Venus).
66
+
67
+ Mercury (0.4 AU from the Sun) is the closest planet to the Sun and, on average, the closest planet to all seven other planets.[75][76] The smallest planet in the Solar System (0.055 M⊕), Mercury has no natural satellites. Besides impact craters, its only known geological features are lobed ridges or rupes that were probably produced by a period of contraction early in its history.[77] Mercury's very tenuous atmosphere consists of atoms blasted off its surface by the solar wind.[78] Its relatively large iron core and thin mantle have not yet been adequately explained. Hypotheses include that its outer layers were stripped off by a giant impact, or that it was prevented from fully accreting by the young Sun's energy.[79][80]
68
+
69
+ Venus (0.7 AU from the Sun) is close in size to Earth (0.815 M⊕) and, like Earth, has a thick silicate mantle around an iron core, a substantial atmosphere, and evidence of internal geological activity. It is much drier than Earth, and its atmosphere is ninety times as dense as Earth's. Venus has no natural satellites. It is the hottest planet, with surface temperatures over 400 °C (752 °F), most likely due to the amount of greenhouse gases in the atmosphere.[81] No definitive evidence of current geological activity has been detected on Venus, but it has no magnetic field that would prevent depletion of its substantial atmosphere, which suggests that its atmosphere is being replenished by volcanic eruptions.[82]
70
+
71
+ Earth (1 AU from the Sun) is the largest and densest of the inner planets, the only one known to have current geological activity, and the only place where life is known to exist.[83] Its liquid hydrosphere is unique among the terrestrial planets, and it is the only planet where plate tectonics has been observed. Earth's atmosphere is radically different from those of the other planets, having been altered by the presence of life to contain 21% free oxygen.[84] It has one natural satellite, the Moon, the only large satellite of a terrestrial planet in the Solar System.
72
+
73
+ Mars (1.5 AU from the Sun) is smaller than Earth and Venus (0.107 M⊕). It has an atmosphere of mostly carbon dioxide with a surface pressure of 6.1 millibars (roughly 0.6% of that of Earth).[85] Its surface, peppered with vast volcanoes, such as Olympus Mons, and rift valleys, such as Valles Marineris, shows geological activity that may have persisted until as recently as 2 million years ago.[86] Its red colour comes from iron oxide (rust) in its soil.[87] Mars has two tiny natural satellites (Deimos and Phobos) thought to be either captured asteroids,[88] or ejected debris from a massive impact early in Mars's history.[89]
74
+
75
+ Asteroids, except for the largest, Ceres, are classified as small Solar System bodies[f] and are composed mainly of refractory rocky and metallic minerals, with some ice.[90][91] They range from a few metres to hundreds of kilometres in size. Asteroids smaller than one metre are usually called meteoroids and micrometeoroids (grain-sized), depending on different, somewhat arbitrary definitions.
76
+
77
+ The asteroid belt occupies the orbit between Mars and Jupiter, between 2.3 and 3.3 AU from the Sun. It is thought to be remnants from the Solar System's formation that failed to coalesce because of the gravitational interference of Jupiter.[92] The asteroid belt contains tens of thousands, possibly millions, of objects over one kilometre in diameter.[93] Despite this, the total mass of the asteroid belt is unlikely to be more than a thousandth of that of Earth.[21] The asteroid belt is very sparsely populated; spacecraft routinely pass through without incident.
78
+
79
+ Ceres (2.77 AU) is the largest asteroid, a protoplanet, and a dwarf planet.[f] It has a diameter of slightly under 1,000 km, and a mass large enough for its own gravity to pull it into a spherical shape. Ceres was considered a planet when it was discovered in 1801, and was reclassified as an asteroid in the 1850s as further observations revealed additional asteroids.[94] It was classified as a dwarf planet in 2006 when the definition of a planet was created.
80
+
81
+ Asteroids in the asteroid belt are divided into asteroid groups and families based on their orbital characteristics. Asteroid moons are asteroids that orbit larger asteroids. They are not as clearly distinguished as planetary moons, sometimes being almost as large as their partners. The asteroid belt also contains main-belt comets, which may have been the source of Earth's water.[95]
82
+
83
+ Jupiter trojans are located in either of Jupiter's L4 or L5 points (gravitationally stable regions leading and trailing a planet in its orbit); the term trojan is also used for small bodies in any other planetary or satellite Lagrange point. Hilda asteroids are in a 2:3 resonance with Jupiter; that is, they go around the Sun three times for every two Jupiter orbits.[96]
84
+
85
+ The inner Solar System also contains near-Earth asteroids, many of which cross the orbits of the inner planets.[97] Some of them are potentially hazardous objects.
86
+
87
+ The outer region of the Solar System is home to the giant planets and their large moons. The centaurs and many short-period comets also orbit in this region. Due to their greater distance from the Sun, the solid objects in the outer Solar System contain a higher proportion of volatiles, such as water, ammonia, and methane than those of the inner Solar System because the lower temperatures allow these compounds to remain solid.
88
+
89
+ The four outer planets, or giant planets (sometimes called Jovian planets), collectively make up 99% of the mass known to orbit the Sun.[g] Jupiter and Saturn are together more than 400 times the mass of Earth and consist overwhelmingly of hydrogen and helium. Uranus and Neptune are far less massive—less than 20 Earth masses (M⊕) each—and are composed primarily of ices. For these reasons, some astronomers suggest they belong in their own category, ice giants.[98] All four giant planets have rings, although only Saturn's ring system is easily observed from Earth. The term superior planet designates planets outside Earth's orbit and thus includes both the outer planets and Mars.
90
+
91
+ Jupiter (5.2 AU), at 318 M⊕, is 2.5 times the mass of all the other planets put together. It is composed largely of hydrogen and helium. Jupiter's strong internal heat creates semi-permanent features in its atmosphere, such as cloud bands and the Great Red Spot. Jupiter has 79 known satellites. The four largest, Ganymede, Callisto, Io, and Europa, show similarities to the terrestrial planets, such as volcanism and internal heating.[99] Ganymede, the largest satellite in the Solar System, is larger than Mercury.
92
+
93
+ Saturn (9.5 AU), distinguished by its extensive ring system, has several similarities to Jupiter, such as its atmospheric composition and magnetosphere. Although Saturn has 60% of Jupiter's volume, it is less than a third as massive, at 95 M⊕. Saturn is the only planet of the Solar System that is less dense than water.[100] The rings of Saturn are made up of small ice and rock particles. Saturn has 82 confirmed satellites composed largely of ice. Two of these, Titan and Enceladus, show signs of geological activity.[101] Titan, the second-largest moon in the Solar System, is larger than Mercury and the only satellite in the Solar System with a substantial atmosphere.
94
+
95
+ Uranus (19.2 AU), at 14 M⊕, is the lightest of the outer planets. Uniquely among the planets, it orbits the Sun on its side; its axial tilt is over ninety degrees to the ecliptic. It has a much colder core than the other giant planets and radiates very little heat into space.[102] Uranus has 27 known satellites, the largest ones being Titania, Oberon, Umbriel, Ariel, and Miranda.
96
+
97
+ Neptune (30.1 AU), though slightly smaller than Uranus, is more massive (17 M⊕) and hence more dense. It radiates more internal heat, but not as much as Jupiter or Saturn.[103] Neptune has 14 known satellites. The largest, Triton, is geologically active, with geysers of liquid nitrogen.[104] Triton is the only large satellite with a retrograde orbit. Neptune is accompanied in its orbit by several minor planets, termed Neptune trojans, that are in 1:1 resonance with it.
98
+
99
+ The centaurs are icy comet-like bodies whose orbits have semi-major axes greater than Jupiter's (5.5 AU) and less than Neptune's (30 AU). The largest known centaur, 10199 Chariklo, has a diameter of about 250 km.[105] The first centaur discovered, 2060 Chiron, has also been classified as a comet (95P) because it develops a coma just as comets do when they approach the Sun.[106]
100
+
101
+ Comets are small Solar System bodies,[f] typically only a few kilometres across, composed largely of volatile ices. They have highly eccentric orbits, generally with a perihelion within the orbits of the inner planets and an aphelion far beyond Pluto. When a comet enters the inner Solar System, its proximity to the Sun causes its icy surface to sublimate and ionise, creating a coma and often a long tail of gas and dust visible to the naked eye.
102
+
103
+ Short-period comets have orbits lasting less than two hundred years. Long-period comets have orbits lasting thousands of years. Short-period comets are thought to originate in the Kuiper belt, whereas long-period comets, such as Hale–Bopp, are thought to originate in the Oort cloud. Many comet groups, such as the Kreutz Sungrazers, formed from the breakup of a single parent.[107] Some comets with hyperbolic orbits may originate outside the Solar System, but determining their precise orbits is difficult.[108] Old comets that have had most of their volatiles driven out by solar warming are often categorised as asteroids.[109]
104
+
105
+ Beyond the orbit of Neptune lies the area of the "trans-Neptunian region", with the doughnut-shaped Kuiper belt, home of Pluto and several other dwarf planets, and an overlapping disc of scattered objects, which is tilted toward the plane of the Solar System and reaches much further out than the Kuiper belt. The entire region is still largely unexplored. It appears to consist overwhelmingly of many thousands of small worlds—the largest having a diameter only a fifth that of Earth and a mass far smaller than that of the Moon—composed mainly of rock and ice. This region is sometimes described as the "third zone of the Solar System", enclosing the inner and the outer Solar System.[110]
106
+
107
+ The Kuiper belt is a great ring of debris similar to the asteroid belt, but consisting mainly of objects composed primarily of ice.[111] It extends between 30 and 50 AU from the Sun. Though it is estimated to contain anything from dozens to thousands of dwarf planets, it is composed mainly of small Solar System bodies. Many of the larger Kuiper belt objects, such as Quaoar, Varuna, and Orcus, may prove to be dwarf planets with further data. There are estimated to be over 100,000 Kuiper belt objects with a diameter greater than 50 km, but the total mass of the Kuiper belt is thought to be only a tenth or even a hundredth the mass of Earth.[20] Many Kuiper belt objects have multiple satellites,[112] and most have orbits that take them outside the plane of the ecliptic.[113]
108
+
109
+ The Kuiper belt can be roughly divided into the "classical" belt and the resonances.[111] Resonances are orbits linked to that of Neptune (e.g. twice for every three Neptune orbits, or once for every two). The first resonance begins within the orbit of Neptune itself. The classical belt consists of objects having no resonance with Neptune, and extends from roughly 39.4 AU to 47.7 AU.[114] Members of the classical Kuiper belt are classified as cubewanos, after the first of their kind to be discovered, 15760 Albion (which previously had the provisional designation 1992 QB1), and are still in near primordial, low-eccentricity orbits.[115]
110
+
111
+ The dwarf planet Pluto (39 AU average) is the largest known object in the Kuiper belt. When discovered in 1930, it was considered to be the ninth planet; this changed in 2006 with the adoption of a formal definition of planet. Pluto has a relatively eccentric orbit inclined 17 degrees to the ecliptic plane and ranging from 29.7 AU from the Sun at perihelion (within the orbit of Neptune) to 49.5 AU at aphelion. Pluto has a 3:2 resonance with Neptune, meaning that Pluto orbits twice round the Sun for every three Neptunian orbits. Kuiper belt objects whose orbits share this resonance are called plutinos.[116]
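+ The 3:2 resonance shows up directly in the two orbital periods; a minimal check (the periods are standard rounded values, assumed here rather than quoted from the text):
+
+ NEPTUNE_PERIOD_YR = 164.8
+ PLUTO_PERIOD_YR = 247.9
+ print(PLUTO_PERIOD_YR / NEPTUNE_PERIOD_YR)            # ~1.50, close to 3/2
+ print(3 * NEPTUNE_PERIOD_YR, 2 * PLUTO_PERIOD_YR)     # ~494 yr vs ~496 yr: the configuration repeats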
112
+
113
+ Charon, the largest of Pluto's moons, is sometimes described as part of a binary system with Pluto, as the two bodies orbit a barycentre that lies above their surfaces (i.e. they appear to "orbit each other"). Beyond Charon, four much smaller moons, Styx, Nix, Kerberos, and Hydra, orbit within the system.
114
+
115
+ Makemake (45.79 AU average), although smaller than Pluto, is the largest known object in the classical Kuiper belt (that is, a Kuiper belt object not in a confirmed resonance with Neptune). Makemake is the brightest object in the Kuiper belt after Pluto. It was assigned to a dwarf-planet naming committee in 2008 under the expectation that it would prove to be a dwarf planet.[6] Its orbit is far more inclined than Pluto's, at 29°.[117]
116
+
117
+ Haumea (43.13 AU average) is in an orbit similar to Makemake, except that it is in a temporary 7:12 orbital resonance with Neptune.[118]
118
+ It was named under the same expectation that it would prove to be a dwarf planet, though subsequent observations have indicated that it may not be a dwarf planet after all.[119]
119
+
120
+ The scattered disc, which overlaps the Kuiper belt but extends out to about 200 AU, is thought to be the source of short-period comets. Scattered-disc objects are thought to have been ejected into erratic orbits by the gravitational influence of Neptune's early outward migration. Most scattered disc objects (SDOs) have perihelia within the Kuiper belt but aphelia far beyond it (some more than 150 AU from the Sun). SDOs' orbits are also highly inclined to the ecliptic plane and are often almost perpendicular to it. Some astronomers consider the scattered disc to be merely another region of the Kuiper belt and describe scattered disc objects as "scattered Kuiper belt objects".[120] Some astronomers also classify centaurs as inward-scattered Kuiper belt objects along with the outward-scattered residents of the scattered disc.[121]
121
+
122
+ Eris (68 AU average) is the largest known scattered disc object, and caused a debate about what constitutes a planet, because it is 25% more massive than Pluto[122] and about the same diameter. It is the most massive of the known dwarf planets. It has one known moon, Dysnomia. Like Pluto, its orbit is highly eccentric, with a perihelion of 38.2 AU (roughly Pluto's distance from the Sun) and an aphelion of 97.6 AU, and steeply inclined to the ecliptic plane.
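+ The "68 AU average" quoted for Eris follows from the perihelion and aphelion given in the same sentence; a short sketch of the standard ellipse relations:
+
+ # For an ellipse: a = (q + Q) / 2 and e = (Q - q) / (Q + q),
+ # where q is the perihelion distance and Q the aphelion distance.
+ q, Q = 38.2, 97.6      # Eris, in AU (values from the paragraph above)
+ a = (q + Q) / 2
+ e = (Q - q) / (Q + q)
+ print(a, round(e, 2))  # ~67.9 AU and ~0.44, a highly eccentric orbit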
123
+
124
+ The point at which the Solar System ends and interstellar space begins is not precisely defined because its outer boundaries are shaped by two separate forces: the solar wind and the Sun's gravity. The limit of the solar wind's influence is roughly four times Pluto's distance from the Sun; this heliopause, the outer boundary of the heliosphere, is considered the beginning of the interstellar medium.[63] The Sun's Hill sphere, the effective range of its gravitational dominance, is thought to extend up to a thousand times farther and encompasses the theorized Oort cloud.[123]
125
+
126
+ The heliosphere is a stellar-wind bubble, a region of space dominated by the Sun, which radiates its solar wind, a stream of charged particles, outward at roughly 400 km/s until it collides with the wind of the interstellar medium.
127
+
128
+ The collision occurs at the termination shock, which is roughly 80–100 AU from the Sun upwind of the interstellar medium and roughly 200 AU from the Sun downwind.[124] Here the wind slows dramatically, condenses and becomes more turbulent,[124] forming a great oval structure known as the heliosheath. This structure is thought to look and behave very much like a comet's tail, extending outward for a further 40 AU on the upwind side but tailing many times that distance downwind; evidence from Cassini and Interstellar Boundary Explorer spacecraft has suggested that it is forced into a bubble shape by the constraining action of the interstellar magnetic field.[125]
129
+
130
+ The outer boundary of the heliosphere, the heliopause, is the point at which the solar wind finally terminates and is the beginning of interstellar space.[63] Voyager 1 and Voyager 2 are reported to have passed the termination shock and entered the heliosheath, at 94 and 84 AU from the Sun, respectively.[126][127] Voyager 1 is reported to have crossed the heliopause in August 2012.[128]
131
+
132
+ The shape and form of the outer edge of the heliosphere are likely affected by the fluid dynamics of interactions with the interstellar medium as well as solar magnetic fields prevailing to the south; for example, it is bluntly shaped, with the northern hemisphere extending 9 AU farther than the southern hemisphere.[124] Beyond the heliopause, at around 230 AU, lies the bow shock, a plasma "wake" left by the Sun as it travels through the Milky Way.[129]
133
+
134
+ Due to a lack of data, conditions in local interstellar space are not known for certain. It is expected that NASA's Voyager spacecraft, as they pass the heliopause, will transmit valuable data on radiation levels and solar wind to Earth.[130] How well the heliosphere shields the Solar System from cosmic rays is poorly understood. A NASA-funded team has developed a concept of a "Vision Mission" dedicated to sending a probe to the heliosphere.[131][132]
135
+
136
+ 90377 Sedna (520 AU average) is a large, reddish object with a gigantic, highly elliptical orbit that takes it from about 76 AU at perihelion to 940 AU at aphelion and takes 11,400 years to complete. Mike Brown, who discovered the object in 2003, asserts that it cannot be part of the scattered disc or the Kuiper belt because its perihelion is too distant to have been affected by Neptune's migration. He and other astronomers consider it to be the first in an entirely new population, sometimes termed "distant detached objects" (DDOs), which also may include the object 2000 CR105, which has a perihelion of 45 AU, an aphelion of 415 AU, and an orbital period of 3,420 years.[133] Brown terms this population the "inner Oort cloud" because it may have formed through a similar process, although it is far closer to the Sun.[134] Sedna is very likely a dwarf planet, though its shape has yet to be determined. The second unequivocally detached object, with a perihelion farther than Sedna's at roughly 81 AU, is 2012 VP113, discovered in 2012. Its aphelion is only half that of Sedna's, at 400–500 AU.[135][136]
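+ Sedna's quoted 11,400-year period is consistent with Kepler's third law applied to the perihelion and aphelion given above; a quick check:
+
+ # Kepler's third law in solar units: P[years]**2 = a[AU]**3
+ q, Q = 76.0, 940.0     # Sedna's perihelion and aphelion in AU (from the text)
+ a = (q + Q) / 2        # semi-major axis, ~508 AU
+ print(a ** 1.5)        # orbital period ~11,450 years, matching the ~11,400 quoted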
137
+
138
+ The Oort cloud is a hypothetical spherical cloud of up to a trillion icy objects that is thought to be the source for all long-period comets and to surround the Solar System at roughly 50,000 AU (around 1 light-year (ly)), and possibly to as far as 100,000 AU (1.87 ly). It is thought to be composed of comets that were ejected from the inner Solar System by gravitational interactions with the outer planets. Oort cloud objects move very slowly, and can be perturbed by infrequent events, such as collisions, the gravitational effects of a passing star, or the galactic tide, the tidal force exerted by the Milky Way.[137][138]
139
+
140
+ Much of the Solar System is still unknown. The Sun's gravitational field is estimated to dominate the gravitational forces of surrounding stars out to about two light years (125,000 AU). Lower estimates for the radius of the Oort cloud, by contrast, do not place it farther than 50,000 AU.[139] Despite discoveries such as Sedna, the region between the Kuiper belt and the Oort cloud, an area tens of thousands of AU in radius, is still virtually unmapped. There are also ongoing studies of the region between Mercury and the Sun.[140] Objects may yet be discovered in the Solar System's uncharted regions.
141
+
142
+ Currently, the furthest known objects, such as Comet West, have aphelia around 70,000 AU from the Sun, but as the Oort cloud becomes better known, this may change.
143
+
144
+ The Solar System is located in the Milky Way, a barred spiral galaxy with a diameter of about 100,000 light-years containing more than 100 billion stars.[141] The Sun resides in one of the Milky Way's outer spiral arms, known as the Orion–Cygnus Arm or Local Spur.[142] The Sun lies between 25,000 and 28,000 light-years from the Galactic Centre,[143] and its speed within the Milky Way is about 220 km/s, so that it completes one revolution every 225–250 million years. This revolution is known as the Solar System's galactic year.[144] The solar apex, the direction of the Sun's path through interstellar space, is near the constellation Hercules in the direction of the current location of the bright star Vega.[145] The plane of the ecliptic lies at an angle of about 60° to the galactic plane.[i]
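+ The galactic year quoted above follows from the Sun's distance from the Galactic Centre and its orbital speed; a rough sketch, assuming a circular orbit, the midpoint of the quoted distance range, and a light-year of 9.461×10¹² km:
+
+ import math
+
+ LY_KM = 9.461e12
+ R_km = 26_500 * LY_KM          # ~26,500 ly from the Galactic Centre (midpoint of 25,000-28,000)
+ v_km_s = 220                   # orbital speed from the text, km/s
+ seconds = 2 * math.pi * R_km / v_km_s
+ print(seconds / (3600 * 24 * 365.25) / 1e6)   # ~227 million years, within the quoted 225–250 Myr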
145
+
146
+ The Solar System's location in the Milky Way is a factor in the evolutionary history of life on Earth. Its orbit is close to circular, and orbits near the Sun are at roughly the same speed as that of the spiral arms.[147][148] Therefore, the Sun passes through arms only rarely. Because spiral arms are home to a far larger concentration of supernovae, gravitational instabilities, and radiation that could disrupt the Solar System, this has given Earth long periods of stability for life to evolve.[147] The Solar System also lies well outside the star-crowded environs of the galactic centre. Near the centre, gravitational tugs from nearby stars could perturb bodies in the Oort cloud and send many comets into the inner Solar System, producing collisions with potentially catastrophic implications for life on Earth. The intense radiation of the galactic centre could also interfere with the development of complex life.[147] Even at the Solar System's current location, some scientists have speculated that recent supernovae may have adversely affected life in the last 35,000 years, by flinging pieces of expelled stellar core towards the Sun, as radioactive dust grains and larger, comet-like bodies.[149]
147
+
148
+ The Solar System is in the Local Interstellar Cloud or Local Fluff. It is thought to be near the neighbouring G-Cloud, but it is not known whether the Solar System is embedded in the Local Interstellar Cloud or in the region where the Local Interstellar Cloud and the G-Cloud are interacting.[150][151] The Local Interstellar Cloud is an area of denser cloud in an otherwise sparse region known as the Local Bubble, an hourglass-shaped cavity in the interstellar medium roughly 300 light-years (ly) across. The bubble is suffused with high-temperature plasma, which suggests that it is the product of several recent supernovae.[152]
149
+
150
+ There are relatively few stars within ten light-years of the Sun. The closest is the triple star system Alpha Centauri, which is about 4.4 light-years away. Alpha Centauri A and B are a closely tied pair of Sun-like stars, whereas the small red dwarf, Proxima Centauri, orbits the pair at a distance of 0.2 light-year. In 2016, a potentially habitable exoplanet was confirmed to be orbiting Proxima Centauri, called Proxima Centauri b, the closest confirmed exoplanet to the Sun.[153] The stars next closest to the Sun are the red dwarfs Barnard's Star (at 5.9 ly), Wolf 359 (7.8 ly), and Lalande 21185 (8.3 ly).
151
+
152
+ The largest nearby star is Sirius, a bright main-sequence star roughly 8.6 light-years away with roughly twice the Sun's mass; it is orbited by a white dwarf, Sirius B. The nearest brown dwarfs are those of the binary Luhman 16 system at 6.6 light-years. Other systems within ten light-years are the binary red-dwarf system Luyten 726-8 (8.7 ly) and the solitary red dwarf Ross 154 (9.7 ly).[154] The closest solitary Sun-like star to the Solar System is Tau Ceti at 11.9 light-years. It has roughly 80% of the Sun's mass but only 60% of its luminosity.[155] The closest known free-floating planetary-mass object to the Sun is WISE 0855−0714,[156] an object with a mass less than 10 Jupiter masses roughly 7 light-years away.
153
+
154
+ Compared to many other planetary systems, the Solar System stands out in lacking planets interior to the orbit of Mercury.[157][158] The known Solar System also lacks super-Earths (Planet Nine could be a super-Earth beyond the known Solar System).[157] Uncommonly, it has only small rocky planets and large gas giants; in other systems, planets of intermediate size, both rocky and gaseous, are typical, so those systems show no "gap" like the one between the size of Earth and that of Neptune (which has a radius 3.8 times as large). Also, many of these super-Earths orbit closer to their stars than Mercury does to the Sun.[157] This led to the hypothesis that all planetary systems start with many close-in planets, and that typically a sequence of collisions consolidates their mass into a few larger planets, but that in the case of the Solar System the collisions caused their destruction and ejection.[159][160]
155
+
156
+ The orbits of Solar System planets are nearly circular. Compared to other systems, they have smaller orbital eccentricity.[157] Although there are attempts to explain it partly with a bias in the radial-velocity detection method and partly with long-term interactions among a relatively large number of planets, the exact causes remain undetermined.[157][161]
157
+
158
+ This section is a sampling of Solar System bodies, selected for size and quality of imagery, and sorted by volume. Some objects larger than ones included here, notably Eris, are omitted because they have not been imaged in high quality.
159
+
160
+ Venus, Earth (Pale Blue Dot), Jupiter, Saturn, Uranus, Neptune (13 September 1996).
161
+
162
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
163
+
en/5583.html.txt ADDED
@@ -0,0 +1,163 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ The Solar System[b] is the gravitationally bound system of the Sun and the objects that orbit it, either directly or indirectly.[c] Of the objects that orbit the Sun directly, the largest are the eight planets,[d] with the remainder being smaller objects, the dwarf planets and small Solar System bodies. Of the objects that orbit the Sun indirectly—the moons—two are larger than the smallest planet, Mercury.[e]
4
+
5
+ The Solar System formed 4.6 billion years ago from the gravitational collapse of a giant interstellar molecular cloud. The vast majority of the system's mass is in the Sun, with the majority of the remaining mass contained in Jupiter. The four smaller inner planets, Mercury, Venus, Earth and Mars, are terrestrial planets, being primarily composed of rock and metal. The four outer planets are giant planets, being substantially more massive than the terrestrials. The two largest planets, Jupiter and Saturn, are gas giants, being composed mainly of hydrogen and helium; the two outermost planets, Uranus and Neptune, are ice giants, being composed mostly of substances with relatively high melting points compared with hydrogen and helium, called volatiles, such as water, ammonia and methane. All eight planets have almost circular orbits that lie within a nearly flat disc called the ecliptic.
6
+
7
+ The Solar System also contains smaller objects.[f] The asteroid belt, which lies between the orbits of Mars and Jupiter, mostly contains objects composed, like the terrestrial planets, of rock and metal. Beyond Neptune's orbit lie the Kuiper belt and scattered disc, which are populations of trans-Neptunian objects composed mostly of ices, and beyond them a newly discovered population of sednoids. Within these populations, some objects are large enough to have rounded under their own gravity, though there is considerable debate as to how many there will prove to be.[9][10] Such objects are categorized as dwarf planets. The only certain dwarf planet is Pluto; another trans-Neptunian object, Eris, is expected to be one, and the asteroid Ceres is at least close to being one.[f] In addition to these two regions, various other small-body populations, including comets, centaurs and interplanetary dust clouds, freely travel between regions. Six of the planets, the six largest possible dwarf planets, and many of the smaller bodies are orbited by natural satellites, usually termed "moons" after the Moon. Each of the outer planets is encircled by planetary rings of dust and other small objects.
8
+
9
+ The solar wind, a stream of charged particles flowing outwards from the Sun, creates a bubble-like region in the interstellar medium known as the heliosphere. The heliopause is the point at which pressure from the solar wind is equal to the opposing pressure of the interstellar medium; it extends out to the edge of the scattered disc. The Oort cloud, which is thought to be the source for long-period comets, may also exist at a distance roughly a thousand times further than the heliosphere. The Solar System is located in the Orion Arm, 26,000 light-years from the center of the Milky Way galaxy.
10
+
11
+ For most of history, humanity did not recognize or understand the concept of the Solar System. Most people up to the Late Middle Ages–Renaissance believed Earth to be stationary at the centre of the universe and categorically different from the divine or ethereal objects that moved through the sky. Although the Greek philosopher Aristarchus of Samos had speculated on a heliocentric reordering of the cosmos, Nicolaus Copernicus was the first to develop a mathematically predictive heliocentric system.[11][12]
12
+
13
+ In the 17th century, Galileo discovered that the Sun was marked with sunspots, and that Jupiter had four satellites in orbit around it.[13] Christiaan Huygens followed on from Galileo's discoveries by discovering Saturn's moon Titan and the shape of the rings of Saturn.[14] Edmond Halley realised in 1705 that repeated sightings of a comet were recording the same object, returning regularly once every 75–76 years. This was the first evidence that anything other than the planets orbited the Sun.[15] Around this time (1704), the term "Solar System" first appeared in English.[16] In 1838, Friedrich Bessel successfully measured a stellar parallax, an apparent shift in the position of a star created by Earth's motion around the Sun, providing the first direct, experimental proof of heliocentrism.[17] Improvements in observational astronomy and the use of unmanned spacecraft have since enabled the detailed investigation of other bodies orbiting the Sun.
14
+
15
+ The principal component of the Solar System is the Sun, a G2 main-sequence star that contains 99.86% of the system's known mass and dominates it gravitationally.[18] The Sun's four largest orbiting bodies, the giant planets, account for 99% of the remaining mass, with Jupiter and Saturn together comprising more than 90%. The remaining objects of the Solar System (including the four terrestrial planets, the dwarf planets, moons, asteroids, and comets) together comprise less than 0.002% of the Solar System's total mass.[g]
16
+
17
+ Most large objects in orbit around the Sun lie near the plane of Earth's orbit, known as the ecliptic. The planets are very close to the ecliptic, whereas comets and Kuiper belt objects are frequently at significantly greater angles to it.[22][23] As a result of the formation of the Solar System, planets and most other objects orbit the Sun in the same direction that the Sun is rotating (counter-clockwise, as viewed from above Earth's north pole).[24] There are exceptions, such as Halley's Comet. Most of the larger moons orbit their planets in this prograde direction (with Triton being the largest retrograde exception) and most larger objects rotate in the same direction (with Venus being a notable retrograde exception).
18
+
19
+ The overall structure of the charted regions of the Solar System consists of the Sun, four relatively small inner planets surrounded by a belt of mostly rocky asteroids, and four giant planets surrounded by the Kuiper belt of mostly icy objects. Astronomers sometimes informally divide this structure into separate regions. The inner Solar System includes the four terrestrial planets and the asteroid belt. The outer Solar System is beyond the asteroids, including the four giant planets.[25] Since the discovery of the Kuiper belt, the outermost parts of the Solar System are considered a distinct region consisting of the objects beyond Neptune.[26]
20
+
21
+ Most of the planets in the Solar System have secondary systems of their own, being orbited by planetary objects called natural satellites, or moons (two of which, Titan and Ganymede, are larger than the planet Mercury), and, in the case of the four giant planets, by planetary rings, thin bands of tiny particles that orbit them in unison. Most of the largest natural satellites are in synchronous rotation, with one face permanently turned toward their parent.
22
+
23
+ Kepler's laws of planetary motion describe the orbits of objects about the Sun. Following Kepler's laws, each object travels along an ellipse with the Sun at one focus. Objects closer to the Sun (with smaller semi-major axes) travel more quickly because they are more affected by the Sun's gravity. On an elliptical orbit, a body's distance from the Sun varies over the course of its year. A body's closest approach to the Sun is called its perihelion, whereas its most distant point from the Sun is called its aphelion. The orbits of the planets are nearly circular, but many comets, asteroids, and Kuiper belt objects follow highly elliptical orbits. The positions of the bodies in the Solar System can be predicted using numerical models.
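+ For the near-circular planetary orbits, Kepler's third law reduces to a one-line relation between distance and period; a minimal sketch (the semi-major axes are the rounded values used elsewhere in this article):
+
+ # Kepler's third law in solar units: P[years] = a[AU] ** 1.5
+ def orbital_period_years(a_au):
+     return a_au ** 1.5
+
+ for name, a in [("Mercury", 0.39), ("Earth", 1.0), ("Jupiter", 5.2), ("Neptune", 30.1)]:
+     print(name, round(orbital_period_years(a), 1))   # 0.2, 1.0, 11.9, 165.1 years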
24
+
25
+ Although the Sun dominates the system by mass, it accounts for only about 2% of the angular momentum.[27][28] The planets, dominated by Jupiter, account for most of the rest of the angular momentum due to the combination of their mass, orbit, and distance from the Sun, with a possibly significant contribution from comets.[27]
26
+
27
+ The Sun, which comprises nearly all the matter in the Solar System, is composed of roughly 98% hydrogen and helium.[29] Jupiter and Saturn, which comprise nearly all the remaining matter, are also primarily composed of hydrogen and helium.[30][31] A composition gradient exists in the Solar System, created by heat and light pressure from the Sun; those objects closer to the Sun, which are more affected by heat and light pressure, are composed of elements with high melting points. Objects farther from the Sun are composed largely of materials with lower melting points.[32] The boundary in the Solar System beyond which those volatile substances could condense is known as the frost line, and it lies at roughly 5 AU from the Sun.[4]
28
+
29
+ The objects of the inner Solar System are composed mostly of rock,[33] the collective name for compounds with high melting points, such as silicates, iron or nickel, that remained solid under almost all conditions in the protoplanetary nebula.[34] Jupiter and Saturn are composed mainly of gases, the astronomical term for materials with extremely low melting points and high vapour pressure, such as hydrogen, helium, and neon, which were always in the gaseous phase in the nebula.[34] Ices, like water, methane, ammonia, hydrogen sulfide, and carbon dioxide,[33] have melting points up to a few hundred kelvins.[34] They can be found as ices, liquids, or gases in various places in the Solar System, whereas in the nebula they were either in the solid or gaseous phase.[34] Icy substances comprise the majority of the satellites of the giant planets, as well as most of Uranus and Neptune (the so-called "ice giants") and the numerous small objects that lie beyond Neptune's orbit.[33][35] Together, gases and ices are referred to as volatiles.[36]
30
+
31
+ The distance from Earth to the Sun is 1 astronomical unit [AU] (150,000,000 km; 93,000,000 mi). For comparison, the radius of the Sun is 0.0047 AU (700,000 km). Thus, the Sun occupies 0.00001% (10⁻⁵ %) of the volume of a sphere with a radius the size of Earth's orbit, whereas Earth's volume is roughly one millionth (10⁻⁶) that of the Sun. Jupiter, the largest planet, is 5.2 astronomical units (780,000,000 km) from the Sun and has a radius of 71,000 km (0.00047 AU), whereas the most distant planet, Neptune, is 30 AU (4.5×10⁹ km) from the Sun.
32
+
33
+ With a few exceptions, the farther a planet or belt is from the Sun, the larger the distance between its orbit and the orbit of the next nearer object to the Sun. For example, Venus is approximately 0.33 AU farther out from the Sun than Mercury, whereas Saturn is 4.3 AU out from Jupiter, and Neptune lies 10.5 AU out from Uranus. Attempts have been made to determine a relationship between these orbital distances (for example, the Titius–Bode law),[37] but no such theory has been accepted. The images at the beginning of this section show the orbits of the various constituents of the Solar System on different scales.
34
+
35
+ Some Solar System models attempt to convey the relative scales involved in the Solar System on human terms. Some are small in scale (and may be mechanical—called orreries)—whereas others extend across cities or regional areas.[38] The largest such scale model, the Sweden Solar System, uses the 110-metre (361 ft) Ericsson Globe in Stockholm as its substitute Sun, and, following the scale, Jupiter is a 7.5-metre (25-foot) sphere at Stockholm Arlanda Airport, 40 km (25 mi) away, whereas the farthest current object, Sedna, is a 10 cm (4 in) sphere in Luleå, 912 km (567 mi) away.[39][40]
36
+
37
+ If the Sun–Neptune distance is scaled to 100 metres, then the Sun would be about 3 cm in diameter (roughly two-thirds the diameter of a golf ball), the giant planets would be all smaller than about 3 mm, and Earth's diameter along with that of the other terrestrial planets would be smaller than a flea (0.3 mm) at this scale.[41]
38
+
39
+ Distances of selected bodies of the Solar System from the Sun. The left and right edges of each bar correspond to the perihelion and aphelion of the body, respectively, hence long bars denote high orbital eccentricity. The radius of the Sun is 0.7 million km, and the radius of Jupiter (the largest planet) is 0.07 million km, both too small to resolve on this image.
40
+
41
+ The Solar System formed 4.568 billion years ago from the gravitational collapse of a region within a large molecular cloud.[h] This initial cloud was likely several light-years across and probably birthed several stars.[43] As is typical of molecular clouds, this one consisted mostly of hydrogen, with some helium, and small amounts of heavier elements fused by previous generations of stars. As the region that would become the Solar System, known as the pre-solar nebula,[44] collapsed, conservation of angular momentum caused it to rotate faster. The centre, where most of the mass collected, became increasingly hotter than the surrounding disc.[43] As the contracting nebula rotated faster, it began to flatten into a protoplanetary disc with a diameter of roughly 200 AU[43] and a hot, dense protostar at the centre.[45][46] The planets formed by accretion from this disc,[47] in which dust and gas gravitationally attracted each other, coalescing to form ever larger bodies. Hundreds of protoplanets may have existed in the early Solar System, but they either merged or were destroyed, leaving the planets, dwarf planets, and leftover minor bodies.
42
+
43
+ Due to their higher boiling points, only metals and silicates could exist in solid form in the warm inner Solar System close to the Sun, and these would eventually form the rocky planets of Mercury, Venus, Earth, and Mars. Because metallic elements only comprised a very small fraction of the solar nebula, the terrestrial planets could not grow very large. The giant planets (Jupiter, Saturn, Uranus, and Neptune) formed further out, beyond the frost line, the point between the orbits of Mars and Jupiter where material is cool enough for volatile icy compounds to remain solid. The ices that formed these planets were more plentiful than the metals and silicates that formed the terrestrial inner planets, allowing them to grow massive enough to capture large atmospheres of hydrogen and helium, the lightest and most abundant elements. Leftover debris that never became planets congregated in regions such as the asteroid belt, Kuiper belt, and Oort cloud. The Nice model is an explanation for the creation of these regions and how the outer planets could have formed in different positions and migrated to their current orbits through various gravitational interactions.
44
+
45
+ Within 50 million years, the pressure and density of hydrogen in the centre of the protostar became great enough for it to begin thermonuclear fusion.[49] The temperature, reaction rate, pressure, and density increased until hydrostatic equilibrium was achieved: the thermal pressure equalled the force of gravity. At this point, the Sun became a main-sequence star.[50] The main-sequence phase, from beginning to end, will last about 10 billion years for the Sun compared to around two billion years for all other phases of the Sun's pre-remnant life combined.[51] Solar wind from the Sun created the heliosphere and swept away the remaining gas and dust from the protoplanetary disc into interstellar space, ending the planetary formation process. The Sun is growing brighter; early in its main-sequence life its brightness was 70% of what it is today.[52]
46
+
47
+ The Solar System will remain roughly as we know it today until the hydrogen in the core of the Sun has been entirely converted to helium, which will occur roughly 5 billion years from now. This will mark the end of the Sun's main-sequence life. At this time, the core of the Sun will contract with hydrogen fusion occurring along a shell surrounding the inert helium, and the energy output will be much greater than at present. The outer layers of the Sun will expand to roughly 260 times its current diameter, and the Sun will become a red giant. Because of its vastly increased surface area, the surface of the Sun will be considerably cooler (2,600 K at its coolest) than it is on the main sequence.[51] The expanding Sun is expected to vaporize Mercury and render Earth uninhabitable. Eventually, the core will be hot enough for helium fusion; the Sun will burn helium for a fraction of the time it burned hydrogen in the core. The Sun is not massive enough to commence the fusion of heavier elements, and nuclear reactions in the core will dwindle. Its outer layers will move away into space, leaving a white dwarf, an extraordinarily dense object, half the original mass of the Sun but only the size of Earth.[53] The ejected outer layers will form what is known as a planetary nebula, returning some of the material that formed the Sun—but now enriched with heavier elements like carbon—to the interstellar medium.
48
+
49
+ The Sun is the Solar System's star and by far its most massive component. Its large mass (332,900 Earth masses),[54] which comprises 99.86% of all the mass in the Solar System,[55] produces temperatures and densities in its core high enough to sustain nuclear fusion of hydrogen into helium, making it a main-sequence star.[56] This releases an enormous amount of energy, mostly radiated into space as electromagnetic radiation peaking in visible light.[57]
50
+
51
+ The Sun is a G2-type main-sequence star. Hotter main-sequence stars are more luminous. The Sun's temperature is intermediate between that of the hottest stars and that of the coolest stars. Stars brighter and hotter than the Sun are rare, whereas substantially dimmer and cooler stars, known as red dwarfs, make up 85% of the stars in the Milky Way.[58][59]
52
+
53
+ The Sun is a population I star; it has a higher abundance of elements heavier than hydrogen and helium ("metals" in astronomical parlance) than the older population II stars.[60] Elements heavier than hydrogen and helium were formed in the cores of ancient and exploding stars, so the first generation of stars had to die before the Universe could be enriched with these atoms. The oldest stars contain few metals, whereas stars born later have more. This high metallicity is thought to have been crucial to the Sun's development of a planetary system because the planets form from the accretion of "metals".[61]
54
+
55
+ The vast majority of the Solar System consists of a near-vacuum known as the interplanetary medium. Along with light, the Sun radiates a continuous stream of charged particles (a plasma) known as the solar wind. This stream of particles spreads outwards at roughly 1.5 million kilometres per hour,[62] creating a tenuous atmosphere that permeates the interplanetary medium out to at least 100 AU (see § Heliosphere).[63] Activity on the Sun's surface, such as solar flares and coronal mass ejections, disturbs the heliosphere, creating space weather and causing geomagnetic storms.[64] The largest structure within the heliosphere is the heliospheric current sheet, a spiral form created by the actions of the Sun's rotating magnetic field on the interplanetary medium.[65][66]
56
+
57
+ Earth's magnetic field stops its atmosphere from being stripped away by the solar wind.[67] Venus and Mars do not have magnetic fields, and as a result the solar wind is causing their atmospheres to gradually bleed away into space.[68] Coronal mass ejections and similar events blow a magnetic field and huge quantities of material from the surface of the Sun. The interaction of this magnetic field and material with Earth's magnetic field funnels charged particles into Earth's upper atmosphere, where its interactions create aurorae seen near the magnetic poles.
58
+
59
+ The heliosphere and planetary magnetic fields (for those planets that have them) partially shield the Solar System from high-energy interstellar particles called cosmic rays. The density of cosmic rays in the interstellar medium and the strength of the Sun's magnetic field change on very long timescales, so the level of cosmic-ray penetration in the Solar System varies, though by how much is unknown.[69]
60
+
61
+ The interplanetary medium is home to at least two disc-like regions of cosmic dust. The first, the zodiacal dust cloud, lies in the inner Solar System and causes the zodiacal light. It was likely formed by collisions within the asteroid belt brought on by gravitational interactions with the planets.[70] The second dust cloud extends from about 10 AU to about 40 AU, and was probably created by similar collisions within the Kuiper belt.[71][72]
62
+
63
+ The inner Solar System is the region comprising the terrestrial planets and the asteroid belt.[73] Composed mainly of silicates and metals, the objects of the inner Solar System are relatively close to the Sun; the radius of this entire region is less than the distance between the orbits of Jupiter and Saturn. This region is also within the frost line, which is a little less than 5 AU (about 700 million km) from the Sun.[74]
64
+
65
+ The four terrestrial or inner planets have dense, rocky compositions, few or no moons, and no ring systems. They are composed largely of refractory minerals, such as the silicates—which form their crusts and mantles—and metals, such as iron and nickel, which form their cores. Three of the four inner planets (Venus, Earth and Mars) have atmospheres substantial enough to generate weather; all have impact craters and tectonic surface features, such as rift valleys and volcanoes. The term inner planet should not be confused with inferior planet, which designates those planets that are closer to the Sun than Earth is (i.e. Mercury and Venus).
66
+
67
+ Mercury (0.4 AU from the Sun) is the closest planet to the Sun and, on average, the closest to all seven other planets.[75][76] The smallest planet in the Solar System (0.055 M⊕), Mercury has no natural satellites. Besides impact craters, its only known geological features are lobed ridges or rupes that were probably produced by a period of contraction early in its history.[77] Mercury's very tenuous atmosphere consists of atoms blasted off its surface by the solar wind.[78] Its relatively large iron core and thin mantle have not yet been adequately explained. Hypotheses include that its outer layers were stripped off by a giant impact, or that it was prevented from fully accreting by the young Sun's energy.[79][80]
68
+
69
+ Venus (0.7 AU from the Sun) is close in size to Earth (0.815 M⊕) and, like Earth, has a thick silicate mantle around an iron core, a substantial atmosphere, and evidence of internal geological activity. It is much drier than Earth, and its atmosphere is ninety times as dense. Venus has no natural satellites. It is the hottest planet, with surface temperatures over 400 °C (752 °F), most likely due to the amount of greenhouse gases in the atmosphere.[81] No definitive evidence of current geological activity has been detected on Venus, but it has no magnetic field that would prevent depletion of its substantial atmosphere, which suggests that its atmosphere is being replenished by volcanic eruptions.[82]
70
+
71
+ Earth (1 AU from the Sun) is the largest and densest of the inner planets, the only one known to have current geological activity, and the only place where life is known to exist.[83] Its liquid hydrosphere is unique among the terrestrial planets, and it is the only planet where plate tectonics has been observed. Earth's atmosphere is radically different from those of the other planets, having been altered by the presence of life to contain 21% free oxygen.[84] It has one natural satellite, the Moon, the only large satellite of a terrestrial planet in the Solar System.
72
+
73
+ Mars (1.5 AU from the Sun) is smaller than Earth and Venus (0.107 M⊕). It has an atmosphere of mostly carbon dioxide with a surface pressure of 6.1 millibars (roughly 0.6% of that of Earth).[85] Its surface, peppered with vast volcanoes, such as Olympus Mons, and rift valleys, such as Valles Marineris, shows geological activity that may have persisted until as recently as 2 million years ago.[86] Its red colour comes from iron oxide (rust) in its soil.[87] Mars has two tiny natural satellites (Deimos and Phobos) thought to be either captured asteroids,[88] or ejected debris from a massive impact early in Mars's history.[89]
74
+
75
+ Asteroids, except for the largest, Ceres, are classified as small Solar System bodies[f] and are composed mainly of refractory rocky and metallic minerals, with some ice.[90][91] They range from a few metres to hundreds of kilometres in size. Asteroids smaller than one metre are usually called meteoroids and micrometeoroids (grain-sized), depending on different, somewhat arbitrary definitions.
76
+
77
+ The asteroid belt occupies the orbit between Mars and Jupiter, between 2.3 and 3.3 AU from the Sun. It is thought to be remnants from the Solar System's formation that failed to coalesce because of the gravitational interference of Jupiter.[92] The asteroid belt contains tens of thousands, possibly millions, of objects over one kilometre in diameter.[93] Despite this, the total mass of the asteroid belt is unlikely to be more than a thousandth of that of Earth.[21] The asteroid belt is very sparsely populated; spacecraft routinely pass through without incident.
78
+
79
+ Ceres (2.77 AU) is the largest asteroid, a protoplanet, and a dwarf planet.[f] It has a diameter of slightly under 1000 km, and a mass large enough for its own gravity to pull it into a spherical shape. Ceres was considered a planet when it was discovered in 1801, and was reclassified to asteroid in the 1850s as further observations revealed additional asteroids.[94] It was classified as a dwarf planet in 2006 when the definition of a planet was created.
80
+
81
+ Asteroids in the asteroid belt are divided into asteroid groups and families based on their orbital characteristics. Asteroid moons are asteroids that orbit larger asteroids. They are not as clearly distinguished as planetary moons, sometimes being almost as large as their partners. The asteroid belt also contains main-belt comets, which may have been the source of Earth's water.[95]
82
+
83
+ Jupiter trojans are located in either of Jupiter's L4 or L5 points (gravitationally stable regions leading and trailing a planet in its orbit); the term trojan is also used for small bodies in any other planetary or satellite Lagrange point. Hilda asteroids are in a 2:3 resonance with Jupiter; that is, they go around the Sun three times for every two Jupiter orbits.[96]
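For illustration, the 2:3 resonance described above can be converted into an orbital period; this short sketch assumes Jupiter's sidereal period of about 11.86 years, a standard value not given in the text:

    jupiter_period_yr = 11.86                    # assumed value for Jupiter's orbital period
    hilda_period_yr = jupiter_period_yr * 2 / 3  # three Hilda orbits per two Jupiter orbits
    print(f"Hilda period ≈ {hilda_period_yr:.1f} years")   # about 7.9 years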
84
+
85
+ The inner Solar System also contains near-Earth asteroids, many of which cross the orbits of the inner planets.[97] Some of them are potentially hazardous objects.
86
+
87
+ The outer region of the Solar System is home to the giant planets and their large moons. The centaurs and many short-period comets also orbit in this region. Due to their greater distance from the Sun, the solid objects in the outer Solar System contain a higher proportion of volatiles, such as water, ammonia, and methane than those of the inner Solar System because the lower temperatures allow these compounds to remain solid.
88
+
89
+ The four outer planets, or giant planets (sometimes called Jovian planets), collectively make up 99% of the mass known to orbit the Sun.[g] Jupiter and Saturn are together more than 400 times the mass of Earth and consist overwhelmingly of hydrogen and helium. Uranus and Neptune are far less massive—less than 20 Earth masses (M⊕) each—and are composed primarily of ices. For these reasons, some astronomers suggest they belong in their own category, ice giants.[98] All four giant planets have rings, although only Saturn's ring system is easily observed from Earth. The term superior planet designates planets outside Earth's orbit and thus includes both the outer planets and Mars.
90
+
91
+ Jupiter (5.2 AU), at 318 M⊕, is 2.5 times the mass of all the other planets put together. It is composed largely of hydrogen and helium. Jupiter's strong internal heat creates semi-permanent features in its atmosphere, such as cloud bands and the Great Red Spot. Jupiter has 79 known satellites. The four largest, Ganymede, Callisto, Io, and Europa, show similarities to the terrestrial planets, such as volcanism and internal heating.[99] Ganymede, the largest satellite in the Solar System, is larger than Mercury.
92
+
93
+ Saturn (9.5 AU), distinguished by its extensive ring system, has several similarities to Jupiter, such as its atmospheric composition and magnetosphere. Although Saturn has 60% of Jupiter's volume, it is less than a third as massive, at 95 M⊕. Saturn is the only planet of the Solar System that is less dense than water.[100] The rings of Saturn are made up of small ice and rock particles. Saturn has 82 confirmed satellites composed largely of ice. Two of these, Titan and Enceladus, show signs of geological activity.[101] Titan, the second-largest moon in the Solar System, is larger than Mercury and the only satellite in the Solar System with a substantial atmosphere.
94
+
95
+ Uranus (19.2 AU), at 14 M⊕, is the lightest of the outer planets. Uniquely among the planets, it orbits the Sun on its side; its axial tilt is over ninety degrees to the ecliptic. It has a much colder core than the other giant planets and radiates very little heat into space.[102] Uranus has 27 known satellites, the largest ones being Titania, Oberon, Umbriel, Ariel, and Miranda.
96
+
97
+ Neptune (30.1 AU), though slightly smaller than Uranus, is more massive (17 M⊕) and hence more dense. It radiates more internal heat, but not as much as Jupiter or Saturn.[103] Neptune has 14 known satellites. The largest, Triton, is geologically active, with geysers of liquid nitrogen.[104] Triton is the only large satellite with a retrograde orbit. Neptune is accompanied in its orbit by several minor planets, termed Neptune trojans, that are in 1:1 resonance with it.
98
+
99
+ The centaurs are icy comet-like bodies whose orbits have semi-major axes greater than Jupiter's (5.5 AU) and less than Neptune's (30 AU). The largest known centaur, 10199 Chariklo, has a diameter of about 250 km.[105] The first centaur discovered, 2060 Chiron, has also been classified as a comet (95P) because it develops a coma just as comets do when they approach the Sun.[106]
100
+
101
+ Comets are small Solar System bodies,[f] typically only a few kilometres across, composed largely of volatile ices. They have highly eccentric orbits, generally a perihelion within the orbits of the inner planets and an aphelion far beyond Pluto. When a comet enters the inner Solar System, its proximity to the Sun causes its icy surface to sublimate and ionise, creating a coma: a long tail of gas and dust often visible to the naked eye.
102
+
103
+ Short-period comets have orbits lasting less than two hundred years. Long-period comets have orbits lasting thousands of years. Short-period comets are thought to originate in the Kuiper belt, whereas long-period comets, such as Hale–Bopp, are thought to originate in the Oort cloud. Many comet groups, such as the Kreutz Sungrazers, formed from the breakup of a single parent.[107] Some comets with hyperbolic orbits may originate outside the Solar System, but determining their precise orbits is difficult.[108] Old comets that have had most of their volatiles driven out by solar warming are often categorised as asteroids.[109]
104
+
105
+ Beyond the orbit of Neptune lies the area of the "trans-Neptunian region", with the doughnut-shaped Kuiper belt, home of Pluto and several other dwarf planets, and an overlapping disc of scattered objects, which is tilted toward the plane of the Solar System and reaches much further out than the Kuiper belt. The entire region is still largely unexplored. It appears to consist overwhelmingly of many thousands of small worlds—the largest having a diameter only a fifth that of Earth and a mass far smaller than that of the Moon—composed mainly of rock and ice. This region is sometimes described as the "third zone of the Solar System", enclosing the inner and the outer Solar System.[110]
106
+
107
+ The Kuiper belt is a great ring of debris similar to the asteroid belt, but consisting mainly of objects composed primarily of ice.[111] It extends between 30 and 50 AU from the Sun. Though it is estimated to contain anything from dozens to thousands of dwarf planets, it is composed mainly of small Solar System bodies. Many of the larger Kuiper belt objects, such as Quaoar, Varuna, and Orcus, may prove to be dwarf planets with further data. There are estimated to be over 100,000 Kuiper belt objects with a diameter greater than 50 km, but the total mass of the Kuiper belt is thought to be only a tenth or even a hundredth the mass of Earth.[20] Many Kuiper belt objects have multiple satellites,[112] and most have orbits that take them outside the plane of the ecliptic.[113]
108
+
109
+ The Kuiper belt can be roughly divided into the "classical" belt and the resonances.[111] Resonances are orbits linked to that of Neptune (e.g. twice for every three Neptune orbits, or once for every two). The first resonance begins within the orbit of Neptune itself. The classical belt consists of objects having no resonance with Neptune, and extends from roughly 39.4 AU to 47.7 AU.[114] Members of the classical Kuiper belt are classified as cubewanos, after the first of their kind to be discovered, 15760 Albion (which previously had the provisional designation 1992 QB1), and are still in near primordial, low-eccentricity orbits.[115]
110
+
111
+ The dwarf planet Pluto (39 AU average) is the largest known object in the Kuiper belt. When discovered in 1930, it was considered to be the ninth planet; this changed in 2006 with the adoption of a formal definition of planet. Pluto has a relatively eccentric orbit inclined 17 degrees to the ecliptic plane and ranging from 29.7 AU from the Sun at perihelion (within the orbit of Neptune) to 49.5 AU at aphelion. Pluto has a 3:2 resonance with Neptune, meaning that Pluto orbits twice round the Sun for every three Neptunian orbits. Kuiper belt objects whose orbits share this resonance are called plutinos.[116]
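The same reasoning used for other resonances applies to Pluto's 3:2 resonance with Neptune; assuming Neptune's orbital period of roughly 164.8 years (not stated in the text), Pluto's period follows directly:

    neptune_period_yr = 164.8                    # assumed value for Neptune's orbital period
    pluto_period_yr = neptune_period_yr * 3 / 2  # two Pluto orbits per three Neptune orbits
    print(f"Pluto period ≈ {pluto_period_yr:.0f} years")   # about 247 years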
112
+
113
+ Charon, the largest of Pluto's moons, is sometimes described as part of a binary system with Pluto, as the two bodies orbit a barycentre located above their surfaces (i.e. they appear to "orbit each other"). Beyond Charon, four much smaller moons, Styx, Nix, Kerberos, and Hydra, orbit within the system.
114
+
115
+ Makemake (45.79 AU average), although smaller than Pluto, is the largest known object in the classical Kuiper belt (that is, a Kuiper belt object not in a confirmed resonance with Neptune). Makemake is the brightest object in the Kuiper belt after Pluto. In 2008, it was given its name under the expectation that it would prove to be a dwarf planet.[6] Its orbit is far more inclined than Pluto's, at 29°.[117]
116
+
117
+ Haumea (43.13 AU average) is in an orbit similar to Makemake, except that it is in a temporary 7:12 orbital resonance with Neptune.[118]
118
+ It was named under the same expectation that it would prove to be a dwarf planet, though subsequent observations have indicated that it may not be a dwarf planet after all.[119]
119
+
120
+ The scattered disc, which overlaps the Kuiper belt but extends out to about 200 AU, is thought to be the source of short-period comets. Scattered-disc objects are thought to have been ejected into erratic orbits by the gravitational influence of Neptune's early outward migration. Most scattered disc objects (SDOs) have perihelia within the Kuiper belt but aphelia far beyond it (some more than 150 AU from the Sun). SDOs' orbits are also highly inclined to the ecliptic plane and are often almost perpendicular to it. Some astronomers consider the scattered disc to be merely another region of the Kuiper belt and describe scattered disc objects as "scattered Kuiper belt objects".[120] Some astronomers also classify centaurs as inward-scattered Kuiper belt objects along with the outward-scattered residents of the scattered disc.[121]
121
+
122
+ Eris (68 AU average) is the largest known scattered disc object, and caused a debate about what constitutes a planet, because it is 25% more massive than Pluto[122] and about the same diameter. It is the most massive of the known dwarf planets. It has one known moon, Dysnomia. Like Pluto, its orbit is highly eccentric, with a perihelion of 38.2 AU (roughly Pluto's distance from the Sun) and an aphelion of 97.6 AU, and steeply inclined to the ecliptic plane.
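The perihelion and aphelion quoted above fix the shape of Eris's orbit; a small sketch using the standard relations a = (q + Q)/2 and e = (Q − q)/(Q + q):

    q, Q = 38.2, 97.6             # Eris perihelion and aphelion in AU, from the text
    a = (q + Q) / 2               # semi-major axis
    e = (Q - q) / (Q + q)         # orbital eccentricity
    print(f"a ≈ {a:.1f} AU, e ≈ {e:.2f}")   # about 67.9 AU and 0.44, a markedly eccentric orbit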
123
+
124
+ The point at which the Solar System ends and interstellar space begins is not precisely defined because its outer boundaries are shaped by two separate forces: the solar wind and the Sun's gravity. The limit of the solar wind's influence is roughly four times Pluto's distance from the Sun; this heliopause, the outer boundary of the heliosphere, is considered the beginning of the interstellar medium.[63] The Sun's Hill sphere, the effective range of its gravitational dominance, is thought to extend up to a thousand times farther and encompasses the theorized Oort cloud.[123]
125
+
126
+ The heliosphere is a stellar-wind bubble, a region of space dominated by the Sun, in which the Sun radiates its solar wind, a stream of charged particles travelling at roughly 400 km/s, until it collides with the wind of the interstellar medium.
127
+
128
+ The collision occurs at the termination shock, which is roughly 80–100 AU from the Sun upwind of the interstellar medium and roughly 200 AU from the Sun downwind.[124] Here the wind slows dramatically, condenses and becomes more turbulent,[124] forming a great oval structure known as the heliosheath. This structure is thought to look and behave very much like a comet's tail, extending outward for a further 40 AU on the upwind side but tailing many times that distance downwind; evidence from Cassini and Interstellar Boundary Explorer spacecraft has suggested that it is forced into a bubble shape by the constraining action of the interstellar magnetic field.[125]
129
+
130
+ The outer boundary of the heliosphere, the heliopause, is the point at which the solar wind finally terminates and is the beginning of interstellar space.[63] Voyager 1 and Voyager 2 are reported to have passed the termination shock and entered the heliosheath, at 94 and 84 AU from the Sun, respectively.[126][127] Voyager 1 is reported to have crossed the heliopause in August 2012.[128]
131
+
132
+ The shape of the outer edge of the heliosphere is likely affected by the fluid dynamics of interactions with the interstellar medium, as well as by solar magnetic fields prevailing to the south; for example, it is bluntly shaped, with the northern hemisphere extending 9 AU farther than the southern hemisphere.[124] Beyond the heliopause, at around 230 AU, lies the bow shock, a plasma "wake" left by the Sun as it travels through the Milky Way.[129]
133
+
134
+ Due to a lack of data, conditions in local interstellar space are not known for certain. It is expected that NASA's Voyager spacecraft, as they pass the heliopause, will transmit valuable data on radiation levels and solar wind to Earth.[130] How well the heliosphere shields the Solar System from cosmic rays is poorly understood. A NASA-funded team has developed a concept of a "Vision Mission" dedicated to sending a probe to the heliosphere.[131][132]
135
+
136
+ 90377 Sedna (520 AU average) is a large, reddish object with a gigantic, highly elliptical orbit that takes it from about 76 AU at perihelion to 940 AU at aphelion and takes 11,400 years to complete. Mike Brown, who discovered the object in 2003, asserts that it cannot be part of the scattered disc or the Kuiper belt because its perihelion is too distant to have been affected by Neptune's migration. He and other astronomers consider it to be the first in an entirely new population, sometimes termed "distant detached objects" (DDOs), which also may include the object 2000 CR105, which has a perihelion of 45 AU, an aphelion of 415 AU, and an orbital period of 3,420 years.[133] Brown terms this population the "inner Oort cloud" because it may have formed through a similar process, although it is far closer to the Sun.[134] Sedna is very likely a dwarf planet, though its shape has yet to be determined. The second unequivocally detached object, with a perihelion farther than Sedna's at roughly 81 AU, is 2012 VP113, discovered in 2012. Its aphelion is only half that of Sedna's, at 400–500 AU.[135][136]
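Kepler's third law gives a quick consistency check on the 11,400-year period quoted for Sedna, using only its perihelion and aphelion (a sketch; for a body orbiting the Sun, the period in years is the semi-major axis in AU raised to the power 3/2):

    q, Q = 76, 940                # Sedna perihelion and aphelion in AU, from the text
    a = (q + Q) / 2               # semi-major axis, about 508 AU
    period_yr = a ** 1.5          # Kepler's third law in solar units
    print(f"P ≈ {period_yr:,.0f} years")    # roughly 11,500 years, consistent with the quoted figure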
137
+
138
+ The Oort cloud is a hypothetical spherical cloud of up to a trillion icy objects that is thought to be the source for all long-period comets and to surround the Solar System at roughly 50,000 AU (around 1 light-year (ly)), and possibly to as far as 100,000 AU (1.87 ly). It is thought to be composed of comets that were ejected from the inner Solar System by gravitational interactions with the outer planets. Oort cloud objects move very slowly, and can be perturbed by infrequent events, such as collisions, the gravitational effects of a passing star, or the galactic tide, the tidal force exerted by the Milky Way.[137][138]
139
+
140
+ Much of the Solar System is still unknown. The Sun's gravitational field is estimated to dominate the gravitational forces of surrounding stars out to about two light years (125,000 AU). Lower estimates for the radius of the Oort cloud, by contrast, do not place it farther than 50,000 AU.[139] Despite discoveries such as Sedna, the region between the Kuiper belt and the Oort cloud, an area tens of thousands of AU in radius, is still virtually unmapped. There are also ongoing studies of the region between Mercury and the Sun.[140] Objects may yet be discovered in the Solar System's uncharted regions.
141
+
142
+ Currently, the furthest known objects, such as Comet West, have aphelia around 70,000 AU from the Sun, but as the Oort cloud becomes better known, this may change.
143
+
144
+ The Solar System is located in the Milky Way, a barred spiral galaxy with a diameter of about 100,000 light-years containing more than 100 billion stars.[141] The Sun resides in one of the Milky Way's outer spiral arms, known as the Orion–Cygnus Arm or Local Spur.[142] The Sun lies between 25,000 and 28,000 light-years from the Galactic Centre,[143] and its speed within the Milky Way is about 220 km/s, so that it completes one revolution every 225–250 million years. This revolution is known as the Solar System's galactic year.[144] The solar apex, the direction of the Sun's path through interstellar space, is near the constellation Hercules in the direction of the current location of the bright star Vega.[145] The plane of the ecliptic lies at an angle of about 60° to the galactic plane.[i]
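The 225–250 million year galactic year quoted above follows from the distance and speed given, if the Sun's path is approximated as a circle (a simplification; the mid-range distance of about 26,500 light-years is an assumption):

    import math

    KM_PER_LY = 9.461e12          # kilometres per light-year (standard value)
    r_ly = 26_500                 # assumed mid-range distance to the Galactic Centre
    v_km_s = 220                  # orbital speed quoted above
    period_s = 2 * math.pi * r_ly * KM_PER_LY / v_km_s
    period_myr = period_s / (3600 * 24 * 365.25) / 1e6
    print(f"galactic year ≈ {period_myr:.0f} million years")   # about 230 million years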
145
+
146
+ The Solar System's location in the Milky Way is a factor in the evolutionary history of life on Earth. Its orbit is close to circular, and orbits near the Sun are at roughly the same speed as that of the spiral arms.[147][148] Therefore, the Sun passes through arms only rarely. Because spiral arms are home to a far larger concentration of supernovae, gravitational instabilities, and radiation that could disrupt the Solar System, this has given Earth long periods of stability for life to evolve.[147] The Solar System also lies well outside the star-crowded environs of the galactic centre. Near the centre, gravitational tugs from nearby stars could perturb bodies in the Oort cloud and send many comets into the inner Solar System, producing collisions with potentially catastrophic implications for life on Earth. The intense radiation of the galactic centre could also interfere with the development of complex life.[147] Even at the Solar System's current location, some scientists have speculated that recent supernovae may have adversely affected life in the last 35,000 years, by flinging pieces of expelled stellar core towards the Sun, as radioactive dust grains and larger, comet-like bodies.[149]
147
+
148
+ The Solar System is in the Local Interstellar Cloud or Local Fluff. It is thought to be near the neighbouring G-Cloud, but it is not known whether the Solar System is embedded in the Local Interstellar Cloud, or if it is in the region where the Local Interstellar Cloud and G-Cloud are interacting.[150][151] The Local Interstellar Cloud is an area of denser cloud in an otherwise sparse region known as the Local Bubble, an hourglass-shaped cavity in the interstellar medium roughly 300 light-years (ly) across. The bubble is suffused with high-temperature plasma, which suggests that it is the product of several recent supernovae.[152]
149
+
150
+ There are relatively few stars within ten light-years of the Sun. The closest is the triple star system Alpha Centauri, which is about 4.4 light-years away. Alpha Centauri A and B are a closely tied pair of Sun-like stars, whereas the small red dwarf, Proxima Centauri, orbits the pair at a distance of 0.2 light-year. In 2016, a potentially habitable exoplanet was confirmed to be orbiting Proxima Centauri, called Proxima Centauri b, the closest confirmed exoplanet to the Sun.[153] The stars next closest to the Sun are the red dwarfs Barnard's Star (at 5.9 ly), Wolf 359 (7.8 ly), and Lalande 21185 (8.3 ly).
151
+
152
+ The largest nearby star is Sirius, a bright main-sequence star roughly 8.6 light-years away with roughly twice the Sun's mass; it is orbited by a white dwarf, Sirius B. The nearest brown dwarfs are the binary Luhman 16 system at 6.6 light-years. Other systems within ten light-years are the binary red-dwarf system Luyten 726-8 (8.7 ly) and the solitary red dwarf Ross 154 (9.7 ly).[154] The closest solitary Sun-like star to the Solar System is Tau Ceti at 11.9 light-years. It has roughly 80% of the Sun's mass but only 60% of its luminosity.[155] The closest known free-floating planetary-mass object to the Sun is WISE 0855−0714,[156] an object with a mass less than 10 Jupiter masses roughly 7 light-years away.
153
+
154
+ Compared to many other planetary systems, the Solar System stands out in lacking planets interior to the orbit of Mercury.[157][158] The known Solar System also lacks super-Earths (Planet Nine could be a super-Earth beyond the known Solar System).[157] Uncommonly, it has only small rocky planets and large gas giants; elsewhere planets of intermediate size are typical—both rocky and gas—so there is no "gap" as seen between the size of Earth and of Neptune (with a radius 3.8 times as large). Also, these super-Earths have closer orbits than Mercury.[157] This led to the hypothesis that all planetary systems start with many close-in planets, and that typically a sequence of their collisions causes consolidation of mass into a few larger planets, but in the case of the Solar System the collisions caused their destruction and ejection.[159][160]
155
+
156
+ The orbits of Solar System planets are nearly circular. Compared to other systems, they have smaller orbital eccentricity.[157] Although there are attempts to explain this partly as a bias in the radial-velocity detection method and partly as a result of long interactions among a relatively large number of planets, the exact causes remain undetermined.[157][161]
157
+
158
+ This section is a sampling of Solar System bodies, selected for size and quality of imagery, and sorted by volume. Some omitted objects are larger than the ones included here, notably Eris, because these have not been imaged in high quality.
159
+
160
+ Venus, Earth (Pale Blue Dot), Jupiter, Saturn, Uranus, Neptune (13 September 1996).
161
+
162
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
163
+
en/5584.html.txt ADDED
@@ -0,0 +1,163 @@
1
+
2
+
3
+ The Solar System[b] is the gravitationally bound system of the Sun and the objects that orbit it, either directly or indirectly.[c] Of the objects that orbit the Sun directly, the largest are the eight planets,[d] with the remainder being smaller objects, the dwarf planets and small Solar System bodies. Of the objects that orbit the Sun indirectly—the moons—two are larger than the smallest planet, Mercury.[e]
4
+
5
+ The Solar System formed 4.6 billion years ago from the gravitational collapse of a giant interstellar molecular cloud. The vast majority of the system's mass is in the Sun, with the majority of the remaining mass contained in Jupiter. The four smaller inner planets, Mercury, Venus, Earth and Mars, are terrestrial planets, being primarily composed of rock and metal. The four outer planets are giant planets, being substantially more massive than the terrestrials. The two largest planets, Jupiter and Saturn, are gas giants, being composed mainly of hydrogen and helium; the two outermost planets, Uranus and Neptune, are ice giants, being composed mostly of substances with relatively high melting points compared with hydrogen and helium, called volatiles, such as water, ammonia and methane. All eight planets have almost circular orbits that lie within a nearly flat disc called the ecliptic.
6
+
7
+ The Solar System also contains smaller objects.[f] The asteroid belt, which lies between the orbits of Mars and Jupiter, mostly contains objects composed, like the terrestrial planets, of rock and metal. Beyond Neptune's orbit lie the Kuiper belt and scattered disc, which are populations of trans-Neptunian objects composed mostly of ices, and beyond them a newly discovered population of sednoids. Within these populations, some objects are large enough to have rounded under their own gravity, though there is considerable debate as to how many there will prove to be.[9][10] Such objects are categorized as dwarf planets. The only certain dwarf planet is Pluto; another trans-Neptunian object, Eris, is expected to be one, and the asteroid Ceres is at least close to being one.[f] In addition to these two regions, various other small-body populations, including comets, centaurs and interplanetary dust clouds, freely travel between regions. Six of the planets, the six largest possible dwarf planets, and many of the smaller bodies are orbited by natural satellites, usually termed "moons" after the Moon. Each of the outer planets is encircled by planetary rings of dust and other small objects.
8
+
9
+ The solar wind, a stream of charged particles flowing outwards from the Sun, creates a bubble-like region in the interstellar medium known as the heliosphere. The heliopause is the point at which pressure from the solar wind is equal to the opposing pressure of the interstellar medium; it extends out to the edge of the scattered disc. The Oort cloud, which is thought to be the source for long-period comets, may also exist at a distance roughly a thousand times further than the heliosphere. The Solar System is located in the Orion Arm, 26,000 light-years from the center of the Milky Way galaxy.
10
+
11
+ For most of history, humanity did not recognize or understand the concept of the Solar System. Most people up to the Late Middle Ages–Renaissance believed Earth to be stationary at the centre of the universe and categorically different from the divine or ethereal objects that moved through the sky. Although the Greek philosopher Aristarchus of Samos had speculated on a heliocentric reordering of the cosmos, Nicolaus Copernicus was the first to develop a mathematically predictive heliocentric system.[11][12]
12
+
13
+ In the 17th century, Galileo discovered that the Sun was marked with sunspots, and that Jupiter had four satellites in orbit around it.[13] Christiaan Huygens followed on from Galileo's discoveries by discovering Saturn's moon Titan and the shape of the rings of Saturn.[14] Edmond Halley realised in 1705 that repeated sightings of a comet were recording the same object, returning regularly once every 75–76 years. This was the first evidence that anything other than the planets orbited the Sun.[15] Around this time (1704), the term "Solar System" first appeared in English.[16] In 1838, Friedrich Bessel successfully measured a stellar parallax, an apparent shift in the position of a star created by Earth's motion around the Sun, providing the first direct, experimental proof of heliocentrism.[17] Improvements in observational astronomy and the use of unmanned spacecraft have since enabled the detailed investigation of other bodies orbiting the Sun.
14
+
15
+ The principal component of the Solar System is the Sun, a G2 main-sequence star that contains 99.86% of the system's known mass and dominates it gravitationally.[18] The Sun's four largest orbiting bodies, the giant planets, account for 99% of the remaining mass, with Jupiter and Saturn together comprising more than 90%. The remaining objects of the Solar System (including the four terrestrial planets, the dwarf planets, moons, asteroids, and comets) together comprise less than 0.002% of the Solar System's total mass.[g]
16
+
17
+ Most large objects in orbit around the Sun lie near the plane of Earth's orbit, known as the ecliptic. The planets are very close to the ecliptic, whereas comets and Kuiper belt objects are frequently at significantly greater angles to it.[22][23] As a result of the formation of the Solar System, planets and most other objects orbit the Sun in the same direction that the Sun is rotating (counter-clockwise, as viewed from above Earth's north pole).[24] There are exceptions, such as Halley's Comet. Most of the larger moons orbit their planets in this prograde direction (with Triton being the largest retrograde exception) and most larger objects rotate themselves in the same direction (with Venus being a notable retrograde exception).
18
+
19
+ The overall structure of the charted regions of the Solar System consists of the Sun, four relatively small inner planets surrounded by a belt of mostly rocky asteroids, and four giant planets surrounded by the Kuiper belt of mostly icy objects. Astronomers sometimes informally divide this structure into separate regions. The inner Solar System includes the four terrestrial planets and the asteroid belt. The outer Solar System is beyond the asteroids, including the four giant planets.[25] Since the discovery of the Kuiper belt, the outermost parts of the Solar System are considered a distinct region consisting of the objects beyond Neptune.[26]
20
+
21
+ Most of the planets in the Solar System have secondary systems of their own, being orbited by planetary objects called natural satellites, or moons (two of which, Titan and Ganymede, are larger than the planet Mercury), and, in the case of the four giant planets, by planetary rings, thin bands of tiny particles that orbit them in unison. Most of the largest natural satellites are in synchronous rotation, with one face permanently turned toward their parent.
22
+
23
+ Kepler's laws of planetary motion describe the orbits of objects about the Sun. Following Kepler's laws, each object travels along an ellipse with the Sun at one focus. Objects closer to the Sun (with smaller semi-major axes) travel more quickly because they are more affected by the Sun's gravity. On an elliptical orbit, a body's distance from the Sun varies over the course of its year. A body's closest approach to the Sun is called its perihelion, whereas its most distant point from the Sun is called its aphelion. The orbits of the planets are nearly circular, but many comets, asteroids, and Kuiper belt objects follow highly elliptical orbits. The positions of the bodies in the Solar System can be predicted using numerical models.
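As a minimal illustration of the last sentence, a body's position can be stepped forward numerically under the Sun's gravity alone; the sketch below uses a simple leapfrog integrator in units of AU and years, in which the Sun's gravitational parameter is 4π². This is a toy two-body model, not the method of any particular ephemeris:

    import math

    GM = 4 * math.pi ** 2             # Sun's gravitational parameter in AU^3 / yr^2
    x, y = 1.0, 0.0                   # start 1 AU from the Sun, on an Earth-like orbit
    vx, vy = 0.0, 2 * math.pi         # circular-orbit speed at 1 AU (2*pi AU per year)
    dt = 0.001                        # time step in years

    def accel(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -GM * x / r3, -GM * y / r3

    ax, ay = accel(x, y)
    for _ in range(int(1.0 / dt)):    # integrate for one year
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay    # half kick
        x += dt * vx; y += dt * vy                  # drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay    # half kick
    print(f"after one year: x ≈ {x:.3f} AU, y ≈ {y:.3f} AU")   # back near the starting point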
24
+
25
+ Although the Sun dominates the system by mass, it accounts for only about 2% of the angular momentum.[27][28] The planets, dominated by Jupiter, account for most of the rest of the angular momentum due to the combination of their mass, orbit, and distance from the Sun, with a possibly significant contribution from comets.[27]
26
+
27
+ The Sun, which comprises nearly all the matter in the Solar System, is composed of roughly 98% hydrogen and helium.[29] Jupiter and Saturn, which comprise nearly all the remaining matter, are also primarily composed of hydrogen and helium.[30][31] A composition gradient exists in the Solar System, created by heat and light pressure from the Sun; those objects closer to the Sun, which are more affected by heat and light pressure, are composed of elements with high melting points. Objects farther from the Sun are composed largely of materials with lower melting points.[32] The boundary in the Solar System beyond which those volatile substances could condense is known as the frost line, and it lies at roughly 5 AU from the Sun.[4]
28
+
29
+ The objects of the inner Solar System are composed mostly of rock,[33] the collective name for compounds with high melting points, such as silicates, iron or nickel, that remained solid under almost all conditions in the protoplanetary nebula.[34] Jupiter and Saturn are composed mainly of gases, the astronomical term for materials with extremely low melting points and high vapour pressure, such as hydrogen, helium, and neon, which were always in the gaseous phase in the nebula.[34] Ices, like water, methane, ammonia, hydrogen sulfide, and carbon dioxide,[33] have melting points up to a few hundred kelvins.[34] They can be found as ices, liquids, or gases in various places in the Solar System, whereas in the nebula they were either in the solid or gaseous phase.[34] Icy substances comprise the majority of the satellites of the giant planets, as well as most of Uranus and Neptune (the so-called "ice giants") and the numerous small objects that lie beyond Neptune's orbit.[33][35] Together, gases and ices are referred to as volatiles.[36]
30
+
31
+ The distance from Earth to the Sun is 1 astronomical unit [AU] (150,000,000 km; 93,000,000 mi). For comparison, the radius of the Sun is 0.0047 AU (700,000 km). Thus, the Sun occupies 0.00001% (10⁻⁵ %) of the volume of a sphere with a radius the size of Earth's orbit, whereas Earth's volume is roughly one millionth (10⁻⁶) that of the Sun. Jupiter, the largest planet, is 5.2 astronomical units (780,000,000 km) from the Sun and has a radius of 71,000 km (0.00047 AU), whereas the most distant planet, Neptune, is 30 AU (4.5×10⁹ km) from the Sun.
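The volume figures above follow from the cube of the radii; a quick sketch reproducing them (Earth's radius of about 6,371 km is an assumed standard value, not given in the paragraph):

    AU_KM = 1.496e8
    sun_radius_au = 0.0047                    # from the text
    earth_radius_au = 6371 / AU_KM            # assumed Earth radius converted to AU
    sun_fraction = sun_radius_au ** 3         # Sun's share of a sphere of radius 1 AU
    earth_over_sun = (earth_radius_au / sun_radius_au) ** 3
    print(f"Sun fills {sun_fraction * 100:.0e} % of the 1 AU sphere")   # about 1e-05 %
    print(f"Earth/Sun volume ratio ≈ {earth_over_sun:.1e}")             # of order one millionth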
32
+
33
+ With a few exceptions, the farther a planet or belt is from the Sun, the larger the distance between its orbit and the orbit of the next nearer object to the Sun. For example, Venus is approximately 0.33 AU farther out from the Sun than Mercury, whereas Saturn is 4.3 AU out from Jupiter, and Neptune lies 10.5 AU out from Uranus. Attempts have been made to determine a relationship between these orbital distances (for example, the Titius–Bode law),[37] but no such theory has been accepted. The images at the beginning of this section show the orbits of the various constituents of the Solar System on different scales.
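For reference, the Titius–Bode rule mentioned above can be written as a ≈ 0.4 + 0.3 × 2ⁿ AU; the short sketch below lists the sequence it generates next to the distances quoted elsewhere in this article (purely illustrative, since, as noted, the rule has no accepted physical basis, and it breaks down at Neptune):

    # Titius-Bode sequence: 0.4, then 0.4 + 0.3 * 2**n for n = 0, 1, 2, ...
    predicted = [0.4] + [0.4 + 0.3 * 2 ** n for n in range(8)]
    # Mercury, Venus, Earth, Mars, Ceres, Jupiter, Saturn, Uranus, Neptune (AU, from this article)
    actual = [0.4, 0.7, 1.0, 1.5, 2.77, 5.2, 9.5, 19.2, 30.1]
    for p, a in zip(predicted, actual):
        print(f"predicted {p:5.1f} AU    actual {a:5.2f} AU")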
34
+
35
+ Some Solar System models attempt to convey the relative scales involved in the Solar System on human terms. Some are small in scale (and may be mechanical—called orreries)—whereas others extend across cities or regional areas.[38] The largest such scale model, the Sweden Solar System, uses the 110-metre (361 ft) Ericsson Globe in Stockholm as its substitute Sun, and, following the scale, Jupiter is a 7.5-metre (25-foot) sphere at Stockholm Arlanda Airport, 40 km (25 mi) away, whereas the farthest current object, Sedna, is a 10 cm (4 in) sphere in Luleå, 912 km (567 mi) away.[39][40]
36
+
37
+ If the Sun–Neptune distance is scaled to 100 metres, then the Sun would be about 3 cm in diameter (roughly two-thirds the diameter of a golf ball), the giant planets would be all smaller than about 3 mm, and Earth's diameter along with that of the other terrestrial planets would be smaller than a flea (0.3 mm) at this scale.[41]
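The scaled sizes quoted above can be reproduced directly from the Sun–Neptune distance; the Sun's and Earth's true diameters used below are assumed standard values, not given in the paragraph:

    AU_KM = 1.496e8
    scale = 100 / (30.1 * AU_KM)                  # metres of model per kilometre of reality
    sun_diameter_km = 1.39e6                      # assumed
    earth_diameter_km = 12_742                    # assumed
    print(f"Sun   ≈ {sun_diameter_km * scale * 100:.1f} cm")     # about 3 cm
    print(f"Earth ≈ {earth_diameter_km * scale * 1000:.2f} mm")  # about 0.3 mm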
38
+
39
+ Distances of selected bodies of the Solar System from the Sun. The left and right edges of each bar correspond to the perihelion and aphelion of the body, respectively, hence long bars denote high orbital eccentricity. The radius of the Sun is 0.7 million km, and the radius of Jupiter (the largest planet) is 0.07 million km, both too small to resolve on this image.
40
+
41
+ The Solar System formed 4.568 billion years ago from the gravitational collapse of a region within a large molecular cloud.[h] This initial cloud was likely several light-years across and probably birthed several stars.[43] As is typical of molecular clouds, this one consisted mostly of hydrogen, with some helium, and small amounts of heavier elements fused by previous generations of stars. As the region that would become the Solar System, known as the pre-solar nebula,[44] collapsed, conservation of angular momentum caused it to rotate faster. The centre, where most of the mass collected, became increasingly hotter than the surrounding disc.[43] As the contracting nebula rotated faster, it began to flatten into a protoplanetary disc with a diameter of roughly 200 AU[43] and a hot, dense protostar at the centre.[45][46] The planets formed by accretion from this disc,[47] in which dust and gas gravitationally attracted each other, coalescing to form ever larger bodies. Hundreds of protoplanets may have existed in the early Solar System, but they either merged or were destroyed, leaving the planets, dwarf planets, and leftover minor bodies.
42
+
43
+ Due to their higher boiling points, only metals and silicates could exist in solid form in the warm inner Solar System close to the Sun, and these would eventually form the rocky planets of Mercury, Venus, Earth, and Mars. Because metallic elements only comprised a very small fraction of the solar nebula, the terrestrial planets could not grow very large. The giant planets (Jupiter, Saturn, Uranus, and Neptune) formed further out, beyond the frost line, the point between the orbits of Mars and Jupiter where material is cool enough for volatile icy compounds to remain solid. The ices that formed these planets were more plentiful than the metals and silicates that formed the terrestrial inner planets, allowing them to grow massive enough to capture large atmospheres of hydrogen and helium, the lightest and most abundant elements. Leftover debris that never became planets congregated in regions such as the asteroid belt, Kuiper belt, and Oort cloud. The Nice model is an explanation for the creation of these regions and how the outer planets could have formed in different positions and migrated to their current orbits through various gravitational interactions.
44
+
45
+ Within 50 million years, the pressure and density of hydrogen in the centre of the protostar became great enough for it to begin thermonuclear fusion.[49] The temperature, reaction rate, pressure, and density increased until hydrostatic equilibrium was achieved: the thermal pressure equalled the force of gravity. At this point, the Sun became a main-sequence star.[50] The main-sequence phase, from beginning to end, will last about 10 billion years for the Sun compared to around two billion years for all other phases of the Sun's pre-remnant life combined.[51] Solar wind from the Sun created the heliosphere and swept away the remaining gas and dust from the protoplanetary disc into interstellar space, ending the planetary formation process. The Sun is growing brighter; early in its main-sequence life its brightness was 70% of what it is today.[52]
46
+
47
+ The Solar System will remain roughly as we know it today until the hydrogen in the core of the Sun has been entirely converted to helium, which will occur roughly 5 billion years from now. This will mark the end of the Sun's main-sequence life. At this time, the core of the Sun will contract with hydrogen fusion occurring along a shell surrounding the inert helium, and the energy output will be much greater than at present. The outer layers of the Sun will expand to roughly 260 times its current diameter, and the Sun will become a red giant. Because of its vastly increased surface area, the surface of the Sun will be considerably cooler (2,600 K at its coolest) than it is on the main sequence.[51] The expanding Sun is expected to vaporize Mercury and render Earth uninhabitable. Eventually, the core will be hot enough for helium fusion; the Sun will burn helium for a fraction of the time it burned hydrogen in the core. The Sun is not massive enough to commence the fusion of heavier elements, and nuclear reactions in the core will dwindle. Its outer layers will move away into space, leaving a white dwarf, an extraordinarily dense object, half the original mass of the Sun but only the size of Earth.[53] The ejected outer layers will form what is known as a planetary nebula, returning some of the material that formed the Sun—but now enriched with heavier elements like carbon—to the interstellar medium.
48
+
49
+ The Sun is the Solar System's star and by far its most massive component. Its large mass (332,900 Earth masses),[54] which comprises 99.86% of all the mass in the Solar System,[55] produces temperatures and densities in its core high enough to sustain nuclear fusion of hydrogen into helium, making it a main-sequence star.[56] This releases an enormous amount of energy, mostly radiated into space as electromagnetic radiation peaking in visible light.[57]
50
+
51
+ The Sun is a G2-type main-sequence star. Hotter main-sequence stars are more luminous. The Sun's temperature is intermediate between that of the hottest stars and that of the coolest stars. Stars brighter and hotter than the Sun are rare, whereas substantially dimmer and cooler stars, known as red dwarfs, make up 85% of the stars in the Milky Way.[58][59]
52
+
53
+ The Sun is a population I star; it has a higher abundance of elements heavier than hydrogen and helium ("metals" in astronomical parlance) than the older population II stars.[60] Elements heavier than hydrogen and helium were formed in the cores of ancient and exploding stars, so the first generation of stars had to die before the Universe could be enriched with these atoms. The oldest stars contain few metals, whereas stars born later have more. This high metallicity is thought to have been crucial to the Sun's development of a planetary system because the planets form from the accretion of "metals".[61]
54
+
55
+ The vast majority of the Solar System consists of a near-vacuum known as the interplanetary medium. Along with light, the Sun radiates a continuous stream of charged particles (a plasma) known as the solar wind. This stream of particles spreads outwards at roughly 1.5 million kilometres per hour,[62] creating a tenuous atmosphere that permeates the interplanetary medium out to at least 100 AU (see § Heliosphere).[63] Activity on the Sun's surface, such as solar flares and coronal mass ejections, disturbs the heliosphere, creating space weather and causing geomagnetic storms.[64] The largest structure within the heliosphere is the heliospheric current sheet, a spiral form created by the actions of the Sun's rotating magnetic field on the interplanetary medium.[65][66]
56
+
57
+ Earth's magnetic field stops its atmosphere from being stripped away by the solar wind.[67] Venus and Mars do not have magnetic fields, and as a result the solar wind is causing their atmospheres to gradually bleed away into space.[68] Coronal mass ejections and similar events blow a magnetic field and huge quantities of material from the surface of the Sun. The interaction of this magnetic field and material with Earth's magnetic field funnels charged particles into Earth's upper atmosphere, where they create the aurorae seen near the magnetic poles.
58
+
59
+ The heliosphere and planetary magnetic fields (for those planets that have them) partially shield the Solar System from high-energy interstellar particles called cosmic rays. The density of cosmic rays in the interstellar medium and the strength of the Sun's magnetic field change on very long timescales, so the level of cosmic-ray penetration in the Solar System varies, though by how much is unknown.[69]
60
+
61
+ The interplanetary medium is home to at least two disc-like regions of cosmic dust. The first, the zodiacal dust cloud, lies in the inner Solar System and causes the zodiacal light. It was likely formed by collisions within the asteroid belt brought on by gravitational interactions with the planets.[70] The second dust cloud extends from about 10 AU to about 40 AU, and was probably created by similar collisions within the Kuiper belt.[71][72]
62
+
63
+ The inner Solar System is the region comprising the terrestrial planets and the asteroid belt.[73] Composed mainly of silicates and metals, the objects of the inner Solar System are relatively close to the Sun; the radius of this entire region is less than the distance between the orbits of Jupiter and Saturn. This region is also within the frost line, which is a little less than 5 AU (about 700 million km) from the Sun.[74]
64
+
65
+ The four terrestrial or inner planets have dense, rocky compositions, few or no moons, and no ring systems. They are composed largely of refractory minerals, such as the silicates—which form their crusts and mantles—and metals, such as iron and nickel, which form their cores. Three of the four inner planets (Venus, Earth and Mars) have atmospheres substantial enough to generate weather; all have impact craters and tectonic surface features, such as rift valleys and volcanoes. The term inner planet should not be confused with inferior planet, which designates those planets that are closer to the Sun than Earth is (i.e. Mercury and Venus).
66
+
67
+ Mercury (0.4 AU from the Sun) is the closest planet to the Sun and, on average, the closest to all seven other planets.[75][76] The smallest planet in the Solar System (0.055 M⊕), Mercury has no natural satellites. Besides impact craters, its only known geological features are lobed ridges or rupes that were probably produced by a period of contraction early in its history.[77] Mercury's very tenuous atmosphere consists of atoms blasted off its surface by the solar wind.[78] Its relatively large iron core and thin mantle have not yet been adequately explained. Hypotheses include that its outer layers were stripped off by a giant impact, or that it was prevented from fully accreting by the young Sun's energy.[79][80]
68
+
69
+ Venus (0.7 AU from the Sun) is close in size to Earth (0.815 M⊕) and, like Earth, has a thick silicate mantle around an iron core, a substantial atmosphere, and evidence of internal geological activity. It is much drier than Earth, and its atmosphere is ninety times as dense. Venus has no natural satellites. It is the hottest planet, with surface temperatures over 400 °C (752 °F), most likely due to the amount of greenhouse gases in the atmosphere.[81] No definitive evidence of current geological activity has been detected on Venus, but it has no magnetic field that would prevent depletion of its substantial atmosphere, which suggests that its atmosphere is being replenished by volcanic eruptions.[82]
70
+
71
+ Earth (1 AU from the Sun) is the largest and densest of the inner planets, the only one known to have current geological activity, and the only place where life is known to exist.[83] Its liquid hydrosphere is unique among the terrestrial planets, and it is the only planet where plate tectonics has been observed. Earth's atmosphere is radically different from those of the other planets, having been altered by the presence of life to contain 21% free oxygen.[84] It has one natural satellite, the Moon, the only large satellite of a terrestrial planet in the Solar System.
72
+
73
+ Mars (1.5 AU from the Sun) is smaller than Earth and Venus (0.107 M⊕). It has an atmosphere of mostly carbon dioxide with a surface pressure of 6.1 millibars (roughly 0.6% of that of Earth).[85] Its surface, peppered with vast volcanoes, such as Olympus Mons, and rift valleys, such as Valles Marineris, shows geological activity that may have persisted until as recently as 2 million years ago.[86] Its red colour comes from iron oxide (rust) in its soil.[87] Mars has two tiny natural satellites (Deimos and Phobos) thought to be either captured asteroids,[88] or ejected debris from a massive impact early in Mars's history.[89]
74
+
75
+ Asteroids, except for the largest, Ceres, are classified as small Solar System bodies[f] and are composed mainly of refractory rocky and metallic minerals, with some ice.[90][91] They range from a few metres to hundreds of kilometres in size. Asteroids smaller than one metre are usually called meteoroids and micrometeoroids (grain-sized), depending on different, somewhat arbitrary definitions.
76
+
77
+ The asteroid belt occupies the orbit between Mars and Jupiter, between 2.3 and 3.3 AU from the Sun. It is thought to be remnants from the Solar System's formation that failed to coalesce because of the gravitational interference of Jupiter.[92] The asteroid belt contains tens of thousands, possibly millions, of objects over one kilometre in diameter.[93] Despite this, the total mass of the asteroid belt is unlikely to be more than a thousandth of that of Earth.[21] The asteroid belt is very sparsely populated; spacecraft routinely pass through without incident.
78
+
79
+ Ceres (2.77 AU) is the largest asteroid, a protoplanet, and a dwarf planet.[f] It has a diameter of slightly under 1000 km, and a mass large enough for its own gravity to pull it into a spherical shape. Ceres was considered a planet when it was discovered in 1801, and was reclassified to asteroid in the 1850s as further observations revealed additional asteroids.[94] It was classified as a dwarf planet in 2006 when the definition of a planet was created.
80
+
81
+ Asteroids in the asteroid belt are divided into asteroid groups and families based on their orbital characteristics. Asteroid moons are asteroids that orbit larger asteroids. They are not as clearly distinguished as planetary moons, sometimes being almost as large as their partners. The asteroid belt also contains main-belt comets, which may have been the source of Earth's water.[95]
82
+
83
+ Jupiter trojans are located in either of Jupiter's L4 or L5 points (gravitationally stable regions leading and trailing a planet in its orbit); the term trojan is also used for small bodies in any other planetary or satellite Lagrange point. Hilda asteroids are in a 2:3 resonance with Jupiter; that is, they go around the Sun three times for every two Jupiter orbits.[96]
84
+
85
+ The inner Solar System also contains near-Earth asteroids, many of which cross the orbits of the inner planets.[97] Some of them are potentially hazardous objects.
86
+
87
+ The outer region of the Solar System is home to the giant planets and their large moons. The centaurs and many short-period comets also orbit in this region. Due to their greater distance from the Sun, the solid objects in the outer Solar System contain a higher proportion of volatiles, such as water, ammonia, and methane than those of the inner Solar System because the lower temperatures allow these compounds to remain solid.
88
+
89
+ The four outer planets, or giant planets (sometimes called Jovian planets), collectively make up 99% of the mass known to orbit the Sun.[g] Jupiter and Saturn are together more than 400 times the mass of Earth and consist overwhelmingly of hydrogen and helium. Uranus and Neptune are far less massive—less than 20 Earth masses (M⊕) each—and are composed primarily of ices. For these reasons, some astronomers suggest they belong in their own category, ice giants.[98] All four giant planets have rings, although only Saturn's ring system is easily observed from Earth. The term superior planet designates planets outside Earth's orbit and thus includes both the outer planets and Mars.
90
+
91
+ Jupiter (5.2 AU), at 318 M⊕, is 2.5 times the mass of all the other planets put together. It is composed largely of hydrogen and helium. Jupiter's strong internal heat creates semi-permanent features in its atmosphere, such as cloud bands and the Great Red Spot. Jupiter has 79 known satellites. The four largest, Ganymede, Callisto, Io, and Europa, show similarities to the terrestrial planets, such as volcanism and internal heating.[99] Ganymede, the largest satellite in the Solar System, is larger than Mercury.
92
+
93
+ Saturn (9.5 AU), distinguished by its extensive ring system, has several similarities to Jupiter, such as its atmospheric composition and magnetosphere. Although Saturn has 60% of Jupiter's volume, it is less than a third as massive, at 95 M⊕. Saturn is the only planet of the Solar System that is less dense than water.[100] The rings of Saturn are made up of small ice and rock particles. Saturn has 82 confirmed satellites composed largely of ice. Two of these, Titan and Enceladus, show signs of geological activity.[101] Titan, the second-largest moon in the Solar System, is larger than Mercury and the only satellite in the Solar System with a substantial atmosphere.
94
+
95
+ Uranus (19.2 AU), at 14 M⊕, is the lightest of the outer planets. Uniquely among the planets, it orbits the Sun on its side; its axial tilt is over ninety degrees to the ecliptic. It has a much colder core than the other giant planets and radiates very little heat into space.[102] Uranus has 27 known satellites, the largest ones being Titania, Oberon, Umbriel, Ariel, and Miranda.
96
+
97
+ Neptune (30.1 AU), though slightly smaller than Uranus, is more massive (17 M⊕) and hence more dense. It radiates more internal heat, but not as much as Jupiter or Saturn.[103] Neptune has 14 known satellites. The largest, Triton, is geologically active, with geysers of liquid nitrogen.[104] Triton is the only large satellite with a retrograde orbit. Neptune is accompanied in its orbit by several minor planets, termed Neptune trojans, that are in 1:1 resonance with it.
98
+
99
+ The centaurs are icy comet-like bodies whose orbits have semi-major axes greater than Jupiter's (5.5 AU) and less than Neptune's (30 AU). The largest known centaur, 10199 Chariklo, has a diameter of about 250 km.[105] The first centaur discovered, 2060 Chiron, has also been classified as a comet (95P) because it develops a coma just as comets do when they approach the Sun.[106]
100
+
101
+ Comets are small Solar System bodies,[f] typically only a few kilometres across, composed largely of volatile ices. They have highly eccentric orbits, generally with a perihelion within the orbits of the inner planets and an aphelion far beyond Pluto. When a comet enters the inner Solar System, its proximity to the Sun causes its icy surface to sublimate and ionise, creating a coma and often a long tail of gas and dust visible to the naked eye.
102
+
103
+ Short-period comets have orbits lasting less than two hundred years. Long-period comets have orbits lasting thousands of years. Short-period comets are thought to originate in the Kuiper belt, whereas long-period comets, such as Hale–Bopp, are thought to originate in the Oort cloud. Many comet groups, such as the Kreutz Sungrazers, formed from the breakup of a single parent.[107] Some comets with hyperbolic orbits may originate outside the Solar System, but determining their precise orbits is difficult.[108] Old comets that have had most of their volatiles driven out by solar warming are often categorised as asteroids.[109]
104
+
105
+ Beyond the orbit of Neptune lies the area of the "trans-Neptunian region", with the doughnut-shaped Kuiper belt, home of Pluto and several other dwarf planets, and an overlapping disc of scattered objects, which is tilted toward the plane of the Solar System and reaches much further out than the Kuiper belt. The entire region is still largely unexplored. It appears to consist overwhelmingly of many thousands of small worlds—the largest having a diameter only a fifth that of Earth and a mass far smaller than that of the Moon—composed mainly of rock and ice. This region is sometimes described as the "third zone of the Solar System", enclosing the inner and the outer Solar System.[110]
106
+
107
+ The Kuiper belt is a great ring of debris similar to the asteroid belt, but consisting mainly of objects composed primarily of ice.[111] It extends between 30 and 50 AU from the Sun. Though it is estimated to contain anything from dozens to thousands of dwarf planets, it is composed mainly of small Solar System bodies. Many of the larger Kuiper belt objects, such as Quaoar, Varuna, and Orcus, may prove to be dwarf planets with further data. There are estimated to be over 100,000 Kuiper belt objects with a diameter greater than 50 km, but the total mass of the Kuiper belt is thought to be only a tenth or even a hundredth the mass of Earth.[20] Many Kuiper belt objects have multiple satellites,[112] and most have orbits that take them outside the plane of the ecliptic.[113]
108
+
109
+ The Kuiper belt can be roughly divided into the "classical" belt and the resonances.[111] Resonances are orbits linked to that of Neptune (e.g. twice for every three Neptune orbits, or once for every two). The first resonance begins within the orbit of Neptune itself. The classical belt consists of objects having no resonance with Neptune, and extends from roughly 39.4 AU to 47.7 AU.[114] Members of the classical Kuiper belt are classified as cubewanos, after the first of their kind to be discovered, 15760 Albion (which previously had the provisional designation 1992 QB1), and are still in near primordial, low-eccentricity orbits.[115]
110
+
111
+ The dwarf planet Pluto (39 AU average) is the largest known object in the Kuiper belt. When discovered in 1930, it was considered to be the ninth planet; this changed in 2006 with the adoption of a formal definition of planet. Pluto has a relatively eccentric orbit inclined 17 degrees to the ecliptic plane and ranging from 29.7 AU from the Sun at perihelion (within the orbit of Neptune) to 49.5 AU at aphelion. Pluto has a 3:2 resonance with Neptune, meaning that Pluto orbits twice round the Sun for every three Neptunian orbits. Kuiper belt objects whose orbits share this resonance are called plutinos.[116]
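The 3:2 resonance is consistent with Kepler's third law applied to the distances quoted here. The sketch below is only an illustrative check, not from the source text: it assumes near-circular orbits and uses ~39.5 AU for Pluto's semi-major axis, slightly more precise than the "39 AU average" above (the same check applies to the Hilda asteroids' 2:3 resonance with Jupiter mentioned earlier).

```python
# Illustrative check of the Pluto-Neptune 3:2 resonance via Kepler's third law.
# For bodies orbiting the Sun, T**2 = a**3 when T is in years and a is in AU.

def period_years(a_au: float) -> float:
    """Approximate orbital period in years for semi-major axis a_au (AU)."""
    return a_au ** 1.5

t_neptune = period_years(30.1)   # ~165 years
t_pluto = period_years(39.5)     # ~248 years

print(round(t_pluto / t_neptune, 2))   # ~1.5, i.e. two Pluto orbits for every three Neptune orbits
```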
112
+
113
+ Charon, the largest of Pluto's moons, is sometimes described as part of a binary system with Pluto, as the two bodies orbit a barycentre located above their surfaces (i.e. they appear to "orbit each other"). Beyond Charon, four much smaller moons, Styx, Nix, Kerberos, and Hydra, orbit within the system.
114
+
115
+ Makemake (45.79 AU average), although smaller than Pluto, is the largest known object in the classical Kuiper belt (that is, a Kuiper belt object not in a confirmed resonance with Neptune). Makemake is the brightest object in the Kuiper belt after Pluto. In 2008, it was assigned a naming committee under the expectation that it would prove to be a dwarf planet.[6] Its orbit is far more inclined than Pluto's, at 29°.[117]
116
+
117
+ Haumea (43.13 AU average) is in an orbit similar to Makemake's, except that it is in a temporary 7:12 orbital resonance with Neptune.[118]
118
+ It was named under the same expectation that it would prove to be a dwarf planet, though subsequent observations have indicated that it may not be a dwarf planet after all.[119]
119
+
120
+ The scattered disc, which overlaps the Kuiper belt but extends out to about 200 AU, is thought to be the source of short-period comets. Scattered-disc objects are thought to have been ejected into erratic orbits by the gravitational influence of Neptune's early outward migration. Most scattered disc objects (SDOs) have perihelia within the Kuiper belt but aphelia far beyond it (some more than 150 AU from the Sun). SDOs' orbits are also highly inclined to the ecliptic plane and are often almost perpendicular to it. Some astronomers consider the scattered disc to be merely another region of the Kuiper belt and describe scattered disc objects as "scattered Kuiper belt objects".[120] Some astronomers also classify centaurs as inward-scattered Kuiper belt objects along with the outward-scattered residents of the scattered disc.[121]
121
+
122
+ Eris (68 AU average) is the largest known scattered disc object, and caused a debate about what constitutes a planet, because it is 25% more massive than Pluto[122] and about the same diameter. It is the most massive of the known dwarf planets. It has one known moon, Dysnomia. Like Pluto, its orbit is highly eccentric, with a perihelion of 38.2 AU (roughly Pluto's distance from the Sun) and an aphelion of 97.6 AU, and steeply inclined to the ecliptic plane.
123
+
124
+ The point at which the Solar System ends and interstellar space begins is not precisely defined because its outer boundaries are shaped by two separate forces: the solar wind and the Sun's gravity. The limit of the solar wind's influence is roughly four times Pluto's distance from the Sun; this heliopause, the outer boundary of the heliosphere, is considered the beginning of the interstellar medium.[63] The Sun's Hill sphere, the effective range of its gravitational dominance, is thought to extend up to a thousand times farther and encompasses the theorized Oort cloud.[123]
125
+
126
+ The heliosphere is a stellar-wind bubble, a region of space dominated by the Sun, which radiates its solar wind, a stream of charged particles, outward at roughly 400 km/s until it collides with the wind of the interstellar medium.
127
+
128
+ The collision occurs at the termination shock, which is roughly 80–100 AU from the Sun upwind of the interstellar medium and roughly 200 AU from the Sun downwind.[124] Here the wind slows dramatically, condenses and becomes more turbulent,[124] forming a great oval structure known as the heliosheath. This structure is thought to look and behave very much like a comet's tail, extending outward for a further 40 AU on the upwind side but tailing many times that distance downwind; evidence from Cassini and Interstellar Boundary Explorer spacecraft has suggested that it is forced into a bubble shape by the constraining action of the interstellar magnetic field.[125]
129
+
130
+ The outer boundary of the heliosphere, the heliopause, is the point at which the solar wind finally terminates and is the beginning of interstellar space.[63] Voyager 1 and Voyager 2 are reported to have passed the termination shock and entered the heliosheath, at 94 and 84 AU from the Sun, respectively.[126][127] Voyager 1 is reported to have crossed the heliopause in August 2012.[128]
131
+
132
+ The shape of the outer edge of the heliosphere is likely affected by the fluid dynamics of interactions with the interstellar medium, as well as by solar magnetic fields prevailing to the south; for example, it is bluntly shaped, with the northern hemisphere extending 9 AU farther than the southern hemisphere.[124] Beyond the heliopause, at around 230 AU, lies the bow shock, a plasma "wake" left by the Sun as it travels through the Milky Way.[129]
133
+
134
+ Due to a lack of data, conditions in local interstellar space are not known for certain. It is expected that NASA's Voyager spacecraft, as they pass the heliopause, will transmit valuable data on radiation levels and solar wind to Earth.[130] How well the heliosphere shields the Solar System from cosmic rays is poorly understood. A NASA-funded team has developed a concept of a "Vision Mission" dedicated to sending a probe to the heliosphere.[131][132]
135
+
136
+ 90377 Sedna (520 AU average) is a large, reddish object with a gigantic, highly elliptical orbit that takes it from about 76 AU at perihelion to 940 AU at aphelion and takes 11,400 years to complete. Mike Brown, who discovered the object in 2003, asserts that it cannot be part of the scattered disc or the Kuiper belt because its perihelion is too distant to have been affected by Neptune's migration. He and other astronomers consider it to be the first in an entirely new population, sometimes termed "distant detached objects" (DDOs), which also may include the object 2000 CR105, which has a perihelion of 45 AU, an aphelion of 415 AU, and an orbital period of 3,420 years.[133] Brown terms this population the "inner Oort cloud" because it may have formed through a similar process, although it is far closer to the Sun.[134] Sedna is very likely a dwarf planet, though its shape has yet to be determined. The second unequivocally detached object, with a perihelion farther than Sedna's at roughly 81 AU, is 2012 VP113, discovered in 2012. Its aphelion is only half that of Sedna's, at 400–500 AU.[135][136]
137
+
138
+ The Oort cloud is a hypothetical spherical cloud of up to a trillion icy objects that is thought to be the source for all long-period comets and to surround the Solar System at roughly 50,000 AU (around 1 light-year (ly)), and possibly to as far as 100,000 AU (1.87 ly). It is thought to be composed of comets that were ejected from the inner Solar System by gravitational interactions with the outer planets. Oort cloud objects move very slowly, and can be perturbed by infrequent events, such as collisions, the gravitational effects of a passing star, or the galactic tide, the tidal force exerted by the Milky Way.[137][138]
139
+
140
+ Much of the Solar System is still unknown. The Sun's gravitational field is estimated to dominate the gravitational forces of surrounding stars out to about two light years (125,000 AU). Lower estimates for the radius of the Oort cloud, by contrast, do not place it farther than 50,000 AU.[139] Despite discoveries such as Sedna, the region between the Kuiper belt and the Oort cloud, an area tens of thousands of AU in radius, is still virtually unmapped. There are also ongoing studies of the region between Mercury and the Sun.[140] Objects may yet be discovered in the Solar System's uncharted regions.
141
+
142
+ Currently, the furthest known objects, such as Comet West, have aphelia around 70,000 AU from the Sun, but as the Oort cloud becomes better known, this may change.
143
+
144
+ The Solar System is located in the Milky Way, a barred spiral galaxy with a diameter of about 100,000 light-years containing more than 100 billion stars.[141] The Sun resides in one of the Milky Way's outer spiral arms, known as the Orion–Cygnus Arm or Local Spur.[142] The Sun lies between 25,000 and 28,000 light-years from the Galactic Centre,[143] and its speed within the Milky Way is about 220 km/s, so that it completes one revolution every 225–250 million years. This revolution is known as the Solar System's galactic year.[144] The solar apex, the direction of the Sun's path through interstellar space, is near the constellation Hercules in the direction of the current location of the bright star Vega.[145] The plane of the ecliptic lies at an angle of about 60° to the galactic plane.[i]
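Those figures are mutually consistent: a rough, hedged calculation (assuming a circular orbit at the midpoint of the quoted 25,000–28,000 light-year range, and a standard kilometres-per-light-year conversion not given in the text) recovers a period inside the 225–250 million year window.

```python
import math

# Back-of-the-envelope check of the "galactic year" using the figures quoted in the text.
LY_KM = 9.4607e12          # kilometres per light-year (standard conversion, not from the text)
r_km = 26_500 * LY_KM      # assumed orbital radius: midpoint of the 25,000-28,000 ly range
v_km_s = 220               # orbital speed quoted in the text
SECONDS_PER_YEAR = 3.156e7

period_years = (2 * math.pi * r_km / v_km_s) / SECONDS_PER_YEAR
print(f"{period_years / 1e6:.0f} million years")   # ~227, within the quoted 225-250 Myr range
```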
145
+
146
+ The Solar System's location in the Milky Way is a factor in the evolutionary history of life on Earth. Its orbit is close to circular, and orbits near the Sun are at roughly the same speed as that of the spiral arms.[147][148] Therefore, the Sun passes through arms only rarely. Because spiral arms are home to a far larger concentration of supernovae, gravitational instabilities, and radiation that could disrupt the Solar System, this has given Earth long periods of stability for life to evolve.[147] The Solar System also lies well outside the star-crowded environs of the galactic centre. Near the centre, gravitational tugs from nearby stars could perturb bodies in the Oort cloud and send many comets into the inner Solar System, producing collisions with potentially catastrophic implications for life on Earth. The intense radiation of the galactic centre could also interfere with the development of complex life.[147] Even at the Solar System's current location, some scientists have speculated that recent supernovae may have adversely affected life in the last 35,000 years, by flinging pieces of expelled stellar core towards the Sun, as radioactive dust grains and larger, comet-like bodies.[149]
147
+
148
+ The Solar System is in the Local Interstellar Cloud or Local Fluff. It is thought to be near the neighbouring G-Cloud, but it is not known if the Solar System is embedded in the Local Interstellar Cloud, or if it is in the region where the Local Interstellar Cloud and G-Cloud are interacting.[150][151] The Local Interstellar Cloud is an area of denser cloud in an otherwise sparse region known as the Local Bubble, an hourglass-shaped cavity in the interstellar medium roughly 300 light-years (ly) across. The bubble is suffused with high-temperature plasma, which suggests that it is the product of several recent supernovae.[152]
149
+
150
+ There are relatively few stars within ten light-years of the Sun. The closest is the triple star system Alpha Centauri, which is about 4.4 light-years away. Alpha Centauri A and B are a closely tied pair of Sun-like stars, whereas the small red dwarf, Proxima Centauri, orbits the pair at a distance of 0.2 light-year. In 2016, a potentially habitable exoplanet was confirmed to be orbiting Proxima Centauri, called Proxima Centauri b, the closest confirmed exoplanet to the Sun.[153] The stars next closest to the Sun are the red dwarfs Barnard's Star (at 5.9 ly), Wolf 359 (7.8 ly), and Lalande 21185 (8.3 ly).
151
+
152
+ The largest nearby star is Sirius, a bright main-sequence star roughly 8.6 light-years away with roughly twice the Sun's mass, orbited by a white dwarf, Sirius B. The nearest brown dwarfs are the binary Luhman 16 system at 6.6 light-years. Other systems within ten light-years are the binary red-dwarf system Luyten 726-8 (8.7 ly) and the solitary red dwarf Ross 154 (9.7 ly).[154] The closest solitary Sun-like star to the Solar System is Tau Ceti at 11.9 light-years. It has roughly 80% of the Sun's mass but only 60% of its luminosity.[155] The closest known free-floating planetary-mass object to the Sun is WISE 0855−0714,[156] an object with a mass less than 10 Jupiter masses roughly 7 light-years away.
153
+
154
+ Compared to many other planetary systems, the Solar System stands out in lacking planets interior to the orbit of Mercury.[157][158] The known Solar System also lacks super-Earths (Planet Nine could be a super-Earth beyond the known Solar System).[157] Uncommonly, it has only small rocky planets and large gas giants; in other systems, planets of intermediate size, both rocky and gaseous, are typical, so there is no "gap" like the one seen in the Solar System between the sizes of Earth and Neptune (which has a radius 3.8 times Earth's). Also, these super-Earths tend to have closer orbits than Mercury.[157] This led to the hypothesis that all planetary systems start with many close-in planets, and that a sequence of collisions typically consolidates their mass into a few larger planets, but that in the case of the Solar System the collisions caused their destruction and ejection.[159][160]
155
+
156
+ The orbits of the Solar System's planets are nearly circular; compared to planets in other known systems, they have smaller orbital eccentricities.[157] Although this has been attributed partly to a bias in the radial-velocity detection method and partly to long interactions among a fairly large number of planets, the exact causes remain undetermined.[157][161]
157
+
158
+ This section is a sampling of Solar System bodies, selected for size and quality of imagery, and sorted by volume. Some omitted objects are larger than the ones included here, notably Eris, because these have not been imaged in high quality.
159
+
160
+ Venus, Earth (Pale Blue Dot), Jupiter, Saturn, Uranus, Neptune (13 September 1996).
161
+
162
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
163
+
en/5585.html.txt ADDED
@@ -0,0 +1,117 @@
1
+ Tobacco is the common name of several plants in the Nicotiana genus and the Solanaceae (nightshade) family, and the general term for any product prepared from the cured leaves of the tobacco plant. More than 70 species of tobacco are known, but the chief commercial crop is N. tabacum. The more potent variant N. rustica is also used around the world.
2
+
3
+ Tobacco contains the highly addictive stimulant alkaloid nicotine as well as harmala alkaloids.[3] Dried tobacco leaves are mainly used for smoking in cigarettes and cigars, as well as pipes and shishas. They can also be consumed as snuff, chewing tobacco, dipping tobacco and snus.
4
+
5
+ Tobacco use is a cause or risk factor for many diseases; especially those affecting the heart, liver, and lungs, as well as many cancers. In 2008, the World Health Organization named tobacco use as the world's single greatest preventable cause of death.[4]
6
+
7
+ The English word tobacco originates from the Spanish and Portuguese word tabaco. The precise origin of this word is disputed, but it is generally thought to have derived, at least in part, from Taíno, the Arawakan language of the Caribbean. In Taíno, it was said to mean either a roll of tobacco leaves (according to Bartolomé de las Casas, 1552) or tabago, a kind of L-shaped pipe used for sniffing tobacco smoke (according to Oviedo; with the leaves themselves being referred to as cohiba).[5][6]
8
+
9
+ However, perhaps coincidentally, similar words in Spanish, Portuguese and Italian were used from 1410 for certain medicinal herbs. These probably derived from the Arabic طُبّاق ṭubbāq (also طُباق ṭubāq), a word reportedly dating to the 9th century, referring to various herbs.[7][8]
10
+
11
+ Tobacco has long been used in the Americas, with some cultivation sites in Mexico dating back to 1400–1000 BC.[9] Many Native American tribes traditionally grow and use tobacco. Historically, people from the Northeast Woodlands cultures have carried tobacco in pouches as a readily accepted trade item. It was smoked both socially and ceremonially, such as to seal a peace treaty or trade agreement.[10][11] In some Native cultures, tobacco is seen as a gift from the Creator, with the ceremonial tobacco smoke carrying one's thoughts and prayers to the Creator.[12]
12
+
13
+ Following the arrival of the Europeans to the Americas, tobacco became increasingly popular as a trade item. Hernández de Boncalo, Spanish chronicler of the Indies, was the first European to bring tobacco seeds to the Old World in 1559 following orders of King Philip II of Spain. These seeds were planted in the outskirts of Toledo, more specifically in an area known as "Los Cigarrales" named after the continuous plagues of cicadas (cigarras in Spanish). Before the development of the lighter Virginia and white burley strains of tobacco, the smoke was too harsh to be inhaled. Small quantities were smoked at a time, using a pipe like the midwakh or kiseru, or newly invented waterpipes such as the bong or the hookah (see thuốc lào for a modern continuance of this practice). Tobacco became so popular that the English colony of Jamestown used it as currency and began exporting it as a cash crop; tobacco is often credited as being the export that saved Virginia from ruin.[13]
14
+
15
+ The alleged benefits of tobacco also contributed to its success. The astronomer Thomas Harriot, who accompanied Sir Richard Grenville on his 1585 expedition to Roanoke Island, mistakenly explained that the plant "openeth all the pores and passages of the body" so that the natives’ "bodies are notably preserved in health, and know not many grievous diseases, wherewithal we in England are often times afflicted." [14]
16
+
17
+ Production of tobacco for smoking, chewing, and snuffing became a major industry in Europe and its colonies by 1700.[15][16]
18
+
19
+ Tobacco has been a major cash crop in Cuba and in other parts of the Caribbean since the 18th century. Cuban cigars are world-famous.[17]
20
+
21
+ In the late 19th century, cigarettes became popular. James Bonsack invented a machine to automate cigarette production. This increase in production allowed tremendous growth in the tobacco industry until the health revelations of the late 20th century.[18][19]
22
+
23
+ Following the scientific revelations of the mid-20th century, tobacco was condemned as a health hazard, and eventually became recognized as a cause of cancer, as well as other respiratory and circulatory diseases. In the United States, this led to the Tobacco Master Settlement Agreement, which settled the many lawsuits by the U.S. states in exchange for a combination of yearly payments to the states and voluntary restrictions on advertising and marketing of tobacco products.
24
+
25
+ In the 1970s, Brown & Williamson cross-bred a strain of tobacco to produce Y1, a strain containing an unusually high nicotine content, nearly doubling from 3.2-3.5% to 6.5%. In the 1990s, this prompted the Food and Drug Administration to allege that tobacco companies were intentionally manipulating the nicotine content of cigarettes.
26
+
27
+ In 2003, in response to growth of tobacco use in developing countries, the World Health Organization[20] successfully rallied 168 countries to sign the Framework Convention on Tobacco Control. The convention is designed to push for effective legislation and enforcement in all countries to reduce the harmful effects of tobacco.
28
+
29
+ The desire of many addicted smokers to quit has led to the development of tobacco cessation products.
30
+
31
+ Many species of tobacco are in the genus of herbs Nicotiana. It is part of the nightshade family (Solanaceae) indigenous to North and South America, Australia, south west Africa, and the South Pacific.[21]
32
+
33
+ Most nightshades contain varying amounts of nicotine, a powerful neurotoxin to insects. However, tobaccos tend to contain a much higher concentration of nicotine than the others. Unlike many other Solanaceae species, they do not contain tropane alkaloids, which are often poisonous to humans and other animals.
34
+
35
+ Despite containing enough nicotine and other compounds such as germacrene and anabasine and other piperidine alkaloids (varying between species) to deter most herbivores,[22] a number of such animals have evolved the ability to feed on Nicotiana species without being harmed. Nonetheless, tobacco is unpalatable to many species due to its other attributes. For example, although the cabbage looper is a generalist pest, tobacco's gummosis and trichomes can harm early larvae survival.[23] As a result, some tobacco plants (chiefly N. glauca) have become established as invasive weeds in some places.
36
+
37
+ The types of tobacco include:
38
+
39
+ Tobacco is cultivated similarly to other agricultural products. Seeds were at first quickly scattered onto the soil. However, young plants came under increasing attack from flea beetles (Epitrix cucumeris or E. pubescens), which caused the destruction of half the tobacco crop in the United States in 1876. By 1890, successful experiments were conducted that placed the plant in a frame covered by thin cotton fabric. Today, tobacco seeds are sown in cold frames or hotbeds, as their germination is activated by light.[24] In the United States, tobacco is often fertilized with the mineral apatite, which partially starves the plant of nitrogen, to produce a more desired flavor.
40
+
41
+ After the plants are about 8 inches (20 cm) tall, they are transplanted into the fields. Farmers used to have to wait for rainy weather to plant. A hole is created in the tilled earth with a tobacco peg, either a curved wooden tool or deer antler. After making two holes to the right and left, the planter would move forward two feet, select plants from his/her bag, and repeat. Various mechanical tobacco planters like Bemis, New Idea Setter, and New Holland Transplanter were invented in the late 19th and 20th centuries to automate the process: making the hole, watering it, guiding the plant in — all in one motion.[25]
42
+
43
+ Tobacco is cultivated annually, and can be harvested in several ways. In the oldest method, still used today, the entire plant is harvested at once by cutting off the stalk at the ground with a tobacco knife; it is then speared onto sticks, four to six plants a stick, and hung in a curing barn. In the 19th century, bright tobacco began to be harvested by pulling individual leaves off the stalk as they ripened. The leaves ripen from the ground upwards, so a field of tobacco harvested in this manner entails the serial harvest of a number of "primings", beginning with the volado leaves near the ground, working to the seco leaves in the middle of the plant, and finishing with the potent ligero leaves at the top. Before harvesting, the crop must be topped when the pink flowers develop. Topping always refers to the removal of the tobacco flower before the leaves are systematically harvested. As the industrial revolution took hold, the harvesting wagons which were used to transport leaves were equipped with man-powered stringers, an apparatus that used twine to attach leaves to a pole. In modern times, large fields are harvested mechanically, although topping the flower and in some cases the plucking of immature leaves is still done by hand.
44
+
45
+ In the U.S., North Carolina and Kentucky are the leaders in tobacco production, followed by Tennessee, Virginia, Georgia, South Carolina and Pennsylvania.[26]
46
+
47
+ Curing and subsequent aging allow for the slow oxidation and degradation of carotenoids in tobacco leaf. This produces certain compounds in the tobacco leaves and gives a sweet hay, tea, rose oil, or fruity aromatic flavor that contributes to the "smoothness" of the smoke. Starch is converted to sugar, which glycates protein, and is oxidized into advanced glycation endproducts (AGEs), a caramelization process that also adds flavor. Inhalation of these AGEs in tobacco smoke contributes to atherosclerosis and cancer.[27] Levels of AGEs are dependent on the curing method used.
48
+
49
+ Tobacco can be cured through several methods, including:
50
+
51
+ Some tobaccos go through a second stage of curing, known as fermenting or sweating.[30] Cavendish undergoes fermentation pressed in a casing solution containing sugar and/or flavoring.[31]
52
+
53
+ Production of tobacco leaf increased by 40% between 1971, when 4.2 million tons of leaf were produced, and 1997, when 5.9 million tons of leaf were produced.[33] According to the Food and Agriculture Organization of the UN, tobacco leaf production was expected to hit 7.1 million tons by 2010. This number is slightly lower than the record-high production of 1992, when 7.5 million tons of leaf were produced.[34] The production growth was almost entirely due to increased productivity by developing nations, where production increased by 128%.[35] During that same time, production in developed countries actually decreased.[34] China's increase in tobacco production was the single biggest factor in the increase in world production. China's share of the world market increased from 17% in 1971 to 47% in 1997.[33] This growth can be partially explained by the existence of a high import tariff on foreign tobacco entering China. While this tariff has been reduced from 64% in 1999 to 10% in 2004,[36] it has still led to local Chinese cigarettes being preferred over foreign cigarettes because of their lower cost.
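The 40% growth figure follows directly from the tonnages quoted above; a trivial check using only numbers already in the paragraph:

```python
# Growth in world tobacco-leaf production, from the figures quoted in the text.
leaf_1971 = 4.2   # million tons
leaf_1997 = 5.9   # million tons
print(f"{(leaf_1997 / leaf_1971 - 1) * 100:.0f}%")   # ~40%
```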
54
+
55
+ Every year, about 6.7 million tons of tobacco are produced throughout the world. The top producers of tobacco are China (39.6%), India (8.3%), Brazil (7.0%) and the United States (4.6%).[38]
56
+
57
+ Around the peak of global tobacco production, 20 million rural Chinese households were producing tobacco on 2.1 million hectares of land.[39] While it is the major crop for millions of Chinese farmers, growing tobacco is not as profitable as cotton or sugarcane, because the Chinese government sets the market price. While this price is guaranteed, it is lower than the natural market price, because of the lack of market risk. To further control tobacco within its borders, China founded the State Tobacco Monopoly Administration (STMA) in 1982. The STMA controls tobacco production, marketing, imports, and exports, and contributes 12% of the nation's income.[40] As noted above, despite the income generated for the state by profits from state-owned tobacco companies and the taxes paid by companies and retailers, China's government has acted to reduce tobacco use.[41]
58
+
59
+ India's Tobacco Board is headquartered in Guntur in the state of Andhra Pradesh.[42] India has 96,865 registered tobacco farmers[43] and many more who are not registered. In 2010, 3,120 tobacco product manufacturing facilities were operating in all of India.[44] Around 0.25% of India's cultivated land is used for tobacco production.[45]
60
+
61
+ Since 1947, the Indian government has supported growth in the tobacco industry. India has seven tobacco research centers, located in Tamil Nadu, Andhra Pradesh, Punjab, Bihar, Mysore, and West Bengal, the last of which houses the core research institute.
62
+
63
+ In Brazil, around 135,000 family farmers cite tobacco production as their main economic activity.[39] Tobacco has never exceeded 0.7% of the country's total cultivated area.[46] In the southern regions of Brazil, Virginia and Amarelinho flue-cured tobacco, as well as burley and Galpão Comum air-cured tobacco, are produced. These types of tobacco are used for cigarettes. In the northeast, darker, air- and sun-cured tobacco is grown. These types of tobacco are used for cigars, twists, and dark cigarettes.[46]
64
+ Brazil's government has made attempts to reduce the production of tobacco but has not had a successful systematic antitobacco farming initiative. Brazil's government, however, provides small loans for family farms, including those that grow tobacco, through the Programa Nacional de Fortalecimento da Agricultura Familiar.[47]
65
+
66
+ The International Labour Office reported that most child laborers work in agriculture, which is one of the most hazardous types of work.[48][failed verification – see discussion] The tobacco industry houses some of these working children. Use of children is widespread on farms in Brazil, China, India, Indonesia, Malawi, and Zimbabwe.[49] While some of these children work with their families on small, family-owned farms, others work on large plantations.
67
+ In late 2009, reports were released by the London-based human-rights group Plan International, claiming that child labor was common on Malawi (producer of 1.8% of the world's tobacco[33]) tobacco farms. The organization interviewed 44 teens, who worked full-time on farms during the 2007-8 growing season. The child-laborers complained of low pay and long hours, as well as physical and sexual abuse by their supervisors.[50] They also reported suffering from Green tobacco sickness, a form of nicotine poisoning. When wet leaves are handled, nicotine from the leaves gets absorbed in the skin and causes nausea, vomiting, and dizziness. Children were exposed to levels of nicotine equivalent to smoking 50 cigarettes, just through direct contact with tobacco leaves. This level of nicotine in children can permanently alter brain structure and function.[48][failed verification – see discussion]
68
+
69
+ Major tobacco companies have encouraged global tobacco production. Philip Morris, British American Tobacco, and Japan Tobacco each own or lease tobacco-manufacturing facilities in at least 50 countries and buy crude tobacco leaf from at least 12 more countries.[51] This encouragement, along with government subsidies, has led to a glut in the tobacco market. This surplus has resulted in lower prices, which are devastating to small-scale tobacco farmers. According to the World Bank, between 1985 and 2000, the inflation-adjusted price of tobacco dropped 37%.[52] Tobacco is the most widely smuggled legal product.[53]
70
+
71
+ Tobacco production requires the use of large amounts of pesticides. Tobacco companies recommend up to 16 separate applications of pesticides just in the period between planting the seeds in greenhouses and transplanting the young plants to the field.[54] Pesticide use has been worsened by the desire to produce larger crops in less time because of the decreasing market value of tobacco. Pesticides often harm tobacco farmers because they are unaware of the health effects and the proper safety protocol for working with pesticides. These pesticides, as well as fertilizers, end up in the soil, waterways, and the food chain.[55] Coupled with child labor, pesticides pose an even greater threat. Early exposure to pesticides may increase a child's lifelong cancer risk, as well as harm his or her nervous and immune systems.[56]
72
+
73
+ As with all crops, tobacco crops extract nutrients (such as phosphorus, nitrogen, and potassium) from soil, decreasing its fertility.[57]
74
+
75
+ Furthermore, the wood used to cure tobacco in some places leads to deforestation. While some big tobacco producers such as China and the United States have access to petroleum, coal, and natural gas, which can be used as alternatives to wood, most developing countries still rely on wood in the curing process.[57] Brazil alone uses the wood of 60 million trees per year for curing, packaging, and rolling cigarettes.[54]
76
+
77
+ In 2017, the WHO released a study on the environmental effects of tobacco.[58]
78
+
79
+ Several tobacco plants have been used as model organisms in genetics. Tobacco BY-2 cells, derived from N. tabacum cultivar 'Bright Yellow-2', are among the most important research tools in plant cytology.[59] Tobacco has played a pioneering role in callus culture research and the elucidation of the mechanism by which kinetin works, laying the groundwork for modern agricultural biotechnology. The first genetically modified plant was produced in 1982, using Agrobacterium tumefaciens to create an antibiotic-resistant tobacco plant.[60] This research laid the groundwork for all genetically modified crops.[61]
80
+
81
+ Because of its importance as a research tool, transgenic tobacco was the first GM crop to be tested in field trials, in the United States and France in 1986; in 1993, China became the first country in the world to approve commercial planting of a GM crop, which was tobacco.[62]
82
+
83
+ Many varieties of transgenic tobacco have been intensively tested in field trials. Agronomic traits such as resistance to pathogens (viruses, particularly the tobacco mosaic virus (TMV); fungi; bacteria and nematodes); weed management via herbicide tolerance; resistance against insect pests; resistance to drought and cold; production of useful products such as pharmaceuticals; and use of GM plants for bioremediation have all been tested in over 400 field trials using tobacco.[63]
84
+
85
+ Currently, only the US is producing GM tobacco.[62][63] The Chinese virus-resistant tobacco was withdrawn from the market in China in 1997.[64]:3 From 2002 to 2010, cigarettes made with GM tobacco with reduced nicotine content were available in the US under the market name Quest.[63][65]
86
+
87
+ Tobacco is consumed in many forms and through a number of different methods. Some examples are:
88
+
89
+ Smoking in public was, for a long time, reserved for men, and when done by women was sometimes associated with promiscuity; in Japan, during the Edo period, prostitutes and their clients often approached one another under the guise of offering a smoke. The same was true in 19th-century Europe.[69]
90
+
91
+ Following the American Civil War, the use of tobacco, primarily in cigars, became associated with masculinity and power. Today, tobacco use is often stigmatized; this has spawned quitting associations and antismoking campaigns.[70][71] Bhutan is the only country in the world where tobacco sales are illegal.[72] Due to its propensity for causing detumescence and erectile dysfunction, some studies have described tobacco as an anaphrodisiacal substance.[73]
92
+
93
+ Research on tobacco use is limited mainly to smoking, which has been studied more extensively than any other form of consumption. An estimated 1.1 billion people, and up to one-third of the adult population, use tobacco in some form.[74] Smoking is more prevalent among men[75] (however, the gender gap declines with age),[76][77] the poor, and in transitional or developing countries.[78] A study published in Morbidity and Mortality Weekly Report found that in 2019 approximately one in four youths (23.0%) in the U.S. had used a tobacco product during the past 30 days. This represented approximately three in 10 high school students (31.2%) and approximately one in eight middle school students (12.5%).[79]
94
+
95
+ Rates of smoking continue to rise in developing countries, but have leveled off or declined in developed countries.[80] Smoking rates in the United States have dropped by half from 1965 to 2006, falling from 42% to 20.8% in adults.[81] In the developing world, tobacco consumption is rising by 3.4% per year.[82]
96
+
97
+ Tobacco smoking endangers health due to the inhalation of poisonous chemicals in tobacco smoke, such as carbon monoxide, cyanide, and carcinogens, which have been proven to cause heart and lung diseases and cancer.
98
+ Thousands of different substances in cigarette smoke, including polycyclic aromatic hydrocarbons (such as benzopyrene), formaldehyde, cadmium, nickel, arsenic, tobacco-specific nitrosamines, and phenols contribute to the harmful effects of smoking.[83]
99
+
100
+ According to the World Health Organization (WHO), tobacco is the single greatest cause of preventable death globally.[84] The WHO estimates that tobacco caused 5.4 million deaths in 2004[85] and 100 million deaths over the course of the 20th century.[86] Similarly, the United States Centers for Disease Control and Prevention describe tobacco use as "the single most important preventable risk to human health in developed countries and an important cause of premature death worldwide."[87] Due to these health consequences, it is estimated that a 10 hectare field of tobacco used for cigarettes causes 30 deaths per year – 10 from lung cancer and 20 from cigarette-induced diseases like cardiac arrest, gangrene, bladder cancer, mouth cancer, etc.[88]
101
+
102
+ The harms caused by inhalation of tobacco smoke include diseases of the heart and lungs, with smoking being a major risk factor for heart attacks, strokes, chronic obstructive pulmonary disease (emphysema), and cancer (particularly cancers of the lungs, larynx, mouth, and pancreas). Cancer is caused by inhaling carcinogenic substances in tobacco smoke.
103
+
104
+ Inhaling secondhand tobacco smoke (which has been exhaled by a smoker) can cause lung cancer in nonsmoking adults. In the United States, about 3,000 adults die each year due to lung cancer from secondhand smoke exposure. Heart disease caused by secondhand smoke kills around 46,000 nonsmokers every year.[89]
105
+
106
+ The addictive alkaloid nicotine is a stimulant, and popularly known as the most characteristic constituent of tobacco. In drug effect preference questionnaires, a rough indicator of addictive potential, nicotine scores almost as highly as opioids.[90] Users typically develop tolerance and dependence.[91][92] Nicotine is known to produce conditioned place preference, a sign of psychological reinforcement value.[93] In one medical study, tobacco's overall harm to users and others was determined to be 3 percent below cocaine, and 13 percent above amphetamines, ranking it the 6th most harmful of the 20 drugs assessed.[94]
107
+
108
+ Polonium-210 is a radioactive trace contaminant of tobacco, providing additional explanation for the link between smoking and bronchial cancer.[95] A 1968 study found 0.33-0.36 picocurie polonium-210 (about 0.000,000,000,077 microgram) per gram cigarette tobacco,[96] a tiny fraction of the lethal dose of 1 microgram.[97]
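The mass given in parentheses follows from the quoted activity. The sketch below is a hedged reconstruction using standard physical constants for polonium-210 (half-life of about 138.4 days, molar mass 210 g/mol, and the becquerel-per-curie conversion), none of which appear in the source text:

```python
import math

# Convert ~0.35 picocurie of Po-210 per gram of tobacco into a mass, to check the
# "about 0.000,000,000,077 microgram" figure quoted in the text.
HALF_LIFE_S = 138.4 * 86_400    # Po-210 half-life in seconds (assumed standard value)
AVOGADRO = 6.022e23             # atoms per mole
MOLAR_MASS_G = 210.0            # grams per mole of Po-210
BQ_PER_CURIE = 3.7e10           # becquerels per curie

decay_constant = math.log(2) / HALF_LIFE_S                     # per second
specific_activity = decay_constant * AVOGADRO / MOLAR_MASS_G   # ~1.66e14 Bq per gram

activity_bq = 0.35e-12 * BQ_PER_CURIE                          # 0.35 pCi, mid-range of 0.33-0.36
mass_micrograms = activity_bq / specific_activity * 1e6
print(f"{mass_micrograms:.1e} microgram")                      # ~7.8e-11, matching the quoted figure
```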
109
+
110
+ Tobacco has a significant economic impact. The global tobacco market in 2010 was estimated at US$760 billion, excluding China.[98] Statista estimates that in the U.S. alone, the tobacco industry has a market of US$121 billion,[99] despite the fact that the CDC reports that US smoking rates are declining steadily.[100] In the US, the decline in the number of smokers, the end of the Tobacco Transition Payment Program in 2014, and competition from growers in other countries made tobacco farming economics more challenging.[101]
111
+
112
+ Of the 1.22 billion smokers worldwide, 1 billion of them live in developing or transitional economies, and much of the disease burden and premature mortality attributable to tobacco use disproportionately affect the poor.[78] While smoking prevalence has declined in many developed countries, it remains high in others, and is increasing among women and in developing countries. Between one-fifth and two-thirds of men in most populations smoke. Women's smoking rates vary more widely but rarely equal male rates.[102]
113
+
114
+ In Indonesia, the lowest income group spends 15% of its total expenditures on tobacco. In Egypt, more than 10% of low-income household expenditure is on tobacco. The poorest 20% of households in Mexico spend 11% of their income on tobacco.[103]
115
+
116
+ The tobacco industry advertises its products through a variety of media, including sponsorship, particularly of sporting events. Because of the health risks of these products, this is now one of the most highly regulated forms of marketing. Some or all forms of tobacco advertising are banned in many countries.
117
+
en/5586.html.txt ADDED
@@ -0,0 +1,211 @@
1
+
2
+
3
+
4
+
5
+ The Renaissance (UK: /rɪˈneɪsəns/ rin-AY-sənss, US: /ˈrɛnəsɑːns/ (listen) REN-ə-sahnss)[2][a] was a period in European history marking the transition from the Middle Ages to Modernity and covering the 15th and 16th centuries. It occurred after the Crisis of the Late Middle Ages and was associated with great social change. In addition to the standard periodization, proponents of a long Renaissance put its beginning in the 14th century and its end in the 17th century. The traditional view focuses more on the early modern aspects of the Renaissance and argues that it was a break from the past, but many historians today focus more on its medieval aspects and argue that it was an extension of the Middle Ages.[4][5]
6
+
7
+ The intellectual basis of the Renaissance was its version of humanism, derived from the concept of Roman Humanitas and the rediscovery of classical Greek philosophy, such as that of Protagoras, who said that "Man is the measure of all things." This new thinking became manifest in art, architecture, politics, science and literature. Early examples were the development of perspective in oil painting and the recycled knowledge of how to make concrete. Although the invention of metal movable type sped the dissemination of ideas from the later 15th century, the changes of the Renaissance were not uniformly experienced across Europe: the first traces appear in Italy as early as the late 13th century, in particular with the writings of Dante and the paintings of Giotto.
8
+
9
+ As a cultural movement, the Renaissance encompassed innovative flowering of Latin and vernacular literatures, beginning with the 14th-century resurgence of learning based on classical sources, which contemporaries credited to Petrarch; the development of linear perspective and other techniques of rendering a more natural reality in painting; and gradual but widespread educational reform. In politics, the Renaissance contributed to the development of the customs and conventions of diplomacy, and in science to an increased reliance on observation and inductive reasoning. Although the Renaissance saw revolutions in many intellectual pursuits, as well as social and political upheaval, it is perhaps best known for its artistic developments and the contributions of such polymaths as Leonardo da Vinci and Michelangelo, who inspired the term "Renaissance man".[6][7]
10
+
11
+ The Renaissance began in the 14th century in Florence, Italy.[8] Various theories have been proposed to account for its origins and characteristics, focusing on a variety of factors including the social and civic peculiarities of Florence at the time: its political structure, the patronage of its dominant family, the Medici,[9][10] and the migration of Greek scholars and their texts to Italy following the Fall of Constantinople to the Ottoman Turks, who had inherited from the Timurid Renaissance.[11][12][13] Other major centres were northern Italian city-states such as Venice, Genoa, Milan, Bologna, and Rome during the Renaissance Papacy, or Belgian cities such as Bruges, Ghent, Brussels, Leuven or Antwerp.
12
+
13
+ The Renaissance has a long and complex historiography, and, in line with general scepticism of discrete periodizations, there has been much debate among historians reacting to the 19th-century glorification of the "Renaissance" and individual culture heroes as "Renaissance men", questioning the usefulness of Renaissance as a term and as a historical delineation.[14] The art historian Erwin Panofsky observed of this resistance to the concept of "Renaissance":
14
+
15
+ It is perhaps no accident that the factuality of the Italian Renaissance has been most vigorously questioned by those who are not obliged to take a professional interest in the aesthetic aspects of civilization – historians of economic and social developments, political and religious situations, and, most particularly, natural science – but only exceptionally by students of literature and hardly ever by historians of Art.[15]
16
+
17
+ Some observers have called into question whether the Renaissance was a cultural "advance" from the Middle Ages, instead seeing it as a period of pessimism and nostalgia for classical antiquity,[16] while social and economic historians, especially of the longue durée, have instead focused on the continuity between the two eras,[17] which are linked, as Panofsky observed, "by a thousand ties".[18]
18
+
19
+ The term rinascita ('rebirth') first appeared in Giorgio Vasari's Lives of the Artists (c. 1550), anglicized as the Renaissance in the 1830s.[19] The word has also been extended to other historical and cultural movements, such as the Carolingian Renaissance (8th and 9th centuries), Ottonian Renaissance (10th and 11th century), and the Renaissance of the 12th century.[20]
20
+
21
+ The Renaissance was a cultural movement that profoundly affected European intellectual life in the early modern period. Beginning in Italy, and spreading to the rest of Europe by the 16th century, its influence was felt in art, architecture, philosophy, literature, music, science and technology, politics, religion, and other aspects of intellectual inquiry. Renaissance scholars employed the humanist method in study, and searched for realism and human emotion in art.[21]
22
+
23
+ Renaissance humanists such as Poggio Bracciolini sought out in Europe's monastic libraries the Latin literary, historical, and oratorical texts of Antiquity, while the Fall of Constantinople (1453) generated a wave of émigré Greek scholars bringing precious manuscripts in ancient Greek, many of which had fallen into obscurity in the West. It is in their new focus on literary and historical texts that Renaissance scholars differed so markedly from the medieval scholars of the Renaissance of the 12th century, who had focused on studying Greek and Arabic works of natural sciences, philosophy and mathematics, rather than on such cultural texts.
24
+
25
+ In the revival of neo-Platonism Renaissance humanists did not reject Christianity; quite the contrary, many of the greatest works of the Renaissance were devoted to it, and the Church patronized many works of Renaissance art. However, a subtle shift took place in the way that intellectuals approached religion that was reflected in many other areas of cultural life.[22] In addition, many Greek Christian works, including the Greek New Testament, were brought back from Byzantium to Western Europe and engaged Western scholars for the first time since late antiquity. This new engagement with Greek Christian works, and particularly the return to the original Greek of the New Testament promoted by humanists Lorenzo Valla and Erasmus, would help pave the way for the Protestant Reformation.
26
+
27
+ Well after the first artistic return to classicism had been exemplified in the sculpture of Nicola Pisano, Florentine painters led by Masaccio strove to portray the human form realistically, developing techniques to render perspective and light more naturally. Political philosophers, most famously Niccolò Machiavelli, sought to describe political life as it really was, that is, to understand it rationally. In a critical contribution to Italian Renaissance humanism, Giovanni Pico della Mirandola wrote the famous text De hominis dignitate (Oration on the Dignity of Man, 1486), which consists of a series of theses on philosophy, natural thought, faith and magic defended against any opponent on the grounds of reason. In addition to studying classical Latin and Greek, Renaissance authors also began increasingly to use vernacular languages; combined with the introduction of the printing press, this would allow many more people access to books, especially the Bible.[23]
28
+
29
+ In all, the Renaissance could be viewed as an attempt by intellectuals to study and improve the secular and worldly, both through the revival of ideas from antiquity, and through novel approaches to thought. Some scholars, such as Rodney Stark,[24] play down the Renaissance in favour of the earlier innovations of the Italian city-states in the High Middle Ages, which married responsive government, Christianity and the birth of capitalism. This analysis argues that, whereas the great European states (France and Spain) were absolutist monarchies, and others were under direct Church control, the independent city republics of Italy took over the principles of capitalism invented on monastic estates and set off a vast unprecedented commercial revolution that preceded and financed the Renaissance.
30
+
31
+ Many argue that the ideas characterizing the Renaissance had their origin in late 13th-century Florence, in particular with the writings of Dante Alighieri (1265–1321) and Petrarch (1304–1374), as well as the paintings of Giotto di Bondone (1267–1337). Some writers date the Renaissance quite precisely; one proposed starting point is 1401, when the rival geniuses Lorenzo Ghiberti and Filippo Brunelleschi competed for the contract to build the bronze doors for the Baptistery of the Florence Cathedral (Ghiberti won).[25] Others see more general competition between artists and polymaths such as Brunelleschi, Ghiberti, Donatello, and Masaccio for artistic commissions as sparking the creativity of the Renaissance. Yet it remains much debated why the Renaissance began in Italy, and why it began when it did. Accordingly, several theories have been put forward to explain its origins.
32
+
33
+ During the Renaissance, money and art went hand in hand. Artists depended entirely on patrons while the patrons needed money to foster artistic talent. Wealth was brought to Italy in the 14th, 15th, and 16th centuries by expanding trade into Asia and Europe. Silver mining in Tyrol increased the flow of money. Luxuries from the Muslim world, brought home during the Crusades, increased the prosperity of Genoa and Venice.[26]
34
+
35
+ Jules Michelet defined the 16th-century Renaissance in France as a period in Europe's cultural history that represented a break from the Middle Ages, creating a modern understanding of humanity and its place in the world.[27]
36
+
37
+ In stark contrast to the High Middle Ages, when Latin scholars focused almost entirely on studying Greek and Arabic works of natural science, philosophy and mathematics,[28] Renaissance scholars were most interested in recovering and studying Latin and Greek literary, historical, and oratorical texts. Broadly speaking, this began in the 14th century with a Latin phase, when Renaissance scholars such as Petrarch, Coluccio Salutati (1331–1406), Niccolò de' Niccoli (1364–1437) and Poggio Bracciolini (1380–1459) scoured the libraries of Europe in search of works by such Latin authors as Cicero, Lucretius, Livy and Seneca.[29] By the early 15th century, the bulk of such surviving Latin literature had been recovered; the Greek phase of Renaissance humanism was under way, as Western European scholars turned to recovering ancient Greek literary, historical, oratorical and theological texts.[30]
38
+
39
+ Whereas Latin texts had been preserved and studied in Western Europe since late antiquity, the study of ancient Greek texts was very limited in medieval Western Europe. Ancient Greek works on science, maths and philosophy had been studied since the High Middle Ages in Western Europe and in the Islamic Golden Age (normally in translation), but Greek literary, oratorical and historical works (such as Homer, the Greek dramatists, Demosthenes and Thucydides) were not studied in either the Latin or medieval Islamic worlds; in the Middle Ages these sorts of texts were only studied by Byzantine scholars. Some argue that the Timurid Renaissance in Samarkand was linked with the Ottoman Empire, whose conquests led to the migration of Greek scholars to Italian cities.[31][32][11][12] One of the greatest achievements of Renaissance scholars was to bring this entire class of Greek cultural works back into Western Europe for the first time since late antiquity.
40
+
41
+ Muslim logicians had inherited Greek ideas after they had invaded and conquered Egypt and the Levant. Their translations and commentaries on these ideas worked their way through the Arab West into Iberia and Sicily, which became important centers for this transmission of ideas. From the 11th to the 13th century, many schools dedicated to the translation of philosophical and scientific works from Classical Arabic to Medieval Latin were established in Iberia, most notably the Toledo School of Translators. This work of translation from Islamic culture, though largely unplanned and disorganized, constituted one of the greatest transmissions of ideas in history.[33] The movement to reintegrate the regular study of Greek literary, historical, oratorical and theological texts back into the Western European curriculum is usually dated to the 1396 invitation from Coluccio Salutati to the Byzantine diplomat and scholar Manuel Chrysoloras (c. 1355–1415) to teach Greek in Florence.[34] This legacy was continued by a number of expatriate Greek scholars, from Basilios Bessarion to Leo Allatius.
42
+
43
+ The unique political structures of late Middle Ages Italy have led some to theorize that its unusual social climate allowed the emergence of a rare cultural efflorescence. Italy did not exist as a political entity in the early modern period. Instead, it was divided into smaller city states and territories: the Kingdom of Naples controlled the south, the Republic of Florence and the Papal States at the center, the Milanese and the Genoese to the north and west respectively, and the Venetians to the east. Fifteenth-century Italy was one of the most urbanised areas in Europe.[35] Many of its cities stood among the ruins of ancient Roman buildings; it seems likely that the classical nature of the Renaissance was linked to its origin in the Roman Empire's heartland.[36]
44
+
45
+ Historian and political philosopher Quentin Skinner points out that Otto of Freising (c. 1114–1158), a German bishop visiting north Italy during the 12th century, noticed a widespread new form of political and social organization, observing that Italy appeared to have exited from Feudalism so that its society was based on merchants and commerce. Linked to this was anti-monarchical thinking, represented in the famous early Renaissance fresco cycle The Allegory of Good and Bad Government by Ambrogio Lorenzetti (painted 1338–1340), whose strong message is about the virtues of fairness, justice, republicanism and good administration. Holding both Church and Empire at bay, these city republics were devoted to notions of liberty. Skinner reports that there were many defences of liberty such as the Matteo Palmieri (1406–1475) celebration of Florentine genius not only in art, sculpture and architecture, but "the remarkable efflorescence of moral, social and political philosophy that occurred in Florence at the same time".[37]
46
+
47
+ Italian cities and states at this time, including the Republic of Florence, were also notable as merchant republics, especially the Republic of Venice. Although in practice these were oligarchical, and bore little resemblance to a modern democracy, they did have democratic features and were responsive states, with forms of participation in governance and belief in liberty.[37][38][39] The relative political freedom they afforded was conducive to academic and artistic advancement.[40] Likewise, the position of Italian cities such as Venice as great trading centres made them intellectual crossroads. Merchants brought with them ideas from far corners of the globe, particularly the Levant. Venice was Europe's gateway to trade with the East, and a producer of fine glass, while Florence was a capital of textiles. The wealth such business brought to Italy meant large public and private artistic projects could be commissioned and individuals had more leisure time for study.[40]
48
+
49
+ One theory that has been advanced is that the devastation in Florence caused by the Black Death, which hit Europe between 1348 and 1350, resulted in a shift in the world view of people in 14th century Italy. Italy was particularly badly hit by the plague, and it has been speculated that the resulting familiarity with death caused thinkers to dwell more on their lives on Earth, rather than on spirituality and the afterlife.[41] It has also been argued that the Black Death prompted a new wave of piety, manifested in the sponsorship of religious works of art.[42] However, this does not fully explain why the Renaissance occurred specifically in Italy in the 14th century. The Black Death was a pandemic that affected all of Europe in the ways described, not only Italy. The Renaissance's emergence in Italy was most likely the result of the complex interaction of the above factors.[14]
50
+
51
+ The plague was carried by fleas on sailing vessels returning from the ports of Asia, spreading quickly due to lack of proper sanitation: the population of England, then about 4.2 million, lost 1.4 million people to the bubonic plague. Florence's population was nearly halved in the year 1347. As a result of the decimation of the populace, the value of the working class increased, and commoners came to enjoy more freedom. To answer the increased need for labor, workers traveled in search of the most favorable position economically.[43]
52
+
53
+ The demographic decline due to the plague had economic consequences: the prices of food dropped and land values declined by 30–40% in most parts of Europe between 1350 and 1400.[44] Landholders faced a great loss, but for ordinary men and women it was a windfall. The survivors of the plague found not only that the prices of food were cheaper but also that lands were more abundant, and many of them inherited property from their dead relatives.
54
+
55
+ The spread of disease was significantly more rampant in areas of poverty. Epidemics ravaged cities, striking children particularly hard. Plagues were easily spread by lice, unsanitary drinking water, armies, or by poor sanitation. Children were hit the hardest because many diseases, such as typhus and syphilis, target the immune system, leaving young children without a fighting chance. Children in city dwellings were more affected by the spread of disease than the children of the wealthy.[45]
56
+
57
+ The Black Death caused greater upheaval to Florence's social and political structure than later epidemics. Despite a significant number of deaths among members of the ruling classes, the government of Florence continued to function during this period. Formal meetings of elected representatives were suspended during the height of the epidemic due to the chaotic conditions in the city, but a small group of officials was appointed to conduct the affairs of the city, which ensured continuity of government.[46]
58
+
59
+ It has long been a matter of debate why the Renaissance began in Florence, and not elsewhere in Italy. Scholars have noted several features unique to Florentine cultural life that may have caused such a cultural movement. Many have emphasized the role played by the Medici, a banking family and later ducal ruling house, in patronizing and stimulating the arts. Lorenzo de' Medici (1449–1492) was the catalyst for an enormous amount of arts patronage, encouraging his countrymen to commission works from the leading artists of Florence, including Leonardo da Vinci, Sandro Botticelli, and Michelangelo Buonarroti.[9] Works by Neri di Bicci, Botticelli, da Vinci and Filippino Lippi had been commissioned additionally by the Convent of San Donato in Scopeto in Florence.[47]
60
+
61
+ The Renaissance was certainly underway before Lorenzo de' Medici came to power – indeed, before the Medici family itself achieved hegemony in Florentine society. Some historians have postulated that Florence was the birthplace of the Renaissance as a result of luck, i.e., because "Great Men" were born there by chance:[48] Leonardo da Vinci, Botticelli and Michelangelo were all born in Tuscany. Arguing that such chance seems improbable, other historians have contended that these "Great Men" were only able to rise to prominence because of the prevailing cultural conditions at the time.[49]
62
+
63
+ In some ways, Renaissance humanism was not a philosophy but a method of learning. In contrast to the medieval scholastic mode, which focused on resolving contradictions between authors, Renaissance humanists would study ancient texts in the original and appraise them through a combination of reasoning and empirical evidence. Humanist education was based on the programme of 'Studia Humanitatis', the study of five humanities: poetry, grammar, history, moral philosophy and rhetoric. Although historians have sometimes struggled to define humanism precisely, most have settled on "a middle of the road definition... the movement to recover, interpret, and assimilate the language, literature, learning and values of ancient Greece and Rome".[50] Above all, humanists asserted "the genius of man ... the unique and extraordinary ability of the human mind".[51]
64
+
65
+ Humanist scholars shaped the intellectual landscape throughout the early modern period. Political philosophers such as Niccolò Machiavelli and Thomas More revived the ideas of Greek and Roman thinkers and applied them in critiques of contemporary government. Pico della Mirandola wrote the "manifesto" of the Renaissance, the Oration on the Dignity of Man, a vibrant defence of thinking. Matteo Palmieri (1406–1475), another humanist, is best known for his work Della vita civile ("On Civic Life"; printed 1528), which advocated civic humanism, and for his influence in refining the Tuscan vernacular to the same level as Latin. Palmieri drew on Roman philosophers and theorists, especially Cicero (who, like Palmieri, lived an active public life as a citizen and official, as well as a theorist and philosopher), and also Quintilian. Perhaps the most succinct expression of his perspective on humanism is in a 1465 poetic work La città di vita, but an earlier work, Della vita civile, is more wide-ranging. Composed as a series of dialogues set in a country house in the Mugello countryside outside Florence during the plague of 1430, the work expounds on the qualities of the ideal citizen. The dialogues include ideas about how children develop mentally and physically, how citizens can conduct themselves morally, how citizens and states can ensure probity in public life, and an important debate on the difference between that which is pragmatically useful and that which is honest.
66
+
67
+ The humanists believed that it was important to transcend to the afterlife with a perfect mind and body, which could be attained with education. The purpose of humanism was to create a universal man whose person combined intellectual and physical excellence and who was capable of functioning honorably in virtually any situation.[53] This ideology was referred to as the uomo universale, an ancient Greco-Roman ideal. Education during the Renaissance was mainly composed of ancient literature and history as it was thought that the classics provided moral instruction and an intensive understanding of human behavior.
68
+
69
+ A unique characteristic of some Renaissance libraries is that they were open to the public. These libraries were places where ideas were exchanged and where scholarship and reading were considered both pleasurable and beneficial to the mind and soul. As freethinking was a hallmark of the age, many libraries contained a wide range of writers. Classical texts could be found alongside humanist writings. These informal associations of intellectuals profoundly influenced Renaissance culture. Some of the richest "bibliophiles" built libraries as temples to books and knowledge. A number of libraries appeared as manifestations of immense wealth joined with a love of books. In some cases, cultivated library builders were also committed to offering others the opportunity to use their collections. Prominent aristocrats and princes of the Church created great libraries for the use of their courts, called "court libraries", which were housed in lavishly designed monumental buildings decorated with ornate woodwork, their walls adorned with frescoes (Murray, Stuart A.P.).
70
+
71
+ Renaissance art marks a cultural rebirth at the close of the Middle Ages and the rise of the Modern world. One of the distinguishing features of Renaissance art was its development of highly realistic linear perspective. Giotto di Bondone (1267–1337) is credited with first treating a painting as a window into space, but it was not until the demonstrations of architect Filippo Brunelleschi (1377–1446) and the subsequent writings of Leon Battista Alberti (1404–1472) that perspective was formalized as an artistic technique.[54]
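To make the geometric idea behind linear perspective concrete, here is a minimal illustrative sketch in Python; it is not taken from the article and simplifies what Brunelleschi demonstrated and Alberti later codified: the eye sits at the origin, the picture plane at z = d, and 3-D points are projected onto that plane, so parallel lines receding in depth converge toward a vanishing point.

```python
# Minimal sketch of central (one-point) perspective projection.
# Assumptions: the viewer's eye is at the origin looking along +z,
# and the picture plane is the plane z = d.

def project(point, d=1.0):
    """Project a 3-D point (x, y, z), with z > 0, onto the picture plane z = d."""
    x, y, z = point
    return (d * x / z, d * y / z)

# Two parallel "rails" at x = -1 and x = +1 converge as depth z grows:
for z in (2.0, 4.0, 8.0, 100.0):
    print(project((-1.0, 0.0, z)), project((1.0, 0.0, z)))
# Both projected rails approach the vanishing point (0.0, 0.0) as z increases.
```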
72
+
73
+ The development of perspective was part of a wider trend towards realism in the arts.[55] Painters developed other techniques, studying light, shadow, and, famously in the case of Leonardo da Vinci, human anatomy. Underlying these changes in artistic method was a renewed desire to depict the beauty of nature and to unravel the axioms of aesthetics, with the works of Leonardo, Michelangelo and Raphael representing artistic pinnacles that were much imitated by other artists.[56] Other notable artists include Sandro Botticelli, working for the Medici in Florence, Donatello, another Florentine, and Titian in Venice, among others.
74
+
75
+ In the Netherlands, a particularly vibrant artistic culture developed. The work of Hugo van der Goes and Jan van Eyck was particularly influential on the development of painting in Italy, both technically with the introduction of oil paint and canvas, and stylistically in terms of naturalism in representation. Later, the work of Pieter Brueghel the Elder would inspire artists to depict themes of everyday life.[57]
76
+
77
+ In architecture, Filippo Brunelleschi was foremost in studying the remains of ancient classical buildings. With rediscovered knowledge from the 1st-century writer Vitruvius and the flourishing discipline of mathematics, Brunelleschi formulated the Renaissance style that emulated and improved on classical forms. His major feat of engineering was building the dome of the Florence Cathedral.[58] Another building demonstrating this style is the church of St. Andrew in Mantua, built by Alberti. The outstanding architectural work of the High Renaissance was the rebuilding of St. Peter's Basilica, combining the skills of Bramante, Michelangelo, Raphael, Sangallo and Maderno.
78
+
79
+ During the Renaissance, architects aimed to use columns, pilasters, and entablatures as an integrated system. The Roman orders of columns are used: Tuscan, Doric, Ionic, Corinthian and Composite. These can either be structural, supporting an arcade or architrave, or purely decorative, set against a wall in the form of pilasters. One of the first buildings to use pilasters as an integrated system was the Old Sacristy (1421–1440) by Brunelleschi.[59] Arches, semi-circular or (in the Mannerist style) segmental, are often used in arcades, supported on piers or columns with capitals. There may be a section of entablature between the capital and the springing of the arch. Alberti was one of the first to use the arch on a monumental scale. Renaissance vaults do not have ribs; they are semi-circular or segmental and on a square plan, unlike the Gothic vault, which is frequently rectangular.
80
+
81
+ Renaissance artists were not pagans, although they admired antiquity and kept some ideas and symbols of the medieval past. Nicola Pisano (c. 1220–c. 1278) imitated classical forms by portraying scenes from the Bible. His Annunciation, from the Baptistry at Pisa, demonstrates that classical models influenced Italian art before the Renaissance took root as a literary movement.[60]
82
+
83
+ Applied innovation extended to commerce. At the end of the 15th century Luca Pacioli published the first work on bookkeeping, making him the founder of accounting.[62]
84
+
85
+ The rediscovery of ancient texts and the invention of the printing press democratized learning and allowed ideas to propagate faster and more widely. In the first period of the Italian Renaissance, humanists favoured the study of humanities over natural philosophy or applied mathematics, and their reverence for classical sources further enshrined the Aristotelian and Ptolemaic views of the universe. Writing around 1450, Nicholas Cusanus anticipated the heliocentric worldview of Copernicus, but in a philosophical fashion.
86
+
87
+ Science and art were intermingled in the early Renaissance, with polymath artists such as Leonardo da Vinci making observational drawings of anatomy and nature. Da Vinci set up controlled experiments in water flow, medical dissection, and systematic study of movement and aerodynamics, and he devised principles of research method that led Fritjof Capra to classify him as the "father of modern science".[63] Other examples of Da Vinci's contribution during this period include machines designed to saw marble and lift monoliths, and new discoveries in acoustics, botany, geology, anatomy, and mechanics.[64]
88
+
89
+ A suitable environment had developed to question scientific doctrine. The discovery in 1492 of the New World by Christopher Columbus challenged the classical worldview. The works of Ptolemy (in geography) and Galen (in medicine) were found to not always match everyday observations. As the Protestant Reformation and Counter-Reformation clashed, the Northern Renaissance showed a decisive shift in focus from Aristotelean natural philosophy to chemistry and the biological sciences (botany, anatomy, and medicine).[65] The willingness to question previously held truths and search for new answers resulted in a period of major scientific advancements.
90
+
91
+ Some view this as a "scientific revolution", heralding the beginning of the modern age,[66] others as an acceleration of a continuous process stretching from the ancient world to the present day.[67] Significant scientific advances were made during this time by Galileo Galilei, Tycho Brahe and Johannes Kepler.[68] Copernicus, in De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres), posited that the Earth moved around the Sun. De humani corporis fabrica (On the Workings of the Human Body) by Andreas Vesalius, gave a new confidence to the role of dissection, observation, and the mechanistic view of anatomy.[69]
92
+
93
+ Another important development was in the process for discovery, the scientific method,[69] focusing on empirical evidence and the importance of mathematics, while discarding Aristotelian science. Early and influential proponents of these ideas included Copernicus, Galileo, and Francis Bacon.[70][71] The new scientific method led to great contributions in the fields of astronomy, physics, biology, and anatomy.[c][72]
94
+
95
+ During the Renaissance, extending from 1450 to 1650,[73] every continent was visited and mostly mapped by Europeans, except the south polar continent now known as Antarctica. This development is depicted in the large world map Nova Totius Terrarum Orbis Tabula made by the Dutch cartographer Joan Blaeu in 1648 to commemorate the Peace of Westphalia.
96
+
97
+ In 1492, Christopher Columbus sailed across the Atlantic Ocean from Spain seeking a direct route to India of the Delhi Sultanate. He accidentally stumbled upon the Americas, but believed he had reached the East Indies.
98
+
99
+ In 1606, the Dutch navigator Willem Janszoon sailed from the East Indies in the VOC ship Duyfken and landed in Australia. He charted about 300 km of the west coast of Cape York Peninsula in Queensland. More than thirty Dutch expeditions followed, mapping sections of the north, west and south coasts. In 1642–1643, Abel Tasman circumnavigated the continent, proving that it was not joined to the imagined south polar continent.
100
+
101
+ By 1650, Dutch cartographers had mapped most of the coastline of the continent, which they named New Holland, except the east coast which was charted in 1770 by Captain Cook.
102
+
103
+ The long-imagined south polar continent was eventually sighted in 1820. Throughout the Renaissance it had been known as Terra Australis, or 'Australia' for short. However, after that name was transferred to New Holland in the nineteenth century, the new name of 'Antarctica' was bestowed on the south polar continent.[74]
104
+
105
+ From this changing society emerged a common, unifying musical language, in particular the polyphonic style of the Franco-Flemish school. The development of printing made distribution of music possible on a wide scale. Demand for music as entertainment and as an activity for educated amateurs increased with the emergence of a bourgeois class. Dissemination of chansons, motets, and masses throughout Europe coincided with the unification of polyphonic practice into the fluid style that culminated in the second half of the sixteenth century in the work of composers such as Palestrina, Lassus, Victoria and William Byrd.
106
+
107
+ The new ideals of humanism, although more secular in some aspects, developed against a Christian backdrop, especially in the Northern Renaissance. Much, if not most, of the new art was commissioned by or in dedication to the Church.[22] However, the Renaissance had a profound effect on contemporary theology, particularly in the way people perceived the relationship between man and God.[22] Many of the period's foremost theologians were followers of the humanist method, including Erasmus, Zwingli, Thomas More, Martin Luther, and John Calvin.
108
+
109
+ The Renaissance began in times of religious turmoil. The late Middle Ages was a period of political intrigue surrounding the Papacy, culminating in the Western Schism, in which three men simultaneously claimed to be the true Bishop of Rome.[75] While the schism was resolved by the Council of Constance (1414), a resulting reform movement known as Conciliarism sought to limit the power of the pope. Although the papacy eventually emerged supreme in ecclesiastical matters by the Fifth Council of the Lateran (1511), it was dogged by continued accusations of corruption, most famously in the person of Pope Alexander VI, who was accused variously of simony, nepotism and fathering four children (most of whom were married off, presumably for the consolidation of power) while a cardinal.[76]
110
+
111
+ Churchmen such as Erasmus and Luther proposed reform to the Church, often based on humanist textual criticism of the New Testament.[22] In October 1517 Luther published the 95 Theses, challenging papal authority and criticizing its perceived corruption, particularly with regard to the sale of indulgences.[d] The 95 Theses led to the Reformation, a break with the Roman Catholic Church, which had previously claimed hegemony in Western Europe. Humanism and the Renaissance therefore played a direct role in sparking the Reformation, as well as in many other contemporaneous religious debates and conflicts.
112
+
113
+ Pope Paul III came to the papal throne (1534–1549) after the sack of Rome in 1527, with uncertainties prevalent in the Catholic Church following the Protestant Reformation. Nicolaus Copernicus dedicated De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres) to Paul III, who was the grandfather of Cardinal Alessandro Farnese, who had paintings by Titian, Michelangelo, and Raphael, as well as an important collection of drawings, and who commissioned the masterpiece of Giulio Clovio, arguably the last major illuminated manuscript, the Farnese Hours.
114
+
115
+ By the 15th century, writers, artists, and architects in Italy were well aware of the transformations that were taking place and were using phrases such as modi antichi (in the antique manner) or alle romana et alla antica (in the manner of the Romans and the ancients) to describe their work. In the 1330s Petrarch referred to pre-Christian times as antiqua (ancient) and to the Christian period as nova (new).[77] From Petrarch's Italian perspective, this new period (which included his own time) was an age of national eclipse.[77]
116
+ Leonardo Bruni was the first to use tripartite periodization in his History of the Florentine People (1442).[78] Bruni's first two periods were based on those of Petrarch, but he added a third period because he believed that Italy was no longer in a state of decline. Flavio Biondo used a similar framework in Decades of History from the Deterioration of the Roman Empire (1439–1453).
117
+
118
+ Humanist historians argued that contemporary scholarship restored direct links to the classical period, thus bypassing the Medieval period, which they then named for the first time the "Middle Ages". The term first appears in Latin in 1469 as media tempestas (middle times).[79] The term rinascita (rebirth) first appeared, however, in its broad sense in Giorgio Vasari's Lives of the Artists, 1550, revised 1568.[80][81] Vasari divides the age into three phases: the first phase contains Cimabue, Giotto, and Arnolfo di Cambio; the second phase contains Masaccio, Brunelleschi, and Donatello; the third centers on Leonardo da Vinci and culminates with Michelangelo. It was not just the growing awareness of classical antiquity that drove this development, according to Vasari, but also the growing desire to study and imitate nature.[82]
119
+
120
+ In the 15th century, the Renaissance spread rapidly from its birthplace in Florence to the rest of Italy and soon to the rest of Europe. The invention of the printing press by German printer Johannes Gutenberg allowed the rapid transmission of these new ideas. As it spread, its ideas diversified and changed, being adapted to local culture. In the 20th century, scholars began to break the Renaissance into regional and national movements.
121
+
122
+ In England, the sixteenth century marked the beginning of the English Renaissance with the work of writers William Shakespeare, Christopher Marlowe, Edmund Spenser, Sir Thomas More, Francis Bacon, Sir Philip Sidney, as well as great artists, architects (such as Inigo Jones who introduced Italianate architecture to England), and composers such as Thomas Tallis, John Taverner, and William Byrd.
123
+
124
+ The word "Renaissance" is borrowed from the French language, where it means "re-birth". It was first used in the eighteenth century and was later popularized by French historian Jules Michelet (1798–1874) in his 1855 work, Histoire de France (History of France).[83][84]
125
+
126
+ In 1495 the Italian Renaissance arrived in France, imported by King Charles VIII after his invasion of Italy. A factor that promoted the spread of secularism was the inability of the Church to offer assistance against the Black Death. Francis I imported Italian art and artists, including Leonardo da Vinci, and built ornate palaces at great expense. Writers such as François Rabelais, Pierre de Ronsard, Joachim du Bellay and Michel de Montaigne, painters such as Jean Clouet, and musicians such as Jean Mouton also borrowed from the spirit of the Renaissance.
127
+
128
+ In 1533, a fourteen-year-old Caterina de' Medici (1519–1589), born in Florence to Lorenzo de' Medici, Duke of Urbino and Madeleine de la Tour d'Auvergne, married Henry II of France, second son of King Francis I and Queen Claude. Though she became famous and infamous for her role in France's religious wars, she made a direct contribution in bringing arts, sciences and music (including the origins of ballet) to the French court from her native Florence.
129
+
130
+ In the second half of the 15th century, the Renaissance spirit spread to Germany and the Low Countries, where the development of the printing press (ca. 1450) and Renaissance artists such as Albrecht Dürer (1471–1528) predated the influence from Italy. In the early Protestant areas of the country humanism became closely linked to the turmoil of the Protestant Reformation, and the art and writing of the German Renaissance frequently reflected this dispute.[85] However, the Gothic style and medieval scholastic philosophy remained dominant until the turn of the 16th century. Emperor Maximilian I of Habsburg (ruling 1493–1519) was the first truly Renaissance monarch of the Holy Roman Empire.
131
+
132
+ After Italy, Hungary was the first European country where the Renaissance appeared.[86] The Renaissance style came directly from Italy during the Quattrocento to Hungary first in the Central European region, thanks to the development of early Hungarian-Italian relationships—not only in dynastic connections, but also in cultural, humanistic and commercial relations—growing in strength from the 14th century. A second reason was the affinity between Hungarian and Italian Gothic styles: exaggerated wall openings were avoided in favour of clean and light structures. Large-scale building schemes provided ample and long-term work for the artists, for example, the building of the Friss (New) Castle in Buda, the castles of Visegrád, Tata and Várpalota. In Sigismund's court there were patrons such as Pipo Spano, a descendant of the Scolari family of Florence, who invited Manetto Ammanatini and Masolino da Pannicale to Hungary.[87]
133
+
134
+ The new Italian trend combined with existing national traditions to create a particular local Renaissance art. Acceptance of Renaissance art was furthered by the continuous arrival of humanist thought in the country. Many young Hungarians studying at Italian universities came closer to the Florentine humanist center, so a direct connection with Florence evolved. The growing number of Italian traders moving to Hungary, especially to Buda, helped this process. New thoughts were carried by the humanist prelates, among them Vitéz János, archbishop of Esztergom, one of the founders of Hungarian humanism.[88] During the long reign of Emperor Sigismund of Luxemburg the Royal Castle of Buda became probably the largest Gothic palace of the late Middle Ages. King Matthias Corvinus (r. 1458–1490) rebuilt the palace in early Renaissance style and further expanded it.[89][90]
135
+
136
+ After the marriage in 1476 of King Matthias to Beatrice of Naples, Buda became one of the most important artistic centres of the Renaissance north of the Alps.[91] The most important humanists living in Matthias' court were Antonio Bonfini and the famous Hungarian poet Janus Pannonius.[91] András Hess set up a printing press in Buda in 1472. Matthias Corvinus's library, the Bibliotheca Corviniana, was Europe's greatest collection of secular books (historical chronicles, philosophic and scientific works) in the 15th century. His library was second only in size to the Vatican Library. (However, the Vatican Library mainly contained Bibles and religious materials.)[92] In 1489, Bartolomeo della Fonte of Florence wrote that Lorenzo de' Medici founded his own Greek-Latin library encouraged by the example of the Hungarian king. Corvinus's library is inscribed in UNESCO's Memory of the World Register.[93]
137
+
138
+ Matthias started at least two major building projects.[94] The works in Buda and Visegrád began in about 1479.[95] Two new wings and a hanging garden were built at the royal castle of Buda, and the palace at Visegrád was rebuilt in Renaissance style.[95][96] Matthias appointed the Italian Chimenti Camicia and the Dalmatian Giovanni Dalmata to direct these projects.[95] Matthias commissioned the leading Italian artists of his age to embellish his palaces: for instance, the sculptor Benedetto da Majano and the painters Filippino Lippi and Andrea Mantegna worked for him.[97] A copy of Mantegna's portrait of Matthias has survived.[98] Matthias also hired the Italian military engineer Aristotele Fioravanti to direct the rebuilding of the forts along the southern frontier.[99] He had new monasteries built in Late Gothic style for the Franciscans in Kolozsvár, Szeged and Hunyad, and for the Paulines in Fejéregyháza.[100][101] In the spring of 1485, Leonardo da Vinci travelled to Hungary on behalf of Sforza to meet King Matthias Corvinus, and was commissioned by him to paint a Madonna.[102]
139
+
140
+ Matthias enjoyed the company of Humanists and had lively discussions on various topics with them.[103] The fame of his magnanimity encouraged many scholars—mostly Italian—to settle in Buda.[104] Antonio Bonfini, Pietro Ranzano, Bartolomeo Fonzio, and Francesco Bandini spent many years in Matthias's court.[105][103] This circle of educated men introduced the ideas of Neoplatonism to Hungary.[106][107] Like all intellectuals of his age, Matthias was convinced that the movements and combinations of the stars and planets exercised influence on individuals' lives and on the history of nations.[108] Galeotto Marzio described him as "king and astrologer", and Antonio Bonfini said Matthias "never did anything without consulting the stars".[109] Upon his request, the famous astronomers of the age, Johannes Regiomontanus and Marcin Bylica, set up an observatory in Buda and equipped it with astrolabes and celestial globes.[110] Regiomontanus dedicated to Matthias his book on navigation, which was later used by Christopher Columbus.[104]
141
+
142
+ Other important figures of the Hungarian Renaissance include Bálint Balassi (poet), Sebestyén Tinódi Lantos (poet), Bálint Bakfark (composer and lutenist), and Master MS (fresco painter).
143
+
144
+ Culture in the Netherlands at the end of the 15th century was influenced by the Italian Renaissance through trade via Bruges, which made Flanders wealthy. Its nobles commissioned artists who became known across Europe.[111] In science, the anatomist Andreas Vesalius led the way; in cartography, Gerardus Mercator's map assisted explorers and navigators. In art, Dutch and Flemish Renaissance painting ranged from the strange work of Hieronymus Bosch[112] to the everyday life depictions of Pieter Brueghel the Elder.[111]
145
+
146
+ The Renaissance in Northern Europe has been termed the "Northern Renaissance". While Renaissance ideas were moving north from Italy, there was a simultaneous southward spread of some areas of innovation, particularly in music.[113] The music of the 15th-century Burgundian School defined the beginning of the Renaissance in music, and the polyphony of the Netherlanders, as it moved with the musicians themselves into Italy, formed the core of the first true international style in music since the standardization of Gregorian Chant in the 9th century.[113] The culmination of the Netherlandish school was in the music of the Italian composer Palestrina. At the end of the 16th century Italy again became a center of musical innovation, with the development of the polychoral style of the Venetian School, which spread northward into Germany around 1600.
147
+
148
+ The paintings of the Italian Renaissance differed from those of the Northern Renaissance. Italian Renaissance artists were among the first to paint secular scenes, breaking away from the purely religious art of medieval painters. Northern Renaissance artists initially remained focused on religious subjects, such as the contemporary religious upheaval portrayed by Albrecht Dürer. Later, the works of Pieter Bruegel influenced artists to paint scenes of daily life rather than religious or classical themes. It was also during the Northern Renaissance that Flemish brothers Hubert and Jan van Eyck perfected the oil painting technique, which enabled artists to produce strong colors on a hard surface that could survive for centuries.[114] A feature of the Northern Renaissance was its use of the vernacular in place of Latin or Greek, which allowed greater freedom of expression. This movement had started in Italy with the decisive influence of Dante Alighieri on the development of vernacular languages; in fact the focus on writing in Italian has neglected a major source of Florentine ideas expressed in Latin.[115] The spread of the printing press technology boosted the Renaissance in Northern Europe as elsewhere, with Venice becoming a world center of printing.
149
+
150
+ An early Italian humanist who came to Poland in the mid-15th century was Filippo Buonaccorsi. Many Italian artists came to Poland with Bona Sforza of Milan, when she married King Sigismund I the Old in 1518.[116] This was supported by temporarily strengthened monarchies in both areas, as well as by newly established universities.[117] The Polish Renaissance lasted from the late 15th to the late 16th century and was the Golden Age of Polish culture. Ruled by the Jagiellon dynasty, the Kingdom of Poland (from 1569 known as the Polish–Lithuanian Commonwealth) actively participated in the broad European Renaissance. The multi-national Polish state experienced a substantial period of cultural growth thanks in part to a century without major wars – aside from conflicts in the sparsely populated eastern and southern borderlands. The Reformation spread peacefully throughout the country (giving rise to the Polish Brethren), while living conditions improved, cities grew, and exports of agricultural products enriched the population, especially the nobility (szlachta) who gained dominance in the new political system of Golden Liberty. Polish Renaissance architecture has three periods of development.
151
+
152
+ The greatest monument of this style in the territory of the former Duchy of Pomerania is the Ducal Castle in Szczecin.
153
+
154
+ Although the Italian Renaissance had a modest impact on Portuguese art, Portugal was influential in broadening the European worldview,[118] stimulating humanist inquiry. The Renaissance arrived through the influence of wealthy Italian and Flemish merchants who invested in the profitable commerce overseas. As the pioneer headquarters of European exploration, Lisbon flourished in the late 15th century, attracting experts who made several breakthroughs in mathematics, astronomy and naval technology, including Pedro Nunes, João de Castro, Abraham Zacuto and Martin Behaim. Cartographers Pedro Reinel, Lopo Homem, Estêvão Gomes and Diogo Ribeiro made crucial advances in mapping the world. Apothecary Tomé Pires and physicians Garcia de Orta and Cristóvão da Costa collected and published works on plants and medicines, soon translated by Flemish pioneer botanist Carolus Clusius.
155
+
156
+ In architecture, the huge profits of the spice trade financed a sumptuous composite style in the first decades of the 16th century, the Manueline, incorporating maritime elements.[119] The primary painters were Nuno Gonçalves, Gregório Lopes and Vasco Fernandes. In music, Pedro de Escobar and Duarte Lobo produced four songbooks, including the Cancioneiro de Elvas. In literature, Sá de Miranda introduced Italian forms of verse. Bernardim Ribeiro developed pastoral romance, plays by Gil Vicente fused it with popular culture, reporting the changing times, and Luís de Camões inscribed the Portuguese feats overseas in the epic poem Os Lusíadas. Travel literature especially flourished: João de Barros, Castanheda, António Galvão, Gaspar Correia, Duarte Barbosa, and Fernão Mendes Pinto, among others, described new lands and were translated and spread with the new printing press.[118] After joining the Portuguese exploration of Brazil in 1500, Amerigo Vespucci coined the term New World,[120] in his letters to Lorenzo di Pierfrancesco de' Medici.
157
+
158
+ The intense international exchange produced several cosmopolitan humanist scholars, including Francisco de Holanda, André de Resende and Damião de Góis, a friend of Erasmus who wrote with rare independence on the reign of King Manuel I. Diogo and André de Gouveia made relevant teaching reforms via France. Foreign news and products in the Portuguese factory in Antwerp attracted the interest of Thomas More[121] and Albrecht Dürer to the wider world.[122] There, profits and know-how helped nurture the Dutch Renaissance and Golden Age, especially after the arrival of the wealthy cultured Jewish community expelled from Portugal.
159
+
160
+ Renaissance trends from Italy and Central Europe influenced Russia in many ways. Their influence was rather limited, however, due to the large distances between Russia and the main European cultural centers and the strong adherence of Russians to their Orthodox traditions and Byzantine legacy.
161
+
162
+ Prince Ivan III introduced Renaissance architecture to Russia by inviting a number of architects from Italy, who brought new construction techniques and some Renaissance style elements with them, while in general following the traditional designs of Russian architecture. In 1475 the Bolognese architect Aristotele Fioravanti came to rebuild the Cathedral of the Dormition in the Moscow Kremlin, which had been damaged in an earthquake. Fioravanti was given the 12th-century Vladimir Cathedral as a model, and he produced a design combining traditional Russian style with a Renaissance sense of spaciousness, proportion and symmetry.
163
+
164
+ In 1485 Ivan III commissioned the building of the royal residence, Terem Palace, within the Kremlin, with Aloisio da Milano as the architect of the first three floors. He and other Italian architects also contributed to the construction of the Kremlin walls and towers. The small banquet hall of the Russian Tsars, called the Palace of Facets because of its faceted upper story, is the work of two Italians, Marco Ruffo and Pietro Solario, and shows a more Italian style. In 1505, an Italian known in Russia as Aleviz Novyi or Aleviz Fryazin arrived in Moscow. He may have been the Venetian sculptor Alevisio Lamberti da Montagne. He built twelve churches for Ivan III, including the Cathedral of the Archangel, a building remarkable for the successful blending of Russian tradition, Orthodox requirements and Renaissance style. It is believed that the Cathedral of the Metropolitan Peter in Vysokopetrovsky Monastery, another work of Aleviz Novyi, later served as an inspiration for the so-called octagon-on-tetragon architectural form in the Moscow Baroque of the late 17th century.
165
+
166
+ Between the early 16th and the late 17th centuries, an original tradition of stone tented roof architecture developed in Russia. It was unique and quite different from contemporary Renaissance architecture elsewhere in Europe, though some research terms the style 'Russian Gothic' and compares it with the European Gothic architecture of the earlier period. The Italians, with their advanced technology, may have influenced the invention of the stone tented roof (wooden tented roofs had been known in Russia and Europe long before). According to one hypothesis, an Italian architect called Petrok Maly may have been an author of the Ascension Church in Kolomenskoye, one of the earliest and most prominent tented roof churches.[123]
167
+
168
+ By the 17th century the influence of Renaissance painting resulted in Russian icons becoming slightly more realistic, while still following most of the old icon painting canons, as seen in the works of Bogdan Saltanov, Simon Ushakov, Gury Nikitin, Karp Zolotaryov and other Russian artists of the era. Gradually a new type of secular portrait painting appeared, called parsúna (from "persona" – person), a transitional style between abstract iconography and realistic painting.
169
+
170
+ In the mid-16th century, Russians adopted printing from Central Europe, with Ivan Fyodorov being the first known Russian printer. In the 17th century printing became widespread, and woodcuts became especially popular. That led to the development of a special form of folk art known as lubok printing, which persisted in Russia well into the 19th century.
171
+
172
+ A number of technologies from the European Renaissance period were adopted by Russia rather early and subsequently perfected to become a part of a strong domestic tradition. Mostly these were military technologies, such as cannon casting adopted by at least the 15th century. The Tsar Cannon, which is the world's largest bombard by caliber, is a masterpiece of Russian cannon making. It was cast in 1586 by Andrey Chokhov and is notable for its rich, decorative relief. Another technology, which according to one hypothesis was originally brought from Europe by the Italians, resulted in the development of vodka, the national beverage of Russia. As early as 1386 Genoese ambassadors brought the first aqua vitae ("water of life") to Moscow and presented it to Grand Duke Dmitry Donskoy. The Genoese likely developed this beverage with the help of the alchemists of Provence, who used an Arab-invented distillation apparatus to convert grape must into alcohol. A Muscovite monk called Isidore used this technology to produce the first original Russian vodka c. 1430.[124]
173
+
174
+ The Renaissance arrived in the Iberian peninsula through the Mediterranean possessions of the Aragonese Crown and the city of Valencia. Many early Spanish Renaissance writers came from the Kingdom of Aragon, including Ausiàs March and Joanot Martorell. In the Kingdom of Castile, the early Renaissance was heavily influenced by Italian humanism, starting with writers and poets such as the Marquis of Santillana, who introduced the new Italian poetry to Spain in the early 15th century. Other writers, such as Jorge Manrique, Fernando de Rojas, Juan del Encina, Juan Boscán Almogáver and Garcilaso de la Vega, kept a close resemblance to the Italian canon. Miguel de Cervantes's masterpiece Don Quixote is credited as the first Western novel. Renaissance humanism flourished in the early 16th century, with influential writers such as philosopher Juan Luis Vives, grammarian Antonio de Nebrija and natural historian Pedro de Mexía.
175
+
176
+ The later Spanish Renaissance tended towards religious themes and mysticism, with poets such as fray Luis de León, Teresa of Ávila and John of the Cross, and treated issues related to the exploration of the New World, with chroniclers and writers such as Inca Garcilaso de la Vega and Bartolomé de las Casas, giving rise to a body of work now known as Spanish Renaissance literature. The late Renaissance in Spain produced artists such as El Greco and composers such as Tomás Luis de Victoria and Antonio de Cabezón.
177
+
178
+ The Italian artist and critic Giorgio Vasari (1511–1574) first used the term rinascita in his book The Lives of the Artists (published 1550). In the book Vasari attempted to define what he described as a break with the barbarities of Gothic art: the arts (he held) had fallen into decay with the collapse of the Roman Empire and only the Tuscan artists, beginning with Cimabue (1240–1301) and Giotto (1267–1337) began to reverse this decline in the arts. Vasari saw ancient art as central to the rebirth of Italian art.[125]
179
+
180
+ However, only in the 19th century did the French word renaissance achieve popularity in describing the self-conscious cultural movement based on revival of Roman models that began in the late 13th century. French historian Jules Michelet (1798–1874) defined "The Renaissance" in his 1855 work Histoire de France as an entire historical period, whereas previously it had been used in a more limited sense.[20] For Michelet, the Renaissance was more a development in science than in art and culture. He asserted that it spanned the period from Columbus to Copernicus to Galileo; that is, from the end of the 15th century to the middle of the 17th century.[83] Moreover, Michelet distinguished between what he called, "the bizarre and monstrous" quality of the Middle Ages and the democratic values that he, as a vocal Republican, chose to see in its character.[14] A French nationalist, Michelet also sought to claim the Renaissance as a French movement.[14]
181
+
182
+ The Swiss historian Jacob Burckhardt (1818–1897) in his The Civilization of the Renaissance in Italy (1860), by contrast, defined the Renaissance as the period between Giotto and Michelangelo in Italy, that is, the 14th to mid-16th centuries. He saw in the Renaissance the emergence of the modern spirit of individuality, which the Middle Ages had stifled.[126] His book was widely read and became influential in the development of the modern interpretation of the Italian Renaissance.[127] However, Burckhardt has been accused[by whom?] of setting forth a linear Whiggish view of history in seeing the Renaissance as the origin of the modern world.[17]
183
+
184
+ More recently, some historians have been much less keen to define the Renaissance as a historical age, or even as a coherent cultural movement. The historian Randolph Starn, of the University of California Berkeley, stated in 1998:
185
+
186
+ Rather than a period with definitive beginnings and endings and consistent content in between, the Renaissance can be (and occasionally has been) seen as a movement of practices and ideas to which specific groups and identifiable persons variously responded in different times and places. It would be in this sense a network of diverse, sometimes converging, sometimes conflicting cultures, not a single, time-bound culture.[17]
187
+
188
+ There is debate about the extent to which the Renaissance improved on the culture of the Middle Ages. Both Michelet and Burckhardt were keen to describe the progress made in the Renaissance towards the modern age. Burckhardt likened the change to a veil being removed from man's eyes, allowing him to see clearly.[48]
189
+
190
+ In the Middle Ages both sides of human consciousness – that which was turned within as that which was turned without – lay dreaming or half awake beneath a common veil. The veil was woven of faith, illusion, and childish prepossession, through which the world and history were seen clad in strange hues.[128]
191
+
192
+ On the other hand, many historians now point out that most of the negative social factors popularly associated with the medieval period—poverty, warfare, religious and political persecution, for example—seem to have worsened in this era, which saw the rise of Machiavellian politics, the Wars of Religion, the corrupt Borgia Popes, and the intensified witch hunts of the 16th century. Many people who lived during the Renaissance did not view it as the "golden age" imagined by certain 19th-century authors, but were concerned by these social maladies.[129] Significantly, though, the artists, writers, and patrons involved in the cultural movements in question believed they were living in a new era that was a clean break from the Middle Ages.[80] Some Marxist historians prefer to describe the Renaissance in material terms, holding the view that the changes in art, literature, and philosophy were part of a general economic trend from feudalism towards capitalism, resulting in a bourgeois class with leisure time to devote to the arts.[130]
193
+
194
+ Johan Huizinga (1872–1945) acknowledged the existence of the Renaissance but questioned whether it was a positive change. In his book The Autumn of the Middle Ages, he argued that the Renaissance was a period of decline from the High Middle Ages, destroying much that was important.[16] The Latin language, for instance, had evolved greatly from the classical period and was still a living language used in the church and elsewhere. The Renaissance obsession with classical purity halted its further evolution and saw Latin revert to its classical form. Robert S. Lopez has contended that it was a period of deep economic recession.[131] Meanwhile, George Sarton and Lynn Thorndike have both argued that scientific progress was perhaps less original than has traditionally been supposed.[132] Finally, Joan Kelly argued that the Renaissance led to greater gender dichotomy, lessening the agency women had had during the Middle Ages.[133]
195
+
196
+ Some historians have begun to consider the word Renaissance to be unnecessarily loaded, implying an unambiguously positive rebirth from the supposedly more primitive "Dark Ages", the Middle Ages. Most historians now prefer to use the term "early modern" for this period, a more neutral designation that highlights the period as a transitional one between the Middle Ages and the modern era.[134] Others such as Roger Osborne have come to consider the Italian Renaissance as a repository of the myths and ideals of western history in general, and instead of rebirth of ancient ideas as a period of great innovation.[135]
197
+
198
+ The term Renaissance has also been used to define periods outside of the 15th and 16th centuries. Charles H. Haskins (1870–1937), for example, made a case for a Renaissance of the 12th century.[136] Other historians have argued for a Carolingian Renaissance in the 8th and 9th centuries, an Ottonian Renaissance in the 10th century, and for the Timurid Renaissance of the 14th century. The Islamic Golden Age has also sometimes been termed the Islamic Renaissance.[137]
199
+
200
+ Other periods of cultural rebirth have also been termed "renaissances", such as the Bengal Renaissance, Tamil Renaissance, Nepal Bhasa renaissance, al-Nahda or the Harlem Renaissance. The term can also be used in cinema. In animation, the Disney Renaissance is a period spanning the years 1989 to 1999, which saw the studio return to a level of quality not witnessed since its Golden Age of Animation. The San Francisco Renaissance was a vibrant period of exploratory poetry and fiction writing in that city in the mid-20th century.
201
+
202
+ Notes
203
+
204
+ Rapid accumulation of knowledge, which has characterized the development of science since the 17th century, had never occurred before that time. The new kind of scientific activity emerged only in a few countries of Western Europe, and it was restricted to that small area for about two hundred years. (Since the 19th century, scientific knowledge has been assimilated by the rest of the world).
205
+
206
+ Citations
207
+
208
209
+
210
211
+
en/5587.html.txt ADDED
@@ -0,0 +1,182 @@
1
+
2
+
3
+
4
+
5
+ The periodic table, also known as the periodic table of elements, is a tabular display of the chemical elements, which are arranged by atomic number, electron configuration, and recurring chemical properties. The structure of the table shows periodic trends. The seven rows of the table, called periods, generally have metals on the left and nonmetals on the right. The columns, called groups, contain elements with similar chemical behaviours. Six groups have accepted names as well as assigned numbers: for example, group 17 elements are the halogens; and group 18 are the noble gases. Also displayed are four simple rectangular areas or blocks associated with the filling of different atomic orbitals.
6
+
7
+ The elements from atomic numbers 1 (hydrogen) through 118 (oganesson) have all been discovered or synthesized, completing seven full rows of the periodic table.[1][2] The first 94 elements, hydrogen through plutonium, all occur naturally, though some are found only in trace amounts and a few were discovered in nature only after having first been synthesized.[n 1] Elements 95 to 118 have only been synthesized in laboratories, nuclear reactors, or nuclear explosions.[3] The synthesis of elements having higher atomic numbers is currently being pursued: these elements would begin an eighth row, and theoretical work has been done to suggest possible candidates for this extension. Numerous synthetic radioisotopes of naturally occurring elements have also been produced in laboratories.
8
+
9
+ The organization of the periodic table can be used to derive relationships between the various element properties, and also to predict chemical properties and behaviours of undiscovered or newly synthesized elements. Russian chemist Dmitri Mendeleev published the first recognizable periodic table in 1869, developed mainly to illustrate periodic trends of the then-known elements. He also predicted some properties of unidentified elements that were expected to fill gaps within the table. Most of his forecasts proved to be correct. Mendeleev's idea has been slowly expanded and refined with the discovery or synthesis of further new elements and the development of new theoretical models to explain chemical behaviour. The modern periodic table now provides a useful framework for analyzing chemical reactions, and continues to be widely used in chemistry, nuclear physics and other sciences. Some discussion remains ongoing regarding the placement and categorisation of specific elements, the future extension and limits of the table, and whether there is an optimal form of the table.
10
+
11
+
12
+
13
+ Color of the atomic number shows the state of matter at 0 °C and 1 atm: black = solid (e.g. 3, lithium), green = liquid (e.g. 80, mercury), red = gas (e.g. 1, hydrogen), gray = unknown (e.g. 109, meitnerium).
14
+
15
+ Background color shows the subcategory in the metal–metalloid–nonmetal trend.
16
+
17
+ Each chemical element has a unique atomic number (Z) representing the number of protons in its nucleus.[n 2] Most elements have differing numbers of neutrons among different atoms, with these variants being referred to as isotopes. For example, carbon has three naturally occurring isotopes: all of its atoms have six protons and most have six neutrons as well, but about one per cent have seven neutrons, and a very small fraction have eight neutrons. Isotopes are never separated in the periodic table; they are always grouped together under a single element. For elements with no stable isotopes, the mass of the most stable isotope is listed in parentheses, where atomic masses are shown.[7]
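As an illustrative aside (not part of the source article), the standard atomic weight quoted for an element is essentially an abundance-weighted average over its naturally occurring isotopes. A minimal Python sketch for carbon, using approximate textbook masses and abundances for its two stable isotopes (carbon-14 occurs only in traces), so the numbers are assumptions for illustration:

isotope_data = [
    # (isotope, mass in u, natural abundance) -- approximate values, assumed for illustration
    ("C-12", 12.000000, 0.9893),
    ("C-13", 13.003355, 0.0107),
]

def average_atomic_mass(isotopes):
    """Abundance-weighted mean of the isotopic masses."""
    return sum(mass * abundance for _, mass, abundance in isotopes)

print(f"carbon ~ {average_atomic_mass(isotope_data):.3f} u")  # ~ 12.011 u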
18
+
19
+ In the standard periodic table, the elements are listed in order of increasing atomic number Z. A new row (period) is started when a new electron shell has its first electron. Columns (groups) are determined by the electron configuration of the atom; elements with the same number of electrons in a particular subshell fall into the same columns (e.g. oxygen and selenium are in the same column because they both have four electrons in the outermost p-subshell). Elements with similar chemical properties generally fall into the same group in the periodic table, although in the f-block, and to some extent in the d-block, the elements in the same period tend to have similar properties as well. Thus, it is relatively easy to predict the chemical properties of an element if one knows the properties of the elements around it.[8]
20
+
21
+ Since 2016, the periodic table has 118 confirmed elements, from element 1 (hydrogen) to 118 (oganesson). Elements 113, 115, 117 and 118, the most recent discoveries, were officially confirmed by the International Union of Pure and Applied Chemistry (IUPAC) in December 2015. Their proposed names, nihonium (Nh), moscovium (Mc), tennessine (Ts) and oganesson (Og) respectively, were made official in November 2016 by IUPAC.[9][10][11][12]
22
+
23
+ The first 94 elements occur naturally; the remaining 24, americium to oganesson (95–118), occur only when synthesized in laboratories. Of the 94 naturally occurring elements, 83 are primordial and 11 occur only in decay chains of primordial elements.[3] No element heavier than einsteinium (element 99) has ever been observed in macroscopic quantities in its pure form, nor has astatine (element 85); francium (element 87) has only been photographed in the form of light emitted from microscopic quantities (300,000 atoms).[13]
24
+
25
+ A group or family is a vertical column in the periodic table. Groups usually have more significant periodic trends than periods and blocks, explained below. Modern quantum mechanical theories of atomic structure explain group trends by proposing that elements within the same group generally have the same electron configurations in their valence shell.[14] Consequently, elements in the same group tend to have a shared chemistry and exhibit a clear trend in properties with increasing atomic number.[15] In some parts of the periodic table, such as the d-block and the f-block, horizontal similarities can be as important as, or more pronounced than, vertical similarities.[16][17][18]
26
+
27
+ Under an international naming convention, the groups are numbered numerically from 1 to 18 from the leftmost column (the alkali metals) to the rightmost column (the noble gases).[19] Previously, they were known by roman numerals. In America, the roman numerals were followed by either an "A" if the group was in the s- or p-block, or a "B" if the group was in the d-block. The roman numerals used correspond to the last digit of today's naming convention (e.g. the group 4 elements were group IVB, and the group 14 elements were group IVA). In Europe, the lettering was similar, except that "A" was used if the group was before group 10, and "B" was used for groups including and after group 10. In addition, groups 8, 9 and 10 used to be treated as one triple-sized group, known collectively in both notations as group VIII. In 1988, the new IUPAC naming system was put into use, and the old group names were deprecated.[20]
28
+
29
+ Some of these groups have been given trivial (unsystematic) names, as seen in the table below, although some are rarely used. Groups 3–10 have no trivial names and are referred to simply by their group numbers or by the name of the first member of their group (such as "the scandium group" for group 3),[19] since they display fewer similarities and/or vertical trends.
30
+
31
+ Elements in the same group tend to show patterns in atomic radius, ionization energy, and electronegativity. From top to bottom in a group, the atomic radii of the elements increase. Since there are more filled energy levels, valence electrons are found farther from the nucleus. From the top, each successive element has a lower ionization energy because it is easier to remove an electron since the atoms are less tightly bound. Similarly, a group has a top-to-bottom decrease in electronegativity due to an increasing distance between valence electrons and the nucleus.[21] There are exceptions to these trends: for example, in group 11, electronegativity increases farther down the group.[22]
32
+
33
+ A period is a horizontal row in the periodic table. Although groups generally have more significant periodic trends, there are regions where horizontal trends are more significant than vertical group trends, such as the f-block, where the lanthanides and actinides form two substantial horizontal series of elements.[24]
34
+
35
+ Elements in the same period show trends in atomic radius, ionization energy, electron affinity, and electronegativity. Moving left to right across a period, atomic radius usually decreases. This occurs because each successive element has an added proton and electron, which causes the electron to be drawn closer to the nucleus.[25] This decrease in atomic radius also causes the ionization energy to increase when moving from left to right across a period. The more tightly bound an element is, the more energy is required to remove an electron. Electronegativity increases in the same manner as ionization energy because of the pull exerted on the electrons by the nucleus.[21] Electron affinity also shows a slight trend across a period. Metals (left side of a period) generally have a lower electron affinity than nonmetals (right side of a period), with the exception of the noble gases.[26]
36
+
37
+ Specific regions of the periodic table can be referred to as blocks in recognition of the sequence in which the electron shells of the elements are filled. Elements are assigned to blocks by what orbitals their valence electrons or vacancies lie in.[27] The s-block comprises the first two groups (alkali metals and alkaline earth metals) as well as hydrogen and helium. The p-block comprises the last six groups, which are groups 13 to 18 in IUPAC group numbering (3A to 8A in American group numbering) and contains, among other elements, all of the metalloids. The d-block comprises groups 3 to 12 (or 3B to 2B in American group numbering) and contains all of the transition metals. The f-block, often offset below the rest of the periodic table, has no group numbers and comprises most of the lanthanides and actinides. A hypothetical g-block is expected to begin around element 121, a few elements away from what is currently known.[28]
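To make the block boundaries concrete, here is a minimal sketch (my own illustration, not from the article) that assigns a block from an IUPAC group number as described above; hydrogen and helium are treated with the s-block groups, and the footnoted lanthanides and actinides, which carry no group number, would fall in the f-block and are not covered here:

def block_of(group: int) -> str:
    """Return the block (s, d or p) for an IUPAC group number 1-18."""
    if group in (1, 2):
        return "s"        # alkali and alkaline earth metals (plus H and He)
    if 3 <= group <= 12:
        return "d"        # transition metals
    if 13 <= group <= 18:
        return "p"        # includes the metalloids, halogens and noble gases
    raise ValueError(f"no such IUPAC group: {group}")

assert block_of(17) == "p"   # halogens
assert block_of(4) == "d"    # titanium group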
38
+
39
+ According to their shared physical and chemical properties, the elements can be classified into the major categories of metals, metalloids and nonmetals. Metals are generally shiny, highly conducting solids that form alloys with one another and salt-like ionic compounds with nonmetals (other than noble gases). A majority of nonmetals are coloured or colourless insulating gases; nonmetals that form compounds with other nonmetals feature covalent bonding. In between metals and nonmetals are metalloids, which have intermediate or mixed properties.[29]
40
+
41
+ Metals and nonmetals can be further classified into subcategories that show a gradation from metallic to non-metallic properties when going left to right across the rows. The metals may be subdivided into the highly reactive alkali metals, through the less reactive alkaline earth metals, lanthanides and actinides, via the archetypal transition metals, and ending in the physically and chemically weak post-transition metals. Nonmetals may be simply subdivided into the polyatomic nonmetals, which, being nearest to the metalloids, show some incipient metallic character; the essentially nonmetallic diatomic nonmetals; and the almost completely inert, monatomic noble gases. Specialized groupings such as the refractory metals and the noble metals are examples of subsets of transition metals that are also known[30] and occasionally denoted.[31]
42
+
43
+ Placing elements into categories and subcategories based just on shared properties is imperfect. There is a large disparity of properties within each category with notable overlaps at the boundaries, as is the case with most classification schemes.[32] Beryllium, for example, is classified as an alkaline earth metal although its amphoteric chemistry and tendency to mostly form covalent compounds are both attributes of a chemically weak or post-transition metal. Radon is classified as a nonmetallic noble gas yet has some cationic chemistry that is characteristic of metals. Other classification schemes are possible such as the division of the elements into mineralogical occurrence categories, or crystalline structures. Categorizing the elements in this fashion dates back to at least 1869 when Hinrichs[33] wrote that simple boundary lines could be placed on the periodic table to show elements having shared properties, such as metals, nonmetals, or gaseous elements.
44
+
45
+ The electron configuration or organisation of electrons orbiting neutral atoms shows a recurring pattern or periodicity. The electrons occupy a series of electron shells (numbered 1, 2, and so on). Each shell consists of one or more subshells (named s, p, d, f and g). As atomic number increases, electrons progressively fill these shells and subshells more or less according to the Madelung rule or energy ordering rule, as shown in the diagram. The electron configuration for neon, for example, is 1s² 2s² 2p⁶. With an atomic number of ten, neon has two electrons in the first shell, and eight electrons in the second shell; there are two electrons in the s subshell and six in the p subshell. In periodic table terms, the first time an electron occupies a new shell corresponds to the start of each new period, these positions being occupied by hydrogen and the alkali metals.[34][35]
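A minimal sketch of the idealized Madelung ordering described above (my own illustration; real atoms such as chromium and copper deviate from this rule). Subshells are filled in order of n + ℓ, with ties broken by lower n, and each subshell holds 2(2ℓ + 1) electrons, which is also where the cycle lengths of 2, 6, 10 and 14 mentioned below come from:

SUBSHELL_LETTERS = "spdfg"

def electron_configuration(z: int) -> str:
    """Idealized ground-state configuration for atomic number z (Madelung order)."""
    subshells = [(n, l) for n in range(1, 9) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))   # n + l rule, ties broken by n
    parts, remaining = [], z
    for n, l in subshells:
        if remaining == 0:
            break
        capacity = 2 * (2 * l + 1)        # s holds 2, p 6, d 10, f 14
        filled = min(capacity, remaining)
        parts.append(f"{n}{SUBSHELL_LETTERS[l]}{filled}")
        remaining -= filled
    return " ".join(parts)

print(electron_configuration(10))   # neon: 1s2 2s2 2p6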
46
+
47
+ Since the properties of an element are mostly determined by its electron configuration, the properties of the elements likewise show recurring patterns or periodic behaviour, some examples of which are shown in the diagrams below for atomic radii, ionization energy and electron affinity. It is this periodicity of properties, manifestations of which were noticed well before the underlying theory was developed, that led to the establishment of the periodic law (the properties of the elements recur at varying intervals) and the formulation of the first periodic tables.[34][35] The periodic law may then be successively clarified as: depending on atomic weight; depending on atomic number; and depending on the total number of s, p, d, and f electrons in each atom. The cycles last 2, 6, 10, and 14 elements respectively.[36]
48
+
49
+ There is additionally an internal "double periodicity" that splits the shells in half; this arises because the first half of the electrons going into a particular type of subshell fill unoccupied orbitals, but the second half have to fill already occupied orbitals, following Hund's rule of maximum multiplicity. The second half thus suffer additional repulsion that causes the trend to split between first-half and second-half elements; this is for example evident when observing the ionisation energies of the 2p elements, in which the triads B-C-N and O-F-Ne show increases, but oxygen actually has a first ionisation energy slightly lower than that of nitrogen as it is easier to remove the extra, paired electron.[36]
50
+
51
+ Atomic radii vary in a predictable and explainable manner across the periodic table. For instance, the radii generally decrease along each period of the table, from the alkali metals to the noble gases; and increase down each group. The radius increases sharply between the noble gas at the end of each period and the alkali metal at the beginning of the next period. These trends of the atomic radii (and of various other chemical and physical properties of the elements) can be explained by the electron shell theory of the atom; they provided important evidence for the development and confirmation of quantum theory.[37]
52
+
53
+ The electrons in the 4f-subshell, which is progressively filled from lanthanum (element 57) to ytterbium (element 70),[n 4] are not particularly effective at shielding the increasing nuclear charge from the sub-shells further out. The elements immediately following the lanthanides have atomic radii that are smaller than would be expected and that are almost identical to the atomic radii of the elements immediately above them.[39] Hence lutetium has virtually the same atomic radius (and chemistry) as yttrium, hafnium has virtually the same atomic radius (and chemistry) as zirconium, and tantalum has an atomic radius similar to niobium, and so forth. This is an effect of the lanthanide contraction: a similar actinide contraction also exists. The effect of the lanthanide contraction is noticeable up to platinum (element 78), after which it is masked by a relativistic effect known as the inert pair effect.[40] The d-block contraction, which is a similar effect between the d-block and p-block, is less pronounced than the lanthanide contraction but arises from a similar cause.[39]
54
+
55
+ Such contractions exist throughout the table, but are chemically most relevant for the lanthanides with their almost constant +3 oxidation state.[41]
56
+
57
+ The first ionization energy is the energy it takes to remove one electron from an atom, the second ionization energy is the energy it takes to remove a second electron from the atom, and so on. For a given atom, successive ionization energies increase with the degree of ionization. For magnesium as an example, the first ionization energy is 738 kJ/mol and the second is 1450 kJ/mol. Electrons in the closer orbitals experience greater forces of electrostatic attraction; thus, their removal requires increasingly more energy. Ionization energy becomes greater up and to the right of the periodic table.[40]
58
+
59
+ Large jumps in the successive molar ionization energies occur when removing an electron from a noble gas (complete electron shell) configuration. For magnesium again, the first two molar ionization energies of magnesium given above correspond to removing the two 3s electrons, and the third ionization energy is a much larger 7730 kJ/mol, for the removal of a 2p electron from the very stable neon-like configuration of Mg2+. Similar jumps occur in the ionization energies of other third-row atoms.[40]
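The jump described above can be read straight off the numbers. A small sketch using the magnesium values quoted in the text (in kJ/mol); the factor-of-three threshold is an arbitrary choice of mine for illustration, not a physical constant:

mg_ionization_energies = [738, 1450, 7730]   # 1st, 2nd, 3rd ionization energies of Mg

def electrons_before_jump(energies, jump_factor=3.0):
    """Count electrons removed before the first large jump in ionization energy."""
    for i in range(1, len(energies)):
        if energies[i] > jump_factor * energies[i - 1]:
            return i
    return len(energies)   # no clear jump in the data supplied

print(electrons_before_jump(mg_ionization_energies))   # 2 -> the two 3s valence electrons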
60
+
61
+ Electronegativity is the tendency of an atom to attract a shared pair of electrons.[42] An atom's electronegativity is affected by both its atomic number and the distance between the valence electrons and the nucleus. The higher its electronegativity, the more an element attracts electrons. It was first proposed by Linus Pauling in 1932.[43] In general, electronegativity increases on passing from left to right along a period, and decreases on descending a group. Hence, fluorine is the most electronegative of the elements,[n 5] while caesium is the least, at least of those elements for which substantial data is available.[22]
62
+
63
+ There are some exceptions to this general rule. Gallium and germanium have higher electronegativities than aluminium and silicon respectively because of the d-block contraction. Elements of the fourth period immediately after the first row of the transition metals have unusually small atomic radii because the 3d-electrons are not effective at shielding the increased nuclear charge, and smaller atomic size correlates with higher electronegativity.[22] The anomalously high electronegativity of lead, particularly when compared to thallium and bismuth, is an artifact of electronegativity varying with oxidation state: its electronegativity conforms better to trends if it is quoted for the +2 state instead of the +4 state.[44]
64
+
65
+ The electron affinity of an atom is the amount of energy released when an electron is added to a neutral atom to form a negative ion. Although electron affinity varies greatly, some patterns emerge. Generally, nonmetals have more positive electron affinity values than metals. Chlorine most strongly attracts an extra electron. The electron affinities of the noble gases have not been measured conclusively, so they may or may not have slightly negative values.[47]
66
+
67
+ Electron affinity generally increases across a period. This is caused by the filling of the valence shell of the atom; a group 17 atom releases more energy than a group 1 atom on gaining an electron because it obtains a filled valence shell and is therefore more stable.[47]
68
+
69
+ A trend of decreasing electron affinity going down groups would be expected. The additional electron will be entering an orbital farther away from the nucleus. As such, this electron would be less attracted to the nucleus and would release less energy when added. In going down a group, around one-third of elements are anomalous, with heavier elements having higher electron affinities than their next lighter congeners. Largely, this is due to the poor shielding by d and f electrons. A uniform decrease in electron affinity only applies to group 1 atoms.[48]
70
+
71
+ The lower the values of ionization energy, electronegativity and electron affinity, the more metallic character the element has. Conversely, nonmetallic character increases with higher values of these properties.[49] Given the periodic trends of these three properties, metallic character tends to decrease going across a period (or row) and, with some irregularities (mostly) due to poor screening of the nucleus by d and f electrons, and relativistic effects,[50] tends to increase going down a group (or column or family). Thus, the most metallic elements (such as caesium) are found at the bottom left of traditional periodic tables and the most nonmetallic elements (such as neon) at the top right. The combination of horizontal and vertical trends in metallic character explains the stair-shaped dividing line between metals and nonmetals found on some periodic tables, and the practice of sometimes categorizing several elements adjacent to that line, or elements adjacent to those elements, as metalloids.[51][52]
72
+
73
+ With some minor exceptions, oxidation numbers among the elements show four main trends according to their periodic table geographic location: left; middle; right; and south. On the left (groups 1 to 4, not including the f-block elements, and also niobium, tantalum, and probably dubnium in group 5), the highest and most stable oxidation number is the group number, with lower oxidation states being less stable. In the middle (groups 3 to 11), higher oxidation states become more stable going down each group. Group 12 is an exception to this trend; its elements behave as if they were located on the left side of the table. On the right, higher oxidation states tend to become less stable going down a group.[53] The shift between these trends is continuous: for example, group 3 also has lower oxidation states most stable in its lightest member (scandium, with CsScCl3 for example known in the +2 state),[54] and group 12 is predicted to have copernicium more readily showing oxidation states above +2.[55]
74
+
75
+ The lanthanides positioned along the south of the table are distinguished by having the +3 oxidation state in common; this is their most stable state. The early actinides show a pattern of oxidation states somewhat similar to those of their period 6 and 7 transition metal congeners; the later actinides are more similar to the lanthanides, though the last ones (excluding lawrencium) have an increasingly important +2 oxidation state that becomes the most stable state for nobelium.[56]
76
+
77
+ From left to right across the four blocks of the long- or 32-column form of the periodic table are a series of linking or bridging groups of elements, located approximately between each block. In general, groups at the peripheries of blocks display similarities to the groups of the neighbouring blocks as well as to the other groups in their own blocks, as expected as most periodic trends are continuous.[57] These groups, like the metalloids, show properties in between, or that are a mixture of, groups to either side. Chemically, the group 3 elements, lanthanides, and heavy group 4 and 5 elements show some behaviour similar to the alkaline earth metals[58] or, more generally, s block metals[59][60][61] but have some of the physical properties of d block transition metals.[62] In fact, the metals all the way up to group 6 are united by being class-A cations ("hard" acids) that form more stable complexes with ligands whose donor atoms are the most electronegative nonmetals nitrogen, oxygen, and fluorine; metals later in the table form a transition to class-B cations ("soft" acids) that form more stable complexes with ligands whose donor atoms are the less electronegative heavier elements of groups 15 through 17.[63]
78
+
79
+ Meanwhile, lutetium behaves chemically as a lanthanide (with which it is often classified) but shows a mix of lanthanide and transition metal physical properties (as does yttrium).[64][65] Lawrencium, as an analogue of lutetium, would presumably display like characteristics.[n 6] The coinage metals in group 11 (copper, silver, and gold) are chemically capable of acting as either transition metals or main group metals.[68] The volatile group 12 metals, zinc, cadmium and mercury are sometimes regarded as linking the d block to the p block. Notionally they are d block elements but they have few transition metal properties and are more like their p block neighbors in group 13.[69][70] The relatively inert noble gases, in group 18, bridge the most reactive groups of elements in the periodic table—the halogens in group 17 and the alkali metals in group 1.[57]
80
+
81
+ The 1s, 2p, 3d, 4f, and 5g shells are each the first to have their value of ℓ, the azimuthal quantum number that determines a subshell's orbital angular momentum. This gives them some special properties,[71] which have been referred to as kainosymmetry (from Greek καινός "new").[36][72] Elements filling these orbitals are usually less metallic than their heavier homologues, prefer lower oxidation states, and have smaller atomic and ionic radii.[72]
82
+
83
+ The above contractions may also be considered to be a general incomplete shielding effect in terms of how they impact the properties of the succeeding elements. The 2p, 3d, or 4f shells have no radial nodes and are smaller than expected. They therefore screen the nuclear charge incompletely, and therefore the valence electrons that fill immediately after the completion of such a core subshell are more tightly bound by the nucleus than would be expected. 1s is an exception, providing nearly complete shielding. This is in particular the reason why sodium has a first ionisation energy of 495.8 kJ/mol that is only slightly smaller than that of lithium, 520.2 kJ/mol, and why lithium acts as less electronegative than sodium in simple σ-bonded alkali metal compounds; sodium suffers an incomplete shielding effect from the preceding 2p elements, but lithium essentially does not.[71]
84
+
85
+ Kainosymmetry also explains the specific properties of the 2p, 3d, and 4f elements. The 2p subshell is small and of a similar radial extent as the 2s subshell, which facilitates orbital hybridisation. This does not work as well for the heavier p elements: for example, silicon in silane (SiH4) shows approximate sp2 hybridisation, whereas carbon in methane (CH4) shows an almost ideal sp3 hybridisation. The bonding in these nonorthogonal heavy p element hydrides is weakened; this situation worsens with more electronegative substituents as they magnify the difference in energy between the s and p subshells. The heavier p elements are often more stable in their higher oxidation states in organometallic compounds than in compounds with electronegative ligands. This follows Bent's rule: s character is concentrated in the bonds to the more electropositive substituents, while p character is concentrated in the bonds to the more electronegative substituents. Furthermore, the 2p elements prefer to participate in multiple bonding (observed in O=O and N≡N) to eliminate Pauli repulsion from the otherwise close s and p lone pairs: their π bonds are stronger and their single bonds weaker. The small size of the 2p shell is also responsible for the extremely high electronegativities of the 2p elements.[71]
86
+
87
+ The 3d elements show the opposite effect; the 3d orbitals are smaller than would be expected, with a radial extent similar to the 3p core shell, which weakens bonding to ligands because they cannot overlap with the ligands' orbitals well enough. These bonds are therefore stretched and therefore weaker compared to the homologous ones of the 4d and 5d elements (the 5d elements show an additional d-expansion due to relativistic effects). This also leads to low-lying excited states, which is probably related to the well-known fact that 3d compounds are often coloured (the light absorbed is visible). This also explains why the 3d contraction has a stronger effect on the following elements than the 4d or 5d ones do. As for the 4f elements, the difficulty that 4f has in being used for chemistry is also related to this, as are the strong incomplete screening effects; the 5g elements may show a similar contraction, but it is likely that relativistic effects will partly counteract this, as they would tend to cause expansion of the 5g shell.[71]
88
+
89
+ Another consequence is the increased metallicity of the following elements in a block after the first kainosymmetric orbital, along with a preference for higher oxidation states. This is visible comparing H and He (1s) with Li and Be (2s); N–F (2p) with P–Cl (3p); Fe and Co (3d) with Ru and Rh (4d); and Nd–Dy (4f) with U–Cf (5f). As kainosymmetric orbitals appear in the even rows (except for 1s), this creates an even–odd difference between periods from period 2 onwards: elements in even periods are smaller and have more oxidising higher oxidation states (if they exist), whereas elements in odd periods differ in the opposite direction.[72]
90
+
91
+ In 1789, Antoine Lavoisier published a list of 33 chemical elements, grouping them into gases, metals, nonmetals, and earths.[73] Chemists spent the following century searching for a more precise classification scheme. In 1829, Johann Wolfgang Döbereiner observed that many of the elements could be grouped into triads based on their chemical properties. Lithium, sodium, and potassium, for example, were grouped together in a triad as soft, reactive metals. Döbereiner also observed that, when arranged by atomic weight, the second member of each triad was roughly the average of the first and the third.[74] This became known as the Law of Triads.[75] German chemist Leopold Gmelin worked with this system, and by 1843 he had identified ten triads, three groups of four, and one group of five. Jean-Baptiste Dumas published work in 1857 describing relationships between various groups of metals. Although various chemists were able to identify relationships between small groups of elements, they had yet to build one scheme that encompassed them all.[74] In 1857, German chemist August Kekulé observed that carbon often has four other atoms bonded to it. Methane, for example, has one carbon atom and four hydrogen atoms.[76] This concept eventually became known as valency, where different elements bond with different numbers of atoms.[77]
92
+
93
+ In 1862, the French geologist Alexandre-Émile Béguyer de Chancourtois published an early form of the periodic table, which he called the telluric helix or screw. He was the first person to notice the periodicity of the elements. With the elements arranged in a spiral on a cylinder by order of increasing atomic weight, de Chancourtois showed that elements with similar properties seemed to occur at regular intervals. His chart included some ions and compounds in addition to elements. His paper also used geological rather than chemical terms and did not include a diagram. As a result, it received little attention until the work of Dmitri Mendeleev.[78]
94
+
95
+ In 1864, Julius Lothar Meyer, a German chemist, published a table with 28 elements. Realizing that an arrangement according to atomic weight did not exactly fit the observed periodicity in chemical properties he gave valency priority over minor differences in atomic weight. A missing element between Si and Sn was predicted with atomic weight 73 and valency 4.[79] Concurrently, English chemist William Odling published an arrangement of 57 elements, ordered on the basis of their atomic weights. With some irregularities and gaps, he noticed what appeared to be a periodicity of atomic weights among the elements and that this accorded with "their usually received groupings".[80] Odling alluded to the idea of a periodic law but did not pursue it.[81] He subsequently proposed (in 1870) a valence-based classification of the elements.[82]
96
+
97
+ English chemist John Newlands produced a series of papers from 1863 to 1866 noting that when the elements were listed in order of increasing atomic weight, similar physical and chemical properties recurred at intervals of eight. He likened such periodicity to the octaves of music.[83][84] This so-termed Law of Octaves was ridiculed by Newlands' contemporaries, and the Chemical Society refused to publish his work.[85] Newlands was nonetheless able to draft a table of the elements and used it to predict the existence of missing elements, such as germanium.[86] The Chemical Society only acknowledged the significance of his discoveries five years after they credited Mendeleev.[87]
98
+
99
+ In 1867, Gustavus Hinrichs, a Danish-born academic chemist based in America, published a spiral periodic system based on atomic spectra and weights, and chemical similarities. His work was regarded as idiosyncratic, ostentatious and labyrinthine, and this may have militated against its recognition and acceptance.[88][89]
100
+
101
+ Russian chemistry professor Dmitri Mendeleev and German chemist Julius Lothar Meyer independently published their periodic tables in 1869 and 1870, respectively.[90] Mendeleev's table, dated March 1 [O.S. February 17] 1869,[91] was his first published version. That of Meyer was an expanded version of his (Meyer's) table of 1864.[92] They both constructed their tables by listing the elements in rows or columns in order of atomic weight and starting a new row or column when the characteristics of the elements began to repeat.[93]
102
+
103
+ The recognition and acceptance afforded to Mendeleev's table came from two decisions he made. The first was to leave gaps in the table when it seemed that the corresponding element had not yet been discovered.[94] Mendeleev was not the first chemist to do so, but he was the first to be recognized as using the trends in his periodic table to predict the properties of those missing elements, such as gallium and germanium.[95] The second decision was to occasionally ignore the order suggested by the atomic weights and switch adjacent elements, such as tellurium and iodine, to better classify them into chemical families.
104
+
105
+ Mendeleev published in 1869, using atomic weight to organize the elements, information determinable to fair precision in his time. Atomic weight worked well enough to allow Mendeleev to accurately predict the properties of missing elements.
106
+
107
+ Mendeleev took the unusual step of naming missing elements using the Sanskrit numerals eka (1), dvi (2), and tri (3) to indicate that the element in question was one, two, or three rows removed from a lighter congener. It has been suggested that Mendeleev, in doing so, was paying homage to ancient Sanskrit grammarians, in particular Pāṇini, who devised a periodic alphabet for the language.[96]
108
+
109
+ Following the discovery of the atomic nucleus by Ernest Rutherford in 1911, it was proposed that the integer count of the nuclear charge is identical to the sequential place of each element in the periodic table. In 1913, English physicist Henry Moseley using X-ray spectroscopy confirmed this proposal experimentally. Moseley determined the value of the nuclear charge of each element and showed that Mendeleev's ordering actually places the elements in sequential order by nuclear charge.[97] Nuclear charge is identical to proton count and determines the value of the atomic number (Z) of each element. Using atomic number gives a definitive, integer-based sequence for the elements. Moseley predicted, in 1913, that the only elements still missing between aluminium (Z = 13) and gold (Z = 79) were Z = 43, 61, 72, and 75, all of which were later discovered. The atomic number is the absolute definition of an element and gives a factual basis for the ordering of the periodic table.[98]
110
+
111
+ In 1871, Mendeleev published his periodic table in a new form, with groups of similar elements arranged in columns rather than in rows, and those columns numbered I to VIII corresponding with the element's oxidation state. He also gave detailed predictions for the properties of elements he had earlier noted were missing, but should exist.[99] These gaps were subsequently filled as chemists discovered additional naturally occurring elements.[100] It is often stated that the last naturally occurring element to be discovered was francium (referred to by Mendeleev as eka-caesium) in 1939, but it was technically only the last element to be discovered in nature as opposed to by synthesis.[101] Plutonium, produced synthetically in 1940, was identified in trace quantities as a naturally occurring element in 1971.[102]
112
+
113
+ The popular[103] periodic table layout, also known as the common or standard form (as shown at various other points in this article), is attributable to Horace Groves Deming. In 1923, Deming, an American chemist, published short (Mendeleev style) and medium (18-column) form periodic tables.[104][n 7] Merck and Company prepared a handout form of Deming's 18-column medium table, in 1928, which was widely circulated in American schools. By the 1930s Deming's table was appearing in handbooks and encyclopedias of chemistry. It was also distributed for many years by the Sargent-Welch Scientific Company.[105][106][107]
114
+
115
+ With the development of modern quantum mechanical theories of electron configurations within atoms, it became apparent that each period (row) in the table corresponded to the filling of a quantum shell of electrons. Larger atoms have more electron sub-shells, so later tables have required progressively longer periods.[108]
116
+
117
+ In 1945, Glenn Seaborg, an American scientist, made the suggestion that the actinide elements, like the lanthanides, were filling an f sub-level. Before this time the actinides were thought to be forming a fourth d-block row. Seaborg's colleagues advised him not to publish such a radical suggestion as it would most likely ruin his career. As Seaborg considered he did not then have a career to bring into disrepute, he published anyway. Seaborg's suggestion was found to be correct and he subsequently went on to win the 1951 Nobel Prize in chemistry for his work in synthesizing actinide elements.[109][110][n 8]
118
+
119
+ Although minute quantities of some transuranic elements occur naturally,[3] they were all first discovered in laboratories. Their production has expanded the periodic table significantly, the first of these being neptunium, synthesized in 1939.[111] Because many of the transuranic elements are highly unstable and decay quickly, they are challenging to detect and characterize when produced. There have been controversies concerning the acceptance of competing discovery claims for some elements, requiring independent review to determine which party has priority, and hence naming rights.[112] In 2010, a joint Russia–US collaboration at Dubna, Moscow Oblast, Russia, claimed to have synthesized six atoms of tennessine (element 117), making it the most recently claimed discovery. It, along with nihonium (element 113), moscovium (element 115), and oganesson (element 118), are the four most recently named elements, whose names all became official on 28 November 2016.[113]
120
+
121
+ The modern periodic table is sometimes expanded into its long or 32-column form by reinstating the footnoted f-block elements into their natural position between the s- and d-blocks, as proposed by Alfred Werner.[114] Unlike the 18-column form, this arrangement results in "no interruptions in the sequence of increasing atomic numbers".[115] The relationship of the f-block to the other blocks of the periodic table also becomes easier to see.[116] William B. Jensen advocates a form of table with 32 columns on the grounds that the lanthanides and actinides are otherwise relegated in the minds of students as dull, unimportant elements that can be quarantined and ignored.[117] Despite these advantages, the 32-column form is generally avoided by editors on account of its undue rectangular ratio compared to a book page ratio,[118] and the familiarity of chemists with the modern form, as introduced by Seaborg.[119]
122
+
123
+ Color of the atomic number shows the state of matter at 0 °C and 1 atm: black = solid (e.g. 3, lithium), green = liquid (e.g. 80, mercury), red = gas (e.g. 1, hydrogen), gray = unknown (e.g. 109, meitnerium).
124
+
125
+ Background color shows the subcategory in the metal–metalloid–nonmetal trend.
126
+
127
+ Within 100 years of the appearance of Mendeleev's table in 1869, Edward G. Mazurs had collected an estimated 700 different published versions of the periodic table.[117][122][123] As well as numerous rectangular variations, other periodic table formats have been shaped, for example,[n 9] like a circle, cube, cylinder, building, spiral, lemniscate,[124] octagonal prism, pyramid, sphere, or triangle. Such alternatives are often developed to highlight or emphasize chemical or physical properties of the elements that are not as apparent in traditional periodic tables.[123]
128
+
129
+ A popular[125] alternative structure is that of Otto Theodor Benfey (1960). The elements are arranged in a continuous spiral, with hydrogen at the centre and the transition metals, lanthanides, and actinides occupying peninsulas.[126]
130
+
131
+ Most periodic tables are two-dimensional;[3] three-dimensional tables are known from as far back as at least 1862 (pre-dating Mendeleev's two-dimensional table of 1869). More recent examples include Courtines' Periodic Classification (1925),[127] Wringley's Lamina System (1949),[128]
132
+ Giguère's Periodic helix (1965)[129] and Dufour's Periodic Tree (1996).[130] Going one further, Stowe's Physicist's Periodic Table (1989)[131] has been described as being four-dimensional (having three spatial dimensions and one colour dimension).[132]
133
+
134
+ The various forms of periodic tables can be thought of as lying on a chemistry–physics continuum.[133] Towards the chemistry end of the continuum can be found, as an example, Rayner-Canham's "unruly"[134] Inorganic Chemist's Periodic Table (2002),[135] which emphasizes trends and patterns, and unusual chemical relationships and properties. Near the physics end of the continuum is Janet's Left-Step Periodic Table (1928). This has a structure that shows a closer connection to the order of electron-shell filling and, by association, quantum mechanics.[136] A somewhat similar approach has been taken by Alper,[137] albeit criticized by Eric Scerri as disregarding the need to display chemical and physical periodicity.[138] Somewhere in the middle of the continuum is the ubiquitous common or standard form of periodic table. This is regarded as better expressing empirical trends in physical state, electrical and thermal conductivity, and oxidation numbers, and other properties easily inferred from traditional techniques of the chemical laboratory.[139] Its popularity is thought to be a result of this layout having a good balance of features in terms of ease of construction and size, and its depiction of atomic order and periodic trends.[81][140]
135
+
136
+ Simply following electron configurations, hydrogen (electronic configuration 1s¹) and helium (1s²) should be placed in groups 1 and 2, above lithium (1s²2s¹) and beryllium (1s²2s²).[141] While such a placement is common for hydrogen, it is rarely used for helium outside of the context of electron configurations: when the noble gases (then called "inert gases") were first discovered around 1900, they were known as "group 0", reflecting the fact that no chemical reactivity had been observed for these elements at that point, and helium was placed at the top of that group, as it shared the extreme chemical inertness seen throughout the group. As the group changed its formal number, many authors continued to assign helium directly above neon, in group 18; one example of such a placement is the current IUPAC table.[142]
137
+
138
+ The position of hydrogen in group 1 is reasonably well settled. Its usual oxidation state is +1 as is the case for its heavier alkali metal congeners. Like lithium, it has a significant covalent chemistry.[143][144]
139
+ It can stand in for alkali metals in typical alkali metal structures.[145] It is capable of forming alloy-like hydrides, featuring metallic bonding, with some transition metals.[146]
140
+
141
+ Nevertheless, it is sometimes placed elsewhere. A common alternative is at the top of group 17[138] given hydrogen's strictly univalent and largely non-metallic chemistry, and the strictly univalent and non-metallic chemistry of fluorine (the element otherwise at the top of group 17). Sometimes, to show hydrogen has properties corresponding to both those of the alkali metals and the halogens, it is shown at the top of the two columns simultaneously.[147] Another suggestion is above carbon in group 14: placed that way, it fits well into the trends of increasing ionization potential values and electron affinity values, and is not too far from the electronegativity trend, even though hydrogen cannot show the tetravalence characteristic of the heavier group 14 elements.[148] Finally, hydrogen is sometimes placed separately from any group; this is based on its general properties being regarded as sufficiently different from those of the elements in any other group.
142
+
143
+ The other period 1 element, helium, is most often placed in group 18 with the other noble gases, as its extraordinary inertness is extremely close to that of the other light noble gases neon and argon.[149] Nevertheless, it is occasionally placed separately from any group as well.[150] The property that distinguishes helium from the rest of the noble gases is that in its closed electron shell, helium has only two electrons in the outermost electron orbital, while the rest of the noble gases have eight. Some authors, such as Henry Bent (the eponym of Bent's rule), Wojciech Grochala, and Felice Grandinetti, have argued that helium would be correctly placed in group 2, over beryllium; Charles Janet's left-step table also contains this assignment. The normalized ionization potentials and electron affinities show better trends with helium in group 2 than in group 18; helium is expected to be slightly more reactive than neon (which breaks the general trend of reactivity in the noble gases, where the heavier ones are more reactive); predicted helium compounds often lack neon analogues even theoretically, but sometimes have beryllium analogues; and helium over beryllium better follows the trend of first-row anomalies in the table (s >> p > d > f).[151][152][153]
144
+
145
+ Although scandium and yttrium are always the first two elements in group 3, the identity of the next two elements is not completely settled. They are commonly lanthanum and actinium, and less often lutetium and lawrencium. The two variants originate from historical difficulties in placing the lanthanides in the periodic table, and arguments as to where the f block elements start and end.[154][n 10][n 11] It has been claimed that such arguments are proof that, "it is a mistake to break the [periodic] system into sharply delimited blocks".[156] A third variant shows the two positions below yttrium as being occupied by the lanthanides and the actinides. A fourth variant shows group 3 bifurcating after Sc-Y, into an La-Ac branch, and an Lu-Lr branch.[29]
146
+
147
+ Chemical and physical arguments have been made in support of lutetium and lawrencium[157][158] but the majority of authors seem unconvinced.[159] Most working chemists are not aware there is any controversy.[160] In December 2015 an IUPAC project was established to make a recommendation on the matter.[161]
148
+
149
+ Lanthanum and actinium are commonly depicted as the remaining group 3 members.[162][n 12] It has been suggested that this layout originated in the 1940s, with the appearance of periodic tables relying on the electron configurations of the elements and the notion of the differentiating electron. The configurations of caesium, barium and lanthanum are [Xe]6s¹, [Xe]6s² and [Xe]5d¹6s². Lanthanum thus has a 5d differentiating electron and this establishes it "in group 3 as the first member of the d-block for period 6".[163] A consistent set of electron configurations is then seen in group 3: scandium [Ar]3d¹4s², yttrium [Kr]4d¹5s² and lanthanum [Xe]5d¹6s². Still in period 6, ytterbium was assigned an electron configuration of [Xe]4f¹³5d¹6s² and lutetium [Xe]4f¹⁴5d¹6s², "resulting in a 4f differentiating electron for lutetium and firmly establishing it as the last member of the f-block for period 6".[163] Later spectroscopic work found that the electron configuration of ytterbium was in fact [Xe]4f¹⁴6s². This meant that ytterbium and lutetium—the latter with [Xe]4f¹⁴5d¹6s²—both had 14 f-electrons, "resulting in a d- rather than an f- differentiating electron" for lutetium and making it an "equally valid candidate" with [Xe]5d¹6s² lanthanum, for the group 3 periodic table position below yttrium.[163] Lanthanum has the advantage of incumbency since the 5d¹ electron appears for the first time in its structure whereas it appears for the third time in lutetium, having also made a brief second appearance in gadolinium.[164]
150
+
151
+ In terms of chemical behaviour,[165] and trends going down group 3 for properties such as melting point, electronegativity and ionic radius,[166][167] scandium, yttrium, lanthanum and actinium are similar to their group 1–2 counterparts. In this variant, the number of f electrons in the most common (trivalent) ions of the f-block elements consistently matches their position in the f-block.[168] For example, the f-electron counts for the trivalent ions of the first three f-block elements are Ce 1, Pr 2 and Nd 3.[169]
152
+
153
+ In other tables, lutetium and lawrencium are the remaining group 3 members.[n 13] Early techniques for chemically separating scandium, yttrium and lutetium relied on the fact that these elements occurred together in the so-called "yttrium group" whereas La and Ac occurred together in the "cerium group".[163] Accordingly, lutetium rather than lanthanum was assigned to group 3 by some chemists in the 1920s and 30s.[n 14] Several physicists in the 1950s and '60s favoured lutetium, in light of a comparison of several of its physical properties with those of lanthanum.[163] This arrangement, in which lanthanum is the first member of the f-block, is disputed by some authors since lanthanum lacks any f-electrons. It has been argued that this is not a valid concern given other periodic table anomalies—thorium, for example, has no f-electrons yet is part of the f-block.[170] As for lawrencium, its gas phase atomic electron configuration was confirmed in 2015 as [Rn]5f¹⁴7s²7p¹. Such a configuration represents another periodic table anomaly, regardless of whether lawrencium is located in the f-block or the d-block, as the only potentially applicable p-block position has been reserved for nihonium with its predicted configuration of [Rn]5f¹⁴6d¹⁰7s²7p¹.[27][n 15]
154
+
155
+ Chemically, scandium, yttrium and lutetium (and presumably lawrencium) behave like trivalent versions of the group 1–2 metals.[172] On the other hand, trends going down the group for properties such as melting point, electronegativity and ionic radius, are similar to those found among their group 4–8 counterparts.[163] In this variant, the number of f electrons in the gaseous forms of the f-block atoms usually matches their position in the f-block. For example, the f-electron counts for the first five f-block elements are La 0, Ce 1, Pr 3, Nd 4 and Pm 5.[163]
156
+
157
+ A few authors position all thirty lanthanides and actinides in the two positions below yttrium (usually via footnote markers).
158
+ This variant, which is stated in the 2005 Red Book to be the IUPAC-agreed version as of 2005 (a number of later versions exist, and the last update is from 1 December 2018),[173][n 16] emphasizes similarities in the chemistry of the 15 lanthanide elements (La–Lu), possibly at the expense of ambiguity as to which elements occupy the two group 3 positions below yttrium, and a 15-column wide f block (there can only be 14 elements in any row of the f block).[n 17] However, this similarity does not extend to the 15 actinide elements (Ac–Lr), which show a much wider variety in their chemistries.[175] This form moreover reduces the f-block to a degenerate branch of group 3 of the d-block; it dates back to the 1920s when the lanthanides were thought to have their f electrons as core electrons, which is now known to be false. It is also false for the actinides, many of which show stable oxidation states above +3.[176]
159
+
160
+ In this variant, group 3 bifurcates after Sc-Y into a La-Ac branch, and a Lu-Lr branch. This arrangement is consistent with the hypothesis that arguments in favour of either Sc-Y-La-Ac or Sc-Y-Lu-Lr based on chemical and physical data are inconclusive.[177] As noted, trends going down Sc-Y-La-Ac match trends in groups 1−2[178] whereas trends going down Sc-Y-Lu-Lr better match trends in groups 4−10.[163]
161
+
162
+ The bifurcation of group 3 is a throwback to Mendeleev's eight-column form, in which seven of the main groups each have two subgroups. Tables featuring a bifurcated group 3 have been periodically proposed since that time.[n 18]
163
+
164
+ The definition of a transition metal, as given by IUPAC in the Gold Book, is an element whose atom has an incomplete d sub-shell, or which can give rise to cations with an incomplete d sub-shell.[179] By this definition all of the elements in groups 3–11 are transition metals. The IUPAC definition therefore excludes group 12, comprising zinc, cadmium and mercury, from the transition metals category. However, the 2005 IUPAC nomenclature as codified in the Red Book gives both the group 3–11 and group 3–12 definitions of the transition metals as alternatives.
165
+
166
+ Some chemists treat the categories "d-block elements" and "transition metals" interchangeably, thereby including groups 3–12 among the transition metals. In this instance the group 12 elements are treated as a special case of transition metal in which the d electrons are not ordinarily given up for chemical bonding (they can sometimes contribute to the valence bonding orbitals even so, as in zinc fluoride).[180] The 2007 report of mercury(IV) fluoride (HgF4), a compound in which mercury would use its d electrons for bonding, has prompted some commentators to suggest that mercury can be regarded as a transition metal.[181] Other commentators, such as Jensen,[182] have argued that the formation of a compound like HgF4 can occur only under highly abnormal conditions; indeed, its existence is currently disputed. As such, mercury could not be regarded as a transition metal by any reasonable interpretation of the ordinary meaning of the term.[182]
167
+
168
+ Still other chemists further exclude the group 3 elements from the definition of a transition metal. They do so on the basis that the group 3 elements do not form any ions having a partially occupied d shell and do not therefore exhibit properties characteristic of transition metal chemistry.[183] In this case, only groups 4–11 are regarded as transition metals. This categorisation is however not one of the alternatives considered by IUPAC. Though the group 3 elements show few of the characteristic chemical properties of the transition metals, the same is true of the heavy members of groups 4 and 5, which also are mostly restricted to the group oxidation state in their chemistry. Moreover, the group 3 elements show characteristic physical properties of transition metals (on account of the presence in each atom of a single d electron).[62]
169
+
170
+ Although all elements up to oganesson have been discovered, of the elements above hassium (element 108), only copernicium (element 112), nihonium (element 113), and flerovium (element 114) have known chemical properties, and conclusive categorisation at present has not been reached.[55] Some of these may behave differently from what would be predicted by extrapolation, due to relativistic effects; for example, copernicium and flerovium have been predicted to possibly exhibit some noble-gas-like properties, even though neither is placed in group 18 with the other noble gases.[55][184] The current experimental evidence still leaves open the question of whether copernicium and flerovium behave more like metals or noble gases.[55][185] At the same time, oganesson (element 118) is expected to be a solid semiconductor at standard conditions, despite being in group 18.[186]
171
+
172
+ Currently, the periodic table has seven complete rows, with all spaces filled in with discovered elements. Future elements would have to begin an eighth row. Nevertheless, it is unclear whether new eighth-row elements will continue the pattern of the current periodic table, or require further adaptations or adjustments. Seaborg expected the eighth period to follow the previously established pattern exactly, so that it would include a two-element s-block for elements 119 and 120, a new g-block for the next 18 elements, and 30 additional elements continuing the current f-, d-, and p-blocks, culminating in element 168, the next noble gas.[188] More recently, physicists such as Pekka Pyykkö have theorized that these additional elements do not exactly follow the Madelung rule, which predicts how electron shells are filled and thus affects the appearance of the present periodic table. There are currently several competing theoretical models for the placement of the elements of atomic number less than or equal to 172. In all of these it is element 172, rather than element 168, that emerges as the next noble gas after oganesson, although these must be regarded as speculative as no complete calculations have been done beyond element 123.[189][190]
173
+
174
+ The number of possible elements is not known. A very early suggestion made by Elliot Adams in 1911, and based on the arrangement of elements in each horizontal periodic table row, was that elements of atomic weight greater than circa 256 (which would equate to between elements 99 and 100 in modern-day terms) did not exist.[191] A higher, more recent estimate is that the periodic table may end soon after the island of stability,[192] whose centre is predicted to lie between element 110 and element 126, as the extension of the periodic and nuclide tables is restricted by proton and neutron drip lines as well as decreasing stability towards spontaneous fission.[193][194] Other predictions of an end to the periodic table include at element 128 by John Emsley,[3] at element 137 by Richard Feynman,[195] at element 146 by Yogendra Gambhir,[196] and at element 155 by Albert Khazan.[3][n 19]
175
+
176
+ The Bohr model exhibits difficulty for atoms with atomic number greater than 137, as any element with an atomic number greater than 137 would require 1s electrons to be travelling faster than c, the speed of light.[197] Hence the non-relativistic Bohr model is inaccurate when applied to such an element.
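The figure of 137 comes from the fine-structure constant: in the non-relativistic Bohr model the 1s electron moves at v = Zαc, with α ≈ 1/137. A quick arithmetic sketch of that claim (my own illustration, not from the article):

FINE_STRUCTURE_CONSTANT = 1 / 137.036   # alpha

for z in (1, 92, 137, 138):
    # Bohr-model speed of the 1s electron as a fraction of the speed of light
    print(f"Z = {z:3d}: v/c = {z * FINE_STRUCTURE_CONSTANT:.3f}")
# Z = 138 gives v/c > 1, i.e. the non-relativistic model demands a superluminal
# electron, which is why it breaks down past element 137.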
177
+
178
+ The relativistic Dirac equation has problems for elements with more than 137 protons. For such elements, the wave function of the Dirac ground state is oscillatory rather than bound, and there is no gap between the positive and negative energy spectra, as in the Klein paradox.[198] More accurate calculations taking into account the effects of the finite size of the nucleus indicate that the binding energy first exceeds the limit for elements with more than 173 protons. For heavier elements, if the innermost orbital (1s) is not filled, the electric field of the nucleus will pull an electron out of the vacuum, resulting in the spontaneous emission of a positron.[199] This does not happen if the innermost orbital is filled, so that element 173 is not necessarily the end of the periodic table.[195]
179
+
180
+ The many different forms of periodic table have prompted the question of whether there is an optimal or definitive form of periodic table.[200] The answer to this question is thought to depend on whether the chemical periodicity seen to occur among the elements has an underlying truth, effectively hard-wired into the universe, or if any such periodicity is instead the product of subjective human interpretation, contingent upon the circumstances, beliefs and predilections of human observers. An objective basis for chemical periodicity would settle the questions about the location of hydrogen and helium, and the composition of group 3. Such an underlying truth, if it exists, is thought to have not yet been discovered. In its absence, the many different forms of periodic table can be regarded as variations on the theme of chemical periodicity, each of which explores and emphasizes different aspects, properties, perspectives and relationships of and among the elements.[n 20]
181
+
182
+ In celebration of the periodic table's 150th anniversary, the United Nations declared the year 2019 as the International Year of the Periodic Table, celebrating "one of the most significant achievements in science".[203]
en/5588.html.txt ADDED
@@ -0,0 +1 @@
1
+ Table may refer to:
en/5589.html.txt ADDED
@@ -0,0 +1 @@
1
+ Table may refer to: