id (stringlengths 5–6) | input (stringlengths 3–301) | output (list) | meta (null)
---|---|---|---|
26v2kc | What does it mean when one has a 'teleological view of history'? | [
{
"answer": "To build a bit on /u/Sherbert42's answer: teleology (in a historiographical sense) is a form of historical enquiry which attempts to construct a narrative view of history as a progressive march in one direction; towards an inevitable end point. \n\nTo give one particularly notable and illustrative example of teleological thinking: look at 'Whig history', a school of thought [described by Herbert Butterfield](_URL_0_) which argued that all history can be considered as an inexorable march towards enlightenment/liberalism.\n\nThe problem with the teleological approach is that it tends towards sophistry: to use the Whig history example again, the idea that British-style liberal enlightenment is the apex of human progress, and that the eventual convergence of all history on that point is an inevitability, is deeply problematic. \n\nThe idea that you can divine a perfect (or in any way satisfactory) linear narrative in history become ludicrous almost as soon as you start to interrogate it to any depth. The construction of these teleological narratives generally involves highly selective use of evidence, straw men and the complete dismissal of countervailing viewpoints or interpretations.\n\nWhat always surprises me is that this prism for understanding history hasn't entirely gone out of fashion. Butterfield wrote *The Whig Interpretation of History* in 1931, about historians mostly of the 19th century, but Francis Fukuyama's 'End of History' theory in the 1990s owes a lot to these ideas: the idea that the fall of the Soviet Union represents the ultimate triumph of liberal democracy as \"the final form of human government\".\n\nEdit: as someone else pointed out in the comments, I mangled my understanding (misread old notes from uni and clearly wasn't paying enough attention) of Butterfield's place in the Whig canon — as a critic and taxonomist, not a part of the canon. Duly corrected/now going to go hang my head in shame.",
"provenance": null
},
{
"answer": "OP here, Thank you all of you for such thorough and thought provoking replies and comments. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5777097",
"title": "Historism",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 379,
"text": "Historist historiography rejects historical teleology and bases its explanations of historical phenomena on sympathy and understanding (see Hermeneutics) for the events, acting persons, and historical periods. The historist approach takes to its extreme limits the common observation that human institutions (language, Art, religion, law, State) are subject to perpetual change.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24067936",
"title": "Borussian myth",
"section": "Section::::Teleological arguments.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 920,
"text": "A teleological argument holds all things to be designed for, or directed toward, a specific final result. That specific result gives events and actions, even retrospectively, an inherent purpose. When applied to the historical process, an historical teleological argument posits the result as the inevitable trajectory of a specific set of events. These events lead \"inevitably,\" as Karl Marx or Friedrich Engels proposed, to a specific set of conditions or situations; the resolution of those lead to another, and so on. This goal-oriented, 'teleological' notion of the historical process as a whole is present in a variety of arguments about the past: the \"inevitability,\" for example, of the revolution of the proletariat and the \"Whiggish\" narrative of past as an inevitable progression towards ever greater liberty and enlightenment that culminated in modern forms of liberal democracy and constitutional monarchy.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "80757",
"title": "Teleology",
"section": "Section::::Modern and postmodern philosophy.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 621,
"text": "Historically, teleology may be identified with the philosophical tradition of Aristotelianism. The rationale of teleology was explored by Immanuel Kant in his Critique of Judgement and, again, made central to speculative philosophy by Hegel and in the various neo-Hegelian schools proposing a history of our species some consider to be at variance with Darwin, as well as with the dialectical materialism of Karl Marx and Friedrich Engels, and with what is now called analytic philosophy the point of departure is not so much formal logic and scientific fact but 'identity'. (In Hegel's terminology: 'objective spirit'.)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1067753",
"title": "Telos",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 555,
"text": "A telos (from the Greek τέλος for \"end\", \"purpose\", or \"goal\") is an end or purpose, in a fairly constrained sense used by philosophers such as Aristotle. It is the root of the term \"teleology\", roughly the study of purposiveness, or the study of objects with a view to their aims, purposes, or intentions. Teleology figures centrally in Aristotle's biology and in his theory of causes. The notion that everything has a \"telos\" also gave rise to epistemology. It is also central to some philosophical theories of history, such as those of Hegel and Marx.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "80757",
"title": "Teleology",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 571,
"text": "Teleology or finality is a reason or explanation for something as a function of its end, purpose, or goal. It is derived from two Greek words: telos (end, goal, purpose) and logos (reason, explanation). A purpose that is imposed by a human use, such as that of a fork, is called \"extrinsic\". Natural teleology, common in classical philosophy but controversial today, contends that natural entities also have \"intrinsic\" purposes, irrespective of human use or opinion. For instance, Aristotle claimed that an acorn's intrinsic \"telos\" is to become a fully grown oak tree.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51188560",
"title": "Teleology in biology",
"section": "Section::::Context.:Teleology.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 367,
"text": "Teleology, from Greek τέλος, \"telos\" \"end, purpose\" and -λογία, \"logia\", \"a branch of learning\", was coined by the philosopher Christian von Wolff in 1728. The concept derives from the ancient Greek philosophy of Aristotle, where the final cause (the purpose) of a thing is its function. However, Aristotle's biology does not envisage evolution by natural selection.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42526076",
"title": "Self-persuasion",
"section": "Section::::Four Theories That Utilize Self-Persuasion.:The Narrative Paradigm.\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 517,
"text": "Walter Fisher’s Narrative Paradigm Theory posits that all meaningful communication is a form of storytelling or giving a report of events, and that human beings experience and comprehend life as a series of ongoing narratives, each with its own conflicts, characters, beginning, middle, and end. Fisher believes that all forms of communication that appeal to our reason are best viewed as stories shaped by history, culture, and character, and all forms of human communication are to be seen fundamentally as stories\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
30ovf4 | if my bathroom scale shows different numbers around the house, which number should i trust? | [
{
"answer": "Whichever is the correct one. There's no way of knowing which is the correct one, other than weighing yourself on another scale, perhaps at a doctor's office. If you think the differences are because of the crooked floor, try to find a place in your house where the floor isn't crooked. Maybe even take the scale someplace else and weigh yourself there.",
"provenance": null
},
{
"answer": "Find something in your house that you know weighs a certain amount for sure. E.g. a new bag of rice or potatoes or whatever. Weigh that in different areas of your house and see where you get the most accurate reading. Just use your scale in that spot.",
"provenance": null
},
{
"answer": "Trust the reading taken on a hard, level floor.",
"provenance": null
},
{
"answer": "This may be a dumb question, but just to make sure, are you re-weighing yourself immediately and getting different readings, or weighing yourself at different times throughout the day?",
"provenance": null
},
{
"answer": "Buy a leveler, for like a buck or two. The little tubes with liquid in them... They show you where the floor is level.",
"provenance": null
},
{
"answer": "Trust the one that makes you happier. Odds are the difference isn't great enough to matter.",
"provenance": null
},
{
"answer": "Just keep in the same spot all the time and use that measurement. Does it matter if it displays one or two kg more/less? If so, you should get a proper calibrated scale but they are not cheap. If a kilogram or two doesn't matter just keep it in the same spot because then you will at least be able to track weight changes.\n\nAt the gym I go to there is a scale called VB2-200 which is a calibrated scale. It costs around 5000SEK though (580USD) but it is damn accurate and we use to weigh plates as well.",
"provenance": null
},
{
"answer": "Weigh yourself somewhere else.\n\nWhere I live pharmacies have very accurate scales that they let you use.\n\nSimply use one like that and then go home without eating, or shitting, or taking of your clothes or wanting too long to find out in which place your scale is the most accurate.\n\nI assume you don't do anything like weightlifting or else you probably wouldn't have asked, but if you have a friend who owns weights you might simply borrow them and test things out with that.\n\nIf you don't have any weights you can borrow use something else with a known weight. Most groceries are sold with resonable accurate weights printed on them but usually they are sold in small portions. Buy something that is heavy and usefull in large portions.\n\nAs somebody has already suggested water is pretty good. Water has a density of 1 kg/l under normal conditions so you can buy a case of 12 1l bottles and take them with you on the scale. The bottles should be made out of plastic so that their weight is negligible. Just check in which location the difference of you with a know weight in water and without matches best to the difference the scale shows. (if you want to be real accurate you can buy more bottles and drink half of them) weighing yourself with a case full of empty bottles and a case full of full bottles in turn so the only real difference will be the weight of the water.\n\n",
"provenance": null
},
{
"answer": "Put a coin on the hard floors. If it rolls, it's slanted.",
"provenance": null
},
{
"answer": "Don't worry about it, its prob. not that correct anyway. \nJust always weight you self in the same spot that gives you repeated readings that are the same. \n\ni.e. test the scales in one spot 3 times, getting of and on completely each time. the spot that gives you the same results is the winner.\n\nIf you are trying to lose/gain weight, watch the change not the amount.",
"provenance": null
},
{
"answer": "Bathroom scales should not be used to find out how much you weigh, they should be used to monitor changes in your weight. Expensive ones may be calibrated before leaving the factory but then they are never again calibrated.\n\nThe best practice for them is to leave it in one location and try to weigh yourself at the same time of day each time.\n\nIf you want to find out your true weight I recommend finding out next time you are at the doctors office.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2176160",
"title": "Interval arithmetic",
"section": "Section::::Introduction.:Example.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 679,
"text": "Take as an example the calculation of body mass index (BMI). The BMI is the body weight in kilograms divided by the square of height in metres. A bathroom scale may have a resolution of one kilogram. We do not know intermediate values about 79.6 kg or 80.3 kg but information rounded to the nearest whole number. It is unlikely that when the scale reads 80 kg, someone really weighs exactly 80.0 kg. In normal rounding to the nearest value, the scales showing 80 kg indicates a weight between 79.5 kg and 80.5 kg. The relevant range is that of all real numbers that are greater than or equal to 79.5, while less than or equal to 80.5, or in other words the interval [79.5,80.5].\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8071866",
"title": "Campus of the Massachusetts Institute of Technology",
"section": "Section::::Campus organization.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 362,
"text": "To identify a particular room within a building, the room number is simply appended to the building number, using a \"-\" (e.g. Room 26–100, a large first-floor auditorium in Building 26). The floor number is indicated in the usual way, by the leading digit(s) of the room number, with a leading digit \"0\" indicating a basement location and \"00\" for sub-basement.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18732880",
"title": "Storey",
"section": "Section::::Room numbering.\n",
"start_paragraph_id": 66,
"start_character": 0,
"end_paragraph_id": 66,
"end_character": 848,
"text": "In modern buildings, especially large ones, room or apartment numbers are usually tied to the floor numbers, so that one can figure out the latter from the former. Typically one uses the floor number with one or two extra digits appended to identify the room within the floor. For example, room 215 could be the 15th room of floor 2 (or 5th room of floor 21), but to avoid this confusion one dot is sometimes used to separate the floor from the room (2.15 refers to 2nd floor, 15th room and 21.5 refers to 21st floor, 5th room) or a leading zero is placed before a single-digit room number (i.e. the 5th room of floor 21 would be 2105). Letters may be used, instead of digits, to identify the room within the floor—such as 21E instead of 215. Often odd numbers are used for rooms on one side of a hallway, even numbers for rooms on the other side.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8225223",
"title": "Room number",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 448,
"text": "Room numbers may consist of three digits, but can be any number of digits. The room number is generally assigned with the first digit indicating the floor on which the room is located. For example, room 412 would be on the fourth floor of the building; room 540 would be on the fifth floor. Buildings that have more than nine floors will have four digits assigned to rooms beyond the ninth floor. For example, room 1412 would be on the 14th floor.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8637",
"title": "Door",
"section": "Section::::Construction and components.:Dimensions.:United States.\n",
"start_paragraph_id": 147,
"start_character": 0,
"end_paragraph_id": 147,
"end_character": 304,
"text": "The standard door sizes in the US run along 2\" increments. Customary sizes have a height of 78\" (1981 mm) or 80\" (2032 mm) and a width of 18\" (472 mm), 24\" (610 mm), 26\" (660 mm), 28\" (711 mm), 30\" (762 mm) or 36\" (914 mm). Most residential passage (room to room) doors are 30\" x 80\" (762 mm x 2032 mm).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "429484",
"title": "Tatami",
"section": "Section::::Size.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 409,
"text": "In Japan, the size of a room is often measured by the number of , about 1.653 square meters (for a standard Nagoya size tatami). Alternatively, in terms of traditional Japanese area units, room area (and especially house floor area) is measured in terms of \"tsubo,\" where one \"tsubo\" is the area of two tatami mats (a square); formally 1 \"ken\" by 1 \"ken\" or a 1.81818 meter square, about 3.306 square meters.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40743605",
"title": "Range–frequency theory",
"section": "Section::::Calculating range values.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 707,
"text": "Consider a set of numbers ranging from 100 to 1,000, rated with six categories. Each category covers a subrange of 150 (1,000 – 100)/6. All numbers between 100 and 250 must be rated using the first category, 1 – \"very small\"; the second category, 2 – \"small\", must be assigned the numbers 250 through 400; and so on and so forth. With actual perceptual stimuli, however, the subranges cannot be assumed beforehand. For certain psychophysical dimensions, this scaling has often been assumed to be logarithmic. For other psychophysical dimensions, however, the scaling of subranges can be quasilogarithmic and for others, it is almost linear (e.g. for judgments of the sizes of squares, see Haubensak, 1982).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
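The interval-arithmetic passage in the record above (the BMI example from the "Interval arithmetic" article) can be illustrated with a short sketch. The `Interval` class below is a hypothetical minimal implementation written for this example, not a library API; it assumes all quantities are strictly positive, as weights and heights are.

```python
class Interval:
    """A closed interval [lo, hi] of positive reals (illustrative sketch)."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __truediv__(self, other):
        # For positive intervals, the smallest quotient pairs the smallest
        # numerator with the largest denominator, and vice versa.
        return Interval(self.lo / other.hi, self.hi / other.lo)

    def __repr__(self):
        return f"[{self.lo:.2f}, {self.hi:.2f}]"


# A scale reading "80 kg" at 1 kg resolution really means [79.5, 80.5] kg.
weight = Interval(79.5, 80.5)

# A height of 1.80 m measured to the nearest cm means [1.795, 1.805] m,
# so height squared spans the interval below.
height_sq = Interval(1.795 ** 2, 1.805 ** 2)

bmi = weight / height_sq   # BMI = weight / height^2
print(bmi)                 # roughly [24.40, 24.98]
```

The point of the quoted passage carries over directly: the scale's display hides an interval of possible true values, so small disagreements between readings in different spots may fall entirely within that interval.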
34hb4y | how family guy uses brand names so frequently, but no other cartoon can. | [
{
"answer": "Any TV show can mention brand names.\n\nThey usually don't, because:\n\n* it makes sit harder to place ads for that product and its competitors\n* it can date the show and make it less desirable in syndication\n* viewers often have strong brand affinities, and might not relate to characters who use brands they do not like\n\n*Family Guy* is basically making fun of this adversion.",
"provenance": null
},
{
"answer": "They can, they just choose not to. The reason isn't related to lawsuits, either -- there's just often a policy of not mentioning by name any companies or products who are not advertisers. If Homer Simpson is drinking Coke, and Pepsi advertises on Fox, they'll object (this actually did happen for The Simpsons when mentioning Hewlett Packard). And in the reverse, if they mock or insult a real product, the producer of that product will object when they want to advertise -- notoriously, a producer of gas ovens threatened to pull their ads from CBS if Rod Serling, creator of *The Twilight Zone*, did an episode about the Holocaust, because gas chambers depicted *gas* in a bad light.\n\nSo there's no reason why shows *can't* mention real-world products, it's just usually avoided by network executives. Family Guy is evidently an exception. Perhaps the Family Guy producers persuaded the network that their jokes wouldn't work with knockoff brands the way The Simpsons' jokes would, or perhaps the executives at the time Family Guy was launched were less strict about the policy.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "41912672",
"title": "Naming in the United States",
"section": "Section::::Names inspired by popular culture.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 203,
"text": "Without laws governing name usage, many American names pop up following the name's usage in movies, television, or in the media. Children may be named after their parents' favorite fictional characters.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "627304",
"title": "MythBusters",
"section": "Section::::Warnings and self-censorship.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 250,
"text": "Brand names and logos are regularly blurred or covered with tape or a \"MythBusters\" sticker. Brand names are shown when integral to a myth, such as in the Diet Coke and Mentos experiment or Pop Rocks in the very first pilot episode of \"MythBusters\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6461338",
"title": "The Uncle Al Show",
"section": "Section::::Uncle Al & the kids.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 491,
"text": "By the 1960s, kids who appeared on the show each were given a nametag sticker in the shape of a bow tie modeled after Uncle Al's sartorial trademark. While the kids were told the name tag was a ticket to get in and a souvenir to take home, the primary reason for them was so that Lewis could refer to each child by name. Initially the tags were plain white, but later included the name of the show to one side, and WCPO's \"9\" logo to the other, with room in the middle for the child's name.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2093075",
"title": "Car model",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 549,
"text": "A model may also be referred to as a nameplate, specifically when referring to the product from the point of view of the manufacturer, especially a model over time. For example, the Chevrolet Suburban is the oldest automobile nameplate in continuous production, dating to 1934 (1935 model year), while the Chrysler New Yorker was (until its demise in 1996) the oldest North American car nameplate. \"Nameplate\" is also sometimes used more loosely, however, to refer to a brand or division of larger company (e.g., GMC), rather than a specific model.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14914864",
"title": "Brand awareness",
"section": "Section::::Types of brand awareness.:Marketing implications of brand awareness.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 1097,
"text": "A brand name that is well known to the majority of people or households is also called a \"household name\" and may be an indicator of brand success. Occasionally a brand can become so successful that the brand becomes synonymous with the category. For example, British people often talk about \"Hoovering the house\" when they actually mean \"vacuuming the house.\" (Hoover is a brand name). When this happens, the brand name is said to have \"gone \"generic\".\" Examples of brands becoming generic abound; Kleenex, Cellotape, Nescafe, Aspirin and Panadol. When a brand goes generic, it can present a marketing problem because when the consumer requests a named brand at the retail outlet, they may be supplied with a competing brand. For example, if a person enters a bar and requests \"a rum and Coke,\" the bartender may interpret that to mean a \"rum and cola-flavoured beverage,\" paving the way for the outlet to supply a cheaper alternative mixer. In such a scenario, Coca-Cola Ltd, who after investing in brand building for more than a century, is the ultimate loser because it does not get the sale.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2895348",
"title": "Japanese abbreviated and contracted words",
"section": "Section::::Patterns of contraction.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 243,
"text": "These abbreviated names are so common in Japan that many companies initiate abbreviations of the names of their own products. For example, the animated series \"Pretty Cure\" marketed itself under the four-character abbreviated name \"purikyua\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24578",
"title": "Pub",
"section": "Section::::Names.\n",
"start_paragraph_id": 87,
"start_character": 0,
"end_paragraph_id": 87,
"end_character": 414,
"text": "Pub names are used to identify and differentiate each pub. Modern names are sometimes a marketing ploy or attempt to create \"brand awareness\", frequently using a comic theme thought to be memorable, \"Slug and Lettuce\" for a pub chain being an example. Interesting origins are not confined to old or traditional names, however. Names and their origins can be broken up into a relatively small number of categories.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4bsx5d | if passwords for websites are suppose to be encrypted or only known to the user, how come some websites can tell me i have entered a password i changed years ago? | [
{
"answer": "They don't know the password, but they do know when you have set it. You tell them the password when you set it, but they don't remember the password, only its \"checksum\", or \"salted hash\", and forget the original password. which can be calculated from the original password and data that don't change during the lifetime of the account. \n\nThey don't have to know your password to check against it - they can just compute the checksum. They don't have to know the password to tell you when you have set it - they only need the timestamp for that.\n\nThis is just an ELI5 explanation, as crypto is extremely complex, counterintuitive and hard to understand.",
"provenance": null
},
{
"answer": "Passwords are not encrypted, they are hashed. Theoretically : encryption is a two way street, you can go from plaintext to encrypted and encrypted to plaintext. Hashing is a one way street, you can only go from plaintext to hash, NOT hash to plaintext.\n\nSince passwords are stored using that one way function, some enterprises feel that keeping them is not a security threat.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "20538184",
"title": "Privacy protocol",
"section": "Section::::Examples of privacy protocols.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 471,
"text": "A simple protocol that does not rely on a human third party involves password changing. This works anywhere one has to type in new passwords the same twice before the password is changed. The first individual will type their secret in the first box, and the second person will type their secret in the second box, if the password is successfully changed then the secret is shared. However the computer is still a third party and must be trusted not to have a key logger.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44395225",
"title": "Notpron",
"section": "Section::::Levels.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 394,
"text": "Levels consist of finding either a password (known as a UN/PW by the community) or finding a URL to use for the next level. Passwords do not require the user to create an account, but instead will be given to the user once they have found the answer to the riddle. Each solution to each level is very unique, such as decoding ciphers, image editing, musical knowledge, and even remote viewing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4459886",
"title": "Password strength",
"section": "Section::::Password policy.:Protecting passwords.\n",
"start_paragraph_id": 107,
"start_character": 0,
"end_paragraph_id": 107,
"end_character": 437,
"text": "Software is available for popular hand-held computers that can store passwords for numerous accounts in encrypted form. Passwords can be encrypted by hand on paper and remember the encryption method and key. And another approach is to use a single password or slightly varying passwords for low-security accounts and select distinctly separate strong passwords for a smaller number of high-value applications such as for online banking.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5474115",
"title": "Google Browser Sync",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 374,
"text": "Google Browser Sync required a Google account, in which the user's cookies, saved passwords, bookmarks, browsing history, tabs, and open windows could be stored. The data was optionally encrypted using an alphanumerical PIN, which theoretically prevented even Google from reading the data. Passwords and cookies were always encrypted and could only be accessed by the user.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34973773",
"title": "Browser security",
"section": "Section::::Password security model.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 654,
"text": "The contents of a web page are arbitrary and controlled by the entity owning the domain named displayed in the address bar. If HTTPS is used, then encryption is used to secure against attackers with access to the network from changing the page contents en route. When presented with a password field on a web page, a user is supposed to look at the address bar to determine whether the domain name in the address bar is the correct place to send the password. For example, for Google's single sign-on system (used on e.g. youtube.com), the user should always check that the address bar says \"https://accounts.google.com\" before inputting their password.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29384738",
"title": "Web browsing history",
"section": "Section::::Privacy.:Privacy of kept history data.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 352,
"text": "Likewise, if a user has not cleared their web browser history and has confidential sites listed there, they may want to use a strong password or other authentication solution for their user account on their computer, password-protect the computer when not in use, or encrypt the storage medium on which the web browser stores its history information..\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7697075",
"title": "Features of the Opera web browser",
"section": "Section::::Currently supported features.:Password manager.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 228,
"text": "Every page with a password form gives the user the option of storing the password for later use. To speed up the use of the website, when a user re-visits these pages, the username and password fields will be already filled in.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
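The "salted hash" scheme described in the first two answers of the record above can be sketched in a few lines. This is an illustration only: the storage layout, field names, and function names here are invented for the example, not any real site's schema, and the iteration count is arbitrary.

```python
import hashlib
import hmac
import os
import time


def set_password(password: str) -> dict:
    """Store only salt + hash + timestamp; the plaintext is forgotten."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"salt": salt, "hash": digest, "changed_at": time.time()}


def matches(record: dict, attempt: str) -> bool:
    """Re-hash the attempt with the stored salt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", attempt.encode(), record["salt"], 100_000)
    return hmac.compare_digest(digest, record["hash"])


# A site that keeps hashes of previous passwords can recognize an old
# password without ever knowing its plaintext:
history = [set_password("hunter2"), set_password("correct horse")]

assert matches(history[0], "hunter2")      # "you used this password years ago"
assert not matches(history[1], "hunter2")  # but it is not the current one
```

This matches the behavior asked about in the question: on every attempt the site re-hashes what you typed against each stored (salt, hash) pair, so it can say "that's an old password" while the `changed_at` timestamp tells it when you last changed it.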
1qn8z4 | if i kept switching out older body parts of mine with healthier ones as i grew up, could i live forever? | [
{
"answer": "Your brain cells will eventually age and die. If you replace those you can technically live forever, but will you still really be yourself?\n\nThis concept has been debated since the Ancient Greeks _URL_0_",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "302602",
"title": "Wolverine (character)",
"section": "Section::::Powers and abilities.:Healing and defensive powers.\n",
"start_paragraph_id": 87,
"start_character": 0,
"end_paragraph_id": 87,
"end_character": 370,
"text": "His healing factor also dramatically affects his aging process, allowing him to live far beyond the normal lifespan of a human. Despite being born in the late 19th century, he has the appearance, conditioning, health, and vitality of a man in his physical prime. While seemingly ageless, it is unknown exactly how greatly his healing factor extends his life expectancy.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3788596",
"title": "Saga of the Skolian Empire",
"section": "Section::::Skolian Empire.:Health care.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 635,
"text": "Due to advanced nanotechnology, microscopic nanomeds can be put into peoples bodies to manage cell repair and delay aging. This enables the extension of a human lifespan to several centuries. People also keep a young appearance, as they stop visibly aging at the age of 30 – 40. It is important to start the treatments as early in life as possible. People who received those treatments in early childhood or before birth (passed on from mother to child during pregnancy) achieve the best results. The older a person is when receiving anti-aging treatments for the first time, the larger the possibility that they won't work that well.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "229058",
"title": "Young adult (psychology)",
"section": "Section::::Health.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 306,
"text": "Young/prime adulthood can be considered the healthiest time of life and young adults are generally in good health, subject neither to disease nor the problems of senescence. Strength and physical performance reach their peak from 18–39 years of age. Flexibility may decrease with age throughout adulthood.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "196125",
"title": "Aplastic anemia",
"section": "Section::::Prognosis.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 213,
"text": "Older people (who are generally too frail to undergo bone marrow transplants), and people who are unable to find a good bone marrow match, undergoing immune suppression have five-year survival rates of up to 35%.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "179242",
"title": "Infertility",
"section": "Section::::Effects.:Psychological.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 250,
"text": "Older people with adult children appear to live longer. Why this is the case is unclear and may dependent in part on those who have children adopting a healthier lifestyle, support from children, or the circumstances that led to not having children.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60511632",
"title": "Mortuary archaeology",
"section": "Section::::Methodology.:Creating a Biological Profile.:Age.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 727,
"text": "When aging remains, there are many different methods. However, the research first needs to see if the remains are fused or unfused before applying the different aging techniques. Individuals going through growth will have different aging methods than those of adults. There are four different aging techniques for adult individuals. These include cranial sutures, degradation of the pubic symphysis, auricular surface, and the sternal rib end of the first and fourth rib. Younger individuals age is based on tooth eruption and fusion of bone at different rates. Once the individual is aged, then they can be placed in different age categories: young adult (20-34 years), middle adult (35-49 years), and old adult (50+ years). \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6121900",
"title": "Eternal youth",
"section": "Section::::Therapy.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 546,
"text": "The idea that the human body can be repaired in old age to a more youthful state has gathered significant commercial interest over the past few years, including by companies such as Human Longevity Inc, Google Calico, and Elysium Health. In addition to these larger companies, many startups are currently developing therapeutics to tackle the 'ageing problem' using therapy. In 2015 a new class of drugs senolytics was announced (currently in pre-clinical development) designed specifically to combat the underlying biological causes of frailty.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
17anc6 | the observer effect, the measurement problem and the 'conscious observer' of quantum mechanics? | [
{
"answer": "The \"observer effect\" and \"measurement\" problems are commonly misrepresented on the internet by people who are obsessed with new-age pseudoscience. It has nothing to do with conciousness or anything magical. To put it in ELI5 terms:\n\nImagine that we you are blindfolded and sitting in a chair. I have set up a machine that can always shoot an apple across the room and have it whiz by right in front of your face. You, being blindfolded, have to \"detect\" when the apple has passes by you by listening to a hair dryer that I have taped to your head. When the apple passes in front of the hair dryer, it changes the sound of the air being blown. The hairdryer will not change the flight of the apple in any way significant to our observations. To detect the apple, you have interacted with it, but not changed it. This is an observation made at our regular, real world scale.\n\nNow imagine we repeat the experiment with a paper ball instead of an apple. In this case, we'll still have to interact with the paper ball to detect it, but since the paper ball is so light, it's *going* to affect the paper ball's trajectory. This is an observation made at a quantum scale scale.\n\nOn a quantum scale, you can't \"see\" an electron or any other quantum particle. You have to interact with them to detect them, and interacting with them changes them. that's the problem.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "18698687",
"title": "Observer effect (physics)",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 957,
"text": "An especially unusual version of the observer effect occurs in quantum mechanics, as best demonstrated by the double-slit experiment. Physicists have found that even passive observation of quantum phenomena (by changing the test apparatus and passively 'ruling out' all but one possibility), can actually change the measured result. A particularly famous example is the 1998 Weizmann experiment. Despite the \"observer\" in this experiment being an electronic detector—possibly due to the assumption that the word \"observer\" implies a person—its results have led to the popular belief that a conscious mind can directly affect reality. The need for the \"observer\" to be conscious is not supported by scientific research, and has been pointed out as a misconception rooted in a poor understanding of the quantum wave function and the quantum measurement process, apparently being the generation of information at its most basic level that produces the effect.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18698687",
"title": "Observer effect (physics)",
"section": "Section::::Quantum mechanics.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 570,
"text": "In the context of the so-called hidden-measurements interpretation of quantum mechanics, the observer-effect can be understood as an \"instrument effect\" which results from the combination of the following two aspects: (a) an invasiveness of the measurement process, intrinsically incorporated in its experimental protocol (which therefore cannot be eliminated); (b) the presence of a random mechanism (due to fluctuations in the experimental context) through which a specific measurement-interaction is each time actualized, in a non-predictable (non-controllable) way.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "474880",
"title": "Hawthorne effect",
"section": "Section::::Secondary observer effect.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 1611,
"text": "Despite the observer effect as popularized in the Hawthorne experiments being perhaps falsely identified (see above discussion), the popularity and plausibility of the observer effect in theory has led researchers to postulate that this effect could take place at a second level. Thus it has been proposed that there is a secondary observer effect when researchers working with secondary data such as survey data or various indicators may impact the results of their scientific research. Rather than having an effect on the subjects (as with the primary observer effect), the researchers likely have their own idiosyncrasies that influence how they handle the data and even what data they obtain from secondary sources. For one, the researchers may choose seemingly innocuous steps in their statistical analyses that end up causing significantly different results using the same data; e.g., weighting strategies, factor analytic techniques, or choice of estimation. In addition, researchers may use software packages that have different default settings that lead to small but significant fluctuations. Finally, the data that researchers use may not be identical, even though it seems so. For example, the OECD collects and distributes various socio-economic data; however, these data change over time such that a researcher who downloads the Australian GDP data for the year 2000 may have slightly different values than a researcher who downloads the same Australian GDP 2000 data a few years later. The idea of the secondary observer effect was floated by Nate Breznau in a thus far relatively obscure paper.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18698687",
"title": "Observer effect (physics)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 762,
"text": "In physics, the observer effect is the theory that the mere observation of a phenomenon inevitably changes that phenomenon. This is often the result of instruments that, by necessity, alter the state of what they measure in some manner. A common example is checking the pressure in an automobile tire; this is difficult to do without letting out some of the air, thus changing the pressure. Similarly, it is not possible to see any object without light hitting the object, and causing it to reflect that light. While the effects of observation are often negligible, the object still experiences a change. This effect can be found in many domains of physics, but can usually be reduced to insignificance by using different instruments or observation techniques. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24731079",
"title": "Observer (quantum physics)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 217,
"text": "In quantum mechanics, \"observation\" is synonymous with quantum measurement and \"observer\" with a measurement apparatus and \"observable\" with what can be measured. Thus the quantum mechanical observer does not have to\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41681462",
"title": "Von Neumann–Wigner interpretation",
"section": "Section::::Reception.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 733,
"text": "A poll was conducted at a quantum mechanics conference in 2011 using 33 participants (including physicists, mathematicians, and philosophers). Researchers found that 6% of participants (2 of the 33) indicated that they believed the observer \"plays a distinguished physical role (e.g., wave-function collapse by consciousness)\". This poll also states that 55% (18 of the 33) indicated that they believed the observer \"plays a fundamental role in the application of the formalism but plays no distinguished physical role\". They also mention that \"Popular accounts have sometimes suggested that the Copenhagen interpretation attributes such a role to consciousness. In our view, this is to misunderstand the Copenhagen interpretation.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17384910",
"title": "Observer (special relativity)",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 501,
"text": "Physicists use the term \"observer\" as shorthand for a specific reference frame from which a set of objects or events is being measured. Speaking of an observer in special relativity is not specifically hypothesizing an individual person who is experiencing events, but rather it is a particular mathematical context which objects and events are to be evaluated from. The effects of special relativity occur whether or not there is a sentient being within the inertial reference frame to witness them.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3awry9 | why are dj's treated like artists, with a stage name and everything, and their own 'shows' people get tickets to, when they just play other people's copyrighted music? | [
{
"answer": "Their art isn't in the music itself, but in the selection, order, and pacing of the music. They have to pick the right songs to fit the mood, then put them in the right order, at the right pace, to get the feeling that they're looking for. DJs are essentially the audio version of collage artists. Anyone can be a DJ, just like anyone make a collage, but there's a massive variation in quality between some kid that does it once or twice, and someone that's spent years treating it as a skill and has developed an artist's eye for it. ",
"provenance": null
},
{
"answer": "The DJs you're probably referring to are the big ones at EDC, Ultra, etc. Those guys produce a lot of the music they play or they remix those sings. They do play other peoples music as well, but they are also putting their own spin on it.",
"provenance": null
},
{
"answer": "What Pesto says is right, but there's also a lot more to any DJ worth their salt. Any numbnut with a preamp soundboard and some speakers can cue up a playlist and press spacebar. Any Joe with DJware or a multichannel board can crossfade to a new track as a previous one ends.\n\nTruly DJing a live set well, like the pros do, involves not only knowing every single facet of every song on your computer backwards and forwards but also involves being able to assign elements of those songs to effects controllers and changing and intermixing those songs on the fly to make whole new songs. Youtube mashups and you'll begin to see some of what's possible: _URL_0_ Most effects are assigned to buttons (which then have to be accurately memorized so as to be used properly during a show), but most pro DJ tools also allow effects buttons to be assigned on the fly, so there's a LOT of room for customization.\n\nAre there DJs who press play, and then sit back and piddle on Facebook? Yes, and they serve a purpose. There's a niche for everything. Many people know the original (vocal) mix of a given popular song. Musically literate fans of DJs and their associated music also know fifteen other (re)mixes, both official and non, and can differentiate between most of those and what constitutes something created uniquely on the fly. There's a very real skill involved. Youtube mashups. I think you'll find some truly eye-/mind-opening things DJs can do.",
"provenance": null
},
{
"answer": "\"Art\" is a form of expression.\n\nImagine that 100 people are going to be showing up to your house in an hour. How will you entertain them? Playing music is a good option. Do you have the right music to play? Would you just turn on the radio? Go to Pandora? \n\nRadios have commercials. Songs don't always compliment one another. \n\nBut let's say you don't want to risk having your party fail due to poor music selection... so you spend some time listening to songs, figuring out which ones compliment one another, which flow together, which ones get the crowd pumped and excited, and which ones give them a short breather so they can get ready for the next song.\n\nBut crap... that takes a lot of effort. Sure, pressing \"play\" on a machine may end up with a similar result... but you don't want to use a machine for this. You want to learn the fine-motor skills and muscle memory required to fluidly operate your music gear covered in buttons and switches. Like an audible chef, you craft a meal of sounds and rhythms for the crowds' ears... you manage to completely hide and obscure the pattern of song selection from the crowd to the point that the entire experience feels like one long ride of enjoyment. All those songs made by all those other artists might as well be different brands of paint being combined onto the DJ's percussive canvas.\n\nSo, to answer your question, the reason that people pay to see these shows is because these DJs provide a service that **not everyone** can do. And, sure, while the entry barriers to becoming a \"Dj\" are not very high, some Djs are simply better than others and can provide better experiences than their competitors.... 
so much so that fan bases develop and seek out opportunities to exchange their money (which plays no music) for temporary exposure to auditory stimuli that is otherwise unavailable.\n\nAt the end of the day, an experienced, talented DJ (just like any musician) can combine layers of sound in a way that taps them directly into the minds of their audience. That's pretty neat.\n\nImagine yourself on stage with some tables, wires, and buttons. Before you is a crowd of thousands. They are there because you have created something that meant something to them. You are there because you are an artist.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1836257",
"title": "Erick Morillo",
"section": "Section::::Career.:Club nights.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 268,
"text": "I party with the promoters I play for. A lot of DJs don't like to do that; they play the party, go back to the hotel and then get ready to go home. Not me. I don't deny it! For me a DJ is someone who brings a vibe. If you don't party, then how do you bring that vibe?\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3620361",
"title": "Chillits",
"section": "Section::::Music.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 408,
"text": "Most of the DJs and live musicians who play at Chillits could be considered amateur, in the sense that they do not attempt to make a living from playing music but instead donate their time and services. Some are San Francisco Bay Area technology luminaries, such as Brian Behlendorf. Starting with the 2000 gathering, many of the sets have been recorded and made available for free online (see links below).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8566928",
"title": "Music Control",
"section": "Section::::Features.:Uploaded.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 218,
"text": "Uploaded was the part of the show, which gave unsigned artists the chance to get their music showcased on the radio. Bands could upload their music through the website and a band was featured each evening on the show.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12467022",
"title": "List of club DJs",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 381,
"text": "This is a list of notable club DJs, professionals who perform, or are known to perform, at large nightclub venues or other dance events, or who have been pioneers in the development of the role of the club DJ. DJs play a mix of recorded music for an audience at a bar, nightclub, dance club or rave who dance to the music. The music is played through a sound reinforcement system.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2924002",
"title": "Electronic dance music",
"section": "Section::::Production.:Ghost production.\n",
"start_paragraph_id": 78,
"start_character": 0,
"end_paragraph_id": 78,
"end_character": 450,
"text": "Many ghost producers sign agreements that prevent them from working for anyone else or establishing themselves as a solo artist. Such non-disclosure agreements are often noted as predatory because ghost producers, especially teenage producers, do not have an understanding of the music industry. London producer Mat Zo has alleged that DJs who hire ghost producers \"have pretended to make their own music and [left] us actual producers to struggle\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8683",
"title": "Disc jockey",
"section": "Section::::Types.:Club DJs.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 991,
"text": "Club DJs, commonly referred as DJs in general, play music at musical events, such as parties at music venues or bars, music festivals, corporate and private events. Typically, club DJs mix music recordings from two or more sources using different mixing techniques in order to produce non-stopping flow of music. One key technique used for seamlessly transitioning from one song to another is beatmatching. A DJ who mostly plays and mixes one specific music genre is often given the title of that genre; for example, a DJ who plays hip hop music is called a hip hop DJ, a DJ who plays house music is a house DJ, a DJ who plays techno is called a techno DJ, and so on. The quality of a DJ performance (often called a DJ mix or DJ set) consists of two main features: technical skills, or how well can DJ operate the equipment and produce smooth transitions between two or more recordings and a playlist, or ability of a DJ to select most suitable recordings also known as \"reading the crowd\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30775440",
"title": "Crystal Fighters",
"section": "Section::::Live performances.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 721,
"text": "Their knack for fusing genres has garnered them a reputation for thrilling and involving live shows, of which Pringle explains: \"With dance music you can go and see a DJ who is in a booth, so you dance all night and just watch him there kind of doing nothing, which obviously can be amazing. But if you have a live performance where you are playing the same kind of music, the whole experience then changes into a rock show and so much more besides, so we’re trying to make people experience dance music in a new way.\" This reputation has led to the band being awarded the accolade of \"a definite band to watch during the summer festival season\" by numerous publications, including \"The Guardian\", Artrocker and Skiddle.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5zc08e | what exactly was dialup and why couldn't you use the phone at the same time? | [
{
"answer": "The computer sent data through the phone line. Since the phone line transmits sound data - the computer was literally generating sounds that could be interpreted as data - high and low pitched squeals that represent the data you are sending or receiving. It was like a very rapid morse code.\n\nIf you picked up the phone, you would be adding your own sounds on top of the computer's sounds. The computer at the other end wouldn't know that you picked up the phone, it would just assume that you're sending data to, and this would screw up all of the data that gets sent.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "11357820",
"title": "Internet in the United States",
"section": "Section::::Overview.:Access and speed.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 592,
"text": "Dial-up access is a connection to the Internet through a phone line, creating a semi-permanent link to the Internet. Operating on a single channel, it monopolizes the phone line and is the slowest method of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas because it requires no infrastructure other than the already existing telephone network. Dial-up connections typically do not exceed a speed of 56 kbit/s, because they are primarily made via a 56k modem. Since the mid 2000s, this technology became obsolete in most developed countries.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15500494",
"title": "Internet in India",
"section": "Section::::Internet user base.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 510,
"text": "Dial-up access is a connection to the Internet through a phone line, creating a semi-permanent link to the Internet. Operating on a single channel, it monopolizes the phone line and is the slowest method of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas because it requires no infrastructure other than the already existing telephone network. Dial-up connections typically do not exceed a speed of 56 kbit/s, because they are primarily made via a 56k modem.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "375271",
"title": "Dialer",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 784,
"text": "A dialer (American English) or dialler (British English) is an electronic device that is connected to a telephone line to monitor the dialed numbers and alter them to seamlessly provide services that otherwise require lengthy National or International access codes to be dialed. A dialer automatically inserts and modifies the numbers depending on the time of day, country or area code dialed, allowing the user to subscribe to the service providers who offer the best rates. For example, a dialer could be programmed to use one service provider for international calls and another for cellular calls. This process is known as prefix insertion or least cost routing. A line powered dialer does not need any external power but instead takes the power it needs from the telephone line.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "375271",
"title": "Dialer",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 318,
"text": "Another type of dialer is a computer program which creates a connection to the Internet or another computer network over the analog telephone or Integrated Services Digital Network (ISDN). Many operating systems already contain such a program for connections through the Point-to-Point Protocol (PPP), such as WvDial.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47373274",
"title": "Dialling (telephony)",
"section": "Section::::Touch tone dial.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 372,
"text": "Introduced to the public in 1963 by AT&T, Touch-Tone dialing greatly shortened the time of initiating a telephone call. It also enabled direct signaling from a telephone across the long-distance network using audio-frequency tones, which was impossible with the rotary dials that generated digital direct current pulses that had to be decoded by the local central office.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "162228",
"title": "Dial tone",
"section": "Section::::Variants.:Second dial tone.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 232,
"text": "Private or internal PBX or key phone systems also have their own dial tone, sometimes the same as the external PSTN one, and sometimes different so as to remind users to dial a prefix for, or select in another way, an outside line.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "162228",
"title": "Dial tone",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 1300,
"text": "Western Electric's international company in Belgium, the BTMC (Bell Telephone Manufacturing Company) first introduced Dial Tone with the cutover of its 7A Rotary Automatic Machine Switching System at Darlington, England, on 10 October 1914. Dial Tone was an essential feature, because the 7A Rotary system was common control. When a calling subscriber lifted their telephone receiver, Line Relays associated with the line operated causing all free First Line Finders in the subscribers group to drive, hunting for the subscribers line. When the line was found, start relays caused free Second Line Finders, in a particular group, to drive, hunting for the successful First Line Finder. Each Second Line Finder was paired with First Group Selector and R3 Register Chooser sequence switch, so when a Second Line Finder had found the First Line Finder, the First Group Selector's R1 sequence switch advanced (rotated) from its home position, causing the R3 Register Chooser sequence switch to advance (rotate), looking for a free Register. When a free Register was seized its R4 sequence switch advanced and Dial Tone was returned to the calling subscriber. This whole process could take as long as four seconds, so if the calling subscriber dialed before receiving a dial tone, their call would fail. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
7ak01b | Why don't modern cellphones create interferences near speakers any more? | [
{
"answer": "So, for our friends that don't know, the buzzing is a signal in the AM range.\n\nThe effect is well known since the rollout of GSM in Europe begun (see Stephen Temple's _\"Inside the Mobile Revolution_\", Ch. 22). What's happening is that in TDMA, each transmitter gets a time slot in which to transmit, and then remains silent until the next slot. This pattern (transmit-silence-transmit) leads to the power amp delivering large amounts of energy within either the 850/950 or 1800/1900 MHz GSM bands, and in these bands it results at a ~217 Hz-modulated intervals IIRC. The signal is detected on any transistors or diode structures in chips, on multiple points of an amplifier simultaneously, including power regulator chips, batteries, and so on. It can occur even inside the handset itself. In GSM's 800-900 MHz range, any 80mm-long copper trace works like a quarter wave antenna, or stripline resonator.\n\nYou can see the spectrum of the burst [here](_URL_0_). The transmission power is near 2 Watts (yeah, GSM is power hungry). The resulting detection at an audio chip results in a voltage transient that looks like [this](_URL_2_); note the shift in both the supply and the ground. The output of the amplifier will eventually be clipped and filtered down to the audible range, but distortion can produce frequency components at any sum/difference of multiples of the original frequencies.\n\nThe reasons subsequent RANs (UTRAN, GERAN, E-UTRAN) don't present this problem are:\n\n* First and foremost, awareness of the problem. For example, back in 1990, when GSM was being rolled out across EU, this interference even affected devices like hearing aids, and there was major cause for concern, which translated in safety requirements for the development of subsequent standards\n* TDMA was abandoned. 
Instead, CDMA was adopted, where each channel uses the entire spectrum all the time, and multiplexing is achieved with frequency convolution with a signal that is orthogonal between every pair of transmitters; read more [here](_URL_1_)\n* Power requirements for user equipment became more stringent. For example, one of the first prototype chips for E-UTRAN claimed power consumption below 100 mW during the demo; see [here](_URL_3_). _Don't take that at face value, the demo was a tranmission of a few seconds. Still a remarkable difference w/ GSM_.\n\nI'm not aware if audio components changed their design to avoid problems like this.",
"provenance": null
},
{
"answer": "My iPhone 6 makes my guitar amp buzz. Any elaboration on that? ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "4277314",
"title": "Cellular repeater",
"section": "Section::::Reasons for weak signal.:Diffraction and general attenuation.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 349,
"text": "The longer wavelengths have the advantage of diffracting more, and so line of sight is not as necessary to obtain a good signal. Because the frequencies that cell phones use are too high to reflect off the ionosphere as shortwave radio waves do, cell phone waves cannot travel via the ionosphere. (See Diffraction and Attenuation for more details).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16315655",
"title": "Wireless speaker",
"section": "Section::::Overview.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 479,
"text": "Wireless speakers receive considerable criticism from high-end audiophiles because of the potential for RF interference with other signal sources, like cordless phones, as well as for the relatively low sound quality some models deliver. Despite the criticism, wireless speakers have gained popularity with consumers and a growing number of models are actively marketed. Specifically, small and portable wireless Bluetooth speaker models have become very popular with consumers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "271195",
"title": "Radio propagation",
"section": "Section::::Practical effects.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 665,
"text": "Non-broadcast signals are also affected. Mobile phone signals are in the UHF band, ranging from 700 to over 2600 Megahertz, a range which makes them even more prone to weather-induced propagation changes. In urban (and to some extent suburban) areas with a high population density, this is partly offset by the use of smaller cells, which use lower effective radiated power and beam tilt to reduce interference, and therefore increase frequency reuse and user capacity. However, since this would not be very cost-effective in more rural areas, these cells are larger and so more likely to cause interference over longer distances when propagation conditions allow.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1072324",
"title": "Electromagnetic interference",
"section": "Section::::Susceptibilities of different radio technologies.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 260,
"text": "Interference tends to be more troublesome with older radio technologies such as analogue amplitude modulation, which have no way of distinguishing unwanted in-band signals from the intended signal, and the omnidirectional antennas used with broadcast systems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13021454",
"title": "Acoustic shock",
"section": "Section::::Prevention.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 596,
"text": "There are many methods of attempting to reduce the risk of AS. Several devices attempt to remove potentially harmful sound signals by digital signal processing. None has yet been shown to be fully effective. Devices which solely limit noise levels to about 85 dB have been shown in field trials to be ineffective (data from these trials has not been released into the public domain). Limiting background noise and office stress may also reduce the chance of an Acoustic Shock. Proper use of the headset and preventing mobile phones from being used in call centers reduces the chance of feedback.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7342571",
"title": "Selective calling",
"section": "Section::::Group calling.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 502,
"text": "In uses where missed calls are allowable, selective calling can also hide the presence of interfering signals such as receiver-produced intermodulation. Receivers with poor specifications—such as scanners or low-cost mobile radios—cannot reject the unwanted signals on nearby channels in urban environments. The interference will still be present and will still degrade system performance but by using selective calling the user will not have to hear the noises produced by receiving the interference.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "520289",
"title": "Hearing aid",
"section": "Section::::Technology.:Compatibility with telephones.\n",
"start_paragraph_id": 92,
"start_character": 0,
"end_paragraph_id": 92,
"end_character": 1628,
"text": "The electromagnetic (telecoil) mode is usually more effective than the acoustic method. This is mainly because the microphone is often automatically switched off when the hearing aid is operating in telecoil mode, so background noise is not amplified. Since there is an electronic connection to the phone, the sound is clearer and distortion is less likely. But in order for this to work, the phone has to be hearing-aid compatible. More technically, the phone's speaker has to have a voice coil that generates a relatively strong electromagnetic field. Speakers with strong voice coils are more expensive and require more energy than the tiny ones used in many modern telephones; phones with the small low-power speakers cannot couple electromagnetically with the telecoil in the hearing aid, so the hearing aid must then switch to acoustic mode. Also, many mobile phones emit high levels of electromagnetic noise that creates audible static in the hearing aid when the telecoil is used. A workaround that resolves this issue on many mobile phones is to plug a wired (not Bluetooth) headset into the mobile phone; with the headset placed near the hearing aid the phone can be held far enough away to attenuate the static. Another method is to use a \"neckloop\" (which is like a portable, around-the-neck induction loop), and plug the neckloop directly into the standard audio jack (headphones jack) of a smartphone (or laptop, or stereo, etc.). Then, with the hearing aids' telecoil turned on (usually a button to press), the sound will travel directly from the phone, through the neckloop and into the hearing aids' telecoils.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2ropnt | whats going on with the statehood movement in puerto rico as of now? | [
{
"answer": "There was a non-binding referendum is 2012, but most people see it for the sham it was.\n\nThe pro-statehood ruling party rigged it so they first asked if they people preferred the status quo, then asked the remaining people if the wanted statehood. If they asked the questions in the other order, they would have gotten a different answer.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "840701",
"title": "Independence movement in Puerto Rico",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 409,
"text": "The Independence Movement in Puerto Rico refers to initiatives by inhabitants throughout the history of Puerto Rico to obtain full political independence for the island territory, first from the Spanish Empire, from 1493 to 1898 and, since 1898, from the United States. A small variety of groups, movements, political parties, and organizations have worked for Puerto Rico's independence over the centuries. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30874732",
"title": "Political status of Puerto Rico",
"section": "Section::::Controversies.:Granting of U.S. citizenship and cultural identity.\n",
"start_paragraph_id": 110,
"start_character": 0,
"end_paragraph_id": 110,
"end_character": 929,
"text": "Former chief of the Puerto Rico Supreme Court José Trías Monge insists that statehood was never intended for the island and that, unlike Alaska and Hawaii, which Congress deemed incorporated territories and slated for annexation to the Union from the start, Puerto Rico was kept \"unincorporated\" specifically to avoid offering it statehood. And Myriam Marquez has stated that Puerto Ricans \"fear that statehood would strip the people of their national identity, of their distinct culture and language\". Ayala and Bernabe add that the \"purpose of the inclusion of U.S. citizenship to Puerto Ricans in the Jones Act of 1917 was an attempt by Congress to block independence and perpetuate Puerto Rico in its colonial status\". Proponents of the citizenship clause in the Jones Act argue that \"the extension of citizenship did not constitute a promise of statehood but rather an attempt to exclude any consideration of independence\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "95259",
"title": "Luis A. Ferré",
"section": "Section::::Political life.:Governor and Senator.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 477,
"text": "On July 23, 1967, a plebiscite was held to decide if the people of Puerto Rico desired to become an independent nation, a state of the United States of America, or continue the commonwealth relation established in 1952. The majority of Puerto Ricans opted for the Commonwealth option (see Puerto Rican status referendums). Disagreement within the then pro-statehood party headed by Miguel A. García Méndez led Ferré and others to found the New Progressive Party (a.k.a., PNP).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "82536",
"title": "Puerto Ricans",
"section": "Section::::Political and international status.:Decolonization and status referendums.\n",
"start_paragraph_id": 63,
"start_character": 0,
"end_paragraph_id": 63,
"end_character": 219,
"text": "Even with the Puerto Ricans' vote for statehood, action by the United States Congress would be necessary to implement changes to the status of Puerto Rico under the Territorial Clause of the United States Constitution.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56270432",
"title": "Crime in Puerto Rico",
"section": "Section::::Debates and discussions.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 500,
"text": "In 2018, Puerto Rican social activists began a renewed push for statehood status in the atmosphere of increased post-hurricane media attention. The effects that closer ties and direct governmental integration between the island and the mainland would bring, particularly in terms of crime and social stability in general, are uncertain. Past efforts in favor of statehood by political figures such as U.S. Presidents Ronald Reagan and George H. W. Bush have previously amounted to relatively little.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37697922",
"title": "Public debt of Puerto Rico",
"section": "Section::::History.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 432,
"text": "Statehood might be useful as a means of dealing with the financial crisis, since it would allow for bankruptcy and the relevant protection. The Puerto Rican status referendum, 2017 is due to be held on June 11, 2017. The two options at that time will be \"Statehood\" and \"Independence/Free Association\". This will be the first of the five referendums that will not offer the choice of retaining the current status as a Commonwealth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "80448",
"title": "Government of Puerto Rico",
"section": "Section::::Government finances.:Public debt.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 432,
"text": "Statehood might be useful as a means of dealing with the financial crisis, since it would allow for bankruptcy and the relevant protection. The Puerto Rican status referendum, 2017 is due to be held on June 11, 2017. The two options at that time will be \"Statehood\" and \"Independence/Free Association\". This will be the first of the five referendums that will not offer the choice of retaining the current status as a Commonwealth.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2rt09c | How do deaf people perceive heavy bass sounds? | [
{
"answer": "I'll take a shot at this. I think many people will agree they can \"feel\" low-frequency (bass) sounds in their chest when loud enough. We can perceive this vibration in our bodies with other senses, probably [somatosensation](_URL_3_) (i.e. touch), perhaps with [proprioception](_URL_0_). I found [this paper](_URL_1_), which measured chest vibration due to jet engine sounds and found a resonance at 63-100 Hz, indicating sounds in this frequency range might possibly be felt in the chest. [This paper](_URL_2_) basically confirmed that by reporting that the perceptual rating of vibration in response to low frequency sound was better correlated with accelerometer measurements on the chest/abdomen compared to the head. This supports the \"chest-thumping\" idea of bass sounds.\n\nAs far as actual studies with deaf individuals, I could only find a paper briefly discussed in [this review](_URL_4_) but couldn't find a copy of the actual paper (Yamada et al., Jnl Low Freq Noise Vibn 2, 32). Anyway, supposedly the deaf subjects could perceive low-frequency sounds at levels only 40-50 dB above normal hearing subjects. For reference, we usually consider a deficit of > 90 dB to be \"profound hearing loss\". This indicates the deaf subjects were probably using another cue (e.g. vibratory) to perceive the sound. \n\nIt should be noted that deaf individuals can have some residual hearing that allows them to perceive very intense sounds. I have a friend with thresholds at something like 105 dB SPL, so he can hear something like a loud power tool. Of course, for these sounds there's the vibration sense as well, so the perception sort-of merges together. There's also the sense of pain, which kicks in around 130-140 dB SPL (think standing next to a jet engine). \n\nedit: typos\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "768413",
"title": "Ear",
"section": "Section::::Function.:Hearing.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 501,
"text": "The human ear can generally hear sounds with frequencies between 20 Hz and 20 kHz (the audio range). Sounds outside this range are considered infrasound (below 20 Hz) or ultrasound (above 20 kHz) Although hearing requires an intact and functioning auditory portion of the central nervous system as well as a working ear, human deafness (extreme insensitivity to sound) most commonly occurs because of abnormalities of the inner ear, rather than in the nerves or tracts of the central auditory system.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1999941",
"title": "Sub-bass",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 545,
"text": "Sub-bass sounds are the deep, low- register pitched pitches approximately below 60 Hz (C in scientific pitch notation) and extending downward to include the lowest frequency humans can hear, assumed at about 20 Hz (E). In this range, human hearing is not very sensitive, so sounds in this range tend to be felt more than heard. Sound reinforcement systems and PA systems often use one or more subwoofer loudspeaker cabinets that are specifically designed for amplifying sounds in the sub-bass range. Sounds below sub-bass are called infrasound.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34118956",
"title": "Perception of infrasound",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 659,
"text": "Infrasound is sound at frequencies lower than the low frequency end of human hearing threshold at 20 Hz. It is known, however, that humans can perceive sounds below this frequency at very high pressure levels. Infrasound can come from many natural as well as man-made sources, including weather patterns, topographic features, ocean wave activity, thunderstorms, geomagnetic storms, earthquakes, jet streams, mountain ranges, and rocket launchings. Infrasounds are also present in the vocalizations of some animals. Low frequency sounds can travel for long distances with very little attenuation and can be detected hundreds of miles away from their sources.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "215176",
"title": "Infrasound",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 467,
"text": "Infrasound, sometimes referred to as low-frequency sound, is sound that is lower in frequency than 20 Hz or cycles per second, the \"normal\" limit of human hearing. Hearing becomes gradually less sensitive as frequency decreases, so for humans to perceive infrasound, the sound pressure must be sufficiently high. The ear is the primary organ for sensing infrasound, but at higher intensities it is possible to feel infrasound vibrations in various parts of the body.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41079",
"title": "Dynamic range",
"section": "Section::::Human perception.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 691,
"text": "A human is capable of hearing (and usefully discerning) anything from a quiet murmur in a soundproofed room to the loudest heavy metal concert. Such a difference can exceed 100 dB which represents a factor of 100,000 in amplitude and a factor 10,000,000,000 in power. The dynamic range of human hearing is roughly 140 dB, varying with frequency, from the threshold of hearing (around −9 dB SPL at 3 kHz) to the threshold of pain (from 120–140 dB SPL). This wide dynamic range cannot be perceived all at once, however; the tensor tympani, stapedius muscle, and outer hair cells all act as mechanical dynamic range compressors to adjust the sensitivity of the ear to different ambient levels.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57497882",
"title": "Recruitment (medicine)",
"section": "",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 495,
"text": "What sounds normal for someone with normal hearing may be too soft for someone with recruitment, and what is too loud for someone with normal hearing is also too loud for the patient with recruitment. In effect, the range of sound intensity that a patient with recruitment can tolerate is much narrower. Further adding to the difficulty, recruitment is observed in those frequencies that are most impaired—in the high frequencies, which also carry critical information for speech understanding.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "763138",
"title": "Sonic weapon",
"section": "Section::::Use and deployment.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 387,
"text": "Extremely high-power sound waves can disrupt or destroy the eardrums of a target and cause severe pain or disorientation. This is usually sufficient to incapacitate a person. Less powerful sound waves can cause humans to experience nausea or discomfort. The use of these frequencies to incapacitate persons has occurred both in anti-citizen special operation and crowd control settings.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9l712w | Before Augustus founded the empire, the Roman Republic was plagued with civil wars. Why didn't the Parthians invade? | [
{
"answer": "They did...it's all over the sources...\n\nThe Parthians crossed the Euphrates only twice. In 51 Cicero feared a Parthian invasion into Cilicia, but it did not materialize, and the brief Parthian campaign following Crassus' defeat fizzled out quickly. Plutarch claims that Pompey reached out to the Parthians for asylum, but he ended up going to Egypt instead and the Parthians were not active on the Roman frontier for most of the 40s. A Parthian campaign in 41, led by the younger Labienus, was initially successful, but they were disastrously defeated by Ventidius Bassus, losing the crown prince Pacorus. The Caesarians' success at Philippi allowed Antony to launch a large expedition into Armenia, which was not particularly successful but was not followed by a Parthian counterattack. Though wars were occasionally fought in Armenia, and the Romans successfully invaded Parthia a few times (under Trajan and Septimius Severus, for example), the Parthians did not again cross the Euphrates. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "17273443",
"title": "Roman–Parthian Wars",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 445,
"text": "Battles between the Parthian Empire and the Roman Republic began in 54 BC. This first incursion against Parthia was repulsed, notably at the Battle of Carrhae (53 BC). During the Roman Liberators' civil war of the 1st Century BC, the Parthians actively supported Brutus and Cassius, invading Syria, and gaining territories in the Levant. However, the conclusion of the second Roman civil war brought a revival of Roman strength in Western Asia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30863164",
"title": "Battle of Sarmisegetusa",
"section": "Section::::Background.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 407,
"text": "Because of the threat the Dacians represented to the Roman Empire's eastward expansion, in the year 101 Emperor Trajan made the decision to begin a campaign against them. The first conflict began on March 25 and the Roman troops, consisting of four principal legions, the units X \"Gemina\", XI \"Claudia\", II \"Traiana Fortis\" and XXX \"Ulpia Victrix\", defeated the Dacians, and it thus ended in Roman victory.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25994",
"title": "Roman legion",
"section": "Section::::History.:Late Republic (107–30 BC).\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 565,
"text": "After the Marian reforms and throughout the history of Rome's Late Republic, the legions played an important political role. By the 1st century BC, the threat of the legions under a demagogue was recognized. Governors were not allowed to leave their provinces with their legions. When Julius Caesar broke this rule, leaving his province of Gaul and crossing the Rubicon into Italy, he precipitated a constitutional crisis. This crisis and the civil wars which followed brought an end to the Republic and led to the foundation of the Empire under Augustus in 27 BC.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19391158",
"title": "Antony's Parthian War",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 244,
"text": "Julius Caesar had planned an invasion of Parthia, but he was assassinated before implementing it. In 40 BC, the Parthians were joined by Pompeian forces and briefly captured much of the Roman East, but were defeated in Antony's counter-attack.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59841301",
"title": "Julius Caesar's planned invasion of the Parthian Empire",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 505,
"text": "Julius Caesar's planned invasion of the Parthian Empire was to begin in 44 BC; however, due to his assassination that same year, the invasion never took place. The campaign was to start with the pacification of Dacia, followed by an invasion of Parthia. Plutarch also recorded that once Parthia was subdued the army would continue to Scythia, then Germania and finally back to Rome. These grander plans are found only in Plutarch's \"Parallel Lives\", and their authenticity is questioned by most scholars.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30889261",
"title": "Berber kings of Roman-era Tunisia",
"section": "Section::::Juba, Bocchus, Juba, Ptolemy.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 518,
"text": "Augustus (imperial rule: 31 BC to AD 14) controlled the Roman state following the civil wars that marked the end of the Republic (c. 510–44). He established a quasi-constitutional regime known as the Principate, commonly included as the first phase of the Empire. Roman actions in Africa throughout the period of civil war are harshly criticized by a modern Maghribi historian, Abdallah Laroui, who notes the cumulative lands lost by Berbers to Romans, and how the Romans had steadily steered events to their benefit.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16100009",
"title": "Antes (people)",
"section": "Section::::History.:6th and 7th centuries.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 795,
"text": "Despite numerous defections to the Romans during the campaign, the Avar attack appears to have ended the Antean polity. They never appear in sources apart from the epithet \"Anticus\" in the imperial titulature in 612. Curta argues that the 602 attack on the Antes destroyed their political independence. However, the epithet \"Anticus\" is attested in imperial titulature until 612, thus Kardaras rather argues that they disappearance of the Antes relates to general collapse of the Scythian/ lower Danubian \"limes\" which they defended, at which time their hegemony on the lower Danube ended. Whatever the case, shortly after the collapse of the Danubian \"limes\" (more specifically, the tactical Roman withdrawal), the first evidence of Slavic settlement in north-eastern Bulgaria begin to appear.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
b48zht | Can bacteria feel pain? | [
{
"answer": "Not in any sense of the term that would make sense from a human perspective, for sure: by definition, single-celled organisms don't have nerve cells, and what we call \"pain\" is entirely a nervous-system response to various stimuli.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "24261150",
"title": "Pain in animals",
"section": "Section::::Invertebrates.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 313,
"text": "Though it has been argued that most invertebrates do not feel pain, there is some evidence that invertebrates, especially the decapod crustaceans (e.g. crabs and lobsters) and cephalopods (e.g. octopuses), exhibit behavioural and physiological reactions indicating they may have the capacity for this experience.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37099562",
"title": "Declawing of crabs",
"section": "Section::::Pain and stress caused by declawing.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 545,
"text": "There is debate about whether invertebrates can experience pain. Some of the most compelling evidence for pain in invertebrates exists for crustaceans in terms of trade-offs between stimulus avoidance and other motivational requirements. Evidence of the ability for crabs to feel pain is supported by their possessing an opioid receptor system, showing learned avoidance to putatively painful stimuli, and responding appropriately to analagesics and anaesthetics. These all indicate it is likely that crabs can experience pain during declawing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24373",
"title": "Pain",
"section": "Section::::Other animals.\n",
"start_paragraph_id": 108,
"start_character": 0,
"end_paragraph_id": 108,
"end_character": 592,
"text": "The presence of pain in an animal cannot be known for certain, but it can be inferred through physical and behavioral reactions. Specialists currently believe that all vertebrates can feel pain, and that certain invertebrates, like the octopus, may also. As for other animals, plants, or other entities, their ability to feel physical pain is at present a question beyond scientific reach, since no mechanism is known by which they could have such a feeling. In particular, there are no known nociceptors in groups such as plants, fungi, and most insects, except for instance in fruit flies.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48376645",
"title": "Pain in amphibians",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 240,
"text": "Pain is an aversive sensation and feeling associated with actual, or potential, tissue damage. It is widely accepted by a broad spectrum of scientists and philosophers that non-human animals can perceive pain, including pain in amphibians.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50558073",
"title": "Pain in cephalopods",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 468,
"text": "If cephalopods feel pain, there are ethical and animal welfare implications including the consequences of exposure to pollutants, practices involving commercial, aquaculture and for cephalopods used in scientific research or which are eaten. Because of the possibility that cephalopods are capable of perceiving pain, it has been suggested that \"precautionary principles\" should be followed with respect to human interactions and consideration of these invertebrates.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50558073",
"title": "Pain in cephalopods",
"section": "Section::::Societal implications.\n",
"start_paragraph_id": 108,
"start_character": 0,
"end_paragraph_id": 108,
"end_character": 230,
"text": "Other societal implications of cephalopods being able to perceive pain include acute and chronic exposure to pollutants, aquaculture, removal from water for routine husbandry, pain during slaughter and during scientific research.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24332386",
"title": "Pain in crustaceans",
"section": "Section::::Opinions.\n",
"start_paragraph_id": 89,
"start_character": 0,
"end_paragraph_id": 89,
"end_character": 660,
"text": "Advocates for Animals, a Scottish animal welfare group, stated in 2005 that \"scientific evidence ... strongly suggests that there is a potential for decapod crustaceans and cephalopods to experience pain and suffering\". This is primarily due to \"The likelihood that decapod crustaceans can feel pain [which] is supported by the fact that they have been shown to have opioid receptors and to respond to opioids (analgesics such as morphine) in a similar way to vertebrates.\" Similarities between decapod and vertebrate stress systems and behavioral responses to noxious stimuli were given as additional evidence for the capacity of decapods to experience pain.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2q7wq7 | What is Bell's Inequality and how does it work? | [
{
"answer": "It is a response to the famous EPR (Einstein, Podolsky and Rosen) paper written in 1935 where they described a thought experiment leading to what we today know as quantum entanglement. Or to quote them: \"spooky action at a distance\". \n\nEntanglement can be briefly explained by two entangled electrons (A and B) being in the combined state where one has spin up and one has spin down (in some direction). We do not know which is which, but we do know that if electron A is measured to have spin up, electron B will for sure have spin down. An important thing to note here is that the outcome of the measurements is not physically decided (even in quantum mechanics), so the outcome is \"chosen\" the moment the first electron is measured. The second electron will immediately obtain the opposite spin value than the first one.\n\nIf the electrons are separated by a large distance (say 1 light year), it could be interpreted that some signal is sent faster than light, since the second electron will choose its state right after the first electron is measured. This is called non-locality.\n\nIn this paper and after, many argued that quantum mechanics was indeed an incomplete theory. Some people suggested that there must be some *hidden variable* (this could be a number, a set of variables, whatever), that will solve all of the problems with non-locality, and maybe even remove the probabilistic nature of quantum mechanics. Once these variables are known, these weird effects disappear. \n\nSo now to Bell's theorem. 
In 1964, Bell published a paper called *On the Einstein Podolsky Rosen paradox* where he showed that if we assume some hidden variable that takes away all the uncertainty in the experiment described in the EPR paper, there is a measurable physical quantity that should follow some inequality that is proven to be **false** both theoretically and experimentally.\nIn other words, any hidden variable as described above is inconsistent with the postulates of quantum mechanics. This is the consequence of Bell's theorem.\n\nSo was Einstein wrong? Well, modern formulations of locality (the speed of light being upper limit) usually state that no *information* travels faster than the speed of light. And as far as we know, quantum entanglement cannot be used to send information faster than light because we cannot control the outcome of the experiment (electron A gets spin up or down).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "890032",
"title": "Sakurai's Bell inequality",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 692,
"text": "The intention of a Bell inequality is to serve as a test of local realism or local hidden variable theories as against quantum mechanics, applying Bell's theorem, which shows them to be incompatible. Not all the Bell's inequalities that appear in the literature are in fact fit for this purpose. The one discussed here holds only for a very limited class of local hidden variable theories and has never been used in practical experiments. It is, however, discussed by John Bell in his \"Bertlmann's socks\" paper (Bell, 1981), where it is referred to as the \"Wigner–d'Espagnat inequality\" (d'Espagnat, 1979; Wigner, 1970). It is also variously attributed to Bohm (1951?) and Belinfante (1973).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "883494",
"title": "Loopholes in Bell test experiments",
"section": "Section::::Loopholes.:Communication, or locality.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 947,
"text": "The Bell inequality is motivated by the absence of communication between the two measurement sites. In experiments, this is usually ensured simply by prohibiting \"any\" light-speed communication by separating the two sites and then ensuring that the measurement duration is shorter than the time it would take for any light-speed signal from one site to the other, or indeed, to the source. In one of Alain Aspect's experiments, inter-detector communication at light speed during the time between pair emission and detection was possible, but such communication between the time of fixing the detectors' settings and the time of detection was not. An experimental set-up without any such provision effectively becomes entirely \"local\", and therefore cannot rule out local realism. Additionally, the experiment design will ideally be such that the settings for each measurement are not determined by any earlier event, at both measurement stations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56369",
"title": "Bell's theorem",
"section": "Section::::Metaphysical aspects.\n",
"start_paragraph_id": 113,
"start_character": 0,
"end_paragraph_id": 113,
"end_character": 268,
"text": "BULLET::::2. Bell's inequality does not apply to some possible hidden variable theories. It only applies to a certain class of local hidden variable theories. In fact, it might have just missed the kind of hidden variable theories that Einstein is most interested in.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58278312",
"title": "Aspect's experiment",
"section": "Section::::Aspect's experiments (1980-1982).:Experiment results.\n",
"start_paragraph_id": 64,
"start_character": 0,
"end_paragraph_id": 64,
"end_character": 399,
"text": "Bell's inequalities establish a theoretical curve of the number of correlations (++ or --) between the two detectors in relation to the relative angle of the detectors formula_9. The shape of the curve is characteristic of the violation of Bell's inequalities. The measures' matching the shape of the curve establishes, quantitatively and qualitatively, that Bell's inequalities have been violated.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43250285",
"title": "Ann Swidler",
"section": "Section::::Major works.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 215,
"text": "\"Inequality by Design: Cracking the Bell Curve Myth\" (1996), is a well-known reply to \"The Bell Curve\" by Charles Murray and Richard Hernstein and attempts to show that the arguments in \"The Bell Curve\" are flawed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "169917",
"title": "John Stewart Bell",
"section": "Section::::Biography.:Conclusions from experimental tests.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 453,
"text": "Some people continue to believe that agreement with Bell's inequalities might yet be saved. They argue that in the future much more precise experiments could reveal that one of the known loopholes, for example the so-called \"fair sampling loophole\", had been biasing the interpretations. Most mainstream physicists are highly skeptical about all these \"loopholes\", admitting their existence but continuing to believe that Bell's inequalities must fail.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16900477",
"title": "Inequality by Design",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 556,
"text": "Inequality by Design: Cracking the Bell Curve Myth is a 1996 book by Claude S. Fischer, Michael Hout, Martín Sánchez Jankowski, Samuel R. Lucas, Ann Swidler, and Kim Voss. The book is a reply to \"The Bell Curve\" (1994) by Charles Murray and Richard Hernstein and attempts to show that the arguments in \"The Bell Curve\" are flawed, that the data used by Murray and Herrnstein do not support their conclusion and that alternative explanations (particularly the effects of social inequality) better explain differences in IQ scores than genetic explanations.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
g3oe4 | Can any experts comment on this article about the nuclear reactors in Japan please? | [
{
"answer": "Seems like a fairly in-depth article. Not sure why people would call it anti-science as such, everything I read was more or less what I have read elsewhere.\n\nWhat does worry me was the section about mobile generators being brought in to provide power for the cooling but the \"plug not fitting\". Now I'm no engineer, but surely with the level of expertise available onsite, would it not be possible to make it fit?? I.e. Rip out whatever terminals are there and connect it up somehow?\n\nAnyway all sounds fairly plausible and at least grounded in reality.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "23746101",
"title": "Atomics International",
"section": "Section::::Facilities and Operations.:Santa Susana Field Laboratory (SSFL), Area IV Facility.:Development and testing of nuclear reactors.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 506,
"text": "One reactor model, the L-54, was purchased and installed by a number of United States universities and foreign research institutions, including Japan. The Japanese Atomic Research Institute renamed theirs Japan Research Reactor-1 (JRR-1) and the government of Japan issued a commemorative postage stamp noting the establishment of Japan's first nuclear reactor in 1957. The reactor was decommissioned in 1970 and is now maintained as a museum exhibit with a Japanese-language website at Tokaimura, Japan\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4131940",
"title": "Nuclear power in Japan",
"section": "Section::::Nuclear Research and professional organizations in Japan.:Academic/professional organizations.\n",
"start_paragraph_id": 159,
"start_character": 0,
"end_paragraph_id": 159,
"end_character": 369,
"text": "BULLET::::- The Atomic Energy Society of Japan (AESJ) 日本原子力学会 is a major academic organization in Japan focusing on all forms of nuclear power. The \"Journal of Nuclear Science and Technology\" is the academic journal run by the AESJ. It publishes English and Japanese articles, though most submissions are from Japanese research institutes, universities, and companies.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23736766",
"title": "Research Institute of Atomic Reactors",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 265,
"text": "The Research Institute of Atomic Reactors () is an institute for nuclear reactor research in Dimitrovgrad in Ulyanovsk Oblast, Russia. The institute houses eight nuclear research reactors: SM, Arbus (ACT-1), MIR.M1, RBT-6, RBT-10 / 1, RBT-10 / 2, BOR-60 and VK-50.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8842893",
"title": "Nagasaki Atomic Bomb Museum",
"section": "Section::::History covered in the museum.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 282,
"text": "The Nagasaki Atomic Bomb Museum covers the history of the bombing of Nagasaki, Japan. It portrays scenes of World War II, the dropping of the atomic bomb, the reconstruction of Nagasaki, and present day. Additionally, the museum exhibits the history of nuclear weapons development.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37248569",
"title": "Purdue University Reactor Number One",
"section": "Section::::Use.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 262,
"text": "The reactor's primary purpose is for training students in the principles of reactor physics. The university also uses it as a source for neutrons for research in nuclear engineering, health science, chemistry, pharmacy, agriculture, biology, and nanotechnology.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4222539",
"title": "Nuclear safety and security",
"section": "Section::::Nuclear and radiation accidents.:2011 Fukushima I accidents.\n",
"start_paragraph_id": 151,
"start_character": 0,
"end_paragraph_id": 151,
"end_character": 260,
"text": "Two government advisers have said that \"Japan's safety review of nuclear reactors after the Fukushima disaster is based on faulty criteria and many people involved have conflicts of interest\". Hiromitsu Ino, Professor Emeritus at the University of Tokyo, says\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4131940",
"title": "Nuclear power in Japan",
"section": "Section::::Nuclear accidents.:Fukushima Daiichi nuclear disaster.\n",
"start_paragraph_id": 108,
"start_character": 0,
"end_paragraph_id": 108,
"end_character": 260,
"text": "Two government advisers have said that \"Japan's safety review of nuclear reactors after the Fukushima disaster is based on faulty criteria and many people involved have conflicts of interest\". Hiromitsu Ino, Professor Emeritus at the University of Tokyo, says\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
46gpem | what is being done in the world of science to offset the imposing "antibiotic apocalypse?" | [
{
"answer": "I recently had to watch this video in my biology class and it explains how bacteria \"talk\" with each other and how we can use their \"language\" to enhance or inhibit their communication. _URL_0_",
"provenance": null
},
{
"answer": "[Here is an article you may want to read](_URL_0_) I'll post a little of the article so my comment doesn't get deleted. \nScientists have come across a potential game-changer in the fight against drug-resistant superbugs - a new class of antibiotic that is resistant to resistance. Not only does the new compound - which comes from soil bacteria - kill deadly superbugs like MRSA, but also - because of the way it destroys their cell wall - the pathogens will find it very difficult to mutate into resistant strains",
"provenance": null
},
{
"answer": "There are several different facets of research going on. They are really starting to scale up the antibiotic research, as well as alternate treatments for infections.\n\nOne of the alternate treatments that I am aware of is a PPMO, which stands for peptide-conjugated phosphorodiamidate morpholino oligomer. I'm not entirely sure how they affect bacteria, so this part may be inaccurate, but instead of killing the whole bacteria, it just annihilates the genes inside the bacteria. The PPMO is created for specific genes, so it won't start killing off all of our own genes. It's an interesting treatment that has gone through a little bit of animal testing with positive results, but it's a long way from being used in humans.\n\nThere is a similar concept being researched that uses viruses to kind of blow up bacteria cells. They latch on to the bacteria and overload the cell with their own DNA eventually causing the bacteria to burst open, which destroys it. This treatment is also intended to be used in very specific infections. As far as I remember, there haven't been animal trials, but this could also be wrong, I haven't checked in a while. The benefit of these antibiotics is that they can be catered for specific infections, instead of killing all of the good bacteria in your body as well as the bad.\n\nThere was another actual antibiotic that I was told about that is called texobactin or something like that. I think they used it to treat lab mice that had MRSA. It was a big deal when it happened but I just don't remember it off the top of my head. It looked very promising but it was a few years away from clinical trials when I saw it. I'm sure they are further along in research now.\n\nResearchers are very aware that new treatments for infections are needed, and they are certainly working on the problem. More and more money is being granted to researchers to study the potential of antibiotic resistant infections and treatments. 
This kind of research just takes a long time, because it has to be so thoroughly studied.\n\nI know there is a lot of scary stuff about antibiotic resistant infections, but there are a lot of people working on the problem. The antibiotic apocalypse could happen if nothing viable comes out of the research, but there are a lot of promising things being done currently, so I think it is unlikely. \n\nI am just a pharmacy tech, and I like talking to the pharmacists about new drugs. There is more than those three things being studied, but those were the promising ones that I remembered. Maybe someone else can chime in and mention others.\n\n\n",
"provenance": null
},
{
"answer": "There's a [full report](_URL_0_) on _URL_1_. I don't know where things are but the key recommendations are:\n\n(1) improving our surveillance of the rise of antibiotic‐resistant bacteria to enable effective response, stop outbreaks, and limit the spread of antibiotic‐resistant organisms, and acting on surveillance data to implement appropriate infection control;\n\n(2) increasing the longevity of current antibiotics, by improving the appropriate use of existing antibiotics, preventing the spread of antibiotic‐resistant bacteria and scaling up proven interventions to decrease the rate at which microbes develop resistance to current antibiotics;\n\n(3) increasing the rate at which new antibiotics, as well as other interventions, are discovered and developed.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "60976",
"title": "Health system",
"section": "Section::::Management.\n",
"start_paragraph_id": 52,
"start_character": 0,
"end_paragraph_id": 52,
"end_character": 308,
"text": "Antibiotic resistance is another major concern, leading to the reemergence of diseases such as tuberculosis. The World Health Organization, for its World Health Day 2011 campaign, is calling for intensified global commitment to safeguard antibiotics and other antimicrobial medicines for future generations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1914",
"title": "Antimicrobial resistance",
"section": "Section::::Prevention.:Global action plans and awareness.:Antibiotic Awareness Week.\n",
"start_paragraph_id": 92,
"start_character": 0,
"end_paragraph_id": 92,
"end_character": 365,
"text": "World Antibiotic Awareness Week has been held every November since 2015. For 2017, the Food and Agriculture Organization of the United Nations (FAO), the World Health Organization (WHO) and the World Organisation for Animal Health (OIE) are together calling for responsible use of antibiotics in humans and animals to reduce the emergence of antibiotic resistance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2515404",
"title": "Production of antibiotics",
"section": "Section::::Challenges.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 642,
"text": "Another reason behind the lack of new antibiotic production is the diminishing amount of return on investment for antibiotics and thus the lack resources put into research and development by private pharmaceutical companies. The World Health Organization has recognized the danger of antibiotic resistance bacteria and has created a list of \"priority pathogens\" that are of the utmost concern. In doing so the hope is to stimulate R&D that can create a new generation of antibiotics. In the United States, the Biomedical Advanced Research and Development Authority (BARDA) aims to support the work of the industry to produce new antibiotics.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "255244",
"title": "Seawater",
"section": "Section::::Microbial components.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 430,
"text": "In 2013 researchers from Aberdeen University announced that they were starting a hunt for undiscovered chemicals in organisms that have evolved in deep sea trenches, hoping to find \"the next generation\" of antibiotics, anticipating an \"antibiotic apocalypse\" with a dearth of new infection-fighting drugs. The EU-funded research will start in the Atacama Trench and then move on to search trenches off New Zealand and Antarctica.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1914",
"title": "Antimicrobial resistance",
"section": "Section::::Prevention.:Global action plans and awareness.\n",
"start_paragraph_id": 86,
"start_character": 0,
"end_paragraph_id": 86,
"end_character": 993,
"text": "The increasing interconnectedness of the world and the fact that new classes of antibiotics have not been developed and approved for more than 25 years highlight the extent to which antimicrobial resistance is a global health challenge. A global action plan to tackle the growing problem of resistance to antibiotics and other antimicrobial medicines was endorsed at the Sixty-eighth World Health Assembly in May 2015. One of the key objectives of the plan is to improve awareness and understanding of antimicrobial resistance through effective communication, education and training. This global action plan developed by the World Health Organization was created to combat the issue of antimicrobial resistance and was guided by the advice of countries and key stakeholders. The WHO's global action plan is composed of five key objectives that can be targeted through different means, and represents countries coming together to solve a major problem that can have future health consequences.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41490190",
"title": "Félix Martí Ibáñez",
"section": "Section::::Intellectual perspectives.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 667,
"text": "Often in the vanguard on intellectual thoughts about medicine, public health, human nature, and psychiatry, in 1955 Martí-Ibáñez wrote his concerns about the indiscriminate use of antibiotics, \"\"Antibiotic therapy, if indiscriminately used, may turn out to be a medicinal flood that temporarily cleans and heals, but ultimately destroys life itself.\"\", a prediction of the dire consequences that humans are just beginning to face today due to ill-advised uses of antibiotics in dairy and meat production as well as medical practices. In the 1930s he participated in the enactment of legislation liberating women and his views on human sexuality are quoted regularly.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40308088",
"title": "Brilacidin",
"section": "Section::::Potential significance.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 558,
"text": "There has not been a new drug approval from a new class of antibiotics since 1987. While six antibiotics have been approved over the last year, they are all adaptations of existing antibiotic classes. None of the recently approved novel antibiotics represent entirely new classes. Novel antibiotics are crucial as antibiotic resistance poses a global health risk. The World Health Organization, warning of a \"post-antibiotic era\" has stated that antimicrobial resistance (AMR) is a \"problem so serious that it threatens the achievements of modern medicine\".\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
134pxi | What animal has the worst common cause of death? | [
{
"answer": "Manatees getting hit by propellers and bleeding out. Usually they get hit multiple times in their life, and are killed by particularly brutal hits.",
"provenance": null
},
{
"answer": "Elephants often lose their teeth at around 40-60 years of age and die of starvation.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "261105",
"title": "Roadkill",
"section": "Section::::Species affected.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 751,
"text": "In 1993, 25 schools throughout New England, United States participated in a roadkill study involving 1,923 animal deaths. By category, the fatalities were: 81% mammals, 15% bird, 3% reptiles and amphibians, 1% indiscernible. Extrapolating these data nationwide, Merritt Clifton (editor of \"Animal People Newspaper\") estimated that the following animals are being killed by motor vehicles in the United States annually: 41 million squirrels, 26 million cats, 22 million rats, 19 million opossums, 15 million raccoons, 6 million dogs, and 350,000 deer. This study may not have considered differences in observability between taxa (e.g. dead raccoons are easier to see than dead frogs), and has not been published in peer-reviewed scientific literature.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10588152",
"title": "2007 pet food recalls",
"section": "Section::::Impact on pets.:Numbers of affected animals.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 2386,
"text": "By the end of March, veterinary organizations reported more than 100 pet deaths amongst nearly 500 cases of kidney failure, and experts expected the death toll to number in the thousands, with one online database already self-reporting as many as 3,600 deaths as of 11 April. The U.S. Food and Drug Administration has received reports of approximately 8500 animal deaths, including at least 1950 cats and 2200 dogs who have died after eating contaminated food, but have only confirmed 14 cases, in part because there is no centralized government database of animal sickness or death in the United States as there are with humans (such as the Centers for Disease Control). For this reason, many sources speculate the full extent of the pet deaths and sicknesses caused by the contamination may never be known. In October, the results of the \"AAVLD survey of pet food-induced nephrotoxicity in North America, April to June 2007,\" were reported, indicating 347 of 486 cases voluntarily reported by 6 June 2007 had met the diagnostic criteria, with most of the cases reported from the United States, but also including cases of 20 dogs and 7 cats reported from Canada.The cases involved 235 cats and 112 dogs, with 61 percent of the cats and 74 percent of the dogs having died. Dr. Barbara Powers, AAVLD president and director of the Colorado State University Veterinary Diagnostic Laboratory, said the survey probably found only a percentage of the actual cases. She also said the mortality rate is not likely to be representative of all cases, because survey respondents had more information to submit for animals that had died. A number of dogs were also reported affected in Australia, with four in Melbourne and a few more in Sydney. No legal action or repercussions have as yet occurred regarding these cases. Dr. Powers elaborated further: “But there absolutely could be more deaths from the tainted pet food... This survey didn’t catch all the deaths that happened. 
In order to be counted in our survey, you had to meet certain criteria... If someone had a pet that died and they buried it in their back[yard], they weren’t eligible for our survey. We had to have confirmed exposure to the recalled pet food, proof of toxicity, and clinical signs of renal failure. So this is only a percentage of the deaths that are out there. There’s no way to guess how many pets were affected.”\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10588152",
"title": "2007 pet food recalls",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 756,
"text": "By the end of March, veterinary organizations reported more than 100 pet deaths among nearly 500 cases of kidney failure, with one online database self-reporting as many as 3,600 deaths as of 11 April. The U.S. Food and Drug Administration has received reports of several thousand cats and dogs who have died after eating contaminated food, but have only confirmed 14 cases, in part because there is no centralized government database of animal sickness or death in the United States, as there are with humans (such as the Centers for Disease Control and Prevention). As a result, many sources speculate the actual number of affected pets may never be known, and experts are concerned that the actual death toll could potentially reach into the thousands.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2185035",
"title": "Australian bat lyssavirus",
"section": "Section::::Bat lyssavirus and human health.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 378,
"text": "Three cases of ABLV in humans have been confirmed, all of them fatal. The first occurred in November 1996, when an animal caregiver was scratched by a yellow-bellied sheath-tailed bat. Onset of a rabies-like illness occurred 4–5 weeks following the incident, with death 20 days later. ABLV was identified from brain tissue by polymerase chain reaction and immunohistochemistry.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "45376731",
"title": "List of medically significant spider bites",
"section": "Section::::Australian funnel-web spiders.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 307,
"text": "One other genus in the Hexathelidae family has been reported to cause severe symptoms in humans. Severe bites have been attributed to members of the genus \"Macrothele\" in Taiwan, but no fatalities. In other mammals, such as rodents, for example, the effects of funnel web spider venom are much less severe.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1941914",
"title": "Kilimanjaro Safaris",
"section": "Section::::Incidents.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 293,
"text": "BULLET::::- Initially, there were a number of animal deaths from disease, toxic exposure, maternal killings, and park vehicles. The United States Department of Agriculture investigation found no violations of the Animal Welfare Act for the 29 deaths that happened September 1997 – April 1998.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29384146",
"title": "List of incidents at Walt Disney World",
"section": "Section::::Disney's Animal Kingdom.:Kilimanjaro Safaris.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 293,
"text": "BULLET::::- Initially, there were a number of animal deaths from disease, toxic exposure, maternal killings, and park vehicles. The United States Department of Agriculture investigation found no violations of the Animal Welfare Act for the 29 deaths that happened September 1997 – April 1998.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3s2coq | the current global warming is very concerning, but there was global warming about 1000 years ago called the "medieval warm period" - how many other such warming periods have there been and why is the current one so different? | [
{
"answer": "The medieval warm period was not as extreme, and came on much more gradually. The current one has been sudden and steady, and we have a clear cause for it: increased greenhouse gases in the atmosphere. We've increased the atmosphere's carbon dioxide content by 1/3, for example - that's a huge effect on a planetary scale. The rise almost exactly mirrors the growth of human industry, and even tapers off briefly at the collapse of the Soviet Union and its industrial capacity. \n\nWe have reliable climate records dating back tens of thousands of years from e.g. ice cores, and we are pretty sure this warming isn't like the others.",
"provenance": null
},
{
"answer": "1) Scale: It wasn't warming as much as we've already warmed the planet. \n2) Cause: Medieval peoples couldn't pump stuff into the atmosphere on anywhere near the scale we can. It was caused by things beyond their control. \nThis time around we are certain that we are the cause of the warming, which means we have to be the ones doing something about it.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5354105",
"title": "Hockey stick graph",
"section": "Section::::Scientific debates.\n",
"start_paragraph_id": 61,
"start_character": 0,
"end_paragraph_id": 61,
"end_character": 261,
"text": "In a perspective commenting on MBH99, Wallace Smith Broecker argued that the Medieval Warm Period (MWP) was global. He attributed recent warming to a roughly 1500-year cycle which he suggested related to episodic changes in the Atlantic's conveyor circulation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13471",
"title": "Holocene",
"section": "Section::::Climate.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 531,
"text": "The Holocene climatic optimum (HCO) was a period of warming in which the global climate became warmer. However, the warming was probably not uniform across the world. This period of warmth ended about 5,500 years ago with the descent into the Neoglacial and concomitant Neopluvial. At that time, the climate was not unlike today's, but there was a slightly warmer period from the 10th–14th centuries known as the Medieval Warm Period. This was followed by the Little Ice Age, from the 13th or 14th century to the mid-19th century.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22418684",
"title": "Historical impacts of climate change",
"section": "Section::::Historical era.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 303,
"text": "Notable periods of climate change in recorded history include the Medieval warm period and the little ice age. In the case of the Norse, the Medieval warm period was associated with the Norse age of exploration and Arctic colonization, and the later colder periods led to the decline of those colonies.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17005658",
"title": "Subatlantic",
"section": "Section::::Climatic evolution.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 670,
"text": "After this relatively short cool interlude the climate ameliorated again and reached between 800 and 1200 almost the values of the Roman Warm Period (used temperature proxies are sediments in the North Atlantic). This warming happened during the High Middle Ages wherefore this event is known as \"Medieval Global Warming\" or the Medieval Warm Period. This warmer climate peaked around 850 AD and 1050 AD, and raised the tree line in Scandinavia and in Russia by 100 to 140 meters; it enabled the Vikings to settle in Iceland and Greenland. During this period the Crusades took place and the Byzantine Empire was eventually pushed back by the rise of the Ottoman Empire.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60160417",
"title": "Medieval Warm Period",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 596,
"text": "The Medieval Warm Period (MWP) also known as the Medieval Climate Optimum, or Medieval Climatic Anomaly was a time of warm climate in the North Atlantic region that was likely related to other warming events in other regions during that time, including China and other areas, lasting from to . Other regions were colder, such as the tropical Pacific. Averaged global mean temperatures have been calculated to be similar to early-mid 20th century warming. Possible causes of the Medieval Warm Period include increased solar activity, decreased volcanic activity, and changes to ocean circulation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5354105",
"title": "Hockey stick graph",
"section": "Section::::Controversy after IPCC Third Assessment Report.\n",
"start_paragraph_id": 68,
"start_character": 0,
"end_paragraph_id": 68,
"end_character": 1925,
"text": "A literature review by Willie Soon and Sallie Baliunas, published in the relatively obscure journal \"Climate Research\" on 31 January 2003, used data from previous papers to argue that the Medieval Warm Period had been warmer than the 20th century, and that recent warming was not unusual. In March they published an extended paper in \"Energy & Environment\", with additional authors. The Bush administration's Council on Environmental Quality chief of staff Philip Cooney inserted references to the papers in the draft first Environmental Protection Agency \"Report on the Environment\", and removed all references to reconstructions showing world temperatures rising over the last 1,000 years. In the Soon and Baliunas controversy, two scientists cited in the papers said that their work was misrepresented, and the \"Climate Research\" paper was criticised by many other scientists, including several of the journal's editors. On 8 July \"Eos\" featured a detailed rebuttal of both papers by 13 scientists including Mann and Jones, presenting strong evidence that Soon and Baliunas had used improper statistical methods. Responding to the controversy, the publisher of \"Climate Research\" upgraded Hans von Storch from editor to editor in chief, but von Storch decided that the Soon and Baliunas paper was seriously flawed and should not have been published as it was. He proposed a new editorial system, and though the publisher of \"Climate Research\" agreed that the paper should not have been published uncorrected, he rejected von Storch's proposals to improve the editorial process, and von Storch with three other board members resigned. Senator James M. Inhofe stated his belief that \"manmade global warming is the greatest hoax ever perpetrated on the American people\", and a hearing of the United States Senate Committee on Environment and Public Works which he convened on 29 July 2003 heard the news of the resignations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13139823",
"title": "Post-classical history",
"section": "Section::::Main trends.:Climate.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 512,
"text": "The Medieval Warm Period from 950–1250 occurred mostly in the Northern Hemisphere, causing warmer summers in many areas; the high temperatures would only be surpassed by the global warming of the 20th/21st centuries. It has been hypothesized that the warmer temperatures allowed the Norse to colonize Greenland, due to ice-free waters. Outside of Europe there is evidence of warming conditions, including higher temperatures in China and major North American droughts which adversely affected numerous cultures.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6zv9d5 | what decides whether something will release alpha, beta, or gamma radiation? | [
{
"answer": "Type of radiation is determined by the material that emits it. Gamma radiation is electomagnetic radiation, like radio waves, microwave, x-rays, gamma rays... they are typically formed when a charge (electron) is accelerated or decelerated or moved in a circular path (which is an acceleration btw). It can also be formed when an electron jumps between shells in an atom (different energy states).\nAlpha radiation is essentially helium atom cores. They typically form as a result of a radioactive decay. Similar thing for beta except they are electrons.\n",
"provenance": null
},
{
"answer": "The type of radiation released depends on the particular isotope.\n\nAlpha radiation is helium atom nuclei (2 protons and 2 neutrons), which are ejected from a large atomic nucleus. In general, alpha radiation occurs in very large nuclei (things like uranium: 92 protons, 146 neutrons). Essentially, the nucleus is so big, that it can barely hold together against the repulsion between all the positively charged protons - so a cluster of protons gets ejected, taking some neutrons with it.\n\nBeta radiation tends to occur when the ratio of neutrons to protons in the nucleus is wrong. For light atoms, the optimal ratio is roughly 1:1, but as the nuclei get heavier, you need more neutrons (uranium is roughly 1 proton to 1.5 neutrons). \n\nIf a nucleus has too many neutrons, a neutron can transform into a proton and an electron. The electron can't stay in the nucleus, so gets kicked out as beta radiation. \n\nIf a nucleus has too many protons, a proton can transform into a neutron and a positron (a positively charged electron). The positron can't stay, so gets kicked out as (positively charged) beta radiation. \n\nGamma rays are just pure energy. They are released from a nucleus when it has too much energy - think of the protons and neutrons in the nucleus like dozens of those little magnets you can make scultpures from. Sometimes, the magnets can hold a position which isn't optimal, and then suddenly, they'll find a better position and bind tighter. When this happens in a nucleus, you get a gamma ray. \n\nGamma rays are released when alpha or beta radiation is produced. If you have a big nucleus, and an alpha breaks off, the nucleus is going to be a bit lopsided, so it will rearrange and form a more compact shape, releasing a gamma ray at the same time. \n\nA similar sort of thing happens with beta radiation - when a proton converts to a neutron, this can leave the nucleus a bit uneven. 
Sometimes, the energy is immediately released and it all goes into the beta radiation, but sometimes, some of the energy comes out separately as a gamma ray. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1267",
"title": "Alpha decay",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 722,
"text": "Alpha decay or α-decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle (helium nucleus) and thereby transforms or 'decays' into a different atomic nucleus, with a mass number that is reduced by four and an atomic number that is reduced by two. An alpha particle is identical to the nucleus of a helium-4 atom, which consists of two protons and two neutrons. It has a charge of and a mass of . For example, uranium-238 decays to form thorium-234. Alpha particles have a charge , but as a nuclear equation describes a nuclear reaction without considering the electrons – a convention that does not imply that the nuclei necessarily occur in neutral atoms – the charge is not usually shown.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "97830",
"title": "Nuclear technology",
"section": "Section::::History and scientific background.:Discovery.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 794,
"text": "As the atom came to be better understood, the nature of radioactivity became clearer. Some larger atomic nuclei are unstable, and so decay (release matter or energy) after a random interval. The three forms of radiation that Becquerel and the Curies discovered are also more fully understood. Alpha decay is when a nucleus releases an alpha particle, which is two protons and two neutrons, equivalent to a helium nucleus. Beta decay is the release of a beta particle, a high-energy electron. Gamma decay releases gamma rays, which unlike alpha and beta radiation are not matter but electromagnetic radiation of very high frequency, and therefore energy. This type of radiation is the most dangerous and most difficult to block. All three types of radiation occur naturally in certain elements.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21285",
"title": "Nuclear physics",
"section": "Section::::Modern nuclear physics.:Nuclear decay.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 208,
"text": "In gamma decay, a nucleus decays from an excited state into a lower energy state, by emitting a gamma ray. The element is not changed to another element in the process (no nuclear transmutation is involved).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2035588",
"title": "Radiochemistry",
"section": "Section::::Main decay modes.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 227,
"text": "1. α (alpha) radiation—the emission of an alpha particle (which contains 2 protons and 2 neutrons) from an atomic nucleus. When this occurs, the atom's atomic mass will decrease by 4 units and atomic number will decrease by 2.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21787470",
"title": "Alpha particle",
"section": "Section::::Sources of alpha particles.:Alpha decay.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 510,
"text": "The best-known source of alpha particles is alpha decay of heavier ( 106 u atomic weight) atoms. When an atom emits an alpha particle in alpha decay, the atom's mass number decreases by four due to the loss of the four nucleons in the alpha particle. The atomic number of the atom goes down by exactly two, as a result of the loss of two protons – the atom becomes a new element. Examples of this sort of nuclear transmutation are when uranium becomes thorium, or radium becomes radon gas, due to alpha decay.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25856",
"title": "Radiation",
"section": "Section::::Ionizing radiation.:Gamma radiation.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 569,
"text": "Gamma (γ) radiation consists of photons with a wavelength less than 3x10 meters (greater than 10 Hz and 41.4 keV). Gamma radiation emission is a nuclear process that occurs to rid an unstable nucleus of excess energy after most nuclear reactions. Both alpha and beta particles have an electric charge and mass, and thus are quite likely to interact with other atoms in their path. Gamma radiation, however, is composed of photons, which have neither mass nor electric charge and, as a result, penetrates much further through matter than either alpha or beta radiation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4651",
"title": "Beta decay",
"section": "Section::::History.:Discovery and initial characterization.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 503,
"text": "In 1901, Rutherford and Frederick Soddy showed that alpha and beta radioactivity involves the transmutation of atoms into atoms of other chemical elements. In 1913, after the products of more radioactive decays were known, Soddy and Kazimierz Fajans independently proposed their radioactive displacement law, which states that beta (i.e., ) emission from one element produces another element one place to the right in the periodic table, while alpha emission produces an element two places to the left.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
irzg2 | What would be the effects of a normal diet that was entirely liquids? | [
{
"answer": "People survive quite well on liquid diets for long periods of time. Google might turn up more information if you search for [tube feeding](_URL_0_). For people who can't tolerate solid food for any reason, a tube can be placed through the nose or through an incision into the stomach, and fluid given through that tube can meet all nutritional needs.\n\nThe lack of solid matter may cause loose stools and discomfort, but that can usually be dealt with by making sure the feeding solution contains enough fiber.",
"provenance": null
},
{
"answer": "I had to have my jaw wired shut. I would have rather had a broken leg, that's how uncomfortable it was, however I ate as good as I've ever eaten. \n\nOf course there were solids in what I drank, but it could be defined as an all liquid diet. ",
"provenance": null
},
{
"answer": "The intestines do not *need*, per se, to consume solid foods in order to function properly. All nutrients can easily be gotten through liquid forms, and then some. That being said, liquid diets tend to come out the ass a lot more messy. Dietary fiber does alleviate a lot of this, but it is not perfect.\n\nAlso, risk for [diverticulotis/diverticulitis](_URL_0_) do rise. Eating solid foods is like resistance training for the muscular intestinal wall, over time (not a few weeks, but a much longer time) the intestinal wall can become weaker if not adequately stressed and develop diverticulum.\n\nAlthough not optimal, you can live on a solid-free diet.",
"provenance": null
},
{
"answer": "Hate to answer a question with a question, but I heard somehwere that it was possible to survive on only coconut milk - apparently it has the perfect number of whatever it is we need.\n\ncan any one give further information? it's slightly related to original post",
"provenance": null
},
{
"answer": "I lived for many months on a liquid diet, when I had cancer and the radiation treatments damaged my mouth. Eventually, I couldn't even drink liquids anymore, so I had a catheter installed and started taking food through an IV.\n\nWhile I was on the liquid diet, my digestive system functioned normally, I had solid waste and all that. After I started taking food through the IV, my digestive system seemed to go dormant; when treatments ended and I started eating again I passed odorless \"baby stools\" for a while until my gut flora came back.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "8414316",
"title": "Liquid diet",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 472,
"text": "A liquid diet is a diet that mostly consists of liquids, or soft foods that melt at room temperature (such as ice cream). A liquid diet usually helps provide sufficient hydration, helps maintain electrolyte balance, and is often prescribed for people when solid food diets are not recommended, such as for people who suffer with gastrointestinal illness or damage, or before or after certain types of medical tests or surgeries involving the mouth or the digestive tract.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8414316",
"title": "Liquid diet",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 636,
"text": "A clear liquid diet, sometimes called a \"surgical liquid diet\" because of its perioperative uses, consists of a diet containing exclusively transparent liquid foods that do not contain any solid particulates. This includes vegetable broth, bouillon (excepting any particulate dregs), clear fruit juices such as filtered apple juice, clear fruit ices or popsicles, clear gelatin desserts, and certain carbonated drinks such as ginger-ale and seltzer water. It excludes all drinks containing milk, but may accept tea or coffee. Typically, this diet contains about 500 calories per day, which is too little food energy for long-term use. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "84252",
"title": "List of diets",
"section": "Section::::Diets followed for medical reasons.\n",
"start_paragraph_id": 73,
"start_character": 0,
"end_paragraph_id": 73,
"end_character": 215,
"text": "BULLET::::- Liquid diet: A diet in which only liquids are consumed. May be administered by clinicians for medical reasons, such as after a gastric bypass or to prevent death through starvation from a hunger strike.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3420838",
"title": "Enteric fermentation",
"section": "Section::::Ruminants.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 344,
"text": "Ruminant animals are those that have a rumen. A rumen is a multichambered stomach found almost exclusively among some artiodactyl mammals, such cattle, deer, and camels, enabling them to eat cellulose-enhanced tough plants and grains that monogastric (i.e., \"single-chambered stomached\") animals, such as humans, dogs, and cats, cannot digest.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8414316",
"title": "Liquid diet",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 410,
"text": "A full or strained liquid diet consists of both clear and opaque liquid foods with a smooth consistency. People who follow this diet may also take liquid vitamin supplements. Some individuals who are told to follow a full-liquid diet are additionally permitted certain components of a mechanical soft diet, such as strained meats, sour cream, cottage cheese, ricotta, yogurt, mashed vegetables or fruits, etc.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "198711",
"title": "Rancidification",
"section": "Section::::Food safety.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 265,
"text": "Using fish oil as an example of a food or dietary supplement susceptible to rancidification over various periods of storage, two reviews found effects only on flavor and odor, with no evidence as of 2015 that rancidity causes harm if a spoiled product is consumed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8414316",
"title": "Liquid diet",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 218,
"text": "A liquid diet is not recommended outside of hospital or medical supervision. Negative side effects include fatigue, nausea, dizziness, hair loss and dry skin which are said to disappear when the person resumes eating.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6me75a | How do we know which way up a planet is? | [
{
"answer": "All planets in the Solar System orbit in almost the same plane, and their axes of rotation are almost perpendicular to such plane. (Only exception is Uranus). So you can define East as the direction the planet is rotating, and North as 90° left of the East.\n\nWe also have reference frames and coordinate systems: cartesian or spherical, centered on the Sun or centered on a planet, inertial or rotating. See [this recent thread](_URL_0_).\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "635547",
"title": "List of extreme points of the United States",
"section": "Section::::Interpretation of easternmost and westernmost.\n",
"start_paragraph_id": 70,
"start_character": 0,
"end_paragraph_id": 70,
"end_character": 827,
"text": "Still another method is to first determine the geographic center of the country and from there measure the shortest distance to every other point. All U.S. territory is spread across less than 180° of longitude, so from any spot in the U.S. it is more direct to reach the easternmost point, Point Udall, U.S. Virgin Islands, by traveling east than by traveling west. Likewise, there is not a single point in U.S. territory from which heading east is a shorter route to the westernmost point, Point Udall, Guam, than heading west would be, even accounting for circumpolar routes. The two different Point Udalls are named for two brothers from the Udall family of Arizona; Mo Udall (Guam) and Stewart Udall (Virgin Islands), sons of Chief Justice Levi Stewart Udall of the Arizona Supreme Court, both served as U.S. Congressman.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15032003",
"title": "Exoplanetology",
"section": "Section::::Physical parameters.:Mass.\n",
"start_paragraph_id": 69,
"start_character": 0,
"end_paragraph_id": 69,
"end_character": 293,
"text": "If a planet's orbit is nearly perpendicular to the line of vision (i.e. \"i\" close to 90°), a planet can be detected through the transit method. The inclination will then be known, and the inclination combined with \"M\" sin\"i\" from radial-velocity observations will give the planet's true mass.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51279487",
"title": "Kepler-419",
"section": "Section::::Planetary system.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 451,
"text": "Only the first planet is known transit the star; this means that the planet's orbit appear to cross in front of their star as viewed from the Earth's perspective. Its inclination relative to Earth's line of sight, or how far above or below the plane of sight it is, vary by less than one degree. This allows direct measurements of the planet's periods and relative diameters (compared to the host star) by monitoring the planet's transit of the star.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1337370",
"title": "Cross section (geometry)",
"section": "Section::::Examples in science.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 208,
"text": "In geology, the structure of the interior of a planet is often illustrated using a diagram of a cross section of the planet that passes through the planet's center, as in the cross section of Earth at right.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22915",
"title": "Planet",
"section": "Section::::Attributes.:Dynamic characteristics.:Orbit.\n",
"start_paragraph_id": 118,
"start_character": 0,
"end_paragraph_id": 118,
"end_character": 910,
"text": "BULLET::::- The \"inclination\" of a planet tells how far above or below an established reference plane its orbit lies. In the Solar System, the reference plane is the plane of Earth's orbit, called the ecliptic. For extrasolar planets, the plane, known as the \"sky plane\" or \"plane of the sky\", is the plane perpendicular to the observer's line of sight from Earth. The eight planets of the Solar System all lie very close to the ecliptic; comets and Kuiper belt objects like Pluto are at far more extreme angles to it. The points at which a planet crosses above and below its reference plane are called its ascending and descending nodes. The longitude of the ascending node is the angle between the reference plane's 0 longitude and the planet's ascending node. The argument of periapsis (or perihelion in the Solar System) is the angle between a planet's ascending node and its closest approach to its star.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4958123",
"title": "55 Cancri c",
"section": "Section::::Orbit and mass.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 366,
"text": "A limitation of the radial velocity method used to discover the planet is that only a lower limit on the mass can be obtained. Further astrometric observations with the Hubble Space Telescope on the outer planet 55 Cancri d suggest that planet is inclined at 53° to the plane of the sky; but innermost b and e are inclined at 85°. Planet c's inclination is unknown.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1196",
"title": "Angle",
"section": "Section::::Angles in geography and astronomy.\n",
"start_paragraph_id": 115,
"start_character": 0,
"end_paragraph_id": 115,
"end_character": 303,
"text": "In geography, the location of any point on the Earth can be identified using a \"geographic coordinate system\". This system specifies the latitude and longitude of any location in terms of angles subtended at the centre of the Earth, using the equator and (usually) the Greenwich meridian as references.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
12qdei | Is it true that a third of the knights in the battle of Agincourt were over 50? | [
{
"answer": "Where did you hear this if I may ask? ",
"provenance": null
},
{
"answer": "Well life expectancy is a very skewed statistic, because infant mortality deflates it substantially. The upper-classes could expect to live to the beginning of what we'd call \"old-age\" (about 60s, 70 and above was more of a gamble) if they weren't taken ill or killed in battle. It's possible- there would be plenty of old knights over the age of 50 to take part- but it seems unlikely. Maybe your source meant a third of knights in England were over 50 at the time of Agincourt?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "36392737",
"title": "Walter Halliday",
"section": "Section::::Supposed knighthood and coat of arms.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 497,
"text": "This story does not stand up well under scrutiny. Walter would have been around seventy years old at the time of the battle of Tewkesbury in 1471, and rather old to be taking up the sword instead of his accustomed musical instrument. He is not listed among the knights created by Edward IV before or after the battle. Bluemantle Pursuivant reported in 1975 that \"I find no trace of Sir Walter in the official records of the College of Arms\", and that \"the arms in Burke's \"Commoners\" are wrong\". \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "606989",
"title": "Gareth",
"section": "Section::::Arthurian legend.:\"Le Morte d'Arthur\".\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 1167,
"text": "In Book IV, there are only two knights that have ever successfully held against Lancelot: Sir Tristan and Gareth. This was always under conditions where one or both parties were unknown by the other, for these knights loved each other \"passingly well\". Gareth was knighted by Lancelot himself when he took upon him the adventure on behalf of Lynette. However, in Book VIII: \"The Death of Arthur\", the unarmed Gareth and his brother Gaheris are killed accidentally by Lancelot during the rescue of Guinevere. This leads to the final tragedy of Arthur's Round Table; Gawain refuses to allow King Arthur to accept Lancelot's sincere apology for the deaths of his two brothers. Lancelot genuinely mourns the death of Gareth, whom he loved closely like a son or younger brother. King Arthur is forced by Gawain and Mordred's insistence to go to war against Lancelot. Mordred's grief is largely faked, driven by his desire to become king. This leads to the splitting of the Round Table, Mordred's treachery in trying to seize Guinevere and the throne, Gawain's death from an old unhealed wound, and finally, Arthur and Mordred slaying each other at the Battle of Camlann. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2519449",
"title": "Peter Knights",
"section": "Section::::Playing career.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 472,
"text": "The litany of injuries that Knights had suffered through his career began to catch up with him, and from 1979 to 1981, he played in only 26 out of a possible 66 games. Amid rumours of retirement, Knights rebounded to play impressive football in his final years. In 1983, he booted six goals in the Qualifying Final to guide Hawthorn to a thrilling four-point win against Fitzroy, and was again among the best players on the field as the Hawks crushed in the Grand Final. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23044576",
"title": "List of Kaamelott episodes",
"section": "Section::::List of episodes.:Book I.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 221,
"text": "The knights at the Round Table discuss the mysterious Sir Provençal the Gaul (Gaulois). He is in fact their own comrade Perceval the Welshman (Gallois), who is not capable of giving his own name without making a mistake.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16242835",
"title": "Marmaduke Thweng, 1st Baron Thweng",
"section": "Section::::Military career.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 735,
"text": "In 1297 Marmaduke achieved some fame at the Battle of Stirling Bridge by a heroic escape. Over 100 English knights had been trapped, together with several thousand infantry, on the far side of the river, and were being slaughtered by the Scots. Thweng managed to fight his way back across the bridge and he thus became the only knight of all those on the far side of the river to survive the battle. Following the rout, Thweng with William FitzWarin were appointed castellans of Stirling Castle by the English leader John de Warenne, 6th Earl of Surrey. The castle was quickly starved into submission, and Thweng and FitzWarin were taken prisoner to Dumbarton Castle. He was summoned to Parliament in 1307, thus becoming Baron Thweng.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2196042",
"title": "The Enchanted World",
"section": "Section::::The Series.:\"Fall of Camelot\".\n",
"start_paragraph_id": 246,
"start_character": 0,
"end_paragraph_id": 246,
"end_character": 925,
"text": "The brother of the two knights slain by Lancelot was Gawain. He had been a knight to King Arthur for years and was one of Arthur's most trusted allies. Gawain's anger for Lancelot was deep and insatiable. Nothing would end his anger except the death of Lancelot. Arthur, who now controlled many fewer knights than before, could not risk losing his greatest ally, so, Arthur and his remaining troops camped around the stronghold where Lancelot now lived in France. Guinevere had long since been returned to Camelot after Lancelot vowed to all that no affair had ever taken place. Guinevere was safe because of Lancelot's lie, but Gawain's anger was still demanded that Lancelot die. The siege lasted weeks. During the siege, Arthur received a note that said that Mordred had told the people that Arthur had died in battle and that Mordred was now King. The note also said that Mordred had vowed to take Guinevere as his wife.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2832416",
"title": "London Knights (UK)",
"section": "Section::::Season History.:Continental Cup.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 406,
"text": "The Knights became the first British team to reach the finals of the Continental Cup in January 2001, where they narrowly missed taking the title at their first attempt. Their run included a surprise 4-1 win over Anschutz stablemates the Munich Barons, and only a 1-0 loss to eventual champions Zurich Lions denied them further glory. Their silver medal was considered a major success for a British side. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9vcmlj | why do fireworks look so bad on film/video, yet look good irl? | [
{
"answer": "Fireworks can look great on video if you have a good enough camera. Cheap cameras, like the ones in our phones, can't handle low light conditions very well and have a hard time focusing on the rapid flashes coming from a firework. The camera is constantly trying to auto focus but can't, resulting in a blurry image. ",
"provenance": null
},
{
"answer": "[Video compression work by detecting similarities from one frame to the next and encoding the difference between successive frames. This is usully done by cutting each frame into tiny blocks and then sending only the blocks that have changed. Some things like fireworks, snow or confetti will change most of the blocks in every frame, forcing the compression algorithm to lower quality significantly to keep up.](_URL_0_)",
"provenance": null
},
{
"answer": "Fireworks [can look amazing ](_URL_0_) on video. You just need the right equipment. ",
"provenance": null
},
{
"answer": "The other part of the problem is that fireworks are HUGE. You feel a certain way when you experience something much much larger than yourself. \n\nYour TV just doesn't have that power over humans in the same way. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "6449908",
"title": "Consumer fireworks",
"section": "Section::::Examples.:Novelty fireworks.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 221,
"text": "Novelty fireworks typically produce a much weaker explosion and sound. In some countries and areas where fireworks are illegal to use, they still allow these small, low grade fireworks to be used. A few examples include:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56478514",
"title": "Fireworks policy in the United States",
"section": "Section::::Consumer fireworks safety.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 595,
"text": "Availability and use of consumer fireworks are hotly debated topics. Critics and safety advocates point to the numerous injuries and accidental fires that are attributed to fireworks as justification for banning or at least severely restricting access to fireworks. Complaints about excessive noise created by fireworks and the large amounts of debris and fallout left over after shooting are also used to support this position. There are numerous incidents of consumer fireworks being used in a manner that is supposedly disrespectful of the communities and neighborhoods where the users live.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "667188",
"title": "Adobe Fireworks",
"section": "Section::::Features.:Image optimization.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 342,
"text": "Fireworks was created specifically for web production. Since not every user may be in possession of a fast Internet connection, it is at the best interest of the web developers to optimize the size of their digital contents. In terms of image compression, Fireworks has a better compression rate than Photoshop with JPEG, PNG and GIF images.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "224894",
"title": "Skyrocket",
"section": "Section::::Professional displays.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 298,
"text": "A common misconception about professional fireworks displays is that skyrockets are used to propel the pyrotechnic effects into the air. In reality, skyrockets are more widely used as a consumer item. Professional fireworks displays utilize mortars to fire aerial shells into the air, not rockets.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59493",
"title": "Fireworks",
"section": "Section::::Safety.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 472,
"text": "Improper use of fireworks may be dangerous, both to the person operating them (risks of burns and wounds) and to bystanders; in addition, they may start fires after landing on flammable material. For this reason, the use of fireworks is generally legally restricted. Display fireworks are restricted by law for use by professionals; consumer items, available to the public, are smaller versions containing limited amounts of explosive material to reduce potential danger.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60785644",
"title": "Fireworks bans in China",
"section": "Section::::Reasons for Ban Fireworks.:Environment.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 790,
"text": "The pollution of fireworks on the environment is becoming more and more apparent. Fireworks cause the most serious pollution in the environment in the shortest time. Although fireworks are not one of the most common sources of pollution in the atmosphere, they are one of the major causes of air pollutants ozone, sulphur, dioxide and nitrogen oxides, as well as aerosols. Fireworks contain a mass of tiny metal particles. These metals are burned to produce color for fireworks: copper for blue, strontium or lithium for red, and barium compounds for bright green or white. When fireworks are set off in the air, a large number of incomplete decomposition or degradation of metal particles, dangerous toxins, harmful chemicals remain in the air for a long time, resulting in air pollution.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22092889",
"title": "Shoot Loud, Louder... I Don't Understand",
"section": "Section::::Reception.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 264,
"text": "The \"Los Angeles Times\" said the film was \"as appetizing as a piece of stale pre-fab pizza... lengthy and boring... never were so many fireworks set off in such a dud of a movie.\". The \"Chicago Tribune\" called it a \"tedious and terrible mess... a disastrous dud.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1vnkb2 | how 2 wifi routers on the same channel can interoperate without completely jamming each other's signal? | [
{
"answer": "It's FM, and you know what happens to FM signals when 2 people try the repeater at the same time, with equal power. It's unreadable. Yes, the wifi points would jam each other too. \n\nThey can interoperate only because packets are short bursts and can be error corrected. They don't route the other networks traffic as it does not match their own essid. If you tried to operate demanding content on either, it would be a different story entirely. But just small amount of network traffic, you may only have a 20% duty cycle, so you can see with the ability to detect errors and resend, the only big challenge is to detect major packet collisions where headers are missing and you dont know who to send it back to. \n\nThis problem actually exists in networking without 2 access points as well. You would just need 2 users to feasibly create collisions. Well, that's where spread spectrum comes into play. That is above my level of understanding, but I believe it is correct to say the transmitters carrier frequency basically changes within the channel, seemingly at random and very often, and this change helps prevent collisions, but doesn't entirely eliminate them. Hence it does hurt performance but doesn't kill communication entirely. ",
"provenance": null
},
{
"answer": "First, I'm going to only talk about 802.11n in particular because I understand Orthogonal Frequency Division Multiplexing (OFDM) better than the other modulation techniques used; Direct-Sequence Spread Spectrum (DSSS) and Frequency-Hopping Spread Spectrum (FHSS). Though the underlying concept of DSSS and FHSS is more less the idea of a Spread Spectrum. Spread Spectrum more or less is making something that takes up very little bandwidth on its own take up more bandwidth to specifically prevent jamming or interference. OFDM is a little different though.\n\nOkay, with the fancy words out of the way lets define Multiplexing first. Let us say for example we want to build a road between two cities. One way to approach this might be make the road one lane allowing one car to travel on the road at any given time. While this car is traveling on this road no other cars are allowed to travel on it until it reaches its destination. After that another car going the opposite direction is allowed on. And so on and so on. In simple terms this is basically what Half-Duplexing is. It allows for communication two ways, but only one thing is allowed to communicate at a time. There has to be a more efficient way of allowing cars to get between the two cities, right? Another option we might try is a two lane road, we're spending the big bucks now. With this two cars can travel in different directions at the same time. More or less this is what constitutes Duplex communication. That is two devices can communicate with each other at the same time. But in those previous examples, only one or two cars are allowed to travel on the road at a time. Surely it can be made more efficient. Suppose we allowed multiple cars that are traveling in the same direction on the road at the same time, and that we further restrict them such that they are only able to travel in a single file line down the road at a fixed distance from one another. Ha! 
I think we have just crammed a ton of cars onto this road! If you think of each car being say a packet from the networking world then what that more or less describes is something called Time Division Multiplexing, though there is more to it. But this does not define Multiplexing. Essentially Multiplexing is taking a lot of individual . . ., channels of information and fitting, compressing, or aggregating (I like this word) onto a single channel of transmission. \n\nNow, with OFDM I'd like to focus on the last three letters first, FDM. FDM is Frequency Division Multiplexing. Going back to the idea of connecting two cities via a road. Instead of a single lane for each direction, why not add another giving a total of four lanes? What about making it six, or eight, or even ten?! Well we do have to draw a line at some point or another. With FDM what we do is take a fixed amount of bandwidth and divide it up into a set number sub-channels that do not overlap with one another. Depending on the selectivity of the filters, stability of the oscillators, and the actual modulation scheme used by the modems determines that amount of information that can be aggregated into a set amount of bandwidth. But Orthogonal Frequency Division Multiplexing takes this a bit further. Again going back to the road between two cities. What if we were to use twenty lanes, but we restrict the total amount of cars in any given lane to say five cars? Now, while a two lane road could handle the one-hundred cars, if some were to have tires blow out or engines blow up there would be less than one-hundred cars arriving at their destinations. If we imagine that these cars are imaginary bits of information that means we lost some. One way to combat the damaged cars causing any sort of back-up for flow of traffic would be to implement multiple lanes. This gives cars a way to go around the wrecked cars. Now, this does not fix the loss of information. 
To prevent the loss of information let us make every other car carry the same information. Now, imagine that instead of two lanes there are twenty lanes either way. Each of these lanes may not carry a lot of information individually, but together they carry a lot and they are very reliable in getting it where it needs to go. In a nutshell this is how OFDM works, in particular Coded OFDM which puts Forward Error Correction into the mix. That is take a small amount of bandwidth then modulate it onto several carriers at fixed intervals. But instead of putting the entire original signal onto each carrier; instead, we place only a small amount of the data present in the original signal onto each carrier. In effect what we have done is create a signal that takes up approximately the same amount of bandwidth and data rate as if we were to use a single-carrier transmission. The reality is that this OFDM signal has a multitude of sub-channels, where some are parallel to one another, and as a result has a lot of data integrity. In fact it is very robust to all sorts of signal degradation. Another way to think about OFDM if you're familiar with radios is like this. It can be thought of as a bunch of slowly modulated narrowband signals rather than a single large fast modulated wideband signal. \n\nThat was longer than I expected, but hopefully it should not have gone above too many people. I would definitely recommend reading the Wikis for pretty much all of these as the simplifications I did here are more for the sake of getting a basic idea. \n\n**Tl;Dr: Essentially, the signal itself has a lot redundancy built into it. This allows it to encounter some absolutely terrible conditions and still function properly. Albeit at a slower rate than in ideal conditions.**\n\nSource: Day job is Satellite Communications where we use a lot of these kind of things on a day to day basis. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "13467020",
"title": "WiMAX MIMO",
"section": "Section::::Other advanced MIMO techniques applied to WiMAX.:WiMAX Uplink Collaborative MIMO.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 608,
"text": "In the case of WiMAX, Uplink Collaborative MIMO is spatial multiplexing with two different devices, each with one antenna. These transmitting devices are collaborating in the sense that both devices must be synchronized in time and frequency so that the intentional overlapping occurs under controlled circumstances. The two streams of data will then interfere with each other. As long as the signal quality is sufficiently good and the receiver at the base station has at least two antennas, the two data streams can be separated again. This technique is sometimes also termed Virtual Spatial Multiplexing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51724903",
"title": "Progetto neco",
"section": "Section::::Technology.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 240,
"text": "Every node in the network is made up by two or more radio interfaces, so that is possible receiving the signal and replaying it to one or more node on a different radio frequency, in order to decrease interferences and increase throughput.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3423785",
"title": "IEEE 1355",
"section": "Section::::Definition.:Slice: DS-SE-02.\n",
"start_paragraph_id": 74,
"start_character": 0,
"end_paragraph_id": 74,
"end_character": 314,
"text": "A connection has two channels, one per direction. Each channel consists of two wires carrying strobe and data. The strobe line changes state whenever the data line starts a new bit with the same value as the previous bit. This scheme makes the links self-clocking, able to adapt automatically to different speeds.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49257",
"title": "Digital Audio Broadcasting",
"section": "Section::::Disadvantages of DAB.:Signal delay.\n",
"start_paragraph_id": 112,
"start_character": 0,
"end_paragraph_id": 112,
"end_character": 862,
"text": "The nature of a single-frequency network (SFN) is such that the transmitters in a network must broadcast the same signal at the same time. To achieve synchronization, the broadcaster must counter any differences in propagation time incurred by the different methods and distances involved in carrying the signal from the multiplexer to the different transmitters. This is done by applying a delay to the incoming signal at the transmitter based on a timestamp generated at the multiplexer, created taking into account the maximum likely propagation time, with a generous added margin for safety. Delays in the audio encoder and the receiver due to digital processing (e.g. deinterleaving) add to the overall delay perceived by the listener. The signal is delayed, usually by around 1 to 4 seconds and can be considerably longer for DAB+. This has disadvantages:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8595079",
"title": "Crossband operation",
"section": "Section::::Uses.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 422,
"text": "Crossband operation is sometimes used by amateur radio operators. Rather than taking it in turns to transmit on the same frequency, both operators can transmit at the same time but on different bands, each one listening to the frequency that the other is using to transmit. A variation on this procedure includes establishing contact on one frequency and then changing to a pair of other frequencies to exchange messages.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4075258",
"title": "Ethernet crossover cable",
"section": "Section::::1000BASE-T and faster.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 475,
"text": "In a departure from both 10BASE-T and 100BASE-TX, 1000BASE-T and faster use all four cable pairs for simultaneous transmission in both directions through the use of telephone hybrid-like signal handling. For this reason, there are no dedicated transmit and receive pairs. From 1000BASE-T onwards the physical medium attachment (PMA) sublayer provides identification of each pair and usually continues to work even over cable where the pairs are unusually swapped or crossed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "641227",
"title": "Multihoming",
"section": "Section::::Caveats.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 404,
"text": "BULLET::::- Routers: Routers and switches must be positioned such that no single piece of network hardware controls all network access to a given host. In particular, it is not uncommon to see multiple Internet uplinks all converge on a single edge router. In such a configuration, the loss of that single router disconnects the Internet uplink, despite the fact that multiple ISPs are otherwise in use.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2i2ojt | how do you steer a gunship? or a clipper? or large sailship in general? | [
{
"answer": "On most boats, the rudder is, in fact, the principle steering device.\n\nThe sails on sailing vessels generally do not \"steer\" the boat. However, they must be re-positioned when the boat changes direction or when the wind direction changes, to maximize the thrust provided by the sails, using the available wind.\n\nFor non-sail vessels, there is often still a rudder, although steerable propellers or thrusters are often also used. Google \"Azipod\" and start reading, for more info.",
"provenance": null
},
{
"answer": "There are two modes of sailing, upwind and downwind. When sailing downwind the wind pushes the sail and the ship moves the direction it is pushed. The keel (or centerboard in a small boat) is important. It keeps the boat slicing forwards in the water, reducing side slippage. \n\nWhen you are traveling upwind the sail is acting like the wing of a plane, creating a low pressure area in front of the sail, which pulls the ship forward. The keel is really important when going upwind. \n\nWater flows along the keel and then hits the rudder. When the rudder is turned the water hits it and rotates the ship, turning the bow (front of the ship) in the same direction that the rudder is pointing. \n\nYou can steer a little with the sails if the rudder is broken, but you wont be able to point upwind at all. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "235082",
"title": "Dragon boat",
"section": "Section::::Crew.:Steerer.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 349,
"text": "A steerer can also use the steering oar to adjust the position of the boat by cranking. When a steerer cranks the steering oar, the stern of the boat moves either to the left or right, spinning the boat. This is typically executed to turn the boat around at practice or to ensure a boat is lined up straight and pointing directly down a racecourse.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "219794",
"title": "Rudder",
"section": "Section::::Boat rudders details.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 204,
"text": "There is also the barrel type rudder, where the ship's screw is enclosed and can be swiveled to steer the vessel. Designers claim that this type of rudder on a smaller vessel will answer the helm faster.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "636617",
"title": "Round Table-class landing ship logistics",
"section": "Section::::Class history.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 395,
"text": "The ships had both bow and stern doors leading onto the main vehicle deck, making them roll-on/roll-off, combined with ramps that led to upper and lower vehicle decks. Thanks to their shallow draught, they could beach themselves and use the bow doors for speedy unloading of troops and equipment. The ships also had helicopter decks on both the upper vehicle deck and behind the superstructure.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "660997",
"title": "Running rigging",
"section": "Section::::Fore-and-aft rigged vessels.:Supporting.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 284,
"text": "BULLET::::- Halyards (sometimes haulyards), are used to raise sails and control luff tension. In large yachts the halyard returns to the deck but in small racing dinghies the head of the sail is attached by a short line to the head of the mast while the boat is lying on its gunwale.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "61341926",
"title": "Benawa",
"section": "Section::::Description.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 709,
"text": "It is steered with 2 quarter rudders, which are fixed to a set of heavy crossbeams in a way to enable a quick emergency release. The helmsmen stood on the outboard galleries. There is a cramped cabin for the captain below the poop deck. The vessel has 2 to 3 masts, both were tripod with the rear legs fixed to heavy tabernacles by means of a horizontal spar round which they can revolve. If the foreleg comes adrift from the hook that holds it in place, the mast can be lowered easily. The sails are tanja and made with \"karoro\" matting. With European influence in the latter centuries, western-styled sails can also be used. In the past, Makassarese sailor may sail them as far as New Guinea and Singapore.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5603779",
"title": "Highfield lever",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 249,
"text": "It is possible to arrange a Highfield lever to work two backstays or shrouds by mounting the lever transversely so that it is thrown from side to side rather than fore and aft, tensioning the rigging on one side of the boat as it relaxes the other.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "485486",
"title": "Valour-class frigate",
"section": "Section::::Design.:Systems and Sensors.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 376,
"text": "The ship's steering gear consists of a steering unit and twin semi-balanced underhung rudders. There is an emergency steering station in the superstructure in the event of damage to the bridge and they can also be operated by hand from the steering gear compartment. To improve the ship's performance in a seaway, they are fitted with a B+V Simplex Compact stabiliser system.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
17w2jw | How definitive are the DNA results on the Richard III skeleton? | [
{
"answer": "[Here is a link to an article describing how they did the DNA analysis](_URL_0_) I will summarize it's points.\n\nThey were able to get a good sample of the corpse's mitochondrial DNA, which is passed without combination from mother offspring. Then, through a historical analysis, they found two people (currently living) who are descended, mother-to-mother, from Richard III's mother Cecily Duchess of York.\n\nThey compared the mitochondrial DNA of all three people and found them to be identical. So, what this proves is that these three people were descended from the same woman, and independent historical analysis pinpointed the common matriarchal ancestor of the two people living today as Cecily Duchess of York.\n\nSome additional evidence. \n\n- Radiocarbon dating suggested that the individual lived in the late 15th century\n\n- Bone composition suggested a high-protein diet including seafood, which would be expected for royalty of that time\n\n- The skeleton was from a slim man in his late 20s/early 30s (Richard III died at 32)\n\n- The skeleton had scoliosis, which is consistent with contemporary descriptions of richard\n\n- The skeleton showed multiple injuries, consistent with Richard III's death in battle",
"provenance": null
},
{
"answer": "You didn't really elaborate on what you mean, but I'm guessing you want to know how confident we can be that the skeleton they've found is King Richard?\n\nHere's an overview of the evidence:\n\nDNA comparisons:\n\n* Geneticists were able to extract and sequence mitochondrial DNA from the skeleton\n\n* Mitochondrial DNA is passed down from mother to child unchanged except for the occasional mutation\n\n* So, by comparing the skeleton's mitochondrial DNA to living people who descend from King Richard's mother's line along an unbroken line of females, we can see if the skeleton has the same mitochondrial group as what King Richard would be expected to have.\n\n* Genealogists were able to track down two direct matriline descendants of Anne of York (Richard III's sister) both of whom provided DNA samples for mitochondrial DNA testing. One of the descendants wants to remain anonymous. The second descendant is a Canadian by the name of Michael Ibsen. \n\n* The fact that they have two people means that they can compare them both and make sure that they match. It makes us more sure that we are predicting King Richard's haplogroup correctly because we can more safely say that there's no anomaly (such as an unknown adoption in one of the descendant's background).\n\n* The two descendants do indeed match, and they are members of a subgroup of haplogroup J. Luckily it is fairly rare, somewhere between 1 and 2 percent of the population belongs to this particular group. If the two living descendants were members of a very prevalent haplogroup, it would increase the odds that any match found between them and the skeleton would be purely coincidental. \n\n* Mitochondrial DNA comparison of the three people can be found [here](_URL_0_) -- it's a virtually perfect match.\n\nSo, that's the particulars of the DNA evidence that they have. 
However, there's additional evidence which makes them more sure that it's King Richard, and not some random haplogroup J guy:\n\n* Records say he was buried at a church in Leicester, 100 miles north of London. Archaeologist Richard Buckley identified a possible location of the grave through map analysis. They looked where his analyses predicted that King Richard would be, and they found the skeleton.\n\n* Radiocarbon dating estimates that the death occurred between 1455 and 1540 (Richard died in 1485)\n\n* The skeleton they found appears to have died in battle, and there's no coffin or anything like that, consistent with an enemy burial.\n\n* Various head injuries that the skeleton suffered are consistent with the way King Richard's death in battle was described\n\n* The remains display signs of scoliosis, consistent with contemporary descriptions of Richard. Other features of the skeleton are also consistent with Richard, such as the age. He died at age 32 and the skeleton they found died \"in his late 20s to late 30s\"\n\nThe DNA evidence alone or the circumstantial evidence alone would not have been enough to make a strong conclusion, but looking at everything together is pretty convincing. The research team is not saying that they are 100% sure they have found King Richard, but rather that they:\n\n > can now confirm that the body is that of Richard III \"beyond a reasonable doubt\"",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "15022586",
"title": "Fécamp Abbey",
"section": "Section::::Second foundation.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 380,
"text": "In February 2016, French, Danish and Norwegian researchers opened the lead boxes in order to conduct DNA analysis of the remains. Radiocarbon dating of the remains showed that neither skeleton could be that of Richard I or Richard II. One skeleton dated from the third century BCE, the other from the eighth century AD, both long before the lifetimes of Richard I and Richard II.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38424193",
"title": "Exhumation and reburial of Richard III of England",
"section": "Section::::Identification of Richard III and other findings.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 594,
"text": "On 4 February 2013, the University of Leicester confirmed that the skeleton was that of Richard III. The identification was based on mitochondrial DNA evidence, soil analysis, and dental tests, and physical characteristics of the skeleton consistent with contemporary accounts of Richard's appearance. Osteoarchaeologist Jo Appleby commented: \"The skeleton has a number of unusual features: its slender build, the scoliosis, and the battle-related trauma. All of these are highly consistent with the information that we have about Richard III in life and about the circumstances of his death.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38424193",
"title": "Exhumation and reburial of Richard III of England",
"section": "Section::::Analysis of the discovery.:DNA evidence.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 1508,
"text": "Professor Michael Hicks, a Richard III specialist, has been particularly critical of the use of the mitochondrial DNA to argue that the body is Richard III's, stating that \"any male sharing a maternal ancestress in the direct female line could qualify\". He also criticises the rejection by the Leicester team of the Y chromosomal evidence, suggesting that it was not acceptable to the Leicester team to conclude that the skeleton was anyone other than Richard III. He argues that on the basis of the present scientific evidence \"identification with Richard III is more unlikely than likely\". However, Hicks himself draws attention to the contemporary view held by some that Richard III's grandfather, Richard, Earl of Cambridge, was the product of an illegitimate union between Cambridge's mother Isabella of Castile (a bastard daughter of Pedro the Cruel of Castile) and John Holland (brother in law of Henry IV of England), rather than Edmund of Langley, 1st Duke of York (Edward III's fourth son). If that was the case then the Y chromosome discrepancy with the Beaufort line would be explained but obviously still fail to prove the identity of the body. Hicks suggests alternative candidates descended from Richard III's maternal ancestress for the body (e.g. Thomas Percy, 1st Baron Egremont, and John de la Pole, 1st Earl of Lincoln) but does not provide evidence to support his suggestions. Philippa Langley refutes Hicks's argument on the grounds that he does not take into account all the evidence.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26284",
"title": "Richard III of England",
"section": "Section::::Discovery of remains.\n",
"start_paragraph_id": 83,
"start_character": 0,
"end_paragraph_id": 83,
"end_character": 1612,
"text": "On 4 February 2013, the University of Leicester confirmed that the skeleton was beyond reasonable doubt that of King Richard III. This conclusion was based on mitochondrial DNA evidence, soil analysis, and dental tests (there were some molars missing as a result of caries), as well as physical characteristics of the skeleton which are highly consistent with contemporary accounts of Richard's appearance. The team announced that the \"arrowhead\" discovered with the body was a Roman-era nail, probably disturbed when the body was first interred. However, there were numerous perimortem wounds on the body, and part of the skull had been sliced off with a bladed weapon; this would have caused rapid death. The team concluded that it is unlikely that the king was wearing a helmet in his last moments. Soil taken from the remains was found to contain microscopic roundworm eggs. Several eggs were found in samples taken from the pelvis, where the king's intestines were, but not from the skull and only very small numbers were identified in soil surrounding the grave. The findings suggest that the higher concentration of eggs in the pelvic area probably arose from a roundworm infection the King suffered in his life, rather than from human waste dumped in the area at a later date, researchers said. The Mayor of Leicester announced that the king's skeleton would be re-interred at Leicester Cathedral in early 2014, but a judicial review of that decision delayed the reinterment for a year. A museum to Richard III was opened in July 2014 in the Victorian school buildings next to the Greyfriars grave site.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38424193",
"title": "Exhumation and reburial of Richard III of England",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 650,
"text": "The age of the bones at death matched that of Richard when he was killed; they were dated to about the period of his death and were mostly consistent with physical descriptions of the king. Preliminary DNA analysis showed that mitochondrial DNA extracted from the bones matched that of two matrilineal descendants, one 17th-generation and the other 19th-generation, of Richard's sister Anne of York. Taking these findings into account along with other historical, scientific and archaeological evidence, the University of Leicester announced on 4 February 2013 that it had concluded beyond reasonable doubt that the skeleton was that of Richard III.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26284",
"title": "Richard III of England",
"section": "Section::::Discovery of remains.\n",
"start_paragraph_id": 85,
"start_character": 0,
"end_paragraph_id": 85,
"end_character": 622,
"text": "On 5 February 2013 Professor Caroline Wilkinson of the University of Dundee conducted a facial reconstruction of Richard III, commissioned by the Richard III Society, based on 3D mappings of his skull. The face is described as \"warm, young, earnest and rather serious\". On 11 February 2014 the University of Leicester announced the project to sequence the entire genome of Richard III and one of his living relatives, Michael Ibsen, whose mitochondrial DNA confirmed the identification of the excavated remains. Richard III thus became the first ancient person of known historical identity to have their genome sequenced.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26284",
"title": "Richard III of England",
"section": "Section::::Discovery of remains.\n",
"start_paragraph_id": 81,
"start_character": 0,
"end_paragraph_id": 81,
"end_character": 2266,
"text": "On 12 September, it was announced that the skeleton discovered during the search might be that of Richard III. Several reasons were given: the body was of an adult male; it was buried beneath the choir of the church; and there was severe scoliosis of the spine, possibly making one shoulder higher than the other (to what extent depended on the severity of the condition). Additionally, there was an object that appeared to be an arrowhead embedded in the spine; and there were perimortem injuries to the skull. These included a relatively shallow orifice, which is most likely to have been caused by a rondel dagger, and a scooping depression to the skull, inflicted by a bladed weapon, most probably a sword. Additionally, the bottom of the skull presented a gaping hole, where a halberd had cut away and entered it. Forensic pathologist Dr Stuart Hamilton stated that this injury would have left the individual's brain visible, and most certainly would have been the cause of death. Dr Jo Appleby, the osteo-archaeologist who excavated the skeleton, concurred and described the latter as \"a mortal battlefield wound in the back of the skull\". The base of the skull also presented another fatal wound in which a bladed weapon had been thrust into it, leaving behind a jagged hole. Closer examination of the interior of the skull revealed a mark opposite this wound, showing that the blade penetrated to a depth of . In total, the skeleton presented ten wounds: four minor injuries on the top of the skull, one dagger blow on the cheekbone, one cut on the lower jaw, two fatal injuries on the base of the skull, one cut on a rib bone, and one final wound on the pelvis, most probably inflicted after death. It is generally accepted that postmortem, Richard's naked body was tied to the back of a horse, with his arms slung over one side and his legs and buttocks over the other. This presented a tempting target for onlookers, and the angle of the blow on the pelvis suggests that one of them stabbed Richard's right buttock with substantial force, as the cut extends from the back all the way to the front of the pelvic bone and was most probably an act of humiliation. It is also possible that Richard suffered other injuries which left no trace on the skeleton.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3mmmhf | What was the anti masonic party and what happened to them? | [
{
"answer": "They were a political party formed in the wake of public outcry over an incident where some Masons in NY state were accused of kidnapping and possibly killing a fellow Mason (named Morgan) who had published an exposé on the initiations.\n\nThis was in the 1820s.\n\nThe Anti-Masonic party was the most successful third-party in US history, coming in second in a Presidential election!\n\nHowever, after the failure to win their bid for the highest office, the party began to unravel. The damage to Freemasonry being done, the party found that it was too divided to last.\n\nFreemasonry would not fully recover until later in the century during a period that led to what's now called the Golden Age of Fraternalism, and spawned countless fraternities modeled on Freemasonry and also saw Freemasonry itself return to and in many ways surpass its former strength.\n\nThat period lasted until the early part of the 20th century, and the decline that followed (esp. during the depression) didn't rebound until after World War II.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "32301",
"title": "Anti-Masonic Party",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 442,
"text": "The Anti-Masonic Party, also known as the Anti-Masonic Movement, was the first third party in the United States. It strongly opposed Freemasonry as a single-issue party and later aspired to become a major party by expanding its platform to take positions on other issues. After emerging as a political force in the late 1820s, most of the Anti-Masonic Party's members joined the Whig Party in the 1830s and the party disappeared after 1838. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32301",
"title": "Anti-Masonic Party",
"section": "Section::::History.:Party foundation.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 348,
"text": "The Anti-Masonic Party was formed in Upstate New York in February 1828. Anti-Masons were opponents of Freemasonry, believing that it was a corrupt and elitist secret society which was ruling much of the country in defiance of republican principles. Many people regarded the Masonic organization and its adherents involved in government as corrupt.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40513",
"title": "1840 United States presidential election",
"section": "Section::::Nominations.:Anti-Masonic Party nomination.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 939,
"text": "After the negative views of Freemasonry among a large segment of the public began to wane in the mid 1830s, the Anti-Masonic Party had begun to disintegrate. Its leaders began to move one by one to the Whig party. Party leaders met in September 1837 in Washington, D.C., and agreed to maintain the party. The third Anti-Masonic Party National Convention was held in Philadelphia on November 13-14, 1838. By this time, the party had been almost entirely supplanted by the Whigs. The delegates unanimously voted to nominate William Henry Harrison for president (who the party had supported for president the previous election along with Francis Granger for Vice President) and Daniel Webster for Vice President. However, when the Whig National Convention nominated Harrison with John Tyler as his running mate, the Anti-Masonic Party did not make an alternate nomination and ceased to function and was fully absorbed into the Whigs by 1840.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11227",
"title": "Freemasonry",
"section": "Section::::Anti-Masonry.:Political opposition.\n",
"start_paragraph_id": 108,
"start_character": 0,
"end_paragraph_id": 108,
"end_character": 434,
"text": "Freemasonry in the United States faced political pressure following the 1826 kidnapping of William Morgan by Freemasons and his subsequent disappearance. Reports of the \"Morgan Affair\", together with opposition to Jacksonian democracy (Andrew Jackson was a prominent Mason), helped fuel an Anti-Masonic movement. The short-lived Anti-Masonic Party was formed, which fielded candidates for the presidential elections of 1828 and 1832.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32301",
"title": "Anti-Masonic Party",
"section": "Section::::History.:Conventions and elections.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 575,
"text": "The Anti-Masonic Party held a third national nominating convention at Temperance Hall in Philadelphia on November 13–14, 1838. By this time, the party had been almost entirely supplanted by the Whigs. The Anti-Masons unanimously nominated William Henry Harrison for president and Daniel Webster for vice president in the 1840 election. When the Whig National Convention nominated Harrison with John Tyler as his running mate, the Anti-Masonic Party did not make an alternate nomination and ceased to function, with most adherents being fully absorbed into the Whigs by 1840.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32301",
"title": "Anti-Masonic Party",
"section": "Section::::Legacy.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 432,
"text": "The Anti-Masonic movement gave rise to or expanded the use of many innovations which became accepted practice among other parties, including nominating conventions and party newspapers. In addition, the Anti-Masons aided in the rise of the Whig Party as the major alternative to the Democrats, with conventions, newspapers and Anti-Masonic positions on issues including internal improvements and tariffs being adopted by the Whigs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40512",
"title": "1836 United States presidential election",
"section": "Section::::Nominations.:Anti-Masonic Party nomination.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 330,
"text": "After the negative views of Freemasonry among a large segment of the public began to wane in the mid 1830s, the Anti-Masonic Party began to disintegrate. Some of its members began moving to the Whig Party, which had a broader issue base than the Anti-Masons. The Whigs were also regarded as a better alternative to the Democrats.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1y8knm | Did the US ever try to convert Filipinos to Protestantism during their colonisation of the country? | [
{
"answer": "After the US colonized the Philippines the Catholic Church was disestablished, and was no longer the official religion. When that happened there was a large influx of Protestant missionaries of all denominations to the Philippines. Today, Protestants make up around 10% of the total population in the Philippines, with about 9 million people. While Protestantism was introduced to the Philippines during the period of US colonialism, it wasn't necessarily due to a push from the US government. It really was due more to missionaries acting opportunistically after the disestablishment of the Catholic Church. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "679654",
"title": "Filipino Americans",
"section": "Section::::Culture.:Religion.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 986,
"text": "During the early part of the United States governance in the Philippines, there was a concerted effort to convert Filipinos into Protestants. As Filipinos began to migrate to the United States, Filipino Roman Catholics were often not embraced by their American Catholic brethren, nor were they sympathetic to a Filipino-ized Catholicism, in the early 20th century. This led to creation of ethnic-specific parishes; one such parish was St. Columban's Church in Los Angeles. In 1997, the Filipino oratory was dedicated at the Basilica of the National Shrine of the Immaculate Conception, owing to increased diversity within the congregations of American Catholic parishes. The first-ever American Church for Filipinos, San Lorenzo Ruiz Church in New York City, is named after the first saint from the Philippines, San Lorenzo Ruiz. This was officially designated as a church for Filipinos in July 2005, the first in the United States, and the second in the world, after a church in Rome.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2105161",
"title": "Koronadal",
"section": "Section::::Culture.:Catholic culture.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 475,
"text": "The Catholic Filipinos make up the great majority (over 70%) of the Southern Philippine population. They are relatively newcomers to the area; the first wave of Christian migrants came in the seventeenth century when the Spaniards sought to populate Zamboanga, Jolo, Dapitan and other areas by encouraging people from Luzon and the Visayas to settle there. In the nineteenth century Spanish policy found considerable success in encouraging migrations to Iligan and Cotabato.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "419467",
"title": "Religion in the Philippines",
"section": "Section::::Christianity.:Protestantism.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 637,
"text": "Protestantism arrived in the Philippines with the take-over of the islands by Americans at the turn of the 20th century. Nowadays, they comprise about 10%–15% of the population with an annual growth rate of 10% since 1910 and constitute the largest Christian grouping after Roman Catholicism. In 1898, Spain lost the Philippines to the United States. After a bitter fight for independence against its new occupiers, Filipinos surrendered and were again colonized. The arrival of Protestant American missionaries soon followed. Protestant church organizations established in the Philippines during the 20th century include the following:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5811119",
"title": "De La Salle Brothers Philippine District",
"section": "Section::::Background.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1033,
"text": "In the aftermath of the Philippine Revolution against Spain and the Philippine–American War which immediately followed, the Protestant denomination, first introduced by the new American colonial masters and aided by the newly arrived American teachers, the Thomasites, was gaining a foothold among Filipinos because of the then strong anti-Spanish Friar sentiment existing at that time. Due to the then very small number of Catholic educational institutions in the country, the then American Archbishop of Manila Jeremiah James Harty, himself an alumnus of a De La Salle Christian Brothers school in St. Louis, Missouri, would appeal to the Superior-General of the Christian Brothers in 1905 for the establishment of a De La Salle school in the Philippines. While there was a growing pressure for a De La Salle school, Archbishop Harty's request was rejected, because of the Christian Brothers' lack of funds. Nonetheless, Harty continued to appeal to Pope Pius X for the establishment of additional Catholic schools in the country.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3664706",
"title": "Jaro, Iloilo City",
"section": "Section::::History.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 1135,
"text": "The coming of the Americans in the early 20th century when the Philippines was ceded by Spain to the United States through the 1898 Treaty of Paris brought with them the Protestant religion and Iloilo is one of the first places where they came and started a mission in the Philippines. During the American occupation, the Philippine islands were divided to different Protestant missions and Western Visayas came to the jurisdiction of the Baptists. Baptist missionaries came although other Protestant sects came also especially the Presbyterians and they established numerous institutions. The Presbyterians established the Iloilo Mission Hospital in 1901, the first Protestant and American founded hospital in the country while the Baptists established the Jaro Evangelical Church, the first Baptist church in the islands, and the Central Philippine University in 1905, which was founded by William Valentine through a grant given by the American industrialist, oil magnate and philanthropist John D. Rockefeller as the \"first university in the City of Jaro\" and also the first Baptist founded and second American university in Asia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47164819",
"title": "Central Philippine University - College of Theology",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 434,
"text": "Prior when the Philippines was ceded to the United States administration by Spain through the Treaty of Paris (1898), the Americans brought their faith, the Protestantism. A comity agreement with Protestant American churches and sects was created to divide the Philippine islands for missionary works and to avoid future conflicts with different churches. Western Visayas came to the jurisdictions of the Baptists (Northern Baptist).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1356946",
"title": "Iloilo City",
"section": "",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 873,
"text": "The United States colonization of the Philippine islands, with Iloilo as one of the first American colonial outposts and a place to which they brought their Protestant faith, paved the way for the founding of numerous institutions that mark Iloilo's significance and important contribution to the history of the American colonial era in the country. These include the John D. Rockefeller-funded Central Philippine University, the first Baptist and second American and Protestant university in the Philippines and in Asia; Iloilo Mission Hospital, the first Protestant and American hospital in the Philippines; Jaro Evangelical Church, the first Baptist and second Protestant church in the Philippines; Jaro Adventist Center, the first organized Adventist church in the Philippines; and Convention of Philippine Baptist Churches, the first organized Baptist churches union in the Philippines.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4yyo7g | what's the difference b/w high quality and low quality meats? | [
{
"answer": "Meat from an animal that received high-quality feed is more chemically varied, and has more flavor. This is particularly noticeable in mild-tasting meat like chicken.\n\nHigh-quality beef typically has more fat mixed throughout (an effect called \"marbling\") which creates a richer taste and more delicate texture.",
"provenance": null
},
{
"answer": "It is a combination of several factors. Better quality meats will come from animals that were raised on better quality, more nutritious (and usually more expensive) diets. Genetics would have more to do with the actual build of the animal, but can affect quality. I also believe that the conditions that an animal is raised in can have a huge impact. (Think filthy, crowded feed lot VS. clean, spacious pasture)\n\nThe difference in taste and texture are a direct result of the way the animal is raised. What you put in is what you get out. Better nutrition = better building of the muscle fibers and more flavor.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "49151325",
"title": "Mutton flaps",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 240,
"text": "Consisting of low-quality rib meat, described as a \"tough, scraggy meat\" if not well cooked, in recent years their high fat content has made them unpopular in many Western countries, although they are widely used as döner meat in Europe. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4444306",
"title": "Marbled meat",
"section": "Section::::Factors in marbling.:Important terms defined.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 376,
"text": "Beef quality grades - A quality grade is a composite evaluation of factors that affect palatability of meat (tenderness, juiciness, and flavor). These factors include carcass maturity, firmness, texture, and color of lean, and the amount and distribution of marbling within the lean. Beef carcass quality grading is based on (1) degree of marbling and (2) degree of maturity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10507997",
"title": "Berkshire pig",
"section": "Section::::Culinary.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 765,
"text": "Berkshire pork, prized for juiciness, flavour, and tenderness, is pink-hued and heavily marbled. Its high fat content makes it suitable for long cooking and high-temperature cooking. The meat also has a slightly higher pH, according to food science professor Kenneth Prusa of Iowa State University. Increased pH makes the meat darker, firmer, and more flavorful. High pH is a greater determinant than fat content in the meat's overall flavor characteristics. The Japanese have bred the Kurobuta branch of the Berkshire breed for increased fineness in the meat and better marbling. Pigs' fat stores many of the characteristics of the food that they eat. Berkshire pigs are usually free-ranging, often supplemented with a diet of corn, nuts, clover, apples, or milk.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6237119",
"title": "Short ribs",
"section": "Section::::Types of short ribs.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 237,
"text": "Chuck short ribs tend to be meatier than the other two types of ribs, but they are also tougher due to the more extensive connective tissues (collagen and reticulin) in them. Plate short ribs tend to be fattier than the other two types.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34931824",
"title": "Steak",
"section": "Section::::Types.:Beefsteak.\n",
"start_paragraph_id": 63,
"start_character": 0,
"end_paragraph_id": 63,
"end_character": 499,
"text": "Beef steak is graded for quality, with higher prices for higher quality. Generally, the higher the quality, the more tender the beef, the less time is needed for cooking, or the better the flavor. For example, beef tenderloin is the most tender and wagyu, such as Kobe beef from Japan, is known for its high quality and commands a high price. Steak can be cooked relatively quickly compared to other cuts of meat, particularly when cooked at very high temperatures, such as by broiling or grilling.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6648130",
"title": "Carcass grade",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 445,
"text": "Grades are determined based on an animal's fat content and body condition. The most common grades, from best to worst, are \"breakers\" (fleshy, body condition 7 or above), \"boners\" (body condition 5 to 7), \"lean\", and \"light\" (thin, body condition 1 to 4). Carcasses rated as lean or light often are sold for less per pound, as less meat is produced from the carcass despite processing costs remaining similar to those of higher grade carcasses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "210486",
"title": "Fatback",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 224,
"text": "Like other types of pig fat, fatback may be rendered to make a high quality lard, and is one source of salt pork. Finely diced or coarsely ground fatback is an important ingredient in sausage making and in some meat dishes.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
19uudd | Why do we always put reactive materials in glass beakers/flasks/graduated cylinders etc.? | [
{
"answer": "In a nutshell, glass is very stable, and will not react easily with most compounds.\nThe glass stays intact, the chemical stays the same, everyone is happy.\n\nHowever, some reagents are better kept in plastic containers such as polyethylene, or even quartz, because glass is not a magical non-reactive substance either.",
"provenance": null
},
{
"answer": "Borosilicate glass (the main kind of glass used in laboratory settings) is very inert towards the majority of reagents, even highly corrosive ones such as concentrated sulfuric acid. It is also stable enough that it can be raised (and lowered) to a temperature range suitable for most reactions. Glass is not suitable for all applications, however, since some chemicals, most notably hydrofluoric acid, will etch glass or will otherwise compromise the integrity of glass containers. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "42791912",
"title": "Vitrimers",
"section": "Section::::Functional principle.:Glass and glass former.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 812,
"text": "Until 2010, no organic strong glass formers were known. Strong glass formers can be shaped in the same way as glass (silicon dioxide) can be. Vitrimers are the first such material discovered, which can behave like a viscoelastic fluid at high temperatures. Unlike classical polymer melts, whose flow properties are largely dependent on friction between monomers, vitrimers become a viscoelastic fluid because of exchange reactions at high temperatures as well as monomer friction. These two processes have different activation energies, resulting in a wide range of viscosity variation. Moreover, because the exchange reactions follow Arrhenius' Law, the change of viscosity of vitrimers also follows an Arrhenius relationship with the increase of temperature, differing greatly from conventional organic polymers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12581",
"title": "Glass",
"section": "Section::::Other types.\n",
"start_paragraph_id": 58,
"start_character": 0,
"end_paragraph_id": 58,
"end_character": 387,
"text": "To make glass from materials with poor glass forming tendencies, novel techniques are used to increase cooling rate, or reduce crystal nucleation triggers. Examples of these techniques include aerodynamic levitation (cooling the melt whilst it floats on a gas stream), splat quenching (pressing the melt between two metal anvils) and roller quenching (pouring the melt through rollers).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "65559",
"title": "Borate",
"section": "Section::::Borosilicates.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 438,
"text": "Borosilicate glass, also known as pyrex, can be viewed as a silicate in which some [SiO] units are replaced by [BO] centers, together with additional cations to compensate for the difference in valence states of Si(IV) and B(III). Because this substitution leads to imperfections, the material is slow to crystallise and forms a glass with low coefficient of thermal expansion and is resistant to cracking when heated, unlike soda glass.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1394160",
"title": "Thermal shock",
"section": "Section::::Effect on materials.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 500,
"text": "Borosilicate glass is made to withstand thermal shock better than most other glass through a combination of reduced expansion coefficient and greater strength, though fused quartz outperforms it in both these respects. Some glass-ceramic materials (mostly in the lithium aluminosilicate (LAS) system) include a controlled proportion of material with a negative expansion coefficient, so that the overall coefficient can be reduced to almost exactly zero over a reasonably wide range of temperatures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6458",
"title": "Ceramic",
"section": "Section::::Materials.:Noncrystalline ceramics.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 427,
"text": "Noncrystalline ceramics, being glass, tend to be formed from melts. The glass is shaped when either fully molten, by casting, or when in a state of toffee-like viscosity, by methods such as blowing into a mold. If later heat treatments cause this glass to become partly crystalline, the resulting material is known as a glass-ceramic, widely used as cook-tops and also as a glass composite material for nuclear waste disposal.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7716620",
"title": "Pasteur pipette",
"section": "Section::::Types.:Glass Pasteur pipette.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 553,
"text": "Nowadays, the two types of glass that are used mainly in the laboratory and in the Pasteur pipette are borosilicate glass and soda lime glass. Borosilicate glass is a widely used glass for laboratory apparatus, as it can withstand chemicals and temperatures used in most laboratories. Borosilicate glass is also more economical since the glass can be fabricated easily compared to other types. Soda lime glass, although not as chemically resistant as borosilicate glass, is suitable as a material for inexpensive apparatus such as the Pasteur pipette.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1452308",
"title": "Borosilicate glass",
"section": "Section::::Manufacturing process.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 402,
"text": "Borosilicate glass is created by combining and melting boric oxide, silica sand, soda ash, and alumina. Since borosilicate glass melts at a higher temperature than ordinary silicate glass, some new techniques were required for industrial production. The manufacturing process depends on the product geometry and can be differentiated between different methods like floating, tube drawing, or moulding.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2s02s5 | Why do the continents seem to migrate north, leaving a gap between antarctica and the rest of the world? | [
{
"answer": "It's random, mostly.\n\nPlate tectonics is driven by convection currents in the mantle under the crust. Most of the time, people only consider the major continents moving, but [the jigsaw puzzle is slightly more complicated](_URL_0_) than that. Numerous oceanic plates are jostling around too. \n\nDuring Pangea, Antarctica was wedged between India, Australia, and Eastern Africa. This whole assembly was around [the same latitude as modern day Southern Africa](_URL_1_). You'll see North America and Eurasia are up in the northern hemisphere, which is a good chunk of the land on Earth. \n\nAs things started to break up and migrate, Antarctica happened to get shunted south. Australia kinda followed it; these are the two landmasses that have been isolated the longest, but everyone else just sort of drifted north. The Northern Hemisphere is a lot more crowded land wise than the Southern, so it makes sense that the pole is more packed.\n\nThis was a bit rambling, but I hope it covered your question. tl;dr It's luck of the geologic draw.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "11603215",
"title": "Geological history of Earth",
"section": "Section::::Phanerozoic Eon.:Cenozoic Era.:Paleogene Period.:Oligocene Epoch.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 705,
"text": "Antarctica continued to become more isolated and finally developed a permanent ice cap. Mountain building in western North America continued, and the Alps started to rise in Europe as the African plate continued to push north into the Eurasian plate, isolating the remnants of Tethys Sea. A brief marine incursion marks the early Oligocene in Europe. There appears to have been a land bridge in the early Oligocene between North America and Europe since the faunas of the two regions are very similar. During the Oligocene, South America was finally detached from Antarctica and drifted north toward North America. It also allowed the Antarctic Circumpolar Current to flow, rapidly cooling the continent.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "648405",
"title": "Late Cretaceous",
"section": "Section::::Geography.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 504,
"text": "Due to plate tectonics, the Americas were gradually moving westward, causing the Atlantic Ocean to expand. The Western Interior Seaway divided North America into eastern and western halves; Appalachia and Laramidia. India maintained a northward course towards Asia. In the Southern Hemisphere, Australia and Antarctica seem to have remained connected and began to drift away from Africa and South America. Europe was an island chain. Populating some of these islands were endemic dwarf dinosaur species.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32833685",
"title": "Ice cap climate",
"section": "Section::::Locations.:Extreme southern latitudes.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 248,
"text": "The continent of Antarctica is centered on the South Pole. Antarctica is surrounded on all sides by the Southern Ocean. As a result, high-speed winds circle around Antarctica, preventing warmer air from temperate zones from reaching the continent.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32833685",
"title": "Ice cap climate",
"section": "Section::::Locations.:Extreme southern latitudes.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 332,
"text": "While Antarctica does have some small areas of tundra on the northern fringes, the vast majority of the continent is extremely cold and permanently frozen. Because it is climatically isolated from the rest of the Earth, the continent has extreme cold not seen anywhere else, and weather systems rarely penetrate into the continent.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58539210",
"title": "Late Cenozoic Ice Age",
"section": "Section::::Glaciation of the Southern Hemisphere.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 370,
"text": "Australia drifted away from Antarctica forming the Tasmanian Passage, and South America drifted away from Antarctica forming the Drake Passage. This caused the formation of the Antarctic Circumpolar Current, a current of cold water surrounding Antarctica. This current still exists today, and is a major reason for why Antarctica has such an exceptionally cold climate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31546066",
"title": "Ecology of Tasmania",
"section": "Section::::Flora.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 700,
"text": "Millions of years ago, Antarctica was warmer and much wetter, and supported the Antarctic flora, including forests of podocarps and southern beech. Antarctica was also part of the ancient supercontinent of Gondwanaland, which gradually broke up by continental drift starting 110 million years ago. The separation of South America from Antarctica 30-35 million years ago allowed the Antarctic Circumpolar Current to form, which isolated Antarctica climatically and caused it to become much colder. The Antarctic flora subsequently died out in Antarctica, but is still an important component of the flora of southern Neotropic (South America) and Australasia, which were also former parts of Gondwana.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37577368",
"title": "Tectonic evolution of the Transantarctic Mountains",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 834,
"text": "The tectonic evolution of the Transantarctic Mountains appears to have begun when Antarctica broke away from Australia during the late Cretaceous and is ongoing, creating along the way some of the longest mountain ranges (at 3500 kilometers) formed by rift flank uplift and associated continental rifting. The Transantarctic Mountains (TAM) separate East and West Antarctica. The rift system that formed them is caused by a reactivation of crust along the East Antarctic Craton. This rifting or seafloor spreading causes plate movement that results in a nearby convergent boundary which then forms the mountain range. The exact processes responsible for making the Transantarctic Mountains are still debated today. This results in a large variety of proposed theories that attempt to decipher the tectonic history of these mountains.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
102oy2 | Would a nuclear bomb explode if you bomb it with an other bomb? | [
{
"answer": "No. There's a very critically timed combination of events that has to happen to get a nuclear detonation. The worst that would happen is that you detonate the charge around the fissile material and produce a conventional 'dirty' bomb.",
"provenance": null
},
{
"answer": "Yes it can, and it is already used! (kinda)\n\nIt all depends on the size, kinda...\n\nA standard nuclear fission bomb explodes because massive amounts of (normally) uranium receive neutrons, which causes the atoms to become unstable and split. This fission releases lots of energy, plus a couple of extra neutrons to boot to keep the chain reaction going.\n\nUranium normally has some spontaneous decay going on, which might trigger a spontaneous chain reaction. This spontaneous chain reaction can only happen if the uranium is very pure and if you have enough material (a critical mass).\n\nA slow chain reaction is used in nuclear reactors to power turbines and can be used to generate electricity.\nA faster, uncontrolled chain reaction is called a meltdown, where more heat is generated than the cooling systems can handle (see the Chernobyl disaster).\n\nNow for the question:\nIf you have a large amount of uranium which is already close to critical mass, you can set it off by giving it a sudden boost of massive amounts of neutrons. This boost you can get from another nuclear explosion.\n\nA hydrogen bomb (a more powerful version of the fission bomb) uses not fission (big unstable atoms splitting into smaller atoms) but fusion (merging of small, lighter atoms into heavier atoms).\nA hydrogen bomb uses a heavy version of hydrogen (called tritium), and its reaction is also started by the massive amounts of energy and neutrons from a fission explosion!\n\nSo, wrap a 'standard' fission bomb with large amounts of heavy hydrogen => set off the fission bomb to get an explosion with large amounts of neutrons => the neutrons and heat start fusion in the heavy hydrogen => an even bigger explosion!!!\n\nSo yes, a hydrogen bomb is set off by a primary A-bomb explosion.\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1421832",
"title": "Boosted fission weapon",
"section": "Section::::Gas boosting in modern nuclear weapons.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 209,
"text": "Fusion-boosted fission bombs can also be made immune to neutron radiation from nearby nuclear explosions, which can cause other designs to predetonate, blowing themselves apart without achieving a high yield.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37630",
"title": "Neutron bomb",
"section": "Section::::Use.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 452,
"text": "Although neutron bombs are commonly believed to \"leave the infrastructure intact\", with current designs that have explosive yields in the low kiloton range, detonation in (or above) a built-up area would still cause a sizable degree of building destruction, through blast and heat effects out to a moderate radius, albeit considerably less destruction, than when compared to a standard nuclear bomb of the \"exact\" same total energy release or \"yield\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21785",
"title": "Nuclear weapon",
"section": "Section::::Types.:Other types.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 922,
"text": "Some nuclear weapons are designed for special purposes; a neutron bomb is a thermonuclear weapon that yields a relatively small explosion but a relatively large amount of neutron radiation; such a device could theoretically be used to cause massive casualties while leaving infrastructure mostly intact and creating a minimal amount of fallout. The detonation of any nuclear weapon is accompanied by a blast of neutron radiation. Surrounding a nuclear weapon with suitable materials (such as cobalt or gold) creates a weapon known as a salted bomb. This device can produce exceptionally large quantities of long-lived radioactive contamination. It has been conjectured that such a device could serve as a \"doomsday weapon\" because such a large quantity of radioactivities with half-lives of decades, lifted into the stratosphere where winds would distribute it around the globe, would make all life on the planet extinct.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50893060",
"title": "Nuclear blackout",
"section": "Section::::Bomb effects.:Within the atmosphere.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 345,
"text": "When a nuclear bomb is exploded near ground level, the dense atmosphere interacts with many of the subatomic particles being released. This normally takes place within a short distance, on the order of meters. This energy heats the air, promptly ionizing it to incandescence and causing a roughly spherical fireball to form within microseconds.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "988541",
"title": "1958 Tybee Island mid-air collision",
"section": "Section::::The bomb.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 1410,
"text": "Some sources describe the bomb as a functional nuclear weapon, but others describe it as disabled. If the bomb had a plutonium nuclear core installed, it was a fully functional weapon. If the bomb had a dummy core installed, it was incapable of producing a nuclear explosion but could still produce a conventional explosion. The 12-foot (4 m) long Mark 15 bomb weighs and bears the serial number 47782. It contains of conventional high explosives and highly enriched uranium. The Air Force maintains that the bomb's nuclear capsule, used to initiate the nuclear reaction, was removed before its flight aboard B-47. As noted in the Atomic Energy Commission \"Form AL-569 Temporary Custodian Receipt (for maneuvers)\", signed by the aircraft commander, the bomb contained a simulated 150-pound cap made of lead. However, according to 1966 Congressional testimony by then Assistant Secretary of Defense W.J. Howard, the Tybee Island bomb was a \"complete weapon, a bomb with a nuclear capsule,\" and one of two weapons lost by that time that contained a plutonium trigger. Nevertheless, a study of the Strategic Air Command documents indicates that in February 1958, Alert Force test flights (with the older Mark 15 payloads) were not authorized to fly with nuclear capsules on board. Such approval was pending deployment of safer \"sealed-pit nuclear capsule\" weapons, which did not begin deployment until June 1958.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "949651",
"title": "Criticality accident",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 316,
"text": "Though dangerous and frequently lethal to humans within the immediate area, the critical mass formed would not be capable of producing a massive nuclear explosion of the type that fission bombs are designed to produce. This is because all the design features needed to make a nuclear warhead cannot arise by chance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53683",
"title": "Nuclear fallout",
"section": "Section::::Factors affecting fallout.:Location.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 514,
"text": "There are two main considerations for the location of an explosion: height and surface composition. A nuclear weapon detonated in the air, called an air burst, produces less fallout than a comparable explosion near the ground. A nuclear explosion in which the fireball touches the ground pulls soil and other materials into the cloud and neutron activates it before it falls back to the ground. An air burst produces a relatively small amount of the highly radioactive heavy metal components of the device itself.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
lj4fp | If gravity is a pulling force, why is there no equivalent repulsive/anti gravity force? | [
{
"answer": "short answer to your question can be: because there is no matter with negative mass.\n\nall matter has positive energy (this statement is called \"weak energy condition\") and creates positive curvature of spacetime (positive and negative are subject to sign convention). effect of this \"positive\" curvature is, that when you move forward in time, it acts as attracting force. you can imagine it like two people starting at the equator and going toward pole - they come closer to each other just as if there was some force that pulls them together, but in fact, they are only changing one coordinate (in real case it would be time coordinate) ( < - this was an EDIT2).\n\nin theory, you can invent metric (metric describes the curvature of spacetime) that has negative curvature on some places in space and positive on other ones. those metrics can have really cool properties. some are described as \"wormholes\" some other as \"warp bubbles\", but the problem with all of them is, that they would require this matter with negative mass (also called exotic matter). we have no evidence of such a thing.\n\nEDIT1: also, there are some issues with mathematical structure of the equations that describe gravity (einstein equations)...\n\nEDIT3: google up \"energy condition\"",
"provenance": null
},
{
"answer": "I'm struggling to think of any forces that have opposites - certainly none of the 4 fundamental forces do: Electromagnetism is directional, but there's no opposite. Strong nuclear, and weak nuclear have no opposites.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "53037756",
"title": "Dipole repeller",
"section": "Section::::Controversy about the Dipole Repeller and its observed repulsive force.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 483,
"text": "This is because gravitation is an attractive force, but if there is an underdense region it apparently acts as a gravitational repeller, based on the concept that there may be less attraction in the direction of the underdensity, and the greater attraction due to the higher density in other directions acts to pull objects away from the underdensity; in other words, the apparent repulsion is not an active force, but due simply to the lack of a force counteracting the attraction.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "342127",
"title": "Anti-gravity",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 594,
"text": "Anti-gravity (also known as \"non-gravitational field\") is creating a place or object that is free from the force of gravity. It does not refer to the lack of weight under gravity experienced in free fall or orbit, or to balancing the force of gravity with some other force, such as electromagnetism or aerodynamic lift. Anti-gravity is a recurring concept in science fiction, particularly in the context of spacecraft propulsion. Examples are the gravity blocking substance \"Cavorite\" in H. G. Wells's \"The First Men in the Moon\" and the Spindizzy machines in James Blish's \"Cities in Flight\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "389836",
"title": "G-force",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 968,
"text": "Gravitation acting alone does not produce a g-force, even though g-forces are expressed in multiples of the free-fall acceleration of standard gravity. Thus, the standard gravitational force at the Earth's surface produces g-force only indirectly, as a result of resistance to it by mechanical forces. It is these mechanical forces that actually produce the g-force on a mass. For example, a force of 1 g on an object sitting on the Earth's surface is caused by the mechanical force exerted in the upward direction by the ground, keeping the object from going into free fall. The upward contact force from the ground ensures that an object at rest on the Earth's surface is accelerating relative to the free-fall condition. (Freefall is the path that the object would follow when falling freely toward the Earth's center). Stress inside the object is ensured from the fact that the ground contact forces are transmitted only from the point of contact with the ground.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "597844",
"title": "Levitation",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 427,
"text": "Levitation is accomplished by providing an upward force that counteracts the pull of gravity (in relation to gravity on earth), plus a smaller stabilizing force that pushes the object toward a home position whenever it is a small distance away from that home position. The force can be a fundamental force such as magnetic or electrostatic, or it can be a reactive force such as optical, buoyant, aerodynamic, or hydrodynamic.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2005044",
"title": "Negative Zone",
"section": "Section::::Unique features.:Life.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 585,
"text": "It is perhaps an issue of gravitational pull that is one of the biggest hindrances to life in the Negative Zone. While all objects of reasonably sized mass (planets, moons, asteroids, etc.) obviously have their own gravitational pull, it is weak enough to be overcome with minimal effort. Most heroes with flight capabilities can escape a planet's gravitational field with ease, as can any machine with the capacity for flight. Because of this lowered gravity, it is believed that vegetation has difficulty seeding properly, giving life a tenuous foothold at best on any given planet.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1411100",
"title": "Introduction to general relativity",
"section": "Section::::From special to general relativity.:Tidal effects.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 508,
"text": "The equivalence between gravitational and inertial effects does not constitute a complete theory of gravity. When it comes to explaining gravity near our own location on the Earth's surface, noting that our reference frame is not in free fall, so that fictitious forces are to be expected, provides a suitable explanation. But a freely falling reference frame on one side of the Earth cannot explain why the people on the opposite side of the Earth experience a gravitational pull in the opposite direction.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38579",
"title": "Gravity",
"section": "Section::::Specifics.:Earth's gravity.\n",
"start_paragraph_id": 57,
"start_character": 0,
"end_paragraph_id": 57,
"end_character": 591,
"text": "The force of gravity on Earth is the resultant (vector sum) of two forces: (a) The gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. The force of gravity is the weakest at the equator because of the centrifugal force caused by the Earth's rotation and because points on the equator are furthest from the center of the Earth. The force of gravity varies with latitude and increases from about 9.780 m/s at the Equator to about 9.832 m/s at the poles.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
8w53u2 | what is the difference between an originalist interpretation and a "living document" interpretation when it comes to the u.s. supreme court? | [
{
"answer": "The idea is a debate about whether the founders wrote the thing to be specific, rigid, and amendable only through the amendment process...\n\nor whether the founders wrote the thing with deliberately looser language to take shifting societal norms into account.\n\nFor example, the 8th amendment prohibits \"cruel and unusual\" punishments but neglects to define those terms. An originalist would argue that we need to research what \"cruel and unusual\" meant to the founders. A proponent of living document theory would argue that \"cruel and unusual\" is deliberately vague so that the boundaries of cruel and unusual can shift as society progresses.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "302645",
"title": "Originalism",
"section": "Section::::Strict constructionism.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 994,
"text": "Originalism is a theory of \"interpretation\", not \"construction\". However, this distinction between \"interpretation\" and \"construction\" is controversial and is rejected by many nonoriginalists as artificial. As Scalia said, \"the Constitution, or any text, should be interpreted [n]either strictly [n]or sloppily; it should be interpreted reasonably\"; once originalism has told a Judge what the provision of the Constitution means, they are bound by that meaning—however the business of Judging is not simply to know what the text means (interpretation), but to take the law's necessarily general provisions and apply them to the specifics of a given case or controversy (construction). In many cases, the meaning might be so specific that no discretion is permissible, but in many cases, it is still before the Judge to say what a reasonable interpretation might be. A judge could, therefore, be both an originalist \"and\" a strict constructionist—but he is not one by virtue of being the other.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "302645",
"title": "Originalism",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 722,
"text": "In the context of United States law, originalism is a concept regarding the interpretation of the Constitution that asserts that all statements in the constitution must be interpreted based on the original understanding of the authors or the people at the time it was ratified. This concept views the Constitution as stable from the time of enactment, and that the meaning of its contents can be changed only by the steps set out in Article Five. This notion stands in contrast to the concept of the Living Constitution, which asserts that the Constitution is intended to be interpreted based on the context of the current times, even if such interpretation is different from the original interpretations of the document.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33544108",
"title": "United States constitutional law",
"section": "Section::::Interpreting the Constitution and the authority of the Supreme Court.:Differing views on the role of the Court.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 638,
"text": "BULLET::::- The Late Associate Justice Antonin Scalia and current Associate Justice Clarence Thomas are known as originalists; originalism is a family of similar theories that hold that the Constitution has a fixed meaning from an authority contemporaneous with the ratification (although opinion as to what that authority \"is\" varies; see discussion at originalism), and that it should be construed in light of that authority. Unless there is a historic and/or extremely pressing reason to interpret the Constitution differently, originalists vote as they think the Constitution as it was written in the late 18th Century would dictate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23004",
"title": "Precedent",
"section": "Section::::Practical application.:Originalism.\n",
"start_paragraph_id": 166,
"start_character": 0,
"end_paragraph_id": 166,
"end_character": 692,
"text": "Originalism is an approach to interpretation of a legal text in which controlling weight is given to the intent of the original authors (at least the intent as inferred by a modern judge). In contrast, a non-originalist looks at other cues to meaning, including the current meaning of the words, the pattern and trend of other judicial decisions, changing context and improved scientific understanding, observation of practical outcomes and \"what works,\" contemporary standards of justice, and \"stare decisis\". Both are directed at \"interpreting\" the text, not changing it—interpretation is the process of resolving ambiguity and choosing from among possible meanings, not changing the text.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30870726",
"title": "Judicial interpretation",
"section": "Section::::Basis for judicial interpretation.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 826,
"text": "BULLET::::- Originalism involves judges trying to apply the \"original\" meanings of different constitutional provisions. To determine the original meaning, a constitutional provision is interpreted in its \"original\" context, i.e. the historical, literary, and political context of the framers. From that interpretation, the underlying principle is derived which is then applied to the contemporary situation. Former Supreme Court justice Antonin Scalia believed that the text of the constitution should mean the same thing today as it did when it had been written. A report in the \"Washington Post\" suggested that originalism was the \"view that the Constitution should be interpreted in accordance with its original meaning — that is, the meaning it had at the time of its enactment.\" \"Meaning\" based on \"original\" principles.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2094153",
"title": "Living Constitution",
"section": "Section::::Debate.:Arguments against.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 440,
"text": "This view does not take into account \"why\" the original constitution does not allow for judicial interpretation in any form. The Supreme Court's power for constitutional review, and by extension its interpretation, did not come about until \"Marbury v. Madison\" in 1803. The concept for a \"living constitution\" therefore relies on an argument regarding the writing of the constitution that had no validity when the constitution was written.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2298740",
"title": "Conservatism in the United States",
"section": "Section::::Other topics.:Courts.:Originalism.\n",
"start_paragraph_id": 102,
"start_character": 0,
"end_paragraph_id": 102,
"end_character": 645,
"text": "A more recent variant that emerged in the 1980s is \"originalism\", the assertion that the United States Constitution should be interpreted to the maximum extent possible in the light of what it meant when it was adopted. Originalism should not be confused with a similar conservative ideology, strict constructionism, which deals with the interpretation of the Constitution as written, but not necessarily within the context of the time when it was adopted. In modern times, the term originalism has been used by Supreme Court justice Antonin Scalia, former federal judge Robert Bork and some other conservative jurists to explain their beliefs.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1l1xli | Slavery in ancient Greece | [
{
"answer": "Slaves in Greece were not a rare thing to see. It is certain that rural slavery was very common in Athens. It is estimated that every citizen in Athens had at least one slave, so, to answer your question, high-class people were not the only ones to have slaves.\nA slave's principal use was usually for agricultural purposes, but if a wealthy owner had dozens of slaves, there would be a foreman who oversaw the responsibilities of the other slaves. I haven't heard that slaves were \"in the family\", but you were right to question whether they did no manual labor. ",
"provenance": null
},
{
"answer": "It's a simplification, and simplifications like this are only going to make sense with respect to some benchmark; perhaps that's the context of your friend's view. But without context, there isn't really much to support her.\n\nEstimates of the slave population in Classical-era Greek states are exactly that, estimates, but those estimates normally range between 60% and 80% of the total population. One census reported from the late 4th century BCE would put the figure at nearly 87%. Even if we're sceptical of that figure, it's still a *lot* of slaves.\n\nSome did serve functions as valets, child-minders, scribes, and so on. These ones certainly fit your friend's model. But you don't have to look far to find slaves in manual labour. There were also public slaves, responsible for things like cleaning up obstructions and large messes in the streets: so far, not too bad. But an awful lot of farmwork was done by slaves, and it's much harder to believe that they led a happy fulfilling life.\n\nAnd there were some really awful slave positions around: for example, in Athens the silver mines at Laureion were worked exclusively by slaves, precisely because conditions were so appalling that any worker would have a pretty short lifespan after going there. Tens of thousands of slaves worked the mines, because the mines were so lucrative for Athens, and because slave-owners could actually lease unwanted slaves to the mines for a steady income. In Sparta things were even worse in a way, though perhaps not as intensely awful as silver mining: every year the ephors would ritually declare war on their helots, there were occasional mass slaughters, and adolescents were trained to go stealing and killing among them.\n\nSlaves could also be recruited for warfare: both Athens and Sparta used slaves in this way (though their treatment of the slaves afterwards varied a lot: after the naval battle at Arginousai, Athens officially freed all the slaves who had fought in the battle; in Sparta, a group of troublesome helots who had served in battle were rounded up under the impression they were going to be freed, and then slaughtered).\n\nSlaves had no rights and could be tortured, deprived, and killed without recourse (the only limit was on doing these things to *someone else's* slave). When testifying on a legal matter, slaves' testimony was only valid if extracted under torture. So sure, *some* slaves had cushy positions. But it's certainly not a lot that I'd choose.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "6585135",
"title": "History of slavery",
"section": "Section::::Europe.:Classic era.:Ancient Greece.\n",
"start_paragraph_id": 166,
"start_character": 0,
"end_paragraph_id": 166,
"end_character": 664,
"text": "Records of slavery in Ancient Greece go as far back as Mycenaean Greece. The origins are not known, but it appears that slavery became an important part of the economy and society only after the establishment of cities. Slavery was common practice and an integral component of ancient Greece, as it was in other societies of the time, including ancient Israel. It is estimated that in Athens, the majority of citizens owned at least one slave. Most ancient writers considered slavery not only natural but necessary, but some isolated debate began to appear, notably in Socratic dialogues. The Stoics produced the first condemnation of slavery recorded in history.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47471180",
"title": "Slavery in ancient Greece",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 319,
"text": "Slavery was a common practice in ancient Greece, as in other societies of the time. Some Ancient Greek writers (including, most notably, Aristotle) considered slavery natural and even necessary. This paradigm was notably questioned in Socratic dialogues; the Stoics produced the first recorded condemnation of slavery.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "113147",
"title": "Barbarian",
"section": "Section::::In classical Greco-Roman contexts.:Historical developments.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 745,
"text": "Greek attitudes towards \"barbarians\" developed in parallel with the growth of chattel slavery - especially in Athens. Although the enslavement of Greeks for non-payment of debts continued in most Greek states, Athens banned this practice under Solon in the early 6th century BC. Under the Athenian democracy established ca. 508 BC, slavery came into use on a scale never before seen among the Greeks. Massive concentrations of slaves worked under especially brutal conditions in the silver mines at Laureion in south-eastern Attica after the discovery of a major vein of silver-bearing ore there in 483 BC, while the phenomenon of skilled slave craftsmen producing manufactured goods in small factories and workshops became increasingly common.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47471180",
"title": "Slavery in ancient Greece",
"section": "Section::::Views of Greek slavery.:Modern views.\n",
"start_paragraph_id": 92,
"start_character": 0,
"end_paragraph_id": 92,
"end_character": 208,
"text": "In 2011, Greek slavery remains the subject of historiographical debate, on two questions in particular: can it be said that ancient Greece was a \"slave society\", and did Greek slaves comprise a social class?\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10977426",
"title": "House slave",
"section": "Section::::In antiquity.:In Greece.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 258,
"text": "The study of slavery in Ancient Greece remains a complex subject, in part because of the many different levels of servility, from traditional chattel slave through various forms of serfdom, such as Helots, Penestai, and several other classes of non-citizen.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1965077",
"title": "Slavery in antiquity",
"section": "Section::::Ancient Greece.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 258,
"text": "The study of slavery in Ancient Greece remains a complex subject, in part because of the many different levels of servility, from traditional chattel slave through various forms of serfdom, such as Helots, Penestai, and several other classes of non-citizen.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27992",
"title": "Slavery",
"section": "Section::::History.:Early history.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 471,
"text": "Slavery was known in almost every ancient civilization and society including Sumer, Ancient Egypt, Ancient China, the Akkadian Empire, Assyria, Ancient India, Ancient Greece, Carolingian Europe, the Roman Empire, the Hebrew kingdoms of the ancient Levant, and the pre-Columbian civilizations of the Americas. Such institutions included debt-slavery, punishment for crime, the enslavement of prisoners of war, child abandonment, and the birth of slave children to slaves.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3tfriq | The "Duel of Champions": how common was it? What was its purpose? | [
{
"answer": "Been asked before a few times. The term for this is [Single Combat](_URL_0_).\n\n_URL_1_\n\n_URL_2_",
"provenance": null
},
{
"answer": "I've talked about this wrt feudal Japan [here](_URL_0_). Also, a more common thing seen pre-Sengoku era was still duels between opposing soldiers, though not in the sense of 'champions' but rather pairing off two sides during the battle.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "29387758",
"title": "Duel of Champions",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 277,
"text": "Orazi e Curiazi (English title: \"Duel of Champions\") is a 1961 film about the Roman legend of the Horatii, triplet brothers from Rome who fought a duel against the Curiatii, triplet brothers from Alba Longa in order to determine the outcome of a war between their two nations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "153833",
"title": "Duel",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 342,
"text": "A duel is an arranged engagement in combat between two people, with matched weapons, in accordance with agreed-upon rules. Duels in this form were chiefly practiced in early modern Europe with precedents in the medieval code of chivalry, and continued into the modern period (19th to early 20th centuries) especially among military officers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14807583",
"title": "Mundo Estranho",
"section": "Section::::Sections.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 447,
"text": "BULLET::::- Duel: it is a competition between any two subjects. The magazine's team establishes five topics for each duel and compares the weak and the strong points, later deciding who wins, with the possibility of a tie. Some fights featuring famous characters or franchises were partially voted by the readers, like \"Harry Potter X The Lord of the Rings\", \"Gandalf X Professor Dumbledore\", \"Bill Gates X Carlos Slim\" and \"X-Men X The Avengers\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "153833",
"title": "Duel",
"section": "Section::::History.:Modern history.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 596,
"text": "Dueling became popular in the United States – the former United States Secretary of the Treasury Alexander Hamilton was killed in a duel against the sitting Vice President Aaron Burr in 1804. Between 1798 and the Civil War, the US Navy lost two-thirds as many officers to dueling as it did in combat at sea, including naval hero Stephen Decatur. Many of those killed or wounded were midshipmen or junior officers. Despite prominent deaths, dueling persisted because of contemporary ideals of chivalry, particularly in the South, and because of the threat of ridicule if a challenge was rejected.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "153833",
"title": "Duel",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 451,
"text": "The duel was based on a code of honor. Duels were fought not so much to kill the opponent as to gain \"satisfaction\", that is, to restore one's honor by demonstrating a willingness to risk one's life for it, and as such the tradition of dueling was originally reserved for the male members of nobility; however, in the modern era it extended to those of the upper classes generally. On occasion, duels with pistols or swords were fought between women.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31423925",
"title": "Gouging (fighting style)",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 363,
"text": "Though it was never an organized sport, participants would sometimes schedule their fights (as one could schedule a duel), and victors were treated as local heroes. Gouging was essentially a type of duel to defend one's honor that was most common among the poor, and was especially common in southern states in the late eighteenth and early nineteenth centuries.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "90398",
"title": "Code duello",
"section": "Section::::Southern US code of honor.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 650,
"text": "Southern duels persisted through the 1840s even after duelling in the United States was outlawed. Commonly held on sand bars in rivers where jurisdiction was unclear, they were rarely prosecuted. States such as South Carolina, Tennessee, Texas, Louisiana and others had their own duelling customs and traditions. Most duels occurred between the upper classes but teenage duels and those in the middle-classes also existed. Dueling was not at all undemocratic and it enabled lesser men to participate without any prejudice. There was also the promise of esteem and status and it also served as a form of scapegoating for unresolved personal problems.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1pq9hm | What mechanisms are behind stereotypical accents in people with English as a second language? | [
{
"answer": "**First the basics:**\n\nDifferent languages have different phonetic systems (where phonetics refers to the individual consonant-vowel combinations that form the phonemes which establish contrast for word differentiation. You know \"bat\" and \"pat\" are different words because [b] and [p] are \"contrasting\").\n\nThe primary factor that differentiates languages is the vowel inventory of that language (I say vowels as primary because they are \"sonorant\" or sound creating whereas consonants like \"stops\", \"labials\" and \"fricatives\" are the continuation of sound or the stoppage of sound). English has the basic vowels of /a/, /e/, /i/, /o/, and /u/ (the orthography I'm using to describe the vowels is not standard IPA. I decided not to spend the extra time typing it all out). You then have diphthongs, which are the vowel sounds created by adjacent vowels. You then have, to a lesser degree, your allophonic diphthongs and vowels based on the surrounding consonants (a /a/ sound is going to sound slightly different if it's next to a [b] as compared to an [sp]).\n\nThe native speaker perceives and produces language given the above criteria.\n\n**Now to apply this to a second language learner:**\n\nThe native German speaker who is learning English has DIFFERENT vowels than you as a native English speaker (different consonants as well, but the vowels are easier to recognize). As a native English speaker, you have your inventory of English vowels. When you listen to the native German speaker who has acquired English as an adult (as opposed to the bilingual, young child between 4 to 7 who has the opportunity and the language acquisition mechanisms to acquire \"native fluency\"), you are listening to the cross-influence of the native German speaker's German vowels and his attempt, successfully or otherwise, to produce English vowels. The \"accent\" you hear is thus your ability to pick up the difference in vowel quality.\n\n**What about the consonants?**\n\nConsonants likewise have an impact on how the vowel of the speaker is formed. Vowels, as mentioned before, are sonorant and the quality of the vowel is thus determined by multiple different factors: elevation of the tongue in the mouth, the \"frontness\" or \"backness\" of the tongue (is the tongue closer to the teeth, is it further away from the teeth?), and the shape of the lips. The mouth is thus an acoustic chamber that changes the sound of the vowel based on its shape. Just like for vowels, different languages have different consonants. While the different consonant inventories are obviously a part of the equation, they would have, comparably, lesser impact, as most consonants in a language \"stop\" sound as opposed to create sound.\n\nAn oft-used example of how consonants impact the accent is when you compare most Asian languages to English. Japanese and Chinese are notorious for not having the [r] that's native to English. They instead have what is called a \"flap/tap\" (the same sound you hear when someone says the word /button/, which only occurs word-medial, or in the middle of the word). The Japanese or Chinese speaker in most instances does not have the muscle control and dexterity to move the tongue into such a position as to produce the [r], so they instead produce an approximation of the consonant [ɾ]. They approximate the sound by substituting a sound that is produced most similar in tongue positioning. The end result is \"rat\" sounds like \"lat\".",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3670297",
"title": "Phonological history of English consonants",
"section": "Section::::Fricatives and affricates.:Dental fricatives.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 391,
"text": "Certain English accents feature variant pronunciations of these sounds. These include fronting, where they merge with /f/ and /v/ (found in Cockney and some other dialects); stopping, where they approach /t/ and /d/ (as in some Irish speech); alveolarisation, where they become (in some African varieties); and debuccalisation, where becomes before a vowel (found in some Scottish English).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "993304",
"title": "Shtokavian",
"section": "Section::::Standard language.\n",
"start_paragraph_id": 137,
"start_character": 0,
"end_paragraph_id": 137,
"end_character": 469,
"text": "Also, the contemporary situation is unstable with regard to the accentuation, because phoneticians have observed that the 4-accents speech has, in all likelihood, shown to be increasingly unstable, which resulted in proposals that a 3-accents norm be prescribed. This is particularly true for Croatian, where, contrary to all expectations, the influence of Chakavian and Kajkavian dialects on the standard language has been waxing, not waning, in the past 50–70 years.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41805462",
"title": "Basis of articulation",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 1014,
"text": "Different accents within a given language may have their own characteristic basis of articulation, resulting in one accent being perceived as, e.g., more 'nasal', 'velarized' or 'guttural' than another. According to Cruttenden, \"The articulatory setting of a language or dialect may differ from GB [General British]. So some languages like Spanish may have a tendency to hold the tongue more forward in the mouth, while others like Russian may have a tendency to hold it further back in the mouth. Nasalization may be characteristic of many speakers of American English, while denasal voice ... is frequently said to occur in Liverpool\". A more detailed exposition can be read in Gili Gaya (1956). Non-native speakers typically find the basis of articulation one of the greatest challenges in acquiring a foreign language's pronunciation. Speaking with the basis of articulation of their own native language results in a foreign accent, even if the individual sounds of the target language are produced correctly.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18089868",
"title": "Linguistic discrimination",
"section": "Section::::Linguistic prejudice.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 1346,
"text": "It can be noted that use of language such as certain accents may result in an individual experiencing prejudice. For example, some accents hold more prestige than others depending on the cultural context. However, with so many dialects, it can be difficult to determine which is the most preferable. The best answer linguists can give, such as the authors of \"Do You Speak American?\", is that it depends on the location and the individual. Research has determined however that some sounds in languages may be determined to sound less pleasant naturally. Also, certain accents tend to carry more prestige in some societies over other accents. For example, in the United States speaking General American (i.e., an absence of a regional, ethnic, or working class accent) is widely preferred in many contexts such as television journalism. Also, in the United Kingdom, the Received Pronunciation is associated with being of higher class and thus more likeable. In addition to prestige, research has shown that certain accents may also be associated with less intelligence, and having poorer social skills. An example can be seen in the difference between Southerners and Northerners in the United States, where people from the North are typically perceived as being less likable in character, and Southerners are perceived as being less intelligent.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18974628",
"title": "English language in England",
"section": "Section::::General features.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 872,
"text": "The accent of English English best known outside the United Kingdom is that of Received Pronunciation (RP), though it is used by only a small minority of speakers in England. Until recently, RP was widely considered to be more typical of educated speakers than other accents. It was referred to by some as the Queen's (or King's) English, an 'Oxford accent' or even 'BBC English' (because for many years of broadcasting it was rare to hear any other accent on the BBC). These terms, however, do not refer only to accent features but also to grammar and vocabulary, as explained in Received Pronunciation. Since the 1960s regional accents have become increasingly accepted in mainstream media, and are frequently heard on radio and television. The Oxford English Dictionary gives RP pronunciations for each word, as do most other English dictionaries published in Britain.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "256791",
"title": "Accent (sociolinguistics)",
"section": "Section::::Social factors.:Prestige.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 1129,
"text": "Certain accents are perceived to carry more prestige in a society than other accents. This is often due to their association with the elite part of society. For example, in the United Kingdom, Received Pronunciation of the English language is associated with the traditional upper class. The same can be said about the predominance of Southeastern Brazilian accents in the case of the Brazilian variant of the Portuguese language, especially considering the disparity of prestige between most \"caipira\"-influenced speech, associated with rural environment and lack of formal education, together with the Portuguese spoken in some other communities of lower socioeconomic strata such as \"favela\" dwellers, and other sociocultural variants such as middle and upper class \"paulistano\" (dialect spoken from Greater São Paulo to the East) and \"fluminense\" (dialect spoken in the state of Rio de Janeiro) to the other side, inside Southeastern Brazil itself. However, in linguistics, there is no differentiation among accents in regard to their prestige, aesthetics, or correctness. All languages and accents are linguistically equal.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40912579",
"title": "Accent perception",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 1385,
"text": "Accents are the distinctive variations in the pronunciation of a language. They can be native or foreign, local or national and can provide information about a person’s geographical locality, socio-economic status and ethnicity. The perception of accents is normal within any given group of language users and involves the categorisation of speakers into social groups and entails judgments about the accented speaker, including their status and personality. Accents can significantly alter the perception of an individual or an entire group, which is an important fact considering that the frequency that people with different accents are encountering one another is increasing, partially due to inexpensive international travel and social media. As well as affecting judgments, accents also affect key cognitive processes (e.g., memory) that are involved in a myriad of daily activities. The development of accent perception occurs in early childhood. Consequently, from a young age accents influence our perception of other people, decisions we make about when and how to interact with others, and, in reciprocal fashion, how other people perceive us. A better understanding of the role accents play in our (often inaccurate) appraisal of individuals and groups, may facilitate greater acceptance of people different from ourselves and lessen discriminatory attitudes and behavior.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
937w70 | How are ions made artificially? | [
{
"answer": "You usually just take regular atoms and rip off their electrons somehow (heat them up, subject them to strong electric fields, shoot them through stripper foils, or some combination of those).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "18963787",
"title": "Ion",
"section": "Section::::Related technology.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 267,
"text": "Ions can be non-chemically prepared using various ion sources, usually involving high voltage or temperature. These are used in a multitude of devices such as mass spectrometers, optical emission spectrometers, particle accelerators, ion implanters, and ion engines.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "326386",
"title": "Ion source",
"section": "Section::::Gas-discharge ion sources.:Inductively-coupled plasma.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 225,
"text": "Ions can be created in an inductively coupled plasma, which is a plasma source in which the energy is supplied by electrical currents which are produced by electromagnetic induction, that is, by time-varying magnetic fields.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2321375",
"title": "Elastic recoil detection",
"section": "Section::::Applications.\n",
"start_paragraph_id": 190,
"start_character": 0,
"end_paragraph_id": 190,
"end_character": 1293,
"text": "Ion implantation is one of the methods used to transform physical properties of polymers and to improve their electrical, optical, and mechanical performance. Ion implantation is a technique by which the ions of a material are accelerated in an electrical field and impacted into a materials such that ion are inserted into this material. This technique has many important uses. One such example is the introduction of silver plasma into the biomedical titanium. This is important because Titanium-based implantable devices such as joint prostheses, fracture fixation devices and dental implants, are important to human lives and improvement of the life quality of patients. However, biomedical titanium is lack of Osseo integration and antibacterium ability. Plasma immersion ion implantation (PIII) is a physical technique which can enhance the multi-functionality, mechanical and chemical properties as well as biological activities of artificial implants and biomedical devices. ERDA can be used to study this phenomenon very effectively. Moreover, many scientists have measured the evolution of electrical conductivity, optical transparency, corrosion resistance, and wear resistance of different polymers after irradiation by electron or low-energy light ions or high-energy heavy ions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2920436",
"title": "Electrotyping",
"section": "Section::::Electrotyping in art.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 575,
"text": "Electrotyping has been used for the production of metal sculptures, where it is an alternative to the casting of molten metal. These sculptures are sometimes called \"galvanoplastic bronzes\", although the actual metal is usually copper. It was possible to apply essentially any patina to these sculptures; gilding was also readily accomplished in the same facilities as electrotyping by using electroplating. Electrotyping has been used to reproduce valuable objects such as ancient coins, and in some cases electrotype copies have proven more durable than fragile originals.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "326386",
"title": "Ion source",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 211,
"text": "An ion source is a device that creates atomic and molecular ions. Ion sources are used to form ions for mass spectrometers, optical emission spectrometers, particle accelerators, ion implanters and ion engines.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15539",
"title": "Ion implantation",
"section": "Section::::Other applications.:Ion implantation-induced nanoparticle formation.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 956,
"text": "Ion implantation may be used to induce nano-dimensional particles in oxides such as sapphire and silica. The particles may be formed as a result of precipitation of the ion implanted species, they may be formed as a result of the production of an mixed oxide species that contains both the ion-implanted element and the oxide substrate, and they may be formed as a result of a reduction of the substrate, first reported by Hunt and Hampikian. Typical ion beam energies used to produce nanoparticles range from 50 to 150 keV, with ion fluences that range from 10 to 10 ions/cm. The table below summarizes some of the work that has been done in this field for a sapphire substrate. A wide variety of nanoparticles can be formed, with size ranges from 1 nm on up to 20 nm and with compositions that can contain the implanted species, combinations of the implanted ion and substrate, or that are comprised solely from the cation associated with the substrate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "386120",
"title": "Mineral water",
"section": "Section::::Imitation mineral water.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 545,
"text": "Artificial or imitation mineral water cannot be made simply by dissolving all the mineral components in water to replicate the analysis of a natural water. If all the components were put together, many would be found to be insoluble, and others would form new chemical combinations, so that the result would differ widely from the mineral water imitated. The order in which salts are dissolved is important; dissolving some salts separately and combining the solutions can produce results impossible to obtain by dissolving everything together.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4qjc30 | why can a laser be seen from miles away but a regular flashlight has such a limited range before the light fades? | [
{
"answer": "A laser tends to be very well focused, which means that its energy doesn't spread out that much as it travels. A flashlight, on the other hand, isn't focused that well, which means that its energy spreads out very quickly as it travels, so gets dimmer much faster than a laser does.",
"provenance": null
},
{
"answer": "Lasers emit a continuous beam of powerful light (which is why you should never stare into one). The particles that make up the light are tightly focused and less likely to disperse. Flashlights, however, shoot light in a cone-shaped beam, which spreads out and eventually becomes invisible to the naked eye. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "18145",
"title": "List of laser applications",
"section": "Section::::Industrial and commercial.\n",
"start_paragraph_id": 125,
"start_character": 0,
"end_paragraph_id": 125,
"end_character": 349,
"text": "BULLET::::- Diode lasers are used as a lightswitch in industry, with a laser beam and a receiver which will switch on or off when the beam is interrupted, and because a laser can keep the light intensity over larger distances than a normal light, and is more precise than a normal light it can be used for product detection in automated production.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17556",
"title": "Laser",
"section": "Section::::Safety.\n",
"start_paragraph_id": 152,
"start_character": 0,
"end_paragraph_id": 152,
"end_character": 681,
"text": "Even the first laser was recognized as being potentially dangerous. Theodore Maiman characterized the first laser as having a power of one \"Gillette\" as it could burn through one Gillette razor blade. Today, it is accepted that even low-power lasers with only a few milliwatts of output power can be hazardous to human eyesight when the beam hits the eye directly or after reflection from a shiny surface. At wavelengths which the cornea and the lens can focus well, the coherence and low divergence of laser light means that it can be focused by the eye into an extremely small spot on the retina, resulting in localized burning and permanent damage in seconds or even less time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2077504",
"title": "Electrolaser",
"section": "Section::::Examples of electrolasers.:Picatinny Arsenal.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 306,
"text": "Scientists and engineers from Picatinny Arsenal have demonstrated that an electric discharge can go through a laser beam. The laser beam is self-focusing due to the high laser intensity of 50 gigawatts, which changes the speed of light in air. The laser was reportedly successfully tested in January 2012.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1461372",
"title": "Laser rangefinder",
"section": "Section::::Range and range error.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 436,
"text": "Some of the laser light might reflect off leaves or branches which are closer than the object, giving an early return and a reading which is too low. Alternatively, over distances longer than 1200 ft (365 m), the target, if in proximity to the earth, may simply vanish into a mirage, caused by temperature gradients in the air in proximity to the heated surface bending the laser light. All these effects have to be taken into account.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6919019",
"title": "Lasers and aviation safety",
"section": "Section::::Example laser safety calculations.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 357,
"text": "To give another example, of a more powerful laser—the type that might be used in an outdoor laser show: a 6-watt green (532 nm) laser with a 1.1 milliradian beam divergence is an eye hazard to about , can cause flash blindness to about 8,200 feet (1.5 mi/2.5 km), causes veiling glare to about 36,800 feet (), and is a distraction to about 368,000 feet ().\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1798343",
"title": "Xenon arc lamp",
"section": "Section::::Light generation mechanism.:Xenon-mercury.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 587,
"text": "The very small size of the arc makes it possible to focus the light from the lamp with moderate precision. For this reason, xenon arc lamps of smaller sizes, down to 10 watts, are used in optics and in precision illumination for microscopes and other instruments, although in modern times they are being displaced by single mode laser diodes and white light supercontinuum lasers which can produce a truly diffraction-limited spot. Larger lamps are employed in searchlights where narrow beams of light are generated, or in film production lighting where daylight simulation is required.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1442115",
"title": "Laser pointer",
"section": "Section::::Hazards.:Eye injury.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 356,
"text": "Studies have found that even low-power laser beams of not more than 5 mW can cause permanent retinal damage if gazed at for several seconds; however, the eye's blink reflex makes this highly unlikely. Such laser pointers have reportedly caused afterimages, flash blindness and glare, but not permanent damage, and are generally safe when used as intended.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
231b89 | why can most people jump higher off of one leg, when clearly there is more power in two legs? | [
{
"answer": "momentum, jumping with two legs slows you down to an extent",
"provenance": null
},
{
"answer": "because you're using your other leg for momentum.",
"provenance": null
},
{
"answer": "Well, it's not all about raw power. The problem isn't being able to move upwards, you can climb stairs a lot higher than you can jump. The problem is accelerating quickly.\n\nLook at it this way; stand perfectly still with your hands at your sides and jump. \n\nYou probably didn't get very far. This is because when you jump off one leg, neither your arms or your extra leg is sitting as dead weight. Your body spends a great deal of energy to thrust them upward just before you leave the ground. There's a lot of weight in a leg, so the inertia from that plus your arms all being thrust upwards helps to accelerate the actual dead weight (the rest of the body). \n\nTake for instance [this tornado kick](_URL_0_ ). The person in the gif appears to exert very little force on the ground as they lift off. This is because they slowly build up momentum leading up to the jump (by spinning) then angle that energy upwards to carry them off the mat.\n\nBasically the idea is rather than pushing yourself up with two legs, you're pulling yourself up with the momentum you built up in your swinging arms and legs.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "23515390",
"title": "Nicolas Pueta",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 328,
"text": "\"\"I never assumed my handicap and if anything, as a kid not having a leg meant that my arms were much stronger,\"\" Pueta added. His right leg is stronger than a tree and he jumps all over the field–like a kangaroo–and will tackle everything that comes his way. His line-out jumping is also an asset to whatever team he plays in.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13791",
"title": "High jump",
"section": "Section::::Training.:Weight Lifting.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 421,
"text": "It is crucial for high jumpers to have strong lower bodies and cores, as the bar progressively gets higher, the strength of an athlete's legs (along with speed and technique) will help propel them over the bar. Squats, deadlifts, and core exercises will help a high jumper achieve these goals. It is important, however, for a high jumper to keep a slim figure as any unnecessary weight makes it difficult to jump higher.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10448601",
"title": "Stance (martial arts)",
"section": "Section::::High or low.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 262,
"text": "This refers to the bend in the knees and height relative to a normal standing position. Low stances are very powerful and assist delivery of power through the body to either the arms or the legs. High stances are more mobile and allow one to reposition rapidly.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1111581",
"title": "Reaction (physics)",
"section": "Section::::Examples.:Interaction with ground.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 359,
"text": "When someone wants to jump, he or she exerts additional downward force on the ground ('action'). Simultaneously, the ground exerts upward force on the person ('reaction'). If this upward force is greater than the person's weight, this will result in upward acceleration. When these forces are perpendicular to the ground, they are also called a normal force.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4987177",
"title": "Vertical jump",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 253,
"text": "A vertical jump or vertical leap is the act of raising one's center of mass higher in the vertical plane solely with the use of one's own muscles; it is a measure of how high an individual or athlete can elevate off the ground (jump) from a standstill.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43609",
"title": "Jumping",
"section": "Section::::Anatomy.:Limb morphology.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 454,
"text": "Long legs increase the time and distance over which a jumping animal can push against the substrate, thus allowing more power and faster, farther jumps. Large leg muscles can generate greater force, resulting in improved jumping performance. In addition to elongated leg elements, many jumping animals have modified foot and ankle bones that are elongated and possess additional joints, effectively adding more segments to the limb and even more length.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4987177",
"title": "Vertical jump",
"section": "Section::::Maximization.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 1376,
"text": "An important component of maximizing height in a vertical jump is attributed to the use of counter-movements of the legs and arm swings prior to take off, as both of these actions have been shown to significantly increase the body's center of mass rise. The counter-movement of the legs, a quick bend of the knees which lowers the center of mass prior to springing upwards, has been shown to improve jump height by 12% compared to jumping without the counter-movement. This is attributed to the stretch shortening cycle of the leg muscles enabling the muscles to create more contractile energy. Furthermore, jump height can be increased another 10% by executing arm swings during the take off phase of the jump compared to if no arm swings are utilized. This involves lowering the arms distally and posteriorly during the leg counter-movements, and powerfully thrusting the arms up and over the head as the leg extension phase begins. As the arms complete the swinging movement they pull up on the lower body causing the lower musculature to contract more rapidly, hence aiding in greater jump height. Despite these increases due to technical adjustments, it appears as if optimizing both the force producing and elastic properties of the musculotendinous system in the lower limbs is largely determined by genetics and partially mutable through resistance exercise training.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1nkuc4 | How do flu shots work? | [
{
"answer": " > How does one shot protect you from all variations of flu?\n > Do they need to be topped up, as newer strains come into existence?\n\nOne shot contains vaccines for three different strains of flu. The CDC spends a LOT of time and effort every year trying to predict which three strains are most likely to be a significant health threat that year, and do so with enough lead time to get the vaccines produced and distributed.\n",
"provenance": null
},
{
"answer": "The flu vaccine contains pieces of the outer proteins of different influenza viruses. When they are injected, cells in your body eat these proteins, digest them, then present little fragments of them to T and B cells. T and B cells that have a receptor that recognizes the flu proteins then multiply. The flu specific T and B cells then develop memory of the flu proteins. If you become infected with influenza, the T and B cells that were primed by the vaccine begin to combat the infection much earlier than would happen if you had to develop these cells at the time of your infection. It does not prevent infection in humans, but it can shorten the time you are sick and the severity of the infection.\n\nSource: I work on flu in an immunology department.",
"provenance": null
},
{
"answer": "My grad school work was in the lab that produced the seed strain for the Influenza B part of the vaccine, and my thesis was on adapting the reassortment process used to produce high yield strains of A to the B virus.\n\nThe annual vaccine is trivalent, containing one strain of B and two of A. Every spring, there is a meeting, where the committee looks at what's circulating globally, and make an educated guess as to what might hit in the US next winter.\n\nThen, they collect samples of those strains, send them to labs, like the one I worked in, and we would try to adapt them to grow well in eggs. Once we were done, we would send vials of frozen seed to the vaccine manufacturers for scale up and production.\n\nOften, there isn't much change between one year and the next, and the vaccine will give protection over multiple years, but eventually, it will mutate enough to be not neutralized by antibodies to the old strains. ",
"provenance": null
},
{
"answer": "Very simplified:\n\nInfluenza (disease) is caused by certain influenza viruses of the orthomyxoviridae family, and they are categorized into three categories or \"species\": influenzavirus A-C. \n\nThere are however different subtypes of each virus species. \n\n\"Flu shots\" often includes pieces of viruses or killed/weakened viruses and are aimed to trigger an immune response without causing sickness.\n\nThe immune response, if successful, will generate antibodies against the foreign objects injected to the body which will render them harmless. The most important cells for prolonged immunity are the white blood cells called \"B-cells\" which works as a memory to almost instantly react and prompt the production of antibodies if the body is exposed to the same virus again. \n\nSome flu shots include pieces from different influenzavirus subtypes and thus grant \"immunity\" to those. When a new influenza virus emerge it's of great importance to quickly find and extract pieces of this virus and check if it could potentially stimulate the immune-system to create antibodies that can \"stop\" the virus.\n\nSome viruses, however, are very hard to find vaccines against due to the nature of the virus (HIV for example).\n\nSource: Pharmacology student, and some Wikipedia (to refresh memory) ",
"provenance": null
},
{
"answer": "Most flu comes from 2 sources (I live in the UK but it should be similar in the states) birds which migrate and pigs. \n\nThey basically take samples from these 2 sources and try and look at which strains of flu are likely going to happen that year. Bearing in mind they have to manufacture millions of flu vaccines they need probably at least 6 months+ to do this.\n\nEvery year you have a flu jab but its to different strains.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "6653978",
"title": "Flu-flu arrow",
"section": "Section::::Uses.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 251,
"text": "Flu-flu arrows are often used for children's archery, and can be used to play flu-flu golf. Similar to Frisbee Golf, the player must go to where the arrow landed, pick it up, shoot it again, and repeat this process until he reaches a specified place.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3344863",
"title": "Childhood immunizations in the United States",
"section": "Section::::Influenza.:Vaccine.\n",
"start_paragraph_id": 220,
"start_character": 0,
"end_paragraph_id": 220,
"end_character": 646,
"text": "The influenza vaccine comes in two forms, the inactivated form which is what is typically thought of as the \"flu shot\", and a live but attenuated (weakened) form that is sprayed into the nostrils. it is recommended to get the flu shot each year since it is remade each year to protect against the viruses that are most likely to cause disease that year. Unfortunately there are a vast array of strains of influenza, so a single vaccine can not prevent all of them. The shot prevents 3 or 4 different influenza viruses and it takes about 2 weeks after the injection for protection to develop. This protection lasts from several months to a year. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6653978",
"title": "Flu-flu arrow",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 340,
"text": "A flu-flu arrow is a type of arrow specifically designed to travel a short distance. Such arrows are particularly useful when shooting at aerial targets or for certain types of recreational archery where the arrow must not travel too far. One of the main uses of these arrows is that they do not get lost as easily if they miss the target.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6653978",
"title": "Flu-flu arrow",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 479,
"text": "A flu-flu is a design of fletching, normally made by using long sections of feathers; in most cases six or more sections are used, rather than the traditional three. Alternatively, two long feathers can be spiraled around the end of the arrow shaft. In either case, the excessive fletching serves to generate more drag and slow the arrow down rapidly after a short distance (about 30 m). Recreational flu-flus usually have rubber points to add weight and keep the flight slower.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1045705",
"title": "Influenza vaccine",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 783,
"text": "Influenza vaccines, also known as flu shots or flu jabs, are vaccines that protect against infection by influenza viruses. A new version of the vaccine is developed twice a year, as the influenza virus rapidly changes. While their effectiveness varies from year to year, most provide modest to high protection against influenza. The United States Centers for Disease Control and Prevention (CDC) estimates that vaccination against influenza reduces sickness, medical visits, hospitalizations, and deaths. When an immunized worker does catch the flu, they are on average back at work a half day sooner. Vaccine effectiveness in those under two years old and over 65 years old remains unknown due to the low quality of the research. Vaccinating children may protect those around them.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51513",
"title": "Arrow",
"section": "Section::::Size.:Fletchings.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 388,
"text": "A flu-flu is a form of fletching, normally made by using long sections of full length feathers taken from a turkey, in most cases six or more sections are used rather than the traditional three. Alternatively two long feathers can be spiraled around the end of the arrow shaft. The extra fletching generates more drag and slows the arrow down rapidly after a short distance, about or so.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1211321",
"title": "Flume",
"section": "Section::::Types of flumes.:Flow measurement flume.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 562,
"text": "Some varieties of flumes are used in measuring water flow of a larger channel. When used to measure the flow of water in open channels, a flume is defined as a specially shaped, fixed hydraulic structure that under free-flow conditions forces flow to accelerate in such a manner that the flow rate through the flume can be characterized by a level-to-flow relationship as applied to a single head (level) measurement within the flume. Acceleration is accomplished through a convergence of the sidewalls, a change in floor elevation, or a combination of the two.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4b19e5 | I found this old helmet in an antique store, and I was wondering where it is from. | [
{
"answer": "Swedish M26 Army Helmet seems to be the one, \n_URL_0_\n\nHeres some info on it. \n\n_URL_1_",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "41162464",
"title": "Canterbury helmet",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 481,
"text": "The Canterbury Helmet is an Iron Age helmet found in a field near Canterbury, Kent, England, in December 2012. Made of bronze, it is one of only a few helmets dating from the Iron Age to ever have been found in Britain. The helmet currently resides in the British Museum, and is undergoing conservation work. It was found by an anonymous metal detectorist, who found it together with an iron brooch and a pin, and it is thought to have contained a bag with cremated human remains.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56153527",
"title": "Witcham Gravel helmet",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 583,
"text": "The helmet was discovered during peat digging in the parish of Witcham Gravel, Cambridgeshire, perhaps during the 1870s. It was said to have been found \"at a depth of about four feet\", although the exact findspot within Witcham Gravel is unknown; at the time, the parish comprised about 389 acres. The helmet was first published in 1877, when, owned by Thomas Maylin Vipan, it was exhibited to the Society of Antiquaries of London. When Vipan died in 1891, the British Museum purchased it from his estate. It remains in the museum's collection, and as of 2019 is on view in Room 49.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33270284",
"title": "Coventry Sallet",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 394,
"text": "The helmet was made around 1460, during the period of English civil conflict known as the Wars of the Roses, and the armourer's marks suggest that it was made by an artisan originating from Italy. During the 19th century it was used in Coventry’s Godiva Procession. For a period it was kept on display at St Mary's Hall, Coventry, and is now shown at the city's Herbert Art Gallery and Museum.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22706403",
"title": "Sutton Hoo helmet",
"section": "Section::::Discovery.\n",
"start_paragraph_id": 86,
"start_character": 0,
"end_paragraph_id": 86,
"end_character": 898,
"text": "Overlooked at first, the helmet quickly gained notice. Even before all the fragments had been excavated, the \"Daily Mail\" spoke of \"a gold helmet encrusted with precious stones.\" A few days later it would more accurately describe the helmet as having \"elaborate interlaced ornaments in silver and gold leaf.\" Despite scant time to examine the fragments, they were termed \"elaborate\" and \"magnificent\"; \"crushed and rotted\" and \"sadly broken\" such that it \"may never make such an imposing exhibit as it ought to do,\" it was nonetheless thought the helmet \"may be one of the most exciting finds.\" The stag found in the burial—later placed atop the sceptre—was even thought at first to adorn the crest of the helmet, in parallel to the boar-crested Benty Grange helmet. This theory would gain no traction, however, and the helmet would have to wait out World War II before reconstruction could begin.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28849380",
"title": "Meyrick Helmet",
"section": "Section::::Discovery.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 963,
"text": "The provenance of the helmet is unknown, but on stylistic grounds it is thought likely that it comes from the north of England, in the area of Britain controlled by the Brigantes tribe. The helmet is first recorded as part of the collection of arms and armour accumulated by Sir Samuel Rush Meyrick (1783–1848), and so must have been discovered some time before 1848. It is possible that the helmet came from the Stanwick Hoard of about 140 bronze objects that was found some time between 1843 and 1845 near Stanwick Camp in North Yorkshire, which may have been the \"oppidum\" of the Brigantes. After Meyrick's death the helmet and other items of Iron Age armour, such as the Witham Shield, were left to his cousin, Lt. Colonel Augustus Meyrick, who disposed of them between 1869 and 1872. The helmet was purchased by Augustus Franks, an independently wealthy antiquarian who worked for the British Museum. Franks donated the helmet to the British Museum in 1872.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28837105",
"title": "Waterloo Helmet",
"section": "Section::::Discovery.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 286,
"text": "The helmet was dredged from the bed of the River Thames close to Waterloo Bridge in 1868, and in March of the same year it was given on loan to the British Museum by Thames Conservancy. In 1988 its successor body, the Port of London Authority, donated the helmet to the British Museum.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34345985",
"title": "Hallaton Helmet",
"section": "Section::::Discovery and restoration.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 754,
"text": "The helmet was discovered by 71-year-old Ken Wallace, a retired teacher and amateur archaeologist. He and other members of the Hallaton Fieldwork Group had found fragments of Roman pottery on a hill near Hallaton in 2000. He visited the site with a second-hand metal detector late one afternoon and found about 200 coins, which had been buried in a series of small pits dug into the clay. He also found another artifact, which he left in the ground overnight. The following day he returned to examine his discovery and found it that it was a silver ear. He reported the find to Leicestershire's county archaeologist, who called in the University of Leicester Archaeological Services (ULAS) to excavate the site. The dig took place in the spring of 2003.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
34snh8 | When/how did humans start cooking? | [
{
"answer": "The modern human gastrointestinal tract is evolved to digest cooked food. That takes a long time. Here is a peer reviewed article that argues that control of fire was achieved nearly two million of years ago by some of the first members of the Homo genus:\n\n_URL_2_\n\nBecause of the time needed for our current digestive systems to have evolved and also corresponding archeological evidence of controlled use of fire (ancient radiomatrically dated firepits) it's now the general consensus that control of fire (and it's use for cooking) must have occurred no earlier than 400,000 years ago:\n\n_URL_1_\n\n_URL_0_\n\nIrrefutable evidence of cooking fires has been dated to 125,000 years ago. But this is not really a possible timeline for when control of fire began due to the evolutionary evidence of our guts: Our species, Homo sapiens, must have evolved in a population that had control of fire and used it to cook food, which means control of fire and cooking must have begun half a million years ago at the earliest.\n\n\nEdit:\nIt's impossible to answer the second part of your question. Humans would have experimented with cooking the variety of foods available. I don't see how you could get a specific timeline of the integration of spices and other cooking ingredients; it would all be highly variable and probably a subject of debate with many of the wild varieties. For instance we have no idea when humans started eating garlic, it's really difficult to get an accurate date of pre modern (read pre writing) things like this.",
"provenance": null
},
{
"answer": "I'm on mobile so no links, but Richard Wrangham, who is an anthropologist, has an entire book on this, called Catching Fire. \n\nHis main argument boils down to 2 million years ago. I think that's probably too long ago, but the book is very enjoyable and well written, save for the one chapter on sex and division of labour, which rankled.",
"provenance": null
},
{
"answer": "Great discussion going on. I have often wondered about this so I'll piggy-back this topic (since it is very related) to ask the following:\n\n\n\nI can't wrap my head around making the leap from controlling fire, to cooking food.\n\nIt would make sense that once early man had control of fire that he would start experimenting. Putting anything and everything he could into the fire to see what would happen. So naturally at some point he would stick some food in there and cook it by accident.\n\nAnd maybe then he would eat it and it would have been more nutritious. But of course he couldn't have known it was better for him. An animal used to eating raw meat and vegetables wouldn't automatically think that cooked food was better would it? Especially to the point that cooking it was universal thus guiding our evolution.\n\nNot that I'm doubting that that leap was made, i just don't myself understand it.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5355",
"title": "Cooking",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 813,
"text": "Phylogenetic analysis suggests that human ancestors may have invented cooking as far back as 1.8 million to 2.3 million years ago. Re-analysis of burnt bone fragments and plant ashes from the Wonderwerk Cave, South Africa, has provided evidence supporting control of fire by early humans there by 1 million years ago. There is evidence that \"Homo erectus\" was cooking their food as early as 500,000 years ago. Evidence for the controlled use of fire by \"Homo erectus\" beginning some 400,000 years ago has wide scholarly support. Archaeological evidence from 300,000 years ago, in the form of ancient hearths, earth ovens, burnt animal bones, and flint, are found across Europe and the Middle East. Anthropologists think that widespread cooking fires began about 250,000 years ago, when hearths started appearing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2597318",
"title": "Culinary arts",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 564,
"text": "The origins of culinary began with primitive humans roughly 2 million years ago. There are various theories as to how early humans used fire to cook meat. According to anthropologist Richard Wrangham, author of \"Catching Fire: How Cooking Made Us Human\", primitive humans simply tossed a raw hunk of meat into the flames and watching it sizzle. Another theory claims humans may first have savored roasted meat by chance when the flesh of a beast killed in a forest fire was found to be more appetizing and easier to chew and digest than the conventional raw meat.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39466243",
"title": "Outline of cuisines",
"section": "Section::::History of cuisine.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 231,
"text": "BULLET::::- History of cooking – no known clear archeological evidence for the first cooking of food has survived. Most anthropologists believe that cooking fires began only about 250,000 years ago, when hearths started appearing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1158720",
"title": "Middle Paleolithic",
"section": "Section::::Technology.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 752,
"text": "The use of fire became widespread for the first time in human prehistory during the Middle Paleolithic and humans began to cook their food c. 250,000 years ago. Some scientists have hypothesized that hominids began cooking food to defrost frozen meat which would help ensure their survival in cold regions. Robert K. Wayne, a molecular biologist, has controversially claimed, based on a comparison of canine DNA, that dogs may have been first domesticated during the Middle Paleolithic around or even before 100,000 BCE. Christopher Boehm (2009) has hypothesized that egalitarianism may have arisen in Middle Paleolithic societies because of a need to distribute resources such as food and meat equally to avoid famine and ensure a stable food supply.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1158720",
"title": "Middle Paleolithic",
"section": "Section::::Nutrition.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 673,
"text": "Although gathering and hunting comprised most of the food supply during the Middle Paleolithic, people began to supplement their diet with seafood and began smoking and drying meat to preserve and store it. For instance the Middle Stone Age inhabitants of the region now occupied by the Democratic Republic of the Congo hunted large long catfish with specialized barbed fishing points as early as 90,000 years ago, and Neandertals and Middle Paleolithic \"Homo sapiens\" in Africa began to catch shellfish for food as revealed by shellfish cooking in Neandertal sites in Italy about 110,000 years ago and Middle Paleolithic \"Homo sapiens\" sites at Pinnacle Point, in Africa.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3545917",
"title": "Masonry oven",
"section": "Section::::Origins and history.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 516,
"text": "Humans built masonry ovens long before they started writing. The process began as soon as our ancestors started using fire to cook their food, probably by spit-roasting over live flame or coals. Big starchy roots and other slower-cooking foods, however, cooked better when they were buried in hot ashes, and sometimes covered with hot stones, and/or more hot ash. Large quantities might be cooked in an earth oven: a hole in the ground, pre-heated with a large fire, and further warmed by the addition of hot rocks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "390616",
"title": "Kebab",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 544,
"text": "Evidence of hominin use of fire and cooking in the Middle East dates back as far as 790,000 years, and prehistoric hearths, earth ovens, and burnt animal bones were spread across Europe and the Middle East by at least 250,000 years ago. Excavations of the Minoan settlement of Akrotiri unearthed stone supports for skewers used before the 17th century BC. In ancient times, Homer in the Iliad (1.465) mentions pieces of meat roasted on spits (), and the Mahabharata, an ancient Indian text, also mentions large pieces of meat roasted on spits.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3vzeie | Is there a history of monasticism in Islam? | [
{
"answer": "*I speak of the Middle Ages.*\n\nIslam doesn't really have monasticism. Within Sufism, Islam's mystical tradition, we can see some parallels to Christian monasticism, but fundamental differences remain--they are, at heart, different institutions with different roles to play in their respective religions and societies.\n\nAccording to Christian tradition, the roots of monasticism lie in the late antique Egyptian desert, where so-called Desert Fathers (and Mothers!) were inspired to (in theory) leave \"the world\" behind, and move into isolated (in theory) caves or rough buildings to focus on their spiritual lives and relationships with God. Two things happened: one, they were seen as holy and people from nearby villages/cities came to them seeking advice and consolation. Two, they started to form their own communities. First, communities of hermits who sometimes came together; eventually, communities who devoted as much time to seeking God *as a community* (in group prayer) as on their own. And, eventually, these communities developed formal Rules to regulate their daily lives.\n\nThat is what we typically mean by \"monasticism\" in Christianity: a group of people, typically single-sex (although \"double-houses\" of women and men, isolated from each other but living side-by-side, are a western medieval thing in certain times and places), who swear permanent vows of poverty, chastity, and obedience; wear a uniform; and follow a rigid daily schedule of group prayer, individual prayer, and some amount of time for work. \n\nAs a social institution, monasteries own land, play power politics, are played with *in* power politics among bishops and secular lords (donating land to a monastery to keep it out of someone else's hands), offer a place for noble and royal widows to finish out their lives without needing to remarry (and thus preventing their lands from leaving the family), provide charity, and intercede between their patrons and God. 
In the early Middle Ages, monasteries were *crucial* in spreading and anchoring Christianity across pagan Europe. They were centers of learning, literacy, and libraries throughout the medieval world. As a religious institution, monasteries allow monks and nuns to nurture their inner spiritual lives--Christian mysticism largely, though not exclusively, comes out of the monastic tradition.\n\nSo in Christianity, mysticism tends to emerge from monasticism, or is just one part of it. Conversely, Sufism is the inner or mystical dimension of Islam, and in some cases, we can see some parallels to monasticism within that mystical tradition.\n\nThe Sufi tradition generally consists of disciples or students under a leader. As you might expect, this idea of a teacher with a group of students, appointing one as their heir upon their death, does lead to the development of *tariqa* or orders of Sufism. \n\nUnlike the rigid, exclusive, vowed communities of monasticism, however, Sufi orders are fluid. People can join them, leave them, adhere to multiple traditions at the same time! They are collective teachings of ways to build your individual relationship with God. The Christian monastic orders can also be seen that way, but they are exclusive, for-life, and consider the full way of life as part of building that relationship.\n\nAdherence to Sufi orders can manifest in many different forms. In some cases, particularly in north and west Africa, an entire people or branch of a people will follow Sufi principles. Some Sufis will live independently and come together or meet with the teacher. But in other cases, we do see Sufis living in community. I stress that this is not the formal vowed life under a Rule of Christian monasticism. 
Nevertheless, *zawiya*/*tekke*/Sufi \"lodges\" of the Middle Ages resemble their Christian counterparts in some ways.\n\nStructurally or architecturally, the zawiya complex provided lodging for their Sufis, a school (zawiya simply means madrasa/religious school in some parts of the Arab world), space for daily prayer, and sometimes institutions like lodging for visitors or hospitals for the sick and indigent. You would find equivalents for all of these in medieval Christian monasteries! Zawiyas, though, reflected Sufism's individualistic focus much more than their Christian counterparts tended to. While most Christian monastic traditions did not allow individual cells or space for private prayer until later in the Middle Ages (Christianity also has an eremitic or hermit tradition, though), Sufi zawiyas frequently offered both. And again, vows and the rigidity of monastic Rules were not part of life in a zawiya.\n\nSufi zawiyas did, however, mirror Christian monasteries in their missionary function. Both individual Sufis and established zawiyas played crucial roles in the expansion of both Islam and literacy in the early medieval (and also rather more modern) world.\n\nIslam and eventually Sufism are born and cultivated partially in lands very familiar with either western or eastern forms of Christian monasticism--including, of course, the Egyptian desert itself. Were the Sufi zawiyas inspired by the Christian monastic communities their founders were well aware of? Was it simply the case that the medieval Mediterranean world shared enough circumstances that educated religious communities as beacons of charity and missionary work filled a necessary niche in both? Or was it a mix of the two? As you can imagine, influences between Christian monasticism, Sufi zawiyas, and the mystical tradition within the two religions (and Judaism as well) remain a rather hotly debated question.\n\nOverall, it is wrong to say medieval Islam developed monasticism. 
But a closer look reveals that within Sufism, institutions did develop that paralleled contemporary Christian monasteries in several important respects.\n\n*My apologies for not including the Buddhist, Hindu or Jain monastic traditions in this discussion.*",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "19626",
"title": "Monasticism",
"section": "Section::::Islam.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 635,
"text": "Islam forbids the practice of monasticism. In Sunni Islam, one example is Uthman bin Maz'oon; one of the companions of Muhammad. He was married to Khawlah bint Hakim, both being two of the earliest converts to Islam. There is a Sunni narration that, out of religious devotion, Uthman bin Maz'oon decided to dedicate himself to night prayers and take a vow of chastity from his wife. His wife got upset and spoke to Muhammad about this. Muhammad reminded Uthman that he himself, as the Prophet, also had a family life, and that Uthman had a responsibility to his family and should not adopt monasticism as a form of religious practice.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22974735",
"title": "Christianity in the 5th century",
"section": "Section::::Monasticism.\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 759,
"text": "Monasticism is a form of asceticism whereby one renounces worldly pursuits (\"in contempu mundi\") and concentrates solely on heavenly and spiritual pursuits, especially by the virtues humility, poverty, and chastity. It began early in the Church as a family of similar traditions, modeled upon Scriptural examples and ideals, and with roots in certain strands of Judaism. John the Baptist is seen as the archetypical monk, and monasticism was inspired by the organisation of the Apostolic community as recorded in Acts of the Apostles. Central figures in the development of monasticism were Basil of Caesarea in the East and Benedict of Nursia in the West, who created the famous Benedictine Rule, which became the most common rule throughout the Middle Ages.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14117",
"title": "History of Christianity",
"section": "Section::::Christianity during late antiquity (313–476).:Monasticism.\n",
"start_paragraph_id": 67,
"start_character": 0,
"end_paragraph_id": 67,
"end_character": 456,
"text": "Monasticism is a form of asceticism whereby one renounces worldly pursuits and goes off alone as a hermit or joins a tightly organized community. It began early in the Church as a family of similar traditions, modelled upon Scriptural examples and ideals, and with roots in certain strands of Judaism. John the Baptist is seen as an archetypical monk, and monasticism was also inspired by the organisation of the Apostolic community as recorded in Acts 2.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25789277",
"title": "Christianity in late antiquity",
"section": "Section::::Monasticism.\n",
"start_paragraph_id": 96,
"start_character": 0,
"end_paragraph_id": 96,
"end_character": 546,
"text": "Monasticism is a form of asceticism whereby one renounces worldly pursuits (\"in contempu mundi\") and concentrates solely on heavenly and spiritual pursuits, especially by the virtues humility, poverty, and chastity. It began early in the Church as a family of similar traditions, modeled upon Scriptural examples and ideals, and with roots in certain strands of Judaism. St. John the Baptist is seen as the archetypical monk, and monasticism was also inspired by the organisation of the Apostolic community as recorded in \"Acts of the Apostles\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19626",
"title": "Monasticism",
"section": "Section::::Christianity.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 462,
"text": "Monasticism in Christianity, which provides the origins of the words \"monk\" and \"monastery\", comprises several diverse forms of religious living. It began to develop early in the history of the Church, but is not mentioned in the scriptures. It has come to be regulated by religious rules (e.g. the Rule of St Basil, the Rule of St Benedict) and, in modern times, the Church law of the respective apostolic Christian churches that have forms of monastic living.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25874",
"title": "Rule of Saint Benedict",
"section": "Section::::Origins.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 728,
"text": "Christian monasticism first appeared in the Egyptian desert, in the Eastern Roman Empire a few generations before Benedict of Nursia. Under the inspiration of Saint Anthony the Great (251-356), ascetic monks led by Saint Pachomius (286-346) formed the first Christian monastic communities under what became known as an \"Abbot\", from the Aramaic \"abba\" (father).Within a generation, both solitary as well as communal monasticism became very popular which spread outside of Egypt, first to Palestine and the Judean Desert and thence to Syria and North Africa. Saint Basil of Caesarea codified the precepts for these eastern monasteries in his Ascetic Rule, or \"Ascetica\", which is still used today in the Eastern Orthodox Church.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "721873",
"title": "Christian monasticism",
"section": "Section::::History.:Early Christianity.:Eremitic Monasticism.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 271,
"text": "An early form of \"proto-monasticism\" appeared as well in the 3rd century among Syriac Christians through the \"Sons of the covenant\" movement. Eastern Orthodoxy looks to Basil of Caesarea as a founding monastic legislator, as well to as the example of the Desert Fathers.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3zkgky | How long did it take a skilled armourer to make chainmail armour during medieval times? | [
{
"answer": "My own area of study is the armour of high and late medieval Europe. So my answer will focus on that, not on the Early Middle Ages. I mention this caveat because the economics and social organization of Europe were very different between 600 and 1450, and this effected things like how armour was made, which in turn effected the time it took to make it.\n\nMy source for this is Alan Williams' The Knight and the Blast Furnace.\n\nA mail shirt might have between 28,000 and 50,000 links, depending on the size of the links and the length of the skirt an sleeves. Some mail was made of alternating riveted and solid links (IE, something like a washer). This was quicker to make, and modern estimates suggest it would take around 750 man-hours to manufacture. If a single laborer worked 10 hour days, this would take 75 days to make (not including sundays and feast days). However, laborers often didn't work alone, and workshops would include division of labor to speed up the process. So the actual time to manufacture a shirt would often be less than 75 days, even if it represented 750 hours of labor - how many people worked on a shirt, and how well they collaborated would determine the actual time of manufacture.\n\nFrom the 14th century onwards, mail is increasingly made of all rivetted links, perhaps because it allows a tighter weave with thicker links and thus makes mail more protective. Rivetting all those extra links would add around 250 man-hours of labor, for a total of 1000 man-hours.\n\nThis made mail rather expensive, as you can imagine. In the beginning of the 14th century mail shirts bought in Bruges in Flanders were the equivalent of 60-130 days wages of a common soldier on campaign. In the early 15th century mail shirts bought from the Westphalia region of Germany were the equivalent of around 25 days wages, which is a good deal more affordable. 
At least some of this reduction in price may have been due to the re-use of mail - mail is easy to recycle, alter, cut up and repurpose. Many surviving mail shirts show signs of alteration from decades or more after they were first made, and smaller pieces of mail armour like standards (collars), sleeves, skirts and gussets (underarm guards) may well have been made from older mail shirts that were cut up. So a lord buying mail shirts for his retinue might not be buying new mail, but 'remanufactured' mail.\n\nAs a final aside, the first step to making mail is making some form of wire or at least some thin piece of metal that can be bent into a ring. The quickest way to do this is to draw it - basically pulling an iron rod through a series of holes in a 'draw plate', creating a wire of a given thickness. This process is first mentioned by Theophilus in the 11th century, but mail with links of fairly even thickness dates as early as the 8th century. Some medieval mail is made from 'wire' of less even thickness, which may have been made through other processes like cutting strips from flat pieces of metal and then twisting them. I mention the manufacture of wire because while it isn't included in the calculations above, it is important to keep in mind that this was labor that needed to be performed before mail could be made - even though it wasn't necessarily performed in the mailmaker's workshop by the mailmakers themselves. Improvements in making wire made mailmaking faster and mail more affordable.\n\nEDIT: A final note is that mailmaking and making plate armour were different crafts, and at least in larger cities like London were represented by different guilds.\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "57755462",
"title": "Indian armour",
"section": "Section::::Medieval period.:Early Medieval period (1206 CE-1526 CE).\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 268,
"text": "During 12th century chainmail armour is first introduced in the Indian subcontinent and used by Turkic armies. An reference of chainmail armour was found in the inscription of Mularaja II and also at the Battle of Delhi where it was used by the armoured war elephants\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57025242",
"title": "Tomb Effigy of Jacquelin de Ferrière",
"section": "Section::::Imagery.:Chainmail.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 236,
"text": "Chainmail was the prominent form of armor during the 13th century. A precursor to plate armor, chainmail protected its wearer from opponents while allowing mobility, and was extremely effective against edged weapons and thrust attacks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1390149",
"title": "Medieval technology",
"section": "Section::::Military technologies.:Armour.\n",
"start_paragraph_id": 140,
"start_character": 0,
"end_paragraph_id": 140,
"end_character": 1240,
"text": "The most common type during the 11th through the 16th centuries was the Hauberk, also known earlier than the 11th century as the Carolingian byrnie. Made of interlinked rings of metal, it sometimes consisted of a coif that covered the head and a tunic that covered the torso, arms, and legs down to the knees. Chain mail was very effective at protecting against light slashing blows but ineffective against stabbing or thrusting blows. The great advantage was that it allowed a great freedom of movement and was relatively light with significant protection over quilted or hardened leather armour. It was far more expensive than the hardened leather or quilted armour because of the massive amount of labor it required to create. This made it unattainable for most soldiers and only the more wealthy soldiers could afford it. Later, toward the end of the 13th century banded mail became popular. Constructed of washer shaped rings of iron overlapped and woven together by straps of leather as opposed to the interlinked metal rings of chain mail, banded mail was much more affordable to manufacture. The washers were so tightly woven together that it was very difficult penetrate and offered greater protection from arrow and bolt attacks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4649186",
"title": "Finery forge",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 652,
"text": "In Europe, the concept of the finery forge may have been evident as early as the 13th century. However, it was perhaps not capable of being used to fashion plate armor until the 15th century, as described in conjunction with the waterwheel-powered blast furnace by the Florentine Italian engineer Antonio Averlino (c. 1400 - 1469). The finery forge process began to be replaced in Europe from the late 18th century by others, of which puddling was the most successful, though some continued in use through the mid-19th century. The new methods used mineral fuel (coal or coke), and freed the iron industry from its dependence on wood to make charcoal.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1875107",
"title": "Lorica hamata",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 284,
"text": "The lorica hamata is a type of mail armour used by soldiers for over 600 years (3rd century BC to 4th century AD) from the Roman Republic to the Roman Empire. \"Lorica hamata\" comes from the Latin \"hamatus\" (hooked) from \"hamus\" which means \"hook\", as the rings hook into one another.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "653194",
"title": "Man-at-arms",
"section": "Section::::Military function.:Arms and armour.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 528,
"text": "Throughout the Medieval period and into the Renaissance the armour of the man-at-arms became progressively more effective and expensive. Throughout the 14th century, the armour worn by a man-at-arms was a composite of materials. Over a quilted gambeson, mail armour covered the body, limbs and head. Increasingly during the century, the mail was supplemented by plate armour on the body and limbs. In the 15th century, full plate armour was developed, which reduced the mail component to a few points of flexible reinforcement.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1621648",
"title": "Chainmail (game)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 971,
"text": "Chainmail is a medieval miniature wargame created by Gary Gygax and Jeff Perren. Gygax developed the core medieval system of the game by expanding on rules authored by his fellow Lake Geneva Tactical Studies Association (LGTSA) member Perren, a hobby-shop owner with whom he had become friendly. Guidon Games released the first edition of \"Chainmail\" in 1971 as its first miniature wargame and one of its three debut products. \"Chainmail\" was the first game designed by Gygax that was available for sale as a professional product. It included a heavily Tolkien-influenced \"Fantasy Supplement\", which made \"Chainmail\" the first commercially available set of rules for fantasy wargaming, though it follows many hobbyist efforts from the previous decade. \"Dungeons & Dragons\" began as a \"Chainmail\" variant, and \"Chainmail\" pioneered many concepts later used in \"Dungeons & Dragons\", including armor class and levels, as well as various spells, monsters and magical powers.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3vhji7 | if utilities infastructure was created through taxpayer dollars, then why do people have to pay private companies for their utilities? | [
{
"answer": "In most cases where facilities were built by the public and then privatized, the company either had to pay the government for the facilities, or agree to repair or improve the facilities at their own expense, thus effectively paying a bill that would have been the government's bill.\n\n",
"provenance": null
},
{
"answer": "You have to pay the provider of the service. Water, electricity, etc. cost money to extract/generate and transmit. You have to pay FedEx to drive your packages down the highway even though the highway was built with taxpayer money, right?\n\nAs to your other questions about utilities industries, they're complicated and state-specific. In general, it can be anything from a free-for-all where anybody can become, for example, a competitive retail electricity provider, or it can be state-sponsored monopoly where the government gives one private company the exclusive right to provide the service in a particular area. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "35132007",
"title": "Utility ratemaking",
"section": "Section::::Ratemaking goals.:Capital attraction.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 667,
"text": "Although utilities are regulated industries, they are typically privately owned and must therefore attract private capital. Accordingly, because of constitutional takings law, government regulators must assure private companies that a fair revenue is available in order to continue to attract investors and borrow money. This creates competing aims of capital attraction and fair prices for customers. Utility companies are therefore allowed to charge \"reasonable rates,\" which are generally regarded as rates that allow utilities to encourage people to invest in utility stocks and bonds at the same rate of return they would in comparable non-regulated industries.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "171136",
"title": "Public utility",
"section": "Section::::United States.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 529,
"text": "Public utilities can be privately owned or publicly owned. Publicly owned utilities include cooperative and municipal utilities. Municipal utilities may actually include territories outside of city limits or may not even serve the entire city. Cooperative utilities are owned by the customers they serve. They are usually found in rural areas. Publicly owned utilities are non-profit. Private utilities, also called investor-owned utilities, are owned by investors, and operate for profit, often referred to as a rate of return.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "156653",
"title": "District",
"section": "Section::::Municipal utility district.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 268,
"text": "In the US, public utility districts (PUD) have similar functions to Municipal utility districts, but are created by a local government body such as a city or county, and have no authority to levy taxes. They provide public utilities to the residents of that district.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17355007",
"title": "Electricity pricing",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 328,
"text": "Some utility companies are for-profit companies, and their prices will include a financial return for shareholders and owners. These utility companies can exercise their political power within existing legal and regulatory regimes to guarantee that return and reduce competition from other sources like distributed generation. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50791013",
"title": "1996 California Proposition 218 (Local Initiative Power)",
"section": "Section::::Types of local initiatives.:Compensatory Initiatives.:Countering Utility Fee and Charge General Fund Transfers.\n",
"start_paragraph_id": 58,
"start_character": 0,
"end_paragraph_id": 58,
"end_character": 572,
"text": "In some situations, a local government may be legally allowed to transfer utility fee or charge proceeds to the general fund of the local agency to thereafter be spent at the discretion of local politicians. Such situations may include controversial reimbursements to the general fund for services and/or other benefits provided by the local government to the utility and legally allowable return on investment (“profit”) utility fee overcharges for electrical or gas service which are not subject to the cost of service constitutional protections under Proposition 218. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18878",
"title": "Monopoly",
"section": "Section::::Historical monopolies.:Utilities.\n",
"start_paragraph_id": 143,
"start_character": 0,
"end_paragraph_id": 143,
"end_character": 492,
"text": "A public utility (or simply \"utility\") is an organization or company that maintains the infrastructure for a public service or provides a set of services for public consumption. Common examples of utilities are electricity, natural gas, water, sewage, cable television, and telephone. In the United States, public utilities are often natural monopolies because the infrastructure required to produce and deliver a product such as electricity or water is very expensive to build and maintain.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35720032",
"title": "Social media as a public utility",
"section": "Section::::Background.:Definitions.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 570,
"text": "The traditional definition of the term public utility is \"an infrastructural necessity for the general public where the supply conditions are such that the public may not be provided with a reasonable service at reasonable prices because of monopoly in the area.\" Conventional public utilities include water, natural gas, and electricity. In order to secure the interests of the public, utilities are regulated. Public utilities can also be seen as natural monopolies implying that the highest degree of efficiency is accomplished under one operator in the marketplace.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1nstpr | after accomplishing something very challenging why do we sometimes feel empty and emotionless about it immediately after? | [
{
"answer": "Because the challenge is gone. \n\nPossibly. ",
"provenance": null
},
{
"answer": "This reminds me of the last scene in Zero Dark Thirty (I'd say spoiler here, but since the film was based on recent real-life events, you probably know the plot already.)\n\nIf you've devoted a significant amount of time to something, once you complete it, your primary purpose is gone. It takes a bit of time to find a new goal.",
"provenance": null
},
{
"answer": "I feel that way too. For me I'm pretty certain it's just a lack of self-confidence. By completing the task, I haven't proven that I can do something difficult; either I didn't really earn it for some reason, or I've shown that it wasn't really difficult in the first place.",
"provenance": null
},
{
"answer": "The challenge or goal is gone and now that you have overcome it, you don't know what's next. Now you are empty as you await for another goal to approach itself so you can have the same feeling. Completing goals is good, having goals is good. Having too many goals is stressful, having no goals makes you feel useless and empty. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "35075711",
"title": "Spontaneous recovery",
"section": "Section::::In human memory.:Traumatic memories.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 260,
"text": "Emotionally unpleasant experiences have the tendency to come back and haunt us, even after frequent suppression. Such memories can be recovered gradually, through active search and reconstruction, or they can come to mind spontaneously, without active search.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "788091",
"title": "Psychological trauma",
"section": "Section::::Symptoms.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 825,
"text": "In time, emotional exhaustion may set in, leading to distraction, and clear thinking may be difficult or impossible. Emotional detachment, as well as dissociation or \"numbing out\" can frequently occur. Dissociating from the painful emotion includes numbing all emotion, and the person may seem emotionally flat, preoccupied, distant, or cold. Dissociation includes depersonalisation disorder, dissociative amnesia, dissociative fugue, dissociative identity disorder, etc. Exposure to and re-experiencing trauma can cause neurophysiological changes like slowed myelination, abnormalities in synaptic pruning, shrinking of the hippocampus, cognitive and affective impairment. This is significant in brain scan studies done regarding higher order function assessment with children and youth who were in vulnerable environments.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2177410",
"title": "Impostor syndrome",
"section": "Section::::Measuring impostor phenomenon.:The impostor cycle.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 511,
"text": "This sequence of events serves as a reinforcement, causing the cycle to remain in motion. With every cycle, feelings of perceived fraudulence, increased self-doubt, depression, and anxiety accumulate. As the cycle continues, increased success leads to the intensification of feeling like a fraud. This experience causes the individual to remain haunted by their lack of perceived, personal ability. Believing that at any point they can be 'exposed' for who they think they really are keeps the cycle in motion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8084306",
"title": "Self-destructive behavior",
"section": "Section::::Causes.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 599,
"text": "Aside from this, a need for attention or a feel good sensation can ultimately cause this behavior. A prime example of this would be addiction to drugs or alcohol. In the beginning stages, people have the tendency to ease their way into these unhealthy behaviors because it gives them a pleasurable sensation. However, as time goes on, it becomes a habit that they can not stop and they begin to lose these great feeling easily. When these feelings stop, self-destructive behavior enhances because they aren't able to provide themselves with that feeling that makes mental or physical pain go away. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4829036",
"title": "Problem of mental causation",
"section": "Section::::Commonsensical Solutions.:The Advent of Crying.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 569,
"text": "How Emotions are Made: The Secret Life of the Brain by Lisa Feldman Barrett for a rigorous discussion. One’s crying is not planned, unless one is an actor, then we are able to tap into the mechanism that causes tears to flow. Otherwise it just happens outside of out awareness of causing it by thinking. The brain has patterns of neurons that once activated generate physiologic responses that happens up to 10 seconds before we are aware. (Koch, Christof. 2012. “How Physics and Neuroscience Dictate Your “Free” Will”. Scientific American: April 12.) and many others.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17526284",
"title": "Self-conscious emotions",
"section": "Section::::Social benefits.:Social healing.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 530,
"text": "Self-conscious emotions enable social healing. When an individual makes a social error, feelings of guilt or embarrassment changes not just the person’s mood but their body language. In this situation the individual gives out non-verbal signs of submission and this is generally more likely to be greeted with forgiveness. This has been shown in a study where actors knocked over a supermarket shelve (Semin & Manstead, 1982). Those that acted embarrassed were received more favorably than those who reacted in a neutral fashion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11838661",
"title": "Exaggeration",
"section": "Section::::Everyday and psycho-pathological contexts.:Cognitive distortions.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 252,
"text": "In depression, exaggerated all-or-nothing thinking can form a self-reinforcing cycle: these thoughts might be called \"emotional amplifiers\" because, as they go around and around, they become more intense. Here are some typical all-or-nothing thoughts:\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
14i1si | Does having houseplants at home provide any sort of measurable benefit to ones health? | [
{
"answer": "[They can help reduce indoor pollutants](_URL_0_)",
"provenance": null
},
{
"answer": "Houseplants can provide numerous [mental health and psychiatric](_URL_0_) benefits.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "27834088",
"title": "Kenosha County Healthy Homes Initiative",
"section": "Section::::Grant.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 400,
"text": "The Healthy Homes program was made possible through a grant from the United States Department of Housing and Urban Development via a Healthy Homes Demonstration Program (HHD). The grant allows the Healthy Homes program to address environmental triggers that contribute to illnesses, conduct education and outreach that furthers the goal of protecting families from environmentally induced illnesses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22602238",
"title": "Building and Construction Improvement Program",
"section": "Section::::Program overview.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 625,
"text": "In general, over 15% of the household expenditure and around 50% of disease morbidity in the region were directly attributable to poor housing conditions, most of which were avoidable. Keeping these issues in view, the Building and Construction Improvement Program (BACIP) set out to improve the living condition by developing several home–improvement products that mitigated the negative impact of planning and building inefficiencies on these traditional households and lessened the burden on the surrounding environment. BACIP also attempted to reduce the cost and increase the affordability of better housing conditions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3531662",
"title": "Affordable housing",
"section": "Section::::Growing density convergence and regional urbanization.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 204,
"text": "\"In addition to the distress it causes families who cannot find a place to live, lack of affordable housing is considered by many urban planners to have negative effects on a community's overall health.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31835707",
"title": "Homeshare",
"section": "Section::::Who benefits from homeshare and how?\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 262,
"text": "Others benefit indirectly. Families of Householders speak of the reassurance that their loved one has someone in the house, looking out for their welfare. Public services benefit too, as homeshare can delay the need for costly services such as residential care.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "834586",
"title": "Houseplant",
"section": "Section::::Plant requirements.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 301,
"text": "Major factors that should be considered when caring for houseplants are moisture, light, soil mixture, temperature, humidity, fertilizers, potting, and pest control. The following includes some general guidelines for houseplant care. Specific care information may be found widely online and in books.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10916331",
"title": "West Harlem Environmental Action",
"section": "Section::::Partnerships.:Healthy Home Healthy Child Campaign.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 610,
"text": "The Healthy Homes Project is a joint research initiative between the Columbia University Center for Children's Environmental Health (CCCEH). The project targets the unequal exposure of environmental hazards faced by children in minority or low-income communities and works to educate families on a number of known risk factors such as \"cigarettes, lead poisoning, drugs and alcohol, air pollution, garbage, pesticides, and poor nutrition\". Educating parents on environmental health risks, can protect children from developing asthma or cancer or from experiencing growth or developmental delays, among others.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31835707",
"title": "Homeshare",
"section": "Section::::Who benefits from homeshare and how?\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 590,
"text": "The direct benefits to a Householder include; help with daily living, companionship and the security of having someone in the house, especially at night. There are even recorded instances of homesharers saving lives; for example a German homesharer called the emergency services when the householder had a heart attack. Other benefits include: breaking down the barriers between generations and different cultures, fostering mutual understanding and tolerance. For instance, in an Australian program, an elderly Italian lady successfully shared her home with a Pakistani Muslim homesharer.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
15gyvs | What is the difference between German Blitzkrieg strategy and Soviet Deep Battle doctrine? | [
{
"answer": "First of all, the Germans never used the word 'Blitzkrieg' themselves and did not have a specific doctrine around deep penetration or strategic battle - as you can see from them turning back from Warsaw to deal with the Polish counterattack at Bzura 1939-09-09, the halting of the armoured units in front of the Dunkirk pocket 1940-05-17 and 1940-05-24 and diverting the armoured units from *Heeresgruppe Nord* and *Heeresgruppe Mitte* to help form the Kiev pocket 1941-09-16.\n\nThe Germans had a strong tactical focus, with their *Auftragstaktik*, and were extremely flexible tactically, allowing them to penetrate enemy lines and advance in depth. However, they did not have any specific strategic doctrine other than the traditional military ideal of the dual pincer cut-off, famous ever since Hannibal did it in the Battle of Cannae. \n\nThe Germans never managed to get more than about 17% mechanisation of their forces - most marched on foot and pulled their heavy weapons with horses, and the difference in speed of these two different kinds of units was a constant headache, and was exploited by the allies and Soviets multiple times. As the German armoured units attacked the suburbs of Warsaw 1939-09-08 (losing 70 tanks in the process and learning that tanks were not very good in urban warfare), the untouched Polish Poznan and Pomorze armies gathered at the Bzura River and attacked the German *30. Infanterie-division* that was the only stretched-out flank protection of the German advance. The Germans had to pull back from their attack on Warsaw and go after the Poles, and the campaign lasted for another two weeks.\n\n**In essence, the Germans had no blitzkrieg; they were flexible tactically and strove for encirclement strategically, and had severe problems with the armoured and motorised units outrunning the foot infantry.**\n\nThe Soviets did develop a doctrine of deep penetration, but essentially abandoned it during the 1937-1938 purges. 
While the purges mostly killed off generals and colonels and left the non-senior officers in place, it did freeze the Red Army in place. No-one dared do anything without orders, and tried to replace tactical flexibility with zeal and discipline, which was a recipe for disaster - which the defeat of the Spanish Republican Army (organised along Soviet lines in late 1936 and early 1937) and the performance of the Red Army in the Finnish Winter War 1939-1940 and early in the Great Patriotic War 1941-1942 shows.\n\nThe Red Army slowly got better at knowing what it was good at and what it was bad at, and how to use what it was good at and compensate for what it was bad at. It created massive breakthrough artillery units, shifting them to where they were needed. They knew they could never match the Germans in tactical flexibility, and instead created an operational doctrine, where they would rely on the firepower of pre-calculated artillery barrages and massed use of tanks and assault guns to achieve penetration of the enemy lines. Massive reserves would be ready to attach to any attack that showed promise; attacks that failed were stopped and their best forces moved to reinforce the attack that did well. Once penetrating, the Red Army focused more on destroying the enemy supply, communication and weaker rear units (destroying tank repair shops, supply services, training depots, etc.) and capturing important transportation hubs. Other so far untouched enemy units would be forced to retreat to avoid being cut off, and once out of their entrenchments and unprotected by artillery, they could be easy prey for another massed attack. Flexibility on a larger scale, it was very effective against the Germans once their ability to conduct large scale armoured warfare had been ground down. The skill in *maskirovka* - the art of camouflage, hiding one's own forces and making it look like there were substantial forces where there were almost none - was also important. 
The Red Army mastered this art.\n\n**In essence, the Red Army could not match the Germans in tactical skill, and thus built up flexible reserves to quickly shift to any breakthrough. Combined with *maskirovka* this allowed them to decisively defeat the Germans on the eastern front.**",
"provenance": null
},
{
"answer": "Both feature combined arms at their core and seek the dislocation of enemy forces through superior mobility, but Blitzkrieg has a tactical focus whereas Deep Battle takes the concept to the operational scale.\n\nYou might want to qualify your question with a temporal frame: Blitzkrieg came to its peak during the spectacular German conquests whereas the maturing of Deep Battle was just starting. While Tukhachevsky et al. did conceptualize Soviet operational art in the thirties, it is really the brutal and costly experience of WWII that hammered the concept into the shape of an efficient doctrine - see Operation Bagration for a summary of that crazy learning process.\n\nI'm posting from memory on a mobile - I'll fetch some sources when I come home to a proper workstation.",
"provenance": null
},
{
"answer": "Theoretically, the two approaches are different styles of surface-and-gap warfare. Which basically says, attack the enemy where there is a gap (weakness) in his forces rather than a surface (strength).\n\nThe Germans exploited gaps through recon-pull, the Soviets used command-push.\n\nThe Germans would send out recon units to find weak spots, and flex their main effort formations to take advantage of and break through them. **Recon** units found the gaps, and **pulled** the main forces after them through the gap to exploit the enemy's rear--encircle him, shoot up his logistics, etc.\n\nThe Soviets would say, \"I vant gap here,\" and point to a spot on the map, and they would mass their artillery corps to blow a hole in the line. The **command** created a gap through overwhelming firepower, and **pushed** their forces through it.",
"provenance": null
},
{
"answer": "[I drew you a crappy representation of Blitzkrieg versus Deep Battle in MS Paint.](_URL_0_) The graphic you have about Blitzkrieg is different from my understanding of that strategy. My understanding is that, typically, Blitzkrieg attacks at one main point, punches a hole in the lines, and then seeks to effect envelopments/encirclements as the opportunity arises. For instance, punch a hole in a north-south defensive line, drive deep, and then wheel around either to the north or to the south to create an envelopment around whichever part of the defenders a general chooses. This is in contrast to the dual-pronged attack that is pictured in your graphic. Therefore, my representation looks a bit different from yours. (This is not to say that either is the only right representation!)\n \nBased upon my understanding, the big difference between Blitzkrieg and Deep Battle is that the former identifies a weak spot from the get-go, and then seeks to exploit it based upon a singular assault, whereas the latter does not target an existing weak point, and instead seeks to create many weak points through sheer brute force along several points of attack, then push deep to destroy supply lines and create envelopments. Also, it seems to me that Blitzkrieg is somewhat on a smaller, or at least more targeted, scale, whereas Deep Battle is an incredibly broad and huge strategy that requires massive manpower. Somebody correct me if I'm wrong.",
"provenance": null
},
{
"answer": "A little late to this I see, but I still wanted to share my own perspective - and also attempt to clear up some misconceptions. Before I start, though - apologies for a longish post, but military history is my passion, and the WW2 Ostfront is an all-consuming passion ;)\n\nThere is a substantial difference between “Blitzkrieg” – the term itself did not exist in the lexicon of the Wehrmacht – and Deep Operations, and the difference stems largely from the unique constraints and strengths facing both these nations.\n\n**Background:**\n\nThe Russian army has historically fought (and mostly won) defensive campaigns, using the space of Russia to draw invaders in before springing the trap shut. With the revolution, Bolshevism as a creed demanded aggression, and being defensive was no longer enough; with the rise of Communism also came the rise of the leading figures who propounded the new theory of offensive operations: Frunze, Triandafillov and Tukhachevsky (quoting from memory, so the spellings might be horribly wrong).\n\nThese thinkers (mostly Triandafillov and Tukhachevsky) considered the following,\n\n(1) Can Russia stand up to another war of attrition? It had been successful in the past, but is it a guarantee of success in the future?\n(2) Space – Russian doctrine had always depended on trading space for time, and fighting defensively, but these thinkers challenged the dogma and postulated that the same space could also be used for an offensive strategy. 
As a digression, Zhukov used a reverse variant of Tukhachevsky’s offensive plan as a 3-echeloned defensive plan (I am of the firm opinion that the Russians did not just stumble in the defense, and Father Winter saved them, but that is a topic for an entirely different conversation)\n(3) Use strengths – Arty and masses of infantry have always been Russia’s strength, and Sov doctrine married these strengths (which till then had been the hallmark of a static, defensive war) to the new doctrine of mobility as proposed by Fuller in the early 20th century, thereby resulting in what we now know as combined arms.\n(4) Weaknesses – Avoid pitched battles on Soviet soil, and take the battle to Western Europe and gain quick victories.\nThis theory, however, did not stretch as far as what the Germans did in terms of radio net connectivity, use of air power etc., as Tukhachevsky was unfortunately purged before he could get there, and this entire theory died an unnatural death (until it was revived spectacularly in the counteroffensive at Stalingrad)\n\n**What did this result in?**\n\nIt is too simplistic to say that Deep Operations was only about battering a line in strength and then hoping to break through to the enemy’s rear. The Sov army (especially post-42) started tailoring itself to this concept, and this is where the role of shock armies comes in.\nDeep Operations was also about maskirovka, and about ensuring that the enemy was completely on the back foot in the chosen area of the offensive. For instance, during Operation Uranus, Gehlen was entirely convinced that the offensive was going to be against Army Group Centre – we talk about how FUSAG was formed in the UK prior to D-Day, but Sov Russia created 3 fake armies, built 50 fake bridges (not exactly sure about this number) and entirely fooled German intel about where the strike was coming. 
This was the case during the Kursk counteroffensive and Operation Bagration as well (other examples of brilliantly executed Deep Operations strikes).\nDeep Operations was layered as below,\n(A) Shock army – massed infantry, heavy on sapper support, backed by overwhelming arty, mortars, Katyushas etc. You also had Shtrafbats (punishment battalions) clearing a path through minefields, but these were a very small component of the force deployed on an offensive. \n(B) Elite Guards infantry units or regular infantry divisions as well\n(C) Armour\n(D) Cavalry Mechanised Groups\n\n**A** made the strike; these were divisions that were designed to take massive losses, and their only role was in breaking through German lines – they had minimal mobility, communications capabilities or any of the other requirements for modern warfare. These units were the equivalent of the sledgehammer. \n\n**B** followed through, to deal with the second echelon German troops – this was an evolution in Sov tactics, and evolved as a response to German tactics of pulling back to a secondary line to ensure that the arty impact was minimized (Gotthard Heinrici for instance was a genius at this tactic). \n**C** then completed the rout and made deep penetrations; by then \n**D** (also called Operational Exploitation Groups) were introduced into the gap; these were the units who roamed far to the rear of German lines and ravaged the Rollbahn (armour also did the same, but lack of fuel stopped them long before the cavalry units were stopped).\n\nThe German “Blitzkrieg” also had a similar parent in Fuller’s ideas (some authors even say Guderian was deeply influenced by Tukhachevsky’s ideas – but apart from a couple of lines in Guderian’s memoirs, I have not been able to find a source for this claim), but the nature of this beast was entirely different. 
The Blitzkrieg considered the following,\n\n(1) Avoid the brutal war of attrition as seen in WW1\n(2) Avoid static trench warfare, which favoured the combined (and stronger) economies of the Allies\n(3) Essential to knock out the Allies in the West before turning to the East, to avoid a two-front war\n(4) Manpower constraints, and use of force multipliers (Heinkel tactical bombers, Stukas, Panzer divisions) to even the manpower gap\n(5) Shift in focus from a war with geographical objectives to one that destroyed the maximum of enemy forces in minimum time.\nThis resulted in what we now know and see as the extremely successful Blitzkrieg.\n\nIn this, as the Germans did not have the manpower to assault a wide section of the front, the entire force of the thrust was on the Schwerpunkt – quite literally, the point of effort. It was NOT about recon by fire (probing for weakness in enemy lines, and then attacking the weakest point); rather, intel played a big role in identifying (before the assault) the joints between opposing armies and corps (something like what Napoleon used to do), clearly identifying the lines of axis, and most importantly, the encirclements! The encirclements were planned affairs and the junctions of the pincers pre-defined.\n\nThe fundamental difference was that Sov planning envisaged the substantial manpower reserves that were always available to it historically, and planned accordingly. Hence, the aim was more “conventional” in that it did not seek a complete destruction of a large portion of the enemy’s OOB, whereas German planning was all about successive Cannaes. \n\nInstead of arty, the Germans leveraged their vastly superior CnC capabilities and used arty spotters embedded in each division, along with air spotting by observers in Fieseler Storches, and used the Stukas as mobile artillery. 
The initial attack was itself made by armour and not by infantry, and the infantry was used to mop up the Kessels while the armour moved on to the next encirclement/target. The logic in play here was to use all armour at the Schwerpunkt and brute-force through the opposing lines, while the Luftwaffe on interdiction missions played havoc in the enemy rear. Using infantry might have tangled the lines of communication and clogged up the roads, and also given the enemy time to react - that was the thought process here. It is important to note here, though, that Deep Battle might not have been the success it became without the help of the humble Willys jeeps and Studebaker trucks, which immensely helped multiply Sov mobility.\n\nBoth Deep Battle and Blitzkrieg were products of the same thought – mobility over static warfare, and about bringing the war to a quick close – but the execution was as different as chalk is to cheese. In cases where Deep Operations failed (as in the example of the counteroffensive at Moscow), it was entirely because of Stalin’s impatience and over-ambition.\nThere seems to be a confusion that Deep Operations involved attacks on multiple axes; the thing is, those other offensives were a part of the maskirovka, meant to keep the Germans from switching reserves. The main Schwerpunkt (for lack of a better Sov word) was always pre-decided, and studied to death. Take Op Uranus for instance: the planning for a counteroffensive began towards the end of September, and the site of the breakthrough was personally surveyed by both Zhukov and Vasilevsky, and completely pre-decided. The offensives by Chuikov were more of a distraction to “fix” German troops.\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2856495",
"title": "Deep operation",
"section": "Section::::Intended outcomes; differences with other methodologies.\n",
"start_paragraph_id": 108,
"start_character": 0,
"end_paragraph_id": 108,
"end_character": 1291,
"text": "During the 1930s, the resurgence of the German military in the era of the \"Third Reich\" saw German innovations in the tactical arena. The methodology used by the Germans in the Second World War was named \"\"Blitzkrieg\"\". There is a common misconception that \"Blitzkrieg\", which is not accepted as a coherent military doctrine, was similar to Soviet deep operations. The only similarities of the two doctrines were an emphasis on mobile warfare and offensive posture. While the two similarities differentiate the doctrines from French and British doctrine at the time, the two were considerably different. While \"Blitzkrieg\" emphasized the importance of a single strike on a \"Schwerpunkt\" (focal point) as a means of rapidly defeating an enemy, Deep Battle emphasized the need for multiple breakthrough points and reserves to exploit the breach quickly. The difference in doctrine can be explained by the strategic circumstances for the USSR and Germany at the time. Germany had a smaller population but a better trained army whereas the Soviet Union had a larger population but a more poorly trained army. As a result, the \"Blitzkrieg\" emphasized narrow front attacks where quality could be decisive, while Deep Battle emphasized wider front attacks where quantity could be used effectively.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11702744",
"title": "German Army (1935–1945)",
"section": "Section::::Doctrine and tactics.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 1375,
"text": "German operational doctrine emphasized sweeping pincer and lateral movements meant to destroy the enemy forces as quickly as possible. This approach, referred to as \"Blitzkrieg\", was an operational doctrine instrumental in the success of the offensives in Poland and France. Blitzkrieg has been considered by many historians as having its roots in precepts developed by Fuller, Liddel-Hart and von Seeckt, and even having ancient prototypes practiced by Alexander, Genghis Khan and Napoleon. Recent studies of the Battle of France also suggest that the actions of either Rommel or Guderian or both of them (both had contributed to the theoretical development and early practices of what later became blitzkrieg prior to World War II), ignoring orders of superiors who had never foreseen such spectacular successes and thus prepared much more prudent plans, were conflated into a purposeful doctrine and created the first archetype of blitzkrieg, which then gained a fearsome reputation that dominated the Allied leaders' minds. Thus 'blitzkrieg' was recognised after the fact, and while it became adopted by the Wehrmacht, it never became the official doctrine nor got used to its full potential because only a small part of the Wehrmacht was trained for it and key leaders at the highest levels either focused on only certain aspects or even did not understand what it was.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4652",
"title": "Blitzkrieg",
"section": "Section::::Definition.:Common interpretation.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 1183,
"text": "The traditional meaning of blitzkrieg is that of German tactical and operational methodology in the first half of the Second World War, that is often hailed as a new method of warfare. The word, meaning \"lightning war\" or \"lightning attack\" in its strategic sense describes a series of quick and decisive short battles to deliver a knockout blow to an enemy state before it could fully mobilize. Tactically, blitzkrieg is a coordinated military effort by tanks, motorized infantry, artillery and aircraft, to create an overwhelming local superiority in combat power, to defeat the opponent and break through its defences. \"Blitzkrieg\" as used by Germany had considerable psychological, or \"terror\" elements, such as the \"Jericho Trompete\", a noise-making siren on the Junkers Ju 87 dive-bomber, to affect the morale of enemy forces. The devices were largely removed when the enemy became used to the noise after the Battle of France in 1940 and instead bombs sometimes had whistles attached. It is also common for historians and writers to include psychological warfare by using Fifth columnists to spread rumours and lies among the civilian population in the theatre of operations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4652",
"title": "Blitzkrieg",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 659,
"text": "Blitzkrieg (, from \"Blitz\" [\"lightning\"] + \"Krieg\" [\"war\"]) is a method of warfare whereby an attacking force, spearheaded by a dense concentration of armoured and motorised or mechanised infantry formations with close air support, breaks through the opponent's line of defence by short, fast, powerful attacks and then dislocates the defenders, using speed and surprise to encircle them with the help of air superiority. Through the employment of combined arms in manoeuvre warfare, blitzkrieg attempts to unbalance the enemy by making it difficult for it to respond to the continuously changing front, then defeat it in a decisive (battle of annihilation).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "812799",
"title": "Glossary of Nazi Germany",
"section": "Section::::B.\n",
"start_paragraph_id": 115,
"start_character": 0,
"end_paragraph_id": 115,
"end_character": 312,
"text": "BULLET::::- Blitzkrieg – lightning war; quick army invasions aided by tanks and airplanes. A form of attack generally associated with the German armed forces during the Second World War. \"Blitzkrieg\" tactics were particularly effective in the early German campaigns against Poland, France, and the Soviet Union.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4652",
"title": "Blitzkrieg",
"section": "Section::::Military operations.:Poland, 1939.\n",
"start_paragraph_id": 57,
"start_character": 0,
"end_paragraph_id": 57,
"end_character": 637,
"text": "Despite the term \"blitzkrieg\" being coined by journalists during the Invasion of Poland of 1939, historians Matthew Cooper and J. P. Harris have written that German operations during it were consistent with traditional methods. The Wehrmacht strategy was more in line with \"Vernichtungsgedanken\" a focus on envelopment to create pockets in broad-front annihilation. Panzer forces were dispersed among the three German concentrations with little emphasis on independent use, being used to create or destroy close pockets of Polish forces and seize operational-depth terrain in support of the largely un-motorized infantry which followed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25467180",
"title": "Battle of Gembloux (1940)",
"section": "Section::::Military theory.:German.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 1406,
"text": "The strategy, operational methods and tactics of the German Army and \"Luftwaffe\" have often been labelled \"Blitzkrieg\" (Lightning War). The concept is controversial and is connected to the problem of the nature and origin of \"Blitzkrieg\" operations, of which the 1940 campaign is often described as a classic example. An essential element of \"Blitzkrieg\" was considered to be a strategic, or series of operational developments, executed by mechanised forces to cause the collapse of the defenders' armed forces. \"Blitzkrieg\" has also been looked on as a revolutionary form of warfare but its novelty and its existence have been disputed. Rapid and decisive victories had been pursued by armies well before the Second World War. In the German wars of unification and First World War campaigns, the German General Staff had attempted \"Bewegungskrieg\" (war of manoeuvre), similar to the modern perception of \"Blitzkrieg\", with varying degrees of success. During the First World War, these methods had achieved tactical success but operational exploitation was slow as armies had to march beyond railheads. The use of tanks, aircraft, motorised infantry and artillery, enabled the Germans to attempt \"Bewegungskrieg\" with a faster tempo in 1940, than that of the slow-moving armies of 1914. The internal combustion engine and radio communication solved the problem of operational-level exploitation.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
z9uyg | [META] Wide-scale revisions to the official rules | [
{
"answer": "Looks good, kudos to you and the mod team for being awesome and keeping r/AskHistorians a great place to learn about history.",
"provenance": null
},
{
"answer": "The rules on \"top level\" comments make sense when the post is an actual question, but not for the other types of permitted posts. How can a comment on a special occasion, meta or project post \"only be a answer to the question at hand\"?\n\nI suggest that you make it clear that this rule only applies to question posts.",
"provenance": null
},
{
"answer": " > [W]hile this is a public forum it is not an egalitarian one; not all answers will be treated as having equal merit.\n\nThank you for taking a clear stance on this issue and not pussy-footing around it. I come to this subreddit for content and there really is a very impressive panel of historians to provide that.\n\nAs a history buff, I always get the itch to pitch in my own two cents, but refrain from doing so as there probably is someone who can provide much more accurate information. (I should clarify that I am not implying that I have never found a helpful and on topic response from a non-flaired poster. Just that posts with people speculating and postulating have been on the rise..)\n\nTo the Mods and the panel, thanks a lot for all the work that you put into this subreddit!",
"provenance": null
},
{
"answer": "Thank you for the tier system. Its such a breathe of fresh air compared to askscience where everything is so clinical.",
"provenance": null
},
{
"answer": "top tier comments should allow for questions directed at OP (e.g. clarification requests).",
"provenance": null
},
{
"answer": "Awesome rules. I'm sure they will keep the high quality of this subreddit.",
"provenance": null
},
{
"answer": "I strongly prefer not allowing meta posts except from moderators. Subreddits around this size frequently decay to being nearly 50% meta and \"idea\" posts; \"let's talk about downvotes\" (or \"/r/askhistorians: we need to talk\") in particular will happen at least once a week. Just like everyone doesn't have the same subject authority, everyone doesn't have the same moderation or reform authority",
"provenance": null
},
{
"answer": "It all seems to be for the better. And I'm glad it's not going to be as strict about non top-tier jokes and speculation as /r/askscience.",
"provenance": null
},
{
"answer": "Thank you for continuing to keep /r/askhistorians one of the best-moderated subreddits out there. ",
"provenance": null
},
{
"answer": "After the Holocaust denial thread, I just wanted to say thanks, mods.",
"provenance": null
},
{
"answer": "These mostly have been the un-official rules for a while that were enforced by moderator and community consensus, we are just writing them down now.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3057888",
"title": "Standing Rules of the United States Senate",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 273,
"text": "There are currently 44 rules, with the latest revision having been adopted on January 24, 2013. (The Legislative Transparency and Accountability Act of 2006 lobbying reform bill introduced a 44th rule on earmarks). The stricter rules are often waived by unanimous consent.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18601975",
"title": "Axis & Allies Naval Miniatures: War at Sea",
"section": "Section::::Rules.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 430,
"text": "The advanced rules have been periodically updated, beginning in July 2007, with the Clarifications Document. The update introduces new concepts (such as ASW harassment and strafing penalties) not present in the original rules. Since at least one unit of the Task Force expansion set directly refers to these new rules, they are implicitly considered as part of the core ruleset, despite only having been published online to date.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1395193",
"title": "Continuing patent application",
"section": "Section::::Controversy around attempted changes by USPTO to continuation practice.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 263,
"text": "The rule changes were generally favored by software companies, electronics companies and US government agencies for the reasons given above. Those that favored the rule changes felt that said changes were consistent with the laws governing continuation practice.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1581598",
"title": "Federal Rules of Civil Procedure",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 383,
"text": "Effective December 1, 2009 substantial amendments were made to rules 6, 12, 13, 14, 15, 23, 27, 32, 38, 48, 50, 52, 53, 54, 55, 56, 59, 62, 65, 68, 71.1, 72 and 81. While rules 48 and 62.1 were added. Rule 1 (f) was abrogated. The majority of the amendments affect various timing requirements and change how some deadlines are calculated. The most significant changes are to Rule 6.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "920901",
"title": "Software versioning",
"section": "Section::::Significance in software engineering.\n",
"start_paragraph_id": 123,
"start_character": 0,
"end_paragraph_id": 123,
"end_character": 550,
"text": "In the 21st century, more programmers started to use a formalized version policy, such as the semantic versioning policy. The purpose of such policies is to make it easier for other programmers to know when code changes are likely to break things they have written. Such policies are especially important for software libraries and frameworks, but may also be very useful to follow for command-line applications (which may be called from other applications) and indeed any other applications (which may be scripted and/or extended by third parties).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21753764",
"title": "2009 NCAA Division I FCS football season",
"section": "Section::::Rule changes for 2009.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 229,
"text": "The NCAA football rules committee proposed several rule changes for 2009. Before these rules were officially adopted, the proposals had to be approved by the Playing Rules Oversight Panel. The rule changes include the following:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1581598",
"title": "Federal Rules of Civil Procedure",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 756,
"text": "The Rules, established in 1938, replaced the earlier procedures under the Federal Equity Rules and the Conformity Act (28 USC 724 (1934)) merging the procedure for cases, in law and equity. The Conformity Act required that procedures in suits at law conform to state practice usually the Field Code and common law pleading systems. Significant revisions have been made to the FRCP in 1948, 1963, 1966, 1970, 1980, 1983, 1987, 1993, 2000, and 2006. (The FRCP contains a notes section that details the changes of each revision since 1938, explaining the rationale behind the language.) The revisions that took effect in December 2006 made practical changes to discovery rules to make it easier for courts and litigating parties to manage electronic records.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
dix4bv | how can i help somebody with seasonal depression feel better? | [
{
"answer": "Phototherapy is noted to help people with seasonal depression. It involves basically shining a special lamp in your indoor space to help mitigate the lack of light that comes with autumn & winter.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "52316",
"title": "Mood disorder",
"section": "Section::::Causes.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 566,
"text": "A depressed mood is common during illnesses, such as influenza. It has been argued that this is an evolved mechanism that assists the individual in recovering by limiting his/her physical activity. The occurrence of low-level depression during the winter months, or seasonal affective disorder, may have been adaptive in the past, by limiting physical activity at times when food was scarce. It is argued that humans have retained the instinct to experience low mood during the winter months, even if the availability of food is no longer determined by the weather.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29605965",
"title": "Life satisfaction",
"section": "Section::::Factors affecting life satisfaction.:Seasonal effects.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 675,
"text": "A recent study analyzes time-dependent rhythms in happiness comparing life satisfaction by weekdays (weekend neurosis), days of the month (negative effects towards the end of the month) and year with gender and education and outlining the differences observed. Primarily within the winter months of the year, an onset of depression can affect us, which is called seasonal affective disorder (SAD). It is recurrent, beginning in the fall or winter months, and remitting in the spring or summer. It is said that those who experience this disorder usually have a history of major depressive or bipolar disorder, which may be hereditary, having a family member affected as well.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26657251",
"title": "Lighting for the elderly",
"section": "Section::::Health concerns.:Depression.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 556,
"text": "The elderly frequently cite depression as a notable ailment. Many researchers have linked the depression to seasonal affective disorder (SAD), and seasonal mood variations have been linked to lack of light. (SAD is markedly more frequent in extreme latitudes, such as the arctic and in Finland). Light therapy in the form of light boxes are a frequent non-drug treatment for SAD. Several preliminary studies have shown that light therapy is a positive treatment for depressive symptoms for older persons although more studies need to be done in this area.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "66811",
"title": "Seasonal affective disorder",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 362,
"text": "Seasonal affective disorder (SAD) is a mood disorder subset in which people who have normal mental health throughout most of the year exhibit depressive symptoms at the same time each year, most commonly in winter. Common symptoms include sleeping too much, having little to no energy, and overeating. The condition in the summer can include heightened anxiety.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29605965",
"title": "Life satisfaction",
"section": "Section::::Factors affecting life satisfaction.:Seasonal effects.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 803,
"text": "Seasonal affective disorder is hypothesized to be caused by the diminishing of the exposure to environmental light which can lead to changes in levels of the neurotransmitter chemical serotonin. Diminishing active serotonin levels increases depressive symptoms. There are currently a few treatment therapies in order to help with seasonal affective disorder. The first line of therapy is light therapy. Light therapy involves exposure to bright, white light that mimics outdoor light, counteracting the presumed cause of SAD. Due to the shifts in one's neurochemical levels, antidepressants are another form of therapy. Other than light therapy and antidepressants, there are several alternatives which involve agomelatine, melatonin, psychological interventions, as well as diet and lifestyle changes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37400993",
"title": "Hayim Association",
"section": "Section::::Current operations.:Summer Camp.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 732,
"text": "The summer months are especially difficult for these children, as they are in hospital undergoing treatments while their healthy friends are enjoying summer camp and other fun activities. The sick children who are undergoing painful treatments are forced to avoid summer pleasures: travelling in public areas, due to their weakened immune system (as a result of the treatments), swimming, visiting beaches, and participating in summer camps that offer a wide range of activities. To help compensate them, the Hayim Association brings summer camp into the pediatric oncology departments, where children can enjoy a wide range of special activities that cater to their special needs and limitations, uplifting the children's spirits.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8389",
"title": "Major depressive disorder",
"section": "Section::::Management.:Counseling.:Cognitive behavioral therapy.\n",
"start_paragraph_id": 77,
"start_character": 0,
"end_paragraph_id": 77,
"end_character": 202,
"text": "Cognitive behavioral therapy and occupational programs (including modification of work activities and assistance) have been shown to be effective in reducing sick days taken by workers with depression.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3uo8l2 | what does the "crisper" drawer in my refrigerator do and what is the benefit to putting my veggies in there? | [
{
"answer": "Actually, the crisper is the worst place to keep vegetables. They do better with air circulation and the temps higher in the fridge. The best things to keep in the crisper are raw meats, primarily to prevent raw meat juices from dripping and contaminating anything else. The drawers are relatively easy to remove and sanitize afterward.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "59338885",
"title": "Crisper drawer",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 457,
"text": "A crisper drawer (also known as a crisper) is a compartment located within a refrigerator designed to prolong the freshness of stored produce. Crisper drawers have a different level of humidity from the rest of the refrigerator, optimizing freshness in fruits and vegetables. They can be adjusted to both prevent the loss of moisture from produce, and also allow ethylene gas produced by certain fruits to escape, thus preventing them from rotting quickly.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59338885",
"title": "Crisper drawer",
"section": "Section::::Design and operation.:Reported public confusion.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 435,
"text": "In the UK, sources often use the term \"crisper drawer\" in conjunction with a nearby explanation, like the Vegetable Expert advice website calling them \"special compartments or 'crisper drawers' to store fruits and vegetables\", the consumers' organisation \"Which?\" calling it a \"salad crisper drawer [...] for storing your fruit and veg\", and the appliance replacement firm Partmaster.co.uk calling it a \"fridge/freezer salad crisper\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59338885",
"title": "Crisper drawer",
"section": "Section::::Design and operation.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 879,
"text": "Crisper drawers operate by creating an environment of greater humidity than the rest of the refrigerator. Many crisper drawers have a separate humidity control which closes or opens a vent in the drawer. When the vent is in the closed position, airflow is shut off, creating greater humidity in the drawer. High humidity is optimal for the storage of leafy or thin-skinned vegetables such as asparagus. When the vent is in the open position, airflow keeps humidity in the crisper drawer low, which is beneficial for storage of fruits such as pears and apples. Additionally, because some fruits emit high levels of ethylene gas, the open vent allows the ethylene gas to escape instead of causing these fruits to rot. The ability to separate low-humidity fruits from high-humidity vegetables using the different crisper drawers also prevents ethylene gas from damaging the latter.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59338885",
"title": "Crisper drawer",
"section": "Section::::Design and operation.:Reported public confusion.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 285,
"text": "Appliance manufacturers have reported that many refrigerator owners are unaware of the purpose or operation of crisper drawers. A 2010 survey commissioned by Robert Bosch GmbH found that 55 percent of surveyed Americans \"admit to not knowing how to use their crisper drawer controls\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24365459",
"title": "Central vacuum cleaner",
"section": "Section::::Tools and accessories.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 345,
"text": "The \"VacnSeal\" is an accessory intended to be installed on the underside of a kitchen cabinet, over a countertop used for food preparation. The nozzle of the device is used to evacuate excess air from a zipper lock plastic food storage bag (e.g. Ziploc), which is said by the manufacturer to preserve food freshness for a longer period of time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "467731",
"title": "Spatula",
"section": "Section::::In the kitchen.:American English usage.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 452,
"text": "In kitchen utensils, a \"spatula\" is any utensil fitting the above description. One variety is used to lift and flip food items during cooking, such as pancakes and fillets (known in British English as a fish slice). The blades on these are usually made of metal or plastic, with a wooden or plastic handle to insulate them from heat. A cookie shovel is a specialty spatula with a larger blade, made for scooping cookies off their pan or cooking sheet.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "496925",
"title": "List of eating utensils",
"section": "Section::::List of utensil types.:Disposable utensils.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 365,
"text": "Prepackaged products may come with a utensil intended to be consumed or discarded after using it to consume the product. For instance, some single-serve ice cream is sold with a flat wooden spade, often erroneously called a \"spoon\", to lift the product to one's mouth. Prepackaged tuna salad or cracker snacks may contain a flat plastic spade for similar purposes.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
310eor | Can moons of dramatically different size happily co-exist? | [
{
"answer": "There's nothing distinctively different between a planet's orbit around a star and a moon's orbit around a planet--there simply needs to be sufficient distance between them where \"far enough\" depends on the object's sizes.\n\nEven asteroids [can have moons.](_URL_0_)\n\nEssentially the question you need to ask is this, can the system I want to make well approximated by a two-body system, can I ignore the parent orbit? For an Earth-Moon-Sun system, the answer is yes, we can ignore the Sun, the system is stable. If we cannot, and we *have to* describe things as a 3-body system, then we risk always ejecting one of the bodies over long times as a three body system is able to easily transfer momentum around, eventually it'll wander into the phase space that unbinds one of the objects.\n\nSo to answer your question, yes. This is why we're able to have satellites around Earth as well as satellites around the Moon. One more bit of nuance, it seems like \"cheating\" to ignore the third body, in principle, it can contribute to momentum transfer albeit tiny ones. Luckily for us, there is plenty of situations where the \"effective stability time\" blows up to values magnitudes older than our universe, the Earth-Sun-Moon system is one such example.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3048284",
"title": "Moons of Pluto",
"section": "Section::::Characteristics.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 210,
"text": "An intense search conducted by \"New Horizons\" confirmed that no moons larger than 4.5 km in diameter exist at the distances up to 180,000 km from Pluto (for smaller distances, this threshold is still smaller).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "818487",
"title": "Moons of Neptune",
"section": "Section::::Characteristics.:Regular moons.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 674,
"text": "Only the two largest regular moons have been imaged with a resolution sufficient to discern their shapes and surface features. Larissa, about 200 km in diameter, is elongated. Proteus is not significantly elongated, but not fully spherical either: it resembles an irregular polyhedron, with several flat or slightly concave facets 150 to 250 km in diameter. At about 400 km in diameter, it is larger than the Saturnian moon Mimas, which is fully ellipsoidal. This difference may be due to a past collisional disruption of Proteus. The surface of Proteus is heavily cratered and shows a number of linear features. Its largest crater, Pharos, is more than 150 km in diameter.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "839504",
"title": "List of natural satellites",
"section": "Section::::Moons by primary.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 603,
"text": "Among the other dwarf planets, Ceres has no known moons. It is 90 percent certain that Ceres has no moons larger than 1 km in size, assuming that they would have the same albedo as Ceres itself. Haumea has two moons, Hi'iaka and Namaka, of radii ~195 and ~100 km, respectively. Makemake has one moon, discovered in April 2016. Eris has one known moon, Dysnomia. Accurately determining its size is difficult: one indicative estimate of its radius is , but on some assumptions could be as high as . The Kuiper belt object 90482 Orcus, believed to be a dwarf planet, was found in 2005 to have a natural satellite, later named Vanth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6395779",
"title": "Dwarf planet",
"section": "Section::::Planetary-mass moons.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 614,
"text": "Nineteen moons are known to be massive enough to have relaxed into a rounded shape under their own gravity, though some have since frozen out of equilibrium, and seven of them are more massive than either Eris or Pluto. These moons are not physically distinct from the dwarf planets, but do not fit the IAU definition of \"dwarf planet\" because they do not directly orbit the Sun. However, Alan Stern calls planetary-mass moons \"satellite planets\", one of three categories of planet, together with dwarf planets and classical planets. The term \"planemo\" (\"planetary-mass object\") also covers all three populations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "602678",
"title": "Extraterrestrial skies",
"section": "Section::::Uranus.\n",
"start_paragraph_id": 106,
"start_character": 0,
"end_paragraph_id": 106,
"end_character": 783,
"text": "None of Uranus's moons would appear as large as a full moon on Earth from the surface of their parent planet, but the large number of them would present an interesting sight for observers hovering above the cloudtops. The angular diameters of the five large moons are as follows (for comparison, Earth's moon measures on average 31′ for terrestrial observers): Miranda, 11–15′; Ariel, 20–23′; Umbriel, 15–17′; Titania, 11–13′; Oberon, 8–9′. Unlike on Jupiter and Saturn, many of the inner moons can be seen as disks rather than starlike points; the moons Portia and Juliet can appear around the size of Miranda at times, and a number of other inner moons appear larger than Oberon. Several others range from 6′ to 8′. The outer irregular moons would not be visible to the naked eye.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1275987",
"title": "Moon illusion",
"section": "Section::::Proof of illusion.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 260,
"text": "Between \"different\" full moons, the Moon's angular diameter can vary from 29.43 arcminutes at apogee to 33.5 arcminutes at perigee—an increase of around 14% in apparent diameter or 30% in apparent area. This is because of the eccentricity of the Moon's orbit.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19363373",
"title": "Moons of Haumea",
"section": "Section::::Surface properties.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 487,
"text": "The sizes of both moons are calculated with the assumption that they have the same infrared albedo as Haumea, which is reasonable as their spectra show them to have the same surface composition. Haumea's albedo has been measured by the Spitzer Space Telescope: from ground-based telescopes, the moons are too small and close to Haumea to be seen independently. Based on this common albedo, the inner moon, Namaka, which is a tenth the mass of Hiʻiaka, would be about 170 km in diameter.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
inqzl | Do carbon based filters (such as brita) remove essential minerals that the human body needs? | [
{
"answer": "First: It's important to note that Brita and similar water pitchers are not just an activated carbon pitcher. There is also an ion exchange resin present in the filter. Think of this ion exchange resin as working just like a water softener (that is, because it does work exactly like a water softener).\n\nActivated carbon is really good at removing organic materials. It's safe to say that the vast majority of organics, you don't really want in your water anyway. \n\nThe other reason to use home-filtered water is if your water is hard -- which is where the ion exchange resin comes in. This is the part of the filter that removes any minerals. The way these resins work is that they are polymers which have counter-ions attached to them. Millions and gazillions of counter ions. The polymer is set up to have a negative charge and the counter ions are set up to be positive, typically sodium (which is why your home water softener takes salt -- ie NaCl, sodium chloride). There are more advanced water softeners and filters that will also exchange anions (negatively charged ions in water).\n\nWhat happens is that as the hard water flows through the pitcher, it interacts with the polymer. The polymer is designed so that it will bind to the minerals common in hard water (calcium, magnesium etc) better than it will to sodium. So it snags the \"hard\" tasting ions and replaces them with sodium. Eventually the filter is depleted of sodium and so the water starts to taste funny again because the resin isn't doing a good job of removing the calcium and magnesium. Home water softeners remedy this by running a highly concentrated salt water solution over the resin to force the calcium and magnesium out and replace the sodium content.\n\nFluoride, however, is a negatively charged ion and is not affected by the resins commonly available for home use. This is good, because fluoride is good for your teeth.\n\nOther 'trace' minerals that you need -- hard water doesn't contain nearly enough calcium to meet dietary needs. A lot of the other even tracer metals are poorly acquired through water. Here I'm talking about some of the ones that most people don't even think about but that [bullshit pseudo-science](_URL_0_) companies are happy to sell you at a high mark up. \n\nTo wrap up, any biologist who knows better is welcome to correct me on this, but basically: No, there are no health benefits or hurts. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "12141201",
"title": "Particle therapy",
"section": "Section::::Carbon-ion radiotherapy.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 665,
"text": "Carbon ion therapy (CIRT) uses particles more massive than protons or neutrons. Carbon-ion radiotherapy has increasingly garnered scientific attention as technological delivery options have improved and clinical studies have demonstrated its treatment advantages for many cancers such as prostate, head and neck, lung, and liver cancers, bone and soft tissue sarcomas, locally recurrent rectal cancer, and pancreatic cancer, including locally advanced disease. It also has clear advantages to treat otherwise intractable hypoxic and radio-resistant cancers while opening the door for substantially hypo-fractionated treatment of normal and radio-sensitive disease.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4111",
"title": "Bioleaching",
"section": "Section::::Compared with other extraction techniques.:Advantages.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 205,
"text": "BULLET::::- Economical: Bioleaching is in general simpler and, therefore, cheaper to operate and maintain than traditional processes, since fewer specialists are needed to operate complex chemical plants.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1246718",
"title": "Organic matter",
"section": "Section::::Aquatic.:Water purification.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 844,
"text": "The same capability of natural organic matter that helps with water retention in soil creates problems for current water purification methods. In water, organic matter can still bind to metal ions and minerals. These bound molecules are not necessarily stopped by the purification process, but do not cause harm to any humans, animals, or plants. However, because of the high level of reactivity of organic matter, by-products that do not contain nutrients can be made. These by-products can induce biofouling, which essentially clogs water filtration systems in water purification facilities, as the by-products are larger than membrane pore sizes. This clogging problem can be treated by chlorine disinfection (chlorination), which can break down residual material that clogs systems. However, chlorination can form disinfection by-products.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32845594",
"title": "Resource recovery",
"section": "Section::::Materials used as a source.:Wastewater and excreta.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 832,
"text": "BULLET::::- Fertilizing nutrients: Human excreta contains nitrogen, phosphorus, potassium and other micronutrients that are needed for agricultural production. These can be recovered through chemical precipitation or stripping processes, or simply by use of the wastewater or sewage sludge. However, reuse of sewage sludge poses risks due to high concentrations of undesirable compounds, such as heavy metals, environmental persistent pharmaceutical pollutants and other chemicals. Since the majority of fertilizing nutrients are found in excreta, it can be useful to separate the excreta fractions of wastewater (e.g. toilet waste) from the rest of the wastewater flows. This reduces the risk for undesirable compounds and reduces the volume that needs to be treated before applying recovered nutrients in agricultural production.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "633963",
"title": "Liposome",
"section": "Section::::Dietary and nutritional supplements.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 705,
"text": "A very small number of dietary and nutritional supplement companies are currently pioneering the benefits of this unique science towards this new application. This new direction and employment of liposome science is in part due to the low absorption and bioavailability rates of traditional oral dietary and nutritional tablets and capsules. The low oral bioavailability and absorption of many nutrients is clinically well documented. Therefore, the natural encapsulation of lypophilic and hydrophilic nutrients within liposomes has made for a very effective method of bypassing the destructive elements of the gastric system and aiding the encapsulated nutrient to be delivered to the cells and tissues.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32286355",
"title": "Fibrolytic bacterium",
"section": "Section::::General applications.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 387,
"text": "In the chemical industry, these enzymes have allowed the development of new detergents and washing-up liquids; in the paper industry they play a very important role in bleaching processes, minimizing toxicity and being more economic; and in biotechnological research, the use of the cellulose binding domains from fibrolytic enzymes has allowed the purification of recombinant proteins.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3155923",
"title": "Hot-melt adhesive",
"section": "Section::::Materials used.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 201,
"text": "Mass-consumption disposable products such as diapers necessitate development of biodegradable HMAs. Research is being performed on e.g., lactic acid polyesters, polycaprolactone with soy protein, etc.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2lploh | why don't credit cards just use 19 digits instead of 16 digits plus 3 digit "security code"? | [
{
"answer": "The CVV is separate from the card number, so if people record your card number (such as with a card skimmer), they don't have access to the 3 digit security code.\n\nIf you made it 19 digits, then 1 swipe and they have all they need to literally take all your cash.",
"provenance": null
},
{
"answer": "It helps cut down on the fraudulent use of credit cards by hackers.\n\nThe security code is often not kept or used in face-to-face transactions (i.e. buying something with your credit card at the supermarket.)\nIf the supermarket's records were hacked, the hackers would not get your security code and therefore could not use it in card-not-present transactions (i.e. over the phone, online.)",
"provenance": null
},
{
"answer": "As a few people have already pointed out, it keeps numbers in different places on the card (except for AmEx) so a person can't take a quick look or picture and get all the info needed for many transactions, but I also wanted to add that (for Visa, MC, Amex, and Discover) it's not embossed into the card, so if a store has to imprint it (we still do at my place of employment in certain situations) it's not imprinted along with the card number, name, and expiration date.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "337220",
"title": "Personal identification number",
"section": "Section::::Financial services.:PIN length.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 513,
"text": "The international standard for financial services PIN management, ISO 9564-1, allows for PINs from four up to twelve digits, but recommends that for usability reasons the card issuer not assign a PIN longer than six digits. The inventor of the ATM, John Shepherd-Barron, had at first envisioned a six-digit numeric code, but his wife could only remember four digits, and that has become the most commonly used length in many places, although banks in Switzerland and many other countries require a six-digit PIN.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "329608",
"title": "Telephone numbers in Greece",
"section": "Section::::Overview.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 201,
"text": "Two-digit codes are used with eight-digit subscriber numbers, three-digit codes with seven-digit numbers, and four-digit codes with six-digit numbers so the full telephone number is always ten digits.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17182301",
"title": "Credit card",
"section": "Section::::Technical specifications.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 288,
"text": "In addition to the main credit card number, credit cards also carry issue and expiration dates (given to the nearest month), as well as extra codes such as issue numbers and security codes. Not all credit cards have the same sets of extra codes nor do they use the same number of digits.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28726259",
"title": "Phone hacking",
"section": "Section::::Techniques.:Handsets.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 513,
"text": "An analysis of user-selected PIN codes suggested that ten numbers represent 15% of all iPhone passcodes, with \"1234\" and \"0000\" being the most common, with years of birth and graduation also being common choices. Even if a four-digit PIN is randomly selected, the key space is very small (formula_1 or 10,000 possibilities), making PINs significantly easier to brute force than most passwords; someone with physical access to a handset secured with a PIN can therefore feasibly determine the PIN in a short time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28737625",
"title": "ISO 9564",
"section": "Section::::Part 1: Basic principles and requirements for PINs in card-based systems.:Other specific PIN control requirements.:PIN length.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 214,
"text": "The standard specifies that PINs shall be from four to twelve digits long, noting that longer PINs are more secure but harder to use. It also suggests that the issuer should not assign PINs longer than six digits.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2741879",
"title": "HMAC-based One-time Password algorithm",
"section": "Section::::Algorithm.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 325,
"text": "6-digit codes are commonly provided by proprietary hardware tokens from a number of vendors informing the default value of \"d\". Truncation extracts 31 bits or formula_1 ≈ 9.3 decimal digits, meaning, at most, \"d\" can be 10, with the 10th digit providing less extra variation, taking values of 0, 1, and 2 (i.e., 0.3 digits).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41794922",
"title": "Honey encryption",
"section": "Section::::Example.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 445,
"text": "An encrypted credit card number is susceptible to brute-force attacks because not every string of digits is equally likely. The number of digits can range from 13-19, though 16 is the most common. Additionally it must have a valid IIN and the last digit must match the checksum. An attacker can also take into account the popularity of various services: an IIN from MasterCard is probably more likely than an IIN from Diners Club Carte Blanche.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4c772o | why are the brussels and paris attacks so publicized and mourned over when others, like the current pakistani bombings, kill more and do more damage? | [
{
"answer": "Attacks in Western nations are discussed more in Western media. Attacks in nations that have been experiencing terrorism and war daily for decades, not so much.",
"provenance": null
},
{
"answer": "It's pretty much accepted that that part of the world is basically a warzone. Nobody's too surprised if something explodes or people die there.\n\nHowever if it happens in a modern major city, that IS a nasty shock. That kind of thing isn't \"supposed\" to happen in a \"civilised area\".\n\nRemember in the Dark Knight how the Joker talked about nobody panicking when things went \"according to plan\" even if the plan is horrifying? That's exactly it. You expect bombs to go off in a warzone, you don't expect them to go off in the middle of a major European city.",
"provenance": null
},
{
"answer": "Also I think that we Westerners have become very desensitised to anything bad that happens in the Middle East regions. Partly due to the way the media/Hollywood have reported/portrayed the violence/wars etc. from those regions for years, it goes hand in hand in most people's minds, so it doesn't come as a shock to the system anymore. Also there is a growing them-and-us attitude; we feel more connected to fellow Western nations.",
"provenance": null
},
{
"answer": "Speaking from a European perspective (sorry OP, I have no idea where you're from but I'm assuming from your username that it's the states) - Brussels and Paris are two things:\n\na) They're not considered warzones - Pakistan is both at war, and in the middle of an area from where we commonly hear of wars, so it doesn't register in a lot of people's heads as shocking. Brussels and Paris are considered peaceful, safe areas, so it's a much bigger shock.\n\nb) They're a lot closer to home. Pakistan? Syria? We can accept that bad events there are tragic and terrible, but we can dismiss them easily because they can be considered foreign places, with which your average person in my country has fairly limited interaction. It's why people can sometimes get so angry about, for example, Syrian refugees coming into the country, and feel no sympathy - the bad events happened far away in a country we don't know very much about, but the refugees are here in our front yard.\n\nAnd that's why we get scared about stuff like the Brussels attack, or the Paris attack, or the London bombings back in 2005 - these things are happening a matter of hours away from us.\n\nThat doesn't make them any more tragic, or the deaths any more deserving of being mourned than the deaths happening in the Middle East, and I'm not trying to justify these views or reactions, but that's the way it is, and you're right, it's a form of Eurocentrism on our part. It's more surprising, more shocking and a lot scarier, and reminds us that we are involved in this conflict more than we would sometimes like to admit.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "49896050",
"title": "2016 Brussels bombings",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 333,
"text": "The perpetrators belonged to a terrorist cell which had been involved in the November 2015 Paris attacks. The Brussels bombings happened shortly after a series of police raids targeting the group. The bombings were the deadliest act of terrorism in Belgium's history. The Belgian government declared three days of national mourning.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49896050",
"title": "2016 Brussels bombings",
"section": "Section::::Reactions.\n",
"start_paragraph_id": 61,
"start_character": 0,
"end_paragraph_id": 61,
"end_character": 604,
"text": "Governments, media outlets, and social media users received criticism in some media and academic analysis for their disproportionate emphasis placed on the attacks in Brussels over similar attacks in other countries, particularly in Turkey, which occurred days before. Similarly, reactions to the November 2015 Paris attacks were viewed as disproportionate in comparison to those of earlier bombings in Beirut. According to Akin Unver, a professor of international affairs at Istanbul's Kadir Has University, being \"selective\" about terrorism is counterproductive to the global counterterrorism efforts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51451144",
"title": "2016 Brussels National Institute of Criminology fire",
"section": "Section::::Background.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 455,
"text": "Brussels has been on high alert since the November 2015 Paris attacks and the 2016 Brussels bombings. The Brussels police conducted major police operations in early 2016 against terrorism suspects. Salah Abdeslam and other suspects were arrested in these raids. Because of the police raids the Belgian crime rate has dropped but the illegal weapon trade, number of armed robberies and terrorism-related incidents in Brussels have significantly increased.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49896050",
"title": "2016 Brussels bombings",
"section": "Section::::Background.:Terrorist cells in Brussels.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 829,
"text": "Before the bombings, several Islamist terrorist attacks had originated from Belgium, and a number of counter-terrorist operations had been carried out there. Between 2014 and 2015, the number of wiretapping and surveillance operations directed at suspected terrorists by Belgian intelligence almost doubled. In May 2014, a gunman with ties to the Syrian Civil War attacked the Jewish Museum of Belgium in Brussels, killing four people. In January 2015, anti-terrorist operations against a group thought to be planning a second \"Charlie Hebdo\" shooting had included raids in Brussels and Zaventem. The operation resulted in the deaths of two suspects. In August 2015, a suspected terrorist shot and stabbed passengers aboard a high-speed train on its way from Amsterdam to Paris via Brussels, before he was subdued by passengers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49902213",
"title": "Reactions to the 2016 Brussels bombings",
"section": "Section::::International response.:UN member and observer states or entities.\n",
"start_paragraph_id": 98,
"start_character": 0,
"end_paragraph_id": 98,
"end_character": 225,
"text": "BULLET::::- : Prime Minister Aleksandar Vučić said that what happened in Brussels disasters and is horrified by these events, but believes Europe and the world will be able to find the best response to the terrorist attacks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48619333",
"title": "Brussels lockdown",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 724,
"text": "From 21 November to 25 November 2015, the government of Belgium imposed a security lockdown on Brussels, including the closure of shops, schools, public transportation, due to information about potential terrorist attacks in the wake of the series of coordinated terrorist attacks in Paris by Islamic State of Iraq and the Levant on November 13. One of the perpetrators of the attack, Belgian-born French national Salah Abdeslam, was thought to be hiding in the city. As a result of warnings of a serious and imminent threat, the terror alert level was raised to the highest level (four) across the Brussels metropolitan area, and people were advised not to congregate publicly, effectively putting the city under lockdown.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49902213",
"title": "Reactions to the 2016 Brussels bombings",
"section": "Section::::International response.:UN member and observer states or entities.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 520,
"text": "BULLET::::- : President Hassan Rouhani has \"firmly condemned\" terrorist bomb explosions in the Belgian capital city of Brussels: “Firmly condemn terrorist attacks in Brussels. Deepest condolences to the government and people of Belgium, especialy those who lost loved one“, saying Rouhani in his Twitter account. Foreign Ministry spokesperson, Hossein Ansari also condemned the twin blasts in Brussels, stressing the importance of adopting all-embracing efforts to fight terrorism which is threatening the entire world.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
45lpra | where and how does all the energy created by power plants get stored? or is the power being generated as it's needed? | [
{
"answer": "The latter, mostly. In the case of plants that use some sort of fuel, the energy is *already* stored in whatever fuel is being used. Generating energy from the fuel first, and then storing it back inside something else to be extracted later is a *huge* waste of everything. When you have too much power, you use less fuel. When you have too little, you use more; it's generally as simple as that.\n\nIn the case of solar plants and such, however, it gets interesting. Storing large amounts of energy is one of the biggest problems of today's engineering and there is considerable research being done about this very issue. There are some ways of storing the energy that a solar plant generates during the day, but they aren't exactly ideal. It usually revolves around conversion of electricity into some other form of energy. Like using the power from a solar plant to pump very large amounts of water uphill, which is essentially converting electrical energy into potential energy. You later let the water flow downhill and turn some turbines to generate electricity again. Or you could use the power from your solar plant to pump air inside a huge tank to create pressure, and when needed you let the highly pressurized air out and, again, turn turbines with it. Obviously there are considerable losses involved in such techniques, but energy storage technology is still a developing one. We're getting there.",
"provenance": null
},
{
"answer": "I just had a class on it yesterday! Just to be clear I'm just talking about non-renewable resources and electricity, I'm not sure if what I say here is also applicable to renewable resources plants.\n\n\nElectricity in big quantities (like in power plants) is not stored. It can only be produced. It is only stored in small quantities (batteries, for example) but so far people haven't created an efficient way to store big quantities of electricity. \n\n\n",
"provenance": null
},
{
"answer": "If you are referring to the large power plants (coal/lignite), they operate on supply/demand. There isn't really any noticeable storage of the energy and the electricity generated is immediately put onto the grid. \n\nIf there is more supply than demand (e.g. a nice day and no one needs heating/cooling) the energy may get sold to other adjacent grids operated by other companies that may need it. In the case of excess supply, the power plant will decrease production to reach an equilibrium.",
"provenance": null
},
{
"answer": "While everyone has explained a lot, I want to add another interesting fact.\n\nElectricity is, in conventional generators, created by big turbines and generators. That means, if there's more energy consumed than produced, these turbines are slowed. When the reverse happens, they start to speed up.",
"provenance": null
},
{
"answer": "Just to add that there is a common way that large amounts of energy is stored for later use - [pumped storage hydroelectricity](_URL_0_).\n\nBasically when there's low demand for energy the excess energy is used to pump water from a reservoir into a 2nd reservoir that's higher up (eg. at the top of a hill). When energy demand increases they stop pumping and let the water flow down from the higher reservoir back into the lower reservoir, driving a generator as the water passes.",
"provenance": null
},
{
"answer": "So, my dad runs a series of hydroelectric dams on a river...his system is connected to a dam on a bay heading into Lake Michigan. There is a balance to maintain, too much can blow apart the system, too little causes brownouts. During heavy rain, the bay station will literally burn the river generated excess by pumping some lake water back into the bay! It's a wild system, one of his river dams is over 110 years old!!!! ",
"provenance": null
},
{
"answer": "Generally it's being used at capacity. Storage is one of the biggest problems with renewable energies. How do you take the solar power during the day, and cost effectively store it for use at night? Answer that question and you will be very rich.",
"provenance": null
},
{
"answer": "Several good answers here, batteries are costly and don't store much energy (on a large scale, and this is ELI5). Hydroelectric also works.\nFrom work experience, I'd like to offer this:\nI live in Texas, which basically has its own Electric Grid. I often drive to west Texas (think Midland/Odessa area), where they produce more electricity from wind than any other state in the US. If I take a slight detour, I will pass a wind generated power station that is not even connected to that grid, and charges all those oil/gas places on a rate that changes every 15 minutes. If the wind does die, diesel generators are brought online, and the cost of electricity can easily go from 8 cents per kWh, to over a dollar per kWh.",
"provenance": null
},
{
"answer": "There are many factors, but the main one is current draw and voltage. These plants produce electricity at very high voltages (several hundred thousand volts on the main lines) and not so high current. The current we use in the household comes via step-down transformers (current and voltage are inversely proportional), and while voltage is relatively constant, current is drawn, not pushed, meaning if it isn't needed it's not there. What you perceive as a heavy load (say a 30 amp water heater) is nothing but milliamps to the grid. There is seldom an actual instant heavy spike in the demand on the generators supplying our grid, and it is very easily accounted for. If there were actually no load draw on the generator it simply would not make the current, but would ensure the voltage is present.",
"provenance": null
},
{
"answer": "Yay, I actually know this. My father was head instrumentation specialist for a power plant. I worked at the plant during the summers while in college as well. I was fascinated by how it all worked and asked a lot of questions. \n\nThere is little to no storage of power. You have two types of power plants: baseload and load following. Baseload plants run at max capacity all of the time, and load-following plants shut down when power isn't needed and start back up to handle peak power consumption during the day. You also have auxiliary power stations such as hydroelectric dams and whatnot. \n\nThe load is absorbed and controlled at each power plant and power station. The transformers can handle minor fluctuations, but the majority of the power output is controlled by the steam turbine of each power plant. The steam turbine for a coal plant usually turns at around 3000 rpm, but can speed up or slow down to adjust power output. A power grid usually has many power plants feeding the grid, and each plant is connected. Many plants have installed equipment that syncs each plant up so each plant knows what the others are doing, to optimize and level power output.\n\n30 years ago this was not always the case. My dad told me a story about how a long time ago some guy at one of the plants accidentally engaged a turbine before it was fully up to speed. Because it was not up to speed it actually started pulling power instead of delivering it. The instantaneous demand for massive amounts of power shook every power plant on the grid for hundreds of miles. My dad said it sounded like a giant bomb went off in every plant connected to the grid. \n\n ",
"provenance": null
},
{
"answer": "Having designed electric utility generation control systems, I can add on. There is no storage to meet short term demand changes (pumped hydro is another topic). The generating mass of the interconnected grid absorbs short term changes, showing up as a very small change in line frequency. A utility adjusts their output to meet demand as follows:\n\nA utility is connected to the grid through power lines, called tie lines. If the utility is not buying or selling power then the sum of the power flow through the tie lines should be zero. This sum is called the Area Control Error.\n\nIf the Area Control Error is negative, then generator output is increased, and if positive output is decreased, until the control error is zero.\n\nThere is a bias applied to the control error based on grid line frequency. If the line frequency is below target (60hz in North America), then the bias will cause over generation, and vice versa, with the intent that the grid frequency stays at target.\n\n",
"provenance": null
},
{
"answer": "Electric power grids work like enormous machines with many power plants feeding into a transmission system to provide needed power, like streams coming together in a river. American utilities are required to have a certain percentage of \"spinning reserve\", which is power generators online to provide more than the current usage. Some power can be stored with hydroelectric dams and reservoirs that can adapt to changes in load; some even use pumps to return water during periods of low demand and high production. Other power plants can be turned on as needed to meet power requirements. Baseload plants are nuclear, coal and natural gas plants that produce steam and take a long time to bring up to speed and turn off. Gas combustion turbines respond faster, and reciprocating engines can have the fastest response to changing loads since they are like automobile engines that can be turned on and off easily and provide varying power as needed. Energy storage is one of the major goals in power engineering, with all types of battery and other systems being developed and deployed to match the different power inputs with customer needs. \nSource: worked in utilities for 20 years and renewable energy for 35 years. Also living off-grid for 25 using solar, batteries and backup generators to supply all my power needs. ",
"provenance": null
},
{
"answer": "Generators have a certain capacity, usually expressed in MW, megawatts, which is a unit of *power*. Generation systems put a certain voltage, aka potential difference, onto the transmission systems, and this voltage is a constant. The more load/draw on the system, the greater the current that flows through the conductors. Since power is the product of voltage*current, the generator is capable of providing any amount of current up to a certain MW capacity. Beyond that capacity the generator would sustain heat damage, but protective systems would cut off some load in order to prevent this in a properly coordinated system. \n\nThat's how I understand it, and I am oversimplifying things by leaving out reactive and apparent power. ",
"provenance": null
},
{
"answer": "A few power plants actually have a pretty interesting strategy- they'll purposefully build the plant at the bottom of a mountain and by a large lake. When the plant's generating too much electricity it will use that power to pump a bunch of water from the lake to a dam at the top of the mountain. When energy demand is high, they'll open the dam, letting all of the water flow down the mountain and make hydroelectricity.",
"provenance": null
},
{
"answer": "Power Generation Engineer here, created username just to answer this because it's the only ELI5 I have ever been qualified to answer (have built and commissioned power plants all over the world for Westinghouse and Siemens for 20+ years).\n\nA thorough explanation is probably beyond ELI5 territory and I'm probably going to get hammered for this response length but screw it, it's a throwaway and I don't need the affirmation. There are plenty of correct pieces of answers in the comments. If you strung them together you would have a hell of a complete answer. In short, unless the electricity is produced and stored in grid-connected battery banks or some other means of DIRECTLY storing electricity without first converting it into another form, then it is produced \"on demand\", as it's needed.\n\nHow well it is done depends on many factors, including but not limited to the type of power generation method employed, its age, the intelligence of the plant control system or systems that operate it, the intelligence of the transmission systems that the electricity is transported on, and the level of connectivity that the individual power plants have with the grid operator (Independent System Operators in the USA) and with each other.\n\nIn the USA, we run the gamut in terms of technology and capabilities employed across all these areas, but this has been the only reliable method of delivering bulk electricity to a mass consumer base for well over 100 years (thanks George Westinghouse and Nikola Tesla - you glorious bastards).\n\nAlso, the collective \"grid\" is capable of producing more than what is actually needed at any given time. It's called spinning reserve. Let's say a coal plant in the SW USA is capable of producing 2000MWe. At any given time it might only be producing 800 to 1500 MWe. Its boilers are generating steam, its steam turbines are spinning the generators and all systems are effectively on line. 
If the load on the electrical grid were to increase, it's generally gradual in nature and rarely instantaneous. The power plant responds by increasing the fuel input to the boilers to increase steam production and thereby increase the power generated by the steam turbine. This corresponds to an increase in the power output at the generator terminals. This process happens relatively quickly even for an old coal plant, and usually faster than the corresponding load increase on the grid. This makes it possible for the power plant to match a load increase imposed by the electrical grid within milliseconds to seconds. In the case of the SW USA coal plant, it has an additional 800MW of capability that is available but not being utilized. The point is that yes, power plants will always follow a load change on the grid, but the grid, for the most part, can tolerate the disturbance until the generators can catch up.\n\nA good analogue to this is a car driving across a flat plain that suddenly encounters a hill. In order to maintain speed going up the hill the driver has to press down on the accelerator to increase the power output from the engine. The driver presses the accelerator pedal IN RESPONSE to a decrease in speed caused by an increased load on the engine. The decreased speed, although noticeable, is quickly resolved because the driver (control system) and the car (generator) are capable of responding quickly, almost in real time.\n\nAs for grid frequency, this isn't only affected by load on the electrical system but by the TYPE of load. The points made by others in the comments are all valid, but I want to add that the frequency of the grid is important when discussing VARS, which aside from voltage is the other major control variable that grid operators monitor.....but that's another ELI5.\n",
"provenance": null
},
{
"answer": "I work in this industry.\n\nGenerally, power is generated as it is needed. A control authority can give the power plants orders to increase or decrease output as needed. Some generators can be started and stopped very quickly, and provide what is called \"reserve power\". Others can change their output very easily and provide what is called \"regulation\". \n\nHowever, there is some storage, but it's a minority. Here in my home state, we have a power station in the mountains that is what is called a pumped storage station. They have a reservoir at the top of a hill and another at the bottom. When power is cheap, it gets stored by pumping water up to the upper reservoir. When they are asked to provide reserve power, they let the water flow down to the lower reservoir through some turbines, and they can bring a little over 1 GW of power to the grid in just 90 seconds . . . just shy of the amount needed to run a time machine.\n",
"provenance": null
},
{
"answer": "Mostly power is generated as it's needed. I work at a balancing authority that basically tells any given power plant \"Hey, we need this much power at this time\" and they'll produce what the market demands.",
"provenance": null
},
{
"answer": "Utilities have several tools to match generation to load and (new battery tech notwithstanding) energy is not stored on any large scale. Spinning reserve, VAR support, and quick turn up generation (combustion turbines and pumped storage) are used to handle the transitions in load. The plant operators and transmission dispatchers use voltage and frequency as indicators to determine which tool to use. \n\nWhen load is greater than generation the system voltage will decrease and the frequency will decline. Conversely, if the generation is in excess to the load the system voltage and frequency will increase.\n\nThe operators will run their generators at 80% (for example) anticipating going up to 100% to meet the peaks in demand. This way when the load goes up the demand can be met at the generating site (adding more excitation to the generator). As the load goes down the operators will reduce their excitation voltage to maintain the system voltage at an ideal level. Some utilities forecast this data to determine which generators will run at which levels based on fuel costs and maintenance schedules.\n\nVAR support works two ways, mainly through capacitors and reactors. Closing in the capacitors on the transmission system delivers VAR support closer to the load which raises the voltage further away from generation and allows the generator to run closer to unity which is a big scary math word for 'more efficiently'. Conversely, if the voltage gets too high (ie too much generation given the load) the power plants can lower the system voltage by opening capacitors and closing in reactors. Reactors simulate load which can be useful when dialing down the system voltage and loosely speaking can be thought of an opposite to capacitors for the purpose of voltage control.\n\nQuick turn up generation (pumped storage and gas combustion turbines, for example) are also useful in matching generation to load. 
It takes days or weeks to turn up nuclear or coal whereas these can be turned up in hours. Raccoon Mountain near Chattanooga, Tennessee is an example of pumped storage, but even in this case nothing is stored but the fuel. It is actually a net energy loss every time they fill the lake, but it is a financial success given the swing in energy costs throughout the day. Pump at midnight and release at 5pm for maximum return.\n\nDC systems (solar, wind) have made strides in pushing large battery banks into the limelight, but to the best of my knowledge the battery tech hasn't advanced enough to make this a reasonable financial option on any meaningful scale.",
"provenance": null
},
{
"answer": "The situation is soon to change from 2 to 1. Home and grid based energy storage will reduce pollution and enable residential power generation.\n\nHave a look through _URL_0_\n\nand the information on this company's page will supplement on near future changes in power grids:\n\n_URL_1_",
"provenance": null
},
{
"answer": "Made as needed.\n\nBasically it's like your car, you start going up a hill, you put your foot down harder to stay at the same speed and lift off as you go over the hill.\n\nInstead of a hill, it's how much power people are using. They just control \"the throttle\" at the power station much like you do in a car to keep it going the same speed.\n\nThat should be a nice eli5 version",
"provenance": null
},
{
"answer": "Currently it's mostly generated as needed. There are base load plants that create energy at a constant rate and then there are peaker plants that balance the load during peak time. Peaker power plants are by and large run on natural gas and can be powered on and off relatively quickly when compared to nuclear or coal power plants. \n\nEnergy storage has been used for peak time load balancing for some time but it's not widely used. The most common method used so far has been pumped hydro, but it's not cost effective or technically possible everywhere. Lately there has been a huge incentive to expand utility scale energy storage, primarily due to renewable energy, which is intermittent. Peak load balancing using stored energy could replace peaker plants and possibly even coal and nuclear plants if enough renewable sources are used.\n\nFor home use batteries are also making more financial sense, not just to store electricity from PV panels but also to have a backup in case of power failure, or for storing cheap electricity for later use when it's sold at a higher cost during peak consumption from the grid.",
"provenance": null
},
{
"answer": "Power isn't stored. A key principle of a power grid is that demand must match supply to balance the system. \n\nThe Power Grid operator can call on dormant units to generate or, if there is an excess of supply, the grid can tell generating stations to reduce output or even shut down.\n\nTo fine tune the balance of the power grid (matching demand to supply) larger power stations have a facility called Frequency Response. The Power Grid Operator uses Frequency Response (FR) to fine tune supply to match demand. A Power Station Operator of a unit providing FR might see his generation output shift up and down a couple of MW to help balance the power grid.\n\nSource: I work in the UK energy industry ",
"provenance": null
},
{
"answer": "Electrical engineer here. Many good answers on here, so to summarize into a proper ELI5:\n\nSystem basics: Imagine a team of hundreds of horses drawing thousands of small carts all hooked together behind each other. If one horse dies or an extra cart is added it won't have much of an effect on the system as a whole. If all the horses stop at the same time the whole system will stop nearly instantly.\n\nRegulation: To regulate the system there is a conductor with a whip to make the horses put in some extra effort or slack off a bit. He also has some quick release systems to disconnect some of the carts or groups of carts or even some of the horses if they pull too hard. The conductor makes sure the system is always pulled at exactly the same speed (he has a very fancy speedo) and manages connections/disconnections.\n\nLoad prediction: After a few years of this the conductor can start predicting when people will be hooking on their carts or taking them off, and so can plan when to add more horses to his team or take some off to rest them.\n\nEnergy storage: At times there are too few horses to pull all the carts. For this they have bred a special wind-up horse. This horse is actually a drag on the other horses but winds itself up while it's dragging the others down. But it's okay because it can also help pull when there are too few horses. So the conductor can tell it to either help pull when there are too few horses or drag the other horses when there are too many horses. This way the conductor can utilize the real horses to their max all the time instead of letting them go to pasture and still having to pay for their upkeep.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1646838",
"title": "Grid energy storage",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 441,
"text": "Grid energy storage (also called large-scale energy storage) is a collection of methods used to store electrical energy on a large scale within an electrical power grid. Electrical energy is stored during times when production (especially from intermittent power plants such as renewable electricity sources such as wind power, tidal power, solar power) exceeds consumption, and returned to the grid when production falls below consumption.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38861135",
"title": "List of energy storage projects",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 292,
"text": "Many individual energy storage projects augment electrical grids by capturing excess electrical energy during periods of low demand and storing it in other forms until needed on an electrical grid. The energy is later converted back to its electrical form and returned to the grid as needed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1646838",
"title": "Grid energy storage",
"section": "Section::::Forms.:Air.:Compressed air.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 342,
"text": "One grid energy storage method is to use off-peak or renewably generated electricity to compress air, which is usually stored in an old mine or some other kind of geological feature. When electricity demand is high, the compressed air is heated with a small amount of natural gas and then goes through turboexpanders to generate electricity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34431547",
"title": "Jixi Pumped Storage Power Station",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 1097,
"text": "The pumped-storage power plant will operate by using two reservoirs to generate power, an upper and a lower. During periods of high energy demand, water is released from the upper reservoir and sent to the power station to generate electricity. After power generation, the water is discharged into the lower reservoir. When energy demand is low, such as at night, water is pumped back up to the upper reservoir as stored energy. The process repeats as needed and the pump-generators serve the dual-role of both pumping and generating electricity. Forming the lower reservoir will be a tall and long concrete-face rock-fill dam (CFRD). Its storage capacity will be of which is active (or 'usable') for pumping. The upper reservoir will be formed by a CFRD as well, this one will be tall and long, withholding a man-made lake. The normal elevation of the lower reservoir will be and the upper . This difference in elevation affords a rated hydraulic head of . The power station will be located underground near the bank of the lower reservoir and contain six 300 MW Francis pump turbine-generators.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34603037",
"title": "Rocky Mountain Hydroelectric Plant",
"section": "Section::::Design and operation.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 664,
"text": "As a pumped-storage power plant, it uses two reservoirs to produce electricity and store energy. The upper reservoir stores water (energy) for periods when electricity demand is high. During these periods, water from the upper reservoir is released down to the power plant to produce hydroelectricity. Water from the power plant is then discharged into the lower reservoir. When energy demand is low, usually at night, water is pumped from the lower reservoir back up to the upper reservoir. The upper reservoir can be replenished in as little as 7.2 hours. The same turbine-generators that are used to generate electricity reverse into pumps during pumping mode.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "751777",
"title": "Francis turbine",
"section": "Section::::Application.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 562,
"text": "In addition to electrical production, they may also be used for pumped storage, where a reservoir is filled by the turbine (acting as a pump) driven by the generator acting as a large electrical motor during periods of low power demand, and then reversed and used to generate power during peak demand. These pump storage reservoirs act as large energy storage sources to store \"excess\" electrical energy in the form of water in elevated reservoirs. This is one of a few methods that allow temporary excess electrical capacity to be stored for later utilization.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24130",
"title": "Energy storage",
"section": "Section::::Applications.:Grid electricity and power stations.:Renewable energy storage.\n",
"start_paragraph_id": 181,
"start_character": 0,
"end_paragraph_id": 181,
"end_character": 331,
"text": "Some forms of storage that produce electricity include pumped-storage hydroelectric dams, rechargeable batteries, thermal storage including molten salts which can efficiently store and release very large quantities of heat energy, and compressed air energy storage, flywheels, cryogenic systems and superconducting magnetic coils.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
flll6 | Has anyone ever taught a computer to code? | [
{
"answer": "You might be describing something a bit like [genetic programming.](_URL_0_) I believe these still require someone to determine the fitness of each program, but if you have some desired output and a set input, you can have something compare what the program produces to what you want the output to be and the algorithm will fix itself, maybe. Disclaimer, I'm just a student and don't know a ton about computer science just yet! So that could all be wrong. First thing that popped into my head though on reading this post.\n\n[Here's a program](_URL_1_) that can write music based on previous works and a human teacher for its output.\n\nHah, and [here's the album](_URL_2_) it wrote. Ridiculous.",
"provenance": null
},
{
"answer": "Genetic programming works well on a certain subset of problems; such problems tend to revolve around somewhat esoteric things like finding optimal algorithms (i.e. don't try to evolve a genetic program to add new features to facebook, but maybe you could use one to search for a way for facebook to optimize the way they store their node graph internally). In addition, the solution space for the problems needs to have \"smoothness\"; in other words, they work best for problems where if a solution A is somewhat good, then solution B which lies \"nearby\" A has a strong chance of being good or better. So, it has \"taken off\" to an extent in the limited space of problems it's well suited for.\n\nAlso, genetic programs tend to come out rather...interesting. The actual code they produce tends to look bizarre, and it can be very difficult for a human engineer to intuit how they work, even when they still give the right answers. When humans write code, they are also reading the code they write, but the genetic algorithm doesn't care about making the code readable or understandable, so you end up with some pretty crazy stuff. I read an article once (way too long ago to remember details) about an optimal program which was generated, and when the researchers reverse engineered it to discover how it worked they found it actually took advantage of a bug in the hardware they were running the simulation on, and if the bug was fixed the optimal solution no longer gave correct answers. In a similar fashion, they are very sensitive to the fitness function used to rank the candidate programs. It's common to evolve programs that work very well when ranked according to a certain specification of the problem, but change the target even slightly in a way that human-written code could easily adapt to, and they fall apart.",
"provenance": null
},
{
"answer": "I'm currently doing a Master's in AI, actually. My goal for the degree is to acquire the theoretical background I need to realize my longer-term goal: building a robust general problem solving agent. The tool will be able to construct simple scripts that meet a set of \"almost-natural-language\" criteria. \n\nGenetic programming is a nice approach, but dead-ends in the type of domains I find interesting. My approach is rooted in logic programming, so it is essentially constructivist, but the ultimate idea is for the system to learn the logic domains automatically by using data mining approaches.\n\nCombining data mining and advanced logic programming in a feedback loop modeled on real cognitive processes should allow for a relatively flexible system, capable of fairly complex tasks in acceptably realistic (noisy) environments.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "896120",
"title": "History of programming languages",
"section": "Section::::Early history.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 231,
"text": "The first computer codes were specialized for their applications: e.g., Alonzo Church was able to express the lambda calculus in a formulaic way and the Turing machine was an abstraction of the operation of a tape-marking machine.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23015",
"title": "Programming language",
"section": "Section::::History.:Early developments.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 386,
"text": "John Mauchly's Short Code, proposed in 1949, was one of the first high-level languages ever developed for an electronic computer. Unlike machine code, Short Code statements represented mathematical expressions in understandable form. However, the program had to be translated into machine code every time it ran, making the process much slower than running the equivalent machine code.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48851729",
"title": "Monrobot XI",
"section": "Section::::Programming and operating speed.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 226,
"text": "The computer could be programmed using an assembly language system called QUIKOMP(TM), but its simple machine language instruction set and slow operation speed encouraged many programmers to code directly in machine language.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "98778",
"title": "Natural-language understanding",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 441,
"text": "The program STUDENT, written in 1964 by Daniel Bobrow for his PhD dissertation at MIT is one of the earliest known attempts at natural-language understanding by a computer. Eight years after John McCarthy coined the term artificial intelligence, Bobrow's dissertation (titled \"Natural Language Input for a Computer Problem Solving System\") showed how a computer could understand simple natural language input to solve algebra word problems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5311",
"title": "Computer programming",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 233,
"text": "However, the first computer program is generally dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53723",
"title": "Digital media",
"section": "Section::::History.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 294,
"text": "Codes and information by machines were first conceptualized by Charles Babbage in the early 1800s. Babbage imagined that these codes would give him instructions for his Motor of Difference and Analytical Engine, machines that Babbage had designed to solve the problem of error in calculations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "189845",
"title": "Low-level programming language",
"section": "Section::::Machine code.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 404,
"text": "Machine code is the only language a computer can process directly without a previous transformation. Currently, programmers almost never write programs directly in machine code, because it requires attention to numerous details that a high-level language handles automatically. Furthermore it requires memorizing or looking up numerical codes for every instruction, and is extremely difficult to modify.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
59x94y | Historically, how long have Arabs been the dominant ethnic group in the Middle East? | [
{
"answer": "Unfortunately I don't think this an answerable question. \"Arab blood\" does not define Arabness. Arabness is at least partially a linguistically defined ethnicity. That's why dark-skinned dark-eyed [Anwar Sadat](_URL_2_) is just as much an Arab as light-skinned blue-eyed [Bashar al-Assad](_URL_0_) is an Arab.\n\nAside from that definitional issue, our earliest sources, including biblical references, are to \"Saracens\", not Arabs. It's not at all clear who is being referred to when these classical sources are referring to Saracens. For instance while some sources might be using it in the sense that we mean of \"ethnicity\", and therefore define the area of territory occupied by Saracens broadly, others are using an unusually narrow definition, where, for instance, Saracens are the people who live in one very specific place. [This can easily be misrepresented by Arab nationalists](_URL_1_) in cumulative fashion as suggesting that some huge portion of the Middle East was meaningfully \"Arab\" from a very early period. Maybe. I'm hugely, hugely skeptical.\n\nMost of the writers we're relying on for these descriptions in the classical era have never been to the Middle East, have no idea who lives there, and the picture will remain fuzzy until the Arab conquests themselves.\n\nI think the best we can probably do is to say that there were strong tribal connections throughout the region. Quite good linguistic connections. But making definitive statements in terms of percentages for ethnic populations is just not possible.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "802377",
"title": "Arabization",
"section": "Section::::History of Arabization.:Arabization during the early Caliphate.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 305,
"text": "The earliest and most significant instance of \"Arabization\" was the first Muslim conquests of Muhammad and the subsequent Rashidun and Umayyad Caliphates. They built a Muslim Empire that grew well beyond the Arabian Peninsula, eventually reaching as far as Spain in the West and Central Asia to the East.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21748",
"title": "Nationalism",
"section": "Section::::History.:20th century.:Middle East.\n",
"start_paragraph_id": 64,
"start_character": 0,
"end_paragraph_id": 64,
"end_character": 735,
"text": "Arab nationalism, a movement toward liberating and empowering the Arab peoples of the Middle East, emerged during the latter 19th century, inspired by other independence movements of the 18th and 19th centuries. As the Ottoman Empire declined and the Middle East was carved up by the Great Powers of Europe, Arabs sought to establish their own independent nations ruled by Arabs rather than foreigners. Syria was established in 1920; Transjordan (later Jordan) gradually gained independence between 1921 and 1946; Saudi Arabia was established in 1932; and Egypt achieved gradually gained independence between 1922 and 1952. The Arab League was established in 1945 to promote Arab interests and cooperation between the new Arab states.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19323",
"title": "Middle East",
"section": "Section::::Demographics.:Ethnic groups.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 330,
"text": "Arabs constitute the largest ethnic group in the Middle East, followed by various Iranian peoples and then by Turkic speaking groups. Native ethnic groups of the region include, in addition to Arabs, Arameans, Assyrians, Baloch, Berbers, Copts, Druze, Jews, Kurds, Lurs, Mandaeans, Persians, Samaritans, Shabaks, Tats, and Zazas.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2293809",
"title": "Islamization",
"section": "Section::::History.:Arabization.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 622,
"text": "Arabization describes a growing cultural influence on a non-Arab area that gradually changes into one that speaks Arabic and/or incorporates Arab culture. It was most prominently achieved during the 7th-century Arabian Muslim conquests which spread the Arabic language, culture, and—having been carried out by Arabian Muslims as opposed to Arab Christians or Arabic-speaking Jews—the religion of Islam to the lands they conquered. The result: some elements of Arabian origin combined in various forms and degrees with elements taken from conquered civilizations and ultimately denominated \"Arab\", as opposed to \"Arabian\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "159433",
"title": "Arab world",
"section": "Section::::History.:Early history.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 338,
"text": "The Arabs historically originate as a Central Semitic group in the Arabian peninsula. Their expansion beyond Arabia and the Syrian desert is due to the Muslim conquests of the 7th and 8th centuries. Mesopotamia (modern Iraq) was conquered in 633, Levant (modern Syria, Israel, Palestine, Jordan and Lebanon) between 636 and 640 CE.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23267",
"title": "Palestinians",
"section": "Section::::Origins.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 1700,
"text": "The region was not originally Arab – its Arabization was a consequence of the inclusion of Palestine within the rapidly expanding Arab Empire conquered by Arabian tribes and their local allies in the first millennium, most significantly during the Islamic conquest of Syria in the 7th century. Palestine, then a Hellenized region controlled by the Byzantine empire, with a large Christian population, came under the political and cultural influence of Arabic-speaking Muslim dynasties, including the Kurdish Ayyubids. From the conquest down to the 11th century, half of the world's Christians lived under the new Muslim order and there was no attempt for that period to convert them. Over time, nonetheless, much of the existing population of Palestine was Arabized and gradually converted to Islam. Arab populations had existed in Palestine prior to the conquest, and some of these local Arab tribes and Bedouin fought as allies of Byzantium in resisting the invasion, which the archaeological evidence indicates was a 'peaceful conquest', and the newcomers were allowed to settle in the old urban areas. Theories of population decline compensated by the importation of foreign populations are not confirmed by the archaeological record Like other \"Arabized\" Arab nations the Arab identity of Palestinians, largely based on linguistic and cultural affiliation, is independent of the existence of any actual Arabian origins. The Palestinian population has grown dramatically. For several centuries during the Ottoman period the population in Palestine declined and fluctuated between 150,000 and 250,000 inhabitants, and it was only in the 19th century that a rapid population growth began to occur.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13139823",
"title": "Post-classical history",
"section": "Section::::History by region in the Old World.:West Asia.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 1211,
"text": "The dominance of the Arabs came to a sudden end in the mid-11th century with the arrival of the Seljuq Turks, migrating south from the Turkic homelands in Central Asia. They conquered Persia, Iraq (capturing Baghdad in 1055), Syria, Palestine, and the Hejaz. This was followed by a series of Christian Western Europe invasions. The fragmentation of the Middle East allowed joint European forces mainly from England, France, and the emerging Holy Roman Empire, to enter the region. In 1099 the knights of the First Crusade captured Jerusalem and founded the Kingdom of Jerusalem, which survived until 1187, when Saladin retook the city. Smaller crusader fiefdoms survived until 1291. In the early 13th century, a new wave of invaders, the armies of the Mongol Empire, swept through the region, sacking Baghdad in the Siege of Baghdad (1258) and advancing as far south as the border of Egypt in what became known as the Mongol conquests. The Mongols eventually retreated in 1335, but the chaos that ensued throughout the empire deposed the Seljuq Turks. In 1401, the region was further plagued by the Turko-Mongol, Timur, and his ferocious raids. By then, another group of Turks had arisen as well, the Ottomans.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3xd0z2 | what exactly is that steamy-looking stuff that comes out right after a beer bottle is opened? | [
{
"answer": "Water vapour condensed out of the air due to the sudden drop in pressure in the neck of the bottle would be my guess.",
"provenance": null
},
{
"answer": "When a gas expands its temperature drops. When opening a beer bottle the CO2 in it rapidly expands and cools down, causing water vapor in it to condense.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2061533",
"title": "Operation Outward",
"section": "Section::::Design.:\"Beer\", \"jelly\" and \"socks\".\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 657,
"text": "\"Beer\" consisted of a cylindrical metal container in diameter and long containing seven or eight half-pint bottles. Each bottle was a SIP grenade - it contained white phosphorus, benzene, water and a strip of raw rubber, long, which dissolved and formed a layer. After a delay caused by a slow burning fuse, the metal container was tipped open and its contents allowed to fall out. Around the neck of each bottle was a small metal sleeve that held a heavy ball about in diameter. The ball was attached to a strip of canvas; this ensured that when the bottles dropped they fell the right way round. The SIP grenades would spontaneously ignite on shattering.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1120362",
"title": "Steam beer",
"section": "",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 1107,
"text": "There have been various explanations for the use of the name \"steam beer\". According to Anchor Brewing, the name \"steam\" came from the fact that the brewery had no way to effectively chill the boiling wort using traditional means. So they pumped the hot wort up to large, shallow, open-top bins on the roof of the brewery so that it would be rapidly chilled by the cool air blowing in off the Pacific Ocean. Thus while brewing, the brewery had a distinct cloud of steam around the roof let off by the wort as it cooled, hence the name. Another explanation is that the carbon dioxide pressure produced by the 19th-century steam-beer-making process was very high, and that it may have been necessary as part of the process to let off \"steam\" before attempting to dispense the beer. It is also possible that the name or brewing process derive from \"Dampfbier\" (literally \"steam beer\"), a traditional German beer that was also fermented at unusually high temperatures and that may have been known to 19th-century American brewers, many of whom were of German descent; Dampfbier is an ale, however, not a lager.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4039413",
"title": "Widget (beer)",
"section": "Section::::Method.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 365,
"text": "When the can is opened, the pressure in the can quickly drops, causing the pressurised gas and beer inside the widget to jet out from the hole. This agitation on the surrounding beer causes a chain reaction of bubble formation throughout the beer. The result, when the can is then poured out, is a surging mixture in the glass of very small gas bubbles and liquid.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5841422",
"title": "Two-liter bottle",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 369,
"text": "The two-liter bottle is a common container for soft drinks, beer, and wine. These bottles are produced from polyethylene terephthalate, also known as PET plastic, or glass using the blow molding process. Bottle labels consist of a printed, tight-fitted plastic sleeve. A resealable screw-top allows the contents to be used at various times while retaining carbonation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "967654",
"title": "Bottling line",
"section": "Section::::Beer bottling process.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 759,
"text": "The first step in bottling beer is \"depalletising\", where the empty bottles are removed from the original pallet packaging delivered from the manufacturer, so that individual bottles may be handled. The bottles may then be rinsed with filtered water or air, and may have carbon dioxide injected into them in attempt to reduce the level of oxygen within the bottle. The bottle then enters a \"filler\" which fills the bottle with beer and may also inject a small amount of inert gas (usually carbon dioxide or nitrogen) on top of the beer to disperse the oxygen, as oxygen can ruin the quality of the product via oxidation. Finally, the bottles go through a \"capper\", which applies a bottle cap, sealing the bottle. A few beers are bottled with a cork and cage.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "758034",
"title": "Water rocket",
"section": "Section::::Operation.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 227,
"text": "The bottle is partly filled with water and sealed. The bottle is then pressurized with a gas, usually air compressed from a bicycle pump, air compressor, or cylinder up to 125 psi, but sometimes CO or nitrogen from a cylinder.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "823489",
"title": "Jim Beam",
"section": "Section::::Process.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 707,
"text": "From the cooker, the mash heads to the fermenter where it is cooled to 60–70 °F and yeast is added again. The yeast is fed by the sugars in the mash, producing heat, carbon dioxide and alcohol. Called \"distiller's beer\" or \"wash\", the resulting liquid (after filtering to remove solids) looks, smells and tastes like (and essentially is) a form of beer. The wash is pumped into a column still where it is heated to over 200 °F, causing the alcohol to turn to a vapor. As the vapor cools and falls it turns to a liquid called \"low wine\", which measures 125 proof or 62.5% alcohol. A second distillation in a pot still heats and condenses the liquid into \"high wine\", which reaches 135 proof (67.5% alcohol).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ezpek8 | Why were the Franks so effective at conquering the Germanic tribes, where Rome had failed for so long? | [
{
        "answer": "Frankish hegemony over transrhenan peoples was less due to conquest or overpowering campaigns, and more to a policy of personal relationships, trade, raids and counter-raids that wasn't that dissimilar to Rome's policies in Germania and that tended to define a sort of Frankish \"sphere of influence/power projection\" up to the Elbe, but also in Northern Italy, Armorica, southern England and Spain.\n\nWhile Clovis battled Alamans and Thuringians as early as the late Vth and early VIth centuries, this mostly concerned groups established in or raiding Gaul, not campaigns going beyond the Rhine. With these peoples entering into Ostrogothic protection under Theodoric, the establishment of a Frankish hegemony in Germania can be more easily associated with the reign of Theudeuric I, who ruled the north-eastern part of the Frankish realm and was in direct contact with its polities (or raiders, as evidenced by the failed Danish raid of 516, whose defeat made enough of an impression to be recounted in Beowulf).\n\nThis hegemony can mostly be traced from the reigns of Clovis' sons, especially Theudeuric, who ruled over the lands associated with Franks since the IVth century and who was in direct contact with Frisians, Saxons, Alamans and Thuringians. 
The last two peoples had already clashed with Franks during Clovis' reign, but these were mostly groups present in Gaul, the rest benefiting from the powerful and prestigious protection of Theodoric, king of the Ostrogoths, a protection that disappeared in the VIth century alongside the decline and fall of the Ostrogothic kingdom itself.\n\nAs Francia appeared as the most prosperous and powerful polity of post-imperial western Europe, and as kings in Italy were unable to really preserve Theodoric's diplomatic network, Franks were a necessary and powerful partner: the first Frankish intervention in Thuringia (with whose kings Merovingians probably had genealogical relations) was even made on behalf of a Thuringian king against another, hoping to get half of the kingdom, which they did after an obscure situation in which the supported candidate died. It's not that they annexed all of Thuringia: a good part was eventually swallowed up by neighboring entities, Germanic or Slavic, and the land within the realm was probably left to local nobles (led by a duke at least in the VIIth century, hinting at an \"ethnic\" rulership even if he was Frankish), poorly settled at best by Franks, with local populations being held tributary. The Thuringian example, eventually, can stand for the lot of Germanic principalities under Frankish influence.\n\nRather than conquest, what mattered for Merovingians was their capacity to halt raids, to raid beyond the Rhine themselves and obtain both loot and substantial tribute, to raise auxiliaries, and to enforce their claims of over-lordship by including local rulers in a personal and genealogical relationship as duces (dukes), although these probably held royal titles (in a system not unlike the Chinese tributary system, local and regional kings in Germany, Wasconia or Brittany weren't considered as such by Franks, who called them counts or dukes). 
In this regard, the difference usually drawn between Alamans, Bavarians and Thuringians on the one hand, and Frisians and Saxons on the other, might not be that radical (Saxons, for example, being extorted a tribute and considered rebels when refusing to pay up).\n\nOf course, this held when the Merovingian kings were able to enforce their rule beyond their borders, bullying local kings into submission (especially under the reigns of Theudeuric I, Clothar I, Clothar II and Dagobert); but at the first sign of weakness, a revolt was always possible, as happened to Clothar I with the Saxons (and as the Marcomanni did with Marcus Aurelius in their time); even the successes of peripheral rulers could lead them to challenge Franks (such as Radulf, victorious against Wendish raiders and successful in beating Dagobert's Franks).\n\nThis could give the impression that Frankish Germania was kind of an afterthought, but it seems to have been rather the contrary: tributes paid in cattle, horses and possibly slaves were important, and levied men served as auxiliaries in Frankish campaigns as early as those in Northern Italy and as late as the VIIIth century. It's just that, like the Romans before them, Merovingians were content (or had to make do) with a fluctuating and warlord-ish relationship in which their hegemony had to be regularly reasserted by demonstrations of strength or battle, rather than with an effective conquest. 
That said, even this complex relationship left marks: local dynasties and nobilities were \"Frankified\" to an extent, allowing Carolingians to maintain genealogical ties with dynasties such as the Agilolfings in Bavaria, giving some leeway to integrate them further into Francia (establishing law codes, notably, and \"preparing\" their annexation) and serving as a model for further conquests such as in Frisia or Saxony: these conquests, real as they were, were also not so much \"efficient\" as brutal, requiring a lot of resources and attention from early Carolingian kings compared to the more lightweight management of the Merovingians.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "145915",
"title": "Foederati",
"section": "Section::::History.:The Empire.:4th century.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 625,
"text": "The Franks became foederati in 358 CE, when Emperor Julian let them keep the areas in northern Gaul, which had been depopulated during the preceding century. Roman soldiers defended the Rhine and had major armies south and west of the Rhine. Frankish settlers were established in the areas north and east of the Romans and helped with the Roman defense by providing intelligence and a buffer state. The breach of the Rhine borders in the frozen winter of 406 and 407 made an end to the Roman presence at the Rhine when both the Romans and the allied Franks were overrun by a tribal migration \"en masse\" of Vandals and Alans.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6685477",
"title": "Ledringhem",
"section": "Section::::History.:Frankish Empire.\n",
"start_paragraph_id": 75,
"start_character": 0,
"end_paragraph_id": 75,
"end_character": 644,
"text": "The Franks became foederati in 358 AD, when Emperor Julian let them keep the areas in northern Gaul, which had been depopulated during the preceding century. Roman soldiers defended the Rhine and had major armies 100 miles (160 km) south and west of the Rhine. Frankish settlers were established in the areas north and east of the Romans and helped with the Roman defense by providing intelligence and a buffer state. The breach of the Rhine borders in the frozen winter of 406 and 407 made an end to the Roman presence at the Rhine when both the Romans and the allied Franks were overrun by a tribal migration \"en masse\" of Vandals and Alans.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6685477",
"title": "Ledringhem",
"section": "Section::::History.:Frankish Empire.\n",
"start_paragraph_id": 74,
"start_character": 0,
"end_paragraph_id": 74,
"end_character": 287,
"text": "In Gaul, the Franks, a fusion of western Germanic tribes whose leaders had been strongly aligned with Rome since the 3rd century, subsequently entered Roman lands more gradually and peacefully during the 5th century, and were generally endured as rulers by the Roman-Gaulish population.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13289",
"title": "History of the Netherlands",
"section": "Section::::Roman era (57 BC – 410 AD).:Emergence of the Franks.\n",
"start_paragraph_id": 72,
"start_character": 0,
"end_paragraph_id": 72,
"end_character": 560,
"text": "Franks appear in Roman texts as both allies and enemies (\"laeti\" and \"dediticii\"). By about 320, the Franks had the region of the Scheldt river (present day west Flanders and southwest Netherlands) under control, and were raiding the Channel, disrupting transportation to Britain. Roman forces pacified the region, but did not expel the Franks, who continued to be feared as pirates along the shores at least until the time of Julian the Apostate (358), when Salian Franks were allowed to settle as \"foederati\" in Toxandria, according to Ammianus Marcellinus.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2462183",
"title": "Franks",
"section": "Section::::History.:Salians.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 287,
"text": "Some decades later, Franks in the same region, possibly the Salians, controlled the River Scheldt and were disrupting transport links to Britain in the English Channel. Although Roman forces managed to pacify them, they failed to expel the Franks, who continued to be feared as pirates.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46732",
"title": "Narses",
"section": "Section::::Final battles.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 673,
"text": "After the final defeat of the Goths, the Franks, led by the brothers Leutharis and Buccillinus, attempted to invade the recently reconquered lands. From the \"Liber Pontificalis\": \"They (The Franks) in like manner wasted Italy. But with the help of the Lord they too were destroyed by Narses. And all Italy rejoiced.\" For the next year or two, Narses crossed the countryside, reinstituting Byzantine rule and laying siege to towns that resisted. But as more and more Franks poured over the Alps, Narses regrouped in Rome, and once spring came, marched his army against them. The Franks, led by the two brothers, were pursuing separate routes, but plundering the whole time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "923406",
"title": "Fall of the Western Roman Empire",
"section": "Section::::313–376: Abuse of power, frontier warfare, and rise of Christianity.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 1527,
"text": "Constantine settled Franks on the lower left bank of the Rhine; their settlements required a line of fortifications to keep them in check, indicating that Rome had lost almost all local control. Under Constantius, bandits came to dominate areas such as Isauria well within the empire. The tribes of Germany also became more populous and more threatening. In Gaul, which did not really recover from the invasions of the third century, there was widespread insecurity and economic decline in the 300s, perhaps worst in Armorica. By 350, after decades of pirate attacks, virtually all villas in Armorica were deserted, and local use of money ceased about 360. Repeated attempts to economize on military expenditure included billeting troops in cities, where they could less easily be kept under military discipline and could more easily extort from civilians. Except in the rare case of a determined and incorruptible general, these troops proved ineffective in action and dangerous to civilians. Frontier troops were often given land rather than pay; as they farmed for themselves, their direct costs diminished, but so did their effectiveness, and there was much less economic stimulus to the frontier economy. However, except for the provinces along the lower Rhine, the agricultural economy was generally doing well. The average nutritional state of the population in the West suffered a serious decline in the late second century; the population of North-Western Europe did not recover, though the Mediterranean regions did.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
fbfce | What would need to happen in order for it to be called the LAW of evolution? | [
{
"answer": "['Laws of Science' on Wikipedia](_URL_0_)\n\nTheories are models. 'The world is like this'. Laws are fundamental rules that always apply (in the specified circumstances) and are distilled into statements of fact. Natural selection could perhaps be formulated into a law, or set of laws, but evolution is not a simple proposition.",
"provenance": null
},
{
        "answer": "I don't think laws and theories are really the same thing. \n\nTheories are conceptual frameworks that suggest a mechanistic explanation of how something happens, while laws tend to be concise statements of a specific, accurate observation. \n\nFor instance, Newton's law of gravitation is just a specific equation relating the strength of a gravitational force to the masses and distance of the objects, while the theory of gravitation encompasses things like general relativity and how gravity works and what it affects.\n\nI could be wrong, but that's what I always interpreted laws as. In any case, there are hardly any laws in biology, since few things are ever concise and simple and almost nothing is ever absolute. ",
"provenance": null
},
{
"answer": "There's nothing weak about the word theory. For instance, quantum field theory is the most accurate description of nature that we have.\n\nAs for laws and theorems in physics:\n\n*Bell's theorem\n\n*Newton's law of gravity, laws of motion, law of heating and cooling\n\n*Kirkchoff's laws\n\n*Kepler's laws\n\n*Laws of thermodynamics\n\nAre a few\n",
"provenance": null
},
{
        "answer": "\"Laws\" tend to have an explicit mathematical definition, that is, an equation describing how some quantity varies with another (often time). It is unlikely that evolution could ever be a \"law\", as there is no one equation that can describe it: it is a large number of related and competing processes acting in tandem.\n\nAs an aside, personally I hate the term \"law\". \"Law\" implies something is absolute, and eternal, and true. Of course, this is not the case at all. So-called laws are in fact just models of systems which seem to agree with experimental results/observations. E.g. Newton's laws are not correct: they work under certain conditions, but fail under others. Einstein improved on Newton with special relativity, and we now know that relativity offers a better and more complete description of motion. \n ",
"provenance": null
},
{
        "answer": "There are no rules as to what constitutes a Law and what constitutes a Theory.\n\nThere is no smoke-filled back room where scientists deliberate over whether it is time to \"promote\" a Theory to a Law.\n\nThere are Conjectures and Hypotheses that are undisputed, and Laws that are completely wrong. \n\nThe only real \"rule\" is that whatever label something had when it became widely known is the one that sticks. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "694732",
"title": "Dollo's law of irreversibility",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 713,
"text": "The statement is often misinterpreted as claiming that evolution is not reversible, or that lost structures and organs cannot reappear in the same form by any process of devolution. According to Richard Dawkins, the law is \"really just a statement about the statistical improbability of following exactly the same evolutionary trajectory twice (or, indeed, any particular trajectory), in either direction\". Stephen Jay Gould suggested that irreversibility forecloses certain evolutionary pathways once broad forms have emerged: \"[For example], once you adopt the ordinary body plan of a reptile, hundreds of options are forever closed, and future possibilities must unfold within the limits of inherited design.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14719490",
"title": "Tychism",
"section": "Section::::The thesis.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 428,
"text": "To explain the presence of such a universal \"law\" Peirce proposes a \"cosmological theory of evolution\" in which law develops out of chance. The hypothesis that \"out of irregularity, regularity constantly evolves\" seemed to him to have decided advantages not the least being its explanation of \"why laws are not precisely or always obeyed, for what is still in a process of evolution can not be supposed to be absolutely fixed.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5730634",
"title": "Junkyard tornado",
"section": "Section::::Details.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 547,
"text": "This \"Borel's Law\" is actually the universal probability bound, which when applied to evolution is axiomatically incorrect. The universal probability bound assumes that the event one is trying to measure is completely random, and some use this argument to prove that evolution could not possibly occur, since its probability would be much less than that of the universal probability bound. This, however, is fallacious, given that evolution is not a completely random effect (genetic drift), but rather proceeds with the aid of natural selection.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "939578",
"title": "List of eponymous laws",
"section": "Section::::C–D.\n",
"start_paragraph_id": 86,
"start_character": 0,
"end_paragraph_id": 86,
"end_character": 272,
"text": "BULLET::::- Dollo's law: \"An organism is unable to return, even partially, to a previous stage already realized in the ranks of its ancestors.\" Simply put this law states that evolution is not reversible; the \"law\" is regarded as a generalisation as exceptions may exist.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8787159",
"title": "Objections to evolution",
"section": "Section::::Plausibility.:Unexplained aspects of the natural world.\n",
"start_paragraph_id": 75,
"start_character": 0,
"end_paragraph_id": 75,
"end_character": 527,
"text": "Creationists argue against evolution on the grounds that it cannot explain certain non-evolutionary processes, such as abiogenesis, the Big Bang, or the meaning of life. In such instances, \"evolution\" is being redefined to refer to the entire history of the universe, and it is argued that if one aspect of the universe is seemingly inexplicable, the entire body of scientific theories must be baseless. At this point, objections leave the arena of evolutionary biology and become general scientific or philosophical disputes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "186205",
"title": "Michael Behe",
"section": "Section::::Irreducible complexity and intelligent design.:\"The Edge of Evolution\".\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 274,
"text": "In 2007, Behe's book \"The Edge of Evolution\" was published arguing that while evolution can produce changes within species, there is a limit to the ability of evolution to generate diversity, and this limit (the \"edge of evolution\") is somewhere between species and orders.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8787159",
"title": "Objections to evolution",
"section": "Section::::Impossibility.:Violation of the second law of thermodynamics.\n",
"start_paragraph_id": 96,
"start_character": 0,
"end_paragraph_id": 96,
"end_character": 561,
"text": "Another objection is that evolution violates the second law of thermodynamics. The law states that \"the entropy of an isolated system not in equilibrium will tend to increase over time, approaching a maximum value at equilibrium\". In other words, an isolated system's entropy (a measure of the dispersal of energy in a physical system so that it is not available to do mechanical work) will tend to increase or stay the same, not decrease. Creationists argue that evolution violates this physical law by requiring a decrease in entropy, or disorder, over time.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1dtpua | why hockey refs fake a puck drop during a face off? | [
{
"answer": "So that players who try to anticipate the drop and thus start their swing faster don't get an advantage. They are supposed to wait until the puck is dropped to start their swing. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "731791",
"title": "Gamesmanship",
"section": "Section::::Techniques.:Breaking the flow.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 292,
"text": "BULLET::::- In amateur ice hockey, intentionally icing the puck, lining up at the wrong face-off dot, or shooting the puck over the glass (in professional hockey, the team that ices the puck is not allowed a line change, while shooting the puck over the glass leads to a two-minute penalty).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3144282",
"title": "Running out the clock",
"section": "Section::::Other sports.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 459,
"text": "A team which shoots the puck forward from their half of the ice over the opposing team's goal line in an effort to stonewall is guilty of icing, and the puck is brought to the other end of the ice for a face-off. The rule is not in effect when a team is playing shorthanded due to a penalty. Additionally, a player (usually a goalkeeper) may be charged with a minor (two-minute) penalty for delay of game for shooting the puck over the glass and out of play.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24471025",
"title": "Out of bounds",
"section": "Section::::Ice hockey.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 467,
"text": "In ice hockey, if the puck gets knocked out of play (such as into the player's benches, over the glass, or into the netting), a face-off shall be conducted at the nearest face-off dot to where the puck had gone out of play. However, if the puck is directly shot out of bounds over the glass deliberately by a player such as a goaltender or any defensive player within their own defensive zone, a delay of game minor penalty shall be assessed on the offending player.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1269252",
"title": "Offside (ice hockey)",
"section": "Section::::Rules.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 461,
"text": "During a faceoff, a player may be judged to be in an offside position if they are lined up within 15 feet of the centres before the puck is dropped. This may result in a faceoff violation, at which point the official dropping the puck will wave the centre out of the faceoff spot and require that another player take their place. If one team commits two violations during the same attempt to restart play, it will be assessed a minor penalty for delay of game.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10142123",
"title": "Fred Waghorne",
"section": "Section::::Officiating.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 202,
"text": "BULLET::::- The practice of dropping the puck from a few feet up at faceoff rather than placing it directly on the ice, which limited player contact with the referee's shins and ankles during faceoffs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "567681",
"title": "Winger (ice hockey)",
"section": "Section::::Face-offs.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 350,
"text": "Prior to the puck being dropped for a face-off, players other than those taking the face-off must not make any physical contact with players on the opposite team, nor enter the face-off circle (where marked). After the puck is dropped, it is essential for wingers to engage the opposing players to prevent them from obtaining possession of the puck.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21318092",
"title": "Rat trick",
"section": "Section::::Legacy.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 621,
"text": "Directly as a result of the rat trick craze, the NHL amended its rules prior to the season to prevent a recurrence of this phenomenon and delays to the game that followed. Per the rule, if fans throw debris onto the ice, the referee can have the public address announcer warn the fans to stop. After a warning, the referee can then issue a delay of game penalty to the home team. The league, however, created a special exemption for articles \"thrown onto the ice following a special occasion\", specifically excluding the traditional tossing of hats onto the ice following a hat trick goal from subjection to the penalty.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3e0clt | why are cans in hawaii shaped differently than regular soda cans? | [
{
"answer": "This is an older design of common cans. Let this guy shed some light on it in the most interesting can-related video ever produced:\n\n_URL_0_",
"provenance": null
},
{
"answer": "These are regular soda cans from some years back. Factories in some locations have older equipment.",
"provenance": null
},
{
        "answer": "There is a 'side bar' at the bottom of this [newspaper article](_URL_0_) that lines up with the memory I have from taking a tour of the factory as a kid. In a nutshell, the crimped cans take less aluminum to make. \n \nShipping to Hawaii is pretty expensive - other than airlines carrying people, I would be surprised if the soda that comes into the islands comes via air; I would expect it is put on a ship (about a 5-day trip). The rest of the cans are made by Ball. Ball has a plant in Kapolei, Hawaii that makes the aluminum cans for most everyone in the state (soda, beer, sparkling water, etc.). I believe they are the single source for aluminum can manufacturing in the state, having purchased the factory from Reynolds in the late 1970s. \n ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "507190",
"title": "Steel and tin cans",
"section": "Section::::Standard sizes.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 219,
"text": "Cans come in a variety of shapes: two common ones are the \"soup tin\" and the \"tuna tin\". Walls are often stiffened with rib bulges, especially on larger cans, to help the can resist dents that can cause seams to split.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58449448",
"title": "History of bottle recycling in the United States",
"section": "Section::::History of PET bottle recycling.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 907,
"text": "The American mechanical engineer and inventor Nathaniel Wyeth first patented PET in 1973 and soda companies soon started using PET to package their drinks. Over the past 20 years or so, PET bottles have become the most common material to package beverages, replacing glass and metal. Especially water and soda was starting to be packaged in PET bottles. This is because PET has certain material properties that make it more favorable than glass or metal cans. Most importantly, PET is lightweight and difficult to break. Further, PET is clear and has \"good barrier properties towards moisture and oxygen\". Because of these qualities, PET has replaced glass bottles and metal cans in many instances, with PET bottles also being used for energy drinks, beer, wine, and juice. The introduction of PET bottles marked the final stage in the change away from reusable bottles to \"one-way\", nonreturnable bottles.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "507190",
"title": "Steel and tin cans",
"section": "Section::::Description.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 303,
"text": "Most cans are right circular cylinders with identical and parallel round tops and bottoms with vertical sides. However, cans for small volumes or particularly-shaped contents, the top and bottom may be rounded-corner rectangles or ovals. Other contents may suit a can that is somewhat conical in shape.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "55842554",
"title": "Khanom la",
"section": "Section::::History.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 275,
"text": "In the past, \"khanom la\" was made using a coconut shell because there were no cans at that time. People made several small holes in the coconut shell. In Thailand, the coconut shell is called a “kala” (). Now, a can drilled with small holes is used as the device instead of a coconut shell.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12068253",
"title": "Blue Hawaii (cocktail)",
"section": "Section::::Preparation and variations.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 445,
"text": "Because it is easy and inexpensive to make, it is often served as a punch. At its simplest, it is a bottle or two of plain or coconut-flavored light rum, a bottle of blue curacao, a can of pineapple juice, and a bag of ice, mixed together in a punchbowl. The Blue Hawaii is seasonal, often considered a summer or warm weather drink. Occasionally, because it contains yellow pineapple juice, the Blue Hawaii will have a green coloration instead.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "204043",
"title": "C-ration",
"section": "Section::::Field ration, Type C (1938–1945).\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 452,
"text": "During the war, soldiers frequently requested that the cylindrical cans be replaced with flat, rectangular ones (similar to a sardine can), comparable to those used in the earliest versions of contemporary K rations, because of their compactness and packability; but this was deemed impractical because of the shortage of commercial machinery available to produce rectangular cans. After 1942 the K ration too, reverted to the use of small round cans.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6868284",
"title": "Canned coffee",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 472,
"text": "Can design and shape have changed drastically. The earliest cans were simple in terms of graphic design and were often corrugated in the middle two-thirds of the can. Cans with straight steel sides appeared next, finally settling on a more modern shape. Like the earlier cans, this type also starts as a flat sheet that is curled and seamed. Extruded steel is also used extensively. Aluminum coffee cans are almost non-existent, although UCC Black is a notable exception.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6nj52v | Would an ancient Roman be able to read and understand the Latin Wikipedia? | [
{
"answer": "Post this to r/latin--I think they'd get a kick out of it. :)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "414942",
"title": "Phoenician language",
"section": "Section::::Surviving examples.\n",
"start_paragraph_id": 122,
"start_character": 0,
"end_paragraph_id": 122,
"end_character": 672,
"text": "Roman authors, such as Sallust, allude to some books written in the Punic language, but none have survived except occasionally in translation (e.g., Mago's treatise) or in snippets (e.g., in Plautus' plays). The Cippi of Melqart, a bilingual inscription in Ancient Greek and Carthaginian discovered in Malta in 1694, was the key which allowed French scholar Jean-Jacques Barthélemy to decipher and reconstruct the alphabet in 1758. Even as late as 1837 only 70 Phoenician inscriptions were known to scholars. These were compiled in Wilhelm Gesenius's \"Scripturae linguaeque Phoeniciae monumenta\", which comprised all that was known of Phoenician by scholars at that time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35005655",
"title": "American Institute for Roman Culture",
"section": "Section::::Study Abroad Programs.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 433,
"text": "An intensive program with PhD level professors of Latin who impart grammar, syntax, and vocabulary through related readings of poetry and prose from various moments in Rome's history. Students also engage with Latin as a spoken language. Morning classroom teaching is followed with afternoon walks through the city reading ancient authors in the locations where history happened, as well as inscriptions in their original locations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10109981",
"title": "Contemporary Latin",
"section": "Section::::Living Latin.:On the Internet.\n",
"start_paragraph_id": 73,
"start_character": 0,
"end_paragraph_id": 73,
"end_character": 331,
"text": "There is even a Latin Wikipedia, although discussions are held not only in Latin but in German, English, and other languages as well. Nearly 200 active editors work on the project. There are nearly 100,000 articles on topics ranging from to , , and . Those in particularly good Latin, currently about 10% of the whole, are marked.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1694427",
"title": "History of science in classical antiquity",
"section": "Section::::Roman Empire.\n",
"start_paragraph_id": 52,
"start_character": 0,
"end_paragraph_id": 52,
"end_character": 524,
"text": "Even though science continued under the Roman Empire, Latin texts were mainly compilations drawing on earlier Greek work. Advanced scientific research and teaching continued to be carried on in Greek. Such Greek and Hellenistic works as survived were preserved and developed later in the Byzantine Empire and then in the Islamic world. Late Roman attempts to translate Greek writings into Latin had limited success, and direct knowledge of most ancient Greek texts only reached western Europe from the 12th century onwards.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5607488",
"title": "Wheelock's Latin",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 1297,
"text": "Wheelock's Latin (originally titled Latin and later Latin: An Introductory Course Based on Ancient Authors) is a comprehensive beginning Latin textbook. Chapters introduce related grammatical topics and assume little or no prior knowledge of Latin grammar or language. Each chapter has a collection of translation exercises created specifically for the book, most drawn directly from ancient sources. Those from Roman authors (\"Sententiae Antiquae\"—lit., \"ancient sentences\" or \"ancient thoughts\") and the reading passages that follow may be either direct quotations or adapted paraphrases of the originals. Interspersed in the text are introductory remarks on Ancient Roman culture. At the end of each chapter is a section called \"Latina Est Gaudium — Et Utilis!\", which means \"Latin Is Fun — And Useful!\" This section introduces phrases that can be used in conversation (such as \"Quid agis hodie?\", meaning \"How are you today?\"), and in particular comments on English words and their relation to Latin. Originally published in 1956 in the Barnes & Noble College Outline Series, the textbook is currently in its seventh edition. The 6th edition has been translated into Korean (2005), with a Korean translation of the 7th edition pending; the 7th edition has been translated into Chinese (2017).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9302764",
"title": "Latin Wikipedia",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 308,
"text": "The Latin Wikipedia () is the Latin language edition of Wikipedia, created in May 2002. As of 2020, it has about articles. While all primary content is in Latin, in discussions modern languages such as English, Italian, French, German or Spanish are often used, since many users (\"usores\") find this easier.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "820170",
"title": "Regimini militantis Ecclesiae",
"section": "Section::::Text.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 264,
"text": "The full, critically edited Latin text is to be found in the Monumenta Historica Societatis Iesu (MHSI), \"Constitutiones\", vol.1, Rome, 1934, pp. 24-32. Also in Reich, \"Documents\", pp. 216-219, and a condensed version in Robinson, \"European History\", ii. 161-165.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1yons6 | how have members of the bush admin not been charged with committing war crimes yet? | [
{
"answer": "This is bull. They, by definition, did commit war crimes; facts are not opinions.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "370936",
"title": "Vincent Bugliosi",
"section": "Section::::Writing career.:George W. Bush.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 451,
"text": "Bugliosi argues that, under the felony-murder rule, the resulting deaths of over 4,000 American soldiers and 100,000 Iraqi civilians (as of spring 2008) since hostilities began can be charged against Bush as second-degree murder. He said that any of the 50 state attorneys general, as well as any district attorney in the United States, had sufficient grounds to indict Bush for the murder of any soldier or soldiers who live in their state or county\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1503181",
"title": "John Yoo",
"section": "Section::::Legal opinions.:War crimes accusations.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 377,
"text": "On May 12, 2012, the Kuala Lumpur War Crimes Commission found Yoo, along with former President Bush, former Vice President Cheney, and several other senior members of the Bush administration, guilty of war crimes in absentia. The trial heard \"harrowing witness accounts from victims of torture who suffered at the hands of US soldiers and contractors in Iraq and Afghanistan\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "370936",
"title": "Vincent Bugliosi",
"section": "Section::::Writing career.:George W. Bush.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 660,
"text": "He also believed that George W. Bush should have been charged with the murders of more than 4,000 American soldiers who have died in Iraq since the American-led invasion of that country, because of his belief that Bush launched the invasion under false pretenses. In his book, \"The Prosecution of George W. Bush for Murder,\" he laid out his view of evidence and outlined what questions he would ask Bush at a potential murder trial. Bugliosi testified at a House Judiciary Committee meeting on July 25, 2008, at which he urged impeachment proceedings for Bush. The book formed the basis of a 2012 documentary film, \"The Prosecution of an American President.\" \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17507410",
"title": "The Prosecution of George W. Bush for Murder",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 305,
"text": "The Prosecution of George W. Bush for Murder is a 2008 book by Vincent Bugliosi, a former prosecutor in Los Angeles. He argues that President George W. Bush took the United States into the invasion of Iraq under false pretenses and should be tried for murder for the deaths of American soldiers in Iraq. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1005202",
"title": "Guantanamo military commission",
"section": "Section::::History.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 238,
"text": "With the War Crimes Act in mind, this ruling presented the Bush administration with the risk of criminal liability for war crimes. To address these legal problems, the president requested and Congress passed the Military Commissions Act.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "251916",
"title": "Houston Chronicle",
"section": "Section::::Criticism.:Sandoval family interview.\n",
"start_paragraph_id": 126,
"start_character": 0,
"end_paragraph_id": 126,
"end_character": 571,
"text": "In early 2004, \"Chronicle\" reporter Lucas Wall interviewed the family of Leroy Sandoval, a Marine from Houston who was killed in Iraq. After the article appeared, Sandoval's stepfather and sister called into Houston talk radio station KSEV and said that a sentence alleging \"President Bush's failure to find weapons of mass destruction\" in Iraq misrepresented their views on the war and President George W. Bush, that Wall had pressured them for a quotation that criticized Bush, and that the line alleging Bush's \"failure\" was included against the wishes of the family.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17507410",
"title": "The Prosecution of George W. Bush for Murder",
"section": "Section::::Content and themes.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 650,
"text": "Bugliosi argues that, under the felony-murder rule, the resulting deaths of over 4,000 American soldiers and 100,000 Iraqi civilians (as of spring 2008) since hostilities began can be charged against Bush as second-degree murder. He said that any of the 50 state attorneys general, as well as any district attorney in the United States, had sufficient grounds to indict Bush for the murder of any soldier or soldiers who live in their state or county. Bugliosi said as a prosecutor he would seek the death penalty. He said that an impeachment of Bush (as discussed by other opponents) would be \"a joke\" because of the scale of Bush's alleged crimes.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3842s6 | what does board mean in room and board? | [
{
"answer": "If someone is offering \"room and board\", they are offering to house and feed you (generally). This sometimes is forgotten, but that's what it means.",
"provenance": null
},
{
"answer": "When lodging somewhere \"room\" refers to where you stay and \"board\" refers to the food. \n\nThe term comes from the word \"board\" being used as a word for table back in the day. Meals would be served on the \"board\" of an inn or house for the lodgers and it eventually became synonymous with served food.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "4819881",
"title": "Room and board",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 302,
"text": "Room and board describes a situation where, in exchange for money, labor or other considerations, a person is provided with a place to live as well as meals on a comprehensive basis. It commonly occurs as a fee at colleges and universities; it also occurs in hotel-style accommodation for short stays.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1334301",
"title": "Legislative chamber",
"section": "Section::::Floor and committee.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 300,
"text": "The \"floor\" is the name for the full assembly, and a \"committee\" is a small deliberative assembly that is usually subordinate to the floor. In the United Kingdom, either chamber may opt to take some business such as detailed consideration of a Bill on the Floor of the House instead of in Committee.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3674098",
"title": "Floor model",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 703,
"text": "A floor model is a piece of equipment placed in a retail shop's sales area for display purposes. Floor models are taken out of their packaging and displayed how they would be used. In the case of furniture, stores will arrange pieces as they may be placed in the home. Appliances, such as microwaves, refrigerators, and washing machines, are typically put into rows so customers may compare the different models. Consumer electronics are typically plugged into an electric outlet, cable or satellite television feed, or local area network as appropriate. In all cases, floor models allow customers to test the quality of the displayed merchandise, or compare between different models of a certain type.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "197810",
"title": "Flooring",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 341,
"text": "Flooring is the general term for a permanent covering of a floor, or for the work of installing such a floor covering. Floor covering is a term to generically describe any finish material applied over a floor structure to provide a walking surface. Both terms are used interchangeably but floor covering refers more to loose-laid materials.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44418",
"title": "Deliberative assembly",
"section": "Section::::Types.:Board.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 297,
"text": "A \"board\", which is an administrative, managerial, or quasi-judicial body. A board derives its power from an outside authority that defines the scope of its operations. Examples include an organized society's or company's board of directors and government agency boards like a board of education.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "562187",
"title": "Ashte kashte",
"section": "Section::::Equipment.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 219,
"text": "The board is a square divided into seven rows and columns. The outer centre squares on each side of the board are specially marked. They are the starting squares for each player, and also function as \"resting squares\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13198013",
"title": "Floor trader",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 423,
"text": "A floor trader is a member of a stock or commodities exchange who trades on the floor of that exchange for his or her own account. The floor trader must abide by trading rules similar to those of the exchange specialists who trade on behalf of others. The term should not be confused with floor broker. Floor traders are occasionally referred to as registered competitive traders, individual liquidity providers or locals.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2vy7nn | If an atom emits a photon when an excited electron returns to ground state, can that happen without the atom being heated up? | [
{
"answer": "There are many ways that can happen - basically every method of lighting that isn't incandescent light is an example.\n\n[Light-emitting diode](_URL_5_) uses electricity to push electrons to a high energy state, and at the junction the electron falls to a lower state and emits a photon.\n\n[Fluorescent lamp](_URL_11_) also uses electricity, but in this case electrons are fired between the electrodes. They impact mercury vapour in the fluorescent lamp, which excites the electrons within, and when they relax they emit a photon. One further step involves this emitted photon exciting electrons in the coating, which when relaxed emits a photon in the visible range. The phenomenon of exciting an atom with incident light and emitting a photon that way is [fluorescence](_URL_9_).\n\nA similar phenomenon is [phosphorescence](_URL_0_), which also involves photoexcitation, but the relaxation occurs at a much longer timescale as the excited electron goes through [intersystem crossing](_URL_2_) before emitting a photon. You may have encountered this in many glow-in-the-dark toys or paint.\n\nSpeaking of glow-in-the-dark, back in the days, before the dangers of radioactivity was discovered, [radium](_URL_7_) was widely used in glow-in-the-dark paint. During radioactive decay, ionizing radiation is emitted, one of which are beta particles - energized electrons. The idea is the same as the above example of fluorescent lamp - you can harness the energy of those electrons by using a fluorescent coating. This is known as [radioluminescence](_URL_12_). Nowadays, [tritium](_URL_3_) is the safe radioluminescent source.\n\nYou also have [chemiluminescence](_URL_8_), where a chemical reaction produces a product with an excited electron, which then relaxes and emits a photon. This is the principle behind glowsticks, and also [luminol spray](_URL_6_) used to detect blood in crime scenes.\n\nWhen the chemical reaction is biological in nature (like in fireflies), we call this [bioluminescence](_URL_10_). In many biology labs an enzyme used in bioluminescence, [luciferase](_URL_1_), is used to track transcription.\n\nA relatively recent and not-yet-fully-understood discovery is [triboluminescence](_URL_4_), where mechanical stress on the material causes charge separation and emission. Some famous examples are the emission of x-rays when unrolling Scotch tape, and glowing of mint [Life Savers](_URL_13_) when crushed.\n\nSo there are quite a number of ways to excite electrons without using heat. Many of these are already widely used, in household lighting, monitor displays, etc.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "28469",
"title": "Stimulated emission",
"section": "Section::::Mathematical model.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 246,
"text": "If the atom is in the excited state, it may decay into the lower state by the process of spontaneous emission, releasing the difference in energies between the two states as a photon. The photon will have frequency \"ν\" and energy \"hν\", given by:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24065",
"title": "Population inversion",
"section": "Section::::The interaction of light with matter.:Stimulated emission.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 460,
"text": "If an atom is already in the excited state, it may be agitated by the passage of a photon that has a frequency ν corresponding to the energy gap Δ\"E\" of the excited state to ground state transition. In this case, the excited atom relaxes to the ground state, and it produces a second photon of frequency ν. The original photon is not absorbed by the atom, and so the result is two photons of the same frequency. This process is known as \"stimulated emission\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16133434",
"title": "Optically active additive",
"section": "Section::::Physics of optically active technology.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 1111,
"text": "If a single photon approaches an atom which is receptive to it, the photon can be absorbed by the atom in a manner very similar to a radio wave being picked up by an aerial. At the moment of absorption the photon ceases to exist and the total energy contained within the atom increases. This increase in energy is usually described symbolically by saying that one of the outermost electrons \"jumps\" to a \"higher orbit\". This new atomic configuration is unstable and the tendency is for the electron to fall back to its lower orbit or energy level, emitting a new photon as it goes. The entire process may take no more than 1 x 10^-8 seconds. The result is much the same as with reflective colour, but because of the process of absorption and emission, the substance emits a glow. According to Planck, the energy of each photon is given by multiplying its frequency in cycles per second by a constant (Planck’s constant, 6.626 x 10^-27 erg seconds). It follows that the wavelength of a photon emitted from a luminescent system is directly related to the difference between the energy of the two atomic levels involved.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28469",
"title": "Stimulated emission",
"section": "Section::::Mathematical model.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 288,
"text": "Alternatively, if the excited-state atom is perturbed by an electric field of frequency \"ν\", it may emit an additional photon of the same frequency and in phase, thus augmenting the external field, leaving the atom in the lower energy state. This process is known as stimulated emission.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1200",
"title": "Atomic physics",
"section": "Section::::Electronic configuration.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 332,
"text": "If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state. After a certain time, the electron in an excited state will \"jump\" (undergo a transition) to a lower state. In a neutral atom, the system will emit a photon of the difference in energy, since energy is conserved.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "292420",
"title": "Emission spectrum",
"section": "Section::::Origins.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 427,
"text": "When the electrons in the atom are excited, for example by being heated, the additional energy pushes the electrons to higher energy orbitals. When the electrons fall back down and leave the excited state, energy is re-emitted in the form of a photon. The wavelength (or equivalently, frequency) of the photon is determined by the difference in energy between the two states. These emitted photons form the element's spectrum.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24065",
"title": "Population inversion",
"section": "Section::::The interaction of light with matter.:Stimulated emission.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 435,
"text": "Specifically, an excited atom will act like a small electric dipole which will oscillate with the external field provided. One of the consequences of this oscillation is that it encourages electrons to decay to the lowest energy state. When this happens due to the presence of the electromagnetic field from a photon, a photon is released in the same phase and direction as the \"stimulating\" photon, and is called stimulated emission.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
51hcib | Why did cavalry during the U.S. Civil War operate almost exclusively as dragoons? | [
{
"answer": " > Yet both Union and Confederate cavalry corps operated almost exclusively as dragoons. Why was this? To what extent was this affected by the duties of pre-war US cavalry, and/or the lack of a European-style military establishment? \n\nThe ways in which Union and Confederate cavalry came to operate during the war depended a lot on the availability and quality of mounts, the terrain on which the fighting took place, and the quality of training. The quality of Union remounts was appalling at the start of the war, with unbroken horses and those too young or old to ride effectively among other issues, being purchased en masse and poorly taken care of. Until George Stoneman took over as Chief of US Cavalry in 1863, and Sheridan as head of the Union Cavalry Corps, the situation didn't really improve. By the estimates of one French military attachee, Union Regiments went through up to 6 horses per annum per trooper, in the first 3 years of the war. The Confederates were somewhat better mounted initially, the horses often being personal mounts from home, these were irreplaceable and scarce by the later part of the war, 1864-65. So on both sides, poor quality mounts constrained the chances for mass adoption of shock tactics when these were appropriate. Shock action also required a good deal of training, as well as skillful execution to ensure success. Stephen G. Starr indicates that units that enjoyed initial success with the saber were more likely to continue with using it than those who were met with failure. For example, The 17th Mounted Infantry charged Bedford-Forrest's dismounted troopers at Bolger's Creek on April 1st, 1865, despite being raised and designated \"mounted infantry.\" Most units appeared to favour firearms simply due to the ease of training and their being easier to obtain.\n\nThe lack of quality mounts, and the difficulty in training men for shock action compared to dismounted fire action, were further compounded by the terrain in which much of the war was fought. Stephen Badsey lays out the problem quite well:\n\n > The main theatre of war, in Virginia, was by European standards \nheavy ground, hilly, sparsely populated, with large virgin forests. This was scarcely ideal for the charge. The Western Theatre, far larger, saw considerable variation in terrain, but even there, so Colonel Duke of the Confederate Cavalry wrote: \"The nature of the ground on which we generally fought, covered with dense woods or crossed with high fences, and the impossibility of devoting sufficient time to the training of the horses, rendered the employment of large bodies of mounted men to any good purpose very difficult.\"\n\nMassed cavalry charges of divisions or more were very rare (it should be noted that this was historically the case even in Europe), but actions in troop, squadron and regiment strength were possible. A charge didn't even necessarily need to involve edged weapons; troopers with revolvers, carbines or rifles could \"gallop\" a position, charging up to it and dismounting to open fire. Shock action and dismounted action could also be combined quite effectively, as in the case of J.H. Morgan's charge at Shiloh in 1862, and in the clash between Pleasonton and Stuarts Cavalry in 1863.\n\nTo conclude, it might be more proper to say that American Cavalry, Union and Confederate, functioned more as 'Mounted Rifles' or 'Hybrid Cavalry', as 19th and early 20th century British (and Dominion) military writers termed them. In the former case, fire action dismounted was prioritized, but shock action could be resorted to in special circumstances, while Cavalry's scouting role was still central. In the latter case, emphasis was placed on shock action, but combining artillery and machine guns, as well as dismounted firepower. They weren't necessarily Dragoons who simply used their horses for transport, but could display great versatility in their tactics and missions. \n\nSources:\n\n* \"The Obsolescence of the Arme Blanche and Technological Determinism in British Military History\" and \"Writing Horses into American Civil War History\" by Gervase Phillips\n* [Fire and the Sword: The British Army and the Arme Blanche \nControversy 1871-1921] (_URL_1_) by Stephen Badsey\n* \"Cold Steel: The Saber and the Union Cavalry,\" by Stephen G. Starr\n\nThis [essay] (_URL_2_) on Civil War Cavalry from before WWI is worth a read, as is Alonzo Gray's [Cavalry Tactics as illustrated by the War of the Rebellion] (_URL_0_)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "13592317",
"title": "Model 1860 Light Cavalry Saber",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 237,
"text": "Before the Civil War there was no light or heavy cavalry in the US army. Instead there were \"Dragoons\" (founded 1830) and \"Mounted Riflemen\" (founded c.1840). In 1861 these mounted regiments were renamed cavalry and given yellow piping.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7783929",
"title": "Horses in warfare",
"section": "Section::::The Americas.\n",
"start_paragraph_id": 85,
"start_character": 0,
"end_paragraph_id": 85,
"end_character": 1071,
"text": "During the American Civil War (1861–1865), cavalry held the most important and respected role it would ever hold in the American military. Field artillery in the American Civil War was also highly mobile. Both horses and mules pulled the guns, though only horses were used on the battlefield. At the beginning of the war, most of the experienced cavalry officers were from the South and thus joined the Confederacy, leading to the Confederate Army's initial battlefield superiority. The tide turned at the 1863 Battle of Brandy Station, part of the Gettysburg campaign, where the Union cavalry, in the largest cavalry battle ever fought on the American continent, ended the dominance of the South. By 1865, Union cavalry were decisive in achieving victory. So important were horses to individual soldiers that the surrender terms at Appomattox allowed every Confederate cavalryman to take his horse home with him. This was because, unlike their Union counterparts, Confederate cavalrymen provided their own horses for service instead of drawing them from the government.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1583811",
"title": "United States Cavalry",
"section": "Section::::History.:Civil War.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 413,
"text": "Shortly before the outbreak of the Civil War, the Army's dragoon regiments were designated as \"Cavalry\", losing their previous distinctions. The change was an unpopular one and the former dragoons retained their orange braided blue jackets until they wore out and had to be replaced with cavalry yellow. The 1st United States Cavalry fought in virtually every campaign in the north during the American Civil War.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13592317",
"title": "Model 1860 Light Cavalry Saber",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 266,
"text": "Later in the Civil War large cavalry charges became less common and the cavalry took on the role of skirmishers. Many replaced their sabers with extra revolvers, or left it in the saddle while fighting on foot with their repeating Henry rifles and Spencer carbines.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7783929",
"title": "Horses in warfare",
"section": "Section::::The Americas.\n",
"start_paragraph_id": 84,
"start_character": 0,
"end_paragraph_id": 84,
"end_character": 501,
"text": "During the American Revolutionary War (1775–1783), the Continental Army made relatively little use of cavalry, primarily relying on infantry and a few dragoon regiments. The United States Congress eventually authorized regiments specifically designated as cavalry in 1855. The newly formed American cavalry adopted tactics based on experiences fighting over vast distances during the Mexican War (1846–1848) and against indigenous peoples on the western frontier, abandoning some European traditions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6816",
"title": "Cavalry",
"section": "Section::::History.:19th century.:United States.\n",
"start_paragraph_id": 99,
"start_character": 0,
"end_paragraph_id": 99,
"end_character": 1009,
"text": "In the early American Civil War the regular United States Army mounted rifle, dragoon, and two existing cavalry regiments were reorganized and renamed cavalry regiments, of which there were six. Over a hundred other federal and state cavalry regiments were organized, but the infantry played a much larger role in many battles due to its larger numbers, lower cost per rifle fielded, and much easier recruitment. However, cavalry saw a role as part of screening forces and in foraging and scouting. The later phases of the war saw the Federal army developing a truly effective cavalry force fighting as scouts, raiders, and, with repeating rifles, as mounted infantry. The distinguished 1st Virginia Cavalry ranks as one of the most effectual and successful cavalry units on the Confederate side. Noted cavalry commanders included Confederate general J.E.B. Stuart, Nathan Bedford Forrest, and John Singleton Mosby (a.k.a. \"The Grey Ghost\") and on the Union side, Philip Sheridan and George Armstrong Custer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2249253",
"title": "Cavalry in the American Civil War",
"section": "Section::::Union cavalry.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 363,
"text": "Early in the war, Union cavalry forces were often wasted by being used merely as pickets, outposts, orderlies, guards for senior officers, and messengers. The first officer to make effective use of the Union cavalry was Major General Joseph Hooker, who in 1863 consolidated the cavalry forces of his Army of the Potomac under a single commander, George Stoneman.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3ks5fq | why do a lot of hentai and japanese porn use rape? | [
{
        "answer": "My assumption is that it has to do with the relatively rigid and formal social norms and politeness of Japanese culture in everyday social interactions, and also with its cultural ideas of the ideal man and woman. As a consequence, certain behavior not tolerated at all in normal everyday life, like really uninhibited sexuality and breaking down the rigid norms, can surface as sexual role-play fantasies, which are then played out in porn in an exaggerated form.",
"provenance": null
},
{
"answer": "I would theorize it comes partially from the cultural status of women in japan as submissive beings that you can force your will on, and partially from the value of purity in the sense that a women *wanting* it is slutty. As a result you get a power fantasy of forceful men and an unwanting/pure woman.\n\nAt least in hentai there is also a difference between \"true\" rape and corruption rape, the former has the girl remaining resistant until the end, and seems to me to remain somewhat rare - The corruption of purity on the other hand seems to be *extremely* popular, in which case the girl is reluctant *at first* and then \"learns to love it\". So basically, turning someone pure and innocent to corrupt and slutty is what's fetishized here more so than the actual rape.",
"provenance": null
},
{
"answer": "Japan has very unusual censorship laws, essentially banning the direct display of sex/genitals that most porn relies on. So Japanese porn has to use other factors to attract/keep viewers. \n\nI'm not sure how the laws affect animation, but I know they are a factor in the rise of 'bukkake' and Japanese (live action) porn's unusual subject matter and style.\n\nAlso, as others have mentioned, Japanese culture has traditionally had a big emphasis on submission, and deference to those in power (especially by women). They also have a very long history of erotic art (ukiyo-e IIRC). No doubt this is also a major factor.",
"provenance": null
},
{
"answer": "The Western christianist mindset is guilt-based, which is an internal self-judging process-- am I a bad person for watching this naughty cartoon rape scene? Yes, yes I am, and now I feel bad about what a despicable POS I am. No one could or should ever love me. Jesus, forgive me my sinful thoughts! You do? Praise the Lord and pass the butter, now I can get on with my life again.\n\nJapan is shame-based-- go ahead and indulge your freaky hentai urges privately but once you set foot out the front door your responsibility for the next 12 hours is to conduct yourself according to society's rigidly prescribed behaviors. It's restrictive and repressive, but the other 12 hours of the day are all yours to freely express whatever variety of funkiness you happen to be into. Just keep it to yourself and some like-minded others, and possibly a trusted friend or two.\n\nTwo approaches to the pressure-relief valve necessary for the human individual coping with life in a complex society.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "43351",
"title": "Ukiyo-e",
"section": "Section::::Style.:Themes and genres.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 971,
                "text": "Traditional Japanese religions do not consider sex or pornography a moral corruption in the Judaeo-Christian sense, and until the changing morals of the Meiji era led to its suppression, shunga erotic prints were a major genre. While the Tokugawa regime subjected Japan to strict censorship laws, pornography was not considered an important offence and generally met with the censors' approval. Many of these prints displayed a high level of draughtsmanship, and often humour, in their explicit depictions of bedroom scenes, voyeurs, and oversized anatomy. As with depictions of courtesans, these images were closely tied to entertainments of the pleasure quarters. Nearly every ukiyo-e master produced shunga at some point. Records of societal acceptance of shunga are absent, though Timon Screech posits that there were almost certainly some concerns over the matter, and that its level of acceptability has been exaggerated by later collectors, especially in the West.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4330437",
"title": "History of manga",
"section": "Section::::After World War II.:\"Shōnen\", \"seinen,\" and \"seijin\" manga.:Sex and women's roles in manga for males.\n",
"start_paragraph_id": 52,
"start_character": 0,
"end_paragraph_id": 52,
"end_character": 638,
"text": "With the relaxation of censorship in Japan after the early 1990s, various forms of graphically drawn sexual content appeared in manga intended for male readers that correspondingly occurred in English translations. These depictions ranged from partial to total nudity through implied and explicit sexual intercourse through sadomasochism (SM), incest, rape, and sometimes zoophilia (bestiality). In some cases, rape and lust-murder themes came to the forefront, as in \"Urotsukidōji\" by Toshio Maeda and \"Blue Catalyst\" from 1994 by Kei Taniguchi, but these extreme elements are not commonplace in either untranslated or translated manga.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11982328",
"title": "Junior idol",
"section": "Section::::Controversy.:Legal status.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 586,
"text": "The Japanese Anti-child prostitution and pornography law was enacted in November 1999—and revised in 2004 to criminalize distribution of child pornography over the Internet—defines child pornography as the depiction \"in a way that can be recognized visually, such a pose of a child relating to sexual intercourse or an act similar to sexual intercourse with or by the child\", of \"a pose of a child relating to the act of touching genital organs, etc.\" or the depiction of \"a pose of a child who is naked totally or partially in order to arouse or stimulate the viewer's sexual desire.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21347867",
"title": "Legal status of drawn pornography depicting minors",
"section": "Section::::Japan.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 213,
"text": "In Japan, pornographic art depicting underage characters (\"lolicon\", \"shotacon\") is legal but remains controversial even within the country. They are commonly found in manga, erotic computer games, and doujinshi.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "223289",
"title": "Bukkake",
"section": "Section::::History.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 1117,
"text": "There is a popular belief that links the origin of the practice to a form of punishment for adultery on women in medieval Japan. In fact, that description of its origin is false, mainly because the actual punishment for cheating wives was decapitation. Bukkake was first represented in pornographic films in the mid to late 1980s in Japan. According to one commentator, a significant factor in the development of bukkake as a pornographic form was the mandatory censorship in Japan where genitals must be pixelated by a \"mosaic\". One consequence of this is that Japanese pornography tends to focus more on the face and body of actresses rather than on their genitals. Since film producers could not show penetration, they sought other ways to depict sex acts without violating Japanese law and since semen did not need to be censored, a loophole existed for harder sex scenes. However, popularization of the act and the term for it has been credited to director Kazuhiko Matsumoto in 1998. The Japanese adult video studio Shuttle Japan registered the term \"ぶっかけ/BUKKAKE\" as a trademark (No. 4545137) in January 2001.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40242024",
"title": "Genocidal rape",
"section": "Section::::Documented instances.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 454,
"text": "A large portion of these rapes were systematized in a process where soldiers would search door-to-door for young girls, with many women taken captive and gang raped. The women were often killed immediately after being raped, often through explicit mutilation or by stabbing a bayonet, long stick of bamboo, or other objects into the vagina. Young children were not exempt from these atrocities, and were cut open to allow Japanese soldiers to rape them.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "54946",
"title": "Yaoi",
"section": "Section::::Thematic elements.:Rape.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 1149,
"text": "Rape fantasy is a theme commonly found in yaoi manga. Anal intercourse is understood as a means of expressing commitment to a partner, and in yaoi, the \"apparent violence\" of rape is transformed into a \"measure of passion\". While Japanese society often shuns or looks down upon women who are raped in reality, the yaoi genre depicts men who are raped as still \"imbued with innocence\" and are typically still loved by their rapists after the act, a trope that may have originated with \"Kaze to Ki no Uta\". Rape scenes in yaoi are rarely presented as crimes with an assaulter and a victim: scenes where a seme rapes an uke are not depicted as symptomatic of the \"disruptive sexual/violent desires\" of the seme, but instead are a signifier of the \"uncontrollable love\" felt by a seme for an uke. Such scenes are often a plot device used to make the uke see the seme as more than just a good friend and typically result in the uke falling in love with the seme. Rape fantasy themes explore the protagonist's lack of responsibility in sex, leading to the narrative climax of the story, where \"the protagonist takes responsibility for his own sexuality\".\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
45zfwx | Why can't the immune system prevent shingles outbreaks, since it already has antibodies for the virus? | [
{
        "answer": "The mechanism that controls VZV latency is not well understood. There are several factors that may increase risk with regard to recurrent shingles episodes. Aging, immunosuppression, intrauterine exposure to VZV, and having had varicella at a young age are all thought to play a role in the recurrent infection.",
"provenance": null
},
{
        "answer": "My professor explained it as a cellular vs humoral immunity question.\n\nAntibodies do a great job of stopping spread in humors (blood, interstitial fluid, etc.) -- hence antibody immunity is called humoral immunity. But, Varicella Zoster Virus (VZV) survives in the nerve (dorsal root ganglia, to be specific), which is an immune-privileged site -- no antibodies or T cells are getting into a nerve. So when the virus decides to come out of the nerve back to the skin cells (not well understood when or why, as CD8plus said), it can easily track along the nerve and infect adjacent cells.\n\n&nbsp;\n\nSo, our only way of getting rid of the VZV outbreak in our skin cells (ie shingles) is for T cells to move in and purge the infected cells. This takes a while, and is very inflammatory.",
"provenance": null
},
{
        "answer": "There are good answers here already. One important point is that VZV is a member of the herpesvirus family, and as a group these are tremendously sophisticated and complex viruses. They have very large genomes (for viruses, that is), and many of their genes target the host immune system and manipulate host cells in various ways. They're ancient -- their evolutionary history can readily be traced back hundreds of millions of years -- and extremely well adapted to their host species. Many herpesviruses have different, but equally subtle and powerful ways of avoiding immunity, so that they're capable of setting up life-long infections with intermittent new outbreaks.",
"provenance": null
},
{
        "answer": "I have a bit of a follow-up question: since I've seen some commercials with Terry Bradshaw advertising a vaccine for shingles, how does that work? Is it separate from the chicken pox vaccine? Can someone who had chicken pox still get the shingles vaccine (and expect the efficacy to be reasonable)?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "443800",
"title": "Shingles",
"section": "Section::::Pathophysiology.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 742,
"text": "Unless the immune system is compromised, it suppresses reactivation of the virus and prevents shingles outbreaks. Why this suppression sometimes fails is poorly understood, but shingles is more likely to occur in people whose immune systems are impaired due to aging, immunosuppressive therapy, psychological stress, or other factors. Upon reactivation, the virus replicates in neuronal cell bodies, and virions are shed from the cells and carried down the axons to the area of skin innervated by that ganglion. In the skin, the virus causes local inflammation and blistering. The short- and long-term pain caused by shingles outbreaks originates from inflammation of affected nerves due to the widespread growth of the virus in those areas.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19167679",
"title": "Virus",
"section": "Section::::Role in human disease.:Host defence mechanisms.\n",
"start_paragraph_id": 103,
"start_character": 0,
"end_paragraph_id": 103,
"end_character": 335,
                "text": "Antibodies can continue to be an effective defence mechanism even after viruses have managed to gain entry to the host cell. A protein that is in cells, called TRIM21, can attach to the antibodies on the surface of the virus particle. This primes the subsequent destruction of the virus by the enzymes of the cell's proteasome system.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "443800",
"title": "Shingles",
"section": "Section::::Pathophysiology.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 264,
"text": "As with chickenpox and other forms of alpha-herpesvirus infection, direct contact with an active rash can spread the virus to a person who lacks immunity to it. This newly infected individual may then develop chickenpox, but will not immediately develop shingles.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "443800",
"title": "Shingles",
"section": "Section::::Pathophysiology.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 469,
"text": "The causative agent for shingles is the varicella zoster virus (VZV) – a double-stranded DNA virus related to the herpes simplex virus. Most individuals are infected with this virus as children which causes an episode of chickenpox. The immune system eventually eliminates the virus from most locations, but it remains dormant (or latent) in the ganglia adjacent to the spinal cord (called the dorsal root ganglion) or the trigeminal ganglion in the base of the skull.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "443800",
"title": "Shingles",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 837,
"text": "Shingles is due to a reactivation of varicella zoster virus (VZV) in a person's body. The disease chickenpox is caused by the initial infection with VZV. Once chickenpox has resolved, the virus may remain inactive in nerve cells. When it reactivates, it travels from the nerve body to the endings in the skin, producing blisters. Risk factors for reactivation include old age, poor immune function, and having had chickenpox before 18 months of age. How the virus remains in the body or subsequently re-activates is not well understood. Exposure to the virus in the blisters can cause chickenpox in someone who has not had it, but will not trigger shingles. Diagnosis is typically based on a person's signs and symptoms. Varicella zoster virus is not the same as herpes simplex virus; however, they belong to the same family of viruses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2734429",
"title": "Alphavirus",
"section": "Section::::Pathogenesis and immune response.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 332,
"text": "When an individual is infected with this particular virus, its immune system can play a role in clearing away the virus particles. Alphaviruses are able to cause the production of interferons. Antibodies and T cells are also involved. The neutralizing antibodies also play an important role to prevent further infection and spread.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15632707",
"title": "Intrinsic immunity",
"section": "Section::::Relationship to the immune system.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 202,
"text": "Because the production of intrinsic immune mediating proteins cannot be increased during infection, these defenses can become saturated and ineffective if a cell is infected with a high level of virus.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5g8cva | At what altitude on earth is the air pressure roughly equivalent to the surface pressure of the Martian atmosphere? | [
{
"answer": " > The surface pressure on Mars is equivalent to the range of pressures on Earth at altitudes between ~30 km and ~60 km. \n\n--[Math Encounters Blog](_URL_1_)\n\n > At altitudes above 50,000 feet [_15.2 km_] man requires a pressurized suit to be safe in this near space environment. \n\n--[A Brief History of the Pressure Suit](_URL_0_)\n\n > At 55,000 feet [_16.76 km_], atmospheric pressure is so low that water vapor in the body appears to boil causing the skin to inflate like a balloon. At 63,000 feet [_19.2 km_] blood at normal body temperature (98 F) appears to boil. ... At altitudes above 65,000 feet [_19.8 km_] atmospheric pressure approaches that of space, that is the pressurization factors for protective equipment to be used at 65,000 feet are essentially the same as would be required for survival in a vacuum.\n\n--[A Brief History of the Pressure Suit](_URL_0_)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "39217282",
"title": "List of Mars analogs",
"section": "",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 741,
                "text": "At about 28 miles (45 km, 150 thousand feet) Earth altitude the pressure starts to be equivalent to Mars surface pressure. However, the major component of Mars air, CO2 gas, is denser than Earth air for a given pressure. Perhaps more significantly there is no land at this altitude on earth. The highest point on earth is the summit of Mount Everest at about 5.5 miles (8.8 km, 29 thousand feet), where the pressure is about fifty times greater than on the surface of Mars. The correct atmospheric pressure can be created by a vacuum chamber. NASA's Space Power Facility was used to test the airbag landing systems for the Mars Pathfinder and the Mars Exploration Rovers, Spirit and Opportunity, under simulated Mars atmospheric conditions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "202899",
"title": "Atmosphere",
"section": "Section::::Pressure.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 377,
"text": "Atmospheric pressure at a particular location is the force per unit area perpendicular to a surface determined by the weight of the vertical column of atmosphere above that location. On Earth, units of air pressure are based on the internationally recognized standard atmosphere (atm), which is defined as 101.325 kPa (760 Torr or 14.696 psi). It is measured with a barometer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47484",
"title": "Atmospheric pressure",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 838,
                "text": "In most circumstances atmospheric pressure is closely approximated by the hydrostatic pressure caused by the weight of air above the measurement point. As elevation increases, there is less overlying atmospheric mass, so that atmospheric pressure decreases with increasing elevation. Pressure measures force per unit area, with SI units of Pascals (1 pascal = 1 newton per square metre, 1 N/m²). On average, a column of air with a cross-sectional area of 1 square centimetre (cm²), measured from mean (average) sea level to the top of Earth's atmosphere, has a mass of about 1.03 kilogram and exerts a force or \"weight\" of about 10.1 newtons, resulting in a pressure of 10.1 N/cm² or 101 kN/m² (101 kilopascals, kPa). A column of air with a cross-sectional area of 1 in² would have a weight of about 14.7 lb, resulting in a pressure of 14.7 lb/in².\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52580097",
"title": "Metre sea water",
"section": "Section::::Feet of sea water.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 495,
"text": "One atmosphere is approximately equal to 33 feet of sea water or 14.7 psi, which gives 4.9/11 or about 0.445 psi per foot. Atmospheric pressure may be considered constant at sea level, and minor fluctuations caused by the weather are usually ignored. Pressures measured in fsw and msw are gauge pressure, relative to the surface pressure of 1 atm absolute, except when a pressure difference is measured between the locks of a hyperbaric chamber, which is also generally measured in fsw and msw.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22818",
"title": "Olympus Mons",
"section": "Section::::Description.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 740,
"text": "The typical atmospheric pressure at the top of Olympus Mons is 72 pascals, about 12% of the average Martian surface pressure of 600 pascals. Both are exceedingly low by terrestrial standards; by comparison, the atmospheric pressure at the summit of Mount Everest is 32,000 pascals, or about 32% of Earth's sea level pressure. Even so, high-altitude orographic clouds frequently drift over the Olympus Mons summit, and airborne Martian dust is still present. Although the average Martian surface atmospheric pressure is less than one percent of Earth's, the much lower gravity of Mars increases the atmosphere's scale height; in other words, Mars's atmosphere is expansive and does not drop off in density with height as sharply as Earth's.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47488",
"title": "Barometer",
"section": "Section::::Equation.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 278,
                "text": "In thermodynamic calculations, a commonly used pressure unit is the \"standard atmosphere\". This is the pressure resulting from a column of mercury of 760 mm in height at 0 °C. For the density of mercury, use ρ = 13,595 kg/m³ and for gravitational acceleration use g = 9.807 m/s².\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41822",
"title": "Troposphere",
"section": "Section::::Pressure and temperature structure.:Pressure.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 326,
"text": "The pressure of the atmosphere is maximum at sea level and decreases with altitude. This is because the atmosphere is very nearly in hydrostatic equilibrium so that the pressure is equal to the weight of air above a given point. The change in pressure with altitude can be equated to the density with the hydrostatic equation\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
61ngl7 | the core principles of immanuel kant's philosphy. | [
{
        "answer": "It generally covers the human perception of reality: that we are limited by our senses and our brains, and aren't really capable of perceiving or conceiving the 'true' nature of the universe. Human perception and instinct being flawed, he suggested that moral and aesthetic questions should be answered through reasoned thought rather than emotional reaction.",
"provenance": null
},
{
        "answer": "Kant answers 3 big questions:\n\n\n1- what is reality? \n\n\nKant says there's a real world outside of your body. But the way you experience this world (using your senses of seeing, hearing, touching etc.) creates a map, or model, of this outside reality in your mind, which is unique to YOU. Even things like space and time are unique to you. So if you had different senses, like Superman has superhearing, you'd have a completely different model of reality.\n\n\n2- what should be? (Right and wrong)\n\n\nThis is Kant's most famous contribution (the categorical imperative). It means when you conclude that something is wrong, it is wrong 100% of the time, under any circumstances, and for everybody. You can't say murder is wrong then justify using it in some situations (capital punishment, war, etc.) It does not change nor does it matter where or when.\n\n\nHis point is that because your model of reality is unique to you, you can always come up with situations to convince yourself what you're doing isn't wrong (\"it's not stealing if you're starving.\") And there would be no sense of morality if enough people did that. The only way that there can be any morality is that right and wrong are universally established.\n\n\n3- How should society be governed?\n\n\nSo since right and wrong are universal, societies should be governed by a constitution and by the rule of law. Pure democracy (rule of majority) is not the answer, because no matter how many people believe something to be right, wrong is always wrong.",
"provenance": null
},
{
        "answer": "1: u/sexypundit's comment is pretty accurate, and I'd like to point you to the reply I made to it.\n\n2: Stay off Wikipedia when it comes to philosophy, and read the Stanford Encyclopedia of Philosophy instead. It is much better, because it is written and edited by academics in the respective field.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "154040",
"title": "Syntactic ambiguity",
"section": "Section::::Kantean.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 307,
"text": "Immanuel Kant employs the term \"amphiboly\" in a sense of his own, as he has done in the case of other philosophical words. He denotes by it a confusion of the notions of the pure understanding with the perceptions of experience, and a consequent ascription to the latter of what belongs only to the former.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "397689",
"title": "Moral reasoning",
"section": "Section::::In philosophy.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 1652,
"text": "Immanuel Kant had a radically different view of morality. In his view, there are universal laws of morality that one should never break regardless of emotions. He proposes a four-step system to determine whether or not a given action was moral based on logic and reason. The first step of this method involves formulating \"a maxim capturing your reason for an action\". In the second step, one \"frame[s] it as a universal principle for all rational agents\". The third step is assessing \"whether a world based on this universal principle is conceivable\". If it is, then the fourth step is asking oneself \"whether [one] would will the maxim to be a principle in this world\". In essence, an action is moral if the maxim by which it is justified is one which could be universalized. For instance, when deciding whether or not to lie to someone for one's own advantage, one is meant to imagine what the world would be like if everyone always lied, and successfully so. In such a world, there would be no purpose in lying, for everybody would expect deceit, rendering the universal maxim of lying whenever it is to your advantage absurd. Thus, Kant argues that one should not lie under any circumstance. Another example would be if trying to decide whether suicide is moral or immoral; imagine if everyone committed suicide. Since mass international suicide would not be a good thing, the act of suicide is immoral. Kant's moral framework, however, operates under the overarching maxim that you should treat each person as an end in themselves, not as a means to an end. This overarching maxim must be considered when applying the four aforementioned steps. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6880483",
"title": "Philosophy of mind",
"section": "Section::::Philosophy of mind in the continental tradition.\n",
"start_paragraph_id": 105,
"start_character": 0,
"end_paragraph_id": 105,
"end_character": 591,
"text": "Immanuel Kant's \"Critique of Pure Reason\", first published in 1781 and presented again with major revisions in 1787, represents a significant intervention into what will later become known as the philosophy of mind. Kant's first critique is generally recognized as among the most significant works of modern philosophy in the West. Kant is a figure whose influence is marked in both continental and analytic/Anglo-American philosophy. Kant's work develops an in-depth study of transcendental consciousness, or the life of the mind as conceived through universal categories of consciousness.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "960684",
"title": "Thesis, antithesis, synthesis",
"section": "Section::::History of the idea.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 216,
"text": "Thomas McFarland (2002), in his \"Prolegomena\" to Coleridge's \"Opus Maximum\", identifies Immanuel Kant's \"Critique of Pure Reason\" (1781) as the genesis of the thesis/antithesis dyad. Kant concretises his ideas into:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12598",
"title": "Georg Wilhelm Friedrich Hegel",
"section": "Section::::Philosophical work.:Freedom.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 662,
"text": "In his discussion of \"Spirit\" in his \"Encyclopedia\", Hegel praises Aristotle's \"On the Soul\" as \"by far the most admirable, perhaps even the sole, work of philosophical value on this topic\". In his \"Phenomenology of Spirit\" and his \"Science of Logic\", Hegel's concern with Kantian topics such as freedom and morality and with their ontological implications is pervasive. Rather than simply rejecting Kant's dualism of freedom versus nature, Hegel aims to subsume it within \"true infinity\", the \"Concept\" (or \"Notion\": \"Begriff\"), \"Spirit\" and \"ethical life\" in such a way that the Kantian duality is rendered intelligible, rather than remaining a brute \"given\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "219447",
"title": "German idealism",
"section": "Section::::Responses.:Hannah Arendt.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 734,
"text": "Hannah Arendt stated that Immanuel Kant distinguished between \"Vernunft\" (\"reason\") and \"Verstand\" (\"intellect\"): these two categories are equivalents of \"the urgent need of\" reason, and the \"mere quest and desire for knowledge\". Differentiating between reason and intellect, or the need to reason and the quest for knowledge, as Kant has done, according to Arendt \"coincides with a distinction between two altogether different mental activities, thinking and knowing, and two altogether different concerns, meaning, in the first category, and cognition, in the second\". These ideas were also developed by Kantian philosopher, Wilhelm Windelband, in his discussion of the approaches to knowledge named \"nomothetic\" and \"idiographic\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32361",
"title": "Value theory",
"section": "Section::::Ethics and axiology.:Kant: hypothetical and categorical goods.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 685,
"text": "The thinking of Immanuel Kant greatly influenced moral philosophy. He thought of moral value as a unique and universally identifiable property, as an absolute value rather than a relative value. He showed that many practical goods are good only in states-of-affairs described by a sentence containing an \"if\" clause, e.g., in the sentence, \"Sunshine is only good if you do not live in the desert.\" Further, the \"if\" clause often described the category in which the judgment was made (art, science, etc.). Kant described these as \"hypothetical goods\", and tried to find a \"categorical\" good that would operate across all categories of judgment without depending on an \"if-then\" clause.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
n53ln | How is nuclear radiation stored in objects? | [
{
"answer": "Unstable nuclei will eventually decay and emit radiation. If something is bombarded with neutrons, some of them are captured by the nuclei, which can become unstable. In the case of water, it's possible that some could be captured to produce tritium, which is radioactive.",
"provenance": null
},
{
"answer": "Radioactivity is really a property of the nucleus of an atom. Unstable nuclei will decay and throw off pieces of themselves (alpha, or neutron radiation) and electrons (beta radiation) and photons (x-ray or gamma ray radiation).\n\nThere are two ways water can be radioactive:\n\n* It's contaminated with something radioactive. Iodine 131 is a pretty common contaminant after nuclear reactor accidents. I would call this water contaminated with radioactive material rather than irradiated, but apparently my version is too long for a headline.\n\n* The atoms in the water itself contain unstable isotopes. Both Hydrogen and Oxygen have unstable isotopes. The most common of these is Tritium (or ^3 H) which is a hydrogen with two extra neutrons in the nucleus. This isotope occurs naturally, but it can also be produced if water is exposed to intense neutron radiation for long periods of time.\n\nThe word \"irradiated\" can mean several different things. Inside a nuclear reactor a lot of the radiation is neutrons. When someone talks about irradiated food, that's just x-rays, which can't create new isotopes and so can't make water or food radioactive.\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "194031",
"title": "Nuclear fuel cycle",
"section": "Section::::Service period.:Transport of radioactive materials.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 816,
"text": "Since nuclear materials are radioactive, it is important to ensure that radiation exposure of those involved in the transport of such materials and of the general public along transport routes is limited. Packaging for nuclear materials includes, where appropriate, shielding to reduce potential radiation exposures. In the case of some materials, such as fresh uranium fuel assemblies, the radiation levels are negligible and no shielding is required. Other materials, such as spent fuel and high-level waste, are highly radioactive and require special handling. To limit the risk in transporting highly radioactive materials, containers known as spent nuclear fuel shipping casks are used which are designed to maintain integrity under normal transportation conditions and during hypothetical accident conditions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20825543",
"title": "High-level radioactive waste management",
"section": "Section::::Materials for geological disposal.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 376,
"text": "In order to store the high level radioactive waste in long-term geological depositories, specific waste forms need to be used which will allow the radioactivity to decay away while the materials retain their integrity for thousands of years. The materials being used can be broken down into a few classes: glass waste forms, ceramic waste forms, and nanostructured materials.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25856",
"title": "Radiation",
"section": "Section::::Uses.:Science.\n",
"start_paragraph_id": 80,
"start_character": 0,
"end_paragraph_id": 80,
"end_character": 363,
"text": "Radiation is used to determine the composition of materials in a process called neutron activation analysis. In this process, scientists bombard a sample of a substance with particles called neutrons. Some of the atoms in the sample absorb neutrons and become radioactive. The scientists can identify the elements in the sample by studying the emitted radiation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3755",
"title": "Boron",
"section": "Section::::Characteristics.:Isotopes.:Depleted boron (boron-11).:Radiation-hardened semiconductors.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 650,
"text": "Cosmic radiation will produce secondary neutrons if it hits spacecraft structures. Those neutrons will be captured in B, if it is present in the spacecraft's semiconductors, producing a gamma ray, an alpha particle, and a lithium ion. Those resultant decay products may then irradiate nearby semiconductor \"chip\" structures, causing data loss (bit flipping, or single event upset). In radiation-hardened semiconductor designs, one countermeasure is to use \"depleted boron\", which is greatly enriched in B and contains almost no B. This is useful because B is largely immune to radiation damage. Depleted boron is a byproduct of the nuclear industry.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "244601",
"title": "Effects of nuclear explosions",
"section": "Section::::Indirect effects.:Ionizing radiation.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 403,
"text": "The neutron radiation serves to transmute the surrounding matter, often rendering it radioactive. When added to the dust of radioactive material released by the bomb itself, a large amount of radioactive material is released into the environment. This form of radioactive contamination is known as nuclear fallout and poses the primary risk of exposure to ionizing radiation for a large nuclear weapon.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8842893",
"title": "Nagasaki Atomic Bomb Museum",
"section": "Section::::Maintenance of Exhibits.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 227,
"text": "The museum exhibits objects that were exposed to radiation from the atomic bomb. Though some materials are double-cased, display techniques generally are not tailored in any special way for the preservation of these materials.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "263209",
"title": "Mushroom cloud",
"section": "Section::::Nuclear mushroom clouds.:Cloud composition.\n",
"start_paragraph_id": 44,
"start_character": 0,
"end_paragraph_id": 44,
"end_character": 768,
"text": "The largest, and therefore the most radioactive particles, are deposited by fallout in the first few hours after the blast. Smaller particles are carried to higher altitudes and descend more slowly, reaching ground in a less radioactive state as the isotopes with the shortest half-lives decay the fastest. The smallest particles can reach the stratosphere and stay there for weeks, months, or even years, and cover an entire hemisphere of the planet via atmospheric currents. The higher danger, short-term, localized fallout is deposited primarily downwind from the blast site, in a cigar-shaped area, assuming a wind of constant strength and direction. Crosswinds, changes in wind direction, and precipitation are factors that can greatly alter the fallout pattern.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
tgitg | If we set off a nuke near Jupiters core.... | [
{
"answer": "No. Some fusion might occur but it would not be self-sustaining.",
"provenance": null
},
{
"answer": "Nope. You'd get localized shock waves and some disruption of local conditions, but Jupiter does not have enough mass to begin or sustain fusion at its core.",
"provenance": null
},
{
"answer": "Jupiter needs to be about ten times as heavy to initiate deuterium fusion, and about 80 times as heavy to initiate bona fide hydrogen fusion and become a star. A nuclear bomb would not initiate either.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "474155",
"title": "HMS Jupiter (F85)",
"section": "Section::::Operations.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 386,
"text": "\"Jupiter\" sank the on 17 January 1942. On 27 February 1942 she struck a mine laid earlier in the day by the as she steamed with the American-British-Dutch-Australian Command (ABDA) cruiser force during the Battle of the Java Sea. The destroyer sank off the north Java coast in the Java Sea at 21:16 hours. Initially, the explosion was thought to have been caused by a Japanese torpedo.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48916027",
"title": "2016 in science",
"section": "Section::::Events.:August.\n",
"start_paragraph_id": 217,
"start_character": 0,
"end_paragraph_id": 217,
"end_character": 203,
"text": "BULLET::::- 27 August – NASA's \"Juno\" probe makes a close pass of Jupiter, coming within of the cloud tops – the closest any spacecraft has ever approached the gas giant without entering its atmosphere.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38930",
"title": "Jupiter",
"section": "Section::::Physical characteristics.:Magnetosphere.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 539,
"text": "At about 75 Jupiter radii from the planet, the interaction of the magnetosphere with the solar wind generates a bow shock. Surrounding Jupiter's magnetosphere is a magnetopause, located at the inner edge of a magnetosheath—a region between it and the bow shock. The solar wind interacts with these regions, elongating the magnetosphere on Jupiter's lee side and extending it outward until it nearly reaches the orbit of Saturn. The four largest moons of Jupiter all orbit within the magnetosphere, which protects them from the solar wind.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52758747",
"title": "Waves (Juno)",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 584,
"text": "On June 24, 2016 the Waves instrument recorded \"Juno\" passing across Jupiter's magnetic field's bow shock. It took about two hours for the unmanned spacecraft to cross this region of space. On June 25, 2016 it encountered the magnetopause. \"Juno\" would go on to enter Jupiter's orbit in July 2016. The magnetosphere blocks the charged particles of the solar wind, with the number of solar wind particles \"Juno\" encountered dropping 100-fold when it entered the Jovian magnetosphere. Before \"Juno\" entered it, it was encountering about 16 solar wind particles per cubic inch of space.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14733483",
"title": "The Day the Earth Stood Still (2008 film)",
"section": "Section::::Plot.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 357,
"text": "In the present day, a rapidly moving object is detected beyond Jupiter's orbit and forecast to impact Manhattan. It is moving at 30,000 kilometers per second, enough to destroy all life on Earth. The United States government hastily assembles a group of scientists, including Dr. Helen Benson and her friend Dr. Michael Granier, to develop a survival plan.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23696870",
"title": "2009 Jupiter impact event",
"section": "Section::::Findings.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 314,
"text": "The force of the explosion on Jupiter was thousands of times more powerful than the suspected comet or asteroid that exploded over the Tunguska River Valley in Siberia in June 1908. (This would be approximately 12,500–13,000 Megatons of TNT, over a million times more powerful than the bomb dropped on Hiroshima.)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6794",
"title": "Comet Shoemaker–Levy 9",
"section": "Section::::Jupiter-orbiting comet.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 410,
"text": "More exciting for planetary astronomers was that the best orbital calculations suggested that the comet would pass within of the center of Jupiter, a distance smaller than the planet's radius, meaning that there was an extremely high probability that SL9 would collide with Jupiter in July 1994. Studies suggested that the train of nuclei would plow into Jupiter's atmosphere over a period of about five days.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
65xl5g | why does squirting lemon juice over spicy food make it less spicy? | [
{
"answer": "It does reduce the spice. Spicy chili peppers contain an oil called *capsaicin* which gives the spicy flavor. Lemon juice has acids in it, and the acids neutralize the oils, which reduces the spice. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2050894",
"title": "Astringent",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 637,
"text": "Astringency, the dry, puckering mouthfeel caused by the tannins in unripe fruits, lets the fruit mature by deterring eating. Ripe fruits and fruit parts including blackthorn (sloe berries), \"Aronia\" chokeberry, chokecherry, bird cherry, rhubarb, quince and persimmon fruits, and banana skins are very astringent; citrus fruits, like lemons, are somewhat astringent. Tannins, being a kind of polyphenol, bind salivary proteins and make them precipitate and aggregate, producing a rough, \"sandpapery\", or dry sensation in the mouth. The tannins in some teas and red grape wines like Cabernet Sauvignon and Merlot produce mild astringency.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "147735",
"title": "Toothpaste",
"section": "Section::::Safety.:Miscellaneous issues and debates.:Alteration of taste perception.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 623,
"text": "After using toothpaste, orange juice and other juices have an unpleasant taste. Sodium lauryl sulfate alters taste perception. It can break down phospholipids that inhibit taste receptors for sweetness, giving food a bitter taste. In contrast, apples are known to taste more pleasant after using toothpaste. Distinguishing between the hypotheses that the bitter taste of orange juice results from stannous fluoride or from sodium lauryl sulfate is still an unresolved issue and it is thought that the menthol added for flavor may also take part in the alteration of taste perception when binding to lingual cold receptors.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "61983",
"title": "Tannin",
"section": "Section::::Food items with tannins.:Drinks with tannins.:Fruit juices.\n",
"start_paragraph_id": 67,
"start_character": 0,
"end_paragraph_id": 67,
"end_character": 287,
"text": "Although citrus fruits do not themselves contain tannins, orange-colored juices often contain food dyes with tannins. Apple juice, grape juices and berry juices are all high in tannins. Sometimes tannins are even added to juices and ciders to create a more astringent feel to the taste.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9291334",
"title": "Limonin",
"section": "Section::::Presence in citrus products.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 243,
"text": "Limonin and other limonoid compounds contribute to the bitter taste of some citrus food products. Researchers have proposed removal of limonoids from orange juice and other products (known as \"debittering\") through the use of polymeric films.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "367867",
"title": "Citron",
"section": "Section::::Uses.:Culinary.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 560,
"text": "While the lemon or orange are peeled to consume their pulpy and juicy segments, the citron's pulp is dry, containing a small quantity of insipid juice, if any. The main content of a citron fruit is the thick white rind, which adheres to the segments and cannot be separated from them easily. The citron gets halved and depulped, then its rind (the thicker the better) is cut in pieces, cooked in sugar syrup, and used as a spoon sweet, in Greek known as \"kitro glyko\" (κίτρο γλυκό), or it is diced and caramelized with sugar and used as a confection in cakes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15349424",
"title": "Succade",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 481,
"text": "Succade is the candied peel of any of the citrus species, especially from the citron or \"Citrus medica\" which is distinct with its extra-thick peel; in addition, the taste of the inner rind of the citron is less bitter than those of the other citrus. However, the term is also occasionally applied to the peel, root, or even entire fruit or vegetable like parsley, fennel and cucurbita which have a bitter taste and are boiled with sugar to get a special \"sweet and sour\" outcome.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21299730",
"title": "Lemon",
"section": "Section::::Nutritional value and phytochemicals.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 285,
"text": "Lemons contain numerous phytochemicals, including polyphenols, terpenes, and tannins. Lemon juice contains slightly more citric acid than lime juice (about 47 g/l), nearly twice the citric acid of grapefruit juice, and about five times the amount of citric acid found in orange juice.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9l8qzn | if oil is ancient organic matter, then how is there so much of it? | [
{
"answer": "_URL_0_\nBecause it has had an awfully long time to build up before humans started using it. Or even before we started existing.",
"provenance": null
},
{
"answer": "There was A LOT of ancient life, a number of which grew to massive sizes (many insects being as large as or larger than people) due to the abundance of oxygen in certain ages, which means there was A LOT of organic matter. At least that's my understanding as to why there could be so much oil",
"provenance": null
},
{
"answer": "Hundreds of millions of years of swamps doing swampy things... \n\n...like sucking down carbon from the atmosphere and sinking it in anoxic environments where it turns to kerogen and then to fossil fuels. \n\nThe Carboniferous period predated the Permian Triassic Mass Extinction Event —aka: The Great Dying— by laying down gigatons of Carbon... which turned to coal, oil, and methane... huge volumes of which were burned by the EXTREME volcanism of the Siberian Traps. \n\nLike 96%+ of the tree of life went extinct. \n\n[Burning Fossil Fuels Almost Ended Life on Earth](_URL_0_) ",
"provenance": null
},
{
"answer": "Much of it comes from an epoch called the Carboniferous period. It was a time when plants first took over land and many of them grew to huge sizes. They also existed before the bacteria and fungi that are good at breaking down cellulose and other structural materials of the plants evolved, and so they decayed very slowly when they died. Many were buried in mud and soil before they decayed, which changed how they broke down, turning much of their volume into coal, oil and other fossil fuels.",
"provenance": null
},
{
"answer": "Most sedimentary rocks contain at least a small amount of organic matter that consists of the preserved residue of plant or animal tissue. It's rocks and sediment that contain a larger amount than usual which may go on to become coal or oil, and there's been a lot of time for this to happen. There's some confusion in other answers, so to clarify: **coal is generated from terrestrial organic matter (plants, mostly trees), and oil/gas is generated from marine organic matter (plankton which dies and sinks to the seafloor).** Insects, dinosaurs and any other animals have never been a significant source of organic matter for oil or coal. \n\n\nWhen the tissue of organisms decay, particularly in an oxygen-deficient environment, organic degradation may not be complete; more decay-resistant parts of organic substances such as cellulose, fats, resins, and waxes are not immediately decomposed. If a depositional basin happens to be an oxygen poor environment - such as a restricted basin, stagnant swamp, or bog - or if the supply of organic matter is so great that it simply overwhelms all available oxidants, then decay-resistant organic matter may be preserved long enough to become incorporated into accumulating sediment. \n\n\nContrary to what other answers here might have said, this in itself does not take hundreds of millions of years - it can be quite a rapid process of decades to hundreds of years. Once buried, this is when it may persist for hundreds of millions of years, and given the right temperature and pressure conditions, may transform into a fossil fuel. \n\n\nConsidering that the Earth has had over half a billion years of the sort of life and the range of environments which may produce oil, it's not really a surprise that there's a fair bit of the stuff knocking around, though it's quite an art form to find viable deposits. \n",
"provenance": null
},
{
"answer": "Plankton. There is an insane amount of biomass in plankton. What's really crazy is that it generally takes anoxic conditions and slow to stagnant water to form oils, which means that most of the biomass probably didn't undergo the transformation to oil.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "994887",
"title": "Petroleum industry",
"section": "Section::::History.:Prehistory.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 574,
"text": "Petroleum is a naturally occurring liquid found in rock formations. It consists of a complex mixture of hydrocarbons of various molecular weights, plus other organic compounds. It is generally accepted that oil is formed mostly from the carbon rich remains of ancient plankton after exposure to heat and pressure in Earth's crust over hundreds of millions of years. Over time, the decayed residue was covered by layers of mud and silt, sinking further down into Earth’s crust and preserved there between hot and pressured layers, gradually transforming into oil reservoirs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2016067",
"title": "Petrochemistry",
"section": "Section::::Origin.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 440,
"text": "It may be possible to make petroleum from any kind of organic matter under suitable conditions. The concentration of organic matter is not very high in the original deposits, but petroleum and natural gas evolved in places that favored retention, such as sealed-off porous sandstones. Petroleum, produced over millions of years by natural changes in organic materials, accumulates beneath the earth's surface in extremely large quantities.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23195",
"title": "Petroleum",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 457,
"text": "It consists of naturally occurring hydrocarbons of various molecular weights and may contain miscellaneous organic compounds. The name \"petroleum\" covers both naturally occurring unprocessed crude oil and petroleum products that are made up of refined crude oil. A fossil fuel, petroleum is formed when large quantities of dead organisms, mostly zooplankton and algae, are buried underneath sedimentary rock and subjected to both intense heat and pressure.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23195",
"title": "Petroleum",
"section": "Section::::Formation.\n",
"start_paragraph_id": 66,
"start_character": 0,
"end_paragraph_id": 66,
"end_character": 486,
"text": "Petroleum is a fossil fuel derived from ancient fossilized organic materials, such as zooplankton and algae. Vast amounts of these remains settled to sea or lake bottoms where they were covered in stagnant water (water with no dissolved oxygen) or sediments such as mud and silt faster than they could decompose aerobically. Approximately 1 m below this sediment or water oxygen concentration was low, below 0.1 mg/l, and anoxic conditions existed. Temperatures also remained constant.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48146",
"title": "Fossil fuel",
"section": "Section::::Importance.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 734,
"text": "Heavy crude oil, which is much more viscous than conventional crude oil, and oil sands, where bitumen is found mixed with sand and clay, began to become more important as sources of fossil fuel as of the early 2000s. Oil shale and similar materials are sedimentary rocks containing kerogen, a complex mixture of high-molecular weight organic compounds, which yield synthetic crude oil when heated (pyrolyzed). These materials have yet to be fully exploited commercially. With additional processing, they can be employed in lieu of other already established fossil fuel deposits. More recently, there has been disinvestment from exploitation of such resources due to their high carbon cost, relative to more easily processed reserves.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2910801",
"title": "Petroleum reservoir",
"section": "Section::::Formation.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 250,
"text": "Crude oil is found in all oil reservoirs formed in the Earth's crust from the remains of once-living things. Evidence indicates that millions of years of heat and pressure changed the remains of microscopic plant and animal into oil and natural gas.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4207510",
"title": "Oil",
"section": "Section::::Types.:Mineral oils.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 729,
"text": "Crude oil, or petroleum, and its refined components, collectively termed \"petrochemicals\", are crucial resources in the modern economy. Crude oil originates from ancient fossilized organic materials, such as zooplankton and algae, which geochemical processes convert into oil. The name \"mineral oil\" is a misnomer, in that minerals are not the source of the oil—ancient plants and animals are. Mineral oil is organic. However, it is classified as \"mineral oil\" instead of as \"organic oil\" because its organic origin is remote (and was unknown at the time of its discovery), and because it is obtained in the vicinity of rocks, underground traps, and sands. \"Mineral oil\" also refers to several specific distillates of crude oil.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
m7ga3 | what exactly is management consulting? | [
{
"answer": "In short you're correct: a management consultancy does spend a lot of time investigating a company, then makes a number of recommendations to senior staff on how to improve their business.\n\nNot all management consultants are equal. The senior consultants and partners spend a lot of time with senior members of the company with which they're working, discussing performance and strategies.\n\nMid-level consultants tend to hold interviews with mid-level managers to understand the company and recognise problems. They may also supervise a team of junior consultants.\n\nThe junior consultants spend a lot of time typing up notes, fiddling with PowerPoint slides and ordering dinner because they're having to work very late.\n\nDepending on ability, performance and luck it takes about 5 years and an MBA to move from junior to senior.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "41539611",
"title": "International Council of Management Consulting Institutes",
"section": "",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 500,
"text": "The practice of management consulting is about \"helping organizations to improve their performance, operating primarily through the analysis of existing organizational problems and the development of plans for improvement.\" with the purpose of \"gaining external (and presumably objective) advice and access to the consultants' specialized expertise.\" It follows therefore that there is scope for an international organization to promote and foster competence in the management consulting profession.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "282635",
"title": "Management consulting",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 300,
"text": "Management consulting is the practice of helping organizations to improve their performance. Organizations may draw upon the services of management consultants for a number of reasons, including gaining external (and presumably objective) advice and access to the consultants' specialized expertise.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11932269",
"title": "List of management consulting firms",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 227,
"text": "Management consulting indicates both the industry of, and the practice of, helping organizations improve their performance, primarily through the analysis of existing business problems and development of plans for improvement.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2514855",
"title": "Information technology consulting",
"section": "Section::::Management consulting and IT consulting.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 421,
"text": "There is a relatively unclear line between management consulting and IT consulting. There are sometimes overlaps between the two fields, but IT consultants often have degrees in computer science, electronics, technology, or management information systems while management consultants often have degrees in accounting, economics, Industrial Engineering, finance, or a generalized MBA (Masters in Business Administration).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25802221",
"title": "Consulting psychology",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 548,
"text": "Consulting psychology is a specialty area of psychology that addresses such areas as assessment and interventions at the individual, group, and organizational levels. The \"Handbook of Organizational Consulting Psychology\" provides an overview of specific areas of study and application within the field. The major journal in the field is \"\". Consulting psychologists typically work in business or non-profit organizations, in consulting firms or in private practice. Consulting psychologists are typically professionally licensed as psychologists.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26277853",
"title": "Subfields of psychology",
"section": "Section::::Divisions.:Consulting.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 510,
"text": "Consulting psychology includes the application of psychology to consulting contexts at the individual, group and organizational levels. The field specializes in assessment and intervention, particularly in business and organizational applications but also is concerned with the consulting process used to assess and facilitate change in any area of psychology. Lowman (2002) provides an overview of the field, including the relevance of individual, group and organizational levels to consulting psychologists.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22980370",
"title": "Risk and strategic consulting",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 883,
"text": "In contrast to management consulting, which primarily concerns internal organization and performance, risk and strategic consulting aims to provide clients with an improved understanding of the political and economic climate in which they operate. Most such consultancy is focused on those developing countries and emerging markets in which political and business risks may be greater, harder to manage, or harder to assess. Risk and strategic consulting is sometimes carried out alongside other activities such as corporate investigation, forensic accounting, employee screening or vetting, and the provision of security systems, training or procedures. Some of the largest groups in the industry include Kroll Inc. and Control Risks Group, though the size and range of consultancies varies widely, with groups such as Black Cube and Hakluyt & Company providing boutique services. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1duv6p | Why didn't Israel keep the Sinai peninsular? | [
{
"answer": "Because giving it up was a hugely important bargaining chip for peace with Egypt. No one really regarded it as part of Israel (the way the West Bank is), though there were people there who were less than thrilled about being kicked out. Making it demilitarized allowed for Israel to retain a buffer, while still getting peace with Egypt.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "43209839",
"title": "Israeli Military Governorate",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 337,
"text": "The Egypt–Israel Peace Treaty led Israel to give up the Sinai Peninsula in 1982 and transform the military rule in the Gaza Strip and the West Bank into the Israeli Civil Administration in 1981. The Western part of Golan Heights was unilaterally annexed by Israel the same year, thus abolishing the Military Governorate system entirely.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36644646",
"title": "August 2012 Sinai attack",
"section": "Section::::Reactions.:Israel.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 322,
"text": "Under a peace treaty between Egypt and Israel, the peninsula is supposed to remain demilitarized, but Israel permitted the Egyptians to deploy about seven battalions in the peninsula to enforce control. Israel hopes that in this way, Egypt will be more able to eliminate terrorists that pose a threat to Egypt and Israel.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27644",
"title": "Sinai Peninsula",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 1191,
"text": "The Sinai Peninsula has been a part of Egypt from the First Dynasty of ancient Egypt ( BC). This comes in stark contrast to the region north of it, the Levant (present-day territories of Syria, Lebanon, Jordan, Israel and Palestine), which, due largely to its strategic geopolitical location and cultural convergences, has historically been the center of conflict between Egypt and various states of Mesopotamia and Asia Minor. In periods of foreign occupation, the Sinai was, like the rest of Egypt, also occupied and controlled by foreign empires, in more recent history the Ottoman Empire (1517–1867) and the United Kingdom (1882–1956). Israel invaded and occupied Sinai during the Suez Crisis (known in Egypt as the \"Tripartite Aggression\" due to the simultaneous coordinated attack by the UK, France and Israel) of 1956, and during the Six-Day War of 1967. On 6 October 1973, Egypt launched the Yom Kippur War to retake the peninsula, which was unsuccessful. In 1982, as a result of the Israel–Egypt Peace Treaty of 1979, Israel withdrew from all of the Sinai Peninsula except the contentious territory of Taba, which was returned after a ruling by a commission of arbitration in 1989.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2781576",
"title": "Israeli-occupied territories",
"section": "Section::::Sinai Peninsula.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 1128,
"text": "Israel captured the Sinai Peninsula from Egypt in the 1967 Six-Day War. It established settlements along the Gulf of Aqaba and in the northeast portion, just below the Gaza Strip. It had plans to expand the settlement of Yamit into a city with a population of 200,000, though the actual population of Yamit did not exceed 3,000. The Sinai Peninsula was returned to Egypt in stages beginning in 1979 as part of the Israel–Egypt Peace Treaty. As required by the treaty, Israel evacuated Israeli military installations and civilian settlements prior to the establishment of \"normal and friendly relations\" between it and Egypt. Israel dismantled eighteen settlements, two air force bases, a naval base, and other installations by 1982, including the only oil resources under Israeli control. The evacuation of the civilian population, which took place in 1982, was done forcefully in some instances, such as the evacuation of Yamit. The settlements were demolished, as it was feared that settlers might try to return to their homes after the evacuation. Since 1982, the Sinai Peninsula has not been regarded as occupied territory.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2866845",
"title": "Status of territories occupied by Israel in 1967",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 755,
"text": "The Sinai peninsula status was returned to full sovereignty of Egypt in 1982 as a result of the Egypt–Israel Peace Treaty. The United Nations Security Council and the International Court of Justice both describe the West Bank and Western Golan Heights as \"occupied territory\" under international law, and the Supreme Court of Israel describes it as held \"in belligerent occupation\", however Israel's government calls all of them \"disputed\" rather than \"occupied\". Israel's government also argues that since the Gaza disengagement of 2005, it does not militarily occupy the Gaza strip, a statement rejected by the United Nations Human Rights Council and Human Rights Watch because Israel continues to maintain control of its airspace, waters, and borders.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27644",
"title": "Sinai Peninsula",
"section": "Section::::History.:1979-82 Israeli withdrawal.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 637,
"text": "In 1979, Egypt and Israel signed a peace treaty in which Israel agreed to withdraw from the entirety of the Sinai Peninsula. Israel subsequently withdrew in several stages, ending in 1982. The Israeli pull-out involved dismantling almost all Israeli settlements, including the settlement of Yamit in north-eastern Sinai. The exception was that the coastal city of Sharm el-Sheikh (which the Israelis had founded as Ofira during their occupation of the Sinai Peninsula) was not dismantled. The Treaty allows monitoring of Sinai by the Multinational Force and Observers, and limits the number of Egyptian military forces in the peninsula.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34119606",
"title": "Rotem Crisis",
"section": "Section::::Background.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 826,
"text": "Following Israel's withdrawal from Sinai, captured during the 1956 Suez Crisis, the peninsula remained de facto demilitarized of most Egyptian forces. It was garrisoned by one infantry brigade, elements of several reconnaissance regiments and up to 100 tanks. Although the outcome of the Suez Crisis had been politically positive for Egyptian president Gamal Abdel Nasser, Israel's Military Intelligence Directorate (Aman), as well as military and civilian decision makers, had regarded Israel's military victory in the war as an effective deterrent to future Egyptian designs. In early 1960, the Israeli Ministry of Foreign Affairs, therefore, estimated that Egypt would seek \"to avoid a military confrontation with Israel and keep the United Nations Emergency Force\" (UNEF) installed in the Gaza Strip following the crisis.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ivnze | Reddit science people, my 6 year old would like an answer to a space question. I have no idea what to tell her. | [
{
"answer": "Just some of my random thoughts on it:\n\n If there were water where there is now more or less empty space then all planets, solar systems and galaxies would be connected/accessible to one another instead of isolated systems. Life wouldnt be bound to planets but could live almost anywhere. \n\nThen again a large part of space would be frozen water as well. \n\nTheres also the question of what exactly would happen to stars if they are surrounded by water\n\nAlso, if there had always been water everywhere then that would change the outcome of how everything in the universe formed. There might be no planets or stars, supernovas, comets or anything we know today, but rather something completely different. Either everything is just one big block of ice, or something like a big connected ocean might form. ",
"provenance": null
},
{
"answer": "Your daughter is in good company with that wondering. One of the very first scientists was a Greek guy named Thales, who held that the primary essence of every substance was water.\n\nWhat do you mean by \"what if?\" Do you mean \"If the universe were all water, what would it be like?\" Or do you mean \"How do we know the universe is not made of water?\"",
"provenance": null
},
{
"answer": "The Big Bang Theory supposes that, at some point in the universe's early history, the energy in its creation began condensing into matter, in the form of (among other things) hydrogen. From there, the relatively stable hydrogen gas began coalescing (by gravitation forces) into stars and galaxies. \n\nPresumably, a universe permeated with water would show very similar behavior. Stars would form (with heavier cores due to the oxygen in water's molecular structure) with different lifecycles. In between the stars would be the typical vacuum of space, left from where the gravity swept out all the water form the stars. \n\nIn short, a universe of all water is not a \"steady-state solution.\" (Just like a vertical, single-file column of ping-pong balls on a table is not a \"steady-state.\") As time progresses, gravity will rearrange things. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "57394047",
"title": "Carmen Victoria Felix Chaidez",
"section": "Section::::Formation.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 623,
"text": "Less than two weeks after the trip, she applied to study communication and electronic engineering at Monterrey Tech, graduating in 2003. Unfortunately, she was the only one at the school at the time interested in space. Despite this, she attended space conferences and other events such as the National Astronomy Congress, which allowed her to meet people in her future field. She was also invited to join a project to conserve Chipinque Park in Monterrey, which has an observatory. It had been out of use for some time and she worked to reactivate it. She then gave classes and workshops on astronomy to student visitors.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36539671",
"title": "Tam O'Shaughnessy",
"section": "Section::::Science educator.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 873,
"text": "O’Shaughnessy has extensive experience cultivating girls’ and boys’ interest in reading, math, and science. Besides being a former science teacher, she is an award-winning writer of science books for children. O’Shaughnessy has written 12 children's science books, including six with Sally Ride, the first American woman in space. Ride and O’Shaughnessy's clear and eloquent writing style earned them many accolades, including the American Institute of Physics Children's Science Writing Award in 1995 for their second book, \"The Third Planet: Exploring the Earth From Space\". In October 2015, O’Shaughnessy published a children's biography of Ride, \"Sally Ride: A Photobiography of America's Pioneering Woman in Space\". The book combines reminiscences from Ride's family and friends with dozens of photos, including many never-before-published family and personal photos.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40070261",
"title": "NASA's Space Place",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 351,
"text": "NASA Space Place is an award-winning educational website about space and Earth science targeting upper-elementary aged children. Launched in 1998, it was the first NASA website to create content about multiple missions directly for children. It has its own url, and it also serves as the kids’ portion of the NASA Science Mission Directorate website.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31186087",
"title": "Lucie Green",
"section": "Section::::Personal life.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 445,
"text": "When asked where her love of space science came from, Green has said: \"As a child, I remember hearing my parents say that they thought I was going to be an astrophysicist when I grew up. Not actually knowing what an astro-thinga-me-wotsit was, I agreed with them because I thought it sounded impressive. Really at that time I wanted to look after animals. People used to bring me injured birds and I would stay up all night feeding them worms!\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57881338",
"title": "Mike Galsworthy",
"section": "Section::::Career.:Scientists for EU.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 255,
"text": "On 20 March 2019 Galsworthy, amongst others, publicized a petition to revoke Article 50. In a televised interview the following day, with the petition having received 750,000 signatures, he expressed his view that Article 50 should be \"nuked from space\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1501354",
"title": "Memoirs of a Spacewoman",
"section": "Section::::Contents.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 292,
"text": "The Spacewoman in question is a scientist and explorer. The book is set many centuries in the future, though no dates are given. Humans have explored many worlds in a number of different galaxies. Their quest is for knowledge and to be helpful: there is a strict rule against 'interference'.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7078332",
"title": "PJ Haarsma",
"section": "Section::::Promotion of literacy.:School visits.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 446,
"text": "Haarsma takes part in school visits to promote his book and encourage imagination and reading in the school children. His presentation lasts fifty minutes, and discussions center around space travel, exploration, \"The Rings of Orbis\" universe, and other interactive topics, thus allowing for questions from the students at the conclusion. To help illustrate the scientific topics, NASA supplied Haarsma with space related information to present.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4cbko4 | how do car dealerships work with the car companies and how do they make their profits? | [
{
"answer": "Car dealerships are independent businesses who have franchise agreements with the various car makers they sell. They purchase the vehicles through the manufacturers at the invoice price, but there are often other mechanisms for the dealers to make money from the sale, such as manufacturer holdbacks, quota bonuses, and other incentives. So even if a dealer sells a car \"at invoice\" they may still be making 3-5% of the cost in profit. Obviously then, if they sell for above invoice, they make more profit. Then there are the additional revenue streams that the finance manager tries to sell you on, like extended warranties, wheel protection, etc. that are all high margin products (same reason Best Buy always tries to sell you the extended warranty). And then there is the financing, with bounty going to the dealer when they go through the auto makers' credit arm. But most dealers actually make the most of the profit form the service part of the dealership, whether repairs covered under warranty that the car maker reimburses for, or repairs/service that are paid directly by the customer.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2562877",
"title": "Car dealership",
"section": "Section::::History of car dealerships in the United States.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 679,
"text": "Car dealerships are usually franchised to sell and service vehicles by specific companies. They are often located on properties offering enough room to have buildings housing a showroom, mechanical service, and body repair facilities, as well as to provide storage for used and new vehicles. Many dealerships are located out of town or on the edge of town centers. An example of a traditional single proprietorship car dealership is Collier Motors in North Carolina. Many modern dealerships are now part of corporate-owned chains such as AutoNation with over 300 franchises. Dealership profits in the US mainly come from servicing, some from used cars, and little from new cars.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2562877",
"title": "Car dealership",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 408,
"text": "A car dealership or vehicle local distribution is a business that sells new or used cars at the retail level, based on a dealership contract with an automaker or its sales subsidiary. It employs automobile salespeople to sell their automotive vehicles. It may also provide maintenance services for cars, and employ automotive technicians to stock and sell spare automobile parts and process warranty claims.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "594952",
"title": "Car dealerships in North America",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 461,
"text": "In the United States and Canada, a car dealer may specialize in used vehicles, or be a franchised dealership, which is a retailer that sells new and sometimes used cars. In most cases Franchised Dealerships include certified preowned vehicles, employ trained automotive technicians, and offer financing. In the United States, direct manufacturer auto sales are prohibited in almost every state by franchise laws requiring that new cars be sold only by dealers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "594952",
"title": "Car dealerships in North America",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 270,
"text": "New car dealerships also sell used cars, and take in trade-ins and/or purchase used vehicles at auction. Most dealerships also provide a series of additional services for car buyers and owners, which are sometimes more profitable than the core business of selling cars.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "594952",
"title": "Car dealerships in North America",
"section": "Section::::Additional services.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 364,
"text": "Car dealers also provide maintenance and in some cases, repair service for cars. New car dealerships are more likely to provide these services, since they usually stock and sell parts and process warranty claims for the manufacturers they represent. Maintenance is typically a high-margin service and represents a significant profit center for automotive dealers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "594952",
"title": "Car dealerships in North America",
"section": "Section::::Additional services.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 783,
"text": "Most dealers utilize indirect lenders. This means that the installment loan contracts are immediately \"assigned\" or \"resold\" to third-party finance companies, often an offshoot of the car's manufacturer such as GM Financial, Ally Financial, or banks, which pay the dealer and then recover the balance by collecting the monthly installment payments promised by the buyer. To facilitate such assignments, dealers generally use one of several standard form contracts preapproved by lenders. The most popular family of contracts for the retail installment sale of vehicles in the U.S. are sold by business process vendor Reynolds and Reynolds; their contracts have been the subject of extensive (and frequently hostile) judicial interpretation in lawsuits between dealers and customers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "594952",
"title": "Car dealerships in North America",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 384,
"text": "Used car dealers carry cars from many different manufacturers, while new car dealerships are generally franchises associated with only one manufacturer. Some new car dealerships may carry multiple brands from the same manufacturer. In some locales, dealerships have been consolidated and a single owner may control a chain of dealerships representing several different manufacturers.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6a44fn | Is there any proof that Mesopotamia and Egypt had contact with each other and if they did, what was their relationship like? | [
{
"answer": "What timeframe are you refering to?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "60383627",
"title": "Egypt-Mesopotamia relations",
"section": "Section::::Influences on Egyptian trade and art (3500-3200 BCE).:Transmission.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 550,
"text": "The intensity of the exchanges suggest however that the contacts between Egypt and Mesopotamia were often direct, rather than merely through middlemen or through trade. Uruk had known colonial outposts of as far as Habuba Kabira, in modern Syria, insuring they presence in the Levant. Numerous Uruk cylinder seals have also been uncovered there. There were suggestions that Uruk may have had an outpost and a form of colonial presence in northern Egypt. The site of Buto in particular was suggested, but it has been rejected as a possible candidate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60335329",
"title": "Indus-Mesopotamia relations",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 391,
"text": "Indus-Mesopotamia relations are thought to have developed during the second half of 3rd millennium BCE, until they came to a halt with the extinction of the Indus valley civilization after around 1900 BCE. Mesopotamia had already been an intermediary in the trade of Lapis Lazuli between the South Asia and Egypt since at least about 3200 BCE, in the context of Egypt-Mesopotamia relations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5823212",
"title": "Late Bronze Age collapse",
"section": "Section::::Regional evidence.:Evidence of destruction.:Syria.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 556,
"text": "Levantine sites previously showed evidence of trade links with Mesopotamia (Sumer, Akkad, Assyria and Babylonia), Anatolia (Hattia, Hurria, Luwia and later the Hittites), Egypt and the Aegean in the Late Bronze Age. Evidence at Ugarit shows that the destruction there occurred after the reign of Merneptah (ruled 1213–1203 BC) and even the fall of Chancellor Bay (died 1192 BC). The last Bronze Age king of the Semitic state of Ugarit, Ammurapi, was a contemporary of the last known Hittite king, Suppiluliuma II. The exact dates of his reign are unknown.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "955784",
"title": "Meluhha",
"section": "Section::::Indian subcontinent versus Africa.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 384,
"text": "There is sufficient archaeological evidence for the trade between Mesopotamia and the Indian subcontinent. Impressions of clay seals from the Indus Valley city of Harappa were evidently used to seal bundles of merchandise, as clay seal impressions with cord or sack marks on the reverse side testify. A number of these Indian seals have been found at Ur and other Mesopotamian sites.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1587039",
"title": "Userkaf",
"section": "Section::::Reign.:Trade and military activities.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 597,
"text": "Userkaf's reign might have witnessed a revival of trade between Egypt and its Aegean neighbors as shown by a series of reliefs from his mortuary temple representing ships engaged in what may be a naval expedition. Further evidence for such contacts is a stone vessel bearing the name of his sun temple that was uncovered on the Greek island of Kythira. This vase is the earliest evidence of commercial contacts between Egypt and the Aegean world. Finds in Anatolia, dating to the reigns of Menkauhor Kaiu and Djedkare Isesi, demonstrate that these contacts continued throughout the Fifth Dynasty.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1560464",
"title": "Uruk period",
"section": "Section::::Dating and periodization.:Neighbouring regions.:Egypt.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 431,
"text": "Egypt-Mesopotamia relations seem to have developed from the 4th millennium BCE, starting in the Uruk period for Mesopotamia and in the pre-literate Gerzean culture for Prehistoric Egypt (circa 3500-3200 BCE). Influences can be seen in the visual arts of Egypt, in imported products, and also in the possible transfer of writing from Mesopotamia to Egypt, and generated \"deep-seated\" parallels in the early stages of both cultures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60383627",
"title": "Egypt-Mesopotamia relations",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 427,
"text": "Egypt-Mesopotamia relations seem to have developed from the 4th millennium BCE, starting in the Uruk period for Mesopotamia and the Gerzean culture of pre-literate Prehistoric Egypt (circa 3500-3200 BCE). Influences can be seen in the visual arts of Egypt, in imported products, and also in the possible transfer of writing from Mesopotamia to Egypt, and generated \"deep-seated\" parallels in the early stages of both cultures.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9kumez | How much cytoplasm does the average animal cell contain? | [
{
"answer": "About 100-1000 femtoliters.\n\n & #x200B;\n\nBut it's a pretty hard thing to answer. The cell with the smallest volume that I know of is the sperm cell, with about 20 femtoliters (fL). The most numerous cell in your body is the red blood cell, and it has a cytoplasmic volume of about 100 fL. But a fibroblast has a volume of 1000 fL, a fat cell has a volume of about 100,000 fL, and a egg cell (oocyte) has a volume of about 1,000,000 fL.\n\n & #x200B;\n\nSo how small of a volume is 100 femtoliters? Well, in 100 femtoliters, there is only a trillion water molecules. I know a trillion is a big number, but the fact that we even have a common word for the number of molecules in that volume tells you it's pretty small. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "6781",
"title": "Cytosol",
"section": "Section::::Properties and composition.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 848,
"text": "The proportion of cell volume that is cytosol varies: for example while this compartment forms the bulk of cell structure in bacteria, in plant cells the main compartment is the large central vacuole. The cytosol consists mostly of water, dissolved ions, small molecules, and large water-soluble molecules (such as proteins). The majority of these non-protein molecules have a molecular mass of less than 300 Da. This mixture of small molecules is extraordinarily complex, as the variety of molecules that are involved in metabolism (the metabolites) is immense. For example, up to 200,000 different small molecules might be made in plants, although not all these will be present in the same species, or in a single cell. Estimates of the number of metabolites in single cells such as \"E. coli\" and baker's yeast predict that under 1,000 are made.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42445",
"title": "Atomic mass unit",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 374,
"text": "The molecular masses of proteins, nucleic acids, and other large polymers are often expressed with the units kilodaltons (kDa), megadaltons (MDa), etc. Titin, one of the largest known proteins, has a molecular mass of between 3 and 3.7 megadaltons. The DNA of chromosome 1 in the human genome has about 249 million base pairs, each with an average mass of about , or total.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2354482",
"title": "C-value",
"section": "Section::::Variation among species.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 1343,
"text": "C-values vary enormously among species. In animals they range more than 3,300-fold, and in land plants they differ by a factor of about 1,000. Protist genomes have been reported to vary more than 300,000-fold in size, but the high end of this range (\"Amoeba\") has been called into question. Variation in C-values bears no relationship to the complexity of the organism or the number of genes contained in its genome; for example, some single-celled protists have genomes much larger than that of humans. This observation was deemed counterintuitive before the discovery of non-coding DNA. It became known as the C-value paradox as a result. However, although there is no longer any paradoxical aspect to the discrepancy between C-value and gene number, this term remains in common usage. For reasons of conceptual clarification, the various puzzles that remain with regard to genome size variation instead have been suggested to more accurately comprise a complex but clearly defined puzzle known as the C-value enigma. C-values correlate with a range of features at the cell and organism levels, including cell size, cell division rate, and, depending on the taxon, body size, metabolic rate, developmental rate, organ complexity, geographical distribution, or extinction risk (for recent reviews, see Bennett and Leitch 2005; Gregory 2005).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1479978",
"title": "Phytic acid",
"section": "Section::::Biological and physiological roles.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 246,
"text": "In animal cells, myoinositol polyphosphates are ubiquitous, and phytic acid (myoinositol hexakisphosphate) is the most abundant, with its concentration ranging from 10 to 100 µM in mammalian cells, depending on cell type and developmental stage.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "564779",
"title": "Cell growth",
"section": "Section::::Cell size.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 606,
"text": "Cell size is highly variable among organisms, with some algae such as \"Caulerpa taxifolia\" being a single cell several meters in length. Plant cells are much larger than animal cells, and protists such as \"Paramecium\" can be 330 μm long, while a typical human cell might be 10 μm. How these cells \"decide\" how big they should be before dividing is an open question. Chemical gradients are known to be partly responsible, and it is hypothesized that mechanical stress detection by cytoskeletal structures is involved. Work on the topic generally requires an organism whose cell cycle is well-characterized.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6781",
"title": "Cytosol",
"section": "Section::::Properties and composition.:Macromolecules.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 443,
"text": "In prokaryotes the cytosol contains the cell's genome, within a structure known as a nucleoid. This is an irregular mass of DNA and associated proteins that control the transcription and replication of the bacterial chromosome and plasmids. In eukaryotes the genome is held within the cell nucleus, which is separated from the cytosol by nuclear pores that block the free diffusion of any molecule larger than about 10 nanometres in diameter.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24536543",
"title": "Eukaryote",
"section": "Section::::Cell features.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 531,
"text": "Eukaryotic cells are typically much larger than those of prokaryotes having a volume of around 10,000 times greater than the prokaryotic cell. They have a variety of internal membrane-bound structures, called organelles, and a cytoskeleton composed of microtubules, microfilaments, and intermediate filaments, which play an important role in defining the cell's organization and shape. Eukaryotic DNA is divided into several linear bundles called chromosomes, which are separated by a microtubular spindle during nuclear division.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
20nrez | why does it take so long for employers to reach hiring decisions? | [
{
"answer": "They're interviewing a bunch of other candidates to see who's best.",
"provenance": null
},
{
"answer": "Hiring an employee is a big investment. If there are lots of good options, then you want to make sure you're making the right one.",
"provenance": null
},
{
"answer": "1) We have to go through whatever resumes we collect during that time period, and select from those which candidates we think are worth interviewing. \n2) We have to schedule and conduct interviews of all of those candidates. \n3) We have to do decide which candidates we might want to hire. \n4) We have to do background checks on those candidates. Some places also do a credit check (which I understand but disagree with). \n5) We then extend offers. \n6) Depending on how 4 & 5 go, we might have to do another round of interviews.",
"provenance": null
},
{
"answer": "Are you asking why it takes so long for them to get back to candidates? Keep in mind that they often won't tell candidates who did not get the job until they have definitely filled the role. So they may offer it to one person who takes a week to respond and then decides to decline or wants too much money. So they offer it to the next person and that person takes some time to decide. Keep in mind that the top candidates may have other offers so things take time even after they have finished interviewing.",
"provenance": null
},
{
"answer": "There are a lot of good points here. But I'm guessing it's because time moves slower for a prospective employee waiting to hear from an employer. ",
"provenance": null
},
{
"answer": "I understand the criminal background checks. But why would an employer need to do a credit check?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2811532",
"title": "Employment discrimination",
"section": "Section::::Neoclassical explanations.:Statistical discrimination.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 547,
"text": "Edmund Phelps [1972] introduced the assumption of uncertainty in hiring decisions. When employers make a hiring decision, although they can scrutinize the qualifications of the applicants, they cannot know for sure which applicant would perform better or would be more stable. Thus, they are more likely to hire the male applicants over the females, if they believe on \"average\" men are more productive and more stable. This general view affects the decision of the employer about the individual on the basis of information on the group averages.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46285600",
"title": "Shift-based hiring",
"section": "Section::::Advantages.:Advantages to employers.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 497,
"text": "BULLET::::- \"Try-before-you-buy\" - In economics, potential employees send signals to the employers through their resume and interviews in an attempt to impress upon the interviewer to land the job. Shift based hiring allows employers to further \"test the waters\" on the workers’ suitability for the job by only committing to hiring the employee for one single shift, allowing employers to better select suitable employees to work for the company, before accepting these employees for more shifts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36176805",
"title": "Skills-Based Hiring",
"section": "Section::::Purpose.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 747,
"text": "The intent of skills-based hiring is for the applicant to demonstrate, independent of an academic degree, that he or she has the skills required to be successful on the job. It is also a mechanism by which employers may clearly and publicly advertise the expectations for the job – for example indicating they are looking for a particular set of skills at an appropriately communicated level of proficiency. The result of matching the specific skill requirements of a particular job to with the skills an individual has is both more efficient for the employer to identify qualified candidates, as well as provides an alternative, more precise method for candidates to communicate their knowledge, skills, abilities and behaviors to the employer .\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37372627",
"title": "Person–environment fit",
"section": "Section::::Antecedents.:Attraction–selection–attrition processes.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 635,
"text": "Lastly, the research suggests that for a better fit between an employee and a job, organization, or group to be more probable, it is important to spend an adequate amount of time with the applicant. This is because spending time with members before they enter the firm has been found to be positively associated with the alignment between individual values and firm values at entry (Chatman, 1991). Furthermore, if there are more extensive HR practices in place in the selection phase of hiring, then people are more likely to report that they experience better fits with their job and the organization as a whole (Boon et al., 2011).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "228062",
"title": "Performance appraisal",
"section": "Section::::Legal implications.\n",
"start_paragraph_id": 80,
"start_character": 0,
"end_paragraph_id": 80,
"end_character": 587,
"text": "The Employment Opportunity Commission (EEOC) guidelines apply to any selection procedure that is used for making employment decisions, not only for hiring, but also for promotion, demotion, transfer, layoff, discharge, or early retirement. Therefore, employment appraisal procedures must be validated like tests or any other selection device. Employers who base their personnel decisions on the results of a well-designed performance review program that includes formal appraisal interviews are much more likely to be successful in defending themselves against claims of discrimination.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46285600",
"title": "Shift-based hiring",
"section": "Section::::Advantages.:Advantages to employees.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 321,
"text": "BULLET::::- \"Try-before-you-buy\" - Similarly for the employees, shift based hiring allows them to gain a deeper understanding on how working in the business is like by committing to a shift. They can then make a more informed decision on whether or not to continue working more shifts on a longer basis with the company.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28929483",
"title": "Cleveland Sight Center",
"section": "Section::::Employment services.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 775,
"text": "After determining what career interests clients have and/or are best suited for via its comprehensive vocational evaluation system, staff provide clients with training in various areas of job readiness, from learning to fill out applications and develop their resumes to practicing job interviews and learning about employer expectations. Through networking and partnerships with various organizations in northeast Ohio, including Progressive Field, the Great Lakes Science Center, and the Rock and Roll Hall of Fame, employment services helps connect clients with employers and secure work. Once a client finds permanent work, he/she is monitored for 90 days during which employment services determines what accommodations the client needs to perform at optimal efficiency.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4j65pf | why is the senate investigating claims that facebook censors conservative news when facebook is a private entity/platform? | [
{
"answer": "Because the senate doesn't give a shit about actual government duties and only cares about their own partisan political ideologies and abusing their powers as much as possible in order to advance those particular political ideologies.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2327581",
"title": "Media bias in the United States",
"section": "Section::::Liberal bias.:Shadow Banning.\n",
"start_paragraph_id": 89,
"start_character": 0,
"end_paragraph_id": 89,
"end_character": 746,
"text": "Claims of shadow banning of conservative social media accounts began in 2016 with Facebook’s “Trending News” controversy. Conservative news sites lashed out at Facebook after a report from an unnamed Facebook employee on May 7 alleged that contractors for the social media giant were told to minimize links to their sites in its \"trending news\" column. Alex Breitbart, former editor-in-chief of Breitbart News, claimed that “Facebook trending news artificially mutes conservatives and amplifies progressives.” Facebook’s response included a statement that they “do not permit the suppression of political perspectives” and that its trending news articles are selected by algorithms to prevent human bias from violating its policy of neutrality. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12878216",
"title": "Criticism of Facebook",
"section": "Section::::Censorship.:Censorship of conservative news.\n",
"start_paragraph_id": 233,
"start_character": 0,
"end_paragraph_id": 233,
"end_character": 278,
"text": "As a result of perception that conservatives are not treated neutrally on Facebook alternative social media platforms have been established. This perception has led to a reduction of trust in Facebook, and reduction of usage by those who consider themselves to be conservative.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2337119",
"title": "Kevin McCarthy (California politician)",
"section": "Section::::Political positions.:Social media censorship.\n",
"start_paragraph_id": 89,
"start_character": 0,
"end_paragraph_id": 89,
"end_character": 825,
"text": "McCarthy is confident that social media platforms, such as Twitter, are actively censoring conservative politicians and their supporters. He called on Twitter CEO Jack Dorsey to testify before congress on the matter. On August 17, 2018, McCarthy submitted a tweet to suggest that conservatives were being censored by showing a screen capture of conservative commentator Laura Ingraham's Twitter account with a sensitive content warning on one of her tweets. This warning was due to McCarthy's own Twitter settings rather than any censorship from the platform. He refused to acknowledge this fact. McCarthy also suggested that Google was biased against Republicans due to some of its short-lived vandalism of the English Wikipedia entry on the California Republican Party that was automatically indexed in the search results.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1003249",
"title": "John Thune",
"section": "Section::::Political positions.:Facebook.\n",
"start_paragraph_id": 68,
"start_character": 0,
"end_paragraph_id": 68,
"end_character": 442,
"text": "Some commentators criticized Thune's letter as an example of government overreach against a private company. Facebook denied the bias allegations. Thune thanked Facebook in a statement saying, \"Private companies are fully entitled to espouse their own views, so I appreciate Facebook's efforts to address allegations of bias raised in the media and my concern about a lack of transparency in its methodology for determining trending topics.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11135417",
"title": "Corporate censorship",
"section": "Section::::Notable Cases.:Facebook.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 321,
"text": "In 2018, Facebook removed hundreds of pages related to U.S. politics on grounds of \"inauthentic activity\" one month before the midterm elections. Facebook representatives claimed that the posts and user accounts were deleted not because of the content of the posts, but because they violated Facebook's terms of service.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57169873",
"title": "Privacy and the US government",
"section": "Section::::Privacy and the Executive Branch.:Concerns and Controversies.\n",
"start_paragraph_id": 75,
"start_character": 0,
"end_paragraph_id": 75,
"end_character": 342,
"text": "Although not directly performed during prescribed time in the presidential office, the use of personal private information for targeted politics on social media platforms has caused concerns regarding consumer privacy. In 2018, Facebook was accused of interfering in the 2016 election, potentially leading to events that altered the outcome.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "54975568",
"title": "Occupy Democrats",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 538,
"text": "In a 2017 feature on partisan news, BuzzFeed News analyzed weekly Facebook engagements \"since the beginning of 2015 and found that Occupy Democrats on the left and Fox News on the right are the top pages in each political category.\" The article added that the pages \"consistently generate more total engagement than the pages of major media outlets.\" The organization received wide attention during the 2016 presidential primaries of the Democratic Party, and was credited for having helped \"build support\" for Bernie Sanders' candidacy.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2ym6cl | How do Historians typically calculate an "exact" date? | [
{
"answer": "hi! hopefully some of the historians in antiquities will drop by with more info, but you may be interested in a few related posts\n\n\n* [How do we know what years certain pre-gregorian historical events happened in?](_URL_4_)\n\n* [How certain are we of what year it is? Were there every any disagreements, like during the Dark Ages or afterwards, of the exact year?](_URL_2_)\n\n* [If an event is recorded to have occurred on a particular date, and I ask you to say with 100% confidence how many days have elapsed since that event, what is the oldest era for which you can do this?](_URL_3_)\n\n* [What is the earliest recorded date that we can determine accurately?](_URL_5_)\n\n* [What is the earliest reliable documented event in human history?](_URL_0_)\n\n* [How do historians work with dates from different calendars? Do you have some kind of unified calendar?](_URL_1_)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "7123",
"title": "Calendar date",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 517,
"text": "A calendar date is a reference to a particular day represented within a calendar system. The calendar date allows the specific day to be identified. The number of days between two dates may be calculated. For example, \"24 2020\" is ten days after \"14 2020\" in the Gregorian calendar. The date of a particular event depends on the observed time zone. For example, the air attack on Pearl Harbor that began at 7:48 a.m. Hawaiian time on 7 December 1941 took place at 3:18 a.m. Japan Standard Time, 8 December in Japan. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1844600",
"title": "Gethen",
"section": "Section::::Calendar and timekeeping.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 367,
"text": "A very curious concept of dating is employed in Gethen, though this is only explained briefly in the book: the years are not numbered sequentially in increasing order, but the current year is always referred to as \"Year One\", and the others are counted as years before or after this standpoint. Historical records employ well-known events to mark (fixed) past dates.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "147476",
"title": "Maya calendar",
"section": "Section::::Long Count.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 778,
"text": "The Long Count calendar identifies a date by counting the number of days from the Mayan creation date 4 Ahaw, 8 Kumkʼu (August 11, 3114 BC in the proleptic Gregorian calendar or September 6 in the Julian calendar -3113 astronomical dating). But instead of using a base-10 (decimal) scheme like Western numbering, the Long Count days were tallied in a modified base-20 scheme. Thus 0.0.0.1.5 is equal to 25 and 0.0.0.2.0 is equal to 40. As the winal unit resets after only counting to 18, the Long Count consistently uses base-20 only if the tun is considered the primary unit of measurement, not the kʼin; with the kʼin and winal units being the number of days in the tun. The Long Count 0.0.1.0.0 represents 360 days, rather than the 400 in a purely base-20 (vigesimal) count.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "652165",
"title": "Mesoamerican calendars",
"section": "Section::::Long Count.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 387,
"text": "The Long Count calendar identifies a date by counting the number of days from August 11, 3114 BCE in the proleptic Gregorian calendar or September 6 3114 BCE in the Julian Calendar (-3113 astronomical). Rather than using a base-10 scheme, like Western numbering, the Long Count days were tallied in a modified base-20 scheme. Thus 0.0.0.1.5 is equal to 25, and 0.0.0.2.0 is equal to 40.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "442948",
"title": "Calendar era",
"section": "Section::::Ancient dating systems.:Maya.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 603,
"text": "A different form of calendar was used to track longer periods of time, and for the inscription of calendar dates (i.e., identifying when one event occurred in relation to others). This form, known as the Long Count, is based upon the number of elapsed days since a mythological starting-point. According to the calibration between the Long Count and Western calendars accepted by the great majority of Maya researchers (known as the GMT correlation), this starting-point is equivalent to August 11, 3114 BC in the proleptic Gregorian calendar or 6 September in the Julian calendar (−3113 astronomical).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30445297",
"title": "Date and time notation in the United Kingdom",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 295,
"text": "Date and time notation in the United Kingdom records the date using the day-month-year format (21 October 2011 or 21/10/11). The ISO 8601 format (2011-10-21) is increasingly used for all-numeric dates. The time can be written using either the 24-hour clock (16:10) or 12-hour clock (4.10 p.m.).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15651",
"title": "Julian calendar",
"section": "Section::::Replacement by the Gregorian calendar.\n",
"start_paragraph_id": 100,
"start_character": 0,
"end_paragraph_id": 100,
"end_character": 390,
"text": "During the changeover between calendars and for some time afterwards, dual dating was used in documents and gave the date according to both systems. In contemporary as well as modern texts that describe events during the period of change, it is customary to clarify to which calendar a given date refers by using an O.S. or N.S. suffix (denoting Old Style, Julian or New Style, Gregorian).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
8u33jv | With the Norman Conquest of England in 1066, the English language began rapidly changing. What other long-term cultural changes did this event bring about within England? | [
{
"answer": "Hi there, I essentially answered a similar question to yours [here](_URL_1_), and linked to an earlier answer on some more of the legal changes [here](_URL_0_). The legal changes in particular would have had a genuine impact on the day-to-day life of the English people, especially as the legal system turned heavily from restorative to punative justice.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "376974",
"title": "France in the Middle Ages",
"section": "Section::::Languages and literacy.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 323,
"text": "After the conquest of England in 1066, the Normans's language developed into Anglo-Norman. Anglo-Norman served as the language of the ruling classes and commerce in England from the time of the conquest until the Hundred Years' War, by which time the use of French-influenced English had spread throughout English society.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4478036",
"title": "History of French",
"section": "Section::::External history.:Langue d'oïl.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 323,
"text": "After the conquest of England in 1066, the Normans's language developed into Anglo-Norman. Anglo-Norman served as the language of the ruling classes and commerce in England from the time of the conquest until the Hundred Years' War, by which time the use of French-influenced English had spread throughout English society.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50711",
"title": "Middle English",
"section": "Section::::History.:Transition from Old English.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 899,
"text": "The Norman conquest of England in 1066 saw the replacement of the top levels of the English-speaking political and ecclesiastical hierarchies by Norman rulers who spoke a dialect of Old French known as Old Norman, which developed in England into Anglo-Norman. The use of Norman as the preferred language of literature and polite discourse fundamentally altered the role of Old English in education and administration, even though many Normans of this period were illiterate and depended on the clergy for written communication and record-keeping. A significant number of words of French origin began to appear in the English language alongside native English words of similar meaning, giving rise to such Modern English synonyms as \"pig/pork, chicken/poultry, calf/veal, cow/beef, sheep/mutton, wood/forest, house/mansion, worthy/valuable, bold/courageous, freedom/liberty, sight/vision, eat/dine\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1130298",
"title": "English society",
"section": "Section::::Late medieval society.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 873,
"text": "After the Norman conquest of England in 1066, society seemed fixed and unchanging for several centuries, but gradual and significant changes were still taking place, the exact nature of which would not be appreciated until much later. The Norman lords spoke Norman French, and in order to work for them or gain advantage, the English had to use the Anglo-Norman language that developed in England. This became a necessary administrative and literary language (see Anglo-Norman literature), but despite this the English language was not supplanted, and after gaining much in grammar and vocabulary began in turn to replace the language of the rulers. At the same time the population of England more than doubled between Domesday and the end of the 13th century, and this growth was not checked by the almost continual foreign warfare, crusades and occasional civil anarchy.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13485",
"title": "History of England",
"section": "Section::::Norman England.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 678,
"text": "The Norman Conquest led to a profound change in the history of the English state. William ordered the compilation of the Domesday Book, a survey of the entire population and their lands and property for tax purposes, which reveals that within 20 years of the conquest the English ruling class had been almost entirely dispossessed and replaced by Norman landholders, who monopolised all senior positions in the government and the Church. William and his nobles spoke and conducted court in Norman French, in both Normandy and England. The use of the Anglo-Norman language by the aristocracy endured for centuries and left an indelible mark in the development of modern English.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1240344",
"title": "Latin influence in English",
"section": "Section::::Middle Ages.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 899,
"text": "The Norman Conquest of 1066 gave England a two-tiered society with an aristocracy which spoke Anglo-Norman and a lower class which spoke English. From 1066 until Henry IV of England ascended the throne in 1399, the royal court of England spoke a Norman language that became progressively Gallicised through contact with French. However, the Norman rulers made no attempt to suppress the English language, apart from not using it at all in their court. In 1204, the Anglo-Normans lost their continental territories in Normandy and became wholly English. By the time Middle English arose as the dominant language in the late 14th century, the Normans (French people) had contributed roughly 10,000 words to English of which 75% remain in use today. Continued use of Latin by the Church and centres of learning brought a steady, though dramatically reduced, influx of new Latin lexical borrowings.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1120028",
"title": "English language in Europe",
"section": "Section::::History of English.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 398,
"text": "For 300 years following the Norman Conquest in 1066, the Anglo-Norman language was the language of administration and few Kings of England spoke English. A large number of French words were assimilated into Old English, which lost most of its inflections, the result being Middle English. Around the year 1500, the Great Vowel Shift marked the transformation of Middle English into Modern English.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
rlbxz | Biologically, how does pedophilia even make sense? | [
{
"answer": "Although what causes pedophilia is not yet known, beginning in 2002, researchers began reporting a series of findings linking pedophilia with brain structure and function: Pedophilic (and hebephilic) men have lower IQs, poorer scores on memory tests, greater rates of non-right-handedness, greater rates of school grade failure over and above the IQ differences, lesser physical height, greater probability of having suffered childhood head injuries resulting in unconsciousness, and several differences in MRI-detected brain structures. They report that their findings suggest that there are one or more neurological characteristics present at birth that cause or increase the likelihood of being pedophilic. Evidence of familial transmittability \"suggests, but does not prove that genetic factors are responsible\" for the development of pedophilia.\n\nAnother study, using structural MRI, shows that male pedophiles have a lower volume of white matter than a control group.\n\nFunctional magnetic resonance imaging (fMRI) has shown that child molesters diagnosed with pedophilia have reduced activation of the hypothalamus as compared with non-pedophilic persons when viewing sexually arousing pictures of adults. A 2008 functional neuroimaging study notes that central processing of sexual stimuli in heterosexual \"paedophile forensic inpatients\" may be altered by a disturbance in the prefrontal networks, which \"may be associated with stimulus-controlled behaviours, such as sexual compulsive behaviours.\" The findings may also suggest \"a dysfunction at the cognitive stage of sexual arousal processing.\"\n\nBlanchard, Cantor, and Robichaud (2006) reviewed the research that attempted to identify hormonal aspects of pedophiles. They concluded that there is some evidence that pedophilic men have less testosterone than controls, but that the research is of poor quality and that it is difficult to draw any firm conclusion from it.\n\nA study analyzing the sexual fantasies of 200 heterosexual men by using the Wilson Sex Fantasy Questionnaire exam, determined that males with a pronounced degree of paraphilic interest (including pedophilia) had a greater number of older brothers, a high 2D:4D digit ratio (which would indicate excessive prenatal estrogen exposure), and an elevated probability of being left-handed, suggesting that disturbed hemispheric brain lateralization may play a role in deviant attractions.\n\nWikipedia",
"provenance": null
},
{
"answer": "All kinds of things don't make sense. Biological systems are cludged together from parts that were designed for something else. Add culture on top? And it's a quagmire of things that don't always have a purpose or function.\n\nAs an example, why do men have nipples? Answer: Because males and females are developmentally linked, you couldn't get rid of male nipples without impacting females.\n\nWith any kind of sexual attraction thing that doesn't seem to make sense, it's likely that it's just bad luck at the edges of an effect that produces good results on average. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "718021",
"title": "Chronophilia",
"section": "Section::::Sexual preferences based on age.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 837,
"text": "BULLET::::- Pedophilia is a psychological disorder in which an adult or older adolescent experiences a sexual preference for prepubescent children. According to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), pedophilia is a paraphilia in which a person has intense sexual urges towards children, and experiences recurrent sexual urges towards and fantasies about children. Pedophilic disorder is further defined as psychological disorder in which a person meets the criteria for pedophilia above, and also either acts upon those urges, or else experiences distress or interpersonal difficulty as a consequence. The diagnosis can be made under the DSM or ICD criteria for persons age 16 and older. Child sexual abuse is not committed by all pedophiles, and not all child molesters are pedophiles.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6341469",
"title": "Pedophilia",
"section": "Section::::Signs and symptoms.:Development.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 491,
"text": "Pedophilia emerges before or during puberty, and is stable over time. It is self-discovered, not chosen. For these reasons, pedophilia has been described as a disorder of sexual preference, phenomenologically similar to a heterosexual or homosexual sexual orientation. These observations, however, do not exclude pedophilia from the group of mental disorders because pedophilic acts cause harm, and mental health professionals can sometimes help pedophiles to refrain from harming children.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6341469",
"title": "Pedophilia",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 607,
"text": "Pedophilia is termed pedophilic disorder in the \"Diagnostic and Statistical Manual of Mental Disorders\" (DSM-5), and the manual defines it as a paraphilia involving intense and recurrent sexual urges towards and fantasies about prepubescent children that have either been acted upon or which cause the person with the attraction distress or interpersonal difficulty. The International Classification of Diseases (ICD-11) defines it as a \"sustained, focused, and intense pattern of sexual arousal—as manifested by persistent sexual thoughts, fantasies, urges, or behaviours—involving pre-pubertal children.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6341469",
"title": "Pedophilia",
"section": "Section::::Society and culture.:General.\n",
"start_paragraph_id": 76,
"start_character": 0,
"end_paragraph_id": 76,
"end_character": 722,
"text": "Pedophilia is one of the most stigmatized mental disorders. One study reported high levels of anger, fear and social rejection towards pedophiles who have not committed a crime. The authors suggested such attitudes could negatively impact child sexual abuse prevention by reducing pedophiles' mental stability and discouraging them from seeking help. According to sociologists Melanie-Angela Neuilly and Kristen Zgoba, social concern over pedophilia intensified greatly in the 1990s, coinciding with several sensational sex crimes (but a general decline in child sexual abuse rates). They found that the word \"pedophile\" appeared only rarely in \"The New York Times\" and \"Le Monde\" before 1996, with zero mentions in 1991.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18721790",
"title": "Child sexual abuse",
"section": "Section::::Offenders.:Pedophilia.\n",
"start_paragraph_id": 79,
"start_character": 0,
"end_paragraph_id": 79,
"end_character": 228,
"text": "Pedophilia is a condition in which an adult or older adolescent is primarily or exclusively attracted to prepubescent children, whether the attraction is acted upon or not. A person with this attraction is called a \"pedophile\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2274218",
"title": "Richard A. Gardner",
"section": "Section::::Controversy.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 647,
"text": "In the same article, Gardner denied that he condoned pedophilia. \"I believe that pedophilia is a bad thing for society,\" he wrote. \"I do believe, however, that pedophilia, like all other forms of atypical sexuality is part of the human repertoire and that all humans are born with the potential to develop any of the forms of atypical sexuality (which are referred to as paraphilias by DSM-IV). My acknowledgment that a form of behavior is part of the human potential is not an endorsement of that behavior. Rape, murder, sexual sadism, and sexual harassment are all part of the human potential. This does not mean I sanction these abominations.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6341469",
"title": "Pedophilia",
"section": "Section::::Signs and symptoms.:Development.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 1004,
"text": "In response to misinterpretations that the American Psychiatric Association considers pedophilia a sexual orientation because of wording in its printed DSM-5 manual, which distinguishes between paraphilia and what it calls \"paraphilic disorder\", subsequently forming a division of \"pedophilia\" and \"pedophilic disorder\", the association commented: \"'[S]exual orientation' is not a term used in the diagnostic criteria for pedophilic disorder and its use in the DSM-5 text discussion is an error and should read 'sexual interest.'\" They added, \"In fact, APA considers pedophilic disorder a 'paraphilia,' not a 'sexual orientation.' This error will be corrected in the electronic version of DSM-5 and the next printing of the manual.\" They said they strongly support efforts to criminally prosecute those who sexually abuse and exploit children and adolescents, and \"also support continued efforts to develop treatments for those with pedophilic disorder with the goal of preventing future acts of abuse.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6v6qiz | why do black americans resent white americans so much for slavery when america wasn't the first to use slavery, and banned slavery 13 years before the last country to ban slavery did? | [
{
"answer": "So in your world slavery was the end of the matter?",
"provenance": null
},
{
"answer": "The effects of US Slavery are still seen today. It's not about the history of slavery in other countries. That doesn't directly affect our culture the way our slavery did. \n\nYou can't say that robberies didn't start in the US, so criminals can't be blamed for robbing today.\n\n > why are white Americans viewed as evil when slavery started in Europe in the 1400s\n\nI think that's a skewed viewpoint. Rational people wouldn't view all white people as evil. Who is saying this?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2996466",
"title": "Reformed Episcopal Church",
"section": "Section::::Social involvement.:REC and the ordination of black clergy.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 737,
"text": "The passage of the Thirteenth Amendment to the United States Constitution brought an end to the system of slavery that had kept American blacks in bondage since colonial times. After slavery was abolished, there was somewhat of a cultural crisis in the Southern states. Even though black Americans had received their freedom from the unjust practice of slavery, they also lost a consistent form of shelter, food, and worship. Almost overnight, these became things that tens of thousands of freed slaves now had to provide for by themselves. As if this hurdle were not enough, many white Americans, uncomfortable with this societal change, created, endorsed, and enforced Jim Crow laws as a way to segregate and suppress black Americans.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10501905",
"title": "Judicial aspects of race in the United States",
"section": "Section::::Legislation until the American Civil War and Reconstruction.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 688,
"text": "Until the Civil War, slavery was legal. After the Revolutionary War, the new Congress passed the Naturalization Act of 1790 to provide a way for foreigners to become citizens of the new country. It limited naturalization to aliens who were \"free white persons\" and thus left out indentured servants, slaves, free African-Americans, and later Asians. In addition, many states enforced anti-miscegenation laws (e.g. Indiana in 1845), which prohibited marriage between whites and non-whites, that is, blacks, mulattoes, and, in some states, also Native Americans. After an influx of Chinese immigrants to the West Coast, marriage between whites and Asians was banned in some western states.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38009444",
"title": "Reparations for slavery",
"section": "Section::::United States.:Opposition to reparations.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 516,
"text": "Conservative writer David Horowitz wrote a list of ten reasons why \"Reparations for Slavery is a Bad Idea for Blacks - and Racist Too\" in 2001. He contends that there isn't one particular group that benefited from slavery, there isn't one group that is solely responsible for slavery, only a small percentage of whites ever owned slaves and many gave their lives fighting to free slaves, and most Americans don't have a direct or indirect connection to slavery because of the United States' multi-ethnic background.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9352302",
"title": "Wayland Seminary",
"section": "Section::::1865: plans to educate the freedmen.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 429,
"text": "By late 1865, the American Civil War was over and slavery in the United States ended with the adoption of the Thirteenth Amendment to the United States Constitution. However, known as freedmen, millions of former African American slaves were without employable job skills, opportunities, and even literacy itself, (e.g., in Virginia, since the bloody Nat Turner Rebellion in 1831, it had been unlawful to teach a slave to read).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21527089",
"title": "The National Coalition of Blacks for Reparations in America",
"section": "Section::::Recent News.\n",
"start_paragraph_id": 83,
"start_character": 0,
"end_paragraph_id": 83,
"end_character": 875,
"text": "According to a 2016 article in the Washington Post, a U.N. panel said they believe the U.S. owes black people reparations. A report done by a U.N connected panel claimed because of slavery, Americans with African descent should receive reparations in some form. In the same article, it is also stated that \" Despite substantial changes since the end of the enforcement of Jim Crow and the fight for civil rights, ideology ensuring the domination of one group over another, continues to negatively impact the civil, political, economic, social and cultural rights of African Americans today.The dangerous ideology of white supremacy inhibits social cohesion amongst the US population.\" They concluded that because of slavery, violence against the black community, and terrorist acts like lynching, black people should receive some sort of apology or other form of reparation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38009444",
"title": "Reparations for slavery",
"section": "Section::::United States.:Support for reparations.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 831,
"text": "In 1999, African American lawyer and activist Randall Robinson, founder of the TransAfrica advocacy organization, wrote that America's history of race riots, lynching and institutional discrimination have \"resulted in $1.4 trillion in losses for African Americans\". Economist Robert Browne stated the ultimate goal of reparations should be to \"restore the black community to the economic position it would have if it had not been subjected to slavery and discrimination\". But what it doesn't cover is how none of those in the black community who are here due to slavery would even be here had slavery not existed in the United States. He estimates a fair reparation value anywhere between $1.4 to $4.7 trillion, or roughly $142,000 for every black American living today. Other estimates range from $5.7 to $14.2 to $17.1 trillion \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1142431",
"title": "African-American history",
"section": "Section::::The antebellum period.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 912,
"text": "In 1807, at the urging of President Thomas Jefferson, Congress abolished the international slave trade. While American Blacks celebrated this as a victory in the fight against slavery, the ban increased the demand for slaves. Changing agricultural practices in the Upper South from tobacco to mixed farming decreased labor requirements, and slaves were sold to traders for the developing Deep South. In addition, the Fugitive Slave Act of 1793 allowed any Black person to be claimed as a runaway unless a White person testified on their behalf. A number of free Blacks, especially indentured children, were kidnapped and sold into slavery with little or no hope of rescue. By 1819 there were exactly 11 free and 11 slave states, which increased sectionalism. Fears of an imbalance in Congress led to the 1820 Missouri Compromise that required states to be admitted to the union in pairs, one slave and one free.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6i9yiq | What books should be trusted? | [
{
"answer": "Obviously, there is always a ton to be said on this sort of question, but you might find [this](_URL_0_) response by /u/Cosmic_Charlie informative",
"provenance": null
},
{
"answer": "You will be interested in a series we ran a year ago on finding and evaluating sources, particularly parts [1](_URL_0_) and [2](_URL_1_).\n\nThese both feature several of our flaired members discussing where to find the best sources, how to evaluate books, and how to get the most out of secondary sources.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "26992278",
"title": "Essence of the Upanishads",
"section": "Section::::Reviews and influence.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 421,
"text": "The \"Los Angeles Times\" wrote that the author \"has given us a clear, almost controversial book that draws on the text and teachings of an ancient mystical faith and applies them to the concerns of contemporary life. His insights into the use of meditation to overcome the fear of death are comforting, reassuring, invigorating... [and this] is as much a book about the richness of life as it is about the end of living.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52874",
"title": "People of the Book",
"section": "Section::::Judaism.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 288,
"text": "The Hai Gaon in 998 in Pumbeditah comments, \"Three possessions should you prize- a field, a friend, and a book.\" However the Hai Gaon mentions that a book is more reliable than even friends for sacred books span across time, indeed can express external ideas, that transcend time itself.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8626002",
"title": "Why People Believe Weird Things",
"section": "Section::::Reception.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 399,
"text": "The \"Independent Thinking Review\" wrote, \"This is a book that deserves to be widely read. Skeptics and critical thinkers can learn from it, but more importantly, it's a book to give those who maybe aren't as skeptical as you, those who need some clear and reasonable arguments to gently push them in a more critical direction. Read this book yourself: buy it for someone whose mind you care about.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35700493",
"title": "Timbuctoo (novel)",
"section": "Section::::Background.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 347,
"text": "\"The book you hold is my own fictional version of what is surely one of the greatest stories of survival ever told. I can only offer gratitude to the reader for turning a blind eye to any historical inaccuracies, and for tolerating a novelist's liberties. I am no historian, and have massaged facts and fictions into place, re-conjuring history.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52055070",
"title": "Fin-de-Siècle Splendor",
"section": "Section::::Contents.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 315,
"text": "Philip F. Williams of Arizona State University stated that some people reading the book may need to consult other scholarly reference guides in order to help them understand \"Fin-de-Siècle Splendor\" due to the absence of edition, publication, and serialization information of some works chronicled within the book.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3485734",
"title": "Inspirational fiction",
"section": "Section::::Definition.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 273,
"text": "Any good book can be an inspiration, but many of these books highlight people overcoming adversity or reaching new levels of understanding. Whether they pull themselves up by their own bootstraps or have help from a higher power, these books will uplift and entertain you.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4577839",
"title": "Li Zhi (philosopher)",
"section": "Section::::Literary works.:\"A Book to Burn\" and \"A Book to Keep (Hidden)\".\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 299,
"text": "\"A Book to Keep (Hidden)\" gives accounts of thousands of years of good and bad deeds from antiquity to the current age. By Li’s own advice, it cannot be read by those who possess “eyes of flesh” (a Buddhist term indicating the “most mundane form of vision” characteristic of someone unenlightened).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5e5x1q | Can light impart momentum? | [
{
"answer": "The solar sail can either absorb the light (so the sail gains the momentum of the photon it absorbs), or better yet, reflect the light, reversing its momentum. By conservation of momentum, the sail then gains two times the photon's momentum.\n\nThis is completely analogous to experiments you might do with, say, a medicine ball and a person standing on a skateboard.\n",
"provenance": null
},
{
"answer": "Great question! This question has already been answered for how momentum is imparted (although as a system it is always conserved), but the \"why\" might also be interesting. You probably know the equation Energy = Mass x (speed of light) squared, or E = mc^(2). In truth, this is not the full equation! The full equation for how mass and energy relate is the [Energy-momentum relation](_URL_0_). The extra term shows that even for massless particles, there is still a momentum (E = pc or Energy = momentum x speed of light). Since a photon is purely energy, it will have a momentum. Of course this is all relativistic, meaning that the object is moving very close to the speed of light, and this momentum \"p\" cannot just be substituted as mass x velocity in an example of say, a baseball being thrown. Hope that helps!",
"provenance": null
},
{
"answer": "The light loses energy and momentum when it strikes the solar sail. It doesn't lose speed, though. For a photon, the energy is equal to hf, where h is Planck's constant and f is the frequency. The momentum, on the other hand, is equal to hf/c. So the light that's reflected from the solar sail will be a little bit red-shifted compared to the light that illuminates it.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "33149847",
"title": "Angular momentum of light",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 410,
"text": "The angular momentum of light is a vector quantity that expresses the amount of dynamical rotation present in the electromagnetic field of the light. While traveling approximately in a straight line, a beam of light can also be rotating (or “\"spinning\"”, or “\"twisting\"”) around its own axis. This rotation, while not visible to the naked eye, can be revealed by the interaction of the light beam with matter.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33149847",
"title": "Angular momentum of light",
"section": "Section::::Introduction.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 396,
"text": "Less widely known is the fact that light may also carry angular momentum, which is a property of all objects in rotational motion. For example, a light beam can be rotating around its own axis while it propagates forward. Again, the existence of this angular momentum can be made evident by transferring it to small absorbing or scattering particles, which are thus subject to an optical torque.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33149847",
"title": "Angular momentum of light",
"section": "Section::::Introduction.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 418,
"text": "It is well known that light, or more generally an electromagnetic wave, carries not only energy but also momentum, which is a characteristic property of all objects in translational motion. The existence of this momentum becomes apparent in the “\"radiation pressure\"” phenomenon, in which a light beam transfers its momentum to an absorbing or scattering object, generating a mechanical pressure on it in the process.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23535",
"title": "Photon",
"section": "Section::::Physical properties.:Experimental checks on photon mass.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 646,
"text": "Current commonly accepted physical theories imply or assume the photon to be strictly massless. If the photon is not a strictly massless particle, it would not move at the exact speed of light, \"c\", in vacuum. Its speed would be lower and depend on its frequency. Relativity would be unaffected by this; the so-called speed of light, \"c\", would then not be the actual speed at which light moves, but a constant of nature which is the upper bound on speed that any object could theoretically attain in spacetime. Thus, it would still be the speed of spacetime ripples (gravitational waves and gravitons), but it would not be the speed of photons.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11439",
"title": "Faster-than-light",
"section": "Section::::Tachyons.\n",
"start_paragraph_id": 91,
"start_character": 0,
"end_paragraph_id": 91,
"end_character": 451,
"text": "In special relativity, it is impossible to accelerate an object to the speed of light, or for a massive object to move at the speed of light. However, it might be possible for an object to exist which moves faster than light. The hypothetical elementary particles with this property are called tachyonic particles. Attempts to quantize them failed to produce faster-than-light particles, and instead illustrated that their presence leads to an instability.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34310141",
"title": "Tests of relativistic energy and momentum",
"section": "Section::::Overview.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 486,
"text": "where formula_8. So relativistic energy and momentum significantly increase with speed, thus the speed of light cannot be reached by massive particles. In some relativity textbooks, the so-called \"relativistic mass\" formula_9 is used as well. However, this concept is considered disadvantageous by many authors, instead the expressions of relativistic energy and momentum should be used to express the velocity dependence in relativity, which provide the same experimental predictions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "422481",
"title": "Mass–energy equivalence",
"section": "Section::::Conservation of mass and energy.:Fast-moving objects and systems of objects.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 536,
"text": "When an object is pushed in the direction of motion, it gains momentum and energy, but when the object is already traveling near the speed of light, it cannot move much faster, no matter how much energy it absorbs. Its momentum and energy continue to increase without bounds, whereas its speed approaches (but never reaches) a constant value—the speed of light. This implies that in relativity the momentum of an object cannot be a constant times the velocity, nor can the kinetic energy be a constant times the square of the velocity.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
fl2bvh | Why are the lanthanides and actinides crammed in one space? | [
{
"answer": "They don't, it's just a way of drawing the Periodic Table more compactly. If you wanted the table set out properly as a grid it would have to be an unwieldy long piece of paper. The whole thing really should be split at that point and the La and Ac elements inserted as two long lines. We draw it the way we do for convenience. If we wanted to, we could also draw the d-block elements similarly.\n\nChemically it's because the order of filling electron orbitals doesn't go simply shells 1-2-3-4-5... For the higher weight atoms, electrons start to fill the higher numbered \"s\" and \"p\" shells before the \"d\" and \"f\" of lower numbered shells have all the electrons they can take. So at the start of the Lanthanides and Actinides the sequence goes back filling in the remainder.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "23053",
"title": "Periodic table",
"section": "Section::::Open questions and controversies.:Group 3 and its elements in periods 6 and 7.:Lanthanum and actinium.\n",
"start_paragraph_id": 83,
"start_character": 0,
"end_paragraph_id": 83,
"end_character": 1479,
"text": "Lanthanum and actinium are commonly depicted as the remaining group 3 members. It has been suggested that this layout originated in the 1940s, with the appearance of periodic tables relying on the electron configurations of the elements and the notion of the differentiating electron. The configurations of caesium, barium and lanthanum are [Xe]6s, [Xe]6s and [Xe]5d6s. Lanthanum thus has a 5d differentiating electron and this establishes it \"in group 3 as the first member of the d-block for period 6\". A consistent set of electron configurations is then seen in group 3: scandium [Ar]3d4s, yttrium [Kr]4d5s and lanthanum [Xe]5d6s. Still in period 6, ytterbium was assigned an electron configuration of [Xe]4f5d6s and lutetium [Xe]4f5d6s, \"resulting in a 4f differentiating electron for lutetium and firmly establishing it as the last member of the f-block for period 6\". Later spectroscopic work found that the electron configuration of ytterbium was in fact [Xe]4f6s. This meant that ytterbium and lutetium—the latter with [Xe]4f5d6s—both had 14 f-electrons, \"resulting in a d- rather than an f- differentiating electron\" for lutetium and making it an \"equally valid candidate\" with [Xe]5d6s lanthanum, for the group 3 periodic table position below yttrium. Lanthanum has the advantage of incumbency since the 5d electron appears for the first time in its structure whereas it appears for the third time in lutetium, having also made a brief second appearance in gadolinium.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18308",
"title": "Lanthanide",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 647,
"text": "They are called lanthanides because the elements in the series are chemically similar to lanthanum. Both lanthanum and lutetium have been labeled as group 3 elements, because they have a single valence electron in the 5d shell. However, both elements are often included in discussions of the chemistry of lanthanide elements. Lanthanum is the more often omitted of the two, because its placement as a group 3 element is somewhat more common in texts and for semantic reasons: since \"lanthanide\" means \"like lanthanum\", it has been argued that lanthanum cannot logically be a lanthanide, but IUPAC acknowledges its inclusion based on common usage.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18308",
"title": "Lanthanide",
"section": "Section::::Applications.:Possible medical uses.\n",
"start_paragraph_id": 103,
"start_character": 0,
"end_paragraph_id": 103,
"end_character": 284,
"text": "Currently there is research showing that lanthanide elements can be used as anticancer agents. The main role of the lanthanides in these studies is to inhibit proliferation of the cancer cells. Specifically cerium and lanthanum have been studied for their role as anti-cancer agents.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18899",
"title": "Mendelevium",
"section": "Section::::Characteristics.:Physical.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 1667,
"text": "The lanthanides and actinides, in the metallic state, can exist as either divalent (such as europium and ytterbium) or trivalent (most other lanthanides) metals. The former have fds configurations, whereas the latter have fs configurations. In 1975, Johansson and Rosengren examined the measured and predicted values for the cohesive energies (enthalpies of crystallization) of the metallic lanthanides and actinides, both as divalent and trivalent metals. The conclusion was that the increased binding energy of the [Rn]5f6d7s configuration over the [Rn]5f7s configuration for mendelevium was not enough to compensate for the energy needed to promote one 5f electron to 6d, as is true also for the very late actinides: thus einsteinium, fermium, mendelevium, and nobelium were expected to be divalent metals. The increasing predominance of the divalent state well before the actinide series concludes is attributed to the relativistic stabilization of the 5f electrons, which increases with increasing atomic number. Thermochromatographic studies with trace quantities of mendelevium by Zvara and Hübener from 1976 to 1982 confirmed this prediction. In 1990, Haire and Gibson estimated mendelevium metal to have an enthalpy of sublimation between 134 and 142 kJ/mol. Divalent mendelevium metal should have a metallic radius of around . Like the other divalent late actinides (except the once again trivalent lawrencium), metallic mendelevium should assume a face-centered cubic crystal structure. Mendelevium's melting point has been estimated at 827 °C, the same value as that predicted for the neighboring element nobelium. Its density is predicted to be around .\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2308",
"title": "Actinide",
"section": "Section::::Discovery, isolation and synthesis.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 683,
"text": "Like the lanthanides, the actinides form a family of elements with similar properties. Within the actinides, there are two overlapping groups: transuranium elements, which follow uranium in the periodic table—and transplutonium elements, which follow plutonium. Compared to the lanthanides, which (except for promethium) are found in nature in appreciable quantities, most actinides are rare. The majority of them do not even occur in nature, and of those that do, only thorium and uranium do so in more than trace quantities. The most abundant or easily synthesized actinides are uranium and thorium, followed by plutonium, americium, actinium, protactinium, neptunium, and curium.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21278",
"title": "Nobelium",
"section": "Section::::Characteristics.:Physical.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 1860,
"text": "The lanthanides and actinides, in the metallic state, can exist as either divalent (such as europium and ytterbium) or trivalent (most other lanthanides) metals. The former have fs configurations, whereas the latter have fds configurations. In 1975, Johansson and Rosengren examined the measured and predicted values for the cohesive energies (enthalpies of crystallization) of the metallic lanthanides and actinides, both as divalent and trivalent metals. The conclusion was that the increased binding energy of the [Rn]5f6d7s configuration over the [Rn]5f7s configuration for nobelium was not enough to compensate for the energy needed to promote one 5f electron to 6d, as is true also for the very late actinides: thus einsteinium, fermium, mendelevium, and nobelium were expected to be divalent metals, although for nobelium this prediction has not yet been confirmed. The increasing predominance of the divalent state well before the actinide series concludes is attributed to the relativistic stabilization of the 5f electrons, which increases with increasing atomic number: an effect of this is that nobelium is predominantly divalent instead of trivalent, unlike all the other lanthanides and actinides. In 1986, nobelium metal was estimated to have an enthalpy of sublimation between 126 kJ/mol, a value close to the values for einsteinium, fermium, and mendelevium and supporting the theory that nobelium would form a divalent metal. Like the other divalent late actinides (except the once again trivalent lawrencium), metallic nobelium should assume a face-centered cubic crystal structure. Divalent nobelium metal should have a metallic radius of around 197 pm. Nobelium's melting point has been predicted to be 827 °C, the same value as that estimated for the neighboring element mendelevium. Its density is predicted to be around 9.9 ± 0.4 g/cm.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "306609",
"title": "Group 3 element",
"section": "Section::::Composition of group 3.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 1236,
"text": "It is disputed whether lanthanum and actinium should be included in group 3, rather than lutetium and lawrencium. Other d-block groups are composed of four transition metals, and group 3 is sometimes considered to follow suit. Scandium and yttrium are always classified as group 3 elements, but it is controversial which elements should follow them in group 3, lanthanum and actinium or lutetium and lawrencium. Scerri has proposed a resolution to this debate on the basis of moving to a 32-column table and consideration of which option results in a continuous sequence of atomic number increase. He thereby finds that group 3 should consist of Sc, Y, Lu, Lr. The current IUPAC definition of the term \"lanthanoid\" includes fifteen elements including both lanthanum and lutetium, and that of \"transition element\" applies to lanthanum and actinium, as well as lutetium but \"not\" lawrencium, since it does not correctly follow the Aufbau principle. Normally, the 103rd electron would enter the d-subshell, but quantum mechanical research has found that the configuration is actually [Rn]7s5f7p due to relativistic effects. IUPAC thus has not recommended a specific format for the in-line-f-block periodic table, leaving the dispute open.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1k9jw3 | How fast does an object inside the event horizon of a black hole move towards it? | [
{
"answer": "It depends on who is measuring the speed. Notice that speed is a local concept in general relativity. There is nothing stopping objects from *appearing* to move faster than light, if they are far away. This may be more readily understandable in the context of expansion of the universe, but it applies to black holes as well. Only if you are measuring speeds locally does the speed limit of c apply.",
"provenance": null
},
{
"answer": "There are two important things we have to remember here:\n\n1. Velocity is a relative quantity. We can't just say, \"That thing is moving at velocity *v*.\" We can only say, \"That thing is moving at velocity *v* **relative to that other thing**.\" I assume you question is then, \"Does an object inside the event horizon of a black hole move faster than c relative to an observer outside the event horizon?\" This brings us to the second important fact.\n\n2. The region inside the event horizon of a black hole **is not part of our universe**. This may sound shocking, but let's think about it. Nothing can cross from inside to outside, so we can never receive any information about it. It is completely inaccessible to us. We might as well say it's outside of our universe.\n\nThus, there's no meaningful way to talk about how fast something *inside* the event horizon of a black hole is moving relative to something *outside* of a black hole since we can never compare their velocities. Now, two massive objects inside the event horizon and in causal contact with each other (that is, they can \"see\" each other) must move at less than c relative to each other.\n\nWe, of course, could ask the hypothetical question, \"Well, if we *could* see inside the event horizon of a black hole, would objects in there be moving at faster than c relative to us?\" To answer that, I'd have to dig out my old relativity textbook and read up on it a lot, but even if it could happen, it wouldn't be a violation of special relativity thanks to the event horizon censorship.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "29320146",
"title": "Event horizon",
"section": "Section::::Interacting with an event horizon.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 1190,
"text": "In the case of the horizon around a black hole, observers stationary with respect to a distant object will all agree on where the horizon is. While this seems to allow an observer lowered towards the hole on a rope (or rod) to contact the horizon, in practice this cannot be done. The proper distance to the horizon is finite, so the length of rope needed would be finite as well, but if the rope were lowered slowly (so that each point on the rope was approximately at rest in Schwarzschild coordinates), the proper acceleration (G-force) experienced by points on the rope closer and closer to the horizon would approach infinity, so the rope would be torn apart. If the rope is lowered quickly (perhaps even in freefall), then indeed the observer at the bottom of the rope can touch and even cross the event horizon. But once this happens it is impossible to pull the bottom of rope back out of the event horizon, since if the rope is pulled taut, the forces along the rope increase without bound as they approach the event horizon and at some point the rope must break. Furthermore, the break must occur not at the event horizon, but at a point where the second observer can observe it.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9223226",
"title": "Gullstrand–Painlevé coordinates",
"section": "Section::::Speeds of light.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 378,
"text": "BULLET::::- At the event horizon, formula_30 the speed of light shining outward away from the center of black hole is formula_31 It can not escape from the event horizon. Instead, it gets stuck at the event horizon. Since light moves faster than all others, matter can only move inward at the event horizon. Everything inside the event horizon is hidden from the outside world.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "151013",
"title": "T-symmetry",
"section": "Section::::Macroscopic phenomena: black holes.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 471,
"text": "The event horizon of a black hole may be thought of as a surface moving outward at the local speed of light and is just on the edge between escaping and falling back. The event horizon of a white hole is a surface moving inward at the local speed of light and is just on the edge between being swept outward and succeeding in reaching the center. They are two different kinds of horizons—the horizon of a white hole is like the horizon of a black hole turned inside-out.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1508445",
"title": "Ergosphere",
"section": "Section::::Rotation.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 669,
"text": "As a black hole rotates, it twists spacetime in the direction of the rotation at a speed that decreases with distance from the event horizon. This process is known as the Lense–Thirring effect or frame-dragging. Because of this dragging effect, an object within the ergosphere cannot appear stationary with respect to an outside observer at a great distance unless that object were to move at faster than the speed of light (an impossibility) with respect to the local spacetime. The speed necessary for such an object to appear stationary decreases at points further out from the event horizon, until at some distance the required speed is that of the speed of light.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29320146",
"title": "Event horizon",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 792,
"text": "Any object approaching the horizon from the observer's side appears to slow down and never quite pass through the horizon, with its image becoming more and more redshifted as time elapses. This means that the wavelength of the light emitted from the object is getting longer as the object moves away from the observer. The notion of an event horizon was originally restricted to black holes; light originating inside an event horizon could cross it temporarily but would return. Later a strict definition was introduced as a boundary beyond which events cannot affect any outside observer at all, encompassing other scenarios than black holes. This strict definition of EH has caused information and firewall paradoxes; therefore Stephen Hawking has supposed an apparent horizon to be used. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1508445",
"title": "Ergosphere",
"section": "Section::::Radial pull.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 434,
"text": "Since the ergosphere is outside the event horizon, it is still possible for objects that enter that region with sufficient velocity to escape from the gravitational pull of the black hole. An object can gain energy by entering the black hole's rotation and then escaping from it, thus taking some of the black hole's energy with it (making the maneuver similar to the exploitation of the Oberth effect around \"normal\" space objects).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "55278",
"title": "Warp drive",
"section": "Section::::\"Star Trek\".:Slingshot effect.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 435,
"text": "This \"slingshot\" effect has been explored in theoretical physics: it is hypothetically possible to slingshot oneself \"around\" the event horizon of a black hole. As a result of the black hole's extreme gravitation, time would pass at a slower rate near the event horizon, relative to the outside universe; the traveler would experience the passage of only several minutes or hours, while hundreds of years would pass in 'normal' space.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
26kikm | the difference in programming languages. | [
{
"answer": "I can't speak for all languages, but I'll try to help differentiate between three languages I kinda know: HTML, CSS, and JavaScript. These are mostly used for websites.\n\n**HTML** or HyperText Markup Language is a language that 'structures' the different parts of a website. It groups things up, makes tables, basically brings the essentials to the table. **If your website was your body, HTML would be the skeleton.** Laying out the structure, to be built up on eventually.\n\n**CSS** or Cascading Style Sheets is a way of presenting the elements you have created in HTML. This can be done by giving a background to the *elements*, defining a margin or its general location on the page, or, if it's a text element, its font or text color. **If your website was your body, CSS would be the skin.** It applies to HTML, decorating and styling it.\n\nThen there's JavaScript. This is usually a lot more complex than HTML and CSS, and uses logic gates and loops and the like. JavaScript does the 'behind-the-scenes' work on your websites, such as animations. **If your website was your body, JavaScript would be the organs.** Taking information and processing it in a desired way.\n\nObviously, I'm no expert. I guarantee I'm going to be corrected at least a few times here, but this is my understanding of each. \n\nIf you're interested, I learnt this (incorrect?) information with _URL_0_, which was hands-on, colloquial, and very ELI5-y.\n\nEDIT: I'll try to answer any more questions you might have if you ask me.",
"provenance": null
},
{
"answer": "Every single programming language serves one purpose: explain to the computer what we want it to do.\n\nHTML is... not a programming language, it's a markup language, which basically means text formatting. XML and JSON are in the same category.\n\nThe rest of the languages fall in a few general categories (with examples):\n\n1. Assembly is (edit: for all intents and purposes) the native language of the machine. Each CPU has its own version, and they are somewhat interoperable (forward compatibility mostly).\n\n2. System languages (C and C++). They are used when you need to tell the computer what to do, as well as HOW to do it. A program called a compiler interprets the code and transforms it into assembler.\n\n3. Application languages (Java and C#). Their role is to provide a platform on which to build applications using various standardized ways of working.\n\n4. Scripting languages (Python and Perl). The idea behind them is that you can build something useful in the minimal amount of code possible.\n\n5. Domain-specific languages (FORTRAN and PHP). Each of these languages exists to build a specific type of program (math for FORTRAN, a web page generator for PHP).\n\nThen you have various hybrid languages that fit in between these main categories. The list goes on and on. Various languages are better suited for various tasks, but it's a matter of opinion.\n\nFinally and most importantly: JavaScript is an abomination unto god, but it's the only language that can be reliably expected to be present in web browsers, so it's the only real way to code dynamic behavior on webpages.\n\nEdit: Corrections, also added the 5th category",
"provenance": null
},
{
"answer": "It can be hard to explain the differences between them without getting too technical, but I'll give it a shot.\n\nTo start with, programming languages can be divided into two categories: compiled languages, and interpreted languages.\n\n**Compiled languages** are run through a program called a compiler, which takes the source code and generates a program file (like an .exe). The main advantage of this is that you have a packaged program which usually doesn't have any requirements to run it. The disadvantage is that there's an extra step between writing the code and distributing the program. Languages like C++ and Java are compiled languages.\n\n**Interpreted languages** are run through an interpreter at the time you run them, which processes the code and turns it into instructions for the computer as it goes. The advantage is that you don't have to go through the compilation step (which can take a decent amount of time for large programs); the disadvantages are a) it has to interpret it as it goes, which can take more resources, b) because it doesn't process the program until you run it, it's a lot easier for errors to slip through (compiled programs can check for some of those errors during the compilation step), and c) the person running the program needs the interpreter (whereas only the person *making* the program needs the compiler). Languages like Python and Javascript are interpreted languages.\n\n**Not programming languages**: Things like HTML and CSS aren't programming languages at all; they define things like the structure or style of a web page (or other text), but don't actually tell the computer what to do. The computer reads the markup and decides what to do with it on its own. It's kind of like how Microsoft Word is a program, but even though a Word document *contains* all of the font and layout information, it doesn't mean anything until Word decides how to handle it.\n\nThere are differences between each language beyond that, as well.\n\nOne of the most-cited differences is between C++ and Java: C++ lets you allocate and deallocate memory on your own, while Java handles it for you. What this means is, in C++ you have to tell the program how much RAM you want to use and when you're done using it. This gives you a lot of control over how much memory your program uses, which is great for squeezing performance out of an application. On the other hand, if you *forget* to tell the program that you're done using some RAM, you can run into serious problems. Java deals with the issue by doing all of the memory management for you: it figures out when you're done using a bit of RAM and frees it up automatically. That sounds great, but that extra processing has some overhead, and it's not necessarily fast or efficient compared to doing it yourself, so Java programs can be resource hogs.\n\nThe decision to pick one over the other is based almost completely on what kind of program you want to make. In the game industry, where performance really matters, C++ is still the standard language. For an application where you have no control over the user's environment, Java might be better: the user only needs to have Java installed to run it on pretty much any machine. For something that runs inside a web page, you'd pick Javascript: it's not as fast as something like C++, but web browsers have serious security restrictions on what they can run, so compiled programs are totally out. (And in case you didn't know, Java and Javascript are unrelated.)\n\nThere are other languages with vastly different programming styles that are highly suited towards complex math or AI systems, so programmers might specialize in completely different languages depending on what sort of work they do. There really isn't a \"better\" or \"perfect\" language; they're all tools with different features, and you pick the tool that makes the most sense for the job.\n\n**Edit**: Please keep in mind that this is ELI5! If you want to suggest how I can make this easier for non-programmers to understand, then please do so. If you want to nitpick about how I'm technically wrong about something, please take it to a programming-related sub.",
"provenance": null
},
{
"answer": "HTML and CSS aren't programming languages. HTML and CSS are both used to describe a page. The former for the content and the latter specifically for the look and formatting. \n \nProgramming languages vary quite a bit. You mentioned some specific ones that are geared towards web programming. Others include C++, C#, Java, et al. Generally, languages vary in their focus, and each has its strengths. All of them vary in their level of abstraction. \n \nThe abstraction is referred to as low- or high-level. When you hear people talk about a high-level language then they're referring to one with high abstraction. A low-level language like assembly has low abstraction. \n \nYou can think about it this way. Draw a line with computer readability at one end and human readability at the other. Each programming language is going to sit somewhere between these extremes. Assembly languages are going to be on one end and scripting languages like Python and Ruby are on the other. \n \nA low-level language that is computer readable is going to be harder to program well, but it'll perform better and provide a better degree of control. A high-level language is going to be easier to program, but it'll perform worse, because of the overhead imposed by the abstractions; and it'll provide less control, which is lost in the process of making it easy to program. \n \nThat doesn't mean high-level languages are worse. Having the best possible performance isn't always the most important consideration. And typically you don't need to exert complete control over different functions. The abstractions imposed by higher-level languages typically perform well enough and provide the basic needed functionality for most programs. And in the process they take a lot less time to bring to market.\n",
"provenance": null
},
{
"answer": "There are many different ways in which two programming languages can differ from one another. Note that I'll try to limit this to general purpose languages, to exclude things like markup languages (like HTML) which do specific things, like describe what webpages should look like. I'll talk about one that is very dear to my heart and which is a kind of very fundamental and essential difference between languages: \n\n**Paradigms:** \n\n*imperative*: Most languages are, first and foremost, imperative. Python, Ruby and Javascript seem to fit this bill. There's a lot of state you keep track of, like numbers for example, and you update them in a number of steps. It's like a recipe. You give a list of instructions. The computer does one after the other until there's a cake there. \n\n*declarative*: Haskell is a good example of a declarative language. You give definitions of things. There's no (or very little) state. The computer pieces together definitions to tell you what you want. It's a hard mindset to be in and hard to explain without more concrete examples. \n\n*logic*: some programming languages, like Prolog, allow you to give a computer a list of constraints, and it will just find something obeying the constraints you lay out. I don't have much experience with this. \n\nI'm not going to be able to give a full detailed answer, but the thing to remember is this: at the end of the day the computer will execute a program which is just a list of ones and zeroes. Programming languages are for people, both to let them write those zeroes and ones more easily and to let them communicate programs to each other in a way they can understand. Some languages are really different. They have completely different paradigms. Even though the most common way is to provide a sequential list of instructions, there are other ways as well. Even within one paradigm, languages differ from each other in that they each have their own ways of doing things. One language might be better suited for a particular job. Some people may prefer one language to another because they like the way it does stuff or think it's beautiful. The code one language generates might be faster. Or better at some individual thing. Each language, in addition to the technical specifications, has a group of people who write code in it and therefore its own customs. Programming languages are still for people. They're not *that* fundamentally different from natural language.",
"provenance": null
},
{
"answer": "This is ELI:5, guys come on.\n\nThe difference in programming languages is like the difference in human languages. You're just trying to describe concepts to someone and that works differently in different languages.\n\nPython:Javascript::English:German\n\nIn both English and German, you can describe the concept, the idea of \"being happy because something terrible happened to someone else.\" That's how you describe that concept using the English language. The German language has this much better way to handle it, and you can just say \"schadenfreude\". You can also just combine words into longer words in German, but English is all about the spaces and punctuation. \n\nIt's pretty much just syntax sugar the whole way down. Even compiled vs. non-compiled are like English vs. French. One language is full of bullshit, the other is regulated by a body that came up with their own equivalent of \"email\" because saying \"email\" was denigrating to them.",
"provenance": null
},
{
"answer": "There's a lot of what seems to be CS undergrads and debates that are going beyond the scope of your question.\n\nDifferent problems call for different solutions using different technologies and different languages. Languages' strengths and weaknesses are entirely relative to the purpose of the application.\n\n\"If the only tool in your toolbox is a hammer, then everything looks like a nail.\"",
"provenance": null
},
{
"answer": "The top few parent comments ITT are very good, but otherwise there are some very confused/bored five year olds out there.",
"provenance": null
},
{
"answer": "Like human languages, programming languages really just boil down to different ways to express ideas & actions. \n\nSome of the differences are between languages are minor. I.e., if you want to display text on the screen, all of these do the same thing in various languages:\n\n print \"Hello Reddit\"\n printf \"Hello Reddit\"\n say \"Hello Reddit\"\n cout < < \"Hello Reddit\"\n System.out.print(\"Hello Reddit\");\n\nWhy such minor differences? Because languages are written by humans. And humans are human. Which is to say petty at times.\n\nOn the other hand, some of the differences are much larger. For example, one major is something called \"memory management.\" \n\nThink of yourself a computer for a moment. You're going to be told a lot of different things. More than you can remember in your head. So what do you do? \n\nYou get a notebook. You decide on each line, you'll write down each thing you need to remember. Be it Alice has $100. Or Bob's favorite color is red. Whatever it may be, each thing takes a line. How many things can you remember? That's determined by how many lines in your notebook. \n\nOf course, after a while some things are no longer needed. The activity that required to remember Alice had $100 ended. So you can erase that line & reuse it. \n\nEach of those lines is like memory in a computer. Some programming languages require you (the programmer) to explicitly say \"I'm done with lines 134 - 150. You can use them for something else.\" Other languages have ways to figure it out automatically. \n\nWhy not always figure it out automatically? Well, it's expensive. It turns out you need to keep track of a few other things & periodically take time to check if something is used. Maybe that's okay, but it's also possible you're doing something critical -- say running a nuclear power plant or the instructions for a pacemaker -- where it isn't. It's basically comes down to a tradeoff between convenience & performance. 
\n\nWhich is another major difference between languages: Do you aim to optimize how quickly the developer can write a program? Or to optimize how the program uses the physical resources of a machine? (E.g., its CPU, memory, etc.)\n\nThere's a lot of other tradeoffs like these. Other tradeoffs are how well does it work with other computers on the network? How well does it let me create a graphical interface? How are unexpected conditions handled? \n\nAnd in a nutshell, each language makes a different set of decisions on tradeoffs. \n\nWhich is best for what? Well, that's subjective. Ask 100 different programmers & you'll get 100 different answers.\n\nFor example, my employer tends to use 4 primary languages: C++, Java, Go, & Python. C++ is great for problems that need to handle a lot of concurrent activity. (I.e., things that need to \"scale.\") Think of problems where 100,000 people are sending a request a second. Go is good at these problems too. \n\nJava is good for when there's complicated business logic. Think of problems like figuring out how much tax you need to charge, which is going to vary not just on the state, but even the city or zip. Python is good when you need to put something together quickly. Think of problems where I have a bunch of data & I need to do a one-off analysis to tell me a certain characteristic. \n\nOf course, those are far from the *only* problems each language solves, but it gives a sense of it.",
"provenance": null
},
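The notebook analogy in the answer above can be sketched in a few lines of Python. This is a hypothetical toy allocator written for illustration only (the `Notebook` class and its method names are invented here), not how any real language manages memory:

```python
# Toy "notebook" allocator: each line of the notebook is one memory slot.
class Notebook:
    def __init__(self, lines):
        self.slots = [None] * lines  # None marks a free line

    def remember(self, fact):
        """Write a fact on the first free line; return its line number."""
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = fact
                return i
        raise MemoryError("notebook is full")

    def forget(self, line):
        """Erase a line so it can be reused (manual memory management)."""
        self.slots[line] = None

book = Notebook(lines=2)
alice = book.remember("Alice has $100")          # line 0
bob = book.remember("Bob's favorite color is red")  # line 1
book.forget(alice)                               # done with Alice's line
carol = book.remember("Carol has a cat")         # reuses the erased line 0
print(carol)  # prints 0
```

A garbage-collected language effectively calls `forget` for you; a manual language makes you call it yourself, and forgetting to do so is the classic "memory leak".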
{
"answer": "I'd probably break it down into 4 types of programming languages.\n\n1. Object Oriented (Java or C++)\n2. Logical Programming (Prolog or LISP)\n3. Functional Programming (Haskell or LISP)\n4. Declarative (SQL)\n\nAs you can see, some languages can be classified under multiple classes. I'd describe what each type means but you're 5 and you can do your own damn research.\n\nAll I really wanted to say was that the reason there are so many languages is because each language is good at doing \"something\" better than the others. You could make a text file and a Java program to store all of your shit. Or download a database management system, learn SQL, and tell the DBMS what you want to insert/edit/delete, saving time/money. You could make an AI in C++, but you'll be missing out on the powerful logical programming Prolog has to offer.\n\nSome languages are easier to \"read\" than others. And some languages are just a pain in the ass to program in. Seriously, look at the assembly code in x86 Windows for Hello, World.\n\n    .486p\n    .model flat,STDCALL\n    include win32.inc\n    \n    extrn MessageBoxA:PROC\n    extrn ExitProcess:PROC\n    \n    .data\n    \n    HelloWorld db \"Hello, world!\",0\n    msgTitle db \"Hello world program\",0\n    \n    .code\n    Start:\n    push MB_ICONQUESTION + MB_APPLMODAL + MB_OK\n    push offset msgTitle\n    push offset HelloWorld\n    push 0\n    call MessageBoxA\n    \n    push 0\n    call ExitProcess\n    ends\n    end Start\n\nDo you really want to type all of that shit out? Functional programming languages are crazy powerful because so few lines can do so much. Here's quicksort for example:\n\n    quickSort :: Ord a => [a] -> [a]\n    quickSort [] = []\n    quickSort (x:xs) = quickSort [a | a <- xs, a < x] ++  -- Sort the left part of the list\n                       [x] ++                             -- Insert pivot between two sorted parts\n                       quickSort [b | b <- xs, b >= x]    -- Sort the right part of the list\n\n3 fucking lines to sort any list in O(n log n). But of course there's trade-offs. Fucking hard to read that shit, eh? 
Imagine working on a team and reading one page of that code that someone else wrote and communicating what it does to other team members.\n\nThere's basically as many programming languages as there are... languages. Pick one (I'd recommend Java) and play with it.",
"provenance": null
},
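The Haskell three-liner in the answer above translates almost word for word into Python list comprehensions. This is a sketch for comparison (not the fastest way to sort in Python, where you'd just call `sorted`):

```python
def quick_sort(xs):
    """Quicksort via list comprehensions, mirroring the Haskell version."""
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    return (quick_sort([a for a in rest if a < pivot])      # left of pivot
            + [pivot]                                       # pivot itself
            + quick_sort([b for b in rest if b >= pivot]))  # right of pivot

print(quick_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The structure is the same as the Haskell code; what differs is only how much ceremony each language's syntax demands to express it.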
{
"answer": "There are lots of different programming languages because there are lots of different things you can make computers do.\n\nFor example:\n\n* **HTML**: Used to build simple websites and tell a web browser what to show on the screen.\n\n* **Python**: A very popular language designed to be beautiful, flexible, and powerful while still being easy to read and easy to write. It can be used on lots of different kinds of computers, not just Windows PCs. It can be used to make programs do extra things, add new parts to a game, build websites, and can be used to easily tell computer how to do repetitive tasks.\n\n* **Ruby**: was invented because lots of other languages were too complicated and messy, and Ruby tries to be simple.\n\n* **PHP**: Was designed to be unrestricted so anyone can contribute to make the language better. It was made so you can easily make websites talk to databases, and do much more complicated things which cannot be done by HTML.\n\n* **Javascript**: was invented as a way to make websites do more things that HTML couldn't do by itself. It isn't the same thing as Java.\n\n* **C**: An older language that is still used today, and is the predecessor to C++. It was invented to address the limitations of very early programming languages. 
It is complicated and easy to make big programming mistakes with.\n\n* **C++**: A very fast, complex, messy looking language which is extremely powerful and flexible, but you can easily make buggy programs if you don't know how to use it properly.\n\n* **Visual Basic**: An adaptable, easy to learn language made by Microsoft specifically for Windows and other Microsoft products.\n\n* **Java**: Made by Sun Microsystems, designed to be used across many many different electronic devices, like modems, PCs, home appliances, robots, car audio head units, mobile phones, etc.\n\n* **C#**: Tries to copy the way Java and C++ are written to make it easier for people to pick up, but tries to be simpler and easier to learn than Java and C++.\n\n* **Lisp**: This is a family of similar languages which were born from a much older language. The original Lisp was designed mostly for mathematical purposes, but it has become much more flexible since. It is highly influential in the field of computer science, and its strength comes from its clever, elastic syntax (syntax is like grammar for programming languages).\n\n* **MATLAB**: Designed for use with mathematical scenarios, often used by engineers and scientists.\n\n* **SQL**: Designed to talk to databases. It is like a \"search language\" which you can use to find exactly what you need in a database. It is also used to add information to databases.\n\n* **Assembly**: A very complicated language that is used to \"talk to the CPU\" in a much more direct way. You have to think like a computer to use this language, and it is very difficult to read. Fun fact: the original Rollercoaster Tycoon was programmed by Chris Sawyer using only Assembly, which would have taken a long long time but made the game very efficient and fast.",
"provenance": null
},
{
"answer": "In a sense, all programming languages* do exactly the same thing. They describe how to go about performing particular computations. In general, if it's possible to write a program in one language, it's also possible to write that program in any language because we can prove using maths that they can all do the same things. (This is called being [\"Turing complete\"](_URL_0_))\n\nThe differences between programming languages comes down to a few factors: \n\n* Speed: Some programming languages tend to run much faster than others. One major factor in this tends to be whether the language is \"compiled\" (pre-processed to turn it into machine instructions) or \"interpreted\" (turned into machine instructions just before they're run every time). C tends to be very fast because it's already pretty close to the instructions that the computer is actually running (assembly code) and can be very heavily optimized in the compilation process. Languages like Python and JavaScript tend to be much slower because they're (usually) interpreted and very far removed from what's happening at the machine level. \n\n* Ease of writing: Some languages tend to be much easier to write in general than others. Most high-level languages (e.g. Python, JavaScript, Ruby) provide you a lot of really nice tools that allow you to write code very fast and easily. The language will also handle a lot of stuff that you'd need to micromanage in a lower-level language (e.g. C, C++) like memory management. They're also pretty flexible in terms of what they allow you to do compared to some more strictly-defined languages (e.g. Java). \n\n* Style: This relates to the above, but there are a lot of different styles of programming languages, and some people have preferences working with some versus others. 
One example is static- vs dynamic-typing; whether you have to explicitly declare what all of your variables are and what they can be used for (C, Java), or whether the language will just figure it out for you (Python, JavaScript). Another is imperative vs declarative; whether you tell the computer all the steps that it needs to take to solve the problem (most languages), or whether you just describe what you want it to do and have it figure it out. Kind of. It's pretty weird. Then there are things like \"functional\" languages, where functions are treated the same as any other kind of data and can be passed around (Haskell, ML, JavaScript); object-oriented languages, where everything is an \"object\" that both has data stored in it and has a set of operations associated with it (C++, Java), etc. Basically, these are all choices the designers of the language made that aren't necessarily objectively good or bad but make some people like the language and some dislike it. \n\n*As others have mentioned, there are things that might seem like programming languages but really aren't. These include things like: \n\n* HTML: Defines the structure of a webpage\n\n* SQL: Describes information to grab from a database; newer versions can kind of be considered real programming languages.... Kind of....\n\nThese aren't actual programming languages in the sense that they're not describing how to do any computation per se. ",
"provenance": null
},
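The static- vs dynamic-typing point in the answer above can be seen directly in Python: nothing stops you from calling a function with the wrong kind of argument, and the failure only shows up when that line actually runs, whereas a C or Java compiler would have rejected the program up front. A minimal sketch (the function name `double` is just an example):

```python
def double(n):
    return n * 2

print(double(21))    # 42 -- works with a number
print(double("ha"))  # 'haha' -- also "works", because strings define *
try:
    double(None)     # only fails here, at run time
except TypeError as err:
    print("caught:", err)
```

This flexibility is exactly the trade-off described above: faster to write and more permissive, but type mistakes survive until the program is running.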
{
"answer": "Programming languages are divided up along a number of lines.\n\n\nHow they are executed:\n\nPrograms are either run directly by the hardware of the machine, or by a software layer between the program and the machine. \n\nExamples of languages that are translated ahead of time to run directly on the hardware are C/C++, Pascal, Fortran, Common Lisp, Haskell, Forth... the list goes on.\n\nExamples of languages that are run by a software layer include Python, Perl, Ruby, Java, JavaScript, C#, Lua, Common Lisp... again the list goes on (and yes there is overlap, it's possible to do either).\n\nPrograms compiled to run against the hardware are usually faster than those that are run by a software layer, but that's not always the case; Java and C# are quite performant (easily within an order of magnitude of their native counterparts).\n\n\nThe Type system:\n\nIn simple terms this dictates what parts of the program can do to other parts of the program. The major examples are untyped, statically typed, and dynamically typed.\n\nAn example of an untyped language is x86 object code (the language that runs on Intel's CPUs). This language treats everything as a series of 1's and 0's. Even the instructions are just 1's and 0's, there is no concept of a letter or a number, it's just groups of 1's and 0's and different operations on them.\n\nAn example of a statically typed language is C. C knows about numbers, and different groups of numbers in a certain layout (a struct). It also has functions, which are groups of instructions. An example of the type system in action is that it will not let you call a function that expects a single number with an argument that is a group of numbers.\n\nAn example of a dynamically typed programming language would be one of the Lisp family. 
These languages will allow you to call a function that expects a single number with an argument that is a group of numbers; the program will then fail when it is run because the function doesn't know what to do with a group of numbers.\n\n\nThe Paradigm: \n\nThis is the basic structure of the language. Examples are Imperative, Functional and Logic programming. Other people might include object oriented in this list, but I see it as largely orthogonal, it's just a way of organising things, not anything fundamental.\n\nImperative languages are comprised of instructions; they are run one at a time, one after the other. They may jump around a bit, but they will always execute one instruction after another.\n\nFunctional languages work differently. A functional language like Haskell, for instance, isn't a list of instructions that go and change data; it says what the input data is and what the output data should be, and the language then goes off and does that for you.\n\nLogic programming languages, such as Prolog, work differently again. A Prolog program consists of a set of rules. When you give the program some input it will try and match the input to its rules and from there give you anything that matches your rules.\n\nThe Memory Management Scheme:\n\nLots of languages use a garbage collector, which is a piece of code that runs in every program written in that language. It runs in the background and looks after the memory used by the program. When it detects that a piece of memory is no longer in use it will give that bit of memory back to the operating system so that other programs (or the current program, later) can use it for something else. Java, C#, Common Lisp, Haskell, Python, Ruby, Javascript are examples of languages that have a garbage collector.\n\nOther languages don't have a garbage collector, for example C/C++, COBOL, Rust. These languages rely on the code that looks after the memory being inserted when the program is compiled. 
This can either be done by the programmer, as in C and Fortran or automatically inserted by the compiler as in (good) C++ and Rust. If the programmer is required to manage the memory and does it wrong, that program will probably not work.\n\nTo directly answer your question:\n\n* HTML: is a markup language, it's used for describing how something should look, it's not a programming language.\n\n* Python, Ruby, Javascript: These are interpreted, imperative, dynamically* typed, garbage collected languages. They are seen as easy to read and write and protect you from a lot of low level details.\n",
"provenance": null
},
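The imperative-vs-functional distinction described in the answer above can be shown inside a single language: Python supports both styles. A small sketch (the variable names are arbitrary), computing the same sum both ways:

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]

# Imperative: spell out each step that changes state.
total = 0
for n in nums:
    total += n

# Functional: describe the result as a fold over the data,
# with no mutable state in sight.
total_fn = reduce(lambda acc, n: acc + n, nums, 0)

print(total, total_fn)  # 15 15
```

Both produce the same answer; the difference is whether you narrate the steps or declare the result.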
{
"answer": "If I wanted to tell you something I want you to do, then I'd speak to you in a human language such as English, Spanish, or Russian. Programming languages work the same way, except you're talking to a computer. For example, if I wanted to tell the computer to say \"Hello World!\" here's how you could do it in a few different programming languages.\n\n\nJava:\n\n    System.out.println(\"Hello World!\");\n\n\nPython:\n\n    print \"Hello World!\"\n\n\nC++:\n\n    cout << \"Hello World!\";",
"provenance": null
},
{
"answer": "**This is a joke list of ELI5 differences. If anyone knows the original sauce please post it.** \n \n***If Programming languages were religions*** \n \nC would be Judaism - it's old and restrictive, but most of the world is familiar with its laws and respects them. The catch is, you can't convert into it - you're either into it from the start, or you will think that it's insanity. Also, when things go wrong, many people are willing to blame the problems of the world on it.\n\nJava would be Fundamentalist Christianity - it's theoretically based on C, but it voids so many of the old laws that it doesn't feel like the original at all. Instead, it adds its own set of rigid rules, which its followers believe to be far superior to the original. Not only are they certain that it's the best language in the world, but they're willing to burn those who disagree at the stake.\n\nPHP would be Cafeteria Christianity - Fights with Java for the web market. It draws a few concepts from C and Java, but only those that it really likes. Maybe it's not as coherent as other languages, but at least it leaves you with much more freedom and ostensibly keeps the core idea of the whole thing. Also, the whole concept of \"goto hell\" was abandoned.\n\nC++ would be Islam - It takes C and not only keeps all its laws, but adds a very complex new set of laws on top of it. It's so versatile that it can be used to be the foundation of anything, from great atrocities to beautiful works of art. Its followers are convinced that it is the ultimate universal language, and may be angered by those who disagree. Also, if you insult it or its founder, you'll probably be threatened with death by more radical followers.\n\nC# would be Mormonism - At first glance, it's the same as Java, but at a closer look you realize that it's controlled by a single corporation (which many Java followers believe to be evil), and that many theological concepts are quite different. 
You suspect that it'd probably be nice, if only all the followers of Java wouldn't discriminate so much against you for following it.\n\nLisp would be Zen Buddhism - There is no syntax, there is no centralization of dogma, there are no deities to worship. The entire universe is there at your reach - if only you are enlightened enough to grasp it. Some say that it's not a language at all; others say that it's the only language that makes sense.\n\nHaskell would be Taoism - It is so different from other languages that many people don't understand how anyone can use it to produce anything useful. Its followers believe that it's the true path to wisdom, but that wisdom is beyond the grasp of most mortals.\n\nErlang would be Hinduism - It's another strange language that doesn't look like it could be used for anything, but unlike most other modern languages, it's built around the concept of multiple simultaneous deities.\n\nPerl would be Voodoo - An incomprehensible series of arcane incantations that involve the blood of goats and permanently corrupt your soul. Often used when your boss requires you to do an urgent task at 21:00 on Friday night.\n\nLua would be Wicca - A pantheistic language that can easily be adapted for different cultures and locations. Its code is very liberal, and allows for the use of techniques that might be described as magical by those used to more traditional languages. It has a strong connection to the moon.\n\nRuby would be Neo-Paganism - A mixture of different languages and ideas that was beaten together into something that might be identified as a language. Its adherents are growing fast, and although most people look at them suspiciously, they are mostly well-meaning people with no intention of harming anyone.\n\nPython would be Humanism: It's simple, unrestrictive, and all you need to follow it is common sense. 
Many of the followers claim to feel relieved from all the burden imposed by other languages, and that they have rediscovered the joy of programming. There are some who say that it is a form of pseudo-code.\n\nCOBOL would be Ancient Paganism - There was once a time when it ruled over a vast region and was important, but nowadays it's almost dead, for the good of us all. Although many were scarred by the rituals demanded by its deities, there are some who insist on keeping it alive even today.\n\nAPL would be Scientology - There are many people who claim to follow it, but you've always suspected that it's a huge and elaborate prank that got out of control.\n\nLOLCODE would be Pastafarianism - An esoteric, Internet-born belief that nobody really takes seriously, despite all the efforts to develop and spread it.\n\nVisual Basic would be Satanism - Except that you don't REALLY need to sell your soul to be a Satanist...",
"provenance": null
},
{
"answer": "A bit late on this, but here's my shot:\n\nThink of it this way: computers have their own, super complex language. It is extremely difficult to express clearly what you want in it. In addition, computers are pretty stupid, so if you make a mistake in your instructions, they won't try to understand what you actually intend, they will just do whatever stupid stuff you just asked.\nSince it is so difficult to express ideas in this machine language, we created translators. They're sort of bilingual: they can understand instructions in one language, and perfectly translate it for you in machine language. The former is what we call a programming language. If you can express ideas in this language, this translator will be able to translate them to the machine.\n\nFrom that point, you have two things to consider:\n\n 1 - The language that this translator understands\n\n 2 - The translator itself. There are various strategies of translation (interpreted languages translate what you tell them sentence by sentence, literally, while compiled languages will take a full book of instructions and translate it in one go into a 'machine-book' of instructions, so that they have some context)\n\nThe first point is usually referred to as the language, while the second is called the implementation. Most often, it is possible to take either approach for any language. But some are specifically designed to be compiled or interpreted. Java or C++ will require you to make very long sentences, with all the context being explicitly stated, so it works much better when compiled. Others, like Python, Javascript... rely on shorter sentences, with context being more often implicit, so you can directly discuss with the computer, with the interpreter sitting in the middle and doing the translation both ways.\n\nThe difference between languages is mainly how you can express complex processes to the computer. 
As previously stated, computers are incredibly stupid, and require a LOT of explaining for pretty much anything. Some languages, called 'low-level' (e.g. Assembly, C, ...) try to stick to the way computers see things. It takes a bit more work expressing your ideas in this language, because you have to explain EVERYTHING, but you know that the translation will be completely faithful. On the other end of the spectrum, some languages, called 'high-level' (e.g. Python, Ruby...), try to be closer to you than they are to the machine: you can express big ideas and concepts to them, and they will do all the work of breaking it down into small pieces the computer can understand. Of course, you have no control over exactly how they do this break down, but it is often much simpler when exploring an idea to be able to have this high-level conversation.\nIn the middle, some languages will try to do both, defining both big abstract concepts and small concrete tools for better controlling what exactly the computer is doing. \n\nFinally, the other thing to consider is that other humans will look at your code. Some prefer to have it completely broken down to understand exactly what is going on, but most are more comfortable with seeing it in high level languages, where they can get the big picture of what your program is doing without having to get all the details.\n\nNote that this is very subjective too! Back in the days when programmers were electrical engineers, C was seen as 'high level' because it defined new abstractions, like functions. And you can express some very low-level details of how things should work in Python. The language just gives you a framework for expressing your ideas, and some frameworks work better than others for expressing different ideas.",
"provenance": null
},
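Python actually exposes its own "translator", so the interpreted-vs-compiled idea from the answer above can be demonstrated in a few lines: `compile` turns a whole "book" of source text into one code object ahead of time, which `exec` then runs. This is only a sketch of the idea (real ahead-of-time compilers target machine code, not Python bytecode):

```python
# A whole "book" of instructions, as plain text.
source = """
result = sum(range(10))
"""

# Translate the whole text once, up front...
code_obj = compile(source, "<book>", "exec")

# ...then run the translated version in a fresh namespace.
namespace = {}
exec(code_obj, namespace)
print(namespace["result"])  # 45
```

An interpreter without the up-front step would instead translate each "sentence" every time it is reached, which is part of why interpreted code tends to run slower.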
{
"answer": "Javascript is the Duct Tape of programming languages. ",
"provenance": null
},
{
"answer": "Warning: most of this thread\n_URL_0_",
"provenance": null
},
{
"answer": "Low-level languages are used when you care about how every component inside the machine is going to operate. What you produce here is not compiled the way other languages are - you're writing instructions for a specific system architecture and what you write to work on one chip (x86), won't work on another (a Motorola chip)\n\nMid-level languages abstract some components but not others: I don't care about how the CPU and system bus work here, but I do care about memory - that's typical (e.g. C). A compiler takes this and turns into a lower level instruction set so can be compiled to work (normally) on any system architecture - if you're careful.\n\nHigh-level language: I don't care about how the computer does this I just want to describe the solution to my problem and the compiler/interpreter figures out how to turn that into a lower representation the computer can run so it will run \"everywhere\" (I'm simplifying). These are languages like Ruby, Python, Perl, etc.\n\nDifferent high-level languages are designed to describe solutions to problems in different ways. In the same way algebra and calculus are related, it's easier to solve some problems with one more than another (again, I'm simplifying), which is why you will hear talk of \"object-orientated\", \"functional\", \"imperative\", \"strongly-typed\", \"duck-typed\" and a host of other words to describe a language.",
"provenance": null
},
{
"answer": "I understand most if not all of these answers, but very little of this is ELI5, especially all the assembly disputes.",
"provenance": null
},
{
"answer": "I think the best analogy to give about programming languages is having instructions with different levels of detail.\n\nImagine you were to leave a set of instructions telling an 8 year old child, who didn't speak the local language, how to pick up some eggs from the grocery store. You would want to be as precise as possible; telling him exactly which turns to make, how much money to bring, where to look for the eggs, which eggs to buy, and how to interact with the cashier. This would be Assembly; you have to be very specific.\n\nNow, imagine you are leaving the same instructions for a 25 year old who has lived in the area and often goes grocery shopping. All you'd have to do is leave a note saying \"Buy eggs\". This would be Python; the specifics are taken care of.\n\nThere are some programming languages that have roughly the same level of specificity and just are different ways of saying things (kind of like different human languages). There are also languages that are better at one sort of thing than another (kind of like different vocabularies between fields of study). Most importantly though, there are many languages that fall in between the extremes of Assembly and Python; the level of specificity you use really depends on who you're talking to and what you want them to do.\n\n**TL;DR** It's all about how specific you need to be.",
"provenance": null
},
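The "buy eggs" analogy above maps nicely onto real code: the same task written with a high-level built-in versus spelled out step by step, the way a lower-level language would force you to. Both snippets are illustrative only (the word list and task are invented for the example):

```python
words = ["buy", "eggs", "at", "the", "store"]

# High level: like the note saying "Buy eggs" -- one instruction,
# the details are taken care of for you.
longest = max(words, key=len)

# Low level: every step written out, like the directions
# left for the 8 year old.
best = words[0]
for w in words[1:]:
    if len(w) > len(best):
        best = w

print(longest, best)  # store store
```

Both find the same answer; the high-level version just hands the "how" to the language.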
{
"answer": "HTML is not a programming language.",
"provenance": null
},
{
"answer": "Imagine your computer is a restaurant.\n\nHTML is the hostess / atmosphere of the restaurant. They show you to where you're going, and then you just sit down and look around at things that don't change. Decorations, the tables, etc.\n\nPython, Ruby, PHP, etc \"Server Side\" languages are the kitchen. They do a lot of work, but it's all behind the scenes and you don't really see it.\n\nJavascript is the waiter. They interface between the kitchen and you, but not everything they do requires a trip to the kitchen. They can tell you the specials, sing happy birthday, etc. but occasionally have to go to the kitchen to clarify some things or get the actual food made.",
"provenance": null
},
{
"answer": "Imagine you are the CEO of a company and you're looking to hire a new plant manager. Different managers have different styles.\n\nThe low level manager micro manages everything, he watches the time clock at every shift change, and employees have to ask him over walkie talkie every time they want to move a box or go to the bathroom. Under his supervision, the plant is very efficient but he has to work 120 hours a week to get the job done.\n\nThe high level manager worries primarily about minimizing the number of hours he has to work. His goal is 5 hours per week per plant. He doesn't do this because he is lazy, he does this because he wants to open and run three other plants. So, he hires a few sub managers, a secretary, a timeclock watcher, a box location watcher. The plant still produces the same amount of end product, but it requires more staff to do it. \n\nBoth managers run all of their policies by you before they get implemented, so the low level manager requires more work at the beginning from you too. \n\nThis is the fundamental tradeoff of programming languages. More up front work from you = more efficient machines. Less up front work from you = less efficient machines (because the machine manages things you otherwise would have to manage yourself, like taking out the garbage)\n\n* The managers are the programming languages.\n* The people working in the plant are the number of operations the computer had to run to complete a task (more operations means slower programs)\n* The policies they run by you is the amount of code you have to write to do something in that language.\n\nNow, there is a second dimension which is how much experience a given manager has in running plants like yours. Is it a website factory, or a bank, or a tiny factory on a small island with very limited access or ability to hire more people? 
More experience in your domain means the manager already knows how to do the things you'll want without you having to teach them (i.e. the programming language has features or libraries that solve common problems you're likely to encounter when running a website). And sometimes you'd like to hire a high level manager, but there aren't enough resources to do so, so you have to hire a low level manager or the plants won't be able to run.\n\nAssembly and C are low level languages.\n\nJava, Python and Ruby are high level languages.",
"provenance": null
},
{
"answer": "Well, there's a tradeoff with a language -- is it fast to write the code, or is it fast to run the code? Lower level languages can make exceptionally fast code but they take a lot of writing to do it. Higher level languages let you get working code much faster, but they typically run slower. \n\nRoughly from low to high:\n\nAssembly, C, C++, Java/C#, Python/Perl, Javascript\n\nAs for why there's so many... Well, some cater to specific uses... Fortran for math, Visual Basic is typically database front-ends, etc. The other reason is best summed up with this:\n\n_URL_0_",
"provenance": null
},
{
"answer": "I would say there are 2 important differences.\n\nThe Syntax. Imagine it like this: we phrase sentences in English in a certain grammatical way. But those same grammar rules wouldn't apply if we were to speak Japanese.\n\nNext the Style. This has to do with the words used. Bulgarian and Russian use the same alphabet and some words are similar, but in the end they are two different languages, despite their similarities. So even though Java is influenced by C in its making, you will find differences when you're writing code.\n\nAlso there are two types of programming languages: compiled and scripting (at least in modern use today). Compiled languages run via a .exe file on your desktop or any other operating system. These are usually object based languages and require compiling inside of the program you use to write code (the IDE). These languages include Java. Scripting languages don't get compiled; instead they are read line by line by an interpreter (in the browser, in Javascript's case). These languages are Python, Javascript, PHP, Ruby etc.\n\n\nAlso it's good to note that HTML is a markup language not a programming one. HyperText Markup Language. This basically means that you're not writing any software with it. You can't program with it. But on the other hand you can make neat looking sites. But keep in mind if you only use HTML to make a site you'll end up with a site that belongs in the 90s. Hope that clears it up! ",
"provenance": null
},
{
"answer": "Some great responses here, but a simple way to put it is that languages have different capabilities. A language like Python is great because the language is easy to learn, e.g.\n\ndef program():\n\n    ok = \"This is my sentence\"\n    \n    print ok\n\nHowever it has fewer capabilities than a language like Java or C++. You'd tend to think that languages like Python and Ruby are lower end and Java, C, C++ are large scale.\n\nHTML is a web design language, it's fairly easy to learn and consists of tags. I like to think of it as creating the base/structure of the website. Javascript is pretty much described as adding some live/dynamic aspects to your website.",
"provenance": null
},
{
"answer": "no love for MatLab",
"provenance": null
},
{
"answer": "Basically everything reduces to C... The rest is a huge wrapper to make programming easier. Java has a 'garbage collector', which basically means it handles memory for you automatically. Other than that they are pretty close.",
"provenance": null
},
{
"answer": "Why is nobody mentioning COBOL? I feel so alone :(",
"provenance": null
},
{
"answer": "You haven't mastered code until you've mastered Brainfuck! _URL_0_",
"provenance": null
},
{
"answer": "I disagree with a lot of the posts here. Though I think it's a pretty common misconception. A programming language is not like a cultural human language. It's a domain specific language. Think of it as jargon, and not a full language. Most computer languages have fewer than 100 words. The reason there are different ones is that there are different problems. Some are similar enough to be roughly equivalent, but most have a particular field in which they excel. \n\nSailing has a set of words that are similar. Verbs like douse, hoist, trim, ease. Nouns like jib, fo'csle, halyard, sheet. Adjectives like port, starboard, lazy, working, luffing. These are words that aren't used in common English. They're used to sail. Sailors speak them, and they're significantly more efficient than \"Hey that rope attached to the third pulley from the big piece of wood sticking up there. Pull that down until the top of the biggest sail is at the top of the big wooden thing.\" Instead I say, \"Hoist the main halyard.\" In programming it might be halyards[:main].hoist().\n\nThe thing is different domains have a different set of things you want to do. When you're doing drivers, the focus is on speed and efficiency. The syntax of those low-level languages is focused on speed and efficiency. They're not very expressive. They don't have to be. Usually the tasks are relatively simple. You do something basic, but you do it over and over again really really fast. Your graphics driver, your device drivers, your com drivers are all probably like this. They're probably written in C, or something even lower level. Those languages are bulky, and they take a relatively long time to get even a simple task done. They are, however, very very fast once they're written. It's also pretty easy to tell whether they're broken, because there are only a few things they do.\n\nOnce you get to more complex things, it gets more difficult. You have to be more flexible in how to handle data, events and displays. 
To write those things in the low level languages, you need to spend a lot of time writing the same code over and over. They don't need to be as fast, though, because where you might do the same operation millions of times in a graphics driver, you might only do them hundreds of times in these programs. So as complexity increases, you can make your language more expressive. There are more decisions, but fewer loops. So the jargon shifts towards making the decisions easier to read, at the cost of making the loops slower.\n\nYou continue along that spectrum until you get to the very high level languages like ruby and javascript. These languages are agonizingly slow by comparison, but it's cool because you're often interacting with a human at this level. You're responding to a literal click or a keypress. Those happen once or twice a second at the most, not hundreds or millions of times per second. So these languages read like English for the most part. That makes them easier to understand, share the work, and debug when they break. They run more slowly, but you almost never notice, because it's not the program that's slowing things down. It's how fast you as a human can do things.\n\nThat's one axis. Languages differing by task. Top poster is right that some of the differences are relatively petty. They're cats and dogs things about whitespace, semi-colons, parentheses, etc. Even these, though, embody different cultural stances. Rubyists generally favor ease of writing over ease of reading. Pythonistas insist that code should be readable no matter who writes it. If you look at python code, it pretty much always looks the same. Three different ruby chunks look like entirely different languages.\n\nSo that's my experience of why they're different.",
"provenance": null
},
{
"answer": "Generally the distinction is one of encapsulation. \n\nProgramming languages are like cars. A driver has a couple of levers and a wheel that the car responds to, sometimes he has to fill it with gas, and very occasionally he needs to fill it with money in ways he doesn't understand. The inner workings of the car are very complex, involve a lot of optimizations, and in some cases, sacrifice performance to make this Car/Driver interface less complicated and more robust to driver errors. \n\nWhen I start doing things to the car to increase performance, perhaps the very simplest being replacing the automatic transmission with a manual, I make the car harder to drive, and almost always expose more of the inner workings of the vehicle.\n\nThis follows the general principle that, once things are engineered beyond a certain point, you are unlikely to find improvements which involve no tradeoffs, and the specific set of properties which programming languages are trying to maximize, with different weightings, is:\n\n-performing operations\n-writing code\n-debugging code\n-reading code\n-extending code\n-reusing code\n-existence of already written code\n-is the only code which runs on browsers\n-delegation of slow operations to faster languages\n\nAmong your examples, Python and Ruby are both basically competing for the spot of: slow, but easiest to use. Ruby is maybe a little slower, and a little more extensible, but you tend to use (C) native libraries to do your really big matrix operations, and they both have good support for that.\n\nJavascript is a shit language, but is the only choice for code that runs in your browser, and HTML is for page layout and object identification and doesn't really count. \n\nSome others: \n\nC is a very small language with enormous libraries. It's not very hard to use, but it is very hard to scale. It can be used to control UI elements, but it's annoying to do so. 
Ruby's reference interpreter is written in C (Ruby doesn't compile to C).\n\nJava is *almost* as fast as C, has lots of library support, is relatively straightforward, and protects against a lot of errors with a very smart compiler. It is very bitchy about what you are allowed to do with the language though, and it needs to know the type of everything at compile time, which is a much more limiting constraint than it sounds like. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "248932",
"title": "Software development",
"section": "Section::::Subtopics.:Programming paradigm.\n",
"start_paragraph_id": 81,
"start_character": 0,
"end_paragraph_id": 81,
"end_character": 456,
"text": "Just as different groups in software engineering advocate different \"methodologies\", different programming languages advocate different \"programming paradigms\". Some languages are designed to support one paradigm (Smalltalk supports object-oriented programming, Haskell supports functional programming), while other programming languages support multiple paradigms (such as Object Pascal, C++, C#, Visual Basic, Common Lisp, Scheme, Python, Ruby, and Oz).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5311",
"title": "Computer programming",
"section": "Section::::Programming languages.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 917,
"text": "Different programming languages support different styles of programming (called \"programming paradigms\"). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from \"low-level\" to \"high-level\"; \"low-level\" languages are typically more machine-oriented and faster to execute, whereas \"high-level\" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in \"high-level\" languages than in \"low-level\" ones.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2545484",
"title": "English in computing",
"section": "Section::::Programming language.\n",
"start_paragraph_id": 79,
"start_character": 0,
"end_paragraph_id": 79,
"end_character": 329,
"text": "The syntax of most programming languages uses English keywords, and therefore it could be argued some knowledge of English is required in order to use them. However, it is important to recognize all programming languages are in the class of formal languages. They are very different from any natural language, including English.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4723370",
"title": "Enumerated type",
"section": "Section::::Conventions.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 239,
"text": "Programming languages tend to have their own, oftentimes multiple, programming styles and naming conventions. Enumerations frequently follow either a PascalCase or uppercase convention, while lowercase and others are seen less frequently.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23015",
"title": "Programming language",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 494,
"text": "The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard) while other languages (such as Perl) have a dominant implementation that is treated as a reference. Some languages have both, with the basic language defined by a standard and extensions taken from the dominant implementation being common.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23015",
"title": "Programming language",
"section": "Section::::Taxonomies.\n",
"start_paragraph_id": 135,
"start_character": 0,
"end_paragraph_id": 135,
"end_character": 262,
"text": "A programming language may also be classified by factors unrelated to programming paradigm. For instance, most programming languages use English language keywords, while a minority do not. Other languages may be classified as being deliberately esoteric or not.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "189897",
"title": "Programming paradigm",
"section": "Section::::Overview.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 759,
"text": "Just as software engineering (as a process) is defined by differing \"methodologies\", so the programming languages (as models of computation) are defined by differing \"paradigms\". Some languages are designed to support one paradigm (Smalltalk supports object-oriented programming, Haskell supports functional programming), while other programming languages support multiple paradigms (such as Object Pascal, C++, Java, C#, Scala, Visual Basic, Common Lisp, Scheme, Perl, PHP, Python, Ruby, Wolfram Language, Oz, and F#). For example, programs written in C++, Object Pascal or PHP can be purely procedural, purely object-oriented, or can contain elements of both or other paradigms. Software designers and programmers decide how to use those paradigm elements.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
18t2b7 | why do modern tvs seem to increase the framerate of video, even when the footage is decades old? | [
{
"answer": "Modern televisions have a setting, often called motion interpolation or \"motion smoothing\", that is usually turned on by default and causes this effect. The way it works is by looking at two consecutive frames of the video, seeing what the differences are, and \"guessing\" what another frame between the two would look like if it had been there when the show was recorded. The TV then creates this extra frame, which gives the appearance of the TV show or film having been recorded at 48-60 FPS. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "912984",
"title": "Tektronix 4010",
"section": "Section::::Underlying concept.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 535,
"text": "Conventional video displays consist of a series of images, or \"frames\", representing single snapshots in time. When the frames are updated rapidly enough, changes in those images provide the illusion of continuous motion. This makes normal television tubes unsuitable for computer displays, where the image is generally static for extended periods of time (as it is while you read this). The solution is to use additional hardware and computer memory to store the image between each update, a section of memory known as a framebuffer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2261305",
"title": "Offline editing",
"section": "Section::::History.:New technological developments.:Cheaper video recorders.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 877,
"text": "Although video technology had the potential to be cheaper since it doesn't have the costs of film stock and have to go through the development process respectively, the quality of early video recording technology in the 1950s and even into the mid 1960s was often far too low to be taken seriously against the aesthetical look, familiarity and relative ease of editing of 16mm and 35mm film stock – which many television cinematographers used well up until the late 1980s in documentaries, dramas etc. before video technology caught up to being 'acceptable' as television cameras and camcorders eventually displaced film stock for regular television use as they became lighter and more practical to take with them. Because early video cameras were so large and so expensive, it wasn't until 1984 with the JVC VHS-C camcorder that consumers had access to video tape technology.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1342176",
"title": "Film-out",
"section": "Section::::History.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 429,
"text": "Digital video equipment has made this approach easier; theatrical-release documentaries and features originated on video are now being produced this way. High Definition video became popular in the early 2000s by pioneering filmmakers like George Lucas and Robert Rodriguez, who used HD video cameras (such as the Sony HDW-F900) to capture images for popular movies like \"\" and \"Spy Kids 2\", respectively, both released in 2002.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17340648",
"title": "Video portal",
"section": "Section::::Online video.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 229,
"text": "Devices like Apple TV or Netgear's Digital Entertainer, capable of transferring video files from the Internet to the television screen, will cause an increase in the length of the size of videos, both in definition and duration.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3943884",
"title": "High-motion",
"section": "Section::::Effects of new technology.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 315,
"text": "In the mid and late 2000s, digital video technology had started to make it possible to shoot video at the \"film look\" rate of 24 frame/s at little or no additional cost. This had resulted in less high motion on television and on the internet on Video sharing applications such as YouTube in the early to mid 2010s.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1059088",
"title": "Film look",
"section": "Section::::Differences between video and film.:Dynamic range.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 381,
"text": "Old video technology only had a 5 stop exposure dynamic range. Modern HD video cameras have up to 14 stops. The exposure range is therefore less of an issue than before, although there is still a popular belief that video is considerably worse than film in the shoulder of the gamma curve, where whites blow out in video, while film tends to overexpose more evenly and gracefully.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3943884",
"title": "High-motion",
"section": "Section::::High motion and the \"video look\".\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 787,
"text": "Until the late 1990s, programs shot on video always possessed high motion, while programming shot on film never did. (The exceptions: Certain motion simulators and amusement park rides included film projected at 48–60 frames per second, and video recorded on kinescope film recorders lost its high motion characteristic.) This had the result of high motion being associated with news coverage and low-budget programming such as soap operas and some sitcoms. Higher-budget programming on television was usually shot on film. In the 1950s, when Hollywood experimented with higher frame rates for films (such as with the Todd AO process) some objected to the more video-like look (although the inability to convert such films for projection in regular theaters was a more serious problem).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9cbiol | Why was king hussein of jordan pro-peace with Israel? | [
{
"answer": "Part 1 (scroll to my reply to this comment to read Part 2):\n\nThe Jordanian motivations go much deeper than the history of the Hussein family and the Palestinians. Instead, one has to first understand the development of Jordan and the development of the state of Israel to understand their overlap.\n\nOn the one hand, Jordan has participated in a number of conflicts against Israel, including joining the efforts to destroy it in 1948, fighting alongside Arab armies in 1967, and participating in anti-Israel actions in various international forums. At the same time, as you noted correctly, Jordan has been far friendlier to Israel than other Arab states, going so far as to propose ways to avoid war in 1948 (King Abdullah proposing in 1947, for example, a nonbelligerency pact to split the Mandate) and to allegedly even warn Israel of an impending attack in 1973 (King Hussein told Golda Meir in a secret meeting on September 25, < 2 weeks before war, that Syria's military had moved to pre-war positions). Evidently, the willingness to cooperate with Zionist leaderships has extended to before King Hussein's father (King Abdullah I) was assassinated by a Palestinian fearing peace with Israel.\n\nThe roots are both strategic and also demographic. Through a variety of factors, Jordanian leaders have been forced to confront that in many ways, their goals overlapped with those of Israel historically.\n\nJordan, for example, was just as concerned as many other Arab leaders with the prospect of taking over the Arab areas of the British Mandate. However, unlike the other parties, Jordan was also party to the best-trained military in the lead-up to the 1948 war. Its well-trained military was counterbalanced, relative to the other Arab states, by the fact that it was (and has been) the smallest country population-wise of the major Arab belligerents since 1948, meaning it also was the least competitive in gross resources to bring to bear. 
This meant that Jordan was far more willing than the other parties to conflicts with Israel to accept that a secret compromise might net them better results (i.e. an agreement to gain the Arab portions) than to compete with larger Arab neighbors, such as Egypt or Syria. This was compounded in importance by the fact that Egypt and Jordan were rivals in the lead-up to the conflict, and approached it from different perspectives. Jordan, advised by the Arab Legion's head (a British commander named John Bagot Glubb), believed that the Arab forces were naive to think they could defeat the Jewish forces, and Abdullah was most conscious of this, saying that, \"The Jews are too strong -- it is a mistake to make war\". While other Arab leaders doubted victory, their inept military commands and the Arab League's closest thing to a plan seemed to approach conflict with overly optimistic ideas. As Glubb put it, the war pitch led to a situation where \"Doubters were denounced as traitors\", but only Jordan appeared to be as clearly aware of just how poor the planning/capability was and therefore willing to seek escape from the strictures of war.\n\nBesides the question of gross resources, competition between Arab states, and awareness of military discrepancies, there existed yet another reason why Jordan and Israel often found themselves drifting together. This reason lay primarily in the shifting demographics of Jordan following the 1948 war. The influx of Palestinian refugees from this war into the West Bank and Jordan, and Jordan's desire to solidify its control over the West Bank, meant Jordan had an inherently aligned strategic interest in integrating Palestinian refugees with Israel's desire to see those refugees integrated (to avoid claims for return to Israel in the long-run, a situation that has persisted to present-day). 
At the same time, Jordan sought integration in a way to attempt to reduce Palestinian nationalism, and to buttress its own claim to the West Bank, a claim that the Arab world largely never recognized (but that Israel, in 1947, was semi-willing to accept in exchange for peace). While this proposal never included Jerusalem of course, the common ground there was far greater than the common ground with Egypt or Syria, who treated Palestinian refugees quite differently and had gained little territory as a result of the 1948 war (and the land they did gain was of questionable value, compared to the Jordanian gains of coveted Jerusalem and arable land). In fact, it is even arguable that as Jordan had gained most of what it wanted, its remaining conflict with Israel was more related to the question of Arab unity (until 1967, of course) than it was to specific claims it wished to make to Israeli territory. Jordan, rationally, feared Palestinian reactions if it sought to overtly accept Israel's existence (amplified by the assassination of King Abdullah in 1951 predicated on this fear), but it also feared Palestinian reactions if it sought too strongly to erase Palestinian nationalism without care. As a result, Jordan's balancing act largely failed: Palestinians never came to view themselves as true Jordanians, were excluded from Arab Legion combat formations, and were barred from high-ranking positions in the civil bureaucracy, for fear of potential coup (particularly since Palestinians made up such a large proportion of the population). Egypt and Syria, on the other hand, had come away from 1948 with few gains, and still coveted and fought with Israel over things like water and passageways (such as the diversions of water from the Sea of Galilee/Jordan River, or conflict over the Gulf of Aqaba). Furthermore, Egypt sought (particularly under Nasser) a role of regional hegemony, and opposition to Israel and the West was a crucial component of Nasser's strategy in mass appeal. 
His ability to gain influence in other countries was also a source of great concern to Jordan, Israel, and the West, which led them to covertly remain more friendly than they otherwise might have. Nasser's rise thereby made friends out of those who had previously fought, and who had also had less disagreement prior themselves.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "80608",
"title": "Hussein of Jordan",
"section": "Section::::Legacy.:Criticism.\n",
"start_paragraph_id": 89,
"start_character": 0,
"end_paragraph_id": 89,
"end_character": 632,
"text": "He was also seen as too lenient toward some ministers who were alleged to be corrupt. The price of establishing peace with Israel in 1994 he had to pay domestically, with mounting Jordanian opposition to Israel concentrating its criticism on the King. The King reacted by introducing restrictions on freedom of speech, and changing the parliamentary electoral law into the one-man, one-vote system in a bid to increase representation of independent regime loyalists and tribal groups at the expense of Islamist and partisan candidates. The moves impeded Jordan's path towards democracy that had started in 1956 and resumed in 1989.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30887473",
"title": "Island of Peace massacre",
"section": "Section::::Aftermath.:Jordanian reaction.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 341,
"text": "King Hussein's sincere act was an unusual act in the history of the Israeli-Arab conflict which deeply moved the mourning Israeli public and helped improve the relationship between the two countries after the attack. Nevertheless, various Jordanian individuals and groups criticized King Hussein's act for prostrating himself before Israel.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21975023",
"title": "Israel–Jordan relations",
"section": "Section::::History.:1948–1994.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 556,
"text": "The relationships between Jewish leaders in Israel and the Hashemite dynasty in the area was characterized by ambivalence as both parties' prominence grew in the area. Jordan consistently subscribed to an anti-Zionist policy, but made decisions pragmatically. Several factors are cited for this relative pragmatism. Among these are the two countries' geographic proximity, King Hussein's Western orientation, and Jordan's modest territorial aspirations. Nevertheless, a state of war existed between the two countries from 1948 until the treaty was signed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31009486",
"title": "Origins of the Six-Day War",
"section": "Section::::Events during the weeks before the war.:Jordan joins Egypt.\n",
"start_paragraph_id": 113,
"start_character": 0,
"end_paragraph_id": 113,
"end_character": 589,
"text": "However, Jordan's King Hussein got caught up in the wave of pan-Arab nationalism preceding the war;. According to Mutawi, Hussein was caught on the horns of a galling dilemma: allow Jordan to be dragged into war and face the brunt of the Israeli response, or remain neutral and risk full-scale insurrection among his own people. Army Commander-in-Chief General Sharif Zaid Ben Shaker warned in a press conference that \"If Jordan does not join the war a civil war will erupt in Jordan\". However, according to Avi Shlaim, Hussein's actions were prompted by his feelings of Arab nationalism.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21975023",
"title": "Israel–Jordan relations",
"section": "Section::::History.:2010–present.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 527,
"text": "In a meeting with the Centre for Israel & Jewish Affairs in Canada, Jordanian King Abdullah noted that Israel, which he recognizes as a vital regional ally, has been highly responsive to requests by Abdullah to resume direct peace talks between Israel and the Palestinian Authority. Promoting peace between Israel and the Palestinian Authority is a major priority for Jordan. It supports U.S. efforts to mediate a final settlement, which it believes should be based on the 2002 Arab Peace Initiative, proposed by Saudi Arabia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48276",
"title": "History of the State of Palestine",
"section": "Section::::Mandate Period.:1947 UN Partition Plan.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 403,
"text": "King Abdullah I of Jordan met with a delegation headed by Golda Meir (who later became Prime Minister of Israel in 1968) to negotiate terms for accepting the partition plan, but rejected its proposal that Jordan remain neutral. Indeed, the king knew that the nascent Palestinian state would soon be absorbed by its Arab neighbors, and therefore had a vested interest in being party to the imminent war.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7515964",
"title": "Jordan",
"section": "Section::::Politics and government.:Foreign relations.\n",
"start_paragraph_id": 58,
"start_character": 0,
"end_paragraph_id": 58,
"end_character": 656,
"text": "Jordan is a key ally of the US and UK and, together with Egypt, is one of only two Arab nations to have signed peace treaties with Israel, Jordan's direct neighbour. Jordan views an independent Palestinian state with the 1967 borders, as part of the two-state solution and of supreme national interest. The ruling Hashemite dynasty has had custodianship over holy sites in Jerusalem since 1924, a position re-inforced in the Israel–Jordan peace treaty. Turmoil in Jerusalem's Al-Aqsa mosque between Israelis and Palestinians created tensions between Jordan and Israel concerning the former's role in protecting the Muslim and Christian sites in Jerusalem.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9quaif | what exactly does a dip in economy do to a country? | [
{
"answer": "If the economy dips, fewer people are buying things; if fewer people are buying things, then companies that make things can't afford to hire so many people to make those things. Those people lose their jobs and have fewer resources to buy things, and so on, and so on. \n\nHouses are lost because the bank expects you to pay mortgages, cars are repossessed, utilities are shut off.\n\nQuality of life goes down. \n\nMore people die from diseases that could have been treated if they could afford it. \n\nCrime rate goes up. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "16985144",
"title": "Semi-periphery countries",
"section": "Section::::History and development.:Today.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 1398,
"text": "These economic downturns occur because of increased supply and decreased demand, which combine to create a shift in surplus and power to the semi-periphery. Semi-periphery regions take advantage of the situation by expanding control of their home markets and the surrounding periphery countries at the expense of core countries. The underlying reason for this shift in power lies in the basic economic principle of scarcity. As long as core countries maintain scarcities of their goods, they can select customers from semi-periphery and periphery countries that are competing over them. When excess supply occurs, the core countries are the ones competing over a smaller market. This competition allows semi-peripheral nations to select from among core countries rather than vice versa when making decisions about commodity purchases, manufacturing investments, and sales of goods, shifting the balance of power to the semi-periphery. While in general there is a power shift from core to semi-periphery in times of economic struggles, there are few examples of semi-peripheral countries transitioning to core status. To accomplish this, semi-peripheral nations must not only take advantage of weaker core countries but must also exploit any existing advantages over other semi-peripheral nations. How well they exploit these advantages determines their arrangement within the semi-periphery class.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34945002",
"title": "Circular cumulative causation",
"section": "Section::::Dynamics.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 548,
"text": "Then he argued that the contraction of the markets in that area tends to have a depressing effect on new investments, which in turn causes a further reduction of income and demand and, if nothing happens to modify the trend, there is a net movement of enterprises and workers towards other areas. Among the further results of these events, fewer local taxes are collected in a time when more social services is required and a vicious downward cumulative cycle is started and a trend towards a lower level of development will be further reinforced.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1489819",
"title": "Transition economy",
"section": "Section::::Context.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 1868,
"text": "The economic malaise affecting the Comecon countries – low growth rates and diminishing returns on investment – led many domestic and Western economists to advocate market-based solutions and a sequenced programme of economic reform. It was recognized that micro-economic reform and macro-economic stabilization had to be combined carefully. Price liberalization without prior remedial measures to eliminate macro-economic imbalances, including an escalating fiscal deficit, a growing money supply due to a high level of borrowing by state-owned enterprises, and the accumulated savings of households (\"monetary overhang\") could result in macro-economic destabilization instead of micro-economic efficiency. Unless entrepreneurs enjoyed secure property rights and farmers owned their farms the process of Schumpeterian \"creative destruction\" would limit the reallocation of resources and prevent profitable enterprises from expanding to absorb the workers displaced from the liquidation of non-viable enterprises. A hardening of the budget constraints at state-owned enterprises would halt the drain on the state budget from subsidization but would require additional expenditure to counteract the resulting unemployment and drop in aggregate household spending. Monetary overhang meant that price liberalization might convert \"repressed inflation\" into open inflation, increase the price level still further and generate a price spiral. The transition to a market economy would require state intervention alongside market liberalization, privatization and deregulation. Rationing of essential consumer goods, trade quotas and tariffs and an active monetary policy to ensure that there was sufficient liquidity to maintain commerce might be needed. In addition to tariff protection, measures to control capital flight were also considered necessary in some instances.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29018342",
"title": "Currency war",
"section": "Section::::Background.:Reasons for intentional devaluation.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 1153,
"text": "However, when a country is suffering from high unemployment or wishes to pursue a policy of export-led growth, a lower exchange rate can be seen as advantageous. From the early 1980s the International Monetary Fund (IMF) has proposed devaluation as a potential solution for developing nations that are consistently spending more on imports than they earn on exports. A lower value for the home currency will raise the price for imports while making exports cheaper. This tends to encourage more domestic production, which raises employment and gross domestic product (GDP). Such a positive impact is not guaranteed however, due for example to effects from the Marshall–Lerner condition. Devaluation can be seen as an attractive solution to unemployment when other options, like increased public spending, are ruled out due to high public debt, or when a country has a balance of payments deficit which a devaluation would help correct. A reason for preferring devaluation common among emerging economies is that maintaining a relatively low exchange rate helps them build up foreign exchange reserves, which can protect against future financial crises.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2237734",
"title": "Mundell–Fleming model",
"section": "Section::::Mechanics of the model.:Flexible exchange rate regime.:Changes in government spending.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 678,
"text": "An increase in government expenditure shifts the IS curve to the right. This will mean that domestic interest rates and GDP rise. However, this increase in the interest rates attracts foreign investors wishing to take advantage of the higher rates, so they demand the domestic currency, and therefore it appreciates. The strengthening of the currency will mean it is more expensive for customers of domestic producers to buy the home country's exports, so net exports will decrease, thereby cancelling out the rise in government spending and shifting the IS curve to the left. Therefore, the rise in government spending will have no effect on the national GDP or interest rate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2364729",
"title": "Shortage",
"section": "Section::::Effects.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 527,
"text": "In the case of government intervention in the market, there is always a trade-off with positive and negative effects. For example, a price ceiling may cause a shortage, but it will also enable a certain percentage of the population to purchase a product that they couldn't afford at market costs. Economic shortages caused by higher transaction costs and opportunity costs (e.g., in the form of lost time) also mean that the distribution process is wasteful. Both of these factors contribute to a decrease in aggregate wealth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38394281",
"title": "Economic reforms and recovery proposals regarding the Eurozone crisis",
"section": "Section::::Address current account imbalances.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 645,
"text": "A country with a large trade surplus would generally see the value of its currency appreciate relative to other currencies, which would reduce the imbalance as the relative price of its exports increases. This currency appreciation occurs as the importing country sells its currency to buy the exporting country's currency used to purchase the goods. Alternatively, trade imbalances can be reduced if a country encouraged domestic saving by restricting or penalizing the flow of capital across borders, or by raising interest rates, although this benefit is likely offset by slowing down the economy and increasing government interest payments.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
euhve0 | why do helicopters crash so much more often than other aircraft? | [
{
"answer": "When a helicopter’s engine fails, it becomes a big metal brick. A plane, however, has a shape that allows for some control and the ability to glide while trying to get the engines working again. Most planes have multiple engines as well; if one fails, the others compensate. There’s no such redundancy in helicopters.",
"provenance": null
},
{
"answer": "Depends on whether they are single or dual engine.\n\nI’d say more often than not, though, it’s pilots flying in weather they should not be flying in",
"provenance": null
},
{
"answer": "Helicopters have extremely poor glide characteristics. Your best case scenario for a malfunction resulting in loss of lift is a controlled crash. There are a lot of flight regimes in which you're basically just 100% screwed in a helicopter because you simply don't have enough velocity or altitude to make any real recovery.\n\nPlanes have a lot more options because they are much simpler machines, and most are pretty decent gliders. A small private plane that loses power at altitude can get a pretty good distance before it hits the ground. That gives the pilot some time to consider the situation, attempt to recover the aircraft, or pick a good spot to land/crash.\n\nHelicopters also tend to operate near the ground a lot more often than planes.",
"provenance": null
},
{
"answer": "Helicopters can land safely even when the engine fails. It's called \"autorotation\".\n\nThe pilot lowers the \"collective\" lever - the one which makes the helicopter go up and down. This results in the angle of the rotor blades changing, so that as the helicopter descends, the airflow through the rotor causes the rotor to rotate (and therefore giving the pilot control), and causing drag (slowing the helicopter down).\n\nHelicopters don't have much control when the engine fails, but because they can land almost anywhere, this is all the control they need - enough to pick somewhere directly below and land.\n\nMost larger helicopters have two engines, though, so an engine failure is much less of an issue.",
"provenance": null
},
{
"answer": "Your premise itself is incorrect. Helicopters crash less than fix winged aircraft. [Here](_URL_0_) is the relevant data from 2014. It is just that there have been several high profile celebrity deaths from helicopter crashes, which makes people think they are more common.",
"provenance": null
},
{
"answer": "Well, they don't crash very often. But as with most aviation incidents, especially fatal ones, the incidents are given a tremendous amount of media coverage. Car accidents are tremendously common, involving roughly 1 million crashes per year just in America, with about 30,000 deaths, but because of how common they are we barely hear about them. An airliner crash, though, is primetime news, often for days or weeks",
"provenance": null
},
{
"answer": "The people saying helicopters are less stable or inherently less safe than fixed-wing aircraft (planes) are mistaken. Autorotation (described by /u/jacklychi) allows helicopters to make controlled landings without an engine very easily, and helicopters get just as many safety inspections as any other aircraft, more or less, *if* they are for commercial transportation.\n\nThe biggest difference is that helicopters are used primarily for low, short flights between nearby destinations. Autorotation requires a certain height above the ground in order to work - that means if a helicopter loses an engine it's safer if you're *higher* up. Except that most helicopters are flying lower to the ground. That makes autorotation more difficult. And since helicopters are usually flying over populated areas it's less likely that a helicopter will be able to find a clear, open place to land. Although autorotation allows a helicopter to land safely if it has a place to land, they still don't have glide slopes - you're not going to get very far, only straight down. A plane will be flying probably much higher and although a landing without an engine is more dangerous, it will at least have more energy and lift to travel horizontally to a safer, emptier place.\n\nThe tendency for helicopters to be used for lower, shorter flights also puts them closer to obstacles like power lines, buildings, birds, and even tall trees. In the case of the Kobe crash, it crashed into a hillside. Visibility was poor. Although the cause has not yet been determined, it *could* be as simple as the pilot misreading the instruments and not seeing the hillside until it was too late to avoid it. Compare that to most passenger planes, which would be flying so high that not being able to see the terrain isn't an issue, because there isn't any.\n\nI have zero other information; I'm not saying that was the cause, I am only suggesting that it *could* be a cause, which is generally not true of most commercial passenger flights.\n\nThe other problem is that helicopters are more likely to be privately owned and operated, at least compared to big passenger planes. Those are regulated *very* strictly. Although *any* flights are very regulated, a small for-hire helicopter service may not have the same stringency applied to it as other modes of air travel. This is also true of small planes, and one of the reasons they tend to crash more often than big passenger planes.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "35520122",
"title": "List of accidents and incidents involving the Westland Sea King",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 973,
"text": "Of the accidents and incidents included in this list, six were caused by engine failure, while six flight-into-terrain accidents were recorded, with two additional hard landings resulting in serious damage to the aircraft. Seven more of the accidents and incidents were a result of problems in the helicopters' drive systems, while five others were caused by collisions, three of them being mid-air collisions - two with other Sea Kings, the third, with a Lockheed Hercules transport -, one a collision with the superstructure of a ship, and another a collision with a high-tension cable. Two accidents were the result of failure of the main rotor, with two others being caused by weather, while one accident each was caused by instrument failure, bird strike (presumed), fuel exhaustion, and on-board fire; eleven other accidents and incidents did not mention a specific cause, simply that the helicopter involved crashed or ditched without further detail being provided.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2334889",
"title": "Radio-controlled helicopter",
"section": "Section::::Construction.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 222,
"text": "The construction of helicopters has to be more precise than for fixed-wing model aircraft, because helicopters are susceptible to even the smallest of vibrations, which can cause problems when the helicopter is in flight.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6809465",
"title": "Air-sea rescue",
"section": "Section::::History.:Introduction of the helicopter.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 989,
"text": "Helicopters became frequently used, due to a number of advantages; they could fly in rougher weather than fixed-wing aircraft and could deliver injured passengers directly to hospitals or other emergency facilities. Helicopters can hover above the scene of an accident while fixed-wing aircraft must circle, or for seaplanes, land and taxi toward the accident. Helicopters can save those stranded among rocks and reefs, where seaplanes are unable to go. Landing facilities for helicopters can be much smaller and cruder than for fixed-wing aircraft. Additionally, the same helicopter that is capable of air-sea rescue can take part in a wide variety of other operations including those on land. Disadvantages include the loud noise causing difficulties in communicating with the survivors and the strong downdraft that the hovering helicopter creates which increases wind chill danger for already-soaked and hypothermic patients. Helicopters also tend to have limited range and endurance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "531385",
"title": "Lee wave",
"section": "Section::::Aviation.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 465,
"text": "The rotor turbulence may be harmful for other small aircraft such as balloons, hang gliders and paragliders. It can even be a hazard for large aircraft; the phenomenon is believed responsible for many aviation accidents and incidents, including the in-flight breakup of BOAC Flight 911, a Boeing 707, near Mt. Fuji, Japan in 1966, and the in-flight separation of an engine on an Evergreen International Airlines Boeing 747 cargo jet near Anchorage, Alaska in 1993.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "670783",
"title": "Wake turbulence",
"section": "Section::::Helicopters.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 635,
"text": "Helicopters also produce wake turbulence. Helicopter wakes may be of significantly greater strength than those from a fixed wing aircraft of the same weight. The strongest wake can occur when the helicopter is operating at lower speeds (20 to 50 knots). Some mid-size or executive class helicopters produce wake as strong as that of heavier helicopters. This is because two-blade main rotor systems, typical of lighter helicopters, produce a stronger wake than rotor systems with more blades. The strong rotor wake of the Bell Boeing V-22 Osprey tiltrotor can extend beyond the description in the manual, which contributed to a crash.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3512082",
"title": "Low-g condition",
"section": "Section::::Effects.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 540,
"text": "In contrast, low-g conditions can be disastrous for helicopters. In such a situation their rotors may flap beyond normal limits. The excessive flapping can cause the root of the blades to exceed the limit of their hinges and this condition, known as mast bumping, can cause the separation of the blades from the hub or for the mast to shear, and hence detach the whole system from the aircraft, falling from the sky. This is especially true for helicopters with teetering rotors, such as the two-bladed design seen on Robinson helicopters.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46644621",
"title": "2015 Pakistan Army Mil Mi-17 crash",
"section": "Section::::Investigation.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 693,
"text": "The helicopter crash was attributed to technical and mechanical fault, indicated by the air force inquiries. Initial military reports suggested engine failure. Developing reports later revealed a failure in the helicopter's tail rotor while it was landing, which caused it to lose control and crash. The black box was recovered. According to Foreign secretary Aizaz Ahmad, \"It was purely an accident, and accidents do happen.\" Ahmad added that the helicopter was serviced regularly, with the last service taking place 11 hours before the crash. The Chief of Army Staff and Chief of Air Staff constituted a military board of inquiry, the results of which would be made available to the public.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3ahxoz | what are options and derivatives? and what are some of the more complex securities being traded? | [
{
"answer": "A derivative is a contract you can buy and sell. For example, \"whoever holds this piece of paper can buy 100 shares of IBM from me for $50 per share.\" Now, if IBM is worth a lot more than $50 a share, that piece of paper is valuable and you can sell it -- someone will pay good money for it, and its value fluctuates as IBM's (present and future) value fluctuates.\n\nAn option is a specific kind of derivative -- one which says you have the right to buy (or sell) a certain thing at a certain price if you want, often with a certain time constraint. The above example is an option.\n\nThere are also more complex derivatives, such as \"I agree to deliver you any combination of apples and oranges totaling 100,000 kg, at a time of my choosing in November 2016, for a price equal to the then-current price of 1000 barrels of Brent crude oil, but only if the USA has exited Iraq by that time.\"",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "53595317",
"title": "Securities market participants (United States)",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 428,
"text": "This article covers those who deal in securities and futures in US markets. Securities include equities (stocks), bonds (US Government, corporate and municipal), and options thereon. Derivatives include futures and options thereon as well as swaps. The distinction in the US relates to having two regulators. Markets in other countries have similar categories of securities and types of participants, though not two regulators.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19372783",
"title": "Stock",
"section": "Section::::Stock derivatives.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 273,
"text": "A stock derivative is any financial instrument for which the underlying asset is the price of an equity. Futures and options are the main types of derivatives on stocks. The underlying security may be a stock index or an individual firm's stock, e.g. single-stock futures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13271635",
"title": "Risk–return spectrum",
"section": "Section::::The progression.:Options and futures.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 299,
"text": "Option and futures contracts often provide leverage on underlying stocks, bonds or commodities; this increases the returns but also the risks. Note that in some cases, derivatives can be used to hedge, decreasing the overall risk of the portfolio due to negative correlation with other investments.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2460491",
"title": "Financial risk",
"section": "Section::::Hedging.\n",
"start_paragraph_id": 72,
"start_character": 0,
"end_paragraph_id": 72,
"end_character": 400,
"text": "For instance, when investing in a stock it is possible to buy an option to sell that stock at a defined price at some point in the future. The combined portfolio of stock and option is now much less likely to move below a given value. As in diversification there is a cost, this time in buying the option for which there is a premium. Derivatives are used extensively to mitigate many types of risk.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9135",
"title": "Derivative (finance)",
"section": "Section::::Basics.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 678,
"text": "Derivatives are contracts between two parties that specify conditions (especially the dates, resulting values and definitions of the underlying variables, the parties' contractual obligations, and the notional amount) under which payments are to be made between the parties. The assets include commodities, stocks, bonds, interest rates and currencies, but they can also be other derivatives, which adds another layer of complexity to proper valuation. The components of a firm's capital structure, e.g., bonds and stock, can also be considered derivatives, more precisely options, with the underlying being the firm's assets, but this is unusual outside of technical contexts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48190",
"title": "Commodity market",
"section": "Section::::Derivatives.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 508,
"text": "Derivatives evolved from simple commodity future contracts into a diverse group of financial instruments that apply to every kind of asset, including mortgages, insurance and many more. Futures contracts, Swaps (1970s-), Exchange-traded Commodities (ETC) (2003-), forward contracts, etc. are examples. They can be traded through formal exchanges or through Over-the-counter (OTC). Commodity market derivatives unlike credit default derivatives for example, are secured by the physical assets or commodities.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "242855",
"title": "Futures contract",
"section": "Section::::Futures traders.\n",
"start_paragraph_id": 100,
"start_character": 0,
"end_paragraph_id": 100,
"end_character": 621,
"text": "Futures traders are traditionally placed in one of two groups: hedgers, who have an interest in the underlying asset (which could include an intangible such as an index or interest rate) and are seeking to \"hedge out\" the risk of price changes; and speculators, who seek to make a profit by predicting market moves and opening a derivative contract related to the asset \"on paper\", while they have no practical use for or intent to actually take or make delivery of the underlying asset. In other words, the investor is seeking exposure to the asset in a long futures or the opposite effect via a short futures contract.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1bj1to | Is it theoretically possible to replace all my bones (or most of the) for something made from a stronger material, like carbon fiber | [
{
"answer": "Start by reading [this](_URL_0_) to learn about the functions of bone, which go beyond the structural.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2518882",
"title": "Bone grafting",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 555,
"text": "Bone generally has the ability to regenerate completely but requires a very small fracture space or some sort of scaffold to do so. Bone grafts may be autologous (bone harvested from the patient’s own body, often from the iliac crest), allograft (cadaveric bone usually obtained from a bone bank), or synthetic (often made of hydroxyapatite or other naturally occurring and biocompatible substances) with similar mechanical properties to bone. Most bone grafts are expected to be reabsorbed and replaced as the natural bone heals over a few months’ time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23035006",
"title": "Artificial bone",
"section": "Section::::Challenges.:Biological response.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 1311,
"text": "Research on artificial bone materials has revealed that bioactive and resorbable silicate glasses (bioglass), glass-ceramics, and calcium phosphates exhibit mechanical properties that similar to human bone. Similar mechanical property does not assure biocompatibility. The body's biological response to those materials depends on many parameters including chemical composition, topography, porosity, and grain size. If the material is metal, there is a risk of corrosion and infection. If the material is ceramic, it is difficult to form the desired shape, and bone can't reabsorb or replace it due to its high crystallinity. Hydroxyapatite, on the other hand, has shown excellent properties in supporting the adhesion, differentiation, and proliferation of osteogenesis cell since it is both thermodynamically stable and bioactive. Artificial bones using hydroxyapatite combine with collagen tissue helps to form new bones in pores, and have a strong affinity to biological tissues while maintaining uniformity with adjacent bone tissue. Despite its excellent performance in interacting with bone tissue, hydroxyapatite has the same problem as ceramic in reabsorption due to its high crystallinity. Since hydroxyapatite is processed at a high temperature, it is unlikely that it will remain in a stable state.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2518882",
"title": "Bone grafting",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 320,
"text": "Bone grafting is a surgical procedure that replaces missing bone in order to repair bone fractures that are extremely complex, pose a significant health risk to the patient, or fail to heal properly. Some kind of small or acute fractures can be cured but the risk is greater for large fractures like compound fractures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4406110",
"title": "Nanofiber",
"section": "Section::::Applications.:Tissue engineering.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 1182,
"text": "Although the bone is a dynamic tissue that can self-heal upon minor injuries, it cannot regenerate after experiencing large defects such as bone tumor resections and severe nonunion fractures because it lacks the appropriate template. Currently, the standard treatment is autografting which involves obtaining the donor bone from a non-significant and easily accessible site (i.e. iliac crest) in the patient own body and transplanting it into the defective site. Transplantation of autologous bone has the best clinical outcome because it integrates reliably with the host bone and can avoid complications with the immune system. But its use is limited by its short supply and donor site morbidity associated with the harvest procedure. Furthermore, autografted bones are avascular and hence are dependent on diffusion for nutrients, which affects their viability in the host. The grafts can also be resorbed before osteogenesis is complete due to high remodeling rates in the body. Another strategy for treating severe bone damage is allografting which transplants bones harvested from a human cadaver. However, allografts introduce the risk of disease and infection in the host.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2518882",
"title": "Bone grafting",
"section": "Section::::Biological mechanism.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 409,
"text": "Bone grafting is possible because bone tissue, unlike most other tissues, has the ability to regenerate completely if provided the space into which to grow. As native bone grows, it will generally replace the graft material completely, resulting in a fully integrated region of new bone. The biologic mechanisms that provide a rationale for bone grafting are osteoconduction, osteoinduction and osteogenesis.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3853380",
"title": "Stem-cell therapy",
"section": "Section::::Veterinary medicine.:Bone repair.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 690,
"text": "Bone has a unique and well documented natural healing process that normally is sufficient to repair fractures and other common injuries. Misaligned breaks due to severe trauma, as well as treatments like tumor resections of bone cancer, are prone to improper healing if left to the natural process alone. Scaffolds composed of natural and artificial components are seeded with mesenchymal stem cells and placed in the defect. Within four weeks of placing the scaffold, newly formed bone begins to integrate with the old bone and within 32 weeks, full union is achieved. Further studies are necessary to fully characterize the use of cell-based therapeutics for treatment of bone fractures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2655103",
"title": "Tricalcium phosphate",
"section": "Section::::Uses.:Biomedical.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 297,
"text": "It can be used as a tissue replacement for repairing bony defects when autogenous bone graft is not feasible or possible. It may be used alone or in combination with a biodegradable, resorbable polymer such as polyglycolic acid. It may also be combined with autologous materials for a bone graft.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |