In addition to the fossil record, the insuperable anatomical gulfs between human beings and apes also invalidate the fairy tale of evolution. One of these has to do with walking.
Human beings walk upright, on two legs, using a special movement not encountered in any other living thing. Some mammals, such as bears and apes, have a restricted ability to move on two legs and stand upright on rare occasions for short periods of time, such as when they wish to reach a food source or scout for danger. But normally they possess a stooped skeleton and walk on four legs.
However, bipedalism (walking on two legs) did not evolve from the four-legged gait of apes, as evolutionists would have us believe.
First off, bipedalism confers no evolutionary advantage. An ape’s mode of walking is easier, faster and more efficient than a human’s. Human beings cannot move by leaping from branch to branch like apes, nor run at 125 kilometers/hour (77 miles/hour) like cheetahs. Since they walk on two legs, humans actually move very slowly over the ground, making them one of the most defenseless creatures in nature. According to the logic of evolution, there is therefore no point in apes “evolving” to walk on two legs. On the contrary, according to the survival of the fittest, human beings should have begun walking on all fours.
Another dilemma facing the evolutionists is that bipedalism is wholly incompatible with Darwin’s model of stage-by-stage development. This evolutionary model presupposes some “compound” form of walking, both on four legs and on two. Yet in his 1996 computer-assisted research, the British paleoanthropologist Robin Crompton showed that such a compound walking style was impossible. (See Compound walking.) Crompton’s conclusion was that “a living being can either walk upright, or on all fours.” A walking style between these two would be impossible, as it would consume too much energy. Therefore, it is impossible for any semi-bipedal life form to have existed. (See Origin of walking upright, the.)
One in 10 teens reported being physically abused by a boyfriend or girlfriend in the last year. Teen Dating Violence Awareness Month is a national effort to raise awareness and protect teens from violence.
You can make a difference: Encourage schools, community-based organizations, parents, and teens to come together to prevent teen dating violence.
How can Teen Dating Violence Awareness Month make a difference?
We can use this month to raise awareness about teen dating violence and take action toward a solution – both at home and in our communities.
Here are just a few ideas:
- Encourage parents to talk with their teens about healthy relationships.
- Ask teachers to hold classroom discussions about dating violence and prevention – or to invite speakers in to talk about these issues.
- Help schools create policies that support healthy relationships and involve student voices.
How can I help spread the word?
We’ve made it easier for you to make a difference. This toolkit is full of ideas to help you take action today. For example:
Take action to raise awareness about teen dating violence.
- Write a letter to a public official – like a mayor or governor – asking them to recognize Teen Dating Violence Month.
- Host an event, like a play or a poetry slam, to raise awareness in your community.
- Join a group that supports the movement against dating abuse.
- Share materials from loveisrespect about healthy relationships and the warning signs of abuse.
- If you are concerned about a loved one, reach out for support.
Adapted from Break the Cycle.
Contact Break the Cycle at teenDVmonth@breakthecycle.org for more information and materials.
Women and lesbians: discrimination on multiple grounds, an ILGA panel at the UNCHR
The discrimination that lesbian and bisexual women face is not only connected to their gender and their sexual identity. They also suffer from discrimination based on their social class, religion, “race,” minority background, age, disability, etc.
This was illustrated with various examples by all the panellists.
Claudine Ouellet, a human rights lawyer, described how including discrimination as a prohibited ground in the national constitution, in international conventions or elsewhere in a bill of rights can help protect the most vulnerable groups of people.
She gave the example of the Canadian and the South African constitutions as being the most advanced in the protection against all forms of discrimination. Discrimination based on sexual orientation has been included in Canadian law since 1977! “This is a dream in many countries where homosexuality is still illegal, like Sri Lanka,” pointed out Rosanna Flamer-Caldera, moderator of the panel and co-secretary general of ILGA. She reported on how multiple facets of discrimination have become interwoven in Sri Lankan culture. Often, for a lesbian woman, being part of a minority group means being poor and suffering violence, including within her own family, by having to accept a forced marriage.
Susana Fried, of IGLHRC, gave examples of lesbians and bi-sexual women suffering discrimination on various grounds, even in areas where they would expect protection: within their family, the police, health service providers, and social groups.
She also stressed the difficulty of defining a lesbian or a bisexual woman, as in many instances these terms do not correspond to how they define themselves. The hetero-normativity that only accepts male and female makes these definitions even more difficult. And if you are not part of the norm you simply do not exist.
Dorothy Aken’ova pointed out that in many African countries women have no rights, only obligations, and it is not even possible to speak of a woman choosing another way of life or being different from the norm. It is an exclusively patriarchal, male-dominated system. In order simply to survive you need to have a man in your life.
Anna Leah Sarabia described the relation between sexual orientation and gender identity.
A woman is particularly affected by these strict standards: a woman must be heterosexual, feminine and a virgin until she gets married. Though society accepts only two sexes and two gender identities, the male masculine hetero and the female feminine hetero, the social sciences describe at least 48 types of gender identities.
She also underlined the double discrimination suffered by lesbian and bisexual women because of their gender and because of their sexual orientation. The task of a lesbian feminist is “to make people aware of these other 48 gender identities,” she said, “so that every person who does not fall into the hetero-normative standard is given the same respect and dignity.”
15-Nov-2002 | Judaism / Judaism Today
Unearthing clues to the biblical past
Restored statue of a Semite who occupied a high office in ancient Egypt. Could this be Joseph of Egypt?
Many archaeologists treat the Bible more as a work of myth than of history. Helen Jacobus meets the author of a new book, David Rohl, who argues that they are wrong.
Egyptologist David Rohl is well known for upsetting the academic world. While many scholars of ancient history see the Bible as little more than fiction, Rohl regards it as an historical document whose stories can tell us about real events and characters who lived thousands of years ago.
His best-known book, “A Test of Time: From Myth to History” was turned into a three-part television documentary, “Pharaohs and Kings.” It revealed, among other details, that a smashed-up, cult statue of a Semitic high official, or vizier, had been found in a mud-brick pyramid tomb in an area in Egypt where, according to the Bible, the Israelites had been enslaved. The vizier had pale skin, red hair and wore a coat of colours. Rohl argued the statue was a representation of Joseph.
His new book, “The Lost Testament,” synthesises all his research with archaeological findings, to retell the Bible epic from the Garden of Eden to the exile of the Jews in Babylon. It is, he says, the culmination of 25 years’ work.
Speaking in his drawing room in Tunbridge Wells in Kent, which is adorned with replicas of ancient Egyptian and classical Graeco-Roman artefacts, he declares: “The Bible should be treated like any other ancient document. My approach is: let’s look at the document and see what we can find out about the history of the period and see if it’s consistent with the archaeology, rather than approaching it like some mythical text in the first place.
“The whole basis upon which the Jewish religion rests is the Passover and the Exodus from Egypt. That’s where it all began. And if that never happened, what do you do with the whole of Jewish tradition?”
As a child, Rohl had a “passion for ancient Egypt,” teaching himself hieroglyphics and the names of all the pharaohs. His mother encouraged his interest by taking him on an unforgettable trip down the Nile at the age of 10. Brought up as a Catholic, in Manchester, Rohl describes himself as “completely agnostic, with no religious beliefs as such.”
A successful career in the music industry, as a producer and engineer, enabled him to earn enough to return to his “first love,” and study for a degree in ancient history and Egyptology at University College London.
It was while doing his thesis on the last dynasties of pharaohs, from the 11th to the 5th centuries BCE, that he “found that scholars had artificially made the period too long.”
So he revised the dates downwards. The effect of that was to make the most famous pharaoh, Rameses II, not the pharaoh of the Exodus to whom Moses said “Let my people go,” in about 1279 BCE according to conventional dating, but a king contemporary with Solomon — in about 943 BCE. “By changing the dates of the pharaohs, such as Rameses II, there were obvious implications for biblical history,” he says.
Rohl, in fact, puts the Exodus around 1447 BCE, a couple of hundred years earlier than the conventional biblical dating. He describes his historical dating as the “New Chronology,” in contrast to the previously accepted “Old Chronology.” The “mistake” in the Old Chronology, he argues, stems from the Victorian era, when excavators “held the Bible in one hand, and the trowel in the other.”
Victorian biblical historians regarded Rameses II as the pharaoh of the Exodus, based on the names of cities mentioned in the Bible where the children of Israel were enslaved. But the cities of Pithom and Rameses, known as Pi-Ramesse to the ancients, Rohl says, were anachronistic city names for a far older name of the same place, Avaris. They were inserted by later text editors in antiquity, he suggests, so that people could identify the place.
To compound the error, Victorian historians also wrongly identified the Egyptian pharaoh Shoshenk I with the Shishak of the Bible, who sacked Solomon’s temple. “They added up dates of the kings before him to get a date for Rameses II, and thus a [wrong] date for Moses,” he maintains.
Rohl backs up his theory by pointing out that the archaeological dates for the destroyed walls of Jericho are in the 15th century BCE. But in the conventional system, Joshua is placed in 1200 BCE. “Archaeologists are looking in the right place, but in the wrong time,” he says.
He has a similar dispute with Professor Israel Finklestein, head of archaeology at Tel Aviv University, who has claimed there was no conquest of Canaan, no Joshua, and no Davidic nor Solomonic empire.
Although Rohl does not challenge Finklestein’s dates for David and Solomon — regarded by the Israeli as no more than tin-pot tribal chieftains — he believes that Israeli archaeologists have assigned incorrect archaeological eras to their kingdoms.
In other words, David and Solomon have been placed in the Iron Age, when there is a dearth of monuments and artefacts, a kind of archaeological Dark Ages in the Ancient Near East. But according to Rohl, King David and his son belong to the earlier Late Bronze Age, a period of great wealth.
“It’s like finding a Coca Cola tin and ascribing it to the Tudor period,” says Rohl, “and then, uncovering a skyscraper and concluding it was built in the reign of Elizabeth I.
“‘They say: ‘This is when Moses existed,’ and they look for evidence for the date and time, and there is no evidence. We say: ‘Moses was around in 1447 to 1450 BCE and there is evidence.’”
Rohl has also searched for the geographical basis of the Garden of Eden, as described in Genesis, and accordingly he begins “The Lost Testament” on the border between eastern Turkey and western Iran, where he has located what he says was the Garden, 7,000 years ago.
When he set out more than two decades ago, it was “to solve the puzzle of Egyptian chronology.” He had not expected his findings to have an impact on biblical chronology, too. “The Bible story is the heart and foundation of our culture. If the Egyptian chronology affected the Bible, it was important to investigate. I didn’t set out to prove that the Bible was true,” but he is now convinced that it is based on “real history.”
“The Lost Testament: From Eden to Exile — The Five-Thousand Year History of the People of the Bible,” David Rohl, Century, £18.99
David Rohl will be speaking on “The Bible — Myth or Reality?” at Northwood Synagogue, Murray Road, Northwood, Middlesex, on December 11, at 8pm.
For the rest of Rohl's views, see this week's edition of the JC.
Taxonomic name: Anopheles quadrimaculatus Say, 1824
Synonyms: Anopheles annulimanus Wulp, 1867
Common names: common malaria mosquito, Gabelmücke (German)
Organism type: insect
Anopheles quadrimaculatus, a mosquito, is the chief vector of malaria in North America. This species prefers habitats with well-developed beds of submergent, floating leaf or emergent aquatic vegetation. Larvae are typically found in sites with abundant rooted aquatic vegetation, such as rice fields and adjacent irrigation ditches, freshwater marshes and the vegetated margins of lakes, ponds and reservoirs.
Anopheles quadrimaculatus is described as a large, dark brown mosquito. The tarsus is entirely dark (The Ohio State University Mosquito Pest Management Bulletin, 1998). O'Malley (1992) reports that, "All Anopheles adults are characterized by an evenly rounded scutellum and palpi about as long as the proboscis. A. quadrimaculatus is a medium-sized species. Wings are entirely dark scaled and 4 mm or more in length. Scutal bristles are short and wings are spotted with patches of dark scales. The tip of the wing is dark without copper-colored fringe scales. The palpi have dark scales and are unbanded, and the wing has 4 distinct dark-scaled spots."
Rafferty et al. (2002) found, "A simple method for rapid identification of large numbers of Anopheles mosquitoes based on polymerase chain reaction (PCR) amplification of rDNA." The authors state that, "This method allows rapid analysis of large numbers of mosquitoes without robotic equipment and should enable rapid and extensive PCR analysis of field-collected samples and laboratory specimens."
Anopheles diluvialis, Anopheles inundatus, Anopheles maverlius, Anopheles smaragdinus
agricultural areas, lakes, riparian zones, urban areas, water courses, wetlands
Chase and Knight (2003) state that, "Many species of mosquitoes are habitat generalists which breed, grow as larvae and emerge from a wide variety of aquatic habitats .." O'Malley (1992) reports that, "In North America, most anophelines prefer habitats with well-developed beds of submergent, floating leaf or emergent aquatic vegetation. Larvae are typically found in sites with abundant rooted aquatic vegetation, such as rice fields and adjacent irrigation ditches, freshwater marshes and the vegetated margins of lakes, ponds and reservoirs. Investigators have suggested that aquatic vegetation promotes anopheline production because it provides a refuge for larvae from predators, such as Gambusia affinis. Additional hypotheses for the beneficial effects of aquatic vegetation include: enhanced food resources in vegetated regions, shelter from physical disturbance and favorable conditions for oviposition (Orr and Resh 1989)."
Comparing and contrasting different mosquito species, Chase and Knight (2003) state that, "Although these species have somewhat distinct habitat preferences, they readily lay eggs in, and emerge from wetlands of all types (Carpenter & LaCasse 1955). Although A. quadrimaculatus will also breed in smaller water-filled habitats (e.g. containers, ditches), which are often associated with humans, wetlands provide a much greater area for potential larval habitats, and often produce many more adult mosquitoes, than the smaller habitats traditionally associated with mosquito control." The Ohio State University Mosquito Pest Management Bulletin (1998) reports that, "These mosquitoes breed chiefly in permanent freshwater pools, ponds and swamps that contain aquatic vegetation or floating debris. Common habitats include borrow pits, sloughs, city park ponds, sluggish streams and shallow margins of reservoirs and lakes. During the daytime, adults remain inactive, resting in cool, damp, dark shelters such as buildings, and caves."
Anopheles quadrimaculatus Say is historically the most important vector of malaria in the United States. Malaria was a serious plague in the United States until its eradication in the 1950s (Rutledge et al. 2005). However there are still occasional cases of local transmission of malaria in the United States vectored by A. quadrimaculatus in the east and Anopheles freeborni in the west (CDC 2005 in Rios and Connelly, 2008).
This mosquito is susceptible to infection with the malaria-causing Plasmodium falciparum, Plasmodium vivax and Plasmodium malariae (Carpenter and LaCasse 1955). The Ohio State University Mosquito Pest Management Bulletin (1998) reports that A. quadrimaculatus is the most important vector of malaria attacking humans in the eastern United States and can be found frequently in houses and other shelters. Their bites are less painful than those of many other mosquitoes and often go unnoticed.
A. quadrimaculatus can also transmit Cache Valley virus (CV) (Blackmore et al.) and West Nile virus (CDC, 2007), and transmission of St. Louis encephalitis has been obtained with this species in laboratory experiments (Horsfall 1972 in O’Malley, 1992).
A. quadrimaculatus has been found to be an excellent host for dog heartworm (Dirofilaria immitis). According to Lewandowski et al. (1980), this is probably one of the most important species involved in the natural transmission of dog heartworm in Michigan. In central New York, this species was also the most efficient host of dog heartworm out of several species tested, both in the laboratory and the wild (Todaro and Morris 1975).
A. quadrimaculatus can be a vector for the myositic parasite Trachipleistophora hominis. Weidner et al. (1999) found that microsporidian spores of T. hominis Hollister, isolated from a human, readily infected larval stages of A. quadrimaculatus. The authors state that, "Nearly 50% of the infected mosquito larvae survived to the adult stage. Spores recovered from adult mosquitoes were inoculated into mice and resulted in significant muscle infection at the site of injection".
Levine et al. (2004) report that, "A. quadrimaculatus was considered to be a single species until biological evidence necessitated subdivision into a species complex in the late 1900s. A combination of genetic crossing, isozyme, and cytological information convincingly showed that there are at least five species in the group and they include: A. quadrimaculatus, A. smaragdinus, A. diluvialis, A. inundatus, and A. maverlius." The A. quadrimaculatus complex as a whole is often referred to as A. quadrimaculatus (sensu lato), whereas A. quadrimaculatus (sensu stricto) refers to the individual species (Rios and Connelly, 2008). The authors state that A. quadrimaculatus is the most widely distributed of the species complex in the eastern United States and southeastern Canada (Seawright et al. 1991).
In the United States, O'Malley (1992) states that, "A. quadrimaculatus is a clean water-loving mosquito. The current wetlands regulations could be seen as actually impeding our efforts to control this mosquito. By improving water quality within water management project sites per the regulations, we are actually increasing the number of habitats available."
Native range: North America; Anopheles quadrimaculatus has a distribution that covers much of the eastern United States. Its range extends from southern Canada to the Florida Everglades, and to the west from Minnesota to Mexico (Kaiser, 1994). A distribution map is given in Levine et al. (2004).
O'Malley (1992) reports that, "A. quadrimaculatus larvae are indiscriminate feeders whose natural food includes a wide range of aquatic organisms, both plant and animal, as well as detritus. This food may be living or dead at the time of ingestion. The main criterion in selecting food seems to be whether the suspended material is small enough to eat. When feeding, A. quadrimaculatus larvae lie horizontally, with the dorsal side just under the surface film. The head rotates 180 degrees horizontally so that it is actually upside down and the venter of the head is dorsal. Feeding is either "eddy feeding" or "interfacial feeding". Eddy feeding is employed for infusions when the surface contains islets of floating oil materials. Two eddies with converging streams unite in front of the larva to form a current toward the mouth from a distance of about half the length of the larva. Efferent currents flow outward at right angles to the body from the antenna. Particles too large to eat are held by the maxillae, drawn below the surface and discarded as the head is rotated to the normal position. Interfacial feeding on the membranes of algae, bacteria, debris and fungi is common in nature. Feeding in this manner is accomplished by setting up currents which draw particles to the mouth from all directions in a straight line and at nearly equal velocities. Surface tension of the larval habitat determines the type of feeding. Eddy feeding occurs at a surface tension of less than 60 dynes per square cm; interfacial feeding is practiced in habitats with a surface tension above 62 dynes per square cm." O'Malley (1992) reports that, "Mosquito feeding patterns are largely regulated by host availability and preference (Apperson and Lanzaro 1991). Female A. quadrimaculatus are primarily mammalian feeders and actively feed on man and on wild and domesticated animals. As noted previously, this is a significant pest species. Females repeatedly seek their hosts, often visiting the same feeding site several times during the course of a bloodmeal."
Chase and Knight (2003) state that, "Larvae of the two most common mosquito species encountered in the natural and artificial wetlands, A. quadrimaculatus and C. pipiens, and other types of mosquito larvae, utilize different feeding behaviours and have slightly different diets (e.g. Merritt et al. 1992). They are both generalists, however, and readily consume detritus, microbes and algae, both from the benthos and the water column. As such, they are likely to compete for resources with several other co-occurring species."
The Ohio State University Mosquito Pest Management Bulletin (1998) reports that, "Anopheles quadrimaculatus eggs are laid singly on the water surface with lateral floats to keep them at the surface. One hundred or more eggs are laid at a time. A single female may lay as many as 12 batches of eggs and a total of more than 3,000 eggs." O'Malley (1992) reports that, "Mating occurs as soon as the females emerge. Males wait in nearby vegetation and seek females as they begin to fly. Copulation is completed in flight and takes 10-15 seconds. One insemination is usually sufficient for the fertilization of all eggs."
Floore (2004) states that, "The mosquito goes through four separate and distinct stages of its life cycle: egg, larva, pupa, and adult. Each of these stages can be easily recognized by its special appearance." Egg stage: Eggs are laid one at a time or attached together to form "rafts." They float on the surface of the water. In the case of Culex and Culiseta species, the eggs are stuck together in rafts of up to 200. Anopheles, Ochlerotatus and Aedes , as well as many other genera, do not make egg rafts, but lay their eggs singly. Culex, Culiseta, and Anopheles lay their eggs on the water surface while many Aedes and Ochlerotatus lay their eggs on damp soil that will be flooded by water. Most eggs hatch into larvae within 48 hours; others might withstand subzero winters before hatching. Water is a necessary part of their habitat.
Larval stage: The larva (plural - larvae) lives in the water and comes to the surface to breathe. Larvae shed (molt) their skins four times, growing larger after each molt. Most larvae have siphon tubes for breathing and hang upside down from the water surface. Anopheles larvae do not have a siphon and lie parallel to the water surface to get a supply of oxygen through a breathing opening. Coquillettidia and Mansonia larvae attach to plants to obtain their air supply. The larvae feed on microorganisms and organic matter in the water. During the fourth molt the larva changes into a pupa (Floore, 2004).
Pupal stage: The pupal stage is a resting, non-feeding stage of development, but pupae are mobile, responding to light changes and moving (tumble) with a flip of their tails towards the bottom or protective areas. This is the time the mosquito changes into an adult. This process is similar to the metamorphosis seen in butterflies when the butterfly develops - while in the cocoon stage - from a caterpillar into an adult butterfly. In Culex species in the southern United States this takes about two days in the summer. When development is complete, the pupal skin splits and the adult mosquito (imago) emerges (Floore, 2004).
Adult:: The newly emerged adult rests on the surface of the water for a short time to allow itself to dry and all its body parts to harden. The wings have to spread out and dry properly before it can fly. Blood feeding and mating does not occur for a couple of days after the adults emerge (Floore, 2004).
This species has been nominated as among 100 of the "World's Worst" invaders
Principal sources: Shiff, 2002. Integrated Approach to Malaria Control.
Levine et al. 2003. Distribution of Members of Anopheles quadrimaculatus Say s.l. (Diptera: Culicidae) and Implications for Their Roles in Malaria Transmission in the United States.
Compiled by: National Biological Information Infrastructure (NBII) and Invasive Species Specialist Group (ISSG)
Motility refers to the movement of contents through the gastrointestinal tract from mouth to anus.
Esophageal (esophagus) Motility Study, Antroduodenal (stomach, upper small intestine) Manometry and Colonic (large intestine) Motility Study
These tests are performed by placing catheters with special pressure sensors in the esophagus, stomach, upper small intestine and large intestine to measure the frequency and strength of muscular contractions in these areas of the body. These tests may be performed in children with longstanding swallowing problems, abdominal distention, feeding intolerance or chronic constipation to see if their symptoms are due to a motility problem.
A device with three small balloons is placed in the child's rectum to measure muscle contractions. Younger children are sedated with general anesthesia while older children are usually able to remain awake during the test. This test is performed in children with severe constipation and may also be used to diagnose Hirschsprung's disease.
Aristotle was the originator of the word ethics. His influence on the study of ethics comes mostly from his works such as Nicomachean Ethics. Aristotle argued that all of man's pursuits seek to bring about some form of good - some happiness - for one's self or others. Where we differ is in what we perceive as happiness. Aristotle, unlike others before him, applied a scientific approach to his study of ethics.
Feudalism was the major political system of the Middle Ages. A lord's (or lady's) lands were worked in vassalage by serfs and freemen, who owed their liege farm goods and work for the privilege of his protection. The lord might then owe his services in war to a lord or church official over him, who in turn owed the king of the country (Rowling 42).
However unsophisticated (or, well, feudal) the system may seem, it was agreed on by most medieval political theorists. The 12th-Century cleric John of Salisbury declared that the king existed for the benefit of the people, not vice-versa. Therefore, a king who truly did his job was a king; a usurper who was merely trying to fill his own pockets was a tyrant and was not intended by God to rule (Hoyt 396). In 1301, Egidius Romanus concluded that since all power derived from God, the pope was the supreme ruler of the Christian world. Through him, kings gained the Divine Right to Rule (398). Authority came from right, not might. Right came from God, but not through any particular moral or intellectual fitness (401).
WOMEN IN POLITICS
Women were not represented in the town councils, so ordinary women had no voice in local politics. Women who did involve themselves in politics were wealthy, clerical, or upper-class, and their politics were often on an international scale (Gies and Gies, City 53-4).
For women, the secular political scene consisted mainly of flesh trafficking. A woman was technically under her father's control until she married, at which time she was supposed to be completely obedient to her husband's will. The lands that came with a bride at marriage were valued commodities, as were the sons she would produce. To a miserly father, daughters represented only the potential loss of lands when they married. Though peasants had some free choice in marriages, upper-class women rarely did. Their lands and potential for childbearing were far too important to be given away indiscriminately (Gies and Gies, Castle 78). In this way, the lady was often a tool in politics, used by men for their own purposes. Though a woman could hold property, receive inheritance, participate in trade, and go to court, she was always under a man's guardianship -- her father's, her husband's, or that of another male relative (Gies and Gies, Castle 78).
To intelligent, resourceful women, a marriage of convenience did not always provide such a bleak outlook. Women who were married to young, weak, ignorant, inexperienced, absent, or tolerant husbands could take control of the husband's politics. Queens often waited until their husbands were away at the Crusades or some other war to begin to change things at home. In turn, they used their sons and daughters as networks of connections; and as no dutiful child could neglect his or her mother's wishes, a mother could often accomplish much. Women politicians, in keeping with the beliefs of the cult of the Virgin, were often regarded as intercessors (Moriarty 93).
Eleanor of Aquitaine, queen of Louis VII of France and later queen consort of the younger Henry II of England, became a powerful force in Church and secular politics. To find out more about her and other women active in politics, see the biographies page.
This was another attempt to mislead people by the Fed. I hope that my answers, kept in the same simple idiom as the Fed’s, are helpful.
In 2006 the Federal Reserve decided it was time to begin to reach out and influence middle schoolers with the party line about the Fed, and launched the Federal Reserve Kids Page. Consisting of 10 harmless-appearing questions, either in English or Spanish, the Fed’s answers gloss over, and sometimes deliberately misstate, the correct answers:
What is the Federal Reserve System?
Fed’s answer: The Fed is a bank for other banks and a bank for the federal government. It was created to provide the nation with a safer, more flexible, and more stable monetary and financial system.
Better answer: The Fed is a cartel (a formal organization of service providers that agrees to fix prices and limit competition. The aim of such collusion is to increase individual members’ profits by reducing or eliminating competition) which was formed to protect its member banks from suffering the consequences of bank runs (which occur when a bank’s customers try to withdraw their money all at once because they’re afraid the bank is, or might become, insolvent and unable to pay off their customers).
Thus, the Fed is known as the “lender of last resort” which means that if one of the cartel’s banks gets in trouble, it can go to the Fed for money to tide the bank over until the run on the bank ends.
The Fed has no money of its own, but it can print money that looks like real money, which people will accept because other people will take it in trade for goods and services. This is often called fiat currency (money that has value only because the government says it does).
The Fed’s answer that the Fed was “created to provide the nation with a safer…monetary and financial system” must be challenged with the question, Safer than what? What was used as money before it became fiat or paper money? How does a person define safe for himself? At present, a piece of paper currency may be easily exchanged for goods and services at the store. It is easily recognized, and easily divisible into smaller or larger denominations. It is convenient to carry around, and since it represents purchasing power, it gives some people a sense of comfort and security in the event of an emergency. But will that piece of paper money always be able to buy the same amount of goods and services in the future? Is it safe to store paper money away to be spent later, perhaps much later?
The Fed also says it was created to provide the nation with a more flexible monetary system. What does flexible mean? Does it mean that paper is easier to carry than coins or checks or credit cards? Does it mean that money can be moved around from one bank to another quickly in the event of an emergency? Does it mean that the Fed can take some of your money and give it to someone else without your permission?
The Fed says it was created to provide the nation with a more stable system. What does stable mean? Does it mean that people everywhere will always accept your paper money when you want to buy things? Does it mean that prices will never change? Does it mean that we will never have to worry about runs on the banks because experts at the Fed are watching things, so we don’t have to? Does it mean that we can make plans for the future without having to worry about using our money then?
Who created the Federal Reserve, and when was it created?
Fed’s answer: Congress created the Federal Reserve System on December 23rd, 1913, with the signing of the Federal Reserve Act by President Woodrow Wilson.
Better answer: The creation of the Fed in 1913 was the culmination of efforts by bankers to create a central bank since the end of the Civil War in 1865. Resistance to central banking was very strong and as a result much of the negotiating had to take place secretly or else the public would find out and stop it. The most secret meeting took place in November, 1910 on Jekyll Island, off the coast of Georgia, where seven men (bankers affiliated with the biggest banks in the country) met to draw up plans for the Fed.
Here are some questions to ask about the creation of the Fed. Why was it done in secret? Why would the public have stopped the Fed if it had known about it beforehand? What didn’t the public like about banks and bankers after the Civil War? What was wrong with the money system that needed to be fixed? Did the Fed fix what was wrong with the money system?
What is the Board of Governors?
Fed’s answer: The Board of Governors oversees the Federal Reserve System. It is made up of seven members who are appointed by the President and confirmed by the Senate.
Better answer: The Fed’s answer is accurate, but it doesn’t go very far. Who are the people who sit on the Board? Where did they come from? Are they bankers themselves? Who nominated them? Do they have special expertise and skills that qualify them? Are they political pay-offs to friends inside the banking system?
Who are the members of the Board of Governors?
Fed’s answer: Click here to find out.
Where is the Federal Reserve Board of Governors located?
Fed’s answer: The Federal Reserve Board of Governors is located in Washington, DC.
What are the twelve Federal Reserve Districts?
Fed’s answer: Click on Question 6 here
What are some of the main responsibilities of the Federal Reserve System?
Fed’s answer: “Conduct… monetary policy to help maintain employment, keep prices stable, and keep interest rates relatively low.” “To make sure [the banks] are safe places for people to keep their money.” “Provide financial services [such as] clearing checks, processing electronic payments, and distributing coin and paper money to the nation’s banks, credit unions, savings and loan associations, and savings banks.”
Better answer: How does the Fed do that? How does the Fed keep prices stable when the paper money supply can be expanded at any time, which makes each piece of paper worth less? How does it maintain employment? With unemployment higher now than it has been in years, does that mean the Fed is failing in its purposes? How does it know how low interest rates should be? Isn’t that set by people making purchases and loans in the marketplace? What if the interest rates that the Fed sets are too low? Won’t that trick people into making bad economic decisions, such as when to buy a house or invest in a business? What if interest rates are too high? Won’t that slow down economic activity, and put people out of work? And why is the Fed involved in processing checks and making sure that banks have enough coin and currency? Isn’t that something a private company could do just as well?
What are interest rates and why are they important?
Fed’s answer: Interest rates are the prices people pay to borrow money or are paid to lend money. Interest rates, like other prices, are determined by the forces of supply and demand…the Federal Reserve System is able to affect the level of interest rates through its monetary policy.
Better answer: Part of the Fed’s answer is true: interest rates are the prices people pay to borrow and lend money, and they are “determined by the forces of supply and demand. ” But then the Fed contradicts itself, doesn’t it, when it says it’s “able to affect the level of interest rates through its monetary policy”? Why would it want to affect interest rates if people are already setting interest rates through supply and demand? If the Fed lowers interest rates, won’t that mislead people into thinking that money is cheap and then decide to borrow while rates are low? If lots of people borrow while rates are low, what happens if those rates go up? What if lots of people make bad decisions all at the same time? Won’t that create an imbalance in the economy? What happens if lots of people can’t pay back those cheap loans? What does the Fed do then? What do those people who borrowed the money do?
What is inflation?
Fed’s answer: Inflation means that the general level of prices of goods and services is increasing. When inflation is rapid, the prices of goods and services can increase faster than consumers’ income, and that means the amount of goods and services consumers are able to purchase goes down. In other words, the purchasing power of money has declined. With inflation, a dollar buys less and less over time.
Better answer: Inflation as defined by the Fed is the increase in the prices of goods and services. But what made those prices go up in the first place? Some prices might go up due to a shortage, or a poor crop, or a disruption in the manufacturing process. But other prices might go down as companies figure out ways to make things cheaper, or because consumers no longer want to buy a particular item. For example, a desktop computer 10 years ago might have cost $3,000, but today might cost only $500. So prices go up and down, according to supply and demand. But if you see prices of everything go up at the same time, isn’t that different? Doesn’t that mean that something else is responsible? Think about it: if the Fed is in charge of the supply of money, and it decides to increase the supply of money, what does that do to each piece of money already in your pocket? Doesn’t it mean that each piece is worth less and will consequently purchase less when you go to spend it?
The Fed’s answer is backwards. They increase the supply of money (to keep interest rates low, let’s say). That is inflation. When that new money comes into the market place, it means that each piece of money is worth less which is then reflected as higher prices. In other words, higher prices are the result of the inflation that the Fed already created by increasing the supply of money. Think of it this way: prices aren’t going up. The value of each piece of paper money is going down. That’s the result of inflation.
What is the FOMC, and what does it do?
Fed’s answer: FOMC stands for the Federal Open Market Committee. The FOMC consists of twelve members…The purpose of the FOMC is to determine the nation’s monetary policy…The actions taken…affect…the prices of goods and services.
Better answer: There is no better answer! The FOMC decides to adjust interest rates according to their “policy”, and increases the supply of money accordingly, which then “affect[s]…the prices of goods and services. ” In other words, when there is inflation (prices rising in the marketplace) it’s because the Fed wanted it that way.
Minerals are elements that originate in the Earth and cannot be made by living organisms. Plants obtain minerals from the soil, and most of the minerals in our diets come directly from plants or indirectly from animal sources. Minerals may also be present in the water we drink, but this varies with geographic locale. Minerals from plant sources may also vary from place to place, because soil mineral content varies geographically.
The information from the Linus Pauling Institute's Micronutrient Information Center on vitamins and minerals is now available in a book titled "An Evidence-Based Approach to Vitamins and Minerals." The book can be purchased from the Linus Pauling Institute or Thieme Medical Publishers.
The ultimate guide to effects: delay
7th Jun 2011 | 13:54
Find out how it works and which delay plug-ins you should consider
Is there an echo in here? If you happen to be sitting in a recording studio, chances are there is indeed an echo (or two) sitting in the rack or on the hard drive. Echoes, by far the most common function of the delay effect, make an appearance on just about any mix you can name.
In the earliest days of rock 'n' roll, slapback echo adorned the vocal recordings of Elvis Presley, Buddy Holly and others. It was an easy sound to create, by simply feeding a tape back to the record head without erasing the material that already existed on the tape.
This was often achieved using a pair of tape machines, but it wasn't long before dedicated echo boxes were introduced to grateful musicians, particularly those of a psychedelic bent.
Needless to say, these tape-based units were quickly overtaken by their digital descendants, and the initials DDL (for 'digital delay line') became common parlance among musicians and studio engineers. These units worked by sampling the incoming audio and replaying it alongside the dry signal. They would serve as the templates for modern software delays, even if they were quite simple by comparison.
Today's delay effects are as diverse as the producers who use them. They run the gamut from simple digital echo boxes that play back exactly what you put into them, to much more elaborate devices that provide synchronised echoes and a bevy of modulation options. Some developers go to great lengths to provide dead-on accurate emulations of the gritty, grainy digital and tape delays of decades past.
Back to basics
So what exactly is a delay, and how does it work? At the simplest level, a delay does just what the name implies: it delays an incoming signal. Old-school units used tape or digital sampling technology, whereas modern plug-ins operate by recording the incoming data and storing it in a buffer.
Normally, the length of delay is determined by the user and the buffer will spit out the signal after the specified length of time. Some plug-ins provide a number of interesting ways to manipulate the buffer's output, however, resulting in backwards signals or any number of odd, glitchy effects.
Delayed playback is fine, but it means very little if it is not paired with the dry signal. That isn't to say that it has no use at all - in fact, a delayed signal can be used to adjust timing or phase discrepancies. For the classic echo effect, though, the dry signal needs to be present. Like most effects, delay units usually provide a means by which the original signal and the processed signal are mixed together.
In addition to the basic delay effect, there is usually some form of 'feedback' or 'regen(eration)' on offer. This function determines how much of the effected signal is fed back through the effect, providing multiple repetitions of the processed signal. Each one of these repetitions will gradually fade out as new ones are created, unless you have the feedback cranked up full, in which case the signal will keep compounding until it reaches a deafening squall.
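To make the buffer idea concrete, here is a minimal sketch of a software delay line in Python with NumPy. It is not taken from any particular plug-in (the function and parameter names are invented for illustration), but it shows the ingredients described above: a buffer whose length sets the delay time, a feedback control that sends a scaled copy of each echo back into the buffer, and a mix control that blends the dry and processed signals.

    import numpy as np

    def simple_delay(dry, sample_rate, delay_ms=350.0, feedback=0.4, mix=0.5):
        """Basic echo: a circular buffer with feedback and a dry/wet mix (illustrative only)."""
        delay_samples = max(1, int(sample_rate * delay_ms / 1000.0))
        buffer = np.zeros(delay_samples)          # holds audio written one delay time ago
        out = np.zeros(len(dry))
        idx = 0
        for n, x in enumerate(dry):
            delayed = buffer[idx]                 # the echo: what went in delay_ms ago
            out[n] = (1.0 - mix) * x + mix * delayed
            # write the input plus a scaled copy of the echo back into the buffer;
            # feedback at or above 1.0 never decays - the "deafening squall" mentioned above
            buffer[idx] = x + feedback * delayed
            idx = (idx + 1) % delay_samples
        return out

With feedback at zero you get a single repeat; as it creeps towards one, the repeats pile up and die away ever more slowly.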
Just about any delay is going to provide some method of control over the delay time. Some will be of the free-spinning variety, displaying their values in milliseconds, while others will be lashed to the host's tempo and provide their timing values as beat divisions.
Choosing the appropriate delay time is crucial. Short delays (50ms or less) can be used to simulate double-tracking, while delay times of 100ms or longer will provide the classic slapback echo. Longer still will put you into space-rock territory.
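In terms of the sketch above, those classic sounds are largely just different delay_ms settings. The calls below simply restate the figures from the text, assuming dry is a mono NumPy array of samples at 44.1kHz:

    doubled  = simple_delay(dry, 44100, delay_ms=30,  feedback=0.0, mix=0.5)   # under 50 ms: simulated double-tracking
    slapback = simple_delay(dry, 44100, delay_ms=120, feedback=0.1, mix=0.4)   # 100 ms or more: classic slapback
    space    = simple_delay(dry, 44100, delay_ms=450, feedback=0.6, mix=0.5)   # long, regenerating space-rock echoes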
In addition to the bog-standard delays described above, you might encounter variations on the theme. Ping-pong delays are quite common, providing a stereo output wherein alternating echoes are hard-panned in the stereo sound field. Multi-tap delays are also a popular choice - these behave the same as those described above, with the exception that multiple delay lines are provided, each with independent times and levels.
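Both variations can be sketched along the same lines. A ping-pong delay keeps two buffers and cross-feeds them so that successive echoes alternate between the left and right outputs, while a multi-tap delay would instead read several positions from a single buffer, each with its own time and level. The outline below is illustrative only, not any specific plug-in's algorithm:

    def ping_pong_delay(dry, sample_rate, delay_ms=300.0, feedback=0.5, mix=0.5):
        """Stereo ping-pong sketch: echoes bounce between the left and right channels."""
        d = max(1, int(sample_rate * delay_ms / 1000.0))
        buf_l, buf_r = np.zeros(d), np.zeros(d)
        out = np.zeros((len(dry), 2))
        idx = 0
        for n, x in enumerate(dry):
            echo_l, echo_r = buf_l[idx], buf_r[idx]
            out[n, 0] = (1.0 - mix) * x + mix * echo_l    # left output
            out[n, 1] = (1.0 - mix) * x + mix * echo_r    # right output
            # cross-feed: the input enters the left line, left echoes feed the right line and vice versa
            buf_l[idx] = x + feedback * echo_r
            buf_r[idx] = feedback * echo_l
            idx = (idx + 1) % d
        return out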
Many of the best delays are those that combine delay effects with other processing, particularly filter or modulation effects. Both of these are present in retro-style tape delay or early digital delay emulations. You see, the echoes of a vintage delay unit degraded over time, losing frequency content and timing stability.
Likewise, it is fairly common to find an LFO onboard their modern-day counterparts. This is provided as a means by which the pitch and timing of the echoes can be altered over time. Such effects can be used to simulate chorusing or flanging, as well as wacky, seasickness-inducing wobbling.
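That modulation can be sketched by sweeping the read position of the buffer with a sine-wave LFO. Short base delays and gentle depths give chorus- or flanger-like movement; slow, deep settings produce the wobble described above. Again, the parameter names are invented for the example:

    def modulated_delay(dry, sample_rate, base_ms=8.0, depth_ms=3.0, lfo_hz=0.5, mix=0.5):
        """Delay whose time is swept by an LFO, for chorus/flange-style movement."""
        max_len = int(sample_rate * (base_ms + depth_ms) / 1000.0) + 2
        buffer = np.zeros(max_len)
        out = np.zeros(len(dry))
        idx = 0
        for n, x in enumerate(dry):
            buffer[idx] = x
            # current delay time in samples, swept up and down by the LFO
            t = (base_ms + depth_ms * np.sin(2.0 * np.pi * lfo_hz * n / sample_rate)) * sample_rate / 1000.0
            read_pos = (idx - t) % max_len
            i0 = int(read_pos)
            frac = read_pos - i0
            delayed = (1.0 - frac) * buffer[i0] + frac * buffer[(i0 + 1) % max_len]   # linear interpolation
            out[n] = (1.0 - mix) * x + mix * delayed
            idx = (idx + 1) % max_len
        return out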
Needless to say, the above applications are just the beginning. There are loads of clever delay designs out there, ranging from reverse echo generation to wild, crunchy dub-echo devices. Often a delay will be built into an instrument, even becoming an integral part of its architecture. Modular synthesisers often come with a delay module in tow, and great fun can be had modulating the delay parameters with various other bits in a complex patch.
As we've suggested, delay is a vital element in any production, even when you don't consciously hear it. Delay lines are crucial in chorusing, reverb and other effects processors. In fact, they're so ubiquitous that you probably already have a number of them to hand, so open them up and get to know them!
Four top-notch delay plug-ins
FabFilter Timeless, £69
Now in its second incarnation, Timeless is maxed out with all manner of modulation. At its heart it's a classic stereo tape delay, but FabFilter has added filtering, time-stretching and its magnificent modulation matrix. Tap tempo is supported, or Timeless can sync to the host.
Artificial Audio Obelisk, €99
A wonderfully advanced multi-effect built around a spectral delay. Incoming audio is separated into multiple frequency bands which are treated independently. LFOs, spectral filter and gate are included to warp your sound, and modulation can be 'drawn' into the Analyzer Point View. Neat!
PSP Audioware Lexicon PSP 42, $149
One of a trio of delays built around its classic hardware units, the PSP 42 adheres to Lexicon's design and is approved by Lexicon. A stereo unit with a very retro sound, it has all the original's modulation options plus the ability to invert the feedback and delay channels.
Togu Audio Line TAL-Dub, Free
Togu Audio Line has provided yet another brilliant freebie with this vintage-style delay effect. There's a 12dB filter on board, along with the ability to set independent delay times and feedback for each channel (though these can be linked if you want). It can be used as a plain old filter too.
For a comprehensive selection of effects tutorials and techniques, check out Computer Music Special: Effects (issue 47) which is on sale now.
Liked this? Now read: The effects that changed music
The Minnesota River is a tributary of the Mississippi River, approximately 332 miles (534 km) long, in the U.S. state of Minnesota. It drains a watershed of nearly 17,000 square miles (44,000 km2): 14,751 square miles (38,205 km2) in Minnesota and about 2,000 sq mi (5,180 km2) in South Dakota and Iowa.
The river rises in southwestern Minnesota, in Big Stone Lake on the Minnesota–South Dakota border, just south of the Laurentian Divide at the Traverse Gap portage. It flows southeast to Mankato, then turns northeast. It joins the Mississippi south of the Twin Cities of Minneapolis and St. Paul, near the historic Fort Snelling.
The valley is one of several distinct regions of Minnesota. As shown on old maps of Fort Snelling, early explorers dubbed the waterway the St. Pierre or St. Peter's River. Pierre-Charles Le Sueur was the first European known to visit the river, but there is no consensus as to the origin of its original name. Its name comes from the Lakota mni sota, mni meaning "water"; sota is alternately translated "smoky-white" or "like the cloudy sky". The territory, and later the state, were named for the river.
The valley that the Minnesota River flows in is up to five miles (8 km) wide and 250 feet (80 m) deep. It was carved into the landscape by the massive glacial River Warren between 11,700 and 9,400 years ago, at the end of the last ice age in North America.
The river valley is notable as the origin and center of the canning industry in Minnesota. In 1903 Carson Nesbit Cosgrove, an entrepreneur in Le Sueur, presided at the organizational meeting of the Minnesota Valley Canning Company (later renamed Green Giant). By 1930, the Minnesota River valley had emerged as one of the country's largest producers of sweet corn. Green Giant had five canneries in Minnesota in addition to the original facility in Le Sueur. Cosgrove's son, Edward, and grandson, Robert, also served as heads of the company over the ensuing decades before the company was swallowed by Pillsbury.
Several docks for barges exist along the river. Dried goods are transported to the ports of Minneapolis and Saint Paul, and then shipped down the Mississippi River.
Cities and towns
Notes and references
- Das Illustrirte Mississippithal, or, The Valley of the Mississippi Illustrated. St. Paul, Minnesota: Minnesota Historical Society, 1967.
- Sansome, Minnesota Underfoot, pp. 118-19.
Fifty-two years ago, in August 1958, the United States was so confident in our ability to provide clean nuclear energy that we put one hundred and sixteen men in a tin can called the USS Nautilus and sent them, along with a nuclear reactor, under the North Pole. Many of the crew from that voyage remain alive and well today.
What puzzles me is why the USA isn't the undisputed world leader in nuclear power. Perhaps the titles of undisputed leader in nuclear weaponry and leader in nuclear power are mutually exclusive. So it is France that produces most of its power from nuclear reactors.
Google getting into the business of nuclear power is the most exciting development in nuclear power in this country since the Nautilus. Google's data centers are massive power consumers, but power transmission also consumes a lot of resources: it takes up land and steel, and a lot of power is lost along the way.
If we could put a nuclear reactor on a 320-foot submarine 60 years ago, we can build a clean nuclear reactor in 2010 with enough power to supply a local data center and the local town it provides employment for. My hope is that this is Google's nuclear vision.
The Bridges of Königsberg
This problem inspired the great Swiss mathematician Leonhard Euler to create graph theory, which led to the development of topology.
Levels: High School (9-12), College
Math Topics: Graph Theory, Topology
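A quick way to see why no walk can cross every bridge exactly once is Euler's parity argument: such a walk exists only if zero or two land masses touch an odd number of bridges. Below is a minimal Python sketch of that check, using the classical layout of four land masses and seven bridges (the labels A to D are illustrative names added here).

```python
from collections import Counter

# Classical Königsberg layout: island A and land masses B, C, D,
# connected by the seven bridges listed as (endpoint, endpoint) pairs.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd_vertices = [node for node, d in degree.items() if d % 2 == 1]
print(dict(degree))       # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(len(odd_vertices))  # 4 odd-degree land masses, so no Euler walk exists
```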
Climate change is expected to affect flooding through changes in rainfall, temperature, sea level and river processes. Climate change will exacerbate the existing effects of flooding on infrastructure and community services, including roads, stormwater and wastewater systems and drainage, river flood mitigation works, and private and public assets including houses, businesses and schools.
Climate change may change flood risk management priorities and may even increase the risk from flooding to unacceptable levels in some places. It is therefore important that your flood risk assessments incorporate an understanding of the impacts of climate change on the flood hazard.
Managing present-day and future risk from flooding involves a combination of risk-avoidance and risk-reduction activities. The treatment options could be a combination of avoiding risk where possible, controlling risk through structural or regulatory measures, transferring risk through insurance, accepting risk, emergency management planning, warning systems, and communicating risk (including residual risk) to affected parties. The best combination will consider the needs of future generations and not lock communities into a future of increasing risks from flooding. | <urn:uuid:7c71047c-cf11-4e42-b246-2371c6dd2036> | CC-MAIN-2013-20 | http://mfe.govt.nz/publications/climate/preparing-for-future-flooding-guide-for-local-govt/page6.html | 2013-06-19T06:22:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.927329 | 207 |
Under intense persecution, hundreds of Puritan preachers, followed by tens of thousands from their flocks in the Old Country, answered freedom’s beckoning call and headed for America. Governor John Winthrop would describe what they hoped to build as a “shining city on a hill.”
Among those arriving in Boston were the Rev. Thomas Shepard and Simon Crosby and his relatives. Simon had heard Shepard preach and had undergone a profound religious conversion. The Crosbys would settle across the river in Cambridge (Newtowne) and become prominent citizens whose heirs were pillars of local Puritan and Presbyterian churches, as well as noted soldiers in the War for Independence.
It was into such a community of faith that young Frances (Fanny) Crosby would be born. At six weeks old, Crosby developed an eye inflammation, which an unschooled traveling medical man treated with “mustard poultices.” According to the story, this was the cause of her permanent blindness, although modern scientists believe she was probably congenitally blind.
Crosby herself would always speak of the occasion as a remarkable working of God’s providence, opening doors to her that would not have been opened otherwise. Her grandmother Eunice was of particular importance in instilling in young Crosby a love for the gospel and the Scriptures.
Biographer Edith L. Blumhofer describes the home environment as sustained by “an abiding Christian faith”: “At its center stands the Bible in the classic rendering of the Authorized Version. Crosby frequently admitted its centrality in her childhood home, where the family altar found a regular place. Although she could not read for herself, she memorized Scripture under the patient tutelage of her grandmother. … Shaped by the Calvinist reading of Scripture that years before prompted the family’s migration to the New World, the Crosbys of Southeast understood that God had a purpose for whatever happened. … They knew God as the source of all true pleasure and believed that all they had – meager or abundant – came from God’s hand.”
When Crosby was 19, her family learned of the Institute for the Blind in New York, where Crosby’s world suddenly became much larger. When a traveling phrenologist (a faddish “science” that presumed to discern intelligence and capability by carefully feeling the bumps on one’s head) pronounced Crosby extremely gifted, the Institute gave Crosby every opportunity for learning. She excelled and was soon working as an instructor.
Fanny Crosby’s self-styled “primitive Presbyterianism” gave her decidedly low-church views, yet she seems to have manifested a very open and warm attitude toward all of Christian faith. One incident, in particular, captures the essence of young Crosby and dismisses immediately the notion that the Institution was a dour and mournful place.
Alice Holmes, a year younger than Crosby, was born in England and struck by yellow fever on the ocean crossing to America. The ship was quarantined for months, and when the family finally stepped onto American soil, 9-year old Alice was completely blind.
Years later, Holmes would remember her first encounter with the new roommate in her book “Lost Vision”: “At a quarter before ten, Miss Crosby announced that she would take charge of the new pupil from New Jersey, as I was to room with her, and at once, with a kind good-night to all, and taking me by the hand, she started off at a pace which rendered me rather timid, every step being new and strange to my ‘unfrequented feet.’ Which, observing, she told me not to be afraid as she would not let me break my neck; and after crossing one of the main halls and reaching the third floor beyond a long flight stairs, she remarked, ‘Here we are; this is our room … here on this side is your bed, and here is your trunk, and here is a place to hang your clothes;’ in short ‘she tended me like a welcome guest.’ Before saying our prayers, however she inquired as to my religious views, and I at once declared myself an Episcopalian, to which she humorously replied, ‘Oh, then, you are a churchman,’ and made a rhyme which ran something like this:
‘Oh, how it grieves my poor old bones,
To sleep so near this Alice Holmes.
I will inform good Mr. Jones,
I cannot room with a Churchman!’
“Then she hoped I would not be offended or feel hurt, as she was only in fun; and with a warm goodnight retired to her side of our apartment,” Holmes writes.
At the Institution, Crosby became acquainted with a broad range of evangelical Protestants who served on the board and faculty and took the opportunity to visit many of the local churches. Her first contacts with Methodism clearly broadened her experience with church music. In her last Presbyterian church before leaving the Institution, hymns were often written more or less on the spot by the deacons and elders each Sunday, a practice which, to no one’s surprise, has not survived in many quarters.
The chief instructor at the institution was Professor William Cleveland, the son of a Presbyterian minister. When the elder pastor Cleveland died, Professor Cleveland’s younger brother was quite depressed and came to spend some time at the Institution.
As Crosby recounts, “In 1853, our head teacher, Prof. William Cleveland, was called to New Jersey by the death of his father, a Presbyterian clergyman. After a few days absence, he returned, bringing with him his younger brother, a youth of 16; and the next morning afterward he came to consult me in regard to ‘the boy.’
“‘Grover has taken our father’s death very much to heart,’ he said, ‘and I wish you would go into the office, where I have installed him as clerk, and talk to him, once in a while,’” Crosby writes.
“So I went down as requested, and was introduced to the young man,” she continues. “We talked together unreservedly about his father’s death, and a bond of friendship sprung up between us, which was strengthened by subsequent interviews. He seemed a very gentle, but intensely ambitious boy, and I felt that there were great things in store for him, although … there was not a thought in my mind that he would ever be chosen from among the millions of his country to be its president.”
As Crosby’s own fame spread later in life, her circle of friends and acquaintances would expand to include a staggering range of political, social and literary figures. And she moved freely among well-schooled Presbyterians and circuit-riding Methodists. Her hymns, like “To God Be the Glory” and “Blessed Assurance,” have found places in hymnals of virtually every denomination and assembly of God’s people throughout the world.
For the full article on Fanny Crosby, please visit Leben’s website. | <urn:uuid:f24001e4-7a94-44dc-9cb9-9fd19a744b69> | CC-MAIN-2013-20 | http://mobile.wnd.com/2012/06/how-a-blind-hymn-writer-consoled-a-president/ | 2013-06-19T06:29:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.981058 | 1,509 |
ASP.NET Web pages provide the user interface for your Web applications. The topics in this section provide information on how ASP.NET Web pages work and how to create and program them.
Provides general information on the structure of ASP.NET Web pages, how pages are processed by ASP.NET, and how ASP.NET pages render markup that conforms to XHTML standards.
Describes the basic markup elements that make up an ASP.NET page.
Provides information on how to create event handlers in ASP.NET pages and how to work with client script.
Provides an overview of the programming model inherent to all ASP.NET pages, which includes the single-page model, the code-behind model, and how to decide which model to use.
Describes the run time class that is generated and then compiled to represent a page, and provide a programmable object corresponding to that page.
Describes XHTML standards and explains how to implement them in ASP.NET Web pages.
Describes Web accessibility standards and explains how to implement them in ASP.NET Web pages.
Provides a tutorial on creating a simple ASP.NET Web page.
Provides a tutorial on creating a simple ASP.NET Web page using the code-behind programming model.
Illustrates various features of the code editor. Some of the features of the code editor depend on what language you are coding in. Therefore, in this walkthrough you create two pages, one that uses Visual Basic and another that uses C#.
Provides a procedure for adding new and existing ASP.NET Web pages to a Web site in Visual Studio.
Provides information on how to create, customize, and manage an ASP.NET Web application (sometimes referred to as a Web site).
Provides information about how ASP.NET Web server controls work, how to add them to ASP.NET pages, and how to program them.
Provides information on displaying and editing data in ASP.NET Web pages.
Provides information on storing information between page requests.
Provides information on security threats to your ASP.NET applications, ways in which to mitigate threats, and ways to authenticate and authorize users.
Provides information on handling errors, debugging ASP.NET pages, viewing trace information during page processing, and using monitoring the health of your application. | <urn:uuid:ef8fcd00-c6d4-4ec9-b38f-cac764ecef30> | CC-MAIN-2013-20 | http://msdn.microsoft.com/en-us/library/fddycb06.aspx | 2013-06-19T06:51:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.810711 | 480 |
There are some lessons that students just can't learn in the classroom — and one of those is about life on the farm.
So on Oct. 8, more than 70 fourth and fifth graders from Clara Barton Elementary School in Chicago made the trek to a working hog farm in Yorkville, IL, to get a firsthand education in agriculture and discover the source of some of the foods they eat every day. State Rep. Mary Flowers (D-31st District) accompanied the students on the trip.
During their field trip to Kellogg Farms, sponsored by the Illinois Pork Producers Association, the students got to view a modern pork production facility, including the farrowing rooms where piglets are born. They also learned how corn and soybeans are ground to make feed at the farm's feedmill and got to sit in the driver's seat of a huge tractor.
“Most of our students have never been to a farm before,” says Kim Otto, fifth grade teacher at Clara Barton Elementary School. “This is a rare opportunity for them to see firsthand where their food comes from and to better understand what it is like to live and work outside the city.”
John Kellogg, a fifth-generation farmer, his wife Jan and their son Matt have hosted hundreds of tour groups at their 1,300-acre farm where they grow crops and raise pigs, farrow-to-finish.
The Kelloggs partner with the Kendall County Pork Producers and the Kendall County Farm Bureau to host student field days, along with other activities designed to educate the general public about the importance of pork production.
Seventeen years ago, the Kelloggs' vision and leadership helped create “Teachers on an AgriScience Bus,” a nationally recognized program to educate suburban teachers about various aspects of agriculture, including pork production.
“We know that many kids — and adults, too — don't know how animals are raised or what's involved in modern farming,” says Kellogg. “These tours allow us to show people how we provide the best care for our animals to ensure high-quality pork for consumers, while also caring for the environment. The students ask great questions, so we know they are taking it all in. We are helping the next generation to be well-informed consumers.” | <urn:uuid:bc202632-185d-4f36-aa5c-9d5926c9b7f0> | CC-MAIN-2013-20 | http://nationalhogfarmer.com/print/behavior-welfare/1115-students-visit-farm | 2013-06-19T06:30:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.971327 | 478 |
By JEROD CLAPP
After all the scientists had measured and weighed their victims, they cut in, gutted them and took count of the contents to see if their size made any difference.
They sound mad, but they were just fifth-graders diving into pumpkins on Halloween.
Students at Utica Elementary School conducted experiments on pumpkins to see if their size had anything to do with the number of seeds inside.
Allen Keith, a fifth-grade teacher at the school, said students liked finding the circumference, height and weight of their pumpkins, but getting orange goop on them is always their favorite part.
“They’re all just hands-on at this age,” Keith said. “So many kids like to get their hands on things and a lot of them don’t get to help in the kitchen much anymore, so this is new to some of them.”
Keith said students often find there is no correlation between size and seeds, but the kids have fun and get to take their seeds home to roast, if they so choose.
Kamie Thompson, a fifth-grader, had a three-pound pumpkin. She said she expected maybe 70 seeds because of its size, but found 280 in the guts of her pumpkin.
“I didn’t think there would be more [seeds] in a small one,” Thompson said. “I don’t know if size really affects it. A lot of pumpkins have a lot of seeds, but I wouldn’t expect that a small one would have all these seeds.”
Oliver Sabol, another fifth-grader, said he got similar results. His two-pound pumpkin yielded 224 seeds. Though he said he was surprised at the results, he said the measuring wasn’t the best part of the experiment.
“I would say the gutting was the best because you get dirty, and I like getting dirty,” Sabol said.
Josh Emily, another fifth-grade teacher, said he liked the idea of the project because it’s another way to interject science into something they might not think of that way.
“I think that’s the key, to bring science into everyday life,” Emily said. “Here, we had a simple idea — come up with an experiment and figure out how to execute it.”
Keith said bringing science down to earth is something teachers always strive to do.
“We hope they’ll think a lot more about science in their everyday lives,” Keith said. “That’s kind of the goal.” | <urn:uuid:61f30a09-05d0-4fb3-aed6-06f484e9006e> | CC-MAIN-2013-20 | http://newsandtribune.com/schools/x253575173/Gory-gourds-Utica-Elementary-students-dig-into-pumpkins/print | 2013-06-19T06:42:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.983566 | 561 |
Nextbigfuture has covered the micro-fusion space propulsion work of Friedwardt Winterberg.
Friedwardt Winterberg's ideas were largely the basis of Project Daedalus, the British Interplanetary Society’s starship design that would evolve into a two-stage mission with an engine burn — for each stage — of two years, driving an instrumented payload to Barnard’s Star at twelve percent of the speed of light.
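For a rough sense of scale, the cruise time implied by that figure is simple arithmetic: time ≈ distance ÷ speed. The sketch below assumes a distance of about 6 light-years to Barnard's Star, a value added here for illustration rather than taken from the article.

```python
# Back-of-the-envelope cruise time at the Daedalus design speed,
# ignoring the years spent accelerating and any deceleration at the target.
distance_ly = 5.96              # assumed distance to Barnard's Star, in light-years
cruise_speed_fraction_c = 0.12  # 12% of the speed of light, as quoted above

cruise_years = distance_ly / cruise_speed_fraction_c
print(f"One-way cruise time: about {cruise_years:.0f} years")  # roughly 50 years
```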
Some highlights of the interview:
When I first had thought of the fusion-micro-explosion propulsion system almost 40 years ago, I never thought about interstellar spaceflight. I rather thought about a high specific impulse – high thrust propulsion system for manned spaceflight within the solar system. Instead of an interstellar probe, one could build in space very large interference “telescopes” with separation distances between the mirrors of 100,000 km, for example, in the hope to get surface details of other earthlike planets. And by going to 500 AU, at the location of the Einstein gravitational-lens focus, one could use the sun as a telescopic lens with an enormous magnification.
All this needs a very powerful propulsion system. Before going to Alpha Centauri (or Epsilon Eridani), one should aim at comets in the Oort cloud. Since there is water abundantly available, [this] invites the use of deuterium as rocket fuel. Unlike a DT micro-explosion where 80% of the energy goes into neutrons, unsuitable for propulsion, it is not much more than 25% for deuterium. A deuterium mini-detonation though requires at least 100 MJ for ignition, but this can be provided with a magnetically insulated Gigavolt capacitor, driving a 100 MJ proton beam for the ignition of a cylindrical deuterium target...In reaching the Oort cloud, and there establishing human colonies, one may by “hopping” from comet to comet ultimately reach a “new” earth.
The DD reaction produces T and He3, which in a secondary reaction burn with D. This was “nicely” demonstrated by the 15 Megaton fission triggered deuterium bomb test in 1952.
For propulsion, the pure fusion fire ball can with much higher efficiency (if compared to a pusher plate) be deflected by a magnetic mirror, also avoiding the ablation of a pusher plate. The ignition, requiring more than 100 MJ, can be done with some kind of particle accelerator. The LHC at CERN can store several 100 MJ energy in a particle beam moving with almost the velocity of light. No laser can do that yet. Unlike lasers, particle accelerators are very efficient. And to get a high fusion yield of say about 1 kt, cylindrical targets with axial detonation should be used, where a mega-gauss magnetic field entraps the charged fusion products, as it is required for detonation.
I would agree that the best way to go interstellar is to rapidly turn human civilization into a Kardashev 2 civilization over the next 50-200 years and then expand out through the Oort comet cloud and over to other solar systems.
Relatively crude molecular nanotechnology would enable a rapid advance to Kardashev level 1 (Storrs Hall's weather machine).
Ignition of a Deuterium Micro-Detonation with a Gigavolt Super Marx Generator
Deuterium microbomb Rocket Propulsion
Ways Towards Pure Deuterium Inertial Confinement Fusion Through the Attainment of Gigavolt Potentials | <urn:uuid:2f7a28af-1244-4665-be44-be41ecedf650> | CC-MAIN-2013-20 | http://nextbigfuture.com/2009/04/friedwardt-winterberg-on-starship.html | 2013-06-19T06:41:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.902279 | 781 |
My two digit number is special because adding the sum of its digits
to the product of its digits gives me my original number. What
could my number be?
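A brute-force check, sketched below, confirms which two-digit numbers satisfy the condition (this is an editorial illustration, not part of the original puzzle).

```python
# Find every two-digit number equal to (sum of its digits) + (product of its digits).
matches = []
for n in range(10, 100):
    tens, units = divmod(n, 10)
    if tens + units + tens * units == n:
        matches.append(n)
print(matches)  # every two-digit number ending in 9 satisfies the condition
```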
If you wrote all the possible four digit numbers made by using each
of the digits 2, 4, 5, 7 once, what would they add up to?
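The second puzzle can be checked the same way by summing all 24 arrangements of the four digits; each digit lands in each place six times, so the total is 6 × (2+4+5+7) × 1111.

```python
from itertools import permutations

digits = (2, 4, 5, 7)
total = sum(int("".join(map(str, p))) for p in permutations(digits))
print(total)                   # brute-force sum of all 24 four-digit numbers
print(6 * sum(digits) * 1111)  # same value from the place-value argument
```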
A rhombus PQRS has an angle of 72 degrees. OQ = OR = OS = 1 unit. Find all the angles, show that POR is a straight line and that the side of the rhombus is equal to the Golden Ratio.
The problem is: how did Archimedes calculate the lengths of the sides of the polygons, which required him to be able to calculate square roots?
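Archimedes' actual square-root procedure is not recorded, which is what makes the problem interesting; one candidate often discussed is the Babylonian (Heron) iteration, sketched here purely as an illustration of how tight rational bounds can be produced with hand-sized arithmetic.

```python
from fractions import Fraction

def babylonian_sqrt(n, first_guess, steps=3):
    """Repeatedly average x and n/x; the iterates converge quickly to sqrt(n)."""
    x = Fraction(first_guess)
    for _ in range(steps):
        x = (x + Fraction(n) / x) / 2
    return x

approx = babylonian_sqrt(3, 2)
print(approx, float(approx))  # compare with Archimedes' upper bound 1351/780 ≈ 1.7320513
```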
Four cards are shuffled and placed into two piles of two. Starting with the first pile of cards - turn a card over...
You win if all your cards end up in the trays before you run out of cards in. . . . | <urn:uuid:5a1ded6c-32b1-403f-8348-58690a25ddb7> | CC-MAIN-2013-20 | http://nrich.maths.org/thismonth/3and4/2002/08 | 2013-06-19T06:36:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.933158 | 202 |
[Found under: "Our English Items"]
What is Poi?
Mr. Lorrin Andrews in his Hawaiian Dictionary gives the following definition of the word Poi: “The paste or pudding which was formerly the chief food of Hawaiians, and is so to a great extent yet. It is made of kalo, sweet potatoes or breadfruit, but mostly of kalo, by baking the above articles in ovens under ground, and afterwards peeling and pounding them with more or less water (but not much); it is then left in a mass to ferment; after fermentation, it is again worked over with more water until it has the consistency of thick paste. It is eaten cold with fingers.”
The learned Hawaiian lexiographer [lexicographer] do not give the exact meaning of the word. Poi is a name given to mashed Kalo, potatoe, breadfruit or banana. The Kalo (a species of arum ex-culentum [arum esculentum] when cooked, is mashed or pounded with a stone, especially made for that purpose, until it becomes like a good soft (flour) dough. From that stage it is then reduce to what is called—poi. It is only at this stage the word poi is used. When the taro is merely mashed, or pounded into a hard pulpy mass, it is called a pa’i-ai or pa’i-kalo. When it is reduced to a still softer condition, and could be twisted by fingers, it is then called poi—whether hard or soft (poi paa or poi wali). When the poi is too soft, it is called poi hehee.
Our kanaka savant ventures to give his definition of Poi. He thinks that it primarily means to gather up; to collect, to pull up; to hold or lift up an article, lest it falls down or spills over. It is analogous to the word Hii, “to lift up; to carry upon the hips and support with the arms, as a child.” An expert poi pounder will call the attention of an unskillful person when pounding taro, saying: “E poi mai ka ai i ole e haule mawaho o ka papa.” (Gather up the ai (foot [food]) lest it falls over the board). He found a French definition of the word “poi” in Boniface Mosblech’s “Vocabulaire Oce’anien—Francais, et cetera, (Paris, 1843) to wit: “boullie de taro” (soft taro). That does not give the derivative definition of the word (kalo) any better than Mr. Andrews.
In conclusion we add the old legend pertaining to the origin of Kalo (taro).
Wakea was the husband, and Papa was the wife, and they two were supposed by some ancient Hawaiian tradition, the first progenitors of the Hawaiian race. They lived on the Koolau side of the Island of Oahu, and also at Kalihi. Their first born son was of premature birth. The little fellow died and its body was buried at one end of their house. After a while, from where the child’s body was buried a new kind of plant shot up. Nobody knows what it was. Finally, green leaves appeared. Wakea called the leaves “Lau-kapa-lili” (the quivering leaves) and the long stalk or stem of the plant was called “Ha-Loa” (long stalk or stem). The plant was finally called by Wakea as “Haloa.”
(Kuokoa Home Rula, 1/1/1909, p.1) | <urn:uuid:2c00954c-055a-4e3b-adcb-9ac4bcaa463b> | CC-MAIN-2013-20 | http://nupepa-hawaii.com/tag/breadfruit/ | 2013-06-19T06:40:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.961942 | 819 |
The Historical Origins, Convergence and Interrelationship of International Human Rights Law, International Humanitarian Law, International Criminal Law and Public International Law and Their Application from at Least the Nineteenth Century
University of South Africa
November 20, 2008
Human Rights and International Legal Discourse, Vol. 1, 2007
Hofstra Univ. Legal Studies Research Paper No. 08-24
The emergence and scope of international law, whether in treaties or in customary international law, is especially relevant to those seeking reparations for atrocities committed against indigenous populations during colonization.
This article examines the origins, interrelationship, and dimensions of international law, the law of armed conflict, international human rights law, and international criminal law. It explores the time when these legal regimes came into being and when the protections accorded by them against various types of conduct became available.
It is submitted that by the turn of the twentieth century many of these laws were already available and in force. While it is commonly held that international protections against human rights violations were activated in the post-World War II era, they actually were accessible much earlier.
A specific focus of this article is the Martens Clause, adopted into the Hague Conventions of 1899 and 1907. The Martens Clause, it is argued, constitutes one of the origins of international human rights law in the positivistic sense, is considered applicable to the whole of international law, and has indeed shaped the development of customary international law. It will be shown that the Martens Clause is a specific and recognized provision giving protection to groups and individuals during both war and peacetime.
A further focus of this article is the origins and interconnectedness of concepts such as crimes against humanity and genocide. This article looks at the origins of these notions. It argues that they are tied to the origins of international human rights law and finds they existed at least in the nineteenth century, if not before.
Number of Pages in PDF File: 41
Keywords: law of armed conflict, international human rights law, international criminal law, Martens Clause, Hague Convention, genocide, crimes against humanity, reparations, historical human rights violations, transitional justice
Accepted Paper Series
Date posted: November 24, 2008
The Law and Economics of Post-Civil War Restrictions on Interstate Migration by African-Americans
George Mason University School of Law
George Mason Law & Economics Research Paper No. 96-03
Texas Law Review, Vol. 76, 1998
An edited and revised version of this paper later became Chapter 1 of Only One Place of Redress: African Americans, Labor Regulations and the Courts from Reconstruction to the New Deal (Duke University Press 2001).
In the decades after the Civil War, southern states attempted to prevent African-Americans from migrating by passing emigrant agent laws. These laws essentially banned interstate labor recruitment. The Supreme Court upheld emigrant agent laws in the little-known case of Williams v. Fears in 1900. The history of emigrant agent laws provides evidence that: (1) state action played a larger role in discrimination against African-Americans than is generally acknowledged; (2) laissez-faire jurisprudence was potentially helpful to disenfranchised African-Americans; and (3) the federalist structure of the U.S. provided African-Americans with opportunities to improve their lot through internal migration. Chapter 1 of the book Only One Place of Redress: African Americans, Labor Regulations and the Courts from Reconstruction to the New Deal (Duke University Press 2001) is based on this Article.
Number of Pages in PDF File: 68
Keywords: Civil Rights, Migration, Economics
JEL Classification: J6, J7
Working Paper Series
Date posted: June 27, 2008; Last revised: September 15, 2008
Factors Affecting Growth and Development Prepared by: Lovelyn M. Mataac
Heredity is the passing on of characteristics from parents to their children.
Healthy children grow and develop faster than sickly children.
Health means being physically, mentally and socially fit.
Children inherit some physical characteristics from their parents. If a child inherits shortness, he or she will be short even with proper nourishment.
A child who is healthy grows and develops faster than the one who is sickly.
3. Food and Food Habits
The body needs different kinds of nutrients for growth, energy and repair.
The body needs different nutrients; carbohydrates, fats, proteins, vitamins, minerals, and water.
Carbohydrates and fats provide the body with the energy it needs. Proteins provide the materials for growth and repair. Vitamins and minerals keep the body in good condition. Water is essential for metabolism and for bowel movement. It also determines the amount of blood in circulation.
Minerals are nutrients that are needed for building strong bones and teeth, for helping the nerves work, for regulating growth and for the clotting of blood.
Metabolism refers to all the activities going on in the cells so that they can absorb food and produce energy.
It is important to practice good food habits that are necessary for one’s health. When one is healthy, he or she grows and develops fast.
4. Good Food Habits
Eat plenty of food from grains and cereals. They are the foods that give you energy.
Eat protein-rich food. They are the foods that make you grow.
Eat fruits and vegetables. They are foods that regulate your growth. Some vegetables are good if eaten raw. Others should be lightly cooked so that their nutrients will not be lost.
Eat less fatty, salty or sweet foods. Too much of these can cause illnesses.
Drink plenty of water. Your body parts need water to do their work.
Eat a balanced diet. A balanced diet has the right kinds of food in the right amount.
5. Good Health Habits
Keeping clean: Make cleanliness a habit. Take a bath daily. Wash your hands as often as needed. Brush your teeth. Be sure to clean your nose and ears. Keeping yourself clean can keep off some germs that cause diseases.
Exercise: Your body also needs exercise. Exercise makes your muscles strong. It also improves your flexibility and makes your heart, lungs and other body parts work efficiently.
Playing is also an exercise.
Rest: While you need exercise, you also need rest. Muscles get tired when they are overworked. When your muscles are tired, they cannot work well.
Rest after work or play. You rest when you sit down and read a book or listen to music. Sleep is a form of rest.
Some diseases may affect babies before they are born or at birth. These diseases may affect some parts of the body like the brain, in which case the child may become paralyzed or mentally retarded. Blindness may also affect development of physical and social capabilities of children.
Cerebral palsy is a disorder that affects a person’s movement and posture.
Immunization means injecting into the body a weakened form of germs.
SOME DISEASES SLOW DOWN ONE’S GROWTH AND DEVELOPMENT.
7. Rest and Recreation
Recreation: an activity engaged in to restore strength and spirits after work.
It is an activity done for enjoyment.
Recreational Activities-Activities that you do voluntarily because you like it to.
Rest and recreation help a child develop physically, mentally and socially.
8. Family and Surroundings
Children grow up in a family. What children experience in the family affects their growth and development. Children who are loved grow up with a feeling of security. If their physical, emotional and social needs are provided for, children grow up to be well-adjusted and confident of themselves. Negative experiences in the family may affect children.
Surroundings affect children. If the place they live in is polluted, children are likely to be sickly.
Family affects the growth and development of children. A small family can meet its basic needs.
Surroundings affect the growth and development of children. A clean surrounding is good for one’s health.
Answer the following with a Yes or a No. Answers only.
1. Eat fruits and vegetables daily.
2. Practicing good food habits will make us healthy.
3. Rest after work or play.
4. proteins are nutrients that come from both plants and animals.
5. Foods which contain vitamins and minerals are called Glow Foods.
6. Children inherit their neighbors’ characteristics.
7. Inherited traits can affect one’s growth and development.
8. Cerebral Palsy is injected into a body.
9. Diseases and defects can affect one’s growth and development.
10. Rest is doing what you like.
B. Write Rest or Recreation
1. Playing “taguan” with friends and neighbors.
2. Sleeping under the tree.
3. Playing badminton.
4. Making sketches of cartoon characters.
Write Good or Bad.
1. Living in a polluted surrounding.
2. Playing with stray animals.
3. It is right to drink contaminated water.
4. Healthful surroundings are free from garbage and junks.
5. Parents of a small family cannot provide the needs of the children.
The current swine flu cases are caused by a virus.
Specifically, they are being caused by a swine influenza A (H1N1) virus, a new strain of flu virus.
It is because this is a new strain of flu virus that it is spreading so easily and why so many kids are getting sick with the swine flu this year.
Fortunately, there are treatments for the swine flu.
Keep in mind that the CDC states that the 'priority use for these drugs this season is to treat people who are very sick (hospitalized) or people who are sick with flu symptoms and who are at increased risk of serious flu complications, such as pregnant women, young children, people 65 and older and people with chronic health conditions.'
That means that most people who get swine flu, including healthy children over five years of age, won't need Tamiflu or Relenza.
Swine Flu Treatments
What makes this confusing is that there were many reports this past flu season that the seasonal flu virus was resistant to Tamiflu. In fact, it was recommended that doctors go back to using older medicines like Symmetrel (amantadine) or Flumadine (rimantadine) with Tamiflu or Relenza instead, if someone had a seasonal influenza A (H1N1) virus infection.
In contrast, the swine influenza A (H1N1) virus is still sensitive to Tamiflu and Relenza.
As with seasonal flu, Tamiflu and Relenza should be started within 48 hours of your child developing swine flu symptoms. According to the CDC, these flu medications can even be started after 48 hours though, especially if a patient is hospitalized or is at high risk to develop complications from the flu.
Swine Flu Treatments for Kids
Although Tamiflu is available as a syrup, it has never been approved for use in children under 12 months of age. Fortunately, the Food and Drug Administration has approved the use of Tamiflu for infants under an Emergency Use Authorization.
Dosing of Tamiflu for treatment of swine flu in infants includes:
- 12 mg twice daily for 5 days in infants under 3 months old
- 20 mg twice daily for 5 days in infants 3 to 5 months old
- 25 mg twice daily for 5 days in infants 6 to 11 months old
Dosing of Tamiflu for prevention (prophylaxis) of swine flu in infants includes:
- 20 mg once daily for 10 days in infants 3 to 5 months old
- 25 mg once daily for 10 days in infants 6 to 11 months old
It is not recommended that infants under 3 months old routinely take Tamiflu for prevention of swine flu.
Children over 12 months old would take routine dosages of Tamiflu, just like they would for seasonal flu, to prevent and treat swine flu.
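For reference only, here is a minimal sketch that restates the infant treatment doses listed above as a lookup function; it is an editorial illustration, not dosing software or medical guidance.

```python
def tamiflu_infant_treatment_dose_mg(age_months):
    """Twice-daily Tamiflu treatment dose in mg (for 5 days), per the table above."""
    if age_months < 3:
        return 12
    if age_months < 6:
        return 20
    if age_months < 12:
        return 25
    raise ValueError("children 12 months and older use routine dosing")

print(tamiflu_infant_treatment_dose_mg(4))  # 20 mg twice daily for 5 days
```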
Relenza is still only recommended for children who are at least 7 years old (treatment) and who are at least 5 years old (prevention).
The increasing flu activity has already led to limited supplies of Tamiflu suspension. Fortunately, the CDC reports that 'supplies of adult formulation (75 mg) oseltamivir (Tamiflu) and zanamivir (Relenza) are meeting current demand.'
So what do you do if your younger child who can't swallow pills needs Tamiflu? Options include:
- having your pharmacist follow the FDA-approved instructions for the emergency compounding of an oral suspension from Tamiflu 75mg capsules
- having your pediatrician prescribe Tamiflu capsules, which are available in 30mg, 45mg, and 75mg capsules, and then open and mix the appropriate capsule size with a sweetened liquid, such as regular or sugar-free chocolate syrup
CDC. Antiviral Drugs and Swine Influenza. Accessed April 2009.
CDC. Interim Guidance on Antiviral Recommendations for Patients with Confirmed or Suspected Swine Influenza A (H1N1) Virus Infection and Close Contacts. Accessed April 2009.
CDC. 2009-2010 Influenza Season: Information for Pharmacists. Accessed September 2009. | <urn:uuid:cd4307f5-51c7-4615-881e-ecf37efb856f> | CC-MAIN-2013-20 | http://pediatrics.about.com/od/swineflu/a/409_treatments.htm | 2013-06-19T06:49:40Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.935964 | 947 |
Dione and Rhea pair up for an occultation, or mutual event, as seen by Cassini. While the lit portion of each moon is but a crescent, the dark side of Dione has begun to take a bite out of its distant sibling moon.
Dione is 1,126 kilometers (700 miles) across and Rhea is 1,528 kilometers (949 miles) across.
The image was taken in visible light with the Cassini spacecraft narrow-angle camera on April 17, 2006 at a distance of approximately 3.4 million kilometers (2.1 million miles) from Dione and at a Sun-Dione-spacecraft, or phase, angle of 120 degrees. Resolution in the original image was 21 kilometers (12 miles) per pixel on Dione and 25 kilometers (16 miles) per pixel on Rhea. The image has been magnified by a factor of two and contrast-enhanced to aid visibility.
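The quoted pixel scales follow from simple geometry: image scale ≈ range × per-pixel angular size of the camera. The cross-check below assumes a narrow-angle-camera pixel scale of roughly 6 microradians, a figure added here for illustration and not stated in the caption.

```python
# Rough cross-check of the quoted image scale from range and angular pixel size.
assumed_nac_pixel_scale_rad = 6.0e-6  # assumed radians per pixel for the narrow-angle camera
range_to_dione_km = 3.4e6             # spacecraft-to-Dione distance quoted in the caption

km_per_pixel = range_to_dione_km * assumed_nac_pixel_scale_rad
print(f"{km_per_pixel:.1f} km per pixel")  # about 20 km/pixel, close to the quoted 21 km
```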
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo.
For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov/home/index.cfm. The Cassini imaging team homepage is at http://ciclops.org. | <urn:uuid:5739ae75-cf2b-4a46-a55f-ffe5c4d3d328> | CC-MAIN-2013-20 | http://photojournal.jpl.nasa.gov/catalog/PIA08183 | 2013-06-19T06:29:44Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.90823 | 340 |
Something I can never understand is where the cosmic background radiation spreads. If I understand correctly, the cosmic background radiation is actually the light of the Big Bang. If it happened exactly ...
I'm wondering whether the residual light of the Big Bang comes from one particular direction, and what possibilities we have to detect its position.
One piece of experimental evidence that supports the Big Bang theory is the cosmic microwave background radiation (CMBR). From what I've read, CMBR is the leftover radiation from an early stage ...
Possible Duplicate: Why can we see the cosmic microwave background (CMB)? We all have seen evidence of radiation left from the Big Bang, but how is it still detectable? Why didn't it ...
I understand that we can never see much farther than the farthest galaxies we have observed. This is because, before the first galaxies formed, the universe was opaque--it was a soup of subatomic ... | <urn:uuid:d6903642-357f-45c4-85ba-877c29848823> | CC-MAIN-2013-20 | http://physics.stackexchange.com/questions/tagged/cmb+big-bang | 2013-06-19T06:24:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.945355 | 193 |
Though overt racism has diminished greatly over the last 30 years, most American cities remain deeply segregated. A host of other problems, such as the lack of both public services and private enterprise in inner-city black neighborhoods, have persisted in part because of this segregation. The challenge today is no longer to thwart individual white racists or, certainly, the patent discrimination of the old "neighborhood improvement associations," which flatly excluded blacks. Rather we must address the legacy of nearly a century of institutional practices that embedded racial and ethnic ghettos deep in our urban demography. Specifically, the practices of mortgage lenders and property insurers may have done more to shape housing patterns than bald racism ever did.
Real estate agents and federal housing officials long listened to sociologist and Federal Housing Administration (FHA) advisor Homer Hoyt, who concluded in 1933 that blacks and Mexicans had a very detrimental effect on property values. In its 1939 Underwriting Manual the FHA warned of "inharmonious racial groups" and concluded that "if a neighborhood is to retain stability, it is necessary that properties shall continue to be occupied by the same social and racial classes." Until the 1960s the FHA insured the financing of many homes in white suburban areas while providing virtually no mortgage insurance in the urban markets where minorities lived. Similarly, until the 1950s the National Association of Realtors officially "steered" clients into certain neighborhoods according to race, advising that "a realtor should never be instrumental in introducing into a neighborhood a character of property or occupancy, members of any race or nationality whose presence will clearly be detrimental to property values in the neighborhood." Racially restrictive covenants were actually enforced by the courts until the Supreme Court declared them to be "unenforceable as law and contrary to public policy" in its decision in Shelley v. Kraemer, in 1948. But such covenants have persisted in practice even after they were officially declared illegal. In 1989, Urban Institute researchers found that racial steering and other forms of disparate treatment continued to block opportunities for approximately half of all black and Hispanic home seekers nationwide, whether they were potential home buyers or renters. Fair housing groups have continued to document the same practices through the 1990s.
By concentrating public housing in central city locations and financing highways to facilitate suburban development, the federal government has further reinforced emerging dual housing markets. And by subsidizing the costs of sewer systems, school construction, roads, and other aspects of suburban infrastructure, government policy nurtures urban sprawl, which generally benefits predominantly white outlying communities at the expense of increasingly nonwhite urban and inner-ring suburban communities.
The problem today is not that loan officers or insurance agents are racist. Rather, it's that their decisions are largely dictated by considerations of financial risk—and the evaluation of that risk has often been colored by questions of race. Unfortunately, it is sometimes hard to disentangle discrimination based explicitly on race from discrimination based on the credit and overall financial status of home owners, the condition of the homes they want to purchase, and the state of the neighborhood as it affects property values. Even if race per se is not a leading factor in lending or insurance decisions, urban blacks are more likely to have poor credit ratings and are more likely to be purchasing homes in neighborhoods with lower property values. These factors hurt their mortgage and insurance applications.
The Mortgage Gap
While the federal government actively encouraged racial segregation in the nation's housing markets until the 1960s, since then its official record on prohibiting discrimination has been more favorable. For example, the federal Fair Housing Act of 1968 and the Equal Credit Opportunity Act of 1974 outlawed racial discrimination in mortgage lending and related credit transactions, and in 1975, Congress enacted the Home Mortgage Disclosure Act (HMDA), which requires most mortgage lenders to disclose the geographic location of their mortgage lending activity. In 1989, Congress expanded HMDA to require lenders to report the race, gender, and income of every loan applicant, the census tract location of his or her home, and whether the application was accepted or denied. And in 1977, Congress enacted the Community Reinvestment Act (CRA), requiring depository institutions (primarily commercial banks and savings institutions) to seek out and be responsive to the credit needs of their entire service areas, including low- and moderate-income neighborhoods.
Still, recent research using HMDA data has found that black mortgage loan applicants are denied twice as often as whites. The most comprehensive study, prepared by researchers with the Federal Reserve Bank of Boston and published in the American Economic Review in 1996, found that even among equally qualified borrowers, blacks were rejected 60 percent more often than whites. The causes of this disparity, and particularly the extent to which outright discrimination accounts for them, continue to be hotly debated.
Other research suggests that discrimination of some sort must be stubbornly persisting. Paired testing—an investigative procedure whereby pairs of equally qualified white and nonwhite borrowers or borrowers from white and nonwhite neighborhoods approach the same lenders to inquire about a loan—has found that applicants from black and Hispanic communities are often offered inferior products, charged higher fees, provided less counseling or assistance, or are otherwise treated less favorably than applicants from white communities. Underwriting guidelines like minimum loan amounts and maximum housing age requirements that are used by many lenders also are found to have an adverse disparate impact on minority communities. Over the past seven years the U.S. Department of Justice (DOJ) and the U.S. Department of Housing and Urban Development (HUD) have documented similar practices in settling fair housing complaints with 17 lenders involving millions of dollars in damages to victims and billions of dollars in loan commitments and other services previously denied to minority communities.
Fortunately, there is some evidence that racial gaps are closing. Between 1993 and 1997 the number of home purchase mortgage loans to blacks and Hispanics nationwide increased 60 percent, compared to 16 percent for whites. The percentage of those loans going to black and Hispanic borrowers, therefore, grew from 5 percent to 7 percent for each group. These positive trends likely reflect several developments. The multimillion-dollar DOJ and HUD settlements of fair housing complaints certainly attracted the industry's attention. And under the Community Reinvestment Act, community-based advocacy groups have negotiated reinvestment agreements with lenders in approximately 100 cities in 33 states, totaling more than $400 billion in lending commitments. These settlements and agreements call for a variety of actions, including opening new branch offices in central city neighborhoods, advertising in electronic and print media directed at minority communities, hiring more minority loan officers, educating consumers on mortgage lending and home ownership, and developing new loan products. Sources ranging from the National Community Reinvestment Coalition to Federal Reserve Chairman Alan Greenspan maintain that lenders have discovered profitable markets that had previously been underserved due to racial discrimination.
Yet this progress may be threatened by consolidation within and across financial industries as well as between financial and commercial businesses. For several years Congress has debated financial modernization legislation that would permit and encourage consolidation activities among banks, insurers, and securities firms that are currently prohibited by various post–Depression era statutes, most notably the Glass-Steagall Act. Even in the absence of such legislation, financial institutions have found loopholes in the laws, and consolidation has occurred within and among financial service industries.
Several studies indicate that mergers and acquisitions decrease CRA performance. The Woodstock Institute, a Chicago-based research and advocacy group that focuses on community reinvestment issues, recently found that small lenders make a higher share of their loans in low-income neighborhoods than do larger lenders. According to John Taylor, president of the National Community Reinvestment Coalition, small, local lenders have an intimate knowledge of their customers that larger and more distant institutions cannot develop. In addition, consolidation across financial industries can result in the shifting of assets away from depositories currently covered by CRA to independent mortgage banks and other institutions not covered by this law.
No Insurance, No Loan
Before potential home owners can apply for a mortgage loan, they must produce proof of insurance. As the Seventh Circuit Court of Appeals stated in its decision in the 1992 case of NAACP v. American Family Mutual Insurance Co., "No insurance, no loan; no loan, no house." Unfortunately, not much is known about the behavior of the property insurance industry, in part because there is no law comparable to HMDA requiring public disclosure of the disposition of applications and geographic location of insured properties. What evidence does exist, however, demonstrates the persistence of substantial racial disparities—though as with mortgage lending businesses, it is impossible to determine precisely to what extent this disparity is due to outright racial discrimination.
In 1992, 33 urban communities voluntarily provided zip code data to their state insurance commissioners on policies written, premiums charged, losses experienced, and other factors. Analysis of the data by the National Association of Insurance Commissioners revealed that even after loss costs were taken into consideration, racial composition of zip codes was strongly associated with the price and number of policies written by these companies. Insurers were underwriting relatively fewer policies for black neighborhood applicants, and the policies they did write were often at higher premiums than for comparable white applicants from white areas.
More recently the National Fair Housing Alliance conducted paired testing of large insurers in nine cities and found evidence of discrimination against blacks and Hispanics in approximately half of the tests. Applicants from minority communities were refused insurance, offered inferior policies, or forced to pay higher premiums. Some applicants from these areas were required to produce proof of inspection or credit reports not required in other areas. Applicants from minority communities were also found to be held to more stringent maximum age and minimum value policy requirements. (Companies sometimes require that a house not be older than a certain age, or have a minimum appraisal value, before they will insure it.)
But there is evidence here, too, that these racial gaps may be closing, at least in certain markets. Four major insurers (Allstate, State Farm, Nation wide, and American Family), accounting for almost half of all home owners' insurance policies sold nationwide, have settled fair housing complaints since 1995 with HUD, the DOJ, and several fair housing and civil rights organizations. In October 1998 a Richmond jury found Nationwide guilty of intentional discrimination and ordered the insurer to pay more than $100 million in punitive damages to the local fair housing group, Housing Opportunities Made Equal (Home), which filed the lawsuit. These insurers have agreed to eliminate discriminatory underwriting guidelines, open new offices in minority areas, and market products through minority media to increase service in previously underserved minority communities. At the same time, several insurers, trade associations, and state regulators have launched voluntary initiatives to educate consumers, recruit minority agents and more agents for urban communities, and generally increase business in urban neighborhoods.
As a first step toward greater understanding of the issue, insurance companies should be required to publicly disclose information about the geographic distribution of insurance policies, which would lead to a range of studies just as HMDA did in the mortgage lending field. In addition, researchers should undertake follow-up studies to document the impact of recent legal settlements and indicate what has worked and why. Currently, evaluations of these efforts are included as part of the settlements, but the results are not for public consumption.
Institutions, Not Individuals
Discriminatory practices in the housing industry are often treated as the problem of selected individuals—consumers who happen to have the wrong credit or color, or providers who have prejudicial attitudes and act in discriminatory ways. Thinking this way, one loses sight of the structural and political roots of the problem and of potential solutions. Dismantling our cities' dual housing markets will require appropriate political strategies that address the structural causes.
Several steps should be taken to augment ongoing community reinvestment efforts. First, Congress should enact an insurance disclosure law, comparable to HMDA, that would permit regulators, consumers, and the insurance industry itself to understand better where insurance availability problems persist and why. This information would likely stimulate additional organizing, enforcement, and voluntary industry efforts to respond to remaining problems.
Second, additional paired testing would reveal specific policies and practices (related to pricing, types of products offered, and qualifying standards) that are delivered in a discriminatory manner, enabling enforcement agencies to target their resources and secure more comprehensive remedies.
Third, Congress should establish the following requirements for any significant merger, consolidation, or acquisition involving banks, insurers, securities firms, and related financial industries:
- CRA provisions should assure that all institutions determine and respond to the needs of low-income residents and communities;
- low-cost checking, savings, and other basic banking services should be available to low-income residents;
- regulatory reviews should be conducted of all proposed restructurings to assure that an adequate plan is in place to meet community reinvestment objectives;
- public hearings should be offered on any proposed significant restructuring to permit comment by all parties that would be affected prior to any decision by a regulatory agency on the proposal;
- any subsidiary currently or subsequently found to be in violation of the Fair Housing Act or related fair lending rules should be excluded or divested.
None of these provisions would prohibit lenders, insurers, or other providers of financial services from engaging in transactions they find beneficial. But these guidelines would assure that the products and services of these industries are available throughout all of the nation's metropolitan areas.
If we are to address seriously the segregation and poverty that persist in urban America, we must continue to educate consumers about effective money management and providers of financial services about lingering discrimination. Public officials must also intensify redevelopment efforts in inner cities—once a city can demonstrate tighter labor markets, higher wages, and more jobs, lenders and insurers will start to market their products more aggressively in distressed but recovering areas. But as long as racial discrimination—whether by intent or effect—persists in the housing services industry, our cities will remain riven by segregation, leaving blacks and other minorities in underserved, undesirable locations.
| <urn:uuid:0d818561-0944-468d-b2c8-98f2fd462d30> | CC-MAIN-2013-20 | http://prospect.org/article/indelible-color-line | 2013-06-19T06:49:40Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.962216 | 2,854 |
Well, it turns out that empathy across group boundaries is a complicated matter. Although part of the glue holding society together is a desire to reduce the suffering of others, and though we’re quick to empathize with and help members of our own groups, this dynamic can go haywire when it must extend to members of different groups (Cikara, Bruneau, & Saxe, 2011).
When we witness a member of our own ethnic group suffering (like Katniss, if you’re European-American), our brain activates the same experience for ourselves, leading us to feel the same pain as she does.
But when we witness a member of a different ethnic group suffering (like Rue, if you’re not African American), this response short-circuits – our brains do not activate a shared experience. This is especially true of people high in implicit racism (that is, people who harbor biases toward other groups, even if they won’t admit that bias to themselves or others).
Instead of empathy, when competition abounds (and it is undeniably encouraged in The Hunger Games) we may respond to the suffering of people from different groups with Schadenfreude, taking pleasure in their pain. That is, reward regions of the brain light up when viewing someone from another, competing group receive a painful electric shock.
So although she was clearly Katniss’s ally in the book, when Rue’s race was made salient to viewers (by actually seeing her onscreen) in a savagely competitive context, it’s possible that many viewers felt they could no longer relate to Rue or empathize with her suffering. Feeling robbed of an emotional relationship with a beloved character (yes, we do build one-sided “parasocial” relationships with characters), and perhaps having to face a sick feeling of relief or joy at her death, it appears some viewers coped with these aversive feelings in a very public way: by posting racist things – some subtle, some not – on the Internet.
This topic has inspired an intense public dialogue about modern expressions of racism, and multitudes of fascinating essays and blog posts – here’s a good one.
So what should we take away from this? Rather than writing it off as “Well, some people are just racist,” or adopting a “colorblind” stance that ignores group differences and strips Rue and Thresh of their cultural identity, it’s essential that we celebrate group differences and carefully examine our own assumptions and biases. This latter task can be extremely difficult. Best of all, by cultivating friendships with members of different groups (according to the work of Elizabeth Page-Gould), we can build bridges between our own identity and that of the other group, and in turn become more comfortable when interacting with people of ethnicities different than our own.
Were you among those who had imagined Rue to be European-American? Why do you think that was? How did you react when you saw her in the movie?
Cikara, M., Bruneau, E., & Saxe, R. (2011). Us and Them: Intergroup Failures of Empathy Current Directions in Psychological Science, 20 (3), 149-153 DOI: 10.1177/0963721411408713 | <urn:uuid:7431b3fc-a138-423e-bc97-f4a4e6ecec81> | CC-MAIN-2013-20 | http://psych-your-mind.blogspot.com/2012/04/rue-and-racism-intergroup-dynamics-and.html | 2013-06-19T06:28:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.938284 | 682 |
Focus and Concentration
- Minimize distractions, e.g. email, kids
- Know your best concentration span for reading and stick to it, then take a break. Try the 50-10-50 technique: 50 minutes reading, 10-minute break, 50 minutes reading.
- Don't try to read when tired
- If you are a kinesthetic/tactile learner, you need to DO something while you read.
For more information and strategies to help with focus and concentration, see:
Ideas Just Sweep Me Away: How to Stay On-Task While Reading (99 KB)
For more information, see TOOL "Improving your Concentration". Go to the Undergraduate Student tab. Download the module "Academic Reading". In the pdf version, go to p. 16. | <urn:uuid:7f4d308e-078c-4e2c-a56a-c68c836593ca> | CC-MAIN-2013-20 | http://queensu.ca/learningstrategies/grad/reading/module/gradfocus.html | 2013-06-19T06:50:10Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.822926 | 166 |
Baseball was part of the long and varied life of John W. Dickins for only a few years, and it seems to have played no role at all after the 1869 advent of open professionalism. Nevertheless, he made important contributions to the post-Civil War spread of the game in the South, a development that lent credence to descriptions of baseball as the “national game.”
John Whitby Dickins was born in Wigan, Lancashire County, England, on June 24, 1841, the son of Samuel Dickins, a schoolmaster, and his wife Eliza. When he was sixteen John was sent to the United States to study at the Williston Seminary in Easthampton, Massachusetts. After a year of teaching school, he moved to Brooklyn and commenced the study of law while working for the law firm of Hagner & Smith. Brooklyn was a baseball-mad city at the time and the young Englishman must have been introduced to the sport at this time, if not before.
The outbreak of the Civil War interrupted his law studies as he began what was supposed to be a three-month enlistment in the 71st New York State Militia. Instead, Dickins was captured at the Battle of Bull Run, spending four months in Libby Prison in Richmond, four months in Parish Prison in New Orleans, and another three months in the Confederate Prison in Salisbury, North Carolina. While imprisoned, he became the associate editor of “The Stars and Stripes in Rebeldom,” a collection of the writings of Union prisoners.
After close to a year behind bars, Dickins was exchanged and promptly reenlisted in the 165th New York Infantry, a regiment that became known as “Duryea’s Zouaves.” One of his fellow soldiers in the 165th was future National League president Abraham G. Mills, who would later recollect having regularly packed a bat and ball in with his field equipment and having played in a Christmas Day 1862 game at Hilton Head, South Carolina, said to have been watched by 40,000 soldiers. The figure is preposterous, but it seems a safe assumption that Dickins’ time in Duryea’s Zouaves increased his familiarity with baseball. He was promoted from Corporal to Full Sergeant on February 25, 1863, and was wounded at the battle of Port Hudson, Louisiana, on May 27, 1863.
After another promotion to Sergeant-Major, Dickins was transferred to a captaincy in the 100th U.S. Colored Troops Infantry Regiment in July of 1864. He led his new charges at the Battle of Nashville that December and was brevetted Major and Lieutenant-Colonel for “uniform gallantry and good conduct, and for especial bravery” at this pivotal battle.
It was not until the end of 1865 that John Dickins was mustered out, but he must have been free to travel after the war’s end, as he took advantage to return to England in the summer of 1865 and marry a young woman named Emma Lowe. The young couple returned to the States in October, and on Christmas Day the Civil War service of John Dickins officially ended. His discharge read: “Character, excellent; a brave, skillful and most efficient officer.”
It had now been more than four years since he had put aside his law studies, and, instead of resuming them, Dickins chose to accept a position in Nashville with the Bureau of Freedmen and Abandoned Lands. Upon his arrival in Nashville, the young Englishman used his familiarity with baseball to help organize the Cumberland Base Ball Club of Nashville, of which he served as captain and president.
Nor was his ambassadorial service on behalf of the sport restricted to Nashville. In July of 1866, he made two trips to Louisville to umpire a series for the local championship. His umpiring earned high praise from the Louisville press, and after the final out of the first contest, both clubs joined in giving him three cheers and a “tiger” – a distinctive growling cheer created by Princeton students. Not to be outdone, Dickins then called for three cheers for “base ball in general.” There was another type of drama at the end of the second game, which saw the Louisville Base Ball Club retain local bragging rights. After announcing the result, Dickins challenged the Louisville Club to face his club for the “championship of Kentucky and Tennessee.” While the right of these two clubs to contest for this honor was debatable at best, there could be no argument about the excitement the series generated in a region where baseball had yet to take firm root.
The first game was scheduled for July 31 in Tennessee. The excited members of the Louisville Club made the 183-mile trip to Nashville on the night train and engaged in “many a lusty shout and cheer for all that pertained to base ball either generally or specifically” before finally turning in for the night. Their time in Nashville lived up to their expectations as the Cumberland Club hosted them in style. The contest took place on the grounds of the Cumberland Club, located near Fort Gilliam, and in spite of intense heat a crowd estimated at 2,000-3,000 turned out to watch the visitors pull out a 30-23 triumph. One feature of the game that drew special attention was the identity of the score-keeper for the home club: Emma Dickins.
The return game took place in Louisville a couple of weeks later. It was originally scheduled for August 15, but the train carrying the Nashville players was delayed by an accident on the line, forcing a one-day postponement. Even with the delay, an overflow crowd of over 5,000 was on hand to watch, including Emma Dickins, who again acted as score-keeper for the Cumberland Club. The match ended with the Louisville Base Ball Club winning and wrapping up the best-of-three series, a result that led a reporter for the Louisville Daily Democrat to call the “exciting” contest “an epoch in the history of base ball.” He exuberantly declared that there was “not a more healthy, interesting and innocent recreation than that of base ball, a game which is recognized throughout the entire country.” He concluded by describing the Louisville Base Ball Club as “entitled to the proud CHAMPIONSHIP OF THE SOUTH” and “the equal of any in the country.”
John Dickins must have felt very proud to be a part of a match that so vividly demonstrated the extended reach of baseball. Yet his familiarity with the caliber of baseball played in other regions must have made him less sanguine about the contest’s significance. The Louisville Club’s right to claim the championship of the South was debatable at best, while the assertion that it was on a par with the national powers was mere hyperbole. Of more direct concern to Dickins was the lopsided 72-11 score by which his club had lost. The enormous gap highlighted a recurring problem for the still-young sport of baseball – that even great enthusiasm and the best of planning could not ensure the competitive balance necessary for long-term success.
As it happened, John and Emma Dickins moved to Louisville that winter, where he became an accountant. He also took over as shortstop for the Louisville Base Ball Club, and was described as an old Brooklyn player. Over the next two seasons, Dickins played shortstop for this club as it hosted a series of national powerhouses, beginning with the historic visit of the Nationals of Washington on July 17, 1867. The arrival of this club on the first-ever trans-Allegheny tour was another important milestone for a game that was still mostly confined to the Northeast. The contest led one reporter to declare that baseball had “undoubtedly established itself as the National game of our country,” so it was fitting that John Dickins played in the game.
But by this time the best days of the Louisville Base Ball Club were past, with many core players having less time for baseball because of the demands of careers and families. In the thirteen months after the game against the Nationals, the Louisville Club played host to the Athletics of Philadelphia, the Atlantics of Brooklyn, the Unions of Morrisania, and the Cincinnati club that was soon to become known as the “Red Stockings.” But none of these matches were remotely close, and the Louisville public became disenchanted with the lopsided losses. Even after the loss to the Nationals, the attitude of the Daily Democrat had begun to change, and its reporter wrote that he was “disappointed” by the “poor playing” of Dickins and second baseman Walter Brooks. Subsequent defeats prompted more grumbling and led to the departure of more regulars from the “first nine” of the Louisville Base Ball Club.
John Dickins remained a fixture in the Louisville Club’s lineup in 1868, though the birth of their first son put an end to Emma’s days as a score-keeper. He also found time to umpire numerous local contests, performing his duties “in a long-tailed duster, under a sun umbrella” on one humid summer day.
The Louisville Base Ball Club barely managed to retain local supremacy in 1868, but it came as no surprise to anyone when it disbanded at the end of that season. That was also the end of any recorded connection between John W. Dickins and the sport he had helped to popularize in the South. His family continued to grow, and by 1880 he and Emma were raising six children. His father eventually emigrated from England and joined the household.
John Dickins remained in Louisville for the rest of his life, and in 1902 he accepted a commission in the Internal Revenue Service. Emma Dickins died at some point, and he remarried around 1898 and started a second family. He died at his Louisville home on October 17, 1916, survived by his second wife and by five children from his two marriages. By then the game he had helped to introduce to the South in the 1860s was long established, with Louisville having obtained and lost two different major league franchises. It would be fascinating to learn what Dickins thought about the many changes to baseball during those years, but alas those reflections remain unknown.
History of the Second Battalion Duryee [sic] Zouaves : One Hundred and Sixty-Fifth Regiment New York Volunteer Infantry, mustered in the United States service at Camp Washington, Staten Island, N.Y. (Salem, Mass.: Higginson Book Co., 1905); Album of the Second Battalion, Duryee [sic] Zouaves, One Hundred and Sixty-fifth Regt., New York Volunteer Infantry. (1906); LDS Film 1469056, Lancashire County Baptisms 1841-1846 from the Bishop’s Transcripts, Page 2, Entry 15; BMD Birth and Marriage Records for Lancashire County; Obituary of John W. Dickins in the Louisville Evening Post, October 19, 1916, 2; Ancestry.com. Kentucky Death Records, 1852-1953 [database on-line]. (Provo, Utah: Ancestry.com Operations Inc, 2007); Chadwick Scrapbooks; census listings and city directories; contemporaneous news coverage, as cited in notes.
Louisville Journal, July 18, 1866, 3; Louisville Daily Democrat, July 18, 1866, 2
Louisville Daily Democrat, July 27, 1866, 2
Louisville Journal, August 1, 2 and 3, 1866
Louisville Daily Democrat, August 16, 1866, 2
Louisville Daily Democrat, August 17, 1866, 2
Chadwick Scrapbooks, unidentified clipping
Louisville Journal, July 18, 1867, 3
Louisville Daily Democrat, July 18, 1867, 1
Louisville Daily Democrat, July 21, 1868
Louisville Evening Post, October 18, 1916, 1, and October 19, 1916, 2 | <urn:uuid:3f3c3285-5b04-41c9-a378-d5fa7c675518> | CC-MAIN-2013-20 | http://sabr.org/bioproj/person/f4bd13cc | 2013-06-19T06:16:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.980334 | 2,505 |
SMA is an autosomal recessive genetic disease. In order for a child to be affected by SMA, both parents must be carriers of the abnormal gene and both must pass this gene on to their child. Even when both parents are carriers, the likelihood of a child inheriting the disorder is 25%, or 1 in 4.
An individual with SMA has a missing or mutated gene (SMN1, or survival motor neuron 1) that produces a protein in the body called Survival Motor Neuron (SMN) protein. This protein deficiency has its most severe affect on motor neurons. Motor neurons are nerve cells in the spinal cord which send out nerve fibers to muscles throughout the body. Since SMN protein is critical to the survival and health of motor neurons, without this protein nerve cells may atrophy, shrink and eventually die, resulting in muscle weakness.
As children with SMA grow, their bodies are doubly stressed, first by the decrease in motor neurons and then by the increased demands on the nerve and muscle cells as their bodies grow larger. The resulting muscle atrophy can cause weakness and bone and spinal deformities that may lead to further loss of function, as well as additional compromise of the respiratory (breathing) system.
There are four types of SMA: Type I, II, III, and IV. The determination of the type of SMA is based upon the physical milestones achieved. It is important to note that the course of the disease may be different for each child.
Type I SMA is also called Werdnig-Hoffmann Disease. The diagnosis of children with this type is usually made before 6 months of age and in the majority of cases the diagnosis is made before 3 months of age. Some mothers even note decreased movement in the final months of their pregnancy.
Usually a child with Type I is never able to lift his/her head or accomplish the normal motor skills expected early on in infancy. They generally have poor head control, and may not kick their legs as vigorously as they should, or bear weight on their legs. They do not achieve the ability to sit up unsupported. Swallowing and feeding may be difficult and are usually affected at some point, and the child may show some difficulties managing their own secretions. The tongue may show atrophy, and rippling movements or fine tremors, also called fasciculations. There is weakness of the intercostal muscles (the muscles between the ribs) that help expand the chest, and the chest is often smaller than usual. The strongest breathing muscle in an SMA patient is the diaphragm. As a result, the patient appears to breathe with their stomach muscles. The chest may appear concave (sunken in) due to the diaphragmatic (tummy) breathing. Also due to this type of breathing, the lungs may not fully develop, the cough is very weak, and it may be difficult to take deep enough breaths while sleeping to maintain normal oxygen and carbon dioxide levels.
The diagnosis of Type II SMA is almost always made before 2 years of age, with the majority of cases diagnosed by 15 months. Children with this type may sit unsupported when placed in a seated position, although they are often unable to come to a sitting position without assistance. At some point they may be able to stand. This is accomplished with the aid of assistance or bracing and/or a parapodium/standing frame. Swallowing problems are not usually characteristic of Type II, but vary from child to child. Some patients may have difficulty eating enough food by mouth to maintain their weight and grow, and a feeding tube may become necessary. Children with Type II SMA frequently have tongue fasciculations and manifest a fine tremor in the outstretched fingers. Children with Type II also have weak intercostal muscles and are diaphragmatic breathers. They have difficulty coughing and may have difficulty taking deep enough breaths while they sleep to maintain normal oxygen levels and carbon dioxide levels. Scoliosis is almost uniformly present as these children grow, resulting in the need for spinal surgery or bracing at some point in their clinical course. Decreased bone density can result in an increased susceptibility to fractures.
The diagnosis of Type III, often referred to as Kugelberg-Welander or Juvenile Spinal Muscular Atrophy, is much more variable in age of onset, and children can present from around a year of age or even as late as adolescence, although diagnosis prior to age 3 years is typical. The patient with Type III can stand alone and walk, but may show difficulty with walking at some point in their clinical course. Early motor milestones are often normal. However, once they begin walking, they may fall more frequently, have difficulty in getting up from sitting on the floor or a bent over position, and may be unable to run. With Type III, a fine tremor can be seen in the outstretched fingers but tongue fasciculations are seldom seen. Feeding or swallowing difficulties in childhood are very uncommon. Type III individuals can sometimes lose the ability to walk later in childhood, adolescence, or even adulthood, often in association with growth spurts or illness.
Type IV (Adult Onset)
In the adult form, symptoms typically begin after age 35. It is rare for Spinal Muscular Atrophy to begin between the ages of 18 and 30. Adult onset SMA is much less common than the other forms. It is defined as onset of weakness after 18 years of age, and most cases reported as type IV have occurred after age 35. It is typically characterized by insidious onset and very slow progression. The bulbar muscles, those muscles used for swallowing and respiratory function, are rarely affected in Type IV.
Patients with SMA typically lose function over time. Loss of function can occur rapidly in the context of a growth spurt or illness, or much more gradually. The explanation for this loss is unclear based on recent research. It has been observed that patients with SMA may often be very stable in terms of their functional abilities for prolonged periods of time, often years, although the almost universal tendency is for continued loss of function as they age.
For information on Kennedy’s Disease, please see www.KennedysDisease.org. | <urn:uuid:13cf851e-778a-4ae9-98aa-5e39cc2d6ade> | CC-MAIN-2013-20 | http://sammycavallaro.com/understanding-sma/what-causes-sma/ | 2013-06-19T06:29:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.965421 | 1,317 |
It is a bright red crystalline solid. It is a strong oxidizing agent in acidic conditions. It is poisonous because it contains dichromate, which is carcinogenic. It can be destroyed by reaction with reducing agents. It can be reduced to green chromium(III) compounds such as chromium(III) oxide.
Potassium dichromate can be made by oxidizing chromium(III) oxide in the presence of potassium hydroxide. It can also be made by adding an acid to potassium chromate.
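As an illustration of the acid route, one standard textbook equation (given here as an example rather than taken from this article) is:
2 K2CrO4 + H2SO4 → K2Cr2O7 + K2SO4 + H2O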
It is very rarely found in the earth. It is only found in very dry areas. If it rains, it dissolves in the water and is washed away.
It is used as a reagent to test for many chemicals, such as alcohol. Some breath alcohol testers use this, and the red turns to green when there is alcohol in the breath. It can be used to make chromic acid, which is used to clean glass. It can be used in cement. It is used to tan leather. It can be used in photography and to test for certain metals. It can be used to treat wood.
Potassium dichromate is very irritating. It can cause cancer when the dust is inhaled. It is a strong oxidizing agent and can start fires. It can be reacted with iron(II) sulfate to detoxify (remove the toxicity of) it. | <urn:uuid:6c0f0a2b-3645-40f0-a326-cde882cd332c> | CC-MAIN-2013-20 | http://simple.wikipedia.org/wiki/Potassium_dichromate | 2013-06-19T06:48:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.951542 | 282 |
I wrote a script in Python that connects to Gmail and prints an email's text... But my emails often have words with accents, and that is my problem...
For example, a text that I got: "PLANO DE S=C3=9ADE" should be printed as "PLANO DE SAÚDE".
How can I make my email text legible? What can I use to convert these accented letters?
The code suggested by Andrey works fine on Windows, but on Linux I am still getting the wrong output:
>>> b = 'PLANO DE S=C3=9ADE'
>>> s = b.decode('quopri').decode('utf-8')
>>> print s
PLANO DE SÃDE
Thanks, you are correct about the word; it was misspelled. But the problem is still the same here. Another example: CORRECT WORD: observação
>>> b = 'Observa=C3=A7=C3=B5es'
>>> s = b.decode('quopri').decode('utf-8')
>>> print s
Observações
I am using Debian with UTF-8 locale:
:~$ locale
LANG=en_US.UTF-8
Thanks for your time. I agree with your explanation, but I still have the same problem here. Take a look at my test:
s='Observa=C3=A7=C3=B5es'
s2= s.decode('quopri').decode('utf-8')
>>> print s
Observa=C3=A7=C3=B5es
>>> print s2
Observações
>>> import locale
>>> ENCODING = locale.getpreferredencoding()
>>> print s.encode(ENCODING)
Observa=C3=A7=C3=B5es
>>> print s2.encode(ENCODING)
Observações
>>> print ENCODING
UTF-8 | <urn:uuid:4408a06c-e606-4365-9883-08919c94f0f1> | CC-MAIN-2013-20 | http://stackoverflow.com/questions/3680352/reading-text-with-accent-python | 2013-06-19T06:23:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.797047 | 443 |
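One way to attack this, sketched below in Python 2 to match the transcripts above (the message text is invented, and a real script would fetch it from Gmail with imaplib rather than hard-coding it): let the standard email module undo the quoted-printable transfer encoding and the declared charset, then encode explicitly for whatever encoding the terminal reports instead of relying on print's implicit conversion.

# -*- coding: utf-8 -*-
# Sketch: decode a quoted-printable, UTF-8 text part and print it safely.
import email
import sys

raw = ("Content-Type: text/plain; charset=utf-8\r\n"
       "Content-Transfer-Encoding: quoted-printable\r\n"
       "\r\n"
       "Observa=C3=A7=C3=B5es sobre o PLANO DE SA=C3=9ADE\r\n")

msg = email.message_from_string(raw)
payload = msg.get_payload(decode=True)          # undoes quoted-printable, leaving UTF-8 bytes
charset = msg.get_content_charset() or "utf-8"  # charset declared in the Content-Type header
text = payload.decode(charset)                  # a unicode string

# Encode for the terminal's actual encoding; 'replace' avoids a crash when a
# character cannot be represented there.
out_encoding = sys.stdout.encoding or "utf-8"
print text.encode(out_encoding, "replace")

If the output is still garbled after this, the usual suspects are a terminal whose encoding does not match the locale, or text that was already double-encoded before it reached the script.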
The Middle Reef, part of Australia’s Great Barrier Reef, is growing more quickly than reefs in other areas with lower levels of sediment stress, a new study has found. Rapid coral reef growth has been identified in environments with large amounts of sediment, conditions previously thought to be detrimental to reef growth.
Middle Reef is located on the inner Great Barrier Reef shelf just 4 km off the mainland coast near Townsville, Australia. Middle Reef grows in water that is always 'muddy', unlike most reefs, which grow in clear water. The sediment comes from seasonal river flood plumes and the mud churning up from the floor of the sea. Since European settlement, the Queensland coast has changed significantly. The sediment runoff has increased due to the clearing of natural vegetation for agricultural use. It is believed that the poor water quality, due to high levels of sediment, has a detrimental effect on marine biodiversity.
Cores through the structure of Middle Reef were collected by the research team to analyze how it had grown. Radiocarbon dating was used to map out the exact rate of growth of the reef. The results show that the reef started to grow only about 700 years ago but that it has subsequently grown rapidly towards sea level at rates averaging nearly 1 cm per year. These rates are notably greater than those measured on most clear water reefs on the Great Barrier Reef and elsewhere. The most rapid growth, averaging 1.3 cm a year, took place when the accumulation rates of land-derived sediment within the reef structure were at their peak. They discovered that, while the reef faced high sediment levels after the European settlers arrived in the 1800s, these same conditions were also part of the long-term environmental regime under which the reef grew.
The findings suggest that in some cases reefs can adapt to these conditions and flourish, although there is evidence that other reefs have suffered degradation from high levels of sediment. The high sedimentation rates have probably aided Middle Reef's rapid vertical growth. The team believe this is because, after the coral dies, the accumulating sediment quickly covers the coral skeleton, preventing destruction by fish, urchins and other biological eroders and thus promoting coral framework preservation and rapid reef growth.
Professor Chris Perry of Geography at the University of Exeter said: “Our research challenges the long-held assumption that high sedimentation rates are necessarily bad news in terms of coral reef growth. It is exciting to discover that Middle Reef has in fact thrived in these unpromising conditions. It is, however, important to remain cautious when considering what this means for other reefs. Middle Reef includes corals adapted to deal with high sedimentation and low light conditions. Other reefs where corals and various other reef organisms are less well adapted may not do so well if sediment inputs increased.”
“Our research calls for a rethink on some of the classic models of reef growth. At a time when these delicate and unique ecosystems are under threat from climate change and ocean acidification, a view endorsed in a recent consensus statement from many of the World’s coral reef scientists, it is more important than ever that we understand how, when and where reefs can grow and thrive.”
Photo Credit: University of Exeter | <urn:uuid:27e21214-f7df-4d16-bb0e-a32eb3996382> | CC-MAIN-2013-20 | http://thegreenregister.com/thriving-coral-reef-in-muddy-waters/ | 2013-06-19T06:22:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.964053 | 674 |
General Structure of the Digestive System
The long continuous tube that is the digestive tract is about 9 meters in length. It opens to the outside at both ends, through the mouth at one end and through the anus at the other. Although there are variations in each region, the basic structure of the wall is the same throughout the entire length of the tube.
The wall of the digestive tract has four layers or tunics:
- Mucosa
- Submucosa
- Muscular layer
- Serous layer or serosa
The mucosa, or mucous membrane layer, is the innermost tunic of the wall. It lines the lumen of the digestive tract. The mucosa consists of epithelium, an underlying loose connective tissue layer called lamina propria, and a thin layer of smooth muscle called the muscularis mucosa. In certain regions, the mucosa develops folds that increase the surface area. Certain cells in the mucosa secrete mucus, digestive enzymes, and hormones. Ducts from other glands pass through the mucosa to the lumen. In the mouth and anus, where thickness for protection against abrasion is needed, the epithelium is stratified squamous tissue. The stomach and intestines have a thin simple columnar epithelial layer for secretion and absorption.
The submucosa is a thick layer of loose connective tissue that surrounds the mucosa. This layer also contains blood vessels, lymphatic vessels, and nerves. Glands may be embedded in this layer.
The smooth muscle responsible for movements of the digestive tract is arranged in two layers, an inner circular layer and an outer longitudinal layer. The myenteric plexus is between the two muscle layers.
Above the diaphragm, the outermost layer of the digestive tract is a connective tissue called adventitia. Below the diaphragm, it is called serosa. | <urn:uuid:255d9629-07ea-4e9e-8601-854d4f2b89d6> | CC-MAIN-2013-20 | http://training.seer.cancer.gov/anatomy/digestive/structure.html | 2013-06-19T06:16:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.909892 | 384 |
Unitary and Orthogonal Matrices
First a unitary transformation on a complex vector space. We pick a basis and set up the matrix
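One way to write this out (the index conventions here are an assumption and may not match the original): pick an orthonormal basis $\{e_i\}$ and define entries $u_i^j$ for the transformation $U$ by

$U(e_i) = \sum_j u_i^j e_j$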
We can also set up the matrix for the adjoint
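In that same assumed notation, the adjoint's matrix has entries

$(u^*)_i^j = \overline{u_j^i}$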
That is, the adjoint matrix is the conjugate transpose. This isn’t really anything new, since we essentially saw it when we considered Hermitian matrices.
But now we want to apply the unitarity condition: composing the transformation with its adjoint must give the identity. It will make our lives easier here to just write out the sum over the basis in the middle and find
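In the index convention sketched above, this works out to something like

$\sum_k u_i^k \overline{u_j^k} = \delta_{ij}$

one equation for each pair of indices $i$ and $j$.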
Now, this isn’t particularly useful on its face. I mean, what does that mess even mean? But if nothing else it tells us that we can describe unitary matrices in terms of (a lot of) equations involving only complex numbers. We can then pick out all the complex matrices which represent unitary transformations. They form the “unitary group” .
What about orthogonal matrices? Again, we pick a basis to get a matrix
and also a matrix for the adjoint
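Mirroring the complex case (again, the notation is only one possible choice): entries $t_i^j$ with

$T(e_i) = \sum_j t_i^j e_j$

and an adjoint matrix with entries $(t^*)_i^j = t_j^i$.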
Here the adjoint matrix is just the transpose, not the conjugate transpose, since we’re working over a real inner product space. Then we can write down the orthogonality condition
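In that same sketched notation, the condition reads

$\sum_k t_i^k t_j^k = \delta_{ij}$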
Again, this doesn’t really seem to tell us much, but we can use these equations to cut out the matrices which represent orthogonal transformations from all real matrices. They form the “orthogonal group” .
But there’s something else we should notice here. The equations for the unitary group involved complex conjugation, so we need some structure like that to talk sensibly about unitarity. However, the orthogonality equations only involve basic field operations like addition and multiplication, and so these equations make sense over any field whatsoever. That is, given a field we can consider the collection of all matrices with entries in , and then impose the above orthogonality condition to cut out the matrices in the orthogonal group , while the first orthogonal group is .
One useful orthogonal group is O(n, C). This is not the same as the unitary group U(n), though it can be confusing to keep the two separate at first. The unitary group consists of matrices whose inverses are their conjugate transposes, instead of just their transposes for the complex orthogonal group. The unitary group preserves a sesquilinear inner product, which has a clear geometric interpretation we've been talking about. The orthogonal group preserves a bilinear form, which doesn't have such a clear visual interpretation. They are related in a way, but we'll be coming back to that subject much later on. | <urn:uuid:17e438ab-a6c6-4e9a-8f33-4a52f164f55f> | CC-MAIN-2013-20 | http://unapologetic.wordpress.com/2009/07/29/unitary-and-orthogonal-matrices/?like=1&source=post_flair&_wpnonce=5b12459335 | 2013-06-19T06:36:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.92508 | 580 |
USCCB Advocacy: USCCB insists that a just peace demands an end to violence, recognition and security for the state of Israel, an end to Israeli occupation of the West Bank and Gaza, and the establishment of an internationally-recognized and viable Palestinian state. It also requires an agreement on Jerusalem that protects religious freedom and other basic rights for all faiths, and an equitable sharing of resources, especially water. USCCB supports consistent and persistent U.S. leadership to challenge and restrain both parties to the conflict and to hold them accountable for mutual steps needed for a just peace. Palestinians must improve security by halting attacks on civilians, blocking illegal arms shipments and disarming militias, and improve governance and transparency as they build capacity for a future state. Israel needs to freeze expansion of settlements, withdraw "illegal outposts," ease movement for Palestinians by reducing military checkpoints, and refrain from disproportionate military responses. The dire humanitarian situation in Palestinian areas is not in the best interests of either Israelis or Palestinians. Non-governmental organizations, including Catholic Relief Services, play a crucial role in delivering aid.
The Middle East is a land holy to Jews, Christians and Muslims, but tragically it is also a violent land that yearns for a just peace. USCCB has had a long history of pursuing justice and peace by supporting a two-state solution: a secure and recognized Israel living in peace alongside a viable Palestinian state.
The conflict between Jewish and Arab populations dates back to before the establishment of the State of Israel in 1948. Tensions rose between Arabs and Jews in response to Jewish migration to the region. In 1947 the UN recommended partition of Mandate Palestine, at the time under British rule, into two states: one Jewish and one Arab. Armed conflict ensued as British forces withdrew and Israel declared its independence in 1948. Many Arab Palestinians became refugees. The 1967 war between Israel and its Arab neighbors resulted in the occupation of Gaza and the West Bank. In 1979 and 1994 Israel signed peace treaties with Egypt and Jordan respectively, but no other Arab countries recognize Israel and a Palestinian state has not been established.
Events of the past few years have created a particularly volatile situation. In January 2005 Palestinians elected President Mahmoud Abbas. Despite new leadership, the Palestinian Authority was viewed as plagued by cronyism and inefficiency that crippled its ability to improve the lives of the Palestinian people. The unilateral Israeli withdrawal from Gaza in 2005, while welcome, was not seen as a result of the peace process or of President Abbas' efforts and led to a collapse of security in Gaza. Palestinians believe Israeli settlements and the route of the security barrier which Israel constructed in Gaza in 1994 and began constructing in the West Bank in 2002 effectively confiscate Palestinian lands and water resources in the West Bank.
These factors and others contributed to the Hamas party winning a majority in the January 2006 Palestinian parliamentary elections. This was a serious setback for the peace process. Hamas, unlike President Abbas and his Fatah party, refuses to recognize Israel, accept previous agreements and renounce violence. Its designation as a foreign terrorist organization led to reductions in international assistance to the Palestinian Authority as donors struggled with how to assist Palestinians without supporting Hamas. In 2006 armed conflict was precipitated by unjustifiable acts by Hamas in Gaza and Hezbollah in Lebanon, including cross-border raids against Israeli military personnel and rocket attacks against Israeli civilians. Israel defended itself, but its military response was disproportionate and indiscriminate in some instances, endangering civilians and destroying civilian infrastructure.
In June 2007 Hamas took control of Gaza. In response President Abbas dissolved the Hamas-Fatah unity government and formed a new Palestinian Authority (PA) government. The PA remains in control of the West Bank and is trying to implement political and economic reforms. But persistent rocket attacks from Hamas-controlled Gaza and continued indefensible suicide attacks on civilians contributed to legitimate Israeli security concerns. In late December 2008, Israel launched a major military response that resulted in high levels of civilian Palestinian casualties in Gaza and significant destruction of property and infrastructure. Israel's military response, its continuing blockade of Gaza, expansion of settlements, maintenance of numerous checkpoints within the West Bank, and construction of a security wall deep in Palestinian areas have contributed to a dramatic decline in the Palestinian economy, deepening poverty and rising Palestinian anger and hopelessness.
The Christian Communities in the Holy Land: Concern for Christians in the Middle East, particularly in the Holy Land, led the Holy Father to convene a Middle East Synod of Bishops. Christians continue to emigrate due to the continuing conflict, fears about the future, a lack of economic opportunities, and Israeli residency requirements and visa regulations that separate family members. Negotiations on the 1993 Fundamental Agreement between Israel and the Holy See, which is critical for the future of the Church and for religious freedom, remain incomplete. Some Church institutions are put at risk by Israeli tax policies and land confiscation, and the ministry of Church personnel is hampered by visa problems. Since 1998 leaders of bishops' conferences from Europe and North America have met annually in the Holy Land to visit with public officials and the local Church. ACTION REQUESTED: Engage in prayer, pilgrimage, persuasion, and projects; see website for guidelines. Despite discouraging developments, Catholics cannot abandon the Holy Land's people or pursuit of a just peace.
- Support strong U.S. leadership that holds both parties accountable for building a just peace: the Palestinians to halt violence, improve security and governance; the Israelis to stop settlements and allow movement of people and goods. Ask Congress to support funding to build the Palestinian Authority's capacity for governance and to provide urgently needed humanitarian aid for Palestinians.
- Join the Catholic Campaign for Peace in the Holy Land. Reach out to Jewish and Muslim religious leaders to work together to support strong U.S. leadership. Website: www.usccb.org/sdwp/holylandpeace/.
- Support the Church in the Holy Land. Urge members of Congress and Jewish leaders to press Israel to successfully conclude negotiations with the Holy See related to the Fundamental Agreement.
Contact: Stephen Colecchi, Director, USCCB Office of International Justice and Peace, 202-541-3160 (phone), 541-3339 (fax)
| <urn:uuid:330935c7-59cc-4454-8dfe-8042fd3d8156> | CC-MAIN-2013-20 | http://usccb.org/issues-and-action/human-life-and-dignity/global-issues/holy-land-2011.cfm | 2013-06-19T06:48:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.942323 | 1,406 |
Caspian Sea and Volga River Delta
Jacques Descloitres, MODIS Land Rapid Response Team, NASA/GSFC
This true-color MODIS image from May 10, 2002, captures Russia's Volga River (running south through the center) emptying into the northern portion of the Caspian Sea. The waters of the Caspian Sea are quite murky in this image, highlighting the water quality problems plaguing the sea. The sea is inundated with sewage and industrial and agricultural waste, which is having a measurable impact on human health and wildlife. According to reports from the Department of Energy, in less than a decade the sturgeon catch dropped from 30,000 tons to just over 2,000 tons. National and international groups are currently joining together to find strategies for dealing with the environmental problems of the Caspian Sea. | <urn:uuid:9fecbc27-695d-4a2b-86b9-26e4ba961b46> | CC-MAIN-2013-20 | http://visibleearth.nasa.gov/view.php?id=59036 | 2013-06-19T06:41:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.91255 | 175 |
World War II Heroes Who Rival Oskar Schindler
While Spielberg guaranteed that the world would remember Oskar Schindler, there were others who also saw the plight of the European Jews and went the extra mile to save as many as possible. While TopTenz has already mentioned Raoul Wallenberg and Chiune Sugihara, there were others who rose to the challenge, taking incredible risks and saving thousands of Jews fleeing the Nazi death camps.
10. Giorgio Perlasca
After the collapse of Italy, the Nazis rounded up thousands of Italian government officials in German-controlled Italian territory. One of these was Giorgio Perlasca. After spending months in detention, he was able to escape to Hungary, where he conned the local Nazis into thinking he was a Spanish diplomat. In that guise, he was able to give out thousands of VISAs that allowed Jews to escape the Nazi death camps.
When the real Spanish diplomat was forced to flee, the Hungarians thought they could finally seize the “Spanish” Jews. Perlasca, at great risk, convinced local officials that he was now the number one Spanish diplomat in Hungary. Therefore, the protection provided by the Spanish government could be maintained. Once the Red Cross had shipped the Spanish Jews to safety, he returned to Italy and kept his amazing heroics secret until a group of grateful Jewish survivors tracked him down in the 1970s.
9. Ángel Sanz Briz
A Spanish diplomat in Hungary, he saved thousands by using his diplomatic powers to turn houses, hospitals and hotels into Spanish territory. He would find Jews living in a house and run up a Spanish flag, making it Spanish territory, and then the Nazis would be blocked from entering. If the Jews did leave, they could be arrested, so Briz would have to bring food and supplies to all the little Spanish enclaves he created in Hungary. He stayed as long as he could before advancing Soviet forces forced him to head to Switzerland.
8. Giovanni Palatucci
Palatucci was an Italian police official in the city of Fiume. He saved thousands of Jews from being deported to Nazi extermination camps by destroying city documents showing their location and names. Following the 1943 capitulation of Italy, Fiume was occupied by the Nazis. Palatucci remained as head of the police administration, where he continued to clandestinely help Jews and maintain contact with the Resistance, until his activities were discovered by the Gestapo. A friend at the Swiss Consulate in Trieste offered him safe passage to Switzerland but, instead of taking this lifeline, he sent his young Jewish fiancée in his place. Palatucci was arrested on September 13, 1944. He was sentenced to death by the Germans, who sent him to the Dachau prison camp, where he died on February 10, 1945.
7. Colonel José Castellanos Contreras
Colonel Castellanos was the Salvadorian diplomat in neutral Switzerland during the War. While the war raged around Switzerland, he was approached by György Mandl, a Jewish refugee who needed the Colonel’s help to get his family to safety in Switzerland. Touched by his situation, he gave Mandl a position at the Salvadorian embassy. While stationed there, with the help of Mandl, he was able to save up to 25,000 Jews by issuing them Salvadorian citizenship or VISAs. After the war, he married a Swiss national and lived a quiet anonymous life.
6. Monsignor O’Flaherty
Monsignor O’Flaherty was an Irish priest during World War II that was based in the Vatican at the start of the war. As a member of the Vatican, the Germans would let him visit POW camps looking for Allied missing-in-action (MIA) soldiers. When Italy switched sides, the Italian prison camps released their Jewish and Allied prisoners. But because the camps were often behind German lines, the Jews turned to the priest, who would also visit their camps, for help. Acting independently, he hid thousands of POWs and Jews as the Allied lines advanced up the Italian peninsula.
The SS eventually discovered that he was leading the effort to hide Jewish refugees and tried to arrest and kill him, but couldn't enter the Vatican. A white line was painted on the Vatican courtyard and he was told that, if he crossed it, he would be fair game. Due to his ability to evade the traps set by the Gestapo, Monsignor O'Flaherty earned the nickname "the Scarlet Pimpernel of the Vatican".
5. Aristides de Sousa Mendes do Amaral e Abranches
As the French army crumbled under the German Blitzkrieg, thousands of Jewish refugees fled to southern France. As the Portuguese consular official in France in charge of granting visas, Aristides de Sousa Mendes saved tens of thousands of people from the Nazis by issuing visas from the Portuguese consulate, even though his government had forbidden helping the Jews. Although he was a hero and savior to thousands of people, he was blacklisted by the Portuguese government for helping the Jews and died in poverty.
4. Ernst Werner Techow
(Editor’s Note: No pictures of Techow seem to exist so, in his place, here’s a shot of Walter Rathenau, the man he is famous for assassinating.)
In the 1920s Techow was part of an anti-Semitic German terrorist group and was infamously imprisoned for taking part in the killing of the famous German Jewish diplomat Walter Rathenau. While in jail, he saw the error of his ways and sought to make up for his past deeds. After he was released, he joined the French Foreign Legion; during the Nazi invasion of France he saved hundreds of Jews by smuggling them out on ships in the southern French ports. As a German national, he was in incredible danger; if the Germans or even the Vichy French had found out, he would have been arrested and possibly charged with treason by the Nazis.
3. Georg Ferdinand Duckwitz and Denmark
Proving that not all Germans are anti-Semites, in 1943 German official Georg Ferdinand Duckwitz found out that the Nazis were going to round up all the Danish Jews. He then went to Sweden to see if the neutral Swedes would take the Danish Jews. When they said yes, he told the Danes about the round-up. Denmark's underground then smuggled out 99% of its Jewish population. To get past the German patrol dogs, Danish scientists developed a mixture of rabbit blood and cocaine that was spread over the Danish ports. The rabbit's blood would attract the dogs and then the cocaine would render them useless. Duckwitz's role was never discovered by the Nazis, and he served in the West German government after the war.
2. Charles Coward
Charles Coward, nicknamed the “Count of Auschwitz,” was held as a British POW but, since he had escaped so many other POW camps, he was sent to Auschwitz III, a POW camp near Auschwitz II in Birkenau. Once, during an escape, he blended in with German wounded and was accidentally awarded the Iron Cross by Nazi officers. In the Auschwitz POW camp, he met a British doctor who would visit the camp from the Jewish side. One day he switched clothes with the doctor and spent a day in the Auschwitz death camp witnessing the horrors only a few meters away.
Seeing how the other half lived, he started buying dead bodies (usually those of Belgian and French civilian forced labourers) from the SS guards. He would then tell the Jews to fall into the ditches on their walks outside the camp, essentially playing dead. He would then switch the dead bodies for the Jewish prisoners, giving the prisoners the dead men's ID papers. Coward did this on many occasions and is estimated to have saved hundreds of Jewish slave labourers.
1. Irena Sendler
Anxious about the people inside the Warsaw Ghetto, Irena Sendler was able to get access by forging a nurse’s identity card. With their parents’ blessing, she then proceeded to smuggle out children in boxes, secret compartments in cars, and just about anywhere else. She even brought a dog with her and trained it to bark at the German uniform, to cover the crying of infants hidden in her car. Once the children were out of the Ghetto, she placed them with Polish foster parents under the pretext of giving them back after the war.
She was able to smuggle thousands of Jews, mostly children, to safety, until her luck ran out and she was caught by the Nazis. Brutally tortured, she was freed from prison by her friends and a timely bribe to the guards. The bribe also got her listed by the Nazis as executed, and she continued her rescue efforts. She survived the war, but was persecuted by the Communist government for her ties to the Polish exile group in London.
- Schindler factory site memorial facing major hurdles (timesofisrael.com)
- Righteous Among The Nations (drschiffman.wordpress.com)
- Story of Muslim Shoah heroes is finally told (thejc.com) | <urn:uuid:d0cde8be-c6fe-4ffe-8b0d-d5329608848d> | CC-MAIN-2013-20 | http://waldina.com/2012/06/07/ | 2013-06-19T06:22:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.981714 | 1,885 |
Snapshot Issue 17 December 2005
What do Peking duck and the French aperitif pastis have in common? A scent: that of badian – otherwise known as star anise. And if star anise has been of growing interest recently, it is less for its spicy perfume than for its antiviral virtues… Indeed, a molecule known as shikimic acid is found in the Chinese star anise and it is from this that the popular drug Tamiflu is designed. Now that the dread of an outbreak of the Avian flu carried by the H5N1 strain is hovering over us, badian has an aftertaste of Tamiflu.
The Avian flu virus makes use of two surface proteins in the process of infection: hemagglutinin and neuraminidase. Hemagglutinin binds to a healthy cell where it initiates infection and virus entry. Once the virus has multiplied inside the cell, copies are freed. That is…almost freed. The new viruses bind to receptors – sialic acid – on the infected cell's surface. And for infection to continue, the virus must disengage itself. It does so by way of its neuraminidase, which cleaves sialic acid. Once cleaved, the virus is free to infect new healthy cells.
How can infection be checked? It so happens that a molecule which resembles sialic acid can be designed from shikimic acid, which is where star anise and Tamiflu come in. Indeed, Tamiflu tricks the Avian flu virus by mimicking sialic acid. Neuraminidase gets confused and chases after Tamiflu instead of cleaving the true sialic acid. As a consequence, the viruses are trapped on the infected cell's surface and the organism's immune system can deal with them all the more easily.
Like most antiviral drugs, Tamiflu has certain advantages over vaccines. The Avian flu virus changes its appearance constantly by modifying its surface hemagglutinins and neuraminidases, which is bad news for the immune system. Tamiflu is seemingly not all that concerned by such transformations because it goes for a part of the neuraminidase which is crucial for infection and hence not subject to much modification. Despite this, an H5N1 virus isolated from a young Vietnamese girl turned out to have acquired resistance to Tamiflu, which is cause for concern. Moreover, Tamiflu seems to be only effective on a moderate scale which is why parallel strategies – such as the production of additional antiviral drugs and the development of vaccines – should be found before the Avian flu becomes pandemic.
Read also: "Avian flu: The new Yellow Peril?"
Neuraminidase, Influenza A virus (souche H5N1): Q9W7Y7
The French edition of this column is available in Prolune's "Instantané du mois" (Snapshot of the month).
For teenagers struggling to quit smoking, a new study has some advice. To break the habit, try breaking a sweat.
It showed that teenage boys who took part in a smoking cessation program and combined it with exercise were several times less likely to continue smoking than those who received only traditional anti-smoking advice. Exercise did not have a comparable effect on teenage girls; researchers aren’t sure why. But the research is among the first to show that an exercise plan for teenage smokers can help them kick two bad habits at once, smoking and inactivity, which often go hand in hand.
For young smokers, breaking the habit before adulthood can be particularly crucial. Studies show that starting as a teenager makes it much more difficult to quit later on. About 80 percent of adult smokers began their habit before turning 18. Yet every day, 3,500 teenagers light their first cigarette.
The new study, published this week in the journal Pediatrics, took place in a state with one of the worst teen tobacco problems, West Virginia, where roughly a third of all high school students are smokers. Previous studies have shown that in adults, exercise — even if it’s just a walk around the block or lifting some weights — can help curb smoking by easing withdrawal symptoms and controlling cravings when people are confronted with cigarettes and other strong cues. Since West Virginia also suffers high rates of teenage obesity, the researchers wanted to see what effect exercise could have in combating two major health threats.
“It seemed logical to address these two together,” said Kimberly Horn, a professor of community medicine at West Virginia University and the lead author of the paper. “Exercise is known to mediate factors that often co-occur with smoking cessation, like increased stress levels, weight gain, withdrawal and cravings.”
To find out, the researchers recruited 233 smokers ages 14 to 19 at West Virginia high schools, and randomly assigned each to one of three groups. Some students received a single smoking-cessation session. A second group went through a 10-week anti-smoking program called Not on Tobacco, or NOT. And those in the third group went through the NOT program and were given pedometers and counseling on starting an exercise plan, which they could then schedule on their own time.
After three months, the study found that only 5 percent of the students who got the single anti-smoking session had quit smoking. But almost twice as many who went through the 10-week program had quit. When exercise was added to the mix, the effect on boys was remarkable: 24 percent of male students in the exercise group quit smoking, while only about 8 percent in the 10-week program that did not encourage exercise had stopped. They were also more likely to have stayed away from cigarettes after six months as well. The teenage girls in the exercise group, though, were no more likely to have quit smoking than those who received only counseling on quitting smoking.
“The kids in this study were pretty hard-core smokers,” Dr. Horn said. “They smoked about a half pack a day during the week and up to a pack a day on weekends. They were pretty addicted, and most started when they were about 11 years old.”
The data did not explain why a gender divide would exist, but Dr. Horn speculated that a few things could be responsible. Teenage boys are generally more enthusiastic about engaging in vigorous exercise, and are “more confident in their ability to be physically active,” Dr. Horn said, while physical activity levels typically plummet as teenage girls get older.
“It’s puzzling to us; it was a surprise finding,” she said. “I think we also need to look at issues of self-confidence. It could be the girls started with some stronger fitness barriers to overcome than boys.”
Nonetheless, the results over all were encouraging, since getting teenagers to give up smoking — or change any potentially harmful habits — can be notoriously difficult.
“One of the important things to point out is that oftentimes people believe that kids aren’t interested in quitting smoking,” she said. “I think this demonstrates that kids can quit, they’re interested in quitting and they can be successful, given the right tools.” | <urn:uuid:8aca8318-8422-4127-91f4-32b69ef6deff> | CC-MAIN-2013-20 | http://well.blogs.nytimes.com/2011/09/20/exercise-spurs-teenage-boys-to-stop-smoking/ | 2013-06-19T06:17:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.982108 | 882 |
From Unreal Wiki, The Unreal Engine Documentation Site
A class is a piece of UnrealScript that can be thought of as a small program. See Wikipedia:class (computer science) and Object Oriented Programming Overview for the basics of this concept. The rest of this page covers the UnrealScript syntax for declaring classes.
Class definition syntax:
class MyClass extends ParentClass <modifiers>;
expands vs. extends
UnrealScript originally used the keyword expands in the class declaration, but long ago switched to extends, the keyword used in Java. The UT-generation compiler supports both and treats them the same (although expands was considered deprecated for a while already), but in newer engine generations expands won't work. So, always use extends when you have a choice.
Optional modifiers
...that affect only this class
- abstract
- This class cannot be spawned; it's a base class only. Usually the useful functionality is implemented in subclasses. Examples: Keypoint, Triggers, Pawn.
- config or config(name)
- This class supports saving its config properties to a configuration file. By default this is the Game Ini File (the configuration file that has the same name as the executable file of the game, e.g. UnrealTournament.ini or UT2003.ini). Using config(name) overrides the default config file name for this class. There are two special names that can be used here: System uses the game ini file, User uses the User.ini. Both names can be mapped to other files via the game's -ini=... and -userini=... command line parameters.
- nousercreate
- Only in UT or earlier engine versions. This class cannot be placed in a map using UnrealEd. In newer versions, use notplaceable.
- native
- Some behaviour of this class is handled by native code. See Native Coding.
- nativereplication
- Replication of variables and functions declared in this class is completely handled by native code.
- From a now disappeared page about undocumented UnrealScript features: Specifies that a reference to this class may safely be set to Null or default if the class object can't be found in any packages. For example, you create a map that uses textures from the package "Rugs.utx". You have two floors in your map, one surface using the "Persian" texture and the other using the "Throw" texture. If you close your map, delete "Persian" from the texture package and reload your map, the surface that was referencing Persian will be changed to reference the default texture. This is because the texture class was declared SafeReplace. Note that packages are not SafeReplace. That means if you had deleted the Rugs.utx package completely (deleting the file), your map would not load because the package must be found.
- Within ClassName
Only works in Unreal Engine 2.0 and later engine versions, and with classes not derived from Actor. It allows access to the members of the holding class it is declared in. The holding class is optionally pointed to by the identifier 'Outer'. For this to work, the class can only extend Object.
Examples include PlayerInput, AdminBase, and CheatManager. All three of these are declared to be "within PlayerController" and extend object. Since they aren't actors, their functions and variables cannot be replicated. Also, if you look carefully, you will notice that these classes can call Outer functions (and possibly reference Outer variables) without making an explicit reference to "Outer". This has an effect that is somewhat like multiple inheritance, because it can call Outer and Super functions.
- perobjectconfig
- Stores configuration information on a per-object basis rather than a per-class basis. This means that each object should have a separate configuration section in the configuration file based on its name.
- transient
- This class is not included when saving a game state.
- noexport
- Don't export to C++ header. "ucc make -h" won't automatically generate a C++ header for native functions/events. Please see the following pages for more information:
- Native Functions
- UnrealScript Q&A (May 2000)
- dependson(ClassName)
Only works in Unreal Engine 2.0 and later engine versions, and takes a class name as a parameter. Tells the compiler to process another class of the same package first, because this class depends on an enum or struct declared in that other class. If your class depends on more classes you have to use the modifier several times, like in xPawn:
class xPawn extends UnrealPawn config(User) dependsOn(xUtil) dependsOn(xPawnSoundGroup) dependsOn(xPawnGibGroup);
Note: The compiler does not check dependson for accuracy and will cause a GPF if you misspell a class name or if you include spaces inside the brackets, like this:
class AClass extends Object dependsOn( SomeOtherClass );
To access the structs in a class that is depended on in this way, you must prefix it with the class name, like so:
class A extends B dependson(ThirdClass);
var ThirdClass.SomeStruct SomeVariableName;
In certain cases the DependsOn() modifier might not be necessary. Note that you cannot use it to resolve circular dependencies between classes in the same package.
- exportstructs
- Export all structs declared in this class to the C++ header. This is equivalent to declaring all structs in the class as "native export".
- UT2004 only. Only used in cache metaclasses, such as GameInfo, Mutator, Weapon, Vehicle; ignored for any non-cached class. This class modifier is used to indicate that the values of this class's cacheable properties should not be exported to the .ucl file. In general, mod authors will probably never need to use this specifier, as it used for gametypes, weapons, mutators, etc., which should not appear in GUI/webadmin lists.
- hidedropdown
- UT2004 only. This class will not appear in drop down listboxes in UnrealEd.
- UT2004 only. The .ini file for this class may be specified on the command-line, using the syntax ' -classname=filename.ini'. If no parameter is specified on the commandline, the class uses its default configuration file (the system ini, unless a different ini is specified in the class declaration using Config(xx)). For example, in order to specify a unique .ini file for the usernames & passwords used by the advanced administration system (xAdmin.xAdminConfigIni), add the parameter ' -xAdminConfigIni=filename.ini' to the startup commandline. If the commandline doesn't contain this parameter, username/password information will be stored in the xAdmin.ini file, because this is what the xAdminConfigIni class declares as its config file.
...that also affect subclasses
- collapsecategories
- Only works in Unreal Engine 2.0 and later. Collapses all property groups into one main property group.
- hidecategories(group list)
- Only works in Unreal Engine 2.0 and later. Takes a comma-separated list of variable groups. These groups will not be shown in UnrealEd's property windows, e.g. the Actor Properties or Texture Properties. (also see Displaying Variables In UnrealEd)
- showcategories(group list)
- Only works in Unreal Engine 2.0 and later. Opposite of hidecategories. Variable groups that have been hidden in a superclass can be made visible again with this modifier.
- placeable / notplaceable
- Only works in Unreal Engine 2.0 and later. The class must also be derived from Actor or a subclass of Actor. This means you can (placeable) or cannot (notplaceable) place Actors of this class in a level.
- Only works in Unreal Engine 2.0 and later. Classes also cannot be a subclass of actor. See Editinline.
- See Automated Component.
From what I can gather this signifies that the member variable is owned by the class rather than just being a reference. It will replace what would normally be a 4-byte reference with an n-byte instance where n = sizeof(YourClass). The benefit is that it allows Object members to be visible and editable in the Unreal editor without having to derive from Component.
WARNING: when refactoring and changing to instanced, old maps will crash when trying to load. To fix, delete the appropriate world objects from the map before refactoring and re-place them after building scripts.
Working with classes
- the class<foo> syntax – see peppers and pepper grinders
- the use of myObject.IsA(class)
rough snip from one of tarquin's unl33t forum postings:
Example of casting: consider the syntax MyGame(Level.Game).Leader. Level.Game is a variable that's been declared to point to an object of class GameInfo. You can make it point to a subclass; that's the whole point of OO (polymorphism, isn't it?). The GameInfo class doesn't have a Leader property, so to access that property, you've got to temporarily specialize that variable.
Related Topics
- Other important parts of a class script:
DaWrecka: Something I've been trying to find out lately is: is it possible to find out whether an object or actor is abstract, based on the class reference? I'm trying to code a new monster manager for Fraghouse Extension, and I'd like to weed out the abstract classes from the monster list if possible. So far the only plan I've got is trying to spawn the monster and testing whether it succeeded - a definite Plan Z, due to the fact this would be from a GUI. Spawning would work, but the lack of a GameInfo would likely lead to multiple Accessed Nones.
Wormbo: There's no way to tell directly. Try spawning it in the entry level, that should have a GameInfo. | <urn:uuid:269b346b-26b7-4f6b-8a2d-8d9be8c77bc8> | CC-MAIN-2013-20 | http://wiki.beyondunreal.com/Legacy:Class | 2013-06-19T06:47:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.855879 | 2,067 |
(Part 2 with stats from the human genome)
How might you search a really big string - a few billion symbols - for the best alignment of a large number of short substrings - perhaps just a few hundred symbols long or less - with some allowed edit-distance fuzzy-matching?
One approach jumped out at me: a sorted string index, like you build before doing a Burrows-Wheeler Transform (BWT)!
For illustration we’ll use a short string of 25 symbols:
The first step of BWT is to compute the sorted rotations of the text. If you imagine adding a special end-of-string symbol to the end of the string, you don't need to actually compute the proper rotations; you can simply compare suffixes with a `memcmp`. So we have an array of 25 offsets into the string, and sorted it looks like this:
This is called a suffix array. This has O(n) memory and can be created in O(n lg n) or Θ(n) time.
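As a rough illustration, here is the naive version of that construction in Python; the function name and the 25-symbol stand-in string below are mine (not the string behind the post's figures), and a genome-scale build would use a linear-time algorithm such as SA-IS instead of a plain sort.

```python
def build_suffix_array(text):
    """Offsets of every suffix of `text`, sorted lexicographically.

    Naive construction: the sort does O(n log n) comparisons and each
    comparison may scan O(n) symbols, so use SA-IS or similar for big inputs.
    """
    return sorted(range(len(text)), key=lambda i: text[i:])

text = "GATCATTAGCATCATTACGATCGAT"  # arbitrary 25-symbol stand-in, not the post's string
sa = build_suffix_array(text)       # sa[k] = start offset of the k-th smallest suffix
```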
Normally BWT now hops around this index and does the transforming before discarding the suffix array; however, we will not do so. We will keep hold of this suffix array and use it as the index for our string-searching purposes.
If you now wanted to search for, say, `CATT`, you could do so in this index with a binary search.
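Here is a sketch of that binary search in plain Python (my own helper, not code from the post); it finds the contiguous run of suffix-array slots whose suffixes begin with the pattern, using `text` and `sa` from the sketch above.

```python
def find_range(text, sa, pattern):
    """Return (lo, hi) such that sa[lo:hi] holds exactly the suffixes that
    start with `pattern`; roughly O(m log n) symbol comparisons."""
    m = len(pattern)

    def prefix(i):
        return text[i:i + m]

    lo, hi = 0, len(sa)
    while lo < hi:                      # leftmost suffix with prefix >= pattern
        mid = (lo + hi) // 2
        if prefix(sa[mid]) < pattern:
            lo = mid + 1
        else:
            hi = mid
    start = lo

    lo, hi = start, len(sa)
    while lo < hi:                      # leftmost suffix with prefix > pattern
        mid = (lo + hi) // 2
        if prefix(sa[mid]) <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return start, lo

start, end = find_range(text, sa, "CATT")
matches = sorted(sa[start:end])         # start offsets of every occurrence
```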
But some simple meta-data can help us cut that down. If we count the frequency of each symbol and track the cumulative counts in sorted order:
Look at the cumulative counts - it's the offset into the index for each symbol! So as `CATT` starts with `C`, and this frequency table tells us that `C` is 5 entries starting at offset 5 in the suffix array, we now only have to search this sub-section of the array.
Imagine we count the frequencies of all pairs of symbols in the string:
Look again at the cumulative counts - it's the offset into the suffix array for each digraph. As `CATT` starts with `CA`, we can easily see that there is just 1 `CA` entry, at offset 5. (Note that the trailing T at position 18 of the suffix array throws off subsequent indices; you'd have a minimum suffix length in your array to match the key length in your lookup table.)
Some digraphs may not occur, of course. There's no `AA` in our string, for example. But in a very large string each digraph is probably represented. No matter, we could easily allocate a `symbol_count*symbol_count` array for this index into the suffix array. If we were to search for the substring `AACG`, we'd immediately spot there were 0 `AA`s, and we could use ordinal math to know where to look in this index into the index without having to iterate over it.
You can derive the count for a prefix from the cumulative of it and the next prefix alone, so you only actually have to store the cumulative.
You can obviously extend this to longer and longer prefixes. Imagine you know you have just four unique symbols. At 2 bits per symbol, 8 symbols would have 64K entries. At 32 bits for each cumulative count, that's a quarter of a megabyte to know where in the array to look for any 8-symbol substring. Go crazy - with a 24-bit prefix you use 64MB to find any 12-symbol substring.
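Here is roughly what that cumulative-count lookup table could look like in Python, assuming a four-symbol A/C/G/T alphabet and, as noted above, a suffix array that only keeps suffixes at least k symbols long; the names and packing details are my own illustration.

```python
def build_prefix_table(text, sa, k, symbols="ACGT"):
    """cumulative[c] = slot in `sa` where the bucket for packed prefix code c
    begins; cumulative[c + 1] is where that bucket ends.

    Assumes every offset in `sa` has at least k symbols left in `text`
    (the "minimum suffix length" mentioned above). 4**k + 1 entries:
    k=8 gives 64K buckets (about 256 KB of 32-bit counts), k=12 gives 16M (about 64 MB).
    """
    code = {s: i for i, s in enumerate(symbols)}   # A=0, C=1, G=2, T=3
    cumulative = [0] * (4 ** k + 1)
    for i in sa:
        c = 0
        for ch in text[i:i + k]:                   # pack k symbols, 2 bits each
            c = (c << 2) | code[ch]
        cumulative[c + 1] += 1
    for c in range(1, len(cumulative)):            # counts -> running start offsets
        cumulative[c] += cumulative[c - 1]
    return cumulative

def bucket(cumulative, pattern, k, symbols="ACGT"):
    """Slice [start, end) of the suffix array covering suffixes that begin
    with pattern[:k]: no iteration, just ordinal math on the packed code."""
    code = {s: i for i, s in enumerate(symbols)}
    c = 0
    for ch in pattern[:k]:
        c = (c << 2) | code[ch]
    return cumulative[c], cumulative[c + 1]
```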
I used 32-bit numbers for the cumulative sums. What if you have more than 4 billion symbols in your string? You could use bigger numbers, or you could simply split your string into sections. The second section could conveniently start max-substring-length before the end of the previous; this overlap would avoid your having to pick over substring matches that bridge your sections. Whether or not you divide your string into sections, you can search it in parallel (using many CPU cores at once), as search is a read-only operation.
Write the index to disk, and re-use it. Open it with `mmap` and let the kernel cache hotter pages of it between runs.
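One hedged sketch of that, using NumPy's memmap as my own tooling choice (the post doesn't prescribe one):

```python
import numpy as np

def save_suffix_array(path, sa):
    np.asarray(sa, dtype=np.uint32).tofile(path)   # 4 bytes per offset

def open_suffix_array(path):
    # Read-only memory map: the kernel pages it in on demand, keeps hot
    # pages cached between runs, and it can be shared safely by parallel
    # search workers since lookups never write to it.
    return np.memmap(path, dtype=np.uint32, mode="r")
```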
The nice thing about the suffix array is that the offsets are in sequential memory and CPUs are good at sequential memory reads. Once you’ve gone and read an element in the suffix array, the CPU has likely got its neighbours in the L1 cache line too.
Dereferencing these offsets into the source string is going to be out to main memory (or even disk) again, so is slightly costly.
Here’s where my mind is taking me:
Say you are searching for `ATTATGCC`, and your index into the suffix array is two symbols long (as in the example tables above). `AT` is 4 entries starting at offset 1 in the suffix array. Rather than going and dereferencing it (we are pretending we have a much longer source string, so lots of random memory access is a bad thing), we instead look at the second digraph: `TA` is 1 entry at offset 18. So we just have to find those entries in the first suffix-array range that match those in the second, and we've doubled our prefix length before dereferencing into the main source string. I've thought along these lines before…
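As a rough sketch of that idea, treating the pattern as non-overlapping digraphs so the second bucket is checked two symbols further along (the function and its arguments are illustrative, not from the post; `sa` is the suffix array from earlier and each bucket is a (start, end) pair from the lookup table):

```python
def intersect_buckets(sa, first, second, step=2):
    """Offsets p such that the suffix at p starts with the first digraph and
    the suffix at p + step starts with the second digraph, i.e. a 4-symbol
    prefix is confirmed without touching the source string at all."""
    start2, end2 = second
    followers = set(sa[start2:end2])
    start1, end1 = first
    return [p for p in sa[start1:end1] if p + step in followers]

# e.g. for "ATTATGCC" you would intersect the "AT" bucket with the "TA" bucket
```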
I am still reasoning through what the order of the offsets in the table tells us without needing to dereference them too.
You could also track, for each string in the index, the offset at which it differs from the previous one; what would that buy you?
Or you could index on multi-symbol boundaries; say, with two bits per symbol, a byte boundary falls every fourth symbol and is an obvious boundary to consider.
With fuzzy matching, you'd have to consider prefixes with the leading character missing, and so on until your threshold is reached, and you'd have to score candidate prefixes with a classic edit-distance-type measurement (e.g. Smith-Waterman).
Hmm, what I really want to know is how big the human genome is, and what the frequencies and the maximum frequency of 24-symbol prefixes in it are…
So, work in progress :)
ALPHABET LETTER PATTERNS: BASIC ALPHABET A-Z (B/W, LARGE)
The lower case letters of the alphabet, presented in black outline. One letter to a page. Includes punctuation marks. Print on colored paper for eye-catching bulletin boards, or enlist your students' artistic skills for other classroom decorations. (one large download)
Electronic waste is composed of the chemicals and metals used to make electronic devices like computers, cell phones, and handheld videogame systems. Everything from inkjet printer cartridges to your mp3 player contains it, and getting rid of it safely, in an environmentally-friendly way, has become an increasingly important issue as the number of gadgets we use to make life easier continues to increase. Virtually every electric appliance bears a component of e-waste and must be carefully and properly disposed of or recycled, in accordance with both federal and state laws.
While federal regulations apply to some older types of products, California has passed landmark legislation to regulate the disposal of many other types of e-waste. The Electronic Waste Recycling Act of 2003 specifically targets video display devices – televisions, computer monitors, cell phones, portable DVD players – which employ LCD, CRT, or plasma screens measuring four or more inches in size diagonally. The act also set limits on the types and amounts of hazardous materials which can be used in the manufacture of such devices.
In 2005, a recycling fee was imposed on the purchase of many electronic goods. In part, this fee is used to pay for the cost of recycling and processing e-waste material. Cell phones and rechargeable batteries have also recently had guidelines put into place for their disposal.
In order to properly dispose of electronic goods, an individual must visit a recycling location. ABS Internet works with BCS Recycling Specialists to dispose of our extraneous electronics. In addition to recycling e-waste on a large scale for businesses and organizations, BCS also accepts e-waste directly from individual consumers. BCS even offers a recycling fundraising program and is a “landfill-free” company, recycling or repurposing every last component of the devices it receives so that nothing goes into a landfill. More information can be found at their website: www.scrapdr.com.
As the population of device users increases, and the length of time before device obsolescence decreases, we will more clearly see a need to re-use components and resources involved in the manufacture of electronic devices and to further reduce their environmental impact. California is often at the forefront of much of our nation’s legislation and continues to set high standards to meet the ever-changing needs of both planet and population. | <urn:uuid:953e1cff-0c43-49fe-bec8-d293da53986f> | CC-MAIN-2013-20 | http://www.absinternet.com/blog/category/clients/ | 2013-06-19T06:23:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.950792 | 481 |
During the terrorist attacks on the World Trade Center, two men carried a woman who uses a wheelchair down 68 flights to safety moments before the tower collapsed. Other stories have shed light on hardships people with disabilities faced in the aftermath of the crisis, including difficulties they encountered in accessing various relief services. The tragic events of last September have brought into focus the importance of taking into account the needs of all persons, including those with disabilities, in preparing for, and responding to, disasters and emergencies. They have also served to renew interest in how building requirements address accessible egress.
The Access Board develops and maintains accessibility requirements for the built environment, transit vehicles, telecommunications equipment, and for electronic and information technology under several different laws, including the Americans with Disabilities Act (ADA). The Board's guidelines for facilities address means of egress that are accessible to persons with disabilities. Presented here is an overview of these design requirements. Also included are links to information developed by other organizations on evacuation planning and disaster preparedness.
Related Document: Access Board Emergency Evacuation Procedures
The ADA covers a wide variety of facilities, including places of public accommodation, commercial facilities, and state and local government facilities. The Board's ADA Accessibility Guidelines (ADAAG), which primarily cover new construction and alterations, include specifications for accessible means of egress, emergency alarms, and signage. Model building codes, life safety codes, and state access codes also address these and other elements related to emergency egress.
Accessible Means of Egress (ADAAG)
ADAAG's criteria for accessible means of egress, like those in other building requirements, address both the required number and the technical specifications. The minimum number of egress routes required to be accessible is based on life safety code requirements for means of egress. Most of the criteria for accessible routes, such as width and the treatment of elevation changes, are applied to accessible means of egress to ensure access for persons with disabilities, including those with mobility impairments. Multi-story buildings pose a particular challenge to accessible means of egress since elevators, the standard means of access between floors, are typically taken out of service in emergencies for safety purposes. ADAAG addresses this situation through requirements for areas of rescue assistance or horizontal exits. Evacuation elevators, which are recognized by the model building codes but not the current ADAAG, offer an additional solution.
Areas of Rescue Assistance (ADAAG 4.1.3(9), 4.3.11)
ADAAG provides requirements for fire-resistant spaces where persons unable to use stairs can call for and await evacuation assistance from emergency personnel. Known as "areas of rescue assistance" or "areas of refuge," these spaces must meet specifications for fire resistance and ventilation. They are often incorporated into the design of fire stair landings, but can be provided in other recognized locations meeting the design specifications, including those for fire and smoke protection. Areas of rescue assistance must include two-way communication devices so that users can place a call for evacuation assistance. ADAAG requires areas of rescue assistance in new buildings only. An exception is provided for buildings equipped with sprinkler systems that have built-in signals used to monitor the systemís features. Horizontal exits, which use fire barriers, separation, and other means to help contain the spread of fire on a floor, can substitute for areas of rescue assistance provided they meet applicable building codes. Horizontal exits enable occupants to evacuate from one area of a building to another area or building on approximately the same level that provides safety from smoke and fire. Life safety codes and model building codes provide requirements for horizontal exits (see Additional Resources).
Evacuation Elevators (Proposed ADAAG 207, 409)
Emergency personnel may operate standard elevators in certain emergencies through the use of a special key. In some cases, it may be possible to evacuate people with disabilities in this manner. This, however, is not always an option. Model building codes, such as the International Building Code, and referenced standards now include criteria for elevators that are specially designed to remain functional in emergencies. Known as "evacuation elevators," they feature, among other things, back-up power supply and pressurization and ventilation systems to prevent smoke build-up. This type of elevator was not generally recognized when ADAAG was first developed. The Board has included requirements for these elevators, that are consistent with the model building codes, in its proposal to update ADAAG. Most recent model building codes now require this technology in new mid-rise and high-rise buildings.
Alarms (ADAAG 4.1.3(14), 4.28)
ADAAG provides specifications for emergency alarms so that they are accessible to persons with disabilities, including those with sensory impairments. Where emergency alarm systems are provided, they must meet criteria that address audible and visual features. Visual strobes serve to notify people who or deaf or hard of hearing that the alarm has sounded. ADAAG specifications for visual appliances address intensity, flash rate, mounting location, and other characteristics. In general, it is not sufficient to install visual signals only at audible alarm locations. Audible alarms installed in corridors and lobbies can be heard in adjacent rooms but a visual signal can be observed only within the space it is located. Visual alarms are required in hallways, lobbies, restrooms, and any other general usage and common use areas, such as meeting and conference rooms, classrooms, cafeterias, employee break rooms, dressing rooms, examination rooms and similar spaces.
Signage (ADAAG 4.1.3(16))
Requirements in ADAAG for building signage specify that certain types of signs are required to be tactile. Raised and Braille characters are required on signs that designate permanent spaces. This is intended to cover signs typically placed at doorways, such as room and exit labels, because doorways provide a tactile cue in locating signs. Tactile specifications also apply to signs labeling rooms whose function, and thus designation, is not likely to change over time. Examples include signs labeling restrooms, exits, and rooms and floors designated by numbers or letters. This includes floor level designations provided in stairwells. ADAAG also addresses informational and directional signs. These types of signs are not required to be tactile but must meet criteria for legibility, such as character size and proportion, contrast, and sign finish. The types of directional and informational signs covered include those that provide direction to exits and information on egress routes.
Further information on ADAAG and other Board guidelines is available in our technical assistance section. Model building codes, fire safety codes, and state access codes include requirements pertinent to accessible egress and emergency notification. Resources on these codes include:
In addition, the American Institute of Architects has developed material on security issues in building design.
Evacuation planning is a critical component of life safety, including for persons with disabilities. This is true for all buildings, including those that are new and fully accessible. Evacuation planning should include a needs assessment to determine who may need what in responding to an emergency and evacuating a facility. Such an assessment is instrumental in implementing policies and supplying products that accommodate the needs of all facility occupants. Primary resources on fire safety include the U.S. Fire Administration and the National Fire Protection Association.
The U.S. Fire Administration offers a variety of materials specific to persons with disabilities:
Information is also available from other sources:
Evacuation and Emergency Alarm Products
Various products are available that are designed to accommodate persons with disabilities in emergencies. Mobility aids, such as evacuation chairs, are available to transport people unable to use stairs. These devices are designed with rollers, treads, and braking mechanisms that enable a person to be transported down stairs with the assistance of another individual. These devices can be a key element of an evacuation plan, particularly where areas of rescue assistance, horizontal exits, or evacuation elevators are not available. An agency's evacuation plan should include the designation of people willing to provide assistance and their training in the type of evacuation devices supplied. Other types of products are available that can enhance access in existing buildings that are not subject to ADAAG requirements, such as portable visual alarm devices. A leading resource on product information is ABLEDATA, a federally subsidized organization that maintains a database of information on more than 27,000 assistive devices and technologies. In addition, the Job Accommodation Network website provides information on evacuation products.
Information on disaster preparedness and relief is available from the American Red Cross, the Federal Emergency Management Agency (FEMA), the National Center on Emergency Planning for People with Disabilities, and the National Organization on Disability (NOD). NOD is the leading force behind a newly established Task Force on Emergency Preparedness and People with Disabilities, which includes representatives from disability groups, emergency planning and response organizations, and various government agencies. In addition, the Interagency Coordinating Council (ICC) on Emergency Preparedness and Individuals with Disabilities, which was established by Executive Order in 2004, is responsible for implementing policies to address the safety and security needs of people with disabilities in emergency situations. The Council is headed by the Secretary of Homeland Security and is composed of representatives from other Federal departments, including the Department of Labor (DOL).
On-line information available from these and other organizations include: | <urn:uuid:64e5ac27-063e-44e0-b23e-443fc36375b1> | CC-MAIN-2013-20 | http://www.access-board.gov/evac.htm | 2013-06-19T06:48:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.946726 | 1,924 |
Science Fair Project Encyclopedia
Biomass is organic non-fossil material, collectively. For example, plants (including trees) and animals are biomass, as are the materials they produce, such as animal droppings and wood. The most successful animal on Earth, in terms of biomass, is the Antarctic krill, Euphausia superba, with a biomass of probably over 500 million tonnes, roughly twice the total biomass of humans.
Biomass is sometimes burned as fuel for cooking and to produce electricity and heat. This is called biofuel. Biomass used as fuel often consists of underutilized types, like chaff and animal waste. It is often considered a type of alternative energy, although a polluting one. Paradoxically, in some industrialized countries like Germany, food is cheaper than fuel when compared by price per joule. Central heating units fuelled by food-grade wheat or maize are available.
Biomass also refers to the dried organic mass of an ecosystem. As the trophic level increases, the biomass of each trophic level decreases. That is, producers (grass, trees, shrubs, etc.) will have a much higher biomass than the animals that consume the producers (deer, zebras, insects, etc.). The level with the least biomass will be the highest predators in the food chain (foxes, eagles, etc.).
Types of high volume industrial biomass
- Dried distiller's grain
- Meat and bone meal
- Rice hulls
- Plate waste
- Landscaping waste
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Kitchen gardens, which individuals planted near their homes for vegetables, fruit, and cooking and medicinal herbs, existed by necessity in the colonial period. Their use declined in the North during the nineteenth century as the population shifted from an agrarian to an urban lifestyle with expanding industrialization.
Beginning in the 1840s, movements promoting the health and psychic benefits of country living and gardening were led by Henry Ward Beecher and Andrew Jackson Downing, who published his influential Treatise on the Theory and Practice of Landscape Gardening in 1841. The building of railroads and other public transportation during midcentury allowed the upper and middle classes to commute into cities for work, while living more comfortably at their outskirts.
In post–Civil War suburbs, people once again planted flower and kitchen gardens, this time to alleviate the stresses in their lives, to find pleasure, and to supply themselves with healthful produce. Seedsmen, who had been selling their products for decades at general stores, began competing for these new gardeners’ business through direct mail.
Companies had the same seeds to sell, and so they attempted to outshine their rivals through the size of their catalogs, the beauty of their illustrations, and their shrewd marketing of “new and improved” strains, which they actually recycled from the eleven hundred known species and varieties of vegetables.
Postbellum gardeners looked forward to receiving the seedsmen's catalogs each winter, when they could pore over the detailed illustrations and superlative-laden descriptions and dream about the delicious food and beautiful flowers they would cultivate come spring and summer.
Healthcare-Associated Infection and Antibiotic Resistance
Antibiotic Resistance FAQs
Antimicrobial agents, or antimicrobials, are substances that can kill or inhibit growth of a variety of microbes, including bacteria, fungi, viruses, and parasites. The major types of antimicrobials include: antibiotics, which fight bacteria; antifungal agents, which fight fungi; antiviral agents, which fight viruses; and antiparasitic agents, which fight parasites.
Antibiotics are useful drugs that can fight bacterial infections when they are used appropriately. However, in recent years, the overuse and misuse of antibiotics has led to a decrease in antibiotic effectiveness and the resistance of some microorganisms to these drugs.
What is an antibiotic?
An antibiotic is a medicine that kills or inhibits the growth of bacteria. The term 'antibiotic' originally referred to a natural compound produced by a fungus or other microorganism that kills disease-causing bacteria in humans or animals. Some antibiotics may be synthetic compounds (not produced by microorganisms) that can also kill or inhibit the growth of bacteria.
What is antibiotic resistance?
Antibiotic resistance is the ability of bacteria or microorganisms to resist the effects of an antibiotic. The improper use and overuse of antibiotics has led to strains of bacteria that are no longer sensitive to the effects of standard drug treatments.
How does antibiotic resistance develop?
Antibiotic resistance occurs when bacteria change in a way that allows them to resist the action of antibiotics. The bacteria survive and become the source of a new, drug-resistant strain that can multiply and transfer the resistance.
Antibiotics can kill or inhibit the growth of susceptible bacteria. Sometimes one of the bacteria survives because it has the ability to neutralize or evade the effect of the antibiotic. This one bacterium can then multiply and replace other bacteria that were killed by antibiotics. Exposure to antibiotics can provide selective pressure, which makes the bacteria more likely to be resistant. Furthermore, bacteria that were once susceptible to an antibiotic can acquire resistance through mutation of their genetic material or by acquiring pieces of DNA from other bacteria that code for resistance.
Why is antibiotic resistance a growing problem?
The overuse and misuse of antibiotics promotes the development of resistant microorganisms. Every time a person takes antibiotics, sensitive bacteria are killed, but resistant bacteria may grow and multiply.
Antibiotics should be used to treat bacterial infections and are not effective against viral infections like the common cold, most sore throats, and the flu. Even when antibiotics are prescribed appropriately, they must be used as directed. Patients must take the entire course of treatment and not skip any doses. Stopping antibiotics too early kills the weak microorganisms, leaving the strong to survive and develop resistance.
How can I reduce my risk from antibiotic-resistant infections?
Antibiotics can be very useful drugs when they are used appropriately. It is important to realize that antibiotics designed for bacterial infections are not to be used for viral infections such as the cold, cough, or the flu. The following tips may be useful to remember:
- Do not take an antibiotic for a viral infection like a cold or flu. Antibiotics will not treat these infections.
- Discuss the use of antibiotics with your healthcare provider:
- Ask your doctor whether an antibiotic is likely to be beneficial to your illness.
- Ask your doctor if there is anything else you can do to feel better soon.
- When antibiotics are prescribed, take them exactly as advised by your healthcare provider. Take the whole course of treatment even if you are feeling better. Do not skip any doses.
- Do not save the antibiotic for later use.
- Do not take antibiotics prescribed for someone else or share your antibiotics with others. The antibiotic may not be effective in treating your illness. Taking the wrong antibiotics can only delay recovery and allow bacteria to multiply.
If your healthcare provider determines that you do not have a bacterial infection, ask about other ways to help relieve your symptoms. Do not pressure your provider to prescribe an antibiotic.
What can I do to prevent myself from getting sick?
There are several ways you can prevent yourself from getting sick:
- Wash your hands frequently with soap and water for at least 20 seconds. Your hands may look clean but they can carry germs that you can't see.
- If you are unable to wash your hands with soap and water, use an alcohol-based hand sanitizer.
- Avoid touching your eyes, nose and mouth to prevent germs like bacteria or viruses from entering your body.
- Cover your mouth and nose when you cough or sneeze.
- Stay up to date on your immunizations and get a flu shot every year.
- If possible, avoid contact with people who are sick. | <urn:uuid:d656e4b2-2b4f-4da9-811b-e2f51cd1cd3b> | CC-MAIN-2013-20 | http://www.azdhs.gov/phs/oids/hai/faqs.htm | 2013-06-19T06:29:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.931382 | 1,051 |
Brain Scans Show Differences in Adults With Autism
THURSDAY, Nov. 29 (HealthDay News) -- Brain scans done on groups of men with autism show distinct differences in both the volume of specific regions and the activity of cells that signal a possible immune response, two new studies suggest.
Scientists in England and Japan used MRI and PET (positron emission tomography) scans to examine brain-based anatomical and cellular variations in those with autism. But the disparities -- while offering a deeper glimpse into the little-understood developmental disorder -- raised more questions about its cause and treatment that only further research can answer.
"There's really strong evidence now that the immune system appears to be playing a role in autism, but we just don't know what that role is," said Geraldine Dawson, chief science officer of Autism Speaks, who was not involved in either study. "There is such an urgent need for more research to understand the causes and more effective treatment for autism. Autism has really become a public health crisis, and we need to respond to this by greatly increasing the amount of research conducted so we can help families find answers."
The studies were published online in this week's issue of the journal Archives of General Psychiatry.
Affecting one in 88 children in the United States, autism is characterized by pervasive problems in social interaction and communication, as well as repetitive and restricted behavioral patterns and interests.
The Japanese study examined the brains of 20 men with autism using PET scans to focus on so-called microglia. These are cells that perform immune functions when the brain is exposed to "insults" such as trauma, infection or clots. The PET images indicated excessive activation of microglia in multiple brain regions among those with autism when compared to a group of people without the disorder.
"This really raised the question about what the role is of these abnormalities," said Dawson, who also is a professor of psychiatry at the University of North Carolina, in Chapel Hill. "Is this something that could help us explain the causes of autism? Is it a reaction to autism, or the brain's response to developing in an unusual way?"
"We don't have the answers to these questions, but now they're showing up in multiple studies so it does suggest that understanding the role of the immune system in autism may be an avenue to understanding its treatment," she added.
The British study used MRI on 84 men with autism and a matched set of healthy participants. It suggested that those with autism have marked differences in cortical volume. These differences may be linked to its two components -- cortical thickness and surface area. Overall, participants with autism had greater cortical thickness within the frontal lobe regions of the brain and reduced surface area in other regions of the brain.
Study author Christine Ecker, a lecturer in neuroimaging at King's College London, discussed such brain differences.
"We also know that about 50 percent of individuals with autism have an abnormally enlarged brain, particularly during early childhood, which suggests that those with autism have an atypical developmental trajectory of brain growth," Ecker said. "[Anatomical brain differences in these areas] are highly correlated with the severity of autistic symptoms, but we still need to establish how specific differences in surface area and cortical thickness affect wider autistic symptoms and traits."
Dawson, who wrote an editorial accompanying the studies, noted that the last decade has brought an explosion of new research into autism, although she still feels funding for this work is lacking from federal agencies.
"It's been amazing to see not only the number of new scientists that are beginning to devote their careers to autism research, but also the quality of scientists," Dawson said. "But despite the fact that we're excited and encouraged by the numbers of publications increasing, we still feel the progress is far too slow."
The U.S. National Library of Medicine has more about autism.
SOURCES: Christine Ecker, Ph.D., lecturer, neuroimaging, King's College London; Geraldine Dawson, Ph.D., chief science officer, Autism Speaks, and professor, psychiatry, University of North Carolina, Chapel Hill, N.C.; Nov. 26, 2012, Archives of General Psychiatry online | <urn:uuid:bb67983d-bac3-4268-82a7-631d895626a1> | CC-MAIN-2013-20 | http://www.barnesjewishwestcounty.org/healthlibrary/?request=default&ContentTypeID=6&ContentID=671076 | 2013-06-19T06:49:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.961849 | 862 |
Homocysteine: Another Risk Factor for Heart Disease?
Elevated levels of homocysteine, an amino acid found in all humans, may be a risk factor for heart disease. Here we review some fascinating new research highlighting how you can lower your homocysteine levels—and possibly your risk for heart disease—by making some very simple dietary changes.
Homocysteine. Perhaps you've not seen or heard the term. But, at one time you may have discussed your homocysteine level with doctors, friends, and family. Why? Because homocysteine is thought to be another predictor of the risk of developing clogged arteries, heart attack, and stroke.
Homocysteine (pronounced home-oh-sis-teen) is an amino acid. While most amino acids found in the body are building blocks of protein or muscle, homocysteine is not. It is formed as an intermediate step in the production of another amino acid, methionine. The production of methionine requires a number of vitamins; if they are in short supply, the level of homocysteine in the blood rises, causing increased odds of developing plaque in the arteries—a significant risk factor for heart attack and stroke.
The connection between elevated homocysteine levels and cardiovascular disease was first noted more than 25 years ago by Kilmer McCully, MD, a pathologist working at Harvard University. McCully noticed that children with a unique metabolic defect called homocystinuria developed strokes and died. He found that these children had a severe build-up of plaque in their arteries, and theorized a link between the diseased arteries and the high levels of homocysteine that accumulate in the blood of children with homocystinuria. Since then, a growing body of evidence confirms the relationship between elevated homocysteine levels and cardiovascular disease and stroke.
In the October 2002 issue of The Journal of the American Medical Association (JAMA) researchers pooled the evidence from 30 studies, involving 5000 people, in an attempt to gauge the significance of homocysteine levels as a risk factor for heart disease. They found that homocysteine levels were not as important in determining the risk of heart disease as the major risk factors like smoking, family history, hypertension, diabetes, and high cholesterol. However, they still found that people with homocysteine levels 25% lower than usual enjoyed an 11% lower risk of heart disease.
Another study in the December 2002 issue of the British Medical Journal analyzed the results of 72 studies (involving a total of 20,669 people) and observed that high blood homocysteine levels contribute to premature cardiovascular disease. The researchers concluded that lowering homocysteine concentrations in the blood by 3 micromoles per liter of blood—achievable by increasing folic acid intake—would reduce the risk of ischemic heart disease by 11%-20%, deep vein thrombosis by 8%-38%, and stroke by 15%-33%. In addition, they found that subjects with abnormal folate metabolism due to a specific mutation were at an increased risk for both moderately elevated homocysteine levels and their associated cardiovascular outcomes.
The identification of homocysteine as a risk factor for heart disease is important because many people who develop narrowed, clogged arteries have none of the more familiar and established risk factors. Although widespread screening of homocysteine levels has not been advocated by clinicians and policymakers, certain people may benefit from testing. For example, young people who already have symptoms of arterial disease, and lack traditional risk factors, may be good candidates for testing.
Right now there are clinical trials in progress to see if adding vitamin B supplements to the diet will prevent atherosclerosis (fat deposits in arteries). Recent studies have shown that homocysteine levels do correlate with calcium deposits in the coronary arteries (which is likely a risk factor for the development of heart disease), but reducing those levels in people who have already had heart attacks may not lower the risk of a recurrent attack. The evidence about homocysteine is conflicting, and new studies should emerge soon. In the meantime you can follow a nutrition plan that assures adequate consumption of B vitamins.
Those Complex B Vitamins
Many factors, including age, gender, smoking and certain diseases and medications, can influence homocysteine status. However, the most important determinants appear to be genetics and diet. Three B-complex vitamins—vitamin B6, vitamin B12, and folic acid—are necessary to move homocysteine through the metabolic reaction to form methionine. This lowers homocysteine levels and prevents the toxic effects of homocysteine on the blood vessels. When your diet falls short in these nutrients, homocysteine can build up to high levels.
In 1993, a group of researchers at Tufts University analyzed blood samples drawn from elderly participants in the Framingham Heart Study. The results? One in three participants had homocysteine levels that were too high. About two-thirds of these cases could be traced to a diet low in vitamin B6, vitamin B12, or folate. Canadian researchers also reported that people who had the least amount of folate in their diets were 69% more likely to die of a heart problem than those whose diets were richest in folate.
How Much Is Enough?
While vitamin B6, vitamin B12, and folate all play a part in the metabolism of homocysteine, folate seems to have the most impact. Researchers are now scrambling to ascertain how much folate is needed to ward off heart disease. Some experts are recommending 400 micrograms (mcg) a day of folic acid. This is also the amount recommended by public health officials for pregnant women to prevent neural tube defects, such as spina bifida.
In 1999, the Recommended Dietary Allowance (RDA) of folate was increased from 200 mcg per day for men and 180 mcg per day for women, to 400 mcg per day for both men and women, because blood levels of homocysteine fall when this amount of folate is consumed each day.
"Five A Day" Is the Way to Go
What's the best way to keep your homocysteine levels in check? Eat plenty of foods high in folate and vitamins B6 and B12. Piling your plate with fruits, vegetables, and legumes such as lentils and dried beans will provide hefty amounts of folate. Dark leafy greens, oranges, and orange juice are especially good sources. Perhaps the best way to obtain adequate folic acid is to eat a fortified breakfast cereal; you will get 400 micrograms of folic acid and the RDA for vitamins B6 and B12. If you don't like cereal, try milk, eggs, meat or liver. Five servings of fruits and vegetables will provide enough folate to help prevent heart disease, cancer, and birth defects. Nine fruit and vegetable servings may be even healthier, and is the current goal of many nutritional experts.
Americans are also getting a folate boost from the government. Because of the link between a folate-poor diet and the increased risk of birth defects, the U.S. Food and Drug Administration ruled that wheat flour and bread be fortified with folate beginning in January, 1998. Breakfast cereals can also be fortified with up to 400 micrograms per serving. Researchers from the University of Washington have estimated that fortification of the food supply could save up to 50,000 lives per year, which would otherwise be lost to heart disease.
What About Folate Supplements?
If you get enough folate from your diet, there is no proven benefit to supplementation to prevent heart disease, and it is not currently recommended. While it's important to ensure adequate folate intake, there is some concern about the relationship between folate and vitamin B12 in the elderly. Elderly people usually absorb less vitamin B12, which can sometimes lead to pernicious anemia. High levels of folate tend to mask the symptoms of a B12 deficiency. If left untreated, pernicious anemia can lead to confusion and lethargy, caused by damage to the nervous system. Further, some studies have raised concern that folate supplementation may increase the risk of renarrowing, called “restenosis,” within coronary arteries that have been stented. Therefore, people should see their health care providers before taking folate supplements indiscriminately, and should not take a folic acid supplement without also taking vitamin B12.
Is Fortification Enough?
Some researchers feel that even with fortification of the food supply, people will still be short of the desired 400 micrograms per day. But remember that it's relatively easy to incorporate folic acid-rich foods into your diet using fruits, vegetables, and legumes. And, it's a win-win proposition! The guidelines for a folate-rich diet are consistent with those for lowering blood cholesterol. What could be better than killing two risk factors with one single dietary intervention?
|Food||Serving||Folate (mcg)|
|Fresh spinach||1 cup||262|
|Kidney beans||1 cup||229|
|Lentils||1/2 cup cooked||179|
|Chick peas||1/2 cup cooked||145|
|Asparagus||1/2 cup cooked||131|
|Orange juice||1 cup||109|
|Split peas||1/2 cup cooked||64|
American Heart Association
National Heart, Lung, and Blood Institute
National Institutes of Health
American Dietetic Association
BC Health Guide
Heart and Stroke Foundation of Canada
Boushey CJ, et al. A quantitative assessment of plasma homocysteine as a risk factor for vascular disease: probable benefits of increasing folic acid intakes. JAMA. 1995;274:1049-1057.
Mayer EL, Jacobsen DW, and Robinson K. Homocysteine and coronary atherosclerosis. Journal of the American College of Cardiology. 1996;27:517-527.
Selhub J, Jacques PF, Bostom AG, et al. Association between plasma homocysteine concentrations and extracranial carotid-artery stenosis. New England Journal of Medicine. 1993;270:2693-2698.
The Homocysteine Studies Collaboration. Homocysteine and risk of ischemic heart disease and stroke: a meta-analysis. JAMA. 2002;288:2015-2022.
Wald DS, Law M, Morris JK. Homocysteine and cardiovascular disease: evidence on causality from a meta-analysis. BMJ. 2002;325:1202-1206.
Wilson PWF. Homocysteine and Coronary Heart Disease: How Great is the Hazard? JAMA. 2002;288:2042-2043.
Last reviewed May 2008 by Craig Clark, DO, FACC, FAHA, FASE
Please be aware that this information is provided to supplement the care provided by your physician. It is neither intended nor implied to be a substitute for professional medical advice. CALL YOUR HEALTHCARE PROVIDER IMMEDIATELY IF YOU THINK YOU MAY HAVE A MEDICAL EMERGENCY. Always seek the advice of your physician or other qualified health provider prior to starting any new treatment or with any questions you may have regarding a medical condition.
Copyright © 2011 EBSCO Publishing All rights reserved. | <urn:uuid:5f4705c7-59d7-4746-b9c4-9ad81c864868> | CC-MAIN-2013-20 | http://www.beliefnet.com/healthandhealing/getcontent.aspx?cid=%0D%0A%09%09%09%09%09%0913987 | 2013-06-19T06:41:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.925863 | 2,372 |
Best Known For
British surgeon and medical scientist Joseph Lister is regarded as the founder of antiseptic medicine, which he implemented with amputee patients.
Joseph Lister was born on April 5, 1827, in Upton, England. In 1861, Lister observed that 45 to 50 percent of amputation patients died from sepsis. In 1865, he learned of Louis Pasteur's theory that microorganisms cause infection. Using phenol as an antiseptic, he reduced mortality in his ward to 15 percent within four years. Lister died on February 10, 1912, in Walmer, England. Today, he is regarded as the founder of antiseptic medicine.
© 2013 A+E Networks. All rights reserved.
Gravitation, or gravity, is a natural phenomenon by which all physical bodies attract each other. It is most commonly experienced as the agent that gives weight to objects with mass and causes them to fall to the ground when dropped.
Gravitation is one of the four fundamental interactions of nature, along with electromagnetism and the strong and weak nuclear forces. Gravitation is the only one of these interactions that acts on all matter.[1] In modern physics, the phenomenon of gravitation is most accurately described by Einstein's general theory of relativity, in which the phenomenon itself is a consequence of the curvature of spacetime governing the motion of inertial objects. The simpler Newton's law of universal gravitation postulates a gravitational force proportional to the masses of the interacting bodies and inversely proportional to the square of the distance between them. It provides an accurate approximation for most physical situations, including calculations as critical as spacecraft trajectories.
From a cosmological perspective, gravitation causes dispersed matter to coalesce, and coalesced matter to remain intact, thus accounting for the existence of planets, stars, galaxies and most of the macroscopic objects in the universe. It is responsible for keeping the Earth and the other planets in their orbits around the Sun; for keeping the Moon in its orbit around the Earth; for the formation of tides; for natural convection, by which fluid flow occurs under the influence of a density gradient and gravity; for heating the interiors of forming stars and planets to very high temperatures; and for various other phenomena observed on Earth and throughout the universe.
History of gravitational theory
Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries. In his famous (though possibly apocryphal[2]) experiment dropping balls from the Tower of Pisa, and later with careful measurements of balls rolling down inclines, Galileo showed that gravitation accelerates all objects at the same rate. This was a major departure from Aristotle's belief that heavier objects accelerate faster.[3] Galileo correctly postulated air resistance as the reason that lighter objects may fall slower in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity.
Newton's theory of gravitation
In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. In his own words, “I deduced that the forces which keep the planets in their orbs must [be] reciprocally as the squares of their distances from the centers about which they revolve: and thereby compared the force requisite to keep the Moon in her Orb with the force of gravity at the surface of the Earth; and found them answer pretty nearly.”4
Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the general position of the planet, and Le Verrier's calculations are what led Johann Gottfried Galle to the discovery of Neptune.
A discrepancy in Mercury's orbit pointed out flaws in Newton's theory. By the end of the 19th century, it was known that its orbit showed slight perturbations that could not be accounted for entirely under Newton's theory, but all searches for another perturbing body (such as a planet orbiting the Sun even closer than Mercury) had been fruitless. The issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit.
Although Newton's theory has been superseded, most modern non-relativistic gravitational calculations are still made using Newton's theory because it is a much simpler theory to work with than general relativity, and gives sufficiently accurate results for most applications involving sufficiently small masses, speeds and energies.
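For illustration, here is a minimal numerical sketch of the inverse-square law in use. The gravitational constant and the Earth and Moon figures are rounded reference values assumed for the example, not numbers taken from this article.

# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (rounded)
m_earth = 5.972e24   # mass of the Earth, kg (rounded)
m_moon = 7.35e22     # mass of the Moon, kg (rounded)
r = 3.844e8          # mean Earth-Moon distance, m (rounded)

def gravitational_force(m1, m2, separation):
    # Attractive force in newtons between two point masses.
    return G * m1 * m2 / separation**2

print(gravitational_force(m_earth, m_moon, r))   # on the order of 2e20 N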
The equivalence principle, explored by a succession of researchers including Galileo, Loránd Eötvös, and Einstein, expresses the idea that all objects fall in the same way. The simplest way to test the weak equivalence principle is to drop two objects of different masses or compositions in a vacuum, and see if they hit the ground at the same time. These experiments demonstrate that all objects fall at the same rate when friction (including air resistance) is negligible. More sophisticated tests use a torsion balance of a type invented by Eötvös. Satellite experiments, for example STEP, are planned for more accurate experiments in space.5
Formulations of the equivalence principle include:
- The weak equivalence principle: The trajectory of a point mass in a gravitational field depends only on its initial position and velocity, and is independent of its composition.6
- The Einsteinian equivalence principle: The outcome of any local non-gravitational experiment in a freely falling laboratory is independent of the velocity of the laboratory and its location in spacetime.7
- The strong equivalence principle requiring both of the above.
Resources · Tests
In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of a force. The starting point for general relativity is the equivalence principle, which equates free fall with inertial motion, and describes free-falling inertial objects as being accelerated relative to non-inertial observers on the ground.[8][9] In Newtonian physics, however, no such acceleration can occur unless at least one of the objects is being operated on by a force.
Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. These straight paths are called geodesics. Like Newton's first law of motion, Einstein's theory states that if a force is applied on an object, it would deviate from a geodesic. For instance, we are no longer following geodesics while standing because the mechanical resistance of the Earth exerts an upward force on us, and we are non-inertial on the ground as a result. This explains why moving along the geodesics in spacetime is considered inertial.
Einstein discovered the field equations of general relativity, which relate the presence of matter and the curvature of spacetime and are named after him. The Einstein field equations are a set of 10 simultaneous, non-linear, differential equations. The solutions of the field equations are the components of the metric tensor of spacetime. A metric tensor describes a geometry of spacetime. The geodesic paths for a spacetime are calculated from the metric tensor.
Notable solutions of the Einstein field equations include:
- The Schwarzschild solution, which describes spacetime surrounding a spherically symmetric non-rotating uncharged massive object. For compact enough objects, this solution generates a black hole with a central singularity. For radial distances from the center which are much greater than the Schwarzschild radius, the accelerations predicted by the Schwarzschild solution are practically identical to those predicted by Newton's theory of gravity (a numerical sketch of the Schwarzschild radius follows this list).
- The Reissner-Nordström solution, in which the central object has an electrical charge. For charges with a geometrized length which are less than the geometrized length of the mass of the object, this solution produces black holes with two event horizons.
- The Kerr solution for rotating massive objects. This solution also produces black holes with multiple event horizons.
- The Kerr-Newman solution for charged, rotating massive objects. This solution also produces black holes with multiple event horizons.
- The cosmological Friedmann-Lemaitre-Robertson-Walker solution, which predicts the expansion of the universe.
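As a numerical footnote to the Schwarzschild item above, the Schwarzschild radius r_s = 2GM/c^2 marks how far a non-rotating mass must be compressed before it forms a black hole. The constants and the solar mass below are rounded reference values assumed for the sketch, not figures from this article.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2 (rounded)
c = 2.998e8         # speed of light, m/s (rounded)
m_sun = 1.989e30    # solar mass, kg (rounded)

def schwarzschild_radius(mass):
    # r_s = 2*G*M / c**2; below this radius the object is a black hole.
    return 2 * G * mass / c**2

print(schwarzschild_radius(m_sun) / 1000)   # about 2.95 km for one solar mass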
- General relativity accounts for the anomalous perihelion precession of Mercury.[2]
- The prediction that time runs slower at lower potentials has been confirmed by the Pound–Rebka experiment, the Hafele–Keating experiment, and the GPS.
- The prediction of the deflection of light was first confirmed by Arthur Stanley Eddington from his observations during the Solar eclipse of May 29, 1919.[11][12] Eddington measured starlight deflections twice those predicted by Newtonian corpuscular theory, in accordance with the predictions of general relativity. However, his interpretation of the results was later disputed.[13] More recent tests using radio interferometric measurements of quasars passing behind the Sun have more accurately and consistently confirmed the deflection of light to the degree predicted by general relativity.[14] See also gravitational lens.
- The time delay of light passing close to a massive object was first identified by Irwin I. Shapiro in 1964 in interplanetary spacecraft signals.
- Gravitational radiation has been indirectly confirmed through studies of binary pulsars.
- Alexander Friedmann in 1922 found that Einstein equations have non-stationary solutions (even in the presence of the cosmological constant). In 1927 Georges Lemaître showed that static solutions of the Einstein equations, which are possible in the presence of the cosmological constant, are unstable, and therefore the static universe envisioned by Einstein could not exist. Later, in 1931, Einstein himself agreed with the results of Friedmann and Lemaître. Thus general relativity predicted that the Universe had to be non-static—it had to either expand or contract. The expansion of the universe discovered by Edwin Hubble in 1929 confirmed this prediction.15
- The theory's prediction of frame dragging was consistent with the recent Gravity Probe B results.16
- General relativity predicts that light should lose its energy when travelling away from the massive bodies. The group of Radek Wojtak of the Niels Bohr Institute at the University of Copenhagen collected data from 8000 galaxy clusters and found that the light coming from the cluster centers tended to be red-shifted compared to the cluster edges, confirming the energy loss due to gravity.17
Gravity and quantum mechanics
In the decades after the discovery of general relativity it was realized that general relativity is incompatible with quantum mechanics.[18] It is possible to describe gravity in the framework of quantum field theory like the other fundamental forces, such that the attractive force of gravity arises due to exchange of virtual gravitons, in the same way as the electromagnetic force arises from exchange of virtual photons.[19][20] This reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length,[18] where a more complete theory of quantum gravity (or a new approach to quantum mechanics) is required.
Every planetary body (including the Earth) is surrounded by its own gravitational field, which exerts an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of this field at any given point is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body.
The strength of the gravitational field is numerically equal to the acceleration of objects under its influence. According to the Bureau International des Poids et Mesures' International System of Units (SI), the standard average value at the Earth's surface, denoted g, is g = 9.80665 m/s² (about 32.174 ft/s²).
This means that, ignoring air resistance, an object falling freely near the Earth's surface increases its velocity by 9.80665 m/s (32.1740 ft/s or 22 mph) for each second of its descent. Thus, an object starting from rest will attain a velocity of 9.80665 m/s (32.1740 ft/s) after one second, approximately 19.62 m/s (64.4 ft/s) after two seconds, and so on, adding 9.80665 m/s (32.1740 ft/s) to each resulting velocity. Also, again ignoring air resistance, any and all objects, when dropped from the same height, will hit the ground at the same time.
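Both the surface field strength and this steady gain of speed in free fall can be checked with a few lines of arithmetic. The Earth's mass and radius used below are rounded values assumed for the sketch.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (rounded)
m_earth = 5.972e24   # mass of the Earth, kg (rounded)
r_earth = 6.371e6    # mean radius of the Earth, m (rounded)

surface_field = G * m_earth / r_earth**2
print(surface_field)      # roughly 9.8 m/s^2 at the surface

g = 9.80665               # standard acceleration due to gravity, m/s^2
for t in (1, 2, 3):
    # Speed after t seconds of free fall from rest, ignoring air resistance.
    print(t, g * t)       # about 9.81, 19.61 and 29.42 m/s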
According to Newton's 3rd Law, the Earth itself experiences a force equal in magnitude and opposite in direction to that which it exerts on a falling object. This means that the Earth also accelerates towards the object until they collide. Because the mass of the Earth is huge, however, the acceleration imparted to the Earth by this opposite force is negligible in comparison to the object's. If the object doesn't bounce after it has collided with the Earth, each of them then exerts a repulsive contact force on the other which effectively balances the attractive force of gravity and prevents further acceleration.
The force of gravity on Earth is the resultant (vector sum) of two forces: (a) the gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. At the equator, the force of gravity is the weakest due to the centrifugal force caused by the Earth's rotation. The force of gravity varies with latitude and becomes stronger toward the poles. The standard value of 9.80665 m/s² is the one originally adopted by the International Committee on Weights and Measures in 1901 for 45° latitude, even though it has been shown to be too high by about five parts in ten thousand.[23] This value has persisted in meteorology and in some standard atmospheres as the value for 45° latitude even though it applies more precisely to latitude of 45°32'33".[24]
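The latitude dependence described above is commonly approximated by the 1980 International Gravity Formula; the coefficients in this sketch are the usually quoted ones and are assumed here rather than taken from this article.

import math

def normal_gravity(latitude_deg):
    # Approximate sea-level gravity in m/s^2 as a function of latitude
    # (1980 International Gravity Formula, coefficients as commonly quoted).
    phi = math.radians(latitude_deg)
    return 9.780327 * (1 + 0.0053024 * math.sin(phi)**2
                         - 0.0000058 * math.sin(2 * phi)**2)

for lat in (0, 45, 90):
    print(lat, round(normal_gravity(lat), 5))   # weakest at the equator, strongest at the poles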
Equations for a falling body near the surface of the Earth
Under an assumption of constant gravity, Newton's law of universal gravitation simplifies to F = mg, where m is the mass of the body and g is a constant vector with an average magnitude of 9.81 m/s². The acceleration due to gravity is equal to this g. An initially stationary object which is allowed to fall freely under gravity drops a distance which is proportional to the square of the elapsed time. A stroboscopic image of a falling ball, spanning half a second and captured at 20 flashes per second, illustrates this. During the first 1⁄20 of a second the ball drops one unit of distance (here, a unit is about 12 mm); by 2⁄20 it has dropped a total of 4 units; by 3⁄20, 9 units and so on.
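The 1, 4, 9, ... pattern of distance units follows directly from d = g*t**2/2; the sketch below reproduces it for the 1/20-second flash interval mentioned above.

g = 9.80665            # m/s^2
dt = 1 / 20            # flash interval, s (20 flashes per second)

def distance_fallen(t):
    # Distance fallen from rest after time t, ignoring air resistance.
    return 0.5 * g * t**2

unit = distance_fallen(dt)                   # about 12 mm, as in the text
for n in range(1, 4):
    d = distance_fallen(n * dt)
    print(n, round(d * 1000, 1), round(d / unit))   # 1, 4 and 9 units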
Under the same constant gravity assumptions, the potential energy, Ep, of a body at height h is given by Ep = mgh (or Ep = Wh, with W meaning weight). This expression is valid only over small distances h from the surface of the Earth. Similarly, the expression for the maximum height reached by a vertically projected body with initial velocity v, h = v²/(2g), is useful for small heights and small initial velocities only.
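Both small-height formulas are easy to evaluate; the mass, height and launch speed below are arbitrary illustrative values.

g = 9.80665   # m/s^2

def potential_energy(mass, height):
    # E_p = m*g*h, valid only for heights small compared with the Earth's radius.
    return mass * g * height

def max_height(v0):
    # Peak height of a body projected vertically at speed v0: h = v0**2 / (2*g).
    return v0**2 / (2 * g)

print(potential_energy(2.0, 10.0))   # about 196 J for 2 kg raised 10 m
print(max_height(20.0))              # about 20.4 m for a 20 m/s launch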
Gravity and astronomy
The discovery and application of Newton's law of gravity accounts for the detailed information we have about the planets in our solar system, the mass of the Sun, the distance to stars, quasars and even the theory of dark matter. Although we have not traveled to all the planets nor to the Sun, we know their masses. These masses are obtained by applying the laws of gravity to the measured characteristics of the orbit. In space an object maintains its orbit because of the force of gravity acting upon it. Planets orbit stars, stars orbit Galactic Centers, galaxies orbit a center of mass in clusters, and clusters orbit in superclusters. The force of gravity exerted on one object by another is directly proportional to the product of those objects' masses and inversely proportional to the square of the distance between them.
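The remark that masses are obtained from measured orbits can be made concrete with Newton's form of Kepler's third law, M = 4*pi^2*r^3 / (G*T^2). The Earth's orbital radius and period below are rounded values assumed for the sketch.

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2 (rounded)
r_orbit = 1.496e11     # mean Earth-Sun distance, m (rounded)
T = 365.25 * 86400     # Earth's orbital period, s

def central_mass(radius, period):
    # Mass of the central body for a nearly circular orbit of given radius and period.
    return 4 * math.pi**2 * radius**3 / (G * period**2)

print(central_mass(r_orbit, T))   # about 2e30 kg, the mass of the Sun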
In general relativity, gravitational radiation is generated in situations where the curvature of spacetime is oscillating, such as is the case with co-orbiting objects. The gravitational radiation emitted by the Solar System is far too small to measure. However, gravitational radiation has been indirectly observed as an energy loss over time in binary pulsar systems such as PSR B1913+16. It is believed that neutron star mergers and black hole formation may create detectable amounts of gravitational radiation. Gravitational radiation observatories such as the Laser Interferometer Gravitational Wave Observatory (LIGO) have been created to study the problem. No confirmed detections have been made of this hypothetical radiation, but as the science behind LIGO is refined and as the instruments themselves are endowed with greater sensitivity over the next decade, this may change.
Speed of gravity
In December 2012, a research team in China announced that it had produced measurements of the phase lag of Earth tides during full and new moons which seem to prove that the speed of gravity is equal to the speed of light.25 The team's findings were released in the Chinese Science Bulletin in February 2013.26
Anomalies and discrepancies
There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways.
- Extra fast stars: Stars in galaxies follow a distribution of velocities where stars on the outskirts are moving faster than they should according to the observed distributions of normal matter. Galaxies within galaxy clusters show a similar pattern. Dark matter, which would interact gravitationally but not electromagnetically, would account for the discrepancy. Various modifications to Newtonian dynamics have also been proposed.
- Flyby anomaly: Various spacecraft have experienced greater acceleration than expected during gravity assist maneuvers.
- Accelerating expansion: The metric expansion of space seems to be speeding up. Dark energy has been proposed to explain this. A recent alternative explanation is that the geometry of space is not homogeneous (due to clusters of galaxies) and that when the data are reinterpreted to take this into account, the expansion is not speeding up after all,27 however this conclusion is disputed.28
- Anomalous increase of the astronomical unit: Recent measurements indicate that planetary orbits are widening faster than if this were solely through the sun losing mass by radiating energy.
- Extra energetic photons: Photons travelling through galaxy clusters should gain energy and then lose it again on the way out. The accelerating expansion of the universe should stop the photons returning all the energy, but even taking this into account photons from the cosmic microwave background radiation gain twice as much energy as expected. This may indicate that gravity falls off faster than inverse-squared at certain distance scales.29
- Extra massive hydrogen clouds: The spectral lines of the Lyman-alpha forest suggest that hydrogen clouds are more clumped together at certain scales than expected and, like dark flow, may indicate that gravity falls off slower than inverse-squared at certain distance scales.29
Historical alternative theories
- Aristotelian theory of gravity
- Le Sage's theory of gravitation (1784) also called LeSage gravity, proposed by Georges-Louis Le Sage, based on a fluid-based explanation where a light gas fills the entire universe.
- Ritz's theory of gravitation, Ann. Chem. Phys. 13, 145, (1908) pp. 267–271, Weber-Gauss electrodynamics applied to gravitation. Classical advancement of perihelia.
- Nordström's theory of gravitation (1912, 1913), an early competitor of general relativity.
- Whitehead's theory of gravitation (1922), another early competitor of general relativity.
Recent alternative theories
- Brans–Dicke theory of gravity (1961)
- Induced gravity (1967), a proposal by Andrei Sakharov according to which general relativity might arise from quantum field theories of matter
- In the modified Newtonian dynamics (MOND) (1981), Mordehai Milgrom proposes a modification of Newton's Second Law of motion for small accelerations
- The self-creation cosmology theory of gravity (1982) by G.A. Barber in which the Brans-Dicke theory is modified to allow mass creation
- Nonsymmetric gravitational theory (NGT) (1994) by John Moffat
- Tensor–vector–scalar gravity (TeVeS) (2004), a relativistic modification of MOND by Jacob Bekenstein
- Gravity as an entropic force, gravity arising as an emergent phenomenon from the thermodynamic concept of entropy.
- In the superfluid vacuum theory the gravity and curved space-time arise as a collective excitation mode of non-relativistic background superfluid.
- Anti-gravity, the idea of neutralizing or repelling gravity
- Artificial gravity
- Birkeland current
- Einstein–Infeld–Hoffmann equations
- Escape velocity, the minimum velocity needed to escape from a gravity well
- g-force, a measure of acceleration
- Gauge gravitation theory
- Gauss's law for gravity
- Gravitational binding energy
- Gravity assist
- Gravity gradiometry
- Gravity Recovery and Climate Experiment
- Gravity Research Foundation
- Jovian-Plutonian gravitational effect
- Kepler's third law of planetary motion
- Lagrangian point
- Mixmaster dynamics
- n-body problem
- Newton's laws of motion
- Pioneer anomaly
- Scalar theories of gravitation
- Speed of gravity
- Standard gravitational parameter
- Standard gravity
- ^ Proposition 75, Theorem 35: p. 956 - I.Bernard Cohen and Anne Whitman, translators: Isaac Newton, The Principia: Mathematical Principles of Natural Philosophy. Preceded by A Guide to Newton's Principia, by I. Bernard Cohen. University of California Press 1999 ISBN 0-520-08816-6 ISBN 0-520-08817-4
- ^ Max Born (1924), Einstein's Theory of Relativity (The 1962 Dover edition, page 348 lists a table documenting the observed and calculated values for the precession of the perihelion of Mercury, Venus, and Earth.)
- "Matter" in the broad sense, including radiation (massless particles).
- Ball, Phil (06 2005). "Tall Tales". Nature News. doi:10.1038/news050613-10.
- Galileo (1638), Two New Sciences, First Day Salviati speaks: "If this were what Aristotle meant you would burden him with another error which would amount to a falsehood; because, since there is no such sheer height available on earth, it is clear that Aristotle could not have made the experiment; yet he wishes to give us the impression of his having performed it when he speaks of such an effect as one which we see."
- *Chandrasekhar, Subrahmanyan (2003). Newton's Principia for the common reader. Oxford: Oxford University Press. (pp.1–2). The quotation comes from a memorandum thought to have been written about 1714. As early as 1645 Ismaël Bullialdus had argued that any force exerted by the Sun on distant objects would have to follow an inverse-square law. However, he also dismissed the idea that any such force did exist. See, for example, Linton, Christopher M. (2004). From Eudoxus to Einstein—A History of Mathematical Astronomy. Cambridge: Cambridge University Press. p. 225. ISBN 978-0-521-82750-8.
- M.C.W.Sandford (2008). "STEP: Satellite Test of the Equivalence Principle". Rutherford Appleton Laboratory. Retrieved 2011-10-14.
- Paul S Wesson (2006). Five-dimensional Physics. World Scientific. p. 82. ISBN 981-256-661-9.
- Haugen, Mark P.; C. Lämmerzahl (2001). Principles of Equivalence: Their Role in Gravitation Physics and Experiments that Test Them. Springer. arXiv:gr-qc/0103067. ISBN 978-3-540-41236-6.
- "Gravity and Warped Spacetime". black-holes.org. Retrieved 2010-10-16.
- Dmitri Pogosyan. "Lecture 20: Black Holes—The Einstein Equivalence Principle". University of Alberta. Retrieved 2011-10-14.
- Pauli, Wolfgang Ernst (1958). "Part IV. General Theory of Relativity". Theory of Relativity. Courier Dover Publications. ISBN 978-0-486-64152-2.
- Dyson, F.W.; Eddington, A.S.; Davidson, C.R. (1920). "A Determination of the Deflection of Light by the Sun's Gravitational Field, from Observations Made at the Total Eclipse of May 29, 1919". Phil. Trans. Roy. Soc. A 220 (571–581): 291–333. Bibcode:1920RSPTA.220..291D. doi:10.1098/rsta.1920.0009.. Quote, p. 332: "Thus the results of the expeditions to Sobral and Principe can leave little doubt that a deflection of light takes place in the neighbourhood of the sun and that it is of the amount demanded by Einstein's generalised theory of relativity, as attributable to the sun's gravitational field."
- Weinberg, Steven (1972). Gravitation and cosmology. John Wiley & Sons.. Quote, p. 192: "About a dozen stars in all were studied, and yielded values 1.98 ± 0.11" and 1.61 ± 0.31", in substantial agreement with Einstein's prediction θ☉ = 1.75"."
- Earman, John; Glymour, Clark (1980). "Relativity and Eclipses: The British eclipse expeditions of 1919 and their predecessors". Historical Studies in the Physical Sciences 11: 49–85. doi:10.2307/27757471.
- Weinberg, Steven (1972). Gravitation and cosmology. John Wiley & Sons. p. 194.
- See W.Pauli, 1958, pp.219–220
- NASA's Gravity Probe B Confirms Two Einstein Space-Time Theories
- Galaxy Clusters Validate Einstein's Theory
- Randall, Lisa (2005). Warped Passages: Unraveling the Universe's Hidden Dimensions. Ecco. ISBN 0-06-053108-8.
- Feynman, R. P.; Morinigo, F. B., Wagner, W. G., & Hatfield, B. (1995). Feynman lectures on gravitation. Addison-Wesley. ISBN 0-201-62734-5.
- Zee, A. (2003). Quantum Field Theory in a Nutshell. Princeton University Press. ISBN 0-691-01019-6.
- Bureau International des Poids et Mesures (2006). "Chapter 5". The International System of Units (SI). 8th ed. Retrieved 2009-11-25. "Unit names are normally printed in roman (upright) type ... Symbols for quantities are generally single letters set in an italic font, although they may be qualified by further information in subscripts or superscripts or in brackets."
- "SI Unit rules and style conventions". National Institute For Standards and Technology (USA). September 2004. Retrieved 2009-11-25. "Variables and quantity symbols are in italic type. Unit symbols are in roman type."
- List, R. J. editor, 1968, Acceleration of Gravity, Smithsonian Meteorological Tables, Sixth Ed. Smithsonian Institution, Washington, D.C., p. 68.
- U.S. Standard Atmosphere, 1976, U.S. Government Printing Office, Washington, D.C., 1976. (Linked file is very large.)
- Chinese scientists find evidence for speed of gravity, astrowatch.com, 12/28/12.
- TANG, Ke Yun; HUA ChangCai, WEN Wu, CHI ShunLiang, YOU QingYu, YU Dan (February 2013). "Observational evidences for the speed of the gravity based on the Earth tide". Chinese Science Bulletin 58 (4-5): 474–477. doi:10.1007/s11434-012-5603-3. Retrieved 12 June 2013.
- Dark energy may just be a cosmic illusion, New Scientist, issue 2646, 7th March 2008.
- Swiss-cheese model of the cosmos is full of holes, New Scientist, issue 2678, 18th October 2008.
- "Gravity may venture where matter fears to tread", Marcus Chown, New Scientist issue 2669, 16 March 2009. Original site, charges money to read it: http://www.newscientist.com/article/mg20126990.400-gravity-may-venture-where-matter-fears-to-tread.html . Mirror site, free to read article: http://www.es.sott.net/articles/show/179189-Gravity-may-venture-where-matter-fears-to-tread
- Halliday, David; Robert Resnick; Kenneth S. Krane (2001). Physics v. 1. New York: John Wiley & Sons. ISBN 0-471-32057-9.
- Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7.
- Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4.
|Look up gravitation in Wiktionary, the free dictionary.|
|Wikimedia Commons has media related to: Gravitation|
- Thorne, Kip S.; Misner, Charles W.; Wheeler, John Archibald (1973). Gravitation. W.H. Freeman. ISBN 0-7167-0344-0.
- Hazewinkel, Michiel, ed. (2001), "Gravitation", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Hazewinkel, Michiel, ed. (2001), "Gravitation, theory of", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 | <urn:uuid:1d3799a1-f654-4b98-ad6c-b17d644e641f> | CC-MAIN-2013-20 | http://www.bioscience.ws/encyclopedia/index.php?title=Gravitation | 2013-06-19T06:50:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.890902 | 6,295 |
About a third of the world bird species make their homes in rainforests, which offer moderate temperatures, protective shelter and ample food supply. Unfortunately the rainforests are disappearing. A few thousand years ago tropical rainforests covered as much as 12 percent of the Earth's land surface. Today, that figure is reduced by half, the result of logging, mining, and the clearing of land for human settlements. More than 600 species of rainforest birds are threatened with extinction. Rainforest Bird Rescue profiles projects and people around the world who are working to prevent the loss of these beautiful birds. Illustrated with 50 spectacular color photographs, Rainforest Bird Rescue covers the people, the issues and the challenges involved in preserving a future for endangered wildlife. | <urn:uuid:357850cc-6fd8-421e-ac1c-58f19673ab27> | CC-MAIN-2013-20 | http://www.bookcloseouts.com/Store/Details/Rainforest-Bird-Rescue-Changing-The-Future-For-Endangered-Wildlife-Animal-Rescue-Series/_/R-9781554071524B | 2013-06-19T06:47:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.932731 | 148 |
The second of these vast railway bridges crosses the Menai Straits, which separate Caernarvon from the island of Anglesey. It is constructed a good hundred feet above high-water level, to enable large vessels to sail beneath it; and in building it, neither scaffolding nor centering was used.
The abutments on either side of the Straits are huge piles of masonry. That on the Anglesey side is 143 feet high, and 173 feet long. The wing walls of both terminate in splendid pedestals, and on each are two colossal lions, of Egyptian design; each being 25 feet long, 12 feet high though crouched, 9 feet abaft the body, and each paw 2 feet 1 inches. Each weighs 30 tons. The towers for supporting the tube are of a like magnitude with the entire work. The great Britannia Tower, in the centre of the Straits, is 62 feet by 52 feet at its base; its total height from the bottom, 230 feet; it contains 148,625 cubic feet of limestone, and 144,625 of sandstone; it weighs 20,000 tons; and there are 387 tons of cast iron built into it in the shape of beams and girders. It sustains the four ends of the four long iron tubes which span the Straits from shore to shore. The total quantity of stone contained in the bridge is 1,500,000 cubic feet. The side towers stand at a clear distance of 460 feet from the great central tower; and, again, the abutments stand at a distance from the side towers of 230 feet, giving the entire bridge a total length of 1849 feet, corresponding with the date of the year of its construction. The side or land towers are each 62 feet by 52 feet at the base, and 190 feet high; they contain 210 tons of cast iron.
[Illustration: CONWAY CASTLE AND TUBULAR BRIDGE.] | <urn:uuid:0cec8663-4639-4db2-be9a-40470f213354> | CC-MAIN-2013-20 | http://www.bookrags.com/ebooks/11921/152.html | 2013-06-19T06:43:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.938278 | 399 |
Fun Classroom Activities
The 20 enjoyable, interactive classroom activities that are included will help your students understand the text in amusing ways. Fun Classroom Activities include group projects, games, critical thinking activities, brainstorming sessions, writing poems, drawing or sketching, and more that will allow your students to interact with each other, be creative, and ultimately grasp key concepts from the text by "doing" rather than simply studying.
1. Wall Collage of Ellison
Print up photos from the internet or dictionaries of Ellison. Have the class make a wall collage.
2. Music for each chapter
Most essays have some mention of music. Prior to each study session, have the students look through the coming chapter and find artists, songs and so on that are mentioned. Assign songs/artists to different students and challenge them to find the music (CD/records etc.). Have them bring in examples and play them in the background as...
This section contains 1,248 words (approx. 5 pages at 300 words per page).
At some point you may need to provide at-home tutoring assistance for your children. It may be when they move from middle to high school. It may be when behavioral issues are getting in the way of them learning in school. It may be because the school can't provide them with the right placement.
All children are different when it comes to learning. However, sometimes it's useful to have a few tips and suggestions to fit into your tutoring/teaching interactions with your child. You can try these whether you homeschool, or just help them with their homework.
If they're having trouble with their spelling words, have them jump with each letter. Jumping helps memory.
If your child is easily distracted by noises, other kids in the house, etc., be creative about where you tutor. Try under the table, inside a box, or in a closet.
Oxygen To The Brain
If they're not paying attention, or are getting frustrated, have them do 10 jumping jacks, or five silly toe touches, or six weird sit-ups to "get blood to their brain" and to help them think.
Incorporating motion into learning activities helps some children. Roll a ball between the two of you as your child spells words. Have her bounce a ball as she recites math facts. Have her alternate standing on one foot then the other as she recites a poem.
Raise Your Hand, Touch Your Nose
To help keep your child focused when you read out loud. Give them listening assignments. Raise your hand every time you hear the word "mouse." Or, touch your nose when you hear the word "run."
Reading Out Loud
Take turns between you and your child reading out loud. Each read a page . . . a paragraph . . . a sentence . . . a word. It makes it slightly silly and keeps them focused.
Newspapers, Recipe Books, Canned Foods
Don't always read to them or with them from books. Read captions under photos in the newspaper, read a recipe, have them sound out the ingredients on the back of a can.
Let Them Be Teacher
Let your child be the teacher. Have them give you spelling words or math problems.
Reward Them For What They DON'T Know
Tell them they'll get a point (or hug) for every word they don't know the meaning of. Or for every math problem they get wrong. (This reminds them that you love them even when they're not perfect.) | <urn:uuid:f0940575-5993-49b2-b216-e6d0a6625e61> | CC-MAIN-2013-20 | http://www.byparents-forparents.com/tutoring.html | 2013-06-19T06:41:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.961351 | 515 |
From September 2010 to August 2011, about 240 miles of forest buffers were planted along the Bay watershed’s streams and rivers. A total of 7,479 miles have been planted watershed-wide since 1996*.
*Prior to 2010, the Chesapeake Bay Program tracked riparian forest buffer planting in Maryland, Pennsylvania and Virginia. In 2010, CBP began including planting data from New York, West Virginia and Delaware
Date created: Apr 25 2012
This map shows the locations of riparian forest buffer restoration projects throughout the Chesapeake Bay Watershed. Project locations were provided by Forestry Workgroup representatives from the Maryland Department of Natural Resources, Virginia Department of Forestry, and Pennsylvania Department of Conservation and Natural Resources.
Craig Highfield, Forestry for the Bay Coordinator, takes a walk in the woods to explain the importance of healthy forests to the Chesapeake Bay.
Produced by Matt Rath
Music: “A Moment of Jazz” by Ancelin
Forest buffers are trees and other plants that line the banks of waterways. Forest buffers are important because they:
Well-maintained forest buffers also absorb pollution, which helps improve the health of neighboring streams and rivers as well as the water downstream.
Bay Program partners are planting forest buffers along thousands of miles of streams, creeks and rivers throughout the Bay watershed.
Bay Program partners achieved the original 2010 forest buffer restoration goal of 2,010 miles in 2002. In 2003, they set a new, long-term goal to conserve and restore forests along at least 70 percent of all streams and shoreline in the Bay watershed. They also set a near-term goal of restoring at least 10,000 miles of forest buffers in the Bay watershed portions of Maryland, Pennsylvania, Virginia and the District of Columbia by 2010.
In 2007, the Chesapeake Executive Council committed to continue progress toward the 2003 goal of restoring at least 900 miles of forest buffers per year (2007 Response to Forest Directive 06-1). West Virginia, Delaware and New York also signed on to this 2007 commitment.
This provided the foundation for the Forest Buffer Outcome established in the 2010 Executive Order Strategy for Protecting and Restoring the Chesapeake Bay. The forest buffer outcome states: “Restore riparian forest buffers to 63 percent, or 181,440 miles, of the total riparian miles (stream bank and shoreline miles) in the Bay watershed by 2025” (Executive Order Strategy, p. 51). Currently, 58 percent of the 288,000 total riparian miles in the Bay watershed has forest buffers in place.
Achieving this outcome requires that 14,400 new miles of forest buffers are restored between 2010 and 2025. This translates to a rate of 900 miles per year.
Note: Prior to 2010, the Chesapeake Bay Program tracked riparian forest buffer planting in MD, PA and VA. In 2010, CBP began including planting data from NY, WV and DE.
[Chart: miles of forest buffers completed since 1996 (baseline year), since 2000, and in 2011]
Between September 2010 and August 2011, 239.6 miles of forest buffers were reported planted, achieving 26.6 percent of the annual target to restore 900 miles per year. This is a decrease in forest buffer restoration rates compared to 2007 through 2010. The state-by-state breakdown was:
These numbers reflect a decrease in forest buffer planting from the previous year, when 348 miles were planted and the year before that (2009) when 722 miles were planted. Between 2010 and 2011, Maryland planted more forest buffers than during the previous year, while Pennsylvania, Virginia and West Virginia planted less.
As in past years, Pennsylvania contributed the majority of miles in 2011. Since 2000, Pennsylvania has restored 62% of the riparian forest miles in the watershed.
This indicator includes stream bank and shoreline miles in the Bay watershed that are buffered by at least a 35-foot-wide area of vegetation.
This indicator tracks documented plantings of forest buffers along the Bay watershed’s streams and rivers. However, the gains do not necessarily represent a “net resource gain.” Based on the most recent assessment available, approximately 58 percent of the riparian area in the Bay watershed is forested. When this indicator is updated in the future, it could potentially be used to determine a net resource gain or loss.
Progress Restoring Forest Buffers
Reasons for the continuing slow progress in planting forest buffers include:
All of these issues have been the focus of efforts to improve forest buffer implementation:
Directive to Protect Chesapeake Forests
In 2006, Bay Program partners produced a report entitled The State of Chesapeake Forests, which was the impetus for an Executive Council directive, Protecting the Forests of the Chesapeake Watershed. The directive seeks to protect riparian forest buffers and other forests important to water quality. | <urn:uuid:de6fe05d-955a-4e0d-9d65-9a6209c30475> | CC-MAIN-2013-20 | http://www.chesapeakebay.net/indicators/indicator/planting_forest_buffers | 2013-06-19T06:22:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.939151 | 994 |
Emergent and chronic amputation is the loss of a limb, hand or finger. Some hand and upper-extremity injuries can be severe enough to require amputation.
If your child has an amputation:
- Call 911.
- Apply pressure to control the bleeding.
- If possible, collect the amputated part and put it in a plastic, sealable bag. Do not rinse or put in water.
- Put the bag on top of (not in) ice.
- Bring the amputation to the hospital.
In most cases, an attempt at urgent reconstruction is made. Treatment for an acute amputation may include microsurgical replantation (reattachment using a surgical microscope). An artificial limb may help your child regain function if the amputated part cannot be reattached. Management of amputations may include:
- Further reconstructive surgery
- Consultation with therapists and prosthetists | <urn:uuid:b9b1e2e7-d82b-4b89-ab56-bb73b82d58aa> | CC-MAIN-2013-20 | http://www.choa.org/Child-Health-Glossary/A/AM/Amputation | 2013-06-19T06:15:05Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.850284 | 188 |
GRS 80, or Geodetic Reference System 1980, is a geodetic reference system consisting of a global reference ellipsoid and a gravity field model.
Geodesy, also called geodetics, is the scientific discipline that deals with the measurement and representation of the earth, its gravitational field and geodynamic phenomena (polar motion, earth tides, and crustal motion) in three-dimensional, time-varying space.
The geoid is essentially the figure of the Earth abstracted from its topographic features. It is an idealized equilibrium surface of sea water, the mean sea level surface in the absence of currents, air pressure variations etc., continued under the continental masses. The geoid, unlike the ellipsoid, is irregular and too complicated to serve as the computational surface on which to solve geometrical problems like point positioning. The geometrical separation between it and the reference ellipsoid is called the geoidal undulation. It varies globally between ±110 m.
A reference ellipsoid, customarily chosen to be the same size (volume) as the geoid, is described by its semi-major axis (equatorial radius) a and flattening f. The quantity f = (a−b)/a, where b is the semi-minor axis (polar radius), is a purely geometrical one. The mechanical ellipticity of the earth (dynamical flattening, symbol J2) is determined to high precision by observation of satellite orbit perturbations. Its relationship with the geometric flattening is indirect. The relationship depends on the internal density distribution, or, in simplest terms, the degree of central concentration of mass.
The 1980 Geodetic Reference System (GRS80) posited a 6,378,137 m semi-major axis and a 1:298.257 flattening. This system was adopted at the XVII General Assembly of the International Union of Geodesy and Geophysics (IUGG). It is essentially the basis for geodetic positioning by the Global Positioning System and is thus also in extremely widespread use outside the geodetic community.
The numerous other systems which have been used by diverse countries for their maps and charts are gradually dropping out of use as more and more countries move to global, geocentric reference systems using the GRS80 reference ellipsoid.
The reference ellipsoid is defined by its semi-major axis (equatorial radius) a and either its semi-minor axis (polar radius) b, aspect ratio (b/a) or flattening f. For GRS80, the semi-major axis is a = 6,378,137 m and the flattening is f ≈ 1/298.257.
For a complete definition, four independent constants are required. GRS80 chooses as these a, GM, J2 and ω, making the geometrical constant f a derived quantity.
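A minimal sketch of how the derived geometric constants follow from the semi-major axis and flattening. The inverse-flattening digits beyond the 1:298.257 quoted above are the value commonly cited for GRS80 and are assumed here rather than taken from this article.

a = 6378137.0            # GRS80 semi-major axis, m
inv_f = 298.257222101    # inverse flattening (commonly cited value; assumed)
f = 1 / inv_f

b = a * (1 - f)          # semi-minor (polar) axis, m
e2 = f * (2 - f)         # first eccentricity squared

print(f)     # about 0.003352810681
print(b)     # about 6356752.314 m
print(e2)    # about 0.00669438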
The GRS80 reference system is used by the Global Positioning System, in a realization called WGS 84 (World Geodetic System 1984).
First released in 1982, the luggable computer was an early portable computer that was easier to move than other computers of the day, weighing around 15 to 30 pounds. These computers had a small CRT display and keyboard built into one unit, although in some cases the keyboard was separate. While nothing like laptops, these computers offered several mobility benefits compared with the standard computers of the time. The Compaq Portable II is an example of what a luggable computer looked like.
Also see: Laptop | <urn:uuid:b77769e5-ca57-4ac0-80ae-2073346768b4> | CC-MAIN-2013-20 | http://www.computerhope.com/jargon/l/luggable.htm | 2013-06-19T06:43:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.976079 | 110 |
1. Type genus of the Pipidae.
5. An ordered reference standard.
10. Narrow wood or metal or plastic runners used for gliding over snow.
13. Type genus of the Aceraceae.
14. An Arab country on the peninsula of Qatar.
15. Payment due by the recipient on delivery.
16. Soviet physicist who worked on low temperature physics (1908-1968).
18. A public promotion of some product or service.
20. Prokaryotic bacteria and blue-green algae and various primitive pathogens.
22. A small cake leavened with yeast.
25. A federal agency established to regulate the release of new foods and health-related products.
28. A pointed instrument used to prod into motion.
33. Exhibiting or restored to vigorous good health.
36. A flat wing-shaped process or winglike part of an organism.
39. The brightest star in Virgo.
42. The mission in San Antonio where in 1836 Mexican forces under Santa Anna besieged and massacred American rebels who were fighting to make Texas independent of Mexico.
43. The blood group whose red cells carry both the A and B antigens.
45. A white trivalent metallic element.
46. Largest crested screamer.
48. A warning against certain acts.
51. Small tree of dry open parts of southern Africa having erect angled branches suggesting candelabra.
55. Clean or orderly.
56. A Kwa language spoken in Ghana and the Ivory Coast.
59. Type genus of the family Arcidae.
60. Make editorial changes (in a text).
61. Cubes of meat marinated and cooked on a skewer usually with vegetables.
63. The 22nd letter of the Greek alphabet.
64. (Irish) Mother of the Tuatha De Danann.
65. Small terrestrial lizard of warm regions of the Old World.
66. The sign language used in the United States.
1. The inner surface of the hand from the wrist to the base of the fingers.
2. The United Nations agency concerned with civil aviation.
3. English Quaker who founded the colony of Pennsylvania (1644-1718).
4. A unit of dry measure used in Egypt.
5. Short and fat.
6. A white metallic element that burns with a brilliant light.
7. Essential oil or perfume obtained from flowers.
8. A boy or man.
9. A trivalent metallic element of the rare earth group.
10. Someone who works (or provides workers) during a strike.
11. A cosmetic preparation used by women in Egypt and Arabia to darken the edges of their eyelids.
12. Not in action or at work.
17. A constellation in the southern hemisphere near Telescopium and Norma.
19. An informal term for a father.
21. Someone who is morally reprehensible.
23. Title for a civil or military leader (especially in Turkey).
24. The cry made by sheep.
26. A metric unit of volume or capacity equal to 10 liters.
27. By bad luck.
29. The executive agency that advises the President on the federal budget.
30. Type genus of the Majidae.
31. (Old Testament) In Judeo-Christian mythology.
32. (Babylonian) God of wisdom and agriculture and patron of scribes and schools.
34. Resinlike substance secreted by certain lac insects.
35. West Indian tree having racemes of fragrant white flowers and yielding a durable timber and resinous juice.
37. A tax on employees and employers that is used to fund the Social Security system.
38. A Hindu prince or king in India.
40. A genus of Bothidae.
41. A radioactive element of the actinide series.
44. English monk and scholar (672-735).
47. A communist state in Indochina on the South China Sea.
49. Characteristic of false pride.
50. The basic unit of money in Bangladesh.
52. Predatory black-and-white toothed whale with large dorsal fin.
53. United States newspaper publisher (1858-1935).
54. The bags of letters and packages that are transported by the postal service.
57. The quantity contained in a keg.
58. A loose sleeveless outer garment made from aba cloth.
62. A soft silvery metallic element of the alkali earth group. | <urn:uuid:872dad14-33c7-48e7-a15e-d5d190bd36c0> | CC-MAIN-2013-20 | http://www.crosswordpuzzlegames.com/puzzles/gs_1936.html | 2013-06-19T06:28:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.860854 | 963 |
LOW TECH AND LITERACY FOR OLDER STUDENTS
Day Phone: 605-335-4445
Attend this session and receive many functional ideas for increasing the availability
of accessible literacy activities for inclusion of older students. In order for
literacy instruction to be successful, we must use user-friendly technology,
and address the unique issues of this age group, such as finding and adapting
appropriate reading materials. This presentation will draw on a pool of over 50
technology based classroom applications which focus on the use of technology
for literacy instruction for older students with severe disabilities. Participants
will leave with ideas they will be able to use in their own classrooms.
Inclusion of children with disabilities into literacy instruction requires that the classroom curriculum be applicable and accessible. In order for instruction to be successful, teachers and other classroom personnel must be
given tools which meet their needs as well as the needs of the students they
are serving. Often the technology available to a teacher is limited to what is
personally owned by their students. This poses many obstacles such as little
hands-on teacher training with the device, unavailability to the teacher during
planning periods, and inability to utilize the same technology with a variety
of students. These obstacles greatly reduce the success of incorporating
technology into daily literacy activities.
Technology that meets the following criteria can facilitate inclusionary
activities that truly promote literacy instruction.
Ease of Implementation—Important for the teacher as well as the peers using the technology.
The ability to adjust programming to meet a variety of needs throughout the day.
A variety of features such as varying input modes (object-based, picture-based, etc.), timing options, and relay options.
Versatility (scanning, direct selection, switch input, number of messages) that can accommodate the changing needs of its user or users within the same classroom.
The presentation will incorporate functional literacy application ideas demonstrated with technologies that meet the above criteria. Several technology-based applications, gathered from experienced professionals throughout the country, will highlight the joint use of technology between disabled and non-disabled peers.
ideas will incorporate the use of communication aids, switches, computer
adaptations, and other assistive devices. Although the presentation will focus
on the use of products developed by Adaptivation, the ideas presented can
easily be implemented with other commercially available products. The format of
the presentation will allow participants to share their own ideas and
experiences regarding technology and literacy instruction. | <urn:uuid:30083580-a0f5-4a73-9401-be041baf8aec> | CC-MAIN-2013-20 | http://www.csun.edu/cod/conf/2006/proceedings/2732.htm | 2013-06-19T06:49:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.915638 | 518 |
DMI image reference afn1-23.
Image and text ©2008 Akira Fujii/David Malin Images.
In the picture above north is at the top and the image covers 90.3 x 112.9 degrees.
Image centre is located at 10:59:34.9, -05:42:01 (H:M:S, D:M:S, J2000) Astrometric data from Astrometry.net.
Best seen in the early evening during March to May
In Greek mythology, Hydra the Water Snake, guards the entrance to the Underworld (or the Golden Fleece). In another legend it guards the cup of water (Crater, the goblet of Apollo) from adjoining Corvus, the Crow, forever denying him a drink of water. Hydra was a fresh-water serpent born to Echidne and Typhon and was the beast which Hercules had to slay as the second of his twelve labours. This is perhaps a retelling of an earlier Babylonian story in which the hero Gilgamesh kills a many-headed monster.
Hydra is the largest of the 88 modern constellations at over 1,300 square degrees, extending over 100° of sky. Despite its enormous size, it is hard to identify because the stars are so faint. In ancient times, Hydra was even bigger -- the smaller constellations of Corvus and Crater, together with Sextans, the sextant, were created to reduce Hydra to a more manageable size. Above is the whole of Hydra, but the scale is necessarily small. On more detailed images are the west and central parts of this sprawling constellation. The compact group of stars in the head of Hydra are best seen in the photograph that includes Sextans.
The orange line on the above image is the plane of the ecliptic, along which the Sun and planets appear to move through the zodiacal constellations. The brightest 'star' image here is the planet Jupiter, and at the extreme western (right) edge is the much fainter image of Saturn, firmly on the ecliptic.
Named stars in Hydra (Greek alphabet): Alphard (α Hya), Al Minliar al Shuja (σ Hya), Ashlesha (ε Hya), Cauda Hydrea (γ Hya), Hydrobius (ζ Hya), Pleura (ν Hya).
Constellations adjoining Hydra
Antlia, Cancer, Canis Minor, Centaurus, Corvus, Crater, Leo, Libra, Lupus (corner), Monoceros, Puppis, Pyxis, Sextans, Virgo.
| <urn:uuid:dc376dba-db05-4bd9-b8a1-fd8c6607429b> | CC-MAIN-2013-20 | http://www.davidmalin.com/fujii/source/afn1-23.html | 2013-06-19T06:36:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.909644 | 636
Performance Improvements: Caching
If you're looking at performance and you want to get some quick wins, the obvious place to start is caching. Caching as a concept is focused exclusively around improving performance. It's been used in disk controllers, processors, and other hardware devices since nearly the beginning of computing. Various software methods have been devised to do caching as well. Fundamentally caching has one limitation — managing updates — and several decisions. In this article, we'll explore the basic options for caching and their impact on performance.
Caching replaces slower operations, like making calls to a SQL server to instantiate an object, with faster operations, like reading a serialized copy of the object from memory. This can dramatically improve the performance of reading the object; however, what happens when the object changes from time to time? Take, for instance, a common scenario where the user has a cart of products. It's normal, and encouraged, for the user to change what's in their shopping cart. However, if you're displaying a quick summary of the items in a user's cart on each page, it may not be something that you want to read from the database each time.
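As a minimal sketch of that idea, the snippet below keeps each loaded cart in memory so repeated page views skip the database call. The names (load_cart_from_db, CartCache) and the toy data are illustrative, not taken from any real system.

```python
# Minimal sketch: trade a slow lookup for a fast in-memory read.
import time

_slow_db = {"cart:42": ["widget", "gadget"]}

def load_cart_from_db(cart_id):
    time.sleep(0.05)                         # stand-in for a SQL round trip
    return list(_slow_db[f"cart:{cart_id}"])

class CartCache:
    def __init__(self):
        self._store = {}

    def get(self, cart_id):
        if cart_id not in self._store:       # cache miss: pay the slow cost once
            self._store[cart_id] = load_cart_from_db(cart_id)
        return self._store[cart_id]          # cache hit: memory read only

    def invalidate(self, cart_id):
        self._store.pop(cart_id, None)       # must be called whenever the cart changes

cache = CartCache()
print(cache.get(42))   # slow (miss)
print(cache.get(42))   # fast (hit)
```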
That's where managing cache updates comes in, you have to decide how to manage these updates and still have a cache. At the highest level you have two different strategies. The first set of strategies is a synchronized approach where it's important to maintain synchronization of the object at all times. The second strategy is a lazy (or time) based updating strategy where having a completely up-to-date object is nice but not essential. Say for instance that you had an update to a product's description to include a new award the product has won that may not be essential to know about immediately in the application.
With a synchronized update, sometimes called coherent cache, you're trying to make sure that updates are synchronized between all of the consumers of the cache. When you're on a single server this is pretty easy. You have access to an in-memory cache and you just clear the value and update it with the new value. However, in a farm where you have multiple servers you have a substantially harder problem to solve. You must now use some sort of a coordination point (like a database, out of process memory store, etc.) and take the performance impact of that synchronization, or you must develop a coordination strategy for the cache.
The problem with the synchronized approach is that synchronization reduces the performance of the cache, which is of course the whole reason why it exists. There are implementations of this strategy that require only a small amount of the synchronization data to be shared, and there are strategies that essentially move the cache into a serialized database not unlike persisted session state. In either strategy the impact of synchronization is not trivial.
The synchronization strategy is a poor fit when there's a relatively large amount of change in the cache. For instance, if you needed to know the last few products a user looked at, data that changes rapidly, then you may not want to use a synchronized strategy for that data.
Another approach to synchronized cache is to use a coordination approach. With this approach you believe that the number of updates will be small and because of that you require extra actions when the cache is updated, rather than requiring extra activities on every (or nearly every) read. In general the kinds of data that you want to cache have relatively low volatility (rate of change), so the coordination approach to keeping caches synchronized is often the best choice for synchronized cache. In this situation, when the object changes, and that change is persisted to whatever the back-end store is, the object notifies the cache manager and the cache manager notifies all of the servers running the application.
For instance, in a farm with three servers, the cache manager clears the cache on the currently running servers and then signals the other cache managers say via a web service that they should clear their cache for a specific object. The number of calls needed obviously increases the larger the farm but as a practical matter the frequency is relatively low.
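A rough sketch of that coordination pattern is shown below: when an object is updated, the local cache entry is cleared and every peer in the farm is asked to do the same. The peer list and the notify callback stand in for whatever transport (a web service call, for example) a real farm would use.

```python
# Sketch of the coordination approach: clear locally, then broadcast the clear.
class CoordinatedCache:
    def __init__(self, peers, notify):
        self._store = {}
        self._peers = peers        # names/URLs of the other farm members
        self._notify = notify      # callable(peer, key) that delivers the clear request

    def get(self, key, loader):
        if key not in self._store:
            self._store[key] = loader(key)
        return self._store[key]

    def clear_local(self, key):
        self._store.pop(key, None)

    def on_object_updated(self, key):
        self.clear_local(key)                # 1) drop our own copy
        for peer in self._peers:             # 2) tell every other server to drop theirs
            self._notify(peer, key)

# Toy two-server "farm" wired to clear each other directly.
farm = {}
def notify(peer, key):
    farm[peer].clear_local(key)

farm["web01"] = CoordinatedCache(peers=["web02"], notify=notify)
farm["web02"] = CoordinatedCache(peers=["web01"], notify=notify)

farm["web01"].get("product:7", lambda k: {"name": "old description"})
farm["web01"].on_object_updated("product:7")   # both servers reload on the next read
```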
The final approach to a synchronized cache is having it client-managed via a cookie. This strategy is really only viable for user data/session cache when the cached data is small (so that it can be retransmitted without impact), volatile (so that another strategy won't work well), and non-sensitive (so that the user having the information doesn't have any potential security implications). The recently visited products are a good example of a cache that could be pushed down to the client. This removes the need for it to be managed in a server-side cache.
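A small illustration of the client-managed approach, assuming a comma-separated cookie value holding the last few product IDs; the five-item limit and the helper function are invented for the example.

```python
# Keep the user's recently viewed product IDs in a cookie value, most recent first.
def add_recently_viewed(cookie_value, product_id, limit=5):
    ids = [p for p in cookie_value.split(",") if p] if cookie_value else []
    ids = [p for p in ids if p != product_id]   # de-duplicate
    ids.insert(0, product_id)                   # most recent first
    return ",".join(ids[:limit])                # value to send back in Set-Cookie

cookie = ""
for pid in ["1001", "1002", "1003", "1002"]:
    cookie = add_recently_viewed(cookie, pid)
print(cookie)   # "1002,1003,1001" -- nothing on the server to synchronize
```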
In some situations, such as the product description changing to include a new award, the change needs to be visible to all of the members of the farm and in every session eventually, but the application's usefulness doesn't rely upon an absolutely up-to-the-minute cache to function correctly. In these situations there are two basic approaches that can be applied based on your scenario. First, you can allow the cache to expire after a certain passage of time. For instance, caching information about a product is useful, but perhaps you decide that the product should only be cached for ten minutes. This decision is made because most of the time, if the product isn't accessed within the previous ten minutes, it's not needed any longer.
Depending upon your scenario you can choose to apply a fixed window (e.g. every ten minutes the product object in memory is updated) or a sliding window (e.g. only after ten minutes of inactivity is the product object in memory updated). Of course, we're really talking about expiring the cache, so in both scenarios the objects are simply cleared from the cache at those intervals; when they are actually reloaded or regenerated could be much later, and the older data is only returned while the object remains in cache.
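The sketch below shows one way such expiration could look in code: a fixed window stamps an expiry time when the entry is loaded, while a sliding window renews the expiry on every read. The TTLCache class is an assumption made for the illustration; the ten-minute figure comes from the example above.

```python
# Time-based expiry: fixed window vs. sliding window.
import time

class TTLCache:
    def __init__(self, ttl_seconds=600, sliding=False):
        self.ttl = ttl_seconds
        self.sliding = sliding
        self._store = {}                         # key -> (value, expires_at)

    def get(self, key, loader):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is None or now >= entry[1]:     # missing or expired -> reload
            value = loader(key)
            self._store[key] = (value, now + self.ttl)
            return value
        value, expires_at = entry
        if self.sliding:                         # sliding window: activity renews the lease
            self._store[key] = (value, now + self.ttl)
        return value

products = TTLCache(ttl_seconds=600, sliding=False)  # refreshed at most every 10 minutes
recent   = TTLCache(ttl_seconds=600, sliding=True)   # dropped after 10 idle minutes
```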
When you know specifically when the object data is going to change, say for instance during a batched update in the middle of the night, you can choose to update the entire cache, generally for a class of objects, at the same time. The effect of this is that you are able to maintain cache efficiency except for the one time per day when you know that updates are happening.
The final scenario, which introduces a bit of awareness into your objects that caching is involved, is to allow for a read-through strategy where the consumer of the cached objects can specifically request that the object be read from the source of truth rather than relying upon the cache. This can be useful in specific scenarios where it's critical to know the exact right answer. For instance, consider the importance of having the exact right value during a checkout on a commerce site.
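A possible shape for that behavior is sketched below: normal reads are served from the cache, but a caller on a critical path (checkout, for instance) can insist on the source of truth. The bypass_cache flag and the class name are assumptions made for the illustration.

```python
# Read-through sketch: critical callers can bypass the cached copy.
class ReadThroughCache:
    def __init__(self, loader):
        self._loader = loader
        self._store = {}

    def get(self, key, bypass_cache=False):
        if bypass_cache:                      # critical read: always hit the backing store
            value = self._loader(key)
            self._store[key] = value          # refresh the cached copy while we're at it
            return value
        if key not in self._store:
            self._store[key] = self._loader(key)
        return self._store[key]

prices = ReadThroughCache(loader=lambda sku: 9.99)   # stand-in for a price lookup
prices.get("SKU-1")                      # fine for product listings
prices.get("SKU-1", bypass_cache=True)   # use at checkout, where staleness is unacceptable
```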
Read-through strategies don't solve the issues around a cache getting stale. They just recognize that there are specific times when read operations are critical and other situations where they're less critical. In recognizing this, the length of time that a cache can be stale can be longer, since critical operations will still use the correct data. | <urn:uuid:94992aa9-2a71-4aa3-ad72-23bed5b96ae4> | CC-MAIN-2013-20 | http://www.developer.com/tech/article.php/3831821/Performance-Improvements-Caching.htm | 2013-06-19T06:17:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.942666 | 1,426
The Difference Between Type 1 And Type 2 Diabetes: Most people today understand that there are two types of diabetes mellitus but how many of us actually know the difference between them? Below is a quick overview of the two types and how they manifest themselves.
Type 1 Diabetes
This is also called insulin-dependent diabetes mellitus (IDDM) and is caused by a lack of insulin secretion from the beta cells of the pancreas. It may be that the beta cells have been damaged by a viral infection or an autoimmune disease and so their functioning is seriously impaired. Occasionally, however, there may be a hereditary tendency that leads to beta cell degeneration, and research has shown that a close family member has around a 1 in 20 chance of also developing type 1 diabetes whereas the probability in the general public is around 1 in 250.
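Putting those two figures side by side (a quick illustrative calculation, not part of the original article) shows how much family history matters:

```python
# Compare the quoted risks: 1 in 20 for close relatives vs. 1 in 250 in general.
relative_risk = (1 / 20) / (1 / 250)
print(f"Close relatives: {1/20:.1%} risk")     # 5.0%
print(f"General public:  {1/250:.1%} risk")    # 0.4%
print(f"Roughly {relative_risk:.1f}x higher")  # ~12.5x
```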
The usual onset of type 1 diabetes is around the age of 14 and the majority of sufferers are diagnosed before their twentieth birthday. It may develop very abruptly over a period of a few days or weeks and shows itself in the following 3 step sequence:
- Increased Blood Glucose.
- Increased use of fats for energy and for the formation of cholesterol by the liver.
- Depletion of the body’s protein stores.
This will show outwardly by a sudden drop in body mass that isn’t stopped even when eating large amounts of food. A sufferer will also feel very fatigued and generally under the weather.
Type 1 diabetes is treated with insulin, administered by injection or with an insulin pump. Provided that the condition is diagnosed quickly and the diabetic controls their diet and insulin doses, there is no reason why they can't continue life as normal.
Type II Diabetes
Type II diabetes is also called non-insulin-dependent diabetes mellitus (NIDDM) and is caused by decreased sensitivity of target tissues to the metabolic effects of insulin. This reduced sensitivity is often referred to as insulin resistance.
Type II diabetes is far more common than type I, accounting for 80-90% of all known cases of diabetes. In most cases the age of onset is 40+ years with the majority being diagnosed between the ages of 50 and 60. Unlike type I, this type develops slowly and can go unnoticed for some time.
The insulin resistance in type II diabetes is commonly secondary to obesity. The link between insulin resistance and obesity is as yet poorly understood however some studies suggest that there are fewer insulin receptors, especially in the skeletal muscle and liver in obese people than in lean people.
In many instances type II diabetes can be effectively treated, while still in the early stages, with a calorie controlled diet and mild exercise to promote weight reduction. Occasionally, drugs may be used that increase insulin sensitivity or cause the pancreas to release additional amounts of insulin. If the disease progresses however, then insulin administration is often required to control blood glucose.
Both types of diabetes mellitus are serious illnesses and need to be treated as such. Poor management will quickly lead to a diabetic episode and if left unchecked, diabetic coma and death. | <urn:uuid:f2fa5d77-e946-405d-99a6-a1e70346cbe7> | CC-MAIN-2013-20 | http://www.diabeticlive.com/diabetes-101/the-difference-between-type-1-and-type-2-diabetes-mellitus/ | 2013-06-19T06:34:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.956587 | 625 |
It is a difficult task to judge history when history is still being made. What can we make of AIDS 27 years after it was first classified as a specific disease? Is it a one-time terror that countries and organizations around the world are continually reigning in? Or a brutal epidemic that has revealed but a mote of its fury?
When a single cause is attributed to the deaths of 2 million people in a given year, it is hard to discount the severity of such a disease. That was the estimate released by UNAIDS, a multi-party initiative of the United Nations, in a report in late July. But the same report shows that worldwide prevalence and deaths are in decline. From 2001 to 2007, new HIV infections declined from 3 million to 2.7 million. While much of the decline has been seen in Africa, the overall numbers do not dictate with authority that the disease is under control. The UNAIDS report listed eight countries in which new infections are on the rise, including China, Indonesia, Kenya, Mozambique, Papua New Guinea, the Russian Federation, Ukraine and Vietnam.
It turns out, in addition, that in the United States the prevalence of HIV, the virus that causes AIDS, has been miscalculated for some time. The Centers for Disease Control and Prevention (CDC) reported at the 17th International AIDS Conference in Mexico City this past summer that the total number of Americans affected yearly by AIDS is about 56,000, some 40 percent more than previous estimates had warranted.
As understanding of pathology and effective medicines improves, more and more people are living with the disease — some 33 million worldwide. In the United States, the mortality rate of AIDS has declined. In 1995, according to research, AIDS was the number-one cause of death for those between the ages of 25 and 44. Today it is the fifth leading cause of death in the same population. As all of these numbers indicate, disease management has taken — and should continue to take — a prominent position in the treatment and delivery of care.
A Paradigm Shift
Disease management of persons with HIV/AIDS is a challenging and potentially rewarding enterprise. Effective disease management improves clinical outcomes, connects clients to the community, provides access to expert medical management, decreases transmission of HIV through maximum viral suppression by medication management, and ultimately saves lives.
In 1995 there was a paradigm shift in the approach, treatment and long-term management of HIV/AIDS. The first protease inhibitors were introduced, and they transformed the treatment modalities and level of care from the acute care treatment of HIV-associated opportunistic infections and cancers to chronic outpatient management with durable viral suppression. The therapies have been so successful for those individuals who are adherent to their HAART (highly active antiretroviral therapy) that death rates have drastically declined from the peak in 1995 of 10.3 per 100 persons with HIV to less than two per 100 persons with HIV in 2006, according to a study published in the Journal of Acquired Immune Deficiency Syndrome. The treatment revolution has led to evolving the care continuum for persons with HIV/AIDS from palliative healthcare delivery to an approach which embraces the chronic care model for management of persons with HIV.
Although there has been a true revolution in treatment, the number of persons infected with HIV in the United States continues to grow. The latest estimate of national HIV prevalence from the CDC at the end of 2003 was roughly 1 to 1.1 million. More ominous yet, the CDC estimates that approximately 24 to 27 percent of the 1 million are unaware of their HIV infection. The revised figures presented by the CDC ironically highlight this understated sense of awareness.
In this country the combination of decreasing death rates and increasing infection rates will increase the need for a disease management approach to managing persons with HIV, to ensure that all persons infected with HIV have access to healthcare systems with expertise in managing the disease. This will be necessary because the factors that impact effective long-term management of the HIV epidemic continue to grow — for example, complicated drug regimens, significant drug-to-drug interactions, shifting demographics of the population affected (i.e., increasing cases among minorities and women), access to care by HIV-experienced medical providers, social stigma, and cultural barriers.
Effective antiretroviral medication therapies have made HIV/AIDS a chronic disease; however, HIV/AIDS remains unlike any of the usual chronic diseases — such as diabetes, congestive heart failure and asthma — normally associated with disease management. Because it is a communicable disease that is transmitted through sexual or blood-borne exposure, HIV is a major and dangerous public health concern.
Implementing a disease management program to specifically address HIV not only improves adherence to therapy and clinical outcomes but also addresses the following situations and scenarios that can emerge (see chart below). | <urn:uuid:41fd613c-78fa-49ca-9c8c-7be508d51477> | CC-MAIN-2013-20 | http://www.dorlandhealth.com/case_management/clinical/HIVAIDS-Bringing-an-Epidemic-to-Light_13.html | 2013-06-19T06:23:35Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.951613 | 1,039 |
In Charlotte County, using smokeless tobacco has become a hit, especially among teens. More and more kids are ignoring the dangerous and unhealthy effects of tobacco use as the number of nicotine-dependent teens continues to increase.
Smokeless tobacco comes in two forms: snuff and chewing tobacco. Chewing tobacco consists of shredded, twisted, or bricked tobacco leaves, while snuff is fine-grain tobacco contained in teabag-like pouches. Snuff is consumed by pinching or dipping the bags between the lower lip and gums, while the chewing type is usually kept between the cheek and gum.
A recent survey shows that teens use smokeless tobacco up to six times a day. According to the 2010 Florida Youth Tobacco Survey, 13.1% of high school students in Charlotte now admit to using smokeless tobacco at least once in the last 30 days.
Teens often think that because the tobacco is only chewed or allowed to sit in one’s mouth, it becomes safer than actually smoking it as they do cigarettes. Yet the same nicotine effect happens when they suck on the tobacco juices allowing nicotine to enter the bloodstream through the tissues in their mouth. There’s no need to even swallow the tobacco since it already takes effect just by staying in one’s mouth.
Principal Ron Schuyler from The Academy in Port Charlotte admitted that they don’t usually have any idea who is using smokeless tobacco. He says that teens can easily conceal their use of the substance to avoid being reprimanded.
“If any person is addicted to nicotine, they want to get that nicotine fix, and this way, they can get that fix with less chance of having any school discipline” | <urn:uuid:7126b6cc-812f-4989-8066-df066d6f99a3> | CC-MAIN-2013-20 | http://www.drugfreehomes.org/2012/02/smokeless-tobacco-abuse-among-charlotte-teens.html | 2013-06-19T06:29:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.9539 | 360 |
The central dogma of biology states that DNA is used to make RNA, and RNA is used to make a protein. The first stage, making RNA from DNA, is called transcription. The second stage, making a protein from RNA, is called translation.
In this lesson, we'll look at the translation stage of the central dogma of biology. Translation is also known as protein synthesis.
Amino acids are the building blocks of polypeptides, and proteins are made from polypeptides. In translation, the RNA produced from transcription is used as a template that determines the sequence of amino acids in a polypeptide. Polypeptides are built with the help of ribosomes and transfer RNA (tRNA).
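As a toy illustration of these two stages, the snippet below transcribes a short made-up DNA sequence into mRNA and then translates it codon by codon using a small subset of the standard genetic code; the sequence and the reduced codon table are only for demonstration.

```python
# DNA -> RNA -> protein, in miniature.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GCU": "Ala", "GGC": "Gly",
    "UGG": "Trp", "AAA": "Lys", "UAA": "Stop", "UGA": "Stop",
}

def transcribe(coding_strand_dna):
    # Simplification: mRNA matches the coding strand with T replaced by U.
    return coding_strand_dna.replace("T", "U")

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):           # read one codon (3 bases) at a time
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "Stop":                   # a stop codon ends the polypeptide
            break
        peptide.append(amino_acid)
    return "-".join(peptide)

mrna = transcribe("ATGTTTGCTGGCTGA")   # transcription
print(translate(mrna))                 # translation: Met-Phe-Ala-Gly
```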
Protein synthesis can be affected by mutations, which can have harmful effects. | <urn:uuid:07c9e6ee-6513-4381-b33c-37b9a59b3e97> | CC-MAIN-2013-20 | http://www.educator.com/biology/animated-biology-lectures/translation_-protein-synthesis.php | 2013-06-19T06:17:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.948197 | 167
If you have asthma, you know how it feels to have a “flare-up.” It’s hard to breathe. You may cough a lot, or hear a whistling sound in your chest (called wheezing). Your chest may feel tight. You may feel tired and not want to play. Why does this happen? Use a paper horn to see how your lungs work. First, blow into the horn. Air goes in and out. That’s what healthy lungs are like. Now squeeze the middle of the horn (like the doctor in the picture). Air can’t get in and out. That’s like your lungs when you have an asthma flare-up.
Of course, lungs aren’t exactly like a paper horn. Inside the lungs, air goes in and out through very small tubes. These tubes are called airways. Asthma makes airways a little bit inflamed all the time. (That means swollen and red, like your nose when you have a cold.) Air can still go in and out. You may not notice a problem. But lots of things can bother inflamed airways. Then they get even more swollen. Pushing the air in and out gets harder. Less air gets into your lungs. That’s a flare-up.
Color the open airways green. Color the narrow airways red. | <urn:uuid:0c19fcfc-73b8-4b66-8319-ce608c6731fa> | CC-MAIN-2013-20 | http://www.einstein.edu/einsteinhealthtopic/?languagecode=es&healthTopicId=344&healthTopicName=Otolaryngology&articleId=88845&articleTypeId=3 | 2013-06-19T06:29:47Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.934606 | 311 |
Superabsorbent Polymers (SAP):
Superabsorbent polymers are primarily used as an absorbent for water and aqueous solutions for diapers, adult incontinence products, feminine hygiene products, and similar applications. Undoubtedly, in these applications, superabsorbent materials will replace traditional absorbent materials such as cloth, cotton, paper wadding, and cellulose fiber.
Commercial production of superabsorbent polymers began in Japan in 1978, for use in feminine napkins. This early superabsorbent was a crosslinked starch-g-polyacrylate. Polyacrylic acid eventually replaced earlier superabsorbents and is the primary polymer employed for superabsorbent polymers today.1 In 1980, European countries further developed the superabsorbent polymer for use in baby diapers. The first diapers employing this technology used only a small amount of polymer, approximately 1-2 g. In 1983, a thinner diaper using 4-5 grams of polymer and less fluff was marketed in Japan.
The use of superabsorbent polymers revolutionized the diaper industry. Diaper manufacturers began to design diapers to take advantage of the amazing liquid retention ability of the polymer. Superabsorbent polymers absorb, and retain under a slight mechanical pressure, about 30 times their weight in urine.2 The swollen gel holds the liquid in a solid, rubbery state and prevents the liquid from leaking onto the baby’s skin and clothing.1
Superabsorbent polymers are prepared from acrylic acid and a crosslinker by solution or suspension polymerization. The type and quantity of crosslinker control both the swelling capacity and gel modulus.2 The synthesis and use of crosslinked polyacrylate superabsorbents have been a popular topic in the polymer literature. However, very little information about manufacturing processes has been given due to its proprietary content.1
The properties of superabsorbent polymers can be employed in many different applications. The largest use of superabsorbent polymers is in personal hygiene products. These consumer products include, in order of volume of superabsorbents used, disposable infant diapers, children’s training pants, adult incontinence articles, and feminine sanitary napkins.1 AMCOL estimated their total superabsorbent polymers sold in 1995 to be represented by the graph below: 3
(source: AMCOL website)
Since the introduction of superabsorbent diapers in Japan in 1983, the global market for superabsorbent polymers has grown and changed dramatically in the last ten years as superabsorbents have replaced fluff pulp in diapers and other personal hygiene articles. Worldwide superabsorbent polymer production capacity grew from only a few thousand metric tons in 1985 to greater than 700,000 metric tons in 1995, with the United States accounting for 30% of this superabsorbent polymer demand.2
Approximately 75% of the superabsorbent polymers used worldwide are sold in diaper products from five major companies. These manufactures include Proctor & Gamble (P&G), Kimberly-Clark, and other diaper manufacturers such as Paragon Trade Brands, Molnycke, and Unicharm.2
In the United States, Proctor & Gamble is a well-known diaper manufacturer, which produces the popular Pampers diaper. The superabsorbent polymer used in the Pampers diaper holds approximately thirty times its own weight in body fluid.4 The P&G Corporation developed a unique three-piece construction diaper to absorb the moisture and distribute it evenly. The transmission of fluid to the absorbent core allows the fluid to be engulfed, therefore not passing it back to the skin. P&G diapers are now sold in more than 80 countries worldwide with $4 billion in sales.4
Superabsorbent Polymer Manufacturers:
In just twenty years, worldwide production of superabsorbent polymers is in full swing. Many industrial leading countries have companies producing some type of superabsorbent polymer. In the U.S., current manufacturers of acrylate-based superabsorbents include The Dow Chemical Company, Sanyo Chemical Industries, Nippon Shokubai Company, and the Chemdal Corporation, which is a subsidiary of AMCOL International. Other manufacturers located in Europe include AMCOL, Stockhausen GMBH, Dow Chemical, Hoechst Casella, Allied Colloids, and Nippon Shokubai. Superabsorbent polymer production in Japan comes from companies such as Nippon Shokubai, Sanyo, Mitsubishi Petrochemical Company, and Sumitomo Seika.
The leading producers of SAP consist of Stockhausen GMBH, Nippon Shokubai, Chemdal Corporation, Hoechst Casella, Dow Chemical, and Sanyo. These companies manufacture eighty percent of the worldwide production of superabsorbents.
The following table shows SAP production for these industry leaders, in metric tons:2
Stockhausen GMBH 162,000
Nippon Shokubai 137,000
Chemdal Corporation 120,000
Hoechst Casella 94,000
Dow Chemical 90,000
Sanyo 47,000
Information regarding manufacturing processes of SAP is hard to obtain. The majority of manufacturing processes producing SAP employ solution polymerization, in which the monomer acrylic acid is dissolved in a solvent with free radical initiators.1 Another process used to produce SAP is suspension polymerization, although in much smaller quantities. In 1996, there were three commercial superabsorbent polymer manufacturers using a suspension process. These were Kao Soap Co. and Sumitomo Seika in Japan and Elf Atochem S.A. in France.1 Suspension polymerization is a process in which droplets of monomer or monomer solution are dispersed in an immiscible continuous phase. The polymerization is carried out independently in these dispersed droplets.5 Sumitomo Seika has been the most prominent user of a suspension process to make superabsorbent polymers.
Disposable diapers have changed greatly during the past 30 years, but three basic design components are used in any diaper. A diaper consists of an absorbent core between a porous top-sheet and an impermeable back sheet. The top-sheet must do three things: it must allow the urine to flow through it, keep the liquid away from the baby’s skin, and retain the structural integrity of the absorbent core. Usually it is made of a porous, hydrophobic substance, for example, polyester or polypropylene non-woven fabric.1 The back sheet helps keep the baby’s clothing dry and is a nonporous, hydrophobic substance, such as a polyethylene film. The absorbent core takes in the liquid, distributes it to all regions of the core, and holds the liquid under pressure from the baby.
AMCOL put out a market breakdown in 1996 for diaper material costs, which can be seen below:
BREAKDOWN OF DIAPER MATERIAL COST
DIAPER MATERIAL / % OF TOTAL MATERIAL COSTS PER DIAPER
FLUFF PULP 19.7
BOTTOM SHEET 9.5
NONWOVEN TOP SHEET 15.5
CARRIER TISSUE 2.3
LYCRA MATERIAL OR RUBBER 2.1
MULTI-PURPOSE ADHESIVE 2.5
ELASTIC ADHESIVE 1.8
TAPE TAB PRESSURE SENSITIVE 5.5
COURTESY OF NONWOVENS MARKETS, MAY 1996
(source: AMCOL website)
In the early 1980s, the use of superabsorbent polymers in diapers grew into the mainstream. Diapers were first designed solely to optimize their absorptive ability. This required superabsorbents having a low crosslink density.1 However, this also resulted in a small gel modulus when swollen, causing a "gel-block." These are swollen masses of gel that blocked the incoming liquid from entering the interior of the diaper. Gel-blocked masses were more likely to allow the urine to contact the baby's skin for long periods of time, and were more likely to leak. To prevent this, new composite structures containing both cellulose pulp fluff and superabsorbent polymer were developed. This provided a matrix in which liquid could flow. In addition, the crosslink density of the gel particles was increased, resulting in products with a higher liquid retention under shear. Although the equilibrium swelling capacity was somewhat lower, the improved gel modulus led to an overall improvement in diaper performance.1
Further design optimizations were introduced as the diaper industry became increasingly competitive. As the understanding of anatomy increased, the polymer was placed more in the front and back of the user, instead of in the crotch. In addition, ridged and lengthwise folds were developed to prevent leakage. Layers of different density were placed in the pad, allowing the liquid to move and store more efficiently. Layering has become more prevalent with the incorporation of superabsorbent polymer into absorbent cores. 1
Many aspects must be considered in the design of the polymer matrix in the diaper core. The absorption rate of the diaper must not be slower than the urination rate of the baby, otherwise leakage will occur. The absorption rate of the composite is influenced by the absorption rate of the superabsorbent polymer. On the other hand, fast swelling of the polymer may or may not be desirable. In some diaper designs, fast swelling may cause the diaper to leak if the porosity and permeability of the composite are reduced.1 The absorption rate of superabsorbent polymers is affected by the maximum absorption capacity of the polymer and its particle size and shape. The placement of fast or slow absorbing polymers in the composite structure therefore has important implications for the effectiveness of the composite. Many different schemes for mixing fluff and superabsorbent polymer have been investigated in order to find optimum diaper performance. In addition, particle size, placement, and relative amounts play a large role in the optimization of absorption. When the superabsorbent swelling is delayed in the wetting region of some diaper designs, there is more time to distribute urine through the diaper. By distributing the liquid better throughout the diaper, there is less saturation of the core in the wetting region, so further wetness may be absorbed.
With a refined understanding of the impact of superabsorbent polymer on the absorbent core in the early 1990s ultra-thin diapers became possible. The amount of cellulose pulp fluff used in these diapers was reduced by half, yielding a thinner diaper with a higher concentration of superabsorbent polymer in the absorbent core.2 As polymer properties become increasingly understood, diapers become thinner as the ratio of polymer to fluff increases.
A separate layer of non-woven fibers was added to improve urine distribution in the diaper. This distribution layer was placed between the composite absorbent core, which consists of the cellulose fiber and superabsorbent, and the porous cover sheet. The distribution layer had lower absorbency than either the standard cellulose fluff or superabsorbent polymer and a lower density, which allowed for a fast liquid distribution within the diaper. The layer was made from either chemically crosslinked cellulose fibers or a nonabsorbent non-woven material, such as polypropylene fiber, and it was sufficiently porous to allow liquid to pass through freely.1
Construction of the SAP:
Superabsorbent polymer is added to baby diapers in basically two ways: layered or blended. Japanese diaper manufacturers commonly adopt the layered application. In this method, powdered superabsorbent polymer first is scattered onto a layer of fluff pulp. The fluff is then folded, so that the polymer is located in a centralized layer in the absorbent structure. This structure is covered with a non-woven fabric layer. In the blended application, the superabsorbent polymer first is mixed homogeneously with the fluff pulp. Then the mixture is laid down to give the absorbent structure, which is subsequently covered with a non-woven fabric. The blended application of the SAP is representative of American diaper manufacturers.
In each case, containment of the powdered polymer within the loose, porous structure of the diaper is a concern. A recent development in Japan is the use of thermally bondable fibers within the absorbent structure to help fix the superabsorbent in place. In this method, some of the fluff pulp is replaced with thermally bondable fibers.1
Structure – Property Relationship:
The structure of polyacrylic acid is a chain of -[CH2-CH(COOH)]- repeat units and contains an ionizable carboxylic acid group (-COOH) on each repeat unit.6
These polymer chains are then crosslinked at the –OH.
The mechanism of swelling of ionized, crosslinked polymer networks is based upon the concept of osmotic pressure. According to Flory9, the polymer acts as a semipermeable membrane which does not allow the charged substituents to exit the polymer into the surrounding solution. Because the ionized monomeric units carry fixed charges, mobile counter-ions are held within the network, so the concentration of free ions is greater inside the polymer than outside. The osmotic pressure exerted by this concentration gradient causes water to diffuse in and the network to swell.
The synthetic pathway of polyacrylic acid is shown below:
Na2S2O8 is a radical initiator, which polymerizes the sodium acrylate salt monomers. The crosslinking agent is added in the same step.1
Certainly the first property and arguably the most important in a commercial superabsorbent used in the personal care market is the extent of swelling. This is true not only because swelling is related to the properties of the network, but also because the principal performance criterion for diapers is the amount of liquid contained per unit cost of diaper.1 In which case, the swelling capacity is approximately 20-40 mL of urine per gram of polymer.2
If the superabsorbent polymer is more highly crosslinked, it is more rigid in the swollen state. Improving the rigidity of the particles enables the swelling particles actually to push aside the fiber component of the composite, thereby maintaining the porosity and permeability during subsequent contacts with liquid. However, this must be optimized, as particles which are too rigid will cause leaks by tearing the surrounding fiber.
How Absorption Works:
Superabsorbent polymers are crosslinked networks of flexible polymer chains. The most efficient water absorbers are polymer networks that carry dissociated ionic functional groups. Except for the molecular-sized chains that make up the network, this picture of a network is remarkably similar looking to the mass of cotton fibers. The difference is that cotton takes up water by convection – water is "sucked" up, wetting the dry fibers; SAPs work by diffusion on the molecular level, since their "fibers" are actually long chained molecules.
Water diffuses into a particle of superabsorbent polymer when the concentration of water is initially lower in the interior of the particle. As water travels into the particle, it swells to accommodate the additional molecules. Because the polymer molecules are crosslinked, they do not dissolve in the absorbing liquid.
Absorbency under load and stability of the gel against shear are important properties of superabsorbent polymers and relate strongly to diaper performance. Diaper leakage was closely correlated to the stability of gel to shearing. More rigid superabsorbent particles, created by increasing the crosslinking, allows for a higher gel modulus and helps the particle withstand the shearing from the baby’s weight.
The most commonly available superabsorbent polymers are hard, dry, granular powders that look much like clean white sand or granular table sugar. When these polymer particles are placed in water, a slurry of water and the particles is formed. Gradually the superabsorbent polymer absorbs the water, turning into a soft, rubbery gel. On average, fluffed cellulose pulp fibers will absorb about 12 g of water per gram of dry fiber, whereas superabsorbent polymers will absorb up to 1,000 g of water per gram of polymer. 1
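A back-of-the-envelope comparison (not from the original text) uses the figures quoted in this article, roughly 12 g of water per gram of fluff pulp and about 30 g of urine per gram of polymer under realistic loaded conditions, to illustrate why replacing much of the fluff with a few grams of SAP keeps the capacity while making the core much thinner. The core masses themselves are invented for the example.

```python
# Rough core-capacity comparison using figures quoted in the article.
def core_capacity_grams(fluff_g, sap_g, fluff_uptake=12.0, sap_uptake=30.0):
    return fluff_g * fluff_uptake + sap_g * sap_uptake

conventional = core_capacity_grams(fluff_g=50.0, sap_g=0.0)    # fluff-only core
ultrathin    = core_capacity_grams(fluff_g=25.0, sap_g=10.0)   # half the fluff plus SAP

print(f"fluff-only core:  ~{conventional:.0f} g of liquid")    # ~600 g
print(f"fluff + SAP core: ~{ultrathin:.0f} g of liquid")       # ~600 g from a much thinner pad
```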
Small amounts of crosslinkers play a major role in modifying the properties of superabsorbent polymers. In addition to modifying the swelling and mechanical properties, the crosslinker affects the amount of soluble polymer found during the polymerization as result of its relative reactivity with acrylic acid or sodium acrylate. Efficiency of crosslinking will also depend on steric hindrance and reduced mobility at the site of pendant double bonds, the tendency of a given crosslinker to undergo intermolecular addition reactions, and the solubility of the crosslinker in the monomer mixture.1
In a diaper core, capillaries exist between fluff pulp fibers, polymer particles, or the two in combination. Distribution of liquid in the diaper core will be affected by the surface tension of the liquid that is flowing in these capillaries. In use, impurities may be extracted from a swollen superabsorbent polymer into the solution external to it. Through the surface tension of the solution, the movement of fluid through the capillaries in a diaper core may be affected by these impurities.2
Superabsorbent polymers are used under conditions in which the system temperature may change over time. For example, in a diaper, the superabsorbent polymer will first be bathed in a salt and urea solution that is at the internal temperature of the human body, but the resulting gel will cool slowly in contact with the external environment. The extent and rate of cooling will depend on the climate and other environmental factors. The diffusion coefficient of polymers in solution is temperature dependent, and this should be reflected in the absorption rate of superabsorbent polymers.1
Because 90% of all superabsorbent materials are used in disposable articles, most of which are disposed of in landfills in the United States or by incineration in northern Europe, there is a perceived environmental problem with superabsorbent polymers. In the late 1980s, bans or taxes on disposable diapers were being considered in at least 20 states. However, analyses of both disposable and cloth diapers manufactured in the late 1980s have shown that there is no clearly superior choice in terms of environmental impact.
Since then, disposable diapers have been modified to use fewer raw materials, which should result in a reduced solid waste burden, reduced packaging costs, and reduced transportation costs. Despite the technical analysis, consumers clearly perceive disposable absorbent products, specifically diapers, as having a negative impact on the environment. Therefore, superabsorbent polymer producers have been interested in developing biodegradable diapers or other biodegradable absorbent products. Articles incorporating biodegradable superabsorbents might be disposed of in municipal composting facilities or flushed down the toilet to degrade in domestic septic tanks or at municipal wastewater treatment plants. Several diapers claiming biodegradability have been marketed, but none has enjoyed commercial success.
Furthermore, the advantages of a biodegradable superabsorbent polymer will be realized fully only in conjunction with a completely biodegradable structure, for example, diaper with biodegradable back sheet, tapes, adhesives, and elastics.1
Amount of Landfill:
In one infant's lifetime, approximately 8,000-10,000 disposable diapers will be used, and each of those diapers takes approximately 500 years to degrade in a landfill.7 These diapers are filled with untreated body excrement. Over five million tons of it, which could be carrying intestinal viruses, is brought to landfills. Groundwater contamination could be attributed to this form of disposal, and insects attracted to the sewage may carry diseases that can be transmitted. In 1990 alone, over 18 billion disposable diapers, which were not readily biodegradable, were thrown into US landfills. The diapers themselves need to be exposed to air and sun to allow the paper to decompose. It also must be taken into consideration that thirty percent of a disposable diaper is plastic and is not compostable. Recycling plants can only handle 400 of the 10,000 tons of diapers that reach landfills each day, and the diapers are not completely recyclable.8 This number would only hold if they didn't have to process any other compostable garbage.
There are several reasons for the continued development of advanced superabsorbent polymers. Diaper manufacturers would like to reduce their manufacturing costs. A superabsorbent that can take on other roles in the absorbent core can displace other components in the core, in turn reducing raw material costs and simplifying construction. For example, a superabsorbent fiber that can provide rapid fluid absorption and adequate wicking and transport of fluid could displace cellulose fluff in the absorbent core, leading potentially to a simpler or less expensive diaper. On the other hand, manufacturers of premium diapers seek to differentiate their products in order to command a higher price. An advanced superabsorbent that brings marketable performance advantages, for example biodegradability, to an absorbent product may be of special value to diaper manufacturers.
9. Flory, P. J., Principles of Polymer Chemistry, Cornell University Press, 1953. | <urn:uuid:ae795164-cb8c-4f2d-a785-65899aa356f7> | CC-MAIN-2013-20 | http://www.eng.buffalo.edu/courses/ce435/Diapers/Diapers.html | 2013-06-19T06:29:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.916171 | 4,470
Educational methods changed significantly during the 1950s as Americans started to reap the benefits of a strong economy, with a job waiting for almost every able-bodied adult. To face this new, prosperous world, schools changed curricula. Teaching students "life adjustment" took precedence over the traditional skills of math, science, and reading. Schools emphasized mental, physical, and emotional aspects of a child's life. The humanities and life skills became the new focus of educators. Home-economics classes and government classes attained record enrollments as citizenship and managing the home and family became high priorities. Comprehensive high schools offered a wide variety of vocational training as well as numerous electives in such areas as photography, botanical care, and baby care. Audio-visual aids, modern laboratory equipment, and supplemental reference materials regularly enhanced education in the...
| <urn:uuid:2e033422-688f-43de-967e-1920c76bb07c> | CC-MAIN-2013-20 | http://www.enotes.com/1950-education-american-decades/curricula | 2013-06-19T06:35:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.952183 | 282
Article abstract: The last Indian war heralds the close of the American frontier and the end of traditional life for Native Americans.
Summary of Event
The Battle of Wounded Knee, on December 29, 1890, was preceded on December 15 by the slaying of Sitting Bull, the last great Sioux warrior chief. His death resulted from an effort to suppress the Ghost Dance religion, which had been begun by Wovoka. Wovoka’s admixture of American Indian and Christian beliefs inspired hope in an eventual triumph of the American Indians over the white settlers, who,...
| <urn:uuid:5a829461-c1d4-4ab9-ba5a-08fed4af92bf> | CC-MAIN-2013-20 | http://www.enotes.com/wounded-knee-reference/battle-wounded-knee | 2013-06-19T06:23:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.907688 | 232
Space technology to soothe Roadster ride
Space missions are highly complex operations, not only because the satellites or space probes are unique pieces of top-notch intricate high-tech, but also because it is so challenging to get them to their assigned position in space without damage. The technology used is now being transferred to the car industry to increase comfort.
During its launch into orbit, a satellite is exposed to a number of extreme stresses. At takeoff, the extremely strong engine vibrations are transmitted via the launcher structure to the satellite, which is also exposed to high-intensity sound levels (140 dB and more). The increasing speed of the rocket also leads to aerodynamic strains that turn into a shockwave when the launch vehicle's velocity jumps from subsonic to supersonic.
That is not all. When the burned out rocket stages are blasted off and the next stage is fired up, the satellite is exposed to temporary impulsive vibrations. So how does the satellite survive earthquake-like vibrations, the forces of supersonic shock waves and the pressures of explosive blasts?
French company ARTEC Aerospace has developed a vibration and acoustic attenuation technology based on a damping mechanism within the structures, called Smart Passive Damping Device (SPADD®). The principle of the technology is to increase the natural damping of a structure by fixing a light energy-dissipating device to it, without modifying the structure's static behaviour.
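As a generic illustration of why added damping matters (a textbook single-degree-of-freedom sketch, not ARTEC's proprietary model), the vibration envelope of a lightly damped structure decays exponentially at a rate set by the damping ratio, so raising that ratio makes the structure ring down far faster after a shock. The 50 Hz mode and the two damping ratios below are assumed values for the example.

```python
# Decay envelope of a damped oscillator: exp(-zeta * omega_n * t).
import math

def decay_envelope(t, f_n_hz, zeta):
    omega_n = 2.0 * math.pi * f_n_hz
    return math.exp(-zeta * omega_n * t)

f_n = 50.0                    # assumed structural mode at 50 Hz
for zeta in (0.005, 0.05):    # untreated vs. treated structure (illustrative values)
    t_half = math.log(2.0) / (zeta * 2.0 * math.pi * f_n)
    print(f"zeta={zeta}: amplitude halves every {t_half*1000:.1f} ms,"
          f" residual after 0.5 s = {decay_envelope(0.5, f_n, zeta):.3g}")
```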
SPADD's damping system is so much superior to traditional dissipation devices that it is considered to be a technological breakthrough in the investigation and research of vibro-acoustics, the area of tackling noise and vibration problems such as those induced by powerful jets or rockets.
The SPADD technology is used on the Ariane launchers and also mounted on board a number of satellites such as Intelsat, Inmarsat, Integral and MetOp.
Space technology for the car industry
Based on this space technology, ARTEC Aerospace has developed tools for optimising the damping in non-space structures. ESA’s Technology Transfer Programme Office (TTPO) supported the transfer of this technology to the car industry through its Technology Transfer Network (TTN).
MST Aerospace, technology broker and leader of TTPO's TTN, then brought ARTEC Aerospace and its SPADD technology together with German car manufacturer Daimler Chrysler AG.
The design of convertible vehicles is often based on sibling vehicles of the saloon or coupé line of cars. However, by taking off the top of a self-supporting structure, the convertible's structure loses stiffness. This leads to torsion vibrations that, apart from making for an uncomfortable ride, also make the rear-view mirror and the steering wheel shake violently, up to 10 times more than in the saloon version.
At present, the way to correct this is to increase the shell weight of the body but this means that despite the missing top, a convertible weighs around 50 kg more than the saloon version. ARTEC Aerospace demonstrated to Daimler Chrysler that by using SPADD technology on a Mercedes CLK roadster, stiffening elements of 30 to 40 kg mass could be saved.
Successful road tests followed
Since then, Daimler Chrysler and ARTEC Aerospace have been working on implementing the SPADD technology in specific vehicle lines and finding suitable development partners. According to Daimler Chrysler and ARTEC, the results of the cooperation are very promising and have been demonstrated through successful road tests of models with different implementation of the technology.
SPADD has the potential to increase the performance of the structure, for geometrical simplification and for mass and cost savings.
ESA's Technology Transfer Programme Office (TTPO)
The main mission of the TTPO is to facilitate the use of space technology and space systems for non-space applications and to demonstrate the benefit of the European space programme to European citizens. The office is responsible for defining the overall approach and strategy for the transfer of space technologies including the incubation of start-up companies and their funding. For more information, please contact:
ESA’s Technology Transfer Programme Office
European Space Agency ESA - ESTEC
Keplerlaan 1, 2200 AG, Noordwijk ZH
Phone: +31 (0) 71 565 6208
Fax: +31 (0) 71 565 6635
Email: ttp @ esa.int | <urn:uuid:d4a05b26-a487-4961-893b-17311a440d32> | CC-MAIN-2013-20 | http://www.esa.int/Our_Activities/Technology/TTP2/Space_technology_to_soothe_Roadster_ride | 2013-06-19T06:42:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.920844 | 906 |
- tongue (n.)
- Old English tunge "organ of speech, speech, language," from Proto-Germanic *tungon (cf. Old Saxon and Old Norse tunga, Old Frisian tunge, Middle Dutch tonghe, Dutch tong, Old High German zunga, German Zunge, Gothic tuggo), from PIE *dnghwa- (cf. Latin lingua "tongue, speech, language," from Old Latin dingua; Old Irish tenge, Welsh tafod, Lithuanian liezuvis, Old Church Slavonic jezyku).
For substitution of -o- for -u-, see come. The spelling of the ending of the word apparently is a 14c. attempt to indicate proper pronunciation, but the result is "neither etymological nor phonetic, and is only in a very small degree historical" [OED]. Meaning "foreign language" is from 1530s. Tongue-tied is first recorded 1520s.
- tongue (v.)
- "to touch with the tongue, lick," 1680s, from tongue (n.). Earlier as a verb it meant "drive out by order or reproach" (late 14c.). Related: Tongued; tonguing. | <urn:uuid:47b1b11e-96cb-4524-89ad-98a1e5cdb57c> | CC-MAIN-2013-20 | http://www.etymonline.com/index.php?term=tongue | 2013-06-19T06:28:56Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.89326 | 266 |
Summary: Water and sanitation issues: Making a difference in the developing world (22 March 2007: Brussels)
Research financed by the EU is playing a major role in finding new ways to manage water and sanitation issues in the developing world.
An example is a project among coastal communities in Bangladesh, India and Sri Lanka, which has involved local women in the production, installation and maintenance of sanitation facilities. The joint development of decent sanitation adapted to local conditions, the treatment of waste and its impact on the local water supply have had a very direct influence on human health. In association with creating income for the women from sanitation facilities, composting waste, safe rainwater collection and similar activities, the action research has improved conditions on the ground.
By measuring the effects in the project areas in comparison to control areas, it has been possible to have a very clear idea of the success of the project, which included considerable increases in ownership of sanitary facilities and a much higher quality of facility. As a result, the approach is being implemented in neighbouring communities. Research activities like this can help to change things on the ground.
Other successful examples include locally adapted technologies such as solar disinfection, developed in the Mediterranean, and tests to measure the quality of household water. This is just one example of research funded under the EU's Research Framework Programme and the EU Water Initiative that is making a real difference to people's lives in the developing world. For details on more such projects across the world, visit:
Dewar flask [for Sir James Dewar], container after which the common thermos bottle is patterned. It consists of two flasks, one placed inside the other, with a vacuum between. The vacuum prevents the conduction of heat from one flask to the other. For greater efficiency the flasks are silvered to reflect heat. The substance to be kept hot or cold, e.g., liquid air, is contained in the inner flask. See low-temperature physics.
Paraguay, river, c.1,300 mi (2,090 km) long, rising in the highlands of central Mato Grosso state, Brazil. Flowing generally southward, it forms the border between Brazil and Paraguay in the pantanal, then crosses the center of Paraguay, dividing the Gran Chaco from E Paraguay. Two large tributaries, the Pilcomayo and the Bermejo rivers, join it from the west. Below the Pilcomayo, the Paraguay River flows SW to the Paraná River, forming part of the Paraguay-Argentina border. Navigable for most of its course, the Paraguay River is one of the major arteries of the Río de la Plata system, with its chief port at Asunción, Paraguay.
Organization is a huge component of teaching, especially in the lower grade classrooms. Children need a specific place for everything to go. It is also beneficial for teachers to have a special location for all paperwork. To aid in organization, my school’s primary grades use B.E.A.R. books.
B.E.A.R. (Bring Everything, Always Ready) books are used in many schools. Some classes in my school do not use the acronym B.E.A.R. but have created their own. In the past, we made kindergarten B.E.A.R. books by using three ring binders and adding pocket folders in the middle. This year, we were lucky. We found a set of plastic folders that were already spiral bound. We were also lucky that these books only cost $1.00 each. I think the teachers in my school bought from every location of the store that carried these folders. We even had some brought in from the warehouse.
After buying the books, we begin to place labels throughout the book. The front has a label with the child’s name. The inside folders each have one of the following labels: money, homework, teacher notes, school notes, completed work. My labels also have a picture so that the students can easily identify the correct folder.
The folders are our main communication between school and home. The teachers do not check backpacks for notes or money. The folders are checked daily and sent home each night.
I think that they are a wonderful way for both parents and teachers to stay connected with the child’s education. If there are any primary classrooms that are not using an organizational tool such as this one, I strongly suggest that they develop one. Parents tend to be more involved when they do not have to dig through the child’s backpack for homework papers and notes. Since using the folder, we have had many more parents respond to notes. The B.E.A.R. books also place more responsibility and independence on students because they can place the papers in the correct folders themselves. | <urn:uuid:7f795c01-bd54-4eb9-b907-31c0debfebf4> | CC-MAIN-2013-20 | http://www.families.com/blog/bear-books | 2013-06-19T06:29:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.978224 | 431 |
Summer advice for owners of cats, dogs, rabbits
[Images: cat fleas; fly strike on a rabbit; Encephalitozoon cuniculi spores]
Fly strike is a particularly unpleasant problem that generally occurs in spring and summer, although with our changing climate, cases were being reported in February. It is typically a problem of rabbits, but can occur in any species. The problem arises when flies, typically blowflies, lay their eggs on damaged or wet skin; when the eggs hatch, the maggots rapidly chew their way through the exposed flesh. The primary site for this in rabbits is around the tail base. Rabbits need to be checked regularly for fly strike, and if your rabbit has diarrhoea then they must be checked on a daily basis. If you have to wash your rabbit for any reason then please dry them thoroughly afterwards. If maggots are found then veterinary treatment needs to be sought rapidly, as rapid maggot removal, wound cleaning, correction of fluid loss and antibiotic treatments are required. There is a product that can be used to prevent flies laying their eggs on the rabbit, and it can be useful in animals particularly at risk.
Encephalitozoon cuniculi is a parasite of rabbits that can cause a variety of symptoms including head tilt, hind limb weakness, kidney disease, blindness, seizures and death. Domestic rabbits can be infected from contact with wild rabbits, either directly or via forage contaminated with urine. It is now possible to routinely treat rabbits with Panacur Rabbit; this is a pleasant-tasting paste that is given up to 4 times a year. As well as preventing E. cuniculi, Panacur Rabbit will also control roundworms. In addition to regularly using Panacur Rabbit we advise the following: avoid collecting fresh vegetation from areas where wild rabbits or rodents are present; place hutches to give minimum exposure to wild rabbits and rodents; regularly wash out and disinfect water and food bowls; and take care if you have a number of stacked hutches, as this can lead to urine contamination of those below.
Heat stroke still occurs all too frequently. Prevention is simple enough: even on cloudy summer days, temperatures within vehicles can all too quickly rise and cause serious harm to animals inside. Treatment is aimed at bringing the animal's temperature back down to normal by use of wet towels or cold-water baths, although more invasive treatment is often required to correct the dehydration that has also been caused.
Fleas are an all-year problem, although we do see more of them in the summer months. There are a number of quality spot-on products available from the practice for flea control. If an animal is infested with fleas then we need to combine the animal treatment with an environmental treatment that kills the flea larvae present in the house. These products have the added benefit of reducing house-dust mite numbers. Certain dog flea products contain the drug permethrin, typically some of the products sold in supermarkets and pet shops. This is very safe to use in dogs; however, it is highly toxic to cats, and cases of poisoning in cats have occurred following incorrect use of these products.
Ticks: we are again seeing a lot of dogs with large numbers of ticks on them, primarily following walking on Dartmoor. The increase in numbers is due to changes in stocking policy on the moor and changes in weather patterns. Generally in dogs, ticks cause a localised irritation; they can be safely removed by use of a tick hook, a device that removes the tick without squeezing its contents into the dog. Some of the flea control products also protect against ticks, although generally for a shorter period of time. Ticks can cause two illnesses in dogs, Lyme disease and louping ill; fortunately neither is very common at present.
Worm control: we recommend treatment every three months to control roundworms and tapeworms in dogs and cats. We have just started a reminder scheme to help cat and dog owners purchasing Milbemax wormers from us. Owners can opt into the scheme when a purchase is made, and three months later a reminder will be sent to you via e-mail or text, whichever is preferred.
Tarzan of the Apes
by Edgar Rice Burroughs
This is the first book in the famous Tarzan series and has led to a great number of sequels (over 25), films and comics. The title character, Tarzan, is the child of aristocratic parents, orphaned in the jungle and raised by apes. Tarzan thrives in the jungle, in large part due to his intelligence. The author Edgar Rice Burroughs uses the superiority of aristocratic blood as a theme throughout his novels, and Tarzan is naturally a beneficiary of this prejudice. The reader should remember that racism and white superiority were still respectable beliefs at the time this novel was written, so some opinions common then are distasteful now.
After teaching himself to read, Tarzan learns of his origins and has various struggles against native Africans and the jealous alpha male of his ape family. His life is changed forever, however, after the arrival of the first white people Tarzan has ever seen in the jungle, including the beautiful Jane.
Tarzan's exploits are extreme and fantastic, yet the skill of Edgar Rice Burroughs makes them believable in the context of this fabulous story. There is a reason Tarzan made such an impact on popular culture and readers of this book will discover it for themselves.
APA: Burroughs, Edgar Rice. (2013). Tarzan of the Apes. Hong Kong: Forgotten Books. (Original work published 1914)
MLA: Burroughs, Edgar Rice. Tarzan of the Apes. 1914. Reprint. Hong Kong: Forgotten Books, 2013. Print.
Central Equatorial Pacific Experiment (CEPEX)
CEPEX employed surface, airborne, and space-borne platforms. The airborne platforms were NASA's ER-2, Aeromet's Learjet, the NOAA P-3, and the NCAR Electra. The surface platforms include the R/V Vickers, TOGA moorings (buoys), and upper-air stations on islands. Space-borne platforms are Sun-synchronous polar orbiters (NOAA-10 and DMSP) and geosynchronous monitoring satellites (GMS). Instruments on these platforms measure radiation fluxes, cirrus radiative and microphysical properties, vertical water-vapor distribution, evaporation from the sea surface, and precipitation.
Archives of the Global Climate Change Digest: A Guide to Information on Greenhouse Gases and Ozone Depletion (published July 1988 through June 1999)
FROM VOLUME 9, NUMBER 7, JULY 1996
IMPACTS: IMPACTS ON AGRICULTURE
Two items in Clim. Change, 33(1), 1-6, May 1996:

"The Impact of Climate Change on Agriculture," S. Helms (Triangle Econ. Res., 1000 Park Forty Plaza, #200, Durham NC 27713), R. Mendelsohn, J. Neumann, 1-6. Early studies of climate change impacts predicted large losses to U.S. agriculture. This essay discusses four factors that have caused more recent estimates to be more optimistic: (1) milder climate scenarios; (2) adaptation by farmers; (3) increased productivity from carbon fertilization; and (4) warmth-loving crops were omitted in earlier studies. Key remaining questions include how tropical and subtropical farming will be affected, and how effects will be distributed regionally.

"Agricultural Adaptation to Climatic Variation," B. Smit (Dept. Geog., Univ. Guelph, Guelph ON N1G 2W1, Can.), D. McNabb, J. Smithers, 7-29. Explores assumptions underlying impact assessments of climate change for agriculture, both conceptually (with a model of agricultural adaptation to climate), and empirically, based on a survey of 120 farm operators in southwestern Ontario. Many farmers were affected by variable climatic conditions over a six-year period, but only 20 percent were sufficiently influenced to respond with conscious changes in their operations.
Four related items in Clim. Change, 32(3), Mar. 1996:

"High-Frequency Climatic Variability and Crop Yields," D.S. Wilks (Dept. Soil, Crop & Atmos. Sci., Bradfield Hall, Cornell Univ., Ithaca NY 14853), S.J. Riha, 231-235. Introducing the following three papers, this editorial stresses that short-term climate variability, not just changes in climatic means, is an important factor in the climate sensitivity of natural and managed systems, and that much more work is needed to clarify climate impacts.

"Use of Conditional Stochastic Models to Generate Climate Change Scenarios," R.W. Katz (ESIG, NCAR, POB 3000, Boulder CO 80307), 237-255.

"The Effect of Changes in Daily and Interannual Climatic Variability on [the] CERES-Wheat [Model]: A Sensitivity Study," L.O. Mearns (NCAR, POB 3000, Boulder CO 80307), C. Rosenzweig, R. Goldberg, 257-292.

"Impact of Temperature and Precipitation Variability on Crop Model Predictions," S.J. Riha (Dept. Soil, Crop & Atmos. Sci., Bradfield Hall, Cornell Univ., Ithaca NY 14853), 293-311.
Climate Warming for Agricultural Production in Eastern China," W. Futang (Chinese Acad. Meteor. Sci., Baishiqiao Rd. #46, Beijing 100081, China), World Resour. Rev., 8(1), 61-68, Mar. 1996. Potential impacts on production of rice, winter wheat and corn are estimated based on composite regional GCM scenarios combined with a weather-yield model and a cropping system model. Warming would affect corn most, wheat next, and rice least; there would be a significant northward shift in cropping patterns. However, it is difficult to determine whether the overall impact of climate warming would be good or bad for farming in China, due to uncertainties in the GCM scenarios and the complex impact of climate change on agriculture.
Global and Regional Analyses of the Effects of Climate Change: A Case Study of Land Use in England and Wales," M.L. Parry (Dept. Geog., Univ. College, 26 Bedford Way, London WC1H 0AP, UK), J.E. Hossell et al., Clim. Change, 32(2), 185-198, Feb. 1996. Uses a case study to illustrate an integrated assessment of the global and regional effects of climate change on land use. Data on world food prices provide input to a land-use model, which integrates the effect of price changes for various crops with climate-related changes in yield through the year 2060.
Assessment of Impacts of Climate Change on Boro Rice Yield in Bangladesh," R. Mahmood (Dept. Geog., Univ. Oklahoma, Norman OK 73019), J.T. Hayes, Phys. Geog., 16(6), 463-486, Nov.-Dec. 1995. Rice, the main food crop in Bangladesh, is sensitive to climatic variations. A climatic crop productivity model is applied to the boro rice growing season (Dec.-May) for various combinations of altered thermal and solar climates. A 1° C rise in mean growing season air temperature reduces boro rice yield by 4.6%; and each 10% increase in incident solar radiation causes a 6.5% increase of yield.
Pneumonia is a breathing (respiratory) condition in which there is an infection of the lung.
This article covers pneumonia in people who have not recently been in the hospital or another health care facility (nursing home or rehab facility). This type of pneumonia is called community-acquired pneumonia, or CAP.
Bronchopneumonia; Community-acquired pneumonia; CAP
Pneumonia is a common illness that affects millions of people each year in the United States. Germs called bacteria, viruses, and fungi may cause pneumonia.
Ways you can get pneumonia include:
Pneumonia caused by bacteria tends to be the most serious kind. In adults, bacteria are the most common cause of pneumonia.
Many other bacteria can also cause pneumonia.
Viruses are also a common cause of pneumonia, especially in infants and young children.
Risk factors that increase your chances of getting pneumonia include:
The most common symptoms of pneumonia are:
Other symptoms include:
If you have pneumonia, you may be working hard to breathe, or breathing fast.
The health care provider will hear crackles or abnormal breath sounds when listening to your chest with a stethoscope. Other abnormal breathing sounds may also be heard through the stethoscope or by tapping on your chest wall (percussion).
The health care provider will likely order a chest x-ray if pneumonia is suspected.
You may need other tests, including:
Less often patients may need:
Your doctor must first decide whether you need to be in the hospital. If you are treated in the hospital, you will receive:
It is very important that you are started on antibiotics very soon after you are admitted (unless you have viral pneumonia).
You are more likely to be admitted to the hospital if you:
However, many people can be treated at home. Your doctor may tell you to take antibiotics. Antibiotics help some people with pneumonia get better.
Breathing warm, moist (wet) air helps loosen the sticky mucus that may make you feel like you are choking. These things may help:
Drink plenty of liquids (as long as your health care provider says it is okay):
Get plenty of rest when you go home. If you have trouble sleeping at night, take naps during the day.
With treatment, most patients will improve within 2 weeks. Elderly or very sick patients may need longer treatment.
Those who may be more likely to have complicated pneumonia include:
In rare cases, more severe problems may develop, including:
Your doctor may want to make sure your chest x-ray becomes normal again after you are treated. However, it may take many weeks for your x-ray to clear up.
Call your doctor if you have:
Wash your hands often, especially after:
Also wash your hands before eating or preparing foods.
Don't smoke. Tobacco damages your lung's ability to ward off infection.
Vaccines may help prevent some types of pneumonia. They are even more important for the elderly and people with diabetes, asthma, emphysema, HIV, cancer, or other long-term conditions:
If you have cancer or HIV, talk to your doctor about ways to prevent pneumonia and other infections.
In POSIX systems, a user can specify the time zone by means of the TZ environment variable. For information about how to set environment variables, see Environment Variables. The functions for accessing the time zone are declared in time.h.
You should not normally need to set TZ. If the system is configured properly, the default time zone will be correct. You might need to set TZ if you are using a computer over a network from a different time zone, and would like times reported to you in the time zone local to you, rather than what is local to the computer.
In POSIX.1 systems the value of the TZ variable can be in one of three formats. With the GNU C Library, the most common format is the last one, which can specify a selection from a large database of time zone information for many regions of the world. The first two formats are used to describe the time zone information directly, which is both more cumbersome and less precise. But the POSIX.1 standard only specifies the details of the first two formats, so it is good to be familiar with them in case you come across a POSIX.1 system that doesn't support a time zone information database.
The first format is used when there is no Daylight Saving Time (or summer time) in the local time zone:

std offset
The std string specifies the name of the time zone. It must be three or more characters long and must not contain a leading colon, embedded digits, commas, nor plus and minus signs. There is no space character separating the time zone name from the offset, so these restrictions are necessary to parse the specification correctly.
The offset specifies the time value you must add to the local time to get a Coordinated Universal Time value. It has syntax like hh[:mm[:ss]]. This is positive if the local time zone is west of the Prime Meridian and negative if it is east. The hour must be between 0 and 23, and the minute and seconds between 0 and 59.
For example, here is how we would specify Eastern Standard Time, but without any Daylight Saving Time alternative:

EST+5
The second format is used when there is Daylight Saving Time:
std offset dst [offset],start[/time],end[/time]
The initial std and offset specify the standard time zone, as described above. The dst string and offset specify the name and offset for the corresponding Daylight Saving Time zone; if the offset is omitted, it defaults to one hour ahead of standard time.
The remainder of the specification describes when Daylight Saving Time is in effect. The start field is when Daylight Saving Time goes into effect and the end field is when the change is made back to standard time. The following formats are recognized for these fields:
Jn: This specifies the Julian day, with n between 1 and 365. February 29 is never counted, even in leap years.

n: This specifies the Julian day, with n between 0 and 365. February 29 is counted in leap years.

Mm.w.d: This specifies day d of week w of month m. The day d must be between 0 (Sunday) and 6. The week w must be between 1 and 5; week 1 is the first week in which day d occurs, and week 5 specifies the last d day in the month. The month m should be between 1 and 12.
The time fields specify when, in the local time currently in effect, the change to the other time occurs. If omitted, the default is 02:00:00.
For example, here is how you would specify the Eastern time zone in the United States, including the appropriate Daylight Saving Time and its dates of applicability. The normal offset from UTC is 5 hours; since this is west of the prime meridian, the sign is positive. Summer time begins on the first Sunday in April at 2:00am, and ends on the last Sunday in October at 2:00am.

EST+5EDT,M4.1.0/2,M10.5.0/2
The schedule of Daylight Saving Time in any particular jurisdiction has changed over the years. To be strictly correct, the conversion of dates and times in the past should be based on the schedule that was in effect then. However, this format has no facilities to let you specify how the schedule has changed from year to year. The most you can do is specify one particular schedule—usually the present day schedule—and this is used to convert any date, no matter when. For precise time zone specifications, it is best to use the time zone information database (see below).
The third format looks like this:

:characters
Each operating system interprets this format differently; in the GNU C Library, characters is the name of a file which describes the time zone.
If the TZ environment variable does not have a value, the operation chooses a time zone by default. In the GNU C Library, the default time zone is like the specification ‘TZ=:/etc/localtime’ (or ‘TZ=:/usr/local/etc/localtime’, depending on how the GNU C Library was configured; see Installation). Other C libraries use their own rule for choosing the default time zone, so there is little we can say about them.
If characters begins with a slash, it is an absolute file name; otherwise the library looks for the file /share/lib/zoneinfo/characters. The zoneinfo directory contains data files describing local time zones in many different parts of the world. The names represent major cities, with subdirectories for geographical areas; for example, America/New_York, Europe/London, Asia/Hong_Kong. These data files are installed by the system administrator, who also sets /etc/localtime to point to the data file for the local time zone. The GNU C Library comes with a large database of time zone information for most regions of the world, which is maintained by a community of volunteers and put in the public domain. | <urn:uuid:6a263cce-6395-4741-9f11-3bac604f4338> | CC-MAIN-2013-20 | http://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html | 2013-06-19T06:42:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.87883 | 1,147 |
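As an illustration that is not part of the manual itself, the short C program below exercises all three TZ formats described above. It assumes a POSIX system where setenv and tzset are available and where the America/New_York entry of the time zone database is installed.

    /* Sketch only: print the current time as seen under different TZ settings. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static void show(const char *tz)
    {
        char buf[64];
        time_t now = time(NULL);

        setenv("TZ", tz, 1);    /* override the time zone for this process */
        tzset();                /* make the C library re-read TZ */
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", localtime(&now));
        printf("TZ=%-30s -> %s\n", tz, buf);
    }

    int main(void)
    {
        show("UTC0");                         /* first format: no Daylight Saving Time */
        show("EST+5EDT,M4.1.0/2,M10.5.0/2");  /* second format: DST rule given explicitly */
        show("America/New_York");             /* third format: time zone database entry */
        return 0;
    }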
Reviewed by James Seymour (University of Texas at Brownsville/Texas Southmost College)
Published on H-Women (December, 1998)
To honor the sesquicentennial anniversary of the Seneca Falls Convention, Ellen Carol DuBois collected twelve previously published pieces and two new additions in Woman Suffrage and Women's Rights. DuBois examines the evolution of the woman suffrage movement and places it within a larger women's rights struggle. The result deftly charts the professional and ideological development of the author, one of the foremothers of women's history.
Written especially for the book, Chapter One ("The Last Suffragist: An Intellectual and Political Autobiography") explains the circumstances behind the writing of her articles, including both professional and political matters, and cites criticism of her work by other historians. The reader learns about the modern feminist struggles in which she actively participated and sees how political changes influenced her writing. The outcome demonstrates her goal of creating history "that was both politically engaged and committed to full disclosure and democratic debate" (p. 19). DuBois has strong feminist and socialist views that she weaves through her writing, usually strengthening the material.
Chapter Two ("The Radicalism of the Woman Suffrage Movement: Notes Towards the Reconstruction of Nineteenth Century Feminism"), Chapter Three ("Politics and Culture in Women's History"), and Chapter Four ("Women's Rights and Abolition: The Nature of the Connection") evaluate the pre-Civil War suffrage debate. Rather than including all of the ways women used to challenge patriarchal society, DuBois focuses on their fight to obtain the ballot. She contends that "precisely by bypassing the private sphere and focusing on the male monopoly of the public sphere, pioneering suffragists sent shock waves through the whole structures that relegated women to the family" (p. 3). Radical in the period, woman suffrage challenged women's subordinate position within the entire culture. William Lloyd Garrison's brand of abolition played a special role in shaping woman suffrage, as he taught his female adherents how to shape their discontent into a social movement. In particular, Garrison regarded his female followers as human beings first, and women second. This idea underlay many nineteenth century arguments in favor of suffrage, such as equitable taxation and equal treatment under the law, which stem from women's status as humans, rather than separate considerations because of their sex.
Chapter Five ("The Nineteenth Century Woman Suffrage Movement and the Analysis of Women's Oppression"), which originally appeared in Capitalist Patriarchy, provides insights into DuBois' attempt to reconcile the hyphen in socialist-feminist, using, as DuBois admits, overly jargon-laden terminology. She begins to address working class women in the suffrage struggle with this piece, a theme she develops further in Chapter Ten ("Working Women, Class Relations, and Suffrage Militance: Harriot Stanton Blatch and the New York Woman Suffrage Movement, 1894-1909"). Here, she concentrates on the interplay of different classes within the suffrage struggle, using Harriot Stanton Blatch, the daughter of Elizabeth Cady Stanton, as her foil. Blatch bridged the chasm between the classes, DuBois argues, coming from an elite background herself while working alongside lower class women's groups such as the Equality League of Self Supporting Women. Blatch further introduced the tactics of British suffragettes, more radical than their American counterparts, to New York, so that American women were "taking suffrage out of the parlors and into the streets" (p. 199). DuBois presents an important reinterpretation of an older view that only middle class women supported suffrage. More evidence including non-New York women's activism would strengthen her argument.
Chapter Six ("Outgrowing the Compact of the Fathers: Equal Rights, Woman Suffrage and the United States") and Chapter Seven ("'Taking the Law Into Our Own Hands': Bradwell, Minor and Suffrage Militance in the 1870s") chronicle the influence of the Fourteenth and Fifteenth Amendments on woman suffrage. Rather than characterizing these amendments as a disaster for the movement, DuBois reveals women benefited from them, attempting, albeit unsuccessfully, to win the vote under the citizenship clause of the Fourteenth Amendment.
Chapter Eight ("Seeking Ecstasy on the Battlefield: Danger and Pleasure in Nineteenth Century Feminist Sexual Thought"), co-written with Linda Gordon, examines social purity in the feminist movement. In the nineteenth century, conservative female reformers denounced prostitution and attempted to impose chastity on both men and women. They claimed women possessed a "pure" view of sexual relations, which men should follow. "Pro-sex" feminists, a decided minority in this period, challenged the idea that sexual desire represented a purely masculine trait and sought to embrace their own sexual desires. The authors trace modern feminists who denounce pornography to the earlier, conservative wing of feminism. This chapter clearly reflects the activist stance DuBois takes towards history, placing modern issues within a broader historical context. It, coupled with the next chapter, evidences DuBois' mingling of advocacy with scholarship.
In Chapter Nine ("The Limitations of Sisterhood: Elizabeth Cady Stanton and Division in the American Suffrage Movement, 1875-1902"), DuBois expands the dichotomy of conservative and radical strains within nineteenth century feminism, centering on the iconoclastic Elizabeth Cady Stanton. Stanton criticized Christianity for oppressing women, and she wrote the Woman's Bible to provide a feminist analysis of Scripture. The tepid response of the National American Woman Suffrage Association to this project indicated the growing conservative nature of the organization and its focus on the ballot. DuBois clearly sides with Stanton and the radicals in the sex debate of these two chapters, explaining "I find myself a good deal closer to Stanton's ideas about women's liberation, her focus on independence and egalitarianism, her emphasis on freedom rather than protection, than I feel to her social purity opponents" (p. 171). DuBois' emphasis on Stanton understates the importance of the social purity feminists to the women's struggle. These women, regarded as conservative today, generated more followers than the radicals and helped enact legislation considered feminist in the period.
Chapter Eleven ("Making Women's History: Historian Activists of Women's Rights, 1880-1940") and Chapter Twelve ("Eleanor Flexner and the History of American Feminism") provide a historiography of the writing of woman suffrage. The former piece charts the dissolution of the feminist coalition in the 1920s and 1930s, as different factions competed for glory and respect through autobiographies and biographies. A secondary, yet important, theme in the chapter highlights the need to preserve women's papers, so that scholars can discover more about pioneering women. In the latter article, DuBois finds her historical foremother in Eleanor Flexner, who, during the conservative 1950s, wrote a seminal and, as DuBois demonstrates, quite radical account of woman suffrage. While DuBois considers herself the "fictive daughter" of Elizabeth Cady Stanton (p. 16), she easily could trace her lineage back to Flexner as well.
The most broadly conceived of the pieces, "Woman Suffrage and the Left: An International Socialist Feminist Perspective," returns to an investigation of the hyphen around which socialist-feminist revolves, trying to blend the two criticisms of the dominant culture. DuBois investigates the interplay between socialism and feminism primarily in Europe and the United States. For most of the nineteenth century, the two ideologies had an imperfect fit, as socialists denounced "bourgeois feminism" and espoused "anti-collaborationism" with middle class women (pp. 267, 263). Under the Second International, the two movements coalesced into a common fight to expand voting rights for men without property and women of all classes. The First World War shattered this tenuous hold, and the Third International lacked a feminist component. Rather than demonstrating how the two political movements are linked, this piece demonstrates mostly the antagonisms between them.
The final chapter ("A Vindication of Women's Rights"), written for the book, traces the meaning and development of "women's rights" and "feminism" in the United States, starting with Mary Wollstonecraft in the eighteenth century. DuBois contends "women's rights" has the more radical connotation. She compares the modern abortion rights debate to the discussion about coverture laws in the nineteenth century. In the 1800s, married women could not legally own property in most states. Similarly, women's bodies today are regarded as the property of the men and, presumably, women who seek to criminalize abortion. DuBois contends abortion is "a potent symbol for women's revolt against marital dependence and female subordination" (p. 294). She concludes with a call to arms to renew the battle for women's rights in the United States.
Ellen Carol DuBois admits that "A reasonable critique of my work is that often, when writing of 'suffrage' or 'women's rights,' I am really referring to [Elizabeth Cady] Stanton and the women who shared her ideas" (p. 16). She liberally peppers her articles about nineteenth century feminism with quotations from Stanton, which gives the work a decided Northern view. Although she writes that her move from SUNY Buffalo to U.C.L.A. in 1988 gave her a more multicultural, even global, perspective about feminism and women's rights, DuBois still ignores the South in much of her approach. Except for the expatriate Grimke sisters, Southern women provide an uneasy fit to many of DuBois' views, such as the reliance on socialism to invigorate woman suffrage in the early twentieth century, and the working class components of the movement. Southern women also tended to be more conservative in their approach to challenging male hegemony than DuBois' examples, especially regarding suffragette tactics.
In the course of her work, DuBois all but dismisses the final years of the suffrage battle, maintaining "that too close attention to the drama of suffragism's last years, both between activists and the Wilson administration and between dissenting camps of suffragists, [which] leads to an exclusive focus on tactics, to questions that I did not think were fundamental, and to a narrative line climaxing with the Nineteenth Amendment and concluding in 1920s" (p. 18). Downplaying the struggle for the Nineteenth Amendment in the woman suffrage story reminds this reviewer of leaving a play before the final act. Perhaps some of DuBois' dismissal of the War years stems from her emphasis on the radical nature of woman suffrage. Conservative and moderate women, from the Daughters of the Confederacy to the General Federation of Women's Clubs, supported suffrage in its final battles. By 1920, woman suffrage had moved into the mainstream and out of the fringe, losing its radical nature in the process.
In addition, while suffragists themselves continued to engage in political and social activities after 1920, the achievement of their goal affected the women's movement, especially those people outside the core group of leaders whom DuBois cites in her work, such as Elizabeth Cady Stanton and Harriot Stanton Blatch. Women's activism lacked the cohesion of the woman suffrage struggle once the Nineteenth Amendment had been ratified. After 1920, the vanguard of the women's revolution continued its struggle, but many of the followers laid down their arms and went to work within the dominant political culture.
Such matters aside, Ellen Carol DuBois has assembled a very useful array of her work in Woman Suffrage and Women's Rights, providing a highly readable and entertaining account of these events. For scholars already familiar with DuBois' work, the book will provide a convenient refresher to her points. To younger scholars, the book demonstrates how to combine historical scholarship with political advocacy without damaging either.
James Seymour. Review of DuBois, Ellen Carol, Woman Suffrage and Women's Rights.
H-Women, H-Net Reviews.
Copyright © 1998 by H-Net, all rights reserved. H-Net permits the redistribution and reprinting of this work for nonprofit, educational purposes, with full and accurate attribution to the author, web location, date of publication, originating list, and H-Net: Humanities & Social Sciences Online. For any other proposed use, contact the Reviews editorial staff at email@example.com. | <urn:uuid:c07a599a-46b6-44a9-ba5b-38d0235a54a6> | CC-MAIN-2013-20 | http://www.h-net.org/reviews/showrev.php?id=2569 | 2013-06-19T06:16:19Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.938489 | 2,672 |
Protect yourself from West Nile virus
Her Majesty the Queen in Right of Canada
Cat. No.: H34-127/2005
West Nile virus: How to reduce your risk
Avoid mosquito bites - your first line of defence
- Use mosquito repellent that contains DEET or other approved ingredients.
- Wear light-coloured, loose-fitting clothing.
- Wear long-sleeved shirts, pants and a hat if you are going camping, hunting, or into wooded or swampy areas.
- Make sure door and window screens are in good repair.
- When outdoors, place mosquito netting over strollers and playpens.
- Take extra precautions when mosquitoes are most active, in the early morning and evening.
Clean up sources of standing water
- Mosquitoes can breed in even a small amount of standing water.
- Get rid of standing water around your house. Empty water from old tires, flower pots, rain barrel lids, toys and other outdoor objects.
- Store larger outdoor items like canoes, wheelbarrows and wading pools upside down.
- Replace water in outdoor pet dishes and other containers twice a week.
- Encourage your neighbours to clean up too!
West Nile virus is spread through the bite of an infected mosquito. Anyone can get sick from West Nile virus but the risk of serious illness increases with age. Symptoms can include: very bad headache, bad fever, sore neck, throwing up, muscle weakness and blurred vision.
For more information on West Nile virus,
Visit your local Nursing Station or Community Health Centre.
Call the West Nile virus information line at 1-800-816-7292 (toll free). | <urn:uuid:cf3dc441-dc91-4bd6-905b-9018977f365d> | CC-MAIN-2013-20 | http://www.hc-sc.gc.ca/fniah-spnia/pubs/diseases-maladies/_wnv-vno/2005_prot/index-eng.php | 2013-06-19T06:23:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.849862 | 352 |
Getting diagnosed with type 2 diabetes can come as a real shock—it’s scary to hear that you have a lifelong health problem to deal with. It’s important to remember that a diagnosis isn’t necessarily a death sentence. Type 2 diabetes is a manageable condition. In fact, a diagnosis is a good thing: it means you now know what you can do to get healthier. But type 2 diabetes does require change and a commitment to living a healthier lifestyle.
Out-of-control type 2 diabetes can cause severe complications inside your body. With type 2 diabetes, your body’s cells cannot get the fuel they need to function, and without enough fuel, your body will start breaking down other tissue such as fat and muscle. You will become very tired and very dehydrated, resulting in blurred vision, increased urination, and confusion. As your body continues to break down, it becomes very susceptible to infections, and any existing wounds won’t heal. If untreated at this point, it is even possible that you may develop severe, life-threatening illnesses.
For example, diabetics are two to four times more likely to develop cardiovascular disease. Sugar can build up in your distant capillaries, leading to nerve damage. High sugar levels can also damage your eyes, leading to blindness. With diabetes being the seventh leading cause of death in the United States, this disease is nothing to ignore.
How Doctors Diagnose Type 2 Diabetes
There are a few methods that your doctor can use to verify your type 2 diabetes diagnosis.
First, there is the glycated hemoglobin (A1C) test. This blood test tells your doctor what your average blood sugar level is for the past two to three months. It measures the percentage of blood sugar attached to hemoglobin, which is the oxygen-carrying protein in your red blood cells. Higher blood sugar levels indicate more hemoglobin with sugar attached. An A1C level of 6.5 percent or higher on two different tests indicates that you are diabetic. A result between 5.7-6.4 percent indicates prediabetes, and normal levels are those below 5.7 percent.
Random (Non-Fasting) Blood Glucose Test
The A1C test is the most common, but if you are pregnant or have a hemoglobin variant, your doctor may opt to use a different test. A random blood sugar test is a test in which a sample of your blood is taken at some random time. Your blood sugar values are expressed in millimoles per liter (mmol/L) or milligrams per deciliter (mg/dL). No matter when you last ate, a random blood sugar test that indicates a level of 11.1 mmol/L (200 mg/dL) or above suggests that you are diabetic, especially if you already have some symptoms of diabetes. A blood sugar level between 7.8 mmol/L (140 mg/dL) and 11.0 mmol/L (199 mg/dL) indicates prediabetes, and a normal level is one that is less than 7.8 mmol/L (140 mg/dL).
Fasting Glucose Test
Your doctor may also opt for a fasting blood sugar test. In this case, a sample of your blood will be taken after you have fasted overnight. A normal fasting blood sugar level is less than 5.6 mmol/L (100 mg/dL). A fasting blood sugar level from 5.6 to 6.9 mmol/L (100 to 125 mg/dL) indicates prediabetes. If your test shows a level of 7 mmol/L (126 mg/dL) or above on two different tests, you have type 2 diabetes mellitus.
Oral Glucose Tolerance Test
Then there is the oral glucose tolerance test, which also requires you to fast overnight and take a fasting blood sugar test. The doctor will then have you drink a sugary liquid, and then he or she will test your blood sugar levels periodically over the course of several hours. A normal blood sugar level is less than 7.8 mmol/L (140 mg/dL). If after two hours you get a reading greater than 11.1 mmol/L (200 mg/dL), then you are diabetic. A reading between 7.8 mmol/L and 11.0 mmol/L (140 and 199 mg/dL) indicates that you have prediabetes.
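To make the cut-off values above easier to compare at a glance, here is a small illustrative sketch in C (not a medical tool) that applies the thresholds exactly as this article states them. The function names and the choice of mg/dL units are ours, and treating a reading of exactly 200 mg/dL on the oral test as diabetic is an assumption, since the text only says "greater than" 200.

    /* Illustrative only: classify readings using the thresholds given above. */
    #include <stdio.h>

    static const char *classify_a1c(double percent)
    {
        if (percent >= 6.5) return "diabetes (on two tests)";
        if (percent >= 5.7) return "prediabetes";
        return "normal";
    }

    static const char *classify_fasting_mgdl(double mgdl)
    {
        if (mgdl >= 126.0) return "diabetes (on two tests)";
        if (mgdl >= 100.0) return "prediabetes";
        return "normal";
    }

    static const char *classify_oral_2h_mgdl(double mgdl)
    {
        if (mgdl >= 200.0) return "diabetes";     /* text: greater than 200 mg/dL */
        if (mgdl >= 140.0) return "prediabetes";  /* 140-199 mg/dL */
        return "normal";
    }

    int main(void)
    {
        printf("A1C 6.9%%           -> %s\n", classify_a1c(6.9));
        printf("Fasting 112 mg/dL  -> %s\n", classify_fasting_mgdl(112.0));
        printf("Oral 2h 205 mg/dL  -> %s\n", classify_oral_2h_mgdl(205.0));
        return 0;
    }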
Getting a Second Opinion
By all means, get a second opinion if you have any concerns or doubts about your diagnosis. Remember that you don’t have to share the first doctor’s name with your new doctor. And your health is exactly that—your health. Don’t feel intimidated; you have every right to a second opinion. What you don’t want to do is ignore your suspicions and get treated for diabetes that you don’t have, which can lead to further complications.
The Next Steps
Finally, your diabetes diagnosis means that you must follow through on your monitoring and medical appointments. Getting your blood tested and tracking your symptoms are important steps to ensure long-term health. As a diabetic, you will have an array of doctors from a podiatrist to an endocrinologist checking in on your health. But remember, you are ultimately responsible for your own health. With diabetes, it’s possible to live a long, fulfilling life—but you have to make that happen with a commitment to getting better. | <urn:uuid:295a2598-2696-4862-8660-5dfa0cb24b7c> | CC-MAIN-2013-20 | http://www.healthline.com/health/type-2-diabetes/diagnosis | 2013-06-19T06:37:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.923411 | 1,124 |
Few of us would deny wishing we could live longer, especially if this could be achieved in reasonable physical and mental health.
If on the other hand life were to just be longer, but in a state of decrepitude or senility, it would be a very unattractive prospect.
What has emerged from years of research is the virtual certainty that not only are extra years of life available but that these can be gained along with increased vitality and well-being, by using tactics known as Calorie Restriction with Full Nutrition.
Is There Evidence to Support the Idea of Life Extension?
In one word - YES - abundantly so - at least in animals.
The evidence is now overwhelming that the same calorie restriction strategies which allow animals to live 30% to 40% longer than their mates, will produce human life extension and also better health.1,2
The evidence is so compelling that many scientists and technicians working in this area of research have enthusiastically adopted calorie restriction methods for themselves and their families.
In order to understand the evidence we need to swiftly establish how long we are 'supposed' to live, and what happens as we age.3
How Long Should We Live?
An accepted formula for calculating life expectancy in mammals is based on the multiplication by five of the time it takes the skeleton to mature.
This leads to a 15 year average lifespan for dogs (which take three years for skeletal maturity) and in humans, who stop growing at around 25 years of age, to the stunning thought that we should live in good health to around 125 years of age.
What happens when we age is clear enough, but the questions of 'how?' and 'why?' we age have many different theories to explain them.
Some researchers have concluded that the changes which take place in cells leading to damaged genetic material (DNA) are the main causes of aging. Others suggest that a decline in the efficiency of our organs, accompanied by lowered hormonal and immune function are the underlying cause of aging. There are also those who disagree, arguing that these changes are effects rather than causes of the process.
Many scientists blame a combination of accumulated toxins, free radical activity, declining efficiency of our protective anti-oxidant enzymes along with radiation damage and nutritional deficiencies as the causes, perhaps assisted by built in coding in our genes which 'switches us off' when a certain level of wear and tear becomes apparent.
The signs of aging include increasingly poor protein manufacture by the cells, accompanied by cross-linking of tissues (making it less elastic, hard and wrinkled) plus accumulating levels of age-pigments called lipofuscin (liver-spots) in the cells and tissues which prevents them from normal function. As this happens there is a dramatic drop in efficient defensive enzyme activity as well as lower levels of hormone production, such as the Growth Hormone (GH) from the pituitary (GH is associated with rebuilding and repairing tissues).4
Anti-Aging is Not the Same as Life Extension
If free radical activity, toxic accumulation, lack of nutrients (especially the anti-oxidants) were the cause of aging then a health-enhancing diet would be adequate as an anti-aging strategy.
We could simply ensure adequate anti-oxidant nutrients, including enzymes, in our food or as supplements; avoid exposure to toxic factors such as radiation and pollution; and take adequate exercise and rest.
What Are Sinus Infections?
Sinus infections (also known as sinusitis) come in two forms; acute and chronic.
An acute (short-term) sinus infection is caused by normally harmless bacteria in the upper respiratory tract, and is most often triggered by the common cold. It can also be brought on by bacteria, allergies and fungal infections. It usually lasts for two to four weeks, and those who are affected usually respond well to medical therapy.
With an acute sinus infection, it may be difficult to breathe through the nose. The area around the eyes and face may feel puffy, and you may have throbbing facial pain or a headache as well.
However, persistent sinus infection can lead to serious infections and other complications. Sinusitis that lasts more than 12 weeks, or longer despite treatment, is called chronic sinusitis.
Chronic (long-term) sinusitis is a very common condition in which the cavities around the nasal passages (sinuses) become inflamed and swollen, making it difficult to breathe through the nose. The condition interferes with drainage and causes a build-up of mucus.
Chronic sinusitis most commonly affects adolescents and middle-aged adults; however, it can affect small children as well.
Most symptoms of acute and chronic sinusitis are similar. They include bad breath, coughing, toothache, trouble breathing through the nose, erythema, facial pain, weariness, feverishness, nasal congestion, sickness or soreness in the eyes, cheeks, nose or brow, and pharyngitis.
If a sinus infection is left untreated, complications can occur that may lead to severe medical problems. Here are some of the complications that could happen.
* Fevers and headaches, together with soft tissue swelling over the frontal sinus, may indicate an infection of the frontal bone, better known as Pott's puffy tumor, or osteomyelitis.
* The eye socket may also become infected due to ethmoid sinusitis, and if it swells or becomes droopy, this may result in the inability to see and even permanent blindness. It is worse still when the infection causes a blood clot to form around the front and top of the face. The pupils then become fixed and dilated, and this happens in both eyes.
* A sinus infection can cause mild personality changes or altered consciousness. If this happens, it is possible that the infection can spread to the brain and result in a coma or even death.
Given the dangers of sinus infections, it is advisable to see a physician to seek the proper medical treatment. The length of time that the patient will be under medication depends on the person.
Sinus infections can be treated, but before taking any medicine, see the doctor first to determine what is causing it.
September 09, 1998
Simple Chemical Switches Steer Migrating Neurons
In an embryo's developing brain and spinal cord, the growing ends of nerve cells, called axons, travel great distances to make precise connections with other neurons. Without such accurate connectivity, the nervous system would never wire properly.
An axon's path towards a target neuron is steered by growth cones that are located in the tip of the axon. These growth cones receive cues about the best path to follow from chemical attractants and repellents secreted by cells in the central nervous system.
Until recently, scientists assumed that the type of neuron and the unique chemical receptors found on its surface determined whether a neuron is attracted to or repelled by a given guidance chemical. Now, HHMI investigator Marc Tessier-Lavigne at the University of California, San Francisco, working with Mu-ming Poo and colleagues at the University of California, San Diego (UCSD), has found that a single chemical cue can either attract or repel, depending on the growth cone's internal status. This research is reported in the September 4, 1998, issue of the journal Science.
This work, says Tessier-Lavigne, may hold promise for regenerating nerves damaged by spinal cord injury. It also provides potentially important clues for understanding disorders of neuronal migration, which may be responsible for childhood epilepsy, forms of mental retardation, and possibly dyslexia and schizophrenia.
The researchers found that two key signaling chemicals, cyclic AMP and cyclic GMP, located in the growth cone, act as switches. In general, increasing levels of these cyclic nucleotides promotes attraction, while lowering levels favors repulsion. Thus, both attraction and repulsion share a common chemical switch.
In studying spinal cord neurons cultured from frog embryos, the investigators found evidence of two steering-related circuits within the growth cones, one responding to cyclic AMP, the other to cyclic GMP. For example, a repulsive signal, where growth cones turn away from the source of the chemical cue, became attractive when cyclic GMP was added to the growing neurons. In contrast, a different repulsive signal became attractive when cyclic AMP was added to the growing neurons.
Several years ago, Tessier-Lavigne's group discovered netrin-1, a chemical that attracts growth cones. In an article published in December 1997, Tessier-Lavigne's and Poo's groups showed that netrin-1 can also repel growth cones when cyclic AMP is lowered. Both the attraction and the repulsion were abolished by lowering calcium levels. In the article, the researchers show that calcium dependence emerged as a consistent feature of cues influenced by cyclic AMP.
"It's remarkable that all five cues we've looked at so far fit this simple picture," said Tessier-Lavigne. "We seem to be tapping into some primordial guidance mechanisms. The growth cone is a machine designed to respond to the environment by turning one way or the other, so maybe it's not so surprising there are only a limited number of ways of accessing that machine."
That is not the end of the story, however. Cyclic AMP and cyclic GMP levels are controlled by other factors in the growth cone's ever-changing external environment. This, says Tessier-Lavigne, suggests that "the response of a growth cone to a particular guidance cue may depend critically on other signals received by the neuron. The susceptibility to conversion between attraction and repulsion may enable a growing axon to respond differentially to the same guidance cue at different points along the journey to its final target."
In an accompanying commentary in the same issue, a researcher at the Miescher Institute in Switzerland cites as "most exciting" the possibility that damaged nerves might be regenerated by treating them with drugs that boost the levels of these molecular cues, reversing the action of inhibitory factors.
For example, MAG, a component of a neuron's protective myelin sheath, is known to actively block axonal regeneration. The researchers showed that MAG, which is normally repellent, can become an attractant when cyclic AMP levels are raised.
"Since many different inhibitory factors that prevent regeneration likely act through these chemical cue systems, we have a better chance of reversing all inhibitory actions by this type of manipulation," said Poo. He also noted that experiments in live animals would follow and, assuming those studies are promising, eventually clinical trials in humans. Studies will be needed to test whether the switching mechanism discovered in the cell culture model is actually being used by the
developing organism, added Tessier-Lavigne. | <urn:uuid:719c3da0-4779-4ddb-96f6-9cc3db8a299d> | CC-MAIN-2013-20 | http://www.hhmi.net/news/tessier.html | 2013-06-19T06:23:19Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.929814 | 1,025 |
Why see them?
Built for the Court of Henry VIII, these kitchens were designed to feed at least 600 people twice a day.
You can still see the largest kitchens of Tudor England at Hampton Court today, and they are often still used to prepare Tudor meals.
About the Tudor kitchens
The Hampton Court kitchens are a living monument to 230 years of royal cooking and entertainment.
Between their construction in 1530 and the royal family’s last visit to the palace in 1737, the kitchens were a central part of palace life. For many people today, Hampton Court Palace means Henry VIII, whose abiding reputation remains that of a ‘consumer of food and women’.
But Henry’s vast kitchens in the palace were not for him. They were built to feed the six hundred or so members of the court entitled to eat at the palace twice a day.
This was a vast operation, larger than any modern hotel, and one that had to cope without modern conveniences.
The kitchens had a number of Master Cooks, each with a team of Yeomen and Sergeants working for them. The mouths of the 1,200-odd members of Henry VIII’s court required an endless stream of dinners to be produced in the enormous kitchens of Hampton Court Palace.
Video: Watch the cooks at work »
The annual provision of meat for the Tudor court stood at 1,240 oxen, 8,200 sheep, 2,330 deer, 760 calves, 1,870 pigs and 53 wild boar.
This was all washed down with 600,000 gallons of beer. | <urn:uuid:b21fe7b4-8022-481b-96a6-8902f3dd2f6e> | CC-MAIN-2013-20 | http://www.hrp.org.uk/HamptonCourtPalace/stories/thetudorkitchens | 2013-06-19T06:41:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.968369 | 335 |
Delta del Ebro
After travelling across Castilla-Leon, Pais Vasco, La Rioja, Navarra, Aragon and Catalonia, and through Miranda de Ebro, Haro, Logroño, Tudela, Zaragoza and Tortosa, the River Ebro finally reaches the Mediterranean, where it fans out to form the Delta de Ebro, covering 320 km², one of the most important wetlands in Europe. Although the existence of the Delta is a natural phenomenon, much of its current shape and extent is man-induced, the result of upstream deforestation and overgrazing of the sierras of its huge catchment area, particularly as the river passed through Aragon. In the 4th century C.E., for example, the Roman town of Amposta was a port with a seafront (see map). The process was interrupted in the mid-twentieth century when dam-building (Ebro, Mequinenza and Ribarroja) began to block the supply of sediments. In recent years, the Delta has begun to recede and is thought to be particularly at threat from hypothetical sea-level rises over the next century. The threat is twofold: one, from sea-level rise; and two, probably currently the greater danger, from sinking without the constant top-up of sediments from upstream. Intensive rice farming covers 60% of the delta.
Despite the Ebro having the greatest discharge of any Spanish river, irrigation is responsible for a significant hydrological deficit: an average of 300 m³/s is taken off the river, reducing the average natural flow from 745 m³/s to 430 m³/s. Although the PNH (a plan to deliver much of the river’s water south) has been shelved, the discovery of large deposits of heavy metals and radioactive waste at the Flix reservoir has cast a new shadow on the river’s and delta’s future.
Wildlife of the Ebro Delta
The Ebro Delta is a Natural Park. Parts of the area are also designated as Natural Reserves (Illa de Sapinya 4ha and Punta de la Banya 2,500 ha), and National Hunting Refuges (Laguna de l’Encanyissada and Laguna de la Tancada).
Birds of the Ebro Delta
The huge wetlands of the Delta offer as many as 300 species of birds, some 95 of which are breeders. It is also vital for a wide range of overwintering species, and in addition serves as an essential stopover point for large numbers of migratory birds. The Ebro delta has the world’s largest colony of Audouin’s Gulls, which held a record number of more than 15,000 pairs in 2006.
- Birding in The Ebro Delta. North-East Spain: Ebro Delta Ramsar site, SPA, 95 species of breeding bird… if you are planning a birding trip to Spain then a visit to the Ebro Delta is a must. Highly recommended by iberianature. Run by my friend Stephen.
- Birdwatching in the Ebro Delta The Ebro Delta is one of the most important habitats of the western Mediterranean, and birds make up the most striking aspect of the wildlife. At any given time there will be between 50,000 and 100,000 individuals in residence belonging to three hundred species, 60% of the total number found in Europe.
Where to stay in or near the Ebro Delta
- Hotel L’algadir del Delta. In the delightful village of Poblenou de Delta, the only attractive settlement in the Delta itself, this is also a great base. The hotel has an outdoor swimming pool. Each of the rooms in L’Algadir has been decorated based on the wildlife of the Ebro Delta.
- Parador de Tortosa. A historic setting and magnificent views from this 10th-century castle, carefully restored as a luxury hotel but still retaining much of its original charm. I’ve never stayed here but have had drinks here on several very enjoyable occasions. The views, as they say, are stunning. I’m told by locals that the service is excellent.
- Hostal Agustí. In the culinary hotspot of Sant Carles de la Ràpita, right on the edge of the Delta. Sant Carles itself is not that attractive, but the seafront is very nice; it’s a great base for the Delta, and the food on offer is simply superb.
- Casa Pequeña. Rural lodging in between the Ebro Delta and Els Ports Natural Park. Houses are on small farm. Guided tours for birders. More
- River Ebro Apartments. Mora d’Ebre, Southern Catalonia – fantastic base for walkers, cyclists, birdwatchers, anglers and nature-lovers. More
Guide books in English
The excellent Spain: Travellers’ Nature Guide has a nice well-informed chapter on the wildlife and flora of the Ebro Delta
Birdwatching in Spain This essential guide includes good section on birding in the Ebro Delta
A Birdwatching Guide to Eastern Spain- more detail of the Delta
By car: from the A7 (autopista Barcelona-València) take the L’Aldea and Amposta exit. The southern half of the Delta is much the more interesting.
By rail: You can get off at any of three stations — L’Aldea-Amposta, Camarles or L’Ampolla — on the regional line between Barcelona and València. Local bus services are effing awful – better to hire a car or cycle.
Around the web
- http://www.gencat.net/mediamb/pn/espais/delta-ang.htm (English)
- Good review of the Ebro Delta here from Shelldrake Press
Information below adapted from Ramsar Directory of Wetlands of International Importance (1992)
Importance: The Ebro Delta is a typical example of a fluvial delta. Some 30,000 pairs of waterbirds nest annually, while mid-winter waterbird counts have recorded 180,000 individuals. Breeding species include Ardea purpurea , Egretta garzetta , Bubulcus ibis , Ardeola ralloides , Nycticorax nycticorax , Ixobrychus minutus , Botaurus stellaris , Netta rufina , Himantopus himantopus , Glareola pratincola , Larus audouinii (with 7,000 pairs in 1992, the largest colony in the world), Chlidonias hybridus, Gelochelidon nilotica, Sterna albifrons , Sterna hirundo and S. sandvicensis . In summer, up to 4,000 non-breeding Phoenicopterus ruber roseus occur. Thousands of Egretta garzetta and Bubulcus ibis winter, as well as duck species, such as Anas platyrhynchos (42,800 in 1989), A. strepera (4,119 in 1985), A. clypeata (14,200 in 1991) and Netta rufina (6,100 in 1991), and up to 32,000 shorebirds (e.g. Recurvirostra avosetta and Limosa limosa ).
Wetland Types: The site is a fluvial delta, including a variety of wetlands, amongst which are shallow coastal waters, sandy beaches and dunes, saline lagoons, salinas, freshwater marshes, and freshwater pools fed by groundwater springs. At the end of the 19th Century, the introduction of agriculture transformed most of the delta so that rice fields, covering more than 20,000 ha, now dominate the region. The primary natural wetland types are permanent rivers and estuarine habitats.
Biological/Ecological notes: The shallow offshore waters around the delta are extremely important as spawning and nursery areas for fish, including many commercially valuable species. Four of the delta’s fish species are endemic to the Iberian Peninsula (e.g. Aphanius iberus ). The delta also supports an outstanding mollusc fauna (marine and freshwater), while the saltwater channels hold a small endemic shrimp Palaemonetes zariqueyi . In addition to typical Mediterranean plant communities, some plant species reach their northern limit in the delta ( Lonicera biflora, Tamarix boveana and Zygophyllum album ), while for others this is their southernmost locality ( Nymphaea alba, Alnus glutinosa ).
Hydrological/Physical notes: The delta’s flooding regime is now artificially regulated for rice cultivation. During the winter (November to April) low water levels are maintained, and inflow of seawater occurs. Conversely, during summer, fresh water is fed into the delta from the river, through a network of artificial channels, and high levels are maintained until October. The inundated area shrinks to a minimum between February and April.
Human Uses: Much of the Natural Park (including virtually all of the littoral zones) is in public ownership, although some of the major lagoons are privately owned. The principal land uses within the site are hunting, fishing, shellfish harvesting, tourism and limited agriculture, aquaculture and livestock rearing.
Adverse Factors: The water in the delta is highly contaminated by agricultural chemicals and some of the lagoons (Encanyissada, Olles, Platjola) are in an advanced state of eutrophication. Other reported problems within the site include over-exploitation of natural resources (through hunting, fishing, shellfish harvesting etc.) and insufficiently regulated tourism/recreation. Dam construction in the delta’s catchment has resulted in a significant decrease in the volume of sediment reaching the wetland, leading in turn to shrinkage of the delta by as much as 75 m per year in some of the most important areas (e.g. Isla de Buda, Punta de la Banya).
The Iberianature guide to Spain | <urn:uuid:a1f44ce2-5f2b-4197-a90a-12187481fcd5> | CC-MAIN-2013-20 | http://www.iberianature.com/regions/catalonia/delta-del-ebro/ | 2013-06-19T06:36:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.896897 | 2,125 |
Early on Thursday, 12 states east of the Rocky Mountains saw temperatures fall below zero. The township of Embarrass, Minn., nicknamed the Cold Spot, lived up to its moniker, recording the lowest temperature in the U.S. Thursday morning, hitting -42 degrees Fahrenheit, according to Weather Underground meteorologist Jeff Masters.
That's pretty chilly, but not low enough to be the all-time record of -57 degrees Fahrenheit set in 1996, according to the township's website.
Freezing air moving south from the Arctic has stirred up what are known as lake-effect snows across the Midwest and Northeastern U.S. Some areas in upstate New York have seen snow piling up more than three feet.
The recipe for lake-effect snow is roughly this: A mass of cold air moves over the warmer waters of a lake. The warmth of the lake water heats up the bottom of the cold air mass, and some of the lake water evaporates into the cold air. After it rises for a little while, the warm air cools, and the moisture from the lake condenses into clouds and gives rise to snow.
These recent lake-effect snows are thought to be especially heavy thanks to the mild winter months beforehand, which have warmed the Great Lakes to an unprecedented degree. The Great Lakes have also seen lower than average ice coverage this season, which helps pump up the snow.
“If the lakes are frozen, they generate very little in the way of lake effect snow, since little moisture can escape upwards from the ice,” Masters wrote on Wednesday.
The warming of the Great Lakes plays into a cycle similar to one currently seen in the dwindling ice coverage in the Arctic.
“The amount of warming of the waters in Lakes Superior, Huron and Michigan is higher than one might expect, because of a process called the ice-albedo feedback: When ice melts, it exposes darker water, which absorbs more sunlight, warming the water, forcing even more ice to melt,” Masters wrote.
The cold snap is supposed to lighten up a little bit next week. For most of us, our only recourse is to bundle up, but some other creatures have more unusual strategies to beat the cold. The wood frog, for example, hibernates but doesn't seek out a cave; it simply stays put and freezes in the cold. It doesn't die, though, thanks to a unique survival technique.
Before the coldest part of winter hits, a wood frog's blood glucose levels increase over eight hours until they hit levels as much as 200 times greater than normal.
“This 'antifreeze' effect preserves tissues and organs through the long winter,” Audubon Guide writer Kent McFarland explained in 2010.
It doesn't stop there. The wood frog's body really, literally freezes, with ice crystals running between skin and muscle and encasing internal organs. The frog's eyes also turn white with frost. And when spring comes, the wood frog, like the forest around it, thaws and begins anew. | <urn:uuid:9fb5ba67-b903-4a52-84f0-0679211a57c0> | CC-MAIN-2013-20 | http://www.ibtimes.com/winter-cold-snap-us-intensifies-lake-effect-snows-pummel-northeast-midwest-1037756 | 2013-06-19T06:51:41Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.941281 | 632 |
Don’t let the flu get you! Check out this message from Dr. Mason on the importance of getting your flu shot!
Three Easy Steps to Avoiding the Flu:
1. Get your flu shot!
- Getting a flu vaccination is the best way to prevent the flu. The vaccine for the H1N1 virus is included in the seasonal flu vaccine this year.
- Flu shots are most important for people at high risk and their close contacts. These people include:
- Pregnant women
- People with chronic health conditions such as heart disease, asthma, etc.
- People age 50 and older
- Complications of the flu include pneumonia. Adults 65 and older should get a pneumonia vaccine. People who have a chronic illness or a weakened immune system, or who live in a nursing home, should also get a pneumonia shot. This vaccine protects against a bacterium which causes pneumonia as well as meningitis and bloodstream infections. It can be given at the same time as the flu shot if needed.
2. Keep your hands clean!
- Wash your hands with soap and water.
- Cover your nose and mouth with a tissue when you cough or sneeze.
- Don’t touch your eyes, nose or mouth. This is how germs can spread.
3. Be a hero!
- If you get the flu, try and stay away from others.
- If you care for others who are at high risk, such as young children or older parents, get a flu shot.
- Take antiviral drugs if your doctor says to. These can help treat the flu and sometimes can prevent it.
Will you be pregnant this flu season?
Doctors recommend that you get a flu shot. Being pregnant increases your risk of getting very sick from the flu. Stay healthy during your pregnancy. Get vaccinated.
Studies show that the flu shot is safe for pregnant women. | <urn:uuid:472e25ab-a299-481d-a978-7aa3ac7fae8f> | CC-MAIN-2013-20 | http://www.illinicare.com/for-members/health-management/flu-prevention/ | 2013-06-19T06:28:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.933921 | 392 |
National Minority Mental Health Awareness Month aims to increase awareness of mental illness, treatment, and research in diverse communities. By providing information that is specific to the unique cultural aspects of minorities through appropriate media and venues, these efforts work to bridge service gaps and health disparities.
Mental illness affects one in four American families and people in diverse communities are no exception. The U.S. Surgeon General reports that minorities:
- Are less likely to receive diagnosis and treatment for their mental illness,
- Have less access to and availability of mental health services,
- Often receive a poorer quality of mental health care,
- Are underrepresented in mental health research.
For additional information about National Minority Mental Health Awareness Month and to access resources and suggested activities, click here.
Local organizations and providers are encouraged to plan events this month and raise awareness about the benefits of psychotherapy and mental health care, in order to improve access to mental health treatment and services. This includes education on how to access and navigate the local mental health care system and the community support services being offered. The National Network to Eliminate Disparities in Behavioral Health will be hosting free technical seminars; click here to learn more.
Bebe Moore Campbell was the keynote speaker at the Central Texas African American Family Support Conference and we honor her dedication to mental health advocacy for diverse communities. Campbell was an accomplished author, advocate, co-founder of NAMI Urban Los Angeles and national spokesperson when she passed away in November 2006. | <urn:uuid:bd4e9c3a-2a1c-4365-9e36-e5127f1e9764> | CC-MAIN-2013-20 | http://www.integralcare.org/?nd=fs_nmmham | 2013-06-19T06:16:31Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.938721 | 296 |
A newly developed data decoding technique has allowed German scientists to transmit a record 26 terabits of data on a single laser beam in one second.
Although it was only a quarter of the record 109 Tbps demonstrated on multi-core fibre connections in March, the new single-source technique was expected to deliver efficiency and capacity gains.
Karlsruhe Institute of Technology (KIT) professor Jürg Leuthold encoded data using orthogonal frequency division multiplexing (OFDM) techniques, typically used in ADSL and WLAN networks.
The technique involves splitting data into several parallel data streams. Signals are traditionally generated from analogue circuitry and processed electronically, before being converted to an optical signal and transmitted over fibre.
Since signal processing occurs in the electronic domain, it is limited in bitrate. KIT researchers reported that the highest real-time OFDM processing rate was 101.5 Gbps at present.
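To make the idea concrete, here is a minimal, hypothetical sketch in Python/NumPy of the digital OFDM principle described above: a serial stream of bits is mapped onto many slow parallel subcarriers with an inverse FFT, then recovered with a forward FFT. The subcarrier count, the QPSK bit mapping and all function names are illustrative assumptions, not details of the KIT experiment, which generates and decodes its subcarriers optically rather than in software.

import numpy as np

def ofdm_modulate(bits, n_subcarriers=64):
    # QPSK mapping: each pair of bits becomes one complex constellation point.
    assert len(bits) == 2 * n_subcarriers
    pairs = bits.reshape(-1, 2)
    symbols = ((2 * pairs[:, 0] - 1) + 1j * (2 * pairs[:, 1] - 1)) / np.sqrt(2)
    # The inverse FFT places each symbol on its own orthogonal subcarrier,
    # turning one fast serial stream into many slow parallel streams.
    return np.fft.ifft(symbols)

def ofdm_demodulate(time_signal):
    # The receiver separates the subcarriers again with a forward FFT.
    return np.fft.fft(time_signal)

rng = np.random.default_rng(0)
tx_bits = rng.integers(0, 2, size=2 * 64)
rx_symbols = ofdm_demodulate(ofdm_modulate(tx_bits))
# Decide each bit from the sign of the real and imaginary parts.
rx_bits = np.column_stack((rx_symbols.real > 0, rx_symbols.imag > 0)).astype(int).ravel()
print("bits recovered without error:", np.array_equal(tx_bits, rx_bits))

Because each subcarrier only has to carry a small share of the total data, the per-stream symbol rate stays low enough for practical hardware to handle, which is why the parallel OFDM structure scales to very high aggregate line rates.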
In the scientific journal Nature Photonics this week, the researchers proposed an “all-optical solution” that could work “beyond the speed limitations of electronics”.
Using data from a single light source, the researchers generated, transmitted and decoded 325 data streams over 50 kilometres of dispersion-compensated fibre, at a line rate of 26 Tbps.
“Experiments show the feasibility and ease of handling terabit per second data with low energy consumption,” the researchers reported.
“To the best of our knowledge, this is the largest line rate ever encoded onto a single light source.”
In a statement issued by KIT, Leuthold said terabit-per-second data rates were needed to meet modern communication demands.
“A few years ago, data rates of 26 terabits per second were deemed utopian even for systems with many lasers, and there would not have been any applications,” he said.
“With 26 terabits per second, it would have been possible to transmit up to 400 million telephone calls at the same time. Nobody needed this at that time. Today, the situation is different.”
Professor Ben Eggleton of Australia’s Centre for Ultrahigh bandwidth Devices for Optical Systems (CUDOS) told ABC Science Online that the technique could make optic technologies cheaper and easier to deploy.
Internet traffic – and thus the network’s energy requirements – was doubling every 18 months, “which means that in 10 years, we’ve got a real crisis”, he told the ABC.
Copyright © iTnews.com.au . All rights reserved.
If you do not receive your confirmation email within the next few minutes, it may be because the email has been captured by a junk mail filter. Please ensure you add the domain @itnews.com.au to your white-listed senders. | <urn:uuid:99d430e4-ed35-4e37-8efb-e7791b63f8a5> | CC-MAIN-2013-20 | http://www.itnews.com.au/News/258418,scientists-break-data-transfer-record.aspx | 2013-06-19T06:48:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.928426 | 657 |
The history of Vedic astrology has its roots in the Vedas of Hinduism, which are among the oldest scriptures in world literature. Veda is derived from the root "Vid", which means "to know". The Veda teaches how to achieve purity of heart by getting rid of impurities. The Vedas are believed to have been written down some ten to twenty thousand years ago.
There are four Samhitas (collections) that we call the "Vedas".
History of Vedic Astrology
• Rigveda: The earliest of these, it is composed of about 1,000 hymns addressed to various deities, mostly arranged to serve the needs of the priestly families who were the custodians of this sacred literature.
• Yajurveda: It contains prose formulas applicable to various cultic rites, along with verses intended for a similar purpose.
• Samaveda: It is made up of a selection of verses (drawn almost wholly from the Rigveda) that are provided with musical notation and are intended as an aid to the performance of sacred songs.
• Atharvanaveda: It deals chiefly with the practical side of life, with man, his protection and security.
The Vedas have six supplements, also known as Vedangas or the limbs of the Vedas.
There are six Vedangas: Shiksha (phonetics), Kalpa (rituals), Vyakarana (grammar), Jyotishya (astronomy), Nirukta (etymology) and Chhandas (metrics).
Six branches of astrology
1. Jataka (Predictive astrology or Natal Astrology)
2. Gola (Astronomy related to spherical movements)
3. Prashna (Queries at a given time or "Horary Astrology")
4. Nimitta (Omens and their interpretation)
5. Muhurta (Electional Astrology/selecting an auspicious time)
6. Ganita (Astronomical calculations).
Karma: the effects of a person's actions that determine their fate in this life and the next incarnation.
"Janani Janma Soukyanam
Vardhani Kula Sampadam
Pathavi Poorva Punyanam
Likyathe Janma Patrika"
Your karma or fortune is determined by a predestined cosmic design. You are a soul incarnating in a body at a very specific time and place, and your life is a reflection of the greater whole into which you are born, just as flowers bloom at certain times, say during spring time, when all conditions are perfectly suitable. So is the case with our births on this planet.
Astrology is merely the medium for indicating the karmas (destinies), which have a link with the past life and also with the future. The karmas are woven into the horoscope so as to indicate the balance of karma that the native is carrying, as well as his task in this life.
Our powerful life karma is charted by the natal horoscope. A good Astrologer can see which areas of life may be strong, joyful and flowing, and which may be weak, problematic or blocked. A highly developed Astrologer may offer possible remedies for negative karmas. | <urn:uuid:55b92bfb-f861-4c1a-823b-5ad347872dfc> | CC-MAIN-2013-20 | http://www.jeevanadi.com/vedic-astrology/vedic-astrology.php | 2013-06-19T06:29:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.946315 | 704 |