Dataset fields (type, observed range or distinct values):
- text: string, 247 to 264k characters
- id: string, 47 characters
- dump: string, 1 distinct value
- url: string, 20 to 294 characters
- date: string, 20 characters
- file_path: string, 370 distinct values
- language: string, 1 distinct value
- language_score: float64, 0.65 to 1
- token_count: int64, 62 to 58.7k
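Each record below pairs an extracted text with its crawl metadata (id, dump, url, date, file_path, language, language_score, token_count). As a minimal sketch of how such a dump can be consumed, assuming the records are exported as JSON lines with the field names above (the file name and both threshold values are illustrative assumptions, not part of the dataset):

```python
import json

def load_records(path="crawl_records.jsonl", min_score=0.9, min_tokens=100):
    """Yield records whose language_score and token_count clear the thresholds.

    Field names follow the schema above; the path and thresholds are
    illustrative assumptions.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if (record["language_score"] >= min_score
                    and record["token_count"] >= min_tokens):
                yield record

# Print the URL and a short text preview of each record that passes the filter.
for rec in load_records():
    print(rec["url"], "->", rec["text"][:80].replace("\n", " "))
```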
This Smartboard Notebook presentation Lesson Plan contains 24 slides on the following topics of Environmental Awareness: Balance of Nature, Pollution, Pollution Example, Technology and the Environment, Pollution Classification, Air Pollution, The Carbon Cycle, Sources of Emissions of Air Pollutants, CO2 Content of Earth's Atmosphere is Increasing, The Greenhouse Effect, Acid Rain, Water Pollution Sources, Ground Pollution Sources, Energy Pollution, Human Population Growth, Human Population Density, Renewable Resources, Nonrenewable Resources, and The Future. My presentations have been made to be both informative on curricular topics and visually stimulating for all students. 100% of my students prefer to receive and discuss new topics via Notebook/PowerPoint (compared to reading textbooks, lectures, or copying information off the board). I have found that this method works extremely well with both mainstreamed and inclusion students. When I taught in a traditional HS, I provided a notebook with all the slides for students who needed to take a second look at a slide or were absent for the day. A free product preview of this entire Smartboard Notebook presentation is available. An Outline and other resources on this topic can be purchased separately: Environmental Awareness PowerPoint Presentation, Environmental Awareness PP Notebook for Smartboard, Environmental Awareness Notes Outline Lesson Plan, Environmental Awareness Unit Vocabulary Lesson Plan, Global Climate Change PowerPoint Presentation, Global Climate Change Notes Outline Lesson Plan, Environmental Awareness Homework, Environmental Awareness Test Prep. © Lisa Michalek, The Lesson Guide
<urn:uuid:3c55d293-dc54-4c1b-a9db-1ec8ee130cef>
CC-MAIN-2013-20
http://www.teacherspayteachers.com/Product/Environmental-Awareness-Smartboard-Notebook-Presentation-Lesson-Plan-71466
2013-05-25T12:51:46Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.853231
324
About the Museum Inspired by the mineral wealth of the gold rush, the State of California began documenting the state's mining history and mineral resources over a century ago. This fascinating collection, which started in San Francisco, is now exhibited at the Mining and Mineral Museum in Mariposa, an historic gold country town. Like many historic sites that are located along Highway 49, it has a rich and colorful history. The museum became a California State Park in 1999. The Mining and Mineral Museum contains thousands of unusual gems and mineral specimens, old mining artifacts, and a replica of a hard rock mining tunnel for visitors to explore. But not all the rocks are down-to-earth. The museum also exhibits rocks from space, commonly known as meteorites. One of the favorite displays is the Fricot Nugget, a rare crystalline gold specimen that weighs more than 13 lbs. The central vault area also features the Museum's finest specimens, including the official state gem, Benitoite, a very rare sapphire-blue gemstone first found in California. The West Wing showcases an antique model of a stamp mill representing an important part of California's mining heritage. The stamp mill captured small amounts of gold from the ore extracted by miners. Another look into the past is an exhibit of a miniature mining town crafted from petrified wood. Today the Golden State continues to prosper from its unique geographical landscape and rich resources. Discover how much mineral wealth still exists in our everyday lives.
<urn:uuid:40ffb605-a28d-4a94-a981-4dfe525203df>
CC-MAIN-2013-20
http://www.southernyosemitemuseums.org/csmmm/index.php
2013-05-23T05:46:26Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702849682/warc/CC-MAIN-20130516111409-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945426
306
1794 Wilkinson Map of Upper Saxony, Germany Description: A finely detailed first edition 1794 map of Upper Saxony, in what is now Mitteldeutschland, by Robert Wilkinson. In 1180 Duke Henry the Lion fell, and the medieval Duchy of Saxony dissolved. The Saxe-Wittenberg lands were passed among dynasties who took the tribal name Sachsen (Saxons) upstream as they conquered the lands of the Polabian Slavs further up the Elbe. The Polabian Slavs had migrated to this area of Germany in the second half of the first millennium A.D., and had been largely assimilated by the Holy Roman Empire by the time this map was made. Today, the German government recognizes some 60,000 'Sorbs,' or descendants of the Polabian Slavs, who have retained their language and culture. The map covers from the Baltic Sea in the north to Bohemia in the south, and from Lower Saxony in the west to Silesia, Poland, in the east. It was engraved by Thomas Conder for the 1794 first edition of Robert Wilkinson's General Atlas. Date: 1794 (dated) Source: Wilkinson, R., A General Atlas being A Collection of Maps of the World and Quarters the Principal Empires, Kingdoms, and C. with their several Provinces, and other Subdivisions Correctly Delineated., (London) 1794 First Edition. Cartographer: Robert Wilkinson (fl. c. 1768 - 1825) was a London based map and atlas publisher active in the late 18th and early 19th centuries. Most of Wilkinson's maps were derived from the earlier work of John Bowles, one of the preeminent English map publishers of the 18th century. Wilkinson acquired the Bowles map plate library following the cartographer's death in 1779. Wilkinson updated and tooled the Bowles plates over several years until, in 1794, he issued his fully original atlas, The General Atlas of the World. This popular atlas was profitably reissued in numerous editions until about 1825, when Wilkinson died. In the course of his nearly 45 years in the map trade, Wilkinson also published numerous independently issued large format wall, case, and folding maps. Wilkinson's core cartographic corpus includes Bowen and Kitchin's Large English Atlas (1785), Speer's West Indies (1796), Atlas Classica (1797), and the General Atlas of the World (1794, 1802, and 1809), as well as independent issue maps of New Holland (1820) and North America (1823). Wilkinson's offices were based at no. 58 Cornhill, London from 1792 to 1816, following which he relocated to 125 Fenchurch Street, also in London, where he remained until 1823. Following his 1825 death, Wilkinson's business and map plates were acquired by William Darton, an innovative map publisher who reissued the General Atlas with his own imprint well into the 19th century. Click here for a list of rare maps by Robert Wilkinson. Cartographer: Thomas Conder (1747 - June 1831) was an English map engraver and bookseller active in London during the late 18th and early 19th centuries. From his shop at 30 Bucklersbury, London, Conder produced a large corpus of maps and charts, usually in conjunction with other publishers of his day, including Wilkinson, Moore, Kitchin, and Walpole. Unfortunately, few biographical facts regarding Conder's life have survived. Thomas Conder was succeeded by his son Josiah Conder who, despite being severely blinded by smallpox, followed in his father's footsteps as a bookseller and author of some renown. Click here for a list of rare maps by Thomas Conder. Size: Printed area measures 10 x 8.5 inches (25.4 x 21.59 centimeters) Condition: Very good. Minor marginal soiling.
Original platemark visible. Blank on verso. Code: UpperSaxony-wilkinson-1794 (to order by phone call: 646-320-8650)
<urn:uuid:420c366f-8a49-4d95-adae-0f1600ec090e>
CC-MAIN-2013-20
http://www.geographicus.com/P/AntiqueMap/UpperSaxony-wilkinson-1794
2013-05-26T03:29:20Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706578727/warc/CC-MAIN-20130516121618-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960368
858
Oldest-known Tyrannosaur Reported By American Museum Of Natural History Paleontologist An American Museum of Natural History scientist and his colleagues have described the oldest-known tyrannosaur, a new presumably predatory dinosaur that sported a strange combination of features, including a large, fragile crest on its head that would have made the animal attractive to mates but vulnerable in a fight. The team has named the new dinosaur Guanlong wucaii, with the generic name derived from the Mandarin word for "crowned dragon" and the specific name referring to the rich colors of the rocks in the Junggar Basin in northwestern China where the specimens were found. The nine-foot-long specimen (and another skeleton of the same animal also described in the new research paper) is from the Late Jurassic Period and is about 160 million years old. Most tyrannosaur specimens, except a few fragments, date only to the later period in geologic time, the Cretaceous. None of the previously discovered tyrannosaurs are as old as G. wucaii, including a 130-million-year-old, relatively primitive, feathered tyrannosaur, Dilong paradoxus, reported by Museum scientists and colleagues in 2004. G. wucaii now displaces D. paradoxus as the most primitive tyrannosaur found to date. The wide variety of features found in G. wucaii as well as in coelurosaurs, a larger, related group of bird-like theropod dinosaurs to which it belongs, suggests that traits in these animals were modified dramatically as theropod dinosaurs changed over time. The G. wucaii specimen that will be used as the baseline for scientists studying it in the future was a 12-year-old adult when it died. It possibly was trampled by the second specimen, a 6-year-old juvenile, described in the new research. The crest on G. wucaii's head was filled with air sacs and is comparable to the exaggerated ornamental features found on some living birds such as cassowaries and hornbills. The crest was about one and a half millimeters thick, about as thick as a tortilla, and it measured about two and a half inches high. The new finding is described in the journal Nature by Mark A. Norell, Curator in the Division of Paleontology at the American Museum of Natural History; Xing Xu, Chengkai Jia, and Qi Zhao of the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing; James M. Clark of George Washington University; Catherine A. Forster of Stony Brook University; Gregory M. Erickson of Florida State University; and David A. Eberth of the Royal Tyrrell Museum (Drumheller, Alberta, Canada). Dr. Xu is also a research fellow at the American Museum of Natural History, and Drs. Clark, Forster, and Erickson are also research associates at the Museum. "The discovery of this basal tyrannosaur is giving us a much broader picture of the diversity in this group and its ancestors, and is suggesting new interpretations for ornamental structures in these animals and others," Dr. Norell said. [Image: Guanlong wucaii fossil] The skeleton of G. wucaii resembles those of more derived, or advanced, tyrannosaurs, except that the new dinosaur had three fingers (one more than is found in advanced tyrannosaurs) and is much smaller than the advanced tyrannosaurs that followed, including, of course, Tyrannosaurus rex. The front teeth and other skull and pelvic features of G. wucaii suggest that it was an intermediate animal in the evolutionary route between primitive coelurosaurs and tyrannosaurs.
A mathematical analysis of the relationships among these dinosaur groups and their close relations confirms that G. wucaii is the most primitive tyrannosaur known, or is the first branch on the tyrannosaur family tree. "Guanlong shows us how the small coelurosaurian ancestors of tyrannosaurs took the first step that led to the giant T. rex almost 100 million years later," Dr. Clark said. The crest on G. wucaii is comparable to the exaggerated ornamentation of a peacock's tail or the large horns on Irish elks. Among modern animals from beetles to bison, horns are almost always used to attract mates, compete with rivals, or allow animals of the same species to recognize each other. The crest on G. wucaii is too thin to have provided much protection. So the crest on G. wucaii likely was used for display or mate recognition, not defense. "On one hand, Guanlong looks like just what paleontologists have been expecting for a primitive tyrannosaur," said Dr. Xu. "On the other hand, no one expected that a tyrannosaur would bear a crest like this, large and delicate. Even after so many great discoveries, we have to say there is still a lot we don't know about dinosaurs. They are really a diverse group of animals." The work on G. wucaii was funded by the Special Funds for Major State Basic Research Projects of China, the National Natural Science Foundation of China, the National Geographic Society, the Chinese Academy of Sciences, the National Science Foundation of the USA, and the American Museum of Natural History. Media Inquiries: Department of Communications, 212-769-5800
<urn:uuid:bc410f0e-2267-44c9-9a25-531ebc0a7220>
CC-MAIN-2013-20
http://www.amnh.org/our-research/science-news/2006/oldest-known-tyrannosaur-reported-by-american-museum-of-natural-history-paleontologist
2013-05-23T11:40:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.952152
1,116
Toads on the Japanese island of Ishima seem to be losing their evolutionary battle with snakes. Most snakes, and indeed most other animals, avoid eating toads because of the toxins in their skin. Rhabdophis tigrinus snakes, however, not only tolerate the toxins, they store the chemicals for their own defensive arsenal. Deborah Hutchinson at Old Dominion University in Norfolk, Virginia, US, and colleagues, found that snakes on Ishima had bufadienolide compounds - toad toxins - in their neck glands, while those snakes living on the toad-free island of Kinkazan had none. The snakes are unable to synthesise their own toxins, so they can only have derived bufadienolide compounds from their diet. Hutchinson's team confirmed this by feeding snake hatchlings either a toad-rich or a toad-free diet. Toad-fed snakes accumulated toad toxins in the nuchal glands on the back of the neck; snakes on a toad-free diet did not. "Rhabdophis tigrinus is the first species known to use these dietary toxins for its own defence," says Hutchinson. Fight or flight What is more, when attacked, snakes on different islands react differently. On Ishima, snakes stand their ground and rely on the toxins in their nuchal glands to repel the predator. On Kinkazan, the snakes flee. "Snakes on Kinkazan have evolved to use their nuchal glands in defence less often than other populations of snakes, presumably due to their lack of defensive compounds," says Hutchinson. Moreover, baby snakes benefit too. The team showed that snake mothers with high toxin levels pass on the compounds to their offspring. Snake hatchlings thus also enjoy the toad-derived protection. Journal reference: Proceedings of the National Academy of Sciences (DOI: 10.1073/pnas.0610785104)
<urn:uuid:99dac3f9-c1f8-4d32-90bd-272f9dc9cdf0>
CC-MAIN-2013-20
http://www.newscientist.com/article/dn11048-snakes-eat-poisonous-toads-and-steal-their-venom.html
2013-05-20T22:26:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.950189
491
We begin this course by focusing first on the end, the outcome you hope to achieve as a parent. ❖ JOURNAL: 1. Your child as an adult Imagine your children growing older and older until they are adults, no longer living at home. Let's say a friend of your adult child approaches him or her and asks, "As you think back to your childhood, what did your mother and father stand for? What difference did they make in your life because they were your mother and your father? What did you learn from this person?" Your adult child pauses for a moment to think before responding. Now imagine how he or she would respond to that friend. Imagine the conversation. Write down what you hope he or she would say.
<urn:uuid:710cc98f-7614-4977-b334-afc426edda67>
CC-MAIN-2013-20
http://www.k-state.edu/wwparent/courses/rd/concepts/purpose.html
2013-05-22T08:01:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.982053
160
by Anne Trafton, MIT News Office Cambridge MA (SPX) Jan 14, 2013 MIT engineers have created a new polymer film that can generate electricity by drawing on a ubiquitous source: water vapor. The new material changes its shape after absorbing tiny amounts of evaporated water, allowing it to repeatedly curl up and down. Harnessing this continuous motion could drive robotic limbs or generate enough electricity to power micro- and nanoelectronic devices, such as environmental sensors. "With a sensor powered by a battery, you have to replace it periodically. If you have this device, you can harvest energy from the environment so you don't have to replace it very often," says Mingming Ma, a postdoc at MIT's David H. Koch Institute for Integrative Cancer Research and lead author of a paper describing the new material in the Jan. 11 issue of Science. "We are very excited about this new material, and we expect as we achieve higher efficiency in converting mechanical energy into electricity, this material will find even broader applications," says Robert Langer, the David H. Koch Institute Professor at MIT and senior author of the paper. Those potential applications include large-scale, water-vapor-powered generators, or smaller generators to power wearable electronics. Other authors of the Science paper are Koch Institute postdoc Liang Guo and Daniel Anderson, the Samuel A. Goldblith Associate Professor of Chemical Engineering and a member of the Koch Institute and MIT's Institute for Medical Engineering and Science. Previous efforts to make water-responsive films have used only polypyrrole, which shows a much weaker response on its own. "By incorporating the two different kinds of polymers, you can generate a much bigger displacement, as well as a stronger force," Guo says. The film harvests energy found in the water gradient between dry and water-rich environments. When the 20-micrometer-thick film lies on a surface that contains even a small amount of moisture, the bottom layer absorbs evaporated water, forcing the film to curl away from the surface. Once the bottom of the film is exposed to air, it quickly releases the moisture, somersaults forward, and starts to curl up again. As this cycle is repeated, the continuous motion converts the chemical energy of the water gradient into mechanical energy. Such films could act as either actuators (a type of motor) or generators. As an actuator, the material can be surprisingly powerful: The researchers demonstrated that a 25-milligram film can lift a load of glass slides 380 times its own weight, or transport a load of silver wires 10 times its own weight, by working as a potent water-powered "mini tractor." Using only water as an energy source, this film could replace the electricity-powered actuators now used to control small robotic limbs. "It doesn't need a lot of water," Ma says. "A very small amount of moisture would be enough." If used to generate electricity on a larger scale, the film could harvest energy from the environment - for example, while placed above a lake or river. Or, it could be attached to clothing, where the mere evaporation of sweat could fuel devices such as physiological monitoring sensors. "You could be running or exercising and generating power," Guo says. On a smaller scale, the film could power microelectromechanical systems (MEMS), including environmental sensors, or even smaller devices, such as nanoelectronics.
The researchers are now working to improve the efficiency of the conversion of mechanical energy to electrical energy, which could allow smaller films to power larger devices.
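As a quick arithmetic check on the actuator figures quoted above (an illustrative calculation, not from the paper), the reported ratios convert to the following absolute loads:

```python
film_mass_mg = 25           # reported mass of the demonstration film
lift_ratio = 380            # lifts a load 380 times its own weight
transport_ratio = 10        # transports a load 10 times its own weight

lift_load_g = film_mass_mg * lift_ratio / 1000            # 9.5 g of glass slides
transport_load_g = film_mass_mg * transport_ratio / 1000  # 0.25 g of silver wires

print(f"lifted: {lift_load_g} g, transported: {transport_load_g} g")
```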
<urn:uuid:683d3fad-e1a3-4ca3-9e39-da33798ca08e>
CC-MAIN-2013-20
http://www.spacedaily.com/reports/New_material_harvests_energy_from_water_vapor_999.html
2013-05-21T10:14:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.927963
891
St. John Bosco On January 31, the Catholic Church celebrates the feast of Saint John Bosco, also known as Saint Don Bosco, founder of the Salesian Society. Early Life of St. Don Bosco John Bosco was born to humble beginnings, as Giovanni Melchior Bosco, to poor parents in a little cabin at Becchi, a hill-side hamlet in Piedmont, Italy, in 1815. He was barely over two years old when his father died, leaving him and his brothers in the care of his mother, Margaret. The early years of John's life were spent as a shepherd, but from an early age he craved studying and had a true desire to live the religious life. He'd received his first instruction at the hands of the parish priest and displayed a quick wit and retentive memory. Though he often had to turn away from his books to work in the field, John did not ignore his vocation. Ministry of St. John Bosco Finally, in 1835, John was able to enter the seminary at Chieri, and after six years was ordained on the eve of Trinity Sunday. From here, he went to Turin and began his priestly labors with zeal. An incident soon occurred which truly opened John to the full effort of his mission. One of John's duties was to visit the prisons throughout the city. These visits brought to John's attention the children throughout the city, exposed each day to dangers of both physical and spiritual nature. These poor children were abandoned to the evil influences surrounding them, with little more to look forward to than a life that would lead them to the gallows. John made up his mind to dedicate his life to the rescue and care of the unfortunate outcasts. John Bosco began first by gathering the children together on Sundays, to teach them the catechism. The group of boys became established as the Oratory of St. Francis de Sales; as it grew and spread around the world, it became known as the Salesian Society. John in time also began to teach classes in the evening, and children who worked in factories by day would come to study as the factories closed for the day. The future St. Dominic Savio was among the students. John also founded the Salesian Congregation, which was composed of both priests and lay people who wished to continue the work he had begun. He also wanted to expand his apostolate to young girls. John, along with the woman who would become St. Maria Domenica Mazzarello, founded the FMA, the Congregation of the Daughters of Mary Help of Christians. Education, above all education about loving the Lord, was a primary focus of John's life work. He spent his free time, including time he should have spent sleeping, writing and popularizing booklets of Catholic teaching for ordinary people. Of John Bosco's education style, the Catholic Encyclopedia writes: "John Bosco's method of study knew nothing of punishment. Observance of rules was obtained by instilling a true sense of duty, by removing assiduously all occasions for disobedience, and by allowing no effort towards virtue, how trivial so ever it might be, to pass unappreciated. He held that the teacher should be father, adviser, and friend, and he was the first to adopt the preventive method. Of punishment he said: 'As far as possible avoid punishing . . . try to gain love before inspiring fear.' And in 1887 he wrote: 'I do not remember to have used formal punishment; and with God's grace I have always obtained, and from apparently hopeless children, not alone what duty exacted, but what my wish simply expressed.'"
" By the time of John’s death in 1888 there were 250 houses of the Salesian Society in all parts of the world, housing and educating 130,000 children. John Bosco was declared venerable in 1907 and was canonized in 1934.
<urn:uuid:14761b91-8bab-4340-a426-3619c64d31fd>
CC-MAIN-2013-20
http://www.aquinasandmore.com/catholic-articles/st.-john-bosco/article/260/sort/popularity/productsperpage/12/layout/grid/currentpage/2/keywords/bosco
2013-06-20T09:23:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711240143/warc/CC-MAIN-20130516133400-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.989077
831
Study Finds Inhaled Insulin Effective in Type 1 Diabetes Miami, FL (June 27, 2005) -- Scientists at the University of Miami Leonard M. Miller School of Medicine have announced that inhaled insulin works just as well at controlling blood sugar levels as injected insulin, suggesting that daily pre-meal insulin injections may soon be a thing of the past for patients with type 1 diabetes. The study's findings are published in the July issue of the journal Diabetes Care. In the six-month study, 328 patients with type 1 diabetes received their regular twice-daily injections of long-acting insulin, and were randomized to receive either inhaled insulin before each meal, or subcutaneous regular insulin before each meal. "This research shows that inhaled insulin before meals works exactly the same as injected insulin before meals, even in people who do not make any insulin on their own," said Jay Skyler, M.D., professor of medicine at the Diabetes Research Institute at the Miller School of Medicine, lead investigator and author of the study. "The lungs are an ideal target for delivery of a substance like insulin," said Dr. Skyler. "Think of it this way, the surface area of the lungs is about the same as a tennis court, and it is only one cell in thickness, enabling the insulin to get where it needs to go very quickly." It's estimated that of the 18 million Americans who have diabetes, more than one million are diagnosed with the autoimmune form of the disease, or type 1 diabetes. In type 1 the body's cells that produce insulin have been destroyed, and daily insulin injections are needed to sustain life. Researchers have long been looking for an alternative to insulin therapy by injection, which greatly impacts a patient's quality of life. In order for patients with type 1 diabetes to achieve control of blood sugar levels (which helps minimize, but not prevent, the long-term complications of the disease), they must adhere to a strict regimen of insulin injections, particularly around mealtime. Many patients often reject multiple injections because of the discomfort, burden and inconvenience, so an effective alternative to the injections would not only be welcomed by these patients, but would likely result in better patient compliance and lower public health care costs. The proof-of-concept study showing that inhaled insulin was a viable treatment option was published in the February 3, 2001, issue of the prestigious journal The Lancet by Dr. Skyler and colleagues. "This study, however, is the best demonstration to date that inhaled insulin can be substituted for pre-meal insulin in patients with type 1 diabetes," Dr. Skyler said. "It could mean a significant improvement in quality of life for individuals currently living with the disease." The inhaled insulin used in this study – Exubera – is currently under review by the Food and Drug Administration, and could be approved by the end of the year. Jeanne Antol Krull
<urn:uuid:da5593be-4669-449b-a6c7-11e837a4af9d>
CC-MAIN-2013-20
http://www.diabetesresearch.org/page.aspx?pid=401
2013-05-20T11:46:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.948999
614
Lupus is an autoimmune disease, meaning that the body's own immune system attacks its own tissues. This may lead to inflammation, swelling, pain and tissue damage. Patients with lupus may experience more serious conditions later on that may include problems with the kidneys, heart, lungs and nervous system. The exact cause of lupus is not fully understood. Some scientists and researchers believe that people may be born with genes that affect their immune system and how it works. These people may be more likely to get lupus. Lupus is not contagious. Triggering Lupus Flares What triggers lupus flares in one patient may not trigger lupus in another. Patients are encouraged to speak with a doctor to determine what may trigger their lupus attacks, although they should also be aware of certain environmental triggers. Possible triggers include ultraviolet light (especially from the sun), hormones, certain medications and chemicals such as trichloroethylene. Smoking may lead to lupus and may worsen the condition, while infections such as Hepatitis C, cytomegalovirus and parvovirus may trigger lupus as well. Epstein-Barr virus has been linked to lupus in children. Patients are encouraged to learn more about lupus and its possible causes. A licensed healthcare professional is the best resource for further information involving lupus. SkinCareGuide.com also offers further information about lupus, its symptoms and its treatments.
<urn:uuid:e89ede9c-19af-4403-8716-73841455022d>
CC-MAIN-2013-20
http://www.skincareguide.com/article/cause-of-lupus-erythematosus.html
2013-05-23T18:32:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.963133
306
What you do speaks so loudly that I cannot hear what you say. [Ralph Waldo Emerson] In any presentation, body language -- also known as non-verbal communication -- can strengthen your message or undermine it. Your audience reads clues from a myriad of things about you other than what you say: how you stand; your facial expressions; gestures; eye contact. And if these non-verbals are distracting or conflict with your intended message, the audience will be heavily influenced by what they see rather than what they hear. When the eyes say one thing, and the tongue another, a practiced man relies on the language of the first. [Ralph Waldo Emerson] Picture this: let's say you are attempting to persuade an audience to pursue a healthier lifestyle. Your presentation is filled with facts and engaging stories about the benefits of eating fruits and vegetables, getting exercise and reducing stress. But you don't make eye contact, you stand with your arms crossed in front of your chest and you frown frequently throughout the presentation. Even though your message is strong and flows logically, your body language is telling the audience that you're not confident and enthusiastic about your topic. So why should they be? How you stand, hold your body and move around the presentation area can communicate confidence and competence [or the opposite] to your audience. Researchers at Northwestern and Harvard have studied what happens when you place your body in positions that project power -- arms open wide and feet apart. They discovered that these power poses trigger a rise in testosterone, a hormone associated with confident, assertive behavior, and a decrease in cortisol, the stress hormone, when the positions are held for as little as two minutes. Adapting their findings to a presentation, these power positions are: - standing tall, feet shoulder width apart and chest out - arms away from your sides and uncrossed, open and expansive - arms outstretched to the audience at about chest height So for your next presentation strategically employ body language to show your comfort and self-assurance. Focus on regularly throwing your shoulders back and extending your arms away from your body. Incorporate this stance into your presentation prep, especially if this is not a normal body position for you. By the time you get in front of the audience, the hormone adjustment combined with the rehearsal will have your body language projecting your increased confidence. For more information on this research, here is an informative and entertaining video presentation from one of the researchers, Amy Cuddy, who believes we can learn a lot from Wonder Woman.
<urn:uuid:312859d3-daaf-49cc-a913-fceb6a54a9ce>
CC-MAIN-2013-20
http://andnowpresenting.typepad.com/professionally_speaking/2012/10/body-language-practicum-1-power-poses.html
2013-05-19T10:16:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.923665
536
Era: developed into Koiné Greek by the 4th century BC
Writing system: Greek alphabet
[Map of Homeric Greece]
Ancient Greek is the form of the Greek language used during the periods spanning the c. 9th – 6th century BC (known as Archaic), the c. 5th – 4th century BC (Classical), and the c. 3rd century BC – 6th century AD (Hellenistic) in ancient Greece and the ancient world. It was predated in the 2nd millennium BC by Mycenaean Greek. The language of the Hellenistic phase is known as Koine (common) or Biblical Greek, while the language from the late period onward features no considerable differences from Medieval Greek. Koine is regarded as a separate historical stage of its own, although in its earlier form it closely resembled the Classical. Prior to the Koine period, Greek of the classic and earlier periods included several regional dialects. Ancient Greek was the language of Homer and of classical Athenian historians, playwrights, and philosophers. It has contributed many words to English vocabulary and has been a standard subject of study in educational institutions of the West since the Renaissance. This article primarily contains information about the Epic and Classical phases of the language. The origins, early form and development of the Hellenic language family are not well understood because of the lack of contemporaneous evidence. There are several theories about which Hellenic dialect groups may have existed between the divergence of early Greek-like speech from the common Proto-Indo-European language and the Classical period. They have the same general outline but differ in some of the detail. The only attested dialect from this period[1] is Mycenaean, but its relationship to the historical dialects and the historical circumstances of the times imply that the overall groups already existed in some form. The major dialect groups of the Ancient Greek period can be assumed to have developed not later than 1120 BC, at the time of the Dorian invasion(s), and their first appearances as precise alphabetic writing began in the 8th century BC. The invasion would not be "Dorian" unless the invaders had some cultural relationship to the historical Dorians; moreover, the invasion is known to have displaced population to the later Attic-Ionic regions, who regarded themselves as descendants of the population displaced by or contending with the Dorians. The Greeks of this period considered there to be three major divisions of all the Greek people—Dorians, Aeolians and Ionians (including Athenians), each with their own defining and distinctive dialects. Allowing for their oversight of Arcadian, an obscure mountain dialect, and Cyprian, far from the center of Greek scholarship, this division of people and language is quite similar to the results of modern archaeological-linguistic investigation. One standard formulation for the dialects is:[2] West vs. non-west Greek is the strongest marked and earliest division, with non-west in subsets of Ionic-Attic (or Attic-Ionic) and Aeolic vs. Arcado-Cyprian, or Aeolic and Arcado-Cyprian vs. Ionic-Attic. Often non-west is called East Greek. The Arcado-Cyprian group apparently descended more closely from the Mycenaean Greek of the Bronze Age. Boeotian had come under a strong Northwest Greek influence, and can in some respects be considered a transitional dialect. Thessalian likewise had come under Northwest Greek influence, though to a lesser degree.
Pamphylian, spoken in a small area on the south-western coast of Asia Minor and little preserved in inscriptions, may be either a fifth major dialect group, or it is Mycenaean Greek overlaid by Doric, with a non-Greek native influence. Ancient Macedonian was an Indo-European language closely related to Greek, but its exact relationship is unclear because of insufficient data: possibly a dialect of Greek; a sibling language to Greek; or a close cousin to Greek, and perhaps related to some extent, to Thracian and Phrygian languages. The Pella curse tablet is one of many finds that support the idea that the Ancient Macedonian language is closely related to the Doric Greek dialect. Most of the dialect sub-groups listed above had further subdivisions, generally equivalent to a city-state and its surrounding territory, or to an island. Doric notably had several intermediate divisions as well, into Island Doric (including Cretan Doric), Southern Peloponnesus Doric (including Laconian, the dialect of Sparta), and Northern Peloponnesus Doric (including Corinthian). The Lesbian dialect was a member of the Aegean/Asiatic Aeolic sub-group. All the groups were represented by colonies beyond Greece proper as well, and these colonies generally developed local characteristics, often under the influence of settlers or neighbors speaking different Greek dialects. The dialects outside the Ionic group are known mainly from inscriptions, notable exceptions being fragments of the works of the poetess Sappho from the island of Lesbos and the poems of the Boeotian poet, Pindar. After the conquests of Alexander the Great in the late 300s BC, a new international dialect known as Koine or Common Greek developed, largely based on Attic Greek, but with influence from other dialects. This dialect slowly replaced most of the older dialects, although Doric dialect has survived to the present in the form of the Tsakonian dialect of Modern Greek, spoken in the region of modern Sparta. Doric has also passed down its aorist terminations into most verbs of Demotic Greek. By about the 500s AD, the Koine had slowly metamorphosed into Medieval Greek.
The pronunciation of Post-Classic Greek changed considerably from Ancient Greek, although the orthography still reflects features of the older language (see W. Sidney Allen, Vox Graeca – a guide to the pronunciation of Classical Greek). For a detailed description of the phonology changes from the Ancient to Hellenistic periods of the Greek language, see the article on Koine Greek. The examples below are intended to represent Attic Greek in the 5th century BC. Although ancient pronunciation can never be reconstructed with certainty, Greek in particular is very well documented from this period, and there is little disagreement among linguists as to the general nature of the sounds that the letters represented. [ŋ] occurred as an allophone of /n/ used before velars and as an allophone of /ɡ/ before nasals. /r/ was probably voiceless when word-initial (written ῥ)
Vowels (short and long pairs):
Close: i iː, y yː
Close-mid: e eː, o oː
/oː/ raised to [uː], probably by the 4th century BC. In verb conjugation, one consonant often comes up against another. Various sandhi rules apply. - Most basic rule: When two sounds appear next to each other, the first assimilates in voicing and aspiration to the second. - This applies fully to stops. Fricatives assimilate only in voicing, sonorants do not assimilate.
- Before an /s/ (future, aorist stem), velars become [k], labials become [p], and dentals disappear.
- Before a /tʰ/ (aorist passive stem), velars become [kʰ], labials become [pʰ], and dentals become [s].
- Before an /m/ (perfect middle first-singular, first-plural, participle), velars become [ɡ], nasal+velar becomes [ɡ], labials become [m], dentals become [s], other sonorants remain the same. Certain vowels historically underwent compensatory lengthening in certain contexts. /a/ sometimes lengthened to [aː] or [ɛː], and /e/ and /o/ become the closed values [eː] and [oː] and the open ones [ɛː] and [ɔː] depending on time period. Greek, like all of the older Indo-European languages, is highly inflected. It is highly archaic in its preservation of Proto-Indo-European forms. In Ancient Greek, nouns (including proper nouns) have five cases (nominative, genitive, dative, accusative and vocative), three genders (masculine, feminine and neuter), and three numbers (singular, dual and plural). Verbs have four moods (indicative, imperative, subjunctive, and optative), three voices (active, middle and passive), as well as three persons (first, second and third) and various other forms. Verbs are conjugated through seven combinations of tenses and aspect (generally simply called "tenses"): the present, future and imperfect are imperfective in aspect; the aorist (perfective aspect); a present perfect, pluperfect and future perfect. Most tenses display all four moods and three voices, although there is no future subjunctive or imperative. Also, there is no imperfect subjunctive, optative or imperative. There are infinitives and participles corresponding to the finite combinations of tense, aspect and voice. The indicative of past tenses adds (conceptually, at least) a prefix /e-/, called the augment. This was probably originally a separate word, meaning something like "then," added because tenses in PIE had primarily aspectual meaning. The augment is added to the indicative of the aorist, imperfect and pluperfect, but not to any of the other forms of the aorist (no other forms of the imperfect and pluperfect exist). There are two kinds of augment in Greek, syllabic and quantitative. The syllabic augment is added to stems beginning with consonants, and simply prefixes e (stems beginning with r, however, add er). The quantitative augment is added to stems beginning with vowels, and involves lengthening the vowel:
- a, ā, e, ē → ē
- i, ī → ī
- o, ō → ō
- u, ū → ū
- ai → ēi
- ei → ēi or ei
- oi → ōi
- au → ēu or au
- eu → ēu or eu
- ou → ou
(This mapping is applied in the short code sketch at the end of this article.) Some verbs augment irregularly; the most common variation is e → ei. The irregularity can be explained diachronically by the loss of s between vowels. In verbs with a prefix, the augment is placed not at the start of the word, but between the prefix and the original verb. For example, προσ(-)βάλλω (I attack) goes to προσέβαλον in the aorist. The augment sometimes substitutes for reduplication; see below. Almost all forms of the perfect, pluperfect and future perfect reduplicate the initial syllable of the verb stem. (Note that a few irregular forms of perfect do not reduplicate, whereas a handful of irregular aorists reduplicate.) There are three types of reduplication:
- Syllabic reduplication: Most verbs beginning with a single consonant, or a cluster of a stop with a sonorant, add a syllable consisting of the initial consonant followed by e. An aspirated consonant, however, reduplicates in its unaspirated equivalent: Grassmann's law.
- Augment: Verbs beginning with a vowel, as well as those beginning with a cluster other than those indicated previously (and occasionally for a few other verbs) reduplicate in the same fashion as the augment. This remains in all forms of the perfect, not just the indicative.
- Attic reduplication: Some verbs beginning with an a, e or o, followed by a sonorant (or occasionally d or g), reduplicate by adding a syllable consisting of the initial vowel and following consonant, and lengthening the following vowel. Hence er → erēr, an → anēn, ol → olōl, ed → edēd. This is not actually specific to Attic Greek, despite its name; but it was generalized in Attic. This originally involved reduplicating a cluster consisting of a laryngeal and sonorant; hence h₃l → h₃leh₃l → olōl with normal Greek development of laryngeals. (Forms with a stop were analogous.) Irregular duplication can be understood diachronically. For example, lambanō (root lab) has the perfect stem eilēpha (not *lelēpha) because it was originally slambanō, with perfect seslēpha, becoming eilēpha through compensatory lengthening. Reduplication is also visible in the present tense stems of certain verbs. These stems add a syllable consisting of the root's initial consonant followed by i. A nasal stop appears after the reduplication in some verbs.[4] Ancient Greek was written in the Greek alphabet, with some variation among dialects. Early texts are written in boustrophedon style, but left-to-right became standard during the classic period. Modern editions of Ancient Greek texts are usually written with accents and breathing marks, interword spacing, modern punctuation, and sometimes mixed case, but these were all introduced later.
- Ὅτι μὲν ὑμεῖς, ὦ ἄνδρες Ἀθηναῖοι, πεπόνθατε ὑπὸ τῶν ἐμῶν κατηγόρων, οὐκ οἶδα: ἐγὼ δ' οὖν καὶ αὐτὸς ὑπ' αὐτῶν ὀλίγου ἐμαυτοῦ ἐπελαθόμην, οὕτω πιθανῶς ἔλεγον. Καίτοι ἀληθές γε ὡς ἔπος εἰπεῖν οὐδὲν εἰρήκασιν. Transliterated into the Latin alphabet using a modern version of the Erasmian scheme:
- Hóti mèn humeîs, ô ándres Athēnaîoi, pepónthate hupò tôn emôn katēgórōn, ouk oîda: egṑ d' oûn kaì autòs hup' autōn olígou emautoû epelathómēn, hoútō pithanôs élegon. Kaítoi alēthés ge hōs épos eipeîn oudèn eirḗkasin. Using the IPA:
- hóti men hymêːs, ɔ̂ː ándres atʰɛːnáioi, pepóntʰate hypo tɔ̂ːn emɔ̂ːn katɛːgórɔːn, oːk óida; egɔ̌ː dôːn kai autos hyp autɔ̂ːn olígoː emautôː epelatʰómɛːn, hǒːtɔː pitʰanɔ̂ːs élegon. kaítoi alɛːtʰés ge hɔːs épos eːpêːn oːden eːrɛ̌ːkasin. Translated into English:
- What you, men of Athens, have learned from my accusers, I do not know: but I, for my part, nearly forgot who I was thanks to them, since they spoke so persuasively. And yet, of the truth, they have spoken, one might say, nothing at all. Another example, from the beginning of Homer's Iliad:
Μῆνιν ἄειδε, θεά, Πηληϊάδεω Ἀχιλῆος οὐλομένην, ἣ μυρί' Ἀχαιοῖς ἄλγε' ἔθηκε, πολλὰς δ' ἰφθίμους ψυχὰς Ἄϊδι προΐαψεν ἡρώων, αὐτοὺς δὲ ἑλώρια τεῦχε κύνεσσιν οἰωνοῖσί τε πᾶσι· Διὸς δ' ἐτελείετο βουλή· ἐξ οὗ δὴ τὰ πρῶτα διαστήτην ἐρίσαντε Ἀτρεΐδης τε ἄναξ ἀνδρῶν καὶ δῖος Ἀχιλλεύς. The study of Ancient Greek in European countries in addition to Latin occupied an important place in the syllabus from the Renaissance until the beginning of the 20th century. Ancient Greek is still taught as a compulsory or optional subject especially at traditional or elite schools throughout Europe, such as public schools and grammar schools in the United Kingdom.
It is compulsory in the Liceo classico in Italy, in the gymnasium in the Netherlands, in some classes in Austria, in Croatia in the klasicna gimnazija, and it is optional in the Humanistisches Gymnasium in Germany (usually as a third language after Latin and English, from the age of 14 to 18). In 2006/07, 15,000 pupils studied Ancient Greek in Germany according to the Federal Statistical Office of Germany, and 280,000 pupils studied it in Italy.[5] Ancient Greek is also taught at most major universities worldwide, often combined with Latin as part of Classics. It will also be taught in state primary schools in the UK, to boost children's language skills,[6][7][8] and will be offered as a foreign language to pupils in all primary schools from 2014 as part of a major drive to boost education standards, together with Latin, Mandarin, French, German, Spanish, and Italian.[9] Ancient Greek is also taught as a compulsory subject in Gymnasia and Lykeia in Greece.[10][11] Ancient Greek is often used in the coinage of modern technical terms in the European languages: see English words of Greek origin. Modern authors rarely write in Ancient Greek, though Jan Křesadlo wrote some poetry and prose in the language, and some volumes of Asterix have been written in Attic Greek[12] and Harry Potter and the Philosopher's Stone has been translated into Ancient Greek.[13] Ancient Greek is also used by organizations and individuals, mainly Greek, who wish to denote their respect, admiration or preference for the use of this language. This use is sometimes considered graphical, nationalistic or funny. In any case, the fact that modern Greeks can still wholly or partly understand texts written in non-archaic forms of ancient Greek shows the affinity of the modern Greek language to its ancestral predecessor.[14] An isolated community near Trabzon, Turkey, an area where Pontic Greek is spoken, has been found to speak a variety of Greek that has parallels, both structurally and in its vocabulary, to Ancient Greek not present in other varieties.[15] As few as 5,000 people speak the dialect but linguists believe that it is the closest living language to Ancient Greek.[16][17]
- Exploring the Ancient Greek Language and Culture (competition)
- Greek alphabet
- Greek declension
- Greek diacritics
- Mycenaean Greek language
- Koine Greek
- Medieval Greek
- Modern Greek
- Greek language
- List of Greek phrases (mostly Ancient Greek)
- List of Greek words with English derivatives
- Imprecisely attested and somewhat reconstructive due to its being written in an ill-fitting syllabary (Linear B).
- This one appears in recent versions of the Encyclopædia Britannica, which also lists the major works that define the subject.
- Roger D. Woodard (2008), "Greek dialects", in: The Ancient Languages of Europe, ed. R. D. Woodard, Cambridge: Cambridge University Press, p. 51.
- Palmer, Leonard (1996). The Greek Language. Norman, OK: University of Oklahoma Press. p. 262. ISBN 0-8061-2844-5.
- Ancient Greek 'to be taught in state schools'
- "Primaries go Greek to help teach English" - Education News - 30 July 2010.
- "Now look, Latin's fine, but Greek might be even Beta" TES Editorial © 2010 - TSL Education Ltd.
- More primary schools to offer Latin and ancient Greek, The Telegraph, 26 November 2012
- Asterix speaks Attic (classical Greek) - Greece (ancient)
- Areios Potēr kai ē tu philosophu lithos, Bloomsbury 2004, ISBN 1-58234-826-X
- Akropolis World News, and Tech news in Ancient Greek
- Jason and the argot: land where Greek's ancient language survives, The Independent, 3 January 2011
- Against all odds: archaic Greek in a modern world, University of Cambridge
- Archaic Greek in a modern world video from Cambridge University, on YouTube
- P. Chantraine (1968), Dictionnaire étymologique de la langue grecque, Klincksieck, Paris.
- Athenaze A series of textbooks on Ancient Greek published for school use
- Hansen, Hardy and Quinn, Gerald M. (1992) Greek: An Intensive Course, Fordham University Press
- Easterling, P & Handley, C. Greek Scripts: An illustrated introduction. London: Society for the Promotion of Hellenic Studies, 2001. ISBN 0-902984-17-9
- Online Greek resources Dictionaries, grammar, virtual libraries, fonts, etc.
- Ancient Greek Swadesh list of basic vocabulary words (from Wiktionary's Swadesh list appendix)
- A more extensive grammar of the Ancient Greek language written by J. Rietveld
- Recitation of classics books
- Perseus Greek dictionaries
- Greek-Language.com Information on the history of the Greek language, application of modern Linguistics to the study of Greek, and tools for learning Greek
- Free Lessons in Ancient Greek, Bilingual Libraries, Forum
- A critical survey of websites devoted to Ancient Greek
- Ancient Greek Tutorials Berkeley Language Center of the University of California
- A Digital Tutorial For Ancient Greek Based on White's First Greek Book
- New Testament Greek
- Acropolis World News A summary of the latest world news in Ancient Greek, Juan Coderch, University of St Andrews
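The quantitative augment tabulated in the morphology section above is a small rewrite table, so it can be sketched directly in code. The following Python is an illustrative sketch, not a complete morphology engine: it assumes romanized stems with macrons marking long vowels, ignores accents and breathing marks, and, where the table lists two possible outcomes (ei, au, eu), takes the first.

```python
# Quantitative augment for vowel-initial stems, following the table above.
AUGMENT = {
    "ai": "ēi", "ei": "ēi", "oi": "ōi",
    "au": "ēu", "eu": "ēu", "ou": "ou",
    "a": "ē", "ā": "ē", "e": "ē", "ē": "ē",
    "i": "ī", "ī": "ī",
    "o": "ō", "ō": "ō",
    "u": "ū", "ū": "ū",
}

def quantitative_augment(stem: str) -> str:
    """Lengthen the initial vowel of a vowel-initial verb stem.

    Diphthongs are matched before single vowels so that e.g. 'oi' maps as a
    unit rather than as 'o' + 'i'. Consonant-initial stems take the syllabic
    augment instead (a prefixed e-, or er- for stems in r-), which this
    sketch does not model.
    """
    for n in (2, 1):  # try a diphthong first, then a single vowel
        head = stem[:n]
        if head in AUGMENT:
            return AUGMENT[head] + stem[n:]
    raise ValueError(f"not a vowel-initial stem: {stem!r}")

print(quantitative_augment("ethel"))  # ēthel  (e → ē, as in ethelō → imperfect ēthelon)
print(quantitative_augment("oik"))    # ōik    (oi → ōi)
```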
<urn:uuid:9a681784-55f0-414e-9e5a-fc9e263fbd43>
CC-MAIN-2013-20
http://www.bioscience.ws/encyclopedia/index.php?title=Ancient_Greek
2013-05-19T10:05:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.90533
5,316
Core Ecological Concepts Understanding the patterns and processes by which nature sustains life is central to ecological literacy. Fritjof Capra says that these may be called principles of ecology, principles of sustainability, principles of community, or even the basic facts of life. In our work with teachers and schools, the Center for Ecoliteracy has identified six of these principles that are important for students to understand and be able to apply to the real world. Recognizing these core ecological concepts is one of the important results of our guiding principle, "Nature Is Our Teacher," which is described in the "Explore" section of this website. We present them again here for the guidance they can provide to teachers as they plan lessons as part of schooling for sustainability. All living things in an ecosystem are interconnected through networks of relationship. They depend on this web of life to survive. For example: In a garden, a network of pollinators promotes genetic diversity; plants, in turn, provide nectar and pollen to the pollinators. Nature is made up of systems that are nested within systems. Each individual system is an integrated whole and — at the same time — part of larger systems. Changes within a system can affect the sustainability of the systems that are nested within it as well as the larger systems in which it exists. For example: Cells are nested within organs within organisms within ecosystems. Members of an ecological community depend on the exchange of resources in continual cycles. Cycles within an ecosystem intersect with larger regional and global cycles. For example: Water cycles through a garden and is also part of the global water cycle. Each organism needs a continual flow of energy to stay alive. The constant flow of energy from the sun to Earth sustains life and drives most ecological cycles. For example: Energy flows through a food web when a plant converts the sun's energy through photosynthesis, a mouse eats the plant, a snake eats the mouse, and a hawk eats the snake. In each transfer, some energy is lost as heat, requiring an ongoing energy flow into the system. All life — from individual organisms to species to ecosystems — changes over time. Individuals develop and learn, species adapt and evolve, and organisms in ecosystems coevolve. For example: Hummingbirds and honeysuckle flowers have developed in ways that benefit each other; the hummingbird's color vision and slender bill coincide with the colors and shapes of the flowers. Ecological communities act as feedback loops, so that the community maintains a relatively steady state that also has continual fluctuations. This dynamic balance provides resiliency in the face of ecosystem change. For example: Ladybugs in a garden eat aphids. When the aphid population falls, some ladybugs die off, which permits the aphid population to rise again, which supports more ladybugs. The populations of the individual species rise and fall, but balance within the system allows them to thrive together.
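The ladybug-and-aphid feedback loop can be made concrete with a small simulation. This is an illustrative sketch, not part of the Center for Ecoliteracy's material: a discrete-time Lotka-Volterra-style model whose coefficients are arbitrary assumptions chosen only to show the two populations rising and falling around a dynamic balance.

```python
def step(aphids, ladybugs, growth=0.08, predation=0.002,
         conversion=0.0004, death=0.04):
    """Advance both populations by one time step.

    Aphids grow on their own and are eaten by ladybugs; ladybugs grow in
    proportion to the aphids they eat and otherwise decline. All rate
    constants are illustrative assumptions.
    """
    new_aphids = aphids + growth * aphids - predation * aphids * ladybugs
    new_ladybugs = ladybugs + conversion * aphids * ladybugs - death * ladybugs
    return max(new_aphids, 0.0), max(new_ladybugs, 0.0)

aphids, ladybugs = 500.0, 20.0
for week in range(10):
    print(f"week {week}: aphids={aphids:.0f}, ladybugs={ladybugs:.1f}")
    aphids, ladybugs = step(aphids, ladybugs)
```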
<urn:uuid:b376f164-a49b-41ac-bd46-275a12d51835>
CC-MAIN-2013-20
http://www.ecoliteracy.org/philosophical-grounding/core-ecological-concepts
2013-05-23T18:38:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.932825
599
A tilt table test is used to determine if you are prone to sudden blood pressure drops or slow pulse rates when your position changes. Your physician will order a tilt table test if you have fainting spells, severe lightheadedness or dizziness that forces you to lie down. What Is a Tilt Table Test? Fainting or severe dizziness occurs for different reasons, from nervous system reactions to dropping blood pressure. A tilt table test involves being tilted with your head up at a variety of angles for a period of time. The test shows how your heart rate and blood pressure respond to these changes in position. When your test begins, a nurse will start an IV in your arm. You will lie down on the tilt table flat on your back and be connected to monitors to check your heart rate, blood pressure, breathing and oxygen levels throughout the test. Safety straps will be securely placed across your chest and legs to hold you in place. The first part of the test is a quiet period. The table will then be slowly tilted to a standing position, with your head up. Since you are strapped in, you will be supported in this position. The technician will check you at several additional positions and then return you to a flat position. You will be awake during the entire test, but it is important to remain quiet and still. During the test, you might feel lightheaded or sick to your stomach, or experience dizziness, heart palpitations or flutters, vision changes or possibly even faint. It is important that you tell your nurse or doctor about any symptoms you are experiencing right away. The second part of your tilt table test starts once you have been given medication, such as nitroglycerine or Isuprel, to stress your heart. The medication is given slowly in your IV and closely adjusted based on your body’s reaction. The medication could make you feel jittery, nervous or as though your heart is beating faster. You will be tilted again with the additional medications used to try to provoke any abnormal response. These symptoms will diminish as the medication wears off. What to Expect During Your Tilt Table Test Preparing for Your Procedure Do not eat or drink anything for at least four hours prior to your tilt table test. Check with your physician to determine if any of your medications should be avoided for the days leading up to your scheduled test. Make sure to bring all of your medications, as well as any herbal or dietary supplements and over-the-counter medications, to the test with you. On the day of the test, do not wear any jewelry or bring any valuables with you. You will be asked to wear a hospital gown during the test. It is important to bring an adult to drive you home after the test is complete, because you will not be allowed to drive. During Your Procedure Follow the technologist’s instructions closely and make sure to hold completely still. If you feel very uncomfortable and cannot go on during the test, it will be stopped. If you faint during the test, the test will also be stopped. Your tilt table test and recovery will take about three hours to complete. After Your Procedure It is important to rest for several hours once your tilt test is complete. You can eat and drink regular foods right away. You should start to feel normal again within 15 minutes of the test concluding, but you will remain tired. A report of your tilt table test will be sent to your physician, who will contact you to discuss your results. 
Ohio State's Ross Heart Hospital
OSU Heart Center at University Hospital East
To schedule your appointment, please call 614-293-ROSS or 888-293-ROSS.
<urn:uuid:e007d3f0-a50c-4dbc-9d24-5f07188349d3>
CC-MAIN-2013-20
http://medicalcenter.osu.edu/heart/conditions/Pages/Tests/TiltTableTest.aspx
2013-05-20T22:23:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934005
765
Preface by Lewis Lehrman

"I have always thought that all men should be free; but if any should be slaves, it should be first those who desire it for themselves, and secondly, those who desire it for others. When I hear anyone arguing for slavery, I feel a strong impulse to see it tried on him personally," President Lincoln told an Indiana Regiment passing through Washington less than a month before his murder.1

Mr. Lincoln thought deeply on the subject of liberty. He knew it was a vital but fragile concept, which needed to be nurtured. Nearly a decade earlier, in the midst of furor over the Kansas-Nebraska Act, Mr. Lincoln had said in Peoria, Illinois: "Little by little, but steadily as man's march to the grave, we have been giving up the old for the new faith. Nearly eighty years ago we began by declaring that all men are created equal; but now from that beginning we have run down to the other declaration, that for some men to enslave others is a 'sacred right of self-government.' These principles cannot stand together. They are as opposite as God and Mammon; and whoever holds to the one must despise the other."2

A few months earlier, he had written some notes – perhaps for a speech not given: "Most governments have been based, practically, on the denial of equal rights of men, as I have, in part, stated them; ours began, by affirming those rights. They said, some men are too ignorant, and vicious, to share in government. Possibly so, said we; and, by your system, you would always keep them ignorant, and vicious. We proposed to give all a chance; and we expected the weak to grow stronger, the ignorant, wiser; and all better, and happier together."3

Liberty was the cornerstone of the Republic, enshrined in the Declaration of Independence. It was the cornerstone of republican government and a bulwark for the growth of democracy elsewhere. In the Peoria speech, Mr. Lincoln said: "This declared indifference, but, as I must think, covert real zeal, for the spread of slavery, I cannot but hate. I hate it because it deprives our republican example of its just influence in the world; enables the enemies of free institutions with plausibility to taunt us as hypocrites; causes the real friends of freedom to doubt our sincerity; and especially because it forces so many good men among ourselves into an open war with the very fundamental principles of civil liberty, criticizing the Declaration of Independence, and insisting that there is no right principle of action but self-interest."4

Mr. Lincoln did not believe that under then-current law slavery could be abolished where it already existed. But morally and constitutionally, Mr. Lincoln believed it must and could be restricted where it did not exist. In an 1858 speech in Chicago, Mr. Lincoln said: "If we cannot give freedom to every creature, let us do nothing that will impose slavery upon any other creature."5 In Kansas in early December 1859, Mr. Lincoln said, "There is no justification for prohibiting slavery anywhere, save only in the assumption that slavery is wrong."6 In Hartford on March 5, 1860, Mr. Lincoln said: "If slavery is right, it ought to be extended; if not, it ought to be restricted – there is no middle ground."7

Mr. Lincoln's respect for work was fundamental to his disdain for slavery. William Wolf wrote in The Almost Chosen People: "Lincoln felt strongly about the essential importance of labor to society and liked to make it concrete by referring to the injunction on work in Genesis.
He had known in early life what it meant to earn bread in the sweat of his brow. He was offended by the arrogant complacency of the planter interests and especially by their mouthpieces in the clergy."8 Mr. Lincoln understood that fundamental to one's attitude toward slavery was one's willingness to let others sweat on one's behalf. Indeed, work was as fundamental a value as freedom, argued Mr. Lincoln.

In 1854, Mr. Lincoln wrote: "The ant, who has toiled and dragged a crumb to his nest, will furiously defend the fruit of his labor, against whatever robber assails him. So plain, that the most dumb and stupid slave that ever toiled for a master, does constantly know that he is wronged. So plain that no one, high or low, ever does mistake it, except in a plainly selfish way; for although volume upon volume is written to prove slavery a very good thing, we never hear of the man who wishes to take the good of it, by being a slave himself."9

"I believe each individual is naturally entitled to do as he pleases with himself and the fruits of his labor, so far as it in no wise interferes with any other man's rights," said Mr. Lincoln in Chicago in July 1858.10 "Work, work, work, is the main thing," Mr. Lincoln once wrote in a letter.11

Relatively early in his political career, Mr. Lincoln had declared: "In the early days of the world, the Almighty said to the first of our race 'In the sweat of thy face shalt thou eat bread'; and since then, if we except the light and the air of heaven, no good thing has been, or can be enjoyed by us, without having first cost labour. And, inasmuch [as] most good things are produced by labour, it follows that [all] such things of right belong to those whose labour has produced them. But it has so happened in all ages of the world, that some have laboured, and others have, without labour, enjoyed a large proportion of the fruits. This is wrong, and should not continue. To [secure] to each labourer the whole product of his labour, or as nearly as possible, is a most worthy object of any good government. But then the question arises, how can a government best, effect this? In our own country, in it's present condition, will the protective principle advance or retard this object? Upon this subject, the habits of our whole species fall into three great classes – useful labour, useless labour and idleness. Of these the first only is meritorious; and to it all the products of labour rightfully belong; but the two latter, while they exist, are heavy pensioners upon the first, robbing it of a large portion of its just rights. The only remedy for this is to, as far as possible drive useless labour and idleness out of existence. And, first, as to useless labour. Before making war upon this, we must learn to distinguish it from the useful. It appears to me, then, that all labour done directly and incidentally in carrying articles to their place of consumption, which could have been produced in sufficient abundance, with as little labour, at the place of consumption, as at the place they were carried from, is useless labour."12

Perhaps as a young man, Mr. Lincoln had done enough useless labor to last a lifetime. Mr. Lincoln did what was necessary, and he expected others to do the same. His work ethic was fundamental to his attitudes toward slavery. A man had the right to the fruits of his labors – and an obligation to pursue his labors to the best of his ability.
And the rewards of hard work were important in politics as well – one reason that the 1849 appointment of Justin Butterfield to the federal Land Commissioner's post so disturbed Lincoln. Butterfield hadn't worked in the election, and rewarding him for his lethargy was bad politics and bad government.

Liberty, work, and justice were closely connected concepts for Mr. Lincoln. "The world has never had a good definition of the word liberty, and the American people, just now, are much in want of one. We all declare for liberty; but in using the same word we do not all mean the same thing. With some the word liberty may mean for each man to do as he pleases with himself, and the product of his labor; while with others the same word may mean for some men to do as they please with other men, and the product of other men's labor. Here are two, not only different, but incompatable [sic] things, called by the same name – liberty. And it follows that each of the things is, by the respective parties, called by two different and incompatable [sic] names – liberty and tyranny," Mr. Lincoln told the U.S. Sanitary Commission Fair in Baltimore on April 18, 1864.

"The shepherd drives the wolf from the sheep's throat, for which the sheep thanks the shepherd as a liberator, while the wolf denounces him for the same act as the destroyer of liberty, especially as the sheep was a black one. Plainly the sheep and the wolf are not agreed upon a definition of the word liberty; and precisely the same difference prevails to-day among us human creatures, even in the North, and all professing to love liberty. Hence we behold the processes by which thousands are daily passing from under the yoke of bondage, hailed by some as the advance of liberty, and bewailed by others as the destruction of all liberty. Recently, as it seems, the people of Maryland have been doing something to define liberty; and thanks to them that, in what they have done, the wolf's dictionary, has been repudiated," continued Mr. Lincoln.13

Mr. Lincoln's philosophy was often revealed in letters designed for publication. One such letter was to Kentucky editor Albert G. Hodges in April 1864. President Lincoln began: "I am naturally antislavery. If slavery is not wrong, nothing is wrong. I cannot remember when I did not so think and feel, and yet I have never understood that the Presidency conferred upon me an unrestricted right to act officially upon this judgment and feeling."14 Mr. Lincoln understood that he could not act outside of the powers granted by the Constitution. The most powerful tool Mr. Lincoln had was the doctrine of military necessity, which he used to proclaim emancipation on January 1, 1863.

Mr. Lincoln's views on slavery did not depend on the Constitution alone, but were firmly rooted in the Declaration of Independence. He strongly believed that slavery was wrong – whatever the law stated. "If slavery is right, all words, acts, laws, and constitutions against it are themselves wrong, and should be silenced and swept away," said Mr. Lincoln in a speech in New Haven in early March 1860 in which he addressed the arguments of slavery's defenders: "If it is right, we cannot justly object to its nationality – its universality; if it is wrong, they cannot justly insist upon its extension – its enlargement. All they ask we could readily grant, if we thought slavery right; all we ask they could as readily grant, if they thought it wrong.
Their thinking it right, and our thinking it wrong, is the precise fact upon which depends the whole controversy. Thinking it right, as they do, they are not to blame for desiring its full recognition as being right; but thinking it wrong, as we do, can we yield to them? Can we cast our votes with their view, and against our own? In view of our moral, social, and political responsibilities, can we do this? Wrong as we think slavery is, we can yet afford to let it alone where it is, because that much is due to the necessity arising from its actual presence in the nation."15

The necessity was linked to constitutional provisions associated with the birth of the Union. No President, except in the gravest national emergency, could act alone outside the constitution. Mr. Lincoln also realized that the pursuit and protection of liberty required a long struggle. After President Lincoln issued the Emancipation Proclamation, New York Governor Edwin D. Morgan visited the White House. Mr. Lincoln told Morgan, who was also chairman of the Republican National Committee: "I do not agree with those who say that slavery is dead. We are like whalers who have been long on a chase – we have at last got the harpoon into the monster, but we must now look how we steer, or, with one 'flop' of his tail, he will yet send us all to eternity."16

Mr. Lincoln realized that freedom depended upon Union – but he also realized that some supporters of Union opposed the actions he had taken to grant freedom to Southern slaves. He addressed these critics in an open letter to Union supporters meeting in Springfield, Illinois, in September 1863:

"You dislike the emancipation proclamation; and, perhaps would have it retracted – You say it is unconstitutional – I think differently. I think the constitution invests it's [sic] commander in chief, with the law of war in time of war – The most that can be said, if so much, is that slaves are property. Is there – has there ever been – any question that by the law of war, property, both of enemies and friends, may be taken when needed? And is it not needed whenever taking it, helps us, or hurts the enemy? Armies, the world over, destroy enemie's property when they can not use it; and even destroy their own to keep it from the enemy – Civilized beligerents do all in their power to help themselves, or hurt the enemy, except a few things regarded as barbarous or cruel – Among the exceptions are the massacres of vanquished foes, and non combattants, male and female."

Biographers John G. Nicolay and John Hay wrote: "Admitting the general principle of international law, of the right of a belligerent to appropriate or destroy enemies' property, and applying it to the constitutional domestic war to suppress rebellion which he was then prosecuting, there came next the question of how his military decree of enfranchisement was practically to be applied. This point, though not fully discussed, is sufficiently indicated in several extracts. In the draft of a letter to Charles D. Robinson he wrote, August 17, 1864: 'The way these measures were to help the cause was not to be by magic or miracles, [but] by inducing the colored people to come bodily over from the rebel side to ours.' And in his letter to James C. Conkling of August 26, 1863, he says: 'But negroes, like other people, act upon motives. Why should they do anything for us if we will do nothing for them? If they stake their lives for us, they must be prompted by the strongest motive, even the promise of freedom.
And the promise, being made, must be kept.'"17

Long after Mr. Lincoln first issued the Draft Emancipation Proclamation, Mr. Lincoln met with two Wisconsin politicians. It was August 1864. Former Governor Alexander W. Randall and Judge Joseph T. Mills visited with President Lincoln at the Soldiers' Home on the outskirts of Washington. It was a low point in Union military fortunes and the President's political fortunes. Foes and friends alike seemed determined to deprive President Lincoln of a second term. So embattled did the President seem that Randall urged him to take a vacation from the conflict for two weeks. Mr. Lincoln said that "two or three weeks would do me good, but I cannot fly from my thoughts; my solicitude for this great country follows me where I go."18

The President then discussed with his Wisconsin visitors both the political situation and the impact of emancipation on the conflict. President Lincoln made it clear that by defending the Union, black soldiers had earned their freedom: "We have to hold territory in inclement and sickly places; where are the Democrats to do this? It was a free fight, and the field was open to the War Democrats to put down this rebellion by fighting against both master and slave long before the present policy was inaugurated." Clearly, President Lincoln was not about to forget the loyalty of black soldiers or the disloyalty of Confederate ones.

Judge Mills wrote: "I saw the President was a man of deep convictions, of abiding faith in justice, truth, and Providence. His voice was pleasant, his manner earnest and emphatic. As he warmed with his theme, his mind grew to the magnitude of his body. I felt I was in the presence of the great guiding intellect of the age, and that those 'huge Atlantean shoulders were fit to bear the weight of mightiest monarchies.' His transparent honesty, republican simplicity, his gushing sympathy for those who offered their lives for their country, his utter forgetfulness of self in his concern for its welfare, could not but inspire me with confidence that he was Heaven's instrument to conduct his people through this sea of blood to a Canaan of peace and freedom."20

Historian Don E. Fehrenbacher wrote of President Lincoln's Emancipation Proclamation: "In a sense, as historians fond of paradox are forever pointing out, it did not immediately liberate any slaves at all. And the Declaration of Independence, it might be added, did not immediately liberate a single colony from British rule. The people of Lincoln's time apparently had little doubt about the significance of the Proclamation. Jefferson Davis did not regard it as a mere scrap of paper, and neither did that most famous of former slaves, Frederick Douglass. He called it 'the greatest event of our nation's history.'"21

Historian James M. McPherson wrote: "Lincoln left no doubt of his convictions concerning the correct definition of liberty. And as commander in chief of an army of one million men armed with the most advanced weapons in the world, he wielded a great deal of power. In April 1864 this army was about to launch offensives that would produce casualties and destruction unprecedented even in this war that brought death to more Americans than all the country's other wars combined. Yet this was done in the name of liberty – to preserve the republic 'conceived in liberty' and to bring a 'new birth of freedom' to the slaves.
As Lincoln conceived it, power was the protector of liberty, not its enemy – except to the liberty of those who wished to do as they pleased with the product of other men's labor."22

According to Fehrenbacher, "There are two principal measures of a free society. One is the extent to which it optimizes individual liberty of all kinds. The other is the extent to which its decision-making processes are controlled ultimately by the people; for freedom held at the will of others is too precarious to provide a full sense of being free. Self-government, in Lincoln's view, is the foundation of freedom."23 Fehrenbacher wrote that President Lincoln "placed the principle of self-government above even his passion for the Union. More than that, he affirmed his adherence to the most critical and most fragile principle in the democratic process – namely, the requirement of minority submission to majority will."24

Mr. Lincoln was resolved to preserve the Union. "I expect to maintain this contest until successful, or till I die, or am conquered, or my term expires, or Congress or the country forsakes me."

Hay and Nicolay were young men who lived and worked at the White House. They had a front row seat for the drama of emancipation. They noted in their ten-volume biography that if "the Union arms were victorious, every step of that victory would become clothed with the mantle of law. But if, in addition, it should turn out that the Union arms had been rendered victorious through the help of the negro soldiers, called to the field by the promise of freedom contained in the proclamation, then the decree and its promise might rest secure in the certainty of legal execution and fulfillment. To restore the Union by the help of black soldiers under pledge of liberty, and then for the Union, under whatever legal doctrine or construction, to attempt to reenslave them, would be a wrong to which morality would revolt."25

Slavery was the cause. Disunion was the symptom. President Lincoln chose to administer emancipation as the treatment the Union required. Emancipation ultimately was the just penalty for rebellion and the reward for black military service in restoring the Union. Liberty was both a right conferred by the Declaration of Independence and an obligation of the Union incurred by the service of black soldiers.

John Hope Franklin wrote that "no one appreciated better than Lincoln the fact that the Emancipation Proclamation had a quite limited effect in freeing the slaves directly. It should be remembered, however, that in the Proclamation he called emancipation 'an act of justice,' and in later weeks and months he did everything he could to confirm his view that it was An Act of Justice."26

By recruiting black soldiers and employing them in combat, the government secured a moral obligation to black Americans which President Lincoln clearly understood. But the contract was not just moral. It was practical. President Lincoln wrote Charles D. Robinson in the summer of 1864: "Drive back to the support of the rebellion the physical force which the colored people now give and promise us, and neither the present nor any coming Administration can save the Union.
Take from us and give to the enemy the hundred and thirty-, forty, or fifty thousand colored persons now serving us as soldiers, seamen, and laborers and we cannot longer maintain the contest."27 Black soldiers were literally fighting for their own freedom.

"Emancipation and the enlistment of slaves as soldiers tremendously increased the stakes in this war, for the South as well as the North," wrote historian James M. McPherson. "Southerners vowed to fight 'to the last ditch' before yielding to a Yankee nation that could commit such execrable deeds. Gone was any hope of an armistice or a negotiated peace so long as the Lincoln administration was in power."28

Mr. Lincoln's course of action was slow but deliberate – designed to effect a permanent rather than a temporary change in the status of slavery in America. Nicolay and Hay saw that clearly: "The problem of statesmanship therefore was not one of theory, but of practice. Fame is due Mr. Lincoln, not alone because he decreed emancipation, but because events so shaped themselves under his guidance as to render the conception practical and the decree successful. Among the agencies he employed none proved more admirable or more powerful than this two-edged sword of the final proclamation, blending sentiment with force, leaguing liberty with Union, filling the voting armies at home and the fighting armies in the field. In the light of history we can see that by this edict Mr. Lincoln gave slavery its vital thrust, its mortal wound. It was the word of decision, the judgment without appeal, the sentence of doom."29

Historian LaWanda Cox wrote of Mr. Lincoln's actions on emancipation: "On occasion he acted boldly. More often, however, Lincoln was cautious, advancing one step at a time, and indirect, exerting influence behind the scenes. He could give a directive without appearing to do so, or even while disavowing it as such. Seeking to persuade, he would fashion an argument to fit the listener. Some statements were disingenuous, evasive, or deliberately ambiguous."30

In a letter to Kentucky editor Albert G. Hodges, for example, Mr. Lincoln somewhat disingenuously said, "I add a word which was not in the verbal conversation. In telling this tale I attempt no compliment to my own sagacity. I claim not to have controlled events, but confess plainly that events have controlled me. Now, at the end of three years struggle the nation's condition is not what either party, or any man devised, or expected. God alone can claim it. Whither it is tending seems plain. If God now wills the removal of a great wrong, and wills also that we of the North as well as you of the South, shall pay fairly for our complicity in that wrong, impartial history will find therein new cause to attest and revere the justice and goodness of God."31 Mr. Lincoln may not have controlled events, but he did a pretty good job trying to steer them.

Mr. Lincoln himself never claimed to be a liberator – but he did believe in liberation. President Lincoln told Interior Department official T. J. Barnett in late 1862 "that the foundations of slavery have been cracked by the war, by the rebels...and the masonry of the machine is in their own hands."32

Black historian Benjamin Quarles wrote in Lincoln and the Negro: "The Lincoln of the White House years had deep convictions about the wrongness of slavery. But as Chief Magistrate he made a sharp distinction between his personal beliefs and his official actions. Whatever was constitutional he must support regardless of his private feelings.
If the states, under the rights reserved to them, persisted in clinging to practices that he regarded as outmoded, he had no right to interfere. His job was to uphold the Constitution, not to impose his own standards of public morality."

Historian David Potter wrote: "In the long-run conflict between deeply held convictions on one hand and habits of conformity to the cultural practices of a binary society on the other, the gravitational forces were all in the direction of equality. By a static analysis, Lincoln was a mild opponent of slavery and a moderate defender of racial discrimination. By a dynamic analysis, he held a concept of humanity which impelled him inexorably in the direction of freedom and equality."34

Abolitionist Frederick Douglass understood this commitment. In his 1876 speech dedicating the Freedmen's monument in Lincoln Park east of the U.S. Capitol, Douglass said: "His great mission was to accomplish two things: first, to save his country from dismemberment and ruin; and second, to free his country from the great crime of slavery. To do one or the other, or both, he needed the earnest sympathy and the powerful cooperation of his loyal fellow countrymen. Without those primary and essential conditions to success his efforts would have been utterly fruitless. Had he put the abolition of slavery before the salvation of the Union, he would have inevitably driven from him a powerful class of the American people and rendered resistance to rebellion impossible. From the genuine abolition view, Mr. Lincoln seemed tardy, cold, dull, and indifferent, but measuring him by the sentiment of his country – a sentiment he was bound as a statesman to consult – he was swift, zealous, radical and determined."35

Historian Allen C. Guelzo noted: "When Frederick Douglass arrived at the White House in August, 1863, to meet Lincoln for the first time, he expected to meet a 'white man's president, entirely devoted to the welfare of the white men.' But he came away surprised to find Lincoln 'the first great man that I talked with in the United States freely who in no single instance reminded me of the difference between himself and myself, or the difference of color.' The reason, Douglass surmised, was 'because of the similarity with which I had fought my way up, we both starting at the lowest rung of the ladder.' This, in Douglass's mind, made Lincoln 'emphatically the black man's president.'"36

In undated notes to himself, foreshadowing the sublime Second Inaugural, Mr. Lincoln wrote: "The will of God prevails. In great contests each party claims to act in accordance with the will of God. Both may be, and one must be wrong. God can not be for, and against the same thing at the same time. In the present civil war it is quite possible that God's purpose is something different from the purpose of either party – and yet the human instrumentalities, working just as they do, are of the best adaptation to effect His purpose. I am almost ready to say this is probably true – that God wills this contest, and wills that it shall not end yet. By his mere quiet power, on the minds of the now contestants, He could have either saved or destroyed the Union without a human contest. Yet the contest began. And having begun He could give the final victory to either side any day. Yet the contest proceeds."37

Mr. Lincoln had no doubt about maintaining the contest until victory, for "in giving freedom to the slave, we assure freedom to the free."38
<urn:uuid:fdb0287c-f7b9-4a2f-b20c-f2f20fd875a2>
CC-MAIN-2013-20
http://www.mrlincolnandfreedom.org/content_inside.asp?ID=1&subjectID=1
2013-05-19T02:23:29Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00053-ip-10-60-113-184.ec2.internal.warc.gz
en
0.978382
5,816
How Massive Can Stars Be?

Stars Just Got Bigger: 300 Solar Mass Star Uncovered

The existence of these monsters -- millions of times more luminous than the Sun, losing weight through very powerful winds -- may provide an answer to the question "how massive can stars be?"

A team of astronomers led by Paul Crowther, Professor of Astrophysics at the University of Sheffield, has used ESO's Very Large Telescope (VLT), as well as archival data from the NASA/ESA Hubble Space Telescope, to study two young clusters of stars, NGC 3603 and RMC 136a, in detail. NGC 3603 is a cosmic factory where stars form frantically from the nebula's extended clouds of gas and dust, located 22,000 light-years away from the Sun. RMC 136a (more often known as R136) is another cluster of young, massive and hot stars, which is located inside the Tarantula Nebula, in one of our neighboring galaxies, the Large Magellanic Cloud, 165,000 light-years away.

The team found several stars with surface temperatures over 40,000 degrees, more than seven times hotter than our Sun, and a few tens of times larger and several million times brighter. Comparisons with models imply that several of these stars were born with masses in excess of 150 solar masses. The star R136a1, found in the R136 cluster, is the most massive star ever found, with a current mass of about 265 solar masses and with a birth weight of as much as 320 times that of the Sun. In NGC 3603, the astronomers could also directly measure the masses of two stars that belong to a double star system, as a validation of the models used. The stars A1, B and C in this cluster have estimated masses at birth above or close to 150 solar masses.

Very massive stars produce very powerful outflows. "Unlike humans, these stars are born heavy and lose weight as they age," says Paul Crowther. "Being a little over a million years old, the most extreme star R136a1 is already 'middle-aged' and has undergone an intense weight loss program, shedding a fifth of its initial mass over that time, or more than fifty solar masses."

"Its high mass would reduce the length of the Earth's year to three weeks, and it would bathe the Earth in incredibly intense ultraviolet radiation, rendering life on our planet impossible," says Raphael Hirschi from Keele University, who belongs to the team.

These super heavyweight stars are extremely rare, forming solely within the densest star clusters. Distinguishing the individual stars -- which has now been achieved for the first time -- requires the exquisite resolving power of the VLT's infrared instruments.

The team also estimated the maximum possible mass for the stars within these clusters and the relative number of the most massive ones. "The smallest stars are limited to more than about eighty times more than Jupiter, below which they are 'failed stars' or brown dwarfs," says team member Olivier Schnurr from the Astrophysikalisches Institut Potsdam. "Our new finding supports the previous view that there is also an upper limit to how big stars can get, although it raises the limit by a factor of two, to about 300 solar masses."

Within R136, only four stars weighed more than 150 solar masses at birth, yet they account for nearly half of the wind and radiation power of the entire cluster, comprising approximately 100,000 stars in total. R136a1 alone energizes its surroundings by more than a factor of fifty compared to the Orion Nebula cluster, the closest region of massive star formation to Earth.
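As a quick cross-check of those figures, using only the numbers quoted above (a 320 solar-mass birth weight, about 265 solar masses today, and an age of a little over a million years):

$$\Delta M = M_{\mathrm{birth}} - M_{\mathrm{now}} \approx 320\,M_{\odot} - 265\,M_{\odot} = 55\,M_{\odot}, \qquad \frac{\Delta M}{M_{\mathrm{birth}}} = \frac{55}{320} \approx 0.17 \approx \frac{1}{5}$$

Averaged over roughly $10^{6}$ years (a constant-rate simplification of ours, not the team's), that is a wind-driven loss of order $5\times10^{-5}\,M_{\odot}$ per year, consistent with Crowther's "more than fifty solar masses."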
Stars between about 8 and 150 solar masses explode at the end of their short lives as supernovae, leaving behind exotic remnants, either neutron stars or black holes. Having now established the existence of stars weighing between 150 and 300 solar masses, the astronomers’ findings raise the prospect of the existence of exceptionally bright, “pair instability supernovae” that completely blow themselves apart, failing to leave behind any remnant and dispersing up to ten solar masses of iron into their surroundings. A few candidates for such explosions have already been proposed in recent years. Not only is R136a1 the most massive star ever found, but it also has the highest luminosity too, close to 10 million times greater than the Sun. “Owing to the rarity of these monsters, I think it is unlikely that this new record will be broken any time soon,” concludes Crowther.
<urn:uuid:983f14a5-2d52-4a5b-8402-7e16928c99ea>
CC-MAIN-2013-20
http://www.astrobio.net/includes/html_to_doc_execute.php?id=3564&component=news
2013-06-18T22:51:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.948555
966
Nerve Function Tests for Evaluating Low Back Problems

Topic Overview

The nerves that carry messages to and from your legs come from your low back. By checking your muscle strength, your reflexes, and your sensation (feeling), your doctor can tell whether there is pressure on a nerve root coming from your spinal column. He or she can often also tell which nerve root is involved.

Muscle strength tests can detect true muscle weakness, which is one sign of pressure on a nerve root. (Sometimes leg weakness is actually due to pain, not pressure on a nerve.) Most people who have herniated discs that cause symptoms also have some nerve root compression. Specific muscles receive impulses from specific nerves, so finding out which muscles are weak shows your doctor where nerve roots are being compressed. Nerve root compression usually originates in the lumbosacral region.

Muscle strength tests include:
- Hip flexion. You sit on the edge of the exam table with your knees bent and feet hanging down. Then you lift your thigh up off the table while your doctor pushes down on your leg near your knee. (This test can also be done while you are lying on your back.) If your painful leg is weaker than the other leg, you may have nerve root compression at the higher part of your low back, in the area of the last thoracic and the first, second, and third lumbar vertebrae (T12, L1, L2, L3 region).
- Knee extension. While in the sitting position, you straighten out your knee while your doctor pushes down on your leg near your ankle. If your painful leg is weaker than the other leg, you may have nerve root compression at the second, third, or fourth lumbar vertebrae (L2, L3, or L4 region).
- Ankle dorsiflexion. While you are in the sitting position, your doctor pushes down on your feet while you try to pull your ankles upward. If there is weakness in one leg, the ankle will give way to the downward pressure. This is a sign of possible nerve root compression at the level of the fifth lumbar vertebra (L4 or L5 region).
- Great toe extension. While you are in the sitting position, your doctor pushes down on your big toes while you try to extend them (bend them back toward you). If there is weakness in one leg, its big toe will give way to the pressure. This is a sign of possible nerve root compression at the level of the fifth lumbar vertebra (L5 region).
- Plantar flexion power. You stand and rise up on your toes on both feet and then on each foot separately. Toe raises are difficult, if not impossible, to do if a particular nerve region is compressed. This is a sign of possible nerve root compression at the level of the first sacral vertebra (S1 region).

Just as your muscles receive signals through certain nerves, other nerves carry signals back to your spinal cord from specific sections of your skin and other tissues. Testing your sense of feeling helps your doctor find out what nerve root may be compressed. Your sense of feeling may be tested in several ways. Your doctor will probably ask you to close your eyes during this testing, because it's easy to imagine the feeling if you can see the test being done. Testing may include touching your skin lightly with a cotton ball or pricking your skin lightly with a pin.

| Area of skin | Nerve level |
| --- | --- |
| The front of your thigh | L1, L2, L3, L4 |
| The inside of your lower leg, from the knee to the inner ankle and arch | L4 |
| The top of your foot and toes | L5 |
| The outside of your ankle and foot | S1 |

Tendons attach the muscles to the bones.
Reflexes are little movements of the muscle when the tendon is tapped. A reflex can be decreased or absent if there is a problem with the nerve supply. To test your reflexes, your doctor will use a rubber hammer to tap firmly on the tendon. If certain reflexes are decreased or absent, it will show what nerve might be compressed. Not all nerve roots have a reflex associated with them.

- Patellar tendon reflex. You sit on the exam table with your knee bent and your foot hanging down, not touching the floor. Your doctor will use a rubber hammer to tap firmly on the tendon just below your kneecap. In a normal test, your knee will extend and lift your foot a little. A decreased or absent reflex may mean that there is compression in the L2, L3, or L4 region.
- Achilles tendon reflex. You sit on a table with your knees bent and feet hanging down, or you may be asked to lie down on your stomach with your legs straight and your feet off the edge of the exam table. Your doctor will use a rubber hammer to tap firmly on the Achilles tendon, which connects the muscle at the back of your calf to your heel bone. In a normal test, your foot will move as though you were going to point your toes. A decreased or absent reflex may mean that there is compression in the S1 region.

Credits

Primary Medical Reviewer: William H. Blahd, Jr., MD, FACEP - Emergency Medicine
Specialist Medical Reviewer: Robert B. Keller, MD - Orthopedics
Author: Healthwise Staff
Last Revised: December 14, 2011
<urn:uuid:918e3cd0-e3ef-4704-bc15-3e115e0cecc4>
CC-MAIN-2013-20
http://www.uwhealth.org/health/topic/special/nerve-function-tests-for-evaluating-low-back-problems/hw47243.html
2013-05-19T02:17:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.912595
1,213
Boffin dubs global warming 'irreversible'

Environmental researchers are such downers

Economies may rise and fall every few decades or so, but at least the hard work we've put into global warming is "irreversible" on the human time scale. That's according to research from a team of US environmental scientists published today in the journal Proceedings of the National Academy of Sciences.

The report claims that even if all carbon emissions could somehow be halted, the CO2 changes to Earth's surface temperature, rainfall, and sea levels will keep on truckin' for at least a millennium.

"People have imagined that if we stopped emitting carbon dioxide that the climate would go back to normal in 100 years or 200 years," said study author Susan Solomon. "What we're showing here is that's not right. It's essentially an irreversible change that will last for more than a thousand years."

Oh, for the job security of a carbon dioxide molecule.

The scientists say the oceans are currently absorbing much of the planet's excess heat, but that heat will eventually be released back into the air over the course of many hundreds of years.

Carbon concentrations in the atmosphere today stand at about 385 parts per million (ppm). Many environmental scientists have a vague hope of stabilizing CO2 in the atmosphere at 450ppm if major changes in carbon emissions are instituted post-haste.

But according to the study, if CO2 peaks at 450-600ppm, the result still could include persistent dry-season rainfall reductions comparable to those of the 1930s North American Dust Bowl in zones including southern Europe, northern and southern Africa, southwestern North America, and western Australia.

If carbon dioxide in the atmosphere reaches 600ppm, expansion of warming ocean waters alone (not taking into account melting glaciers and polar ice sheets) could cause sea levels to rise by at least 1.3 to 3.2 feet (0.4 to 1.0 meter) by the year 3000.

The authors claim to have relied on many different climate models to support their results. They said they focused on drying of particular regions and thermal expansion of the ocean because observations suggesting humans are contributing to climate change have already been measured.

So remember, readers: an extra hour to your car ride today may help make it a warm, sunny day at the London coral reef for your grateful descendants no matter what those UN hippies try. ®
<urn:uuid:cf8bf7c9-3955-498b-820a-1be6464512ae>
CC-MAIN-2013-20
http://www.theregister.co.uk/2009/01/28/global_warming_irreversible/
2013-05-18T05:31:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00053-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93308
492
Feline Infectious Peritonitis, often abbreviated "FIP," is a disease in the cat which often affects the lining of the chest and/or abdomen. There is still a lot not known about this disease. It has been recognized since the 1960's and is much more complex than many of the other cat diseases. It is currently thought that FIP is the second biggest killer of cats, second only to Feline Leukemia.

The disease is definitely contagious from cat to cat, but we do not know exactly how it is spread. The virus may be shed in the saliva, urine, and feces of infected cats. Most infections are thought to occur through the mouth or nose. It is often seen later in other cats in a household once a positive case has been diagnosed.

Signs of FIP often develop very slowly over a period of months. Early signs are very vague and mimic other diseases. Loss of appetite, high fever, and labored breathing are often the first signs. As the disease progresses, signs include very difficult breathing, distended abdomen, weight loss, and emaciation. Death will eventually occur from suffocation caused by a buildup of fluid in the chest restricting the ability of the lungs to inflate with air.

There are no known cures for FIP at this time. It is FATAL! Sometimes treatment is available that can provide temporary relief in some cats; however, it does not reverse the course of the disease, and in the end treatment is not successful.

The following recommendations will help control the disease:
- Isolate infected cats to prevent the spread if they are not euthanized.
- Practice good hygiene and sanitation with adequate cleaning of food and water bowls.
- If you have a cat testing positive for FIP, do not bring a new cat into the household as long as that cat is present. Thirty days after that cat is no longer present, other cats in the household should be tested for the disease before you adopt any new cat, possibly exposing them to the disease. Disinfecting with 4 ounces of Clorox in one gallon of water is effective in killing the virus (a quick dilution check appears below).
- Never allow your cat to live outside.
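As a rough sanity check on that disinfectant recipe (assuming US customary units, where one gallon is 128 fluid ounces):

$$\frac{4\ \mathrm{oz\ bleach}}{128\ \mathrm{oz\ water}} = \frac{1}{32} \approx 3.1\%\ \mathrm{(v/v)}$$

That works out to about a 1:32 dilution, in the same range as commonly cited household-bleach disinfection mixes.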
<urn:uuid:80a21f59-db80-432e-ba2f-a4abb0c97640>
CC-MAIN-2013-20
http://fergusonanimal.com/resources_fip.shtml
2013-05-18T08:25:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381630/warc/CC-MAIN-20130516092621-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.97243
447
Northern Territory Acceptance Act 1910 (Cth)

The Governor-General assented to this law on 16 November 1910, providing for the transfer of the Northern Territory to the Commonwealth as the third Commonwealth territory, after Papua and the Federal Capital Territory.

Territorians were strong supporters of Federation in the 1898 referendum, but both the South Australian and Commonwealth parliaments were less decided on the issue of the transfer of the Territory to the new Commonwealth. Although the initial proposal by Premier Frederick William Holder was made in April 1901, it was not until December 1907 that the two governments executed a formal agreement for the transfer. Though South Australian Premier Tom Price secured enactment of the necessary Surrender Act in March 1908, the Commonwealth Act was slow in coming. Deakin and his advisors disagreed with some provisions of the South Australian Act. No sooner was that problem solved than the Deakin government fell to the Labor Opposition under Andrew Fisher. The Fisher Ministry, preoccupied with Labor's social aims, put aside the question of Territory transfer.

In May 1909 Deakin, at the head of the newly-created Fusion Party, returned to power. Five months later the Fusion ministry brought the Northern Territory Acceptance Bill into the House of Representatives. Most Members conceded that, for reasons of defence and development, the Commonwealth should acquire the Territory; but beneath the debate lay a deep sense of uneasiness born of the South Australian experience. Deakin alone expressed a bright pan-Australian vision. The Bill did not pass in 1909, for the Senate delayed it until it lapsed. When Andrew Fisher's Labor Ministry reintroduced the measure in 1910, Deakin threw the whole force of his influence behind it. The Bill passed, and from 1 January 1911 the Northern Territory became the exclusive responsibility of the Commonwealth of Australia. South Australian laws remained in force, subject to any later changes by the Commonwealth.

Donovan, Peter, A Land Full of Possibilities: A History of South Australia's Northern Territory, University of Queensland Press, Brisbane, 1981.
Powell, Alan, Far Country: A Short History of the Northern Territory, Melbourne University Press, Melbourne, 1996.

Detail from the cover of the Northern Territory Acceptance Act 1910 (Cth).

Long Title: An Act to provide for the Acceptance of the Northern Territory under the Authority of the Commonwealth and for the carrying out of the Agreement for the Surrender and Acceptance. (No. 20 of 1910)
No. of pages: 10 + cover; page 10 is blank
Medium: Parchment cover, blue silk ribbons, untrimmed pages
Measurements: 29 x 22.5 cm
Provenance: House of Representatives
Features: Signatures on p.1 and p.9
Location & Copyright: National Archives of Australia
Reference: NAA: A1559/1, 1910/20
<urn:uuid:bbffb2be-4be1-4d1f-bc10-4a3b20f06969>
CC-MAIN-2013-20
http://foundingdocs.gov.au/item-did-52.html
2013-05-21T10:42:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699899882/warc/CC-MAIN-20130516102459-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.929866
607
C++ is an object-oriented enhancement of the C programming language and is becoming the language of choice for serious software development. C++ has crossed the Single Book Complexity Barrier. The individual features are not all that complex, but when put together in a program they interact in highly non-intuitive ways (a small illustration follows below). Many books discuss each of the features separately, giving readers the illusion that they understand the language. But when they try to program, they're in for a painful surprise (even people who already know C).

C++: The Core Language is for C programmers transitioning to C++. It's designed to get readers up to speed quickly by covering an essential subset of the language. The subset consists of features without which it's just not C++, and a handful of others that make it a reasonably useful language. You can actually use this subset (using any compiler) to get familiar with the basics of the language. Once you really understand that much, it's time to do some programming and learn more from other books. After reading this book, you'll be far better equipped to get something useful out of a reference manual, a graphical user interface programming book, and maybe a book on the specific libraries you'll be using. (Take a look at our companion book, Practical C++ Programming.)

C++: The Core Language includes sidebars that give overviews of all the advanced features not covered, so that readers know they exist and how they fit in. It covers features common to all C++ compilers, including those on UNIX, Windows NT, Windows, DOS, and Macintosh.

Comparison: C++: The Core Language vs. Practical C++ Programming

O'Reilly's policy is not to publish two books on the same topic for the same audience. We'd rather spend twice the time on making one book the industry's best. So why do we have two C++ tutorials? Which one should you get? The answer is they're very different.

Steve Oualline, author of the successful book Practical C Programming, came to us with the idea of doing a C++ edition. Thus was born Practical C++ Programming. It's a comprehensive tutorial to C++, starting from the ground up. It also covers the programming process, style, and other important real-world issues. By providing exercises and problems with answers, the book helps you make sure you understand before you move on.

While that book was under development, we received the proposal for C++: The Core Language. Its innovative approach is to cover only a subset of the language -- the part that's most important to learn first -- and to assume readers already know C. The idea is that C++ is just too complicated to learn all at once. So, you learn the basics solidly from this short book, which prepares you to understand some of the 200+ other C++ books and to start programming. These two books are based on different philosophies and are for different audiences. But there is one way in which they work together. If you are a C programmer, we recommend you start with C++: The Core Language, then read about advanced topics and real-world problems in Practical C++ Programming.
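Here is the small illustration promised above: a minimal, self-contained sketch (ours, not an excerpt from either book) in which two individually simple features, virtual functions and C-style pass-by-value, combine to produce object slicing, a classic surprise for C programmers:

```cpp
#include <iostream>

struct Animal {
    virtual ~Animal() {}
    virtual const char* speak() const { return "..."; }
};

struct Dog : Animal {
    const char* speak() const { return "woof"; }
};

// Pass-by-value copies only the Animal part of the argument; the copy
// is a plain Animal, so the Dog override has been "sliced" away.
void greetByValue(Animal a) { std::cout << a.speak() << '\n'; }

// Pass-by-reference keeps the original object, so the virtual call
// dispatches to Dog::speak as intended.
void greetByReference(const Animal& a) { std::cout << a.speak() << '\n'; }

int main() {
    Dog d;
    greetByValue(d);      // prints "..." (surprising)
    greetByReference(d);  // prints "woof"
    return 0;
}
```

Each feature is easy to explain on its own; it is the combination that bites, which is exactly the argument for mastering a coherent core subset before branching out.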
<urn:uuid:4db01cfa-2e93-426c-92cc-863dea62f75a>
CC-MAIN-2013-20
http://www.amazon.ca/C-Core-Language-Doug-Brown/dp/156592116X
2013-05-20T12:24:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.954799
655
Looking east down Fourth Street in busy central San Rafael, it's hard to imagine that mission orchards and vineyards producing fruit in abundance once spread from east of the Mission to Irwin Street.

The Mission San Rafael Arcángel was founded in 1817 as an asistencia, a hospital to which San Francisco's Mission Dolores sent Indians to recover from illness. San Rafael's temperate climate made it an ideal spot for healing. San Rafael Arcángel became a true mission in 1822, one of the last to be founded by the Franciscans in California. The friars taught the Indians to raise animals and grow fruits and vegetables. The natural springs on the hill above the mission provided irrigation for the orchards, vineyards and fields of wheat, barley, corn, beans and peas. Eventually the mission became self-sustaining, with a surplus of produce to trade. By 1828, 1,140 Miwoks lived at the mission and tended its plants and livestock.

San Rafael Arcángel was also the first mission to be secularized in 1834 when the Mexican government dismantled the mission system, seizing the land and other assets from the Franciscans. General Mariano Vallejo, who administered the transition, hauled away most of the mission's livestock, vines and trees to his property in Sonoma. Another recipient of the bounty was General John Bidwell, who stated in a letter published in the May 1888 Overland Monthly, "It was in 1848 that I went to San Rafael to get pear trees and grape vines. I obtained them from Don Timoteo Murphy, who for many years under Mexican rule had been the Administrator of the mission."

According to Betty Goerke, author of Chief Marin: Leader, Rebel, and Legend, some native people returned to work the orchards and gardens after secularization. Time, however, treated both the mission grounds and the Indians harshly.

Recalling Mission Pears

In its heyday, the mission was known for its delicious pears. Several eyewitness narratives survive from various points in the mission orchard's history. Charles Lauff, a Marin pioneer, describes what he saw in 1845 in a series of his reminiscences in the San Rafael Independent, Jan 25 - May 23, 1916:

"I remember in 1845 when I came to the county with General Fremont it was in the autumn, and the pears were ripe on these trees. They had quite a large orchard that extended from C Street to the Hotel Rafael. The trees were planted in a straight row, and grapes were planted between the trees. The fruit was excellent and we carried some away with us. The General, who was a very talkative fellow, remarked that the pears were the first fruit that he had tasted in several years."

Lauff also mentions the state of the pear trees in 1916: "The old pear trees back of the Masonic building, at the southeast corner of Fifth and De Hiery Streets, San Rafael, were planted by the Mission Fathers in 1817, and they have borne fruit every year since. They never require pruning and the fruit is quite palatable."

In another account in the San Rafael Independent, August 14, 1917, Juan Garcia mentions the mission orchards: "My father, Corporal Rafael Garcia, was in charge of the building of the Mission San Rafael.

"All the land from B Street east along Fourth Street and north of Fourth Street was planted in fruit and grapes by the missionaries. The orchards were intact thirty years ago and the first property owners to cut into the orchard were Hepburn Wilkins, Douglass Saunders, Oliver Irwin and others.
"The same pear trees planted by the Mission Fathers can be found back of the Masonic Hall today, and there are several back of the Herzog property on Fourth Street, opposite Cijos Street. There is also one, still bearing fruit, back of the Magnes lot on Fourth Street.” In his 1880 History of Marin County, J.P. Munro-Fraser provides another description of the aging orchard: “Contiguous to the mission there was a vast orchard and garden that extended from the Wilkins' place down to the thoroughfare known as Irwin street and from thence to the Marsh land. “Standing at the lower end of the town, a little below the court house, are a dozen or more trees, gnarled in appearance, grey with time and bowed with age, which, without their clothing of foliage, have all the appearance of good old oaks that have stood the brunt of battle with many a fierce gale. These are the remains of the pear trees which formerly stood in the ancient mission orchard.” By the late 1800s, development had encroached on the orchard and fields to where the Sausalito News, September 16, 1892 reported: “Ever since San Rafael began to build streets and houses the ancient trees grew alone by the main thoroughfare, always an object of historical interest and curiosity. A week or two since they were cut down, with the exception of a few smaller ones in the rear, and now sections of the trunks, fully three feet or more in diameter, may be seen on the sidewalk. They will be preserved as curiosities to show how large fruit trees may grow in California.” Pear Promoter George D. Shearer The remaining pear trees were not totally neglected. George D. Shearer, a prominent Realtor, auctioneer and owner of the “Everything Auction House and Storage Company,” took an interest in the old mission pear trees. The Sausalito News of May 19, 1893 reported that George D. Shearer would supervise the Marin County Excursion to the World's Fair, which left San Francisco on June 1, 1893. "Be sure you see George at once and secure a good berth. Have you seen what he puts up in his overland lunch baskets? Oh my!” The paper read. Shearer may have placed mission pears in his lunch baskets, as he brought a load of the pears to exhibit at the Chicago World's Fair. He also exhibited a slab of the largest remaining pear tree. In the ensuing years, Shearer continued to exhibit the nearly century-old pears in fairs and exhibitions around the state, including San Francisco's Panama-Pacific Exhibition in 1915. Shearer died in 1923, and without a promoter, the pear trees also disappeared. With construction of the El Rey Apartments in the 1929, all but one tree was destroyed. It stood in a courtyard behind the El Rey Apartments at 845 Fifth Avenue next to the Masonic Building on Lootens Place. Saving the Trees A pear from the tree was exhibited as a museum artifact at a meeting of the Marin County Historical Society on September 20, 1939. The society also owned a branch from one of the original pear trees donated by Mary (Mrs. Thomas) Wintringham. Wintringham's daughter, Georgia, described in a 1966 letter to historian Lucretia Little her memories of the last trees: "There was a row just inside the fence on the lot back of the Masonic Building before the apartment house on the corner of Fifth Avenue was built. My mother tried to get people interested in buying the lot, turning it into a playground and saving the trees but was unable to get enough people enthused." Last Pear Tree Destroyed In late 1963, workers ripped out this last remaining tree. 
According to the report by Alton S. Bock in the January 2, 1964 Marin Independent Journal, Harry Albert, son of Jacob Albert, had purchased both the Marin Municipal Water District building on Fourth Street and the El Rey Apartments. He planned to use the rear of the properties for a parking lot and needed to add a driveway between the El Rey courtyard and the Masonic Building. Sometime in December 1963, the tree was destroyed to make way for the driveway. Albert had planned to save the tree, but he died before he made those plans known.

John C. Oglesby, a 10-year resident of the El Rey Apartments who had long watched over the tree from his apartment window, discovered too late that the tree was gone. A civil engineer and former Marin County surveyor, Oglesby rushed to the dump to see if he could save a bit of the tree but found no remains.

The news saddened San Rafael residents who valued the last living remnant of the old mission. Mabel J. (Mrs. Albert) Siemer wrote in a letter to the editor of the Independent Journal:

"My husband built the El Rey Apartments and felt very badly about having to sacrifice the trees, but at that time no one seemed to want them. He did have the one tree transplanted to the rear court of the El Rey, where as you know it continued to bear pears.

"At that time he had a gavel of the old wood made and it was presented to the of San Rafael where I hope it is being used."

Saved by Graftings

Thinking all the trees lost, residents perked up when they learned that the pears lived on thanks to the forethought and talents of nurseryman Richard Lohrmann, founder in 1909 of a nursery in San Rafael's West End. Lohrmann had taken a graft from one of the trees back in 1929, when the trees were destroyed to make way for the El Rey Apartments. He grafted the mission pear onto a tree that produced both a German variety and a French variety.

"I called up the editor, Craemer, and I told him that it was not a lost cause because we still had a tree in the nursery and I would graft them over for the following year and then I would give them to various people. And I was successful. I think I grafted about twenty or so trees and I gave one for Mrs. Moya del Pino; one I planted in front of the Catholic Church. It's still standing there."

Pear Trees Planted

In 1965 Karl Untermann gave one of the trees to Saint Rafael Church. He planted it with Rev. Thomas Kennedy and eighth graders Connie Croker and Kenny Andres of St. Raphael School during "plant a tree week." That tree no longer stands. Yet another of the grafted trees still thrives next to the Jose Moya del Pino Library, home of the Ross Historical Society, in the Marin Art and Garden Center. About 20 trees were produced in all, so other grafted mission pears may be growing in the county.

Karl Untermann purchased his Uncle Lohrmann's nursery and operated it as West End Nursery until he turned it over to his son Tom in 1990. Now Tom and his son Chris continue the family nursery business and can be proud of their family's role in saving San Rafael's mission pear.

Other resources were provided by the Anne T. Kent California Room of the Marin County Library.
In 1508 da Vinci started a large cartoon in charcoal as a study for a painting commissioned by King Louis XII of France. Now known as The Virgin and Child with St. Anne and St. John the Baptist, the charcoal drawing was created on eight sheets of paper that have been glued together to form one large sheet. Depictions of the Virgin and Child with either St. Anne or St. John the Baptist were popular themes during the Italian Renaissance. As with many of Leonardo da Vinci's later works, the cartoon remains unfinished; it now hangs in the National Gallery in London.

In honor of the da Vinci: The Genius exhibition at MOSI, we asked local artists of all ages to contribute pieces of art inspired by the works of da Vinci. This cartoon was the inspiration for two very different interpretations of da Vinci's work. The first of the pieces inspired by this work was painted by a local Tampa artist named Greg Latch.

About the Artist: Greg Latch

Greg Latch set aside his aspirations of becoming a basketball player at a young age upon noticing the attention that artists received. His skill has evolved from a triumph in a 6th grade art competition, to drawing for his church at an older age, to his da Vinci-inspired work displayed at MOSI today. You can view more of his work at latchart.com.

The second piece was created by a 12th grade high school artist named Cady Gonzalez from Wiregrass Ranch High School. This inspired piece of art was created using a selection of charcoals, just as Leonardo da Vinci would have done.

These two very different interpretations of the same piece of art help to show how the work of a Renaissance master still influences the art of our modern age. Leonardo da Vinci was considered one of the finest painters of Renaissance Italy and was known for his subtle shading and careful treatment of faces to bring forth all of the beauty of the human form into his art. The two interpretations of da Vinci's creation can be seen in the MOSI Founder's Hall before you enter the da Vinci: The Genius exhibit.
So now the task of gathering information on endangered species begins. And the first task in gathering information is to look for the 'best' sources of information on endangered species around the world. That is how I plan to spend this next month: looking for credible sources of information about endangered species. These would include international bodies like CITES or the IUCN, and government sources from countries around the world, like the Fish and Wildlife Service (FWS) in the United States. There are also a number of non-government sources of endangered species information, like the World Wildlife Fund (WWF) or the Wildlife Conservation Society (WCS). And then there are projects that were created by individual scientists like Jane Goodall, or by concerned individuals like Peter Maas, who created 'The Sixth Extinction' website (a wonderful extinction resource).

And then there are also the zoos and aquariums around the world that both house endangered species and work to protect them in the wild. The San Diego Zoo is such a facility. Anyone who knows me knows I spend A LOT of time at the San Diego Zoo. It is one of my favorite places in the world. In fact, I am in San Diego at this moment 'gathering' information to launch this part of the 'Endangered Earth Journal' and 'a Tiger Journal'. I'm not sure exactly how many endangered species can be found at the San Diego Zoo, but I have counted over 60. And every endangered species at the zoo has a sign at its enclosure with habitat information and an explanation of why the animal is endangered.

Now, it would be easy to just leave it at that, but here is where it gets interesting. And it's where the 'gathering' of endangered species information can get complicated. The San Diego Zoo has a Spectacled Bear on exhibit (see picture above). According to the sign at the enclosure, the Spectacled Bear is 'endangered' (see image below). However, according to CITES and the US Fish and Wildlife Service (FWS), it is not listed as endangered (see the endangered species list at Bagheera, under 'B' for bear, not 'S' for Spectacled). When I asked an animal keeper about this discrepancy, she explained, correctly, that often the status of an animal might be endangered in one part of its range and not in another, or that the status of an animal changes so quickly that the information is hard to keep track of.

And that's exactly right. Information about the status of endangered species is hard to keep track of, and it is also hard (and expensive) to gather. To complicate matters, there is often no agreement among some of the 'major' endangered species organizations on what the exact status of an animal is (see FWS and CITES for examples of this).

So is the Spectacled Bear endangered or not? I don't know yet, but I do plan to pursue that in a future journal entry. For the purpose of this journal, and for my goal of updating the Bagheera and Endangered Earth websites over the course of this next year, the point is clear: gathering information about endangered species from expert sources around the world will not necessarily be a simple task. But it is an important one. The 'status' of an animal (vulnerable, endangered, critically endangered) determines the level of protection it receives under many of the laws written to protect endangered animals. And those animals that are 'endangered' deserve, and need, all the protection they can get.

For more information about endangered animals go to Bagheera.
For more information about endangered tigers go to Tigers in Crisis.
Nag Panchami, or Nagarapanchami, is the worship of Nagas, or snakes, and is an important festival in North and East India in the Shravan month. Snakes are an indispensable part of Hindu religion, and two of the most popular gods in Hinduism, Lord Vishnu and Lord Shiva, are closely associated with serpents. Lord Vishnu has the several-hooded snake Ananta as his bed and Lord Shiva wears snakes as his ornament, and this close association has deep symbolic meaning. In 2011, Nag Panchami falls on August 4.

Nag Panchami is observed at two different times. It is observed on the fifth day after Purnima in the Ashar month in eastern parts of India, where the festival is known as Nagpanchami Manasa Devi Ashtanag Puja. The more important Nag Panchami, which is observed throughout India, falls on the fifth day after Amavasi in the Shravan month. Manasa Devi, the snake goddess, is worshipped on this day in Bengal, Orissa and several parts of North India. Special idols of Goddess Manasa are made and are worshipped during this period.

Fasting on Nag Panchami

People also observe Vrata: some communities fast during the daytime and eat food only after sunset. Some people avoid salt on the day, consuming food without it. Deep-fried foods are avoided. Some communities in South India have an elaborate oil bath on the day. There is a belief that unmarried women who undertake the Nagpanchami Vrat, do the puja and feed snakes will get good husbands.

In Punjab, Nag Panchami is known as Guga Navami, and a huge snake made from flour is worshipped on this day. Legend has it that Lord Krishna overpowered the huge black snake Kalia, who terrorized his village, on this day.

The monsoon season is at its peak during the Shravana month (July - August). The snakes move out of their burrows, which are filled with water, and occupy spaces frequented by human beings. So it is widely believed that Nag Panchami is observed to please the snake gods and avoid snake bites during this season.

In many places, two idols of snakes are drawn on both sides of doors using cow dung on this day. Five-hooded idols are worshipped in many regions; these idols are made using mud, turmeric, sandal and saffron. Milk is offered to the snake idols, and in some extreme forms of worship people feed milk to live cobras.

The festival of Nag Panchami is yet another example of the influence of Mother Nature on Hinduism. It also shows the need for human beings to respect animals, which play an important role in the survival of human beings.
Struts Dispatch Action (org.apache.struts.actions.DispatchAction) is one of the built-in Actions provided with the Struts framework. The DispatchAction class enables a user to collect related functions into a single Action, eliminating the need to create multiple independent Actions for each function. This example will help you grasp the concept better.

Developing an Action Class (Dispatch_Action.java)

Let's develop the Dispatch_Action class, a subclass of org.apache.struts.actions.DispatchAction. This class does not provide its own implementation of the execute() method, because DispatchAction itself implements execute() and delegates the request to one of the methods of the derived Action class. An action mapping in the Struts configuration file selects the particular method to invoke. The Dispatch_Action class contains multiple methods, i.e., add(), edit(), search() and save(). All of the methods take the same input parameters, but each method returns a different ActionForward: "add" in the case of the add() method, "edit" in the case of edit(), and so on. Each ActionForward is defined in the struts-config.xml file (the action mapping is shown later on this page). A minimal sketch of this class appears at the end of this tutorial.

Developing an ActionForm Class (DispatchActionForm.java)

Our form bean class contains only one property, "parameter", which plays the prime role in this example. Based on the parameter value, the appropriate function of the Action class is executed. A sketch of this form bean also appears at the end of this tutorial.

Defining the form bean in the struts-config.xml file

Add an entry to the struts-config.xml file defining the form bean. A typical entry (the package name here is an assumption) looks like this:

<form-beans>
  <form-bean name="DispatchActionForm"
             type="roseindia.web.struts.form.DispatchActionForm"/>
</form-beans>

Developing the Action Mapping in struts-config.xml

Here, the action mapping selects the method of the Action class for a specific request. Note that the attribute named parameter gives the name of the request parameter whose value is used to delegate the request to the required method of the Dispatch_Action class. A complete mapping consistent with the links used in the JSP below (the path and type values are assumptions based on the rest of this example) would be:

<action path="/DispatchAction"
        type="roseindia.web.struts.action.Dispatch_Action"
        name="DispatchActionForm"
        scope="request"
        parameter="parameter"
        input="/pages/DispatchAction.jsp">
  <forward name="add" path="/pages/DispatchActionAdd.jsp" />
  <forward name="edit" path="/pages/DispatchActionEdit.jsp" />
  <forward name="search" path="/pages/DispatchActionSearch.jsp" />
  <forward name="save" path="/pages/DispatchActionSave.jsp" />
</action>

Developing the JSP page

Code of the JSP (DispatchAction.jsp) that delegates requests to the different sections:

<%@ taglib uri="/WEB-INF/struts-bean.tld" prefix="bean"%>
<%@ taglib uri="/WEB-INF/struts-html.tld" prefix="html"%>
<TITLE>Dispatch Action Example</TITLE>
<H3>Dispatch Action Example</H3>
<p><html:link page="/DispatchAction.do?parameter=add">Call Add Section</html:link></p>
<p><html:link page="/DispatchAction.do?parameter=edit">Call Edit Section</html:link></p>
<p><html:link page="/DispatchAction.do?parameter=search">Call Search Section</html:link></p>
<p><html:link page="/DispatchAction.do?parameter=save">Call Save Section</html:link></p>

Add the following line to index.jsp to link to the example page:

<html:link page="/pages/DispatchAction.jsp">Struts Dispatch Action Example</html:link>

This example demonstrates how the DispatchAction class works.

Building and Testing the Example

To build and deploy the application, go to the Struts\Strutstutorial directory and type ant at the command prompt. This will deploy the application. Open the browser and navigate to the DispatchAction.jsp page. Your browser displays the DispatchAction page.
Selecting Call Add Section displays the DispatchActionAdd.jsp page.

Selecting Call Edit Section displays the DispatchActionEdit.jsp page.

Selecting Call Search Section displays the DispatchActionSearch.jsp page.

Selecting Call Save Section displays the DispatchActionSave.jsp page.
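For reference, here is a minimal sketch of the two classes used in this example. The package names are assumptions (use whatever package structure your application follows), and each dispatch method simply returns the matching forward, which is the usual DispatchAction pattern; the original classes may well have done more work in each method.

// Dispatch_Action.java - minimal sketch; package name assumed
package roseindia.web.struts.action;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;
import org.apache.struts.actions.DispatchAction;

public class Dispatch_Action extends DispatchAction {

    // Invoked for DispatchAction.do?parameter=add
    public ActionForward add(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        return mapping.findForward("add");
    }

    // Invoked for DispatchAction.do?parameter=edit
    public ActionForward edit(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        return mapping.findForward("edit");
    }

    // Invoked for DispatchAction.do?parameter=search
    public ActionForward search(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        return mapping.findForward("search");
    }

    // Invoked for DispatchAction.do?parameter=save
    public ActionForward save(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        return mapping.findForward("save");
    }
}

// DispatchActionForm.java - minimal sketch; package name assumed
package roseindia.web.struts.form;

import org.apache.struts.action.ActionForm;

public class DispatchActionForm extends ActionForm {

    // Holds the value of the "parameter" request parameter that
    // DispatchAction uses to pick the method to invoke.
    private String parameter = "";

    public String getParameter() {
        return parameter;
    }

    public void setParameter(String parameter) {
        this.parameter = parameter;
    }
}

Because DispatchAction implements execute() itself, none of these methods has to inspect the request by hand: the framework reads the request parameter named by the parameter attribute in the action mapping and invokes the method of the same name.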
squamous cell carcinoma (SKWAY-mus sel KAR-sih-NOH-muh) Cancer that begins in squamous cells. Squamous cells are thin, flat cells that look like fish scales, and are found in the tissue that forms the surface of the skin, the lining of the hollow organs of the body, and the lining of the respiratory and digestive tracts. Most cancers of the anus, cervix, head and neck, and vagina are squamous cell carcinomas. Also called epidermoid carcinoma.
Understanding Heartburn -- the Basics

What Is Heartburn?

Despite its name, heartburn has nothing to do with the heart. Some of the symptoms, however, are similar to those of a heart attack or heart disease. Heartburn is an irritation of the esophagus that is caused by stomach acid. This can create a burning discomfort in the upper abdomen or below the breast bone.

With gravity's help, a muscular valve called the lower esophageal sphincter, or LES, keeps stomach acid in the stomach. The LES is located where the esophagus meets the stomach, below the rib cage and slightly left of center. Normally it opens to allow food into the stomach or to permit belching; then it closes again. But if the LES opens too often or does not close tight enough, stomach acid can reflux, or seep, into the esophagus and cause the burning sensation.

Occasional heartburn isn't dangerous, but chronic heartburn or gastroesophageal reflux disease (GERD) can sometimes lead to serious problems. Heartburn is a weekly occurrence for about 20% of Americans and very common in pregnant women.

What Causes Heartburn?

The basic cause of heartburn is a lower esophageal sphincter, or LES, that doesn't tighten as it should. Two excesses often contribute to this problem: too much food in the stomach (overeating) or too much pressure on the stomach (frequently from obesity or pregnancy). Certain foods commonly relax the LES, including tomatoes, citrus fruits, garlic, onions, chocolate, coffee, alcohol, caffeinated products, and peppermint. Dishes high in fats and oils (animal or vegetable) often lead to heartburn, as do certain medications. Stress and lack of sleep can increase acid production and can cause heartburn. And smoking, which relaxes the LES and stimulates stomach acid, is a major contributor.
Reviewed August 2007

What is the official name of the IRGM gene?

The official name of this gene is "immunity-related GTPase family, M." IRGM is the gene's official symbol. The IRGM gene is also known by other names, listed below. Read more about gene names and symbols on the About page.

What is the normal function of the IRGM gene?

The IRGM gene provides instructions for making a protein that plays an important role in the immune system. This protein is involved in a process called autophagy, which cells use to surround and destroy foreign invaders such as bacteria and viruses. Specifically, the IRGM protein helps trigger autophagy in cells infected with certain kinds of bacteria (mycobacteria), including the type of bacteria that causes tuberculosis. In addition to protecting cells from infection, autophagy is used to recycle worn-out cell parts and break down certain proteins when they are no longer needed. This process also plays an important role in controlled cell death (apoptosis).

How are changes in the IRGM gene related to health conditions?

Where is the IRGM gene located?

Cytogenetic location: 5q33.1
Molecular location on chromosome 5: base pairs 150,226,084 to 150,228,230

The IRGM gene is located on the long (q) arm of chromosome 5 at position 33.1. More precisely, the IRGM gene is located from base pair 150,226,084 to base pair 150,228,230 on chromosome 5. See How do geneticists indicate the location of a gene? in the Handbook.

Where can I find additional information about IRGM?

You and your healthcare professional may find the following resources about IRGM helpful. You may also be interested in these resources, which are designed for genetics professionals and researchers.

What other names do people use for the IRGM gene or gene products?

See How are genetic conditions and genes named? in the Handbook.

Where can I find general information about genes?

The Handbook provides basic information about genetics in clear language. These links provide additional genetics resources that may be useful.

What glossary definitions help with understanding IRGM?

You may find definitions for these and many other terms in the Genetics Home Reference Glossary. See also Understanding Medical Terminology.
Ok, hist of sci people out there, help me out:

In Hilchot Yesodei Hatorah chapter 3, Maimonides lays out the geocentric model of the cosmos, epicycles and all. At 3:8, he notes that Earth is about 40 times larger than the moon, the sun is about 170 times larger than Earth, the sun is the largest of the "stars" (a category that includes planets too), and Mercury is the smallest.

Comparing this with the actual data we know now: the sun is of course larger than any of the planets, and Mercury is the second smallest of the heavenly bodies known at that time (the moon is smaller). The sun's radius is 109 times Earth's radius, so the Rambam's number is pretty decent, within a factor of 2. Earth's radius is 3.67 times the radius of the moon, so that figure is considerably further off (by an order of magnitude). The ratio of Earth's volume to the moon's volume is 49 (volume scales as the cube of the radius, and 3.67 cubed is about 49), which is much closer to the Rambam's number of 40 (and he doesn't actually specify which dimension he's talking about), though if we understand the Rambam's ratios to be about volume rather than linear dimension, then the sun-to-Earth ratio is thrown way off.

So my question is this: HOW THE HECK DID HE KNOW? (And by "he", I mean the Rambam himself, or ancient Greek astronomers, or medieval Arab astronomers, or wherever he's getting his data from.) Even if the moon number is considerably further off than the sun number, it's still an impressive feat to know that the moon is smaller than Earth (by any amount) even though the sun is much larger (by an amount that he basically got right), given that the sun and moon are the same apparent size in the sky. He knew that the sun was farther away than the moon (which can be reasonably inferred from the sun's apparent orbital period being longer), which would mean that the sun is larger than the moon if they're the same apparent size. But it's not clear how he got any sort of quantitative relationship between those sizes (the ratio he gives between the sizes of the sun and the moon doesn't have any obvious mathematical relationship to the ratio of their orbital periods), let alone how he could compare either of them to the size of the Earth. (Did he have some version of Kepler's Third Law?)

I know that Eratosthenes measured the circumference of the Earth, but did pre-modern astronomers have any sense of how far away the sun or other celestial bodies were? And as for the sizes of the planets (the ones we would call planets, not the sun and the moon), how could anyone resolve any finite sizes, rather than just seeing them as points of light? I can't blame him for thinking the moon is larger, but how did he know that Mercury is smaller than Venus, Mars, Jupiter, and Saturn?

Pardon me if these questions are ignorant; I would be fascinated to know the answers. Thanks!
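For anyone who wants to check the arithmetic above, here are a few lines of Python comparing the Rambam's ratios with modern measurements. The modern radii are rounded values, and the "error factor" is just the larger of the two numbers divided by the smaller.

# Modern mean radii, in kilometers (rounded)
R_SUN, R_EARTH, R_MOON = 696000.0, 6371.0, 1737.0

# Ratios given in Hilchot Yesodei Hatorah 3:8
RAMBAM_SUN_TO_EARTH = 170.0   # "the sun is about 170 times larger than Earth"
RAMBAM_EARTH_TO_MOON = 40.0   # "Earth is about 40 times larger than the moon"

def error_factor(claimed, actual):
    """How far off a claimed ratio is, as a multiplicative factor."""
    return max(claimed, actual) / min(claimed, actual)

print(R_SUN / R_EARTH)                                     # ~109 (radius ratio)
print(error_factor(RAMBAM_SUN_TO_EARTH, R_SUN / R_EARTH))  # ~1.6: within a factor of 2

print(R_EARTH / R_MOON)          # ~3.7: an order of magnitude below 40
print((R_EARTH / R_MOON) ** 3)   # ~49: the volume ratio, close to 40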
Emery County Historical Society President Evelyn Huntsman welcomes speaker Steve Taylor.

The Emery County Historical Society met at the Museum of the San Rafael to learn about the Paiute Indians from Steve Taylor of Fremont, Utah. Evelyn Huntsman opened the meeting with a welcome to all who came, then played several songs from the 1940s on the piano.

Val Payne came forward and discussed the reasons for joining the Old Spanish Trail Association. The Old Spanish Trail is America's 15th National Historic Trail; in 2002, Congress added the route to the National Historic Trails System.

Bernice Payne introduced historian Steve Taylor, a native of Fremont, which is named after John C. Fremont. Taylor has spent several years studying the Indians, the Spanish slave traders, the trappers and the Old Spanish Trail, and teaching Utah history. His topic this night was the Paiute Indians.

Taylor started with a question: what caused the Paiute Indians to disappear from the high plateau region in the first half of the 19th century? What were the circumstances surrounding them? He defined the high plateau region and its various valleys, mountains and mountain passes, and described what the Paiute Indians were like. They were very poor and easily preyed upon by other tribes, such as the Utes, and by the slave traders. The Paiutes made a very sparse living off the land and had no horses; they frequently killed the horses of people traveling through their valley for food.

Taylor mentioned the various trappers, explorers and traders who passed through Utah, such as Jedediah Smith, Daniel Hawks, Wolfskill and Yount, Parley P. Pratt, Kit Carson, the Indian agent Edward Fitzgerald Beale, and John C. Fremont. Drawing on their journals and diaries, he described the hardships they encountered and their contact with the Indians, which in the case of the Paiute was minimal. The Paiute Indians would send up smoke signals to warn other tribes of intruders into the valley where they lived.

The Black Hawk War was precipitated when the Mormon court in Salt Lake City stopped the slave traders from bartering for Indian slaves in Utah. The Utes' trade with the Spanish goes back into the 1600s, and there is evidence to suggest that extensive slave trade was going on during that period. The trade revolved around horses, Navajo blankets, the Old Spanish Trail and slaves.

Taylor pointed out that in order for the slave trade to really function, you had to have a commodity (people who could be taken into slavery) and you had to have a market. The Paiute Indians were the commodity; they were readily available. The market was the strong demand for Indian slaves in Santa Fe and in California. Girls brought $200-250 and boys about $150. This was Chief Walker's principal means of operating. Chief Walker would take Paiute slaves to California and come back along the Old Spanish Trail with horses and mules; prior to the Spanish Trail being established, he did not have a market. The Spanish would take Paiute Indians from the Ute Indians and market them in Los Angeles, then do the same thing on the way back to Santa Fe.

The second component of this lucrative trade was the horse trade. Traders could at that time buy horses and mules in California for $10-15 each and sell them in Missouri for $400-500 each. They moved as many as 4,000 head of horses at a time across the trail.
These Spanish traders would also take Navajo blankets from Santa Fe to the market in Los Angeles, bringing back horses and mules on the return trip.

It is believed that the influx of settlers into the region and the Spanish slave trade along the Old Spanish Trail wiped out the Paiutes in the high plateau region. There is still a small group of Indians in Grass Valley that ended up being called the Koosharem Band, with their headquarters there. These Gopher Paiute Indians are a last remnant of the Black Hawk War.

At the end of the lecture, Taylor handed out a list of the publications from which he gained the information he presented.
The Holocaust in Croatia

After the German invasion of Yugoslavia in April 1941, the country was divided between Germany and its allies: Italy, Bulgaria, and Hungary. The regions of Croatia (not including the Adriatic coast, which is today part of Croatia) and of Bosnia and Herzegovina were united into a puppet state, the so-called Independent State of Croatia, ruled by the Croatian fascist Ustaša movement. The Ustaša immediately embarked on a campaign "to purge Croatia of foreign elements". Hundreds of thousands of Serbs were expelled or sadistically murdered in camps established by the Ustaša.

The concentration of Jews in camps began in June 1941. By the end of that year, about two thirds of Croatia's Jews had been sent to Ustaša camps, where most of them were killed on arrival. In August 1942 and May 1943 the Germans deported the remaining Jews from Croatia to Auschwitz. Of Croatia's 37,000 Jews, 30,000 perished in the Holocaust.
Filed under: Comfort at Camp, Preparation & Readiness, Uncategorized

CAMPSITE POWER
Chapter 4 - Converting Battery Power to Household Power

We have been exploring different ways to generate renewable power for a camper's 12 volt batteries, but we have not devoted any time to explaining how to change the power we have generated and stored in our 12 volt batteries into 120 volt household power. Reading Chapter 4 will give you the basics of the inversion process and the equipment needed.

I am sure most everyone is aware that a camper's 12 volt system is direct current: the electrons flowing in a wire go in only one direction, from negative to positive. 120 volt household power is alternating current, meaning the electrons change their direction of flow sixty times per second. Appliances designed to work on 12 volts DC will not operate on 120 volts AC, so we must do something to change the electrical power stored in a 12 volt battery to 120 volts AC.

We can accomplish this with a device called an inverter. Don't confuse this with the converter that is standard equipment on your camper. The converter changes 120 volt household power to 12 volts DC to recharge your battery and power 12 volt appliances. A converter brings voltage down; an inverter brings voltage up.

Inverters are available as little pocket units that plug into a vehicle's 12 volt power socket (cigarette lighter) and provide up to 100 watts of household power for low-current devices like laptop computers and electric shavers. Inverters that can provide power over 100 watts are best directly connected to a battery. These inverters range all the way up to a whopping 5,000 watts.

In Chapter 2 I explained the relationship between voltage and current (amperage) and how they are related to power. I also told you that energy can neither be created nor destroyed; it can only be converted. This fact is what makes the use of inverters somewhat tricky. The television in our camper is rated to use 200 watts of power at 120 volts, which works out to 1.66 amperes at 120 volts. If I want to convert the voltage from 12 volts DC to 120 volts AC, the power will stay the same: 200 watts. But 200 watts from a 12-volt battery is equal to 16.66 amperes of current! That's right: to get the same power, the current (amperage) is increased by at least a factor of 10 (just like the ratio of 120 volts to 12 volts). I can power our TV from a 500 watt inverter connected to four 50 amp-hour sealed AGM batteries in a nearby cabinet for about 10 hours; less if I use the satellite receiver or DVD player. That translates to about five nights at 2 hours per night. Actually, not all that bad!

If you are lost at this point, do not worry. Just remember that when inverting battery power to household power, the amps taken from the battery will increase by 10 times the 120-volt amperage.

Inverters in campers are rarely used to power air conditioners; both the size of the inverter and the needed battery bank would be tremendous. But inverters are sometimes called upon to power a small microwave or hand-held hair dryer. When plugging a camper's power cord into the outlet of an inverter, it is important to be sure the camper's refrigerator is set to GAS, the converter is turned off or unplugged, and the electric heating element (if you have one) in your water heater is turned off.
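Before we look at what that hookup can actually run, here is a short sketch that turns the watt and amp arithmetic above into a quick battery-runtime calculator. It is only an estimate: the 85% inverter efficiency and the 50% usable-capacity figures are my assumptions, not measurements, so adjust them for your own rig.

# A rough battery-runtime estimator built on this chapter's rules of thumb.
# The efficiency and usable-capacity values are assumptions, not measurements.

INVERTER_EFFICIENCY = 0.85  # assumed; most inverters run roughly 80-90%
USABLE_FRACTION = 0.50      # assumed; regularly draining deeper shortens battery life

def battery_runtime_hours(load_watts, bank_amp_hours, battery_volts=12.0):
    """Estimate how long a 12 volt bank can carry a 120 volt AC load."""
    # The current pulled from the battery is the AC load divided by the
    # battery voltage, inflated to cover inverter losses.
    amps_from_battery = load_watts / (battery_volts * INVERTER_EFFICIENCY)
    return (bank_amp_hours * USABLE_FRACTION) / amps_from_battery

# The TV example from this chapter: a 200 watt load on four 50 amp-hour
# batteries (200 amp-hours total).
print(round(200 / 12.0, 1))                        # 16.7 amps - the "factor of 10"
print(round(battery_runtime_hours(200, 200), 1))   # about 5 hours at a 50% discharge

Setting USABLE_FRACTION to 1.0 reproduces the roughly 10 hours I quoted for our TV, but draining the bank that far on a regular basis is hard on the batteries.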
Now, back to that hookup. In this set-up, 12 volt appliances and lights will continue to draw from the batteries, and their current must be added to any life-span computations for the attached inverter.

Use of inverters for coffee makers, electric heaters and toasters is possible, but extremely inefficient. Again, the battery bank needed to sustain power for any reasonable time would have to be extremely large, and large translates to 600 or more pounds of added battery weight, which often causes an overloaded camper.

Selecting the size of an inverter is not just a matter of picking out one with a high number. The more power an inverter supplies, the larger the battery cables will need to be. A 2,000 watt inverter will use up to four copper cables the diameter of your thumb. That's some pretty big (and expensive) wire! Since a 2,000 watt inverter can draw up to 167 amps from a battery bank, your battery life will be very short: a 100 amp hour battery will probably provide no more than half an hour of power. It will take a 45 watt solar panel array like we explored last week about 44 hours of direct sunlight to replace the energy used in that half hour. So I hope you can see what I mean when I say taking that much power from an inverter is inefficient!

My advice is to NOT plan on powering a microwave oven from an inverter unless it is rated under 700 watts and you have, at a minimum, a 1,200 watt inverter and four golf cart batteries capable of delivering 240 amp hours of power. Even then, your total cooking time will be limited to approximately 2 hours at the most; it will all depend on the battery temperature and starting state of charge. Inverters over 1,500 watts are usually found only in large motor homes that have sufficient space and extra weight capacity for batteries.

Lastly, you will find most inverters sold as "Modified Sine Wave" and others as "Pure Sine Wave". Inverters using a modified sine wave output are less expensive and may cause synchronous motors, like those found in fans, to hum or buzz. Television sets that do not have adequate line filtering may also have a buzz in the audio, and microwave ovens will produce less cooking power from a modified sine wave inverter. Pure sine wave inverters are closer to the household power your appliances are designed to use and will operate them more efficiently. While they are 3 to 4 times more expensive, their use will translate to longer battery life and better appliance operation. Still, a modified sine wave inverter will work satisfactorily in a budget installation, and no appliances or equipment should be harmed by plugging them into this type of inverter.

For those who may be curious, we carry 34 sealed absorbed glass mat batteries supplying a total of 2,500 amp hours of power. This allows us to run our modified 6,000 BTU bedroom air conditioner from a 1,500 watt inverter for at least three nights before recharging. I hate to sleep on sweaty sheets in 90-plus degree weather! But you must keep in mind that our batteries have a combined weight of over 2,500 pounds. That's when an ex-semi tractor as an RV hauler is really nice; the extra battery load just makes the truck ride smoother. We have 3 inverters: a 500 watt to power the entertainment center, a 3,000 watt for all of the camper's electrical outlets, and a 1,500 watt for the bedroom air conditioner.

Next week we will begin to explore portable generators. We will look at both inverter and synchronous designs and weigh the pros and cons of each. Until then, Happy Camping Trails to Everyone!
Ms. Wyatt's 4th Grade Class
Eden Elementary School
Pell City School System

Students in Ms. Wyatt's class at Eden Elementary School went on a hunt to find objects that are transparent, translucent, and opaque. This video defines the terms transparent, translucent, and opaque, and shows pictures of objects that fit each term.

Aligned to the following ALEX lesson plan: Translucent, Transparent, and Opaque Objects

Content Areas: Science

Alabama Course of Study Alignments and/or Professional Development Standard Alignments:
[S1] (4) 3: Recognize how light interacts with transparent, translucent, and opaque materials.
About the Program

The U.S. Department of Energy (DOE) Wind Program is committed to developing and deploying a portfolio of innovative technologies for clean, domestic power generation to support an ever-growing industry, targeted at producing 20% of our nation's electricity by 2030.

What We Do

The Program's activities are leading the nation's efforts to accelerate the deployment of wind power technologies through improved performance, lower costs, and reduced market barriers. The Program works with national laboratories, industry, universities, and other federal agencies to conduct research and development activities through competitively selected, directly funded, and cost-shared projects. Our efforts target both land-based and offshore wind power to fully support the clean energy economy.

Why It Matters

Greater use of the nation's abundant wind resources for electric power generation will help the nation reduce emissions of greenhouse gases and other air pollutants, diversify its energy supply, provide cost-competitive electricity to key regions across the country, and reduce water usage for power generation. In addition, wind energy deployment will help stimulate the revitalization of key sectors of the economy by investing in infrastructure and creating long-term, sustainable skilled jobs.

Reducing the Cost of Renewable Energy

The Wind Program is committed to helping the nation secure cost-competitive sources of renewable energy through the development and deployment of innovative wind power technologies. By investing in improvements to wind plant design, technology development, and operation as well as developing tools to identify the highest quality wind resources, the Wind Program serves as a leader in making wind energy technologies more competitive with traditional sources of energy and a larger part of our nation's renewable energy portfolio.

Securing Clean, Domestic Energy

The Wind Program is contributing to the nation's role as a leader in renewable energy technology development by promoting domestic manufacturing of wind power technologies. Wind energy is a clean, domestic power source that requires little to no water and creates no air pollution when compared to more traditional energy sources. The Program works to ensure that wind energy technologies are environmentally responsible by analyzing the environmental impacts of wind energy, observing species' interactions with wind turbines, and researching opportunities to mitigate or eliminate any impacts where they may exist.

Enabling the Renewable Energy Market

By working with industry, federal and international partners, and national laboratories, the Wind Program seeks to understand and address market barriers such as environmental impacts, project siting and permitting processes, and wind's potential effects on our nation's air space and waterways. These efforts will help wind power continue on its trajectory to being a competitive, cost-effective part of our nation's renewable energy portfolio.

Harnessing Energy Where Our Nation Needs It Most

Wind energy presents a unique opportunity to harness energy in areas where our country's populations need it most. This includes offshore wind's potential to provide power to population centers near coastlines, and land-based wind's ability to deliver electricity to rural communities with few other local sources of power.
By working to deploy wind power in new areas on land and at sea, and by ensuring the stable, secure integration of this power into our nation's electrical grid, the Wind Program contributes to the delivery of clean, renewable energy throughout the nation.

The Wind Program funds research and development activities at national laboratories across the country.
It is one of the great pleasures of an art lover to open a book about art that actually includes full-colour plates. That was important to author Marylin McKay too, and she couldn't be more delighted with how Picturing the Land: Narrating Territories in Canadian Landscape Art, 1500-1950 turned out. Published by McGill-Queen's University Press, it's a handsome and comprehensive volume that takes a look at Canadian landscape art over five centuries.

Dr. McKay, professor in NSCAD's Division of Historical and Critical Studies, worked on the book over a period of four years, spending time in archives and visiting galleries from coast to coast. A specialist in Canadian art, she says Canadians might be surprised by one of her conclusions: that Canadian art, and specifically the art of Tom Thomson and the Group of Seven, isn't uniquely Canadian at all, but rather a reflection of the larger art movements prevalent in Western society.

"I think they got a lot of good press," she says of the eight artists whose work has come to be regarded as iconic Canadiana today. "They came along at the right time, when Canadians were looking for a nationalist art. People were able to see it as unique when in fact it fits into a broader picture."

More interesting, she says, might be the differences between French and English Canadian landscape painting. For example, a painting of a farm by an English Canadian artist might represent "a cozy retreat from the city," she says, while a similar scene for a French Canadian could be something quite different: an affirmation of French culture, rural living and a way of holding on to a way of life under threat by assimilation.

Clearly, Picturing the Land is more than pretty pictures; it is a critical, sophisticated and perhaps surprising look at how social, economic and political conditions influence art and how art is regarded. The book is one of five shortlisted titles for the Canada Prize in the Humanities, which recognizes outstanding scholarly works in the humanities. The prize is valued at $2,500 and will be presented at a special ceremony on Friday, March 30 at the Musée des beaux-arts in Montreal. The nominees are chosen from works supported by the Canadian Federation for the Humanities and Social Sciences' Awards to Scholarly Publications Program and are selected by a jury of scholars from across the country.

Dr. McKay is also the author of A National Soul: Canadian Mural Painting, 1860s-1930s. Picturing the Land is available at Chapters in Halifax's Bayers Lake Business Park or by ordering online through Amazon.ca, Chapters.Indigo.ca or the publisher McGill-Queen's University Press.

Picturing the Land: Narrating Territories in Canadian Landscape Art, 1500 to 1950 (left) and Professor Marylin McKay (below).
A world war is a war affecting the majority of the world's major nations. World wars usually span multiple continents and are very bloody and destructive. The term has usually been applied to two conflicts of unprecedented scale and slaughter that occurred during the 20th century: the First World War, also known as the Great War (1914–1918), and the Second World War (1939–1945).

Origins of the term

The term "world war" was coined speculatively in the early 20th century, some years before the First World War broke out, probably as a literal translation of the German word "Weltkrieg". The Oxford English Dictionary cites the first known usage as being in April 1909, in the pages of the Westminster Gazette. It was recognized that the complex system of opposing alliances (the German Empire, Austria-Hungary and Italy vs. the French Third Republic, the Russian Empire, the United Kingdom of Great Britain and Ireland, and Serbia) was likely to lead to a global conflict in the event of war breaking out. The fact that the powers involved had large overseas empires virtually guaranteed that a conflict would be global, as the colonies' resources would be a crucial strategic factor. The same strategic considerations also ensured that the combatants would strike at each other's colonies, thus spreading the fighting far more widely than in the pre-colonial era.

Prior to 1939, the European war of 1914–1918 was usually called either the World War or the Great War. Only after the start of hostilities in 1939 did the earlier conflict become commonly known as the First World War. This is easily observed today when visiting the numerous First World War monuments and memorials found throughout Europe and North America. Such memorials, most of which were constructed in the 1920s, plainly refer to the World War or the Great War. Occasionally, a contemporary marker will indicate 1919 as the year the war ended (e.g., The World War, 1914-1919), which refers to the date of the Treaty of Versailles as the official end of the war, rather than the Armistice in 1918, which in effect ended the actual hostilities.

In 1933, Simon & Schuster published a photographic history of the war, edited by playwright and war veteran Laurence Stallings, with the title The First World War. A feature-length documentary film, also written by Stallings and titled The First World War, was released in November 1934. Three months before World War II began in Europe, Time magazine first used the term "World War I" in its issue of June 12, 1939, when comparing the last war with the upcoming war.

The term "Second World War" was also coined in the 1920s. In 1928, US Secretary of State Frank B. Kellogg advocated his treaty "for the renunciation of war" (known as the Kellogg-Briand Pact) as being a "practical guarantee against a second world war". The term came into widespread use as soon as the war began in 1939. Time magazine introduced the term "World War II" in the same article of June 12, 1939, in which it introduced "World War I", three months before the start of the second war.

Other languages have also adopted the "world war" terminology; for instance, in French, the two World Wars are the Guerres Mondiales; in German, the Erste und Zweite Weltkrieg; in Russian, the мировые войны; and so on.
Earlier worldwide conflicts

Other examples suitable to be classified as world wars in terms of their intercontinental and intercultural scope were the Mongol invasions leading to the Mongol Empire, which spanned Eurasia from China, Japan, and Korea to Persia, Mesopotamia, the Balkans, Hungary and Russia, and the Dutch-Portuguese War from the 1580s to the 1650s, which was fought throughout the Atlantic, Brazil, West Africa, Southern Africa, the Indian Ocean, India and Indonesia, and which has been called the first intercontinental resource war.

Other wars in earlier periods that saw conflict across the world have been considered world wars by some, including the War of the Spanish Succession (1701–1713) and the Seven Years' War (1756–1763), which Winston Churchill called "the first world war" in A History of the English-Speaking Peoples, as well as the French Revolutionary Wars (1792–1802) and the Napoleonic Wars (1803–1815). These, however, were confined to the European powers and their colonial empires and offshoots. The Asian powers were not involved (counting the Ottoman Empire as a European power in this instance).

Prior to the late 19th century, the concept of a world war would not have had much meaning. The Asian powers of China and Japan did not act outside their own continents, and they certainly did not conduct affairs on an equal footing with the European powers; China was the target of European colonialism, while Japan remained isolationist until the 1850s. The European conflicts of earlier centuries were essentially quarrels between powers which took place in fairly limited, though sometimes far-flung, theaters of conflict. Where native inhabitants of other continents were involved, they generally participated as local auxiliaries rather than as allies of equal status fighting in multiple theaters. For instance, in Britain's wars against France, Native Americans assisted both European powers on their own ground rather than being shipped to continental Europe to serve as allied troops there. By contrast, during the World Wars, millions of troops from Africa, Asia, North America and Australasia served alongside the colonial powers in Europe and other theatres of war.

Characteristics of the World Wars

The two World Wars of the 20th century took place on every continent on Earth save Antarctica, with the bulk of the fighting taking place in Europe and Asia. They involved more combatant nations and more individual combatants than any other conflicts. The World Wars were also the first wars to be fought in all three terrestrial elements (ground, sea and air) and depended, more than any previous conflict, on the mobilization of industrial and scientific resources. They were the first instance in which the doctrine of total war was fully applied, with drastic effects on the participants. Many of the nations who fought in the First World War also fought in the Second, although not always on the same sides.

Some historians have characterized the World Wars as a single "European civil war" spanning the period 1914–1945. This is arguably an oversimplification, as the European aspect of the Second World War might never have happened had Adolf Hitler not come to power; it also overlooks the war in the Far East caused by Japan's programme of territorial expansion, which started independently of events in Europe.

The World Wars were made possible, above all else, by a combination of fast communications (such as the telegraph and radio) and fast transportation (the steamship and railroad).
This enabled military action to be coordinated rapidly over a very wide area and permitted troops to be transported quickly in large numbers on a global scale.

Effects of the World Wars

The two World Wars of the 20th century caused unprecedented casualties and destruction across the theaters of conflict. The numbers killed in the wars are estimated at between 60 and 100 million people. Unlike in most previous conflicts, civilians suffered as badly as or worse than soldiers, and the distinction between combatants and civilians was often erased.

Both World Wars in comparison (estimated data):
World War I: 4 M km²
World War II: 22 M km²

The outcome of the World Wars had a profound effect on the course of world history. The old European empires collapsed or were dismantled as a direct result of the wars' crushing costs and, in some cases, the defeats of imperial powers. The modern international security, economic and diplomatic system was created in the aftermath of the wars. Institutions such as NATO, the United Nations and the European Union were established to "collectivise" international affairs, with the explicit aim of preventing another outbreak of general war. The wars also greatly changed the course of daily life: technologies developed during wartime had a profound effect on peacetime life as well, for instance, jet aircraft, penicillin, nuclear energy and electronic computers.

Since the Second World War was ended in August 1945 by the atomic bombings of Hiroshima and Nagasaki, there has been a widespread and prolonged fear of a Third World War between nuclear-armed superpowers. The fact that this has not come to pass has been attributed by many to the devastating and essentially unwinnable nature of nuclear warfare, the end result of which would be the extermination of human life or, at the very least, the collapse of civilization. When asked what kind of weapons would be used to fight World War III, the physicist Albert Einstein replied: "I don't know with what weapons World War III will be fought, but World War IV will be fought with sticks and stones."

Subsequent world wars

Some groups define "world war" such that the Cold War should be termed a world war. Others claim that the current "War on Terrorism" is a world war. The Project for the New American Century holds both views, calling the Cold War "World War III" and the War on Terrorism "World War IV"; this view was also endorsed by Jean Baudrillard. However, these characterizations have attracted little support and have not been agreed upon by the majority of historians.

Notes
1. Online Etymology Dictionary, entry for "world war".
2. "War Machines," Time, June 12, 1939.
<urn:uuid:96513005-2cbe-43e1-8de4-cd2d8d66ffae>
CC-MAIN-2013-20
http://www.bookyards.com/categories.html?type=books&category_id=243
2013-05-24T09:12:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704433753/warc/CC-MAIN-20130516114033-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.963838
1,966
Definition of Cervidae
1. Noun. Deer: reindeer; moose or elks; muntjacs; roe deer.
Generic synonyms: Mammal Family
Group relationships: Ruminantia, Suborder Ruminantia
Member holonyms: Cervid, Deer, Cervus, Genus Cervus, Genus Odocoileus, Odocoileus, Alces, Genus Alces, Dama, Genus Dama, Capreolus, Genus Capreolus, Genus Rangifer, Rangifer, Genus Mazama, Mazama, Genus Muntiacus, Muntiacus, Genus Moschus, Moschus, Elaphurus, Genus Elaphurus
Literary usage of Cervidae
Below you will find example usage of this term as found in modern and/or classical literature:
1. The Archaeological Journal by British Archaeological Association (1908) "Four harpoons and two small implements made of the horns of cervidae from the Grotte de Reilhac. (§) precise position in the deposits—most of them having ..."
2. The White River Badlands by Cleophas Cisney O'Harra (1920) "cervidae Until 1904 nothing was known of the ancestral deer within the region of the White River badlands. In that year Mr. Matthew described a fragmentary ..."
3. Bulletin of the American Museum of Natural History by American Museum of Natural History (1904) "Lateral toes usually better developed than in preceding group. Horns postorbital. cervidae. Deciduous branching antlers. ..."
4. The Collected Scientific Papers of the Late Alfred Henry Garrod by Alfred Henry Garrod, William Alexander Forbes (1881) "Neither in any of the American cervidae, except C. leucotis, nor in Rangifer ... and differs from the cervidae generally. In C. leucotis they are so. ..."
<urn:uuid:5d843413-99f3-48b8-b737-06a0b16181f6>
CC-MAIN-2013-20
http://www.lexic.us/definition-of/cervidae
2013-06-19T06:09:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.837312
477
As teachers, we have received extensive training to understand student behavior. But, even with years of teaching experience and a bag full of tricks, student behavior can make or break a classroom environment. So, what's the solution to this age-old problem? Don't try to tackle behavior on your own. Understanding behavior takes teamwork. Involve students' families in the process. Click on the pictures above for a handout that can be used to educate families. This handout addresses the following topics: Is Behavior a Sign? What are Typical Behaviors? What are Some Behavioral Signs? How to Steer Towards Positive Behavior. Plus, there is a "parent toolkit" with tips and resources. Here are some additional resources to help communicate with students' families. Looking for a weekly behavior report template…click here Need a behavior report with more than one week on a page…click here Need a more detailed means to communicate behavior issues with parents…click here
<urn:uuid:609625a5-ca99-4a23-a5a4-df449cd3bf5b>
CC-MAIN-2013-20
http://firstgradefactory.blogspot.com/2011_12_01_archive.html
2013-05-22T00:49:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700984410/warc/CC-MAIN-20130516104304-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.932608
210
Partitions in a computer system are logical spaces on a hard disk, created to separate the system files from the data files. The rationale for keeping the system files away from the data files is to ensure that there is enough space for virtual memory paging and swapping. Another benefit of this separation is that if one of the non-system drives gets corrupted, the other partitions remain safe. Yet partition corruption is still a serious problem: it creates a great deal of trouble, and partitions fall prey to corruption quite often. Corruption can strike a partition for various reasons, such as power outages, improper system shutdown, damaged system files, virus infections, software failure or malfunction. Because of such corruption, you may come across one of the following error messages:
Operating system not found
Missing Operating System
When one of these errors appears, you cannot boot your Windows XP operating system and are consequently stopped from accessing the system...
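A quick way to see this system/data split on a live machine is to enumerate the partitions and their usage. The sketch below is illustrative only; it assumes the third-party psutil package is installed and is not tied to any particular recovery tool.

```python
# List every mounted partition with its filesystem and usage, so a
# healthy "system + data" layout can be checked at a glance.
import psutil

for part in psutil.disk_partitions():
    try:
        usage = psutil.disk_usage(part.mountpoint)
    except PermissionError:
        continue  # skip mountpoints the current user cannot read
    print(f"{part.device} mounted at {part.mountpoint} "
          f"({part.fstype}): {usage.percent}% used, "
          f"{usage.free / 1e9:.1f} GB free")
```

A partition that has vanished from this listing, or that reports filesystem errors, is a first hint of the corruption scenarios described above.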
<urn:uuid:9c2ef2e3-61a6-4b29-9848-cb7df1d07938>
CC-MAIN-2013-20
http://www.programmersheaven.com/user/johnwilson/blog/tags/windows+data+recovery+software/
2013-06-18T04:34:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706933615/warc/CC-MAIN-20130516122213-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.92814
191
An important new study from the Laboratory for Developmental Genetics at USC has confirmed cytomegalovirus (CMV) as a cause of the most common salivary gland cancers. CMV joins a group of fewer than 10 identified oncoviruses — cancer-causing viruses — including HPV. The findings, published online in the journal Experimental and Molecular Pathology over the weekend, are the latest in a series of studies by USC researchers that together demonstrate CMV's role as an oncovirus, a virus that can either trigger cancer in healthy cells or exploit mutant cell weaknesses to enhance tumor formation. Lead author Michael Melnick, professor of developmental genetics in the Ostrow School of Dentistry of USC and co-director of the Laboratory for Developmental Genetics, said the conclusion that CMV is an oncovirus came after rigorous study of both human salivary gland tumors and the salivary glands of postnatal mice. CMV's classification as an oncovirus has important implications for human health. The virus, which has an extremely high prevalence in humans, can cause severe illness and death in patients with compromised immune systems and can cause birth defects if a woman is exposed to CMV for the first time while pregnant. It may also be connected to other cancers besides salivary gland cancer, Melnick added. "CMV is incredibly common; most of us likely carry it because of our exposure to it," he said. "In healthy patients with normal immune systems, it becomes dormant and resides inactive in the salivary glands. No one knows what reactivates it." This study illustrates not only that the CMV in the tumors is active but also that the amount of virus-created proteins found is positively correlated with the severity of the cancer, Melnick said. Previous work with mice satisfied other important criteria needed to link CMV to cancer. After salivary glands obtained from newborn mice were exposed to purified CMV, cancer developed. In addition, efforts to stop the cancer's progression identified how the virus was acting upon the cells to spark the disease. Thus, the team not only uncovered the connection between CMV and mucoepidermoid carcinoma, the most common type of salivary gland cancer, but also identified a specific molecular signaling pathway exploited by the virus to create tumors, a pathway that is the same in humans and mice. "Typically, this pathway is only active during embryonic growth and development," Melnick said, "but when CMV turns it back on, the resulting growth is a malignant tumor that supports production of more and more of the virus." The study was conducted by Melnick with Ostrow School of Dentistry of USC colleagues Tina Jaskoll, professor of developmental genetics and co-director of the Laboratory for Developmental Genetics; Parish Sedghizadeh, director of the USC Center for Biofilms and associate professor of diagnostic sciences; and Carl Allen at The Ohio State University. Jaskoll said salivary gland cancers can be particularly problematic because they often go undiagnosed until they reach a late stage. And since the affected area is near the face, surgical treatment can be quite extensive and seriously detrimental to a patient's quality of life. However, with the new information about CMV's connection to cancer comes hope for new prevention and treatment methods, perhaps akin to the development of measures to mitigate human papilloma virus (HPV) after its connection to cervical cancer was established.
Jaskoll added that the mouse salivary gland model created to connect CMV to cancer might also be used to design more effective treatments. "This could allow us to have more rational design of drugs used to treat these tumors," she said. Melnick said that in the not too distant future, he expects much more information to emerge about viruses and their connections to cancer and other health issues seemingly unrelated to viral infection. "This should be a most fruitful area of investigation for a long time to come," he said. "This is just the tip of the iceberg with viruses." Source: University of Southern California
<urn:uuid:d1414465-05e5-4049-af4f-8ef857b7d5ab>
CC-MAIN-2013-20
http://www.biologynews.net/archives/2011/11/15/researchers_confirm_new_cancercausing_virus.html
2013-05-18T19:03:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382705/warc/CC-MAIN-20130516092622-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.958794
838
Virtual Reality: Resurrecting The Original Canoes and Kayaks of North America
Edwin Tappan Adney spent six decades studying and reproducing the aboriginal canoes of North America—a lifetime's worth of work that nearly went unheralded. Adney's obsession began at age 20, when he documented the construction of a New Brunswick Malecite birchbark canoe in the 1880s. From then on, traditional canoes consumed Adney's life. By the time he was 81, he'd traveled across the continent multiple times and recorded the lines of hundreds of canoes. After several false starts, he died before he had a chance to complete a book on the subject. Fortunately, then-Smithsonian Institution curator Howard Chappelle resurrected Adney's work, and in 1964 The Bark Canoes and Skin Boats of North America was released. As Adney hoped, it became the definitive work on the subject—lavishly illustrated with plan drawings and diagrams for canoes and sea kayaks (Chappelle's specialty) from across present-day Canada, the United States and Greenland. The book is said to have "[saved] the craft from oblivion," inspiring a new generation of birchbark canoe- and skin-on-frame kayak-builders across the continent. Among those influenced by Adney and Chappelle's work is Grand Marais, Minn.'s Bryan Hansel. A photographer, paddler, and blogger at paddlinglight.com, Hansel has built eight cedar-strip paddlecraft of his own and has aspirations to build more. Lacking a place to undertake his annual winter boat-building project, Hansel has dedicated this winter to reproducing and modernizing the canoes and kayaks of Adney and Chappelle, and making detailed plans available online for modern wood-strip builders. His plan is to transfer Adney and Chappelle's research "into a form that's useable for modern cedar-strip construction." Hansel imports Adney and Chappelle's measurements into Delftship Pro, a nautical engineering software program that generates three-dimensional models and exact measurements to create plywood forms for modern boat-building. So far, Hansel has made several plans available for free download on his website, including an 1895 Malecite St. John River canoe and an 1898 Passamaquoddy ocean canoe. A new canoe or sea kayak plan will be posted on paddlinglight.com each week. Ultimately, Hansel's goal is to breathe new life into the canoes and sea kayaks of The Bark Canoes and Skin Boats of North America and other long-forgotten designs. He hopes to live vicariously through the experiences of those who take on the challenge of bringing these boat designs—many of which haven't been made in over a century—back to life. "I think it's a great experiment in archaeology," says Hansel. "If we go back and try to build these boats we'll get a better feel for what the original designs were like. It will give us a chance to compare them to the boats we're paddling now." – Conor Mihell
<urn:uuid:f47dde96-14e2-4860-b95c-6060c0fd3b7e>
CC-MAIN-2013-20
http://www.canoekayak.com/canoe/virtual-reality-original-canoes-and-kayaks-of-north-america/
2013-05-19T18:27:48Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945656
703
Population growth increases the production of waste materials on earth. Though different waste management processes have been implemented to minimize our waste, it still grows every day. Landfills are now being disapproved of by environmentalists and by the residents who would be most affected by the building of dump sites in their area, since having a landfill in an area can cause health issues and pollution that greatly affect the environment and the population.
Recycling facilities are being created to address this landfill problem. A recycling facility is responsible for receiving, separating and preparing recyclable waste materials that will either be used to produce new products or be sold to manufacturers. Even with recycling facilities, however, landfills remain useful, because that is where non-recyclable waste materials are stocked.
There are different kinds of recycling facility; the main distinction is between clean and dirty facilities, according to the types of waste material they accept. Wet recycling facilities are also used nowadays for specific wet or liquid types of waste material.
A clean recycling facility receives waste materials that have already been separated from the municipal solid waste before they reach the facility. The waste materials arrive sorted to certain specifications and are processed accordingly. The processing may include shredding, drying, crushing, compacting and preparing for shipment to the market.
A dirty recycling facility, on the other hand, accepts mixed waste materials. After receiving the mixed waste, the facility separates and segregates it into its different types through mechanical or manual sorting. The sorted waste materials that can be recycled undergo the processing procedure, while the remaining mixed waste is disposed of in a different kind of facility, such as a landfill.
A dirty recycling facility can be a great challenge, since it requires more labor and is more expensive to run than a clean recycling facility. However, a dirty recycling facility provides higher recovery rates than a clean one, since it can recover more recyclable material. In a clean facility the sorting has already been done at the source of the waste, whereas a dirty facility does the sorting itself and so has greater control in selecting the waste materials it can recycle or sell to the market.
<urn:uuid:9db3f72a-f086-45e9-a08d-49b43875ba87>
CC-MAIN-2013-20
http://tgeg-asia.blogspot.com/2012/01/recycling-facility.html
2013-05-25T05:31:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95937
448
The chickadees are starting to excavate their nest cavities. This Chestnut-backed Chickadee was working on a dead snag right along the trail at Nestucca Bay NWR. Even though the birds choose soft dead wood on which to work, it seems a herculean task for a bird with such a diminutive bill to excavate a cavity large enough for nesting. She can fit about half her body into the cavity so far. It takes about seven days to complete a nest. Here she spits out a mouthful of wood chips. Larger chips are carried away from the nest site before being dropped. A big pile of wood chips at the base of the nest tree would alert predators to the location of the nest.
<urn:uuid:3d88f16a-3dc8-4b29-8874-b905300686bf>
CC-MAIN-2013-20
http://johnrakestraw.net/2009/04/07/chestnut-backed-chickadee/
2013-05-24T22:28:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.96179
153
Tag Archives: Curriculum
How the brain works…and how emotional stability at home is the single greatest predictor of academic success! DK shared this video of a keynote from ISTE 2011 by Dr John Medina. Dr Medina is an entertaining speaker…you definitely won't fall asleep!! He starts by exploding myths such as right brain / left brain 'ways of thinking'. And …
The Super Book of Web Tools for Educators has a plethora of tips and ideas for teachers in primary, intermediate, and secondary schools, with a focus on topic areas such as ESL/EFL. Tools have been selected and described for their suitability for …
Today I received a query about whether it is possible to "get access to another country's curriculum, in this case Germany". The person posting the question is looking at some possible models to teach online. So, I set …
In this Slidecast Nick Rate gives a brief overview of student voice in four areas: student voice in reflections on learning, student voice in student led conferences, student voice in learning and school design, student …
Following on from a post I made a few weeks ago, Links to ideas, resources, and tools for learning and teaching Te Reo Māori, there has been growing interest in discipline-specific resources, in particular geography. …
There is much talk about student empowerment, engagement, and choice, and yet many education institutions stick to the topics and content required by the curriculum. But not all schools! Read more here…
<urn:uuid:acbb83e8-92d9-4196-ac46-f41b0ea644a7>
CC-MAIN-2013-20
http://ictenhancedlearningandteaching.wordpress.com/tag/curriculum/
2013-05-26T02:34:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00053-ip-10-60-113-184.ec2.internal.warc.gz
en
0.922434
426
Bell's palsy is a sudden weakness and paralysis on one side of the face. It is a temporary condition. Bell's palsy can occur in anyone but is most common in people with diabetes or those with a recent cold or flu infection.
Bell's palsy is caused by damage to a nerve of the face. The exact cause of this damage is unknown. The damage causes swelling along the nerve, and the swelling puts extra pressure on the nerve. This extra pressure leads to paralysis of part of the face. Some infections are believed to cause some cases of Bell's palsy. Herpes virus, flu virus, and Lyme disease may be associated with Bell's palsy.
Facial paralysis may also be caused by:
Factors that may increase your risk of Bell's palsy include:
Bell's palsy symptoms may come on suddenly or develop over a few days. Initial symptoms may include:
- Pain behind the ear that is followed by weakness and paralysis of the face
- Ringing sound in the ears
- Slight fever
- Slight hearing impairment
- Slight increase in sensitivity to sound on the affected side
Symptoms of full-blown Bell's palsy may include:
- Facial weakness or paralysis (look for a smooth forehead and problems smiling)—most often on one side
- Numbness just before the weakness starts
- Drooping corner of the mouth
- Decreased tearing
- Inability to close an eye, which can lead to: dry, red eyes and ulcers forming on the eye
- Problems with taste
- Sound sensitivity in one ear
- Slurred speech
Late complications can occur 3-4 months after onset and can include:
- Long-lasting tightening of the facial muscles
- Tearing from the eye while chewing
The doctor will ask about your symptoms and medical history. A physical exam will be done. Other tests may include:
- Hearing test—to see if nerve damage involves the hearing nerve, inner ear, or hearing mechanism
- Balance test—to see if balance nerves are involved
- Lumbar puncture—a test of the cerebrospinal fluid (CSF) from the lower back, to rule out meningitis, autoimmune disorders, or cancer spreading from a tumor
- Tear test—measures the eye's ability to produce tears
- Computed tomography (CT) scan—a type of x-ray that uses a computer to make pictures of structures inside the head, to see if there is an infection, tumor, bone fracture, or other problem in the area of the facial nerve
- Magnetic resonance imaging (MRI) scan—a test that uses magnetic waves to make pictures of structures inside the head, to see if there is an infection, tumor, bone fracture, or other problem in the area of the facial nerve
- Electrical test (NCV/EMG)—to evaluate for damage to the facial nerve
- Blood tests—to check for diabetes, HIV infection, or Lyme disease
For most people, treatment is not needed; symptoms often go away on their own within a few weeks, and in many people Bell's palsy completely resolves within a few months. For some people, however, some symptoms may never go away. If an underlying cause of the Bell's palsy is known, it may be treated, with treatment based on that condition. Some treatments that may be used for Bell's palsy include:
Your doctor may prescribe corticosteroids, a medication that can decrease swelling and pain. Antiviral medications may also be recommended. These help weaken viruses associated with Bell's palsy and will only be used if your doctor believes that the palsy is caused by a virus.
If the paralysis includes your eyelid, you may need to protect your eye. This may include:
- Applying lubricant or putting drops in the eye.
- Covering and taping the eye closed at night.
- Wearing an eye patch to keep the eye closed. This helps moisten the eye and keep particles out.
Massaging the weakened facial muscles may also help.
Symptoms can be very distressing. Counseling can help you manage emotional issues and make appropriate adjustments. Physical therapy and speech therapy may also help; therapists may reduce your symptoms or decrease their impact on your daily activities.
If you are diagnosed with Bell's palsy, follow your doctor's instructions.
There are no guidelines for preventing Bell's palsy. If you think you are at risk for Bell's palsy, talk to your doctor. There may be steps you can take to reduce your risk.
- Reviewer: Rimas Lukas, MD
- Review Date: 10/2012
- Update Date: 10/11/2012
<urn:uuid:5a22080d-29a4-4990-9109-85bca8c2900d>
CC-MAIN-2013-20
http://memorialhospitaljax.com/your-health/?/12019/Bell%E2%80%99s-palsy
2013-06-20T01:44:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.928818
995
On July 28, 2010, NOAA released the 2009 State of the Climate report. The report draws on data for 10 key climate indicators that all point to the same finding: the scientific evidence that our world is warming is unmistakable. More than 300 scientists from 160 research groups in 48 countries contributed to the report, which confirms that the past decade was the warmest on record and that the Earth has been growing warmer over the last 50 years. A 10-page summary of the report and a full supplemental package are available at http://www.ncdc.noaa.gov/bams-state-of-the-climate/2009.php.
<urn:uuid:c6ef6e01-3c9a-41b9-b19b-4c503ac3f530>
CC-MAIN-2013-20
http://www.globalchange.gov/resources/gallery?func=download&catid=21&id=442
2013-05-18T08:25:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381630/warc/CC-MAIN-20130516092621-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.918086
141
Doing their research: Siskiyou County officials study Coos Bay salmon project
By John Bowman, Siskiyou Daily News, Yreka, CA
Updated Feb. 18, 2013 @ 4:36 pm
COOS BAY, Ore. – Recently, Siskiyou County Supervisor Michael Kobseff and county Natural Resource Policy Specialist Ric Costales made the nearly five-hour drive to Coos Bay, Ore., to watch partially developed salmon eggs being planted in the gravel of coastal streambeds. The trip was part of the county's effort to implement the same process in the Shasta River in an attempt to aid the recovery of coho salmon there. Since the coho salmon was listed by the federal government as threatened in the southern Oregon and northern California coastal region in 1997, the fish has been a flashpoint in the water politics of the region and a driving force behind regulation and river restoration projects. Nearly five years ago – around the same time the California Department of Fish and Wildlife (CDFW) declared coho to be "functionally extinct" in the Shasta River – the county learned of a process called eyed-egg injection and decided to lobby fish and wildlife agencies for permission to try the technique on the Shasta. Eyed-egg injection is a process by which salmon eggs are incubated for several months in a hatchery (or other controlled setting) until the eggs begin to develop an eye – an indicator that the egg is within a few weeks of becoming a free-swimming fish – and then taken to a stream and planted under gravel in the natural streambed, where they finish their development. The process was pioneered by fisheries biologists in Alaska as a method to help rebuild decimated fish populations, and has shown relatively high success rates there. Salmon hatcheries have a long history on the west coast and a substantial impact on the genetics of its salmon populations. Conventional hatchery operations collect eggs and milt from returning hatchery fish, fertilize and incubate the eggs, then raise the juvenile fish in tanks or raceways until they are nearly ready to migrate downstream to the ocean, which, for coho, takes nearly a year. Many biologists have alleged that this process not only concentrates a fish population's gene pool, eliminating healthy genetic diversity, but also robs the juveniles of survival skills necessary in the wild. Proponents of eyed-egg injection say the technique retains the benefits of higher egg hatch rates through controlled incubation while allowing nature to play a vital role in gene and behavior selection – resulting in a stronger fish with higher odds of returning to natural streams to produce self-sustaining populations, provided that the in-stream habitat can support them. Paul Merz, a veteran commercial salmon fisherman based in Coos Bay, along with Oregon Department of Fish and Wildlife (ODFW) biologist Gary Vanderhoe, began planting eyed eggs in Coos Bay tributaries for the first time this year. The project is one of many being undertaken as part of ODFW's Salmon and Trout Enhancement Program (STEP), which has utilized volunteer efforts to bolster the agency's salmonid restoration efforts across the state since 1981. Merz has been a volunteer participant in the program since the beginning. As a fisherman whose livelihood depends on healthy salmon runs, he has chosen to take a hands-on approach to improving west coast salmon populations.
Merz says he has been in contact with officials in Siskiyou County for nearly five years and has made at least five visits to the county to collaborate on the local salmon supplementation efforts. After all, Merz says, between 40 and 80 percent of the salmon harvest off the central Oregon coast is made up of fish from northern California rivers. He also worked on an effort a few years ago to implement an eyed-egg injection project for spring Chinook on Siskiyou County's Salmon River. That project was ultimately denied by state and federal biologists. Current county efforts to implement the procedure on the Shasta River appear to have a better chance of acceptance by biologists and regulators. CDFW Director Charlton Bonham recently told the Daily News that he is impressed by the county's broad supplementation partnership with The Nature Conservancy, California Trout, the Siskiyou County Farm Bureau and the Shasta Valley Resource Conservation District. Bonham said the coalition's level of cooperation has given him hope that supplementation in the Shasta River can work. In addition, the same coalition organized the Upper Klamath River Coho Salmon Workshop in February 2012. The event, held in Yreka, brought together more than 50 prominent fisheries biologists, geneticists and restorationists to explore the idea of coho supplementation in the Shasta River and the upper Klamath. During the workshop's closing discussion, virtually all members of the panel agreed that – based on the imminent threat of complete extirpation of coho on the Shasta River – a supplementation program there would be justified. While eyed-egg injection is only one method of salmon supplementation, Siskiyou County officials say they find it appealing because of its relatively low cost, simplicity of implementation and high rate of effectiveness in Alaska.
Planting the eggs
Kobseff and Costales began their visit to Coos Bay at the Noble Creek Hatchery, where Merz explained how eggs were taken from wild coho, fertilized and then incubated. Merz and Vanderhoe then extracted the eggs from the hatch box where they had been incubating and placed them in a small cooler for travel to the injection sites. Three miles from the hatchery, on Catching Slough, Costales volunteered to shoulder the gas-powered backpack pump equipped with an intake hose to draw water from the creek, and an output hose connected to a length of metal pipe, open on one end, with a funnel on the other. The apparatus serves the dual purpose of pumping water into the streambed to blast out excess sand and sediment and acting as a funnel to direct the eggs 12 inches deep into the gravel. Kobseff carried the hoses and Merz carried the cooler full of 6,500 potential coho salmon while the crew hiked a mile upstream to find the best stretches of streambed. When a potential site with suitable gravel was selected, the lower end of the metal pipe was worked down into the gravel while the backpack pump forced water into the streambed, pushing the fine-grained sediments out of the streambed and leaving the larger gravel in place. Once an area of approximately two feet in diameter had been cleaned of fine sediment, the bottom end of the pipe was inserted approximately 12 inches into the cleaned gravel. The pump was then shut off and salmon eggs were poured into the top, funneled end of the pipe and directed down into the streambed.
After the eggs had been planted, the crew used their hands to gather additional clean gravel and piled it lightly on top of the man-made redd (salmon nest) to provide additional protection for the eggs. After the last batch of eggs was injected into the streambed, the crew congratulated each other and expressed their hopes for the success of what they'd done. Merz said that, because this is the first time this process has been used in Oregon, the only way to know the success rate of the project will be to look for the returning fish in two or three years, when they will hopefully return to spawn. Until then, Siskiyou County officials will be keeping their fingers crossed while they continue to lay the groundwork for their own goal of planting coho eggs in the Shasta River and, eventually, restoring healthy populations of the fish to Siskiyou County.
<urn:uuid:487e6f5e-75b6-4e81-8d03-97b6dcc1896c>
CC-MAIN-2013-20
http://www.siskiyoudaily.com/article/20130218/NEWS/130219816/0/highlight
2013-05-21T10:14:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.957371
1,578
Strengthening Streams in California
The State of California employs EPA's CADDIS causal assessment tool to identify and treat problems in waterways.
Thriving aquatic life--fish, insects, plants, algae and invertebrates--is a good indication that a stream or other watery ecosystem is healthy. On the other hand, aquatic life that is unhealthy or dying off is a strong signal that there is a problem. But beyond that initial diagnosis, deciphering the cause of the trouble can be a serious challenge. In such situations, scientists conduct a causal assessment to discover the cause of the problem and develop solutions to remedy it. A causal assessment uses a variety of techniques to evaluate data and other information to identify probable culprits, much like a doctor conducts exams and tests to diagnose illnesses or injuries in a patient. Over the last 15 years, EPA has honed its causal assessment process, an effort that culminated in the Causal Analysis/Diagnosis Decision Information System (CADDIS), a valuable tool that water managers and others from states and tribes rely on to identify problems in their local waterways. Named for caddisflies, an order of insects whose larvae live in flowing waters on the bottom of high-quality streams, CADDIS provides a step-by-step framework to help conduct causal assessments, mainly in streams. Recently, the State of California began using CADDIS as a way to find the causes of unhealthy streams and identify solutions. The EPA, the Southern California Coastal Water Research Project, the California Department of Fish and Game, and several stakeholder groups have collaborated to conduct causal assessment case studies for three freshwater streams: the Santa Clara and San Diego Rivers, the Salinas River, and the Garcia River. These three case studies capture the diversity of California's geography, land use characteristics, and stressors. Each examines evidence to identify the cause(s) of unhealthy aquatic life and provides recommendations for future causal assessment efforts. The goal of the collaboration is to help California protect the health of state streams and, ultimately, the people who depend on them.
<urn:uuid:8020d309-aa17-4d97-a86c-ea13016674a8>
CC-MAIN-2013-20
http://www.epa.gov/sciencematters/sept2012/castream.htm
2013-05-19T09:56:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.907513
420
Freely (Storm on Horseback: The Seljuk Warriors of Turkey, 2008, etc.) profiles the various caliphates that fostered scholarship and scientific inquiry during Europe's Dark Ages. As the eighth century drew to a close, the author writes, Baghdad became a beacon illuminating classical antiquity. The Abbasid caliphate, which held sway there for several centuries, reached its peak during the reign of Harun al-Rashid (786–809), when Baghdad's scholars plumbed the known world for long-lost books and documents, including many from the ancient library at Alexandria. In Baghdad's library, known as the House of Wisdom, Greek texts were painstakingly translated into Arabic. But Islamic scholars did more than just translate, the author notes; they critiqued Greek thinkers from Archimedes and Aristotle to Zeno. They questioned ideas on the nature of reality, corrected astronomical observations and probed medical tracts and mathematical theorems. In one instance, three wards of a Baghdad caliph marched a measured distance from north to south in the desert until the elevation of Polaris had changed by exactly a single degree; multiplying by 360, they arrived at a circumference of the earth only 92 miles short of what today's science confirms. In time, Cairo and Damascus succeeded Baghdad as centers of Islamic study, flourishing from the tenth into the 14th centuries under the Fatimids and other dynasties. Umayyad caliphs ruled the region of southern Spain known to Arabs as Al-Andalus, which offered another tolerant, enlightened bastion for scholars. As Christians came there to study, Greek texts that had once flowed into Arabic were poured into Latin, and the early flame of the European Renaissance flickered. Freely extensively documents Islamic works that gave us words like algebra and algorithm and dusted off the even more ancient Hindu numerals now universally employed. A chewy study of the preservation and transportation of classical Greek thought. See Jonathan Lyons' The House of Wisdom (2009) for a more accessible account of the Arab influence on Western civilization.
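The geodesy anecdote is easy to check with a one-line calculation. The sketch below uses an assumed miles-per-degree figure chosen only to illustrate the arithmetic; the review does not report the surveyors' actual measurement.

```python
# Circumference from a one-degree arc: walk until Polaris' elevation
# changes by one degree, then multiply the distance walked by 360.
MILES_PER_DEGREE = 68.8          # assumed distance for a 1-degree change
circumference = 360 * MILES_PER_DEGREE
print(f"Estimated circumference: {circumference:,.0f} miles")  # 24,768
# The modern meridional circumference is roughly 24,860 miles, so this
# assumed figure lands about 92 miles short, matching the review.
```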
<urn:uuid:1a22c25f-8ccd-4205-b862-e3c4901fe53c>
CC-MAIN-2013-20
http://www.kirkusreviews.com/book-reviews/john-freely/aladdins-lamp/print/
2013-06-19T06:02:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956579
428
Introduction to Streaming Media
In general, media files are huge. For example, five minutes of uncompressed video would require almost one gigabyte of space! So, when audio and video are prepared for streaming, the media file is compressed to make the file size smaller. When a user requests the file, the compressed file is sent from the video server in a steady stream and is decompressed by a streaming media player on the user's computer to play automatically in real time. A user can jump to any location in the video or audio presentation. Streaming media generally tries to keep pace with the user's connection speed in order to reduce interruptions and stalling. Though general network congestion is unavoidable, the streaming server attempts to compensate by maintaining a constant connection.
Streaming Media Player Required
Streaming technology allows users to receive live or pre-recorded audio and video, as well as "illustrated audio" (sound synchronized to still pictures). To access streaming media, the user must have a player capable of displaying the presentation. The College of DuPage uses Windows Media software to encode streaming media. To access and view streaming media files, users must have the free Windows Media Player. Once the Windows Media Player is installed, a user may simply click a link to a Windows Media file. This prompts the player to launch automatically and begin playing the requested file within seconds. Windows Media files can be linked like any other file type; however, the most common way is to embed the file in a Web page.
Terms and Concepts to Know
Bandwidth: A measurement of the amount of data that can be transmitted or received in a specified amount of time. When discussing streaming media, bandwidth is usually expressed in terms of bits per second (bps) or kilobits per second (kbps). (Modems are rated in terms of kbps and usually abbreviated as k; a 56 k modem has roughly twice the bandwidth of a 28.8 k modem.) Bandwidth is an important consideration when dealing with streaming media. Simply put, more bandwidth is required for more complex data. Therefore, it requires more bandwidth per second to display a photograph than it takes to display text. When delivering streaming media (large audio and video files), a great deal of bandwidth is required to achieve an acceptable level of performance.
Server vs. Server-less Delivery: Streaming media files are most efficiently delivered using a dedicated streaming server. However, content may be uploaded and delivered from servers other than a dedicated streaming server; the same content can be delivered either from a web server or from the streaming video server. Another reason to use the dedicated video server is space. The typical web server is configured to hold a great many HTML and graphic files, which are generally small. The typical video server is configured with very large storage capacities, as audio and video files may be huge. File size management is critical since any server has a finite storage capacity.
Streaming: Delivery of audio and video over the World Wide Web in real time. With streaming technology, the browser can start displaying the data before the entire file has been sent.
Unicasting: Networking in which computers establish two-way, point-to-point connections. This means that when a user requests a file, the server sends the file to that user only. Unicasting allows a user to pause, or skip around in, a streaming media presentation.
Because this method requires sending multiple copies of the data to multiple users, bandwidth requirements are high.
Broadcasting: To simultaneously send the same message to all users on the network. Broadcasting sends a message to the whole network, whether or not the data is wanted.
Multicasting: In contrast to broadcasting, where data is sent to the entire network, multicasting sends a single copy of the data only to those users who request it. Multiple copies are not sent across the network, nor is data sent to clients who do not want it. This reduces bandwidth requirements, but as a trade-off, users are unable to control the streaming media and thus cannot pause or skip forward or backward in the presentation.
Webcasting: Using the Internet to broadcast live or pre-recorded audio or video.
Embed: To place the source within a document. The source cannot be edited.
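A back-of-the-envelope calculation makes these terms concrete. The following sketch is illustrative only: the resolution, frame rate and stream bitrate are assumptions, since the text gives only the five-minutes/one-gigabyte figure.

```python
# Rough numbers behind the concepts above: why compression matters,
# and how unicast bandwidth scales with audience size while multicast
# does not. All parameter values are assumptions for illustration.

width, height = 320, 240        # pixels per frame (assumed)
bytes_per_pixel = 3             # 24-bit colour
fps = 15                        # frames per second (assumed)
seconds = 5 * 60                # five minutes of video

uncompressed = width * height * bytes_per_pixel * fps * seconds
print(f"Uncompressed size: {uncompressed / 1e9:.2f} GB")  # ~1.04 GB

stream_kbps = 56                # one modem-rate stream (assumed)
viewers = 1_000
print(f"Unicast server load:   {stream_kbps * viewers / 1_000:.0f} Mbps")
print(f"Multicast server load: {stream_kbps / 1_000:.3f} Mbps")
```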
<urn:uuid:8748b751-077b-4c4a-9a6a-a7640d098266>
CC-MAIN-2013-20
http://www.cod.edu/it/streamingmedia/intro.htm
2013-06-18T23:17:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707436332/warc/CC-MAIN-20130516123036-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.901415
880
Caribou are the largest members of the reindeer family (Rangifer tarandus) and are native to the arctic and sub-arctic regions of Siberia, North America and Greenland. Reindeer, which are traditionally herded in northern Europe and Eurasia, were introduced into Alaska in 1892. Although some herding of reindeer continues in Alaska today, many of the introduced reindeer interbred with caribou. The four caribou subspecies—barren ground, Peary's, tundra and woodland—differ greatly in range, size, coloration, behavior, food habits and habitat use.
Caribou are a medium-sized member of the deer family and stand about 3½ feet tall at the shoulder. Females (cows) can weigh up to 300 pounds, while large males (bulls) are about twice that size. Most caribou are medium-brown or gray, but coloration varies widely from nearly black to almost white. Their winter coat is somewhat lighter than their summer coat. Caribou are the only deer species in which both males and females have antlers. Their antlers, which are shed every year, have a long, sweeping main beam up to five feet wide. Each side has one or two tines, or branches, and each tine may have several points. The larger racks of caribou bulls are considered trophies by big-game hunters.
Caribou have special adaptations that allow them to survive their harsh arctic environment. Long legs and broad, flat hooves help them walk on snow and on soft ground such as a peat bog. A dense woolly undercoat overlain by stiff, hollow guard hairs keeps them warm. Caribou dig for food using their large, sharp hooves. The average lifespan of an adult caribou is eight to ten years. They reach maturity at about three years.
As with most deer species, male caribou fight each other for a harem of five to 40 cows. This sparring, called rutting, occurs in the fall. Injuries in this natural quest for dominance are rare, although occasionally the bulls' antlers lock together and both animals die. A single calf is born in the spring. Unlike most deer, caribou young do not have spots. They are able to walk within two hours of birth and are weaned gradually over several months. After calves are born, females with newborns gather into "nursery bands" and separate from the rest of the herd. Gradually, the bulls and barren cows rejoin the calving cows at the calving grounds. These larger groups of caribou offer some protection for the calves from predators such as wolves, bears and lynx.
Caribou feed on sedges, grasses, fungi, lichens, mosses, and the leaves and twigs of woody plants such as willows and birches. Although some herds stay on the cold tundra all year, most caribou have distinct summer and winter ranges. The large northern herds migrate over long distances, frequently crossing large, swift-running streams and rivers. Consequently, even caribou young are extremely strong swimmers.
Insect bites are a particular nuisance for caribou. When mosquitoes are numerous, a caribou may lose up to half a pint of blood a day. In coastal areas, they seek temporary relief by submerging themselves in water. They may seek windy hilltops, dry, rocky slopes, or snowfields if they do not have access to a coastal area. Barren ground caribou have been known to stampede in attempts to escape the ravages of mosquitoes, warble flies or nostril flies.
Rangifer tarandus caribou. Adult male caribou can weigh up to 600 pounds; females, generally not as large, can weigh up to 300 pounds. (U.S. Fish & Wildlife Service)
Caribou were once essential to the survival and livelihood of native peoples of the Arctic. Natives used caribou meat, milk and organs for food. Hides provided material for clothing and shelter, and bones, antlers, and sinews were used to make tools, tableware, and handicrafts. Though caribou remain a subsistence food resource, other uses have declined as native populations have become more technologically advanced.
Though caribou in North America were once found as far south as Lake Superior, today they are completely absent in New Brunswick, Nova Scotia, Maine and Minnesota, mainly due to changes in plant growth since the last glaciers receded 10,000 years ago. Remnant caribou populations in these areas were susceptible to the encroachment of European settlers and subsequent changes in the habitat brought about by logging, farming and fire suppression. The early settlers also hunted caribou heavily for food. Only one population of caribou is left in the lower 48 states—the Selkirk Mountain herd, which ranges from Canada into northern Idaho. Outside Alaska, this herd of woodland caribou is the last of its kind in the United States. In 1983, the U.S. Fish and Wildlife Service placed this population on the endangered species list, giving it protection under federal law. Endangered status means the woodland caribou is considered in danger of extinction within that part of its range. Canada and the State of Idaho cooperatively manage the Selkirk Mountain herd through monitoring and reintroduction of more woodland caribou to enhance the population.
In Alaska, the U.S. Fish and Wildlife Service, through its Arctic National Wildlife Refuge, protects large portions of the range of the Porcupine caribou herd, in particular the herd's sensitive calving grounds. Although the Porcupine herd is not considered endangered, in 1987 the United States and Canada finalized a formal agreement for the conservation and management of this majestic herd.
Caribou currently range throughout arctic and sub-arctic areas of Siberia, Greenland and North America. The U.S. population is limited to Alaska. Caribou are adept climbers, ascending steep slopes and traversing glacial snow fields.
<urn:uuid:0d7cc571-a981-48e5-930a-eed56b43b6db>
CC-MAIN-2013-20
http://digitalmedia.fws.gov/cdm/singleitem/collection/document/id/194/rec/40
2013-05-25T05:48:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.905093
1,465
Easton’s Bible Dictionary Vine: one of the most important products of Palestine. The first mention of it is in the history of Noah (Genesis 9:20). It is afterwards frequently noticed both in the Old and New Testaments, and in the ruins of terraced vineyards there are evidences that it was extensively cultivated by the Jews. It was cultivated in Palestine before the Israelites took possession of it. The men sent out by Moses brought with them from the Valley of Eshcol a cluster of grapes so large that "they bare it between two upon a staff" (Numbers 13:23). The vineyards of En-gedi (Song of Solomon 1:14), Heshbon, Sibmah, Jazer, Elealeh (Isaiah 16:8-10; Jeremiah 48:32,34), and Helbon (Ezekiel 27:18), as well as of Eshcol, were celebrated. The Church is compared to a vine (Psalm 80:8), and Christ says of himself, "I am the vine" (John 15:1). In one of his parables also (Matthew 21:33) our Lord compares his Church to a vineyard which "a certain householder planted, and hedged round about," etc. Hosea 10:1 is rendered in the Revised Version, "Israel is a luxuriant vine, which putteth forth his fruit," instead of "Israel is an empty vine, he bringeth forth fruit unto himself," of the Authorized Version.
<urn:uuid:54133f42-ed21-46a0-a85d-b61e2d51c538>
CC-MAIN-2013-20
http://www.christnotes.org/dictionary.php?dict=ebd&id=3770
2013-05-21T10:20:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.977039
321
Facts on nuclear waste from Koeberg
Information from Eskom
There are three levels of waste at Koeberg, South Africa's only nuclear power station. These are categorised as low-level waste, intermediate-level waste and high-level waste (spent fuel).
Low-level waste comprises refuse that may or may not be contaminated with minute quantities of radioactive material. This waste is usually in the form of clothing, plastics, insulation material, paper and coveralls and is generated in the controlled radiological areas of the power station. These items are sealed in clearly marked metal drums and stored on site until they are shipped to the Vaalputs national nuclear waste repository, which is run by the South African Nuclear Energy Corporation (NECSA). Vaalputs, the national repository for low- and intermediate-level waste, lies some 500 km north of Koeberg.
Intermediate-level waste consists of evaporator concentrate, spent resins, filter cartridges and contaminated scrap metal. This waste is more radioactive than the refuse but much less radioactive than spent fuel. It is mixed in a specific way with concrete and sealed into appropriately marked concrete drums, which are shipped to Vaalputs. If a shipment of these concrete drums were involved in a road accident and a drum fell from the truck and fractured at the point of impact, the concrete would still retain the radioactive materials encapsulated within it without leakage, posing no threat to the public or the environment.
High-level waste comprises the metal and mineral waste left over once spent fuel has been reprocessed to extract any re-usable uranium or plutonium. HLW has been around since mankind started its large-scale nuclear activities. Spent nuclear fuel is extremely radioactive and therefore needs to be safely housed. When it is removed from the reactor vessel it is stored in special "pools" known as fuel pools. At Koeberg the two pressurised water reactors generate approximately 32 tons of spent fuel each year. Over the 40-year design lifetime of the plant this would add up to 1 280 tons.
Each spent fuel assembly contains radioactive materials that fall into three categories. The first category contains the fission products such as caesium, iodine, strontium and xenon, which are created when uranium or plutonium nuclei are split. They are the predominant radioactive nuclides of spent fuel when it is removed from the reactor vessel and transferred to the spent fuel pool, where they decay to low levels of radioactivity relatively quickly. After 1 000 years only the longest-lived fission products, such as iodine-129, remain. In the second category are the actinides, which are isotopes of uranium and heavier metals including plutonium. These are long-lived nuclides which take 10 000 years to decay. The last category contains the structural materials of the fuel assemblies, which become radioactive through irradiation by neutrons. They add a small amount to the total radioactivity of the spent fuel assembly and decay in about 500 years.
A remarkable feature of spent fuel is that after one year of storage only 1% of the radioactivity remains in the assembly, because the radioactive nuclides in the material decay so quickly. After 10 years, only 0,5% of the original radioactivity remains. What is left after 10 000 years of storage is about 0,0002% of the radioactive content, and most of that would be plutonium and other actinides. After this period the radioactivity has decayed to below what would have been there had the uranium been left undisturbed in the ground.
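The decay percentages quoted above can be reproduced qualitatively by summing exponential decays over a few representative nuclides. In the sketch below the half-lives are published values, but the initial activity fractions are invented for illustration and are not Koeberg data.

```python
# Total activity of a nuclide mixture as a sum of exponential decays.
# Half-lives are published values; the initial activity fractions are
# illustrative assumptions chosen to echo the percentages quoted above.

HALF_LIVES_YEARS = {
    "Xe-133 (fission product)": 5.2 / 365,    # ~5.2 days
    "I-131 (fission product)":  8.0 / 365,    # ~8 days
    "Cs-137 (fission product)": 30.1,
    "Sr-90 (fission product)":  28.8,
    "Pu-239 (actinide)":        24_100,
}
FRACTIONS = [0.70, 0.29, 0.004, 0.004, 0.000003]  # assumed initial shares

def remaining_activity(years):
    """Fraction of the original activity left after `years` in storage."""
    return sum(f * 0.5 ** (years / hl)
               for f, hl in zip(FRACTIONS, HALF_LIVES_YEARS.values()))

for t in (1, 10, 10_000):
    print(f"after {t:>6} years: {remaining_activity(t):.6%}")
# Prints roughly 0.8%, 0.6% and 0.0002%: the short-lived fission
# products vanish quickly, while the actinides set the long tail.
```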
After this period the radioactivity has decayed to below what would have been there had the Uranium been left undisturbed in the ground. During the 1990s Koeberg took a decision to go for the high-density storage racks for its spent fuel assemblies. New technology enables us to pack more spent fuel into racks making it possible to store all the spent fuel that will be generated over the design lifetime of the station (40 years). These would number approximately 3 000, depending on how many refueling outages Koeberg has. This re-racking project was completed in 2002. Two different storage regions have been created in the spent fuel pools. The first region has 360 positions in three racks and will store the most reactive fuel. This is the fuel that has spent the least amount of time in the reactor and therefore contains relatively large amounts of U235, which could still undergo fission. In this region the fuel assemblies are further apart so that there is no chance that they may start a spontaneous fission reaction. Using neutron-absorbing materials in the construction further controls criticality (the start of the fission process) so that the number of thermal neutrons in the region is always below that required to start a chain reaction. The racks are made up of stainless steel with plates of borated steel attached to the outside surface of each stainless steel storage channel. Borated stainless steel contains 1,7% boron as part of its chemical composition. Boron is an excellent neutron absorbing material. The second region contains the bulk of the spent fuel. The assemblies are closer together since this fuel has spent a longer period in the reactor and hence has a lower residual amount of fissile uranium. The racks in this region are constructed of the same materials as those in region one. In 1996 four spent fuel casks that can house 28 spent fuel assemblies each were bought. These casks are dual purpose transport and/or storage casks and are specially designed to contain the radioactivity associated with 10-year old spent fuel assemblies. Due to delays in the re-racking project a decision was taken to use these casks as an interim contingency measure prior to the refueling outage in April 2000 and January 2001, in order to ensure that there would be enough space in the spent fuel pools to allow fuel unloading to occur . The empty casks weigh 97 740 kg and are made of ductile cast iron. The cast iron walls are 358 mm thick providing the structural strength needed, heat dissipation as well as shielding from the radiation emitted by the spent fuel assemblies. A layer of polyethylene rods are contained inside the wall of the cask to provide a shield against the neutrons emitted by the fuel. The cask design ensures that the remaining thermal heat in the fuel assemblies is dissipated naturally. The advantage of this is that no heat removal systems that will require monitoring or maintenance are necessary. The heat losses occur in the same way as it does when a cup of tea is allowed to cool down. Eskom has already commented on the draft nuclear radioactive waste policy that has been drawn up by the Department of Minerals and Energy Affairs. Eskom is in the mean time looking at all the options available world wide for the handling of spent fuel but will have to abide by the measures laid down in this radioactive waste policy for final storage of spent fuel.
<urn:uuid:f3e4d481-49fa-41be-ba22-9652cfc09317>
CC-MAIN-2013-20
http://mybroadband.co.za/vb/showthread.php/450231-Fracking-in-the-Karoo-why-it-s-bad?p=8624689
2013-05-19T02:38:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00053-ip-10-60-113-184.ec2.internal.warc.gz
en
0.957598
1,396
Is it an allergy?
Drug allergies can be a confusing area for parents, because an adverse reaction to a medication does not necessarily mean that the child is allergic to that medicine. In fact, most drug-related symptoms will not be caused by an allergic reaction. Nevertheless, whether it's an allergic or a non-allergic reaction, all adverse drug events should be checked out by a doctor, as they can be very serious, even life-threatening.
What are the symptoms of allergic reactions?
Most allergic reactions occur fairly soon after taking a medicine; however, it is possible to develop an allergic reaction after several weeks on a drug, or during subsequent use of a medicine. The most common symptoms of an allergy include:
- Skin rash
- Facial swelling
- Shortness of breath
And it's worth noting that many of these are also the symptoms of a non-allergic reaction.
What is anaphylaxis?
While rare, it is the most serious allergic drug reaction and is a medical emergency. Anaphylaxis symptoms usually start within minutes of exposure to a drug. Symptoms of a possible anaphylactic reaction include:
- Tightening (constriction) of the airways and a swollen tongue or throat, causing trouble breathing
- Shock, with a severe drop in blood pressure
- Weak, rapid pulse
- Nausea, vomiting or diarrhoea
- Dizziness, light-headedness or loss of consciousness
It's possible to have an allergic response to a drug that caused no problem in the past.
Common drug allergies
Antibiotics
This is the most common drug allergy, with penicillins, cephalosporins and sulfonamides (those containing sulfur) the main culprits. Antibiotics can also cause non-allergic side effects such as a skin rash and digestive problems.
Vaccines
Allergic reactions can occur, but rarely, after a vaccination. Usually an allergic reaction is triggered by other ingredients in the vaccine, such as egg or neomycin, rather than the vaccine itself. Non-allergic reactions to vaccines are common, but in most cases they aren't severe and symptoms improve quickly.
What are non-allergic reactions?
An allergy involves the body's immune system identifying a chemical or substance as harmful and creating antibodies to attack it. In most cases, what appears to be an allergy is actually a reaction that doesn't involve the immune system. Some drug reactions that are similar to, but may not be, an allergic reaction include:
- Breathing difficulties: nonsteroidal anti-inflammatory drugs (NSAIDs), including aspirin and ibuprofen, can cause asthma-like symptoms in some people, so it's understandable why these can be confused with an allergy.
- Skin rashes: medicines can affect the skin in many different ways; for example, sulfonamides may cause a rash by making the skin burn more easily in the sun.
- Stomach problems: nausea, vomiting and diarrhoea are common side effects with antibiotics.
- Dizziness or light-headedness: many medicines can cause this by lowering blood pressure or by affecting the central nervous system.
- Swelling of the face, lips, tongue or other parts of the body: some medicines for high blood pressure and heart conditions can cause these symptoms directly, without involving the immune system.
Who's at risk?
Anyone can have an allergic reaction to a medicine, but some factors can suggest an increase in your risk, such as:
- Having a past allergic reaction to the same drug or another drug. If past reactions have been mild, you could be at risk of a more severe reaction.
- Taking a drug similar to one that caused a reaction in the past.
- Having a health condition that weakens your immune system. - Having hay fever or another allergy. When to seek medical help Inform your doctor of any reactions, as this can influence whether you can be prescribed the same medicine again. If possible, see your doctor when the allergic reaction is occurring. This will help identify the cause and make sure you get treatment if it's needed. Obviously, if there are any severe symptoms or signs of anaphylaxis, seek urgent medical attention. This article was written by Fiona Baker for Kidspot, Australia's best family health resource. Sources include NPS MedicineWise. Last revised: Tuesday, 15 February 2011. This article contains general information only and is not intended to replace advice from a qualified health professional.
<urn:uuid:5abb8c50-b957-463c-9f10-75a2eb01b735>
CC-MAIN-2013-20
http://www.kidspot.com.au/familyhealth/%20http:/www.kidspot.com.au/Conditions-and-Disorders-Allergies-and-immune-system-Is-it-an-allergy+4692+202+article.htm
2013-05-19T02:16:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.926987
1,059
When a thief breaks into a bank vault, sensors are activated and the alarm is raised. Cells have their own early-warning system for intruders, and scientists at the European Molecular Biology Laboratory (EMBL) in Grenoble, France, have discovered how a particular protein sounds that alarm when it detects invading viruses. The study, published October 14 in Cell, is a key development in our understanding of the innate immune response, shedding light on how cells rapidly respond to a wide range of viruses including influenza, rabies and hepatitis. To sense invading agents, cells use proteins called pattern recognition receptors, which recognise and bind to molecular signatures carried only by the intruder. This binding causes the receptors to change shape, starting a chain-reaction that ultimately alerts the surrounding cells to the invasion. How these two processes - sensing and signalling -- are connected, has until now remained unclear. The EMBL scientists have now discovered the precise structural mechanism by which one of these receptors, RIG-I, converts a change of shape into a signal. "For a structural biologist this is a classic question: how does ligand binding to a receptor induce signalling?" says Stephen Cusack, who led the work. "We were particularly interested in answering it for RIG-I, as it targets practically all RNA viruses, including influenza, measles and hepatitis C." In response to a viral infection, RIG-I recognises viral genetic material -- specifically, viral RNA -- and primes the cell to produce the key anti-viral molecule, interferon. Interferon is secreted and picked up by surrounding cells, causing them to turn on hundreds of genes that act to combat the infection. To understand how RIG-I senses only viral RNA, and not the cell's own RNA, and sounds the alarm, the scientists used intense X-ray beams generated at the European Synchrotron Radiation Facility (ESRF) to determine the three-dimensional atomic structure of RIG-I in the presence and absence of viral RNA, in a technique called X-ray crystallography. They found that in the absence of a viral infection, the receptor is 'sleeping with one eye open': the part of RIG-I that senses viral RNA is exposed, whilst the domains responsible for signalling are hidden, out of reach of the signalling machinery. When RIG-I detects viral RNA, it changes shape, 'waking up' the signalling domains, which become accessible to trigger interferon production. Although the EMBL scientists used RIG-I from the mallard duck, this receptor's behaviour is identical to that of its human counterpart. "RIG-I is activated in response to viral RNA, but a similar mechanism is likely to be used by a number of other immune receptors, whether they are specific to viruses or bacteria," says PhD student Eva Kowalinski, who carried out most of the work. Thus, these findings contribute to a broader understanding of the workings of the innate immune system -- our first line of defence against intruders, and the subject of this year's Nobel Prize in Physiology or Medicine. The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by European Molecular Biology Laboratory (EMBL). - Kowalinski, E., Lunardi, T., McCarthy, A.A., Louber, J., Brunel, J., Grigorov, B., Gerlier, D. & Cusack, S. Structural basis for the activation of innate immune pattern recognition receptor RIG-I by viral RNA. Cell, 14 October 2011 DOI: 10.1016/j.cell.2011.09.039
<urn:uuid:cde93e7c-f360-4d8d-9655-b310f6e4bd3d>
CC-MAIN-2013-20
http://www.ebionews.com/news-center/research-frontiers/rnai-a-microrna/45381-cells-have-early-warning-system-for-intruders.html
2013-05-18T17:57:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.920514
753
Warrant Officers have always been specialists carried on board ships for specific responsibilities requiring a very high level of experience and detailed knowledge. These attributes were not expected of the "fighting" officers, who were primarily concerned with the tactics necessary to make contact with the enemy and then to "fight" the ship. To do this successfully it was essential for warships to carry others who would ensure that the ship was always in a high state of readiness. It had to be well maintained and its guns always ready for use, with ample charges and projectiles. More importantly, it had to be in the right place at the right time. These specialists were attached to the ship throughout its life, whether in commission or "in ordinary" ("laid up"). They did not hold a King's or Queen's Commission, but had a Warrant signed by members of the Board of Admiralty. The First Warrant Officers Five specialists were ranked as Warrant Officers, and had the following responsibilities: Boatswain (Bo'sun) - "Running" and "standing" rigging, sails, anchors and cables. He was also responsible for the maintenance of discipline on board. This category also served in Royal Dockyards for similar duties. The origin of the title is buried in antiquity. The Master - Navigation of the ship. Carpenter - Hull maintenance and repair. Clerk - All correspondence. Gunner - Guns, ammunition and explosives. Cook - Feeding all on board. Two other categories were later elevated to Warrant status, having previously been considered to be ratings: Chaplain - All matters concerned with religious affairs. Schoolmaster - General teaching requirements. In 1843 the Master and the Chaplain were given Commissioned rank. A further change in 1861 granted a Commission to Schoolmasters engaged in the instruction of naval officers in shore training establishments. They were renamed "Naval Instructors", but those serving on ships retained the "Schoolmaster" title with added distinctions of "Senior Master" and "Headmaster" for those having greater responsibilities. These titles remained in use until 1946, when all Schoolmasters were given Commissioned rank as Instructor Officers. Warrant Officers of all specialisations had to be capable of carrying out instructional duties ashore and afloat. This criterion still applies today. Impact of New Technology in the Late 19th Century Advances in scientific knowledge had a most significant effect on the Royal Navy since they completely changed design requirements for warships. Use of steel for ship construction and installation of new types of equipment introduced new manning and support requirements. Educational and training standards had to be totally revised to provide new types of rating for the new, much larger Fleet. Each category would require specialist supervision by officers of Warrant rank. [Photo: pre-World War 1 cruiser HMS Fox - gun mountings and optical range finders, propulsion and other machinery, and electrical supply for lighting and other services] New Categories of Warrant Officer in 1913 It took some years before the necessary expertise was available, but study of "King's Regulations and Admiralty Instructions" (KR&AI) for 1913 shows the extent to which Warrant Rank had been introduced for duties ashore and afloat. These Warrant Officers were responsible for supervision and training of their specialist category.
It should be noted that these new titles are shown in association with the present Branches of the Service, which differ from those in 1913: Gunner† - Selected Gunners given training as Instructors with particular emphasis on ships' armament. Identified by a Dagger (†) suffix to their rank title and hence known as "Dagger" Gunners. Gunner (T) - a direct equivalent of the "Gunner" (see above). Specialised in torpedo armament equipment operation, maintenance and repair, and in addition was responsible for electrical distribution circuits. Warrant Telegraphist - operation, maintenance and repair of all wireless communication outfits. Signal Boatswain - all visual signalling matters. Recent developments had introduced more complex procedures for manoeuvring and tactical control of the Fleet. Warrant Master at Arms - all disciplinary matters and ratings drafting in Depots ashore. Supervision of the Regulating Branch. Warrant Bo'sun (PRT) - physical training and organisation of recreational activities in large shore establishments. Warrant Engineer - skilled tradesman with a high standard of education and long initial training to provide the professional standards needed to supervise the operation and repair of complex mechanical and electrical power generation equipment. Warrant Mechanician - introduced to provide an avenue of promotion for selected Stoker ratings. Received skill training similar to that given to Engine Room Artificers. Mainly employed for shore training of Stoker ratings. Warrant Shipwright - skilled tradesman with long initial training or entry after a shore apprenticeship. Responsible for hull repair and maintenance in wooden and steel ships, including operation and maintenance of anchors and cables. Also employed in Royal Dockyards. Warrant Ordnance Officer - skilled tradesman with high educational qualifications and long initial training. Responsible for maintenance and repair of all types of gunnery equipment, including optical instruments. Warrant Electrician - skilled tradesman specialising in maintenance and repair of all electrical equipment, including instrumentation and generating machinery as well as torpedo control equipment. Warrant Writer - all pay and ships' correspondence. Warrant Supply Officer - custody and accounting of naval and victualling stores. "Warrant Instructor in Cookery" - shore training of Cook ratings. Warrant Wardmaster - administrative duties and patient care in naval hospitals and hospital ships, other than any associated directly with the work of medical officers and nurses. [Photo: pre-World War 2 destroyer HMS Glowworm (courtesy CyberHeritage)] New Post-1930 Categories The various specialisations remained, but by 1935 Warrant Rank had been introduced for rating categories which had evolved since 1918. These reflected new requirements such as the increased use of aircraft and the development of improved anti-submarine weapons. These were: Boatswain (A/S) - operation and training of personnel in submarine detection outfits and anti-submarine weapons, together with their maintenance and repair. This was due to the introduction of equipment which embodied modern techniques. Warrant Photographer - all photographic services in ships and shore establishments. Photography was extensively used in air operations and gunnery training. Warrant Steward - supervision of the work of Stewards and administration of Wardroom Mess services to give an improved standard in large shore establishments.
The Warrant Officer in World War 2 The tremendous changes in types of equipment and the increase in personnel made great demands on all holding Warrant Rank during WW2. Their professional and man-management experience enabled them to make an invaluable contribution. Quite apart from their instructional duties, they did much to ensure a high standard of availability of equipment and services at sea. As the RN was largely made up of officers and ratings serving only for the duration of hostilities, the value of this leavening provided a basis for efficiency which cannot be disregarded. Although the introduction of radar and improved weapons had been made before 1939, these equipments were comparatively rudimentary. The many changes made as new techniques were developed demanded a considerable degree of professional expertise from existing categories. In the Fleet Air Arm, Warrant rank was introduced for aircrew (Pilot and Telegraphist/Air Gunner) and for Aircraft Maintenance ratings. The latter required similar skills to those of the Engineering Branch in ships, and their suitability was assessed by professional examination. Qualifications and Promotion In general, all candidates for Warrant Rank were required to have achieved the same educational standard by having passed the Higher Educational Test (a standard slightly below that of the pre-1944 School Certificate). Although most branches had professional examinations, these varied considerably between branches, and some promotions were made on the basis of "long and zealous service". Candidates for Warrant rank were required to have qualified for Petty Officer rating, and in some cases to have served as such for a number of years. Few promotions could be made before the age of 30 because of these constraints. In some cases promotions were made without sufficient regard to the suitability of individuals for their new status and their ability to adapt to change. The average age of promotion to Warrant Rank was between 31 and 35, whereas the majority of commissioned officers were younger. Integration into the new environment was more easily achieved by those whose education and interests covered a wide enough horizon to meet their new responsibilities. In this connection, previous experience in activities beyond their particular specialist knowledge, whether within the service environment or otherwise, was a great asset. The transition did, however, require considerable adjustment and needed goodwill on the part of all involved. When achieved, the contribution made by Warrant Officers to overall efficiency was clearly apparent. Further advancement to "Commissioned Officer from Warrant Rank" was a slow process and required 10 years' service as a Warrant Officer. The number of promotions was also limited by the number of complement billets allowed for that rank. This factor meant that few Warrant Officers could expect any further promotion until they were over 40 years of age. It was a major cause of disquiet to them, since it showed little appreciation of their contribution and of the advantages to be gained by recognising their merit. There was, however, an increasing number of Warrant Officers promoted direct to Lieutenant rank after 1937, although those so promoted were a very small proportion of the total number. The Contribution of the Warrant Officer 1913 to 1948 Wide experience gained over many years enabled Warrant Officers to provide the necessary lubrication to ensure that the wheels of the "command machinery" worked smoothly.
They were able to ensure that all foreseeable situations were dealt with promptly and efficiently by virtue of their specialist knowledge and long service. When necessary, they could improvise and adapt existing facilities and procedures with a degree of competence simply not available in the case of many younger officers. Years of supervision of ratings and direct daily contact with all matters essential to the smooth running of all departments did much to ensure efficient conduct of affairs, whether ashore or afloat. Because Warrant Officers retained their association with the Manning Port Division which they had chosen, usually on entry to the service, they accumulated a wide range of contacts within the local dockyard and in the Depot. This gave them unrivalled advantages compared with younger General List Officers, who would be appointed to ships manned from any of the main Depots. Local knowledge of the personnel involved in dockyards and in the administration of the Depot was gained over many years. Each dockyard and Manning Depot had its own local procedures, and knowledge of these could be very valuable in obtaining the best possible service from local support facilities. Family connections or school friendships also played their part. Many Warrant Officers had family roots in the close-knit local community, some of whose members were likely to be employed in Admiralty service. Those who attended the local Dockyard School before entry as Artificers would have received their craft training with dockyard personnel. This affinity, lasting over several years, enabled continuity of contact to be maintained with individuals who carried out work essential to the running of the Fleet. As a result, barriers presented by "officialdom" could be circumvented and many impossible situations could be satisfactorily overcome through these personal connections. Hospitality in the Warrant Officers Mess for those who rendered services was an added bonus to help this process. One of the most important capabilities required of all specialisations at Warrant level was that of instructional competence. Since training of all ratings and some officers was carried out at the Manning Depot, most Warrant Officers had continued association with trainees extending over several years. In addition to influencing training policies, they gained knowledge of individuals whom they would meet again many times during their subsequent careers. The bond which existed between all Warrant Officers was another asset. It allowed many quite intractable problems to be settled "in the Mess" by suitable arrangements, without the need to use more formal channels. There was rarely a department in any large ship or establishment which had no Warrant Officer within its structure, so they were in an excellent position to ensure full benefit was obtained from the resources of men, material and knowledge available to them. Warrant Officers who were Heads of Departments, such as the Engineer Officer in a destroyer, had, apart from their overall responsibility for machinery, to be able to co-operate with other departments requiring engineering or associated services. To an experienced professional this presented no major difficulty. Understanding of the reactions of the average rating to particular circumstances was a man-management asset which did much to ensure the smooth running of their Department. As Divisional Officers they were therefore able to strike the necessary balance between compassion and naval practice by virtue of their experience of men and circumstances.
Many General List officers have good reason to be grateful for the accumulated wisdom of a Warrant Officer with whom they served as a Midshipman or Sub-Lieutenant. During the period before 1948, when Warrant Officers lived apart from other officers in all large ships, their associations with their fellow officers were largely professional. Although they took part in social and sporting activities, various other factors had great influence on their social relationship with Wardroom officers. Service in Destroyers, Sloops and Small Ships [Photo: postwar destroyer HMS Decoy (courtesy NavyPhotos)] Both Warrant Engineer and Gunner (T) specialisations were appointed to these ships and also to some submarines. Of the two, the Warrant Engineer had the advantage of a good educational background and was used to the higher standard of social conduct found in Artificers Messes. The Gunner (T) was less advantaged, since he would in all likelihood have joined the service as a Boy Seaman without the benefit of the type of academic training given to the Artificer entrant. He would also have spent much of his earlier career in Broadside and Chief or Petty Officers Messes, with a less refined atmosphere than was to be found in a Wardroom. However, the wide experience and professional ability of each made them valued members of any small ship wardroom, as long as they were able to adapt to their new social environment. In this connection a great deal depended on the attitude of the Captain, who would need to recognise these basic facts and make it clear that a certain degree of "give and take" was needed by all concerned if his ship was to be "happy" and efficient. Regrettably this was not always the case, and prejudice, together with a lack of understanding on both sides, did much to delay the acceptance of the Warrant Officer as a valuable asset in a Wardroom. There were instances of officers who took advantage of their status and brought discredit on their fellow Warrant Officers, but these were by no means the majority. Warrant Officers Messes In large ships and most shore establishments the complement would include Warrant Officers of many specialisations, who were accommodated in their own Mess. As very few activities did not affect them, they were able to exercise considerable influence on the quality of life on board. The Mess President, usually the Senior Seaman Officer, a Commissioned Gunner or Boatswain, had responsibility for ensuring that all Warrant Officers conducted themselves socially in a manner which met the standards expected by the Captain. The disadvantages of a less extensive education than that of Wardroom officers and the less exacting standards previously acceptable still applied. Although adjustment was frequently without difficulty, there were instances where the President concerned lacked the very qualities necessary to maintain conduct which would enhance the standing of all Warrant Officers. A great deal depended on the make-up of each Mess, with its members of very varying educational and family backgrounds, which undoubtedly affected their social attitudes. On the credit side it should be said that Warrant Officers took their part in all sporting and social activities, both as ships' officers and as a separate Mess, with great success. The conduct of those who adapted quickly to life as an officer did much to ensure that the representations being made about the status of the Warrant Officer were favourably forwarded by their Captain.
As in small ships, the Captain and the Executive Officer played a very important role in providing clear guidelines about the standards expected. The availability of alcohol was a factor needing careful handling, but in most circumstances did not lead to major problems any more than it did in Wardroom Messes. The introduction of more modern weapons and other equipment into the RN had a significant effect on the calibre of rating required. It was evident that there would be a need for experienced officers to supervise work on more complex equipment, whether as operators or in the support role. This trend began to make itself evident by 1948, and a better quality of rating was becoming available for promotion at the time of introduction of the Branch List. The reduction of the required minimum age on promotion to 28 also improved the prospects of promotion before completion of a 12 years' Engagement. The standards required during professional examinations for many categories were more stringently applied. A higher general education was necessary to carry out duties as senior ratings satisfactorily, which did much to ensure that candidates for promotion to Warrant rank were better able to deal with their new status on promotion. Change of Title Following the many representations made by officers holding Warrant Rank during and immediately after the end of WW2, an Admiralty Committee was set up to investigate the status of Warrant Officers. The principal areas of concern which had been represented were: change of title to align more accurately with the responsibilities carried. Coupled with this was the desire for replacement of the "bootlace" single "half stripe" insignia worn by Warrant Officers, which it was considered made a further unnecessary differentiation. The Committee, chaired by Admiral Noble, took into account the submissions made by Presidents of all Warrant Officers Messes. It concluded that amendments should be made to the existing regulations. Much attention was given to suggestions from the Messes in Port Divisions, since they were recognised as coming from the largest number of more senior representatives. Whether this was a sound principle is, in retrospect, questionable, as there was no consensus, especially relating to title. The proposal largely supported in many Warrant Officers Messes was that the titles be changed to Sub-Lieutenant and Lieutenant, but this was not agreed by many members of the Committee and was not adopted. The new structure was announced with the Naval Estimates on 9 March 1948 and introduced on 1 July that year. This went some way to improving matters, although the contentious subjects of title and the change from the "half stripe" insignia were not resolved, and they continued to fester for another 9 years. Post-Noble Committee Report Improvements Warrant Officers were to be known in future as "Commissioned Officers", and "Commissioned Officers from Warrant Rank" as "Senior Commissioned Officers". They were to be collectively identified as "Branch List" officers and to be equivalent to Sub-Lieutenant and Lieutenant respectively. Minimum age on promotion was reduced to 28. The Warrant Officers Mess was to be abolished, and all officers above the rank of Midshipman were to be accommodated in the Wardroom in all ships and Establishments, although it was recognised that space might not always be available; any existing Warrant Officers Mess was then to be known as "Wardroom II" until enlarged existing or proposed new facilities could be provided both ashore and afloat.
Instead of selection for the next promotion being made after 10 years in the rank, a "Zone" between 5 and 9 years was introduced for "Commissioned Officers". Selected Branch List Officers were to be given direct promotion to Lieutenant on the General List. They would then be eligible to take up appointments open to any General List Officer, with the same prospects for further promotion. The number of specialist appointments for promotion of Senior Commissioned Officers to Lieutenant on the Branch List was also to be increased to allow a greater number of promotions to be made, but this particular change could not be effective immediately, because wartime conditions had made necessary the promotion of Warrant Officers to "Acting Commissioned Officer from Warrant Rank" status. The number of new promotions had to include officers holding Acting rank, which meant that the promotion of many younger officers was delayed. The reduction in size of the Fleet also caused a corresponding reduction in the number of billets available. All other officers became identified as General List Officers or Supplementary List Officers, who had joined for shorter naval service. Warrant Rank titles were changed as from 1 April 1948, and officers were accommodated in Wardrooms on 1 July. The prefix "Mr" was replaced by "Commissioned" or "Senior Commissioned" followed by the specialist title (e.g. "Boatswain" became "Commissioned Boatswain" and "Warrant Writer" became "Commissioned Writer Officer"). Introduction of the new Electrical Branch The formation of the Electrical Branch in 1946 had a very significant effect on other existing branches. Consequential changes involved many structural upheavals in existing categories of rating. The new Branch took over responsibility for maintenance and repair of equipments from other existing departments: Power Supply and Generation from the Engine Room and Torpedo Branches; W/T and Radar from the W/T element of the Signals Branch; Gunnery and Torpedo armament from the Seaman Branch. A major transfer of ratings and officers into the new Electrical Branch took effect from the beginning of 1947, and Branch List Officers received new titles appropriate to their specialisation. Seamen Branch - Gunners (T) who transferred to the Electrical Branch became Commissioned Electrical Officers (L). Gunners (T) who remained in the Torpedo Branch became Gunners (TAS). A new category of Boatswain (PR) was introduced for the Seaman Branch radar and plot operators. Gunnery ratings were re-categorised to suit their new duties. Signals Branch - Warrant Telegraphists who transferred to the Electrical Branch became Commissioned Electrical Officers (R). Warrant Telegraphists and Signal Boatswains who remained in the Signals Branch continued in their existing specialisations. Electrical Branch - Sub-divisions were created as follows, Commissioned Electrical Officers having the suffixes: (L) - Power Generation, Distribution and all electrical services. (R) - all communications and radar equipment. (AL) and (AR) - for aircraft equipment as above. Warrant Engineers were sub-divided into categories, again with Commissioned Engineers having the suffixes: (ME) for Marine Engineers, (AE) for Air Engineers, and (OE) for Ordnance Engineers (transferred later from the Electrical Branch in 1948). Transitional Period 1948 - 1956 A gradual increase in the proportion of Branch List officers entering the Wardroom took place as the changes made in 1948 had time to take effect. The new intake initially faced the same problems of prejudice and adjustment. Although acceptance was slow in some ships, the improvements made when the Branch List was formed were shown to be most beneficial.
Very extensive changes to the armed forces made in 1957 involved a complete review of the strength of the Fleet and of the officer structure as a whole. Special Duties List It was decided during 1956, as part of an overall Review, to abolish the Branch List and replace it by a different designation, to be called the "Special Duties List". As a result, meaningful recognition was given to officers promoted by virtue of their specialist expertise. At one stage serious consideration was being given to providing special uniform buttons marked "SD" for these officers. Such a distinction was felt by all Branch List Officers to be quite unnecessary and a way of maintaining the distinctions so evident before 1939. Following many representations by individual officers that this would be against the best long-term interests of the Service, the proposal was dropped. Another innovation was the removal of the coloured lace worn by all specialist officers on the General, Supplementary and Special Duties Lists. Only Medical, Dental and Constructor Officers were to retain this indication of their specialisation. In future there would be no visible distinction between other officers. Although not totally welcomed by all, experience showed that this change helped to further reduce some of the prejudice still evident in some wardrooms. As from 1 January 1957, "Commissioned Officers" were accorded the title of "Sub-Lieutenant", and "Senior Commissioned Officers" became "Lieutenants". In consequence the associated stigma of the "half stripe" was removed, and Special Duties List Officers were to wear the full single stripe or two full stripes as worn by other officers of these ranks. The specialist qualifications of each officer on the SD List were indicated as part of their new rank title. Examples: Commissioned Boatswains became Sub-Lieutenants (B). Engineer - Senior Commissioned Engineers (AE) became Engineer Lieutenants (AE). Supply - Commissioned Writer Officers became Supply Sub-Lieutenants (W). New promotions to Sub-Lieutenant received a Commission signed by the Queen, but existing Branch List Officers retained their original Admiralty Warrant as their authority to "observe and execute" the "Regulations for the Government of the Naval Service". As part of the naval reorganisation, Schemes of Complement were altered to provide more appointments afloat, to give these officers greater opportunity to extend their responsibilities and hence to improve their promotion prospects. By the 1970s the SD Officer had been fully accepted in most Wardrooms for his true value as a professional colleague and messmate who took part in all ship activities on equal terms. Final Phase of Transition 1970 to 1985 During this period very extensive administrative changes took place within the RN, including the amalgamation of the Electrical and Engineering Specialisations. These have allowed alterations to complement requirements ashore and afloat. After suitable training, Special Duties List Officers can now be employed as Head of Department instead of General List Officers. Revised promotion policies allow promotion of Lieutenant Commanders to Commander on the SD List, so fulfilling the aspirations of previous holders of Warrant Rank. With very few exceptions, no officer from Warrant Rank, or its later equivalents, previously had any reasonable chance of attaining this rank unless already transferred to the General List. At last due recognition of experience and a high degree of professional competence had been achieved.
Re-introduction of Warrant Rank On 1 September 1970 the cycle was completed by the introduction of a status equivalent to that of Warrant Officers in the Army and Royal Air Force. Those who joined the RN as ratings and wished to be advanced in status within their particular specialisation could be promoted to a new rank, Fleet Chief Petty Officer. The Royal Coat of Arms was to be used for the insignia of rank, as in the other services. In 1986 their title was changed to Warrant Officer, thus completing the cycle. The historical pattern is unlikely to repeat, because opportunities are available for the new Warrant Officer to qualify for further promotion on the SD List. Owing to a shortfall in manning requirements, advancement is possible to Temporary Sub-Lieutenant (SD), and there were 13 Temporary Sub-Lieutenants and six Temporary Lieutenants holding Commissions in the 1991 SD List. It is interesting to note that the current Navy List includes specialisations by rank within the Special Duties Section, as opposed to the previous practice of listing each specialisation separately. The new SD List officer has broader responsibilities and carries out duties outside his basic specialisation in the same way as any other Commissioned officer. Extensive changes in social attitudes outside the naval service, and training appropriate to modern requirements within it, have significantly altered the type of officer now serving. As a result the earlier prejudices have largely disappeared. The Special Duties List provides an avenue of promotion with none of the inherent disadvantages faced by the pre-1948 Warrant Officer. The modern, well-trained, experienced and dedicated specialist rating is more socially aware and better able to take his place within the officer structure. Having shown the necessary initiative and perseverance, he may be sure his professional competence will be recognised and can be rewarded to a far greater extent than was possible in the past. Sources: Navy Lists 1948, 1958 and 1966; KR&AI 1913, 1926 and 1938; "The Royal Navy Since The Eighteenth Century: The Navy in Transition" by Michael Lewis (H&S); "A Social History of the Royal Navy" by Michael Lewis (Allen and Unwin).
<urn:uuid:52555fe3-41ef-4d72-86dd-c0b03834fccf>
CC-MAIN-2013-20
http://naval-history.net/xGM-Pers-Warrant%20Rank.htm
2013-06-18T22:57:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.970441
6,251
Rockhopper Penguin (Image credit: Flickr user Marcus Borg) For several hundred years, human activity on the Falkland Islands - roughly 300 miles off the Argentine coast - threatened its penguins' survival. But the trend started to reverse in 1982, when Argentina and Britain began duking it out for control of the Falklands. Turns out, a war, a few landmines, and some unstable diplomatic relations might have been just enough to get the penguins back on track. The Falkland Islands are small. Collectively, the 200-plus islands that make up the Falklands are only about as big as Connecticut. But through the years, they've managed to inspire some Texas-sized international contention. Ever since Argentina gained independence from Spain in 1816, it's been vying for control of the Falklands in one form or another. Some Argentines even claim possession of the region today, even though Queen Elizabeth's face graces every piece of currency, the Union Jack appears on the official flag, and every other government in the world recognizes British rule over the Falklands. Despite the fact that Argentina famously lost its military bid for control of the islands back in 1982, national polls still show 80 percent of Argentines want their government to take back the Islas Malvinas, as they're known in the Spanish-speaking nation. King Penguins (Image credit: Flickr user andym8y) So what is it the Argentines so jealously covet? Hard to say. The Falkland Islands aren't home to much, other than about 3,000 humans, 700,000 sheep, and a few fishing installations. What they do have, however, is an enormous population of penguins from five different species - the Southern Rockhoppers, the Magellanic, the King, the Gentoo, and the Macaroni. Their names derive from, respectively, the ability to hop on rocks, a celebrated circumnavigator, a British ruler, a religious slur, and a slang reference to flashy dressers. With those five species combined, the Falklands are home to a penguin army more than 1 million strong. That's pretty impressive, but it's believed the number was closer to 10 million only 300 years ago. In the 18th century, the whale oil industry was booming, and the Falklands had their fair share of whales. Not coincidentally, French, British, and Spanish groups began showing up on the islands to get in on the action. But whale oil isn't exactly the easiest thing to produce. First, whales are brought ashore. Then their blubber is separated from their bodies, and the fat is rendered into oil in gigantic vats of boiling water. The Falkland Islands had plenty of whales, but they're mostly devoid of timber, and burning whale oil to render whale oil seemed a little silly. So how did the settlers make their Falkland outposts survive? "Francoise, throw another penguin on the fire!" Yes, as it turned out, penguins made surprisingly good kindling, thanks to layers of protective (and, apparently, highly flammable) fat beneath their skin. And it didn't hurt that they're so easy to catch. Penguins are flightless and unafraid of humans, so anytime the rendering fires got low, whalers simply grabbed a penguin or two and tossed 'em in. Gentoo Penguins (Image credit: Flickr user andym8y) ONE FISH, TWO FISH Fortunately for the penguins, the whale oil business died out in the 1860s with the discovery of fossil fuels. That left the islands with little commercial industry, and the worst thing the penguins had to worry about for a while was the occasional egg theft.
But peaceful human-penguin relations hit a roadblock again in 1982, when Argentina made its ill-fated attempt to reclaim the Falklands. Although the British presence on the Falkland Islands has long been a sore spot for Argentina, no Argentine leader had ever tried to force a national claim to the land. At the time, however, the military government, led by General Leopoldo Galtieri, was in a unique situation. Already unpopular at home because of his habit of kidnapping and killing opposition leaders, Galtieri started to get truly nervous when the Argentine economy began to sink. Fearing outright rebellion, Galtieri tried to enlist the spirit of nationalism by invading the largely unprotected Falkland Islands on April 2. He quickly declared victory over the British, but his success was short-lived. Unfortunately for Galtieri, British Prime Minister Margaret Thatcher didn't believe in capitulating to dictators, even regarding land as inconsequential and unprofitable as the Falklands. The United Kingdom quickly struck back. In the ensuing two-month conflict, roughly 650 Argentine servicemen died, and Galtieri's political downfall was solidified. Magellanic Penguin (Image credit: Flickr user Bruno Furnari) When the dust cleared, Britain's rulers realized they'd just spent several million pounds to assert control over the Falklands, and it was probably in their best interest to find some way to prove that the expense had been worthwhile. Fishing seemed like the best way to make the Falklands economically self-sufficient, so the British government set up an exclusive fishing zone around the islands and began selling permits to everyone from local islanders to gigantic international fishing companies. It was a fine plan, except that the penguins relied on those same fish for survival. Before long, competing with humans for food had become a far greater threat to penguins than whaling had ever been. In a single decade, the Islands' penguin population dropped from more than 6 million to fewer than 1 million. THE SPOILS OF WAR The Falkland Islands War, and the dwindling supply of fish that came with it, seriously threatened the local penguins. But, ironically enough, it also led to their gradual comeback. Since the dispute, Britain and Argentina have approached one another on diplomatic eggshells, if at all. As a result, neither side has been willing to risk angering the other by drilling for oil off the Falklands coast - even though experts estimate that 11 billion barrels' worth of oil lie buried out there. That's good news for all of penguinkind. In other parts of the world, even small amounts of oil leaked from drilling stations have proved disastrous for penguins. The flightless birds rely on a very specific balance of oils on their feathers in order to maintain perfect buoyancy. When mixed with crude oil, penguins will either sink and drown or float and starve. But as long as tensions remain high between the two nations, the Falkland penguins are in the clear. The Falklands War also left the penguins with a bizarre kind of habitat protection. During Argentina's occupation of the islands, its military laid landmines along the beaches and pastureland near the capital city to deter the British from reclaiming the area. So far, those landmines haven't killed anyone, but the well-marked and fenced-off explosive zones have made for prime penguin habitat. The penguins aren't heavy enough to set off the mines, but because sheep and humans are, the little guys have the minefields all to themselves.
Macaroni Penguin (Image credit: Flickr user Terry Saunders) Today there are still an estimated 25,000 landmines in the Falkland Islands. Over the years, they've come in pretty handy not only for protecting the penguin habitat from over-grazing, but also for keeping out overzealous tourists. Consequently, Falkland Islanders have decided that maybe having landmines is not such a bad thing. Even though the British government is obligated to remove them by 2009, the islanders recently put forth a proposal calling for their government to instead clean up the same number of mines in greater-risk areas such as Angola, Cambodia, or Afghanistan. After all, signs warning "Keep away from the penguins" will never be as effective as "Keep away from the penguins - or die." Ed. Note: Although the British began removing the landmines in a 2009-2010 pilot program, thousands still remain there today. The above article by Hank Green is reprinted with permission from the March-April 2006 issue of mental_floss magazine. Be sure to visit mental_floss' entertaining website and blog for more fun stuff!
<urn:uuid:1ca61c93-ceb5-4ae7-b20a-ebbaa8000a56>
CC-MAIN-2013-20
http://www.neatorama.com/2011/04/20/how-an-island-full-of-landmines-led-to-a-thriving-penguin-population/
2013-05-22T07:50:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.958239
1,737
By Mignon M. Schminky and Jane A. Baran, Department of Communication Disorders, University of Massachusetts, Amherst, Massachusetts. Reprinted from Fall 1999 Deaf-Blind Perspectives, published by Teaching Research Division of Western Oregon University for DB-LINK. Hearing is a complex process that is often taken for granted. As sounds strike the eardrum, the sounds (acoustic signals) begin to undergo a series of transformations through which the acoustic signals are changed into neural signals. These neural signals are then passed from the ear through complicated neural networks to various parts of the brain for additional analysis, and ultimately, recognition or comprehension. For most of us, when someone talks about hearing abilities, we think primarily of the processing that occurs in the ear; that is, the ability to detect the presence of sound. Likewise, when someone is described as having a hearing loss, we assume that this individual has lost all or part of the ability to detect the presence of sound. However, the ability to detect the presence of sounds is only one part of the processing that occurs within the auditory system. There are many individuals who have no trouble detecting the presence of sound, but who have other types of auditory difficulties (e.g., difficulties understanding conversations in noisy environments, problems following complex directions, difficulty learning new vocabulary words or foreign languages) that can affect their ability to develop normal language skills, succeed academically, or communicate effectively. Often these individuals are not recognized as having hearing difficulties because they do not have trouble detecting the presence of sounds or recognizing speech in ideal listening situations. Since they appear to "hear normally," the difficulties these individuals experience are often presumed to be the result of an attention deficit, a behavior problem, a lack of motivation, or some other cause. If this occurs, the individual may receive medical and/or remedial services that do not address the underlying "auditory" problem. Central auditory processes are the auditory system mechanisms and processes responsible for the following behavioral phenomena: sound localization and lateralization; auditory discrimination; auditory pattern recognition; temporal aspects of audition, including temporal resolution, temporal masking, temporal integration, and temporal ordering; auditory performance decrements with competing acoustic signals; and auditory performance decrements with degraded acoustic signals. These mechanisms and processes apply to nonverbal as well as verbal signals and may affect many areas of function, including speech and language (ASHA, 1996, p. 41). Katz, Stecker & Henderson (1992) described central auditory processing as "what we do with what we hear." In other words, it is the ability of the brain (i.e., the central nervous system) to process incoming auditory signals. The brain identifies sounds by analyzing their distinguishing physical characteristics: frequency, intensity, and temporal features. These are features that we perceive as pitch, loudness, and duration. Once the brain has completed its analysis of the physical characteristics of the incoming sound or message, it then constructs an "image" of the signal from these component parts for comparison with stored "images." If a match occurs, we can then understand what is being said or we can recognize sounds that have important meanings in our lives (sirens, doorbells, crying, etc.). This explanation is an oversimplification of the complicated and multifaceted processes that occur within the brain.
The complexity of this processing, however, can be appreciated if one considers the definition of central auditory processing offered by the American Speech-Language-Hearing Association (ASHA). This definition acknowledges that many neurocognitive functions are involved in the processing of auditory information. Some are specific to the processing of acoustic signals, while others are more global in nature and not necessarily unique to processing of auditory information (e.g., attention, memory, language representation). However, these latter functions are considered components of auditory processing when they are involved in the processing of auditory information. CAPD can be defined as a deficiency in any one or more of the behavioral phenomena listed above. There is no one cause of CAPD. In many children, it is related to maturational delays in the development of the important auditory centers within the brain. Often, these children's processing abilities develop as they mature. In other children, the deficits are related to benign differences in the way the brain develops. These usually represent more static types of problems (i.e., they are more likely to persist throughout the individual's life). In other children, the CAPD can be attributed to frank neurological problems or disease processes. These can be caused by trauma, tumors, degenerative disorders, viral infections, surgical compromise, lead poisoning, lack of oxygen, auditory deprivation, and so forth. The prevalence of CAPD in children is estimated to be between 2 and 3% (Chermak & Musiek, 1997), with it being twice as prevalent in males. It often co-exists with other disabilities. These include speech and language disorders or delays, learning disabilities or dyslexia, attention deficit disorders with or without hyperactivity, and social and/or emotional problems. Common behavioral characteristics often noted in children with CAPD include difficulty understanding conversations in noisy environments, problems following complex directions, and difficulty learning new vocabulary words or foreign languages. It should be noted that many of these behavioral characteristics are not unique to CAPD. Some may also be noted in individuals with other types of deficits or disorders, such as attention deficits, hearing loss, behavioral problems, and learning difficulties or dyslexia. Therefore, one should not necessarily assume that the presence of any one or more of these behaviors indicates that the child has a CAPD. However, if any of these behaviors are noted, the child should be considered at risk for CAPD and referred for appropriate testing. Definitive diagnosis of a central auditory disorder cannot be made until specialized auditory testing is completed and other etiologies have been ruled out. There are a number of behavioral checklists that have been developed in an effort to systematically probe for behaviors that may suggest a CAPD (Fisher, 1976; Kelly, 1995; Smoski, Brunt, & Tannahill, 1992; Willeford & Burleigh, 1985). Some of these checklists were developed for teachers, while others were designed for parents. These checklists can be helpful in determining whether a child should be referred to an audiologist for a central auditory processing assessment. CAPD is assessed through the use of special tests designed to assess the various auditory functions of the brain. However, before this type of testing begins, it is important that each person being tested receive a routine hearing test for reasons that will become obvious later. There are numerous auditory tests that the audiologist can use to assess central auditory function.
These fall into two major categories: behavioral tests and electrophysiologic tests. The behavioral tests are often broken down into four subcategories, including monaural low-redundancy speech tests, dichotic speech tests, temporal patterning tests, and binaural interaction tests. It should be noted that children being assessed for CAPD will not necessarily be given a test from each of these categories. Rather, the audiologist will select a battery of tests for each child. The selection of tests will depend upon a number of factors, including the age of the child, the specific auditory difficulties the child displays, the child's native language and cognitive status, and so forth. For the most part, children under the age of 7 years are not candidates for this type of diagnostic testing. In addition, central auditory processing assessments may not be appropriate for children with significant developmental delays (i.e., cognitive deficits). Space limitations preclude an exhaustive discussion of each of the central tests that are available for clinical use. However, a brief overview of the major test categories is provided, along with an abbreviated description of a few tests that are considered representative of the many tests available for use in central auditory assessments. Electrophysiologic tests are measures of the brain's response to sounds. For these tests, electrodes are placed on the earlobes and head of the child for the purpose of measuring electrical potentials that arise from the central nervous system in response to an auditory stimulus. An auditory stimulus, often a clicking sound, is delivered to the child's ear and the electrical responses are recorded. Some electrophysiologic tests are used to evaluate processing lower in the brain (auditory brainstem response audiometry), whereas others assess functioning higher in the brain (middle latency responses, late auditory evoked responses, auditory cognitive or P300 responses). The results obtained on these tests are compared to age-appropriate norms to determine if any abnormalities exist. Monaural Low-Redundancy Speech Tests: Due to the richness of the neural pathways in our auditory system and the redundancy of acoustic information in spoken language, a normal listener is able to recognize speech even when parts of the signal are missing. However, this ability is often compromised in the individual with CAPD. Monaural low-redundancy speech tests represent a group of tests designed to test an individual's ability to achieve auditory closure when information is missing. The speech stimuli used in these tests have been modified by changing one or more of the following characteristics of the speech signal: frequency, temporal, or intensity characteristics. An example of a test in this category is the Compressed Speech test (Beasley, Schwimmer, & Rintelmann, 1972). This is a test in which the speech signals have been altered electronically by removing portions of the original speech signal. The test items are presented to each ear individually and the child is asked to repeat the words that have been presented. A percent correct score is derived for each ear and these are compared to age-appropriate norms. Dichotic Speech Tests: In these tests, different speech items are presented to both ears either simultaneously or in an overlapping manner, and the child is asked to repeat everything that is heard (divided attention) or repeat whatever is heard in one specified ear (directed attention).
The more similar and closely acoustically aligned the test items, the more difficult the task. One of the more commonly used tests in this category is the Dichotic Digits test (Musiek, 1983). The child is asked to listen to four numbers presented to the two ears at comfortable listening levels. In each test item two numbers are presented to one ear and two numbers are presented to the other ear. For example, in Figure 1, 5 is presented to the right ear at the same time as 1 is presented to the left ear. Then the numbers 9 and 6 are presented simultaneously to the right and left ears. The child is asked to repeat all numbers heard, and a percent correct score is determined for each ear and compared to age-appropriate norms. [Figure 1: the numbers 5 and 9 are presented to the right ear while the numbers 1 and 6 are presented to the left ear.] Temporal Patterning Tests: These tests are designed to test the child's ability to process nonverbal auditory signals and to recognize the order or pattern of presentation of these stimuli. A child can be asked to simply "hum" the patterns. In this case, the processing of the stimuli would occur largely in the right half of the brain. If, on the other hand, the child is asked to describe the patterns using words, then the left side of the brain is also involved, as well as the major auditory fibers that connect the auditory portions of both sides of the brain. The Frequency Pattern Sequences test (Musiek & Pinheiro, 1987) is one of the temporal patterning tests used frequently with children. The test items are sequences of three tone bursts that are presented to one or both ears. In each of the sequences two tone bursts are of the same frequency, while the third tone is of a different frequency. There are just two different frequencies used in this test: one is a high-frequency sound and the other a low-frequency sound. The child therefore hears patterns, such as high-high-low or low-high-low, and is asked to either hum or describe the patterns heard. As with other central tests, the test items are presented at levels that are comfortable for the child, and percent correct scores are obtained and compared to norms. Binaural Interaction Tests: Binaural interaction tests are sometimes referred to as binaural integration tests. These tests tap the ability of structures low in the brain (brainstem) to take incomplete information presented to the two ears and fuse or integrate this information in some manner. Most of the tests in this category present different parts of a speech signal to each ear separately. If only one part of the signal is presented, the child usually cannot recognize the test item. However, if the two different parts of the stimuli are presented simultaneously, with one portion going to one ear and the other portion to the other ear, the child with normal processing abilities has no difficulty recognizing the test item. This is because the two parts (which are unrecognizable if presented in isolation) are integrated into a single identifiable stimulus by the auditory nervous system. An example of a test in this category is the Rapidly Alternating Speech Perception test (Willeford, 1976). For this test, sentence materials are divided into brief segments which are alternated rapidly between the two ears. The example below is a rough approximation of what happens to a sentence when it is segmented in this manner.
In this example, the first sound in the sentence "Put a dozen apples in the sack" (represented by pu) is presented to the right ear, then the t sound is presented to the left ear, and so forth. If the child hears only the segments presented to the right ear or left ear, he or she is unlikely to be able to recognize the sentence. However, if the right ear and left ear segments are presented in a cohesive fashion to the child, sentence recognition improves dramatically as long as this particular function of the brain is intact. Rapidly Alternating Speech Perception Right ear: PU A ZE AP S N SA Left ear: T DO N PLE I THE CK (For text readers only: Figure 2 shows a visual representation of the above example, with the segments PU, A, ZE, AP, S, N, SA presented to the right ear and the segments T, DO, N, PLE, I, THE, CK presented to the left ear). The list of behavioral observations provided earlier in this article highlights many of the academic and/or speech and language problems that might be experienced by the child with CAPD. Since speech and language skills are developed most efficiently through the auditory sensory modality, it is not unusual to observe speech and language problems, as well as academic problems (many of them language-based), in children with CAPD. If a child experiences difficulty in processing the brief and rapidly changing acoustics of spoken speech, he or she is likely to have problems recognizing the "speech sounds" of language. If problems are encountered in recognizing the sound system of language, then additional problems are likely to be encountered when the child is asked to begin to match "speech sounds" to their alphabetic representations (a skill that serves as the foundation for the development of subsequent reading and writing skills). This in turn can lead to comprehension problems and poor academic performance. It is worth reiterating that not all children with CAPD will experience all of these problems. There is a wide range of variability in the problems experienced by children with CAPD; however, it should be recognized that the presence of a CAPD places the child at risk for developing many of these language and academic problems. There are several different ways to help children overcome their CAPD. The exact procedures or approaches used will depend upon a number of factors, including the exact nature of the CAPD, the age of the child, the co-existence of other disabilities and/or problems, and the availability of resources. In general, the approaches to remediation or management fall into three main categories: (a) enhancing the individual's auditory perceptual skills, (b) enhancing the individual's language and cognitive resources, and (c) improving the quality of the auditory signal. The following discussion presents some of the procedures that may be used with a child with CAPD. More detailed information is beyond the scope of this article, but may be found in the various resources listed at the end of this article. Many children with CAPD will benefit from auditory training procedures and phonological awareness training. Intervention may also involve the identification of (and training in the use of) strategies that can be used to overcome specific auditory, speech and language, or academic difficulties. A number of actions can be taken to improve the quality of the signal reaching the child. 
Children can be provided personal assistive-listening devices that should serve to enhance the teacher's voice and reduce the competition of other noises and sounds in the classroom. Acoustic modifications can be made to the classroom (e.g., carpeting, acoustic ceiling tiles, window treatments) which should help to minimize the detrimental effects of noise on the child's ability to process speech in the educational setting. Finally, teachers and parents can assist the child in overcoming his or her auditory deficits by speaking clearly, rephrasing information, providing preferential seating, using visual aids to supplement auditory information, and so forth. The management program should be tailored to the child's individual needs, and it should represent an interdisciplinary approach. Parents, teachers, educational specialists, and other professionals, as appropriate, should be involved in the development and implementation of the child's management program. Children with CAPD do not have hearing loss if the term is used to refer to a loss of hearing sensitivity. Most children with CAPD have normal hearing sensitivity, and their auditory difficulties will not be detected during routine hearing testing unless some of the special "sensitized" tests (see discussion above) are administered. These children, however, have hearing loss in the sense that they do not process auditory information in a normal fashion. They have auditory deficits that can be every bit as debilitating as unidentified hearing loss. If the auditory deficits are not identified early and managed appropriately, many of these children will experience speech and language delays, academic failure and/or underachievement, loss of self-esteem, and social and emotional problems. Children can have both a hearing loss and a CAPD. Fortunately, most children seen for central auditory testing have normal hearing (i.e., detection) abilities. However, children with hearing loss can also have a CAPD. In fact, the presence of a hearing loss may place a child at risk for CAPD. This is because the auditory pathways and centers in the brain develop as they are stimulated with sound. The presence of a hearing loss may limit the amount and type of auditory stimulation that is necessary to promote optimal development of the auditory nervous system. If this happens, then auditory deficits are likely to result. A question frequently asked of audiologists is whether a child with a hearing loss can be tested for CAPD. The answer is not a simple "yes" or "no." Many children with hearing losses can be tested as long as they have some hearing (i.e., detection) abilities. Interpretation of the test results does become somewhat more difficult for the audiologist who is conducting the testing if a hearing loss is present, but there are distinct patterns of test results that can indicate the presence of a CAPD. Moreover, there are certain tests that the audiologist can use that are not affected to the same degree as other tests by the presence of a hearing loss. These tests should be used whenever feasible. Unfortunately, there are some individuals with losses so severe that testing cannot be completed. As a general rule, central auditory testing cannot be done if the individual being tested has a hearing loss falling in the severe-to-profound range. The books listed in the reference section are good sources of information. In addition, we have provided a list of web sites that you may find helpful. 
Selected Web Sites for Teachers and Parents Address correspondence to: Jane A. Baran, Ph.D., Professor, Department of Communication Disorders, University of Massachusetts, 127 Arnold House, Amherst, MA 01003-0410. Telephone: (413) 545-0565; Fax: (413) 545-0803; email@example.com.
<urn:uuid:09a6c6f1-5201-4395-8bc1-a6a8982627ad>
CC-MAIN-2013-20
http://www.tsbvi.edu/seehear/spring00/centralauditory.htm
2013-05-19T18:33:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94576
4,196
What Is A Supernova? This Chandra X-ray photograph shows Cassiopeia A (Cas A, for short), the youngest supernova remnant in the Milky Way. CREDIT: NASA/CXC/MIT/UMass Amherst/M.D.Stage et al. A blindingly bright star bursts into view in a corner of the night sky — it wasn't there just a few hours ago, but now it burns like a beacon. That bright star isn't actually a star, at least not anymore. The brilliant point of light is the explosion of a star that has reached the end of its life, otherwise known as a supernova. Supernovas can briefly outshine entire galaxies and radiate more energy than our sun will in its entire lifetime. They're also the primary source of heavy elements in the universe. On average, a supernova will occur about once every 50 years in a galaxy the size of the Milky Way. Put another way, a star explodes every second or so somewhere in the universe. Exactly how a star dies depends in part on its mass. Our sun, for example, doesn't have enough mass to explode as a supernova (though the news for Earth still isn't good, because once the sun runs out of its nuclear fuel, in several billion years, it will swell into a red giant that will likely vaporize our world, before gradually cooling into a white dwarf). A star can go supernova in one of two ways: - Type I supernova: the star accumulates matter from a nearby neighbor until a runaway nuclear reaction ignites. - Type II supernova: the star runs out of nuclear fuel and collapses under its own gravity. Let's look at the more exciting Type II first: For a star to explode as a Type II supernova, it must be several times more massive than the sun (estimates run from eight to 15 solar masses). Like the sun, it will eventually run out of hydrogen and then helium fuel at its core. However, it will have enough mass and pressure to fuse carbon. Here's what happens next: - Gradually heavier elements build up at the center, and it becomes layered like an onion, with elements becoming lighter towards the outside of the star. - Once the star's core surpasses a certain mass (the Chandrasekhar limit), the star begins to implode (for this reason, these supernovas are also known as core-collapse supernovas). - The core heats up and becomes denser. - Eventually the implosion bounces back off the core, expelling the stellar material into space — the supernova. What's left is an ultradense object called a neutron star. There are sub-categories of Type II supernovas, classified based on their light curves. The light of Type II-L supernovas declines steadily after the explosion, while Type II-P's light stays steady for a time before diminishing. Both types have the signature of hydrogen in their spectra. Stars much more massive than the sun (around 20 to 30 solar masses) might not explode as a supernova, astronomers think. Instead they collapse to form black holes. Type I supernovas lack a hydrogen signature in their light spectra. Type Ia supernovas are generally thought to originate from white dwarf stars in a close binary system. As gas from the companion star accumulates onto the white dwarf, the white dwarf is progressively compressed, eventually setting off a runaway nuclear reaction inside that leads to a cataclysmic supernova outburst. Astronomers use Type Ia supernovas as "standard candles" to measure cosmic distances because all are thought to blaze with equal brightness at their peaks. 
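A brief aside, not from the original article, to make the "standard candle" idea concrete: if every Type Ia peaks near the same absolute magnitude (a commonly quoted value is M ≈ −19.3), then measuring the apparent magnitude m at peak gives the distance d through the distance modulus,

    m - M = 5\log_{10}\left(\frac{d}{10\,\mathrm{pc}}\right), \quad\text{so}\quad d = 10^{(m - M + 5)/5}\ \mathrm{pc}.

For example, a Type Ia observed to peak at m = 16 would lie at roughly 10^{(16 + 19.3 + 5)/5} ≈ 10^{8.06} parsecs, or about 115 megaparsecs.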
Type Ib and Ic supernovas also undergo core collapse just as Type II supernovas do, but they have lost most of their outer hydrogen envelopes. Recent studies have found that supernovas vibrate like giant speakers and emit an audible hum before exploding. In 2008, scientists caught a supernova in the act of exploding for the first time.
<urn:uuid:412253d1-0671-4662-97fa-1ed2f17d79ec>
CC-MAIN-2013-20
http://www.space.com/6638-supernova.html
2013-05-19T03:22:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383160/warc/CC-MAIN-20130516092623-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.927955
863
KANT IN THE CLASSROOM: Materials to aid the study of Kant's lectures
Philosophical Encyclopedia Notes
Kant taught a course on the so-called "Philosophical Encyclopedia" a total of ten semesters, beginning with SS 1767 and ending with WS 1781/82. We have mention of three sets of notes, but only an-Friedländer 4.1 is still available. See the Encyclopedia lectures.
The Philosophical Encyclopedia Notes
Abbreviations: A: availability [‡ = the set of notes (either as manuscript or in printed form) appears to be complete, + = a large fragment of the original text is still available, - = only a small fragment of the original text is available, (no sign) = none of the original text is available], * = only part of the available text was printed, Ak. = Akademie-Ausgabe, an = anonymous, Kön = Königsberg, NA () = not available (last known location), rpt. = reprint of, var = published as a variant reading.
Bibliography: Lehmann 1961: Gerhard Lehmann, ed., Vorlesungen über Enzyklopädie und Logik, Bd. 1: Vorlesungen über Philosophische Enzyklopädie (Berlin: Akademie Verlag, 1961). [This is not a volume of the Akademie-Ausgabe.]
(1) anonymous-Friedländer 4.1
Philosophische Enzyklopädie [Lehmann 1980].
Physical Description and History
Bound quarto volume, 840 pp. Title on the spine: "Phylosophische Encycopädie aus den Vorlesungen von I. Kant". The volume contains four texts: (1) philosophical encyclopedia notes (an-Friedländer 4.1), (2) a nine-page student essay originating from one of Kant's metaphysics lectures, (3) anthropology notes (an-Friedländer 4.3), and (4) physics notes (an-Friedländer 4.4). Each manuscript is paginated separately. There is no separate title-page for the notes, which begin: "Philosophische-Encyclopedie / oder / ein kurtzer Inbegrif aller philosophischen / Wißenschaften / aus den Vorlesungen / des / Herrn Profeßoris Immanuel Kant".
David Joachim Friedländer owned lecture notes on anthropology (two sets), physical geography, moral philosophy, and physics, apart from the notes on philosophical encyclopedia, which comprise the first 144 pages of a volume containing four distinct works. Second in the volume (an-Friedländer 4.2) is a nine-page student essay originating from one of Kant's metaphysics lectures. Third in the volume is the 840 page set of anthropology notes (an-Friedländer 4.3). Fourth is the 52 page set of notes on physics (an-Friedländer 4.4). [Lehmann 1980; Ak. 29:663]
Richter [1974, 65] identifies this fragment as coming from Kant's logic lectures (as does Kuehn [1983]), and counts eight sheets. It is in fact a 9 page untitled student essay, prepared in conjunction with Kant's metaphysics lectures — specifically, §§7-18 (on possibility) of Baumgarten's Metaphysics. The manuscript contains four marginalia written in Kant's hand, and the student essay along with Kant's marginalia are printed at Ak. 17: 262-69.
(1) Ms: Berlin, SBPK, Haus II [Ms. germ. quart. 400.1]. (2) Film: Marburg Kant-Archiv [Film 25].
(1) Lehmann [1961, 31-68]. (2) Lehmann [1980; Ak. 29: 5-45]. Corresponds to Ms. 3-144.
Kant lectured on this topic ten times, beginning with WS 1767/68 and ending WS 1781/82. Kuehn argues for 1775 as the date of the notes' source lecture. Stark [1985b, 631] reports a reference in the notes
(Ak. 29: 26) to a Berlin Academy prize essay question — "On whether the government should deceive the people for their own good" — that was announced in November 1777 (with the prize awarded in 1780); this suggests a dating no earlier than WS 1777/78. This could be the set of notes that Kant delivered to Herz at the end of 1778 (see Kant's letter to Herz of 15 December 1778). Kant would have lectured the previous winter semester, and says in his letter that it was difficult to procure the notes, and that he lacked the time to read through, much less correct or amplify, them. In his letter to Herz the following month (January 1779), Kant expressed dismay at the poor material in the manuscript. In brief, Kuehn argues that the notes' content limits the possible semesters to 1775, 1777/78, 1779/80, or 1781/82. The last date was favored by Lehmann, but for faulty reasons that Kuehn and Tonelli [1962, 513] make plain. Kuehn also notes that the lectures must have pre-dated Kant's exposure to Tetens (at the latest, early spring of 1778). Because these notes are bound together with three other manuscripts (two other sets of notes from Kant's lectures — on anthropology and on physics — as well as a short student essay that bears marginalia in Kant's hand), Kuehn also makes use of clues in these other manuscripts, such as the mention of "Basedow's institute" in the anthropology notes (this reference would have likely pre-dated Basedow's resignation from the Philanthropinum in 1776). This evidence, however, is not decisive; it isn't even suggestive of much. The anthropology notes have since been dated to WS 1775/76, but they could still have been bound with notes stemming from lectures given many years before or after that semester. The notes are based on the textbook: Johann Georg Heinrich Feder, Grundriß der Philosophischen Wissenschaften nebst der nöthigen Geschichte, zum Gebrauche seiner Zuhörer [Coburg: Findeisen, 1767; 2nd ed. 1769]. Kant used Feder in 1767/8, 1768/9, 1769, 1770/71, 1775, 1777/78, 1779/80, and 1781/82. The topics covered include: systems vs. aggregates / kinds of knowledge [Ms 3-9]; nature and brief history of philosophy [Ms 9-28]; on genius [Ms 29-31]; logic in general [Ms 31-33]; innate ideas [Ms 34-42]; concepts [Ms 43-50]; judgments [Ms 50-52]; inference [Ms 52-4]; nature of truth [Ms 54-57]; means for arriving at truth [Ms 57-69]; prejudice [Ms 70-84]; learning and thinking [Ms 85-93]; history of logic [Ms 93-100]; metaphysics [Ms 100-34]; monads [Ms 135-41]; empirical psychology [Ms. 141-4]. Wundt [1924, 163] rightly suggests that, for the survey of the history of philosophy, Kant would have relied on Jakob Brucker's Historia critica philosophiae, 2nd ed. [1766-67]. A useful background on Brucker's history and Kant's knowledge of it can be found in Fistioc.
(2) anonymous-Hippel 2
See an-Hippel 1 (anthropology).
(3) anonymous-Pillau 2
Physical Description and History
Bound quarto volume. On the spine: "Kunowsky, logicalischer Katechismus. Auch einige Bemerkungen über phyisische Geographie." The first part of this volume is a handwritten copy of a book by G. S. Kunowsky [Berlin: Gottlieb August Lange, 1775]. At the top of the first page (?) of the second part of this volume: "II. Teil. Prolegomena Phylosophiae." This part consists of three chapters, consisting of 15, 8, and 8 sheets, respectively: (a) Prolegomena Logices, (b) Prolegomena Psychologiae, and (c) Kurze Darstellung der Praktischen Philosophie. 
A third part contains notes on physical geography [Vaihinger 1899, 253-55]. See the related notes on anthropology and physical geography, discussed at an-Pillau 1 (anthropology). All three volumes were located together and reported by Vaihinger; the two other volumes (anthropology and physical geography) are now in Berlin.
(1) Ms: Pillau, Realprogymnasium. Lost.
<urn:uuid:dd62def4-8f18-4d6e-b8ac-b9bda8cee213>
CC-MAIN-2013-20
http://www.manchester.edu/kant/Notes/NotesEncyclopedia.htm
2013-05-26T09:35:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.809862
2,118
Grade B
The student has a thorough knowledge and understanding of the content and a high level of competence in the processes and skills. In addition, the student is able to apply this knowledge and these skills to most situations.
Foundation Statement strands
The following strands are covered in this activity:
- Fundamental Movement and Physical Activity
Students apply movement skills in dance, gymnastics, games and sports, and practise manipulative skills in a range of minor games. They perform movement sequences with consistency and control and demonstrate cooperation, effort and practice in physical activity. Students demonstrate proficiency in the fundamental movement skills of static balance, sprint run, vertical jump, catch, hop, side gallop, skip and overarm throw through practice and application in different games and sports. They participate in physical activity and investigate how it contributes to a healthy and active lifestyle.
<urn:uuid:fc1ad42c-dabd-4d03-a0cc-9be569b1faf3>
CC-MAIN-2013-20
http://arc.boardofstudies.nsw.edu.au/go/stage-2/pdhpe/stu-work/b/fundamental-movement-skills-mackenzie/grade-commentary/
2013-05-23T04:40:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93299
168
UN General Assembly Resolution 194
| Date: || December 11, 1948 |
| Meeting: || 186 |
| Code: || A/RES/194 (III) |
| Vote: || For: 35, Against: 15, Abstentions: 8 |
| Subject: || Palestine - UN Mediator progress report |
| Result: || Adopted |
UN General Assembly Resolution 194 was passed on December 11, 1948, near the end of the 1948 Arab-Israeli war. The resolution expresses appreciation for the efforts of the UN mediator Folke Bernadotte, who was assassinated by members of the ultra-nationalist Zionist group Lehi, whose leaders included Yitzhak Shamir. Resolution 194 addressed the situation in Palestine at that time, after the majority of the Arab population of Palestine had fled or been expelled, and with Transjordan's Arab Legion occupying the West Bank and East Jerusalem. The resolution called for the return of refugees to their homes and defined the role of the United Nations Conciliation Commission as a body to promote peace in the region. The resolution was adopted by 35 of the 58 member states of the United Nations at the time. All six Arab countries then represented at the UN (Egypt, Iraq, Lebanon, Saudi Arabia, Syria, and Yemen), themselves parties to the conflict in question, voted against it. Israel had not yet been admitted to the United Nations. The resolution consists of 15 articles; the most frequently cited are the following:
- Article 7: protection of and free access to the Holy Places
- Article 8: United Nations control and demilitarization of Jerusalem
- Article 9: free access to Jerusalem
- Article 11: calls for the repatriation of refugees
Israel was admitted to the United Nations on 11 May 1949 by General Assembly Resolution 273, which took note of Israel's declarations concerning the implementation of UN resolutions, including Resolutions 181 and 194.
International reception and interpretation
The resolution was initially rejected by Israel and by the Arab states, and because the fighting continued until the armistice of 1949, many of the articles of the resolution were overtaken by the events of the war. Israel has since rejected the resolution's call for the return of the Palestinian refugees. General Assembly resolutions are recommendations rather than binding obligations, and there is no mechanism for enforcing Resolution 194. [citation needed]
Article 11 - Refugees
Article 11 reads:
- Resolves that the refugees wishing to return to their homes and live at peace with their neighbours should be permitted to do so at the earliest practicable date, and that compensation should be paid for the property of those choosing not to return and for loss of or damage to property which, under principles of international law or in equity, should be made good by the Governments or authorities responsible;
- Instructs the Conciliation Commission to facilitate the repatriation, resettlement and economic and social rehabilitation of the refugees and the payment of compensation, and to maintain close relations with the Director of the United Nations Relief for Palestine Refugees and, through him, with the appropriate organs and agencies of the United Nations.
The exact meaning, implementation, and timing of the resolution were disputed from the beginning. Since the late 1960s, Article 11 has been cited more and more by those who interpret it as the basis for a "right of return" of the Palestinian refugees. 
Israel has generally disputed this reading, noting that the text merely recommends that the refugees "should be permitted" to return at the "earliest practicable date", and that this recommendation applies only to those "wishing to ... live at peace with their neighbors". One exception was the Lausanne Conference of 1949, at which representatives of the Arab governments and Israel approved a joint protocol on May 12, 1949. After Israel became a member of the United Nations, Prime Minister Moshe Sharett offered to repatriate 100,000 refugees. This offer was later withdrawn by Israel after David Ben-Gurion once again became prime minister. In addition to Jewish immigrants from Eastern Europe, Israel absorbed a large number of Jewish refugees who had been induced or forced to emigrate from Arab countries: a total of some 750,000-850,000 Jews from Arab lands in the period 1948-1951. In 1950 Israel passed the Law of Return, legislation granting all Jews the right to immigrate, "the return from exile." Palestinian refugee status is assumed to be hereditary, and roughly five million people living in the West Bank, Gaza, Jordan, Lebanon, and Syria are estimated to be able to claim a right of return under this article. Under the UN Convention Relating to the Status of Refugees of 1951, the definition of a refugee is narrower: a person who, owing to a well-founded fear of being persecuted by reason of race, religion, nationality, membership of a particular social group, or political opinion, is outside the country of his nationality and is unable or, owing to such fear, unwilling to avail himself of the protection of that country. According to this definition, many of the Palestinians displaced during the war of 1948 were not refugees but internally displaced persons. [citation needed]
Text
The General Assembly, having considered further the situation in Palestine,
- Expresses its deep appreciation of the progress achieved through the good offices of the late United Nations Mediator in promoting a peaceful adjustment of the future situation of Palestine, for which cause he sacrificed his life; and extends its thanks to the Acting Mediator and his staff for their continued efforts and devotion to duty in Palestine;
- Establishes a Conciliation Commission consisting of three States Members of the United Nations, which shall have the following functions:
- To assume, in so far as it considers necessary in existing circumstances, the functions given to the United Nations Mediator on Palestine by General Assembly resolution 186 (S-2) of 14 May 1948;
- To carry out the specific functions and directives given to it by the present resolution, and such additional functions and directives as may be given to it by the General Assembly or by the Security Council;
- To undertake, upon the request of the Security Council, any of the functions now assigned to the United Nations Mediator on Palestine or to the United Nations Truce Commission by resolutions of the Security Council; upon such request to the Conciliation Commission by the Security Council with respect to all the remaining functions of the United Nations Mediator on Palestine under Security Council resolutions, the office of the Mediator shall be terminated. 
- Decides that a Committee of the Assembly, consisting of China, France, the Union of Soviet Socialist Republics, the United Kingdom and the United States of America, shall present, before the end of the first part of the present session of the General Assembly, for the approval of the Assembly, a proposal concerning the names of the three States which will constitute the Conciliation Commission;
- Requests the Commission to begin its functions at once, with a view to the establishment of contact between the parties themselves and the Commission at the earliest possible date;
- Calls upon the Governments and authorities concerned to extend the scope of the negotiations provided for in the Security Council's resolution of 16 November 1948 and to seek agreement by negotiations conducted either with the Conciliation Commission or directly, with a view to the final settlement of all questions outstanding between them;
- Instructs the Conciliation Commission to take steps to assist the Governments and authorities concerned to achieve a final settlement of all questions outstanding between them;
- Resolves that the Holy Places - including Nazareth - religious buildings and sites in Palestine should be protected and free access to them assured, in accordance with existing rights and historical practice; that arrangements to this end should be under effective United Nations supervision; that the United Nations Conciliation Commission, in presenting to the fourth regular session of the General Assembly its detailed proposals for a permanent international regime for the territory of Jerusalem, should include recommendations concerning the Holy Places in that territory; that with regard to the Holy Places in the rest of Palestine the Commission should call upon the political authorities of the areas concerned to give appropriate formal guarantees as to the protection of the Holy Places and access to them; and that these undertakings should be presented to the General Assembly for approval;
- Resolves that, in view of its association with three world religions, the Jerusalem area, including the present municipality of Jerusalem plus the surrounding villages and towns, the most eastern of which shall be Abu Dis; the most southern, Bethlehem; the most western, Ein Karim (including also the built-up area of Motsa); and the most northern, Shu'fat, should be accorded special and separate treatment from the rest of Palestine and should be placed under effective United Nations control; requests the Security Council to take further steps to ensure the demilitarization of Jerusalem at the earliest possible date; instructs the Conciliation Commission to present to the fourth regular session of the General Assembly detailed proposals for a permanent international regime for the Jerusalem area which will provide for the maximum local autonomy for distinctive groups, consistent with the special international status of the Jerusalem area; the Conciliation Commission is authorized to appoint a United Nations representative, who shall co-operate with the local authorities with respect to the interim administration of the Jerusalem area;
- Resolves that, pending agreement on more detailed arrangements among the Governments and authorities concerned, the freest possible access to Jerusalem by road, rail or air should be accorded to all inhabitants of Palestine; 
instructs the Conciliation Commission to report immediately to the Security Council, for appropriate action by that organ, any attempt by any party to impede such access;
- Instructs the Conciliation Commission to seek arrangements among the Governments and authorities concerned which will facilitate the economic development of the area, including arrangements for access to ports and airfields and the use of transportation and communication facilities;
- Resolves that the refugees wishing to return to their homes and live at peace with their neighbours should be permitted to do so at the earliest practicable date, and that compensation should be paid for the property of those choosing not to return and for loss of or damage to property which, under principles of international law or in equity, should be made good by the Governments or authorities responsible; and instructs the Conciliation Commission to facilitate the repatriation, resettlement and economic and social rehabilitation of the refugees and the payment of compensation, and to maintain close relations with the Director of the United Nations Relief for Palestine Refugees and, through him, with the appropriate organs and agencies of the United Nations;
- Authorizes the Conciliation Commission to appoint such subsidiary bodies and to employ such technical experts, acting under its authority, as it may find necessary for the effective discharge of its functions and responsibilities under the present resolution; the Conciliation Commission will have its official headquarters at Jerusalem. The authorities responsible for maintaining order in Jerusalem will be responsible for taking all measures necessary to ensure the security of the Commission. The Secretary-General will provide a limited number of guards for the protection of the staff and premises of the Commission;
- Instructs the Conciliation Commission to render progress reports periodically to the Secretary-General for transmission to the Security Council and to the Members of the United Nations;
- Calls upon all Governments and authorities concerned to co-operate with the Conciliation Commission and to take all possible steps to assist in the implementation of the present resolution;
- Requests the Secretary-General to provide the necessary staff and facilities and to make appropriate arrangements to provide the necessary funds required in carrying out the terms of the present resolution.
Arab Legion soldiers watch over the evacuation of Jews from the Jewish Quarter of Jerusalem, 1948 (Life magazine)
<urn:uuid:30982b3e-958b-4faf-abc9-1126609d236d>
CC-MAIN-2013-20
http://businesscashadvance.biz/p/personal+loans+no/key/194/
2013-05-18T08:47:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381630/warc/CC-MAIN-20130516092621-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.949111
2,387
This picture shows where Earth's North Magnetic Pole was in 2005. It also shows Earth's geographic North Pole. The two poles are several hundred kilometers apart. Original artwork by Windows to the Universe staff (Randy Russell).
Earth's North Magnetic Pole
Earth has a magnetic field with a north pole and a south pole. Earth's magnetic field is pretty much (but not exactly) like the magnetic field around a bar magnet. Earth's North Magnetic Pole (NMP) is not in the same place as the geographic North Pole. The NMP is off the northern coast of Canada, several hundred kilometers from the geographic North Pole. Earth's magnetic poles move around. The NMP sometimes moves 85 km (53 miles) in a single day. The NMP moves in a circle or oval each day as the Earth spins. The interplay between Earth's magnetic field and the Sun's magnetic field causes these daily motions. Over longer time periods, the NMP moves even further. It moved about 1,100 km (684 miles) during the 20th century. Right now it is headed towards Siberia, but it will probably change course before it gets there. Compass needles point towards the NMP. Since the NMP is pretty close to the geographic North Pole, people have used compasses to find their way around for many, many years. Did you know that the NMP is really the south pole of Earth's magnetic field? What? People were using compasses for a long time before they really understood magnets. After many years they discovered that the north pole of one magnet is attracted to the south pole of another. A compass needle is a tiny magnet. The needle's north pole points toward Earth's NMP... so the NMP is really the south pole of Earth's magnetic field. Pretty confusing, huh? Some kinds of radiation in space flow along magnetic field lines. Earth's magnetic field steers these particles towards Earth's magnetic poles. When the particles blast into our atmosphere, they make gases in the atmosphere glow. That's what causes the beautiful Northern Lights!
<urn:uuid:8a1ac146-525b-4017-8e7c-a087520ae703>
CC-MAIN-2013-20
http://www.windows2universe.org/earth/Magnetosphere/earth_north_magnetic_pole.html
2013-05-18T18:25:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382705/warc/CC-MAIN-20130516092622-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93592
809
Breathing Well: Pulmonary Rehabilitation Treats People with COPD
Do you ever feel like you can't get enough air? Maybe you cough with almost every breath, or have trouble clearing your throat. Could it be Chronic Obstructive Pulmonary Disease (COPD)?
COPD is a lung disease that includes both emphysema and chronic bronchitis. A person with COPD has difficulty breathing due to narrowed or blocked airways to the lungs. Over time, air sacs (or alveoli) in the lungs are destroyed and can no longer provide oxygen to the blood. Symptoms include trouble breathing or a cough that won't go away. COPD develops gradually, over many years. Smoking is the most common cause of COPD. Other causes include long-term exposure to pollution or second-hand smoke.
There is no cure for COPD, but there are treatments to help. Inhaled medications can open up the airways to ease breathing. Special techniques taught in a pulmonary rehabilitation class can help you manage shortness of breath and improve muscle strength.
"The sooner COPD is detected, the better the results of treatment," says pulmonologist Dmitry Lvovsky, MD. "There are different ways to treat the disease, based on its severity, so it is important to have your symptoms evaluated by a specialist in pulmonary diseases."
For referrals to Bridgeport Hospital physicians specializing in COPD or other pulmonary diseases, please call toll free, 24/7, at 1-888-357-2396 or visit
<urn:uuid:700ad90d-3be0-4263-a639-e4dfa312c831>
CC-MAIN-2013-20
http://www.bridgeporthospital.org/publications/HealthyWise/default.aspx?view=Article&iArticleID=245&cIssueKey=12SCC53&fontsize=3
2013-05-24T22:29:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.902142
365
Helicobacter pylori is an important agent of gastroduodenal disease in Africa and throughout the world. We sought to determine an optimum method for genotyping H. pylori strains from children and adults in The Gambia, West Africa. Virulence genes were amplified in 127 of 190 cases tested (121 adults and 6 children): in each of 60 bacterial cultures, and in 116 cases from DNA extracted directly from biopsies. The proportion of biopsies that were cagA+, the ratio of vacAs1/s2 and vacAm1/m2, and the proportion of mixed strain populations in individual subjects changed with age. Strains lacking the virulence-associated cagA gene and vacA alleles, and apparently homogeneous (one predominant strain) infections, were more common among infants than adults. In order to detect the range of bacterial genotypes harbored by individual patients, direct PCR proved slightly superior to isolation of H. pylori by biopsy culture, but the techniques were complementary, and the combination of both culture and direct PCR produced the most complete picture. The seemingly higher virulence of strains from adult than infant infections in The Gambia merits further analysis.
Keywords: Genotyping; Helicobacter pylori; biopsy specimens; bacterial cultures
Helicobacter pylori chronically infects over 50% of people worldwide, causes gastritis and sometimes gastric or duodenal ulceration, and increases the risk of gastric cancer [1,2]. Infection also contributes to other maladies such as malnutrition among the very poor, iron deficiency anemia, and susceptibility to other food and water borne pathogens, especially in developing countries, including The Gambia [3,4]. The prevalence of H. pylori infection is particularly high in developing countries, including The Gambia [5-7]. Because H. pylori is a fastidious micro-aerobic bacterium, it is technically difficult to grow and maintain for molecular biologic research in poorly resourced laboratories in Africa. These challenges, coupled with the distinctiveness of the genotypes of African strains and special features of human physiology and environment on this continent, limit our understanding of the spectrum of H. pylori-associated diseases and how this is affected by bacterial genotype in Africa [8,9]. Extensive efforts have therefore been made to determine an optimum method for PCR-based genotyping of H. pylori [10-13]. To more effectively investigate the influence of H. pylori genotype on associated diseases in a West African setting, this study sought to determine an optimum method for PCR-based genotyping of H. pylori in The Gambia, West Africa.
A total of 169 biopsy samples from adult subjects, and 21 from infants, were investigated for H. pylori infection by both culture and PCR of DNA obtained directly from biopsies. 89/169 (52.6%) adults appeared to be culture positive. Pure H. pylori cultures were obtained from only 63 of them, but not from the other 26, primarily because of overgrowth by contaminants despite inclusion of multiple antibiotics in the culture medium, or because the bacterial cells failed to survive further subculture. Direct PCR from adult biopsies indicated that 164/169 (97%) were positive for Hp16s (Table 1). The DNA extracts from the remaining 5 biopsies were H. pylori negative with Hp16s and did not amplify for any of the genes tested, even though they were culture positive. These five samples were tested for PCR inhibitors by spiking them with DNA from a known positive. 
The spiked samples were also all negative after PCR (data not shown), which implies the presence of a potent inhibitor or nuclease. Virulence gene data were obtained by direct PCR from both biopsies and cultures in 60 cases, and these data are used for the comparisons in Table 2.
Table 1. Comparison between culture results and direct PCR with the Hp-specific 16s primer
Table 2. Comparison of amplification of virulence genes between PCR on bacterial cultures and direct PCR on biopsy material
Amongst the 80 culture-negative adult subjects for whom Hp16s was positive, amplification of some or all virulence genes was only achieved in 20 cases. It is not yet known whether these strains lacked virulence genes, were divergent in primer binding sequences, or whether the bacterial density was so low that amplification was possible only with the most general of primer combinations, such as Hp16s. The remaining 60 samples did not show amplification for any of the genes tested despite a positive response to Hp16s. Amongst 21 infants, 8 were Hp16s positive and pure H. pylori cultures were obtained successfully from six of them. We succeeded in amplifying virulence gene sequences only from the 6 culture-positive children. Direct PCR of the biopsies from the other 13 children was either negative (n = 10) or not done (n = 3). Virulence gene amplification was successful in 127 cases (121 adults and 6 children). A comparison of the products indicative of cagA (Figure 1), cag emptysite (Figure 2), vacAs alleles (Figure 3), vacAm alleles (Figure 4), iceA1 (Figure 5) and iceA2 (Figure 6) between the two methods for detecting H. pylori is summarized for the 60 samples for which sufficient amplified DNA was obtained for this further analysis. The proportion of samples that were cagA+ve was similar with DNA from biopsies and from culture, 58.3% and 61.7% respectively. The success in amplification of vacAs1/s2 (s1 = toxigenic vs s2 = non-toxigenic), vacAm1/m2 and iceA1 alleles was similar from cultures and corresponding biopsies, and agreement between genotypes inferred using DNAs from these two sources was good for cagA and the m1/m2 alleles of vacA, moderate for the s1/s2 alleles of vacA, and poor for iceA. The poor agreement in the iceA analysis stemmed from the many samples classified only as iceA2 by PCR from bacterial culture but as iceA1 and iceA2 by biopsy, which could be due to certain bacterial strains in a mixed infection growing much better than others in culture. In direct PCR, up to 16.7% of culture-positive biopsies failed to amplify DNA for individual alleles.
Figure 1. PCR inferred results of the cagA gene. 1.5% gel electrophoresis of H. pylori genotypes showing PCR results of the cagA gene. Lane M is a 100 bp ladder (Biolabs, UK); lanes 1, 2, 3 and 5 showed PCR products (349 bp) of cagA genes; lane 4 is cagA negative.
Figure 2. PCR inferred results of the cag emptysite. 1.5% gel electrophoresis of H. pylori genotypes showing PCR results of the cag emptysite. Lane M is a 100 bp ladder (Biolabs, UK); lanes 1, 2 and 4 showed PCR products of 535 bp indicating the presence of the cag emptysite; lanes 3 and 5 were cag emptysite negative.
Figure 3. PCR inferred results of the signal region of the vacA gene. 1.5% gel electrophoresis of H. pylori genotypes showing PCR results of the s1 and s2 alleles of the vacA gene. Lane M is a 100 bp ladder (Biolabs, UK); lanes 1 and 2 showed the presence of s1 (259 bp); lane 3 is both s1 and s2 positive; lanes 4 and 5 are s2 (289 bp) positive.
Figure 4.
PCR inferred results of the mid region of the vacA gene. 1.5% gel electrophoresis of H. pylori genotypes showing PCR results of the m1 and m2 alleles of the vacA gene. Lane M is a 100 bp ladder (Biolabs, UK); lanes 1, 2 and 3 are m1 positive; lanes 4 and 5 showed the presence of m2.
Figure 5. PCR inferred results of iceA1. 1.5% gel electrophoresis of H. pylori genotypes showing PCR results of the iceA1 gene. Lane M is a 100 bp ladder (Biolabs, UK); lanes 1, 2 and 5 showed the presence of the 297 bp iceA1 product; lanes 3 and 4 are iceA1 negative.
Figure 6. PCR inferred results of iceA2. 1.5% gel electrophoresis of H. pylori genotypes showing PCR results of iceA2. Lane M is a 100 bp ladder (Biolabs, UK); lanes 1 and 2 showed the presence of the 334 bp iceA2 product; lanes 3 and 4 showed a 229 bp iceA2 product; lane 5 was iceA2 negative.
The proportion of biopsies that were cagA+, the proportion of vacAs1 and vacAm1, and the proportion of mixed cultures from individual subjects varied with age. Table 3 is a summary of all PCR results, including samples obtained from cultures and from direct PCR on biopsies (127 in total). If subjects were positive by both techniques, only the biopsy-amplified sample was included in this analysis. None of the young children had mixed cultures with respect to cagA, vacAs or vacAm alleles. Young children also exhibited lower levels of the toxigenic genes than any of the adult groups. This difference was only statistically significant (P ≤ 0.02) when isolates obtained from children were compared with those from adults aged less than 60 years for cagA and the s1 allele of vacA, and when compared with isolates from adults aged 41-59 years for the m1 region of vacA. However, the sample size in children was small and therefore the difference between children and adults should be interpreted with caution. The prevalence of virulence genes was age-dependent. For cagA, vacAs and vacAm the virulent genotype was most common among the 30-40 year age group and less common in younger and older age groups. This association was statistically significant (p < 0.05) for cagA and vacAs but not for the mid region of the vacA gene or the iceA alleles (p > 0.05; Table 3). Only 1 elderly subject (70 years) was found to have mixed colonization with vacAs1/s2. The situation with iceA was more complicated, with a large number of individuals exhibiting mixed iceA1/iceA2 colonization.
Table 3. Variation in frequency of alleles with age from samples obtained by PCR directly from biopsies or subcultured H. pylori.
In this study, we compare results obtained by direct PCR detection of H. pylori in gastric biopsies in West Africa with results of PCR on bacterial isolates obtained from the same set of gastric biopsies. The two techniques produced different success rates, as set out in Table 1, and both failed to detect H. pylori in a significant proportion of infections. We agree with Park et al. that direct PCR can produce inconsistent results and tends to underestimate the prevalence of specific virulence factors (Table 2). However, in this study we detected a good consistency of genotypes between the two techniques, consistent with what was reported in a similar study. Our data differ in that we experienced considerably greater difficulty in obtaining pure subcultures of H. pylori from gastric biopsies than Park, with a consequently higher failure rate. We have been involved in studies cultivating H. pylori from gastric biopsies from populations throughout the world, and it is our personal observation that subculture failure is a particular problem amongst West African isolates, as encountered in the present study. The reasons for this are not immediately apparent. As a consequence of this problem, not all biopsies from which virulence factor DNA was amplified yielded a primary isolation of H. pylori, and there was a significant loss of isolates at subculture. PCR from subcultures gave higher rates of mixed colonization for cagA and vacA genes than direct PCR of biopsies, in contrast to the situation reported elsewhere with higher culture success rates. This may have been due to artifact, either by enhancement of a minor strain from within the stomach, or due to modification of the genome during culture. In our hands, therefore, direct PCR produced more positive results, gave rise to fewer concerns about the development of artifact, and was more rapid and convenient. Our data also indicate that there may be PCR inhibitors or potent nucleases in some gastric biopsies. This is consistent with findings in similar studies [12,14]. Their occasional presence and the underestimated prevalence of specific virulence factors by direct PCR illustrate that culture can be a useful complement to direct PCR for studies in which complete ascertainment of H. pylori virulence factor genotypes, including mixed colonization, is desired. We observed a difference in predominant genotype with subjects' age. Young children produced isolates that were more likely to be cagA-ve and vacAs2m2, in contrast to adults, who were more likely to harbor cagA+ve vacAs1m1 isolates. Children were also less likely to have mixed populations of H. pylori strains, which may relate to children aged 18 to 31 months being relatively recently colonized by H. pylori, compared to older individuals. The strains of H. pylori discovered in adult stomachs, at ages when typical H. pylori-associated diseases develop, may be genotypically distinct from the original strains that first colonized young Gambian children. This could be due to recombination of the H. pylori genome over the course of decades [16,17] and/or re-exposure to novel strains, with more pathogenic strains circulating predominantly amongst adults.
In order to detect the range of bacterial genotypes harbored by individual patients, direct PCR proved slightly superior to isolation of H. pylori by biopsy culture in our hands, but the techniques were complementary to each other, and the use of both together produced the most complete picture. Despite the lower success rate and greater cost of H. pylori culture relative to PCR directly from biopsies, culturing H. pylori is still important for antibiotic susceptibility tests that could guide therapy, and for other phenotypic tests such as bacterial adherence, cagA and vacA action on mammalian cells, and expression of other colonization and virulence traits, for which PCR alone is unsuitable.
Ethical approval of study protocols was obtained from the joint Gambian Government MRC Ethical Committee and from the London School of Hygiene and Tropical Medicine. 169 gastric antral biopsies were obtained from adult subjects (50 female and 71 male) undergoing routine diagnostic endoscopy, after obtaining informed consent. These subjects were consecutive patients attending the MRC endoscopy clinic for whom the endoscopist decided it was appropriate to take biopsies for research as well as clinical purposes. Patients with severe oesophago-gastroduodenal disease, including gastro-oesophageal varices or gastric cancer, were therefore not included in the study. In addition, gastric biopsies were obtained, after informed parental consent, from 21 children aged 18 to 31 months who were undergoing endoscopic small bowel biopsy because of suspected enteropathy. The biopsies were immediately stored in Brain Heart Infusion (BHI) broth containing 20% glycerol and transported on ice to the laboratory for processing, or stored at -70°C until used.
Endoscopic biopsies were spread on the surface of selective Columbia blood agar (Unipath, Basingstoke, UK) supplemented with 10% sheep blood (TCS Biosciences, UK), 2% vitox (Unipath, Basingstoke, UK) and the following antibiotics: trimethoprim (5 μg/ml), vancomycin (6 μg/ml), polymixin B (10 μg/ml), bacitracin (200 μg/ml), nalidixic acid (10 μg/ml), and the antifungal amphotericin B (8 μg/ml). The inoculated plates were incubated in a micro-aerobic atmosphere at 37°C for 5-7 days. Isolation and identification of H. pylori were made by colony morphology, Gram stain, and oxidase, urease and catalase activity. Strains were preserved in BHI broth containing 20% glycerol and stored at -70°C.
DNA extraction from cultures
DNA was prepared by harvesting a confluent growth of the pooled H. pylori population from agar media and extracted using a commercial kit (Qiagen DNA Mini Kit, UK) as per the manufacturer's guidelines. The DNA was stored at -20°C until used for gene amplification.
DNA extraction directly from biopsies
Total genomic DNA was extracted from the biopsy samples by using a combination of the QIAamp DNA isolation kit (Qiagen, UK) and a bead-beater method. Briefly, biopsies were lysed in 180 μl of QIAamp ATL buffer and 20 μl of proteinase K for 1 h at 56°C. Glass beads of different diameters (0.1 mm, 0.5 mm and 1 mm; Sigma) were added, and samples were homogenized in a FastPrep FP120 bead-beater (Bio101, Savant Instruments) for 30 sec at 4 m/s and incubated for an additional hour at 56°C. 200 μl of AL buffer were added to the lysate and samples were incubated for 30 min at 70°C. After the addition of 200 μl absolute ethanol, lysates were purified over a QIAamp column as specified by the manufacturer.
PCR amplification of H. pylori 16s rRNA
PCR was performed on DNA extracted from biopsies and also from cultures using an H. pylori 16s rRNA-specific PCR ["Hp16s"] as previously described, under the following conditions: 35 cycles of 95°C for 30 s, 60°C for 30 s and 72°C for 30 s, with a final extension at 72°C for 5 min. The amplified genes were detected by electrophoresis in a 1.5% agarose gel with ethidium bromide (500 ng/ml) and bands were visualized using a Gel Doc 2000 (Bio-Rad Laboratories, Milan, Italy).
PCR to detect genotypes
PCR was performed to detect the cagA and vacA genes, iceA1 and iceA2 using previously described methods, under the following general conditions: 30 cycles of 94°C for 1 min, 55°C for 1 min and 72°C for 1 min. The amplified genes were detected by electrophoresis in a 1.5% agarose gel with ethidium bromide and bands were visualized using a Gel Doc 2000 (Bio-Rad Laboratories, Milan, Italy). The primers used are listed in Table 4.
Table 4. Primers used in this study
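As an illustrative aside (not part of the original paper): the agreement statistics used in this study, percentage agreement and Cohen's kappa, which are reported above for the biopsy-versus-culture comparison and defined under the statistical analysis below, can be computed from paired genotype calls as in the following Python sketch. The genotype labels and data here are hypothetical, not the study's data.

    from collections import Counter

    def agreement_and_kappa(biopsy_calls, culture_calls):
        # Percent agreement and Cohen's kappa for paired genotype calls.
        # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
        # and p_e is the agreement expected by chance from the marginal counts.
        n = len(biopsy_calls)
        p_o = sum(b == c for b, c in zip(biopsy_calls, culture_calls)) / n
        freq_b = Counter(biopsy_calls)   # marginal counts, biopsy method
        freq_c = Counter(culture_calls)  # marginal counts, culture method
        p_e = sum(freq_b[k] * freq_c.get(k, 0) for k in freq_b) / n ** 2
        kappa = (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0
        return 100 * p_o, kappa

    # Hypothetical cagA calls for 10 paired samples (not real study data).
    biopsy  = ["cagA+", "cagA+", "cagA-", "cagA+", "cagA-",
               "cagA+", "cagA-", "cagA+", "cagA+", "cagA-"]
    culture = ["cagA+", "cagA+", "cagA-", "cagA-", "cagA-",
               "cagA+", "cagA-", "cagA+", "cagA+", "cagA+"]
    pct, kappa = agreement_and_kappa(biopsy, culture)
    print("agreement = %.0f%%, kappa = %.2f" % (pct, kappa))

For these toy data the output is agreement = 80%, kappa = 0.58; kappa is lower than the raw agreement because it discounts the agreement expected by chance from the marginal frequencies alone.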
The authors declare that they have no competing interests, and the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. JET, RAA and DEB conceived the study. OS performed all the experiments and analysis and wrote the paper with JET, DEB and RAA. TC, JET, MT and RW collected all biopsies from subjects referred for clinical diagnoses. VT participated in consenting of patients and preparing the patients for endoscopy. CB was involved in the statistical analysis. All authors read and approved the final manuscript. This work was supported in part by grant R03-AI061308 from the US National Institutes of Health and The Medical Research Council Laboratories, The Gambia.

Goodman KJ, Correa P, Tenganá Aux HJ, Ramírez H, DeLany JP, Guerrero Pepinosa O, López Quiñones M, Collazos Parra T: Helicobacter pylori infection in the Colombian Andes: a population-based study of transmission pathways. Trans R Soc Trop Med Hyg 1995, 89:347-50.
Chattopadhyay S, Patra R, Ramamurthy T, Chowdhury A, Santra A, Dhali GK, Bhattacharya SK, Berg DE, Nair GB, Mukhopadhyay AK: Multiplex PCR assay for rapid detection and genotyping of Helicobacter pylori directly from biopsy specimens.
Lim C-Y, Cho M-J, Chang M-W, Kim S-Y, Myong N-H, Lee W-K, Rhee K-H, Kook Y-H: Detection of Helicobacter pylori in gastric mucosa of patients with gastroduodenal diseases by PCR-restriction analysis using the RNA polymerase gene (rpoB).
Thoreson A-CE, Borre M, Andersen LP, Jörgensen F, Kiilerich S, Scheibel J, Rath J, Krogfelt KA: Helicobacter pylori detection in human biopsies: a competitive PCR assay with internal control reveals false results.
Smith SI, Oyedeji KS, Arigbabu AO, Cantet F, Megraud F, Ojo OO, Uwaifo AO, Otegbayo JA, Ola SO, Coker AO: Comparison of three PCR methods for detection of Helicobacter pylori DNA and detection of cagA gene in gastric biopsy specimens.
Taylor NS, Fox JG, Akopyants NS, Berg DE, Thompson N, Shames B, Yan LL, Fontham E, Janney F, Hunter FM, Correa P: Long-term colonization with single and multiple strains of Helicobacter pylori assessed by DNA fingerprinting.
Ho S-A, Hoyle JA, Lewis FA, Secker AD, Cross D, Mapstone NP, Dixon MF, Wyatt JJ, Tompkins DS, Taylor GR, Quirke P: Direct polymerase chain reaction test for detection of Helicobacter pylori in humans and animals.
Mukhopadhyay AK, Kersulyte D, Jeong J-Y, Datta S, Ito Y, Chowdhury A, Chowdhury S, Santra A, Bhattacharya SK, Azuma T, Nair GB, Berg DE: Distinctiveness of genotypes of Helicobacter pylori in Calcutta, India.
<urn:uuid:39654824-1a12-4378-bb8c-416074d4df2a>
CC-MAIN-2013-20
http://www.gutpathogens.com/content/3/1/5
2013-05-26T03:30:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706578727/warc/CC-MAIN-20130516121618-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.936549
4,959
Depending on the type and severity of your arrhythmia, and the results of various tests including the electrophysiology study, there are several treatment options. You and your doctor will decide which one is right for you.

Certain anti-arrhythmic drugs change the electrical signals in the heart and help prevent abnormal sites from starting irregular or rapid heart rhythms. To make sure the medication is working properly, you may be brought back to the laboratory for a follow-up study after two or more days in the hospital. Our goal is to find the drug that works best for you.

Implantable pacemakers work on "demand" and are used to treat slow heart rhythms. They are small devices that are implanted beneath the skin below the collarbone and connected to one or more pace wires positioned inside the heart via a vein; the device delivers a small electrical impulse to stimulate the heart to beat when it is going too slow.

A technique pioneered at UCSF, radiofrequency catheter ablation destroys or disrupts parts of the electrical pathways causing the arrhythmias, providing relief for patients who may not have responded well to medications, or who would rather not or cannot take medications. Catheter ablation involves threading a tiny metal-tipped wire catheter through a vein or artery in the leg and into the heart. Fluoroscopy, which allows cardiologists to watch on a monitor as the catheter moves through the vessel, provides a road map. Other catheters, usually inserted through the neck, contain electrical sensors to help find the area causing the short-circuits. The metal-tipped catheter is then maneuvered to each problem site and radiofrequency waves — the same energy used for radio and television transmission — gently burn away each unwanted strand of tissue. When catheter ablation was first tried, direct current shocks were used, but researchers later developed the use of radiofrequency waves, a more precise form of energy.

With radiofrequency catheter ablation, patients usually leave the hospital in one day, compared to open heart surgery, which requires a week's stay and months of recovery. For conditions like Wolff-Parkinson-White syndrome, in which a hair-thin strand of tissue creates an extra electrical pathway between the upper and lower chambers of the heart, radiofrequency ablation offers a cure. It has become the treatment of choice for patients with that disorder who don't respond well to drug therapy or who have a propensity for rapid heart rates. Even in arrhythmias that can be controlled with drugs, the procedure has been shown to be cost-effective because it eliminates medication failures that require hospitalization. It also is an attractive option for elderly patients, who are prone to suffer side effects from drug therapy, and for women of childbearing age who can't take medications because of the potential health risk to the fetus.

While studies have shown that catheter ablation is more cost-effective than drug therapy or surgery, patients who undergo the procedure also experience remarkable improvement in quality of life. A recent study of nearly 400 ablation patients with dangerously rapid heart rates — nearly a third of whom were considered candidates for open heart surgery — found that one month after the procedure 98 percent required no medication and 95 percent reported that their overall health had markedly improved. The UCSF study also found improvement in the patients' ability to work, exercise and take on physical activities.
Internal cardioversion for conversion of atrial fibrillation and atrial flutter to a normal sinus rhythm was developed here at UCSF Medical Center in 1991. Internal cardioversion is a low-energy electrical shock (1 to 10 joules) delivered inside the heart through two catheters inserted in a vein in the groin, together with a small electrode pad applied to the chest. This procedure is performed in the electrophysiology lab by our electrophysiologist. During the internal cardioversion, short-acting sedatives are given to make the patient sleepy.

Currently, atrial flutter is successfully "cured" by radiofrequency catheter ablation, but the traditional treatment to restore atrial fibrillation to sinus rhythm has been medications and external cardioversion. External cardioversion is the delivery of high-energy shocks of 50 to 300 joules through two defibrillator pads attached to the chest. In some cases, external cardioversion has failed because the electrical current has to first travel through chest muscle and skeletal structures before reaching the heart. Internal cardioversion has been performed when medications and external cardioversion have failed to restore a patient's rhythm back to a normal sinus rhythm.

UCSF's success rate of converting a patient from atrial fibrillation to normal sinus rhythm with internal cardioversion has been 95 percent. The less time a patient is in atrial fibrillation, the easier it is to cardiovert back to a normal rhythm, but even patients with long-standing chronic atrial fibrillation can be converted successfully to a normal rhythm through internal cardioversion. With internal cardioversion, our electrophysiology team was successful in converting a patient who had been in chronic atrial fibrillation for eight years.

An implantable cardioverter defibrillator is a device for people who are prone to life-threatening rapid heart rhythms. It is slightly larger than a pacemaker and usually is implanted beneath the skin below the collarbone. It is connected to one or more defibrillation/pace wires positioned inside the heart via a vein. It has the capability of delivering an electric shock to the heart when it determines the heart rate is too fast. It also is capable of pacing or stimulating the heart when it is going too slow.

The U.S. Food and Drug Administration (FDA) recently approved the first of a new type of pacemaker that paces both ventricles of the heart to coordinate their contractions and improve their pumping ability. According to the test results presented to the FDA, cardiac resynchronization therapy (CRT) improves the pumping efficiency of the weakened heart.

CRT devices work by pacing both the left and right ventricles simultaneously, which results in resynchronizing the muscle contractions and improving the efficiency of the weakened heart. In the normal heart, the electrical conduction system delivers electrical impulses to the left ventricle in a highly organized pattern of contractions that pump blood out of the ventricle very efficiently. In systolic heart failure caused by an enlarged heart (dilated cardiomyopathy), this electrical coordination is lost. Uncoordinated heart muscle function leads to inefficient ejection of blood from the ventricles.

Reviewed by health care specialists at UCSF Medical Center.
<urn:uuid:9fcd23ce-97ba-4941-aa5b-1a62503c3b7d>
CC-MAIN-2013-20
http://www.ucsfhealth.org/conditions/atrial_flutter/treatment.html
2013-05-26T02:42:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.930703
1,355
Everybody knows that some kind of climate change is occurring. The question is, can agriculture use climate change to its advantage?

Studies show that some crops, like cotton, can handle higher temperatures more efficiently, while others, like corn and grain sorghum, are very responsive to elevated levels of carbon dioxide. Some physiologists have attributed the increase in cotton yields over the past 20 years to the slow, upward trend in carbon dioxide.

A week ago, a giant snowstorm passed through the Mid-South. Today, it's 71 degrees outside. Who knows what next week will bring. And it's only February for gosh sakes. All this just reinforces in my mind that some sort of climate change is occurring. Whether this change is caused by global warming, or whether or not global warming is caused by man's activities, is irrelevant.

Governments believe they can reverse climate change through carbon credits, regionalization of economies, wind power, solar panels, etc. But I'm not so sure. The world's industrial economy is a force not easily reckoned with or tinkered with. For example, what does reducing greenhouse gas emissions here in the United States accomplish when China is expanding its highway construction and threatening to surpass the United States in automobile ownership?

We did not discuss who or what is to blame for climate change, or whether or not it is possible to reverse its effects. The point of the discussion centered on how agriculture can adapt and either use climate change to our advantage, or at least make it possible for agriculture to succeed in spite of it.

"U.S. agriculture has some unique capabilities that our competitors do not have," said Kater Hake, vice president of agricultural research, Cotton Incorporated, who led the discussion. "Understanding those advantages and how we can maximize them will make us more profitable in the long term."

One of the most interesting concepts was the impact of temperature and carbon dioxide on crop yield. Hake pointed out that some crops, like cotton, can handle higher temperatures more efficiently, while others, like corn and grain sorghum in particular, are very responsive to elevated levels of carbon dioxide. Cotton, corn and peanuts seem to handle the combination of higher temperatures and elevated carbon dioxide better than other crops. Hake noted that some physiologists have attributed the increase in cotton yields over the last 20 years to the slow, upward trend in carbon dioxide. "I certainly couldn't disagree with that."

Hake believes that climate disruption will impact agriculture in many ways over the next 20 to 30 years, and these changes will reverberate across the globe. "It's going to do it directly through weather variability – too wet, too dry, too hot, too cold. This can have a rapid effect on commodity prices and governmental policies." Industries that do the best job of adapting to this volatility and variability will come out on top.
<urn:uuid:5c9fa930-8e33-42d8-a561-1d5224f2acdc>
CC-MAIN-2013-20
http://deltafarmpress.com/print/government/climate-change-how-does-agriculture-adapt
2013-05-26T03:51:41Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706578727/warc/CC-MAIN-20130516121618-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93042
615
Electric and magnetic fields are invisible lines of force that surround any electrical device. Electric fields are produced by voltage and increase in strength as the voltage increases. The electric field strength is measured in units of volts per meter (V/m). Magnetic fields result from the flow of current through wires or electrical devices and increase in strength as the current increases. Magnetic fields are measured in units of gauss (G) or tesla (T). Most electrical equipment has to be turned on, i.e., current must be flowing, for a magnetic field to be produced. Electric fields, on the other hand, are present even when the equipment is switched off, as long as it remains connected to the source of electric power. Electric fields are shielded or weakened by materials that conduct electricity (including trees, buildings, and human skin). Magnetic fields, on the other hand, pass through most materials and are therefore more difficult to shield. Both electric and magnetic fields decrease as the distance from the source increases.
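The falloff of field strength with distance can be made concrete with a small calculation. The sketch below uses the standard textbook formula for the magnetic field around a long straight wire, B = mu0 * I / (2 * pi * r); both the formula and the 20-ampere example current are illustrative assumptions of ours, not something stated in the passage above.

```python
# Sketch: how magnetic field strength falls off with distance from a source.
# The long-straight-wire formula and the 20 A current are illustrative
# assumptions, not values taken from the passage.

import math

MU0 = 4e-7 * math.pi  # permeability of free space, in T*m/A

def wire_field_tesla(current_amps: float, distance_m: float) -> float:
    """Magnetic field (tesla) at distance r from a long straight wire."""
    return MU0 * current_amps / (2 * math.pi * distance_m)

for r in (0.1, 0.5, 1.0, 2.0):                 # distances in meters
    b = wire_field_tesla(20.0, r)              # hypothetical 20 A circuit
    print(f"r = {r:4.1f} m  ->  B = {b * 1e4:.4f} G")  # 1 tesla = 10,000 gauss
```

Running this shows the field dropping from 0.4 G at 0.1 m to 0.02 G at 2 m, which is the inverse-distance behavior the passage describes.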
<urn:uuid:79da8c3e-7233-4507-b087-25bcb399c9c9>
CC-MAIN-2013-20
http://www.cpuc.ca.gov/PUC/Templates/Default.aspx?NRMODE=Published&NRNODEGUID=%7B384F7545-63C2-4F4F-BC18-8B6F6A5D84E6%7D&NRORIGINALURL=%2FPUC%2Fenergy%2FEnvironment%2FElectroMagnetic%2BFields%2Fwhat_are_emf.htm&NRCACHEHINT=Guest
2013-05-24T09:11:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704433753/warc/CC-MAIN-20130516114033-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.954865
201
Martin, 1356–1410, king of Aragón and count of Barcelona (c.1395–1410) and, as Martin II, king of Sicily (1409–10). He succeeded his brother, John I, in Aragón and became king of Sicily on the death of his son, Martin I of Sicily, who had married Maria, last of the Sicilian branch of the house of Aragón. Martin of Aragón and Sicily died without a male heir and thus was the last ruler from the Catalan dynasty of Aragón. After a two-year interregnum, his nephew, Prince Ferdinand of Castile, was chosen (1412) king of Aragón and Sicily as Ferdinand I.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
<urn:uuid:39952536-7478-4c22-86bd-61b314a55bf6>
CC-MAIN-2013-20
http://www.infoplease.com/encyclopedia/people/martin-1356-1410-king-aragon-count-barcelona.html
2013-05-26T03:37:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706578727/warc/CC-MAIN-20130516121618-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.952214
187
What are maggots? Maggots are fly larvae, usually of the common housefly and also the bluebottle. Flies are attracted to food and other rubbish; they lay their eggs on the rubbish; later the eggs hatch into maggots. You will only have a problem with maggots if flies can get to your waste. If flies settle on your rubbish they may lay eggs which can hatch out as maggots within 24 hours. (Therefore the frequency of refuse collections is irrelevant). Householders are responsible for their own household waste and for the hygiene at their home; including their bins. How can I reduce the risk of maggots? - The first step is to make sure that flies can’t get at your rubbish; in fact wheelie bins are much better at keeping flies out than black bin bags. - Fly spray can be effective at helping with flies, but these must be purchased by the householder. - Try to reduce the amount of food wasted (visit www.lovefoodhatewaste.com) - If you have a green and a grey wheeled bin, in hot weather you may prefer to get a weekly collection of food waste by alternating which bin you put it in; i.e. one week wrapped in newspaper and put in the green bin, the other week bag it and put it in your grey bin. - Never leave food uncovered inside the home – this includes cat/dog food – remember flies may lay eggs on exposed food and in warm weather the eggs can hatch within 24 hours - Rinse polystyrene food trays and other food packaging that can’t be recycled before you put it in the bin, this will also reduce bad odours - Squeeze out the air from bags and tie them tightly - Any food scraps, pet waste, nappies should be double wrapped - If possible leave the bin out of direct sunshine - Ensure the bin lid is closed – if the lid is damaged please contact the sort-it team - Hang insecticide strips inside your bin to help control flies - Try using Citronella – a natural remedy used in gardens. This will discourage flies as they don’t like the smell. - Remember, flies will also be attracted to recycling material if they aren’t clean – so please make sure all food cans, bottles and jars are rinsed - If disposable nappies are in your rubbish, empty ‘solids’ down the toilet - Ensure your kitchen bin has a close fitting lid – ‘swing top’ lids can let flies in. What can I do about maggots in my bin? - Try using fly-spray - Pour over boiling water with a small amount of bleach - Most of the maggots will go when the bin is emptied, after it has been emptied clean it out with disinfectant or bleach and plenty of water. Use a cleaning product with a fragrance as this will help deter flies in the future. - If you do not want to wash out your bins - look in the local telephone directories/free papers for a professional bin cleaning company Can maggots cause health problems? Maggots are unpleasant but there is no evidence to suggest that they cause health problems. Flies are all around no matter what type of collection service is in operation. The best approach is to be careful with your waste and ensure that flies can’t get to it, by following the advice above. My household has problems with excess rubbish - can you offer any help? If your household is struggling to fit the refuse into your grey bin or allocated number of grey sacks, you may find a waste audit useful. Officers from the waste management team will visit your home and offer advice and support, in certain circumstances you may be entitled to extra capacity. 
If you would like to arrange one, please fill in the on-line form or call the Sort-It telephone line on 01926 456339.
<urn:uuid:eb146ff0-b5c8-4a13-bd60-e05b0e9d0b54>
CC-MAIN-2013-20
http://www.warwickdc.gov.uk/WDC/Recycling-waste-and-environment/Rubbish_x2c_+waste+and+recycling/Refuse+collection/Advice+on+maggots.htm
2013-05-22T22:21:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702452567/warc/CC-MAIN-20130516110732-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.941909
840
Nanosensors are arrays of wires that can capture and detect a single molecule, which would make them excellent bomb sniffers. Nanosensors exist, technically, but they're extremely difficult to manufacture and extremely delicate. However, thanks to a new technique created by HP, scientists have been able to make nanosensors nearly as quickly and easily as they would make a standard circuit board.

According to one of the researchers, Regina Ragan (formerly with HP and now a professor of chemical engineering at UC Irvine), at some concentrations of platinum, the metal seems to form clumps, leaving parts of the wire uncoated. After the researchers exposed the nanowires to plasma, the uncovered parts of the wires were etched away, leaving tightly spaced platinum nanoparticles, each about eight nanometers across. The technique could be easy and inexpensive to scale up because it uses common commercial techniques for deposition and etching, and requires few steps, Ragan says.

This is very similar to the techniques used to etch silicon chips and is much easier to use than the current methods, which require multiple steps and are not foolproof. Pretty high tech, but it looks like we're headed in the right direction.

A New Way to Make Ultrasensitive Explosives Detectors [TechnologyReview]
<urn:uuid:7f29f34e-c1c0-4738-86e9-b2b87641c153>
CC-MAIN-2013-20
http://techcrunch.com/2006/08/15/nanosensors-to-catch-bombs/
2013-05-25T13:02:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00051-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966895
267
Stopping the Suicide Spiral

[Image: Planetary Protection: X-ray Super-Flares Aid Formation of "Solar Systems". Credit: NASA Chandra]

Orion the Hunter is one of the most easily recognized constellations in the night sky, and lying just beneath his belt is the Orion Nebula, a nursery that cradles about 1,400 newborn stars. Of these stars that gild Orion's sword, about 30 of them will grow up to be similar to our own sun. Half of the young suns in this cluster show evidence of being surrounded by planet-forming disks. In these gaseous envelopes, tiny grains grow into larger rocks, which eventually become the cores of both rocky and gaseous planets.

Astronomers using the Chandra X-ray Observatory have discovered that the young stars in the Orion Nebula let loose an extraordinary amount of X-rays. They observed 41 powerful X-ray flares during 13 weeks of observation. "These flares are incredibly strong," says Eric Feigelson of Penn State University, principal investigator for the international Chandra Orion Ultradeep Project. "Even the faintest of the X-ray events seen with Chandra is more powerful than the strongest events seen on the contemporary sun. And they also occur frequently: every few days there's a big flare in a baby sun, while similar events occur on our sun once every few years."

Our sun is middle-aged at about 4.5 billion years old, while the stars observed with Chandra are only between 1 and 10 million years old. Young stars tend to radiate more X-rays than older stars, because their magnetic fields are more unstable. The X-ray flares, strobing like hot disco lights, may affect the dancing of prepubescent planets in the gas disks. "The disks should experience hundreds of millions of powerful flares while the planets are forming," says Feigelson.

There is evidence that the disks absorb rather than reflect energy from the lower-energy X-ray flares. This energy should ionize the gas, knocking electrons off the gas atoms and generating electric charges in the disk. "It sort of resembles the painful shock you get if you short-circuited wires on a plug in your house," says Feigelson. When the ionized gas becomes coupled with this weak electric field, a smooth and calm disk can become turbulent. The scientists believe that this turbulence may somehow act as a protective device for newly forming planets. Without intervention, the gravitational interaction between planetary cores and the disk gas is expected to cause the core to quickly spiral into the star. Turbulence could explain why planetary cores don't surrender to this gravity.

[Image: Orion as imaged by the VLT. Image credit: VLT]

Today, X-ray flares from our sun can disrupt communication and electricity on orbiting spacecraft and on Earth. But perhaps when the sun was much younger, such X-ray flares allowed the planets in our solar system to maintain their orbital positions. "We used the Orion Nebula cluster as a virtual time machine in order to view the sun as it appeared four and a half billion years ago," says Scott Wolk of the Harvard-Smithsonian Center for Astrophysics. "We found that the average very young sun-like star has an X-ray flare about once a week. Such flares would have had a profound effect on the material in the solar system, and could even have helped protect Earth from rapidly spiraling in towards the sun and being destroyed."

At the moment this is only conjecture, since it is not known how turbulence in a gas disk affects planets or planetary formation. Feigelson says that studies on the effects of disk turbulence are ongoing.
An alternative theory of planetary formation says that disk turbulence could be responsible for forming gas giant planets. Alan Boss of the Carnegie Institution of Washington has suggested that, rather than slowly building up from dust grains, planets like Jupiter might develop due to instabilities in the proto-planetary disk. In his gravitational instability model, spiral arms form in a gas disk and then break up into clumps. These gaseous knots can turn into Jupiter-like planets in as little as a thousand years, rather than the millions of years it would take for planets to develop by dust grain accretion. However, says Feigelson, "I'm personally worried about the earliest stages of making even pebbles in a turbulent region. It's not clear how turbulence would collect things together or spread them apart."

Lower-mass stars in the Orion Nebula - the ones that will become M- or K-class stars someday - were found to have substantially weaker X-ray flares. A question the astronomers hope to answer is whether that means M or K stars will be less likely to hold onto planets. Without X-ray flares to create waves in the gas disk, will nascent planetary cores fall right into low-mass stars? Most of the extrasolar planets discovered so far orbit stars that are the mass of our sun or greater. Astronomers have just begun the search for gas giant planets orbiting lower-mass stars.

The results from the Chandra Orion Ultradeep Project will appear in an upcoming issue of The Astrophysical Journal Supplement.
<urn:uuid:e81b3039-08f8-4ed1-8798-5ec014e8ecda>
CC-MAIN-2013-20
http://www.astrobio.net/exclusive/1554/stopping-the-suicide-spiral
2013-05-23T05:15:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702849682/warc/CC-MAIN-20130516111409-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.935477
1,136
osteoarthritis

osteoarthritis, also called osteoarthrosis or degenerative joint disease, disorder of the joints characterized by progressive deterioration of the articular cartilage. It is the most common joint disease, affecting more than 80 percent of those who reach the age of 70. Although its suffix indicates otherwise, osteoarthritis is not characterized by excessive joint inflammation as is the case with rheumatoid arthritis.

The disease may be asymptomatic, especially in the early years of its onset. As it progresses, however, pain, stiffness, and a limitation in movement may develop. Common sites of discomfort are the vertebrae, knees, and hips—joints that bear much of the weight of the body. The cause of this disorder is not completely understood, but biomechanical forces that place stress on the joints (e.g., bearing weight, postural or orthopedic abnormalities, or injuries that cause chronic irritation of the bone) are thought to interact with biochemical and genetic factors to contribute to osteoarthritis. In its early stages there is softening and roughening of the cartilage, which eventually wears away. The bone, deprived of its protective cover, regenerates the destroyed tissue, resulting in an uneven remodeling of the surface of the joint. Thick bony outgrowths called spurs sometimes develop. Articulation of the joint becomes difficult.

Depending on the site and severity of the disease, various treatments are employed. Individuals who experience moderate symptoms can be treated by a combination of the following: analgesic medications, periodic rest, weight reduction, corticosteroid injections, and physical therapy or exercise. Surgical procedures such as hip or knee replacement or joint debridement (the removal of unhealthy tissue) may be necessary to relieve more severe pain and improve joint function.
<urn:uuid:be60eccb-b6e2-4eb6-bb40-53a35fc24869>
CC-MAIN-2013-20
http://www.britannica.com/EBchecked/topic/434262/osteoarthritis
2013-06-18T05:20:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706933615/warc/CC-MAIN-20130516122213-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.937651
402
Several scenarios could explain your bulbs' lackluster performance, and understanding a few key facts about tulips will help you solve the mystery. Like all green plants, tulips use photosynthesis—a process that turns carbon dioxide and water into sugars and starches—to create the energy to live. Photosynthesis happens in the leaves, and in the case of tulips, the bulbs store the energy. Cutting off a tulip's foliage after it blooms also cuts off its energy supply. "It's important to let the foliage yellow so the bulb gains energy for next year," says Linda Miranda, senior horticulturist at the Chicago Botanic Garden. If you removed the foliage too soon last spring, your tulips may have stored only enough energy to produce foliage this year. The type of tulip you plant (hybrid or species) also determines if the bulb will flower for years or dwindle away after a season or two. In general, you need to plant the tall hybrid tulips every fall for a spring display, while smaller species-type tulips (also called botanical tulips) will come back for several years and may even naturalize (that is, produce new tulips) in your garden. "Most hybrid tulips aren't reliable enough to treat as perennials," says Miranda. "They don't last from year to year." Even if you let the tulip's foliage yellow fully, the bulb will likely store only enough energy to send up foliage the next spring—though Fosteriana hybrids, often referred to as Emperor tulips, and the Darwin hybrids generally bloom for two or more years. Dividing and fertilizing your older bulbs may help them bloom again. "The ideal time to divide bulbs is after they flower, but if you're not getting any flowers, you can do it any time in spring," says Miranda. Dig up a nonblooming older bulb; it should have smaller daughter bulbs around its base. If the original bulb is split or doesn't have daughter bulbs, discard it. Replant the largest of the daughter bulbs 6 to 8 inches deep in an area with good drainage, such as a rock garden or raised bed that doesn't receive much summer irrigation. When you replant the bulbs, fertilize them with a balanced organic fertilizer. In fall, mulch your bulb beds with 2 to 3 inches of leaf mold or compost to protect them from freezing and provide nutrients. The daughter bulbs should bloom in two years. In the meantime, look into planting a few more botanical tulips this fall. Miranda suggests Tulipa greigii 'Oratorio' and T. kaufmanniana 'Fashion'.
<urn:uuid:04758b1f-8655-4f11-ad08-4e5763e87243>
CC-MAIN-2013-20
http://organicgardening.com/print/1033
2013-05-22T01:24:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700984410/warc/CC-MAIN-20130516104304-00050-ip-10-60-113-184.ec2.internal.warc.gz
en
0.937481
546
From car seats and baby gates to bicycle helmets and football pads, there are numerous ways to protect children as they grow. One of the most important is getting them vaccinated, says Dr. Dennis Murray, Infectious Diseases Chief at the Georgia Health Sciences Children's Medical Center. "Many life-threatening diseases are not seen today in the United States because of the development and implementation of vaccines," says Murray. "Polio and smallpox are examples. Other diseases like measles have been dramatically decreased."

Still, more than 900,000 American children are not fully immunized, according to the U.S. Centers for Disease Control and Prevention. As the new school year approaches, Murray urges parents to understand the value of immunizations and to be sure their children's shots are up to date. He cites four main reasons:

1. Immunizations can save your child's life. Advances in medical science enable your child to be protected against more diseases than ever before. Some potentially fatal diseases have been eliminated completely and others are close to being gone – primarily due to safe and effective vaccines.

2. Immunizations protect others you care about. Serious vaccine-preventable diseases still occur, striking groups such as infants who have not yet been fully immunized and those unable to receive vaccinations due to allergies, illness, weakened immune systems or other reasons. Full immunization in the general population is important to prevent the spread of diseases to vulnerable friends and loved ones.

3. Immunizations can save time and money. A child with a vaccine-preventable disease will likely be kept out of school or daycare. Likewise, a prolonged illness can take a financial toll because of lost time at work, medical bills and/or long-term disability care. Immunization is a good investment and is usually covered by health insurance plans. For those without insurance or the underinsured, ask your health care provider about the Vaccines for Children program, a federally funded program that provides free vaccines to children.

4. Vaccinations are safe and effective. Vaccines are recommended only after a long and careful review by scientists and health care professionals. The side effects of vaccines (potential pain, redness or tenderness at the injection site) are minimal compared to the pain, discomfort and trauma of the diseases these vaccines prevent. Studies have repeatedly debunked claims linking vaccines to autism, sudden infant death syndrome, immune dysfunction or asthma—findings supported by groups including the American Academy of Pediatrics, Institute of Medicine, National Institutes of Health and CDC.

Murray stresses the importance of immunizations both in well-child care and for periodic updating in adults. For more information, visit cdc.gov/vaccines or talk to your pediatrician.

Dr. Dennis Murray is Chief of Pediatric Infectious Diseases at Georgia Health Sciences Children's Medical Center and a Professor in the Department of Pediatrics in the Medical College of Georgia at Georgia Health Sciences University. He received his medical degree from the University of Michigan and completed his residency and fellowship at Children's of Northern California and the University of California, Los Angeles, respectively. His clinical interests include childhood immunizations; viral infections of children; infections in child care settings and public policy of infectious diseases and immunizations.
His research interests include childhood immunization safety, efficacy, and immunogenicity; prevention of RSV in high-risk children; and influenza disease and prevention in children with chronic medical problems. Murray is a member of the American Academy of Pediatrics and serves on its Committee on Infectious Diseases.
<urn:uuid:f87cd4f5-0cf4-4176-a7f0-f2f16df279f0>
CC-MAIN-2013-20
http://www.grhealth.org/GhsuContentPage.aspx?nd=408&news=1576
2013-05-25T19:36:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.951091
726
Rhodothamniella floridula on sand-scoured lower eulittoral rock

Ecological and functional relationships
This biotope is dominated by algae, which cover the rock surface and form the canopy. Macroalgae provide habitats for many species of invertebrates and fish and also provide shade under their canopy. Rock type and sand scour effects are of critical importance to the development of this biotope. Sand-binding algal species are able to colonize soft or crumbly rock more successfully than fucoids (Lewis, 1964). Where sand scour is severe, fucoids and Rhodothamniella floridula tend to be absent, while ephemeral green algae dominate the substratum and a different biotope will be present (Connor et al., 1997b).

Seasonal and longer term change
No information was found specifically on this biotope. However, some general observations from rocky shore communities are relevant.
- Ephemeral green algae may show a peak in abundance during the spring.
- Winter storms will reduce or damage fucoids and macroalgal cover.
- Crabs and fish tend to move to deeper water in the winter months, so that predation is probably reduced.
- Corallina officinalis may be overgrown by epiphytes, especially during summer. This overgrowth regularly leads to high mortality of fronds due to light reduction (Wiedemann, pers. comm. to Tyler-Walters, 2000).
- At least in northern Britain, Littorina littorea migrates down shore as temperatures fall in autumn (to reduce exposure to sub-zero temperatures) and up shore as temperatures rise in spring; migration depends on local winter temperatures.
- The upper limit of distribution of Patella vulgata on a shore is increased by shade and exposure. In some situations seasonal variations in sunshine cause a downward migration in spring/summer and an upward migration in autumn/winter (Lewis, 1954).

Habitat structure and complexity
Bedrock and boulders form the substratum in this biotope, and the pits, crevices and inclination of the rock create microhabitats exploitable by both mobile and sessile epilithic species. In addition, the macroalgal species of the community add considerable structural complexity to the biotope in the form of additional substratum for settlement by epiphytic species. The sand scour tolerant species Rhodothamniella floridula enhances the structural complexity by binding sand into a mat over the rocky substratum, into which polychaetes and amphipods can burrow. There is likely to be considerable structural heterogeneity over a small scale within the biotope. For instance, although barnacles may form a dense layer over the substratum that largely excludes other species, the gaps created by dead barnacles may be exploited by small invertebrates.

Rocky shore communities are highly productive and are an important source of food and nutrients for members of neighbouring terrestrial and marine ecosystems (Hill et al., 1998). Macroalgae exude considerable amounts of dissolved organic carbon, which is taken up readily by bacteria and may even be taken up directly by some larger invertebrates. Dissolved organic carbon, algal fragments and microbial film organisms are continually removed by the sea. This may enter the food chain of local, subtidal ecosystems, or be exported further offshore. Rocky shores make a contribution to the food of many marine species through the production of planktonic larvae and propagules which contribute to pelagic food chains.
Many rocky shore species, plant and animal, possess a planktonic stage: gamete, spore or larva which float in the plankton before settling and metamorphosing into adult form. This strategy allows species to rapidly colonize new areas that become available such as in the gaps often created by storms. For these organisms it has long been evident that recruitment from the pelagic phase is important in governing the density of populations on the shore (Little & Kitching, 1996). Both the demographic structure of populations and the composition of assemblages may be profoundly affected by variation in recruitment rates. - Community structure and dynamics are strongly influenced by larval supply. Annual variation in recruitment success, of algae, limpets and barnacles can have a significant impact on the patchiness of the shore. - The propagules of most macroalgae tend to settle near the parent plant (Schiel & Foster, 1986; Norton, 1992; Holt et al., 1997). For example, red algal spores and gametes are immotile and the propagules of Fucales are large and sink readily. Norton (1992) noted that algal spore dispersal is probably determined by currents and turbulent deposition (zygotes or spores being thrown against the substratum). For example, spores of Ulva spp. have been reported to travel 35km. The reach of the furthest propagule and useful dispersal range are not the same thing and recruitment usually occurs on a local scale, typically within 10m of the parent plant (Norton, 1992). Vadas et al. (1992) noted that post-settlement mortality of algal propagules and early germlings was high, primarily due to grazing, canopy and turf effects, water movement and desiccation (in the intertidal) and concluded that algal recruitment was highly variable and sporadic. However, macroalgae are highly fecund and widespread in the coastal zone so that recruitment may be still be rapid, especially in the rapid growing ephemeral species such as Ulva spp. and Ulva lactuca, which reproduce throughout the year with a peak in summer. Similarly, Ceramium species produce reproductive propagules throughout the year (Dixon & Irvine, 1977; Burrows, 1991; Maggs & Hommersand, 1993). - Gastropods exhibit a variety of reproductive life cycles. The common limpet Patella vulgata and the periwinkle Littorina littorea have pelagic larvae with a high dispersal potential, although recruitment and settlement is probably variable. - Barnacles such as Semibalanus balanoides have a planktonic nauplius larva, which spends about 2 months in the plankton, with high dispersal potential. Peak settlement in Semibalanus balanoides occurs in April-May in the west and May-June in the east and north of the British Isles, However, settlement intensity is variable, subsequent recruitment is inhibited by the sweeping action of macroalgal canopies (e.g. fucoids) or the bulldozing of limpets and other gastropods (see MarLIN review for details). - Many species of mobile epifauna, such as polychaetes that may be associated with patches of mussels or rock crevices, have long lived pelagic larvae and/or are highly motile as adults. Time for community to reach maturity The MLR.Rho biotope consists mainly of algal species, with high spore production and dispersal potential, enabling rapid colonization and recolonization. The development of the community from bare or denuded rock is likely to be similar to that occurring after an oil spill. Recovery of rocky shore populations was intensively studied after the Torrey Canyon oil spill in March 1967. 
Areas affected by oil alone recovered rapidly, within 3 years. If rocks or boulders are present with sand in suspension, it is likely that recovery of the MLR.Rho biotope would take approximately the same amount of time.

This review can be cited as follows:
Rhodothamniella floridula on sand-scoured lower eulittoral rock. Marine Life Information Network: Biology and Sensitivity Key Information Sub-programme [on-line]. Plymouth: Marine Biological Association of the United Kingdom. Available from: <http://www.marlin.ac.uk/habitatecology.php?habitatid=12&code=1997>
<urn:uuid:2f6fff5a-82af-4a98-8949-3cfb16289cd9>
CC-MAIN-2013-20
http://www.marlin.ac.uk/habitatecology.php?habitatid=12&code=1997&code=1997
2013-05-24T09:45:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704433753/warc/CC-MAIN-20130516114033-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.91313
1,674
Franklin vs. Roberts: Experiences as an Apprentice
- Students will be able to appreciate the differences in experiences among apprentices.
- Students will be able to recall, analyze, and synthesize historical information to formulate a position.
- Students will be able to interpret primary source material using critical-thinking skills.
Suggested Instructional Procedures
(Note: This activity utilizes ideas discussed in the previous activity.)
Download and photocopy the excerpt from Benjamin Franklin's Autobiography and Jonathan Roberts's journal.
1. Class work- Divide the class into two big groups. Half the class will receive the photocopy packet of Benjamin Franklin's Autobiography and the other half, Jonathan Roberts's journal. Allow students within the halves to work together to read through their assigned piece. When completed, bring the class back together.
2. Jigsaw activity- Divide the class (mixing the original halves) into small groups for the jigsaw activity. Download, photocopy, and distribute the worksheet to each student. Ask students to compare and contrast the experiences of both Franklin and Roberts, completing the worksheet and listing any additional areas they can identify as comparisons.
3. Review & Close- Review the completed worksheets as a class. Close by asking the students to draw conclusions about the individual experiences of Franklin and Roberts, and the extent to which the apprentice system did, or did not, affect their future careers according to Franklin and Roberts.
- See lesson 1 of this unit
Plans in this Unit
This unit was created by Laura Beardsley. Modified and updated for SAS by Kimberly L. Parsons, Education Intern, and Historical Society of Pennsylvania.
<urn:uuid:202379cd-e9a9-4a38-ae7b-86f9d38c66f5>
CC-MAIN-2013-20
http://hsp.org/education/unit-plans/educating-the-youth-of-pennsylvania-apprentices-in-the-age-of-franklin/franklin-vs-roberts-experiences-as-an-apprentice
2013-05-24T01:37:46Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.884894
341
The acquired immunodeficiency syndrome (AIDS) is characterized by the progressive loss of the CD4+ helper/inducer subset of T lymphocytes, leading to severe immunosuppression and constitutional disease, neurological complications, and opportunistic infections and neoplasms that rarely occur in persons with intact immune function. Although the precise mechanisms leading to the destruction of the immune system have not been fully delineated, abundant epidemiologic, virologic and immunologic data support the conclusion that infection with the human immunodeficiency virus (HIV) is the underlying cause of AIDS. The evidence for HIV's primary role in the pathogenesis of AIDS is reviewed elsewhere (Ho et al., 1987; Fauci, 1988, 1993a; Greene, 1993; Levy, 1993; Weiss, 1993). In addition, many scientists (Blattner et al., 1988a,b; Ginsberg, 1988; Evans, 1989a,b, 1992; Weiss and Jaffe, 1990; Gallo, 1991; Goudsmit, 1992; Groopman, 1992; Kurth, 1990; Ascher et al., 1993a,b; Schechter et al., 1993a,b; Lowenstein, 1994; Nicoll and Brown, 1994; Harris, 1995) have responded to specific arguments from individuals who assert that AIDS is not caused by HIV. The present discussion reviews the AIDS epidemic and summarizes the evidence supporting HIV as the cause of AIDS.

The term AIDS first appeared in the Morbidity and Mortality Weekly Report (MMWR) of the Centers for Disease Control (CDC) in 1982 to describe "... a disease, at least moderately predictive of a defect in cell-mediated immunity, occurring with no known cause for diminished resistance to that disease" (CDC, 1982b). The initial CDC list of AIDS-defining conditions, which included Kaposi's sarcoma (KS), Pneumocystis carinii pneumonia (PCP), Mycobacterium avium complex (MAC) and other conditions, has been updated on several occasions, with significant revisions (CDC, 1985a, 1987a, 1992a). For surveillance purposes, the CDC currently defines AIDS in an adult or adolescent age 13 years or older as the presence of one of 25 AIDS-indicator conditions, such as KS, PCP or disseminated MAC. In children younger than 13 years, the definition of AIDS is similar to that in adolescents and adults, except that lymphoid interstitial pneumonitis and recurrent bacterial infections are included in the list of AIDS-defining conditions (CDC, 1987b). The case definition in adults and adolescents was expanded in 1993 to include HIV infection in an individual with a CD4+ T cell count less than 200 cells per cubic millimeter (mm3) of blood (CDC, 1992a). The current surveillance definition replaced criteria published in 1987 that were based on clinical conditions and evidence of HIV infection but not on CD4+ T cell determinations (CDC, 1987a). In many developing countries, where diagnostic facilities may be minimal, epidemiologists employ a case definition based on the presence of various clinical symptoms associated with immune deficiency and the exclusion of other known causes of immunosuppression, such as cancer or malnutrition (Ryder and Mugerwa, 1994a; Davachi, 1994).

Surveillance definitions of AIDS have proven useful epidemiologically to track and quantify the recent epidemic of HIV-mediated immunosuppression and its manifestations.
However, AIDS represents only the end stage of a continuous, progressive pathogenic process, beginning with primary infection with HIV, continuing with a chronic phase that is usually asymptomatic, leading to progressively severe symptoms and, ultimately, profound immunodeficiency and opportunistic infections and neoplasms (Fauci, 1993a). In clinical practice, symptomatology and measurements of immune function, notably levels of CD4+ T lymphocytes, are used to guide the treatment of HIV-infected persons rather than an all-or-nothing paradigm of AIDS/non-AIDS (CDC, 1992a; Sande et al., 1993; Volberding and Graham, 1994). Between June 1981 and Dec. 31, 1994, 441,528 cases of AIDS in the United States, including 270,870 AIDS-related deaths, were reported to the CDC (CDC, 1995a). AIDS is now the leading cause of death among adults aged 25 to 44 in the United States (CDC, 1995b) (Figure 1). Fig. 1. Death rates from leading causes of death in persons aged 25-44 years, United States, 1982-1993 Reference: Centers for Disease Control and Prevention Worldwide, 1,025,073 cases of AIDS were reported to the World Health Organization (WHO) through December 1994, an increase of 20 percent since December 1993 (WHO, 1995a) (Figure 2). Allowing for under-diagnosis, incomplete reporting and reporting delay, and based on the available data on HIV infections around the world, the WHO estimates that over 4.5 million AIDS cumulative cases had occurred worldwide by late 1994 and that 19.5 million people worldwide had been infected with HIV since the beginning of the epidemic (WHO, 1995a). By the year 2000, the WHO estimates that 30 to 40 million people will have been infected with HIV and that 10 million people will have developed AIDS (WHO, 1994). The Global AIDS Policy Coalition has developed a considerably higher estimate--perhaps up to 110 million HIV infections and 25 million AIDS cases by the turn of the century (Mann et al., 1992a). Fig. 2. Cumulative AIDS cases worldwide. AIDS cases reported to the World Health Organization through December 1994. Reference: WHO, 1995a In 1981, clinical investigators in New York and California observed among young, previously healthy, homosexual men an unusual clustering of cases of rare diseases, notably Kaposi's sarcoma (KS) and opportunistic infections such as Pneumocystis carinii pneumonia (PCP), as well as cases of unexplained, persistent lymphadenopathy (CDC, 1981a,b, 1982a; Masur et al., 1981; Gottlieb et al., 1981; Friedman-Kien, 1981). It soon became evident that these men had a common immunologic deficit, an impairment in cell-mediated immunity resulting from a significant loss of "T-helper" cells, which bear the CD4 marker (Gottlieb et al., 1981; Masur et al., 1981; Siegal et al., 1981; Ammann et al., 1983a). The widespread occurrence of KS and PCP in young people with no underlying disease or history of immunosuppressive therapy was unprecedented. Searches of the medical literature, autopsy records and tumor registries revealed that these diseases previously had occurred at very low levels in the United States (CDC, 1981b; CDC, 1982f). KS, a very rare skin neoplasm, had affected mostly older men of Mediterranean origin or cancer or transplant patients undergoing immunosuppressive therapy (Gange and Jones, 1978; Safai and Good, 1981). Before the AIDS epidemic, the annual incidence of Kaposi's sarcoma in the United States was 0.02 to 0.06 per 100,000 population (Rothman, 1962a; Oettle, 1962). 
In addition, a more aggressive form of KS that generally occurred in younger individuals was seen in certain parts of Africa (Rothman, 1962b; Safai, 1984a). By 1984, never-married men in San Francisco were found to be 2,000 times more likely to develop KS than during the years 1973 to 1979 (Williams et al., 1994). As of Dec. 31, 1994, 36,693 patients with AIDS in the United States with a definitive diagnosis of KS had been reported to the CDC (CDC, 1995b). PCP, a lung infection caused by a pathogen to which most individuals are exposed with no undue consequences, was extremely rare prior to 1981 in individuals other than those receiving immunosuppressive therapy or among the chronically malnourished, such as certain Eastern European children following World War II (Walzer, 1990). A 1967 survey, for example, found only 107 U.S. cases of PCP reported in the medical literature up to that point, virtually all among individuals with underlying immunosuppressive conditions or who had undergone immunosuppressive therapy (Le Clair, 1969). In that year, CDC became the sole supplier in the United States of pentamidine isethionate, then the only recommended PCP therapy, and began collecting data on each PCP case diagnosed and treated in this country. After reviewing requests for pentamidine in the period 1967 to 1970, researchers found only one case of confirmed PCP without a known underlying condition (Walzer et al., 1974). In the period immediately prior to the recognition of AIDS, January 1976 to June 1980, CDC received only one request for pentamidine isethionate to treat an adult in the United States who had PCP and no underlying disease (CDC, 1982f). In 1981 alone, 42 requests for pentamidine were received to treat patients with PCP and no known underlying disorders (CDC, 1982f). By Dec. 31, 1994, 127,626 individuals with AIDS in the United States with definitive diagnoses of PCP had been reported to the CDC (CDC, 1995b). Another rare opportunistic disease, disseminated infection with the Mycobacterium avium complex (MAC), also was seen frequently in the first AIDS patients (Zakowski et al., 1982; Greene et al., 1982). Prior to 1981, only 32 individuals with disseminated MAC disease had been described in the medical literature (Masur, 1982a). By Dec. 31, 1994, the CDC had received reports of 28,954 U.S. AIDS patients with definitive diagnoses of disseminated MAC (CDC, 1995b). The fact that homosexual men constituted the initial population in which AIDS occurred in the United States led some to surmise that a homosexual lifestyle was specifically related to the disease (Goedert et al., 1982; Hurtenbach and Shearer, 1982; Sonnabend et al., 1983; Durack, 1981; Mavligit et al., 1984). These early suggestions that AIDS resulted from behavior specific to the homosexual population were largely dismissed when the syndrome was observed in distinctly different groups in the United States: in male and female injection drug users; in hemophiliacs and blood transfusion recipients; among female sex partners of bisexual men, recipients of blood or blood products, or injection drug users; and among infants born to mothers with AIDS or with a history of injection drug use (CDC, 1982b,c,d,f, 1983a; Poon et al., 1983; Elliot et al., 1983; Masur et al., 1982b; Davis et al., 1983; Harris et al., 1983; Rubinstein et al., 1983; Oleske et al., 1983; Ammann et al., 1983b). 
In 1983, for example, a study found that hemophiliacs with no history of any of the proposed causes of AIDS in homosexual men had developed the syndrome, and some of the men had apparently transmitted the infection to their wives (deShazo et al., 1983). Many public health experts concluded that the clustering of AIDS cases (Auerbach et al., 1984; Gazzard et al., 1984) and the occurrence of cases in diverse risk groups could be explained only if AIDS were caused by an infectious microorganism transmitted in the manner of hepatitis B virus (HBV): by sexual contact, by inoculation with blood or blood products, and from mother to newborn infant (Francis et al., 1983; Curran et al., 1984; AMA, 1984; CDC, 1982f, 1983a,b). Early suspects for the cause of AIDS were cytomegalovirus (CMV), because of its association with immunosuppression, and Epstein-Barr virus (EBV), which has an affinity for lymphocytes (Gottlieb et al., 1981; Hymes et al., 1981; CDC, 1982f). However, AIDS was a new phenomenon, and these viruses already had a worldwide distribution. Comparative seroprevalence studies showed no convincing evidence to assign these viruses or other known agents a primary role in the syndrome (Rogers et al., 1983). Also lacking was evidence that these viruses, when isolated from patients with AIDS, differed significantly from strains found in healthy individuals or from strains found in the years preceding the emergence of AIDS (AMA, 1984). By 1983, several research groups had focused on retroviruses for clues to the cause of AIDS (Gallo and Montagnier, 1987). Two recently recognized retroviruses, HTLV-I and HTLV-II, were the only viruses then known to preferentially infect helper T lymphocytes, the cells depleted in people with AIDS (Gallo and Reitz, 1982; Popovic et al., 1984). The pattern of HTLV transmission was similar to that seen among AIDS patients: HTLV was transmitted by sexual contact, from mother to child or by exposure to infected blood (Essex, 1982; Gallo and Reitz, 1982). In addition, HTLV-I was known to cause mild immunosuppression, and a related retrovirus, the lymphotropic feline leukemia virus (FeLV), caused lethal immunosuppression in cats (Essex et al., 1975). In May 1983, the first report providing experimental evidence for an association between a retrovirus and AIDS was published (Barre-Sinoussi et al., 1983). After finding antibodies cross-reactive with HTLV-I in a homosexual patient with lymphadenopathy, a group led by Dr. Luc Montagnier isolated a previously unrecognized virus containing reverse transcriptase that was cytopathic for cord-blood lymphocytes (Barre-Sinoussi et al., 1983). This virus later became known as lymphadenopathy-associated virus (LAV). The French group subsequently reported that LAV was tropic for T-helper cells, in which it grew to substantial titers and caused cell death (Klatzmann et al., 1984a; Montagnier et al., 1984). In 1984, a considerable amount of new data added to the evidence for a retroviral etiology for AIDS. Researchers at the National Institutes of Health reported the isolation of a cytopathic T-lymphotropic virus from 48 different people, including 18 of 21 with pre-AIDS, three of four clinically normal mothers of children with AIDS, 26 of 72 children and adults with AIDS, and one (who later developed AIDS) of 22 healthy homosexuals (Gallo et al., 1984). The virus, named HTLV-III, could not be found in 115 healthy heterosexual subjects. 
Antibodies reactive with HTLV-III antigens were found in serum samples of 88 percent of 48 patients with AIDS, 79 percent of 14 homosexuals with pre-AIDS, and fewer than 1 percent of hundreds of healthy heterosexuals (Sarngadharan et al., 1984). Shortly thereafter, the researchers found that 100 percent (34 of 34) of AIDS patients tested were positive for HTLV-III antibodies in a study in which none of 14 controls had antibodies (Safai et al., 1984b). In a study in the United Kingdom reported later that year, investigators found that 30 of 31 AIDS patients tested were seropositive for HTLV-III antibodies, as were 110 of 124 individuals with persistent generalized lymphadenopathy (Cheingsong-Popov et al., 1984). None of more than 1,000 blood donors selected randomly had antibodies to HTLV-III in this study.

During the same time period, HTLV-III was isolated from the semen of patients with AIDS (Zagury et al., 1984; Ho et al., 1984), findings consistent with the epidemiologic data demonstrating AIDS transmission via sexual contact.

Researchers in San Francisco subsequently reported the isolation of a retrovirus they named the AIDS-associated retrovirus (ARV) from AIDS patients in different risk groups, as well as from asymptomatic people from AIDS risk groups (Levy et al., 1984). The researchers isolated ARV from 27 of 55 patients with AIDS or lymphadenopathy syndrome; they detected antibodies to ARV in 90 percent of 113 individuals with the same conditions. Like HTLV-III and LAV, ARV grew to substantial titers in peripheral blood mononuclear cells and killed CD4+ T cells. The same group subsequently isolated ARV from genital secretions of women with antibodies to the virus, data consistent with the observation that men could contract AIDS following contact with a woman infected with the virus (Wofsy et al., 1986). During the same period, HTLV-III and ARV were isolated from the brains of children and adults with AIDS-associated encephalopathy, which suggested a role for these viruses in the central nervous system disorders seen in many patients with AIDS (Levy et al., 1985; Ho et al., 1985).

By 1985, analyses of the nucleotide sequences of HTLV-III, LAV and ARV demonstrated that the three viruses belonged to the same retroviral family and were strikingly similar (Wain-Hobson et al., 1985; Ratner et al., 1985; Sanchez-Pescador et al., 1985). In 1986, the International Committee of Viral Taxonomy renamed the viruses the human immunodeficiency virus (HIV) (Coffin et al., 1986).

Serologic tests for antibodies to HIV, developed in 1984 (Sarngadharan et al., 1984; Popovic et al., 1984; reviewed in Brookmeyer and Gail, 1994), have enabled researchers to conduct hundreds of seroprevalence surveys throughout the world. Using these tests, investigators have repeatedly demonstrated that the occurrence of AIDS-like illnesses in different populations has closely followed the appearance of HIV antibodies (U.S. Bureau of the Census, 1994). For example, retrospective examination of sera collected in the late 1970s in association with hepatitis B studies in New York, San Francisco and Los Angeles suggests that HIV entered the U.S. population sometime in the late 1970s (Jaffe et al., 1985a). In 1978, 4.5 percent of men in the San Francisco cohort had antibodies to HIV (Jaffe et al., 1985a).
The first cases of AIDS in homosexual men in San Francisco were reported in 1981, and by 1984, more than two-thirds of the San Francisco cohort had HIV antibodies and almost one-third had developed AIDS-related conditions (Jaffe et al., 1985a). By the end of 1992, approximately 70 percent of 539 men in the San Francisco cohort with a well-documented date of HIV seroconversion before 1983 had developed an AIDS-defining condition or had a CD4+ T cell count of less than 200/mm3; another 11 percent had CD4+ T cell counts between 200 and 500/mm3 (Buchbinder et al., 1994) (Figure 3).

Fig. 3. Clinical and immunologic outcomes in patients HIV-infected for 10-15 years in the San Francisco City Clinic; n=539. Modified from Buchbinder et al., 1994.

Retrospective tests of the U.S. blood supply have shown that, in 1978, at least one batch of Factor VIII was contaminated with HIV (Evatt et al., 1985; Aronson, 1993). Factor VIII was given to some 2,300 males in the United States that year. In July 1982, the first cases of AIDS in hemophiliacs were reported (CDC, 1982c). Through Dec. 31, 1994, 3,863 individuals in the United States with hemophilia or other coagulation disorders had been diagnosed with AIDS (CDC, 1995a).

Elsewhere in the world, a similar chronological association between HIV and AIDS has been noted. The appearance of HIV in the blood supply has preceded or coincided with the occurrence of AIDS cases in every country and region where cases of AIDS have been reported (Institute of Medicine, 1986; Chin and Mann, 1988; Curran et al., 1988; Piot et al., 1988; Mann, 1992; Mann et al., 1992; U.S. Bureau of the Census, 1994). For example, a review of serosurveys associated with dengue fever in the Caribbean found that the earliest evidence of HIV infection in Haiti appeared in samples from 1979 (Pape et al., 1983, 1993); the first cases of AIDS in Haiti and in Haitians in the United States were reported in the early 1980s (CDC, 1982e; Pape et al., 1983, 1993). In Africa between 1981 and 1983, clinical epidemics of chronic, life-threatening enteropathic diseases ("slim disease"), cryptococcal meningitis, progressive KS and esophageal candidiasis were recognized in Rwanda, Tanzania, Uganda, Zaire and Zambia, and in 1983 the first AIDS cases among Africans were reported (Quinn et al., 1986; Essex, 1994).

The earliest blood sample from Africa from which HIV has been recovered is from a possible AIDS patient in Zaire, tested in connection with a 1976 Ebola virus outbreak (Getchell et al., 1987; Myers et al., 1992). Serologic data have suggested the presence of HIV infection as early as 1959 in Zaire (Nahmias et al., 1986). Other investigators have found evidence of HIV proviral DNA in tissues of a sailor who died in Manchester, England, in 1959 (Corbitt et al., 1990); in the latter case, this finding may have represented contamination with a virus isolated at a much later date (Zhu and Ho, 1995). HIV did not become epidemic until 20 to 30 years later, perhaps because of the migration of poor and young sexually active individuals from rural areas to urban centers in developing countries, with subsequent return migration and, internationally, due to civil wars, tourism, business travel and the drug trade (Quinn, 1994).

As a retrovirus, HIV is an RNA virus that codes for the enzyme reverse transcriptase, which transcribes the viral genomic RNA into a DNA copy that ultimately integrates into the host cell genome (Fauci, 1988).
Within the retrovirus family, HIV is classified as a lentivirus, having genetic and morphologic similarities to animal lentiviruses such as those infecting cats (feline immunodeficiency virus), sheep (visna virus), goats (caprine arthritis-encephalitis virus), and non-human primates (simian immunodeficiency virus) (Stowring et al., 1979; Gonda et al., 1985; Haase, 1986; Temin, 1988, 1989). Like HIV in humans, these animal viruses primarily infect cells of the immune system, including T lymphocytes and macrophages (Haase, 1986, 1990; Levy, 1993) (Table 1; reference: Levy, 1993).

Lentiviruses often cause immunodeficiency in their hosts in addition to slow, progressive wasting disorders, neurodegeneration and death (Haase, 1986, 1990). SIV, for example, infects several subspecies of macaque monkeys, causing diarrhea, wasting, CD4+ T cell depletion, opportunistic infections and death (Desrosiers, 1990; Fultz, 1993). HIV is closely related to SIV, as evidenced by viral protein cross-reactivity and genetic sequence similarities (Franchini et al., 1987; Hirsch et al., 1989; Desrosiers, 1990; Myers, 1992).

One feature that distinguishes lentiviruses from other retroviruses is the remarkable complexity of their viral genomes. Most retroviruses that are capable of replication contain only three genes--env, gag and pol (Varmus, 1988). HIV contains not only these essential genes but also the complex regulatory genes tat, rev, nef, and auxiliary genes vif, vpr and vpu (Greene, 1991). The actions of these additional genes probably contribute to the profound pathogenicity that differentiates HIV from many other retroviruses.

CD4+ T cells, the cells depleted in AIDS patients, are primary targets of HIV because of the affinity of the gp120 glycoprotein component of the viral envelope for the CD4 molecule (Dalgleish et al., 1984; Klatzmann et al., 1984b; McDougal et al., 1985a, 1986). These so-called T-helper cells coordinate a number of critical immunologic functions. The loss of these cells results in the progressive impairment of the immune system and is associated with a deteriorating clinical course (Pantaleo et al., 1993a). In advanced HIV disease, abnormalities of virtually every component of the immune system are evident (Fauci, 1993a; Pantaleo et al., 1993a).

Primary HIV infection is associated with a burst of HIV viremia and often a concomitant abrupt decline of CD4+ T cells in the peripheral blood (Cooper et al., 1985; Daar et al., 1991; Tindall and Cooper, 1991; Clark et al., 1991; Pantaleo et al., 1993a, 1994). The decrease in circulating CD4+ T cells during primary infection is probably due both to HIV-mediated cell killing and to re-trafficking of cells to the lymphoid tissues and other organs (Fauci, 1993a). The median period of time between infection with HIV and the onset of clinically apparent disease is approximately 10 years in western countries, according to prospective studies of homosexual men in which dates of seroconversion are known (Lemp et al., 1990; Pantaleo et al., 1993a; Hessol et al., 1994) (Figure 4). Similar estimates of asymptomatic periods have been made for HIV-infected blood-transfusion recipients, injection drug users and adult hemophiliacs (reviewed in Alcabes et al., 1993a).

Fig. 4. Typical course of HIV infection. During the period following primary infection, HIV disseminates widely in the body; an abrupt decrease in CD4+ T cells in the peripheral circulation is often seen. An immune response to HIV ensues, with a decrease in detectable viremia.
A period of clinical latency follows, during which CD4+ T cell counts continue to decrease, until they fall to a critical level below which there is a substantial risk of opportunistic infections. Adapted from Pantaleo et al., 1993a.

HIV disease, however, is not uniformly expressed in all individuals. A small proportion of persons infected with the virus develop AIDS and die within months following primary infection, while approximately 5 percent of HIV-infected individuals exhibit no signs of disease progression even after 12 or more years (Pantaleo et al., 1995a; Cao et al., 1995). Host factors such as age or genetic differences among individuals, the level of virulence of the individual strain of virus, as well as influences such as co-infection with other microbes may determine the rate and severity of HIV disease expression in different people (Fauci, 1993a; Pantaleo et al., 1993a). Such variables have been termed "clinical illness promotion factors" or co-factors and appear to influence the onset of clinical disease among those infected with any pathogen (Evans, 1982). Most people infected with hepatitis B, for example, show no symptoms or only jaundice and clear their infection, while others suffer disease ranging from chronic liver inflammation to cirrhosis and hepatocellular carcinoma (Robinson, 1990). Co-factors probably also determine why some smokers develop lung cancer, while others do not.

As disease progresses, increasing amounts of infectious virus, viral antigens and HIV-specific nucleic acids in the body correlate with a worsening clinical course (Allain et al., 1987; Nicholson et al., 1989; Ho et al., 1989; Schnittman et al., 1989, 1990a, 1991; Mathez et al., 1990; Genesca et al., 1990; Hufert et al., 1991; Saag et al., 1991; Aoki-Sei et al., 1992; Yerly et al., 1992; Bagnarelli et al., 1992; Ferre et al., 1992; Michael et al., 1992; Pantaleo et al., 1993b; Gupta et al., 1993; Connor et al., 1993; Saksela et al., 1994; Dickover et al., 1994; Daar et al., 1995; Furtado et al., 1995). Cross-sectional studies in adults and children have shown that levels of infectious HIV or proviral DNA in the blood are substantially higher in patients with AIDS than in asymptomatic patients (Ho et al., 1989; Coombs et al., 1989; Saag et al., 1991; Srugo et al., 1991; Michael et al., 1992; Aoki-Sei et al., 1992).

In both blood and lymph tissues from HIV-infected individuals, researchers at the National Institutes of Health found viral burden and replication to be substantially higher in patients with AIDS than in early-stage patients (Pantaleo et al., 1993b). This group also found deterioration of the architecture and microenvironment of the lymphoid tissue to a greater extent in late-stage patients than in asymptomatic individuals. The dissolution of the follicular dendritic cell network of the lymph node germinal center and the progressive loss of antigen-presenting capacity are likely critical factors that contribute to the immune deficiency seen in individuals with AIDS (Pantaleo et al., 1993b). More recently, the same group studied 15 long-term non-progressors, defined as individuals infected for more than seven years (usually more than 10 years) who received no antiretroviral therapy and showed no decline in CD4+ T cells. They found that viral burden and viral replication in the peripheral blood and in lymph nodes, measured by DNA and RNA PCR, respectively, were at least 10 times lower than in 18 HIV-infected individuals whose disease progression was more typical.
In addition, the lymph node architecture in long-term non-progressors remained intact (Pantaleo et al., 1995a).

Longitudinal studies also have quantified viral burden and replication in the blood and their relationship to disease progression (Schnittman et al., 1990a; Connor et al., 1993; Saksela et al., 1994; Daar et al., 1995; Furtado et al., 1995). In a study of asymptomatic HIV-infected individuals who ultimately developed rapidly progressive disease, the number of CD4+ T cells in which HIV DNA could be found increased over time, whereas this did not occur in patients with stable disease (Schnittman et al., 1990a). Using serial blood samples from HIV-infected individuals who had a precipitous drop in CD4+ T cells followed by a rapid progression to AIDS, other groups found a significant increase in the levels of HIV DNA concurrent with or prior to CD4+ T cell decline (Connor et al., 1993; Daar et al., 1995). Increased expression of HIV mRNA in peripheral blood mononuclear cells has also been shown to precede clinically defined progression of disease (Saksela et al., 1994).

In the longitudinal Multicenter AIDS Cohort Study (MACS), homosexual and bisexual men for whom the time of seroconversion had been documented had increasing levels of both plasma HIV RNA and intracellular RNA, and declining CD4+ T cell numbers, as disease progressed (Gupta et al., 1993; Mellors et al., 1995). Men who remained asymptomatic with stable CD4+ T cell numbers maintained extremely low levels of viral RNA. These findings suggest that plasma HIV RNA levels are a strong, CD4-independent predictor of rapid progression to AIDS. Another longitudinal study found that increasing plasma RNA levels were highly predictive of the development of zidovudine (AZT) resistance and death in patients on long-term therapy with that drug (Vahey et al., 1994).

Other evidence suggests that changes in viral load due to changes in therapy can predict clinical benefit in patients. It was recently found that the amount of HIV RNA in the peripheral blood decreased in patients who switched to didanosine (ddI) after taking AZT and increased in patients who continued to take AZT (NTIS, 1994; Welles et al., 1995). Decreases in HIV RNA were associated with fewer progressions to new, previously undiagnosed AIDS-defining diseases or death. This study provided the first evidence that a therapy-induced reduction of HIV viral load is associated with improved clinical outcome. Similarly, studies of blood samples collected serially from HIV-infected patients found that a decrease in HIV RNA copy number in the first months following treatment with AZT strongly correlated with improved clinical outcome (O'Brien et al., 1994; Jurriaans et al., 1995).

The emergence of HIV variants that are more cytopathic and replicate in a wider range of susceptible cells in vitro has also been shown to correlate with disease progression in HIV-infected individuals (Fenyo et al., 1988; Tersmette et al., 1988, 1989a,b; Richman and Bozzette, 1994; Connor et al., 1993; Connor and Ho, 1994a,b). Similar results have been seen in vivo with macaques infected with molecularly cloned SIV (Kodama et al., 1993).
It has also been reported that HIV isolates from patients who progress to AIDS have a higher rate of replication compared with HIV isolates from individuals who remain asymptomatic (Fenyo et al., 1988; Tersmette et al., 1989a), and that rapidly replicating variants of HIV emerge during the asymptomatic stage of infection prior to disease progression (Tersmette et al., 1989b; Connor and Ho, 1994b).

It is well established that a number of viral, rickettsial, fungal, protozoal and bacterial infections can cause transient T cell decreases (Chandra, 1983). Immune deficiencies due to tumors, autoimmune diseases, rare congenital disorders, chemotherapy and other factors have been shown to render certain individuals susceptible to opportunistic infections (Ammann, 1991). As mentioned above, chronic malnutrition following World War II resulted in PCP in Eastern European children (Walzer, 1990). Transplant recipients treated with immunosuppressive drugs such as cyclosporin and glucocorticoids often suffer recurrent diseases due to pathogens such as varicella zoster virus and cytomegalovirus that also cause disease in HIV-infected individuals (Chandra, 1983; Ammann, 1991).

However, the specific immunologic profile that typifies AIDS--a progressive reduction of CD4+ T cells resulting in persistent CD4+ T lymphocytopenia and profound deficits in cellular immunity--is extraordinarily rare in the absence of HIV infection or other known causes of immunosuppression. This was recently demonstrated in several surveys that sought to determine the frequency of idiopathic CD4+ T-cell lymphocytopenia (ICL), which is characterized by CD4+ T cell counts lower than 300 cells per cubic millimeter (mm3) of blood in the absence of HIV antibodies or conditions or therapies associated with depressed levels of CD4+ T cells (reviewed in Fauci, 1993b; Laurence, 1993).

In a CDC survey, only 47 (0.02 percent) of 230,179 individuals diagnosed with AIDS were both HIV-seronegative and had persistently low CD4+ T cell counts (<300/mm3) in the absence of conditions or therapies associated with immunosuppression (Smith et al., 1993). In the MACS, 22,643 CD4+ T cell determinations in 2,713 HIV-seronegative homosexual men revealed only one individual with a CD4+ T cell count persistently lower than 300 cells/mm3, and this individual was receiving immunosuppressive therapy (Vermund et al., 1993a). A similar review of another cohort of homosexual and bisexual men found no case of persistently lowered CD4+ T cell counts among 756 HIV-seronegative men who had no other cause of immunosuppression (Smith et al., 1993).

Analogous results were reported from the San Francisco Men's Health Study, a population-based cohort recruited in 1984. Among 206 HIV-seronegative heterosexual and 526 HIV-seronegative homosexual or bisexual men, only one had consistently low CD4+ T cell counts (Sheppard et al., 1993). This individual also had low CD8+ T cell counts, suggesting that he had general lymphopenia rather than a selective loss of CD4+ T cells. No AIDS-defining clinical condition was observed among these HIV-seronegative men. Studies of blood donors, recipients of blood and blood products, and household and sexual contacts of transfusion recipients also suggest that persistently low CD4+ T cell counts are extremely rare in the absence of HIV infection (Aledort et al., 1993; Busch et al., 1994).
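For perspective, the survey findings above can be placed on a common scale. The short sketch below is illustrative arithmetic only, using the counts quoted above; it is not part of any cited analysis, and the survey labels are merely abbreviations of the studies just described.

```python
# Illustrative arithmetic only: the ICL surveys described above, expressed
# as crude rates per 100,000 persons surveyed. Counts are taken from the
# text; this is not a reanalysis of the original studies.
surveys = {
    "CDC review of reported AIDS cases (Smith et al., 1993)": (47, 230_179),
    "MACS HIV-seronegative men (Vermund et al., 1993a)": (1, 2_713),
    "Second cohort of seronegative men (Smith et al., 1993)": (0, 756),
    "SF Men's Health Study seronegatives (Sheppard et al., 1993)": (1, 732),
}

for name, (cases, n) in surveys.items():
    rate = 100_000 * cases / n  # crude rate per 100,000 surveyed
    print(f"{name}: {cases}/{n:,} = {rate:.1f} per 100,000")
```

Even these crude rates overstate truly unexplained lymphocytopenia, since, as noted above, the single MACS case was receiving immunosuppressive therapy and the single San Francisco case had generalized lymphopenia.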
Longitudinal studies of injection drug users have demonstrated that unexplained CD4+ T lymphocytopenia is almost never seen among HIV-seronegative individuals in this population, despite a high risk of exposure to hepatitis B, cytomegalovirus and other blood-borne pathogens (Des Jarlais et al., 1993; Weiss et al., 1992).

HIV infects and kills CD4+ T lymphocytes in vitro, although immortalized T-cell lines have been developed that allow the virus to be propagated in the laboratory (Popovic et al., 1984; Zagury et al., 1986; Garry, 1989; Clark et al., 1991). Several mechanisms of CD4+ T cell killing have been observed in lentivirus systems in vitro and may explain the progressive loss of these cells in HIV-infected individuals (reviewed in Garry, 1989; Fauci, 1993a; Pantaleo et al., 1993a) (Table 2). These mechanisms include disruption of the cell membrane as HIV buds from the surface (Leonard et al., 1988) or the intracellular accumulation of heterodisperse RNAs and unintegrated DNA (Pauza et al., 1990; Koga et al., 1988). Evidence also suggests that intracellular complexing of CD4 and viral envelope products can result in cell killing (Hoxie et al., 1986).

Table 2. Mechanisms of CD4+ T cell depletion in HIV infection
  Direct HIV-mediated cytopathic effects (single-cell killing)
  HIV-mediated formation of syncytia
  Virus-specific immune responses
    HIV-specific cytolytic T lymphocytes
    Antibody-dependent cellular cytotoxicity
    Natural killer cells
  Anergy caused by inappropriate cell signaling through gp120-CD4 interaction
  Superantigen-mediated perturbation of T-cell subgroups
  Programmed cell death (apoptosis)
Reference: Pantaleo et al., 1993a.

In addition to these direct mechanisms of CD4+ T cell depletion, indirect mechanisms may result in the death of uninfected CD4+ T cells (reviewed in Fauci, 1993a; Pantaleo et al., 1993a). Uninfected cells often fuse with infected cells, resulting in giant cells called syncytia that have been associated with the cytopathic effect of HIV in vitro (Sodroski et al., 1986; Lifson et al., 1986). Uninfected cells also may be killed when free gp120, the envelope protein of HIV, binds to their surfaces, marking them for destruction by antibody-dependent cellular cytotoxicity responses (Lyerly et al., 1987). Other autoimmune phenomena may also contribute to CD4+ T cell death since HIV envelope proteins share some degree of homology with certain major histocompatibility complex type II (MHC-II) molecules (Golding et al., 1989; Koenig et al., 1988). A number of investigators have suggested that superantigens, either encoded by HIV or derived from unrelated agents, may trigger massive stimulation and expansion of CD4+ T cells, ultimately leading to depletion or anergy of these cells (Janeway, 1991; Hugin et al., 1991). The untimely induction of a form of programmed cell death called apoptosis has been proposed as an additional mechanism for CD4+ T cell loss in HIV infection (Ameisen and Capron, 1991; Terai et al., 1991; Laurent-Crawford et al., 1991). Recent reports indicate that apoptosis occurs to a greater extent in HIV-infected individuals than in non-infected persons, both in the peripheral blood and lymph nodes (Finkel et al., 1995; Pantaleo and Fauci, 1995b; Muro-Cacho et al., 1995).

It has also been observed that HIV infects precursors of CD4+ T cells in the bone marrow and thymus and damages the microenvironment of these organs necessary for the optimal sustenance and maturation of progenitor cells (Schnittman et al., 1990b; Stanley et al., 1992).
These findings may help explain the lack of regeneration of the CD4+ T cell pool in patients with AIDS (Fauci, 1993a).

Recent studies have demonstrated a substantial viral burden and active viral replication in both the peripheral blood and lymphoid tissues even early in HIV infection (Fox et al., 1989; Coombs et al., 1989; Ho et al., 1989; Michael et al., 1992; Bagnarelli et al., 1992; Pantaleo et al., 1993b; Embretson et al., 1993; Piatak et al., 1993). One group has reported that 25 percent of CD4+ T cells in the lymph nodes of HIV-infected individuals harbor HIV DNA early in the course of disease (Embretson et al., 1993). Other data suggest that HIV infection is sustained by a dynamic process involving continuous rounds of new viral infection and the destruction and replacement of over 1 billion CD4+ T cells per day (Wei et al., 1995; Ho et al., 1995). Taken together, these studies strongly suggest that HIV has a central role in the pathogenesis of AIDS, either directly or indirectly by triggering a series of pathogenic events that contribute to progressive immunosuppression.

Recent developments in HIV research provide some of the strongest evidence for the causative role of HIV in AIDS and fulfill the classical postulates for disease causation developed by Henle and Koch in the 19th century (Koch's postulates reviewed in Evans, 1976, 1989a; Harden, 1992). Koch's postulates have been variously interpreted by many scientists over the years. One scientist who asserts that HIV does not cause AIDS has set forth the following interpretation of the postulates for proving the causal relationship between a microorganism and a specific disease (Duesberg, 1987):

1) The microorganism must be found in all cases of the disease.
2) It must be isolated from the host and grown in pure culture.
3) It must reproduce the original disease when introduced into a susceptible host.
4) It must be found in the experimental host so infected.

Recent developments in HIV/AIDS research have shown that HIV fulfills these criteria as the cause of AIDS.

1) The development of DNA PCR has enabled researchers to document the presence of cell-associated proviral HIV in virtually all patients with AIDS, as well as in individuals in earlier stages of HIV disease (Kwok et al., 1987; Wages et al., 1991; Bagasra et al., 1992; Bruisten et al., 1992; Petru et al., 1992; Hammer et al., 1993). RNA PCR has been used to detect cell-free and/or cell-associated viral RNA in patients at all stages of HIV disease (Ottmann et al., 1991; Schnittman et al., 1991; Aoki-Sei, 1992; Michael et al., 1992; Piatak et al., 1993) (Table 3; modified from Hammer et al., 1993).

2) Improvements in co-culture techniques have allowed the isolation of HIV in virtually all AIDS patients, as well as in almost all seropositive individuals with both early- and late-stage disease (Coombs et al., 1989; Schnittman et al., 1989; Ho et al., 1989; Jackson et al., 1990).

1-4) All four postulates have been fulfilled in three laboratory workers with no other risk factors who have developed AIDS or severe immunosuppression after accidental exposure to concentrated HIV (strain IIIB) in the laboratory (Blattner et al., 1993; Reitz et al., 1994; Cohen, 1994c). Two patients were infected in 1985 and one in 1991. All three have shown marked CD4+ T cell depletion, and two have CD4+ T cell counts that have dropped below 200/mm3 of blood.
One of these latter individuals developed PCP, an AIDS indicator disease, 68 months after showing evidence of infection and did not receive antiretroviral drugs until 83 months after the infection. In all three cases, HIV (strain IIIB) was isolated from the infected individual, sequenced, and shown to be the original infecting strain of virus.

In addition, as of Dec. 31, 1994, CDC had received reports of 42 health care workers in the United States with documented, occupationally acquired HIV infection, of whom 17 have developed AIDS in the absence of other risk factors (CDC, 1995a). These individuals all had evidence of HIV seroconversion following a discrete percutaneous or mucocutaneous exposure to blood, body fluids or other clinical laboratory specimens containing HIV.

The development of AIDS following known HIV seroconversion also has been repeatedly observed in pediatric and adult blood transfusion cases (Ward et al., 1989; Ashton et al., 1994), in mother-to-child transmission (European Collaborative Study, 1991, 1992; Turner et al., 1993; Blanche et al., 1994), and in studies of hemophilia, injection drug use, and sexual transmission in which the time of seroconversion can be documented using serial blood samples (Goedert et al., 1989; Rezza et al., 1989; Biggar, 1990; Alcabes et al., 1993a,b; Giesecke et al., 1990; Buchbinder et al., 1994; Sabin et al., 1993). In many such cases, infection is followed by an acute retroviral syndrome, which further strengthens the chronological association between HIV and AIDS (Pedersen et al., 1989, 1993; Schechter et al., 1990; Tindall and Cooper, 1991; Keet et al., 1993; Sinicco et al., 1993; Bachmeyer et al., 1993; Lindback et al., 1994).

A recent study demonstrated that an HIV variant that causes AIDS in humans--HIV-2--also causes a similar syndrome when injected into baboons (Barnett et al., 1994). Over the course of two years, HIV-2-infected animals exhibited a significant decline in immune function, as well as lymphocytic interstitial pneumonia (which often afflicts children with AIDS), the development of lesions similar to those seen in Kaposi's sarcoma, and severe weight loss akin to the wasting syndrome that occurs in human AIDS patients. Other studies suggest that pigtailed macaques also develop AIDS-associated diseases subsequent to HIV-2 infection (Morton et al., 1994).

Asian monkeys infected with clones of the simian immunodeficiency virus (SIV), a lentivirus closely related to HIV, also develop AIDS-like syndromes (reviewed in Desrosiers, 1990; Fultz, 1993). In macaque species, various cloned SIV isolates induce syndromes that parallel HIV infection and AIDS in humans, including early lymphadenopathy and the occurrence of opportunistic infections such as pulmonary Pneumocystis carinii infection, cytomegalovirus, cryptosporidium, candida and disseminated MAC (Letvin et al., 1985; Kestler et al., 1990; Dewhurst et al., 1990; Kodama et al., 1993).

In cell culture experiments, molecular clones of HIV are tropic for the same cells as clinical HIV isolates and laboratory strains of the virus and show the same pattern of cell killing (Hays et al., 1992), providing further evidence that HIV is responsible for the immune defects of AIDS. Moreover, in severe combined immunodeficiency (SCID) mice with human thymus/liver implants, molecular clones of HIV produce the same patterns of cell killing and pathogenesis as seen with clinical isolates (Bonyhadi et al., 1993; Aldrovandi et al., 1993).
Convincing evidence that HIV causes AIDS also comes from the geographic correlation between rates of HIV antibody positivity and incidence of disease. Numerous studies have shown that AIDS is common only in populations with a high seroprevalence of HIV antibodies. Conversely, in populations in which HIV antibody seroprevalence is low, AIDS is extremely rare (U.S. Bureau of the Census, 1994).

Malawi, a country in southern Africa with 8.2 million inhabitants, reported 34,167 cases of AIDS to the WHO as of December 1994 (WHO, 1995a). This is the highest case rate in the region. The rate of HIV seroprevalence in Malawi is also high, as evidenced by serosurveys of pregnant women and blood donors (U.S. Bureau of the Census, 1994). In one survey, approximately 23 percent of more than 6,600 pregnant women in urban areas were HIV-positive (Dallabetta et al., 1993). Approximately 20 percent of 547 blood donors in a 1990 survey were HIV-positive (Kool et al., 1990).

In contrast, Madagascar, an island country off the southeast coast of Africa with a population of 11.3 million, reported only nine cases of AIDS to the WHO through December 1994 (WHO, 1995a). HIV seroprevalence is extremely low in this country; in recent surveys of 1,629 blood donors and 1,111 pregnant women, no evidence of HIV infection was found (Rasamindrakotroka et al., 1991). Yet, other sexually transmitted diseases are common in Madagascar; a 1989 seroepidemiologic study for syphilis found that 19.5 percent of 12,457 persons tested were infected (Latif, 1994; Harms et al., 1994). It is likely that due to the relative geographic isolation of this island nation, HIV was introduced late into its population. However, the high rate of other STDs such as syphilis would predict that HIV will spread in this country in the future.

Similar patterns have been noted in Asia. Thailand reported 13,246 cases of AIDS to the WHO through December 1994, up from only 14 cases through 1988 (WHO, 1995a) (Figure 5). This rise has paralleled the spread of HIV infection in Thailand. Through 1987, fewer than 0.05 percent of 200,000 Thais from all risk groups were HIV-seropositive (Weniger et al., 1991). By 1993, 3.7 percent of 55,000 inductees into the Royal Thai Army tested positive for HIV antibodies, up from 0.5 percent of men recruited in 1989 (U.S. Bureau of the Census Database, December 1994). Seropositivity among brothel prostitutes in Thailand rose from 3.5 percent in June 1989 to 27.1 percent in June 1993 (Hanenberg et al., 1994). By mid-1993, an estimated 740,000 people were infected with HIV in Thailand (Brown and Sittitrai, 1994). By the year 2000, researchers estimate that there may be 1.4 million cumulative HIV infections and 480,000 AIDS cases in that country (Cohen, 1994b).

Fig. 5. Cumulative AIDS cases in Thailand, 1979-1994. Reference: WHO, 1995a.

By comparison, South Korea reported only 25 cases of AIDS to the WHO through December 1994 (WHO, 1995a). In serosurveys in that country conducted in 1993, HIV seroprevalence was 0.008 percent among female prostitutes and 0.00007 percent among blood donors (Shin et al., 1994).

By the end of 1994, 7,223 cumulative cases of AIDS in the United States resulting from blood transfusions or the receipt of blood components or tissue had been reported to the CDC (CDC, 1995a). Virtually all of these cases can be traced to transfusions before the screening of the blood supply for HIV commenced in 1985 (Jones et al., 1992; Selik et al., 1993).
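The country comparisons above can also be put on a common per-capita scale. The sketch below is illustrative arithmetic only: the case counts and the Malawi and Madagascar populations come from the text, while the Thailand and South Korea population figures are rough mid-1990s approximations assumed solely for this illustration, not figures from the cited sources.

```python
# Illustrative arithmetic only: crude cumulative reported-AIDS-case rates
# per 100,000 population for the countries compared above. Thailand and
# South Korea populations are assumed approximations (~1994), not taken
# from the cited sources.
countries = {
    "Malawi":      (34_167,  8_200_000),
    "Madagascar":  (9,      11_300_000),
    "Thailand":    (13_246, 58_000_000),   # assumed population
    "South Korea": (25,     44_000_000),   # assumed population
}

for name, (cases, population) in countries.items():
    rate = 100_000 * cases / population
    print(f"{name}: {rate:.2f} reported AIDS cases per 100,000")
```

On this crude scale the Malawi rate exceeds the Madagascar rate by more than three orders of magnitude, mirroring the difference in HIV seroprevalence described above; reporting completeness of course varies by country, so these figures understate the true burden everywhere.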
Compelling evidence supporting a cause-and-effect relationship between HIV and AIDS has come from studies of transfusion recipients with AIDS who have received blood from at least one donor with HIV infection. In the earliest such study (before the discovery of HIV), seven patients with transfusion-acquired AIDS were shown to have received a total of 99 units of blood components. At least one donor to each patient was identified who had AIDS-like symptoms or immunosuppression (Curran et al., 1984).

With the identification of HIV and the development of serologic assays for the virus in 1984, it became possible to trace infected donors (Sarngadharan et al., 1984). The first reports of donor-recipient pairs appeared later that year (Feorino et al., 1984; Groopman et al., 1984). In one instance, HIV was isolated from both donor and recipient, and both had developed AIDS (Feorino et al., 1984); in the other, the recipient was HIV antibody-positive and had developed AIDS, and the donor had culturable virus in his blood and was in a group considered to be at high risk for AIDS (Groopman et al., 1984). Molecular analysis of HIV isolates from these donor-recipient pairs found that the viruses were slightly different but much more similar than would be expected by chance alone (Feorino et al., 1984; Groopman et al., 1984).

In a subsequent study of patients with transfusion-acquired AIDS, 28 of 28 individuals had antibodies to HIV, and each had received blood from an HIV-infected donor (Jaffe et al., 1985b). Similar results were reported from a set of 18 patients with transfusion-acquired AIDS, each of whom had received blood from an HIV-infected donor (McDougal et al., 1985b). Fifteen of the 18 donors in this study had low CD4+/CD8+ T cell ratios, an immune defect seen in pre-AIDS and AIDS patients.

Another group studied seropositive recipients of blood from 112 donors in whom AIDS later developed and from 31 donors who were later found to be positive for HIV antibody. Of 101 seropositive recipients followed for a median of 55 months after infection, 43 developed AIDS (Ward et al., 1989). More recently, Australian investigators identified 25 individuals with transfusion-acquired HIV whose infection could be traced to eight individuals who donated blood between 1980 and 1985, and subsequently developed AIDS. By 1992, nine of the 25 HIV-infected blood recipients had developed AIDS, with progression to AIDS and death more rapid among the recipients who received blood from the faster-progressing donors (Ashton et al., 1994).

As noted above, HIV has been detected in stored blood samples taken from hemophiliac patients in the United States as early as 1978 (Aronson, 1993). By 1984, 55 to 78 percent of U.S. hemophilic patients were HIV-infected (Lederman et al., 1985; Andes et al., 1989). A more recent survey found 46 percent of 9,496 clotting-factor recipients to be HIV-infected, only 9 of whom had a definitive date of seroconversion subsequent to April 1987 (Fricke et al., 1992). By Dec. 31, 1994, 3,863 individuals in the United States with hemophilia or coagulation disorders had been diagnosed with AIDS (CDC, 1995a).

The impact of HIV on the life expectancy of hemophiliacs has been dramatic. In a retrospective study of mortality among 701 hemophilic patients in the United States, median life expectancy for males with hemophilia increased from 40.9 years at the beginning of the century (1900-1920) to a high of 68 years after the introduction of factor therapy (1971 to 1980).
In the era of AIDS (1981 to 1990), life expectancy declined to 49 years (Jones and Ratnoff, 1991) (Figure 6).

Fig. 6. The changing prognosis of classic hemophilia. After improvement in survival from 1971-1980 (corresponding to widespread treatment with lyophilized concentrates of Factor VIII), mortality among individuals with Factor VIII deficiency is now increasing, due in large measure to AIDS among people who became HIV-infected during transfusions between 1978 and 1985. Reference: Jones and Ratnoff, 1991.

Another analysis found that the death rate for individuals with hemophilia A in the United States rose three-fold between the periods 1979-1981 and 1987-1989. Median age at death decreased from 57 years in 1979-1981 to 40 years in 1987-1989 (Chorba et al., 1994).

In the United Kingdom, 6,278 males diagnosed with hemophilia were living during the period 1977-91. During 1979-86, 1,227 were infected with HIV during transfusion therapy. Among 2,448 individuals with severe hemophilia, the annual death rate was stable at 8 per 1,000 during 1977-84; during 1985-92 death rates remained at 8 per 1,000 among HIV-seronegative persons with severe hemophilia but rose steeply in those who were seropositive, reaching 81 per 1,000 in 1991-92. Among 3,830 with mild or moderate hemophilia, the pattern was similar, with an initial death rate of 4 per 1,000 in 1977-84, rising to 85 per 1,000 in 1991-92 among seropositive individuals (Darby et al., 1995).

In a British cohort of hemophiliacs infected with HIV between 1979 and 1985 and followed prospectively, 50 of 111 patients had died by the end of 1994, 43 after a diagnosis of AIDS. Only eight of the 61 living patients had CD4+ T cell counts above 500/mm3 (Lee et al., 1995).

Newborn infants have no behavioral risk factors, yet 6,209 children in the United States had developed AIDS through Dec. 31, 1994 (CDC, 1995a). Studies have consistently shown that of infants born to HIV-infected mothers, only the 15-40 percent of infants who become HIV-infected before or during birth go on to develop immunosuppression and AIDS, while babies who are not HIV-infected do not develop AIDS (Katz, 1989; d'Arminio et al., 1990; Prober and Gershon, 1991; European Collaborative Study, 1991; Lambert et al., 1990; Lindgren et al. 1991; Andiman et al., 1990; Johnson et al., 1989; Rogers et al., 1989; Hutto et al., 1991). Moreover, in those infants who do acquire HIV and develop AIDS, the rate of disease progression varies directly with the severity of the disease in the mother at the time of delivery (European Collaborative Study, 1992; Blanche et al., 1994).

Almost all infants born to seropositive mothers have detectable HIV antibody, which may persist for as long as 15 months. In most cases, the presence of this antibody does not represent actual infection with HIV, but is antibody from the HIV-infected mother that diffuses across the placenta. In a French study of 22 infants born to HIV-infected mothers, seven babies had antibodies to HIV after one year and all developed AIDS. In these seven infants, the presence of HIV antibodies marked actual infection with HIV, not merely antibodies acquired from the mother. The other 15 children showed a complete loss of maternally acquired HIV antibodies, were not actually infected, and remained healthy. Of the babies who developed AIDS, virus was found in four of four infants tested. HIV was not found in the 15 children who remained healthy (Douard et al., 1989; Gallo, 1991).
In the European Collaborative Study, children born to HIV-seropositive mothers are followed from birth in 10 European centers. A majority of the mothers have a history of injection drug use. A recent report showed that none of the 343 children who had lost maternally transferred HIV antibodies (i.e., they were truly HIV-negative) had developed AIDS or persistent immune deficiency. In contrast, among 64 children who were truly HIV-infected (i.e., they remained HIV antibody-positive), 30 percent presented within six months of age with AIDS, or with oral candidiasis that was rapidly followed by the onset of AIDS. By their first birthday, 17 percent had died of HIV-related diseases (European Collaborative Study, 1991).

In a multicenter study in Bangkok, Thailand, 105 children born to HIV-infected mothers were recently evaluated at 6 months of age (Chearskul et al., 1994). Of 27 infants determined to be HIV-infected by polymerase chain reaction, 24 developed HIV-related symptoms, including six who developed CDC-defined AIDS and four who died with conditions clinically consistent with AIDS. Among 77 exposed but uninfected infants, no deaths occurred. In a study of 481 infants in Haiti, the survival rate at 18 months was 41 percent for HIV-infected infants, 84 percent among uninfected infants born to seropositive women, and 95 percent among infants born to seronegative women (Boulos et al., 1994).

Investigators have also reported cases of HIV-infected mothers with twins discordant for HIV infection, in which the HIV-infected child developed AIDS while the other child remained clinically and immunologically normal (Park et al., 1987; Menez-Bautista et al., 1986; Thomas et al., 1990; Young et al., 1990; Barlow and Mok, 1993; Guerrero Vazquez et al., 1993).

Other researchers have used molecular epidemiology to find a single source of HIV for an outbreak of pediatric AIDS cases in Russia. In that country between 1988 and 1990, over 250 children were infected with HIV after exposure to non-sterile needles. By June 1994, 43 of these children had died of AIDS (Irova et al., 1993). In a recent report on 22 of these children from two hospitals, 12 had developed AIDS. Molecular analysis of HIV isolates from all 22 children showed the isolates to be very closely related, confirming epidemiological data that these two outbreaks resulted from a single source: an infant born to an HIV-infected mother whose husband was infected in central Africa (Bobkov et al., 1994).

Skeptics of the role of HIV in AIDS have espoused a "risk-AIDS" or a "drug-AIDS" hypothesis (Duesberg, 1987-1994), asserting at different times that factors such as promiscuous homosexual activity; repeated venereal infections and antibiotic treatments; the use of recreational drugs such as nitrite inhalants, cocaine and heroin; immunosuppressive medical procedures; and treatment with the drug AZT are responsible for the epidemic of AIDS. Such arguments have been repeatedly contradicted. Compelling evidence against the risk-AIDS hypothesis has come from cohort studies of high-risk groups in which all individuals with AIDS-related conditions are HIV-antibody positive, while matched, HIV-antibody negative controls do not develop AIDS or immunosuppression, despite engaging in high-risk behaviors. In a prospectively studied cohort in Vancouver (Schechter et al., 1993a), 715 homosexual men were followed for a median of 8.6 years. Among 365 HIV-positive individuals, 136 developed AIDS.
No AIDS-defining illnesses occurred among 350 HIV-negative men despite the fact that these men reported appreciable levels of nitrite use, other recreational drug use, and frequent receptive anal intercourse. The average rate of CD4+ T cell decline was 50 cells/mm3 per year in the HIV-positive men, while the HIV-negative men showed no decline. Significantly, the decline of CD4+ T cell counts in HIV-positive men and the stability of CD4+ T cell counts in HIV-negative men were apparent whether or not nitrite inhalants were used. There were 101 deaths among the HIV-seropositive men, including six unrelated to HIV infection. In the seronegative group, only two deaths occurred: one heart attack and one suicide.

In this study, lifetime prevalences of risk behaviors were similar in the 136 HIV-seropositive men who developed AIDS and in the 226 HIV-seropositive men who did not develop AIDS: use of nitrite inhalants, 88 percent in both groups; use of other illicit drugs, 75 percent and 80 percent, respectively; more than 25 percent of sexual encounters involving receptive anal intercourse, 78 percent and 82 percent, respectively. Among HIV-seronegative men (none of whom developed AIDS), the lifetime prevalences of these behaviors were somewhat lower, but substantial: 56 percent, 74 percent and 58 percent, respectively.

Similar results were reported from the San Francisco Men's Health Study, a cohort of single men recruited in San Francisco in 1984 without regard to sexual preference, lifestyle or serostatus (Ascher et al., 1993a). During 96 months of follow-up, 215 cases of AIDS had occurred among 445 HIV-antibody positive homosexual men, 174 of whom had died. Among 367 antibody-negative homosexual men and 214 antibody-negative heterosexual men, no AIDS cases and eight deaths unrelated to AIDS-defining conditions were observed. The authors found no overall effect of drug consumption, including nitrites, on the development of Kaposi's sarcoma or other AIDS-defining conditions, nor an effect of the extent of the participants' drug use on these conditions. A consistent loss of CD4+ T cells was limited to HIV-positive subjects, among whom there was no discernible difference in CD4+ T cell counts related to drug-taking behavior. Among HIV-seronegative men, moderate or heavy drug users had higher CD4+ T cell counts than non-users.

Observational studies of HIV-infected individuals have found that drug use does not accelerate progression to AIDS (Kaslow et al., 1989; Coates et al., 1990; Lifson et al., 1990; Robertson et al., 1990). In a Dutch cohort of HIV-seropositive homosexual men, no significant differences in sexual behavior or use of cannabis, alcohol, tobacco, nitrite inhalants, LSD or amphetamines were found between men who remained asymptomatic for long periods and those who progressed to AIDS (Keet et al., 1994). Another study, of five cohorts of homosexual men for whom dates of seroconversion were well-documented, found no association between HIV disease progression and history of sexually transmitted diseases, number of sexual partners, use of AZT, alcohol, tobacco or recreational drugs (Veugelers et al., 1994). Similarly, in the San Francisco City Clinic Cohort, recruited in the late 1970s and early 1980s in conjunction with hepatitis B studies, no consistent differences in exposure to recreational drugs or sexually transmitted diseases were seen between HIV-infected men who progressed to AIDS and those who remained healthy (Buchbinder et al., 1994).
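The Vancouver figures above can be arranged as a simple two-by-two table. The sketch below is illustrative only: the published analysis used time-to-event methods, and Fisher's exact test is applied here merely as one conventional way to show how extreme the contrast in crude AIDS incidence is; the counts are taken from the text.

```python
# Illustrative only: crude 2x2 summary of the Vancouver cohort counts
# quoted above (Schechter et al., 1993a). The published study used
# survival analysis; this exact test on raw counts is for illustration.
from scipy.stats import fisher_exact

n_pos, aids_pos = 365, 136   # HIV-seropositive men, AIDS cases
n_neg, aids_neg = 350, 0     # HIV-seronegative men, AIDS cases

table = [[aids_pos, n_pos - aids_pos],
         [aids_neg, n_neg - aids_neg]]
_, p_value = fisher_exact(table)

print(f"Crude AIDS incidence, HIV-seropositive: {aids_pos / n_pos:.1%}")
print(f"Crude AIDS incidence, HIV-seronegative: {aids_neg / n_neg:.1%}")
print(f"Fisher exact test p-value: {p_value:.1e}")
```

A crude incidence of roughly 37 percent among seropositive men against zero among seronegative men with comparable risk behaviors is the core of the cohort argument summarized above.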
Because many children with AIDS are born to mothers who abuse recreational drugs (Novick and Rubinstein, 1987; European Collaborative Study, 1991), it has been postulated that the mothers' drug consumption is responsible for children developing AIDS (Duesberg, 1987-1994). This theory is contradicted by numerous reports of infants with AIDS born to women infected with HIV through heterosexual contact or transfusions who do not use drugs (CDC, 1995a). As noted above, the only factor that predicts whether a child will develop AIDS is whether he or she is infected with HIV, not maternal drug use.

Central to the "risk-AIDS" hypothesis is the notion that chronic injection drug use causes AIDS (Duesberg, 1992), a view that is contradicted by numerous studies. Although some evidence suggests injection drug use can cause certain immunologic abnormalities, such as reduction in natural killer (NK) cell activity (reviewed in Kreek, 1990), the specific immune deficit that leads to AIDS--a progressive reduction of CD4+ T cells resulting in persistent CD4+ T lymphocytopenia--is rare in HIV-seronegative injection drug users in the absence of other immunosuppressive conditions (Des Jarlais et al., 1993; Weiss et al., 1992).

In a survey of 229 HIV-seronegative injection drug users in New York City, mean CD4+ T cell counts of the group were consistently over 1000/mm3 (Des Jarlais et al., 1993). Only two individuals had two CD4+ T cell measurements of fewer than 300/mm3, one of whom died with cardiac disease and non-Hodgkin's lymphoma listed as the cause of death. In a study of 180 HIV-seronegative injection drug users in New Jersey, the participants' average CD4+ T cell count was 1169/mm3 (Weiss et al., 1992). Two of these individuals, both with generalized lymphocytopenia, had CD4+ T cell counts less than 300/mm3. In the MACS, median CD4+ T cell counts of 63 HIV-seronegative injection drug users rose from 1061/mm3 to 1124/mm3 in a 15 to 21 month follow-up period (Margolick et al., 1992). In a cross-sectional study, 11 HIV-seronegative, long-term heroin addicts had mean CD4+ T cell counts of 1500/mm3, while 11 healthy controls had CD4+ T cell counts of 820 cells/mm3 (Novick et al., 1989).

Recent data also refute the notion that a certain lifetime dosage of injection drugs is sufficient to cause AIDS in HIV-seronegative individuals. In a Dutch study, investigators compared 86 HIV-seronegative individuals who had been injecting drugs for a mean of 7.6 years with 70 HIV-seropositive people who had injected drugs for a mean of 9.1 years. Upon enrollment in 1989, CD4+ T cell counts were 914/mm3 in the HIV-seronegative group, and 395/mm3 in the seropositive group. By 1994, there were 25 deaths attributable to AIDS-defining conditions in the seropositive group; among HIV-seronegative individuals, eight deaths occurred, none due to AIDS-defining diseases (Cohen, 1994a).

Excess mortality among HIV-infected injection drug users as compared to HIV-seronegative users has also been observed by other investigators. In a prospective Italian study of 2,431 injection drug users enrolled in drug treatment programs from 1985 to 1991, HIV-seropositive individuals were 4.5 times more likely to die than HIV-seronegative subjects (Zaccarelli et al., 1994). No deaths due to AIDS-defining conditions were seen among 1,661 HIV-seronegative individuals, 41 of whom died of other conditions, predominantly overdose, liver disease and accidents.
Among 770 individuals who were HIV-seropositive at study entry or who seroconverted during the study period, 89 died of AIDS-related conditions and 52 of other conditions.

In HIV-seropositive individuals, a number of investigators have found no statistical association between injection drug use and decline of CD4+ T cell counts (Galli et al., 1989, 1991; Schoenbaum et al., 1989; Margolick et al., 1992, 1994; Montella et al., 1992; Alcabes et al., 1993b, 1994; Galai et al., 1995), nor a difference in disease progression between active versus former users of injection drugs (Weber et al., 1990; Galli et al., 1991; Montella et al., 1992; Italian Seroconversion Study, 1992). Taken together, these studies suggest that any negative effects of injection drugs on CD4+ T cell levels are limited and may explain why many investigators have found that HIV-seropositive injection drug users have rates of disease progression that are similar to other HIV-infected individuals (Rezza et al., 1990; Montella et al., 1992; Galli et al., 1989; Selwyn et al., 1992; Munoz et al., 1992; Italian Seroconversion Study, 1992; MAP Workshop, 1993; Pezzotti et al., 1992; Margolick et al., 1992, 1994; Alcabes, 1993b, 1994; Galai et al., 1995).

It has been asserted ". . . in America, only promiscuity aided by aphrodisiac and psychoactive drugs, practiced mostly by 20 to 40 year-old male homosexuals and some heterosexuals, seems to correlate with AIDS diseases" (Duesberg, 1991). Even a cursory review of history provides evidence to the contrary: such behaviors have existed for decades--in some cases centuries--and have increased only in a relative sense in recent years, if at all, whereas AIDS clearly is a new phenomenon.

If promiscuity were a cause of AIDS, one would have expected cases to have occurred among prostitutes (male or female) prior to 1978. Reports of such cases are lacking, even though prostitution has been present in most if not all cultures throughout history. In this country, trends in gonorrheal infections suggest that extramarital sexual activity was extensive in the pre-AIDS era. Cases of gonorrhea in the United States peaked at approximately 1 million in 1978; between 250,000 and 530,000 cases were reported each year in the 1960s, approximately 250,000 cases each year in the 1950s, and between 175,000 and 380,000 cases annually in the 1940s (CDC, 1987c, 1993b). Despite the frequency of sexually transmitted diseases, only a handful of documented cases of AIDS in the United States prior to 1978 have been reported.

Historians, archaeologists and sociologists have documented extensive homosexual activity dating from the ancient Greeks to the well-established homosexual subculture in the United States in the 20th century (Weinberg and Williams, 1974; Gilbert, 1980-81; Saghir and Robins, 1973; Reinisch et al., 1990; Doll et al., 1990; Katz, 1992; Friedman and Downey, 1994). Depictions of anal intercourse, both male and female, can be found in the art and literature of numerous cultures on all inhabited continents (Reinisch et al., 1990). In the 1940s, Kinsey et al. reported that 37 percent of all American males surveyed had at least some overt homosexual experience to the point of orgasm between adolescence and old age and that 10 percent of men were exclusively or predominantly homosexual between the ages of 16 and 55 (Kinsey et al., 1948). More recent surveys have found that 2 to 5 percent of men are homosexual or bisexual (reviewed in Friedman and Downey, 1994; Seidman and Rieder, 1994; Laumann, 1994).
Many homosexuals had multiple sexual partners in the pre-AIDS era: a 1969 survey found that more than 40 percent of white homosexual males and one-third of black homosexual males had at least 500 partners in their lifetime, and an additional one-fourth reported between 100 and 500 partners (Bell and Weinberg, 1978). A majority of these men reported that more than half their partners had been strangers before the sexual encounters (Bell and Weinberg, 1978). Further evidence of extensive homosexual behavior in the years preceding the AIDS epidemic comes from reports of numerous cases of rectal gonorrheal and anal herpes simplex virus infections among men (Jefferiss, 1956; Scott and Stone, 1966; Pariser and Marino, 1970; Owen and Hill, 1972; British Cooperative Clinical Group, 1973; Jacobs, 1976; Judson et al., 1977; Merino and Richards, 1977; McMillan and Young, 1978).

A temporal association between the onset of extensive use of recreational drugs and the AIDS epidemic is also lacking. The widespread use of opiates in the United States has existed since the middle of the 19th century (Courtwright, 1982); as many as 313,000 Americans were addicted to opium and morphine prior to 1914. Heroin use spread throughout the country in the 1920s and 1930s (Courtwright, 1982), and the total number of active heroin users peaked at about 626,000 in 1971 (Greene et al., 1975; Friedland, 1989). Opiates were initially administered by oral or inhalation routes, but by the 1920s addicts began to inject heroin directly into their veins (Courtwright, 1982). In 1940, intravenous use of opiates was seen in 80 percent of men admitted to a large addiction research center in Kentucky (Friedland, 1989).

While cocaine use increased markedly during the 1970s (Kozel and Adams, 1986), the use of the drug, frequently with morphine, is well-documented in the United States since the late 19th century (Dale, 1903; Ashley, 1975; Spotts and Shontz, 1980). For example, a survey in 1902 reported that only 3 to 8 percent of the cocaine sold in New York, Boston and other cities went into the practice of medicine or dentistry (Spotts and Shontz, 1980). After a period of relative obscurity, cocaine became increasingly popular in the late 1950s and 1960s. Over 70 percent of 1,100 addicts at the addiction research center in Kentucky in 1968 and 1969 reported use or abuse of cocaine (Chambers, 1974).

The recreational use of nitrite inhalants ("poppers") also predates the AIDS epidemic. Reports of the widespread use of these drugs by young men in the 1960s were the impetus for the reinstatement by the Food and Drug Administration of the prescription requirement for amyl nitrite in 1968 (Israelstam et al., 1978; Haverkos and Dougherty, 1988). Since the early years of the AIDS epidemic, the use of nitrite inhalants has declined dramatically among homosexual men, yet the number of AIDS cases continues to increase (Ostrow et al., 1990, 1993; Lau et al., 1992). In the general population, the number of individuals aged 25 to 44 years reporting current use of marijuana, cocaine, inhalants, hallucinogens and cigarettes declined between 1974 and 1992, while the AIDS epidemic worsened (Substance Abuse and Mental Health Services Administration, 1994).

Although some individuals maintain that treatment with zidovudine (AZT) has compounded the AIDS epidemic (Duesberg, 1992), published reports of both placebo-controlled clinical trials and observational studies provide data to the contrary (Table 4).
[Table 4 (modified from Concorde Coordinating Committee, 1994) summarizes placebo-controlled trials of zidovudine. The Concorde trial and VA 298 compared immediate and deferred use of zidovudine (ZDV); the other trials compared ZDV and placebo. In ACTG 019, the original treatment groups were placebo, 500 mg ZDV/day and 1,500 mg ZDV/day; after the unblinding of the original randomized trial in 1989, subjects in each original arm were offered a daily dose of 500 mg open-label zidovudine.]

In patients with symptomatic HIV disease, for whom a beneficial effect is measured in months, AZT appears to slow disease progression and prolong life, according to double-blind, placebo-controlled clinical studies (reviewed in Sande et al., 1993; McLeod and Hammer, 1992; Volberding and Graham, 1994). A clinical trial known as BW 002 compared AZT with placebo in 282 patients with AIDS or advanced signs or symptoms of HIV disease. In this study, which led to the approval of AZT by the FDA, only one of 145 patients treated with AZT died during a six-month period, compared with 19 of 137 placebo recipients. Opportunistic infections occurred in 24 AZT recipients and 45 placebo recipients. In addition to reducing mortality, AZT reduced the frequency and severity of AIDS-associated opportunistic infections, improved body weight, prevented deterioration in Karnofsky performance score, and increased counts of CD4+ T lymphocytes in the peripheral blood (Fischl et al., 1987; Richman et al., 1987). Continued follow-up of 229 of these patients showed that the survival benefit of AZT extended to at least 21 months after the initiation of therapy; survival in the original treatment group was 57.6 percent at that time, whereas survival among members of the original placebo group was 51.5 percent at nine months (Richman and Andrews, 1988; Fischl et al., 1989).

In another placebo-controlled study, known as ACTG 016, which enrolled 711 symptomatic HIV-infected patients, those taking AZT were less likely to experience disease progression than those on placebo during a median study period of 11 months (Fischl et al., 1990). The benefit was confined to patients who entered the trial with CD4+ T cell counts between 200 and 500 cells/mm3; no difference in disease progression was noted among participants who began the trial with counts greater than 500/mm3. A Veterans Affairs study (VA 298) of 338 individuals with early symptoms of HIV disease and CD4+ T cell counts between 200 and 500 cells/mm3 found that immediate therapy significantly delayed disease progression compared with deferred therapy, but did not lengthen (or shorten) survival after an average study period of more than two years (Hamilton et al., 1992).

Among asymptomatic HIV-infected individuals, several placebo-controlled clinical trials suggest that AZT can delay disease progression for 12 to 24 months but ultimately does not increase survival. Significantly, long-term follow-up of persons participating in these trials, although not showing prolonged benefit of AZT, has never indicated that the drug increases disease progression or mortality (reviewed in McLeod and Hammer, 1992; Sande et al., 1993; Volberding and Graham, 1994). The lack of excess AIDS cases and deaths in the AZT arms of these large trials effectively rebuts the argument that AZT causes AIDS.
During a 4.5-year follow-up period (mean 2.6 years) of a trial known as ACTG 019, no differences were seen in overall survival between AZT and placebo groups among 1,565 asymptomatic patients entering the study with fewer than 500 CD4+ T cells/mm3 (Volberding et al., 1994). In that study, AZT was superior to placebo in delaying progression to AIDS or advanced ARC for approximately one year, and a more prolonged benefit was seen among a subset of patients.

The Concorde study in Europe enrolled 1,749 asymptomatic patients with CD4+ T cell counts less than 500/mm3. In that study, no statistically significant differences in progression to advanced disease were observed after three years between individuals taking AZT immediately and those who deferred AZT therapy or did not take the drug (Concorde Coordinating Committee, 1994). However, the rate of progression to death, AIDS or severe ARC was slower in the "immediate" AZT group during the first year of therapy. Although the Concorde study did not show a significant benefit over time with the early use of AZT, it clearly demonstrated that AZT was not harmful to the patients in the "immediate" AZT group as compared with the "deferred" AZT group. A European-Australian study (EACG 020) of 993 patients with CD4+ T cell counts greater than 400/mm3 showed no differences between the AZT and placebo arms of the trial during a median study period of 94 weeks, although AZT did delay progression to certain clinical and immunological endpoints for up to three years (Cooper et al., 1993). Both this study and the Concorde study reported little severe AZT-related hematologic toxicity at doses of 1,000 mg/day, twice the recommended daily dose in the United States.

Uncontrolled studies have found increased survival and/or reduced frequency of opportunistic infections in patients with HIV disease and AIDS who were treated with AZT or other antiretrovirals (Creagh-Kirk et al., 1988; Moore et al., 1991a,b; Ragni et al., 1992; Schinaia et al., 1991; Koblin et al., 1992; Graham et al., 1991, 1992, 1993; Longini, 1993; Vella et al., 1992, 1994; Saah et al., 1994; Bacellar et al., 1994). In the Multicenter AIDS Cohort Study, for example, HIV-infected individuals treated with AZT had significantly reduced mortality and progression to AIDS at follow-up intervals of six, 12, 18 and 24 months compared with those not taking AZT, even after adjusting for health status, CD4+ T cell counts and PCP prophylaxis (Graham et al., 1991, 1992).

In addition, several cohort studies show that the life expectancy of individuals with AIDS has increased since the use of AZT became common in 1986-87. Among 362 homosexual men in hepatitis B vaccine trial cohorts in New York City, San Francisco and Amsterdam, the time from seroconversion to death, a period not influenced by variations in diagnosing AIDS, has lengthened slightly in recent years (Hessol et al., 1994). In a Dutch study of 975 males and females with HIV infection, median survival with AIDS increased from nine months in 1982-1985 to 26 months in 1990 (Bindels et al., 1994). Even taking into consideration the benefits of improved PCP prophylaxis and treatment, if AZT were contributing to or causing disease, one would expect a decrease in survival figures rather than an increase that parallels the use of AZT.

In an analysis from the San Francisco Men's Health Study, the investigators note that 169 (73 percent) of 233 AIDS patients had been treated with AZT at one time or another.
However, 90 (53 percent of the 169) were diagnosed with clinical AIDS before beginning AZT treatment, and another 51 (30 percent of the 169) had CD4+ T cell counts lower than 200/mm3 before initiation of AZT treatment (Ascher et al., 1995). The authors conclude, "These data are not consistent with the hypothesis of a causal role for AZT in AIDS."

It has been argued that HIV cannot cause AIDS because the body develops HIV-specific antibodies following primary infection (Duesberg, 1992). This reasoning ignores numerous examples of viruses other than HIV that can be pathogenic after evidence of immunity appears (Oldstone, 1989). Primary poliovirus infection is a classic example: high titers of neutralizing antibodies develop in all infected individuals, yet a small percentage of infected individuals subsequently develop paralysis (Kurth, 1990). Measles virus may persist for years in brain cells, eventually causing a chronic neurological disease despite the presence of antibodies (Gershon, 1990). Viruses such as cytomegalovirus, herpes simplex and varicella zoster may be activated after years of latency even in the presence of abundant antibodies (Weiss and Jaffe, 1990). Lentiviruses with long and variable latency periods, such as visna virus in sheep, cause central nervous system damage even after the production of specific neutralizing antibodies (Haase, 1990). Furthermore, it is now well documented that HIV can mutate rapidly to circumvent immunologic control of its replication.

It has been argued that AIDS among transfusion recipients is due to the underlying diseases that necessitated the transfusions, rather than to HIV (Duesberg, 1991). This theory is contradicted by a report from the Transfusion Safety Study Group, which compared HIV-negative and HIV-positive blood recipients who had been given transfusions for similar diseases. Approximately three years after the transfusions, the mean CD4+ T cell count in 64 HIV-negative recipients was 850/mm3, while 111 HIV-seropositive individuals had a mean CD4+ T cell count of 375/mm3 (Donegan et al., 1990). By 1993, there were 37 cases of AIDS in the HIV-infected group, but not a single AIDS-defining illness in the HIV-seronegative transfusion recipients (Cohen, 1994d). People have received blood transfusions for decades; however, as discussed above, AIDS-like symptoms were extraordinarily rare before the appearance of HIV. Recent surveys have shown that AIDS-like symptoms remain very rare among transfusion recipients who are HIV-seronegative and among their sexual contacts. In one study of transfusion safety, no AIDS-defining illnesses were seen among 807 HIV-negative recipients of blood or blood products, or among 947 long-term sexual or household contacts of these individuals (Aledort et al., 1993). In addition, through 1994, the CDC had received reports of 628 cases of AIDS in individuals whose primary risk factor was sex with an HIV-infected transfusion recipient (CDC, 1995a), a finding not explainable by the "risk-AIDS" hypothesis.

It has also been argued that cumulative exposure to foreign proteins in Factor VIII concentrates leads to CD4+ T cell depletion and AIDS in hemophiliacs (Duesberg, 1992). This view is contradicted by several large studies.
Among HIV-seronegative patients with hemophilia A enrolled in the Transfusion Safety Study, no significant differences in CD4+ T cell counts were noted between 79 patients with no or minimal factor treatment and 53 patients with the largest amounts of lifetime treatment (cumulative totals in the latter group ranged from 100,000 to 2,000,000 U over two years) (Hassett et al., 1993). Although the CD4+ T cell counts in the low- and high-treatment groups (756/mm3 and 718/mm3, respectively) were 20 to 25 percent lower than those of controls, such levels are still within the normal range. In a report from the Multicenter Hemophilia Cohort Study, the mean CD4+ T cell count among 161 HIV-seronegative hemophiliacs was 784/mm3; among 715 HIV-seropositive hemophiliacs, the mean CD4+ T cell count was 253/mm3 (Lederman et al., 1995). In another study, no instances of AIDS-defining illnesses were seen among 402 HIV-seronegative hemophiliacs treated with factor therapy or in 83 hemophiliacs who received no treatment subsequent to 1979 (Aledort et al., 1993; Mosley et al., 1993). In a retrospective study of patients with severe hemophilia A, the rate of CD4+ T cell loss was 31.4 cells/six months for 41 HIV-seropositive individuals without AIDS and 49.7 cells/six months for 14 HIV-seropositive individuals with AIDS. In contrast, among 28 HIV-seronegative individuals, CD4+ T cell counts increased at a rate of 13.1 cells/six months (Becherer et al., 1990). In a study of children and adolescents with hemophilia, the median CD4+ T cell count of 126 HIV-seronegative individuals was 895/mm3 at study entry; no individuals had CD4+ T cell counts below 200/mm3. In contrast, 26 percent of seropositive children had CD4+ T cell counts of less than 200/mm3; the mean CD4+ T cell count for seropositive children was 423/mm3 (Jason et al., 1994).

Although some reports have suggested that high-purity Factor VIII concentrates are associated with a slower rate of CD4+ T cell decline in HIV-infected hemophiliacs than products of low and intermediate purity (Hilgartner et al., 1993; Goldsmith et al., 1991; de Biasi et al., 1991), other studies have shown no such benefit (Mannucci et al., 1992; Gjerset et al., 1994). In a study of 525 HIV-infected hemophiliacs, Transfusion Safety Study investigators found that neither the purity nor the amount of Factor VIII therapy had a deleterious effect on CD4+ T cell counts (Gjerset et al., 1994). Similarly, the Multicenter Hemophilia Cohort Study found no association between the cumulative dose of plasma concentrate and the incidence of AIDS among 242 HIV-infected hemophiliacs, and thus "no support for cofactor hypotheses involving either antigen stimulation or inoculum size" (Goedert et al., 1989).

In addition to the evidence from the cohort studies cited above, it should be noted that 10 to 20 percent of wives and sex partners of male HIV-positive hemophiliacs in the United States are also HIV-infected (Pitchenik et al., 1984; Kreiss et al., 1985; Peterman et al., 1988; Smiley et al., 1988; Dietrich and Boone, 1990; Lusher et al., 1991). Through December 1994, the CDC had received reports of 266 cases of AIDS in individuals who had sex with a person with hemophilia (CDC, 1995a). These data cannot be explained by a non-infectious theory of AIDS etiology.

Certain skeptics maintain that the distribution of AIDS cases casts doubt on HIV as the cause of the syndrome. They claim that infectious microbes are not gender-specific, yet relatively few people with AIDS are women (Duesberg, 1992).
In fact, the distribution of AIDS cases, whether in the United States or elsewhere in the world, invariably mirrors the prevalence of HIV in a population (U.S. Bureau of the Census, 1994). In the United States, HIV first appeared in populations of homosexual men and injection drug users, a majority of whom are male (Curran et al., 1988). Because HIV is spread primarily through sex or the exchange of HIV-contaminated needles during injection drug use, it is not surprising that a majority of U.S. AIDS cases have occurred in men. Increasingly, however, women are becoming HIV-infected, usually through the exchange of HIV-contaminated needles or sex with an HIV-infected male (Vermund, 1993b; CDC, 1995a). As the number of HIV-infected women has risen, so too has the number of female AIDS cases. In the United States, the proportion of AIDS cases among women increased from 7 percent in 1985 to 18 percent in 1994. AIDS is now the fourth leading cause of death among women aged 25 to 44 in the United States (CDC, 1994).

In Africa, HIV was first recognized in sexually active heterosexuals, and in some parts of Africa AIDS cases have occurred as frequently in women as in men (Quinn et al., 1986; Mann, 1992a). In Zambia, for example, the 29,734 AIDS cases reported to the WHO through October 20, 1993, were equally divided between males and females (WHO, 1995a,b).

One vocal skeptic of the role of HIV in AIDS argues that, in Africa, AIDS is nothing more than a new name for old diseases (Duesberg, 1991). It is true that the diseases that have come to be associated with AIDS in Africa--wasting, diarrheal diseases and TB--have long been severe burdens there. However, high rates of mortality from these diseases, formerly confined to the elderly and malnourished, are now common among HIV-infected young and middle-aged people (Essex, 1994).

In a recent study of more than 9,000 individuals in rural Uganda, people testing positive for HIV antibodies were 60 times as likely to die during the subsequent two-year observation period as were otherwise similar persons who tested negative (Mulder et al., 1994b). Large differences in mortality were also seen between HIV-seropositive and HIV-seronegative individuals in another large Ugandan cohort (Sewankambo et al., 1994). Findings elsewhere in Africa are similar. One study of 1,400 Rwandan women tested for HIV during pregnancy found that HIV-infected women were 20 times more likely to die in the two years following pregnancy than their HIV-negative counterparts (Lindan et al., 1992). In another study in Rwanda, 215 HIV-seropositive women and 216 HIV-seronegative women were followed prospectively for up to four years, during which time 21 women developed AIDS (WHO definition), all of them in the HIV-seropositive group. The mortality rate among the HIV-seropositive women was nine times higher than that among the HIV-seronegative women (Leroy et al., 1995). In Zaire, investigators found that families in which the mother was HIV-1-seropositive experienced a five- to 10-fold higher maternal, paternal and early childhood mortality rate than families in which the mother was HIV-seronegative (Ryder et al., 1994b). In another study in Zaire, infants with HIV infection were shown to have an 11-fold increased risk of death from diarrhea compared with uninfected children (Thea et al., 1993). In patients with pulmonary tuberculosis in Cote d'Ivoire, HIV-seropositive individuals were 17 times more likely to die than HIV-seronegative individuals (Ackah et al., 1995).
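(As an illustrative aside, not part of the original report: mortality comparisons of this kind are typically expressed as rate ratios over person-time of follow-up. The minimal Python sketch below shows the underlying arithmetic; the death counts and person-years are hypothetical placeholders chosen only to reproduce a 60-fold ratio like the Ugandan figure, not data from any study cited here.)

    # Illustrative sketch of a cohort mortality rate ratio.
    # All inputs are hypothetical placeholders, not data from the cited studies.

    def mortality_rate(deaths: int, person_years: float) -> float:
        """Crude mortality rate: deaths per person-year of follow-up."""
        return deaths / person_years

    # Hypothetical two-year follow-up: (deaths, person-years observed).
    rate_pos = mortality_rate(deaths=120, person_years=1_000.0)   # HIV-seropositive cohort
    rate_neg = mortality_rate(deaths=30, person_years=15_000.0)   # HIV-seronegative cohort

    print(f"HIV+ rate:  {rate_pos:.4f} deaths/person-year")   # 0.1200
    print(f"HIV- rate:  {rate_neg:.4f} deaths/person-year")   # 0.0020
    print(f"Rate ratio: {rate_pos / rate_neg:.0f}")           # 60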
The extraordinary death rates among HIV-infected individuals confirm that the virus is an important cause of premature mortality in Africa (Dondero and Curran, 1994).

HIV and AIDS have been repeatedly linked in time, place and population group; the appearance of HIV in the blood supply has preceded or coincided with the occurrence of AIDS cases in every country and region where AIDS has been noted. Among individuals without HIV, AIDS-like symptoms are extraordinarily rare, even in populations with many AIDS cases. Individuals as different as homosexual men, elderly transfusion recipients, heterosexual women, drug-using heterosexual men and infants have all developed AIDS with only one common denominator: infection with HIV. Laboratory workers accidentally exposed to highly concentrated HIV and health care workers exposed to HIV-infected blood have developed immunosuppression and AIDS with no other risk factor for immune dysfunction. Scientists have now used PCR to find HIV in virtually every patient with AIDS and to show that HIV is present in large and increasing amounts even in the pre-AIDS stages of HIV disease. Researchers also have demonstrated a correlation between the amount of HIV in the body and progression of the aberrant immunologic processes seen in people with AIDS.

Despite this plethora of evidence, the notion that HIV does not cause AIDS continues to find a wide audience in the popular press, with a potential negative impact on HIV-infected individuals and on public health efforts to control the epidemic. HIV-infected individuals may be convinced to forego anti-HIV treatments that can forestall the onset of the serious infections and malignancies of AIDS (Edelman et al., 1991). Pregnant HIV-infected women may dismiss the option of taking AZT, which can reduce the likelihood of transmission of HIV from mother to infant (Connor et al., 1994; Boyer et al., 1994). People may be dissuaded from being tested for HIV, thereby missing the opportunity, early in the course of disease, for counseling as well as for treatment with drugs to prevent AIDS-related infections such as PCP. Such prophylactic measures prolong survival and improve the quality of life of HIV-infected individuals (CDC, 1992b). Most troubling is the prospect that individuals will discount the threat of HIV and continue to engage in risky sexual behavior and needle sharing. If public health messages on AIDS prevention are diluted by the misconception that HIV is not responsible for AIDS, otherwise preventable cases of HIV infection and AIDS may occur, adding to the global tragedy of the epidemic.

References

Ackah AN, Coulibaly D, Digbeu H, Diallo K, et al. Response to treatment, mortality, and CD4 lymphocyte counts in HIV-infected persons with tuberculosis in Abidjan, Cote d'Ivoire. Lancet 1995;345(8950):607-10.
Alcabes P, Munoz A, Vlahov D, Friedland GH. Incubation period of human immunodeficiency virus. Epidemiol Rev 1993a;15(2):303-18.
Alcabes P, Schoenbaum EE, Klein RS. Correlates of the rate of decline of CD4+ lymphocytes among injection drug users infected with the human immunodeficiency virus. Am J Epidemiol 1993b;137(9):989-1000.
Alcabes P, Munoz A, Vlahov D, Friedland G. Maturity of human immunodeficiency virus infection and incubation period of acquired immunodeficiency syndrome in injecting drug users. Ann Epidemiol 1994;4(1):17-26.
Aldrovandi GM, Feuer G, Gao L, Jamieson B, et al. The SCID-hu mouse as a model for HIV-1 infection. Nature 1993;363(6431):732-6.
Aledort LM, Operskalski EA, Dietrich SL, Koerper MA, et al. Low CD4+ counts in a study of transfusion safety. N Engl J Med 1993;328(6):441-2.
Allain JP, Laurian Y, Paul DA, Verroust F, et al. Long-term evaluation of HIV antigen and antibodies to p24 and gp41 in patients with hemophilia. Potential clinical importance. N Engl J Med 1987;317(18):1114-21.
AMA (American Medical Association), Council on Scientific Affairs. The acquired immunodeficiency syndrome (commentary). JAMA 1984;252(15):2037-43.
Ameisen JC, Capron A. Cell dysfunction and depletion in AIDS: the programmed cell death hypothesis. Immunol Today 1991;12(4):102-5.
Ammann AJ, Abrams D, Conant M, Chudwin D, et al. Acquired immune dysfunction in homosexual men: immunologic profiles. Clin Immunol Immunopathol 1983a;27(3):315-25.
Ammann AJ, Cowan MJ, Wara DW, Weintrub P, et al. Acquired immunodeficiency in an infant: possible transmission by means of blood products. Lancet 1983b;1(8331):956-8.
Ammann A. T-cell immunodeficiency disorders. In: Stites D, Terr A, eds. Basic and Clinical Immunology; 7th ed. Norwalk: Appleton and Lange, 1991, pp. 335-40.
Andes WA, Rangan SR, Wulff KM. Exposure of heterosexuals to human immunodeficiency virus and viremia: evidence for continuing risks in spouses of hemophiliacs. Sex Transm Dis 1989;16(2):68-73.
Andiman WA, Simpson BJ, Olson B, Dember L, et al. Rate of transmission of human immunodeficiency virus type 1 infection from mother to child and short-term outcome of neonatal infection. Am J Dis Child 1990;144:758-66.
Aoki-Sei S, Yarchoan R, Kageyama S, Hoekzema DT, et al. Plasma HIV-1 viremia in HIV-1 infected individuals assessed by polymerase chain reaction. AIDS Res Hum Retroviruses 1992;8(7):1263-70.
Aronson DL. Infection of hemophiliacs with HIV. J Clin Apheresis 1993;8(2):117-9.
Ascher MS, Sheppard HW, Winkelstein W Jr, Vittinghoff E, et al. Does drug use cause AIDS? Nature 1993a;362(6416):103-4.
Ascher MS, Sheppard HW, Winkelstein W Jr, Vittinghoff E. Aetiology of AIDS. Lancet 1993b;341(8854):1223.
Ascher MS, Sheppard HW, Winkelstein W Jr. AIDS-associated Kaposi's sarcoma (letter). Science 1995;267:1080.
Ashley R. Cocaine: Its History, Uses and Effects. New York: St. Martin's Press, 1975.
Ashton LJ, Learmont J, Luo K, Wylie B, et al. HIV infection in recipients of blood products from donors with known duration of infection. Lancet 1994;344(8924):718-20.
Auerbach DM, Darrow WW, Jaffe HW, Curran JW. Cluster of cases of the acquired immune deficiency syndrome: patients linked by sexual contact. Am J Med 1984;76(3):487-92.
Bacellar H, Munoz A, Hoover DR, Phair JP, et al. Incidence of clinical AIDS conditions in a cohort of homosexual men with CD4+ cell counts <100/mm3. J Infect Dis 1994;170(5):1284-7.
Bachmeyer C, Boufassa F, Sereni D, Deveau C, Buquet D. Prognostic value of acute symptomatic HIV-infection. IXth Int Conf on AIDS, (abstract no. PO-B01-0870), June 6-11, 1993.
Bagasra O, Hauptman SP, Lischner HW, Sachs M, Pomerantz RJ. Detection of human immunodeficiency virus type 1 provirus in mononuclear cells by in situ polymerase chain reaction. N Engl J Med 1992;326(21):1385-91.
Bagnarelli P, Menzo S, Valenza A, Manzin A, et al. Molecular profile of human immunodeficiency virus type 1 infection in symptomless patients and in patients with AIDS. J Virol 1992;66(12):7328-35.
Barlow KM, Mok JY. Dizygotic twins discordant for HIV and hepatitis C virus. Arch Dis Child 1993;68(4):507.
Barnett SW, Murthy KK, Herndier BG, Levy JA. An AIDS-like condition induced in baboons by HIV-2. Science 1994;266:642-6.
Barre-Sinoussi F, Chermann JC, Rey F, Nugeyre MT, et al. Isolation of a T-lymphotropic retrovirus from a patient at risk for acquired immune deficiency syndrome (AIDS). Science 1983;220(4599):868-71.
Becherer PR, Smiley ML, Matthews TJ, Weinhold KJ, et al. Human immunodeficiency virus-1 disease progression in hemophiliacs. Am J Hematol 1990;34(3):204-9.
Bell AP, Weinberg MS. Homosexualities: A Study of Diversity Among Men and Women. New York: Simon and Schuster, 1978.
Biggar RJ. AIDS incubation in 1,891 HIV seroconverters from different exposure groups. International Registry of Seroconverters. AIDS 1990;4(11):1059-66.
Bindels PJ, Krol A, Mulder-Folkerts DK, van den Hoek JA, Coutinho RA. Survival of patients following the diagnosis of AIDS in the Amsterdam region, 1982-1991. Ned Tijdschr Geneeskd 1994;138(10):513-8.
Blanche S, Mayaux MJ, Rouzioux C, Teglas JP, et al. Relation of the course of HIV infection in children to the severity of the disease in their mothers at delivery. N Engl J Med 1994;330(5):308-12.
Blattner W, Gallo RC, Temin HM. HIV causes AIDS. Science 1988a;241(4865):515-6.
Blattner W, et al. Blattner and colleagues respond to Duesberg. Science 1988b;241:514, 517.
Blattner W, Reitz M, Colclough G, Weiss S. HIV/AIDS in laboratory workers infected with HTLV-IIIB. IXth Int Conf on AIDS, (abstract no. PO-B01-0876), June 6-11, 1993.
Bobkov A, Garaev MM, Rzhaninova A, Kaleebu P, et al. Molecular epidemiology of HIV-1 in the former Soviet Union: analysis of env V3 sequences and their correlation with epidemiologic data. AIDS 1994;8(5):619-24.
Bonyhadi ML, Rabin L, Salimi S, Brown DA, et al. HIV induces thymus depletion in vivo. Nature 1993;363(6431):728-32.
Boulos R, Ruff A, Coberly J, McBrien M, Halsey JD. Effect of maternal HIV status on infant growth and survival. Xth Int Conf on AIDS, (abstract no. 054B), Aug 7-12, 1994.
Boyer PJ, Dillon M, Navaie M, Deveikis A, et al. Factors predictive of maternal-fetal transmission of HIV-1. Preliminary analysis of zidovudine given during pregnancy and/or delivery. JAMA 1994;271(24):1925-30.
British Cooperative Clinical Group. Homosexuality and venereal disease in the United Kingdom. Br J Vener Dis 1973;49:329-34.
Brookmeyer R, Gail MH. Screening and accuracy of tests for HIV. In: AIDS Epidemiology: A Quantitative Approach. New York: Oxford University Press, 1994.
Brown T, Sittitrai W. Estimates of HIV infection levels in the Thai population. Xth Int Conf on AIDS, (abstract no. 182C), Aug 7-12, 1994.
Bruisten SM, Koppelman MH, Dekker JT, Bakker M, et al. Concordance of human immunodeficiency virus detection by polymerase chain reaction and by serologic assays in a Dutch cohort of seronegative homosexual men. J Infect Dis 1992;166(3):620-2.
Buchbinder SP, Katz MH, Hessol NA, O'Malley PM, Holmberg SD. Long-term HIV-1 infection without immunologic progression. AIDS 1994;8(8):1123-8.
Busch MP, Valinsky JE, Paglieroni T, Prince HE, et al. Screening of blood donors for idiopathic CD4+ T-lymphocytopenia. Transfusion 1994;34(3):192-7.
Cao Y, Qin L, Zhang L, Safrit J, Ho DD. Virologic and immunologic characterization of long-term survivors of human immunodeficiency virus type 1 infection. N Engl J Med 1995;332(4):201-8.
CDC (Centers for Disease Control). Pneumocystis pneumonia - Los Angeles. MMWR 1981a;30:250-2.
CDC. Kaposi's sarcoma and Pneumocystis pneumonia among homosexual men - New York City and California. MMWR 1981b;30:305-8.
CDC. Persistent, generalized lymphadenopathy among homosexual males. MMWR 1982a;31:249-52.
CDC. Update on acquired immune deficiency syndrome (AIDS) - United States. MMWR 1982b;31:507-14.
CDC. Pneumocystis carinii pneumonia among persons with hemophilia A. MMWR 1982c;31:365-7.
CDC. Possible transfusion-associated acquired immune deficiency syndrome (AIDS) - California. MMWR 1982d;31:652-4.
CDC. Opportunistic infections and Kaposi's sarcoma among Haitians in the United States. MMWR 1982e;31:353-61.
CDC. CDC task force on Kaposi's sarcoma and opportunistic infections. N Engl J Med 1982f;306:248-52.
CDC. Immunodeficiency among female sex partners of males with acquired immunodeficiency syndrome (AIDS) - New York. MMWR 1983a;31:697-8.
CDC. Prevention of acquired immune deficiency syndrome (AIDS): report of inter-agency recommendations. MMWR 1983b;32:101-4.
CDC. Revision of the case definition of acquired immunodeficiency syndrome for national reporting - United States. MMWR 1985a;34:373-5.
CDC. Revision of the surveillance case definition of acquired immunodeficiency syndrome. MMWR 1987a;36:3S-15S.
CDC. Classification for human immunodeficiency virus (HIV) infection in children under 13 years of age. MMWR 1987b;35:224-35.
CDC. Summary of notifiable diseases - United States. MMWR 1987c;36:1-59.
CDC. 1993 revised classification system for HIV infection and expanded surveillance case definition for AIDS among adolescents and adults. MMWR 1992a;41:1-19.
CDC. Guideline for prophylaxis against Pneumocystis carinii pneumonia for persons infected with human immunodeficiency virus. MMWR 1992b;41:1-11.
CDC. Summary of notifiable diseases - United States, 1992. MMWR 1993b;41:1-73.
CDC. Update: AIDS among women - United States, 1994. MMWR 1994;44:81-4.
CDC. HIV/AIDS surveillance report, 1994 year-end edition. 1995a;6(no.2).
CDC. Division of HIV/AIDS, Reporting and Analysis Section. Personal communication, April 12, 1995b.
Chambers CD, Taylor WJ, Moffett AD. The incidence of cocaine use among methadone maintenance patients. Int J Addict 1974;7(3):427-41.
Chandra RK, ed. Primary and secondary immunodeficiency disorders. Edinburgh: Churchill Livingstone, 1983.
Chearskul S, Chotpitayasunondh T, Wanprapa N, Sangtaweesin V, et al. Survival among HIV-1 perinatally-infected infants, Bangkok, Thailand. Xth Int Conf on AIDS, (abstract no. 222C), Aug 7-12, 1994.
Cheingsong-Popov R, Weiss RA, Dalgleish A, Tedder RS, et al. Prevalence of antibody to human T-lymphotropic virus type III in AIDS and AIDS-risk patients in Britain. Lancet 1984;2(8401):477-80.
Chin J, Mann JM. The global patterns and prevalence of AIDS and HIV infection. AIDS 1988;2(suppl 1):S247-52.
Chorba TL, Holman RC, Strine TW, Clarke MJ, Evatt BL. Changes in longevity and causes of death among persons with hemophilia A. Am J Hematol 1994;45(2):112-21.
Clark SJ, Saag MS, Decker WD, Campbell-Hill S, et al. High titers of cytopathic virus in plasma of patients with symptomatic primary HIV-1 infection. N Engl J Med 1991;324(14):954-60.
Coates RA, Farewell VT, Raboud J, Read SE, et al. Cofactors of progression to acquired immunodeficiency syndrome in a cohort of male sexual contacts of men with human immunodeficiency virus disease. Am J Epidemiol 1990;132(4):717-22.
Coffin J, Haase A, Levy JA, Montagnier L, et al. Human immunodeficiency viruses (letter). Science 1986;232(4751):697.
Cohen J. Could drugs, rather than a virus, be the cause of AIDS? Science 1994a;266(5191):1648-9.
Cohen J. The epidemic in Thailand. Science 1994b;266(5191):1647.
Cohen J. Fulfilling Koch's postulates. Science 1994c;266(5191):1647.
Cohen J. Duesberg and critics agree: hemophilia is the best test. Science 1994d;266(5191):1645-6.
Cohen J. The Duesberg phenomenon. Science 1994e;266(5191):1642-4.
Concorde Coordinating Committee. MRC/ANRS randomised double-blind controlled trial of immediate and deferred zidovudine in symptom-free HIV infection. Lancet 1994;343(8902):871-81.
Connor RI, Mohri H, Cao Y, Ho DD. Increased viral burden and cytopathicity correlate temporally with CD4+ T-lymphocyte decline and clinical progression in human immunodeficiency virus type 1-infected individuals. J Virol 1993;67(4):1772-7.
Connor RI, Ho DD. Transmission and pathogenesis of human immunodeficiency virus type 1. AIDS Res Hum Retroviruses 1994a;10(4):321-3.
Connor RI, Ho DD. Human immunodeficiency virus type 1 variants with increased replicative capacity develop during the asymptomatic stage before disease progression. J Virol 1994b;68(7):4400-8.
Connor EM, Sperling RS, Gelber R, Kiselev P, et al. Reduction of maternal-infant transmission of human immunodeficiency virus type 1 with zidovudine treatment. N Engl J Med 1994;331(18):1173-80.
Coombs RW, Collier AC, Allain JP, Nikora B, et al. Plasma viremia in human immunodeficiency virus infection. N Engl J Med 1989;321(24):1626-31.
Cooper DA, Gold J, Maclean P, Donovan B, et al. Acute AIDS retrovirus infection. Definition of a clinical illness associated with seroconversion. Lancet 1985;1(8428):537-40.
Cooper DA, Gatell JM, Kroon S, Clumeck N, et al. Zidovudine in persons with asymptomatic HIV infection and CD4+ cell counts greater than 400 per cubic millimeter. N Engl J Med 1993;329(5):297-303.
Corbitt G, Bailey AS, Williams G. HIV infection in Manchester, 1959. Lancet 1990;336(8706):51.
Courtwright DT. Dark Paradise: Opiate Addiction in America Before 1940. Cambridge: Harvard University Press, 1982.
Creagh-Kirk T, Doi P, Andrews E, Nusinoff-Lehrman S, et al. Survival experience among patients with AIDS receiving zidovudine. Follow-up of patients in a compassionate plea program. JAMA 1988;260(20):3009-15.
Curran JW, Lawrence DN, Jaffe H, Kaplan JE, et al. Acquired immunodeficiency syndrome (AIDS) associated with transfusions. N Engl J Med 1984;310(2):69-75.
Curran JW, Jaffe HW, Hardy AM, Morgan WM, et al. Epidemiology of HIV infection and AIDS in the United States. Science 1988;239(4840):610-6.
d'Arminio Monforte A, Novati R, Galli M, Marchisio P, et al. T-cell subsets and serum immunoglobulin levels in infants born to HIV-seropositive mothers: a longitudinal evaluation. AIDS 1990;4(11):1141-4.
Daar ES, Moudgil T, Meyer RD, Ho DD. Transient high levels of viremia in patients with primary human immunodeficiency virus type 1 infection. N Engl J Med 1991;324(14):961-4.
Daar ES, Chernyavskiy T, Zhao JQ, Krogstad P, et al. Sequential determination of viral load and phenotype in human immunodeficiency virus type 1 infection. AIDS Res Hum Retroviruses 1995;11(1):3-9.
Dale GM. Morphine and cocaine intoxication. JAMA 1903;1:12, 15-6.
Dalgleish AG, Beverley PC, Clapham PR, Crawford DH, et al. The CD4 (T4) antigen is an essential component of the receptor for the AIDS retrovirus. Nature 1984;312(5996):763-7.
Dallabetta GA, Miotti PG, Chiphangwi JD, Saah AJ, et al. High socioeconomic status is a risk factor for human immunodeficiency virus type 1 (HIV-1) infection but not for sexually transmitted diseases in women in Malawi: implications for HIV-1 control. J Infect Dis 1993;167(1):36-42.
Darby SC, Ewart DW, Giangrande PLF, et al. Mortality before and after HIV infection in the UK population of haemophiliacs. Nature 1995;377:79-82.
Davachi F. Pediatric HIV infection in Africa. In: Essex M, et al., eds. AIDS in Africa. New York: Raven Press, 1994, pp. 439-62.
Davis KC, Horsburgh CR Jr, Hasiba U, Schocket AL, Kirkpatrick CH. Acquired immunodeficiency syndrome in a patient with hemophilia. Ann Intern Med 1983;98(3):284-6.
de Biasi R, Rocino A, Miraglia E, Mastrullo L, Quirino AA. The impact of a very high purity factor VIII concentrate on the immune system of human immunodeficiency virus-infected hemophiliacs: a randomized, prospective, two-year comparison with an intermediate purity concentrate. Blood 1991;78(8):1919-22.
Des Jarlais DC, Friedman SR, Marmor M, Mildvan D, et al. CD4 lymphocytopenia among injecting drug users in New York City. J Acquir Immune Defic Syndr 1993;6(7):820-2.
deShazo RD, Andes WA, Nordberg J, Newton J, et al. An immunologic evaluation of hemophiliac patients and their wives. Relationships to the acquired immunodeficiency syndrome. Ann Intern Med 1983;99(2):159-64.
Desrosiers RC. The simian immunodeficiency viruses. Annu Rev Immunol 1990;8:557-78.
Dewhurst S, Embretson JE, Anderson DC, Mullins JI, Fultz PN. Sequence analysis and acute pathogenicity of molecularly cloned SIVSMM-PBj14. Nature 1990;345(6276):636-40.
Dickover RE, Dillon M, Gillette SG, Deveikis A, et al. Rapid increases in load of human immunodeficiency virus correlate with early disease progression and loss of CD4 cells in vertically infected infants. J Infect Dis 1994;170(5):1279-84.
Dietrich SL, Boone DC. The epidemiology of HIV infection in hemophiliacs. In: Nilsson, Berntrop, eds. Recent Advances in Hemophilia Care. New York: Alan R. Liss, 1990, pp. 79-86.
Doll LS, Judson FN, Ostrow DG, O'Malley PM, et al. Sexual behavior before AIDS: the hepatitis B studies of homosexual and bisexual men. AIDS 1990;4(11):1067-73.
Dondero TJ, Curran JW. Excess deaths in Africa from HIV: confirmed and quantified (commentary). Lancet 1994;343(8904):989-90.
Donegan E, Stuart M, Niland JC, Sacks HS, et al. Infection with human immunodeficiency virus type 1 (HIV-1) among recipients of antibody-positive blood donations. Ann Intern Med 1990;113(10):733-9.
Douard D, Perel Y, Micheau M, Contraires B, et al. Perinatal HIV infection: longitudinal study of 22 children (clinical and biological follow-up) (letter). J Acquir Immune Defic Syndr 1989;2(2):212-3.
Duesberg PH. Retroviruses as carcinogens and pathogens: expectations and reality. Cancer Res 1987;47(5):1199-220.
Duesberg P. HIV is not the cause of AIDS. Science 1988;241(4865):514, 517.
Duesberg PH. Human immunodeficiency virus and acquired immunodeficiency syndrome: correlation but not causation. Proc Natl Acad Sci USA 1989;86(3):755-64.
Duesberg PH. AIDS: non-infectious deficiencies acquired by drug consumption and other risk factors. Res Immunol 1990;141(1):5-11.
Duesberg PH. AIDS epidemiology: inconsistencies with human immunodeficiency virus and with infectious disease. Proc Natl Acad Sci USA 1991;88(4):1575-9.
Duesberg PH. The role of drugs in the origin of AIDS. Biomed Pharmacother 1992;46(1):3-15.
Duesberg PH. AIDS acquired by drug consumption and other noncontagious risk factors. Pharmacol Ther 1992;55(3):201-77.
Duesberg P. Infectious AIDS--stretching the germ theory beyond its limits. Int Arch Allergy Immunol 1994;103(2):118-27.
Durack DT. Opportunistic infections and Kaposi's sarcoma in homosexual men. N Engl J Med 1981;305(24):1465-7.
Edelman K, Horning P, Catalan J, Gazzard B. HIV does not cause AIDS--impact of T.V. programme on attitudes to zidovudine in HIV patients. VIIth Int Conf on AIDS, (abstract no. W.B.2097), June 16-21, 1991.
Elliott JL, Hoppes WL, Platt MS, Thomas JG, et al. The acquired immunodeficiency syndrome and Mycobacterium avium-intracellulare bacteremia in a patient with hemophilia. Ann Intern Med 1983;98(3):290-3.
Embretson J, Zupancic M, Ribas JL, Burke A, et al. Massive covert infection of helper T lymphocytes and macrophages by HIV during the incubation period of AIDS. Nature 1993;362(6418):359-62.
Essex M, Hardy WJ Jr, Cotter SM, Jakowski RM, Sliski A. Naturally occurring persistent feline oncornavirus infections in the absence of disease. Infect Immun 1975;11(3):470-5.
Essex M. Adult T-cell leukemia/lymphoma: role of a human retrovirus. J Natl Cancer Inst 1982;69:981.
Essex M. The etiology of AIDS. In: Essex M, et al., eds. AIDS in Africa. New York: Raven Press, 1994, pp. 1-20.
European Collaborative Study. Children born to women with HIV-1 infection: natural history and risk of transmission. Lancet 1991;337:253-60.
European Collaborative Study. Risk factors for mother-to-child transmission of HIV-1. Lancet 1992;339:1007-12.
Evans AS. Causation and disease: the Henle-Koch postulates revisited. Yale J Biol Med 1976;49(2):175-95.
Evans AS. The clinical illness promotion factor: a third ingredient. Yale J Biol Med 1982;55(3-4):193-9.
Evans AS. Does HIV cause AIDS? An historical perspective. J Acquir Immune Defic Syndr 1989a;2(2):107-13.
Evans AS. Does HIV cause AIDS: author's reply (letter). J Acquir Immune Defic Syndr 1989b;2(9):514-7.
Evans AS. AIDS: the alternative view. Lancet 1992;339(8808):1547.
Evatt BL, Gomperts ED, McDougal JS, Ramsey RB. Coincidental appearance of LAV/HTLV-III antibodies in hemophiliacs and the onset of the AIDS epidemic. N Engl J Med 1985;312(8):483-6.
Fauci AS. The human immunodeficiency virus: infectivity and mechanisms of pathogenesis. Science 1988;239(4840):617-22.
Fauci AS. Multifactorial nature of human immunodeficiency virus disease: implications for therapy. Science 1993a;262(5136):1011-8.
Fauci AS. CD4+ T-lymphocytopenia without HIV infection--no lights, no camera, just facts. N Engl J Med 1993b;328(6):429-31.
Fenyo EM, Morfeldt-Manson L, Chiodi F, Lind B, et al. Distinct replicative and cytopathic characteristics of human immunodeficiency virus isolates. J Virol 1988;62(11):4414-9.
Feorino PM, Kalyanaraman VS, Haverkos HW, Cabradilla CD, et al. Lymphadenopathy associated virus infection of a blood donor-recipient pair with acquired immunodeficiency syndrome. Science 1984;225(4657):69-72.
Ferre F, Marchese A, Duffy PC, Lewis DE, et al. Quantitation of HIV viral burden by PCR in HIV seropositive Navy personnel representing Walter Reed stages 1 to 6. AIDS Res Hum Retroviruses 1992;8(2):269-75.
Finkel TH, Tudor-Williams G, Banda NK, et al. Apoptosis occurs predominantly in bystander cells and not in productively infected cells of HIV- and SIV-infected lymph nodes. Nature Medicine 1995;1(2):129-34.
Fischl MA, Richman DD, Grieco MH, Gottlieb MS, et al. The efficacy of azidothymidine (AZT) in the treatment of patients with AIDS and AIDS-related complex. A double-blind, placebo-controlled trial. N Engl J Med 1987;317(4):185-91.
Fischl MA, Richman DD, Causey DM, Grieco MH, et al. Prolonged zidovudine therapy in patients with AIDS and advanced AIDS-related complex. AZT Collaborative Working Group. JAMA 1989;262(17):2405-10.
Fischl MA, Richman DD, Hansen N, Collier AC, et al. The safety and efficacy of zidovudine (AZT) in the treatment of subjects with mildly symptomatic human immunodeficiency virus type 1 (HIV) infection. A double-blind, placebo-controlled trial. Ann Intern Med 1990;112(10):727-37.
Fox CH, Kotler D, Tierney A, Wilson CS, Fauci AS. Detection of HIV-1 RNA in the lamina propria of patients with AIDS and gastrointestinal disease. J Infect Dis 1989;159(3):467-71.
Franchini G, Gurgo C, Guo HG, Gallo RC, et al. Sequence of simian immunodeficiency virus and its relationship to the human immunodeficiency viruses. Nature 1987;328(6130):539-43.
Francis DP, Curran JW, Essex M. Epidemic acquired immune deficiency syndrome: epidemiologic evidence for a transmissible agent. J Natl Cancer Inst 1983;71(1):1-4.
Fricke W, Augustyniak L, Lawrence D, Brownstein A, et al. Human immunodeficiency virus infection due to clotting factor concentrates: results of the Seroconversion Surveillance Project. Transfusion 1992;32(8):707-9.
Friedland G. Parenteral drug users. In: Kaslow RA, Francis DP, eds. The Epidemiology of AIDS. New York: Oxford University Press, 1989, pp. 153-78.
Friedman RC, Downey JI. Homosexuality. N Engl J Med 1994;331(14):923-30.
Friedman-Kien AE. Disseminated Kaposi's sarcoma syndrome in young homosexual men. J Am Acad Dermatol 1981;5(4):468-71.
Fultz PN. The pathobiology of SIV infection of macaques. In: Montagnier L, Gougeon ML, eds. New Concepts in AIDS Pathogenesis. New York: Marcel Dekker, 1993, pp. 59-73.
Furtado MR, Kingsley LA, Wolinsky SM. Changes in the viral mRNA expression pattern correlate with a rapid rate of CD4+ T-cell number decline in human immunodeficiency virus type 1-infected individuals. J Virol 1995;69(4):2092-100.
Galai N, et al. Changes in markers of disease progression in HIV-1 seroconverters: a comparison between cohorts of injecting drug users and homosexual men. J Acquir Immune Defic Syndr 1995;8:66-74.
Galli M, Lazzarin A, Saracco A, Balotta C, et al. Clinical and immunological aspects of HIV infection in drug addicts. Clin Immunol Immunopathol 1989;50(1 pt 2):S166-76.
Galli M, Musicco M, Gervasoni C, Ridolfo AL, et al. No evidence for a role of continuing intravenous drug injection in accelerating disease progression in HIV-1 positive subjects. VIIth Int Conf on AIDS, (abstract no. TH.C.48), June 16-21, 1991.
Gallo RC, Reitz MS Jr. Human retroviruses and adult T-cell leukemia-lymphoma. J Natl Cancer Inst 1982;69(6):1209-14.
Gallo RC, Salahuddin SZ, Popovic M, Shearer GM, et al. Frequent detection and isolation of cytopathic retroviruses (HTLV-III) from patients with AIDS and at risk for AIDS. Science 1984;224(4648):500-3.
Gallo RC, Montagnier L. The chronology of AIDS research. Nature 1987;326(6112):435-6.
Gallo RC. Virus Hunting. AIDS, Cancer, and the Human Retrovirus: A Story of Scientific Discovery. New York: Harper Collins, 1991.
Gange RW, Jones EW. Kaposi's sarcoma and immunosuppressive therapy: an appraisal. Clin Exp Dermatol 1978;3(2):135-46.
Garry RF. Potential mechanisms for the cytopathic properties of HIV. AIDS 1989;3(11):683-94.
Gazzard BG, Shanson DC, Farthing C, Lawrence AG, et al. Clinical findings and serological evidence of HTLV-III infection in homosexual contacts of patients with AIDS and persistent generalised lymphadenopathy in London. Lancet 1984;2(8401):480-3.
Genesca J, Wang RY, Alter HJ, Shih JW. Clinical correlation and genetic polymorphism of the human immunodeficiency virus proviral DNA obtained after polymerase chain reaction amplification. J Infect Dis 1990;162(5):1025-30.
Gershon AA. Measles virus (rubeola). In: Mandell GL, et al., eds. Principles and Practice of Infectious Diseases; 3rd ed. New York: Churchill Livingstone, 1990, pp. 1279-84.
Getchell JP, Hicks DR, Srinivasan A, Heath JL, et al. Human immunodeficiency virus isolated from a serum sample collected in 1976 in Central Africa. J Infect Dis 1987;156(5):833-7.
Giesecke J, Scalia-Tomba G, Hakansson C, Karlsson A, Lidman K. Incubation time of AIDS: progression of disease in a cohort of HIV-infected homo- and bisexual men with known dates of infection. Scand J Infect Dis 1990;22(4):407-11.
Gilbert AN. Conceptions of homosexuality and sodomy in Western history. J Homosex 1980-81;6(1-2):57-68.
Ginsberg HS (moderator). Scientific forum on AIDS: a summary. Does HIV cause AIDS? J Acquir Immune Defic Syndr 1988;1(2):165-72.
Gjerset GF, Pike MC, Mosley JW, Hassett J, et al. Effect of low- and intermediate-purity clotting factor therapy on progression of human immunodeficiency virus infection in congenital clotting disorders. Blood 1994;84(5):1666-71.
Goedert JJ, Neuland CY, Wallen WC, Greene MH, et al. Amyl nitrite may alter T lymphocytes in homosexual men. Lancet 1982;1(8269):412-6.
Goedert JJ, Kessler CM, Aledort LM, Biggar RJ, et al. A prospective study of human immunodeficiency virus type 1 infection and the development of AIDS in subjects with hemophilia. N Engl J Med 1989;321(17):1141-8.
Golding H, Shearer GM, Hillman K, Lucas P, et al. Common epitope in human immunodeficiency virus I (HIV) I-GP41 and HLA class II elicits immunosuppressive antibodies capable of contributing to immune dysfunction in HIV-infected individuals. J Clin Invest 1989;83(4):1430-5.
Goldsmith JM, Deutsche J, Tang M, Green D. CD4 cells in HIV-1 infected hemophiliacs: effect of factor VIII concentrates. Thromb Haemost 1991;66(4):415-9.
Gonda MA, Wong-Staal F, Gallo RC, Clements JE, et al. Sequence homology and morphologic similarity of HTLV-III and visna virus, a pathogenic lentivirus. Science 1985;227(4683):173-7.
Gottlieb MS, Schroff R, Schanker HM, Weisman JD, et al. Pneumocystis carinii pneumonia and mucosal candidiasis in previously healthy homosexual men: evidence of a new acquired cellular immunodeficiency. N Engl J Med 1981;305(24):1425-31.
Goudsmit J. Alternative view on AIDS. Lancet 1992;339(8804):1289-90.
Graham NM, Zeger SL, Kuo V, Jacobson LP, et al. Zidovudine use in AIDS-free HIV-1-seropositive homosexual men in the Multicenter AIDS Cohort Study (MACS), 1987-1989. J Acquir Immune Defic Syndr 1991;4(3):267-76.
Graham NM, Piantadosi S, Park LP, Phair JP, et al. CD4+ lymphocyte response to zidovudine as a predictor of AIDS-free time and survival time. J Acquir Immune Defic Syndr 1993;6(11):1258-66.
Graham NM, Zeger SL, Park LP, Vermund SH, et al. The effects on survival of early treatment of human immunodeficiency virus infection. N Engl J Med 1992;326(16):1037-42.
Greene JB, Sidhu GS, Lewin S, Levine JF, et al. Mycobacterium avium-intracellulare: a cause of disseminated life-threatening infection in homosexuals and drug abusers. Ann Intern Med 1982;97(4):539-46.
Greene MH, Nightingale SL, DuPont RL. Evolving patterns of drug abuse. Ann Intern Med 1975;83(3):402-11.
Greene WC. AIDS and the immune system. Sci Am 1993;269(3):98-105.
Greene WC. The molecular biology of human immunodeficiency virus type 1 infection. N Engl J Med 1991;324(5):308-17.
Groopman JE. A dangerous delusion about AIDS. New York Times, September 10, 1992; A19.
Groopman JE, Salahuddin SZ, Sarngadharan MG, Mullins JI, et al. Virologic studies in a case of transfusion-associated AIDS. N Engl J Med 1984;311(22):1419-22.
Guerrero Vazquez J, de Paz Aparicio P, Olmedo Sanlaureano S, Omenaca Teres F, et al. Discordant acquired immunodeficiency syndrome in dizygotic twins. An Esp Pediatr 1993;39(5):445-7.
Gupta P, Kingsley L, Armstrong J, Ding M, et al. Enhanced expression of human immunodeficiency virus type 1 correlates with development of AIDS. Virology 1993;196(2):586-95.
Haase AT. Lentiviruses. In: Mandell GL, et al., eds. Principles and Practice of Infectious Diseases; 3rd ed. New York: Churchill Livingstone, 1990, pp. 1341-4.
Haase AT. Pathogenesis of lentivirus infections. Nature 1986;322(6075):130-6.
Hamilton JD, Hartigan PM, Simberkoff MS, Day PL, et al. A controlled trial of early versus late treatment with zidovudine in symptomatic human immunodeficiency virus infection. Results of the Veterans Affairs Cooperative Study. N Engl J Med 1992;326(7):437-43.
Hammer S, Crumpacker C, D'Aquila R, Jackson B, et al. Use of virologic assays for detection of human immunodeficiency virus in clinical trials: recommendations of the AIDS Clinical Trials Group Virology Committee. J Clin Microbiol 1993;31(10):2557-64.
Hanenberg RS, Rojanapithayakorn W, Kunasol P, Sokal DC. Impact of Thailand's HIV-control programme as indicated by the decline of sexually transmitted diseases. Lancet 1994;344(8917):243-5.
Harden VA. Koch's postulates and the etiology of AIDS: an historical perspective. Hist Phil Life Sci 1992;14(2):249-69.
Harms G, Kirsch T, Rahelimiarana N, Hof U, et al. HIV and syphilis in Madagascar (letter). AIDS 1994;8(2):279-88.
Harris C, Small CB, Klein RS, Friedland GH, et al. Immunodeficiency in female sexual partners of men with the acquired immunodeficiency syndrome. N Engl J Med 1983;308(20):1181-4.
Harris SB. The AIDS heresies: a case study of skepticism taken too far. Skeptic 1995;3(2):42-79.
Hassett J, Gjerset GF, Mosley JW, Fletcher MA, et al. Effect on lymphocyte subsets of clotting factor therapy in human immunodeficiency virus-1-negative congenital clotting disorders. Blood 1993;82(4):1351-7.
Haverkos HW, Dougherty JA, eds. Health Hazards of Nitrite Inhalants. NIDA Research Monograph 83, U.S. Department of Health and Human Services, PHS, ADAMHA. U.S. Government Printing Office, 1988.
Hays EF, Uittenbogaart CH, Brewer JC, Vollger LW, Zack JA. In vitro studies of HIV-1 expression in thymocytes from infants and children. AIDS 1992;6(3):265-72.
Hessol NA, Koblin BA, van Griensven GJ, Bacchetti P, et al. Progression of human immunodeficiency virus type 1 (HIV-1) infection among homosexual men in hepatitis B vaccine trial cohorts in Amsterdam, New York City and San Francisco, 1978-1991. Am J Epidemiol 1994;139(11):1077-87.
Hilgartner MW, Buckley JD, Operskalski EA, Pike MC, Mosley JW. Purity of factor VIII concentrates and serial CD4 counts. Lancet 1993;341(8857):1373-4.
Hirsch VM, Olmsted RA, Murphey-Corb M, Purcell RH, Johnson PR. An African primate lentivirus (SIVsm) closely related to HIV-2. Nature 1989;339(6223):389-92.
Ho DD, Moudgil T, Alam M. Quantitation of human immunodeficiency virus type 1 in the blood of infected persons. N Engl J Med 1989;321(24):1621-5.
Ho DD, Pomerantz RJ, Kaplan JC. Pathogenesis of infection with human immunodeficiency virus. N Engl J Med 1987;317(5):278-86.
Ho DD, Schooley RT, Rota TR, Kaplan JC, et al. HTLV-III in the semen and blood of a healthy homosexual man. Science 1984;226(4673):451-3.
Ho DD, Rota TR, Schooley RT, Kaplan JC, et al. Isolation of HTLV-III from cerebrospinal fluid and neural tissues of patients with neurologic syndromes related to the acquired immunodeficiency syndrome. N Engl J Med 1985;313(24):1493-7.
Ho DD, Neumann AU, Perelson AS, Chen W, et al. Rapid turnover of plasma virions and CD4 lymphocytes in HIV-1 infection. Nature 1995;373:123-6.
Hoxie JA, Alpers JD, Rackowski JL, Huebner K, et al. Alterations in T4 (CD4) protein and mRNA synthesis in cells infected with HIV. Science 1986;234(4780):1123-7.
Hufert FT, von Laer D, Fenner TE, Schwander S, et al. Progression of HIV-1 infection. Monitoring of HIV-1 DNA in peripheral blood mononuclear cells by PCR. Arch Virol 1991;120(3-4):233-40.
Hugin AW, Vacchio MS, Morse HC III. A virus-encoded superantigen in a retrovirus-induced immunodeficiency syndrome of mice. Science 1991;252(5004):424-7.
Hurtenbach U, Shearer GM. Germ cell-induced immune suppression in mice. Effect of inoculation of syngeneic spermatozoa on cell-mediated immune responses. J Exp Med 1982;155(6):1719-29.
Hutto C, Parks WP, Lai SH, Mastrucci MT, et al. A hospital-based prospective study of perinatal infection with human immunodeficiency virus type 1. J Pediatr 1991;118(3):347-53.
Hymes KB, Cheung T, Greene JB, Prose NS, et al. Kaposi's sarcoma in homosexual men--a report of eight cases. Lancet 1981;2(8247):598-600.
Institute of Medicine, National Academy of Sciences. Confronting AIDS. Directions for Public Health, Health Care and Research. Washington, D.C.: National Academy Press, 1986.
Irova T, Serebrovskaya L, Pokrovsky VV. The life of HIV+ children infected in nosocomial foci. IXth Int Conf on AIDS, (abstract no. PO-C04-2626), June 6-11, 1993.
Israelstam S, Lambert S, Oki G. Use of isobutyl nitrite as a recreational drug. Br J Addict Alcohol Other Drugs 1978;73(3):319-20.
Italian Seroconversion Study. Disease progression and early predictors of AIDS in HIV-seroconverted injecting drug users. AIDS 1992;6(4):421-6.
Jackson JB, Kwok SY, Sninsky JJ, Hopsicker JS, et al. Human immunodeficiency virus type 1 detected in all seropositive symptomatic and asymptomatic individuals. J Clin Microbiol 1990;28(1):16-9.
Jacobs E. Anal infections caused by herpes simplex virus. Dis Colon Rectum 1976;19(2):151-7.
Jaffe HW, Darrow WW, Echenberg DF, O'Malley PM, et al. The acquired immunodeficiency syndrome in a cohort of homosexual men. A six-year follow-up study. Ann Intern Med 1985a;103(2):210-4.
Jaffe HW, Sarngadharan MG, DeVico AL, Bruch L, et al. Infection with HTLV-III/LAV and transfusion-associated acquired immunodeficiency syndrome. Serologic evidence of an association. JAMA 1985b;254(6):770-3.
Janeway C. Immune recognition. Mls: makes little sense. Nature 1991;349(6309):459-61.
Jason J, Murphy J, Sleeper LA, Donfield SM, et al. Immune and serologic profiles of HIV-infected and noninfected hemophilic children and adolescents. Am J Hematol 1994;46(1):29-35.
Jefferiss FJG. Venereal disease and the homosexual. Br J Vener Dis 1956;32:17-20.
Johnson JP, Nair P, Hines SE, Seiden SW, et al. Natural history and serologic diagnosis of infants born to human immunodeficiency virus-infected women. Am J Dis Child 1989;143(10):1147-53.
Jones PK, Ratnoff OD. The changing prognosis of classic hemophilia (factor VIII deficiency). Ann Intern Med 1991;114(8):641-8.
Jones DS, Byers RH, Bush TJ, Oxtoby MJ, Rogers MF. Epidemiology of transfusion-associated acquired immunodeficiency syndrome in children in the United States, 1981-1989. Pediatrics 1992;89(1):123-7.
Judson FN, Miller KG, Schaffnit TR. Screening for gonorrhea and syphilis in the gay baths--Denver, Colorado. Am J Public Health 1977;67(8):740-2.
Jurriaans S, Weverling GJ, Goudsmit J, et al. Distinct changes in HIV type 1 RNA versus p24 antigen levels in serum during short-term zidovudine therapy in asymptomatic individuals with and without progression to AIDS. AIDS Res Hum Retroviruses 1995;11(4):473-9.
Kaslow RA, Blackwelder WC, Ostrow DG, Yerg D, et al. No evidence for a role of alcohol or other psychoactive drugs in accelerating immunodeficiency in HIV-1-positive individuals. A report from the Multicenter AIDS Cohort Study. JAMA 1989;261(23):3424-9.
Katz BZ. Natural history and clinical management of the infant born to a mother infected with human immunodeficiency virus. Semin Perinatol 1989;13(1):27-34.
Katz JN. Gay American History: Lesbians and Gay Men in the U.S.A. A Documentary. New York: Meridian, 1992.
Keet IP, Krijnen P, Koot M, Lange JM, et al. Predictors of rapid progression to AIDS in HIV-1 seroconverters. AIDS 1993;7(1):51-7.
Keet IP, Krol A, Klein MR, Veugelers P, et al. Characteristics of long-term asymptomatic infection with human immunodeficiency virus type 1 in men with normal and low CD4+ cell counts. J Infect Dis 1994;169(6):1236-43.
Kestler H, Kodama T, Ringler D, Marthas M, et al. Induction of AIDS in rhesus monkeys by molecularly cloned simian immunodeficiency virus. Science 1990;248(4959):1109-12.
Kinsey AC, et al. Sexual Behavior in the Human Male. Philadelphia: Saunders, 1948.
Klatzmann D, Barre-Sinoussi F, Nugeyre MT, Danquet C, et al. Selective tropism of lymphadenopathy associated virus (LAV) for helper-inducer T lymphocytes. Science 1984a;225(4657):59-63.
Klatzmann D, Champagne E, Chamaret S, Gruest J, et al. T-lymphocyte T4 molecule behaves as the receptor for human retrovirus LAV. Nature 1984b;312(5996):767-8.
Koblin BA, Taylor PE, Rubinstein P, Stevens CE. Effect of zidovudine on survival in HIV-1 infection: observational data from a cohort study of gay men. VIIIth Int Conf on AIDS, (abstract no. PoC 4349), July 19-24, 1992.
Kodama T, Mori K, Kawahara T, Ringler DJ, Desrosiers RC. Analysis of simian immunodeficiency virus sequence variation in tissues of rhesus macaques with simian AIDS. J Virol 1993;67(11):6522-34.
Koenig S, Earl P, Powell D, Pantaleo G, et al. Group-specific, major histocompatibility complex class-I restricted cytotoxic responses to human immunodeficiency virus I (HIV-1) envelope proteins by cloned peripheral blood T cells from an HIV-1 infected individual. Proc Natl Acad Sci USA 1988;85(22):8638-42.
Koga Y, Lindstrom E, Fenyo EM, Wigzell H, Mak TW. High levels of heterodisperse RNAs accumulate in T cells infected with human immunodeficiency virus and in normal thymocytes. Proc Natl Acad Sci USA 1988;85(12):4521-5.
Kool HE, Bloemkolk D, Reeve PA, Danner SA. HIV seropositivity and tuberculosis in a large general hospital in Malawi. Trop Geogr Med 1990;42(2):128-32.
Kozel NJ, Adams EH. Epidemiology of drug abuse: an overview. Science 1986;234(4779):970-4.
Kreek MJ. Immune function in heroin addicts and former heroin addicts in treatment: pre- and post-AIDS epidemic. In: Pham PTK, Rice K, eds. Drugs of Abuse: Chemistry, Pharmacology, Immunology, and AIDS. NIDA Research Monograph 96, 1990, pp. 192-219.
Kreiss JK, Kitchen LW, Prince HE, Kasper CK, Essex M. Antibody to human T-lymphotropic virus type III in wives of hemophiliacs. Evidence for heterosexual transmission. Ann Intern Med 1985;102(5):623-6.
Kurth R. Does HIV cause AIDS? An updated response to Duesberg's theories. Intervirology 1990;31(6):301-14.
Intervirology 1990;31(6):301-14. Kwok S, Mack DH, Mullis KB, Poiesz B, et al. Identification of human immunodeficiency virus sequences by using in vitro enzymatic amplification and oligomer cleavage detection. J Virol 1987;61(5):1690-4. Lambert JS. Maternal and perinatal issues regarding HIV infection. Pediatr Ann 1990;19(8):468-72. Latif AS. HIV and AIDS in southern Africa and the island countries. In: Essex M, et al., eds. AIDS in Africa. New York: Raven Press, 1994, pp. 691-711. Lau RK, Jenkins P, Caun K, Forster SM, et al. Trends in sexual behaviour in a cohort of homosexual men: a 7 year prospective study. Int J STD AIDS 1992;3(4):267-72. Laumann, et al. The Social Organization of Sexuality. Chicago: University of Chicago Press, 1994. Laurence J. T-cell subsets in health, infectious disease, and idiopathic CD4+ T lymphocytopenia. Ann Intern Med 1993;119(1):55-62. Laurent-Crawford AG, Krust B, Muller S, Riviere Y, et al. The cytopathic effect of HIV is associated with apoptosis. Virology 1991;185(2):829-39. Le Clair RA. Descriptive epidemiology of interstitial pneumocystic pneumonia. An analysis of 107 cases from the United States, 1955-1967. Am Rev Respir Dis 1969;99(4):542-7. Lederman MM, Ratnoff OD, Evatt BL, McDougal JS. Acquisition of antibody to lymphadenopathy-associated virus in patients with classic hemophilia (factor VIII deficiency). Ann Intern Med 1985;102(6):753-7. Lederman MM, Jackson JB, Kroner BL, White, GC, et al. Human immunodeficiency virus (HIV) type 1 infection status and in vitro susceptibility to HIV infection among high-risk HIV-1-seronegative hemophiliacs. J Infect Dis 1995;172(1):228-31. Lee CA, Sabin CA, Phillips AN, Elford J, Pasi J. Morbidity and mortality from transfusion-transmitted disease in haemophilia. Lancet 1995;345:1309. Lemp GF, Payne SF, Rutherford GW, Hessol NA, et al. Projections of AIDS morbidity and mortality in San Francisco. JAMA 1990;263(11):1497-1501. Leonard R, Zagury D, Desportes I, Bernard J, et al. Cytopathic effect of human immunodeficiency virus in T4 cells is linked to the last stage of virus infection. Proc Natl Acad Sci USA 1988;85(10):3570-4. Leroy V, Msellati P, Lepage P, Batungwanayo J, et al. Four years of natural history of HIV-1 infection in African women: a prospective cohort study in Kigali (Rwanda), 1988-1993. J Acquir Immune Defic Syndr 1995;9(4):415-421. Letvin NL, Daniel MD, Sehgal PK, Desrosiers RC, et al. Induction of AIDS-like disease in macaque monkeys with T-cell tropic retrovirus STLV-III. Science 1985;230(4721):71-3. Levy JA, Hoffman AD, Kramer SM, Landis JA, et al. Isolation of lymphocytopathic retroviruses from San Francsico patients with AIDS. Science 1984;225(4664):840-2. Levy JA, Shimabukuro J, Hollander H, Mills J, Kaminsky L. Isolation of AIDS-associated retroviruses from cerebrospinal fluid and brain of patients with neurological symptoms. Lancet 1985;2(8455):586-8. Levy JA. Pathogenesis of human immunodeficiency virus infection. Microbiol Rev 1993;57(1):183-289. Lifson JD, Reyes GR, McGrath MS, Stein BS, Engleman EG. AIDS retrovirus induced cytopathology: giant cell formation and involvement of CD4 antigen. Science 1986;232(4754):1123-7. Lifson AR, Darrow WW, Hessol NA, O'Malley PM, et al. Kaposi's sarcoma in a cohort of homosexual and bisexual men. Epidemiology and analysis for cofactors. Am J Epidemiol 1990;131(2):221-31. Lindan CP, Allen S, Serufilira A, Lifson AR, et al. Predictors of mortality among HIV-infected women in Kigali, Rwanda. Ann Intern Med 1992;116(4):320-8. 
Lindback S, Brostrom C, Karlsson A, Gaines H. Does symptomatic primary HIV-1 infection accelerate progression to CDC stage IV disease, CD4 count below 200 x 10(6)/l, AIDS, and death from AIDS? Br Med J 1994;309(6968):1535-7. Lindgren S, Anzen B, Bohlin AB, Lidman K. HIV and child-bearing: clinical outcome and aspects of mother-to-infant transmission. AIDS 1991;5(9):1111-6. Longini IM Jr, Clark WS, Karon JM. Effect of routine use of therapy in slowing the clinical course of human immunodeficiency virus (HIV) infection in a population-based cohort. Am J Epidemiol 1993;137(11):1229-40. Lowenstein JM. Is AIDS a myth? California Academy of Sciences: Pacific Discovery Magazine. Fall, 1994. Lusher JM, Operskalski EA, Aledort LM, Dietrich SL, et al. Risk of human immunodeficiency virus type 1 infection among sexual and nonsexual household contacts of persons with congenital clotting disorders. Pediatrics 1991;88(2):242-9. Lyerly HK, Matthews TJ, Langlois AJ, Bolognesi DP, Weinhold KJ. Human T-cell lymphotropic virus IIIB glycoprotein (gp120) bound to CD4 determinants on normal lymphocytes and expressed by infected cells serves as target for immune attack. Proc Natl Acad Sci USA 1987;84(13):4601-5. Mann JM, et al., eds. AIDS in the World. Cambridge: Harvard University Press, 1992. Mann JM. AIDS--the second decade: a global perspective. J Infect Dis 1992;165(2):245-50. Mannucci PM, Gringeri A, de Biasi R, Baudo F, et al. Immune status of asymptomatic HIV-infected hemophiliacs: randomized, prospective, two-year comparison of treatment with a high-purity or an intermediate-purity factor VIII concentrate. Thromb Haemost 1992;67(3):310-3. Mannucci PM, Gringeri A, Savidge G, Gatenby P, et al. Randomized double-blind, placebo-controlled trial of twice-daily zidovudine in asymptomatic haemophiliacs infected with the human immunodeficiency virus type 1. Br J Haematol 1994;86(1):174-9. MAP Workshop (Multi-cohort Analysis Project). Extending public health surveillance of HIV infection: information from a five cohort workshop. Stat Med 1993;12(22):2065-85. Margolick JB, Munoz A, Vlahov D, Solomon L, et al. Changes in T-lymphocyte subsets in intravenous drug users with HIV-1 infection. JAMA 1992;267(12):1631-6. Margolick JB, Munoz A, Vlahov D, Astemborski J, et al. Direct comparison of the relationship between clinical outcome and change in CD4+ lymphocytes in human immunodeficiency virus-positive homosexual men and injecting drug users. Arch Intern Med 1994;154(8):869-75. Masur H, Michelis MA, Greene JB, Onorato I, et al. An outbreak of community-acquired Pneumocystis carinii pneumonia: initial manifestation of cellular immune dysfunction. N Engl J Med 1981;305(24):1431-8. Masur H. Mycobacterium avium-intracellulare: another scourge for individuals with the acquired immunodeficiency syndrome. JAMA 1982a;248(22):3013. Masur H, Michelis MA, Wormser GP, Lewin S, et al. Opportunistic infection in previously healthy women. Initial manifestations of a community-acquired cellular immunodeficiency. Ann Intern Med 1982b;7(4):533-9. Mathez D, Paul D, de Belilovsky C, Sultan Y, et al. Productive human immunodeficiency virus infection levels correlate with AIDS-related manifestations in the patient. Proc Natl Acad Sci USA 1990;87(19):7438-42. Mavligit GM, Talpaz M, Hsia FT, Wong W, et al. Chronic immune stimulation by sperm alloantigens: support for the hypothesis that spermatozoa induce immune dysregulation in homosexual males. JAMA 1984;251(2):237-41. McDougal JS, Mawle A, Cort SP, Nicholson JK, et al. 
Cellular tropism of the human retrovirus HTLV-III/LAV. Role of T cell activation and expression of the T4 antigen. J Immunol 1985a;135(5):3151-62. McDougal JS, Jaffe HW, Cabridilla CD, Sarngadharan MG, et al. Screening tests for blood donors presumed to have transmitted the acquired immunodeficiency syndrome. Blood 1985b;65(3):772-5. McDougal JS, Kennedy MS, Sligh JM, Cort SP, et al. Binding of HTLV-III/LAV to T4+ T cells by a complex of the 110K virtal protein and the T4 molecule. Science 1986;231(4736):382-5. McLeod GX, Hammer SM. Zidovudine: five years later. Ann Int Med 1992;117(6):487-501. McMillan A, Young H. Gonorrhea in the homosexual man: frequency of infection by culture site. Sex Transm Dis 1978;5(4):146-50. Mellors JW, Kingsley LA, Rinaldo CR, Todd JA, et al. Quantitation of HIV-1 RNA in plasma predicts outcome after seroconversion. Ann Intern Med 1995;15,122(8):573-7. Menez-Bautista R, Fikrig SM, Pahwa S, Sarngadharan MG, Stoneburner RL. Monozygotic twins discordant for the acquired immunodeficiency syndrome. Am J Dis Child 1986;140(7):678-9. Merigan TC, Amato DA, Balsley J, Power M, et al. Placebo-controlled trial to evaluate zidovudine in treatment of human immunodeficiency virus infection in asymptomatic patients with hemophilia. Blood 1991;78(4):900-6. Merino HI, Richards JB. An innovative program of venereal disease casefinding, treatment and education for a population of gay men. Sex Transm Dis 1977;4(2):50-2. Michael NL, Vahey M, Burke DS, Redfield RR. Viral DNA and mRNA correlate with the stage of human immunodeficiency virus (HIV) type 1 infection in humans: evidence for viral replication in all stages of HIV disease. J Virol 1992;66(1):310-6. Montagnier L, et al. A new lymphotropic retrovirus: characterization and possible role in lymphadenopathy and acquired immune deficiency syndromes. In: Gallo RC, et al., eds. Human T-cell Leukemia/Lymphoma Virus. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory, 1984, pp. 363-79. Montella F, Di Sora F, Perucci CA, Abeni DD, Recchia O. T-lymphocyte subsets in intravenous drug users with HIV-1 infection. JAMA 1992;268(18):2516-7. Moore RD, Creagh-Kirk T, Keruly J, Link G, et al. Long-term safety and efficacy of zidovudine in patients with advanced human immunodeficiency virus disease. Arch Intern Med 1991a;151(5):981-6. Moore RD, Hidalgo J, Sugland BW, Chaisson RE. Zidovudine and the natural history of the acquired immunodeficiency syndrome. N Engl J Med 1991b;324(20):1412-6. Morton WR, et al. Infection of Macaca nemestrina by HIV-1/HIV-2: development of infection and disease models. Laboratory of Tumor Cell Biology Annual Meeting, Aug 22-28, 1993. AIDS Res Hum Retroviruses 1994;10(suppl 1):S1-125. Mosley JW. Low CD4+ counts in a study of transfusion safety: correction. N Engl J Med 1993;328(15):1129. Mulder JW, Cooper DA, Mathiesen L, Sandstrom E, et al. Zidovudine twice daily in asymptomatic subjects with HIV infection and a high risk of progression to AIDS: a randomized, double-blind placebo-controlled study. AIDS 1994a;8(3):313-21. Mulder DW, Nunn AJ, Kamali A, Nakiyingi J, et al. Two-year HIV-1-associated mortality in a Ugandan rural population. Lancet 1994b;343(8904):1021-3. Munoz A, Vlahov D, Solomon L, Margolick JB, et al. Prognostic indicators for development of AIDS among intravenous drug users. J Acquir Immune Defic Syndr 1992;5(7):694-700. Muro-Cacho CA, Pantaleo G, Fauci AS. Analysis of apoptosis in lymph nodes of HIV-infected persons. 
Intensity of apoptosis correlates with the general state of activation of the lymphoid tissue and not with stage of disease or viral burden. J Immunol 1995;154(10):5555-66. Myers G, MacInnes K, Korber B. The emergence of simian/human immunodeficiency viruses. AIDS Res Hum Retroviruses 1992;8(3):373-86. Nahmias AJ, Weiss J, Yao X, Lee F, et al. Evidence for human infection with an HTLV III/LAV-like virus in Central Africa, 1959 (letter). Lancet 1986;31,1(8492):1279-80. Nicholson JK, Spira TJ, Aloisio CH, Jones BM, et al. Serial determinations of HIV-1 titers in HIV-infected homosexual men: association of rising titers with CD4 T cell depletion and progression to AIDS. AIDS Res Hum Retroviruses 1989;5(2):205-15. Nicoll A, Brown P. HIV: beyond reasonable doubt. New Scientist Jan. 15, 1994:24-8. Novick BE, Rubinstein A. AIDS--the paediatric perspective. AIDS 1987;1(1):3-7. Novick DM, Ochshorn M, Ghali V, Croxson TS, et al. Natural killer cell activity and lymphocyte subsets in parenteral heroin abusers and long-term methadone maintenance patients. J Pharmacol Exp Ther 1989;250(2):606-10. NTIS (National Technical Information Service). ACTG Protocol 116b/117: Viral load analysis for study investigators, 1994. Springfield, VA. O'Brien WA, Hartigan PM, McCreedy B, Hamilton JD. Plasma HIV RNA and beta 2 microglobulin as surrogate markers. Xth Int Conf on AIDS, (abstract no. 254B), Aug 7-12, 1994. Oettle AG. Geographical and racial differences in the frequencies of Kaposi's sarcoma as evidence of environmental or genetic causes. Acta Unio Int Contra Cancrum 1962;18:330-63. Oldstone MB. Viral persistence. Cell 1989;56(4):517-20. Oleske J, Minnefor A, Cooper R Jr, Thomas K, et al. Immune deficiency syndrome in children. JAMA 1983;249(17):2345-9. Ostrow D, Vanhaden MJ, Fox R, Kingsley LA, et al. Recreational drug use and sexual behavior in a cohort of homosexual men. AIDS 1990;4(8):759-65. Ostrow DG, Beltran ED, Joseph JG, DiFranceisco W, et al. Recreational drugs and sexual behavior in the Chicago MACS/CCS cohort of homosexually active men. J Subst Abuse 1993;5(4):311-25. Ottmann N, Innocenti P, Thenadey M, Micoud M, et al. The polymerase chain reaction for the detection of HIV-1 genomic RNA in plasma from infected individuals. J Virol Methods 1991;31(2-3):273-83. Owen RL, Hill JL. Rectal and pharyngeal gonorrhea in homosexual men. JAMA 1972;220(10):1315-8. Pantaleo G, Graziosi C, Fauci AS. The immunopathogenesis of human immunodeficiency virus infection. N Engl J Med 1993a;328(5):327-35. Pantaleo G, Graziosi C, Demarest JF, Butini L, et al. HIV infection is active and progressive in lymphoid tissue during the clinically latent stage of disease. Nature 1993b;362(6418):355-8. Pantaleo G, Demarest JF, Soudeyns H, Graziosi C, et al. Major expansion of CD8+ T cells with a predominant V beta usage during the primary immune response to HIV. Nature 1994;370(6489):463-7. Pantaleo G, Menzo S, Vaccarezza M, Graziosi C, et al. Studies in subjects with long-term nonprogressive human immunodeficiency virus infection. N Eng J Med 1995a;332:209-16. Pantaleo G, Fauci AS. Apoptosis in HIV infection. Nature Medicine 1995b;1(2):118-20. Pape JW, Liautaud B, Thomas F, Mathurin JR, et al. Characteristics of the acquired immunodeficiency syndrome (AIDS) in Haiti. N Engl J Med 1983;309(16):945-50. Pape J, Johnson WD Jr. AIDS in Haiti: 1982-1992. Clin Infect Dis 1993;17(suppl 2):S341-5. Pariser H, Marino AF. Gonorrhea: frequently unrecognized reservoirs. South Med J 1970;63(2):198-201. Park CL, Streicher H, Rothberg R. 
Transmission of human immunodeficiency virus from parents to only one dizygotic twin. J Clin Microbiol 1987;25(6):1119-21. Pauza CD, Galindo JE, Richman DD. Reinfection results in accumulation of unintegrated viral DNA in cytopathic and persistent human immunodeficiency virus type 1 infection of CEM cells. J Exp Med 1990;172(4):1035-42. Pedersen C, Lindhardt BO, Jensen BL, Lauritzen E, et al. Clinical course of primary HIV infection: consequences for subsequent course of infection. Br Med J 1989;299(6692):154-7. Pedersen C, Gerstoft J, Lundgren J, Jensen BL, et al. Development of AIDS and low CD4 cell counts in a cohort of 180 seroconverters. IXth Int Conf on AIDS, (abstract no. PO-Bo1-0862), June 6-11, 1993. Peterman TA, Stoneburner RL, Allen JR, Jaffe HW, Curran JW. Risk of human immunodeficiency virus transmission from heterosexual adults with transfusion-associated infections. JAMA 1988;259(1):55-8. Petru A, Dunphy MG, Azimi P, Janner D, et al. Reliability of polymerase chain reaction in the detection of human immunodeficiency virus infection in children. Pediatr Infect Dis J 1992;11(1):30-3. Pezzotti P, Rezza G, Lazzarin A, Angarano G, et al. Influence of gender, age, and transmission category on the progression from HIV seroconversion to AIDS. J Acquir Immune Defic Syndr 1992;5(7):745-7. Piatak M Jr, Saag MS, Yang LC, Clark SJ, et al. High levels of HIV-1 in plasma during all stages of infection determined by competitive PCR. Science 1993;259(5102):1749-54. Piot P, Plummer FA, Mhalu FS, Lamboray JL, et al. AIDS: an international perspective. Science 1988;239(4840):573-9. Pitchenik AE, Shafron RD, Glasser RM, Spira TJ. The acquired immunodeficiency syndrome in the wife of a hemophiliac. Ann Intern Med 1984;100(1):62-5. Poon MC, Landay A, Prasthofer EF, Stagno S. Acquired immunodeficiency syndrome with Pneumocystis carinii pneumonia and Mycobacterium avium-intracellulare infection in a previously healthy patient with classic hemophilia. Clinical, immunologic, and virologic findings. Ann Intern Med 1983;98(3):287-90. Popovic M, Sarngadharan MG, Read E, Gallo RC. Detection, isolation and continuous production of cytopathic retroviruses (HTLV-III) from patients with AIDS and pre-AIDS. Science 1984;224(4648):497-500. Prober CG, Gershon AA. Medical management of newborns and infants born to human immunodeficiency virus-seropositive mothers. Pediatr Infect Dis J 1991;10(9):684-95. Quinn TC, Mann JM, Curran JW, Piot P. AIDS in Africa: an epidemiologic paradigm. Science 1986;234(4779):955-63. Quinn TC. Population migration and the spread of types 1 and 2 human immunodeficiency viruses. Proc Natl Acad Sci USA 1994;91(7):2407-14. Ragni MV, Kingsley LA, Zhou SJ. The effect of antiviral therapy on the natural history of human immunodeficiency virus infection in a cohort of hemophiliacs. J Acquir Immune Defic Syndr 1992;5(2):120-6. Rasamindrakotroka AJ, et al. Seroprevalence of HIV-1, hepatitis B and syphilis in a population of blood donors in Antananavivo, Madagascar. VIth Int Conf on AIDS in Africa, Dakar, Senegal, (poster TA101), Dec 16-19, 1991. Ratner L, Gallo RC, Wong-Staal F. HTLV-III, LAV, ARV are variants of same AIDS virus. Nature 1985;313(6004):636. Reitz MS Jr, Hall L, Robert-Guroff M, Lautenberger J, et al. Viral variability and serum antibody response in a laboratory worker infected with HIV type 1 (HTLV type IIIB). AIDS Res Hum Retroviruses 1994;10(9):1143-55. Reinisch JM, et al. Sexual behavior and AIDS: lessons from art and sex research. In: Veoller B, et al., eds. 
AIDS and Sex: An Integrated Biomedical and Biobehavioral Approach. New York: Oxford University Press, 1990, pp. 37-80. Rezza G, Lazzarin A, Angarano G, Sinicco A, et al. The natural history of HIV infection in intravenous drug users: risk of disease progression in a cohort of seroconverters. AIDS 1989;3(2):87-90. Rezza G, Lazzarin A, Angarano G, Zerboni R, et al. Risk of AIDS in HIV seroconverters: a comparison between intravenous drug users and homosexual males. Eur J Epidemiol 1990;6(1):99-101. Richman DD, Fischl MA, Grieco MH, Gottlieb MS, et al. The toxicity of azidothymidine (AZT) in the treatment of patients with AIDS and AIDS-related complex. N Engl J Med 1987;317(4):192-7. Richman DD, Andrews J. Results of continued monitoring of participants in the placebo-controlled trials of zidovudine for serious human immunodeficiency virus infection. Am J Med 1988;85:208-13. Richman DD, Bozzette SA. The impact of the syncytium-inducing phenotype of human immunodeficiency virus on disease progression. J Infect Dis 1994;169(5):968-74. Robertson JR, Skidmore CA, Roberts JJ, Elton RA. Progression to AIDS in intravenous drug users, cofactors and survival. VIth Int Conf on AIDS, (abstract no. Th.C.649), June 20-23, 1990. Robinson WS. Hepatitis B virus and hepatitis delta virus. In: Mandell GL, et al., eds. Principles and Practices of Infectious Diseases; 3rd ed. New York: Churchill Livingstone, 1990. Rogers MF, Morens DM, Stewart JA, Kaminski RM, et al. National case-control study of Kaposi's sarcoma and Pneumocystis carinii pneumonia in homosexual men: part 2. Laboratory results. Ann Intern Med 1983;99(2):151-7. Rogers MF, Ou CY, Rayfield M, Thomas PA, et al. Use of the polymerase chain reaction for early detection of the proviral sequences of human immunodeficiency virus in infants born to seropositive mothers. N Engl J Med 1989;320(25):1649-54. Rothman S. Remarks on sex, age and racial distribution of Kaposi's sarcoma and on possible pathogenic factors. Acat Unio Int Contra Cancrum 1962a;18:326. Rothman S. Medical research in Africa. Arch Dermatol 1962b;85:311-24. Rubinstein A, Sicklick M, Gupta A, Bernstein L, et al. Acquired immunodeficiency with reversed T4/T8 ratios in infants born to promiscuous and drug-addicted mothers. JAMA 1983;249:2350-6. Ryder RW, Mugewrwa RW. The clinical definition and diagnosis of AIDS in African adults. In: Essex M, et al., eds. AIDS in Africa. New York: Raven Press, 1994a, pp. 269-81. Ryder RW, Nsuami M, Nsa W, Kamenga M, et al. Mortality in HIV-1-seropositive women, their spouses and their newly born children during 36 months of follow-up in Kinshasa, Zaire. AIDS 1994b;8(5):667-72. Saag MS, Crain MJ, Decker WD, Campbell-Hill S, et al. High level viremia in adults and children infected with human immunodeficiency virus: relation to disease stage and CD4+ lymphocyte levels. J Infect Dis 1991;164(1):72-80. Saah AJ, Hoover DR, He Y, Kingsley LA, Phair JP. Factors influencing survival after AIDS: report from the Multicenter AIDS Cohort Study (MACS). J Acquir Immune Defic Syndr 1994;7(3):287-95. Sabin C, Phillips A, Elfold J, Griffiths P, et al. The progression of HIV disease in a hemophiliac cohort followed for 12 years. Br J Haematol 1993;83(2):330-3. Safai B, Good RA. Kaposi's sarcoma: a review and recent developments. CA Cancer J Clin 1981;31:1-12. Safai B. Kaposi's sarcoma: a review of classical and epidemic forms. Ann NY Acad Sci 1984a;437:373-82. Safai B, Sarngadharan MG, Groopman JE, Arnett K, et al. 
Seroepidemiological studies of human T-lymphotropic retrovirus type III in acquired immunodeficiency syndrome. Lancet 1984b;1(8392):1438-40. Saghir MT, Robins E. Male and Female Homosexuality: A Comprehensive Investigation. Baltimore: Williams and Wilkins, 1973. Saksela K, Stevens C, Rubinstein P, Baltimore D. Human immunodeficiency virus type 1 mRNA expression in peripheral blood cells predicts disease progression independently of the numbers of CD4+ lymphocytes. Proc Natl Acad Sci USA 1994;91(3):1104-8. Sanchez-Pescador R, Power MD, Barr PJ, Steimer KS, et al. Nucleotide sequence and expression of an AIDS-associated retrovirus (ARV-2). Science 1985;227(4686):484-92. Sande MA, Carpenter CC, Cobbs CG, Holmes KK, Sanford JP. Antiretroviral therapy for adult HIV-infected patients: recommendations for a state-of-the-art conference. JAMA 1993;270(21):2583-9. Sarngadharan MG, Popovic M, Bruch L, Schupbach J, Gallo RC. Antibodies reactive with human T-lymphotropic retroviruses (HTLV-III) in the serum of patients with AIDS. Science 1984;224(4648):506-8. Schechter MT, Craib KJ, Le TN, Montaner JS, et al. Susceptibility to AIDS progression appears early in HIV infection. AIDS 1990;4(3):185-90. Schechter MT, Craib KJ, Gelman KA, Montaner JS, et al. HIV-1 and the aetiology of AIDS. Lancet 1993a;341:658-9. Schechter MT, Craib KJ, Montaner JS, Lee TN, et al. Aetiology of AIDS. Lancet 1993b;341(8854):1222-3. Schinaia N, Ghirardini A, Chiarotti F, Gringeri A, Mannucci PM. Progression to AIDS among Italian HIV-seropositive haemophiliacs. AIDS 1991;5(4):385-91. Schnittman SM, Psallidopoulos MC, Lane HC, Thompson L, et al. The reservoir for HIV-1 in human peripheral blood is a T cell that maintains expression of CD4. Science 1989;245(4915):305-8. Schnittman SM, Greenhouse JJ, Psallidopoulos MC, Baseler M, et al. Increasing viral burden in CD4+ T cells from patients with human immunodeficiency virus infection reflects rapidly progressive immunosuppression and clinical disease. Ann Intern Med 1990a;113(6):438-43. Schnittman SM, Denning SM, Greenhouse JJ, Justement JS, et al. Evidence for susceptibility of intrathymic T-cell precursors and their progeny carrying T-cell antigen receptor phenotypes TCR alpha beta + and TCR gamma delta + to human immunodeficiency virus infection: a mechanism for CD4+ (T4) lymphocyte depletion. Proc Natl Acad Sci USA 1990b;87(19):7727-31. Schnittman SM, Greenhouse JJ, Lane HC, Pierce PF, Fauci AS. Frequent detection of HIV-1 specific mRNAs in infected individuals suggests ongoing active viral expression in all stages of disease. AIDS Res Hum Retroviruses 1991;7(4):361-7. Schoenbaum EE, Hartel D, Selwyn PA, Davenny K, et al. Lack of association of T-cell subsets with continuing intravenous drug use and high risk heterosexual sex, independent of HIV infection and disease. Program and Abstracts of the Vth Int Conf on AIDS (Montreal). Ottawa: International Development Research Center, 1989. Scott J, Stone AH. Some observations on the diagnosis of rectal gonorhoea in both sexes using a selective culture medium. Br J Vener Dis 1966;42(27):103-6. Seidman SN, Rieder RO. A review of sexual behavior in the United States. Am J Psychiatry 1994;151(3):330-41. Selik RM, Ward JW, Buehler JW. Trends in transfusion-associated acquired immune deficiency syndrome in the United States, 1982-1991. Transfusion 1993;33(11):890-3. Selwyn PA, Alcabes P, Hartel D, Buono D, et al. Clinical manifestations and predictors of disease progression in drug users with human immunodeficiency virus infection. 
N Engl J Med 1992;327(24):1697-703. Sewankambo NK, Wawer MJ, Gray RH, Serwadda D, et al. Demographic impact of HIV infection in rural Rakai District, Uganda: results of a population-based cohort study. AIDS 1994;8:1707-13. Sheppard HW, Winkelstein W, Lang W, Charlebois E. CD4+ T-lymphocytopenia without HIV infection. N Engl J Med 1993;28(25):1847-8. Shin YO, Hur SJ, Kim SS, Kee MK. Human immunodeficiency virus (HIV) epidemiological trends among risk groups in Korea. Xth Int Conf on AIDS, (abstract no. 041C), Aug 7-12, 1994. Siegal FP, Lopez C, Hammer GS, Brown AE, et al. Severe acquired immunodeficiency in male homosexuals, manifested by chronic perianal ulcerative herpes simplex lesions. N Engl J Med 1981;305(24):1439-44. Sinicco A, Fora R, Sciandra M, Lucchini A, et al. Risk of developing AIDS after primary acute HIV-1 infection. J Acquir Immune Defic Syndr 1993;6(6):575-81. Smiley ML, White GC, Becherer P, Macik G, et al. Transmission of human immunodeficiency virus to sexual partners of hemophiliacs. Am J Hematol 1988;28(1):27-32. Smith DK, Neal JJ, Holmberg SD. Unexplained opportunistic infections and CD4+ T-lymphocytopenia without HIV infection. An investigation of cases in the United States. N Engl J Med 1993;328(6):373-9. Sodroski J, Goh WC, Rosen C, Campbell K, Haseltine WA. Role of the HTLV-III/LAV envelope in syncytium formation and cytopathicity. Nature 1986;322(6078):470-4. Sonnabend J, Witkin SS, Purtilo DT. Acquired immunodeficiency syndrome, opportunistic infections, and malignancies in male homosexuals. A hypothesis of etiologic factors in pathogenesis. JAMA 1983;249(17):2370-4. Spotts JV, Shontz FC. Cocaine Users: A Representative Case Approach. New York: The Free Press, 1980. Srugo I, Brunell PA, Chelyapov NV, Ho DD, et al. Virus burden in human immunodeficiency virus type 1-infected children: relationship to disease status and effect of antiviral therapy. Pediatrics 1991;87(6):921-5. Stanley SK, Kessler SW, Justement JS, Schnittman SM, et al. CD34+ bone marrow cells are infected with HIV in a subset of seropositive individuals. J Immunol 1992;149(2):689-97. Stowring L, Haase AT, Charman HP. Serological definition of the lentivirus group of retroviruses. J Virol 1979;29(2):523-8. Substance Abuse and Mental Health Services Administration. National Household Survey on Drug Abuse: Population Estimates, 1993. Rockville, MD, 1994. Temin HM. Mechanisms of cell killing/cytopathic effects by nonhuman retroviruses. Rev Infect Dis 1988;10(2):399-405. Temin HM. Is HIV unique or merely different? J Acquir Immune Defic Syndr 1989;2(1):1-9. Terai C, Kornbluth RS, Pauza CD, Richman DD, Carson DA. Apoptosis as a mechanism of cell death in cultured T lymphoblasts acutely infected with HIV-1. J Clin Invest 1991;87(5):1710-5. Tersmette M, de Goede RE, Al BJ, Winkel IN, et al. Differential syncytium-inducing capacity of human immunodeficiency virus isolates: frequent detection of syncytium-inducing isolates in patients with acquired immunodeficiency syndrome (AIDS) and AIDS-related complex. J Virol 1988;62(6):2026-32. Tersmette M, Lange JM, de Goede RE, de Wolf F, et al. Association between biological properties of human immunodeficiency virus variants and risk for AIDS and AIDS mortality. Lancet 1989a;1(8645):983-5. Tersmette M, Gruters RA, de Wolf F, de Goede RE, et al. Evidence for a role of virulent human immunodeficiency virus variants in the pathogenesis of acquired immunodeficiency syndrome: studies on sequential HIV isolates. J Virol 1989b;63(5):2118-25. 
Thea DM, St Louis ME, Atido U, Kanjinga K, et al. A prospective study of diarrhea and HIV-1 infection among 429 Zairian infants. N Engl J Med 1993;329(23):1696-702. Thomas PA, Ralston SJ, Bernard M, Williams R, O'Donnell R. Pediatric immunodeficiency syndrome: an unusually high incidence of twinning. Pediatrics 1990;86(5):774-7. Tindall B, Cooper DA. Primary HIV infection: host responses and intervention strategies. AIDS 1991;5(1):1-14. Turner BJ, Denison M, Eppes SC, Houchens R, et al. Survival experience of 789 children with the acquired immunodeficiency syndrome. Pediatr Infect Dis J 1993;12(4):310-20. United States Bureau of the Census, Center for International Research, Washington, D.C. HIV/AIDS Surveillance Database, 1994. Vahey MT, Mayers DL, Wagner KF, Chung RC, et al. Plasma HIV RNA predicts clinical outcome on AZT therapy. Xth Int Conf on AIDS, (abstract no. 253B), Aug 7-12, 1994. Varmus H. Retroviruses. Science 1988;240(4858):1427-35. Vella S, Giuliano M, Pezzotti P, Agresti MG, et al. Survival of zidovudine-treated patients with AIDS compared with that of contemporary untreated patients. Italian Zidovudine Evaluation Group. JAMA 1992;267(9):1232-6. Vella S, Giuliano M, Dally LG, Agresti MG, et al. Long-term follow-up of zidovudine therapy in asymptomatic HIV infection: results of a multicenter cohort study. J Acquir Immune Defic Syndr 1994;7(1):31-8. Vermund SH, Hoover DR, Chen K. CD4+ counts in seronegative homosexual men. N Engl J Med 1993a;328(6):442. Vermund SH. Rising HIV-related mortality in young Americans. JAMA 1993b;16,269(23):3034-5. Veugelers PJ, Page KA, Tindall B, Schechter MT, et al. Determinants of HIV disease progression among homosexual men registered in the Tricontinental Seroconverter Study. Am J Epidemiol 1994;15,140(8):747-58. Volberding PA, Lagakos SW, Koch MA, Pettinelli C, et al. Zidovudine in asymptomatic human immunodeficiency virus infection: a controlled trial in persons with fewer than 500 CD4-positive cells per cubic millimeter. N Engl J Med 1990;322(14):941-9. Volberding PA, Lagakos SW, Grimes JM, Stein DS, et al. The duration of zidovudine benefit in persons with asymptomatic HIV infection. Prolonged evaluation of protocol 019 of the AIDS Clinical Trials Group. JAMA 1994;272(6):437-42. Volberding PS, Graham NM. Initiation of antiretroviral therapy in HIV infection: a review of interstudy consistencies. J Acquir Immune Defic Syndr 1994;7(suppl 2):S12-22. Wages JM Jr, Hamdallah M, Calabro MA, Fowler AK, et al. Clinical performance of a polymerase chain reaction testing algorithm for diagnosis of HIV-1 infection in peripheral blood mononuclear cells. J Med Virol 1991;33(1):58-63. Wain-Hobson S, Sonigo P, Danos O, Cole S, Alizon M. Nucleotide sequence of the AIDS virus, LAV. Cell 1985;40(1):9-17. Walzer PD, Perl DP, Krogstad DG, Rawson PG, Schultz MG. Pneumocystis carinii pneumonia in the United States. Epidemiologic, diagnostic and clinical features. Ann Intern Med 1974;80(1):83-93. Walzer PD. Pneumocystis carinii. In: Mandell GL, et al., eds. Principles and Practices of Infectious Diseases; 3rd ed. New York: Churchill Livingstone, 1990, pp. 2103-10. Ward JW, Bush TJ, Perkins HA, Lieb LE, et al. The natural history of transfusion-associated infections with the human immunodeficiency virus. N Engl J Med 1989;321(14):947. Weber R, Ledergerber B, Opravil M, Siegenthaler W, Luthy R. Progression of HIV infection in misusers of injected drugs who stop injecting or follow a programme of maintenance treatment with methadone. Br Med J 1990;301(6765):1362-5. 
Wei X, Ghosh SK, Taylor ME, Johnson VA, et al. Viral dynamics in human immunodeficiency virus type 1 infection. Nature 1995;373:117-22. Weinberg MS, Williams CJ. Male Homosexuals. London: Oxford University, 1974. Weiss RA. How does HIV cause AIDS? Science 1993;260(5112):1273-9. Weiss RA, Jaffe HW. Duesberg, HIV and AIDS (commentary). Nature 1990;345:659-60. Weiss SH, Klein CW, Mayur RK, Besra J, Denny TN. Idiopathic CD4+ T-lymphocytopenia. Lancet 1992;340:608-9. Welles SL, et al. Prognostic capacity of plasma HIV-1 RNA copy number in ACTG 116A. Second National Conference on Human Retroviruses and Related Infections, Washington, D.C. Jan 29-Feb 2, 1995. Weniger BG, Limpakarnjanarat K, Ungchusak K, Thanprasertsuk S, et al. The epidemiology of HIV infection and AIDS in Thailand. AIDS 1991;5(suppl 2):S71-85. WHO (World Health Organization). AIDS: images of the epidemic. 1994. WHO. The current global situation of the HIV/AIDS pandemic. Jan 3, 1995a. WHO. Surveillance, Evaluation and Forecasting Unit, Division of Technical Cooperation, Global Programme on AIDS. Personal communication. June 1, 1995b. Williams CKO, et al. AIDS-associated cancers. In: Essex M, et al., eds. AIDS in Africa. New York: Raven Press, 1994, pp. 325-71. Wofsy CB, Cohen JB, Hauer LB, Padian NS, et al. Isolation of AIDS-associated retrovirus from genital secretions of women with antibodies to the virus. Lancet 1986;8,1(8480):527-9. Yerly S, Chamot E, Hirschel B, Perrin LH. Quantitation of human immunodeficiency virus provirus and circulating virus: relationship with immunologic parameters. J Infect Dis 1992;166(2):269-76. Young KY, Nelson RP, Good RA. Discordant human immunodeficiency virus infection in dizygotic twins detected by polymerase chain reaction. Pediatr Infect Dis J 1990;9(6):454-6. Zaccarelli M, Gattari P, Rezza G, Conti S, et al. Impact of HIV infection on non-AIDS mortality among Italian injecting drug users. AIDS 1994;8(3):345-50. Zagury D, Bernard J, Leibowitch J, Safai B, et al. HTLV-III in cells cultured from semen of two patients with AIDS. Science 1984;226(4673):449-51. Zagury D, Bernard J, Leonard R, Cheymier R, et al. Long-term cultures of HTLV-III infected T cells: a model of cytopathology of T-cell depletion in AIDS. Science 1986;231(4740):850-3. Zakowski P, Fligiel S, Berlin GW, Johnson L. Disseminated Mycobacterium avium-intracellulare infection in homosexual men dying of acquired immunodeficiency. JAMA 1982;248(22):2980-2. Zhu T, Ho DD. Was HIV present in 1959? Nature 1995;374:503-4. back to top Last Updated March 23, 2010
A cloudburst is an extreme amount of precipitation, sometimes with hail and thunder, which normally lasts no longer than a few minutes but is capable of creating flood conditions. Colloquially, the term cloudburst may be used to describe any sudden, heavy, brief, and usually unforecast rainfall. Meteorologists define a cloudburst as rainfall at a rate equal to or greater than 100 mm (3.94 inches) per hour; a short worked example applying this threshold appears at the end of this article. The associated convective cloud can extend up to a height of 15 km above the ground. During a cloudburst, more than 20 mm of rain may fall in a few minutes, and the results can be disastrous; cloudbursts are also a common cause of flash floods. Rapid precipitation from cumulonimbus clouds is possible due to the so-called Langmuir precipitation process, in which large droplets grow rapidly by coagulating with smaller droplets that fall more slowly.
Record Cloudbursts
|Duration||Rainfall||Location||Date|
|1 minute||1.5 inches (38.10 mm)||Basse-Terre, Guadeloupe||26 November 1970|
|5.5 minutes||2.43 inches (61.72 mm)||Port Bells, Panama||29 November 1911|
|15 minutes||7.8 inches (198.12 mm)||Plumb Point, Jamaica||12 May 1916|
|20 minutes||8.1 inches (205.74 mm)||Curtea-de-Arges, Romania||7 July 1947|
|40 minutes||9.25 inches (234.95 mm)||Guinea, Virginia, USA||24 August 1906|
|1 hour||9.84 inches (250 mm)||Leh, Ladakh, India||August 5, 2010|
|13 hours||45.03 inches (1,144 mm)||Foc-Foc, La Réunion||January 8, 1966|
|1 hour||5.67 inches (144 mm)||NDA, Pune, India||September 29, 2010|
|1.5 hours||7.15 inches (182 mm)||Pashan, Pune, India||October 4, 2010|
|10 hours||37 inches (940 mm)||Mumbai, India||July 26, 2005|
|20 hours||91.69 inches (2,329 mm)||Ganges Delta, India||January 8, 1966|
|1 hour||6 inches (150 mm)||Port Louis, Mauritius||March 31, 2013|
Cloudbursts in the Indian subcontinent
In the Indian subcontinent, a cloudburst usually occurs when a moisture-laden monsoon cloud drifts northwards from the Bay of Bengal or Arabian Sea across the plains and onto the Himalaya, then bursts, bringing rainfall as high as 75 millimetres per hour.
- In August 1998, a cloudburst in Goalpara killed about 400 people.
- On September 28, 1908, a cloudburst resulted in a flood in which the Musi River swelled up to 38–45 m. About 15,000 people were killed and around 80,000 houses were destroyed along the banks of the river.
- In July 1970, a cloudburst in the upper catchment area led to a 15 metre rise in the Alaknanda river in Uttarakhand. The entire river basin, from Hanumanchatti near the pilgrimage town of Badrinath to Haridwar, was affected, and an entire village was swept away.
- On August 15, 1997, 115 people were killed when a cloudburst struck Chirgaon in Shimla district, Himachal Pradesh.
- On August 17, 1998, a massive landslide following heavy rain and a cloudburst at Malpa village killed 250 people, including 60 Kailash Mansarovar pilgrims, in the Kali valley of the Kumaon division, Uttarakhand. Among the dead was the Odissi dancer Protima Bedi.
- On July 16, 2003, about 40 people were killed in flash floods caused by a cloudburst at Shilagarh in the Gursa area of Kullu, Himachal Pradesh.
- On July 6, 2004, at least 17 people were killed and 28 injured when three vehicles were swept into the Alaknanda river by heavy landslides triggered by a cloudburst that left nearly 5,000 pilgrims stranded near the Badrinath shrine area in Chamoli district, Uttarakhand.
- On 26 July 2005, a cloudburst caused approximately 950 millimetres (37 in) of rainfall in Mumbai over a span of eight to ten hours; the deluge completely paralysed India's largest city and financial centre, leaving over 5,000 dead.
- On August 16, 2007, 52 people were confirmed dead when a severe cloudburst occurred in Bhavi village in Ghanvi, Himachal Pradesh.
- On August 7, 2009, 38 people were killed in a landslide resulting from a cloudburst in the Nachni area near Munsiyari in Pithoragarh district of Uttarakhand.
- On August 6, 2010, a series of cloudbursts left over 1,000 people dead (updated number) and over 400 injured in the frontier town of Leh in the Ladakh region of Jammu and Kashmir.
- On September 15, 2010, a cloudburst in Almora, Uttarakhand, swept away two villages, one of them Balta, leaving few survivors. The authorities of Uttarakhand declared Almora a town suffering the brunt of the cloudburst; had the clouds drifted a little further, the town of Ranikhet might also have been flooded.
- On September 29, 2010, a cloudburst at the NDA (National Defence Academy), Khadakwasla, Pune, in Maharashtra left many injured and damaged hundreds of vehicles and buildings in the resulting flash flood.
- On October 4, 2010, another cloudburst struck Pashan, Pune, in Maharashtra, leaving 4 dead and many injured, and damaging hundreds of vehicles and buildings. It entered the record books as the most intense and heaviest rainfall recorded in Pune in 118 years, surpassing the record of 149.1 mm in 24 hours set on October 24, 1892. For the first time in the history of the IT (Information Technology) hub of Pune, the flash flood forced residents to spend the night in their vehicles, offices and whatever shelter was available amid the traffic jams.
- The October 4, 2010 cloudburst in Pashan may also have been the world's first cloudburst predicted well in advance. From 2:30 pm that afternoon, a young weather scientist in the city was frantically sending out SMSes to the higher authorities warning of an impending cloudburst over the Pashan area. Despite the precautions taken, four people died, including one young scientist.
- On June 9, 2011, near Jammu, a cloudburst left 4 people dead and several injured on the Doda-Batote highway, 135 km from Jammu. Two restaurants and many shops were washed away.
- On 20 July 2011, a cloudburst in upper Manali, 18 km from Manali town in Himachal Pradesh, left 2 dead and 22 missing.
- On September 15, 2011, a cloudburst was reported in the Palam area of the National Capital Territory of Delhi. The arrivals area of Terminal 3 at Indira Gandhi International Airport was flooded by the immense downpour. No lives were lost, but the hour-long rain was enough to enter the record books as the heaviest rainfall recorded in the city since 1959.
- On September 14, 2012, a cloudburst in Rudraprayag district killed 39 people.
Intense rainfall spell over parts of Delhi on the afternoon of 15 September 2011: On that afternoon, an intense rainfall spell was observed over IGI Airport, Delhi; during 1435-1535 the airport experienced a cloudburst-like event, with rainfall in that period reaching 117 mm (11.7 cm). Fortunately, as it was not a peak rush hour and a good weather monitoring and warning system was in place with ATC, only one flight was diverted, though many were asked to circle.
This was an unusually intense spell for the Delhi airport, and there was large-scale flooding of the approach road from the runway underpass on the city side of Palam. The rainfall observed during 0830-1730 IST on 15 September 2011 at various stations over the Delhi region was as follows:
|Name of Station||Rainfall (mm)|
|IGI Airport Palam||120.0|
|Safderjung Airport||35.0|
|Lodi Road||37.7|
|Ayanagar||31.8|
|Delhi Ridge||3.8|
The main reason for such intense rainfall over parts of Delhi was the interaction of westerly and easterly winds, leading to intense convection.
On August 4, 2012, cloudbursts struck Uttarakhand and Jammu & Kashmir, bringing heavy rainfall and flash floods.
Uttarakhand: six killed in landslides and flash floods; pilgrims stranded.
Cloudburst near Manali washes away two bridges: Two bridges and a few electricity poles were washed away in flash floods triggered by a cloudburst near the Rohtang tunnel in the Solang Nullah area at Dhundi, 30 km from Manali, on Friday night. Fearing flooding, residents of five villages located downstream were moved to a safer place.
Landslides block Srinagar-Jammu highway: The Srinagar-Jammu highway was closed on Saturday following landslides in the Ramban sector. Heavy rains triggered the landslides on Saturday morning, and 22 people were trapped in a flash flood near Jammu.
As reported in the Times of India (September 14, 2012), over a month after a similar tragedy in Uttarkashi, 45 people were killed, 15 injured and 40 reported missing on September 14, 2012 in a cloudburst that flattened homes in the Ukhimath area of Uttarakhand's Rudraprayag district. "22 bodies have been recovered until 22:00 hrs IST and 40 persons are still missing," according to Disaster Management and Mitigation Department (DMMD) officials. Ukhimath tehsil and nearby villages such as Chunni, Mangoli, Kimana, Sansari, Giriya, Brahmankholi, Premnagar and Juatok were the worst hit. Most of the victims died in their sleep as the calamity flattened homes in the early hours. Communication and power lines were disrupted, and traffic along several roads, including national highways in the area, was affected. The Rishikesh-Badrinath and Rishikesh-Gangotri highways were closed by landslips triggered by incessant rains. Expressing shock over the tragedy, Chief Minister Vijay Bahuguna asked the District Magistrate to take up relief and rescue operations on a war footing in the affected area and sanctioned Rs 10 crore for the purpose.
Pakistan
- On July 1, 1977, the city of Karachi was flooded when 207 millimetres (8.1 in) of rain was recorded in 24 hours.
- On July 23, 2001, 620 millimetres (24 in) of rainfall was recorded in 10 hours in Islamabad. It was the heaviest rainfall in 24 hours in Islamabad, and at any locality in Pakistan, during the past 100 years.
- On July 23, 2001, 335 millimetres (13.2 in) of rainfall was recorded in 10 hours in Rawalpindi.
- On July 18, 2009, 245 millimetres (9.6 in) of rainfall occurred in just 4 hours in Karachi, causing massive flooding across the metropolis.
- On July 29, 2010, a record-breaking 280 millimetres (11 in) of rain was recorded in Risalpur in 24 hours.
- On July 29, 2010, a record-breaking 274 millimetres (10.8 in) of rain was recorded in Peshawar in 24 hours.
- On August 9, 2011, 176 millimetres (6.9 in) of rainfall recorded in 3 hours flooded the main streets of Islamabad.
- On August 10, 2011, a record-breaking 291 millimetres (11.5 in) of rainfall was recorded in 24 hours in Mithi, Sindh, Pakistan.
- On August 11, 2011, a record-breaking 350 millimetres (14 in) of rainfall was recorded in 24 hours in Tando Ghulam Ali, Sindh, Pakistan.
- On September 7, 2011, a record-breaking 312 millimetres (12.3 in) of rainfall was recorded in 24 hours in Diplo, Sindh, Pakistan.
- On September 9, 2012, Jacobabad received its heaviest rainfall in the last 100 years, recording 380 millimetres (15 in) in 24 hours; as a result, over 150 houses collapsed.
Bangladesh
- In September 2004, 341 millimetres (13.4 in) of rain was recorded in Dhaka in 24 hours.
- On June 11, 2007, 425 millimetres (16.7 in) of rain fell in 24 hours in Chittagong.
- On July 29, 2009, a record-breaking 333 millimetres (13.1 in) of rain was recorded in Dhaka in 24 hours, surpassing the previous record of 326 millimetres (12.8 in) set on July 13, 1956.
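The 100 mm/h criterion defined at the top of this article can be applied mechanically to reported events. The minimal Python sketch below is not part of the original article: the event tuples are transcribed from the record table and event lists above, and the script itself is purely illustrative. It converts each reported total into an average rate and flags those meeting the cloudburst threshold; note that averaging over a long duration can hide short cloudburst-intensity spells, which is why the 10-hour Mumbai total comes out just below the threshold.

```python
# Classify reported rainfall events against the >= 100 mm/h
# cloudburst criterion given in the article.
CLOUDBURST_MM_PER_HOUR = 100.0

# (location, duration in hours, rainfall in mm) -- transcribed from
# the record table and event lists above.
events = [
    ("Basse-Terre, Guadeloupe", 1 / 60, 38.10),
    ("Plumb Point, Jamaica", 0.25, 198.12),
    ("Leh, Ladakh, India", 1.0, 250.0),
    ("Mumbai, India", 10.0, 940.0),
]

for place, hours, mm in events:
    rate = mm / hours  # average rainfall rate over the whole event, mm/h
    label = "cloudburst" if rate >= CLOUDBURST_MM_PER_HOUR else "below threshold"
    print(f"{place}: {rate:8.1f} mm/h -> {label}")
```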
<urn:uuid:25716f7a-8522-49ba-a0fc-19371f20e949>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Cloudburst
2013-05-25T20:09:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00052-ip-10-60-113-184.ec2.internal.warc.gz
en
0.919652
3,614
British Government Department The Admiralty is the department of His Majesty's Government charged with the administration of the Royal Navy and Royal Marines and with the naval defence of the British Empire in general. History: The Admiralty was established in 1708. However, some of its components existed earlier. The Navy Board, eventually abolished in 1832, had been created by Henry VIII in 1546 and the Board of Admiralty was established in 1628. The Board of Admiralty: The Board of Admiralty is the body which actually runs the Admiralty, and by extension the Royal Navy. Its membership comprises the First Lord and the Civil Lord, both civilian politicians, and four Sea Lords, all admirals, these six being collectively known as the Lords Commissioners of the Admiralty, as well as several other members. Despite their titles, the lords commissioners do not actually have to be peers. - The First Lord of the Admiralty is the Cabinet minister ultimately responsible for Admiralty and Royal Navy affairs. - The Civil Lord of the Admiralty is a non-Cabinet minister, who has direct responsibility for the Royal Navy's large civilian staff, the Civil Engineer-in-Chief's Department, the Royal Greenwich Hospital, and naval lands. - The First Sea Lord and Chief of the Naval Staff is the professional head of the Royal Navy. He is directly responsible for the direction of wartime naval strategy, planning, operations and intelligence. He is also responsible for all matters relating to the Royal Naval Reserve and, until 1923, HM Coastguard. - The Second Sea Lord and Chief of Naval Personnel is responsible for manning and mobilisation and other personnel questions. He is also directly responsible for all matters relating to the Royal Marines. - The Third Sea Lord and Controller of the Navy is responsible for the Naval Construction, Engineer-in-Chief's, Ordnance, Dockyards, and Scientific Research and Development Departments and the Admiralty Compass Observatory. - The Fourth Sea Lord and Chief of Naval Supplies is responsible for the Contract and Purchase Department and the Medical Department. - The Parliamentary and Financial Secretary to the Admiralty is a lord commissioner from 1929 (although he is a member of the Board throughout the 1920s) and is responsible for finance, the preparation of estimates, and parliamentary business. He is a non-Cabinet minister. - The Permanent Secretary to the Admiralty is a member of the Board of Admiralty from 1921. He is a civil servant responsible for the general administration of the Admiralty. He heads the Secretary's Department. - The Assistant Chief of the Naval Staff is a lord commissioner, at least from 1924. Personnel: Many shore functions of the Royal Navy are civilian-run, and the Admiralty has many more civilian employees than do the War Office or the Air Ministry. On 1 April 1920, the Admiralty employs 13,432 civilian staff, although this has shrunk to 7,433 by 1 April 1930. Office of the Civil Lord Civil Lords of the Admiralty: Lord Lytton (Con), 1919-26 Oct 1920; Lord Onslow (Con), 26 Oct 1920-1 Apr 1921; Bolton Eyres-Monsell MP (Con), 1 Apr 1921-19 Oct 1922; Lord Linlithgow (Con), 31 Oct 1922-22 Jan 1924; Frank Hodges MP (Lab), 24 Jan-3 Nov 1924; Lord Stanhope (Con), 11 Nov 1924-4 Jun 1929; George Hall MP (Lab), 11 Jun 1929-1931. - Civil Engineer-in-Chief's Department: Responsible for all naval shore construction. 
Office of the Controller of the Navy
Third Sea Lords and Controllers of the Navy: Rear-Admiral Sir William Nicholson, 1919-1920; Rear-Admiral (Sir) Frederick Field, Mar 1920-May 1923; Rear-Admiral Cyril Fuller, May 1923-Apr 1925; Vice-Admiral Sir Ernle Chatfield, 1925-1928.
- Armament Supply Department: Dissolved in 1923.
- Dockyards Department Director of Dockyards: Vice-Admiral Albert Addison, 1928-.
- Engineer-in-Chief's Department: The engineer-in-chief is the senior serving naval engineer officer, and the department is responsible for naval machinery. Engineer-in-Chief of the Fleet: Engineer Vice-Admiral Sir Robert Dixon, 1922-1928.
- Naval Construction Department: Headed by the director of naval construction, the department is responsible for the construction of all naval vessels.
- Naval Ordnance Department: Responsible for naval weaponry and ammunition. Ordnance is manufactured by the Royal Ordnance Factories or by private contractors, who are especially employed to make heavy guns. Director of Naval Ordnance: Captain Roger Backhouse, 1920-1922.
- Naval Ordnance Inspection Department: Headed by the chief inspector of naval ordnance, the department inspects weapons and ammunition.
- Scientific Research and Experiment Department: The department initiates, investigates and advises on proposals for the application of science and engineering to naval warfare. The department is headed by the assistant controller of the Navy (research and development). A director of scientific research is first appointed in 1920 to head the scientific side of the department. The Technical Records Section is established in 1929. The department runs several Admiralty Experimental Establishments.
- Signal Department Director of Signals: Captain James Somerville, 1925-1927.
- Directorate of Naval Equipment Directors of Naval Equipment: Rear-Admiral Edward Bruen, 1920-1922; Rear-Admiral Henry Parker, 1926-1928.
- Directorate of Torpedoes and Mining Director of Torpedoes and Mining: Rear-Admiral Frederick Field, 1918-Mar 1920.
Office of the First Lord of the Admiralty
First Lords of the Admiralty: Walter Long MP (Con), 1919-13 Feb 1921; Lord Lee of Fareham (Con), 13 Feb 1921-19 Oct 1922; Leo Amery MP (Con), 24 Oct 1922-22 Jan 1924; Lord Chelmsford, 22 Jan-3 Nov 1924; William Bridgeman MP (Con), 6 Nov 1924-4 Jun 1929; Albert Alexander MP (Lab), 7 June 1929-1931.
Naval Secretary to the First Lord of the Admiralty: Rear-Admiral Michael Hodges, 1923-1925.
Hydrographer's Department: Headed by the hydrographer of the Navy, an admiral, the department is responsible for all marine surveying and chartmaking. The hydrographer is also responsible for the Royal Observatory at Greenwich.
Naval Staff: The Naval Staff runs the navy's operations and wartime strategy, and has a number of separate divisions concerned with strategy and tactics, the planning and conduct of operations, and the collection and dissemination of intelligence.
First Sea Lords and Chiefs of the Naval Staff: Admiral of the Fleet Lord Beatty, Nov 1919-July 1927.
Naval Assistants to the Chief of the Naval Staff: Captain Sidney Bailey, 1925-1927; Captain William James, 1928-1929.
Deputy Chiefs of the Naval Staff: Vice-Admiral Sir Osmond Brock, 1919-1921; Vice-Admiral Sir Roger Keyes, 1921-1925; Vice-Admiral Sir Frederick Field, 1925-June 1928.
Assistant Chiefs of the Naval Staff: Rear-Admiral Sir Ernle Chatfield, 1920-1922; Rear-Admiral Cyril Fuller, 1 Dec 1922-May 1923; Rear-Admiral Frederic Dreyer, 1924-1927; Rear-Admiral Dudley Pound, 1927-.
- Gunnery Division Directors of Gunnery: Captain Frederic Dreyer, 1920-1922; Captain Henry Brownrigg, 1926-1927. Deputy Director of Gunnery: Captain Henry Brownrigg, 1925-1926. - Naval Intelligence Division (NID) - Navigation Division Director of Navigation: Captain Frederick Loder-Symonds, 1921-1923. - Operations Division Directors of Operations: Captain Henry Parker, 1922-1924; Captain Percy Noble, 1928-. - Plans Division Directors of Plans: Captain Cyril Fuller, 1917-1920; Captain Dudley Pound, 1922-1925. Office of the Parliamentary and Financial Secretary Parliamentary and Financial Secretaries to the Admiralty: Thomas Macnamara MP (Lib), 1908-2 Apr 1920; Sir James Craig MP (Con), 2 Apr 1920-1 Apr 1921; Leo Amery MP (Con), 1 Apr 1921-19 Oct 1922; Bolton Eyres-Monsell MP (Con), 31 Oct 1922-25 May 1923; Archibald Boyd-Carpenter MP (Con), 25 May 1923-22 Jan 1924; Charles Ammon MP (Lab), 23 Jan-3 Nov 1924; John Davidson MP (Con), 11 Nov 1924-16 Dec 1926; Cuthbert Headlam MP (Con), 16 Dec 1926-4 Jun 1929; Charles Ammon MP (Lab), 11 Jun 1929-1931. - Accountant-General's Department: In addition to carrying out normal financial duties, this department maintains records of ships' establishments, petty officers' and seamen's service and medals, seamen's wills and effects, and the payment of prize money and bounty. It is responsible to the fourth sea lord for pay and pensions, to the permanent secretary for the general oversight of Admiralty expenditure (from 1921), and to the parliamentary and financial secretary for naval accounts. Office of the Chief of Naval Personnel Second Sea Lords and Chiefs of Naval Personnel: Admiral Sir Montague Browning, 1919-Sept 1920; Admiral Sir Michael Hodges, 1927-. - Royal Marine Office: Headed by the adjutant-general, Royal Marines, the office deals with the administration of the Royal Marines. Adjutant-General, Royal Marines: Lieutenant-General Alexander Hutchison, 1924-1927. - Directorate of Mobilisation Assistant Director of Mobilisation: Captain Max Horton, 1926-1928. - Directorate of Training and Staff Duties Director of Training and Staff Duties: Captain Sidney Meyrick, 1926-1927. Secretary's Department: Headed by the permanent secretary, the Admiralty's senior civil servant, the department is responsible for general administration and co-ordination. The Military Branch is responsible for the distribution of the Fleet. The Naval Branch is responsible for the manning of the Fleet. The Civil Branch is responsible for the civil establishment of the Admiralty. The Legal Branch is responsible for discipline, courts martial, courts of inquiry, naval prisons, and similar matters. The Treasury Solicitor deals with external legal matters. The Admiralty Record Office holds all records generated by the Admiralty. Permanent Secretary to the Admiralty: Sir Oswyn Murray, 1917-1936. Office of the Chief of Naval Supplies Fourth Sea Lord and Chief of Naval Supplies: Rear-Admiral Hon Algernon Boyle, 1920-1924. - Contract and Purchase Department: Although ultimately responsible to the fourth sea lord, the department is responsible to the parliamentary and financial secretary for purchasing and to the director of victualling for storekeeping. - Medical Department: Headed by the director-general of the Medical Department, the senior serving naval medical officer. - Paymaster Department Paymaster Director-General: Paymaster Rear-Admiral Bertram Allen, 1926-1929. 
Headquarters: The Admiralty occupies four buildings at the northern end of Whitehall, London SW1, adjoining Trafalgar Square. The Old Admiralty was completed in 1726, and was extended by the addition of the New Admiralty, completed in 1895. The Admiralty also occupies offices in Admiralty Arch, a great memorial arch built in 1910 between Trafalgar Square and The Mall. The final building in the complex is Admiralty House, completed in 1788 as the official residence for the first lord of the Admiralty.
Its source is a Latin expression meaning "Ninth." In ancient Rome, the praenomen (first name) was often ordinal -- Primus or Prima for the first child, Secundus or Secunda for the second, and so forth. This name was used to designate the ninth child, the ninth daughter, or a daughter born in the ninth month. With the modern trend toward smaller families, such usage has not been common since the 19th century. Of the spectrum of Latin ordinal names, only Nona and Quintus (designating the fifth boy) are commonly used. Sextus ("sixth"), Septimus ("seventh") and Octavius ("eighth") are also still in occasional use.
Our Osteoporosis Main Article provides a comprehensive look at the who, what, when and how of Osteoporosis
Definition of Osteoporosis
Osteoporosis: Thinning of the bones, with reduction in bone mass, due to depletion of calcium and bone protein. Osteoporosis predisposes a person to fractures, which are often slow to heal and heal poorly. It is most common in older adults, particularly postmenopausal women, and in patients who take steroids or steroidal drugs. Unchecked osteoporosis can lead to changes in posture, physical abnormality (particularly the form of hunched back known colloquially as dowager's hump), and decreased mobility. Treatment of osteoporosis includes exercise (especially weight-bearing exercise that builds bone density), ensuring that the diet contains adequate calcium and other minerals needed to promote new bone growth, use of medications to improve bone density, and sometimes, for postmenopausal women, use of hormone therapy.
Last Editorial Review: 3/19/2012
Our planet receives more solar energy in 45 minutes than mankind will need in a year by the end of the century. Therefore, if the global conversion to renewable fuels is completed by the end of this century, from that time on fuel will be inexhaustible and free. To meet the total global energy need expected for 2100 (when the lifestyle of the third world has reached our present one), all we need to do is cover 3% to 4% of the Sahara (or the Mojave desert) with solar collectors. Therefore, it is time to "retune the economy control loop". The first step in this retuning process must be to get accurate sensors (start dealing with facts)! In order to accomplish this transformation, we must start building the world's first solar-hydrogen demonstration power plant described in my book (Post-oil Energy Technology). A key component of this solar-hydrogen demonstration power plant is the reversible fuel cell (RFC) which I invented (see the Reversible Fuel Cell Model). During the day the RFC operates as an electrolyzer and generates hydrogen, while at night it works as a fuel cell converting hydrogen back into electricity. Naturally, hydrogen would also be available as a transportation fuel, shipped and distributed as LNG is today. Considering that fuel cells are twice as efficient as internal combustion (IC) engines, hydrogen will not only be less expensive, but should provide twice the mileage of the IC engine and will emit no pollutant, only distilled water. Figure 2 provides a "bird's eye view" of my proposed solar-hydrogen power plant. If this plant is built close to an existing fossil power plant, it could also convert the carbon dioxide emission of that conventional power plant into methanol fuel. If an electric grid exists in the area, the excess solar electricity can be "stored" on that grid.
Figure 2: The components of the world's first solar-hydrogen power plant.
As was shown in Table 1, all fossil and nuclear fuel deposits are exhaustible; in addition, fossil is dirty and nuclear is unsafe. Similarly, biofuels generate carbon emissions and interfere with the food supply. In contrast, solar electricity and solar hydrogen are already economical (Figure 3), and the transition can be done in a calm, orderly and economical fashion.
Figure 3: NREL projections of renewable energy cost 1980 to 2020.
I believe that the time for building has arrived, that it is time for mankind to install a new "control loop" for our economy, but what I believe is completely irrelevant! What is relevant is that building and operating this, the world's first solar-hydrogen demonstration power plant, will close the debate on "what to do?" and will initiate action! The factual data generated will force "top management" (the voters in our free society) to tell our "operators" in the control room (the leaders of mankind) to stop running the system in manual on-off mode and reconfigure our control loop to optimize it for the smooth transition to the new inexhaustible energy economy. Feedback control alone (market forces, cap and trade, increased fuel costs, taxation, etc.) will not be fast enough to convert our economy, and in addition it will stifle global economic growth.
Instead, during the transition, temporary feedforward is required (applying large-scale public funding) to develop and demonstrate the feasibility of cost-effective, clean, free and inexhaustible energy technologies. Achieving these goals and starting this third industrial revolution will need vision and commitment, but so did the landing on the Moon. The scale of the effort will exceed that of the Marshall Plan. The transition will start not only with the building of solar-hydrogen power plants, but also with the installation of millions of solar roofs to make millions of homes "energy free" and with the conversion to a totally electric transportation system.
It is debatable whether our "operators" (political leaders) have any idea what process control is or how PID loops are tuned. I doubt that they understand the requirements for converting a batch process (operating with exhaustible resources) to a continuous one (operating with inexhaustible resources). It is also debatable how much fossil or nuclear resource is left on the planet, how much climate change we can live with, or how long we can use our atmosphere as a global garbage dump. What is not debatable is that the conversion to a clean, free and inexhaustible energy economy is inevitable, and understanding process control can help in the transition. The other thing that is not debatable is that we must not give our grandchildren reason to ask: "Why did you not act?"
Check out other of Béla Lipták's articles on Solar Energy.
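For readers who tune loops, a minimal sketch may make the analogy concrete. This is illustrative only and not from the book: the first-order process model and the gains are invented. It contrasts pure feedback, which must wait for an error to develop, with feedback plus feedforward, which acts on a measured disturbance before it shows up as error.

# Illustrative sketch: PI feedback alone vs. PI feedback plus feedforward
# on a first-order process with a measured step disturbance.
# All numbers are hypothetical, chosen for readability.

def simulate(use_feedforward, steps=200, dt=0.1):
    y, integral = 0.0, 0.0                      # process output, integral of error
    setpoint, kp, ki, kff = 1.0, 2.0, 1.0, 1.0  # hypothetical tuning
    history = []
    for k in range(steps):
        disturbance = 0.5 if k > 100 else 0.0   # measured step disturbance
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral          # feedback (PI) action
        if use_feedforward:
            u -= kff * disturbance              # cancel the disturbance pre-emptively
        # first-order process: dy/dt = -y + u + disturbance
        y += dt * (-y + u + disturbance)
        history.append(y)
    return history

fb_only = simulate(False)
fb_plus_ff = simulate(True)
# Feedback alone only reacts after the disturbance has perturbed the output;
# with feedforward the disturbance is largely cancelled before that happens.
print(max(abs(v - 1.0) for v in fb_only[101:]),
      max(abs(v - 1.0) for v in fb_plus_ff[101:]))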
"...reassuring in its analysis of the moral core of the series. Even parents who do not care to read the Potter books, but would like to know what they are about, will find that the author's analysis can help them discuss the book's moral choices and themes with their children." Since the 1997 release of J. K. Rowling's first novel - Harry Potter and the Sorcerer's Stone - no series of children's books has been more incredibly popular or widely influential. How do we explain the enormous appeal of these stories to children? Should parents welcome this new interest in reading among their kids or worry, along with the critics, that the books encourage either moral complacency or a perverse interest in witchcraft and the occult? In this original interpretation of the Harry Potter sensation, Edmund M. Kern argues that the attraction of these stories to children comes not only from the fantastical elements embedded in the plots, but also from their underlying moral messages. Children genuinely desire to follow Harry, as he confronts a host of challenges in an uncertain world, because of his desire to do the right thing. Harry's coherent yet flexible approach to dealing with evil reflects an updated form of Stoicism, says Kern. He argues that Rowling's great accomplishment in these books is to have combined imaginative fun and moral seriousness. Kern's comprehensive evaluation of the Harry Potter stories in terms of ethical questions reveals the importance of uncertainty and ambiguity in Rowling's imaginative world and highlights her call to meet them with typically Stoic virtues: constancy, endurance, perseverance, self-discipline, reason, solidarity, empathy, and sacrifice. Children comprehend that growing up entails some perplexity and pain, that they cannot entirely avoid problems, and that they can remain constant in circumstances beyond their control. In essence, Harry shows them how to work through their problems, rather than seek ways around them. Despite the fantastical settings and events of Harry's adventures, children are quick to realize that they are just a weird reflection of the confusing and disturbing circumstances found in the real world. Kern also shows adults how much they can gain by discussing with children the moral conundrums faced by Harry and other characters. The author outlines the central morals of each book, explains the Stoic principles found in the stories, considers the common critiques of the books, discusses Rowling's skillful blend of history, legend, and myth, and provides important questions for guiding children through Harry's adventures. This fresh, instructive, and upbeat guide to Harry Potter will give parents many useful and educational suggestions for discussing the moral implications of this continuously popular series of books with their children. Note: This book is not authorized, approved, licensed, or endorsed by J. K. Rowling, Warner Bros., or any other individual or entity associated with the Harry Potter books or movies. Harry Potter is a registered trademark of Warner Bros. Entertainment Inc. Book Binding: Paperback Shipping Weight: 1lbs Edmund M. Kern (Appleton, WI) is associate professor of history at Lawrence University.
Equilibrium inorganic chemistry underlies the composition and properties of the aquatic environment and provides a sound basis for understanding both natural geochemical processes and the behavior of inorganic pollutants in the environment. Designed for readers having basic chemical and mathematical knowledge, this book includes material and examples suitable for undergraduate students in the early stages of chemistry, environmental science, geology, irrigation science and oceanography courses. Aquatic Environmental Chemistry covers the composition and underlying properties of both freshwater and marine systems and, within this framework, explains the effects of acidity, complexation, oxidation and reduction processes, and sedimentation. The format adopted for the book consists of two parallel columns. The inner column is the main body of the book and can be read on its own. The outer column is a source of useful secondary material where comments on the main text, explanations of unusual terms and guidance through mathematical steps are to be found. A wide range of examples to explain the behavior of inorganic species in freshwater and marine systems are used throughout, making this clear and progressive text an invaluable introduction to equilibrium chemistry in solution. About the Author(s) Alan G. Howard, Senior Lecturer, Department of Chemistry, University of Southampton
Imagine if instead of a finger prick, you could measure your blood glucose with a puff of breath. The potential of such technology is enormous, going far beyond what would clearly be an improvement in the lives of people with diabetes. But because of its complexity, breath analysis of something as complicated as blood glucose has remained out of reach. With help from an ADA research grant, University of California–Irvine scientist Pietro Galassetti, MD, PhD, is hoping to change that. He's looking for ways to detect changes in blood glucose levels in the breath using technology originally developed to sense chemicals in the atmosphere. If he can prove the technique works, he hopes private industry can refine it and make breath-based glucose monitors a reality in the next 20 years. "Gases have for decades been courted by researchers," Galassetti says. "If we could find them, it would be the holy grail—it's very easy to take a breath sample." The idea is simple. Think of the body as a car. As it burns fuel—fats and sugars—it creates exhaust. Tweak the fuel mixture, and the hundreds of different gases change, too. When it comes to diabetes, the fuel-mixture metaphor is particularly apt. "The body usually has a balance of energy sources," Galassetti says, but diabetes cuts the potential fuel sources in half: Without insulin, the body can't burn sugar (glucose) and must rely on fat alone for fuel.
Pietro Galassetti, MD, PhD
Director, Metabolism and Bionutrition Core, Institute for Clinical and Translational Science, University of California–Irvine
ADA Clinical/Translational Research Award
In fact, before insulin was discovered in 1921, one of the characteristic signs of end-stage diabetes was the smell of acetone—the same chemical as nail-polish remover—on patients' breath as the body used up its stores of fat. "I hoped it would be something as simple as acetone," Galassetti says. But acetone is just one of hundreds of chemicals the body produces while generating energy, and by the time it becomes easily detected, the body is already at the point of no return. Translating theory to practice is complex. "There's a lot of potential, but so far very little concrete has come out of it," Galassetti says. There have been two main barriers: finding machines sensitive enough to detect all the different gases that make up human breath, and finding algorithms that would enable computers to make sense of the resulting information. The first barrier began to crumble in the 1970s, thanks to a UC–Irvine chemist named F. Sherwood Rowland. Rowland won the Nobel Prize in chemistry in 1995 for his work detecting tiny amounts of gases in the atmosphere. Though he has since retired, his Irvine lab—run since Rowland's retirement by Galassetti's collaborator and UC–Irvine chemistry department chair Donald Blake—is still a leader in the field. "Their main thing is being extremely good at picking up tiny concentrations of gas in extremely large gas mixtures," Galassetti says. The technology Rowland pioneered is now sensitive enough to detect gases in concentrations as low as 10 parts per quadrillion—Galassetti says that's like covering the western United States in white golf balls, then picking 10 red ones out—and was originally deployed to measure the depletion of the Earth's ozone layer. More recently, it saw service during the 2008 Beijing Olympics, measuring pollution levels in that famously smoggy city. Working with Blake, Galassetti has harnessed it to pick apart the gases in exhaled air.
But that's only part of the battle. In the complex stew of hundreds of chemicals that make up breath, the challenge is figuring out which ingredients are relevant. "When glucose changes over time, not one but 20 or 30 different gases in the body change," Galassetti says. "We're looking for a mathematical algorithm that could correlate with enough accuracy that we could just use the breath [for blood glucose and other tests]." To do that, Galassetti and his team take hundreds of samples and run them through the computer to look for what they have in common. To train the machines, Galassetti's team conducts tightly controlled experiments. Volunteers are brought to a normal glucose level using intravenous insulin. Then, taking breath and blood samples at five-minute intervals, the team gives them glucose infusions, bringing them up to the point of hyperglycemia over the course of an hour or so, then back down again to normal. Over the past year, they've tested people in four groups: people who don't have diabetes to create a baseline, and then people with pre-diabetes and types 1 and 2. The data are plugged into a computer, which analyzes and compares more than 100 different variables in the hunt for patterns. So far the researchers have amassed over 100 data sets. "We have very strong data that allow us to predict glucose and insulin levels in healthy subjects," Galassetti says. "With the help of ADA, we're making quite [a lot of] headway." Perfecting the technology could mean an end to painful finger-prick blood tests, making it easier for people with diabetes to monitor their blood glucose levels; at some point, it could conceivably make screenings for pre-diabetes economical on a mass scale. But don't expect to trade in your glucose meter for a Breathalyzer-style monitor any time soon. Galassetti freely admits there's a long way to go before the technology makes it out of the lab. "The way we do it is extremely complex and extremely expensive," he says. "But this proves the point that it can be done."
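The pattern hunting described here, correlating many breath-gas concentrations with measured blood glucose, is at heart a multivariate regression problem. The sketch below is purely illustrative: the gas features and numbers are invented, and the team's actual algorithms are far more sophisticated.

# Purely illustrative: fit a linear model relating breath-gas
# concentrations to measured blood glucose. The features and data are
# synthetic stand-ins, not the team's real variables or method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "breath samples": columns are concentrations (ppb) of three
# hypothetical exhaled gases; rows are samples at five-minute intervals.
X = rng.uniform(0.5, 5.0, size=(60, 3))
true_weights = np.array([12.0, -4.0, 7.0])
glucose = 90 + X @ true_weights + rng.normal(0, 5, size=60)  # mg/dL

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, glucose, rcond=None)

predicted = A @ coef
rmse = np.sqrt(np.mean((predicted - glucose) ** 2))
print(f"fitted intercept/weights: {coef.round(2)}, RMSE: {rmse:.1f} mg/dL")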
Verifying Greenhouse Gas Emissions: Methods to Support International Climate Agreements
FIGURE 2.1 Greenhouse gas emissions by sector in 2000 for Annex I and non-Annex I countries; 2000 is the most recent year for which comprehensive data on the greenhouse gases are available. SOURCE: Data compiled from the Climate Analysis Indicators Tool, Version 6.0, World Resources Institute, <http://cait.wri.org/>.
In most Annex I countries, CO2 from energy use dominates anthropogenic greenhouse gas emissions. The CO2 emissions from fossil-fuel combustion accounted for 80 percent of total greenhouse gas emissions (on a CO2-equivalent basis) in the United States in 2006 (EPA, 2008). Other emissions from the energy sector include CO2 from the non-energy use of fossil fuels (e.g., as petrochemicals, solvents, lubricants), CH4 from fuel production and transport systems (e.g., coal mines, gas pipelines), and N2O from transportation systems.
Carbon Dioxide. Most estimates of CO2 emissions from energy systems are based on self-reporting of fuel consumption. Emissions are estimated from the amount of fuel burned, the carbon content of the fuel, and the efficiency of combustion (i.e., the fraction of fuel that is left unoxidized or incompletely oxidized at the point of combustion as, for example, carbon monoxide or ash). The fraction left unoxidized is small in modern combustion systems, and the IPCC now suggests using the default assumption that 100 percent of the carbon in a fuel is fully oxidized (IPCC, 2006). A challenge is that the amount of fuel burned is generally measured in mass or volume units and the carbon content is not generally measured. There is a good cor-
FIGURE 2.2 National greenhouse gas emissions from all IPCC sectors of the top 20 emitters in 2000. Note that the 27 countries in the European Union are treated as one. SOURCE: Data compiled from the Climate Analysis Indicators Tool, Version 6.0, World Resources Institute.
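To make the estimation method concrete, here is a minimal sketch of the fuel-based calculation described above. The carbon-content figures are illustrative placeholders, not IPCC emission factors; the 44/12 ratio converts mass of carbon to mass of CO2.

# Minimal sketch of fuel-based CO2 estimation:
# emissions = fuel burned x carbon content x fraction oxidized x 44/12.
# Carbon contents below are illustrative placeholders, not IPCC values.

CARBON_CONTENT = {        # tonnes of carbon per tonne of fuel (illustrative)
    "coal": 0.70,
    "fuel_oil": 0.85,
    "natural_gas": 0.75,
}

def co2_emissions(fuel: str, tonnes_burned: float,
                  fraction_oxidized: float = 1.0) -> float:
    """Return tonnes of CO2. The IPCC (2006) default assumes 100 percent
    of the carbon in a fuel is fully oxidized (fraction_oxidized = 1.0)."""
    carbon = tonnes_burned * CARBON_CONTENT[fuel] * fraction_oxidized
    return carbon * 44.0 / 12.0   # convert mass of C to mass of CO2

# Example: 1,000 tonnes of coal burned
print(f"{co2_emissions('coal', 1000):.0f} t CO2")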
NORD is very grateful to Prateek Sharma, MD, Professor of Medicine, Director of the GI Fellowship Training, University of Kansas School of Medicine, for assistance in the preparation of this report.
Synonyms of Barrett Esophagus
- Barrett metaplasia
- Barrett ulcer
- columnar-lined esophagus
- No subdivisions found.
Barrett esophagus is a condition in which the cells that make up the tissue of the lower end of the esophagus are abnormal. The esophagus is the thin tube that connects the back of the throat to the stomach. Chronic inflammation and ulceration of the lower end of the esophagus eventually cause the cells normally found there to be replaced by cells normally found in the intestines (intestinal metaplasia). Since most patients with Barrett esophagus have acid reflux disease, they suffer from heartburn and/or acid regurgitation; Barrett esophagus does not cause any specific symptoms. The disorder is considered a premalignant condition and affected individuals are at an increased risk (although their overall risk remains low) of developing cancer (adenocarcinoma) of the esophagus. Barrett esophagus occurs more often in individuals with gastroesophageal reflux (GERD), a condition characterized by backflow (regurgitation) of the contents of the stomach into the esophagus. The exact reason these tissue changes occur in Barrett esophagus is unknown. Barrett esophagus does not cause any specific symptoms on its own. Since most affected individuals have GERD, they may have symptoms normally associated with that condition including backflow of food and acid from the stomach into the esophagus (reflux), hoarseness, heartburn, a sore throat, a dry cough, shortness of breath, chest pain, loss of appetite and unintended weight loss. Rarely, some individuals may vomit up small amounts of blood. Some individuals with Barrett esophagus may develop difficulty swallowing (dysphagia), which may indicate narrowing of the esophagus (peptic stricture) or the development of cancer in the esophagus. Individuals with Barrett esophagus are at a greater risk than the general population for developing a form of cancer known as adenocarcinoma of the esophagus. However, the overall risk is still very low; less than 0.5 percent of individuals with Barrett esophagus develop cancer of the esophagus on a yearly basis. The exact cause of Barrett esophagus is unknown. Most cases appear to occur randomly for no apparent reason (sporadically). Barrett esophagus occurs with greater frequency in individuals with GERD. Researchers speculate that the tissue changes that characterize Barrett esophagus are caused by chronic damage to the esophagus as is seen in individuals with chronic GERD. In individuals with GERD, backflow of the contents of the stomach including stomach acids and bile salts repeatedly damages the tissue of the lower esophagus. Over time, the tissue normally found lining the lower esophagus (squamous epithelium) is replaced by tissue normally found in the intestines (intestinal columnar epithelium), a process known as specialized intestinal metaplasia. Some researchers suggest that this may occur because the intestinal tissue is more resistant to damage from stomach acids. Some cases of Barrett esophagus have run in families, suggesting that some individuals have a genetic predisposition to developing the disorder.
A genetic predisposition means a person carries a gene (or genes) for the disease, but it may not be expressed unless it is triggered or "activated" under certain circumstances, such as exposure to particular environmental factors. It is likely that several different factors, including environmental and genetic factors as well as lifestyle choices, cause the distinctive tissue changes that characterize Barrett esophagus. Such factors may also be why individuals with Barrett esophagus are more likely than individuals in the general population to develop adenocarcinoma of the esophagus. Certain risk factors have been identified for developing Barrett esophagus including individuals who are of advancing age (60 or older), white, male, and obese. Smoking may also increase the risk of developing Barrett esophagus. Barrett esophagus affects men approximately twice as often as it does women. The disorder can affect individuals of any age, but is much more likely in older individuals. The average age at diagnosis is 60. It occurs in greater frequency in Caucasians. The exact prevalence of Barrett esophagus is not known because many people may have the disorder, but do not develop symptoms and remain undiagnosed. One estimate placed the prevalence of Barrett esophagus as high as approximately 700,000 to 1 million adults in the United States. Symptoms of the following disorders can be similar to those of Barrett esophagus. Comparisons may be useful for a differential diagnosis. Gastroesophageal reflux disease (GERD) is a digestive disorder characterized by reflux of the contents of the stomach or small intestines into the esophagus. Symptoms of gastroesophageal reflux may include a sensation of warmth or burning rising up to the neck area (heartburn or pyrosis), swallowing difficulties (dysphagia), and chest pain. This condition is a common problem and may be a symptom of other gastrointestinal disorders. Approximately 5-15 percent of individuals with GERD eventually develop Barrett esophagus. (For more information about this condition, choose "gastroesophageal" as your search term in the Rare Disease Database.) Hiatal hernia is a very common digestive disorder. Symptoms may include a backward flow (reflux) of stomach contents into the esophagus (gastroesophageal reflux), pain, and/or a burning sensation in the throat. The opening in the diaphragm becomes weakened and stretched, allowing a portion of the stomach to bulge through into the chest cavity. This disorder can easily be diagnosed through testing by a radiologist. Many people with Barrett esophagus also have hiatal hernia. A diagnosis of Barrett esophagus is made by examination of the esophagus through a device known as an endoscope, a thin flexible tube that has a small camera with a light on its tip. The tube is run down the throat allowing a physician to view the tissue of the lower esophagus and the junction where the esophagus meets the stomach. Healthy tissue in this area is usually a pearly white color; the tissue that characterizes Barrett esophagus is a darker pink color often described as "salmon-colored". A diagnosis of Barrett esophagus may be confirmed by the microscopic examination of tissue samples (biopsy) taken from this discolored tissue lining the esophagus. Under a microscope, the cells have an abnormal "column" shape that is characteristic of this disease.
Since Barrett esophagus is associated with an increased risk of cancer of the esophagus, affected individuals should be evaluated periodically by a physician who specializes in treating diseases of the intestines (gastroenterologist). Examination of the esophagus every 2-3 years with a specialized endoscope is recommended to detect early pre-malignant cell changes (dysplasia). Cell dysplasia means that the cells show similarities with cancer cells, but cannot invade tissue or spread. The tissue changes at this stage can still be treated. Dysplasia may be classified as low-grade to high-grade. High-grade dysplasia indicates a greater risk of progression to esophageal cancer. There are no consensus accepted screening guidelines for individuals suspected of having Barrett esophagus. Some guidelines suggest that individuals more than 50 who have had chronic GERD symptoms for several years, and other risk factors such as obesity, undergo an endoscopy exam to determine whether they have Barrett esophagus. The treatment of Barrett esophagus is often directed at the symptoms associated with GERD and may include the elevation of the head of the bed and the avoidance of bedtime snacks or liquids. Drug therapy may include the administration of medications that help to relieve the symptoms of GERD and acid reflux. These may include proton pump inhibitors including esomeprazole (Nexium), lansoprazole (Prevacid), omeprazole (Prilosec), and omeprazole powder (Zegerid). Additional drugs that may be prescribed include metoclopramide (Reglan), famotidine (Pepcid), cimetidine (Tagamet), and ranitidine (Zantac). Individuals with Barrett esophagus are urged not to smoke or drink alcoholic beverages. Some people with Barrett esophagus who do not respond to drug therapy may require surgery to heal areas of ulceration on the esophagus and prevent acid reflux. The procedure, known as laparoscopic Nissen fundoplication, tightens the muscle (sphincter) that connects the esophagus and the stomach, preventing the backflow of contents from the stomach into the esophagus. For some individuals with high-grade dysplasia, the surgical removal of the esophagus may be recommended. However, recent advances in endoscopic therapy including endoscopic mucosal resection and ablation (PDT, radiofrequency ablation) have made them the initial choice of therapy in patients with high-grade dysplasia. In August of 2003, the Food and Drug Administration (FDA) approved porfimer sodium (Photofrin) as an alternative to surgery for individuals with high-grade dysplasia associated with Barrett esophagus. This photosensitizing agent kills abnormal and potentially precancerous cells. Photofrin is manufactured by Axcan Pharma, Inc. For information, contact: Axcan Pharma, Inc. 22 Inverness Center Parkway Birmingham, AL 35242 Tel: (205) 991-8085 Fax: (205) 991-8176 Information on current clinical trials is posted on the Internet at www.clinicaltrials.gov. All studies receiving U.S. government funding, and some supported by private industry, are posted on this government web site. For information about clinical trials being conducted at the NIH Clinical Center in Bethesda, MD, contact the NIH Patient Recruitment Office: Toll-free: (800) 411-1222 TTY: (866) 411-1010 For information about clinical trials sponsored by private sources contact: Additional therapies that are being studied for individuals with high-grade dysplasia associated with Barrett esophagus include radiofrequency ablation, cryotherapy and laser therapy.
Radiofrequency ablation is a procedure in which radiofrequency energy is used to destroy the affected tissue. Cryotherapy is a procedure in which extreme cold is used to freeze and destroy affected tissue. Laser therapy uses lasers to destroy the affected tissue. More research is necessary to determine the long-term safety and effectiveness of these procedures for the treatment of individuals with Barrett esophagus. A procedure known as endoscopic mucosal resection is being studied for the treatment of high-grade dysplasia associated with Barrett esophagus. During this procedure, the affected tissue is removed through the endoscope without damaging the underlying tissue of the esophagus. More research is necessary to determine the long-term safety and effectiveness of this potential therapy for individuals with Barrett esophagus.
Contact for additional information about Barrett esophagus:
Prateek Sharma, MD
Professor of Medicine
Director: GI Fellowship Training
Veterans Medical Center
University of Kansas School of Medicine
Kansas City, KS 66160
Organizations related to Barrett Esophagus
References
Higgins P, Askari FK. Barrett Esophagus. NORD Guide to Rare Disorders. Lippincott Williams & Wilkins. Philadelphia, PA. 2003:333.
Yamada T, Alpers DH, Kaplowitz N, Laine L, et al., eds. Textbook of Gastroenterology. 4th ed. Lippincott Williams & Wilkins. Philadelphia, PA; 2003.
Ballenger JJ, ed. Diseases of the Nose, Throat, Ear, Head & Neck, 14th ed. New York, NY: Lea & Febiger Co; 1991:1315-6.
Sharma P. Barrett's esophagus. NEJM 2009;361(26):2548-56. Erratum in: NEJM 2010;362(15):1450.
Seewald S, Angl TL, Soehendra N. Endoscopic mucosal resection of Barrett's oesophagus containing dysplasia or intramucosal cancer. Postgrad Med J. 2007;83:367-372.
Shalauta MD, Saad R. Barrett's esophagus. Am Fam Phys. 2004;69:2113-2118.
Drovdlic CM, Goddard KAB, Chak A, et al. Demographic and phenotypic features of 70 families segregating Barrett's oesophagus and oesophageal adenocarcinoma. J Med Genet. 2003;40:651-656.
Shaheen N, Ransohoff EF. Gastroesophageal reflux, Barrett esophagus, and esophageal cancer: scientific review. JAMA. 2002;287:1972-1981.
Spechler SJ. Barrett's esophagus. NEJM 2002;346:836-842.
FROM THE INTERNET
Azodo IA, Romero Y. Barrett's Esophagus. The American College of Gastroenterology. Available at: http://www.gi.org/patients/gihealth/barretts.asp Accessed: 12/11.
Johnston MH, Eastone JA. Barrett Esophagus and Barrett Ulcer. Emedicine Journal, April 25, 2011. Available at: http://www.emedicine.com/med/TOPIC210.HTM Accessed: 12/11.
National Digestive Diseases Information Clearinghouse. Barrett's Esophagus. July 2008. Available at: http://digestive.niddk.nih.gov/ddiseases/pubs/barretts/ Accessed: 12/11.
Mayo Clinic for Medical Education and Research. Barrett's Esophagus. May 25, 2011. Available at: http://www.mayoclinic.com/health/barretts-esophagus/HQ00312 Accessed: 12/11.
The information in NORD's Rare Disease Database is for educational purposes only. It should never be used for diagnostic or treatment purposes. If you have questions regarding a medical condition, always seek the advice of your physician or other qualified health professional. NORD's reports provide a brief overview of rare diseases. For more specific information, we encourage you to contact your personal physician or the agencies listed as "Resources" on this report.
Report last updated: 2011/12/08
Acute bacterial sinusitis, Chronic bacterial sinusitis, Subacute bacterial sinusitis Introduction to sinusitis: That runny nose and cough just won’t go away… Perhaps your child has a sinus infection. Sinusitis is a common problem in children. Nevertheless, it is often over-diagnosed in children with green runny noses, and missed in children who really have a sinus infection! What is sinusitis? The sinuses are small empty caverns in the bony skull. They are lined by mucus membranes and connect with the nasal passages. Some sinuses are present at birth; others continue to grow and develop for the first 20 years of life. Sinusitis is the name given when the lining of one or more of these sinuses is red, swollen, and tender, the opening is blocked, and the sinus is at least partially filled with fluid (mucus and/or pus). Technically, every cold is also a case of viral sinusitis. However, when doctors use the term sinusitis they are usually referring to a bacterial infection in the sinuses. Acute bacterial sinusitis has been present for less than three or four weeks; subacute bacterial sinusitis has been present for up to about ten weeks; and chronic bacterial sinusitis has been present for about ten weeks or more. The three may have different causes and treatments. Who gets sinusitis? Anyone can get a sinus infection. Colds or nasal allergies are usually present first. Sinus infections are also more common when there is exposure to cigarette smoke. Children who have ear infections, GE reflux, cystic fibrosis, immune problems, deviated nasal septa, or poorly functioning cilia are more likely to develop sinus infections. Asthma and sinus infections often go together. In addition, swimming, breathing cold dry air, or attending day care can predispose a child to sinusitis. Boys get more sinus infections than girls. What are the symptoms of sinusitis? Adults and adolescents with sinusitis will often have headaches or facial tenderness to make the diagnosis clear. These are much less common in younger children. Instead, the symptoms are usually similar to a prolonged cold. The common cold usually lasts about seven days. Within one to three days of the onset, the nasal secretions usually become thicker and perhaps yellow or green. This is a normal part of the common cold and not a reason for antibiotics. If a child has both a cough and nasal discharge that do not improve within 10 to 14 days, this may be acute bacterial sinusitis. The nasal discharge may be clear or colored. The cough is present during the day, but is often worse during naps or at bedtime. There may be a fever, sore throat from post-nasal drip, or bad breath. About half of the children also have ear infections (caused by the same bacteria). Occasionally a child with severe acute bacterial sinusitis will have a headache, colored nasal discharge, high fever, and facial tenderness well before the normal 10 days typically used to diagnose sinusitis. In subacute and chronic sinusitis, the symptoms are often minimal, but include the ongoing cough and nasal discharge. Is sinusitis contagious? In general, sinus infections are not contagious (although there have been rare outbreaks associated with swimming together). The colds that can lead to sinus infections are quite contagious. How long does sinusitis last? Sinus infections often last for weeks or months without treatment. How is sinusitis diagnosed? The diagnosis is often made based on the history and physical examination. Sometimes x-rays or CT scans are used to support the diagnosis. 
How is sinusitis treated? Acute and subacute bacterial sinusitis is usually best treated with appropriate antibiotics at an appropriate dose for the appropriate amount of time (usually 14-21 days). The antibiotics are usually continued for at least 7 days after symptoms disappear. If symptoms worsen or do not improve, the antibiotic is usually changed early in the course. Saline nose drops may thin the mucus and speed healing. Decongestants may help symptoms, but usually do not speed healing. Antihistamines may thicken the mucus and slow healing. Chronic sinusitis treatment usually lasts three weeks or more. For this reason, it is wise to obtain a culture of the infected material before treatment. Sometimes the infection is caused by fungus rather than bacteria. Most children with chronic sinusitis (and to a lesser extent, subacute sinusitis) either have allergies or an ongoing irritant exposure, such as smoke or fumes. These should be identified and addressed. How can sinusitis be prevented? Breastfeeding lowers the risk of sinus infections. Preventing sinus infections is possible. It involves the same proven measures outlined for preventing colds and ear infections. In addition, changing swimming habits may be helpful for older children (avoiding jumping, diving, or swimming underwater – unless holding the nose or using nose plugs). Immunizations, especially to pneumococcus, Haemophilus influenzae (Hib), measles, and the flu, are particularly important for children prone to sinus infections. Finally, identifying and properly addressing allergies and irritants is the key to reducing the frequency, duration, and severity of sinusitis. Related A-to-Z Information: Allergies (Allergic Rhinitis), Asthma, Common Cold, Cough, Cystic Fibrosis, Ear Infection, Food Allergies, Gastroesophageal Reflux, Haemophilus Influenzae (H flu, Hib), HIV, Influenza (Flu), Measles, Nosebleeds (Epistaxis), Otitis Media with Effusion (OME), Wheezing Last reviewed: March 30, 2010
Read more: "2013 Smart Guide: 10 ideas that will shape the year"
Move over Africa. It is where humanity began, where we took our first steps and grew big brains. But those interested in the latest cool stuff on the origins of our species should look to Asia instead.
Why so? It looks as if some early chapters in the human story, and significant chunks of its later ones too, took place under Asian skies.
For starters, 37-million-year-old fossils from Burma are the best evidence yet that our branch of the primate tree originated in Asia rather than Africa. A great deal later, after the emergence of early humans from Africa, some of our distant cousins set up shop in Asia, only to die out later. In 2012 anthropologists described for the first time human fossils unlike any others - ancient hominins dubbed the Red Deer Cave People who lived in what is now China as recently as 15,000 years ago, then vanished without further trace.
The implication is that more long-lost cousins remain to be found in South-East Asia's neglected fossil record. We know a little about the enigmatic Denisovans, for example. Their DNA was first discovered in 50,000-year-old fossil fragments from a Siberian cave in 2010, and traces of that DNA live on in modern Indonesians. Not only does this show that the Denisovans interbred with our species, it also suggests they occupied a territory so large it took in South-East Asia as well as Siberia. Yet the only evidence we have of their existence are a finger bone and a tooth.
Too long have the vast plains and forests of Asia remained untouched by the trowels and brushes of palaeoanthropologists. Teams of them are champing at the bit to explore Asia's rocks. Denisovans - and, hopefully, other ancestors, too - will not remain faceless for long.
Reader comment (MC, Tue Dec 25, 2012): "Move over Africa. It is where humanity began, where we took our first steps and grew big brains." ... "Yet the only evidence we have of their existence are a finger bone and a tooth." Seems the first quote is rather bombastic in view of the second.
Product #: CTP1696_TQ
Science Games Galore! - Earth, Life, and Physical Science (Kindergarten) eBook
Please Note: This ebook is a digital download, NOT a physical product. After purchase, you will be provided a one-time link to download ebooks to your computer. Orders paid by PayPal require up to 8 business hours to verify payment and release electronic media. For immediate downloads, payment with credit card is required.
10 Matching Games That Reinforce Basic Science Skills. Each Science Games Galore! book features 10 ready-to-use games and 10 reproducible activity pages designed to reinforce essential science skills. The titles focus on a variety of standards-based science concepts and include the following:
- Interactive, hands-on, full-color card stock cards and answer keys
- Games and reproducibles designed for varying ability levels that allow students to play independently while the teacher works with small groups
- Reproducibles that are perfect for review or practice, extension activities, assessment tools, or homework assignments
- Suggestions for preparing the game materials
- Explicit instructions for implementing the games and tips for trouble-free game play
- Additional ways to use the game pieces
- A blank game template reproducible students and teachers can use to create their own games
Watch a demo video of this series at YouTube.
We’ve seen claims that high fructose corn syrup is unnatural. Yet high fructose corn syrup is made from corn, a natural grain product. There are no artificial or synthetic ingredients and no color additives in high fructose corn syrup. It is refined with similar production methods to other sugars, making it no more “processed” than any other sweetener. High fructose corn syrup meets the Food and Drug Administration’s (FDA) requirements for use of the term ‘natural.’ Study: Straight talk about high-fructose corn syrup: what it is and what it ain't, John S White, American Society for Clinical Nutrition 2008 The super-sized myth surrounding high fructose corn syrup is that it is uniquely responsible for causing obesity. The only problem with that is that there’s no actual evidence to support this claim. Research does show, however, that obesity results from an imbalance of calories consumed and calories burned. Research also does not directly connect obesity to one specific food or ingredient. "After studying current research, the American Medical Association (AMA) today concluded that high fructose syrup does not appear to contribute to obesity more than other caloric sweeteners..." - American Medical Association, June 2008 Lack of findings for the association between obesity risk and usual sugar-sweetened beverage consumption in adults - A primary analysis of databases of CSFII-1989-1991, CSFII-1994-1998, NHANES III, and combined NHANES 1999-2002. Similar to misunderstandings around obesity, high fructose corn syrup also has no unique link to causing diabetes. This fact has been supported by the U.S. Centers for Disease Control and the American Diabetes Association who state that the primary causes of diabetes are obesity, advancing age and heredity. In fact, the U.S. Department of Agriculture (USDA) data shows that per capita consumption of high fructose corn syrup has been declining in recent years, while the incidence of obesity and diabetes in the United States continues on the rise. The ratio of fructose in the diet is not higher now than it was 30 years ago. Fructose is a natural simple sugar found in sugars, vegetables, fruits and honey. Significant confusion has stemmed from recent studies that examined pure fructose in excess amounts and linked it to obesity, a precursor to diabetes. The results were then inappropriately connected to high fructose corn syrup, even though the two substances are vastly different. The studies included abnormally high levels of fructose that are never consumed in a normal human diet. High fructose corn syrup and table sugar both have fructose in them but it is combined with glucose which balances the metabolic effects of fructose. Yes, there are no safety concerns. The safety of high fructose corn syrup is based on science and expert review accumulated over the past 40 years. In 1983, the FDA listed high fructose corn syrup as “Generally Recognized as Safe” (known as GRAS status) for use in food and reaffirmed that ruling in 1996. GRAS recognition by the FDA is important because it recognizes a long history of safe use as well as adequate scientific studies proving an ingredient’s safety. GRAS status is maintained indefinitely unless the FDA has a new reason to question an ingredient’s safety, in which case it will then look into maintaining or revoking the GRAS status. John White, Ph.D. 
noted, “Its safety was never seriously doubted because expert scientific panels in every decade since the 1960s drew the same conclusion: sucrose, fructose, glucose, and, latterly, HFCS did not pose a significant health risk, with the single exception of promoting dental caries [tooth decay].” In addition to government-convened expert panels, professional organizations have also confirmed the safety of high fructose corn syrup, including the American Medical Association (AMA) and the Academy of Nutrition and Dietetics (formerly the American Dietetic Association). Insulin is responsible for the uptake of glucose into cells and the lowering of blood sugar, a vital control process for metabolism. Both sugar and high fructose corn syrup have largely the same effect on insulin production. Among common sweeteners, pure glucose triggers the greatest insulin release, while pure fructose triggers the least. Both table sugar and high fructose corn syrup trigger about the same intermediate insulin release because they contain nearly equal amounts of glucose and fructose. The body metabolizes the sugars in high fructose corn syrup the same way it does table sugar, honey and many fruits. Since these sweeteners all have approximately equal ratios of fructose and glucose, these simple sugars are absorbed into the blood stream in similar ways. Multiple studies have confirmed this similar response on different aspects of metabolism: - Leptin and Ghrelin - Kathleen J. Melanson, et al., at the University of Rhode Island reviewed the effects of high fructose corn syrup and sugar on circulating levels of glucose, leptin, insulin and ghrelin in a study group of lean women. The study found “no differences in the metabolic effects” of high fructose corn syrup and sugar. - Triglycerides - A study by Linda M. Zukley, et al., reviewed the effects of high fructose corn syrup and sugar on triglycerides in a study group of lean women. This short-term study found “no differences in the metabolic effects in lean women [of high fructose corn syrup] compared to sucrose.” - Uric Acid - Joshua Lowndes, et al., reviewed the effects of high fructose corn syrup and sugar on circulating levels of uric acid in a study group of lean women. This short-term study found “no differences in the metabolic effects in lean women [of high fructose corn syrup] compared to sucrose.” There is no credible research demonstrating that high fructose corn syrup affects your feelings of fullness any differently than sugar or other sweeteners. As researchers have pointed out: Pablo Monsivais, et al., at the University of Washington: Found that beverages sweetened with sugar and high fructose corn syrup, as well as 1% milk, all have similar effects on feelings of fullness. Stijn Soenen and Margriet S. Westerterp-Plantenga, at Maastricht University in The Netherlands: Found “no differences in satiety, compensation or overconsumption” between milk and beverages sweetened with sugar and high fructose corn syrup. Tina Akhavan and G. Harvey Anderson at the University of Toronto: Found that sugar, high fructose corn syrup, and 1:1 glucose/fructose solutions do not differ significantly in their short-term effects on subjective and physiologic measures of satiety and food intake at a subsequent meal. The Glycemic Index (GI) is a ranking of foods, beverages and ingredients based on their immediate effect on blood glucose levels. The GI measures how much blood sugar increases over a period of two or three hours after a meal. 
Sugar and honey both have moderate GI values, ranging from 55 to 60 on an index that runs up to 100. Although it has not yet been specifically measured, high fructose corn syrup is expected to have a moderate GI due to its similar composition to honey and sugar. However, it is important to keep in mind that the body does not respond to the GI of individual ingredients, but rather to the GI of the entire meal. Since added sugars like sugar and high fructose corn syrup typically contribute less than 20 percent of calories, they are a minor contributor to the overall GI in a normal diet.
61 Fed. Reg. 43447 (August 23, 1996), 21 C.F.R. 184.1866. Direct food substances affirmed as Generally Recognized as Safe; High Fructose Corn Syrup - Final Rule.
U.S. Department of Agriculture, Economic Research Service. 2009. Calories: average daily per capita calories from the U.S. food supply, adjusted for spoilage and other waste. Loss-Adjusted Food Availability Data.
Food allergies are caused by your body's response to certain proteins in foods. Nearly all of the corn protein is removed during the production of high fructose corn syrup. Moreover, the trace protein that remains likely bears little immunological resemblance to allergens in the original kernel. A number of cereal grains, including wheat, rye, barley and corn, can cause allergic reactions in some individuals. Although the amount of corn protein found in high fructose corn syrup is extremely low, a person who is allergic or sensitive to corn and corn products should seek and follow the advice of medical professionals about consuming products with HFCS.
High fructose corn syrup is made from corn using a process called wet milling. In general, the process includes:
1. Steeping corn to soften the hard kernels
2. Physically separating the kernel into its separate components (starch, corn hull, protein and oil)
3. Breakdown of starch into glucose
4. Use of enzymes to invert glucose to fructose
5. Removal of impurities
6. Blending of glucose and fructose to make HFCS-42 and HFCS-55
White JS. 1992. Fructose syrup: production, properties and applications, in FW Schenck & RE Hebeda, eds, Starch Hydrolysis Products – Worldwide Technology, Production, and Applications. VCH Publishers, Inc. pp. 177-200.
According to Keith-Thomas Ayoob, Ed.D., RD, FADA, a practicing pediatric nutritionist with 25 years of experience, people often assume that sugar causes hyperactivity because of all the nutritional myths and misinformation they've read or heard about. ADHD is a definable medical condition, but it's not caused by sugar, high fructose corn syrup or any other form of sweetener. This is the conclusion of dozens of well-controlled studies. High fructose corn syrup and other sugars may give you a quick burst of energy, but it is short-lived and relatively mild.
No mercury or mercury-based technology is used in the production of high fructose corn syrup in North America. High fructose corn syrup is safe and does not contain quantifiable levels of mercury. A study was published claiming the opposite, but independent experts have performed their own evaluations and confirmed that HFCS is mercury-free. One of the nation's leading experts in mercury contamination, Dr. Woodhall Stopford, MD, MSPH, of Duke University Medical Center confirmed this assessment. You can view his analysis and conclusion by visiting http://duketox.mc.duke.edu/recenttoxissues.htm and clicking on the link under the Mercury heading.
See also “Reports Regarding High Fructose Corn Syrup (HFCS) and Mercury Misleading,” a ChemRisk analysis discussing the flaws in the mercury report.
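To make step 6 of the production process above concrete: in industry practice, HFCS-55 is commonly produced by blending a high-fructose stream (HFCS-90, roughly 90 percent fructose on a dry basis) back into HFCS-42. Assuming those compositions, the required proportions follow from a simple mass balance. The Python sketch below is illustrative only and does not describe any particular producer's process.

```python
# Mass balance ("lever rule") for blending two syrup streams
# to hit a target fructose fraction. All compositions are assumed,
# dry-basis, illustrative values.
def high_stream_share(low: float, high: float, target: float) -> float:
    """Mass fraction of the high-fructose stream needed in the blend."""
    if not low < target < high:
        raise ValueError("target must lie between the stream compositions")
    return (target - low) / (high - low)

# Blend HFCS-42 (42% fructose) with an assumed HFCS-90 (90% fructose)
# to make HFCS-55 (55% fructose):
share_90 = high_stream_share(low=0.42, high=0.90, target=0.55)
print(f"HFCS-90: {share_90:.1%}  HFCS-42: {1 - share_90:.1%}")
# Prints roughly 27% HFCS-90 and 73% HFCS-42.
```

The same two-point mass balance applies to any pair of streams whose compositions bracket the target fructose level.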
Lesson Plans & Teacher Guides

[Photo: Park Rangers and students from a local area school. Credit: National Park Foundation]

Dr. Martin Luther King, Jr.'s Legacy of Racial and Social Justice: A Curriculum for Empowerment is a teacher's resource guide that provides activities for students in kindergarten through eighth grade to explore the rich history of the civil rights movement and the persona of Dr. Martin Luther King, Jr. The curriculum was developed by the Alonzo Crim Center for Excellence in Urban Education at Georgia State University. It focuses on building on students' current civil rights knowledge and helping them compare present-day realities to past struggles for justice in America and throughout the world.

The curriculum is divided into 5 major sections:
Section 1 - Guidelines for Teachers (PDF, 1,411 KB)

Did You Know? On December 12-16, 1961, Martin Luther King, Jr. and his forces launched an attack against segregation and discrimination in Albany, GA. Mass arrests and political maneuverings frustrated the effort, but the Albany debacle taught civil rights leaders lessons for future massive assaults on segregation.
While many people assume it is simply common sense that Swedish is the official language of Sweden, it did not gain that status until 2009! Swedish is a North Germanic language that is extremely similar to other Scandinavian languages such as Danish and Norwegian. Because of the prevalence of other languages in the country, Swedish had never before been made official. While today almost everyone speaks Swedish as at least a second language, nearly as many people speak English. English is almost universal in Sweden because of the trade links established after World War II and the strong Anglo-American influence that followed, and it became a mandated course in schools precisely because it was so prevalent. That popularity led many people to call for strengthening the status of Swedish, and thus in 2009 it was officially made the language of Sweden!
Video Review: Building Our Community: A Film about Restorative Practices

Collingwood School, located in an area affected by acute economic and social changes, was once considered a school in crisis. High crime rates and overt racism are common experiences for students attending the school. The school struggled with issues concerning student discipline and other methods of creating a safe school environment. Restorative practices provided the mechanism for doing just that.

In this short documentary, teachers and students describe the use of circles each morning and afternoon as a way for students to check in and talk about how they are feeling that day. One staff member describes the process as helping the children to develop a vocabulary to express their emotions and learn more about taking responsibility for their behaviour. A teacher describes how the simple change of asking “What happened?” instead of “Why?” created significant improvements in her interactions with the children: “What happened?” allows the child to speak without automatically fearing punishment, whereas “Why?” provokes a defensive reaction.

Students describe circles using words such as “respect” and “safe place”. One parent described how his son had expressed a lot of anger and was getting in fights. In response to one fight, the school held a circle that involved the participants in the fight, parents and teachers. The father describes how this process allowed everyone to have a say, find a solution, and walk away with a good feeling. He also describes changes he has seen in his son’s behaviour since the circle. Another student describes how participating in circles has helped him, and how his family now uses the process in responding to conflict.

At Collingwood, restorative practices are used at every level of the school, including among staff members, who meet in a circle each morning as well. While described as lighthearted, the circle also provides teachers an opportunity to seek input from colleagues about issues they face. One teacher explains that her fear of making mistakes has lessened as her feeling of support from her co-workers and supervisors has grown through these practices. Another teacher describes how she feels more in control of her own emotions as she responds to students and their conflicts (with each other and with her). She feels that the students actually listen to her, and the students feel the same about her. This mutual respect has had an effect on the educational environment and has resulted in increased work output and improved quality by the students.

Building Our Community is available on DVD from the International Institute for Restorative Practices for $25. In the UK and Europe, the video is available for £14.99 from the UK office of the International Institute for Restorative Practices.
Excerpted from Perennials for Every Purpose, by Larry Hodgson

It's no wonder primroses are inseparable from spring in our minds. Not only are they among the first perennials to bloom (some even flower in late winter), but their very name implies earliness: Primula derives from the Latin word for "early." Most primroses produce a ground-hugging rosette of greenery, and their rounded flowers have five petals each. Their leaves range from thick, smooth, and waxy to narrow, hairy, and toothed. Some species bear one flower per stem while others have dome-shaped or rounded clusters of flowers on each stem.

Primroses have common cultural needs, namely moist soil and cool growing conditions. They thrive in full sun in cool-summer areas, but usually need partial shade elsewhere. And while they generally need moist soil, most also require good drainage. The main exceptions are Japanese primrose (Primula japonica) and, to a lesser degree, drumstick primrose (P. denticulata): both will do well in almost waterlogged soil. Although cold-hardiness varies from plant to plant, a winter mulch is wise almost everywhere to help primroses survive the winter. Gardeners in AHS Heat Zones 12 to 9 have trouble growing many primroses, not because of heat but because the plants need winter chill.

Division is the ideal way to propagate primroses and the only way to maintain specific cultivars. You can also grow primroses from seed, but their need for temperatures between 40 and 50 degrees F (4 to 10 degrees C) during the long period between sowing and the first blooms makes starting plants indoors impractical for most of us.

Suitable primrose companions for a moist, partly shaded spot include astilbes, ferns, hostas, Japanese iris (Iris ensata), forget-me-nots (Myosotis spp.), and pulmonarias. Drumstick primrose (P. denticulata) and Japanese primrose (Primula japonica) are suitable for wetland plantings; try them with marsh marigold (Caltha palustris), yellow flag (Iris pseudacorus), and cardinal flower (Lobelia cardinalis).

Problems and Solutions
Spider mites can be a real plague, especially if you grow primroses in full sun in a hot climate. Infested plants have yellow stippling on the leaves, and in serious cases, leaves turn brown. To keep spider mites in check, spray plants regularly with water.

Polyantha primrose (Primula X polyantha). Once upon a time, there were individual wild species of European primroses, such as the fragrant, deep yellow cowslip (P. veris) and the lemon-yellow English primrose (P. vulgaris). Today, though, these and other primrose species sold commercially are probably all hybrids, and I prefer to lump them together under the name P. X polyantha. This varied group shares wrinkled, tongue-shaped leaves and stemless or stemmed flowers, borne singly or in clumps. They bloom abundantly in early to late spring and sometimes again, much more lightly, in the fall. Most gardeners grow unnamed plants from popular seed strains such as Pacific Giant Hybrids, with large flowers in the full range of colors, and Cowichan Hybrids, in solid colors (no yellow eye) and with reddish leaves.
Polyantha primrose: Primula X polyantha
Bloom color: yellow, pink, purple, blue or white, depending on cultivar
Bloom time: early to late spring
Length of bloom: 3-4 weeks or more
Height: 6-12 inches
Spread: 8-9 inches
Hardiness: USDA Zones 3-9
Heat tolerance: AHS Heat Zones 8-6
Light preference: partial shade (full sun in cool climates)
Soil preferences: humus-rich, moist but well-drained soil
Propagation: divide in early summer, after flowering
Garden uses: container planting, edging, mass planting, rock garden, wall planting, woodland garden; along paths, on slopes, in wet areas.
This guide is not an attempt to bash those who do believe, but an attempt to highlight the problems with the idea of Bigfoot existing. For the purpose of this breakdown I will be using the most common theory, which is that Bigfoot is an ape or hominid.

I will begin by looking at how a species such as Bigfoot is commonly assumed to have evolved, and highlighting the problems with each idea. The two main proposals are that Bigfoot is a descendant of Gigantopithecus, or that it is more closely related to man, putting it within the Hominid bracket.

Gigantopithecus: Does roughly fit the sizes commonly reported for Bigfoot, of around 7ft-9ft tall and weighing around the 1000lb mark.

The Problem with this: It is commonly held that Gigantopithecus was a quadruped, and if that is true, Bigfoot would also most likely be a quadruped. The reason is that bipedalism usually only develops within small species; it is only after bipedalism has become established that a lineage can then increase in mass. Large Theropod Dinosaurs such as T. Rex are a good example. Although T. Rex may have been one of the largest Theropod Dinosaurs to live, it was its much smaller ancestors that evolved bipedalism. Once established, bipedalism allows the following species to increase in size as each generation evolves measures to deal with the increasing mass being exerted on the limbs. A 1000lb Gigantopithecus going from quadruped to biped would simply not be practical, as each leg would have to support twice as much weight as before, severely hampering movement and causing excessive wear on the legs. An example can be seen in people who are bedridden with morbid obesity, as their legs are not capable of supporting the mass of their bodies.

Hominid: This would solve the problem of developing bipedalism, as if Bigfoot were a Hominid, bipedalism would already be its established form of locomotion, which would allow for an increase in mass over time.

The Problem with this: Neanderthals, Homo erectus and Homo heidelbergensis are three commonly mentioned Hominids in regards to Bigfoot, as well as, occasionally, older species such as Paranthropus. The difficulty here lies in that we have a fairly good idea of what the first three looked like, and all were very Human in appearance. As for an older species such as Paranthropus, they do superficially resemble Bigfoot, apart from size, but like all non-Human Hominids, not a single species has been uncovered in America.

Bigfoot is predominantly sighted in the Pacific Northwest of the U.S., generally in wooded areas, but has been sighted in almost every other state of America as well as Canada, again mostly in wooded areas.

The Problem with this: Species that live in forested areas are usually smaller than those that live in open terrain. For example, the African Forest Elephant is smaller than its open-country counterpart. This is because size hampers movement through woodland. Also, a tall biped would have a higher centre of gravity, reducing the animal's stability whilst traveling through potentially uneven terrain. Another problem with the habitat of Bigfoot is that no Apes are found in temperate forests within the northern hemisphere, and there have never been any fossils uncovered to suggest the existence of any form of Ape in America. It is possible that convergent evolution could have occurred within the New World Monkeys to produce an Ape-like species, which then migrated to North America during the Great American Interchange.
But again, we would expect to find fossil evidence of this occurring, and would be left wondering why the species became extinct in its original territory.

There are three options regarding the diet of Bigfoot: Carnivorous, Herbivorous and Omnivorous.

Problem with being a Carnivore: If Bigfoot were Carnivorous, it would be the only Ape or Hominid known to eat only meat. It would also be limited to eating fresh meat, as Apes and Hominids do not have a digestive system capable of counteracting the harmful toxins found in rotting meat. Limiting Bigfoot to only fresh meat would mean it would have to hunt, and due to its size it would have a much higher calorific requirement than a Human. One of the main problems I see is that carnivores are generally opportunistic, which would lead to reports of livestock being killed by Bigfoot, for example, as well as bringing it into competition with other predators. The species would also struggle due to being hampered by its inability to consume rotting meat.

Problem with being a Herbivore: Bigfoot being a Herbivore is a much more plausible idea, yet it still poses problems. Plant matter is generally poor in nutrients, which forces Herbivores to spend long periods of time eating. This would suggest a lifestyle similar to that of Gorillas, confining Bigfoot to a home range of no more than a few miles, which would increase the likelihood of discovery: if one Bigfoot were sighted, the others wouldn't be too far away.

Problem with being an Omnivore: Of the three, being an Omnivore is the most likely diet, as it is the most varied and fits closest to the diet of most Apes and Hominids. The problem here is that Omnivores are just as opportunistic as Carnivores, if not more so, and this poses the question of why Bigfoot is not sighted as regularly in urban areas as other Omnivores. You also again have the problem of hunting and why livestock are not reported being killed; and if the species uses rudimentary tools, why has no evidence of this been produced?

If we look at the most likely scenario regarding diet, and assume Bigfoot does indeed live within wooded/forested areas as is most commonly reported, we would be looking at a large, Omnivorous mammal.

The Problem with this: This would place it well within the niche currently occupied by the Black Bear, especially within the Pacific Northwest. It would also mean Bigfoot would most likely be viewed as a possible food source by Pumas, especially the young and weak. Due to this, there would surely be evidence, in the form of carcasses, of young or weak Bigfoot being preyed upon by Pumas. Also, due to the clash of niches with the Black Bear, there would be evidence of violence between the species, as is seen between Hyenas and Lions. As Bigfoot is alleged to be an Ape or Hominid, it would presumably be a social species, which would give it a distinct advantage over the Black Bear, reducing the Bear's numbers and range; this is contrary to the current trend, as Black Bear numbers are steadily increasing. And this is without taking into account possible interactions with Brown Bears and Wolves, which, although they occupy slightly more open country, could still encounter a Bigfoot.

These reasons and more lead me to believe that Bigfoot does not exist. That is my take on the situation, which I have tried to approach as logically and with as little bias as possible, and I just hope it gives people a few things to ponder, skeptics and believers alike.
I know I haven't tackled every area, such as population levels, but I don't want to be here forever.
As the hands of the clock approach midnight for Sharks, organizations working for their protection have joined forces and declared 2009 The International Year of the Shark. The motion aims to raise global awareness of their imminent extinction and the oceanic crisis at hand. Recent findings of the Global Shark Assessment indicate that at current rates of decline, extinction of the most threatened species of Shark is forecast in 10 to 15 years. In large regions, species that were once numerous have virtually disappeared, in a massacre comparable to that of the buffalo on the North American plains 200 years ago, but on a much larger scale. For example, studies of oceanic Sharks estimate 80 to 90% of heavily fished species are gone. Yet these intelligent animals are still fished intensively, and finned, usually while still alive, for shark fin soup. As part of the International Year of the Shark, Beqa Adventure Divers and Shark Reef Marine Reserve have partnered with local and international organizations to offer you the Fiji Shark Conservation and Awareness Project.
You may find there are things that trigger your asthma that aren't allergens or irritants. These include weather changes, illness, exercise, and other conditions or situations. If any of these trigger asthma symptoms, check off the tips below that can help. Then give the tips a try.

Weather
Certain types of weather can trigger asthma or contribute to other triggers such as allergies. Of course, you can't control the weather! But you can take more care at times when weather may be an issue.
- Keep track of which types of weather affect you most: cold, hot, humid, or windy. This varies from person to person.
- Limit outdoor activity during the type of weather that affects you.
- Protect your lungs by wearing a scarf over your mouth and nose in cold weather.

Illness
Illnesses that affect the nose and throat (upper respiratory infections) can irritate your lungs. You can't prevent all illness, but you may be able to prevent some:
- Wash your hands often with soap and warm water or a hand sanitizer.
- Get a yearly flu shot.
- Take care of your general health. Get plenty of sleep, and eat a healthy, balanced diet with lots of fruit and vegetables.

Food additives
Food additives can trigger asthma flare-ups in some people.
- Check food labels for "sulfites," "metabisulfites," and "sulfur dioxide." These are often found in foods such as wine, beer, and dried fruit.
- Avoid foods that contain these additives.

Medications
Certain medications cause symptoms in some people with asthma. These include aspirin and aspirin-like products such as ibuprofen and naproxen, as well as certain prescribed medicines such as some beta-blockers.
- Tell your healthcare provider if you suspect that certain medications trigger symptoms. Ask for a list of products that contain those medications.
- Check the labels on over-the-counter medicines. Medicines for colds and sinus problems often contain aspirin or aspirin-like ingredients.

Emotions
Laughing, crying, or feeling excited are triggers for some people. You can't avoid these normal emotions, but you can learn ways to slow your breathing and avert a flare-up.
- Try this breathing exercise: start by breathing in slowly through your nose for a count of 2 seconds, then pucker your lips and breathe out for a count of 4 seconds. Try to focus on a soothing image in your mind; this will help relax you and calm your breathing.
- Remember to take your daily controller medications. When you're upset or under stress, it's easy to forget.

Exercise
For some people, exercise can trigger asthma symptoms. This is called exercise-induced asthma. Don't let it keep you from being active: exercise can strengthen the heart and blood vessels and may reduce sensitivity to asthma triggers. If your asthma is in control, you should be able to exercise without triggering symptoms. These tips (and your doctor's advice) can help:
- If you have not been exercising regularly, start slow and work up gradually.
- Take quick-relief medication a few minutes before exercise, as prescribed.
- Always carry your quick-relief inhaler with you when you exercise.
- Stop and follow your action plan if you notice asthma symptoms.