ter, tir (getah tembakau) 1. A dark, oily, viscous material, consisting mainly of hydrocarbons, produced by the destructive distillation of organic substances such as wood, coal, or peat. 2. Coal tar. 3. A solid residue of tobacco smoke containing byproducts of combustion. (from Merriam-Webster) Etymology: Middle English tarr, terr, from Old English teoru; akin to Middle Low German tere (tar), Middle Dutch tar, terre, Old Norse tjara (tar), Gothic triu (tree, wood); more at TREE. 1 a : any of various dark brown or black bituminous, usually odorous, viscous liquids or semiliquids that are obtained by the destructive distillation of wood, coal, peat, shale, and other organic materials and yield pitch on distillation (road tars); see COAL TAR, WATER-GAS TAR, WOOD TAR. b : a substance resembling tar in appearance and formed by chemical change (tars in tobacco smoke). As seen from the origin of the term, "tar" can also be rendered as ter or tir (see the KBBI, Kamus Besar Bahasa Indonesia). In plainer, somewhat traditional Indonesian, one could also say "getah tembakau" (tobacco sap) (linguistic dictionary).
On November 11, we will celebrate Veterans Day to honor those Americans, both living and dead, who have served with the United States armed forces during wartime. The contributions of such people cannot be truly appreciated until one realizes that the ranks of those who have served and died are as diverse as this country itself. African Americans are no exception and, from the opening salvos of the Revolutionary War to Operation Desert Storm, they have contributed their fair share, frequently against a backdrop of segregation, discrimination, and racism. In the Revolutionary War (1776-1783), 10,000 African Americans -- some of them slaves -- served in the continental armies, participating in the defeat of the British at several famous battles. In one case, a female African American disguised herself as a man and served in the Fourth Massachusetts Regiment. She was later cited for bravery. Black Americans also helped defend American sovereignty in the War of 1812 and made up between ten and twenty percent of the fighting navy. On January 8, 1815, as General Andrew Jackson met the British army outside of New Orleans, six hundred Black soldiers in his ranks held their end of the line under massive British attack, then surged forward to help inflict a mortal blow on the enemy. In the American Civil War (1861-1865), the Confederacy declared that captured Black Union soldiers would be hanged or pressed back into slavery. In spite of that declaration, 186,000 soldiers of African descent served in 150 regiments of the Union army, making up almost 13% of the Union army's combat manpower. Another 30,000 were in the Navy. In four years of fighting, it is believed that 37,000 African Americans died in battle or from disease. Twenty Black soldiers were awarded the Congressional Medal of Honor for their services in the Civil War. In the Indian Campaigns (1866-1890), African-American soldiers took part in many of the hostilities, and twenty of them were awarded the Congressional Medal of Honor for service above and beyond the call of duty. Several years later, the Buffalo Soldiers of the 9th U.S. Cavalry Regiment fought side by side with Teddy Roosevelt's celebrated Rough Riders in Cuba during the Spanish-American War of 1898. Eight Black soldiers were awarded the Medal of Honor for their role in that war. During American participation in World War I (1914-1918), 367,000 Black Americans served their country both at home and overseas. Eventually they would comprise 11% of the troops that went overseas. The two thousand African-American soldiers of the 369th Infantry Regiment served in France along the Western Front. The 369th won the respect and admiration of their French comrades for their tenacity in fighting off incessant German attacks. The men of the 369th called themselves the Black Rattlers, but their German adversaries, in recognition of their ferocity, called them the Hellfighters. Suffering heavy casualties after 191 days in combat -- more than any other American unit -- the 369th was awarded 170 medals by the French for their courage. In the years preceding World War II, Black labor battalions in the Army were assigned to loading ships and general maintenance. When war finally came in December 1941, most African-American volunteers were initially placed into segregated Army units and denied overseas combat duty.
According to Ulysses Lee of Howard University, author of "The Employment of Negro Troops," Black Americans "asked with increasing frequency for the opportunity that they believed to be rightfully theirs in the first place: the opportunity to participate in the defense of their country in the same manner ... as other Americans." Under pressure from African-American leaders and First Lady Eleanor Roosevelt, the military was persuaded to change its policy and, by the end of the war, 500,000 African-American soldiers had been sent to overseas duty. In all, 1,154,720 Black soldiers served in the armed forces, 909,000 of them in the Army. At the Tuskegee Institute in Alabama, a group of highly talented, college-educated Black soldiers attended a special flying school. In April 1943, the graduates of this school, later known as the Tuskegee Airmen, crossed the Atlantic into the war zone. Flying escort for heavy bombers over European skies, the pilots of the 332nd Fighter Group flew 15,533 sorties in the course of 1,578 combat missions. The Tuskegee Airmen destroyed 261 enemy aircraft and caused great damage to the enemy on the ground. After gaining widespread recognition for their exploits, they received a total of 900 medals, including a Presidential Citation for the group. In the Battle of the Bulge (December 1944), Nazi forces launched a fierce winter counterattack and broke through the Allied defenses in the area of Belgium and Luxembourg. In desperation, the American military recruited and sent 2,500 Black soldiers into the First Army's counterattack to replace lost soldiers. According to Colonel John R. Ackor, these hastily assembled platoons of African-American soldiers "performed in an excellent manner at all times while in combat. These men were courageous fighters and never once did they fail to accomplish their assigned mission." The all-Black 761st Tank Battalion fought 183 consecutive days with General George S. Patton's army in Europe and was credited with killing 6,266 enemy soldiers and capturing another 15,818. During the Battle of the Bulge, the 761st "entered combat with... conspicuous courage and success." In April 1945, the 761st Battalion liberated the Nazi death camps at Buchenwald and Dachau, where its soldiers were greeted as heroes. According to the noted author William Loren Katz, 71% of African-American troops in World War II were confined to quartermaster, engineer, or transportation duties and denied combat experience. However, many of these troops also performed their duties admirably and conscientiously. Ten thousand Black troops constructed the 1,044-mile Ledo Road, which connected China with India and proved vital to the American war effort. Operating in hostile territory, where they came under constant fire from Japanese snipers and had to contend with pounding rains, disease and attacks by wild animals, the soldiers completed the road in 25 months. There was literally a fallen soldier's grave for each mile of road. In World War II, 3,902 African-American women answered the call of their country and enrolled in the Women's Army Auxiliary Corps (WACS); another 68 joined the Navy Auxiliary (WAVES). During the Cold War years, sixteen African-American soldiers received the Congressional Medal of Honor for duties performed in the Korean War and the Vietnam War. The exact number of African-American troops who have served their country cannot be determined with any degree of certainty.
However, their record of courage under fire was irrefutable proof of their loyalty to America.
The waters in and around Australia contain many perils, both living creatures and other dangers. The white pointer, or great white shark, is well known in Australian waters, and its fierce reputation is well earned. When we visited my wife’s sister in Perth, Western Australia, we swam at the well-known City Beach. The newspapers reported that the City Beach Surf Life Saving Club spotted a European swimmer far from the shore. They rowed out to rescue him, but he refused to be brought into the boat, saying he was training to swim to Rottnest Island, about twelve miles offshore. Every day he was seen at the beach, swimming farther out each time. The newspapers dubbed him ‘Mr. Shark Bait’ and published cartoons of sharks talking about him and licking their lips. One day he went out on his swim and disappeared; he has not been seen since. The swim to Rottnest Island had previously been made by swimmers in cages that protected them from predators. (More recently, the swim has become an annual event with hundreds of people taking part, while attending boats keep predators well away.) The grey nurse shark is ubiquitous in Australian waters. It has a reputation for attacking people, but that reputation is largely undeserved. Many popular Australian beaches have shark nets to protect swimmers from roving sharks. The blue-ringed octopus is a cute little thing, about the size of a human hand. They turn bright blue when angry, and their bite is almost always lethal; there is no known antivenom. Sea wasps and other jellyfish live in tropical waters around Australia; a sting from their tentacles can be extremely painful or fatal. Sea snakes should all be regarded as venomous. Rip currents are fast-moving currents that can sweep along a beach and carry swimmers out to sea. Any beach can have them; Bondi Beach, Australia’s most famous, has them regularly. If caught in one, don’t fight it; just wait for the surf life saving club to rescue you. Typically, the current might sweep you half a mile out to sea.
A Guide To Treating Common Soccer Injuries Given the popularity of soccer and the fact that most soccer injuries involve the lower extremity, this author offers pertinent treatment tips on ankle sprains and strains, pearls on proper shoe fit and keys to orthotic modifications. A large-scale 2006 Federation Internationale de Football Association (FIFA) survey showed that soccer is the world’s number one sport. There are 265 million male and female players, and 5 million referees and officials. A grand total of 270 million people — or 4 percent of the world’s population — are actively involved in the game of soccer.1 Of the 18 million Americans who play soccer, 78 percent are under the age of 18. In the 1990s, soccer was recognized as the fastest growing college and high school sport in the United States.2 The popularity of soccer has grown, especially among women, since the U.S. women’s soccer team won the World Cup in 1991 and 1999. In the United States, 35 percent of soccer players are women, one of the highest percentages of female participation in soccer in the world. Female participation in high school soccer has risen by more than 177 percent since 1990.3 Previous studies have shown that soccer has a high injury rate. More injuries occur in soccer than in field hockey, volleyball, handball, basketball, rugby, cricket, badminton, fencing, cycling, judo, boxing and swimming. Most soccer injuries occur to the lower extremities, especially the ankle.4 Player-to-player contact is reportedly a contributing factor in 44 to 74 percent of soccer injuries. Most ankle sprains occur during running, cutting and tackling activities. Sixty-seven percent of foot and ankle injuries occur from direct contact. Significantly more injuries involve a force from the lateral or medial direction in comparison to an anterior or posterior direction. Researchers have noted that the weightbearing status of the injured limb is a significant risk factor.5 High school soccer participation in the United States increased fivefold over the last 30 years. With increased participation comes increased injury incidence. Overall, the most frequent diagnoses were incomplete ligament sprains (26.8 percent), incomplete muscle strains (17.9 percent), contusions (13.8 percent) and concussions (10.8 percent). The most commonly injured body sites were the ankle (23.4 percent), knee (18.7 percent), head/face (13.7 percent) and thigh/upper leg (13.1 percent).6 Sprains, contusions and strains of the lower extremities were the most common injuries in men’s collegiate soccer, with player-to-player contact as the primary injury mechanism during games.7 Ankle ligament sprains, internal derangements of the knee and concussions were the most common injuries in women’s collegiate soccer.8 Based on the prevalence of soccer injuries, it is important for podiatrists to be aware of common treatments for soccer injuries as well as recommended rehabilitation after injury. How To Address Ankle Sprains The single most common injury in soccer is the ankle sprain. Most ankle sprains are inversion sprains (85 percent), mainly involving the lateral ligament complex.9 In addition to standard radiographic views for ankle injuries (weightbearing AP, medial oblique and lateral ankle and foot), our office utilizes musculoskeletal ultrasound to aid in the diagnosis. We have found musculoskeletal ultrasound to be invaluable in the diagnosis of sports medicine foot and ankle injuries.
Magnetic resonance imaging usually confirms ultrasound findings, especially when you cannot easily visualize or determine the degree of a ligament tear. We will also use MRI when we suspect intra-articular or osteochondral lesions, or when there is a history of ligamentous laxity due to chronic sprains in the absence of acute injury. Podiatrists can manage minor acute lateral and medial ankle injuries conservatively with compression, analgesics, home exercises and time away from the playing field for a period of two to four weeks. For more severe injuries, including anterior talofibular, calcaneofibular or deltoid ligament tears or ruptures with or without intra-articular osteochondral defects, initial treatment will consist of immobilization and edema reduction. One may accomplish this by applying an Unna Boot soft cast for five days if the edema is severe. We will also use a CAM walker for immobilization for an additional four to six weeks. This will be followed by physical therapy twice a week for an additional six to eight weeks. Physical therapy, which includes proprioceptive coordinative training, helps with both injury prevention and rehabilitation, and is essential in returning injured players to the field.11 The most common risk factor for ankle sprain in sports is a history of a previous sprain.9 Our clinic dispenses an Air-Stirrup® Ankle Brace (Aircast) to soccer players suffering a moderate or severe sprain. They wear the brace during play for at least six months after an ankle injury. This simple treatment can mean the difference between patients being able to continue to play uninjured and having to withdraw from soccer entirely due to chronic, worsening instability or pain. Conservative Care Tips For Tendon Strains And Plantar Fasciitis We commonly see soccer-related strains of the posterior tibial tendon, anterior tibial tendon, peroneal tendons and Achilles tendon, as well as plantar fasciitis. Again, in these cases, treatment is symptomatic and includes compression sleeves, rest, ice, compounded topical anti-inflammatories and modified activity.12 In more severe or chronic cases, immobilization, bracing and taping can be effective, as can physical therapy. For taping, we instruct our players on how to use RockTape for injury prevention and rehabilitation. During the 2008 Summer Olympics, taping came to the forefront when Olympians competing in sports ranging from volleyball to water polo sported a brightly colored athletic tape called Kinesio Tape®. In 2009, RockTape (www.rocktape.com) became a major competitor to Kinesio. However, while one needs to be certified to promote and use Kinesio taping, this is not the case with RockTape. RockTape has how-to videos on its Web site as well as a brochure packaged with each roll offering instructions on how to apply the tape. RockTape also sells wholesale to podiatrists, so patients can purchase it directly from your office rather than buying it somewhere else. RockTape primarily markets to athletes, and our soccer players routinely respond favorably to this easy, inexpensive treatment. Similar to Tensoplast® (formerly Elastoplast®), RockTape is an elastic tape that one can use for support and compression. The best part is that you can teach your patients how to apply it themselves, and the tape stays on for four to five days. In addition to the aforementioned treatments of bracing, compression sleeves, taping and physical therapy, there are additional considerations for soccer players that can significantly reduce the likelihood of injury.
What You Should Know About Soccer Shoes Soccer shoes are typically designed to fit snugly for better feel and control of the ball. Unfortunately, too many players select shoes that are too narrow and/or too short, and this contributes to injury. When the fit is too tight, neuroma pain, bunion pain, hammertoe pain, heloma molle or heloma durum can occur. When the fit is too short, ingrown toenails, subungual hematoma, onychodystrophy, plantar fasciitis, turf toe and sesamoiditis can occur. An improperly fitting or excessively worn soccer shoe can cause ankle instability, sprains, strains or even fractures in an otherwise healthy foot. The majority of soccer shoes are purchased online through recommendations from athletic trainers, sponsors, teammates or advertisers. The American Academy of Podiatric Sports Medicine (www.aapsm.org) currently lists 94 recommended shoes for running and only four for soccer. The lack of information available to help patients find appropriately fitting soccer shoes is shocking to me. Fortunately, this is an area of injury prevention where podiatrists can have a significant impact. All that is required is applying the same principles used for running shoes to selecting soccer shoes. 1. Size/fit. Easily 85 percent of patients whose feet are measured in our office are wearing the wrong size shoe. In those cases in which we cannot directly measure foot size in the office, we send patients to a running shoe store or other specialty shoe store to be properly measured and sized. One can subsequently apply this information to fitting soccer shoes. Soccer cleat sizes do not indicate widths, and this can make it difficult to obtain a good fit. Since many medium-width cleats will run either wide or narrow, you can use a side-by-side comparison to identify volume differences among different pairs of shoes. For example, the Adidas Predator Absolion TRX TF has a narrow lasted cleat while the Nike Mercurial Victory II has a wide lasted cleat. Comparing the uppers, notice how much wider the throatline (opening) of the Nike is than that of the Adidas. Also notice the difference in toebox shape and width: the Adidas is more tapered around the toes while the Nike is more rounded, accommodating a wider forefoot. In evaluating the lower, you can also see how much wider the forefoot and waist are in the Nike than in the Adidas. If your patients are having difficulty finding the perfect fit, showing them this method should help. These principles also apply to other shoes (tennis, basketball, football, etc.) that only come in medium widths. 2. Design. There are soccer shoes designed for firm ground, hard ground, soft ground, indoor courts and turf, depending on the intended playing surface. Most non-professional players own both a pair of firm ground cleats and a pair of non-cleated turf shoes. Firm ground cleats have molded studs for traction and stability, and are the most common type of soccer shoe worn. Turf shoes have a firm rubber outsole with raised patterns designed for indoor soccer or turf. Many players wear these different types of soccer shoes interchangeably. However, similar to snow versus road tires, each type is designed for a specific playing surface and should only be used on that surface. 3. Wear. Running shoe companies have emphasized the need for running shoe replacement after 500 miles of wear.
Soccer players have no such advocate to help them determine when to replace their shoes. Players can easily sustain injury if the shoes are not replaced when they have been excessively worn. Again, similar to educating runners on running shoes, the podiatric physician or staff can teach patients how to evaluate their soccer shoes for excessive wear. Keys To Successful Custom Orthoses For Soccer Players I always ask my patients who wear custom orthoses what sports they play. If soccer is one of their sports and they have biomechanical issues that could potentially contribute to a delay in healing or injury, I almost always prescribe custom orthoses for their soccer shoes. Soccer injuries for which I have found orthoses to be useful include plantar fasciitis, posterior tibial tendinitis, medial tibial stress syndrome, sesamoiditis and second metatarsophalangeal joint pre-dislocation syndrome. The only challenge an orthosis may pose is fitting into a soccer shoe. Considerations such as a medial heel skive, inversion or minimal cast fill with prescription orthoses should first and foremost be based on patient pathology. Then one may proceed to consider modifications to a standard orthosis prescription to ensure fit into a soccer shoe. I prefer a soccer orthosis shell to be manufactured out of semi-rigid polypropylene. This makes it easy to modify (including grinding) in the office if necessary. Due to the low profile of a soccer shoe, I will prescribe an 8 to 10 mm heel cup depth and normal width, with or without a medial flange and a strip post. Alternatively, you can send the shoe with the orthosis prescription to the lab for an exact fit if you desire. If the orthosis does not need additional forefoot padding, I will order the device without a top cover. Otherwise, I will extend the top cover to the sulcus to minimize excess forefoot bulk. Soccer injuries are not unlike most other sports medicine injuries podiatric physicians diagnose and treat. What is somewhat unusual, however, is the lack of knowledge our patients have regarding soccer injury treatment and prevention. With the ever-increasing number of new players coming to the sport, it is important that our offices and clinics develop protocols as well as patient education tools to minimize injury and maximize injury prevention in this popular American sport. Dr. Sanders is an Adjunct Clinical Professor in the Department of Applied Biomechanics at the California School of Podiatric Medicine at Samuel Merritt University. She is in private practice in San Francisco. Dr. Sanders writes a monthly blog for Podiatry Today. For more information, please visit www.podiatrytoday.com/blogs/594. Dr. Sanders also blogs at www.drshoe.wordpress.com. 1. 2006 Big Count: A FIFA Survey. Available at http://www.fifa.com/worldfootball/bigcount/index.html. Accessed July 30, 2012. 2. Sports in America — soccer. Available at http://usa.usembassy.de/sports-soccer.htm. Accessed July 23, 2012. 3. Levitan P. Watching soccer: a popular U.S. pastime. Available at http://usa.usembassy.de/etexts/sport/feature_soccer4.htm. Published July 9, 2008. Accessed July 23, 2012. 4. Wong P, Hong Y. Soccer injury in the lower extremities. Br J Sports Med. 2005;39(8):473-482. 5. Giza E, Fuller C, Junge A, Dvorak J. Mechanisms of foot and ankle injuries in soccer. Am J Sports Med. 2003;31(4):550-554. 6. Yard EE, Shroeder MJ, Fields SK, et al. The epidemiology of United States high school soccer injuries, 2005-2007. Am J Sports Med. 2008;36(10):1930-1937. 7. Agel J, Evans TA, Dick R.
Descriptive epidemiology of collegiate men’s soccer injuries: National Collegiate Athletic Association Injury Surveillance System, 1988-1989 through 2002-2003. J Athl Train. 2007;42(2):270-277. 8. Dick R, Putukian M, Agel J. Descriptive epidemiology of collegiate women’s soccer injuries: National Collegiate Athletic Association Injury Surveillance System, 1988-1989 through 2002-2003. J Athl Train. 2007;42(2):278-285. 9. Surve I, Schwellnus MP, Noakes T, et al. A fivefold reduction in the incidence of recurrent ankle sprains in soccer players using the sport-stirrup orthosis. Am J Sports Med. 1994;22(5):601-606. 10. Milz P, Milz S, Steinborn M, et al. Lateral ankle ligaments and tibiofibular syndesmosis: 13-MHz high-frequency sonography and MRI compared in 20 patients. Acta Orthop Scand. 1998;69(1):51-55. 11. Ergen E, Ulkar B. Proprioception and ankle injuries in soccer. Clin Sports Med. 2008;27(1):195-217. 12. Mazières B, Rouanet S, Velicy J, et al. Topical ketoprofen patch (100 mg) for the treatment of ankle sprain: a randomized, double-blind, placebo-controlled study. Am J Sports Med. 2005;33(4):515-523.
CICLing 2012, www.cicling.org/2012, will be held in New Delhi, India, on March 11–17, 2012. Submission deadline: October 31. Keynote speakers: Srinivas Bangalore, John Carroll, Salim Roukos, Bonnie Webber. What is computational linguistics? Computational linguistics is the scientific study of language from a computational perspective. Computational linguists are interested in providing computational models of various kinds of linguistic phenomena. These models may be "knowledge-based" ("hand-crafted") or "data-driven" ("statistical" or "empirical"). Work in computational linguistics is in some cases motivated from a scientific perspective, in that one is trying to provide a computational explanation for a particular linguistic or psycholinguistic phenomenon; in other cases the motivation may be more purely technological, in that one wants to provide a working component of a speech or natural language system. Indeed, the work of computational linguists is incorporated into many working systems today, including speech recognition systems, text-to-speech synthesizers, automated voice response systems, web search engines, text editors, and language instruction materials, to name just a few. Popular computational linguistics textbooks include: - Christopher Manning and Hinrich Schütze (1999) Foundations of Statistical Natural Language Processing. Cambridge, Massachusetts, USA: MIT Press. Also see the book's supplemental materials website at Stanford. - Daniel Jurafsky and James Martin (2008) Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, Second Edition. Prentice Hall.
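To make the "knowledge-based" versus "data-driven" distinction concrete, here is a minimal sketch in Python; the rule, the toy corpus, and the function names are all invented for illustration and come from none of the textbooks cited:

```python
# Two styles of modeling the same language data (illustrative toy only).
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Knowledge-based ("hand-crafted"): an explicit rule written by a linguist.
def is_determiner(word: str) -> bool:
    return word in {"the", "a", "an"}  # hand-listed closed class

# Data-driven ("statistical"): estimate bigram probabilities from data.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(prev: str, word: str) -> float:
    # No smoothing; a real model would smooth unseen events.
    return bigrams[(prev, word)] / unigrams[prev]

print(is_determiner("the"))      # True
print(bigram_prob("sat", "on"))  # 1.0 in this toy corpus
```

The hand-crafted rule encodes the linguist's knowledge directly, while the statistical model improves simply by being given more text.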
A newly opened dining hall at Zhejiang University offers a cardless dining experience at lunchtime. After a scan by the dining hall's facial recognition scanner, a diner is paired with a chip-embedded food tray and can start taking food from the buffet. Each dish has a sensor recording the price, while the table the tray is placed on works as a scale. As the diner walks through the buffet area, the price of the meal is calculated and the money is automatically deducted from the corresponding campus card, which the user registered in advance. And it gets even better! Once the meal is taken, a report specifying the total calories and the proportions of protein, carbohydrates and fat is sent to the diner's mobile device, encouraging them to eat a more balanced diet. "It is very convenient and reminds us to save food and eat healthily," said Wang Kai, a professor at the university. "The new technology has also improved the efficiency of dining." "The new system weighs food accurately and can help remind people to treasure food and reduce waste," said Xia Xuemin, a researcher with the Public Policy Research Institute of Zhejiang University. The system was jointly developed by the institute and the school of public health at the university.
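The article gives no technical details of the university's actual system, but the billing flow it describes (per-dish price sensors, automatic deduction, a nutrition summary) can be sketched roughly as follows; every class name, field, and number here is hypothetical:

```python
# Illustrative sketch only; not the university's published design.
from dataclasses import dataclass, field

@dataclass
class Dish:
    price: float      # recorded by the per-dish sensor
    calories: int

@dataclass
class Tray:
    card_balance: float           # campus card registered in advance
    dishes: list = field(default_factory=list)

    def take(self, dish: Dish):
        self.dishes.append(dish)  # sensed as the diner passes the buffet

    def checkout(self) -> dict:
        total = sum(d.price for d in self.dishes)
        self.card_balance -= total  # automatic deduction
        calories = sum(d.calories for d in self.dishes)
        return {"charged": total, "calories": calories}  # sent to the phone

tray = Tray(card_balance=50.0)
tray.take(Dish(price=6.5, calories=320))
print(tray.checkout(), "remaining:", tray.card_balance)
```

In the real system the nutrition report would also break out protein, carbohydrates and fat, as the article notes.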
TRT Podcast #27: What every teacher should know about dyslexia Do you suspect that one of your students has dyslexia? Here’s what every teacher should know about this common learning disability! Listen to the episode here Full episode transcript As I look back to the students that I taught, I can picture one particular little boy who I'm sure had dyslexia. He was bright, articulate, and very kind. I remember, on one particularly rough day of teaching, he gave me this little blue gem shaped like a heart and said, "This is for you because you're the nicest teacher I've ever had." I'll admit I was not feeling like a nice teacher that day, but this sensitive little guy knew that I needed some encouragement. He was all ears during whole class read alouds, and his language comprehension was excellent, but he struggled to get words off the page. At the time, I was a balanced literacy teacher. I advised his parents to read to him more, which they were already doing, and I gave him more practice with leveled texts. He was a hard worker, and he had committed parents, but nothing we did made a whole lot of difference. That little boy is now in his mid-twenties, and I sure hope he found a teacher who gave him more help than I did. Today's episode is what I wish that I knew about dyslexia. Number one, dyslexia is real and it's more common than you might think. Recently, a friend of mine shared with me that her graduate school professor told her that dyslexia doesn't exist. Lest you think this was decades ago, it was 2015. 2015! Dyslexia is real. People can have mild, moderate, or severe dyslexia. The Yale Center for Dyslexia and Creativity says that up to 20% of students have it. It is the most common learning disorder. There is much we can do for students with dyslexia, but there's no cure. Our students will not grow out of it. Number two, dyslexia is a language-based learning disorder. Old myths die hard, but dyslexia is not about seeing letters or words backward. It's most commonly due to a difficulty in phonological processing. This is the International Dyslexia Association's official definition, "Dyslexia is a specific learning disability that is neurobiological in origin. It is characterized by difficulties with accurate and/or fluent word recognition and by poor spelling and decoding abilities. These difficulties typically result from a deficit in the phonological component of language that is often unexpected in relation to other cognitive abilities and the provision of effective classroom instruction. Secondary consequences may include problems in reading comprehension and reduced reading experience that can impede growth of vocabulary and background knowledge." Phew! If that felt like a mouthful, that's because it is! Dyslexia is difficult to define, and experts don't always agree on the definition. Number three, early screening helps us know which students are at risk for dyslexia. Clues for dyslexia can appear even before a child starts school, so it's imperative that teachers use a screener to detect red flags. Screeners do not diagnose dyslexia, but they do tell us which students would benefit from more testing. I'll make sure to link to some screeners you could try in the show notes. Number four, it's a big mistake to take a wait-and-see approach when it comes to dyslexia. Early identification is crucial so students can get the help they need. We've learned that the brain responds best to intervention when children are young. As we get older, our brains get less plastic. 
We can still help older dyslexic readers, but the process will be harder than it would've been when they were young, and, by that time, children have often become very discouraged, making it harder to reach them. Many students with dyslexia need one-on-one tutoring so they can move forward. As a teacher, it's your job to alert parents to this need. Number five, students with dyslexia can learn to read with the right approach. Students with dyslexia need a structured literacy approach. The good news is, all students benefit from structured literacy. It uses explicit, systematic teaching to teach phonology, sound-symbol relationships, syllables, morphology, syntax, and semantics. Structured literacy is very different from the way I used to teach reading, using predictable leveled texts and three-cueing. Balanced literacy approaches will not teach dyslexic students to be successful readers. Number six, students with dyslexia need reasonable accommodations. Here are some that make sense in a primary classroom: allow more time for test-taking, repeat directions as needed, use daily routines so it's easier for students to know what's expected, give small step-by-step instructions, build in daily review, and provide books on audio. Finally, number seven, YOU can be the teacher that makes all the difference. When you educate yourself about dyslexia, point parents in the right direction, and change the way you teach so you reach all learners, you will make an incredible lasting impact on a child's life. Here's how to learn more: read "Conquering Dyslexia" by Jan Hasbrouck. It's short, easy to read, and practical. You can read it in a weekend. Read "Dyslexia Advocate!" by Kelli Sandman-Hurley to learn how to help a child with dyslexia in the public education system. Bookmark the International Dyslexia Association's website; its printable fact sheets are super helpful. And finally, read my blog series all about dyslexia, which I'll link to in the show notes for this episode. You can find those show notes at themeasuredmom.com/episode27. See you next time! Recommended dyslexia screeners - Lexercise Screener for schools - This Reading Mama's Screener for teachers and parents - EarlyBird Education Recommended dyslexia resources - Conquering Dyslexia by Jan Hasbrouck - Dyslexia Advocate! by Kelli Sandman-Hurley - The International Dyslexia Association website Check out my blog series about dyslexia with This Reading Mama! - Misconceptions about dyslexia - What is dyslexia? - Signs of dyslexia - Getting tested for dyslexia - Using explicit instruction - Systematic teaching - Repetition and review - Multi-sensory teaching - What every teacher needs to know about dyslexia
Prosodic features are features that appear when we put sounds together in connected speech. It is important to teach learners prosodic features, as successful communication depends as much on intonation, stress and rhythm as on the correct pronunciation of individual sounds. Intonation, stress and rhythm are all prosodic features. In the classroom One way to focus learners on various aspects of prosody is to select a text suitable for reading aloud - for example, a famous speech - and ask learners to mark where they think pauses, main stress, linking, and intonation changes occur. They can then practise reading it aloud.
“Little did the Mitchell family know that their furry family members had other plans. They were coming on holidays too! Milly and Molly liked the thought of lying in the sun by a pool every day – it sounded a lot more exciting than the cattery. They couldn’t wait to see if Hawaii had catnip cocktails and rooms full of scratching poles.” Writing prompts are a great way to engage students with writing and encourage creativity. They provide students with choices and the opportunity to draw on ideas from known stories and personal experiences. Effective writing prompts create situations that interest students and provide them with some direction for their writing. Selecting prompts that are consistent with the interests and life experiences of the students in your class will assist them in achieving their writing goals. Writing prompts can be used for independent writing, guided writing, quick writes, partner writing, homework tasks and daily warm-ups. They can also be used to help identify different text types, genres and purposes for writing. When using writing prompts in the classroom, start by modelling how to use the prompt by brainstorming ideas and planning a piece of text. Jointly construct a text with the students before carrying out independent writing tasks. Encourage students to gather their ideas and plan their piece of text before writing. Five ways to spark imagination in the classroom using writing prompts are the use of stimulating photographs, sentence starters, videos, story generators and story boxes. 1. Photographs Photographs can be fantastic writing prompts. Search the internet for a range of writing prompt photos, or simply choose an interesting photograph from a magazine or newspaper. Display the photograph on the board or print off a smaller copy for students to glue into their books. Photographs encourage students to think about who or what is in the photo, when and where the photo was taken, what happened before the photo was taken and what might have happened after the photo was taken. Students will use their imaginations to write a range of interesting stories. Alternatively, use the photo as the lead for a newspaper article or topical debate. To offer choice, provide students with a variety of photographs on the same theme. To make this easy, we have created an easy-to-use Visual Writing Prompts widget that allows your students to select a photo at random. 2. Sentence Starters Sentence starters are a great resource for setting the scene for an imaginative piece of text. They encourage students to think about the main character, the complication and the resolution of a story. Sentence starters give students the responsibility of continuing or finishing a story in an interesting way. Using sentence starters as a writing prompt can be as simple as providing students with a copy of the first sentence of a piece of text. For variety, you can read the start of an unknown text to the students and ask them to write about what they think might happen next. 3. Videos Videos are another great source of writing prompts to engage students with their writing. Select an appropriate video snippet from a movie, television show, documentary, news story, YouTube clip or music video. Play the short section of video to your class and then discuss with your students the topics and events that they viewed in the video. Use the video as a stimulus for writing a range of imaginative, informative and persuasive text types for a variety of current topics.
Students may also like to use their written texts to create their own videos. Encourage them to demonstrate what might have happened before or after the video they viewed, or to create their own version of the snippet shown. 4. Story Generators Story generators are fun writing prompts to use with your class when writing imaginative texts. Use a range of electronic generators, posters and story board games to select a genre, character, setting and complication. Encourage students to use the selected elements from the generator, along with their imagination, to create an interesting story. As a class, create your own story generator using people and places of interest to the students in your class (a simple code sketch of one appears at the end of this post). By helping to create their own story generator, students will be more engaged with their writing and show a greater interest. Check out our Random Sentence Starter generator. Along with the story starter text, the generator has gorgeous visuals to inspire your students. You can even add your own custom story starters to the generator. 5. Story Boxes Story boxes are another fun way to prompt students with their writing. Collect a range of items including pictures, toys and games on a particular topic that will interest the students. Prior to writing activities, place the items in a decorative box and make it accessible for students to explore. By handling the items in the box, students will be encouraged to use their imagination to write an interesting piece of text. Alternatively, allow students to bring in items from home to make their own story boxes to use in the classroom as a writing prompt. I hope that these ideas will help you to get creative with writing and engage your students in the writing process. Do you have any other creative ways to kick-start your students’ writing? Share them in the comments below!
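As promised in the story generators section above, here is a minimal sketch of a classroom story generator in Python; the genre, character, setting and complication lists are placeholders, to be swapped for the people and places your students know:

```python
# A tiny story-prompt generator (all word lists are placeholders).
import random

genres = ["mystery", "adventure", "fairy tale"]
characters = ["a curious cat", "a retired pirate", "the new kid at school"]
settings = ["an abandoned lighthouse", "a rainy market", "the school library"]
complications = ["loses a treasured map", "hears a strange noise at night",
                 "finds a door that wasn't there yesterday"]

def story_prompt() -> str:
    # One random element from each category, combined into a prompt.
    return (f"Write a {random.choice(genres)} about {random.choice(characters)} "
            f"in {random.choice(settings)} who {random.choice(complications)}.")

print(story_prompt())
```

Running it a few times gives each student a different prompt, and adding new list entries is an activity the class can do together.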
One of the late Sen. Ted Kennedy’s (D-MA) last legislative fights was over the overuse of livestock antibiotics. “It seems scarcely believable that these precious medications could be fed by the ton to chickens and pigs,” he wrote in a bill called the Preservation of Antibiotics for Medical Treatment Act of 2007 (PAMTA). Over 70 percent of antibiotics go to livestock, not to people, said the PAMTA bill, a figure the drug industry disputes. “These precious drugs aren’t even used to treat sick animals. They are used to fatten pigs and speed the growth of chickens. The result of this rampant overuse is clear: meat contaminated with drug-resistant bacteria sits on supermarket shelves all over America,” said Kennedy. The Centers for Disease Control and Prevention has declared this week “Get Smart About Antibiotics Week” and is meeting with a coalition of 25 health organizations to address the overuse of antibiotics. Excessive use of antibiotics in livestock operations and health care settings creates “superbugs” which no longer succumb to the antibiotics that once destroyed them. Anyone who has rented an apartment with both an exterminator and bugs has firsthand experience of the principle of antibiotic resistance: organisms can become impervious to what is supposed to kill them, and bigger and stronger for it. Resistant mosquitoes, it is said, slap you back when you slap them. Antibiotics are overused by hospitals, clinics, schools, public facilities and households, and antibacterial agents are unnecessarily put in dish and laundry detergent, soap, and even toothpaste by the chemical industry. In fact, antibiotics are overused and misused anytime they are used preventatively, for potential infections instead of existing bacterial infections. Doctors and patients who “treat” colds and viruses with antibiotics also encourage antibiotic resistance, because these conditions are not caused by bacteria. And people who stop taking antibiotics (when conditions are bacterial) before the medication is gone because they feel better also contribute to antibiotic resistance. The few bacteria that survive the antibiotic “bath” are stronger for the challenge and go on to cause more trouble. It is the ultimate biological demonstration of the maxim “That which doesn’t kill you makes you stronger.” Still, most health professionals agree that the use of antibiotics in livestock operations is the biggest source of resistance. The drugs are used to make animals grow faster and to keep infections from erupting in crowded “factory” conditions, and they do not appear on food labels. Efforts by medical organizations and the FDA to curtail farm use of antibiotics have gone nowhere, even though the drugs involved include the same ones needed to treat urinary tract, intestinal, respiratory, ear and skin infections in humans, not to mention TB and STDs. Livestock antibiotics are a big source of revenue for Big Pharma. Thanks to the overuse of antibiotics, MRSA (methicillin-resistant Staphylococcus aureus) infections kill 20,000 people a year, and Clostridium difficile, a serious intestinal bug, is developing resistance. Resistant Acinetobacter baumannii so afflicted US troops in Iraq that it was dubbed “Iraqibacter,” and vancomycin-resistant enterococci (VRE) have developed because of the use of the antibiotics virginiamycin and avoparcin in livestock. VRE infections have been reported in 33 states and in commercial chicken feed, reported the Hartford Advocate. And there is another worry.
“Overuse of antibiotics could be fuelling the dramatic increase in conditions such as obesity, type 1 diabetes, inflammatory bowel disease, allergies and asthma, which have more than doubled in many populations,” writes Martin Blaser, professor of microbiology at New York University’s Langone Medical Center, in the journal Nature. Blaser’s concern centers on H. pylori, a bacterium that has been disappearing from people’s stomachs as antibiotic use rises. “Indeed, large studies we performed have found that people without the bacterium are more likely to develop asthma, hay fever or skin allergies in childhood,” writes Blaser. As H. pylori “has disappeared from people’s stomachs, there has been an increase in gastroesophageal reflux, and its attendant problems such as Barrett’s oesophagus and oesophageal cancer. Could the trends be linked?” writes Blaser. Proton pump inhibitors such as Prilosec, Prevacid and Protonix make matters worse, altering the composition and capacity of bacteria in the colon, researchers report. Because of their links to resistant infections and possibly obesity, asthma and gastrointestinal problems, it is good to see health professionals curtail the use of antibiotics. When will livestock operators follow suit?
Sep 01, 2012 While reading Franco Moretti’s Graphs, Maps, Trees: Abstract Models For A Literary History I came across his use of geometric (and geographic) radial maps to document the development of a small village. These village story maps lead Moretti to visualize, among other events, the arrival of industrialization in the village. [Figure: an example of the radial geographic visualization.] Moretti’s examples made me wonder if the same mapping technique could be applied to literature outside the village story genre. To test this idea I built a semi-automated system that uses text extraction to pull out possible geographic names. Using n-gram frequency, the tool then pins frequent words to each location in order to provide some context surrounding what happened there. The system then uses GPS lookup to find and place the locations in relation to each other. I say semi-automated because there is still too much noise to select the locations and look them up automatically. I made a crude command-line input system to present the results, allowing for approval of each location, its GPS coordinates and the terms associated with it. The result is a data file that can be read by my d3.js SVG mapping tool, which plots the locations in a radial configuration similar to Moretti’s maps. The maps are to scale, although that scale changes (the geographic distance involved in Around the World in 80 Days is quite different than in Dubliners). The inner circle is the most mentioned location, which becomes the center of the map. The outer ring represents the second most mentioned location. The results are interesting. I feel they do not work as well as Moretti’s maps, but by visualizing the story geographically, useful patterns emerge that define the work. Joyce’s Dubliners graphs to a small group of tightly related points. This makes sense, as the book is about the local neighborhoods and people. By contrast, Verne’s Around the World in 80 Days presents an elongated string of locations spanning thousands of miles.
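As a rough illustration of the extraction step, the sketch below uses capitalized mid-sentence words as stand-ins for place names and counts nearby words as context. This is an assumption-laden simplification: the real pipeline would need proper named-entity recognition plus a geocoding service for the GPS lookup, and the input file name is hypothetical.

```python
# Crude place-name candidates and n-gram-style context terms (toy version).
import re
from collections import Counter

text = open("novel.txt", encoding="utf-8").read()  # hypothetical input file
tokens = re.findall(r"[A-Za-z']+", text)

# Candidate locations: capitalized words that do not start a sentence.
candidates = Counter()
for sentence in re.split(r"[.!?]+", text):
    words = re.findall(r"[A-Za-z']+", sentence)
    candidates.update(w for w in words[1:] if w[0].isupper())

def context_terms(place: str, window: int = 3) -> Counter:
    """Frequent words appearing near each mention of a candidate place."""
    ctx = Counter()
    for i, tok in enumerate(tokens):
        if tok == place:
            ctx.update(t.lower()
                       for t in tokens[max(0, i - window):i + window + 1]
                       if t.lower() != place.lower())
    return ctx

# The most-mentioned location would become the center of the radial map.
for place, count in candidates.most_common(5):
    print(place, count, context_terms(place).most_common(3))
```

A geocoding lookup and the d3.js radial plot would then consume this output; the manual approval step described above sits between the two.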
Republic Day: February 23 Republic Day is a public holiday in Guyana. Guyana celebrates Republic Day, or Mashramani (abbreviated as ‘Mash’), on February 23. Guyanese citizens enjoy festive and colourful parades, much like those of a carnival, along with music and games. Guyanese cook on this special day to mark the birth of a new country. Mashramani should not be confused with the country’s Independence Day on May 26. After gaining independence from the British in 1966, Guyana established itself as a sovereign democratic republic within the Commonwealth in 1970. The word Mashramani is an Amerindian word which means ‘celebrating a job well done’. It is the most celebrated holiday in Guyana, as floats, parades, masquerade parties and dancing flood the streets, a scene which reflects the country’s proud African heritage. History of Guyana’s Republic Day For most foreigners, celebrating Guyanese Republic Day might be confused with celebrating Independence Day. While the country’s Independence Day marks the British grant of the sovereignty Guyana had long wanted, Republic Day is when the Guyanese commemorate the establishment of a sovereign republic after independence from British rule. Carnival-like celebration in Guyana had long been practiced in Mackenzie by local members of Junior Chamber International (JCI), or Jaycees. The celebration of Guyana’s Republic Day coincided with the establishment of the Jaycees Republic Celebrations Committee, headed by Basil Butcher; Jim Blackman was appointed to do the job in his stead because Butcher had to join the West Indies Cricket Team at the time. Blackman, along with other personnel, organized the first formal government-sponsored carnival activity to happen in Guyana. Butcher was the one who initially suggested that the name of the festival be based on an Amerindian word. Amerindian languages are spoken by the indigenous peoples of the Americas, who are sometimes referred to as Native Americans or American Indians. One of Butcher’s personnel, Mr. Allan Fietdkow, a native Amerindian, helped come up with a name for the festival by consulting his grandfather. Ultimately, the word Mashramani was suggested. The first Mashramani, on February 23, 1970, was a huge success and well accepted by the locals; because of that, a government official named David Singh suggested that the festival be brought to the country’s capital, Georgetown. The suggestion was later approved by the president of Guyana at the time, President Forbes Burnham. The celebration of Mashramani takes place in various regions of Guyana, including Berbice, Linden, and Georgetown, but the largest concentration of events usually happens in Georgetown, due in part to sponsorship from both private and public institutions and individuals. Guyana’s Republic Day: Traditions, Customs and Activities Guyana’s constitutional milestone is celebrated with float parades, dancing and singing, and other fun activities. It aims to mobilize professionals, private individuals, and the youth to participate in the celebration of the country’s political success. The three-day festival is joined by people from all walks of life coming from different regions of Guyana. The Mardi Gras-like celebration encourages both men and women to participate in street dancing and parades wearing colourful costumes.
No other holiday in Guyana is as eagerly anticipated as the fun-filled, carnival-like celebration of Republic Day.
The ADHD-Autism Link in Children ADHD and autism share many common traits and behaviors. Download this 42-page guide to learn how to differentiate between ADD and ASD, and how to ensure an accurate diagnosis and helpful supports for your child. Autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD or ADD) share many overlapping behaviors. In fact, most families of children with the condition previously known as Asperger’s syndrome (AS), an autism spectrum disorder, receive an ADHD diagnosis — or misdiagnosis — before a pediatrician correctly identifies autism. Most children on the autism spectrum have at least some symptoms of ADHD. Use this free 42-page download as a guide to understanding the distinctions and similarities between ASD and ADHD in children. Access case studies illustrating the ADHD-autism link and learn: - How to receive an accurate diagnosis for your child - Strategies for overcoming your child’s social challenges - How to determine if your child with ADHD is on the autism spectrum - Effective supports for children with autism and ADHD - The relationship between autism, ADHD, and anxiety in children NOTE: This resource is for personal use only.
In the Case of Wholesale Food Distributors, It’s All About Location Posted: December 11, 2013 “Our model addresses the problem of how to move food from producers to consumers efficiently,” said Hamideh Etamadnia, a member of the Distribution Team and lead author of the study. “In the case of farmers’ markets, producers bring their products directly to consumers themselves. But most products are trucked from processing facilities to wholesale distributors, and then on to retail stores. Our model will help identify the optimal locations of these intermediary distributors so as to minimize transportation costs and to maximize the number of producers and retailers that they serve.” Etamadnia and her colleagues developed the mathematical model to consider transportation and distributor-construction costs, as well as several possible constraints that will allow them to look at various “what if” scenarios. “The constraints that we built into our model allow us to understand how certain changes might affect the optimal locations of wholesale hubs,” she explained. “For example, officials who want to promote regional agriculture could place constraints on the distance food travels, to see how their region’s existing distribution structure would need to change for such a policy to succeed.” To test their model, the researchers applied it to the meat supply chain in the Northeastern U.S., which comprises 433 counties. Using County Business Patterns data from the U.S. Department of Commerce, they identified which counties contain slaughtering or meat-processing facilities, and which counties contain retail meat markets. Inserting these data into their mathematical model, they conducted several simulations to determine the optimal locations for wholesale distributors connecting these slaughter and processing facilities with retail markets. Their results show how optimal distributor locations change based on a number of variables, including distributor size and capacity, road conditions, and gas prices. “Our team can use this model to conduct simulations with other supply chains, such as those for fresh fruits and vegetables,” said Distribution Team Leader, Miguel Gomez. “These simulations will help us to identify the kinds of changes that would be required of the Northeastern U.S. food supply chain in order to support increased regionalization, and what kinds of economic effects such changes would have.”
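The published model is a full optimization over transportation and construction costs with capacity constraints; as a toy illustration of the core hub-location idea only, the sketch below brute-forces which k candidate hub sites minimize straight-line producer-to-hub and hub-to-retailer distances. All coordinates, the value of k, and the cost function are made up for illustration and are not from the study.

```python
# Toy hub-location search (not the authors' model): pick k hubs from
# candidate sites to minimize total producer->hub->retailer distance.
from itertools import combinations
from math import dist

producers  = [(0, 0), (1, 5), (2, 2)]   # hypothetical coordinates
retailers  = [(8, 8), (9, 3), (7, 6)]
candidates = [(3, 3), (5, 5), (6, 2), (4, 7)]
k = 2  # number of distributor hubs to open

def route_cost(hubs) -> float:
    # Each producer and each retailer uses its nearest open hub.
    cost = sum(min(dist(p, h) for h in hubs) for p in producers)
    cost += sum(min(dist(r, h) for h in hubs) for r in retailers)
    return cost

best = min(combinations(candidates, k), key=route_cost)
print("open hubs at:", best, "cost:", round(route_cost(best), 2))
```

Constraints such as a cap on the distance food travels would enter as filters on the candidate hub sets, which is how the "what if" scenarios described above could be explored; the real model additionally weighs construction costs, hub capacities, road conditions and fuel prices.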
- Micro- and nanoparticle-based drug delivery systems are revolutionizing medicine, from minimizing the toxicity of therapeutics to improving their efficacy.
- Through the noise, a class of stars reveals its inner workings; poor ‘social distancing’ identified using NASA space telescope.
- Convolutional neural networks provide stronger predictive performances for pharmacological assays compared to traditional machine learning models.
- Scientists are finding safer ways to keep drug-loaded microrobots attached to cancer tissue.
- A new approach drastically improves the amount of energy harvested from microalgae for sustainable bioenergy.
- A new technique opens up new possibilities in membrane fabrication.
- This year marks the 30th anniversary of the Hubble Space Telescope, which has opened a new eye onto the cosmos and has been transformative for our civilization.
- Scientists have developed a method for precise, fast, and high-quality laser processing of halide perovskites, promising light-emitting materials for solar energy, optical electronics, and metamaterials.
- Novel graphene‐reinforced elastomeric isolators are a viable, low‐cost alternative to reinforce buildings against earthquake damage.
- Uranus, its moons, and rings are all “tipped”, suggesting they formed during a cataclysmic impact early in its history.
We all like a bit of a scare. As children we gathered under blankets with flashlights and spooked each other with ghost stories. Even while in diapers and growing sea legs, we threw on a sheet and screamed “Boo!” For as long as humans have told fairytales, they have been scary. On All Saints Day we satiate the desire to scare and be scared. At other times we indulge in films like The Shining. There’s nothing like a good scare! This strange enjoyment comes from the primal nature of fear. Probing the hidden part of us that lives in fear has always been an important human experience. As the author Lovecraft wrote, “The oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown.” Ghosts are the embodiment of this fear, and since imagination is part of being scared, ghosts can be whatever we want them to be. One definition of a ghost is someone who hasn’t quite made it: they died, but don’t know it yet, so they wander in a confused state. Regardless, whether they’re malevolent or benevolent, ghosts are the principle of life: the spirit of a departed person. In Buddhism they’re called “hungry ghosts” and are portrayed with a large belly and a very skinny throat. They want to eat and feel full, but cannot. No matter what they eat or how much, they’re always hungry. They did not adequately provide themselves with what they needed to flourish in the afterlife. The Zen master Thich Nhat Hanh uses the hungry ghost to describe a psychological condition that plagues many: when there’s a disconnection from our source of life, we begin to wither and become hungry ghosts, wandering and looking for something to revive us. As much as we like to be scared, we also enjoy a good laugh. Ghost stories and jokes are similar—both lead up to the point where either you laugh or shiver!
The idea to build a museum commemorating the history of the civil rights movement in Birmingham, Alabama began with Mayor David Vann in 1978. Inspired by his trips to Israel, Vann believed that healing could begin with remembrance. His successor and Birmingham's first African American mayor, Richard Arrington Jr., appointed a Civil Rights Museum Study committee and created the first organization to begin planning the city-sponsored museum. In 1986, the idea of a museum expanded into a Civil Rights Institute, a name meant to imply action-oriented ideals. Arrington created a task force that crafted a mission statement, and an architectural firm was hired along with planning consultants to begin laying the groundwork on property where the 1963 marches and civil rights demonstrations had been held, in an effort to document strides toward freedom. The Institute opened to the public in 1992 and has since welcomed over 2 million visitors and provided educational programming to more than 140,000 students and adults. The BCRI believes that social problems and divisiveness are perpetuated through silence and indifference, and it encourages the exploration of morality, law and justice, and responsible citizenship.

The Birmingham Civil Rights Institute is open Tuesday through Saturday from 10am until 5pm and Sundays from 1pm until 5pm. BCRI is closed on all major holidays. From Martin Luther King Day, on which admission is free, through the month of February, BCRI is also open on Mondays from 10am until 5pm. Details on the cost of admission can be found online.

The permanent artifacts displayed in the Birmingham Civil Rights Institute chronicle the history of the civil rights movement in Birmingham, Alabama and nationwide. The Institute stands beside Kelly Ingram Park, where hundreds of people demonstrated for civil rights in 1963. The Institute's 27,000 square feet of exhibition space includes the historic "colored" and "whites only" water fountains, which have come to be a powerful symbol of segregation and the civil rights movement in Birmingham. Other exhibits show the difference between black and white classrooms in the 1950s, and a black church in the center gallery tells the story of Reverend Fred Shuttlesworth, one of the prominent leaders of the civil rights movement in the area. There are also exhibits that showcase the prominence and flourishing of black-owned businesses, such as the barbershops and other establishments on 4th Avenue. The law and justice system is also reflected upon in exhibits that mimic courtroom scenes and trials of the South. Another popular exhibit space features how racism persists and is propagated through pop culture and the images presented in media and advertisements. The Ku Klux Klan and the history of hate crimes in Birmingham and around the country are also examined and remembered at the Institute.
Blue dragons, also known as storm dragons, are among the most vain and prideful of an arrogant race. They take great pleasure in wielding their power, engaging in combat or lording over humanoids and other lesser creatures to prove that they can do so, rather than out of any real desire for results. A blue dragon might forgive insults, but it reacts with rage to any insinuation that it is weak or inferior. Blues are also extremely territorial dragons. They rarely give intruders, even accidental ones, the opportunity to explain themselves. Blue dragons are more likely than other varieties of chromatic dragons to battle powerful enemies or other dragons over violated borders. This can prove particularly problematic, given that blue dragons are also more finicky about their environment than their cousins. When other creatures give due respect to blue dragons' pride and territorial claims, however, blues can be the most reasonable of the chromatic dragons. Blues lack the cruelty of black dragons and the ambition of greens and reds. Some blue dragons live as peaceful neighbors of humanoid communities or even, on occasion, of other dragon varieties. Blues might also employ humanoids to perform tasks for them, because blues enjoy both the opportunity to command others (thus showing their superiority) and the accomplishment of goals without having to exert themselves.

Blue dragons savor large prey such as cattle and herd animals, preferring meals of fewer, larger creatures over many small meals. Blues have no particular desire to hunt sentient prey, but neither have they any compunction about doing so if opportunities present themselves. Blues prefer their meat charred but not cooked through: "lightly kissed by the lightning," as one blue reputedly put it.

Blue dragons rarely land during combat, preferring flight and far-reaching attacks to lumbering over land in close melee. Because they like to fight from a distance, blue dragons consider combat a long-term engagement. They fly near enough to their opponents to unleash a few barrages, then vanish, and then return—sometimes minutes or hours later. On rare occasions when a blue dragon hunts from the ground or rests away from its lair, it conceals itself beneath the terrain, burrowing with powerful claws. Because most stormy regions have soft ground, such as the sand of a coastline or the rich soil of a rain forest, blue dragons find it easy to hide in this fashion.

Lairs and Terrain

Sages maintain that blue dragons prefer coastal regions. More precisely, blue dragons prefer areas subject to frequent, violent storms. Although coastal areas and seaside cliffs fit this description, so too do certain tropical isles and mountainous highlands not terribly distant from the pounding sea. If a blue dragon cannot find a properly stormy region in which to settle, it can make do with whatever terrain is available. As long as it has its own territory, a blue dragon might locate its lair on a mountaintop, in a jungle, in the Underdark, or in a desert—anywhere except perhaps the coldest of arctic climes—but any blue living in a location that lacks frequent storms thinks of that location as temporary, even if it ends up dwelling there for a few hundred years. Ultimately, a blue dragon finds happiness only in a place where it hears regular thunder beating on the horizon and where it can soar between clouds with the lightning. For their lairs, blue dragons favor enormous stone ruins or caves in the sides of hills, cliffs, or mountains.
Blues enjoy taking over structures built by other races. They make their lairs as lofty as possible to survey their domains from the heights. Elevation makes them feel truly part of storms that roll through. Even if a blue dragon cannot find or construct a lair at high altitude, it will likely choose a lair in which it can easily access the main entrance only by flight. Would-be intruders on land must undertake difficult, if not nearly impossible, climbs.

Blue dragons favor treasures as visually appealing as they are valuable. Blues love gems, particularly sapphires and other blue stones. They equally admire lovely works of art and jewelry. Although such an event is rare, given blues' innate draconic greed, blue dragons have been known to leave behind treasures they find unattractive, feeling that the presence of such treasures would sully the magnificence of their hoards and thus the magnificence of the dragons themselves.

Also see: Dragon Life Cycle

Blue dragon eggs incubate for approximately twenty months, the last fifteen in the nest. An average clutch numbers two to four, and most eggs hatch into healthy wyrmlings. Blue dragons grow from wyrmlings into youth after about seven years. They become adults around age 160 and elders after about a thousand years. They become ancient at about 1,800. The oldest known blue dragon died at approximately 2,300 years of age.

A blue dragon that undergoes environmental diffusion after death creates a permanent storm in the vicinity. This effect happens even underground, though cramped conditions might slacken the strength of the winds. Although the severity of wind and rain rises and falls, ranging from gentle gusts and mild showers to hurricane-force torrents, the storm never dissipates entirely, regardless of the prevailing weather conditions outside it.

The scales of blue dragons are slightly more reflective than those of other chromatic dragons. A person could not use a blue dragon's scales as mirrors, but in blue or dark environments, the scales take on the surrounding hue and blend into the sky or elements around them. The horns and brow ridge of a blue dragon funnel rainwater and other precipitation away from the eyes. When combined with a blue dragon's keen vision, this feature enables the dragon to see better in inclement weather than most creatures do. The wings of blue dragons are more flexible than those of other chromatics. Blue dragons use winds to steer and to boost their speed, like sailors tacking a ship. A blue dragon might smell of ozone, though the presence of a storm or even a mild wind can mask this scent.
I don't know about you, but ever since I started reading all about pillows and the fantastic innovations there have been so far, I have become fascinated with how pillows began in the first place. I mean, whose idea was it anyway to stick something under their heads when they sleep, and why? Was it for comfort, or were there other reasons why the pillow was invented? And speaking of the pillow itself, where did the name "pillow" come from? We hardly ever think about the origin stories of everyday things; however, it's always nice to know how these things began. It's likely that we'll encounter some pleasant (or not so pleasant) surprises along the way.

Pillow history: The story behind it all…

Well, can you imagine that people are said to have used pillows from as early as 7000 BC? The humble little bed accouterment that we all love got its start in early Mesopotamia, and the first ones were made of stone. Stone? You might well ask. Yes, stone. Apparently, the first pillows were made not for comfort but for a more practical reason. Since the early Mesopotamians were sleeping on the floor, small insects used to crawl into their hair, noses, mouths, and ears, and to prevent this, the richer among them put stones under their heads—only the rich people in that society had these "pillows." These, my friends, were the first pillows. They actually became something of a status symbol: the more pillows you had, the more people looked up to you. It's also possible that the Mesopotamians used certain stone pillows to help relieve back pain, but keeping bugs away was the bigger reason to use them.

Pillows were also used centuries later, in ancient Egypt. The ancient Egyptians used stones for pillows just like the early Mesopotamians did, but they sometimes used blocks of wood, too. These pillows had special significance for the dead in that society, who were entombed with pillows under their heads, since the ancient Egyptians believed that the human head was the most sacred part of the body. Of course, these "pillows" also served to support the heads of the deceased and were said to cause demons to flee.

Pillows became significantly softer in the times of the Greeks and Romans, who put straw, feathers, or reeds inside a casing in order to have something soft to lie on. But then, it was only the rich who could afford these soft pillows, even if everyone else used something to lean or lie on. Later on, pillows were used for kneeling in church, or as a place to lay holy scriptures. In some places, this is still done today. The Greeks and Romans also put pillows under the heads of their dead, a practice they may have gotten from the Egyptians.

As for the ancient Chinese, their early pillows were also made of hard materials, such as porcelain, wood, jade, bronze, bamboo, or ceramic, which was actually the most popular pillow material. Sometimes, however, a softer material such as cloth was placed on these pillows to make them more comfortable. During the Sui Dynasty in the sixth and seventh centuries, ceramic pillows were introduced to society, and they became mass-produced in the dynasty that followed, the Tang Dynasty (618-907). These Chinese ceramic pillows, which are now collectors' items, were colorfully and intricately decorated with paintings of plants, animals, and people. They reached the height of their popularity in the 10th to 14th centuries, during the time of the Song, Jin, and Yuan dynasties.
Later on, as better pillow materials were developed, the production of ceramic pillows was eventually halted. Today, these items fetch a high price in the art market.

In Japan, geishas used small, hard pillows to keep their heads from touching the ground at night. Since geishas had beautifully and elaborately designed hairstyles, it was important that they slept with something that elevated their heads, to keep their hair from getting messed up. Incidentally, geishas-in-training slept with rice surrounding their pillows, so their trainers could tell who fell off their pillow in the middle of the night by counting the grains of rice in their hair.

During the Middle Ages, people almost stopped using pillows altogether, since they were considered a sign of weakness. There were two exceptions, however: expectant mothers normally used pillows for their health and comfort, and so did King Henry VIII. Legend has it that he banned everyone else from using pillows; everyone else was expected to spend the night flat on their backs. But this mindset began to change over the years and centuries that followed, and more and more people began to use pillows in their beds at night. By the Industrial Revolution, things had changed completely. Pillows became part of comfort and rest every night, and pretty much every man, woman, and child in the western world was sleeping on a soft pillow.

And today, there is an ever-increasing variety of pillows available to fit your needs and desires, with more developed every day. A bed pillow is no longer just a big block of softness that you rest your head on — it can play music, wake you up with an alarm, track your sleep patterns as a smart pillow, nudge you when you snore, and perform all sorts of other functions that would surprise you. There are pillows of all shapes and sizes and colors and fabrics and functions — from a boyfriend pillow complete with a movable arm to hug you at night, to a black pillow called NIGHT, which is supposed to reflect darkness and let you sleep longer and more restfully. I will be writing about these pillow innovations in the weeks to come, so stay tuned! In the meantime, if travel is your interest and you'd like an article that compares the different kinds of travel pillows, check out what I wrote here. And if you need a rundown of the best pillows for neck pain, check out my article.

Pillow history: What's in a name?

Oh, you may be wondering if I ever found out where pillows got their name. Yes, I did — and here's the etymology of the word "pillow." It comes from the Middle English word "pilwe," which in turn came from the Old English word "pyle." All of these came from the Latin word "pulvinus," which means cushion. Pretty straightforward, don't you think? The word "pillow" itself was first used sometime in the eleventh century, and we've never stopped using it.

Today pillows are made from a variety of materials. In many countries cotton is the most popular filling, followed by synthetic fibers and various kinds of foam. Some people use feathers or down for filling — down being the even softer feathers from the underside of geese and other birds — although others feel that the practice of taking down from live geese is unkind to the animals.

Pillows also come in different standard sizes and shapes. Americans use rectangular pillows, while Europeans use much larger, square pillows. The most common size is regular or standard, which is 51 x 66 cm.
Next in size is the Queen, at 51 x 76 cm, and then the King, at 51 x 92 cm. There is also a Jumbo size, which falls between the Standard and Queen sizes. European pillows, which I find huge and still haven't gotten used to, are 66 x 66 cm. You can find a helpful chart for all sizes and types of pillows here, and if you're unsure about the sizes of pillowcases, you can check out handy guidelines and charts here.

Certified clinical sleep educator Terry Cralle says this about pillows, and I totally agree:

I have heard the saying that a pillow is a bed for your head and I could not agree more! In fact, I think a pillow is just as important as the mattress when it comes to getting a good night's sleep. Our heads and necks need to be supported while we sleep. The right pillow can make a big impact on your quality of sleep, your health and ultimately your well-being. Pillows probably don't get the attention they warrant, as they can be instrumental in getting a good night's sleep or be the cause of a terrible night of sleep, which will lead to a terrible day! A pillow helps keep your neck and spine in comfortable alignment as you sleep. Yet pillows that do not fit the sleeper, for example [which] are too thin or thick, can leave you with neck pain, shoulder pain, back pain or headaches.

So you see, pillows are important! An excellent pillow makes all the difference between waking up in pain and waking up refreshed, with a smile on your face. I hope that this short pillow history will make you thankful that someone invented these contributors to our nightly comfort. Having a good pillow is like being hugged by your best friend every day!

Pillow history: Minute of humor:
The mimicry of the Black-cowled Oriole (Icterus prosthemelas, also known as I. dominicensis prosthemelas) is similar to the mimicry of the Hooded Oriole in North America.

Douglas Von Gausig (recordist; copyright holder), Naturesongs.com. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

To cite this page: Myers, P., R. Espinosa, C. S. Parr, T. Jones, G. S. Hammond, and T. A. Dewey. 2016. The Animal Diversity Web (online). Accessed at http://animaldiversity.org.

Disclaimer: The Animal Diversity Web is an educational resource written largely by and for college students. ADW doesn't cover all species in the world, nor does it include all the latest scientific information about the organisms we describe. Though we edit our accounts for accuracy, we cannot guarantee all information in those accounts. While ADW staff and contributors provide references to books and websites that we believe are reputable, we cannot necessarily endorse the contents of references beyond our control.
Ed Thomas, PhD student on the CoG3 project, explains the importance of cobalt to a group of school children in Manchester.

As a Widening Participation Fellow I am often involved with outreach events encouraging school children into science, technology, engineering and maths subjects. My workshops are usually based on an aspect of Earth Sciences that the children have come across before: the rock cycle, dinosaurs, volcanoes… However, the most engaging part of science is not what we already know, but the unsolved problems we face as a society. It is one of these unanswered questions that I posed to year 9 children from four schools in Greater Manchester.

On 3 February, sixty pupils attended a Gateways Day at the University of Manchester where I ran a workshop discussing the importance of my research for the CoG3 project. The main aim of the session was to introduce the children to the idea of sustainability and security of supply of certain minerals. We discussed which metals are used in smartphones, where these metals come from, and why China dominates the smartphone production industry. For most of the children, the question of where the materials in their smartphones come from was something they hadn't thought about before. The session finished with an engaging debate on alternative sources of minerals and metals, including whether the seafloor or asteroids are potential solutions for the future. The children all agreed on the importance of creating a sustainable and secure supply of metal resources for the UK, and hopefully many of them were inspired to pursue a career in Earth Sciences to help us work out how to ensure one.

The CoG3 (Cobalt: Geology, Geometallurgy and Geomicrobiology) consortium is investigating the recovery of cobalt, a metal of great strategic and economic importance. Follow them on Twitter.
by James R. Booth and Tali Bitan, ScienceDirect
Research shows conclusively for the first time that there is a biological basis for gender differences in learning, particularly in language acquisition.

"Reconcilable Differences: What Social Sciences Show About the Complementarity of the Sexes & Parenting" by W. Bradford Wilcox
Wilcox claims that social science shows that each sex has different strong points when it comes to parenting, and each should be allowed to fulfill the role in which they excel. Mothers are more adept at breast-feeding, understanding, and nurturing/comforting their children. Fathers are better at disciplining and playing with their children, as well as equipping them to face life's challenges and opportunities. Research has shown that the structure of families and the roles played by parents do produce measurable results in children's lives.
Title: Size of the West Antarctic ice sheet at the last glacial maximum: new constraints from the Darwin-Hatherton glacial system in the Transantarctic Mountains
Authors: Storey, B
Issue Date: 1-Jul-2009
Citation: Storey, B., Hood, D., Fink, D., Shulmeister, J., & Riger-Kusk, M. (2009). Size of the West Antarctic ice sheet at the last glacial maximum: new constraints from the Darwin-Hatherton glacial system in the Transantarctic Mountains. Annual Antarctic Conference 2009 - "Sustaining the Gains of the International Polar Year", 1st - 3rd July 2009. In Proceedings of the Annual Antarctic Conference 2009 (p. 14). Auckland, New Zealand: University of Auckland.
Abstract: An understanding of how the Antarctic ice sheet has reacted to natural global warming since the last glacial maximum (LGM), 18 to 22 thousand years ago (kya), is essential to accurately predict the response of the ice sheets to current and future climate change. Although global sea level has risen by approximately 120 metres since the LGM, the contribution from, and rate of change of, the Antarctic ice sheets is by no means certain. Mackintosh et al. (2007) have suggested that the East Antarctic Ice Sheet (EAIS) made an insignificant contribution to global sea-level rise between 13 and 7 kya, raising interesting questions about the initial extent and response of the West Antarctic Ice Sheet (WAIS) during that time frame. Terrestrial evidence of these changes is restricted to a few ice-free areas where glacial landforms, such as moraines, show the extent of former ice advances. One such area is the Darwin-Hatherton glacial system, where spectacular moraines preserve the advance and retreat of the glacial system during previous glacial cycles. Previous researchers have suggested that at this location the WAIS was more than 1000 metres thicker at the LGM than it is today. As part of the Latitudinal Gradient Project, we mapped the moraines of the Lake Wellman area bordering the Hatherton Glacier and collected samples for cosmogenic nuclide dating, a technique that is widely used to calculate the exposure history of the glacial landscape: the amount of time that glacial debris has been exposed to cosmic rays rather than covered by ice or other glacial debris. While the technique is very successful in mid latitudes, it is more challenging in polar regions. Our mapping has shown that ice in the past was at least 800 metres thicker than current ice levels in this area. Our cosmogenic data suggest that this was at least 2 million years ago, but for the most part our data record, as expected, a complex history of exposure and re-exposure of the ice-free regions in this area in accordance with the advance and retreat of the ice sheets. However, a cluster of ages of 35 to 40 thousand years records a single exposure event and indicates that the ice in this area was not as thick as previous estimates for the extent of ice at the LGM. These ages come from moraine boulders located below a prominent moraine feature mapped as representing the LGM. These results raise further questions about the size of the Antarctic ice sheets at the LGM, their contribution to global sea-level change, and how the Antarctic ice sheets respond to global warming.
Appears in Collections: Conference Publications
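The exposure ages mentioned in the abstract come from a well-known relationship between nuclide concentration and exposure time. The sketch below solves the simplest form of that equation, assuming no erosion, no burial, and a constant production rate — simplifications that the abstract's "complex history of exposure and re-exposure" explicitly violates. The production rate and concentration are illustrative values, not the study's data.

```cpp
// Simple-exposure age from a cosmogenic nuclide concentration:
//   N(t) = (P / lambda) * (1 - exp(-lambda * t)),  solved for t.
// Assumes zero erosion and continuous exposure; numbers are illustrative.
#include <cmath>
#include <iostream>

int main() {
    const double lambda = std::log(2.0) / 1.387e6; // 10Be decay constant, 1/yr
    const double P = 5.0;   // assumed local production rate, atoms/g/yr
    const double N = 1.9e5; // assumed measured concentration, atoms/g

    double t = -std::log(1.0 - N * lambda / P) / lambda; // years
    std::cout << "Apparent exposure age: " << t << " years\n";
}
```

With these example numbers the apparent age works out to roughly 38,000 years, the same order of magnitude as the 35 to 40 kyr cluster the authors describe.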
Bladder inflammation is a very painful experience that many individuals have suffered from in their lives. There are many different reasons why one may develop this condition, which is also referred to as "cystitis." Here, you will learn a number of important facts related to cystitis. The intent of this article is to educate and equip you with the essential knowledge you need to better understand bladder inflammation and the circumstances surrounding this condition.

One of the most common culprits is a buildup of bacteria resulting from an infection that develops in the urinary tract. This tract starts at the bladder and is the canal that urine travels through so that it may be eliminated from the body. One reason for this buildup of bacteria is where the bladder is located: like the prostate gland, the bladder is very close to the rectum. What does that have to do with anything? Everything, when it comes to how the bacteria get into the bladder. If we do not get enough water and fiber in our diet, the colon becomes underactive and toxic. When that happens, harmful bacteria from the rectum can make their way to the bladder. It's very important to take VitaMark's probiotic friendly flora. When the bowel becomes overloaded with bad bacteria, bacteria can enter the bloodstream and infect the kidneys and the bladder. As long as the bacteria remain present, an inflammation response will take place and persist.

What are the Symptoms Associated with Inflammation of the Bladder?

A number of symptoms are associated with inflammation of the bladder. The most common symptom is pain. This pain can take many different forms; the most common are extreme pressure and a sensation of burning and irritation. It is not uncommon for a sufferer to feel a constant urge or sensation that they need to urinate, yet when visiting the restroom, they may or may not be able to relieve themselves. Traces of blood may linger in the urine after it has been eliminated from the body. Furthermore, once the urine is eliminated, many will experience a type of uncomfortable cramping.

Limu Plus is an anti-inflammatory. Limu contains the phytochemical fucoidan, one of its primary active ingredients. It assists with joint care, improves digestion, helps support the liver, improves blood function [blood flow], and enhances cell and skin growth, among other things. Limu also contains 12 other adaptogen plants as active ingredients and enzymes, which enhance other body functions.

Who Is At Risk for Bladder Inflammation?

Anyone can suffer from the effects of cystitis; however, some people are more prone to the condition than others. Women are more susceptible to complications of the urinary tract and bladder, but men can develop this type of medical condition as well. Those who have problems with their immunity (lymphatic system) and are susceptible to various types of infections may suffer more than the average person. For example, patients with autoimmune diseases such as multiple sclerosis and rheumatoid arthritis may find that they experience inflammation of the bladder and urinary tract frequently. Children also stand a risk of developing this condition.
This is especially true of children who are developing bathroom hygiene habits, are prone to infections, or suffer from immune disorders.

What Should One Do To Overcome This Condition?

People who are informed that they suffer from inflammation in the bladder are often directed to avoid certain items that may act as irritants. These items include various types of heavily perfumed soaps and even feminine cleansing products. In addition, sufferers are often encouraged to increase their daily intake of clear fluids to help flush out the bladder and the urinary tract. Bladder inflammation can be a very challenging and uncomfortable condition to endure. Whole anti-inflammatory foods like cranberries, greens, and fresh fruit such as grapefruit will also help you overcome bladder inflammation.
Historically, candles were made in numerous small workshops from expensive beeswax or cheap tallow (purified animal fat), which gave a poor light and released smoke and unpleasant smells. Candles can also be made using vegetable fat, including palm nut oil. One of the British government's alternatives to the slave trade was the encouragement of palm nut production in West Africa. The nuts were processed locally, and the dark orange-brown palm oil was exported to Britain, where it was used in the soap industry.

William Wilson, a London-based Scotsman and former Russian trader, took out a licence on an 1829 patent for the hydraulic separation of coconut fats and in 1830 set up as a candle maker with a partner, Benjamin Lancaster, under the made-up name of Edward Price & Co. (It is thought that they did not use their own names because they did not want to be associated with a very low-class trade that involved dead animals and unpleasant smells.) The pair built a candle factory at Vauxhall with a crushing mill at Battersea and invested in a 1,000-acre coconut plantation in Sri Lanka. Initial sales of the coconut candles were not impressive, but by 1840 Price's had developed a composite candle made from a mixture of refined tallow and coconut oil. It was introduced just in time for the public to celebrate Queen Victoria's wedding in the then-traditional way of putting a candle in their windows.

Later, William's son George experimented with discoveries made by the French chemist Chevreul, who had found that mixing strong alkalis with vegetable or animal fats caused the liquid components to separate from the solid ones. This process, known as saponification, was already used by soap makers but had not been applied to candle making. George added a further distillation using heat and high pressure to produce a harder, pure fat known as stearine. This was excellent for candle making, as it burned brightly without smell or smoke. By-products of the stearine process included light oils and glycerine, and Price's soon found uses for them, making valuable contributions to the company balance sheets. Using the new process, candles could also be made from other raw materials, including skin fat, bone fat, fish oils and waste industrial grease.

The original Edward Price & Co. became the Price's Patent Candle Company in 1847, with about 84 staff. In October 1849 Price's produced twenty tons of coconut candles worth £1,590 and about twelve tons of other candles worth £1,227. Palm oil was landed at Liverpool, and in 1853 the company decided to save on transport and handling costs by building a second factory on a greenfield site at Bromborough Pool.

Price's was a benevolent company, so when setting up the Liverpool factory it started building a village for its employees. This model village was built in stages but eventually comprised 147 houses, a church, an institute, a shop and a library. Price's concept of the model village inspired Port Sunlight (Lever, 1880s) and Bournville (Cadbury, 1890s). Price's also introduced an educational programme for staff, a profit-sharing scheme in 1869 and, in 1893, a contributory employee pension scheme.

Price's developed and improved methods of mass production, and by October 1855 the quantity of candles the firm made in a month had risen dramatically to 707 tons, worth £79,500. Paraffin wax was first used in candles during the 1850s, but initially take-up was slow, with only about a 12% share of the market in 1870.
But by 1900 paraffin wax candles had a 90% share of the market. In the late 1850s Price's cracked oil from the oil fields of Burma into various products, including paraffin and kerosene, which the firm exported to the USA. In 1859 oil deposits were found in Pennsylvania and the market collapsed, leaving Price's with a huge stock of crude oil. Using its experience in light oil production, Price's developed many new lubricants, including Motorine, a brand that dominated the UK motor oil industry in the early 1900s.

What had started as a relatively small operation in 1830 had become, twenty-five years later, a national household name employing 2,300 staff, of which 1,200 were boys. By the end of the 19th century it was the world's largest manufacturer of candles, exporting to all parts of the empire. But UK demand was beginning to lose out to other energy sources, and starting in 1910 Price's set up candle factories in Johannesburg, Shanghai, Chile, Rhodesia (now Zimbabwe), Morocco, Pakistan, New Zealand and Ceylon (now Sri Lanka) to serve local demand. In 1919 Price's was taken over by Lever Brothers, which wanted to diversify into a wide range of fat-based products and saw Price's as a direct competitor with a good range of products, extensive industry knowledge - and a factory next door to Port Sunlight.
QUESTION: I want to build a super-energy-efficient house using building products that are made from recycled materials or are made by energy-efficient methods. What specific types of building products should I use?

ANSWER: If you shop carefully, you should be able to find recycled or low-energy-intensive building products to meet most of your material needs. Many of these energy-efficient products use more than 50% recycled materials and require little additional energy for processing. These new "earth-friendly" building products include structural framing, foundations, walls, roofs, sheathing, insulation, and interior wall and floor coverings.

For exterior wall or roof framing, choose products that use as little lumber from old-growth trees as possible. Instead of using 2x10 floor joists, you can use "I-joists," which require less wood for the same strength. Glue-laminated lumber and laminated veneer lumber also use smaller pieces from second-growth trees to make large, defect-free lumber. To reduce the amount of lumber even further, use super-insulated stress-skin wall panels. These use only 5% wood, compared to 20% wood in a conventional studded wall. Another new wall panel uses a super-strong and efficient honeycomb structure made from recycled resin-impregnated paper.

Producing cement for foundations and slabs is very energy intensive. ACC (autoclaved cellular concrete) uses small amounts of aluminum in the concrete. This creates small bubbles, causing the concrete to expand and become less dense as it cures. It is still very strong but requires less cement. Waste fly ash from power plants can replace about 20% of the cement.

Many organic asphalt shingles contain recycled mixed waste paper. Some of the residential aluminum "shake-looking" roofing is made from 100% recycled beverage cans. Metal roofing also can cut your cooling costs. If you like the look of wood shakes, select ones made from remanufactured wood fibers.

Many types of insulation are made from recycled, fireproof-treated newsprint or waste mineral slag. One type of blowing wool fiber insulation is made from 100% recycled telephone books. Rigid insulating foam wall sheathing is now made from recycled foam containers. You can use gypsum-like wallboard made from waste ryegrass straw; another type is made from waste paper and rice hulls or peanut shells. Some resilient tile flooring is made from recycled car tires. One company makes solar ceramic tiles from recycled waste glass from a light bulb factory. Some attractive carpeting is made from recycled plastic bottles.

You can write for Utility Bills Update No. 355, listing addresses and telephone numbers of 70 manufacturers of "earth-friendly" building and home improvement products and descriptions of their products. Please include $1.50 and a self-addressed business-size envelope. Send to James Dulley, Los Angeles Times, 6906 Royalgreen Drive, Cincinnati, Ohio 45244.

Using Heat Escaping From Clothes Dryer

Q: During the winter, I moved my electric clothes dryer far from the window vent. This should have allowed some heat to transfer to the utility room before it blows outdoors. Was this a good idea?

A: In theory your idea is a good one. However, there are some potential problems. First check with your dryer manufacturer about the maximum duct length; a very long duct can cause excessive back pressure. Another potential problem is a fire from accumulated lint. For a long duct, over 20 feet, always use an aluminum duct, not plastic.
Letters and questions to Dulley, a Cincinnati-based engineering consultant, may be sent to James Dulley, Los Angeles Times, 6906 Royalgreen Drive, Cincinnati, Ohio 45244.
The flying bombs (V-1s) started raining down on Antwerp one month after the city was liberated. The German command wanted to prevent the Allies from using the captured port of Antwerp to supply their troops on the front line, so it dropped thousands of these vengeance weapons on the city. But the bombs were not always reliable, which is how they ended up spreading terror throughout the whole city. They killed 3,560 people and injured more than 9,000. And the port? Wonder of wonders, it was captured almost intact by the Allies and went on to play an important role in the end of the war.

Are you curious to find out more? Then download the Antwerp Museum App via the Google Play Store (Android) or the App Store (iOS) to your smartphone or tablet. You will find the walk in the app under 'Tours' and 'V-bomb walk'. Head out for a walk through the city or take a virtual tour on your PC. If you can't make the walk, you can watch the film and explore the map with the stories behind every location.
Defined in header <ctime>

char* ctime( const std::time_t* time );

Converts the given calendar time to a textual representation of the form

Www Mmm dd hh:mm:ss yyyy

where:
Www - the day of the week (one of Mon, Tue, Wed, Thu, Fri, Sat, Sun)
Mmm - the month (one of Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec)
dd - the day of the month
hh:mm:ss - the time of day
yyyy - the year

The function does not support localization.

Parameters
time - pointer to a std::time_t object specifying the time to print

Return value
Pointer to a static null-terminated character string holding the textual representation of date and time. The string may be shared between std::asctime and std::ctime, and may be overwritten on each invocation of any of those functions.

Notes
This function returns a pointer to static data and is not thread-safe. In addition, it modifies the static std::tm object which may be shared with std::gmtime and std::localtime. POSIX marks this function obsolete and recommends std::strftime instead. The behavior may be undefined for values of time_t that result in a string longer than 25 characters (e.g. year 10000).

See also
C documentation for ctime

Example
Possible output:
Tue Dec 27 17:21:29 2011
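A minimal program producing output like the above (it prints the current local time, so the exact date will differ on each run):

```cpp
#include <ctime>
#include <iostream>

int main()
{
    // std::time(nullptr) returns the current calendar time;
    // std::ctime converts it to the fixed-width textual form shown above.
    std::time_t result = std::time(nullptr);
    std::cout << std::ctime(&result);
}
```

Note that the string returned by std::ctime already ends in a newline, so none is added when printing.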
Many books on law enforcement history and the history of policing were authored by state and local law enforcement officials. A Concise History of American Policing traces the foundations of modern American policing from its distant cousins in the Iron Age. Find out how Draco, Caesar Augustus, the Hue and Cry, the Rattle Watch and Old West gunslingers influenced today's police operations. How did policing finally get to Broken Windows, technology and community policing?

Sergeant Sven Crongeyer has been employed with the Los Angeles County Sheriff's Department for 17 years. His passion for historical research led him to write Six Gun Sound: The Early History of the Los Angeles County Sheriff's Department, which "traces law enforcement efforts to meet the challenge of public safety that from the beginning were both enhanced and hampered by the influx of ranchers, cowboys, farmers, miners, gunfighters, and gamblers. Los Angeles was a den of iniquity that rivaled even the most famous towns of the Old West: Silver City, Tombstone, Dodge City, and Wichita."

Sergeant Kevin S. Foster, Fort Worth Police Department (Ret.), is the co-author of Written in Blood: The History of Fort Worth's Fallen Lawmen, Volume 1, 1861-1909. According to the book description, "'Another line of duty death' is a chilling headline that serves as an obituary for too many 'first responders.' In 2002 Fort Worth joined the ranks of other communities across the nation in building a memorial to its fallen heroes, an elaborate, million-dollar Police and Firefighters Memorial, dedicated in 2009, that recognized fifty-eight policemen going back to the city's beginnings. Written in Blood is a more inclusive version of that idea because it covers more than just members of the Police Department; it is about the men from all branches of local law enforcement who died defending law and order in the early years: policemen, sheriffs, constables, 'special officers,' and even a police commissioner. All were larger-than-life characters who took an oath to 'preserve and protect' and therefore deserve to be remembered."

Lieutenant George J. Wren, Jr., New Jersey State Police (Ret.), "enlisted in the New Jersey State Police in February 1982, and enjoyed postings at several Troop 'A' duty stations including an eighteen-year stint in the Intelligence Bureau. Lieutenant Wren attained a BS and Masters Degree from Fairleigh Dickinson University. He resides with his wife, Sandy, on the Jersey shore." Lieutenant Wren is the author of Jersey Troopers II: The Next Thirty-Five Years (1971-2006).

Robert Kirby was born in Fontana, California. His father, a criminal investigator for both the U.S. Army and U.S. Air Force, retired and moved his family to Salt Lake City, Utah. Kirby began his law enforcement career with the Grantsville Police Department in Utah. After a year, he moved to the Springville Police Department, where he worked for ten years. Since leaving law enforcement, Kirby has worked as a newspaper editor, correspondent and columnist; he is currently a columnist for the Salt Lake Tribune. His book End of Watch: Utah's Murdered Police Officers from 1858-2003 chronicles the murders of law enforcement officials in Utah.

James Lardner, a senior fellow at Demos, was a police officer for the Metropolitan Police Department (Washington, DC) for two and a half years during the early 1970s. Today, he is a well-regarded researcher and writer.
As a journalist, he has written for the New York Review of Books, The New Yorker, The Washington Post, and The Nation, among other publications. He is the author of Crusader: The Hell-Raising Police Career of Detective David Durk and the co-author of NYPD: A City and Its Police. One review said of NYPD: A City and Its Police, "A comprehensive and elegant history of the New York Police Department, this book, written by a journalist (Lardner) and a former cop (Reppetto), charts the department's development, from its origins as a collection of unorganized watchmen in the 1820s to its recent past. In crisp, anecdote-rich prose, Lardner (a New Yorker contributor) and Reppetto (now president of New York's Citizens Crime Commission) take readers on a chronological tour through the years when the department reluctantly adopted firearms and uniforms and when police applicants depended on patronage, through wave after wave of anti-corruption ferment, and through years of controversy."

Albert S. Kurek, a retired New York State Police trooper, wrote two books on the history of the New York State Police: The Troopers Are Coming: New York State Troopers, 1917-1943 and The Troopers Are Coming II: New York State Troopers, 1943-1985.

Wayne Knight's 19-year law enforcement career included serving as a police officer in Newport Beach (California), a deputy sheriff in Washoe County (Nevada) and a deputy marshal for the Los Angeles County Marshal's Department. Steven Knight is the author of 1857 Los Angeles Fights Again and 1853 Los Angeles Gangs. According to Midwest Book Review, "1853 Los Angeles Gangs by Steven W. Knight is an impressively written, historical novel of the lawless gangs of Los Angeles, and the determined Rangers who stood against them. The superbly drawn story of a turbulent 'yesteryear' city is populated with such memorable characters as Juan Flores, who intends for his gang to dominate a rapidly expanding and ethnically diverse city by first killing off the Chinese, and then the Americans; Don Thomas Sanchez, struggling to preserve political power in the face of American land grabs; and Horace Bell, with his implacable dedication to the law. Drama, action, bloodshed, love and great courage fill the pages of this exciting and entertaining saga from cover to cover."

Steve R. Willard is a 20-year member of the San Diego Police Department. A writer for law enforcement periodicals, Willard also serves as the vice president of the San Diego Police Historical Association, which supplied the vintage photos for his Images of America: San Diego Police Department. Since joining the San Diego Police Department in 1985, Willard has worked patrol, crime prevention and the detective bureau. In addition to extensive expertise in forensic video, composite artistry and covert alarm systems, he holds a certificate in intermediate crime scene investigation from California State University Long Beach and an advanced certificate from the California Department of Justice. He has also obtained certificates in intermediate and advanced courses in fingerprint classification and identification through the Federal Bureau of Investigation. He is also the author of America's Finest: The History of San Diego City Law Enforcement.

According to the book description of Images of America: San Diego Police Department, "The San Diego Police Department dates to 1889, when out-of-control crime forced the end of the highly ineffective city marshal's office.
With violence on every corner and Tombstone's venerable Wyatt Earp running the marshals' gambling interests, change was desperately needed. But the first days of the SDPD weren't easy. Within two years of its formation, the city's economy tanked, 36,000 of the town's 40,000 citizens left, and the department's newly appointed chief refused to take the job. Still, San Diego eventually developed into one of the nation's largest cities and most popular tourist destinations—a multifaceted metropolis perched between the extremes of Los Angeles and Mexico, the Pacific Ocean and the desert. Today more than 2,000 highly trained sworn SDPD officers, 700 support staff, and more than 1,000 volunteers form one of the world's most innovative and internationally recognized police forces."

Kevin J. Mullen served for more than twenty-six years with the San Francisco Police Department and retired at the rank of deputy chief. He has written extensively in magazines and newspapers on criminal justice issues. He is the author of Let Justice Be Done: Crime and Politics in Early San Francisco, Dangerous Strangers: Minority Newcomers and Criminal Violence in the Urban West, 1850-2000 and The Toughest Gang in Town: Police Stories From Old San Francisco. According to the book description of Dangerous Strangers, "Have newcomers to American cities been responsible for a disproportionate amount of violent crime? Dangerous Strangers takes up this question by examining the incidence of criminal violence among several waves of immigrant/ethnic groups in San Francisco over 150 years. By looking at a variety of groups--Irish, German, Italian, and Chinese immigrants, primarily--and their different experiences at varying times in the city's history, this study addresses the issue of how much violence can be attributed to new groups' treatment by the host society and how much can be traced to traits found in their community of origin."

Arthur W. Sjoquist and Thomas G. Hays are retired captains from the Los Angeles Police Department as well as members of the Los Angeles Police Department Historical Society board. They are co-authors of a pictorial look at the Los Angeles Police Department. According to the book description of Images of America: Los Angeles Police Department, "No police force in history has gained as much fame and notoriety as the Los Angeles Police Department. The acronym LAPD is practically synonymous with the idea of professional law enforcement. The men in blue who patrol Hollywood and the sprawling metropolis of L.A. have been investigated by screenwriters more times than vice versa. With more than 9,300 sworn officers today, the LAPD endures seemingly endless controversies and media circuses. But then there's the other side of L.A.'s protective shield—the story of the force's evolution alongside the spectacular growth of its unique melting-pot city. This book's rare and often never-before-published photographs focus on that side: the excitement, danger, tragedy, and comedy of everyday beat cops and workaday detectives—with concessions to their limelight representations, including Jack Webb's Dragnet and Adam-12."

Todd Shulman is a seven-year member of the Napa Police Department, currently serving as a detective. An avid historian, Shulman founded the Napa Police Historical Society in 2006 and has culled its archives for many of the photographs included in his book, Napa County Police.
According to the book description of Napa County Police, "With dazzling vintage imagery and rich historical text, Todd Shulman tells the tale of policing Napa County," from the Wild West days of the 1850s, through the boom era of the 1940s, and into the 21st century. The history of organized law enforcement in Napa County begins with the very first meeting of the board of supervisors in 1850 and the appointment of a county sheriff and marshals for each township. The foundations for progress and prosperity in place, Napa County grew from a remote agricultural outpost to the preeminent wine-growing region in the United States and a booming tourist destination—and policing has kept pace. Today, in addition to the Napa Sheriff's Department, the county is protected by the California Highway Patrol and three police departments: Napa, St. Helena, and Calistoga. Specialized police agencies have also grown out of unique needs, including the Napa State Hospital Police, Railroad Police, and Community College Police.

Dorothy Schulz is Professor of Law, Police Studies, and Criminal Justice Administration at the John Jay College of Criminal Justice. She was the first woman captain to serve with the Metro-North Commuter Railroad Police Department and its predecessor, the Conrail Police Department. Dorothy Schulz is a member of numerous police and academic associations, and has spoken at conferences of the International Association of Women Police, Women in Federal Law Enforcement, the National Center for Women & Policing, the Senior Women Officers of Great Britain, and the Canadian Police College. She is the author of From Social Worker to Crimefighter: Women in United States Municipal Policing and Breaking the Brass Ceiling: Women Police Chiefs and Their Paths to the Top.

According to a review of From Social Worker to Crimefighter: Women in United States Municipal Policing in Law Enforcement News, "Schulz offers a solid social history of the roles women filled in policing American communities from the 1820s through the 1980s. Not intended to be a theoretical or analytical treatment of either gender or law enforcement, it offers interesting narrative and presents with appropriate praise many actual women who faced high risks and high challenge as they sought first to improve policing and then to gain equal footing on patrol. This much-needed book will doubtless remain the authoritative work on the subject for some time and is essential reading for anyone with an interest in the development of women police or, indeed, the history of social control in the United States."

Christopher J. Carlin is the Chief Deputy of Uniformed Operations with the Niagara County Sheriff's Office, with more than 24 years of law enforcement experience. Chief Deputy Carlin began his law enforcement career with the United States Army Military Police in Germany. He joined the sheriff's department in 1982 as a road patrol deputy and served in that position until 1989, when he was promoted to sergeant. In 2004, Sheriff Thomas Beilein appointed Carlin to the position of Chief Deputy. Christopher Carlin is a graduate of the FBI National Academy and the FBI Law Enforcement Leadership Development Course. He obtained his Associate in Applied Sciences degree in Criminal Justice from Niagara County Community College and a Bachelor of Science degree in Criminal Justice Public Administration from Empire State College. Carlin is a thirty-year veteran of the military, serving on active duty with the U.S. Army from 1976 to 1979.
He has served in the NY Air National Guard and the Air Force Reserve since 1981. Chief Deputy Christopher Carlin is the author of Protecting Niagara: A History of the Niagara County Sheriff's Office.
Sunday, December 16, 2012

Nagel 4: Reason & A Rational Kosmos

In Chapter 5 of Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False, Thomas Nagel takes a decisive (though unacknowledged) step in the direction of Socratic rationalism. Socrates, Plato, and Aristotle all assumed that the brute fact of logos, the human capacity for rational thought and speech, implied a certain view of the Kosmos. In the Phaedo, Socrates grounds the fundamental turn in his thought (how Socrates became Socratic) in the insight that if reason is to be reliable, this can only be because K is rational.

I heard someone reading, as he said, from a book of Anaxagoras, that mind was the disposer and cause of all, and I was delighted at this notion, which appeared quite admirable, and I said to myself: If mind is the disposer, mind will dispose all for the best, and put each particular in the best place; and I argued that if any one desired to find out the cause of the generation or destruction or existence of anything, he must find out what state of being or doing or suffering was best for that thing, and therefore a man had only to consider the best for himself and others, and then he would also know the worse, since the same science comprehended both. And I rejoiced to think that I had found in Anaxagoras a teacher of the causes of existence such as I desired, and I imagined that he would tell me first whether the earth is flat or round; and whichever was true, he would proceed to explain the cause and the necessity of this being so, and then he would teach me the nature of the best and show that this was best; and if he said that the earth was in the centre, he would further explain that this position was the best, and I should be satisfied with the explanation given, and not want any other sort of cause. And I thought that I would then go on and ask him about the sun and moon and stars, and that he would explain to me their comparative swiftness, and their returnings and various states, active and passive, and how all of them were for the best. For I could not imagine that when he spoke of mind as the disposer of them, he would give any other account of their being as they are, except that this was best; and I thought that when he had explained to me in detail the cause of each and the cause of all, he would go on to explain to me what was best for each and what was good for all.

Socrates supposed that, if K is rationally comprehensible, then K must involve the rationally good. Whether Nagel must move in that direction is not clear to me yet, though I see some signs of it. He argues in Section 5 of Chapter 4 that the human capacity for rational thought is as big a problem as the big problem of consciousness. Consciousness that divides the world into self and not-self is one thing. A grasp of objective reality and objective value, independent of the subjective position, is quite another. Here is how he lays out the implications.
If there is such a thing as reason, then:

1. There are objective, mind-independent truths of different kinds;
2. By starting from the way things initially appear to us, we can use reason collectively to achieve justified beliefs about some of those objective truths;
3. Those beliefs in combination can directly influence what we do;
4. These processes of discovery and motivation, while mental, are inseparable from physical processes in the organism.

If reason is what it appears to be, then two big consequences about K follow: there are objective truths about the parts and the whole of K, and the history of K includes the appearance of creatures that can discover those truths. This was the basic assumption of classical philosophy and perhaps of all possible philosophy: that the human mind and the Kosmos operate according to the same (or mostly the same) basic principles. Otherwise all rational investigation (including all scientific investigation) would be vain. That means that mind belongs not only to human beings but to Being itself.

What this commits us to is not clear. Do we have to believe, as Socrates clearly does, that any explanation of astronomical phenomena must include the concept of what is best? That would require a very big leap beyond the boundaries of modern scientific thought. It is not altogether out of the question. The cosmological constants argument for the existence of an intelligent designer rests on the claim that K is fine-tuned for the existence of life on earth. If gravity were just an infinitesimal bit weaker or stronger (along with a considerable number of other cosmological constants), there would be either no K at all or a K without the possibility of life. Apart from the theological implications of this argument, it is conceivable that it proves Socrates right. Whether or how we conceive of G, it might be the case that we cannot explain K without the concept of what is best. I'm not buying in just yet, but I think that the possibility is open.

Even without deploying teleology on a cosmic scale, it remains a fact that the universe is rational if indeed it exists as science imagines it. The fact that such creatures as ourselves exist in it is central to understanding it, as Nagel argues.
Adapted from the article "Site preparation for walnut planting" by Bill Krueger, UCCE Farm Advisor Emeritus, Glenn County, in the August 2008 Sacramento Valley Walnut News.

Preparation for planting a new walnut orchard should start with a soil evaluation. Walnuts have traditionally been planted on class one soils: the deepest, most uniform, well-drained soils. Recent research and grower experience have shown that, with the right preparation and planting system, walnuts can be successfully grown on less-than-ideal soils. A soil evaluation will help determine the steps to take prior to planting to ensure successful results.

Starting Point: Soil Survey Maps
Soil survey maps are a good place to start. A soil survey for the area of interest is available at your local NRCS office or online. The soil survey will provide information on the types of soils present and their distribution and acreage. It describes each soil type and provides information about drainage, flooding, exchangeable sodium content, and other details important to orchard success.

Learning More: Backhoe Pits
The soil survey cannot provide all of the necessary details. Using a backhoe to further explore the soil can provide valuable information for orchard development. Digging backhoe pits 5 to 6 feet deep in strategic locations where soil differences are expected allows a firsthand examination of the soil. Look for stratified soil, compacted zones, hardpans, claypans, etc. Your local Farm Advisor may be able to provide assistance in evaluating the backhoe pits. Abrupt changes in soil texture can result in a perched water table, which is unhealthy for walnut roots.

If soil modification is necessary, it will be much easier to accomplish before planting. It should be done in the late summer or fall, when the soil is dry, to ensure the most disruption possible while allowing winter rains to settle the soil before planting. Touch-up leveling or smoothing can be done in the spring before planting. Leveling to smooth out low spots or improve surface drainage should be done to keep the future orchard healthy.

Deep, uniform soils may only require shallow ripping (1.5 to 3 ft.) to loosen the soil. Stratified soils or soils with hardpans or claypans will require deep ripping or slip plowing (3 to 6 ft.) to disrupt the layers. Ripping is less effective for claypan soils because the clay will flow around the ripper and soon reseal. A slip plow (a ripper shank with an iron plate at a 45-degree angle) can lift soil at the bottom of the shank to the soil surface and permanently disrupt a clay layer. Ripping and slip plowing are typically done in two directions, with the second pass diagonal to the first.

Soil physical limitations can to some extent be overcome by the use of low-volume irrigation, especially under close tree spacing. Soil can be modified to a depth of 6 feet, but large equipment is necessary and it is expensive. Backhoeing tree sites to mix the soil may be practical on sandy soils, where it can be done quickly, but will probably be cost-prohibitive on heavier soils. Planting trees on berms is recommended on heavier soils. Build the berms in the fall, after soil preparation, to allow for settling over the winter.
Pump Impeller Types

Impellers are classified by the number of points at which liquid can enter the impeller and by the amount of webbing between the impeller blades.

Impellers can be either single-suction or double-suction. A single-suction impeller allows liquid to enter the center of the blades from only one direction. A double-suction impeller allows liquid to enter the center of the impeller blades from both sides simultaneously. The illustration below shows simplified diagrams of single- and double-suction impellers.

Impellers can also be open, semi-open, or enclosed. The open impeller consists only of blades attached to a hub. The semi-open impeller is constructed with a circular plate (the web) attached to one side of the blades. The enclosed impeller has circular plates attached to both sides of the blades. Enclosed impellers are also referred to as shrouded impellers. Figure 5 illustrates examples of open, semi-open, and enclosed impellers.

The impeller sometimes contains balancing holes that connect the space around the hub to the suction side of the impeller. The balancing holes have a total cross-sectional area considerably greater than the cross-sectional area of the annular space between the wearing ring and the hub. The result is suction pressure on both sides of the impeller hub, which maintains a hydraulic balance of axial thrust.
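To make the two independent classification axes explicit, here is a minimal illustrative sketch in Python; the class and field names are ours, invented for illustration, and do not come from any pump standard or library:

```python
from dataclasses import dataclass
from enum import Enum

class Suction(Enum):
    SINGLE = "single-suction"   # liquid enters the blades from one side only
    DOUBLE = "double-suction"   # liquid enters from both sides simultaneously

class Shroud(Enum):
    OPEN = "open"               # blades attached to a hub only
    SEMI_OPEN = "semi-open"     # one circular plate (the web) on one side
    ENCLOSED = "enclosed"       # plates on both sides; also called "shrouded"

@dataclass
class Impeller:
    suction: Suction
    shroud: Shroud

    def describe(self) -> str:
        return f"{self.suction.value}, {self.shroud.value} impeller"

# Example: the enclosed, double-suction design described above
print(Impeller(Suction.DOUBLE, Shroud.ENCLOSED).describe())
```

The point of the sketch is simply that suction type and shroud type vary independently, so any combination of the two describes a valid impeller geometry.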
When we mention digestive problems, most of us probably think automatically of complaints such as stomach ache, diarrhea, and constipation. While these are all genuine issues, they are only a few of the many symptoms of digestive problems. There are several debilitating and severe digestive issues that some of us don't know about. From irritable bowel syndrome (IBS) to Crohn's disease or ulcerative colitis, gastrointestinal disorders can be serious. They can also have a massive impact on the lives of those who suffer from them. This often leads to individuals becoming reclusive and isolated; in some cases, it can even lead to mortality. This informative post will discuss the different types of digestive problems and how CBD can potentially help alleviate common symptoms.

What Are Digestive Problems?

Approximately 40% of people have at least one symptom of a digestive issue at any one time. With that in mind, it comes as no surprise that many over-the-counter medications claim to treat these common digestive issues. Known as functional disorders, constipation, diarrhea, indigestion, and bloating are common and usually temporary digestive problems.

A typical long-term gastric issue is IBS, which affects the large intestine. While many medications can treat IBS, symptoms can be uncomfortable and range from cramping and bloating to gassiness, constipation, and diarrhea.

An even more severe digestive problem is Crohn's disease, an inflammatory bowel disease (IBD). While it is more severe than IBS, it is far from rare: statistics show that approximately 700,000 people in the US have the disease. IBD causes inflammation of the digestive tract, leading to a variety of symptoms. These can range in severity from abdominal pain, tenderness, and discomfort to persistent vomiting, high fever, or even abscesses. Not only are these symptoms physically damaging, but the side effects can also interfere with an individual's daily life and mental health. If symptoms are severe enough, it is unlikely that the sufferer would want to leave the house for work or to socialize.

What causes these issues? Gastric problems can stem from something as small as a change in diet or increased stress levels. They can also arise from things like lack of exercise, medication use, or pregnancy.

What Are the Current Treatments Available for Digestive Problems?

Depending on the problem, there are a variety of treatments one can buy. However, this can become expensive, and the long-term side effects of taking medication regularly can be damaging. An over-the-counter antiemetic can relieve nausea and vomiting for common, less severe digestive issues. Antacids can help relieve indigestion. There is also a wide range of aminosalicylate drugs for long-term intestinal inflammation, and antidiarrheal drugs to relieve severe symptoms of diarrhea.

For more severe conditions such as Crohn's, patients can look at a whole host of different medications prescribed to control the disease. These include antibiotics, steroids, and immune modifiers, which suppress the body's natural immune response. As you can probably imagine, the side effects of taking such a range of medications could result in new problems, ranging from weight gain to reduced immunity and thinning of the bones.
It is not uncommon for patients to seek alternative treatments to avoid these nasty side effects. Many conditions of the body, including those affecting the digestive system, may be helped through dietary modification and proper supplementation. CBD is one such potentially useful supplement.

Living with Digestive Problems

A staggering 20 million Americans suffer from chronic digestive diseases. As such, it is no shock that this type of disease is one of the most expensive to treat in the US. With hundreds of prescription medications on the market, treatment can be costly and overwhelming for long-term sufferers. The common side effects that most treatments have on the body can cause further physical issues for the user and ultimately impact mental health.

Why Choose CBD for Digestive Problems?

There is a growing trend towards the use of CBD for a variety of digestive issues. This cannabinoid influences human health by interacting with the endocannabinoid system (ECS). The ECS comprises chemicals called endocannabinoids and cannabinoid receptors such as CB1 and CB2. They trigger various responses that help maintain a state of balance throughout the body, and CBD influences the activity of the ECS and various other receptors.

There is a high concentration of cannabinoid receptors in the digestive system. Therefore, CBD could have a significant impact on digestive health. One of the primary ways in which it may do this is by reducing inflammation. It achieves this by preventing the production of interleukins and cytokines, proteins that signal our immune system to begin an inflammatory response. CBD could also lower cortisol, a catabolic hormone. Cortisol breaks down tissues in the body and has a destructive impact on the lining of our GI tract. Keeping cortisol levels in check is a key factor in maintaining a properly functioning digestive system.

Now, let's look at the studies suggesting CBD has a positive effect on stomach problems.

CBD for Stomach Issues

Research into CBD's digestive system benefits goes back over a decade. While studies involving human subjects are limited, with mainly small sample sizes, the results are promising.

An Israeli study published in 2012 involved 292 patients receiving treatment for IBD. Around half of them had used cannabis in the past or were using it at the time of the study. Overall, 32% of MMJ users said they tried it for poor appetite, diarrhea, and nausea. The researchers found that the current cannabis users reported a significant improvement in those symptoms.

A 2016 review of studies looked into the role of cannabinoids in inflammation and gastrointestinal mucosal defense. The researchers wrote that direct activation of CB1 receptors by cannabinoids helped reduce gastric acid secretion and gastric motor activity. The process also decreased the formation of gastric mucosal lesions induced by alcohol, stress, and NSAIDs. The researchers concluded that the ECS represented a promising target in the treatment of IBDs. They also said preliminary clinical studies confirmed the assumption.

One of the most recent relevant reviews was published in 2020. It looked into CBD products and other non-intoxicating cannabinoids for the prevention and treatment of GI disorders. The researchers concluded that CBD in particular was potentially useful for helping with different conditions and diseases of the GI tract. However, more research is required to confirm the link.
CBD Oil for Digestion

Another reason to consider CBD for digestion is its apparent capacity to regulate appetite. This is extremely helpful if you eat too frequently or too much and suffer from bloating as a result. Ideally, you will leave a minimum of 3-4 hours between meals and snacks for optimal digestion.

In general, CBD could lead to weight loss, as it reduces appetite. This was the finding of a 2012 study published in Psychopharmacology (Berlin). The researchers found that CBD usage led to a significant reduction in food intake among rats compared to CBG and CBN.

Does CBD Oil Help with Constipation?

Certain gastrointestinal issues impact the frequency of bowel movements, known as motility. A study from 2005 found that CBD could potentially help regulate GI motility, although the effect depends on an individual's specific symptoms. This is positive news if you suffer from constipation. There is evidence that CBD could also alleviate symptoms associated with constipation, such as stomach ache, nausea, and bloating. For example, a 2011 study found that cannabinoids could regulate nausea and vomiting.

There are many forms of CBD worth considering for digestive problems. CBD oils and gummies are among the most popular. However, there is an age-old remedy for stomach issues that could provide extra relief when infused with CBD.

Peppermint Tea for Digestive Problems

Peppermint is an herb that people frequently use for digestive problems. The plant is native to Europe and North America, and humans have used it therapeutically for centuries. Peppermint is rich in active compounds such as flavonoids, rosmarinic acid, and menthol, among others. Therefore, it has a variety of potential benefits. People often apply it topically to relieve symptoms like headaches and muscle aches. However, the most famous medicinal use of peppermint is as a digestive aid.

It has anti-spasmodic actions, and research has shown that it can be useful for relieving IBS symptoms. Most of the studies on peppermint for IBS have focused on its essential oil in the form of enteric-coated capsules. Results suggest that peppermint oil can alleviate abdominal pain and improve global IBS symptoms too. It does not seem to cause side effects at normal doses, but high doses of peppermint oil can be toxic. Therefore, it is essential to take peppermint oil capsules according to a physician's instructions and not exceed the maximum recommended amount.

Another popular way of consuming peppermint, which we turn to now, is as an herbal tea.

CBD Peppermint Tea Benefits

CBD peppermint tea combines the benefits of the two herbs into one tasty and refreshing beverage. It is convenient and enjoyable to use and is an ideal solution for people with busy schedules. While many people choose CBD peppermint tea with the hope of soothing digestive discomfort, it has other potential benefits too. For example, in traditional Chinese medicine, peppermint is known as a cooling herb, making it ideal for drinking on hot summer days. Other traditional uses include alleviating pain, regulating the immune system, and relieving colds. However, there is little clinical evidence to back up these claims.

CBD also has plenty of potential benefits outside of the digestive system. The most common reasons people use it include chronic pain, anxiety, and depression. However, peppermint CBD tea may not be the best way to experience these effects.
Let's take a look at why.

Disadvantages of Peppermint CBD Tea

The main downside of CBD peppermint tea is that it generally contains only small amounts of CBD. So, while it makes a delicious beverage, it might be better viewed as a CBD top-up rather than a primary consumption method. Consumers will get a far higher CBD dosage using oil tinctures, capsules, edibles, or similar products.

One solution is to add a few drops of CBD oil to peppermint tea to create a homemade infusion. However, the oil will separate from the drink, and some may find the taste unpleasant. Furthermore, CBD is fat-soluble and needs a little fat to help the body utilize it fully. Therefore, unless consumers add something fatty to their tea, they will derive little benefit. Adding some coconut oil or full-fat milk will help, but it may also make the tea less enjoyable.

Some CBD manufacturers are starting to recognize this problem and produce water-soluble CBD. However, these items are often harder to come by and more expensive than regular CBD products. Therefore, it is necessary to do some research to find the best and most effective peppermint CBD tea.

Does CBD Settle Your Stomach?

The data available so far suggests that the answer is 'yes.' There is a clear connection between the ECS and the GI tract. Endocannabinoids work with CB receptors all over the body, including the stomach. While CB1 receptors are predominantly found in the brain, CB2 receptors are mainly in the gut. When a digestive system organ becomes inflamed, these receptors can bind with cannabinoids such as CBD to slowly reduce the inflammation. There are now dozens of studies suggesting that CBD can help with gastrointestinal disorders such as IBS, Crohn's disease, ulcerative colitis, gastritis, and constipation. The antiemetic, analgesic, anti-inflammatory, and anxiolytic properties of CBD could help alleviate various digestive issues.

Final Thoughts on the Use of CBD for Digestive Issues

Clinical trials are still ongoing. We have yet to see breakthrough results showing that CBD can fully eradicate the symptoms of digestive diseases. Nonetheless, there has been a great deal of promising work relating to how CBD works on the body. Also, a growing number of individuals who use CBD for GI issues claim that it is effective. While we understand that the thought of moving to something such as CBD can be daunting, the results speak for themselves. Many positive testimonials suggest that CBD does offer digestive benefits with limited risks. This is an advantage that you can't get from prescription medications.
Self-Propelled Underwater Neutrino Cherenkov Radiation Observation Platform
Expansion of Observation Facilities: SPUNCROP
Total Capital Expenditures = $67,285.00
Total Budget Request = $67,655.50

Currently one of the most controversial topics in solar astronomy is the so-called "solar neutrino problem." Experiments around the world set up to measure the flux of high energy neutrinos (a weakly interacting uncharged subatomic particle) have detected a flux of neutrinos lower than that predicted by the standard solar model. More data is required before an explanation of this discovery can be formulated. We believe that the proper position of the Ryerson Astronomical Society is at the forefront of this exciting new field. The Self-Propelled Underwater Neutrino Cherenkov Radiation Observation Platform, or SPUNCROP, will not only make the Ryerson Astronomical Society the only amateur astronomy group actively engaged in the measurement of solar neutrino flux, but it will also make SG the first student government in the world to fund such an ambitious undertaking, a fitting honor for the University of Chicago.

We propose to outfit a diesel submarine with photomultiplier tubes and lurk about the bottom of Lake Michigan late at night in an effort to detect the faint flashes (known as Cherenkov radiation) that result from the collision of solar neutrinos with water molecules. By using Lake Michigan as a detector we will not only be effectively shielded from cosmic rays, but we will also avoid the expense of containing millions of gallons of fluid, a problem in several less innovative experiments currently being carried out by other research institutions. A further advantage of our experiment is that we will be able to detect the direction of interacting neutrinos through the use of timing data from the photomultiplier tubes, providing a crude imaging capability.

We propose to purchase a used diesel-electric submarine from the Union of Soviet Socialist Republics. Because of the end of the Cold War combined with the poor Soviet economy, the U.S.S.R. is selling these submarines at give-away prices. Not only is this an excellent opportunity to get a good buy on a submarine, but our support will also help the faltering economic reforms in the Soviet Union. The cost of a suitable submarine is $50,000. We will need another $5,000 for photomultiplier tubes and assorted electronics. $2,000 is needed to fly two members to Vladivostok to pick up the submarine, and $10,000 worth of fuel will bring the submarine back to Chicago and support our first year of observations.
Philanthropy – the love (philo) of humanity (anthropos) – has been a part of the lives of wealthy families for millennia. With their dollars and time, they have helped the poor, built libraries and hospitals, become patrons of the arts, and advocated for human and political causes. Societies, locally and around the world, have benefited from these gifts. But wealthy families themselves have also benefited from philanthropy. They have experienced the joy of giving to others and seeing progress. Through giving, they have helped to cultivate a spirit of generosity among their family members. And they have seen how a common external cause can knit their clan together and build powerful, enduring, cross-generation values, counteracting the natural entropy families face over time.

In some ways, we are now in a golden age of philanthropy as mega-wealthy donors – Bill Gates and Warren Buffett come to mind – commit large portions of their wealth to charitable endeavors. At all levels of wealth, families are wrestling with many of the same philanthropic questions and issues. How much of our wealth should go to our heirs and how much to charity? What is a good investment of time and money? What are the right vehicles? How do we make wise decisions? How do we involve others? Our contributing authors will shed light on some of these issues and give you food for thought along the journey of family philanthropy.

There is a whole vocabulary about philanthropy these days, which can be confusing. Ellen Remmer answers those questions directly: 'What is the difference between charity, philanthropy, strategic philanthropy and impact investing?' She also provides many examples and tips to help families decide how they want to approach their giving and charitable investing.

When we think of our wishes for our children, most of us would include among them an attitude of authentic generosity, which often displays itself as gratitude, kindness, and a sense of community. Alasdair Halliday and Anne McClintock answer the question "How can you encourage generosity in your family?" and show how philanthropy can be an important tool in developing that spirit among all members of the family.

The problem is that it is often difficult to interest and involve younger generations in family philanthropy. Sometimes, based on their stage of development, they may have other interests and priorities; in other cases, philanthropy hasn't been built into the culture and practice of the family. Lisa Parker's essay answers the question "How do you engage children and grandchildren in philanthropy?" with some highly practical advice and suggestions that are sure to encourage engagement.

Good philanthropy requires planning and forethought, but it also has a strong emotional side, often driven by passion, vision, and calling. Barnaby Marsh knits these threads together and looks at how you can wisely develop a long-term strategy for your philanthropy. He helps us think about why we give, how our giving might shift in the future as the world changes, and how we stay focused yet flexible on our philanthropic mission.

Those families who embrace and nurture their philanthropic urgings will give gifts, but are likely to receive significant gifts as well. Philanthropy allows families to face outward together. It encourages them to reach beyond the immediate and dream big. It provides purpose and meaning to life. And it encourages humanity and character in its members. Enjoy the journey.
Chapter 41 – What Is the Difference Between Charity, Philanthropy, Strategic Philanthropy, and Impact Investing? Ellen Remmer
Chapter 42 – How Can You Encourage Generosity in Your Family? Alasdair Halliday and Anne McClintock
Chapter 43 – How Do You Engage Children and Grandchildren in Philanthropy? Lisa Parker
Chapter 44 – How Can You Wisely Develop a Long-Term Strategy for Your Philanthropy? Barnaby Marsh
French Furniture Styles: Art Deco Style, 1925

The richness and proliferation of its ornament is what is most striking about this style. It exploited every available ornamental resource with imagination, invention and refinement.

Furniture: Tables, rarely square, were crisply geometric in profile and light in appearance. Legs were graceful and bare of ornament. Armoires made of ebony and tulipwood tended to be richly decorated. Commodes were among the period's most attractive productions. Chair backs tended to be rather low and quite open, while legs were quite thin. Settees became increasingly prominent as the period advanced.

Materials and techniques: Art Deco delighted in costly materials. Exotic woods such as ebony and macassar were preferred to European ones. Gilt bronze and copper were generously employed, as were silver and cast iron. Leather became as pervasive as fabric for chairs and settees.

Ornament: Most ornamental techniques were used; only carved wood sculpture was downplayed. The influence of cubism and abstraction was apparent. Elements were drawn from African art and from vegetal, floral and maritime motifs. Curved lines were prevalent.

Source: French Furniture by Sylvie Chadenet
Animal Species: Banded Pipefish, Doryrhamphus dactyliophorus (Bleeker, 1853)

The Ringed Pipefish can be recognised by its pattern of red to blackish bars. It occurs in marine tropical waters of Australia.

Standard Common Name: Ringed Pipefish

The Ringed Pipefish has red to blackish bars, a distinct caudal fin, and a very long snout. The species grows to a maximum length of 18 cm.

The map below shows the Australian distribution of the species based on public sightings and specimens in Australian Museums. Source: Atlas of Living Australia.

The species lives in caves and crevices.

- Hoese, D.F., Bray, D.J., Paxton, J.R. & G.R. Allen. 2006. Fishes. In Beesley, P.L. & A. Wells. (eds) Zoological Catalogue of Australia. Volume 35. ABRS & CSIRO Publishing: Australia. parts 1-3, pages 1-2178.

Mark McGrouther, Collection Manager, Ichthyology
Should environmentalists fear logging or learn to understand its impact?
May 18, 2005

Environmentalists usually oppose logging, associating it with deforestation and biodiversity loss. A new report from CIFOR, Life after logging: reconciling wildlife conservation and production forestry in Indonesian Borneo, suggests that in reality many logging operations have a lesser impact than generally believed by conservationists. Further, since more forests in Borneo (the area of study) are allocated for logging than for protected areas, it is imperative that we better understand how biological diversity and ecological services can be maintained in such areas and how they can be integrated with protected areas into "multi-functional conservation landscapes." Conservationists, loggers, and policy-makers alike need to recognize that logged-over forests have conservation value and work to ensure that these areas are indeed used for this purpose, especially when other options for biodiversity conservation are not available.

Life after logging notes that logging in tropical forests is often highly selective; sometimes just a few trees per hectare are cut and removed. The main problems with logging stem from road construction and hunting, both activities that can be curtailed through better management. Logged-over forests themselves retain much of their original biodiversity as long as they are not converted for agriculture, exhaustively hunted, or seriously degraded through other activities. Life after logging aims to emphasize that "logged forest is a vital component of any comprehensive approach to landscape-scale conservation," not to argue that all forests should be opened up for timber harvesting. Since some interior forest species cannot survive the changes wrought by logging, it is critical that selected areas still be afforded strict protection measures.

The CIFOR report looked at logging in the Malinau area of Indonesian Borneo (East Kalimantan), where biologically rich forests are being rapidly developed for industrial logging, mining and estate crops. Life after logging sees Malinau as a place to observe how well "protected forests, managed forests and more intensively developed agricultural areas could be combined in a mosaic to achieve both conservation and development objectives." The authors argue that before commencing any conservation and development plan it is imperative to "assemble enough information on the ecology of an extensive tropical forest landscape to enable predictions to be made about the impacts of different sorts of development scenarios." This is just what they've done in Life after logging, which "shows that different combinations of logging and protection and different patterns of infrastructure development will have different impacts on biodiversity and that these in turn will have impacts on the livelihoods of the people who depend upon this biodiversity."

The authors look at a variety of timber harvesting paths to measure the impact on local species:

Selective logging has fewer direct negative consequences for many vertebrate species than is sometimes assumed. It certainly affects certain groups of species, like terrestrial, insectivorous birds and mammals, which suffer from the reduced ground cover. This may primarily be caused by the slashing of ground cover and lianas, which is currently required by law.
Some species, though, such as deer and banteng, appear well adapted to, and can increase in, the more open habitats that follow logging… Terrestrial insectivores and frugivores appear particularly sensitive to timber harvest practices, whereas herbivores and omnivores were more tolerant or even benefited from logging.

The authors also consider the economic impact on local people. From these scenarios, they develop a list of recommendations to help conserve local species, which are increasingly threatened by deforestation, forest degradation and hunting. The authors note that while "the Indonesian government has pledged to do its best to control these problems … [through] laws and international agreements … achieving conservation goals remains fraught with challenges." They argue that one way the government may be able to meet conservation targets is to establish policies and extend existing regulations in logged-over forests and areas concessioned for timber cutting. Some of their recommendations include:

- retention of ecologically important habitat structures (large trees, hollow trees, fruiting species) and locations (salt licks, watercourses);
- discontinuation of understorey slashing (currently a legal requirement in Indonesia);
- regulation or restriction of hunting in timber concessions;
- maintenance of forest corridors to allow the movement of species between areas of forest;
- adoption of good road-building practices by reducing the width of, and maintaining canopy connectivity over, roads; and
- application of reduced-impact logging methods such as limiting felling-gap sizes.

In developing these recommendations, the authors looked for ideas that would enhance biodiversity preservation while addressing the needs of development interests. The future of Borneo's wildlife depends on making the best use of its remaining forest resources. Even after a forest has been logged, it can still be productive from a conservation standpoint.

This article uses information and excerpts from Life after logging: reconciling wildlife conservation and production forestry in Indonesian Borneo by Meijaard, E.; Sheil, D.; Nasi, R.; Augeri, D.; Rosenbaum, B.; Iskandar, D.; Setyawati, T.; Lammertink, A.; Rachmatika, I.; Wong, A.; Soehartono, T.; Stanley, S.; O'Brien, T. The report is available for download in PDF format at http://www.cifor.cgiar.org/scripts/newscripts/publications/detail.asp?pid=1663
Image: Black Marlin pectoral fin ray The marginal (leading) ray of a billfish, probably a Black Marlin, found by Harry Rosenthal on Backwoods Beach, Cronulla (directly opposite Shark Island), New South Wales on 4 August 2012. The bone was found in its current 'clean' state. No other bones were found in the vicinity. - Carl Bento - © Australian Museum Black Marlins have rigid pectoral fins that have a very limited range of movement. Part of this rigidity results from the flat basal surface of the leading pectoral fin ray which sits against the flat surface of the scapula. Other billfishes, such as the Blue Marlin, have a markedly convex pectoral fin base that allows the fin to be moved. Wapenaar, M.-L. & F.H. Talbot. 1964. Note on the rigidity of the pectoral fin of Makaira indica (Cuvier). Annals of the South African Museum. 167-180.
Putting Things in Perspective

The picture above has been making the rounds of the internet lately (sadly it hasn't always been attributed to the Land Art Generator Initiative). It's a bit similar to things we posted about in the past and represents the total surface area that would be required to power the whole world in 2030 using nothing but solar or wind power (see below for the wind power pic). All the assumptions used to create the solar power pic above (you can click on it to see a bigger version) can be found here, but here are the main ones:

According to the US Department of Energy (Energy Information Administration), the world consumption of energy in all of its forms (barrels of petroleum, cubic meters of natural gas, watts of hydro power, etc.) is projected to reach 678 quadrillion Btu (about 715 exajoules) by 2030 - a 44% increase over 2008 levels (levels for 1980 were 283 quadrillion Btu and we stand at around 500 quadrillion Btu today). [...] Dividing the global yearly demand by 400 kW•h per square meter (198,721,800,000,000 / 400), we arrive at 496,804,500,000 square meters or 496,805 square kilometers (191,817 square miles) as the area required to power the world with solar panels. [...] If divided into 5,000 super-site installations around the world (an average of 25 per country), each would measure less than 10 km per side. The UAE has plans to construct 1,500 MW of capacity by 2020, which will require a space 3 km per side. If the UAE constructed the other 7 km per side of that area, it would be able to power itself completely with solar energy as a nation. The USA would require a much larger area and approximately 1,000 of these super-sites. According to the United Nations, 170,000 square kilometers of forest are destroyed each year. If we constructed solar farms at the same rate, we would be finished in 3 years.

Click for bigger version. Credit: Land Art Generator Initiative.

They did the same thing with wind power (again, you can click on the pic above to see a bigger version):

A 5 MW turbine can be expected to produce 17 GWh per year (turbines deliver about 40% of their peak rated capacity; 5 MW x 365 x 24 = 43.8 GWh). Therefore, it would require 11,748,294 of the 5 MW turbines to create the same yearly output. There are 500 million cars in the world, so it's not an unattainable goal from a manufacturing standpoint. And each 5 MW turbine is a money-making machine with a 30-year lifespan for whoever buys it. The same cannot be said for my car. But if we can build 90,000 Cape Wind size installations, we would be there on wind alone. Based on that installation, each turbine requires 1/2 square mile of area for offshore sites. This would require about 5.87 million square miles for 2030 world energy needs.

Of course, nobody's suggesting creating a kind of "clean energy monoculture". The green energy future will no doubt include many sources, including wind and solar, but also wave, geothermal, green hydro, etc. Maybe even space-based solar power and what I call "estuary power" (harnessing the mixing of fresh water with salt water). But still, this exercise is useful because it puts things in perspective and shows us that while the scale is huge, it isn't so much bigger than a lot of other man-made things.

Via Land Art Generator
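For readers who want to check the arithmetic themselves, here is a small back-of-the-envelope sketch in Python. The demand figure and the 400 kWh per square meter solar yield are the assumptions quoted above; the Btu-to-kWh conversion factor is standard, and the variable names are ours:

```python
# Back-of-the-envelope check of the Land Art Generator figures quoted above.

BTU_TO_KWH = 0.000293071          # standard conversion: 1 Btu ~ 0.000293071 kWh

demand_btu = 678e15               # projected 2030 world energy demand, Btu
demand_kwh = demand_btu * BTU_TO_KWH   # ~1.987e14 kWh per year

# Solar: divide yearly demand by usable yield per square meter
solar_yield_kwh_per_m2 = 400
area_m2 = demand_kwh / solar_yield_kwh_per_m2
print(f"solar area: {area_m2 / 1e6:,.0f} km^2")   # ~496,800 km^2, as quoted

# Wind: a 5 MW (5,000 kW) turbine at a 40% capacity factor
turbine_kwh_per_year = 5_000 * 8_760 * 0.40       # ~17.5 GWh per year
print(f"turbines needed: {demand_kwh / turbine_kwh_per_year:,.0f}")
# ~11.3 million, matching the ~11.7 million above within rounding
```

Note that 11,748,294 turbines at half a square mile each works out to roughly 5.87 million square miles, which is the corrected figure used above.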
The metaphor that was so popular in General Conference just a few years ago resurfaced again, but in a surprising way. A week ago, when I was meeting with my Jane Austen discussion group, one of the participants, who was raised in a conservative Catholic home but is now mainline Protestant, referred to the analogy of religion as a ship. She used the analogy completely differently than we heard it used in General Conference. She said that religion is a ship that takes you to a new shore, in a new place, and then you are there. The new land is where you live the teachings of Jesus. The ship brought you there, but you don't stay on the ship. You basically "graduate" from the ship and go forth to live and work among all of humanity, trying your best to live up to the ideals He preached. The ship is no longer necessary.

This parallels a sentiment my grandparents (who were Protestants, not Mormons) once shared: that Church is for children. A schoolbus would come by to pick up the children to take them to Church, but parents often didn't go because they already "got it." They were Christians. They didn't need the exact same instruction over and over for the rest of their lives. They needed to just try to live a moral life using the teachings they already had.

In Buddhism there are two sayings along these same lines of thinking:

- "If you meet the Buddha in the road, kill the Buddha." This saying means that you should not revere and idolize the teacher; you should live the teachings. Enlightenment comes from within, not from external sources.
- "I am a finger pointing to the moon. Don't look at me; look at the moon." This is a saying attributed to Buddha in which he is saying he is just a teacher, and should not be worshipped or idolized. Rather, we should look at the teaching, not the teacher. Just as the pointing finger guides us to the moon but is not glorious like the moon, the teacher might bring us to enlightenment, but the teacher is not the teaching.

We recently returned from a trip to Egypt. Because it is such an iconic part of the Egypt tourist experience, we booked a Nile cruise. We had just seen Death on the Nile, and I was picturing rocking up at amazing Egyptian temples, taking our time walking through with our guide explaining what we were seeing, and then sauntering back on board as the sun set over the lazy Nile waters.

These are fairly small ships with maybe 80 passengers. Tour groups were arranged through local tour companies, not the ships; some people were touring in groups of twenty or thirty. Others, like us, just shared a single guide with one other passenger. We weren't excited about being at the sites with 80 other people all at once, because it's just harder to take pictures, to see what you want to see, etc., but this class of ship was much better than some of the other options. It's possible to cruise on a single-sail felucca and sleep on the deck under the stars, for example, but it also seemed a little like I'm-too-old-for-that.

Imagine our surprise when we discovered that ALL the ships are on the same schedule, and they all dock together. We weren't just one ship docking and going to see the amazing sites. We were nested between other similarly sized ships, four or five ships deep, and both behind and in front of our ship were more ships similarly nested, so now we were among hundreds of passengers from a dozen ships, all pouring into the sites at the exact same time.
As you can imagine, that meant standing shoulder to shoulder with dozens of other tourists trying to take the exact same picture of the exact same hieroglyph. In case you were wondering, this was nothing like the cruise in Death on the Nile. It was more like going to a concert, in a bad way, with more risk of being trampled than sexy dancing with Armie Hammer or delightful amuse-bouches with Hercule Poirot. Our guide said that if we come back, he recommends skipping the Nile cruise and staying in hotels instead, which are roomier and have nice pools, using a hired car to see the sites; because he knows the ship schedules, he would time it so you can actually hear him speak, take the pictures you want to take, and wander the site, really feeling like you can experience it. I would definitely do it that way in the future.

But it got me thinking back to that ship analogy. If you had told teenage me that the Good Ship Zion was going to be nestling with a bunch of Evangelical churches and an occasional Catholic church, nearly indistinguishable in terms of our experience, and that this was how it would be for the rest of my adult life, I would have thought you were joking. If you had said that meetings would be hard to distinguish from conservative political rallies, I would never have believed it. Yet here we are. Maybe that's because when I was young, Church was still teaching me things I didn't know. As an adult, since I now know what Jesus taught, I am left to evaluate the Church in terms of its alignment with those teachings, or its misalignment. I can judge what is moral and what is not because I was taught in my youth.

Back to the boat, though: why would anyone stay in a boat indefinitely? I enjoy sailing as much as the next person, but whenever you get in any vehicle, the point isn't to stay in the vehicle forever, but to go somewhere. You don't drive to church on Sunday and sit in your car in the parking lot. The only reasons I can think of to stay in a boat long-term are 1) a Covid outbreak on your cruise ship, which then descends into a Lord of the Flies scenario, or 2) Noah's Ark, which basically starts as a Lord of the Flies (and all other animals) scenario. In the first case, you must stay aboard to quarantine, to protect others from contamination by you; in the second, you can't leave because you will drown, since there's no land above water (never mind that the story isn't historically accurate; we're taking the story at face value for the metaphor it is). It's this second reason that I think Church leaders (and conservatives in general) seem to be concerned about: the idea that leaving the boat equals danger and personal contamination, that we, as disciples of Christ, are too weak to survive. We can't swim on our own.

While there's value in belonging to a community of supportive believers, all striving to do our best, we are in fact supposed to also interact with the world around us. The idea that everything outside of the boat (or Church) is scary and bad is no way to be a Christian. If you're only interacting with other Christians, you've kind of missed the point. Right?

- Do you think the Church was always "nested" with other high demand conservative religions or do you see this as a shift due to increased polarization?
- Do you think the analogy of a ship that never reaches shore is a good analogy or not?
- Do you like the version of the ship analogy that my friend said is more common in her experience?
- Is Church for children? Do we outgrow it?
Or do we need it because of the community, so long as that community is one we want to belong to?
I have posted comments and pictures on this site of my son Conor waiting in anticipation for his ABA therapist to arrive. I have seen ABA help Conor learn skills, reduce problem behaviors and expand his ability to communicate. But as a mere parent, my actual knowledge of ABA and its positive influence on my son's life is of no weight to the anti-cure, anti-treatment, anti-ABA ideologues who attack ABA despite their own lack of actual knowledge or experience with the intervention, and despite the hundreds of studies and many credible professional reviews of those studies speaking to the effectiveness of ABA as an autism intervention.

Unlike some anti-ABA ideologues who accuse behaviourists of misbehaving and propagate unfounded, negative myths about ABA, Mruzek and Mozing have actual experience with ABA. They have more than empty, heated rhetoric to offer: as Board Certified Behavior Analysts, they have talked the talk and walked the walk. Unlike most anti-ABA critics, they actually know what they are talking about. They offer their comments on an article which perpetuated some of the ABA misconceptions:

Unfortunately, the essay perpetuated some all-too-common misconceptions about applied behavior analysis, particularly that it is "rooted in repetition" and focuses mainly on making "children with autism ... indistinguishable from their peers." In fact, the concept is a very flexible approach, with teaching methods and goals carefully tailored to the needs of each child.

And they speak to ABA's effectiveness as a means of helping autistic children:

The first applied behavior analysis study specifically targeting autism was published in 1964. Since then, it has become the most-studied intervention for children with autism — the only one recommended by the New York State Department of Health. It includes a wide array of teaching methods grounded in scientifically derived principles of learning, especially those related to the powerful effect of positive reinforcement on behavior change. Applied behavior analysis can help people with autism develop new skills (in academics, play, communication, social interaction) and support those who engage in challenging behaviors (severe tantrums, refusing food, injuring themselves). Intensive, early intervention for young children with autism can be especially effective, although outcomes vary among individuals. A study under way at UR is investigating factors possibly implicated in these variable outcomes.
A blood sugar level lower than about 3.3 mmol/L (60 mg/dL) is called hypoglycemia. The feelings associated with hypoglycemia are called an "insulin reaction." The earliest symptoms of low blood sugar can be like the feelings many people experience when they've gone without food for a long time: they may feel hungry, tired and irritable, and may even have a headache. These early warning signs tell us that the body needs sugar quickly.

As the blood sugar continues to drop, other signs and symptoms may develop: shakiness, pale skin, cold sweat, dilated pupils, and pounding heart. These happen because the body is trying to boost the blood sugar from within. Certain hormones, including glucagon, adrenaline, cortisol, and growth hormone, stimulate our liver and muscles to convert stored sugar into glucose, which enters the bloodstream. In someone without diabetes, the body turns off the insulin supply whenever blood sugar is at a normal level. But in people with diabetes, the injected insulin continues to work. As fast as the glucose enters the bloodstream, the insulin pushes it into the cells, so the level of sugar in the blood remains low until the person takes extra sugar by mouth.

Most people with type 1 diabetes have low blood sugar reactions from time to time, an average of about two mild ones per week. Indeed, mild reactions that are easily recognized and treated, without too much interruption in activities, should be expected. They can be seen as the price paid for good glucose control. Note that some people have symptoms of hypoglycemia even when their blood sugar level is higher than 3.3 mmol/L (60 mg/dL).

Common signs and symptoms of a mild insulin reaction:
- shakiness: "butterflies," feeling nervous for no reason
- cold, clammy sweatiness, unlike sweat from playing hard
- dilated pupils, "funny-looking" eyes
- mood change: irritable, grouchy, impatient; temper tantrums in younger children
- hunger, and sometimes nausea due to the hunger
- lack of energy: tired, weak, floppy
- lack of concentration
- blurred vision
- pounding heart
- change in skin colour: pale, most noticeable in the face and around the mouth
- disturbed sleep: restlessness, crying out, sleepwalking, or nightmares

Usually insulin reactions happen suddenly, over a period of minutes rather than hours. While they may occur at any time of the day or night, they happen most often when insulin is working at its peak. Low blood sugar symptoms vary from child to child. Each child tends to develop his own set of symptoms. After 1 or 2 episodes, you and your child will learn to recognize an insulin reaction quickly. It is helpful if you explain your child's specific symptoms to teachers, coaches, school bus drivers, and other caregivers. Even young children can be taught to tell an adult about these symptoms. They could use a specific phrase such as "I feel funny" or "I need sugar." Hypoglycemia may be most difficult to detect in infants or toddlers, who can't describe their feelings. A sudden change in behaviour, with irritability, crying, pale face and "floppiness," may be the tip-off that blood sugar is too low.

How to treat a mild insulin reaction

All insulin reactions must be treated right away. Always have a source of fast-acting sugar available, such as juice, dextrose tablets, or even table sugar. If possible, check the blood sugar level to confirm hypoglycemia.
A blood glucose level lower than 4 mmol/L (about 72 mg/dL) in older children and teens, or below 6 mmol/L (110 mg/dL) in toddlers or preschoolers, along with symptoms of low blood sugar, should be treated. (If you are unable to check the blood sugar before treating the reaction, check it as soon as possible afterward. Note the response to treatment.)

Give a source of quick-acting sugar. About 10 to 15 grams of carbohydrate is all it takes to treat an insulin reaction. Examples include:
- four ounces (125 mL) of unsweetened juice or regular soft drink
- two to three dextrose tablets (these are not appropriate for infants or toddlers)
- eight ounces (250 mL) of milk
- two teaspoons (10 mL) of sugar
- prepackaged glucose gels, which may also be available from your pharmacy or diabetes supply shop. Read the label ahead of time to determine the amount you should give to treat a low blood sugar reaction. An amount that will supply 10 to 15 grams of carbohydrate is generally recommended.

For infants or toddlers, some parents keep a tube of cake frosting handy for treating mild hypoglycemia. If a mild reaction occurs just before a meal or snack, start the meal or snack immediately, beginning with some simple carbohydrates.

- Wait for the sugar to take effect. This is the hardest part. People who experience hypoglycemia feel extremely hungry and scared. They are often tempted to continue to eat and drink until the symptoms go away. This may result in a high blood sugar level later in the day. If symptoms last, recheck the blood sugar in 10 to 15 minutes. If it is still low, the child should have an extra 10 to 15 grams of carbohydrate. If vigorous exercise is anticipated prior to the next meal or snack, or if the reaction occurs during the night, the simple carbohydrate should be followed with a complex carbohydrate (one from the starch category).
- Try to figure out the cause of the insulin reaction. If there is no apparent reason, consider reducing the appropriate insulin by 10% to 20% the next day.
- Note the blood sugar levels, time, response, and possible cause of the reaction in your record book.

Note: If in doubt, treat. When you can't check the blood sugar level to confirm an insulin reaction, give sugar to be safe.

False low blood sugar reactions

Sometimes children feel anxious, nervous or tired and think it's due to low blood sugar when it isn't. There are many reasons for this. A quick blood sugar check is the best way to find out whether or not the blood sugar is low. Sometimes low blood sugar symptoms occur when the blood sugar drops quickly from a high to a normal level. Feeling nervous or upset for other reasons, such as exams, can also be confused with hypoglycemia. And occasionally the symptoms of high blood sugar are mistaken for a low sugar reaction. Once the blood glucose has been checked and it is clear that the result is over 6 mmol/L, reassure the child and encourage her to resume activity. However, if in doubt, treat the symptoms.

Why do insulin reactions occur?

Understanding the reasons for hypoglycemia is key to preventing it. The causes are usually related to the three major factors affecting blood sugar balance: insulin, food and activity.

Too much insulin

Children can get too much insulin if:
- the wrong amount is given
- the dose is mistakenly given at the wrong time, such as giving the pre-breakfast dose at suppertime
- the dose isn't reduced when blood sugar readings are consistently less than the target level.
Not enough food This can happen easily enough—for example, when children get caught up in their activities and forget to eat, when toddlers sleep through snack time, or when teens sleep through breakfast or skip a meal. Too much unplanned activity This is the most frequent cause of hypoglycemia, because children aren’t used to planning ahead before they jump into an active game like tag or football. That’s why the blood glucose target range is wider in younger children than in adults; it allows for such spontaneity. Children should be able to enjoy any sport or activity with planning. How many insulin reactions are too many? It is not unusual to have one or two mild reactions per week that can be easily treated with juice. However, these mild reactions can interrupt the school day or other activities, and make it difficult for your child to focus for the following half hour or so. Prevent them as best you can, and respond to them quickly. If your child often has lows, this needs to be addressed with a change to the regimen. For example, if the teacher notices that your child is cranky every day at 11:30 a.m. and low blood sugar is confirmed, it’s time to re-examine the meal plan or insulin dose and make a change. What are the long-term effects of a severe low? The greatest long-term effect is the fear that the child will have another severe insulin reaction. This is a very real fear for many parents, siblings, and children with diabetes. It can make people reluctant to keep trying to maintain good blood sugar control. This psychological setback is the only real long-term impact, because the body works very hard to protect the brain during events like this. Mildly delayed intellectual development has been noted in some babies who had repeated episodes of severe hypoglycemia in the first three to five years of life. Reducing the risk of hypoglycemia Low blood sugar reactions are not always preventable, but there are things you can do to keep them to a minimum. - Eat meals and snacks on time. A delay of half an hour or more can result in hypoglycemia. This is most important for youngsters on Humulin N or Novolin NPH. - Make sure that the proper insulin dose is prepared and given. Children require close supervision with this task. - Plan for extra activity with extra food or an insulin reduction. Set up a good communication system with teachers, coaches, and other leaders so you’ll know when extra activity is planned. - Set up realistic blood sugar targets with your health care team. For example, it may be inappropriate and even dangerous to aim for “normal” blood sugar levels in very young children. - Remember to lower the insulin dose if the sugar level is lower than the target at the same time of day 2 days in a row, or 3 times in a week. - Always have some form of quick-acting sugar close by—and make sure everyone knows where it is. - Always have a glucagon kit at home. Review its use regularly. Take it with you on vacation. Replace it when it reaches its expiry date, and practice preparing it before you throw it out. - Encourage your child to wear medical-alert identification, and to carry a wallet card if older.
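Because this guide quotes thresholds in both mmol/L and mg/dL, a worked conversion may help. The sketch below (in Python) is purely illustrative and is not medical software; the function names are ours, the 18 mg/dL-per-mmol/L factor is the standard glucose conversion, and the thresholds are the ones quoted earlier in this guide.

```python
MGDL_PER_MMOLL = 18.0  # standard conversion: 1 mmol/L of glucose ~ 18 mg/dL

def to_mgdl(mmol_l: float) -> float:
    """Convert a glucose reading from mmol/L to mg/dL."""
    return mmol_l * MGDL_PER_MMOLL

def needs_treatment(mmol_l: float, toddler: bool = False) -> bool:
    """Apply the thresholds quoted above: treat below 4 mmol/L for older
    children and teens, below 6 mmol/L for toddlers and preschoolers."""
    threshold = 6.0 if toddler else 4.0
    return mmol_l < threshold

print(to_mgdl(3.3))                        # ~59.4, i.e. the "60 mg/dL" above
print(needs_treatment(3.8))                # True for an older child
print(needs_treatment(5.0, toddler=True))  # True for a toddler
```

As the guide stresses, when a meter reading is unavailable and symptoms are present, the safe default is to treat anyway.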
I am having difficulty understanding this sentence: In some respects, Courses of Action are the more basic of the two. In and of themselves, however, Courses of Action tend to be rather blunt instruments. In Merriam-Webster, blunt is: slow, deficient in feeling; obtuse in understanding; abrupt in manner; ... and when searching for images on Google, it seems a blunt instrument can be a bat, a hammer, or any other object used to hit something or someone. What does it mean to say they tend to be "blunt instruments"? Does it mean that it determines whether a course of action will be successful or not, like a judge's hammer?
About Pocket Queerpedia
The Pocket Queerpedia is a resource the Tshisimani Centre for Activist Education developed for activists, educators and the queer community generally, to assist in teaching about queerness. Queer education can be one of the most freeing of experiences, yet resources are not always accessible, suited to a South African context or visually appealing to young audiences. The Pocket Queerpedia is an offering in response to this. It has been reviewed by academics, progressive organisations and queer activists. The book comes in three languages in its first phase (English, Afrikaans, isiXhosa). It is available for free download below.
The Story Behind Pocket Queerpedia
Tshisimani Centre for Activist Education is an organisation dedicated to resourcing and supporting activists in their goals of equality, freedom, dignity and better futures. The idea for this glossary was sparked by a moment in one of our offerings, ‘Feminism and Freedom’, a course we hosted for young activists in 2019. While grappling with discussions on gender, sexuality and freedom, we ran into a number of difficulties. As with most of our courses, the participants in the room were quite diverse, drawn from different communities, geographic locations and organisations. Terms in queerness that we considered basic and familiar, and thought all would know, left many participants lost. What we thought were commonly accepted definitions proved otherwise. In that moment, we faced a big dilemma: how do we discuss the power and importance of queer politics when so many terms are not commonly understood?
This question led us to reflect deeply on some of the questions posed by our participants. Why are some terms used in different ways by different people? Where do I begin understanding the differences between biology and gender? What are these terms in my own home language? How would I explain all this in a way my mother can understand? Are there African examples and experiences we can draw on to better understand and make cultural links?
Words have power. They can offer recognition or erase experiences. We offer this glossary to activists who wish to broaden their understanding of the world and how gender and sexuality shape it.
About the creators
The book was conceptualised, designed, and illustrated by Seth Deacon, Tshisimani’s Visual Materials Developer and art curator, with input from the entire Tshisimani staff. Seth is a queer artist who previously taught digital arts and multimedia design, and completed an MAFA focused on the depiction of violence, gender, race and class in photographs of the body in a South African context. Content editing, consultation and copy edits were done by Tshisimani’s Social Media Specialist and Content Creator, Mohammed Jameel Abdulla. Further consultations outside Tshisimani were done with queer performance artist, activist and scholar Tandile Mbatsha; Clinton Osborne, an activist, artist and educator of the Sex Workers’ Education and Advocacy Taskforce (SWEAT); and activist and scholar Mmakatleho Sefatsa. Veteran pan-African feminist scholar and co-director of the Association for Women’s Rights in Development, Hakima Abbas, provided extensive consultation on the written content of the glossary, as well as academic feedback. The many rounds of translation were carried out by a team of writers, activists, educators and translators consisting of Simone Cupido, Kealeboga Ramaru, Allan Maasdorp, Chulumanco Mihlali Nkasela, Dinga Sikwebu and Akha Hamba Mchwayo Tutu.
Exceptions are terms of the Ubiquitous Language
04 Mar 2013 » Giacomo Tesio
In the previous article we saw that exceptions are normal results of reliable computations. But exceptions are also a well-known element of our daily life. The etymology of exception comes from the Latin excipiō, a composition of ex and capiō: literally, something that is “taken out” of something (a process, a rule, a law and so on). Since human experience leaves sediment in the languages it traverses, modern language designers simply borrowed this concept to denote conditions that move the computation out of the desired path.
In domain-driven design, we distill a code model from the language that the domain expert speaks when he solves the problem. During the modeling session (or the lesson, from the expert’s perspective), the modeler asks many questions about the behaviours of the system and, as a senior coder, tries to explore borderline cases. From such analysis, we often get a deeper insight into the model that may or may not confirm the previously defined terms. Sometimes it leads to deep refactoring (as when you see that a new concept interacts heavily with the previously modeled ones); sometimes it can even lead to (almost) restarting from scratch. But often, borderline cases are either senseless or prohibited (in the context under analysis).
An example from the real world
Let’s consider a simple case: a command that registers, in an investment proposal, an order to dismiss a financial instrument that the customer does not own. Short selling is a well-known practice in financial markets, but it’s senseless when you talk with your own financial advisor. Indeed, when I asked the domain expert how to handle this case he said: “it’s childishly simple, going short is not allowed here!”. What’s this, if not an exception thrown at me? Thus I simply made the expert’s objection explicit in the model with a class named GoingShortIsNotAllowedException. Simple enough.
Such an exception is a normal outcome of the advisory process, so we have to explicitly model it as a normal and well-documented computational result. The application that uses the domain may or may not prevent such a situation from occurring, but the business invariant is nevertheless enforced by the domain. Still, any coder that uses the domain has to ponder whether to catch such an exception (and maybe present it to a user) or ignore it (turning it into an error that should crash the application itself).
Thus our work is still incomplete. We have to provide more information to the client. Which financial instrument caused the exception? From which dossier? Expressive exceptions expose useful properties to their clients. They help the user a lot, since through a proper UI representation of the exception he can understand why his request cannot be satisfied. In applications used all over the world, expressive exceptions simplify localization and internationalization, since useful properties can be shown differently at different latitudes, according to the user’s culture (just like any other value object). Moreover, they can halve maintenance costs, since developers can rapidly identify what happened and why from the logs.
A cheap but very useful practice is to throw useful messages with exceptions. This is particularly important when an exception can be thrown in more than one situation. For example, you can get a lot more from a KeyNotFoundException with a message containing the misspelled key.
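To make this concrete, here is a minimal sketch of what such an expressive exception might look like in C#. The property names, the message text, and the use of plain strings for the identifiers are assumptions made for the illustration, not the original project’s code:

```csharp
using System;

// A sketch of an expressive domain exception: it carries the data a
// client needs (which instrument, which dossier) as typed properties,
// so a UI can build a localized message and a log reader can see the
// full context at a glance.
public sealed class GoingShortIsNotAllowedException : Exception
{
    public string InstrumentId { get; private set; }
    public string DossierId { get; private set; }

    public GoingShortIsNotAllowedException(string instrumentId, string dossierId)
        : base(string.Format(
            "Going short is not allowed here: dossier '{0}' does not own instrument '{1}'.",
            dossierId, instrumentId))
    {
        InstrumentId = instrumentId;
        DossierId = dossierId;
    }
}
```

A client catching this exception can read the two properties and render a culture-specific message for the user, exactly as it would with any other value object.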
Exception chaining is another important technique that allows clients to further understand why an exception occurred.
These may look like common-sense suggestions, but more often than not, good developers under pressure think that they can be faster, more agile and leaner by ignoring exceptional paths. Unfortunately, this is true only for disposable prototypes. In applications that will run in production, well-designed exceptions pay off well in the long term (unless maintenance fees are your business model).
A final tip
We all know that IL knowledge in a resume makes you look like a nerd. Still, sometimes, it can make your life a lot easier. If you occasionally skip exception chaining and want to re-throw a caught exception, you should remember to use the throw keyword without specifying the caught exception. Indeed, there are two distinct IL instructions that throw exceptions (actually not only exceptions, but that is off-topic here):
- throw pops the exception from the stack, resets its stack trace and throws it to the caller;
- rethrow just rethrows the exception that was caught (it is only allowed within the body of a catch handler).
Thus C# code that re-throws with `throw ex;` inside a catch handler will lose the stack trace forever, since it is compiled to a plain throw instruction. If you really can’t wrap an exception that you caught before throwing it again, remember to rethrow it with a bare `throw;` statement, which compiles to rethrow (both forms are illustrated in the sketch after the footnote below). This simple trick will help you (and your colleagues) a lot.
[1 ^] This is often what makes the difference between a junior and a senior modeler: while the junior one is fully focused on correctly modeling the intended business behaviour, the senior one always keeps an eye open for unusual cases. Indeed, even the domain expert often ignores how many borderline situations he handles daily by drawing on his own experience in the business. But since the application will be based on such experience, we have to encapsulate it in the domain model, and thus we need to make that knowledge conscious and explicit. This is a twofold aspect of DDD: it's more than an expensive software development process, it's a tool to improve the customer's understanding of his own business. And believe me, a lot of the customer's business success comes from his (almost unconscious) ability to identify such cases and properly handle them.
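For reference, a minimal sketch of the two forms discussed in the final tip above; the method names and the exception type are invented for the illustration:

```csharp
using System;

public static class RethrowDemo
{
    // BAD: `throw ex;` compiles to the IL `throw` instruction, so the
    // exception's original stack trace is reset and lost forever.
    public static void LosesStackTrace()
    {
        try
        {
            DoWork();
        }
        catch (InvalidOperationException ex)
        {
            Log(ex);
            throw ex; // the stack trace now points here, not at DoWork()
        }
    }

    // GOOD: a bare `throw;` compiles to the IL `rethrow` instruction,
    // which rethrows the caught exception with its stack trace intact.
    public static void PreservesStackTrace()
    {
        try
        {
            DoWork();
        }
        catch (InvalidOperationException ex)
        {
            Log(ex);
            throw; // rethrows the same exception unchanged
        }
    }

    private static void DoWork()
    {
        throw new InvalidOperationException("boom");
    }

    private static void Log(Exception ex)
    {
        Console.Error.WriteLine(ex.Message);
    }
}
```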
By LARRY SHAW
One year ago in late November 1999, 50,000 protesters converged on Seattle in an attempt to direct public attention to the World Trade Organization. Until that point the WTO had received almost no US media coverage, despite the fact that this extra-national organization had already forced the weakening of US laws protecting clean air and endangered species. As a result of the protests, many people now know of the existence of the WTO and what the three letters stand for. However, most people still have almost no knowledge of its operations and effects. With the nation's interest piqued, the protesters hoped that Seattle would be a starting point for the education of the public. However, the WTO has disappeared from the media like the Cheshire cat, leaving only its smile.
This writer conducted a study of newspaper coverage of the WTO to determine in what contexts the WTO has and has not been mentioned. The study covered publications from Jan. 1 to Nov. 1, 2000, using a full-text search of the ProQuest database of 27 major US daily newspapers, including the Wall Street Journal, New York Times, Boston Globe, Chicago Tribune, USA Today, Christian Science Monitor, Los Angeles Times, San Francisco Chronicle and San Francisco Examiner.
The study shows that mentions of the WTO were virtually nonexistent in articles covering issues related to attacks by the WTO on domestic environmental and human rights laws, i.e., those issues most likely to turn public opinion against the WTO. In contrast, in articles covering other types of trade news, the WTO was mentioned frequently and made headlines numerous times.
One of the major WTO-related stories of the year was an April 11 federal court ruling that thwarted the implementation of a weakened "dolphin-safe" labeling for tuna. The court ruled that the Commerce Department was attempting to prematurely implement the new labeling law without having even met the requirements of the weakened, 1997 version of the Marine Mammal Protection Act (MMPA). The dolphin protections in the initial, 1990 version of the MMPA were twice ruled trade-illegal under the General Agreement on Tariffs and Trade (GATT), the parent treaty to the WTO. To avoid the embarrassment of having a foreign authority force the revision of a domestic environmental law, Congress weakened the dolphin protections and the tuna labeling requirements of the MMPA in 1997. In marked contrast with the fanfare that the MMPA received upon its inception, the delay of the implementation of the eviscerated "dolphin-safe" labeling was covered in only four major US dailies: the San Francisco Chronicle, San Francisco Examiner, Wall Street Journal and Los Angeles Times. Even more striking was the fact that the seven articles published in those papers did not contain a single mention of GATT or the WTO.
Portions of the US Endangered Species Act (ESA) protecting endangered sea turtles have also been under attack by the WTO, as protesters dressed in sea turtle costumes tried to publicize in Seattle. On Oct. 23, Malaysia continued the aggression against the ESA by filing a complaint with the WTO charging that the US has not yet lifted its ban on shrimp caught in a manner that kills sea turtles, as required by a previous WTO ruling. That news was covered in only a single article in one major US daily, the New York Times.
Furthermore, a June 1 study published in the scientific journal Nature predicted that leatherback sea turtles will be extinct within ten years unless fishing methods are substantially altered. Only four major US dailies (San Francisco Chronicle, New York Times, USA Today, and Christian Science Monitor) covered the story, and none of the four articles mentioned the role the WTO has had in hampering the implementation of the Endangered Species Act. Another dozen articles in the major dailies addressed issues relating to endangered sea turtles during the period of the study, and, again, none of the articles mentioned the WTO. Curiously, in a San Francisco Chronicle article about the Nature study, the headline on the continuation page read, "International Trade Agreements Slow Protection of Sea Turtles," even though there was no mention of international trade agreements anywhere in the article. When queried by this writer, the reporter said a discussion of the relationship to trade agreements had been in the story. It was edited out by an editor -- the headline accidentally slipped through.
Another recent WTO-related development this year was the Supreme Court's consideration of Massachusetts's selective purchasing law, which prevented the state government from contracting with corporations doing business with the brutal totalitarian regime of Burma. Had the Supreme Court not decided against Massachusetts, the attack on the Massachusetts law would have escalated with the resumption of a pending WTO challenge mounted by Japan and the European Union. Of the 48 articles and commentaries which covered the story, 43 contained no mention of the role of the WTO. Coyly, nine of the 43 articles mentioned pressures from foreign countries without mentioning the WTO. The Los Angeles Times went so far as to state cryptically that "In a sense, the Supreme Court case contains echoes of December's demonstrations against the World Trade Organization in Seattle." But the New York Times deserves the award for coyness for writing that the European Community had "lodged an official protest," and that defenders of the Massachusetts law had "tapped into some of the populist anger ... recently directed against the World Trade Organization," without actually mentioning that the official protest was a WTO challenge.
The WTO did, however, make numerous US headlines this year -- for an export subsidies dispute between the US and Europe, and for the grant of Permanent Normal Trade Relations (PNTR) status to China. Neither issue involved a direct attack by the WTO on domestic environmental or human rights laws. The export subsidies dispute with Europe was covered in 17 articles in the major dailies, and the WTO appeared in eight of the headlines. PNTR for China was covered in at least 1,063 articles and commentaries. China's application for membership in the WTO was mentioned in 292 of the articles and commentaries, and referred to in 31 headlines. (Although the articles often suggested a direct link between the grant of PNTR for China and China's admittance into the WTO, such was not the case. The relevance of the WTO is somewhat less direct and therefore less deserving of such frequent mention: had the US not granted PNTR to China, and if China does attain membership in the WTO, then, if the US were ever to apply trade sanctions against China, China could retaliate via the WTO.)
In summary, of the articles covering developments in the Marine Mammal Protection Act, none mentioned the WTO; none of the articles covering news related to endangered sea turtles mentioned the WTO; only a single article nationwide mentioned Malaysia's resumption of aggression against the Endangered Species Act via the WTO; and only about 10% of the articles about the Massachusetts-Burma law mentioned the WTO. In contrast, nearly 30% of the articles covering PNTR for China discussed the WTO; and, with regard to the export subsidies dispute with Europe, the WTO was mentioned in all the articles and appeared in nearly half the headlines. Furthermore, disregarding whether the WTO was mentioned or not, the 72 articles covering issues related to WTO attacks on US environmental and human rights laws were substantially outnumbered by the 1,080 articles covering other types of WTO-related news.
There has indeed been a Cheshire cat-like disappearance of the WTO from the major US daily newspapers. Only the smiling teeth of news that does not involve WTO attacks on US environmental and human rights laws remain visible.
Larry Shaw is the author and performer of "Sold Down the River," the anti-WTO song played from the Steelworkers' billboard truck in Seattle. See www.solddowntheriver.org.
By: Rabbi Grisha Abramovich, Rabbi of the Union for Progressive Judaism in the Republic of Belarus and the Sandra Breslauer center in Minsk.
Our chapter starts with the commandment “You shall appoint magistrates and officials and they will govern the people with true justice”. The chapter continues with a list of laws on judicial matters and procedures, the conduct of war and the pursuit of justice, and, at its very end, the instruction for the case of an unsolved homicide.
The ritual for unsolved homicide seems to have roots in ancient times. The act of killing is thought to soil the land. The Torah teaches us the ritual for when the identity of the murderer is not known; in that situation, the community or city nearest the place of the murder is considered responsible. This ritual does not actually solve the murder. It is unlikely that it comforts relatives or helps to find the murderer. This raises questions about life and death.
The medieval Midrash Yalkut (Proverbs 943) tells us about a very old woman who asked Rabbi Yosi ben Halafta to teach her how to depart the world without violating Torah. In trying to understand the reason for such a peculiar wish, he discovered the reason for her long life. “Whenever I have something to do, enjoyable or not, I am in the habit of putting it aside in the early morning and going to the synagogue,” the woman explained. Rabbi Yosi told her that, in order to die, she should stay away from the synagogue for three days. For the next three days the woman followed the advice and did not go to the synagogue. She quickly became ill and died. The Midrash explains neither what motivated the woman to turn to a rabbi with such a strange question, nor why Rabbi Yosi, one of the foremost scholars of Jewish law in the second century, advised her so. Besides this story, we also learn from the Talmud (Sotah 46) that a very old citizen of Luz with the same wish could not die inside the city and had to leave its walls, as the angel of death had no power there. These strange situations demand attention – we need answers both in Torah and in our lives.
Back to the ritual that purifies the community from responsibility for an unsolved homicide. The ritual is called the broken-necked heifer or, in Hebrew, eglah arufah. According to the 15th-century statesman, financier and Bible commentator Don Isaac Abarbanel, this ritual is not about the community being responsible for the murder itself. Rather, it is about the value of life and the importance of continuing to search for the offender. Some commentators after him, including Hoffman, agree that the level of communal responsibility was to be weighed as heavily as the punishment of a captured murderer. Nevertheless, based on the Mishnah (Sotah 9), commentator Gunther Plaut explains that the ritual ceased when this kind of crime multiplied to such a degree that the procedure of eglah arufah was no longer practicable. Moreover, this teaches us that no ritual motivates communal responsibility for human life more than the teaching of Torah and conscious awareness.
Nehama Leibowitz comments on the ritual: “Thus responsibility for wrongdoing does not only lie with the perpetrator himself and even with the accessory. Lack of proper care and attention is also criminal”.
Next week, Reform Jewish leaders from Russia, Ukraine and Belarus, together with the chair and the president of the WUPJ, will gather for the biennial conference in Minsk.
We will discuss a number of unsolved matters. Contrary to the biblical priests, elders and magistrates, who through the ritual of the broken-necked heifer could “wash their hands in innocence”, we will raise these issues and discuss them openly. Under Communism, forgetting Judaism in the 20th-century Soviet Union was not our fault; but it is now the responsibility of the rabbis, directors, educators and youth leaders to develop and strengthen our Jewish communal life and programs. And may the Almighty help us in this all-important task.
The UN discussed the need to avoid armed conflict in North East Asia at a Security Council meeting convened on April 28. The chairs of the meeting, United Nations Secretary-General António Guterres and US Secretary of State Rex Tillerson, stated that it is the collective responsibility of the international community to avoid war with the Democratic People’s Republic of Korea; however, much of the onus also falls on North Korea itself.
Since January 2016, North Korea has intensified its weapons testing, which has included two nuclear tests, over 30 launches of ballistic missiles, and numerous other activities linked to weapons development. These tests have involved short-, medium- and intermediate-range missiles, ballistic missiles fired from submarines, and the launching of a satellite into orbit. All of the above are in direct breach of numerous resolutions of the United Nations Security Council.
In the meeting, Mr Guterres voiced his concern that war in North East Asia could lead to global destabilisation, given that one-fifth of the world’s population, together with one-fifth of the world’s gross domestic product, is based in that region. He also voiced his worry at the risk of an escalation of hostilities with North Korea, stating that misunderstanding or miscalculation could rapidly lead to increased tensions or even hostilities, and further impede the work of the international community in maintaining unity and finding a peaceful resolution. On the other side, DPRK leader Kim Jong-un has stated that the development of nuclear arms is the policy of his country and that North Korea will become a “responsible nuclear-weapon State”. While the United Nations has said that North Korea will have to cease its weapons testing and comply with the UN Security Council resolutions if a dialogue is to resume, North Korea has given no indication that it will do so and comply with its ‘international obligations’.
Tensions and hostilities aside, North Korea is facing a vast humanitarian crisis, with between one-third and one-half of the country’s population, over 13 million people, in an extremely vulnerable situation and in great need of aid. These people have little access to food, suffer from malnutrition, and lack access to basic health and sanitation services. Most at risk are children under the age of five, pregnant women and the elderly. There are currently 13 UN agencies and non-government organisations operating in North Korea, which are presently asking the international community for $114 million to help these people. António Guterres also used the meeting to call on the DPRK to utilise the resources and mechanisms of the United Nations as a means of helping these 13 million vulnerable people.
Battle of Vienna
Contributor: C. Peter Chen
By the end of Mar 1945, Soviet troops had overrun the territory of the First Slovak Republic, i.e., the eastern half of occupied Czechoslovakia. Although the Soviet forces were now ready to concentrate on Berlin, Germany, Joseph Stalin observed that there was no risk that the Anglo-Americans would reach Berlin first, thus he decided to divert some of his forces to take over Vienna, Ostmark, southern Germany. Under the command of General Fyodor Tolbukhin, the Soviet 3rd Ukrainian Front encircled the city, which was defended by the German II SS Panzer Corps and the 6th Panzer Army, both of which were under-strength through prolonged fighting. The German defense was under the overall command of General Rudolf von Bünau, while the SS corps was under the command of SS General Wilhelm Bittrich. Between 2 Apr and 7 Apr, fighting was generally contained in the southern and eastern suburbs, but by 8 Apr, Soviet troops had gained several key positions in the southern suburbs, including the main rail station, and moved into the western and northern suburbs. The main attack against the city center was launched on the following day. By 13 Apr, most German forces in Vienna were isolated in various pockets, the exception being the remaining troops of the II SS Panzer Corps, which were able to penetrate the western ring of the encirclement and escape destruction. Without the ability to coordinate between the pockets, German resistance ceased to be effective by the end of 13 Apr 1945.
With Vienna generally secured, the battle-hardened and disciplined Soviet combat forces moved toward Graz in an attempt to destroy the fleeing German forces. When the Soviet occupation forces arrived in the ruined city, however, it suffered rape, looting, and murder for the subsequent weeks.
Last Major Update: Feb 2012
Battle of Vienna Timeline
30 Mar 1945: Soviet troops crossed the Hron and Nitra Rivers in Czechoslovakia and crossed into occupied Austria near Koszeg, Hungary. They were now 50 miles from Vienna, Ostmark, Germany.
1 Apr 1945: Soviet 3rd Ukrainian Front captured Wiener Neustadt, occupied Austria.
2 Apr 1945: Soviet troops captured Wiener Neustadt, Eisenstadt, Neunkirchen, and Gloggnitz in southern Germany, thus now threatening Vienna.
3 Apr 1945: Soviet 2nd Ukrainian Front penetrated the German defensive lines between Wiener Neustadt and Neusiedler Lake, advancing toward Vienna, Austria. Major Carl Szokoll, a leader of the Austrian resistance, met with Soviet authorities about cooperation in Vienna to prevent the city's destruction.
5 Apr 1945: Soviet 3rd Ukrainian Front cut the rail line from Linz to Vienna, occupied Austria.
6 Apr 1945: Soviet 3rd Ukrainian Front began attacking the suburbs of Vienna, occupied Austria.
8 Apr 1945: Soviet troops gained control of the main railway station in Vienna, Ostmark, Germany and surrounded the city.
9 Apr 1945: Soviet troops began assaulting the central region of Vienna, Ostmark, Germany.
10 Apr 1945: German 6.SS-Panzerarmee defended against strong Soviet attacks against Wiener Neustadt and Baden in occupied Austria. Meanwhile, heavy fighting continued in the central districts of Vienna.
11 Apr 1945: The Soviet 4th Guards Army attacked the canals over the Danube River in Vienna, Ostmark, Germany. Nearby, Soviet 20th Guards Rifle Corps and 1st Mechanized Corps attacked the Reichsbrücke Bridge but failed to take it.
After observing the fighting on the front lines in the district of Floridsdorf, Otto Skorzeny concluded that Vienna was to fall within a day.
13 Apr 1945: The Soviet Danube Flotilla landed men of the 80th Guards Rifle Division and 7th Guards Airborne Division on both sides of the Reichsbrücke Bridge in Vienna, Ostmark, Germany, securing it. Later on the same day, Soviet troops secured the Essling district of Vienna while the Danube Flotilla delivered more men near Klosterneuburg, 15 kilometers up the river. By the end of the day, German resistance in Vienna, broken up into several pockets, ceased to be effective.
15 Apr 1945: Soviet 3rd Ukrainian Front advanced toward Graz in occupied Austria.
29 Apr 1945: The Soviet Union set up a provisional government in Vienna, Austria.
9 Jun 1945: 277,380 Soviet and Bulgarian personnel were awarded the medal for the capture of Vienna, Ostmark, Germany (occupied Austria).
Cultural Meanings Of The Eclipse
All cultures, including our own, have intimate connections with the sun, the moon, the stars, and other celestial phenomena observed in the night sky. These connections appear in mythologies and religions, as well as in human settlement and subsistence. They are as old as culture itself, and they change as cultures adapt to new circumstances over time. Cultural astronomer Lee Minnerly, a former archives assistant at the Adler Planetarium's Webster Institute for the History of Astronomy and lecturer at the Newberry Library, joins us to discuss several cultural meanings of the eclipse.
Management of Natural Disasters
Edited By: S. Syngellakis, Wessex Institute, UK
$265.00 (free shipping)
WIT Transactions on State-of-the-art in Science and Engineering
Comprising a selection of articles dedicated to disaster management, this volume focuses on the challenges arising from extreme natural phenomena, with descriptions of methods for assessing their occurrence probability and of measures for mitigating their intensity and detrimental effects.
The first group of articles describes general strategies for risk assessment and mitigation, providing examples in the context of various kinds of natural disasters. The economic impact of mitigation measures, communities’ differing coping capabilities, human attitudes towards relocation and possible links to climate change are among the topics considered. National strategies are outlined in the contexts of Turkey, Brazil and the United Arab Emirates.
The second part of the book is concerned with disasters from specific natural causes, starting with a group of ten articles on floods. The corresponding contributions address flood frequency, the vulnerability and resilience of communities, the response of small and medium enterprises, risk in terms of financial losses, private investment participation in mitigation measures, assessment of design solutions against flood hazard, sleeper dykes as a means of reducing risk, the preparedness of hospitals, the causes of highway flooding and their relative importance, and the impact of floods on poor communities.
The third set of articles relates to earthquake hazards, describing, in particular, an analysis tool providing integrated risk, coping capacity and management output; a method for assessing vulnerability that considers key contributing factors; a technique for urban aftershock management and damage assessment; and neural network modelling to estimate tsunami damage. Finally, a group of three articles addresses issues related to landslides, namely slope management as a means of reducing risk and losses, early warning based on rainfall data, and hazard prediction using favourability function modelling and spatial target mapping software.
Providing a unique global perspective, this volume focuses on recent developments over a wide range of topics that cannot be found in similar, currently available publications in this field. It is a valuable addition to the literature available to researchers and engineers working on risk assessment and the mitigation of natural disaster intensity and consequences. It will appeal to those working in academic and research environments as well as in governmental, professional, national and international organisations.
Complete Details Of Ebor Falls
Ebor Falls is a breathtaking natural attraction located in the New England region of New South Wales, Australia. Renowned for its stunning cascading waterfalls and scenic beauty, Ebor Falls is a popular destination for both locals and tourists seeking to immerse themselves in the awe-inspiring beauty of nature.
Geological Formation and Features:
Ebor Falls is situated within the Guy Fawkes River National Park and is an outstanding example of Australia’s geological diversity. The falls are formed by the Guy Fawkes River, which flows over the Ebor Basalts, a type of volcanic rock that gives the falls their distinctive appearance. These basalts were formed during volcanic activity that occurred millions of years ago, shaping the landscape and creating the foundation for the falls. The falls are characterized by a series of cascades, creating a stunning visual display of water tumbling over the rocky outcrops. The top falls have a sheer drop, plunging into a deep gorge below, while the lower falls create a more gradual descent, resulting in a mesmerizing natural spectacle.
History and Indigenous Connection:
The indigenous Gumbaynggirr and Anaiwan people have inhabited this area for thousands of years, and Ebor Falls holds significant cultural and spiritual importance for them. The falls are seen as a sacred site, deeply connected to their cultural beliefs and stories. European settlers came upon Ebor Falls in the 19th century, and it has since become a popular attraction for both locals and visitors.
Flora and Fauna:
The Ebor Falls area is teeming with diverse flora and fauna, showcasing the rich biodiversity of the region. The surrounding bushland is dominated by eucalyptus forests, which provide a habitat for a wide array of bird species, including kookaburras, rosellas, and various birds of prey. Visitors can also spot native wildlife such as wallabies, kangaroos, and possums. The riverine vegetation along the Guy Fawkes River supports a unique range of plant life, including ferns, mosses, and other water-loving plants. The convergence of different habitats makes Ebor Falls a crucial area for preserving biodiversity and promoting environmental conservation efforts.
Recreational Activities:
Ebor Falls offers visitors a plethora of recreational activities to enhance their experience and enjoyment of the natural surroundings. Some popular activities include:
1. Hiking and Bushwalking: Explore the well-marked trails and embark on a journey through the lush forests surrounding Ebor Falls. The trails offer varying levels of difficulty, catering to both casual strollers and avid hikers.
2. Photography: Ebor Falls provides a fantastic opportunity for photography enthusiasts to capture the mesmerizing beauty of the cascading waterfalls, the verdant landscape, and the diverse wildlife in their natural habitat.
3. Picnicking: Take advantage of the designated picnic areas and enjoy a relaxing meal amidst the scenic beauty. The sound of rushing water and the cool breeze make for an idyllic setting.
4. Fishing: The Guy Fawkes River is a popular spot for fishing enthusiasts. Cast a line and try your luck at catching freshwater fish while soaking in the natural serenity.
Best Time To Visit:
The best time to visit Ebor Falls is during the Australian spring (September to November) and autumn (March to May), when the weather is mild and the landscapes are lush and vibrant. During these months, the waterfalls are at their peak flow, creating a breathtaking sight.
Additionally, the moderate temperatures make it perfect for hiking, picnicking, and enjoying the surrounding natural beauty. It’s advisable to avoid the Australian summer (December to February), due to the potential heat and the risk of bushfires, and the winter months (June to August), when the weather can be quite chilly.
Visitor Information:
1. Location: Ebor Falls is located near the town of Ebor in the New England region of New South Wales, Australia.
2. Access and Transportation: Visitors can access Ebor Falls by car via the Waterfall Way, a scenic route that provides stunning views of the countryside. The falls are easily accessible and have ample parking facilities.
3. Operating Hours: Ebor Falls is open to visitors throughout the year, but it’s advisable to visit during daylight hours for the best experience and safety.
4. Entry Fees: Entry to Ebor Falls is free of charge, making it an affordable and accessible destination for all.
5. Visitor Facilities: Facilities at Ebor Falls include picnic areas with tables, barbecues, toilets, and well-maintained walking trails.
Nearby Attractions:
Ebor Falls is situated in a region rich with natural and cultural attractions. Some nearby places worth exploring include:
1. Dorrigo National Park: Known for its stunning rainforests, waterfalls, and walking trails, Dorrigo National Park is a short drive from Ebor Falls and offers a wealth of natural beauty.
2. Point Lookout: Point Lookout, within New England National Park, provides panoramic views of the surrounding landscapes and is a must-visit for photography enthusiasts.
3. Ebor: This charming town nearby offers a glimpse into rural Australian life and is surrounded by scenic countryside.
Conservation Efforts:
Efforts are underway to preserve and protect the natural beauty of Ebor Falls and its surrounding environment. Various organizations, including the New South Wales National Parks and Wildlife Service, are actively involved in conservation initiatives aimed at preserving the flora, fauna, and geological features of the area. These efforts help maintain the ecological balance and ensure that future generations can continue to enjoy the splendor of Ebor Falls.
Ebor Falls is a captivating natural wonder that showcases the beauty of Australia’s geological formations and biodiversity. Its accessibility, recreational activities, and nearby attractions make it an ideal destination for nature lovers and adventure seekers. Conservation efforts ensure the longevity of this remarkable site, allowing people to appreciate its natural beauty for generations to come.
Juvenile rheumatoid arthritis is a joint condition that affects teens and children who are 15 years of age or younger. It’s sometimes called juvenile idiopathic arthritis. Juvenile rheumatoid arthritis causes the lining of the joints to swell and release fluid inside the joint. Joints become swollen, stiff, painful and warm to the touch. Symptoms can vary greatly from child to child. Your child may complain of joint pain or may limp. His or her joints may be very swollen or feel hot. Your child may have stiffness in the morning or have problems moving. You may notice that he or she avoids normal activities. Your child’s symptoms may come and go, and may be mild or intense. Symptoms can last for a short time or for years. There are three main types of juvenile rheumatoid arthritis. Your child’s symptoms will depend on what type he or she has. In serious cases, juvenile rheumatoid arthritis can stunt growth. Eye swelling can be serious, and lead to vision problems. If your child has signs or symptoms of juvenile rheumatoid arthritis, be sure to take him or her to the doctor. No single test can identify juvenile rheumatoid arthritis, and it can be hard to diagnose. Your child’s doctor will likely ask about your child’s symptoms and medical history. He or she will also examine your child, and may do an X-ray or blood test. Your child’s doctor may also want to get a sample of the fluid in the lining of your child’s joints. In some cases, the doctor will want to follow your child’s symptoms for a few months. The patterns of your child’s symptoms can help identify which type of juvenile rheumatoid arthritis he or she has. Juvenile rheumatoid arthritis and its symptoms, such as pain and long-term joint and eye damage, can be managed with treatment. Your child’s doctor may recommend a combination of treatments that may include medicine to relieve pain, along with physical therapy and exercise. Physical therapy and an exercise plan can help your child maintain range of motion and strength without causing further damage to the joints. Your child’s doctor will probably suggest an over-the-counter nonsteroidal anti-inflammatory drug (NSAID), such as ibuprofen (brand names: Advil, Motrin), to reduce joint swelling. If these medicines do not help your child’s symptoms, your child’s doctor may suggest a combination of NSAIDs with slow-acting anti-inflammatory medicines, which are more powerful and may slow down the progression of the disease. If symptoms and risk of damage are severe, your child may need steroid treatment to reduce inflammation. With all of these medicines, regular testing must be done to watch for side effects. Newer medicines allow doctors to treat the autoimmune problems that cause juvenile rheumatoid arthritis. These medicines help slow your child’s immune system so it doesn’t cause further damage to joints. These may be prescribed if anti-inflammatory drugs alone are not helping. Rarely, children need surgery to help treat juvenile rheumatoid arthritis. Soft tissue surgery to repair joints may be needed if the joints have become badly bent or deformed. Joint replacement surgery may be needed if joints are badly damaged. With proper treatment, though, many children can eventually lead full, normal and even symptom-free lives. It’s actually important for your child to be as active as possible. Regular exercise, including games and sports, can be an important part of managing juvenile rheumatoid arthritis. 
But be sure to check with your doctor before your child starts any new sports or activities.
Source: "Chronic Musculoskeletal Pain in Children: Part II. Rheumatic Causes" by JL Junnila, VW Cartwright (American Family Physician, July 15, 2006, http://www.aafp.org/afp/20060715/293.html)
Written by familydoctor.org editorial staff
Global population increases, surging economic growth in new economies, and an unabated appetite for fossil fuels all are driving huge demand for the world's natural resources. At the same time, climate change is upon us. Add to that instability across the Middle East--the world's oil epicenter--and the growth of extremism and international terrorism. The complexities of today's world are confounding and frightening, but there are still reasons for hope:
- Groundbreaking research on alternatives to fossil fuels
- Breakthroughs in energy efficiency
- Progress in addressing threats to ocean and freshwater resources
- Increased understanding of terrorism, poverty, and extremism--threats to the stability of current energy sources
In the face of such extraordinary circumstances, how do we understand the complex interconnections among these issues? What can we do as individuals and as a nation to address them? And what is the way forward when violence and the threat of terrorism put us on a razor's edge?
Sponsored by the 2007 Roundtable at Stanford and Stanford Reunion Homecoming.
Natural News
Nov 8, 2012
A common misconception about vaccines purports that they are the primary reason why infectious disease rates saw a rapid and steady decline throughout the early-to-mid 20th century. But an honest look at the figures reveals that diseases like polio, typhoid, measles, and tuberculosis were already in significant decline long before vaccines were ever invented, this being the result of improved hygiene and diet.
Data compiled by the National Health Federation (NHF), and relayed by Cynthia A. Janak of RenewAmerica.com, tells the real story about how virtually every major infectious disease of the 20th century was already on its way out long before its associated vaccine came onto the scene. This fact is clearly illustrated in the powerful visual graphs created by NHF, which contain vital statistics from official U.S. public health records.
As you will notice in the first graph, mortality rates from diphtheria, for instance, had already dropped by more than half before a vaccine for the infectious bacterial disease was introduced in 1920. The same can be seen for both whooping cough (pertussis) and measles as well, the vaccines for which emerged in the mid-1940s and 1963, respectively.
Some infectious disease vaccines actually triggered more disease deaths
Another important piece of information regarding vaccines is that some of them appear to have actually triggered a spike in mortality rates following their initial release. Just after the diphtheria vaccine’s release in 1920, for instance, there was actually a spike in deaths from the disease, followed by the continued decline that had already been taking place before the vaccine’s release. A similar spike was seen following the release of the whooping cough vaccine as well.
“Most people believe that victory over the infectious diseases of the last century came with the invention of immunizations. In fact, cholera, typhoid, tetanus, diphtheria and whooping cough, etc., were in decline before vaccines for them became available — the result of better methods of sanitation, sewage disposal, and distribution of food and water,” writes Dr. Andrew Weil in his book Health and Healing.
Typhoid fever is an excellent illustration of this crucial fact, as the disease almost completely disappeared in the 40 years between 1920 and 1960 without ever having had an associated vaccine. Likewise, scarlet fever followed a similar pattern of natural eradication without ever having had an associated vaccine, as you will observe in these compelling graphs.
So the next time somebody tries to guilt-trip you into accepting pro-vaccine dogma on the premise that vaccines have saved the world from infectious disease, simply refer them to the actual historical data, which clearly shows that this is simply not the case. Exposing the truth may not be popular with the vaccine industry and its corporate-political allies, which profit heavily from the “vaccines ended infectious disease” myth, but it could help save you and your loved ones from needless vaccine injuries.
Sources for this article include:
Paper is everywhere and has to satisfy all manner of requirements. For a start, it has to be permeable to air, stable and tear-resistant. It must also optimally absorb pigments and avoid undesirable changes of shape if it is wetted. Depending on the application, these properties must be combined perfectly. The new “CD Laboratory for Fibre Swelling and Paper Performance” at Graz University of Technology will investigate how to optimally combine the key characteristics of paper to achieve the desired result.
The Federal Ministry of Science, Research and Economy of Austria sponsors innovation
“Wood and paper are industries with a long tradition in Austria,” explains Dr. Reinhold Mitterlehner, Austria’s Federal Minister of Science, Research and Economy. “The laboratory will contribute to technological progress in these important segments. This will benefit all participating partners and, in the long term, Austria as a location of industry.”
Optimal fibre swelling
Paper is produced from renewable natural resources and, unlike many other packaging materials, it is biologically degradable. As an everyday product, it seems to hold few secrets. In spite of this, many questions remain unanswered. For instance, it is still unclear how individual paper fibres behave if printer’s ink is applied, or how moist paper can be prevented from bulging or curling up. The team of Ulrich Hirn from the Institute for Paper, Pulp and Fibre Technology at Graz University of Technology is searching for scientifically sound explanations in the recently opened CD laboratory. As Ulrich Hirn, head of the laboratory, points out, “The swelling processes within the paper fibres are particularly relevant in modern high-speed inkjet printers. The less swelling occurs, the quicker the paper will dry. On the other hand, swollen fibres increase the strength of paper. If we are to optimally mix paper properties for each field of application, we need to understand, describe and ideally simulate the absorption of water as well as the mechanical processes down to the individual paper fibre. For the next seven years, this is our mission in the CD laboratory.”
More specifically, the team of the CD laboratory plans to establish precise mechanical models of the swelling processes that occur when paper is wetted or dried, and it will develop modification and improvement concepts as the basis for paper simulation for the development of printing machines. The scientists rely on the support of two big corporate partners: Mondi Uncoated Fine and Kraft Papers, a paper company headquartered in Vienna, and Océ Technologies B.V., a manufacturer of industrial printing machines and a member of the Canon group of companies, based in the Netherlands.
Christian Doppler Laboratories engage in application-oriented, high-level fundamental research. In this context, distinguished scientists co-operate with innovative enterprises. The Christian Doppler Research Association is internationally recognised as a best-practice example for the promotion of this co-operation. Christian Doppler Laboratories are jointly financed by the public sector and the participating private companies. The most important public sponsoring agency is the Federal Ministry of Science, Research and Economy of Austria.
Magnetic-reduction roasting is a process in which non-magnetic ore is converted by the action of reducing gases to a state in which subsequent magnetic separation can achieve the best balance between recovery and grade. The principal characteristics of the rotary kiln method of magnetic-reduction roasting are complete processing and heat exchange within a revolving cylindrical kiln.
The only large-scale rotary kiln plant for magnetic-reduction roasting was designed by Lurgi-Chonle, of Frankfurt, Germany, and built at Watenstedt-Salzgitter prior to World War II. Lurgi reports that these 3.6-meter diameter by 50 meter long kilns were designed for a throughput of 800 tons/day, but actually achieved a throughput of 1150 tons/day. Complete and reliable results from this plant could not be located; however, it is reported that the concentrate was acceptable but iron unit recovery was on the low side. The kilns were not restarted after the war because the low-cost blast furnace gas was no longer available.
What is Expected from the Rotary Kiln?
A prime consideration in evaluating the rotary kiln is whether it can produce a metallurgically acceptable product. What is an acceptable product? Is it necessary to produce a calcine having a ferrous iron (bivalent iron) divided by total iron ratio of 0.33 (a theoretical magnetite)? The answer is no. Because of the differences in characteristics of various ores, testing is required to find the actual range for maximum beneficiation (grade balanced against recovery and cost); normally this information is developed as the other tests are conducted and further work is not needed. Test work has indicated that, for maximum beneficiation, a ferrous-to-total iron ratio between 0.25 and 0.40 will be required.
Present operations and plans call for heating and reduction to be done by producer gas or natural gas, depending on availability and cost of the fuel. The producer could run on coal or coke, perhaps even lignite or peat, although these latter two have yet to be proven. The mineralogy of the ore is extremely important. Porosity, surface area, grain size, iron distribution, and history are important variables for each ore.
The kiln is a relatively simple piece of equipment from the mechanical point of view, although gas-tight sealing at each end requires careful design. Much has been written on rotary kiln design, especially in the cement industry; therefore, the kiln’s mechanical operation will not be discussed extensively. Of the many different conditions that affect roasting, a major one is the size of the feed. With no special preparation of the ore fines, a rotary kiln is able to handle ores with a larger proportion of fines than a shaft furnace or traveling grate. Ore containing up to 10 percent of -325 mesh requires no special preparation before feeding to the kiln. Although the outcome of roasting even finer ores cannot be accurately predicted, there is a strong probability of success in this area.
The kiln drive should be designed so that several speeds are available in the 0.5 to 1.5 rpm range. In addition, an emergency drive motor should be provided that will turn the kiln very slowly. The emergency drive should have a gasoline or other non-electric motor. When minor repair work to the kiln is needed, this drive can keep the kiln turning to prevent warpage, allowing repairs to be made without complete shutdown. A variable speed unit is a must for pilot-scale operations and would be desirable on the commercial unit.
Once the process variables have been established for a particular ore, the kiln plant operation should be relatively simple and require only minor control changes. The adaptability of the kiln for reduction roasting permits minor process changes to be made and evaluated without disrupting production. The nature of the kiln process will balance out variations in feed size, moisture content, and iron content, which have been known to cause trouble in other systems. Such variations may cause a limited drop in grade or recovery but would not disrupt the entire process.
The heat of reaction in oxidizing the magnetite back to hematite falls between 210 and 240 Btu/lb of magnetite. For a 36% Fe ore with % free water, this could amount to nearly 245,000 Btu/ton of feed. (A rough check of this figure is sketched at the end of this article.) Stephen reports that when gamma-hematite is produced in a shaft furnace by controlled oxidation of magnetite to the magnetic form of hematite, this oxidation provides the heat necessary for the process.
Heat exchange equipment is also being designed for use with a hot central gas system. By using the producer gas hot, without quenching, a 5% fuel saving is reported by Hamilton on a Wellman-Galusha producer. It is not anticipated that the tars from coal or coke producer gases would present a problem in the kiln or the magnetic separation sections of the plant.
The balance is for a 36% Fe ore with no moisture or ignition loss. The following assumptions were made: all heat in the calcine at the discharge temperature is lost; all the sensible heat in the exit gases at the exit temperature is lost; the heat of reaction for the reduction step is 35 Btu/lb of magnetite formed; 100% conversion to magnetite is used unless otherwise stated; and a 5% safety factor is sufficient to cover excess oxygen and uncombusted fuel values in the exit gas. All figures are based on one ton of feed.
The rotary kiln is a very simple piece of equipment that has proven to be dependable in the cement industry. Maintenance will not present a serious problem; the major item would be abrasion wear on the steel lifters. It has been estimated that this cost will fall within the three cent per ton maintenance and labor differential quoted in the cost analysis. Brick lining wear will be extremely small at the relatively low temperature required for the process.
Potential Ore Reserves Treatable in the Rotary Kiln
Available data on iron ore reserves in the U.S. are difficult to interpret, especially as regards the portion that would permit successful roasting. The situation is further complicated by the fact that many of the ores potentially treatable in the kiln are not classified as ores and are not included in the estimates given. All non-magnetic ores that have the iron present as the oxide or hydrate must be considered. It is also possible that carbonate-type deposits can be utilized. Any iron that is chemically tied up with other metals will probably not provide a satisfactory feed for the simple magnetic reduction roasting technique considered today. In general, the type of ore that would be treated by magnetic roasting is not listed among measured ore reserves, especially in certain areas of the country where taxes are levied on any ore reported. The largest potential in this country seems to be the non-magnetic taconites of the Mesabi district.
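As a rough back-of-envelope check of the oxidation heat figure quoted above, under stated assumptions (magnetite, Fe3O4, taken as 72.4% iron by mass; all of the contained iron converted to magnetite; the free-water term ignored):

```latex
% Iron per short ton of 36% Fe feed, and the corresponding magnetite mass
\[
m_{\mathrm{Fe}} = 2000\,\mathrm{lb} \times 0.36 = 720\,\mathrm{lb},
\qquad
m_{\mathrm{Fe_3O_4}} \approx \frac{720\,\mathrm{lb}}{0.724} \approx 995\,\mathrm{lb}
\]
% Heat released on re-oxidation at 210--240 Btu per lb of magnetite
\[
Q \approx 995\,\mathrm{lb} \times (210\ \mathrm{to}\ 240)\,\mathrm{Btu/lb}
\approx 209{,}000\ \mathrm{to}\ 239{,}000\ \mathrm{Btu\ per\ ton\ of\ feed}
\]
```

This lands in the same range as the "nearly 245,000 Btu/ton" cited above; the small difference presumably reflects the free-water term and rounding in the original.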
APA Format Research Papers

One of the vital skills every researcher should possess is the ability to communicate research results and analysis to the public effectively. American Psychological Association style gives researchers an opportunity to structure a research paper well and makes it more readable to the public. The American Psychological Association prescribes a format called APA for research paper writing. This is one of the two regularly used formats, the other being MLA format. Before you start writing your research paper, keep in mind that people generally read research papers selectively. Some will read only the summary of the paper. Some readers will be interested in the research methods used in your work, while others may read only specific points mentioned in your research. To this end, you should start each section on a new page and pay special attention to the structure of your research paper. APA format will help you organize your paper well. In an APA style research paper you should list all your sources alphabetically on a separate page named References. APA style has been the most popular format for social science research papers for many years. We have accumulated tips and instructions on how to write a research paper in APA format. APA format requires 12-point Times New Roman and many other features to make your research paper readable.

- Free Sample of Research Paper in APA Style
- APA Style Title Page for Research Paper
- APA Format Research Paper Outline
- APA Format Research Paper Template
- How Can We Help
- Research Paper Courtesy: Avoid Bias in Your APA Paper
- Help with Research Paper Writing

Free Sample of Research Paper in APA Style

APA Research Paper Sample

Writing a research paper in APA style is quite a task, especially when students have the additional burden of searching for sources on their topics. It is not difficult, but it can be confusing and complicated. Students could rather take the help of writing companies. An APA research paper sample would be as follows:
• Title page: title in the centre; author name with university affiliation.
• Abstract: describes the introduction, method, results and discussion in about 150 words; should be a single paragraph.
• Introduction: briefly introduce the topic - what you have done, why it is important, sources of research and the purpose of the research.
• Method: has three or four subsections - a) participants, b) design of research, c) measures, d) procedure.
• Discussion or text: interpret and explain the results; discuss the limitations and implications of the study.
• References: sources of your study to be mentioned.
Please note that APA style requires double spacing on all pages throughout, with margins of 1 inch on all four sides. You can contact ProfEssays for a free APA research paper sample, giving your exact specifications and requirements. An APA format research paper example would require:
- Margins of 1 inch on all four sides of the paper.
- The essay should be written on standard-sized paper.
- Every page will have a header citing the title of the essay/paper, placed on the left side of the page.
- The abstract page will also have the header on the top left-hand side. Then the word Abstract comes as a heading in the centre of the first line, followed by the abstract of the essay.
- The conclusion and reference pages will first have the title and then will be written according to the prescribed format.
- In the conclusion you are expected to summarize the main points of your essay.
- Citations include references with names of contributors and last-edited dates.

A title page usually centres the title on the page and gives a brief summary of the topic. There are a few guidelines for a research paper title page done in APA style:
• 1-inch margin on all 4 sides
• Double spacing in the entire text
• Page header on the right with the first few words of the title
• Pagination starting from the title page as page no. 1
• Title in the centre
• Author name and university affiliation two spaces below the title
• Due date of the research

We can write the research paper title page in APA style for you while writing the research paper as per your requirements. ProfEssays is very particular about customer satisfaction and does not mind any number of revisions till such time as the client is convinced. All this at no extra charge. An APA style research paper title page is as important as the rest of the research paper. In case you are unable to do the research paper for any reason, you can count on ProfEssays to write it for you in accordance with your needs. Their 24/7 customer service team will keep you updated on all your queries. You will be glad you filled out the order form. The requisites of an APA format title page for a research paper are:
- Double spacing in the essay, with 1-inch margins on all sides on standard 8 ½ inch x 11 inch paper.
- The font should be Times New Roman or any similar font, 10-12 pt.
- Every page will have the title in the left-hand top corner.
- Titles, as suggested by APA, should be around 12 words and not more.
- The page numbers have to be in the top right corner of every page.
- The title page will have the title of the essay in the centre of the page, in one or two lines, followed by the author's name.
- The author's name should not be prefixed or suffixed with any titles or degrees.

ProfEssays suggests the following APA format title page for a research paper on "yoga is the answer to all physical and psychological ailments":

Yoga is the answer to all physical and psychological ailments

This is just an example of the title page as suggested by ProfEssays. This is the format used, on standard-sized paper, i.e. 8 ½ x 11 inches. Being in this field since 2003 gives us the advantage of experience.

APA Format Research Paper Outline

The APA format research paper outline is no different from any other outline. The main point is that there are certain guidelines to be followed for writing the outline according to the APA format. These outlines should consist of headings and subheadings set in such a way that the arrangement of the whole paper is evident. APA style research paper outline writing helps students perfect their writing skills. APA format is popular among students due to its simple guidelines and approach. The outline brings out the drawbacks in the presentation style and gives an introduction to the research paper. The outline is a brief synopsis of the main research paper. Students feel the pressure of writing in APA format due to the usage of language, in addition to the searches to be made for the research.
A research paper outline in APA style should be as follows:
• The main idea should be stated briefly.
• Supporting facts to justify the main idea.
• The second main idea should be stated.
• Supporting facts to justify it.
The same procedure should be continued when opposing facts are given to counter the ideas.

ProfEssays has over 500 qualified writers. They can write not only APA style research paper outlines but also term papers, essays, dissertations, resumes, theses and reports. Once your order is placed you can be assured of a brilliant piece of work. ProfEssays says an APA format research paper outline should have:
- Headings and subheadings on topics that are related to each other.
- Subheadings that are subsidiary to the main heading.
- An even structure in the headings and subheadings, following the same formats and grammar.

APA Format Research Paper Template

A template is a sort of design which is already formatted in your document so that you can begin writing upon opening it. For example, if you are writing a business letter you can use a template which has space assigned for your address, your client's address and other such requisites. Similarly, an APA format research paper template should have the following:
- The heading on the top of every page, in the left-hand corner.
- Page numbers on the right-hand side at the top of the page.
- A title page with the heading in the centre of the page, and the author's name and university on the next two lines respectively.
- The author's note can be written after these.
- The next page will have the standard header at the top left of the page, the word Abstract on the next line, and the abstract itself beneath it.
- The next few pages are the main body pages, and all these pages will also have the header at the top.
- Last is the reference page, which will have the list of references used, along with the date last edited.

The majority of universities and educational establishments all over the USA, and in most other parts of the world, have adopted the research paper template. Papers may be rejected if they do not apply the APA format. The purpose was to standardize the format. The APA paper format is as follows:
• Short title of the paper [less than 50 characters] at the top as the running head.
• The title in the centre.
• Author's name.
• Author's affiliation.
• Page no. 1.
• Short title of the paper [less than 50 characters] at the top as the running head.
• Abstract of 1 paragraph, less than 120 words; it is the summary of the important elements of the paper.
• Page no. 2.
Similarly for the rest of the research topic; details are in the APA manual. ProfEssays will help you out with writing in a template and format. It's a simple task for our writers. We will suggest a template according to the APA format, and you can save it and use it for all your research papers.

How Can We Help

ProfEssays will help you with the writing of your research paper and give you APA format research paper examples for free. Our expert writers can write in any format, any style, on any topic and on any subject. It is our privilege that we can serve you. ProfEssays is a custom essay writing company formed in 2003, and it has grown in stature in a span of eight years. We have more than 500 expert writers on our team who are qualified from the best of universities.
When you have placed your order with them, giving your exact requirements, you are assured of:
• No plagiarism
• No delay in delivery
• Affordable charges
• Revisions allowed
• Round-the-clock customer helpline

Order your paper now and get a 15% discount on your first order! Use discount code FPE15OFF on our order page www.professays.com/order-now/ You are always on the safe side with ProfEssays.com!

Research Paper Courtesy: Avoid Bias in Your APA Paper

Here are some courtesy tips on avoiding bias in a research paper in APA style from ProfEssays.com:
- Select labels with either very little or slightly flattering connotative meaning for establishing the identity of things included in the APA style paper. For instance, instead of using "black man" you might consider the less emphatic "man with a very dark complexion"; instead of using "Mohammedans", settle for "Arabs"; and for "papist" you could use "Catholic."
- Do not show preference or connote pre-eminence for a certain sex by using gendered pronouns to refer to people in general. Instead of saying "all men are equal," you might write "all human beings are equal." Another way to avoid choosing which gendered pronoun to use is to avoid pronouns altogether and substitute a non-gendered noun.
- Avoid descriptive phrases that connote inferiority of some kind. Do not say "Drunken people can't walk straight" but rather "People who have drunk a lot can't walk straight."

Help with Research Paper Writing

Writing can be thorny ground to tread for some people. So if you are not absolutely sure of your mastery of the language, scribble down your thoughts on the research paper topic and come over to ProfEssays.com. Their writers are all masters and doctors in their respective fields. They are competent in both theory and practice, as well as in writing any type of essay. You can trust them to produce a masterful and original research paper in APA style for you, in as little as 8 hours for rush work. And you can continue revising until it matches your preferences completely. The custom essay paper you commission is copyrighted to you upon delivery and will not be re-sold or re-used anywhere else. All these excellent services you can have at an affordable price. Best of all, you can be confident that your personal data will be kept in the strictest confidence. ProfEssays is the expert in APA style outline writing as well as in essay and research paper writing. There are no two ways about it. All the formats, styles, grammar, etc. will be adhered to by our writers. When you buy a research paper from ProfEssays you get the following services:
- On-time delivery of your paper
- In case of emergency, delivery within 8 hours
- Your paper will be written from scratch
- We assure you of quality and originality
- Our prices are most reasonable
- We are available 24/7 for your assistance

Place an order with us. Note: "ProfEssays.com is an outstanding custom writing company. We have over 500 expert writers with PhD and Masters level educations who are all ready to fulfill your writing needs, no matter what the academic level or research topic. Just imagine: you place the order before you go to sleep, and in the morning an excellent, 100% unique essay or term paper, written in strict accordance with your instructions by a professional writer, is already in your email box! We understand the pressure students are under to achieve high academic goals, and we are ready to take some of it off you because we love writing.
By choosing us as your partner, you achieve more academically and gain valuable time for your other interests. Place your order now!"
Human hands have stretched far into the cosmos during our half-century of exploring the final frontier. Men and women have circled hundreds of miles above the protective gaseous veil of Earth's atmosphere, and a handful of men have ventured further and left their footprints on the flat plains and undulating hills of our closest celestial neighbour, the Moon. Many machines crafted by human hands have been sent into the most inaccessible reaches of the Solar System…and several of those were delivered, personally, by humans. In June 2009, one such machine fell silent after two decades exploring the poles of the Sun. The joint US-European Ulysses mission, now defunct, continues to orbit our parent star, completing a full circuit every six years or so, and its legacy stands testament to the ingenuity of the scientists, engineers, visionaries and thinkers who laboured to put it there. From its launch in October 1990 to the end of its life, Ulysses pushed the boundaries of knowledge about our Sun and fundamentally altered our understanding of how it works.

The Sun is, quite literally, the reason that life exists on Earth. Most scientists accept that the Solar System formed from a vast cloud of gas and dust, around 4.5 billion years ago, with immense temperatures and pressures serving to form a proto-star and an enormous disk which eventually coagulated into the primordial versions of the planetary attendants that exist today. On the third of those attendants, the largest of the innermost, 'rocky' planets, life eventually arose; life which would someday build and despatch Ulysses to learn more about the star which had given it life.

Ulysses was the first spacecraft to venture outside the 'ecliptic plane' – the plane of Earth's orbit – to directly explore the Sun's northern and southern polar regions. In doing so, it enabled physicists to study the star in three dimensions and provide an accurate assessment of the total solar environment, across a full range of heliographic latitudes. Since the ecliptic plane differs from the solar equatorial plane by only 7.25 degrees, it was previously only possible to observe the Sun from low solar latitudes. To explore from higher inclinations demanded a prohibitively large launch vehicle, but by utilising the enormous gravity of the planet Jupiter a significant 'plane change' could be effected, enabling travel outside of the ecliptic. More than four decades ago, consideration was given to launching a Pioneer spacecraft in 1974 for precisely this purpose, but it failed to gain approval. A seed of interest had been sown, however, and ultimately bore fruit as the International Solar Polar Mission (ISPM). In its original incarnation, this was a truly 'international' endeavour, employing two separate spacecraft – one built by NASA, the other by the European Space Agency (ESA) – to travel towards Jupiter. One would hurtle 'beneath' the giant planet's south pole, using its gravity to direct it northwards, out of the ecliptic, towards northern solar latitudes. Meanwhile, the other craft would do the reverse, travelling 'above' Jupiter's north pole to bend its trajectory southwards to explore southern solar latitudes. The result would be a pair of in-situ instruments to provide simultaneous measurements of both solar hemispheres for mapping, measurements of magnetic fields and observations of the anomalous 'solar wind', a stream of charged particles known to emanate from the Sun at hundreds of thousands of miles per hour.
The ISPM was formally approved in 1976, its scientific instruments were agreed the following year and work on the spacecraft began in October 1978. Both would be launched on a single Space Shuttle mission in February 1983 and were to be boosted towards Jupiter by a solid-fuelled rocket known as the Inertial Upper Stage (IUS). However, the limited capabilities of this rocket were already raising eyebrows in scientific and political circles, and there was doubt that it was powerful enough to deliver the twins as far as Jupiter. As a result, in April 1980 the ISPM was split into two halves and rescheduled for separate launches in 1985. The IUS woes continued, however, and the infant Shuttle drew voraciously on NASA's funds. In February 1981, the space agency slowed the development of its ISPM craft and the IUS was dropped in favour of a more powerful, liquid-fuelled booster, built by General Dynamics. It was called the Centaur-G Prime, and its implementation pushed the launch back still further to May 1986. It also opened an entirely new can of worms.

The Centaur carried an enormous load of cryogenic hydrogen and oxygen – totalling more than 36,000 pounds – and came to be nicknamed a 'balloon tank', since it required total pressurisation in order to become fully rigid. In fact, if it was not fully pressurised, a single push from a finger could literally flex its metal walls. Right from the start, the Centaur was viewed warily by NASA's safety officials, whose rule of thumb dictated that no single failure should ever be capable of endangering the Shuttle or her crew. Disturbingly, the Centaur's pressure regulation hardware lacked a backup facility and, worse, a failure of its internal bulkhead had the potential to rupture the walls of both of its propellant tanks. Moreover, the 'sloshing' of these propellants risked a whole range of controllability problems for the Shuttle itself…but, balanced against these enormous risks, was the promise that the Centaur was powerful enough to boost the ISPM and other deep space probes, including the Galileo mission to Jupiter. In the end, it was not enough and the Centaur was removed from consideration in favour of the less powerful, but safer, IUS.

Potential disaster hit the ISPM in September 1981, when NASA was forced by the House Appropriations Committee to terminate the production of its spacecraft. However, ESA pressed on with its own craft, which, at 815 pounds, was small and light enough to be reassigned back onto an IUS in January 1982. (In fact, it was so small, said astronaut Dick Richards, that it could quite easily be fitted onto the back of a pickup truck.) By this time, the absence of the Centaur was creating massive financial consequences: Galileo was a hugely important voyage, and launching it on an IUS meant that its journey time to Jupiter would double, its mission duration would effectively be halved and its overall scientific harvest would be seriously compromised. Within a matter of months, plans changed yet again. A groundswell of support for the Centaur, spearheaded by New Mexico Senator and former Moonwalker Harrison 'Jack' Schmitt, led to its reinstatement. In spite of the cost of changing boosters again and the lingering safety fears, the reduced journey times and increased scientific bounty which the Centaur could offer Galileo and the ISPM were deemed worthy of the risk. In July 1982, President Ronald Reagan himself approved the change.
The ISPM, therefore, reverted back to the Centaur and, since both it and Galileo needed to travel to Jupiter, both were scheduled for two separate Shuttle missions during the same 'launch window' in May 1986. In the meantime, by 1983, the Europeans had completed the fabrication of their spacecraft – a small, boxy machine, with an attached dish antenna and a NASA-provided Radioisotope Thermoelectric Generator power unit. It would be spin-stabilised at five revolutions per minute and its attitude would be managed by four pairs of hydrazine thrusters. Ten scientific instruments were manifested, half of them provided by ESA and half by NASA, to explore radio wave emissions from solar plasmas, together with measurements of magnetic fluxes, observations of electrons, ions, neutral gas, dust and cosmic rays and analysis of the solar wind.

As the ISPM changed, so too did its name. One leading contender was 'Odysseus', to honour the mythical Greek hero of the Trojan War, whose ten-year journey back home to reclaim his kingdom of Ithaca and his suitor-pestered wife, Penelope, has made his name a synonym for a voyage with many changes of fortune. The name was entirely fitting. In the same way that the ISPM would follow an indirect path to explore an uncharted destination, so the mythical Odysseus had taken many unexpected twists and turns before reaching the end of his journey. At length, the Latinised version of Odysseus' name – Ulysses – was picked instead. It had been proposed by an Italian physicist, Bruno Bertotti of the University of Pavia, whose gravitational wave experiment was aboard the mission. In Bertotti's mind, the name drew upon not only the rich cultural heritage of Troy, but also offered a nod toward other more recent writings: Alfred, Lord Tennyson's epic poem, James Joyce's novel and Dante Alighieri's Inferno. In the latter, Ulysses famously guided his crew westwards into the unexplored waters beyond the Strait of Gibraltar. His terrified men mutinied, but Ulysses calmed them and encouraged them "to follow after knowledge and excellence".

As with all voyages of exploration, Ulysses and its Centaur booster required massive preparation on the ground. Challenger, the vehicle assigned to deliver the spacecraft into low-Earth orbit, underwent extensive modifications for the purpose. Extra plumbing and emergency dumping vents were installed in the Shuttle to load and drain the Centaur's propellants, control panels were fitted in the flight deck, an S-band telemetry antenna was added and a huge Centaur Integrated Support Structure in the payload bay served to position the 'stack' for deployment. According to the crew activity plan for Challenger's mission, released by NASA in mid-January 1986, the Shuttle and its crew of four – astronauts Rick Hauck, Roy Bridges, Mike Lounge and Dave Hilmers – would launch at 4:10 pm EST on 15 May. Assuming an on-time liftoff, the Shuttle carried provisions for a four-day flight, and the sheer weight of the Centaur was such that a number of crew provisions, including the galley, had to be removed. Hauck's crew would have entered a relatively low orbit of just 105 miles and would have had just nine hours to get Ulysses out of the payload bay. The Centaur was required to periodically dump its boiled-off gaseous hydrogen to keep tank pressures within their mandated limits and, beyond nine hours, it would have 'bled' so much propellant that the remainder would have been insufficient to perform the engine burn for Jupiter.
After deployment, the Centaur's twin engines would have ignited to carry Ulysses towards a rendezvous with the giant planet in July 1987 and from thence an encounter with the Sun's polar regions in 1989-1991. All of these plans ground brutally to a halt on 28 January 1986, when Challenger exploded during liftoff, killing her entire crew. The resulting investigation uncovered many safety flaws in the reusable Shuttle, several of which related directly to the Centaur, and in June the tempestuous booster was formally cancelled by NASA Administrator Jim Fletcher. With both Ulysses and Galileo now forced to wait out a lengthy delay before the Shuttle flew again, other options to deliver them to Jupiter had to be worked out. At this point, the IUS returned to the fore, and in April 1987 a firm launch target of October 1990 was established for Ulysses. It would be a narrow 'window', just two weeks long, and achieving it would be critical if the spacecraft was to properly rendezvous with Jupiter in February 1992 and go on to explore the solar poles in 1994-95.

Veteran astronaut Dick Richards commanded the Ulysses deployment flight. He had previously flown as pilot of STS-28 in August 1989. Six weeks after his return, at the end of September, he was named to lead the Ulysses flight, STS-41. It was an incredibly rapid turnaround and a 'plum', of sorts, for Richards had waited an unenviable nine years for his first flight…longer than any of his contemporaries in his astronaut class. In fact, when he did his debriefing after STS-28, he described himself as "the plank-holder", for having waited the longest time for a mission, and expressed his fervent hope that no one else would be subjected to the same. "I guess management felt like they owed it to me to make it up to me," Richards told the NASA oral historian, "and so they had turned me around and got me ready for my first command on STS-41, right away." He was joined by pilot Bob Cabana – today's Director of the Kennedy Space Center – and mission specialists Bruce Melnick, Bill Shepherd and Tom Akers. Richards saw it as his duty to ensure that his crew was ready; "I had the luxury of nine years getting ready to go fly," he said, but "they didn't have that much time". To make them as confident with the Shuttle as possible, he decided to put together a crash course in systems knowledge – they ended up giving each other weekly lectures from their perspective – and although Richards admitted that the move was both popular and unpopular, the end justified the means. "I spent a lot of time worrying about their systems knowledge," he said, "and ship basics, because of the lack of their shelf life. By the time we got done on that crew, we knew the vehicle backwards and forwards." It helped, of course, that all five men were incredibly smart, focused and self-motivated individuals and that all were active military personnel. They knew the chain of command, they understood the importance of duty and single-minded devotion to accomplishing The Mission, and performed admirably.

On STS-41, Akers was primarily responsible for overseeing the deployment of Ulysses, which was mounted, uniquely, atop an IUS and a PAM-S booster. The latter was a special variant of McDonnell Douglas' Payload Assist Module, whose primary objective was to deliver the spacecraft out of Earth orbit and onto a trajectory towards Jupiter. Equipped with a Star-48B solid rocket motor, the PAM-S was designed to be spin-stabilised after separation from the final stage of the IUS.
After Discovery reached orbit on 6 October 1990, the payload bay doors were opened, allowing unfiltered sunlight to flood across Ulysses for the first time, and Tom Akers took the lead in preparing the solar explorer for its voyage. Deployment was scheduled for six hours and one minute into the mission. Watching his movements, Dick Richards could not hide his admiration. "There was a time-critical bunch of steps," he recalled, none more so than the purging of coolant from Ulysses' plutonium-powered Radioisotope Thermoelectric Generator (RTG). "Tom had to get down on this switch panel, which was, for some reason, located in this obscure corner of the flight deck." Step by critical step, the instant of deployment drew closer: forward payload restraints were released, the aft frame of the IUS' support structure tilted the stack to an angle of 29 degrees, Richards and Cabana manoeuvred to the correct attitude and electrical power was switched from the orbiter to the IUS. Finally, the three-minute purge of RTG coolant occurred, minutes before deployment. As Akers worked, his crewmates anxiously eyed the clock, keenly aware that a few minutes hence Ulysses would have to be gone. It brought back memories from a couple of their pre-flight simulations, in which Akers had been momentarily late with switch throws, but Richards trusted him implicitly to complete the job and for a few minutes left him alone. At length, however, the anxiety was pressing. "Tom?" he asked. "How you doing?" Akers looked up from his work and gave a broad grin. "Never had so much time!" The tension in Discovery's cabin was thus broken and, precisely on time, the ordnance to separate the IUS umbilical cables was activated and the stack was tilted to its deployment position of 58 degrees above the payload bay. Seemingly in slow motion, the spacecraft drifted smoothly and serenely away. Nineteen minutes later, Richards and Cabana fired the orbiter's thrusters to manoeuvre to a safe distance in anticipation of the firing of the IUS' first stage engine. That occurred three-quarters of an hour after deployment, unseen by the crew because Discovery had been oriented with her belly facing the direction of Ulysses to protect the orbiter's windows from the exhaust plume. The first stage burned out, as planned, after a 150-second firing and was jettisoned; whereupon the second stage ignited for almost two minutes, before separating itself. Next came the turn of the PAM-S. Firstly, it 'spun up' Ulysses to 70 revolutions per minute for stability, then executed an 88-second burn to provide the final velocity increment and set the spacecraft on its way to Jupiter. After the burnout of the PAM-S, the spacecraft was 'yo-yo despun' – with weights deployed at the end of cables – to less than eight revolutions per minute. By now, departing Earth's gravitational well at an escape velocity of 34,510 mph, Ulysses became the fastest man-made object yet to leave the vicinity of the Home Planet; a record which would remain unbroken until the New Horizons spacecraft was boosted towards Pluto in January 2006. For the astronauts, their involvement with Ulysses was now effectively at an end, and responsibility passed to an army of flight controllers and trajectory specialists who would guide it towards a rendezvous with the Solar System's largest planet in February 1992 and, later, for its exploration of the Sun.
Although the role of the STS-41 crew had been exclusively to launch Ulysses, they had undertaken several trips to Europe, and particularly Holland and Germany, where much of the contracting and project management was undertaken. On one occasion, Richards recalled, it gave him a slightly unsettling introduction to European culture. At the end of each afternoon, at 4:30 pm, the German team would reach the end of their day, open up their cooler and pull out several kegs of beer. "We'd all sit around there, next to Ulysses," he recalled, "toasting Ulysses and having beer. We didn't do that here in the United States, so that was different. I kinda liked it." STS-41 ended, as planned, after just four days – a woefully short period of time, according to Richards, who felt the monumental effort of simply getting there should have been capitalised upon with more time aloft and more experiments – and the orbiter returned to a desert landing at Edwards Air Force Base in California on 10 October. By the time Discovery landed, Ulysses had already traversed almost a million miles in its journey toward Jupiter. The spacecraft reached the giant planet on 8 February 1992, utilising its gravitational influence to increase its inclination to the ecliptic plane by 80.2 degrees and bend its trajectory southwards to encounter the solar south pole in June-October 1994. From then until the end of operations, its mission would profoundly alter our knowledge of the Sun, demonstrating the dynamic nature of solar magnetism and highlighting the strength of the solar wind. Its northward journey carried it for the first time over the solar north pole in June-September 1995. From its unique vantage point, Ulysses was also employed to observe Jupiter and Comet Hale-Bopp from afar, as well as examining highly energetic gamma ray bursts and interstellar dust from beyond the Solar System. For a mission which came so close to cancellation, Ulysses transformed itself into one of the greatest success stories and one of the grandest adventures of scientific exploration ever undertaken in the annals of human history.

This is part of a series of History articles, which will appear each weekend, barring any major news stories. Next week's articles will focus upon two important Shuttle science flights performed 20 years ago, during International Space Year.
This holiday season, we have witnessed a proliferation of beacons in a wide variety of settings. Retailers, shopping malls, museums, airports, hospitals and sports complexes are experimenting with, or deploying, beacons for a variety of purposes. Beacons positioned near an airport security checkpoint, for example, might trigger an airline app to display a boarding pass. A beacon next to a painting in a museum might signal the museum's app to show information about the artist. Retail-store beacons can help users locate products or indicate on-sale items. Library beacons can be used to remind users about overdue books. Advertisers are using beacons to help tailor a location-based ad or to verify that a consumer exposed to an ad has arrived at a destination. It's critical that app developers and publishers understand how beacons and similar technology are used, especially in terms of data collection from consumers. This will enable them to develop the privacy practices needed to ensure both the opportunity to innovate around this technology and the responsible use of this data. In this article, we seek to explain beacons and how they work. In future articles we will discuss some of the emerging best practices.

What are Beacons?

Beacons consist of a chip and other electronic components (e.g., an antenna) on a small circuit board. A beacon is essentially a radio transmitter that sends out a one-way signal to devices equipped to receive it. There are numerous beacon makers around the world. Beacons come in various sizes but are generally small and inexpensive. Prices vary, but they can be purchased for less than $30 per beacon. Beacons transmit a low-power signal that can be picked up by nearby Bluetooth-enabled mobile devices, including smartphones. Beacons themselves don't collect data. They broadcast short-range signals that can be detected by apps on mobile devices in close proximity to a beacon. Beacon signals won't be received unless users have installed apps that are associated with those beacons (e.g., an airline app, a museum app, a retail store app, a library app). With a corresponding app installed, beacons can help deliver an improved indoor experience. The NBA's Golden State Warriors, for example, used its team app and beacons to inform fans at the game about the availability of better seats. Barclays has launched a beacon technology system in a UK branch to help disabled customers with their accessibility needs. The Barclays service, which requires customers to download an app and opt in to the services, notifies staff if a customer with disabilities enters the branch. This way, the staff can provide quicker and more tailored services to any customers with disability needs, and the customer does not have to tell the staff about his or her individual needs every time he or she visits the branch. For this technology to succeed, it will be important for apps to help explain how and why data is collected by beacons, and to ensure that the uses are compatible with the reasonable expectations of consumers. With TUNE and other supporters, the Future of Privacy Forum provides a range of useful privacy resources for app developers at applicationprivacy.org. If responsible practices are maintained, beacons will be a technology that delivers on providing value to consumers, support to advertisers, and privacy controls in a way that will ensure holiday cheer.
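To make the one-way nature of the broadcast concrete, here is a sketch of the widely documented iBeacon-style payload: a 16-byte proximity UUID plus 16-bit major/minor fields and a signed byte of calibrated transmit power. The UUID and values below are made up for illustration; a real deployment would use its own registered identifiers.

```python
import struct
import uuid

# A beacon's broadcast is just a short identifier frame. In the common
# iBeacon layout, the manufacturer-specific payload carries a 16-byte
# proximity UUID, two 16-bit fields (major/minor), and one signed byte
# of calibrated TX power at 1 m. All values here are made up.
proximity_uuid = uuid.UUID("12345678-1234-5678-1234-567812345678")
major, minor, measured_power = 1, 42, -59

frame = struct.pack(">16sHHb", proximity_uuid.bytes, major, minor,
                    measured_power)

# An app that has registered for this UUID would unpack the same frame:
uid_bytes, maj, mnr, power = struct.unpack(">16sHHb", frame)
print(uuid.UUID(bytes=uid_bytes), maj, mnr, power)
# Note the beacon never learns who heard it; any data collection happens
# in the listening app, which is the article's point about privacy.
```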
The Future of Privacy Forum (FPF) has a location privacy working group consisting of leading retail and technology companies that are discussing and formulating privacy practices relating to beacons. Working together with the Local Search Association, FPF released Understanding Beacons, a basic guide aimed at media and policymakers. For more information about FPF and its efforts, please contact [email protected].
Several authors have attempted to combine the advantages of two or more existing projections. Values can be mathematically averaged, or different pieces of the map may be separately projected along lines of similar scale. This latter approach is made convenient in pseudocylindrical projections by their straight parallels with constant scale; different map "slices" can therefore be projected separately, then fused, i.e., "stitched" together, possibly after rescaling. Not counting the obsolete trapezoidal, the first known flat-polar pseudocylindrical projection was published by Adam Nell in 1890; actually it is the limiting case of Nell's pseudoconic projection for the ellipsoid, with the Equator as a reference parallel. Its x-coordinates are similar to an average of a cylindrical projection with the sinusoidal, but using an auxiliary angle; the whole map is equal-area.

[Figure: Goode's homolosine map, in the almost never used uninterrupted form. In contrast, the interrupted version was very popular.]

John P. Goode combined the sinusoidal and Mollweide (homolographic) projections in his hybrid homolosine (homolographic + sinusoidal) projection of 1923-25: three horizontal stripes are joined at the two parallels with the same length in the two base projections - approximately 40°44'12"N and 40°44'12"S (a numerical sketch of this condition follows at the end of this section). Latitudes higher than the boundary parallels are represented using Mollweide's projection, and the remaining area in the central stripe by the sinusoidal. The meridians are broken at the joint, and the result is not appreciably better than either original method used alone; however, horizontal scale is preserved in nearly 65% of the map and the polar caps are reasonably legible, while the sinusoidal's constant meridian scale is preserved in the tropical band. This projection, especially designed for interruption, was for long quite popular in atlases.

[Figure: Uninterrupted Boggs eumorphic map]

Created by S.W. Boggs, the eumorphic (Greek for "well-shaped") projection of 1929 is another hybrid. However, instead of discretely joining separate bands, it defines its y-coordinates as the arithmetic average of the corresponding sinusoidal and Mollweide coordinates. The x-coordinates are calculated for an equal-area map, usually presented in interrupted form.

[Figure: Sinu-Mollweide projection in its plain oblique form. The interrupted version is more interesting.]

Another fused pseudocylindrical design, Allen K. Philbrick's Sinu-Mollweide projection (1953), shares the base projections and fusion latitude of Goode's homolosine; however, it is normally presented in an oblique, often interrupted, aspect.

[Figure: Winkel's first projection with 50°27'35"N and S as standard parallels]

The first projection published by O. Winkel in 1921 is a generalization of Eckert's V using the equidistant cylindrical projection with any two opposite parallels standard, not necessarily the Equator (therefore only the horizontal scale is changed from the special case). Winkel preferred 50°27'35" N and S, which makes the total area proportional to the equatorial circumference.

[Figure: Winkel II map]

Also published in 1921, this projection averages the equidistant cylindrical and a 2:1 elliptical projection similar to Mollweide's, but with equally-spaced parallels and therefore not equal-area (some sources maintain Mollweide's itself is the base projection). The resulting map, also neither conformal nor equal-area, is constructed much like Winkel's first projection.
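Returning to the fusion latitude shared by Goode's homolosine and the Sinu-Mollweide: it can be recomputed by equating parallel lengths. On the sinusoidal, a parallel's length is proportional to cos φ; on Mollweide, to (2√2/π)·cos θ, where the auxiliary angle θ satisfies 2θ + sin 2θ = π·sin φ. Below is a self-contained numerical sketch in plain Python; the helper names are mine, not standard terminology.

```python
import math

def mollweide_theta(phi, iters=50):
    """Auxiliary angle theta solving 2*theta + sin(2*theta) = pi*sin(phi)."""
    target = math.pi * math.sin(phi)
    theta = phi
    for _ in range(iters):  # Newton iteration
        f = 2 * theta + math.sin(2 * theta) - target
        theta -= f / (2 + 2 * math.cos(2 * theta))
    return theta

def parallel_length_gap(phi):
    """Sinusoidal parallel length minus Mollweide's, per unit longitude."""
    theta = mollweide_theta(phi)
    return math.cos(phi) - (2 * math.sqrt(2) / math.pi) * math.cos(theta)

# Bisection on [0 deg, 60 deg]: the gap is positive at the Equator and
# negative at 60 degrees, so the sign change marks the fusion latitude.
lo, hi = 0.0, math.radians(60)
for _ in range(60):
    mid = (lo + hi) / 2
    if parallel_length_gap(lo) * parallel_length_gap(mid) <= 0:
        hi = mid
    else:
        lo = mid
deg = math.degrees((lo + hi) / 2)
m, s = divmod((deg % 1) * 3600, 60)
print(f"fusion parallel ~ {int(deg)} deg {int(m)}' {s:.0f}\"")  # ~40 44' 12"
```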
[Figures: World maps in the HEALPix projection. With H = 4, facets are square but the overall aspect ratio differs from H = 2; with H = 6, the map may be rescaled to show facets as equilateral triangles, each of which may be subdivided into four triangles.]

HEALPix, Hierarchical Equal Area and isoLatitude Pixelisation (Górski and others, 1999), is a collection of standards and resources for efficient storage and processing of large sets of data for astronomical and cosmological research. At discrete points ("pixels") covering a conceptual celestial sphere surrounding the Earth, satellite probes detect incoming radiation, like gamma rays and the cosmic microwave background; the measured values are saved on a raster grid for further analysis. HEALPix defines a family of hybrid interrupted pseudocylindrical projections mapping from the celestial sphere to the plane. A HEALPix map comprises H lobes; in each lobe in the normal aspect, the equatorial band is mapped to a square using Lambert's equal-area cylindrical projection, and the polar areas are mapped to two right isosceles triangles using a rescaled, interrupted form of Collignon's projection. The boundary parallels, approximately 41°48'37" N and S, are chosen in order to make the triangular regions cover 1/3 of the total area. For a special case, with H = 2 and dispensing with the equatorial band, the result is an interrupted Collignon map in two squares. In general, changing H affects the unscaled aspect ratio (much like in variations of Lambert's projection) and the parallel of least shape distortion; the most common case, H = 4, can be trivially folded into a cube, like polyhedral projections. The whole map may be divided into 3H identical facets, each a square with vertical and horizontal diagonals; a single facet is split along the boundary opposite the central meridian, but this can be fixed by moving one half to the opposite side of the map, leaving all facets whole in a herringbone layout. Further, each facet may be recursively divided into 4 smaller squares. This hierarchical organization allows data processing at different levels of detail. At the last level, pixels are also squares with horizontal and vertical diagonals; this unorthodox orientation may be fixed by the artifice of rotating the entire map by 45 degrees. Since the overall result is equal-area, raster computation yields consistent results. The pseudocylindrical property provides uniform pixel distribution along parallels. Another favorable feature for large data sets is the easily predictable number and location of neighbor pixels. Variants of HEALPix projections involve further rescaling. For instance, with H = 6, stretching makes the triangles equilateral; this makes for 6H triangular facets, and each may be subdivided into 4 smaller triangles; pixels are likewise triangular. For H = 3, an inverse stretching creates 3 hexagonal facets, which may be subdivided after some artifices for sharing segments between neighbor facets.
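The quoted boundary parallels can be checked directly: a spherical cap above latitude φ holds (1 − sin φ)/2 of the sphere's area, so requiring the two polar regions to hold 1/3 of the total gives sin φ = 2/3. A small sketch verifying this, together with the facet bookkeeping for the square-facet family described above; the function name is mine.

```python
import math

# The polar triangles must hold 1/3 of the sphere's area. A polar cap
# above latitude phi covers a fraction (1 - sin(phi))/2 of the sphere,
# so two caps give (1 - sin(phi)) = 1/3, i.e. sin(phi) = 2/3.
phi = math.asin(2.0 / 3.0)
deg = math.degrees(phi)
m, s = divmod((deg % 1) * 3600, 60)
print(f"boundary parallel ~ {int(deg)} deg {int(m)}' {s:.0f}\"")  # 41 48' 37"

# Facet bookkeeping for the square-facet family: 3*H facets, each
# recursively split into 4, so refinement level k holds 3*H*4**k cells.
def cell_count(H, level):
    return 3 * H * 4 ** level

for H in (2, 4, 6):
    print(H, [cell_count(H, k) for k in range(3)])
```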
The North Star Project, Summer Report Number Five, Northern Ireland

By Megan Hennen

Northern Ireland Report #1

First and foremost, it's important to understand that the island of Ireland is not a single country. The island is divided into two parts: the Republic of Ireland and Northern Ireland. There is a long and confusing history of conflict between the Irish and the British Crown that eventually led to the Government of Ireland Act of 1920 and the Partition of Ireland: Ireland became a free state, apart from the Irish province of Ulster (Northern Ireland), which would remain in a union with the UK, thus forming the border dividing the island. The decision for Ulster to remain united with Britain ultimately came about because of its population, which had for the most part been composed of British and Scottish Protestant settlers who did not wish to separate from the UK. Although more people in Northern Ireland view themselves as British and wish to remain in a union with the UK, there are nearly as many people who identify as Irish and who would like nothing more than to form a unified Ireland separate from British rule. These conflicting desires within NI have been the foundation of decades of violence, which the people of Northern Ireland often refer to as 'the Troubles'. My semester abroad was focused on these violent times and on what is currently being done to propel the peace process forward. Located in Derry/Londonderry, Northern Ireland, this monument is representative of the peace process: the people of Ulster 'reaching across the divide' and moving past differences.

For all of the North Star Project 2013-2014 Reports, see https://mgjnorthstarproject.wordpress.com/ For all of the North Star Project 2013 Summer Reports, see http://www2.css.edu/app/depts/HIS/historyjournal/index.cfm?cat=10

The North Star Project 2013-2014 School Year Reports: The Middle Ground Journal's collaborative outreach program with K-12 classes around the world. We acknowledge North Star Academy of Duluth, Minnesota as our inaugural partner school, and the flagship of our K-12 outreach program. We also welcome Duluth East High School, Dodge Middle School and other schools around the world to the North Star Project. The North Star Project has flourished since 2012. For a brief summary, please see recent articles in the American Historical Association's Perspectives on History, at:

The Middle Ground Journal will share brief dispatches from our North Star Project student interns, particularly from those who are currently stationed, or will soon be stationed, abroad. We have confirmed student interns who will be reporting from Mongolia, Southern China, Shanghai, northeastern China, The Netherlands, Tanzania, Ireland, England, Finland, Russia, and Haiti. We also have students developing presentations on theatrical representations of historical trauma, historical memory, the price individuals pay during tragic global conflicts, and different perceptions of current events from around the world. We will post their dispatches here, and report on their interactions with the North Star Project students and teachers. Hong-Ming Liang, Chief Editor, The Middle Ground Journal, The College of St. Scholastica, Duluth, MN, USA

(c) 2013-present The Middle Ground Journal. See Submission Guidelines page for the journal's not-for-profit educational open-access policy.
India is the second most populous country in the world; its population has reached 110 crores (1,100,000,000), about one-sixth of the population of the world. It is next only to China. India has 2.4 per cent of the total land area of the world, on which it has to support nearly 16% of the total world population. According to the 1981 census it had 68.4 crores of population. In 1991 its population was 84.4 crores. According to the 2001 census its population is 102.7 crores. The growth of population per year is more than 17 million, which is roughly equal to the total population of Australia. So it is rightly said that India creates one Australia every year (a quick numerical check follows at the end of this essay). The density of population was 324 per square kilometer in 2001. Life expectancy is 63.9 years, the literacy rate is 65.4 per cent and the sex ratio is 933 females per 1000 males. The annual growth rate of population was 2.14 per cent in 1991, which decreased to 1.95 per cent according to the 2001 census.

Causes of rapid growth of population in India

There are various causes responsible for the rapid growth of population in India. Generally, all the causes can be divided into three categories: (I) High birth rate, (II) Low death rate, (III) Migration.

(I) High Birth Rate (Fertility): Birth rate refers to the number of children born per thousand people. In 1991 the birth rate was 29.9 per thousand. In 2000 the birth rate was 25.8 per thousand, which is very high compared to other countries of the world. The birth rate is high due to the following reasons.

Early Marriage System: Early marriages are commonly seen in our country, generally in the case of women. Most girls are married between 16 and 18 years of age. Early marriage prolongs the childbearing period and this leads to a high rate of growth of population.

Universal Marriage: Marriage is a universal practice and regarded as a sacred obligation in India. Presently in India about 76 per cent of women are married at their reproductive age. By attaining the age of 50, only 5 out of 100 Indian women remain unmarried. As marriage is universal in our country, the birth rate becomes higher, which raises the growth rate of population.

Joint Family System: Though the importance of the joint family system has considerably declined in our country, the system has not disappeared till now. In a joint family system the children are looked after by all the earning members of the family. The system acts as a protection against economic hardship. A member may not be in a position to earn anything, but when he gets married he has more children; the birth rate rises, as a result of which population increases.

Poverty: Poverty is another factor which is largely responsible for the rapid growth of population. India has been described as a museum of poverty. According to the 2001 census, nearly 37 per cent of people live below the poverty line. Small children in poor families are put to work, and this helps to increase the family income. Children in poor families are considered assets.

Illiteracy, ignorance and superstitions: A majority of the population in our country is illiterate. When illiteracy is combined with poverty, it leads to firm belief in superstition. Children are considered the gifts of God. Such families know nothing about birth control measures. All of these account for a higher birth rate in India's population.

Attitude towards the male child: Almost every Indian family wants to have a male child.
A male child is considered an asset for the poor, a dowry earner for the greedy, a liberator for the God-fearing, life insurance for the middle class and a matter of pride for the mother.

(II) Low Death Rate (Why the mortality rate is low in India): Death rate refers to the number of deaths per thousand people. According to the 1991 census, the death rate was 10 per thousand. It decreased to 8.5 per thousand in 2001. The following are the causes responsible for the low death rate in India.

Control of epidemics and other deadly diseases: Epidemics like cholera, smallpox, plague and malaria, which took away lakhs of lives, have been successfully controlled and even completely eliminated. The number of people dying of such diseases has fallen.

Development of medical science: Due to the development of medical science and the invention of life-saving drugs, the death rate has sharply declined. The spread of health care facilities and hospitals in rural areas has created consciousness among the people about their health. Drinking water facilities, food and other sanitation measures have helped people to escape from death. This reduces the death rate to a marked extent in India.

Decline of infant mortality rate: The infant mortality rate has declined due to mass immunization programmes and proper medical treatment of children. In 1994, the infant mortality rate was 74 per thousand, which declined to 70 per thousand in 2006. When the infant mortality rate decreases, the death rate also decreases, leading to heavy population growth.

(III) Migration: Migration is another important factor responsible for the higher growth rate of population. It is seen that a large number of people migrate from foreign countries to India and permanently stay here. Although this factor is not very crucial, it has increased the population of our country.

These are the most important factors responsible for the population explosion in our country.
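As a back-of-envelope check on the census figures quoted in this essay (the computation is illustrative only): the 1991-2001 figures imply a compound annual growth rate of about 1.98%, close to the quoted 1.95%, and applying it to the 2001 base gives an annual increase of roughly 20 million, the same order as the "more than 17 million" cited.

```python
# Rough consistency check on the census figures quoted above.
CRORE = 10_000_000

pop_1991 = 84.4 * CRORE
pop_2001 = 102.7 * CRORE

cagr = (pop_2001 / pop_1991) ** (1 / 10) - 1
print(f"implied annual growth 1991-2001: {cagr:.2%}")   # ~1.98%

annual_increase = pop_2001 * cagr
print(f"annual increase: {annual_increase / 1e6:.0f} million")  # ~20 million
```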
Repetitive Motion Injuries

A Repeat Performance

Bending your wrist, raising your arm above your head, or working with your elbow at an awkward angle - each is a simple movement you use to perform your job throughout the day. But if you repeat these or other motions over and over again while you work or play, you may develop repetitive motion injuries (also called cumulative trauma disorders, or CTDs). It could be days, months, even years before symptoms of pain or tingling appear in your hand or arm. But if you know how to work and play smart, symptoms may never appear. And if they do, you can take steps to prevent them from getting worse.

Are You at Risk?

If you use the same hand or arm movements each day, you could be at risk for developing repetitive motion injuries. Use this inspection checklist to see if you're likely to develop repetitive movement problems. If you check even one box, take steps now to reduce your chances of a repetitive motion injury.

Do Movements Include…
- Using a lot of repetition in your hand and arm - either at work or play?
- Frequently bending your wrist?
- Frequently grasping or pinching objects?
- Frequently raising your arm above your shoulder?
- Frequently using a lot of force with your hand or arm?

Do Symptoms Include…
- Waking up at night because of pain in your hand or arm?
- Numbness in your fingers, hands, or arms?
- Tingling in your hand or arm?
- Ongoing aches in your hand or arm?

Working into Repetitive Motion Injuries

Repetitive motion injuries don't just happen. By combining highly repetitive motions with fast, forceful movements and awkward positions over a period of time, you may set yourself up for repeat motion problems. Overusing your hand or arm without giving it a chance to rest increases the odds of injury. The result? Pain and minimal movement. You have to break the pattern: work and play smart, and learn how to prevent repetitive motion injuries and their symptoms. Then you can avoid repetitive motion problems and look forward to remaining active and productive.

A Formula for Trauma

Are you setting yourself up for repetitive motion injuries? You're more likely to get them if you frequently use too much force or repeat the same movements. Whether your goal is to prevent repetitive motion injuries or to recover from them, just a few simple exercises can bring about big benefits. Exercise can help prevent further injury by increasing your strength and endurance. As a result, you're more likely to stay healthy and able to work comfortably for longer periods of time. Your practitioner can help set up a daily exercise program for you. Developing a general plan of action that helps you live a healthy lifestyle - both on and off the job - is another good move you can make to keep in shape.

What Conditions Respond to Acupuncture?

Acupuncture treats a wide variety of disorders. The World Health Organization lists over forty diseases that acupuncture can treat effectively. The following is a partial list of the most common ailments.
Nerve Disorders: Bell's Palsy (Facial Neuralgia) · Headache · Intercostal Neuralgia · Migraine · Numbness in Hands and Feet · Parkinson's Disease · Shingles · Sciatica · Tooth Pain · Trigeminal Neuralgia

Motor System Disorders: Back Pain · Bursitis · Herniated Disk · Knee Pain · Neck Pain · Repetitive Motion Injuries · Sprained Ankle · Sports Injuries · Stiff Shoulder · Temporomandibular Disorders · Tennis Elbow · Thoracic Outlet Syndrome · Whiplash

Gastro-intestinal Disorders: Anal Fissure · Constipation · Diarrhea · Gastritis · Gastroptosis · Hemorrhoids · Hepatitis · Indigestion · Irritable Bowel Syndrome · Kidney Stones · Stomatitis · Ulcer · Ulcerative Colitis

Circulatory Disorders: Cold Constitution · Edema · Heart Problems · Palpitations · High Blood Pressure · Jaundice · Low Blood Pressure

Endocrine Disorders: Diabetes · Thyroid Problems · Gout

Respiratory Disorders: Asthma · Bronchitis · Common Cold · Coughing · Sore Throat · Tonsillitis

Urinary Disorders: Cystitis · Enlarged Prostate · Frequent Urination · Impotence · Kidney Inflammation · Mastitis · Male Infertility · Urethritis

Sensory System Disorders: Deafness · Eczema · Dermatitis · Dizziness · Eye Fatigue/Strain · Hay Fever & Sinus · Meniere's Disease · Myopia · Skin Infections · Tinnitus · Vertigo

Gynecological Disorders: Breech · Cold Constitution · Dizziness · Endometritis · Endometriosis · Hysteromyoma · Infertility (Acura Fertility Treatment) · Mastopathy · Menopause · Menses · Menstrual Disorders · Morning Sickness (BBC News)

Pediatric Disorders: Asthma · Bedwetting · Diarrhea · Excessive Night Crying · Indigestion · Thrush · Weak Constitution

Other Disorders: Anxiety · Autonomic Ataxia · Breathlessness · Chronic Fatigue Syndrome · Hair Loss · Hernia · Insomnia · Sleeping Disorders · Multiple Sclerosis · Osteoporosis · Physical Condition Improvement · Psychosomatic Disorders · Quit Smoking · Stress Management · Weight Control
In the thoracolumbar spine there are three biomechanical regions. The upper thoracic region (T1-T8) is rigid because the ribcage provides stability. The transitional zone (T9-L2) lies between the rigid, kyphotic upper thoracic spine and the flexible, lordotic lumbar spine; this is where most injuries occur. Finally there is the L3-sacrum zone, which is flexible and is the region where axial-loading injuries occur.

In the upper thoracic spine the center of gravity is anterior to the spine. Axial loading will therefore produce compressive forces anteriorly and tensile forces posteriorly, resulting in flexion-type injuries. In the lumbar spine, due to the lordosis, the center of gravity lies posterior to the spine. Flexion-type injuries will straighten the lumbar spine and result in axial loading; in this area we see many burst fractures.

On the left, the three-column model of Denis. This model is used to predict the soft-tissue injury from the bone injury. Spinal stability depends on at least two intact columns: when two of the three columns are disrupted, abnormal segmental motion, i.e., instability, becomes possible. So a simple anterior wedge fracture, or just a sprain of the posterior ligaments, is a stable injury. However, a wedge fracture with rupture of the interspinous ligaments is unstable, because both the anterior and the posterior column are disrupted. A burst fracture is always unstable, because at least the anterior and middle columns are disrupted. Criteria to predict soft-tissue injury from bony injury are:
- Angulation greater than 20 degrees.
- Translation of 3.5 mm or more.
A short code sketch at the end of this section shows how these criteria combine.

On the left, images of a 31-year-old male who was working on a roof and fell approximately 5 meters, landing on his feet. He complained of pain in the left lower extremity and lower back. First study the images, then continue reading.

On the x-ray there is a hyperflexion injury of L1 with involvement of the anterior column and possible involvement of the middle column. The sagittal reconstructions of the CT demonstrate that the posterior part of the vertebral body is of normal height, but there is some involvement of the posterior part of the vertebral body. There is debate on how to treat these patients and whether MRI has any role in these cases. If you are aggressive, you could call this a two-column injury, which would require stabilizing surgery. If you are conservative, you could call this an injury with only minor involvement of the middle column.

On the left, a coronal reconstruction and an axial image at the level of the fracture. Continue with the MR.

The MR images show bone marrow edema in the involved vertebral body, but no additional soft-tissue injury. Because the MR did not show any additional findings, this patient was treated as having a single-column injury. Consultation with orthopedic surgery recommended conservative management with a TLSO brace. Nowadays there is a tendency to treat these thoracolumbar injuries conservatively, even if there is slight involvement of the middle column. The role of MRI in these cases is not yet clear.

On the left, a fracture of the calcaneus and a lumbar spine fracture. This is called a 'jumper's fracture' or a 'lover's fracture', because it is usually seen in people jumping out of a window to escape from the police or a jealous husband. In this case it is clear that we are looking at an unstable fracture, because this is a burst fracture: both the anterior and the middle column are disrupted.
In addition there is edema in the posterior soft tissues, indicating that there is also involvement of the posterior column. Notice also the marrow edema in the adjacent vertebral bodies due to the severe axial loading.

On the left, images of a 21-year-old female who presented after sustaining a seatbelt-type injury. She had an exploratory laparotomy for repair of a ruptured duodenum. There was no neurologic deficit. First study the images, then continue reading.

What we see is a classic example of a Chance fracture, which is a three-column injury with a horizontal orientation of the fracture. Continue with the CT images.

What is unique about the Chance fracture is the horizontal orientation, which is nicely demonstrated on the sagittal reconstructions on the left. Continue with the coronal reconstructions. The coronal reconstructions also show the horizontal orientation of the fracture.

What circumstances result in a fracture of this type? The classic mechanism of this injury is a lap-belt injury: if there is no additional shoulder belt, the body folds over the lap belt.

Chance fracture (2)
On the left, another example of a Chance fracture.

Chance fracture (3)
On the left, a Chance variant. This is a pure ligamentous injury, analogous to bilateral interfacet dislocation, which is also a pure ligamentous injury. There is rupture of the interspinous ligament, dislocation of the facet joints, and a horizontal rupture of the disc. Pure ligamentous and combined osseous/ligamentous variants have an increased risk of instability compared to the osseous type. Always look for a split of the posterior elements, disc widening, or widening of the spinous processes and facets.
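The Denis model and the angulation/translation thresholds described above combine into a simple decision rule. Here is a minimal illustrative sketch in Python; it is for teaching only, not a clinical tool, and the function name and structure are our own, not from any radiology software:

# Illustrative sketch of the Denis three-column stability rule from the text.
# Educational only -- not a clinical decision tool. The thresholds come
# straight from the passage above: instability is suggested when two or more
# columns are disrupted, when angulation exceeds 20 degrees, or when
# translation is 3.5 mm or more.

def is_unstable(columns_disrupted: int,
                angulation_deg: float = 0.0,
                translation_mm: float = 0.0) -> bool:
    """Return True if any instability criterion from the text is met."""
    if columns_disrupted >= 2:    # e.g., burst fracture: anterior + middle
        return True
    if angulation_deg > 20:       # angulation greater than 20 degrees
        return True
    if translation_mm >= 3.5:     # translation of 3.5 mm or more
        return True
    return False

# A simple anterior wedge fracture disrupts only the anterior column:
print(is_unstable(columns_disrupted=1))   # False -> stable
# A wedge fracture with ruptured interspinous ligaments disrupts two columns:
print(is_unstable(columns_disrupted=2))   # True  -> unstable

Real cases, as the L1 example above shows, often hinge on how aggressively one counts a column as "disrupted," which is exactly where MRI enters the debate.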
Chinese scientists have become the first to realize quantum key distribution from a satellite to the ground, laying the foundation for building a hack-proof global quantum communication network. The achievement, based on experiments conducted with the world's first quantum satellite, Quantum Experiments at Space Scale (QUESS), was published in the authoritative academic journal Nature on Thursday. The Nature reviewers commented that the experiment was an impressive achievement and constituted a milestone in the field.

Nicknamed "Micius" after a 5th century B.C. Chinese philosopher and scientist who is credited as the first person ever to conduct optical experiments, the 600-kilogram-plus satellite was sent into a sun-synchronous orbit at an altitude of 500 km on Aug. 16, 2016.

Pan Jianwei, lead scientist of QUESS and an academician of the Chinese Academy of Sciences (CAS), said the satellite sent quantum keys to ground stations in Xinglong, in north China's Hebei Province, and Nanshan, near Urumqi, capital of northwest China's Xinjiang Uygur Autonomous Region. The communication distance between the satellite and the ground stations varied from 645 km to 1,200 km, and the quantum key transmission rate from satellite to ground is up to 20 orders of magnitude more efficient than that expected using an optical fiber of the same length, said Pan.

When the satellite flies over China, it provides an experiment window of about 10 minutes. During that time, 300 kbit of secure key can be generated and sent by the satellite, according to Pan. "That, for instance, can meet the demand of making an absolutely secure phone call or transmitting a large amount of bank data," Pan said.

"Satellite-based quantum key distribution can be linked to metropolitan quantum networks where fibers are sufficient and convenient to connect numerous users within a city over 100 km. We can thus envision a space-ground integrated quantum network, enabling quantum cryptography - most likely the first commercial application of quantum information - useful at a global scale," Pan said.

The establishment of a reliable and efficient space-to-ground link for faithful quantum state transmission paves the way to global-scale quantum networks, he added.
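The quoted figures are easy to sanity-check. The short Python sketch below is our own back-of-the-envelope arithmetic using only the numbers reported above, not a calculation from the Nature paper itself:

# Back-of-the-envelope check of the figures quoted above (our own
# arithmetic, not from the Nature paper).

pass_seconds = 10 * 60        # one ~10-minute experiment window per pass
secure_key_bits = 300_000     # "300 kbit of secure key" per pass

rate_bps = secure_key_bits / pass_seconds
print(f"Average secure-key rate: {rate_bps:.0f} bit/s")   # ~500 bit/s

# Used as one-time-pad material (the only provably unbreakable mode),
# one pass yields enough key to encrypt roughly a 37 kB message:
one_time_pad_bytes = secure_key_bits // 8
print(f"One-time-pad capacity per pass: {one_time_pad_bytes} bytes")

Roughly 500 bit/s sounds modest, but refreshing conventional session keys (for example, 256-bit keys) requires only a tiny fraction of that budget, which is why Pan describes the rate as sufficient for secure calls and bank data.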
Long-time residents of the Sierras, Jim and Mary Rickert are seasoned ranchers who manage several ranches throughout the Sierra Cascades. Discussing ranch management, Jim Rickert stated, "Land is not a rich man's toy. You have to make it into a business; it has to earn its way."

The most well-known of the ranches that the Rickerts manage is the 34,000-acre Prather Ranch, home to 4,000 head of cattle in what is believed to be the largest closed herd in the world. This designation stems from the fact that for close to 30 years no outside females have been introduced into the herd. To ensure the high-quality beef that is synonymous with the Prather name, the cows are closely monitored, fed either natural or certified organic feed grown on the ranch, and treated humanely. Mary Rickert emphasized this last point: "We make our livelihood from them. We respect them and their contributions."

The contributions of these cows go far beyond the high-quality beef that is prized by fine restaurants and their patrons. Because they come from a well-monitored closed herd, the Prather cattle are sought after by pharmaceutical companies for both surgical and cosmetic products. Two of the most notable contributions of these cattle are their bones, which are increasingly used for surgical implants, and their pituitary glands, which are used to make artificial skin grafts for burn patients.

While Prather Ranch is the name associated with this unique cattle herd, it is Fenwood Ranch, the Rickerts' own working ranch, that is the focus of this story. The Rickerts had managed Fenwood Ranch, where they graze 200 Prather Ranch mother cows, for several years when an opportunity arose to purchase it. All the pieces fell into place and the Rickerts became owners of the 2,200-acre property.

Lying just five miles from the Redding area, Fenwood Ranch is a true oasis for residents and visitors alike in the midst of this rapidly growing area. With 2.5 miles of the Sacramento River running alongside it, the ranch boasts beautiful riverfront property. Blue oaks cover the gently rolling hills and valley oaks grow near the riverbanks. Native salmon spawn in Bear Creek and Cow Creek, which flank the property on either side. The Rickerts have worked diligently to improve the land, applying sustainable agricultural practices, creating wildlife habitat, and managing the use and quality of the water. The Rickerts' son, James, helps manage the property and is conducting a huge restoration project in the property's historical China Garden area, where Chinese immigrants farmed during the Gold Rush days.

Their commitment to taking care of the land and to assuring that it continues to serve as a resource for the community prompted the Rickerts to place a conservation easement on the property. They worked with Shasta Land Trust to achieve this and said that the process "worked out quite well." They are strong proponents of conservation easements and often advise other landowners of their benefits. The Rickerts' support of land trusts extends to offering their property each year for a variety of community events hosted by Shasta Land Trust. Clearly, the beautiful land that is Fenwood Ranch earns its way many times over.
A urethral stricture is a narrowing of the urethra, the tube that carries urine out of the body. It is caused by a buildup of scar tissue or inflammation around the urethra and is more common in men than in women.

There are several causes of urethral stricture, including a prior surgery, catheter placement, infection resulting from a sexually transmitted disease (STD), and, in men, trauma to the penis or perineum, which is the pelvic floor area behind the scrotum. This includes "straddle injury," in which trauma to the groin occurs when a person falls onto an object, such as the crossbar on a bicycle. Radiation therapy for prostate cancer or other types of cancer can also lead to a stricture.

Some people with urethral stricture notice no symptoms. In others, problems with urination may be the first sign that something is wrong. You may experience pain during urination, have a slower than usual stream of urine, be unable to completely empty your bladder, or notice extra drips of urine after you finish urinating. If left untreated, a urethral stricture can cause serious problems, including bladder and kidney damage, infections caused by the obstruction of urine flow, and poor ejaculation and infertility in men. Fortunately, strictures can be successfully treated.

If you are experiencing urination problems, your doctor may ask about your medical history, perform a physical exam, and analyze a urine sample before ordering additional tests to determine the cause of your symptoms. At NYU Langone, our diagnostic tests for urethral stricture include the following.

During the uroflow screening test, a funnel-shaped collection device is used to measure the force of your urinary stream. The flow rate helps doctors determine whether there may be a blockage due to a stricture.

Urethral stricture can prevent urine from leaving the bladder, which can lead to infection. In a postvoid residual test, ultrasound imaging of the bladder is performed immediately after urination. During an ultrasound scan, sound waves are used to create an image that allows your doctors to measure any remaining urine.

A cystoscopy procedure allows doctors to view the urethra to see if strictures are present. A doctor guides a narrow, flexible scope into the urethra and uses ultrasound to determine the location and length of strictures. This procedure is performed in the doctor's office.

After a stricture is diagnosed, the length and degree of narrowing can be evaluated with a retrograde urethrogram. This is an important step toward planning treatment, because the length of the stricture affects the types and success of treatment. In this diagnostic test, iodine contrast is inserted into the urethra while images are created using X-ray or ultrasound.
English Language Arts

We are in our third year of our English Language Arts curriculum adoption. The program we use in grades TK-6 is McGraw-Hill Wonders, which closely addresses the Common Core standards. The program is rigorous and challenges our students to think critically and write extensively. Our students are enjoying the stories they read and will continue to excel as we strive for success. The program has a home-school component where students can practice Common Core standards and apply strategies at home. Ask your child's teacher for your child's login information.

Lexia Core 5 (TK-5) and Power Up (for students who have completed Lexia) are Internet-based reading programs used daily in the classroom to reinforce phonics, phonemic awareness, parts of speech, prefixes, suffixes, and reading comprehension. If you have Internet access, these programs can also be done at home. Ask your child's teacher for login information.

To access the English Language Arts/English Language Development curriculum as well as Lexia and Power Up,
If you've ever wondered what it must feel like to fly around the Earth at 28,000 kilometers per hour, then wonder no more. [Make sure you set it to the highest resolution, then make it full screen. You're welcome.] I saw this on Universe Today, where you can get details, as well as in the YouTube link above. Created by James Drake, it's a compilation of 600 publicly available images, strung together to make an incredible time lapse animation. The actual motion of the International Space Station would appear much slower than this, but still. The clarity, color, dynamism, and sheer jaw-dropping wonder of this is spectacular to behold. A lot of people on Twitter were asking about the brown-green arc above the Earth. That's an aerosol haze, a glow caused by particles suspended high above the planet's surface. It's an extremely thin layer, so it's best seen edge-on, for the same reason some very thin shells in space are bright only around the edges. From the ground it's too faint to see this clearly, and from space it's only visible on the night side of Earth. This is truly magnificent. And the ending is, I hope, a metaphor for the future of human exploration of space. Things may seem dark now, but I am still hopeful that a new day will dawn on our efforts to reach out into the Universe around us.
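Drake has not published his exact pipeline, so the following is only a minimal sketch of one common way to string a folder of stills into a movie, in Python with the imageio library (plus its ffmpeg backend). The folder name, file naming scheme, and frame rate are assumptions for illustration:

# Minimal sketch of assembling still frames into a time-lapse video.
# NOT James Drake's actual workflow -- just one common approach.
# Assumes numbered JPEGs in ./frames (frame_0001.jpg, frame_0002.jpg, ...).
# Requires: pip install imageio imageio-ffmpeg

import glob
import imageio.v2 as imageio

frame_paths = sorted(glob.glob("frames/frame_*.jpg"))

# 600 frames at 24 fps yields a clip of roughly 25 seconds.
with imageio.get_writer("iss_timelapse.mp4", fps=24) as writer:
    for path in frame_paths:
        writer.append_data(imageio.imread(path))

print(f"Wrote {len(frame_paths)} frames to iss_timelapse.mp4")

The frame rate is the whole trick: the station's real orbital motion is far slower, and playing back long-exposure stills at 24 frames per second is what compresses most of an orbit into under half a minute.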
Beyond Good and Evil
By Friedrich Nietzsche

Friedrich Nietzsche's Beyond Good and Evil is translated from the German by R.J. Hollingdale with an introduction by Michael Tanner in Penguin Classics.

Beyond Good and Evil confirmed Nietzsche's position as the towering European philosopher of his age. The work dramatically rejects the tradition of Western thought, with its notions of truth and God, good and evil. Nietzsche demonstrates that the Christian world is steeped in a false piety and infected with a 'slave morality'. With wit and energy, he turns from this critique to a philosophy that celebrates the present and demands that the individual impose their own 'will to power' upon the world.

This edition includes a commentary on the text by the translator and Michael Tanner's introduction, which explains some of the more abstract passages in Beyond Good and Evil.

Friedrich Nietzsche (1844-1900) became professor of classical philology at Basel University at the age of 24, a post he held until ill health forced him to retire in 1879. He withdrew from society until his final collapse in 1889, when he became insane.
Young children have a natural curiosity about how the world works, and science is essentially about discovering more about this through research and experimentation. The home-educated children I know tend to retain an enthusiasm for science throughout their lives, and the world is full of opportunities to experiment and research. This means that young children may learn best by exploring the world around them, with some guidance and plenty of opportunity to ask why things work. Older kids could benefit from more structure and perhaps a tutor, like the one advertised below.

We have two younger children and follow an autonomous approach to learning, which means we tend to seize opportunities to explore science when they occur naturally. An observation of how beautiful the moon looks might lead to a conversation about gravity, the tides, eclipses, reflection of sunlight and space travel. Often we don't know the answer, in which case we do research together (on the Internet or in illustrated books) to discover it, whether it is the name of an insect we have found or why, when you fill a cup of water to the brim, the water can stand higher than the edge of the cup. We have also recently started doing experiments (the experiments themselves often take only a minute or two, but can lead to long, fascinating discussions). For some great suggestions of easy science experiments, see:

GCSE Science and SATS in Derbyshire and Nottinghamshire, UK

The following information is from Alan Woods; for more information on his services, please see Science Tuition in Derbyshire.
- I was a very successful Head of Department and since retirement I have been giving private tuition.
- I am familiar with all the major syllabi, including Edexcel, Nuffield and AQA.
- I am a tutor for both home-educated and school children.
- I am able to offer science tuition at my home address or in your home, covering an area around Derby, Nottingham and Mansfield.
- I like to make the sessions relaxed, interesting and enjoyable, as well as challenging.
- I introduce methods to achieve examination success in all the sessions, including revision and examination techniques.

Chemistry Tutor's Details

My A-level tuition rates are £23.00 per hour at my home address, and I would negotiate a rate for travelling to another destination. No extra fees via this site. Please contact me via this site for the cheapest rates! If you need any more information, please do not hesitate to contact me by email or telephone:

Alan Woods, 25 Ford Avenue, Loscoe, Heanor, Derbyshire, DE75 7LR

NB: This page details a chemistry tutor offering science tuition in Nottingham and Derby and throughout Derbyshire and Nottinghamshire. The rest of this website provides information on home education in France.
What is a "food system"?

Food systems comprise all aspects of food production (the way the food is grown or raised; the way the food is harvested or slaughtered; and the way the food is processed, packaged, or otherwise prepared for consumer purchase) and food distribution (where and how the food is sold to consumers and how the food is transported). Food systems can be divided into two major types: the global industrial food system, of which there is only one, and sustainable/local (or regional) food systems, of which there are many. The global industrial food system has a much wider geographic reach than a local or regional food system.

What is a "local (or regional) food system"?

The term "local food system" (or "regional food system") is used to describe a method of food production and distribution that is geographically localized, rather than national and/or international. Food is grown (or raised) and harvested close to consumers' homes, then distributed over much shorter distances than is common in the conventional global industrial food system. In general, local/regional food systems are associated with sustainable agriculture, while the global industrial food system is reliant upon industrial agriculture.

What is local? What is regional?

Commonly, "local food" refers to food produced near the consumer (i.e., food grown or raised within X miles of a consumer). However, because there is no universally agreed-upon definition for the geographic component of what "local" or "regional" means, consumers are left to decide what local and regional food means to them. A 2008 survey found that half of consumers surveyed described "local" as "made or produced within a hundred miles" (of their homes), while another 37% described "local" as "made or produced in my state." The ability to eat "locally" also varies depending on the production capacity of the region in question: people living in areas that are agriculturally productive year-round may have an easier time sourcing food that is grown or raised 100 miles (or even 50 miles) from their homes than those in arid or colder regions, whose residents may define "local food" in a more regional context.

Is local food the same as sustainable food?

Not necessarily. Many people now equate the terms "local food" and "sustainable food," using local as a synonym for characteristics such as fresh, healthful, and produced in an environmentally and socially responsible manner. Technically, though, "local" means only that a food was produced relatively close to where it's sold; the term doesn't provide any indication of food qualities such as freshness, nutritional value, or production practices, and can't be used as a reliable indicator of sustainability. For instance, while meat from a factory farm could be accurately marketed to a nearby community as "local," the meat would certainly not be considered sustainable. Furthermore, as noted above, the maximum acceptable distance from a "local" food's point of production to its point of sale isn't actually defined or regulated; it's left up to the interpretation of whoever is using the term. Unfortunately, in order to capitalize on increased consumer demand for local food, less scrupulous producers have begun to use the term to "greenwash" (or "localwash") their products. By taking advantage of the ambiguity regarding the term's definition, these producers can mislead consumers by using the local label to imply that their foods are grown closer and/or more sustainably than they actually are.
Of course, it's important to note that food marketed as "local" isn't always industrial food in disguise; indeed, plenty of local food is produced according to the highest sustainability standards. Nonetheless, since local is not defined or regulated, consumers should always be prepared to find more information about production practices in order to determine whether a local food is sustainable.

Food Distribution: the Way Local Food Reaches the Consumer

The ways in which food reaches the consumer vary widely between local food systems and the conventional global industrial food system. The development of refrigerated trucking, in combination with subsidized fuel costs and changes to methods of harvesting and transporting food, enables conventional food to be shipped over very long distances at fairly low cost to producers. The conventional food system also relies heavily upon centralized processing and packaging facilities that are often located far from the grower and the consumer. Local food systems value a shorter distribution distance between grower/producer and consumer. In addition, local food systems often cut out the middlemen involved in processing, packaging, transporting, and selling food.

Sustainable/Local Food Distribution

Local food production-distribution networks often start on smaller, sustainable family farms. Farm products are transported over shorter geographic distances and are generally processed either on the farm itself or by smaller processors. Sustainable/local food distribution networks rely on two primary markets: the direct-to-consumer market and the direct-to-retail, foodservice, and institution market.

The Direct-to-Consumer Market

The direct-to-consumer market is currently the most established sector of local food distribution. Direct-to-consumer means that all middlemen are cut out of the food distribution equation: farmers sell their products directly to consumers, rather than through third parties, such as grocery stores. Common direct-to-consumer operations include:

- Farmers' Markets: Farmers' markets are communal spaces in which multiple farmers gather to sell their farm products directly to consumers. Farmers' markets may be municipally or privately managed and may be seasonal or year-round. Farmers may have to pay a vendor's (or other similar) fee to participate, and must usually transport their own farm products to the farmers' market site. The United States Department of Agriculture (USDA) reports that the number of farmers' markets in the US increased from 1,755 in 1994 to 7,175 in 2011.

- Community Supported Agriculture: Community Supported Agriculture programs (CSAs) are direct-to-consumer programs in which consumers buy a "share" of a local farm's projected harvest. Consumers are often required to pay for their share of the harvest up front; this arrangement distributes the risks and rewards of farming amongst both consumers and the farmer. CSA participants often pick up their CSA shares in a communal location, or the shares may be delivered directly to customers. The USDA estimates that there may be as many as 2,500 CSAs currently operating in the US.

- Other Direct-to-Consumer Programs: A much smaller proportion of the direct-to-consumer market consists of options such as pick-your-own farms, on-site farm stands and stores, and gleaning programs, in which consumers are invited to harvest crops that are left in fields, usually after harvest.
The Direct-to-Retail, Foodservice, and Institution Market

A growing component of local food systems are programs that provide farm products directly to retail, foodservice, and institutions. These types of programs cut out the (usually corporate) middlemen involved in storing, processing, and/or transporting food destined for grocery (and other retail) stores, restaurants, schools, hospitals, and other institutions. Direct-to-retail, foodservice, and institution programs may involve farmers delivering farm products directly to these establishments, or may rely upon a "food hub," a centralized location where many farmers drop off their farm products for distribution amongst multiple establishments. (Read more about food hubs below.)

The Global Industrial Food System

The mainstream food production-distribution network starts on large, industrial farms, where monocropping (in the case of fruits and vegetables) and factory farming (in the case of animal products) are often the norm. Farm products may be transported to a centralized facility for further packaging, processing, and/or inspection, then transported nationally or internationally to finally reach their destination, usually a conventional grocery store or retail establishment. As farms have consolidated over the past 50 years, so has the food processing industry. This consolidation means food is transported over vastly greater distances, and the production and processing of our food is in the hands of only a small number of corporations. This has implications for food safety, food security, and the loss of small processing establishments (e.g., slaughterhouses and canneries).

Why are local/regional food systems important?

Supporting local/regional food systems helps support local, sustainably run farms, can help protect our health and the health of our communities, and helps stimulate local economies. We outline some of the reasons why local/regional food systems are important below.

Local food systems rely upon a network of small, usually sustainably run, family farms (rather than large industrially run farms) as the source of farm products. Industrial farming negatively impacts the environment in myriad ways (e.g., by polluting the air, surface water, and groundwater, over-consuming fossil fuel and water resources, degrading soil quality, inducing erosion, and accelerating the loss of biodiversity). Industrial agriculture also adversely affects the health of farm workers, degrades the socioeconomic fabric of surrounding communities, and impairs the health and quality of life of community residents. In addition, although the concept of "food miles" (i.e., the number of miles a food item travels from farm to consumer) has been criticized as an unreliable indicator of the environmental impact of industrially produced food, it should be noted that conventional food is estimated to typically travel between 1,500 and 3,000 miles to reach the consumer and usually requires additional packaging and refrigeration. Many small-scale, local farms attempt to ameliorate the environmental damage done via industrial farming by focusing on sustainable practices, such as minimized pesticide use, no-till agriculture and composting, minimized transport to consumers, and minimal to no packaging for their farm products.
Food Safety, Health, and Nutrition

As production networks in the conventional food system have become increasingly consolidated, and as distribution networks have become increasingly globalized, the risk of food safety problems, such as foodborne illness, has also increased. The consolidation of meat and produce production, including animal slaughter and processing, means that there are more possibilities of improper processing, handling, or preparation affecting vast quantities of food (and subsequently consumers). Recent multi-state outbreaks affecting hundreds of people have been traced to individual farms, food processing facilities, and even individual food handlers. When a small amount of contamination (e.g., bacteria) enters these consolidated production systems, vast quantities of the food product being processed and distributed nationally (or globally) may be affected due to the sheer volume of food being produced. This risk is heightened by weak food safety standards, inadequate food safety inspection procedures, and, in the case of meat production, the trend toward increasingly rapid line speeds at industrial processing facilities. Tracing outbreaks of foodborne illness also becomes more difficult because the production and distribution of conventional food products, such as ground beef, often involves multiple farms, food processors, and food distributors. The distribution of these food products over vast geographical areas further complicates the ability to quickly track an outbreak.

In addition, higher-yielding plant varieties suitable for industrial production and international travel have come at the expense of nutrition. The global industrial food system relies on crops that have been bred primarily for higher yield and ease of transport, while farmers involved in local food systems often place a higher value on plant varietals that are more nutritious by virtue of their variety (i.e., not bred for yield alone) or by their method of production. Local, sustainably produced fruits and vegetables are often fresher, as they do not require transport over long distances and thus can be harvested closer to peak ripeness. Many fruits and vegetables contain more nutrients when allowed to ripen naturally on the parent plant. Meat from animals raised sustainably on pasture is also more nutritious; for example, grass-fed beef is higher in "good" cholesterol (and lower in "bad"), higher in vitamins A and E, lower in fat, and contains more antioxidants than factory-farmed beef. Sustainably produced food also means less (or no) agricultural chemicals (such as pesticides), antibiotics, and hormones, all of which are common in conventional farm products.

- Read more about the health and nutrition of sustainable food, foodborne illness, pesticides, antibiotics, and hormones

Food Security

The Food and Agriculture Organization of the United Nations says that "food security exists when all people, at all times, have physical, social and economic access to sufficient, safe and nutritious food which meets their dietary needs and food preferences for an active and healthy life." Local food systems may help improve food security by making local, fresh food available to populations with limited access to healthful food; this is especially salient as more farmers' markets accept food stamps (or the equivalent).
- Read more about food security

Support Local Economies and Protect Local Farms and Farmland

Evidence indicates that local food systems support local economies; for example, farmers' markets positively affect the businesses surrounding them, while also providing significant sources of income for local farmers, thus maintaining the viability of many small, local farms. Unlike large industrial farms, small family farms are more likely to spend their dollars in the community on farm-related inputs (e.g., machinery, seeds, farm supplies, etc.); in addition, food grown locally, processed locally, and distributed locally (for example, to local restaurants) generates jobs and subsequently helps stimulate local economies.

In 1959 there were 4,105,000 farms in the United States, while the latest US farm census in 2011 recorded only 2,200,000 farms. In the last 50 years, though the number of farms has shrunk, the size of the farms still in existence has grown tremendously, which demonstrates the consolidation and industrialization of US agriculture. Local food systems help preserve farmland by providing small family farms a viable outlet through which to sell their farm products. In addition, the creation of relationships between farmers and their urban/suburban customers through direct-to-consumer markets can help preserve farmland, as protecting family farms becomes a shared goal for both farmers and their local consumers.

Barriers to the Creation of Local and Regional Food Systems

Although local and regional food systems are growing, there are a number of barriers to their creation and expansion. As a result of the consolidation of food processing, small, local farms may have difficulty finding a local slaughterhouse for their pastured animals or a local food processor (e.g., canner, bottler, commercial kitchen, etc.) for value-added farm products. As large corporate entities begin to capitalize on the "local" moniker, small farmers may have difficulty competing with large-scale producers with large-scale marketing apparatuses. Finally, farmers may have logistical problems finding reliable and convenient transport for their farm products, especially during the growing season. However, there is an emerging network of small-scale, local (and even mobile) slaughterhouses, a growing trend of farms processing their own value-added products (e.g., jams, pickles, etc.), and the creation of food hubs to solve the dual challenges of transportation and marketing for small family farms.

Food Hubs

As the demand for local, fresh produce and animal products continues to grow, innovative programs to help small farmers bring their farm products to market are also expanding. One increasingly common solution to the logistical, transportation, and marketing challenges faced by small family farmers is the creation of local and regional "food hubs." The USDA describes a food hub as the "drop off point for multiple farmers and a pick up point for distribution firms and customers that want to buy source-verified local and regional food." Some food hubs also provide transportation of farm products directly to consumers and retail, restaurant, and institutional customers. Food hubs take much of the burden of marketing and transportation from local farmers by finding viable consumers, and provide other business-related services, such as logistical coordination. In addition, they often provide refrigerated storage facilities and auxiliary services such as commercial kitchens and light food processing.
Food hubs can expand the market reach of small, local farmers, help create local jobs, and expand access to fresh, local food in urban and suburban markets.
The Army High Command Before Pearl Harbor

Some of the greatest generals in World War II, far from striking the classic posture of the man on horseback, issued their military orders from the quiet of their desks and fought their decisive battles at conference tables. Strategic plans and policies fixing the essential character of the conflict were worked out in the capital cities of the warring nations. In Washington, as in London, Moscow, Berlin, and Tokyo, military leaders had to deal with urgent world-wide problems that transcended the problems of the individual battlefronts. Using new systems of rapid communication, they kept in touch with the movements of armies and set the patterns of grand strategy as effectively as the Caesars and Napoleons of the past. In so doing they had to reconcile divergent views about the employment of ground, sea, and air forces in the common effort. They had to assist in the delicate process of balancing military requirements of all kinds with the political, social, and economic programs of their national governments. Finally, they had to help adjust differences of military policy among the Great Powers in the coalition. The "fog of war," which traditionally has obscured and confused the scene of maneuver, quickly settled over this military work at the capital of the United States.

President Franklin D. Roosevelt and, in the last months of the war, President Harry S. Truman necessarily discharged much of the tremendous responsibility of wartime Commander in Chief through the highest ranking professional officers in the three fighting services. The highest position in the Navy was held initially by Admiral Harold R. Stark, Chief of Naval Operations, and after March 1942 by Admiral Ernest J. King, Chief of Naval Operations and Commander in Chief, United States Fleet. Throughout the entire war the military leaders of the Army were Gen. George C. Marshall, Chief of Staff, United States Army, and Gen. Henry H. Arnold, Commanding General, Army Air Forces. The latter organization was administratively a subordinate part of the Army but enjoyed almost complete independence in developing resources and techniques in the special field of air combat and air bombardment.

Admiral King, General Marshall, General Arnold, and a personal representative (sometimes called chief of staff) of the President, Admiral William D. Leahy, constituted the U. S. Joint Chiefs of Staff committee during most of World War II. This committee not only guided the efforts of all three services in support of the common objective but also represented the United States in continuous military staff work with Great Britain and, much more intermittently, in negotiations with the military leaders of the Soviet Union. The prestige that it enjoyed came in considerable part from the fact that the committee effectively represented the armed services whose chiefs constituted its membership. Its decisions were binding because they were carried out under the authority of each service chief in his own department and because in many cases they were given formal approval by the President.

The Chief of Staff of the U. S. Army, on the basis of the deliberations and decisions of the military high command of the United States, gave strategic direction to the efforts of the huge American ground and (Army) air forces that helped to fight and win World War II.
Although strategy came to be determined almost entirely in interservice and coalition councils, the Chief of Staff was responsible for the Army's actions, first in helping to work out common strategic plans and then in carrying them out as agreed. He was the principal Presidential executive agent of the Army's "strategy, tactics, and operations," as well as immediate adviser of the Secretary of War in developing and supervising the entire Military Establishment.1 The full weight of this office fell on one man, General Marshall.

In the task of planning for and employing an army of eight million men engaged in military operations all over the globe, General Marshall leaned most heavily on one division of the General Staff. It was first called the War Plans Division (WPD) because it was primarily concerned with strategic planning, but in March 1942 it was given new powers in directing military operations and was renamed the Operations Division. Usually called "OPD," it was "charged with the preparation of strategic plans and coordination of operations throughout the world." 2 The second function was unprecedented in General Staff assignments of responsibility. In fact, OPD was unique in the history of American military institutions. It served as General Marshall's Washington command post from which he issued orders establishing U. S. Army commands all over the world, deploying millions of American troops to the theaters of war and setting the general strategic pattern of their military efforts. Its officers participated in the national and international staff work that lay behind the strategic decisions of the American and Allied high command. It was the staff that first clearly formulated and most strongly advocated some of the essential elements of the grand strategy actually followed in World War II, most notably the central military project of massing American and British forces for the invasion of Europe across the English Channel. In all of these roles OPD acted only as a single and, indeed, very small part of a military organization whose success depended on the efficiency of its leader, the Chief of Staff, and the competence of every staff and unit in the Army.

The Chief of Staff in World War II, for the first time in the history of the U. S. Army, exercised control over all the Army's wartime activities. The strategic instructions he issued not only governed the conduct of military operations in the theaters of war but also co-ordinated them with mobilization, training, equipment, supply, and replacement capacities in the United States. He had both responsibility and authority to co-ordinate all Army activities and direct them toward the primary aim of winning the war. For this purpose he needed a staff capable of studying carefully the operations of the Army in combat and of issuing instructions to all Army agencies as deemed necessary to insure that strategic plans could and would be carried out. OPD's work under General Marshall, which aimed at "getting things done" as well as helping to devise plans and policies, indicated that it was feasible, through efficient, aggressive staff action, to centralize supervision of the vast and complex business of modern warfare.3

For some years before World War II, the U. S. Army had been teaching its officers a consistent doctrine concerning command and staff work. This doctrine was designed for tactical units of all sizes engaged in combat and in supporting activities in the field.
The headquarters where the Chief of Staff was doing his work, the War Department, for a variety of reasons did not conform to these principles laid down for field commands.4 During 1940 and 1941 General Marshall turned for help to the staffs and agencies already existing in the War Department or already provided for in legislation and regulations governing the Army. These staffs and agencies were not equipped to meet the critical situation as it actually developed in the hectic years of mobilization, rearmament, and training. Perhaps in time they might have met it and in some fashion have coped with the graver tests of war. Instead, however, from the effort, confusion, accomplishment, and error of 1941 the outlines of a plan for a new Army command post in Washington began to emerge, with a staff modeled more closely than any previous War Department agency on the lines of a general staff in the field. General Marshall finally established such a strategic and operations command post, which served him throughout World War II.5 The Operations Division came into being and developed as the concrete embodiment of this idea in staff work for the support of the high command of the U. S. Army.

General Marshall's six-year tour of duty as Chief of Staff and ranking officer in the U. S. Army had begun in 1939. A graduate of the Virginia Military Institute in 1901, General Marshall entered the Army at the age of twenty-one as an infantry second lieutenant in February 1902. During World War I he spent two years in France as a high staff officer, reaching the temporary rank of colonel, principally with the First Army and at the general headquarters of the American Expeditionary Force. He returned to the United States in 1919 and served as aide-de-camp to General Pershing during that officer's tenure as Chief of Staff, 1921-24. He attained the permanent rank of brigadier general in the peacetime Army in 1936, and in July 1938 he was ordered to Washington as chief of the War Plans Division. He became Deputy Chief of Staff on 16 October 1938, and less than a year later succeeded General Craig as Chief of Staff. He first received the title of Acting Chief of Staff on 1 July 1939, and then, upon the effective date of his predecessor's formal retirement, 1 September 1939, he acquired the full authority and rank (four-star general) of the Chief of Staff. He held that post until 20 November 1945, receiving in the meantime one of the four special Army appointments to five-star rank, with the title of General of the Army, conferred by Congress in December 1944.

During the first thirty months of his duty as Chief of Staff, German and Italian aggression in Europe and Japanese aggression in the Far East were bringing the threat of war closer and closer to the United States. General Marshall devoted himself to the urgent task of expanding the Army and training its ground and air forces to meet the grave challenge of the times. In preparing for the eventuality of war and making strategic plans, as in mapping out the course of military operations after war came, General Marshall enjoyed the confidence and support of his civilian superiors, Secretary of War Henry L. Stimson, President Roosevelt, and President Truman. The Secretary worked closely and harmoniously with the Chief of Staff, exercising essential civilian control over the Military Establishment. The President, as Chief Executive, shaped national policy in the light of the advice on military affairs that Secretary Stimson and General Marshall gave him.
As Commander in Chief, determining strategic policy, he relied very heavily on General Marshall's views, whether expressed in his capacity as military head of the Army or as member of the interservice high command. The advice the Chief of Staff gave on matters within his sphere of professional competence was valuable precisely insofar as it reflected his understanding of the capabilities of the Army and to the extent that he could bring about military performances commensurate with national needs. As the Army grew in size eightfold within two years, reaching a total strength of 1,500,000 in 1941, and as the outbreak of hostilities seemed nearer and nearer, General Marshall had to deal with military problems of unprecedented scope and complexity. He plainly needed staff assistance of the finest kind for the task at hand and the trials ahead.

Principles of Command

The idea of the new command post, nourished at its roots by orthodox General Staff doctrine, grew out of the unorthodox character of the Army's high command in Washington in 1939, 1940, and 1941. An understanding of this doctrine and of the structure of the high command is essential to the story of the development of OPD. The U. S. Army, particularly through the system of service schools that flourished between World War I and World War II, had tried to formulate and codify principles that would aid its officers to carry out their military duties efficiently and systematically despite the complexities and difficulties which they recognized to be inherent in the "human nature" of the "war-making machine" of which they were a part.6

According to the Army's formulation of principle, the idea of command is central in all military organizations and effort. By the exercise of command the officer in charge of any unit controls its military action. A chain of command links the commanders of small military units through the commanders of successively larger organizations to the highest level of authority. The high command, the top level of military authority, tries to provide adequate material resources, or to distribute them wisely when they are inadequate, and to insure the proficiency of individual officers and men throughout the hierarchy. Its primary function is to make plans and then issue orders that insofar as possible gear the actions of every element of the organization into a unified military effort. The exercise of command, to be effective, requires the formulation of clear-cut decisions governing the conduct of all of the Army's ramified activities. The decisions must reflect an intelligent appraisal of the specific situations which they are intended to meet. Finally, instructions embodying these decisions must be conveyed speedily and clearly to the men who are required to carry them out.7

In this context the chain of command is a chain of military ideas expressed in the form of orders. Primarily the ideas are either strategic, prescribing military missions or objectives, or tactical, prescribing military maneuvers aimed at accomplishing some mission. At the highest level of command, ideas are mainly strategic. They are cast in very broad terms chosen to provide a common frame of reference for many military enterprises. Though comparatively simple in form, they are also most complex to arrive at and most intertwined with other, nonmilitary affairs. They are difficult to formulate precisely and to convey clearly to subordinate elements. The U. S.
Army, like other armies, recognizes that every officer who commands the common effort of more than a few men needs some kind of staff to assist him.8 In small units it may be merely an informal, part-time group of immediate military subordinates acting in a secondary, advisory capacity. In large military organizations, especially in combat units in the field, it ordinarily has to be an agency formally constituted for the sole purpose of assisting in the exercise of command. In a field command, some staff officers customarily relieve their commander of administrative or technical duties, in particular making plans according to his desires and establishing programs for providing the combat troops with all types of military supplies and for rendering other special services such as transport, ordnance, and medical aid. Other officers in the field, called general staff officers, devote themselves mainly to supplying the commander with information, helping him to reach strategic and tactical decisions, and conveying these decisions to subordinates. They may suggest feasible solutions to him, usually recommending a concrete line of action.

In time of peace, in the 1920's and early 1930's, the only prospective overseas theaters of military operations were the outlying territorial possessions of the United States. The defensive garrisons in some of these bases had a strength of only a few hundred each, and as late as mid-1939 they had a total strength of less than 50,000 officers and men.12 A single officer could and did command the entire Army without the support of the kind of well co-ordinated staff work considered essential in the commands of most of his subordinates. As German and Japanese military moves threatened to plunge the U. S. Army into combat in many scattered theaters of war, the attention of the Chief of Staff was stretched dangerously thin over his rapidly increasing forces.

Territorial and Tactical Elements of the Army in 1941

Until the Pearl Harbor attack of 7 December 1941 put the Army unequivocally on a war footing, General Marshall, like his predecessors, controlled most routine Army activities through territorial commands directly responsible to the Chief of Staff. These commands were of two main types: first, the corps areas into which the continental United States (including Alaska) was divided for purposes of military administration and, second, the overseas departments. There were nine corps area commands. They had been established by provision of the National Defense Act as amended in 1920, and originally provided the only administrative machinery for local mobilization of forces in emergency and for routine control of other activities, including training of Regular Army units in the continental United States. The formal activation of field armies (tactical units) in 1932 removed from the corps areas as such the responsibility for administrative control and field maneuvering of tactical elements of the Army. These armies, to which the bulk of the tactical units of the ground army were assigned, operated directly under the command of the Chief of Staff, acting in the special capacity of Commanding General, Field Forces, formally granted him in 1936 Army Regulations.13 Until 1940 four of the nine corps area commanders acted in a dual capacity as army commanders, and their staffs served them in both capacities.
At that time the Second Corps Area (New York) was headquarters for the First Army, the Sixth Corps Area (Chicago) for the Second Army, the Eighth Corps Area (San Antonio) for the Third Army, and the Ninth Corps Area (San Francisco) for the Fourth Army. In 1940 the four armies received commanders and staffs separate from those of the corps areas.14 Thereafter the corps area commanders, although they retained responsibility for administrative control and training of nontactical units, had as their primary job the provision of administrative and supply services for Army installations and tactical units in the United States.15

The overseas departments, unlike the corps areas, continued to have both administrative and operational (tactical) responsibilities throughout the period between the wars and during World War II. The departments, four in number in the pre-Pearl Harbor years, controlled all Army activities in Hawaii, the Philippines, the Panama Canal area, and the Puerto Rican area. In addition, the department commanders were immediately responsible for directing military operations by tactical units assigned to defend these four vital outlying base areas of the United States.

The tactical chain of command was distinct, if not always separate, from the chain leading from the War Department down to the territorial agencies. General Marshall exercised command of the Army as a fighting force through tactical headquarters responsible for training units and eventually for employing them in combat or in support of combat. The commanders of overseas departments and their staffs acted in both administrative and tactical capacities. Combat units were assigned directly to the departments for defensive deployment and, in event of war, for military operations.

The actual field forces in July 1939 constituted the mere skeleton of a combat force. There were theoretically nine infantry divisions in the Regular Army in the continental United States, but their personnel, scattered about in small units among various Army posts, provided the equivalent of only three and one-half divisions operating at half strength.16 There were two divisions in Hawaii and the Philippines among the overseas department garrisons. It was impossible to organize tactical units larger than division size. Expansion from this low point was rapid. Successive increments were added to the Regular Army in rhythm with the recurring crises abroad. The entire National Guard was mobilized and called into the active service of the United States. The induction of citizen soldiers began soon after the passage of the Selective Service Act of August 1940. By mid-1941 the four field armies contained twenty-nine infantry and cavalry divisions at nearly full strength, totaling over 450,000 officers and men. An armored force, established on 10 July 1940, had grown to comprise four divisions with a total strength of over 40,000 officers and men.17 With combatant air units, the four armies and the armored force constituted the field forces of the U. S. Army.

In 1935 a military organization called the General Headquarters Air Force had been established to organize and command in combat the comparatively small number of tactical air units being trained, equipped, and supplied by the Air Corps, a so-called bureau in the War Department. Total Air Corps strength in July 1939 amounted to 22,000 officers and men.
It had on hand about 2,400 aircraft of all types, including sixteen heavy bombers, and reckoned its combat units by squadrons, which numbered about eighty. By July 1941 the Air Corps had increased in size almost eightfold to 152,000 officers and men and had established four defensive air forces in the continental United States and two additional air forces in overseas bases, Hawaii and Panama. The latter were an advance guard of the dozen combat air forces which eventually carried the air war to the enemy. By this time the Army had on hand about 7,000 aircraft of all types, including 120 heavy bombers, and was planning in terms of 55 to 70 combat groups of 3 or 4 squadrons each. These Army air units, organized as a virtually autonomous striking arm under the superior direction of the Chief of Staff, together with the four field armies, provided the nucleus of the combat units that protected the bases of the United States and moved across the Atlantic and Pacific to help win World War II.18

The Army could hardly absorb the thousands of untrained recruits it received in 1940 and 1941 and at the same time maintain or raise its combat efficiency, as it badly needed to do. In the continental United States the basic training of individuals and small units, together with the necessary construction, procurement, and administrative expansion, demanded the attention of Regular Army officers and men, in addition to that of their auxiliaries from the Organized Reserve and National Guard. In overseas outposts there was less dilution of trained units by recruits. The garrisons in the overseas departments, the units most exposed to attack, expanded only about threefold during this two-year period, while the forces in the continental United States increased nearly tenfold.

The imminence of war brought about several changes in the structure of the Army. For years war planning had been built around "M Day," when general mobilization of forces should begin. In the uneasy atmosphere of world affairs in 1939 and 1940, mobilization was a political matter of both domestic and diplomatic importance. Technically the United States never had an M Day for World War II. Nevertheless, the German triumphs in western Europe in mid-1940 brought about a vast though slow mobilization of American armed forces. These forces had to be trained before they could be employed. The Chief of Staff was responsible for the task of training the new Army, as he was for every other Army activity. Consequently General Marshall faced the prospect of a multitude of decisions concerning the mobilization of men and matériel, strategic deployment of troops, and continuous strategic planning. The menacing international situation was steadily increasing the work of the entire War Department. Some of the requisite decisions concerning troop training were of the kind that called for speed and vigor of execution rather than for careful and deliberate planning.
What was needed, particularly for the job of building a powerful tactical force out of the peacetime army, was an operating service of the kind for which the General Staff was wholly unadapted.19 There was widespread dissatisfaction on the one hand with the amount of "operating and administrative duties" in which the War Department was involved and on the other with the "time killing system of concurrences" which tended to slow down War Department action.20 Under these circumstances General Marshall decided to exercise his command of ground units in tactical training through a new agency, which he designated General Headquarters, U. S. Army (GHQ). Activated on 26 July 1940, GHQ was assigned the specific function of decentralizing activities under the Chief of Staff and assisting him in his capacity as Commanding General, Field Forces.21 Brig. Gen. Lesley J. McNair became Chief of Staff, GHQ, and set up offices for the new staff at the Army War College building in Washington. The physical separation of General McNair's staff from the Munitions Building, where General Marshall and most of the staffs worked, was itself both a practical and a psychological barrier to smooth integration with War Department activities.

The name GHQ, a time-honored Army designation for a headquarters controlling operations in the field, particularly the highest headquarters in an area or command, was misleading. General McNair's mission covered only the training of the combat forces, that is, the four field armies, the GHQ Air Force (until the creation of the Army Air Forces on 20 June 1941), the Armored Force, and miscellaneous GHQ reserves. In practice this assignment made GHQ a kind of operating agency for the G-3 Division of the General Staff, the part of the War Department responsible for making plans and issuing General Marshall's instructions governing troop organization, training, and routine movements. For the time being General Marshall continued to exercise tactical command of the ground combat forces, other than those in training, through the War Department, under his authority as Chief of Staff and as advised by the General Staff.22 Nevertheless, he made clear his intention of expanding GHQ functions progressively in conformity with the basic idea of a powerful GHQ and with formal Army plans for establishing such a command in the event of mobilization for war. As thus conceived the designation of GHQ was not a misnomer. Few Army officers saw any reason to doubt that the staff which handled the countless details connected with training troop units for tactical operations would in time direct those troops in combat. Determination of the status of GHQ in controlling Army operations, particularly in relation to the War Department, was one of the most pressing questions General Marshall had to try to solve when war came to the United States late in 1941.23

Another change in Army organization reflecting the international situation was the establishment of base commands as semiterritorial, semitactical organizations. For the most part these bases were on islands along the North Atlantic coastline and in the Caribbean area. Several were British territory leased to the United States in the destroyer-base transaction concluded by the President in 1940. By mid-1941 a number of areas containing vital U. S. Army bases had been set up as independent commands, each responsible for the administration and defense of the bases in it.
The largest base commands were in Newfoundland, Greenland, and Bermuda.

Originally all of the base commands reported to the Chief of Staff. Early in 1941, however, pursuant to a General Staff study, the Puerto Rican Department, the Panama Canal Department, and the several base commands that had been established in British Caribbean territory were integrated for purposes of general defensive planning under the newly constituted Caribbean Defense Command.24 This consolidation introduced a new type of command in the Army. Only a few weeks later the local headquarters of Army troops stationed in Alaska was redesignated the Alaska Defense Command. The Army organization in Alaska, while not exactly analogous to the overseas departments or to the consolidated department and base command structure in the Caribbean, had a more active and comprehensive mission than a local base command.25 In March the War Department put the new designation to further use when it set up within the continental United States four defense commands to "coordinate or to prepare and to initiate the execution of all plans for the employment of Army Forces and installations in defense against enemy actions." 26 These new agencies—the Caribbean, Alaska, Northeast (later Eastern), Central, Southern, and Western Defense Commands—varied in practical military importance approximately as Army activities in each area centered in a defensive mission. The Caribbean Defense Command operated in a region where defense of the Panama Canal was the paramount task and where sustained hostile action was always possible. It was an active command with combatant ground and air forces assigned to it.27 The Alaska Defense Command was also an active defense outpost but was under the control of the Commanding General, Fourth Army, and conducted its defensive planning under the supervision of the same officer as Commanding General, Western Defense Command.

The operational functions of the continental defense commands were potential rather than actual until such time as hostilities opened. In constituting them, the War Department designated each of the commanders of the four field armies as commanding general of one of the continental defense commands and, in effect, charged them with organizing separate staffs to plan defense measures for the areas in which the armies were training. The objective was to "integrate the army command" with "what might later be a theater command." The defense commands thus were created to fix responsibility for peacetime planning of regional defense and, in case of hostilities, to assure continuity between planning and the direction of defensive operations. The corps area headquarters already were fully occupied with their primary functions of supply and administration and did not control tactical troops, while the field armies were supposed to be able to move out of their training areas at any time to engage in offensive military operations. The responsibility of the defense commands for regional defense measures could not be made to include operational control over troops or installations without seriously interfering with the normal handling of supplies and training. The extent to which it might become necessary to give operational control to the defense commands therefore was left to be determined by specific circumstances in case of actual hostilities.
In the meantime the commanding general of a defense command, being also in command of the field army in the area, was in a position to correlate planning for defense with activities already going on in the area, and to act promptly in case of hostilities.28

Provision for air defenses of the continental United States was made on a separate basis. The Chief of Staff decided in February 1941 that the "Air defense setup should be in time of peace under the direction and control of the Commanding General of the GHQ Air Force." 29 Accordingly the directive that created the defense commands also established the four continental air forces, centralizing control of air defense measures conducted by them under the GHQ Air Force. After the creation of the Army Air Forces in June 1941, its chief became responsible for the "organization, planning, training and execution of active air defense measures, for continental United States." 30

Later in 1941 Army organizations responsible for defending the United States were further supplemented by new commands in the two outlying areas, Iceland and the Philippines, where American troops were stationed farthest from the continental United States and closest to the zones of combat or potential combat. Although their missions were defensive, their proximity to actual or threatened enemy action gave a special military status to the forces in Iceland and the Philippines beyond that of a base command or even a department. Had hostilities involving the United States already begun, the two new commands probably would have been designated theaters of operations. As it was, they were constituted more nearly like task forces, temporary commands established for specific missions, despite the fact that the missions were not exclusively or at the time even primarily tactical. Their official designations were U. S. Forces in Iceland, commanded by Maj. Gen. Charles H. Bonesteel, and U. S. Army Forces in the Far East, commanded by Lt. Gen. Douglas MacArthur. The former was responsible for assisting in the defense of Iceland, a vital base on the North Atlantic convoy route and outpost of the Western Hemisphere. The latter, which included the troops formerly assigned to the Philippine Department and the forces of the Philippine Army, was given the task of organizing the defense of the Philippines and preparing ground and air forces to oppose with as much strength as possible any Japanese attack on American forces in the Far East.

With the organization of these theater-type commands, the U. S. Army was moving far toward the kind of organization it was to establish in the event of war. Yet the formal maintenance of peaceful relations with other powers and the defensive orientation of national policy inhibited any sharp break with the institutions and procedures of the peacetime Army. As a result, the rapid growth of the Army and the establishment of new military agencies to meet new military situations had created an extraordinarily complex structure under the Chief of Staff.

Origins and Development of the General Staff

The central headquarters of the Army at the beginning of World War II was the War Department. Through it the Chief of Staff supervised the mobilization and administration of the growing Army. Its components in 1940 and 1941 were the offices of the chiefs of the arms and services—successors of the old War Department bureaus—and the War Department General Staff.
In a certain sense the arms and services constituted the administrative and technical staff of General Marshall's headquarters, and the General Staff assisted him in formulating plans and issuing orders to all organizations under his control. The structure of high command and the patterns of higher staff work in the U. S. Army at the beginning of World War II had been set by the developments of the past four decades. Legislation, regulations, and tradition alike placed the military chief of the Army and the Army's highest staffs apart from other military organizations. General Marshall necessarily worked within that structure as best he could, for the most part using officers and staffs as he found them to meet situations as they arose. Only within this general framework of law and custom could he gradually make judicious rearrangements in organization and functions and trace new procedural patterns to replace the old ones that were inadequate.

Before the creation of the General Staff, the President of the United States, Commander in Chief of all the armed forces by provision of the Constitution, entrusted command of the combatant army, the "troops of the line," to a professional soldier called Commanding General of the Army. The Secretary of War was the special adviser to the President on all Army matters, but his primary responsibility extended only to the "fiscal affairs of the Army" as distinct from "its discipline and military control." 31 The commanding general had no effective authority over the semimilitary services upon which the success of military operations by the line soldiers so greatly depended.32 Special bureaus, as they were traditionally called, performed such services for the Army; these services primarily consisted of engineering, ordnance, signal, medical, transportation, supply, and general administrative work. Each of these War Department bureaus commissioned specialist officers in its own branch of the Army and controlled their subsequent careers. The bureaus supervised the noncombatant tasks performed by their officers and men in all Army organizations, including tactical units, above the brigade level. They developed, procured, and distributed the military equipment and supplies which the Army used and on which it subsisted. The Adjutant General's Department, one of the most powerful of the bureaus, kept all official records and issued all the formal orders emanating from the War Department under the authority of the President or the Secretary of War. Thus the bureaus controlled much of the manpower, all of the matériel, and most of the administration of the Army. They composed the administrative and technical staff advising the Secretary of War on policies in their special fields, and in addition were the operating agencies that actually performed the duties required under the policies they helped devise. The bureau chiefs reported directly to the Secretary of War and, especially because they had permanent tenure, enjoyed an almost independent status in the Army. Thus co-ordination of military and semimilitary aspects of War Department work could take place nowhere except in the Office of the Secretary of War. There was no professional soldier with authority broad enough to help accomplish such coordination. There was no staff concerned with military affairs and military operations as distinct from specialized combat, technical, administrative, or supply tasks.33

Experience in time of war had never highly recommended this system of Army control.
It became less and less satisfactory as success more and more came to depend on the efficient mobilization and movement of vast quantities of increasingly specialized equipment and supplies for the support of the combatant troops. At the end of the nineteenth century the Spanish-American War showed that existing machinery for planning and managing the military effort was inadequate for the complexities of modern war.34 Elihu Root, Secretary of War from 1899 to 1904, undertook to recommend a remedy for the deficiencies of Army organization. He worked for many months to convince the Congressional military affairs committees that the War Department as then constituted could not provide the information required or effect the coordination necessary for efficient prosecution of war. In 1902 Secretary Root reported to the President:

The most important thing to be done now for the Regular Army is the creation of a general staff. . . . Our military system is . . . exceedingly defective at the top. . . . We have the different branches of the military service well organized, each within itself, for the performance of its duties. But when we come to the coordination and direction of all these means and agencies of warfare, so that all parts of the machine shall work true together, we are weak. Our system makes no adequate provision for the directing brain which every army must have to work successfully. Common experience has shown that this can not be furnished by any single man without assistants, and that it requires a body of officers working together under the direction of a chief and entirely separate from and independent of the administrative staff of an army (such as the adjutants, quartermasters, commissaries, etc., each of whom is engrossed in the duties of his own special department). This body of officers, in distinction from the administrative staff, has come to be called a general staff.35

In accordance with this analysis and recommendation, the Secretary of War urged the passage of legislation creating a general staff to advise and assist the Secretary of War in integrating the work of the bureaus with combat needs and to develop sound military programs and plans. The general staff idea finally overcame Congressional reluctance, which may have been based partly on public fear of a central staff system commonly identified with Prussian militarism and certainly was based partly on the determined opposition from bureau chiefs whose eminence it threatened.36 An Army general staff corps came into being on 15 August 1903.37 Its strength, including the Chief of Staff, amounted to forty-five officers, who were to be detailed for approximately four-year tours of duty from other branches of Army service. The old title of Commanding General of the Army ceased to exist. The Chief of Staff took over his responsibility for the troops of the line and in addition assumed the crucial extra prerogative of supervising and co-ordinating the technical, administrative, and supply bureaus of the War Department. The law authorizing the reorganization of the Army embodied Secretary Root's idea of a planning and coordinating staff, one which, he said, "makes intelligent command possible by procuring and arranging information and working out plans in detail, and . . . makes intelligent and effective execution of commands possible by keeping all the separate agents advised of the parts they are to play in the general scheme." 38 Spelled out in detail, the duties of the new staff were as follows:
. . . to prepare plans for the national defense and for the mobilization of the military forces in time of war; to investigate and report upon all questions affecting the efficiency of the Army and its state of preparation for military operations; to render professional aid and assistance to the Secretary of War and to general officers and other superior commanders, and to act as their agents in informing and coordinating the action of all the different officers who are subject under the terms of this act to the supervision of the Chief of Staff.39

The significance of this assignment of tasks to the General Staff depended upon the vesting of broad powers in its chief. The law was fairly specific:

The Chief of Staff, under the direction of the President or of the Secretary of War under the direction of the President, shall have supervision of all troops of the line and of The Adjutant General's, Inspector General's, Judge Advocate's, Quartermaster's, Subsistence, Medical, Pay, and Ordnance Departments, the Corps of Engineers, and the Signal Corps. . . . Duties now prescribed by statute for the Commanding General of the Army . . . shall be performed by the Chief of Staff or other officer designated by the President.40

Only the ambiguity of the word "supervision," selected to describe the kind of control he exercised over all Army forces, beclouded the statement of the superior position of the Chief of Staff. In any case, regardless of arguments that later were to arise over the precise meaning of "supervision," the terms of the new legislation permitted the relationship between the Chief of Staff and the Secretary of War to be redefined in a way that made for harmony rather than discord. The new Army Regulations drafted to carry out the provisions of the reorganization act read:

The President's command is exercised through the Secretary of War and the Chief of Staff. The Secretary of War is charged with carrying out the policies of the President in military affairs. He directly represents the President and is bound always to act in conformity to the President's instructions. The Chief of Staff reports to the Secretary of War, acts as his military adviser, receives from him the directions and orders given in behalf of the President, and gives effect thereto.41

Secretary Root dwelt on the fact that the new law did not impair civilian control of the Army. In the words of his report for 1903:

We are here providing for civilian control over the military arm, but for civilian control to be exercised through a single military expert of high rank, who is provided with an adequate corps of professional assistants to aid him in the performance of his duties, and who is bound to use all his professional skill and knowledge in giving effect to the purposes and general directions of his civilian superior, or make way for another expert who will do so.42

The creation of the General Staff Corps was a great advance toward centralization and professionalism in the administration of military affairs, but the General Staff encountered many difficulties in its early years.
For instance, Secretary Root had silenced some of his initial critics by emphasizing its lack of either executive or administrative authority.43 This very emphasis contributed to the tradition, wholeheartedly supported by the older administrative and technical bureaus, that "supervision" of the execution of War Department instructions or policies by the Chief of Staff or by the General Staff in his behalf did not entail any kind of intervention in or even detailed observation of the actual workings of subordinate agencies. Until World War I the General Staff confined itself almost exclusively to formulating general policies and plans and left their execution to the troop units and to the bureaus, the operating or performing elements of the Army.44 During World War I the General Staff, particularly after its reorganization in 1918, showed a great deal of vigor, exerting increasingly detailed supervision and control over the technical and administrative services. The Chief of Staff at the time, Gen. Peyton C. March, was willing to admit the inadvisability of having the General Staff do the work of the bureaus. He defended his staff's inclination to do so because of an urgent need to solve practical supply and transportation difficulties that no amount of policy planning would remedy.45 Nevertheless, the General Staff was vulnerable to criticism within the terms of its own philosophy.

Early in World War I the General Staff was handicapped in developing an effective program of any kind because of the rapid rotation of officers in the position of Chief of Staff.46 General March, however, who took over the duties of Chief of Staff on 4 March 1918, remained on duty until 30 June 1921. At the beginning of his tenure he promptly approved a previously expressed opinion that the "organization of the War Department as it existed at the beginning of the war was in many respects entirely inadequate to meet the requirements of the situation." 47 Accordingly he undertook a thorough reorganization along the general lines already marked out a few weeks before he took office.48 This 1918 reorganization as finally carried out revamped the General Staff and affirmed the powers of the Chief of Staff in relation to other officers and to the bureaus. It gave the General Staff something comparable to its post-World War I structure. Staff functions were divided among four divisions: (1) Military Intelligence, (2) War Plans, (3) Operations, and (4) Purchase, Storage, and Traffic. Each division was headed by an officer called a director.49 In addition, the 1918 reorganization strengthened the staff by clarifying the authority of its chief. War Department General Order 80, 26 August 1918, provided:

The Chief of the General Staff is the immediate adviser to the Secretary of War on all matters relating to the Military Establishment, and is charged by the Secretary of War with the planning, development and execution of the Army Program.
The Chief of Staff by law (act of May 12, 1917) takes rank and precedence over all the officers of the Army, and by virtue of that position and by the authority of and in the name of the Secretary of War, he issues such orders as will insure that the policies of the War Department are harmoniously executed by the several Corps, Bureaus, and other agencies of the Military Establishment and that the Army Program is carried out speedily and efficiently.

This language, at least according to General March's interpretation, made the Chief of Staff the superior of the commander of the American Expeditionary Forces.50 Nevertheless, throughout World War I the authority of the Chief of Staff was confused by the fact that General John J. Pershing exercised virtually independent command over Army forces in France, the single important theater of operations. Army Regulations drafted in accordance with the 1903 legislation creating the position of the Chief of Staff explicitly stated that the President had authority to delegate command of all or part of the Army to an officer other than the Chief of Staff, and President Woodrow Wilson had exercised this prerogative.51 General Pershing considered that he "commanded the American Expeditionary Forces directly under the President" and that "no military person or power was interposed between them." 52 In view of this attitude, of the magnitude of the job to be done in France, and of the indisputable paucity of qualified staff officers, General Pershing built up an independent staff in the theater to help him direct military operations.53 For most purposes the War Department was simply a mobilization and supply agency in the zone of interior, in a position of authority parallel perhaps with the American Expeditionary Forces (AEF) but clearly not superior. Since the effort of the United States was primarily made in one theater, in which liaison with Allied forces was maintained on the spot, military operations were conducted successfully without any very close co-ordination between the theater of operations and the General Staff. As a result of these circumstances, the end of World War I found the command situation considerably confused despite the special eminence given the Chief of Staff in General Order 80 of 1918. The General Staff was handicapped by this fact as well as by its other limitations.

The War Department After World War I

The Army underwent a thorough reorganization after the end of World War I. The National Defense Act, as revised on 4 June 1920, laid down the principal elements of the system which was to last almost unchanged for twenty years. It established the framework for wartime mobilization of a citizen "Army of the United States," including, besides men who might be drafted, Regular Army, National Guard, and Reserve components.54 General Pershing became Chief of Staff on 1 July 1921 and helped rebuild the Regular Army in accordance with its central place in the new pattern. Several additional branches of the service, including the four combat components of the line, the Infantry, the Cavalry, the Coast Artillery, and the Field Artillery, were established by law on an administrative level with the service bureaus.
The independent power of all the bureaus was permanently reduced in one important respect by the inauguration of a single promotion list for most officers instead of the former system of separate lists in each branch.55

Within this Army framework, the General Staff assumed something very close to its World War II form in accord with the recommendations of a board convened to study this problem. General Pershing enthusiastically approved the findings of the board, which was headed by Maj. Gen. James G. Harbord, his deputy. The new staff organization went into effect on 1 September 1921 and became part of basic Army Regulations in November of the same year.56 The General Staff was given as its primary responsibility the preparation of plans for "recruiting, mobilizing, supplying, equipping, and training the Army for use in the national defense." It was also required to "render professional aid and assistance to the Secretary of War and the Chief of Staff." Functional assignment of responsibilities represented the results of World War I experience both in the zone of interior and in France. Four "G" divisions, called G-1, G-2, G-3, and G-4, dealt respectively with the personnel, intelligence, mobilization and training, and supply aspects of General Staff work.57 A fifth staff unit, called the War Plans Division, was assigned broad responsibilities for strategic planning. It was instructed also to be ready to "provide a nucleus for the general headquarters in the field in the event of mobilization," provision of such a nucleus having been called for in the Harbord Board report.58 The division heads each received the title of Assistant Chief of Staff.

General Pershing's replacement of General March as Chief of Staff in 1921 brought an end for the time being to the practical situation that had obscured the import of Army orders defining the authority of the Chief of Staff. General Pershing himself held the rank of "General of the Armies," and would unquestionably command the field forces in the event of a mobilization during his tenure. The Harbord Board wished also to avert any possibility in the future of two great, nearly independent commands such as those exercised by the Chief of Staff and the commanding general of the AEF in 1917 and 1918. Its subcommittee assigned to draft recommendations on the GHQ problem came to the conclusion that it was highly desirable for the Chief of Staff to be designated to command in the field in the event of mobilization.59 This committee stated that all its recommendations rested on the "working basis" that "it must be possible to assign the Chief of Staff to command in the field." 60

Despite the apparent desires of the members of the Harbord Board, the positive designation of the Chief of Staff as commanding general of the combatant army in the field did not go into either the General Orders or the Army Regulations implementing the Harbord recommendations. In subsequent peacetime years the U. S. Army was small and its largest tactical unit was the division. According to military usage the "field forces" did not actually exist until a number of divisions had been organized for tactical purposes into one or more field armies.61 General Pershing and his two successors, Maj. Gen. John L. Hines and Gen. Charles P. Summerall, did not press the issue of formal title.
About ten years later, when field armies were activated as skeleton tactical organizations containing the combatant troops, the term Commanding General, Field Forces, came into official use as a second title for General MacArthur, who was Chief of Staff from November 1930 until October 1935. Finally in 1936, during the tenure of Gen. Malin Craig, the dual designation of the Chief of Staff appeared in print in formal Army Regulations. They then included the stipulation that the "Chief of Staff in addition to his duties as such, is in peace the Commanding General of the Field Forces and in that capacity directs the field operations and the general training of the several armies, of the overseas forces, and of G. H. Q. units." 62 Although these Army Regulations, still in effect at the beginning of World War II, specifically reserved for the President the power to select an Army officer other than the Chief of Staff to assume high command in the field, President Roosevelt from the beginning made it clear in his handling of Army affairs that General Marshall was the superior officer to whom he would turn for advice and who would be held responsible for the Army's conduct in the war.63 This fact, plus the intimate understanding with which General Marshall and Secretary of War Stimson worked together throughout the period of hostilities, made the Chief of Staff's position unassailable. General Marshall delegated tremendous responsibilities and powers to his field generals and relied greatly on their individual initiative and capacities for success. Nevertheless, he retained in his own hands, insofar as it could remain with one man in a coalition war, control of the Army's conduct of military operations. It was significant that he exercised his command from Washington, where he also had effective authority over the Army's zone of interior programs. Thus General Marshall had a far broader responsibility than his predecessors in World War I. Moreover, he faced the new and intricate problems of a struggle involving many great industrial nations and joint operations by ground, sea, and air forces employing modern weapons. Yet at the outset he had to discharge that responsibility with the assistance of the same organization and under the same procedural traditions as had been established soon after the end of World War I.

In 1940 and 1941 the chiefs of the arms and services, who performed dual functions as heads of operating agencies and as administrative or technical staff advisers, still reported directly to the Chief of Staff. All officers continued to be commissioned in one of these arms or services—that is, the Infantry, Field Artillery, etc.—and enlisted men "belonged" to the branch to which they were currently assigned. Procurement and distribution of equipment and other supplies, training of officers and some specialized units, and administrative management of the bulk of Army affairs were still the functions of the successors to the bureaus. The offices of the chiefs of the services paid, fed, equipped, rendered legal and medical service to, and did the administrative work for the Army as a whole. The principal branches in the service category (excluding the service arms) at the beginning of World War II were the Adjutant General's Department, the Inspector General's Department, the Judge Advocate General's Department, the Quartermaster Corps, the Finance Department, the Medical Department, the Ordnance Department, and the Chemical Warfare Service.
Two of these branches, Ordnance and Chemical Warfare, developed actual weapons of war. Four, including Ordnance, Chemical Warfare, the Medical Department, and the Quartermaster Corps, organized special units for assignment to the larger Army units or headquarters requiring their particular services. In these latter respects the services resembled the combatant branches, the five arms and, more especially, the two service arms.

The combat army was built around the Air Corps and the team of ground force combat arms, the Infantry, Cavalry, Field Artillery, and Coast Artillery. These branches were responsible for developing equipment, training personnel, and organizing units for the specialized job that each branch performed in actual combat. They produced the troops of the line of the old Army. The service arms—the Corps of Engineers and the Signal Corps—similarly developed equipment, trained technicians, and formed considerable numbers of units for combat service, but their primary mission was to develop efficiency in the performance of their particular specialized functions in support of the "line" Army.

The growth of a comparatively independent military organization, the Army Air Forces, out of one of the branches constituted the most radical change in War Department organization before World War II. The Air Service, which became a branch of the Army in 1918, received the name "Air Corps" in 1926. Like the ground combat branches, the Air Corps was responsible for developing its own kind of equipment and for training personnel to use it. In 1935 it developed the GHQ Air Force, the combatant air establishment, which represented the end product of Air Corps supply and training work in the same way that the field armies were the end product of the work of the other arms and service arms. The creation of an integrated combatant air force marked an important stage in the growth of the Army's air force toward acquiring a strategic mission of its own, air operations to destroy the enemy's will and capacity to fight by air bombardment, in addition to its conventional tactical mission of supporting operations by ground armies. The designation in October 1940 of the chief of the Air Corps, General Arnold, to act concurrently as Deputy Chief of Staff for Air gave the air arm a voice on the high command level as well as on the "bureau" level and the combatant level of the War Department. The mutual understanding of General Marshall and General Arnold made an operational success of an administrative arrangement that was at best complex and awkward.

In June 1941 the combatant air organization, renamed the Air Force Combat Command, and the Air Corps were grouped together to form the Army Air Forces under General Arnold as chief.64 The new organization was intended to have, "so far as possible within the War Department, a complete autonomy similar in character to that exercised by the Marine Corps of the Navy." 65 Thenceforth throughout World War II the air force of the United States constituted a special and largely autonomous entity within the Army. The special needs of the air arm and the policy of employing its special power, particularly as a long-range striking force, had to be correlated with the needs, particularly for support aircraft, and the strategic objectives of the ground elements of the Army.
The Chief of Staff, assisted by the General Staff, continued to exercise broad supervisory control over the air forces in an effort to develop for the Army as a whole a balanced program of production, training, and military operations. Consequently, the General Staff, with Air officers serving on it, was in effect a joint or interservice staff responsible under the Chief of Staff for the employment of two complementary military weapons, the ground and the air arms.66

During 1940 and 1941 the War Department General Staff assisted the Chief of Staff in co-ordinating the whole of the military machine under his control, the territorial and tactical organization and the arms and services insofar as they were operating agencies. In all, about one hundred officers were serving on the General Staff in mid-1939 and more than twice that many by mid-1941.67 In supervising their work in particular and Army activities as a whole, the Chief of Staff in 1939 had the assistance of the Deputy Chief of Staff, who regularly handled budgetary, legislative, and administrative matters, and had authority to act for the Chief of Staff in his absence.68 In 1940 two new deputies, one for air matters and one for equipment, supply, and other G-4 activities, were appointed to help get command decisions on a great many questions which were clogging the General Staff machinery and which had to be disposed of in order to get ahead with the rapid expansion of the Army.69 The Chief of Staff was further aided by the Secretary of the General Staff, who kept records for the immediate Office of the Chief of Staff and his deputies, initiated staff action as required by them, and supervised the routing of papers and studies to and from the appropriate staff divisions.70 Co-ordination of General Staff work for the most part had to be done by the Chief of Staff himself, although he was assisted in the process by his principal deputy. This latter officer periodically met with the War Department General Council, which consisted of the Assistant Chiefs of Staff, G-1, G-2, G-3, G-4, and WPD, as well as the chiefs of arms and services. Increasingly during the emergency years of 1939, 1940, and 1941, the Chief of Staff settled problems simply by calling the staff officers concerned into informal conference and reaching a decision therein.71

General Staff Doctrine and Procedure

The United States, in setting up its General Staff Corps in 1903, had created a unique institution with its own characteristic procedures.72 Like most higher military staffs of the nineteenth and twentieth centuries, the new General Staff derived a great deal of its functional theory and terminology from the Prussian system. In German usage the Generalstab had been understood to be almost literally the "General's Staff," that is, a staff versed in generalship, or a staff concerned with military operations. In contrast, the phrase as usually interpreted in the U. S. Army conveyed the correct but rather vague idea of a staff with "general" rather than specific responsibilities.73 Army Regulations and Army practice emphasized that the highest general staff, the War Department General Staff, had as its primary concern general planning and policy making.

Until 1903 the Army's technical, administrative, and supply agencies collectively had been termed the "General Staff." 74 After 1903 and through 1941 they still constituted both in numbers and in established prestige a major part of the War Department.
The early activities of the General Staff, particularly during World War I, fastened its attention on the zone of interior, where mobilization and supply were the major tasks. The bureaus were handling these tasks, as they always had, and the main contribution of the General Staff was the preparation of basic studies on organization, training, production, transportation, and supply.75 The many high-ranking officers who returned from France after World War I to take important positions in the War Department under General Pershing naturally tended to assume automatically that the General Staff served best when it devoted itself primarily to the zone of interior and did not interfere much with the conduct of military operations in the field. The unwritten, unquestioned law preserving broad discretionary powers for the commander of an overseas theater became and remained one of the basic traditions of the Army. Between the operating agencies in the zone of interior and the overseas commands, the General Staff was squeezed into a narrow compass. Its avenue of escape was to rise above operating at home and operations abroad. Thus Army Regulations from 1921 through 1941 defined the basic duty of the General Staff as the preparation of "necessary plans for recruiting, mobilizing, organizing, supplying, equipping, and training the Army." 76

Once its area of responsibility had been marked out as coincident with these military programs and once its role there was confined to very general planning, the General Staff developed appropriate procedural traditions. The War Department manual for staff officers current at the beginning of World War II stated categorically: "A staff officer as such has no authority to command." 77 This statement did not alter the fact that the general staff of any commander could act with his authority, insofar as he approved, not only in devising plans and issuing orders, but also in observing the "execution of orders to insure understanding and execution in conformity with the commander's will." 78 In a field command, the general staff officers with combat troops had a strong incentive and ample opportunity to perform this final function of command. In the General Staff there was much less emphasis on seeing that things were done than on helping determine how they should be done. Army Regulations emphasized the point that the General Staff was not supposed to do the actual work called for in the plans it was making. They specifically stated: "The divisions and subdivisions of the War Department General Staff shall not engage in administrative duties for the performance of which an agency exists, but shall confine themselves to the preparation of plans and policies (particularly those concerning mobilization) and to the supervision of the execution of such plans and policies as may be approved by the Secretary of War." 79

In other words the General Staff was designed first and foremost to think about military activities and, to a smaller extent, to see that they were conducted in conformity with approved thinking; but it was not at all to participate in them. Normally it merely furnished memoranda approved by the Chief of Staff or the Secretary of War to The Adjutant General, who issued official instructions on behalf of the War Department to the Army agencies concerned, principally the arms and services and the tactical headquarters such as the field armies and the overseas departments.
These organizations were responsible for performing the military duties necessary to carry out plans and policies. Such executive or administrative tasks, including training and mounting garrison defenses (the peacetime equivalent of military operations), were not staff duties, and the General Staff tried not to take part in them. Often the problems it spent months in studying concerned picayune matters, but this fact was a reflection of the smallness of the Army and the severe fiscal limitations put upon it in peacetime. They were viewed as problems of general significance according to the perspective of the time.

True, the General Staff was supposed to supervise the execution of plans and policies it had helped formulate in order to observe the results. This supervision provided the basis for future staff recommendations and, if faulty execution of orders was discovered, made it possible to correct the deficiency through appropriate command channels. But the kind of direct inspection or observation that enabled a general staff in the field to check on compliance with orders was not always feasible for the War Department. In technical and administrative work, about the only way to be certain that War Department policy was carried out in practice was to become intimately acquainted with the performance of the work in detail. The General Staff could not consistently take such action, not only because the subordinate agencies would object but also because it was too small to assume such a burden. Comparing data on troop dispositions, unit strength, training problems, and levels of supply in the overseas commands against current plans and policies was easier, but securing up-to-date information of the kind required was still a difficult task. Correspondence with the troop commanders, especially with the overseas departments, was slow. It was also voluminous. Misunderstandings of intent and fact in written instructions and reports were hard to avoid, to detect, and to remedy. Travel to and from outlying bases on temporary duty was restricted by the necessity for economy. Under these circumstances the War Department could not effectively control tactical movements designed to carry out strategic plans or specific strategic instructions emanating from Washington.

For all these reasons, as well as for more adventitious or personal ones that may have existed, officers on duty in the General Staff as a rule did not intervene in the conduct of Army affairs by subordinate agencies, whether operating staffs in the zone of interior or tactical commands in the field. A clear-cut case of disregard of approved policy anywhere in the Army plainly warranted intervention in order to make the Chief of Staff's orders effective. It was a common presumption, however, that senior commanders in the field knew their responsibilities and how to discharge them, as did the chiefs of the arms and services, and that they did not require constant surveillance by a staff officer in Washington.

Continuous and systematic checking of all Army activities to ascertain compliance in detail with War Department instructions ("following up," as Army officers called it) was left largely to the exertions and judgment of individual officers. This responsibility was neither reflected in the internal organization of the General Staff nor emphasized in its traditions.
To a great extent the General Staff in the early years of General Marshall's leadership was still working on the assumption that had been noted by General Pershing in 1923 as basic to its work:

It is evident that proper General Staff procedure must be slow, even when there is substantial agreement as to what action is desirable. When there are conflicting ideas and interests, as there usually are when dealing with important questions, the different ideas must be investigated and threshed out with the greatest care, with the result that the time required to obtain a decision is multiplied many times. This necessary slowness of procedure in General Staff work makes it essential and proper that the General Staff should confine itself entirely to matters of the broadest policy. Its procedure is wholly unadapted to an operating service.80

The procedure to which these official remarks referred was mainly concerned with the formal memorandum, usually called more descriptively the staff study. Concurrence by any of the five staff divisions and by any of the chiefs of the arms and services, depending on whether the matter was of primary concern to them, might be, and very often was, required before a particular General Staff study could be approved. Specific approval by the Chief of Staff or the Secretary of War was secured in every important case and in many comparatively trivial ones before any of the Assistant Chiefs of Staff issued instructions for carrying out the plan or policy recommended in any staff study.81 There was nothing wrong with this procedure in principle, or with the tradition it reflected. As long as the Army was small and there was no immediate emergency, these procedures did not handicap the Army in carrying on its routine activities. The War Department worked slowly but satisfactorily.

By the time the emergency of World War II came, the habits of War Department General Staff officers had tended to solidify in the forms established during the 1920's and early 1930's. After 1939 the Army was no longer able to enjoy the luxury of thinking about military operations in the distant future. Ready or not, it might have to carry them out on a moment's notice. More and more often the staff divisions violated their own traditions and descended from their theoretically ideal plane of high abstraction to see that certain urgent steps were taken in building the new Army. It was characteristic that when the threat of war thus spurred the General Staff to new vigor, the most frequent criticisms were offered, even by staff officers, on the grounds that it was operating too much, concerning itself with the details of Army administration.82 Yet the overwhelming danger, dimly seen or felt as the crisis developed, was that the Chief of Staff might, as a result of enemy action, find himself suddenly in command of one or more active theaters of operations. Each of the overseas bases was a potential combat zone. The General Staff, whether planning as it was supposed to do or operating as it often did, was unsuited to act as a field-type general staff in helping direct military operations.
So long as the General Headquarters envisioned in 1921 was only a theory, as it had remained for nearly twenty years, the Chief of Staff would have no staff specifically instructed and carefully organized to help him control military activities in these areas of danger and in all the theaters of operations that would develop in case of war.

The United States Government was pledged to a policy of seeking peace at nearly any cost after war broke out in Europe in 1939. The Army was in no condition to conduct major military operations. These circumstances gravely complicated the task of building and managing a first-class fighting force. But a weakness potentially more crippling was inherent in the structure of the high command. In 1932, when he was Chief of Staff, General MacArthur pointed it out: "The War Department has never been linked to fighting elements by that network of command and staff necessary to permit the unified tactical functioning of the American Army." 83 The situation had not changed materially in the next eight years. Moreover, General MacArthur had promptly diagnosed the ultimate Army need that led to the creation of a new central staff to support the high command in World War II. He urged adoption of a system through which the "Chief of Staff, in war, will be enabled to center his attention upon the vital functions of operating and commanding field forces" and which would serve to "link in the most effective manner military activities in the Zone of the Interior to those in the Theater of Operations." 84 Achievement of this goal still lay ahead in mid-1941.
The rhesus macaque and common langur are found throughout Rajasthan with the exception of the arid Thar desert. Monkeys are left unharmed by people because of religious sentiments, which accounts for their bold behaviour, especially near towns and villages, where they snatch food and offerings from unwary pilgrims. The langur feeds on wild leaves and fruit. A wasteful feeder, it drops large quantities on the ground, where they are consumed by the deer and wild boar that often move with the langur. The langur also sounds the alarm to announce the presence of large predators like the tiger or leopard, warning their prey.

The state provides shelter to around 500 species of birds, some of which are rare and endangered. About 50 per cent of these species are local and the balance migratory, mostly from eastern Europe, northern Asia and Africa. It is easy to spot as many as 100 species of birds in just a day in Bharatpur.

The sarus is a handsome crane and the tallest flying bird in the world. The state's only resident crane, it is commonly found in its eastern and southern parts. Sarus cranes usually live in pairs or small family groups, but congregate in large groups in the summer months before the onset of the monsoon. Even popular legend acknowledges that these birds pair for life, the surviving partner pining away on the death of the other. They indulge in an elaborate courtship dance and nest in shallow waters using a heap of grass and reeds. Both partners incubate the eggs. Partners sometimes greet each other while exchanging incubation duties at the nest and perform their courtship dance accompanied by trumpeting.

The majestic great Indian bustard or godawan is the state bird and a protected species. Easily spotted in many areas of the desert region, the Desert national park near Jaisalmer is a good area to look for it and, during winter, for the migratory houbara bustard. The lesser florican too is becoming scarce in Rajasthan, though a few birds can be spotted during their breeding season (the monsoon) in the fields near Nasirabad and Kishangarh, in the district of Ajmer. Because peacocks are considered sacred by Hindus, they are quite common in the forests, fields and villages of the state. In the Kumbhalgarh and Mt Abu wildlife sanctuaries, the graceful grey jungle fowl is to be found at the northernmost limit of its distribution in India.

The wetlands and waterbodies of Rajasthan provide refuge to a large number of migratory and resident birds. These include ducks, cranes, pelicans, storks, herons, jacanas, ibises and other aquatic birds. The migratory birds are accompanied by a number of predatory birds. The Siberian crane is the rarest bird that comes to Bharatpur: its numbers have dwindled from over 40 to a mere three birds in less than 20 years, probably because it is hunted on its migration route over Pakistan and Afghanistan. Attempts to introduce captive-bred birds into the wild have not succeeded. Common cranes visit Rajasthan in winter and can be observed at the Keoladeo national park. Demoiselle cranes visit western Rajasthan in large numbers.
by Staff Writers Los Angeles CA (SPX) Aug 24, 2012 Human and chimp brains look anatomically similar because both evolved from the same ancestor millions of years ago. But where does the chimp brain end and the human brain begin? A new UCLA study pinpoints uniquely human patterns of gene activity in the brain that could shed light on how we evolved differently than our closest relative. Published in the advance online edition of Neuron, the identification of these genes could improve understanding of human brain diseases like autism and schizophrenia, as well as learning disorders and addictions. "Scientists usually describe evolution in terms of the human brain growing bigger and adding new regions," explained principal investigator Dr. Daniel Geschwind, Gordon and Virginia MacDonald Distinguished Professor of Human Genetics and a professor of neurology at the David Geffen School of Medicine at UCLA. "Our research suggests that it's not only size, but the rising complexity within brain centers, that led humans to evolve into their own species." Using post-mortem brain tissue, Geschwind and his colleagues applied next-generation sequencing and other modern methods to study gene activity in humans, chimpanzees and rhesus macaques; the macaque, an outgroup more distantly related to both, allowed the researchers to see where changes emerged between humans and chimpanzees. They zeroed in on three brain regions - the frontal cortex, hippocampus and striatum. By tracking gene expression, the process by which the information encoded in genes is used to manufacture cellular proteins, the scientists were able to search the genomes for regions where the DNA diverged between the species. What they saw surprised them. "When we looked at gene expression in the frontal lobe, we saw a striking increase in molecular complexity in the human brain," said Geschwind, who is also a professor of psychiatry at the Semel Institute for Neuroscience and Behavior at UCLA. While the caudate nucleus remained fairly similar across all three species, the frontal lobe changed dramatically in humans. "Although all three species share a frontal cortex, our analysis shows that how the human brain regulates molecules and switches genes on and off unfolds in a richer, more elaborate fashion," explained first author Genevieve Konopka, a former postdoctoral researcher in Geschwind's lab who is now the Jon Heighten Scholar in Autism Research at University of Texas Southwestern Medical Center. "We believe that the intricate signaling pathways and enhanced cellular function that arose within the frontal lobe created a bridge to human evolution." The researchers took their hypothesis one step further by evaluating how the modified genes linked to changes in function. "The biggest differences occurred in the expression of human genes involved in plasticity - the ability of the brain to process information and adapt," said Konopka. "This supports the premise that the human brain evolved to enable higher rates of learning." One gene in particular, CLOCK, behaved very differently in the human brain. Considered the master regulator of circadian rhythm, CLOCK is disrupted in mood disorders like depression and bipolar disorder. "Groups of genes resemble spokes on a wheel - they circle a hub gene that often acts like a conductor," said Geschwind. "For the first time, we saw CLOCK assuming a starring role that we suspect is unrelated to circadian rhythm.
Its presence offers a potentially interesting clue that it orchestrates another function essential to the human brain." When comparing the human brain to those of the non-human primates, the researchers saw more connections among gene networks that featured FOXP1 and FOXP2. Earlier studies have linked these genes to humans' unique ability to produce speech and understand language. "Connectivity measures how genes interact with other genes, providing a strong indicator of functional changes," said Geschwind. "It makes perfect sense that genes involved in speech and language would be less connected in the non-human primate brains - and highly connected in the human brain." The UCLA team's next step will be to expand their comparative search to 10 or more regions of the human, chimpanzee and macaque brains. Geschwind and Konopka's coauthors included Tara Friedrich, Jeremy Davis-Turak, Kellen Winden, Fuying Gao, Leslie Chen and Rui Luo, all of UCLA; Michael Oldham of UC San Francisco; Guang-Zhong Wang of the University of Texas Southwestern Medical Center; and Todd Preuss of Emory University. University of California - Los Angeles Health Sciences
Jarre Mambila (N° 16716) Equipped with an anthropo-zoomorphic spout, the spherical container is studded with peg patterns, which echo those of the hairstyle of the male figure forming the collar. The patina is divided between yellow ochre and, in a lower proportion, red ochre. A second circular orifice appears on the back, under the back of the creature, whose head is in the image of the suaga mask representing an animal difficult to identify, although the dog has a role in the suaga festival rituals. Despite their small number, the thirty thousand Mambila (or Mambilla, Mambere, Nor, Torbi, Lagubi, Tagbo, Tongbo, Bang, Ble, Juli, Bea) ("men" in Fulani), located in northwestern Cameroon on both sides of the border between Cameroon and Nigeria, have created a large number of masks and statues easily identifiable by their heart-shaped faces. Although the Mambila believe in a creator god named Chang or Nama, they worship only their ancestors. Their leaders were buried in granaries, like grain, because they were supposed to symbolize prosperity. The Mambila are farmers and mainly grow coffee. Their masks and statues were not to be seen by women.
Not only are Afghan Hounds gorgeous and regal, they are also extremely interesting animals. Here are some fun and unusual facts about the Afghan hound that you might not know. In their native home of Afghanistan these dogs were used to hunt food including hare, gazelles, wolves and even snow leopards. They are very agile, fast and powerful, making them perfect for chasing prey. This is why domesticated Afghan Hounds have such a strong instinct to chase animals today. The long, beautiful coat of the Afghan hound is typical of animals living at very high altitudes. The coat protects the dogs from the extreme winter weather that is typical in the mountains of Afghanistan. British soldiers introduced these dogs to England sometime around 1890, after the second Afghan war. The Afghan people originally refused to sell the dogs to outsiders, so the first Afghan hound wasn't brought to the United States until 1926. The Afghan hound wasn't especially popular when first introduced in the United States and didn't become a recognized breed until the 1930s. On August 3, 2005, a Korean scientist named Hwang Woo-Suk claimed that his team of researchers had cloned the first dog, an Afghan hound named Snuppy. Afghan hounds are often considered the "super models" of the dog world because of their elegant appearance and aloof attitude. When Afghan hounds find themselves in an overly stressful situation they often refuse to move, and sometimes they even go to sleep until the incident or source of stress has passed. They also occasionally develop a "drippy" nose when they become upset. White markings, especially on the face, are considered a fault in show dogs and are believed to indicate impure breeding. An Afghan hound can give birth to up to 15 puppies in one litter; the average, however, is only eight. The true start of this breed in the United States began in 1931, when Zeppo Marx of the Marx Brothers and his wife brought two dogs, Asra of Ghanzi and Westmill Omar, back to the States for breeding. Afghans can gallop at speeds of up to 40 miles (64 kilometers) per hour and can leap seven feet (2 meters) from a standing position. Afghan hounds are also known as the Baluchi Hound, Balkh Hound, Barutzy Hound and Kabul Hound. These dogs are members of the greyhound family. The Afghan hound has very large feet that, despite their graceful appearance, make them perfectly suited for climbing over the rocky terrain found in their native homeland. The oldest known illustration of an Afghan hound was copied in a set of letters written in India and printed in England in 1813. Some people believe that the Afghan hound lived in Egypt thousands of years ago, due to drawings of a dog that resembled the Afghan, but no physical evidence of the dogs has been discovered inside the tombs.
GFS (Global Forecast System) Global Model from the National Centers for Environmental Prediction (NCEP), updated four times per day, at 03:30, 09:30, 15:30 and 21:30 UTC (Greenwich Mean Time: 12:00 UTC = 13:00 BST), on a 0.5° x 0.5° grid for forecast times up to 384 hrs. The Soaring Index map - updated every 6 hours - shows the modelled lift rate produced by thermals (convective clouds). The index is based on weather information between 5,000 feet (1,524 metres) and 20,000 feet (6,096 metres) and is expressed in Kelvin. Table 1: Characteristic values of the Soaring Index for soaring: -10 to 5; 5 to 20. Table 2: Critical values of the Soaring Index: isolated showers, 20% risk of thunderstorms; occasional showers, 20-40% risk of thunderstorms; frequent showers, 40-60% risk of thunderstorms; 60-80% risk of thunderstorms; >80% risk of thunderstorms. The Global Forecast System (GFS) is a global numerical weather prediction computer model run by NOAA. This mathematical model is run four times a day and produces forecasts up to 16 days in advance, but with decreasing spatial and temporal resolution over time; it is widely accepted that beyond 7 days the forecast is very general and not very accurate. The model is run in two parts: the first part has a higher resolution and goes out to 180 hours (7 days) into the future; the second part runs from 180 to 384 hours (16 days) at a lower resolution. The resolution of the model varies within each part: horizontally, it divides the surface of the earth into 35- or 70-kilometre grid squares; vertically, it divides the atmosphere into 64 layers; and temporally, it produces a forecast for every 3rd hour for the first 180 hours, after which forecasts are produced for every 12th hour. Numerical weather prediction uses current weather conditions as input to mathematical models of the atmosphere to predict the weather. Although the first efforts to accomplish this were made in the 1920s, it wasn't until the advent of the computer and computer simulation that it was feasible to do in real time. Manipulating the huge datasets and performing the complex calculations necessary to do this at a resolution fine enough to make the results useful requires some of the most powerful supercomputers in the world. A number of forecast models, both global and regional in scale, are run to help create forecasts for nations worldwide. Use of model ensemble forecasts helps to define the forecast uncertainty and extend weather forecasting farther into the future than would otherwise be possible. Wikipedia, Numerical weather prediction, http://en.wikipedia.org/wiki/Numerical_weather_prediction (as of Feb. 9, 2010, 20:50 UTC).
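The two-part forecast cadence described above is easy to make concrete. Here is a minimal Python sketch (illustrative only, not part of any GFS tooling) that enumerates the forecast lead times of a single model run: every 3rd hour out to 180 hours, then every 12th hour out to 384 hours.

# Enumerate the forecast lead times of one GFS run, assuming the
# cadence described above: 3-hourly to 180 h, then 12-hourly to 384 h.
def gfs_lead_times():
    high_res = list(range(0, 181, 3))    # 0, 3, 6, ..., 180 (higher-resolution part)
    low_res = list(range(192, 385, 12))  # 192, 204, ..., 384 (lower-resolution part)
    return high_res + low_res

leads = gfs_lead_times()
print(len(leads), "forecast steps per run")  # 78
print(leads[:4], "...", leads[-2:])          # [0, 3, 6, 9] ... [372, 384]

Run four times a day, this cadence yields 4 x 78 = 312 forecast fields per variable per level each day.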
Scrabble word: TONSILS Definitions of TONSILS in dictionaries: - noun - either of two masses of lymphatic tissue, one on each side of the oral pharynx - noun (TONSIL) - a lymphoid organ [n -S]; related adjective: TONSILAR There are 7 letters in TONSILS: I L N O S S T
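Pages like this are produced by a word finder that checks which dictionary entries can be assembled from a given rack of letters. A minimal Python sketch of that check follows (the toy dictionary is illustrative; a real tool would load an official tournament word list):

# Find dictionary words that can be built from the letters of a rack.
from collections import Counter

def buildable(word, rack):
    # A word is buildable if the rack holds at least as many copies
    # of each letter as the word needs.
    need, have = Counter(word), Counter(rack)
    return all(have[c] >= n for c, n in need.items())

def find_words(rack, dictionary):
    return sorted(w for w in dictionary if buildable(w, rack))

words = {"tonsil", "tonsils", "lions", "slot", "snit", "toss", "onset"}
print(find_words("tonsils", words))
# ['lions', 'slot', 'snit', 'tonsil', 'tonsils', 'toss']  ('onset' needs an E)

Note that TONSILS has two S's, so words like TOSS qualify, while any word needing a letter outside I, L, N, O, S, S, T is rejected.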
Your chances of developing iron deficiency anemia are increased if you have any of the following risk factors:
- Heavy menstrual bleeding
- Recently had a baby
- Gave birth to multiple children in succession: pregnancy-induced iron deficiency anemia usually resolves within a few months of giving birth, but may persist in women from low-income groups due to a lack of nutritional supplements
- Lactation/breastfeeding, which increases the demand for iron
- Vegetarianism: a diet lacking red meat, which is high in iron. Iron from many vegetables is poorly absorbed; however, not all vegetarians develop anemia
- A lack of folic acid, vitamin B6, or vitamin B12
- Gastrointestinal disease: celiac and Crohn's disease, inflammatory bowel disease (IBD), colon cancer, or gastric bypass or gastric banding
- Blood loss
- Recent surgical procedure
- History of bleeding disorders or blood disease (such as hemophilia)
- GI bleeding due to chronic or continuous aspirin/NSAID use
- Body Mass Index (BMI) over 30
- History of eating disorders (bulimia, anorexia)
- Certain types of hereditary conditions, such as sickle cell disease or thalassemia
- Chemotherapy treatment
Heavy Menstrual Bleeding Most women experience heavy menstrual bleeding at some point in their lives, but if you are one of the 1 in 5 women who bleeds so heavily each month that you have to put your life on hold, you may be at serious risk of developing iron deficiency anemia. Typically, women have a period about every 28 days. It lasts for 4 to 5 days, and they lose somewhere between 4 tablespoons and 1 cup of blood (1 tablespoon is equivalent to 15 mL). Each mL (milliliter) of blood loss results in 0.5 mg of iron loss. But the menstrual cycle isn't the same for all women. Your period may be more or less regular, last a longer or shorter time, and still be normal. You may be suffering from heavy uterine bleeding (also known as menorrhagia). If you have some of the following symptoms, your periods may not be normal:
- Soaking through a tampon and/or pad every hour or less for several hours in a row
- Needing to use double protection during your period
- Having to change your pad or tampon during the night
- Passing large blood clots in your menstrual flow
- Periods that last longer than 7 days
- Severe cramping
- Restricting daily activities due to heavy menstrual flow
- Symptoms of anemia such as tiredness, fatigue or shortness of breath
The most common causes of heavy uterine bleeding are fibroids or polyps, which are noncancerous tumors and growths on the lining of the uterine wall. Other possible causes include: a thyroid problem, use of an intrauterine device, cancer of the uterus, adenomyosis, pregnancy complications, medications, pelvic inflammatory disease (PID), endometriosis, or an infection of the cervix. If you are having symptoms of heavy uterine bleeding, it is very important for you to talk to your healthcare professional. You could be losing more than twice as much blood as normal, and more than twice as much iron. Menorrhagia may deplete iron levels enough to increase the risk of iron deficiency anemia. Signs and symptoms include pallor, weakness and fatigue. If your healthcare professional diagnoses you with iron deficiency anemia, he or she may prescribe either oral iron supplements or intravenous iron; intravenous iron can replenish your iron stores more rapidly than oral iron. Pregnancy and Delivery If you are pregnant, your daily requirement of iron doubles.
Some of the extra iron that you need is to help supply oxygen to your growing baby, some of it is to take care of your own body, and some of it is to prepare for the blood you will probably lose during and after delivery. Women lose varying amounts of blood during delivery, but 1 in 20 women actually loses more than 1,000 mL of blood (potentially more than 20% of the blood in her body). Blood loss of 1,000 mL works out to an estimated iron loss of 500 mg. It is generally estimated that half of the anemia cases in pregnancy are related to iron deficiency. If you are like most women, you probably don't have enough stored iron to take you through your pregnancy, and it is very hard to get enough iron through your diet. The good news is that nature gives your fetus the ability to take the iron it needs - even if you are iron deficient. The bad news is that you may end up suffering from anemia after your delivery. As any mother knows, taking care of a newborn is a demanding job. Delivery leaves most mothers feeling very tired, but if you are anemic, you may feel much more exhausted - just when your infant needs so much attention and care. Maternal iron-deficiency anemia has also been shown to be strongly associated with depression, stress and impaired cognitive function in the postpartum period. This may make it difficult for the mother to care for her baby, thereby influencing the emotional mother-infant bond. What's worse is that you may not realize that your level of exhaustion isn't normal, and that, with treatment, you could feel much, much better. There are certain conditions that can increase your risk of postpartum anemia, such as:
- A low level of prenatal iron. If you start out with too little iron, you will have a harder time dealing with the effects of blood loss during delivery. It will also be more difficult for you to rebuild your stores of iron after you have your baby
- Overweight before pregnancy. Being overweight can increase your chances of losing blood - and iron - through complications during your pregnancy
- Carrying multiple babies. Two or more growing fetuses will naturally require a greater amount of iron
- Not breastfeeding. If you don't breastfeed full time for the first 6 months after you give birth, you will probably begin menstruating during that time. And, since menstruating uses up about twice as much iron as breastfeeding does, you will have a greater risk of iron deficiency
- Multiple pregnancies and deliveries. After giving birth, you may be iron deficient due to the loss of blood. If you become pregnant again within a year, you will begin your pregnancy with a low iron level, and that may greatly increase your risk of iron deficiency anemia
It is very important to talk to your healthcare professional if you have any symptoms of anemia during your pregnancy or postpartum. If you are diagnosed with iron deficiency anemia, he or she may prescribe either oral iron supplements or intravenous iron. The benefit of intravenous iron is that it can bring your iron level back to normal much more quickly. An Inadequate Diet Iron comes from food, and your body is only able to absorb a small portion of the iron you consume. This means that if your diet doesn't include enough of the foods that are rich in iron, you may be at risk for iron deficiency anemia. If you are dieting, it's important that you make sure you are still getting enough iron.
If you are a vegetarian, your risk for iron deficiency may be increased because it is harder to absorb the iron from plant foods than from animal foods.
How Long Would It Take To Fall Through The Center Of The Earth? by Stephen Luntz (Image: Kelvinsong. The greater density of the Earth's core compared to the crust changes a famous physics problem.) The classic problem of how long it would take to fall through the Earth has a new answer, and it's four minutes shorter than the one that has been calculated millions of times. For generations, physics students have been challenged to work out how long it would take to fall through a hole dug through the center of the Earth if there were no friction or air resistance. There are two astonishing aspects to the answer. The first is that it would take just 42 minutes. That's right: without any form of propulsion you could get from wherever you are to the other side of the planet faster than the International Space Station. The second amazing aspect is that you don't have to go straight through. Dig a path from anywhere to anywhere on the Earth's surface (as long as it is far enough to make local variations trivial) and the travel time is 42 minutes. Any reduction in the distance is balanced by reduced acceleration from not falling vertically. Of course, no drilling equipment could pierce the center of the Earth, and anyone passing through would be fried by the intense heat. On the other hand, some people have toyed with the idea of digging tunnels between major cities, such as New York and LA, sucking the air out and letting vehicles fall through, accelerating until they reach the midpoint, then gradually slowing down as gravity pulls them back. In groundbreaking news, however, all those calculations are wrong. The American Journal of Physics has published the work of graduate student Alexander Klotz of McGill University. Klotz started with the common observation that the Earth is not uniformly dense. Aside from local variations in gravity, used to detect mineral deposits and track groundwater, the core is many times denser than the mantle or crust. Thousands of physics lecturers have waved this fact away, along with student concerns about removing friction entirely. Klotz, however, went where no physicist had gone before, using models of density variation with depth built up from seismic data. If one went right through the center of the planet, pole to pole for example, it would take just 38 minutes, reflecting the greater gravitational pull towards the center. On the other hand, 42 minutes remains a good estimate of the time required to travel between two close points. "The time taken to fall along a straight line between any two points is no longer independent of distance but interpolates between 42 min for short trips and 38 min for long trips," Klotz observed. Klotz also pointed out that while a straight line might be the shortest path between two points, it is not necessarily the quickest. The fastest journey between two non-opposite spots on the surface will be one that "tends to reach a greater maximum depth." Useful to know if you're trying to get from New York to Tokyo and can't spare those extra four minutes. (H/T: Science Magazine)
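The classic 42-minute figure follows from simple harmonic motion: inside a sphere of uniform density, gravitational acceleration grows linearly with distance from the center, so the one-way fall time is half an oscillation period, t = pi * sqrt(R / g). A quick numerical check in Python (a sketch using standard values, not Klotz's seismic density profile):

# Fall time through a uniform-density Earth. Inside such a sphere the
# pull is proportional to distance from the center, giving simple
# harmonic motion; the one-way trip is half a period: t = pi*sqrt(R/g).
import math

R = 6.371e6  # mean Earth radius, metres
g = 9.81     # surface gravity, m/s^2

t = math.pi * math.sqrt(R / g)
print(f"{t:.0f} s = {t / 60:.1f} minutes")  # about 2532 s = 42.2 minutes

Klotz's 38-minute result comes from replacing the uniform density with a seismically derived profile, which concentrates mass, and hence gravitational pull, toward the center.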
Star topology is the most common type of network topology used in homes and offices. A star network is distinguished by its central connection point, which may be a central computer or, often, just a switch or hub. A practical strength of a star network is fault isolation: when a fault occurs in one cable, only the single computer attached to it is affected, not the rest of the network. Star topologies usually need more cable than the usual bus topology. A common cable used in star networks is UTP (unshielded twisted pair); Ethernet cables with RJ45 connectors are also common. In a star topology the entire network depends on the central node, so if the whole network stops working, the problem most likely lies with the central computer or switch. This makes troubleshooting easier, since there is a single point at which a connection error shows up, but it also means the connected computers depend heavily on that center. Advantages of the star network topology:
* A star network topology is very easy to manage because of its functional simplicity.
* Problems can be easily and logically located in a star topology, and are therefore easy to solve.
* The star topology is very simple in layout, so it is very easy to extend.
* This type of network also offers more privacy than many other topologies.
* It is well suited to managing marketing and information networks.
Disadvantages of the star network topology:
* The star topology is fully dependent on the central system, and the operation of the whole network depends on its central switch or computer.
* If there are many nodes and the cable runs are long, the network can slow down.
Computers on the network do not communicate with each other directly; all point-to-point traffic is coordinated and routed through the central hub. This arrangement preserves the privacy of each and every computer on the network.
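The fault-isolation property described above can be illustrated with a toy model. In this Python sketch (the class and names are illustrative, not a real networking API), every conversation is routed through the hub, so a leaf cable failure cuts off one machine while a hub failure partitions the whole network:

# Toy model of star-topology failure behavior: a leaf-cable fault
# isolates one node; a hub fault takes down the entire network.
class StarNetwork:
    def __init__(self, hub, leaves):
        self.hub = hub
        self.link_up = {leaf: True for leaf in leaves}  # leaf -> cable status
        self.hub_up = True

    def fail_link(self, leaf):
        self.link_up[leaf] = False

    def fail_hub(self):
        self.hub_up = False

    def can_talk(self, a, b):
        # All traffic passes through the hub, so the hub and both
        # leaf cables must be working.
        return self.hub_up and self.link_up.get(a, False) and self.link_up.get(b, False)

net = StarNetwork("switch", ["pc1", "pc2", "pc3"])
net.fail_link("pc1")
print(net.can_talk("pc2", "pc3"))  # True  - unaffected by pc1's cable fault
print(net.can_talk("pc1", "pc2"))  # False - only pc1 is cut off
net.fail_hub()
print(net.can_talk("pc2", "pc3"))  # False - hub failure stops everything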
THE CONVENTION TROOPS IN CONNECTICUT. BY MARY K. STEVENS. Published in the Connecticut Quarterly, Apr., May & June 1897. In the early summer of that eventful year in the history of the American Revolution, General John Burgoyne, in command of about eight thousand English and German troops, set out from Canada with orders to descend along the line of the Hudson River to Albany. Here he was to meet Colonel St. Ledger, who was to come down the Mohawk Valley from Lake Ontario, and General William Howe, who was to ascend the Hudson. The object of this campaign was to weaken the Colonies by dividing them East and West. If the two sections were unable to co-operate, it was believed that they might be subjugated separately. This in brief was the plan of the campaign. Colonel St. Ledger was overpowered at Fort Stanwix. General Howe, who, by a curious accident, was the only one of the three commanders left with any discretionary power in the matter, did not follow the original plan, and failed to support Burgoyne. General Burgoyne followed his instructions, and proceeded down the Hudson as far as Saratoga, where he met General Gates, with his overpowering force of Americans. It was after the battle of Saratoga, which has been classed by Creasy among the fifteen decisive battles in the history of the world, that General Burgoyne was forced to surrender. At the request of the British general the affair was styled a "Convention," and the soldiers who laid down their arms at that time have since been known as the "Convention Troops." In John Fiske's History of the American Revolution, we read: "A dispatch containing positive and explicit orders for Howe to ascend the Hudson was duly drafted, and with many other papers awaited the Minister's signature. Lord George Germaine, being on his way to the country, called at his office to sign the dispatches; but when he came to the letter addressed to General Howe he found that it had not been 'fair copied.' Lord George, like the old gentleman who killed himself in the defence of the great principle that crumpets are wholesome, never would be put out of his way by anything. Unwilling to lose his holiday he hurried off to the green meadows of Kent intending to sign the letter on his return. But when he came back the matter had slipped from his mind. The document on which hung the fortunes of an army, and perhaps a nation, got thrust unsigned into a pigeon-hole, where it was duly discovered some time after the disaster at Saratoga had become a part of history." The terms of the surrender, which were embodied in "Articles of the Convention," provided that the troops under General Burgoyne march out of their camp with the honors of war, and lay down their arms at the word of command from their own officers. A free passage was to be granted the army under Burgoyne to Great Britain, on the condition that they should not serve in North America again during the war. The port of Boston was assigned for the entry of transports to receive the troops. The army was to march to Massachusetts Bay "by the easiest, most expeditious and most convenient routes." All officers were to retain their carriages, horses, baggage and side-arms.
Gates made haste to accept these "Articles." Although he sat in his tent during the battle, and commanded that Arnold be called from the field where he was leading the attack, Gates, as general in command, was praised for the brilliant victory, and for the most successful campaign of the war, while it has been forgotten that the "Hero of Saratoga" was Benedict Arnold, who was afterwards the traitor. The Convention Troops numbered about six thousand men. They marched to Boston, and spent the winter at Winter Hill, Cambridge. Detachments of them passed through Connecticut, over what was known as the "Old Colony Road," which was one of the principal highways through the state. Alice Morse Earle, in "Customs and Fashions in Old New England," gives the following description of some of the early Connecticut roads: "The Old Connecticut Road or Path started from Cambridge, ran to Marlborough, thence to Grafton, Oxford, and Woodstock, and on to Springfield and Albany. It was intersected at Woodstock by the Providence path which ran through Narragansett and Providence plantations, and also by the Nipmuck path which came from Norwich." "The new Connecticut road ran, as did the old road, from Boston to Albany. It was known at a later date as the Post Road. From Boston it ran to Marlborough, thence to Worcester, to Brookfield, and so on to Springfield and Albany." During the revolution there was a constant marching of troops over this road, but while traditions of their passing are common, no special records regarding them seem to have been kept. The march of one company of foreign troops, however, is recorded in a journal kept by Oliver Boardman, of Middletown, Connecticut, which is now in the possession of the Connecticut Historical Society. It states that the writer witnessed the surrender of Burgoyne. The first entry is dated September 2, 1777, and the last October 27, 1777. The following is a copy of the journal regarding the company referred to: "Monday, 20th. I was one of fifty that was called out of the regiment to guard 128 prisoners of war to Hartford. At evening we crossed the ferry and put up at Green Bush," (New York.) "Tuesday, 21st. We marched from Green Bush to Canter Hook." (Now Kinder Hook, New York.) "Wednesday, 22d. We marched from Canter Hook to Nobletown." (Now Hillsdale, New York.) "Thursday, 23d. We marched from Nobletown to Sheffield," (Massachusetts.) "Friday, 24th. We march from Sheffield to Rockwells, about the middle of the Greenwoods." "Saturday, 25th. We marched from Rockwells to Simsbury," (Connecticut.) "Sunday, 26th. We marched from Simsbury to Hartford (Connecticut), and delivered 123 prisoners to the sheriff; five of them left us on the march." The arrival of this company in Hartford is confirmed by the Hartford Courant under date of Tuesday, October 28, 1777, it being reported in that paper as follows: "Last Sunday arrived in town 128 prisoners, among whom were several Hessian officers. They were taken at the northward before the capitulations." "Rockwells, about the middle of the Greenwoods," was a tavern in Colebrook, Connecticut. The house was built by Samuel Rockwell, who went to Colebrook from East Windsor, Connecticut, in 1766. The Greenwoods road, which extended from New Hartford to Norfolk, passed about a half mile south of the house. The name "Rockwells" did not apply to the tavern alone. Quite extensive works for those days were carried on by Samuel Rockwell and sons.
Their saw mill, as well as a mill for grinding grain, a shop for the manufacture of agricultural implements, and works for carding wool, together with the tavern, gave the place notoriety. The house is still standing, and is occupied by a descendant of its builder. In the older towns of northwestern Connecticut there are homesteads now over a hundred years old, where tales are told of foreign soldiers who spent a night before the kitchen fire, or drank at the old well, or begged for food, and perhaps left articles which are treasured as having once belonged to a dreaded Hessian. Mrs. Mary Geike Adam, in a paper recently published in THE CONNECTICUT QUARTERLY, notes the passing of a company of Hessian soldiery through Canaan, and their stay at the old Douglas place in that town. Norfolk, in Litchfield county, was a thrifty, vigorous town in 1777. Its people were active in the defence of the independence which had been declared, and Norfolk men were present at very many of the important engagements of the war. Not only did the town send its quota of men to the army, but at great personal sacrifice the people sent money and provisions, notably during the terrible winter at Valley Forge. "When the British undertook the campaign of 1777, Litchfield county, being so near the line of march, was thoroughly roused, and Norfolk men went along with the rest, and were present at the surrender at Saratoga. More traditions remain concerning this battle and its consequences than concerning any other period of the war."* There is in the town to-day a house which at that time was owned and occupied by Captain Michael Mills, and the following authentic story is told of a Hessian who died there: In the latter part of October, 1777, a small party of Convention troops passed through the town on their way to Hartford. They camped for a few days on the village green. Among their number was a German lad, named Abram Si Hunchupp (pronounced "Sunchupp"), who was ill and unable to travel further. He was taken into the home of Captain Mills and cared for by his wife, Mercy Lawrence Mills, until, after some weeks, he died. He was buried in Loon Meadow, which is on the road leading from Norfolk to Colebrook, in a lot which belonged to Captain Mills. Upon a tree which stood above his grave these words were carved: "Here lies the body of Abram Si Hunchupp." Years passed, and the illness and death of the Hessian became one of the traditions of the house, when one evening the wife of Mr. Eden Mills, who was a son of Captain Mills, was sitting before the old kitchen hearth, singing softly to the little one nestled in her arms, and watching the glowing fire as it blazed up the wide-mouthed chimney. Suddenly she noticed that letters were slowly shaping themselves upon the great back log, and was startled and frightened as she spelled out the burning words, "Here lies the body of Abram Si Hunchupp." With regret it was learned that a laborer, Clark Walter by name, had unwittingly cut down the tree which marked the lonely grave, and the place could not afterwards be found. This spot, now lost in Loon Meadow, was always called the Grave of the Hessian, and the lot is still known as the "Hunchupp Lot." At the time Abram Si Hunchupp was taken to the house of Captain Mills, a number of German soldiers from the same company stopped at the house of Nathaniel Pease, a resident of Norfolk, and begged a night's rest. (The spot where the house then stood is on the farm of Nathaniel S. Lawrence in West Norfolk.)
They were allowed to spend the night by the fire, and during the evening one of them took from his sack a curious black teapot and, to the amazement of the family, a small package of tea. After having made himself a cup of tea, he threw the little teapot far back into the deep fireplace, among the glowing embers. Mr. Pease and his family were too awed to appear to notice this strange behavior on the part of their guest, but in the morning, after he had departed, the careful housewife drew the little teapot out of the ashes. It was uninjured, and was afterwards known in the family as "The Hessian's Teapot." At a comparatively recent date, through the agency of a small boy who thought it unnecessary to mention its loss, the pot itself disappeared, but the cover is still in the possession of a descendant of Nathaniel Pease. During the fall of 1777, Hendrich Bale, a Hessian soldier who belonged to Burgoyne's army, deserted his company as it passed through the town. He remained in the village and married Sara Hotchkiss. The well known and dearly loved Rev. A. R. Robbins was at that time pastor of the church at Norfolk, and he helped with food and shelter the weary foreigners who passed through the place during those memorable October days. An old gentleman now residing in the town relates a story which he remembers hearing his grandfather narrate, to the effect that after the surrender at Saratoga, a small party of British troops came into his grandfather's house, which stood on the road now leading into Colebrook, and threw themselves on the floor to sleep. They were remonstrated with, the men of the family telling them that the women could not move about to do their work, whereupon the leader replied that his men would lie upon their faces, and the women might step upon them, but sleep they must. There is told in Norfolk the story of an encounter between Captain Giles Pettibone (who was one of the foremost citizens of the town, who led his company at Saratoga, and who also held a command at West Point at the time of Arnold's treason) and a Hessian soldier, who, as he marched past the tavern kept by Captain Pettibone, stepped aside from his comrades, and made some demand upon the captain, which was refused. The Hessian then struck the doughty captain, who, it is said, defended himself with a pitchfork, to the serious discomfort of the Hessian. The house where this tavern was kept is still standing. Just outside the present village of Simsbury there stands a house, now deserted and falling into decay, which was built in 1765 by Daniel Holcomb. Previous to and during the revolution, a tavern was kept here, and the old bar-room is the same as in the days when foaming tankards of colonial flip were served from its oak board. The present owner of the house, Mr. Roswell J. Noble, has in his possession, among other valuable colonial relics, a curious staff, surmounted by an ornamental iron tip, which it is supposed was a color bearer, and which was left at the tavern by a company of Convention troops who camped there. The Convention troops were not allowed to sail for England. Congress refused to accept payment for their support in its own paper money, but insisted that all debts be paid in gold; demanded of General Burgoyne papers regarding his men which he was unable to furnish; and finally refused to carry out the agreement that the troops be allowed to leave the country.
They remained in Boston until the latter part of 1778, when they were sent to Charlottesville, Virginia, and established as a colony there. Much assistance was given them by Thomas Jefferson, whose estate at Monticello was near there. In 1780, to prevent a possible uprising, the British were sent to Maryland, and the Germans to the northern part of Virginia. Afterwards some were sent to Lancaster, Pennsylvania, and in 1781 large numbers of the officers and men were billeted upon the people of East Windsor, Connecticut. In Stiles' History of Ancient Windsor, there is an account of these troops, in which their number is given as "nineteen British officers, with forty-three servants, and forty-three Hessian officers, with ninety-two servants." The officers seem to have been well supplied with money; horse racing and betting were common amusements among them, and they enjoyed a considerable degree of freedom. At the suggestion of Lafayette, numbers of the men were employed in planting trees. There were weavers and shoemakers among them, and they worked among the people of the town. Many of the Convention troops were allowed to escape, and many of them settled in the colonies and became Americans. By 1783 they had all become dispersed. Photos from this article: 1) Haystack, "The Glory of..."; 3) The Michael Mills House, Norfolk; 4) The Giles Pettibone Tavern, Norfolk; 6) The Holcomb House, built 1765.
With the wacky planting and growing season we have had thus far in 2019, I want to encourage producers to scout fields for disease. Our late planting season this year can make our crops more vulnerable to higher levels of disease, according to Ohio State University agronomy specialists. The dry weather lately has helped hold back disease development in some cases, since many diseases need moisture to survive and thrive. With that said, it is still important to be watchful for disease and insects, as well as some molds. Our experts at Ohio State University remind us that fungal diseases that infect soybeans or corn can survive through the winter on crop residue left after harvest, and can cause the onset of disease again in the spring and spread. When we have a delayed planting year like this one, the disease spores carried over have more time to multiply. Pierce Paul, Ohio State University plant pathologist, notes that not only are more spores potentially available to infect corn and soybean plants, but in a late-planted year like this, these spores can infect plants at a much earlier growth stage, increasing the potential for more impact on the growing plants. Right now it appears that gray leaf spot is the disease most commonly found in fields, but producers should also be on the lookout for Northern corn leaf blight and a disease new to Ohio, tar spot. All three diseases are potential threats to this year's crop. In soybean fields, frogeye is the disease to be most concerned about. The incidence of this disease has been increasing every year in Ohio, and we must keep in mind that it was severe in some cases late last growing season here in Clinton County. Anne Dorrance, Ohio State University plant pathologist, reminds producers that if frogeye leaf spot is found on a soybean plant just before or during the growth of the bean pod, it could create significant yield loss if not controlled. Dorrance also suggests that cercospora leaf blight and downy mildew, a water mold, are potential threats to this year's crop. Phomopsis is another disease to be watchful of; remember, it was a major problem throughout Ohio last year. We know there are some varieties of corn and soybeans that are resistant to a variety of diseases, so a grower would know which fungal diseases he or she would have some protection from, but keep in mind that resistance does not mean immunity; it only means that damage to a resistant plant will be milder, according to Dorrance. Crop specialists suggest walking your fields more frequently this year so you can stay on top of any diseases progressing in a given field. Don't forget to also be watchful for insects and the potential for damage to crops. One insect we have been on the lookout for this year is the brown marmorated stink bug in soybean fields, which was a big problem last year. We have been monitoring for stink bug in Clinton County this growing season, and while they are out there, insect trap numbers have been low here and in other counties thus far. No matter the disease or insect issue, understand the threshold the crop can tolerate before yield loss cuts into revenue, and do your homework on which pesticide will provide the best control.
For more information about disease or insect threats on late-planted crops, visit the following: Tony Nye is the state coordinator for the Ohio State University Extension Small Farm Program and has been an OSU Extension Educator for agriculture and natural resources for over 30 years, currently serving Clinton County and the Miami Valley EERA.
Like many others before and after him, the first emperor of unified China, Qin Shi Huang, wanted to live forever. According to a set of recently discovered ancient texts, 2,200 years ago the emperor issued an administrative order to search for a potion that could grant him eternal life, reports the Xinhua news agency. Qin Shi Huang was born in 259 BC, and by the time of his death in 210 BC he had conquered the six warring kingdoms of China and managed to create a unified nation of which, naturally, he was proclaimed emperor. As noted by scholars, during the reign of Qin Shi Huang, bamboo strips were a common writing material. In 2002, more than 36,000 bamboo strips containing ancient calligraphy were discovered in an abandoned well in the central Chinese province of Hunan. The discovery was of great importance and historical value. Zhang Chunlong, a researcher from the Hunan Institute of Archeology, analyzed 48 of these strips and discovered among them a decree in which the emperor ordered a search for potions that would grant him eternal life. Experts note that "it required an extremely efficient administration and a considerable enforcement to pass such a decree in ancient times when transport and communication were extremely underdeveloped." Scholars explain that the search for the emperor's elixir of life reached the borders of the empire. The bamboo strips offer evidence of the unusual order and various details. The ancient documents mention a town called "Duxiang" where "no miraculous remedy had been found," but imply that "searches were continuing." Another locality, referred to as Langya, in the present province of Shandong, "alluded to a plant harvested in a sacred mountain" that might have been what the emperor was searching for. Before the discovery of the bamboo strips, scholars already had an idea of Qin Shi Huang's obsession with immortality. According to Chemistry World, the emperor thought that consuming cinnabar, composed of 85% mercury and 15% sulfur, would prolong his life. Ironically, and as expected, it did the opposite, killing him at the age of 49. Although the emperor never found the elixir of life, he equipped himself extraordinarily well for the afterlife: he had the underground mausoleum of Xian built in the north of the country, with 8,000 Terracotta warriors whose mission was to protect him in the hereafter. His eternal resting place was a massive subterranean mausoleum that has never been excavated by experts. Ancient records suggest that the underground palace has a roof that imitates the starry night, with pearls and diamonds for stars, and rivers of mercury.
A ship is an intricately organized structure of many important parts. To draw a ship, an artist or student needs some understanding of its organization. A ship, strictly, is a three-masted sailing vessel square rigged on all three masts. With more masts similarly rigged, she becomes a four- or five-masted ship. If she has her aftermost mast fore-and-aft rigged, she becomes a bark, perhaps a four-masted bark. If only the foremast is square rigged, she becomes a barkentine. A brig has two masts, both square rigged. If her mainmast has a mainsail like a schooner's with a three-cornered gaff-topsail above it and no yards, while the foremast remains square rigged, the vessel is called a half brig or hermaphrodite brig. This type of vessel is sometimes mistakenly called a brigantine, which carries a single square topsail above the main. If a vessel has but one mast with a fore-and-aft mainsail she is a sloop, even if she carries a square topsail, as was done in the eighteenth century. A vessel with two or more masts fore-and-aft rigged is a schooner - two-masted, three-masted, or more. A ship, generally, may be any important vessel, especially a seagoing vessel. "The way of a ship in the sea" did not necessarily refer to a three-masted square rigger. Do not underrate a ship by calling her a boat. The ship "Queen Mary" could stow fifty boats. Perspective is the only exact science having anything to do with art, and it cannot be ignored. Its primary value is to help create the effect of the third dimension. A serious error in perspective cannot be overcome by any amount of tone or color. Get a good book on elementary perspective which can be understood (many cannot). But perspective in architecture, railroad tracks, and interiors is one thing. Perspective afloat is quite another matter, because in anything like a seaway the vanishing points of a ship are constantly chasing each other up into the sky or diving down into the deep and dark blue sea. In the diagrams (above), you will note that the ship on top, rolling away from you, has the vanishing point of the squared yards in the water. The smaller ship to the left, rolling toward you, would have the vanishing point of her (squared) yards in the sky. Except in port, however, yards are seldom squared. But when you know where they would be if squared, you can trim them forward or aft as the wind calls for. On an even keel, a vessel's masts are perpendicular to the horizon and her deck parallel with it. The vanishing points, of course, are on that line. As soon as a vessel lists (leans over), the same rule applies, but the vanishing points are now on what is called the false horizon (dotted line, Figure 2). In Figure 1, the yards squared with the keel would have their vanishing points well off to the left under the true horizon, while in Figure 3, the vanishing points would be up in the sky far to the right. A good-sized drawing board is necessary for accurate perspective. For a large drawing, the floor is useful. Do not force your perspective; it will distort your ship. The first thing to do after the order "Out studding sails" is to run out the booms. Note how one boom is out while the boom on the other side of the yard remains housed. A studding sail measures half the area of the sail to which it is an adjunct. But about one-fifth of the studding sail is overlapped by the principal sail, hence only four-fifths of it is seen from ahead. SHROUDS, SPARS, AND LIFTS Above is the spar plan of a ship, featuring the stays supporting the masts from forward.
All other stays lead aft - port or starboard - to the rail. Below are shown the shrouds and topmast backstays. The lower shrouds lead up to and are looped around the hounds, where they are seized in a bight, then back to the same rail to be secured to the strong iron chain plates bolted to the ship's timbers. The rigging is set up taut by means of two deadeyes drawn and laced together by stout rope lanyards. The topmast shrouds, three in number, are set up the same way but come no lower than the tops. Here iron rods called futtock shrouds take up the strain. The diagram to the left shows this arrangement, as well as the spread of the rigging and the lifts that support the yards. Learn one side of a mast and you know both sides. When you know one mast of a ship you know all three. REEFING A TOPSAIL To close reef a modern double topsail, the upper half is merely furled. Not so simple the old-style single sails: most of them had three reefs, some four. The yard was lowered to the cap (or lower masthead). The sail was then gathered in fold by means of clewlines and buntlines. Next, the reef tackle was manned, both sides and the outer edges or leeches being hauled to the proper reef earring. This earring was lashed to the yard arm. The reef points were tied around the sail, which was then hoisted reefed. This operation sometimes took hours of heartbreaking labor in foul weather. The above diagram shows: the tip of the yard arm; the upper corner of the topsail; the chain sheet of the topsail; the iron jackstay to which the head of the sail is lashed; the pendant for the brace to swing the yard; the foot rope with its supporting stirrup; the Flemish horse, an extension of the foot rope; the iron ring at the tip of the yard, through which the studding-sail boom is run outboard; and the reef tackle. HOW THE MAINSAIL IS FURLED The above diagrams give the basic formulas for all squared sails. Figure 1 shows the mainsail set, with the gear for furling it, from aft looking forward. The upper edge or head is lashed to an iron rod running from one end of the yard to the other through eyebolts in its top; this rod is called the jackstay. The outer edges (leeches) of the sails, port and starboard, are called weather or leeward under way. All corners are called clews. At the command "Clew up the mainsail," both lower corners are hauled up to the yard close to the mast by the clew tackles or clewlines. Then the buntlines (which lead from the yard to the foot of the sail) are used to bundle up the sail into loose folds, spilling the wind. The leech lines gather in the outer edges and the sail is then loosely furled as in Figure 2. At the command "Lay aloft and furl," the men scramble up the rigging and out on the yard by means of the foot ropes, bundle the sail into a tight roll, lash it securely with gaskets (short ropes on the yard for that purpose), and the operation is complete. HOW THE YARDS ARE SWUNG The diagrams above show the yards as they are when all sail is set, and the lead (as in leader) of the braces. Only the starboard braces are shown; the port braces are the same. The main brace and the fore brace are shown separately. They are fitted with extra gear (purchases) to aid in swinging the ship's heaviest yards and largest sails. The fore brace leads to a block on the forward end of the main channels; the main brace leads to a heavy timber called the bumpkin, projecting from the ship's side close to the stern.
The hauling parts of both these braces lead to blocks fitted in the main rail directly above the standing end. The lower-topsail braces for both masts lead to similar blocks, each a foot or so aft of its mate. In beating to windward, the lower yards are braced in as far as they will go; the upper yards are not braced quite so far in. In running before the wind, the yards are square with the keel. In any other wind abaft the beam, the yards are squared with the direction of the wind. The lower yards are neither hoisted nor lowered but suspended by heavy chain slings from the crosstrees; they swing on strong iron swivels called cranse irons. All the upper yards are hoisted and lowered by halyards and tyes secured to iron rings called parrels. The parrels are fastened firmly to the center of each yard; leathered and greased, they travel freely up and down the mast. Welin davits swing the boat outboard by means of a ratchet gear. With gravity-type davits the boat cradle, when released, slides down inclined skids; the arms then swing the boat outboard. The helm is an important and prominent object on a ship's deck. The double wheel in the illustration above belongs to the U.S.S. "Constitution". This ship also had an emergency steering gear between decks, less exposed to gunfire. In contrast is the diamond-geared iron wheel of the coaster "Lizzie D. Small" (c. 1866), shown above right.
The beauty of calligraphy is seen as one of the most inspirational sources for various graffiti and urban artists today. This form of ancient writing stands at the root of some of the most mesmerizing examples of graffiti works, stencil tags, non-classical hand-lettering, and even abstract painting. Depending on the sensibility of the artist, calligraphy may inspire the most exquisite forms of lettering, which today decorate design works, book editions, and even personal invitations to significant events. Other artists transform the letter into a simple gesture, mark, or abstract pattern design. With such appropriations of calligraphy, artists today continue to reshape contemporary art production. The Origin of Calligraphy Art Calligraphy is an ancient practice of writing. Various cultures, such as the Arabic, Chinese, Indian, Islamic, and Japanese, considered the art of writing an integral part of their culture and identity. While these traditions share the same or at least similar tools for creating script, the styles vary between cultures. Originally the act of writing was practiced in monasteries for the copying of sacred texts. In the Islamic tradition, writing was not only used to copy such texts but also told a tale of deeper philosophical and spiritual concerns: the ornamentation of the letters, and the connection of the first letter, Aleph, to the rest of the letters, is in Islamic culture the story of creation itself. Presently, urban artists dip into this tradition and use it to create amazing murals and gallery-sized contemporary paintings. In 2007, the Dutch artist Niels Shoe Meulman coined the term calligraffiti to describe the fusion of calligraphy and tagging, which soon became a worldwide phenomenon. We have selected a group of artworks which illustrate the dominance of calligraphy art. These 10 pieces, created by celebrated names of urban culture, can easily be yours. Please scroll down to learn more about each piece and make one or more an integral part of your collection. L'Atlas - Blue Dreams French artist L'Atlas is well known for his unique and recognizable lettering style. In his hands, calligraphy is transformed into a play between the hidden and the visible. Inspired by both geometric art and the act of writing, the paintings of L'Atlas hide the text and create elaborate geometric patterns. His painting Blue Dreams resembles a map of an ancient labyrinth. Upon closer inspection one notices the appearance of letters which in fact form the artist's name. This form of play is the charm of L'Atlas' production. Stohead - Enter the Dragon Stohead's creations refer to a very puristic form of graffiti, tagging and throw-ups. Deeply immersed in graffiti culture, Stohead created his first tag and painting back in 1989. With the artist group 'getting up' he realized some of the biggest graffiti murals and exhibitions. Presently, Stohead's creativity is expressed through typographic text-patterns. His mixed-media painting Enter the Dragon is created with both acrylic paint and spray paint on canvas. Transforming the letters into an abstract and fluid surface, the piece is open to interpretation by the public; such openness only adds to its magic.
Image via widewalls.ch

Niels Shoe Meulman - Fear Itself

In the world that is inspired by calligraphy art, Niels Shoe Meulman is considered one of its revolutionary figures. In a featured interview, the artist described his methods and inspirations, bringing his celebrated works closer to the audience. His painting Fear Itself, created with ink and spray paint on canvas, illustrates the artist’s style and his love for both calligraphy art and the expressive force of Action painting. Find out how to get this fascinating piece by clicking here. Featured image: Niels Shoe Meulman – Fear Itself. Image via widewalls.ch

Seen - Grey Multi Tag

The artist Seen became one of the most famous street artists at a time when graffiti art was becoming a major trend. In the early seventies he was known for his subway graffiti. Later he developed his well-known style of vibrant lettering and cartoon characters. His painting Grey Multi Tag follows the tradition of repetition and pattern in art and is an example of Seen’s step into abstract work. Click here for more details about the artwork. Featured image: Seen – Grey Multi Tag. Image via widewalls.ch

Vincent Abadie Hafez - Jalousie 2

Vincent Abadie Hafez is yet another French artist on our list. Also known as Zepha, he is strongly inspired by traditional and contemporary Arabic typography. He is world famous for his ornamental, stunning pieces, which decorate building facades as well as traditional exhibition spaces. Playing with rhythm, form, and color, the artist attempts to merge the old with the new. His painting Jalousie 2 illustrates the artist’s style and displays the mastery of his strokes. To learn more about the artwork and how it can become yours click here. Featured image: Vincent Abadie Hafez – Jalousie 2. Image via widewalls.ch

Eackone - Pink Martini

The love for graffiti and street culture inspires the author Eackone. A graduate in graphic art, the artist balances the two worlds as both a graphic designer and a graffiti artist. His interest in tagging is seen as one of the influences on his painting Pink Martini. Covering the entire canvas surface with strokes and free-flowing letters, the image recalls the expressive paintings of Jackson Pollock, where the drip of the paint itself created the image. Find out how to get this fascinating piece by clicking here. Featured image: Eackone – Pink Martini. Image via widewalls.ch

Sowat - Rue Des Vieux Marrakschis

Having made his first pieces on the wastelands of Marseille, Sowat is presently considered one of the most important names in urban culture. Alongside traditional calligraphy materials such as inks and brushes, the artist also incorporates the usual spray technique. Often teaming up with other artists, Sowat has authored some of the most intricate mural paintings decorating various cities. His painting Rue Des Vieux Marrakschis celebrates the line, which helps to create an abstract and non-traditional form of lettering. To find out more about this piece, click here. Featured image: Sowat – Rue Des Vieux Marrakschis. Image via widewalls.ch

Niels Shoe Meulman - White Shoe

A major art figure, at once an artist, a writer, a graphic designer, and an art director, Niels Shoe Meulman simply had to appear twice on our list of calligraphy art. This time his painting White Shoe is more expressive and explosive in nature.
One is drawn to the movement of the paint, which reads as a record of the artist’s energy and creative force. With the high contrast between the flat black background and the white lettering, one could easily define this work as a fusion of calligraphy art, tagging, and abstract painting. Click here for more details about the piece and how it can become yours. Featured image: Niels Shoe Meulman – White Shoe. Image via widewalls.ch

Vincent Abadie Hafez - X Factor

Ending our list of ten calligraphy art pieces is Vincent Abadie Hafez and his mixed-media painting X Factor. Famous for his use of various techniques and materials, Hafez makes the need to fuse opposing elements evident in this painting. Both expressive and still, the painting resembles the surface of an ancient palimpsest. For more information, click here.
By Tanushri Majumdar

Haven’t you always wondered where tigers came from? What the basis of their existence is? I don’t know about you, but I sure have! I mean, yeah, we and chimps share common ancestors, dogs came from wolves, but what did our present-day “kitty” evolve from? So, one day, I got up with a goal in my head: I had to figure this out! And that’s how this article came into existence. Forgotten felines are all the cats that have been forgotten over the ages - all the prehistoric cats that evolved into our present-day big cats.

Often called a tiger, the saber-toothed cat was not really a tiger. Fossil evidence indicates that it was a lot smaller, more like a bobcat: around 3 feet tall, to be precise. A distinct feature of this peculiar feline was its fangs. This cat had huge canine fangs that closely resembled sabers, as the name indicates. These extremely sharp fangs were, on average, 8 inches long! Also known as the smilodon, the saber-toothed cat was built for the kill. Almost bearlike, with an extremely muscular neck and forelegs, its well-engineered body was perfect for latching onto the necks of its unfortunate prey. Its mouth opened very wide and it could bite huge chunks out of its prey. Clusters of fossils found in California suggest that it may have been a social animal.

The American lion, a probable ancestor of our present-day lion, once roamed the continent of North America. This predator was huge, more than a third larger than any modern lion. Standing 4 feet high, it had a huge head and long legs. Surprisingly, this big cat weighed less than expected for something its size: between 256 kg and 351 kg. It lived at high altitudes, probably using caves as shelter against the cold weather. American lions likely preyed on deer, horses, North American camels, North American tapirs, bison, mammoths, and other large herbivores. Human predation may have contributed to its extinction, as indicated by the huge number of lion bones found in American Indian settlements of the Paleolithic age. This prehistoric lion was truly the king of beasts in its age.

The dinictis was a strong and fierce predator indigenous to North America. Fossil evidence suggests that this beast was a strange mixture of the prehistoric smilodon and present-day felines like the house cat and tiger. It had a sleek body, almost 1.1 m long, with very short, un-catlike front legs. It looked like a small leopard, was roughly the size of the present-day cougar, and dwelled in trees. Its teeth were like those of modern cats, and it is considered an ancestor to them. Fossils found in the western states of North America show that the dinictis preferred to live and hunt near rivers and open plains.

Also known as the scimitar cat, the homotherium was one of the most formidable felines of prehistoric times. Found in North and South America, Europe, Asia, and Africa, it adapted very well to different climatic conditions and survived for five million years until its extinction. The homotherium may have been a social carnivore and was active mostly during the day, thus avoiding competition with nocturnal predators. Its short hind legs and rather long forelegs helped it grab prey. An adept mammal hunter, its exceptional speed let it hunt fast animals as well.

The cave lion was a subspecies of Panthera leo.
This skilled hunter was one of the largest cats of its time (much larger than our present-day Siberian tiger and hybrid tigers), with males weighing between 270 kg and 320 kg. It was one of the most dangerous and powerful predators of the last Ice Age in Europe, and evidence indicates that it was feared. Interestingly, it played a role in Paleolithic religious beliefs, as is evident from artefacts like cave paintings and a few statuettes that depict the cave lion as a majestic, regal beast. Surprisingly, this cat did not have a mane like the present-day lion, as indicated by Paleolithic cave paintings and clay busts. They also show the cave lion with faint, tiger-like stripes on its body. Scientists had suggested that it may actually have been more closely related to the tiger. Extensive genetic studies on the fossils, however, have confirmed that the cave lion was actually a lion!
Black couples are talking about their love lives as part of a study under way at Loyola University in Chicago. “Most research on relationship function focuses on white couples,” said Tracy DeHart, a social psychologist at Loyola. The data pool focused strictly on black couples is small, she said. DeHart and Anthony Burrow, a developmental psychologist at Cornell University, are leading a study of 150 black couples to fill in the gap. There might be experiences specific to black couples that aren’t being examined, DeHart said, adding that the project has been under development for about three years. “Our relationship functioning is so closely linked to our mental and physical health,” DeHart said. The unique factors that lead to stress in black couples’ interactions should be noted, she said. The study, which began in January and is funded by the National Science Foundation, explores couples’ interactions through a three-week diary. People report on how they feel about themselves, their relationships, and their positive and negative experiences, she said. Burrow, a former professor at Loyola, said the study will evaluate the couples every day in real time. “We not only get a sense of how their daily experiences shape their moods,” he said, “we also can see how an individual’s daily experiences, if shared with their partner, may impact their partner’s well-being.” Clinical psychologist Melissa Blount said: “There are universal truths that exist in all relationships. From my experience, when people are in love and they’re hurting it all looks the same.” From a social, cultural and economic standpoint, she said, couples face different issues. Unemployment, health problems and the “residue of racial discrimination,” Blount said, are concerns she’s found specifically damaging to black relationships. “Your availability to resources and tools are limited when your finances are limited,” said Blount, who spent three years counseling couples in Chicago. “That makes it harder for your relationship to thrive and survive.” Also, if a person is under pressure because of real or perceived racial discrimination, she said, it will impact his or her partner. Upbringing and finances are the major differences between blacks and whites, said Clifton Jackson, 36, who has been with his girlfriend Bertrice Horton, 32, for two and a half years. They are not participants in the Loyola study. The environments in which many black people are reared shape how they behave in relationships, he said. Jackson added that he was taught to lead in a relationship and that’s what he does. “You have to learn how to love and have a relationship,” Jackson said. Burrow said that there is no evidence proving whether black couples are different from other racial or ethnic groups. “The whole objective here is not to compare,” he said, referring to the study. “I think historically what happens is either black or other minority ethnic groups are left out of the conversation altogether, or there is a comparison in which one group ends up being viewed as the standard.
“We want to supply a research-based understanding of African-American couples’ romantic relationship functioning to a literature that has already recognized the importance of romantic relationship functioning, but has not fully explored what this looks like in a specific cultural context.” The purpose is to leave open the possibility that black couples’ relationships are similar to everybody else’s, Burrow said, but also to leave open the potential to find specific cultural “nuances that really shape or impact this particular experience.” Seventy-five couples are participating and applications are still being accepted. The study will last through the fall, and results are expected to be released as soon as next year.
Preventing Stroke Using TCAR

Baptist Health Louisville: Vascular surgeon Brad Thomas, MD, explains how transcarotid artery revascularization (TCAR) reverses blood flow in the artery, protecting the brain from a stroke while surgeons implant a stent.

Preventing Stroke Using TCAR HealthTalks Transcript

Brad Thomas, MD: Carotid artery disease is the build-up of plaque in the neck arteries. It can be caused primarily by smoking, but also by conditions like high cholesterol. The problem with carotid artery disease is that it increases your risk of having a stroke. TCAR uses a neuroprotection system, and that’s called reversal of flow. What TCAR does is temporarily reverse the flow in the artery that we’re working on. When we’re crossing the lesion, using a balloon on this friable, diseased artery, and placing a stent to try to stabilize the plaque, if the flow is going up toward the brain, that’s a potential time when a stroke can happen. Because that’s what a stroke is: it’s clot or pieces of plaque that go up to the brain and then block off circulation to it. When you reverse the flow, there’s a device that sits outside of the body, pumps the blood back down, and runs it through a filter, so that anything that does break off is captured; before we ever attack the lesion and start to work on it, we can see that the flow is reversing. It’s amazing sometimes how, after the procedure, when we open up this filter, we can see the plaque and debris that would’ve otherwise gone up toward the head. TCAR is an important new weapon in the fight against stroke. We want to do everything we can to reduce the risk of stroke in someone and improve the quality of their life.

With stroke being the leading cause of serious, long-term disability, learn how to lower your risk by identifying your risk factors. Start by taking the free assessment and discussing the results with your family and your doctor.
The Studies window will open. Now click the New... button at the bottom left corner of the window. This will open the space we need to write our program. The first line is # SMA_X_WMA. This is a comment; it is not used by the computer and is only there to help the programmer or anyone trying to understand the code. The next two lines provide input to the program, allowing the user to change the lengths of the moving averages. Type: input SMA_Length = 5; input WMA_Length = 9; These two lines define the variables SMA_Length and WMA_Length and assign a default value to each. Note: do not forget the ; at the end of each statement, and I used _ because spaces are not allowed in variable names. Next we define two more variables, avg and wavg. Type: def avg = Average(close, SMA_Length); def wavg = wma(close, WMA_Length); The first line defines the simple moving average of the close for the length provided by the SMA_Length variable, and the second line defines the weighted moving average of the close over the length provided by the WMA_Length variable. Now we need to plot our arrows on the chart. We do this by typing: plot crossing = avg > wavg and avg[1] <= wavg[1]; This line provides the logic for our indicator: if the simple moving average is above the weighted moving average on the current bar and was at or below it on the previous bar (the [1] suffix refers to the prior bar's value), then crossing is true and the signal is drawn on the chart. The last bit of code defines how the signal gets drawn on the chart; see the sketch below for these finishing lines. Now you can click the OK button at the lower right of the window; this takes us back to the Studies window, where you can adjust any parameters you want to change. Next click Apply and then OK to see your indicator on the chart.
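Putting the pieces together, here is one way the finished study could read. Treat this as a sketch: the two styling lines at the end are an assumption about what the missing final "Type:" step was meant to contain, since rendering a boolean plot as arrows is conventionally done in thinkScript with SetPaintingStrategy.

# SMA_X_WMA
input SMA_Length = 5;
input WMA_Length = 9;
# Simple and weighted moving averages of the close
def avg = Average(close, SMA_Length);
def wavg = wma(close, WMA_Length);
# True only on the bar where the SMA crosses above the WMA
plot crossing = avg > wavg and avg[1] <= wavg[1];
# Assumed styling: draw the signal as a green up arrow instead of a line
crossing.SetPaintingStrategy(PaintingStrategy.BOOLEAN_ARROW_UP);
crossing.SetDefaultColor(Color.GREEN);

As an aside, thinkScript also has a built-in crosses above operator, so the condition could equally be written as plot crossing = avg crosses above wavg;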
Preschool Bunny Count - App Store Info

Description: Preschool Bunny Count is a wonderful game for kids. It serves two purposes. One is to teach counting in an interesting and playful manner with colorful images. The other is to help your child recognize different objects in 10 categories. Each category has ten different objects. Children love it because it includes fantastic images of animals, vegetables, vehicles, colors, school objects, wild animals, fruits, insects, and birds. The game has lovely music with a mute function. Bunny is the main character, who teaches your child to count from 1 to 10 in a very light manner. At the end, your child gets marks for his performance and a star colored red, green, blue, pink, or black according to the marks achieved. Marks are not given when a mistake is made, but your child gets a chance to correct the mistake and keep moving toward the end. Bunny has a special message for the world in the message section. Bunny keeps a record of the stars scored by your child. Finally, your kid is ready to count quickly, with fun and confidence. The game has so much variety that your child will love to count round the clock.
Posted May 8, 2014 by Martin Armstrong

QUESTION: Mr. Armstrong, I know someone who attended your 1985 Conference in Princeton. He said you illustrated the huge volatility and forecast the crashes of 1987 and 1989. He said you forecast that the G5 would be the source of that volatility, and that there would be a Sovereign Debt Crisis beginning in 2010. But the most amazing forecast, he said, was that you bluntly stated marijuana would begin to be legalized in 2013. How could you have made such a forecast?

[Image: Marijuana Tax Stamps]

ANSWER: The legalization of marijuana comes precisely 43 years after it was made illegal, following the 1969 Supreme Court decision that led to the Controlled Substances Act of 1970. The legalization of marijuana takes place simply as forecast because they need money; it is linked to the Sovereign Debt Crisis. The very same pattern took place with alcohol: turn the economy down, and the government legalizes what was illegal in order to make money. Casinos are now everywhere.

The fascinating thing about taxes has been the thinking process behind them. Back in 1937, Congress imposed drastic new regulations and taxes on marijuana, cocaine, and opium, which were then legal. The tax on marijuana was a direct attempt to restrict its use. They were legalizing alcohol but really felt they had to be punitive with something else. Prohibition had been targeted to get Italians and Irish Catholics; this new tax targeted Mexicans. The marijuana tax was really used to criminally prosecute Mexicans, who were widely seen as taking American jobs during the hard times. In 1967, President Johnson’s Commission on Law Enforcement and Administration of Justice opined, “The Act raises an insignificant amount of revenue and exposes an insignificant number of marijuana transactions to public view, since only a handful of people are registered under the Act. It has become, in effect, solely – a criminal law, imposing sanctions upon persons who sell, acquire, or possess marijuana.” In 1969, the Supreme Court overruled the tax in Leary v. United States. It held that part of the Act was unconstitutional because it violated the Fifth Amendment: a person seeking the tax stamp would have to incriminate him/herself. Congress then passed the Controlled Substances Act as Title II of the Comprehensive Drug Abuse Prevention and Control Act of 1970, which repealed the 1937 Act. Hence, 1/2 of 8.6 is precisely 4.3.

With time, people forget the original reasoning for outlawing alcohol and then opium, cocaine, and marijuana. There is the argument that there should be a “sin tax” to encourage people to stop a given practice. Governments applied the same theory to tobacco: make cigarettes expensive and you will reduce their use and sales. Now they want to tax electronic cigarettes simply because they are losing taxes from real tobacco. An interesting contradiction behind taxes is the claim that raising taxes on the “rich” (household income of $250,000 in the USA, C$150,000 in Canada) will somehow not reduce the economy and result in fewer jobs. They realize raising “sin taxes” did reduce the use of cigarettes, so why would raising taxes not reduce the economy as well? Political thinking is never logical nor consistent, because politicians lie out of self-interest, not to help the people. With them, government is just the legal means of robbing the people while claiming you care so much.

The forecast was rather simple: 4.3 is half of the 8.6 frequency, and times 10 that gives 43 years.
Hence, 1970 plus 43 years brings us to 2013, two years after the Sovereign Debt Crisis began. That target, 2010.29, was the Pi target, and it marked the start of the crack with Greece. The previous turn of that same cycle was the World Trade Center attack, 9/11 (2001.695).
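For readers following the cycle arithmetic, the dates quoted above chain together as follows. This is simply a restatement of the numbers in the text (in decimal years, with 8.6 years being the frequency Armstrong refers to), not an endorsement of the model:

\[ \frac{8.6}{2} = 4.3, \qquad 4.3 \times 10 = 43, \qquad 1970 + 43 = 2013 \]

\[ 2001.695 + 8.6 = 2010.295 \approx 2010.29 \]

The second line shows why the text can treat 2001.695 and 2010.29 as turns of the same cycle: they sit exactly one 8.6-year interval apart.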
Plants can begin flowering in as little as six to eight months, although container-grown plants may take up to two years to bear fruit. The good news is that once the plant is mature, you could see four to six fruiting cycles a year from a plant that is capable of bearing fruit for 20 to 30 years.

Can you grow yellow dragon fruit from seed?
Wash off the fruit flesh and pulp from the seeds and lay out the seeds on a moist paper towel for at least twelve hours. Then plant the seeds: sprinkle the dragon fruit seeds across the soil surface and cover with a thin layer of soil. It’s okay if it barely covers the seeds; they don’t need to be planted deep.

How long does it take for a dragon fruit to fully grow?
It takes the fruits about 50 days to reach maturity after flowering and pollination occur, and the dragon fruit continues to flower and set new fruits throughout its fruit-bearing season.

Is yellow dragon fruit hard to grow?
Learning how to grow dragon fruit is really not that hard! Dragon fruit is a cactus that is actually quite adaptive to its environment. [Image: a dragon fruit garden in California] Aside from the deliciously delicate flavor, the fruit of a dragon fruit tree is extremely healthy.

How long does it take a dragon fruit cutting to produce fruit?
Leave the treated stem segment to dry for 7-8 days in a dry, shaded area. After that time, dip the cutting into a rooting hormone and then plant it directly in the garden, or in well-draining soil in a container. Cuttings grow rapidly and may produce fruit 6-9 months from propagation.

What month does dragon fruit bloom?
This unique jungle plant typically blooms from early summer through mid-autumn. Dragon fruit cactus is a night-blooming plant, and the flowers last only one evening.

How fast does dragon fruit grow from seed?
If dragon fruit has intrigued you, the small seeds scattered throughout its flesh can be sprouted easily and grown into a dragon fruit plant of your own. Plants can begin flowering in as little as six to eight months, although container-grown plants may take up to two years to bear fruit.

What is the best fertilizer for dragon fruit?
Choose a fertilizer with a balanced NPK ratio. Most experts agree that a balanced fertilizer, like 16-16-16 or 13-13-13, is a good choice for your dragon fruit. You can use fertilizer granules, or spread fertilizer through your irrigation system. Slow-release fertilizer is also an option.

Why is my dragon fruit plant not growing?
The most likely cause is inadequate growing conditions. The dragon fruit cactus is a tropical plant, which means it likes heat. It’s also possible your dragon fruit won’t develop fruit because of a lack of moisture. Since it’s a cactus, many gardeners assume the pitaya doesn’t need much water.

How often should I water dragon fruit?
Water more frequently than other cacti (approximately once every 2 weeks). Allow the soil to dry between waterings.
Soil should be moist, but not saturated.

When should I plant dragon fruit?
The growing season of this plant is during the hot months of the summer. It will not grow the rest of the year, but when it does grow, it grows rapidly. Blooms occur from July to October, but each bloom lasts only one night. After the flowering occurs, fruit will begin to form.

Can you grow dragon fruit in pots?
Dragon fruit are well suited to growing in pots, provided the pots are at least 500 mm wide. Choose a pot at least 500 mm wide and deep. Position it in full sun and protect it from strong winds. Fill the pot with a free-draining cacti and succulent mix.

How do you make dragon fruit grow faster?
Dragon fruit needs sun to produce fruit, so plant it in a full-sun spot or a place that gets at least 6 hours of sunlight a day. When situated indoors, make sure your plant is in a warm and sunny spot. Unlike most cacti, dragon fruit likes to have its soil on the slightly moist side.

How tall do dragon fruit trees grow?
Although it is a cactus, it requires a relatively high amount of water. Dragon fruit plants are vining and need something to climb. They are also heavy: a mature plant can reach 25 feet (7.5 m) and several hundred pounds.
The 20th century was like no time period before it. Einstein, Darwin, Freud and Marx were just some of the thinkers who profoundly changed Western culture. These changes took distinct shape in the literature of the 20th century. Modernism, a movement that was a radical break from 19th-century Victorianism, led to postmodernism, which emphasized self-consciousness and pop art. While 20th-century literature is a diverse field covering a variety of genres, there are common characteristics that changed literature forever.

Prior to the 20th century, literature tended to be structured in linear, chronological order. Twentieth-century writers experimented with other kinds of structures. Virginia Woolf, for instance, wrote novels whose main plot was often "interrupted" by individual characters' memories, resulting in a disorienting experience for the reader. Ford Madox Ford's classic "The Good Soldier" plays with chronology, jumping back and forth between time periods. Many of these writers aimed to imitate the feeling of how time is truly experienced subjectively.

If there's one thing readers could count on before the 20th century, it was the reliability of an objective narrator in fiction. Modernist and postmodern writers, however, believed that such objectivity misrepresented how stories actually get told. The 20th century saw the birth of the ironic narrator, who could not be trusted with the facts of the narrative. Nick Carraway, narrator of Fitzgerald's "The Great Gatsby," for example, tells the story with a bias toward the novel's titular character. In an extreme case of fragmented perspective, Faulkner's "As I Lay Dying" switches narrators between each chapter.

The Novel of the City

The 20th century is distinguished as the century of urbanism. As more people moved to cities in Europe and America, novelists used urban environments as backdrops for the stories they told. Perhaps the best known of these is James Joyce's "Dubliners," a series of short stories that all take place in various locales in Dublin. Other 20th-century writers are also closely associated with various urban centers: Woolf and London, Theodore Dreiser and Chicago, Paul Auster and New York, Michael Ondaatje and Toronto.

Writing from the Margins

The 20th century gave voice to marginalized people who previously got little recognition for their literary contributions. The Harlem Renaissance, for example, brought together African-Americans living in New York to form a powerful literary movement. Writers such as Langston Hughes, Nella Larsen and Zora Neale Hurston wrote fiction and poetry that celebrated black identity. Similarly, female writers gained recognition through novels that chronicled their own experience. Finally, the post-colonial literary movement was born, with writers such as Chinua Achebe writing stories on behalf of subjugated peoples who had experienced colonization by Western powers.
Cyanobacteria blooms pose a serious threat to drinking-water sources, because certain species contain toxins harmful to the liver or nervous system. (Photo credit: Dr. Ron Zurawell, Ph.D., P.Biol., Limnologist/Water Quality Specialist, Alberta Environment)

Article courtesy of ScienceDaily | February 26, 2015 | Shared as educational material

The organisms commonly known as blue-green algae have proliferated much more rapidly than other algae in lakes across North America and Europe over the past two centuries, and in many cases the rate of increase has sharply accelerated since the mid-20th century, according to an international team of researchers led by scientists at McGill University. (The documentary "Bloom – the Plight of Lake Champlain" (Part 1 of 4), available on YouTube, is one of many videos published on toxic blue-green algae.)

Their study, published in the journal Ecology Letters, represents the first continental-scale examination of historical changes in levels of cyanobacteria, the scientific term for the photosynthetic bacteria that form blue-green scum on the surface of ponds and lakes during hot summer months. Cyanobacteria blooms pose a serious threat to drinking-water sources, because certain species contain toxins harmful to the liver or nervous system. "We found that cyanobacterial populations have expanded really strongly in many lakes since the advent of industrial fertilizers and rapid urban growth," says Zofia Taranu, who led the study as a PhD candidate in McGill's Department of Biology. "While we already knew that cyanobacteria prefer warm and nutrient-rich conditions, our study is also the first to show that the effect of nutrients, such as phosphorus and nitrogen, overwhelm those of global warming."

Alpine lakes affected: Researchers from France, Italy, Spain, the UK, Malaysia, and across Canada contributed to the study. While the increase in cyanobacteria in agriculturally developed watersheds was in line with their expectations, the scientists were surprised to find that cyanobacteria also increased in many remote, alpine lakes. In those sites, warmer temperatures and nutrient loading from atmospheric sources are likely to have played a bigger role than direct agricultural runoff. Dense algal blooms have become a summertime staple of media coverage, and a growing concern of lakefront homeowners, in certain regions, but until now there had been little in the way of long-term, large-scale synthesis of data on the phenomenon. This left room for doubt as to whether harmful algal blooms were truly on the rise, or whether communities were simply better equipped to identify and report blooms when they occur. The rapid increase in cyanobacteria identified in the study points to the potential for a parallel increase in the concentration of harmful cyanotoxins, says Taranu, who is now a postdoctoral fellow at Université de Montréal. While potentially toxic species don't synthesize toxins at all times, studies have shown that one of the best predictors of toxin concentrations in lakes is the total abundance of cyanobacteria. Cyanobacteria can produce toxins that cause damage to the liver or nervous system.
The most common symptoms of acute exposure to harmful algal blooms are skin rash or irritation, gastroenteritis, and respiratory distress. Chronic, low-dose exposures over a lifetime may also result in liver tumors or endocrine disruption. Preliminary studies also suggest that a recently isolated cyanotoxin may become more concentrated as it moves up food chains and may be associated with the formation of progressive neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, and ALS. Although this latter work is still controversial among scientists, "our results underline the importance of further research in this area," Taranu says.

Collaborations needed to tackle problem: "Our work shows that we need to work harder as a society to reduce nutrient discharges to surface waters," says Irene Gregory-Eaves, an associate professor of biology at McGill and co-author of the study. "Because diffuse nutrient loading (as opposed to end-of-pipe effluent) is the main issue, we need to build collaborations to tackle this complex problem. For example, partnerships among freshwater scientists and farmers are starting to happen, and more of this needs to take place, so that we can strike a balance between maximizing crop yields and minimizing excess fertilizer application."
Facebook is revealing data about the water efficiency of cooling in the first building at its Prineville, Ore. data center. Water usage effectiveness (WUE, called "water efficiency usage" in the original post) measures water used for cooling the data center only, not plumbing or office usage elsewhere on site. Specifically, as of the second quarter of 2012, the Prineville data center achieved a WUE of 0.22 L/kWh. To put that into perspective, Daniel Lee from the Open Compute Project described that figure in a blog post as "a great result, but it should be noted that the WUE concept is fairly new." However, a specific goal for Q2 wasn't mentioned in the post. More of the nitty-gritty specifics behind the water cooling systems are available on the Open Compute Project blog. But as an overview, Facebook has implemented a mechanical system built around a penthouse level that utilizes 100 percent outside-air economization with a direct evaporative cooling and humidification misting system. Lee asserted that most data center cooling systems don't actually employ outside-air economization, instead recirculating up to 100 percent of the air used to cool the server room through a central chilled-water plant and cooling towers; he added that this consumes more energy and water. By comparison, Facebook's mechanical systems have no chillers or cooling towers, relying primarily on air economization and the aforementioned misting system. "It's like using a window-mounted air conditioner to cool a room instead of putting a fan in a window when the outside temperatures are cooler than the temperature in the room," Lee described. This marks the first time that Facebook has publicized its water usage efficiency measurement. Last week, Facebook also shared information about its carbon footprint, outlining a goal to earn at least 25 percent of its data center energy from clean and renewable sources by 2015. Facebook is sharing this information as part of its commitment to the Open Compute Project. The Menlo Park, Calif.-based social media company plans to continue releasing water efficiency metrics on a quarterly basis. That reporting will expand soon, starting with the second building at the Prineville data center, whose WUE metrics will be available next year, along with the Forest City, N.C. facility.

Image via Open Compute Project. This post was originally published on SmartPlanet's The Bulletin blog.
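To illustrate how the metric works, here is a worked example with hypothetical quarterly totals (these are not Facebook's actual figures; only the resulting 0.22 L/kWh value comes from the article):

\[ \mathrm{WUE} = \frac{\text{cooling water used (L)}}{\text{IT equipment energy (kWh)}} = \frac{220{,}000\ \text{L}}{1{,}000{,}000\ \text{kWh}} = 0.22\ \text{L/kWh} \]

Lower values are better, since they mean less water is consumed per unit of computing energy delivered.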
A risk factor is something that increases your chance of getting a disease or condition. It is possible to develop COPD with or without the risk factors listed below. However, the more risk factors you have, the greater your likelihood of developing COPD. If you have a number of risk factors, ask your healthcare provider what you can do to reduce your risk. Risk factors include:

Smoking: The most important risk factor for COPD is cigarette smoking, and almost all COPD cases are caused by it. However, not all smokers develop COPD; factors in your environment or genetic make-up also contribute to the development of COPD. Smoking more "exotic" forms of tobacco, such as Chinese waterpipes, can be even more harmful. In some cases, these can increase your risk more than traditional cigarettes. Research suggests that people who are chronically exposed to secondhand smoke (in any form) have an increased risk of developing COPD.

Genetics: COPD usually develops in older persons with a long history of cigarette smoking. However, one form of emphysema has a genetic component: it runs in families and is more common in people of northern European descent. People with this form of COPD have a hereditary deficiency of a blood component known as alpha-1-protease inhibitor (alpha-1-antitrypsin [AAT]). People with this defect can develop COPD at an earlier age. If you have close relatives who developed COPD in their 30s or 40s, your risk of this type of COPD may be elevated. A deficiency of AAT can be detected with blood tests.

Age: You are more likely to develop COPD as you get older. This is partly related to the number of cigarettes smoked and the number of years as a smoker. A history of frequent childhood lung infections also increases your risk of developing COPD.

Sex: COPD is much more common in men than in women, but this may be largely related to the higher rate of smoking among men. As the number of women who have significant smoking histories has increased, the number of COPD-related deaths in women has also risen.

Exposure to Environmental and Occupational Pollutants: Chronic exposure to dust, gases, chemicals, and biomass fuels (including smoke from burning wood, charcoal, and crop residue) increases your risk of developing COPD. Exposure to these can also worsen symptoms of the disease.

Reviewer: Michael Woods, MD | Review Date: 03/2016 | Update Date: 03/15/2015