Like Cola was an unsuccessful cola brand that appeared on the American market in 1982. It was introduced by 7Up, which at the time was owned by Philip Morris, under the slogan "Made From The Cola Nut." Like Cola was one of the first (nearly) caffeine-free colas and was also available in a diet variant. The regular version came in red packaging with blue lettering; for the diet variant this color scheme was reversed.
Like Cola was in fact only 99% caffeine-free; at the time, a legal requirement still obliged every cola to contain at least a small amount of caffeine.
Cola brands | wiki |
Surnames
Georges Rouquier (1909–1989), French actor and film director;
Louis Rouquier (1863–1939), Occitan writer and French politician (mayor of Levallois-Perret);
Raphaël Rouquier (born 1969), mathematician. | wiki |
8×50mmR may refer to:
8×50R Lebel, a cartridge using smokeless powder
8×50mmR Mannlicher, a cartridge of Austria-Hungary from 1890 | wiki |
Drowning is a type of suffocation induced by the submersion of the mouth and nose in a liquid. Most instances of fatal drowning occur alone or in situations where others present are either unaware of the victim's situation or unable to offer assistance. After successful resuscitation, drowning victims may experience breathing problems, vomiting, confusion, or unconsciousness. Occasionally, victims may not begin experiencing these symptoms until several hours after they are rescued. An incident of drowning can also cause further complications for victims due to low body temperature, aspiration of vomit, or acute respiratory distress syndrome (respiratory failure from lung inflammation).
Drowning is more likely to happen when spending extended periods of time near large bodies of water. Risk factors for drowning include alcohol use, drug use, epilepsy, minimal swim training or a complete lack of training, and, in the case of children, a lack of supervision. Common drowning locations include natural and man-made bodies of water, bathtubs, and swimming pools.
Drowning occurs when a person spends too much time with their nose and mouth submerged in a liquid to the point of being unable to breathe. If this is not followed by an exit to the surface, low oxygen levels and excess carbon dioxide in the blood trigger a neurological state of breathing emergency, which results in increased physical distress and occasional contractions of the vocal folds. Significant amounts of water usually only enter the lungs later in the process.
While the word "drowning" is commonly associated with fatal results, drowning may be classified into three different types: drowning that results in death, drowning that results in long-lasting health problems, and drowning that results in no health complications. Sometimes the term "near-drowning" is used in the latter cases. Among children who survive, health problems occur in about 7.5% of cases.
Steps to prevent drowning include teaching children and adults to swim and to recognise unsafe water conditions, never swimming alone, use of personal flotation devices on boats and when swimming in unfavourable conditions, limiting or removing access to water (such as with fencing of swimming pools), and exercising appropriate supervision. Treatment of victims who are not breathing should begin with opening the airway and providing five breaths of mouth-to-mouth resuscitation. Cardiopulmonary resuscitation (CPR) is recommended for a person whose heart has stopped beating and has been underwater for less than an hour.
Causes
A major contributor to drowning is the inability to swim. Other contributing factors include the state of the water itself, distance from a solid footing, physical impairment, or prior loss of consciousness. Anxiety brought on by fear of drowning or water itself can lead to exhaustion, thus increasing the chances of drowning.
Approximately 90% of drownings take place in freshwater (rivers, lakes, and a relatively small number of swimming pools); the remaining 10% take place in seawater. Drownings in other fluids are rare and often related to industrial accidents. In New Zealand's early colonial history, so many settlers died while trying to cross the rivers that drowning was called "the New Zealand death."
People have drowned in very shallow water while lying face down.
Death can occur due to complications following an initial drowning. Inhaled fluid can act as an irritant inside the lungs. Even small quantities can cause the extrusion of liquid into the lungs (pulmonary edema) over the following hours; this reduces the ability to exchange the air and can lead to a person "drowning in their own body fluid." Vomit and certain poisonous vapors or gases (as in chemical warfare) can have a similar effect. The reaction can take place up to 72 hours after the initial incident and may lead to a serious injury or death.
Risk factors
Many behavioral and physical factors are related to drowning:
Drowning is the most common cause of death for people with seizure disorders, largely in bathtubs. People with epilepsy are more likely to die from accidents such as drowning; this risk is especially elevated in low- and middle-income countries compared with high-income countries.
The use of alcohol increases the risk of drowning across developed and developing nations. Alcohol is involved in approximately 50% of fatal drownings, and 35% of non-fatal drownings.
Inability to swim can lead to drowning. Participation in formal swimming lessons can reduce this risk. The optimal age to start the lessons is childhood, between one and four years of age.
Feeling overly tired reduces swimming performance. This exhaustion can be aggravated by anxious movements motivated by fear during or in anticipation of drowning. An overconfident appraisal of one's own physical capabilities can lead to "swimming out too far" and exhaustion before returning to solid footing.
Free access to water can be hazardous, especially to young children. Barriers can prevent young children from gaining access to the water.
Ineffective supervision is a risk, since drowning can occur anywhere there is water, even in the presence of lifeguards.
Risk varies with location depending on age. Children between one and four most commonly drown in home swimming pools, while drownings in natural water settings increase with age; more than half of drownings among those fifteen years and older occur in natural water environments.
A familial or genetic history of sudden cardiac arrest (SCA) or sudden cardiac death (SCD) can predispose children to drowning. Extensive genetic testing and/or consultation with a cardiologist should be done when there is a high suspicion of a familial history and/or clinical evidence of sudden cardiac arrest or sudden cardiac death.
Individuals with undetected primary cardiac arrhythmias are also at risk, as cold water immersion or aquatic exercise can trigger these arrhythmias.
Population groups at risk in the US are the old and young.
Youth: drowning rates are highest for children under five years of age and people fifteen to twenty-four years of age.
Minorities: the fatal unintentional drowning rate for African Americans above the age of 29 between 1999 and 2010 was significantly higher than that of white people above the age of 29. The fatal drowning rate of African American children aged five to fourteen is almost three times that of white children in the same age range, and 5.5 times higher in swimming pools. These disparities might be associated with a lack of basic swimming education in some minority populations.
Freediving
Some additional causes of drowning can also happen during freediving activities:
Ascent blackout, also called deep water blackout, is caused by hypoxia during ascent from depth. The partial pressure of oxygen in the lungs under pressure at the bottom of a deep free dive is adequate to support consciousness but drops below the blackout threshold as the water pressure decreases on the ascent. It usually occurs when arriving near the surface as the pressure approaches normal atmospheric pressure.
Shallow water blackout is caused by hyperventilation prior to swimming or diving. The primary urge to breathe is triggered by rising carbon dioxide (CO2) levels in the bloodstream. The body detects CO2 levels accurately and relies on this to control breathing. Hyperventilation reduces the carbon dioxide content of the blood but leaves the diver susceptible to a sudden loss of consciousness from hypoxia, without warning. There is no bodily sensation that warns a diver of an impending blackout, and people (often capable swimmers swimming under the surface in shallow water) become unconscious and drown quietly without alerting anyone that there is a problem; they are typically found at the bottom.
Pathophysiology
Drowning is split into four stages:
Breath-hold under voluntary control until the urge to breathe due to hypercapnia becomes overwhelming
Fluid is swallowed and/or aspirated into the airways
Cerebral anoxia stops breathing and aspiration
Cerebral injury due to anoxia becomes irreversible
In the early stages of drowning, a person holds their breath to prevent water from entering their lungs. When this is no longer possible, a small amount of water entering the trachea causes a muscular spasm that seals the airway and prevents further passage of water. If the process is not interrupted, loss of consciousness due to hypoxia is followed by cardiac arrest.
Oxygen deprivation
A conscious person will hold their breath (see Apnea) and will try to access air, often resulting in panic, including rapid body movement. This uses up more oxygen in the bloodstream and reduces the time until unconsciousness. The person can voluntarily hold their breath for some time, but the breathing reflex will increase until the person tries to breathe, even when submerged.
The breathing reflex in the human body is weakly related to the amount of oxygen in the blood but strongly related to the amount of carbon dioxide (see Hypercapnia). During an apnea, the oxygen in the body is used by the cells and excreted as carbon dioxide. Thus, the level of oxygen in the blood decreases, and the level of carbon dioxide increases. Increasing carbon dioxide levels lead to a stronger and stronger breathing reflex, up to the breath-hold breakpoint, at which the person can no longer voluntarily hold their breath. This typically occurs at an arterial partial pressure of carbon dioxide of 55 mm Hg but may differ significantly between people.
The breath-hold breakpoint can be suppressed or delayed, either intentionally or unintentionally. Hyperventilation before any dive, deep or shallow, flushes out carbon dioxide in the blood resulting in a dive commencing with an abnormally low carbon dioxide level: a potentially dangerous condition known as hypocapnia. The level of carbon dioxide in the blood after hyperventilation may then be insufficient to trigger the breathing reflex later in the dive.
Following this, a blackout may occur before the diver feels an urgent need to breathe. This can occur at any depth and is common in distance breath-hold divers in swimming pools. Both deep and distance free divers often use hyperventilation to flush out carbon dioxide from the lungs to suppress the breathing reflex for longer. It is important not to mistake this for an attempt to increase the body's oxygen store. The body at rest is fully oxygenated by normal breathing and cannot take on any more. Breath-holding in water should always be supervised by a second person, as by hyperventilating, one increases the risk of shallow water blackout because insufficient carbon dioxide levels in the blood fail to trigger the breathing reflex.
A continued lack of oxygen in the brain, hypoxia, will quickly render a person unconscious, usually around a blood partial pressure of oxygen of 25–30 mmHg. An unconscious person rescued with an airway still sealed from laryngospasm stands a good chance of a full recovery. Artificial respiration is also much more effective without water in the lungs. At this point, the person stands a good chance of recovery if attended to within minutes. More than 10% of drownings may involve laryngospasm, but the evidence suggests that it is not usually effective at preventing water from entering the trachea. The lack of water found in the lungs during autopsy does not necessarily mean there was no water at the time of drowning, as small amounts of freshwater are absorbed into the bloodstream. Hypercapnia and hypoxia both contribute to laryngeal relaxation, after which the airway is open through the trachea. There is also bronchospasm and mucous production in the bronchi associated with laryngospasm, and these may prevent water entry at terminal relaxation.
The hypoxemia and acidosis caused by asphyxia in drowning affect various organs. There can be central nervous system damage, cardiac arrhythmia, pulmonary injury, reperfusion injury, and multiple-organ secondary injury with prolonged tissue hypoxia.
A lack of oxygen or chemical changes in the lungs may cause the heart to stop beating. This cardiac arrest stops the flow of blood and thus stops the transport of oxygen to the brain. Cardiac arrest used to be the traditional point of death, but at this point, there is still a chance of recovery. The brain cannot survive long without oxygen, and the continued lack of oxygen in the blood, combined with the cardiac arrest, will lead to the deterioration of brain cells, causing first brain damage and eventually brain death after six minutes from which recovery is generally considered impossible. Hypothermia of the central nervous system may prolong this.
The extent of central nervous system injury largely determines survival and the long-term consequences of drowning. In the case of children, most survivors are found within 2 minutes of immersion, and most fatalities are found after 10 minutes or more.
Water aspiration
If water enters the airways of a conscious person, the person will try to cough up the water or swallow it, often inhaling more water involuntarily. When water enters the larynx or trachea, both conscious and unconscious people experience laryngospasm, in which the vocal cords constrict, sealing the airway. This prevents water from entering the lungs. Because of this laryngospasm, in the initial phase of drowning, water enters the stomach, and very little water enters the lungs. Though laryngospasm prevents water from entering the lungs, it also interferes with breathing. In most people, the laryngospasm relaxes sometime after unconsciousness, and water can then enter the lungs, causing a "wet drowning." However, about 7–10% of people maintain this seal until cardiac arrest. This has been called "dry drowning", as no water enters the lungs. In forensic pathology, water in the lungs indicates that the person was still alive at the point of submersion. An absence of water in the lungs may be either a dry drowning or indicates a death before submersion.
Aspirated water that reaches the alveoli destroys the pulmonary surfactant, which causes pulmonary edema and decreased lung compliance, compromising oxygenation in affected parts of the lungs. This is associated with metabolic acidosis and secondary fluid and electrolyte shifts. During alveolar fluid exchange, diatoms present in the water may pass through the alveolar wall into the capillaries to be carried to internal organs. The presence of these diatoms may be diagnostic of drowning.
Of people who have survived drowning, almost one-third will experience complications such as acute lung injury (ALI) or acute respiratory distress syndrome (ARDS). ALI/ARDS can be triggered by pneumonia, sepsis, and water aspiration. These conditions are life-threatening disorders that can result in death if not treated promptly. During drowning, aspirated water enters the lung tissues, causes a reduction in alveolar surfactant, obstructs ventilation, and triggers a release of inflammatory mediators which results in hypoxia. Specifically, upon reaching the alveoli, hypotonic liquid found in freshwater dilutes pulmonary surfactant, destroying the substance. Comparatively, aspiration of hypertonic seawater draws liquid from the plasma into the alveoli and similarly causes damage to surfactant by disrupting the alveolar-capillary membrane. Still, there is no clinical difference between salt and freshwater drowning. Once someone has reached definitive care, supportive care strategies such as mechanical ventilation can help to reduce the complications of ALI/ARDS.
Whether a person drowns in freshwater or salt water makes no difference in respiratory management or its outcome. People who drown in freshwater may experience worse hypoxemia early in their treatment; however, this initial difference is short-lived.
Cold-water immersion
Submerging the face in sufficiently cool water triggers the diving reflex, which is common to air-breathing vertebrates, especially marine mammals such as whales and seals. This reflex protects the body by putting it into an energy-saving mode to maximise the time it can stay underwater. The strength of the reflex is greater in colder water, and it has three principal effects:
Bradycardia, a slowing of the heart rate to less than 60 beats per minute.
Peripheral vasoconstriction, the restriction of the blood flow to the extremities to increase the blood and oxygen supply to the vital organs, especially the brain.
Blood shift, the shifting of blood to the thoracic cavity, the region of the chest between the diaphragm and the neck, to avoid the collapse of the lungs under higher pressure during deeper dives.
The reflex action is automatic and allows both a conscious and an unconscious person to survive longer without oxygen underwater than in a comparable situation on dry land. The exact mechanism for this effect has been debated and may be a result of brain cooling similar to the protective effects seen in people who are treated with deep hypothermia.
The actual cause of death in cold or very cold water is usually lethal bodily reactions to increased heat loss and to freezing water, rather than any loss of core body temperature. Of those who die after plunging into freezing seas, around 20% die within 2 minutes from cold shock (uncontrolled rapid breathing and gasping causing water inhalation, a massive increase in blood pressure and cardiac strain leading to cardiac arrest, and panic), another 50% die within 15 – 30 minutes from cold incapacitation (loss of use and control of limbs and hands for swimming or gripping, as the body 'protectively' shuts down the peripheral muscles of the limbs to protect its core), and exhaustion and unconsciousness cause drowning, claiming the rest within a similar time. A notable example of this occurred during the sinking of the Titanic, in which most people who entered the water died within 15–30 minutes.
Submersion into cold water can induce cardiac arrhythmias (abnormal heart rhythms) in healthy people, sometimes causing strong swimmers to drown. The physiological effects caused by the diving reflex conflict with the body's cold shock response, which includes a gasp and uncontrollable hyperventilation leading to aspiration of water. While breath-holding triggers a slower heart rate, cold shock activates tachycardia, an increase in heart rate. It is thought that the conflict between these nervous system responses may account for the arrhythmias of cold water submersion.
Water conducts heat away from the body far more effectively than air, so body heat is lost quickly in water, even in 'cool' swimming waters around 70 °F (~20 °C). Very cold water can lead to death in as little as one hour, and water temperatures hovering at freezing can lead to death in as little as 15 minutes, because cold water has other lethal effects on the body beyond cooling. Hence, hypothermia itself is not usually the reason for drowning or the clinical cause of death for those who drown in cold water.
Upon submersion into cold water, remaining calm and preventing loss of body heat is paramount. While awaiting rescue, swimming or treading water should be limited to conserve energy, and the person should attempt to remove as much of the body from the water as possible; attaching oneself to a buoyant object can improve the chance of survival should unconsciousness occur.
Hypothermia (and cardiac arrest) presents a risk for survivors of immersion. This risk increases if the survivor—feeling well again—tries to get up and move, not realizing their core body temperature is still very low and will take a long time to recover.
Most people who experience cold-water drowning do not develop hypothermia quickly enough to decrease cerebral metabolism before ischemia and irreversible hypoxia occur. The neuroprotective effect appears to require very cold water temperatures.
Diagnosis
The World Health Organization in 2005 defined drowning as "the process of experiencing respiratory impairment from submersion/immersion in liquid." This definition does not imply death or even the necessity for medical treatment after removing the cause, nor that any fluid enters the lungs. The WHO classifies drowning outcomes as death, morbidity, and no morbidity. There was also consensus that the terms wet, dry, active, passive, silent, and secondary drowning should no longer be used.
Experts differentiate between distress and drowning.
Distress – people in trouble, but who can still float, signal for help, and take action.
Drowning – people suffocating and in imminent danger of death within seconds.
Forensics
Forensic diagnosis of drowning is considered one of the most difficult in forensic medicine. External examination and autopsy findings are often non-specific, and the available laboratory tests are often inconclusive or controversial. The purpose of an investigation is to distinguish whether the death was due to immersion or whether the body was immersed postmortem. The mechanism in acute drowning is hypoxemia and irreversible cerebral anoxia due to submersion in liquid.
Drowning would be considered a possible cause of death if the body was recovered from a body of water, near a fluid that could plausibly have caused drowning, or found with the head immersed in a fluid. A medical diagnosis of death by drowning is generally made after other possible causes of death have been excluded by a complete autopsy and toxicology tests. Indications of drowning are seldom unambiguous and may include bloody froth in the airway, water in the stomach, cerebral edema, and petrous or mastoid hemorrhage. Some evidence of immersion may be unrelated to the cause of death, and lacerations and abrasions may have occurred before or after immersion or death.
Diatoms should normally never be present in human tissue unless water was aspirated. Their presence in tissues such as bone marrow suggests drowning; however, they are present in soil and the atmosphere, and samples may be contaminated. An absence of diatoms does not rule out drowning, as they are not always present in water. A match of diatom shells to those found in the water may provide supporting evidence of the place of death. Drowning in saltwater can leave different concentrations of sodium and chloride ions in the left and right chambers of the heart, but they will dissipate if the person survived for some time after the aspiration, or if CPR was attempted, and have been described in other causes of death.
Most autopsy findings relate to asphyxia and are not specific to drowning. The signs of drowning are degraded by decomposition. Large amounts of froth will be present around the mouth and nostrils and in the upper and lower airways in freshly drowned bodies. The volume of froth is much greater in drowning than from other origins. Lung density may be higher than normal, but normal weights are possible after cardiac arrest or vasovagal reflex. The lungs may be overinflated and waterlogged, filling the thoracic cavity. The surface may have a marbled appearance, with darker areas associated with collapsed alveoli interspersed with paler aerated areas. Fluid trapped in the lower airways may block the passive collapse that is normal after death. Hemorrhagic bullae of emphysema may be found. These are related to the rupture of alveolar walls. These signs, while suggestive of drowning, are not conclusive.
Prevention
It is estimated that more than 85% of drownings could be prevented by supervision, training in water skills, technology, and public education.
Surveillance: Watching swimmers is a basic task, because drowning can be silent and unnoticed; a drowning person may not be able to attract attention, often because they have become unconscious. Surveillance of children is especially important. The highest rates of drowning globally are among children under five, and young children should be supervised regardless of whether they can already swim. The danger increases when they are alone. A baby can drown in the bathtub, in the toilet, and even in a small bucket filled with less than an inch of water. It only takes around 2 minutes underwater for an adult to lose consciousness, and only between 30 seconds and 2 minutes for a small child to die. Choosing supervised swimming places is safer. Many pools and bathing areas either have lifeguards or a drowning detection system. Bystanders are also important in detecting drownings and notifying lifeguards (in person or by phone, alarm, etc.), who may be unaware if distracted or busy. Evidence for the usefulness of pool alarms is poor. The World Health Organization recommends analysing when swimming zones are most crowded and increasing the number of lifeguards at those times.
Learning to swim: Being able to swim is one of the best defenses against drowning. It is recommended that children learn to swim in a safe and supervised environment when they are between 1 and 4 years old. Learning to swim is also possible in adults by using the same methods as children. It's still possible to drown even after learning to swim (because of the state of the water and other circumstances), so it's recommended to choose swimming places that are safe and kept under surveillance.
Additional education: The WHO recommends training the general public in first-aid for the drowned, cardiopulmonary resuscitation (CPR), and to behave safely when in the water. It is recommended to teach those who cannot swim to keep themselves away from deep waters.
Pool fencing: Every private and public swimming pool should be fenced and enclosed on every side, so no person can access the water unsupervised. The "Raffarin law", applied in France in 2003, made the fencing of pools mandatory.
Pool drains: Swimming pools often have drainage systems to cycle the water. Drains without covers can injure swimmers by trapping hair or other parts of the body, leading to immobilization and drowning. Drains should not have excessively strong suction, and it is recommended that a pool have many small drainage holes instead of a single large one. Periodic inspections are required to certify that the system is working properly.
Caution with certain conditions: Some conditions require one to be cautious when near water. For example, epilepsy and other seizure disorders may increase the possibility of drowning during a convulsion, making it more dangerous to swim, dive, and bathe. It is recommended that people with these conditions take showers rather than baths and are taught about the dangers of drowning.
Alcohol or drugs: Alcohol and drugs increase the probability of drowning. This danger is greater in bars near the water and parties on boats where alcohol is consumed. For example, Finland sees several drownings every year at Midsummer weekend as Finnish people spend more time in and around the lakes and beaches, often after having consumed alcohol.
Lifejacket use: Children who cannot swim and other people at risk of drowning should wear a fastened, well-fitting lifejacket when near or in the water. Other flotation devices (inflatable inner tubes, water wings, foam tubes, etc.) may be useful, although they are usually considered toys. Some flotation devices are considered safe, such as the professional circle-shaped lifebuoy (hoop-buoy, ring-buoy, life-ring, life-donut, lifesaver, or life preserver), which is designed to be thrown, and some other professional variants used by lifeguards in their rescues.
Depth awareness: Diving accidents in pools can cause serious injury. Up to 21% of shallow-water diving accidents can cause spinal injury, occasionally leading to death. Between 1.2% and 22% of all spinal injuries are from diving accidents. If the person does not die, the injury could cause permanent paralysis.
Avoid dangerous waters: Avoid swimming in waters that are too turbulent, have large waves, contain dangerous animals, or are too cold. Also avoid currents that can drag people or debris; these are often turbulent and foamy. If caught in such a current, swim out of it gradually, moving diagonally until reaching the shore.
Navigating safely: Many people who die by drowning die in navigation accidents. Safe navigation practices include being informed of the state of the sea and equipping the boat with regulatory instruments to keep people afloat. These instruments are lifejackets (see 'lifejacket use' above) and professional lifebuoys with the shape of a circle (ring-buoy, hoop-buoy, life-ring, life-donut, lifesaver, or life preserver).
Use the "buddy system": Don't swim alone, but with another person who can help in case of a problem.
Rescue robots and drones: Remote-controlled devices now exist that can assist in a water rescue. Floating rescue robots can move across the water, allowing the victim to hold on and be carried out of the water. Flying drones are very fast, can drop life jackets from the air, and may help to locate the victim's position.
Follow the rules: Many people who drown fail to follow the safety guidelines of the area. It is important to pay attention to signage indicating whether swimming is allowed and whether lifeguards or coastguards are on duty.
Water safety
The concept of water safety involves the procedures and policies that are directed to prevent people from drowning or from becoming injured in water.
Time limits
The time a person can safely stay underwater depends on many factors, including energy consumption, number of prior breaths, physical condition, and age. An average person can last between one and three minutes before falling unconscious and around ten minutes before dying. In an unusual case, a person was resuscitated after 65 minutes underwater.
Management
Rescue
When a person is drowning or a swimmer becomes missing, a fast water rescue may become necessary to take that person out of the water as soon as possible. Drowning is not necessarily violent or loud, with splashing and cries; it can be silent.
Rescuers should avoid endangering themselves unnecessarily; whenever possible, they should assist from a safe position on solid ground, such as a boat, a pier, or any patch of land near the victim. The fastest way to assist is to throw a buoyant object (such as a lifebuoy). It is very important to avoid aiming directly at the victim, since even the lightest lifebuoys weigh over 2 kilograms and can stun, injure, or even render a person unconscious if they strike the head. Alternatively, one can try to pull the victim out of the water by holding out an object to grasp, such as a rope, an oar, a pole, or one's own arm or hand. This carries the risk of the rescuer being pulled into the water by the victim, so the rescuer must take a firm stance, or lie down, and secure themselves to a stable point. Alternatively, there are modern flying drones that drop life jackets.
Bystanders should immediately call for help. A lifeguard should be called, if present. If not, emergency medical services and paramedics should be contacted as soon as possible. Less than 6% of people rescued by lifeguards need medical attention, and only 0.5% need CPR. The statistics worsen when rescues are made by bystanders.
If lifeguards or paramedics cannot be called, bystanders may have to rescue the drowning person themselves. Alternatively, there are small floating robots that can reach the victim, since a rescue by a person in the water carries a risk for the rescuer, who could drown. The death of the would-be rescuer can happen because of the water conditions, the instinctive drowning response of the victim, the physical effort, and other problems.
After reaching the victim, first contact made by the rescuer is important. A drowning person in distress is likely to cling to the rescuer in an attempt to stay above the water surface, which could submerge the rescuer in the process. To avoid this, it is recommended that the rescuer approaches the panicking person with a buoyant object or extending a hand, so the victim has something to grasp. It can even be appropriate to approach from behind, taking one of the victim's arms, and pressing it against the victim's back to restrict unnecessary movement. Communication is also important.
If the victim clings to the rescuer and the rescuer cannot control the situation, one option is to dive underwater (as drowning people tend to move in the opposite direction, seeking the water surface) and consider a different approach to help the drowning victim. It is possible that the victim has already sunk beneath the water surface. If this has happened, the rescue requires caution, as the victim could still be conscious and cling to the rescuer underwater. The rescuer must bring the victim to the surface by grabbing either (or both) of the victim's arms and swimming upward, which may encourage the victim to move in the same direction and make the task easier, especially in the case of an unconscious victim. If the victim is in deeper water (or otherwise complicates the rescue too much), the rescuer should dive, take hold of the victim from behind, and ascend vertically to the water surface holding the victim.
Finally, the victim must be taken out of the water, which is achieved by a towing maneuver. This is done by placing the victim's body in a face-up horizontal position, passing one hand under the victim's armpit and grabbing the jaw with it, and towing by swimming backwards. The victim's mouth and nose must be kept above the water surface.
If the person is cooperative, the towing may be done in a similar fashion with the hands going under the victim's armpits. Other styles of towing are possible, but all of them keep the victim's mouth and nose above the water.
Unconscious people may be pulled in an easier way: pulling on a wrist or on the shirt while they are in a face-up horizontal position. Victims with suspected spinal injuries can require a more specific grip and special care, and a backboard (spinal board) may be needed for their rescue.
For unconscious people, an in-water resuscitation could increase the chances of survival by a factor of about three, but this procedure requires both medical and swimming skills, and it becomes impractical to send anyone besides the rescuer to execute that task. Chest compressions require a suitable platform, so an in-water assessment of circulation is pointless. If the person does not respond after a few breaths, cardiac arrest may be assumed, and getting them out of the water becomes a priority.
First aid
The checks for responsiveness and breathing are carried out with the person lying horizontally on their back (supine). If the person is unconscious but breathing, the recovery position is appropriate. If they are not breathing, rescue ventilation is necessary. Drowning can produce a gasping pattern of apnea while the heart is still beating, and ventilation alone may be sufficient. The airway-breathing-circulation (ABC) sequence should be followed, rather than starting with compressions as is typical in cardiac arrest, because the basic problem is lack of oxygen. If the victim is not a baby, it is recommended to start with 5 normal rescue breaths, as the initial ventilation may be difficult because of water in the airways, which can interfere with effective alveolar inflation. Thereafter, a continual sequence of 2 rescue breaths and 30 chest compressions is applied. This alternation is repeated until vital signs are re-established, the rescuers are unable to continue, or advanced life support is available.
For babies (very small infants), the procedure is slightly modified. In each sequence of rescue breaths (the 5 initial breaths and the subsequent series of 2 breaths), the rescuer's mouth covers the baby's mouth and nose simultaneously, because a baby's face is too small. In addition, the interposed series of 30 chest compressions is applied by pressing with only two fingers (because a baby's body is more fragile) on the breastbone, approximately on its lower part.
Methods to expel water from the airway, such as abdominal thrusts, the Heimlich maneuver, or positioning the head downwards, should be avoided: there is no obstruction by solids, and they delay the start of ventilation and increase the risk of vomiting. The risk of death is increased, as the aspiration of stomach contents is a common complication of resuscitation efforts.
Treatment for hypothermia may also be necessary. However, in those who are unconscious, it is recommended their temperature not be increased above 34 degrees C. Because of the diving reflex, people submerged in cold water and apparently drowned may revive after a long period of immersion. Rescuers retrieving a child from water significantly below body temperature should attempt resuscitation even after protracted immersion.
Medical care
People with a near-drowning experience who have normal oxygen levels and no respiratory symptoms should be observed in a hospital environment for a period of time to ensure there are no delayed complications. The target of ventilation is to achieve 92% to 96% arterial saturation and adequate chest rise. Positive end-expiratory pressure will improve oxygenation. Drug administration via peripheral veins is preferred over endotracheal administration. Hypotension remaining after oxygenation may be treated by rapid crystalloid infusion. Cardiac arrest in drowning usually presents as asystole or pulseless electrical activity. Ventricular fibrillation is more likely to be associated with complications of pre-existing coronary artery disease, severe hypothermia, or the use of epinephrine or norepinephrine.
While surfactant may be used, no high-quality evidence exists that looks at this practice. Extracorporeal membrane oxygenation may be used in those who cannot be oxygenated otherwise. Steroids are not recommended.
Prognosis
People who have drowned and arrive at a hospital with spontaneous circulation and breathing usually recover with good outcomes. Early provision of basic and advanced life support improves the probability of a positive outcome.
A longer duration of submersion is associated with a lower probability of survival and a higher probability of permanent neurological damage.
Contaminants in the water can cause bronchospasm and impaired gas exchange and can cause secondary infection with delayed severe respiratory compromise.
Low water temperature can cause ventricular fibrillation, but hypothermia during immersion can also slow the metabolism, allowing longer hypoxia before severe damage occurs. Hypothermia that reduces brain temperature significantly can improve the outcome. A reduction of brain temperature by 10 °C decreases ATP consumption by approximately 50%, which can double the time the brain can survive.
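As a rough illustration of the arithmetic behind this figure (an extrapolation that assumes the stated halving per 10 °C holds across the relevant range, not a clinical rule), the tolerable time without oxygen scales approximately as

$$ t(\Delta T) \approx t_{\text{normothermia}} \times 2^{\Delta T / 10\,^\circ\mathrm{C}}, $$

so a brain cooled by 10 °C would consume ATP at about half the normal rate and tolerate anoxia for about twice as long, while a 20 °C drop would reduce consumption to roughly a quarter and extend the tolerable time roughly fourfold.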
The younger the person, the better the chances of survival. In one case, a child submerged in cold water for 66 minutes was resuscitated without apparent neurological damage. However, over the long term, significant deficits were noted, including a range of cognitive difficulties, particularly general memory impairment, although recent magnetic resonance imaging (MRI) and magnetoencephalography (MEG) were within the normal range.
Children
Drowning is a major worldwide cause of death and injury in children. An estimated 20% of non-fatal drowning victims may suffer varying degrees of ischemic and/or hypoxic brain injury. Hypoxic injury refers to a lack or absence of oxygen in certain organs or tissues; ischemic injury refers to inadequate blood supply to certain organs or parts of the body. These injuries can lead to an increased risk of long-term morbidity. Prolonged hypothermia and hypoxemia from nonfatal submersion can result in cardiac dysrhythmias such as ventricular fibrillation, sinus bradycardia, or atrial fibrillation. Long-term neurological outcomes of drowning cannot be predicted accurately during the early stages of treatment. Although survival after long submersion times, mostly by young children, has been reported, many survivors remain severely and permanently neurologically compromised after much shorter submersion times. Factors affecting the probability of long-term recovery with mild deficits or full function in young children include the duration of submersion, whether advanced life support was needed at the accident site, the duration of cardiopulmonary resuscitation, and whether spontaneous breathing and circulation are present on arrival at the emergency room. Submersion for more than 5–10 minutes usually leads to a poorer prognosis.
Data on the long-term outcome are scarce and unreliable. Neurological examination at the time of discharge from the hospital does not accurately predict long-term outcomes. Some people with severe brain injury who were transferred to other institutions died months or years after the drowning and are recorded as survivors. Nonfatal drownings have been estimated as two to four times more frequent than fatal drownings.
Long-term effects of drowning in children
Long-term effects of nonfatal drowning include damage to major organs such as the brain, lungs, and kidneys. Prolonged submersion time is associated with hypoxic-ischemic brain injury in susceptible areas of the brain such as the hippocampus, insular cortex, and/or basal ganglia. The severity of hypoxic-ischemic damage to these brain structures corresponds to the severity of global damage to areas of the cerebral cortex. The cerebral cortex is a brain structure responsible for language, memory, learning, emotion, intelligence, and personality, and global damage to it can affect one or more of these primary functions. Treatment of pulmonary complications from drowning depends on the amount of lung injury that occurred during the incident. These lung injuries can be caused by water aspiration and by irritants present in the water, such as microbial pathogens, leading to complications such as lung infection that can develop into adult respiratory distress syndrome later on. Some literature suggests that drowning can lead to acute kidney injury from lack of blood flow and oxygenation due to shock and global hypoxia. These kidney injuries can cause irreversible damage to the kidneys and may require long-term treatment such as renal replacement therapy.
Infant risk
Children are overrepresented in drowning statistics, with children aged 0–4 years having the highest number of deaths due to unintentional drowning. In 2019 alone, 32,070 children between the ages of 1 and 4 died as a result of unintentional drowning, equating to an age-adjusted fatality rate of 6.04 per 100,000 children. Infants are particularly vulnerable because, while their mobility develops quickly, their perception of their own ability to move between surfaces develops more slowly. An infant can have full control of their movements but will not recognize that water does not provide the same support for crawling as a hardwood floor would. An infant's capacity for movement needs to be matched by an appropriate perception of which surfaces support locomotion (and which do not) to avoid drowning. By crawling and interacting with their environment, infants learn to distinguish surfaces that offer support for locomotion from those that do not, and over several weeks their perception of surface characteristics improves, as does their perception of the risk of falling.
Epidemiology
In 2019, roughly 236,000 people died from drowning, making it the third leading cause of unintentional death globally, trailing traffic injuries and falls.
In many countries, drowning is one of the main causes of preventable death for children under 12 years old. In the United States in 2006, 1,100 people under 20 years of age died from drowning. The United Kingdom has 450 drownings per year, or 1 per 150,000 people, whereas in the United States there are about 6,500 drownings yearly, around 1 per 50,000 people. In Asia, suffocation and drowning were the leading causes of preventable death for children under five years of age; a 2008 report by UNICEF found that in Bangladesh, for instance, 46 children drown each day.
Due to a generally increased likelihood for risk-taking, males are four times more likely to have submersion injuries.
In the fishing industry, the largest group of drownings is associated with vessel disasters in bad weather, followed by man-overboard incidents and boarding accidents at night, either in foreign ports or under the influence of alcohol. Scuba diving deaths are estimated at 700 to 800 per year, associated with inadequate training and experience, exhaustion, panic, carelessness, and barotrauma.
South Asia
Deaths due to drowning are high in the South Asian region, with India, China, Pakistan and Bangladesh accounting for up to 52% of global drowning deaths. Deaths due to drowning are known to be high in the Sundarbans region in West Bengal and in Bihar.
According to the Daily Times, boats are the preferred mode of transport in rural Pakistan where they are available. Due to the influence of female modesty culture in Pakistan, women are not encouraged to swim.
Africa
In lower-income countries, cases of drowning and deaths caused by drowning are under-reported and data collection is limited. Many low-income countries in Africa have the highest rates of drowning, with incidence rates calculated from population-based studies across 15 different countries (including Egypt, Ethiopia, Kenya, Uganda, Tanzania, Malawi, Zimbabwe, South Africa, Nigeria, Ghana, Burkina Faso, Guinea, Cote d'Ivoire, and the Gambia) ranging from 0.33 per 100,000 population to 502 per 100,000 population. Potential risk factors include young age, being male, having to commute across or work on the water (e.g. fishermen), the quality and carrying capacity of the boat, and poor weather.
United States
In the United States, drowning is the second leading cause of death (after motor vehicle accidents) in children 12 and younger.
People who drown are more likely to be male, young, or adolescent. There is a racial disparity in drowning incidents. According to CDC data collected from 1999 to 2019, drowning rates among Native Americans were 2 times higher than among non-Hispanic whites, while the rate among African Americans was 1.5 times higher. Surveys indicate that 10% of children under 5 have experienced a situation with a high risk of drowning. Worldwide, about 175,000 children die through drowning every year.
According to the US National Safety Council, 353 people ages 5 to 24 drowned in 2017.
Society and culture
Old terminology
The word "drowning"—like "electrocution"—was previously used to describe fatal events only. Occasionally, that usage is still insisted upon, though the medical community's consensus supports the definition used in this article. Several terms related to drowning which have been used in the past are also no longer recommended. These include:
Active drowning: people, such as non-swimmers and the exhausted or hypothermic at the surface, who are unable to hold their mouth above water and are suffocating due to lack of air. Instinctively, people in such cases perform well-known behaviors in the last 20–60 seconds before being submerged, representing the body's last efforts to obtain air. Notably, such people are unable to call for help, talk, reach for rescue equipment, or alert swimmers even feet away, and they may drown quickly and silently close to other swimmers or safety.
Dry drowning: drowning in which no water enters the lungs.
Near drowning: drowning which is not fatal.
Wet drowning: drowning in which water enters the lungs.
Passive drowning: people who suddenly sink or have sunk due to a change in their circumstances. Examples include people who drown in an accident due to sudden loss of consciousness or sudden medical condition.
Secondary drowning: physiological response to foreign matter in the lungs due to drowning causing extrusion of liquid into the lungs (pulmonary edema) which adversely affects breathing.
Silent drowning: drowning without a noticeable external display of distress.
Dry drowning
Dry drowning is a term that has never had an accepted medical definition and is discredited. Following the 2002 World Congress on Drowning in Amsterdam, a consensus definition of drowning was established: it is the "process of experiencing respiratory impairment from submersion/immersion in liquid." This definition resulted in only three legitimate drowning subsets: fatal drowning, non-fatal drowning with illness/injury, and non-fatal drowning without illness/injury. In response, major medical consensus organizations have adopted this definition worldwide and have discouraged any medical or publication use of the term "dry drowning". Such organizations include the International Liaison Committee on Resuscitation, the Wilderness Medical Society, the American Heart Association, the Utstein Style system, the International Lifesaving Federation, the International Conference on Drowning, Starfish Aquatics Institute, the American Red Cross, the Centers for Disease Control and Prevention (CDC), the World Health Organization and the American College of Emergency Physicians.
Drowning experts have recognized that the resulting pathophysiology of hypoxemia, acidemia, and eventual death is the same whether water entered the lung or not. As this distinction does not change management or prognosis but causes significant confusion due to alternate definitions and misunderstandings, it is established that pathophysiological discussions of "dry" versus "wet" drowning are not relevant to drowning care.
"Dry drowning" is cited in the news with a wide variety of definitions. and is often confused with "secondary drowning" or "delayed drowning". Various conditions including spontaneous pneumothorax, chemical pneumonitis, bacterial or viral pneumonia, head injury, asthma, heart attack, and chest trauma have been misattributed to the erroneous terms "delayed drowning," "secondary drowning," and "dry drowning." Currently, there has never been a case identified in the medical literature where a person was observed to be without symptoms and who died hours or days later as a direct result of drowning alone.
Capital punishment
In Europe, drowning was used as capital punishment. During the Middle Ages, a sentence of death was read out using a formula translated as "with pit and gallows".
Drowning survived as a method of execution in Europe until the 17th and 18th centuries. England had abolished the practice by 1623, Scotland by 1685, Switzerland in 1652, Austria in 1776, Iceland in 1777, and Russia by the beginning of the 1800s. France revived the practice during the French Revolution (1789–1799) and it was carried out by Jean-Baptiste Carrier at Nantes.
References
External links
Canadian Red Cross: Drowning Research: Drownings in Canada, 10 Years of Research Module 2 – Ice & Cold Water Immersion
Swimming
Medical emergencies
Diving medicine
Causes of death
Suicide methods
Execution methods
Respiratory diseases
Wilderness medical emergencies | wiki |
Bravura is a style of both music and its performance intended to show off the skill of a performer.
Bravura may also refer to:
Bravura, a march by Charles E. Duble
Bravura étude, Op. 63 No. 24 by Amédée Méreaux
Bravura, an imprint of Malibu Comics
Bravura, the protagonist of Asterix and the Secret Weapon | wiki |
Roy Wilkins was a highly respected senior official of the National Association for the Advancement of Colored People, and in 1980 the Association created the Roy Wilkins Renown Service Award for members of United States Armed Forces who had advanced civil rights.
References
Awards established in 1980
American awards
Awards honoring African Americans
1980 establishments in the United States
NAACP | wiki |
Tom Mooney, nicknamed "Circus", was an American Negro league pitcher in the 1900s.
Mooney made his Negro leagues debut with the San Antonio Black Bronchos in 1908 and played for the club again the following season. In three recorded career appearances on the mound, he posted a 6.39 ERA over 12.2 innings.
References
External links
Baseball statistics and player information from Baseball-Reference Black Baseball Stats and Seamheads
Year of birth missing
Year of death missing
Place of birth missing
Place of death missing
San Antonio Black Bronchos players | wiki |
In physics, a continuous spectrum usually means a set of attainable values for some physical quantity (such as energy or wavelength) that is best described as an interval of real numbers, as opposed to a discrete spectrum, a set of attainable values that is discrete in the mathematical sense, where there is a positive gap between each value and the next one.
The classical example of a continuous spectrum, from which the name is derived, is the part of the spectrum of the light emitted by excited atoms of hydrogen that is due to free electrons becoming bound to a hydrogen ion and emitting photons, which are smoothly spread over a wide range of wavelengths, in contrast to the discrete lines due to electrons falling from some bound quantum state to a state of lower energy.
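To make the two parts of the hydrogen spectrum concrete, the following standard textbook relations (included here as background, not drawn from a specific source cited in this article) show why bound-bound transitions give discrete lines while recombination gives a continuum. The discrete emission lines satisfy the Rydberg formula

$$ \frac{1}{\lambda} = R_{\mathrm{H}} \left( \frac{1}{n_1^{2}} - \frac{1}{n_2^{2}} \right), \qquad n_2 > n_1, $$

which allows only a countable set of wavelengths, whereas a free electron with kinetic energy $E_k \ge 0$ captured into the level $n$ emits a photon of energy

$$ h\nu = E_k + \frac{13.6\ \mathrm{eV}}{n^{2}}, $$

and because $E_k$ can take any non-negative value, these photons smoothly fill a continuous range of wavelengths below the recombination edge.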
As in that classical example, the term is most often used when the range of values of a physical quantity may have both a continuous and a discrete part, whether at the same time or in different situations. In quantum systems, continuous spectra (as in bremsstrahlung and thermal radiation) are usually associated with free particles, such as atoms in a gas, electrons in an electron beam, or conduction band electrons in a metal. In particular, the position and momentum of a free particle have continuous spectra, but when the particle is confined to a limited space its spectrum becomes discrete.
Often a continuous spectrum may be just a convenient model for a discrete spectrum whose values are too close to be distinguished, as in the phonons in a crystal.
The continuous and discrete spectra of physical systems can be modeled in functional analysis as different parts in the decomposition of the spectrum of a linear operator acting on a function space, such as the Hamiltonian operator.
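As a compact sketch of this operator-theoretic picture (standard results from the spectral theory of self-adjoint operators, stated here for illustration), the spectrum of a self-adjoint operator $A$ on a Hilbert space $\mathcal{H}$ decomposes as

$$ \sigma(A) = \sigma_{\mathrm{pp}}(A) \,\cup\, \sigma_{\mathrm{ac}}(A) \,\cup\, \sigma_{\mathrm{sc}}(A), \qquad \mathcal{H} = \mathcal{H}_{\mathrm{pp}} \oplus \mathcal{H}_{\mathrm{ac}} \oplus \mathcal{H}_{\mathrm{sc}}, $$

where the pure point part corresponds to bound states and the absolutely continuous part to scattering states. For the hydrogen-atom Hamiltonian, for example,

$$ \sigma(H) = \left\{ -\frac{13.6\ \mathrm{eV}}{n^{2}} : n = 1, 2, 3, \dots \right\} \,\cup\, [0, \infty), $$

a discrete set of bound-state energies together with a continuous half-line of scattering energies.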
See also
Astronomical spectroscopy (examples of continuous spectra)
Synchrotron radiation
Inverse Compton scattering
Discrete (line) spectra
Emission spectrum
Absorption spectrum
References
Scattering theory
| wiki |
United Empire Loyalists (or simply Loyalists) is an honorific title which was first given by the 1st Lord Dorchester, the Governor of Quebec, and Governor General of The Canadas, to American Loyalists who resettled in British North America during or after the American Revolution. At the time, the demonym Canadian or Canadien was used to refer to the indigenous First Nations groups and the descendants of New France settlers inhabiting the Province of Quebec.
They settled primarily in Nova Scotia and the Province of Quebec. The influx of Loyalist settlers resulted in the creation of several new colonies. In 1784, New Brunswick was partitioned from the Colony of Nova Scotia after significant Loyalist resettlement around the Bay of Fundy. The influx of Loyalist refugees also resulted in the Province of Quebec's division into Lower Canada (present-day Quebec) and Upper Canada (present-day Ontario) in 1791. The Crown gave the settlers land grants of one lot per person to encourage their resettlement, as the Government wanted to develop the frontier of Upper Canada. This resettlement added many English speakers to the Canadian population. It was the beginning of new waves of immigration that established a predominantly English-speaking population in the future Canada both west and east of the modern Quebec border.
History
American Revolution
Following the end of the American Revolutionary War and the signing of the Treaty of Paris in 1783, both Loyalist soldiers and civilians were evacuated from New York City, most heading for Canada. Many Loyalists had already migrated to Canada, especially from New York and northern New England, where violence against them had increased during the war.
Crown land in Canada was sometimes allotted according to the Loyalist regiment in which a man had fought. This Loyalist resettlement was critical to the development of present-day Ontario, and some 10,000 refugees went to Quebec (including the Eastern Townships and modern-day Ontario). But Nova Scotia (including modern-day New Brunswick) received three times that number: about 35,000–40,000 Loyalist refugees.
An unknown but substantial number of individuals did not stay; they eventually returned to the United States. As some families had split in their loyalties during the war years, many Loyalists in Canada continued to maintain close ties with relatives in the United States. They conducted commerce across the border with little regard to British trade laws. In the 1790s, Lieutenant-Governor Simcoe's offer of land and low taxes, which were one-quarter of those in America, in exchange for allegiance resulted in the arrival of 30,000 Americans, often referred to as Late Loyalists. By the outbreak of the War of 1812, of the 110,000 inhabitants of Upper Canada, 20,000 were the initial Loyalists, 60,000 were later American immigrants and their descendants, and 30,000 were immigrants from the UK, their descendants, or migrants from the Old Province of Quebec. The later arrival of many of the inhabitants of Upper Canada suggests that land was the main reason for immigration.
Resettlement
The arrival of the Loyalists after the Revolutionary War led to the division of Canada into the provinces of Upper Canada (what is now southern Ontario) and Lower Canada (today's southern Quebec). They arrived and were largely settled in groups by ethnicity and religion. Many soldiers settled with others of the regiments they had served with. The settlers came from every social class and all thirteen colonies, unlike the depiction of them in the Sandham painting, which suggests the arrivals were well-dressed upper-class immigrants.
Loyalists soon petitioned the government to be allowed to use the British legal system, which they were accustomed to in the American colonies, rather than the French system. Great Britain had maintained the French legal system and allowed freedom of religion after taking over the former French colony with the defeat of France in the Seven Years' War. With the creation of Upper and Lower Canada, most Loyalists in the west could live under British laws and institutions. The predominantly ethnic French population of Lower Canada, who were still French-speaking, could maintain their familiar French civil law and Catholic religion.
Realizing the importance of some type of recognition, on 9 November 1789, Lord Dorchester, the governor of Quebec and Governor General of British North America, declared "that it was his Wish to put the mark of Honour upon the Families who had adhered to the Unity of the Empire". As a result of Dorchester's statement, the printed militia rolls carried the notation:
Those Loyalists who have adhered to the Unity of the Empire, and joined the Royal Standard before the Treaty of Separation in the year 1783, and all their Children and their Descendants by either sex, are to be distinguished by the following Capitals, affixed to their names: UE or U.E. Alluding to their great principle The Unity of the Empire.
Because most of the Iroquois nations had allied with the British, who ceded Iroquois lands to the United States, thousands of Iroquois and other pro-British Native Americans were expelled from New York and other states. They were also resettled in Canada. Many of the Iroquois, led by Joseph Brant Thayendenegea, settled at Six Nations of the Grand River, the largest First Nations reserve in Canada. A smaller group of Iroquois, led by Captain John Deserontyon Odeserundiye, settled on the shores of the Bay of Quinte in modern-day southeastern Ontario.
The government settled some 3,500 Black Loyalists in Nova Scotia and New Brunswick, but they faced discrimination as well as the same inadequate support that all Loyalists experienced. Delays in making land grants, and especially the willingness of Black Loyalists to undercut their fellow Loyalists by hiring themselves out for the few available jobs at a lower wage, aggravated racial tensions in Shelburne. Mobs of white Loyalists attacked Black Loyalists in the Shelburne Riots in July 1784, Canada's first so-called "race" riot. The government was slow to survey the land of Black Loyalists (which meant they could not settle), and it was discriminatory in granting them smaller, poorer and more remote lands than those of white settlers, except in the case of those Loyalists resettled in what would become Upper Canada, particularly around the Bay of Quinte. This increased their difficulties in becoming established. The majority of Black Loyalists in Canada were refugees from the American South; they suffered from this discrimination and from the harsh winters.
When Great Britain set up the colony of Sierra Leone in Africa, nearly 1,300 Black Loyalists emigrated there in 1792 on the promise of self-government, while about 2,200 remained. The Black Loyalists who left established Freetown in Sierra Leone. Well into the 20th century, they and their descendants, together with other early settlers from Jamaica and slaves liberated from illegal slave ships, dominated the culture, economy and government of Sierra Leone, despite vicious attacks from the indigenous peoples that nearly ended the colony.
Numerous Loyalists had been forced to abandon substantial amounts of property in the United States. Britain sought restoration or compensation for this lost property from the United States, which was a major issue during the negotiation of the Jay Treaty in 1795. Negotiations settled on the concept of the United States negotiators "advising" the U.S. Congress to provide restitution. For the British, this concept carried significant legal weight, far more than it did for the Americans; the U.S. Congress declined to accept the advice.
Slavery
Slave-owning Loyalists from across the former Thirteen Colonies brought their slaves with them to Canada, as the practice was still legal there. They took a total of about 2,000 slaves to British North America: 500 in Upper Canada (Ontario), 300 in Lower Canada (Quebec), and 1,200 in the Maritime colonies of New Brunswick, Nova Scotia, and Prince Edward Island. The presence and condition of slaves in the Maritimes became a particular issue: they constituted a larger portion of the population there, even though the region was not an area of plantation agriculture.
The settlers eventually freed many of these slaves. Together with the free Black Loyalists, many chose to go to Sierra Leone in 1792 and in the following years, seeking a chance for self-government. Meanwhile, the British Parliament passed an imperial law in 1790 that assured prospective immigrants to Canada that they could retain their slaves as property. In 1793, an anti-slavery law was passed by the 1st Parliament of Upper Canada. The Act Against Slavery banned the importation of slaves into the colony and mandated the emancipation of all children born henceforth to female slaves upon reaching the age of 25. The Act was introduced partly in response to the influx of slaves brought by Loyalist refugees to Upper Canada. The slave trade was abolished across the British Empire in 1807. The institution of slavery was abolished Empire-wide by 1834 (except in India, where it was considered an indigenous institution).
War of 1812
From 1812 to 1815, the United States and the United Kingdom were engaged in a conflict known as the War of 1812. On 18 June 1812, US President James Madison signed the declaration of war into law, after receiving heavy pressure from the War Hawks in Congress.
By 1812, Upper Canada had been settled mostly by Revolution-era Loyalists from the United States (United Empire Loyalists) and postwar American and British immigrants. The Canadas were thinly populated and only lightly defended by the British Army and the sedentary units of the Canadian Militia. American leaders assumed that Canada could be easily overrun, with former president Thomas Jefferson optimistically describing the potential conquest of Canada as "a matter of marching". Many Loyalist Americans had migrated to Upper Canada after the Revolutionary War. However, there was also a significant number of non-Loyalist American settlers in the area due to the offer of land grants to immigrants. The Americans assumed the latter population would favour the American cause, but they did not. Although the population of Upper Canada included recent settlers from the United States who had no obvious loyalties to the Crown, the American forces found strong opposition from settlers during the War of 1812.
A number of Loyalists served as fencibles or provincial regulars, in the Provincial Marine, or with the sedentary militia. With the successful defence of the Canadian colonies from American invasion, the War of 1812 was seen by Loyalists as a victory. After the war, the British government transported about 400 of the 3,000 former slaves from the United States whom it had freed during and after the war, and settled them in New Brunswick. It had fulfilled its promise to them of freedom if they left Patriot slaveholders and fought with the British. Enslaved African Americans risked considerable danger by crossing to British lines to achieve freedom.
Present
While the honorific "United Empire Loyalist" is not part of the official Canadian honours system, modern-day descendants of Loyalist refugees may employ it, sometimes using "U.E." as postnominal letters. The practice, however, is uncommon today, even in original Loyalist strongholds like southeastern Ontario. Historians and genealogists use it extensively as a shorthand for identifying the ancestry of particular families.
The influence of the Loyalists on the evolution of Canada remains evident. Their ties with Britain and antipathy to the United States provided the strength needed to keep Canada independent and distinct in North America. The Loyalists' basic distrust of republicanism and "mob rule" influenced Canada's gradual, "paper-strewn" path to independence. The new British North American provinces of Upper Canada (the forerunner of Ontario) and New Brunswick were created as places of refuge for the United Empire Loyalists. The mottoes of the two provinces reflect this history: Ontario's, also found on its coat of arms, is Ut incepit fidelis sic permanet ("Loyal she began, loyal she remains"); New Brunswick's, Spem Reduxit ("Hope restored").
The word "Loyalist" appears frequently in school, street, and business names in such Loyalist-settled communities as Belleville, Ontario. The nearby city of Kingston, established as a Loyalist stronghold, was named in honour of King George III. And on the outskirts of that city is a township simply named "Loyalist".
Canada's 2021 Census estimates a population of 10,015 who identify as having United Empire Loyalist origins, based on a 25% sample.
On 1 July 1934, Royal Mail Canada issued "United Empire Loyalists, 1776–1784" designed by Robert Bruce McCracken based on Sydney March's sculpture United Empire Loyalists. The 10-cent stamps are perforated 11 and were printed by the British American Bank Note Company.
In 1996, Canadian politicians Peter Milliken (a descendant of American Loyalists) and John Godfrey sponsored the Godfrey–Milliken Bill, which would have entitled Loyalist descendants to reclaim ancestral property in the United States which had been confiscated during the American Revolution. The bill, which did not pass the House of Commons, was intended primarily as a satirical response to the contemporaneous American Helms–Burton Act.
In 1997, the Legislative Assembly of Ontario passed a bill declaring 19 June "United Empire Loyalist Day" in Ontario. United Empire Loyalist Day is also celebrated on the same day in Saskatchewan, on 18 May in New Brunswick and on 22 July in British Columbia.
Memory and historiography
The Loyalists paid attention to their history, developing an idealized image of themselves in which they took great pride. In 1898, Henry Coyne provided a glowing depiction:
According to Canadian historians Margaret Conrad and Alvin Finkel, Coyne's memorial incorporates essential themes that have often appeared in patriotic celebrations. The Loyalist tradition, as explicated by Murray Barkley and Norman Knowles, includes:
Conrad and Finkel point out some exaggerations: only a small percentage of the Loyalists were colonial elite. In fact Loyalists were drawn from every stratum of colonial society, and few suffered violence and hardship. About 20 percent would later return to the United States. Most were loyal to all things British, but other Loyalists supported the United States in the War of 1812. Conrad and Finkel conclude:
From the 1870s, many of their descendants returned to the United States in pursuit of cheaper land. In the New England states alone, more than 10% of the population can trace its roots to the Maritime Provinces (a further 2 million of the 14 million inhabitants, or roughly 15%, are partly or wholly of French Canadian descent).
United Empire Loyalists' Association
The United Empire Loyalists' Association of Canada (UELAC) is an organization of Loyalist descendants and others interested in Canadian history, in particular the role of the United Empire Loyalists. The organization was incorporated on 27 May 1914 by the Legislative Assembly of Ontario. In 1972, the organization was granted a coat of arms by the College of Arms through letters patent dated 28 March 1972.
Symbols
On 17 April 1707, Queen Anne issued a proclamation referencing the use of the Union Flag "at Sea and Land". The Union Flag began to appear on forts and as regimental colours from this point, and at the time of the American Revolution, this was the flag in use. When those loyal to the Crown left the United States for British North America, they took this flag with them, and because of this historical connection, it continues to be the official flag of the UELAC.
In Canadian heraldry, Loyalist descendants are entitled to use a Loyalist coronet in their coat of arms.
List of Loyalist settlements in Canada
18th-century names are listed first, alongside their present-day equivalents.
Adolphustown, Ontario
Antigonish, Nova Scotia
Beamsville, Ontario
Bocabec, New Brunswick
Meyer's Creek → Belleville, Ontario
Buell's Bay → Brockville, Ontario
Butlersbury → Newark → Niagara-on-the-Lake, Ontario
Cataraqui → Kingston, Ontario
Clifton → Niagara Falls, Ontario
Country Harbour, Nova Scotia
Cobourg, Ontario
Colchester → village now within Essex, Ontario
Cornwall, Ontario
Digby, Nova Scotia
Doaktown, New Brunswick
Eastern Townships, Quebec
Effingham, Ontario
Fredericton, New Brunswick
Grimsby, Ontario
Douglas Township → Kennetcook, Nova Scotia
Lincoln, Ontario
Ernestown Township → Loyalist, Ontario
Machiche → Yamachiche, Quebec
Merrittsville → Welland, Ontario
Milliken Corners → Milliken, Ontario
Gravelly Bay → Port Colborne, Ontario
Port Roseway → Shelburne, Nova Scotia
Prescott, Ontario
Prince Edward County, Ontario
Rawdon, Nova Scotia
Saint John, New Brunswick
Sheet Harbour, Nova Scotia
Shelburne, Nova Scotia
Six Nations and Brantford, Ontario
Smithville, Ontario
St. Andrews by-the-Sea → St. Andrews, New Brunswick
St. Anne's Point → Fredericton, New Brunswick
Summerville, Nova Scotia
The Twelve → Shipman's Corners → St. Catharines, Ontario
Turkey Point → Norfolk, Ontario
Sandwich → Windsor, Ontario
Odell Town, Quebec
Wainfleet, Ontario
Remsheg → Wallace, Nova Scotia
Westchester, Nova Scotia
York → Toronto, Ontario
See also
Loyalist (American Revolution)
Canadian honorifics
Daughters of the American Revolution
Expulsion of the Loyalists
Society of the Cincinnati
Sons of the American Revolution
Sons of the Revolution
Notes
References
Further reading
Acheson, T.W. "A Study in the Historical Demography of a Loyalist County", Social History, 1 (April 1968), pp. 53–65.
Compeau, Timothy J. "Dishonoured Americans: Loyalist Manhood and Political Death in Revolutionary America." (PhD Diss. The University of Western Ontario, 2015); online.
Jasanoff, Maya. Liberty's Exiles: American Loyalists in the Revolutionary World. (Knopf, 2011) Ranlet (2014) [below] argues her estimate of the number of Loyalists is too high.
Jodon, Michael. Shadow Soldiers of the American Revolution. The History Press, Charleston, SC, 2009.
MacKinnon, Neil. "Nova Scotia Loyalists, 1783–1785", Social History 4 (November 1969), pp. 17–48
Moore, Christopher. The Loyalists: Revolution, Exile, Settlement. 1984.
Norton, Mary Beth. "The fate of some black loyalists of the American revolution." Journal of Negro History 58#4 (1973): 402–426. in JSTOR
Walker, James W. St G. The Black Loyalists: The Search for a Promised Land in Nova Scotia and Sierra Leone, 1783–1870 (U of Toronto Press, 1992).
Wallace, W. Stewart. The United Empire Loyalists: A Chronicle of the Great Migration; Volume 13 of the "Chronicles of Canada (32 volumes) Toronto, 1914.
Whitehead, Ruth Holmes. Black Loyalists: Southern Settlers of Nova Scotia's First Free Black Communities (Halifax: Nimbus Publishing, 2013).
Wright, Esther Clark. The Loyalists of New Brunswick (Fredericton: 1955).
Historiography
Barkley, Murray. "The Loyalist Tradition in New Brunswick: the Growth and Evolution of an Historical Myth, 1825–1914." Acadiensis 4#2 (1975): 3–45. online
Bell, David VJ. "The Loyalist Tradition in Canada." Journal of Canadian Studies 5#2 (1970): 22+
Knowles, Norman James. Inventing the Loyalists: The Ontario Loyalist Tradition and the Creation of Usable Pasts (University of Toronto Press, 1997).
Ranlet, Philip. "How Many American Loyalists Left the United States?." Historian 76.2 (2014): 278–307.
Upton, L.F.S. ed. The United Empire Loyalists: Men and Myths (The Copp Publishing Company, 1967), Excerpts from historians and from primary sources
Primary sources
Talman, James ed. Loyalist Narratives from Upper Canada. Toronto: Champlain Society, 1946.
"Letter, Benjamin Franklin to Baron Francis Maseres, June 26, 1785"
Gray, Rev. J. W. D. A Sermon, Preached at Trinity Church, in the parish of St. John, N. B., on 8 December 1857, by the Rev. J. W. D. Gray, D.D., and Designed to Recommend the Principles of the Loyalists of 1783. Saint John, New Brunswick: J. & A. McMillan, Printers, 1857. 15 pp. Internet Archive pdf; title incorrectly gives the year as 1847.
External links
"A Short History of the United Empire Loyalists", by Ann Mackenzie, M.A.; Une Courte Histoire des Loyalistes de l'Empire Uni, French translation
Haldimand Collection
Black Loyalists in New Brunswick, 1783–1854, Atlantic Canadian Portal, University of New Brunswick
Loyalist Women in New Brunswick, 1783–1827, Atlantic Canadian Portal, University of New Brunswick
The Myth of the Loyalist Iroquois
The United Empire Loyalists' Association of Canada: Home Page
Photographs of the United Empire Loyalist monument at Country Harbour, Nova Scotia
United Empire Loyalists collection at Internet Archive
American Revolution veterans and lineage organizations
Military history of Nova Scotia
Monarchy in Canada
Social history of Canada
Right Thing or The Right Thing may refer to:
Ethics, the study of right and wrong conduct
"The Right Thing" (song), a 1987 single by Simply Red
The Right Thing (film), a 1963 Australian comedy TV play
"The Right Thing", a season 4 episode of the sitcom New Girl
"The right thing", also called the "MIT approach", a perfectionistic approach to software development described by Richard P. Gabriel in the influential essay "Lisp: Good News, Bad News, How to Win Big"
See also
"The Right Thing to Do", a 1972 song by Carly Simon | wiki |
June Stephenson (born 30 January 1943) is an English former cricketer who played as a lower-order batter and right-arm medium bowler. She appeared in 12 Test matches and 9 One Day Internationals for England between 1966 and 1976. She played domestic cricket for Yorkshire.
References
External links
1943 births
Living people
Cricketers from Bradford
England women Test cricketers
England women One Day International cricketers
Yorkshire women cricketers | wiki |
A rubber mask is a mask made of rubber. Typically, these are made of latex or silicone rubber and designed to be pulled over the head as a form of theatrical makeup or disguise.
The theatrical makeup used by Michael Crawford when he played the Phantom of the Opera started with a latex skullcap. More latex strips were then added for the disfigured face. The latex was then covered and coloured with cosmetics for the full effect.
See also
Guy Fawkes mask
Horse head mask
List of crimes involving a silicone mask
References
Masks by material | wiki |
General Howell may refer to:
David Howell (British Army officer) (fl. 1970s–2010s), British Army major general
Joshua B. Howell (1806–1864), Union Army brigadier general
Philip Howell (1877–1916), British Army brigadier general
Scott A. Howell (born 1965), U.S. Air Force lieutenant general
See also
Attorney General Howell (disambiguation) | wiki |
NBC Sports on USA Network is the branding used for NBC Sports-produced sporting events on USA Network. Since 2022, USA Network has been the de facto cable home of NBC Sports, following the closure of NBCSN. From 2008 to 2021, NBC Sports programs aired on USA only in rare circumstances. Prior to 2008, sports programs on USA used the branding USA Sports.
Overview
Following the dissolution of USA Sports into NBC Sports after the 2007 Masters, USA Network began deemphasizing sports. During this time, NBC Sports properties generally aired on USA only in special cases, such as during the Olympics, the Stanley Cup Finals or the final week of the English Premier League season.
Beginning in 2006, USA carried some coverage of top level hockey by cooperating with NBC's coverage of ice hockey at the Winter Olympics in 2006, 2010 and 2014; these games were mostly daytime contests that would not preempt the network's increasingly popular prime time programs.
In early 2006, it was announced that USA was outbid by Golf Channel for its early-round PGA Tour rights, with USA's final season being 2006. NBC Universal traded away the network's Friday Ryder Cup coverage through 2012 to ESPN for the rights to sign Al Michaels for its new Sunday Night Football. However, USA did renew its Masters contract for an additional year. USA would televise the 2007 Masters before being outbid by ESPN for future coverage.
USA Network offered daily coverage of the 2008 Summer Olympics through NBC Sports. This would be USA Network's last Summer Olympics until 2020, because in 2011 Comcast acquired majority control of NBC's parent company NBC Universal from General Electric. The acquisition included the rebranding of Versus as NBC Sports Network, which took over USA Network's Summer Olympic coverage.
The Ryder Cup contract, which stipulated cable coverage air on USA, was still controlled by NBC even after it granted ESPN the rights to Friday cable coverage (normally the only day of the event covered on cable). However, in 2010, rain on Friday pushed the singles matches to Monday, necessitating that they air on cable. With NBC having granted only Friday rights to ESPN, the singles matches aired on USA. Four months later, NBC merged with Golf Channel, making Golf Channel NBC's primary cable outlet for golf.
2010s
USA Network aired 41 hours of coverage of the 2010 Winter Olympics.
As part of a 2011 contract renewal, Comcast's properties earned exclusive national rights for all Stanley Cup playoff games through 2021. Because NBC and NBC Sports Network could not carry all of the games in the first two rounds on those two outlets alone, other Comcast properties needed to be used; USA was initially not used, due to the risk of preempting its popular prime time lineup, and the company instead used CNBC and NHL Network as the overflow channels for the first four years of the contract. In 2015, Comcast announced that USA would carry some games in the first two rounds of the Stanley Cup playoffs, mainly on Tuesday and Wednesday nights, returning the NHL to USA for the first time since 1985.
USA Network aired 43 hours of coverage for the 2014 Winter Olympics.
In 2014, due to NBCSN's coverage of the 2014 Winter Olympics, an English Premier League match between Arsenal and Sunderland aired on USA Network.
In the 2015–16 season, USA Network aired 40 matches from the English Premier League, most during the 10AM ET window. These matches moved to CNBC for the 2016–17 season.
On January 26, 2016, NASCAR announced that the Cheez-It 355 at the Glen from Watkins Glen International would air on USA Network due to NBC and NBCSN's commitments to the Summer Olympics.
In 2017, during the final day of the Premier League season on May 21, USA aired a match between Watford and Manchester City. In 2018, USA would air the same match for the final day of the season on May 10.
USA Network aired 40.5 hours for the 2018 Winter Olympics.
In 2019, during the final day of the Premier League season on May 12, USA aired a match between Manchester United and Cardiff City.
2020s
In 2020, during the final day of the Premier League season on July 26, USA aired a match between Chelsea and Wolverhampton.
USA Network had 388.5 hours of coverage of the 2020 Summer Olympics. The main sports featured were swimming, track and field, diving, beach volleyball, volleyball, cycling, triathlon, and the team sports of basketball, soccer and water polo.
In September 2020, it was announced that USA would carry the September 19 college football game between the Notre Dame Fighting Irish and University of South Florida Bulls. It was the first Notre Dame football game broadcast on USA, whose parent company NBC has owned rights to every Fighting Irish home game since 1991. It was also the first American football game broadcast on the network since World Bowl '92.
Notre Dame's double-overtime win against Clemson on November 7, 2020, was moved temporarily to USA Network due to NBC's coverage of Joe Biden's victory speech after he was projected the winner of the 2020 presidential election.
On January 22, 2021, an internal memo sent by NBC Sports president Pete Bevacqua announced that NBCSN would cease operations by the end of the year, and that USA Network would begin "carrying and/or simulcasting certain NBC Sports programming," including the Stanley Cup playoffs and NASCAR races, before NBCSN's shutdown. When NBCSN was shuttered, much of its programming was merged onto USA Network's schedule, and Peacock, NBCUniversal's streaming service, also began carrying some of the network's former programming in 2022. The move was cited by industry analysts as a response to the impact of the COVID-19 pandemic on the sports and television industries, the acceleration of cord-cutting, and formidable competition from rival sports networks such as ESPN and Fox Sports 1, with the company reporting an overall revenue drop of 19% to $6.72 billion.
In 2021, during the final day of the Premier League season on May 23, USA aired a match between Aston Villa and Chelsea.
In 2021, USA Network aired Cristiano Ronaldo's return to Manchester United on September 11 when Manchester United took on Newcastle.
Following the shutdown of NBCSN in January 2022, USA has become the cable home of Atlantic 10 men's and women's college basketball games, weekend lead-in coverage of the U.S. Open, U.S. Women's Open, British Open and AIG Women's Open, the cable portion of NBC Sports' NASCAR contract, the cable portion of NBC Sports' NTT IndyCar Series contract, the cable portion of NBC Sports' IMSA contract, and select AMA Supercross Championship races (split with CNBC).
Beginning on January 1, 2022, regular Premier League matches, as well as Premier League Mornings, moved permanently to USA Network.
In January 2022, for the 2022 Winter Olympics, portions of the U.S figure skating championship and U.S speed skating trials aired on USA.
For the 2022 Winter Olympics, USA Network became the de facto cable home of the games, replacing the defunct NBCSN. The network featured 400 hours of Olympic programming, up 1000% from 2018, broadcasting all Olympic sports.
In 2022, USA Network aired 9 games from the new United States Football League (USFL) and aired portions of the World Figure Skating Championships.
Programming
Current programs
College Basketball on USA (1982–1988) (2022–present)
Atlantic 10 men's and women's regular season contests
Atlantic 10 men's tournament, second round and quarterfinals
Summer Olympics (2008, 2020–present)
Golf on USA (2010, 2022–present)
Ryder Cup (2010)
U.S. Open (2022)
U.S. Women's Open (2022)
British Open (2022)
AIG Women's Open (2022)
Winter Olympics (2010–present)
Premier League on NBC (2013–present)
NASCAR on NBC (2016 (one race); 2022–present (full cable coverage))
Cheez-It 355 at the Glen (2016)
IndyCar Series on NBC (2022–present)
Detroit Grand Prix (2022)
Bommarito Automotive Group 500 (2022)
United States Football League (2022–present)
World Figure Skating Championships (2022–present)
WWE (1985–2000, 2005–present)
Prime Time Wrestling (1985–1993)
Monday Night Raw (1993–2000, 2005–present)
NXT (2019–present)
SmackDown Live (2015–2019)

Former programs
U.S. Open Tennis Championship (2008)
NHL on NBC (2015–2021; select playoff games)
College Football on USA (2020)
See also
NBC Sports on CNBC
USA Network#Sports programming
References
External links
NBC Sports
USA Network Sports
NBC Sports
Sports television in the United States
Mass media companies disestablished in 2007 | wiki |
Appeal to flattery is a fallacy in which a person uses flattery, excessive compliments, in an attempt to appeal to their audience's vanity to win support for their side. It is also known as apple polishing, wheel greasing, brown nosing, appeal to pride, appeal to vanity or argumentum ad superbiam. The appeal to flattery is a specific kind of appeal to emotion.
Flattery is often used to hide the true intent of an idea or proposal. Praise offers a momentary personal distraction that can weaken judgment. Moreover, flattery is usually a cunning form of appeal to consequences, since the audience will continue to be flattered only as long as they comply with the flatterer.
Examples:
"Surely a man as smart as you can see this is a brilliant proposal." (failing to accept the proposal is a tacit admission of stupidity)
"Is there a strong man here who could carry this for me?" (a failure to demonstrate physical strength implies weakness)
A refusal which does not deny the compliment could be formulated thus: "I may be [positive attribute], but that doesn't mean that I will [perform action] for you."
It is not necessarily a logical fallacy, however, when the compliment is sincere, and directly related to the argument. Example:
"You are a stunningly beautiful girl – you should become a model."
See also
Flattery
Superficial charm
Sycophancy
Pollyanna principle
References
Flattery | wiki |
Gladiolus 'Charming Beauty' is a cultivar of Gladiolus which features soft pink blossoms with a white throat. Its eye-catching flowers (up to 7 per stem) grow on loose spikes (2-3 spikes per corm) that are adorned by narrow, deep-green sword-shaped leaves. Blooming in early summer, this Gladiolus grows up to tall.
See also
List of Gladiolus varieties
References
Jardins Sans Secret
Charming Beauty
Ornamental plant cultivars | wiki |
Chronomancy is divination of the best time to do something, in particular the determination of lucky and unlucky days; it was especially popular in ancient China.
The term "chronomancy", stemming from the Greek word chronos (meaning time), and the word manteia (meaning divination) is also used in fiction to refer to a school of magic involving supernatural manipulation of time.
Role in modern fantasy
In modern fantasy role-playing games, such as Dungeons and Dragons and other games set in the Forgotten Realms universe, chronomancy refers to a school of magic related to moving through and manipulating time.
References
Divination
Chinese mythology | wiki |
A law school in the United States is an educational institution where students obtain a professional education in law after first obtaining an undergraduate degree.
Law schools in the U.S. confer the degree of Juris Doctor (J.D.), which is a professional doctorate. It is the degree usually required to practice law in the United States, and the final degree obtained by most practitioners in the field. Juris Doctor programs at law schools are usually three-year programs if done full-time, or four-year programs if done via evening classes. Some U.S. law schools include an Accelerated JD program.
Other degrees that are awarded include the Master of Laws (LL.M.) and the Doctor of Juridical Science (J.S.D. or S.J.D.) degrees, which can be more international in scope. Most law schools are colleges, schools or other units within a larger post-secondary institution, such as a university. Legal education is very different in the United States than in many other parts of the world.
Terminology
A 2006 study found that the names of the 192 law schools approved by the American Bar Association (ABA) at that time included one of five generic identifiers: "school of law" (118), "college of law" (38), "law school" (28), "law center" (7), and "faculty of law" (1). However, in ordinary speech, "law school" is universally preferred for its "brevity and clarity."
Admission
In the United States, law schools require a bachelor's degree in any discipline, a satisfactory undergraduate grade point average (GPA), and a satisfactory score on the Law School Admission Test (LSAT) as prerequisites for admission. Some states that have non-ABA-approved schools or state-accredited schools have equivalency requirements that usually equal 90 credits toward a bachelor's degree. Additional personal factors are evaluated through essays, short-answer questions, letters of recommendation, and other application materials. The standards for grades and LSAT scores vary from school to school.
Though undergraduate GPA and LSAT score are the most important factors considered by law school admissions committees, individual factors are also somewhat important. Interviews—either in person or via video chat—are sometimes used as optional or by-invite application components. Many law schools actively seek applicants from outside the traditional pool to boost racial, economic, and experiential diversity on campus. Most law schools now factor in extracurricular activities, work experience, and unique courses of study in their evaluation of applicants. A growing number of law school applicants have several years of work experience, and correspondingly fewer law students enter immediately after completing their undergraduate education. However, law schools generally only consider undergraduate and not post-collegiate transcripts when considering an applicant for admission; the former are considered by law schools to be a more uniform standard than the latter for judging academic performance.
Many law schools offer substantial scholarships and grants to many of their students, dramatically reducing the actual cost of attending law school compared to sticker tuition. Some law schools condition scholarships on maintaining a certain GPA.
There were 128,641 students enrolled in JD programs at the 204 ABA-approved law schools.
Accreditation
To sit for the bar exam, the vast majority of state bar associations require accreditation of an applicant's law school by the American Bar Association. The ABA has promulgated detailed requirements covering every aspect of a law school, down to the precise contents of the law library and the minimum number of minutes of instruction required to receive a law degree. There are 203 ABA-accredited law schools that award the J.D., divided between 202 with full accreditation and one with provisional accreditation. The Judge Advocate General's Legal Center and School in Charlottesville, Virginia, a school operated by the United States Army that conducts a post-J.D. program for military attorneys, is also ABA-accredited.
Non-ABA approved law schools have much lower bar passage rates than ABA-approved law schools, and do not submit or disclose employment outcome data to the ABA.
In addition, individual state legislatures or bar examiners may maintain a separate approval system, which is open to non-ABA accredited schools. If that is the case, graduates of these schools may generally sit for the bar exam only in the state in which their school is accredited. California is the most famous example of state-specific approval. The State Bar of California's Committee of Bar Examiners approves many schools that may not qualify for or request ABA accreditation. Graduates of such schools can sit for the bar exam in California, and once they have passed that exam, a large number of states allow those students to sit for their bars (after practicing for a certain number of years in California).
California is also the first state to allow graduates of distance legal education (online and correspondence) to take its bar exam. However, online and correspondence law schools are generally not accredited by the ABA or approved by state bar examiners, and the eligibility of their graduates to sit for the bar exam may vary from state to state. Even in California, for instance, the State Bar deems certain online schools as "registered," meaning their graduates may take the bar exam, but also specifically says the "Committee of Bar Examiners does not approve nor accredit correspondence schools." Kentucky goes further by specifically disqualifying correspondence school graduates from admission to the bar. This applies even if the graduate has gained admission in another jurisdiction.
Curriculum
Law students are referred to as 1Ls, 2Ls, and 3Ls based on their year of study. In the United States, the American Bar Association does not mandate a particular curriculum for 1Ls. ABA Standard 302(a)(1) requires only the study of "substantive law" that will lead to "effective and responsible participation in the legal profession." However, most law schools have their own mandatory curriculum for 1Ls, which typically includes:
Civil procedure (Federal Rules of Civil Procedure)
Constitutional law (United States Constitution, especially Fifth and Fourteenth Amendments, and the Commerce Clause)
Contracts (Article 2 (Sales) of the Uniform Commercial Code and Restatement (Second) of Contracts)
Criminal law (General common law, Model Penal Code, and state criminal statutes)
Property (General common law and Restatement of Property)
Torts (General common law, Restatement (Second) and Restatement (Third) of Torts)
Legal research (Use of a law library, LexisNexis, and Westlaw)
Legal writing (including objective analysis, persuasive analysis, and legal citation)
These basic courses are intended to provide an overview of the broad study of law. Not all ABA-approved law schools offer all of these courses in the 1L year; for example, many schools do not offer constitutional law and/or criminal law until the second and third years. Most schools also require Evidence but rarely offer the course to first-year students. Some schools combine legal research and legal writing into a single year-long "lawyering skills" course, which may also include a small oral argument component.
Because the first year curriculum is always fixed, most schools do not allow 1L students to select their own course schedules, and instead hand them their schedules at new student orientation.
At most schools, the grade for an entire course depends upon the outcome of only one or two examinations, usually in essay form, which are administered via students' laptop computers in the classroom with the assistance of specialized software. Some professors may use multiple choice exams in part or in full if the course material is suitable for it (e.g., professional responsibility). Legal research and writing courses tend to have several major projects (some graded, some not) and a final exam in essay form. Most schools impose a mandatory grade curve (see below).
After the first year, law students are generally free to pursue different fields of legal study. All law schools offer (or try to offer) a broad array of upper-division courses in areas of substantive law like administrative law, corporate law, international law, admiralty law, intellectual property law, and tax law, and in areas of procedural law not normally covered in the first year, like criminal procedure and remedies. Many law schools also offer upper-division practical training courses in client counseling, trial advocacy, appellate advocacy, and alternative dispute resolution. Depending upon the law school, practical training courses may involve fictional exercises in which students interact with each other or with volunteer actors playing clients, witnesses, and judges, or real-world cases at legal clinics.
Graduation is the assured outcome for the majority of students who pay their tuition on time, behave honorably and responsibly, maintain a minimum per-semester unit count and grade point average, take required upper-division courses, and successfully complete a certain number of units by the end of their sixth semester.
The ABA also requires that all students at ABA-approved schools take an ethics course in professional responsibility. Typically, this is an upper-level course; most students take it in the 2L year. This requirement was added after the Watergate scandal, which seriously damaged the public image of the profession because President Richard Nixon and most of his alleged co-conspirators were lawyers. The ABA desired to demonstrate that the legal profession could regulate itself, wished to reassert and maintain its position of leadership, and hoped to prevent direct federal regulation of the profession.
As of 2004, to ensure that students' research and writing skills do not deteriorate, the ABA has added an upper division writing requirement. Law students must take at least one course, or complete an independent study project, as a 2L or 3L that requires the writing of a paper for credit.
Most law courses are less about doctrine and more about learning how to analyze legal problems, read cases, distill facts and apply law to facts.
In 1968, the Ford Foundation began disbursing $12 million to persuade law schools to make "law school clinics" part of their curriculum. Clinics were intended to give practical experience in law practice while providing pro bono representation to the poor. However, conservative critics charge that the clinics have been used instead as an avenue for the professors to engage in left-wing political activism. Critics cite the financial involvement of the Ford Foundation as the turning point when such clinics began to change from giving practical experience to engaging in advocacy.
Law schools that offer accelerated JD programs have unique curricula for such programs. Nonetheless, ABA-approved law schools with Accelerated JD programs must meet ABA rules.
Finally, the emphasis in law schools is rarely on the law of the particular state in which the law school sits, but on the law generally throughout the country. Although this makes studying for the bar exam more difficult since one must learn state-specific law, the emphasis on legal skills over legal knowledge can benefit law students not intending to practice in the same state they attend law school.
Grades, grading, and GPA curves
Grades in law school are very competitive. Most schools grade on a curve. In most law schools, the first year curve (1L) is considerably lower than courses taken after the first year of law school.
Many schools use a "median" grading system that can range from "B-plus medians" to "C-minus medians". Some professors are obliged to determine which exam or paper was the exact median in terms of quality (e.g., the 26th best out of 51), give that paper the relevant grade depending on the system used, and then grade the other exams based on how much better or worse they are than the median. A few schools, such as Yale Law School, Stanford Law School, Harvard Law School, University of California, Berkeley School of Law, and Northeastern University School of Law, have alternate grading systems that put less emphasis (or no emphasis) on rank. Other schools, such as New York's Fordham Law School, use a forced grading distribution, where a predetermined percentage of students must receive certain grades. For instance, such a system could oblige professors to award a minimum and maximum number of "A's" and "F's" (e.g., 3.5%/7% A's and 4.5%/10% F's). Many professors chafe against the lack of discretion provided by such systems, especially the required failing of a certain number of students whose performance may have been sub-par but not, in the professor's estimation, worthy of a failing grade. The "median" system seeks to provide some parity among teachers' grading scales while giving the teacher discretion to award a grade below the median only when deserved.
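As an illustrative sketch only, the forced-distribution idea can be expressed as ranking raw scores and slicing the ranking into fixed percentage bands; the letter grades and percentages below are hypothetical and do not reflect any particular school's policy:

def forced_distribution(scores, bands=(("A", 0.15), ("B+", 0.25), ("B", 0.40), ("C", 0.15), ("F", 0.05))):
    """Rank raw scores and assign letter grades by fixed percentage bands (hypothetical policy)."""
    ranked = sorted(scores, reverse=True)   # best raw score first
    n = len(ranked)
    graded, start = [], 0
    for letter, share in bands:
        end = min(n, start + round(share * n))            # size of this band
        graded.extend((score, letter) for score in ranked[start:end])
        start = end
    graded.extend((score, bands[-1][0]) for score in ranked[start:])  # rounding leftovers fall into the lowest band
    return graded

print(forced_distribution([92, 88, 85, 83, 80, 78, 75, 71, 66, 60]))

In practice, the policies described above also set minimums as well as maximums for each band, which is precisely the loss of discretion that professors object to.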
Fairness and equity are the primary reasons for required curves and required grade distributions. Some faculty tend to give higher grades and others lower grades, with a mandatory curve balancing both extremes. Also, at law schools with multiple sections of the same class, it minimizes the problem that one section will have an unfair advantage over another section when applying for Law Review or Moot Court.
Even with curved grading, some law schools such as Syracuse University College of Law still have a policy of "Dismissal for Academic Deficiency", in which students failing to meet a minimum GPA are dismissed from the school.
One school that has deviated from the system of competitive grading common to most American law schools is Northeastern University School of Law. Northeastern does not have any system of grade point averages or class rank. Instead, the school uses a system of narrative evaluations to measure student performance.
A system of anonymous grading known as blind grading is used in many law schools in the United States. It is intended to counter bias by the grader. Each semester students are assigned random numbers, usually by the Registrar's Office, which students must include on their exams. Professors then grade the exams on an anonymous basis, only discovering student names after grades have been submitted to the Registrar's Office. General adoption of blind grading followed admission of significant numbers of minority students to law schools.
Accelerated JD programs
An Accelerated JD program may refer to one of the following:
A program that combines a bachelor's degree with a juris doctor degree ("3+3 JD program" or "BA to JD program").
A two-year juris doctor degree that is offered in a condensed period, separately from a bachelor's degree ("2-year JD program").
As a result of student concerns about the time and cost (both in terms of tuition and the opportunity cost associated with foregoing a salary for three years) required to complete a law degree, there has been an emerging trend to develop accelerated JD programs.
Pedagogical methods
Most law school education in the United States is based on standards developed by Christopher Columbus Langdell and James Barr Ames at Harvard Law School during the 1870s. Professors generally lead in-class debates over the issues in selected court cases, compiled into "casebooks" for each course. Traditionally, law professors chose not to lecture extensively, and instead used the Socratic method to force students to teach each other based on their individual understanding of legal theory and the facts of the case at hand.
Many law schools continue to use the Socratic method—consisting of calling on a student at random, asking about an argument made in an assigned case, asking the student whether they agree with the argument, and then using a series of questions designed to expose logical flaws in the student's argument. Examinations usually entail interpreting the facts of a hypothetical case, determining how legal theories apply to the case, and then writing an essay. This process is intended to train students in the reasoning methods necessary to interpret theories, statutes, and precedents correctly, and argue their validity, both orally and in writing. In contrast, most civil law countries base their legal education on professorial lectures and oral examinations, which are more suited for the mastery of complicated civil codes.
This style of teaching is often disorienting to first-year law students who are more accustomed to taking notes from professors' lectures. Most casebooks do not clearly outline the law; instead, they force the student to interpret the cases and derive the basic legal concepts from the cases themselves. As a result, many publishers market law school outlines that concisely summarize the basic concepts of each area of law, and good outlines are highly sought after by many students, although some professors discourage their use.
Legal pedagogy has also been criticized by scholars like Alan Watson in his book, The Shame of Legal Education. Some law schools, such as Savannah Law School, have changed direction and created collaborative learning environments to allow students to work directly with each other and professors in order to model the teamwork of attorneys working on a case.
For purposes of passing state bar examinations, some law school graduates find law school instruction inadequate, and resort to specialized bar review courses from private course providers. These bar reviews typically consist of lectures, often video recorded.
History
Until the late 19th century, law schools were uncommon in the United States. Most people entered the legal profession through reading law, a form of independent study or apprenticeship, often under the supervision of an experienced attorney. This practice usually consisted of reading classic legal texts, such as Edward Coke's Institutes of the Lawes of England and William Blackstone's Commentaries on the Laws of England.
In colonial America law schools did not exist. Within a few years following the American Revolution, some universities such as the College of William and Mary and the University of Pennsylvania established a "Chair in Law". Columbia College appointed its first Professor of Law, James Kent, in 1793. Those who held these positions were the sole purveyors of legal education (per se) for their institutions—though law was, of course, discussed in other academic areas as a matter of course—and gave lectures designed to supplement, rather than replace, an apprenticeship.
The first institution established for the sole purpose of teaching law was the Litchfield Law School, set up by Judge Tapping Reeve in 1784 to organize the large number of would-be apprentices or lecture attendees that he attracted. Despite the success of that institution, and of similar programs set up thereafter at Harvard University (1817), Dickinson College (1834), Yale University (1843), Albany Law School (1851), and Columbia University (1858), law school attendance would remain a rare exception in the profession. Apprenticeship would be the norm until the 1890s, when the American Bar Association (which had been formed in 1878) began pressing states to limit admission to the bar to those who had satisfactorily completed several years of post-graduate instruction. In 1906, the Association of American Law Schools adopted a requirement that law school consist of a three-year course of study.
Women
Women were not allowed in most law schools during the late 1800s and the early 1900s. In 1869, Washington University School of Law became the first chartered law school in America to admit women. The "first woman on record to have received a law degree was Ada Kepley from Union College of Law in Illinois (Northwestern)" in 1870. Some law schools that allowed women before most others were Buffalo Law School which "begun in 1887 . . . and open to women and immigrant groups"; University of Iowa which "admitted women as law students" since at least 1869; University of Michigan; and Boston University Law School which started admitting women in 1872. "In 1878 two women successfully sued to be admitted to the first class at Hastings Law School [University of California]," one of whom was Clara S. Foltz. When the University of California established a second law program in 1894, this time on the Berkeley campus, it was open to women and men on an equal basis. Ellen Spencer Mussey and Emma Gillett founded the Washington Law School for women and men in 1898 (now known as, American University Washington College of Law).
The difficulty for women law students was further aggravated by the fact that courts did not allow women to be admitted as lawyers, as demonstrated in the famous case involving Myra Bradwell as the plaintiff in Bradwell v. Illinois (1870). The federal courts were subsequently opened to women in 1878 due to a successful campaign by Belva Ann Lockwood.
The elite law schools remained closed to women for a while after. Pushed by the suffragist movement for women, Harvard Law School started considering admitting women in 1899 but without success. Partly in response to the pressures of the suffragist movement and the unwillingness of elite law schools to open their doors, "in 1908, Portia Law School was founded in Boston" which later became the New England School of Law and was the only law school at the time with "an all women student body". In 1915, due to Harvard's continued refusal to admit women, the Cambridge Law School for Women was established as an alternative to elite law schools, and was to be "as nearly as possible a replica of the Harvard Law School as is possible to make it." World War I encouraged the movement toward admitting women to law schools, and in 1918, Fordham Law School and Yale Law School started admitting women. Northeastern University School of Law, at the time a YMCA institution, started admitting women in 1923. Harvard Law School did not admit women until 1950. In 1966, Notre Dame Law School started admitting women.
Despite all of these advances, "[i]n 1963, women had comprised only 2.7 percent of the profession. In the academic year 1969–70, only 6.35 percent of the degree candidates at law school were women." A prevalent attitude has been mentioned several times by Hillary Clinton, who recalled that she had been accepted at Harvard Law School in 1969 but had been repelled by a professor who told her at a student-recruitment party, "we don't need any more women at Harvard." (She went to Yale Law School instead.) Attendance of women at law schools did, however, improve significantly over the next ten years. "In 1968, 3,704 of the 62,000 law students in approved schools were women; by 1979, there were 37,534 women out of 117,279 students in approved schools", although women were still represented in larger proportions at less elite law schools. In 2016, the number of women enrolled in ABA-approved law schools reached a majority (50.09%), with female students numbering 55,766 of the total 111,327.
Credentials obtainable while in law school
Within each U.S. law school, key credentials include:
Law review/Law journal membership or editorial position (based either on grades or write-on competition or both). This is important for at least three reasons. First, because it is determined by either grades or writing ability, membership is an indicator of strong academic performance. This leads to the second reason, which is that potential employers sometimes use law review membership in their hiring criteria. Third, work on law review exposes a student to legal scholarship and editing, and often allows the student to publish a significant piece of legal scholarship on his or her own. Most law schools have a "flagship" journal usually called "School name Law Review" (for example, the Harvard Law Review—although some schools call their flagship journal "School name Law Journal"; see Yale Law Journal) that publishes articles on all areas of law, and one or more other specialty law journals that publish articles concerning only a particular area of the law (for example, the Harvard Journal of Law & Technology).
Moot court membership or award (based on oral and written argument). Success in moot court can distinguish one as an outstanding oral advocate and provides a degree of practical legal training that is often absent from law review membership. Moot court and related activities, such as Trial Advocacy and Dispute Resolution, may appeal especially to employers hiring for litigation positions, such as a district attorney's office.
Mock trial membership and awards (based on trial level advocacy skills) also can distinguish one as an outstanding trial advocate and help develop "real world" skills that are often valuable to employers hiring for litigation positions.
Order of the Coif membership (based on grade point average). This is often coupled with Latin honors (summa and magna cum laude, though often not cum laude). However, a slight majority of law schools in the U.S. do not have Order of the Coif chapters.
State and federal court clerkship
On the basis of a student's credentials, as well as favorable faculty recommendations, some students obtain a one- or two-year clerkship with a judge after graduation. It is becoming more common for clerkships to begin after a few years in private practice. Clerkships may be with state or federal judges.
Clerkships are meant to provide the recent law school graduate with experience working for a judge. Often, clerks engage in significant legal research and writing for the judge, writing memos to assist a judge in coming to a legal conclusion in some cases, and writing drafts of opinions based on the judge's decisions. Appellate court clerkships, although generally more prestigious, do not necessarily give one a great deal of practical experience in the day-to-day life of a lawyer in private practice. The average litigator might get much more out of a clerkship at the trial court level, where they will learn about motion practice, deal with lawyers, and generally learn how a trial court works from the inside.
By and large, though, clerkships provide other valuable assets to a young lawyer. Judges often become mentors to young clerks, providing the young attorney with an experienced individual to turn to for advice. Fellow clerks can also become lifelong friends and/or professional connections. Clerkships are great experiences for new lawyers, and law schools encourage graduates to engage in a clerkship to broaden their professional experiences. However, there simply are not enough clerkships to accommodate all the academically eligible graduates.
United States Supreme Court clerkship
Some law school graduates are able to clerk for one of the Justices on the Supreme Court (each Justice takes two to four clerks per year). Often, these clerks are graduates of elite law schools, with Harvard, Yale, the University of Chicago, the University of Michigan, Columbia, the University of Virginia, and Stanford being among the most highly represented schools. Justice Clarence Thomas is the major exception to the rule that Justices hire clerks from elite schools; he takes pride in selecting clerks from non-top-tier schools, and publicly noted that his clerks have been attacked on the Internet as "third tier trash."
Most Supreme Court clerks have clerked in a lower court, often for a year with a highly selective federal circuit court judge (such as Judges Alex Kozinski, Michael Luttig, J. Harvie Wilkinson, David Tatel, Richard Posner) known as a "feeder judge". It is perhaps the most highly selective and prestigious position a recently graduated lawyer can have, and Supreme Court clerks are often highly sought after by law firms, the government, and law schools. Law firms give Supreme Court clerks as much as a $400,000 bonus for signing with their firm. The vast majority of Supreme Court clerks either become academics at elite law schools, enter private practice as appellate attorneys, or take highly selective government positions.
Controversies involving U.S. law schools
Employment statistics and salary information
After the JD, a large study of law graduates who passed the bar examination, found that even graduates of lower-ranked law schools were typically making six-figure ($100,000+) incomes within 12 years after graduation. Graduates of higher-ranking schools typically earned more than $170,000. The Economic Value of a Law Degree, a peer-reviewed study which included law graduates who did not pass the bar exam and were not practicing law, found that law graduates at the 25th percentile of earnings ability typically earned around $20,000 more every year than they would have earned with only a bachelor's degree. Graduates at the 75th percentile earned around $80,000 more per year than they likely would have earned with a bachelor's degree. However, only around 60 to 70 percent of law graduates practice law. Some authors have criticized employment information supplied directly by law schools; however, these studies report information supplied directly by law graduates, and in the case of the latter study, collected by the United States Census Bureau as part of a broader economic survey.
New York Times negative press coverage
Starting in 2011, American law schools became the subject of a series of critical articles in mainstream news publications, beginning with a series of New York Times articles by David Segal. Such articles have reported on the debt loads of law graduates, the difficulty of securing employment in the legal profession, and insufficient practical training at American law schools. A number of critics have pointed out factual inaccuracies and logical errors in the New York Times' higher education coverage, especially related to law schools.
More recent press coverage by some higher education reporters has noted that peer-reviewed studies and comprehensive data suggest that law graduates are still typically better off financially than they would be had they not attended law school, notwithstanding challenges facing recent graduates.
Lawsuits related to American legal education
In 2011, several law schools were sued for fraud and for misleading job placement statistics. Most of these suits have been dismissed on the merits.
In 1995, the United States Department of Justice sued the American Bar Association, the accrediting body of American law schools, for allegedly violating the Sherman Antitrust Act. The settlement of the suit prohibited the ABA from using the salary of faculty or administrators as an accreditation criterion.
Political balance
Liberal professors have claimed that there is conservative bias in law schools, particularly within law and economics and business law fields. Liberals have also argued for affirmative action to increase the representation of women and minorities among law students and law faculty.
Conservative students have argued that there is a liberal bias among top tier law faculty.
Law school rankings
There are several different law school rankings, each with a different emphasis and different methodology. Most either emphasize inputs or readily measurable outcomes (i.e., outcomes shortly after graduation); none measure value-added or long-term outcomes. In general, these rankings are controversial and not universally accepted as authoritative.
U.S. News & World Report regularly publishes a list of the "Top 100 Law Schools" based on various qualitative and quantitative factors, e.g., entering student LSAT scores and GPAs, reputation surveys, expenditures per student, etc. U.S. News ratings heavily emphasize inputs—student test scores and grades, law school expenditures—but include some outcomes such as bar passage and employment shortly after graduation. U.S. News rankings are heavily weighted toward "reputation", which is measured through a survey with a small sample size and low response rates. The reputation scores are highly correlated with the previous years' reputation scores and may not reflect changes in law school quality over time.
The Social Science Research Network—a repository for draft and completed scholarship in law and the social sciences—publishes monthly rankings of law schools based on the number of times faculty members' scholarship was downloaded. Rankings are available by total number of downloads, total number of downloads within the last 12 months, and downloads per faculty member to adjust for the size of different institutions. SSRN also provides rankings of individual law school faculty members on these metrics.
Brian Leiter compiles a regular series of evaluations called "Brian Leiter's Law School Reports" in which he and other commentators discuss law schools. Leiter's rankings tend to emphasize the quality and quantity of faculty scholarship, as measured by citations in a select group of journals.
Several other ranking systems are explicitly designed to focus on employment outcomes at or shortly after graduation, including rankings by the National Law Journal, Vault.com and Above the Law. The National Law Journal provides a comparison of its employment-based rankings to U.S. News rankings. For students who are primarily interested in lucrative employment outcomes rather than scholarly prestige, this comparison may suggest which law schools are most undervalued by the market.
Top 14 law schools
There exists an informal category known as the "Top Fourteen" or "T14", which has historically referred to the fourteen institutions that regularly claim the top spots in the yearly U.S. News & World Report ranking of American law schools. Furthermore, the "T14" schools remain the only ones to have ever placed within the top ten spots in these rankings. Although "T14" is not a designation used by U.S. News itself, the term is "widely known in the legal community." While these schools have seen their position within the top fourteen spots shift frequently, they have generally not placed outside of the top fourteen since the inception of the rankings. There have been rare exceptions: Texas and UCLA appeared in the 1987 list, before the start of the annual rankings (ahead of Northwestern and Cornell); Texas and UCLA displaced Georgetown in 2018 and 2022, respectively. Because of their relatively consistent placement at the top of these rankings, the schools that have taken the annual top spots since 1990 are commonly referred to as the "Top Fourteen" by published books on law school admissions, undergraduate university pre-law advisers, professional law school consultants, and newspaper articles on the subject.
Those 14 schools, alphabetically, are: Berkeley, Chicago, Columbia, Cornell, Duke, Georgetown, Harvard, Michigan, New York University, Northwestern, Penn, Stanford, Virginia, and Yale.
It is unclear whether attending a higher-ranked law school provides a larger boost to law school graduates' earnings than attending a lower-ranked law school. Higher earnings and improved outcomes for graduates of higher-ranked law schools may be due to these students' greater earnings potential compared to graduates of lower-ranked law schools before they attended law school — higher standardized test scores and undergraduate GPAs, wealthier families and friends, etc. One study suggests that, after controlling for students' incoming credentials, earnings and employment outcomes are better at lower-ranked ABA-approved law schools than at higher-ranked law schools — that is, lower-ranked law schools may do more to improve outcomes than higher-ranked schools.
Regional tiers and lower-tier national schools
Most law schools outside the top tier are more regional in scope and often have very strong regional connections to post-graduation employment opportunities. For example, a student graduating from a lower-tier law school may find opportunities in that school's "home market": the legal market containing many of that school's alumni, where most of the school's networking and career development energies are focused. In contrast, a graduate of an upper-tier law school may find employment opportunities across a broader geographic region.
State-authorized schools
Many schools are authorized or accredited by a state, and some have been in continuous operation for over 95 years. Most are located in Alabama, California, Massachusetts, Pennsylvania and Tennessee, and in Puerto Rico. Some state-authorized law schools are maintained to offer a non-ABA option, experimenting with lower-cost alternatives.
Graduates of non-ABA approved law schools have much lower bar passage rates than same-race graduates of ABA-approved law schools in the same state.
Unaccredited schools
Some schools are not accredited by a state or the American Bar Association. Most are located in California. Such schools in California are registered and licensed to operate by The State Bar of California Committee of Bar Examiners (CBE), but are not accredited by the CBE. Their first-year students are required to take the First-Year Law Students' Examination ("Baby Bar"), which then authorizes them to continue their studies in the following years. Graduates of these schools may then take the California Bar Examination. Once they pass the Bar, they are licensed to practice law in California. However, many other jurisdictions do not allow graduates of unaccredited law schools to sit for their bar examination. In California, graduates of non-ABA approved law schools have much lower bar passage rates than same-race graduates of ABA-approved law schools in the same state.
Oldest active law schools
Law schools are listed by the dates from when they were first established.
Marshall-Wythe School of Law (The College of William & Mary) established 1779 (closed in 1861 and reopened in 1920)
University of Maryland Francis King Carey School of Law established 1816, held first classes in 1824 (closed during the American Civil War and reopened shortly after its end)
Harvard Law School established 1817 (oldest continuously open school)
University of Virginia School of Law established 1819
Yale Law School established 1824
University of Cincinnati College of Law established 1833
Pennsylvania State University Dickinson School of Law established 1834
New York University School of Law established 1835
Indiana University Maurer School of Law established 1842
Saint Louis University School of Law established in 1843 (closed in 1847 and reopened in 1908)
University of North Carolina School of Law established 1845
Louis D. Brandeis School of Law (University of Louisville) established 1846
Cumberland School of Law established in 1847
Tulane University Law School established 1847
Washington and Lee University School of Law established 1849
Baylor Law School established 1849 (closed in 1883 and reestablished 1920)
University of Pennsylvania Law School established 1850
Albany Law School established 1851
University of Mississippi School of Law established 1854
Columbia Law School established 1858
See also
List of law schools in the United States
List of law schools attended by United States Supreme Court justices
IRAC
Law School Admission Council
Correspondence law school
Catholic University of America School of Canon Law
References
Types of university or college
Law of the United States
Higher education in the United States | wiki |
A contribution is a participation in a collective action. The term appears in:
the economy of contribution (économie de la contribution).
See also
Collaboration
Marmon may refer to:
People
Jacky Marmon (c. 1798–1880), Australian sailor
Daniel W. Marmon (1844–1909), American industrialist
Susie Rayos Marmon (1877–1988), Native American educator, historian, and storyteller
Lee Marmon (1925–2021), Native American photographer and author
Neale Marmon (born 1961), former English footballer
Companies
Nordyke Marmon & Company, a US manufacturer of flour mills until the 1920s
Marmon Motor Car Company, a US manufacturer of automobiles until 1933
Marmon-Herrington, the successor company to the Marmon Motor Car Company
Marmon Group, a Chicago, Illinois industrial company
Marmon Motor Company, a defunct Texas-based manufacturer of premium trucks | wiki |
Miguel Díaz de la Portilla (born January 30, 1963) is a Cuban-American attorney and politician from Florida. A Republican, he served in the Florida Senate from 2010 to 2016, representing parts of Miami, Coral Gables, and the surrounding area. Prior to that, he was a member of the Miami-Dade County Commission from 1993 to 2000.
Early life and education
Díaz de la Portilla's great-grandfather served in the Cuban Senate, while two of his great-uncles served simultaneously in the Cuban House of Representatives. A graduate of Miami's Belen Jesuit Preparatory School, Díaz de la Portilla went on to earn his bachelor's and law degrees at the University of Miami.
He is one of the four children of Cuban exiles Miguel Ángel Díaz Pardo and Fabiola Pura de la Portilla García. Díaz de la Portilla's two brothers, Alex and Renier, are also Miami-Dade politicians. Alex preceded Miguel in the Florida Senate (2000–2010), and previously served in the Florida House of Representatives (1993–2000). Renier served two stints on the Miami-Dade County School Board (1996–1998 and 2006–2012), and one term in the Florida House (2000–2002).
Díaz de la Portilla has two sons and three daughters. He lives in Coral Gables, Florida, with his wife, Elinette Ruiz-Díaz de la Portilla, also a land use and zoning attorney, and the couple's two daughters. He has a black belt in Brazilian jiu-jitsu.
Miami-Dade County Commission
Díaz de la Portilla's career in public service began in 1993 when he was elected to the Miami-Dade County Commission from a new Hispanic-majority district. The district was created after a federal court ruled in 1992 that the county's system of electing its nine commissioners at-large violated the Voting Rights Act, and ordered special elections to elect a new, 13-member commission elected from single-member districts.
On the County Commission, Díaz de la Portilla chaired the transportation committee, and also served as the commission's chair. Among his legislative initiatives were the creation of the county's Office of the Inspector General, the establishment of the Miami-Dade County Expressway Authority, and reforms to land use policy and the zoning process.
In 2000, Díaz de la Portilla opted to run for Miami-Dade County Mayor rather than re-election. He lost to incumbent Mayor Alex Penelas in the first-round nonpartisan primary, 51.6 to 20.9%.
Florida Senate
In 2010, Díaz de la Portilla was elected to the Florida Senate from the 36th district, which encompassed parts of Miami, Coral Gables, and the surrounding area, without general election opposition. Decennial redistricting renumbered his seat the 37th, and he was re-elected unopposed in 2012 and 2014.
When the Florida Environmental Regulation Commission signed off on controversial limits for toxic compounds that can go into Florida's surface waters, Díaz de la Portilla called on Governor Scott to have the commission reconsider its position.
In 2014, Díaz de la Portilla was downgraded to an "F" rating by the National Rifle Association. On February 19, 2016, USF Executive Director and NRA Past President Marion P. Hammer sent a "Florida Alert!" to USF & NRA Members and Friends regarding Díaz de la Portilla's actions. Díaz de la Portilla had rejected several key gun bills, including HB4001, HB163, and SB68.
When dealing with campus carry, Díaz de la Portilla took meetings with university presidents, college police chiefs, faculty members, and students from around the state. All of them voiced their opposition to the bill. It is not clear whether the bills would have passed had Díaz de la Portilla allowed them to come up for a vote, but there were 26 Republicans and 14 Democrats in the Senate, and approval for either measure would have required only a simple majority. In an interview with Sun Sentinel reporter Dan Sweeney, Díaz de la Portilla stated, "I don't think I'm an anti-gun guy. I'm a pro-common sense guy."
In April 2016, Díaz de la Portilla was recognized nationally by the American Psychiatric Association for championing efforts to address the need to improve mental health services in the Criminal Justice system in the state of Florida.
Court-ordered redistricting in 2015 significantly altered the 37th district, making it more Democratic. Díaz de la Portilla lost re-election in 2016 to Democratic state Representative José Javier Rodríguez in the 2016 general election, 48.9 to 45.6%.
Electoral history
County elections, 1993-2004
Florida Senate, 2010–2016
References
External links
http://www.flsheriffs.org/newsroom/entry/florida-sheriffs-association-announces-2016-legislator-of-the-year-and-legi
Florida State Senate - Miguel Díaz de la Portilla
http://wlrn.org/post/guns-and-mental-health
http://www.orlandosentinel.com/opinion/os-ed-nra-gun-bills-20160225-story.html
Miguel Díaz de la Portilla for State Senate
Diaz de la Portilla declares campus-carry bill dead for the session
Two gun bills shot down in Florida Legislature
Florida Senators Called Out by Students for Stalling Bill Recognizing Right to Carry Guns on Campus
Florida Senator Pushes Open-Carry Bill, Kills Guns On Campus Debate
Senator Portilla, R-Miami, Holds the Key to Florida Open Carry
Political Courage Test
1963 births
Living people
People from Coral Gables, Florida
University of Miami School of Law alumni
Florida lawyers
County commissioners in Florida
Republican Party Florida state senators
American politicians of Cuban descent
Hispanic and Latino American state legislators in Florida | wiki |
Things named after the astronomy and relativity scientist Karl Schwarzschild (1873–1916) include:
Institutions:
Karl Schwarzschild Medal
Karl Schwarzschild Observatory
Astronomical features:
Lunar crater Schwarzschild
Asteroid 837 Schwarzschilda
Technical terms:
Schwarzschild constant
Schwarzschild effect in photography, also known as reciprocity failure, and important for calibrating astronomical measurement
Schwarzschild law, empirical equation relating to Schwarzschild effect
Schwarzschild criterion, in astronomy
Schwarzschild coordinates
Schwarzschild's equation for radiative transfer
Relativity terms:
Schwarzschild metric (closely related to Schwarzschild solution, Schwarzschild geometry, Schwarzschild black hole, and Schwarzschild vacuum)
de Sitter–Schwarzschild metric
Distorted Schwarzschild metric
Schwarzschild geodesics
Schwarzschild fluid solution
Schwarzschild kugelblitz
Schwarzschild radius (closely related to Schwarzschild horizon)
Schwarzschild wormholes
Schwarzschild telescope
Schwarzschild | wiki |
Remorse is a distressing emotion experienced by an individual who regrets actions which they have done in the past that they deem to be shameful, hurtful, or wrong. Remorse is closely allied to guilt and self-directed resentment. When a person regrets an earlier action or failure to act, it may be because of remorse or in response to various other consequences, including being punished for the act or omission. People may express remorse through apologies, trying to repair the damage they've caused, or self-imposed punishments.
In a legal context, the perceived remorse of an offender is assessed by Western justice systems during trials, sentencing, parole hearings, and in restorative justice. However, there are epistemological problems with assessing an offender's level of remorse.
A person who is incapable of feeling remorse is often diagnosed with antisocial personality disorder, as characterized in the DSM-IV-TR. In general, a person needs to be unable to feel fear, as well as remorse, in order to develop psychopathic traits. Legal and business professions such as insurance have done research on the expression of remorse via apologies, primarily because of the potential litigation and financial implications.
Studies on apologizing
Two studies on apologising are The Five Languages of Apology by Gary Chapman and Jennifer Thomas and On Apology by Aaron Lazare. These studies indicate that effective apologies that express remorse typically include a detailed account of the offense; acknowledgment of the hurt or damage done; acceptance of the responsibility for, and ownership of, the act or omission; an explanation that recognises one's role. As well, apologies usually include a statement or expression of regret, humility, or remorse; a request for forgiveness; and an expression of a credible commitment to change or a promise that it will not happen again. Apologies may also include some form of restitution, compensation or token gesture in line with the damage that one has caused. John Kleefeld has encapsulated this into "four Rs" that typically make for a fully effective apology: remorse, responsibility, resolution and reparation. When an apology is delayed, for instance if a friend has been wronged and the offending party does not apologise, the perception of the offense can compound over time. This is sometimes known as compounding remorse. Compunction refers to the act of actively expressing remorse, usually requiring remorseful individuals to physically approach the person to whom they wish to express regret.
Falsified expressions
In a study led by Leanne ten Brinke, a professor at the University of British Columbia, participants' genuine and falsified emotions were studied to investigate behavioral and facial cues. Brinke and others found a significant difference in the presence of facial expressions in real and false remorse. With falsified emotions of remorse, they found that the participants experienced a greater range of emotions, which are close to genuine feelings, while deceptive descriptions of remorse were associated with positive emotions, such as happiness and surprise. The positive emotions felt by participants demonstrating a deceptive description of remorse are likely due to the leakage of genuine feelings from incomplete deception. Brinke and others established that participants appeared surprised because they could only raise their eyebrows when trying to appear sad, which then caused the participants to feel embarrassed, feel genuine happiness, and let a smile slip. In contrast to deceptive and falsified accounts, genuine accounts were expressed with fewer emotions. Participants showing deceptive or falsified emotions overcompensated their emotional performance. Genuine negative feelings of remorse leaked by the lower face were immediately covered up with a neutral expression. Brinke recorded a small number of body language and verbal cues for deceptive participants; instead, she recorded a large number of speech hesitations that cued deceptive and falsified accounts of remorse. Current findings of deceptive and falsified remorse have a practical use for measuring the veracity of remorseful displays for judges, jurors, parole officers, and psychologists when sentencing offenders.
Psychopathy
Psychopathic individuals are best known for their flagrant disregard for social and moral norms. Psychopaths have dysfunctional personal relationships, characterized by violence, exploitation, and philandering. Emotionally, they are incapable of feeling guilt or empathy, they respond abnormally to fear and pain, and other emotions are shallow compared to population norms. Psychopaths refuse to adopt social and moral norms because they are not swayed by the emotions, such as guilt, remorse, or fear of retribution, that influence other human beings.
Human societies tend to value remorse; conversely, a person who exhibits a lack of remorse is often perceived in a negative light. It is widely accepted that remorse is the proper reaction to misconduct. Remorse may originate from either actual or contrived regret for the misconduct that results in being caught or causing harm. Research has shown that the facial expressions of offenders on trial affect the jury's attitude and, in turn, the sentencing decision. While a display of remorse may suggest guilt and so influence a jury's decision, a lack of remorse influences the jury even more because it is one trait of psychopathy.
Psychopathy is characterized by the absence of certain traits from a person's personality, such as empathy and remorse. Knowledge of psychopathic traits has been shown to affect how jurors perceive adult and juvenile offenders. Assessments of psychopathy are introduced to address a relatively wide variety of questions in the legal system, so investigators have started examining the effects of psychopathy evidence. Through simulations in studies by John Edens, who is a psychology professor at Texas A&M University, data suggest that attributing psychopathic traits to adult and juvenile offenders can have a noticeable negative effect on how these individuals are viewed by others. Remorselessness, a key feature of psychopathy, proves to be a strong predictor of juror attitudes. In the study by John Edens, a pool of offenders were labeled as either having a "disorder" condition or having "no disorder." Those labeled as "disorder" were given death verdicts by mock jurors. In the study, traits such as callousness, remorselessness, and superficial charm were a strong predictor of negative consequences for the offenders. The study found that remorselessness had the largest effect on the mock jurors' opinions of the "disorder" offenders and explained support for the death sentence. The results of this study suggest that, even in the absence of mental health testimony, perceptions of a defendant's personality traits may have serious implications in the sentencing decisions of a capital case.
One study on psychopaths found that, under certain circumstances, they could willfully empathize with others, and that their empathic reaction initiated the same way it does for controls. Psychopathic criminals were brain-scanned while watching videos of a person harming another individual. The psychopaths' empathic reaction initiated the same way it did for controls when they were instructed to empathize with the harmed individual, and the area of the brain relating to pain was activated when the psychopaths were asked to imagine how the harmed individual felt. The research suggests psychopaths can switch empathy on at will, which would enable them to be both callous and charming. The team who conducted the study say they do not know how to transform this willful empathy into the spontaneous empathy most people have, though they propose it might be possible to rehabilitate psychopaths by helping them to activate their "empathy switch". Others suggested that it remains unclear whether psychopaths' experience of empathy was the same as that of controls, and also questioned the possibility of devising therapeutic interventions that would make the empathic reactions more automatic.
One problem with the theory that the ability to turn empathy on and off constitutes psychopathy is that such a theory would classify socially sanctioned violence and punishment as psychopathy, as these entail suspending empathy towards certain individuals and/or groups. The attempt to get around this by standardizing tests of psychopathy for cultures with different norms of punishment is criticized in this context for being based on the assumption that people can be classified in discrete cultures while cultural influences are in reality mixed and every person encounters a mosaic of influences. Psychopathy may be an artefact of psychiatry's standardization along imaginary sharp lines between cultures, as opposed to an actual difference in the brain.
Work conducted by Professor Jean Decety with large samples of incarcerated psychopaths offers additional insights. In one study, psychopaths were scanned while viewing video clips depicting people being intentionally hurt. They were also tested on their responses to seeing short videos of facial expressions of pain. The participants in the high-psychopathy group exhibited significantly less activation in the ventromedial prefrontal cortex, amygdala, and periaqueductal gray parts of the brain, but more activity in the striatum and the insula when compared to control participants. In a second study, individuals with psychopathy exhibited a strong response in pain-affective brain regions when taking an imagine-self perspective, but failed to recruit the neural circuits that were activated in controls during an imagine-other perspective—in particular the ventromedial prefrontal cortex and amygdala—which may contribute to their lack of empathic concern.
Researchers have investigated whether people who have high levels of psychopathy have sufficient levels of cognitive empathy but lack the ability to use affective empathy. People who score highly on psychopathy measures are less likely to exhibit affective empathy, and studies have found a strong negative correlation between psychopathy and affective empathy. Research has also found that those who scored highly on the psychopathy scale did not lack the ability to recognise emotion in facial expressions. Therefore, such individuals do not lack perspective-taking ability but do lack compassion.
In fact, in an experiment published in March 2007 at the University of Southern California, neuroscientist Antonio R. Damasio and his colleagues showed that subjects with damage to the ventromedial prefrontal cortex lack the ability to empathically feel their way to moral answers, and that when confronted with moral dilemmas, these brain-damaged patients coldly came up with "end-justifies-the-means" answers, leading Damasio to conclude that the point was not that they reached immoral conclusions, but that when they were confronted by a difficult issue – in this case, whether to shoot down a passenger plane hijacked by terrorists before it hits a major city – these patients appear to reach decisions without the anguish that afflicts those with normally functioning brains. According to Adrian Raine, a clinical neuroscientist also at the University of Southern California, one of this study's implications is that society may have to rethink how it judges immoral people: "Psychopaths often feel no empathy or remorse. Without that awareness, people relying exclusively on reasoning seem to find it harder to sort their way through moral thickets. Does that mean they should be held to different standards of accountability?"
Psychopathic individuals do not show regret or remorse. This was thought to be due to an inability to generate this emotion in response to negative outcomes. However, in 2016, people with antisocial personality disorder (also known as dissocial personality disorder) were found to experience regret, but they did not use the regret to guide their choice of behavior. There was no lack of regret, but rather a problem in thinking through a range of potential actions and estimating their outcome values.
Forgiveness
The perception of remorse is essential to an apology, and the greater the perception of remorse the more effective the apology. An effective apology reduces negative consequences and facilitates cognitive and behavioral changes associated with forgiveness. With empathy as the mediator between apologies and forgiveness, and remorse as an essential part of an apology, one can expect empathy to mediate the relationship between perceived remorse and forgiveness. Remorse may signal that one is suffering psychologically because of one's negative behavior, which leads to empathy from the victim, who may then express forgiveness. In a study by James Davis and Greg Gold, 170 university students filled out questionnaires about forgiveness within interpersonal relationships. Davis and Gold's findings suggest that when a victim perceives an apology to be remorseful, then they believe the negative behavior will not occur again, and they will be more willing to forgive the perpetrator.
Versus self-condemnation
Remorse is closely linked with the willingness to humble oneself and to repent for one's misdeeds. Remorse, however, is not the same as self-condemnation. Self-condemnation, more so than remorse, is said to be associated with poor psychological well-being. Remorse captures feelings of guilt, regret, and sorrow. Forgiveness does not eliminate all negative feelings, but it may entail the reduction of bitter and angry feelings, not feelings of disappointment, regret, or sorrow. A study by Mickie Fisher found that people who forgive themselves for serious offenses may continue to harbor remorse or regret. In contrast to remorse, self-condemnation reflects a more global, negative, severe stance toward oneself. Remorse may convey a sense of sorrow, while self-condemnation suggests the kind of loathing and desire for punishment that characterizes interpersonal grudges. Fisher suggests that self-forgiveness does not necessarily require one to get rid of feelings of regret or remorse. Based on the study by Fisher, self-forgiveness seems to relate more closely to self-condemnation than to remorse. When trying to convince people to forgive themselves, it is crucial not to erase the potentially adaptive feelings of remorse along with the more destructive self-condemnation. People can grow and experience prosocial behaviors once they accept responsibility for their own transgressions. For genuine self-forgiveness, one must first accept responsibility for one's offenses and not rush to rid oneself of guilty feelings.
Buyer's remorse
Purchases can be divided into two different categories: material and experiential. A material good is made to be kept in the buyer's possession, while an experiential good provides the buyer with a life experience. A material good provides the buyer with a more enduring pleasure compared with an experiential one, and the two types of purchase also result in different types of regret. While experiential purchases bring about regrets of a missed opportunity, material purchases result in buyer's remorse, which means that a person dwells on how their material purchase measures up to other purchases they could have made and how it compares with other people's purchases. These comparisons diminish satisfaction with the original purchase. Past research explains that regrets of action are intense, but only in the short term, while regrets of inaction gain intensity over time and come to dominate people's experience. Major life choices, such as marriage, jobs, and education, are often the focus of regret. Everyday experience suggests that everyday decisions are the most frequent causes of regret. Marketing directors know the effects of buyer's remorse and use them to their advantage when planning marketing strategies. The regret felt over choosing a material rather than an experiential purchase depends on the pain associated with the factors underlying the purchase. Based on research by Thomas Gilovich and Emily Rosenzweig, material purchases are more likely to lead to regret, while experiential purchases give the buyer more satisfaction even over time.
See also
References
External links
Emotions
Psychopathy | wiki |
Cleaner shrimp is a common name for a number of swimming decapod crustaceans that clean other organisms of parasites. They belong to any of three families, Hippolytidae (including the Pacific cleaner shrimp, Lysmata amboinensis), Palaemonidae (including the spotted Periclimenes magnificus), and Stenopodidae (including the banded coral shrimp, Stenopus hispidus). The last of these families is more closely related to lobsters and crabs than it is to the remaining families. The term "cleaner shrimp" is sometimes used more specifically for the family Hippolytidae and the genus Lysmata.
Cleaner shrimp are so called because they exhibit a cleaning symbiosis with client fish where the shrimp clean parasites from the fish. The fish benefit by having parasites removed from them, and the shrimp gain the nutritional value of the parasites. The shrimp also eat the mucus and parasites around the wounds of injured fish, which reduces infections and helps healing. The action of cleansing further aids the health of client fish by reducing their stress levels. In many coral reefs, cleaner shrimp congregate at cleaning stations. In this behaviour cleaner shrimps are similar to cleaner fish, and sometimes may join with cleaner wrasse and other cleaner fish attending to client fish.
Shrimp of the genus Urocaridella are often cryptic or live in caves on the reef and are not associated commensally with other animals. These shrimp assemble around cleaning stations where up to 25 shrimp live in proximity. When a potential client fish swims close to a station with shrimp present, several shrimp perform a "rocking dance," a side-to-side movement. Frequency of rocking increases with hunger. This increase in frequency suggests competition between hungry and sated shrimp. To avoid competition with other cleaners during the day, the shrimp Urocaridella antonbruunii was observed cleaning a sleeping fish at night.
Cleaner shrimps are often included in saltwater aquaria partly due to their cleansing function and partly due to their brightly colored appearance.
References
Decapods
Symbiosis | wiki |
Erie Public Library may refer to:
Buffalo & Erie County Public Library, in New York
Libraries of the Erie County library system, in Erie, Pennsylvania
Main Library (Erie, Pennsylvania), in Erie, Pennsylvania | wiki |
The Irish PGA Championship, formerly the Irish Professional Championship and colloquially known as the Irish Professional Close or National Championship, is a golf tournament that has been played annually in Ireland since 1907. It is one of the oldest golf tournaments in the world, the oldest in the country, and has been played at many different golf courses in Ireland.
It is the marquee event on the PGA Tour of Ireland's schedule, with many notable winners in its more than 100 years of play. Christy O'Connor Snr and Harry Bradshaw have the most wins in the event with 10 each. The event was played in a match-play format from its inauguration in 1907 until it became a stroke play event in 1910.
Winners
From 1907 to 1909 the championship was a match-play event. The final was over 18 holes in 1907 and 36 holes in 1908 and 1909. The format changed to stroke play from 1910. The tournament was reduced to 54 holes in 1967, 1979, 1985, 1998, 2002 and 2012 and to 36 holes in 1987.
Most wins
Tournament summaries
1907 Irish Professional Championship
The first Irish Professional Championship was played on 20 and 21 May 1907 at Royal Portrush Golf Club. There was an 18-hole stroke play contest on the first morning with the leading 8 qualifying for the knockout matchplay stage. James Edmundson and Harry Hamill led with scores of 76. Three players were tied for the final place and played a 9-hole playoff to decide the last place, won by Hugh McNeill. In the first round of the matchplay Edmundson and Hamill were drawn to play each other, Edmundson being the surprise winner by 5&4. The semi-finals and final were played on the second day. Local professional Edmundson and Yorkshireman Bertie Snowball won their semi-finals and met in the final. The match was all square after 9 holes but Edmundson won the next three and eventually won 2&1.
The Championship was preceded by the first professional match between Ireland and Scotland on 18 May. Teams of 12 played singles and foursomes. Ireland beat a weak Scotland team by 14 matches to 3 with 1 match halved. The players had played a 36-hole stroke-play event the previous day, won by Michael Moran with a score of 154, 4 ahead of Bertie Snowball.
1908 Irish Professional Championship
The 1908 Championship was extended to a third day, being played from 13–15 May at Portmarnock Golf Club. The first day was a 36-hole event with 8 qualifying for the matchplay stage. As in 1907 James Edmundson and Harry Hamill led, with scores of 160. Edmundson and Bertie Snowball again met in the final. In the 36-hole final Edmundson won comfortably 5&3 after being 4 up after 18 holes. Edmundson won a gold medal and the £10 first prize.
1909 Irish Professional Championship
The 1909 Championship retained the same format and was played from 12–14 May at Royal County Down Golf Club. James Edmundson was again joint leader, this time with Michael Moran, with scores of 167. In the first round Edmundson was finally defeated. Moran and Harry Kidd won their two matches and met in the final. Moran started badly and was 3 down after 6 holes. However, he then won the next 6 holes and was 2 up after the first round. Moran won the first hole in the afternoon and from the 5th to the 11th holes won 6 more to win easily 9&7, the first of five successive victories in the championship.
1910 Irish Professional Championship
From 1910 the Championship became a 72-hole strokeplay event. It was played on 9 and 10 June at Royal Dublin Golf Club. Defending champion Michael Moran pulled 7 ahead at the end of the first day after a second round 72, a course record. On the second day Moran set another course record of 70, extending his lead to 13. A final round of 76 gave him a 10 stroke win over Michael Cahill and left him 23 ahead of the rest of the field.
1911 Irish Professional Championship
The 1911 Championship was played on 8 and 9 June at Royal Portrush Golf Club. Michael Moran led after the first day on 159, three ahead of James Edmundson. In the third round Edmundson scored 75 to Moran's 78 to be on level terms. Moran then scored a final round 72 to Edmundson's 78 to win by six strokes. Hugh McNeill finished third a further three shots behind.
1912 Irish Professional Championship
The 1912 Championship was played on 9 and 10 May at Castlerock Golf Club. Pat Doyle led after the first day on 152, having set a course record of 72 in his afternoon round. Michael Moran was three behind on 155. Moran scored 75 in the third round to Doyle's 82 to take a 4 stroke lead. Another 75 from Moran gave him a 6 shot win over Doyle who finished with a 77. Harry Hamill was third, a further shot behind.
1913 Irish Professional Championship
The 1913 Championship was played on 31 July and 1 August at Portmarnock Golf Club. After two rounds Hugh McNeill led on 163, ahead of Pat O'Hare on 164. Defending champion Michael Moran was tied for fifth after a second round 88. After a third round 79 O'Hare had a lead of 5 strokes from McNeill and Charlie Pope, with Moran a further shot back. McNeill and Pope faded in the final round, while O'Hare and Moran both took 39 for the front nine. O'Hare then took 7 at the 10th and 11th and came home in 44 to Moran's 36 to give Moran a two stroke victory and his fifth successive title. Pope had a final round 82 to finish third.
1914 Irish Professional Championship
In early 1914 Michael Moran left Royal Dublin Golf Club to be professional at Seaham Harbour, County Durham and so he was not eligible to defend his title in 1914. The 1914 Championship was played on 28 and 29 May at Royal County Down Golf Club. Local professional Alex Robertson led after the first day on 151 with Jimmy O'Hare a shot behind. After the third round Hugh McNeill and Charlie Pope were tied for the lead with Robertson a shot behind. In the final round Pope took 75 to McNeill's 89 to take the championship by 8 strokes from brothers Jimmy and Pat O'Hare with Robertson a further shot behind.
References
External links
Irish PGA Championship official site
Golf tournaments in Ireland
Recurring sporting events established in 1907
1907 establishments in Ireland | wiki |
Richard Levine may refer to:
Richard Levine (architect), American environmental architect, solar energy and sustainability pioneer, and professor
Richard Levine (director), American writer, director, actor and producer
Richard M. Levine, American journalist and author
Richard Levine (character), a character in the Jurassic Park novel The Lost World | wiki |
A panicle is a much-branched inflorescence. Some authors distinguish it from a compound spike inflorescence, by requiring that the flowers (and fruit) be pedicellate (having a single stem per flower). The branches of a panicle are often racemes. A panicle may have determinate or indeterminate growth.
This type of inflorescence is largely characteristic of grasses such as oat and crabgrass, as well as other plants such as pistachio and mamoncillo. Botanists use the term paniculate in two ways: "having a true panicle inflorescence" as well as "having an inflorescence with the form but not necessarily the structure of a panicle".
Corymb
A corymb may have a paniculate branching structure, with the lower flowers having longer pedicels than the upper, thus giving a flattish top superficially resembling an umbel. Many species in the subfamily Amygdaloideae, such as hawthorns and rowans, produce their flowers in corymbs.
See also
Thyrse, a branched inflorescence where the main axis has indeterminate growth, and the branches have determinate growth
Notes
References
Plant morphology | wiki |
Artificial meat(s) may refer to:
Cultured meat, meat grown in cell cultures instead of inside animals
Factory farming related meats, foodstuffs created in highly managed conditions
Meat analogue, imitation meat products such as tofu, tempeh, textured vegetable protein (TVP), wheat gluten, pea protein, or mycoprotein
See also
Artificiality
Meat (disambiguation) | wiki |
There are a number of alternative names for Northern Ireland. Northern Ireland consists of six historic counties of Ireland, and remains part of the United Kingdom following the independence of the other twenty-six counties as the Irish Free State in 1922 (now the Republic of Ireland, officially named "Ireland"). In addition to, and sometimes instead of, its official name, several other names are used for the region. Significant differences in political views between unionists and Irish nationalists are reflected in the variations of names they use for the region. A proposal to change Northern Ireland's legal name to Ulster was seriously considered by the UK and Northern Ireland Governments in 1949, but in the end the name "Northern Ireland" was retained.
Names
Legal name
The official and legal name of the region is Northern Ireland. The legal name is used by both the British and Irish governments, internationally by governments around the world, and by most of its inhabitants.
Political names
Unionist-associated names
Ulster is often used by unionists and some media outlets in the UK. This is the Hiberno-Norse form of the province of Uladh (pronounced "Ull-ah") (Irish Uladh and Old Norse ster, meaning "province", yield "Uladh Ster" or, in English, "Ulster"). Examples of official use of this term are the Ulster Unionist Party, the University of Ulster, and BBC Radio Ulster.
This term is disliked by some nationalists because the whole of the Province of Ulster consists of nine counties – three of which, County Monaghan, County Cavan and County Donegal, are in the Republic of Ireland. Unionists have argued that because Ulster's size has changed much over the centuries, Ulster can be applied to Northern Ireland alone. The Government of Northern Ireland once considered a proposal to change the official name to Ulster. Some also reject the claim of the Republic of Ireland to have inherited the tradition of the Irish Republic of the Irish War of Independence, because it excludes the north east, and refer to the Republic variously as the Free State or The Twenty-Six Counties.
The Province is also sometimes used, referring directly to the status of Northern Ireland as a "province" of the United Kingdom. This also, however, could be obliquely used to refer to the province of Ulster; and since no other constituent part of the United Kingdom is known as a province, a less controversial usage is "the region".
In 1949, members of the United Kingdom parliament debated how best to respond to Ireland's decision to terminate its last connection with the British King. Ireland had also adopted a law saying that the state could be described as the Republic of Ireland. Some British MPs did not consider this appropriate. Lieut-Colonel Sir Thomas Moore M.P. said that "Ulster has as much right to be called the "Kingdom of Ireland" as Southern Ireland has to be called the "Republic of Ireland." However, Northern Ireland was never renamed the Kingdom of Ireland.
Nationalist-associated names
Nationalists in the region and their supporters elsewhere commonly refer to it as The North of Ireland, The North-East or The North. This can be used to implicitly deny British sovereignty by linguistically placing the region within the rest of Ireland. The usage does, however, involve a geographic anomaly, as Northern Ireland does not contain Ireland's most northerly point.
The Six Counties is another popular name among republicans, as it can portray the region as a mere collection of Irish counties, rather than a legal political entity.
The Occupied Territories or The Occupied Six Counties are phrases sometimes used by some republicans, especially since the arrival of additional British Army soldiers, but originally employed simply to suggest the illegitimacy of the British presence in Northern Ireland. This is sometimes rendered as The Occupied Zone or The OZ.
Other names
In the Republic of Ireland, people typically refer to the region simply as the North. Similarly, and more commonly, in Northern Ireland, the South is sometimes used (by both unionists and nationalists) as a shorthand term for the Republic of Ireland.
This usage does not hold for parts of the Republic such as County Donegal, giving rise to the joke that, when out in a boat on Lough Foyle, "the South is north, and the North is south".
A colloquial name for Northern Ireland which has grown in popularity in recent years is "Norn Iron", derived from an exaggerated pronunciation of 'Northern Ireland' in a broad Belfast accent. The name is often used by fans of the Northern Ireland national football team, both on banners and in conversation.
Northern Ireland is literally translated to Tuaisceart Éireann in Irish (though it is sometimes known as Na Sé Chontae 'The Six Counties' as well as Tuaisceart na hÉireann '[the] North of Ireland' by republicans) and Norlin Airlann or Northern Ireland in Ulster Scots.
Government proposals to rename Northern Ireland as Ulster
Ulster unionists often use the name Ulster as a synonym for Northern Ireland. Sometimes there are calls to formally change the name of Northern Ireland to Ulster.
1937 Ulster proposal
In 1937, a plebiscite was held in the Irish Free State which approved a new Constitution. Among its provisions, the name of the Irish state was changed to "Ireland"; this led to discussions, both at a governmental level and in the House of Commons of Northern Ireland, about Northern Ireland being renamed as Ulster.
UK and NI Government discussions about name change
Ahead of the renaming of the Irish Free State to simply Ireland in 1937, the British Prime Minister and the Home Secretary discussed the matter with the Prime Minister of Northern Ireland, Lord Craigavon, when he was in London in July 1937. It was reported to the Cabinet that:
Later, the British Home Secretary discussed the new name for the Irish state (and other matters) with the Acting Prime Minister of Northern Ireland, J. M. Andrews, on 10 December 1937, just under three weeks before the new Constitution came into effect. Since the earlier discussions with Lord Craigavon, the Law Officers had given their opinion that local legislation changing the name of Northern Ireland to Ulster would be ultra vires, and that legislation by Westminster would be necessary if the change of name were to be made. It was this which the Home Secretary wished to discuss with Mr. Andrews. The Home Secretary reported on the discussions to his Cabinet colleagues, noting the following:
Parliamentary discussions about name change
The parliamentary reports of the Parliament of Northern Ireland record an instance in 1937 where the proposal to rename Northern Ireland as Ulster was given formal consideration. On 1 December 1937, Thomas Joseph Campbell, MP (Nationalist) asked the Prime Minister of Northern Ireland whether the Government was considering changing the name of Northern Ireland, and, if so what name was being considered. Responding, the Minister of Finance J. M. Andrews MP said "the matter has been under discussion amongst Members of the Government, but no Cabinet decision has been taken".
This exchange followed speeches in parliament the previous month by two Independent Unionist MPs, Tommy Henderson and John William Nixon, raising the possible name change. Both regretted the name change was not mentioned in the King's Speech. Mr. Henderson criticised the Attorney-General for Northern Ireland's handling of the matter. He said that "the Attorney-General suggested recently that the name of Northern Ireland should be changed to Ulster". However, according to Mr Henderson, it was "absolutely impossible to change the name of this area from Northern Ireland to Ulster without amending the 1920 Act" (the Government of Ireland Act 1920). That Act could only be amended by the Parliament of the United Kingdom and not the Parliament or government of Northern Ireland. He concluded that in making the suggestion, the Attorney-General had tried to "throw dust in the eyes of the Ulster people".
This exchange had followed a statement made by the Attorney General, Sir Anthony Babington KC, on 15 November 1937 in Belfast in which he criticised the new Constitution proposed for Ireland. In particular, he was critical of its claim to jurisdiction over Northern Ireland. He said:
The Attorney General continued by saying that it was of "great importance" that the "cumbersome name" of Northern Ireland that came into the Act of 1920 alongside Southern Ireland should be changed. He continued further remarking that "The name of Southern Ireland has been changed and it was time that the name of Northern Ireland should be changed to Ulster".
1949 Ulster proposal
At a British Cabinet meeting on 22 November 1948 it was decided that a Working Party be established to "[consider] what consequential action may have to be taken by the United Kingdom Government as a result of Eire's ceasing to be a member of the Commonwealth". At the time the Irish parliament was soon expected to pass the Republic of Ireland Act, by which Ireland (formally referred to as "Eire" by the British authorities) would shortly become a republic, and thereby leave the Commonwealth.
The Working Party was chaired by the Cabinet Secretary, Norman Brook. Its report dated 1 January 1949 was presented by Prime Minister Clement Attlee to the Cabinet on 7 January 1949. Among its recommendations was that the name of Northern Ireland should be changed to Ulster. In this regard, the Working Party's report noted:
The Working Party's report appended draft legislation (a draft of the Ireland Act) including provision for the "Ulster" name change. With respect to the arguments against the name change, the report noted in particular that the UK's "Representative" (effectively Ambassador) in Dublin believed taking the name "Ulster" would "give fresh opportunities for anti-British propaganda by Eire". The report also noted that the Commonwealth Relations Office also held that view and its representative on the working party had asked that before a final decision be taken:
A Downing Street Conference between the UK and Northern Ireland governments was held on 6 January 1949. The Conference was held on the initiative of the Northern Ireland Government. Its purpose was to consider possible legislation giving statutory effect to Prime Minister Clement Attlee's assurance that Northern Ireland's constitutional position would not be prejudiced by the Republic of Ireland Act, by which Ireland had decided to leave the British Commonwealth, and to consider any other possible consequences for Northern Ireland arising from the Irish decision. The UK government was represented at the Conference by the Prime Minister, the Lord Chancellor, the Home Secretary, and the Secretary of State for Commonwealth Relations, while Northern Ireland premier Sir Basil Brooke led the Northern Ireland delegation. Brooke said to Attlee:
Prime Minister Attlee reported to his Cabinet colleagues the following day that he had discussed relevant Working Party proposals with the Northern Ireland delegation. "As a result of that discussion", Attlee reported that he would "recommend that the title of Northern Ireland should not be changed to Ulster".
On 10 January 1949, Prime Minister Attlee presented a memorandum of his own to his Cabinet. With respect to his recommendation that the name for Northern Ireland should not be changed, he said:
The proposed name change was the subject of some reportage in the media with The Times reporting shortly before the conference:
The fresh proposal to change the name to Ulster drew protest from the Nationalist Party MP for Fermanagh and Tyrone, Anthony Mulvey. He sent a telegram to Attlee to strongly "protest against any proposal to change the title Northern Ireland to Ulster". Mulvey argued that "[a]ny assent to the suggestion proposed can only be regarded as a calculated affront to the Irish nation and still further embitter relations between the peoples of Great Britain and Ireland...". Mulvey sent a telegram in similar terms to the Irish Minister for External Affairs, Seán MacBride, who responded as follows:
The UK government cabinet minutes of 12 January 1949 noted that "N.I. [Northern Ireland] Ministers accepted the name “N.I.” eventually". A few days after the Conference, The Times also reported that "[i]t is not thought that the suggestion to rename Northern Ireland "Ulster" has found much support." In a somewhat colourful but not entirely accurate explanation of events, in the run-up to the general election in Northern Ireland in 1949, Thomas Loftus Cole declared that the British Government had refused to allow the name change "because the area did not comprise the nine counties of the province. We should demand our three counties [Donegal, Monaghan and Cavan] so that we could call our country Ulster, a name of which we are all proud".
See also
Derry/Londonderry name dispute
Geographical renaming
Names of the Irish state
Southern Ireland
Terminology of the British Isles
References
External links
Ulster
Politics of Northern Ireland
History of Northern Ireland
Northern Ireland
Northern Ireland, Alternative names | wiki |
Animal husbandry is the branch of agriculture concerned with animals that are raised for meat, fibre, milk, or other products. It includes day-to-day care, selective breeding, and the raising of livestock. Husbandry has a long history, starting with the Neolithic Revolution when animals were first domesticated, from around 13,000 BC onwards, predating farming of the first crops. By the time of early civilisations such as ancient Egypt, cattle, sheep, goats, and pigs were being raised on farms.
Major changes took place in the Columbian exchange, when Old World livestock were brought to the New World, and then in the British Agricultural Revolution of the 18th century, when livestock breeds like the Dishley Longhorn cattle and Lincoln Longwool sheep were rapidly improved by agriculturalists, such as Robert Bakewell, to yield more meat, milk, and wool. A wide range of other species, such as horse, water buffalo, llama, rabbit, and guinea pig, are used as livestock in some parts of the world. Insect farming, as well as aquaculture of fish, molluscs, and crustaceans, is widespread. Modern animal husbandry relies on production systems adapted to the type of land available. Subsistence farming is being superseded by intensive animal farming in the more developed parts of the world, where, for example, beef cattle are kept in high density feedlots, and thousands of chickens may be raised in broiler houses or batteries. On poorer soil, such as in uplands, animals are often kept more extensively and may be allowed to roam widely, foraging for themselves.
Most livestock are herbivores, except for pigs and chickens which are omnivores. Ruminants like cattle and sheep are adapted to feed on grass; they can forage outdoors or may be fed entirely or in part on rations richer in energy and protein, such as pelleted cereals. Pigs and poultry cannot digest the cellulose in forage and require other high-protein foods.
Etymology
The verb to husband, meaning "to manage carefully," derives from an older meaning of husband, which in the 14th century referred to the ownership and care of a household or farm, but today means the "control or judicious use of resources," and in agriculture, the cultivation of plants or animals. Farmers and ranchers who raise livestock are considered to practice animal husbandry.
History
Birth of husbandry
The domestication of livestock was driven by the need to have food on hand when hunting was unproductive. The desirable characteristics of a domestic animal are that it should be useful to the domesticator, should be able to thrive in his or her company, should breed freely, and be easy to tend. Domestication was not a single event, but a process repeated at various periods in different places. Sheep and goats were the animals that accompanied the nomads in the Middle East, while cattle and pigs were associated with more settled communities. The first wild animal to be domesticated was the dog. Half-wild dogs, perhaps starting with young individuals, may have been tolerated as scavengers and killers of vermin, and being naturally pack hunters, were predisposed to become part of the human pack and join in the hunt. Prey animals, sheep, goats, pigs and cattle, were progressively domesticated early in the history of agriculture. Pigs were domesticated in the Near East between 8,500 and 8000 BC, sheep and goats in or near the Fertile Crescent about 8,500 BC, and cattle from wild aurochs in the areas of modern Turkey and Pakistan around 8,500 BC. A cow was a great advantage to a villager as she produced more milk than her calf needed, and her strength could be put to use as a working animal, pulling a plough to increase production of crops, and drawing a sledge, and later a cart, to bring the produce home from the field. Draught animals were first used about 4,000 BC in the Middle East, increasing agricultural production immeasurably.
In southern Asia, the elephant was domesticated by 6,000 BC. Fossilised chicken bones dated to 5040 BC have been found in northeastern China, far from where their wild ancestors lived in the jungles of tropical Asia, but archaeologists believe that the original purpose of domestication was for the sport of cockfighting. Meanwhile, in South America, the llama and the alpaca had been domesticated, probably before 3,000 BC, as beasts of burden and for their wool. Neither was strong enough to pull a plough which limited the development of agriculture in the New World. Horses occur naturally on the steppes of Central Asia and their domestication began around 3,000 BC in the Black Sea and Caspian Sea region. Although horses were originally seen as a source of meat, their use as pack animals and for riding followed. Around the same time, the wild ass was being tamed in Egypt. Camels were domesticated soon after this, with the Bactrian camel in Mongolia and the Arabian camel becoming beasts of burden. By 1000 BC, caravans of Arabian camels were linking India with Mesopotamia and the Mediterranean.
Ancient civilisations
In ancient Egypt, cattle were the most important livestock, and sheep, goats, and pigs were also kept; poultry including ducks, geese, and pigeons were captured in nets and bred on farms, where they were force-fed with dough to fatten them. The Nile provided a plentiful source of fish. Honey bees were domesticated from at least the Old Kingdom, providing both honey and wax. In ancient Rome, all the livestock known in ancient Egypt were available. In addition, rabbits were domesticated for food by the first century BC. To help flush them out from their burrows, the polecat was domesticated as the ferret, its use described by Pliny the Elder.
Medieval husbandry
In northern Europe, agriculture including animal husbandry went into decline when the Roman empire collapsed. Some aspects such as the herding of animals continued throughout the period. By the 11th century, the economy had recovered and the countryside was again productive. The Domesday Book recorded every parcel of land and every animal in England: "there was not one single hide, nor a yard of land, nay, moreover ... not even an ox, nor a cow, nor a swine was there left, that was not set down in [the king's] writ." For example, the royal manor of Earley in Berkshire, one of thousands of villages recorded in the book, had in 1086 "2 fisheries worth [paying tax of] 7s and 6d [each year] and 20 acres of meadow [for livestock]. Woodland for [feeding] 70 pigs." The improvements of animal husbandry in the medieval period in Europe went hand in hand with other developments. Improvements to the plough allowed the soil to be tilled to a greater depth. Horses took over from oxen as the main providers of traction, new ideas on crop rotation were developed and the growing of crops for winter fodder gained ground. Peas, beans and vetches became common; they increased soil fertility through nitrogen fixation, allowing more livestock to be kept.
Columbian exchange
Exploration and colonisation of North and South America resulted in the introduction into Europe of such crops as maize, potatoes, sweet potatoes and manioc, while the principal Old World livestock – cattle, horses, sheep and goats – were introduced into the New World for the first time along with wheat, barley, rice and turnips.
Agricultural Revolution
Selective breeding for desired traits was established as a scientific practice by Robert Bakewell during the British Agricultural Revolution in the 18th century. One of his most important breeding programs was with sheep. Using native stock, he was able to quickly select for large, yet fine-boned sheep, with long, lustrous wool. The Lincoln Longwool was improved by Bakewell and in turn the Lincoln was used to develop the subsequent breed, named the New (or Dishley) Leicester. It was hornless and had a square, meaty body with straight top lines. These sheep were exported widely and have contributed to numerous modern breeds. Under his influence, English farmers began to breed cattle for use primarily as beef. Long-horned heifers were crossed with the Westmoreland bull to create the Dishley Longhorn.
The semi-natural, unfertilised pastures formed by traditional agricultural methods in Europe were managed by grazing and mowing. As the ecological impact of this land management strategy is similar to the impact of such natural disturbances as a wildfire, this agricultural system shares many beneficial characteristics with a natural habitat, including the promotion of biodiversity. This strategy is declining in Europe today due to the intensification of agriculture. The mechanized and chemical methods used are causing biodiversity to decline.
Practices
Systems
Traditionally, animal husbandry was part of the subsistence farmer's way of life, producing not only the food needed by the family but also the fuel, fertiliser, clothing, transport and draught power. Killing the animal for food was a secondary consideration, and wherever possible its products such as wool, eggs, milk and blood (by the Maasai) were harvested while the animal was still alive. In the traditional system of transhumance, people and livestock moved seasonally between fixed summer and winter pastures; in montane regions the summer pasture was up in the mountains, the winter pasture in the valleys.
Animals can be kept extensively or intensively. Extensive systems involve animals roaming at will, or under the supervision of a herdsman, often for their protection from predators. Ranching in the Western United States involves large herds of cattle grazing widely over public and private lands. Similar cattle stations are found in South America, Australia and other places with large areas of land and low rainfall. Ranching systems have been used for sheep, deer, ostrich, emu, llama and alpaca.
In the uplands of the United Kingdom, sheep are turned out on the fells in spring and graze the abundant mountain grasses untended, being brought to lower altitudes late in the year, with supplementary feeding being provided in winter. In rural locations, pigs and poultry can obtain much of their nutrition from scavenging, and in African communities, hens may live for months without being fed, and still produce one or two eggs a week.
At the other extreme, in the more developed parts of the world, animals are often intensively managed; dairy cows may be kept in zero-grazing conditions with all their forage brought to them; beef cattle may be kept in high density feedlots; pigs may be housed in climate-controlled buildings and never go outdoors; poultry may be reared in barns and kept in cages as laying birds under lighting-controlled conditions. In between these two extremes are semi-intensive, often family-run farms where livestock graze outside for much of the year, silage or hay is made to cover the times of year when the grass stops growing, and fertiliser, feed, and other inputs are brought onto the farm from outside.
Feeding
Animals used as livestock are predominantly herbivorous, the main exceptions being the pig and the chicken which are omnivorous. The herbivores can be divided into "concentrate selectors" which selectively feed on seeds, fruits and highly nutritious young foliage, "grazers" which mainly feed on grass, and "intermediate feeders" which choose their diet from the whole range of available plant material. Cattle, sheep, goats, deer and antelopes are ruminants; they digest food in two steps, chewing and swallowing in the normal way, and then regurgitating the semidigested cud to chew it again and thus extract the maximum possible food value.
The dietary needs of these animals are mostly met by eating grass. Grasses grow from the base of the leaf-blade, enabling them to thrive even when heavily grazed or cut.
In many climates grass growth is seasonal, for example in the temperate summer or tropical rainy season, so some areas of the crop are set aside to be cut and preserved, either as hay (dried grass), or as silage (fermented grass). Other forage crops are also grown and many of these, as well as crop residues, can be ensiled to fill the gap in the nutritional needs of livestock in the lean season.
Extensively reared animals may subsist entirely on forage, but more intensively kept livestock will require energy and protein-rich foods in addition. Energy is mainly derived from cereals and cereal by-products, fats and oils and sugar-rich foods, while protein may come from fish or meat meal, milk products, legumes and other plant foods, often the by-products of vegetable oil extraction.
Pigs and poultry are non-ruminants and unable to digest the cellulose in grass and other forages, so they are fed entirely on cereals and other high-energy foodstuffs. The ingredients for the animals' rations can be grown on the farm or can be bought, in the form of pelleted or cubed, compound foodstuffs specially formulated for the different classes of livestock, their growth stages and their specific nutritional requirements. Vitamins and minerals are added to balance the diet. Farmed fish are usually fed pelleted food.
Breeding
The breeding of farm animals seldom occurs spontaneously but is managed by farmers with a view to encouraging traits seen as desirable. These include hardiness, fertility, docility, mothering abilities, fast growth rates, low feed consumption per unit of growth, better body proportions, higher yields, and better fibre qualities. Undesirable traits such as health defects and aggressiveness are selected against.
Selective breeding has been responsible for large increases in productivity. For example, in 2007, a typical broiler chicken at eight weeks old was 4.8 times as heavy as a bird of similar age in 1957, while in the thirty years to 2007, the average milk yield of a dairy cow in the United States nearly doubled.
Animal health
Good husbandry, proper feeding, and hygiene are the main contributors to animal health on the farm, bringing economic benefits through maximised production. When, despite these precautions, animals still become sick, they are treated with veterinary medicines, by the farmer and the veterinarian. In the European Union, when farmers treat their own animals, they are required to follow the guidelines for treatment and to record the treatments given. Animals are susceptible to a number of diseases and conditions that may affect their health. Some, like classical swine fever and scrapie are specific to one type of stock, while others, like foot-and-mouth disease affect all cloven-hoofed animals. Animals living under intensive conditions are prone to internal and external parasites; increasing numbers of sea lice are affecting farmed salmon in Scotland. Reducing the parasite burdens of livestock results in increased productivity and profitability.
Where the condition is serious, governments impose regulations on import and export, on the movement of stock, quarantine restrictions and the reporting of suspected cases. Vaccines are available against certain diseases, and antibiotics are widely used where appropriate. At one time, antibiotics were routinely added to certain compound foodstuffs to promote growth, but this practice is now frowned on in many countries because of the risk that it may lead to antimicrobial resistance in livestock and in humans.
Governments are concerned with zoonoses, diseases that humans may acquire from animals. Wild animal populations may harbour diseases that can affect domestic animals which may acquire them as a result of insufficient biosecurity. An outbreak of Nipah virus in Malaysia in 1999 was traced back to pigs becoming ill after contact with fruit-eating flying foxes, their faeces and urine. The pigs in turn passed the infection to humans. Avian flu H5N1 is present in wild bird populations and can be carried large distances by migrating birds. This virus is easily transmissible to domestic poultry, and to humans living in close proximity with them. Other infectious diseases affecting wild animals, farm animals and humans include rabies, leptospirosis, brucellosis, tuberculosis and trichinosis.
Range of species
There is no single universally agreed definition of which species are livestock. Widely agreed types of livestock include cattle for beef and dairy, sheep, goats, pigs, and poultry. Various other species are sometimes considered livestock, such as horses, while poultry birds are sometimes excluded. In some parts of the world, livestock includes species such as buffalo, and the South American camelids, the alpaca and llama. Some authorities use much broader definitions to include fish in aquaculture, micro-livestock such as rabbits and rodents like guinea pigs, as well as insects from honey bees to crickets raised for human consumption.
Products
Animals are raised for a wide variety of products, principally meat, wool, milk, and eggs, but also including tallow, isinglass and rennet. Animals are also kept for more specialised purposes, such as to produce vaccines and antiserum (containing antibodies) for medical use. Where fodder or other crops are grown alongside animals, manure can serve as a fertiliser, returning minerals and organic matter to the soil in a semi-closed organic system.
Branches
Dairy
Although all mammals produce milk to nourish their young, the cow is predominantly used throughout the world to produce milk and milk products for human consumption. Other animals used to a lesser extent for this purpose include sheep, goats, camels, buffaloes, yaks, reindeer, horses and donkeys.
All these animals have been domesticated over the centuries, being bred for such desirable characteristics as fecundity, productivity, docility and the ability to thrive under the prevailing conditions. Whereas in the past cattle had multiple functions, modern dairy cow breeding has resulted in specialised Holstein Friesian-type animals that produce large quantities of milk economically. Artificial insemination is widely available to allow farmers to select for the particular traits that suit their circumstances.
Whereas in the past cows were kept in small herds on family farms, grazing pastures and being fed hay in winter, nowadays there is a trend towards larger herds, more intensive systems, the feeding of silage and "zero grazing", a system where grass is cut and brought to the cow, which is housed year-round.
In many communities, milk production is only part of the purpose of keeping an animal which may also be used as a beast of burden or to draw a plough, or for the production of fibre, meat and leather, with the dung being used for fuel or for the improvement of soil fertility. Sheep and goats may be favoured for dairy production in climates and conditions that do not suit dairy cows.
Meat
Meat, mainly from farmed animals, is a major source of dietary protein and essential nutrients around the world, averaging about 8% of man's energy intake. The actual types eaten depend on local preferences, availability, cost and other factors, with cattle, sheep, pigs and goats being the main species involved. Cattle generally produce a single offspring annually which takes more than a year to mature; sheep and goats often have twins and these are ready for slaughter in less than a year; pigs are more prolific, producing more than one litter of up to about 11 piglets each year. Horses, donkeys, deer, buffalo, llamas, alpacas, guanacos and vicunas are farmed for meat in various regions. Some desirable traits of animals raised for meat include fecundity, hardiness, fast growth rate, ease of management and high food conversion efficiency. About half of the world's meat is produced from animals grazing on open ranges or on enclosed pastures, the other half being produced intensively in various factory-farming systems; these are mostly cows, pigs or poultry, and often reared indoors, typically at high densities.
Poultry
Poultry, kept for their eggs and for their meat, include chickens, turkeys, geese and ducks. The great majority of laying birds used for egg production are chickens. Methods for keeping layers range from free-range systems, where the birds can roam as they will but are housed at night for their own protection, through semi-intensive systems where they are housed in barns and have perches, litter and some freedom of movement, to intensive systems where they are kept in cages. The battery cages are arranged in long rows in multiple tiers, with external feeders, drinkers, and egg collection facilities. This is the most labour saving and economical method of egg production but has been criticised on animal welfare grounds as the birds are unable to exhibit their normal behaviours.
In the developed world, the majority of the poultry reared for meat is raised indoors in big sheds, with automated equipment under environmentally controlled conditions. Chickens raised in this way are known as broilers, and genetic improvements have meant that they can be grown to slaughter weight within six or seven weeks of hatching. Newly hatched chicks are restricted to a small area and given supplementary heating. Litter on the floor absorbs the droppings and the area occupied is expanded as they grow. Feed and water is supplied automatically and the lighting is controlled. The birds may be harvested on several occasions or the whole shed may be cleared at one time.
A similar rearing system is usually used for turkeys, which are less hardy than chickens, but they take longer to grow and are often moved on to separate fattening units to finish. Ducks are particularly popular in Asia and Australia and can be killed at seven weeks under commercial conditions.
Aquaculture
Aquaculture has been defined as "the farming of aquatic organisms including fish, molluscs, crustaceans and aquatic plants and implies some form of intervention in the rearing process to enhance production, such as regular stocking, feeding, protection from predators, etc. Farming also implies individual or corporate ownership of the stock being cultivated." In practice it can take place in the sea or in freshwater, and be extensive or intensive. Whole bays, lakes or ponds may be devoted to aquaculture, or the farmed animal may be retained in cages (fish), artificial reefs, racks or strings (shellfish). Fish and prawns can be cultivated in rice paddies, either arriving naturally or being introduced, and both crops can be harvested together.
Fish hatcheries provide larval and juvenile fish, crustaceans and shellfish, for use in aquaculture systems. When large enough these are transferred to growing-on tanks and sold to fish farms to reach harvest size. Some species that are commonly raised in hatcheries include shrimps, prawns, salmon, tilapia, oysters and scallops. Similar facilities can be used to raise species with conservation needs to be released into the wild, or game fish for restocking waterways. Important aspects of husbandry at these early stages include selection of breeding stock, control of water quality and nutrition. In the wild, there is a massive amount of mortality at the nursery stage; farmers seek to minimise this while at the same time maximising growth rates.
Insects
Bees have been kept in hives since at least the First Dynasty of Egypt, five thousand years ago, and man had been harvesting honey from the wild long before that. Fixed comb hives are used in many parts of the world and are made from any locally available material. In more advanced economies, where modern strains of domestic bee have been selected for docility and productiveness, various designs of hive are used which enable the combs to be removed for processing and extraction of honey. Quite apart from the honey and wax they produce, honey bees are important pollinators of crops and wild plants, and in many places hives are transported around the countryside to assist in pollination.
Sericulture, the rearing of silkworms, was first adopted by the Chinese during the Shang dynasty. The only species farmed commercially is the domesticated silkmoth. When it spins its cocoon, each larva produces an exceedingly long, slender thread of silk. The larvae feed on mulberry leaves and in Europe, only one generation is normally raised each year as this is a deciduous tree. In China, Korea and Japan however, two generations are normal, and in the tropics, multiple generations are expected. Most production of silk occurs in the Far East, with a synthetic diet being used to rear the silkworms in Japan.
Insects form part of the human diet in many cultures. In Thailand, crickets are farmed for this purpose in the north of the country, and palm weevil larvae in the south. The crickets are kept in pens, boxes or drawers and fed on commercial pelleted poultry food, while the palm weevil larvae live on cabbage palm and sago palm trees, which limits their production to areas where these trees grow. Another delicacy of this region is the bamboo caterpillar, and the best rearing and harvesting techniques in semi-natural habitats are being studied.
Effects
Environmental impact
Animal husbandry has a significant impact on the world environment. Both production and consumption of animal products have increased rapidly. Over the past 50 years, meat production has trebled, whereas the production of dairy products doubled and that of eggs increased almost fourfold. Meanwhile, meat consumption has also nearly doubled worldwide. Within that increased overall consumption of meat, developing countries saw a surge in meat consumption, particularly of monogastric livestock. Being a part of the animal–industrial complex, animal agriculture is the primary driver of climate change, ocean acidification, biodiversity loss, and of the crossing of almost every other planetary boundary, in addition to killing more than 60 billion non-human land animals annually. It is responsible for somewhere between 20 and 33% of the fresh water usage in the world, and livestock, and the production of feed for them, occupy about a third of the Earth's ice-free land. Livestock production is a contributing factor in species extinction, desertification, and habitat destruction. Animal agriculture contributes to species extinction in various ways and is the primary driver of the Holocene extinction. It is estimated that 70% of the agricultural land and 30% of the total land surface of the Earth is involved either directly or indirectly in animal agriculture. Habitat is destroyed by clearing forests and converting land to grow feed crops and for animal grazing, while predators and herbivores are frequently targeted and hunted because of a perceived threat to livestock profits; for example, animal husbandry is responsible for up to 91% of the deforestation in the Amazon region. In addition, livestock produce greenhouse gases. Cows produce some 570 million cubic metres of methane per day, which accounts for 35 to 40% of the overall methane emissions of the planet. Further, livestock production is responsible for 65% of all human-related emissions of nitrous oxide.
As a result, ways of mitigating animal husbandry's environmental impact are being studied. Strategies include using biogas from manure, genetic selection, immunization, rumen defaunation, outcompetition of methanogenic archaea with acetogens, introduction of methanotrophic bacteria into the rumen, diet modification and grazing management, among others. It has been suggested that beef products finished in feedlots are less resource-intensive than pastured beef products. A diet change (supplementing feed with Asparagopsis taxiformis) allowed for a reduction of up to 99% in methane production in an experimental study with three ruminants.
Animal welfare
Since the 18th century, people have become increasingly concerned about the welfare of farm animals. Possible measures of welfare include longevity, behavior, physiology, reproduction, freedom from disease, and freedom from immunosuppression. Standards and laws for animal welfare have been created worldwide, broadly in line with the most widely held position in the western world, a form of utilitarianism: that it is morally acceptable for humans to use non-human animals, provided that no unnecessary suffering is caused, and that the benefits to humans outweigh the costs to the livestock. An opposing view is that animals have rights, should not be regarded as property, are not necessary to use, and should never be used by humans. Live export of animals has risen to meet increased global demand for livestock such as in the Middle East. Animal rights activists have objected to long-distance transport of animals; one result was the banning of live exports from New Zealand in 2003.
David Nibert, professor of sociology at Wittenberg University, posits that, based on contemporary scholarship by ethologists and biologists about the sentience and intelligence of other animals, "we can assume that, for the most part, the other animals' experience of capture, enslavement, use, and slaying was one of suffering and violence." Much of this involved direct physical violence, but also structural violence as their systemic oppression and enslavement "resulted in their inability to meet their basic needs, the loss of self-determination, and the loss of opportunity to live in a natural way." He says that the remains of domesticated animals from thousands of years ago found during archeological excavations revealed numerous bone pathologies, which provide evidence of extreme suffering:
In culture
Since the 18th century, the farmer John Bull has represented English national identity, first in John Arbuthnot's political satires, and soon afterwards in cartoons by James Gillray and others including John Tenniel. He likes food, beer, dogs, horses, and country sports; he is practical and down to earth, and anti-intellectual.
Farm animals are widespread in books and songs for children; the reality of animal husbandry is often distorted, softened, or idealized, giving children an almost entirely fictitious account of farm life. The books often depict happy animals free to roam in attractive countryside, a picture completely at odds with the realities of the impersonal, mechanized activities involved in modern intensive farming.
Pigs, for example, appear in several of Beatrix Potter's "little books", as Piglet in A.A. Milne's Winnie the Pooh stories, and somewhat more darkly (with a hint of animals going to slaughter) as Babe in Dick King-Smith's The Sheep-Pig, and as Wilbur in E. B. White's Charlotte's Web. Pigs tend to be "bearers of cheerfulness, good humour and innocence". Many of these books are completely anthropomorphic, dressing farm animals in clothes and having them walk on two legs, live in houses, and perform human activities. The children's song "Old MacDonald Had a Farm" describes a farmer named MacDonald and the various animals he keeps, celebrating the noises they each make.
Many urban children experience animal husbandry for the first time at a petting farm; in Britain, some five million people a year visit a farm of some kind. This presents some risk of infection, especially if children handle animals and then fail to wash their hands; a strain of E. coli infected 93 people who had visited a British interactive farm in an outbreak in 2009. Historic farms such as those in the United States offer farmstays and "a carefully curated version of farming to those willing to pay for it", sometimes giving visitors a romanticised image of a pastoral idyll from an unspecified time in the pre-industrial past.
See also
Animal–industrial complex
Agribusiness
Fishery
Food vs. feed
Industrial agriculture
Wildlife farming
Zootechnics
Notes
References
Citations
Sources
Saltini, Antonio. Storia delle scienze agrarie, 4 vols, Bologna 1984–89.
Clutton Brock, Juliet. The walking larder. Patterns of domestication, pastoralism and predation, Unwin Hyman, London 1988.
Clutton Brock, Juliet. Horse power: a history of the horse and donkey in human societies, National history Museum publications, London 1992.
Fleming, George; Guzzoni, M. Storia cronologica delle epizoozie dal 1409 av. Cristo sino al 1800, in Gazzetta medico-veterinaria, I–II, Milano 1871–72.
Hall, S; Clutton Brock, Juliet. Two hundred years of British farm livestock, Natural History Museum Publications, London 1988.
Janick, Jules; Noller, Carl H.; Rhyker, Charles L. The Cycles of Plant and Animal Nutrition, in Food and Agriculture, Scientific American Books, San Francisco 1976.
Manger, Louis N. A History of the Life Sciences, M. Dekker, New York, Basel 2002.
External links
Animal husbandry practices – National Animal Interest Alliance
Livestock | wiki |
A virtual reality website is a website that leverages the WebVR and WebGL APIs to create a 3D environment for a web user to explore using a virtual reality head-mounted display.
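As a rough illustration of how such a site might detect a headset before switching into an immersive mode, the TypeScript sketch below queries the legacy WebVR 1.1 navigator.getVRDisplays() entry point associated with the early Firefox and Chrome builds discussed below; the interface shape shown here is an assumption about that now-deprecated API, and current sites would use its successor, WebXR, instead.

```typescript
// Minimal feature detection for a WebVR-capable browser (legacy WebVR 1.1).
// The display interface is declared loosely here because the legacy API is
// not part of current TypeScript DOM typings.
interface LegacyVRDisplay {
  displayName: string;
  capabilities: { canPresent: boolean };
}

async function findHeadset(): Promise<LegacyVRDisplay | null> {
  const nav = navigator as Navigator & {
    getVRDisplays?: () => Promise<LegacyVRDisplay[]>;
  };
  if (!nav.getVRDisplays) {
    return null; // browser has no WebVR support at all
  }
  const displays = await nav.getVRDisplays();
  // Prefer a display that can actually present, i.e. a real headset.
  return displays.find((d) => d.capabilities.canPresent) ?? null;
}

findHeadset().then((display) => {
  if (display) {
    console.log(`VR headset available: ${display.displayName}`);
  } else {
    console.log("No VR headset found; falling back to a flat 3D view.");
  }
});
```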
History
In June 2014, Mozilla released builds of Firefox with compatibility with Oculus Rift through WebVR, and in November of that year launched MozVR.com, a virtual reality website showcasing web-based virtual reality demos, tied together with a virtual reality navigation interface.
Experimental builds of Google Chrome also use WebVR to support Oculus Rift, Google Cardboard, Project Tango and HTC Vive.
In 2014, Google launched 'Chrome Experiments for Virtual Reality', a virtual reality mobile site showcasing web-based virtual reality demos for Google Cardboard.
In 2015, Mozilla released A-Frame (VR), an open source web framework for building VR experiences and websites.
References
Websites
Virtual reality | wiki |
Geostrophic advection is the advection produced by the geostrophic wind.
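In the standard notation of dynamic meteorology this can be written explicitly; the formulas below are conventional textbook forms (geopotential height z on a constant-pressure surface, gravity g, Coriolis parameter f, and an advected scalar such as temperature T), not material taken from this article.

```latex
% Geostrophic wind on a constant-pressure surface, followed by the
% geostrophic advection of a scalar field T (for example, temperature):
\vec{v}_g = \frac{g}{f}\,\hat{k}\times\nabla_p z ,
\qquad
\left(\frac{\partial T}{\partial t}\right)_{\text{adv}} = -\,\vec{v}_g\cdot\nabla T
```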
Notes and references
See also
Related articles
Glossary of meteorology
Atmospheric thermodynamics
The 2016 COSAFA Cup (known as Castle Lager COSAFA Cup Namibia 2016 for sponsorship reasons) was the 16th edition of the COSAFA Cup, an international football competition consisting of national teams of member nations of the Council of Southern Africa Football Associations (COSAFA). Originally, it was to be held in Windhoek, Namibia during May 2016, however the tournament was rescheduled to avoid a clash with the South African Premier Soccer League and took place in June 2016.
Participating nations
Venues
Draw
The draw was originally scheduled to take place on 25 April 2016. It was rescheduled for 28 April 2016 and televised on SuperSport's Soccer Africa show.
Squads
Group stage
Group A
Group B
Knockout stage
The two group stage winners qualified for this round.
Quarter-finals
Semi-finals
Third place play-off
Final
Plate
The losing quarter-finalists qualified for this round.
Semi-finals
Final
Goalscorers
5 goals
Felix Badenhorst
3 goals
Jane Thaba-Ntšo
Gabadinho Mhango
2 goals
Hendrik Somaeb
Thabiso Kutumela
Menzi Masuku
Gift Motupa
Lawrence Mhlanga
Ronald Pfumbidzai
1 goal
Onkabetse Makgantai
Kabelo Seakanyeng
Nelson Omba
Hlompho Kalake
Jeremea Kamela
Tumelo Khutlang
Basias Makepe
Sera Motebang
Phafa Tšosane
Tojo Claudel Fanomezana
Luis Dorza
Andy Sophie
Miracle Gabeya
Deon Hotto
Itamunua Keimuine
Ronald Ketjijere
Judas Moseamedi
Gift Motupa
Lebogang Phiri
Njabulo Ndlovu
Wonder Nhleko
Sabelo Ndzinisa
Tony Tsabedze
Paul Katema
Spencer Sautu
Charles Zulu
Teenage Hadebe
Marshal Mudehwe
Obadiah Tarumbwa
1 own goal
Angula da Costa (playing against Botswana)
References
External links
Official site
2016
2016 in African football
International association football competitions hosted by Namibia
2016 in Namibian sport | wiki |
The meat price refers to the price of meat.
Inexpensive meats
Inexpensive or cheap meats include, for example, fatty cuts of lamb or mutton.
Factors influencing the price of meat
Factors influencing the price of meat include supply and demand, subsidies, hidden costs, taxes, quotas, and non-material costs ("moral costs") of meat production. Non-material costs can be related to issues such as animal welfare (e.g. the treatment of animals, or over-breeding). Hidden costs of meat production can be related to the environmental impact of meat production and to effects on human health (such as antibiotic resistance). Critics of the meat industry often point to these aspects as a problem.
See also
Livestock price
Intensive animal farming
Organic agriculture
Local food
Factory farming divestment
Food vs. feed
Fodder
Wagyu
Slow food
Draft animal: multi-role animals
Meat analogue
Meat-free days
Bibliography
Lymbery, Philip. Farmageddon: The True Cost of Cheap Meat, Bloomsbury Publishing, 2014.
Further reading
Should meat be a luxury food ?
Understanding Markets for Grass-Fed Beef: Consumer Taste, Price, and Purchase Preferences
Eat less meat, of better quality: don’t do it with sadness. Do it with joy!
Rethinking agriculture report
References
Meat industry | wiki |
In computer science and operations research, exact algorithms are algorithms that always solve an optimization problem to optimality.
Unless P = NP, an exact algorithm for an NP-hard optimization problem cannot run in worst-case polynomial time. There has been extensive research on finding exact algorithms whose running time is exponential with a low base.
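For concreteness, the sketch below (in TypeScript, not taken from any cited source) shows the classic Held–Karp dynamic program for the travelling salesman problem: an exact algorithm that runs in O(2^n · n^2) time rather than the O(n!) of naive enumeration, illustrating the kind of "exponential with a low base" running time discussed above.

```typescript
// Held–Karp dynamic programming: an exact algorithm for the travelling
// salesman problem. dist[i][j] is the cost of travelling from city i to
// city j; the function returns the cost of the cheapest tour that starts
// and ends at city 0 and visits every city exactly once.
function heldKarp(dist: number[][]): number {
  const n = dist.length;
  const FULL = 1 << n;
  // dp[mask][j] = cheapest cost of a path that starts at city 0, visits
  // exactly the cities in `mask`, and currently ends at city j.
  const dp: number[][] = Array.from({ length: FULL }, () =>
    new Array(n).fill(Infinity)
  );
  dp[1][0] = 0; // only city 0 visited, standing at city 0

  for (let mask = 1; mask < FULL; mask++) {
    if (!(mask & 1)) continue; // every partial tour must include city 0
    for (let j = 0; j < n; j++) {
      if (!(mask & (1 << j)) || dp[mask][j] === Infinity) continue;
      for (let k = 0; k < n; k++) {
        if (mask & (1 << k)) continue; // city k already visited
        const next = mask | (1 << k);
        const cost = dp[mask][j] + dist[j][k];
        if (cost < dp[next][k]) dp[next][k] = cost;
      }
    }
  }

  // Close the tour by returning to city 0 from every possible last city.
  let best = Infinity;
  for (let j = 1; j < n; j++) {
    best = Math.min(best, dp[FULL - 1][j] + dist[j][0]);
  }
  return best;
}

// Example: a 4-city instance; the optimal tour 0→1→3→2→0 costs 80.
const example = [
  [0, 10, 15, 20],
  [10, 0, 35, 25],
  [15, 35, 0, 30],
  [20, 25, 30, 0],
];
console.log(heldKarp(example)); // 80
```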
See also
Approximation-preserving reduction
APX is the class of problems with some constant-factor approximation algorithm
Heuristic algorithm
PTAS - a type of approximation algorithm that takes the approximation ratio as a parameter
References
Computational complexity theory
Optimization algorithms and methods | wiki |
LWRC R.E.P.R. MKII (Rapid Engagement Precision Rifle) is a semi-automatic rifle, chambered in 7.62mm NATO or 6.5mm Creedmoor, manufactured by LWRC International.
Design details
The rifle is built on the AR-10 platform, which takes a 7.62 mm cartridge. The rifle has a Geissele trigger and Magpul Industries components, as well as a proprietary four-port muzzle brake and ambidextrous controls. The barrel is cold hammer forged, black nitride treated, carbon-fiber-wrapped, and has a 1:10 twist. The gun can be fitted with a 20", 16", or 12" barrel, and has a full-length Picatinny rail.
Operation
The gun operates with a short-stroke piston. It weighs 13 pounds when loaded with ammunition, and is intended for hunting, long-range shooting competitions, and law enforcement use. The average velocity of a bullet fired from the REPR is reported as 2,711.2 feet per second. The gun is equipped with a 20-round magazine.
References
7.62 mm firearms
Rifles of the United States
LWRC International semi-automatic firearms
Short stroke piston firearms
2010 introductions
ArmaLite AR-10 derivatives
6.5×55mm rifles | wiki |
FLAC may refer to:
Free Lossless Audio Codec, an audio data compression scheme.
Free Legal Advice Centres, an Irish organization
Florida Automatic Computer, an early digital electronic computer
Striplin FLAC, an ultralight aircraft, where the abbreviation stands for Foot Launched Air Cycle
See also
FLAK (disambiguation) | wiki |
Waxing is the process of hair removal from the root by using a covering of a sticky substance, such as wax, to adhere to body hair, and then removing this covering and pulling out the hair from the follicle. New hair will not grow back in the previously waxed area for four to six weeks, although some people will start to see regrowth in only a week due to some of their hair being on a different human hair growth cycle. Almost any area of the body can be waxed, including eyebrows, face, pubic hair (called bikini waxing or intimate waxing), legs, arms, back, abdomen, chest, knuckles, and feet. There are many types of waxing suitable for removing unwanted hair.
Types
Strip waxing (soft wax) is accomplished by spreading a wax thinly over the skin. A cloth or paper strip is applied and pressed firmly, adhering the strip to the wax and the wax to the skin. The strip is then quickly ripped against the direction of hair growth, as parallel as possible to the skin to avoid trauma to the skin. This removes the wax along with the hair. There are different forms of strip waxing or soft waxing: heated, cold or pre-made strips. Unlike cold waxing, heated wax is spread easily over the skin. Cold waxing is thicker, which makes it more difficult to spread smoothly over the skin. Pre-made strips come with the wax on them, and they come in different sizes for different area uses.
Stripless wax (as opposed to strip wax) comprises both hard wax and film wax. Hard wax is applied somewhat thickly and with no cloth or paper strips. Film wax similarly so but is spread in a thin film. The wax then hardens when it cools, thus allowing the easy removal by a therapist without the aid of cloths or strips. This waxing method is very beneficial to people who have sensitive skin. Stripless wax does not adhere to the skin as much as strip wax does, thus making it a good option for sensitive skin as finer hairs are more easily removed because the hard wax encapsulates the hair as it hardens. The stripless waxing method can also be less painful.
Contraindications
The following factors are known to make those who are waxed more prone to "skin lifting", where the top layer of skin is torn away during waxing treatment:
Taking blood-thinning medications;
Taking drugs for autoimmune diseases, including lupus;
Taking prednisone or steroids;
Taking retinoid, including over the counter retinols and prescription strength tretinoin
Psoriasis, eczema, or other chronic skin diseases;
Recent sunburn;
Recent cosmetic or reconstructive surgery;
Recent laser skin treatment;
Severe varicose leg veins;
Rosacea or very sensitive skin;
History of fever blisters or cold sores (waxing can cause a flare-up);
Using tretinoin, tazarotene, or any other peeling agent;
Recent surgical peel, microdermabrasion or chemical peel using glycolic, alpha hydroxy, salicylic acid, or other acid-based products.
Benefits and drawbacks
There are many benefits to waxing versus other forms of hair removal. It is an effective method to remove large amounts of hair at one time. It is a long-lasting method, as hair in waxed areas will not grow back for two to eight weeks. When hair is shaved or removed by depilatory cream, the hair is removed at the surface rather than at the root; within a few days, the hair can reappear at the surface. With these methods, hair tends to grow back as rough stubble. Areas that are repeatedly waxed over long periods of time often exhibit regrowth that is softer.
There are many drawbacks of waxing as well. Waxing can be painful when the strip is removed from the skin. Although the pain is not long-lasting, it can be intense, particularly in sensitive areas. Another drawback to waxing is the expense: waxing is usually performed by a licensed esthetician, and in some cases the cost can be high, depending on the area waxed and the number of sittings required. There are do-it-yourself waxing supplies, but they may be difficult to use on oneself on some areas of the body.
Another drawback of waxing is that some people experience ingrown hairs, red bumps, and minor bleeding. This is more likely to occur when waxing areas with thick hair, especially the first few times when follicles are strongest.
See also
Bikini waxing
Body treatment
Electrolysis
Male waxing
Persian waxing
References
External links
Kutty, Ahmad (13 September 2005). Islamic Ruling on Waxing Unwanted Hair. Retrieved 29 March 2006.
Body wax Epilation Video on Youtube
Hair removal | wiki |
Events
St Catharine's College, University of Cambridge, is founded.
Births
Deaths
Calendar
Other projects
Catcher is a position for a baseball or softball player. It is also a general term for a fielder who catches the ball in cricket. It is also the function and name of the circus performer who catches the flyer on the flying trapeze.
Catcher or catchers may also refer to:
Catchers (band), Irish indie pop band
The Catcher, 1998 horror film directed by Guy Crawford and Yvette Hoffman
See also
The Catcher in the Rye (disambiguation)
Foxcatcher, 2014 American true crime sports drama film directed by Bennett Miller | wiki |
Algebra is one of the main branches of mathematics, covering the study of structure, relation and quantity. Algebra studies the effects of adding and multiplying numbers, variables, and polynomials, along with their factorization and determining their roots. In addition to working directly with numbers, algebra also covers symbols, variables, and set elements. Addition and multiplication are general operations, but their precise definitions lead to structures such as groups, rings, and fields.
Branches
Pre-algebra
Elementary algebra
Abstract algebra
Linear algebra
Universal algebra
Algebraic equations
An algebraic equation is an equation involving only algebraic expressions in the unknowns. These are further classified by degree.
Linear equation – algebraic equation of degree one.
Polynomial equation – equation in which a polynomial is set equal to another polynomial.
Transcendental equation – equation involving a transcendental function of one of its variables.
Functional equation – equation in which the unknowns are functions rather than simple quantities.
Differential equation – equation involving derivatives.
Integral equation – equation involving integrals.
Diophantine equation – equation where the only solutions of interest of the unknowns are the integer ones.
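For concreteness, one standard textbook example of each class listed above (the examples are illustrative and are not drawn from the outline itself):

```latex
\begin{align*}
2x + 3 &= 0                                    &&\text{linear (degree one)}\\
x^3 - 2x + 1 &= 0                              &&\text{polynomial}\\
e^{x} &= x + 2                                 &&\text{transcendental}\\
f(x + y) &= f(x)\,f(y)                         &&\text{functional}\\
y'' + y &= 0                                   &&\text{differential}\\
f(x) &= 1 + \int_0^{x} f(t)\,dt                &&\text{integral}\\
x^2 + y^2 &= z^2,\quad x, y, z \in \mathbb{Z}  &&\text{Diophantine}
\end{align*}
```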
History
History of algebra
General algebra concepts
Fundamental theorem of algebra – states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. This includes polynomials with real coefficients, since every real number is a complex number with an imaginary part equal to zero.
Equations – equality of two mathematical expressions
Linear equation – an algebraic equation with a degree of one
Quadratic equation – an algebraic equation with a degree of two
Cubic equation – an algebraic equation with a degree of three
Quartic equation – an algebraic equation with a degree of four
Quintic equation – an algebraic equation with a degree of five
Polynomial – an algebraic expression consisting of variables and coefficients
Inequalities – a comparison between values
Functions – mapping that associates a single output value with each input value
Sequences – ordered list of elements either finite or infinite
Systems of equations – finite set of equations
Vectors – element of a vector space
Matrix – two dimensional array of numbers
Vector space – basic algebraic structure of linear algebra
Field – algebraic structure with addition, multiplication and division
Groups – algebraic structure with a single binary operation
Rings – algebraic structure with addition and multiplication
See also
Table of mathematical symbols
External links
'4000 Years of Algebra', lecture by Robin Wilson, at Gresham College, 17 October 2007 (available for MP3 and MP4 download, as well as a text file).
ExampleProblems.com Example problems and solutions from basic and abstract algebra.
List
Algebra
Flower of Life may refer to:
Flower of Life, a symbol of sacred geometry
Flower of Life, a Japanese manga series | wiki |
The stern is the rear or aft part of a ship or boat.
Stern may also refer to:
People
Stern (given name)
Stern (surname)
Stern family, a Jewish French banking family
Daniel Stern, pen name of Marie d'Agoult (1805–1876), author and paramour of Franz Liszt
Schools
Stern College for Women, an undergraduate women's college of Yeshiva University, located in Manhattan, New York
Stern Conservatory, a former private music school in Berlin, now part of the Berlin University of the Arts
New York University Stern School of Business
Other uses
Stern (magazine), a weekly German news magazine
Stern Review, an influential report on global warming's economic effect
Stern (game company), two related arcade gaming companies
Stern baronets, two extinct titles in the Baronetage of the United Kingdom
Stern Hall (disambiguation)
Stern House, a reconstructed building in Jerusalem
USS Stern, a World War II destroyer escort
See also
Lehi (group), underground Zionist organization informally known as the Stern Gang or Group
Selim I, Sultan of the Ottoman Empire nicknamed Yavuz ("the Stern")
Sterns (disambiguation) | wiki |
The bow and arrow is a ranged weapon system consisting of an elastic launching device (bow) and long-shafted projectiles (arrows). Humans used bows and arrows for hunting and aggression long before recorded history, and the practice was common to many prehistoric cultures. They were important weapons of war from ancient history until the early modern period, where they were rendered increasingly obsolete by the development of the more powerful and accurate firearms. Today, bows and arrows are mostly used for hunting and sports.
Archery is the art, practice, or skill of using bows to shoot arrows. A person who shoots arrows with a bow is called a bowman or an archer. Someone who makes bows is known as a bowyer, someone who makes arrows is a fletcher, and someone who manufactures metal arrowheads is an arrowsmith.
Basic design and use
A bow consists of a semi-rigid but elastic arc with a high-tensile bowstring joining the ends of the two limbs of the bow. An arrow is a projectile with a pointed tip and a long shaft with stabilizer fins (fletching) towards the back, with a narrow notch (nock) at the very end to contact the bowstring.
To load an arrow for shooting (nocking an arrow), the archer places an arrow across the middle of the bow with the bowstring in the arrow's nock. To shoot, the archer holds the bow at its center with one hand and pulls back (draws) the arrow and the bowstring with the other (typically the dominant hand). This flexes the two limbs of the bow rearwards, which perform the function of a pair of cantilever springs to store elastic energy.
Typically while maintaining the draw, the archer aims the shot intuitively or by sighting along the arrow. Then archer releases (looses) the draw, allowing the limbs' stored energy to convert into kinetic energy transmitted via the bowstring to the arrow, propelling it to fly forward with high velocity.
A container or bag for additional arrows for quick reloading is called a quiver.
When not in use, bows are generally kept unstrung, meaning one or both ends of the bowstring are detached from the bow. This removes all residual tension on the bow and can help prevent it from losing strength or elasticity over time. Many bow designs also let it straighten out more completely, reducing the space needed to store the bow. Returning the bowstring to its ready-to-use position is called stringing the bow.
History
The oldest known evidence of the bow and arrow comes from South African sites such as Sibudu Cave, where likely arrowheads have been found, dating from approximately 72,000–60,000 years ago.
The earliest probable arrowheads found outside of Africa were discovered in 2020 in Fa Hien Cave, Sri Lanka. They have been dated to 48,000 years ago. "Bow-and-arrow hunting at the Sri Lankan site likely focused on monkeys and smaller animals, such as squirrels, Langley says. Remains of these creatures were found in the same sediment as the bone points."
Elsewhere in Eurasia, the bow and arrow seems to reappear around the Upper Paleolithic. After the end of the last glacial period, use of the bow seems to have spread to every inhabited region, except for Australasia and most of Oceania.
The earliest definite remains of bow and arrow from Europe are possible fragments from Germany found at Mannheim-Vogelstang dated 17,500–18,000 years ago, and at Stellmoor dated 11,000 years ago. Azilian points found in Grotte du Bichon, Switzerland, alongside the remains of both a bear and a hunter, with flint fragments found in the bear's third vertebra, suggest the use of arrows at 13,500 years ago.
At the site of Nataruk in Turkana County, Kenya, obsidian bladelets found embedded in a skull and within the thoracic cavity of another skeleton, suggest the use of stone-tipped arrows as weapons about 10,000 years ago.
The oldest extant bows in one piece are the elm Holmegaard bows from Denmark, which were dated to 9,000 BCE. Several bows from Holmegaard, Denmark, date to about 8,000 years ago. High-performance wooden bows are currently made following the Holmegaard design.
The Stellmoor bow fragments from northern Germany were dated to about 8,000 BCE, but they were destroyed in Hamburg during the Second World War, before carbon-14 dating was available; their age is attributed by archaeological association.
The bow was an important weapon for both hunting and warfare from prehistoric times until the widespread use of gunpowder weapons in the 16th century. It was also common in ancient warfare, although certain cultures would not favor it. The Greek poet Archilochus expressed scorn for fighting with bows and slings. Beginning with the reign of William the Conqueror, the longbow was England's principal weapon of war until the end of the Middle Ages. Genghis Khan and his Mongol hordes conquered much of the Eurasian steppe using short bows. Native Americans used archery to hunt and defend themselves during the days of English and later American colonization.
Organised warfare with bows ended in the early to mid-17th century in Western Europe, but it persisted into the 19th century in Eastern cultures, including hunting and warfare in the New World. In the Canadian Arctic, bows were made until the end of the 20th century for hunting caribou, for instance at Igloolik. The bow has more recently been used as a weapon of tribal warfare in some parts of Sub-Saharan Africa; an example was documented in 2009 in Kenya when Kisii people and Kalenjin people clashed, resulting in four deaths.
The British upper class led a revival of archery as a sport in the late 18th century. Sir Ashton Lever, an antiquarian and collector, formed the Toxophilite Society in London in 1781, under the patronage of George IV, then Prince of Wales.
Construction
Parts of the bow
The basic elements of a modern bow are a pair of curved elastic limbs, traditionally made from wood, joined by a riser. However self bows such as the English longbow are made of a single piece of wood comprising both limbs and the grip. The ends of each limb are connected by a string known as the bow string. By pulling the string backwards the archer exerts compression force on the string-facing section, or belly, of the limbs as well as placing the outer section, or back, under tension. While the string is held, this stores the energy later released in putting the arrow to flight. The force required to hold the string stationary at full draw is often used to express the power of a bow, and is known as its draw weight, or weight. Other things being equal, a higher draw weight means a more powerful bow, which is able to project heavier arrows at the same velocity or the same arrow at a greater velocity.
The various parts of the bow can be subdivided into further sections. The topmost limb is known as the upper limb, while the bottom limb is the lower limb. At the tip of each limb is a nock, which is used to attach the bowstring to the limbs. The riser is usually divided into the grip, which is held by the archer, as well as the arrow rest and the bow window. The arrow rest is a small ledge or extension above the grip which the arrow rests upon while being aimed. The bow window is that part of the riser above the grip, which contains the arrow rest.
In bows drawn and held by hand, the maximum draw weight is determined by the strength of the archer. The maximum distance the string can be displaced, and thus the longest arrow that can be loosed from it (a bow's draw length), is determined by the size of the archer.
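As a rough numerical illustration of how draw weight and draw length relate to stored energy and arrow speed, the sketch below treats the bow as an ideal linear spring, so the energy stored at full draw is roughly half the draw weight times the draw length. The draw weight, draw length, arrow mass, and efficiency figures are assumptions chosen purely for illustration, not measurements of any real bow.

```python
import math

# Illustrative assumptions only (not data for any particular bow).
draw_weight_n = 180.0    # force held at full draw, in newtons (~40 lbf)
draw_length_m = 0.55     # distance the string is drawn back, in metres
efficiency = 0.75        # fraction of stored energy transferred to the arrow
arrow_mass_kg = 0.025    # 25 g arrow

# Ideal linear-spring model: stored energy = 1/2 * draw weight * draw length.
stored_energy_j = 0.5 * draw_weight_n * draw_length_m

# Energy actually carried by the arrow, and the resulting launch speed.
arrow_energy_j = efficiency * stored_energy_j
arrow_speed_ms = math.sqrt(2.0 * arrow_energy_j / arrow_mass_kg)

print(f"Stored energy: {stored_energy_j:.1f} J")
print(f"Arrow energy:  {arrow_energy_j:.1f} J")
print(f"Launch speed:  {arrow_speed_ms:.1f} m/s")
```

Under this simplified model the arrow's energy is set by the bow, so a heavier arrow leaves the bow more slowly (its speed scales with the inverse square root of its mass) while carrying roughly the same energy, which matches the trade-off described above.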
A composite bow uses a combination of materials to create the limbs, allowing the use of materials specialized for the different functions of a bow limb. The classic composite bow uses wood for lightness and dimensional stability in the core, horn to store compression energy, and sinew for its ability to store energy in tension. Such bows, typically Asian, often use a stiff section at the end of each limb, which has the effect of a recurve. In this type of bow, this section is known by the Arabic name 'siyah'.
Modern construction materials for bows include laminated wood, fiberglass, metals, and carbon fiber components.
Arrows
An arrow usually consists of a shaft with an arrowhead attached to the front end, with fletchings and a nock at the other. Modern arrows are usually made from carbon fibre, aluminum, fiberglass, and wood shafts. Carbon shafts have the advantage that they do not bend or warp, but they can often be too lightweight to shoot from some bows and are expensive. Aluminum shafts are less expensive than carbon shafts, but they can bend and warp from use. Wood shafts are the least expensive option but often will not be identical in weight and size to each other and break more often than the other types of shafts. Arrow sizes vary greatly across cultures, ranging from very short arrows that require special equipment to be shot to the very long arrows used in the Amazon River jungles. Most modern arrows are in length.
Arrows come in many types, among which are breasted, bob-tailed, barreled, clout, and target. A breasted arrow is thickest at the area right behind the fletchings, and tapers towards the nock and the head. A bob-tailed arrow is thickest right behind the head, and tapers to the nock. A barrelled arrow is thickest in the centre of the arrow. Target arrows are those used for target shooting rather than warfare or hunting, and usually have simple arrowheads.
For safety reasons, a bow should never be shot without an arrow nocked; without an arrow, the energy that is normally transferred into the projectile is instead directed back into the bow itself, which will cause damage to the bow's limbs.
Arrowheads
The end of the arrow that is designed to hit the target is called the arrowhead. Usually, these are separate items that are attached to the arrow shaft by either tangs or sockets. Materials used in the past for arrowheads include flint, bone, horn, or metal. Most modern arrowheads are made of steel, but wood and other traditional materials are still used occasionally. A number of different types of arrowheads are known, with the most common being bodkins, broadheads, and piles. Bodkin heads are simple spikes made of metal of various shapes, designed to pierce armour. A broadhead arrowhead is usually triangular or leaf-shaped and has a sharpened edge or edges. Broadheads are commonly used for hunting. A pile arrowhead is a simple metal cone, either sharpened to a point or somewhat blunt, that is used mainly for target shooting. A pile head is the same diameter as the arrow shaft and is usually just fitted over the tip of the arrow. Other heads are known, including the blunt head, which is flat at the end and is used for hunting small game or birds, and is designed to not pierce the target nor embed itself in trees or other objects and make recovery difficult. Another type of arrowhead is a barbed head, usually used in warfare or hunting.
Bowstrings
Bowstrings may have a nocking point marked on them, which serves to mark where the arrow is fitted to the bowstring before shooting. The area around the nocking point is usually bound with thread to protect it from wear by the archer's hands. This section is called the serving. At one end of the bowstring a loop is formed, which is permanent. The other end of the bowstring also has a loop, but this is not permanently formed into the bowstring; instead it is constructed by tying a knot into the string to form a loop. Traditionally this knot is known as the archer's knot, but it is a form of the timber hitch. The knot can be adjusted to lengthen or shorten the bowstring. The adjustable loop is known as the "tail". The string is often twisted (this being called the "Flemish twist").
Bowstrings have been constructed of many materials throughout history, including fibres such as flax, silk, and hemp. Other materials used were animal guts, animal sinews, and rawhide. Modern fibres such as Dacron or Kevlar are now used in commercial bowstring construction, as well as steel wires in some compound bows. Compound bows have a mechanical system of pulley cams over which the bowstring is wound. Nylon is useful only in emergency situations, as it stretches too much.
Types of bow
There is no single accepted system of classification of bows. Bows may be described by various characteristics including the materials used, the length of the draw that they permit, the shape of the bow in sideways view, and the shape of the limb in cross-section.
Commonly-used descriptors for bows include:
By side profile
Recurve bow: a bow with the tips curving away from the archer. The curves straighten out as the bow is drawn and the return of the tip to its curved state after release of the arrow adds extra velocity to the arrow.
Reflex bow: a bow whose entire limbs curve away from the archer when unstrung. The curves are opposite to the direction in which the bow flexes while drawn.
By material
Self bow: a bow made from one piece of wood.
Composite bow: a bow made of more than one material.
By cross-section of limb
Longbow: a self bow with limbs rounded in cross-section, about the same height as the archer so as to allow a full draw, usually over long. The traditional English longbow was made of yew wood, but other woods are also used.
Flatbow: the limbs are approximately rectangular in cross-section. This was traditional in many Native American societies and was found to be the most efficient shape for bow limbs by American engineers in the 20th century.
Other characteristics
Takedown bow: a bow that can be disassembled for transportation, usually consisting of three parts: two limbs and a riser, in addition to the string.
Compound bow: a bow with mechanical amplifiers to aid with drawing the bowstring. Usually, these amplifiers are asymmetric pulleys called cams (though they are not actually cams) at the ends of the limbs, which provide a mechanical advantage (known as the let-off) while holding the bow in full draw. Such bows typically have high draw weights and are usually drawn with a release aid with a trigger mechanism for a consistently clean release.
Crossbow: a bow mounted horizontally on a frame similar to a firearm stock, which has a locking mechanism for holding the bowstring at full draw. Crossbows typically shoot arrow-like darts called bolts or "quarrels", rather than normal arrows.
Footbow: a bow drawn with the legs and arms while lying down; a footbow was used to set the current distance record for the furthest arrow shot.
See also
Sling (weapon)
Slingshot
Citations
References
Further reading
The Traditional Bowyers Bible Volume 1. 1992 The Lyons Press.
The Traditional Bowyers Bible Volume 2. 1992 The Lyons Press.
The Traditional Bowyers Bible Volume 3. 1994 The Lyons Press.
The Traditional Bowyers Bible Volume 4. 2008 The Lyons Press.
Gray, David, Bows of the World. The Lyons Press, 2002.
External links
The Asian Traditional Archery Research Network
Simon Archery Collection From The Manchester Museum, The University of Manchester
An Approach to the Study of Ancient Archery using Mathematical Modeling
Ancient weapons
Medieval weapons
Heraldic charges
Hunting equipment
Shinto religious objects | wiki |
The term mathematical theory may refer to:
Theory (mathematical logic), a collection of sentences in a formal language.
Mathematical theory, a branch of mathematics
See also
Theory | wiki |
The Bacillota (synonym Firmicutes) are a phylum of bacteria, most of which have gram-positive cell wall structure. The renaming of phyla such as Firmicutes in 2021 remains controversial among microbiologists, many of whom continue to use the earlier names of long standing in the literature.
The name "Firmicutes" was derived from the Latin words for "tough skin," referring to the thick cell wall typical of bacteria in this phylum. Scientists once classified the Firmicutes to include all gram-positive bacteria, but have recently defined them to be of a core group of related forms called the low-G+C group, in contrast to the Actinomycetota. They have round cells, called cocci (singular coccus), or rod-like forms (bacillus). A few Firmicutes, such as Megasphaera, Pectinatus, Selenomonas and Zymophilus, have a porous pseudo-outer membrane that causes them to stain gram-negative.
Many Bacillota (Firmicutes) produce endospores, which are resistant to desiccation and can survive extreme conditions. They are found in various environments, and the group includes some notable pathogens. Those in one family, the heliobacteria, produce energy through anoxygenic photosynthesis. Bacillota play an important role in beer, wine, and cider spoilage.
Classes
The group is typically divided into the Clostridia, which are anaerobic, and the Bacilli, which are obligate or facultative aerobes.
On phylogenetic trees, the first two groups show up as paraphyletic or polyphyletic, as do their main genera, Clostridium and Bacillus. However, Bacillota as a whole is generally believed to be monophyletic, or paraphyletic with the exclusion of Mollicutes.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
Genera
More than 274 genera have been considered to be within the Bacillota phylum; notable genera of Bacillota include:
Bacilli, order Bacillales
Bacillus
Listeria
Staphylococcus
Bacilli, order Lactobacillales
Enterococcus
Lactobacillus
Leuconostoc
Streptococcus
Clostridia
Clostridioides
Clostridium
Selenomonas
Erysipelotrichia
Erysipelothrix
Clinical significance
Bacillota make up ~30% of the mouse and human gut microbiome. The phylum Bacillota, as part of the gut microbiota, has been shown to be involved in energy resorption and is potentially related to the development of diabetes and obesity. Within the gut of healthy human adults, the most abundant bacterium, Faecalibacterium prausnitzii (F. prausnitzii), which makes up 5% of the total gut microbiome, is a member of the Bacillota phylum. This species is directly associated with reduced low-grade inflammation in obesity. F. prausnitzii has been found in higher levels within the guts of obese children than in non-obese children.
In multiple studies a higher abundance of Bacillota has been found in obese individuals than in lean controls. A higher level of Lactobacillus (of the Bacillota phylum) has been found in obese patients and in one study, obese patients put on weight loss diets showed a reduced amount of Bacillota within their guts.
Diet changes in mice have also been shown to promote changes in Bacillota abundance. A higher relative abundance of Bacillota was seen in mice fed a Western diet (high-fat/high-sugar) than in mice fed a standard low-fat/high-polysaccharide diet. The higher amount of Bacillota was also linked to greater adiposity and body weight in mice. Specifically, within obese mice, the class Mollicutes (within the Bacillota phylum) was the most common. When the microbiota of obese mice with this higher Bacillota abundance was transplanted into the guts of germ-free mice, the germ-free mice gained a significant amount of fat as compared to those transplanted with the microbiota of lean mice with lower Bacillota abundance.
The presence of Christensenella (Bacillota, in class Clostridia), isolated from human faeces, has been found to correlate with lower body mass index.
See also
List of bacteria genera
List of bacterial orders
References
External links
Phylum "Firmicutes" - J.P. Euzéby: List of Prokaryotic names with Standing in Nomenclature
Bacteria phyla | wiki |
Falcaria vulgaris, the sickleweed or longleaf, is the sole species in the genus Falcaria. It is a biennial herb with a roughly spherical form that blossoms in June–July. It grows in Europe, Siberia, the Middle East, Northern Africa, and North and South America, and contains alkaloids, carotene, vitamin C, and proteins.
Its use as an alternative medicine may offer several advantages, especially in the treatment of stomach and skin ulcers, diabetes, infections, and liver and kidney disorders.
References
External links
Apioideae | wiki |
The Lonesome Trail may refer to:
The Lonesome Trail (1930 film), 1930 American western film directed by Bruce Mitchell
The Lonesome Trail (1945 film), 1945 American western film directed by Oliver Drake
The Lonesome Trail (1955 film), 1955 American western film directed by Richard Bartlett | wiki |
Rainbows is an EP by Raphael Gualazzi, released on 29 October 2013.
Track listing | wiki
A blastoderm (germinal disc, blastodisc) is a single layer of embryonic epithelial tissue that makes up the blastula. It encloses the fluid-filled blastocoel. Gastrulation follows blastoderm formation, during which the tips of the blastoderm begin the formation of the ectoderm, mesoderm, and endoderm.
Formation
The blastoderm is formed when the oocyte plasma membrane begins cleaving by invagination, creating multiple cells that arrange themselves into an outer sleeve around the blastocoel.
In oviparous animals
In chicken eggs, the blastoderm appears as a flat disc after fertilization. The edge of the blastoderm is the site of active migration by most cells.
See also
Blastodisc
Embryology
Cleavage
Gastrulation
References
Campbell Reece, Biology 7th edition, Pearson Publishing, 2005
Embryology | wiki |
Family patrimony is a type of civil law patrimony that is created by marriage or civil union (where recognized) which creates a bundle of entitlements and obligations that must be shared by the spouses or partners upon divorce, annulment, dissolution of marriage or dissolution of civil union, when there must be a division of property. It is similar to the common law concept of community property.
Civil law (legal system) | wiki |
International human rights law (IHRL) is the body of international law designed to promote human rights on social, regional, and domestic levels. As a form of international law, international human rights law is primarily made up of treaties, agreements between sovereign states intended to have binding legal effect between the parties that have agreed to them; and customary international law. Other international human rights instruments, while not legally binding, contribute to the implementation, understanding and development of international human rights law and have been recognized as a source of political obligation.
International human rights law, which governs the conduct of a state towards its people in peacetime, is traditionally seen as distinct from international humanitarian law, which governs the conduct of a state during armed conflict, although the two branches of law are complementary and in some ways overlap.
A more systemic perspective explains that international humanitarian law represents a function of international human rights law; it includes general norms that apply to everyone at all times as well as specialized norms which apply to certain situations, such as armed conflict and military occupation (i.e. IHL), or to certain groups of people, including refugees (e.g. the 1951 Refugee Convention), children (the Convention on the Rights of the Child), and prisoners of war (the 1949 Third Geneva Convention).
United Nations system
The General Assembly of the United Nations adopted the Vienna Declaration and Programme of Action in 1993, in terms of which the United Nations High Commissioner for Human Rights was established.
In 2006, the United Nations Commission on Human Rights was replaced with the United Nations Human Rights Council for the enforcement of international human rights law. The changes brought a more structured organization along with a requirement to review human rights cases every four years. The United Nations Sustainable Development Goal 10 also targets the promotion of legislation and policies towards reducing inequality.
International Bill of Human Rights
Universal Declaration of Human Rights
The Universal Declaration of Human Rights (UDHR) is a UN General Assembly declaration that does not in form create binding international human rights law. Many legal scholars cite the UDHR as evidence of customary international law.
More broadly, the UDHR has become an authoritative human rights reference. It has provided the basis for subsequent international human rights instruments that form non-binding, but ultimately authoritative international human rights law.
International human rights treaties
Besides the adoption in 1966 of the two wide-ranging Covenants that form part of the International Bill of Human Rights (namely the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights), other treaties have been adopted at the international level. These are generally known as human rights instruments. Some of the most significant include the following:
the Convention on the Prevention and Punishment of the Crime of Genocide (CPCG) (adopted 1948 and entered into force in 1951);
the Convention Relating to the Status of Refugees (CSR) (adopted in 1951 and entered into force in 1954);
the Convention on the Elimination of All Forms of Racial Discrimination (CERD) (adopted in 1965 and entered into force in 1969);
the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) (entered into force in 1981);
the United Nations Convention Against Torture (CAT) (adopted in 1984 and entered into force in 1987);
the Convention on the Rights of the Child (CRC) (adopted in 1989 and entered into force in 1990);
the International Convention on the Protection of the Rights of All Migrant Workers and Members of their Families (ICRMW) (adopted in 1990 and entered into force in 2003);
the Convention on the Rights of Persons with Disabilities (CRPD) (entered into force on 3 May 2008); and
the International Convention for the Protection of All Persons from Enforced Disappearance (ICPPED) (adopted in 2006 and entered into force in 2010).
Regional protection and institutions
Regional systems of international human rights law supplement and complement national and international human rights law by protecting and promoting human rights in specific areas of the world. There are three key regional human rights instruments which have established human rights law on a regional basis:
the African Charter on Human and Peoples' Rights for Africa of 1981, in force since 1986;
the American Convention on Human Rights for the Americas of 1969, in force since 1978; and
the European Convention on Human Rights for Europe of 1950, in force since 1953.
Americas and Europe
The Organisation of American States and the Council of Europe, like the UN, have adopted treaties (albeit with weaker implementation mechanisms) containing catalogues of economic, social and cultural rights, in addition to the aforementioned conventions dealing mostly with civil and political rights:
the European Social Charter for Europe of 1961, in force since 1965 (whose complaints mechanism, created in 1995 under an Additional Protocol, has been in force since 1998); and
the Protocol of San Salvador to the ACHR for the Americas of 1988, in force since 1999.
Africa
The African Union (AU) is a supranational union consisting of 55 African countries. Established in 2001, the AU's purpose is to help secure Africa's democracy, human rights, and a sustainable economy, in particular by bringing an end to intra-African conflict and creating an effective and productive common market.
The African Charter on Human and Peoples' Rights is the region's principal human rights instrument, which emerged under the aegis of the Organisation of African Unity (OAU) (since replaced by the African Union). The intention to draw up the African Charter on Human and Peoples' Rights was announced in 1979. The Charter was unanimously approved at the OAU's 1981 Assembly.
Pursuant to Article 63 (whereby it was to "come into force three months after the reception by the Secretary General of the instruments of ratification or adherence of a simple majority" of the OAU's member states), the African Charter on Human and Peoples' Rights came into effect on 21 October 1986, in honour of which 21 October was declared African Human Rights Day.
The African Commission on Human and Peoples' Rights (ACHPR) is a quasi-judicial organ of the African Union, tasked with promoting and protecting human rights and collective (peoples') rights throughout the African continent, as well as with interpreting the African Charter on Human and Peoples' Rights, and considering individual complaints of violations of the Charter. The commission has three broad areas of responsibility:
promoting human and peoples' rights;
protecting human and peoples' rights; and
interpreting the African Charter on Human and Peoples' Rights.
In pursuit of these goals, the commission is mandated to "collect documents, undertake studies and researches on African problems in the field of human and peoples' rights, organise seminars, symposia and conferences, disseminate information, encourage national and local institutions concerned with human and peoples' rights and, should the case arise, give its views or make recommendations to governments."
With the creation of the African Court on Human and Peoples' Rights (under a protocol to the Charter which was adopted in 1998 and entered into force in January 2004), the commission will have the additional task of preparing cases for submission to the Court's jurisdiction. In a July 2004 decision, the AU Assembly resolved that the future Court on Human and Peoples' Rights would be integrated with the African Court of Justice.
The Court of Justice of the African Union is intended to be the "principal judicial organ of the Union". Although it has not yet been established, it is intended to take over the duties of the African Commission on Human and Peoples' Rights, as well as to act as the supreme court of the African Union, interpreting all necessary laws and treaties. The Protocol establishing the African Court on Human and Peoples' Rights entered into force in January 2004, but its merging with the Court of Justice has delayed its establishment. The Protocol establishing the Court of Justice will come into force when ratified by fifteen countries.
There are many countries in Africa accused of human rights violations by the international community and NGOs.
Inter-American system
The Organization of American States (OAS) is an international organization headquartered in Washington, DC. Its members are the thirty-five independent nation-states of the Americas.
Over the course of the 1990s, with the end of the Cold War, the return to democracy in Latin America, and the thrust toward globalisation, the OAS made major efforts to reinvent itself to fit the new context. Its stated priorities now include the following:
strengthening democracy;
working for peace;
protecting human rights;
combating corruption;
the rights of indigenous peoples; and
promoting sustainable development.
The Inter-American Commission on Human Rights (IACHR) is an autonomous organ of the Organization of American States, also based in Washington, D.C. Along with the Inter-American Court of Human Rights, based in San José, Costa Rica, it is one of the bodies that comprise the inter-American system for the promotion and protection of human rights. The IACHR is a permanent body which meets in regular and special sessions several times a year to examine allegations of human rights violations in the hemisphere. Its human rights duties stem from three documents:
the OAS Charter;
the American Declaration of the Rights and Duties of Man; and
the American Convention on Human Rights.
The Inter-American Court of Human Rights was established in 1979 with the purpose of enforcing and interpreting the provisions of the American Convention on Human Rights. Its two main functions are therefore adjudicatory and advisory:
Under the former, it hears and rules on the specific cases of human rights violations referred to it.
Under the latter, it issues opinions on matters of legal interpretation brought to its attention by other OAS bodies or member states.
Many countries in the Americas, including Colombia, Cuba, Mexico and Venezuela, have been accused of human rights violations.
European system
The Council of Europe, founded in 1949, is the oldest organisation working for European integration. It is an international organisation with legal personality recognised under public international law, and has observer status at the United Nations. The seat of the council is in Strasbourg in France.
The Council of Europe is responsible for both the European Convention on Human Rights and the European Court of Human Rights. These institutions bind the council's members to a code of human rights which, although strict, is more lenient than that of the UN Charter on human rights.
The council also promotes the European Charter for Regional or Minority Languages and the European Social Charter. Membership is open to all European states which seek European integration, accept the principle of the rule of law, and are able and willing to guarantee democracy, fundamental human rights and freedoms.
The Council of Europe is separate from the European Union, but the latter is expected to accede to the European Convention on Human Rights. The Council includes all the member states of the European Union. The EU also has a separate human rights document, the Charter of Fundamental Rights of the European Union.
The European Convention on Human Rights has since 1950 defined and guaranteed human rights and fundamental freedoms in Europe. All 47 member states of the Council of Europe have signed this convention, and are therefore under the jurisdiction of the European Court of Human Rights in Strasbourg. In order to prevent torture and inhuman or degrading treatment, the Committee for the Prevention of Torture was established.
The Council of Europe also adopted the Convention on Action against Trafficking in Human Beings in May 2005, for protection against human trafficking and sexual exploitation, the Council of Europe Convention on the Protection of Children against Sexual Exploitation and Sexual Abuse in October 2007, and the Convention on preventing and combating violence against women and domestic violence in May 2011.
The European Court of Human Rights is the only international court with jurisdiction to deal with cases brought by individuals rather than states. In early 2010, the court had a backlog of over 120,000 cases and a multi-year waiting list. About one out of every twenty cases submitted to the court is considered admissible. In 2007, the court issued 1,503 verdicts. At the current rate of proceedings, it would take 46 years for the backlog to clear.
Monitoring, implementation and enforcement
There is currently no international court to administer international human rights law, but quasi-judicial bodies exist under some UN treaties (like the Human Rights Committee under the ICCPR). The International Criminal Court (ICC) has jurisdiction over the crime of genocide, war crimes and crimes against humanity. The European Court of Human Rights and the Inter-American Court of Human Rights enforce regional human rights law.
Although these same international bodies also hold jurisdiction over cases regarding international humanitarian law, it is crucial to recognise, as discussed above, that the two frameworks constitute different legal regimes.
The United Nations human rights bodies do have some quasi-legal enforcement mechanisms. These include the treaty bodies attached to the seven currently active treaties, and the United Nations Human Rights Council complaints procedures, with Universal Periodic Review and United Nations Special Rapporteur (known as the 1235 and 1503 mechanisms respectively).
The enforcement of international human rights law is the responsibility of the nation state; it is the primary responsibility of the State to make the human rights of its citizens a reality.
In practice, many human rights are difficult to enforce legally, due to the absence of consensus on the application of certain rights, the lack of relevant national legislation or of bodies empowered to take legal action to enforce them.
In over 110 countries, national human rights institutions (NHRIs) have been set up to protect, promote or monitor human rights with jurisdiction in a given country. Although not all NHRIs are compliant with the Paris Principles, the number and effect of these institutions is increasing.
The Paris Principles were defined at the first International Workshop on National Institutions for the Promotion and Protection of Human Rights in Paris from 7 to 9 October 1991, and adopted by UN Human Rights Commission Resolution 1992/54 of 1992 and General Assembly Resolution 48/134 of 1993. The Paris Principles list a number of responsibilities for NHRIs.
Universal jurisdiction
Universal jurisdiction is a controversial principle in international law, whereby states claim criminal jurisdiction over people whose alleged crimes were committed outside the boundaries of the prosecuting state, regardless of nationality, country of residence or any other relationship to the prosecuting country. The state backs its claim on the grounds that the crime committed is considered a crime against all, which any state is authorized to punish. The concept of universal jurisdiction is therefore closely linked to the idea that certain international norms are erga omnes, or owed to the entire world community, as well as the concept of jus cogens.
In 1993, Belgium passed a "law of universal jurisdiction" to give its courts jurisdiction over crimes against humanity in other countries. In 1998, Augusto Pinochet was arrested in London following an indictment by Spanish judge Baltasar Garzón under the universal-jurisdiction principle. Adolf Eichmann, a former Nazi SS lieutenant colonel accused of overseeing the transfer of Jews to Holocaust death camps, was also prosecuted in Israel in 1961; he had been living in Argentina after the war.
The principle is supported by Amnesty International and other human rights organisations, which believe that certain crimes pose a threat to the international community as a whole, and that the community has a moral duty to act.
Others, like Henry Kissinger, argue that "widespread agreement that human rights violations and crimes against humanity must be prosecuted has hindered active consideration of the proper role of international courts. Universal jurisdiction risks creating universal tyranny—that of judges".
See also
References
External links
UNHCHR
Official United Nations website
Official UN website on International Law
Official website of the International Court of Justice
International Justice Resource Center – comprehensive online resources and news
Human rights instruments
Human rights
human rights law
Cultural globalization | wiki |
Geographical exploration, sometimes considered the default meaning for the more general term exploration, refers to the practice of discovering remote lands and regions of the planet Earth. It is studied by geographers and historians.
Two major eras of exploration occurred in human history: one of divergence, and one of convergence. The first, covering most of Homo sapiens history, saw humans moving out of Africa, settling in new lands, and developing distinct cultures in relative isolation. Early explorers settled in Europe and Asia; 14,000 years ago, some crossed the Ice Age land bridge from Siberia to Alaska, and moved south to settle in the Americas. For the most part, these cultures were ignorant of each other's existence. The second period of exploration, occurring over the last 10,000 years, saw increased cross-cultural exchange through trade and exploration, and marked a new era of cultural intermingling, and more recently, convergence.
Early writings about exploration date back to the 4th millennium B.C. in ancient Egypt. One of the earliest and most impactful thinkers of exploration was Ptolemy in the 2nd century AD. Between the 5th century and 15th century AD, most exploration was done by Chinese and Arab explorers. This was followed by the Age of Discovery after European scholars rediscovered the works of early Latin and Greek geographers. While the Age of Discovery was partly driven by European land routes becoming unsafe, and a desire for conquest, the 17th century saw exploration driven by nobler motives, including scientific discovery and the expansion of knowledge about the world. This broader knowledge of the world's geography meant that people were able to make world maps, depicting all land known. The first modern atlas was the Theatrum Orbis Terrarum, published by Abraham Ortelius, which included a world map that depicted all of Earth's continents.
Concept
Exploration is the process of exploring, which has been defined as:
To examine or investigate something systematically.
To travel somewhere in search of discovery.
To examine diagnostically.
To (seek) experience first hand.
To wander without any particular aim or purpose.
Notable historical periods of human exploration
Phoenician galley sailings
The Phoenicians (1550 BCE–300 BCE) traded throughout the Mediterranean Sea and Asia Minor though many of their routes are still unknown today. The presence of tin in some Phoenician artifacts suggests that they may have traveled to Britain. According to Virgil's Aeneid and other ancient sources, the legendary Queen Dido was a Phoenician from Tyre who sailed to North Africa and founded the city of Carthage.
Carthaginian exploration of Western Africa
Hanno the Navigator (500 BC), a Carthaginian navigator who explored the western coast of Africa.
Greek & Roman exploration of Northern Europe and Thule
Pytheas (4th century BC), a Greek explorer from Massalia (Marseille), was the first to circumnavigate Great Britain, explore Germany, and reach Thule (most commonly thought to be the Shetland Islands or Iceland).
Under Augustus, Romans reached and explored all the Baltic Sea.
Roman explorations
Africa exploration
The Romans organized expeditions to cross the Sahara desert along five different routes:
through the western Sahara, toward the Niger River and Timbuktu.
through the Tibesti Mountains, toward Lake Chad and Nigeria.
through the Nile river, toward Uganda.
through the western coast of Africa, toward the Canary Islands and the Cape Verde islands.
through the Red Sea, toward Somalia and perhaps Tanzania.
All these expeditions were supported by legionaries and had mainly a commercial purpose. Only the one done by emperor Nero seemed to be a preparative for the conquest of Ethiopia or Nubia: in 62 AD two legionaries explored the sources of the Nile.
One of the main reasons for the explorations was to obtain gold, using camels to transport it.
The explorations near the western and eastern African coasts were supported by Roman ships and closely tied to naval commerce (mainly toward the Indian Ocean).
The Romans also organized several explorations into Northern Europe, and explored as far as China in Asia.
30 BC – 640 AD With the acquisition of Ptolemaic Egypt, the Romans begin trading with India. The Romans now have a direct connection to the spice trade, which the Egyptians had established beginning in 118 BC.
100–166 AD Sino-Roman relations begin. Ptolemy writes of the Golden Chersonese (i.e. Malay Peninsula) and the trade port of Kattigara, now identified as Óc Eo in northern Vietnam, then part of Jiaozhou, a province of the Chinese Han Empire. The Chinese historical texts describe Roman embassies, from a land they called Daqin.
2nd century Roman traders reach Siam, Cambodia, Sumatra, and Java.
161 An embassy from Roman Emperor Antoninus Pius or his successor Marcus Aurelius reaches Chinese Emperor Huan of Han at Luoyang.
226 A Roman diplomat or merchant lands in northern Vietnam and visits Nanjing, China and the court of Sun Quan, ruler of Eastern Wu.
Chinese exploration of Central Asia
During the 2nd century BC, the Han dynasty explored much of the Eastern Northern Hemisphere. Starting in 139 BC, the Han diplomat Zhang Qian traveled west in an unsuccessful attempt to secure an alliance with the Da Yuezhi against the Xiongnu (the Yuezhi had been evicted from Gansu by the Xiongnu in 177 BC); however, Zhang's travels discovered entire countries which the Chinese were unaware of, including the remnants of the conquests of Alexander the Great (r. 336–323 BC). When Zhang returned to China in 125 BC, he reported on his visits to Dayuan (Fergana), Kangju (Sogdia), and Daxia (Bactria, formerly the Greco-Bactrian Kingdom which had just been subjugated by the Da Yuezhi). Zhang described Dayuan and Daxia as agricultural and urban countries like China, and although he did not venture there, described Shendu (the Indus River valley of Northwestern India) and Anxi (Parthian territories) further west.
Viking Age
From about 800 AD to 1040 AD, the Vikings explored Iceland and much of the Western Northern Hemisphere via rivers and oceans. For example, it is known that the Norwegian Viking explorer, Erik the Red (950–1003), sailed to and settled in Greenland after being expelled from Iceland, while his son, the Icelandic explorer Leif Erikson (980–1020), reached Newfoundland and the nearby North American coast, and is believed to be the first European to land in North America.
Polynesian Age
Polynesians were a maritime people, who populated and explored the central and south Pacific for around 5,000 years, up to about 1280 when they discovered New Zealand. The key invention to their exploration was the outrigger canoe, which provided a swift and stable platform for carrying goods and people. Based on limited evidence, it is thought that the voyage to New Zealand was deliberate. It is unknown if one or more boats went to New Zealand, or the type of boat, or the names of those who migrated. 2011 studies at Wairau Bar in New Zealand show a high probability that one origin was Ruahine Island in the Society Islands. Polynesians may have used the prevailing north easterly trade winds to reach New Zealand in about three weeks. The Cook Islands are in direct line along the migration path and may have been an intermediate stopping point. There are cultural and language similarities between Cook Islanders and New Zealand Māori. Early Māori had different legends of their origins, but the stories were misunderstood and reinterpreted in confused written accounts by early European historians in New Zealand trying to present a coherent pattern of Māori settlement in New Zealand.
Mathematical modelling based on DNA genome studies, using state of the art techniques, have shown that a large number of Polynesian migrants (100–200), including women, arrived in New Zealand around the same time, in about 1280. Otago University studies have tried to link distinctive DNA teeth patterns, which show special dietary influence, with places in or nearby the Society Islands.
Chinese exploration of the Indian Ocean
The Chinese explorer, Wang Dayuan (fl. 1311–1350) made two major trips by ship to the Indian Ocean. During 1328–1333, he sailed along the South China Sea and visited many places in Southeast Asia and reached as far as South Asia, landing in Sri Lanka and India, and he even went to Australia. Then in 1334–1339, he visited North Africa and East Africa. Later, the Chinese admiral Zheng He (1371–1433) made seven voyages to Arabia, East Africa, India, Indonesia and Thailand.
European Age of Discovery
The Age of Discovery, also known as the Age of Exploration, is one of the most important periods of geographical exploration in human history. It started in the early 15th century and lasted until the 17th century. In that period, Europeans discovered and/or explored vast areas of the Americas, Africa, Asia, and Oceania. Portugal and Spain dominated the first stages of exploration, while other European nations followed, such as England, France, and the Netherlands.
Important explorations during this period went to a number of continents and regions around the globe. In Africa, important explorers of this period include Diogo Cão (1452–1486), who discovered and ascended the Congo River and reached the coasts of present-day Angola and Namibia; and Bartolomeu Dias (1450–1500), the first European to reach the Cape of Good Hope and other parts of the South African coast.
Explorers of routes from Europe towards Asia, the Indian Ocean, and the Pacific Ocean, include Vasco da Gama (1460–1524), a navigator who made the first trip from Europe to India and back by the Cape of Good Hope, discovering the ocean route to the East; Pedro Álvares Cabral (c. 1467/1468 – c. 1520), who, following the path of Vasco da Gama, claimed Brazil and led the first expedition that linked Europe, Africa, America, and Asia; Diogo Dias, who discovered the eastern coast of Madagascar and rounded the corner of Africa; explorers such as Diogo Fernandes Pereira and Pedro Mascarenhas (1470–1555), among others, who discovered and mapped the Mascarene Islands and other archipelagos.
António de Abreu (1480–1514) and Francisco Serrão (14??–1521) led the first direct European fleet into the Pacific Ocean (on its western edges) and through the Sunda Islands, reaching the Moluccas. Andrés de Urdaneta (1498–1568) discovered the maritime route from Asia to the Americas.
In the Pacific Ocean, Jorge de Menezes (1498–1537) reached New Guinea while García Jofre de Loaísa (1490–1526) reached the Marshall Islands.
Discovery of America
Explorations of the Americas began with the initial discovery of America by Christopher Columbus (1451–1506), who led a Castilian (Spanish) expedition across the Atlantic, discovering America. After the discovery of America by Columbus, a number of important expeditions were sent out to explore the Western Hemisphere. This included Juan Ponce de León (1474–1521), who discovered and mapped the coast of Florida; Vasco Núñez de Balboa (c. 1475–1519), who was the first European to view the Pacific Ocean from American shores (after crossing the Isthmus of Panama) confirming that America was a separate continent from Asia; Aleixo Garcia (14?–1527), who explored the territories of present-day southern Brazil, Paraguay and Bolivia, crossing the Chaco and reaching the Andes (near Sucre).
Álvar Núñez Cabeza de Vaca (1490–1558) discovered the Mississippi River and was the first European to sail the Gulf of Mexico and cross Texas. Jacques Cartier (1491–1557) drew the first maps of part of central and maritime Canada; Francisco Vázquez de Coronado (1510–1554) discovered the Grand Canyon and the Colorado River; Francisco de Orellana (1511–1546) was the first European to navigate the length of the Amazon River.
Further explorations
Ferdinand Magellan (1480–1521), was the first navigator to cross the Pacific Ocean, discovering the Strait of Magellan, the Tuamotus and Mariana Islands, and achieving a nearly complete circumnavigation of the Earth, in multiple voyages, for the first time. Juan Sebastián Elcano (1476–1526), completed the first global circumnavigation.
In the second half of the 16th century and the 17th century exploration of Asia and the Pacific Ocean continued with explorers such as Andrés de Urdaneta (1498–1568), who discovered the maritime route from Asia to the Americas; Pedro Fernandes de Queirós (1565–1614), who discovered the Pitcairn Islands and the Vanuatu archipelago; Álvaro de Mendaña de Neira (1542–1595), who discovered the Tuvalu archipelago, the Marquesas, the Solomons, and Wake Island.
Explorers of Australia included Willem Janszoon (1570–1630), who made the first recorded European landing in Australia; Yñigo Ortiz de Retez, who discovered and reached eastern and northern New Guinea; Luis Váez de Torres (1565–1613), who discovered the Torres Strait between Australia and New Guinea; Abel Tasman (1603–1659), who explored North Australia, discovered Tasmania, New Zealand, and Tongatapu.
In North America, major explorers included Henry Hudson (1565–1611), who explored the Hudson Bay in Canada; Samuel de Champlain (1574–1635), who explored St. Lawrence River and the Great Lakes (in Canada and northern United States); and René-Robert Cavelier, Sieur de La Salle (1643–1687), who explored the Great Lakes region of the United States and Canada, and the entire length of the Mississippi River.
Late modern period
Long after the Age of Discovery, other explorers completed the world map, such as various Russians explorers, reaching the Siberian Pacific coast and the Bering Strait, at the extreme edge of Asia and Alaska (North America); Vitus Bering (1681–1741) who in the service of the Russian Navy, explored the Bering Strait, the Bering Sea, the North American coast of Alaska, and some other northern areas of the Pacific Ocean; and James Cook, who explored the east coast of Australia, the Hawaiian Islands, and circumnavigated Antarctica.
There were still significant explorations which occurred well into the late modern period. This includes the Lewis and Clark Expedition (1804–1806), an overland expedition dispatched by President Thomas Jefferson to explore the newly acquired Louisiana Purchase and to find an interior aquatic route to the Pacific Ocean, along with other objectives to examine the flora and fauna of the continent. In 1818, the British researcher Sir John Ross was the first to find that the deep sea is inhabited by life when catching jellyfish and worms in about depth with a special device. The United States Exploring Expedition (1838–1842) was an expedition sent by President Andrew Jackson, in order to survey the Pacific Ocean and surrounding lands.
The extreme conditions in the deep sea require elaborate methods and technologies to endure them. In the 20th century, deep-sea exploration advanced considerably through a series of technological inventions, ranging from the sonar system, which can detect the presence of solid objects underwater through the use of reflected sound, to manned deep-diving submersibles. In 1960, Jacques Piccard and United States Navy Lieutenant Donald Walsh descended in the bathyscaphe Trieste into the deepest part of the world's oceans, the Mariana Trench. In 2018, a crewed submersible piloted by Victor Vescovo completed the first mission to the deepest point of the Atlantic Ocean, diving below the ocean surface to the base of the Puerto Rico Trench. With the advent of satellite imagery and aviation, broad-scale exploration of the surface of Earth has largely ceased; however, the cultures of many isolated tribes remain undocumented and unexplored, and the details of more inaccessible ecosystems remain undescribed. Urban exploration is the exploration of manmade structures, usually abandoned ruins or hidden components of the manmade environment.
Space age
Space exploration started in the 20th century with the invention of exo-atmospheric rockets. This has given humans the opportunity to travel to the Moon, and to send robotic explorers to other planets and far beyond.
Both of the Voyager probes have left the Solar System, each bearing a gold-plated record imprinted with multiple types of data, including images and sounds from Earth.
Underwater exploration
Objectives
The scope of underwater exploration includes the distribution and variety of marine and aquatic life, measurement of the geographical distribution of the chemical and physical properties, including movement of the water, and the geophysical, geological and topographical features of the Earth's crust where it is covered by water.
Systematic, targeted exploration is the most effective method to increase understanding of the ocean and other underwater regions, so they can be effectively managed, conserved, regulated, and their resources discovered, accessed, and used. The ocean covers approximately 70% of Earth's surface and has a critical role in supporting life on the planet, but knowledge and understanding of the ocean remain limited due to the difficulty and cost of access.
The distinction between exploration, survey, and other research is somewhat blurred, and one way of looking at it is to consider the baseline surveys and research as exploration, as previously unknown information is gathered. Updating and refining the data is less exploratory in nature, but may still be exploration for the people involved, in the sense that the experience is new to them.
Status
According to NOAA, as of January 2023: "More than eighty percent of our ocean is unmapped, unobserved, and unexplored." Less than 10% of the ocean, including about 35% of the ocean and coastal waters of the United States, has been mapped in any detail using sonar technology. According to GEBCO 2019 data, less than 18% of the deep ocean bed had been mapped using direct measurement, and about 50% of coastal waters had not yet been surveyed.
Most of the data used to create global seabed maps are approximate depths derived from satellite gravity measurements and sea surface heights which are affected by the shape and mass distribution of the seabed. This method of approximation only provides low resolution information on large topographical features, and can miss significant features.
History
See also
References
Citations
Sources
Further reading
External links
National Geographic Explorer Program
NOAA Ocean Explorer – provides public access to current information on a series of NOAA scientific and educational explorations and activities in the marine environment
NOAA Office of Ocean Exploration and Research – formed by the merger of NOAA's Undersea Research Program (NURP) and the Office of Ocean Exploration (OE)
Historical eras
Adventure
World history | wiki |
In mechanics, a moment is a measure of the turning effect of a force about some point in space. In most practical examples, moments are the results of forces acting at a distance from the point of interest. The stress on the body on which the force acts is then symmetric. If the moment results from a strong body force, such as that produced in a magnetic material in a strong magnetic field, the stress tensor is non-symmetric.
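In vector notation, the moment of a force about a point is conventionally written as the cross product of the position vector from that point to the force's point of application and the force itself:

\mathbf{M} = \mathbf{r} \times \mathbf{F}, \qquad |\mathbf{M}| = |\mathbf{r}|\,|\mathbf{F}|\sin\theta

where θ is the angle between r and F. Only the component of the force perpendicular to the lever arm contributes to the turning effect, so a force whose line of action passes through the reference point produces no moment about it.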
References
Fung, Y.C., Foundations of Solid Mechanics, Prentice-Hall (1965)
Mechanics | wiki |
Video black level is defined as the level of brightness at the darkest (black) part of a visual image or the level of brightness at which no light is emitted from a screen, resulting in a pure black screen.
Video displays generally need to be calibrated so that the displayed black is true to the black information in the video signal. If the black level is not correctly adjusted, dark picture information in the video signal could be crushed and displayed as black, or black information could be displayed above black (as gray).
The voltage of the black level varies across different television standards. PAL sets the black level the same as the blanking level, while NTSC sets the black level approximately 54 mV above the blanking level.
User misadjustment of black level on monitors is common. It changes the hue of darker colors, affects contrast, and in many cases causes some of the image detail to be lost.
Black level is set by displaying a testcard image and adjusting display controls. With CRT displays:
"brightness" adjusts black level
"contrast" adjusts white level
CRTs tend to have some interdependence of controls, so a control sometimes needs adjustment more than once.
In digital video, black level usually refers to the range of RGB values in the video signal, which can be either [0..255] ("normal" or full range, typical of computer output) or [16..235] ("low" or limited range, standard for video).
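As a small illustrative sketch (not taken from any particular video standard's reference code, and with hypothetical function names), converting a level between the two conventions is a simple linear rescaling:

```python
def full_to_video_range(value: int) -> int:
    """Map a full-range level (0..255) to the 'low'/video range (16..235)."""
    value = max(0, min(255, value))          # clamp to the full range
    return 16 + round(value * 219 / 255)     # 219 = 235 - 16 usable steps

def video_to_full_range(value: int) -> int:
    """Map a video-range level (16..235) back to the full range (0..255)."""
    value = max(16, min(235, value))         # clamp to the video range
    return round((value - 16) * 255 / 219)

# Black and white points in each convention.
print(full_to_video_range(0), full_to_video_range(255))    # 16 235
print(video_to_full_range(16), video_to_full_range(235))   # 0 255
```

Feeding a full-range signal to a display that expects the [16..235] convention, or vice versa, is a common cause of crushed blacks or washed-out, grayish blacks, which is one reason correct black-level setup matters.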
See also
Picture line-up generation equipment (PLUGE)
Display technology
Television technology | wiki |
An intracranial aneurysm, also known as a brain aneurysm, is a cerebrovascular disorder in which weakness in the wall of a cerebral artery or vein causes a localized dilation or ballooning of the blood vessel.
Aneurysms in the posterior circulation (basilar artery, vertebral arteries and posterior communicating artery) have a higher risk of rupture. Basilar artery aneurysms represent only 3–5% of all intracranial aneurysms but are the most common aneurysms in the posterior circulation.
Classification
Cerebral aneurysms are classified both by size and shape. Small aneurysms have a diameter of less than 15 mm. Larger aneurysms include those classified as large (15 to 25 mm), giant (25 to 50 mm), and super-giant (over 50 mm).
Berry (saccular) aneurysms
Saccular aneurysms, also known as berry aneurysms, appear as a round outpouching and are the most common form of cerebral aneurysm. Causes include connective tissue disorders, polycystic kidney disease, arteriovenous malformations, untreated hypertension, tobacco smoking, cocaine and amphetamine use, intravenous drug abuse (which can cause infectious mycotic aneurysms), alcoholism, heavy caffeine intake, head trauma, and infection in the arterial wall from bacteremia (mycotic aneurysms).
Fusiform aneurysms
Fusiform dolichoectatic aneurysms represent a widening of a segment of an artery around the entire blood vessel, rather than just arising from a side of an artery's wall. They have an estimated annual risk of rupture between 1.6 and 1.9 percent.
Microaneurysms
Microaneurysms, also known as Charcot–Bouchard aneurysms, typically occur in small blood vessels (less than 300 micrometres in diameter), most often the lenticulostriate vessels of the basal ganglia, and are associated with chronic hypertension. Charcot–Bouchard aneurysms are a common cause of intracranial hemorrhage.
Signs and symptoms
A small, unchanging aneurysm will produce few, if any, symptoms. Before a larger aneurysm ruptures, the individual may experience such symptoms as a sudden and unusually severe headache, nausea, vision impairment, vomiting, and loss of consciousness, or no symptoms at all.
Subarachnoid bleed
If an aneurysm ruptures, blood leaks into the space around the brain. This is called a subarachnoid hemorrhage. Onset is usually sudden without prodrome, classically presenting as a "thunderclap headache" worse than previous headaches. Symptoms of a subarachnoid hemorrhage differ depending on the site and size of the aneurysm. Symptoms of a ruptured aneurysm can include:
a sudden severe headache that can last from several hours to days
nausea and vomiting
drowsiness, confusion and/or loss of consciousness
visual abnormalities
meningism
dizziness
Almost all aneurysms rupture at their apex. This leads to hemorrhage in the subarachnoid space and sometimes in the brain parenchyma. Minor leakage from the aneurysm may precede rupture, causing warning headaches. About 60% of patients die immediately after rupture. Larger aneurysms have a greater tendency to rupture, though most ruptured aneurysms are less than 10 mm in diameter.
Microaneurysms
A ruptured microaneurysm may cause an intracerebral hemorrhage, presenting as a focal neurological deficit.
Rebleeding, hydrocephalus (the excessive accumulation of cerebrospinal fluid), vasospasm (spasm, or narrowing, of the blood vessels), or multiple aneurysms may also occur. The risk of rupture from a cerebral aneurysm varies according to the size of an aneurysm, with the risk rising as the aneurysm size increases.
Vasospasm
Vasospasm, referring to blood vessel constriction, can occur secondary to subarachnoid hemorrhage following a ruptured aneurysm. This is most likely to occur within 21 days and is seen radiologically in 60% of such patients. The vasospasm is thought to be secondary to the apoptosis of inflammatory cells such as macrophages and neutrophils that become trapped in the subarachnoid space. These cells initially invade the subarachnoid space from the circulation in order to phagocytose the hemorrhaged red blood cells. Following apoptosis, it is thought there is a massive degranulation of vasoconstrictors, including endothelins and free radicals, that cause the vasospasm.
Risk factors
Intracranial aneurysms may result from diseases acquired during life, or from genetic conditions. Hypertension, smoking, alcoholism, and obesity are associated with the development of brain aneurysms. Cocaine use has also been associated with the development of intracranial aneurysms.
Other acquired associations with intracranial aneurysms include head trauma and infections.
Genetic associations
Coarctation of the aorta is also a known risk factor, as is arteriovenous malformation. Genetic conditions associated with connective tissue disease may also be associated with the development of aneurysms. This includes:
autosomal dominant polycystic kidney disease,
neurofibromatosis type I,
Marfan syndrome,
multiple endocrine neoplasia type I,
pseudoxanthoma elasticum,
hereditary hemorrhagic telangiectasia and
Ehlers-Danlos syndrome types II and IV.
Specific genes have also had reported association with the development of intracranial aneurysms, including perlecan, elastin, collagen type 1 A2, endothelial nitric oxide synthase, endothelin receptor A and cyclin dependent kinase inhibitor. Recently, several genetic loci have been identified as relevant to the development of intracranial aneurysms. These include 1p34–36, 2p14–15, 7q11, 11q25, and 19q13.1–13.3.
Pathophysiology
Aneurysm means an outpouching of a blood vessel wall that is filled with blood. Aneurysms occur at a point of weakness in the vessel wall. This can be because of acquired disease or hereditary factors. The repeated trauma of blood flow against the vessel wall presses against the point of weakness and causes the aneurysm to enlarge. As described by the law of Young-Laplace, the increasing area increases tension against the aneurysmal walls, leading to enlargement. In addition, a combination of computational fluid dynamics and morphological indices have been proposed as reliable predictors of cerebral aneurysm rupture.
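A rough sketch of the Young–Laplace relation for an idealized thin-walled spherical sac (an idealization introduced here for illustration, not a claim from the source) shows why wall tension grows with size:

```latex
% Wall tension T of a thin-walled sphere of radius r under transmural pressure P
T = \frac{P\, r}{2}
```

so, at a given blood pressure, a larger aneurysm (larger r) carries greater wall tension, favoring further enlargement.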
Both high and low wall shear stress of flowing blood can cause aneurysm formation and rupture; however, the mechanism of action is still unknown. It is speculated that low shear stress causes growth and rupture of large aneurysms through an inflammatory response, while high shear stress causes growth and rupture of small aneurysms through a mural response (a response from the blood vessel wall). Other risk factors that contribute to the formation of aneurysms are cigarette smoking, hypertension, female gender, family history of cerebral aneurysm, infection, and trauma. Damage to the structural integrity of the arterial wall by shear stress causes an inflammatory response with the recruitment of T cells, macrophages, and mast cells. The inflammatory mediators are interleukin 1 beta, interleukin 6, tumor necrosis factor alpha (TNF alpha), MMP1, MMP2, MMP9, prostaglandin E2, the complement system, reactive oxygen species (ROS), and angiotensin II. In addition, smooth muscle cells from the tunica media layer of the artery move into the tunica intima, where their function changes from contractile to pro-inflammatory. This causes fibrosis of the arterial wall, with a reduction in the number of smooth muscle cells and abnormal collagen synthesis, resulting in thinning of the arterial wall and the formation and rupture of aneurysms. No single causative gene locus has been identified for cerebral aneurysms.
Generally, aneurysms larger than 7 mm in diameter should be treated because they are prone to rupture. Aneurysms smaller than 7 mm that arise from the anterior or posterior communicating artery, however, rupture more readily than aneurysms of the same size arising at other locations.
Saccular aneurysms
Saccular aneurysms are almost always the result of hereditary weaknesses in blood vessels and typically occur within the arteries of the circle of Willis, in order of frequency affecting the following arteries:
Anterior communicating artery
Posterior communicating artery
Middle cerebral artery
Internal carotid artery
Tip of basilar artery
Saccular aneurysms tend to lack a tunica media and elastic lamina around their dilated locations (congenital), with the wall of the sac made up of thickened, hyalinized intima and adventitia. In addition, some parts of the brain vasculature are inherently weak—particularly areas along the circle of Willis, where small communicating vessels link the main cerebral vessels. These areas are particularly susceptible to saccular aneurysms. Approximately 25% of patients have multiple aneurysms, predominantly when there is a familial pattern.
Diagnosis
Once suspected, intracranial aneurysms can be diagnosed radiologically using magnetic resonance or CT angiography. However, these methods have limited sensitivity for the diagnosis of small aneurysms and often cannot be used to specifically distinguish them from infundibular dilations without performing a formal angiogram. The determination of whether an aneurysm is ruptured is critical to diagnosis. Lumbar puncture (LP) is the gold standard technique for determining aneurysm rupture (subarachnoid hemorrhage). Once an LP is performed, the CSF is evaluated for RBC count and the presence or absence of xanthochromia.
Treatment
Emergency treatment for individuals with a ruptured cerebral aneurysm generally includes restoring deteriorating respiration and reducing intracranial pressure. Currently there are two treatment options for securing intracranial aneurysms: surgical clipping or endovascular coiling. If possible, either surgical clipping or endovascular coiling is typically performed within the first 24 hours after bleeding to occlude the ruptured aneurysm and reduce the risk of recurrent hemorrhage.
While a large meta-analysis found the outcomes and risks of surgical clipping and endovascular coiling to be statistically similar, no consensus has been reached. In particular, the large randomised control trial International Subarachnoid Aneurysm Trial appears to indicate a higher rate of recurrence when intracerebral aneurysms are treated using endovascular coiling. Analysis of data from this trial has indicated a 7% lower eight-year mortality rate with coiling, a high rate of aneurysm recurrence in aneurysms treated with coiling—from 28.6 to 33.6% within a year, a 6.9 times greater rate of late retreatment for coiled aneurysms, and a rate of rebleeding 8 times higher than surgically clipped aneurysms.
Surgical clipping
Aneurysms can be treated by clipping the base of the aneurysm with a specially-designed clip. Whilst this is typically carried out by craniotomy, a new endoscopic endonasal approach is being trialled. Surgical clipping was introduced by Walter Dandy of the Johns Hopkins Hospital in 1937.
After clipping, a catheter angiogram or CTA can be performed to confirm complete clipping.
Endovascular coiling
Endovascular coiling refers to the insertion of platinum coils into the aneurysm. A catheter is inserted into a blood vessel, typically the femoral artery, and passed through blood vessels into the cerebral circulation and the aneurysm. Coils are pushed into the aneurysm, or released into the blood stream ahead of the aneurysm. Upon depositing within the aneurysm, the coils expand and initiate a thrombotic reaction within the aneurysm. If successful, this prevents further bleeding from the aneurysm. In the case of broad-based aneurysms, a stent may be passed first into the parent artery to serve as a scaffold for the coils.
Cerebral bypass surgery
Cerebral bypass surgery was developed in the 1960s in Switzerland by Gazi Yasargil. When a patient has an aneurysm involving a blood vessel or a tumor at the base of the skull wrapping around a blood vessel, surgeons eliminate the problem vessel by replacing it with an artery from another part of the body.
Prognosis
Outcomes depend on the size of the aneurysm. Small aneurysms (less than 7 mm) have a low risk of rupture and increase in size slowly. The risk of rupture is less than one percent for aneurysms of this size.
The prognosis for a ruptured cerebral aneurysm depends on the extent and location of the aneurysm, the person's age, general health, and neurological condition. Some individuals with a ruptured cerebral aneurysm die from the initial bleeding. Other individuals with cerebral aneurysm recover with little or no neurological deficit. The most significant factors in determining outcome are the Hunt and Hess grade and age. Generally, patients with Hunt and Hess grade I or II hemorrhage on admission to the emergency room, and younger patients within the typical age range of vulnerability, can anticipate a good outcome without death or permanent disability. Older patients and those with poorer Hunt and Hess grades on admission have a poor prognosis. Generally, about two-thirds of patients have a poor outcome, death, or permanent disability.
Increased availability of and greater access to medical imaging has caused a rising number of asymptomatic, unruptured cerebral aneurysms to be discovered incidentally during medical imaging investigations. Unruptured aneurysms may be managed by surgical clipping or by endovascular coiling or stenting. For those subjects who undergo follow-up for an unruptured aneurysm, computed tomography angiography (CTA) or magnetic resonance angiography (MRA) of the brain can be done yearly. Recently, an increasing number of aneurysm features have been evaluated for their ability to predict aneurysm rupture status, including aneurysm height, aspect ratio, height-to-width ratio, inflow angle, deviations from ideal spherical or elliptical forms, and radiomics morphological features.
Epidemiology
The prevalence of intracranial aneurysm is about 1–5% (10 million to 12 million persons in the United States) and the incidence is 1 per 10,000 persons per year in the United States (approximately 27,000), with 30- to 60-year-olds being the age group most affected. Intracranial aneurysms occur more in women, by a ratio of 3 to 2, and are rarely seen in pediatric populations.
See also
Interventional neuroradiology
Intradural pseudoaneurysm
References
External links
National Institute of Neurological Disorders and Stroke
Cerebrovascular diseases
Neurosurgery | wiki |
The Big Fork River (French: Rivière Grande Fourche; Ojibwe: Baas-achaabaani-ziibi) is a stream in the U.S. state of Minnesota. Starting in the Chippewa National Forest at Dora Lake, it flows into the Rainy River.
The Big Fork River is the fifth-longest river lying entirely within the state of Minnesota.
See also
List of rivers of Minnesota
List of longest streams of Minnesota
Plum Creek (Big Fork River)
References
Minnesota Watersheds
USGS Hydrologic Unit Map - State of Minnesota (1974)
Rivers of Itasca County, Minnesota
Rivers of Koochiching County, Minnesota
Rivers of Minnesota | wiki |
Kölner Liste may refer to:
Kölner Liste (Doping), on the topic of food supplements and doping
Kölner Liste (Baumarkt), on the product assortment carried by DIY stores
Kölner Liste (Standesamt), on the certification of documents
Gonorrhea, colloquially known as the clap, is a sexually transmitted infection (STI) caused by the bacterium Neisseria gonorrhoeae. Infection may involve the genitals, mouth, or rectum. Infected men may experience pain or burning with urination, discharge from the penis, or testicular pain. Infected women may experience burning with urination, vaginal discharge, vaginal bleeding between periods, or pelvic pain. Complications in women include pelvic inflammatory disease and in men include inflammation of the epididymis. Many of those infected, however, have no symptoms. If untreated, gonorrhea can spread to joints or heart valves.
Gonorrhea is spread through sexual contact with an infected person. This includes oral, anal, and vaginal sex. It can also spread from a mother to a child during birth. Diagnosis is by testing the urine, urethra in males, or cervix in females. Testing all women who are sexually active and less than 25 years of age each year as well as those with new sexual partners is recommended; the same recommendation applies in men who have sex with men (MSM).
Gonorrhea can be prevented with the use of condoms, having sex with only one person who is uninfected, and by not having sex. Treatment is usually with ceftriaxone by injection and azithromycin by mouth. Resistance has developed to many previously used antibiotics and higher doses of ceftriaxone are occasionally required. Retesting is recommended three months after treatment. Sexual partners from the last two months should also be treated.
Gonorrhea affects about 0.8% of women and 0.6% of men. An estimated 33 to 106 million new cases occur each year, out of the 498 million new cases of curable STIs – a group that also includes syphilis, chlamydia, and trichomoniasis. Infections in women most commonly occur when they are young adults. In 2015, it caused about 700 deaths. Descriptions of the disease date back to before the Common Era in the Hebrew Bible/Old Testament. The current name was first used by the Greek physician Galen before AD 200, who referred to it as "an unwanted discharge of semen".
Signs and symptoms
Gonorrhea infections of mucosal membranes can cause swelling, itching, pain, and the formation of pus. The time from exposure to symptoms is usually between two and 14 days, with most symptoms appearing between four and six days after infection, if they appear at all. Both men and women with infections of the throat may experience a sore throat, though such infection does not produce symptoms in 90% of cases. Other symptoms may include swollen lymph nodes around the neck. Either sex can become infected in the eyes or rectum if these tissues are exposed to the bacterium.
Women
Half of women with gonorrhea are asymptomatic but the other half experience vaginal discharge, lower abdominal pain, or pain with sexual intercourse associated with inflammation of the uterine cervix. Common medical complications of untreated gonorrhea in women include pelvic inflammatory disease which can cause scars to the fallopian tubes and result in later ectopic pregnancy among those women who become pregnant.
Men
Most infected men with symptoms have inflammation of the penile urethra associated with a burning sensation during urination and discharge from the penis. In men, discharge with or without burning occurs in half of all cases and is the most common symptom of the infection. This pain is caused by a narrowing and stiffening of the urethral lumen. The most common medical complication of gonorrhea in men is inflammation of the epididymis. Gonorrhea is also associated with increased risk of prostate cancer.
Infants
If not treated, gonococcal ophthalmia neonatorum will develop in 28% of infants born to women with gonorrhea.
Spread
If left untreated, gonorrhea can spread from the original site of infection and infect and damage the joints, skin, and other organs. Indications of this can include fever, skin rashes, sores, and joint pain and swelling. In advanced cases, gonorrhea may cause a general feeling of tiredness similar to other infections. It is also possible for an individual to have an allergic reaction to the bacteria, in which case any appearing symptoms will be greatly intensified. Very rarely it may settle in the heart, causing endocarditis, or in the spinal column, causing meningitis. Both are more likely among individuals with suppressed immune systems, however.
Cause
Gonorrhea is caused by the bacterium Neisseria gonorrhoeae. Previous infection does not confer immunity – a person who has been infected can become infected again by exposure to someone who is infected. Infected persons may be able to infect others repeatedly without having any signs or symptoms of their own.
Spread
The infection is usually spread from one person to another through vaginal, oral, or anal sex. Men have a 20% risk of getting the infection from a single act of vaginal intercourse with an infected woman. The risk for men who have sex with men (MSM) is higher. Insertive MSM may get a penile infection from anal intercourse, while receptive MSM may get anorectal gonorrhea. Women have a 60–80% risk of getting the infection from a single act of vaginal intercourse with an infected man.
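As a purely illustrative extension of the per-act figures above (under the strong and admittedly unrealistic assumption that exposures are independent and the per-act risk p is constant), the cumulative risk over n acts would be:

```latex
P(\text{infection after } n \text{ acts}) = 1 - (1 - p)^{n}
```

For example, with the quoted per-act male risk of p = 0.2, three acts would give 1 - 0.8^3 ≈ 0.49.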
A mother may transmit gonorrhea to her newborn during childbirth; when affecting the infant's eyes, it is referred to as ophthalmia neonatorum. It may also spread through objects contaminated with body fluid from an infected person. The bacterium does not survive long outside the body, typically dying within minutes to hours.
Risk factors
Sexually active women younger than 25 and men who have sex with men are at increased risk of contracting gonorrhea.
Other risk factors include:
Having a new sex partner
Having a sex partner who has other partners
Having more than one sex partner
Having had gonorrhea or another sexually transmitted infection
Complications
Untreated gonorrhea can lead to major complications, such as:
Infertility in women. Gonorrhea can spread into the uterus and fallopian tubes, causing pelvic inflammatory disease (PID). PID can result in scarring of the tubes, greater risk of pregnancy complications and infertility. PID requires immediate treatment.
Infertility in men. Gonorrhea can cause a small, coiled tube in the rear portion of the testicles where the sperm ducts are located (epididymis) to become inflamed (epididymitis). Untreated epididymitis can lead to infertility.
Infection that spreads to the joints and other areas of the body. The bacterium that causes gonorrhea can spread through the bloodstream and infect other parts of the body, including the joints. Fever, rash, skin sores, joint pain, swelling and stiffness are possible results.
Increased risk of HIV/AIDS. Having gonorrhea increases the susceptibility to infection with human immunodeficiency virus (HIV), the virus that leads to AIDS. People who have both gonorrhea and HIV (untreated by anti-retroviral therapy) are able to pass both diseases more readily to their partners.
Complications in babies. Babies who contract gonorrhea from their mothers during birth can develop blindness, sores on the scalp and infections.
Diagnosis
Traditionally, gonorrhea was diagnosed with Gram stain and culture; however, newer polymerase chain reaction (PCR)-based testing methods are becoming more common. In those failing initial treatment, culture should be done to determine sensitivity to antibiotics.
Tests that use PCR (also known as nucleic acid amplification) to identify genes unique to N. gonorrhoeae are recommended for screening and diagnosis of gonorrhea infection. These PCR-based tests require a sample of urine, urethral swabs, or cervical/vaginal swabs. Culture (growing colonies of bacteria in order to isolate and identify them) and Gram stain (staining of bacterial cell walls to reveal morphology) can also be used to detect the presence of N. gonorrhoeae in all specimen types except urine. Whether the swab sample is collected at home or in a clinic appears to make no difference in the number of patients treated; the implications for cure, reinfection, partner management, and safety are unknown.
If Gram-negative, oxidase-positive diplococci are visualized on direct Gram stain of urethral pus (male genital infection), no further testing is needed to establish the diagnosis of gonorrhea infection. However, in the case of female infection direct Gram stain of cervical swabs is not useful because the N. gonorrhoeae organisms are less concentrated in these samples. The chances of false positives are increased as Gram-negative diplococci native to the normal vaginal flora cannot be distinguished from N. gonorrhoeae. Thus, cervical swabs must be cultured under the conditions described above. If oxidase positive, Gram-negative diplococci are isolated from a culture of a cervical/vaginal swab specimen, then the diagnosis is made. Culture is especially useful for diagnosis of infections of the throat, rectum, eyes, blood, or joints—areas where PCR-based tests are not well established in all labs. Culture is also useful for antimicrobial sensitivity testing, treatment failure, and epidemiological purposes (outbreaks, surveillance).
In patients who may have disseminated gonococcal infection (DGI), all possible mucosal sites should be cultured (e.g., pharynx, cervix, urethra, rectum). Three sets of blood cultures should also be obtained. Synovial fluid should be collected in cases of septic arthritis.
All people testing positive for gonorrhea should be tested for other sexually transmitted diseases such as chlamydia, syphilis, and human immunodeficiency virus. Studies have found co-infection with chlamydia ranging from 46 to 54% in young people with gonorrhea. Among persons in the United States between 14 and 39 years of age, 46% of people with gonorrheal infection also have chlamydial infection. For this reason, gonorrhea and chlamydia testing are often combined. People diagnosed with gonorrhea infection have a fivefold increased risk of HIV transmission. Additionally, infected persons who are HIV positive are more likely to shed and transmit HIV to uninfected partners during an episode of gonorrhea.
Screening
The United States Preventive Services Task Force (USPSTF) recommends screening for gonorrhea in women at increased risk of infection, which includes all sexually active women younger than 25 years. Extragenital gonorrhea and chlamydia are highest in men who have sex with men (MSM). Additionally, the USPSTF also recommends routine screening in people who have previously tested positive for gonorrhea or have multiple sexual partners and individuals who use condoms inconsistently, provide sexual favors for money, or have sex while under the influence of alcohol or drugs.
Screening for gonorrhea in women who are (or intend to become) pregnant, and who are found to be at high risk for sexually transmitted diseases, is recommended as part of prenatal care in the United States.
Prevention
As with most sexually transmitted diseases, the risk of infection can be reduced significantly by the correct use of condoms or by not having sex, and can be removed almost entirely by limiting sexual activities to a mutually monogamous relationship with an uninfected person.
Those previously infected are encouraged to return for follow up care to make sure that the infection has been eliminated. In addition to the use of phone contact, the use of email and text messaging have been found to improve the re-testing for infection.
Newborn babies coming through the birth canal are given erythromycin ointment in the eyes to prevent blindness from infection. The underlying gonorrhea should be treated; if this is done then usually a good prognosis will follow.
Treatment
Antibiotics
Antibiotics are used to treat gonorrhea infections. As of 2016, both ceftriaxone by injection and azithromycin by mouth are most effective. However, due to increasing rates of antibiotic resistance, local susceptibility patterns must be taken into account when deciding on treatment. Ertapenem is a potential effective alternative treatment for ceftriaxone-resistant gonorrhea.
Adults with gonorrheal eye infections require proper personal hygiene and medications. The addition of topical antibiotics has not been shown to improve cure rates compared to oral antibiotics alone in the treatment of eye-infected gonorrhea. For newborns, erythromycin ointment is recommended as a preventive measure against gonococcal infant conjunctivitis.
Infections of the throat can be especially problematic, as antibiotics have difficulty becoming sufficiently concentrated there to destroy the bacteria. This is amplified by the fact that pharyngeal gonorrhea is mostly asymptomatic, and gonococci and commensal Neisseria species can coexist for long periods in the pharynx and share antimicrobial resistance genes. Accordingly, an enhanced focus on early detection (i.e., screening of high-risk populations, such as men who have sex with men, for whom PCR testing should be considered) and appropriate treatment of pharyngeal gonorrhea is important.
Sexual partners
It is recommended that sexual partners be tested and potentially treated. One option for treating sexual partners of people infected is patient-delivered partner therapy (PDPT), which involves providing prescriptions or medications to the person to take to his/her partner without the health care provider's first examining him/her.
The United States' Centers for Disease Control and Prevention (CDC) currently recommend that individuals who have been diagnosed and treated for gonorrhea avoid sexual contact with others until at least one week past the final day of treatment in order to prevent the spread of the bacterium.
Antibiotic resistance
Many antibiotics that were once effective including penicillin, tetracycline, and fluoroquinolones are no longer recommended because of high rates of resistance. Resistance to cefixime has reached a level such that it is no longer recommended as a first-line agent in the United States, and if it is used a person should be tested again after a week to determine whether the infection still persists. Public health officials are concerned that an emerging pattern of resistance may predict a global epidemic. In 2016, the WHO published new guidelines for treatment, stating "There is an urgent need to update treatment recommendations for gonococcal infections to respond to changing antimicrobial resistance (AMR) patterns of N. gonorrhoeae. High-level resistance to previously recommended quinolones is widespread and decreased susceptibility to the extended-spectrum (third-generation) cephalosporins, another recommended first-line treatment in the 2003 guidelines, is increasing and several countries have reported treatment failures."
Prognosis
Gonorrhea, if left untreated, may last for weeks or months with higher risks of complications. One of the complications of gonorrhea is systemic dissemination resulting in skin pustules or petechiae, septic arthritis, meningitis, or endocarditis. This occurs in between 0.6 and 3% of infected women and 0.4 and 0.7% of infected men.
In men, inflammation of the epididymis, prostate gland, and urethra can result from untreated gonorrhea. In women, the most common result of untreated gonorrhea is pelvic inflammatory disease. Other complications include inflammation of the tissue surrounding the liver, a rare complication associated with Fitz-Hugh–Curtis syndrome; septic arthritis in the fingers, wrists, toes, and ankles; septic abortion; chorioamnionitis during pregnancy; neonatal or adult blindness from conjunctivitis; and infertility. Men who have had a gonorrhea infection have an increased risk of getting prostate cancer.
Epidemiology
About 88 million cases of gonorrhea occur each year, out of the 448 million new cases of curable STIs each year – a figure that also includes syphilis, chlamydia, and trichomoniasis. The prevalence was highest in the African region, the Americas, and the Western Pacific, and lowest in Europe. In 2013, it caused about 3,200 deaths, up from 2,300 in 1990.
In the United Kingdom, 196 per 100,000 males 20 to 24 years old and 133 per 100,000 females 16 to 19 years old were diagnosed in 2005. In 2013, the CDC estimated that more than 820,000 people in the United States get a new gonorrheal infection each year. Fewer than half of these infections are reported to the CDC. In 2011, 321,849 cases of gonorrhea were reported to the CDC. After the implementation of a national gonorrhea control program in the mid-1970s, the national gonorrhea rate declined from 1975 to 1997. After a small increase in 1998, the gonorrhea rate has decreased slightly since 1999. In 2004, the rate of reported gonorrheal infections was 113.5 per 100,000 persons.
In the US, it is the second-most-common bacterial sexually transmitted infections; chlamydia remains first. According to the CDC African Americans are most affected by gonorrhea, accounting for 69% of all gonorrhea cases in 2010.
The World Health Organization warned in 2017 of the spread of untreatable strains of gonorrhea, following analysis of at least three cases in Japan, France and Spain, which survived all antibiotic treatment.
History
Some scholars translate the biblical terms zav (for a male) and zavah (for a female) as gonorrhea.
It has been suggested that mercury was used as a treatment for gonorrhea. Surgeons' tools on board the recovered English warship the Mary Rose included a syringe that, according to some, was used to inject the mercury via the urinary meatus into crewmen with gonorrhea. The name "the clap", in reference to the disease, is recorded as early as the sixteenth century, referring to a medieval red-light district in Paris, Les Clapiers. Translating to "The rabbit holes", it was so named for the small huts in which prostitutes worked.
In 1854, Dr. Wilhelm Gollmann addressed gonorrhea in his book, Homeopathic Guide to all Diseases Urinary and Sexual Organs. He noted that the disease was common in prostitutes and homosexuals in large cities. Gollmann recommended the following as cures: aconite to cure "shooting pains with soreness and inflammation;" mercury "for stitching pain with purulent discharge;" and nux vomica and sulphur "when the symptoms are complicated with hemorrhoids and stricture of the rectum." Other remedies include argentum, aurum (gold), belladonna, calcarea, ignatia, phosphorus, and sepia.
Silver nitrate was one of the widely used drugs in the 19th century. However, it was later replaced by Protargol. Arthur Eichengrün invented this type of colloidal silver, which was marketed by Bayer from 1897 onward. The silver-based treatment was used until the first antibiotics came into use in the 1940s.
The exact time of onset of gonorrhea as prevalent disease or epidemic cannot be accurately determined from the historical record. One of the first reliable notations occurs in the Acts of the (English) Parliament. In 1161, this body passed a law to reduce the spread of "... the perilous infirmity of burning". The symptoms described are consistent with, but not diagnostic of, gonorrhea. A similar decree was passed by Louis IX in France in 1256, replacing regulation with banishment. Similar symptoms were noted at the siege of Acre by Crusaders.
Coincidental to, or dependent on, the appearance of a gonorrhea epidemic, several changes occurred in European medieval society. Cities hired public health doctors to treat affected patients without right of refusal. Pope Boniface rescinded the requirement that physicians complete studies for the lower orders of the Catholic priesthood.
Medieval public health physicians in the employ of their cities were required to treat prostitutes infected with the "burning", as well as lepers and other epidemic patients. After Pope Boniface completely secularized the practice of medicine, physicians were more willing to treat a sexually transmitted disease.
Research
A vaccine for gonorrhea has been developed that is effective in mice. It will not be available for human use until further studies have demonstrated that it is both safe and effective in the human population. Development of a vaccine has been complicated by the ongoing evolution of resistant strains and antigenic variation (the ability of N. gonorrhoeae to disguise itself with different surface markers to evade the immune system).
As N. gonorrhoeae is closely related to N. meningitidis and they have 80–90% homology in their genetic sequences some cross-protection by meningococcal vaccines is plausible. A study published in 2017 showed that MeNZB group B meningococcal vaccine provided a partial protection against gonorrhea. The vaccine efficiency was calculated to be 31%.
References
External links
"Gonorrhea – CDC Fact Sheet"
Bacterium-related cutaneous conditions
Gonorrhea
Infectious causes of cancer
Wikipedia medicine articles ready to translate
Wikipedia emergency medicine articles ready to translate | wiki |
A crowd scene is the representation of a crowd in art, literature or other media.
There are many examples of crowd scenes in American literature. One classic is Poe's short story, "The Man of the Crowd", in which a mysterious old man is followed through London in the 19th century, when it was the most populous city in the world.
See also
crowd simulation
walla
References
Citations
Sources
Film and video terminology
Literary theory | wiki |
A graphing calculator is a class of hand-held calculator that is capable of plotting graphs and solving equations involving variables. While there are several companies that manufacture models of graphing calculators, Hewlett-Packard is a major manufacturer.
The following table compares general and technical information for Hewlett-Packard graphing calculators:
See also
Comparison of Texas Instruments graphing calculators
HP calculators
List of Hewlett-Packard pocket calculators
References
HP calculators
HP calculators
Graphing calculators | wiki |
American Indoor Football (AIF) was a professional indoor football league, one of the several regional professional indoor football leagues in North America.
The AIFL began as a regional league with six franchises on the East Coast of the United States in 2005. After a rapid, and largely failed, expansion effort in 2006, most of the league's remaining teams jumped to the new AIFA (the rest joined the short-lived WIFL). The AIFA expanded throughout existing territory and, in 2008, expanded into the Western United States. The league legally divided into two entities to allow for a partial merger with the Southern Indoor Football League, which resulted in all of its Eastern teams merging into the SIFL and the AIFA only maintaining its western teams. The league's western component, which remained separate of the merger, had indicated it would play as the AIFA West for the 2011 season but ceased operations January 2011. The league announced it would be relaunching as American Indoor Football in time for spring 2012.
After the 2016 season, the AIF ceased operations with the former AIF owner stating his support for the recently created Arena Developmental League. In 2021, league owner John Morris announced he planned to relaunch the AIF for the 2022 season, though no games would be played.
History
The league has its roots in the Atlantic Indoor Football League, which began play in 2005 under the leadership of Andrew Haines. The first team to join the AIFL was the Johnstown RiverHawks. The league began with six teams, all of them based in the eastern United States. Two teams played all of their games on the road, and the regular season was cut short two weeks because of teams being unable to secure venues for playoff games. In the 2005–06 offseason, the league changed its name to the American Indoor Football League, while nine expansion teams entered the league and a tenth (the Rome Renegades) joined from the National Indoor Football League.
The 2006 season was marred by the folding of two teams, and the league used semi-pro teams to fill scheduling vacancies. The league was briefly acquired by Greens Worldwide, Inc., the owners of the amateur North American Football League, during the 2006 season, but they terminated the contract soon afterwards. Nine teams left the league after the season, including four who split off to create the short-lived World Indoor Football League. On October 2, 2006, a massive reorganization took place as Morris and Michael Mink set up a new league, which absorbed all of the remaining AIFL franchises, and Haines was ousted. (Haines would go on to create the Mid-Atlantic Hockey League in 2007, before similar stability problems led to the forced divestiture of that league as well. Haines would, in April 2010, announce he was relaunching his league as the Ultimate Indoor Football League beginning in 2011 and revived two defunct former AIFL teams.) The league took on the American Indoor Football Association name at the same time.
The 2007 season was relatively successful for the league, as all 112 scheduled games were played and no teams folded mid-season, a major improvement over the past two seasons (when the AIFA was known as the AIFL). The AIFA Championship Bowl I was a neutral site game held in Florence, South Carolina. In addition, the league held its first All-Star Game the same weekend, also in Florence. League owners stated that the neutral site was chosen so that both games could be televised to obtain nationwide exposure for the league.
The league then expanded nationwide; some individual teams were able to acquire several players with NFL experience, a sign that the league had achieved a level on par with leagues such as af2. The league earned a major television contract as well: on September 17, 2007, American Indoor Football Association owners John Morris and Michael Mink announced that the league had signed a three-year national television broadcast, mobile phone broadcast, and webcast licensing agreement with Simply 4Me Incorporated (d.b.a. SimplyMe TV). However, that deal was subsequently cancelled. Later in the season, FSN Pittsburgh agreed to pick up the remaining games; Erie, Pennsylvania-based Image Sports Network was also involved with the league.
Eight teams participating in the league in 2007 did not return for the 2008 season, including the 2007 champion Lakeland Thunderbolts. The AIFA became the third league since 2004 (excluding the folded WIFL and NIFL before its folding) to lose its standing champion (the 2004 NIFL champion Lexington Horsemen left to join the newly created UIF and later were in af2, and the 2006 champion Billings Outlaws also left to join two years later). However, nine teams signed on to begin play in 2008, and the league created a Western Conference. In 2007, the team farthest west was based in Mississippi; in 2008, the team farthest west was based in Arizona. Three of the four teams who had won the league championship to that point were no longer active league members.
The 2009 season culminated in AIFA Championship Bowl III, hosted by the Western Conference champion Wyoming Cavalry on July 25, 2009. The game, played before 6,500 fans at the Casper Events Center, saw the Reading Express defeat the Wyoming Cavalry for their first title, 65–42.
As the 2010 season approached, AIFA continued to expand its nationwide footprint. Expansion franchises had been added in Richmond, Virginia; Yakima, Washington; Wasilla, Alaska (believed to be the smallest city in America to host a national professional football franchise) and Wenatchee, Washington. The moves gave the AIFA a much more significant presence on the West Coast of the United States. To accommodate this, and to keep travel expenses down, for the 2010 season the AIFA adopted a scheduling system that effectively treated the Eastern and Western conferences as separate leagues, with no regular-season crossover between the two conferences. The league also secured a television contract with AMGTV, which was to syndicate a "Game of the Week" package to regional sports networks and its network of low-powered broadcast stations.
In 2010, the Baltimore Mariners completed the league's first-ever perfect season by winning all fourteen regular season games and winning AIFA Championship Bowl IV.
Split, partial merger with the SIFL and first cessation of operations
The AIFA arranged a split and partial merger with the Southern Indoor Football League after the 2010 season. As part of the deal, Morris would acquire the rights to the Eastern Conference teams and merge them into the SIFL, while Mink would retain the western conference teams, rights to the AIFA name, and television contract, the last of which was extended through 2013.
The AIFA West originally announced that it would begin its season with four teams, beginning in March 2011, after the Tucson Thunder Kats announced it would be suspending operations until 2012. As of January 2011, no schedule had been released, and the league informed the remaining three teams that there would not be a fourth team representing Eugene, Oregon as the league had earlier promised. The league attempted to work out a schedule with the remaining three teams, but the Reno Barons and Stockton Wolves were unwilling to go forward with such a schedule and broke from the league. Both teams operated as the two-team "Western Indoor Football Association" in 2011, each playing whatever semi-pro teams were willing to face them in addition to each other. With only the Yakima Valley Warriors left, the AIFA ceased operations; it said that it would attempt to relaunch in 2012 with eight to 12 teams in at least two regions of the United States.
As of June 2011, Morris had released a statement indicating he still represented the AIFA when he purchased the assets of the Fayetteville Force.
Relaunch and folding
On October 27, 2011, the AIFA announced it was relaunching as American Indoor Football (AIF). The move came in light of the dissolution of the SIFL and its breakup into the Professional Indoor Football League and the Lone Star Football League. AIF announced its intentions to absorb the three remaining SIFL teams not in either the PIFL or LSFL (the Harrisburg Stampede, Trenton Steel and Carolina Speed), as well as the remaining teams that would have participated in the AIFA West. AIF intended to launch an amateur division as well.
In 2015, the league absorbed the remains of the Continental Indoor Football League, picking up the Saginaw Sting and Chicago Blitz from that league; the CIFL Web site became a redirect to AIF's. (The two other surviving teams from that league chose to play in other leagues: Erie decided to join the PIFL, while the Marion Blue Racers fulfilled an earlier promise to join the X-League). In homage to the CIFL, AIF split into two conferences, one bearing the American name and the other (which includes both CIFL refugees) named the Continental Conference. The conference names were changed to Northern and Southern for the 2016 season.
The 2016 season saw the league grow from nine teams to a total of 28 announced teams. However, only 21 teams ever played a league game that season, including four teams that folded midseason and several other teams cancelling scheduled games. The Columbus Lions, which joined for 2016, would finish the season undefeated and win the championship. The Lions then announced that they were leaving the league due to the league's instability, especially in the Southern Division where the Lions were the only team that did not have a cancelled or rescheduled game. On July 7, 2016, the Lions' owners announced the formation of a new league, the Arena Developmental League. On July 13, the Lehigh Valley Steelhawks also announced they were leaving the AIF.
In response, AIF owner John Morris announced on July 18, 2016, that the AIF was ceasing operations immediately. He also announced his support of the new Arena Developmental League (which later changed its name to National Arena League before its inaugural season) and hoped the new league would take on many of the former AIF teams.
The Buffalo Blitz (formerly the Buffalo Lightning) used the official AIF football in their press announcement upon joining the Can-Am Indoor Football League, which was created by the Vermont Bucks, an announced AIF expansion team for 2017. The Can-Am league also used the AIF footballs in games during its only season.
Basic rule differences
AIF did not use the rebound nets found in the Arena Football League.
One linebacker could move from flat to flat but was required to stay in the drop zone.
Platooning and free substitution were allowed, meaning players did not have to play both offense and defense.
Franchises were required to have at least nine players that originated from within a 120-mile radius of the team's home town.
The AIF ball pattern was similar to that of the basketball in the American Basketball Association, with red, white, and blue panels as opposed to the brown colored football of most leagues. This pattern originated in the AIFL and is also used in the UIFL.
Two rule changes appeared to be inspired by Canadian football rules:
Two offensive players could be in motion at one time. The AFL allows only one in motion.
The AIF recognized the single (also known as an uno or rouge). If a kickoff goes through the uprights, or if the receiving team does not advance the ball out of the end zone on a kickoff, the kicking team is awarded one point and the ball is spotted at the opponent's five yard line.
Teams
Teams when league folded
Italics indicate a travel-only team (including the Maryland Eagles, who also played home games in another league)
Map of teams
Former teams
Defunct franchises
Abilene Warriors – Announced for 2016 but never played.
AIFL Ghostchasers
Arctic Predators (Wasilla, Alaska).
Arizona Adrenaline
Arizona Outlaws
Atlanta Sharks – At one point, listed on the 2016 schedule of the Indoor Football Alliance.
Atlanta Vultures – Folded during the 2016 season.
Augusta Colts – Originally played as the Augusta Spartans.
Austin Colts – Added midseason in 2016 but folded prior to playing any games.
Baltimore Blackbirds
Baltimore Mariners
Canton Legends
Carolina Ghostriders – Also known as the Carolina Sharks, AIFL Ghostriders, and indirectly the Greensboro Ghostriders.
Carolina Speed
Chattahoochee Valley Vipers
Corpus Christi Fury – Added for 2016, folded midseason.
Cleveland Patriots
Danville Demolition
Daytona Beach Thunder
D.C. Armor
Erie Freeze
Fayetteville Guard
Florence Phantoms
Florida Stingrays
Gulf Coast Raiders
Huntington Heroes
Johnstown Riverhawks
Louisiana Cottonmouths – Announced for 2016 but never played.
Marion Blue Racers – Announced as a member for the 2016 season joining from the X-League but suspended operations prior to the season.
Maryland Reapers
Mississippi MudCats
Montgomery Bears
Myrtle Beach Freedom
Nevada Lynx
New Jersey Revolution
New Mexico Wildcats
New Mexico Stars
Northern Kentucky Nightmare – Traveling 2016 expansion team; ownership attempted to join the Arena Developmental League after the AIF ceased operations.
Northshore Gators – Announced for 2016 but never played.
Ogden Knights
Ontario Warriors
Philadelphia Yellow Jackets – Joined in 2016, folded midseason.
Pineywoods Bucks – Announced for 2016 season but never played.
Raleigh Rebels
Reading Express
Richmond Bandits
Richmond Raiders
Roc City Thunder – Played as an independent; announced as member of, but never actually played, in the league.
Rochester Raiders
Rome Renegades
Saginaw Sting
Springfield Stallions
South Carolina Force
Steel City Menace – Joined in 2016, folded midseason
Steubenville Stampede
Syracuse Soldiers – Also known as the Binghamton Brigade
Tallahassee Titans
Tucson Thunder Kats – Announced but never played
Utah Saints
Virginia Badgers
Wenatchee Valley Venom
Wyoming Cavalry
Yakima Valley Warriors
York/Central Penn Capitals
Former AIFL/AIFA/AIF teams that left and were still active
ASI Panthers – Announced to be an independent team for 2016 but was also listed by Indoor Football Alliance as a member. The team instead went to the semi-professional Minor League Football and were renamed to Penn Panthers.
Buffalo Lightning – Played a short 2016 season as an independent before playing 2017 as the Buffalo Blitz as members of the Can-Am Indoor Football League.
Cape Fear Heroes – Created their own league in 2015 as the charter member of Supreme Indoor Football (SIF) which then became part of the Indoor Football Alliance (IFA). The IFA was unable to organize a season in time and went on hiatus for 2016. The Heroes ownership restarted the SIF for 2017.
Central Florida Jaguars – Joined the AIF as an expansion team in 2016. Became the charter member of the semi-professional Elite Indoor Football Conference for 2017 and briefly an affiliate member of the new Arena Pro Football. In 2017, the EIFC played all league games outdoors in Bartow, Florida.
Chicago Blitz – After the team went for sale near the end of the 2016 season, the team ceased operations for 2017. Joined the semiprofessional regional Midwest Professional Indoor Football for 2018 but left the league after three games.
Columbus Lions – Joined for the 2016 season and won the AIF championship. One week later the team announced they would not return to the AIF for the 2017 season and the owners formed a new league called the Arena Developmental League, which then changed its name to National Arena League.
Erie RiverRats/Storm – Later called the Erie Explosion. Played in Professional Indoor Football League before league folded in 2015. Initially part of the Indoor Football Alliance as a member of the reorganized Continental Indoor Football League but went on hiatus for 2016, never to return.
Florida Tarpons – Joined for the 2016 season from the recently defunct X-League Indoor Football. Joined the new Arena Pro Football for 2017 after the AIF ceased operations.
Georgia Firebirds – Joined for the 2016 season. Joined the new National Arena League for 2017 after the AIF ceased operations.
Harrisburg Stampede – Moved to the Professional Indoor Football League after their 2013 championship, then folded following the 2014 season.
High Country Grizzlies – Announced as 2017 expansion team prior to the AIF folding. Joined the new National Arena League for 2017 after the AIF ceased operations.
Lehigh Valley Steelhawks – Joined for the 2016 season but left after one season. Joined National Arena League for 2017 prior to the AIF ceasing operations.
Maryland Eagles – Full AIF member in 2013, and then a part-time traveling team from 2014 to 2016 while also playing full-time in a regional semi-professional league. After the AIF ceased operations, went semi-pro only.
Miami Valley Silverbacks – Joined AIFL for 2006 season but left after one season. Joined Continental Indoor Football League in 2007 until the 2012 season. Played as Dayton Silverbacks from 2010 to 2012.
River City Raiders – Joined for the 2016 season from the X-League Indoor Football where it was known as the St. Louis Attack. Briefly joined Champions Indoor Football before joining Arena Pro Football for the 2017 season after the AIF ceased operations.
Savannah Steam – Joined for the 2015 season and played two seasons until the AIF folded and they themselves were evicted from their home arena. While working to establish its own Elite Indoor Football league for 2017, they rebranded as the Southern Steam and played home games outdoors and later in a converted warehouse in nearby Statesboro.
Texas Stealth – Announced as an expansion team for the 2016 season but joined North American Indoor Football (a semi-professional league) prior to their first season.
Triangle Torch – Joined as an expansion team for the 2016 season. Joined the reorganized Supreme Indoor Football for the 2017 season.
Utah Valley Thunder – Assumed the identity of the Utah Blaze in the Arena Football League in 2010 until 2013.
Vermont Bucks – Announced as 2017 expansion team two days prior to the AIF folding. Created the new Can-Am Indoor Football League for 2017.
West Michigan Ironmen – Joined for the 2016 season until the AIF folded. Announced they had joined Champions Indoor Football for 2017.
Substitute
Chambersburg Cardinals – semi-professional outdoor team that played two games as a substitute in 2006, from the North American Football League.
Columbus Blackhawks – semi-professional team that filled in for one game in 2006.
Philadelphia Scorpions – semi-professional team that filled in for one game in 2006.
Championship games
See also
List of leagues of American football
References
External links
Defunct indoor American football leagues in the United States | wiki |
Bank of America Centers may refer to:
Bank of America Center (Houston) – a building in Houston, Texas
Bank of America Center (San Francisco) – a building in San Francisco, California
See also
Bank of America Plaza
Bank of America Tower | wiki |
NGC 483 is a spiral galaxy in the constellation Pisces. It is located approximately 192 million light-years from Earth and was discovered on November 11, 1827 by astronomer John Herschel.
See also
Spiral galaxy
List of NGC objects (1–1000)
References
External links
SEDS
Spiral galaxies
Pisces (constellation)
0483
4961
Astronomical objects discovered in 1827 | wiki |
Redneck is a derogatory term chiefly, but not exclusively, applied to white Americans perceived to be crass and unsophisticated, closely associated with rural whites of the Southern United States.
Its meaning possibly stems from the sunburn found on farmers' necks dating back to the late 19th century. Its modern usage is similar in meaning to cracker (especially regarding Texas, Georgia, and Florida), hillbilly (especially regarding Appalachia and the Ozarks), and white trash (but without the last term's suggestions of immorality). In Britain, the Cambridge Dictionary definition states: "A poor, white person without education, esp. one living in the countryside in the southern US, who is believed to have prejudiced ideas and beliefs. This word is usually considered offensive." People from the white South sometimes jocularly call themselves "rednecks" as insider humor.
By the 1970s, the term had become offensive slang, its meaning expanded to include racism, loutishness, and opposition to modern ways.
Patrick Huber, in his monograph A Short History of Redneck: The Fashioning of a Southern White Masculine Identity, emphasized the theme of masculinity in the 20th-century expansion of the term, noting, "The redneck has been stereotyped in the media and popular culture as a poor, dirty, uneducated, and racist Southern white man."
19th and early 20th centuries
Political term for poor farmers
The term originally characterized farmers who had a red neck, caused by sunburn from long hours working in the fields. A citation from 1893 defines the term as "poorer inhabitants of the rural districts ... men who work in the field, as a matter of course, generally have their skin stained red and burnt by the sun, and especially is this true of the back of their necks". Hats were usually worn and protected the wearer's head from the sun, while also providing psychological protection by shading the face from close scrutiny. The back of the neck, however, was more exposed to the sun and allowed closer scrutiny of the person's background, in the same way that callused working hands could not be easily covered.
By 1900, "rednecks" was in common use to designate the political factions inside the Democratic Party comprising poor white farmers in the South. The same group was also often called the "wool hat boys" (for they opposed the rich men, who wore expensive silk hats). A newspaper notice in Mississippi in August 1891 called on rednecks to rally at the polls at the upcoming primary election.
By 1910, the political supporters of the Mississippi Democratic Party politician James K. Vardaman—chiefly poor white farmers—began to describe themselves proudly as "rednecks", even to the point of wearing red neckerchiefs to political rallies and picnics.
Linguist Sterling Eisiminger, based on the testimony of informants from the Southern United States, speculated that the prevalence of pellagra in the region during the Great Depression may have contributed to the rise in popularity of the term; red, inflamed skin is one of the first symptoms of that disorder to appear.
Coal miners
The term "redneck" in the early 20th century was occasionally used in reference to American coal miner union members who wore red bandanas for solidarity. The sense of "a union man" dates at least to the 1910s and was especially popular during the 1920s and 1930s in the coal-producing regions of West Virginia, Kentucky, and Pennsylvania. It was also used by union strikers to describe poor white strikebreakers.
Late 20th and early 21st centuries
Writers Edward Abbey and Dave Foreman also use "redneck" as a political call to mobilize poor rural white Southerners. "In Defense of the Redneck" was a popular essay by Ed Abbey. One popular early Earth First! bumper sticker was "Rednecks for Wilderness". Murray Bookchin, an urban leftist and social ecologist, objected strongly to Earth First!'s use of the term as "at the very least, insensitive". However, many Southerners have proudly embraced the term as a self-identifier. Similarly to Earth First!'s use, the self-described "anti-racist, pro-gun, pro-labor" group Redneck Revolt have used the term to signal its roots in the rural white working-class and celebration of what member Max Neely described as "redneck culture".
As political epithet
According to Chapman and Kipfer in their "Dictionary of American Slang", by 1975 the term had expanded in meaning beyond the poor Southerner to refer to "a bigoted and conventional person, a loutish ultra-conservative". For example, in 1960 John Bartlow Martin argued that Senator John F. Kennedy should not enter the Indiana Democratic presidential primary because the state was "redneck conservative country". Indiana, he told Kennedy, was a state "suspicious of foreign entanglements, conservative in fiscal policy, and with a strong overlay of Southern segregationist sentiment". Writer William Safire observes that it is often used to attack white Southern conservatives, and more broadly to degrade working-class and rural whites who are perceived by urban progressives to be insufficiently progressive. At the same time, some white Southerners have reclaimed the word, using it with pride and defiance as a self-identifier.
In popular culture
Johnny Russell was nominated for a Grammy Award in 1973 for his recording of "Rednecks, White Socks and Blue Ribbon Beer", parlaying the "common touch" into financial and critical success.
Further songs referencing rednecks include "Longhaired Redneck" by David Allan Coe, "Rednecks" by Randy Newman, "Redneck Friend" by Jackson Browne, "Redneck Woman" by Gretchen Wilson, "Redneck Yacht Club" by Craig Morgan, "Redneck" by Lamb of God, "Redneck Crazy" by Tyler Farr, "Red Neckin' Love Makin' Night" by Conway Twitty, "Up Against The Wall Redneck Mother" by Jerry Jeff Walker, and "Your Redneck Past" by Ben Folds Five.
'Picture to Burn' by Taylor Swift is another successful country song that uses the word 'redneck', this time negatively: the narrator calls her ex-boyfriend a 'redneck heartbreak'.
Frank Zappa's song "Lonesome Cowboy Bert", performed by The Mothers on the soundtrack of "200 Motels", also used the term.
Comedian Jeff Foxworthy's 1993 comedy album You Might Be a Redneck If... cajoled listeners to evaluate their own behavior in the context of stereotypical redneck behavior.
Redneck is mentioned several times on Texas-based animated sitcom King of the Hill by Hank Hill's antagonistic neighbor Kahn.
Outside the United States
Historical Scottish Covenanter usage
In Scotland in the 1640s, the Covenanters rejected rule by bishops, often signing manifestos using their own blood. Some wore red cloth around their necks to signify their position, and were called rednecks by the Scottish ruling class to denote that they were the rebels in what came to be known as the Bishops' Wars, which preceded the rise of Cromwell. Eventually, the term began to mean simply "Presbyterian", especially in communities along the Scottish border. Because of the large number of Scottish immigrants in the pre-revolutionary American South, some historians have suggested that this may be the origin of the term in the United States.
Dictionaries document the earliest American citation of the term's use for Presbyterians in 1830, as "a name bestowed upon the Presbyterians of Fayetteville (North Carolina)".
South Africa
The exact Afrikaans equivalent, rooinek (literally "red neck"), is used as a disparaging term for English people and South Africans of English descent, in reference to their supposed naïveté, as later arrivals in the region, in failing to protect themselves from the sun.
See also
Florida cracker
Georgia cracker
Old Stock Americans
Stereotypes of white Americans
Culture of the Southern United States
Country (identity)
List of ethnic slurs
Class discrimination
Bogan, Australian term
Plain Folk of the Old South
Redlegs – poor whites who live on Barbados and a few other Caribbean islands
Yokel
White trash
References
Further reading
Abbey, Edward. "In Defense of the Redneck", from Abbey's Road: Take the Other. (E. P. Dutton, 1979)
Ferrence, Matthew, "You Are and You Ain't: Story and Literature as Redneck Resistance", Journal of Appalachian Studies, 18 (2012), 113–30.
Goad, Jim. The Redneck Manifesto: How Hillbillies, Hicks, and White Trash Became America's Scapegoats (Simon & Schuster, 1997).
Harkins, Anthony. Hillbilly: A cultural history of an American icon (2003).
Huber, Patrick. "A short history of Redneck: The fashioning of a southern white masculine identity." Southern Cultures 1#2 (1995): 145–166. online
Jarosz, Lucy, and Victoria Lawson. "'Sophisticated people versus rednecks': Economic restructuring and class difference in America's West." Antipode 34#1 (2002): 8-27.
Shirley, Carla D. "'You might be a redneck if ... ' Boundary Work among Rural, Southern Whites." Social forces 89#1 (2010): 35–61. in JSTOR
West, Stephen A. From Yeoman to Redneck in the South Carolina Upcountry, 1850–1915 (2008)
Weston, Ruth D. "The Redneck Hero in the Postmodern World", South Carolina Review, (Spring 1993)
Wilson, Charles R. and William Ferris, eds. Encyclopedia of Southern Culture, (1989)
Wray, Matt. Not Quite White: White Trash and the Boundaries of Whiteness (2006)
External links
Poor Whites in the New Georgia Encyclopedia (history)
American regional nicknames
American slang
English words
European-American culture in Appalachia
Florida culture
Georgia (U.S. state) culture
History of subcultures
Pejorative terms for white people
Rural culture in the United States
Slang of the Southern United States
Socioeconomic stereotypes
Stereotypes of rural people
Stereotypes of the working class
Stereotypes of white Americans
Texas culture
Working-class culture in the United States | wiki |
Mountain alder is a common name for two different alders:
Alnus alnobetula subsp. crispa — the green alder, native to western North America.
Alnus incana subsp. tenuifolia — the grey alder or thinleaf alder, native to western North America. | wiki |
Wakeskating is a water sport and an adaptation of wakeboarding that employs a similar board, manufactured from maple or fiberglass. Unlike in wakeboarding, the rider is not bound to the board in any way, similar to a skateboard, from which the name derives.
Design
Fins are constructed of plastic, fiberglass, or aluminum. A shallower fin does not track as well as a deeper one, so shorter fins must be deeper to provide the same amount of tracking. A deeper fin, however, has more drag in the water and does not release from the water as quickly.
Wakeskating shoes are designed with quick drying materials and drainage channels. The drainage channels are a system of holes in the sole and channels through the midsole.
Most wakeskate boards have grip tape on the top surface, just like a skateboard. The grip tape, which is similar to sandpaper, helps the rider stay on the board and provides good traction; it is the main reason riders wear shoes. Some boards are made with foam instead of the regular skateboard grip tape. A foam surface is easier on the skin in a fall and can also be ridden barefoot.
History
Wakeskating was pioneered by Thomas Horrell in the United States. Wakeskating has become urbanized due to the advent of the "winch", a mechanical device with a small horizontal shaft engine that holds a spool of rope and pulls the rope in at riding speed.
A wakeskate is an integral part of wakeskating. Five factors differentiate one wakeskate from another: size, material, deck shape, deck surface, and rocker type.
Size
The size of a wakeskate is determined by the weight of the rider. The smallest wakeskates, about 39 inches long, are suitable for riders weighing around 180 pounds, while wakeskates longer than 41 inches are best for riders weighing 250 pounds or more. Shorter wakeskates are easier to maneuver and flick because of their lighter weight and smaller size, but they are comparatively unstable when the rider lands on the surface of the water. Larger wakeskates offer more surface area, which gives the rider greater stability.
Material
A wakeskate is available in two types of material: wood and composite. A wood wakeskate consists of a wooden deck covered in marine-grade epoxy, which gives it a finished look and a longer life. Even so, wood wakeskates always have a shorter life than composite wakeskates because the wood degrades with constant exposure to water, so they do not usually come with any sort of manufacturer's warranty.
Composite wakeskates are more popular, especially among professional riders, because of their lighter weight and longer life. They are made entirely of synthetic materials that do not degrade quickly from exposure to water, but they are more expensive than wood wakeskates.
See also
Reed Hansen
Skateboarding
References
Boardsports
Sports originating in Australia
Towed water sports
Wakeboarding | wiki |
Kong Vibol is a Cambodian politician. He belongs to the Cambodian People's Party (CPP).
Kong Vibol is the Secretary of State, Ministry of Economy and Finance and vice-chairman of the Council for the Development of Cambodia, (CDC).
References
Cambodian People's Party politicians
Living people
Year of birth missing (living people) | wiki |
The United States Social Security Administration (SSA) is an independent agency of the U.S. federal government that administers Social Security, a social insurance program consisting of retirement, disability and survivor benefits. To qualify for most of these benefits, most workers pay Social Security taxes on their earnings; the claimant's benefits are based on the wage earner's contributions. Otherwise benefits such as Supplemental Security Income (SSI) are given based on need.
The Social Security Administration was established by the Social Security Act of 1935 and is codified in (). It was created in 1935 as the "Social Security Board", then assumed its present name in 1946. Its current leader is Kilolo Kijakazi, who serves on an acting basis.
SSA offers its services to the public through 1,200 field offices, a website, and a national toll-free number. Field offices, which served 43 million individuals in 2019, were reopened on April 7, 2022 after being closed for two years due to the COVID-19 pandemic.
SSA is headquartered in Woodlawn, Maryland, just to the west of Baltimore, at what is known as Central Office. In addition to its 1,200 field offices, the agency includes 10 regional offices, 8 processing centers, and 37 Teleservice Centers. The agency employs about 60,000 people. Headquarters non-supervisory employees of SSA are represented by American Federation of Government Employees Local 1923.
SSA operates the largest government program in the United States. In fiscal year (FY) 2022, the agency expects to pay out $1.2 trillion in Social Security benefits to 66 million individuals. In addition, SSA expects to pay $61 billion in SSI benefits to 7.5 million low-income individuals in FY 2022.
History
The Social Security Act created a Social Security Board (SSB), to oversee the administration of the new program. It was created as part of President Franklin D. Roosevelt's New Deal with the signing of the Social Security Act of 1935 on August 14, 1935. The Board consisted of three presidentially appointed executives, and started with no budget, no staff, and no furniture. It obtained a temporary budget from the Federal Emergency Relief Administration headed by Harry Hopkins. The first counsel for the new agency was Thomas Elliott, one of Felix Frankfurter's "happy hot dogs".
The first Social Security office opened in Austin, Texas, on October 14, 1936. Social Security taxes were first collected in January 1937, along with the first one-time, lump-sum payments. The first person to receive monthly retirement benefits was Ida May Fuller of Brattleboro, Vermont. Her first check, dated January 31, 1940, was in the amount of US$22.54.
In 1939, the Social Security Board merged into a cabinet-level Federal Security Agency, which included the SSB, the U.S. Public Health Service, the Civilian Conservation Corps, and other agencies. In January 1940, the first regular ongoing monthly benefits began.
In 1946, the SSB was renamed the Social Security Administration under President Harry S. Truman's Reorganization Plan.
In 1953, the Federal Security Agency was abolished and SSA was placed under the Department of Health, Education, and Welfare, which became the Department of Health and Human Services in 1980. In 1994, Congress amended non-positive law and returned SSA to the status of an independent agency in the executive branch of government. In 1972, Cost of Living Adjustments (COLAs) were introduced into SSA programs to deal with the effects of inflation on fixed incomes.
Previous Social Security Commissioners
List of previous Social Security Commissioners
Headquarters
SSA was one of the first federal agencies to have its national headquarters outside of Washington, D.C., or its adjacent suburbs. It was located in Baltimore initially due to the need for a building that was capable of holding the unprecedented amount of paper records that would be needed. Nothing suitable was available in Washington in 1936, so the Social Security Board selected the Candler Building on Baltimore's harbor as a temporary location. Soon after locating there, construction began on a permanent building for SSA in Washington that would meet their requirements for record storage capacity. However, by the time the new building was completed, World War II had started, and the building was commandeered by the War Department. By the time the war ended, it was judged too disruptive to relocate the agency to Washington. The Agency remained in the Candler Building until 1960, when it relocated to its newly built headquarters in Woodlawn.
The road on which the headquarters is located, built especially for SSA, is named Security Boulevard (Maryland Route 122) and has since become one of the major arteries connecting Baltimore with its western suburbs. Security Boulevard is also the name of SSA's exit from the nearby Baltimore Beltway (Interstate 695). A nearby shopping center has been named Security Square Mall, and Woodlawn is often referred to informally as "Security." Interstate 70, which runs for thousands of miles from Utah to Maryland, terminates in a park and ride lot that adjoins the SSA campus.
Due to space constraints and ongoing renovations, many headquarters employees work in leased space throughout the Woodlawn area. Other SSA components are located elsewhere. For example, the headquarters (also known as Central Office) of SSA's Office of Disability Adjudication and Review is located in Falls Church, Virginia.
Field Offices
SSA has a network of more than 1,200 community-based field offices. In fiscal year 2019, 43 million individuals visited these field offices to apply for benefits, get an original or replacement Social Security card, or receive other services. Field offices reopened in April 2022, after being closed for two years due to the COVID-19 pandemic.
SSA provides a field office locator service, where members of the public can find office phone numbers and addresses.
SSA also provides services through a national toll-free number (1-800-772-1213) and a website. Retirement and disability benefits can be applied for online. For survivor benefits, however, members of the public must call or visit SSA in person to apply. In most states, individuals seeking a replacement Social Security card can apply for one online.
Members of the public can also apply for Supplemental Security Income at SSA's field offices. Field office staff will also assist SSI applicants with an application for food assistance through the SNAP program.
Program Service Centers
Much of the actual processing of initial benefits and subsequent adjustments to benefits is done in six large Program Service Centers located around the country.
The two main positions in Program Service Centers have long been Claims Authorizers and Benefits Authorizers. Claims Authorizers, now sometimes called claims specialists, establish initial benefits for program recipients. Benefits Authorizers process complicated changes of entitlements to existing beneficiaries, including life events, overpayments, underpayments, and so forth. The claims position is the higher-ranking of the two and initially required a college degree whereas the post-entitlement position did not. For decades, post-entitlement actions have been processed through a system known as Manual Adjustment, Credit and Award Processes (MADCAP).
The six service centers are:
Northeastern Program Service Center, Jamaica, Queens, New York (as of late 1980s; previously in Rego Park, Queens and College Point, Queens)
Mid-Atlantic Program Service Center, Philadelphia, Pennsylvania
Southeastern Program Service Center, Birmingham, Alabama
Great Lakes Program Service Center, Chicago, Illinois
Mid-America Program Service Center, Kansas City, Missouri
Western Program Service Center, Richmond, California (as of mid 1970s; previously in San Francisco)
They have been located in these six cities going back to at least the early 1950s.
The origins of the payment centers date back to 1942, when they were known as Area Offices. The first one was established in Philadelphia, with ones in New York, Chicago, San Francisco, and New Orleans, Louisiana, soon following.
In addition, there are specialized processing centers for the Office of Earnings and International Operations and the Office of Disability Operations, both located in Baltimore.
Before the mid-1970s, the Program Service Centers were called Payment Centers. By the late 1960s, the Payment Centers had acquired a reputation as sources of poor bureaucratic performance that people did not want to work in, and a reorganization under a modules system was undertaken during the 1970s in an effort to improve matters. Each module would be assigned a certain block of social security numbers and it would process all aspects of a claim, from initial entitlement through various changes, notifications to beneficiaries, and so forth. Decades later, the modules system was still seen as one of the great improvements in SSA processing.
The centers have each employed around two thousand people or more, giving them a major local economic impact, and even relocations within the same metropolitan area have created political conflict.
When, in the early 1970s, SSA and the General Services Administration announced plans to move payment center operations out of San Francisco and across the bay to Richmond, in the East Bay, the move was opposed by Congressman Phillip Burton, who represented San Francisco.
Burton's efforts were in vain, however, as construction in a redevelopment area in Richmond commenced and the move was made around 1975.
Similarly, in the late 1970s, SSA, the General Services Administration, and the Carter administration devised a plan to move the program service center from its main location, in two leased buildings on Horace Harding Expressway in Lefrak City in Rego Park, to a new federal building planned for a revitalization zone in the center of the Jamaica area of Queens. The move was championed by Congressman Joseph P. Addabbo, who represented Jamaica and whose district would gain the over 2,000 federal workers involved, but was opposed by Congressman Benjamin Rosenthal, whose district would lose them. According to Rosenthal, the potential negative impact of the move affected the Elmhurst and Corona neighborhoods most strongly.
The move was also supported by Representative Geraldine Ferraro, another powerful Queens figure, who sat on the House Public Works Committee.
The dispute was aired in Congressional hearings and embroiled Senator Daniel Patrick Moynihan and developer Richard Lefrak, supporting and opposing the move respectively, as well.
In the event, the move went forward and the new, 11-story building in Jamaica – by then named the Joseph P. Addabbo Federal Building, as the congressman had died in the interim – opened in 1988.
Coverage
Initially, only 56 percent of the jobs in the United States were covered by Social Security. Today, the system is nearly universal, with 94 percent of individuals in paid employment in the United States working in covered employment.
State and local government workers are not required to participate in the Social Security program if they participate in a public retirement system through their employers. However, state and local governments, through agreements known as Section 218 agreements, may elect to participate in the program. Of the 23.2 million state and local workers in the United States, about 6.6 million are not covered by Social Security. Other workers not covered by Social Security include federal employees hired before 1984, railroad workers, some family employees, some students, and some members of the clergy.
If a job is not covered by Social Security, workers and employers do not pay Social Security payroll taxes. Social Security retirement and disability benefits are not payable unless individuals have sufficient work in Social Security covered employment. Individuals who work part of their careers in covered employment and part of their careers in non-covered employment and who receive pensions from non-covered employment may have their Social Security benefits reduced through the Windfall Elimination Provision (WEP) or the Government Pension Offset (GPO).
Railroad workers were covered by the Railroad Retirement Board before Social Security was founded. Today, they still are, though a portion of each railroad pension is designated as "equivalent" to Social Security. Railroad workers also participate in Medicare. All state and local government employees hired since 1986, or who are covered by Section 218 Agreements, participate in Medicare even if they are not covered for purposes of Social Security benefits.
Old age, survivors and disability
SSA administers the retirement, survivors, and disability social insurance programs, which can provide monthly benefits to aged or disabled workers, their spouses and children, and to the survivors of insured workers. In 2010, more than 54 million Americans received approximately $712 billion in Social Security benefits. The programs are primarily financed by taxes that employers, employees, and the self-employed pay annually. These revenues are placed into a special trust fund. These programs are collectively known as Retirement, Survivors, Disability Insurance (RSDI).
SSA administers its disability program partly through its Office of Disability Adjudication and Review (ODAR), which has regional offices and hearing offices across the United States. ODAR publishes a manual, called HALLEX, which contains instructions for its employees regarding how to implement its guiding principles and procedures.
The RSDI program is the primary benefits program administered by the U.S. federal government, and for some beneficiaries it is a vital source of income. Increasing access to this benefit program for low-income or homeless individuals is one of SSA's goals. SSA is a member of the United States Interagency Council on Homelessness and works with other municipal, county, state, local and federal partners to increase access to and approval of SSI/SSDI benefits for those who are eligible.
Supplemental Security Income (SSI)
SSA also administers the Supplemental Security Income (SSI) program, which is needs-based, for the aged, blind, or disabled. Prior to the 1972 Amendments to the Social Security Act, low-income aged, blind, or disabled persons received benefits from state-run programs called Old-Age Assistance, Aid to the Blind, and Aid to the Permanently and Totally Disabled. These programs received federal funding, but varied in terms of eligibility requirements and benefit payments. The 1972 Amendments replaced these programs with the SSI program. SSA was assigned responsibility for the SSI program and began operations in 1974. Federal benefit payments up to $914 for an SSI individual and $1,371 for an SSI couple are available from the program. SSI benefits are paid out of the general revenue of the United States of America. Some states supplement the federal amount.
Because SSI is needs-based, eligibility is restricted to persons with limited income and resources. In addition, eligibility is generally restricted to U.S. citizens, nationals, and some other groups (such as some refugees) who reside in one of the 50 U.S. states, the District of Columbia, or the Northern Mariana Islands. U.S. citizens and nationals who reside in American Samoa, Guam, Puerto Rico, and the U.S. Virgin Islands are not eligible for SSI. In 2019, 8 million individuals received SSI, including 1.1 million disabled children, 4.6 million disabled adults, and 2.3 million persons 65 or older.
In some cases, individuals may be eligible for Social Security (RSDI) benefits and SSI benefits. For example, a disabled individual who worked in Social Security-covered employment and who has limited income and resources may receive a Social Security disability benefit (due to employment prior to disability) and a partial SSI benefit (due to limited income and resources). SSA refers to these beneficiaries as "concurrent" beneficiaries.
Medicare
The administration of the Medicare program is a responsibility of the Centers for Medicare and Medicaid Services, but SSA offices are used for determining initial eligibility, some processing of premium payments, and for limited public contact information. They also administer a financial needs-based program called Extra Help, which helps beneficiaries pay the premiums, deductibles, and coinsurance associated with prescription drug coverage under Part D of Medicare. Benefits under this program are estimated to be worth about $5,000 per year. Individuals may apply online for the Extra Help program or by calling SSA.
Operations
To ensure consistent and efficient treatment of Social Security beneficiaries across its vast bureaucracy, SSA has compiled a giant book known as the Program Operations Manual System (POMS) which governs practically all aspects of SSA's internal operations. POMS describes, in excruciating detail, a huge variety of situations regularly encountered by SSA personnel, and the exact policies and procedures that apply to each situation.
Automation
While the establishment of Social Security predated the invention of the modern digital computer, punched card data processing was a mature technology, and the Social Security system made extensive use of automated unit record equipment from the program's inception. This allowed the Social Security Administration to achieve a high level of efficiency. SSA expenses have always been a small fraction of benefits paid. As a percentage of assets, the administration costs are 0.39%.
Adjudication
SSA operates its own administrative adjudication system, which has original jurisdiction when claims are denied in part or in full. SSA decisions are issued by Administrative Law Judges and Senior Attorney Adjudicators (supported by about 6,000 staff employees) of the U.S. Office of Hearings Operations (OHO), formerly the Office of Disability Adjudication and Review (ODAR), who hear and decide challenges to SSA decisions at locations throughout the United States. Dissatisfied claimants can appeal to the Appeals Council, and if still dissatisfied can appeal to a U.S. District Court.
Over the years, OHO has developed its own procedural system, which is documented in the Hearings, Appeals and Litigation Law Manual (HALLEX). ODAR was formerly known as the Office of Hearings and Appeals (OHA) and, prior to the 1970s, the Bureau of Hearings and Appeals. The name was changed to ODAR in 2007 to reflect the fact that about 75% of the agency's docket consists of disability cases. OHO also adjudicates disputes relating to retirement claims and has jurisdiction when the paternity of a claimant or the validity of a marriage is at issue in a claim filed for benefits under the earnings record of a spouse or parent. The agency also adjudicates a limited number of Medicare claim issues, a residual legacy from when SSA was part of the U.S. Department of Health and Human Services.
Statistical publications
Each year, just before Mother's Day, SSA releases a list of the names most commonly given to newborn babies in the United States in the previous year, based on applications for Social Security cards. The report includes the 1,000 most common names for both genders. The Popular Baby Names page on the SSA website provides the complete list and allows searches for past years and particular names. For privacy reasons, SSA does not publish data for names with fewer than five occurrences in any given year.
Criticism and controversy
Bloomberg reported that SSA made a $32.3 billion mistake when reporting 2009 U.S. wage statistics. The error, when corrected, further reduced the average 2009 U.S. wage to $39,055; the average 2009 U.S. wage had previously been reported as $39,269.
See also
Social programs in the United States
Public finance
Social Security Death Index
Social Security Disability Insurance
Michael J. Astrue, Commissioner of the Social Security Administration from 2007 to 2013
NOSSCR, National Organization of Social Security Claimants' Representatives
Richardson v. Perales
Ticket to Work, SSA's Ticket to Work Program
Title 20 of the Code of Federal Regulations
Data.gov
USAFacts
SSA impersonation scam
References
Further reading
Social Security Disability Advocate's Handbook, by David Traver, James Publishing, 2006,
Social Security Handbook, Germania Publishing, 2006.
External links
Program Operations Manual System (POMS) – public online version of the procedure by which SSA employees process decisions about benefits
Historical Background And Development Of Social Security by Social Security Administration
Social Security Administration on USAspending.gov
Social Security Administration in the Federal Register
Papers of Charles I. Schottland, former Commissioner of Social Security, Dwight D. Eisenhower Presidential Library
Social Security Administration Office Locations
SSA Pub. No 25-1556. Teleservice Representative Basic Training Curriculum Introduction Unit 1 Lessons 01-08 Student. pp. 7–15. Social Security Administration. April 2006.
Baltimore County, Maryland landmarks
Government agencies established in 1935
Independent agencies of the United States government
New Deal agencies
Administration | wiki |
Nettle refers to any of various plant species.
Nettle or nettles may also refer to:
Vessels
, various ships with the name
, two ships
, a United States Coast Guard coastal freighter
Creeks
Nettle Creek (Grass River), a stream in New York, United States
Nettle Creek (Mad River), a stream in Ohio, United States
Nettle Creek, Innot Hot Springs, Queensland Australia
People
Nettles (surname), a list of people surnamed Nettles or Nettle
Other uses
Nettle (cryptographic library), a cryptographic library developed by Niels Möller in 2001
"Nettles", a song from the single Teddy Picker by the Arctic Monkeys
Sea nettle, the jellyfish genus Chrysaora
See also
Nettie (disambiguation) | wiki |
Country Club – a village in the United States, in Andrew County, Missouri.
Villages in Missouri
Chemistry (from Egyptian kēme (chem), meaning "earth") is the physical science concerned with the composition, structure, and properties of matter, as well as the changes it undergoes during chemical reactions.
Below is a list of chemistry-related articles. Chemical compounds are listed separately at list of organic compounds, list of inorganic compounds or list of biomolecules.
References
Indexes of science articles | wiki |
Santa Ines (Santa Inés or Santa Inês) may refer to one of the following places:
Places
Brazil
Santa Inês, a city in Brazil
Chile
Santa Inés Island, an island off the coast of southern Chile
Mexico
Santa Inés del Monte, Oaxaca
Santa Inés de Zaragoza, Oaxaca
Santa Inés Yatzeche, Oaxaca
Spain
Santa Inés, Province of Burgos, a village and municipality in Castile and León, Spain | wiki |
Demarcation is the act of creating a boundary around a place or thing.
Demarcation may also refer to:
Demarcation line, a temporary border between countries
Demarcation problem, the question of which practices of doing science permit the resulting theories to lie within the boundaries of knowledge
Demarcation dispute, may arise when two different trade unions both claim the right to represent the same class or group of workers
Demarcation point, in telephony, the point at which the telephone company network ends and connects with the wiring at the customer premises
Demarcation transactions, starting and ending database transactions using begin, commit, and rollback methods | wiki |
Transparent Horizon is a 1975 black Cor-ten steel sculpture by Louise Nevelson, installed on the Massachusetts Institute of Technology campus, in Cambridge, Massachusetts, United States. The artwork was among the first funded by MIT's "Percent-For-Art" program, which allocates $500,000 for art commissions for new architectural renovations on campus. The sculpture is an amalgam of two of Nevelson's previous works, Tropical Tree IV and Black Flower Series IV. The sculpture has been the target of vandalism.
References
External links
Transparent Horizon, 1975 at cultureNOW
1975 sculptures
Massachusetts Institute of Technology campus
Outdoor sculptures in Cambridge, Massachusetts
Steel sculptures in Massachusetts
Vandalized works of art in Massachusetts | wiki |
U.S.A. contro John Lennon (The U.S. vs. John Lennon) – a 2006 documentary directed by David Leaf and John Scheinfeld
The U.S. vs. John Lennon – a 2006 album, the soundtrack of the documentary
Christ the Vine may refer to:
Christ the Vine (Moskos), a tempera painting by Leos Moskos
Christ the Vine (Victor), an egg tempera painting by Victor | wiki |
Toyobo is one of Japan's top makers of fibers and textiles, including synthetic fibers (polyester, nylon and acrylics) and natural fibers, such as cotton and wool.
History
Toyobo was established in 1882 by Eiichi Shibusawa as a cotton-spinning company in the period following the Meiji Restoration. By the 1930s, Toyobo was the world's largest cotton-spinning company. In the 1960s, the company started to manufacture synthetic fibers and films.
In August 2013, Toyobo bought the Spanish company Spinreact for 22.3 million euros.
In 2015, Toyobo provided 40% of the yarn for airbags worldwide, and 50% of Japan's food packaging films. In March 2017, Toyobo introduced Cocomi, a t-shirt that tracks a driver's heartbeats and activates an alarm if somnolence is detected. In August 2017, Toyobo established a new group in Europe, Toyobo Chemicals Europe GmbH, with a focus on marketing specialty chemical products, and a new manufacturing base for airbag fabrics.
In March 2018, Toyobo paid $66 million to settle a case of defective bulletproof vests sold to the US Government between 2001 and 2005.
Activities
Toyobo's textiles are designed for clothing, home furnishings, and for industrial uses. Textiles include spandex yarn for apparel, polyurethane fiber for pantyhose, yarns for airbags and tire cords and synthetic fibers for apparel. Toyobo is also engaged in the spinning, weaving, knitting, dyeing, sewing, and the wholesaling and trading of textiles in Japan and internationally.
Toyobo also manufactures plastic films, and resins. Biochemical products such as reagents, medical products (e.g. fiber membranes for artificial organs), and purification devices are also manufactured by the company.
The company operates across Japan, China, South Korea, Singapore, Malaysia, Australia, United States, and Germany and is listed on the Tokyo Stock Exchange, being a component of the Nikkei 225 stock index.
Gallery
See also
Biodefense
References
External links
Official global website
Textile companies of Japan
Defense companies of Japan
Chemical companies of Japan
Clothing companies of Japan
Disaster preparedness
Medical technology companies of Japan
Manufacturing companies based in Osaka
Japanese companies established in 1882
Manufacturing companies established in 1882
Companies listed on the Tokyo Stock Exchange
Japanese brands | wiki |
Trochilics is the science of rotary motion, or work done with wheels.
Trochilics may also refer to:
Trochilic Engine, a type of swing-piston engine conceptualized in the 1990s but never built
Low-alloy special purpose steel is a grade of tool steel characterized by its proportion of iron to other elements, the kind of elements in its composition, and its treatment during the manufacturing process. The three ASTM-established grades of low-alloy special purpose steel are L2, L3, and L6. The series originally also included L1, L4, L5, and L7, as well as three F grades (F1, F2, and F3), but because of falling demand only grades L2 and L6 remain in production.
L2
L2 grade steel comes in medium-carbon (0.45%-0.65%) and high-carbon (0.65%-1.1%) formats.
L6
L6 is the most commonly encountered and most frequently made variety of these steels. It is known for its high wear resistance and its toughness.
Applications
Applications for the L-series of tool steels have included precision gauges, bearings, rollers, cold-heading dies, swaging dies, feed fingers, spindles, jigs, shears, punches, and drills. They are also used for machining arbors, cams, chucks, and collets.
References
Steel | wiki |
Ultraviolet-sensitive beads (UV beads) are beads that are colorful in the presence of ultraviolet radiation. Ultraviolet rays are present in sunlight and light from various artificial sources and can cause sunburn or skin cancer. The color change in the beads alerts the wearer to the presence of the radiation.
The color change is an example of photochromism.
When the beads are not exposed to ultraviolet rays, they are colorless and either translucent or opaque. When sunlight falls on the beads, however, they instantly turn red, orange, yellow, blue, purple, or pink.
References
External links
Description of ultraviolet-sensitive beads and related products
Beadwork
Craft materials
Jewellery components
Ultraviolet radiation | wiki |
Drip Drop may refer to:
"Drip Drop" (Safura song), a 2010 song by Azerbaijani singer Safura Alizadeh
"Drip Drop" (Leiber and Stoller song), a 1958 song performed by The Drifters and also Dion
"Drip Drop", a song on the album V by Vanessa Hudgens
See also
Drip (disambiguation)
Drop (disambiguation) | wiki |
2017 hurricane season may refer to:
2017 Atlantic hurricane season
2017 Pacific hurricane season | wiki |
Fruit slice may refer to:
A type of cake, otherwise known as flies graveyard
A type of cake, otherwise known as gur cake
The candy gummy known as fruit snack | wiki |
Girly girl is a term for a girl or woman who presents herself in a traditionally feminine way. This may include wearing pink, using make-up, using perfume, dressing in skirts and dresses, and engaging in activities that are traditionally associated with femininity, such as talking about relationships.
The term is often used in a derogatory manner, but it can also be used in a more positive way, especially when considering the fluidity of gender roles. Being a "girly girl" can then be seen as a fluid and partially embodied position – a form of discourse taken up, discarded or modified for tactical or strategic purposes.
Social determinants
The female opposite of a girly girl is a tomboy. The male counterpart of a girly girl is a "man's man". The increasing prevalence of girly girls in the early 21st century has been linked to a supposed "post-feminist, post–new man construction of masculinity and femininity in mutually exclusive terms", as opposed to the more blurred gender representations of previous decades.
See also
References
Femininity
LGBT slang
Slang terms for women
Youth rights
Stereotypes of women | wiki |
Project M is a video game modification of the 2008 fighting game Super Smash Bros. Brawl.
Project M may also refer to:
Project M (NASA), a proposed project by NASA to send a robonaut to the Moon
Project M, the fictional creators of DC Comics Creature Commandos
Project M, developers of the Metroid: Other M game
ProjectM, an OpenGL version of the music visualizer MilkDrop | wiki |
Gradus is the shortened form of Gradus ad Parnassum, a Latin phrase meaning "Steps to Parnassus".
Gradus may also refer to:
step (gradus), an ancient Roman unit of length
gradus deiectio, Latin for "Reduction in rank"
Gradus Gravis affair, Latin for "Serious Degree"
People with the surname
Kamila Gradus (born 1967), retired Polish marathon runner | wiki |
The Selective Service Act of 1917 or Selective Draft Act () authorized the United States federal government to raise a national army for service in World War I through conscription. It was envisioned in December 1916 and brought to President Woodrow Wilson's attention shortly after the break in relations with Germany in February 1917. The Act itself was drafted by then-Captain (later Brigadier General) Hugh S. Johnson after the United States entered World War I by declaring war on Germany. The Act was canceled with the end of the war on November 11, 1918. The Act was upheld as constitutional by the United States Supreme Court in 1918.
History
Origins
At the time of World War I, the US Army was small compared with the mobilized armies of the European powers. As late as 1914, the Regular Army had under 100,000 men, while the National Guard (the organized militias of the states) numbered around 115,000. The National Defense Act of 1916 authorized the growth of the Army to 165,000 and the National Guard to 450,000 by 1921, but by 1917 the Army had only expanded to around 121,000, with the National Guard numbering 181,000.
By 1916, it had become clear that any participation by the United States in the conflict in Europe would require a far larger army. While President Wilson at first wished to use only volunteer troops, it soon became clear that this would be impossible. When war was declared, Wilson asked for the Army to increase to a force of one million. But by six weeks after war was declared, only 73,000 men had volunteered for service. Wilson accepted the recommendation of Secretary of War Newton D. Baker for a draft.
General Enoch H. Crowder, the Judge Advocate General of the United States Army, when first consulted, was opposed. But later, with the assistance of Captain Hugh Johnson and others, Crowder guided the bill through Congress and administered the draft as the Provost Marshal General.
A problem that came up in the writing of the bill and its negotiation through Congress was the desire of former President Theodore Roosevelt to assemble a volunteer force to go to Europe. President Wilson and others, including army officers, were reluctant to permit this for a variety of reasons. The final bill contained a compromise provision permitting the president to raise four volunteer divisions, a power Wilson did not exercise.
To persuade an uninterested populace to support the war and the draft, George Creel, a veteran of the newspaper industry, became the United States' official war propagandist. He set up the Committee on Public Information, which recruited 75,000 speakers, who made 750,000 four-minute speeches in 5,000 cities and towns across America. Creel later helped form the American Alliance for Labor and Democracy, with union leader Samuel Gompers as president, to win working-class support for the war and "unify sentiment in the nation". The AALD had branches in 164 cities, and many labor leaders went along although "rank-and-file working class support for the war remained lukewarm ...", and the campaign was ultimately unsuccessful. Many prominent Socialist leaders became pro-war, though the majority did not.
Effects
By the guidelines set down by the Selective Service Act, all males aged 21 to 30 were required to register to potentially be selected for military service. At the request of the War Department, Congress amended the law in August 1918 to expand the age range to include all men 18 to 45, and to bar further volunteering. By the end of World War I, some two million men volunteered for various branches of the armed services, and some 2.8 million had been drafted. This meant that more than half of the almost 4.8 million Americans who served in the armed forces were drafted. Due to the effort to incite a patriotic attitude, the World War I draft had a high success rate, with fewer than 350,000 men "dodging" the draft.
Differences from previous drafts
The most important difference between the draft established by the Selective Service Act of 1917 and the Civil War draft was that substitutes were not allowed. During the Civil War, a drafted man could avoid service by hiring another man to serve in his place. A largely inaccurate perception spread that substitutes were used primarily by wealthy men, and the practice was resented by those who could not afford substitutes or who considered them dishonorable.
The practice of substitutes was prohibited in Section Three of the Selective Service Act of 1917:
No person liable to military service shall hereafter be permitted or allowed to furnish a substitute for such service; nor shall any substitute be received, enlisted, or enrolled in the military service of the United States; and no such person shall be permitted to escape such service or to be discharged therefrom prior to the expiration of his term of service by the payment of money or any other valuable thing whatsoever as consideration for his release from military service or liability thereto.
National registration days and termination
During World War I there were three registrations.
The first, on June 5, 1917, was for all men between the ages of 21 and 30.
The second, on June 5, 1918, registered those who attained age 21 after June 5, 1917. A supplemental registration, included in the second registration, was held on August 24, 1918, for those becoming 21 years old after June 5, 1918.
The third, on September 12, 1918, was for men age 18 through 45.
The Selective Service Act was upheld by the United States Supreme Court in the Selective Draft Law Cases, . The Solicitor General's argument and the court's opinion were based primarily on Kneedler v. Lane, 45 Pa. 238, 252 (1863) and Vattel's 1758 treatise The Law of Nations.
After the signing of the armistice of November 11, 1918, the activities of the Selective Service System were rapidly curtailed. On March 31, 1919, all local, district, and medical advisory boards were closed, and on May 21, 1919, the last state headquarters closed operations. The Provost Marshal General was relieved from duty on July 15, 1919, thereby finally terminating the activities of the Selective Service System of World War I.
Draft categories
Conscription was by class. The first candidates were to be drawn from Class I. Members of each class below Class I were available only if the pool of all available and potential candidates in the class above it was exhausted.
African-Americans
The American military was entirely segregated at the time of World War I. While the Army had several regiments of black "Buffalo Soldiers", many politicians such as Sen. James K. Vardaman (Mississippi) and Sen. Benjamin Tillman (South Carolina) staunchly opposed any expanded military role for black Americans. Nevertheless, the War Department decided to include black people in the draft. A total of 2,290,527 black Americans were ultimately registered for the draft during the two calls of June 2 and September 12, 1917 – 9.6 percent of the total American pool for potential conscription.
Draft board officials were told to tear off the lower left-hand corner of the Selective Service form of a black registrant, indicating his designation for segregated units. The August 1917 Houston Riot, when armed black soldiers fired upon Houston police and civilians, also affected the War Department's decision-making. The great majority of black soldiers were employed only in labor functions, such as road-building and freight-handling. Only two black combat units were ultimately established – the 92nd and 93rd Infantry Divisions. Black Americans were entirely excluded from the United States Marine Corps and were consigned to menial labor in the United States Navy for the duration of the war.
See also
Conscription in the United States
Selective Service System
Footnotes
External links
Geheran, Michael: Selective Service Act , in: 1914-1918-online. International Encyclopedia of the First World War.
Strauss, Lon: Social Conflict and Control, Protest and Repression (USA) , in: 1914-1918-online. International Encyclopedia of the First World War.
Wood, Margaret. "World War I: Conscription Laws." Library of Congress Blog. September 13, 2016. Last accessed May 9, 2017.
"World War I." Library of Congress Online. Last accessed May 9, 2017.
"An Act To authorize the President to increase temporarily the Military Establishment of the United States." Statutes at Large. May 18, 1917, H.R. 3545. Last accessed May 9, 2017.
An Act Amending the Act entitled "An Act to authorize the President to increase temporarily the Military Establishment of the United States," approved May eighteenth, nineteen hundred and seventeen. Statutes at Large. Web. August 31, 1918, H.R. 12731. Last accessed May 9. 2017.
1917 in American law
Conscription in the United States
United States federal defense and national security legislation
United States in World War I
1917 in military history
1917 in the United States
65th United States Congress
Conscription law
World War I legislation | wiki |
Beautiful People | wiki |
Poetic Justice may refer to:
Film
Poetic Justice – a 1993 film directed by John Singleton
Music
Poetic Justice – a 1992 album by Lillian Axe
Poetic Justice – a 1993 album, the soundtrack of the film
Poetic Justice – a 1996 album by Stan Rogers
Poetic Justice – a 2013 single by Kendrick Lamar
Stone Lake is in Stone Lake State Park, a California State Park, located in Sacramento County, California. The open space property protects two rare natural Central Valley lakes and their surrounding riparian habitat and grassland areas.
The County of Sacramento operates Stone Lake.
External links
Official Stone Lake State Park site
State parks of California
Lakes of Sacramento County, California
Lakes of California
Parks in the San Joaquin Valley
Parks in Sacramento County, California | wiki |
In music, a bow is a tensioned stick which has hair (usually horse-tail hair) coated in rosin (to facilitate friction) affixed to it. It is moved across some part (generally some type of strings) of a musical instrument to cause vibration, which the instrument emits as sound. The vast majority of bows are used with string instruments, such as the violin, viola, cello, and bass, although some bows are used with musical saws and other bowed idiophones.
Materials and manufacture
A bow consists of a specially shaped stick with other material forming a ribbon stretched between its ends, which is used to stroke the string and create sound. Different musical cultures have adopted various designs for the bow. For instance, in some bows a single cord is stretched between the ends of the stick. In the Western tradition of bow making—bows for the instruments of the violin and viol families—a hank of horsehair is normally employed.
The manufacture of bows is considered a demanding craft, and well-made bows command high prices. Part of the bow maker's skill is the ability to choose high quality material for the stick. Historically, Western bows have been made of pernambuco wood from Brazil. However, pernambuco is now an endangered species whose export is regulated by international treaty, so makers are currently adopting other materials: woods such as Ipê (Tabebuia) and synthetic materials, such as carbon fiber epoxy composite and fiberglass.
For the frog, which holds and adjusts the near end of the horsehair, ebony is most often used, but other materials, often decorative, were used as well, such as ivory and tortoiseshell. Materials such as mother of pearl or abalone shell are often used on the slide that covers the mortise, as well as in round decorative "eyes" inlaid on the side surfaces. Sometimes "Parisian eyes" are used, with the circle of shell surrounded by a metal ring. The metal parts of the frog, or mountings, may be used by the maker to mark various grades of bow, ordinary bows being mounted with nickel silver, better bows with silver, and the finest being gold-mounted. (Not all makers adhere uniformly to this practice.) Near the frog is the grip, which is made of a wire, silk, or "whalebone" wrap and a thumb cushion made of leather or snakeskin. The tip plate of the bow may be made of bone, ivory, mammoth ivory, or metal, such as silver.
A bow maker, or archetier, typically uses between 150 and 200 hairs from the tail of a horse for a violin bow. Bows for other members of the violin family typically have a wider ribbon, using more hairs. There is a widely held belief among string players, neither proven nor disproven scientifically, that white hair produces a "smoother" sound and black hair (used mainly for double bass bows) is coarser and thus produces a "rougher" sound. Lower-quality (inexpensive) bows often use nylon or synthetic hair, and some use bleached horsehair to give the appearance of higher quality. Rosin, or colophony, a hard, sticky substance made from resin (sometimes mixed with wax), is regularly applied to the bow hair to increase friction.
In making a wooden bow, the greater part of the woodworking is done on a straight stick. According to James McKean, "the bow maker graduates the stick in precise gradations so that it is evenly flexible throughout". These gradations were originally calculated by François Tourte, discussed below. To shape the curve or "camber" of the bow stick, the maker carefully heats the stick in an alcohol flame, a few inches at a time, bending the heated stick gradually—using a metal or wooden template to get the model's exact curve and shape.
The art of making wooden bows has changed little since the 19th century. Most modern composite sticks roughly resemble the Tourte design. Various inventors have explored new ways of bow-making. The Incredibow, for example, has a straight stick cambered only by the fixed tension of the synthetic hair.
Types
Slightly different bows, varying in weight and length, are used for the violin, viola, cello, and double bass.
These are generally variations on the same basic design. However, bassists use two distinct forms of the double bass bow. The "French" overhand bow is constructed like the bow used with other bowed orchestral instruments, and the bassist holds the stick from opposite the frog. The "German" underhand bow is broader and longer than the French bow, with a larger frog curved to fit the palm of the hand. The bassist holds the German stick with the hand loosely encompassing the frog. The German bow is the older of the two designs, having superseded the earlier arched bow. The French bow became popular with its adoption in the 19th century by virtuoso Giovanni Bottesini. Both are found in the orchestra, though typically an individual bass player prefers to perform using one or the other type of bow.
Bowing
The characteristic long, sustained, and singing sound produced by the violin, viola, violoncello, and double bass is due to the drawing of the bow against their strings. This sustaining of musical sound with a bow is comparable to a singer using breath to sustain sounds and sing long, smooth, or legato melodies.
The term for playing with the bow is "arco", the Italian word for bow (from the Latin "arcus"); to play arco is thus to play with the bow.
In modern practice, the bow is almost always held in the right hand while the left is used for fingering. When the player pulls the bow across the strings (such that the frog moves away from the instrument), it is called a down-bow; pushing the bow so the frog moves toward the instrument is an up-bow (the directions "down" and "up" are literally descriptive for violins and violas and are employed in analogous fashion for the cello and double bass). Two consecutive notes played in the same bow direction are referred to as a hooked bow; a down-bow that follows a complete down-bow, with the bow lifted and reset toward the frog, is called a retake.
Generally, the player uses down-bow for strong musical beats and up-bow for weak beats. However, this is reversed on the viola da gamba: players of violin family instruments look like they are "pulling" on the strong beats, whereas gamba players look like they are "stabbing" on the strong beats. The difference may result from the different ways players hold the bow in these instrument families: violin/viola/cello players hold the wood part of the bow closer to the palm, whereas gamba players use the opposite orientation, with the horsehair closer. The orientation appropriate to each instrument family permits the stronger wrist muscles (flexors) to reinforce the strong beat.
String players control their tone quality by touching the bow to the strings at varying distances from the bridge, emphasizing the higher harmonics by playing sul ponticello ("on the bridge"), or reducing them, and so emphasizing the fundamental frequency, by playing sul tasto ("on the fingerboard").
Occasionally, composers ask the player to use the bow by touching the strings with the wood rather than the hair; this is known by the Italian phrase col legno ("with the wood"). Coll'arco ("with the bow") is the indication to use the bow hair to create the sound in the normal way.
History
Origin
The question of when and where the bow was invented is of interest because the technique of using it to produce sound on a stringed instrument has led to many important historical and regional developments in music, as well as the variety of instruments used.
Pictorial and sculptural evidence from early Egyptian, Indian, Hellenic, and Anatolian civilizations indicate that plucked stringed instruments existed long before the technique of bowing developed. In spite of the ancient origins of the bow and arrow, it would appear that bowed string instruments only developed during a comparatively recent period.
Eric Halfpenny, writing in the 1988 Encyclopædia Britannica, says, "bowing can be traced as far back as the Islamic civilization of the 10th century ... it seems likely that the principle of bowing originated among the nomadic horse riding cultures of Central Asia, whence it spread quickly through Islam and the East, so that by 1000 it had almost simultaneously reached China, Java, North Africa, the Near East and Balkans, and Europe." Halfpenny notes that in many Eurasian languages the word for "bridge" etymologically means "horse," and that the Chinese regarded their own bowed instruments (huqin) as having originated with the "barbarians" of Central Asia.
The Central Asian theory is endorsed by Werner Bachmann, writing in The New Grove Dictionary of Music and Musicians. Bachmann cites evidence for bowed instruments in a 10th-century Central Asian wall painting from what is now the city of Kurbanshaid in Tajikistan.
Circumstantial evidence also supports the Central Asian theory. All the elements that were necessary for the invention of the bow were probably present among the Central Asian horse riding peoples at the same time:
In a society of horse-mounted warriors (the horse peoples included the Huns and the Mongols), horsehair obviously would have been available.
Central Asian horse warriors specialized in the military bow, which could easily have served the inventor as a temporary way to hold horsehair at high tension.
To this day, horsehair for bows is taken from places with harsh cold climates, including Mongolia, as such hair offers a better grip on the strings.
Rosin, crucial for creating sound even with coarse horsehair, is used by traditional archers to maintain the integrity of the string and (mixed with beeswax) to protect the finish of the bow.
(From this, one can imagine a plausible origin: a warrior who had just rosined his archery equipment might have idly stroked a harp or lyre with a rosin-dusted finger, produced a brief sustained tone, and so been led to restring an archery bow with horsehair, creating the earliest musical bow.)
However the bow was invented, it spread quickly and widely. The Central Asian horse peoples occupied a territory that included the Silk Road, along which merchants and travelers transported goods and innovations rapidly for thousands of miles (including, via India, by sea to Java). This would account for the near-simultaneous appearance of the musical bow in the many locations cited by Halfpenny.
Arabic rabāb
The Arabic rabāb is a type of bowed string instrument, so named no later than the 8th century, which spread via Islamic trading routes over much of North Africa, the Middle East, parts of Europe, and the Far East. It is the earliest known bowed instrument and the ancestor of all European bowed instruments, including the rebec, lyra, and violin.
Modern Western bow
The kind of bow in use today was brought into its modern form largely by the bow maker François Tourte in 19th-century France. Pernambuco wood, which was imported into France to make textile dye, was found by the early French bow masters to have just the right combination of strength, resiliency, weight, and beauty. According to James McKean, Tourte's bows, "like the instruments of Stradivari, are still considered to be without equal."
Historical bows
The early 18th-century bow known as the Corelli-Tartini model is also called the Italian 'sonata' bow. By 1725 this basic Baroque bow had supplanted an earlier French dance bow, which was short with a small head (point), was held with the thumb under the hair, and was played with short, quick strokes for rhythmic dance music. The Italian sonata bow was longer, from 24 to 28 inches (61–71 cm), with a straight or slightly convex stick. The head is described as a pike's head, and the frog is either fixed (the clip-in bow) or has a screw mechanism; the screw is an early improvement, indicative of further changes to come. Compared to a modern Tourte-style bow, the Corelli-Tartini model is shorter and lighter, especially at the tip; the balance point is lower on the stick, the hair more yielding, and the ribbon of hair narrower, at about 6 mm wide.
In the early bow (the Baroque bow), the natural bow stroke is a non-legato norm, producing what Leopold Mozart called a "small softness" at the beginning and end of each stroke.
A lighter, clearer sound is produced, and quick notes are cleanly articulated without the hair leaving the string.
A fine example of such a bow, described by David Boyden, is part of the Ansley Salz Collection at the University of California, Berkeley. It was made around 1700 and is attributed to Stradivari.
Towards the middle of the 18th century, bow design moved into the Transitional period: the separation of hair from stick became greater, particularly at the head. This greater separation was necessary because the stick became longer and straighter, approaching a concave shape.
Until the advent of Tourte's bow there was no standardization of bow features during this Transitional period, and every bow differed in weight, length, and balance.
In particular, head shapes varied enormously, even among bows by the same maker.
Another transitional type of bow may be called the Cramer bow, after the violinist Wilhelm Cramer (1746–99), who lived the early part of his life in Mannheim (Germany) and, after 1772, in London. This bow, and models comparable to it made in Paris, generally prevailed between the gradual demise of the Corelli-Tartini model and the birth of the Tourte, that is, roughly 1750 until 1785.
In the view of top experts, the Cramer bow represents a decisive step towards the modern bow.
The Cramer bow and others like it were gradually rendered obsolete by the advent of François Tourte's standardized bow. The hair on the Cramer bow is wider than on the Corelli model but still narrower than on a Tourte; the screw mechanism became standard, and more sticks were made from pernambuco rather than the earlier snakewood, ironwood, and china wood, which were often fluted for a portion of the length of the stick.
Fine makers of these Transitional models were Duchaîne, La Fleur, Meauchand, Tourte père, and Edward Dodd.
The underlying reasons for the change from the old Corelli-Tartini model to the Cramer and, finally, to the Tourte were naturally related to musical demands on the part of composers and violinists.
Undoubtedly the emphasis on cantabile, especially the long, evenly sustained phrase, required a generally longer bow and a somewhat wider ribbon of hair. These new bows were well suited to filling the new, much larger concert halls with sound and to the demands of the late Classical and the new Romantic repertoire.
Today, with the rise of the historically informed performance movement, string players have developed a revived interest in the lighter, pre-Tourte bow, as more suitable for playing stringed instruments made in pre-19th-century style.
Stradivarius bows
A Stradivari bow, the King Charles IV violin bow, is in the collection of the National Music Museum at the University of South Dakota in Vermillion, South Dakota (Rawlins Gallery, object number NMM 4882). It is attributed to the workshop of Antonio Stradivari, Cremona, 1700.
It is one of two bows attributed to the workshop of Antonio Stradivari; the other is in a private collection in London.
Other types of bow
The Chinese yazheng and yaqin, and Korean ajaeng zithers are generally played by "bowing" with a rosined stick, which creates friction against the strings without any horsehair. The hurdy-gurdy's strings are similarly set into vibration by means of a "rosin wheel," a wooden wheel that contacts the strings as it is rotated by means of a crank handle, creating a "bowed" tone.
Maintenance
Careful owners always loosen the hair on a bow before putting it away. James McKean recommends that the owner "loosen the hair completely, then bring it back just a single turn of the button." The goal is to "keep the hair even but allow the bow to relax." Over-tightening the bow, however, can also be damaging to the stick and cause it to break.
Since hairs may break in service, bows must be periodically rehaired, an operation usually performed by professional bow makers rather than by the instrument owner.
Bows sometimes lose their correct camber (see above), and are recambered using the same heating method as is used in the original manufacture.
Lastly, the grip or winding of the bow must occasionally be replaced to maintain a good grip and protect the wood.
These repairs are usually left to professionals, as the head of the bow is extremely fragile; a poor rehair, or a broken ivory plate on the tip, can ruin the bow.
Nomenclature
In vernacular speech, the bow is occasionally called a fiddlestick. Bows for particular instruments are often designated as such: violin bow, cello bow, and so on.
See also
Bariolage
Bowed guitar
Curved bow
References
Sources
Harnoncourt, Nikolaus. Baroque music today: music as speech. Amadeus Press, c. 1988.
Saint-George, Henry (1866–1917). The Bow (London, 1896; 2nd ed., 1909).
Seletsky, Robert E., "New Light on the Old Bow," Part 1: Early Music 5/2004, pp. 286–96; Part 2: Early Music 8/2004, pp. 415–26.
Notes
Further reading
Bachmann, Werner. The Origins of Bowing and the Development of Bowed Instruments Up to the Thirteenth Century. London, Oxford U.P., 1969.
Saint-George, Henry, The Bow, Its History, Manufacture and Use
Templeton, David. "Fresh Prince: Joshua Bell on composition, hyperviolins, and the future". Strings no. 105 (October 2002).
Young, Diana. A Methodology for Investigation of Bowed String Performance Through Measurement of Violin Bowing Technique. PhD Thesis. M.I.T., 2007.
External links
Article about horse hair.
Commissioning a bow.
Mastering New Materials: Commissioning an Amber Bow, no.65
Production of a carbon fiber bow
eNotes article on the history and making of bows.
The violin bow: a brief depiction of its history
Bows used in traditional music (Polish folk musical instruments)
Musical instrument parts and accessories
Arab inventions
Mongolian inventions
Turkish inventions | wiki |
Music
Ringo Starr – British drummer
Ringo Starr – single by Pinguini Tattici Nucleari
Related pages
Ringo | wiki |
A field goal is a means of scoring in gridiron football.
Field goal may also refer to:
Sports
Field goal (basketball), a scoring play in basketball normally worth two or three points
Three-point field goal, the three-point instance of the above
Four-point field goal, an uncommon variant of the above
Field goal (rugby), an obsolete method of scoring in rugby football
Drop goal, a contemporary method of scoring (also known as a field goal) in rugby football
Others
Field Goal (video game), a 1979 arcade game
Project Field Goal, part of Operation Millpond, a 1961 American covert operation during the Laotian Civil War
See also
Goal (sports), various scoring methods | wiki |
Grammorhoe polygrammata is a moth in the family Geometridae (the geometer moths). The scientific name of the species was first validly published in 1794 by Borkhausen.
Hot Weather Football Championship, also known as the All India Hot Weather Football Championship, is an annual Indian football tournament held in Mandi, Himachal Pradesh, and organized by the All India Hot Weather Football Championship Organising Committee (AIHWFCOC). The tournament was first held in 1970, when it was won by Sports School Jalandhar. Apart from some top clubs from Himachal Pradesh, clubs from other Indian states have also participated in the competition.
FC Punjab Police has won the tournament a record four times. The current champions are Reserve Bank of India, who won the title by defeating Tamil Nadu Police in the 49th edition of the tournament in 2021.
Venue
All matches are played at the Paddal Ground in Mandi, Himachal Pradesh.
Results
References
External links
Football in Himachal Pradesh
Football cup competitions in India
1970 establishments in India
Recurring sporting events established in 1970 | wiki |
Milk Money may refer to:
Milk Money (film), a 1994 romantic comedy film
Milk Money (anime), a 2004 hentai series
Milk Money (band), an American band | wiki |
A mycetome is a specialized organ in a variety of animal species which houses that animal's symbionts, isolating them from the animal's natural cellular defense mechanisms and allowing sustained controlled symbiotic growth. In several species, such as bed bugs and certain families of leech, these symbionts are attached to the gut and aid in the production of vitamin B from ingested meals of blood. In insects, the organisms that inhabit these structures are either bacteria or yeasts.
In bed bugs, it has been found that heat stress can cause damage to the mycetome, preventing the symbionts from being successfully passed from the adult female to her eggs at the time of oogenesis, causing the resulting nymphs to develop abnormally or to die prematurely.
References
Insect biology
Symbiosis
Animal anatomy | wiki |
Federal holidays in the United States are the eleven calendar dates that are designated by the U.S. government as holidays. During U.S. federal holidays, non-essential federal government offices are closed and federal government employees are paid for the holiday.
Federal holidays are designated by the United States Congress in Title V of the United States Code (). Congress only has authority to create holidays for federal institutions (including federally-owned properties), employees, and the District of Columbia. Although not required, as a general rule of courtesy, other institutions, such as banks, businesses, schools, and the stock market, may be closed on federal holidays. In various parts of the country, state and city holidays may be observed concurrently with federal holidays.
History
The history of federal holidays in the United States dates back to June 28, 1870, when Congress created federal holidays "to correspond with similar laws of States around the District...and...in every State of the Union." Although at first applicable only to federal employees in the District of Columbia, Congress extended coverage in 1885 to all federal employees.
The original four holidays in 1870 were:
New Year's Day
Independence Day
Thanksgiving Day
Christmas Day
George Washington's Birthday became a federal holiday in 1879. In 1888 and 1894, respectively, Decoration Day (now Memorial Day) and Labor Day were created. Armistice Day was established in 1938 to honor the end of World War I, and the scope of the holiday was expanded to honor Americans who fought in World War II and the Korean War when it was renamed Veterans Day in 1954.
In 1968, the Uniform Monday Holiday Act gave several holidays "floating" dates so that they always fall on a Monday, and also established Columbus Day.
In 1983, Ronald Reagan signed Martin Luther King Jr. Day into law, and it was first observed three years later, although some states resisted. It was finally celebrated by all 50 states in 2000.
Christmas Day as a federal or public holiday is sometimes objected to by various sources, usually due to its ties with Christianity. In December 1999, the Western Division of the United States District Court for the Southern District of Ohio, in the case Ganulin v. United States, denied the charge that Christmas Day's federal status violated the Establishment Clause of the Constitution, ruling that "the Christmas holiday has become largely secularized", and that "by giving federal employees a paid vacation day on Christmas, the government is doing no more than recognizing the cultural significance of the holiday".
On June 17, 2021, Joe Biden signed legislation making Juneteenth a federal holiday, commemorating the emancipation of enslaved African Americans.
List of federal holidays
Most of the 11 U.S. federal holidays are also state holidays. A holiday that falls on a weekend is usually observed on the closest weekday (e.g. a holiday falling on a Saturday is observed on the preceding Friday, while a holiday falling on a Sunday is observed on the succeeding Monday). The official names come from the statute that defines holidays for federal employees.
New Year's Day, Juneteenth, Independence Day, Veterans Day, and Christmas Day are observed on the same calendar date each year, irrespective of the day of the week. For floating holidays, when a holiday falls on a Saturday, federal employees who work Monday to Friday observe the holiday on the previous Friday. Federal employees who work on Saturday observe the holiday on Saturday and, for them, Friday is a regular work day. Holidays that fall on a Sunday are observed by federal workers the following Monday.
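The weekend-observance rule described above amounts to a simple date calculation. The following minimal sketch in Python illustrates the rule for fixed-date holidays as applied to employees on a Monday-to-Friday schedule; the function name and the example date are chosen here only for illustration, and the special case of employees who work on Saturdays is not covered.

from datetime import date, timedelta

def observed_date(holiday: date) -> date:
    # Employees on a Monday-to-Friday schedule observe a holiday falling on
    # a Saturday on the preceding Friday, and one falling on a Sunday on the
    # following Monday; a holiday falling on a weekday is observed as-is.
    if holiday.weekday() == 5:   # Saturday
        return holiday - timedelta(days=1)
    if holiday.weekday() == 6:   # Sunday
        return holiday + timedelta(days=1)
    return holiday

# Independence Day 2026 (July 4) falls on a Saturday, so it would be
# observed on Friday, July 3, 2026.
print(observed_date(date(2026, 7, 4)))  # prints 2026-07-03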
Inauguration Day, held on January 20 every four years following a quadrennial presidential election, is considered a paid holiday for federal employees in the Washington, D.C., area by the Office of Personnel Management. However, it is not considered a federal holiday in the United States equivalent to the eleven holidays mentioned above.
Although many states recognize most or all federal holidays as state holidays, the federal government cannot enact laws to compel them to do so. Furthermore, states can recognize other days as state holidays that are not federal holidays. For example, the State of Texas recognizes all federal holidays except Columbus Day, and in addition recognizes the Friday after Thanksgiving, Christmas Eve, and the day after Christmas as state holidays. Furthermore, Texas does not follow the federal rule of closing either the Friday before if a holiday falls on a Saturday, or the Monday after if a holiday falls on a Sunday (offices are open on those Fridays or Mondays), but does have "partial staffing holidays" (such as March 2, which is Texas Independence Day) and "optional holidays" (such as Good Friday).
Private employers also are not required to observe federal or state holidays, the key exception being federally-chartered banks. Some private employers, often by a union contract, pay a differential such as time-and-a-half or double-time to employees who work on some federal holidays. Employees not specifically covered by a union contract, however, might only receive their standard pay for working on a federal holiday, depending on the company policy.
Legal holidays due to presidential proclamation
Federal law also provides for the declaration of other public holidays by the President of the United States. Generally the president will provide a reasoning behind the elevation of the day, and call on the people of the United States to observe the day "with appropriate ceremonies and activities." Examples of presidentially declared holidays were the days of the funerals for former Presidents Ronald Reagan, George H. W. Bush, and Gerald Ford; federal government offices were closed and employees given a paid holiday.
Proposed federal holidays
Many federal holidays have been proposed. As the U.S. federal government is a large employer, the holidays are expensive. If a holiday is controversial, opposition will generally prevent bills enacting them from passing. For example, Martin Luther King Jr. Day, marking King's birthday, took much effort to pass and for all states to recognize it. It was not until 2000 that this holiday was officially observed in all 50 states.
The following list is an example of holidays that have been proposed and reasons why they are not observed at the federal level. Some of these holidays are observed at the state level.
Controversy
Protests by the Native American community support the abolition of Columbus Day, mainly due to its ideology in forcefully conquering and converting whole populations with another and encouraging imperialism and colonization. Glenn Morris of The Denver Post wrote that Columbus Day "... is not merely a celebration of Columbus the man; it is the celebration of a racist legal and political legacy—embedded in official legal and political pronouncements of the U.S.—such as the doctrine of discovery and manifest destiny." Alaska, Florida, Hawaii, Iowa, Louisiana, Maine, Minnesota, New Mexico, Nevada, North Carolina, Oregon, South Dakota, Washington, and Wisconsin do not recognize Columbus Day, though other states such as Hawaii and South Dakota mark the day with an alternative holiday or observance. South Dakota is the only state to recognize Native American Day as an alternate. Hawaii recognizes Discoverer's Day. Other states such as Maine, Nevada, Vermont, Washington and Wisconsin instead recognize Indigenous Peoples' Day as an alternative holiday.
See also
List of observances in the United States by presidential proclamation
Public holidays in the United States
References
External links
Federal Holidays: Evolution and Application, CRS Report for Congress, 98-301 GOV, updated February 8, 1999, by Stephen W. Stathis
United States Code: Federal Holidays (5 USC 6103)
Official US Federal Holiday calendar
US Federal holiday and special occasions calendar
National Holidays in USA
Federal government of the United States | wiki |