Systemic inflammatory response syndrome

In immunology, systemic inflammatory response syndrome (SIRS) is an inflammatory state affecting the whole body. It is the body's response to an infectious or noninfectious insult. Although the definition of SIRS refers to it as an "inflammatory" response, it actually has pro- and anti-inflammatory components.
Presentation
Complications
SIRS is frequently complicated by failure of one or more organs or organ systems. The complications of SIRS include:
Acute kidney injury
Shock
Multiple organ dysfunction syndrome
Causes
The causes of SIRS are broadly classified as infectious or noninfectious. Causes of SIRS include:
bacterial infections
severe malaria
trauma
burns
pancreatitis
ischemia
hemorrhage
Other causes include:
Complications of surgery
Adrenal insufficiency
Pulmonary embolism
Complicated aortic aneurysm
Cardiac tamponade
Anaphylaxis
Drug overdose
Diagnosis
SIRS is a serious condition related to systemic inflammation, organ dysfunction, and organ failure. It is a subset of cytokine storm, in which there is abnormal regulation of various cytokines. SIRS is also closely related to sepsis, in which patients satisfy criteria for SIRS and have a suspected or proven infection.
Many experts consider the current criteria for a SIRS diagnosis to be overly sensitive, as nearly all (>90%) of patients admitted to the ICU meet the SIRS criteria.
Adult
Manifestations of SIRS include, but are not limited to:
Body temperature less than 36 °C (96.8 °F) or greater than 38 °C (100.4 °F)
Heart rate greater than 90 beats per minute
Tachypnea (high respiratory rate), with greater than 20 breaths per minute; or, an arterial partial pressure of carbon dioxide less than 4.3 kPa (32 mmHg)
White blood cell count less than 4,000 cells/mm³ (4 × 10⁹ cells/L) or greater than 12,000 cells/mm³ (12 × 10⁹ cells/L); or the presence of greater than 10% immature neutrophils (band forms). A band-form fraction greater than 3% is called bandemia or a "left-shift".
When two or more of these criteria are met with or without evidence of infection, patients may be diagnosed with "SIRS". Patients with SIRS and acute organ dysfunction may be termed "severe SIRS". Note: Fever and an increased white blood cell count are features of the acute-phase reaction, while an increased heart rate is often the initial sign of hemodynamic compromise. An increased rate of breathing may be related to the increased metabolic stress due to infection and inflammation, but may also be an ominous sign of inadequate perfusion resulting in the onset of anaerobic cellular metabolism.
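The "two or more criteria" rule amounts to a simple checklist, which the following Python sketch illustrates using the adult thresholds listed above. The function name and its inputs are illustrative only, not part of any clinical standard, and such a check is no substitute for clinical judgement.

def meets_adult_sirs(temp_c, heart_rate, resp_rate, paco2_kpa, wbc_per_mm3, band_fraction):
    """Count the adult SIRS criteria that are met (illustrative sketch only)."""
    criteria = [
        temp_c < 36.0 or temp_c > 38.0,            # abnormal body temperature
        heart_rate > 90,                           # tachycardia
        resp_rate > 20 or paco2_kpa < 4.3,         # tachypnea or low PaCO2
        (wbc_per_mm3 < 4000 or wbc_per_mm3 > 12000
         or band_fraction > 0.10),                 # abnormal WBC count or >10% band forms
    ]
    met = sum(criteria)
    return met >= 2, met

# Example: a febrile, tachycardic patient with otherwise normal values meets SIRS (2 of 4 criteria).
print(meets_adult_sirs(38.6, 104, 18, 5.0, 9000, 0.02))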
Children
The International Pediatric Sepsis Consensus has proposed some changes to adapt these criteria to the pediatric population.
In children, the SIRS criteria are modified in the following fashion:
Heart rate greater than 2 standard deviations above normal for age in the absence of stimuli such as pain and drug administration, or unexplained persistent elevation for greater than 30 minutes to 4 hours. In infants, this also includes a heart rate less than the 10th percentile for age in the absence of vagal stimuli, beta-blockers, or congenital heart disease, or unexplained persistent depression for greater than 30 minutes.
Body temperature obtained orally, rectally, from Foley catheter probe, or from central venous catheter probe less than 36 °C or greater than 38.5 °C.
Respiratory rate greater than 2 standard deviations above normal for age or the requirement for mechanical ventilation not related to neuromuscular disease or the administration of anesthesia.
White blood cell count elevated or depressed for age not related to chemotherapy, or greater than 10% bands plus other immature forms.
Temperature or white blood cell count must be abnormal to qualify as SIRS in pediatric patients.
Treatment
Generally, the treatment for SIRS is directed towards the underlying problem or inciting cause (e.g., adequate fluid replacement for hypovolemia, intravenous fluids and bowel rest (IVF/NPO) for pancreatitis, epinephrine/steroids/diphenhydramine for anaphylaxis).
Selenium, glutamine, and eicosapentaenoic acid have shown effectiveness in improving symptoms in clinical trials. Other antioxidants such as vitamin E may be helpful as well.
Sepsis treatment protocols and diagnostic tools have been created because of the potentially severe outcome of septic shock. For example, the SIRS criteria were designed, as noted above, to be extremely sensitive in flagging patients who may have sepsis. However, they lack specificity: meeting the criteria is not a diagnosis in itself, but rather a prompt to take the necessary precautions. The SIRS criteria are guidelines intended to ensure that septic patients receive care as early as possible.
In cases caused by an implanted mesh, removal (explantation) of the polypropylene surgical mesh implant may be indicated.
History
The concept of SIRS was first conceived of and presented by William R. Nelson, of the Department of Surgery of the University of Toronto. SIRS was more broadly adopted in 1991 at the American College of Chest Physicians/Society of Critical Care Medicine Consensus Conference with the goal of aiding in the early detection of sepsis.
Criteria for SIRS were established in 1992 as part of the American College of Chest Physicians/Society of Critical Care Medicine Consensus Conference. The conference concluded that the manifestations of SIRS include, but are not limited to the first four described above under adult SIRS criteria.
These clinical signs are seen not only in sepsis but also in other proinflammatory conditions, such as trauma, burns, and pancreatitis. A follow-up conference therefore decided to define patients with a documented or highly suspected infection that results in a systemic inflammatory response as having sepsis. Note that the SIRS criteria are non-specific and must be interpreted carefully within the clinical context. These criteria exist primarily to classify critically ill patients more objectively so that future clinical studies may be more rigorous and more easily reproducible.
References
External links
Intensive care medicine
Immune system disorders
Causes of death
Sepsis
Syndromes
Urea-containing cream

Urea-containing cream, also known as carbamide-containing cream, is used as a medication and applied to the skin to treat dryness and itching, such as may occur in psoriasis, dermatitis, or ichthyosis. It may also be used to soften nails.
In adults, side effects are generally few. It may occasionally cause skin irritation. Urea works in part by loosening dried skin. Preparations generally contain 5 to 50% urea.
Urea-containing creams have been used since the 1940s. Urea cream is on the World Health Organization's List of Essential Medicines and is available over the counter.
Medical uses
Urea cream is indicated for debridement and promotion of normal healing of skin areas with hyperkeratosis, particularly where healing is inhibited by local skin infection, skin necrosis, fibrinous or itching debris, or eschar. Specific conditions with hyperkeratosis where urea cream is useful include:
Dry skin and rough skin
Dermatitis
Psoriasis
Ichthyosis
Eczema
Keratosis
Keratoderma
Corns
Calluses
Damaged, ingrown and devitalized nails
Side effects
Common side effects of urea cream are:
Mild skin irritation
Temporary burning sensation
Stinging sensation
Itching
In severe cases, there can be an allergic reaction with symptoms such as skin rash, urticaria, difficulty breathing and swelling of the mouth, face, lips, or tongue.
Mechanism of action
Urea in low concentrations is a humectant, while at high concentrations (above 20%) it causes breakdown of protein in the skin.
Urea dissolves the intercellular matrix of the cells of the stratum corneum, promoting desquamation of scaly skin, eventually resulting in softening of hyperkeratotic areas. In nails, urea causes softening and eventually debridement of the nail plate.
References
External links
Dermatologic drugs
Ureas
World Health Organization essential medicines
Men's health

Men's health is a state of complete physical, mental, and social well-being, as experienced by men, and not merely the absence of disease. Differences in men's health compared to women's can be attributed to biological factors, behavioural factors, and social factors (e.g., occupations).
Men's health often relates to biological factors such as the male reproductive system or to conditions caused by hormones specific to, or most notable in, males. Some conditions that affect both men and women, such as cancer and injury, manifest differently in men. Some diseases that affect both sexes are statistically more common in men. In terms of behavioural factors, men are more likely to make unhealthy or risky choices and less likely to seek medical care.
Men may face issues not directly related to their biology, such as gender-differentiated access to medical treatment and other socioeconomic factors. Outside Sub-Saharan Africa, men are at greater risk of HIV/AIDS. This is associated with unsafe sexual activity that is often nonconsensual.
Definition
Men's health refers to the state of physical, mental, and social well-being of men, and encompasses a wide range of issues that are unique to men or that affect men differently than women. This can include issues related to reproductive health, sexual health, cardiovascular health, mental health, and cancer prevention and treatment. Men's health also encompasses lifestyle factors such as diet, exercise, and stress management, as well as access to healthcare and preventative measures.
Life expectancy
Despite overall increases in life expectancy globally, men's life expectancy is lower than women's, regardless of race and geographic region. The global gap between the life expectancy of men and women has remained at approximately 4.4 years since 2016, according to the WHO. Life expectancy is a statistical measure representing the average number of years a person is expected to live, based on current mortality rates. It is typically calculated at birth and can vary with factors such as gender, race, and location. For example, life expectancy in many developed countries is higher than in developing countries, and life expectancy for women is generally higher than for men.
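For reference, period life expectancy at birth is conventionally derived from a life table; this formula is standard demography and is added here for clarity rather than taken from the text above:

e_0 = \frac{\sum_{x} L_x}{l_0}

where L_x is the person-years lived by the hypothetical cohort between ages x and x+1 under current mortality rates, and l_0 is the initial cohort size (the radix).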
However, the gap does vary based on country, with low income countries having a smaller gap in life expectancy. Biological, behavioural, and social factors contribute to a lower overall life expectancy in men; however, the individual importance of each factor is not known. Overall attitudes towards health differ by gender. Men are generally less likely to be proactive in seeking healthcare, resulting in poorer health outcomes.
Men are difficult to recruit to health promotion interventions. The value of adopting a gender-sensitive approach to engage and retain men in health promotion interventions has been reported.
Biological influences on lower male life expectancy include genetics and hormones. In males, the 23rd pair of chromosomes comprises an X and a Y chromosome, rather than the two X chromosomes found in females. The Y chromosome is smaller and contains fewer genes. This distinction may contribute to the discrepancy between men's and women's life expectancy, as the additional X chromosome in females may counterbalance potential disease-producing genes on the other X chromosome. Since males do not have a second X chromosome, they lack this potential protection. Hormonally, testosterone is a major male sex hormone important for a number of functions in males and, to a lesser extent, in females. Low testosterone in males is a risk factor for cardiovascular-related diseases. Conversely, high testosterone levels can contribute to prostate diseases. These hormonal factors play a direct role in the life expectancy of men compared to women.
In terms of behavioural factors, men consume more alcohol, other substances, and tobacco than women, resulting in higher rates of diseases such as lung cancer, cardiovascular disease, and cirrhosis of the liver. Sedentary behaviour, which is associated with many chronic diseases, also appears to be more prevalent in men. These diseases influence the overall life expectancy of men. For example, according to the World Health Organization, 3.14 million men died from causes linked to excessive alcohol use in 2010, compared to 1.72 million women. Men are more likely than women to engage in over 30 risky behaviours associated with increased morbidity, injury, and mortality. Additionally, despite making proportionally fewer suicide attempts than women, men have significantly higher rates of death by suicide.
Social determinants of men's health involve factors such as greater levels of occupational exposure to physical and chemical hazards than women. Historically, men had higher work-related stress, which negatively impacted their life expectancy by increasing the risk of hypertension, heart attack, and stroke. However, as women's role in the workplace continues to be established, these risks are no longer specific to just men.
Mental health
Stress
Although most stress symptoms are similar in men and women, stress can be experienced differently by men. The American Psychological Association says that men are not as likely to report emotional and physical symptoms of stress compared to women. They say men are more likely to withdraw socially when stressed and are more likely to report doing nothing to manage their stress. Men are more likely than women to cite that work is a source of stress; women are more likely to report that money and the economy are a source of stress.
Mental stress in men is associated with various complications which can affect men's health: high blood pressure and subsequent cardiovascular morbidity and mortality, cardiovascular disease, erectile dysfunction (impotence) and possibly reduced fertility (due to reduced libido and frequency of intercourse).
Fathers experience stress during the time shortly before and after the time of birth (perinatal period). Stress levels tend to increase from the prenatal period up until the time of birth, and then decrease from the time of birth to the later postnatal period. Factors which contribute to stress in fathers include negative feelings about the pregnancy, role restrictions related to becoming a father, fear of childbirth, and feelings of incompetence related to infant care. This stress has a negative impact on fathers. Higher levels of stress in fathers are associated with mental health issues such as anxiety, depression, psychological distress, and fatigue.
Substance use disorders
Substance use disorder and alcohol use disorder can be defined as a pattern of harmful use of a substance for mood-altering purposes. Alcohol is one of the substances most commonly used in excess, and men are up to twice as likely as women to develop alcohol use disorder. Gender differences in alcohol consumption are universal, although their size varies. Heavy and binge drinking are more common in men, whereas long-term abstention is more common in women. Moreover, men are more likely to abuse drugs, with a lifetime prevalence of 11.5% in men compared with 6.4% in women in the United States. Additionally, males are more likely than females to become addicted to substances and to abuse substances due to peer pressure.
Risks
Substance and alcohol use disorders are associated with various mental health issues in men and women. Mental health problems are not only a result of drinking excess alcohol; they can also cause people to drink too much. A major reason for consuming alcohol is to change mood or mental state. Alcohol can temporarily alleviate feelings of anxiety and depression, and some people use it as a form of self-medication in an attempt to counteract these negative feelings. However, alcohol consumption can worsen existing mental health problems. Evidence shows that people who consume high amounts of alcohol or use illicit substances are at increased risk of developing mental health problems. Men with mental health disorders, such as post-traumatic stress disorder, are twice as likely as women to develop a substance use disorder.
Treatment
Gender differences have been identified in seeking treatment for mental health and substance use disorders. Women are more likely to seek help from, and disclose mental health problems to, their primary care physicians, whereas men are more likely to seek specialist and inpatient care. Men are more likely than women to disclose problems with alcohol use to their health care provider. In the United States, more men than women are in treatment for substance use disorders. Both men and women have better mental health outcomes with early treatment interventions.
Suicide
Suicide has a high incidence rate in men but often lacks public awareness. Suicide is the 13th leading cause of death globally, and in most parts of the world men are significantly more likely to die by suicide than women, although women are significantly more likely to attempt suicide. This is known as the "gender paradox of suicidal behaviour". Worldwide, the male-to-female ratio of suicide deaths was 1.8:1 in 2016, according to the World Health Organization. This gender disparity varies greatly between countries. For example, in the United Kingdom and Australia the ratio is approximately 3:1, and in the United States, Russia, and Argentina approximately 4:1. In South Africa, the suicide rate among men is five times greater than among women. In East Asian countries, however, the gender gap in suicide rates is relatively small, with male-to-female ratios ranging from 1:1 to 2:1. Multiple factors help explain this gender gap, such as men more frequently using high-mortality methods such as hanging, carbon monoxide poisoning, and lethal weapons. Additional factors that contribute to the disparity include the pressures of traditional gender roles for men and the way men are socialized.
Risk factors
Variations in the risk factors associated with suicidal behaviour between men and women contribute to the discrepancy in suicide rates. Suicide is complex and cannot simply be attributed to a single cause; however, there are psychological, social, and psychiatric factors to consider.
Mental illness is a major risk factor for suicide for both men and women. Common mental illnesses that are associated with suicide include depression, bipolar disorder, schizophrenia, and substance abuse disorders. In addition to mental illness, psychosocial factors such as unemployment and occupational stress are established risk factors for men. Alcohol use disorder is a risk factor that is much more prevalent in men than in women, which increases risks of depression and impulsive behaviours. This problem is exacerbated in men, as they are twice as likely as women to develop alcohol use disorder.
Reluctance to seek help is another prevalent risk factor facing men, stemming from internalized notions of masculinity. Traditional masculine stereotypes place expectations of strength and stoicism on men, while any indication of vulnerability, such as consulting mental health services, is perceived as weak and emasculating. As a result, depression is under-diagnosed in men and may often remain untreated, which can lead to suicide.
Warning signs
Identifying warning signs is important for reducing suicide rates worldwide, but particularly for men, as distress may be expressed in a manner that is not easily recognisable. For instance, depression and suicidal thoughts may manifest in the form of anger, hostility, and irritability. Additionally, risk-taking and avoidance behaviours may be demonstrated more commonly in men.
Common conditions
The following is a list of diseases or conditions that have a high prevalence in men (relative to women).
Cardiovascular conditions:
Cardiovascular disease
Atherosclerosis
Heart attack
Hypertension
Stroke
High cholesterol
Respiratory conditions:
Respiratory disease
COPD
Lung cancer
Pneumonia
Mental health conditions:
Autism
Major depressive disorder
Suicide
Addiction
Cancer:
Prostate cancer
Testicular cancer
Colorectal cancer
Skin cancer
Sexual health:
HIV/AIDS
Erectile dysfunction
Ejaculation disorders
Hypoactive sexual desire disorder
Other:
Unintentional injuries
Diabetes
Influenza
Liver disease
Kidney disease
Alcohol abuse
Organisations
In the UK, the Men's Health Forum was founded in 1994. It was established originally by the Royal College of Nursing but became completely independent of the RCN when it was established as a charity in 2001. The first National Men's Health Week was held in the US in 1994. The first UK week took place in 2002, and the event went international (International Men's Health Week) the following year. In 2005, the world's first professor of men's health, Alan White, was appointed at Leeds Metropolitan University in northern England.
In Australia, the Men's Health Information and Resource Centre advocates a salutogenic approach to male health which focuses on the causal factors behind health. The centre is led by John Macdonald and was established in 1999. The Centre leads and executes Men's Health Week in Australia with core funding from the NSW Ministry of Health.
The Global Action on Men's Health (GAMH) was established in 2013 and was registered as a UK-based charity in May 2018. It is a collaborative initiative to bring together men's health organizations from across the globe into a new global network. GAMH is working at international and national levels to encourage international agencies (such as the World Health Organization) and individual governments to develop research, policies and strategies on men's health.
See also
Andrology
Gender disparities in health
International Journal of Men's Health
International Men's Day (19 November)
Movember
National Prostate Cancer Awareness Month
References
External links
Men's Health Network
Men's Health Information and Resource Centre (Australia)
Andrology
Decompression sickness

Decompression sickness (DCS; also called divers' disease, the bends, aerobullosis, and caisson disease) is a medical condition caused by dissolved gases emerging from solution as bubbles inside the body tissues during decompression. DCS most commonly occurs during or soon after a decompression ascent from underwater diving, but can also result from other causes of depressurisation, such as emerging from a caisson, decompression from saturation, flying in an unpressurised aircraft at high altitude, and extravehicular activity from spacecraft. DCS and arterial gas embolism are collectively referred to as decompression illness.
Since bubbles can form in or migrate to any part of the body, DCS can produce many symptoms, and its effects may vary from joint pain and rashes to paralysis and death. DCS often causes air bubbles to settle in major joints like knees or elbows, causing individuals to bend over in excruciating pain, hence its common name, the bends. Individual susceptibility can vary from day to day, and different individuals under the same conditions may be affected differently or not at all. The classification of types of DCS according to symptoms has evolved since its original description in the 19th century. The severity of symptoms varies from barely noticeable to rapidly fatal.
Decompression sickness can occur after an exposure to increased pressure while breathing a gas with a metabolically inert component, then decompressing too fast for it to be harmlessly eliminated through respiration, or by decompression by an upward excursion from a condition of saturation by the inert breathing gas components, or by a combination of these routes. Theoretical decompression risk is controlled by the tissue compartment with the highest inert gas concentration, which for decompression from saturation is the slowest tissue to outgas.
The risk of DCS can be managed through proper decompression procedures, and contracting the condition has become uncommon. Its potential severity has driven much research to prevent it, and divers almost universally use decompression schedules or dive computers to limit their exposure and to monitor their ascent speed. If DCS is suspected, it is treated by hyperbaric oxygen therapy in a recompression chamber. Where a chamber is not accessible within a reasonable time frame, in-water recompression may be indicated for a narrow range of presentations, if there are suitably skilled personnel and appropriate equipment available on site. Diagnosis is confirmed by a positive response to the treatment. Early treatment results in a significantly higher chance of successful recovery.
Decompression sickness caused by a decompression from saturation can occur in decompression or upward excursions from saturation diving, ascent to high altitudes, and extravehicular activities in space. Treatment is recompression, and oxygen therapy.
Classification
DCS is classified by symptoms. The earliest descriptions of DCS used the terms: "bends" for joint or skeletal pain; "chokes" for breathing problems; and "staggers" for neurological problems. In 1960, Golding et al. introduced a simpler classification using the term "Type I ('simple')" for symptoms involving only the skin, musculoskeletal system, or lymphatic system, and "Type II ('serious')" for symptoms where other organs (such as the central nervous system) are involved. Type II DCS is considered more serious and usually has worse outcomes. This system, with minor modifications, may still be used today. Following changes to treatment methods, this classification is now much less useful in diagnosis, since neurological symptoms may develop after the initial presentation, and both Type I and Type II DCS have the same initial management.
Decompression illness and dysbarism
The term dysbarism encompasses decompression sickness, arterial gas embolism, and barotrauma, whereas decompression sickness and arterial gas embolism are commonly classified together as decompression illness when a precise diagnosis cannot be made. DCS and arterial gas embolism are treated very similarly because they are both the result of gas bubbles in the body. The U.S. Navy prescribes identical treatment for Type II DCS and arterial gas embolism. Their spectra of symptoms also overlap, although the symptoms from arterial gas embolism are generally more severe because they often arise from an infarction (blockage of blood supply and tissue death).
Signs and symptoms
While bubbles can form anywhere in the body, DCS is most frequently observed in the shoulders, elbows, knees, and ankles. Joint pain ("the bends") accounts for about 60% to 70% of all altitude DCS cases, with the shoulder being the most common site for altitude and bounce diving, and the knees and hip joints for saturation and compressed air work. Neurological symptoms are present in 10% to 15% of DCS cases with headache and visual disturbances being the most common symptom. Skin manifestations are present in about 10% to 15% of cases. Pulmonary DCS ("the chokes") is very rare in divers and has been observed much less frequently in aviators since the introduction of oxygen pre-breathing protocols. The table below shows symptoms for different DCS types.
Frequency
The relative frequencies of different symptoms of DCS observed by the U.S. Navy are as follows:
Onset
Although onset of DCS can occur rapidly after a dive, in more than half of all cases symptoms do not begin to appear for at least an hour. In extreme cases, symptoms may occur before the dive has been completed. The U.S. Navy and Technical Diving International, a leading technical diver training organization, have published a table that documents time to onset of first symptoms. The table does not differentiate between types of DCS, or types of symptom.
Causes
DCS is caused by a reduction in ambient pressure that results in the formation of bubbles of inert gases within tissues of the body. It may happen when leaving a high-pressure environment, ascending from depth, or ascending to altitude. A closely related condition of bubble formation in body tissues due to isobaric counterdiffusion can occur with no change of pressure.
Ascent from depth
DCS is best known as a diving disorder that affects divers having breathed gas that is at a higher pressure than the surface pressure, owing to the pressure of the surrounding water. The risk of DCS increases when diving for extended periods or at greater depth, without ascending gradually and making the decompression stops needed to slowly reduce the excess pressure of inert gases dissolved in the body. The specific risk factors are not well understood and some divers may be more susceptible than others under identical conditions. DCS has been confirmed in rare cases of breath-holding divers who have made a sequence of many deep dives with short surface intervals, and may be the cause of the disease called taravana by South Pacific island natives who for centuries have dived by breath-holding for food and pearls.
Two principal factors control the risk of a diver developing DCS:
the rate and duration of gas absorption under pressure – the deeper or longer the dive the more gas is absorbed into body tissue in higher concentrations than normal (Henry's Law);
the rate and duration of outgassing on depressurization – the faster the ascent and the shorter the interval between dives the less time there is for absorbed gas to be offloaded safely through the lungs, causing these gases to come out of solution and form "micro bubbles" in the blood.
Even when the change in pressure causes no immediate symptoms, rapid pressure change can cause permanent bone injury called dysbaric osteonecrosis (DON). DON can develop from a single exposure to rapid decompression.
Leaving a high-pressure environment
When workers leave a pressurized caisson or a mine that has been pressurized to keep water out, they will experience a significant reduction in ambient pressure. A similar pressure reduction occurs when astronauts exit a space vehicle to perform a space-walk or extra-vehicular activity, where the pressure in their spacesuit is lower than the pressure in the vehicle.
The original name for DCS was "caisson disease". This term was introduced in the 19th century, when caissons under pressure were used to keep water from flooding large engineering excavations below the water table, such as bridge supports and tunnels. Workers spending time in high ambient pressure conditions are at risk when they return to the lower pressure outside the caisson if the pressure is not reduced slowly. DCS was a major factor during construction of Eads Bridge, when 15 workers died from what was then a mysterious illness, and later during construction of the Brooklyn Bridge, where it incapacitated the project leader Washington Roebling. On the other side of Manhattan, during construction of the Hudson River Tunnel, contractor's agent Ernest William Moir noted in 1889 that workers were dying of decompression sickness; Moir pioneered the use of an airlock chamber for treatment.
Ascent to altitude and loss of pressure from a pressurised environment
The most common health risk on ascent to altitude is not decompression sickness but altitude sickness, or acute mountain sickness (AMS), which has an entirely different and unrelated set of causes and symptoms. AMS results not from the formation of bubbles from dissolved gasses in the body but from exposure to a low partial pressure of oxygen and alkalosis. However, passengers in unpressurized aircraft at high altitude may also be at some risk of DCS.
Altitude DCS became a problem in the 1930s with the development of high-altitude balloon and aircraft flights but not as great a problem as AMS, which drove the development of pressurized cabins, which coincidentally controlled DCS. Commercial aircraft are now required to maintain the cabin at or below a pressure altitude of even when flying above . Symptoms of DCS in healthy individuals are subsequently very rare unless there is a loss of pressurization or the individual has been diving recently. Divers who drive up a mountain or fly shortly after diving are at particular risk even in a pressurized aircraft because the regulatory cabin altitude of represents only 73% of sea level pressure.
Generally, the higher the altitude the greater the risk of altitude DCS but there is no specific, maximum, safe altitude below which it never occurs. There are very few symptoms at or below unless the person had predisposing medical conditions or had dived recently. There is a correlation between increased altitudes above and the frequency of altitude DCS but there is no direct relationship with the severity of the various types of DCS. A US Air Force study reports that there are few occurrences between and and 87% of incidents occurred at or above . High-altitude parachutists may reduce the risk of altitude DCS if they flush nitrogen from the body by pre-breathing pure oxygen. A similar procedure is used by astronauts and cosmonauts preparing for extravehicular activity in low pressure space suits.
Predisposing factors
Although the occurrence of DCS is not easily predictable, many predisposing factors are known. They may be considered as either environmental or individual. Decompression sickness and arterial gas embolism in recreational diving are associated with certain demographic, environmental, and dive style factors. A statistical study published in 2005 tested potential risk factors: age, gender, body mass index, smoking, asthma, diabetes, cardiovascular disease, previous decompression illness, years since certification, dives in the last year, number of diving days, number of dives in a repetitive series, last dive depth, nitrox use, and drysuit use. No significant associations with risk of decompression sickness or arterial gas embolism were found for asthma, diabetes, cardiovascular disease, smoking, or body mass index. Increased depth, previous DCI, larger number of consecutive days diving, and being male were associated with higher risk for decompression sickness and arterial gas embolism. Nitrox and drysuit use, greater frequency of diving in the past year, increasing age, and years since certification were associated with lower risk, possibly as indicators of more extensive training and experience.
Environmental
The following environmental factors have been shown to increase the risk of DCS:
the magnitude of the pressure reduction ratio – a large pressure reduction ratio is more likely to cause DCS than a small one.
repetitive exposures – repetitive dives within a short period of time (a few hours) increase the risk of developing DCS. Repetitive ascents to altitudes above within similar short periods increase the risk of developing altitude DCS.
the rate of ascent – the faster the ascent the greater the risk of developing DCS. The U.S. Navy Diving Manual indicates that ascent rates greater than about when diving increase the chance of DCS, while recreational dive tables such as the Bühlmann tables require an ascent rate of with the last taking at least one minute. An individual exposed to a rapid decompression (high rate of ascent) above has a greater risk of altitude DCS than being exposed to the same altitude but at a lower rate of ascent.
the duration of exposure – the longer the duration of the dive, the greater is the risk of DCS. Longer flights, especially to altitudes of and above, carry a greater risk of altitude DCS.
underwater diving before flying – divers who ascend to altitude soon after a dive increase their risk of developing DCS even if the dive itself was within the dive table safe limits. Dive tables make provisions for post-dive time at surface level before flying to allow any residual excess nitrogen to outgas. However, the pressure maintained inside even a pressurized aircraft may be as low as the pressure equivalent to an altitude of above sea level. Therefore, the assumption that the dive table surface interval occurs at normal atmospheric pressure is invalidated by flying during that surface interval, and an otherwise-safe dive may then exceed the dive table limits.
diving before travelling to altitude – DCS can occur without flying if the person moves to a high-altitude location on land immediately after diving, for example, scuba divers in Eritrea who drive from the coast to the Asmara plateau at increase their risk of DCS.
diving at altitude – diving in water whose surface pressure is significantly below sea level pressure – for example, Lake Titicaca is at . Versions of decompression tables for altitudes exceeding , or dive computers with high-altitude settings or surface pressure sensors may be used to reduce this risk.
Individual
The following individual factors have been identified as possibly contributing to increased risk of DCS:
dehydration – Studies by Walder concluded that decompression sickness could be reduced in aviators when the serum surface tension was raised by drinking isotonic saline, and the high surface tension of water is generally regarded as helpful in controlling bubble size. Maintaining proper hydration is recommended. There is no convincing evidence that overhydration has any benefits, and it is implicated in immersion pulmonary oedema.
patent foramen ovale – a hole between the atrial chambers of the heart in the fetus is normally closed by a flap with the first breaths at birth. In about 20% of adults the flap does not completely seal, however, allowing blood through the hole when coughing or during activities that raise chest pressure. In diving, this can allow venous blood with microbubbles of inert gas to bypass the lungs, where the bubbles would otherwise be filtered out by the lung capillary system, and return directly to the arterial system (including arteries to the brain, spinal cord and heart). In the arterial system, bubbles (arterial gas embolism) are far more dangerous because they block circulation and cause infarction (tissue death, due to local loss of blood flow). In the brain, infarction results in stroke, and in the spinal cord it may result in paralysis.
a person's age – there are some reports indicating a higher risk of altitude DCS with increasing age.
previous injury – there is some indication that recent joint or limb injuries may predispose individuals to developing decompression-related bubbles.
ambient temperature – there is some evidence suggesting that individual exposure to very cold ambient temperatures may increase the risk of altitude DCS. Decompression sickness risk can be reduced by increased ambient temperature during decompression following dives in cold water, though risk is also increased by ingassing while the diver is warm and peripherally well-perfused, and decompressing when the diver is cold.
body type – typically, a person who has a high body fat content is at greater risk of DCS. This is because nitrogen is five times more soluble in fat than in water, leading to greater amounts of total body dissolved nitrogen during time at pressure. Fat represents about 15–25 percent of a healthy adult's body, but stores about half of the total amount of nitrogen (about 1 litre) at normal pressures.
alcohol consumption – although alcohol consumption increases dehydration and therefore may increase susceptibility to DCS, a 2005 study found no evidence that alcohol consumption increases the incidence of DCS.
Mechanism
Depressurisation causes inert gases, which were dissolved under higher pressure, to come out of physical solution and form gas bubbles within the body. These bubbles produce the symptoms of decompression sickness. Bubbles may form whenever the body experiences a reduction in pressure, but not all bubbles result in DCS. The amount of gas dissolved in a liquid is described by Henry's Law, which indicates that when the pressure of a gas in contact with a liquid is decreased, the amount of that gas dissolved in the liquid will also decrease proportionately.
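Stated symbolically (a standard form of Henry's law, added here for clarity rather than quoted from the text), the equilibrium concentration of a dissolved gas is proportional to its partial pressure above the liquid:

C = k_{\mathrm{H}} \, P_{\text{gas}}

where k_H is a solubility constant for the particular gas and liquid (its exact definition and units depend on the convention used). Halving the ambient partial pressure therefore eventually halves the amount of gas the tissue can hold at equilibrium; the excess must either be carried to the lungs and exhaled or come out of solution as bubbles.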
On ascent from a dive, inert gas comes out of solution in a process called "outgassing" or "offgassing". Under normal conditions, most offgassing occurs by gas exchange in the lungs. If inert gas comes out of solution too quickly to allow outgassing in the lungs then bubbles may form in the blood or within the solid tissues of the body. The formation of bubbles in the skin or joints results in milder symptoms, while large numbers of bubbles in the venous blood can cause lung damage. The most severe types of DCS interrupt, and ultimately damage, spinal cord function, leading to paralysis, sensory dysfunction, or death. In the presence of a right-to-left shunt of the heart, such as a patent foramen ovale, venous bubbles may enter the arterial system, resulting in an arterial gas embolism. A similar effect, known as ebullism, may occur during explosive decompression, when water vapour forms bubbles in body fluids due to a dramatic reduction in environmental pressure.
Inert gases
The main inert gas in air is nitrogen, but nitrogen is not the only gas that can cause DCS. Breathing gas mixtures such as trimix and heliox include helium, which can also cause decompression sickness. Helium both enters and leaves the body faster than nitrogen, so different decompression schedules are required, but, since helium does not cause narcosis, it is preferred over nitrogen in gas mixtures for deep diving.
There is some debate as to the decompression requirements for helium during short-duration dives. Most divers do longer decompressions; however, some groups like the WKPP have been experimenting with the use of shorter decompression times by including deep stops. The balance of evidence as of 2020 does not indicate that deep stops increase decompression efficiency.
Any inert gas that is breathed under pressure can form bubbles when the ambient pressure decreases. Very deep dives have been made using hydrogen–oxygen mixtures (hydrox), but controlled decompression is still required to avoid DCS.
Isobaric counterdiffusion
DCS can also be caused at a constant ambient pressure when switching between gas mixtures containing different proportions of inert gas. This is known as isobaric counterdiffusion, and presents a problem for very deep dives. For example, after using a very helium-rich trimix at the deepest part of the dive, a diver will switch to mixtures containing progressively less helium and more oxygen and nitrogen during the ascent. Nitrogen diffuses into tissues 2.65 times slower than helium but is about 4.5 times more soluble. Switching between gas mixtures that have very different fractions of nitrogen and helium can result in "fast" tissues (those tissues that have a good blood supply) actually increasing their total inert gas loading. This is often found to provoke inner ear decompression sickness, as the ear seems particularly sensitive to this effect.
Bubble formation
The location of micronuclei or where bubbles initially form is not known. The most likely mechanisms for bubble formation are tribonucleation, when two surfaces make and break contact (such as in joints), and heterogeneous nucleation, where bubbles are created at a site based on a surface in contact with the liquid. Homogeneous nucleation, where bubbles form within the liquid itself, is less likely because it requires much greater pressure differences than are experienced in decompression. The spontaneous formation of nanobubbles on hydrophobic surfaces is a possible source of micronuclei, but it is not yet clear whether these can grow large enough to cause symptoms, as they are very stable.
Once microbubbles have formed, they can grow either by a reduction in pressure or by diffusion of gas into the bubble from its surroundings. In the body, bubbles may be located within tissues or carried along with the bloodstream. The speed of blood flow within a blood vessel and the rate of delivery of blood to capillaries (perfusion) are the main factors that determine whether dissolved gas is taken up by tissue bubbles or circulation bubbles for bubble growth.
Pathophysiology
The primary provoking agent in decompression sickness is bubble formation from excess dissolved gases. Various hypotheses have been put forward for the nucleation and growth of bubbles in tissues, and for the level of supersaturation which will support bubble growth. The earliest bubble formation detected is subclinical intravascular bubbles detectable by doppler ultrasound in the venous systemic circulation. The presence of these "silent" bubbles is no guarantee that they will persist and grow to be symptomatic.
Vascular bubbles formed in the systemic capillaries may be trapped in the lung capillaries, temporarily blocking them. If this is severe, the symptom called "chokes" may occur. If the diver has a patent foramen ovale (or a shunt in the pulmonary circulation), bubbles may pass through it and bypass the pulmonary circulation to enter the arterial blood. If these bubbles are not absorbed in the arterial plasma and lodge in systemic capillaries, they will block the flow of oxygenated blood to the tissues supplied by those capillaries, and those tissues will be starved of oxygen. Moon and Kisslo (1988) concluded that "the evidence suggests that the risk of serious neurological DCI or early onset DCI is increased in divers with a resting right-to-left shunt through a PFO. There is, at present, no evidence that PFO is related to mild or late onset bends." Bubbles form within other tissues as well as the blood vessels. Inert gas can diffuse into bubble nuclei between tissues. In this case, the bubbles can distort and permanently damage the tissue. As they grow, the bubbles may also compress nerves, causing pain. Extravascular or autochthonous bubbles usually form in slow tissues such as joints, tendons and muscle sheaths. Direct expansion causes tissue damage, with the release of histamines and their associated effects. Biochemical damage may be as important as, or more important than, mechanical effects.
Bubble size and growth may be affected by several factors – gas exchange with adjacent tissues, the presence of surfactants, coalescence and disintegration by collision. Vascular bubbles may cause direct blockage, aggregate platelets and red blood cells, and trigger the coagulation process, causing local and downstream clotting.
Arteries may be blocked by intravascular fat aggregation. Platelets accumulate in the vicinity of bubbles. Endothelial damage may be a mechanical effect of bubble pressure on the vessel walls, a toxic effect of stabilised platelet aggregates and possibly toxic effects due to the association of lipids with the air bubbles. Protein molecules may be denatured by reorientation of the secondary and tertiary structure when non-polar groups protrude into the bubble gas and hydrophilic groups remain in the surrounding blood, which may generate a cascade of pathophysiological events with consequent production of clinical signs of decompression sickness.
The physiological effects of a reduction in environmental pressure depend on the rate of bubble growth, the site, and surface activity. A sudden release of sufficient pressure in saturated tissue results in a complete disruption of cellular organelles, while a more gradual reduction in pressure may allow accumulation of a smaller number of larger bubbles, some of which may not produce clinical signs, but still cause physiological effects typical of a blood/gas interface and mechanical effects. Gas is dissolved in all tissues, but decompression sickness is only clinically recognised in the central nervous system, bone, ears, teeth, skin and lungs.
Necrosis has frequently been reported in the lower cervical, thoracic, and upper lumbar regions of the spinal cord. A catastrophic pressure reduction from saturation produces explosive mechanical disruption of cells by local effervescence, while a more gradual pressure loss tends to produce discrete bubbles accumulated in the white matter, surrounded by a protein layer. Typical acute spinal decompression injury occurs in the columns of white matter. Infarcts are characterised by a region of oedema, haemorrhage and early myelin degeneration, and are typically centred on small blood vessels. The lesions are generally discrete. Oedema usually extends to the adjacent grey matter. Microthrombi are found in the blood vessels associated with the infarcts.
Following the acute changes there is an invasion of lipid phagocytes and degeneration of adjacent neural fibres with vascular hyperplasia at the edges of the infarcts. The lipid phagocytes are later replaced by a cellular reaction of astrocytes. Vessels in surrounding areas remain patent but are collagenised. Distribution of spinal cord lesions may be related to vascular supply. There is still uncertainty regarding the aetiology of decompression sickness damage to the spinal cord.
Dysbaric osteonecrosis lesions are typically bilateral and usually occur at both ends of the femur and at the proximal end of the humerus. Symptoms are usually only present when a joint surface is involved, which typically does not occur until a long time after the causative exposure to a hyperbaric environment. The initial damage is attributed to the formation of bubbles, and one episode can be sufficient, however incidence is sporadic and generally associated with relatively long periods of hyperbaric exposure and aetiology is uncertain. Early identification of lesions by radiography is not possible, but over time areas of radiographic opacity develop in association with the damaged bone.
Diagnosis
Diagnosis of decompression sickness relies almost entirely on clinical presentation, as there are no laboratory tests that can incontrovertibly confirm or reject the diagnosis. Various blood tests have been proposed, but they are not specific for decompression sickness, they are of uncertain utility and are not in general use.
Decompression sickness should be suspected if any of the symptoms associated with the condition occurs following a drop in pressure, in particular, within 24 hours of diving. In 1995, 95% of all cases reported to Divers Alert Network had shown symptoms within 24 hours. This window can be extended to 36 hours for ascent to altitude and 48 hours for prolonged exposure to altitude following diving. An alternative diagnosis should be suspected if severe symptoms begin more than six hours following decompression without an altitude exposure or if any symptom occurs more than 24 hours after surfacing. The diagnosis is confirmed if the symptoms are relieved by recompression. Although magnetic resonance imaging (MRI) or computed tomography (CT) can frequently identify bubbles in DCS, they are not as good at determining the diagnosis as a proper history of the event and description of the symptoms.
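These onset windows can be read as a rough triage heuristic. The sketch below is a minimal illustration in Python, assuming hypothetical argument names; it is not a validated clinical tool.

def within_dcs_suspicion_window(hours_since_surfacing, altitude_after_dive=False, prolonged_altitude=False):
    """Return True if symptom onset falls inside the window in which DCS should be suspected."""
    window = 24                    # diving alone
    if altitude_after_dive:
        window = 36                # ascent to altitude after diving
    if prolonged_altitude:
        window = 48                # prolonged exposure to altitude following diving
    return hours_since_surfacing <= window

# Example: symptoms 30 hours after a dive followed by a flight still fall inside the 36-hour window.
print(within_dcs_suspicion_window(30, altitude_after_dive=True))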
Test of pressure
There is no gold standard for diagnosis, and DCI experts are rare. Most of the chambers open to treatment of recreational divers and reporting to Divers Alert Network see fewer than 10 cases per year, making it difficult for the attending doctors to develop experience in diagnosis. A method used by commercial diving supervisors when considering whether to recompress as first aid, when a chamber is available on site, is known as the test of pressure. The diver is checked for contraindications to recompression, and if none are present, recompressed. If the symptoms resolve or reduce during recompression, it is considered likely that a treatment schedule will be effective. The test is not entirely reliable, and both false positives and false negatives are possible; however, in the commercial diving environment it is often considered worth treating when there is doubt, and very early recompression has a history of very high success rates and a reduced number of treatments needed for complete resolution and minimal sequelae.
Differential diagnosis
Symptoms of DCS and arterial gas embolism can be virtually indistinguishable. The most reliable way to tell the difference is based on the dive profile followed, as the probability of DCS depends on duration of exposure and magnitude of pressure, whereas AGE depends entirely on the performance of the ascent. In many cases it is not possible to distinguish between the two, but as the treatment is the same in such cases it does not usually matter.
Other conditions which may be confused include skin symptoms. Cutis marmorata due to DCS may be confused with skin barotrauma due to dry suit squeeze, for which no treatment is necessary. Dry suit squeeze produces lines of redness with possible bruising where the skin was pinched between folds of the suit, while the mottled effect of cutis marmorata is usually on skin where there is subcutaneous fat, and has no linear pattern.
Transient episodes of severe neurological incapacitation with rapid spontaneous recovery shortly after a dive may be attributed to hypothermia, but may actually be symptomatic of short term CNS involvement due to bubbles which form a short term gas embolism, then resolve, but which may leave residual problems which may cause relapses. These cases are thought to be under-diagnosed.
Inner ear decompression sickness (IEDCS) can be confused with inner ear barotrauma (IEBt), alternobaric vertigo, caloric vertigo and reverse squeeze. A history of difficulty in equalising the ears during the dive makes ear barotrauma more likely, but does not always eliminate the possibility of inner ear DCS, which is usually associated with deep, mixed gas dives with decompression stops. Both conditions may exist concurrently, and it can be difficult to distinguish whether a person has IEDCS, IEBt, or both.
Numbness and tingling are associated with spinal DCS, but can also be caused by pressure on nerves (compression neurapraxia). In DCS the numbness or tingling is generally confined to one or a series of dermatomes, while pressure on a nerve tends to produce characteristic areas of numbness associated with the specific nerve on only one side of the body distal to the pressure point. A loss of strength or function is likely to be a medical emergency. A loss of feeling that lasts more than a minute or two indicates a need for immediate medical attention. It is only partial sensory changes, or paraesthesias, where this distinction between trivial and more serious injuries applies.
Large areas of numbness with associated weakness or paralysis, especially if a whole limb is affected, are indicative of probable brain involvement and require urgent medical attention. Paraesthesias or weakness involving a dermatome indicate probable spinal cord or spinal nerve root involvement. Although it is possible that this may have other causes, such as an injured intervertebral disk, these symptoms indicate an urgent need for medical assessment. In combination with weakness, paralysis or loss of bowel or bladder control, they indicate a medical emergency.
Prevention
Underwater diving
To prevent the excess formation of bubbles that can lead to decompression sickness, divers limit their ascent rate—the recommended ascent rate used by popular decompression models is about per minute—and follow a decompression schedule as necessary. This schedule may require the diver to ascend to a particular depth, and remain at that depth until sufficient inert gas has been eliminated from the body to allow further ascent. Each of these is termed a "decompression stop", and a schedule for a given bottom time and depth may contain one or more stops, or none at all. Dives that contain no decompression stops are called "no-stop dives", but divers usually schedule a short "safety stop" at , depending on the training agency or dive computer.
The decompression schedule may be derived from decompression tables, decompression software, or from dive computers, and these are generally based upon a mathematical model of the body's uptake and release of inert gas as pressure changes. These models, such as the Bühlmann decompression algorithm, are modified to fit empirical data and provide a decompression schedule for a given depth and dive duration using a specified breathing gas mixture.
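To illustrate the kind of calculation these models perform, the sketch below implements the classic exponential (Haldanean) gas-loading equation for a single tissue compartment. Real tools such as the Bühlmann ZH-L16 algorithm track many compartments with different half-times and apply depth-dependent tolerance limits, so this is a teaching sketch under simplified assumptions, not dive-planning software.

import math

def tissue_inert_gas_pressure(p_start, p_inspired_inert, half_time_min, minutes):
    """Exponential uptake/washout of inert gas in one compartment (Haldane equation).

    p_start          -- inert gas tension in the compartment at the start (bar)
    p_inspired_inert -- inert gas partial pressure in the breathing gas (bar)
    half_time_min    -- compartment half-time in minutes
    minutes          -- elapsed time in minutes
    """
    k = math.log(2) / half_time_min
    return p_inspired_inert + (p_start - p_inspired_inert) * math.exp(-k * minutes)

# Example: a "fast" 5-minute compartment breathing air (about 79% nitrogen) at 4 bar absolute,
# starting from surface equilibrium; nitrogen tension after 20 minutes at depth.
p0 = 0.79 * 1.0
p_inspired = 0.79 * 4.0            # ignores water vapour and other corrections
print(tissue_inert_gas_pressure(p0, p_inspired, 5, 20))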
Since divers on the surface after a dive may still have excess inert gas in their bodies, decompression from any subsequent dive before this excess is eliminated needs to modify the schedule to take account of the residual gas load from the previous dive. This will result in a shorter allowable time under water without obligatory decompression stops, or an increased decompression time during the subsequent dive. The total elimination of excess gas may take many hours, and tables will indicate the time at normal pressures that is required, which may be up to 18 hours.
Decompression time can be significantly shortened by breathing mixtures containing much less inert gas during the decompression phase of the dive (or pure oxygen at stops in 6 metres (20 ft) of water or less). The reason is that the inert gas outgases at a rate proportional to the difference between the partial pressure of inert gas in the diver's body and its partial pressure in the breathing gas; whereas the likelihood of bubble formation depends on the difference between the inert gas partial pressure in the diver's body and the ambient pressure. Reduction in decompression requirements can also be gained by breathing a nitrox mix during the dive, since less nitrogen will be taken into the body than during the same dive done on air.
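As an illustration of why a nitrox mix reduces nitrogen uptake, the sketch below compares the inspired nitrogen partial pressure on air and on a 32% oxygen mix at the same depth, and computes the commonly used "equivalent air depth". Treating 10 metres of seawater as roughly one bar is a simplifying assumption.

```python
def ppn2(depth_m, fn2):
    """Inspired nitrogen partial pressure (bar), treating 10 msw as ~1 bar."""
    ambient_bar = depth_m / 10.0 + 1.0
    return ambient_bar * fn2

def equivalent_air_depth(depth_m, fn2):
    """Depth at which air would give the same inspired ppN2 as this mix."""
    ambient_bar = depth_m / 10.0 + 1.0
    return (ambient_bar * fn2 / 0.79 - 1.0) * 10.0

depth = 30.0
print(f"ppN2 on air at {depth} m:   {ppn2(depth, 0.79):.2f} bar")   # ~3.16 bar
print(f"ppN2 on EAN32 at {depth} m: {ppn2(depth, 0.68):.2f} bar")   # ~2.72 bar
print(f"Equivalent air depth on EAN32: {equivalent_air_depth(depth, 0.68):.1f} m")
```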
Following a decompression schedule does not completely protect against DCS. The algorithms used are designed to reduce the probability of DCS to a very low level, but do not reduce it to zero. The mathematical implications of all current decompression models are that provided that no tissue is ingassing, longer decompression stops will decrease decompression risk, or at worst not increase it. Efficient decompression requires the diver to ascend fast enough to establish as high a decompression gradient, in as many tissues, as safely possible, without provoking the development of symptomatic bubbles. This is facilitated by the highest acceptably safe oxygen partial pressure in the breathing gas, and avoiding gas changes that could cause counterdiffusion bubble formation or growth. The development of schedules that are both safe and efficient has been complicated by the large number of variables and uncertainties, including personal variation in response under varying environmental conditions and workload, attributed to variations of body type, fitness and other risk factors.
Exposure to altitude
One of the most significant breakthroughs in the prevention of altitude DCS is oxygen pre-breathing. Breathing pure oxygen significantly reduces the nitrogen loads in body tissues by reducing the partial pressure of nitrogen in the lungs, which induces diffusion of nitrogen from the blood into the breathing gas, and this effect eventually lowers the concentration of nitrogen in the other tissues of the body. If continued for long enough, and without interruption, this provides effective protection upon exposure to low-barometric pressure environments. However, breathing pure oxygen during flight alone (ascent, en route, descent) does not decrease the risk of altitude DCS as the time required for ascent is generally not sufficient to significantly desaturate the slower tissues.
Pure aviator oxygen, which has had moisture removed to prevent freezing of valves at altitude, is readily available and routinely used in general aviation mountain flying and at high altitudes. Most small general aviation aircraft are not pressurized, so supplemental oxygen use is an FAA requirement at higher altitudes.
Although pure oxygen pre-breathing is an effective method to protect against altitude DCS, it is logistically complicated and expensive for the protection of civil aviation flyers, either commercial or private. Therefore, it is currently used only by military flight crews and astronauts for protection during high-altitude and space operations. It is also used by flight test crews involved with certifying aircraft, and may also be used for high-altitude parachute jumps.
Astronauts aboard the International Space Station preparing for extra-vehicular activity (EVA) "camp out" at a reduced atmospheric pressure of 10.2 psi (70 kPa), spending eight sleeping hours in the Quest airlock chamber before their spacewalk. During the EVA they breathe 100% oxygen in their spacesuits, which operate at 4.3 psi (30 kPa), although research has examined the possibility of using 100% O2 at a higher suit pressure to lessen the pressure reduction, and hence the risk of DCS.
Treatment
Recompression on air was shown to be an effective treatment for minor DCS symptoms by Keays in 1909. Evidence of the effectiveness of recompression therapy utilizing oxygen was first shown by Yarbrough and Behnke, and has since become the standard of care for treatment of DCS. Recompression is normally carried out in a recompression chamber. At a dive site, a riskier alternative is in-water recompression.
Oxygen first aid has been used as an emergency treatment for diving injuries for years. Particularly if given within the first four hours of surfacing, it increases the success of recompression therapy as well as decreasing the number of recompression treatments required. Most fully closed-circuit diving rebreathers can deliver sustained high concentrations of oxygen-rich breathing gas and could be used as a means of supplying oxygen if dedicated equipment is not available.
It is beneficial to give fluids, as this helps reduce dehydration. It is no longer recommended to administer aspirin, unless advised to do so by medical personnel, as analgesics may mask symptoms. People should be made comfortable and placed in the supine position (horizontal), or the recovery position if vomiting occurs. In the past, both the Trendelenburg position and the left lateral decubitus position (Durant's maneuver) have been suggested as beneficial where air emboli are suspected, but are no longer recommended for extended periods, owing to concerns regarding cerebral edema.
First aid
All cases of decompression sickness should be treated initially with the highest available concentration of oxygen until hyperbaric oxygen therapy (100% oxygen delivered in a hyperbaric chamber) can be provided. Mild cases of the "bends" and some skin symptoms may disappear during descent from high altitude; however, it is recommended that these cases still be evaluated. Neurological symptoms, pulmonary symptoms, and mottled or marbled skin lesions should be treated with hyperbaric oxygen therapy if seen within 10 to 14 days of development. Early recompression has a history of better outcomes and less treatment being needed.
Normobaric oxygen administered at as close to 100% as practicable is known to be beneficial based on observed bubble reduction and symptom resolution. For this reason diver training in oxygen administration, and a system for administering a high percentage of inspired oxygen at quantities sufficient for plausible evacuation scenarios is desirable. Where oxygenation may be compromised the administration rate should be adjusted to ensure that the best practicable supplementation is maintained until supplies can be replenished.
A horizontal position is preferable during evacuation if possible, with the recovery position recommended for unconscious divers, as there is evidence that inert gas washout is improved in horizontal subjects, and that large arterial bubbles tend to distribute towards the head in upright positions. A head down position is thought to be harmful in DCS.
Oral hydration is recommended in fully conscious persons, and fluids should ideally be isotonic, without alcohol, carbonation or caffeine, as diving is known to cause dehydration, and rehydration is known to reduce post-dive venous gas emboli.
Intravascular rehydration is recommended if suitably competent responders are present. Glucose-free isotonic crystalloid solutions are preferred. Case evidence shows that aggressive rehydration can be life-saving in severe cases.
If there are no contraindications, a non-steroidal anti-inflammatory drug (NSAID) along with hyperbaric oxygen is likely to improve the rate of recovery. The most prominent NSAIDs are aspirin, ibuprofen, and naproxen, all available over the counter in most countries. Paracetamol (acetaminophen) is generally not considered an NSAID because it has only minor anti-inflammatory activity. Corticosteroids, pentoxifylline, aspirin, lidocaine and nicergoline have been used in early management of DCS, but there is insufficient evidence on their effectiveness.
Divers should be kept comfortably warm, as warm subjects are known to eliminate gas more quickly, but overheating aggravates neurological injury.
Delay of recompression
Observational evidence shows that outcomes are likely to be better after immediate recompression, which is only possible when a chamber is available on site, although the 2004 workshop on decompression concluded that for cases with mild symptoms, a delay before recompression is unlikely to cause any worsening of long-term outcomes.
In more serious cases recompression should be done as soon as safely possible. There is some evidence that delays longer than six hours result in slower or less complete recovery, and the number of treatments required may be increased.
Transport of a symptomatic diver
Exposing a case of decompression sickness to reduced ambient pressure will cause the bubbles to expand if not constrained by a rigid local tissue environment. This can aggravate the symptoms, and should be avoided if reasonably practicable. If a diver with DCS is transported by air, cabin pressure should be kept as close to sea level atmospheric pressure as possible, preferably not more than 150 m, either by cabin pressurisation or by remaining at low altitude throughout the flight. The risk of deterioration at higher altitudes must be considered against the risk of deterioration if not transported. Some divers with symptoms or signs of mild decompression sickness may be evacuated by pressurised commercial airliner for further treatment after a surface interval of at least 24 hours. The 2004 workshop considered it unlikely for this to cause a worse outcome. Most experience has been for short flights of less than two hours. There is little known about the effects of longer flights. Where possible, pre-flight and in-flight oxygen breathing at the highest available percentage is considered best practice. Similar precautions apply to surface transport through higher altitudes.
In-water recompression
Recompression and hyperbaric oxygen administered in a recompression chamber is recognised as the definitive treatment for DCI, but when there is no readily available access to a suitable hyperbaric chamber, and if symptoms are significant or progressing, in-water recompression (IWR) with oxygen is a medically recognised option where a group of divers including the symptomatic diver already have relevant training and equipment that provides a sufficient understanding of the associated risks and allows the involved parties to collectively accept responsibility for a decision to proceed with IWR.
In-water recompression (IWR) or underwater oxygen treatment is the emergency treatment of decompression sickness by returning the diver underwater to help the gas bubbles in the tissues, which are causing the symptoms, to resolve. It is a procedure that exposes the diver to significant risk, which should be compared with the risk associated with the other available options. Some authorities recommend that it only be used when the time to travel to the nearest recompression chamber is too long to save the victim's life; others take a more pragmatic approach and accept that in some circumstances IWR is the best available option. The risks may not be justified for cases of mild symptoms likely to resolve spontaneously, or for cases where the diver is likely to be unsafe in the water, but in-water recompression may be justified in cases where severe outcomes are likely, if conducted by a competent and suitably equipped team.
Carrying out in-water recompression when there is a nearby recompression chamber, or without suitable equipment and training, is never a desirable option. The risk of the procedure arises because a diver suffering from DCS is seriously ill and may become paralysed, unconscious, or stop breathing while under water. Any one of these events is likely to result in the diver drowning or asphyxiating, or suffering further injury during a subsequent rescue to the surface. This risk can be reduced by improving airway security through the use of surface-supplied gas and a helmet or full-face mask.
Several schedules have been published for in-water recompression treatment, but little data on their efficacy is available.
The decision of whether or not to attempt IWR is dependent on identifying the diver whose condition is serious enough to justify the risk, but whose clinical condition does not indicate that the risk is unacceptable. The risk may not be justified for mild DCI, if spontaneous recovery is probable whether the diver is recompressed or not, and surface oxygen is indicated for these cases. However, in these cases the risk of the recompression is also low, and early abandonment is also unlikely to cause further harm.
Contraindications
Some signs of decompression illness which suggest a risk of permanent injury are nevertheless considered contraindications for IWR. Hearing loss and vertigo displayed in isolation, with no other symptoms of DCI, may have been caused by inner ear barotrauma rather than DCI, and inner ear barotrauma is generally considered a contraindication for recompression. Even when caused by DCI, vertigo can make in-water treatment hazardous if accompanied by nausea and vomiting. A diver with a deteriorating or persistently reduced level of consciousness should not be recompressed in-water, nor should a diver who does not want to go back down, who has a history of oxygen toxicity in the preceding dives, or who has any physical injury or incapacitation which may make the procedure unsafe.
Definitive treatment
The duration of recompression treatment depends on the severity of symptoms, the dive history, the type of recompression therapy used and the patient's response to the treatment. One of the more frequently used treatment schedules is the US Navy Table 6, which provides hyperbaric oxygen therapy with a maximum pressure equivalent to 18 metres (60 ft) of seawater (2.8 bar PO2) for a total time under pressure of 288 minutes, of which 240 minutes are on oxygen and the balance are air breaks to minimise the possibility of oxygen toxicity.
A multiplace chamber is the preferred facility for treatment of decompression sickness as it allows direct physical access to the patient by medical personnel, but monoplace chambers are more widely available and should be used for treatment if a multiplace chamber is not available or transportation would cause significant delay in treatment, as the interval between onset of symptoms and recompression is important to the quality of recovery. It may be necessary to modify the optimum treatment schedule to allow use of a monoplace chamber, but this is usually better than delaying treatment. A US Navy treatment table 5 can be safely performed without air breaks if a built-in breathing system is not available. In most cases the patient can be adequately treated in a monoplace chamber at the receiving hospital.
Altitude decompression sickness
Treatment and management may vary depending on the grade or form of decompression sickness and the treating facility or organization. First aid at altitude is oxygen at the highest practicable concentration and earliest and largest practicable reduction in cabin altitude.
Ground-level 100% oxygen therapy is suggested for 2 hours following type-1 decompression sickness that occurs at altitude, if it resolves upon descent. In more severe cases, hyperbaric oxygen therapy following standard recompression protocols is indicated. Decompression sickness in aviation most commonly follows flights in non-pressurized aircraft, flights with cabin pressure fluctuations, or flying after diving. Cases have also been reported after the use of altitude chambers. These are relatively rare clinical events.
Prognosis
Immediate treatment with 100% oxygen, followed by recompression in a hyperbaric chamber, will in most cases result in no long-term effects. However, permanent long-term injury from DCS is possible. Three-month follow-ups on diving accidents reported to DAN in 1987 showed that 14.3% of the 268 divers surveyed had ongoing symptoms of Type II DCS, and 7% had ongoing symptoms of Type I DCS. Long-term follow-ups showed similar results, with 16% having permanent neurological sequelae.
Long-term effects are dependent on both the initial injury and the treatment. While almost all cases will resolve more quickly with treatment, milder cases may resolve adequately over time without recompression, where the damage is minor and is not significantly aggravated by lack of treatment. In some cases the cost, inconvenience, and risk to the patient may make it appropriate not to evacuate to a hyperbaric treatment facility. These cases should be assessed by a specialist in diving medicine, which can generally be done remotely by telephone or internet.
For joint pain, the likely tissues affected depend on the symptoms, and the urgency of hyperbaric treatment will depend largely on the tissues involved.
Sharp, localised pain that is affected by movement suggests tendon or muscle injury, both of which will usually fully resolve with oxygen and anti-inflammatory medication.
Sharp, localised pain that is not affected by movement suggests local inflammation, which will also usually fully resolve with oxygen and anti-inflammatory medication.
Deep, non-localised pain affected by movement suggests joint capsule tension, which is likely to fully resolve with oxygen and anti-inflammatory medication, though recompression will help it to resolve faster.
Deep, non-localised pain not affected by movement suggests bone medulla involvement, with ischaemia due to blood vessel blockage and swelling inside the bone, which is mechanistically associated with osteonecrosis, and therefore it has been strongly recommended that these symptoms are treated with hyperbaric oxygen.
Epidemiology
Decompression sickness is rare, with an estimated incidence of 2.8 to 4 cases per 10,000 dives, and the risk is 2.6 times greater for males than females. DCS affects approximately 1,000 U.S. scuba divers per year. In 1999, the Divers Alert Network (DAN) created "Project Dive Exploration" to collect data on dive profiles and incidents. From 1998 to 2002, they recorded 50,150 dives, from which 28 recompressions were required (although these will almost certainly include incidents of arterial gas embolism), a rate of about 0.05%.
Around 2013, Honduras had the highest number of decompression-related deaths and disabilities in the world, caused by unsafe practices in lobster diving among the indigenous Miskito people, who face great economic pressures. At that time it was estimated that in the country over 2000 divers had been injured and 300 others had died since the 1970s.
Timeline
1670: Robert Boyle demonstrated that a reduction in ambient pressure could lead to bubble formation in living tissue. This description of a bubble forming in the eye of a viper subjected to a near vacuum was the first recorded description of decompression sickness.
1769: Giovanni Morgagni described the post mortem findings of air in cerebral circulation and surmised that this was the cause of death.
1840: Charles Pasley, who was involved in the recovery of the sunken warship HMS Royal George, commented that, of those having made frequent dives, "not a man escaped the repeated attacks of rheumatism and cold".
1841: First documented case of decompression sickness, reported by a mining engineer who observed pain and muscle cramps among coal miners working in mine shafts air-pressurized to keep water out.
1854: Decompression sickness was reported among caisson workers on the Royal Albert Bridge, with one resulting death.
1867: Panamanian pearl divers using the revolutionary Sub Marine Explorer submersible repeatedly experienced "fever" due to rapid ascents. Continued sickness led to the vessel's abandonment in 1869.
1870: Bauer published outcomes of 25 paralyzed caisson workers.
From 1870 to 1910, all prominent features were established. Explanations at the time included: cold or exhaustion causing reflex spinal cord damage; electricity caused by friction on compression; organ congestion; and vascular stasis caused by decompression.
1871: The Eads Bridge in St Louis employed 352 compressed air workers, including Alphonse Jaminet as the physician in charge. There were 30 serious injuries and 12 fatalities. Jaminet himself developed decompression sickness, and his personal description was the first such recorded. According to Divers Alert Network, in its Inert Gas Exchange, Bubbles and Decompression Theory course, this is where "bends" was first used to refer to DCS.
1872: The similarity between decompression sickness and iatrogenic air embolism as well as the relationship between inadequate decompression and decompression sickness was noted by Friedburg. He suggested that intravascular gas was released by rapid decompression and recommended: slow compression and decompression; four-hour working shifts; limit to maximum pressure of 44.1 psig (4 atm); using only healthy workers; and recompression treatment for severe cases.
1873: Andrew Smith first used the term "caisson disease" describing 110 cases of decompression sickness as the physician in charge during construction of the Brooklyn Bridge. The project employed 600 compressed air workers. Recompression treatment was not used. The project chief engineer Washington Roebling had caisson disease, and endured the after-effects of the disease for the rest of his life. During this project, decompression sickness became known as "The Grecian Bends" or simply "the bends" because affected individuals characteristically bent forward at the hips: this is possibly reminiscent of a then popular women's fashion and dance maneuver known as the Grecian Bend.
1890: During construction of the Hudson River Tunnel contractor's agent Ernest William Moir pioneered the use of an airlock chamber for treatment.
1900: Leonard Hill used a frog model to prove that decompression causes bubbles and that recompression resolves them. Hill advocated linear or uniform decompression profiles. This type of decompression is used today by saturation divers. His work was financed by Augustus Siebe and the Siebe Gorman Company.
1904: Tunnel building to and from Manhattan Island caused over 3,000 injuries and over 30 deaths which led to laws requiring PSI limits and decompression rules for "sandhogs" in the United States.
1904: Siebe and Gorman in conjunction with Leonard Hill developed and produced a closed bell in which a diver can be decompressed at the surface.
1908: "The Prevention of Compressed Air Illness" was published by JS Haldane, Boycott and Damant recommending staged decompression. These tables were accepted for use by the Royal Navy.
1914–16: Experimental decompression chambers were in use on land and aboard ship.
1924: The US Navy published the first standardized recompression procedure.
1930s: Albert R Behnke separated the symptoms of Arterial Gas Embolism (AGE) from those of DCS.
1935: Behnke et al. experimented with oxygen for recompression therapy.
1937: Behnke introduced the "no-stop" decompression tables.
1941: Altitude DCS is treated with hyperbaric oxygen for the first time.
1944: US Navy published hyperbaric treatment tables "Long Air Recompression Table with Oxygen" and "Short Oxygen Recompression Table", both using 100% oxygen below 60 fsw (18 msw)
1945: Field results showed that the 1944 oxygen treatment table was not yet satisfactory, so a series of tests were conducted by staff from the Navy Medical Research Institute and the Navy Experimental Diving Unit using human subjects to verify and modify the treatment tables. Tests were conducted using the 100-foot air-oxygen treatment table and the 100-foot air treatment table, which were found to be satisfactory. Other tables were extended until they produced satisfactory results. The resulting tables were used as the standard treatment for the next 20 years, and these tables and slight modifications were adopted by other navies and industry. Over time, evidence accumulated that the success of these tables for severe decompression sickness was not very good.
1957: Robert Workman established a new method for calculation of decompression requirements (M-values).
1959: The "SOS Decompression Meter", a submersible mechanical device that simulated nitrogen uptake and release, was introduced.
1960: FC Golding et al. split the classification of DCS into Type 1 and 2.
1965: Low success rates of the existing US Navy treatment tables led to the development of the oxygen treatment table by Goodman and Workman in 1965, variations of which are still in general use as the definitive treatment for most cases of decompression sickness.
1965: LeMessurier and Hills published "A thermodynamic approach arising from a study on Torres Strait diving techniques", which suggests that decompression by conventional models results in bubble formation which is then eliminated by re-dissolving at the decompression stops.
1976: M.P. Spencer showed that the sensitivity of decompression testing is increased by the use of ultrasonic methods which can detect mobile venous bubbles before symptoms of DCS emerge.
1982: Paul K Weathersby, Louis D Homer and Edward T Flynn introduce survival analysis into the study of decompression sickness.
1983: Orca produced the "EDGE", a personal dive computer, using a microprocessor to calculate nitrogen absorption for twelve tissue compartments.
1984: Albert A Bühlmann released his book "Decompression–Decompression Sickness", which detailed his deterministic model for calculation of decompression schedules.
1989: Dive computers had not yet been widely accepted, but after the 1989 AAUS dive computer workshop published a group consensus list of recommendations for the use of dive computers in scientific diving, most opposition to dive computers dissipated, numerous new models were introduced, the technology dramatically improved and dive computers became standard scuba diving equipment. Over time, some of the recommendations became irrelevant as the technology improved.
2000: HydroSpace Engineering developed the HS Explorer, a Trimix computer with optional PO2 monitoring and twin decompression algorithms: Bühlmann and the first full real-time RGBM implementation.
2001: The US Navy approved the use of Cochran NAVY decompression computer with the VVAL 18 Thalmann algorithm for Special Warfare operations.
By 2010: The use of dive computers for decompression status tracking was virtually ubiquitous among recreational divers and widespread in scientific diving.
2018: A group of diving medical experts issued a consensus guideline on pre-hospital decompression sickness management and concluded that in-water recompression is a valid and effective emergency treatment where a chamber is not available, but is only appropriate in groups that have been trained and are competent in the skills required for IWR and have appropriate equipment.
2023: The animal rights group PETA reported that it had successfully lobbied the Navy to end a pair of studies that involved subjecting sheep to conditions simulating a rapid ascent from great depth, causing them pain and sometimes leaving the animals paralyzed or dead.
Society and culture
Economics
In the United States, it is common for medical insurance not to cover treatment for the bends that is the result of recreational diving. This is because scuba diving is considered an elective and "high-risk" activity and treatment for decompression sickness is expensive. A typical stay in a recompression chamber will easily cost several thousand dollars, even before emergency transportation is included.
In the United Kingdom, treatment of DCS is provided by the National Health Service. This may occur either at a specialised facility or at a hyperbaric centre based within a general hospital.
Other animals
Animals may also contract DCS, especially those caught in nets and rapidly brought to the surface. It has been documented in loggerhead turtles and likely in prehistoric marine animals as well. Modern reptiles are susceptible to DCS, and there is some evidence that marine mammals such as cetaceans and seals may also be affected. AW Carlsen has suggested that the presence of a right-left shunt in the reptilian heart may account for the predisposition in the same way as a patent foramen ovale does in humans.
Footnotes
See also
Notes
1. autochthonous: formed or originating in the place where found.
References
Sources
External links
Divers Alert Network: diving medicine articles
Dive Tables from the NOAA
CDC – Decompression Sickness and Tunnel Workers – NIOSH Workplace Safety and Health Topic
Pathophysiology of decompression and acute dysbaric disorders
"Decompression Sickness" on Medscape
Norwegian Diving- and Treatment Tables. Tables and guidelines for surface orientated diving on air and nitrox. Tables and guidelines for treatment of decompression illness. Jan Risberg • Andreas Møllerløkken • Olav Sande Eftedal. 12.8.2019
Decompression sickness
Aviation medicine
Medical emergencies
Effects of external causes
Nausea
Nausea is a diffuse sensation of unease and discomfort, sometimes perceived as an urge to vomit. It can be a debilitating symptom if prolonged and has been described as placing discomfort on the chest, abdomen, or back of the throat.
Over 30 definitions of nausea were proposed in a 2011 book on the topic.
Nausea is a non-specific symptom, which means that it has many possible causes. Some common causes of nausea are gastroenteritis and other gastrointestinal disorders, food poisoning, motion sickness, dizziness, migraine, fainting, low blood sugar, anxiety, hyperthermia, dehydration and lack of sleep. Nausea is a side effect of many medications, including chemotherapy, and is a feature of morning sickness in early pregnancy. Nausea may also be caused by disgust and depression.
Medications taken to prevent and treat nausea and vomiting are called antiemetics. The most commonly prescribed antiemetics in the US are promethazine, metoclopramide, and the newer ondansetron. The word nausea is from Latin nausea, from Greek ναυσία (nausia) and ναυτία (nautia), terms for seasickness and motion sickness ("feeling sick or queasy").
Causes
Gastrointestinal infections (37%) and food poisoning are the two most common causes of acute nausea and vomiting. Side effects from medications (3%) and pregnancy are also relatively frequent. There are many causes of chronic nausea. Nausea and vomiting remain undiagnosed in 10% of the cases. Aside from morning sickness, there are no sex differences in complaints of nausea. After childhood, doctor consultations decrease steadily with age. Only a fraction of one percent of doctor visits by those over 65 are due to nausea.
Gastrointestinal
Gastrointestinal infection is one of the most common causes of acute nausea and vomiting. Chronic nausea may be the presentation of many gastrointestinal disorders, occasionally as the major symptom, such as gastroesophageal reflux disease, functional dyspepsia, gastritis, biliary reflux, gastroparesis, peptic ulcer, celiac disease, non-celiac gluten sensitivity, Crohn's disease, hepatitis, upper gastrointestinal malignancy, and pancreatic cancer. Uncomplicated Helicobacter pylori infection does not cause chronic nausea.
Food poisoning
Food poisoning usually causes an abrupt onset of nausea and vomiting one to six hours after ingestion of contaminated food and lasts for one to two days. It is due to toxins produced by bacteria in food.
Medications
Many medications can potentially cause nausea. Some of the most frequently associated include cytotoxic chemotherapy regimens for cancer and other diseases, and general anaesthetic agents. An old cure for migraine, ergotamine, is well known to cause devastating nausea in some patients; a person using it for the first time will be prescribed an antiemetic for relief if needed.
Pregnancy
Nausea or "morning sickness" is common during early pregnancy but may occasionally continue into the second and third trimesters. In the first trimester nearly 80 % of women have some degree of nausea. Pregnancy should therefore be considered as a possible cause of nausea in any sexually active woman of child-bearing age. While usually it is mild and self-limiting, severe cases known as hyperemesis gravidarum may require treatment.
Disequilibrium
A number of conditions involving balance such as motion sickness and vertigo can lead to nausea and vomiting.
Gynecologic
Dysmenorrhea can cause nausea.
Psychiatric
Nausea may be caused by depression, anxiety disorders and eating disorders.
Potentially serious
While most causes of nausea are not serious, some serious conditions are associated with nausea. These include pancreatitis, small bowel obstruction, appendicitis, cholecystitis, hepatitis, Addisonian crisis, diabetic ketoacidosis, increased intracranial pressure, spontaneous intracranial hypotension, brain tumors, meningitis, heart attack, rabies, carbon monoxide poisoning and many others.
Comprehensive list
Inside the abdomen
Obstructing disorders
Gastric outlet obstruction
Small bowel obstruction
Colonic obstruction
Superior mesenteric artery syndrome
Enteric infections
Viral infection
Bacterial infection
Inflammatory diseases
Celiac disease
Cholecystitis
Pancreatitis
Appendicitis
Hepatitis
Sensorimotor dysfunction
Gastroparesis
Intestinal pseudo-obstruction
Gastroesophageal reflux disease
Irritable bowel syndrome
Cyclic vomiting syndrome
Other
Non-celiac gluten sensitivity
Biliary colic
Kidney stone
Cirrhosis
Abdominal irradiation (Hasler WL. "Nausea, Vomiting, and Indigestion". In: Kasper D, Fauci A, Hauser S, Longo D, Jameson J, Loscalzo J, eds. Harrison's Principles of Internal Medicine, 19e. New York, NY: McGraw-Hill; 2015.)
Outside the abdomen
Cardiopulmonary
Cardiomyopathy
Myocardial infarction (heart attack)
Paroxysmal cough
Inner-ear diseases
Motion sickness
Labyrinthitis
Malignancy
Intracerebral disorders
Malignancy
Hemorrhage
Abscess
Hydrocephalus
Meningitis
Encephalitis
Rabies
Psychiatric illnesses
Anorexia and bulimia nervosa
Depression
Drug withdrawal
Other
Post-operative vomiting
Nociception
Altitude sickness
Medications and metabolic disorders
Drugs
Chemotherapy
Antibiotics
Antiarrhythmics
Digoxin
Oral hypoglycemic medications
Oral contraceptives
Norepinephrine reuptake inhibitors
Endocrine/metabolic disease
Pregnancy
Uremia
Ketoacidosis
Thyroid and parathyroid disease
Adrenal insufficiency
Toxins
Liver failure
Alcohol
Pathophysiology
Research on nausea and vomiting has relied on using animal models to mimic the anatomy and neuropharmacologic features of the human body. The physiologic mechanism of nausea is a complex process that has yet to be fully elucidated. There are four general pathways that are activated by specific triggers in the human body that go on to create the sensation of nausea and vomiting.
Central nervous system (CNS): Stimuli can affect areas of the CNS including the cerebral cortex and the limbic system. These areas are activated by elevated intracranial pressure, irritation of the meninges (i.e. blood or infection), and extreme emotional triggers such as anxiety. The supratentorial region is also responsible for the sensation of nausea.
Chemoreceptor trigger zone (CTZ): The CTZ is located in the area postrema in the floor of the fourth ventricle within the brain. This area is outside the blood brain barrier, and is therefore readily exposed to substances circulating through the blood and cerebral spinal fluid. Common triggers of the CTZ include metabolic abnormalities, toxins, and medications. Activation of the CTZ is mediated by dopamine (D2) receptors, serotonin (5HT3) receptors, and neurokinin receptors (NK1).
Vestibular system: This system is activated by disturbances to the vestibular apparatus in the inner ear. These include movements that cause motion sickness and dizziness. This pathway is triggered via histamine (H1) receptors and acetylcholine (ACh) receptors.
Peripheral Pathways: These pathways are triggered via chemoreceptors and mechanoreceptors in the gastrointestinal tract, as well as other organs such as the heart and kidneys. Common activators of these pathways include toxins present in the gastrointestinal lumen and distension of the gastrointestinal lumen from blockage or dysmotility of the bowels. Signals from these pathways travel via multiple neural tracts including the vagus, glossopharyngeal, splanchnic, and sympathetic nerves.
Signals from any of these pathways then travel to the brainstem, activating several structures including the nucleus of the solitary tract, the dorsal motor nucleus of the vagus, and central pattern generator. These structures go on to signal various downstream effects of nausea and vomiting. The body's motor muscle responses involve halting the muscles of the gastrointestinal tract, and in fact causing reversed propulsion of gastric contents towards the mouth while increasing abdominal muscle contraction. Autonomic effects involve increased salivation and the sensation of feeling faint that often occurs with nausea and vomiting.
Pre-nausea pathophysiology
Alterations in heart rate have been described, as well as the release of vasopressin from the posterior pituitary.
Diagnosis
Patient history
Taking a thorough patient history may reveal important clues to the cause of nausea and vomiting. If the patient's symptoms have an acute onset, then drugs, toxins, and infections are likely. In contrast, a long-standing history of nausea will point towards a chronic illness as the culprit. The timing of nausea and vomiting after eating food is an important factor to pay attention to. Symptoms that occur within an hour of eating may indicate an obstruction proximal to the small intestine, such as gastroparesis or pyloric stenosis. An obstruction further down in the intestine or colon will cause delayed vomiting. An infectious cause of nausea and vomiting such as gastroenteritis may present several hours to days after the food was ingested. The contents of the emesis are a valuable clue towards determining the cause. Bits of fecal matter in the emesis indicate obstruction in the distal intestine or the colon. Emesis that is of a bilious nature (greenish in color) localizes the obstruction to a point past the stomach. Emesis of undigested food points to an obstruction prior to the gastric outlet, such as achalasia or Zenker's diverticulum. If the patient experiences reduced abdominal pain after vomiting, then obstruction is a likely etiology. However, vomiting does not relieve the pain brought on by pancreatitis or cholecystitis.
Physical exam
It is important to watch out for signs of dehydration, such as orthostatic hypotension and loss of skin turgor. Auscultation of the abdomen can produce several clues to the cause of nausea and vomiting. A high-pitched tinkling sound indicates possible bowel obstruction, while a splashing "succussion" sound is more indicative of gastric outlet obstruction. Eliciting pain on the abdominal exam when pressing on the patient may indicate an inflammatory process. Signs such as papilledema, visual field losses, or focal neurological deficits are red flag signs for elevated intracranial pressure.
Diagnostic testing
When a history and physical exam are not enough to determine the cause of nausea and vomiting, certain diagnostic tests may prove useful. A chemistry panel would be useful for electrolyte and metabolic abnormalities. Liver function tests and lipase would identify pancreaticobiliary diseases. Abdominal X-rays showing air-fluid levels indicate bowel obstruction, while an X-ray showing air-filled bowel loops are more indicative of ileus. More advanced imaging and procedures may be necessary, such as a CT scan, upper endoscopy, colonoscopy, barium enema, or MRI. Abnormal GI motility can be assessed using specific tests like gastric scintigraphy, wireless motility capsules, and small-intestinal manometry.
Treatment
If dehydration is present due to loss of fluids from severe vomiting, rehydration with oral electrolyte solutions is preferred. If this is not effective or possible, intravenous rehydration may be required. Medical care is recommended if a person cannot keep any liquids down, has symptoms for more than 2 days, is weak, has a fever, has stomach pain, vomits more than two times in a day or does not urinate for more than 8 hours.
Medications
Numerous pharmacologic medications are available for the treatment of nausea. There is no medication that is clearly superior to other medications for all cases of nausea. The choice of antiemetic medication may be based on the situation during which the person experiences nausea. For people with motion sickness and vertigo, antihistamines and anticholinergics such as meclizine and scopolamine are particularly effective. Nausea and vomiting associated with migraine headaches respond best to dopamine antagonists such as metoclopramide, prochlorperazine, and chlorpromazine. In cases of gastroenteritis, serotonin antagonists such as ondansetron were found to suppress nausea and vomiting, as well as reduce the need for IV fluid resuscitation. The combination of pyridoxine and doxylamine is the first line treatment for pregnancy-related nausea and vomiting. Dimenhydrinate is an inexpensive and effective over the counter medication for preventing postoperative nausea and vomiting. Other factors to consider when choosing an antiemetic medication include the person's preference, side-effect profile, and cost.
Nabilone is also indicated for this purpose.
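The situation-dependent choices described above can be summarised as a simple lookup. The pairings below are transcribed from the preceding paragraph and are illustrative rather than prescriptive clinical guidance; the function name is an arbitrary choice for the example.

```python
# Illustrative mapping of clinical situation to the antiemetic classes
# mentioned above; not a substitute for clinical judgement.
ANTIEMETIC_BY_SITUATION = {
    "motion sickness or vertigo": "antihistamines/anticholinergics (e.g. meclizine, scopolamine)",
    "migraine-associated": "dopamine antagonists (e.g. metoclopramide, prochlorperazine, chlorpromazine)",
    "gastroenteritis": "serotonin antagonists (e.g. ondansetron)",
    "pregnancy-related": "pyridoxine plus doxylamine (first line)",
    "postoperative": "dimenhydrinate (inexpensive, over the counter)",
}

def suggest_antiemetic(situation: str) -> str:
    """Return the class suggested above for a given situation, if listed."""
    return ANTIEMETIC_BY_SITUATION.get(situation, "no situation-specific suggestion in this list")

print(suggest_antiemetic("gastroenteritis"))
```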
Alternative medicine
In certain people, cannabinoids may be effective in reducing chemotherapy associated nausea and vomiting. Several studies have demonstrated the therapeutic effects of cannabinoids for nausea and vomiting in the advanced stages of illnesses such as cancer and AIDS.
In hospital settings topical anti-nausea gels are not indicated because of lack of research backing their efficacy. Topical gels containing lorazepam, diphenhydramine, and haloperidol are sometimes used for nausea but are not equivalent to more established therapies.
Ginger has also been shown to be potentially effective in treating several types of nausea.
Prognosis
The outlook depends on the cause. Most people recover within a few hours or a day. While short-term nausea and vomiting are generally harmless, they may sometimes indicate a more serious condition. When associated with prolonged vomiting, it may lead to dehydration or dangerous electrolyte imbalances or both. Repeated intentional vomiting, characteristic of bulimia, can cause stomach acid to wear away at the enamel present on the teeth.
Epidemiology
Nausea and/or vomiting is the main complaint in 1.6% of visits to family physicians in Australia. However, only 25% of people with nausea visit their family physician. In Australia, nausea, as opposed to vomiting, occurs most frequently in persons aged 15–24 years, and is less common in other age groups.
See also
Cancer and nausea
Vasodilation
References
External links
Symptoms and signs: Digestive system and abdomen
Vomiting
Hypermetabolism
Hypermetabolism is defined as an elevated resting energy expenditure (REE) > 110% of predicted REE. Hypermetabolism is accompanied by a variety of internal and external symptoms, most notably extreme weight loss, and can also be a symptom in itself. This state of increased metabolic activity can signal underlying issues, especially hyperthyroidism. Patients with fatal familial insomnia can also present with hypermetabolism; however, this universally fatal disorder is exceedingly rare, with only a few known cases worldwide. The drastic impact of the hypermetabolic state on patient nutritional requirements is often understated or overlooked as well.
Signs and symptoms
Symptoms may last for days, weeks, or months until the disorder is healed. The most apparent sign of hypermetabolism is an abnormally high intake of calories followed by continuous weight loss. Internal symptoms of hypermetabolism include: peripheral insulin resistance, elevated catabolism of protein, carbohydrates and triglycerides, and a negative nitrogen balance in the body.
Outward symptoms of hypermetabolism may include:
Weight loss
Anemia
Fatigue
Elevated heart rate
Irregular heartbeat
Insomnia
Dysautonomia
Shortness of breath
Muscle weakness
Excessive sweating
Pathophysiology
During the acute phase, the liver redirects protein synthesis, causing up-regulation of certain proteins and down-regulation of others. Measuring the serum level of proteins that are up- and down-regulated during the acute phase can reveal extremely important information about the patient's nutritional state. The most important up-regulated protein is C-reactive protein, which can rapidly increase 20- to 1,000-fold during the acute phase.
Hypermetabolism also causes expedited catabolism of carbohydrates, proteins, and triglycerides in order to meet the increased metabolic demands.
Diagnosis
Quantitation by indirect calorimetry, as opposed to the Harris-Benedict equation, is needed to accurately measure REE in cancer patients.
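As a worked illustration of the "REE > 110% of predicted" criterion, the sketch below uses one commonly cited form of the Harris-Benedict equation to estimate predicted REE. The coefficients are the classic published values as remembered here and should be checked against a reference; as noted above, indirect calorimetry is still preferred for the measured value, and the measured figure in the example is hypothetical.

```python
def harris_benedict_ree(sex: str, weight_kg: float, height_cm: float, age_yr: float) -> float:
    """Predicted resting energy expenditure in kcal/day (original Harris-Benedict form)."""
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

def is_hypermetabolic(measured_ree_kcal: float, predicted_ree_kcal: float) -> bool:
    """Hypermetabolism defined here as measured REE > 110% of predicted REE."""
    return measured_ree_kcal > 1.10 * predicted_ree_kcal

predicted = harris_benedict_ree("male", weight_kg=70, height_cm=175, age_yr=40)
measured = 2100  # hypothetical value from indirect calorimetry
print(f"Predicted REE: {predicted:.0f} kcal/day, hypermetabolic: {is_hypermetabolic(measured, predicted)}")
```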
Differential diagnosis
Many different illnesses can cause an increase in metabolic activity as the body combats illness and disease in order to heal itself.
Hypermetabolism is a common symptom of various pathologies. Some of the most prevalent diseases characterized by hypermetabolism are listed below.
Hyperthyroidism: Manifestation: An overactive thyroid often causes a state of increased metabolic activity.
Friedreich's ataxia: Manifestation: Local cerebral metabolic activity is increased extensively as the disease progresses.
Fatal familial insomnia: Manifestation: Hypermetabolism in the thalamus occurs and disrupts sleep spindle formation that occurs there.
Graves' disease: Manifestation: Excess hypermetabolically-induced thyroid hormone activates sympathetic pathways, causing the eyelids to retract and remain constantly elevated.
Anorexia and bulimia: Manifestation: The prolonged stress put on the body as a result of these eating disorders forces the body into starvation mode. Some patients recovering from these disorders experience hypermetabolism until they resume normal diets.
Astrocytoma: Manifestation: Causes hypermetabolic lesions in the brain
Treatment
Ibuprofen, polyunsaturated fatty acids, and beta-blockers have been reported in some preliminary studies to decrease REE, which may allow patients to meet their caloric needs and gain weight.
References
Metabolism
Vital signs
Vital signs (also known as vitals) are a group of the four to six most crucial medical signs that indicate the status of the body's vital (life-sustaining) functions. These measurements are taken to help assess the general physical health of a person, give clues to possible diseases, and show progress toward recovery. The normal ranges for a person's vital signs vary with age, weight, sex, and overall health.
There are four primary vital signs: body temperature, blood pressure, pulse (heart rate), and breathing rate (respiratory rate), often notated as BT, BP, HR, and RR. However, depending on the clinical setting, the vital signs may include other measurements called the "fifth vital sign" or "sixth vital sign."
Early warning scores have been proposed that combine the individual values of vital signs into a single score. This was done in recognition that deteriorating vital signs often precede cardiac arrest and/or admission to the intensive care unit. Used appropriately, a rapid response team can assess and treat a deteriorating patient and prevent adverse outcomes.
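A minimal sketch of how an early warning score aggregates vital signs is shown below. The scoring bands are invented for illustration and do not reproduce any published system such as NEWS2; a real tool uses validated bands and escalation thresholds.

```python
def band_score(value, bands):
    """Return the points of the first band whose (low, high) range contains value."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 3  # outside all listed bands: treat as most abnormal

# Purely illustrative bands (points rise as the value departs from a normal range).
RESP_BANDS = [(12, 20, 0), (9, 11, 1), (21, 24, 2)]                    # breaths/min
HR_BANDS   = [(51, 90, 0), (41, 50, 1), (91, 110, 1), (111, 130, 2)]   # beats/min
TEMP_BANDS = [(36.1, 38.0, 0), (35.1, 36.0, 1), (38.1, 39.0, 1)]       # degrees C
SBP_BANDS  = [(111, 219, 0), (101, 110, 1), (91, 100, 2)]              # mmHg systolic

def early_warning_score(resp_rate, heart_rate, temp_c, systolic_bp):
    """Sum the per-parameter points into a single score; higher means sicker."""
    return (band_score(resp_rate, RESP_BANDS) + band_score(heart_rate, HR_BANDS)
            + band_score(temp_c, TEMP_BANDS) + band_score(systolic_bp, SBP_BANDS))

# Example: a deteriorating patient accumulates points across several vital signs.
print(early_warning_score(resp_rate=24, heart_rate=118, temp_c=38.6, systolic_bp=95))
```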
Primary vital signs
There are four primary vital signs which are standard in most medical settings:
Body temperature
Heart rate or Pulse
Respiratory rate
Blood pressure
The equipment needed is a thermometer, a sphygmomanometer, and a watch. Although a pulse can be taken by hand, a stethoscope may be required for a clinician to take a patient's apical pulse.
Temperature
Temperature recording gives an indication of core body temperature, which is normally tightly controlled (thermoregulation), as it affects the rate of chemical reactions. Body temperature is maintained through a balance of the heat produced by the body and the heat lost from the body.
Temperature can be recorded in order to establish a baseline for the individual's normal body temperature for the site and measuring conditions.
Temperature can be measured from the mouth, rectum, axilla (armpit), ear, or skin. Oral, rectal, and axillary temperature can be measured with either a glass or electronic thermometer. Note that rectal temperature measures approximately 0.5 °C higher than oral temperature, and axillary temperature approximately 0.5 °C less than oral temperature. Aural and skin temperature measurements require special devices designed to measure temperature from these locations.
While 37 °C (98.6 °F) is traditionally cited as "normal" body temperature, there is some variance between individuals. Most have a normal body temperature set point that falls within a narrow range around this value.
The main reason for checking body temperature is to solicit any signs of systemic infection or inflammation in the presence of a fever. Fever is generally considered a temperature of 38.0 °C (100.4 °F) or above. Other causes of elevated temperature include hyperthermia, which results from unregulated heat generation or abnormalities in the body's heat exchange mechanisms.
Temperature depression (hypothermia) also needs to be evaluated. Hypothermia is classified as a core temperature below 35.0 °C (95.0 °F).
It is also recommended to review the trend of the patient's temperature over time. A fever of 38 °C does not necessarily indicate an ominous sign if the patient's previous temperature has been higher.
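A short sketch of how the measurements above might be combined: the approximate 0.5 °C site offsets and the fever and hypothermia cut-offs are those given in this section, and converting every reading to an oral-equivalent value is a simplifying assumption for the example.

```python
def oral_equivalent(temp_c: float, site: str) -> float:
    """Approximate an oral-equivalent temperature from the measurement site,
    using the ~0.5 degree C offsets described above (rectal reads higher,
    axillary reads lower than oral)."""
    offsets = {"oral": 0.0, "rectal": -0.5, "axillary": +0.5}
    return temp_c + offsets.get(site, 0.0)

def classify_temperature(temp_c: float) -> str:
    if temp_c < 35.0:
        return "hypothermia"
    if temp_c >= 38.0:
        return "fever"
    return "within the usual range"

reading = oral_equivalent(38.7, "rectal")   # rectal reading of 38.7 degrees C
print(f"Oral-equivalent {reading:.1f} C: {classify_temperature(reading)}")
```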
Pulse
The pulse is the rate at which the heart beats while pumping blood through the arteries, recorded as beats per minute (bpm). It may also be called "heart rate". In addition to providing the heart rate, the pulse should also be evaluated for strength and obvious rhythm abnormalities. The pulse is commonly taken at the wrist (radial artery). Alternative sites include the elbow (brachial artery), the neck (carotid artery), behind the knee (popliteal artery), or in the foot (dorsalis pedis or posterior tibial arteries). The pulse is taken with the index finger and middle finger by pushing with firm yet gentle pressure at the locations described above, and counting the beats felt per 60 seconds (or per 30 seconds and multiplying by two). The pulse rate can also be measured by listening directly to the heartbeat using a stethoscope. The pulse may vary due to exercise, fitness level, disease, emotions, and medications. The pulse also varies with age. A newborn can have a heart rate of 100–160 bpm, an infant (0–5 months old) a heart rate of 90–150 bpm, and a toddler (6–12 months old) a heart rate of 80–140 bpm. A child aged 1–3 years old can have a heart rate of 80–130 bpm, a child aged 3–5 years old a heart rate of 80–120 bpm, an older child (age of 6–10) a heart rate of 70–110 bpm, and an adolescent (age 11–14) a heart rate of 60–105 bpm. An adult (age 15+) can have a heart rate of 60–100 bpm.
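The age-dependent ranges listed above can be expressed as a simple lookup. The table below transcribes the figures from this paragraph and is illustrative only; the labels and function are arbitrary names for the example.

```python
# (age label, lower bpm, upper bpm) transcribed from the ranges given above.
HEART_RATE_RANGES = [
    ("newborn", 100, 160),
    ("infant (0-5 months)", 90, 150),
    ("toddler (6-12 months)", 80, 140),
    ("child (1-3 years)", 80, 130),
    ("child (3-5 years)", 80, 120),
    ("child (6-10 years)", 70, 110),
    ("adolescent (11-14 years)", 60, 105),
    ("adult (15+ years)", 60, 100),
]

def in_expected_range(age_label: str, bpm: int) -> bool:
    """Check whether a measured pulse falls inside the range listed for that age group."""
    for label, low, high in HEART_RATE_RANGES:
        if label == age_label:
            return low <= bpm <= high
    raise ValueError(f"unknown age group: {age_label}")

print(in_expected_range("adult (15+ years)", 88))   # True
```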
Respiratory rate
Average respiratory rates vary between ages, but the normal reference range for people age 18 to 65 is 16–20 breaths per minute. The value of respiratory rate as an indicator of potential respiratory dysfunction has been investigated but findings suggest it is of limited value. Respiratory rate is a clear indicator of acidotic states, as the main function of respiration is removal of CO2 leaving bicarbonate base in circulation.
Blood pressure
Blood pressure is recorded as two readings: a higher systolic pressure, which occurs during the maximal contraction of the heart, and the lower diastolic or resting pressure. In adults, a normal blood pressure is 120/80, with 120 being the systolic and 80 being the diastolic reading. Usually, the blood pressure is read from the left arm unless there is some damage to the arm. The difference between the systolic and diastolic pressure is called the pulse pressure. The measurement of these pressures is now usually done with an aneroid or electronic sphygmomanometer. The classic measurement device is a mercury sphygmomanometer, using a column of mercury measured off in millimeters. In the United States and UK, the common form is millimeters of mercury, while elsewhere SI units of pressure are used. There is no natural 'normal' value for blood pressure, but rather a range of values that on increasing are associated with increased risks. The guideline acceptable reading also takes into account other co-factors for disease. Therefore, elevated blood pressure (hypertension) is variously defined when the systolic number is persistently over 140–160 mmHg. Low blood pressure is hypotension. Blood pressures are also taken at other portions of the extremities. These pressures are called segmental blood pressures and are used to evaluate blockage or arterial occlusion in a limb (see Ankle brachial pressure index).
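A small worked example of the quantities described above: pulse pressure as the difference between the systolic and diastolic readings, and a coarse flag using the "persistently over 140 mmHg systolic" figure mentioned in this paragraph. This is illustrative only, not a diagnostic rule.

```python
def pulse_pressure(systolic_mmhg: int, diastolic_mmhg: int) -> int:
    """Pulse pressure is the difference between systolic and diastolic pressure."""
    return systolic_mmhg - diastolic_mmhg

def flags_hypertension(systolic_readings_mmhg: list, threshold: int = 140) -> bool:
    """Coarse illustration: systolic persistently over the threshold across readings."""
    return all(s > threshold for s in systolic_readings_mmhg)

print(pulse_pressure(120, 80))                 # 40 mmHg for the textbook 120/80 reading
print(flags_hypertension([152, 148, 151]))     # True: persistently above 140 mmHg
```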
Other signs
In the U.S., in addition to the above four, many providers are required or encouraged by government technology-in-medicine laws to record the patient's height, weight, and body mass index. In contrast to the traditional vital signs, these measurements are not useful for assessing acute changes in state because of the rate at which they change; however, they are useful for assessing the impact of prolonged illness or chronic health problems.
The definition of vital signs may also vary with the setting of the assessment. Emergency medical technicians (EMTs), in particular, are taught to measure the vital signs of respiration, pulse, skin, pupils, and blood pressure as "the 5 vital signs" in a non-hospital setting.
Fifth vital signs
The "fifth vital sign" may refer to a few different parameters.
Pain is considered a standard fifth vital sign in some organizations, such as the U.S. Veterans Affairs. Pain is measured on a 0–10 pain scale based on subjective patient reporting and may be unreliable. Some studies show that recording pain routinely may not change management.
Menstrual cycle
Oxygen saturation (as measured by pulse oximetry)
Blood glucose level
Sixth vital signs
There is no standard "sixth vital sign"; its use is more informal and discipline-dependent.
End-tidal CO2
Functional status
Shortness of breath
Gait speed
Delirium
Variations by age
Children and infants have respiratory and heart rates that are faster than those of adults; typical heart rate ranges by age are given in the Pulse section above.
Monitoring
Monitoring of vital parameters most commonly includes at least blood pressure and heart rate, and preferably also pulse oximetry and respiratory rate. Multimodal monitors that simultaneously measure and display the relevant vital parameters are commonly integrated into the bedside monitors in intensive care units, and the anesthetic machines in operating rooms. These allow for continuous monitoring of a patient, with medical staff being continuously informed of the changes in the general condition of a patient.
While monitoring has traditionally been done by nurses and doctors, a number of companies are developing devices that can be used by consumers themselves. These include Cherish Health, Scanadu and Azoi.
See also
Biotelemetry
Medical record
Remote Patient Monitoring
References
Medical signs
Physical examination
Hepatosplenomegaly
Hepatosplenomegaly (commonly abbreviated HSM) is the simultaneous enlargement of both the liver (hepatomegaly) and the spleen (splenomegaly). Hepatosplenomegaly can occur as the result of acute viral hepatitis, infectious mononucleosis, and histoplasmosis or it can be the sign of a serious and life-threatening lysosomal storage disease. Systemic venous hypertension can also increase the risk for developing hepatosplenomegaly, which may be seen in those patients with right-sided heart failure.
Common causes
Rare disorders
Rare disorders associated with hepatosplenomegaly include the following:
Lipoproteinlipase deficiency
Multiple sulfatase deficiency
Osteopetrosis
Adult-onset Still's disease (AOSD)
References
External links
Symptoms and signs: Digestive system and abdomen
Medical signs
Diseases of liver
Heartburn
Heartburn, also known as pyrosis, cardialgia or acid indigestion, is a burning sensation in the central chest or upper central abdomen. Heartburn is usually due to regurgitation of gastric acid (gastric reflux) into the esophagus. It is the major symptom of gastroesophageal reflux disease (GERD).
Other common descriptors for heartburn (besides burning) are belching, nausea, squeezing, stabbing, or a sensation of pressure on the chest. The pain often rises in the chest (directly behind the breastbone) and may radiate to the neck, throat, or angle of the jaw. Because the chest houses other important organs besides the esophagus (including the heart and lungs), not all symptoms related to heartburn are esophageal in nature.
The cause will vary depending on one's family and medical history, genetics, if a person is pregnant or lactating, and age. As a result, the diagnosis will vary depending on the suspected organ and the inciting disease process. Work-up will vary depending on the clinical suspicion of the provider seeing the patient, but generally includes endoscopy and a trial of antacids to assess for relief.
Treatment for heartburn may include medications and dietary changes. Medications include antacids. Dietary changes may include avoiding foods that are high in fat, spicy, or high in artificial flavors, as well as reducing NSAID use, avoiding heavy alcohol consumption, and decreasing peppermint consumption. Lifestyle changes such as weight loss may also help.
Definition
The term indigestion includes heartburn along with a number of other symptoms. Indigestion is sometimes defined as a combination of epigastric pain and heartburn. Heartburn is commonly used interchangeably with gastroesophageal reflux disease (GERD) rather than just to describe a symptom of burning in one's chest.
Differential diagnosis
Heartburn-like symptoms, or pain in the lower chest or upper abdomen, may indicate a much more serious or even deadly disease. Of greatest concern is confusing heartburn (generally related to the esophagus) with a heart attack, as these organs share a common nerve supply. Numerous abdominal and thoracic organs are present in that region of the body, and many different organ systems might explain the discomfort called heartburn.
Heart
The most common symptom for a heart attack is chest pain. However, as many as 30% of people who receive cardiac catheterization for chest pain have findings that do not account for their chest discomfort. These are often defined as having "atypical chest pain" or chest pain of undetermined origin. Women experiencing heart attacks may also deny classic signs and symptoms and instead complain of GI symptoms. One article estimates that ischemic heart disease may appear to be GERD in 0.6% of people.
Esophagus
GERD (most common cause of heartburn) – occurs when acid refluxes from the stomach and inflames the esophagus.
Esophageal spasms – typically occur after eating or drinking and may be combined with difficulty swallowing.
Esophageal strictures
Esophageal cancers
Mallory–Weiss tears – tears of the superficial mucosa of the esophagus that are subsequently exposed to gastric acid, commonly due to vomiting and/or retching
Eosinophilic esophagitis – a disease commonly associated with other atopic diseases such as asthma, food allergies, seasonal allergies, and atopic skin disease
Chemical esophagitis – related to the intake of caustic substances, excessive amounts of hot liquids, alcohol, or tobacco smoke
Infectious esophagitis – especially CMV and certain fungal infections, most common in immunocompromised persons
Stomach
Peptic ulcer disease – can be secondary to Helicobacter pylori infection or heavy NSAID use that weakens stomach mucosal layer. Pain often worsens with eating.
Stomach cancer
Intestines
Intestinal ulcers – generally secondary to other conditions such as H. pylori infection or cancers of the gastrointestinal tract. Pain often improves with eating.
Duodenitis – inflammation of the duodenum, the first part of the small intestine. May be the result of several conditions.
Gallbladder
Gallstones
Pancreas
Pancreatitis – can be autoimmune, due to a gallstone obstructing the pancreatic duct, or related to alcohol consumption.
Hematology
Pernicious anemia – can be autoimmune, due to atrophic gastritis.
Pregnancy
Heartburn is common during pregnancy having been reported in as many as 80% of pregnancies. It is most often due to GERD and results from relaxation of the lower esophageal sphincter (LES), changes in gastric motility, and/or increasing intra-abdominal pressure. The onset of symptoms can be during any trimester of pregnancy.
Hormonal – related to the increasing amounts of estrogen and progesterone and their effect on the LES
Mechanical – the enlarging uterus increasing intra-abdominal pressure, inducing reflux of gastric acid
Behavioral – as with other instances of heartburn, behavioral modifications can exacerbate or alleviate symptoms
Unknown origin
Functional heartburn is heartburn of unknown cause. It is commonly associated with psychiatric conditions like depression and anxiety. It is also seen with other functional gastrointestinal disorders like irritable bowel syndrome and is the primary cause of lack of improvement post treatment with proton pump inhibitors (PPIs). Despite this, PPIs are still the primary treatment with response rates in about 50% of people. The diagnosis is one of elimination, based upon the Rome III criteria. It was found to be present in 22.3% of Canadians in one survey.
Diagnostic approach
Heartburn can be caused by several conditions and a preliminary diagnosis of GERD is based on additional signs and symptoms. The chest pain caused by GERD has a distinct 'burning' sensation, occurs after eating or at night, and worsens when a person lies down or bends over. It also is common in pregnant women, and may be triggered by consuming food in large quantities, or specific foods containing certain spices, high fat content, or high acid content. In young persons (typically <40 years) who present with heartburn symptoms consistent with GERD (onset after eating, when lying down, when pregnant), a physician may begin a course of PPIs to assess clinical improvement before additional testing is undergone. Resolution or improvement of symptoms on this course may result in a diagnosis of GERD.
Other tests or symptoms suggesting acid reflux is causing heartburn include:
Onset of symptoms after eating or drinking, at night, and/or with pregnancy, and improvement with PPIs
Endoscopy looking for erosive changes of the esophagus consistent with prolonged acid exposure (e.g. Barrett's esophagus)
Upper GI series looking for the presence of acid reflux
GI cocktail
Relief of symptoms 5 to 10 minutes after the administration of viscous lidocaine and an antacid increases the suspicion that the pain is esophageal in origin. This however does not rule out a potential cardiac cause as 10% of cases of discomfort due to cardiac causes are improved with antacids.
Biochemical
Esophageal pH monitoring: a probe can be placed via the nose into the esophagus to record the level of acidity in the lower esophagus. Because some degree of variation in acidity is normal, and small reflux events are relatively common, esophageal pH monitoring can be used to document reflux in real-time. Patients are able to record symptom onset to correlate lower esophageal pH with time of symptom onset.
Mechanical
Manometry: in this test, a pressure sensor (manometer) is passed via the mouth into the esophagus and measures the pressure of the LES directly.
Endoscopy: the esophageal mucosa can be visualized directly by passing a thin, lighted tube with a tiny camera (an endoscope) through the mouth to examine the esophagus and stomach. In this way, evidence of esophageal inflammation can be detected, and biopsies taken if necessary. Since an endoscopy allows a doctor to visually inspect the upper digestive tract, the procedure may help identify any additional damage to the tract that may not have been detected otherwise.
Biopsy: a small sample of tissue from the esophagus is removed. It is then studied to check for inflammation, cancer, or other problems.
Treatment
Treatment plans are tailored to the specific diagnosis and etiology of the heartburn. Management of heartburn can be sorted into various categories.
Pharmacologic management
Antacids (i.e. calcium carbonate and sodium bicarbonate) are often taken to treat the immediate problem
H2 receptor antagonists or proton pump inhibitors are effective for the two most common causes of heartburn (i.e. gastritis and GERD)
Antibiotics are used if H. pylori is present.
Behavioral management
Taking medications 30–45 minutes before eating suppresses the stomach's acid generating response to food
Avoiding chocolate, peppermint, caffeine intake, and foods high in fats
Limiting big meals, instead consuming smaller, more frequent meals
Avoiding reclining for 2.5–3.5 hours after a meal to prevent the reflux of stomach contents
Lifestyle modifications
Early studies suggest that diets high in fiber may decrease symptoms of dyspepsia.
Weight loss can decrease abdominal pressure that both delays gastric emptying and increases gastric acid reflux into the esophagus
Smoking cessation
Alternative and complementary therapies
Symptoms of heartburn may not always be the result of an organic cause. Patients may respond better to therapies targeting anxiety, through medications aimed towards a psychiatric etiology, osteopathic manipulation, and acupuncture.
Psychotherapy may show a positive role in treatment of heartburn and the reduction of distress experienced during symptoms.
Acupuncture – in cases of PPI failure, adding acupuncture may be more effective than doubling the dose of PPIs.
Surgical management
In the case of GERD causing heartburn symptoms, surgery may be required if PPIs are not effective. Surgery is not undertaken if functional heartburn is the leading diagnosis.
Epidemiology
About 42% of the United States population has had heartburn at some point.
References
Symptoms and signs: Digestive system and abdomen
Body cavity
A body cavity is any space or compartment, or potential space, in an animal body. Cavities accommodate organs and other structures; cavities as potential spaces contain fluid.
The two largest human body cavities are the ventral body cavity and the dorsal body cavity. The dorsal body cavity contains the brain and spinal cord.
The membranes that surround the central nervous system organs (the brain and the spinal cord, in the cranial and spinal cavities) are the three meninges. The differently lined spaces contain different types of fluid. In the meninges for example the fluid is cerebrospinal fluid; in the abdominal cavity the fluid contained in the peritoneum is a serous fluid.
In amniotes and some invertebrates the peritoneum lines their largest body cavity called the coelom.
Mammals
Mammalian embryos develop two body cavities: the intraembryonic coelom and the extraembryonic coelom (or chorionic cavity). The intraembryonic coelom is lined by somatic and splanchnic lateral plate mesoderm, while the extraembryonic coelom is lined by extraembryonic mesoderm. The intraembryonic coelom is the only cavity that persists in the mammal at term, which is why its name is often contracted to simply coelomic cavity. Subdividing the coelomic cavity into compartments, for example, the pericardial cavity / pericardium, where the heart develops, simplifies discussion of the anatomies of complex animals.
Cavitation in the early embryo is the process of forming the blastocoel, the fluid-filled cavity defining the blastula stage in non-mammals, or the blastocyst in mammals.
Human body cavities
The dorsal (posterior) cavity and the ventral (anterior) cavity are the largest body compartments.
The dorsal body cavity includes the cranial cavity, enclosed by the skull, which contains the brain, and the spinal cavity, enclosed by the spine, which contains the spinal cord.
The ventral body cavity includes the thoracic cavity, enclosed by the ribcage, which contains the lungs and heart, and the abdominopelvic cavity. The abdominopelvic cavity can be divided into the abdominal cavity, enclosed by the ribcage and pelvis, which contains the kidneys, ureters, stomach, intestines, liver, gallbladder, and pancreas; and the pelvic cavity, enclosed by the pelvis, which contains the bladder, anus, and reproductive system.
Ventral body cavity
The ventral cavity has two main subdivisions: the thoracic cavity and the abdominopelvic cavity. The thoracic cavity is the more superior subdivision of the ventral cavity, and is enclosed by the rib cage. The thoracic cavity contains the lungs surrounded by the pleural cavity, and the heart surrounded by the pericardial cavity, located in the mediastinum. The diaphragm forms the floor of the thoracic cavity and separates it from the more inferior abdominopelvic cavity.
The abdominopelvic cavity is the largest cavity in the body occupying the entire lower half of the trunk. Although no membrane physically divides the abdominopelvic cavity, it can be useful to distinguish between the abdominal cavity, and the pelvic cavity. The abdominal cavity occupies the entire lower half of the trunk, anterior to the spine, and houses the organs of digestion. Just under the abdominal cavity, anterior to the buttocks, is the pelvic cavity. The pelvic cavity is funnel shaped, and is located inferior and anterior to the abdominal cavity, and houses the organs of reproduction.
Dorsal body cavity
The dorsal body cavity contains the cranial cavity, and the spinal cavity.
The cranial cavity is a large, bean-shaped cavity filling most of the upper skull where the brain is located. The spinal cavity is the very narrow, thread-like cavity running from the cranial cavity down the entire length of the spinal cord.
In the dorsal cavity, the cranial cavity houses the brain, and the spinal cavity encloses the spinal cord. Just as the brain and spinal cord make up a continuous, uninterrupted structure, the cranial and spinal cavities that house them are also continuous. The brain and spinal cord are protected by the bones of the skull and vertebral column and by cerebrospinal fluid, a colorless fluid produced by the brain, which cushions the brain and spinal cord within the dorsal body cavity.
Development
At the end of the third week of gestation, the neural tube, which is a fold of one of the layers of the trilaminar germ disc, called the ectoderm, appears. This layer elevates and closes dorsally, while the gut tube rolls up and closes ventrally to create a "tube on top of a tube". The mesoderm, which is another layer of the trilaminar germ disc, holds the tubes together and the lateral plate mesoderm, the middle layer of the germ disc, splits to form a visceral layer associated with the gut and a parietal layer, which along with the overlying ectoderm, forms the lateral body wall. The space between the visceral and parietal layers of lateral plate mesoderm is the primitive body cavity. When the lateral body wall folds, it moves ventrally and fuses at the midline. The body cavity closes, except in the region of the connecting stalk. Here, the gut tube maintains an attachment to the yolk sac. The yolk sac is a membranous sac attached to the embryo, which provides nutrients and functions as the circulatory system of the very early embryo.
The lateral body wall folds, pulling the amnion in with it so that the amnion surrounds the embryo and extends over the connecting stalk, which becomes the umbilical cord, which connects the fetus with the placenta. If the ventral body wall fails to close, ventral body wall defects can result, such as ectopia cordis, a congenital malformation in which the heart is abnormally located outside the thorax. Another defect is gastroschisis, a congenital defect in the anterior abdominal wall through which the abdominal contents freely protrude. Another possibility is bladder exstrophy, in which part of the urinary bladder is present outside the body. In normal circumstances, the parietal mesoderm will form the parietal layer of serous membranes lining the outside (walls) of the peritoneal, pleural, and pericardial cavities. The visceral layer will form the visceral layer of the serous membranes covering the lungs, heart, and abdominal organs. These layers are continuous at the root of each organ as the organs lie in their respective cavities. The peritoneum, a serum membrane that forms the lining of the abdominal cavity, forms in the gut layers and in places mesenteries extend from the gut as double layers of peritoneum. Mesenteries provide a pathway for vessels, nerves, and lymphatics to the organs. Initially, the gut tube from the caudal end of the foregut to the end of the hindgut is suspended from the dorsal body wall by dorsal mesentery. Ventral mesentery, derived from the septum transversum, exists only in the region of the terminal part of the esophagus, the stomach, and the upper portion of the duodenum.
Function
These cavities contain and protect delicate internal organs, and the ventral cavity allows for significant changes in the size and shape of the organs as they perform their functions.
Anatomical structures are often described in terms of the cavity in which they reside. The body maintains its internal organization by means of membranes, sheaths, and other structures that separate compartments.
The lungs, heart, stomach, and intestines, for example, can expand and contract without distorting other tissues or disrupting the activity of nearby organs. The ventral cavity includes the thoracic and abdominopelvic cavities and their subdivisions. The dorsal cavity includes the cranial and spinal cavities.
Other animals
Organisms can also be classified according to the type of body cavity they possess, such as pseudocoelomates and protostome coelomates.
Coelom
In amniotes and some invertebrates, the coelom is the large cavity lined by mesothelium, an epithelium derived from mesoderm. Organs formed inside the coelom can freely move, grow, and develop independently of the body wall while fluid in the peritoneum cushions and protects them from shocks.
Arthropods and most molluscs have a reduced (but still true) coelom, the hemocoel (of an open circulatory system) and the smaller gonocoel (a cavity that contains the gonads). Their hemocoel is often derived from the blastocoel.
See also
Gastrovascular cavity
References
This Wikipedia entry incorporates text from the freely licensed Connexions edition of Anatomy & Physiology text-book by OpenStax College
External links
Further discussion
Animal anatomy
Developmental biology
Haemophilia
Haemophilia (British English), or hemophilia (American English), is a mostly inherited genetic disorder that impairs the body's ability to make blood clots, a process needed to stop bleeding. This results in people bleeding for a longer time after an injury, easy bruising, and an increased risk of bleeding inside joints or the brain. Those with a mild case of the disease may have symptoms only after an accident or during surgery. Bleeding into a joint can result in permanent damage while bleeding in the brain can result in long term headaches, seizures, or an altered level of consciousness.
There are two main types of haemophilia: haemophilia A, which occurs due to low amounts of clotting factor VIII, and haemophilia B, which occurs due to low levels of clotting factor IX. They are typically inherited from one's parents through an X chromosome carrying a nonfunctional gene. Most commonly found in men, haemophilia can affect women too, though very rarely. A woman would need to inherit two affected X chromosomes to be affected, whereas a man would only need one X chromosome affected. It is possible for a new mutation to occur during early development, or haemophilia may develop later in life due to antibodies forming against a clotting factor. Other types include haemophilia C, which occurs due to low levels of factor XI, Von Willebrand disease, which occurs due to low levels of a substance called von Willebrand factor, and parahaemophilia, which occurs due to low levels of factor V. Haemophilia A, B, and C prevent the intrinsic pathway from functioning properly; this clotting pathway is necessary when there is damage to the endothelium of a blood vessel. Acquired haemophilia is associated with cancers, autoimmune disorders, and pregnancy. Diagnosis is by testing the blood for its ability to clot and its levels of clotting factors.
Prevention may occur by removing an egg, fertilising it, and testing the embryo before transferring it to the uterus. Haemophilia is treated by replacing the missing blood clotting factors. This may be done on a regular basis or during bleeding episodes. Replacement may take place at home or in hospital. The clotting factors are made either from human blood or by recombinant methods. Up to 20% of people develop antibodies to the clotting factors, which makes treatment more difficult. The medication desmopressin may be used in those with mild haemophilia A. Studies of gene therapy are in early human trials.
Haemophilia A affects about 1 in 5,000–10,000 males at birth, while haemophilia B affects about 1 in 40,000. As haemophilia A and B are both X-linked recessive disorders, females are rarely severely affected. Some females with a nonfunctional gene on one of their X chromosomes may be mildly symptomatic. Haemophilia C occurs equally in both sexes and is mostly found in Ashkenazi Jews. In the 1800s haemophilia B was common within the royal families of Europe. The difference between haemophilia A and B was determined in 1952.
Signs and symptoms
Characteristic symptoms vary with severity. In general symptoms are internal or external bleeding episodes, which are called "bleeds". People with more severe haemophilia experience more severe and more frequent bleeds, while people with mild haemophilia usually experience more minor symptoms except after surgery or serious trauma. In cases of moderate haemophilia symptoms are variable which manifest along a spectrum between severe and mild forms.
In both haemophilia A and B, there is spontaneous bleeding with a normal bleeding time, normal prothrombin time, and normal thrombin time, but a prolonged partial thromboplastin time. Internal bleeding is common in people with severe haemophilia and some individuals with moderate haemophilia. The most characteristic type of internal bleed is a joint bleed where blood enters into the joint spaces. This is most common with severe haemophiliacs and can occur spontaneously (without evident trauma). If not treated promptly, joint bleeds can lead to permanent joint damage and disfigurement. Bleeding into soft tissues such as muscles and subcutaneous tissues is less severe but can lead to damage and requires treatment.
Children with mild to moderate haemophilia may not have any signs or symptoms at birth, especially if they do not undergo circumcision. Their first symptoms are often frequent and large bruises and haematomas from frequent bumps and falls as they learn to walk. Swelling and bruising from bleeding in the joints, soft tissue, and muscles may also occur. Children with mild haemophilia may not have noticeable symptoms for many years. Often, the first sign in very mild haemophiliacs is heavy bleeding from a dental procedure, an accident, or surgery. Females who are carriers usually have enough clotting factors from their one normal gene to prevent serious bleeding problems, though some may present as mild haemophiliacs.
Complications
Severe complications are much more common in cases of severe and moderate haemophilia. Complications may arise from the disease itself or from its treatment:
Deep internal bleeding, e.g. deep-muscle bleeding, leading to swelling, numbness or pain of a limb.
Joint damage from haemarthrosis (haemophilic arthropathy), potentially with severe pain, disfigurement, and even destruction of the joint and development of debilitating arthritis.
Transfusion transmitted infection from blood transfusions that are given as treatment.
Adverse reactions to clotting factor treatment, including the development of an immune inhibitor which renders factor replacement less effective.
Intracranial haemorrhage is a serious medical emergency that causes a buildup of pressure inside the skull. It can cause disorientation, nausea, loss of consciousness, brain damage, and death.
Haemophilic arthropathy is characterised by chronic proliferative synovitis and cartilage destruction. If an intra-articular bleed is not drained early, it may cause apoptosis of chondrocytes and affect the synthesis of proteoglycans. The hypertrophied and fragile synovial lining, while attempting to eliminate the excessive blood, is prone to rebleeding, leading to a vicious cycle of haemarthrosis–synovitis–haemarthrosis. In addition, iron deposition in the synovium may induce an inflammatory response that activates the immune system and stimulates angiogenesis, resulting in cartilage and bone destruction.
Genetics
Typically, females possess two X-chromosomes, and males have one X and one Y-chromosome. Since the mutations causing the disease are X-linked recessive, a female carrying the defect on one of her X-chromosomes may not be affected by it, as the equivalent dominant allele on her other chromosome should express itself to produce the necessary clotting factors, due to X inactivation. Therefore, heterozygous females are just carriers of this genetic disposition. However, the Y-chromosome in the male has no gene for factors VIII or IX. If the genes responsible for production of factor VIII or factor IX present on a male's X-chromosome are deficient there is no equivalent on the Y-chromosome to cancel it out, so the deficient gene is not masked and the disorder will develop.
Since a male receives his single X-chromosome from his mother, the son of a healthy female silently carrying the deficient gene will have a 50% chance of inheriting that gene and, with it, the disease; if his mother is affected with haemophilia, he will have a 100% chance of being a haemophiliac. In contrast, for a female to inherit the disease, she must receive two deficient X-chromosomes, one from her mother and the other from her father (who must therefore be a haemophiliac himself). Hence, haemophilia is expressed far more commonly among males than females, while females, who must have two deficient X-chromosomes in order to have haemophilia, are far more likely to be silent carriers who survive childhood and expose each of their children to at least a 50% risk of receiving the deficient gene. However, it is possible for female carriers to become mild haemophiliacs due to lyonisation (inactivation) of the X-chromosomes. Haemophiliac daughters are more common than they once were, as improved treatments for the disease have allowed more haemophiliac males to survive to adulthood and become parents. Adult females may experience menorrhagia (heavy periods) due to the bleeding tendency. The pattern of inheritance is criss-cross; this pattern is also seen in colour blindness.
A mother who is a carrier has a 50% chance of passing the faulty X-chromosome to her daughter, while an affected father will always pass on the affected gene to his daughters. A son cannot inherit the defective gene from his father. Genetic testing and genetic counselling is recommended for families with haemophilia. Prenatal testing, such as amniocentesis, is available to pregnant women who may be carriers of the condition.
As with all genetic disorders, it is also possible for a human to acquire it spontaneously through mutation, rather than inheriting it, because of a new mutation in one of their parents' gametes. Spontaneous mutations account for about 33% of all cases of haemophilia A. About 30% of cases of haemophilia B are the result of a spontaneous gene mutation.
If a female gives birth to a haemophiliac son, either the female is a carrier for the blood disorder or the haemophilia was the result of a spontaneous mutation. Until modern direct DNA testing, however, it was impossible to determine if a female with only healthy children was a carrier or not.
If a male has the disease and has children with a female who is not a carrier, his daughters will be carriers of haemophilia. His sons, however, will not be affected with the disease. The disease is X-linked and the father cannot pass haemophilia through the Y-chromosome. Males with the disorder are then no more likely to pass on the gene to their children than carrier females, though all daughters they sire will be carriers, and all sons they father will not have haemophilia (unless the mother is a carrier).
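The percentages quoted above follow directly from enumerating the possible combinations of parental sex chromosomes. The sketch below is an illustrative Punnett-square enumeration (the genotype labels and function are this sketch's own shorthand, not standard genetics software) showing the 1-in-4 outcomes for a carrier mother and an unaffected father.

```python
# Illustrative Punnett-square enumeration for an X-linked recessive trait
# such as haemophilia A or B. "Xh" marks an X chromosome carrying the
# deficient gene, "X" a normal one; labels are shorthand for this sketch.

from collections import Counter
from itertools import product

def offspring_distribution(mother: tuple[str, str], father: tuple[str, str]) -> Counter:
    """Count genotypes from pairing each maternal X with each paternal sex chromosome."""
    children = Counter()
    for m_allele, f_allele in product(mother, father):
        genotype = tuple(sorted((m_allele, f_allele)))
        children[genotype] += 1
    return children

if __name__ == "__main__":
    carrier_mother = ("X", "Xh")
    unaffected_father = ("X", "Y")
    for genotype, count in offspring_distribution(carrier_mother, unaffected_father).items():
        print(genotype, f"{count}/4")
    # ('X', 'X')  1/4 -> unaffected daughter
    # ('X', 'Y')  1/4 -> unaffected son
    # ('X', 'Xh') 1/4 -> carrier daughter
    # ('Xh', 'Y') 1/4 -> affected son
```

Substituting an affected father, ("Xh", "Y"), reproduces the case described above in which all daughters are carriers and no sons are affected (assuming the mother is not a carrier).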
Severity
There are numerous different mutations which cause each type of haemophilia. Because the underlying genetic changes differ, people with haemophilia often retain some level of active clotting factor. Individuals with less than 1% active factor are classified as having severe haemophilia, those with 1–5% active factor have moderate haemophilia, and those with mild haemophilia have between 5% and 40% of normal levels of active clotting factor.
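A minimal sketch of the severity bands just described, keyed to residual factor activity as a percentage of normal, is shown below; the function name and the handling of the exact boundary values and of activity above 40% are assumptions of this illustration.

```python
# Illustrative mapping of residual clotting-factor activity (percent of
# normal) onto the severity bands described above. How the exact cut-off
# values and activity above 40% are handled is an assumption of this
# sketch, not a clinical rule.

def haemophilia_severity(factor_activity_percent: float) -> str:
    if factor_activity_percent < 1:
        return "severe"
    if factor_activity_percent <= 5:
        return "moderate"
    if factor_activity_percent <= 40:
        return "mild"
    return "not classified as haemophilia (activity above 40%)"

if __name__ == "__main__":
    for level in (0.5, 3, 25, 60):
        print(level, "->", haemophilia_severity(level))
    # 0.5 -> severe, 3 -> moderate, 25 -> mild, 60 -> not classified...
```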
Diagnosis
Haemophilia can be diagnosed before, during or after birth if there is a family history of the condition. Several options are available to parents. If there is no family history of haemophilia, it is usually only diagnosed when a child begins to walk or crawl. Affected children may experience joint bleeds or easy bruising.
Mild haemophilia may only be discovered later, usually after an injury or a dental or surgical procedure.
Before pregnancy
Genetic testing and counselling are available to help determine the risk of passing the condition onto a child. This may involve testing a sample of tissue or blood to look for signs of the genetic mutation that causes haemophilia.
During pregnancy
A pregnant woman with a history of haemophilia in her family can test for the haemophilia gene. Such tests include:
chorionic villus sampling (CVS): a small sample of the placenta is removed from the womb and tested for the haemophilia gene, usually during weeks 11–14 of pregnancy
amniocentesis: a sample of amniotic fluid is taken for testing, usually during weeks 15–20 of pregnancy
There is a small risk of these procedures causing problems such as miscarriage or premature labour, so the woman may discuss this with the doctor in charge of her care.
After birth
If haemophilia is suspected after a child has been born, a blood test can usually confirm the diagnosis. Blood from the umbilical cord can be tested at birth if there's a family history of haemophilia. A blood test will also be able to identify whether a child has haemophilia A or B, and how severe it is.
Classification
There are several types of haemophilia: haemophilia A, haemophilia B, haemophilia C, parahaemophilia, acquired haemophilia A, and acquired haemophilia B.
Haemophilia A is a recessive X-linked genetic disorder resulting in a deficiency of functional clotting Factor VIII. Haemophilia B is also a recessive X-linked genetic disorder involving a lack of functional clotting Factor IX. Haemophilia C is an autosomal genetic disorder involving a lack of functional clotting Factor XI. Haemophilia C is not completely recessive, as heterozygous individuals also show increased bleeding.
The type of haemophilia known as parahaemophilia is a mild and rare form and is due to a deficiency in factor V. This type can be inherited or acquired.
A non-genetic form of haemophilia is caused by autoantibodies against factor VIII and so is known as acquired haemophilia A. It is a rare but potentially life-threatening bleeding disorder caused by the development of autoantibodies (inhibitors) directed against plasma coagulation factors. Acquired haemophilia can be associated with cancers, autoimmune disorders and following childbirth.
Management
There is no long-term cure. Treatment and prevention of bleeding episodes is done primarily by replacing the missing blood clotting factors.
Clotting factors
Clotting factors are usually not needed in mild haemophilia. In moderate haemophilia clotting factors are typically only needed when bleeding occurs or to prevent bleeding with certain events. In severe haemophilia preventive use is often recommended two or three times a week and may continue for life. Rapid treatment of bleeding episodes decreases damage to the body.
Factor VIII is used in haemophilia A and factor IX in haemophilia B. Factor replacement can be either isolated from human plasma, recombinant, or a combination of the two. Some people develop antibodies (inhibitors) against the replacement factors given to them, so the amount of the factor has to be increased or non-human replacement products must be given, such as porcine factor VIII.
If a person becomes refractory to replacement coagulation factor as a result of high levels of circulating inhibitors, this may be partially overcome with recombinant human factor VIII.
In early 2008, the US Food and Drug Administration (FDA) approved an anti-haemophilic drug completely free of albumin, which made it the first anti-haemophilic drug in the US to use an entirely synthetic purification process. Since 1993 recombinant factor products (which are typically cultured in Chinese hamster ovary (CHO) tissue culture cells and involve little, if any human plasma products) have been available and have been widely used in wealthier western countries. While recombinant clotting factor products offer higher purity and safety, they are, like concentrate, extremely expensive, and not generally available in the developing world. In many cases, factor products of any sort are difficult to obtain in developing countries.
Clotting factors are either given preventively or on-demand. Preventive use involves the infusion of clotting factor on a regular schedule in order to keep clotting levels sufficiently high to prevent spontaneous bleeding episodes. On-demand (or episodic) treatment involves treating bleeding episodes once they arise. In 2007, a trial compared on-demand treatment of boys (< 30 months) with haemophilia A against prophylactic treatment (infusions of 25 IU/kg body weight of Factor VIII every other day) with respect to the prevention of joint disease. When the boys reached 6 years of age, 93% of those in the prophylaxis group and 55% of those in the episodic-therapy group had a normal index joint structure on MRI. Preventative treatment, however, resulted in average costs of $300,000 per year. The author of an editorial published in the same issue of the NEJM supported the idea that prophylactic treatment not only is more effective than on-demand treatment but also suggested that starting after the first serious joint-related haemorrhage may be more cost-effective than waiting until a fixed age to begin. Most haemophiliacs in developing countries have limited or no access to commercial blood clotting factor products.
Other
Desmopressin (DDAVP) may be used in those with mild haemophilia A. Tranexamic acid or epsilon aminocaproic acid may be given along with clotting factors to prevent breakdown of clots.
Pain medicines, steroids, and physical therapy may be used to reduce pain and swelling in an affected joint. In those with severe haemophilia A already receiving FVIII, emicizumab may provide some benefit. Different treatments, in addition to the normal clotting factors, are used to help those with an acquired form of haemophilia. Often the most effective treatment is corticosteroids, which remove the auto-antibodies in half of people. As a secondary route of treatment, cyclophosphamide and cyclosporine are used and have proven effective in those who did not respond to steroid treatment. In rare cases a third route of treatment is used: high doses of intravenous immunoglobulin or immunoadsorption, which help control bleeding rather than targeting the auto-antibodies.
Contraindications
Anticoagulants such as heparin and warfarin are contraindicated for people with haemophilia as these can aggravate clotting difficulties. Also contraindicated are those drugs which have "blood thinning" side effects. For instance, medicines which contain aspirin, ibuprofen, or naproxen sodium should not be taken because they are well known to have the side effect of prolonged bleeding.
Also contraindicated are activities with a high likelihood of trauma, such as motorcycling and skateboarding. Popular sports with very high rates of physical contact and injuries such as American football, hockey, boxing, wrestling, and rugby should be avoided by people with haemophilia. Other active sports like soccer, baseball, and basketball also have a high rate of injuries, but have overall less contact and should be undertaken cautiously and only in consultation with a doctor.
Prognosis
Like most aspects of the disorder, life expectancy varies with severity and adequate treatment. People with severe haemophilia who do not receive adequate, modern treatment have greatly shortened lifespans and often do not reach maturity. Prior to the 1960s when effective treatment became available, average life expectancy was only 11 years. By the 1980s the life span of the average haemophiliac receiving appropriate treatment was 50–60 years. Today with appropriate treatment, males with haemophilia typically have a near normal quality of life with an average lifespan approximately 10 years shorter than an unaffected male.
Since the 1980s the primary leading cause of death of people with severe haemophilia has shifted from haemorrhage to HIV/AIDS acquired through treatment with contaminated blood products. The second leading cause of death related to severe haemophilia complications is intracranial haemorrhage which today accounts for one third of all deaths of people with haemophilia. Two other major causes of death include hepatitis infections causing cirrhosis and obstruction of air or blood flow due to soft tissue haemorrhage.
Epidemiology
Haemophilia frequency is about 1 instance in every 10,000 births (or 1 in 5,000 male births) for haemophilia A and 1 in 50,000 births for haemophilia B. About 18,000 people in the United States have haemophilia. Each year in the US, about 400 babies are born with the disorder. Haemophilia usually occurs in males and less often in females. It is estimated that about 2,500 Canadians have haemophilia A, and about 500 Canadians have haemophilia B.
History
Scientific discovery
Excessive bleeding tendencies were known to ancient peoples. The Talmud instructs that a boy must not be circumcised if he had two brothers who died due to complications arising from their circumcisions, and Maimonides says that this excluded paternal half-brothers. This may have been due to a concern about haemophilia. The tenth-century Arab surgeon Al-Zahrawi noted cases of excessive bleeding among men in a village. Several similar references to the disease later known as haemophilia appear throughout historical writings, though no term for inherited abnormal bleeding tendencies existed until the nineteenth century.
In 1803, John Conrad Otto, a Philadelphian physician, wrote an account about "a hemorrhagic disposition existing in certain families" in which he called the affected males "bleeders". He recognised that the disorder was hereditary and that it affected mostly males and was passed down by healthy females. His paper was the second paper to describe important characteristics of an X-linked genetic disorder (the first paper being a description of colour blindness by John Dalton, who studied his own family). Otto was able to trace the disease back to a woman who settled near Plymouth, New Hampshire, in 1720. The idea that affected males could pass the trait on to their unaffected daughters was not described until 1813, when John F. Hay published an account in The New England Journal of Medicine.
In 1924, a Finnish doctor discovered a hereditary bleeding disorder similar to haemophilia localised in Åland, southwest of Finland. This bleeding disorder is called "Von Willebrand Disease".
The term "haemophilia" is derived from the term "haemorrhaphilia" which was used in a description of the condition written by Friedrich Hopff in 1828, while he was a student at the University of Zurich. In 1937, Patek and Taylor, two doctors from Harvard University, discovered anti-haemophilic globulin. In 1947, Alfredo Pavlovsky, a doctor from Buenos Aires, found haemophilia A and haemophilia B to be separate diseases by doing a lab test. This test was done by transferring the blood of one haemophiliac to another haemophiliac. The fact that this corrected the clotting problem showed that there was more than one form of haemophilia.
European royalty
Haemophilia has featured prominently in European royalty and thus is sometimes known as 'the royal disease'. Queen Victoria passed the mutation for haemophilia B to her son Leopold and, through two of her daughters, Alice and Beatrice, to various royals across the continent, including the royal families of Spain, Germany, and Russia. In Russia, Tsarevich Alexei, the son and heir of Tsar Nicholas II, famously had haemophilia, which he had inherited from his mother, Empress Alexandra, one of Queen Victoria's granddaughters. The haemophilia of Alexei would result in the rise to prominence of the Russian mystic Grigori Rasputin, at the imperial court.
It was claimed that Rasputin was successful at treating Tsarevich Alexei's haemophilia. At the time, a common treatment administered by professional doctors was to use aspirin, which worsened rather than lessened the problem. It is believed that, by simply advising against the medical treatment, Rasputin could bring visible and significant improvement to the condition of Tsarevich Alexei.
In Spain, Queen Victoria's youngest daughter, Princess Beatrice, had a daughter Victoria Eugenie of Battenberg, who later became Queen of Spain. Two of her sons were haemophiliacs and both died from minor car accidents. Her eldest son, Prince Alfonso of Spain, Prince of Asturias, died at the age of 31 from internal bleeding after his car hit a telephone booth. Her youngest son, Infante Gonzalo, died at age 19 from abdominal bleeding following a minor car accident in which he and his sister hit a wall while avoiding a cyclist. Neither appeared injured or sought immediate medical care and Gonzalo died two days later from internal bleeding.
Treatment
The method for the production of an antihaemophilic factor was discovered by Judith Graham Pool from Stanford University in 1964, and approved for commercial use in 1971 in the United States under the name Cryoprecipitated AHF. Together with the development of a system for transportation and storage of human plasma in 1965, this was the first time an efficient treatment for haemophilia became available.
Blood contamination
Up until late 1985 many people with haemophilia received clotting factor products that posed a risk of HIV and hepatitis C infection. The plasma used to create the products was not screened or tested, nor had most of the products been subject to any form of viral inactivation.
Tens of thousands worldwide were infected as a result of contaminated factor products including more than 10,000 people in the United States, 3,500 British, 1,400 Japanese, 700 Canadians, 250 Irish, and 115 Iraqis.
Infection via the tainted factor products had mostly stopped by 1986 by which time viral inactivation methods had largely been put into place, although some products were shown to still be dangerous in 1987.
Research
Gene therapy
In those with severe haemophilia, gene therapy may reduce symptoms to those that a person with mild or moderate haemophilia might have. The best results have been found in haemophilia B. In 2016 early stage human research was ongoing with a few sites recruiting participants. In 2017 a gene therapy trial on nine people with haemophilia A reported that high doses did better than low doses. It is not currently an accepted treatment for haemophilia.
In July 2022, results were announced for a gene therapy candidate for haemophilia B called FLT180. It uses an adeno-associated virus (AAV) vector to restore production of the clotting factor IX (FIX) protein; normal protein levels were observed with low doses of the therapy, but immunosuppression was required to decrease the risk of vector-related immune responses.
In November 2022, the first gene therapy treatment for haemophilia B, called Hemgenix, was approved by the U.S. Food and Drug Administration. It is a single-dose treatment that gives the patient the genetic information required to produce factor IX.
In June 2023, the FDA approved the first gene therapy treatment for haemophilia A, called Roctavian. It was approved only for patients with severe cases, but it has been shown to reduce yearly bleeding episodes by 50%. It works similarly to Hemgenix, being administered as an intravenous infusion containing a gene for factor VIII.
See also
Coagulopathy
Purpura secondary to clotting disorders
Von Willebrand disease
World Federation of Hemophilia
References
External links
World Federation of Hemophilia
Wikipedia medicine articles ready to translate
X-linked recessive disorders
Rare diseases
Arthritis
Arthritis is a term often used to mean any disorder that affects joints. Symptoms generally include joint pain and stiffness. Other symptoms may include redness, warmth, swelling, and decreased range of motion of the affected joints. In some types of arthritis, other organs are also affected. Onset can be gradual or sudden.
There are over 100 types of arthritis. The most common forms are osteoarthritis (degenerative joint disease) and rheumatoid arthritis. Osteoarthritis usually occurs with age and affects the fingers, knees, and hips. Rheumatoid arthritis is an autoimmune disorder that often affects the hands and feet. Other types include gout, lupus, fibromyalgia, and septic arthritis. They are all types of rheumatic disease.
Treatment may include resting the joint and alternating between applying ice and heat. Weight loss and exercise may also be useful. Recommended medications may depend on the form of arthritis. These may include pain medications such as ibuprofen and paracetamol (acetaminophen). In some circumstances, a joint replacement may be useful.
Osteoarthritis affects more than 3.8% of people, while rheumatoid arthritis affects about 0.24% of people. Gout affects about 1–2% of the Western population at some point in their lives. In Australia about 15% of people are affected by arthritis, while in the United States more than 20% have a type of arthritis. Overall the disease becomes more common with age. Arthritis is a common reason that people miss work and can result in a decreased quality of life. The term is derived from arthr- (meaning 'joint') and -itis (meaning 'inflammation').
Classification
There are several diseases where joint pain is primary, and is considered the main feature. Generally when a person has "arthritis" it means that they have one of these diseases, which include:
Hemarthrosis
Osteoarthritis
Rheumatoid arthritis
Gout and pseudo-gout
Septic arthritis
Ankylosing spondylitis
Juvenile idiopathic arthritis
Still's disease
Psoriatic arthritis
Joint pain can also be a symptom of other diseases. In this case, the arthritis is considered to be secondary to the main disease; these include:
Psoriasis
Reactive arthritis
Ehlers–Danlos syndrome
Iron overload
Hepatitis
Lyme disease
Sjögren's disease
Hashimoto's thyroiditis
Celiac disease
Non-celiac gluten sensitivity
Inflammatory bowel disease (including Crohn's disease and ulcerative colitis)
Henoch–Schönlein purpura
Hyperimmunoglobulinemia D with recurrent fever
Sarcoidosis
Whipple's disease
TNF receptor associated periodic syndrome
Granulomatosis with polyangiitis (and many other vasculitis syndromes)
Familial Mediterranean fever
Systemic lupus erythematosus
An undifferentiated arthritis is an arthritis that does not fit into well-known clinical disease categories, possibly being an early stage of a definite rheumatic disease.
Signs and symptoms
Pain, which can vary in severity, is a common symptom in virtually all types of arthritis. Other symptoms include swelling, joint stiffness, redness, and aching around the joint(s). Arthritic disorders like lupus and rheumatoid arthritis can affect other organs in the body, leading to a variety of symptoms. Symptoms may include:
Inability to use the hand or walk
Stiffness in one or more joints
Rash or itch
Malaise and fatigue
Weight loss
Poor sleep
Muscle aches and pains
Tenderness
Difficulty moving the joint
It is common in advanced arthritis for significant secondary changes to occur. For example, arthritic symptoms might make it difficult for a person to move around and/or exercise, which can lead to secondary effects, such as:
Muscle weakness
Loss of flexibility
Decreased aerobic fitness
These changes, in addition to the primary symptoms, can have a huge impact on quality of life.
Disability
Arthritis is the most common cause of disability in the United States. More than 20 million individuals with arthritis have severe limitations in function on a daily basis. Absenteeism and frequent visits to the physician are common in individuals who have arthritis. Arthritis can make it difficult for individuals to be physically active and some become home bound.
It is estimated that the total cost of arthritis cases is close to $100 billion of which almost 50% is from lost earnings. Each year, arthritis results in nearly 1 million hospitalizations and close to 45 million outpatient visits to health care centers.
Decreased mobility, in combination with the above symptoms, can make it difficult for an individual to remain physically active, contributing to an increased risk of obesity, high cholesterol or vulnerability to heart disease. People with arthritis are also at increased risk of depression, which may be a response to numerous factors, including fear of worsening symptoms.
Risk factors
There are common risk factors that increase a person's chance of developing arthritis later in adulthood. Some of these are modifiable while others are not. Smoking has been linked to an increased susceptibility of developing arthritis, particularly rheumatoid arthritis.
Diagnosis
Diagnosis is made by clinical examination from an appropriate health professional, and may be supported by other tests such as radiology and blood tests, depending on the type of suspected arthritis. All arthritides potentially feature pain. Pain patterns may differ depending on the arthritides and the location. Rheumatoid arthritis is generally worse in the morning and associated with stiffness lasting over 30 minutes.
Elements of the history of the disorder guide diagnosis. Important features are speed and time of onset, pattern of joint involvement, symmetry of symptoms, early morning stiffness, tenderness, gelling or locking with inactivity, aggravating and relieving factors, and other systemic symptoms. It may include checking joints, observing movements, examination of skin for rashes or nodules and symptoms of pulmonary inflammation. Physical examination may confirm the diagnosis or may indicate systemic disease. Radiographs are often used to follow progression or help assess severity.
Blood tests and X-rays of the affected joints often are performed to make the diagnosis. Screening blood tests are indicated if certain arthritides are suspected. These might include: rheumatoid factor, antinuclear factor (ANF), extractable nuclear antigen, and specific antibodies.
Rheumatoid arthritis patients often have high erythrocyte sedimentation rate (ESR, also known as sed rate) or C-reactive protein (CRP) levels, which indicates the presence of an inflammatory process in the body. Anti-cyclic citrullinated peptide (anti-CCP) antibodies and rheumatoid factor (RF) are two more common blood tests. Positive results indicate the risk of rheumatoid arthritis, while negative results help rule out this autoimmune condition.
Imaging tests such as X-rays, MRI scans, or ultrasound are used to diagnose and monitor arthritis. Other imaging tests for rheumatoid arthritis that may be considered include computed tomography (CT) scanning, positron emission tomography (PET) scanning, bone scanning, and dual-energy X-ray absorptiometry (DEXA).
Osteoarthritis
Osteoarthritis is the most common form of arthritis. It affects humans and other animals, notably dogs, but also occurs in cats and horses. It can affect both the larger and the smaller joints of the body. In humans, this includes the hands, wrists, feet, back, hip, and knee. In dogs, this includes the elbow, hip, stifle (knee), shoulder, and back. The disease is essentially one acquired from daily wear and tear of the joint; however, osteoarthritis can also occur as a result of injury. Osteoarthritis begins in the cartilage and eventually causes the two opposing bones to erode into each other. The condition starts with minor pain during physical activity, but soon the pain can be continuous and even occur while in a state of rest. The pain can be debilitating and prevent one from doing some activities. In dogs, this pain can significantly affect quality of life and may include difficulty going up and down stairs, struggling to get up after lying down, trouble walking on slick floors, being unable to hop in and out of vehicles, difficulty jumping on and off furniture, and behavioral changes (e.g., aggression, difficulty squatting to toilet). Osteoarthritis typically affects the weight-bearing joints, such as the back, knee and hip. Unlike rheumatoid arthritis, osteoarthritis is most commonly a disease of the elderly. The strongest predictor of osteoarthritis is increased age, likely due to the declining ability of chondrocytes to maintain the structural integrity of cartilage. More than 30 percent of women have some degree of osteoarthritis by age 65. Other risk factors for osteoarthritis include prior joint trauma, obesity, and a sedentary lifestyle.
Rheumatoid arthritis
Rheumatoid arthritis (RA) is a disorder in which the body's own immune system starts to attack body tissues. The attack is not only directed at the joint but to many other parts of the body. In rheumatoid arthritis, most damage occurs to the joint lining and cartilage which eventually results in erosion of two opposing bones. RA often affects joints in the fingers, wrists, knees and elbows, is symmetrical (appears on both sides of the body), and can lead to severe deformity in a few years if not treated. RA occurs mostly in people aged 20 and above. In children, the disorder can present with a skin rash, fever, pain, disability, and limitations in daily activities. With earlier diagnosis and aggressive treatment, many individuals can lead a better quality of life than if going undiagnosed for long after RA's onset. The risk factors with the strongest association for developing rheumatoid arthritis are the female sex, a family history of rheumatoid arthritis, age, obesity, previous joint damage from an injury, and exposure to tobacco smoke.
Bone erosion is a central feature of rheumatoid arthritis. Bone continuously undergoes remodeling by actions of bone resorbing osteoclasts and bone forming osteoblasts. One of the main triggers of bone erosion in the joints in rheumatoid arthritis is inflammation of the synovium, caused in part by the production of pro-inflammatory cytokines and receptor activator of nuclear factor kappa B ligand (RANKL), a cell surface protein present in Th17 cells and osteoblasts. Osteoclast activity can be directly induced by osteoblasts through the RANK/RANKL mechanism.
Lupus
Lupus is a common collagen vascular disorder that can be present with severe arthritis. Other features of lupus include a skin rash, extreme photosensitivity, hair loss, kidney problems, lung fibrosis and constant joint pain.
Gout
Gout is caused by deposition of uric acid crystals in the joints, causing inflammation. There is also an uncommon form of gouty arthritis caused by the formation of rhomboid crystals of calcium pyrophosphate known as pseudogout. In the early stages, the gouty arthritis usually occurs in one joint, but with time, it can occur in many joints and be quite crippling. The joints in gout can often become swollen and lose function. Gouty arthritis can become particularly painful and potentially debilitating when gout cannot successfully be treated. When uric acid levels and gout symptoms cannot be controlled with standard gout medicines that decrease the production of uric acid (e.g., allopurinol) or increase uric acid elimination from the body through the kidneys (e.g., probenecid), this can be referred to as refractory chronic gout.
Comparison of types
Other
Infectious arthritis is another severe form of arthritis. It presents with sudden onset of chills, fever and joint pain. The condition is caused by bacteria elsewhere in the body. Infectious arthritis must be rapidly diagnosed and treated promptly to prevent irreversible joint damage. Only about 1% of cases of infectious arthritis are due to any of a wide variety of viruses. The virus SARS-CoV-2, which causes COVID-19, has been added to the list of viruses that can cause infectious arthritis; SARS-CoV-2 causes reactive arthritis.
Psoriasis can develop into psoriatic arthritis. With psoriatic arthritis, most individuals develop the skin problem first and then the arthritis. The typical features are continuous joint pains, stiffness and swelling. The disease does recur with periods of remission but there is no known cure for the disorder. A small percentage develop a severely painful and destructive form of arthritis which destroys the small joints in the hands and can lead to permanent disability and loss of hand function.
Treatment
There is no known cure for arthritis and rheumatic diseases. Treatment options vary depending on the type of arthritis and include physical therapy, exercise and diet, orthopedic bracing, and oral and topical medications. Joint replacement surgery may be required to repair damage, restore function, or relieve pain.
Physical therapy
In general, studies have shown that physical exercise of the affected joint can noticeably improve long-term pain relief. Furthermore, exercise of the arthritic joint is encouraged to maintain the health of the particular joint and the overall body of the person.
Individuals with arthritis can benefit from both physical and occupational therapy. In arthritis the joints become stiff and the range of movement can be limited. Physical therapy has been shown to significantly improve function, decrease pain, and delay the need for surgical intervention in advanced cases. Exercise prescribed by a physical therapist has been shown to be more effective than medications in treating osteoarthritis of the knee. Exercise often focuses on improving muscle strength, endurance and flexibility. In some cases, exercises may be designed to train balance. Occupational therapy can provide assistance with activities. Assistive technology can reduce a person's physical barriers by improving the use of a damaged body part, typically after an amputation. Assistive technology devices can be customized to the patient or bought commercially.
Medications
There are several types of medications that are used for the treatment of arthritis. Treatment typically begins with medications that have the fewest side effects with further medications being added if insufficiently effective.
Depending on the type of arthritis, the medications that are given may be different. For example, the first-line treatment for osteoarthritis is acetaminophen (paracetamol) while for inflammatory arthritis it involves non-steroidal anti-inflammatory drugs (NSAIDs) like ibuprofen. Opioids and NSAIDs may be less well tolerated. However, topical NSAIDs may have better safety profiles than oral NSAIDs. For more severe cases of osteoarthritis, intra-articular corticosteroid injections may also be considered.
The drugs to treat rheumatoid arthritis (RA) range from corticosteroids to monoclonal antibodies given intravenously. Due to the autoimmune nature of RA, treatments may include not only pain medications and anti-inflammatory drugs, but also another category of drugs called disease-modifying antirheumatic drugs (DMARDs). csDMARDs, TNF biologics and tsDMARDs are specific kinds of DMARDs that are recommended for treatment. Treatment with DMARDs is designed to slow the progression of RA by dampening the adaptive immune response that drives the disease, mediated in part by CD4+ T helper (Th) cells, specifically Th17 cells. Th17 cells are present in higher quantities at the site of bone destruction in joints and produce inflammatory cytokines associated with inflammation, such as interleukin-17 (IL-17).
Surgery
A number of rheumasurgical interventions have been incorporated in the treatment of arthritis since the 1950s. Arthroscopic surgery for osteoarthritis of the knee provides no additional benefit to optimized physical and medical therapy.
Adaptive aids
People with hand arthritis can have trouble with simple activities of daily living tasks (ADLs), such as turning a key in a lock or opening jars, as these activities can be cumbersome and painful. There are adaptive aids or assistive devices (ADs) available to help with these tasks, but they are generally more costly than conventional products with the same function. It is now possible to 3-D print adaptive aids, which have been released as open source hardware to reduce patient costs. Adaptive aids can significantly help arthritis patients and the vast majority of those with arthritis need and use them.
Alternative medicine
Further research is required to determine if transcutaneous electrical nerve stimulation (TENS) for knee osteoarthritis is effective for controlling pain.
Low level laser therapy may be considered for relief of pain and stiffness associated with arthritis. Evidence of benefit is tentative.
Pulsed electromagnetic field therapy (PEMFT) has tentative evidence supporting improved functioning but no evidence of improved pain in osteoarthritis. The FDA has not approved PEMFT for the treatment of arthritis. In Canada, PEMF devices are legally licensed by Health Canada for the treatment of pain associated with arthritic conditions.
Epidemiology
Arthritis is predominantly a disease of the elderly, but children can also be affected by the disease. Arthritis is more common in women than men at all ages and affects all races, ethnic groups and cultures. In the United States a CDC survey based on data from 2013 to 2015 showed 54.4 million (22.7%) adults had self-reported doctor-diagnosed arthritis, and 23.7 million (43.5% of those with arthritis) had arthritis-attributable activity limitation (AAAL). With an aging population, this number is expected to increase. Adults with co-morbid conditions, such as heart disease, diabetes, and obesity, were seen to have a higher than average prevalence of doctor-diagnosed arthritis (49.3%, 47.1%, and 30.6% respectively).
Disability due to musculoskeletal disorders increased by 45% from 1990 to 2010. Of these, osteoarthritis is the fastest increasing major health condition. Among the many reports on the increased prevalence of musculoskeletal conditions, data from Africa are lacking and underestimated. A systematic review assessed the prevalence of arthritis in Africa and included twenty population-based and seven hospital-based studies. The majority of studies, twelve, were from South Africa. Nine studies were well-conducted, eleven studies were of moderate quality, and seven studies were conducted poorly. The results of the systematic review were as follows:
Rheumatoid arthritis: 0.1% in Algeria (urban setting); 0.6% in Democratic Republic of Congo (urban setting); 2.5% and 0.07% in urban and rural settings in South Africa respectively; 0.3% in Egypt (rural setting), 0.4% in Lesotho (rural setting)
Osteoarthritis: 55.1% in South Africa (urban setting); ranged from 29.5 to 82.7% in South Africans aged 65 years and older
Knee osteoarthritis has the highest prevalence of all types of osteoarthritis, with 33.1% in rural South Africa
Ankylosing spondylitis: 0.1% in South Africa (rural setting)
Psoriatic arthritis: 4.4% in South Africa (urban setting)
Gout: 0.7% in South Africa (urban setting)
Juvenile idiopathic arthritis: 0.3% in Egypt (urban setting)
History
Evidence of osteoarthritis and potentially inflammatory arthritis has been discovered in dinosaurs. The first known traces of human arthritis date back as far as 4500 BC. In early reports, arthritis was frequently referred to as the most common ailment of prehistoric peoples. It was noted in skeletal remains of Native Americans found in Tennessee and parts of what is now Olathe, Kansas. Evidence of arthritis has been found throughout history, from Ötzi, a mummy found along the border of modern Italy and Austria, to Egyptian mummies.
In 1715, William Musgrave published the second edition of his most important medical work, De arthritide symptomatica, which concerned arthritis and its effects. Augustin Jacob Landré-Beauvais, a 28-year-old resident physician at Salpêtrière Asylum in France, was the first person to describe the symptoms of rheumatoid arthritis. Though Landré-Beauvais' classification of rheumatoid arthritis as a relative of gout was inaccurate, his dissertation encouraged others to further study the disease.
Terminology
The term is derived from arthr- (from Greek árthron, "joint") and the suffix -itis, which has come to be associated with inflammation.
The word arthritides is the plural form of arthritis, and denotes the collective group of arthritis-like conditions.
See also
Antiarthritics
Arthritis Care (charity in the UK)
Arthritis Foundation (US not-for-profit)
Knee arthritis
Osteoimmunology
Weather pains
References
External links
American College of Rheumatology – US professional society of rheumatologists
National Institute of Arthritis and Musculoskeletal and Skin Diseases – US national research institute
The Ultimate Arthritis Diet – Arthritis Foundation
Thyroid disease
Thyroid disease is a medical condition that affects the function of the thyroid gland. The thyroid gland is located at the front of the neck and produces thyroid hormones that travel through the blood to help regulate many other organs, meaning that it is an endocrine organ. These hormones normally act in the body to regulate energy use, infant development, and childhood development.
There are five general types of thyroid disease, each with their own symptoms. A person may have one or several different types at the same time. The five groups are:
Hypothyroidism (low function) caused by not having enough free thyroid hormones
Hyperthyroidism (high function) caused by having too many free thyroid hormones
Structural abnormalities, most commonly a goiter (enlargement of the thyroid gland)
Tumors which can be benign (not cancerous) or cancerous
Abnormal thyroid function tests without any clinical symptoms (subclinical hypothyroidism or subclinical hyperthyroidism).
In the US, hypothyroidism and hyperthyroidism were found in 4.6% and 1.3%, respectively, of the population over 12 years old (2002).
In some types, such as subacute thyroiditis or postpartum thyroiditis, symptoms may go away after a few months and laboratory tests may return to normal. However, most types of thyroid disease do not resolve on their own. Common hypothyroid symptoms include fatigue, low energy, weight gain, inability to tolerate the cold, slow heart rate, dry skin and constipation. Common hyperthyroid symptoms include irritability, anxiety, weight loss, fast heartbeat, inability to tolerate the heat, diarrhea, and enlargement of the thyroid. Structural abnormalities may not produce symptoms; however, some people may have hyperthyroid or hypothyroid symptoms related to the structural abnormality or notice swelling of the neck. Rarely, goiters can cause compression of the airway, compression of the vessels in the neck, or difficulty swallowing. Tumors, often called thyroid nodules, can also have many different symptoms ranging from hyperthyroidism to hypothyroidism to swelling in the neck and compression of the structures in the neck.
Diagnosis starts with a history and physical examination. Screening for thyroid disease in patients without symptoms is a debated topic although commonly practiced in the United States. If dysfunction of the thyroid is suspected, laboratory tests can help support or rule out thyroid disease. Initial blood tests often include thyroid-stimulating hormone (TSH) and free thyroxine (T4). Total and free triiodothyronine (T3) levels are less commonly used. If autoimmune disease of the thyroid is suspected, blood tests looking for Anti-thyroid autoantibodies can also be obtained. Procedures such as ultrasound, biopsy and a radioiodine scanning and uptake study may also be used to help with the diagnosis, particularly if a nodule is suspected.
Thyroid diseases are highly prevalent worldwide, and treatment varies based on the disorder. Levothyroxine is the mainstay of treatment for people with hypothyroidism, while people with hyperthyroidism caused by Graves' disease can be managed with iodine therapy, antithyroid medication, or surgical removal of the thyroid gland. Thyroid surgery may also be performed to remove a thyroid nodule or to reduce the size of a goiter if it obstructs nearby structures or for cosmetic reasons.
Signs and symptoms
Symptoms of the condition vary with type: hypo- vs. hyperthyroidism, which are further described below.
Certain symptoms and physical changes can be seen in both hypothyroidism and hyperthyroidism: fatigue, fine or thinning hair, menstrual cycle irregularities, muscle weakness or aches (myalgia), and different forms of myxedema.
Diseases
Low function
Hypothyroidism is a state in which the body is not producing enough thyroid hormones, or is not able to respond to / utilize existing thyroid hormones properly. The main categories are:
Thyroiditis: an inflammation of the thyroid gland
Hashimoto's thyroiditis / Hashimoto's disease
Ord's thyroiditis
Postpartum thyroiditis
Silent thyroiditis
Acute thyroiditis
Riedel's thyroiditis (the majority of cases do not affect thyroid function, but approximately 30% of cases lead to hypothyroidism)
Iatrogenic hypothyroidism
Postoperative hypothyroidism
Medication- or radiation-induced hypothyroidism
Thyroid hormone resistance
Euthyroid sick syndrome
Congenital hypothyroidism: a deficiency of thyroid hormone from birth, which, if untreated, can lead to cretinism
High function
Hyperthyroidism is a state in which the body is producing too much thyroid hormone. The main hyperthyroid conditions are:
Graves' disease
Toxic thyroid nodule
Thyroid storm
Toxic nodular struma (Plummer's disease)
Hashitoxicosis: transient hyperthyroidism that can occur in Hashimoto's thyroiditis
Structural abnormalities
Goiter: an abnormal enlargement of the thyroid gland
Endemic goiter
Diffuse goiter
Multinodular goiter
Lingual thyroid
Thyroglossal duct cyst
Tumors
Thyroid cancer
Papillary
Follicular
Medullary
Anaplastic
Lymphomas are usually malignant
Thyroid adenomas are benign tumors
Medication side effects
Certain medications can have the unintended side effect of affecting thyroid function. While some medications can lead to significant hypothyroidism or hyperthyroidism and those at risk will need to be carefully monitored, some medications may affect thyroid hormone lab tests without causing any symptoms or clinical changes, and may not require treatment. The following medications have been linked to various forms of thyroid disease:
Amiodarone (more commonly can lead to hypothyroidism, but can be associated with some types of hyperthyroidism)
Lithium salts (hypothyroidism)
Some types of interferon and IL-2 (thyroiditis)
Glucocorticoids, dopamine agonists, and somatostatin analogs (suppress TSH secretion, which can lead to hypothyroidism)
Pathophysiology
Most thyroid disease in the United States stems from a condition where the body's immune system attacks itself. In other instances, thyroid disease comes from the body trying to adapt to environmental conditions like iodine deficiency or to new physiologic conditions like pregnancy.
Autoimmune Thyroid Disease
Autoimmune thyroid disease is a general category of disease that occurs due to the immune system targeting its own body. It is not fully understood why this occurs, but it is thought to be partially genetic as these diseases tend to run in families. In one of the most common types, Graves' Disease, the body produces antibodies against the TSH receptor on thyroid cells. This causes the receptor to activate even without TSH being present and causes the thyroid to produce and release excess thyroid hormone (hyperthyroidism). Another common form of autoimmune thyroid disease is Hashimoto's thyroiditis where the body produces antibodies against different normal components of the thyroid gland, most commonly thyroglobulin, thyroid peroxidase, and the TSH receptor. These antibodies cause the immune system to attack the thyroid cells and cause inflammation (lymphocytic infiltration) and destruction (fibrosis) of the gland.
Goiter
Goiter is the general enlargement of the thyroid that can be associated with many thyroid diseases. The main reason this happens is because of increased signaling to the thyroid by way of TSH receptors to try to make it produce more thyroid hormone. This causes increased vascularity and an increase in size (hypertrophy) of the gland. In hypothyroid states or iodine deficiency, the body recognizes that it is not producing enough thyroid hormone and starts to produce more TSH to help stimulate the thyroid to produce more thyroid hormone. This stimulation causes the gland to increase in size to increase production of thyroid hormone. In hyperthyroidism caused by Graves' disease or toxic multinodular goiter, there is excess stimulation of the TSH receptor even when thyroid hormone levels are normal. In Graves' disease this is because of autoantibodies (thyroid stimulating immunoglobulins) which bind to and activate the TSH receptors in place of TSH, while in toxic multinodular goiter this is often because of a mutation in the TSH receptor that causes it to activate without receiving a signal from TSH. In rarer cases, the thyroid may become enlarged because it becomes filled with thyroid hormone or thyroid hormone precursors that it is unable to release, because of congenital abnormalities, or because of increased intake of iodine from supplementation or medication.
Pregnancy
There are many changes to the body during pregnancy. One of the major changes to help with the development of the fetus is the production of human chorionic gonadotropin (hCG). This hormone, produced by the placenta, has a similar structure to TSH and can bind to the maternal TSH receptor to produce thyroid hormone. During pregnancy, there is also an increase in estrogen, which causes the mother to produce more thyroxine binding globulin, the protein that carries most of the thyroid hormone in the blood. These normal hormonal changes often make pregnancy look like a hyperthyroid state even though the values may be within the normal range for pregnancy, so it is necessary to use trimester-specific ranges for TSH and free T4. True hyperthyroidism in pregnancy is most often caused by an autoimmune mechanism from Graves' disease. New diagnosis of hypothyroidism in pregnancy is rare because hypothyroidism often makes it difficult to become pregnant in the first place. When hypothyroidism is seen in pregnancy, it is often because an individual already has hypothyroidism and needs to increase their levothyroxine dose to account for the increased thyroxine binding globulin present in pregnancy.
Diagnosis
Diagnosis of thyroid disease depends on symptoms and whether or not a thyroid nodule is present. Most patients will receive a blood test. Others might need an ultrasound, biopsy or a radioiodine scanning and uptake study.
Blood tests
Thyroid function tests
There are several hormones that can be measured in the blood to determine how the thyroid gland is functioning. These include the thyroid hormones triiodothyronine (T3) and its precursor thyroxine (T4), which are produced by the thyroid gland. Thyroid-stimulating hormone (TSH) is another important hormone that is secreted by the anterior pituitary cells in the brain. Its primary function is to increase the production of T3 and T4 by the thyroid gland.
The most useful marker of thyroid gland function is serum thyroid-stimulating hormone (TSH) levels. TSH levels are determined by a classic negative feedback system in which high levels of T3 and T4 suppress the production of TSH, and low levels of T3 and T4 increase the production of TSH. TSH levels are thus often used by doctors as a screening test, where the first approach is to determine whether TSH is elevated, suppressed, or normal.
Elevated TSH levels can signify inadequate thyroid hormone production (hypothyroidism)
Suppressed TSH levels can point to excessive thyroid hormone production (hyperthyroidism)
Because a single abnormal TSH level can be misleading, T3 and T4 levels must be measured in the blood to further confirm the diagnosis. When circulating in the body, T3 and T4 are bound to transport proteins. Only a small fraction of the circulating thyroid hormones are unbound or free, and thus biologically active. T3 and T4 levels can thus be measured as free T3 and T4, or total T3 and T4, which takes into consideration the free hormones in addition to the protein-bound hormones. Free T3 and T4 measurements are important because certain drugs and illnesses can affect the concentrations of transport proteins, resulting in differing total and free thyroid hormone levels. There are differing guidelines for T3 and T4 measurements.
Free T4 levels should be measured in the evaluation of hypothyroidism, and low free T4 establishes the diagnosis. T3 levels are generally not measured in the evaluation of hypothyroidism.
Free T4 and total T3 can be measured when hyperthyroidism is of high suspicion as it will improve the accuracy of the diagnosis. Free T4, total T3 or both are elevated and serum TSH is below normal in hyperthyroidism. If the hyperthyroidism is mild, only serum T3 may be elevated and serum TSH can be low or may not be detected in the blood.
Free T4 levels may also be tested in patients who have convincing symptoms of hyper- and hypothyroidism, despite a normal TSH.
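To make the TSH-first screening logic above concrete, the following Python sketch classifies a TSH and free T4 pair according to the feedback pattern just described. It is an illustration only, not a clinical decision rule: the reference intervals, cut-offs, and function name are assumptions, and real-world interpretation also considers T3, medications, pregnancy-specific ranges, and the clinical picture.

```python
# Illustrative sketch of the TSH-first interpretation pattern described above.
# The reference intervals below are placeholder assumptions, not clinical values.

TSH_RANGE = (0.4, 4.0)   # mIU/L, assumed illustrative reference interval
FT4_RANGE = (0.8, 1.8)   # ng/dL, assumed illustrative reference interval

def interpret_thyroid_tests(tsh: float, free_t4: float) -> str:
    """Classify a TSH / free T4 pair following the negative-feedback logic."""
    tsh_low, tsh_high = TSH_RANGE
    ft4_low, ft4_high = FT4_RANGE

    if tsh > tsh_high:
        # High TSH: the pituitary is driving a gland that is under-producing.
        return "primary hypothyroidism" if free_t4 < ft4_low else "subclinical hypothyroidism"
    if tsh < tsh_low:
        # Low TSH: feedback suppression by excess circulating hormone.
        return "hyperthyroidism" if free_t4 > ft4_high else "subclinical or mild hyperthyroidism"
    return "euthyroid (normal) pattern"

print(interpret_thyroid_tests(tsh=8.2, free_t4=0.5))   # primary hypothyroidism
print(interpret_thyroid_tests(tsh=0.1, free_t4=2.6))   # hyperthyroidism
```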
Antithyroid antibodies
Autoantibodies to the thyroid gland may be detected in various disease states. There are several anti-thyroid antibodies, including anti-thyroglobulin antibodies (TgAb), anti-microsomal/anti-thyroid peroxidase antibodies (TPOAb), and TSH receptor antibodies (TSHRAb).
Elevated anti-thyroglobulin (TgAb) and anti-thyroid peroxidase antibodies (TPOAb) can be found in patients with Hashimoto's thyroiditis, the most common autoimmune type of hypothyroidism. TPOAb levels have also been found to be elevated in patients who present with subclinical hypothyroidism (where TSH is elevated, but free T4 is normal), and can help predict progression to overt hypothyroidism. The American Thyroid Association thus recommends measuring TPOAb levels when evaluating subclinical hypothyroidism or when trying to identify whether nodular thyroid disease is due to autoimmune thyroid disease.
When the etiology of hyperthyroidism is not clear after initial clinical and biochemical evaluation, measurement of TSH receptor antibodies (TSHRAb) can help make the diagnosis. In Graves' disease, TSHRAb levels are elevated as they are responsible for activating the TSH receptor and causing increased thyroid hormone production.
Other markers
There are two markers for thyroid-derived cancers.
Thyroglobulin (TG) levels can be elevated in well-differentiated papillary or follicular adenocarcinoma. It is often used to provide information on residual, recurrent or metastatic disease in patients with differentiated thyroid cancer. However, serum TG levels can be elevated in most thyroid diseases. Routine measurement of serum TG for evaluation of thyroid nodules is therefore currently not recommended by the American Thyroid Association.
Elevated calcitonin levels in the blood have been shown to be associated with the rare medullary thyroid cancer. However, the measurement of calcitonin levels as a diagnostic tool is currently controversial due to falsely high or low calcitonin levels in a variety of diseases other than medullary thyroid cancer.
Very infrequently, TBG and transthyretin levels may be abnormal; these are not routinely tested.
To differentiate between different types of hypothyroidism, a specific test may be used. Thyrotropin-releasing hormone (TRH) is injected into the body through a vein. This hormone is naturally secreted by the hypothalamus and stimulates the pituitary gland. The pituitary responds by releasing thyroid-stimulating hormone (TSH). Large amounts of externally administered TRH can suppress the subsequent release of TSH. This amount of release-suppression is exaggerated in primary hypothyroidism, major depression, cocaine dependence, amphetamine dependence and chronic phencyclidine abuse. There is a failure to suppress in the manic phase of bipolar disorder.
Ultrasound
Many people may develop a thyroid nodule at some point in their lives. Although many who experience this worry that it is thyroid cancer, there are many causes of nodules that are benign and not cancerous. If a possible nodule is present, a doctor may order thyroid function tests to determine if the thyroid gland's activity is being affected. If more information is needed after a clinical exam and lab tests, medical ultrasonography can help determine the nature of thyroid nodule(s). There are some notable differences in typical benign vs. cancerous thyroid nodules that can particularly be detected by the high-frequency sound waves in an ultrasound scan. The ultrasound may also locate nodules that are too small for a doctor to feel on a physical exam, and can demonstrate whether a nodule is primarily solid, liquid (cystic), or a mixture of both. It is an imaging process that can often be done in a doctor's office, is painless, and does not expose the individual to any radiation.
A number of ultrasound characteristics can help distinguish benign from malignant (cancerous) thyroid nodules.
Although ultrasonography is a very important diagnostic tool, this method is not always able to separate benign from malignant nodules with certainty. In suspicious cases, a tissue sample is often obtained by biopsy for microscopic examination.
Radioiodine scanning and uptake
Thyroid scintigraphy, in which the thyroid is imaged with the aid of radioactive iodine (usually iodine-123, which does not harm thyroid cells, or rarely, iodine-131), is performed in the nuclear medicine department of a hospital or clinic. Radioiodine collects in the thyroid gland before being excreted in the urine. While in the thyroid, the radioactive emissions can be detected by a camera, producing a rough image of the shape (a radioiodine scan) and tissue activity (a radioiodine uptake) of the thyroid gland.
A normal radioiodine scan shows even uptake and activity throughout the gland. Irregular uptake can reflect an abnormally shaped or abnormally located gland, or it can indicate that a portion of the gland is overactive or underactive. For example, a nodule that is overactive ("hot"), to the point of suppressing the activity of the rest of the gland, is usually a thyrotoxic adenoma, a surgically curable form of hyperthyroidism that is rarely malignant. In contrast, finding that a substantial section of the thyroid is inactive ("cold") may indicate an area of non-functioning tissue, such as thyroid cancer.
The amount of radioactivity can be quantified and serves as an indicator of the metabolic activity of the gland. A normal quantitation of radioiodine uptake demonstrates that about 8-35% of the administered dose can be detected in the thyroid 24 hours later. Overactivity or underactivity of the gland, as may occur with hyperthyroidism or hypothyroidism, is usually reflected in increased or decreased radioiodine uptake. Different patterns may occur with different causes of hypo- or hyperthyroidism.
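As a rough arithmetic illustration of this quantitation, the uptake is the net activity counted over the thyroid expressed as a percentage of the administered dose. The short sketch below is a simplified assumption: the counts are invented, the names are illustrative, and real protocols also apply decay correction and site-specific calibration.

```python
# Simplified sketch of a 24-hour radioiodine uptake calculation.
# All numbers are invented for illustration; clinical protocols also
# correct for radioactive decay and use a calibrated dose standard.

def radioiodine_uptake_percent(neck_counts: float,
                               background_counts: float,
                               standard_counts: float) -> float:
    """Net counts over the thyroid as a percentage of the administered-dose standard."""
    net_thyroid = neck_counts - background_counts
    return 100.0 * net_thyroid / standard_counts

uptake = radioiodine_uptake_percent(neck_counts=52_000,
                                    background_counts=4_000,
                                    standard_counts=240_000)
print(f"24-hour uptake: {uptake:.1f}% (roughly 8-35% is described above as typical)")
```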
Biopsy
A medical biopsy refers to the obtaining of a tissue sample for examination under the microscope or other testing, usually to distinguish cancer from noncancerous conditions. Thyroid tissue may be obtained for biopsy by fine needle aspiration (FNA) or by surgery.
Fine needle aspiration has the advantage of being a brief, safe, outpatient procedure that is safer and less expensive than surgery and does not leave a visible scar. Needle biopsies became widely used in the 1980s, though it was recognized that the accuracy of identification of cancer was good, but not perfect. The accuracy of the diagnosis depends on obtaining tissue from all of the suspicious areas of an abnormal thyroid gland. The reliability of fine needle aspiration is increased when sampling can be guided by ultrasound, and over the last 15 years, this has become the preferred method for thyroid biopsy in North America.
Treatment
Medication
Levothyroxine is a stereoisomer of thyroxine (T4) which is degraded much more slowly and can be administered once daily in patients with hypothyroidism. Natural thyroid hormone from pigs is sometimes also used, especially for people who cannot tolerate the synthetic version. Hyperthyroidism caused by Graves' disease may be treated with the thioamide drugs propylthiouracil, carbimazole or methimazole, or rarely with Lugol's solution. Additionally, hyperthyroidism and thyroid tumors may be treated with radioactive iodine. Ethanol injections for the treatment of recurrent thyroid cysts and metastatic thyroid cancer in lymph nodes can also be an alternative to surgery.
Surgery
Thyroid surgery is performed for a variety of reasons. A nodule or lobe of the thyroid is sometimes removed for biopsy or because of the presence of an autonomously functioning adenoma causing hyperthyroidism. A large majority of the thyroid may be removed (subtotal thyroidectomy) to treat the hyperthyroidism of Graves' disease, or to remove a goiter that is unsightly or impinges on vital structures.
A total thyroidectomy, removing the entire thyroid gland and associated lymph nodes, is the preferred treatment for thyroid cancer. Removal of the bulk of the thyroid gland usually produces hypothyroidism unless the person takes thyroid hormone replacement. Consequently, individuals who have undergone a total thyroidectomy are typically placed on thyroid hormone replacement (e.g. levothyroxine) for the remainder of their lives. Higher than normal doses are often administered to prevent recurrence.
If the thyroid gland must be removed surgically, care must be taken to avoid damage to adjacent structures, the parathyroid glands and the recurrent laryngeal nerve. Both are susceptible to accidental removal and/or injury during thyroid surgery.
The parathyroid glands produce parathyroid hormone (PTH), a hormone needed to maintain adequate amounts of calcium in the blood. Removal results in hypoparathyroidism and a need for supplemental calcium and vitamin D each day. In the event that the blood supply to any one of the parathyroid glands is endangered through surgery, the parathyroid gland(s) involved may be re-implanted in surrounding muscle tissue.
The recurrent laryngeal nerves, which run along the posterior thyroid, provide motor control for all intrinsic muscles of the larynx except the cricothyroid muscle. Accidental laceration of one or both recurrent laryngeal nerves may cause paralysis of the vocal cords and their associated muscles, changing the voice quality. A 2019 systematic review concluded that the available evidence shows no difference between visually identifying the nerve and utilizing intraoperative neuromonitoring during surgery when trying to prevent injury to the recurrent laryngeal nerve during thyroid surgery.
Radioiodine
Radioiodine therapy with iodine-131 can be used to shrink the thyroid gland (for instance, in the case of large goiters that cause symptoms but do not harbor cancer—after evaluation and biopsy of suspicious nodules has been done), or to destroy hyperactive thyroid cells (for example, in cases of thyroid cancer). The iodine uptake can be high in countries with iodine deficiency, but low in iodine sufficient countries. To enhance iodine-131 uptake by the thyroid and allow for more successful treatment, TSH is raised prior to therapy in order to stimulate the existing thyroid cells. This is done either by withdrawal of thyroid hormone medication or injections of recombinant human TSH (Thyrogen), released in the United States in 1999. Thyrogen injections can reportedly boost uptake up to 50-60%. Radioiodine treatment can also cause hypothyroidism (which is sometimes the end goal of treatment) and, although rare, a pain syndrome (due to radiation thyroiditis).
Epidemiology
In the United States, autoimmune inflammation is the most common form of thyroid disease, while worldwide hypothyroidism and goiter due to dietary iodine deficiency are the most common. According to the American Thyroid Association in 2015, approximately 20 million people in the United States alone are affected by thyroid disease. Hypothyroidism affects 3-10% of adults, with a higher incidence in women and the elderly. An estimated one-third of the world's population currently lives in areas of low dietary iodine levels. In regions of severe iodine deficiency, the prevalence of goiter is as high as 80%. In areas where iodine deficiency is not found, the most common type of hypothyroidism is an autoimmune subtype called Hashimoto's thyroiditis, with a prevalence of 1-2%. As for hyperthyroidism, Graves' disease, another autoimmune condition, is the most common type with a prevalence of 0.5% in males and 3% in females. Although thyroid nodules are common, thyroid cancer is rare. Thyroid cancer accounts for less than 1% of all cancer in the UK, though it is the most common endocrine tumor and makes up greater than 90% of all cancers of the endocrine glands.
See also
Hyperthyroidism
Graves' disease
Hypothyroidism
Hashimoto's thyroiditis
Thyroid nodule
Thyroid disease in pregnancy
References
External links
Medline Plus Medical Encyclopedia entry for Thyroid Disease
National Institutes of Health
Surgical sieve
The surgical sieve is a thought process in medicine. It is a typical example of how to organise a structured examination answer for medical students and physicians when they are challenged with a question. It is also a way of constructing answers to questions from patients and their relatives in a logical manner, and structuring articles and reference texts in medicine. Some textbooks put emphasis on using the surgical sieve as a basic structure of diagnosis and management of illnesses.
Overview
Although there are several versions around the world with slight variations, the surgical sieve usually consists of the following types of process in the human body, in no particular order:
Congenital
Acquired
Vascular
Infective
Traumatic
Autoimmune
Metabolic
Inflammatory
Neurological
Neoplastic
Degenerative
Environmental
Unknown
A more extensive, and perhaps more concise, mechanism for employing the surgical sieve is the mnemonic MEDIC HAT PINE (a brief checklist sketch follows the expanded list below):
Metabolic (conditions relating to metabolism, biochemistry, and the like)
Endocrinological (conditions relating to the various secretory systems within the body)
Degenerative (conditions relating to age-related destruction of tissue, or stress-related destruction of tissue)
Inflammatory/Infective (conditions that primarily present in a way that involves activation of the immune system, whether by infection or by other inflammatory processes)
Congenital (conditions present at birth)
Genetic / inherited (conditions passed down within families)
Haematological (conditions relating to the blood system, in one way or another)
Autoimmune (conditions relating to the inappropriate activation of the immune system, in one of many ways)
Traumatic (conditions relating to a physical impact between two or more objects)
Psychological (conditions related to a chemical imbalance or a disorder of thought processes)
Neurological (conditions relating to the nervous system, in one way or another – whether that be the central or peripheral)
Idiopathic (conditions without a known cause) / Iatrogenic (literally "doctor-caused": conditions resulting from treatment)
Neoplastic (conditions relating to cancers)
Environmental (conditions relating to exposures, and dose-response relationships thereof)
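As referenced above, the sieve can be written down as a simple checklist data structure and filled in for a given presentation. The Python sketch below is only an illustration of that idea: the category names follow the mnemonic, the worked entries are loosely adapted from the splenomegaly example below, and nothing here is a clinical decision tool.

```python
# Minimal sketch of the MEDIC HAT PINE sieve as a checklist data structure.
# Categories mirror the mnemonic above; the example entries are illustrative.

MEDIC_HAT_PINE = [
    "Metabolic", "Endocrinological", "Degenerative", "Inflammatory/Infective",
    "Congenital", "Haematological", "Autoimmune", "Traumatic", "Psychological",
    "Idiopathic/Iatrogenic", "Neoplastic", "Environmental",
]

def differential_scaffold() -> dict:
    """Return one empty slot per sieve category, to be filled for a presentation."""
    return {category: [] for category in MEDIC_HAT_PINE}

splenomegaly = differential_scaffold()
splenomegaly["Inflammatory/Infective"] += ["infective endocarditis", "malaria"]
splenomegaly["Haematological"] += ["sickle-cell disease", "thalassemia"]
splenomegaly["Neoplastic"] += ["chronic myeloid leukaemia", "metastases"]

for category, causes in splenomegaly.items():
    if causes:
        print(f"{category}: {', '.join(causes)}")
```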
Examples
What are the causes of an acute confusional state in a patient?
Treatment induced (Iatrogenic): polypharmacy, sedatives, analgesics, steroids, drug withdrawal
Vascular: stroke, TIA, vascular dementia
Inflammatory: infection, systemic inflammatory response syndrome
Traumatic: head injury, Intracranial hemorrhage, shock
Autoimmune: thyroid disease
Metabolic: electrolyte imbalance, DKA, hypoglycaemia, SIADH
Infective: sepsis, local infection
Neoplastic: brain tumour, carcinomatosis
Degenerative: Alzheimer's disease, dementia
What are the causes of splenomegaly?
Idiopathic: Idiopathic thrombocytopenic purpura
Vascular: portal vein obstruction, Budd-Chiari syndrome, haemoglobinopathies (Sickle-cell disease, thalassemia)
Infective: AIDS, mononucleosis, septicaemia, tuberculosis, brucellosis, malaria, infective endocarditis
Traumatic: haematoma, rupture
Autoimmune: rheumatoid arthritis, SLE
Metabolic: Gaucher's disease, mucopolysaccharidoses, amyloidosis, Tangier disease
Inflammatory: sarcoidosis
Neoplastic: CML, metastases, myeloproliferative disorders
In popular culture
The surgical sieve is frequently used by Gregory House, a physician in the TV series House, to diagnose the rare diseases his patients suffer from. In some episodes various forms of the surgical sieve are scribbled on to House's whiteboard while his team struggle to diagnose difficult cases. In the episode 'Paternity' the mnemonic 'MIDNIT' is used to run through the sieve (metabolic, inflammation, degenerative, neoplastic, infection, trauma).
See also
Trauma surgery
Hypnosurgery
Surgery
References
Atrophy
Atrophy is the partial or complete wasting away of a part of the body. Causes of atrophy include mutations (which can destroy the gene needed to build up the organ), poor nourishment, poor circulation, loss of hormonal support, loss of nerve supply to the target organ, excessive apoptosis of cells, and disuse or lack of exercise or disease intrinsic to the tissue itself. In medical practice, hormonal and nerve inputs that maintain an organ or body part are said to have trophic effects. A diminished muscular trophic condition is designated as atrophy. Atrophy is a reduction in the size of a cell, organ, or tissue after it has attained its normal mature size. In contrast, hypoplasia is a reduction in the number of cells of an organ or tissue that has not attained normal maturity.
Atrophy is the general physiological process of reabsorption and breakdown of tissues, involving apoptosis. When it occurs as a result of disease or loss of trophic support because of other diseases, it is termed pathological atrophy, although it can be a part of normal body development and homeostasis as well.
Normal development
Examples of atrophy as part of normal development include the shrinking and involution of the thymus in early childhood and of the tonsils in adolescence. In old age, effects include, but are not limited to, loss of teeth and hair, thinning of skin that creates wrinkles, weakening of muscles, loss of weight in organs, and sluggish mental activity.
Muscle atrophies
Disuse atrophy of muscles and bones, with loss of mass and strength, can occur after prolonged immobility, such as extended bedrest or having a body part in a cast (for example, the eye after prolonged time in darkness, or the legs when bedridden). This type of atrophy can usually be reversed with exercise unless severe.
There are many diseases and conditions which cause atrophy of muscle mass. For example, diseases such as cancer and AIDS induce a body wasting syndrome called cachexia, which is notable for the severe muscle atrophy seen. Other syndromes or conditions which can induce skeletal muscle atrophy are congestive heart failure and liver disease.
During aging, there is a gradual decrease in the ability to maintain skeletal muscle function and mass. This condition is called sarcopenia, and may be distinct from atrophy in its pathophysiology. While the exact cause of sarcopenia is unknown, it may be induced by a combination of a gradual failure in the satellite cells which help to regenerate skeletal muscle fibers, and a decrease in sensitivity to or the availability of critical secreted growth factors which are necessary to maintain muscle mass and satellite cell survival.
Dystrophies, myositis, and motor neuron conditions
Pathologic atrophy of muscles can occur with diseases of the motor nerves or diseases of the muscle tissue itself. Examples of atrophying nerve diseases include Charcot-Marie-Tooth disease, poliomyelitis, amyotrophic lateral sclerosis (ALS or Lou Gehrig's disease), and Guillain–Barré syndrome. Examples of atrophying muscle diseases include muscular dystrophy, myotonia congenita, and myotonic dystrophy.
Changes in Na+ channel isoform expression and spontaneous activity in muscle, known as fibrillation, can also result in muscle atrophy.
A flail limb is a medical term which refers to an extremity in which the primary nerve has been severed, resulting in complete lack of mobility and sensation. The muscles soon wither away from atrophy.
Gland atrophy
The adrenal glands atrophy during prolonged use of exogenous glucocorticoids like prednisone. Atrophy of the breasts can occur with prolonged estrogen reduction, as with anorexia nervosa or menopause. Testicular atrophy can occur with prolonged use of enough exogenous sex steroids (either androgen or estrogen) to reduce gonadotropin secretion.
Vaginal atrophy
In post-menopausal women, the walls of the vagina become thinner (atrophic vaginitis). The mechanism for the age-related condition is not yet clear, though there are theories that the effect is caused by decreases in estrogen levels. This atrophy, occurring concurrently with breast atrophy, is consistent with the homeostatic (normal development) role of atrophy in general, as after menopause the body has no further functional biological need to maintain the reproductive system which it has permanently shut down.
Research
One drug under investigation seemed to prevent the type of muscle loss that occurs in immobile, bedridden patients. Testing on mice showed that it blocked the activity of a protein present in the muscle that is involved in muscle atrophy. However, the drug's long-term effect on the heart precludes its routine use in humans, and other drugs are being sought.
See also
Olivopontocerebellar atrophy
Optic atrophy
Spinal muscular atrophy
Hypertrophy
Deconditioning
List of biological development disorders
References
Etiology
Etiology (alternatively spelled aetiology or ætiology) is the study of causation or origination. The word is derived from the Greek aitiología, meaning "giving a reason for". More completely, etiology is the study of the causes, origins, or reasons behind the way that things are, or the way they function, or it can refer to the causes themselves. The word is commonly used in medicine (pertaining to causes of disease or illness) and in philosophy, but also in physics, biology, psychology, political science, geography, cosmology, spatial analysis and theology in reference to the causes or origins of various phenomena.
In the past, when many physical phenomena were not well understood or when histories were not recorded, myths often arose to provide etiologies. Thus, an etiological myth, or origin myth, is a myth that has arisen, been told over time or written to explain the origins of various social or natural phenomena. For example, Virgil's Aeneid is a national myth written to explain and glorify the origins of the Roman Empire. In theology, many religions have creation myths explaining the origins of the world or its relationship to believers.
Medicine
In medicine, the etiology of an illness or condition refers to the factor or factors that come together to cause it, and studies are frequently undertaken to determine these factors. Relatedly, when disease is widespread, epidemiological studies investigate what associated factors, such as location, sex, exposure to chemicals, and many others, make a population more or less likely to have an illness, condition, or disease, thus helping determine its etiology. Sometimes determining etiology is an imprecise process. In the past, the etiology of a common sailor's disease, scurvy, was long unknown. When large, ocean-going ships were built, sailors began to put to sea for long periods of time, and often lacked fresh fruit and vegetables. Without knowing the precise cause, Captain James Cook suspected scurvy was caused by the lack of vegetables in the diet. Based on his suspicion, he forced his crew to eat sauerkraut, a cabbage preparation, every day, and based upon the positive outcomes, he inferred that it prevented scurvy, even though he did not know precisely why. It took about another two hundred years to discover the precise etiology: the lack of vitamin C in a sailor's diet.
The following are examples of intrinsic factors:
Inherited conditions, or conditions that are passed down from one's parents. An example of this is hemophilia, a disorder that leads to excessive bleeding.
Metabolic and endocrine, or hormone, disorders. These are abnormalities in the chemical signaling and interaction in the body. For example, Diabetes mellitus is an endocrine disease that causes high blood sugar.
Neoplastic disorders or cancer where the cells of the body grow out of control.
Problems with immunity, such as allergies, which are an overreaction of the immune system.
Mythology
An etiological myth, or origin myth, is a myth intended to explain the origins of cult practices, natural phenomena, proper names and the like. For example, the name Delphi and its associated deity, Apollon Delphinios, are explained in the Homeric Hymn which tells of how Apollo, in the shape of a dolphin, propelled Cretans over the seas to make them his priests. While Delphi is actually related to the Greek word delphys ("womb"), many etiological myths are similarly based on folk etymology (the term "Amazon", for example). In the Aeneid, Virgil claims the descent of Augustus Caesar's Julian clan from the hero Aeneas through his son Ascanius, also called Iulus. The story of Prometheus' sacrifice trick at Mecone in Hesiod's Theogony relates how Prometheus tricked Zeus into choosing the bones and fat of the first sacrificial animal rather than the meat, to justify why, after a sacrifice, the Greeks offered the bones wrapped in fat to the gods while keeping the meat for themselves. In Ovid's Pyramus and Thisbe, the origin of the color of mulberries is explained, as the white berries become stained red from the blood gushing forth from their double suicide.
See also
Backstory
Bradford Hill criteria
Correlation does not imply causation
Creation myth
Just-so story
Just So Stories
Pathology
Pourquoi story
Problem of causation
Involution (esoterism)
References
Histology
Histology, also known as microscopic anatomy or microanatomy, is the branch of biology that studies the microscopic anatomy of biological tissues. Histology is the microscopic counterpart to gross anatomy, which looks at larger structures visible without a microscope. Although one may divide microscopic anatomy into organology, the study of organs, histology, the study of tissues, and cytology, the study of cells, modern usage places all of these topics under the field of histology. In medicine, histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. In the field of paleontology, the term paleohistology refers to the histology of fossil organisms.
Biological tissues
Animal tissue classification
There are four basic types of animal tissues: muscle tissue, nervous tissue, connective tissue, and epithelial tissue. All animal tissues are considered to be subtypes of these four principal tissue types (for example, blood is classified as connective tissue, since the blood cells are suspended in an extracellular matrix, the plasma).
Plant tissue classification
For plants, the study of their tissues falls under the field of plant anatomy, with the following four main types:
Dermal tissue
Vascular tissue
Ground tissue
Meristematic tissue
Medical histology
Histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. It is an important part of anatomical pathology and surgical pathology, as accurate diagnosis of cancer and other diseases often requires histopathological examination of tissue samples. Trained physicians, frequently licensed pathologists, perform histopathological examination and provide diagnostic information based on their observations.
Occupations
The field of histology that includes the preparation of tissues for microscopic examination is known as histotechnology. Job titles for the trained personnel who prepare histological specimens for examination are numerous and include histotechnicians, histotechnologists, histology technicians and technologists, medical laboratory technicians, and biomedical scientists.
Sample preparation
Most histological samples need preparation before microscopic observation; these methods depend on the specimen and method of observation.
Fixation
Chemical fixatives are used to preserve and maintain the structure of tissues and cells; fixation also hardens tissues which aids in cutting the thin sections of tissue needed for observation under the microscope. Fixatives generally preserve tissues (and cells) by irreversibly cross-linking proteins. The most widely used fixative for light microscopy is 10% neutral buffered formalin, or NBF (4% formaldehyde in phosphate buffered saline).
For electron microscopy, the most commonly used fixative is glutaraldehyde, usually as a 2.5% solution in phosphate buffered saline. Other fixatives used for electron microscopy are osmium tetroxide or uranyl acetate.
The main action of these aldehyde fixatives is to cross-link amino groups in proteins through the formation of methylene bridges (-CH2-), in the case of formaldehyde, or by C5H10 cross-links in the case of glutaraldehyde. This process, while preserving the structural integrity of the cells and tissue, can damage the biological functionality of proteins, particularly enzymes.
Formalin fixation leads to degradation of mRNA, miRNA, and DNA as well as denaturation and modification of proteins in tissues. However, extraction and analysis of nucleic acids and proteins from formalin-fixed, paraffin-embedded tissues is possible using appropriate protocols.
Selection and trimming
Selection is the choice of relevant tissue in cases where it is not necessary to put the entire original tissue mass through further processing. The remainder may remain fixed in case it needs to be examined at a later time.
Trimming is the cutting of tissue samples in order to expose the relevant surfaces for later sectioning. It also creates tissue samples of appropriate size to fit into cassettes.
Embedding
Tissues are embedded in a harder medium both as a support and to allow the cutting of thin tissue slices. In general, water must first be removed from tissues (dehydration) and replaced with a medium that either solidifies directly, or with an intermediary fluid (clearing) that is miscible with the embedding media.
Paraffin wax
For light microscopy, paraffin wax is the most frequently used embedding material. Paraffin is immiscible with water, the main constituent of biological tissue, so water must first be removed in a series of dehydration steps. Samples are transferred through a series of progressively more concentrated ethanol baths, up to 100% ethanol, to remove remaining traces of water. Dehydration is followed by a clearing agent (typically xylene, although other environmentally safe substitutes are in use) which removes the alcohol and is miscible with the wax; finally, melted paraffin wax is added to replace the xylene and infiltrate the tissue. In most histology or histopathology laboratories the dehydration, clearing, and wax infiltration are carried out in tissue processors which automate this process. Once infiltrated in paraffin, tissues are oriented in molds which are filled with wax; once positioned, the wax is cooled, solidifying the block and tissue.
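Because the dehydration, clearing, and infiltration steps run as a fixed sequence on a tissue processor, the schedule is easy to picture as an ordered list of stations. The sketch below is only an illustration of that sequence; the reagent order follows the paragraph above, but the durations and station count are assumptions rather than a validated laboratory protocol.

```python
# Illustrative sketch of an overnight paraffin processing run, modeled as the
# ordered list of stations a tissue processor steps through. Reagent order
# follows the text above; durations are assumed for illustration only.

processing_schedule = [
    ("10% neutral buffered formalin", "1 h"),  # fixation top-up
    ("70% ethanol", "1 h"),                    # dehydration, increasing concentration
    ("95% ethanol", "1 h"),
    ("100% ethanol", "1 h"),
    ("100% ethanol", "1 h"),
    ("xylene or substitute", "1 h"),           # clearing: miscible with ethanol and wax
    ("xylene or substitute", "1 h"),
    ("molten paraffin wax (~60 °C)", "2 h"),   # infiltration
    ("molten paraffin wax (~60 °C)", "2 h"),
]

for station, (reagent, duration) in enumerate(processing_schedule, start=1):
    print(f"Station {station:2d}: {reagent:32s} {duration}")
```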
Other materials
Paraffin wax does not always provide a sufficiently hard matrix for cutting very thin sections (which are especially important for electron microscopy). Paraffin wax may also be too soft in relation to the tissue, the heat of the melted wax may alter the tissue in undesirable ways, or the dehydrating or clearing chemicals may harm the tissue. Alternatives to paraffin wax include, epoxy, acrylic, agar, gelatin, celloidin, and other types of waxes.
In electron microscopy epoxy resins are the most commonly employed embedding media, but acrylic resins are also used, particularly where immunohistochemistry is required.
For tissues to be cut in a frozen state, tissues are placed in a water-based embedding medium. Pre-frozen tissues are placed into molds with the liquid embedding material, usually a water-based glycol, OCT, TBS, Cryogen, or resin, which is then frozen to form hardened blocks.
Sectioning
For light microscopy, a knife mounted in a microtome is used to cut tissue sections (typically between 5-15 micrometers thick) which are mounted on a glass microscope slide. For transmission electron microscopy (TEM), a diamond or glass knife mounted in an ultramicrotome is used to cut between 50 and 150 nanometer thick tissue sections.
A limited number of manufacturers produce microtomes, including vibrating microtomes (commonly referred to as vibratomes), primarily for research and clinical use. Leica Biosystems, for example, produces products related to light microscopy for research and clinical laboratories.
Staining
Biological tissue has little inherent contrast in either the light or electron microscope. Staining is employed to give both contrast to the tissue as well as highlighting particular features of interest. When the stain is used to target a specific chemical component of the tissue (and not the general structure), the term histochemistry is used.
Light microscopy
Hematoxylin and eosin (H&E stain) is one of the most commonly used stains in histology to show the general structure of the tissue. Hematoxylin stains cell nuclei blue; eosin, an acidic dye, stains the cytoplasm and other tissues in different shades of pink.
In contrast to H&E, which is used as a general stain, there are many techniques that more selectively stain cells, cellular components, and specific substances. A commonly performed histochemical technique that targets a specific chemical is the Perls' Prussian blue reaction, used to demonstrate iron deposits in diseases like hemochromatosis. The Nissl method for Nissl substance and Golgi's method (and related silver stains), which are useful in identifying neurons, are other examples of more specific stains.
Historadiography
In historadiography, a slide (sometimes stained histochemically) is X-rayed. More commonly, autoradiography is used in visualizing the locations to which a radioactive substance has been transported within the body, such as cells in S phase (undergoing DNA replication) which incorporate tritiated thymidine, or sites to which radiolabeled nucleic acid probes bind in in situ hybridization. For autoradiography on a microscopic level, the slide is typically dipped into liquid nuclear tract emulsion, which dries to form the exposure film. Individual silver grains in the film are visualized with dark field microscopy.
Immunohistochemistry
Recently, antibodies have been used to specifically visualize proteins, carbohydrates, and lipids. This process is called immunohistochemistry, or when the stain is a fluorescent molecule, immunofluorescence. This technique has greatly increased the ability to identify categories of cells under a microscope. Other advanced techniques, such as nonradioactive in situ hybridization, can be combined with immunochemistry to identify specific DNA or RNA molecules with fluorescent probes or tags that can be used for immunofluorescence and enzyme-linked fluorescence amplification (especially alkaline phosphatase and tyramide signal amplification). Fluorescence microscopy and confocal microscopy are used to detect fluorescent signals with good intracellular detail.
Electron microscopy
For electron microscopy heavy metals are typically used to stain tissue sections. Uranyl acetate and lead citrate are commonly used to impart contrast to tissue in the electron microscope.
Specialized techniques
Cryosectioning
Similar to the frozen section procedure employed in medicine, cryosectioning is a method to rapidly freeze, cut, and mount sections of tissue for histology. The tissue is usually sectioned on a cryostat or freezing microtome. The frozen sections are mounted on a glass slide and may be stained to enhance the contrast between different tissues. Unfixed frozen sections can be used for studies requiring enzyme localization in tissues and cells. Tissue fixation is required for certain procedures such as antibody-linked immunofluorescence staining. Frozen sections are often prepared during surgical removal of tumors to allow rapid identification of tumor margins, as in Mohs surgery, or determination of tumor malignancy, when a tumor is discovered incidentally during surgery.
Ultramicrotomy
Ultramicrotomy is a method of preparing extremely thin sections for transmission electron microscope (TEM) analysis. Tissues are commonly embedded in epoxy or other plastic resin. Very thin sections (less than 0.1 micrometer in thickness) are cut using diamond or glass knives on an ultramicrotome.
Artifacts
Artifacts are structures or features in tissue that interfere with normal histological examination. Artifacts interfere with histology by changing the tissue's appearance and hiding structures. Tissue processing artifacts can include pigments formed by fixatives, shrinkage, washing out of cellular components, color changes in different tissue types, and alterations of the structures in the tissue. An example is mercury pigment left behind after using Zenker's fixative to fix a section. Formalin fixation can also leave a brown to black pigment under acidic conditions.
History
In the 17th century the Italian Marcello Malpighi used microscopes to study tiny biological entities; some regard him as the founder of the fields of histology and microscopic pathology. Malpighi analyzed several parts of the organs of bats, frogs and other animals under the microscope. While studying the structure of the lung, Malpighi noticed its membranous alveoli and the hair-like connections between veins and arteries, which he named capillaries. His discovery established how the oxygen breathed in enters the blood stream and serves the body.
In the 19th century histology became an academic discipline in its own right. The French anatomist Xavier Bichat introduced the concept of tissue in anatomy in 1801, and the term "histology", coined to denote the "study of tissues", first appeared in a book by Karl Meyer in 1819. Bichat described twenty-one human tissues, which can be subsumed under the four categories currently accepted by histologists. The usage of illustrations in histology, deemed useless by Bichat, was promoted by Jean Cruveilhier.
In the early 1830s Purkynĕ invented a microtome with high precision.
During the 19th century many fixation techniques were developed by Adolph Hannover (solutions of chromates and chromic acid), Franz Schulze and Max Schultze (osmic acid), Alexander Butlerov (formaldehyde) and Benedikt Stilling (freezing).
Mounting techniques were developed by Rudolf Heidenhain (1824–1898), who introduced gum Arabic; Salomon Stricker (1834–1898), who advocated a mixture of wax and oil; and Andrew Pritchard (1804–1884) who, in 1832, used a gum/isinglass mixture. In the same year, Canada balsam appeared on the scene, and in 1869 Edwin Klebs (1834–1913) reported that he had for some years embedded his specimens in paraffin.
The 1906 Nobel Prize in Physiology or Medicine was awarded to histologists Camillo Golgi and Santiago Ramon y Cajal. They had conflicting interpretations of the neural structure of the brain based on differing interpretations of the same images. Ramón y Cajal won the prize for his correct theory, and Golgi for the silver-staining technique that he invented to make it possible.
Future directions
In vivo histology
Currently there is intense interest in developing techniques for in vivo histology (predominantly using MRI), which would enable doctors to non-invasively gather information about healthy and diseased tissues in living patients, rather than from fixed tissue samples.
See also
National Society for Histotechnology
Slice preparation
Notes
References
External links
Histotechnology
Staining
Histochemistry
Anatomy
Laboratory healthcare occupations
Skin condition
A skin condition, also known as cutaneous condition, is any medical condition that affects the integumentary system—the organ system that encloses the body and includes skin, nails, and related muscle and glands. The major function of this system is as a barrier against the external environment.
Conditions of the human integumentary system constitute a broad spectrum of diseases, also known as dermatoses, as well as many nonpathologic states (like, in certain circumstances, melanonychia and racquet nails). While only a small number of skin diseases account for most visits to the physician, thousands of skin conditions have been described. Classification of these conditions often presents many nosological challenges, since underlying causes and pathogenetics are often not known. Therefore, most current textbooks present a classification based on location (for example, conditions of the mucous membrane), morphology (chronic blistering conditions), cause (skin conditions resulting from physical factors), and so on.
Clinically, the diagnosis of any particular skin condition begins by gathering pertinent information of the presenting skin lesion(s), including: location (e.g. arms, head, legs); symptoms (pruritus, pain); duration (acute or chronic); arrangement (solitary, generalized, annular, linear); morphology (macules, papules, vesicles); and color (red, yellow, etc.). Some diagnoses may also require a skin biopsy which yields histologic information that can be correlated with the clinical presentation and any laboratory data. The introduction of cutaneous ultrasound has allowed the detection of cutaneous tumors, inflammatory processes, and skin diseases.
Layer of skin involved
The skin is made of three distinct layers: the epidermis, dermis, and subcutaneous tissue. The two main types of human skin are glabrous skin, the nonhairy skin on the palms and soles (also referred to as the "palmoplantar" surfaces), and hair-bearing skin. Within the latter type, hairs in structures called pilosebaceous units have a hair follicle, sebaceous gland, and associated arrector pili muscle. In the embryo, the epidermis, hair, and glands are from the ectoderm, which is chemically influenced by the underlying mesoderm that forms the dermis and subcutaneous tissues.
Epidermis
The epidermis is the most superficial layer of skin, a squamous epithelium with several strata: the stratum corneum, stratum lucidum, stratum granulosum, stratum spinosum, and stratum basale. Nourishment is provided to these layers via diffusion from the dermis, since the epidermis is without direct blood supply. The epidermis contains four cell types: keratinocytes, melanocytes, Langerhans cells, and Merkel cells. Of these, keratinocytes are the major component, constituting roughly 95% of the epidermis. This stratified squamous epithelium is maintained by cell division within the stratum basale, in which differentiating cells slowly displace outwards through the stratum spinosum to the stratum corneum, where cells are continually shed from the surface. In normal skin, the rate of production equals the rate of loss; about two weeks are needed for a cell to migrate from the basal cell layer to the top of the granular cell layer, and an additional two weeks to cross the stratum corneum.
Dermis
The dermis is the layer of skin between the epidermis and subcutaneous tissue, and comprises two sections, the papillary dermis and the reticular dermis. The superficial papillary dermis interdigitates with the overlying rete ridges of the epidermis, between which the two layers interact through the basement membrane zone. Structural components of the dermis are collagen, elastic fibers, and ground substance also called extra fibrillar matrix. Within these components are the pilosebaceous units, arrector pili muscles, and the eccrine and apocrine glands. The dermis contains two vascular networks that run parallel to the skin surface—one superficial and one deep plexus—which are connected by vertical communicating vessels. The function of blood vessels within the dermis is fourfold: to supply nutrition, to regulate temperature, to modulate inflammation, and to participate in wound healing.
Subcutaneous tissue
The subcutaneous tissue is a layer of fat between the dermis and underlying fascia. This tissue may be further divided into two components, the actual fatty layer, or panniculus adiposus, and a deeper vestigial layer of muscle, the panniculus carnosus. The main cellular component of this tissue is the adipocyte, or fat cell. The structure of this tissue is composed of septal (i.e. linear strands) and lobular compartments, which differ in microscopic appearance. Functionally, the subcutaneous fat insulates the body, absorbs trauma, and serves as a reserve energy source.
Diseases of the skin
Diseases of the skin include skin infections and skin neoplasms (including skin cancer).
History
In 1572, Geronimo Mercuriali of Forlì, Italy, completed his treatise on diseases of the skin ('On the diseases of the skin'); it is considered the first scientific work dedicated to dermatology.
Diagnoses
The physical examination of the skin and its appendages, as well as the mucous membranes, forms the cornerstone of an accurate diagnosis of cutaneous conditions. Most of these conditions present with cutaneous surface changes termed "lesions," which have more or less distinct characteristics. Often proper examination will lead the physician to obtain appropriate historical information and/or laboratory tests that are able to confirm the diagnosis. Upon examination, the important clinical observations are the (1) morphology, (2) configuration, and (3) distribution of the lesion(s). With regard to morphology, the initial lesion that characterizes a condition is known as the "primary lesion", and identification of such a lesion is the most important aspect of the cutaneous examination. Over time, these primary lesions may continue to develop or be modified by regression or trauma, producing "secondary lesions". However, the lack of standardization of basic dermatologic terminology has been one of the principal barriers to successful communication among physicians in describing cutaneous findings. Nevertheless, there are some commonly accepted terms used to describe the macroscopic morphology, configuration, and distribution of skin lesions, which are listed below; a short illustrative sketch of the size-based distinctions follows the list of primary lesions.
Lesions
Primary lesions
Macule: A macule is a change in surface color, without elevation or depression, so nonpalpable, well or ill-defined, variously sized, but generally considered less than either 5 or 10 mm in diameter at the widest point.
Patch: A patch is a large macule equal to or greater than either 5 or 10 mm across, depending on one's definition of a macule. Patches may have some subtle surface change, such as a fine scale or wrinkling, but although the consistency of the surface is changed, the lesion itself is not palpable.
Papule: A papule is a circumscribed, solid elevation of skin, varying in size from less than either 5 or 10 mm in diameter at the widest point.
Plaque: A plaque has been described as a broad papule, or confluence of papules equal to or greater than 10 mm, or alternatively as an elevated, plateau-like lesion that is greater in its diameter than in its depth.
Nodule: A nodule is morphologically similar to a papule in that it is also a palpable spherical lesion less than 10 mm in diameter. However, it is differentiated by being centered deeper in the dermis or subcutis.
Tumor: Similar to a nodule, but it is larger than 10 mm in diameter.
Vesicle: A vesicle is a small blister, a circumscribed, epidermal elevation generally considered less than either 5 or 10 mm in diameter at the widest point.
Bulla: A bulla is a large blister, a rounded or irregularly shaped blister equal to or greater than either 5 or 10 mm, depending on one's definition of a vesicle.
Pustule: A pustule is a small elevation of the skin containing cloudy or purulent material, usually consisting of necrotic inflammatory cells.
Cyst: A cyst is an epithelial-lined cavity.
Wheal: A wheal is a rounded or flat-topped, pale red papule or plaque that is characteristically evanescent, disappearing within 24 to 48 hours. The temporary raised skin on the site of a properly delivered intradermal (ID) injection is also called a welt, with the ID injection process itself frequently referred to as simply "raising a wheal" in medical texts.
Welts: Welts occur as a result of blunt force being applied to the body with elongated objects without sharp edges.
Telangiectasia: A telangiectasia represents an enlargement of superficial blood vessels to the point of being visible.
Burrow: A burrow appears as a slightly elevated, grayish, tortuous line in the skin, and is caused by burrowing organisms.
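The size, palpability, and depth criteria above can be summarized programmatically. The following is a minimal, illustrative sketch (not a clinical tool) of how those thresholds separate the flat, raised, and fluid-filled primary lesions; the 10 mm cutoff is one of the two commonly cited thresholds (5 mm is the other), and the function and parameter names are hypothetical.

```python
# Minimal sketch: map simplified findings to the primary-lesion terms defined
# above. Uses the 10 mm cutoff; some definitions use 5 mm instead.

def classify_primary_lesion(diameter_mm: float,
                            palpable: bool,
                            fluid_filled: bool = False,
                            deep: bool = False) -> str:
    """Return a primary-lesion term for a simplified set of findings."""
    cutoff_mm = 10.0  # alternative definitions use 5 mm

    if fluid_filled:
        # Blisters: vesicle below the cutoff, bulla at or above it.
        return "vesicle" if diameter_mm < cutoff_mm else "bulla"

    if not palpable:
        # Flat color change: macule below the cutoff, patch at or above it.
        return "macule" if diameter_mm < cutoff_mm else "patch"

    # Palpable, solid lesions.
    if diameter_mm < cutoff_mm:
        # A nodule is papule-sized but centered deeper in the dermis or subcutis.
        return "nodule" if deep else "papule"
    return "tumor" if deep else "plaque"


if __name__ == "__main__":
    print(classify_primary_lesion(4, palpable=False))                     # macule
    print(classify_primary_lesion(12, palpable=True))                     # plaque
    print(classify_primary_lesion(3, palpable=True, deep=True))           # nodule
    print(classify_primary_lesion(2, palpable=False, fluid_filled=True))  # vesicle
```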
Secondary lesions
Scale: Dry or greasy laminated masses of keratin, they represent thickened stratum corneum.
Crust: Dried sebum usually mixed with epithelial and sometimes bacterial debris
Lichenification: Epidermal thickening characterized by visible and palpable thickening of the skin with accentuated skin markings
Erosion: An erosion is a discontinuity of the skin exhibiting incomplete loss of the epidermis, a lesion that is moist, circumscribed, and usually depressed.
Excoriation: A punctate or linear abrasion produced by mechanical means (often scratching), usually involving only the epidermis, but commonly reaching the papillary dermis.
Ulcer: An ulcer is a discontinuity of the skin exhibiting complete loss of the epidermis and often portions of the dermis.
Fissure: A fissure is a lesion in the skin that is usually narrow but deep.
Induration: Induration is dermal thickening causing the cutaneous surface to feel thicker and firmer.
Atrophy: Atrophy refers to a loss of skin, and can be epidermal, dermal, or subcutaneous. With epidermal atrophy, the skin appears thin, translucent, and wrinkled. Dermal or subcutaneous atrophy is represented by depression of the skin.
Maceration: Softening and turning white of the skin due to being consistently wet.
Umbilication: Umbilication is the formation of a depression at the top of a papule, vesicle, or pustule.
Phyma: A tubercle on any external part of the body, such as in phymatous rosacea
Configuration
"Configuration" refers to how lesions are locally grouped ("organized"), which contrasts with how they are distributed (see next section).
Agminate: in clusters
Annular or circinate: ring-shaped
Arciform or arcuate: arc-shaped
Digitate: with finger-like projections
Discoid or nummular: round or disc-shaped
Figurate: with a particular shape
Guttate: resembling drops
Gyrate: coiled or spiral-shaped
Herpetiform: resembling herpes
Linear
Mammillated: with rounded, breast-like projections
Reticular or reticulated: resembling a net
Serpiginous: with a wavy border
Stellate: star-shaped
Targetoid: resembling a bullseye
Verrucous or Verruciform: wart-like
Distribution
"Distribution" refers to how lesions are localized. They may be confined to a single area (a patch) or may be in several places. Some distributions correlate with the means by which a given area becomes affected. For example, contact dermatitis correlates with locations where allergen has elicited an allergic immune response. Varicella zoster virus is known to recur (after its initial presentation as chicken pox) as herpes zoster ("shingles"). Chicken pox appears nearly everywhere on the body, but herpes zoster tends to follow one or two dermatomes; for example, the eruptions may appear along the bra line, on either or both sides of the patient.
Generalized
Symmetric: one side mirrors the other
Flexural: on the front of the fingers
Extensor: on the back of the fingers
Intertriginous: in an area where two skin areas may touch or rub together
Morbilliform: resembling measles
Palmoplantar: on the palm of the hand or bottom of the foot
Periorificial: around an orifice such as the mouth
Periungual/subungual: around or under a fingernail or toenail
Blaschkoid: following the path of Blaschko's lines in the skin
Photodistributed: in places where sunlight reaches
Zosteriform or dermatomal: associated with a particular nerve
Other related terms
Collarette
Comedo
Confluent
Eczema (a type of dermatitis)
Evanescent (lasting less than 24 hours)
Granuloma
Livedo
Purpura
Erythema (redness)
Horn (a keratinous projection)
Poikiloderma
Histopathology
Hyperkeratosis
Parakeratosis
Hypergranulosis
Acanthosis
Papillomatosis
Dyskeratosis
Acantholysis
Spongiosis
Hydropic swelling
Exocytosis
Vacuolization
Erosion
Ulceration
Lentiginous
See also
Wound, an injury which damages the epidermis.
References
External links
Sick building syndrome
Sick building syndrome (SBS) is a condition in which people develop symptoms of illness or become infected with chronic disease from the building in which they work or reside. In scientific literature, SBS is also known as building-related illness (BRI), building-related symptoms (BRS), or idiopathic environmental intolerance (IEI).
The main identifying observation is an increased incidence of complaints of such symptoms as headache; eye, nose, and throat irritation; fatigue; dizziness; and nausea. The 1989 Oxford English Dictionary defines SBS in that way. The World Health Organization created a 484-page tome on indoor air quality in 1984, when SBS was attributed only to non-organic causes, and suggested that the book might form a basis for legislation or litigation.
The outbreaks may or may not be a direct result of inadequate or inappropriate cleaning. SBS has also been used to describe staff concerns in post-war buildings with faulty building aerodynamics, construction materials, construction process, and maintenance. Some symptoms tend to increase in severity with the time people spend in the building, often improving or even disappearing when people are away from the building. The term SBS is also used interchangeably with "building-related symptoms", which orients the name of the condition around patients' symptoms rather than a "sick" building.
Attempts have been made to connect sick building syndrome to various causes, such as contaminants produced by outgassing of some building materials, volatile organic compounds (VOC), improper exhaust ventilation of ozone (produced by the operation of some office machines), light industrial chemicals used within, and insufficient fresh-air intake or air filtration (see "Minimum efficiency reporting value"). Sick building syndrome has also been attributed to heating, ventilation, and air conditioning (HVAC) systems, an attribution about which there are inconsistent findings.
Signs and symptoms
Human exposure to aerosols has a variety of adverse health effects. Building occupants complain of symptoms such as sensory irritation of the eyes, nose, or throat; neurotoxic or general health problems; skin irritation; nonspecific hypersensitivity reactions; infectious diseases; and odor and taste sensations. Poor lighting has caused general malaise.
Extrinsic allergic alveolitis has been associated with the presence of fungi and bacteria in the moist air of residential houses and commercial offices. A study in 2017 correlated several inflammatory diseases of the respiratory tract with objective evidence of damp-caused damage in homes.
The WHO has classified the reported symptoms into broad categories, including mucous-membrane irritation (eye, nose, and throat irritation), neurotoxic effects (headaches, fatigue, and irritability), asthma and asthma-like symptoms (chest tightness and wheezing), skin dryness and irritation, and gastrointestinal complaints.
Several sick occupants may report individual symptoms that do not seem connected. The key to discovery is the increased incidence of illnesses in general with onset or exacerbation in a short period, usually weeks. In most cases, SBS symptoms are relieved soon after the occupants leave the particular room or zone. However, there can be lingering effects of various neurotoxins, which may not clear up when the occupant leaves the building. In some cases, including those of sensitive people, there are long-term health effects.
Cause
ASHRAE has recognized that polluted urban air, designated within the United States Environmental Protection Agency (EPA)'s air quality ratings as unacceptable, requires the installation of air treatment such as filtration, for which HVAC practitioners generally apply carbon-impregnated filters and similar media. Different toxins will aggravate the human body in different ways. Some people are more allergic to mold, while others are highly sensitive to dust. Inadequate ventilation will exaggerate small problems (such as deteriorating fiberglass insulation or cooking fumes) into a much more serious indoor air quality problem.
Common products such as paint, insulation, rigid foam, particle board, plywood, duct liners, exhaust fumes and other chemical contaminants from indoor or outdoor sources, and biological contaminants can be trapped inside by the HVAC system. As this air is recycled using fan coils, the overall oxygenation ratio drops and becomes harmful. When combined with other stress factors such as traffic noise and poor lighting, inhabitants of buildings located in a polluted urban area can quickly become ill as their immune system is overwhelmed.
Certain VOCs, considered toxic chemical contaminants to humans, are used as adhesives in many common building construction products. These aromatic carbon rings / VOCs can cause acute and chronic health effects in the occupants of a building, including cancer, paralysis, lung failure, and others. Bacterial spores, fungal spores, mold spores, pollen, and viruses are types of biological contaminants and can all cause allergic reactions or illness described as SBS. In addition, pollution from outdoors, such as motor vehicle exhaust, can enter buildings, worsen indoor air quality, and increase the indoor concentration of carbon monoxide and carbon dioxide. Adult SBS symptoms were associated with a history of allergic rhinitis, eczema and asthma.
A 2015 study concerning the association of SBS and indoor air pollutants in office buildings in Iran found that, as carbon dioxide increased in a building, nausea, headaches, nasal irritation, dyspnea, and throat dryness also rose. Some work conditions have been correlated with specific symptoms: brighter light, for example was significantly related to skin dryness, eye pain, and malaise. Higher temperature is correlated with sneezing, skin redness, itchy eyes, and headache; lower relative humidity has been associated with sneezing, skin redness, and eye pain.
In 1973, in response to the oil crisis and conservation concerns, ASHRAE Standards 62-73 and 62-81 reduced the required ventilation rate per person, but this was found to be a contributing factor to sick building syndrome. As of the 2016 revision, ASHRAE ventilation standards call for 5 to 10 cubic feet per minute of ventilation per occupant (depending on the occupancy type) in addition to ventilation based on the zone floor area delivered to the breathing zone.
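The per-occupant plus per-floor-area requirement described above is commonly expressed as a breathing-zone outdoor airflow of the form Vbz = Rp·Pz + Ra·Az. The sketch below illustrates that calculation; the office-space rates used (5 cfm per person and 0.06 cfm per square foot) are assumed, illustrative values, and the current standard should be consulted for any given occupancy type.

```python
# Hedged sketch of the per-person plus per-area ventilation requirement
# described above (breathing-zone outdoor airflow):
#     V_bz = R_p * P_z + R_a * A_z
# The default rates below are illustrative office-space values, not a
# substitute for the tables in the current standard.

def breathing_zone_outdoor_airflow(people: int,
                                   floor_area_ft2: float,
                                   rp_cfm_per_person: float = 5.0,
                                   ra_cfm_per_ft2: float = 0.06) -> float:
    """Return the required outdoor airflow to the breathing zone, in cfm."""
    return rp_cfm_per_person * people + ra_cfm_per_ft2 * floor_area_ft2


if __name__ == "__main__":
    # Example: 20 occupants in a 2,000 square-foot open office.
    v_bz = breathing_zone_outdoor_airflow(people=20, floor_area_ft2=2000)
    print(f"Required breathing-zone outdoor air: {v_bz:.0f} cfm "
          f"({v_bz / 20:.1f} cfm per occupant)")
```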
Workplace
Excessive work stress or dissatisfaction, poor interpersonal relationships, and poor communication are often seen to be associated with SBS; recent studies show that a combination of environmental sensitivity and stress can greatly contribute to sick building syndrome.
Greater effects were found with features of the psycho-social work environment including high job demands and low support. The report concluded that the physical environment of office buildings appears to be less important than features of the psycho-social work environment in explaining differences in the prevalence of symptoms. However, there is still a relationship between sick building syndrome and symptoms of workers regardless of workplace stress.
Specific work-related stressors are related with specific SBS symptoms. Workload and work conflict are significantly associated with general symptoms (headache, abnormal tiredness, sensation of cold, or nausea), while crowded workspaces and low work satisfaction are associated with upper respiratory symptoms. Work productivity has been associated with ventilation rates, a contributing factor to SBS, and there is a significant increase in production as ventilation rates increase, by 1.7% for every two-fold increase of ventilation rate. Printer effluent, released into the office air as ultra-fine particles (UFPs) as toner is burned during the printing process, may lead to certain SBS symptoms. Printer effluent may contain a variety of toxins to which a subset of office workers are sensitive, triggering SBS symptoms.
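One simple reading of the productivity figure quoted above (roughly 1.7% per two-fold increase in ventilation rate) treats the gain as proportional to the base-2 logarithm of the ratio of ventilation rates. The snippet below only illustrates that reading; it is not a model endorsed by the cited studies, and the example rates are hypothetical.

```python
import math

# Illustrative reading of "1.7% per two-fold increase of ventilation rate":
# gain (%) = 1.7 * log2(new_rate / old_rate). Not a validated model.
def productivity_gain_percent(old_rate_cfm: float, new_rate_cfm: float,
                              gain_per_doubling: float = 1.7) -> float:
    return gain_per_doubling * math.log2(new_rate_cfm / old_rate_cfm)

if __name__ == "__main__":
    # Doubling ventilation from 10 to 20 cfm per person -> about 1.7%.
    print(f"{productivity_gain_percent(10, 20):.2f}% estimated gain")
```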
Specific careers are also associated with specific SBS symptoms. Transport, communication, healthcare, and social workers have highest prevalence of general symptoms. Skin symptoms such as eczema, itching, and rashes on hands and face are associated with technical work. Forestry, agriculture, and sales workers have the lowest rates of sick building syndrome symptoms.
An assessment by Fisk and Mudarri attributed 21% of asthma cases in the United States to damp indoor environments with mold, which can occur in all types of indoor spaces, such as schools, office buildings, houses, and apartments. Fisk and Berkeley Laboratory colleagues also found that exposure to mold increases the chance of respiratory issues by 30 to 50 percent. Additionally, studies of health effects associated with dampness and mold in indoor environments found an increased risk of adverse health effects in damp or visibly moldy environments.
Milton et al. determined the cost of sick leave specific for one business was an estimated $480 per employee, and about five days of sick leave per year could be attributed to low ventilation rates. When comparing low ventilation rate areas of the building to higher ventilation rate areas, the relative risk of short-term sick leave was 1.53 times greater in the low ventilation areas.
Home
Sick building syndrome can be caused by one's home. Laminate flooring may release more SBS-causing chemicals than do stone, tile, and concrete floors. Recent redecorating and new furnishings within the last year are associated with increased symptoms; so are dampness and related factors, having pets, and cockroaches. Mosquitoes are related to more symptoms, but it is unclear whether the immediate cause of the symptoms is the mosquitoes or the repellents used against them.
Mold
Sick building syndrome may be associated with indoor mold or mycotoxin contamination. However, the attribution of sick building syndrome to mold is controversial and supported by little evidence.
Indoor temperature
Indoor temperature under 18 °C (64 °F) has been shown to be associated with increased respiratory and cardiovascular diseases, increased blood pressure, and increased hospitalization.
Diagnosis
While sick building syndrome (SBS) encompasses a multitude of non-specific symptoms, building-related illness (BRI) comprises specific, diagnosable symptoms caused by certain agents (chemicals, bacteria, fungi, etc.). These can typically be identified, measured, and quantified. There are usually four causal agents in BRI: immunologic, infectious, toxic, and irritant. For instance, Legionnaire's disease, usually caused by Legionella pneumophila, involves a specific organism which could be ascertained through clinical findings as the source of contamination within a building.
Prevention
Reduction of time spent in the building
If living in the building, moving to a new place
Repairing any deteriorated paint or concrete
Regular inspections to check for the presence of mold or other toxins
Adequate maintenance of all building mechanical systems
Toxin-absorbing plants, such as sansevieria
Roof shingle non-pressure cleaning for removal of algae, mold, and Gloeocapsa magma
Using ozone to eliminate the many sources, such as VOCs, molds, mildews, bacteria, viruses, and even odors. However, numerous studies identify high-ozone shock treatment as ineffective despite commercial popularity and popular belief.
Replacement of water-stained ceiling tiles and carpeting
Only using paints, adhesives, solvents, and pesticides in well-ventilated areas or only using these pollutant sources during periods of non-occupancy
Increasing the number of air exchanges; the American Society of Heating, Refrigerating and Air-Conditioning Engineers recommends a minimum of 8.4 air exchanges per 24-hour period
Increased ventilation rates that are above the minimum guidelines
Proper and frequent maintenance of HVAC systems
UV-C light in the HVAC plenum
Installation of HVAC air cleaning systems or devices to remove VOCs and bioeffluents (people odors)
Central vacuums that completely remove all particles from the house including the ultrafine particles (UFPs) which are less than 0.1 μm
Regular vacuuming with a HEPA filter vacuum cleaner to collect and retain 99.97% of particles down to and including 0.3 micrometers
Placing bedding in sunshine, which is related to a study done in a high-humidity area where damp bedding was common and associated with SBS
Lighting in the workplace should be designed to give individuals control, and be natural when possible
Relocating office printers outside the air conditioning boundary, perhaps to another building
Replacing current office printers with lower emission rate printers
Identification and removal of products containing harmful ingredients
Management
SBS, as a non-specific blanket term, does not have any specific cause or cure. Any known cure would be associated with the specific eventual disease that was caused by exposure to known contaminants. In all cases, alleviation consists of removing the affected person from the associated building. BRI, on the other hand, utilizes treatment appropriate for the contaminant identified within the building (e.g., antibiotics for Legionnaire's disease).
Improving the indoor air quality (IAQ) of a particular building can attenuate, or even eliminate, the continued exposure to toxins. However, a Cochrane review of 12 mold and dampness remediation studies in private homes, workplaces, and schools, conducted by two independent authors, rated the evidence for reducing adult asthma symptoms as very low to moderate quality, and results were inconsistent among children. For the individual, the recovery may be a process involved with targeting the acute symptoms of a specific illness, as in the case of mold toxins. Treating various building-related illnesses is vital to the overall understanding of SBS. Careful analysis by certified building professionals and physicians can help to identify the exact cause of the BRI, and help to illustrate a causal path to infection. With this knowledge one can, theoretically, remediate a building of contaminants and rebuild the structure with new materials. Office BRI may more likely than not be explained by three events: "Wide range in the threshold of response in any population (susceptibility), a spectrum of response to any given agent, or variability in exposure within large office buildings."
Isolating any one of the three aspects of office BRI can be a great challenge, which is why those who find themselves with BRI should take three steps: history, examinations, and interventions. History describes the action of continually monitoring and recording the health of workers experiencing BRI, as well as obtaining records of previous building alterations or related activity. Examinations go hand in hand with monitoring employee health. This step is done by physically examining the entire workspace and evaluating possible threats to health status among employees. Interventions follow accordingly based on the results of the examination and history report.
Epidemiology
Some studies have found that women have higher reports of SBS symptoms than men. It is not entirely clear, however, if this is due to biological, social, or occupational factors.
A 2001 study published in the journal Indoor Air gathered 1464 office-working participants to increase the scientific understanding of gender differences under the sick building syndrome phenomenon. Using questionnaires, ergonomic investigations, building evaluations, as well as physical, biological, and chemical variables, the investigators obtained results that compare with past studies of SBS and gender. The study team found that across most test variables, prevalence rates were different in most areas, but there was also a deep stratification of working conditions between genders as well. For example, men's workplaces tend to be significantly larger and have all-around better job characteristics. Secondly, there was a noticeable difference in reporting rates; specifically, women's reporting rates were roughly 20% higher than men's. This information was similar to that found in previous studies, thus indicating a potential difference in willingness to report.
There might be a gender difference in reporting rates of sick building syndrome, because women tend to report more symptoms than men do. Along with this, some studies have found that women have a more responsive immune system and are more prone to mucosal dryness and facial erythema. Also, women are alleged by some to be more exposed to indoor environmental factors because they have a greater tendency to have clerical jobs, wherein they are exposed to unique office equipment and materials (example: blueprint machines, toner-based printers), whereas men often have jobs based outside of offices.
History
In the late 1970s, it was noted that nonspecific symptoms were reported by tenants in newly constructed homes, offices, and nurseries. In media it was called "office illness". The term "sick building syndrome" was coined by the WHO in 1986, when they also estimated that 10–30% of newly built office buildings in the West had indoor air problems. Early Danish and British studies reported symptoms.
Poor indoor environments attracted attention. The Swedish allergy study (SOU 1989:76) designated "sick building" as a cause of the allergy epidemic that was feared at the time. In the 1990s, therefore, extensive research into "sick building" was carried out. Various physical and chemical factors in the buildings were examined on a broad front.
The problem was highlighted increasingly in media and was described as a "ticking time bomb". Many studies were performed in individual buildings.
In the 1990s "sick buildings" were contrasted against "healthy buildings". The chemical contents of building materials were highlighted. Many building material manufacturers were actively working to gain control of the chemical content and to replace criticized additives. The ventilation industry advocated above all more well-functioning ventilation. Others perceived ecological construction, natural materials, and simple techniques as a solution.
At the end of the 1990s came an increased distrust of the concept of "sick building". A dissertation at the Karolinska Institute in Stockholm 1999 questioned the methodology of previous research, and a Danish study from 2005 showed these flaws experimentally. It was suggested that sick building syndrome was not really a coherent syndrome and was not a disease to be individually diagnosed, but a collection of as many as a dozen semi-related diseases. In 2006 the Swedish National Board of Health and Welfare recommended in the medical journal Läkartidningen that "sick building syndrome" should not be used as a clinical diagnosis. Thereafter, it has become increasingly less common to use terms such as sick buildings and sick building syndrome in research. However, the concept remains alive in popular culture and is used to designate the set of symptoms related to poor home or work environment engineering. Sick building is therefore an expression used especially in the context of workplace health.
Sick building syndrome made a rapid journey from media to courtroom where professional engineers and architects became named defendants and were represented by their respective professional practice insurers. Proceedings invariably relied on expert witnesses, medical and technical experts along with building managers, contractors and manufacturers of finishes and furnishings, testifying as to cause and effect. Most of these actions resulted in sealed settlement agreements, none of these being dramatic. The insurers needed a defense based upon Standards of Professional Practice to meet a court decision that declared that in a modern, essentially sealed building, the HVAC systems must produce breathing air for suitable human consumption. ASHRAE (American Society of Heating, Refrigeration and Air Conditioning Engineers, currently with over 50,000 international members) undertook the task of codifying its indoor air quality (IAQ) standard.
ASHRAE empirical research determined that "acceptability" was a function of outdoor (fresh air) ventilation rate and used carbon dioxide as an accurate measurement of occupant presence and activity. Building odors and contaminants would be suitably controlled by this dilution methodology. ASHRAE codified a level of 1,000 ppm of carbon dioxide and specified the use of widely available sense-and-control equipment to assure compliance. The 1989 issue of ASHRAE 62.1-1989 published the whys and wherefores and overrode the 1981 requirements that were aimed at a ventilation level of 5,000 ppm of carbon dioxide (the OSHA workplace limit), federally set to minimize HVAC system energy consumption. This apparently ended the SBS epidemic.
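The dilution methodology described above can be illustrated with a steady-state mass balance: indoor CO2 settles at the outdoor concentration plus the per-occupant generation rate divided by the per-occupant outdoor airflow. The sketch below only illustrates that relationship; the generation rate (about 0.0105 cfm of CO2 per sedentary adult) and the outdoor concentration (about 400 ppm) are assumed, illustrative values.

```python
# Hedged illustration of CO2 dilution at steady state:
#     C_indoor = C_outdoor + G / Q
# where G is CO2 generation per occupant and Q is outdoor airflow per occupant.
# The default generation rate and outdoor concentration are assumed values.

def outdoor_air_for_co2_target(target_ppm: float,
                               outdoor_ppm: float = 400.0,
                               gen_cfm_per_person: float = 0.0105) -> float:
    """Outdoor airflow per person (cfm) needed to hold indoor CO2 at target_ppm."""
    delta_fraction = (target_ppm - outdoor_ppm) / 1_000_000  # ppm -> volume fraction
    return gen_cfm_per_person / delta_fraction


if __name__ == "__main__":
    # Holding indoor CO2 near the codified 1,000 ppm level.
    print(f"{outdoor_air_for_co2_target(1000):.1f} cfm of outdoor air per person")
```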
Over time, building materials changed with respect to emissions potential. Smoking vanished and dramatic improvements in ambient air quality, coupled with code compliant ventilation and maintenance, per ASHRAE standards have all contributed to the acceptability of the indoor air environment.
See also
Aerotoxic syndrome
Air purifier
Asthmagen
Cleanroom
Electromagnetic hypersensitivity
Havana syndrome
Healthy building
Indoor air quality
Lead paint
Multiple chemical sensitivity
NASA Clean Air Study
Nosocomial infection
Particulates
Power tools
Renovation
Somatization disorder
Fan death
References
Further reading
Martín-Gil J., Yanguas M. C., San José J. F., Rey-Martínez and Martín-Gil F. J. "Outcomes of research into a sick hospital". Hospital Management International, 1997, pp. 80–82. Sterling Publications Limited.
Åke Thörn, The Emergence and preservation of sick building syndrome, KI 1999.
Charlotte Brauer, The sick building syndrome revisited, Copenhagen 2005.
Michelle Murphy, Sick Building Syndrome and the Problem of Uncertainty, 2006.
Johan Carlson, "Gemensam förklaringsmodell för sjukdomar kopplade till inomhusmiljön finns inte" [Unified explanation for diseases related to indoor environment not found]. Läkartidningen 2006/12.
Bulletin of the Transilvania University of Braşov, Series I: Engineering Sciences • Vol. 5 (54) No. 1 2012 "Impact of Indoor Environment Quality on Sick Building Syndrome in Indian Leed Certified Buildings". by Jagannathan Mohan
External links
Best Practices for Indoor Air Quality when Remodeling Your Home, US EPA
Renovation and Repair, Part of Indoor Air Quality Design Tools for Schools, US EPA
Addressing Indoor Environmental Concerns During Remodeling, US EPA
Dust FAQs, UK HSE
CCOHS: Welding - Fumes And Gases | Health Effect of Welding Fumes
Syndromes of unknown causes
Building biology
Environmental toxicology
Indoor air pollution
Building defects
Syndromes
Kambo (drug)
Kambo, also known as sapo or vacina-do-sapo, is a substance derived from the natural skin secretions of amphibians of the genus Phyllomedusa. Most commonly, the dried skin secretions of the giant leaf frog, known as the kambô in Portuguese, are used for ritualistic purposes with strong religious and spiritual components. Less commonly it is used as a transdermal medicine; however, evidence for its effectiveness is limited.
Kambo is usually used in a group setting, called a kambo circle or kambo ceremony. The effects on humans usually include tachycardia, nausea, vomiting, and diarrhea. A meta-review of 50 studies in which 11 cases of acute intoxication were examined found that extreme cases have included psychosis (occasionally severe), SIADH, kidney damage (including acute renal failure), pancreas damage, liver damage including toxic hepatitis, dermatomyositis, esophageal rupture, and seizures, in some cases leading to death, although such incidents are limited in number and some evidence suggests precipitation by medical contraindications.
Kambo, which originated as a folk medicine practice among some indigenous peoples in the Amazon basin, is also administered as a complementary medicine and alternative medicine treatment in the West, often as a pseudoscientific cleanse or detox. The ceremony involves burning an arm or leg and applying the kambo secretion directly to the burn. Promoters claim that kambo helps with several illnesses or injuries. There is no scientific evidence that it is an effective treatment and causal evidence is limited.
It seems to be particularly dangerous to take kambo with large quantities of water. Doing that is associated with SIADH and severe electrolyte imbalances: changes in plasma and urine osmolarity, hypokalemia, hypomagnesemia and hypophosphatemia. Naloxone is under study as a possible antidote; hospital treatment also includes medicines to protect organs from damage and restore electrolyte function.
Terminology
Kampo pae, a name used by the Noke Kuin (formerly Katukina)
Dow kiet, a word used by the Matses
Sapo, kampô, kampu, vacina de sapo, or vacina da floresta, in Brazilian Portuguese
"Kambô" is a common name of Phyllomedusa bicolor, an Amazonian tree frog, also known as the blue-and-yellow frog, bicolored tree-frog, giant monkey frog, giant leaf frog, or waxy-monkey tree frog. "Sapo" means "toad" in Spanish and Portuguese. The frog is an anuran amphibian that inhabits the Amazon and Orinoco basins in South America.
History
Natives who practice kambo are Panoan-speaking indigenous groups in the southeast Amazon rainforest, such as the Matsés, Marubo, Amahuaca, Kashinawa, Katukina, Yawanawá, and Kaxinawá. There are ethnographic studies on the use of kambo in traditional Noke Kuin medicine in the region of the state of Acre, in the Brazilian Amazon.
Since the mid-20th century, kambo has also been practiced in urban regions of Brazil. In 2004, Brazil banned the sale and marketing of kambo. Import is illegal in Chile. Outside of South America, it first became known as an alternative therapy in the late 2010s.
In 2021, the Therapeutic Goods Administration (TGA) of Australia banned the use of kambo in Australia and classified it as a schedule-10 poison. It is listed in the category for "substances of such danger to health as to warrant prohibition of sale, supply and use".
Indigenous use
To collect the secretions from the frog's body, first, the frog has to be caught. A practitioner will tie the frog to four sticks placed in the ground with its limbs stretched. This causes the frog to become stressed enough to activate its defense mechanism and secrete a substance containing peptides from its skin. After these secretions are obtained, the frog is released back into the wild. The secretions are then left to dry. Small burns are created on the skin, and a small dose of the frog secretions is applied to the open wounds. In native practice, the secretions are removed from the wounds after 15 to 20 minutes, ending the acute symptoms.
Traditional practitioners claim that it aids fertility, cleanses the body and soul, increases strength, and brings good luck to hunts, though there is no scientific evidence for these claims. It is used by natives who attempt to expel "panema" (bad spirit) and to induce abortions. The secretions are also commonly used in people who suffer from laziness, a condition perceived as unfavorable by the Noke Kuin as the person stops participating socially.
Joaquim Luz, a Yamanawa leader, criticized internet sales and kambo's use without the preparation or permission of indigenous peoples, saying that such users are at risk, even of death. Other native groups have also expressed concerns.
Non-indigenous use
Outside South America, a kambo ceremony can involve just two people: the practitioner and the participant, or many participants at once, which is known as a kambo circle. Participants are encouraged to bring plenty of water, a towel, and a bucket. There are usually yoga mats on the floor and the ceremony room, which is often the practitioner's living room, is heavily incensed.
During the ceremony, the participant's skin is deliberately burnt multiple times, usually on the upper arm or leg, by the practitioner using a smoldering stick or vine. The practitioner uses saliva or water to reconstitute the secretions and place it on top of the burnt skin. Participants may be encouraged to shout "Viva" whenever one of them vomits into their bucket. Short-term effects include violent nausea, vomiting, diarrhea, edema (swelling) of the face, headaches, and tachycardia. The secretions seem to be vasoactive (affecting the circulation), explaining why they are absorbed rapidly.
Intoxication may occur immediately or within hours.
Medical claims
Non-indigenous users and practitioners of kambo claim that the alternative medicine helps with a wide variety of issues and conditions. These claims include treating addiction, depression, and chronic pain, reducing fevers, increasing fertility, boosting energy and physical strength, and improving mental clarity. It is also claimed that kambo removes negative energy. There is currently no scientific evidence to support positive health effect claims.
There is no solid medical evidence on how the frog toxins work, whether they are useful for treating anything, and whether they can be used safely: no clinical trials have tested them on humans. Reports of adverse effects are numerous, including for use with experienced guidance.
Kym Jenkins of the Royal Australian and New Zealand College of Psychiatrists, in a Sydney Morning Herald article, said "people with mental illness are a more vulnerable group anyway for a variety of reasons. If you're feeling very anxious or very depressed, you're automatically more vulnerable, and you could be more susceptible to people advertising or marketing a quick fix. I do have concerns that people can be preyed upon when they are more vulnerable."
The Australian Medical Association (AMA) supports the TGA's ban on the sale, supply, and use of kambo, saying it considers kambo to be a "significant health risk".
Marketing
In non-indigenous use, the frog secretion is described and marketed as a "detox" treatment, cleanse, purge, and as a "vaccine" that is "good for everything". Kambo has been marketed both as a "scientific" remedy, emphasizing the biochemistry, and as a "spiritual" remedy, emphasizing its indigenous origins. Purging (deliberate vomiting) has been a popular treatment since the 1800s. "Detox" has been described by Edzard Ernst, emeritus professor of complementary medicine, as a term for conventional medical treatments for addiction, which has been "hijacked by entrepreneurs, quacks, and charlatans to sell a bogus treatment."
In Brazil, given the growth in the consumption of kambo in urban centers, there has been criticism by indigenous people, academics and communicators regarding the cultural appropriation of indigenous knowledge, the process of extracting the secretion of the Phyllomedusa bicolor frog, the form of transmission of wisdom, and the price charged by the ritual and the mystification of the origin of the frog.
There is also concern about pharmacological patents on the peptides identified in kambo (see biopiracy), the commercialization of the kambo outside its place of origin, and the unknown impact on frog populations, since many more are now removed from their natural habitats.
In light of the chemical complexity of the frog toxins, and their complex and potentially fatal effects, the authors of a 2022 review on the diagnosis and treatment of kambo cases said they urged "strict surveillance of the websites that encourage the use of this substance and [we] urge greater control of e-commerce or illicit trafficking of animals and secretions, including through the dark web".
Environmental impact
The increased use of kambo rituals, and trafficking of the frogs and their secretions, may have an effect on the population of Phyllomedusa bicolor in its natural habitats: the forests of Bolivia, Peru, Brazil, the Guianas, Colombia, and Venezuela. Phyllomedusa bicolor is not considered an endangered species by the IUCN. Besides Phyllomedusa species, other threatened endemic frog species of South America's neotropical regions have been poached and smuggled on the black market.
Parasitology
Smuggling amphibians such as Phyllomedusa bicolor can spread parasites. Zoos keep frogs for conservation purposes, and there are many parasites present in these animals that naturally occur only in the native habitats. It is recommended for imported amphibians to go through a quarantine process to verify they are not spreading parasites that could damage other ecosystems. The parasite infection rate in frogs is 51%, while in salamanders it is 13%. Individuals who want to keep them as pets are obligated and encouraged to get them examined to detect gastrointestinal parasites that could potentially be harmful. Neocosmocercella fisherae is the first nematode species found parasitising Phyllomedusa bicolor from the Brazilian Amazon region.
Notable deaths
A 40-year-old businessman was charged in Brazil in 2008 with the illegal exercise of medicine and felony murder after administering kambo toxins to a business colleague who died; the deceased's son, who said his father had pressured him into participating, suffered more minor effects. In Chile, in 2009, Daniel Lara Aguilar, who suffered from chronic lumbar disc disease, died immediately after taking kambo administered by a local shaman in a mass healing ceremony; the autopsy was inconclusive due to pre-existing conditions. Medical literature reported a 2018 case in Italy of a man with obesity and ventricular hypertrophy, who, according to autopsy reports, died of cardiac arrhythmia while under the effects of kambo use. In March 2019, kambo practitioner Natasha Lechner suffered a cardiac arrest and died while receiving kambo. In April 2019, a homicide investigation was opened into the death by "severe cerebral edema" of a young person who had taken kambo toxins in Chile; the import of the frog and its secretions is illegal in Chile. In October 2021, Australian man Jarred Antonovich died at a festival in New South Wales from a perforated esophagus suspected to be caused by excessive vomiting after being administered kambo and N,N-dimethyltriptamine. After a car accident in 1997 from which he had to learn to walk and talk again, he was left with lasting impediments, the inquest heard, which may have contributed to the esophageal rupture.
Pharmacology
The frog secretes a range of small chemical compounds of a type called peptides, which have several different effects. Peptides found in the frog secretions include the opioid peptides dermorphin and deltorphin, the vasodilator sauvagine, and dermaseptin, which exhibits antimicrobial properties in vitro. Various other substances such as phyllomedusin, phyllokinin, caerulein, and adrenoregulin are also present. There is active medical research into the peptides found in the skin secretions of Phyllomedusa bicolor, focusing on discovering their biological effects. There have been some preclinical trials in mice and rats, but no phase-1 tests or clinical trials of safety in humans.
Most of the kambo-related bioactive peptides so far characterized have displayed potential applications in medicine, such as phyllocaeruleins with hypotensive properties, tachykinins and phyllokinins as vasodilators, dermorphins and deltorphins with opiate-like properties, and adenoregulins with antibiotic properties.
In a randomized, placebo-controlled clinical study in postoperative pain, dermorphin administered via the intrathecal route was "impressively superior" to both the placebo and the reference compound morphine. Due to the numerous biological activities of these substances and the similarities with the amino acid sequences related to mammalian neuropeptides and hormones, many have aroused interest from a medical and pharmacological perspective, such as in the production of new drugs.
See also
Notes
References
Causes of death
Pseudoscience
Alternative detoxification
Fringe science
Scientific skepticism
Alternative medicine
South American traditional medicine
Amphibians and humans
Collagen
Collagen is the main structural protein in the extracellular matrix of a body's various connective tissues. As the main component of connective tissue, it is the most abundant protein in mammals; 25% to 35% of a mammalian body's protein content is collagen. Amino acids are bound together to form a triple helix of elongated fibrils known as a collagen helix. The collagen helix is mostly found in connective tissue such as cartilage, bones, tendons, ligaments, and skin. Vitamin C is vital for collagen synthesis, while vitamin E improves its production.
Depending upon the degree of mineralization, collagen tissues may be rigid (bone) or compliant (tendon) or have a gradient from rigid to compliant (cartilage). Collagen is also abundant in corneas, blood vessels, the gut, intervertebral discs, and the dentin in teeth. In muscle tissue, it serves as a major component of the endomysium. Collagen constitutes 1% to 2% of muscle tissue and accounts for 6% of the weight of skeletal muscle. The fibroblast is the most common cell creating collagen in a body. Gelatin, which is used in food and industry, is collagen that has been irreversibly hydrolyzed using heat, basic solutions, or weak acids.
Etymology
The name collagen comes from the Greek κόλλα (kólla), meaning "glue", and suffix -γέν, -gen, denoting "producing".
Human types
Over 90% of the collagen in the human body is type I collagen. However, as of 2011, 28 types of human collagen have been identified, described, and divided into several groups according to the structure they form. All of the types contain at least one triple helix. The number of types shows collagen's diverse functionality.
Fibrillar (Type I, II, III, V, XI)
Non-fibrillar
FACIT (Fibril Associated Collagens with Interrupted Triple Helices) (Type IX, XII, XIV, XIX, XXI)
Short chain (Type VIII, X)
Basement membrane (Type IV)
Multiplexin (Multiple Triple Helix domains with Interruptions) (Type XV, XVIII)
MACIT (Membrane Associated Collagens with Interrupted Triple Helices) (Type XIII, XVII)
Microfibril forming (Type VI)
Anchoring fibrils (Type VII)
The five most common types are:
Type I: skin, tendon, vasculature, organs, bone (main component of the organic part of bone)
Type II: cartilage (main collagenous component of cartilage)
Type III: reticulate (main component of reticular fibers), commonly found alongside type I
Type IV: forms basal lamina, the epithelium-secreted layer of the basement membrane
Type V: cell surfaces, hair, and placenta
In human biology
Cardiac
The collagenous cardiac skeleton which includes the four heart valve rings, is histologically, elastically and uniquely bound to cardiac muscle. The cardiac skeleton also includes the separating septa of the heart chambers – the interventricular septum and the atrioventricular septum. Collagen contribution to the measure of cardiac performance summarily represents a continuous torsional force opposed to the fluid mechanics of blood pressure emitted from the heart. The collagenous structure that divides the upper chambers of the heart from the lower chambers is an impermeable membrane that excludes both blood and electrical impulses through typical physiological means. With support from collagen, atrial fibrillation never deteriorates to ventricular fibrillation. Collagen is layered in variable densities with smooth muscle mass. The mass, distribution, age, and density of collagen all contribute to the compliance required to move blood back and forth. Individual cardiac valvular leaflets are folded into shape by specialized collagen under variable pressure. Gradual calcium deposition within collagen occurs as a natural function of aging. Calcified points within collagen matrices show contrast in a moving display of blood and muscle, enabling methods of cardiac imaging technology to arrive at ratios essentially stating blood in (cardiac input) and blood out (cardiac output). Pathology of the collagen underpinning of the heart is understood within the category of connective tissue disease.
Bone grafts
As the skeleton forms the structure of the body, it is vital that it maintains its strength, even after breaks and injuries. Collagen is used in bone grafting because its triple-helical structure makes it a very strong molecule. It is ideal for use in bones, as it does not compromise the structural integrity of the skeleton. The triple-helical structure of collagen prevents it from being broken down by enzymes, enables adhesiveness of cells, and is important for the proper assembly of the extracellular matrix.
Tissue regeneration
Collagen scaffolds are used in tissue regeneration, whether in sponges, thin sheets, gels, or fibers. Collagen has favorable properties for tissue regeneration, such as pore structure, permeability, hydrophilicity, and stability in vivo. Collagen scaffolds also support deposition of cells, such as osteoblasts and fibroblasts, and once inserted, facilitate growth to proceed normally.
Reconstructive surgical uses
Collagens are widely employed in the construction of artificial skin substitutes used in the management of severe burns and wounds. These collagens may be derived from bovine, equine, porcine, or even human sources; and are sometimes used in combination with silicones, glycosaminoglycans, fibroblasts, growth factors and other substances.
Wound healing
Collagen is one of the body's key natural resources and a component of skin tissue that can benefit all stages of wound healing. When collagen is made available to the wound bed, closure can occur. Wound deterioration, followed sometimes by procedures such as amputation, can thus be avoided.
Collagen is a natural product and is thus used as a natural wound dressing, with properties that artificial wound dressings lack. It is resistant to bacteria, which is of vital importance in a wound dressing, and it helps to keep the wound sterile because of its natural ability to fight infection. When collagen is used as a burn dressing, healthy granulation tissue is able to form very quickly over the burn, helping it to heal rapidly.
Throughout the four phases of wound healing, collagen performs the following functions:
Guiding function: Collagen fibers serve to guide fibroblasts. Fibroblasts migrate along a connective tissue matrix.
Chemotactic properties: The large surface area available on collagen fibers can attract fibrogenic cells which help in healing.
Nucleation: Collagen, in the presence of certain neutral salt molecules, can act as a nucleating agent causing formation of fibrillar structures.
Hemostatic properties: Blood platelets interact with the collagen to make a hemostatic plug.
Basic research
Collagen is used in laboratory studies for cell culture, studying cell behavior and cellular interactions with the extracellular environment. Collagen is also widely used as a bioink for 3D bioprinting and biofabrication of 3D tissue models.
Biology
The collagen protein is composed of a triple helix, which generally consists of two identical chains (α1) and an additional chain that differs slightly in its chemical composition (α2). The amino acid composition of collagen is atypical for proteins, particularly with respect to its high hydroxyproline content. The most common motifs in the amino acid sequence of collagen are glycine-proline-X and glycine-X-hydroxyproline, where X is any amino acid other than glycine, proline or hydroxyproline. The average amino acid composition for fish and mammal skin is given.
Synthesis
First, a three-dimensional stranded structure is assembled, with the amino acids glycine and proline as its principal components. This is not yet collagen but its precursor, procollagen. Procollagen is then modified by the addition of hydroxyl groups to the amino acids proline and lysine. This step is important for later glycosylation and for the formation of the triple helix structure of collagen. Because the hydroxylase enzymes performing these reactions require vitamin C as a cofactor, a long-term deficiency in this vitamin results in impaired collagen synthesis and scurvy. These hydroxylation reactions are catalyzed by two different enzymes: prolyl 4-hydroxylase and lysyl hydroxylase. The reaction consumes one ascorbate molecule per hydroxylation. The synthesis of collagen occurs inside and outside of the cell. The formation of fibrillar collagen (the most common form) is discussed here. Meshwork collagen, which is often involved in the formation of filtration systems, is another common form of collagen. All types of collagen are triple helices; the differences lie in the make-up of their alpha peptides, created in step 2 below.
1. Transcription of mRNA: About 44 genes are associated with collagen formation, each coding for a specific mRNA sequence and typically carrying the "COL" prefix. Collagen synthesis begins with turning on genes associated with the formation of a particular alpha peptide (typically alpha 1, 2 or 3).
2. Pre-pro-peptide formation: Once the final mRNA exits the cell nucleus and enters the cytoplasm, it links with the ribosomal subunits and translation occurs. The early/first part of the new peptide is known as the signal sequence. The signal sequence on the N-terminal of the peptide is recognized by a signal recognition particle on the endoplasmic reticulum, which is responsible for directing the pre-pro-peptide into the endoplasmic reticulum. Therefore, once the synthesis of the new peptide is finished, it goes directly into the endoplasmic reticulum for post-translational processing. It is now known as preprocollagen.
3. Pre-pro-peptide to pro-collagen: Three modifications of the pre-pro-peptide occur, leading to the formation of the alpha peptide:
The signal peptide on the N-terminal is removed, and the molecule is now known as propeptide (not procollagen).
Hydroxylation of lysines and prolines on propeptide by the enzymes 'prolyl hydroxylase' and 'lysyl hydroxylase' (to produce hydroxyproline and hydroxylysine) occurs to aid cross-linking of the alpha peptides. This enzymatic step requires vitamin C as a cofactor. In scurvy, the lack of hydroxylation of prolines and lysines causes a looser triple helix (which is formed by three alpha peptides).
Glycosylation occurs by adding either glucose or galactose monomers onto the hydroxyl groups that were placed onto lysines, but not on prolines.
Once these modifications have taken place, three of the hydroxylated and glycosylated propeptides twist into a triple helix forming procollagen. Procollagen still has unwound ends, which will be later trimmed. At this point, the procollagen is packaged into a transfer vesicle destined for the Golgi apparatus.
4. Golgi apparatus modification: In the Golgi apparatus, the procollagen goes through one last post-translational modification before being secreted out of the cell. In this step, oligosaccharides (not monosaccharides as in step 3) are added, and then the procollagen is packaged into a secretory vesicle destined for the extracellular space.
5. Formation of tropocollagen: Once outside the cell, membrane-bound enzymes known as collagen peptidases remove the "loose ends" of the procollagen molecule. What is left is known as tropocollagen. Defects in this step produce one of the many collagenopathies known as Ehlers–Danlos syndrome. This step is absent when synthesizing type III, a type of fibrillar collagen.
6. Formation of the collagen fibril: Lysyl oxidase, an extracellular copper-dependent enzyme, catalyzes the final step in the collagen synthesis pathway. This enzyme acts on lysines and hydroxylysines, producing aldehyde groups that eventually form covalent bonds between tropocollagen molecules. This polymer of tropocollagen is known as a collagen fibril.
Amino acids
Collagen has an unusual amino acid composition and sequence:
Glycine is found at almost every third residue.
Proline makes up about 17% of collagen.
Collagen contains two unusual derivative amino acids not directly inserted during translation. These amino acids are found at specific locations relative to glycine and are modified post-translationally by different enzymes, both of which require vitamin C as a cofactor.
Hydroxyproline derived from proline
Hydroxylysine derived from lysine – depending on the type of collagen, varying numbers of hydroxylysines are glycosylated (mostly having disaccharides attached).
Cortisol stimulates degradation of (skin) collagen into amino acids.
Collagen I formation
Most collagen forms in a similar manner, but the following process is typical for type I:
Inside the cell
Two types of alpha chains – alpha-1 and alpha-2 – are formed during translation on ribosomes along the rough endoplasmic reticulum (RER). These peptide chains, known as preprocollagen, have registration peptides on each end and a signal peptide.
Polypeptide chains are released into the lumen of the RER.
Signal peptides are cleaved inside the RER and the chains are now known as pro-alpha chains.
Hydroxylation of lysine and proline amino acids occurs inside the lumen. This process is dependent on and consumes ascorbic acid (vitamin C) as a cofactor.
Glycosylation of specific hydroxylysine residues occurs.
Triple alpha helical structure is formed inside the endoplasmic reticulum from two alpha-1 chains and one alpha-2 chain.
Procollagen is shipped to the Golgi apparatus, where it is packaged and secreted into extracellular space by exocytosis.
Outside the cell
Registration peptides are cleaved and tropocollagen is formed by procollagen peptidase.
Multiple tropocollagen molecules form collagen fibrils, via covalent cross-linking (aldol reaction) by lysyl oxidase which links hydroxylysine and lysine residues. Multiple collagen fibrils form into collagen fibers.
Collagen may be attached to cell membranes via several types of protein, including fibronectin, laminin, fibulin and integrin.
Molecular structure
A single collagen molecule, tropocollagen, is used to make up larger collagen aggregates, such as fibrils. It is approximately 300 nm long and 1.5 nm in diameter, and it is made up of three polypeptide strands (called alpha peptides, see step 2), each of which has the conformation of a left-handed helix – this should not be confused with the right-handed alpha helix. These three left-handed helices are twisted together into a right-handed triple helix or "super helix", a cooperative quaternary structure stabilized by many hydrogen bonds. With type I collagen and possibly all fibrillar collagens, if not all collagens, each triple-helix associates into a right-handed super-super-coil referred to as the collagen microfibril. Each microfibril is interdigitated with its neighboring microfibrils to a degree that might suggest they are individually unstable, although within collagen fibrils, they are so well ordered as to be crystalline.
A distinctive feature of collagen is the regular arrangement of amino acids in each of the three chains of these collagen subunits. The sequence often follows the pattern Gly-Pro-X or Gly-X-Hyp, where X may be any of various other amino acid residues. Proline or hydroxyproline constitute about 1/6 of the total sequence. With glycine accounting for 1/3 of the sequence, approximately half of the collagen sequence is not glycine, proline or hydroxyproline, a fact often missed because of the distraction of the unusual GX1X2 character of collagen alpha-peptides. The high glycine content of collagen is important for the stabilization of the collagen helix, as it allows the very close association of the collagen fibers within the molecule, facilitating hydrogen bonding and the formation of intermolecular cross-links. This kind of regular repetition and high glycine content is found in only a few other fibrous proteins, such as silk fibroin.
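As a quick arithmetic check of the composition figures above, using the approximate fractions quoted in this article rather than measured values for any particular collagen type:

```latex
% Fraction of residues that are neither glycine nor proline/hydroxyproline,
% taking Gly at about 1/3 and Pro/Hyp at about 1/6 of the sequence, as stated above.
1 \;-\; \underbrace{\tfrac{1}{3}}_{\text{Gly}} \;-\; \underbrace{\tfrac{1}{6}}_{\text{Pro/Hyp}} \;=\; \tfrac{1}{2}
```

This is the sense in which roughly half of the sequence consists of residues other than glycine, proline, and hydroxyproline.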
Collagen is not only a structural protein. Due to its key role in the determination of cell phenotype, cell adhesion, tissue regulation, and infrastructure, many sections of its non-proline-rich regions have cell or matrix association/regulation roles. The relatively high content of proline and hydroxyproline rings, with their geometrically constrained carboxyl and (secondary) amino groups, along with the rich abundance of glycine, accounts for the tendency of the individual polypeptide strands to form left-handed helices spontaneously, without any intrachain hydrogen bonding.
Because glycine is the smallest amino acid with no side chain, it plays a unique role in fibrous structural proteins. In collagen, Gly is required at every third position because the assembly of the triple helix puts this residue at the interior (axis) of the helix, where there is no space for a larger side group than glycine's single hydrogen atom. For the same reason, the rings of the Pro and Hyp must point outward. These two amino acids help stabilize the triple helix – Hyp even more so than Pro; a lower concentration of them is required in animals such as fish, whose body temperatures are lower than most warm-blooded animals. Lower proline and hydroxyproline contents are characteristic of cold-water, but not warm-water fish; the latter tend to have similar proline and hydroxyproline contents to mammals. The lower proline and hydroxyproline contents of cold-water fish and other poikilotherm animals leads to their collagen having a lower thermal stability than mammalian collagen. This lower thermal stability means that gelatin derived from fish collagen is not suitable for many food and industrial applications.
The tropocollagen subunits spontaneously self-assemble, with regularly staggered ends, into even larger arrays in the extracellular spaces of tissues. Additional assembly of fibrils is guided by fibroblasts, which deposit fully formed fibrils from fibripositors. In the fibrillar collagens, molecules are staggered to adjacent molecules by about 67 nm (a unit that is referred to as 'D' and changes depending upon the hydration state of the aggregate). In each D-period repeat of the microfibril, there is a part containing five molecules in cross-section, called the "overlap", and a part containing only four molecules, called the "gap". These overlap and gap regions are retained as microfibrils assemble into fibrils, and are thus viewable using electron microscopy. The triple helical tropocollagens in the microfibrils are arranged in a quasihexagonal packing pattern.
There is some covalent crosslinking within the triple helices and a variable amount of covalent crosslinking between tropocollagen helices forming well-organized aggregates (such as fibrils). Larger fibrillar bundles are formed with the aid of several different classes of proteins (including different collagen types), glycoproteins, and proteoglycans to form the different types of mature tissues from alternate combinations of the same key players. Collagen's insolubility was a barrier to the study of monomeric collagen until it was found that tropocollagen from young animals can be extracted because it is not yet fully crosslinked. However, advances in microscopy techniques (i.e. electron microscopy (EM) and atomic force microscopy (AFM)) and X-ray diffraction have enabled researchers to obtain increasingly detailed images of collagen structure in situ. These later advances are particularly important to better understanding the way in which collagen structure affects cell–cell and cell–matrix communication and how tissues are constructed in growth and repair and changed in development and disease. For example, using AFM–based nanoindentation it has been shown that a single collagen fibril is a heterogeneous material along its axial direction with significantly different mechanical properties in its gap and overlap regions, correlating with its different molecular organizations in these two regions.
Collagen fibrils/aggregates are arranged in different combinations and concentrations in various tissues to provide varying tissue properties. In bone, entire collagen triple helices lie in a parallel, staggered array. 40 nm gaps between the ends of the tropocollagen subunits (approximately equal to the gap region) probably serve as nucleation sites for the deposition of long, hard, fine crystals of the mineral component, which is hydroxylapatite (approximately) Ca10(OH)2(PO4)6. Type I collagen gives bone its tensile strength.
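The gap figure quoted above can be checked against the molecular length and D-period given earlier in this article. A worked estimate, assuming the rounded values L ≈ 300 nm for the molecule and D ≈ 67 nm for the stagger (actual values vary with collagen type and hydration state), is:

```latex
% Hodge–Petruska-style staggering: each ~300 nm molecule spans four full D-periods
% plus part of a fifth, so each D-period contains an overlap region and a gap region.
\text{overlap} \approx L - 4D \approx 300\,\text{nm} - 268\,\text{nm} = 32\,\text{nm}, \qquad
\text{gap} \approx 5D - L \approx 335\,\text{nm} - 300\,\text{nm} = 35\,\text{nm}
```

The two regions together span one D-period (67 nm), and the roughly 35 nm gap estimate is consistent with the approximately 40 nm figure quoted above.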
Associated disorders
Collagen-related diseases most commonly arise from genetic defects or nutritional deficiencies that affect the biosynthesis, assembly, posttranslational modification, secretion, or other processes involved in normal collagen production.
In addition to the above-mentioned disorders, excessive deposition of collagen occurs in scleroderma.
Diseases
One thousand mutations have been identified in 12 out of more than 20 types of collagen. These mutations can lead to various diseases at the tissue level.
Osteogenesis imperfecta – Caused by a mutation in type 1 collagen; an autosomal dominant disorder that results in weak bones and irregular connective tissue. Some cases can be mild while others can be lethal; mild cases have lowered levels of collagen type 1, while severe cases have structural defects in collagen.
Chondrodysplasias – Skeletal disorder believed to be caused by a mutation in type 2 collagen, further research is being conducted to confirm this.
Ehlers–Danlos syndrome – Thirteen different types of this disorder, which lead to deformities in connective tissue, are known. Some of the rarer types can be lethal, leading to the rupture of arteries. Each syndrome is caused by a different mutation. For example, the vascular type (vEDS) of this disorder is caused by a mutation in collagen type 3.
Alport syndrome – Can be passed on genetically, usually as X-linked dominant, but also as both an autosomal dominant and autosomal recessive disorder. Those with the condition have problems with their kidneys and eyes, and loss of hearing can also develop during the childhood or adolescent years.
Knobloch syndrome – Caused by a mutation in the COL18A1 gene that codes for the production of collagen XVIII. Patients present with protrusion of the brain tissue and degeneration of the retina; an individual who has family members with the disorder is at an increased risk of developing it themselves since there is a hereditary link.
Animal harvesting
When not synthesized, collagen can be harvested from animal skin. This has led to deforestation, as has occurred in Paraguay, where large collagen producers buy large amounts of cattle hides from regions that have been clear-cut for cattle grazing.
Characteristics
Collagen is one of the long, fibrous structural proteins whose functions are quite different from those of globular proteins, such as enzymes. Tough bundles of collagen called collagen fibers are a major component of the extracellular matrix that supports most tissues and gives cells structure from the outside, but collagen is also found inside certain cells. Collagen has great tensile strength, and is the main component of fascia, cartilage, ligaments, tendons, bone and skin. Along with elastin and soft keratin, it is responsible for skin strength and elasticity, and its degradation leads to wrinkles that accompany aging. It strengthens blood vessels and plays a role in tissue development. It is present in the cornea and lens of the eye in crystalline form. It may be one of the most abundant proteins in the fossil record, given that it appears to fossilize frequently, even in bones from the Mesozoic and Paleozoic.
Mechanical properties
Collagen is a complex hierarchical material with mechanical properties that vary significantly across different scales.
On the molecular scale, atomistic and coarse-grained modeling simulations, as well as numerous experimental methods, have led to several estimates of the Young's modulus of collagen at the molecular level. Only above a certain strain rate is there a strong relationship between elastic modulus and strain rate, possibly due to the large number of atoms in a collagen molecule. The length of the molecule is also important: longer molecules have lower tensile strengths than shorter ones, due to short molecules having a large proportion of their hydrogen bonds broken and reformed.
On the fibrillar scale, collagen has a lower modulus compared to the molecular scale, and varies depending on geometry, scale of observation, deformation state, and hydration level. By increasing the crosslink density from zero to 3 per molecule, the maximum stress the fibril can support increases from 0.5 GPa to 6 GPa.
Limited tests have been done on the tensile strength of the collagen fiber, but generally it has been shown to have a lower Young's modulus compared to fibrils.
When studying the mechanical properties of collagen, tendon is often chosen as the ideal material because it is close to a pure and aligned collagen structure. However, at the macro (tissue) scale, the vast number of structures that collagen fibers and fibrils can be arranged into results in highly variable properties. For example, tendon has primarily parallel fibers, whereas skin consists of a net of wavy fibers, resulting in much higher strength and lower ductility in tendon compared to skin. The mechanical properties of collagen at multiple hierarchical levels are given.
Collagen is known to be a viscoelastic solid. When the collagen fiber is modeled as two Kelvin–Voigt elements in series, each consisting of a spring and a dashpot in parallel, the strain in the fiber can be modeled in terms of three material parameters α, β, and γ, the fibrillar strain εD, and the total strain εT; a simplified numerical sketch of this type of model is given below.
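A minimal numerical sketch of this kind of series Kelvin–Voigt model is given below. The moduli, viscosities, and applied stress are made-up illustrative values, and the sketch does not reproduce the specific α, β, γ parametrization referred to above; it only shows the generic creep behavior of two spring–dashpot elements in series.

```python
import numpy as np

# Creep response of two Kelvin-Voigt elements in series under a constant stress.
# Each element is a spring (modulus E_i) in parallel with a dashpot (viscosity eta_i);
# in a series arrangement the same stress acts on both elements and their strains add.
# Parameter values below are illustrative only, not fitted collagen properties.

sigma0 = 1.0                      # applied stress (arbitrary units)
E = np.array([2.0, 0.5])          # spring moduli of the two elements
eta = np.array([1.0, 4.0])        # dashpot viscosities of the two elements
t = np.linspace(0.0, 20.0, 200)   # time points

# Strain in element i: eps_i(t) = (sigma0 / E_i) * (1 - exp(-E_i * t / eta_i))
eps_elements = (sigma0 / E)[:, None] * (1.0 - np.exp(-(E / eta)[:, None] * t[None, :]))
eps_total = eps_elements.sum(axis=0)   # total (fiber) strain is the sum of element strains

print(f"strain at t = 20: {eps_total[-1]:.3f} "
      f"(approaches sigma0 * (1/E1 + 1/E2) = {sigma0 * (1.0 / E).sum():.3f})")
```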
Uses
Collagen has a wide variety of applications, from food to medical. In the medical industry, it is used in cosmetic surgery and burn surgery. In the food sector, one use example is in casings for sausages.
If collagen is subjected to sufficient denaturation, such as by heating, the three tropocollagen strands separate partially or completely into globular domains, containing a secondary structure different from the normal collagen polyproline II (PPII), such as random coils. This process describes the formation of gelatin, which is used in many foods, including flavored gelatin desserts. Besides food, gelatin has been used in the pharmaceutical, cosmetic, and photography industries. It is also used as a dietary supplement, and has been advertised as a potential remedy against the ageing process.
From the Greek for glue, kolla, the word collagen means "glue producer" and refers to the early process of boiling the skin and sinews of horses and other animals to obtain glue. Collagen adhesive was used by Egyptians about 4,000 years ago, and Native Americans used it in bows about 1,500 years ago. The oldest glue in the world, carbon-dated as more than 8,000 years old, was found to be collagen – used as a protective lining on rope baskets and embroidered fabrics, to hold utensils together, and in crisscross decorations on human skulls. Collagen normally converts to gelatin, but survived due to dry conditions. Animal glues are thermoplastic, softening again upon reheating, so they are still used in making musical instruments such as fine violins and guitars, which may have to be reopened for repairs – an application incompatible with tough, synthetic plastic adhesives, which are permanent. Animal sinews and skins, including leather, have been used to make useful articles for millennia.
Gelatin-resorcinol-formaldehyde glue (and with formaldehyde replaced by less-toxic pentanedial and ethanedial) has been used to repair experimental incisions in rabbit lungs.
Cosmetics
Bovine collagen is widely used in dermal fillers for aesthetic correction of wrinkles and skin aging. Collagen creams are also widely sold, even though collagen cannot penetrate the skin because its fibers are too large. Collagen is a vital protein in skin, hair, nails, and other tissues. Its production decreases with age and with factors such as sun damage and smoking. Collagen supplements, derived from sources such as fish and cattle, are marketed to improve skin, hair, and nails. Studies show some skin benefits, but these supplements often contain other beneficial ingredients, making it unclear whether collagen alone is effective. There is minimal evidence supporting collagen's benefits for hair and nails. Overall, the effectiveness of oral collagen supplements is not well proven, and focusing on a healthy lifestyle and proven skincare methods such as sun protection is recommended.
History
The molecular and packing structures of collagen eluded scientists over decades of research. The first evidence that it possesses a regular structure at the molecular level was presented in the mid-1930s. Research then concentrated on the conformation of the collagen monomer, producing several competing models, each of which dealt correctly with the conformation of the individual peptide chains. The triple-helical "Madras" model, proposed by G. N. Ramachandran in 1955, provided an accurate model of quaternary structure in collagen. This model was supported by further studies of higher resolution in the late 20th century.
The packing structure of collagen has not been defined to the same degree outside of the fibrillar collagen types, although it has been long known to be hexagonal. As with its monomeric structure, several conflicting models propose either that the packing arrangement of collagen molecules is 'sheet-like', or is microfibrillar. The microfibrillar structure of collagen fibrils in tendon, cornea and cartilage was imaged directly by electron microscopy in the late 20th century and early 21st century. The microfibrillar structure of rat tail tendon was modeled as being closest to the observed structure, although it oversimplified the topological progression of neighboring collagen molecules, and so did not predict the correct conformation of the discontinuous D-periodic pentameric arrangement termed microfibril.
See also
Collagen hybridizing peptide, a peptide that can bind to denatured collagen
Hypermobility spectrum disorder
Metalloprotease inhibitor
Osteoid, a collagen-containing component of bone
Collagen loss
References
Structural proteins
Edible thickening agents
Aging-related proteins
Fulminant | Fulminant is a medical descriptor for any event or process that occurs suddenly and escalates quickly, and is intense and severe to the point of lethality, i.e., it has an explosive character. The word comes from Latin fulmināre, to strike with lightning. There are several diseases described by this adjective:
Fulminant liver failure
Fulminant (Marburg variant) multiple sclerosis.
Fulminant colitis
Fulminant pre-eclampsia
Fulminant meningitis
Purpura fulminans
Fulminant hepatic venous thrombosis (Budd-Chiari syndrome)
Fulminant jejunoileitis
Fulminant myocarditis
Beyond these particular uses, the term is used more generally as a descriptor for sudden-onset medical conditions that are immediately threatening to life or limb. Some viral hemorrhagic fevers, such as Ebola, Lassa fever, and Lábrea fever, may kill in as little as two to five days. Diseases that cause rapidly developing lung edema, such as some kinds of pneumonia, may kill in a few hours. It was said of the "black death" (the pneumonic form of plague) that some of its victims would die in a matter of hours after the initial symptoms appeared. Other pathologic conditions that may be fulminating in character are acute respiratory distress syndrome, asthma, acute anaphylaxis, septic shock, and disseminated intravascular coagulation.
The term is generally not used to refer to immediate death by trauma, such as gunshot wound, but can refer to trauma-induced secondary conditions, such as commotio cordis, a sudden cardiac arrest caused by a blunt, non-penetrating trauma to the precordium, which causes ventricular fibrillation of the heart. Cardiac arrest and stroke in certain parts of the brain, such as in the brainstem (which controls cardiovascular and respiratory system functions), and massive hemorrhage of the great arteries (such as in perforation of the walls by trauma or by sudden opening of an aneurysm of the aorta) may be very quick, causing "fulminant death". Sudden infant death syndrome (SIDS) is still a mysterious cause of respiratory arrest in infants. Certain infections of the brain, such as rabies, meningococcal meningitis, or primary amebic meningoencephalitis can kill within hours to days after symptoms appear.
Some toxins, such as cyanide, may also provoke fulminant death. Abrupt hyperkalemia provoked by intravenous injection of potassium chloride leads to fulminant death by cardiac arrest.
Related terms
To fulminate is to hurl verbal denunciations, severe criticisms, or menacing comments at someone. Rarely, it is used in its original sense, "to kill by lightning".
Fulminates are a class of explosives used in detonator caps. They are named for the startling suddenness with which they explode.
References
Medical terminology
Medical aspects of death
Orientation (mental) | Orientation is a function of the mind involving awareness of three dimensions: time, place and person. Problems with orientation lead to disorientation, and can be due to various conditions. It ranges from an inability to coherently understand person, place, time, and situation, to complete orientation.
Assessment
Assessment of a person's mental orientation is frequently designed to evaluate the need for focused diagnosis and treatment of conditions leading to altered mental status (AMS). A variety of basic prompts and tests are available to determine a person's level of orientation. These tests primarily assess the ability of the person (within EMS) to perform basic functions of life (see: Airway, Breathing, Circulation); many assessments then gauge the level of amnesia, awareness of surroundings, concept of time and place, and response to verbal and sensory stimuli.
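As a toy illustration of how an "oriented ×N" style summary of the domains mentioned above (person, place, time, situation) might be tallied, the sketch below uses assumed domain names and pass/fail inputs; it is a simplified illustration only, not a validated clinical instrument.

```python
# Toy tally of orientation domains ("alert and oriented x N").
# The domain list and the pass/fail inputs are illustrative assumptions,
# not a validated clinical scoring tool.

DOMAINS = ("person", "place", "time", "situation")

def orientation_tally(responses: dict[str, bool]) -> str:
    """Return an 'oriented xN' summary from per-domain pass/fail answers."""
    intact = [d for d in DOMAINS if responses.get(d, False)]
    return f"alert and oriented x{len(intact)} ({', '.join(intact) or 'none'})"

# Example: the patient knows who and where they are, but not the date or the situation.
print(orientation_tally({"person": True, "place": True, "time": False, "situation": False}))
# -> alert and oriented x2 (person, place)
```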
Causes of mental disorientation
Disorientation has a variety of causes, physiological and mental in nature. Physiological disorientation is frequently caused by an underlying or acute condition. Disease or injury that impairs the delivery of essential nutrients such as glucose, oxygen, fluids, or electrolytes can impair homeostasis, and therefore neurological function causing mental disorientation. Other causes are psycho-neurological in nature (see also Cognitive disorder) stemming from chemical imbalances in the brain, deterioration of the structure of the brain, or psychiatric states or illnesses that result in disorientation.
Mental orientation is frequently affected by shock, including physiological shock (see: Shock circulatory) and mental shock (see: Acute stress reaction, a psychological condition in response to acute stressful stimuli.)
Areas within the precuneus, posterior cingulate cortex, inferior parietal lobe, medial prefrontal cortex, and lateral frontal and lateral temporal cortices are believed to be responsible for situational orientation.
See also
Mental confusion
Mental status examination
Delirium
Altered mental status
References
Cognition
Community health | Community health refers to non-treatment based health services that are delivered outside hospitals and clinics. Community health is a subset of public health that is taught to and practiced by clinicians as part of their normal duties. Community health volunteers and community health workers work with primary care providers to facilitate entry into, exit from and utilization of the formal health system by community members as well as providing supplementary services such as support groups or wellness events that are not offered by medical institutions.
Community health is a major field of study within the medical and clinical sciences which focuses on the maintenance, protection, and improvement of the health status of population groups and communities, in particular those who are part of disadvantaged communities. It is a distinct field of study that may be taught within a separate school of public health or preventive healthcare. The World Health Organization defines community health as the environmental, social, and economic resources to sustain emotional and physical well-being among people in ways that advance their aspirations and satisfy their needs in their unique environment.
Medical interventions that occur in communities can be classified into three categories: primary care, secondary care, and tertiary care. Each category focuses on a different level of, and approach toward, the community or population group. In the United States, community health is rooted in primary healthcare achievements. Primary healthcare programs aim to reduce risk factors and increase health promotion and prevention. Secondary healthcare is related to "hospital care", where acute care is administered in a hospital department setting. Tertiary healthcare refers to highly specialized care, usually involving disease or disability management.
Community health services are classified into categories including:
Preventive health services such as chemoprophylaxis for tuberculosis, cancer screening, and treatment of diabetes and hypertension.
Promotive health services such as health education, family planning, vaccination, and nutritional supplementation.
Curative health services such as treatment of jiggers, lice infestation, malaria, and pneumonia.
Rehabilitative health services such as provision of prosthetics, social work, occupational therapy, physical therapy, counseling and other mental health services.
Community health workers and volunteers
Community health workers (also known as community health assistants and community health officers) are local public health workers with a deep understanding of their community's health needs and challenges. They serve as a bridge between their community and local health systems to ensure high quality and culturally competent service delivery. They have vocational, professional or academic qualifications which enable them to provide training, supervisory, administrative, teaching and research services in community health departments.
Community health volunteers are members of a local community who have experience and training on the health problems prevalent in their community and the care services available, in order to identify and link those in need with local providers. Community health volunteers may be referred to by different titles depending on their local health system; these titles can include lay health workers, health volunteers, non-specialist healthcare providers, and village health agents.
Community health volunteers provide basic services such as distribution of water chlorination tablets, mosquito nets and health education material. They will involve or work with registered clinicians when they encounter sick or recovering patients or those with complex or ongoing needs.
Community health organizations are non-profit, non-governmental organizations which administer and coordinate the delivery of healthcare services to people living in a designated community or neighborhood. They help people understand their health status or social conditions, provide advocacy for those who need it, and hold group and individual meetings with people in the community. Their vital role is advocating for the rights and interests of their community members: they raise awareness about issues affecting their community through research and dialogue, and lobby for policies and programs that address those issues.
Measuring Community health
Community health is generally measured by geographical information systems and demographic data. Geographic information systems can be used to define sub-communities when neighborhood location data are not enough. Traditionally, community health has been measured using sampling data which were then compared to well-known data sets, such as the National Health Interview Survey or the National Health and Nutrition Examination Survey. With technological development, information systems can store more data for small-scale communities, cities, and towns, as opposed to census data that only generalize information about small populations based on the overall population. Geographical information systems (GIS) can give more precise information about community resources, even at neighborhood levels. The ease of use of geographic information systems (GIS), advances in multilevel statistics, and spatial analysis methods make it easier for researchers to procure and generate data related to the built environment.
Social media can also play a big role in health information analytics. Studies have found social media being capable of influencing people to change their unhealthy behaviors and encourage interventions capable of improving health status. Social media statistics combined with Geographical Information Systems (GIS) may provide researchers with a more complete image of community standards for health and well being.
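As an illustration of the kind of small-area measurement described above, the sketch below joins point-level survey records to neighborhood polygons and averages a health indicator per neighborhood. It assumes a recent version of the geopandas library and hypothetical inputs; the file names, the neighborhood_id field, and the obesity_rate column are illustrative assumptions, not data referenced in this article.

```python
import geopandas as gpd

# Hypothetical inputs: neighborhood polygons and point-level survey records.
# File names and the 'obesity_rate' indicator are assumptions for illustration.
neighborhoods = gpd.read_file("neighborhoods.geojson")   # columns: neighborhood_id, geometry
surveys = gpd.read_file("survey_points.geojson")         # columns: obesity_rate, geometry

# Spatially assign each survey point to the neighborhood polygon that contains it.
joined = gpd.sjoin(surveys, neighborhoods[["neighborhood_id", "geometry"]],
                   how="left", predicate="within")

# Aggregate the indicator at the sub-community (neighborhood) level.
summary = (joined.groupby("neighborhood_id")["obesity_rate"]
                 .agg(["mean", "count"])
                 .rename(columns={"mean": "avg_obesity_rate", "count": "n_respondents"}))

print(summary.head())
```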
Categories of Community health
Primary Healthcare and Primary Prevention
Community-based health promotion emphasizes Primary Prevention and population-based perspective (traditional prevention). It is the goal of Community Health to have individuals in a certain community improve their lifestyle or seek medical attention. Primary Healthcare is provided by health professionals, specifically the ones a patient sees first that may refer them to Secondary or Tertiary care.
Primary prevention refers to the early avoidance and identification of risk factors that may lead to certain diseases and disabilities. Community-focused efforts including immunizations, classroom teaching, and awareness campaigns are all good examples of how primary prevention techniques are utilized by communities to change certain health behaviors. Prevention programs, if carefully designed and drafted, can effectively prevent problems that children and adolescents face as they grow up. This finding also applies to all groups and classes of people. Prevention programs are one of the most effective tools health professionals can use to significantly impact individual, population, and community health.
Secondary Healthcare and Secondary Prevention
Community health can also be improved with improvements in individuals' environments. Community health status is determined by the environmental characteristics, behavioral characteristics, and social cohesion of that community. Appropriate modifications in the environment can help to prevent unhealthy behaviors and negative health outcomes.
Secondary prevention refers to improvements made in a patient's lifestyle or environment after the onset of disease or disability. This sort of prevention works to make life easier for the patient since it is too late to prevent them from their current disease or disability. An example of secondary prevention is when those with occupational low back pain are provided with strategies to stop their health status from worsening; the prospects of secondary prevention may even hold more promise than primary prevention in this case.
Tertiary Healthcare
In Tertiary healthcare, community health can only be affected with professional medical care involving the entire population. Patients need to be referred to specialists and undergo advanced medical treatment. In some countries, there are more sub-specialties of medical professions than there are primary care specialists. Health inequalities are directly related to social advantage and social resources.
Challenges and difficulties in Community health
The complexity of community health and its various problems can make it difficult for researchers to assess and identify solutions. Community-Based Participatory Research (CBPR) is a unique alternative that combines community participation, inquiry, and action. Community-Based Participatory Research (CBPR) helps researchers address community issues with a broader lens and also works with the people in the community to find culturally sensitive, valid, and reliable methods and approaches.
Community health also requires clear communication to properly address health issues, disparities, and complications. Health communication applies communication evidence, strategy, theory, and creativity to promote behaviors, policies, and practices that advance the health and well-being of people and populations. Communicating health care can be limited by a few factors that are important to recognize in order to apply community health practices well, notably scientific complexity and uncertainty. Using environmental health as an example, scientific complexity arises because environmental health risks often involve concepts that are difficult to understand for someone without the technical knowledge gained from specialized training or education; the use of plain language and visual aids helps to simplify complex information and increase accessibility. Uncertainty exists when the scientific understanding of environmental health risks is incomplete; knowing how communities work at the level of the individual can aid understanding, as can complete transparency about the limitations of knowledge and ongoing research efforts. Preventive action must be taken against misinformation in the face of uncertainty, which can have grave consequences, as seen during the COVID-19 pandemic. Scientific complexity and uncertainty thus make it difficult to understand the environment and limit coherent communication about population health. Examples of successful community health initiatives include projects addressing these complicating issues, such as promotions for understanding and initiatives targeting already acknowledged health issues in each community.
Patients with limited English proficiency especially struggle to access healthcare and often turn to community health centers, which carry the burden of serving such patients. There is a need to invest more in the interpreter workforce (in both quantity and quality), to let those with limited English proficiency know more about the rights currently available to them and the potential legal avenues to take if those rights are not provided, and to increase awareness of non-verbal communication. Executive Order 13166 (2000), titled Improving Access to Services for Persons with Limited English Proficiency, offered continuing education for health professionals, certification of healthcare interpreters, and reimbursement for language services for Medicaid/State Children's Health Insurance Program (SCHIP) enrollees, in order to address the institutional issues regarding language barriers in the current healthcare system. Another avenue has been local governments partnering with community-based organizations, such as the collaboration between Alameda County and the Korean Community Center of the East Bay (KCCEB) to create RICE, the Refugee and Immigrant Collaborative for Empowerment, a coalition mobilized by various multiethnic and multilingual organizations. They partnered in 2020, during the height of the pandemic, to support COVID-19 testing and increase vaccine awareness and accessibility across 16 language groups.
Other issues involve access to and the cost of medical care. A great majority of the world does not have adequate health insurance. In low-income countries, less than 40% of total health expenditure is paid for by the public/government. Community health, and even population health, is not encouraged, as health sectors in developing countries are not able to link the national authorities with local government and community action.
In the United States, the Affordable Care Act (ACA) changed the way community health centers operate and the policies that were in place, greatly influencing community health. The ACA directly affected community health centers by increasing funding, expanding insurance coverage for Medicaid, reforming the Medicaid payment system, and appropriating $1.5 billion to increase the workforce and promote training. The impact, importance, and success of the Affordable Care Act are still being studied and will have a large impact on how ensuring health can affect community standards of health as well as individual health.
Ethnic disparities in health statuses among different communities are also a cause of concern. Community coalition-driven interventions may bring benefits to this segment of society. This also relates to language usage, where results from a 2019 systematic review found that patients with limited English proficiency who received care from physicians who communicate in the patient's own preferred language generally had improved health outcomes.
Community health resolutions
Each community is different and should create its own community health improvement process, also known as a CHIP. A CHIP consists of a problem identification and prioritization cycle along with an analysis and implementation cycle. Five strategies that assist the CHIP process are improving community health and well-being; community involvement and political commitment; healthy public policy; multi-sectoral collaboration; and asset-based community development. An asset-based approach involves empowering individuals and communities by focusing on community strengths along with the skills of individuals.
The CDC states that "Individuals who are in good physical shape, have proper vaccination, have access to clinical services and medications, and know where to get critical health and emergency alert information create a better community than those who have poor health and don't understand where to get proper treatment and medicine."
The problem identification and prioritization cycle has three phases that benefit the community: forming a health coalition, collecting and analyzing data for a health profile, and identifying critical health issues. The information that is gathered is also distributed to the community to help with important decision-making.
Following this cycle is the analysis and implementation cycle, which helps resolve community health problems by analyzing the health issue, establishing resources, creating a health improvement strategy with those resources, and allocating responsibility throughout the community. Multiple issues are analyzed in conjunction to determine which is most important. Lastly, the authority to act is implemented, sufficient funds are allocated, and access to data is released so that members of the community can review it and act accordingly.
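The prioritization step can be illustrated with a simple weighted-scoring sketch. This is a generic approach rather than the CDC's or any particular community's method; the criteria, weights, issue names, and scores below are assumptions chosen only for illustration.

```python
# Generic weighted scoring to prioritize candidate community health issues.
# Criteria, weights, and the 1-5 scores are illustrative assumptions only.

WEIGHTS = {"magnitude": 0.4, "severity": 0.3, "feasibility": 0.3}

issues = {
    "hypertension": {"magnitude": 5, "severity": 4, "feasibility": 4},
    "childhood asthma": {"magnitude": 3, "severity": 4, "feasibility": 3},
    "road injuries": {"magnitude": 2, "severity": 5, "feasibility": 2},
}

def priority_score(scores: dict[str, float]) -> float:
    """Weighted sum of the criterion scores for one candidate issue."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

ranked = sorted(((name, priority_score(s)) for name, s in issues.items()),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```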
Community health in the Global South
Access to community health in the Global South is influenced by geographic accessibility (physical distance from the service delivery point to the user), availability (proper type of care, service provider, and materials), financial accessibility (willingness and ability of users to purchase services), and acceptability (responsiveness of providers to social and cultural norms of users and their communities). While the Epidemiological transition is shifting the disease burden from communicable to noncommunicable conditions in developing countries, this transition is still in an early stage in parts of the Global South such as South Asia, the Middle East, and Sub-Saharan Africa. Two phenomena in developing countries have created a "medical poverty trap" for underserved communities in the Global South — the introduction of user fees for public healthcare services and the growth of out-of-pocket expenses for private services. The private healthcare sector is being increasingly utilized by low and middle income communities in the Global South for conditions such as malaria, tuberculosis, and sexually transmitted infections. Private care is characterized by more flexible access, shorter waiting times, and greater choice. Private providers that serve low-income communities are often unqualified and untrained. Some policymakers recommend that governments in developing countries harness private providers to remove state responsibility from service provision.
Community development is frequently used as a public health intervention to empower communities to obtain self-reliance and control over the factors that affect their health. Community health workers are able to draw on their firsthand experience, or local knowledge, to complement the information that scientists and policy makers use when designing health interventions. Interventions with community health workers have been shown to improve access to primary healthcare and quality of care in developing countries through reduced malnutrition rates, improved maternal and child health and prevention and management of HIV/AIDS. Community health workers have also been shown to promote chronic disease management by improving the clinical outcomes of patients with diabetes, hypertension, and cardiovascular diseases.
Slum-dwellers in the Global South face threats of infectious disease, non-communicable conditions, and injuries due to violence and road traffic accidents. Participatory, multi-objective slum upgrading in the urban sphere significantly improves social determinants that shape health outcomes such as safe housing, food access, political and gender rights, education, and employment status. Efforts have been made to involve the urban poor in project and policy design and implementation. Through slum upgrading, states recognize and acknowledge the rights of the urban poor and the need to deliver basic services. Upgrading can vary from small-scale sector-specific projects (i.e. water taps, paved roads) to comprehensive housing and infrastructure projects (i.e. piped water, sewers). Other projects combine environmental interactions with social programs and political empowerment. Recently, slum upgrading projects have been incremental to prevent the displacement of residents during improvements and attentive to emerging concerns regarding climate change adaptation. By legitimizing slum-dwellers and their right to remain, slum upgrading is an alternative to slum removal and a process that in itself may address the structural determinants of population health.
Kenya
Community health refers to the first level of health service provision in Kenya, which comprises:
1. interventions focusing on building demand for existing health and related services by improving community awareness and health-seeking behavior, and 2. taking defined interventions and services, as defined in the Kenya Health Sector Strategic and Investment Plan (KHSSP), close to the community and households.
The current registered association for community Health professionals in Kenya is The Society of Community Health Caregivers. It was registered in the year 2020 to act as an umbrella body for the community health professionals.
Academic resources
Journal of Urban Health, Springer.
International Quarterly of Community Health Education, Sage Publications.
Global Public Health, Informa Healthcare.
Journal of Community Health, Springer.
Family and Community Health, Lippincott Williams & Wilkins.
Health Promotion Practice, Sage Publications.
Journal of Health Services Research and Policy, Sage Publications.
BMC Health Sciences Research, Biomed Central.
Health Services Research, Wiley-Blackwell.
Health Communication and Literacy: An Annotated Bibliography, Centre for Literacy of Quebec.
See also
Community health agent
Community health center
Community mental health service
Online health communities
Prison reform
University of Community Health, Magway
References
Further reading
John Sanbourne Bockoven (1963). Moral Treatment in American Psychiatry, New York: Springer Publishing Co.
External links
Health marketing- CDC
Public health
Health
Plant physiology | Plant physiology is a subdiscipline of botany concerned with the functioning, or physiology, of plants.
Plant physiologists study fundamental processes of plants, such as photosynthesis, respiration, plant nutrition, plant hormone functions, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, environmental stress physiology, seed germination, dormancy and stomata function and transpiration. Plant physiology interacts with the fields of plant morphology (structure of plants), plant ecology (interactions with the environment), phytochemistry (biochemistry of plants), cell biology, genetics, biophysics and molecular biology.
Aims
The field of plant physiology includes the study of all the internal activities of plants—those chemical and physical processes associated with life as they occur in plants. This includes study at many levels of scale of size and time. At the smallest scale are molecular interactions of photosynthesis and internal diffusion of water, minerals, and nutrients. At the largest scale are the processes of plant development, seasonality, dormancy, and reproductive control. Major subdisciplines of plant physiology include phytochemistry (the study of the biochemistry of plants) and phytopathology (the study of disease in plants). The scope of plant physiology as a discipline may be divided into several major areas of research.
First, the study of phytochemistry (plant chemistry) is included within the domain of plant physiology. To function and survive, plants produce a wide array of chemical compounds not found in other organisms. Photosynthesis requires a large array of pigments, enzymes, and other compounds to function. Because they cannot move, plants must also defend themselves chemically from herbivores, pathogens and competition from other plants. They do this by producing toxins and foul-tasting or smelling chemicals. Other compounds defend plants against disease, permit survival during drought, and prepare plants for dormancy, while still others are used to attract pollinators or the herbivores that spread ripe seeds.
Secondly, plant physiology includes the study of biological and chemical processes of individual plant cells. Plant cells have a number of features that distinguish them from cells of animals, and which lead to major differences in the way that plant life behaves and responds differently from animal life. For example, plant cells have a cell wall which maintains the shape of plant cells. Plant cells also contain chlorophyll, a chemical compound that interacts with light in a way that enables plants to manufacture their own nutrients rather than consuming other living things as animals do.
Thirdly, plant physiology deals with interactions between cells, tissues, and organs within a plant. Different cells and tissues are physically and chemically specialized to perform different functions. Roots and rhizoids function to anchor the plant and acquire minerals in the soil. Leaves catch light in order to manufacture nutrients. For both of these organs to remain living, minerals that the roots acquire must be transported to the leaves, and the nutrients manufactured in the leaves must be transported to the roots. Plants have developed a number of ways to achieve this transport, such as vascular tissue, and the functioning of the various modes of transport is studied by plant physiologists.
Fourthly, plant physiologists study the ways that plants control or regulate internal functions. Like animals, plants produce chemicals called hormones which are produced in one part of the plant to signal cells in another part of the plant to respond. Many flowering plants bloom at the appropriate time because of light-sensitive compounds that respond to the length of the night, a phenomenon known as photoperiodism. The ripening of fruit and loss of leaves in the winter are controlled in part by the production of the gas ethylene by the plant.
Finally, plant physiology includes the study of plant response to environmental conditions and their variation, a field known as environmental physiology. Stress from water loss, changes in air chemistry, or crowding by other plants can lead to changes in the way a plant functions. These changes may be affected by genetic, chemical, and physical factors.
Biochemistry of plants
The chemical elements of which plants are constructed—principally carbon, oxygen, hydrogen, nitrogen, phosphorus, sulfur, etc.—are the same as for all other life forms: animals, fungi, bacteria and even viruses. Only the details of their individual molecular structures vary.
Despite this underlying similarity, plants produce a vast array of chemical compounds with unique properties which they use to cope with their environment. Pigments are used by plants to absorb or detect light, and are extracted by humans for use in dyes. Other plant products may be used for the manufacture of commercially important rubber or biofuel. Perhaps the most celebrated compounds from plants are those with pharmacological activity, such as salicylic acid from which aspirin is made, morphine, and digoxin. Drug companies spend billions of dollars each year researching plant compounds for potential medicinal benefits.
Constituent elements
Plants require some nutrients, such as carbon and nitrogen, in large quantities to survive. Some nutrients are termed macronutrients, where the prefix macro- (large) refers to the quantity needed, not the size of the nutrient particles themselves. Other nutrients, called micronutrients, are required only in trace amounts for plants to remain healthy. Such micronutrients are usually absorbed as ions dissolved in water taken from the soil, though carnivorous plants acquire some of their micronutrients from captured prey.
Pigments
Among the most important molecules for plant function are the pigments. Plant pigments include a variety of different kinds of molecules, including porphyrins, carotenoids, and anthocyanins. All biological pigments selectively absorb certain wavelengths of light while reflecting others. The light that is absorbed may be used by the plant to power chemical reactions, while the reflected wavelengths of light determine the color the pigment appears to the eye.
Chlorophyll is the primary pigment in plants; it is a porphyrin that absorbs red and blue wavelengths of light while reflecting green. It is the presence and relative abundance of chlorophyll that gives plants their green color. All land plants and green algae possess two forms of this pigment: chlorophyll a and chlorophyll b. Kelps, diatoms, and other photosynthetic heterokonts contain chlorophyll c instead of b, while red algae possess chlorophyll a. All chlorophylls serve as the primary means plants use to intercept light to fuel photosynthesis.
Carotenoids are red, orange, or yellow tetraterpenoids. They function as accessory pigments in plants, helping to fuel photosynthesis by gathering wavelengths of light not readily absorbed by chlorophyll. The most familiar carotenoids are carotene (an orange pigment found in carrots), lutein (a yellow pigment found in fruits and vegetables), and lycopene (the red pigment responsible for the color of tomatoes). Carotenoids have been shown to act as antioxidants and to promote healthy eyesight in humans.
Anthocyanins (literally "flower blue") are water-soluble flavonoid pigments that appear red to blue, according to pH. They occur in all tissues of higher plants, providing color in leaves, stems, roots, flowers, and fruits, though not always in sufficient quantities to be noticeable. Anthocyanins are most visible in the petals of flowers, where they may make up as much as 30% of the dry weight of the tissue. They are also responsible for the purple color seen on the underside of tropical shade plants such as Tradescantia zebrina. In these plants, the anthocyanin catches light that has passed through the leaf and reflects it back towards regions bearing chlorophyll, in order to maximize the use of available light.
Betalains are red or yellow pigments. Like anthocyanins they are water-soluble, but unlike anthocyanins they are indole-derived compounds synthesized from tyrosine. This class of pigments is found only in the Caryophyllales (including cactus and amaranth), and never co-occurs in plants with anthocyanins. Betalains are responsible for the deep red color of beets, and are used commercially as food-coloring agents. Plant physiologists are uncertain of the function that betalains have in plants which possess them, but there is some preliminary evidence that they may have fungicidal properties.
Signals and regulators
Plants produce hormones and other growth regulators which act to signal a physiological response in their tissues. They also produce compounds such as phytochrome that are sensitive to light and which serve to trigger growth or development in response to environmental signals.
Plant hormones
Plant hormones, also known as plant growth regulators (PGRs) or phytohormones, are chemicals that regulate a plant's growth. According to a standard animal definition, hormones are signal molecules that are produced at specific locations, occur in very low concentrations, and cause altered processes in target cells at other locations. Unlike animals, plants lack specific hormone-producing tissues or organs. Plant hormones are often not transported to other parts of the plant and production is not limited to specific locations.
Plant hormones are chemicals that in small amounts promote and influence the growth, development and differentiation of cells and tissues. Hormones are vital to plant growth, affecting processes in plants from flowering to seed development, dormancy, and germination. They regulate which tissues grow upwards and which grow downwards, leaf formation and stem growth, fruit development and ripening, as well as leaf abscission and even plant death.
The most important plant hormones are abscisic acid (ABA), auxins, ethylene, gibberellins, and cytokinins, though there are many other substances that serve to regulate plant physiology.
Photomorphogenesis
While most people know that light is important for photosynthesis in plants, few realize that plant sensitivity to light plays a role in the control of plant structural development (morphogenesis). The use of light to control structural development is called photomorphogenesis, and is dependent upon the presence of specialized photoreceptors, which are chemical pigments capable of absorbing specific wavelengths of light.
Plants use four kinds of photoreceptors: phytochrome, cryptochrome, a UV-B photoreceptor, and protochlorophyllide a. The first two of these, phytochrome and cryptochrome, are photoreceptor proteins, complex molecular structures formed by joining a protein with a light-sensitive pigment. Cryptochrome is also known as the UV-A photoreceptor, because it absorbs ultraviolet light in the long wave "A" region. The UV-B receptor is one or more compounds not yet identified with certainty, though some evidence suggests carotene or riboflavin as candidates. Protochlorophyllide a, as its name suggests, is a chemical precursor of chlorophyll.
The most studied of the photoreceptors in plants is phytochrome. It is sensitive to light in the red and far-red region of the visible spectrum. Many flowering plants use it to regulate the time of flowering based on the length of day and night (photoperiodism) and to set circadian rhythms. It also regulates other responses including the germination of seeds, elongation of seedlings, the size, shape and number of leaves, the synthesis of chlorophyll, and the straightening of the epicotyl or hypocotyl hook of dicot seedlings.
Photoperiodism
Many flowering plants use the pigment phytochrome to sense seasonal changes in day length, which they take as signals to flower. This sensitivity to day length is termed photoperiodism. Broadly speaking, flowering plants can be classified as long day plants, short day plants, or day neutral plants, depending on their particular response to changes in day length. Long day plants require a certain minimum length of daylight to start flowering, so these plants flower in the spring or summer. Conversely, short day plants flower when the length of daylight falls below a certain critical level. Day neutral plants do not initiate flowering based on photoperiodism, though some may use temperature sensitivity (vernalization) instead.
Although a short day plant cannot flower during the long days of summer, it is not actually the period of light exposure that limits flowering. Rather, a short day plant requires a minimal length of uninterrupted darkness in each 24-hour period (a short daylength) before floral development can begin. It has been determined experimentally that a short day plant (long night) does not flower if a flash of phytochrome-activating light is applied to the plant during the night.
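As an illustration of the critical night-length rule described above, the short Python sketch below encodes the decision a short day (long night) plant effectively makes; the 11-hour threshold, the function name, and the example inputs are hypothetical and do not describe any particular species.

# Illustrative sketch of the critical night-length rule for a short day plant.
# The 11-hour default threshold is a hypothetical example, not a real species value.
def short_day_plant_flowers(dark_hours: float,
                            night_interrupted: bool,
                            critical_night_hours: float = 11.0) -> bool:
    """Return True if a short day (long night) plant would initiate flowering."""
    # A flash of phytochrome-activating light during the night resets the
    # dark period, so flowering is prevented regardless of total darkness.
    if night_interrupted:
        return False
    # Otherwise, flowering begins only if the uninterrupted dark period
    # meets or exceeds the critical night length.
    return dark_hours >= critical_night_hours

print(short_day_plant_flowers(13.0, night_interrupted=False))  # True: long, unbroken night
print(short_day_plant_flowers(13.0, night_interrupted=True))   # False: night interrupted by light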
Plants make use of the phytochrome system to sense day length or photoperiod. This fact is utilized by florists and greenhouse gardeners to control and even induce flowering out of season, such as the poinsettia (Euphorbia pulcherrima).
Environmental physiology
Paradoxically, the subdiscipline of environmental physiology is on the one hand a recent field of study in plant ecology and on the other hand one of the oldest. Environmental physiology is the preferred name of the subdiscipline among plant physiologists, but it goes by a number of other names in the applied sciences. It is roughly synonymous with ecophysiology, crop ecology, horticulture and agronomy. The particular name applied to the subdiscipline is specific to the viewpoint and goals of research. Whatever name is applied, it deals with the ways in which plants respond to their environment and so overlaps with the field of ecology.
Environmental physiologists examine plant response to physical factors such as radiation (including light and ultraviolet radiation), temperature, fire, and wind. Of particular importance are water relations (which can be measured with the Pressure bomb) and the stress of drought or inundation, exchange of gases with the atmosphere, as well as the cycling of nutrients such as nitrogen and carbon.
Environmental physiologists also examine plant response to biological factors. This includes not only negative interactions, such as competition, herbivory, disease and parasitism, but also positive interactions, such as mutualism and pollination.
While plants, as living beings, can perceive and communicate physical stimuli and damage, they do not feel pain as members of the animal kingdom do, simply because of the lack of any pain receptors, nerves, or a brain, and, by extension, lack of consciousness. Many plants are known to perceive and respond to mechanical stimuli at a cellular level, and some plants, such as the Venus flytrap or touch-me-not, are known for their "obvious sensory abilities". Nevertheless, the plant kingdom as a whole does not feel pain, notwithstanding its ability to respond to sunlight, gravity, wind, and external stimuli such as insect bites, since it lacks any nervous system. The primary reason for this is that, unlike the members of the animal kingdom whose evolutionary successes and failures are shaped by suffering, the evolution of plants is simply shaped by life and death.
Tropisms and nastic movements
Plants may respond both to directional and non-directional stimuli. A response to a directional stimulus, such as gravity or sunlight, is called a tropism. A response to a nondirectional stimulus, such as temperature or humidity, is a nastic movement.
Tropisms in plants are the result of differential cell growth, in which the cells on one side of the plant elongate more than those on the other side, causing the part to bend toward the side with less growth. Among the common tropisms seen in plants is phototropism, the bending of the plant toward a source of light. Phototropism allows the plant to maximize light exposure in plants which require additional light for photosynthesis, or to minimize it in plants subjected to intense light and heat. Geotropism allows the roots of a plant to determine the direction of gravity and grow downwards. Tropisms generally result from an interaction between the environment and production of one or more plant hormones.
Nastic movements result from differential cell growth (e.g., epinasty and hyponasty), or from changes in turgor pressure within plant tissues (e.g., nyctinasty), which may occur rapidly. A familiar example is thigmonasty (response to touch) in the Venus flytrap, a carnivorous plant. The traps consist of modified leaf blades which bear sensitive trigger hairs. When the hairs are touched by an insect or other animal, the leaf folds shut. This mechanism allows the plant to trap and digest small insects for additional nutrients. Although the trap is rapidly shut by changes in internal cell pressures, the leaf must grow slowly to reset for a second opportunity to trap insects.
Plant disease
Economically, one of the most important areas of research in environmental physiology is that of phytopathology, the study of diseases in plants and the manner in which plants resist or cope with infection. Plants are susceptible to the same kinds of disease organisms as animals, including viruses, bacteria, and fungi, as well as physical invasion by insects and roundworms.
Because the biology of plants differs from that of animals, their symptoms and responses are quite different. In some cases, a plant can simply shed infected leaves or flowers to prevent the spread of disease, in a process called abscission. Most animals do not have this option as a means of controlling disease. Plant disease organisms themselves also differ from those causing disease in animals because plants cannot usually spread infection through casual physical contact. Plant pathogens tend to spread via spores or are carried by animal vectors.
One of the most important advances in the control of plant disease was the discovery of Bordeaux mixture in the nineteenth century. The mixture is the first known fungicide and is a combination of copper sulfate and lime. Application of the mixture served to inhibit the growth of downy mildew that threatened to seriously damage the French wine industry.
History
Early history
Francis Bacon published one of the first plant physiology experiments in 1627 in the book, Sylva Sylvarum. Bacon grew several terrestrial plants, including a rose, in water and concluded that soil was only needed to keep the plant upright. Jan Baptist van Helmont published what is considered the first quantitative experiment in plant physiology in 1648. He grew a willow tree for five years in a pot containing 200 pounds of oven-dry soil. The soil lost just two ounces of dry weight and van Helmont concluded that plants get all their weight from water, not soil. In 1699, John Woodward published experiments on growth of spearmint in different sources of water. He found that plants grew much better in water with soil added than in distilled water.
Stephen Hales is considered the Father of Plant Physiology for the many experiments in his 1727 book, Vegetable Staticks, though Julius von Sachs unified the pieces of plant physiology and put them together as a discipline. His Lehrbuch der Botanik was the plant physiology bible of its time.
Researchers discovered in the 1800s that plants absorb essential mineral nutrients as inorganic ions in water. In natural conditions, soil acts as a mineral nutrient reservoir but the soil itself is not essential to plant growth. When the mineral nutrients in the soil are dissolved in water, plant roots absorb nutrients readily, and soil is no longer required for the plant to thrive. This observation is the basis for hydroponics, the growing of plants in a water solution rather than soil, which has become a standard technique in biological research, teaching lab exercises, crop production and as a hobby.
Economic applications
Food production
In horticulture and agriculture along with food science, plant physiology is an important topic relating to fruits, vegetables, and other consumable parts of plants. Topics studied include climatic requirements, fruit drop, nutrition, ripening, and fruit set. The production of food crops also hinges on the study of plant physiology, covering such topics as optimal planting and harvesting times and post-harvest storage of plant products for human consumption and the production of secondary products like drugs and cosmetics.
Crop physiology steps back and looks at a field of plants as a whole, rather than looking at each plant individually. Crop physiology looks at how plants respond to each other and how to maximize results like food production through determining things like optimal planting density.
See also
Biomechanics
Hyperaccumulator
Phytochemistry
Plant anatomy
Plant morphology
Plant secondary metabolism
Branches of botany
References
Further reading
Lincoln Taiz, Eduardo Zeiger, Ian Max Møller, Angus Murphy: Fundamentals of Plant Physiology. Sinauer, 2018.
Branches of botany
Ad hoc
Ad hoc is a Latin phrase meaning literally 'for this'. In English, it typically signifies a solution designed for a specific purpose, problem, or task rather than a generalized solution adaptable to collateral instances (compare with a priori).
Common examples include ad hoc committees and commissions created at the national or international level for a specific task, and the term is often used to describe arbitration (ad hoc arbitration). In other fields, the term could refer to a military unit created under special circumstances (see task force), a handcrafted network protocol (e.g., ad hoc network), a temporary collaboration among geographically-linked franchise locations (of a given national brand) to issue advertising coupons, or a purpose-specific equation in mathematics or science.
Ad hoc can also function as an adjective describing temporary, provisional, or improvised methods to deal with a particular problem, the tendency of which has given rise to the noun adhocism. This concept highlights the flexibility and adaptability often required in problem-solving across various domains.
In everyday language, "ad hoc" is sometimes used informally to describe improvised or makeshift solutions, emphasizing their temporary nature and specific applicability to immediate circumstances.
Styling
Style guides disagree on whether Latin phrases like ad hoc should be italicized. The trend is not to use italics. For example, The Chicago Manual of Style recommends that familiar Latin phrases that are listed in the Webster's Dictionary, including "ad hoc", not be italicized.
Hypothesis
In science and philosophy, ad hoc means the addition of extraneous hypotheses to a theory to save it from being falsified. Ad hoc hypotheses compensate for anomalies not anticipated by the theory in its unmodified form.
Scientists are often skeptical of scientific theories that rely on frequent, unsupported adjustments to sustain them. Ad hoc hypotheses are often characteristic of pseudo-scientific subjects such as homeopathy.
In the military
In the military, ad hoc units are created during unpredictable situations, when the cooperation between different units is suddenly needed for fast action, or from remnants of previous units which have been overrun or otherwise whittled down.
In governance
In national and sub-national governance, ad hoc bodies may be established to deal with specific problems not easily accommodated by the current structure of governance or to address multi-faceted issues spanning several areas of governance. In the UK and other commonwealth countries, ad hoc Royal Commissions may be set up to address specific questions as directed by parliament.
In diplomacy
In diplomacy, diplomats may be appointed by a government as special envoys, or as diplomats who serve on an ad hoc basis due to the possibility that such envoys' offices may either not be retained by a future government or may only exist during the duration of a relevant cause.
Networking
The term ad hoc networking typically refers to a system of network elements that combine to form a network requiring little or no planning.
See also
Ad hoc testing
Ad infinitum
Ad libitum
Adhocracy
Democracy
Heuristic
House rule
Russell's teapot
Inductive reasoning
Confirmation bias
Cherry picking
References
Further reading
External links
Latin words and phrases
Functional gastrointestinal disorder
Functional gastrointestinal disorders (FGID), also known as disorders of gut–brain interaction, include a number of separate idiopathic disorders which affect different parts of the gastrointestinal tract and involve visceral hypersensitivity and motility disturbances.
Definition
Using the Delphi method, the Rome Foundation and its board of directors, chairs and co-chairs of the Rome IV committees developed the current definition for disorders of gut-brain interaction.
A group of disorders classified by GI symptoms related to any combination of:
Motility disturbance
Visceral hypersensitivity
Altered mucosal and immune function
Altered gut microbiota
Altered central nervous system (CNS) processing
Classification
Terms such as functional colonic disease (or functional bowel disorder) refer in medicine to a group of bowel disorders which are characterized by chronic abdominal complaints without a structural or biochemical cause that could explain symptoms. Other functional disorders relate to other aspects of the process of digestion.
The consensus review process of meetings and publications organised by the Rome Foundation, known as the Rome process, has helped to define the functional gastrointestinal disorders. The successive Rome I, Rome II, Rome III and Rome IV processes proposed a consensus classification system and terminology, as recommended by the Rome Coordinating Committee. These now include classifications appropriate for adults, children and neonates/toddlers.
The current Rome IV classification, published in 2016, is as follows:
A. Esophageal disorders
A1. Functional chest pain
A2. Functional heartburn
A3. Reflux hypersensitivity
A4. Globus
A5. Functional dysphagia
B. Gastroduodenal disorders
B1. Functional dyspepsia
B1a. Postprandial distress syndrome (PDS)
B1b. Epigastric pain syndrome (EPS)
B2. Belching disorders
B2a. Excessive supragastric belching
B2b. Excessive gastric belching
B3. Nausea and vomiting disorders
B3a. Chronic nausea vomiting syndrome (CNVS)
B3b. Cyclic vomiting syndrome (CVS)
B3c. Cannabinoid hyperemesis syndrome (CHS)
B4. Rumination syndrome
C. Bowel disorders
C1. Irritable bowel syndrome (IBS)
IBS with predominant constipation (IBS-C)
IBS with predominant diarrhea (IBS-D)
IBS with mixed bowel habits (IBS-M)
IBS unclassified (IBS-U)
C2. Functional constipation
C3. Functional diarrhea
C4. Functional abdominal bloating/distension
C5. Unspecified functional bowel disorder
C6. Opioid-induced constipation
D. Centrally mediated disorders of gastrointestinal pain
D1. Centrally mediated abdominal pain syndrome (CAPS)
D2. Narcotic bowel syndrome (NBS)/ Opioid-induced GI hyperalgesia
E. Gallbladder and sphincter of Oddi disorders
E1. Biliary pain
E1a. Functional gallbladder disorder
E1b. Functional biliary sphincter of Oddi disorder
E2. Functional pancreatic sphincter of Oddi disorder
F. Anorectal disorders
F1. Fecal incontinence
F2. Functional anorectal pain
F2a. Levator ani syndrome
F2b. Unspecified functional anorectal pain
F2c. Proctalgia fugax
F3. Functional defecation disorders
F3a. Inadequate defecatory propulsion
F3b. Dyssynergic defecation
G. Childhood functional GI disorders: Neonate/Toddler
G1. Infant regurgitation
G2. Rumination syndrome
G3. Cyclic vomiting syndrome (CVS)
G4. Infant colic
G5. Functional diarrhea
G6. Infant dyschezia
G7. Functional constipation
H. Childhood functional GI disorders: Child/Adolescent
H1. Functional nausea and vomiting disorders
H1a. Cyclic vomiting syndrome (CVS)
H1b. Functional nausea and functional vomiting
H1b1. Functional nausea
H1b2. Functional vomiting
H1c. Rumination syndrome
H1d. Aerophagia
H2. Functional abdominal pain disorders
H2a. Functional dyspepsia
H2a1. Postprandial distress syndrome
H2a2. Epigastric pain syndrome
H2b. Irritable bowel syndrome (IBS)
H2c. Abdominal migraine
H2d. Functional abdominal pain ‒ NOS
H3. Functional defecation disorders
H3a. Functional constipation
H3b. Nonretentive fecal incontinence
Causes
FGIDs share in common any of several physiological features including increased motor reactivity, enhanced visceral hypersensitivity, altered mucosal immune and inflammatory function (associated with bacterial dysbiosis), and altered central nervous system and enteric nervous system (CNS-ENS) regulation.
The pathophysiology of FGID is best conceptualized using the biopsychosocial model, which helps to explain how factors in an individual's early life can influence their psychosocial factors and physiological functioning. This model also shows the complex interactions between these factors through the brain-gut axis. These factors affect how FGIDs manifest in terms of symptoms but also affect the clinical outcome. The factors are interconnected, and their influences are bidirectional and mutually interactive.
Early life factors
Early life factors include genetic factors, psychophysiological and sociocultural factors, and environmental exposures.
Genetics – Several polymorphisms and candidate genes may predispose individuals to develop FGID. These include alpha-2 adrenergic and 5-HT receptors; serotonin and norepinephrine transporters (SERT, NET); inflammatory markers interleukin-(IL)10, tumor necrosis factor-(TNF) alpha, and TNF super family member 15 (TNF-SF15); intracellular cell signaling (G proteins); and ion channels (SCN5A). However, the expression of an FGID requires the influence of additional environmental exposures such as infection, illness modeling and other factors.
Psychophysiological factors may affect the expression of these genes, thus leading to symptoms production associated with FGID.
Sociocultural factors and family interactions have been shown to shape later reporting of symptoms, the development of FGIDs, and health care seeking. The expression of pain also varies across cultures, ranging from denial of symptoms to dramatic expression.
Environmental exposures – Prior studies have shown the effect of environmental exposures in relation to the development of FGIDs. Environmental exposures such as childhood salmonella infection can be a risk factor for IBS in adulthood.
Psychosocial factors
There is a strong link between FGIDs and psychosocial factors. Psychosocial factors influence the functioning of the GI tract through the brain-gut axis, including the GI tract's motility, sensitivity, and barrier function. Psychosocial factors also affect experience and behavior, treatment selection, and clinical outcome.
Psychological stress or one's emotional response to stress exacerbates gastrointestinal symptoms and may contribute to FGID development and maintenance. Specifically in children and adolescents, anxiety and depression may present as FGID-associated somatic complaints, such as nausea, vomiting, and abdominal pain. Similarly, anxiety in individuals with FGIDs is linked to greater pain severity, frequency, duration, chronicity, and disabling effects. This is because psychological stress can impact the gut's mucosal barrier functions, allowing bacteria and bacterial products to migrate and cause pain, diarrhea, and other GI symptoms. Conversely, since the brain-gut axis is bidirectional, GI inflammation and injury can amplify pain signals to the brain and contribute to worsened mental status, including anxiety and depression symptoms.
Individuals with FGIDs may also experience poor socialization. Due to the nature of the disease, individuals with an FGID may have difficulty with regular school or work attendance and participation in extracurricular activities, leading to isolation and a lack of peer support. This lack of peer support may lead to depression and loneliness, conditions which exacerbate FGIDs symptoms. In addition, children with FGIDs are more likely to experience bullying. As such, stressful situations which influence socialization (seen as either a lack thereof or negative experiences) may lead to an impaired functioning in patients with FGIDs.
Family interactions may also play a role in the development of FGIDs through their effects on the physical and psychosocial functioning of an individual. Family factors which may influence the development of an FGID include child attachment style, maladaptive parenting behaviors (paternal rejection and hostility), and even the parents' health status, as children of chronically ill parents experience increased somatization, insecure attachment, and worsened biopsychosocial functioning. Each of these factors leads to the accumulation of stressors, which can ultimately lead to the development of an FGID. In addition, family units which have a member with an FGIDs diagnosis are more likely to face family functioning difficulties, including challenges to familial roles, communication, affective involvement, organization, and cohesion. These challenges arise due to the nature of the disease, and ultimately worsen symptoms for the FGID patient.
Physiology
The physiology of FGID is characterized by abnormal motility, visceral hypersensitivity as well as dysregulation of the immune system and barrier function of the GI tract as well as inflammatory changes.
Abnormal motility – Studies have shown that altered muscle contractility and tone, bowel compliance, and transit may contribute to many of the gastrointestinal symptoms of FGID, which may include diarrhea, constipation, and vomiting.
Visceral hypersensitivity – In FGID there is poor association of pain with GI motility in many functional GI disorders. These patients often have a lower pain threshold with balloon distension of the bowel (visceral hyperalgesia), or they have increased sensitivity even to normal intestinal function; visceral hypersensitivity may be amplified in patients with FGIDs.
Immune dysregulation, inflammation, and barrier dysfunction – Studies on postinfectious IBS have implicated factors such as mucosal membrane permeability, the intestinal flora, and altered mucosal immune function, ultimately leading to visceral hypersensitivity. Factors contributing to this occurrence include genetics, psychological stress, and altered receptor sensitivity at the gut mucosa and myenteric plexus, which are enabled by mucosal immune dysfunction.
Microbiome – There has been increased attention to the role of bacteria and the microbiome in overall health and disease. There is evidence for a group of microorganisms which play a role in the brain-gut axis. Studies have revealed that the bacterial composition of the gastrointestinal tract in IBS patients differs from that of healthy individuals (e.g., increased Firmicutes and reduced Bacteroidetes and Bifidobacteria). However, further research is needed to determine the role of the microbiome in FGIDs.
Food and diet – The types of food and overall diet consumed play a role in the manifestation of FGID and in their relationship to the intestinal microbiota. Studies have shown that specific changes in diet (e.g., restriction of FODMAPs, the fermentable oligo-, di-, and monosaccharides and polyols, or gluten restriction in some patients) may help reduce the symptom burden in FGID. However, no single diet has been shown to be suitable for all people.
Brain-gut axis
The brain-gut axis is a bidirectional mechanism in which psychosocial factors influence the GI tract and vice versa. Specifically, the emotional and cognitive centers of the brain influence GI activity and immune cell function, and the microbes within the gut regulate mood, cognition, and mental health. These two systems interact through several mechanisms. There are direct, physical connections between the central nervous system and nerve plexuses to the visceral muscles. In addition, neurotransmitters send signals related to thoughts, feelings, and pain regulation from the brain to the GI tract. The brain-gut axis influences the entire body through a variety of pathways; it regulates sensory, motor, endocrine, autonomic, immune, and inflammatory reactions. Within the physical and psychological interactions of FGIDs specifically, psychiatric disorders such as anxiety, depression, and even autism are well-linked to GI dysfunction. Conversely, functional GI diseases are linked to several comorbid psychiatric diseases. Negative emotions such as fear, anxiety, anger, stress, and pain may delay gastric emptying, decrease intestinal and colonic transit time, and induce defecation and diarrhea.
Treatments
Psychotherapeutic treatments
Because FGIDs are known to be multifactorial with external stressors and environmental factors playing a role in their development, current research demonstrates that psychological treatments may be effective in relieving some symptoms of the disease. Interventions such as cognitive behavioral therapy (CBT), hypnotherapy, and biofeedback-assisted relaxation training (BART) each show promise in symptom reduction. Each of these therapies aims to alter an individual's thought patterns and behaviors while improving self-efficacy, which all work together to improve health outcomes.
Cognitive behavioral therapy is a treatment based on the theory that thinking affects one's feelings and behaviors. As such, alterations in one's thought process can have a positive or negative effect on actions and perceptions. Through the lens of FGIDs, a negative thought pattern may be associated with a negative physical experience of abdominal pain, discomfort, and general sickness. In theory, retraining the patient's thought patterns can alleviate these symptoms and improve quality of life. In patients with FGIDs, CBT is an effective treatment option; one study found 87.5% of participants to be completely pain-free following treatment. Internet-based CBT (iCBT) is similarly effective, and may be a good treatment option for individuals who either cannot afford or otherwise lack access to traditional CBT.
Hypnotherapy, another method for reducing symptoms of FGIDs, teaches users how to alter their perception of uncomfortable sensations in the body. Gut-directed hypnotherapy specifically gives greater improvements in symptoms than standard treatment of the disease. Research demonstrates directed hypnotherapy to be an effective mechanism of reducing visceral hypersensitivity (a low pain threshold of the internal organs) and sympathetic activity, due to the reduced activity of the anterior cingulate cortex and state of relaxation achieved during hypnosis. For patients with irritable bowel syndrome (IBS) and functional abdominal pain (FAP), hypnotherapy reduces pain intensity and frequency.
BART therapies monitor the physiological changes occurring with thoughts, feelings, and emotions. These therapies aim to teach patients how to visualize the effects of the interventions they are undergoing. BART is used to improve mood and somatic responses to anxiety disorders, which may relieve some of the psychological and physiological symptoms of FGIDs. The visual, real-time feedback given through BART empowers the patient to see the difference that the therapy is making, thus giving the patient control over the physiological components of the disease. This allows the patient to maximize their mind-body connection and eventually optimize symptom management and quality of life. BART allows the patient to break the positive feedback loop of anxiety and pain, thus reducing disease exacerbations.
Pharmaceutical treatments
Antidepressants have been thoroughly studied as a potential treatment for FGIDs. Tricyclic antidepressants (TCAs), selective serotonin reuptake inhibitors (SSRIs), and serotonin–norepinephrine reuptake inhibitors (SNRIs) show the most promise in treating some of the symptoms of FGIDs. TCAs, specifically amitriptyline, show promising results when examining common FGIDs symptoms such as pain and poor quality of life. SNRIs also demonstrate pain-relieving qualities. SSRIs are less effective in pain management, but may reduce symptoms of anxiety and depression, which would, in turn, reduce some FGIDs symptoms.
Epidemiology
Functional gastrointestinal disorders are very common. Globally, irritable bowel syndrome and functional dyspepsia alone may affect 16–26% of the population.
Research
There is considerable research into the causes, diagnosis and treatments for FGIDs. Diet, microbiome, genetics, neuromuscular function and immunological response all interact. A role for mast cell activation has been proposed as one of the factors.
See also
Allergy
Food intolerance
Functional indigestion
Histamine intolerance
References
External links
Gastrointestinal tract disorders
Paresis
In medicine, paresis is a condition typified by a weakness of voluntary movement, or by partial loss of voluntary movement or by impaired movement. When used without qualifiers, it usually refers to the limbs, but it can also be used to describe the muscles of the eyes (ophthalmoparesis), the stomach (gastroparesis), and also the vocal cords (vocal cord paresis).
Neurologists use the term paresis to describe weakness, and plegia to describe paralysis in which all voluntary movement is lost. The term paresis comes from the Ancient Greek πάρεσις 'letting go', from παρίημι 'to let go, to let fall'.
Types
Limbs
Monoparesis – One leg or one arm
Paraparesis – Both legs
Hemiparesis – The loss of function to only one side of the body
Triparesis – Three limbs. This can either mean both legs and one arm, both arms and a leg, or a combination of one arm, one leg, and face
Double hemiparesis – All four limbs are involved, but one side of the body is more affected than the other
Tetraparesis – All four limbs
Quadriparesis – All four limbs, equally affected
These terms frequently refer to the impairment of motion in multiple sclerosis and cerebral palsy.
Other
Gastroparesis – impaired stomach emptying
Ophthalmoparesis – a form of ophthalmoplegia
Spastic paresis – exaggerated tendon reflexes and muscle hypertonia
In the past, the term was most commonly used to refer to "general paresis", which was a symptom of untreated syphilis. However, due to improvements in treatment of syphilis, it is now rarely used in this context.
See also
Asthenia
Ataxia
Atony
Catatonia
Fatigue (physical)
Facial nerve paralysis
Hypotonia
Malaise
Muscle weakness
Palsy
References
External links
Overview
Hind Limb Paresis and Paralysis in Rabbits
Symptoms and signs
Medical terminology
Nutrition
Nutrition is the biochemical and physiological process by which an organism uses food to support its life. It provides organisms with nutrients, which can be metabolized to create energy and chemical structures. Failure to obtain the required amount of nutrients causes malnutrition. Nutritional science is the study of nutrition, though it typically emphasizes human nutrition.
The type of organism determines what nutrients it needs and how it obtains them. Organisms obtain nutrients by consuming organic matter, consuming inorganic matter, absorbing light, or some combination of these. Some can produce nutrients internally by consuming basic elements, while some must consume other organisms to obtain pre-existing nutrients. All forms of life require carbon, energy, and water as well as various other molecules. Animals require complex nutrients such as carbohydrates, lipids, and proteins, obtaining them by consuming other organisms. Humans have developed agriculture and cooking to replace foraging and advance human nutrition. Plants acquire nutrients through the soil and the atmosphere. Fungi absorb nutrients around them by breaking them down and absorbing them through the mycelium.
History
Scientific analysis of food and nutrients began during the chemical revolution in the late 18th century. Chemists in the 18th and 19th centuries experimented with different elements and food sources to develop theories of nutrition. Modern nutrition science began in the 1910s as individual micronutrients began to be identified. The first vitamin to be chemically identified was thiamine in 1926, and vitamin C was identified as a protection against scurvy in 1932. The role of vitamins in nutrition was studied in the following decades. The first recommended dietary allowances for humans were developed to address fears of disease caused by food deficiencies during the Great Depression and the Second World War. Due to its importance in human health, the study of nutrition has heavily emphasized human nutrition and agriculture, while ecology is a secondary concern.
Nutrients
Nutrients are substances that provide energy and physical components to the organism, allowing it to survive, grow, and reproduce. Nutrients can be basic elements or complex macromolecules. Approximately 30 elements are found in organic matter, with nitrogen, carbon, and phosphorus being the most important. Macronutrients are the primary substances required by an organism, and micronutrients are substances required by an organism in trace amounts. Organic micronutrients are classified as vitamins, and inorganic micronutrients are classified as minerals.
Nutrients are absorbed by the cells and used in metabolic biochemical reactions. These include fueling reactions that create precursor metabolites and energy, biosynthetic reactions that convert precursor metabolites into building block molecules, polymerizations that combine these molecules into macromolecule polymers, and assembly reactions that use these polymers to construct cellular structures.
Nutritional groups
Organisms can be classified by how they obtain carbon and energy. Heterotrophs are organisms that obtain nutrients by consuming the carbon of other organisms, while autotrophs are organisms that produce their own nutrients from the carbon of inorganic substances like carbon dioxide. Mixotrophs are organisms that can be heterotrophs and autotrophs, including some plankton and carnivorous plants. Phototrophs obtain energy from light, while chemotrophs obtain energy by consuming chemical energy from matter. Organotrophs consume other organisms to obtain electrons, while lithotrophs obtain electrons from inorganic substances, such as water, hydrogen sulfide, dihydrogen, iron(II), sulfur, or ammonium. Prototrophs can create essential nutrients from other compounds, while auxotrophs must consume preexisting nutrients.
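These category names combine prefixes for energy source, electron source, and carbon source into a single trophic label; for example, a cyanobacterium is a photolithoautotroph, while most animals are chemoorganoheterotrophs. The small Python sketch below is purely illustrative of how the terms compose; the function and its input labels are not drawn from any standard source.

# Illustrative composition of a trophic label from the three axes described above.
def trophic_label(energy_source: str, electron_source: str, carbon_source: str) -> str:
    """Compose a trophic label such as 'photolithoautotroph'."""
    prefixes = {
        "light": "photo",             # phototroph: energy from light
        "chemical": "chemo",          # chemotroph: energy from chemical compounds
        "inorganic": "litho",         # lithotroph: electrons from inorganic substances
        "organic": "organo",          # organotroph: electrons from other organisms
        "carbon dioxide": "auto",     # autotroph: carbon fixed from carbon dioxide
        "other organisms": "hetero",  # heterotroph: carbon from other organisms
    }
    return prefixes[energy_source] + prefixes[electron_source] + prefixes[carbon_source] + "troph"

print(trophic_label("light", "inorganic", "carbon dioxide"))    # photolithoautotroph
print(trophic_label("chemical", "organic", "other organisms"))  # chemoorganoheterotroph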
Diet
In nutrition, the diet of an organism is the sum of the foods it eats. A healthy diet improves the physical and mental health of an organism. This requires ingestion and absorption of vitamins, minerals, essential amino acids from protein and essential fatty acids from fat-containing food. Carbohydrates, protein and fat play major roles in ensuring the quality of life, health and longevity of the organism. Some cultures and religions have restrictions on what is acceptable for their diet.
Nutrient cycle
A nutrient cycle is a biogeochemical cycle involving the movement of inorganic matter through a combination of soil, organisms, air or water, where they are exchanged in organic matter. Energy flow is a unidirectional and noncyclic pathway, whereas the movement of mineral nutrients is cyclic. Mineral cycles include the carbon cycle, sulfur cycle, nitrogen cycle, water cycle, phosphorus cycle, and oxygen cycle, among others that continually recycle along with other mineral nutrients into productive ecological nutrition.
Biogeochemical cycles that are performed by living organisms and natural processes are water, carbon, nitrogen, phosphorus, and sulfur cycles. Nutrient cycles allow these essential elements to return to the environment after being absorbed or consumed. Without proper nutrient cycling, there would be risk of change in oxygen levels, climate, and ecosystem function.
Foraging
Foraging is the process of seeking out nutrients in the environment. It may also be defined to include the subsequent use of the resources. Some organisms, such as animals and bacteria, can navigate to find nutrients, while others, such as plants and fungi, extend outward to find nutrients. Foraging may be random, in which the organism seeks nutrients without method, or it may be systematic, in which the organism can go directly to a food source. Organisms are able to detect nutrients through taste or other forms of nutrient sensing, allowing them to regulate nutrient intake. Optimal foraging theory is a model that explains foraging behavior as a cost–benefit analysis in which an animal must maximize the gain of nutrients while minimizing the amount of time and energy spent foraging. It was created to analyze the foraging habits of animals, but it can also be extended to other organisms. Some organisms are specialists that are adapted to forage for a single food source, while others are generalists that can consume a variety of food sources.
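A minimal sketch of the cost–benefit logic behind optimal foraging theory follows, under the simplifying assumption that a forager ranks food items by net energy gained per unit of time spent searching and handling; the item names and numbers are hypothetical.

# Hypothetical food items: (energy in kJ, search time in s, handling time in s).
items = {
    "small seed": (2.0, 5.0, 1.0),
    "large nut": (30.0, 60.0, 40.0),
}

def net_gain_rate(energy_kj: float, search_time_s: float, handling_time_s: float) -> float:
    """Net energy intake rate (kJ per second) for a single food item."""
    return energy_kj / (search_time_s + handling_time_s)

for name, params in items.items():
    print(f"{name}: {net_gain_rate(*params):.2f} kJ/s")

# Under this simple currency, the forager is assumed to prefer the highest rate.
best = max(items, key=lambda name: net_gain_rate(*items[name]))
print("Preferred item:", best)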
Nutrient deficiency
Nutrient deficiencies, known as malnutrition, occur when an organism does not have the nutrients that it needs. This may be caused by suddenly losing nutrients or the inability to absorb proper nutrients. Not only is malnutrition the result of a lack of necessary nutrients, but it can also be a result of other illnesses and health conditions. When this occurs, an organism will adapt by reducing energy consumption and expenditure to prolong the use of stored nutrients. It will use stored energy reserves until they are depleted, and it will then break down its own body mass for additional energy.
A balanced diet includes appropriate amounts of all essential and nonessential nutrients. These can vary by age, weight, sex, physical activity levels, and more. A lack of just one essential nutrient can cause bodily harm, just as an overabundance can cause toxicity. The Daily Reference Values keep the majority of people from nutrient deficiencies. DRVs are not recommendations but a combination of nutrient references to educate professionals and policymakers on what the maximum and minimum nutrient intakes are for the average person. Food labels also use DRVs as a reference to create safe nutritional guidelines for the average healthy person.
In organisms
Animal
Animals are heterotrophs that consume other organisms to obtain nutrients. Herbivores are animals that eat plants, carnivores are animals that eat other animals, and omnivores are animals that eat both plants and other animals. Many herbivores rely on bacterial fermentation to create digestible nutrients from indigestible plant cellulose, while obligate carnivores must eat animal meats to obtain certain vitamins or nutrients their bodies cannot otherwise synthesize. Animals generally have a higher requirement of energy in comparison to plants. The macronutrients essential to animal life are carbohydrates, amino acids, and fatty acids.
All macronutrients except water are required by the body for energy; however, this is not their sole physiological function. The energy provided by macronutrients in food is measured in kilocalories, usually called Calories, where 1 Calorie is the amount of energy required to raise 1 kilogram of water by 1 degree Celsius.
Carbohydrates are molecules that store significant amounts of energy. Animals digest and metabolize carbohydrates to obtain this energy. Carbohydrates are typically synthesized by plants during metabolism, and animals have to obtain most carbohydrates from nature, as they have only a limited ability to generate them. They include sugars, oligosaccharides, and polysaccharides. Glucose is the simplest form of carbohydrate. Carbohydrates are broken down to produce glucose and short-chain fatty acids, and they are the most abundant nutrients for herbivorous land animals. Carbohydrates contain 4 calories per gram.
Lipids provide animals with fats and oils. They are not soluble in water, and they can store energy for an extended period of time. They can be obtained from many different plant and animal sources. Most dietary lipids are triglycerides, composed of glycerol and fatty acids. Phospholipids and sterols are found in smaller amounts. An animal's body will reduce the amount of fatty acids it produces as dietary fat intake increases, while it increases the amount of fatty acids it produces as carbohydrate intake increases. Fats contain 9 calories per gram.
Protein consumed by animals is broken down to amino acids, which are later used to synthesize new proteins. Protein is used to form cellular structures, fluids, and enzymes (biological catalysts). Enzymes are essential to most metabolic processes, as well as DNA replication, repair, and transcription. Protein contains 4 calories per gram.
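Using the per-gram energy values given above (4 Calories per gram for carbohydrate and protein, 9 Calories per gram for fat), the energy content of a food can be estimated from its macronutrient breakdown. The short Python sketch below illustrates the arithmetic; the sample snack composition is hypothetical.

# Energy per gram of each macronutrient, in Calories (kilocalories), as given above.
CALORIES_PER_GRAM = {"carbohydrate": 4, "fat": 9, "protein": 4}

def food_energy(grams: dict) -> float:
    """Total energy in Calories for a given macronutrient breakdown (grams)."""
    return sum(CALORIES_PER_GRAM[nutrient] * amount for nutrient, amount in grams.items())

# Hypothetical snack: 30 g carbohydrate, 10 g fat, 5 g protein.
snack = {"carbohydrate": 30, "fat": 10, "protein": 5}
print(food_energy(snack))  # 30*4 + 10*9 + 5*4 = 230 Calories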
Much of animal behavior is governed by nutrition. Migration patterns and seasonal breeding take place in conjunction with food availability, and courtship displays are used to display an animal's health. Animals develop positive and negative associations with foods that affect their health, and they can instinctively avoid foods that have caused toxic injury or nutritional imbalances through a conditioned food aversion. Some animals, such as rats, do not seek out new types of foods unless they have a nutrient deficiency.
Human
Early human nutrition consisted of foraging for nutrients, like other animals, but it diverged at the beginning of the Holocene with the Neolithic Revolution, in which humans developed agriculture to produce food. The Chemical Revolution in the 18th century allowed humans to study the nutrients in foods and develop more advanced methods of food preparation. Major advances in economics and technology during the 20th century allowed mass production and food fortification to better meet the nutritional needs of humans. Human behavior is closely related to human nutrition, making it a subject of social science in addition to biology. Nutrition in humans is balanced with eating for pleasure, and optimal diet may vary depending on the demographics and health concerns of each person.
Humans are omnivores that eat a variety of foods. Cultivation of cereals and production of bread has made up a key component of human nutrition since the beginning of agriculture. Early humans hunted animals for meat, and modern humans domesticate animals to consume their meat and eggs. The development of animal husbandry has also allowed humans in some cultures to consume the milk of other animals and process it into foods such as cheese. Other foods eaten by humans include nuts, seeds, fruits, and vegetables. Access to domesticated animals as well as vegetable oils has caused a significant increase in human intake of fats and oils. Humans have developed advanced methods of food processing that prevent contamination of pathogenic microorganisms and simplify the production of food. These include drying, freezing, heating, milling, pressing, packaging, refrigeration, and irradiation. Most cultures add herbs and spices to foods before eating to add flavor, though most do not significantly affect nutrition. Other additives are also used to improve the safety, quality, flavor, and nutritional content of food.
Humans obtain most carbohydrates as starch from cereals, though sugar has grown in importance. Lipids can be found in animal fat, butterfat, vegetable oil, and leaf vegetables, and they are also used to increase flavor in foods. Protein can be found in virtually all foods, as it makes up cellular material, though certain methods of food processing may reduce the amount of protein in a food. Humans can also obtain energy from ethanol, which is both a food and a drug, but it provides relatively few essential nutrients and is associated with nutritional deficiencies and other health risks.
In humans, poor nutrition can cause deficiency-related diseases, such as blindness, anemia, scurvy, preterm birth, stillbirth and cretinism, or nutrient-excess conditions, such as obesity and metabolic syndrome. Other conditions possibly affected by nutrition disorders include cardiovascular diseases, diabetes, and osteoporosis. Undernutrition can lead to wasting in acute cases, and to stunting in chronic cases of malnutrition.
Domesticated animal
In domesticated animals, such as pets, livestock, and working animals, as well as other animals in captivity, nutrition is managed by humans through animal feed. Fodder and forage are provided to livestock. Specialized pet food has been manufactured since 1860, and subsequent research and development have addressed the nutritional needs of pets. Dog food and cat food in particular are heavily studied and typically include all essential nutrients for these animals. Cats are sensitive to some common nutrients, such as taurine, and require additional nutrients derived from meat. Large-breed puppies are susceptible to overnutrition, as small-breed dog food is more energy dense than they can absorb.
Plant
Most plants obtain nutrients through inorganic substances absorbed from the soil or the atmosphere. Carbon, hydrogen, oxygen, nitrogen, and sulfur are essential nutrients that make up organic material in a plant and allow enzymic processes. These are absorbed ions in the soil, such as bicarbonate, nitrate, ammonium, and sulfate, or they are absorbed as gases, such as carbon dioxide, water, oxygen gas, and sulfur dioxide. Phosphorus, boron, and silicon are used for esterification. They are obtained through the soil as phosphates, boric acid, and silicic acid, respectively. Other nutrients used by plants are potassium, sodium, calcium, magnesium, manganese, chlorine, iron, copper, zinc, and molybdenum.
Plants uptake essential elements from the soil through their roots and from the air (consisting of mainly nitrogen and oxygen) through their leaves. Nutrient uptake in the soil is achieved by cation exchange, wherein root hairs pump hydrogen ions (H+) into the soil through proton pumps. These hydrogen ions displace cations attached to negatively charged soil particles so that the cations are available for uptake by the root. In the leaves, stomata open to take in carbon dioxide and expel oxygen. Although nitrogen is plentiful in the Earth's atmosphere, very few plants can use this directly. Most plants, therefore, require nitrogen compounds to be present in the soil in which they grow. This is made possible by the fact that largely inert atmospheric nitrogen is changed in a nitrogen fixation process to biologically usable forms in the soil by bacteria.
As these nutrients do not provide the plant with energy, they must obtain energy by other means. Green plants absorb energy from sunlight with chloroplasts and convert it to usable energy through photosynthesis.
Fungus
Fungi are chemoheterotrophs that consume external matter for energy. Most fungi absorb matter through the root-like mycelium, which grows through the organism's source of nutrients and can extend indefinitely. The fungus excretes extracellular enzymes to break down surrounding matter and then absorbs the nutrients through the cell wall. Fungi can be parasitic, saprophytic, or symbiotic. Parasitic fungi attach and feed on living hosts, such as animals, plants, or other fungi. Saprophytic fungi feed on dead and decomposing organisms. Symbiotic fungi grow around other organisms and exchange nutrients with them.
Protist
Protists include all eukaryotes that are not animals, plants, or fungi, resulting in great diversity between them. Algae are photosynthetic protists that can produce energy from light. Several types of protists use mycelium similar to those of fungi. Protozoa are heterotrophic protists, and different protozoa seek nutrients in different ways. Flagellate protozoa use a flagellum to assist in hunting for food, and some protozoa travel via infectious spores to act as parasites. Many protists are mixotrophic, having both phototrophic and heterotrophic characteristics. Mixotrophic protists will typically depend on one source of nutrients while using the other as a supplemental source or a temporary alternative when its primary source is unavailable.
Prokaryote
Prokaryotes, including bacteria and archaea, vary greatly in how they obtain nutrients across nutritional groups. Prokaryotes can only transport soluble compounds across their cell envelopes, but they can break down chemical components around them. Some lithotrophic prokaryotes are extremophiles that can survive in nutrient-deprived environments by breaking down inorganic matter. Phototrophic prokaryotes, such as cyanobacteria and Chloroflexia, can engage in photosynthesis to obtain energy from sunlight. This is common among bacteria that form in mats atop geothermal springs. Phototrophic prokaryotes typically obtain carbon from assimilating carbon dioxide through the Calvin cycle.
Some prokaryotes, such as Bdellovibrio and Ensifer, are predatory and feed on other single-celled organisms. Predatory prokaryotes seek out other organisms through chemotaxis or random collision, merge with the organism, degrade it, and absorb the released nutrients. Predatory strategies of prokaryotes include attaching to the outer surface of the organism and degrading it externally, entering the cytoplasm of the organism, or by entering the periplasmic space of the organism. Groups of predatory prokaryotes may forgo attachment by collectively producing hydrolytic enzymes.
See also
Milan Charter (2015 charter on nutrition)
References
Bibliography
External links
Arthropathy
An arthropathy is a disease of a joint.
Types
Arthritis is a form of arthropathy that involves inflammation of one or more joints, while the term arthropathy may be used regardless of whether there is inflammation or not.
Joint diseases can be classified as follows:
Arthritis
Infectious arthritis
Septic arthritis (infectious)
Tuberculosis arthritis
Reactive arthritis (indirectly)
Noninfectious arthritis
Seronegative spondyloarthropathy:
Psoriatic arthritis
Ankylosing spondylitis
Rheumatoid arthritis: Felty's syndrome
Juvenile idiopathic arthritis
Adult-onset Still's disease
Crystal arthropathy
Gout
Chondrocalcinosis
Osteoarthritis
Hemarthrosis (joint bleeding)
Synovitis is the medical term for inflammation of the synovial membrane.
Joint dislocation
With arthropathy in the name
Reactive arthropathy (M02-M03) is caused by an infection, but not a direct infection of the synovial space. (See also Reactive arthritis)
Enteropathic arthropathy (M07) is caused by colitis and related conditions.
Crystal arthropathy (also known as crystal arthritis) (M10-M11) involves the deposition of crystals in the joint.
In gout, the crystal is uric acid.
In pseudogout/chondrocalcinosis/calcium pyrophosphate deposition disease, the crystal is calcium pyrophosphate.
Diabetic arthropathy (M14.2, E10-E14) is caused by diabetes.
Neuropathic arthropathy (M14.6) is associated with a loss of sensation.
Spondylarthropathy is any form of arthropathy of the vertebral column.
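Since the ICD-10 ranges above amount to a simple name-to-code mapping, they can be captured in a small lookup table. A minimal Python sketch, with hypothetical names for the dictionary and helper; it only restates the ranges quoted in the list above and is not an exhaustive ICD mapping:

```python
from typing import Optional

# ICD-10 ranges exactly as quoted in the list above (illustrative, not exhaustive)
ARTHROPATHY_ICD10 = {
    "reactive arthropathy": "M02-M03",
    "enteropathic arthropathy": "M07",
    "crystal arthropathy": "M10-M11",
    "diabetic arthropathy": "M14.2 (with E10-E14)",
    "neuropathic arthropathy": "M14.6",
}

def icd10_range(diagnosis: str) -> Optional[str]:
    """Return the quoted ICD-10 range for a named arthropathy, or None if it is not listed."""
    return ARTHROPATHY_ICD10.get(diagnosis.strip().lower())

print(icd10_range("Crystal arthropathy"))  # 'M10-M11'
```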
Signs and symptoms
Joint pain is a common but non-specific sign of joint disease. Signs will depend on the specific disease, and may even then vary. Common signs may include:
Decreased range of motion
Stiffness
Effusion
Pneumarthrosis, air in a joint (which is also a common normal finding).
Bone erosion
Systemic signs of arthritis such as fatigue
Diagnosis
Diagnosis may be a combination of medical history, physical examination, blood tests and medical imaging (generally X-ray initially).
Treatment
References
External links
Flesh
Flesh is any aggregation of soft tissues of an organism. Various multicellular organisms have soft tissues that may be called "flesh". In mammals, including humans, flesh encompasses muscles, fats and other loose connective tissues, but sometimes excludes non-muscular organs (liver, lung, spleen, kidney) and typically discarded parts (hard tendon, brain tissue, intestines, etc.). More generally, it may be considered the portions of the body that are soft and delicate. In a culinary context, consumable animal flesh is called meat, while processed visceral tissues are known as offal.
In particular animal groups such as vertebrates, molluscs and arthropods, the flesh is distinguished from tougher body structures such as bone, shell and scute, respectively. In plants, the "flesh" is the juicy, edible structures such as the mesocarp of fruits and melons as well as soft tubers, rhizomes and taproots, as opposed to tougher structures like nuts and stems. In fungi, flesh refers to trama, the soft, inner portion of a mushroom, or fruit body.
A more restrictive usage may be found in some contexts, such as the visual arts, where flesh may refer only to visibly exposed human skin, as opposed to parts of the body covered by clothing and hair. Flesh as a descriptor for colour usually refers to the non-melanated pale or pinkish skin colour of white humans; however, it can also be used to refer to the colour of any human skin.
In Christian religious circles, the flesh is a metaphor associated with carnality.
Gallery
References
Critical infrastructure
Critical infrastructure, or critical national infrastructure (CNI) in the UK, describes infrastructure considered essential by governments for the functioning of a society and economy and deserving of special protection for national security. Critical infrastructure has traditionally been viewed as under the scope of government due to its strategic importance, yet there is an observable trend towards its privatization, raising discussions about how the private sector can contribute to these essential services.
Items
Most commonly associated with the term are assets and facilities for:
Shelter and heating (e.g. natural gas, fuel oil, district heating)
Agriculture, food production and distribution
Education, skills development and technology transfer; basic subsistence and unemployment rate statistics
Water supply (drinking water, waste water/sewage, stemming of surface water (e.g. dikes and sluices))
Public health (hospitals, ambulances)
Transportation systems (fuel supply, railway network, airports, harbours, inland shipping)
Security services (police, military)
Electricity generation, transmission and distribution (e.g. natural gas, fuel oil, coal, nuclear power)
Renewable energy sources, which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves and geothermal heat
Telecommunication and coordination for successful operations
Economic sector: goods and services, and financial services (banking, clearing)
Protection programmes
Canada
The Canadian Federal Government identifies the following 10 Critical Infrastructure Sectors as a way to classify essential assets.
Energy & Utilities: Electricity providers; off-shore/on-shore oil & gas; coal supplies, natural gas providers; home fuel oil; gas station supplies; alternative energy suppliers (wind, solar, other)
Information and Communication Technology: Broadcast Media; telecommunication providers (landlines, cell phones, internet, wifi); Postal services;
Finance: Banking services, government finance/aid departments; taxation
Health: Public health & wellness programs, hospital/clinic facilities; blood & blood products
Food: Food supply chains; food inspectors; import/export programs; grocery stores; agriculture & aquaculture; farmers markets
Water: Water supply & protection; wastewater management; fisheries & ocean protection programs
Transportation: Roads, bridges, railways, aviation/airports; shipping & ports; transit
Safety: Emergency responders; public safety programs
Government: Military; Continuity of governance
Manufacturing: Industry, economic development
European Union
European Programme for Critical Infrastructure Protection (EPCIP) refers to the doctrine or specific programs created as a result of the European Commission's directive EU COM(2006) 786 which designates European critical infrastructure that, in case of fault, incident, or attack, could impact both the country where it is hosted and at least one other European Member State. Member states are obliged to adopt the 2006 directive into their national statutes.
It has proposed a list of European critical infrastructures based upon inputs by its member states.
Each designated European Critical Infrastructure (ECI) will have to have an Operator Security Plan (OSP) covering the identification of important assets, a risk analysis based on major threat scenarios and the vulnerability of each asset, and the identification, selection and prioritisation of counter-measures and procedures.
Germany
The German critical-infrastructure protection programme KRITIS is coordinated by the Federal Ministry of the Interior. Specialized agencies such as the German Federal Office for Information Security and the Federal Office of Civil Protection and Disaster Assistance (BBK) deliver the respective content, e.g. guidance on IT systems.
Singapore
In Singapore, critical infrastructures are mandated under the Protected Areas and Protected Places Act. In 2017, the Infrastructure Protection Act was passed in Parliament, which provides for the protection of certain areas, places and other premises in Singapore against security risks. It came into force in 2018.
United Kingdom
In the UK, the National Protective Security Authority (NPSA) provides information, personnel and physical security advice to the businesses and organizations which make up the UK's national infrastructure, helping to reduce its vulnerability to terrorism and other threats.
It can call on resources from other government departments and agencies, including MI5, the National Cyber Security Centre (NCSC) and other government departments responsible for national infrastructure sectors.
United States
The U.S. has had a wide-reaching critical infrastructure protection program in place since 1996. Its Patriot Act of 2001 defined critical infrastructure as those "systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters."
In 2014 the NIST Cybersecurity Framework was published, and quickly became a popular set of guidelines, despite the significant costs of full compliance.
These efforts have identified a number of critical infrastructure sectors and the agencies responsible for them:
Agriculture and food – Departments of Agriculture and Health and Human Services
Water – Environmental Protection Agency
Public Health – Department of Health and Human Services
Emergency Services – Department of Homeland Security
Government – Department of Homeland Security
Defense Industrial Base – Department of Defense
Information and Telecommunications – Department of Commerce
Energy – Department of Energy
Transportation and Shipping – Department of Transportation
Banking and Finance – Department of the Treasury
Chemical Industry and Hazardous Materials – Department of Homeland Security
Post – Department of Homeland Security
National monuments and icons - Department of the Interior
Critical manufacturing - Department of Homeland Security (14th sector announced March 3, 2008; recorded April 30, 2008)
National Infrastructure Protection Plan
The National Infrastructure Protection Plan (NIPP) defines critical infrastructure sector in the US. Presidential Policy Directive 21 (PPD-21), issued in February 2013 entitled Critical Infrastructure Security and Resilience mandated an update to the NIPP. This revision of the plan established the following 16 critical infrastructure sectors:
Chemical
Commercial facilities
Communications
Critical manufacturing
Dams
Defense industrial base
Emergency services
Energy
Financial services
Food and agriculture
Government facilities
Healthcare and public health
Information technology
Nuclear reactors, materials, and waste
Transportation systems
Water and wastewater systems
The National Monuments and Icons sector, along with the postal and shipping sector, was removed in the 2013 update to the NIPP. The 2013 version of the NIPP has faced criticism for lacking viable risk measures. The plan assigns the following agencies sector-specific coordination responsibilities (these assignments are also captured in the sketch after this list):
Department of Homeland Security
Chemical
Commercial facilities
Communications
Critical manufacturing
Dams
Emergency services
Government facilities (jointly with General Services Administration)
Information technology
Nuclear reactors, materials, and waste
Transportation systems (jointly with Department of Transportation)
Department of Defense
Defense industrial base
Department of Energy
Energy
Department of the Treasury
Financial services
Department of Agriculture
Food and agriculture
General Services Administration
Government facilities (jointly with Department of Homeland Security)
Department of Health and Human Services
Healthcare and Public Health
Department of Transportation
Transportation systems (jointly with Department of Homeland Security)
Environmental Protection Agency
Water and wastewater systems
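The sector-to-agency assignments above translate directly into a lookup table. The following minimal Python sketch simply restates the 2013 NIPP assignments listed in this section; the dictionary name and the reverse-lookup helper are illustrative, not part of the plan itself:

```python
# 2013 NIPP sector-specific coordination assignments as listed above;
# joint responsibilities appear under both agencies.
SECTOR_AGENCIES = {
    "Chemical": ["Department of Homeland Security"],
    "Commercial facilities": ["Department of Homeland Security"],
    "Communications": ["Department of Homeland Security"],
    "Critical manufacturing": ["Department of Homeland Security"],
    "Dams": ["Department of Homeland Security"],
    "Defense industrial base": ["Department of Defense"],
    "Emergency services": ["Department of Homeland Security"],
    "Energy": ["Department of Energy"],
    "Financial services": ["Department of the Treasury"],
    "Food and agriculture": ["Department of Agriculture"],
    "Government facilities": ["Department of Homeland Security", "General Services Administration"],
    "Healthcare and public health": ["Department of Health and Human Services"],
    "Information technology": ["Department of Homeland Security"],
    "Nuclear reactors, materials, and waste": ["Department of Homeland Security"],
    "Transportation systems": ["Department of Homeland Security", "Department of Transportation"],
    "Water and wastewater systems": ["Environmental Protection Agency"],
}

def sectors_coordinated_by(agency: str) -> list:
    """Return every sector for which the given agency holds sole or joint coordination responsibility."""
    return [sector for sector, agencies in SECTOR_AGENCIES.items() if agency in agencies]

print(sectors_coordinated_by("Department of Transportation"))  # ['Transportation systems']
```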
State-level legislation
Several U.S. states have passed "critical infrastructure" bills, promoted by the American Legislative Exchange Council (ALEC), to criminalize protests against the fossil fuel industry. In May 2017, Oklahoma passed legislation which created felony penalties for trespassing on land considered critical infrastructure, including oil and gas pipelines, or conspiring to do so; ALEC introduced a version of the bill as a model act and encouraged other states to adopt it. In June 2020, West Virginia passed the Critical Infrastructure Protection Act, which created felony penalties for protests against oil and gas facilities.
Stress testing
Critical infrastructure (CI), such as highways, railways, electric power networks, dams, port facilities, major gas pipelines or oil refineries, is exposed to multiple natural and human-induced hazards and stressors, including earthquakes, landslides, floods, tsunamis, wildfires, climate change effects or explosions. These stressors and abrupt events can cause failures and losses and hence can interrupt essential services for society and the economy. Therefore, CI owners and operators need to identify and quantify the risks posed by their CIs under different stressors, in order to define mitigation strategies and improve the resilience of the CIs. Stress tests are advanced and standardised tools for hazard and risk assessment of CIs, which include both low-probability high-consequence (LP-HC) events and so-called extreme or rare events, as well as the systematic application of these new tools to classes of CI.
Stress testing is the process of assessing the ability of a CI to maintain a certain level of functionality under unfavourable conditions, and stress tests consider LP-HC events, which are not always accounted for in the design and risk assessment procedures commonly adopted by public authorities or industrial stakeholders. A multilevel stress test methodology for CI has been developed in the framework of the European research project STREST, consisting of four phases:
Phase 1: Preassessment, during which the data available on the CI (risk context) and on the phenomena of interest (hazard context) are collected. The goal and objectives, the time frame, the stress test level and the total costs of the stress test are defined.
Phase 2: Assessment, during which the stress test at the component and the system scope is performed, including fragility and risk analysis of the CIs for the stressors defined in Phase 1. The stress test can result in three outcomes: Pass, Partly Pass and Fail, based on the comparison of the quantified risks to acceptable risk exposure levels and a penalty system (a simplified sketch of this grading follows the list below).
Phase 3: Decision, during which the results of the stress test are analyzed according to the goal and objectives defined in Phase 1. Critical events (events that most likely cause the exceedance of a given level of loss) and risk mitigation strategies are identified.
Phase 4: Report, during which the stress test outcome and risk mitigation guidelines based on the findings established in Phase 3 are formulated and presented to the stakeholders.
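The Pass / Partly Pass / Fail grading in Phase 2 boils down to comparing a quantified risk measure against acceptable exposure levels. The following minimal Python sketch illustrates one way such a comparison could be coded; the two-threshold scheme, the function name and all numeric values are illustrative assumptions, not the published STREST penalty system:

```python
from enum import Enum

class Outcome(Enum):
    PASS = "Pass"
    PARTLY_PASS = "Partly Pass"
    FAIL = "Fail"

def grade_stress_test(quantified_risk: float,
                      acceptable_risk: float,
                      tolerable_risk: float) -> Outcome:
    """Grade a CI stress test by comparing the quantified risk (e.g. annual
    probability of exceeding a given loss level) against two illustrative
    exposure thresholds: an acceptable level and a higher tolerable level."""
    if quantified_risk <= acceptable_risk:
        return Outcome.PASS         # within the accepted exposure level
    if quantified_risk <= tolerable_risk:
        return Outcome.PARTLY_PASS  # exceeds acceptable but still tolerable
    return Outcome.FAIL             # exceeds tolerable exposure: mitigation required

# Hypothetical component-level check for a single stressor
print(grade_stress_test(quantified_risk=3.2e-4,
                        acceptable_risk=1e-4,
                        tolerable_risk=1e-3).value)  # Partly Pass
```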
This stress-testing methodology has been demonstrated on six CIs in Europe at component and system level: an oil refinery and petrochemical plant in Milazzo, Italy; a conceptual alpine earth-fill dam in Switzerland; the Baku–Tbilisi–Ceyhan pipeline in Turkey; part of the Gasunie national gas storage and distribution network in the Netherlands; the port infrastructure of Thessaloniki, Greece; and an industrial district in the region of Tuscany, Italy. The outcome of the stress testing included the definition of critical components and events and of risk mitigation strategies, which were formulated and reported to stakeholders.
See also
Industrial antiterrorism
Infrastructure
Infrastructure security
Civil defense
Paramilitary
References
External links
Infracritical: comparison of US and international definitions of infrastructure
Digital Watch - Critical Infrastructure
United States Department of Homeland Security
Meningococcal disease
Meningococcal disease describes infections caused by the bacterium Neisseria meningitidis (also termed meningococcus). It has a high mortality rate if untreated but is vaccine-preventable. While best known as a cause of meningitis, it can also result in sepsis, which is an even more damaging and dangerous condition. Meningitis and meningococcemia are major causes of illness, death, and disability in both developed and under-developed countries.
There are approximately 2,600 cases of bacterial meningitis per year in the United States, and on average 333,000 cases in developing countries. The case fatality rate ranges between 10 and 20 percent. The incidence of endemic meningococcal disease during the last 13 years ranges from 1 to 5 per 100,000 in developed countries, and from 10 to 25 per 100,000 in developing countries. During epidemics the incidence of meningococcal disease approaches 100 per 100,000. Meningococcal vaccines have sharply reduced the incidence of the disease in developed countries.
The disease's pathogenesis is not fully understood. Neisseria meningitidis colonises a substantial proportion of the general population harmlessly, but in a very small percentage of individuals it can invade the bloodstream, affecting the entire body, most notably limbs and brain, causing serious illness. Over the past few years, experts have made an intensive effort to understand specific aspects of meningococcal biology and host interactions; however, the development of improved treatments and effective vaccines is expected to depend on novel efforts by workers in many different fields.
While meningococcal disease is not as contagious as the common cold (which is spread through casual contact), it can be transmitted through saliva and occasionally through close, prolonged general contact with an infected person.
Types
Meningococcemia
Meningococcemia, like many other gram-negative blood infections, can cause disseminated intravascular coagulation (DIC), which is the inappropriate clotting of blood within the vessels. DIC can cause ischemic tissue damage when upstream thrombi obstruct blood flow and hemorrhage because clotting factors are exhausted. Small bleeds into the skin cause the characteristic petechial rash, which appears with a star-like shape. This is due to the release of toxins into the blood that break down the walls of blood vessels. A rash can develop under the skin due to blood leakage that may leave red or brownish pinprick spots, which can develop into purple bruising. Meningococcal rash can usually be confirmed by a glass test in which the rash does not fade away under pressure.
Meningitis
Meningococcal meningitis is a form of bacterial meningitis. Meningitis is a disease caused by inflammation and irritation of the meninges, the membranes surrounding the brain and spinal cord. In meningococcal meningitis this is caused by the bacteria invading the cerebrospinal fluid and circulating through the central nervous system. Sub-Saharan Africa, the Americas, Western Europe, the UK, and Ireland still face many challenges combating this disease.
Other types
As with any gram-negative bacterium, N. meningitidis can infect a variety of sites.
Meningococcal pneumonia can appear during influenza pandemics and in military camps. This is a multilobar, rapidly evolving pneumonia, sometimes associated with septic shock. With prompt treatment, the prognosis is excellent. Pericarditis can appear, either as a septic pericarditis with grave prognosis or as a reactive pericarditis in the wake of meningitis or septicaemia.
Signs and symptoms
Meningitis
The patient with meningococcal meningitis typically presents with high fever, nuchal rigidity (stiff neck), Kernig's sign, severe headache, vomiting, purpura, photophobia, and sometimes chills, altered mental status, or seizures. Diarrhea or respiratory symptoms are less common. Petechiae are often also present, but do not always occur; their absence does not negate a diagnosis of meningococcal disease. Anyone with symptoms of meningococcal meningitis should receive intravenous antibiotics prior to the results of lumbar puncture being known, as delay in treatment can greatly worsen the prognosis.
Meningococcemia
Symptoms of meningococcemia are, at least initially, similar to those of influenza. Typically, the first symptoms include fever, nausea, myalgia, headache, arthralgia, chills, diarrhea, stiff neck, and malaise. Later symptoms include septic shock, purpura, hypotension, cyanosis, petechiae, seizures, anxiety, and multiple organ dysfunction syndrome. Acute respiratory distress syndrome and altered mental status may also occur. The petechial rash appears with a star-like shape. Meningococcal sepsis has a greater mortality rate than meningococcal meningitis, but the risk of neurologic sequelae is much lower.
Pathogenesis
Meningococcal disease causes life-threatening meningitis and sepsis conditions. In the case of meningitis, bacteria attack the lining between the brain and skull called the meninges. Infected fluid from the meninges then passes into the spinal cord, causing symptoms including stiff neck, fever and rashes. The meninges (and sometimes the brain itself) begin to swell, which affects the central nervous system.
Even with antibiotics, approximately 1 in 10 people who have meningococcal meningitis will die; however, about as many survivors of the disease lose a limb or their hearing, or experience permanent brain damage. The sepsis type of infection is much more deadly, and results in a severe blood poisoning called meningococcal sepsis that affects the entire body. In this case, bacterial toxins rupture blood vessels and can rapidly shut down vital organs. Within hours, a patient's health can change from seemingly good to mortally ill.
The N. meningitidis bacterium is surrounded by a slimy outer coat that contains disease-causing endotoxin. While many bacteria produce endotoxin, the levels produced by meningococcal bacteria are 100 to 1,000 times greater (and accordingly more lethal) than normal. As the bacteria multiply and move through the bloodstream, they shed concentrated amounts of toxin. The endotoxin directly affects the heart, reducing its ability to circulate blood, and also causes pressure on blood vessels throughout the body. As some blood vessels start to hemorrhage, major organs like the lungs and kidneys are damaged.
Patients with meningococcal disease are treated with a large dose of antibiotic. The systemic antibiotic flowing through the bloodstream rapidly kills the bacteria but, as the bacteria are killed, even more toxin is released. It can take several days for the toxin to be neutralized and cleared from the body with continuous fluid treatment and antibiotic therapy.
Prevention
The most effective method of prevention is a vaccine against N. meningitidis. Different countries have different strains of the bacteria and therefore use different vaccines. Twelve serogroups (strains) exist, of which six have the potential to cause a major epidemic; A, B, C, X, Y and W135 are responsible for virtually all cases of the disease in humans. Vaccines are currently available against all six strains, including a newer vaccine against serogroup B. The first vaccine to prevent meningococcal serogroup B (meningitis B) disease was approved by the European Commission on 22 January 2013.
Vaccines offer significant protection from three to five years (plain polysaccharide vaccine Menomune, Mencevax and NmVac-4) to more than eight years (conjugate vaccine Menactra).
Vaccinations
Children
Children 2–10 years of age who are at high risk for meningococcal disease, such as those with certain chronic medical conditions or those who travel to or reside in countries with hyperendemic or epidemic meningococcal disease, should receive primary immunization. Although the safety and efficacy of the vaccine have not been established in children younger than 2 years of age, the unconjugated vaccine can be considered for outbreak control in this group.
Adolescents
Primary immunization against meningococcal disease with meningitis A, C, Y and W-135 vaccines is recommended for all young adolescents at 11–12 years of age and all unvaccinated older adolescents at 15 years of age. Although conjugate vaccines are the preferred meningococcal vaccine in adolescents 11 years of age or older, polysaccharide vaccines are an acceptable alternative if the conjugated vaccine is unavailable.
Adults
Primary immunization with meningitis A, C, Y and W-135 vaccines is recommended for college students who plan to live in dormitories, although the risk for meningococcal disease for college students 18–24 years of age is similar to that of the general population of similar age.
Routine primary immunization against meningococcal disease is recommended for most adults living in areas where meningococcal disease is endemic or who are planning to travel to such areas. Although conjugate vaccines are the preferred meningococcal vaccine in adults 55 years of age or younger, polysaccharide vaccines are an acceptable alternative for adults in this age group if the conjugated vaccine is unavailable. Since safety and efficacy of conjugate vaccines in adults older than 55 years of age have not been established to date, polysaccharide vaccines should be used for primary immunization in this group.
Medical staff
Routine immunization against meningococcal disease is recommended for laboratory personnel who are routinely exposed to isolates of N. meningitidis. Laboratory personnel and medical staff are at risk of exposure to N. meningitidis or to patients with meningococcal disease. The Hospital Infection Control Practices Advisory Committee (HICPAC) recommends routine vaccination of health-care personnel. Any individual 11–55 years of age who wishes to reduce their risk of meningococcal disease may receive meningitis A, C, Y and W-135 vaccines, as may those older than 55 years of age. Unvaccinated health-care personnel who have intensive contact with the oropharyngeal secretions of infected patients and who do not use proper precautions should receive anti-infective prophylaxis against meningococcal infection (i.e., a 2-day regimen of oral rifampicin, a single dose of IM ceftriaxone, or a single dose of oral ciprofloxacin).
USA military recruits
Because the risk of meningococcal disease is increased among U.S. military recruits, all military recruits routinely receive primary immunization against the disease.
Travelers
Immunization against meningococcal disease is not a requirement for entry into any country, unlike yellow fever. Only Saudi Arabia requires that travelers to that country for the annual Hajj and Umrah pilgrimage have a certificate of vaccination against meningococcal disease, issued not more than 3 years and not less than 10 days before arrival in Saudi Arabia.
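The Saudi requirement just described is effectively a date-window check on the vaccination certificate. A minimal Python sketch, assuming the window exactly as quoted (issued no more than 3 years and no less than 10 days before arrival) and approximating three years as 1,095 days:

```python
from datetime import date, timedelta

def certificate_valid_for_hajj(issued: date, arrival: date) -> bool:
    """Return True if the certificate was issued not more than 3 years and
    not less than 10 days before the arrival date (3 years ~= 1,095 days)."""
    age = arrival - issued
    return timedelta(days=10) <= age <= timedelta(days=3 * 365)

print(certificate_valid_for_hajj(date(2023, 1, 15), date(2024, 6, 1)))  # True
print(certificate_valid_for_hajj(date(2024, 5, 28), date(2024, 6, 1)))  # False: issued under 10 days before arrival
```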
Travelers to or residents of areas where N. meningitidis is highly endemic or epidemic are at risk of exposure and should receive primary immunization against meningococcal disease.
HIV-infected individuals
HIV-infected individuals are likely to be at increased risk for meningococcal disease; HIV-infected individuals who wish to reduce their risk of meningococcal disease may receive primary immunization against it. Although the efficacy of meningitis A, C, Y and W-135 vaccines has not been evaluated in HIV-infected individuals to date, HIV-infected individuals 11–55 years of age may receive primary immunization with the conjugated vaccine. Vaccination against meningitis does not decrease CD4+ T-cell counts or increase viral load in HIV-infected individuals, and there has been no evidence that the vaccines adversely affect survival.
Close contacts
Because protective levels of anticapsular antibodies are not achieved until 7–14 days following administration of a meningococcal vaccine, vaccination cannot prevent early-onset disease in these contacts and usually is not recommended following sporadic cases of invasive meningococcal disease. Unlike in developed countries, in sub-Saharan Africa and other underdeveloped countries entire families often live in a single room of a house.
Meningococcal infection is usually introduced into a household by an asymptomatic person. Carriage then spreads through the household, reaching infants usually after one or more other household members have been infected. Disease is most likely to occur in infants and young children who lack immunity to the strain of organism circulating and who subsequently acquire carriage of an invasive strain.
Close contacts are defined as those persons who could have had intimate contact with the patient's oral secretions, such as through kissing or sharing of food or drink. The importance of the carrier state in meningococcal disease is well known. In developed countries, transmission usually occurs in day-care centres, schools and large gatherings. Because the meningococcal organism is transmitted by respiratory droplets and is susceptible to drying, it has been postulated that close contact is necessary for transmission; transmission to other susceptible persons therefore cannot be entirely prevented. Meningitis occurs sporadically throughout the year, and since the organism has no known reservoir outside of humans, asymptomatic carriers are usually the source of transmission.
Additionally, basic hygiene measures, such as handwashing and not sharing drinking cups, can reduce the incidence of infection by limiting exposure. When a case is confirmed, all close contacts of the infected person can be offered antibiotics to reduce the likelihood of the infection spreading to other people. However, rifampin-resistant strains have been reported, and the indiscriminate use of antibiotics contributes to this problem. Chemoprophylaxis is commonly offered to those close contacts who are at highest risk of carrying the pathogenic strains. Since the duration of vaccine protection is unknown, selective mass vaccination may be a more cost-effective means of controlling transmission than routine mass vaccination schedules.
Chronic medical conditions
Persons with component deficiencies in the final common complement pathway (C3, C5-C9) are more susceptible to N. meningitidis infection than complement-sufficient persons, and it has been estimated that the risk of infection is 7,000 times higher in such individuals. In addition, complement component-deficient individuals frequently experience recurrent meningococcal disease, since their immune response to natural infection may be less complete than that of complement non-deficient persons.
Inherited properdin deficiency is also associated with an increased risk of contracting meningococcal disease. Persons with functional or anatomic asplenia may not efficiently clear encapsulated Neisseria meningitidis from the bloodstream. Persons with other conditions associated with immunosuppression may also be at increased risk of developing meningococcal disease.
Antibiotics
An updated 2013 Cochrane review investigated the effectiveness of different antibiotics for prophylaxis against meningococcal disease and eradication of N. meningitidis particularly in people at risk of being carriers. The systematic review included 24 studies with 6,885 participants. During follow up no cases of meningococcal disease were reported and thus true antibiotic preventative measures could not be directly assessed. However, the data suggested that rifampin, ceftriaxone, ciprofloxacin and penicillin were equally effective for the eradication of N. meningitidis in potential carriers, although rifampin was associated with resistance to the antibiotic following treatment. Eighteen studies provided data on side effects and reported they were minimal but included nausea, abdominal pain, dizziness and pain at injection site.
Disease outbreak control
Meningitis A, C, Y and W-135 vaccines can be used for large-scale vaccination programs when an outbreak of meningococcal disease occurs in Africa and other regions of the world. Whenever sporadic or cluster cases or outbreaks of meningococcal disease occur in the US, chemoprophylaxis is the principal means of preventing secondary cases in household and other close contacts of individuals with invasive disease. Meningitis A, C, Y and W-135 vaccines may rarely be used as an adjunct to chemoprophylaxis, but only in situations where there is an ongoing risk of exposure (e.g., when cluster cases or outbreaks occur) and when a serogroup contained in the vaccine is involved.
It is important that clinicians promptly report all cases of suspected or confirmed meningococcal disease to local public health authorities and that the serogroup of the meningococcal strain involved be identified. The effectiveness of mass vaccination programs depends on early and accurate recognition of outbreaks. When a suspected outbreak of meningococcal disease occurs, public health authorities will then determine whether mass vaccination (with or without mass chemoprophylaxis) is indicated and delineate the target population to be vaccinated based on risk assessment.
Treatment
When meningococcal disease is suspected, treatment must be started immediately and should not be delayed while waiting for investigations. Treatment in primary care usually involves prompt intramuscular administration of benzylpenicillin, followed by urgent transfer to hospital (preferably an academic level I medical center, or at least a hospital with round-the-clock neurological care, ideally with neurological intensive and critical care units) for further care. Once in the hospital, the antibiotics of choice are usually IV broad-spectrum 3rd generation cephalosporins, e.g., cefotaxime or ceftriaxone. Benzylpenicillin and chloramphenicol are also effective. Supportive measures include IV fluids, oxygen, inotropic support (e.g., dopamine or dobutamine) and management of raised intracranial pressure. Steroid therapy may help in some adult patients, but is unlikely to affect long-term outcomes.
There is some debate on which antibiotic is most effective at treating the illness. A systematic review compared two antibiotics. There was one trial: an open-label (not blinded) non-inferiority trial of 510 people comparing two different types of antibiotics: ceftriaxone (14 deaths out of 247) and chloramphenicol (12 deaths out of 256). There were no reported side effects. Both antibiotics were considered equally effective. Antibiotic choice should be based on local antibiotic resistance information.
Prognosis
Complications
Complications following meningococcal disease can be divided into early and late groups. Early complications include: raised intracranial pressure, disseminated intravascular coagulation, seizures, circulatory collapse and organ failure. Later complications are: deafness, blindness, lasting neurological deficits, reduced IQ, and gangrene leading to amputations.
Epidemiology
Africa
In Africa, meningococcal disease is as significant a public-health burden as HIV, TB and malaria. Cases of meningococcemia leading to severe meningoencephalitis are common among young children and the elderly. Deaths occurring in less than 24 hours are more likely during the epidemic seasons, and Sub-Saharan Africa is hit by meningitis outbreaks throughout the epidemic season. It may be that climate change contributes significantly to the spread of the disease in Benin, Burkina Faso, Cameroon, the Central African Republic, Chad, Côte d'Ivoire, the Democratic Republic of the Congo, Ethiopia, Ghana, Mali, Niger, Nigeria and Togo. This is an area of Africa where the disease is endemic: meningitis is "silently" present, and there are always a few cases. When the number of cases passes five per 100,000 population in one week, teams are on alert. Epidemic levels are reached when there have been 100 cases per 100,000 population over several weeks.
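The alert and epidemic thresholds just described are simple incidence calculations: case counts scaled to a population of 100,000 and compared against the stated cut-offs. A minimal Python sketch, assuming the thresholds exactly as quoted above; the windowing over "several weeks" and the example figures are illustrative assumptions:

```python
def incidence_per_100k(cases: int, population: int) -> float:
    """Cases expressed per 100,000 population."""
    return cases / population * 100_000

def surveillance_status(weekly_cases: list, population: int) -> str:
    """Classify a district using the thresholds quoted in the text:
    alert when more than 5 cases per 100,000 occur in a single week,
    epidemic when cumulative incidence over the recent weeks reaches
    100 per 100,000."""
    cumulative = incidence_per_100k(sum(weekly_cases), population)
    latest = incidence_per_100k(weekly_cases[-1], population)
    if cumulative >= 100:
        return "epidemic"
    if latest > 5:
        return "alert"
    return "endemic"

# Hypothetical district of 250,000 people reporting cases over four weeks
print(surveillance_status([4, 9, 21, 38], population=250_000))  # 'alert' (latest week ~15.2 per 100,000)
```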
Further complicating efforts to halt the spread of meningitis in Africa is the fact that the extremely dry, dusty weather conditions which characterize Niger and Burkina Faso from December to June favor the development of epidemics. Overcrowded villages are breeding grounds for bacterial transmission and lead to a high prevalence of respiratory tract infections, which leave the body more susceptible to infection and encourage the spread of meningitis. IRIN Africa news has been reporting the number of deaths for each country since 1995; in the United States, the CDC carried out a mass vaccination campaign following a community outbreak of meningococcal disease in Florida.
Florida
As of June 2022, there is an ongoing outbreak of the disease in Florida. The CDC has identified 26 cases of the disease. Seven deaths have been attributed to the disease.
History and etymology
From the Greek meninx (membrane) + kokkos (berry), meningococcal disease was first described by Gaspard Vieusseux during an outbreak in Geneva in 1805. In 1884, Italian pathologists Ettore Marchiafava and Angelo Celli described intracellular micrococci in cerebrospinal fluid, and in 1887, Anton Wiechselbaum identified the meningococcus (designated as Diplococcus intracellularis meningitidis) in cerebrospinal fluid and established the connection between the organism and epidemic meningitis.
See also
Endocarditis
Pathogenic bacteria
Waterhouse–Friderichsen syndrome
African meningitis belt
2009–10 West African meningitis outbreak
Meningococcal vaccine
Meningitis Vaccine Project
References
Further reading
External links
Comorbidity
In medicine, comorbidity refers to the simultaneous presence of two or more medical conditions in a patient, often co-occurring (that is, concomitant or concurrent) with a primary condition. It originates from the Latin morbus (meaning "sickness") prefixed with com- ("together") and suffixed with -ity (to indicate a state or condition). Comorbidity includes all additional ailments a patient may experience alongside their primary diagnosis, which can be either physiological or psychological in nature. In the context of mental health, comorbidity frequently refers to the concurrent existence of mental disorders, for example, the co-occurrence of depressive and anxiety disorders. The concept of multimorbidity is related to comorbidity but is different in its definition and approach, focusing on the presence of multiple diseases or conditions in a patient without the need to specify one as primary.
Definition
The term "comorbid" has three definitions:
to indicate a medical condition existing simultaneously but independently with another condition in a patient.
to indicate a medical condition in a patient that causes, is caused by, or is otherwise related to another condition in the same patient.
to indicate two or more medical conditions existing simultaneously regardless of their causal relationship.
Comorbidity can indicate either a condition existing simultaneously, but independently with another condition or a related derivative medical condition. The latter sense of the term causes some overlap with the concept of complications. For example, in longstanding diabetes mellitus, the extent to which coronary artery disease is an independent comorbidity versus a diabetic complication is not easy to measure, because both diseases are quite multivariate and there are likely aspects of both simultaneity and consequence. The same is true of intercurrent diseases in pregnancy. In other examples, the true independence or relation is not ascertainable because syndromes and associations are often identified long before pathogenetic commonalities are confirmed (and, in some examples, before they are even hypothesized). In psychiatric diagnoses it has been argued that "the use of imprecise language may lead to correspondingly imprecise thinking", and that this usage of the term 'comorbidity' should probably be avoided. However, in many medical examples, such as comorbid diabetes mellitus and coronary artery disease, it makes little difference which word is used, as long as the medical complexity is duly recognized and addressed.
Difference from multimorbidity
Comorbidity is often referred to as multimorbidity even though the two are considered distinct clinical scenarios.
Comorbidity means that one 'index' condition is the focus of attention, and others are viewed in relation to this. In contrast, multimorbidity describes someone having two or more long-term (chronic) conditions without any of them holding priority over the others. This distinction is important in how the healthcare system treats people and helps make clear the specific settings in which the use of one or the other term is preferred. Multimorbidity offers a more general and person-centered concept that allows focusing on all of the patient's symptoms and providing more holistic care. In other settings, for example in pharmaceutical research, comorbidity might often be the more useful term to use.
Mental health
In psychiatry, psychology, and mental health counseling, comorbidity refers to the presence of more than one diagnosis occurring in an individual at the same time. However, in psychiatric classification, comorbidity does not necessarily imply the presence of multiple diseases, but instead can reflect a current inability to supply a single diagnosis accounting for all symptoms. On DSM Axis I, major depressive disorder is a very common comorbid disorder. The Axis II personality disorders are often criticized because their comorbidity rates are excessively high, approaching 60% in some cases. Critics assert this indicates these categories of mental illness are too imprecisely distinguished to be usefully valid for diagnostic purposes, impacting treatment and resource allocation. Symptom overlap is a key argument against the DSM classification and points towards redefining criteria for disorders whose root cause may not be thoroughly understood. Criticisms aside, up to 45% of mental health patients fit the criteria for a comorbid diagnosis in a given year. A comorbid diagnosis is associated with more severe symptomatic expression and a greater chance of a poor prognosis. Certain diagnoses, such as ADHD, autism, OCD, and mood disorders, have higher rates of co-occurrence. "Comorbidity in OCD is the rule rather than the exception", with OCD diagnoses facing a lifetime comorbidity rate of up to 90%. With overlapping symptoms comes overlap in treatment as well: CBT, for example, is common for both pediatric-onset ADHD and OCD and can be effective for both in a comorbid diagnosis. OCD and eating disorders co-occur at a high rate; it is estimated that 20-60% of patients with an eating disorder have OCD. More often, comorbidity complicates treatment and can reduce its efficacy to a varying degree depending on the circumstances.
The term 'comorbidity' was introduced in medicine by Feinstein (1970) to describe cases in which a 'distinct additional clinical entity' occurred before or during treatment for the 'index disease', the original or primary diagnosis. Since the terms were coined, meta-studies have shown that criteria used to determine the index disease were flawed and subjective, and moreover, trying to identify an index disease as the cause of the others can be counterproductive to understanding and treating interdependent conditions. In response, 'multimorbidity' was introduced to describe concurrent conditions without relativity to or implied dependency on another disease, so that complex interactions emerge naturally under analysis of the system as a whole.
Although the term 'comorbidity' has recently become very fashionable in psychiatry, its use to indicate the concomitance of two or more psychiatric diagnoses is said to be incorrect because in most cases it is unclear whether the concomitant diagnoses actually reflect the presence of distinct clinical entities or refer to multiple manifestations of a single clinical entity. It has been argued that because "'the use of imprecise language may lead to correspondingly imprecise thinking', this usage of the term 'comorbidity' should probably be avoided".
Due to its artifactual nature, psychiatric comorbidity has been considered as a Kuhnian anomaly leading the DSM to a scientific crisis and a comprehensive review on the matter considers comorbidity as an epistemological challenge to modern psychiatry. The Hierarchical Taxonomy of Psychopathology is a leading alternative classification system that addresses these concerns about comorbidity.
History
Widespread study of physical and mental pathology found its place in psychiatry. I. Jensen (1975), J.H. Boyd (1984), W.C. Sanderson (1990), Yuri Nuller (1993), D.L. Robins (1994), A. B. Smulevich (1997), C.R. Cloninger (2002) and other psychiatrists discovered a number of comorbid conditions in those with psychiatric disorders.
The influence of comorbidity on the clinical progression of the primary (basic) physical disorder, effectiveness of the medicinal therapy and immediate and long-term prognosis of the patients was researched by physicians and scientists of various medical fields in many countries across the globe. These scientists and physicians included: M. H. Kaplan (1974), T. Pincus (1986), M. E. Charlson (1987), F. G. Schellevis (1993), H. C. Kraemer (1995), M. van den Akker (1996), A. Grimby (1997), S. Greenfield (1999), M. Fortin (2004) & A. Vanasse (2004), C. Hudon (2005), L. B. Lazebnik (2005), A. L. Vertkin (2008), G. E. Caughey (2008), F. I. Belyalov (2009), L. A. Luchikhin (2010) and many others.
Inception of the term
For many centuries, doctors advocated a comprehensive approach to diagnosing disease and treating the patient; modern medicine, however, which boasts a wide range of diagnostic methods and a variety of therapeutic procedures, stresses specialization. This raised a question: how can one wholly evaluate the state of a patient who has a number of diseases simultaneously, where should one start, and which disease(s) require primary and subsequent treatment? For many years this question remained unanswered, until 1970, when A. R. Feinstein, a renowned American physician, epidemiologist and researcher who had greatly influenced the methods of clinical diagnosis and particularly those used in clinical epidemiology, coined the term "comorbidity". Feinstein demonstrated comorbidity using the example of patients with rheumatic fever, finding that those who simultaneously had multiple diseases fared worst. In due course, comorbidity became a separate field of scientific research in many branches of medicine.
Evolution of the term
Presently there is no agreed-upon terminology for comorbidity. Some authors distinguish between comorbidity and multimorbidity, defining the former as the presence in a patient of a number of diseases connected to each other through proven pathogenetic mechanisms, and the latter as the presence of a number of diseases without any connection through pathogenetic mechanisms proven to date. Others affirm that multimorbidity is the combination of a number of chronic or acute diseases and clinical symptoms in a person, without stressing the similarities or differences in their pathogenesis. However, the principal clarification of the term was given by H. C. Kraemer and M. van den Akker, who defined comorbidity as the combination in a patient of two or more chronic diseases (disorders) pathogenetically related to each other, or coexisting in a single patient independent of each disease's activity.
Synonyms
Polymorbidity
Multifactorial diseases
Polypathy
Dual diagnosis, used for mental health issues
Pluralpathology
Epidemiology
Comorbidity is widespread among patients admitted to multidisciplinary hospitals. During the phase of initial medical care, patients with multiple simultaneous diseases are the norm rather than the exception. Prevention and treatment of chronic diseases, declared by the World Health Organization a priority project for the second decade of the 21st century, are meant to improve quality of life for the global population. This explains the overall tendency towards large-scale epidemiological research in different medical fields, carried out using robust statistical data. In most randomized clinical trials, the authors study patients with a single refined pathology, making comorbidity an exclusion criterion. This is why it is hard to relate studies directed towards the evaluation of combinations of particular separate disorders to work devoted solely to comorbidity. The absence of a single scientific approach to the evaluation of comorbidity leads to omissions in clinical practice. It is hard not to notice the absence of comorbidity from the taxonomy (systematics) of disease presented in ICD-10.
Clinico-pathological comparisons
Most of the fundamental studies of medical documentation directed towards studying the spread of comorbidity and the influence of its structure were conducted before the 1990s. The sources of information used by researchers and scientists working on comorbidity were case histories, hospital records of patients and other medical documentation kept by family doctors, insurance companies and even the archives of residents of care homes for the elderly.
These methods of obtaining medical information rely largely on the clinical experience and qualifications of the physicians making clinically, instrumentally and laboratory-confirmed diagnoses, and so, despite their competence, they are highly subjective. No analysis of the results of postmortem examinations of deceased patients was carried out for any of these comorbidity studies.
"It is the duty of the doctor to carry out autopsy of the patients they treat", said once professor M. Y. Mudrov. Autopsy allows you to exactly determine the structure of comorbidity and the direct cause of death of each patient independent of his/her age, gender and gender specific characteristics. Statistical data of comorbid pathology, based on these sections, are mainly devoid of subjectivism.
Research
The analysis of a decade-long Australian study of patients with six widespread chronic diseases demonstrated that nearly half of the elderly patients with arthritis also had hypertension, 20% had cardiac disorders and 14% had type 2 diabetes. More than 60% of asthmatic patients complained of concurrent arthritis, 20% complained of cardiac problems and 16% had type 2 diabetes.
In patients with chronic kidney disease (renal insufficiency) the frequency of coronary heart disease is 22% higher and new coronary events 3.4 times higher compared to patients without kidney function disorders. Progression of CKD towards end stage renal disease requiring renal replacement therapy is accompanied by increasing prevalence of Coronary Heart Disease and sudden death from cardiac arrest.
In a Canadian study of 483 obese patients, it was determined that the prevalence of obesity-related accompanying diseases was higher among females than males. The researchers discovered that nearly 75% of obese patients had accompanying diseases, which mostly included dyslipidemia, hypertension and type 2 diabetes. Among young obese patients (aged 18 to 29), more than two chronic diseases were found in 22% of males and 43% of females.
Fibromyalgia is a condition which is comorbid with several others, including but not limited to; depression, anxiety, headache, irritable bowel syndrome, chronic fatigue syndrome, systemic lupus erythematosus, rheumatoid arthritis, migraine, and panic disorder.
The number of comorbid diseases increases with age. Comorbidity is around 10% in people aged up to 19 years and rises to as much as 80% in people aged 80 and older. According to data by M. Fortin, based on the analysis of 980 case histories taken from the daily practice of a family doctor, the prevalence of comorbidity ranges from 69% in young patients to 93% among middle-aged people and up to 98% in patients of older age groups. At the same time the number of chronic diseases varies from 2.8 in young patients to 6.4 among older patients.
According to Russian data, based on a study of more than three thousand postmortem reports (n=3239) of patients with physical pathologies admitted to multidisciplinary hospitals for the treatment of chronic disorders (average age 67.8 ± 11.6 years), the frequency of comorbidity is 94.2%. Doctors mostly come across combinations of two to three disorders, but in rare cases (up to 2.7%) a single patient carried a combination of 6–8 diseases simultaneously.
A fourteen-year British study of 883 patients with idiopathic thrombocytopenic purpura (Werlhof disease) showed that the disease is related to a wide range of physical pathologies. Most frequently present in the comorbid structure of these patients are malignant neoplasms, disorders of the locomotor system, skin and genitourinary system disorders, as well as haemorrhagic complications and other autoimmune diseases, whose risk of developing during the first five years of the primary disease exceeds 5%.
In a study of 196 larynx cancer patients, it was determined that survival at various stages of cancer differs depending on the presence or absence of comorbidity. At the first stage of cancer, the survival rate is 17% in the presence of comorbidity and 83% in its absence; at the second stage, 14% and 76%; at the third stage, 28% and 66%; and at the fourth stage, 0% and 50%, respectively. Overall, the survival rate of comorbid larynx cancer patients is 59% lower than that of patients without comorbidity.
Besides therapists and general physicians, specialists also often face the problem of comorbidity. Regrettably, they seldom pay attention to the coexistence of a whole range of disorders in a single patient and mostly treat only the diseases specific to their specialization. In current practice, urologists, gynecologists, ENT specialists, eye specialists, surgeons and other specialists all too often mention only the diseases related to their own field of specialization, passing the discovery of other accompanying pathologies "under the control" of other specialists. It has become an unspoken rule for any specialized department to request consultation from the therapist, who feels obliged to carry out a symptomatic analysis of the patient and to form the diagnostic and therapeutic concept, taking into account the potential risks for the patient and their long-term prognosis.
Based on the available clinical and scientific data, it is possible to conclude that comorbidity has a range of undoubted properties that characterize it as a heterogeneous and frequently encountered phenomenon which increases the seriousness of the condition and worsens the patient's prospects. The heterogeneous character of comorbidity is due to the wide range of causes underlying it.
Causes
Anatomic proximity of diseased organs
Singular pathogenetic mechanism of a number of diseases
Terminable cause-effect relation between the diseases
One disease resulting from complications of another
Pleiotropy
The factors responsible for the development of comorbidity can be chronic infections, inflammations, involutional and systemic metabolic changes, iatrogenesis, social status, ecology and genetic susceptibility.
Types
Trans-syndromal comorbidity: coexistence, in a single patient, of two or more syndromes that are pathogenetically related to each other.
Trans-nosological comorbidity: coexistence, in a single patient, of two or more syndromes that are not pathogenetically related to each other.
The division of comorbidity according to syndromal and nosological principles is largely preliminary and imprecise; however, it allows us to understand that comorbidity can be connected to a single cause or to common mechanisms of pathogenesis of the conditions, which sometimes explains the similarity in their clinical presentation and makes it difficult to differentiate between nosologies.
Etiological comorbidity: caused by concurrent damage to different organs and systems by a single pathological agent (for example, organ damage due to alcoholism in patients with chronic alcohol intoxication; pathologies associated with smoking; systemic damage due to collagenoses).
Complicated comorbidity: the result of the primary disease, which, some time after its destabilization, appears in the form of target-organ lesions (for example, chronic kidney failure resulting from diabetic nephropathy (Kimmelstiel–Wilson disease) in patients with type 2 diabetes; brain infarction resulting from a complicated hypertensive crisis in patients with hypertension).
Iatrogenic comorbidity: appears as a result of an unavoidable negative effect of medical intervention on the patient, given the known danger of a particular medical procedure (for example, glucocorticosteroid osteoporosis in patients treated for a long time with systemic hormonal agents; drug-induced hepatitis resulting from anti-tuberculosis chemotherapy prescribed because of tuberculin test conversion).
Unspecified (NOS) comorbidity: this type assumes the presence of common pathogenetic mechanisms behind the combination of diseases, but requires a number of tests to prove the hypothesis of the researcher or physician (for example, erectile dysfunction as an early sign of generalized atherosclerosis (ASVD); the occurrence of erosive-ulcerative lesions in the mucous membrane of the upper gastrointestinal tract in "vascular" patients).
"Arbitrary" comorbidity: initial alogism of the combination of diseases is not proven, but soon can be explained with clinical and scientific point of view (for example, combination of coronary heart disease (CHD) and choledocholithiasis; combination of acquired heart valvular disease and psoriasis).
Structure
There are a number of rules for the formulation of a clinical diagnosis for comorbid patients that a practitioner should follow. The main principle is to distinguish, within the diagnosis, the primary and background diseases, as well as their complications and accompanying pathologies.
Primary disease: the nosological form which, by itself or through its complications, creates the most urgent need for treatment at the given time owing to the threat to the patient's life and the danger of disability. Primary is the disease that becomes the reason for seeking medical help or the cause of the patient's death. If the patient has several primary diseases, it is important first of all to identify the combined primary diseases (rival or concomitant).
Rival diseases: concurrent nosological forms in a patient, independent of each other in etiology and pathogenesis, but equally meeting the criteria of a primary disease (for example, transmural myocardial infarction and massive thromboembolism of the pulmonary artery caused by phlebothrombosis of the lower limbs). For a practicing pathologist, rival diseases are two or more diseases exhibited in a single patient, each of which, by itself or through its complications, could have caused the patient's death.
Polypathia: diseases with different etiologies and pathogenesis, none of which separately could cause death, but which, concurring during their development and mutually exacerbating each other, cause the patient's death (for example, an osteoporotic fracture of the femoral neck and hypostatic pneumonia).
Background disease: this contributes to the occurrence or adverse development of the primary disease, increases its dangers and promotes the development of complications. Like the primary disease, it requires immediate treatment (for example, type 2 diabetes).
Complications: nosologies pathogenetically related to the primary disease which support the adverse progression of the disorder and cause acute worsening of the patient's condition (they form part of complicated comorbidity). In a number of cases, complications of the primary disease that are related to it through etiological and pathogenetic factors are indicated as a conjugated disease; in this case they must be identified as a cause of comorbidity. Complications are listed in descending order of prognostic or disabling significance.
Accompanying diseases: nosological units not connected etiologically or pathogenetically with the primary disease (listed in order of significance).
Diagnosis
Many tests attempt to standardize the "weight" or value of comorbid conditions, whether they are secondary or tertiary illnesses. Each test attempts to consolidate each individual comorbid condition into a single, predictive variable that measures mortality or other outcomes. Researchers have validated such tests because of their predictive value, but no one test is as yet recognized as a standard.
Charlson Comorbidity Index (CCI)
The Charlson Comorbidity Index predicts the mortality for a patient who may have a range of comorbid conditions, such as heart disease, AIDS, or cancer (a total of 17 conditions). Each condition is assigned a score of 1, 2, 3, or 6, depending on the risk of dying associated with each one. Scores are summed to provide a total score to predict mortality. Many variations of the Charlson comorbidity index have been presented, including the Charlson/Deyo, Charlson/Romano, Charlson/Manitoba, and Charlson/D'Hoores comorbidity indices.
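As a rough illustration of the summing logic described above, the following Python sketch adds up condition weights from a lookup table. The table here contains only a handful of illustrative entries, not the complete validated 17-condition weighting, and the function names are hypothetical.

```python
# Illustrative sketch of Charlson-style scoring: sum the weights of the
# conditions a patient has. The weights below are a small illustrative
# subset, not the full 17-condition table.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes_without_complications": 1,
    "moderate_or_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
    "aids": 6,
}

def charlson_score(conditions):
    """Sum the weights of all recognized comorbid conditions."""
    return sum(CHARLSON_WEIGHTS.get(c, 0) for c in conditions)

# Example: a patient with a prior myocardial infarction and uncomplicated
# diabetes would score 1 + 1 = 2 on this (partial) table.
print(charlson_score(["myocardial_infarction", "diabetes_without_complications"]))
```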
For a physician, this score is helpful in deciding how aggressively to treat a condition. For example, a patient may have cancer with comorbid heart disease and diabetes. These comorbidities may be so severe that the costs and risks of cancer treatment would outweigh its short-term benefit.
Since patients often do not know how severe their conditions are, nurses were originally supposed to review a patient's chart and determine whether a particular condition was present in order to calculate the index. Subsequent studies have adapted the comorbidity index into a questionnaire for patients.
The Charlson index, especially the Charlson/Deyo variant, followed by the Elixhauser measure, have been the indices most commonly referred to in comparative studies of comorbidity and multimorbidity measures.
Comorbidity–Polypharmacy Score (CPS)
The comorbidity–polypharmacy score (CPS) is a simple measure that consists of the sum of all known comorbid conditions and all associated medications. There is no specific matching between comorbid conditions and corresponding medications. Instead, the number of medications is assumed to be a reflection of the "intensity" of the associated comorbid conditions. This score has been tested and validated extensively in the trauma population, demonstrating good correlation with mortality, morbidity, triage, and hospital readmissions. Of interest, increasing levels of CPS were associated with significantly lower 90-day survival in the original study of the score in trauma population.
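Because the CPS is a raw count, its calculation is straightforward; the following minimal Python sketch, using hypothetical patient data, illustrates it.

```python
# Minimal sketch of the comorbidity-polypharmacy score (CPS):
# the count of comorbid conditions plus the count of current medications.
def cps(comorbid_conditions, medications):
    return len(comorbid_conditions) + len(medications)

# Hypothetical example: 4 chronic conditions and 6 regular medications -> CPS = 10.
print(cps(["hypertension", "type 2 diabetes", "CHD", "osteoporosis"],
          ["statin", "aspirin", "metformin", "ACE inhibitor", "diuretic", "nootropic"]))
```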
Elixhauser Comorbidity Index
The Elixhauser comorbidity measure was developed using administrative data from a statewide California inpatient database covering all non-federal inpatient community hospital stays in California (n = 1,779,167). The Elixhauser comorbidity measure developed a list of 30 comorbidities relying on the ICD-9-CM coding manual. The comorbidities were not simplified into an index because each comorbidity affected outcomes (length of hospital stay, hospital charges, and mortality) differently among different patient groups. The comorbidities identified by the Elixhauser comorbidity measure are significantly associated with in-hospital mortality and include both acute and chronic conditions. van Walraven et al. have derived and validated an Elixhauser comorbidity index that summarizes disease burden and can discriminate for in-hospital mortality. In addition, a systematic review and comparative analysis shows that, among the various comorbidity indices, the Elixhauser index is a better predictor of mortality risk, especially beyond 30 days of hospitalization.
Diagnosis-related group
Patients who are more seriously ill tend to require more hospital resources than patients who are less seriously ill, even though they are admitted to the hospital for the same reason. Recognizing this, the diagnosis-related group (DRG) system splits certain DRGs based on the presence of secondary diagnoses for specific complications or comorbidities (CC). The same applies to Healthcare Resource Groups (HRGs) in the UK.
Clinical example of evaluation
Patient S., aged 73, called an ambulance because of a sudden pressing pain in the chest. It was known from the case history that the patient had had CHD for many years. She had experienced such chest pains earlier as well, but they had always disappeared within a few minutes of sublingual administration of organic nitrates. This time, taking three tablets of nitroglycerine did not relieve the pain. It was also known from the case history that the patient had suffered two myocardial infarctions during the last ten years, as well as an acute cerebrovascular accident with left-sided hemiplegia more than 15 years ago. In addition, the patient had hypertension, type 2 diabetes with diabetic nephropathy, hysteromyoma, cholelithiasis, osteoporosis and varicose veins of the legs. It was also learned that the patient regularly takes a number of antihypertensive drugs, diuretics and oral antihyperglycemic agents, as well as statins, antiplatelet agents and nootropics. The patient had undergone cholecystectomy for cholelithiasis more than 20 years ago, and cataract extraction from the right eye 4 years ago. The patient was admitted to the cardiac intensive care unit of a general hospital with a diagnosis of acute transmural myocardial infarction. During the examination, moderate azotemia, mild erythronormoblastic anemia, proteinuria and a lowered left ventricular ejection fraction were also identified.
Methods of evaluation
There are currently several generally accepted methods of evaluating (measuring) comorbidity:
Cumulative Illness Rating Scale (CIRS): Developed in 1968 by B. S. Linn, it was a significant advance because it gave practicing doctors a way to assess the number and severity of chronic illnesses in the comorbid state of their patients. Proper use of CIRS requires a separate evaluation of each of the biological systems: "0" means the selected system has no disorders; "1", slight (mild) abnormalities or previous disorders; "2", an illness requiring medicinal therapy; "3", a disease causing disability; and "4", acute organ insufficiency requiring emergency therapy. CIRS evaluates comorbidity as a cumulative score, which can range from 0 to 56. According to its developers, the maximum score is not compatible with life.
Cumulative Illness Rating Scale for Geriatrics (CIRS-G): This system is similar to CIRS but intended for aged patients, proposed by M. D. Miller in 1991. It takes into account the age of the patient and the peculiarities of diseases of old age.
The Kaplan–Feinstein Index: This index was created in 1973 based on a study of the effect of associated diseases on patients with type 2 diabetes over a period of 5 years. In this system of comorbidity evaluation, all of a patient's present diseases and their complications are classified as mild, moderate or severe depending on the degree of their damaging effect on body organs, and the conclusion about cumulative comorbidity is drawn on the basis of the most decompensated biological system. The index gives a cumulative, but less detailed (compared to CIRS), assessment of the condition of each biological system: "0", absence of disease; "1", mild course of the disease; "2", moderate disease; "3", severe disease. The Kaplan–Feinstein Index evaluates comorbidity as a cumulative score, which can vary from 0 to 36. Notable deficiencies of this method are the excessive generalization of nosologies and the absence of a large number of illnesses from the scale, which presumably must be recorded in the "miscellaneous" column, reducing the method's objectivity and usefulness. Its indisputable advantage over CIRS, however, is the ability to analyze malignant neoplasms and their severity independently. Using this method, the comorbidity of patient S, aged 73, can be evaluated as moderately severe (16 of 36 points); however, its prognostic value is unclear because there is no interpretation of the overall score resulting from the accumulation of the patient's diseases.
Charlson Index: This index is intended for the long-term prognosis of comorbid patients and was developed by M. E. Charlson in 1987. It is based on a point scoring system (from 0 to 40) for the presence of specific associated diseases and is used to predict mortality. For its calculation, points are accumulated according to the associated diseases, with a single point added for each decade of life in patients over forty years of age (at 50 years, 1 point; at 60 years, 2 points; and so on); a small illustrative sketch of this age adjustment follows the list below. The distinguishing feature and undisputed advantage of the Charlson Index is the ability to take the patient's age into account and to estimate mortality, which in the absence of comorbidity is 12%; at 1–2 points, 26%; at 3–4 points, 52%; and with more than 5 points, 85%. Regretfully, the method has some deficiencies: the severity of many diseases is not considered, and a number of prognostically important disorders are absent. It is also doubtful that the possible prognosis for a patient with bronchial asthma and chronic leukemia is comparable to the prognosis for a patient with myocardial infarction and cerebral infarction. By this method, the comorbidity of patient S, aged 73, corresponds to a mild state (9 of 40 points).
Modified Charlson Index: R. A. Deyo, D. C. Cherkin, and Marcia Ciol added chronic forms of ischemic heart disease and the stages of chronic heart failure to this index in 1992.
Elixhauser Index: The Elixhauser comorbidity measure includes 30 comorbidities, which are not simplified into a single index. Elixhauser shows better predictive performance for mortality risk, especially beyond 30 days of hospitalization.
Index of Co-Existent Disease (ICED): This index was first developed in 1993 by S. Greenfield to evaluate comorbidity in patients with malignant neoplasms; it later became useful for other categories of patients as well. The method helps in estimating the duration of a patient's hospital stay and the risk of readmission after surgical procedures. To evaluate comorbidity, the ICED index assesses the patient's condition separately along two components: physiological and functional characteristics. The first component comprises 19 associated disorders, each assessed on a 4-point scale, where "0" indicates the absence of disease and "3" indicates its severe form. The second component evaluates the effect of associated diseases on the physical condition of the patient; it assesses 11 physical functions on a 3-point scale, where "0" means normal function and "2" means that the function is impossible.
Geriatric Index of Comorbidity (GIC): Developed in 2002.
Functional Comorbidity Index (FCI): Developed in 2005.
Total Illness Burden Index (TIBI): Developed in 2007.
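To make the Charlson age adjustment and the quoted mortality bands concrete, here is a minimal Python sketch. The disease-point total is a hypothetical input, the mortality percentages are simply the figures quoted for the Charlson Index above, and since the banding of a total of exactly 5 points is not specified there, the sketch treats 5 or more as the top band.

```python
# Sketch of the Charlson age adjustment and mortality bands described above:
# one extra point per full decade of age over 40, then map the total to the
# quoted mortality figures (12% / 26% / 52% / 85%).
def age_points(age):
    return max(0, (age - 40) // 10)

def charlson_mortality(disease_points, age):
    total = disease_points + age_points(age)
    if total == 0:
        return total, "12%"
    if total <= 2:
        return total, "26%"
    if total <= 4:
        return total, "52%"
    return total, "85%"  # 5 or more points (banding of exactly 5 is assumed)

# Patient S, aged 73, is scored 9 points in total in the text above; with
# 3 age points, that implies 6 disease-based points.
print(charlson_mortality(6, 73))  # -> (9, '85%')
```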
Analyzing the comorbid state of patient S, aged 73, using the most widely used international comorbidity assessment scales, a doctor would arrive at entirely different evaluations. The inconsistency of these results would complicate the doctor's judgment about the actual severity of the patient's condition and the prescription of rational medicinal therapy for the identified disorders. Doctors face such problems on a daily basis, despite all their knowledge of medical science. The main hurdle to introducing comorbidity evaluation systems into the broad diagnostic-therapeutic process is their inconsistency and narrow focus. Despite the variety of methods for evaluating comorbidity, the absence of a single generally accepted method, free of the deficiencies of the available ones, is troubling. The absence of a unified instrument developed on the basis of broad international experience, as well as of a methodology for its use, prevents comorbidity from becoming doctor-"friendly". At the same time, because of the inconsistent approaches to the analysis of the comorbid state and the absence of comorbidity from medical university courses, the practitioner remains unclear about its prognostic effect, which makes the generally available systems of associated pathology evaluation appear unreasoned and therefore unneeded.
Treatment of comorbid patient
The effect of comorbid pathologies on the clinical manifestations, diagnosis, prognosis and treatment of many diseases is multifaceted and patient-specific. The interrelation of the diseases, age and drug pathomorphism greatly affects the clinical presentation and course of the primary nosology and the character and severity of its complications, worsens the patient's quality of life, and limits or complicates the diagnostic and therapeutic process. Comorbidity worsens the prognosis for life and increases the chance of a fatal outcome. The presence of comorbid disorders increases bed days and disability, hinders rehabilitation, increases the number of complications after surgical procedures, and increases the likelihood of decline in aged people.
The presence of comorbidity must be taken into account when selecting the diagnostic algorithm and treatment plan for any given disease. It is important to establish, in comorbid patients, the degree of functional disorder and the anatomical status of all the identified nosological forms (diseases). Whenever a new, even mildly noticeable symptom appears, a thorough examination should be conducted to uncover its causes. It should also be remembered that comorbidity leads to polypragmasy (polypharmacy), i.e. the simultaneous prescription of a large number of medicines, which makes it impossible to control the effectiveness of therapy, increases monetary expenses and therefore reduces compliance. At the same time, polypragmasy, especially in aged patients, promotes the sudden development of local and systemic unwanted medicinal side-effects. These side-effects are not always taken into account by doctors, because they are regarded as a manifestation of comorbidity and, as a result, become the reason for prescribing even more drugs, closing the vicious circle. Simultaneous treatment of multiple disorders requires strict consideration of the compatibility of drugs and detailed adherence to the rules of rational drug therapy, based on E. M. Tareev's principle, "Each non-indicated drug is contraindicated", and B. E. Votchal's remark, "If the drug does not have any side-effects, one must ask whether it has any effect at all".
A study of inpatient hospital data in the United States in 2011 showed that the presence of a major complication or comorbidity was associated with a greater risk of intensive-care unit utilization, ranging from a negligible change for acute myocardial infarction with major complication or comorbidity to nearly nine times greater likelihood for a major joint replacement with major complication or comorbidity.
See also
Coinfection
Conditions comorbid to autism spectrum disorders
Superinfection
Syndemic
References
Further reading
Comorbidity: Addiction and Other Mental Illness. Rockville, MD: U.S. Dept. of Health and Human Services, National Institutes of Health, National Institute on Drug Abuse, 2010.
External links
Online comorbidity scoring tools
MDCalc – Medical calculators, equations, scores, and guidelines
Medical diagnosis
Diseases and disorders
Epidemiology
Public health
Peripheral edema
Peripheral edema is edema (accumulation of fluid causing swelling) in tissues perfused by the peripheral vascular system, usually in the lower limbs. In the most dependent parts of the body (those hanging distally), it may be called dependent edema.
Cause
The condition is commonly associated with vascular and cardiac changes associated with aging but can be caused by many other conditions, including congestive heart failure, kidney failure, liver cirrhosis, portal hypertension, trauma, alcoholism, altitude sickness, pregnancy, hypertension, sickle cell anemia, a compromised lymphatic system or merely long periods of time sitting or standing without moving. Some medicines (e.g. amlodipine, pregabalin) may also cause or worsen the condition.
Prognosis
Successful treatment depends on control of the underlying cause. Severe swelling can cause permanent damage to nerves, resulting in peripheral neuropathy. Many cases from temporary or minor causes resolve on their own, with no lasting damage.
References
External links
Symptoms and signs: Skin and subcutaneous tissue
Sense
A sense is a biological system used by an organism for sensation, the process of gathering information about the surroundings through the detection of stimuli. Although, in some cultures, five human senses were traditionally identified as such (namely sight, smell, touch, taste, and hearing), many more are now recognized. Senses used by non-human organisms are even greater in variety and number. During sensation, sense organs collect various stimuli (such as a sound or smell) for transduction, meaning transformation into a form that can be understood by the brain. Sensation and perception are fundamental to nearly every aspect of an organism's cognition, behavior and thought.
In organisms, a sensory organ consists of a group of interrelated sensory cells that respond to a specific type of physical stimulus. Via cranial and spinal nerves (nerves of the central and peripheral nervous systems that relay sensory information to and from the brain and body), the different types of sensory receptor cells (such as mechanoreceptors, photoreceptors, chemoreceptors and thermoreceptors) in sensory organs transduce sensory information from these organs towards the central nervous system, finally arriving at the sensory cortices in the brain, where sensory signals are processed and interpreted (perceived).
Sensory systems, or senses, are often divided into external (exteroception) and internal (interoception) sensory systems. Human external senses are based on the sensory organs of the eyes, ears, skin, nose, mouth and the vestibular system. Internal sensation detects stimuli from internal organs and tissues. Internal senses possessed by humans include spatial orientation, proprioception (body position) and nociception (pain). Further internal senses lead to signals such as hunger, thirst, suffocation, and nausea, or different involuntary behaviors, such as vomiting. Some animals are able to detect electrical and magnetic fields, air moisture, or polarized light, while others sense and perceive through alternative systems, such as echolocation. Sensory modalities or sub modalities are different ways sensory information is encoded or transduced. Multimodality integrates different senses into one unified perceptual experience. For example, information from one sense has the potential to influence how information from another is perceived. Sensation and perception are studied by a variety of related fields, most notably psychophysics, neurobiology, cognitive psychology, and cognitive science.
Definitions
Sensory organs
Sensory organs are organs that sense and transduce stimuli. Humans have various sensory organs (i.e. eyes, ears, skin, nose, and mouth) that correspond to a respective visual system (sense of vision), auditory system (sense of hearing), somatosensory system (sense of touch), olfactory system (sense of smell), and gustatory system (sense of taste). Those systems, in turn, contribute to vision, hearing, touch, smell, and the ability to taste. Internal sensation, or interoception, detects stimuli from internal organs and tissues. Many internal sensory and perceptual systems exist in humans, including the vestibular system (sense of balance) sensed by the inner ear and providing the perception of spatial orientation; proprioception (body position); and nociception (pain). Further internal chemoreception- and osmoreception-based sensory systems lead to various perceptions, such as hunger, thirst, suffocation, and nausea, or different involuntary behaviors, such as vomiting.
Nonhuman animals experience sensation and perception, with varying levels of similarity to and difference from humans and other animal species. For example, other mammals in general have a stronger sense of smell than humans. Some animal species lack one or more human sensory system analogues and some have sensory systems that are not found in humans, while others process and interpret the same sensory information in very different ways. For example, some animals are able to detect electrical fields and magnetic fields, air moisture, or polarized light. Others sense and perceive through alternative systems such as echolocation. Recent theory suggests that plants and artificial agents such as robots may be able to detect and interpret environmental information in an analogous manner to animals.
Sensory modalities
Sensory modality refers to the way that information is encoded, which is similar to the idea of transduction. The main sensory modalities can be described on the basis of how each is transduced. Listing all the different sensory modalities, which can number as many as 17, involves separating the major senses into more specific categories, or submodalities, of the larger sense. An individual sensory modality represents the sensation of a specific type of stimulus. For example, the general sensation and perception of touch, which is known as somatosensation, can be separated into light pressure, deep pressure, vibration, itch, pain, temperature, or hair movement, while the general sensation and perception of taste can be separated into submodalities of sweet, salty, sour, bitter, spicy, and umami, all of which are based on different chemicals binding to sensory neurons.
Receptors
Sensory receptors are the cells or structures that detect sensations. Stimuli in the environment activate specialized receptor cells in the peripheral nervous system. During transduction, physical stimulus is converted into action potential by receptors and transmitted towards the central nervous system for processing. Different types of stimuli are sensed by different types of receptor cells. Receptor cells can be classified into types on the basis of three different criteria: cell type, position, and function. Receptors can be classified structurally on the basis of cell type and their position in relation to stimuli they sense. Receptors can further be classified functionally on the basis of the transduction of stimuli, or how the mechanical stimulus, light, or chemical changed the cell membrane potential.
Structural receptor types
Location
One way to classify receptors is based on their location relative to the stimuli. An exteroceptor is a receptor that is located near a stimulus of the external environment, such as the somatosensory receptors that are located in the skin. An interoceptor is one that interprets stimuli from internal organs and tissues, such as the receptors that sense the increase in blood pressure in the aorta or carotid sinus.
Cell type
The cells that interpret information about the environment can be either (1) a neuron that has a free nerve ending, with dendrites embedded in tissue that would receive a sensation; (2) a neuron that has an encapsulated ending in which the sensory nerve endings are encapsulated in connective tissue that enhances their sensitivity; or (3) a specialized receptor cell, which has distinct structural components that interpret a specific type of stimulus. The pain and temperature receptors in the dermis of the skin are examples of neurons that have free nerve endings (1). Also located in the dermis of the skin are lamellated corpuscles, neurons with encapsulated nerve endings that respond to pressure and touch (2). The cells in the retina that respond to light stimuli are an example of a specialized receptor (3), a photoreceptor.
A transmembrane protein receptor is a protein in the cell membrane that mediates a physiological change in a neuron, most often through the opening of ion channels or changes in the cell signaling processes. Transmembrane receptors are activated by chemicals called ligands. For example, a molecule in food can serve as a ligand for taste receptors. Other transmembrane proteins, which are not accurately called receptors, are sensitive to mechanical or thermal changes. Physical changes in these proteins increase ion flow across the membrane, and can generate an action potential or a graded potential in the sensory neurons.
Functional receptor types
A third classification of receptors is by how the receptor transduces stimuli into membrane potential changes. Stimuli are of three general types. Some stimuli are ions and macromolecules that affect transmembrane receptor proteins when these chemicals diffuse across the cell membrane. Some stimuli are physical variations in the environment that affect receptor cell membrane potentials. Other stimuli include the electromagnetic radiation from visible light. For humans, the only electromagnetic energy that is perceived by our eyes is visible light. Some other organisms have receptors that humans lack, such as the heat sensors of snakes, the ultraviolet light sensors of bees, or magnetic receptors in migratory birds.
Receptor cells can be further categorized on the basis of the type of stimuli they transduce. The different types of functional receptor cells are mechanoreceptors, photoreceptors, chemoreceptors (including osmoreceptors), thermoreceptors, electroreceptors (in certain mammals and fish), and nociceptors. Physical stimuli, such as pressure and vibration, as well as the sensation of sound and body position (balance), are interpreted through a mechanoreceptor. Photoreceptors convert light (visible electromagnetic radiation) into signals. Chemical stimuli, such as an object's taste or smell, are interpreted by chemoreceptors, while osmoreceptors respond to the solute concentrations of body fluids. Nociception (pain) interprets the presence of tissue damage, based on sensory information from mechano-, chemo-, and thermoreceptors. Another physical stimulus that has its own type of receptor is temperature, which is sensed through a thermoreceptor that is sensitive either to temperatures above (heat) or below (cold) normal body temperature.
Thresholds
Absolute threshold
Each sense organ (eyes or nose, for instance) requires a minimal amount of stimulation in order to detect a stimulus. This minimum amount of stimulus is called the absolute threshold. The absolute threshold is defined as the minimum amount of stimulation necessary for the detection of a stimulus 50% of the time. Absolute threshold is measured by using a method called signal detection. This process involves presenting stimuli of varying intensities to a subject in order to determine the level at which the subject can reliably detect stimulation in a given sense.
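As a rough sketch of how the 50%-detection point might be estimated from such data, the following Python snippet linearly interpolates the intensity at which the detection probability crosses 0.5; the intensities and detection rates used are hypothetical.

```python
import numpy as np

# Hypothetical detection data from a signal-detection style experiment:
# stimulus intensities (arbitrary units) and the proportion of trials
# on which the subject reported detecting the stimulus.
intensity = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
p_detect = np.array([0.05, 0.20, 0.55, 0.85, 0.98])

# Estimate the absolute threshold as the intensity detected 50% of the time,
# by linear interpolation of the (monotonic) psychometric data.
threshold = np.interp(0.5, p_detect, intensity)
print(f"Estimated absolute threshold: {threshold:.2f} (arbitrary units)")
```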
Differential threshold
Differential threshold or just noticeable difference (JND) is the smallest detectable difference between two stimuli, or the smallest difference in stimuli that can be judged to be different from each other. Weber's Law is an empirical law that states that the difference threshold is a constant fraction of the comparison stimulus. According to Weber's Law, bigger stimuli require larger differences to be noticed.
Magnitude estimation is a psychophysical method in which subjects assign perceived values to given stimuli. The relationship between stimulus intensity and perceived intensity is described by Stevens' power law.
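Both relationships can be written compactly; the Weber fraction used in the worked example below is purely illustrative, not a measured value.

```latex
% Weber's law: the just noticeable difference \Delta I grows in proportion
% to the comparison stimulus I, with Weber fraction k:
\[
  \frac{\Delta I}{I} = k
\]
% e.g. assuming (purely for illustration) k = 0.1 for lifted weights,
% a 100 g weight would need roughly a 10 g increment to be noticed,
% while a 1000 g weight would need roughly 100 g.

% Stevens' power law: perceived magnitude \psi grows as a power of
% stimulus intensity \varphi, with exponent a and scaling constant c:
\[
  \psi(\varphi) = c\,\varphi^{a}
\]
```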
Signal detection theory
Signal detection theory quantifies the experience of the subject when a stimulus is presented in the presence of noise. There is internal noise and there is external noise when it comes to signal detection. Internal noise originates from static in the nervous system. For example, an individual with closed eyes in a dark room still sees something—a blotchy pattern of grey with intermittent brighter flashes—this is internal noise. External noise is the result of noise in the environment that can interfere with the detection of the stimulus of interest. Noise is only a problem if its magnitude is large enough to interfere with signal collection. The nervous system calculates a criterion, or internal threshold, for the detection of a signal in the presence of noise. If a signal is judged to be above the criterion, and is thus differentiated from the noise, the signal is sensed and perceived. Errors in signal detection can lead to false positives and false negatives. The sensory criterion may be shifted based on the importance of detecting the signal, and shifting the criterion influences the likelihood of false positives and false negatives.
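One standard way to quantify performance in such a task is the sensitivity index d′ together with the criterion c, computed from hit and false-alarm rates; the sketch below uses hypothetical rates.

```python
from scipy.stats import norm

# Standard signal detection measures computed from hit and false-alarm rates:
#   d' = Z(hit rate) - Z(false-alarm rate)          (sensitivity)
#   c  = -0.5 * (Z(hit rate) + Z(false-alarm rate)) (response criterion)
# where Z is the inverse of the standard normal CDF.
def dprime_and_criterion(hit_rate, fa_rate):
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Hypothetical example: 85% hits, 20% false alarms.
d, c = dprime_and_criterion(0.85, 0.20)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```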
Private perceptive experience
Subjective visual and auditory experiences appear to be similar across human subjects. The same cannot be said about taste. For example, there is a molecule called propylthiouracil (PROP) that some humans experience as bitter, some as almost tasteless, while others experience it as somewhere between tasteless and bitter. There is a genetic basis for this difference in perception of the same sensory stimulus. This subjective difference in taste perception has implications for individuals' food preferences, and consequently, health.
Sensory adaptation
When a stimulus is constant and unchanging, perceptual sensory adaptation occurs. During that process, the subject becomes less sensitive to the stimulus.
Fourier analysis
Biological auditory (hearing), vestibular and spatial, and visual systems (vision) appear to break down real-world complex stimuli into sine wave components, through the mathematical process called Fourier analysis. Many neurons have a strong preference for certain sine frequency components in contrast to others. The way that simpler sounds and images are encoded during sensation can provide insight into how perception of real-world objects happens.
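As an illustration of decomposing a compound signal into its sine components, the following Python sketch builds a two-tone signal and recovers the component frequencies with a discrete Fourier transform; the frequencies chosen are arbitrary.

```python
import numpy as np

# Build a compound "stimulus": the sum of two sine waves (5 Hz and 40 Hz),
# sampled at 1000 Hz for one second.
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# Discrete Fourier transform: the spectrum shows peaks at the sine
# components that make up the compound signal.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

# Report the two strongest frequency components (ignoring the DC term).
peaks = freqs[np.argsort(spectrum[1:])[-2:] + 1]
print(sorted(peaks))  # -> [5.0, 40.0]
```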
Sensory neuroscience and the biology of perception
Perception occurs when nerves that lead from the sensory organs (e.g. the eye) to the brain are stimulated, even if that stimulation is unrelated to the target signal of the sensory organ. For example, in the case of the eye, it does not matter whether light or something else stimulates the optic nerve; that stimulation will result in visual perception, even if there was no visual stimulus to begin with. (To prove this point to yourself, and if you are a human, close your eyes, preferably in a dark room, and press gently on the outside corner of one eye through the eyelid. You will see a visual spot toward the inside of your visual field, near your nose.)
Sensory nervous system
All stimuli received by the receptors are transduced into an action potential, which is carried along one or more afferent neurons towards a specific area (cortex) of the brain. Just as different nerves are dedicated to sensory and motor tasks, different areas of the brain (cortices) are similarly dedicated to different sensory and perceptual tasks. More complex processing is accomplished across cortical regions that spread beyond the primary cortices. Every nerve, sensory or motor, has its own signal transmission speed. For example, nerves in the frog's legs have a signal transmission speed of 90 ft/s (99 km/h), while sensory nerves in humans transmit sensory information at speeds between 165 ft/s (181 km/h) and 330 ft/s (362 km/h).
Multimodal perception
Perceptual experience is often multimodal. Multimodality integrates different senses into one unified perceptual experience. Information from one sense has the potential to influence how information from another is perceived. Multimodal perception is qualitatively different from unimodal perception. There has been a growing body of evidence since the mid-1990s on the neural correlates of multimodal perception.
Philosophy
The philosophy of perception is concerned with the nature of perceptual experience and the status of perceptual data, in particular how they relate to beliefs about, or knowledge of, the world. Historical inquiries into the underlying mechanisms of sensation and perception have led early researchers to subscribe to various philosophical interpretations of perception and the mind, including panpsychism, dualism, and materialism. The majority of modern scientists who study sensation and perception take on a materialistic view of the mind.
Human sensation
General
Absolute threshold
Some examples of human absolute thresholds for the nine to 21 external senses.
Multimodal perception
Humans respond more strongly to multimodal stimuli compared to the sum of each single modality together, an effect called the superadditive effect of multisensory integration. Neurons that respond to both visual and auditory stimuli have been identified in the superior temporal sulcus. Additionally, multimodal "what" and "where" pathways have been proposed for auditory and tactile stimuli.
External
External receptors that respond to stimuli from outside the body are called exteroceptors. Human external sensation is based on the sensory organs of the eyes, ears, skin, vestibular system, nose, and mouth, which contribute, respectively, to the sensory perceptions of vision, hearing, touch, balance, smell, and taste. Smell and taste are both responsible for identifying molecules and thus both are types of chemoreceptors. Both olfaction (smell) and gustation (taste) require the transduction of chemical stimuli into electrical potentials.
Visual system (vision)
The visual system, or sense of sight, is based on the transduction of light stimuli received through the eyes and contributes to visual perception. The visual system detects light on photoreceptors in the retina of each eye that generates electrical nerve impulses for the perception of varying colors and brightness. There are two types of photoreceptors: rods and cones. Rods are very sensitive to light but do not distinguish colors. Cones distinguish colors but are less sensitive to dim light.
At the molecular level, visual stimuli cause changes in the photopigment molecule that lead to changes in the membrane potential of the photoreceptor cell. A single unit of light is called a photon, which is described in physics as a packet of energy with properties of both a particle and a wave. The energy of a photon is determined by its wavelength, with each wavelength of visible light corresponding to a particular color. Visible light is electromagnetic radiation with a wavelength between 380 and 720 nm. Wavelengths of electromagnetic radiation longer than 720 nm fall into the infrared range, whereas wavelengths shorter than 380 nm fall into the ultraviolet range. Light with a wavelength of 380 nm lies at the violet-blue end of the spectrum, whereas light with a wavelength of 720 nm is dark red. All other colors fall between these extremes at various points along the wavelength scale.
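The relation between wavelength and photon energy can be made explicit; the numerical example below is a standard back-of-the-envelope calculation.

```latex
% Photon energy is inversely proportional to wavelength:
\[
  E = \frac{hc}{\lambda}
\]
% With h \approx 6.626\times10^{-34}\,\mathrm{J\,s} and
% c \approx 3.0\times10^{8}\,\mathrm{m/s}, this gives roughly
% E(\text{eV}) \approx 1240 / \lambda(\text{nm}), so a 450 nm (blue)
% photon carries about 2.8 eV, while a 720 nm (dark red) photon
% carries about 1.7 eV. For example:
\[
  E_{450\,\mathrm{nm}} \approx \frac{1240\ \mathrm{eV\,nm}}{450\ \mathrm{nm}} \approx 2.8\ \mathrm{eV}
\]
```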
The three types of cone opsins, being sensitive to different wavelengths of light, provide us with color vision. By comparing the activity of the three different cones, the brain can extract color information from visual stimuli. For example, a bright blue light that has a wavelength of approximately 450 nm would activate the "red" cones minimally, the "green" cones marginally, and the "blue" cones predominantly. The relative activation of the three different cones is calculated by the brain, which perceives the color as blue. However, cones cannot react to low-intensity light, and rods do not sense the color of light. Therefore, our low-light vision is—in essence—in grayscale. In other words, in a dark room, everything appears as a shade of gray. If you think that you can see colors in the dark, it is most likely because your brain knows what color something is and is relying on that memory.
There is some disagreement as to whether the visual system consists of one, two, or three submodalities. Neuroanatomists generally regard it as two submodalities, given that different receptors are responsible for the perception of color and brightness. Some argue that stereopsis, the perception of depth using both eyes, also constitutes a sense, but it is generally regarded as a cognitive (that is, post-sensory) function of the visual cortex of the brain where patterns and objects in images are recognized and interpreted based on previously learned information. This is called visual memory.
The inability to see is called blindness. Blindness may result from damage to the eyeball, especially to the retina, damage to the optic nerve that connects each eye to the brain, and/or from stroke (infarcts in the brain). Temporary or permanent blindness can be caused by poisons or medications. People who are blind from degradation or damage to the visual cortex, but still have functional eyes, are actually capable of some level of vision and reaction to visual stimuli but not a conscious perception; this is known as blindsight. People with blindsight are usually not aware that they are reacting to visual sources, and instead just unconsciously adapt their behavior to the stimulus.
On February 14, 2013, researchers reported a neural implant that gives rats the ability to sense infrared light, which for the first time provided living creatures with new abilities instead of simply replacing or augmenting existing abilities.
Visual perception in psychology
According to Gestalt Psychology, people perceive the whole of something even if it is not there. The Gestalt's Law of Organization states that people have seven factors that help to group what is seen into patterns or groups: Common Fate, Similarity, Proximity, Closure, Symmetry, Continuity, and Past Experience.
The Law of Common Fate says that elements that move along the same smooth path are grouped together: people follow the trend of motion as the lines or dots flow.
The Law of Similarity refers to the grouping of images or objects that are similar to each other in some aspect. This could be due to shade, colour, size, shape, or other qualities you could distinguish.
The Law of Proximity states that our minds like to group based on how close objects are to each other. We may see 42 objects in a group, but we can also perceive three groups of two lines with seven objects in each line.
The Law of Closure is the idea that we as humans still see a full picture even if there are gaps within that picture. There could be gaps or parts missing from a section of a shape, but we would still perceive the shape as whole.
The Law of Symmetry refers to a person's preference to see symmetry around a central point. An example would be when we use parentheses in writing. We tend to perceive all of the words in the parentheses as one section instead of individual words within the parentheses.
The Law of Continuity tells us that objects are grouped together by their elements and then perceived as a whole. This usually happens when we see overlapping objects. We will see the overlapping objects with no interruptions.
The Law of Past Experience refers to the tendency humans have to categorize objects according to past experiences under certain circumstances. If two objects are usually perceived together or within close proximity of each other the Law of Past Experience is usually seen.
Auditory system (hearing)
Hearing, or audition, is the transduction of sound waves into a neural signal that is made possible by the structures of the ear. The large, fleshy structure on the lateral aspect of the head is known as the auricle. At the end of the auditory canal is the tympanic membrane, or ear drum, which vibrates after it is struck by sound waves. The auricle, ear canal, and tympanic membrane are often referred to as the external ear. The middle ear consists of a space spanned by three small bones called the ossicles. The three ossicles are the malleus, incus, and stapes, which are Latin names that roughly translate to hammer, anvil, and stirrup. The malleus is attached to the tympanic membrane and articulates with the incus. The incus, in turn, articulates with the stapes. The stapes is then attached to the inner ear, where the sound waves will be transduced into a neural signal. The middle ear is connected to the pharynx through the Eustachian tube, which helps equilibrate air pressure across the tympanic membrane. The tube is normally closed but will pop open when the muscles of the pharynx contract during swallowing or yawning.
Mechanoreceptors, located in the inner ear, turn motion into electrical nerve pulses. Since sound consists of vibrations propagating through a medium such as air, the detection of these vibrations—that is, the sense of hearing—is a mechanical sense: the vibrations are mechanically conducted from the eardrum through a series of tiny bones to hair-like fibers in the inner ear, which detect mechanical motion of the fibers within a range of about 20 to 20,000 hertz, with substantial variation between individuals. The ability to hear high frequencies declines with age. Inability to hear is called deafness or hearing impairment. Sound can also be detected as vibrations conducted through the body; lower audible frequencies can be detected this way. Some deaf people are able to determine the direction and location of vibrations picked up through the feet.
Studies pertaining to audition started to increase in number towards the latter end of the nineteenth century. During this time, many laboratories in the United States began to create new models, diagrams, and instruments that all pertained to the ear.
Auditory cognitive psychology is a branch of cognitive psychology that is dedicated to the auditory system. The main point is to understand why humans are able to use sound in thinking outside of actually saying it.
Related to auditory cognitive psychology is psychoacoustics, which is directed more at people interested in music. Haptics, a word used to refer to both taction and kinesthesia, has many parallels with psychoacoustics. Most research around these two is focused on the instrument, the listener, and the player of the instrument.
Somatosensory system (touch)
Somatosensation is considered a general sense, as opposed to the special senses discussed in this section. Somatosensation is the group of sensory modalities that are associated with touch and interoception. The modalities of somatosensation include pressure, vibration, light touch, tickle, itch, temperature, pain, kinesthesia. Somatosensation, also called tactition (adjectival form: tactile) is a perception resulting from activation of neural receptors, generally in the skin including hair follicles, but also in the tongue, throat, and mucosa. A variety of pressure receptors respond to variations in pressure (firm, brushing, sustained, etc.). The touch sense of itching caused by insect bites or allergies involves special itch-specific neurons in the skin and spinal cord. The loss or impairment of the ability to feel anything touched is called tactile anesthesia. Paresthesia is a sensation of tingling, pricking, or numbness of the skin that may result from nerve damage and may be permanent or temporary.
Two types of somatosensory signals that are transduced by free nerve endings are pain and temperature. These two modalities use thermoreceptors and nociceptors to transduce temperature and pain stimuli, respectively. Temperature receptors are stimulated when local temperatures differ from body temperature. Some thermoreceptors are sensitive to just cold and others to just heat. Nociception is the sensation of potentially damaging stimuli. Mechanical, chemical, or thermal stimuli beyond a set threshold will elicit painful sensations. Stressed or damaged tissues release chemicals that activate receptor proteins in the nociceptors. For example, the sensation of heat associated with spicy foods involves capsaicin, the active molecule in hot peppers.
Low frequency vibrations are sensed by mechanoreceptors called Merkel cells, also known as type I cutaneous mechanoreceptors. Merkel cells are located in the stratum basale of the epidermis. Deep pressure and vibration is transduced by lamellated (Pacinian) corpuscles, which are receptors with encapsulated endings found deep in the dermis, or subcutaneous tissue. Light touch is transduced by the encapsulated endings known as tactile (Meissner) corpuscles. Follicles are also wrapped in a plexus of nerve endings known as the hair follicle plexus. These nerve endings detect the movement of hair at the surface of the skin, such as when an insect may be walking along the skin. Stretching of the skin is transduced by stretch receptors known as bulbous corpuscles. Bulbous corpuscles are also known as Ruffini corpuscles, or type II cutaneous mechanoreceptors.
The heat receptors are sensitive to infrared radiation and can occur in specialized organs, for instance in pit vipers. The thermoceptors in the skin are quite different from the homeostatic thermoceptors in the brain (hypothalamus), which provide feedback on internal body temperature.
Gustatory system (taste)
The gustatory system, or the sense of taste, is the sensory system that is partially responsible for the perception of taste (flavor). A few recognized submodalities exist within taste: sweet, salty, sour, bitter, and umami. Recent research has suggested that there may also be a sixth taste submodality for fats, or lipids. The sense of taste is often confused with the perception of flavor, which is the result of the multimodal integration of gustatory (taste) and olfactory (smell) sensations.
Within the structure of the lingual papillae are taste buds that contain specialized gustatory receptor cells for the transduction of taste stimuli. These receptor cells are sensitive to the chemicals contained within foods that are ingested, and they release neurotransmitters based on the amount of the chemical in the food. Neurotransmitters from the gustatory cells can activate sensory neurons in the facial, glossopharyngeal, and vagus cranial nerves.
Salty and sour taste submodalities are triggered by the cations Na+ and H+, respectively. The other taste modalities result from food molecules binding to a G protein–coupled receptor. A G protein signal transduction system ultimately leads to depolarization of the gustatory cell. The sweet taste is the sensitivity of gustatory cells to the presence of glucose (or sugar substitutes) dissolved in the saliva. Bitter taste is similar to sweet in that food molecules bind to G protein–coupled receptors. The taste known as umami is often referred to as the savory taste. Like sweet and bitter, it is based on the activation of G protein–coupled receptors by a specific molecule.
Once the gustatory cells are activated by the taste molecules, they release neurotransmitters onto the dendrites of sensory neurons. These neurons are part of the facial and glossopharyngeal cranial nerves, as well as a component within the vagus nerve dedicated to the gag reflex. The facial nerve connects to taste buds in the anterior third of the tongue. The glossopharyngeal nerve connects to taste buds in the posterior two thirds of the tongue. The vagus nerve connects to taste buds in the extreme posterior of the tongue, verging on the pharynx, which are more sensitive to noxious stimuli such as bitterness.
Flavor depends on odor, texture, and temperature as well as on taste. Humans receive tastes through sensory organs called taste buds, or gustatory calyculi, concentrated on the upper surface of the tongue. Other tastes such as calcium and free fatty acids may also be basic tastes but have yet to receive widespread acceptance. The inability to taste is called ageusia.
There is a rare phenomenon related to the gustatory sense called lexical-gustatory synesthesia, in which people can "taste" words. Such individuals report flavor sensations when they are not actually eating—when they read, hear, or even imagine words. They report not only simple flavors, but also textures, complex flavors, and temperatures.
Olfactory system (smell)
Like the sense of taste, the sense of smell, or the olfactory system, is also responsive to chemical stimuli. Unlike taste, there are hundreds of olfactory receptors (388 functional ones according to one 2003 study), each binding to a particular molecular feature. Odor molecules possess a variety of features and, thus, excite specific receptors more or less strongly. This combination of excitatory signals from different receptors makes up what humans perceive as the molecule's smell.
The olfactory receptor neurons are located in a small region within the superior nasal cavity. This region is referred to as the olfactory epithelium and contains bipolar sensory neurons. Each olfactory sensory neuron has dendrites that extend from the apical surface of the epithelium into the mucus lining the cavity. As airborne molecules are inhaled through the nose, they pass over the olfactory epithelial region and dissolve into the mucus. These odorant molecules bind to proteins that keep them dissolved in the mucus and help transport them to the olfactory dendrites. The odorant–protein complex binds to a receptor protein within the cell membrane of an olfactory dendrite. These receptors are G protein–coupled, and will produce a graded membrane potential in the olfactory neurons.
In the brain, olfaction is processed by the olfactory cortex. Olfactory receptor neurons in the nose differ from most other neurons in that they die and regenerate on a regular basis. The inability to smell is called anosmia. Some neurons in the nose are specialized to detect pheromones. Loss of the sense of smell can result in food tasting bland. A person with an impaired sense of smell may require additional spice and seasoning levels for food to be tasted. Anosmia may also be related to some presentations of mild depression, because the loss of enjoyment of food may lead to a general sense of despair. The ability of olfactory neurons to replace themselves decreases with age, leading to age-related anosmia. This explains why some elderly people salt their food more than younger people do.
Vestibular system (balance)
The vestibular sense, or sense of balance (equilibrium), is the sense that contributes to the perception of balance (equilibrium), spatial orientation, direction, or acceleration (equilibrioception). Along with audition, the inner ear is responsible for encoding information about equilibrium. A similar mechanoreceptor—a hair cell with stereocilia—senses head position, head movement, and whether our bodies are in motion. These cells are located within the vestibule of the inner ear. Head position is sensed by the utricle and saccule, whereas head movement is sensed by the semicircular canals. The neural signals generated in the vestibular ganglion are transmitted through the vestibulocochlear nerve to the brain stem and cerebellum.
The semicircular canals are three ring-like extensions of the vestibule. One is oriented in the horizontal plane, whereas the other two are oriented in the vertical plane. The anterior and posterior vertical canals are oriented at approximately 45 degrees relative to the sagittal plane. The base of each semicircular canal, where it meets with the vestibule, connects to an enlarged region known as the ampulla. The ampulla contains the hair cells that respond to rotational movement, such as turning the head while saying "no". The stereocilia of these hair cells extend into the cupula, a membrane that attaches to the top of the ampulla. As the head rotates in a plane parallel to the semicircular canal, the fluid lags, deflecting the cupula in the direction opposite to the head movement. The semicircular canals contain several ampullae, with some oriented horizontally and others oriented vertically. By comparing the relative movements of both the horizontal and vertical ampullae, the vestibular system can detect the direction of most head movements within three-dimensional (3D) space.
The vestibular nerve conducts information from sensory receptors in three ampullae that sense motion of fluid in three semicircular canals caused by three-dimensional rotation of the head. The vestibular nerve also conducts information from the utricle and the saccule, which contain hair-like sensory receptors that bend under the weight of otoliths (which are small crystals of calcium carbonate) that provide the inertia needed to detect head rotation, linear acceleration, and the direction of gravitational force.
Internal
Internal sensation and perception, also known as interoception, is "any sense that is normally stimulated from within the body". It involves numerous sensory receptors in internal organs. Interoception is thought to be atypical in clinical conditions such as alexithymia.
Specific receptors include:
Hunger is governed by a set of brain structures (e.g., the hypothalamus) that are responsible for energy homeostasis.
Pulmonary stretch receptors are found in the lungs and control the respiratory rate.
Central chemoreceptors in the brain, together with peripheral chemoreceptors in the carotid and aortic bodies, monitor carbon dioxide and oxygen levels and give a perception of suffocation if carbon dioxide levels get too high.
The chemoreceptor trigger zone is an area of the medulla in the brain that receives inputs from blood-borne drugs or hormones, and communicates with the vomiting center.
Chemoreceptors in the circulatory system also measure salt levels and prompt thirst if they get too high; they can also respond to high blood sugar levels in diabetics.
Cutaneous receptors in the skin not only respond to touch, pressure, temperature and vibration, but also respond to vasodilation in the skin such as blushing.
Stretch receptors in the gastrointestinal tract sense gas distension that may result in colic pain.
Stimulation of sensory receptors in the esophagus results in sensations felt in the throat when swallowing, vomiting, or during acid reflux.
Sensory receptors in the pharyngeal mucosa, similar to touch receptors in the skin, sense foreign objects such as mucus and food; their stimulation may result in a gag reflex and a corresponding gagging sensation.
Stimulation of sensory receptors in the urinary bladder and rectum may result in perceptions of fullness.
Stimulation of stretch sensors that sense dilation of various blood vessels may result in pain, for example headache caused by vasodilation of brain arteries.
Cardioception refers to the perception of the activity of the heart.
Opsins and direct DNA damage in melanocytes and keratinocytes can sense ultraviolet radiation, which plays a role in pigmentation and sunburn.
Baroreceptors relay blood pressure information to the brain and maintain proper homeostatic blood pressure.
The perception of time is also sometimes called a sense, though not tied to a specific receptor.
Nonhuman animal sensation and perception
Human analogues
Other living organisms have receptors to sense the world around them, including many of the senses listed above for humans. However, the mechanisms and capabilities vary widely.
Smell
An example of smell in non-mammals is that of sharks, which combine their keen sense of smell with timing to determine the direction of a smell; they follow the nostril that first detected the smell. Insects have olfactory receptors on their antennae. Although the degree to which non-human mammals can smell better than humans is not precisely known, humans are known to have far fewer olfactory receptors than mice, and humans have also accumulated more genetic mutations in their olfactory receptors than other primates.
Vomeronasal organ
Many animals (salamanders, reptiles, mammals) have a vomeronasal organ that is connected with the mouth cavity. In mammals it is mainly used to detect pheromones of marked territory, trails, and sexual state. Reptiles such as snakes and monitor lizards make extensive use of it as a smelling organ by transferring scent molecules to the vomeronasal organ with the tips of the forked tongue. In reptiles the vomeronasal organ is commonly referred to as Jacobson's organ. In mammals, it is often associated with a special behavior called flehmen, characterized by uplifting of the lips. The organ is vestigial in humans: no associated neurons that provide sensory input have been found.
Taste
Flies and butterflies have taste organs on their feet, allowing them to taste anything they land on. Catfish have taste organs across their entire bodies, and can taste anything they touch, including chemicals in the water.
Vision
Cats have the ability to see in low light, which is due to muscles surrounding their irides (which contract and expand their pupils) as well as to the tapetum lucidum, a reflective membrane that optimizes the image.
Pit vipers, pythons and some boas have organs that allow them to detect infrared light, such that these snakes are able to sense the body heat of their prey. The common vampire bat may also have an infrared sensor on its nose. It has been found that birds and some other animals are tetrachromats and have the ability to see in the ultraviolet down to 300 nanometers. Bees and dragonflies are also able to see in the ultraviolet. Mantis shrimps can perceive both polarized light and multispectral images and have twelve distinct kinds of color receptors, unlike humans which have three kinds and most mammals which have two kinds.
Cephalopods have the ability to change color using chromatophores in their skin. Researchers believe that opsins in the skin can sense different wavelengths of light and help the creatures choose a coloration that camouflages them, in addition to light input from the eyes. Other researchers hypothesize that cephalopod eyes in species which only have a single photoreceptor protein may use chromatic aberration to turn monochromatic vision into color vision, explaining pupils shaped like the letter U, the letter W, or a dumbbell, as well as explaining the need for colorful mating displays. Some cephalopods can distinguish the polarization of light.
Spatial orientation
Many invertebrates have a statocyst, which is a sensor for acceleration and orientation that works very differently from the mammalian semicircular canals.
Non-human analogues
In addition, some animals have senses that humans lack.
Magnetoception
Magnetoception (or magnetoreception) is the ability to detect the direction one is facing based on the Earth's magnetic field. Directional awareness is most commonly observed in birds, which rely on their magnetic sense to navigate during migration. It has also been observed in insects such as bees. Cattle make use of magnetoception to align themselves in a north–south direction. Magnetotactic bacteria build miniature magnets inside themselves and use them to determine their orientation relative to the Earth's magnetic field. There has been some recent (tentative) research suggesting that cryptochrome proteins in the human eye, which respond particularly well to blue light, could facilitate magnetoception in humans.
Echolocation
Certain animals, including bats and cetaceans, have the ability to determine orientation to other objects through interpretation of reflected sound (as in sonar). They most often use this to navigate through poor lighting conditions or to identify and track prey. It is currently uncertain whether this is simply an extremely well-developed post-sensory interpretation of auditory perceptions or whether it actually constitutes a separate sense. Resolution of the issue will require brain scans of animals while they actually perform echolocation, a task that has proven difficult in practice.
Blind people report they are able to navigate and in some cases identify an object by interpreting reflected sounds (especially their own footsteps), a phenomenon known as human echolocation.
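The physical principle behind echolocation-based ranging can be illustrated with a short calculation. The sketch below is a minimal illustration of time-of-flight ranging, not a model of how animals or people neurally process echoes; the propagation speeds and delays are assumed round figures.

```python
# Minimal time-of-flight ranging: an emitted pulse reflects off a target and
# returns after a delay; range = (propagation speed x round-trip time) / 2.

def echo_range_m(round_trip_s: float, speed_m_s: float) -> float:
    """Distance to a reflecting target, given the echo delay."""
    return speed_m_s * round_trip_s / 2.0

# Assumed round figures: ~343 m/s for sound in air, ~1500 m/s in seawater.
print(echo_range_m(0.02, 343.0))   # bat-like case in air: 20 ms delay -> ~3.4 m
print(echo_range_m(0.02, 1500.0))  # dolphin-like case in water: 20 ms delay -> ~15 m
```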
Electroreception
Electroreception (or electroception) is the ability to detect electric fields. Several species of fish, sharks, and rays have the capacity to sense changes in electric fields in their immediate vicinity. For cartilaginous fish this occurs through a specialized organ called the Ampullae of Lorenzini. Some fish passively sense changing nearby electric fields; some generate their own weak electric fields, and sense the pattern of field potentials over their body surface; and some use these electric field generating and sensing capacities for social communication. The mechanisms by which electroceptive fish construct a spatial representation from very small differences in field potentials involve comparisons of spike latencies from different parts of the fish's body.
The only orders of mammals that are known to demonstrate electroception are the dolphin and monotreme orders. Among these mammals, the platypus has the most acute sense of electroception.
A dolphin can detect electric fields in water using electroreceptors in vibrissal crypts arrayed in pairs on its snout and which evolved from whisker motion sensors. These electroreceptors can detect electric fields as weak as 4.6 microvolts per centimeter, such as those generated by contracting muscles and pumping gills of potential prey. This permits the dolphin to locate prey from the seafloor where sediment limits visibility and echolocation.
Spiders have been shown to detect electric fields, which they use to judge a suitable time to release silk for 'ballooning'.
Body modification enthusiasts have experimented with magnetic implants to attempt to replicate this sense. However, in general humans (and it is presumed other mammals) can detect electric fields only indirectly by detecting the effect they have on hairs. An electrically charged balloon, for instance, will exert a force on human arm hairs, which can be felt through tactition and identified as coming from a static charge (and not from wind or the like). This is not electroreception, as it is a post-sensory cognitive action.
Hygroreception
Hygroreception is the ability to detect changes in the moisture content of the environment.
Infrared sensing
The ability to sense infrared thermal radiation evolved independently in various families of snakes. Essentially, it allows these reptiles to "see" radiant heat at wavelengths between 5 and 30 μm to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes. It was previously thought that the organs evolved primarily as prey detectors, but it is now believed that they may also be used in thermoregulatory decision making. The facial pit underwent parallel evolution in pitvipers and some boas and pythons, having evolved once in pitvipers and multiple times in boas and pythons. The electrophysiology of the structure is similar between the two lineages, but they differ in gross structural anatomy. Most superficially, pitvipers possess one large pit organ on either side of the head, between the eye and the nostril (the loreal pit), while boas and pythons have three or more comparatively smaller pits lining the upper and sometimes the lower lip, in or between the scales. Those of the pitvipers are the more advanced, having a suspended sensory membrane as opposed to a simple pit structure. Within the family Viperidae, the pit organ is seen only in the subfamily Crotalinae: the pitvipers. The organ is used extensively to detect and target endothermic prey such as rodents and birds, and it was previously assumed that the organ evolved specifically for that purpose. However, recent evidence shows that the pit organ may also be used for thermoregulation. According to Krochmal et al., pitvipers can use their pits for thermoregulatory decision-making while true vipers (vipers that lack heat-sensing pits) cannot.
Although the pits detect IR light, their IR detection mechanism is not similar to that of photoreceptors: while photoreceptors detect light via photochemical reactions, the protein in the pits of snakes is a temperature-sensitive ion channel. It senses infrared signals through a mechanism involving warming of the pit organ, rather than a chemical reaction to light. This is consistent with the thin pit membrane, which allows incoming IR radiation to quickly and precisely warm a given ion channel and trigger a nerve impulse, as well as with the vascularization of the pit membrane, which rapidly cools the ion channel back to its original "resting" or "inactive" temperature.
Other
Pressure detection uses the organ of Weber, a system consisting of three appendages of vertebrae transferring changes in shape of the gas bladder to the middle ear. It can be used to regulate the buoyancy of the fish. Fish like the weather fish and other loaches are also known to respond to low pressure areas but they lack a swim bladder.
Current detection is a detection system of water currents, consisting mostly of vortices, found in the lateral line of fish and aquatic forms of amphibians. The lateral line is also sensitive to low-frequency vibrations. The mechanoreceptors are hair cells, the same mechanoreceptors for vestibular sense and hearing. It is used primarily for navigation, hunting, and schooling. The receptors of the electrical sense are modified hair cells of the lateral line system.
Polarized light direction/detection is used by bees to orient themselves, especially on cloudy days. Cuttlefish, some beetles, and mantis shrimp can also perceive the polarization of light. Most sighted humans can in fact learn to roughly detect large areas of polarization by an effect called Haidinger's brush; however, this is considered an entoptic phenomenon rather than a separate sense.
Slit sensillae of spiders detect mechanical strain in the exoskeleton, providing information on force and vibrations.
Plant sensation
By using a variety of sense receptors, plants sense light, temperature, humidity, chemical substances, chemical gradients, reorientation, magnetic fields, infections, tissue damage and mechanical pressure. The absence of a nervous system notwithstanding, plants interpret and respond to these stimuli by a variety of hormonal and cell-to-cell communication pathways that result in movement, morphological changes and physiological state alterations at the organism level, that is, result in plant behavior. Such physiological and cognitive functions are generally not believed to give rise to mental phenomena or qualia, however, as these are typically considered the product of nervous system activity. The emergence of mental phenomena from the activity of systems functionally or computationally analogous to that of nervous systems is, however, a hypothetical possibility explored by some schools of thought in the philosophy of mind field, such as functionalism and computationalism.
However, plants can perceive the world around them, and might be able to emit airborne sounds similar to "screaming" when stressed. Those noises are not detectable by human ears, but organisms with a hearing range that extends into ultrasonic frequencies—like mice, bats or perhaps other plants—could hear the plants' cries from a distance.
Artificial sensation and perception
Machine perception is the capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them. Computers take in and respond to their environment through attached hardware. Until recently, input was limited to a keyboard, joystick or a mouse, but advances in technology, both in hardware and software, have allowed computers to take in sensory input in a way similar to humans.
Culture
In the time of William Shakespeare, there were commonly reckoned to be five wits or five senses. At that time, the words "sense" and "wit" were synonyms, so the senses were known as the five outward wits. This traditional concept of five senses is common today.
The traditional five senses are enumerated as the "five material faculties" in Hindu literature. They appear in allegorical representation as early as in the Katha Upanishad (roughly 6th century BC), as five horses drawing the "chariot" of the body, guided by the mind as "chariot driver".
Depictions of the five traditional senses as allegory became a popular subject for seventeenth-century artists, especially among Dutch and Flemish Baroque painters. A typical example is Gérard de Lairesse's Allegory of the Five Senses (1668), in which each of the figures in the main group alludes to a sense: Sight is the reclining boy with a convex mirror, hearing is the cupid-like boy with a triangle, smell is represented by the girl with flowers, taste is represented by the woman with the fruit, and touch is represented by the woman holding the bird.
In Buddhist philosophy, Ayatana or "sense-base" includes the mind as a sense organ, in addition to the traditional five. This addition to the commonly acknowledged senses may arise from the psychological orientation involved in Buddhist thought and practice. The mind considered by itself is seen as the principal gateway to a different spectrum of phenomena that differ from the physical sense data. This way of viewing the human sense system indicates the importance of internal sources of sensation and perception that complement our experience of the external world.
See also
Aesthesis
Apperception
Attention
Chemesthesis
Extrasensory perception
Entoptic phenomenon
Increased sensitivity:
Hyperacusis
Hyperesthesia
Supertaster
Illusions
Auditory illusion
Optical illusion
Touch illusion
Multisensory integration
Phantom limb
Sensation and perception psychology
Sense of direction
Sensitivity (human)
Sensorium
Sensory processing disorder
Synesthesia (Ideasthesia)
References
External links
The 2004 Nobel Prize in Physiology or Medicine (announced 4 October 2004) was won by Richard Axel and Linda Buck for their work explaining olfaction, published first in a joint paper in 1991 that described the very large family of about one thousand genes for odorant receptors and how the receptors link to the brain.
Answers to several questions related to senses and human feeling from curious kids
The Physiology of the Senses tutorial—12 animated chapters on vision, hearing, touch, balance and memory.
Sensory systems
Hormesis
Hormesis is a two-phased dose-response relationship to an environmental agent whereby low-dose amounts have a beneficial effect and high-dose amounts are either inhibitory to function or toxic. Within the hormetic zone, the biological response to low-dose amounts of some stressors is generally favorable. An example is the breathing of oxygen, which is required in low amounts (in air) via respiration in living animals, but can be toxic in high amounts, even in a managed clinical setting.
In toxicology, hormesis is a dose-response phenomenon to xenobiotics or other stressors.
In physiology and nutrition, hormesis has regions extending from low-dose deficiencies to homeostasis, and potential toxicity at high levels. Physiological concentrations of an agent above or below homeostasis may adversely affect an organism, where the hormetic zone is a region of homeostasis of balanced nutrition. In pharmacology, the hormetic zone is similar to the therapeutic window.
In the context of toxicology, the hormesis model of dose response is vigorously debated. The biochemical mechanisms by which hormesis works (particularly in applied cases pertaining to behavior and toxins) remain under early laboratory research and are not well understood.
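As a rough illustration of the biphasic shape described above, the following sketch combines a saturating low-dose benefit term with a steeper high-dose toxicity term. It is a toy model with invented parameters, intended only to show the qualitative hormetic shape; it is not a fitted or validated dose-response model.

```python
# A minimal toy model of a biphasic (hormetic) dose-response curve:
# a saturating low-dose benefit term minus a steeper high-dose toxicity term.
# All parameters below are invented for illustration, not fitted to any data.

def net_effect(dose, benefit=1.0, k_benefit=1.0, toxicity=2.0, k_toxicity=10.0):
    stimulation = benefit * dose / (k_benefit + dose)             # dominates at low dose
    inhibition = toxicity * dose**2 / (k_toxicity**2 + dose**2)   # dominates at high dose
    return stimulation - inhibition

for d in (0.1, 1, 5, 20, 50):
    print(f"dose={d:>5}: net effect={net_effect(d):+.2f}")
# Positive values fall in the "hormetic zone"; large doses flip the sign (toxicity).
```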
Etymology
The term "hormesis" derives from Greek hórmēsis for "rapid motion, eagerness", itself from ancient Greek to excite. The same Greek root provides the word hormone. The term "hormetics" is used for the study of hormesis. The word hormesis was first reported in English in 1943.
History
A form of hormesis famous in antiquity was Mithridatism, the practice whereby Mithridates VI of Pontus supposedly made himself immune to a variety of toxins by regular exposure to small doses. Mithridate and theriac, polypharmaceutical electuaries claiming descent from his formula and initially including flesh from poisonous animals, were consumed for centuries by emperors, kings, and queens as protection against poison and ill health. In the Renaissance, the Swiss doctor Paracelsus said, "All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison."
German pharmacologist Hugo Schulz first described such a phenomenon in 1888 following his own observations that the growth of yeast could be stimulated by small doses of poisons. This was coupled with the work of German physician Rudolph Arndt, who studied animals given low doses of drugs, eventually giving rise to the Arndt–Schulz rule. Arndt's advocacy of homeopathy contributed to the rule's diminished credibility in the 1920s and 1930s. The term "hormesis" was coined and used for the first time in a scientific paper by Chester M. Southam and J. Ehrlich in 1943 in the journal Phytopathology, volume 33, pp. 517–541.
In 2004, Edward Calabrese evaluated the concept of hormesis. Over 600 substances show a U-shaped dose–response relationship; Calabrese and Baldwin wrote: "One percent (195 out of 20,285) of the published articles contained 668 dose-response relationships that met the entry criteria [of a U-shaped response indicative of hormesis]"
Examples
Carbon monoxide
Carbon monoxide is produced in small quantities across phylogenetic kingdoms, where it has essential roles as a neurotransmitter (subcategorized as a gasotransmitter). The majority of endogenous carbon monoxide is produced by heme oxygenase; the loss of heme oxygenase and subsequent loss of carbon monoxide signaling has catastrophic implications for an organism. In addition to physiological roles, small amounts of carbon monoxide can be inhaled or administered in the form of carbon monoxide-releasing molecules as a therapeutic agent.
Regarding the hormetic curve graph:
Deficiency zone: an absence of carbon monoxide signaling has toxic implications
Hormetic zone / region of homeostasis: small amount of carbon monoxide has a positive effect:
essential as a neurotransmitter
beneficial as a pharmaceutical
Toxicity zone: excessive exposure results in carbon monoxide poisoning
Oxygen
Many organisms maintain a hormesis relationship with oxygen, which follows a hormetic curve similar to carbon monoxide:
Deficiency zone: hypoxia / asphyxia
Hormetic zone / region of homeostasis
Toxicity zone: oxidative stress
Physical exercise
Physical exercise intensity may exhibit a hormetic curve. Individuals with low levels of physical activity are at risk for some diseases; however, individuals engaged in moderate, regular exercise may experience less disease risk.
Mitohormesis
The possible effect of small amounts of oxidative stress is under laboratory research. Mitochondria are sometimes described as "cellular power plants" because they generate most of the cell's supply of adenosine triphosphate (ATP), a source of chemical energy. Reactive oxygen species (ROS) have been regarded as unwanted byproducts of oxidative phosphorylation in mitochondria by proponents of the free-radical theory of aging promoted by Denham Harman. The free-radical theory states that compounds inactivating ROS would lead to a reduction of oxidative stress and thereby produce an increase in lifespan, although support for this theory comes mainly from basic research. However, in over 19 clinical trials, "nutritional and genetic interventions to boost antioxidants have generally failed to increase life span."
Whether this concept applies to humans remains to be shown, although a 2007 epidemiological study supports the possibility of mitohormesis, indicating that supplementation with beta-carotene, vitamin A or vitamin E may increase disease prevalence in humans.
Alcohol
Alcohol is believed to be hormetic in preventing heart disease and stroke, although the benefits of light drinking may have been exaggerated. The gut microbiome of a typical healthy individual naturally ferments small amounts of ethanol, and in rare cases dysbiosis leads to auto-brewery syndrome. It therefore remains unclear whether any benefits of alcohol derive from the behavior of consuming alcoholic drinks or from ethanol acting as a homeostatic factor in normal physiology via metabolites of the commensal microbiota.
In 2012, researchers at UCLA found that tiny amounts (1 mM, or 0.005%) of ethanol doubled the lifespan of Caenorhabditis elegans, a roundworm frequently used in biological studies, that were starved of other nutrients. Higher doses of 0.4% provided no longevity benefit. However, worms exposed to 0.005% did not develop normally (their development was arrested). The authors argue that the worms were using ethanol as an alternative energy source in the absence of other nutrition, or had initiated a stress response. They did not test the effect of ethanol on worms fed a normal diet.
Methylmercury
In 2010, a paper in the journal Environmental Toxicology & Chemistry showed that low doses of methylmercury, a potent neurotoxic pollutant, improved the hatching rate of mallard eggs. The author of the study, Gary Heinz, who led the study for the U.S. Geological Survey at the Patuxent Wildlife Research Center in Beltsville, stated that other explanations are possible. For instance, the flock he studied might have harbored some low, subclinical infection and that mercury, well known to be antimicrobial, might have killed the infection that otherwise hurt reproduction in the untreated birds.
Radiation
Ionizing radiation
Hormesis has been observed in a number of cases in humans and animals exposed to chronic low doses of ionizing radiation. A-bomb survivors who received high doses exhibited shortened lifespan and increased cancer mortality, but those who received low doses had lower cancer mortality than the Japanese average.
In Taiwan, recycled radiocontaminated steel was inadvertently used in the construction of over 100 apartment buildings, causing the long-term exposure of 10,000 people. The average dose rate was 50 mSv/year, and a subset of the population (1,000 people) received a total dose of over 4,000 mSv over ten years. Under the linear no-threshold (LNT) model widely used by regulatory bodies, the expected cancer deaths in this population would have been 302, with 70 caused by the extra ionizing radiation and the remainder by natural background radiation. The observed rate, though, was quite low: 7 cancer deaths, against the 232 that the LNT model would predict from natural background radiation alone. This has been interpreted as evidence of ionizing radiation hormesis.
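For context, an LNT-style excess-risk estimate is formed by multiplying a population's collective dose by a nominal risk coefficient. The sketch below is only an illustration of that arithmetic: the 5% per sievert coefficient is a commonly cited nominal figure, and the collective dose is an assumed value chosen so the output matches the roughly 70 excess deaths quoted above; neither number is taken from the underlying study.

```python
# How a linear no-threshold (LNT) excess-risk estimate is typically formed:
# excess cases ~= collective dose (person-Sv) x risk coefficient (cases per person-Sv).
# Both inputs below are assumptions for illustration, not figures from the study.

RISK_PER_PERSON_SV = 0.05          # assumed nominal fatal-cancer risk coefficient
collective_dose_person_sv = 1400   # assumed collective dose for the exposed cohort

expected_excess = collective_dose_person_sv * RISK_PER_PERSON_SV
print(f"LNT-predicted excess cancer deaths: {expected_excess:.0f}")  # ~70
```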
Chemical and ionizing radiation combined
No experiment can be performed in perfect isolation. Thick lead shielding to rule out the effects of ionizing radiation on a chemical dose experiment can be built and rigorously controlled for in the laboratory, but certainly not in the field; likewise, the same applies to chemical exposures in ionizing radiation studies. Ionizing radiation is released when an unstable nucleus decays, producing new nuclides and energy in the form of electromagnetic waves or energetic particles. The resulting materials are then free to interact with elements of the environment, and the released energy can drive further ionizing interactions.
The resulting confusion in the low-dose exposure field (radiation and chemical) arises from a lack of consideration of this concept, as described by Mothersill and Seymour.
Nucleotide excision repair
Veterans of the Gulf War (1991) who suffered from the persistent symptoms of Gulf War Illness (GWI) were likely exposed to stresses from toxic chemicals and/or radiation. The DNA damaging (genotoxic) effects of such exposures can be, at least partially, overcome by the DNA nucleotide excision repair (NER) pathway. Lymphocytes from GWI veterans exhibited a significantly elevated level of NER repair. It was suggested that this increased NER capability in exposed veterans was likely a hormetic response, that is, an induced protective response resulting from battlefield exposure.
Applications
Effects in aging
One of the areas where the concept of hormesis has been explored extensively with respect to its applicability is aging. Since the basic survival capacity of any biological system depends on its homeostatic ability, biogerontologists have proposed that exposing cells and organisms to mild stress should result in an adaptive, or hormetic, response with various biological benefits. Preliminary evidence suggests that repetitive mild stress exposure may have anti-aging effects in laboratory models. Some mild stresses used for such studies on the application of hormesis in aging research and interventions are heat shock, irradiation, prooxidants, hypergravity, and food restriction. Compounds that may modulate stress responses in cells have been termed "hormetins".
Controversy
Hormesis implies that low doses of otherwise dangerous substances may have benefits. Concerns exist that the concept has been leveraged by lobbyists to weaken environmental regulation of some well-known toxic substances in the US.
Radiation controversy
The hypothesis of hormesis has generated the most controversy when applied to ionizing radiation. This hypothesis is called radiation hormesis. For policy-making purposes, the commonly accepted model of dose response in radiobiology is the linear no-threshold model (LNT), which assumes a strictly linear dependence between the risk of radiation-induced adverse health effects and radiation dose, implying that there is no safe dose of radiation for humans.
Nonetheless, many countries including the Czech Republic, Germany, Austria, Poland, and the United States have radon therapy centers whose primary operating principle is the assumption of radiation hormesis, that is, a beneficial impact of small doses of radiation on human health. Countries such as Germany and Austria have at the same time imposed very strict antinuclear regulations, a combination that has been described as a radiophobic inconsistency.
The United States National Research Council (part of the National Academy of Sciences), the National Council on Radiation Protection and Measurements (a body commissioned by the United States Congress) and the United Nations Scientific Committee on the Effects of Ionizing Radiation all agree that radiation hormesis is not clearly shown, nor clearly the rule for radiation doses.
The United States-based National Council on Radiation Protection and Measurements stated in 2001 that evidence for radiation hormesis is insufficient and that radiation protection authorities should continue to apply the LNT model for purposes of risk estimation.
A 2005 report commissioned by the French National Academy concluded that evidence for hormesis occurring at low doses is sufficient and LNT should be reconsidered as the methodology used to estimate risks from low-level sources of radiation, such as deep geological repositories for nuclear waste.
Policy consequences
Hormesis remains largely unknown to the public; taking it into account would require a change in how policy assesses the exposure risk of small doses of a possible toxin.
See also
Calorie restriction
Michael Ristow
Petkau effect
Radiation hormesis
Stochastic resonance
Mithridatism
Antifragility
Xenohormesis
References
External links
International Dose-Response Society
Clinical pharmacology
Radiobiology
Toxicology
Health paradoxes
Life-support system
A life-support system is the combination of equipment that allows survival in an environment or situation that would not support that life in its absence. It is generally applied to systems supporting human life in situations where the outside environment is hostile, such as outer space or underwater, or medical situations where the health of the person is compromised to the extent that the risk of death would be high without the function of the equipment.
In human spaceflight, a life-support system is a group of devices that allow a human being to survive in outer space.
The US government space agency NASA and private spaceflight companies use the phrase "environmental control and life-support system", or the acronym ECLSS, when describing these systems. The life-support system may supply air, water and food. It must also maintain the correct body temperature and an acceptable pressure on the body, and deal with the body's waste products. Shielding against harmful external influences such as radiation and micro-meteorites may also be necessary. Components of the life-support system are life-critical, and are designed and constructed using safety engineering techniques.
In underwater diving, the breathing apparatus is considered to be life support equipment, and a saturation diving system is considered a life-support system – the personnel who are responsible for operating it are called life support technicians. The concept can also be extended to submarines, crewed submersibles and atmospheric diving suits, where the breathing gas requires treatment to remain respirable, and the occupants are isolated from the outside ambient pressure and temperature.
Medical life-support systems include heart-lung machines, medical ventilators and dialysis equipment.
Human physiological and metabolic needs
A crewmember of typical size requires a certain mass of food, water, and oxygen per day to perform standard activities on a space mission, and outputs a similar total mass in the form of waste solids, waste liquids, and carbon dioxide; the oxygen, food, and water consumed are converted through the body's physiological processes into solid wastes, liquid wastes, and carbon dioxide, and the inputs and outputs must obey the principle of mass balance. These levels can vary with the activity level of a specific mission assignment. Actual water use during space missions is typically about double the metabolic requirement, mainly due to non-biological use (e.g. showering). Additionally, the volume and variety of waste products varies with mission duration to include hair, fingernails, skin flaking, and other biological wastes in missions exceeding one week in length. Other environmental considerations such as radiation, gravity, noise, vibration, and lighting also factor into human physiological response in outer space, though not with the more immediate effect that the metabolic parameters have.
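The mass-balance bookkeeping described above can be sketched as follows. The per-person daily figures are placeholder assumptions used only to show the accounting; they are not mission-design values.

```python
# Mass-balance bookkeeping for crew metabolic consumables, as described above.
# The per-person daily figures are placeholder assumptions; only the balance
# principle (inputs ~= outputs) is the point of this sketch.

DAILY_INPUT_KG = {"oxygen": 0.84, "food (dry)": 0.62, "drinking water": 3.5}          # assumed
DAILY_OUTPUT_KG = {"carbon dioxide": 1.0, "liquid waste": 3.9, "solid waste": 0.06}   # assumed

def mission_totals(crew: int, days: int, per_day: dict) -> dict:
    """Scale per-person daily masses to a whole crew over a whole mission."""
    return {k: round(v * crew * days, 1) for k, v in per_day.items()}

crew, days = 4, 30
print("Consumed (kg):", mission_totals(crew, days, DAILY_INPUT_KG))
print("Produced (kg):", mission_totals(crew, days, DAILY_OUTPUT_KG))
# Total input and output masses balance, as required by the text above.
```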
Atmosphere
Outer space life-support systems maintain atmospheres composed, at a minimum, of oxygen, water vapor and carbon dioxide. The partial pressure of each component gas adds to the overall barometric pressure.
However, the elimination of diluent gases substantially increases fire risks, especially in ground operations when for structural reasons the total cabin pressure must exceed the external atmospheric pressure; see Apollo 1. Furthermore, oxygen toxicity becomes a factor at high oxygen concentrations. For this reason, most modern crewed spacecraft use conventional air (nitrogen/oxygen) atmospheres and use pure oxygen only in pressure suits during extravehicular activity where acceptable suit flexibility mandates the lowest inflation pressure possible.
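The trade-off between a conventional two-gas cabin and a low-pressure pure-oxygen suit comes down to Dalton's-law arithmetic: each gas contributes a partial pressure equal to its fraction of the total pressure. The pressures in this sketch are illustrative round numbers (a sea-level-like cabin and a roughly 4.3 psi suit), not the specification of any particular vehicle.

```python
# Dalton's law: partial pressure = gas fraction x total pressure.
# Values below are illustrative round numbers, not vehicle specifications.

def partial_pressure_kpa(total_kpa: float, fraction: float) -> float:
    return total_kpa * fraction

sea_level_cabin = 101.3          # kPa, air-like nitrogen/oxygen cabin
print(partial_pressure_kpa(sea_level_cabin, 0.21))   # ~21 kPa of oxygen

pure_o2_suit = 29.6              # kPa (~4.3 psi), low-pressure pure-oxygen suit
print(partial_pressure_kpa(pure_o2_suit, 1.0))       # ~30 kPa of oxygen, comparable availability
```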
Water
Water is consumed by crew members for drinking, cleaning activities, EVA thermal control, and emergency uses. It must be stored, used, and reclaimed (from waste water and exhaled water vapor) efficiently since no on-site sources currently exist for the environments reached in the course of human space exploration. Future lunar missions may utilize water sourced from polar ices; Mars missions may utilize water from the atmosphere or ice deposits.
Food
All space missions to date have used supplied food. Life-support systems could include a plant cultivation system which allows food to be grown within buildings or vessels. This would also regenerate water and oxygen. However, no such system has flown in outer space as yet. Such a system could be designed so that it reuses most (otherwise lost) nutrients. This is done, for example, by composting toilets which reintegrate waste material (excrement) back into the system, allowing the nutrients to be taken up by the food crops. The food coming from the crops is then consumed again by the system's users and the cycle continues. The logistics and area requirements involved however have been prohibitive in implementing such a system to date.
Gravity
Depending on the length of the mission, astronauts may need artificial gravity to reduce the effects of space adaptation syndrome, body fluid redistribution, and loss of bone and muscle mass. Two methods of generating artificial weight in outer space exist.
Linear acceleration
If a spacecraft's engines could produce thrust continuously on the outbound trip, with a thrust equal to the ship's weight, the ship would accelerate continuously at about 9.8 metres per second per second and the crew would experience a pull toward the ship's aft bulkhead at normal Earth gravity (one g). The effect is proportional to the rate of acceleration. When the ship reaches the halfway point, it would turn around and produce thrust in the retrograde direction to slow down.
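A quick, purely Newtonian sketch of what sustained 1 g thrust implies, ignoring propellant consumption and relativistic effects:

```python
# Constant 1-g acceleration kinematics for the continuous-thrust scheme above.
# Newtonian only; ignores propellant mass and relativity. Illustrative values.

G = 9.81  # m/s^2, one Earth gravity

def velocity_after(seconds: float, accel: float = G) -> float:
    return accel * seconds

def distance_after(seconds: float, accel: float = G) -> float:
    return 0.5 * accel * seconds**2

one_day = 86_400  # seconds
print(f"after 1 day: v = {velocity_after(one_day)/1000:.0f} km/s, "
      f"d = {distance_after(one_day)/1e9:.1f} million km")
```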
Rotation
Alternatively, if the ship's cabin is designed with a large cylindrical wall, or with a long beam extending to another cabin section or counterweight, spinning it at an appropriate speed will cause centrifugal force to simulate the effect of gravity. If ω is the angular velocity of the ship's spin, then the acceleration at a radius r is a = ω²r.
Notice the magnitude of this effect varies with the radius of rotation, which crewmembers might find inconvenient depending on the cabin design. Also, the effects of Coriolis force (a force imparted at right angles to motion within the cabin) must be dealt with. And there is concern that rotation could aggravate the effects of vestibular disruption.
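Using the relation a = ω²r above, one can estimate the spin rate required for one Earth gravity at a given radius; the radii in this sketch are arbitrary illustrative values.

```python
# Spin rate needed for one Earth gravity using a = omega^2 * r (from the text).
import math

G = 9.81  # m/s^2

def rpm_for_gravity(radius_m: float, accel: float = G) -> float:
    omega = math.sqrt(accel / radius_m)   # angular velocity in rad/s
    return omega * 60 / (2 * math.pi)     # convert to revolutions per minute

for r in (10, 50, 200):   # illustrative cabin/beam radii in metres
    print(f"radius {r:>3} m -> {rpm_for_gravity(r):4.1f} rpm")
# Larger radii need slower spin, which also reduces the Coriolis effects noted above.
```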
Space vehicle systems
Gemini, Mercury, and Apollo
American Mercury, Gemini and Apollo spacecraft contained 100% oxygen atmospheres, suitable for short duration missions, to minimize weight and complexity.
Space Shuttle
The Space Shuttle was the first American spacecraft to have an Earth-like atmospheric mixture, comprising 22% oxygen and 78% nitrogen. For the Space Shuttle, NASA includes in the ECLSS category systems that provide both life support for the crew and environmental control for payloads. The Shuttle Reference Manual contains ECLSS sections on: Crew Compartment Cabin Pressurization, Cabin Air Revitalization, Water Coolant Loop System, Active Thermal Control System, Supply and Waste Water, Waste Collection System, Waste Water Tank, Airlock Support, Extravehicular Mobility Units, Crew Altitude Protection System, and Radioisotope Thermoelectric Generator Cooling and Gaseous Nitrogen Purge for Payloads.
Soyuz
The life-support system on the Soyuz spacecraft is called the Kompleks Sredstv Obespecheniya Zhiznideyatelnosti (KSOZh). Vostok, Voskhod and Soyuz contained air-like mixtures at approximately 101 kPa (14.7 psi). The life support system provides a nitrogen/oxygen atmosphere at sea-level partial pressures. The atmosphere is regenerated through KO2 (potassium superoxide) cylinders, which absorb most of the CO2 and water produced by the crew and regenerate oxygen; LiOH (lithium hydroxide) cylinders then absorb the leftover CO2.
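The regeneration chemistry can be summarized by the textbook reactions commonly cited for superoxide regenerators and lithium hydroxide scrubbers; these are generic reactions, not a claim about the exact formulation flown on Soyuz.

```latex
% Generic superoxide-regenerator and LiOH-scrubber reactions (requires amsmath)
\begin{align*}
4\,\mathrm{KO_2} + 2\,\mathrm{CO_2} &\rightarrow 2\,\mathrm{K_2CO_3} + 3\,\mathrm{O_2}\\
4\,\mathrm{KO_2} + 2\,\mathrm{H_2O} &\rightarrow 4\,\mathrm{KOH} + 3\,\mathrm{O_2}\\
2\,\mathrm{LiOH} + \mathrm{CO_2} &\rightarrow \mathrm{Li_2CO_3} + \mathrm{H_2O}
\end{align*}
```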
Plug and play
The Paragon Space Development Corporation is developing a plug and play ECLSS called commercial crew transport-air revitalization system (CCT-ARS) for future spacecraft partially paid for using NASA's Commercial Crew Development (CCDev) funding.
The CCT-ARS provides seven primary spacecraft life support functions in a highly integrated and reliable system: Air temperature control, Humidity removal, Carbon dioxide removal, Trace contaminant removal, Post-fire atmospheric recovery, Air filtration, and Cabin air circulation.
Space station systems
Space station systems include technology that enables humans to live in outer space for a prolonged period of time. Such technology includes filtration systems for human waste disposal and air production.
Skylab
Skylab used 72% oxygen and 28% nitrogen at a total pressure of 5 psi.
Salyut and Mir
The Salyut and Mir space stations contained an air-like oxygen and nitrogen mixture at approximately sea-level pressures of 93.1 kPa (13.5 psi) to 129 kPa (18.8 psi), with an oxygen content of 21% to 40%.
Bigelow commercial space station
The life-support system for the Bigelow Commercial Space Station is being designed by Bigelow Aerospace in Las Vegas, Nevada. The space station will be constructed of habitable Sundancer and BA 330 expandable spacecraft modules. Human-in-the-loop testing of the environmental control and life-support system (ECLSS) for Sundancer has begun.
Natural systems
Natural LSS like Biosphere 2 in Arizona have been tested for future space travel or colonization. These systems are also known as closed ecological systems. They have the advantage of using solar energy as their only primary energy source and of being independent of logistical resupply of fuel. Natural systems have the highest degree of efficiency due to the integration of multiple functions. They also provide a proper ambience for humans, which is necessary for longer stays in outer space.
Underwater and saturation diving habitats
Underwater habitats and surface saturation accommodation facilities provide life-support for their occupants over periods of days to weeks. The occupants are constrained from immediate return to surface atmospheric pressure by decompression obligations of up to several weeks.
The life support system of a surface saturation accommodation facility provides breathing gas and other services to support life for the personnel under pressure. It includes the following components:
Gas compression, mixing and storage facilities
Chamber climate control system – control of temperature and humidity, and filtration of gas
Instrumentation, control, monitoring and communications equipment
Fire suppression systems
Sanitation systems
Underwater habitats differ in that the ambient external pressure is the same as the internal pressure, so some engineering problems are simplified. Underwater habitats balance internal pressure with the ambient external pressure, allowing the occupants free access to the ambient environment within a specific depth range, while saturation divers accommodated in surface systems are transferred under pressure to the working depth in a closed diving bell.
The life support system for the bell provides and monitors the main supply of breathing gas, and the control station monitors the deployment and communications with the divers. Primary gas supply, power and communications to the bell are through a bell umbilical, made up from a number of hoses and electrical cables twisted together and deployed as a unit. This is extended to the divers through the diver umbilicals.
The accommodation life support system maintains the chamber environment within the acceptable range for health and comfort of the occupants. Temperature, humidity, breathing gas quality, sanitation systems and equipment function are monitored and controlled.
Experimental life-support systems
MELiSSA
Micro-Ecological Life Support System Alternative (MELiSSA) is a European Space Agency-led initiative, conceived as an ecosystem based on micro-organisms and higher plants and intended as a tool to gain understanding of the behaviour of artificial ecosystems, and for the development of the technology for a future regenerative life-support system for long-term crewed space missions.
CyBLiSS
CyBLiSS ("Cyanobacterium-Based Life Support Systems") is a concept developed by researchers from several space agencies (NASA, the German Aerospace Center and the Italian Space Agency) which would use cyanobacteria to process resources available on Mars directly into useful products, and into substrates for other key organisms of Bioregenerative life support system (BLSS). The goal is to make future human-occupied outposts on Mars as independent of Earth as possible (explorers living "off the land"), to reduce mission costs and increase safety. Even though developed independently, CyBLiSS would be complementary to other BLSS projects (such as MELiSSA) as it can connect them to materials found on Mars, thereby making them sustainable and expandable there. Instead of relying on a closed loop, new elements found on site can be brought into the system.
See also
Footnotes
References
Further reading
Eckart, Peter. Spaceflight Life Support and Biospherics. Torrance, CA: Microcosm Press; 1996. .
Larson, Wiley J. and Pranke, Linda K., eds. Human Spaceflight: Mission Analysis and Design. New York: McGraw Hill; 1999. .
Reed, Ronald D. and Coulter, Gary R. Physiology of Spaceflight – Chapter 5: 103–132.
Eckart, Peter and Doll, Susan. Environmental Control and Life Support System (ECLSS) – Chapter 17: 539–572.
Griffin, Brand N., Spampinato, Phil, and Wilde, Richard C. Extravehicular Activity Systems – Chapter 22: 707–738.
Wieland, Paul O., Designing for Human Presence in Space: An Introduction to Environmental Control and Life Support Systems. National Aeronautics and Space Administration, NASA Reference Publication RP-1324, 1994
External links
Environmental Control and Life Support System (NASA-KSC)
Dedication and Perspiration Builds the Next Generation Life Support System (NASA, Fall 2007)
Aerospace Biomedical and Life Support Engineering (MIT OpenCourseWare page – Spring 2006)
Space Advanced Life Support (Purdue course page – Spring 2004)
Advanced Life support for missions to Mars
Mars Advanced Life Support
Mars Life Support Systems
Publications on Mars Life Support Systems
Personal Hygiene in Space (Canadian Space Agency)
Plants will Be Critical for Human Life Support Systems in Space
Spacecraft design
Diving support equipment
Medical equipment
CFC
CFC, cfc, or Cfc may stand for:
Science and technology
Chlorofluorocarbon, a class of chemical compounds
Cardiofaciocutaneous Syndrome, a rare and serious genetic disorder
Subpolar oceanic climate (Cfc in the Köppen climate classification), short, generally cool summers and long, mild winters with abundant precipitation year-round
ColdFusion Components, objects or files used in ColdFusion application servers
Carbon fibre composite, a composite carbon based material, used in fusion armour applications
Consideration of future consequences, a personality trait
Continuous function chart, a type of function block diagram for programming both Boolean and analogue expressions; often associated with Sequential function chart (SFC)
Counterflow chiller, a type of heat exchanger.
Education
Canadian Film Centre, an institution for advanced training in film, television and new media in Canada
Central Florida Community College, a public state college in Ocala, Florida
Businesses and organizations
Certificación Fonográfica Centroamericana, music certification organization
California Fried Chicken, an Indonesian fast food chain
CfC Stanbic Holdings, now Stanbic Holdings plc, a financial institution based in Kenya
Computer Film Company, a London digital film special effects company
Chess Federation of Canada, Canada's national chess organization
Citizens for Conservation
Compass Family Center, San Francisco family shelter
Countrywide Financial Corporation, American residential mortgage banking and related businesses
Consumer Federation of California, a California-based, nonprofit consumer advocacy organization.
Common Fund for Commodities, an intergovernmental financial institution for supporting strongly commodity-dependent developing countries
Centers for Change, precursor in New York of the International Workers Party
Chemins de fer de Corse, the railway system in Corsica, France
Politics, law, government, and finance
Cash for clunkers program
Combined Federal Campaign, for charities to fundraise via payroll deductions from US Federal Government employees
Controlled foreign corporation, company owned or controlled primarily by taxpayers of a different jurisdiction
Consumption of fixed capital, accounting term for depreciation of fixed assets
Comisión Federal de Competencia, or Federal Competition Commission, an agency of the Mexican government
ROK/US Combined Forces Command
United States Court of Federal Claims, a United States court
Religion
Carols for Choirs, a British collection of Christmas carol music books
Catechism for Filipino Catholics, Roman Catholic catechism for Filipinos
Champions for Christ, Every Nation Churches outreach to college and professional sportspeople
Couples for Christ, a Catholic Charismatic renewal movement which seeks to preserve the sanctity of the family
Congregatio Fratrum Christianorum, Congregation of Christian Brothers
Catholics for Choice, Catholic pro-choice organization
Military
Combined Forces Command (disambiguation), various multi-national military commands
China Fleet Club, British Navy club in Hong Kong
Canadian Forestry Corps, timber-processing corps of the Canadian Army during both World Wars
Corporal first class, the highest enlistee rank in the Singapore Armed Forces
Entertainment and gaming
SNK vs. Capcom: Card Fighters Clash, a game released for the Neo-Geo console
Celebrity Fit Club, a reality weight-loss show
Sports
Canadian Football Council, precursor of the Canadian Football League
Football clubs
In England:
(association football)
Chasetown F.C.
Chelsea F.C.
Chester F.C.
Chesterfield F.C.
Chipstead F.C.
Chorley F.C.
Clapton F.C.
Clitheroe F.C.
Cobham F.C.
Cove F.C.
Crockenhill F.C.
Croydon F.C.
In Scotland:
(association football)
Celtic F.C.
Clyde F.C.
Clydebank F.C.
Cowdenbeath F.C.
Other association football:
Carrigans F.C., Philippines
Cebu F.C., Philippines
Ceres F.C., Philippines
Changwon City FC, Korea
Charlotte FC, Charlotte, United States
Chemnitzer FC, Germany
Chonburi F.C., Thailand
Cincinnati FC, Cincinnati, United States
Cimarron F.C., Philippines
Clermont-Ferrand Football Club, France
Coritiba Foot Ball Club, Brazil
Chattanooga FC, Tennessee, United States
In Australia:
(Australian rules football)
Carlton Football Club
Clarence Football Club
Collingwood Football Club
In India:
(association football)
Chennaiyin FC
Wasting
In medicine, wasting, also known as wasting syndrome, refers to the process by which a debilitating disease causes muscle and fat tissue to "waste" away. Wasting is sometimes referred to as "acute malnutrition" because it is believed that episodes of wasting have a short duration, in contrast to stunting, which is regarded as chronic malnutrition. An estimated 45 million children under 5 years of age (or 6.7%) were wasted in 2021. Prevalence is highest in Southern Asia, followed by Oceania (excluding Australia and New Zealand) and South-eastern Asia.
Causes
Wasting can be caused by an extremely low energy intake (e.g., caused by famine), nutrient losses due to infection, or a combination of low intake and high loss. Infections and conditions associated with wasting include tuberculosis, chronic diarrhea, AIDS, and superior mesenteric artery syndrome. The mechanism may involve cachectin – also called tumor necrosis factor, a macrophage-secreted cytokine. Caretakers and health providers can sometimes contribute to wasting if the patient is placed on an improper diet. Voluntary weight loss and eating disorders are excluded as causes of wasting.
Diagnosis
Classification
Children: Weight-for-height (WFH). In infants under 24 months, recumbent (supine) length is used. WFH as a percentage of the median reference value is calculated as: WFH% = (observed weight ÷ median weight for a child of the same height) × 100 (see the calculation sketch after this classification list).
Cutoff points may vary, but <80% (close to −2 Z-score) is often used.
Adults:
Body Mass Index (BMI) is the quotient between weight and height squared (kg/m2). An individual with a BMI < 18.5 is regarded as a case of wasting.
Percent of body weight lost (At Tufts, an unintentional loss of 6% or more in 6 months is regarded as wasting)
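A minimal sketch of the classification arithmetic above; the example weights and heights are invented for illustration only.

```python
# Classification arithmetic from the criteria above. Example inputs are invented.

def wfh_percent(weight_kg: float, median_weight_for_height_kg: float) -> float:
    """Weight-for-height as a percentage of the median reference value."""
    return 100.0 * weight_kg / median_weight_for_height_kg

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m**2

# Child: observed 9.0 kg vs a 12.0 kg median reference for that height (invented numbers)
print(f"WFH = {wfh_percent(9.0, 12.0):.0f}% -> wasted (below the ~80% cutoff)")

# Adult: 50 kg at 1.75 m (invented numbers)
adult_bmi = bmi(50, 1.75)
label = "wasted (BMI < 18.5)" if adult_bmi < 18.5 else "not wasted by BMI"
print(f"BMI = {adult_bmi:.1f} -> {label}")
```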
Treatment
Antiretrovirals and anabolic steroids have been used to treat HIV wasting syndrome. Additionally, an increase in protein-rich foods such as peanut butter and legumes (dried beans and peas) can assist in controlling the loss of muscle mass.
See also
Anorexia
Atrophy
Cachexia
Superior mesenteric artery syndrome
Weight loss
References
External links
Chronic Wasting Disease and Potential Transmission to Humans, Centers for Disease Control and Prevention
Unintentional Weight Loss/Wasting, Tufts University Nutrition/Infection Unit
Symptoms and signs
Pulmonary edema
Pulmonary edema (British English: oedema), also known as pulmonary congestion, is excessive fluid accumulation in the tissue or air spaces (usually alveoli) of the lungs. This leads to impaired gas exchange, most often causing shortness of breath (dyspnea), which can progress to hypoxemia and respiratory failure. Pulmonary edema has multiple causes and is traditionally classified as cardiogenic (caused by the heart) or noncardiogenic (all other types not caused by the heart).
Various laboratory tests (CBC, troponin, BNP, etc.) and imaging studies (chest x-ray, CT scan, ultrasound) are often used to diagnose and classify the cause of pulmonary edema.
Treatment is focused on three aspects:
improving respiratory function,
treating the underlying cause, and
preventing further damage and allow full recovery to the lung.
Pulmonary edema can cause permanent organ damage, and when sudden (acute), can lead to respiratory failure or cardiac arrest due to hypoxia. The term edema is from the Greek οἴδημα (oídēma, "swelling"), from οἰδέω (oidéō, "(I) swell").
Pathophysiology
The amount of fluid in the lungs is governed by multiple forces and is visualized using the Starling equation. There are two hydrostatic pressures and two oncotic (protein) pressures that determine the fluid movement within the lung air spaces (alveoli). Of the forces that explain fluid movement, only the pulmonary wedge pressure is obtainable via pulmonary artery catheterization. Due to the complication rate associated with pulmonary artery catheterization, other imaging modalities and diagnostic methods have become more popular. Imbalance in any of these forces can cause fluid movement (or lack of movement) causing a buildup of fluid where it should not normally be. Although rarely clinically measured, these forces allow physicians to classify and subsequently treat the underlying cause of pulmonary edema.
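A common textbook form of the Starling relation referred to above is shown below, where P_c and P_i are the capillary and interstitial hydrostatic pressures, π_c and π_i the corresponding oncotic pressures, K_f the filtration coefficient and σ the reflection coefficient. This is the generic relation, not a parameterization specific to the lung.

```latex
J_v = K_f \left[ (P_c - P_i) - \sigma (\pi_c - \pi_i) \right]
```

Net filtration J_v into the interstitium and alveoli increases when capillary hydrostatic pressure rises (the cardiogenic case) or when the barrier becomes more permeable, lowering the effective reflection coefficient (the noncardiogenic case).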
Classification
Pulmonary edema has a multitude of causes, and is typically classified as cardiogenic or noncardiogenic.
Cardiogenic pulmonary edema is caused by increased hydrostatic pressure causing increased fluid in the pulmonary interstitium and alveoli.
Noncardiogenic causes are associated with the oncotic pressure as discussed above causing malfunctioning barriers in the lungs (increased microvascular permeability).
Cardiogenic
Pulmonary Edema vs Congestive Heart Failure
The term pulmonary edema describes fluid-laden ("wet") lungs. It refers to a pathological condition of the lungs, frequently demonstrated by chest X-ray. Edema of the lungs should be thought of as the result of a disease, such as congestive heart failure, and not as a disease in and of itself; in that case it reflects a cardiac disease rather than a pulmonary disease.
Cardiogenic pulmonary edema is typically caused by either volume overload or impaired left ventricular function. As a result, pulmonary venous pressure rises from its normal average of 15 mmHg. As the pulmonary venous pressure rises, it overwhelms the barriers, and fluid enters the alveoli when the pressure exceeds about 25 mmHg. Whether the cause is acute or chronic determines how fast pulmonary edema develops and how severe the symptoms are. Some of the common causes of cardiogenic pulmonary edema include:
Acute exacerbation of congestive heart failure which is due to the heart's inability to pump the blood out of the pulmonary circulation at a sufficient rate resulting in elevation in pulmonary wedge pressure and edema.
Pericardial tamponade as well as treating pericardial tamponade via pericardiocentesis has shown to cause pulmonary edema as a result of increased left-sided heart strain.
Heart Valve Dysfunction such as mitral valve regurgitation can cause increased pressure and energy on the left side of the heart (increased pulmonary wedge pressure) causing pulmonary edema.
Hypertensive crisis can cause pulmonary edema as the elevation in blood pressure and increased afterload on the left ventricle hinders forward flow in blood vessels and causes the elevation in wedge pressure and subsequent pulmonary edema. In a recent systematic review, it was found that pulmonary edema was the second most common condition associated with hypertensive crisis after ischemic stroke.
Flash pulmonary edema
Flash pulmonary edema is a clinical syndrome that begins suddenly and accelerates rapidly. Essentially all patients will present to the emergency department by ambulance.
The initiating acute event is often a vascular event, such as intense vasoconstriction, rather than a cardiac event such as myocardial infarction. The most noticeable abnormality is edema of the lungs; nevertheless, it is a cardiovascular disease, not a pulmonary disease. It is also known by other appellations, including sympathetic crashing acute pulmonary edema (SCAPE), and it is often associated with severe hypertension. Typically, patients with the syndrome of flash pulmonary edema do not have chest pain and are often not recognized as having a cardiovascular disease. Treatment of FPE should include reducing systemic vascular resistance with nitroglycerin, providing supplemental oxygenation, and decreasing left ventricular filling pressure. Effective treatment is evident by a decrease in dyspnea and normalization of vital signs. Important targets of therapy such as reduced systemic vascular resistance and reduced left atrial pressure are difficult if not impossible to monitor.
Recurrence of FPE is thought to be associated with hypertension and may signify renal artery stenosis. Prevention of recurrence is based on managing or preventing hypertension, coronary artery disease, renovascular hypertension, and heart failure.
Noncardiogenic
Noncardiogenic pulmonary edema is caused by increased microvascular permeability (a leaky alveolar-capillary barrier) leading to increased fluid transfer into the alveolar spaces. The pulmonary artery wedge pressure is typically normal, in contrast to cardiogenic pulmonary edema, where elevated pressure drives the fluid transfer. There are multiple causes of noncardiogenic edema, with multiple subtypes within each cause. Acute respiratory distress syndrome (ARDS) is a type of respiratory failure characterized by rapid onset of widespread inflammation in the lungs. Although ARDS can present with pulmonary edema (fluid accumulation), it is a distinct clinical syndrome that is not synonymous with pulmonary edema.
Direct lung injury
Acute lung injury may cause pulmonary edema directly through injury to the vasculature and parenchyma of the lung. Causes include:
Inhalation of hot or toxic gases (including vaping-associated lung injury)
Pulmonary contusion, i.e., high-energy trauma (e.g. vehicle accidents)
Aspiration, e.g., gastric fluid
Reexpansion, i.e. after large-volume thoracocentesis, resolution of pneumothorax, decortication, or removal of an endobronchial obstruction; effectively a form of negative pressure pulmonary edema.
Reperfusion injury, i.e., following pulmonary thromboendarterectomy or lung transplantation
Swimming induced pulmonary edema also known as immersion pulmonary edema
Transfusion-associated acute lung injury is a specific type of blood-product transfusion injury that occurs when the donor's plasma contains antibodies directed against the recipient, such as anti-HLA or anti-neutrophil antibodies.
Negative pressure pulmonary edema occurs when inspiration is attempted against an obstruction in the upper airway, most commonly as a result of laryngospasm in adults. The markedly negative pressure generated in the chest can rupture capillaries and flood the alveoli with blood.
Pulmonary embolism
Indirect lung injury
Neurogenic causes (seizures, head trauma, strangulation, electrocution).
Transfusion Associated Circulatory Overload occurs when multiple blood transfusions or blood-products (plasma, platelets, etc.) are transfused over a short period of time.
Sepsis (severe infection or inflammation, which may be local or systemic); this is the classical cause of acute lung injury-adult respiratory distress syndrome (ALI-ARDS), which covers many of these indirect causes.
Pancreatitis
Special causes
Some causes of pulmonary edema are less well characterized and arguably represent specific instances of the broader classifications above.
Arteriovenous malformation
Hantavirus pulmonary syndrome
High altitude pulmonary edema (HAPE)
Envenomation, such as with the venom of Atrax robustus
Signs and symptoms
The most common symptom of pulmonary edema is dyspnea, but it may be accompanied by other symptoms relating to inadequate oxygen (hypoxia), such as fast breathing (tachypnea), tachycardia, and cyanosis. Other common symptoms include coughing up blood (classically seen as pink or red, frothy sputum), excessive sweating, anxiety, and pale skin. Other signs include end-inspiratory crackles (crackling sounds heard at the end of a deep breath) on auscultation and the presence of a third heart sound.
Shortness of breath can manifest as orthopnea (inability to breathe sufficiently when lying down flat) and/or paroxysmal nocturnal dyspnea (episodes of severe sudden breathlessness at night). These are common presenting symptoms of chronic and cardiogenic pulmonary edema due to left ventricular failure.
The development of pulmonary edema may be associated with symptoms and signs of "fluid overload" in the lungs; this is a non-specific term to describe the manifestations of right ventricular failure on the rest of the body. These symptoms may include peripheral edema (swelling of the legs, in general, of the "pitting" variety, wherein the skin is slow to return to normal when pressed upon due to fluid), raised jugular venous pressure and hepatomegaly, where the liver is excessively enlarged and may be tender or even pulsatile.
Additional symptoms such as fever, low blood pressure, injuries or burns may be present and can help characterize the cause and subsequent treatment strategies.
Diagnosis
There is no single test that confirms breathlessness is caused by pulmonary edema, as there are many causes of shortness of breath, but there are methods that suggest a high probability of edema.
Lab tests
Low oxygen saturation and disturbed arterial blood gas readings support the proposed diagnosis by suggesting a pulmonary shunt. Blood tests are performed for electrolytes (sodium, potassium) and markers of renal function (creatinine, urea). Elevated creatinine levels may suggest a cardiogenic cause of pulmonary edema. Liver enzymes, inflammatory markers (usually C-reactive protein), a complete blood count, and coagulation studies (PT, aPTT) are also typically requested as part of the further workup. An elevated white blood cell count (WBC) may suggest a non-cardiogenic cause such as sepsis or infection. B-type natriuretic peptide (BNP) is available in many hospitals, sometimes even as a point-of-care test. Low levels of BNP (<100 pg/mL) make a cardiac cause unlikely and point toward noncardiogenic pulmonary edema.
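As a toy illustration of how such a cut-off is applied (using only the threshold quoted above; this is not a validated clinical decision rule, and real interpretation depends on the assay, the patient's age, and renal function):

```python
# Minimal sketch of the BNP cut-off mentioned above (<100 pg/mL argues
# against a cardiac cause). Illustrative only, not clinical guidance.
def cardiac_cause_unlikely(bnp_pg_per_ml: float, cutoff: float = 100.0) -> bool:
    """Return True when BNP is below the cut-off, i.e. a cardiogenic
    cause of the edema is considered unlikely."""
    return bnp_pg_per_ml < cutoff

print(cardiac_cause_unlikely(40.0))   # True: favors a noncardiogenic cause
print(cardiac_cause_unlikely(650.0))  # False: a cardiac cause remains possible
```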
Imaging tests
Chest X-ray has been used for many years to diagnose pulmonary edema due to its wide availability and relatively low cost. A chest X-ray will show fluid in the alveolar walls, Kerley B lines, increased vascular shadowing in a classical batwing peri-hilar pattern, upper lobe diversion (biased blood flow to the superior parts instead of the inferior parts of the lung), and possibly pleural effusions. In contrast, patchy alveolar infiltrates are more typically associated with noncardiogenic edema.
Lung ultrasound, employed by a healthcare provider at the point of care, is also a useful tool to diagnose pulmonary edema; not only is it accurate, but it can quantify the degree of lung water, track changes over time, and differentiate between cardiogenic and non-cardiogenic edema. Lung ultrasound is recommended as a first-line method because of its wide availability, its ability to be performed at the bedside, and its diagnostic utility for other, similar diseases.
Especially in the case of cardiogenic pulmonary edema, urgent echocardiography may strengthen the diagnosis by demonstrating impaired left ventricular function, high central venous pressures and high pulmonary artery pressures leading to pulmonary edema.
Prevention
In those with underlying heart or lung disease, effective control of congestive and respiratory symptoms can help prevent pulmonary edema.
Dexamethasone is in widespread use for the prevention of high altitude pulmonary edema. Sildenafil is used as a preventive treatment for altitude-induced pulmonary edema and pulmonary hypertension. Its mechanism of action is phosphodiesterase inhibition, which raises cGMP, resulting in pulmonary arterial vasodilation, inhibition of smooth muscle cell proliferation and, indirectly, reduced fluid formation in the lungs. While this effect has only recently been discovered, sildenafil is already becoming an accepted treatment for this condition, particularly in situations where the standard treatment of rapid descent (acclimatization) has been delayed for some reason.
Management
The initial management of pulmonary edema, irrespective of the type or cause, is to support vital functions while the edema lasts. Hypoxia may require supplementary oxygen to restore blood oxygen levels; if this is insufficient, mechanical ventilation may be required to prevent complications of hypoxia. If the level of consciousness is decreased, tracheal intubation and mechanical ventilation may be needed to prevent airway compromise. Treatment of the underlying cause is the next priority; pulmonary edema secondary to infection, for instance, would require the administration of appropriate antibiotics or antivirals.
Cardiogenic pulmonary edema
Cardiogenic pulmonary edema is the result of cardiovascular insufficiency. Treatment is directed at improving cardiovascular function and providing supportive care.
Positioning the patient upright may relieve symptoms. A loop diuretic such as furosemide is administered, often together with morphine to reduce respiratory distress. Both the diuretic and morphine may have vasodilator effects, but specific vasodilators may be used (particularly intravenous glyceryl trinitrate or isosorbide dinitrate) provided the blood pressure is adequate.
Continuous positive airway pressure and bilevel positive airway pressure (CPAP/BiPAP) have been demonstrated to reduce mortality and the need for mechanical ventilation in people with severe cardiogenic pulmonary edema.
Cardiogenic pulmonary edema can occur together with cardiogenic shock, in which the cardiac output is insufficient to sustain an adequate blood pressure. This can be treated with inotropic agents or with an intra-aortic balloon pump, but these are regarded as temporary measures while the underlying cause is addressed and the lungs recover.
Prognosis
As pulmonary edema has a wide variety of causes and presentations, the outcome or prognosis is often disease-dependent and is more accurately described in relation to the associated syndrome. It is a major health problem, with one large review reporting an incidence of 7.6% and an associated in-hospital mortality rate of 11.9%. Generally, pulmonary edema is associated with a poor prognosis, with a 50% survival rate at one year and 85% mortality at six years.
References
Medical emergencies
Respiratory diseases principally affecting the interstitium
Endocrinology
Endocrinology (from endocrine + -ology) is a branch of biology and medicine dealing with the endocrine system, its diseases, and its specific secretions known as hormones. It is also concerned with the integration of developmental events (proliferation, growth, and differentiation) and the hormone-driven psychological or behavioral activities of metabolism, growth and development, tissue function, sleep, digestion, respiration, excretion, mood, stress, lactation, movement, reproduction, and sensory perception. Specializations include behavioral endocrinology and comparative endocrinology.
The endocrine system consists of several glands, all in different parts of the body, that secrete hormones directly into the blood rather than into a duct system. Therefore, endocrine glands are regarded as ductless glands. Hormones have many different functions and modes of action; one hormone may have several effects on different target organs, and, conversely, one target organ may be affected by more than one hormone.
The endocrine system
Endocrinology is the study of the endocrine system in the human body. This is a system of glands which secrete hormones. Hormones are chemicals that affect the actions of different organ systems in the body. Examples include thyroid hormone, growth hormone, and insulin. The endocrine system involves a number of feedback mechanisms, so that often one hormone (such as thyroid stimulating hormone) will control the action or release of another secondary hormone (such as thyroid hormone). If there is too much of the secondary hormone, it may provide negative feedback to the primary hormone, maintaining homeostasis.
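As an illustration of the feedback idea, the following toy simulation (a minimal sketch; the hormone names, parameter values, and update rules are arbitrary assumptions, not physiological measurements) shows how negative feedback pulls a secondary hormone back toward a set point:

```python
# Toy negative-feedback model of a hormone axis. Illustrative only:
# hormone names, units, and constants are arbitrary assumptions.
def simulate_axis(steps=50, setpoint=1.0, gain=0.5):
    """A primary hormone (think TSH) stimulates a secondary hormone
    (think thyroid hormone); the secondary hormone feeds back to
    suppress primary secretion, so levels settle near the set point."""
    primary, secondary = 1.0, 0.0
    history = []
    for _ in range(steps):
        # Primary secretion rises when the secondary hormone is below
        # the set point and falls when it is above it (negative feedback).
        primary = max(0.0, primary + gain * (setpoint - secondary))
        # Secondary hormone: production proportional to primary
        # stimulation, with first-order clearance each step.
        secondary = 0.8 * secondary + 0.2 * primary
        history.append((round(primary, 3), round(secondary, 3)))
    return history

if __name__ == "__main__":
    for step, (primary_level, secondary_level) in enumerate(simulate_axis()):
        if step % 10 == 0:
            print(step, primary_level, secondary_level)
```

Running the sketch shows the secondary hormone overshooting and then settling near the set point, a damped-oscillation pattern loosely analogous to how feedback maintains homeostasis.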
In the original 1902 definition by Bayliss and Starling (see below), they specified that, to be classified as a hormone, a chemical must be produced by an organ, be released (in small amounts) into the blood, and be transported by the blood to a distant organ to exert its specific function. This definition holds for most "classical" hormones, but there are also paracrine mechanisms (chemical communication between cells within a tissue or organ), autocrine signals (a chemical that acts on the same cell), and intracrine signals (a chemical that acts within the same cell). A neuroendocrine signal is a "classical" hormone that is released into the blood by a neurosecretory neuron (see article on neuroendocrinology).
Hormones
Griffin and Ojeda identify three different classes of hormones based on their chemical composition:
Amines
Amines, such as norepinephrine, epinephrine, and dopamine (catecholamines), are derived from single amino acids, in this case tyrosine. Thyroid hormones such as 3,5,3'-triiodothyronine (T3) and 3,5,3',5'-tetraiodothyronine (thyroxine, T4) make up a subset of this class because they derive from the combination of two iodinated tyrosine amino acid residues.
Peptide and protein
Peptide hormones and protein hormones consist of three (in the case of thyrotropin-releasing hormone) to more than 200 (in the case of follicle-stimulating hormone) amino acid residues and can have a molecular mass as large as 31,000 grams per mole. All hormones secreted by the pituitary gland are peptide hormones, as are leptin from adipocytes, ghrelin from the stomach, and insulin from the pancreas.
Steroid
Steroid hormones are converted from their parent compound, cholesterol. Mammalian steroid hormones can be grouped into five groups by the receptors to which they bind: glucocorticoids, mineralocorticoids, androgens, estrogens, and progestogens. Some forms of vitamin D, such as calcitriol, are steroid-like and bind to homologous receptors, but lack the characteristic fused ring structure of true steroids.
As a profession
Although every organ system secretes and responds to hormones (including the brain, lungs, heart, intestine, skin, and the kidneys), the clinical specialty of endocrinology focuses primarily on the endocrine organs, meaning the organs whose primary function is hormone secretion. These organs include the pituitary, thyroid, adrenals, ovaries, testes, and pancreas.
An endocrinologist is a physician who specializes in treating disorders of the endocrine system, such as diabetes, hyperthyroidism, and many others (see list of diseases).
Work
The medical specialty of endocrinology involves the diagnostic evaluation of a wide variety of symptoms and variations and the long-term management of disorders of deficiency or excess of one or more hormones.
The diagnosis and treatment of endocrine diseases are guided by laboratory tests to a greater extent than for most specialties. Many diseases are investigated through excitation/stimulation or inhibition/suppression testing. This might involve injection with a stimulating agent to test the function of an endocrine organ. Blood is then sampled to assess the changes of the relevant hormones or metabolites. An endocrinologist needs extensive knowledge of clinical chemistry and biochemistry to understand the uses and limitations of the investigations.
A second important aspect of the practice of endocrinology is distinguishing human variation from disease. Atypical patterns of physical development and abnormal test results must be assessed as indicative of disease or not. Diagnostic imaging of endocrine organs may reveal incidental findings called incidentalomas, which may or may not represent disease.
Endocrinology involves caring for the person as well as the disease. Most endocrine disorders are chronic diseases that need lifelong care. Some of the most common endocrine diseases include diabetes mellitus, hypothyroidism and the metabolic syndrome. Care of diabetes, obesity and other chronic diseases necessitates understanding the patient at the personal and social level as well as the molecular, and the physician–patient relationship can be an important therapeutic process.
Apart from treating patients, many endocrinologists are involved in clinical science and medical research, teaching, and hospital management.
Training
Endocrinologists are specialists of internal medicine or pediatrics. Reproductive endocrinologists deal primarily with problems of fertility and menstrual function—often training first in obstetrics. Most qualify as an internist, pediatrician, or gynecologist for a few years before specializing, depending on the local training system. In the U.S. and Canada, training for board certification in internal medicine, pediatrics, or gynecology after medical school is called residency. Further formal training to subspecialize in adult, pediatric, or reproductive endocrinology is called a fellowship. Typical training for a North American endocrinologist involves 4 years of college, 4 years of medical school, 3 years of residency, and 2 years of fellowship. In the US, adult endocrinologists are board certified by the American Board of Internal Medicine (ABIM) or the American Osteopathic Board of Internal Medicine (AOBIM) in Endocrinology, Diabetes and Metabolism.
Diseases treated by endocrinologists
Diabetes mellitus: This is a chronic condition that affects how your body regulates blood sugar. There are two main types: type 1 diabetes, which is an autoimmune disease that occurs when the body attacks the cells that produce insulin, and type 2 diabetes, which is a condition in which the body either doesn't produce enough insulin or doesn't use it effectively.
Thyroid disorders: These are conditions that affect the thyroid gland, a butterfly-shaped gland located in the front of your neck. The thyroid gland produces hormones that regulate your metabolism, heart rate, and body temperature. Common thyroid disorders include hyperthyroidism (overactive thyroid) and hypothyroidism (underactive thyroid).
Adrenal disorders: The adrenal glands are located on top of your kidneys. They produce hormones that help regulate blood pressure, blood sugar, and the body's response to stress. Common adrenal disorders include Cushing syndrome (excess cortisol production) and Addison's disease (adrenal insufficiency).
Pituitary disorders: The pituitary gland is a pea-sized gland located at the base of the brain. It produces hormones that control many other hormone-producing glands in the body. Common pituitary disorders include acromegaly (excess growth hormone production) and Cushing's disease (excess ACTH production).
Metabolic disorders: These are conditions that affect how your body processes food into energy. Common metabolic disorders include obesity, high cholesterol, and gout.
Calcium and bone disorders: Endocrinologists also treat conditions that affect calcium levels in the blood, such as hyperparathyroidism (too much parathyroid hormone) and osteoporosis (weak bones).
Sexual and reproductive disorders: Endocrinologists can also help diagnose and treat hormonal problems that affect sexual development and function, such as polycystic ovary syndrome (PCOS) and erectile dysfunction.
Endocrine cancers: These are cancers that develop in the endocrine glands. Endocrinologists can help diagnose and treat these cancers.
Diseases and medicine
Diseases
See main article at Endocrine diseases
Endocrinology also involves the study of the diseases of the endocrine system. These diseases may relate to too little or too much secretion of a hormone, too little or too much action of a hormone, or problems with receiving the hormone.
Societies and Organizations
Because endocrinology encompasses so many conditions and diseases, there are many organizations that provide education to patients and the public. The Hormone Foundation is the public education affiliate of The Endocrine Society and provides information on all endocrine-related conditions. Other educational organizations that focus on one or more endocrine-related conditions include the American Diabetes Association, Human Growth Foundation, American Menopause Foundation, Inc., and American Thyroid Association.
In North America the principal professional organizations of endocrinologists include The Endocrine Society, the American Association of Clinical Endocrinologists, the American Diabetes Association, the Lawson Wilkins Pediatric Endocrine Society, and the American Thyroid Association.
In Europe, the European Society of Endocrinology (ESE) and the European Society for Paediatric Endocrinology (ESPE) are the main organisations representing professionals in the fields of adult and paediatric endocrinology, respectively.
In the United Kingdom, the Society for Endocrinology and the British Society for Paediatric Endocrinology and Diabetes are the main professional organisations.
The European Society for Paediatric Endocrinology is the largest international professional association dedicated solely to paediatric endocrinology. There are numerous similar associations around the world.
History
The earliest study of endocrinology began in China. The Chinese were isolating sex and pituitary hormones from human urine and using them for medicinal purposes by 200 BC. They used many complex methods, such as sublimation of steroid hormones. Another method described in Chinese texts (the earliest dating to 1110) specified the use of saponin (from the beans of Gleditsia sinensis) to extract hormones, but gypsum (containing calcium sulfate) was also known to have been used.
Although most of the relevant tissues and endocrine glands had been identified by early anatomists, a more humoral approach to understanding biological function and disease was favoured by the ancient Greek and Roman thinkers such as Aristotle, Hippocrates, Lucretius, Celsus, and Galen, according to Freeman et al., and these theories held sway until the advent of germ theory, physiology, and organ basis of pathology in the 19th century.
In 1849, Arnold Berthold noted that castrated cockerels did not develop combs and wattles or exhibit overtly male behaviour. He found that replacement of testes back into the abdominal cavity of the same bird or another castrated bird resulted in normal behavioural and morphological development, and he concluded (erroneously) that the testes secreted a substance that "conditioned" the blood that, in turn, acted on the body of the cockerel. In fact, one of two other things could have been true: that the testes modified or activated a constituent of the blood or that the testes removed an inhibitory factor from the blood. It was not proven that the testes released a substance that engenders male characteristics until it was shown that the extract of testes could replace their function in castrated animals. Pure, crystalline testosterone was isolated in 1935.
Graves' disease was named after Irish doctor Robert James Graves, who described a case of goiter with exophthalmos in 1835. The German Karl Adolph von Basedow also independently reported the same constellation of symptoms in 1840, while earlier reports of the disease were also published by the Italians Giuseppe Flajani and Antonio Giuseppe Testa, in 1802 and 1810 respectively, and by the English physician Caleb Hillier Parry (a friend of Edward Jenner) in the late 18th century. Thomas Addison was first to describe Addison's disease in 1849.
In 1902 William Bayliss and Ernest Starling performed an experiment in which they observed that acid instilled into the duodenum caused the pancreas to begin secretion, even after they had removed all nervous connections between the two. The same response could be produced by injecting extract of jejunum mucosa into the jugular vein, showing that some factor in the mucosa was responsible. They named this substance "secretin" and coined the term hormone for chemicals that act in this way.
Joseph von Mering and Oskar Minkowski made the observation in 1889 that removing the pancreas surgically led to an increase in blood sugar, followed by a coma and eventual death—symptoms of diabetes mellitus. In 1922, Banting and Best realized that homogenizing the pancreas and injecting the derived extract reversed this condition.
Neurohormones were first identified by Otto Loewi in 1921. He incubated a frog's heart (innervated, with its vagus nerve attached) in a saline bath and left it in the solution for some time. The solution was then used to bathe a second, non-innervated heart. If the vagus nerve of the first heart was stimulated, negative inotropic (beat amplitude) and chronotropic (beat rate) effects were seen in both hearts; this did not occur in either heart if the vagus nerve was not stimulated. The vagus nerve was therefore adding something to the saline solution. The effect could be blocked by atropine, a known inhibitor of vagal stimulation of the heart. Clearly, something was being secreted by the vagus nerve and was affecting the heart. The "Vagusstoff" (as Loewi called it) causing the myotropic (muscle-enhancing) effects was later identified as acetylcholine and norepinephrine. Loewi won the Nobel Prize for his discovery.
Recent work in endocrinology focuses on the molecular mechanisms responsible for triggering the effects of hormones. The first example of such work was done in 1962 by Earl Sutherland. Sutherland investigated whether hormones enter cells to evoke action or stay outside of cells. He studied norepinephrine, which acts on the liver to convert glycogen into glucose via activation of the phosphorylase enzyme. He homogenized the liver into a membrane fraction and a soluble fraction (phosphorylase is soluble), added norepinephrine to the membrane fraction, extracted its soluble products, and added them to the first soluble fraction. Phosphorylase was activated, indicating that norepinephrine's target receptor was on the cell membrane, not located intracellularly. He later identified the compound as cyclic AMP (cAMP) and with this discovery created the concept of second-messenger-mediated pathways. He, like Loewi, won the Nobel Prize for his groundbreaking work in endocrinology.
See also
Comparative endocrinology
Endocrine disease
Hormone
Hormone replacement therapy
Neuroendocrinology
Pediatric endocrinology
Reproductive endocrinology and infertility
Wildlife endocrinology
List of instruments used in endocrinology
References
Endocrine system
Hormones
Motor neuron diseases
Motor neuron diseases or motor neurone diseases (MNDs) are a group of rare neurodegenerative disorders that selectively affect motor neurons, the cells which control the voluntary muscles of the body. They include amyotrophic lateral sclerosis (ALS), progressive bulbar palsy (PBP), pseudobulbar palsy, progressive muscular atrophy (PMA), primary lateral sclerosis (PLS), spinal muscular atrophy (SMA) and monomelic amyotrophy (MMA), as well as some rarer variants resembling ALS.
Motor neuron diseases affect both children and adults. While each motor neuron disease affects patients differently, they all cause movement-related symptoms, mainly muscle weakness. Most of these diseases seem to occur randomly without known causes, but some forms are inherited. Studies into these inherited forms have led to discoveries of various genes (e.g. SOD1) that are thought to be important in understanding how the disease occurs.
Symptoms of motor neuron diseases can be first seen at birth or can come on slowly later in life. Most of these diseases worsen over time; while some, such as ALS, shorten one's life expectancy, others do not. Currently, there are no approved treatments for the majority of motor neuron disorders, and care is mostly symptomatic.
Signs and symptoms
Signs and symptoms depend on the specific disease, but motor neuron diseases typically manifest as a group of movement-related symptoms. They come on slowly, and worsen over the course of more than three months. Various patterns of muscle weakness are seen, and muscle cramps and spasms may occur. One can have difficulty breathing with climbing stairs (exertion), difficulty breathing when lying down (orthopnea), or even respiratory failure if breathing muscles become involved. Bulbar symptoms, including difficulty speaking (dysarthria), difficulty swallowing (dysphagia), and excessive saliva production (sialorrhea), can also occur. Sensation, or the ability to feel, is typically not affected. Emotional disturbance (e.g. pseudobulbar affect) and cognitive and behavioural changes (e.g. problems in word fluency, decision-making, and memory) are also seen. There can be lower motor neuron findings (e.g. muscle wasting, muscle twitching), upper motor neuron findings (e.g. brisk reflexes, Babinski reflex, Hoffman's reflex, increased muscle tone), or both.
Motor neuron diseases are seen both in children and adults. Those that affect children tend to be inherited or familial, and their symptoms are either present at birth or appear before learning to walk. Those that affect adults tend to appear after age 40. The clinical course depends on the specific disease, but most progress or worsen over the course of months. Some are fatal (e.g. ALS), while others are not (e.g. PLS).
Patterns of weakness
Various patterns of muscle weakness occur in different motor neuron diseases. Weakness can be symmetric or asymmetric, and it can occur in body parts that are distal, proximal, or both. According to Statland et al., there are three main weakness patterns that are seen in motor neuron diseases, which are:
Asymmetric distal weakness without sensory loss (e.g. ALS, PLS, PMA, MMA)
Symmetric weakness without sensory loss (e.g. PMA, PLS)
Symmetric focal midline proximal weakness (neck, trunk, bulbar involvement; e.g. ALS, PBP, PLS)
Lower and upper motor neuron findings
Motor neuron diseases are on a spectrum in terms of upper and lower motor neuron involvement. Some have just lower or upper motor neuron findings, while others have a mix of both. Lower motor neuron (LMN) findings include muscle atrophy and fasciculations, and upper motor neuron (UMN) findings include hyperreflexia, spasticity, muscle spasm, and abnormal reflexes.
Pure upper motor neuron diseases, or those with just UMN findings, include PLS.
Pure lower motor neuron diseases, or those with just LMN findings, include PMA.
Motor neuron diseases with both UMN and LMN findings include both familial and sporadic ALS.
Causes
Most cases are sporadic and their causes are usually not known. It is thought that environmental, toxic, viral, or genetic factors may be involved.
DNA damage
TAR DNA-binding protein 43 (TDP-43) is a critical component of the non-homologous end joining (NHEJ) enzymatic pathway that repairs DNA double-strand breaks in pluripotent stem cell-derived motor neurons. TDP-43 is rapidly recruited to double-strand breaks, where it acts as a scaffold for the recruitment of the XRCC4-DNA ligase protein complex that then repairs the breaks. About 95% of ALS patients have abnormalities in the nucleocytoplasmic localization of TDP-43 in spinal motor neurons. In TDP-43-depleted human neural stem cell-derived motor neurons, as well as in spinal cord specimens from patients with sporadic ALS, there is significant double-strand break accumulation and reduced levels of NHEJ.
Associated risk factors
In adults, men are more commonly affected than women.
Diagnosis
Differential diagnosis can be challenging due to the number of overlapping symptoms, shared between several motor neuron diseases. Frequently, the diagnosis is based on clinical findings (i.e. LMN vs. UMN signs and symptoms, patterns of weakness), family history of MND, and a variation of tests, many of which are used to rule out disease mimics, which can manifest with identical symptoms.
Classification
Motor neuron disease describes a collection of clinical disorders, characterized by progressive muscle weakness and the degeneration of the motor neuron on electrophysiological testing. The term "motor neuron disease" has varying meanings in different countries. Similarly, the literature inconsistently classifies which degenerative motor neuron disorders can be included under the umbrella term "motor neuron disease". The four main types of MND are marked (*) in the table below.
All types of MND can be differentiated by two defining characteristics:
Is the disease sporadic or inherited?
Is there involvement of the upper motor neurons (UMN), the lower motor neurons (LMN), or both?
Sporadic or acquired MNDs occur in patients with no family history of degenerative motor neuron disease. Inherited or genetic MNDs adhere to one of the following inheritance patterns: autosomal dominant, autosomal recessive, or X-linked. Some disorders, like ALS, can occur sporadically (85%) or can have a genetic cause (15%) with the same clinical symptoms and progression of disease.
UMNs are motor neurons that project from the cortex down to the brainstem or spinal cord. LMNs originate in the anterior horns of the spinal cord and synapse on peripheral muscles. Both types of motor neuron are necessary for the strong contraction of a muscle, but damage to a UMN can be distinguished from damage to a LMN by physical examination.
Tests
Cerebrospinal fluid (CSF) tests: Analysis of the fluid from around the brain and spinal cord could reveal signs of an infection or inflammation.
Magnetic resonance imaging (MRI): An MRI of the brain and spinal cord is recommended in patients with UMN signs and symptoms to explore other causes, such as a tumor, inflammation, or lack of blood supply (stroke).
Electromyogram (EMG) & nerve conduction study (NCS): The EMG, which evaluates muscle function, and NCS, which evaluates nerve function, are performed together in patients with LMN signs.
For patients with MND affecting the LMNs, the EMG will show evidence of: (1) acute denervation, which is ongoing as motor neurons degenerate, and (2) chronic denervation and reinnervation of the muscle, as the remaining motor neurons attempt to fill in for lost motor neurons.
By contrast, the NCS in these patients is usually normal. It can show a low compound muscle action potential (CMAP), which results from the loss of motor neurons, but the sensory neurons should remain unaffected.
Tissue biopsy: Taking a small sample of a muscle or nerve may be necessary if the EMG/NCS is not specific enough to rule out other causes of progressive muscle weakness, but it is rarely used.
Treatment
There are no known curative treatments for the majority of motor neuron disorders. Please refer to the articles on individual disorders for more details.
Prognosis
The table below lists life expectancy for patients who are diagnosed with MND.
Terminology
In the United States and Canada, the term motor neuron disease usually refers to the group of disorders, while amyotrophic lateral sclerosis is frequently called Lou Gehrig's disease. In the United Kingdom and Australia, the term motor neuron(e) disease is used for amyotrophic lateral sclerosis, although it is not uncommon for it to refer to the entire group.
While MND refers to a specific subset of similar diseases, there are numerous other diseases of motor neurons that are referred to collectively as "motor neuron disorders", for instance the diseases belonging to the spinal muscular atrophies group. However, they are not classified as "motor neuron diseases" by the 11th edition of the International Statistical Classification of Diseases and Related Health Problems (ICD-11), which is the definition followed in this article.
See also
Spinal muscular atrophies
Hereditary motor and sensory neuropathies
References
External links
Motor neuron diseases
Rare diseases
Systemic atrophies primarily affecting the central nervous system
Cough
A cough is a sudden expulsion of air through the large breathing passages which can help clear them of fluids, irritants, foreign particles and microbes. As a protective reflex, coughing can be repetitive, with the cough reflex following three phases: an inhalation, a forced exhalation against a closed glottis, and a violent release of air from the lungs following opening of the glottis, usually accompanied by a distinctive sound.
Frequent coughing usually indicates the presence of a disease. Many viruses and bacteria benefit, from an evolutionary perspective, by causing the host to cough, which helps to spread the disease to new hosts. Irregular coughing is usually caused by a respiratory tract infection but can also be triggered by choking, smoking, air pollution, asthma, gastroesophageal reflux disease, post-nasal drip, chronic bronchitis, lung tumors, heart failure and medications such as angiotensin-converting-enzyme inhibitors (ACE inhibitors) and beta blockers.
Treatment should target the cause; for example, smoking cessation or discontinuing ACE inhibitors. Cough suppressants such as codeine or dextromethorphan are frequently prescribed, but have been demonstrated to have little effect. Other treatment options may target airway inflammation or may promote mucus expectoration. As it is a natural protective reflex, suppressing the cough reflex might have damaging effects, especially if the cough is productive (producing phlegm).
Presentation
Complications
The complications of coughing can be classified as either acute or chronic. Acute complications include cough syncope (fainting spells due to decreased blood flow to the brain when coughs are prolonged and forceful), insomnia, cough-induced vomiting, subconjunctival hemorrhage or "red eye", defecation provoked by coughing and, in women with a prolapsed uterus, urination provoked by coughing. Chronic complications are common and include abdominal or pelvic hernias, fatigue fractures of the lower ribs and costochondritis. Chronic or violent coughing can contribute to damage to the pelvic floor and a possible cystocele.
Differential diagnosis
A cough in children may be either a normal physiological reflex or due to an underlying cause. In healthy children it may be normal in the absence of any disease to cough ten times a day. The most common cause of an acute or subacute cough is a viral respiratory tract infection. A healthy adult also coughs 18.8 times a day on average, but in the population with respiratory disease the geometric mean frequency is 275 times a day. In adults with a chronic cough, i.e. a cough longer than 8 weeks, more than 90% of cases are due to post-nasal drip, asthma, eosinophilic bronchitis, and gastroesophageal reflux disease. The causes of chronic cough are similar in children with the addition of bacterial bronchitis.
Infections
A cough can be the result of a respiratory tract infection such as the common cold, COVID-19, acute bronchitis, pneumonia, pertussis, or tuberculosis. In the vast majority of cases, acute coughs, i.e. coughs shorter than 3 weeks, are due to the common cold. In people with a normal chest X-ray, tuberculosis is a rare finding. Pertussis is increasingly being recognised as a cause of troublesome coughing in adults.
After a respiratory tract infection has cleared, the person may be left with a postinfectious cough. This typically is a dry, non-productive cough that produces no phlegm. Symptoms may include a tightness in the chest, and a tickle in the throat. This cough may often persist for weeks after an illness. The cause of the cough may be inflammation similar to that observed in repetitive stress disorders such as carpal tunnel syndrome. The repetition of coughing produces inflammation which produces discomfort, which in turn produces more coughing. Postinfectious cough typically does not respond to conventional cough treatments. Medication used for postinfectious coughs may include ipratropium to treat the inflammation, as well as cough suppressants to reduce frequency of the cough until inflammation clears. Inflammation may increase sensitivity to other existing issues such as allergies, and treatment of other causes of coughs (such as use of an air purifier or allergy medicines) may help speed recovery.
Reactive airway disease
When coughing is the only complaint of a person who meets the criteria for asthma (bronchial hyperresponsiveness and reversibility), this is termed cough-variant asthma. Atopic cough and eosinophilic bronchitis are related conditions. Atopic cough occurs in individuals with a family history of atopy (an allergic condition), abundant eosinophils in the sputum, but with normal airway function and responsiveness. Eosinophilic bronchitis is characterized by eosinophils in sputum and in bronchoalveolar lavage fluid without airway hyperresponsiveness or an atopic background. This condition responds to treatment with corticosteroids. Cough can also worsen in an acute exacerbation of chronic obstructive pulmonary disease.
Asthma is a common cause of chronic cough in adults and children. Coughing may be the only symptom the person has from their asthma, or asthma symptoms may also include wheezing, shortness of breath, and a tight feeling in their chest. Depending on how severe the asthma is, it can be treated with bronchodilators (medicine which causes the airways to open up) or inhaled steroids. Treatment of the asthma should make the cough go away.
Chronic bronchitis is defined clinically as a persistent cough that produces sputum (phlegm) and mucus for at least three months in two consecutive years. Chronic bronchitis is often the cause of "smoker's cough"; tobacco smoke causes inflammation, secretion of mucus into the airway, and difficulty clearing that mucus out of the airways, and coughing helps clear those secretions out. It may be treated by quitting smoking. Chronic bronchitis may also be caused by pneumoconiosis and long-term fume inhalation.
Gastroesophageal reflux
In people with unexplained cough, gastroesophageal reflux disease should be considered. This occurs when acidic contents of the stomach come back up into the esophagus. Symptoms usually associated with GERD include heartburn, sour taste in the mouth, or a feeling of acid reflux in the chest, although, more than half of the people with cough from GERD do not have any other symptoms. An esophageal pH monitor can confirm the diagnosis of GERD. Sometimes GERD can complicate respiratory ailments related to cough, such as asthma or bronchitis. The treatment involves anti-acid medications and lifestyle changes with surgery indicated in cases not manageable with conservative measures.
Air pollution
Coughing may be caused by air pollution including tobacco smoke, particulate matter, irritant gases, and dampness in a home.
The human health effects of poor air quality are far reaching, but principally affect the body's respiratory system and the cardiovascular system. Individual reactions to air pollutants depend on the type of pollutant a person is exposed to, the degree of exposure, the individual's health status and genetics. People who exercise outdoors on hot, smoggy days, for example, increase their exposure to pollutants in the air.
Foreign body
A foreign body can sometimes be suspected, for example if the cough started suddenly when the patient was eating. Rarely, sutures left behind inside the airway branches can cause coughing. A cough can be triggered by dryness from mouth breathing or recurrent aspiration of food into the windpipe in people with swallowing difficulties.
Drug-induced cough
Drugs used for treatments other than coughs, such as ACE inhibitors which are often used to treat high blood pressure, can sometimes cause cough as a side effect, and stopping their use will stop the cough. Beta blockers similarly cause cough as an adverse event.
Tic cough
A tic cough, previously called a habit cough, is one that responds to behavioral or psychiatric therapy after organic causes have been excluded. Absence of the cough during sleep is common, but not diagnostic. A tic cough is thought to be more common in children than in adults.
A similar disorder is the somatic cough syndrome previously called the psychogenic cough.
Neurogenic cough
Some cases of chronic cough may be attributed to a sensory neuropathic disorder. Treatment for neurogenic cough may include the use of certain neuralgia medications. Coughing may occur in tic disorders such as Tourette syndrome, although it should be distinguished from throat-clearing in this disorder.
Other
Cough may also be caused by conditions affecting the lung tissue such as bronchiectasis, cystic fibrosis, interstitial lung diseases and sarcoidosis. Coughing can also be triggered by benign or malignant lung tumors or mediastinal masses. Through irritation of the nerve, diseases of the external auditory canal (wax, for example) can also cause cough. Cardiovascular diseases associated with cough are heart failure, pulmonary infarction and aortic aneurysm. Nocturnal cough is associated with heart failure, as the heart does not compensate for the increased volume shift to the pulmonary circulation, in turn causing pulmonary edema and resultant cough. Other causes of nocturnal cough include asthma, post-nasal drip and gastroesophageal reflux disease (GERD). Another cause of cough occurring preferentially in supine position is recurrent aspiration.
Given its irritant effect on mammalian tissues, capsaicin is widely used to determine the cough threshold and as a tussive stimulant in clinical research on cough suppressants. Capsaicin is what makes chili peppers spicy, which might explain why workers in factories handling these fruits can develop a cough.
Coughing may also be used for social reasons, and as such is not always involuntary. A voluntary cough, often written as "ahem", can be used to attract attention or express displeasure, as a form of nonverbal, paralingual metacommunication.
Airway clearance
Coughing and huffing are important ways of removing mucus as sputum in many conditions, such as cystic fibrosis and chronic bronchitis.
Pathophysiology
A cough is a protective reflex in healthy individuals which is influenced by psychological factors. The cough reflex is initiated by stimulation of two different classes of afferent nerves, namely the myelinated rapidly adapting receptors, and nonmyelinated C-fibers with endings in the lung.
Diagnostic approach
The type of cough may help in the diagnosis. For instance, an inspiratory "whooping" sound on coughing almost doubles the likelihood that the illness is pertussis.
Blood may occur in small amounts with severe cough of many causes, but larger amounts suggest bronchitis, bronchiectasis, tuberculosis, or primary lung cancer.
Further workup may include labs, x-rays, and spirometry.
Classification
A cough can be classified by its duration, character, quality, and timing. The duration can be acute (of sudden onset) if it is present for less than three weeks, subacute if it is present for between three and eight weeks, and chronic when lasting longer than eight weeks. A cough can be non-productive (dry) or productive (when phlegm is produced that may be coughed up as sputum). It may occur only at night (then called nocturnal cough), during both night and day, or just during the day.
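A minimal sketch of the duration cut-offs just described (how exact boundary values such as exactly three or eight weeks are assigned is a convention choice, not specified in the text):

```python
# Apply the duration cut-offs described above:
# acute < 3 weeks, subacute 3-8 weeks, chronic > 8 weeks.
def classify_cough_duration(weeks: float) -> str:
    if weeks < 3:
        return "acute"
    if weeks <= 8:
        return "subacute"
    return "chronic"

assert classify_cough_duration(2) == "acute"
assert classify_cough_duration(5) == "subacute"
assert classify_cough_duration(10) == "chronic"
```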
A number of characteristic coughs exist. While these have not been found to be diagnostically useful in adults, they are of use in children. A barky cough is part of the common presentation of croup. A staccato cough has been classically described with neonatal chlamydial pneumonia.
Treatment
The treatment of a cough in children is based on the underlying cause. In children half of cases go away without treatment in 10 days and 90% in 25 days.
According to the American Academy of Pediatrics the use of cough medicine to relieve cough symptoms is supported by little evidence and thus not recommended for treating cough symptoms in children. There is tentative evidence that the use of honey is better than no treatment or diphenhydramine in decreasing coughing. It does not alleviate coughing to the same extent as dextromethorphan but it shortens the cough duration better than placebo and salbutamol. A trial of antibiotics or inhaled corticosteroids may be tried in children with a chronic cough in an attempt to treat protracted bacterial bronchitis or asthma respectively. There is insufficient evidence to recommend treating children who have a cough that is not related to a specific condition with inhaled anti-cholinergics.
Because coughing can spread disease through infectious aerosol droplets, it is recommended to cover one's mouth and nose with the forearm, the inside of the elbow, a tissue or a handkerchief while coughing.
Epidemiology
A cough is the most common reason for visiting a primary care physician in the United States.
Other animals
Marine mammals such as dolphins and whales cannot cough. Some invertebrates such as insects and spiders cannot cough or sneeze. Alligators can cough. Domestic animals and vertebrates such as dogs and cats can cough, because of diseases, allergies, dust or choking. In particular, cats are known for coughing before spitting up a hairball.
In other domestic animals, horses can cough because of infections, or due to poor ventilation and dust in enclosed spaces. Kennel cough in dogs can result from a viral or bacterial infection.
Deer can cough similarly to humans as a result of respiratory tract infections, such as parasitic bronchitis caused by a species of Dictyocaulus.
References
Further reading
External links
Toxicity
Toxicity is the degree to which a chemical substance or a particular mixture of substances can damage an organism. Toxicity can refer to the effect on a whole organism, such as an animal, bacterium, or plant, as well as the effect on a substructure of the organism, such as a cell (cytotoxicity) or an organ such as the liver (hepatotoxicity). Sometimes the word is more or less synonymous with poisoning in everyday usage.
A central concept of toxicology is that the effects of a toxicant are dose-dependent; even water can lead to water intoxication when taken in too high a dose, whereas for even a very toxic substance such as snake venom there is a dose below which there is no detectable toxic effect. Toxicity is species-specific, making cross-species analysis problematic. Newer paradigms and metrics are evolving to bypass animal testing, while maintaining the concept of toxicity endpoints.
Etymology
In Ancient Greek medical literature, the adjective τοξικόν (meaning "toxic") was used to describe substances which had the ability of "causing death or serious debilitation or exhibiting symptoms of infection." The word draws its origins from the Greek noun τόξον (meaning "bow"), in reference to the use of bows and poisoned arrows as weapons.
English-speaking American culture has adopted several figurative usages for toxicity, often when describing harmful inter-personal relationships or character traits (e.g. "toxic masculinity").
History
Humans have a deeply rooted history of not only being aware of toxicity but also taking advantage of it as a tool. Archaeologists studying bone arrows from caves of Southern Africa have noted the likelihood that some, dating to 72,000-80,000 years ago, were dipped in specially prepared poisons to increase their lethality. Although limitations of scientific instrumentation make it difficult to prove concretely, archaeologists hypothesize that the practice of making poison arrows was widespread in cultures as early as the Paleolithic era. The San people of Southern Africa have preserved this practice into the modern era, with the knowledge to prepare complex mixtures from poisonous beetles and plant-derived extracts, yielding an arrow-tip product with a shelf life of several months to a year.
Types
There are generally five types of toxicities: chemical, biological, physical, radioactive and behavioural.
Disease-causing microorganisms and parasites are toxic in a broad sense but are generally called pathogens rather than toxicants. The biological toxicity of pathogens can be difficult to measure because the threshold dose may be a single organism. Theoretically one virus, bacterium or worm can reproduce to cause a serious infection. If a host has an intact immune system, the inherent toxicity of the organism is balanced by the host's response; the effective toxicity is then a combination. In some cases, e.g. cholera toxin, the disease is chiefly caused by a nonliving substance secreted by the organism, rather than the organism itself. Such nonliving biological toxicants are generally called toxins if produced by a microorganism, plant, or fungus, and venoms if produced by an animal.
Physical toxicants are substances that, due to their physical nature, interfere with biological processes. Examples include coal dust, asbestos fibres or finely divided silicon dioxide, all of which can ultimately be fatal if inhaled. Corrosive chemicals possess physical toxicity because they destroy tissues, but are not directly poisonous unless they interfere directly with biological activity. Water can act as a physical toxicant if taken in extremely high doses because the concentration of vital ions decreases dramatically with too much water in the body. Asphyxiant gases can be considered physical toxicants because they act by displacing oxygen in the environment but they are inert, not chemically toxic gases.
Radiation can have a toxic effect on organisms.
Behavioral toxicity refers to the undesirable effects of essentially therapeutic levels of medication clinically indicated for a given disorder (DiMascio, Soltys and Shader, 1970). These undesirable effects include anticholinergic effects, alpha-adrenergic blockade, and dopaminergic effects, among others.
Measuring
Toxicity can be measured by its effects on the target (organism, organ, tissue or cell). Because individuals typically have different levels of response to the same dose of a toxic substance, a population-level measure of toxicity is often used which relates the probability of an outcome for a given individual in a population. One such measure is the median lethal dose (LD50). When such data do not exist, estimates are made by comparison to known similar toxic substances, or to similar exposures in similar organisms. Then, "safety factors" are added to account for uncertainties in data and evaluation processes. For example, if a dose of a toxic substance is safe for a laboratory rat, one might assume that one-tenth that dose would be safe for a human, allowing a safety factor of 10 to allow for interspecies differences between two mammals; if the data are from fish, one might use a factor of 100 to account for the greater difference between two chordate classes (fish and mammals). Similarly, an extra protection factor may be used for individuals believed to be more susceptible to toxic effects, such as in pregnancy or with certain diseases. Or, a newly synthesized and previously unstudied chemical that is believed to be very similar in effect to another compound could be assigned an additional protection factor of 10 to account for possible differences in effects that are probably much smaller. This approach is very approximate, but such protection factors are deliberately very conservative, and the method has been found to be useful in a wide variety of applications.
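A minimal sketch of the safety-factor arithmetic described above (the input values below are made-up examples, and the choice of factors in any real assessment depends on the data and the regulatory framework):

```python
# Divide an animal-derived no-observed-adverse-effect level (NOAEL) by
# uncertainty ("safety") factors to obtain a conservative human estimate.
# The numbers here are illustrative, not regulatory values.
def estimated_safe_dose(animal_noael_mg_per_kg: float,
                        interspecies_factor: float = 10.0,
                        extra_protection_factor: float = 1.0) -> float:
    """e.g. a factor of 10 for rat-to-human extrapolation, multiplied by an
    optional extra factor for susceptible individuals or sparse data."""
    combined_factor = interspecies_factor * extra_protection_factor
    return animal_noael_mg_per_kg / combined_factor

# A hypothetical NOAEL of 50 mg/kg/day in rats, with an extra factor of 10
# for human variability, gives an estimate of 0.5 mg/kg/day.
print(estimated_safe_dose(50.0, interspecies_factor=10.0, extra_protection_factor=10.0))
```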
Assessing all aspects of the toxicity of cancer-causing agents involves additional issues, since it is not certain if there is a minimal effective dose for carcinogens, or whether the risk is just too small to see. In addition, it is possible that a single cell transformed into a cancer cell is all it takes to develop the full effect (the "one hit" theory).
It is more difficult to determine the toxicity of chemical mixtures than a pure chemical because each component displays its own toxicity, and components may interact to produce enhanced or diminished effects. Common mixtures include gasoline, cigarette smoke, and industrial waste. Even more complex are situations with more than one type of toxic entity, such as the discharge from a malfunctioning sewage treatment plant, with both chemical and biological agents.
Preclinical toxicity testing on various biological systems reveals the species-, organ- and dose-specific toxic effects of an investigational product. The toxicity of substances can be observed by (a) studying accidental exposures to a substance, (b) in vitro studies using cells or cell lines, and (c) in vivo exposure of experimental animals. Toxicity tests are mostly used to examine specific adverse events or specific endpoints such as cancer, cardiotoxicity, and skin or eye irritation. Toxicity testing also helps calculate the No Observed Adverse Effect Level (NOAEL) dose and is helpful for clinical studies.
Classification
For substances to be regulated and handled appropriately they must be properly classified and labelled. Classification is determined by approved testing measures or calculations and has determined cut-off levels set by governments and scientists (for example, no-observed-adverse-effect levels, threshold limit values, and tolerable daily intake levels). Pesticides provide the example of well-established toxicity class systems and toxicity labels. While currently many countries have different regulations regarding the types of tests, numbers of tests and cut-off levels, the implementation of the Globally Harmonized System has begun unifying these countries.
Global classification looks at three areas: physical hazards (such as explosions and pyrotechnics), health hazards, and environmental hazards.
Health hazards
Health hazards cover the types of toxicity in which substances may cause lethality to the entire body, lethality to specific organs, major or minor damage, or cancer. These are globally accepted definitions of toxicity; anything falling outside a definition cannot be classified as that type of toxicant.
Acute toxicity
Acute toxicity looks at lethal effects following oral, dermal or inhalation exposure. It is split into five categories of severity where Category 1 requires the least amount of exposure to be lethal and Category 5 requires the most exposure to be lethal. The table below shows the upper limits for each category.
Note: The undefined values are expected to be roughly equivalent to the category 5 values for oral and dermal administration.
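A minimal sketch of how the five-category scheme can be applied in practice, assuming the commonly cited GHS upper limits for acute oral toxicity (LD50 values of 5, 50, 300, 2000, and 5000 mg/kg body weight); dermal and inhalation exposure use different cut-offs, and the values here are stated as an assumption rather than taken from the text.

# Assumed GHS acute oral toxicity upper limits (LD50, mg/kg body weight),
# used only to illustrate the five-category scheme described above.
ORAL_UPPER_LIMITS = [(1, 5), (2, 50), (3, 300), (4, 2000), (5, 5000)]

def acute_oral_category(ld50_mg_per_kg):
    for category, upper_limit in ORAL_UPPER_LIMITS:
        if ld50_mg_per_kg <= upper_limit:
            return category
    return None  # above Category 5: not classified for acute oral toxicity

print(acute_oral_category(3))     # 1 (most severe: lethal at the smallest dose)
print(acute_oral_category(2500))  # 5 (least severe)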
Other methods of exposure and severity
Skin corrosion and irritation are determined through a skin patch test, similar to an allergic inflammation patch test. This examines the severity of the damage done, when it occurs and how long it remains, whether it is reversible, and how many test subjects were affected.
For a substance to be classed as corrosive to skin, the damage must penetrate through the epidermis into the dermis within four hours of application and must not be reversible within 14 days. Skin irritation is damage less severe than corrosion that occurs within 72 hours of application, or persists for three consecutive days after application within a 14-day period, or causes inflammation lasting 14 days in two test subjects. Mild skin irritation is minor damage (less severe than irritation) occurring within 72 hours of application or for three consecutive days after application.
Serious eye damage involves tissue damage or degradation of vision which does not fully reverse in 21 days. Eye irritation involves changes to the eye which do fully reverse within 21 days.
Other categories
Respiratory sensitizers cause breathing hypersensitivity when the substance is inhaled.
A substance which is a skin sensitizer causes an allergic response from a dermal application.
Carcinogens induce cancer, or increase the likelihood of cancer occurring.
Neurotoxicity is a form of toxicity in which a biological, chemical, or physical agent produces an adverse effect on the structure or function of the central and/or peripheral nervous system. It occurs when exposure to a substance – specifically, a neurotoxin or neurotoxicant – alters the normal activity of the nervous system in such a way as to cause permanent or reversible damage to nervous tissue.
Reproductively toxic substances cause adverse effects in either sexual function or fertility to either a parent or the offspring.
Specific-target organ toxins damage only specific organs.
Aspiration hazards are solids or liquids which can cause damage through inhalation.
Environmental hazards
An environmental hazard can be defined as any condition, process, or state adversely affecting the environment. These hazards can be physical or chemical and may be present in air, water, or soil. Such conditions can cause extensive harm to humans and other organisms within an ecosystem.
Common types of environmental hazards
Water: detergents, fertilizer, raw sewage, prescription medication, pesticides, herbicides, heavy metals, PCBs
Soil: heavy metals, herbicides, pesticides, PCBs
Air: particulate matter, carbon monoxide, sulfur dioxide, nitrogen dioxide, asbestos, ground-level ozone, lead (from aircraft fuel, mining, and industrial processes)
The EPA maintains a list of priority pollutants for testing and regulation.
Occupational hazards
Workers in various occupations may be at greater risk of several types of toxicity, including neurotoxicity. The expression "mad as a hatter" and the "Mad Hatter" of the book Alice in Wonderland derive from the known occupational toxicity among hatters, who used a toxic mercury compound in shaping felt hats. Workplace chemical exposures may require evaluation by industrial hygiene professionals.
Hazards for small businesses
Hazards from medical waste and prescription disposal
Hazards in the arts
Hazards in the arts have been an issue for artists for centuries, even though the toxicity of their tools, methods, and materials was not always adequately recognized. Lead and cadmium, among other toxic elements, were often incorporated into the names of artists' oil paints and pigments, for example "lead white" and "cadmium red".
In the 20th century, printmakers and other artists began to become aware of the toxic substances, techniques, and fumes in glues, painting mediums, pigments, and solvents, many of which carried no indication of their toxicity on their labels. An example was the use of xylol for cleaning silk screens. Painters began to notice the dangers of breathing painting mediums and thinners such as turpentine. Aware of toxicants in studios and workshops, in 1998 printmaker Keith Howard published Non-Toxic Intaglio Printmaking, which detailed twelve innovative intaglio-type printmaking techniques, including photo etching, digital imaging, and acrylic-resist hand-etching methods, and introduced a new method of non-toxic lithography.
Mapping environmental hazards
There are many environmental health mapping tools. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) that uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund programs. TOXMAP is a resource funded by the US Federal Government. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET) and PubMed, and from other authoritative sources.
Aquatic toxicity
Aquatic toxicity testing subjects key indicator species of fish or crustacea to certain concentrations of a substance in their environment to determine the lethality level. Fish are exposed for 96 hours while crustacea are exposed for 48 hours. While GHS does not define toxicity past 100 mg/L, the EPA currently lists aquatic toxicity as "practically non-toxic" in concentrations greater than 100 ppm.
Note: A category 4 is established for chronic exposure, but simply contains any toxic substance which is mostly insoluble, or has no data for acute toxicity.
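A comparable sketch for the acute aquatic categories, assuming the commonly cited GHS cut-offs of 1, 10, and 100 mg/L for the 96-hour LC50 in fish (48-hour EC50 in crustacea); substances above 100 mg/L fall outside the GHS acute categories, consistent with the EPA's "practically non-toxic" description above.

# Assumed GHS acute aquatic toxicity cut-offs (mg/L), for illustration only.
def acute_aquatic_category(lc50_mg_per_l):
    if lc50_mg_per_l <= 1:
        return "Acute 1"
    if lc50_mg_per_l <= 10:
        return "Acute 2"
    if lc50_mg_per_l <= 100:
        return "Acute 3"
    return "Not classified under GHS (EPA: practically non-toxic above 100 ppm)"

print(acute_aquatic_category(0.5))  # Acute 1
print(acute_aquatic_category(150))  # Not classified under GHS ...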
Factors influencing toxicity
The toxicity of a substance can be affected by many different factors, such as the route of administration (whether the toxicant is applied to the skin, ingested, inhaled, or injected), the duration of exposure (a brief encounter or long-term contact), the number of exposures (a single dose or multiple doses over time), the physical form of the toxicant (solid, liquid, or gas), the concentration of the substance and, in the case of gases, its partial pressure (for a given concentration expressed as a gas fraction, the partial pressure rises with ambient pressure), the genetic makeup of the individual, the individual's overall health, and many others. Several of the terms used to describe these factors are defined below.
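As a concrete illustration of the partial-pressure point just mentioned (the exposure terms themselves are defined immediately below), the same gas fraction exerts a higher partial pressure at higher ambient pressure. A minimal sketch with arbitrary example numbers:

# Partial pressure = gas fraction x ambient pressure, so an unchanged
# concentration (fraction) corresponds to a higher partial pressure in
# high-pressure environments.
def partial_pressure(gas_fraction, ambient_pressure_bar):
    return gas_fraction * ambient_pressure_bar

print(partial_pressure(0.21, 1.0))  # 0.21 bar of oxygen at sea level
print(partial_pressure(0.21, 4.0))  # 0.84 bar at 4 bar ambient (roughly 30 m of sea water)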
Acute exposure: a single exposure to a toxic substance, which may result in severe biological harm or death; acute exposures are usually characterized as lasting no longer than a day.
Chronic exposure: continuous exposure to a toxicant over an extended period of time, often measured in months or years; it can cause irreversible side effects.
Alternatives to dose-response framework
Considering the limitations of the dose-response concept, a novel Drug Toxicity Index (DTI) has recently been proposed. The DTI redefines drug toxicity, identifies hepatotoxic drugs, gives mechanistic insights, predicts clinical outcomes, and has potential as a screening tool.
See also
Agency for Toxic Substances and Disease Registry (ATSDR)
Biological activity
Biological warfare
California Proposition 65 (1986)
Carcinogen
Drunkenness
Indicative limit value
List of highly toxic gases
Material safety data sheet (MSDS)
Mutagen
Hepatotoxicity
Nephrotoxicity
Neurotoxicity
Ototoxicity
Paracelsus
Physiologically-based pharmacokinetic modelling
Poison
Reference dose
Registry of Toxic Effects of Chemical Substances (RTECS) – toxicity database
Soil contamination
Teratogen
Toxic tort
Toxication
Toxicophore
Toxin
Toxica, a disambiguation page
References
External links
Agency for Toxic Substances and Disease Registry
Whole Effluent, Aquatic Toxicity Testing FAQ
TOXMAP Environmental Health e-Maps from the United States National Library of Medicine
Toxseek: meta-search engine in toxicology and environmental health
Pharmacology
Toxicology
Chemical hazards
Doctor's visit
A doctor's visit, also known as a physician office visit, a consultation, or a ward round in an inpatient care context, is a meeting between a patient and a physician to get health advice or a treatment plan for a symptom or condition, most often at a professional health facility such as a doctor's office, clinic or hospital. According to a survey in the United States, a physician typically sees between 50 and 100 patients per week; this varies with medical specialty but differs only a little by community size, such as metropolitan versus rural areas.
Procedure
The four great cornerstones of diagnostic medicine are anatomy (structure: what is there), physiology (how the structure/s work), pathology (what goes wrong with the anatomy and physiology), and psychology (mind and behavior). In addition, the physician should consider the patient in their 'well' context rather than simply as a walking medical condition. This means the socio-political context of the patient (family, work, stress, beliefs) should be assessed as it often offers vital clues to the patient's condition and further management.
A patient typically presents a set of complaints (the symptoms) to the physician, who then performs a diagnostic procedure, which generally includes obtaining further information about the patient's symptoms, previous state of health, living conditions, and so forth. The physician then makes a review of systems (ROS), or systems inquiry: a set of ordered questions about each major body system, covering general features (such as weight loss), the endocrine system, the cardio-respiratory system, and so on. Next comes the actual physical examination and any other medical tests; the findings are recorded, leading to a list of possible diagnoses, which are investigated in order of probability. The next task is to enlist the patient's agreement to a management plan, which will include treatment as well as plans for follow-up. Importantly, during this process the healthcare provider educates the patient about the causes, progression, outcomes, and possible treatments of their ailments, as well as often providing advice for maintaining health.
The physician's expertise comes from knowledge of what is healthy and normal, contrasted with knowledge and experience of other people who have had similar symptoms (unhealthy and abnormal), and from the proven ability to relieve those symptoms with medicines (pharmacology) or other therapies about which the patient may initially have little knowledge.
Duration
A survey in the United States found that, overall, a physician sees each patient for 13 to 16 minutes. Anesthesiologists, neurologists, and radiologists spend the most time with each patient, at 25 minutes or more. Primary care physicians spend a median of 13 to 16 minutes per patient, whereas dermatologists and ophthalmologists spend the least time, with a median of 9 to 12 minutes per patient. Overall, female physicians spend more time with each patient than male physicians do.
For the patient, the time spent at the hospital can be substantially longer due to various waiting times, administrative steps or additional care from other health personnel. Regarding wait time, patients that are well informed of the necessary procedures in a clinical encounter, and the time it is expected to take, are generally more satisfied even if there is a longer waiting time.
Web-based health care
With increasing access to computers and published online medical articles, the internet has increased the ability to perform self-diagnosis instead of going to a professional health care provider. Doctors may be fearful of misleading information and being inundated by emails from patients which take time to read and respond to (time for which they are not paid). About three-quarters of the U.S. population reports having a primary care physician, but the Primary Care Assessment Survey found "a significant erosion" in the quality of primary care from 1996 to 2000, most notably in the interpersonal treatment and thoroughness of physical examinations.
Research and development
Analysis
A study systematically assessed weight-loss advice given by general practitioners to obese patients, typically in the form of verbal-only consultations. It found that the advice rarely included effective methods, was mostly generic, and was rarely tailored to patients' existing knowledge and behaviours.
The National Institute on Aging has produced a list of "Tips for Talking With Your Doctor" that includes asking "if your doctor has any brochures, fact sheets, DVDs, CDs, cassettes, or videotapes about your health conditions or treatments" – for example if a patient's blood pressure was found to be high, the patient could get "brochures explaining what causes high blood pressure and what [the person] can do about it".
Virtual doctor's visit
Software and health records
See also
House call
Doctor-patient relationship
General medical examination
References
External links
Practice of medicine
General practitioners
Human activities
Micronutrient
Micronutrients are essential dietary elements required by organisms in varying quantities to regulate physiological functions of cells and organs. Micronutrients support the health of organisms throughout life.
In varying amounts supplied through the diet, micronutrients include such compounds as vitamins and dietary minerals. For human nutrition, micronutrient requirements are in amounts generally less than 100 milligrams per day, whereas macronutrients are required in gram quantities daily. A multiple micronutrient powder of at least iron, zinc, and vitamin A was added to the World Health Organization's List of Essential Medicines in 2019. Deficiencies in micronutrient intake commonly result in malnutrition.
Inadequate micronutrient intake
Inadequate intake of essential nutrients predisposes humans to various chronic diseases; some 50% of American adults have one or more preventable chronic diseases. In the United States, foods poor in micronutrient content and high in food energy make up some 27% of daily calorie intake. One US national survey (the National Health and Nutrition Examination Survey 2003–2006) found that people with high sugar intake consumed fewer micronutrients, especially vitamins A, C, and E, and magnesium.
A 1994 report by the World Bank estimated that micronutrient malnutrition costs developing economies at least 5 percent of gross domestic product. The Asian Development Bank has summarized the benefits of eliminating micronutrient deficiencies as follows:
Along with a growing understanding of the extent and impact of micronutrient malnutrition, several interventions have demonstrated the feasibility and benefits of correction and prevention. Distributing inexpensive capsules, diversifying to include more micronutrient-rich foods, or fortifying commonly consumed foods can make an enormous difference. Correcting iodine, vitamin A, and iron deficiencies can improve the population-wide intelligence quotient by 10–15 points, reduce maternal deaths by one-fourth, decrease infant and child mortality by 40 percent, and increase people's work capacity by almost half. The elimination of these deficiencies will reduce health care and education costs, improve work capacity and productivity, and accelerate equitable economic growth and national development. Improved nutrition is essential to sustain economic growth. Micronutrient deficiency elimination is as cost-effective as the best public health interventions and fortification is the most cost-effective strategy.
Salt iodization
Salt iodization is a major strategy for addressing iodine deficiency, which is a major cause of mental health problems. In 1990, less than 20 percent of households in developing countries were consuming iodized salt. By 1994, international partnerships had formed in a global campaign for Universal Salt Iodization. By 2008, it was estimated that 72 percent of households in developing countries were consuming iodized salt, and the number of countries in which iodine deficiency disorders were a public health concern reduced by more than half from 110 to 47 countries.
Vitamin A supplementation
Vitamin A deficiency is a major factor in causing blindness worldwide, particularly among children. Global vitamin A supplementation efforts have targeted 103 priority countries. In 1999, 16 percent of children in these countries received two annual doses of vitamin A. By 2007, the rate increased to 62 percent.
Fortification of staple foods with vitamin A has uncertain benefits on reducing the risk of subclinical vitamin A deficiency.
Zinc
Fortification of staple foods may improve serum zinc levels in the population. Whether it also improves zinc deficiency, children's growth, cognition, adults' work capacity, or blood indicators is unknown. Experiments show that soil and foliar application of zinc fertilizer can effectively reduce the phytate-to-zinc ratio in grain. People who eat bread prepared from zinc-enriched wheat show a significant increase in serum zinc, suggesting that the zinc fertilizer strategy is a promising approach to addressing zinc deficiency in humans.
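The phytate-to-zinc ratio mentioned above is conventionally expressed as a molar ratio, with lower values indicating better zinc bioavailability. The sketch below computes it from milligram amounts, assuming approximate molar masses of 660 g/mol for phytic acid and 65.4 g/mol for zinc; the example quantities are invented for illustration.

# Phytate-to-zinc molar ratio, a common indicator of zinc bioavailability.
# Molar masses are approximate assumptions: phytic acid ~660 g/mol, zinc ~65.4 g/mol.
def phytate_zinc_molar_ratio(phytate_mg, zinc_mg):
    phytate_mmol = phytate_mg / 660.0
    zinc_mmol = zinc_mg / 65.4
    return phytate_mmol / zinc_mmol

# Example: 800 mg phytate and 3 mg zinc per 100 g of grain
print(round(phytate_zinc_molar_ratio(800, 3), 1))  # about 26.4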
Plants
Plants generally do not use vitamins, although minerals are required.
Some seven trace elements are essential to plant growth, though only in minute quantities.
Boron is believed to be involved in carbohydrate transport in plants; it also assists in metabolic regulation. Boron deficiency will often result in bud dieback.
Chloride is necessary for osmosis and ionic balance; it also plays a role in photosynthesis.
Copper, iron, manganese, molybdenum, and zinc are cofactors essential for the functioning of many enzymes. For plants, deficiency in these elements often results in inefficient production of chlorophyll, manifested in chlorosis.
See also
List of micronutrients
Human nutrition
Macronutrient (ecology)
Dietary mineral (redirects to Mineral (nutrient))
Silicon § Human nutrition
Manganese deficiency (medicine)
References
External links
Micronutrient Information Center, Oregon State University
Nutrition
Deconditioning
Deconditioning is the adaptation of an organism to a less demanding environment or, alternatively, the decrease of physiological adaptation to normal conditions. Deconditioning can result from decreased physical activity, prescribed bed rest, orthopedic casting, paralysis, or aging. It is of particular interest in aerospace medicine, which seeks to diagnose, counteract, and prevent the adverse effects of the conditions of space flight.
Deconditioning due to decreased physical effort results in muscle loss, including loss of heart muscle.
Deconditioning due to lack of gravity or non-standard gravity action (e.g., during bed rest) results in abnormal distribution of body fluids.
See also
Atrophy
Effect of spaceflight on the human body
Long COVID
References
Physiology
Injury in humans
An injury is any physiological damage to living tissue caused by immediate physical stress. Injuries to humans can occur intentionally or unintentionally and may be caused by blunt trauma, penetrating trauma, burning, toxic exposure, asphyxiation, or overexertion. Injuries can occur in any part of the body, and different symptoms are associated with different injuries.
Treatment of a major injury is typically carried out by a health professional and varies greatly depending on the nature of the injury. Traffic collisions are the most common cause of accidental injury and injury-related death among humans. Injuries are distinct from chronic conditions, psychological trauma, infections, or medical procedures, though injury can be a contributing factor to any of these.
Several major health organizations have established systems for the classification and description of human injuries.
Occurrence
Injuries may be intentional or unintentional. Intentional injuries may be acts of violence against others or self-inflicted against one's own person. Accidental injuries may be unforeseeable, or they may be caused by negligence. In order, the most common types of unintentional injuries are traffic accidents, falls, drowning, burns, and accidental poisoning. Certain types of injuries are more common in developed countries or developing countries. Traffic injuries are more likely to kill pedestrians than drivers in developing countries. Scalding burns are more common in developed countries, while open-flame injuries are more common in developing countries.
As of 2021, approximately 4.4 million people are killed due to injuries each year worldwide, constituting nearly 8% of all deaths. 3.16 million of these injuries are unintentional, and 1.25 million are intentional. Traffic accidents are the most common form of deadly injury, causing about one-third of injury-related deaths. One-sixth are caused by suicide, and one-tenth are caused by homicide. Tens of millions of individuals require medical treatment for nonfatal injuries each year, and injuries are responsible for about 10% of all years lived with disability. Men are twice as likely to be killed through injury than women. In 2013, 367,000 children under the age of five died from injuries, down from 766,000 in 1990.
Classification systems
The World Health Organization (WHO) developed the International Classification of External Causes of Injury (ICECI). Under this system, injuries are classified by mechanism of injury, objects/substances producing injury, place of occurrence, activity when injured, the role of human intent, and additional modules. These codes allow the identification of distributions of injuries in specific populations and case identification for more detailed research on causes and preventive efforts.
The United States Bureau of Labor Statistics developed the Occupational Injury and Illness Classification System (OIICS). Under this system injuries are classified by nature, part of body affected, source and secondary source, and event or exposure. The OIICS was first published in 1992 and has been updated several times since. The Orchard Sports Injury and Illness Classification System (OSIICS), previously OSICS, is used to classify injuries to enable research into specific sports injuries.
The injury severity score (ISS) is a medical score to assess trauma severity. It correlates with mortality, morbidity, and hospitalization time after trauma. It is used to define the term major trauma (polytrauma), recognized when the ISS is greater than 15. The AIS Committee of the Association for the Advancement of Automotive Medicine designed and updates the scale.
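The text above does not spell out how the ISS is computed, so the sketch below uses the conventional formulation as an assumption: each of six body regions receives an Abbreviated Injury Scale (AIS) score, and the ISS is the sum of the squares of the three highest regional scores, capped at 75 when any region scores 6.

# Illustrative ISS calculation from per-region AIS scores (assumed convention:
# six body regions, AIS 0-6, ISS = sum of squares of the three worst regions,
# capped at 75 when any region scores 6).
def injury_severity_score(ais_by_region):
    if any(score >= 6 for score in ais_by_region.values()):
        return 75
    worst_three = sorted(ais_by_region.values(), reverse=True)[:3]
    return sum(score ** 2 for score in worst_three)

scores = {"head": 3, "face": 1, "chest": 4, "abdomen": 2, "extremities": 2, "external": 1}
iss = injury_severity_score(scores)
print(iss, "-> major trauma" if iss > 15 else "-> not major trauma")  # 29 -> major trauma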
Mechanisms
Trauma
Traumatic injury is caused by an external object making forceful contact with the body, resulting in a wound. Major trauma is a severe traumatic injury that has the potential to cause disability or death. Serious traumatic injury most often occurs as a result of traffic collisions. Traumatic injury is the leading cause of death in people under the age of 45.
Blunt trauma injuries are caused by the forceful impact of an external object. Injuries from blunt trauma may cause internal bleeding and bruising from ruptured capillaries beneath the skin, abrasion from scraping against the superficial epidermis, lacerated tears on the skin or internal organs, or bone fractures. Crush injuries are a severe form of blunt trauma damage that apply large force to a large area over a longer period of time. Penetrating trauma injuries are caused by external objects entering the tissue of the body through the skin. Low-velocity penetration injuries are caused by sharp objects, such as stab wounds, while high-velocity penetration injuries are caused by ballistic projectiles, such as gunshot wounds or injuries caused by shell fragments. Perforated injuries result in an entry wound and an exit wound, while puncture wounds result only in an entry wound. Puncture injuries result in a cavity in the tissue.
Burns
Burn injury is caused by contact with extreme temperature, chemicals, or radiation. The effects of burns vary depending on the depth and size. Superficial or first-degree burns only affect the epidermis, causing pain for a short period of time. Superficial partial-thickness burns cause weeping blisters and require dressing. Deep partial-thickness burns are dry and less painful due to the burning away of the skin and require surgery. Full-thickness or third-degree burns affect the entire dermis and is susceptible to infection. Fourth-degree burns reach deep tissues such as muscles and bones, causing loss of the affected area.
Thermal burns are the most common type of burn, caused by contact with excessive heat, including contact with flame, contact with hot surfaces, or scalding burns caused by contact with hot water or steam. Frostbite is a type of burn caused by contact with excessive cold, causing cellular injury and deep tissue damage through the crystallization of water in the tissue. Friction burns are caused by friction with external objects, resulting in a burn and abrasion. Radiation burns are caused by exposure to ionizing radiation. Most radiation burns are sunburns caused by ultraviolet radiation or high exposure to radiation through medical treatments such as repeated radiography or radiation therapy.
Electrical burns are caused by contact with electricity as it enters and passes through the body. They are often deeper than other burns, affecting lower tissues as electricity penetrates the skin, and the full extent of electrical burns are often obscured. They will also cause extensive destruction of tissue at the entry and exit points. Electrical injuries in the home are often minor, while high tension power cables cause serious electrical injuries in the workplace. Lightning strikes can also cause severe electrical injuries. Fatal electrical injuries are often caused by tetanic spasm inducing respiratory arrest or interference with the heart causing cardiac arrest.
Chemical burns are caused by contact with corrosive substances such as acid or alkali. Chemical burns are rarer than most other burns, though there are many chemicals that can damage tissue. The most common chemical-related injuries are those caused by carbon monoxide, ammonia, chlorine, hydrochloric acid, and sulfuric acid. Some chemical weapons induce chemical burns, such as white phosphorus. Most chemical burns are treated with extensive application of water to remove the chemical contaminant, though some burn-inducing chemicals react with water to create more severe injuries. The ingestion of corrosive substances can cause chemical burns to the larynx and stomach.
Other mechanisms
Toxic injury is caused by the ingestion, inhalation, injection, or absorption of a toxin. This may occur through an interaction caused by a drug or the ingestion of a poison. Different toxins may cause different types of injuries, and many will cause injury to specific organs. Toxins in gases, dusts, aerosols, and smoke can be inhaled, potentially causing respiratory failure. Respiratory toxins can be released by structural fires, industrial accidents, domestic mishaps, or through chemical weapons. Some toxicants may affect other parts of the body after inhalation, such as carbon monoxide.
Asphyxia causes injury to the body from a lack of oxygen. It can be caused by drowning, inhalation of certain substances, strangulation, blockage of the airway, traumatic injury to the airway, apnea, and other means. The most immediate injury caused by asphyxia is hypoxia, which can in turn cause acute lung injury or acute respiratory distress syndrome as well as damage to the circulatory system. The most severe injury associated with asphyxiation is cerebral hypoxia and ischemia, in which the brain receives insufficient oxygen or blood, resulting in neurological damage or death. Specific injuries are associated with water inhalation, including alveolar collapse, atelectasis, intrapulmonary shunting, and ventilation perfusion mismatch. Simple asphyxia is caused by a lack of external oxygen supply. Systemic asphyxia is caused by exposure to a compound that prevents oxygen from being transported or used by the body. This can be caused by azides, carbon monoxide, cyanide, smoke inhalation, hydrogen sulfide, methemoglobinemia-inducing substances, opioids, or other systemic asphyxiants. Ventilation and oxygenation are necessary for treatment of asphyxiation, and some asphyxiants can be treated with antidotes.
Injuries of overuse or overexertion can occur when the body is strained through use, affecting the bones, muscles, ligaments, or tendons. Sports injuries are often overuse injuries such as tendinopathy. Over-extension of the ligaments and tendons can result in sprains and strains, respectively. Repetitive sedentary behaviors such as extended use of a computer or a physically repetitive occupation may cause a repetitive strain injury. Extended use of brightly lit screens may also cause eye strain.
Locations
Abdomen
Abdominal trauma includes injuries to the stomach, intestines, liver, pancreas, kidneys, gallbladder, and spleen. Abdominal injuries are typically caused by traffic accidents, assaults, falls, and work-related injuries, and physical examination is often unreliable in diagnosing blunt abdominal trauma. Splenic injury can cause low blood volume or blood in the peritoneal cavity. The treatment and prognosis of splenic injuries are dependent on cardiovascular stability. The gallbladder is rarely injured in blunt trauma, occurring in about 2% of blunt abdominal trauma cases. Injuries to the gallbladder are typically associated with injuries to other abdominal organs. The intestines are susceptible to injury following blunt abdominal trauma. The kidneys are protected by other structures in the abdomen, and most injuries to the kidney are a result of blunt trauma. Kidney injuries typically cause blood in the urine.
Due to its location in the body, pancreatic injury is relatively uncommon but more difficult to diagnose. Most injuries to the pancreas are caused by penetrative trauma, such as gunshot wounds and stab wounds. Pancreatic injuries occur in under 5% of blunt abdominal trauma cases. The severity of pancreatic injury depends primarily on the amount of harm caused to the pancreatic duct. The stomach is also well protected from injury due to its heavy layering, its extensive blood supply, and its position relative to the rib cage. As with pancreatic injuries, most traumatic stomach injuries are caused by penetrative trauma, and most civilian weapons do not cause long-term tissue damage to the stomach. Blunt trauma injuries to the stomach are typically caused by traffic accidents. Ingestion of corrosive substances can cause chemical burns to the stomach. Liver injury is the most common type of organ damage in cases of abdominal trauma. The liver's size and location in the body makes injury relatively common compared to other abdominal organs, and blunt trauma injury to the liver is typically treated with nonoperative management. Liver injuries are rarely serious, though most injuries to the liver are concomitant with other injuries, particularly to the spleen, ribs, pelvis, or spinal cord. The liver is also susceptible to toxic injury, with overdose of paracetamol being a common cause of liver failure.
Face
Facial trauma may affect the eyes, nose, ears, or mouth. Nasal trauma is a common injury and the most common type of facial injury. Oral injuries are typically caused by traffic accidents or alcohol-related violence, though falls are a more common cause in young children. The primary concerns regarding oral injuries are that the airway is clear and that there are no concurrent injuries to other parts of the head or neck. Oral injuries may occur in the soft tissue of the face, the hard tissue of the mandible, or as dental trauma.
The ear is susceptible to trauma in head injuries due to its prominent location and exposed structure. Ear injuries may be internal or external. Injuries of the external ear are typically lacerations of the cartilage or the formation of a hematoma. Injuries of the middle and internal ear may include a perforated eardrum or trauma caused by extreme pressure changes. The ear is also highly sensitive to blast injury. The bones of the ear are connected to facial nerves, and ear injuries can cause paralysis of the face. Trauma to the ear can cause hearing loss.
Eye injuries often take place in the cornea, and they have the potential to permanently damage vision. Corneal abrasions are a common injury caused by contact with foreign objects. The eye can also be injured by a foreign object remaining in the cornea. Radiation damage can be caused by exposure to excessive light, often caused by welding without eye protection or being exposed to excessive ultraviolet radiation, such as sunlight. Exposure to corrosive chemicals can permanently damage the eyes, causing blindness if not sufficiently irrigated. The eye is protected from most blunt injuries by the infraorbital margin, but in some cases blunt force may cause an eye to hemorrhage or tear. Overuse of the eyes can cause eye strain, particularly when looking at brightly lit screens for an extended period.
Heart
Cardiac injuries affect the heart and blood vessels. Blunt cardiac injury is a common injury caused by blunt trauma to the heart. It can be difficult to diagnose, and it can have many effects on the heart, including contusions, ruptures, acute valvular disorders, arrhythmia, or heart failure. Penetrative trauma to the heart is typically caused by stab wounds or gunshot wounds. Accidental cardiac penetration can also occur in rare cases from a fractured sternum or rib. Stab wounds to the heart are typically survivable with medical attention, though gunshot wounds to the heart are not. The right ventricle is most susceptible to injury due to its prominent location. The two primary consequences of traumatic injury to the heart are severe hemorrhaging and fluid buildup around the heart.
Musculoskeletal
Musculoskeletal injuries affect the skeleton and the muscular system. Soft tissue injuries affect the skeletal muscles, ligaments, and tendons. Ligament and tendon injuries account for half of all musculoskeletal injuries. Ligament sprains and tendon strains are common injuries that do not require intervention, but the healing process is slow. Physical therapy can be used to assist reconstruction and use of injured ligaments and tendons. Torn ligaments or tendons typically require surgery. Skeletal muscles are abundant in the body and commonly injured when engaging in athletic activity. Muscle injuries trigger an inflammatory response to facilitate healing. Blunt trauma to the muscles can cause contusions and hematomas. Excessive tensile strength can overstretch a muscle, causing a strain. Strains may present with torn muscle fibers, hemorrhaging, or fluid in the muscles. Severe muscle injuries in which a tear extends across the muscle can cause total loss of function. Penetrative trauma can cause laceration to muscles, which may take an extended time to heal. Unlike contusions and strains, lacerations are uncommon in sports injuries.
Traumatic injury may cause various bone fractures depending on the amount of force, direction of the force, and width of the area affected. Pathologic fractures occur when a previous condition weakens the bone until it can be easily fractured. Stress fractures occur when the bone is overused or suffers under excessive or traumatic pressure, often during athletic activity. Hematomas occur immediately following a bone fracture, and the healing process often takes from six weeks to three months to complete, though continued use of the fractured bone will prevent healing. Articular cartilage damage may also affect function of the skeletal system, and it can cause posttraumatic osteoarthritis. Unlike most bodily structures, cartilage cannot be healed once it is damaged.
Nervous system
Injuries to the nervous system include brain injury, spinal cord injury, and nerve injury. Trauma to the brain causes traumatic brain injury (TBI), causing "long-term physical, emotional, behavioral, and cognitive consequences". Mild TBI, including concussion, often occurs during athletic activity, military service, or as a result of untreated epilepsy, and its effects are typically short-term. More severe injuries to the brain cause moderate TBI, which may cause confusion or lethargy, or severe TBI, which may result in a coma or a secondary brain injury. TBI is a leading cause of mortality. Approximately half of all trauma-related deaths involve TBI. Non-traumatic injuries to the brain cause acquired brain injury (ABI). This can be caused by stroke, a brain tumor, poison, infection, cerebral hypoxia, drug use, or the secondary effect of a TBI.
Injury to the spinal cord is not immediately terminal, but it is associated with concomitant injuries, lifelong medical complications, and reduction in life expectancy. It may result in complications in several major organ systems and a significant reduction in mobility or paralysis. Spinal shock causes temporary paralysis and loss of reflexes. Unlike most other injuries, damage to the peripheral nerves is not healed through cellular proliferation. Following nerve injury, the nerves undergo degeneration before regenerating, and other pathways can be strengthened or reprogrammed to make up for lost function. The most common form of peripheral nerve injury is stretching, due to their inherent elasticity. Nerve injuries may also be caused by laceration or compression.
Pelvis
Injuries to the pelvic area include injuries to the bladder, rectum, colon, and reproductive organs. Traumatic injury to the bladder is rare and often occurs with other injuries to the abdomen and pelvis. The bladder is protected by the peritoneum, and most cases of bladder injury are concurrent with a fracture of the pelvis. Bladder trauma typically causes hematuria, or blood in the urine. Ingestion of alcohol may cause distension of the bladder, increasing the risk of injury. A catheter may be used to extract blood from the bladder in the case of hemorrhaging, though injuries that break the peritoneum typically require surgery. The colon is rarely injured by blunt trauma, with most cases occurring from penetrative trauma through the abdomen. Rectal injury is less common than injury to the colon, though the rectum is more susceptible to injury following blunt force trauma to the pelvis.
Injuries to the male reproductive system are rarely fatal and typically treatable through grafts and reconstruction. The elastic nature of the scrotum makes it resistant to injury, accounting for 1% of traumatic injuries. Trauma to the scrotum may cause damage to the testis or the spermatic cord. Trauma to the penis can cause penile fracture, typically as a result of vigorous intercourse. Injuries to the female reproductive system are often a result of pregnancy and childbirth or sexual activity. They are rarely fatal, but they can produce a variety of complications, such as chronic discomfort, dyspareunia, infertility, or the formation of fistulas. Age can greatly affect the nature of genital injuries in women due to changes in hormone composition. Childbirth is the most common cause of genital injury to women of reproductive age. Many cultures practice female genital mutilation, which is estimated to affect over 125 million women and girls worldwide as of 2018. Tears and abrasions to the vagina are common during sexual intercourse, and these may be exacerbated in instances of non-consensual sexual activity.
Respiratory tract
Injuries to the respiratory tract affect the lungs, diaphragm, trachea, bronchus, pharynx, or larynx. Tracheobronchial injuries are rare and often associated with other injuries. Bronchoscopy is necessary for an accurate diagnosis of tracheobronchial injury. The neck, including the pharynx and larynx, is highly vulnerable to injury due to its complex, compacted anatomy. Injuries to this area can cause airway obstruction. Ingestion of corrosive chemicals can cause chemical burns to the larynx. Inhalation of toxic materials can also cause serious injury to the respiratory tract.
Severe trauma to the chest can cause damage to the lungs, including pulmonary contusions, accumulation of blood, or a collapsed lung. The inflammatory response to a lung injury can cause acute respiratory distress syndrome. Injuries to the lungs may cause symptoms ranging from shortness of breath to terminal respiratory failure. Injuries to the lungs are often fatal, and survivors often have a reduced quality of life. Injuries to the diaphragm are uncommon and rarely serious, but blunt trauma to the diaphragm can result in the formation of a hernia over time. Injuries to the diaphragm may present in many ways, including abnormal blood pressure, cardiac arrest, gastrointestinal obstruction, and respiratory insufficiency. Injuries to the diaphragm are often associated with other injuries in the chest or abdomen, and its position between two major cavities of the human body may complicate diagnosis.
Skin
Most injuries to the skin are minor and do not require specialist treatment. Lacerations of the skin are typically repaired with sutures, staples, or adhesives. The skin is susceptible to burns, and burns to the skin often cause blistering. Abrasive trauma scrapes or rubs off the skin, and severe abrasions require skin grafting to repair. Skin tears involve the removal of the epidermis or dermis through friction or shearing forces, often in vulnerable populations such as the elderly. Skin injuries are potentially complicated by foreign bodies such as glass, metal, or dirt that entered the wound, and skin wounds often require cleaning.
Treatment
Much of medical practice is dedicated to the treatment of injuries. Traumatology is the study of traumatic injuries and injury repair. Certain injuries may be treated by specialists. Serious injuries sometimes require trauma surgery. Following serious injuries, physical therapy and occupational therapy are sometimes used for rehabilitation. Medication is commonly used to treat injuries.
Emergency medicine during major trauma prioritizes the immediate consideration of life-threatening injuries that can be quickly addressed. The airway is evaluated, clearing bodily fluids with suctioning or creating an artificial airway if necessary. Breathing is evaluated by evaluating motion of the chest wall and checking for blood or air in the pleural cavity. Circulation is evaluated to resuscitate the patient, including the application of intravenous therapy. Disability is evaluated by checking for responsiveness and reflexes. Exposure is then used to examine the patient for external injury. Following immediate life-saving procedures, a CT scan is used for a more thorough diagnosis. Further resuscitation may be required, including ongoing blood transfusion, mechanical ventilation and nutritional support.
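The fixed ordering described above, commonly known as the ABCDE approach (a label the text itself does not use), can be restated as a simple sequence; the sketch below only mirrors that ordering and the checks listed in the paragraph, and is in no way a clinical tool.

# Schematic of the primary-survey order described above.
PRIMARY_SURVEY = [
    ("Airway", "clear fluids by suction or create an artificial airway if needed"),
    ("Breathing", "assess chest-wall motion; check for blood or air in the pleural cavity"),
    ("Circulation", "resuscitate, including intravenous therapy"),
    ("Disability", "check responsiveness and reflexes"),
    ("Exposure", "examine the patient for external injury"),
]

for step, action in PRIMARY_SURVEY:
    print(f"{step}: {action}")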
Pain management is another aspect of injury treatment. Pain serves as an indicator to determine the nature and severity of an injury, but it can also worsen an injury, reduce mobility, and affect quality of life. Analgesic drugs are used to reduce the pain associated with injuries, depending on the person's age, the severity of the injury, and previous medical conditions that may affect pain relief. NSAIDs such as aspirin and ibuprofen are commonly used for acute pain. Opioid medications such as fentanyl, methadone, and morphine are used to treat severe pain in major trauma, but their use is limited due to associated long-term risks such as addiction.
Complications
Complications may arise as a result of certain injuries, increasing the recovery time, further exacerbating the symptoms, or potentially causing death. The extent of the injury and the age of the injured person may contribute to the likelihood of complications. Infection of wounds is a common complication in traumatic injury, resulting in diagnoses such as pneumonia or sepsis. Wound infection prevents the healing process from taking place and can cause further damage to the body. A majority of wounds are contaminated with microbes from other parts of the body, and infection takes place when the immune system is unable to address this contamination. The surgical removal of devitalized tissue and the use of topical antimicrobial agents can prevent infection.
Hemorrhaging of blood is a common result of injuries, and it can cause several complications. Pooling of blood under the skin can cause a hematoma, particularly after blunt trauma or the suture of a laceration. Hematomas are susceptible to infection and are typically treated with compression, though surgery is necessary in severe cases. Excessive blood loss can cause hypovolemic shock, in which cellular oxygenation can no longer take place. This can cause tachycardia, hypotension, coma, or organ failure. Fluid replacement is often necessary to treat blood loss. Other complications of injuries include cavitation, development of fistulas, and organ failure.
Social and psychological aspects
Injuries often cause psychological harm in addition to physical harm. Traumatic injuries are associated with psychological trauma and distress, and some victims of traumatic injuries will display symptoms of post-traumatic stress disorder during and after the recovery of the injury. The specific symptoms and their triggers vary depending on the nature of the injury. Body image and self-esteem can also be affected by injury. Injuries that cause permanent disabilities, such as spinal cord injuries, can have severe effects on self-esteem. Disfiguring injuries can negatively affect body image, leading to a lower quality of life. Burn injuries in particular can cause dramatic changes in a person's appearance that may negatively affect body image.
Severe injury can also cause social harm. Disfiguring injuries may also result in stigma due to scarring or other changes in appearance. Certain injuries may necessitate a change in occupation or prevent employment entirely. Leisure activities are similarly limited, and athletic activities in particular may be impossible following severe injury. In some cases, the effects of injury may strain personal relationships, such as marriages. Psychological and social variables have been found to affect the likelihood of injuries among athletes. Increased life stress can cause an increase in the likelihood of athletic injury, while social support can decrease the likelihood of injury. Social support also assists in the recovery process after athletic injuries occur.
See also
Injury prevention
List of causes of death by rate
First aid
Medical emergency
Traumatology
References
External links
International Trauma Conferences (registered trauma charity providing trauma education for medical professionals worldwide)
Trauma.org (trauma resources for medical professionals)
Emergency Medicine Research and Perspectives (emergency medicine procedure videos)
American Trauma Society
Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine
Medical emergencies
Traumatology
Causes of death
Acute pain
Trauma types
Adverse childhood experiences
Influenza
Influenza, commonly known as the flu, is an infectious disease caused by influenza viruses. Symptoms range from mild to severe and often include fever, runny nose, sore throat, muscle pain, headache, coughing, and fatigue. These symptoms begin one to four (typically two) days after exposure to the virus and last for about two to eight days. Diarrhea and vomiting can occur, particularly in children. Influenza may progress to pneumonia from the virus or a subsequent bacterial infection. Other complications include acute respiratory distress syndrome, meningitis, encephalitis, and worsening of pre-existing health problems such as asthma and cardiovascular disease.
There are four types of influenza virus: types A, B, C, and D. Aquatic birds are the primary source of influenza A virus (IAV), which is also widespread in various mammals, including humans and pigs. Influenza B virus (IBV) and influenza C virus (ICV) primarily infect humans, and influenza D virus (IDV) is found in cattle and pigs. Influenza A virus and influenza B virus circulate in humans and cause seasonal epidemics, and influenza C virus causes a mild infection, primarily in children. Influenza D virus can infect humans but is not known to cause illness. In humans, influenza viruses are primarily transmitted through respiratory droplets from coughing and sneezing. Transmission through aerosols and surfaces contaminated by the virus also occur.
Frequent hand washing and covering one's mouth and nose when coughing and sneezing reduce transmission. Annual vaccination can help to provide protection against influenza. Influenza viruses, particularly influenza A virus, evolve quickly, so flu vaccines are updated regularly to match which influenza strains are in circulation. Vaccines provide protection against influenza A virus subtypes H1N1 and H3N2 and one or two influenza B virus subtypes. Influenza infection is diagnosed with laboratory methods such as antibody or antigen tests and a polymerase chain reaction (PCR) to identify viral nucleic acid. The disease can be treated with supportive measures and, in severe cases, with antiviral drugs such as oseltamivir. In healthy individuals, influenza is typically self-limiting and rarely fatal, but it can be deadly in high-risk groups.
In a typical year, five to 15 percent of the population contracts influenza. There are 3 to 5 million severe cases annually, with up to 650,000 respiratory-related deaths globally each year. Deaths most commonly occur in high-risk groups, including young children, the elderly, and people with chronic health conditions. In temperate regions, the number of influenza cases peaks during winter, whereas in the tropics, influenza can occur year-round. Since the late 1800s, pandemic outbreaks of novel influenza strains have occurred every 10 to 50 years. Five flu pandemics have occurred since 1900: the Spanish flu from 1918 to 1920, which was the most severe; the Asian flu in 1957; the Hong Kong flu in 1968; the Russian flu in 1977; and the swine flu pandemic in 2009.
Signs and symptoms
The symptoms of influenza are similar to those of a cold, although usually more severe and less likely to include a runny nose. The time between exposure to the virus and development of symptoms (the incubation period) is one to four days, most commonly one to two days. Many infections are asymptomatic. The onset of symptoms is sudden, and initial symptoms are predominately non-specific, including fever, chills, headaches, muscle pain, malaise, loss of appetite, lack of energy, and confusion. These are usually accompanied by respiratory symptoms such as a dry cough, sore or dry throat, hoarse voice, and a stuffy or runny nose. Coughing is the most common symptom. Gastrointestinal symptoms may also occur, including nausea, vomiting, diarrhea, and gastroenteritis, especially in children. The standard influenza symptoms typically last for two to eight days. Some studies suggest influenza can cause long-lasting symptoms in a similar way to long COVID.
Symptomatic infections are usually mild and limited to the upper respiratory tract, but progression to pneumonia is relatively common. Pneumonia may be caused by the primary viral infection or a secondary bacterial infection. Primary pneumonia is characterized by rapid progression of fever, cough, labored breathing, and low oxygen levels that cause bluish skin. It is especially common among those who have an underlying cardiovascular disease such as rheumatic heart disease. Secondary pneumonia typically has a period of improvement in symptoms for one to three weeks followed by recurrent fever, sputum production, and fluid buildup in the lungs, but can also occur just a few days after influenza symptoms appear. About a third of primary pneumonia cases are followed by secondary pneumonia, which is most frequently caused by the bacteria Streptococcus pneumoniae and Staphylococcus aureus.
Virology
Types of virus
Influenza viruses comprise four species, each the sole member of its own genus. The four influenza genera comprise four of the seven genera in the family Orthomyxoviridae. They are:
Influenza A virus, genus Alphainfluenzavirus
Influenza B virus, genus Betainfluenzavirus
Influenza C virus, genus Gammainfluenzavirus
Influenza D virus, genus Deltainfluenzavirus
Influenza A virus is responsible for most cases of severe illness as well as seasonal epidemics and occasional pandemics. It infects people of all ages but tends to disproportionately cause severe illness in the elderly, the very young, and those with chronic health issues. Birds are the primary reservoir of influenza A virus, especially aquatic birds such as ducks, geese, shorebirds, and gulls, but the virus also circulates among mammals, including pigs, horses, and marine mammals.
Subtypes of influenza A are defined by the combination of the antigenic viral proteins hemagglutinin (H) and neuraminidase (N) in the viral envelope; for example, "H1N1" designates an IAV subtype that has a type-1 hemagglutinin (H) protein and a type-1 neuraminidase (N) protein. Almost all possible combinations of H (1 through 16) and N (1 through 11) have been isolated from wild birds. In addition, H17, H18, N10 and N11 have been found in bats. The influenza A virus subtypes in circulation among humans are H1N1 and H3N2.
Influenza B virus mainly infects humans but has been identified in seals, horses, dogs, and pigs. Influenza B virus does not have subtypes like influenza A virus but has two antigenically distinct lineages, termed the B/Victoria/2/1987-like and B/Yamagata/16/1988-like lineages, or simply (B/)Victoria(-like) and (B/)Yamagata(-like). Both lineages are in circulation in humans, disproportionately affecting children. However, the B/Yamagata lineage might have become extinct in 2020/2021 due to COVID-19 pandemic measures. Influenza B viruses contribute to seasonal epidemics alongside influenza A viruses but have never been associated with a pandemic.
Influenza C virus, like influenza B virus, is primarily found in humans, though it has been detected in pigs, feral dogs, dromedary camels, cattle, and dogs. Influenza C virus infection primarily affects children and is usually asymptomatic or has mild cold-like symptoms, though more severe symptoms such as gastroenteritis and pneumonia can occur. Unlike influenza A virus and influenza B virus, influenza C virus has not been a major focus of research pertaining to antiviral drugs, vaccines, and other measures against influenza. Influenza C virus is subclassified into six genetic/antigenic lineages.
Influenza D virus has been isolated from pigs and cattle, the latter being the natural reservoir. Infection has also been observed in humans, horses, dromedary camels, and small ruminants such as goats and sheep. Influenza D virus is distantly related to influenza C virus. While cattle workers have occasionally tested positive to prior influenza D virus infection, it is not known to cause disease in humans. Influenza C virus and influenza D virus experience a slower rate of antigenic evolution than influenza A virus and influenza B virus. Because of this antigenic stability, relatively few novel lineages emerge.
Influenza virus nomenclature
Every year, millions of influenza virus samples are analysed to monitor changes in the virus' antigenic properties, and to inform the development of vaccines.
To unambiguously describe a specific isolate of virus, researchers use the internationally accepted influenza virus nomenclature, which describes, among other things, the species of animal from which the virus was isolated, and the place and year of collection. As an example – A/chicken/Nakorn-Patom/Thailand/CU-K2/04(H5N1):
A stands for the genus of influenza (A, B, C or D).
chicken is the animal species the isolate was found in (human isolates omit this component and are thus identified as human isolates by default)
Nakorn-Patom/Thailand is the place where this specific virus was isolated
CU-K2 is the laboratory reference number that distinguishes it from other influenza viruses isolated at the same place and year
04 represents the year of isolation, 2004
H5 stands for the fifth of several known types of the protein hemagglutinin.
N1 stands for the first of several known types of the protein neuraminidase.
The nomenclature for influenza B, C and D, which are less variable, is simpler. Examples are B/Santiago/29615/2020 and C/Minnesota/10/2015.
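As a rough illustration of how such a name decomposes, the sketch below splits the example isolate name from above into its components. This is an assumption-laden toy parser, not an official tool: it only handles names in the exact form shown (influenza B, C, and D names, and human isolates that omit the host term, would need different handling), and the function name parse_isolate is ours.

```python
import re

# Simplified sketch of splitting an influenza isolate name into its components,
# using the example from the text. Real names vary, so this only covers the
# illustrated A/host/place/reference/year(subtype) form.
def parse_isolate(name):
    subtype = None
    match = re.match(r"^(.*?)\((.+)\)$", name)
    if match:
        name, subtype = match.groups()
    parts = name.split("/")
    return {
        "type": parts[0],                    # influenza genus: A, B, C or D
        "host": parts[1],                    # animal species the isolate came from
        "location": "/".join(parts[2:-2]),   # place of isolation
        "reference": parts[-2],              # laboratory reference number
        "year": parts[-1],                   # year of isolation
        "subtype": subtype,                  # H/N subtype, if given
    }

print(parse_isolate("A/chicken/Nakorn-Patom/Thailand/CU-K2/04(H5N1)"))
# {'type': 'A', 'host': 'chicken', 'location': 'Nakorn-Patom/Thailand',
#  'reference': 'CU-K2', 'year': '04', 'subtype': 'H5N1'}
```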
Genome and structure
Influenza viruses have a negative-sense, single-stranded RNA genome that is segmented. The negative sense of the genome means it can be used as a template to synthesize messenger RNA (mRNA). Influenza A virus and influenza B virus have eight genome segments that encode 10 major proteins. Influenza C virus and influenza D virus have seven genome segments that encode nine major proteins.
Three segments encode three subunits of an RNA-dependent RNA polymerase (RdRp) complex: PB1, a transcriptase; PB2, which recognizes 5' caps; and PA (P3 for influenza C virus and influenza D virus), an endonuclease. The M1 matrix protein and M2 proton channel share a segment, as do the non-structural protein (NS1) and the nuclear export protein (NEP). For influenza A virus and influenza B virus, hemagglutinin (HA) and neuraminidase (NA) are encoded on one segment each, whereas influenza C virus and influenza D virus encode a hemagglutinin-esterase fusion (HEF) protein on one segment that merges the functions of HA and NA. The final genome segment encodes the viral nucleoprotein (NP). Influenza viruses also encode various accessory proteins, such as PB1-F2 and PA-X, that are expressed through alternative open reading frames and which are important in host defense suppression, virulence, and pathogenicity.
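The segment-to-protein assignments described above can be summarized as a small data structure. The sketch below is illustrative only: the segment numbering follows the common ordering for influenza A virus, accessory proteins are omitted, and the dictionary name is ours.

```python
# Sketch of the influenza A/B genome organization described above, as a mapping
# from each of the eight segments to the major proteins it encodes. Accessory
# proteins from alternative reading frames (e.g. PB1-F2, PA-X) are omitted.
IAV_IBV_SEGMENTS = {
    1: ["PB2"],          # polymerase subunit that recognizes 5' caps
    2: ["PB1"],          # transcriptase subunit of the RdRp complex
    3: ["PA"],           # endonuclease subunit (P3 in influenza C and D)
    4: ["HA"],           # hemagglutinin
    5: ["NP"],           # nucleoprotein
    6: ["NA"],           # neuraminidase
    7: ["M1", "M2"],     # matrix protein and proton channel share a segment
    8: ["NS1", "NEP"],   # non-structural protein and nuclear export protein
}

major_proteins = [p for proteins in IAV_IBV_SEGMENTS.values() for p in proteins]
print(len(IAV_IBV_SEGMENTS), "segments encoding", len(major_proteins), "major proteins")
# 8 segments encoding 10 major proteins
```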
The virus particle, called a virion, is pleomorphic and varies between being filamentous, bacilliform, or spherical in shape. Clinical isolates tend to be pleomorphic, whereas strains adapted to laboratory growth typically produce spherical virions. Filamentous virions are about 250 nanometers (nm) by 80 nm, bacilliform 120–250 by 95 nm, and spherical 120 nm in diameter.
The core of the virion comprises one copy of each segment of the genome bound to NP nucleoproteins in separate ribonucleoprotein (RNP) complexes for each segment. There is a copy of the RdRp, all subunits included, bound to each RNP. The genetic material is encapsulated by a layer of M1 matrix protein which provides structural reinforcement to the outer layer, the viral envelope. The envelope comprises a lipid bilayer membrane incorporating HA and NA (or HEF) proteins extending outward from its exterior surface. HA and HEF proteins have a distinct "head" and "stalk" structure. M2 proteins form proton channels through the viral envelope that are required for viral entry and exit. Influenza B viruses contain a surface protein named NB that is anchored in the envelope, but its function is unknown.
Life cycle
The viral life cycle begins by binding to a target cell. Binding is mediated by the viral HA proteins on the surface of the envelope, which bind to cells that contain sialic acid receptors on the surface of the cell membrane. For N1 subtypes with the "G147R" mutation and N2 subtypes, the NA protein can initiate entry. Prior to binding, NA proteins promote access to target cells by degrading mucus, which helps to remove extracellular decoy receptors that would impede access to target cells. After binding, the virus is internalized into the cell by an endosome that contains the virion inside it. The endosome is acidified by cellular vATPase to have lower pH, which triggers a conformational change in HA that allows fusion of the viral envelope with the endosomal membrane. At the same time, hydrogen ions diffuse into the virion through M2 ion channels, disrupting internal protein-protein interactions to release RNPs into the host cell's cytosol. The M1 protein shell surrounding RNPs is degraded, fully uncoating RNPs in the cytosol.
RNPs are then imported into the nucleus with the help of viral localization signals. There, the viral RNA polymerase transcribes mRNA using the genomic negative-sense strand as a template. The polymerase snatches 5' caps for viral mRNA from cellular RNA to prime mRNA synthesis and the 3'-end of mRNA is polyadenylated at the end of transcription. Once viral mRNA is transcribed, it is exported out of the nucleus and translated by host ribosomes in a cap-dependent manner to synthesize viral proteins. RdRp also synthesizes complementary positive-sense strands of the viral genome in a complementary RNP complex which are then used as templates by viral polymerases to synthesize copies of the negative-sense genome. During these processes, RdRps of avian influenza viruses (AIVs) function optimally at a higher temperature than mammalian influenza viruses.
Newly synthesized viral polymerase subunits and NP proteins are imported to the nucleus to further increase the rate of viral replication and form RNPs. HA, NA, and M2 proteins are trafficked with the aid of M1 and NEP proteins to the cell membrane through the Golgi apparatus and inserted into the cell's membrane. Viral non-structural proteins including NS1, PB1-F2, and PA-X regulate host cellular processes to disable antiviral responses. PB1-F2 also interacts with PB1 to keep polymerases in the nucleus longer. M1 and NEP proteins localize to the nucleus during the later stages of infection, bind to viral RNPs and mediate their export to the cytoplasm where they migrate to the cell membrane with the aid of recycled endosomes and are bundled into the segments of the genome.
Progeny viruses leave the cell by budding from the cell membrane, which is initiated by the accumulation of M1 proteins at the cytoplasmic side of the membrane. The viral genome is incorporated inside a viral envelope derived from portions of the cell membrane that have HA, NA, and M2 proteins. At the end of budding, HA proteins remain attached to cellular sialic acid until they are cleaved by the sialidase activity of NA proteins. The virion is then released from the cell. The sialidase activity of NA also cleaves any sialic acid residues from the viral surface, which helps prevent newly assembled viruses from aggregating near the cell surface, thereby improving infectivity. Similar to other aspects of influenza replication, optimal NA activity is temperature- and pH-dependent. Ultimately, the presence of large quantities of viral RNA in the cell triggers apoptosis (programmed cell death), which is initiated by cellular factors to restrict viral replication.
Antigenic drift and shift
Two key processes that influenza viruses evolve through are antigenic drift and antigenic shift. Antigenic drift is when an influenza virus' antigens change due to the gradual accumulation of mutations in the antigen's (HA or NA) gene. This can occur in response to evolutionary pressure exerted by the host immune response. Antigenic drift is especially common for the HA protein, in which just a few amino acid changes in the head region can constitute antigenic drift. The result is the production of novel strains that can evade pre-existing antibody-mediated immunity. Antigenic drift occurs in all influenza species but is slower in B than A and slowest in C and D. Antigenic drift is a major cause of seasonal influenza, and requires that flu vaccines be updated annually. HA is the main component of inactivated vaccines, so surveillance monitors antigenic drift of this antigen among circulating strains. Antigenic evolution of influenza viruses of humans appears to be faster than in swine and equines. In wild birds, within-subtype antigenic variation appears to be limited but has been observed in poultry.
Antigenic shift is a sudden, drastic change in an influenza virus' antigen, usually HA. During antigenic shift, antigenically different strains that infect the same cell can reassort genome segments with each other, producing hybrid progeny. Since all influenza viruses have segmented genomes, all are capable of reassortment. Antigenic shift only occurs among influenza viruses of the same genus and most commonly occurs among influenza A viruses. In particular, reassortment is very common in AIVs, creating a large diversity of influenza viruses in birds, but is uncommon in human, equine, and canine lineages. Pigs, bats, and quails have receptors for both mammalian and avian influenza A viruses, so they are potential "mixing vessels" for reassortment. If an animal strain reassorts with a human strain, then a novel strain can emerge that is capable of human-to-human transmission. This has caused pandemics, but only a limited number, so it is difficult to predict when the next will happen.
The World Health Organization's Global Influenza Surveillance and Response System (GISRS) tests several million specimens annually to monitor the spread and evolution of influenza viruses.
Mechanism
Transmission
People who are infected can transmit influenza viruses through breathing, talking, coughing, and sneezing, which spread respiratory droplets and aerosols that contain virus particles into the air. A person susceptible to infection can contract influenza by coming into contact with these particles. Respiratory droplets are relatively large and travel less than two meters before falling onto nearby surfaces. Aerosols are smaller and remain suspended in the air longer, so they take longer to settle and can travel further. Inhalation of aerosols can lead to infection, but most transmission is in the area about two meters around an infected person via respiratory droplets that come into contact with mucosa of the upper respiratory tract. Transmission through contact with a person, bodily fluids, or intermediate objects (fomites) can also occur, since influenza viruses can survive for hours on non-porous surfaces. If one's hands are contaminated, then touching one's face can cause infection.
Influenza is usually transmissible from one day before the onset of symptoms to 5–7 days after. In healthy adults, the virus is shed for up to 3–5 days. In children and the immunocompromised, the virus may be transmissible for several weeks. Children ages 2–17 are considered to be the primary and most efficient spreaders of influenza. Children who have not had multiple prior exposures to influenza viruses shed the virus at greater quantities and for a longer duration than other children. People at risk of exposure to influenza include health care workers, social care workers, and those who live with or care for people vulnerable to influenza. In long-term care facilities, the flu can spread rapidly. A variety of factors likely encourage influenza transmission, including lower temperature, lower absolute and relative humidity, less ultraviolet radiation from the sun, and crowding. Influenza viruses that infect the upper respiratory tract like H1N1 tend to be more mild but more transmissible, whereas those that infect the lower respiratory tract like H5N1 tend to cause more severe illness but are less contagious.
Pathophysiology
In humans, influenza viruses first cause infection by infecting epithelial cells in the respiratory tract. Illness during infection is primarily the result of lung inflammation and compromise caused by epithelial cell infection and death, combined with inflammation caused by the immune system's response to infection. Non-respiratory organs can become involved, but the mechanisms by which influenza is involved in these cases are unknown. Severe respiratory illness can be caused by multiple, non-exclusive mechanisms, including obstruction of the airways, loss of alveolar structure, loss of lung epithelial integrity due to epithelial cell infection and death, and degradation of the extracellular matrix that maintains lung structure. In particular, alveolar cell infection appears to drive severe symptoms since this results in impaired gas exchange and enables viruses to infect endothelial cells, which produce large quantities of pro-inflammatory cytokines.
Pneumonia caused by influenza viruses is characterized by high levels of viral replication in the lower respiratory tract, accompanied by a strong pro-inflammatory response called a cytokine storm. Infection with H5N1 or H7N9 especially produces high levels of pro-inflammatory cytokines. Early depletion of macrophages during influenza creates a favorable environment in the lungs for bacterial growth, since these white blood cells are important in responding to bacterial infection. Host mechanisms to encourage tissue repair may inadvertently allow bacterial infection. Infection also induces production of systemic glucocorticoids that can reduce inflammation to preserve tissue integrity but allow increased bacterial growth.
The pathophysiology of influenza is significantly influenced by which receptors influenza viruses bind to during entry into cells. Mammalian influenza viruses preferentially bind to sialic acids connected to the rest of the oligosaccharide by an α-2,6 link, most commonly found in various respiratory cells, such as respiratory and retinal epithelial cells. AIVs prefer sialic acids with an α-2,3 linkage, which are most common in birds in gastrointestinal epithelial cells and in humans in the lower respiratory tract. Cleavage of the HA protein into HA1, the binding subunit, and HA2, the fusion subunit, is performed by different proteases, affecting which cells can be infected. For mammalian influenza viruses and low pathogenic AIVs, cleavage is extracellular, which limits infection to cells that have the appropriate proteases, whereas for highly pathogenic AIVs, cleavage is intracellular and performed by ubiquitous proteases, which allows for infection of a greater variety of cells, thereby contributing to more severe disease.
Immunology
Cells possess sensors to detect viral RNA, which can then induce interferon production. Interferons mediate expression of antiviral proteins and proteins that recruit immune cells to the infection site, and they notify nearby uninfected cells of infection. Some infected cells release pro-inflammatory cytokines that recruit immune cells to the site of infection. Immune cells control viral infection by killing infected cells and phagocytizing viral particles and apoptotic cells. An exacerbated immune response can harm the host organism through a cytokine storm. To counter the immune response, influenza viruses encode various non-structural proteins, including NS1, NEP, PB1-F2, and PA-X, that are involved in curtailing the host immune response by suppressing interferon production and host gene expression.
B cells, a type of white blood cell, produce antibodies that bind to influenza antigens HA and NA (or HEF) and other proteins to a lesser degree. Once bound to these proteins, antibodies block virions from binding to cellular receptors, neutralizing the virus. In humans, a sizeable antibody response occurs about one week after viral exposure. This antibody response is typically robust and long-lasting, especially for influenza C virus and influenza D virus. People exposed to a certain strain in childhood still possess antibodies to that strain at a reasonable level later in life, which can provide some protection to related strains. There is, however, an "original antigenic sin", in which the first HA subtype a person is exposed to influences the antibody-based immune response to future infections and vaccines.
Prevention
Vaccination
Annual vaccination is the primary and most effective way to prevent influenza and influenza-associated complications, especially for high-risk groups. Vaccines against the flu are trivalent or quadrivalent, providing protection against an H1N1 strain, an H3N2 strain, and one or two influenza B virus strains corresponding to the two influenza B virus lineages. Two types of vaccines are in use: inactivated vaccines that contain "killed" (i.e. inactivated) viruses and live attenuated influenza vaccines (LAIVs) that contain weakened viruses. There are three types of inactivated vaccines: whole virus, split virus, in which the virus is disrupted by a detergent, and subunit, which only contains the viral antigens HA and NA. Most flu vaccines are inactivated and administered via intramuscular injection. LAIVs are sprayed into the nasal cavity.
Vaccination recommendations vary by country. Some recommend vaccination for all people above a certain age, such as 6 months, whereas other countries limit recommendations to high-risk groups. Young infants cannot receive flu vaccines for safety reasons, but they can inherit passive immunity from their mother if vaccinated during pregnancy. Influenza vaccination helps to reduce the probability of reassortment.
In general, influenza vaccines are only effective if there is an antigenic match between vaccine strains and circulating strains. Most commercially available flu vaccines are manufactured by propagation of influenza viruses in embryonated chicken eggs, taking 6–8 months. Flu seasons are different in the northern and southern hemisphere, so the WHO meets twice a year, once for each hemisphere, to discuss which strains should be included based on observation from HA inhibition assays. Other manufacturing methods include an MDCK cell culture-based inactivated vaccine and a recombinant subunit vaccine manufactured from baculovirus overexpression in insect cells.
Antiviral chemoprophylaxis
Influenza can be prevented or reduced in severity by post-exposure prophylaxis with the antiviral drugs oseltamivir, which can be taken orally by those at least three months old, and zanamivir, which can be inhaled by those above seven years. Chemoprophylaxis is most useful for individuals at high risk for complications and those who cannot receive the flu vaccine. Post-exposure chemoprophylaxis is only recommended if oseltamivir is taken within 48 hours of contact with a confirmed or suspected case and zanamivir within 36 hours. It is recommended for people who have yet to receive a vaccine for the current flu season, who were vaccinated less than two weeks before contact, if there is a significant mismatch between vaccine and circulating strains, or during an outbreak in a closed setting regardless of vaccination history.
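The timing windows and age limits above can be read as a simple eligibility rule. The sketch below encodes them literally as a toy function; it is not clinical guidance, it ignores the other criteria in this section (vaccination status, strain mismatch, outbreak setting), and the function name and numeric simplifications are ours.

```python
# Toy sketch of the post-exposure windows and age limits mentioned above
# (oseltamivir: oral, at least 3 months old, within 48 hours of contact;
# zanamivir: inhaled, older than 7 years, within 36 hours). Illustrative only,
# not clinical guidance.
def prophylaxis_options(age_years, hours_since_contact):
    options = []
    if age_years >= 0.25 and hours_since_contact <= 48:   # 3 months = 0.25 years
        options.append("oseltamivir (oral)")
    if age_years > 7 and hours_since_contact <= 36:
        options.append("zanamivir (inhaled)")
    return options

print(prophylaxis_options(age_years=30, hours_since_contact=24))
# ['oseltamivir (oral)', 'zanamivir (inhaled)']
print(prophylaxis_options(age_years=5, hours_since_contact=40))
# ['oseltamivir (oral)']
```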
Infection control
Hand hygiene is important in reducing the spread of influenza. This includes frequent hand washing with soap and water, using alcohol-based hand sanitizers, and not touching one's eyes, nose, and mouth with one's hands. Covering one's nose and mouth when coughing or sneezing is important. Other methods to limit influenza transmission include staying home when sick, avoiding contact with others until one day after symptoms end, and disinfecting surfaces likely to be contaminated by the virus.
Research thus far has not shown a significant reduction in seasonal influenza with mask usage. The effectiveness of screening at points of entry into countries is not well researched. Social distancing measures such as school closures, isolation or quarantine, and limiting mass gatherings may reduce transmission, but these measures are often expensive, unpopular, and difficult to implement. Consequently, the commonly recommended methods of infection control are respiratory etiquette, hand hygiene, and mask wearing, which are inexpensive and easy. Pharmaceutical measures are effective but may not be available in the early stages of an outbreak.
In health care settings, infected individuals may be cohorted or assigned to individual rooms. Protective clothing such as masks, gloves, and gowns is recommended when coming into contact with infected individuals if there is a risk of exposure to infected bodily fluids. Keeping patients in negative pressure rooms and avoiding aerosol-producing activities may help, but special air handling and ventilation systems are not considered necessary to prevent the spread of influenza in the air. In residential homes, new admissions may need to be suspended until the spread of influenza is controlled.
Since influenza viruses circulate in animals such as birds and pigs, prevention of transmission from these animals is important. Water treatment, indoor raising of animals, quarantining sick animals, vaccination, and biosecurity are the primary measures used. Placing poultry houses and piggeries on high ground away from high-density farms, backyard farms, live poultry markets, and bodies of water helps to minimize contact with wild birds. Closure of live poultry markets appears to be the most effective measure and has been shown to be effective at controlling the spread of H5N1, H7N9, and H9N2. Other biosecurity measures include cleaning and disinfecting facilities and vehicles, banning visits to poultry farms, not bringing birds intended for slaughter back to farms, changing clothes, disinfecting foot baths, and treating food and water.
If live poultry markets are not closed, then "clean days" when unsold poultry is removed and facilities are disinfected and "no carry-over" policies to eliminate infectious material before new poultry arrive can be used to reduce the spread of influenza viruses. If a novel influenza virus has breached the aforementioned biosecurity measures, then rapid detection to stamp it out via quarantining, decontamination, and culling may be necessary to prevent the virus from becoming endemic. Vaccines exist for avian H5, H7, and H9 subtypes that are used in some countries. In China, for example, vaccination of domestic birds against H7N9 successfully limited its spread, indicating that vaccination may be an effective strategy if used in combination with other measures to limit transmission. In pigs and horses, management of influenza depends on vaccination combined with biosecurity.
Diagnosis
Diagnosis based on symptoms is fairly accurate in otherwise healthy people during seasonal epidemics and should be suspected in cases of pneumonia, acute respiratory distress syndrome (ARDS), sepsis, or if encephalitis, myocarditis, or breakdown of muscle tissue occur. Because influenza is similar to other viral respiratory tract illnesses, laboratory diagnosis is necessary for confirmation. Common sample collection methods for testing include nasal and throat swabs. Samples may be taken from the lower respiratory tract if infection has cleared the upper but not lower respiratory tract. Influenza testing is recommended for anyone hospitalized with symptoms resembling influenza during flu season or who is connected to an influenza case. For severe cases, earlier diagnosis improves patient outcome. Diagnostic methods that can identify influenza include viral cultures, antibody- and antigen-detecting tests, and nucleic acid-based tests.
Viruses can be grown in a culture of mammalian cells or embryonated eggs for 3–10 days to monitor cytopathic effect. Final confirmation can then be done via antibody staining, hemadsorption using red blood cells, or immunofluorescence microscopy. Shell vial cultures, which can identify infection via immunostaining before a cytopathic effect appears, are more sensitive than traditional cultures with results in 1–3 days. Cultures can be used to characterize novel viruses, observe sensitivity to antiviral drugs, and monitor antigenic drift, but they are relatively slow and require specialized skills and equipment.
Serological assays can be used to detect an antibody response to influenza after natural infection or vaccination. Common serological assays include hemagglutination inhibition assays that detect HA-specific antibodies, virus neutralization assays that check whether antibodies have neutralized the virus, and enzyme-linked immunosorbent assays. These methods tend to be relatively inexpensive and fast but are less reliable than nucleic-acid based tests.
Direct fluorescent or immunofluorescent antibody (DFA/IFA) tests involve staining respiratory epithelial cells in samples with fluorescently-labeled influenza-specific antibodies, followed by examination under a fluorescent microscope. They can differentiate between influenza A virus and influenza B virus but can not subtype influenza A virus. Rapid influenza diagnostic tests (RIDTs) are a simple way of obtaining assay results, are low cost, and produce results in less than 30 minutes, so they are commonly used, but they can not distinguish between influenza A virus and influenza B virus or between influenza A virus subtypes and are not as sensitive as nucleic-acid based tests.
Nucleic acid-based tests (NATs) amplify and detect viral nucleic acid. Most of these tests take a few hours, but rapid molecular assays are as fast as RIDTs. Among NATs, reverse transcription polymerase chain reaction (RT-PCR) is the most traditional and considered the gold standard for diagnosing influenza because it is fast and can subtype influenza A virus, but it is relatively expensive and more prone to false-positives than cultures. Other NATs that have been used include loop-mediated isothermal amplification-based assays, simple amplification-based assays, and nucleic acid sequence-based amplification. Nucleic acid sequencing methods can identify infection by obtaining the nucleic acid sequence of viral samples to identify the virus and antiviral drug resistance. The traditional method is Sanger sequencing, but it has been largely replaced by next-generation methods that have greater sequencing speed and throughput.
Management
Treatment in cases of mild or moderate illness is supportive and includes anti-fever medications such as acetaminophen and ibuprofen, adequate fluid intake to avoid dehydration, and rest. Cough drops and throat sprays may be beneficial for sore throat. It is recommended to avoid alcohol and tobacco use while ill. Aspirin is not recommended to treat influenza in children due to an elevated risk of developing Reye syndrome. Corticosteroids are not recommended except when treating septic shock or an underlying medical condition, such as chronic obstructive pulmonary disease or asthma exacerbation, since they are associated with increased mortality. If a secondary bacterial infection occurs, then antibiotics may be necessary.
Antivirals
Antiviral drugs are primarily used to treat severely ill patients, especially those with compromised immune systems. Antivirals are most effective when started in the first 48 hours after symptoms appear. Later administration may still be beneficial for those who have underlying immune defects, those with more severe symptoms, or those who have a higher risk of developing complications if these individuals are still shedding the virus. Antiviral treatment is also recommended if a person is hospitalized with suspected influenza instead of waiting for test results to return and if symptoms are worsening. Most antiviral drugs against influenza fall into two categories: neuraminidase (NA) inhibitors and M2 inhibitors. Baloxavir marboxil is a notable exception, which targets the endonuclease activity of the viral RNA polymerase and can be used as an alternative to NA and M2 inhibitors for influenza A virus and influenza B virus.
NA inhibitors target the enzymatic activity of NA, mimicking the binding of sialic acid in the active site of NA on influenza A virus and influenza B virus virions so that viral release from infected cells and the rate of viral replication are impaired. NA inhibitors include oseltamivir, which is consumed orally in a prodrug form and converted to its active form in the liver, and zanamivir, which is a powder that is inhaled nasally. Oseltamivir and zanamivir are effective for prophylaxis and post-exposure prophylaxis, and research overall indicates that NA inhibitors are effective at reducing rates of complications, hospitalization, and mortality and the duration of illness. Additionally, the earlier NA inhibitors are provided, the better the outcome, though late administration can still be beneficial in severe cases. Other NA inhibitors include laninamivir and peramivir, the latter of which can be used as an alternative to oseltamivir for people who cannot tolerate or absorb it.
The adamantanes amantadine and rimantadine are orally administered drugs that block the influenza virus' M2 ion channel, preventing viral uncoating. These drugs are only functional against influenza A virus but are no longer recommended for use because of widespread resistance to them among influenza A viruses. Adamantane resistance first emerged in H3N2 in 2003, becoming worldwide by 2008. Oseltamivir resistance is no longer widespread because the 2009 pandemic H1N1 strain (H1N1 pdm09), which is resistant to adamantanes, seemingly replaced resistant strains in circulation. Since the 2009 pandemic, oseltamivir resistance has mainly been observed in patients undergoing therapy, especially the immunocompromised and young children. Oseltamivir resistance is usually reported in H1N1, but has been reported less commonly in H3N2 and influenza B viruses. Because of this, oseltamivir is recommended as the first drug of choice for immunocompetent people, whereas for the immunocompromised, oseltamivir is recommended against H3N2 and influenza B virus and zanamivir against H1N1 pdm09. Zanamivir resistance is observed less frequently, and resistance to peramivir and baloxavir marboxil is possible.
Prognosis
In healthy individuals, influenza infection is usually self-limiting and rarely fatal. Symptoms usually last for 2–8 days. Influenza can cause people to miss work or school, and it is associated with decreased job performance and, in older adults, reduced independence. Fatigue and malaise may last for several weeks after recovery, and healthy adults may experience pulmonary abnormalities that can take several weeks to resolve. Complications and mortality primarily occur in high-risk populations and those who are hospitalized. Severe disease and mortality are usually attributable to pneumonia from the primary viral infection or a secondary bacterial infection, which can progress to ARDS.
Other respiratory complications that may occur include sinusitis, bronchitis, bronchiolitis, excess fluid buildup in the lungs, and exacerbation of chronic bronchitis and asthma. Middle ear infection and croup may occur, most commonly in children. Secondary S. aureus infection has been observed, primarily in children, to cause toxic shock syndrome after influenza, with hypotension, fever, and reddening and peeling of the skin. Complications affecting the cardiovascular system are rare and include pericarditis, fulminant myocarditis with a fast, slow, or irregular heartbeat, and exacerbation of pre-existing cardiovascular disease. Inflammation or swelling of muscles accompanied by muscle tissue breaking down occurs rarely, usually in children, which presents as extreme tenderness and muscle pain in the legs and a reluctance to walk for 2–3 days.
Influenza can affect pregnancy, including causing smaller neonatal size, increased risk of premature birth, and an increased risk of child death shortly before or after birth. Neurological complications have been associated with influenza on rare occasions, including aseptic meningitis, encephalitis, disseminated encephalomyelitis, transverse myelitis, and Guillain–Barré syndrome. Additionally, febrile seizures and Reye syndrome can occur, most commonly in children. Influenza-associated encephalopathy can occur directly from central nervous system infection from the presence of the virus in blood and presents as sudden onset of fever with convulsions, followed by rapid progression to coma. An atypical form of encephalitis called encephalitis lethargica, characterized by headache, drowsiness, and coma, may rarely occur sometime after infection. In survivors of influenza-associated encephalopathy, neurological defects may occur. Primarily in children, in severe cases the immune system may rarely dramatically overproduce white blood cells that release cytokines, causing severe inflammation.
People who are at least 65 years of age, due to a weakened immune system from aging or a chronic illness, are a high-risk group for developing complications, as are children less than one year of age and children who have not been previously exposed to influenza viruses multiple times. Pregnant women are at an elevated risk, which increases by trimester and lasts up to two weeks after childbirth. Obesity, in particular a body mass index greater than 35–40, is associated with greater amounts of viral replication, increased severity of secondary bacterial infection, and reduced vaccination efficacy. People who have underlying health conditions are also considered at-risk, including those who have congenital or chronic heart problems or lung (e.g. asthma), kidney, liver, blood, neurological, or metabolic (e.g. diabetes) disorders, as are people who are immunocompromised from chemotherapy, asplenia, prolonged steroid treatment, splenic dysfunction, or HIV infection. Tobacco use, including past use, places a person at risk. The role of genetics in influenza is not well researched, but it may be a factor in influenza mortality.
Epidemiology
Influenza is typically characterized by seasonal epidemics and sporadic pandemics. Most of the burden of influenza is a result of flu seasons caused by influenza A virus and influenza B virus. Among influenza A virus subtypes, H1N1 and H3N2 circulate in humans and are responsible for seasonal influenza. Cases disproportionately occur in children, but most severe cases are among the elderly, the very young, and the immunocompromised. In a typical year, influenza viruses infect 5–15% of the global population, causing 3–5 million cases of severe illness annually and accounting for 290,000–650,000 deaths each year due to respiratory illness. 5–10% of adults and 20–30% of children contract influenza each year. The reported number of influenza cases is usually much lower than the actual number.
During seasonal epidemics, it is estimated that about 80% of otherwise healthy people who have a cough or sore throat have the flu. Approximately 30–40% of people hospitalized for influenza develop pneumonia, and about 5% of all severe pneumonia cases in hospitals are due to influenza, which is also the most common cause of ARDS in adults. In children, influenza and respiratory syncytial virus are the two most common causes of ARDS. About 3–5% of children each year develop otitis media due to influenza. Adults who develop organ failure from influenza and children with high Pediatric Index of Mortality (PIM) scores and acute renal failure have higher rates of mortality. During seasonal influenza, mortality is concentrated in the very young and the elderly, whereas during flu pandemics, young adults are often affected at a high rate.
In temperate regions, the number of influenza cases varies from season to season. Lower vitamin D levels, presumably due to less sunlight, lower humidity, lower temperature, and minor changes in virus proteins caused by antigenic drift contribute to annual epidemics that peak during the winter season. In the northern hemisphere, this is from October to May (more narrowly December to April), and in the southern hemisphere, this is from May to October (more narrowly June to September). There are therefore two distinct influenza seasons every year in temperate regions, one in the northern hemisphere and one in the southern hemisphere. In tropical and subtropical regions, seasonality is more complex and appears to be affected by various climatic factors such as minimum temperature, hours of sunshine, maximum rainfall, and high humidity. Influenza may therefore occur year-round in these regions. Influenza epidemics in modern times have the tendency to start in the eastern or southern hemisphere, with Asia being a key reservoir.
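A minimal sketch of the temperate-zone seasonality described above, using the broad month ranges from the text; the hemisphere labels and set representation are our own simplification, and tropical regions are deliberately left out because their seasonality is more complex.

```python
# Sketch of the temperate-zone flu seasons given above: roughly October-May in
# the northern hemisphere and May-October in the southern hemisphere.
NORTHERN_SEASON = {10, 11, 12, 1, 2, 3, 4, 5}   # October through May
SOUTHERN_SEASON = {5, 6, 7, 8, 9, 10}            # May through October

def in_flu_season(month, hemisphere):
    season = NORTHERN_SEASON if hemisphere == "north" else SOUTHERN_SEASON
    return month in season

print(in_flu_season(1, "north"))   # True  (January, northern winter)
print(in_flu_season(7, "north"))   # False
print(in_flu_season(7, "south"))   # True  (July, southern winter)
```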
Influenza A virus and influenza B virus co-circulate, so they have the same patterns of transmission. The seasonality of influenza C virus, however, is poorly understood. Influenza C virus infection is most common in children under the age of two, and by adulthood most people have been exposed to it. Influenza C virus-associated hospitalization most commonly occurs in children under the age of three and is frequently accompanied by co-infection with another virus or a bacterium, which may increase the severity of disease. When considering all hospitalizations for respiratory illness among young children, influenza C virus appears to account for only a small percentage of such cases. Large outbreaks of influenza C virus infection can occur, so incidence varies significantly.
Outbreaks of influenza caused by novel influenza viruses are common. Depending on the level of pre-existing immunity in the population, novel influenza viruses can spread rapidly and cause pandemics with millions of deaths. These pandemics, in contrast to seasonal influenza, are caused by antigenic shifts involving animal influenza viruses. To date, all known flu pandemics have been caused by influenza A viruses, and they follow the same pattern of spreading from an origin point to the rest of the world over the course of multiple waves in a year. Pandemic strains tend to be associated with higher rates of pneumonia in otherwise healthy individuals. Generally after each influenza pandemic, the pandemic strain continues to circulate as the cause of seasonal influenza, replacing prior strains. From 1700 to 1889, influenza pandemics occurred about once every 50–60 years. Since then, pandemics have occurred about once every 10–50 years, so they may be getting more frequent over time.
History
The first influenza epidemic may have occurred around 6,000 BC in China, and possible descriptions of influenza exist in Greek writings from the 5th century BC. In both 1173–1174 AD and 1387 AD, epidemics occurred across Europe that were named "influenza". Whether these epidemics or others were caused by influenza is unclear since there was then no consistent naming pattern for epidemic respiratory diseases, and "influenza" did not become clearly associated with respiratory disease until centuries later. Influenza may have been brought to the Americas as early as 1493, when an epidemic disease resembling influenza killed most of the population of the Antilles.
The first convincing record of an influenza pandemic was in 1510. It began in East Asia before spreading to North Africa and then Europe. Following the pandemic, seasonal influenza occurred, with subsequent pandemics in 1557 and 1580. The flu pandemic in 1557 was potentially the first time influenza was connected to miscarriage and death of pregnant women. The 1580 influenza pandemic originated in Asia during summer, spread to Africa, then Europe, and finally America. By the end of the 16th century, influenza was beginning to become understood as a specific, recognizable disease with epidemic and endemic forms. In 1648, it was discovered that horses also experience influenza.
Influenza data after 1700 is more accurate, so it is easier to identify flu pandemics after this point. The first flu pandemic of the 18th century started in 1729 in Russia in spring, spreading worldwide over the course of three years with distinct waves, the later ones being more lethal. Another flu pandemic occurred in 1781–1782, starting in China in autumn. From this pandemic, influenza became associated with sudden outbreaks of febrile illness. The next flu pandemic was from 1830 to 1833, beginning in China in winter. This pandemic had a high attack rate, but the mortality rate was low.
A minor influenza pandemic occurred from 1847 to 1851 at the same time as the third cholera pandemic and was the first flu pandemic to occur with vital statistics being recorded, so influenza mortality was clearly recorded for the first time. Fowl plague (now recognised as highly pathogenic avian influenza) was recognized in 1878 and was soon linked to transmission to humans. By the time of the 1889 pandemic, which may have been caused by an H2N2 strain, the flu had become an easily recognizable disease.
The microbial agent responsible for influenza was incorrectly identified in 1892 by R. F. J. Pfeiffer as the bacterial species Haemophilus influenzae, which retains "influenza" in its name. From 1901 to 1903, Italian and Austrian researchers were able to show that avian influenza, then called "fowl plague", was caused by a microscopic agent smaller than bacteria by using filters with pores too small for bacteria to pass through. The fundamental differences between viruses and bacteria, however, were not yet fully understood.
From 1918 to 1920, the Spanish flu pandemic became the most devastating influenza pandemic and one of the deadliest pandemics in history. The pandemic, caused by an H1N1 strain of influenza A, likely began in the United States before spreading worldwide via soldiers during and after the First World War. The initial wave in the first half of 1918 was relatively minor and resembled past flu pandemics, but the second wave later that year had a much higher mortality rate. A third wave with lower mortality occurred in many places a few months after the second. By the end of 1920, it is estimated that about a third to half of all people in the world had been infected, with tens of millions of deaths, disproportionately young adults. During the 1918 pandemic, the respiratory route of transmission was clearly identified and influenza was shown to be caused by a "filter passer", not a bacterium, but there remained a lack of agreement about influenza's cause for another decade and research on influenza declined. After the pandemic, H1N1 circulated in humans in seasonal form until the next pandemic.
In 1931, Richard Shope published three papers identifying a virus as the cause of swine influenza, a then newly recognized disease among pigs that was characterized during the second wave of the 1918 pandemic. Shope's research reinvigorated research on human influenza, and many advances in virology, serology, immunology, experimental animal models, vaccinology, and immunotherapy have since arisen from influenza research. Just two years after influenza viruses were discovered, in 1933, influenza A virus was identified as the agent responsible for human influenza. Subtypes of influenza A virus were discovered throughout the 1930s, and influenza B virus was discovered in 1940.
During the Second World War, the US government worked on developing inactivated vaccines for influenza, resulting in the first influenza vaccine being licensed in 1945 in the United States. Influenza C virus was discovered two years later in 1947. In 1955, avian influenza was confirmed to be caused by influenza A virus. Four influenza pandemics have occurred since WWII. The first of these was the Asian flu from 1957 to 1958, caused by an H2N2 strain and beginning in China's Yunnan province. The number of deaths probably exceeded one million, mostly among the very young and very old. This was the first flu pandemic to occur in the presence of a global surveillance system and laboratories able to study the novel influenza virus. After the pandemic, H2N2 was the influenza A virus subtype responsible for seasonal influenza. The first antiviral drug against influenza, amantadine, was approved in 1966, with additional antiviral drugs being used since the 1990s.
In 1968, H3N2 was introduced into humans through a reassortment event between an avian H3N2 strain and an H2N2 strain that was circulating in humans. The novel H3N2 strain emerged in Hong Kong and spread worldwide, causing the Hong Kong flu pandemic, which resulted in 500,000–2,000,000 deaths. This was the first pandemic to spread significantly by air travel. H2N2 and H3N2 co-circulated after the pandemic until 1971, when H2N2 waned in prevalence and was completely replaced by H3N2. In 1977, H1N1 reemerged in humans, possibly after it was released from a freezer in a laboratory accident, and caused a pseudo-pandemic. This H1N1 strain was antigenically similar to the H1N1 strains that circulated prior to 1957. Since 1977, both H1N1 and H3N2 have circulated in humans as part of seasonal influenza. In 1980, the classification system used to subtype influenza viruses was introduced.
At some point, influenza B virus diverged into two strains, named the B/Victoria-like and B/Yamagata-like lineages, both of which have been circulating in humans since 1983.
In 1996, a highly pathogenic H5N1 subtype of influenza A was detected in geese in Guangdong, China and a year later emerged in poultry in Hong Kong, gradually spreading worldwide from there. A small H5N1 outbreak in humans in Hong Kong occurred then, and sporadic human cases have occurred since 1997, carrying a high case fatality rate.
The most recent flu pandemic was the 2009 swine flu pandemic, which originated in Mexico and resulted in hundreds of thousands of deaths. It was caused by a novel H1N1 strain that was a reassortment of human, swine, and avian influenza viruses. The 2009 pandemic had the effect of replacing prior H1N1 strains in circulation with the novel strain but not any other influenza viruses. Consequently, H1N1, H3N2, and both influenza B virus lineages have been in circulation in seasonal form since the 2009 pandemic.
In 2011, influenza D virus was discovered in pigs in Oklahoma, USA, and cattle were later identified as the primary reservoir of influenza D virus.
In the same year, avian H7N9 was detected in China and began to cause human infections in 2013, starting in Shanghai and Anhui and remaining mostly in China. Highly pathogenic H7N9 emerged sometime in 2016 and has occasionally caused incidental human infections. Other avian influenza viruses have less commonly infected humans since the 1990s, including H5N1, H5N5, H5N6, H5N8, H6N1, H7N2, H7N7, and H10N7, and have begun to spread throughout much of the world since the 2010s. Future flu pandemics, which may be caused by an influenza virus of avian origin, are viewed as almost inevitable, and increased globalization has made it easier for a pandemic virus to spread, so there are continual efforts to prepare for future pandemics and improve the prevention and treatment of influenza.
Etymology
The word influenza comes from the Italian word influenza, from medieval Latin influentia, originally meaning 'visitation' or 'influence'. Terms such as influenza di freddo, meaning 'influence of the cold', and influenza di stelle, meaning 'influence of the stars', are attested from the 14th century. The latter referred to the disease's cause, which at the time was ascribed by some to unfavorable astrological conditions. As early as 1504, influenza began to mean a 'visitation' or 'outbreak' of any disease affecting many people in a single place at once. During an outbreak of influenza in 1743 that started in Italy and spread throughout Europe, the word reached the English language and was anglicized in pronunciation. Since the mid-1800s, influenza has also been used to refer to severe colds. The shortened form of the word, "flu", is first attested in 1839 as flue with the spelling flu confirmed in 1893. Other names that have been used for influenza include epidemic catarrh, la grippe from French, sweating sickness, and, especially when referring to the 1918 pandemic strain, Spanish fever.
In animals
Birds
Aquatic birds such as ducks, geese, shorebirds, and gulls are the primary reservoir of influenza A viruses (IAVs).
Because of the impact of avian influenza on economically important chicken farms, a classification system was devised in 1981 which divided avian virus strains into either highly pathogenic (and therefore potentially requiring vigorous control measures) or low pathogenic. The test for this is based solely on the effect on chickens – a virus strain is highly pathogenic avian influenza (HPAI) if 75% or more of chickens die after being deliberately infected with it. The alternative classification is low pathogenic avian influenza (LPAI), which produces mild or no symptoms. This classification system has since been modified to take into account the structure of the virus' hemagglutinin protein. At the genetic level, an AIV can be identified as an HPAI virus if it has a multibasic cleavage site in the HA protein, encoded by additional basic residues in the HA gene. Other species of birds, especially water birds, can become infected with HPAI virus without experiencing severe symptoms and can spread the infection over large distances; the exact symptoms depend on the species of bird and the strain of virus. Classification of an avian virus strain as HPAI or LPAI does not predict how serious the disease might be if it infects humans or other mammals.
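The two classification signals described in this paragraph can be sketched as a toy check. In the code below, the 75% chicken mortality criterion comes from the text, while the motif test for a multibasic cleavage site (a run of basic residues, arginine/lysine, near the cleavage point) is a simplified stand-in for the official molecular definition; the function name and the threshold of four basic residues are assumptions.

```python
# Toy sketch of the HPAI/LPAI signals described above: the chicken mortality
# criterion (>= 75% deaths after experimental infection) and the presence of a
# multibasic cleavage site in HA. The motif check is a simplified illustration.
def is_hpai(chicken_mortality_fraction=None, ha_cleavage_site=None):
    if chicken_mortality_fraction is not None and chicken_mortality_fraction >= 0.75:
        return True
    if ha_cleavage_site is not None:
        basic_residues = sum(1 for residue in ha_cleavage_site if residue in "RK")
        if basic_residues >= 4:   # several basic residues suggest a multibasic site
            return True
    return False

print(is_hpai(chicken_mortality_fraction=0.9))       # True
print(is_hpai(ha_cleavage_site="PQRERRRKKR"))        # True (multibasic motif)
print(is_hpai(chicken_mortality_fraction=0.1,
              ha_cleavage_site="PQRETR"))            # False (low pathogenic)
```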
Symptoms of HPAI infection in chickens include lack of energy and appetite, decreased egg production, soft-shelled or misshapen eggs, swelling of the head, comb, wattles, and hocks, purple discoloration of wattles, combs, and legs, nasal discharge, coughing, sneezing, incoordination, and diarrhea; birds infected with an HPAI virus may also die suddenly without any signs of infection. Notable HPAI viruses include influenza A (H5N1) and A (H7N9). HPAI viruses have been a major disease burden in the 21st century, resulting in the death of large numbers of birds. In H7N9's case, some circulating strains were originally low pathogenic but became high pathogenic by mutating to acquire the HA multibasic cleavage site. Avian H9N2 is also of concern because although it is low pathogenic, it is a common donor of genes to H5N1 and H7N9 during reassortment.
Migratory birds can spread influenza across long distances. An example of this was when an H5N1 strain in 2005 infected birds at Qinghai Lake, China, which is a stopover and breeding site for many migratory birds, subsequently spreading the virus to more than 20 countries across Asia, Europe, and the Middle East. AIVs can be transmitted from wild birds to domestic free-range ducks and in turn to poultry through contaminated water, aerosols, and fomites. Ducks therefore act as key intermediates between wild and domestic birds. Transmission to poultry typically occurs in backyard farming and live animal markets where multiple species interact with each other. From there, AIVs can spread to poultry farms in the absence of adequate biosecurity. Among poultry, HPAI transmission occurs through aerosols and contaminated feces, cages, feed, and dead animals. Back-transmission of HPAI viruses from poultry to wild birds has occurred and is implicated in mass die-offs and intercontinental spread.
AIVs have occasionally infected humans through aerosols, fomites, and contaminated water. Direct transmission from wild birds is rare. Instead, most transmission involves domestic poultry, mainly chickens, ducks, and geese but also a variety of other birds such as guinea fowl, partridge, pheasants, and quails. The primary risk factor for infection with AIVs is exposure to birds in farms and live poultry markets. Typically, infection with an AIV has an incubation period of 3–5 days but can be up to 9 days. H5N1 and H7N9 cause severe lower respiratory tract illness, whereas other AIVs such as H9N2 cause a more mild upper respiratory tract illness, commonly with conjunctivitis. Limited transmission of avian H2, H5-7, H9, and H10 subtypes from one person to another through respiratory droplets, aerosols, and fomites has occurred, but sustained human-to-human transmission of AIVs has not occurred.
Pigs
Influenza in pigs is a respiratory disease similar to influenza in humans and is found worldwide. Asymptomatic infections are common. Symptoms typically appear 1–3 days after infection and include fever, lethargy, anorexia, weight loss, labored breathing, coughing, sneezing, and nasal discharge. In sows, pregnancy may be aborted. Complications include secondary infections and potentially fatal bronchopneumonia. Pigs become contagious within a day of infection and typically spread the virus for 7–10 days, which can spread rapidly within a herd. Pigs usually recover within 3–7 days after symptoms appear. Prevention and control measures include inactivated vaccines and culling infected herds. Influenza A virus subtypes H1N1, H1N2, and H3N2 are usually responsible for swine flu.
Some influenza A viruses can be transmitted via aerosols from pigs to humans and vice versa. Pigs, along with bats and quails, are recognized as a mixing vessel of influenza viruses because they have both α-2,3 and α-2,6 sialic acid receptors in their respiratory tract. Because of that, both avian and mammalian influenza viruses can infect pigs. If co-infection occurs, reassortment is possible. A notable example of this was the reassortment of a swine, avian, and human influenza virus that caused the 2009 flu pandemic. Spillover events from humans to pigs appear to be more common than from pigs to humans.
Other animals
Influenza viruses have been found in many other animals, including cattle, horses, dogs, cats, and marine mammals. Nearly all influenza A viruses are apparently descended from ancestral viruses in birds. The exceptions are the bat influenza-like viruses, which have an uncertain origin. These bat viruses have HA and NA subtypes H17, H18, N10, and N11. H17N10 and H18N11 are unable to reassort with other influenza A viruses, but they are still able to replicate in other mammals.
Equine influenza A viruses include H7N7 and two lineages of H3N8. H7N7, however, has not been detected in horses since the late 1970s, so it may have become extinct in horses. H3N8 in equines spreads via aerosols and causes respiratory illness. Equine H3N8 preferentially binds to α-2,3 sialic acids, so horses are usually considered dead-end hosts, but transmission to dogs and camels has occurred, raising concerns that horses may be mixing vessels for reassortment. In canines, the only influenza A viruses in circulation are equine-derived H3N8 and avian-derived H3N2. Canine H3N8 has not been observed to reassort with other subtypes. H3N2 has a much broader host range and can reassort with H1N1 and H5N1. An isolated case of H6N1, likely from a chicken, was found infecting a dog, so other AIVs may emerge in canines.
A wide range of other mammals have been affected by avian influenza A viruses, generally due to eating birds which had been infected. There have been instances where transmission of the disease between mammals, including seals and cows, may have occurred. Various mutations have been identified that are associated with AIVs adapting to mammals. Since HA proteins vary in which sialic acids they bind to, mutations in the HA receptor binding site can allow AIVs to infect mammals. Other mutations include mutations affecting which sialic acids NA proteins cleave and a mutation in the PB2 polymerase subunit that improves tolerance of lower temperatures in mammalian respiratory tracts and enhances RNP assembly by stabilizing NP and PB2 binding.
Influenza B virus is mainly found in humans but has also been detected in pigs, dogs, horses, and seals. Likewise, influenza C virus primarily infects humans but has been observed in pigs, dogs, cattle, and dromedary camels. Influenza D virus causes an influenza-like illness in pigs but its impact in its natural reservoir, cattle, is relatively unknown. It may cause respiratory disease resembling human influenza on its own, or it may be part of a bovine respiratory disease (BRD) complex with other pathogens during co-infection. BRD is a concern for the cattle industry, so influenza D virus' possible involvement in BRD has led to research on vaccines for cattle that can provide protection against influenza D virus. Two antigenic lineages are in circulation: D/swine/Oklahoma/1334/2011 (D/OK) and D/bovine/Oklahoma/660/2013 (D/660).
Staphylococcal infection
A staphylococcal infection or staph infection is an infection caused by members of the Staphylococcus genus of bacteria.
These bacteria commonly inhabit the skin and nose where they are innocuous, but may enter the body through cuts or abrasions which may be nearly invisible. Once inside the body, the bacteria may spread to a number of body systems and organs, including the heart, where the toxins produced by the bacteria may cause cardiac arrest. Once the bacterium has been identified as the cause of the illness, treatment is often in the form of antibiotics and, where possible, drainage of the infected area. However, many strains of this bacterium have become antibiotic resistant; for those with these kinds of infection, the body's own immune system is the only defense against the disease. If that system is weakened or compromised, the disease may progress rapidly. Anyone can contract staph, but pregnant women, children, and people with chronic diseases or who are immuno-deficient are often more susceptible to contracting an infection.
Types
Beyond the coagulase-positive and coagulase-negative infections described below, other staphylococcal infections include:
Closed-space infections of the fingertips, known as paronychia.
Suspected involvement in atopic dermatitis (eczema), which has been investigated in clinical trials.
Coagulase-positive
The main coagulase-positive staphylococcus is Staphylococcus aureus, although not all strains of Staphylococcus aureus are coagulase positive. These bacteria can survive on dry surfaces, increasing the chance of transmission. S. aureus is also implicated in toxic shock syndrome; during the 1980s some tampons allowed the rapid growth of S. aureus, which released toxins that were absorbed into the bloodstream. Any S. aureus infection can cause the staphylococcal scalded skin syndrome, a cutaneous reaction to exotoxin absorbed into the bloodstream. It can also cause a type of septicaemia called pyaemia. The infection can be life-threatening. Problematically, methicillin-resistant Staphylococcus aureus (MRSA) has become a major cause of hospital-acquired infections. MRSA has also been recognized with increasing frequency in community-acquired infections. The symptoms of a staphylococcal infection include a collection of pus, such as a boil or furuncle, or abscess. The area is typically tender or painful and may be reddened or swollen.
Coagulase-negative
S. epidermidis, a coagulase-negative staphylococcus species, is a commensal of the skin, but can cause severe infections in immune-suppressed patients and those with central venous catheters.
S. saprophyticus, another coagulase-negative species that is part of the normal vaginal flora, is predominantly implicated in uncomplicated lower genitourinary tract infections in young sexually active women.
Other staphylococcal species have been implicated in human infections, notably S. lugdunensis, S. schleiferi, and S. caprae.
Causes
Staph infections have a multitude of different causes, such as:
Open wounds – This is by far the biggest cause of staph infection. Any open wound, even one as small as a paper cut, is vulnerable to infection. Staph bacteria can enter the body through any open wound, so it is important to properly treat, disinfect, and bandage wounds.
Contact with infected persons or surfaces – Staph infections spread readily through contact with a person who is already infected. A person with a staph infection is contagious until the bacteria are completely out of their body and any wounds from the infection have healed. The spread of staph is common in contact sports such as wrestling, through contact in locker rooms, or by sharing equipment.
Weakened immune system – Anyone with a weakened immune system, for any reason, is more easily affected by staph bacteria, because their body is less able to defend against infection.
Unwashed linens – Staph bacteria are very resistant to harsh conditions and will cling to objects where they can establish a niche. Unwashed bath towels, blankets, bed sheets, and clothes can provide a favorable environment for these bacteria to grow, which matters because people handle linens every day.
Infection after surgery – Hospitals are a common source of staph contamination. This becomes problematic during surgery, because staph can be introduced into a person's body through an open incision.
Invasive devices – Medical devices that connect internal organs to the outside of the body are particularly problematic, because they provide an open pathway into the body. Examples include catheters, dialysis tubing, feeding tubes, and breathing tubes.
Signs and symptoms
Staph infection is typically characterized by redness, pus, swelling, and tenderness in the infected area, but each type of skin infection caused by staph bacteria looks somewhat different.
A few common skin infections caused by staph bacteria are:
Boils – Boils are the most common type of staph infection; they are pockets of white pus that form where a hair follicle or oil gland is located. The boil and the surrounding skin are tender and red.
Impetigo – Impetigo is most common among children and is usually located around the mouth, nose, hands, and feet. It appears as a rash of painful blisters that eventually produce yellowish pus.
Cellulitis – Cellulitis is also rash-like; the infected skin is red, swollen, and usually warm to the touch. Cellulitis most commonly affects the lower legs but can, less commonly, involve the face and arms.
Staphylococcal scalded skin syndrome – Staphylococcal scalded skin syndrome is caused by toxins produced when a staph infection becomes severe. It is characterized by fever, rash, and blisters.
Methicillin-resistant Staphylococcus aureus (MRSA) – MRSA is one of the most common antibiotic-resistant strains of staph bacteria. It is more difficult to treat than other staph infections. MRSA causes rashes, boils, sores, and other abscesses.
Bacterial identification
In the microbiology lab, Staphylococcus is mainly suspected when Gram-positive cocci arranged in clusters are seen.
Treatment
Treatment for staph infection varies depending on the type and severity of infection. Common treatments are antibiotics, topical creams, and drainage/cleaning of infectious wounds.
Etymology
The generic name Staphylococcus is derived from the Greek word "staphyle", meaning bunch of grapes, and "kokkos", meaning granule. Under the microscope, the bacteria appear like a bunch of grapes or nuts.
Epidemiology
Staphylococcus is one of the leading causes of community-acquired bacterial infections. According to the CDC, after a push from hospitals to better prevent staph infections, the percentage of people affected has dropped dramatically. However, staph infections remain prominent and a cause for concern among healthcare professionals, especially new antibiotic-resistant strains. In the U.S., the incidence of staph infection is around 38.2 to 45.7 per 100,000 person-years, whereas other developed countries have an average incidence rate of 10 to 30 per 100,000 person-years.
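Incidence figures like these translate directly into expected case counts for a given population. The short Python sketch below applies the rates quoted above; the one-million-person population is a made-up example, not a figure from this article.

def expected_cases(rate_per_100k_person_years, population, years=1.0):
    # Expected number of infections given an incidence rate per 100,000 person-years.
    return rate_per_100k_person_years * population * years / 100_000

# U.S.-style incidence range quoted above, applied to a hypothetical city of 1 million people:
print(expected_cases(38.2, 1_000_000))  # ~382 cases per year at the lower bound
print(expected_cases(45.7, 1_000_000))  # ~457 cases per year at the upper bound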
References
External links
Staphylococcaceae
Bacterial toxins
Joint effusion
A joint effusion is the presence of increased intra-articular fluid. It may affect any joint. Commonly it involves the knee (see knee effusion).
Diagnostic approach
The approach to diagnosis depends on the joint involved. While aspiration of the joint is considered the gold standard of treatment, this can be difficult for joints such as the hip. Ultrasound may be used both to verify the existence of an effusion and to guide aspiration.
Differential diagnosis
There are many causes of joint effusion. It may result from trauma, inflammation, hematologic conditions, or infections.
Septic arthritis
Septic arthritis is the purulent invasion of a joint by an infectious agent, with a resultant large effusion due to inflammation. Septic arthritis is a serious condition that can lead to irreversible joint damage if diagnosis is delayed or management is inadequate. It is primarily a disease of children and adolescents.
Gout
Gout usually presents with recurrent attacks of acute inflammatory arthritis (a red, tender, hot, swollen joint). It is caused by elevated levels of uric acid in the blood that crystallize and deposit in joints, tendons, and surrounding tissues. Gout affects 1% of individuals in Western populations at some point in their lives.
Trauma
Trauma from ligamentous, osseous, or meniscal injuries can result in an effusion, which is often a hemarthrosis (bloody effusion).
Treatment
Treatment of joint effusion includes icing, rest, and medication as advised by a physician.
See also
Swelling (medical)
Intermittent hydrarthrosis
References
External links
Musculoskeletal disorders
Medical signs
Acute radiation syndrome
Acute radiation syndrome (ARS), also known as radiation sickness or radiation poisoning, is a collection of health effects that are caused by being exposed to high amounts of ionizing radiation in a short period of time. Symptoms can start within an hour of exposure, and can last for several months. Early symptoms are usually nausea, vomiting and loss of appetite. In the following hours or weeks, initial symptoms may appear to improve, before the development of additional symptoms, after which either recovery or death follow.
ARS involves a total dose of greater than 0.7 Gy (70 rad) that generally comes from a source outside the body and is delivered within a few minutes. Sources of such radiation can occur accidentally or intentionally. They may involve nuclear reactors, cyclotrons, certain devices used in cancer therapy, nuclear weapons, or radiological weapons. It is generally divided into three types: bone marrow, gastrointestinal, and neurovascular syndrome, with bone marrow syndrome occurring at 0.7 to 10 Gy and neurovascular syndrome occurring at doses that exceed 50 Gy. The cells most affected are generally those that are rapidly dividing. At high doses, this causes DNA damage that may be irreparable. Diagnosis is based on a history of exposure and symptoms. Repeated complete blood counts (CBCs) can indicate the severity of exposure.
Treatment of ARS is generally supportive care. This may include blood transfusions, antibiotics, colony-stimulating factors, or stem cell transplant. Radioactive material remaining on the skin or in the stomach should be removed. If radioiodine was inhaled or ingested, potassium iodide is recommended. Complications such as leukemia and other cancers among those who survive are managed as usual. Short-term outcomes depend on the dose exposure.
ARS is generally rare. A single event can affect a large number of people, as happened in the atomic bombings of Hiroshima and Nagasaki and the Chernobyl nuclear power plant disaster. ARS differs from chronic radiation syndrome, which occurs following prolonged exposures to relatively low doses of radiation.
Signs and symptoms
Classically, ARS is divided into three main presentations: hematopoietic, gastrointestinal, and neurovascular. These syndromes may be preceded by a prodrome. The speed of symptom onset is related to radiation exposure, with greater doses resulting in a shorter delay in symptom onset. These presentations presume whole-body exposure, and many of them are markers that are invalid if the entire body has not been exposed. Each syndrome requires that the tissue showing the syndrome itself be exposed (e.g., gastrointestinal syndrome is not seen if the stomach and intestines are not exposed to radiation). Some areas affected are:
Hematopoietic. This syndrome is marked by a drop in the number of blood cells, called aplastic anemia. This may result in infections due to a low number of white blood cells, bleeding due to a lack of platelets, and anemia due to too few red blood cells in circulation. These changes can be detected by blood tests after a relatively low whole-body acute dose, though the patient may never feel symptoms at the lowest doses that produce detectable changes. Conventional trauma and burns resulting from a bomb blast are complicated by the poor wound healing caused by hematopoietic syndrome, increasing mortality.
Gastrointestinal. This syndrome often follows higher absorbed doses than the hematopoietic form. The signs and symptoms of this form of radiation injury include nausea, vomiting, loss of appetite, and abdominal pain. Vomiting in this time-frame is a marker for whole-body exposures in the fatal range. Without exotic treatment such as bone marrow transplant, death with this dose is common, due generally more to infection than gastrointestinal dysfunction.
Neurovascular. This syndrome typically occurs at the highest absorbed doses, though it may begin at lower ones. It presents with neurological symptoms such as dizziness, headache, or decreased level of consciousness, occurring within minutes to a few hours, with an absence of vomiting, and is almost always fatal, even with aggressive intensive care.
Early symptoms of ARS typically include nausea, vomiting, headaches, fatigue, fever, and a short period of skin reddening. These symptoms may occur at comparatively low radiation doses. Because they are common to many illnesses, they may not, by themselves, indicate acute radiation sickness.
Dose effects
A table and description of symptoms (given in rems, where 100 rem = 1 Sv), derived from data on the effects on humans subjected to the atomic bombings of Hiroshima and Nagasaki, the indigenous peoples of the Marshall Islands subjected to the Castle Bravo thermonuclear bomb, animal studies, and laboratory accidents, have been compiled by the U.S. Department of Defense.
A person who was close to the hypocenter of the atomic bomb Little Boy at Hiroshima, Japan, was found to have absorbed about 9.46 grays (Gy) of ionizing radiation. The doses at the hypocenters of the Hiroshima and Nagasaki atomic bombings were 240 and 290 Gy, respectively.
Skin changes
Cutaneous radiation syndrome (CRS) refers to the skin symptoms of radiation exposure. Within a few hours after irradiation, a transient and inconsistent redness (associated with itching) can occur. Then, a latent phase may occur and last from a few days up to several weeks, when intense reddening, blistering, and ulceration of the irradiated site is visible. In most cases, healing occurs by regenerative means; however, very large skin doses can cause permanent hair loss, damaged sebaceous and sweat glands, atrophy, fibrosis (mostly keloids), decreased or increased skin pigmentation, and ulceration or necrosis of the exposed tissue.
As seen at Chernobyl, when skin is irradiated with high-energy beta particles, moist desquamation (peeling of skin) and similar early effects can heal, only to be followed by the collapse of the dermal vascular system after two months, resulting in the loss of the full thickness of the exposed skin. Another example of skin loss caused by high-level radiation exposure occurred during the 1999 Tokaimura nuclear accident, in which technician Hisashi Ouchi lost most of his skin because of the very high dose of radiation he absorbed. This effect had been demonstrated previously with pig skin using high-energy beta sources at the Churchill Hospital Research Institute, in Oxford.
Cause
ARS is caused by exposure to a large dose of ionizing radiation (> ~0.1 Gy) delivered at a high dose rate (> ~0.1 Gy/h). Alpha and beta radiation have low penetrating power and are unlikely to affect vital internal organs from outside the body. Any type of ionizing radiation can cause burns, but alpha and beta radiation can only do so if radioactive contamination or nuclear fallout is deposited on the individual's skin or clothing.
Gamma and neutron radiation can travel much greater distances and penetrate the body easily, so whole-body irradiation generally causes ARS before skin effects are evident. Local gamma irradiation can cause skin effects without any sickness. In the early twentieth century, radiographers would commonly calibrate their machines by irradiating their own hands and measuring the time to onset of erythema.
Accidental
Accidental exposure may be the result of a criticality or radiotherapy accident. There have been numerous criticality accidents dating back to atomic testing during World War II, while computer-controlled radiation therapy machines such as Therac-25 played a major part in radiotherapy accidents. The latter were caused by failures in the equipment software used to monitor the radiation dose delivered. Human error has played a large part in accidental exposure incidents, including some of the criticality accidents and larger scale events such as the Chernobyl disaster. Other events have to do with orphan sources, in which radioactive material is unknowingly kept, sold, or stolen. The Goiânia accident is an example, where a forgotten radioactive source was taken from a hospital, resulting in the deaths of 4 people from ARS. Theft and attempted theft of radioactive material by thieves unaware of its nature have also led to lethal exposure in at least one incident.
Exposure may also come from routine spaceflight and solar flares that result in radiation effects on earth in the form of solar storms. During spaceflight, astronauts are exposed to both galactic cosmic radiation (GCR) and solar particle event (SPE) radiation. The exposure particularly occurs during flights beyond low Earth orbit (LEO). Evidence indicates past SPE radiation levels that would have been lethal for unprotected astronauts. GCR levels that might lead to acute radiation poisoning are less well understood. The latter cause is rarer, with an event possibly occurring during the solar storm of 1859.
Intentional
Intentional exposure is controversial as it involves the use of nuclear weapons, human experiments, or is given to a victim in an act of murder. The intentional atomic bombings of Hiroshima and Nagasaki resulted in tens of thousands of casualties; the survivors of these bombings are known today as hibakusha. Nuclear weapons emit large amounts of thermal radiation as visible, infrared, and ultraviolet light, to which the atmosphere is largely transparent. This event is also known as "flash", where radiant heat and light are bombarded into any given victim's exposed skin, causing radiation burns. Death is highly likely, and radiation poisoning is almost certain if one is caught in the open with no terrain or building masking-effects within a radius of 0–3 km from a 1 megaton airburst. The 50% chance of death from the blast extends out to ~8 km from a 1 megaton atmospheric explosion.
Scientific testing on humans within the United States occurred extensively throughout the atomic age. Experiments took place on a range of subjects including, but not limited to, the disabled, children, soldiers, and incarcerated persons, with the level of understanding and consent given by subjects varying from complete to none. Since 1997 there have been requirements for patients to give informed consent and to be notified if experiments were classified. Across the world, the Soviet nuclear program involved human experiments on a large scale, which is still kept secret by the Russian government and the Rosatom agency. The human experiments that fall under intentional ARS exclude those that involved long-term exposure. Criminal activity has involved murder and attempted murder carried out through abrupt victim contact with a radioactive substance such as polonium or plutonium.
Pathophysiology
The most commonly used predictor of ARS is the whole-body absorbed dose. Several related quantities, such as the equivalent dose, effective dose, and committed dose, are used to gauge long-term stochastic biological effects such as cancer incidence, but they are not designed to evaluate ARS. To help avoid confusion between these quantities, absorbed dose is measured in units of grays (in SI, unit symbol Gy) or rad (in CGS), while the others are measured in sieverts (in SI, unit symbol Sv) or rem (in CGS). 1 rad = 0.01 Gy and 1 rem = 0.01 Sv.
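As a quick illustration of these unit relationships, the following short Python sketch converts between the CGS and SI quantities using the conversion factors stated above (the function names are chosen only for this example):

RAD_PER_GRAY = 100.0      # 1 rad = 0.01 Gy, so 100 rad per gray (absorbed dose)
REM_PER_SIEVERT = 100.0   # 1 rem = 0.01 Sv, so 100 rem per sievert (equivalent/effective dose)

def rad_to_gray(dose_rad):
    # Convert an absorbed dose from rad (CGS) to gray (SI).
    return dose_rad / RAD_PER_GRAY

def rem_to_sievert(dose_rem):
    # Convert an equivalent or effective dose from rem (CGS) to sievert (SI).
    return dose_rem / REM_PER_SIEVERT

print(rad_to_gray(70))      # 0.7 -- the 70 rad quoted earlier equals 0.7 Gy
print(rem_to_sievert(100))  # 1.0 -- 100 rem equals 1 Sv, as in the dose-effects discussion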
In most of the acute exposure scenarios that lead to radiation sickness, the bulk of the radiation is external whole-body gamma, in which case the absorbed, equivalent, and effective doses are all equal. There are exceptions, such as the Therac-25 accidents and the 1958 Cecil Kelley criticality accident, where the absorbed doses in Gy or rad are the only useful quantities, because of the targeted nature of the exposure to the body.
Radiotherapy treatments are typically prescribed in terms of the local absorbed dose, which might be 60 Gy or higher. The dose is fractionated to about 2 Gy per day for curative treatment, which allows normal tissues to undergo repair, allowing them to tolerate a higher dose than would otherwise be expected. The dose to the targeted tissue mass must be averaged over the entire body mass, most of which receives negligible radiation, to arrive at a whole-body absorbed dose that can be compared with the whole-body thresholds discussed above.
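The fractionation and whole-body averaging described above amount to simple arithmetic: the number of fractions is the prescription divided by the daily fraction size, and the whole-body average is a mass-weighted mean. The sketch below uses the 60 Gy prescription and 2 Gy fractions quoted above; the target and body masses are illustrative assumptions, not figures from this article.

def whole_body_average_dose(local_dose_gy, target_mass_kg, body_mass_kg):
    # Mass-weighted average, assuming tissue outside the target receives negligible dose.
    return local_dose_gy * target_mass_kg / body_mass_kg

prescribed_dose_gy = 60.0   # typical curative prescription mentioned above
fraction_size_gy = 2.0      # daily fraction mentioned above
n_fractions = prescribed_dose_gy / fraction_size_gy

print(n_fractions)                               # 30.0 daily treatments
print(whole_body_average_dose(60.0, 1.0, 70.0))  # ~0.86 Gy for a hypothetical 1 kg target in a 70 kg patient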
DNA damage
Exposure to high doses of radiation causes DNA damage, later creating serious and even lethal chromosomal aberrations if left unrepaired. Ionizing radiation can produce reactive oxygen species, and does directly damage cells by causing localized ionization events. The former is very damaging to DNA, while the latter events create clusters of DNA damage. This damage includes loss of nucleobases and breakage of the sugar-phosphate backbone that binds to the nucleobases. The DNA organization at the level of histones, nucleosomes, and chromatin also affects its susceptibility to radiation damage. Clustered damage, defined as at least two lesions within a helical turn, is especially harmful. While DNA damage happens frequently and naturally in the cell from endogenous sources, clustered damage is a unique effect of radiation exposure. Clustered damage takes longer to repair than isolated breakages, and is less likely to be repaired at all. Larger radiation doses are more prone to cause tighter clustering of damage, and closely localized damage is increasingly less likely to be repaired.
Somatic mutations cannot be passed down from parent to offspring, but these mutations can propagate in cell lines within an organism. Radiation damage can also cause chromosome and chromatid aberrations, and their effects depend on the stage of the mitotic cycle the cell is in when the irradiation occurs. If the cell is in interphase, while it is still a single strand of chromatin, the damage will be replicated during the S phase of the cell cycle, and there will be a break on both chromosome arms; the damage will then be apparent in both daughter cells. If the irradiation occurs after replication, only one arm will bear the damage; this damage will be apparent in only one daughter cell. A damaged chromosome may cyclize, binding to another chromosome, or to itself.
Diagnosis
Diagnosis is typically made based on a history of significant radiation exposure and suitable clinical findings. An absolute lymphocyte count can give a rough estimate of radiation exposure. Time from exposure to vomiting can also give estimates of exposure levels if they are less than 10 Gy (1000 rad).
Prevention
A guiding principle of radiation safety is "as low as reasonably achievable" (ALARA). This means trying to avoid exposure as much as possible, and it rests on three components: time, distance, and shielding.
Time
The longer humans are subjected to radiation, the larger the dose will be. The advice in the nuclear war manual Nuclear War Survival Skills, published by Cresson Kearny in the U.S., was that if one needed to leave the shelter, this should be done as rapidly as possible to minimize exposure.
In chapter 12, he states that "[q]uickly putting or dumping wastes outside is not hazardous once fallout is no longer being deposited. For example, assume the shelter is in an area of heavy fallout and the dose rate outside is 400 roentgen (R) per hour, enough to give a potentially fatal dose in about an hour to a person exposed in the open. If a person needs to be exposed for only 10 seconds to dump a bucket, in this 1/360 of an hour he will receive a dose of only about 1 R. Under war conditions, an additional 1-R dose is of little concern." In peacetime, radiation workers are taught to work as quickly as possible when performing a task that exposes them to radiation. For instance, the recovery of a radioactive source should be done as quickly as possible.
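The arithmetic in Kearny's example is simply dose = dose rate × exposure time. A minimal Python sketch (the function name is ours, for illustration only) reproduces the figures in the quotation:

def accumulated_dose_r(dose_rate_r_per_hour, exposure_seconds):
    # Dose in roentgens accumulated at a constant dose rate.
    return dose_rate_r_per_hour * (exposure_seconds / 3600.0)

# 400 R/h outside the shelter, 10 seconds to dump the bucket:
print(round(accumulated_dose_r(400.0, 10.0), 2))  # 1.11 R, i.e. "about 1 R" as Kearny states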
Shielding
Matter attenuates radiation in most cases, so placing any mass (e.g., lead, dirt, sandbags, vehicles, water, even air) between humans and the source will reduce the radiation dose. This is not always the case, however; care should be taken when constructing shielding for a specific purpose. For example, although high atomic number materials are very effective in shielding photons, using them to shield beta particles may cause higher radiation exposure due to the production of bremsstrahlung x-rays, and hence low atomic number materials are recommended. Also, using material with a high neutron activation cross section to shield neutrons will result in the shielding material itself becoming radioactive and hence more dangerous than if it were not present.
There are many types of shielding strategies that can be used to reduce the effects of radiation exposure. Internal contamination protective equipment such as respirators are used to prevent internal deposition as a result of inhalation and ingestion of radioactive material. Dermal protective equipment, which protects against external contamination, provides shielding to prevent radioactive material from being deposited on external structures. While these protective measures do provide a barrier from radioactive material deposition, they do not shield from externally penetrating gamma radiation. This leaves anyone exposed to penetrating gamma rays at high risk of ARS.
Naturally, shielding the entire body from high energy gamma radiation is optimal, but the required mass to provide adequate attenuation makes functional movement nearly impossible. In the event of a radiation catastrophe, medical and security personnel need mobile protection equipment in order to safely assist in containment, evacuation, and many other necessary public safety objectives.
Research has been done exploring the feasibility of partial body shielding, a radiation protection strategy that provides adequate attenuation to only the most radio-sensitive organs and tissues inside the body. Irreversible stem cell damage in the bone marrow is the first life-threatening effect of intense radiation exposure and therefore one of the most important bodily elements to protect. Due to the regenerative property of hematopoietic stem cells, it is only necessary to protect enough bone marrow to repopulate the exposed areas of the body with the shielded supply. This concept allows for the development of lightweight mobile radiation protection equipment, which provides adequate protection, deferring the onset of ARS to much higher exposure doses. One example of such equipment is the 360 gamma, a radiation protection belt that applies selective shielding to protect the bone marrow stored in the pelvic area as well as other radio sensitive organs in the abdominal region without hindering functional mobility.
Reduction of incorporation
Where radioactive contamination is present, an elastomeric respirator, dust mask, or good hygiene practices may offer protection, depending on the nature of the contaminant. Potassium iodide (KI) tablets can reduce the risk of cancer in some situations by reducing the uptake of ambient radioiodine by the thyroid. Although this does not protect any organ other than the thyroid gland, their effectiveness is still highly dependent on the time of ingestion, which would protect the gland for the duration of a twenty-four-hour period. They do not prevent ARS as they provide no shielding from other environmental radionuclides.
Fractionation of dose
If an intentional dose is broken up into a number of smaller doses, with time allowed for recovery between irradiations, the same total dose causes less cell death. Even without interruptions, a reduction in dose rate below 0.1 Gy/h also tends to reduce cell death. This technique is routinely used in radiotherapy.
The human body contains many types of cells and a human can be killed by the loss of a single type of cells in a vital organ. For many short term radiation deaths (3–30 days), the loss of two important types of cells that are constantly being regenerated causes death. The loss of cells forming blood cells (bone marrow) and the cells in the digestive system (microvilli, which form part of the wall of the intestines) is fatal.
Management
Treatment usually involves supportive care, with symptomatic measures employed as needed. Supportive care may include antibiotics, blood products, colony-stimulating factors, and stem cell transplantation.
Antimicrobials
There is a direct relationship between the degree of the neutropenia that emerges after exposure to radiation and the increased risk of developing infection. Since there are no controlled studies of therapeutic intervention in humans, most of the current recommendations are based on animal research.
The treatment of established or suspected infection following exposure to radiation (characterized by neutropenia and fever) is similar to the one used for other febrile neutropenic patients. However, important differences between the two conditions exist. Individuals that develop neutropenia after exposure to radiation are also susceptible to irradiation damage in other tissues, such as the gastrointestinal tract, lungs and central nervous system. These patients may require therapeutic interventions not needed in other types of neutropenic patients. The response of irradiated animals to antimicrobial therapy can be unpredictable, as was evident in experimental studies where metronidazole and pefloxacin therapies were detrimental.
Antimicrobials that reduce the number of the strict anaerobic component of the gut flora (i.e., metronidazole) generally should not be given because they may enhance systemic infection by aerobic or facultative bacteria, thus facilitating mortality after irradiation.
An empirical regimen of antimicrobials should be chosen based on the pattern of bacterial susceptibility and nosocomial infections in the affected area and medical center and the degree of neutropenia. Broad-spectrum empirical therapy (see below for choices) with high doses of one or more antibiotics should be initiated at the onset of fever. These antimicrobials should be directed at the eradication of Gram-negative aerobic bacilli (i.e., Enterobacteriaceae, Pseudomonas) that account for more than three quarters of the isolates causing sepsis. Because aerobic and facultative Gram-positive bacteria (mostly alpha-hemolytic streptococci) cause sepsis in about a quarter of the victims, coverage for these organisms may also be needed.
A standardized management plan for people with neutropenia and fever should be devised. Empirical regimens contain antibiotics broadly active against Gram-negative aerobic bacteria (quinolones: i.e., ciprofloxacin, levofloxacin, a third- or fourth-generation cephalosporin with pseudomonal coverage: e.g., cefepime, ceftazidime, or an aminoglycoside: i.e. gentamicin, amikacin).
Prognosis
The prognosis for ARS is dependent on the exposure dose, with anything above 8 Gy being almost always lethal, even with medical care. Radiation burns from lower-level exposures usually manifest after 2 months, while reactions from the burns occur months to years after radiation treatment. Complications from ARS include an increased risk of developing radiation-induced cancer later in life. According to the controversial but commonly applied linear no-threshold model, any exposure to ionizing radiation, even at doses too low to produce any symptoms of radiation sickness, can induce cancer due to cellular and genetic damage. The probability of developing cancer is a linear function with respect to the effective radiation dose. Radiation cancer may occur after ionizing radiation exposure following a latent period averaging 20 to 40 years.
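Under the linear no-threshold model described above, excess cancer risk scales in direct proportion to effective dose. The sketch below illustrates only that proportionality; the risk coefficient is a placeholder assumption, not a value given in this article.

RISK_PER_SIEVERT = 0.05   # placeholder lifetime excess-risk coefficient, assumed for illustration

def excess_cancer_risk(effective_dose_sv):
    # Linear no-threshold model: risk is proportional to effective dose, with no safe threshold.
    return RISK_PER_SIEVERT * effective_dose_sv

print(excess_cancer_risk(0.01))  # risk at 10 mSv
print(excess_cancer_risk(0.10))  # ten times larger at 100 mSv under a purely linear model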
History
Acute effects of ionizing radiation were first observed when Wilhelm Röntgen intentionally subjected his fingers to X-rays in 1895. He published his observations concerning the burns that developed and eventually healed, and misattributed them to ozone. Röntgen believed the free radicals produced in air by X-rays from ozone were the cause, but other free radicals produced within the body are now understood to be more important. David Walsh first established the symptoms of radiation sickness in 1897.
Ingestion of radioactive materials caused many radiation-induced cancers in the 1930s, but no one was exposed to high enough doses at high enough rates to bring on ARS.
The atomic bombings of Hiroshima and Nagasaki resulted in high acute doses of radiation to a large number of Japanese people, allowing for greater insight into its symptoms and dangers. Red Cross Hospital Surgeon Terufumi Sasaki led intensive research into the syndrome in the weeks and months following the Hiroshima and Nagasaki bombings. Sasaki and his team were able to monitor the effects of radiation in patients of varying proximities to the blast itself, leading to the establishment of three recorded stages of the syndrome. Within 25–30 days of the explosion, Sasaki noticed a sharp drop in white blood cell count and established this drop, along with symptoms of fever, as prognostic standards for ARS. Actress Midori Naka, who was present during the atomic bombing of Hiroshima, was the subject of the first extensively studied case of radiation poisoning. Her death on 24 August 1945 was the first death ever to be officially certified as a result of ARS (or "Atomic bomb disease").
There are two major databases that track radiation accidents: The American ORISE REAC/TS and the European IRSN ACCIRAD. REAC/TS shows 417 accidents occurring between 1944 and 2000, causing about 3000 cases of ARS, of which 127 were fatal. ACCIRAD lists 580 accidents with 180 ARS fatalities for an almost identical period. The two deliberate bombings are not included in either database, nor are any possible radiation-induced cancers from low doses. The detailed accounting is difficult because of confounding factors. ARS may be accompanied by conventional injuries such as steam burns, or may occur in someone with a pre-existing condition undergoing radiotherapy. There may be multiple causes for death, and the contribution from radiation may be unclear. Some documents may incorrectly refer to radiation-induced cancers as radiation poisoning, or may count all overexposed individuals as survivors without mentioning if they had any symptoms of ARS.
Notable cases
The following table includes only those known for their attempted survival with ARS. These cases exclude chronic radiation syndrome, such as that of Albert Stevens, in which a subject is exposed to radiation over a long duration. The table also necessarily excludes cases where the individual was exposed to so much radiation that death occurred before medical assistance or dose estimations could be made, such as an attempted cobalt-60 thief who reportedly died 30 minutes after exposure. The result column represents the time from exposure to the time of death attributed to the short- and long-term effects of the initial exposure. As ARS is measured by a whole-body absorbed dose, the exposure column only includes units of gray (Gy).
Other animals
Thousands of scientific experiments have been performed to study ARS in animals. There is a simple guide for predicting survival and death in mammals, including humans, following the acute effects of inhaling radioactive particles.
See also
5-Androstenediol
Biological effects of ionizing radiation
Biological effects of radiation on the epigenome
CBLB502
Ex-Rad
List of civilian nuclear accidents
List of military nuclear accidents
Nuclear terrorism
Orders of magnitude (radiation)
Prehydrated electrons
Rongelap Atoll
References
This article incorporates public domain material from websites or documents of the U.S. Armed Forces Radiobiology Research Institute and the U.S. Centers for Disease Control and Prevention
External links
More information on bone marrow shielding can be found in the Health Physics Radiation Safety Journal, or in the Organisation for Economic Co-operation and Development (OECD) and the Nuclear Energy Agency (NEA)'s 2015 report, "Occupational Radiation Protection in Severe Accident Management".
Radioactive contamination
Radiology
Radiobiology
Radiation health effects
Medical emergencies
Causes of death
Effects of external causes
Syndromes affecting blood
Occupational hazards
Fear
Fear is an intensely unpleasant primal emotion in response to perceiving or recognizing a danger or threat. Fear causes psychological changes that may produce behavioral reactions such as mounting an aggressive response or fleeing the threat. Fear in human beings may occur in response to a certain stimulus occurring in the present, or in anticipation or expectation of a future threat perceived as a risk to oneself. The fear response arises from the perception of danger leading to confrontation with or escape from/avoiding the threat (also known as the fight-or-flight response), which in extreme cases of fear (horror and terror) can be a freeze response. The fear response is also implicated in a number of mental disorders, particularly anxiety disorders.
In humans and other animals, fear is modulated by the process of cognition and learning. Thus, fear is judged as rational and appropriate, or irrational and inappropriate (or unconscious). An irrational fear is called a phobia.
Fear is closely related to the emotion anxiety, which occurs as the result of often future threats that are perceived to be uncontrollable or unavoidable. The fear response serves survival by engendering appropriate behavioral responses, so it has been preserved throughout evolution. Sociological and organizational research also suggests that individuals' fears are not solely dependent on their nature but are also shaped by their social relations and culture, which guide their understanding of when and how much fear to feel.
Physiological signs
Many physiological changes in the body are associated with fear, summarized as the fight-or-flight response. An innate response for coping with danger, it works by accelerating the breathing rate (hyperventilation) and heart rate, constricting the peripheral blood vessels (leading to blood pooling), dilating the pupils, and increasing muscle tension, including in the muscles attached to each hair follicle, which contract and cause "goosebumps" (more clinically, piloerection), making a cold person warmer or a frightened animal look more impressive. Other changes include sweating, increased blood glucose (hyperglycemia), increased serum calcium, an increase in white blood cells called neutrophilic leukocytes, alertness leading to sleep disturbance, and "butterflies in the stomach" (dyspepsia). This primitive mechanism may help an organism survive by either running away or fighting the danger. With this series of physiological changes, the consciousness registers an emotion of fear.
There are observable physical reactions in individuals who experience fear. An individual might experience dizziness, lightheadedness, a choking sensation, sweating, shortness of breath, nausea or vomiting, numbness, shaking, or other similar symptoms. These bodily reactions inform the individual that they are afraid and should move away from the stimulus that is causing the fear.
Causes
An influential categorization of stimuli causing fear was proposed by psychologist Jeffrey Alan Gray; namely, intensity, novelty, special evolutionary dangers, stimuli arising during social interaction, and conditioned stimuli. Another categorization was proposed by Archer, who, besides conditioned fear stimuli, categorized fear-evoking (as well as aggression-evoking) stimuli into three groups; namely, pain, novelty, and frustration, although he also described "looming", which refers to an object rapidly moving towards the visual sensors of a subject, and can be categorized as "intensity". Russell described a more functional categorization of fear-evoking stimuli, in which for instance novelty is a variable affecting more than one category: 1) Predator stimuli (including movement, suddenness, proximity, but also learned and innate predator stimuli); 2) Physical environmental dangers (including intensity and heights); 3) Stimuli associated with increased risk of predation and other dangers (including novelty, openness, illumination, and being alone); 4) Stimuli stemming from conspecifics (including novelty, movement, and spacing behavior); 5) Species-predictable fear stimuli and experience (special evolutionary dangers); and 6) Fear stimuli that are not species predictable (conditioned fear stimuli).
Nature
Although many fears are learned, the capacity to fear is part of human nature. Many studies have found that certain fears (e.g. of animals or heights) are much more common than others (e.g. of flowers or clouds). These fears are also easier to induce in the laboratory. This phenomenon is known as preparedness. Because early humans that were quick to fear dangerous situations were more likely to survive and reproduce, preparedness is theorized to be a genetic effect that is the result of natural selection.
From an evolutionary psychology perspective, different fears may be different adaptations that have been useful in our evolutionary past. They may have developed during different time periods. Some fears, such as fear of heights, may be common to all mammals and have developed during the Mesozoic period. Other fears, such as fear of snakes, may be common to all simians and have developed during the Cenozoic era (the still-ongoing geological era encompassing the last 66 million years). Still others, such as fear of mice and insects, may be unique to humans and have developed during the Paleolithic and Neolithic periods (when mice and insects became important carriers of infectious diseases and harmful to crops and stored foods).
Conditioning
Nonhuman animals and humans develop specific fears as a result of learning. This has been studied in psychology as fear conditioning, beginning with John B. Watson's Little Albert experiment in 1920, which was inspired by observing a child with an irrational fear of dogs. In this study, an 11-month-old boy was conditioned to fear a white rat in the laboratory. The fear became generalized to include other white, furry objects, such as a rabbit, a dog, and even a Santa Claus mask with white cotton balls in the beard.
Fear can be learned by experiencing or watching a frightening traumatic accident. For example, if a child falls into a well and struggles to get out, he or she may develop a fear of wells, heights (acrophobia), enclosed spaces (claustrophobia), or water (aquaphobia). There are studies looking at areas of the brain that are affected in relation to fear. When looking at these areas (such as the amygdala), it was proposed that a person learns to fear regardless of whether they themselves have experienced trauma, or whether they have observed the fear in others. In a study by Andreas Olsson, Katherine I. Nearing and Elizabeth A. Phelps, the amygdalae were affected both when subjects observed someone else being subjected to an aversive event, knowing that the same treatment awaited themselves, and when subjects were subsequently placed in a fear-provoking situation. This suggests that fear can develop in both conditions, not just from personal history.
Fear is affected by cultural and historical context. For example, in the early 20th century, many Americans feared polio, a disease that can lead to paralysis. There are consistent cross-cultural differences in how people respond to fear. Display rules affect how likely people are to express the facial expression of fear and other emotions.
Fear of victimization is a function of perceived risk and seriousness of potential harm.
Common triggers
Phobias
According to surveys, some of the most common fears are of demons and ghosts, the existence of evil powers, cockroaches, spiders, snakes, heights, water, enclosed spaces, tunnels, bridges, needles, social rejection, failure, examinations, and public speaking.
Regionally some may more so fear terrorist attacks, death, war, criminal or gang violence, being alone, the future, nuclear war, flying, clowns, intimacy, people, and driving.
Uncertainty
Fear of the unknown, or irrational fear, is caused by negative thinking (worry) which arises from anxiety accompanied by a subjective sense of apprehension or dread. Irrational fear shares a common neural pathway with other fears, a pathway that engages the nervous system to mobilize bodily resources in the face of danger or threat. Many people are scared of the "unknown". The irrational fear can branch out to many areas such as the hereafter, the next ten years, or even tomorrow. Chronic irrational fear has deleterious effects since the eliciting stimulus is commonly absent or perceived from delusions. Such fear can create comorbidity with the anxiety disorder umbrella. Being scared may cause people to experience anticipatory fear of what may lie ahead rather than planning for and evaluating it. For example, continuing scholarly education is perceived by many educators as a risk that may cause them fear and stress, and they would rather teach things they've been taught than go and do research.
The ambiguity of situations that tend to be uncertain and unpredictable can cause anxiety, in addition to other psychological and physical problems, in some populations, especially those who encounter it constantly, for example in war-ridden places or in places of conflict, terrorism, abuse, etc. Poor parenting that instills fear can also debilitate a child's psychological development or personality. For example, parents tell their children not to talk to strangers in order to protect them. In school, children would be encouraged not to show fear in talking with strangers, but to be assertive and also aware of the risks and the environment in which the interaction takes place. Ambiguous and mixed messages like this can affect their self-esteem and self-confidence. Researchers say talking to strangers is not something to be thwarted but allowed in a parent's presence if required. Developing a sense of equanimity to handle various situations is often advocated as an antidote to irrational fear and as an essential skill by a number of ancient philosophies.
Fear of the unknown (FOTU) "may be a, or possibly the, fundamental fear" from early times when there were many threats to life.
Behavior
Although fear behavior varies from species to species, it is often divided into two main categories; namely, avoidance/flight and immobility. To these, different researchers have added different categories, such as threat display and attack, protective responses (including startle and looming responses), defensive burying, and social responses (including alarm vocalizations and submission). Finally, immobility is often divided into freezing and tonic immobility.
The decision as to which particular fear behavior to perform is determined by the level of fear as well as the specific context, such as environmental characteristics (escape route present, distance to refuge), the presence of a discrete and localized threat, the distance between threat and subject, threat characteristics (speed, size, directness of approach), the characteristics of the subject under threat (size, physical condition, speed, degree of crypsis, protective morphological structures), social conditions (group size), and the amount of experience with the type of the threat.
Mechanism
Laboratory studies with rats are often conducted to examine the acquisition and extinction of conditioned fear responses. In 2004, researchers conditioned rats (Rattus norvegicus) to fear a certain stimulus through electric shock. The researchers were then able to cause an extinction of this conditioned fear, to a point where no medications or drugs were able to further aid in the extinction process. The rats showed signs of avoidance learning rather than fear, simply avoiding the area that had brought pain to the test rats. Avoidance learning in rats is seen as a conditioned response, and therefore the behavior can be unconditioned, as supported by the earlier research.
Species-specific defense reactions (SSDRs), or avoidance learning in nature, are the specific tendencies to avoid certain threats or stimuli; they are how animals survive in the wild. Humans and animals both share these species-specific defense reactions, such as fight-or-flight, which also includes pseudo-aggression (fake or intimidating aggression) and the freeze response to threats, controlled by the sympathetic nervous system. These SSDRs are learned very quickly through social interactions between others of the same species, other species, and interaction with the environment. These acquired sets of reactions or responses are not easily forgotten. The animal that survives is the animal that already knows what to fear and how to avoid this threat. An example in humans is the reaction to the sight of a snake: many jump backwards before cognitively realizing what they are jumping away from, and in some cases it turns out to be a stick rather than a snake.
As with many functions of the brain, various regions of the brain are involved in deciphering fear in humans and other nonhuman species. The amygdala communicates in both directions with the prefrontal cortex, hypothalamus, sensory cortex, hippocampus, thalamus, septum, and brainstem. The amygdala plays an important role in SSDRs, such as via the ventral amygdalofugal pathway, which is essential for associative learning; SSDRs are learned through interaction with the environment and others of the same species. An emotional response is created only after the signals have been relayed between the different regions of the brain and the sympathetic nervous system has been activated, which controls the flight, fight, freeze, fright, and faint responses. Often a damaged amygdala can cause impairment in the recognition of fear (as in the human case of patient S.M.). This impairment can cause different species to lack the sensation of fear, and often they can become overly confident, confronting larger peers or walking up to predatory creatures.
Robert C. Bolles (1970), a researcher at the University of Washington, wanted to understand species-specific defense reactions and avoidance learning among animals, but found that the theories of avoidance learning and the tools used to measure this tendency were out of touch with the natural world. He theorized the species-specific defense reaction (SSDR). There are three forms of SSDRs: flight, fight (pseudo-aggression), or freeze. Even domesticated animals have SSDRs, and in those moments it is seen that animals revert to atavistic standards and become "wild" again. Bolles states that responses are often dependent on the reinforcement of a safety signal, and not the aversive conditioned stimuli. This safety signal can be a source of feedback or even a stimulus change. Intrinsic feedback, or information coming from within (muscle twitches, increased heart rate), is seen to be more important in SSDRs than extrinsic feedback (stimuli that come from the external environment). Bolles found that most creatures have some intrinsic set of fears that help assure survival of the species. Rats will run away from any shocking event, and pigeons will flap their wings harder when threatened. The wing flapping in pigeons and the scattered running of rats are considered species-specific defense reactions or behaviors. Bolles believed that SSDRs are conditioned through Pavlovian conditioning, and not operant conditioning; SSDRs arise from the association between environmental stimuli and adverse events. Michael S. Fanselow conducted an experiment to test some specific defense reactions; he observed that rats in two different shock situations responded differently, based on instinct or defensive topography rather than contextual information.
Species-specific defense responses are created out of fear, and are essential for survival. Rats that lack the gene stathmin show no avoidance learning, or a lack of fear, and will often walk directly up to cats and be eaten. Animals use these SSDRs to continue living, to help increase their chance of fitness, by surviving long enough to procreate. Humans and animals alike have created fear to know what should be avoided, and this fear can be learned through association with others in the community, or learned through personal experience with a creature, species, or situations that should be avoided. SSDRs are an evolutionary adaptation that has been seen in many species throughout the world including rats, chimpanzees, prairie dogs, and even humans, an adaptation created to help individual creatures survive in a hostile world.
Fear learning changes across the lifetime due to natural developmental changes in the brain. This includes changes in the prefrontal cortex and the amygdala.
The visual exploration of an emotional face does not follow a fixed pattern but is modulated by the emotional content of the face. Scheller et al. found that participants paid more attention to the eyes when recognising fearful or neutral faces, while the mouth was fixated on when happy faces were presented, irrespective of task demands and the spatial locations of face stimuli. These findings were replicated when fearful eyes were presented and when canonical face configurations were distorted for fearful, neutral and happy expressions.
Neurocircuitry in mammals
The thalamus collects sensory data from the senses
Sensory cortex receives data from the thalamus and interprets it
Sensory cortex organizes information for dissemination to the hypothalamus (fight or flight), amygdalae (fear), hippocampus (memory)
The brain structures that are the center of most neurobiological events associated with fear are the two amygdalae, located behind the pituitary gland. Each amygdala is part of a circuitry of fear learning. They are essential for proper adaptation to stress and specific modulation of emotional learning memory. In the presence of a threatening stimulus, the amygdalae generate the secretion of hormones that influence fear and aggression. Once a response to the stimulus in the form of fear or aggression commences, the amygdalae may elicit the release of hormones into the body to put the person into a state of alertness, in which they are ready to move, run, fight, etc. This defensive response is generally referred to in physiology as the fight-or-flight response regulated by the hypothalamus, part of the limbic system. Once the person is in safe mode, meaning that there are no longer any potential threats surrounding them, the amygdalae will send this information to the medial prefrontal cortex (mPFC) where it is stored for similar future situations, which is known as memory consolidation.
Some of the hormones involved during the state of fight-or-flight include epinephrine, which regulates heart rate and metabolism as well as dilating blood vessels and air passages; norepinephrine, which increases heart rate, blood flow to skeletal muscles and the release of glucose from energy stores; and cortisol, which increases blood sugar, circulating neutrophilic leukocytes, and calcium, amongst other things.
After a situation which incites fear occurs, the amygdalae and hippocampus record the event through synaptic plasticity. The stimulation to the hippocampus will cause the individual to remember many details surrounding the situation. Plasticity and memory formation in the amygdala are generated by activation of the neurons in the region. Experimental data supports the notion that synaptic plasticity of the neurons leading to the lateral amygdalae occurs with fear conditioning. In some cases, this forms permanent fear responses such as post-traumatic stress disorder (PTSD) or a phobia. MRI and fMRI scans have shown that the amygdalae in individuals diagnosed with such disorders including bipolar or panic disorder are larger and wired for a higher level of fear.
Pathogens can suppress amygdala activity. Rats infected with the toxoplasmosis parasite become less fearful of cats, sometimes even seeking out their urine-marked areas. This behavior often leads to them being eaten by cats. The parasite then reproduces within the body of the cat. There is evidence that the parasite concentrates itself in the amygdala of infected rats. In a separate experiment, rats with lesions in the amygdala did not express fear or anxiety towards unwanted stimuli. These rats pulled on levers supplying food that sometimes sent out electrical shocks. While they learned to avoid pressing on them, they did not distance themselves from these shock-inducing levers.
Several brain structures other than the amygdalae have also been observed to be activated when individuals are presented with fearful versus neutral faces, namely the occipitocerebellar regions including the fusiform gyrus and the inferior parietal / superior temporal gyri. Fearful eyes, brows and mouth seem to separately reproduce these brain responses. Studies by scientists in Zurich show that the hormone oxytocin, which is related to stress and sex, reduces activity in the brain's fear center.
Pheromones and contagion
In threatening situations, insects, aquatic organisms, birds, reptiles, and mammals emit odorant substances, initially called alarm substances, which are chemical signals now called alarm pheromones. This serves both to defend themselves and to inform members of the same species of danger, and it leads to observable behavioral changes such as freezing, defensive behavior, or dispersion, depending on the circumstances and the species. For example, stressed rats release odorant cues that cause other rats to move away from the source of the signal.
After the discovery of pheromones in 1959, alarm pheromones were first described in 1968 in ants and earthworms, and four years later also found in mammals, both mice and rats. Over the next two decades, identification and characterization of these pheromones proceeded in all manner of insects and sea animals, including fish, but it was not until 1990 that more insight into mammalian alarm pheromones was gleaned.
In 1985, a link between odors released by stressed rats and pain perception was discovered: unstressed rats exposed to these odors developed opioid-mediated analgesia. In 1997, researchers found that bees became less responsive to pain after they had been stimulated with isoamyl acetate, a chemical smelling of banana, and a component of bee alarm pheromone. The experiment also showed that the bees' fear-induced pain tolerance was mediated by an endorphin.
By using the forced swimming test in rats as a model of fear-induction, the first mammalian "alarm substance" was found. In 1991, this "alarm substance" was shown to fulfill criteria for pheromones: well-defined behavioral effect, species specificity, minimal influence of experience and control for nonspecific arousal. Rat activity testing with the alarm pheromone, and their preference/avoidance for odors from cylinders containing the pheromone, showed that the pheromone had very low volatility.
In 1993 a connection between alarm chemosignals in mice and their immune response was found. Pheromone production in mice was found to be associated with or mediated by the pituitary gland in 1994.
In 2004, it was demonstrated that rats' alarm pheromones had different effects on the "recipient" rat (the rat perceiving the pheromone) depending which body region they were released from: Pheromone production from the face modified behavior in the recipient rat, e.g. caused sniffing or movement, whereas pheromone secreted from the rat's anal area induced autonomic nervous system stress responses, like an increase in core body temperature. Further experiments showed that when a rat perceived alarm pheromones, it increased its defensive and risk assessment behavior, and its acoustic startle reflex was enhanced.
It was not until 2011 that a link between severe pain, neuroinflammation and the release of alarm pheromones in rats was found: real-time RT-PCR analysis of rat brain tissues indicated that shocking the footpad of a rat increased its production of proinflammatory cytokines in deep brain structures, namely increased IL-1β, heteronuclear corticotropin-releasing hormone and c-fos mRNA expression in both the paraventricular nucleus and the bed nucleus of the stria terminalis, and increased plasma levels of the stress hormone corticosterone.
The neurocircuit for how rats perceive alarm pheromones was shown to be related to the hypothalamus, brainstem, and amygdalae, all of which are evolutionarily ancient structures located deep inside the brain or, in the case of the brainstem, underneath it, away from the cortex, and involved in the fight-or-flight response, as is the case in humans.
Alarm pheromone-induced anxiety in rats has been used to evaluate the degree to which anxiolytics can alleviate anxiety in humans. For this, the change in the acoustic startle reflex of rats with alarm pheromone-induced anxiety (i.e. the reduction of defensiveness) has been measured. Pretreatment of rats with one of five anxiolytics used in clinical medicine was able to reduce their anxiety: namely midazolam; phenelzine, a nonselective monoamine oxidase (MAO) inhibitor; propranolol, a nonselective beta blocker; clonidine, an alpha-2 adrenergic agonist; or CP-154,526, a corticotropin-releasing hormone antagonist.
Faulty development of odor discrimination impairs the perception of pheromones and pheromone-related behavior, like aggressive behavior and mating in male rats: the enzyme mitogen-activated protein kinase 7 (MAPK7) has been implicated in regulating the development of the olfactory bulb and odor discrimination, and it is highly expressed in developing rat brains but absent in most regions of adult rat brains. Conditional deletion of the MAPK7 gene in mouse neural stem cells impairs several pheromone-mediated behaviors, including aggression and mating in male mice. These behavioral impairments were not caused by a reduction in the level of testosterone, by physical immobility, by heightened fear or anxiety, or by depression. Using mouse urine as a natural pheromone-containing solution, it has been shown that the impairment was associated with defective detection of related pheromones and with changes in the inborn preference for pheromones related to sexual and reproductive activities.
Lastly, alleviation of an acute fear response because a friendly peer (or in biological language: an affiliative conspecific) tends and befriends is called "social buffering". The term is in analogy to the 1985 "buffering" hypothesis in psychology, where social support has been proven to mitigate the negative health effects of alarm pheromone mediated distress. The role of a "social pheromone" is suggested by the recent discovery that olfactory signals are responsible in mediating the "social buffering" in male rats. "Social buffering" was also observed to mitigate the conditioned fear responses of honeybees. A bee colony exposed to an environment of high threat of predation did not show increased aggression and aggressive-like gene expression patterns in individual bees, but decreased aggression. That the bees did not simply habituate to threats is suggested by the fact that the disturbed colonies also decreased their foraging.
Biologists have proposed in 2012 that fear pheromones evolved as molecules of "keystone significance", a term coined in analogy to keystone species. Pheromones may determine species compositions and affect rates of energy and material exchange in an ecological community. Thus pheromones generate structure in a food web and play critical roles in maintaining natural systems.
Humans
Evidence of chemosensory alarm signals in humans has emerged slowly: although alarm pheromones have not been physically isolated and their chemical structures have not been identified in humans so far, there is evidence for their presence. Androstadienone, for example, a steroidal, endogenous odorant, is a pheromone candidate found in human sweat, axillary hair and plasma. The closely related compound androstenone is involved in communicating dominance, aggression or competition; studies of sex hormone influences on androstenone perception in humans showed that a high testosterone level was related to heightened androstenone sensitivity and to unhappiness in response to androstenone in men, while a high estradiol level was related to a dislike of androstenone in women.
A German study from 2006 showed that when anxiety-induced and exercise-induced human sweat from a dozen people was pooled and offered to seven study participants, five were able to olfactorily distinguish exercise-induced sweat from room air, and three of these could also distinguish exercise-induced sweat from anxiety-induced sweat. The acoustic startle reflex response to a sound when sensing anxiety sweat was larger than when sensing exercise-induced sweat, as measured by electromyography analysis of the orbital muscle, which is responsible for the eyeblink component. This showed for the first time that fear chemosignals can modulate the startle reflex in humans without emotional mediation; fear chemosignals primed the recipients' "defensive behavior" prior to their conscious attention, at the level of the acoustic startle reflex.
In analogy to the social buffering of rats and honeybees in response to chemosignals, induction of empathy by "smelling anxiety" of another person has been found in humans.
A study from 2013 provided brain imaging evidence that human responses to fear chemosignals may be gender-specific. Researchers collected alarm-induced sweat and exercise-induced sweat from donors, extracted it, pooled it, and presented it to 16 unrelated people undergoing functional brain MRI. While stress-induced sweat from males produced a comparably strong emotional response in both females and males, stress-induced sweat from females produced markedly stronger arousal in women than in men. Statistical tests pinpointed this gender-specificity to the right amygdala, strongest in the superficial nuclei. Since no significant differences were found in the olfactory bulb, the response to female fear-induced signals is likely based on processing the meaning, i.e. on the emotional level, rather than on the strength of chemosensory cues from each gender, i.e. the perceptual level.
An approach-avoidance task was set up in which volunteers seeing either an angry or a happy cartoon face on a computer screen pushed a joystick away or pulled it toward them as fast as possible. Volunteers smelling androstadienone masked with clove oil scent responded faster, especially to angry faces, than those smelling clove oil only, which was interpreted as androstadienone-related activation of the fear system. A potential mechanism of action is that androstadienone alters "emotional face processing". Androstadienone is known to influence the activity of the fusiform gyrus, which is relevant for face recognition.
Cognitive-consistency theory
Cognitive-consistency theories assume that "when two or more simultaneously active cognitive structures are logically inconsistent, arousal is increased, which activates processes with the expected consequence of increasing consistency and decreasing arousal." In this context, it has been proposed that fear behavior is caused by an inconsistency between a preferred, or expected, situation and the actually perceived situation, and functions to remove the inconsistent stimulus from the perceptual field, for instance by fleeing or hiding, thereby resolving the inconsistency. This approach puts fear in a broader perspective, also involving aggression and curiosity. When the inconsistency between perception and expectancy is small, learning as a result of curiosity reduces inconsistency by updating expectancy to match perception. If the inconsistency is larger, fear or aggressive behavior may be employed to alter the perception in order to make it match expectancy, depending on the size of the inconsistency as well as the specific context. Aggressive behavior is assumed to alter perception by forcefully manipulating it into matching the expected situation, while in some cases thwarted escape may also trigger aggressive behavior in an attempt to remove the thwarting stimulus.
Research
In order to improve our understanding of the neural and behavioral mechanisms of adaptive and maladaptive fear, investigators use a variety of translational animal models. These models are particularly important for research that would be too invasive for human studies. Rodents such as mice and rats are common animal models, but other species are used. Certain aspects of fear research, such as sex, gender, and age differences, still require further study.
Models
These animal models include, but are not limited to, fear conditioning, predator-based psychosocial stress, single prolonged stress, chronic stress models, inescapable foot/tail shocks, immobilization or restraint, and stress enhanced fear learning. While the stress and fear paradigms differ between the models, they tend to involve aspects such as acquisition, generalization, extinction, cognitive regulation, and reconsolidation.
Pavlovian
Fear conditioning, also known as Pavlovian or classical conditioning, is a process of learning that involves pairing a neutral stimulus with an unconditioned stimulus (US). A neutral stimulus is something like a bell, tone, or room that does not normally elicit a response, whereas a US is a stimulus that results in a natural or unconditioned response (UR) – in Pavlov's famous experiment the neutral stimulus was a bell and the US was food, with the dog's salivation being the UR. Pairing the neutral stimulus and the US results in the UR occurring not only with the US but also with the neutral stimulus. When this occurs, the neutral stimulus is referred to as the conditioned stimulus (CS) and the response as the conditioned response (CR). In the fear conditioning model of Pavlovian conditioning, the US is an aversive stimulus such as a shock, loud noise, or unpleasant odor.
Predator-based psychosocial stress
Predator-based psychosocial stress (PPS) involves a more naturalistic approach to fear learning. Predators such as a cat, a snake, or urine from a fox or cat are used along with other stressors such as immobilization or restraint in order to generate instinctual fear responses.
Chronic stress models
Chronic stress models include chronic variable stress, chronic social defeat, and chronic mild stress. These models are often used to study how long-term or prolonged stress/pain can alter fear learning and disorders.
Single prolonged stress
Single prolonged stress (SPS) is a fear model that is often used to study PTSD. Its paradigm involves multiple stressors, such as immobilization, a forced swim, and exposure to ether, delivered concurrently to the subject. It is used to study non-naturalistic, uncontrollable situations that can cause the maladaptive fear responses seen in many anxiety and trauma-based disorders.
Stress enhanced fear learning
Stress enhanced fear learning (SEFL), like SPS, is often used to study the maladaptive fear learning involved in PTSD and other trauma-based disorders. SEFL involves a single extreme stressor, such as a large number of footshocks, simulating a single traumatic event that enhances and alters future fear learning.
Management
Pharmaceutical
A drug treatment for fear conditioning and phobias via the amygdalae is the use of glucocorticoids. In one study, glucocorticoid receptors in the central nuclei of the amygdalae were disrupted in order to better understand the mechanisms of fear and fear conditioning. The glucocorticoid receptors were inhibited using lentiviral vectors containing Cre-recombinase injected into mice. Results showed that disruption of the glucocorticoid receptors prevented conditioned fear behavior. The mice were subjected to auditory cues which caused them to freeze normally. A reduction of freezing was observed in the mice that had inhibited glucocorticoid receptors.
Psychological
Cognitive behavioral therapy has been successful in helping people overcome their fear. Because fear is more complex than just forgetting or deleting memories, an active and successful approach involves people repeatedly confronting their fears. By confronting their fears in a safe manner a person can suppress the "fear-triggering memories" or stimuli.
Exposure therapy is known to have helped up to 90% of people with specific phobias to significantly decrease their fear over time.
Another psychological treatment is systematic desensitization, a type of behavior therapy used to remove the fear response and replace it; the replacement is relaxation and occurs through conditioning. Through conditioning treatments, muscle tension lessens and deep breathing techniques aid in de-tensioning.
Literary and religious
There are other methods for treating or coping with one's fear, such as writing down rational thoughts regarding fears. Journal entries are a healthy method of expressing one's fears without compromising safety or causing uncertainty. Another suggestion is a fear ladder. To create a fear ladder, one must write down all of their fears and score them on a scale of one to ten. Next, the person addresses their phobia, starting with the lowest number.
Religion can help some individuals cope with fear.
Incapability
People who have damage to their amygdalae, which can be caused by a rare genetic disease known as Urbach–Wiethe disease, are unable to experience fear. The disease destroys both amygdalae in late childhood. Since the discovery of the disease, there have only been 400 recorded cases. A lack of fear can allow someone to get into a dangerous situation they otherwise would have avoided.
Society and culture
Death
The fear of the end of life and of one's existence is, in other words, the fear of death. Historically, attempts were made to reduce this fear through rituals, which helped collect and preserve the cultural ideas we hold today. The results and methods of human existence changed at the same time as social formations changed.
When people are faced with their own thoughts of death, they either accept that they are dying or will die because they have lived a full life, or they experience fear. The terror management theory was developed in response to this. The theory states that a person's cultural worldviews (religion, values, etc.) will mitigate the terror associated with the fear of death through avoidance. To help manage their terror, people find solace in their death-denying beliefs, such as their religion. Another way people cope with death-related fears is by pushing thoughts of death into the future or by avoiding these thoughts altogether through distractions. Although there are methods of coping with the terror associated with the fear of death, not everyone experiences these same uncertainties. People who believe they have lived life to the "fullest" typically do not fear death.
Death anxiety is multidimensional; it covers "fears related to one's own death, the death of others, fear of the unknown after death, fear of obliteration, and fear of the dying process, which includes fear of a slow death and a painful death".
The Yale philosopher Shelly Kagan examined fear of death in a 2007 Yale open course by examining the following questions: Is fear of death a reasonable and appropriate response? What conditions are required, and what conditions are appropriate, for feeling fear of death? What is meant by fear, and how much fear is appropriate? According to Kagan, for fear in general to make sense, three conditions should be met:
the object of fear needs to be "something bad"
there needs to be a non-negligible chance that the bad state of affairs will happen
there needs to be some uncertainty about the bad state of affairs
The amount of fear should be appropriate to the size of "the bad". If the three conditions are not met, fear is an inappropriate emotion. He argues that death does not meet the first two criteria, even if death is a "deprivation of good things" and even if one believes in a painful afterlife. Because death is certain, it also does not meet the third criterion, but he grants that the unpredictability of when one dies may be cause for a sense of fear.
In a 2003 study of 167 women and 121 men, aged 65–87, low self-efficacy predicted fear of the unknown after death and fear of dying for women and men better than demographics, social support, and physical health. Fear of death was measured by a "Multidimensional Fear of Death Scale" which included the 8 subscales Fear of Dying, Fear of the Dead, Fear of Being Destroyed, Fear for Significant Others, Fear of the Unknown, Fear of Conscious Death, Fear for the Body After Death, and Fear of Premature Death. In hierarchical multiple regression analysis, the most potent predictors of death fears were low "spiritual health efficacy", defined as beliefs relating to one's perceived ability to generate spiritually based faith and inner strength, and low "instrumental efficacy", defined as beliefs relating to one's perceived ability to manage activities of daily living.
Psychologists have tested the hypotheses that fear of death motivates religious commitment, and that assurances about an afterlife alleviate the fear, with equivocal results. Religiosity can be related to fear of death when the afterlife is portrayed as time of punishment. "Intrinsic religiosity", as opposed to mere "formal religious involvement", has been found to be negatively correlated with death anxiety. In a 1976 study of people of various Christian denominations, those who were most firm in their faith, who attended religious services weekly, were the least afraid of dying. The survey found a negative correlation between fear of death and "religious concern".
In a 2006 study of white, Christian men and women the hypothesis was tested that traditional, church-centered religiousness and de-institutionalized spiritual seeking are ways of approaching fear of death in old age. Both religiousness and spirituality were related to positive psychosocial functioning, but only church-centered religiousness protected subjects against the fear of death.
Religion
Statius in the Thebaid (Book 3, line 661) aired the irreverent suggestion that "fear first made gods in the world".
From a Christian theological perspective, the word fear can encompass more than simple dread. Robert B. Strimple says that fear includes the "... convergence of awe, reverence, adoration, humility..". Some translations of the Bible, such as the New International Version, sometimes express the concept of fear with the word reverence.
A similar phrase, "God-fearing", is sometimes used as a rough synonym for "pious". It is a standard translation for the Arabic word taqwa ("forbearance, restraint") in Muslim contexts. In Judaism, "fear of God" describes obedience to Jewish law even when one's actions are invisible to others.
Manipulation
Fear may be politically and culturally manipulated to persuade the citizenry of ideas which would otherwise be widely rejected, or to dissuade the citizenry from ideas which would otherwise be widely supported. In contexts of disaster, nation-states manage fear not only to provide their citizens with an explanation of the event or to place blame on some minorities, but also to adjust their citizens' previous beliefs.
Fear can alter how a person thinks or reacts to situations because fear has the power to inhibit one's rational way of thinking. As a result, people who do not experience fear are able to use fear as a tool to manipulate others. People who are experiencing fear seek preservation through safety and can be manipulated by a person who is there to provide the safety that is being sought. "When we're afraid, a manipulator can talk us out of the truth we see right in front of us. Words become more real than reality." In this way, a manipulator can use our fear to manipulate us out of the truth and instead make us believe and trust in their truth. Politicians are notorious for using fear to manipulate people into supporting their policies. This strategy taps into primal human emotions, leveraging fear of the unknown, external threats, or perceived dangers to influence decision-making.
Fiction and mythology
Fear is found and reflected in mythology and folklore as well as in works of fiction such as novels and films.
Works of dystopian and (post)apocalyptic fiction convey the fears and anxieties of societies.
The fear of the world's end is about as old as civilization itself. In a 1967 study, Frank Kermode suggested that the failure of religious prophecies led to a shift in how society apprehends this ancient mode. Scientific and critical thought supplanting religious and mythical thought, as well as public emancipation, may be the cause of eschatology being replaced by more realistic scenarios. Such scenarios might constructively provoke discussion and steps to prevent the depicted catastrophes.
The Story of the Youth Who Went Forth to Learn What Fear Was is a German fairy tale dealing with the topic of not knowing fear.
Many stories also include characters who fear the antagonist of the plot. One important characteristic of historical and mythical heroes across cultures is to be fearless in the face of big and often lethal enemies.
The Magnus Archives is a fiction horror podcast written by Jonathan Sims and directed by Alexander J. Newall that, among other things, formulates an archetypal ontology of fear through the dissemination of case files at a paranormal research institute set in a world where the metaphysical basis of paranormal activity and unexplainable horrors is fear incarnate. The diegesis states that true categorization of fear is impossible, that fear is all one unknowable thing; however, there exists an ontological structure of fear archetypes in this universe proposed by a fictional version of the architect Robert Smirke. It is a unique construction of fear in that it is not based on the science or neurology of fear, but on thematic and experiential connections between different phobias. For example, the fear of disease and vermin comes from the same place as the fear of abusive relationships, as both lie in fearing corruptions to the self. The final season of the podcast consists almost entirely of poetic meditations on the nature of fear.
Athletics
In the world of athletics, fear is often used as a means of motivation not to fail. This involves using fear in a way that increases the chances of a positive outcome. In this case, the fear that is created is initially a cognitive state in the receiver. This initial state generates the athlete's first response, which creates the possibility of a fight-or-flight reaction by the athlete (receiver), which in turn increases or decreases the possibility of success or failure in the given situation. The amount of time the athlete has to make this decision is small, but it is still enough time for the receiver to make a determination through cognition. Although the decision is made quickly, it is determined by past events that the athlete has experienced. The results of these past events determine how the athlete makes the cognitive decision in the split second that he or she has.
Fear of failure as described above has been studied frequently in the field of sport psychology. Many scholars have tried to determine how often fear of failure is triggered within athletes, as well as what personalities of athletes most often choose to use this type of motivation. Studies have also been conducted to determine the success rate of this method of motivation.
Murray's Explorations in Personality (1938) was one of the first studies to identify fear of failure as an actual motive to avoid failure or to achieve success. His studies suggested that infavoidance, the need to avoid failure, was found in many college-aged men during the time of his research in 1938. This was a monumental finding in the field of psychology because it allowed other researchers to better clarify how fear of failure can be a determinant of creating achievement goals as well as how it could be used in the actual act of achievement.
In the context of sport, a model was created by R.S. Lazarus in 1991 that uses the cognitive-motivational-relational theory of emotion.
Another study, done in 2001 by Conroy, Poczwardowski, and Henschen, identified five aversive consequences of failing that have been repeated over time. The five categories are (a) experiencing shame and embarrassment, (b) devaluing one's self-estimate, (c) having an uncertain future, (d) important others losing interest, and (e) upsetting important others. These five categories can help one infer the likelihood that an individual will associate failure with one of these threat categories, which will lead them to experience fear of failure.
In summary, the two studies that were done above created a more precise definition of fear of failure, which is "a dispositional tendency to experience apprehension and anxiety in evaluative situations because individuals have learned that failure is associated with aversive consequences".
The author and internet content creator John Green wrote about "the yips" (a common colloquialism for a debilitating, often chronic manifestation of athletic anxiety experienced by some professional athletes) in an essay for his podcast and book The Anthropocene Reviewed. Green discusses famous examples of athletic anxiety ruining careers and juxtaposes it with the nature of general anxiety as a whole. Green settles, however, on a conclusion for the essay evoking resilience and hope in the human condition by describing how the baseball player Rick Ankiel restarted his career in the minor leagues as an outfielder after getting the yips as a major league pitcher.
See also
Phobia
Appeal to fear
Culture of fear
Ecology of fear
Hysteria
Nightmare
Night terror
Ontogenetic parade
Panic attack
Paranoia
Phobophobia
Psychological trauma
Social anxiety disorder
Social anxiety
Stockholm syndrome
Voodoo death
Anger
Trauma
Sensory overload
Fight-or-flight response
References
Further reading
External links
Emotions
Evolutionary psychology
Wernicke encephalopathy
Wernicke encephalopathy (WE), also Wernicke's encephalopathy, or wet brain, is the presence of neurological symptoms caused by biochemical lesions of the central nervous system after exhaustion of B-vitamin reserves, in particular thiamine (vitamin B1). The condition is part of a larger group of thiamine deficiency disorders that includes beriberi, in all its forms, and alcoholic Korsakoff syndrome. When it occurs simultaneously with alcoholic Korsakoff syndrome it is known as Wernicke–Korsakoff syndrome.
Classically, Wernicke encephalopathy is characterised by a triad of symptoms: ophthalmoplegia, ataxia, and confusion. Around 10% of patients exhibit all three features, and other symptoms may also be present. While it is commonly regarded as a condition particular to malnourished people with alcohol misuse, it can be caused by a variety of diseases.
It is treated with thiamine supplementation, which can lead to improvement of the symptoms and often complete resolution, particularly in those where alcohol misuse is not the underlying cause. Often other nutrients also need to be replaced, depending on the cause. Medical literature notes how managing the condition in a timely fashion can avoid worsening symptoms.
Wernicke encephalopathy may be present in the general population with a prevalence of around 2%, and is considered underdiagnosed; many cases probably occur in patients who lack the commonly associated symptoms.
Signs and symptoms
The classic triad of symptoms found in Wernicke encephalopathy is:
ophthalmoplegia (later expanded to other eye movement disorders, most commonly affecting the lateral rectus muscle. Lateral nystagmus is most commonly seen although lateral rectus palsy, usually bilateral, may be seen).
ataxia (later expanded to imbalance or any cerebellar signs)
confusion (later expanded to other mental changes; present in 82% of diagnosed cases)
Other symptoms found in patients with this condition include:
pupillary changes, retinal hemorrhage, papilledema, impaired vision and hearing, vision loss
hearing loss,
fatigability, apathy, irritability, drowsiness, psychomotor slowing
dysphagia, blush, sleep apnea, epilepsy and stupor
lactic acidosis
memory impairment, amnesia, depression, psychosis
hypothermia, polyneuropathy, hyperhidrosis.
Although hypothermia is usually diagnosed with a body temperature of 35 °C (95 °F) or less, incipient cooling caused by dysregulation in the central nervous system (CNS) needs to be monitored because it can promote the development of an infection. The patient may report feeling cold, followed by mild chills, cold skin, moderate pallor, tachycardia, hypertension, tremor or piloerection. External warming techniques are advised to prevent hypothermia.
Among the frequently altered functions are the cardiocirculatory ones. There may be tachycardia, dyspnea, chest pain, orthostatic hypotension, and changes in heart rate and blood pressure. The lack of thiamine sometimes affects another major energy consumer, the myocardium, and patients may also have developed cardiomegaly. Heart failure with lactic acidosis syndrome has been observed. Cardiac abnormalities are an aspect of WE that was not included in the traditional approach, and they are not classified as a separate disease.
Infections have been pointed out as one of the most frequent triggers of death in WE. Furthermore, infections are usually present in pediatric cases.
In the last stage other symptoms may occur: hyperthermia, increased muscle tone, spastic paralysis, choreic dyskinesias and coma.
Because of the frequent involvement of heart, eyes and peripheral nervous system, several authors prefer to call it Wernicke disease rather than simply encephalopathy.
Early symptoms are nonspecific, and it has been stated that WE may present with nonspecific findings. In Wernicke–Korsakoff syndrome, some individual symptoms are present in about one-third of cases.
Location of the lesion
Depending on the location of the brain lesion, different symptoms are more frequent:
Brainstem tegmentum: ocular signs, including pupillary changes, extraocular muscle palsy, gaze palsy and nystagmus.
Hypothalamus and medulla (dorsal nucleus of the vagus): autonomic dysfunction involving temperature, cardiocirculatory and respiratory regulation.
Medulla (vestibular region) and cerebellum: ataxia.
Dorsomedial nucleus of the thalamus and mammillary bodies: amnestic syndrome for recent memory. Mammillary lesions are characteristic; small petechial hemorrhages are found.
Diffuse cerebral dysfunction: altered cognition, a global confusional state.
Brainstem periaqueductal gray: reduction of consciousness.
Hypothalamic lesions may also affect the immune system, which is known in people who consume excessive amounts of alcohol, causing dysplasias and infections.
Korsakoff syndrome
Korsakoff syndrome, characterised by memory impairment, confabulation, confusion and personality changes, has a strong and recognised link with WE. A very high percentage of patients with Wernicke–Korsakoff syndrome also have peripheral neuropathy, and many people who consume excess alcohol have this neuropathy without other neurologic signs or symptoms. Korsakoff's occurs much more frequently in WE due to chronic alcoholism. It is uncommon among those who do not consume excessive amounts of alcohol. Up to 80% of WE patients who misuse alcohol develop Korsakoff's syndrome. In Korsakoff's, atrophy of the thalamus and the mammillary bodies, with frontal lobe involvement, is usually observed. In one study, half of Wernicke–Korsakoff cases had good recovery from the amnesic state, which may take from 2 months to 10 years.
Risk factors
Wernicke encephalopathy has classically been thought of as a disease solely of people who drink excessive amounts of alcohol, but it is also found in the chronically undernourished, and in recent years has been described after bariatric surgery. Without being exhaustive, the documented causes of Wernicke encephalopathy include:
pancreatitis, liver dysfunction, chronic diarrhea, celiac disease, Crohn's disease, uremia, thyrotoxicosis
vomiting, hyperemesis gravidarum, malabsorption, gastrointestinal surgery or diseases
incomplete parenteral nutrition, starvation/fasting
chemotherapy, renal dialysis, diuretic therapy, stem cell/marrow transplantation
cancer, AIDS, Creutzfeldt–Jakob disease, febrile infections
The disease may even occur in some people with normal, or even high, blood thiamine levels, such as people with deficiencies in intracellular transport of this vitamin. Certain genetic mutations may also predispose to it, including presence of the X-linked transketolase-like 1 gene, SLC19A2 thiamine transporter protein mutations, and the aldehyde dehydrogenase-2 gene, which may predispose to alcohol use disorder. The APOE epsilon-4 allele, involved in Alzheimer's disease, may increase the chance of developing neurological symptoms.
Pathophysiology
Thiamine deficiency and errors of thiamine metabolism are believed to be the primary cause of Wernicke encephalopathy. Thiamine, also called B1, helps to break down glucose. Specifically, it acts as an essential coenzyme to the TCA cycle and the pentose phosphate shunt. Thiamine is first metabolised to its more active form, thiamine diphosphate (TDP), before it is used. The body only has 2–3 weeks of thiamine reserves, which are readily exhausted without intake, or if depletion occurs rapidly, such as in chronic inflammatory states or in diabetes. Thiamine is involved in:
Metabolism of carbohydrates, releasing energy.
Production of neurotransmitters including glutamic acid and GABA.
Lipid metabolism, necessary for myelin production.
Amino acid modification. Probably linked to the production of taurine, of great cardiac importance.
Neuropathology
The primary neurological-related injury caused by thiamine deficiency in WE is three-fold: oxidative damage, mitochondrial injury leading to apoptosis, and directly stimulating a pro-apoptotic pathway. Thiamine deficiency affects both neurons and astrocytes, glial cells of the brain. Thiamine deficiency alters the glutamate uptake of astrocytes, through changes in the expression of astrocytic glutamate transporters EAAT1 and EAAT2, leading to excitotoxicity. Other changes include those to the GABA transporter subtype GAT-3, GFAP, glutamine synthetase, and the Aquaporin 4 channel. Focal lactic acidosis also causes secondary oedema, oxidative stress, inflammation and white matter damage.
Pathological anatomy
Despite its name, WE is not related to Wernicke's area, a region of the brain associated with speech and language interpretation.
Brain lesions in WE are usually credited to focal lactic acidosis. An absence of thiamine can lead to too much pyruvate within the cells since it is not available to help convert pyruvate through the TCA cycle. An increase in pyruvate causes an increase in lactate concentration leading to focal lactic acidosis.
Lesions can be reversed in most cases with immediate supplementation of thiamine.
Lesions are usually symmetrical in the periventricular region, diencephalon, midbrain, hypothalamus, and cerebellar vermis. Brainstem lesions may include the cranial nerve III, IV, VI and VIII nuclei, the medial thalamic nuclei, and the dorsal nucleus of the vagus nerve. Oedema may be found in the regions surrounding the third and fourth ventricles, with petechiae and small hemorrhages also appearing. Chronic cases can show atrophy of the mammillary bodies.
In 1949, the idea that WE lesions result from a disruption of the blood–brain barrier was introduced. Large proteins passing into the brain can put neurological tissue at risk of toxic effects. WE lesions are typically found at the regions of the brain where blood–brain barrier junctions are located.
An altered blood–brain barrier may cause a perturbed response to certain drugs and foods.
Diagnosis
Diagnosis of Wernicke encephalopathy or disease is made clinically. Caine et al. in 1997 established criteria by which Wernicke encephalopathy can be diagnosed in any patient with just two or more of the main symptoms noted above. The sensitivity of diagnosis by the classic triad was 23%, but increased to 85% when two or more of the four classic features were required. These criteria have been challenged because all the cases studied were people who drank excessive amounts of alcohol. Some consider it sufficient to suspect the presence of the disease with only one of the principal symptoms. Some British hospital protocols suspect WE with any one of these symptoms: confusion, decreased consciousness level (or unconsciousness, stupor or coma), memory loss, ataxia or unsteadiness, ophthalmoplegia or nystagmus, and unexplained hypotension with hypothermia. The presence of only one sign should be sufficient for treatment.
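The two-or-more rule above can be expressed as a simple counting check. The following Python snippet is a minimal illustrative sketch, not a clinical tool: the four feature names are assumptions standing in for the "four classic features", and the function simply counts how many are recorded as present.

```python
# Illustrative sketch of a "two or more of four features" rule, as described
# for the Caine et al. criteria above. Feature names are assumptions; this is
# not a validated diagnostic instrument.

def meets_two_or_more(features: dict) -> bool:
    """Return True if at least two of the recorded features are present."""
    return sum(bool(present) for present in features.values()) >= 2

# Hypothetical patient record (booleans for each assumed feature).
patient = {
    "dietary_deficiency": True,
    "eye_signs": False,
    "cerebellar_signs": True,
    "altered_mental_state_or_memory_impairment": False,
}

print(meets_two_or_more(patient))  # True -> WE should be suspected and treated
```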
The sensitivity of magnetic resonance imaging (MRI) was 53% and the specificity was 93%. Reversible cytotoxic edema was considered the most characteristic lesion of WE. The location of the lesions was more frequently atypical among people who did not misuse alcohol, while typical contrast enhancement in the thalamus and the mammillary bodies was frequently observed in association with alcohol misuse. These abnormalities may include:
Dorsomedial thalami, periaqueductal gray matter, mamillary bodies, tectal plate and brainstem nuclei are commonly affected. Involvement is always bilateral and symmetric. Value of DWI in the diagnosis of WE is minimal. Axial FLAIR MRI images represent the best diagnostic MRI sequence. Contrast material may highlight involvement of the mamillary bodies.
There appears to be very little value for CT scans.
Thiamine can be measured using an erythrocyte transketolase activity assay, with activation, or by measurement of thiamine diphosphate levels in vitro. Normal thiamine levels do not necessarily rule out the presence of WE, as the patient may have difficulties with intracellular transport.
Prevention
There are hospital protocols for prevention, supplementing with thiamine in the presence of: history of alcohol misuse or related seizures, requirement for IV glucose, signs of malnutrition, poor diet, recent diarrhea or vomiting, peripheral neuropathy, intercurrent illness, delirium tremens or treatment for DTs, and others.
Some experts advise parenteral thiamine should be given to all at-risk patients in the emergency department.
In clinical diagnosis it should be remembered that early symptoms are nonspecific, and it has been stated that WE may present with nonspecific findings. There is consensus to provide water-soluble vitamins and minerals after gastric operations.
In some countries certain foods have been supplemented with thiamine, and this has reduced WE cases. Improvement is difficult to quantify because several different measures were applied at the same time. Avoiding or moderating alcohol consumption and having adequate nutrition reduces one of the main risk factors for developing Wernicke–Korsakoff syndrome.
Treatment
Most symptoms will improve quickly if deficiencies are treated early. Memory disorder may be permanent.
In patients suspected of WE, thiamine treatment should be started immediately. Blood should be taken immediately to test for levels of thiamine and other vitamins and minerals. Following this, an immediate intravenous or intramuscular dose of thiamine should be administered two or three times daily. Thiamine administration is usually continued until clinical improvement ceases.
Considering the diversity of possible causes and the sometimes surprising symptomatologic presentations, and because thiamine has a low assumed risk of toxicity and the therapeutic response is often dramatic from the first day, some qualified authors recommend parenteral thiamine whenever WE is suspected, both as a diagnostic resource and as treatment. The diagnosis is strongly supported by a response to parenteral thiamine, but it is not excluded by the lack of a response. Parenteral thiamine administration is associated with a very small risk of anaphylaxis.
People who consume excessive amounts of alcohol may have poor dietary intakes of several vitamins, and impaired thiamine absorption, metabolism, and storage; they may thus require higher doses.
If glucose is given, such as in people with an alcohol use disorder who are also hypoglycaemic, thiamine must be given concurrently. If this is not done, the glucose will rapidly consume the remaining thiamine reserves, exacerbating this condition.
The observation of edema on MR imaging, and also the finding of inflammation and macrophages in necropsied tissues, has led to successful administration of anti-inflammatories.
Other nutritional abnormalities should also be looked for, as they may be exacerbating the disease. In particular, magnesium deficiency, magnesium being a cofactor of transketolase, may induce or aggravate the disease.
Other supplements may also be needed, including: cobalamin, ascorbic acid, folic acid, nicotinamide, zinc, phosphorus (dicalcium phosphate) and, in some cases, taurine, which is especially suitable when there is cardiocirculatory impairment.
Patient-guided nutrition is suggested. In patients with Wernicke–Korsakoff syndrome, even higher doses of parenteral thiamine are recommended. Concurrent toxic effects of alcohol should also be considered.
Epidemiology
There are no conclusive statistical studies, all figures are based on partial studies.
Wernicke's lesions were observed in 0.8 to 2.8% of the general population autopsies, and 12.5% of people with an alcohol use disorder. This figure increases to 35% of such individuals if including cerebellar damage due to lack of thiamine.
Most autopsy cases were from people with an alcohol use disorder. Autopsy series were performed in hospitals on the material available, which is unlikely to be representative of the entire population. Considering that milder involvement precedes the generation of lesions observable at necropsy, the percentage should be higher. There is evidence to indicate that Wernicke encephalopathy is underdiagnosed. For example, in one 1986 study, 80% of cases were diagnosed postmortem. It is estimated that only 5–14% of patients with WE are diagnosed in life.
In a series of autopsy studies held in Recife, Brazil, it was found that only 7 out of 36 cases had consumed excessive amounts of alcohol, and only a small minority had malnutrition. In a review of 53 published case reports from 2001 to 2011, the relationship with alcohol was also about 20% (10 out of 53 cases).
WE related to alcohol misuse is more common in males, while WE not related to alcohol misuse is more common in females. In alcohol-related cases, WE patients average 40 years of age, and non-alcohol-related cases typically occur in younger people.
History
WE was first identified in 1881 by the German neurologist Carl Wernicke, although the link with thiamine was not identified until the 1930s.
Carl Wernicke discovered the sensory center of speech. He realised that Broca's area was not the only center of speech, and he was also able to distinguish motor aphasia from sensory aphasia. He also pointed to the possibility of conduction aphasia, since he came to understand the arrangement of the brain's extrinsic and intrinsic connections. He demonstrated that sensory information reached its corresponding area in the cerebral cortex through projection fibers. From there, this information, following the association system, would be distributed to different regions of the cortex, integrating sensory processing.
He reported three patients with WE, including two men (aged 33 and 36) who were alcoholics and one woman (aged 20) who ingested sulfuric acid, leading to pyloric stenosis. All three had ocular motor abnormalities and he performed an autopsy on each, providing a clinical-pathological correlation.
A similar presentation of this disease was described by the Russian psychiatrist Sergei Korsakoff in a series of articles published in 1887–1891, in which the chronic version of WE was described as Korsakoff's syndrome, involving symptoms of amnesia.
References
External links
Alcohol and health
Malnutrition
Central nervous system disorders
Vitamin deficiencies
Thiamine
Medical triads
Hemolytic–uremic syndrome
Hemolytic–uremic syndrome (HUS) is a group of blood disorders characterized by low red blood cells, acute kidney injury (previously called acute renal failure), and low platelets. Initial symptoms typically include bloody diarrhea, fever, vomiting, and weakness. Kidney problems and low platelets then occur as the diarrhea progresses. Children are more commonly affected, but most children recover without permanent damage to their health, although some may have serious and sometimes life-threatening complications. Adults, especially the elderly, may have a more complicated presentation. Complications may include neurological problems and heart failure.
Most cases occur after infectious diarrhea due to a specific type of E. coli called O157:H7. Other causes include S. pneumoniae, Shigella, Salmonella, and certain medications. The underlying mechanism typically involves the production of Shiga toxin by the bacteria. Atypical hemolytic uremic syndrome (aHUS) is often due to a genetic mutation and presents differently. However, both can lead to widespread inflammation and multiple blood clots in small blood vessels, a condition known as thrombotic microangiopathy.
Treatment involves supportive care and may include dialysis, steroids, blood transfusions, or plasmapheresis. About 1.5 per 100,000 people are affected per year. Less than 5% of those with the condition die. Of the remainder, up to 25% have ongoing kidney problems. HUS was first defined as a syndrome in 1955.
Signs and symptoms
After eating contaminated food, the first symptoms of infection can emerge anywhere from 1 to 10 days later, but usually after 3 to 4 days. These early symptoms can include diarrhea (which is often bloody), stomach cramps, mild fever, or vomiting that results in dehydration and reduced urine. HUS typically develops about 5–10 days after the first symptoms, but can take up to 3 weeks to manifest, and occurs at a time when the diarrhea is improving. Related symptoms and signs include lethargy, decreased urine output, blood in the urine, kidney failure, low platelets, (which are needed for blood clotting), and destruction of red blood cells (microangiopathic hemolytic anemia). High blood pressure, jaundice (a yellow tinge in skin and the whites of the eyes), seizures, and bleeding into the skin can also occur. In some cases, there are prominent neurologic changes.
People with HUS commonly exhibit the symptoms of thrombotic microangiopathy (TMA), which can include abdominal pain, low platelet count, elevated lactate dehydrogenase (LDH, an enzyme released from damaged cells and therefore a marker of cellular damage), decreased haptoglobin (indicative of the breakdown of red blood cells), anemia (low red blood cell count), schistocytes (damaged red blood cells), elevated creatinine (a protein waste product generated by muscle metabolism and eliminated renally), proteinuria (indicative of kidney injury), confusion, fatigue, swelling, nausea/vomiting, and diarrhea. Additionally, patients with aHUS typically present with an abrupt onset of systemic signs and symptoms such as acute kidney failure, hypertension (high blood pressure), myocardial infarction (heart attack), stroke, lung complications, pancreatitis (inflammation of the pancreas), liver necrosis (death of liver cells or tissue), encephalopathy (brain dysfunction), seizure, and coma. Failure of neurologic, cardiac, renal, and gastrointestinal (GI) organs, as well as death, can occur unpredictably at any time, either very quickly or following prolonged symptomatic or asymptomatic disease progression.
Cause
Typical HUS
Shiga-toxin producing E. coli (STEC) HUS occurs after ingestion of a strain of bacteria expressing Shiga toxin such as enterohemorrhagic Escherichia coli (EHEC), of which E. coli O157:H7 is the most common serotype.
Atypical HUS
Atypical HUS (aHUS) represents 5–10% of HUS cases and is largely due to one or several genetic mutations that cause chronic, uncontrolled, and excessive activation of the complement system, which is a group of immune signaling factors that promote inflammation, enhance the ability of antibodies and phagocytic cells to clear microbes and damaged cells from the body, and directly attack the pathogen's cell membrane. This results in platelet activation, endothelial cell damage, and white blood cell activation, leading to systemic TMA, which manifests as decreased platelet count, hemolysis (breakdown of red blood cells), damage to multiple organs, and ultimately death. Early signs of systemic complement-mediated TMA include thrombocytopenia (platelet count below 150,000 or a decrease from baseline of at least 25%) and evidence of microangiopathic hemolysis, which is characterized by elevated LDH levels, decreased haptoglobin, decreased hemoglobin (the oxygen-containing component of blood), and/or the presence of schistocytes. Despite the use of supportive care, an estimated 33–40% of patients will die or have end-stage renal disease (ESRD) with the first clinical manifestation of aHUS, and 65% of patients will die, require dialysis, or have permanent renal damage within the first year after diagnosis despite plasma exchange or plasma infusion (PE/PI) therapy. Patients who survive the presenting signs and symptoms of aHUS endure a chronic thrombotic and inflammatory state, which puts them at lifelong elevated risk of sudden blood clotting, kidney failure, other severe complications and premature death.
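The early laboratory signs quoted above (a platelet count below 150,000 per microliter or a fall of at least 25% from baseline, plus evidence of microangiopathic hemolysis) can be written down as a short rule. The Python sketch below is an illustrative simplification under stated assumptions: the function names, the unit handling, and the choice to treat any single hemolysis marker as sufficient are assumptions for illustration, not a diagnostic instrument.

```python
# Illustrative encoding of the early TMA laboratory signs quoted in the text.
# Simplified for illustration; not a clinical decision tool.
from typing import Optional

def thrombocytopenia(platelets_per_uL: float,
                     baseline_per_uL: Optional[float] = None) -> bool:
    """Platelet count below 150,000/uL, or a drop of at least 25% from baseline."""
    if platelets_per_uL < 150_000:
        return True
    return baseline_per_uL is not None and platelets_per_uL <= 0.75 * baseline_per_uL

def microangiopathic_hemolysis(elevated_ldh: bool, low_haptoglobin: bool,
                               low_hemoglobin: bool, schistocytes_present: bool) -> bool:
    """Treat any listed marker as evidence of hemolysis (an assumption)."""
    return any([elevated_ldh, low_haptoglobin, low_hemoglobin, schistocytes_present])

# Hypothetical values, purely for illustration.
early_tma_signs = thrombocytopenia(120_000) and microangiopathic_hemolysis(
    elevated_ldh=True, low_haptoglobin=True, low_hemoglobin=False, schistocytes_present=True)
print("Early laboratory signs of TMA present:", early_tma_signs)
```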
Historically, treatment options for aHUS were limited to plasma exchange or plasma infusion (PE/PI) therapy, which carries significant risks and has not been proven effective in any controlled trials. People with aHUS and ESRD have also had to undergo lifelong dialysis, which has a 5-year survival rate of 34–38%.
Pathogenesis
HUS is caused by ingestion of bacteria that produce Shiga toxins, with Shiga toxin-producing E. coli (STEC) being the most common type. E. coli can produce Shiga toxin 1, Shiga toxin 2, or both, with Shiga toxin 2-producing organisms being more virulent and much more likely to cause HUS. Once ingested, the bacteria move to the intestines, where they produce the Shiga toxins. The bacteria and toxins damage the mucosal lining of the intestines and are thus able to gain entry into the circulation. Shiga toxin enters the mesenteric microvasculature lining the intestines, where it triggers the release of inflammatory cytokines including IL-6, IL-8, TNF-α, and IL-1β. These inflammatory mediators lead to the inflammation and vascular injury with microthrombi that are seen with HUS. The toxin also further damages the intestinal barrier, leading to diarrhea (usually bloody) and to further entry of Shiga toxin from the intestines into the bloodstream as the intestinal barrier is compromised.
Once Shiga toxin enters the circulation it can travel throughout the body and cause the wide array of end-organ damage and the multitude of symptoms seen with HUS. Shiga toxin gains entry to cells by binding to globotriaosylceramide (Gb3), a globoside found on cell membranes throughout the body, including on the surface of the glomerular endothelium of the kidney. Shiga toxin gains entry to the cell via Gb3 and endocytosis; it is then transported to the Golgi apparatus, where furin cleaves the A subunit of the Shiga toxin. It is then transported to the endoplasmic reticulum, where it is further cleaved, leaving the A1 subunit of Shiga toxin free. The A1 subunit of Shiga toxin inhibits the 28S rRNA of the ribosome, which inhibits protein production by the ribosomes. With the cell's protein synthesis inhibited by Shiga toxin, the cell is destroyed. This leads to vascular injury (including in the kidneys, where Gb3 is concentrated). The vascular injury facilitates the formation of the vascular microthrombi that are characteristic of thrombotic microangiopathy (TMA). The TMA leads to platelet trapping (and thrombocytopenia), red blood cell destruction (and anemia), and the end-organ damage characteristically seen with HUS and TTP.
HUS is one of the thrombotic microangiopathies, a category of disorders that includes STEC-HUS, aHUS, and thrombotic thrombocytopenic purpura (TTP). The cytokines and chemokines (IL-6, IL-8, TNF-α, IL-1β) commonly released in response to Shiga toxin are implicated in platelet activation and TTP. The presence of schistocytes is a key finding that helps to diagnose HUS.
Shiga-toxin directly activates the alternative complement pathway and also interferes with complement regulation by binding to complement factor H, an inhibitor of the complement cascade. Shiga-toxin causes complement-mediated platelet, leukocyte, and endothelial cell activation, resulting in systemic hemolysis, inflammation and thrombosis. Severe clinical complications of TMA have been reported in patients from 2 weeks to more than 44 days after presentation with STEC-HUS, with improvements in clinical condition extending beyond this time frame, suggesting that complement activation persists beyond the acute clinical presentation and for at least 4 months.
The consumption of platelets as they adhere to the thrombi lodged in the small vessels typically leads to mild or moderate thrombocytopenia with a platelet count of less than 60,000 per microliter. As in the related condition TTP, reduced blood flow through the narrowed blood vessels of the microvasculature leads to reduced blood flow to vital organs, and ischemia may develop. The kidneys and the central nervous system (brain and spinal cord) are the parts of the body most critically dependent on high blood flow, and are thus the most likely organs to be affected. However, in comparison to TTP, the kidneys tend to be more severely affected in HUS, and the central nervous system is less commonly affected.
In contrast with the typical disseminated intravascular coagulation seen with other causes of sepsis and occasionally with advanced cancer, coagulation factors are not consumed in HUS (or TTP), and the coagulation screen, fibrinogen level, and assays for fibrin degradation products such as D-dimers are generally normal despite the low platelet count (thrombocytopenia).
HUS occurs after 3–7% of all sporadic E. coli O157:H7 infections and in up to approximately 20% or more of epidemic infections. Children and adolescents are commonly affected. One possible reason is that children have more Gb3 receptors than adults, which may explain why they are more susceptible to HUS. Cattle, swine, deer, and other mammals do not have Gb3 receptors but can be asymptomatic carriers of Shiga toxin–producing bacteria. Some humans can also be asymptomatic carriers. Once the bacteria colonize the gut, diarrhea, often progressing to bloody diarrhea (hemorrhagic colitis), typically follows. Other serotypes of STEC also cause disease, including HUS, as occurred with E. coli O104:H4, which triggered a 2011 epidemic of STEC-HUS in Germany.
Grossly, the kidneys may show patchy or diffuse renal cortical necrosis. Histologically, the glomeruli show thickened and sometimes split capillary walls due largely to endothelial swelling. Large deposits of fibrin-related materials in the capillary lumens, subendothelially, and in the mesangium are also found along with mesangiolysis. Interlobular and afferent arterioles show fibrinoid necrosis and intimal hyperplasia and are often occluded by thrombi.
STEC-HUS most often affects infants and young children, but it also occurs in adults. Common routes of transmission include ingestion of undercooked meat, unpasteurized fruits and juices, or contaminated produce; contact with unchlorinated water; and person-to-person spread in daycare or long-term care facilities.
Unlike typical HUS, aHUS does not follow STEC infection and is thought to result from one or several genetic mutations that cause chronic, uncontrolled, and excessive activation of complement. This causes platelet activation, endothelial cell damage, and white blood cell activation, resulting in systemic TMA, which manifests as a decreased platelet count, hemolysis, damage to multiple organs, and, ultimately, death. Early signs of systemic complement-mediated TMA include thrombocytopenia (platelet count below 150,000 per microliter or a decrease from baseline of at least 25%) and evidence of microangiopathic hemolysis, which is characterized by elevated LDH levels, decreased haptoglobin, decreased hemoglobin, and/or the presence of schistocytes.
Diagnosis
The similarities between HUS, aHUS, and TTP make differential diagnosis essential. All three of these systemic TMA-causing diseases are characterized by thrombocytopenia and microangiopathic hemolysis, plus one or more of the following: neurological symptoms (e.g., confusion, cerebral convulsions, seizures); renal impairment (e.g., elevated creatinine, decreased estimated glomerular filtration rate [eGFR], abnormal urinalysis); and gastrointestinal (GI) symptoms (e.g., diarrhea, nausea/vomiting, abdominal pain, gastroenteritis). The presence of diarrhea does not exclude aHUS as the cause of TMA, as 28% of patients with aHUS present with diarrhea and/or gastroenteritis. A first diagnosis of aHUS is often made in the context of an initial, complement-triggering infection, and Shiga toxin has also been implicated as a trigger that identifies patients with aHUS. Additionally, in one study, mutations of genes encoding several complement regulatory proteins were detected in 8 of 36 (22%) patients diagnosed with STEC-HUS. However, the absence of an identified complement regulatory gene mutation does not preclude aHUS as the cause of the TMA, as approximately 50% of patients with aHUS lack an identifiable mutation in complement regulatory genes.
Diagnostic work-up supports the differential diagnosis of TMA-causing diseases: a positive Shiga toxin/EHEC test confirms STEC-HUS, and severe ADAMTS13 deficiency (i.e., ≤5% of normal ADAMTS13 levels) confirms a diagnosis of TTP.
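The laboratory triage described above can be summarized in a short decision sketch, shown below in Python. It is an illustration only, not a diagnostic tool: the data-class fields and the helper function suggest_tma_category are hypothetical names, and the only clinical values used are the ones quoted in this article (a positive Shiga toxin/EHEC test pointing to STEC-HUS, ADAMTS13 activity ≤5% of normal pointing to TTP, and thrombocytopenia, defined as a platelet count below 150,000 per microliter or a drop of at least 25% from baseline, together with hemolysis markers raising the possibility of aHUS).

```python
from dataclasses import dataclass

@dataclass
class TmaWorkup:
    """Selected laboratory results for a patient with suspected TMA (illustrative fields)."""
    shiga_toxin_positive: bool          # Shiga toxin/EHEC stool test result
    adamts13_activity_pct: float        # ADAMTS13 activity, % of normal
    platelets_per_ul: float             # platelet count per microliter
    platelet_drop_from_baseline: float  # fractional drop from baseline, e.g. 0.30 = 30%
    hemolysis_markers_present: bool     # e.g. high LDH, low haptoglobin, schistocytes

def suggest_tma_category(w: TmaWorkup) -> str:
    """Rough triage following the thresholds quoted in the text; not a diagnostic tool."""
    if w.shiga_toxin_positive:
        return "consistent with STEC-HUS"
    if w.adamts13_activity_pct <= 5:  # severe ADAMTS13 deficiency
        return "consistent with TTP"
    thrombocytopenia = (w.platelets_per_ul < 150_000
                        or w.platelet_drop_from_baseline >= 0.25)
    if thrombocytopenia and w.hemolysis_markers_present:
        return "consider aHUS (complement-mediated TMA)"
    return "criteria for systemic TMA not met on these values alone"

# Example: Shiga toxin negative, normal ADAMTS13, low platelets with hemolysis markers
print(suggest_tma_category(TmaWorkup(False, 60.0, 90_000, 0.40, True)))
```

In practice the work-up proceeds in parallel rather than strictly in this order, and the final diagnosis rests on clinical judgment rather than any single rule.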
Prevention
The effect of antibiotics in Shiga toxin–producing E. coli infection is unclear. While some early studies raised concerns, more recent studies have shown either no effect or a benefit.
Treatment
Treatment involves supportive care and may include dialysis, steroids, blood transfusions, and plasmapheresis. Early intravenous fluid hydration is associated with better outcomes, including shorter hospital stays and a reduced risk of dialysis.
Empiric antibiotics are not indicated in those who are immunocompetent and may worsen HUS. Antidiarrheals and narcotic medications that slow the gut are not recommended, as they are associated with worsening symptoms, an increased risk of HUS in those with STEC infection, and adverse neurologic reactions. Platelet transfusions should not be used, as they may drive the microangiopathic process and worsen the thrombotic microangiopathy.
While eculizumab is used to treat atypical hemolytic uremic syndrome, no evidence as of 2018 supports its use in the main forms of HUS. Scientists are also investigating how useful it would be to immunize humans or cattle against Shiga toxin–producing bacteria.
Prognosis
Acute renal failure occurs in 55–70% of people with STEC-HUS, although up to 70–85% recover renal function. With aggressive treatment, more than 90% of patients survive the acute phase of HUS, and only about 9% may develop end-stage renal disease (ESRD). Roughly one-third of persons with HUS have abnormal kidney function many years later, and a few require long-term dialysis. Another 8% of persons with HUS have other lifelong complications, such as high blood pressure, seizures, blindness, paralysis, and the effects of having part of their colon removed. STEC-HUS is associated with a 3% mortality rate among young children and a 20% mortality rate in middle-aged or older adults. 15–20% of children infected with STEC develop HUS, with the highest risk in children younger than 5 years old.
Patients with aHUS generally have poor outcomes, with up to 50% progressing to ESRD or irreversible brain damage; as many as 25% die during the acute phase.
History
HUS is now considered part of the broader group of thrombotic microangiopathies (TMA). Thrombotic thrombocytopenic purpura (TTP), another TMA, was first described by the Hungarian-born American pathologist and physician Eli Moschcowitz (1879–1964). In 1924, Moschcowitz described TTP as a distinct clinicopathologic condition that can mimic the clinical characteristics of hemolytic–uremic syndrome; the index case was a 16-year-old girl who died two weeks after the abrupt onset and progression of petechial bleeding, pallor, fever, paralysis, hematuria, and coma, and the condition was initially called "Moschcowitz disease". Moschcowitz was also among the first to work in psychosomatic medicine, presenting a paper in 1935 on the psychological origins of physical disease. HUS itself was first described by Conrad Gasser in 1955, and its systemic character was subsequently defined. Bernard Kaplan, a Canadian professor and director of pediatric nephrology who has studied the hemolytic uremic syndromes for more than three decades, identified several distinct entities that can manifest as HUS and emphasized that HUS is a syndrome with a common pathologic outcome. The discovery that endothelial cell injury underlies this broad spectrum of TMA disorders has come into focus during the last two decades.

In the 1980s, Mohamed Karmali (1945–2016) was the first to make the association between Shiga toxin (Stx), diarrheal E. coli infection, and the idiopathic hemolytic uremic syndrome of infancy and childhood. Karmali's work showed that the hemolytic uremic syndrome seen in children in Canada was caused by these bacteria, and he developed a system for classifying strains of E. coli and determining which cause disease in humans. He defined the presence of microvascular injury in diarrhea-associated HUS and the critical role of a verotoxin produced by specific strains of Escherichia coli; this verotoxin was subsequently found to be a member of a family of toxins first identified in Shigella and known as Shiga toxin (Stx). At approximately the same time, TTP was linked to abnormally high levels of ultra-large von Willebrand factor (vWF) multimers caused by congenital or acquired reductions in ADAMTS13 activity. In 1924, the Finnish physician Erik Adolf von Willebrand (1870–1949) had been consulted about a young girl with a bleeding disorder; he described the disorder in 1926, distinguishing it from hemophilia, and it became known as von Willebrand disease. The cause was later discovered to be a deficiency of a protein, now known as von Willebrand factor, that enables hemostasis.

Paul Warwicker is an English nephrologist. While in Newcastle in the mid-1990s, his research in molecular genetics with Professors Tim and Judith Goodship led to the genetic mapping of the familial form of atypical HUS and to descriptions of the first HUS-related mutations and polymorphisms in the factor H gene in both familial and sporadic HUS, confirming the association of atypical HUS (aHUS) with defects in a region of chromosome 1 that contains the genes for several complement regulatory proteins. He was awarded an MD in molecular genetics in 2000 and elected a fellow of the Royal College of Physicians in the same year.
Mutations in complement factor H, complement factor I, membrane cofactor protein, factor B, C3, and thrombomodulin have since been found to cause many of the familial cases of aHUS. These discoveries have allowed a more comprehensive understanding of the pathogenesis, evaluation, and treatment of the entire spectrum of TMA disorders and provide a more rational and effective approach to the care of children with this complicated disease. Prior to the use of monoclonal antibodies, patients with aHUS had an extremely poor prognosis. Eculizumab (Soliris, Alexion Pharmaceuticals, Inc., Boston, MA, USA) is a humanized monoclonal complement inhibitor that in September 2011 became the first FDA-approved treatment for patients with aHUS. Eculizumab binds with high affinity to C5, inhibiting C5 cleavage into C5a and C5b and preventing the generation of the terminal complement complex C5b-9, thus inhibiting complement-mediated TMA. In patients with aHUS, eculizumab resolved and prevented complement-mediated TMA and improved renal function and hematologic outcomes. Reporting trial results for ravulizumab (Ultomiris), Alexion's head of research and development, John Orloff, M.D., said the results "met the high bar of complete TMA response, defined by hematologic normalization and improved kidney function," that he expected the drug to become the "new standard of care for patients with aHUS," and that the company was preparing regulatory submissions for Ultomiris in aHUS in the U.S., European Union, and Japan.
Epidemiology
Argentina has the highest incidence of HUS in the world and plays a key role in research on this condition.
In the United States, the overall incidence of HUS is estimated at 2.1 cases per 100,000 persons/year, with a peak incidence between six months and four years of age.
HUS and the E. coli infections that cause it have been the source of much negative publicity for the FDA, meat industries, and fast-food restaurants since the 1990s, especially the contamination linked to Jack in the Box restaurants. In 2006, an epidemic of harmful E. coli emerged in the United States due to contaminated spinach. In June 2009, Nestlé Toll House cookie dough was linked to an outbreak of E. coli O157:H7 in the United States, which sickened 70 people in 30 states.
In May 2011 an epidemic of bloody diarrhea caused by E. coli O104:H4-contaminated fenugreek seeds hit Germany. Tracing the epidemic revealed more than 3,800 cases, with HUS developing in more than 800 of the cases, including 36 fatal cases. Nearly 90% of the HUS cases were in adults.
References
Syndromes affecting the kidneys
Acquired hemolytic anemia
Blood disorders
Syndromes affecting blood
Medical triads
Wikipedia medicine articles ready to translate
Pathology

Pathology is the study of disease. The word pathology also refers to the study of disease in general, incorporating a wide range of biology research fields and medical practices. However, when used in the context of modern medical treatment, the term is often used in a narrower fashion to refer to processes and tests that fall within the contemporary medical field of "general pathology", an area that includes a number of distinct but inter-related medical specialties that diagnose disease, mostly through analysis of tissue and human cell samples. Idiomatically, "a pathology" may also refer to the predicted or actual progression of particular diseases (as in the statement "the many different forms of cancer have diverse pathologies", in which case a more proper choice of word would be "pathophysiologies"). The suffix pathy is sometimes used to indicate a state of disease in cases of both physical ailment (as in cardiomyopathy) and psychological conditions (such as psychopathy). A physician practicing pathology is called a pathologist.
As a field of general inquiry and research, pathology addresses components of disease: cause, mechanisms of development (pathogenesis), structural alterations of cells (morphologic changes), and the consequences of changes (clinical manifestations). In common medical practice, general pathology is mostly concerned with analyzing known clinical abnormalities that are markers or precursors for both infectious and non-infectious disease, and is conducted by experts in one of two major specialties, anatomical pathology and clinical pathology. Further divisions in specialty exist on the basis of the involved sample types (comparing, for example, cytopathology, hematopathology, and histopathology), organs (as in renal pathology), and physiological systems (oral pathology), as well as on the basis of the focus of the examination (as with forensic pathology).
Pathology is a significant field in modern medical diagnosis and medical research.
Etymology
The Latin term pathology derives from the Ancient Greek roots pathos, meaning "experience" or "suffering", and -logia, meaning "study of". The term is of early 16th-century origin, and became increasingly popularized after the 1530s.
History
The study of pathology, including the detailed examination of the body, including dissection and inquiry into specific maladies, dates back to antiquity. Rudimentary understanding of many conditions was present in most early societies and is attested to in the records of the earliest historical societies, including those of the Middle East, India, and China. By the Hellenic period of ancient Greece, a concerted causal study of disease was underway (see Medicine in ancient Greece), with many notable early physicians (such as Hippocrates, for whom the modern Hippocratic Oath is named) having developed methods of diagnosis and prognosis for a number of diseases. The medical practices of the Romans and those of the Byzantines continued from these Greek roots, but, as with many areas of scientific inquiry, growth in understanding of medicine stagnated somewhat after the Classical Era, but continued to slowly develop throughout numerous cultures. Notably, many advances were made in the medieval era of Islam (see Medicine in medieval Islam), during which numerous texts of complex pathologies were developed, also based on the Greek tradition. Even so, growth in complex understanding of disease mostly languished until knowledge and experimentation again began to proliferate in the Renaissance, Enlightenment, and Baroque eras, following the resurgence of the empirical method at new centers of scholarship. By the 17th century, the study of rudimentary microscopy was underway and examination of tissues had led British Royal Society member Robert Hooke to coin the word "cell", setting the stage for later germ theory.
Modern pathology began to develop as a distinct field of inquiry during the 19th century through the work of natural philosophers and physicians who studied disease informally, in what they termed "pathological anatomy" or "morbid anatomy". However, pathology as a formal area of specialty was not fully developed until the late 19th and early 20th centuries, with the advent of detailed study of microbiology. In the 19th century, physicians had begun to understand that disease-causing pathogens, or "germs" (a catch-all for disease-causing, or pathogenic, microbes, such as bacteria, viruses, fungi, amoebae, molds, protists, and prions) existed and were capable of reproduction and multiplication, replacing earlier beliefs in humors or even spiritual agents that had dominated for much of the previous 1,500 years in European medicine. With the new understanding of causative agents, physicians began to compare the characteristics of one germ's symptoms as they developed within an affected individual to another germ's characteristics and symptoms. This approach led to the foundational understanding that diseases are able to replicate themselves, and that they can have many profound and varied effects on the human host. To determine causes of diseases, medical experts used the most common and widely accepted assumptions or symptoms of their times, a general principle of approach that persists in modern medicine.
Modern medicine was particularly advanced by further development of the microscope for analyzing tissues, to which Rudolf Virchow made a significant contribution, leading to a series of research advances.
By the late 1920s to early 1930s pathology was deemed a medical specialty. Combined with developments in the understanding of general physiology, by the beginning of the 20th century, the study of pathology had begun to split into a number of distinct fields, resulting in the development of a large number of modern specialties within pathology and related disciplines of diagnostic medicine.
General pathology
The modern practice of pathology is divided into a number of subdisciplines within the distinct but deeply interconnected aims of biological research and medical practice. Biomedical research into disease incorporates the work of a vast variety of life science specialists, whereas, in most parts of the world, to be licensed to practice pathology as a medical specialty, one has to complete medical school and secure a license to practice medicine. Structurally, the study of disease is divided into many different fields that study or diagnose markers for disease using methods and technologies particular to specific scales, organs, and tissue types.
Anatomical pathology
Anatomical pathology (Commonwealth) or anatomic pathology (United States) is a medical specialty that is concerned with the diagnosis of disease based on the gross, microscopic, chemical, immunologic and molecular examination of organs, tissues, and whole bodies (as in a general examination or an autopsy). Anatomical pathology is itself divided into subfields, the main divisions being surgical pathology, cytopathology, and forensic pathology. Anatomical pathology is one of two main divisions of the medical practice of pathology, the other being clinical pathology, the diagnosis of disease through the laboratory analysis of bodily fluids and tissues. Sometimes, pathologists practice both anatomical and clinical pathology, a combination known as general pathology.
Cytopathology
Cytopathology (sometimes referred to as "cytology") is a branch of pathology that studies and diagnoses diseases on the cellular level. It is usually used to aid in the diagnosis of cancer, but also helps in the diagnosis of certain infectious diseases and other inflammatory conditions as well as thyroid lesions, diseases involving sterile body cavities (peritoneal, pleural, and cerebrospinal), and a wide range of other body sites. Cytopathology is generally used on samples of free cells or tissue fragments (in contrast to histopathology, which studies whole tissues) and cytopathologic tests are sometimes called smear tests because the samples may be smeared across a glass microscope slide for subsequent staining and microscopic examination. However, cytology samples may be prepared in other ways, including cytocentrifugation.
Dermatopathology
Dermatopathology is a subspecialty of anatomic pathology that focuses on the skin and the rest of the integumentary system as an organ. It is unique in that there are two paths a physician can take to obtain the specialization. All general pathologists and general dermatologists train in the pathology of the skin, so the term dermatopathologist denotes either of these who has reached a certain level of accreditation and experience; in the US, either a general pathologist or a dermatologist can undergo a 1- to 2-year fellowship in the field of dermatopathology. The completion of this fellowship allows one to take a subspecialty board examination and become a board-certified dermatopathologist. Dermatologists are able to recognize most skin diseases based on their appearances, anatomic distributions, and behavior. Sometimes, however, those criteria do not lead to a conclusive diagnosis, and a skin biopsy is taken to be examined under the microscope using usual histological tests. In some cases, additional specialized testing needs to be performed on biopsies, including immunofluorescence, immunohistochemistry, electron microscopy, flow cytometry, and molecular-pathologic analysis. One of the greatest challenges of dermatopathology is its scope. More than 1500 different disorders of the skin exist, including cutaneous eruptions ("rashes") and neoplasms. Therefore, dermatopathologists must maintain a broad base of knowledge in clinical dermatology, and be familiar with several other specialty areas in medicine.
Forensic pathology
Forensic pathology focuses on determining the cause of death by post-mortem examination of a corpse or partial remains. An autopsy is typically performed by a coroner or medical examiner, often during criminal investigations; in this role, coroners and medical examiners are also frequently asked to confirm the identity of a corpse. The requirements for becoming a licensed practitioner of forensic pathology vary from country to country (and even within a given nation), but typically a minimal requirement is a medical doctorate with a specialty in general or anatomical pathology with subsequent study in forensic medicine. The methods forensic scientists use to determine death include examination of tissue specimens to identify the presence or absence of natural disease and other microscopic findings, interpretations of toxicology on body tissues and fluids to determine the chemical cause of overdoses, poisonings or other cases involving toxic agents, and examinations of physical trauma. Forensic pathology is a major component in the trans-disciplinary field of forensic science.
Histopathology
Histopathology refers to the microscopic examination of various forms of human tissue. Specifically, in clinical medicine, histopathology refers to the examination of a biopsy or surgical specimen by a pathologist, after the specimen has been processed and histological sections have been placed onto glass slides. This contrasts with the methods of cytopathology, which uses free cells or tissue fragments. Histopathological examination of tissues starts with surgery, biopsy, or autopsy. The tissue is removed from the body of an organism and then placed in a fixative that stabilizes the tissues to prevent decay. The most common fixative is formalin, although frozen section processing is also common. To see the tissue under a microscope, the sections are stained with one or more pigments. The aim of staining is to reveal cellular components; counterstains are used to provide contrast. Histochemistry refers to the science of using chemical reactions between laboratory chemicals and components within tissue. The histological slides are then interpreted diagnostically and the resulting pathology report describes the histological findings and the opinion of the pathologist. In the case of cancer, this represents the tissue diagnosis required for most treatment protocols.
Neuropathology
Neuropathology is the study of disease of nervous system tissue, usually in the form of either surgical biopsies or sometimes whole brains in the case of autopsy. Neuropathology is a subspecialty of anatomic pathology, neurology, and neurosurgery. In many English-speaking countries, neuropathology is considered a subfield of anatomical pathology. A physician who specializes in neuropathology, usually by completing a fellowship after a residency in anatomical or general pathology, is called a neuropathologist. In day-to-day clinical practice, a neuropathologist generates diagnoses for patients. If a disease of the nervous system is suspected, and the diagnosis cannot be made by less invasive methods, a biopsy of nervous tissue is taken from the brain or spinal cord to aid in diagnosis. Biopsy is usually requested after a mass is detected by medical imaging. With autopsies, the principal work of the neuropathologist is to help in the post-mortem diagnosis of various conditions that affect the central nervous system. Biopsies can also be taken from the skin. Epidermal nerve fiber density testing (ENFD) is a more recently developed neuropathology test in which a punch skin biopsy is taken to identify small fiber neuropathies by analyzing the nerve fibers of the skin. This test is becoming available in select labs as well as many universities; it is replacing the traditional nerve biopsy test because it is less invasive.
Pulmonary pathology
Pulmonary pathology is a subspecialty of anatomic (and especially surgical) pathology that deals with diagnosis and characterization of neoplastic and non-neoplastic diseases of the lungs and thoracic pleura. Diagnostic specimens are often obtained via bronchoscopic transbronchial biopsy, CT-guided percutaneous biopsy, or video-assisted thoracic surgery. These tests can be necessary to distinguish between infection, inflammation, and fibrotic conditions.
Renal pathology
Renal pathology is a subspecialty of anatomic pathology that deals with the diagnosis and characterization of disease of the kidneys. In a medical setting, renal pathologists work closely with nephrologists and transplant surgeons, who typically obtain diagnostic specimens via percutaneous renal biopsy. The renal pathologist must synthesize findings from traditional microscope histology, electron microscopy, and immunofluorescence to obtain a definitive diagnosis. Medical renal diseases may affect the glomerulus, the tubules and interstitium, the vessels, or a combination of these compartments.
Surgical pathology
Surgical pathology is one of the primary areas of practice for most anatomical pathologists. Surgical pathology involves the gross and microscopic examination of surgical specimens, as well as biopsies submitted by surgeons and non-surgeons such as general internists, medical subspecialists, dermatologists, and interventional radiologists. Often an excised tissue sample is the best and most definitive evidence of disease (or lack thereof) in cases where tissue is surgically removed from a patient. These determinations are usually accomplished by a combination of gross (i.e., macroscopic) and histologic (i.e., microscopic) examination of the tissue, and may involve evaluations of molecular properties of the tissue by immunohistochemistry or other laboratory tests.
There are two major types of specimens submitted for surgical pathology analysis: biopsies and surgical resections. A biopsy is a small piece of tissue removed primarily for surgical pathology analysis, most often in order to render a definitive diagnosis. Types of biopsies include core biopsies, which are obtained through the use of large-bore needles, sometimes under the guidance of radiological techniques such as ultrasound, CT scan, or magnetic resonance imaging. Incisional biopsies are obtained through diagnostic surgical procedures that remove part of a suspicious lesion, whereas excisional biopsies remove the entire lesion, and are similar to therapeutic surgical resections. Excisional biopsies of skin lesions and gastrointestinal polyps are very common. The pathologist's interpretation of a biopsy is critical to establishing the diagnosis of a benign or malignant tumor, and can differentiate between different types and grades of cancer, as well as determining the activity of specific molecular pathways in the tumor. Surgical resection specimens are obtained by the therapeutic surgical removal of an entire diseased area or organ (and occasionally multiple organs). These procedures are often intended as definitive surgical treatment of a disease in which the diagnosis is already known or strongly suspected, but pathological analysis of these specimens remains important in confirming the previous diagnosis.
Clinical pathology
Clinical pathology is a medical specialty that is concerned with the diagnosis of disease based on the laboratory analysis of bodily fluids such as blood and urine, as well as tissues, using the tools of chemistry, clinical microbiology, hematology and molecular pathology. Clinical pathologists work in close collaboration with medical technologists, hospital administrations, and referring physicians. Clinical pathologists learn to administer a number of visual and microscopic tests and an especially large variety of tests of the biophysical properties of tissue samples involving automated analysers and cultures. Sometimes the general term "laboratory medicine specialist" is used to refer to those working in clinical pathology, including medical doctors, Ph.D.s and doctors of pharmacology. Immunopathology, the study of an organism's immune response to infection, is sometimes considered to fall within the domain of clinical pathology.
Hematopathology
Hematopathology is the study of diseases of blood cells (including constituents such as white blood cells, red blood cells, and platelets) and of the tissues and organs comprising the hematopoietic system. The term hematopoietic system refers to tissues and organs that produce and/or primarily host hematopoietic cells and includes the bone marrow, lymph nodes, thymus, spleen, and other lymphoid tissues. In the United States, hematopathology is a board-certified subspecialty (licensed under the American Board of Pathology) practiced by physicians who have completed a general pathology residency (anatomic, clinical, or combined) and an additional year of fellowship training in hematopathology. The hematopathologist reviews biopsies of lymph nodes, bone marrow, and other tissues involved by an infiltrate of cells of the hematopoietic system. In addition, the hematopathologist may be in charge of flow cytometric and/or molecular hematopathology studies.
Molecular pathology
Molecular pathology is focused upon the study and diagnosis of disease through the examination of molecules within organs, tissues, or bodily fluids. Molecular pathology is multidisciplinary by nature and shares some aspects of practice with both anatomic pathology and clinical pathology, molecular biology, biochemistry, proteomics, and genetics. It is often applied in a context that is as much scientific as directly medical and encompasses the development of molecular and genetic approaches to the diagnosis and classification of human diseases, the design and validation of predictive biomarkers for treatment response and disease progression, and the susceptibility of individuals of different genetic constitution to particular disorders. The crossover between molecular pathology and epidemiology is represented by a related field, "molecular pathological epidemiology". Molecular pathology is commonly used in the diagnosis of cancer and infectious diseases; it is used to detect cancers such as melanoma, brainstem glioma, and other brain tumors, as well as many other types of cancer and infectious diseases. Techniques are numerous but include quantitative polymerase chain reaction (qPCR), multiplex PCR, DNA microarray, in situ hybridization, DNA sequencing, antibody-based immunofluorescence tissue assays, molecular profiling of pathogens, and analysis of bacterial genes for antimicrobial resistance. These techniques are based on analyzing samples of DNA and RNA. Molecular pathology is also widely used in gene therapy and disease diagnosis.
Oral and maxillofacial pathology
Oral and maxillofacial pathology is one of nine dental specialties recognized by the American Dental Association, and is sometimes considered a specialty of both dentistry and pathology. Oral pathologists must complete three years of postdoctoral training in an accredited program and subsequently obtain diplomate status from the American Board of Oral and Maxillofacial Pathology. The specialty focuses on the diagnosis, clinical management and investigation of diseases that affect the oral cavity and surrounding maxillofacial structures including but not limited to odontogenic, infectious, epithelial, salivary gland, bone and soft tissue pathologies. It also significantly intersects with the field of dental pathology. Although concerned with a broad variety of diseases of the oral cavity, oral pathologists have roles distinct from otorhinolaryngologists ("ear, nose, and throat" specialists) and speech pathologists, the latter of whom help diagnose many neurological or neuromuscular conditions relevant to speech phonology or swallowing. Owing to the availability of the oral cavity to non-invasive examination, many conditions in the study of oral disease can be diagnosed, or at least suspected, from gross examination, but biopsies, cell smears, and other tissue analysis remain important diagnostic tools in oral pathology.
Medical training and accreditation
Becoming a pathologist generally requires specialty training after medical school, but individual nations vary somewhat in the medical licensing required of pathologists. In the United States, pathologists are physicians (D.O. or M.D.) who have completed a four-year undergraduate program, four years of medical school training, and three to four years of postgraduate training in the form of a pathology residency. Training may be within two primary specialties, as recognized by the American Board of Pathology: anatomical pathology and clinical pathology, each of which requires separate board certification. The American Osteopathic Board of Pathology also recognizes four primary specialties: anatomic pathology, dermatopathology, forensic pathology, and laboratory medicine. Pathologists may pursue specialised fellowship training within one or more subspecialties of either anatomical or clinical pathology. Some of these subspecialties permit additional board certification, while others do not.
In the United Kingdom, pathologists are physicians licensed by the UK General Medical Council. The training to become a pathologist is under the oversight of the Royal College of Pathologists. After four to six years of undergraduate medical study, trainees proceed to a two-year foundation program. Full-time training in histopathology currently lasts between five and five and a half years and includes specialist training in surgical pathology, cytopathology, and autopsy pathology. It is also possible to take a Royal College of Pathologists diploma in forensic pathology, dermatopathology, or cytopathology, recognising additional specialist training and expertise, and to obtain specialist accreditation in forensic pathology, pediatric pathology, and neuropathology. All postgraduate medical training and education in the UK is overseen by the General Medical Council.
In France, pathology is separated into two distinct specialties: anatomical pathology and clinical pathology. Residencies for both last four years. Residency in anatomical pathology is open to physicians only, while clinical pathology is open to both physicians and pharmacists. At the end of the second year of clinical pathology residency, residents can choose between general clinical pathology and a specialization in one of its disciplines, but they cannot practice anatomical pathology, nor can anatomical pathology residents practice clinical pathology.
Overlap with other diagnostic medicine
Though separate fields in terms of medical practice, a number of areas of inquiry in medicine and
medical science either overlap greatly with general pathology, work in tandem with it, or contribute significantly to the understanding of the pathology of a given disease or its course in an individual. As a significant portion of all general pathology practice is concerned with cancer, the practice of oncology makes extensive use of both anatomical and clinical pathology in diagnosis and treatment. In particular, biopsy, resection, and blood tests are all examples of pathology work that is essential for the diagnoses of many kinds of cancer and for the staging of cancerous masses. In a similar fashion, the tissue and blood analysis techniques of general pathology are of central significance to the investigation of serious infectious disease and as such inform significantly upon the fields of epidemiology, etiology, immunology, and parasitology. General pathology methods are of great importance to biomedical research into disease, wherein they are sometimes referred to as "experimental" or "investigative" pathology.
Medical imaging is the generating of visual representations of the interior of a body for clinical analysis and medical intervention. Medical imaging reveals details of internal physiology that help medical professionals plan appropriate treatments for tissue infection and trauma. Medical imaging is also central in supplying the biometric data necessary to establish baseline features of anatomy and physiology so as to increase the accuracy with which early or fine-detail abnormalities are detected. These diagnostic techniques are often performed in combination with general pathology procedures and are themselves often essential to developing new understanding of the pathogenesis of a given disease and tracking the progress of disease in specific medical cases. Examples of important subdivisions in medical imaging include radiology (which uses the imaging technologies of X-ray radiography), magnetic resonance imaging, medical ultrasonography (or ultrasound), endoscopy, elastography, tactile imaging, thermography, medical photography, nuclear medicine and functional imaging techniques such as positron emission tomography. Though they do not strictly relay images, readings from diagnostic tests involving electroencephalography, magnetoencephalography, and electrocardiography often give hints as to the state and function of certain tissues in the brain and heart respectively.
Pathology informatics
Pathology informatics is a subfield of health informatics. It is the use of information technology in pathology. It encompasses pathology laboratory operations, data analysis, and the interpretation of pathology-related information.
Key aspects of pathology informatics include:
Laboratory information management systems (LIMS): Implementing and managing computer systems specifically designed for pathology departments. These systems help in tracking and managing patient specimens, results, and other pathology data (see the sketch after this list).
Digital pathology: Involves the use of digital technology to create, manage, and analyze pathology images. This includes slide scanning and automated image analysis.
Telepathology: Using technology to enable remote pathology consultation and collaboration.
Quality assurance and reporting: Implementing informatics solutions to ensure the quality and accuracy of pathology processes.
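As a rough illustration of the kind of record a LIMS tracks (see the LIMS item above), the sketch below models a single specimen moving through a simplified pathology workflow. It is a minimal, hypothetical schema: the field names, the workflow states, and the advance method are assumptions made for illustration and do not correspond to any particular LIMS product.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Illustrative workflow states; real LIMS products define their own.
WORKFLOW = ["received", "grossed", "processed", "slide_prepared", "scanned", "reported"]

@dataclass
class Specimen:
    """A minimal specimen-tracking record (hypothetical schema)."""
    accession_id: str                       # e.g. "S24-01234"
    patient_id: str                         # pseudonymized patient identifier
    specimen_type: str                      # e.g. "skin biopsy"
    status: str = "received"
    history: List[str] = field(default_factory=list)

    def advance(self, new_status: str) -> None:
        """Move the specimen to another workflow step, keeping a simple audit trail."""
        if new_status not in WORKFLOW:
            raise ValueError(f"unknown status: {new_status}")
        self.history.append(f"{datetime.now().isoformat()} {self.status} -> {new_status}")
        self.status = new_status

# Example usage
specimen = Specimen("S24-01234", "PT-777", "skin biopsy")
specimen.advance("grossed")
specimen.advance("processed")
print(specimen.status, specimen.history)
```

A production system would typically add barcode or RFID identifiers, user authentication, enforced step ordering, and integration with reporting and digital-slide systems.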
Psychopathology
Psychopathology is the study of mental illness, particularly of severe disorders. Informed heavily by both psychology and neurology, its purpose is to classify mental illness, elucidate its underlying causes, and guide clinical psychiatric treatment accordingly. Although diagnosis and classification of mental norms and disorders is largely the purview of psychiatry—the results of which are guidelines such as the Diagnostic and Statistical Manual of Mental Disorders, which attempt to classify mental disease mostly on behavioural evidence, though not without controversy—the field is also heavily, and increasingly, informed by neuroscience and other biological cognitive sciences. Mental or social disorders or behaviours seen as generally unhealthy or excessive in a given individual, to the point where they cause harm or severe disruption to the person's lifestyle, are often called "pathological" (e.g., pathological gambling or pathological lying).
Non-humans
Although the vast majority of lab work and research in pathology concerns the development of disease in humans, pathology is of significance throughout the biological sciences. Two main catch-all fields exist to represent most complex organisms capable of serving as host to a pathogen or other form of disease: veterinary pathology (concerned with all non-human species of the kingdom Animalia) and phytopathology, which studies disease in plants.
Veterinary pathology
Veterinary pathology covers a vast array of species, but with a significantly smaller number of practitioners, so understanding of disease in non-human animals, especially as regards veterinary practice, varies considerably by species. Nevertheless, significant amounts of pathology research are conducted on animals, for two primary reasons: 1) The origins of diseases are typically zoonotic in nature, and many infectious pathogens have animal vectors and, as such, understanding the mechanisms of action for these pathogens in non-human hosts is essential to the understanding and application of epidemiology and 2) those animals that share physiological and genetic traits with humans can be used as surrogates for the study of the disease and potential treatments as well as the effects of various synthetic products. For this reason, as well as their roles as livestock and companion animals, mammals generally have the largest body of research in veterinary pathology. Animal testing remains a controversial practice, even in cases where it is used to research treatment for human disease. As in human medical pathology, the practice of veterinary pathology is customarily divided into the two main fields of anatomical and clinical pathology.
Plant pathology
Although the pathogens and their mechanics differ greatly from those of animals, plants are subject to a wide variety of diseases, including those caused by fungi, oomycetes, bacteria, viruses, viroids, virus-like organisms, phytoplasmas, protozoa, nematodes, and parasitic plants. Damage caused by insects, mites, vertebrates, and other small herbivores is not considered a part of the domain of plant pathology. The field is connected to plant disease epidemiology and is especially concerned with the horticulture of species that are of high importance to the human diet or other human uses.
See also
Biopsy
Causal inference
Cell (biology)
Disease
Environmental pathology
Epidemiology
Etiology (medicine)
Hematology
Histology
Immunology
List of pathologists
Medical diagnosis
Medical jurisprudence
Medicine
Microbiology
Microscopy
Minimally-invasive procedures
Oncology
Parasitology
Pathogen
Pathogenesis
Pathophysiology
Precision medicine
Spectroscopy
Speech–language pathology
Telepathology
References
External links
American Society for Clinical Pathology (ASCP)
American Society for Investigative Pathology (ASIP)
Pathpedia online pathology resource: Comprehensive pathology website with numerous resources.
College of American Pathologists
humpath.com (Atlas in Human Pathology)
Intersociety Council for Pathology Training (ICPI)
Pathological Society of Great Britain and Ireland
Royal College of Pathologists (UK)
Royal College of Pathologists of Australasia (Australia & Oceania)
United States and Canadian Academy of Pathology
WebPath: The Internet Pathology Laboratory for Medical Education
Atlases: High Resolution Pathology Images
Branches of biology
Hypophosphatemia

Hypophosphatemia is an electrolyte disorder in which there is a low level of phosphate in the blood. Symptoms may include weakness, trouble breathing, and loss of appetite. Complications may include seizures, coma, rhabdomyolysis, or softening of the bones.
Causes include alcohol use disorder, refeeding in those with malnutrition, recovery from diabetic ketoacidosis, burns, hyperventilation, and certain medications. It may also occur in the setting of hyperparathyroidism, hypothyroidism, and Cushing syndrome. It is diagnosed based on a blood phosphate concentration of less than 0.81 mmol/L (2.5 mg/dL). When levels are below 0.32 mmol/L (1.0 mg/dL) it is deemed to be severe.
Treatment depends on the underlying cause. Phosphate may be given by mouth or by injection into a vein. Hypophosphatemia occurs in about 2% of people within hospital and 70% of people in the intensive care unit (ICU).
Signs and symptoms
Muscle dysfunction and weakness – This occurs in major muscles, but also may manifest as: diplopia, low cardiac output, dysphagia, and respiratory depression due to respiratory muscle weakness.
Mental status changes – This may range from irritability to gross confusion, delirium, and coma.
White blood cell dysfunction, causing worsening of infections.
Instability of cell membranes due to low adenosine triphosphate (ATP) levels – This may cause rhabdomyolysis with increased serum levels of creatine phosphokinase, and also hemolytic anemia.
Increased affinity for oxygen in the blood caused by decreased production of 2,3-bisphosphoglyceric acid.
If hypophosphatemia is chronic, rickets in children or osteomalacia in adults may develop.
Causes
Refeeding syndrome – This causes a demand for phosphate in cells due to the action of hexokinase, an enzyme that attaches phosphate to glucose to begin metabolism of glucose. Also, production of ATP when cells are fed and recharge their energy supplies requires phosphate. A similar mechanism is seen in the treatment of diabetic ketoacidosis, which can be complicated by respiratory failure in these cases due to respiratory muscle weakness.
Respiratory alkalosis – Any alkalemic condition moves phosphate out of the blood into cells. This includes the most common form, respiratory alkalemia (a higher than normal blood pH resulting from low carbon dioxide levels in the blood), which in turn is caused by any hyperventilation (such as may result from sepsis, fever, pain, anxiety, drug withdrawal, and many other causes). This phenomenon is seen because in respiratory alkalosis carbon dioxide (CO2) decreases in the extracellular space, causing intracellular CO2 to freely diffuse out of the cell. This drop in intracellular CO2 causes a rise in cellular pH, which has a stimulating effect on glycolysis. Since the process of glycolysis requires phosphate (the end product is adenosine triphosphate), the result is a massive uptake of phosphate into metabolically active tissue (such as muscle) from the serum. However, this effect is not seen in metabolic alkalosis, for in such cases the cause of the alkalosis is increased bicarbonate rather than decreased CO2. Bicarbonate, unlike CO2, diffuses poorly across the cellular membrane, and therefore there is little change in intracellular pH.
Alcohol use disorder – Alcohol impairs phosphate absorption. People who excessively consume alcohol are usually also malnourished with regard to minerals. In addition, alcohol treatment is associated with refeeding, which further depletes phosphate, and the stress of alcohol withdrawal may create respiratory alkalosis, which exacerbates hypophosphatemia (see above).
Malabsorption – This includes gastrointestinal damage, and also failure to absorb phosphate due to lack of vitamin D, or chronic use of phosphate binders such as sucralfate, aluminum-containing antacids, and (more rarely) calcium-containing antacids.
Intravenous iron (usually for anemia) may cause hypophosphatemia. The loss of phosphate is predominantly the result of renal wasting.
Primary hypophosphatemia is the most common cause of non-nutritional rickets. Laboratory findings include low-normal serum calcium, moderately low serum phosphate, elevated serum alkaline phosphatase, low serum 1,25-dihydroxyvitamin D levels, hyperphosphaturia, and no evidence of hyperparathyroidism.
Hypophosphatemia decreases 2,3-bisphosphoglycerate (2,3-BPG) causing a left shift in the oxyhemoglobin curve.
Other rarer causes include:
Certain blood cancers such as lymphoma or leukemia
Hereditary causes
Liver failure
Tumor-induced osteomalacia
Pathophysiology
Hypophosphatemia is caused by the following three mechanisms:
Inadequate intake (often unmasked in refeeding after long-term low phosphate intake)
Increased excretion (e.g. in hyperparathyroidism, hypophosphatemic rickets)
Shift of phosphorus from the extracellular to the intracellular space. This can be seen in treatment of diabetic ketoacidosis, refeeding, short-term increases in cellular demand (e.g. hungry bone syndrome) and acute respiratory alkalosis.
Diagnosis
Hypophosphatemia is diagnosed by measuring the concentration of phosphate in the blood. Concentrations of phosphate less than 0.81 mmol/L (2.5 mg/dL) are considered diagnostic of hypophosphatemia, though additional tests may be needed to identify the underlying cause of the disorder.
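As a worked illustration of these thresholds, the sketch below converts a phosphate result reported in mg/dL to mmol/L and classifies it against the cut-offs given in this article. The conversion assumes the laboratory reports phosphate as elemental phosphorus (molar mass roughly 30.97 g/mol); the function names are illustrative only, and the output is not clinical guidance.

```python
# Thresholds from the text: < 0.81 mmol/L (2.5 mg/dL) is hypophosphatemia,
# < 0.32 mmol/L (1.0 mg/dL) is severe hypophosphatemia.

def mg_dl_to_mmol_l(phosphate_mg_dl: float) -> float:
    """Convert serum phosphate from mg/dL to mmol/L (reported as elemental phosphorus)."""
    return phosphate_mg_dl * 10 / 30.97

def classify_phosphate(phosphate_mmol_l: float) -> str:
    """Classify a serum phosphate concentration; illustrative only."""
    if phosphate_mmol_l < 0.32:
        return "severe hypophosphatemia"
    if phosphate_mmol_l < 0.81:
        return "hypophosphatemia"
    return "not hypophosphatemic by this threshold"

# Example: a reported value of 1.8 mg/dL is about 0.58 mmol/L
value_mmol_l = mg_dl_to_mmol_l(1.8)
print(round(value_mmol_l, 2), classify_phosphate(value_mmol_l))
```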
Treatment
Standard intravenous preparations of potassium phosphate are available and are routinely used in malnourished people and people who consume excessive amounts of alcohol. Supplementation by mouth is also useful where intravenous treatment is not available. Historically, one of the first demonstrations of this was in people in concentration camps who died soon after being re-fed: it was observed that those given milk (which is high in phosphate) had a higher survival rate than those who did not receive milk.
Monitoring parameters during correction with IV phosphate
Phosphorus levels should be monitored 2 to 4 hours after each dose; serum potassium, calcium, and magnesium should also be monitored. Cardiac monitoring is also advised.
See also
X-linked hypophosphatemia
References
External links
Electrolyte disturbances
Wikipedia medicine articles ready to translate | 0.769263 | 0.994777 | 0.765245 |
Complication (medicine)

A complication in medicine, or medical complication, is an unfavorable result of a disease, health condition, or treatment. Complications may adversely affect the prognosis, or outcome, of a disease. Complications generally involve a worsening in the severity of the disease or the development of new signs, symptoms, or pathological changes that may become widespread throughout the body and affect other organ systems. Thus, complications may lead to the development of new diseases resulting from previously existing diseases. Complications may also arise as a result of various treatments.
The development of complications depends on a number of factors, including the degree of vulnerability, susceptibility, age, health status, and immune system condition. Knowledge of the most common and severe complications of a disease, procedure, or treatment allows for prevention and preparation for treatment if they should occur.
Complications are not to be confused with sequelae, which are residual effects that occur after the acute (initial, most severe) phase of an illness or injury. Sequelae can appear early in the development of disease or weeks to months later and are a result of the initial injury or illness. For example, a scar resulting from a burn or dysphagia resulting from a stroke would be considered sequelae. In addition, complications should not be confused with comorbidities, which are diseases that occur concurrently but have no causative association. Complications are similar to adverse effects, but the latter term is typically used in pharmacological contexts or when the negative consequence is expected or common.
Common illnesses and complications
Iatrogenic complications
Medical errors can fall into various categories listed below:
Medication: Medication medical errors include wrong prescription, impaired delivery, or improper adherence. The process of prescribing medication is a complex process that relies on the accurate transfer of information through various parties. Prevention methods include increased use of electronic prescription, pre-packaging unit dosing, and ensuring medical literacy among patients.
Surgical: Surgery-related medical errors can be anesthesia-related, but most often include wrong-site and wrong-patient procedural errors. Preventive measures include following and double-checking standardized surgical protocol before, during, and after procedures. Universal surgical protocols include verification of patient identity and proper site-marking.
Diagnostic: Diagnostic errors include misdiagnosis, wrong diagnosis, and overdiagnosis. Diagnostic errors are often the result of patient characteristics and physician bias.
Machine interface: Errors in this category refer to mistakes in human interaction with tools or machines. Machine-related errors can be reduced by standardization and clear differentiation in design of products.
Transition and handoff: Errors in this category can occur person-to-person or site-to-site during transfer, and can be managed by adhering to proper hand-off protocols.
Human factors, teamwork, and communication: Errors in this category highlight the impact of culture and relationships on communication. These concepts can play a role in other categories of medical errors. Preventive measures include cultivating a "culture of safety" which includes creating an environment where people feel comfortable discussing concerns, feedback, and errors without fear of punishment.
Healthcare-associated infections (HAIs): HAIs are complications of general treatments involving microorganisms or viral infections and are most commonly caused by indwelling devices (urinary catheters, central lines) or previous surgical procedures. Common microbes involved in HAIs are Escherichia coli, Proteus mirabilis, and Clostridioides difficile. The most effective preventive measure is hand-hygiene.
Cardiovascular complications
Atrial fibrillation
Atrial fibrillation is a type of arrhythmia characterized by rapid and irregular heart rhythms caused by disorganized electrical activation of the atria, which is conducted irregularly to the ventricles through the atrioventricular (AV) node. In atrial fibrillation, blood is pumped less effectively into the pulmonary and systemic circulations because the left and right ventricles (the lower chambers of the heart) do not fill properly due to the irregular contraction of the left and right atria (the upper chambers of the heart).
A patient with atrial fibrillation may experience symptoms of fatigue, dizziness or lightheadedness, heart palpitations, chest pain, and shortness of breath. Because blood is not pumped effectively out of the atria, it can pool within the chambers of the heart. This stasis of blood increases the risk of developing a thrombus (blood clot). A thrombus can break free as an embolus (mobile blood clot) and travel into the systemic circulation. Atrial fibrillation is therefore associated with an increased risk of stroke, especially if an embolus travels to the brain.
Other examples
Thrombosis in the heart or brain, causing stroke or acute myocardial infarction, can be a complication of blood coagulation disorders, phlebitis (inflammation of the veins), endocarditis, and artificial heart valves.
Metabolic complications
Diabetes mellitus
Diabetes mellitus, also known simply as diabetes, is a disorder of the regulation of blood glucose (a common type of sugar) levels. There are two types of chronic diabetes mellitus: type I and type II. Both lead to abnormally high levels of blood glucose as the body is not able to properly absorb the sugar into tissues. Diabetes requires life-long, consistent monitoring of food intake, blood sugar levels, and physical activity. Diabetes mellitus may present a series of complications in an advanced or more severe stage, such as:
Cardiovascular disease. Adults with diabetes are significantly more likely to die from heart disease than are those without diabetes. Diabetes is associated with risk factors for various cardiovascular diseases including obesity, insulin resistance, high blood cholesterol and triglyceride content, and high blood pressure. These conditions increase risk of thrombosis, atherosclerosis (blockage of coronary arteries leading to inadequate supply of oxygen to parts of the heart), and hypertension which can lead to myocardial infarction, coronary artery disease (CAD), and others.
Diabetic neuropathy. Hyperglycemia can eventually cause damage to nerves in the distal extremities (peripheral neuropathy), thighs and hips (radiculoplexus neuropathy), face (mononeuropathy), and internal organs (autonomic neuropathy). Initial symptoms may present as numbness, tingling, pain, muscular weakness, loss of reflexes or proper bodily functions, and many others.
Diabetic nephropathy. Excessive amounts of certain solutes passing through the kidneys for prolonged periods of time can lead to kidney damage. Diabetic nephropathy is specifically characterized by abnormally high levels of urinary albumin excretion. This affects approximately 40% of patients with type I or type II diabetes.
Diabetic retinopathy. Chronic or prolonged type I and type II diabetes can lead to damage in the blood vessels of the retina due to hyperglycemia (excessive blood glucose). Damage and blockage of the vessels causes microaneurysms, tears, and leakage of fluid into the back of the eye. This can eventually lead to abnormal blood vessel growth, nerve damage, or excessive pressure buildup in the eye. Symptoms initially present as blurred vision but can lead to more serious complications such as blindness, glaucoma, retinal detachment, and vitreous hemorrhage.
Foot damage. Diabetes mellitus can lead to poor vascular blood flow to the extremities. Injury of the foot with inadequate blood flow can progress to ulcers and become infected. Individuals with diabetic neuropathy may not notice the damage and may develop gangrene (tissue necrosis due to inadequate blood supply).
Skin conditions. Insulin insensitivity in the case of type II diabetes can cause prolonged increases in blood insulin. Insulin normally binds to insulin receptors but in excess amounts may bind to insulin-like growth factor (IGF) receptors in epithelial tissue. This can cause excessive proliferation of keratinocytes and fibroblasts. This presents as acanthosis nigricans, a thickening and darkening of areas of the skin such as the armpits, neck, hands, and face. Other skin conditions include diabetic dermopathy, digital sclerosis, eruptive xanthomatosis, and others.
Neurologic / psychiatric complications
Hepatic encephalopathy is a possible complication of liver cirrhosis.
Significant intellectual, physical, and developmental disability are common complications of untreated hydrocephalus.
Suicide is a common complication of many disorders and conditions that consistently affect a person's life negatively, such as major depressive disorder, posttraumatic stress disorder, schizophrenia, anxiety disorders, or substance abuse.
Complications of outpatient drugs are very common and many patients experience worry or discomfort due to them.
Paradoxical reaction to a drug; that is, a reaction that is the opposite to the intended purpose of the drug. An example is benzodiazepines, a class of psychoactive drugs considered minor tranquilizers with varying hypnotic, sedative, anxiolytic, anticonvulsant, and muscle relaxant effects; paradoxically they may also create hyperactivity, anxiety, convulsions etc. in susceptible individuals.
Reproductive complications
Pregnancy
Pregnancy is the development of an embryo or fetus inside the womb of a female for roughly nine months, or 40 weeks counted from the last menstrual period until birth. It is divided into three trimesters, each lasting about three months. During the first trimester the developing embryo becomes a fetus, organs start to develop, limbs grow, and facial features appear. The second and third trimesters are marked by significant growth and functional development of the body. During this time the woman's body undergoes a series of changes, and many complications may arise involving the fetus, the mother, or both.
Hypertension. As the developing fetus enlarges in the womb, it places pressure on the mother's arteries and vasculature, reducing blood flow and contributing to a systemic increase in blood pressure. High blood pressure that predates the pregnancy and persists after it is considered chronic hypertension; high blood pressure that first appears after 20 weeks of gestation is gestational hypertension. Either form can progress to preeclampsia, a more severe condition that can be detrimental to the mother and the developing fetus.
Gestational diabetes. Appropriate blood sugar levels are normally maintained by insulin secreted from the pancreas. During pregnancy, the placenta surrounding the developing fetus produces hormones that can inhibit the action of insulin, preventing the mother's blood sugar from decreasing. Gestational diabetes occurs primarily in the second half of pregnancy and can cause excessive birth weight and preterm delivery, and places the child at greater risk of type II diabetes.
Preterm labor. Delivery of the baby prior to 37 weeks of pregnancy is considered preterm. This can cause a variety of issues with the child including underdeveloped viscera (organs), behavioral or learning disabilities, low birth weight, and respiratory issues.
Miscarriage. The loss of the developing fetus prior to 20 weeks of pregnancy. Common causes may be related to chromosomal abnormalities (abnormal genetic makeup) of the fetus but can also include ectopic pregnancy, maternal age, and other factors.
Stillbirth. The loss of the developing fetus after 20 weeks of pregnancy. Can be due to a variety of reasons including chromosomal abnormalities, developmental issues, or health-related problems of the mother.
Hyperemesis gravidarum. Persistent, severe nausea and vomiting during pregnancy that does not resolve after the first trimester; it is distinct from morning sickness, which is more common and less severe.
Respiratory complications
Streptococcal pharyngitis
Streptococcal pharyngitis, also known as strep throat, is an infection of the respiratory tract caused by group A Streptococcus (Streptococcus pyogenes), a gram-positive, beta-hemolytic (blood cell-lysing) coccus. It is spread primarily by direct contact and the transfer of oral or other secretions, and occurs largely in children. Common symptoms include sore throat, fever, white exudates at the back of the mouth, and cervical adenopathy (swollen lymph nodes under the chin and around the neck). Streptococcal pharyngitis can lead to various complications, and recurrent infection increases their likelihood. In many of these, lack of treatment and the body's own immune response are responsible for the additional adverse effects. These include:
Scarlet fever. In addition to the symptoms of strep throat, individuals may develop a prominent red rash, a flushed appearance, and strawberry tongue. The rash generally fades after a few days and the skin may peel for a few weeks. Treatment is the same as for strep throat.
Rheumatic fever. Rheumatic fever generally develops a few weeks after symptoms of strep throat have passed and is less likely to develop if prompt treatment (antibiotics) is given. Typical symptoms can include polyarthritis (temporary joint pain in multiple areas), carditis or chest pain, rash, subcutaneous nodules, and involuntary jerks. Rheumatic fever is believed to be the result of an autoimmune reaction to various tissues in the body that are similar to toxins produced by Streptococcus pyogenes. Rheumatic fever may lead to more serious complications of the heart such as rheumatic heart disease.
Glomerulonephritis. Kidney damage that may present a few weeks after strep infection. Rather than being a direct result of infection in the kidneys, it is believed to be caused by an overreactive immune response. Symptoms can include blood or protein in the urine, hypertension, and reduced urine output. It can lead to further kidney damage later in life.
Otitis media. Infection of the middle ear.
Meningitis. Infection of the meninges of the central nervous system (brain and spinal cord) that leads to swelling. Symptoms vary and differ between adults and children but can include headaches, fever, stiff neck, and other neurological-related issues. Early treatment is important to prevent more serious complications.
Toxic shock syndrome. A severe reaction of the body to toxins produced by various bacteria such as Streptococcus pyogenes. Results from an overactive response by the immune system and can cause hypotension, fever, and in more severe cases, organ failure.
Surgical / procedural complications
Puerperal fever was a common complication of childbirth, contributing to the high mortality of mothers before the advent of antisepsis and antibiotics.
Erectile dysfunction and urinary incontinence which may follow prostatectomy.
Malignant hyperthermia can be a reaction to general anesthetics, as a complication in a surgery.
Fractured ribs and sternum may be a complication of cardiopulmonary resuscitation attempts.
Other examples of complications
Sepsis (a life-threatening systemic response to infection) may occur as a complication of a bacterial, viral, or fungal infection.
Miscarriage is the most common complication of early pregnancy.
Eczema vaccinatum is a rare and severe complication of smallpox vaccination in people with eczema.
See also
Adverse effect
Classification of Pharmaco-Therapeutic Referrals
Diagnosis
Iatrogenesis
Late effect
Nocebo
Placebo
Prognosis
Sequela
References
Further reading
Medical terminology
Meningoencephalitis
Meningoencephalitis is a medical condition that simultaneously resembles both meningitis, which is an infection or inflammation of the meninges, and encephalitis, which is an infection or inflammation of the brain tissue; when caused by a herpes virus it is termed herpes meningoencephalitis.
Signs and symptoms
Signs of meningoencephalitis include unusual behavior, personality changes, nausea and thinking problems.
Symptoms may include headache, fever, pain on neck movement, light sensitivity, and seizures.
Causes
Causative organisms include protozoan, viral, and bacterial pathogens.
Specific types include:
Bacterial
Veterinarians have observed meningoencephalitis in animals infected with listeriosis, caused by the pathogenic bacterium Listeria monocytogenes. Meningitis and encephalitis already present in the brain or spinal cord of an animal may occur together as meningoencephalitis. The bacterium commonly targets the sensitive structures of the brain stem. L. monocytogenes meningoencephalitis has been documented to significantly increase levels of cytokines such as IL-1β, IL-12, and IL-15, leading to toxic effects on the brain.
Meningoencephalitis may be one of the severe complications of diseases originating from several Rickettsia species, such as Rickettsia rickettsii (agent of Rocky Mountain spotted fever (RMSF)), Rickettsia conorii, Rickettsia prowazekii (agent of epidemic louse-borne typhus), and Rickettsia africae. It can cause impairment of the cranial nerves, paralysis of the eyes, and sudden hearing loss. Meningoencephalitis is a rare, late-stage manifestation of tick-borne rickettsial diseases, such as RMSF and human monocytotropic ehrlichiosis (HME), which is caused by Ehrlichia chaffeensis (a species of rickettsial bacteria).
Other bacterial causes include Mycoplasma pneumoniae, Mycobacterium tuberculosis, Borrelia species (Lyme disease), and Leptospira species (leptospirosis).
Viral
Tick-borne encephalitis
West Nile virus
Measles
Epstein–Barr virus
Varicella-zoster virus
Enterovirus
Herpes simplex virus type 1
Herpes simplex virus type 2
Rabies virus
Adenovirus, although meningoencephalitis is almost solely seen in heavily immunocompromised patients.
Mumps, a relatively common cause of meningoencephalitis. However, most cases are mild, and mumps meningoencephalitis generally does not result in death or neurologic sequelae.
HIV, in which a very small number of individuals exhibit meningoencephalitis at the primary stage of infection.
Autoimmune
Antibodies targeting amyloid beta peptide proteins which have been used during research on Alzheimer's disease.
Anti-N-methyl-D-aspartate (anti-NMDA) receptor antibodies, which are also associated with seizures and a movement disorder, and related to anti-NMDA receptor encephalitis.
Nonvasculitic autoimmune inflammatory meningoencephalitis (NAIM). Cases can be divided into GFAP-negative (GFAP-) and GFAP-positive (GFAP+) forms; the latter is related to autoimmune GFAP astrocytopathy.
Protozoal
Naegleria fowleri (percolozoa)
Trypanosoma brucei (euglenozoa)
Toxoplasma gondii (apicomplexa)
Animal
Halicephalobus gingivalis
This nematode is an exceptionally rare cause of meningoencephalitis.
Other/multiple
Other causes include granulomatous meningoencephalitis and vasculitis. The fungus Cryptococcus neoformans can manifest within the CNS as meningoencephalitis; hydrocephalus is a very characteristic finding, owing to the organism's uniquely thick polysaccharide capsule.
Diagnosis
Clinical diagnosis includes evaluation for the presence of recurrent or recent herpes infection, fever, headache, altered mental status, convulsions, disturbance of consciousness, and focal signs. Testing of cerebrospinal fluid is usually performed.
Treatment
Antiviral therapy, such as acyclovir or ganciclovir, works best when started as early as possible. Interferon may also be used as an immune therapy. Symptomatic therapy can be applied as needed: high fever can be treated by physical regulation of body temperature, seizures with antiepileptic drugs, and high intracranial pressure with drugs such as mannitol. If the condition is caused by a bacterial infection, the infection can be treated with antibiotic drugs.
See also
Meningitis
Meningism
Primary amoebic meningoencephalitis
Encephalitis
Naegleria fowleri
References
External links
Encephalitis
Meningitis
Central nervous system disorders
Herpes simplex virus–associated diseases
Ovarian hyperstimulation syndrome
Ovarian hyperstimulation syndrome (OHSS) is a medical condition that can occur in some women who take fertility medication to stimulate egg growth, and in other women in sporadic cases. Most cases are mild, but rarely the condition is severe and can lead to serious illness or even death.
Signs and symptoms
Mild symptoms include abdominal bloating and a feeling of fullness, nausea, diarrhea, and slight weight gain. Moderate symptoms include rapid daily weight gain, increased abdominal girth, vomiting, diarrhea, darker urine, decreased urine output, excessive thirst, and dry skin and/or hair (in addition to the mild symptoms). Severe symptoms are fullness/bloating above the waist, shortness of breath, pleural effusion, urine that is significantly darker or diminished in quantity, calf and chest pain, marked abdominal bloating or distention, and lower abdominal pain.
Complications
OHSS may be complicated by ovarian torsion or rupture, venous thromboembolism, acute respiratory distress syndrome, electrolyte imbalance, thrombophlebitis and acute kidney injury. Symptoms generally resolve in 1 to 2 weeks but will be more severe and persist longer if pregnancy occurs. This is due to human chorionic gonadotropin (hCG) from the pregnancy acting on the corpus luteum in the ovaries in sustaining the pregnancy before the placenta has fully developed. Typically, even in severe OHSS with a developing pregnancy, the duration does not exceed the first trimester. Mortality is low, but several fatal cases have been reported.
Cause
Sporadic OHSS is very rare and may have a genetic component. Clomifene citrate therapy can occasionally lead to OHSS, but the vast majority of cases develop after use of gonadotropin therapy (with administration of FSH), such as Pergonal, and administration of hCG to induce final oocyte maturation and/or trigger oocyte release, often in conjunction with in vitro fertilisation (IVF). The frequency varies and depends on a woman's risk factors, management, and methods of surveillance. About 5% of treated women may encounter moderate to severe OHSS. Risk factors include polycystic ovary syndrome, young age, low BMI, high antral follicle count, the development of many ovarian follicles under stimulation, extremely elevated serum estradiol concentrations, the use of hCG for final oocyte maturation and/or release, the continued use of hCG for luteal support, and the occurrence of a pregnancy (resulting in hCG production).
Medications
Ovarian hyperstimulation syndrome is particularly associated with injection of a hormone called human chorionic gonadotropin (hCG) which is used for inducing final oocyte maturation and/or triggering oocyte release. The risk is further increased by multiple doses of hCG after ovulation and if the procedure results in pregnancy.
Using a GnRH agonist instead of hCG for inducing final oocyte maturation and/or release results in an elimination of the risk of ovarian hyperstimulation syndrome, but a slight decrease of the delivery rate of approximately 6%.
Pathophysiology
OHSS has been characterized by the presence of multiple luteinized cysts within the ovaries leading to ovarian enlargement and secondary complications, but that definition includes almost all women undergoing ovarian stimulation. The central feature of clinically significant OHSS is the development of vascular hyperpermeability and the resulting shift of fluids into the third space.
As hCG causes the ovary to undergo extensive luteinization, large amounts of estrogens, progesterone, and local cytokines are released. It is almost certain that vascular endothelial growth factor (VEGF) is a key substance that induces vascular hyperpermeability, making local capillaries "leaky", leading to a shift of fluids from the intravascular system to the abdominal and pleural cavity. Supraphysiologic production of VEGF from many follicles under the prolonged effect of hCG appears to be the specific key process underlying OHSS. Thus, while the woman accumulates fluid in the third space, primarily in the form of ascites, she actually becomes hypovolemic and is at risk for respiratory, circulatory (such as arterial thromboembolism since blood is now thicker), and renal problems. Women who are pregnant sustain the ovarian luteinization process through the production of hCG.
Avoiding OHSS typically requires interrupting the pathological sequence, such as avoiding the use of hCG. One alternative is to use a GnRH agonist instead of hCG. While this has been repeatedly shown to "virtually eliminate" OHSS risk, there is some controversy regarding the effect on pregnancy rates if a fresh non-donor embryo transfer is attempted, almost certainly due to a luteal phase defect. There is no dispute that the GnRH agonist trigger is effective for oocyte donors and for embryo banking (cryopreservation) cycles.
Diagnosis
Classification
OHSS is divided into the categories mild, moderate, severe, and critical. In mild forms of OHSS the ovaries are enlarged (5–12 cm) and there may be additional accumulation of ascites with mild abdominal distension, abdominal pain, nausea, and diarrhea. In severe forms of OHSS there may be hemoconcentration, thrombosis, distension, oliguria (decreased urine production), pleural effusion, and respiratory distress. Early OHSS develops before pregnancy testing and late OHSS is seen in early pregnancy.
Criteria for severe OHSS include enlarged ovary, ascites, hematocrit > 45%, WBC > 15,000, oliguria, creatinine 1.0–1.5 mg/dl, creatinine clearance > 50 ml/min, liver dysfunction, and anasarca. Critical OHSS includes enlarged ovary, tense ascites with hydrothorax and pericardial effusion, hematocrit > 55%, WBC > 25,000, oligoanuria, creatinine > 1.6 mg/dl, creatinine clearance < 50 ml/min, kidney failure, thromboembolic phenomena, and ARDS.
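As a rough illustration only, and not a clinical tool, the following sketch encodes the laboratory thresholds quoted above; the function name and the idea of grading from laboratory values alone are assumptions for the example, since real grading also depends on ultrasound and clinical findings such as ascites, anasarca, and respiratory status.

```python
# Illustrative sketch only; thresholds are those quoted in the text above.
# Real OHSS grading also relies on ultrasound and clinical findings that
# this simplified function does not capture.

def ohss_lab_grade(hematocrit_pct, wbc_per_ul, creatinine_mg_dl, crcl_ml_min):
    """Return a rough OHSS grade suggested by laboratory values alone."""
    # Critical criteria: hematocrit > 55%, WBC > 25,000/mm3,
    # creatinine > 1.6 mg/dl, creatinine clearance < 50 ml/min.
    if (hematocrit_pct > 55 or wbc_per_ul > 25_000
            or creatinine_mg_dl > 1.6 or crcl_ml_min < 50):
        return "critical"
    # Severe criteria: hematocrit > 45%, WBC > 15,000/mm3,
    # creatinine 1.0-1.5 mg/dl.
    if (hematocrit_pct > 45 or wbc_per_ul > 15_000
            or 1.0 <= creatinine_mg_dl <= 1.5):
        return "severe"
    return "below severe by these laboratory values"

# Hypothetical example: hematocrit 47%, WBC 16,000/mm3, creatinine 1.2 mg/dl,
# creatinine clearance 60 ml/min -> "severe"
print(ohss_lab_grade(47, 16_000, 1.2, 60))
```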
Prevention
Physicians can reduce the risk of OHSS by monitoring FSH therapy and using this medication judiciously, and by withholding hCG medication.
Cabergoline confers a significant reduction in the risk of OHSS in high risk women according to a Cochrane review of randomized studies, but the included trials did not report the live birth rates or multiple pregnancy rates. Cabergoline, as well as other dopamine agonists, might reduce the severity of OHSS by interfering with the VEGF system. A systematic review and meta-analysis concluded that prophylactic treatment with cabergoline reduces the incidence, but not the severity of OHSS, without compromising pregnancy outcomes.
The risk of OHSS is smaller when using GnRH antagonist protocol instead of GnRH agonist protocol for suppression of ovulation during ovarian hyperstimulation. The underlying mechanism is that, with the GnRH antagonist protocol, initial follicular recruitment and selection is undertaken by endogenous endocrine factors prior to starting the exogenous hyperstimulation, resulting in a smaller number of growing follicles when compared with the standard long GnRH agonist protocol.
A Cochrane review found administration of hydroxyethyl starch decreases the incidence of severe OHSS. There was insufficient evidence to support routine cryopreservation and insufficient evidence for the relative merits of intravenous albumin versus cryopreservation. Also, coasting, which is ovarian hyperstimulation without induction of final maturation, does not significantly decrease the risk of OHSS.
Volume expanders such as albumin and hydroxyethyl starch solutions act by providing volume to the circulatory system.
Treatment
Treatment of OHSS depends on the severity of the hyperstimulation.
Mild OHSS can be treated conservatively with monitoring of abdominal girth, weight, and discomfort on an outpatient basis until either conception or menstruation occurs. Conception can cause mild OHSS to worsen in severity.
Moderate OHSS is treated with bed rest, fluids, and close monitoring of labs such as electrolytes and blood counts. Ultrasound may be used to monitor the size of ovarian follicles. Depending on the situation, a physician may closely monitor a woman's fluid intake and output on an outpatient basis, looking for an increased discrepancy in fluid balance (a discrepancy of over 1 liter is cause for concern). Resolution of the syndrome is measured by decreasing size of the follicular cysts on two consecutive ultrasounds.
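A minimal sketch of the fluid-balance check described above, assuming hypothetical daily intake and output figures; the 1-liter threshold is the one stated in the text, and the variable names are illustrative only.

```python
# Minimal sketch of an outpatient fluid-balance check. The >1 L discrepancy
# threshold is the one quoted above; intake/output values are hypothetical.

def net_fluid_balance_ml(intake_ml, output_ml):
    """Net fluid retained over the period (positive = retention), in mL."""
    return intake_ml - output_ml

daily_intake_ml = 2800   # hypothetical recorded oral and IV intake
daily_output_ml = 1500   # hypothetical recorded urine output

net = net_fluid_balance_ml(daily_intake_ml, daily_output_ml)
if net > 1000:
    print(f"Net retention of {net} mL in 24 h exceeds the 1 L threshold; reassess.")
else:
    print(f"Net balance of {net} mL is within the stated threshold.")
```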
Aspiration of accumulated fluid (ascites) from the abdominal/pleural cavity may be necessary, as well as opioids for the pain. If the OHSS develops within an IVF protocol, it can be prudent to postpone transfer of the pre-embryos since establishment of pregnancy can lengthen the recovery time or contribute to a more severe course. Over time, if carefully monitored, the condition will naturally reverse to normal – so treatment is typically supportive, although a woman may need to be treated or hospitalized for pain, paracentesis, and/or intravenous hydration.
References
Further reading
External links
Fertility medicine
Gynaecological endocrinology
Noninflammatory disorders of female genital tract
Syndromes in females
Plague (disease)
Plague is an infectious disease caused by the bacterium Yersinia pestis. Symptoms include fever, weakness and headache. Usually this begins one to seven days after exposure. There are three forms of plague, each affecting a different part of the body and causing associated symptoms. Pneumonic plague infects the lungs, causing shortness of breath, coughing and chest pain; bubonic plague affects the lymph nodes, making them swell; and septicemic plague infects the blood and can cause tissues to turn black and die.
The bubonic and septicemic forms are generally spread by flea bites or handling an infected animal, whereas pneumonic plague is generally spread between people through the air via infectious droplets. Diagnosis is typically by finding the bacterium in fluid from a lymph node, blood or sputum.
Those at high risk may be vaccinated. Those exposed to a case of pneumonic plague may be treated with preventive medication. If infected, treatment is with antibiotics and supportive care. Typically antibiotics include a combination of gentamicin and a fluoroquinolone. The risk of death with treatment is about 10% while without it is about 70%.
Globally, about 600 cases are reported a year. In 2017, the countries with the most cases include the Democratic Republic of the Congo, Madagascar and Peru. In the United States, infections occasionally occur in rural areas, where the bacteria are believed to circulate among rodents. It has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century, which resulted in more than 50 million deaths in Europe.
Signs and symptoms
There are several different clinical manifestations of plague. The most common form is bubonic plague, followed by septicemic and pneumonic plague. Other clinical manifestations include plague meningitis, plague pharyngitis, and ocular plague. General symptoms of plague include fever, chills, headaches, and nausea. Many people experience swelling in their lymph nodes if they have bubonic plague. For those with pneumonic plague, symptoms may (or may not) include a cough, pain in the chest, and haemoptysis.
Bubonic plague
When a flea bites a human and contaminates the wound with regurgitated blood, the plague-causing bacteria are passed into the tissue. Y. pestis can reproduce inside cells, so even if phagocytosed, they can still survive. Once in the body, the bacteria can enter the lymphatic system, which drains interstitial fluid. Plague bacteria secrete several toxins, one of which is known to cause beta-adrenergic blockade.
Y. pestis spreads through the lymphatic vessels of the infected human until it reaches a lymph node, where it causes acute lymphadenitis. The swollen lymph nodes form the characteristic buboes associated with the disease, and autopsies of these buboes have revealed them to be mostly hemorrhagic or necrotic.
If the lymph node is overwhelmed, the infection can pass into the bloodstream, causing secondary septicemic plague and if the lungs are seeded, it can cause secondary pneumonic plague.
Septicemic plague
Lymphatics ultimately drain into the bloodstream, so the plague bacteria may enter the blood and travel to almost any part of the body. In septicemic plague, bacterial endotoxins cause disseminated intravascular coagulation (DIC), causing tiny clots throughout the body and possibly ischemic necrosis (tissue death due to lack of circulation/perfusion to that tissue) from the clots. DIC results in depletion of the body's clotting resources so that it can no longer control bleeding. Consequently, there is bleeding into the skin and other organs, which can cause red and/or black patchy rash and hemoptysis/hematemesis (coughing up/ vomiting of blood). There are bumps on the skin that look somewhat like insect bites; these are usually red, and sometimes white in the centre. Untreated, the septicemic plague is usually fatal. Early treatment with antibiotics reduces the mortality rate to between 4 and 15 per cent.
Pneumonic plague
The pneumonic form of plague arises from infection of the lungs. It causes coughing and thereby produces airborne droplets that contain bacterial cells and are likely to infect anyone inhaling them. The incubation period for pneumonic plague is short, usually two to four days, but sometimes just a few hours. The initial signs are indistinguishable from several other respiratory illnesses; they include headache, weakness, and spitting or vomiting of blood. The course of the disease is rapid; unless diagnosed and treated soon enough, typically within a few hours, death may follow in one to six days; in untreated cases, mortality is nearly 100%.
Cause
Transmission of Y. pestis to an uninfected individual is possible by any of the following means:
droplet contact – coughing or sneezing on another person
direct physical contact – touching an infected person, including sexual contact
indirect contact – usually by touching soil contamination or a contaminated surface
airborne transmission – if the microorganism can remain in the air for long periods
fecal-oral transmission – usually from contaminated food or water sources
vector borne transmission – carried by insects or other animals.
Yersinia pestis circulates in animal reservoirs, particularly in rodents, in the natural foci of infection found on all continents except Australia. The natural foci of plague are situated in a broad belt in the tropical and sub-tropical latitudes and the warmer parts of the temperate latitudes around the globe, between the parallels 55° N and 40° S.
Contrary to popular belief, rats did not directly start the spread of the bubonic plague. It is mainly a disease in the fleas (Xenopsylla cheopis) that infested the rats, making the rats themselves the first victims of the plague. Rodent-borne infection in a human occurs when a person is bitten by a flea that has been infected by biting a rodent that itself has been infected by the bite of a flea carrying the disease. The bacteria multiply inside the flea, sticking together to form a plug that blocks its stomach and causes it to starve. The flea then bites a host and continues to feed, even though it cannot quell its hunger, and consequently, the flea vomits blood tainted with the bacteria back into the bite wound. The bubonic plague bacterium then infects a new person and the flea eventually dies from starvation. Serious outbreaks of plague are usually started by other disease outbreaks in rodents or a rise in the rodent population.
A 21st-century study of a 1665 outbreak of plague in the village of Eyam in England's Derbyshire Dales – which isolated itself during the outbreak, facilitating modern study – found that three-quarters of cases are likely to have been due to human-to-human transmission, especially within families, a much larger proportion than previously thought.
Diagnosis
Symptoms of plague are usually non-specific, and laboratory testing is required for a definitive diagnosis. Y. pestis can be identified by microscopy and by culturing a sample, which serves as the reference standard to confirm a case of plague. The sample can be obtained from blood, mucus (sputum), or aspirate extracted from inflamed lymph nodes (buboes). Antibiotics given before a sample is taken, delays in transporting the sample to the laboratory, or a poorly stored sample can all produce false negative results.
Polymerase chain reaction (PCR) may also be used to diagnose plague, by detecting the presence of bacterial genes such as the pla gene (plasminogen activator) and the caf1 gene (F1 capsule antigen). PCR testing requires a very small sample and is effective for both live and dead bacteria. For this reason, if a person receives antibiotics before a sample is collected for laboratory testing, they may have a false negative culture and a positive PCR result.
Blood tests to detect antibodies against Y. pestis can also be used to diagnose plague, however, this requires taking blood samples at different periods to detect differences between the acute and convalescent phases of F1 antibody titres.
In 2020, a study was released on rapid diagnostic tests that detect the F1 capsule antigen (F1RDT) from sputum or bubo aspirate samples. Results show that the F1RDT rapid diagnostic test can be used for people with suspected pneumonic or bubonic plague but not for asymptomatic people. F1RDT may be useful in providing a fast result for prompt treatment and a rapid public health response, as studies suggest it is highly sensitive for both pneumonic and bubonic plague. However, both positive and negative rapid-test results need to be confirmed to establish or reject the diagnosis of a confirmed case of plague, and the result needs to be interpreted within the epidemiological context: study findings indicate that although all 40 people who had plague in a population of 1,000 were correctly diagnosed, 317 people were falsely diagnosed as positive.
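To make the last point concrete, here is a small worked calculation, a sketch using only the figures quoted above (40 true cases, all detected, and 317 false positives in a population of 1,000), showing why a positive rapid-test result still needs confirmation; the percentages apply to that study population only.

```python
# Worked example using the figures quoted above; illustrative of that
# study population only, not a general property of the F1RDT test.

population = 1000
true_cases = 40
true_positives = 40      # every real case was detected
false_positives = 317    # people without plague who tested positive

non_cases = population - true_cases
sensitivity = true_positives / true_cases                  # 40/40   = 1.00
specificity = (non_cases - false_positives) / non_cases    # 643/960 ~ 0.67
ppv = true_positives / (true_positives + false_positives)  # 40/357  ~ 0.11

print(f"sensitivity ~ {sensitivity:.0%}, specificity ~ {specificity:.0%}, "
      f"positive predictive value ~ {ppv:.0%}")
# -> sensitivity ~ 100%, specificity ~ 67%, positive predictive value ~ 11%
```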
Prevention
Vaccination
Bacteriologist Waldemar Haffkine developed the first plague vaccine in 1897. He conducted a massive inoculation program in British India, and it is estimated that 26 million doses of Haffkine's anti-plague vaccine were sent out from Bombay between 1897 and 1925, reducing the plague mortality by 50–85%.
Since human plague is rare in most parts of the world as of 2023, routine vaccination is not needed except for those at particularly high risk of exposure; it is not needed even for people living in areas with enzootic plague, meaning plague that occurs at regular, predictable rates in specific populations and areas, such as the western United States. It is not indicated for most travellers to countries with recent reported cases, particularly if their travel is limited to urban areas with modern hotels. The United States CDC thus recommends vaccination only for: (1) laboratory and field personnel who are working with Y. pestis organisms resistant to antimicrobials; (2) people engaged in aerosol experiments with Y. pestis; and (3) people engaged in field operations in areas with enzootic plague where preventing exposure is not possible (such as some disaster areas). A systematic review by the Cochrane Collaboration found no studies of sufficient quality to make any statement on the efficacy of the vaccine.
Early diagnosis
Diagnosing plague early leads to a decrease in transmission or spread of the disease.
Prophylaxis
Pre-exposure prophylaxis for first responders and health care providers who will care for patients with pneumonic plague is not considered necessary as long as standard and droplet precautions can be maintained. In cases of surgical mask shortages, patient overcrowding, poor ventilation in hospital wards, or other crises, pre-exposure prophylaxis might be warranted if sufficient supplies of antimicrobials are available.
Postexposure prophylaxis should be considered for people who had close (<6 feet), sustained contact with a patient with pneumonic plague and were not wearing adequate personal protective equipment. Antimicrobial postexposure prophylaxis also can be considered for laboratory workers accidentally exposed to infectious materials and people who had close (<6 feet) or direct contact with infected animals, such as veterinary staff, pet owners, and hunters.
Specific recommendations on pre- and post-exposure prophylaxis are available in the clinical guidelines on treatment and prophylaxis of plague published in 2021.
Treatments
If diagnosed in time, the various forms of plague are usually highly responsive to antibiotic therapy. The antibiotics often used are streptomycin, chloramphenicol and tetracycline. Amongst the newer generation of antibiotics, gentamicin and doxycycline have proven effective in monotherapeutic treatment of plague. Guidelines on treatment and prophylaxis of plague were published by the Centers for Disease Control and Prevention in 2021.
The plague bacterium could develop drug resistance and again become a major health threat. One case of a drug-resistant form of the bacterium was found in Madagascar in 1995. Further outbreaks in Madagascar were reported in November 2014 and October 2017.
Epidemiology
Globally about 600 cases are reported a year. In 2017, the countries with the most cases include the Democratic Republic of the Congo, Madagascar and Peru. It has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century which resulted in more than 50 million dead. In recent years, cases have been distributed between small seasonal outbreaks which occur primarily in Madagascar, and sporadic outbreaks or isolated cases in endemic areas.
In 2022, the possible origin of all modern strains of Yersinia pestis was identified in human remains found in three graves in Kyrgyzstan, dated to 1338 and 1339. The siege of Caffa in Crimea in 1346 is known to have been the first outbreak caused by the descendant strains that later spread over Europe. Sequencing this DNA and comparing it with other ancient and modern strains paints a family tree of the bacterium. Plague bacteria affecting marmots in Kyrgyzstan today are closest to the strain found in the graves, suggesting this is also the location where plague transferred from animals to humans.
Biological weapon
The plague has a long history as a biological weapon. Historical accounts from ancient China and medieval Europe detail the use of infected animal carcasses, such as cows or horses, and human carcasses by the Xiongnu/Huns, Mongols, Turks and other groups to contaminate enemy water supplies. Han dynasty general Huo Qubing is recorded to have died of such contamination while engaging in warfare against the Xiongnu. Plague victims were also reported to have been tossed by catapult into cities under siege.
In 1347, the Genoese possession of Caffa, a great trade emporium on the Crimean peninsula, came under siege by an army of Mongol warriors of the Golden Horde under the command of Jani Beg. After a protracted siege during which the Mongol army was reportedly withering from the disease, they decided to use the infected corpses as a biological weapon. The corpses were catapulted over the city walls, infecting the inhabitants. This event might have led to the transfer of the Black Death via their ships into the south of Europe, possibly explaining its rapid spread.
During World War II, the Japanese Army developed weaponized plague, based on the breeding and release of large numbers of fleas. During the Japanese occupation of Manchuria, Unit 731 deliberately infected Chinese, Korean and Manchurian civilians and prisoners of war with the plague bacterium. These subjects, termed "maruta" or "logs", were then studied, some by dissection and others by vivisection while still conscious. Members of the unit such as Shiro Ishii were exonerated from the Tokyo tribunal by Douglas MacArthur, but 12 of them were prosecuted in the Khabarovsk War Crime Trials in 1949, during which some admitted to having spread bubonic plague around the city of Changde.
Ishii developed bombs containing live mice and fleas, with very small explosive loads, to deliver the weaponized microbes; using a ceramic rather than a metal casing for the warhead overcame the problem of the explosion killing the infected animals and insects. While no records survive of the actual use of the ceramic shells, prototypes exist and are believed to have been used in experiments during WWII.
After World War II, both the United States and the Soviet Union developed means of weaponising pneumonic plague. Experiments included various delivery methods, vacuum drying, sizing the bacterium, developing strains resistant to antibiotics, combining the bacterium with other diseases (such as diphtheria), and genetic engineering. Scientists who worked in USSR bio-weapons programs have stated that the Soviet effort was formidable and that large stocks of weaponised plague bacteria were produced. Information on many of the Soviet and US projects is largely unavailable. Aerosolized pneumonic plague remains the most significant threat.
The plague can be easily treated with antibiotics. Some countries, such as the United States, have large supplies on hand if such an attack should occur, making the threat less severe.
See also
Timeline of plague
References
Further reading
External links
WHO Health topic
CDC Plague map world distribution, publications, information on bioterrorism preparedness and response regarding plague
Symptoms, causes, pictures of bubonic plague
Airborne diseases
Bacterium-related cutaneous conditions
Biological weapons
Epidemics
Insect-borne diseases
Rodent-carried diseases
Zoonoses
Zoonotic bacterial diseases
Cat diseases
Swimming
Swimming is the self-propulsion of a person through water or other liquid, usually for recreation, sport, exercise, or survival. Locomotion is achieved through coordinated movement of the limbs and the body to achieve hydrodynamic thrust that results in directional motion. Humans can hold their breath underwater and undertake rudimentary locomotive swimming within weeks of birth, as a survival response. Swimming requires stamina, skills, and proper technique.
Swimming is a popular activity and competitive sport where certain techniques are deployed to move through water. It offers numerous health benefits, such as strengthened cardiovascular health, muscle strength, and increased flexibility. It is suitable for people of all ages and fitness levels.
Swimming is consistently among the top public recreational activities, and in some countries, swimming lessons are a compulsory part of the educational curriculum. As a formalized sport, swimming is featured in various local, national, and international competitions, including every modern Summer Olympics.
Swimming involves repeated motions known as strokes to propel the body forward. While the front crawl, also known as freestyle, is widely regarded as the fastest of the four main strokes, other strokes are practiced for special purposes, such as training.
Swimming comes with certain risks, mainly because of the aquatic environment where it takes place. For instance, swimmers may find themselves incapacitated by panic and exhaustion, both potential causes of death by drowning. Other dangers may arise from exposure to infection or hostile aquatic fauna. To minimize such eventualities, most facilities employ a lifeguard to keep alert for any signs of distress.
Swimmers often wear specialized swimwear, although depending on the area's culture, some swimmers may also swim nude or wear their day attire. In addition, a variety of equipment can be used to enhance the swimming experience or performance, including but not limited to the use of swimming goggles, floatation devices, swim fins, and snorkels.
Science
Swimming relies on the nearly neutral buoyancy of the human body. On average, the body has a relative density of 0.98 compared to water, which causes the body to float. However, buoyancy varies based on body composition, lung inflation, muscle and fat content, centre of gravity and the salinity of the water. Higher levels of body fat and saltier water both lower the relative density of the body and increase its buoyancy. Because they tend to have a lower centre of gravity and higher muscle content, human males find it more difficult to float or be buoyant. See also: Hydrostatic weighing.
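As a rough worked example, taking the quoted relative density of 0.98 as given, Archimedes' principle says that a floating body displaces its own weight of water, so the fraction of body volume below the surface equals the ratio of the densities:

\[
\frac{V_{\text{submerged}}}{V_{\text{body}}} = \frac{\rho_{\text{body}}}{\rho_{\text{water}}} \approx 0.98
\]

That is, only about 2% of the body's volume sits above the surface in fresh water; in seawater, which is a few per cent denser, the submerged fraction is slightly smaller, which is why floating feels easier in salt water.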
Since the human body is less dense than water, water can support the body's weight during swimming. As a result, swimming is "low-impact" compared to land activities such as running. The density and viscosity of water also create resistance for objects moving through the water. Swimming strokes use this resistance to create propulsion, but this same resistance also generates drag on the body.
Hydrodynamics is important to stroke technique for swimming faster, and swimmers who want to swim faster or tire less try to reduce the drag of the body's motion through the water. To swim faster, swimmers can either increase the power of their strokes or reduce water resistance. However, because the power needed to overcome drag rises roughly with the cube of swimming speed, a given relative gain in speed demands about three times as large a relative increase in power, which makes reducing resistance the more practical route. Efficient swimming through reduced water resistance involves a horizontal position in the water, rolling the body to reduce its breadth in the water, and extending the arms as far as possible to reduce wave resistance.
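A minimal sketch of the scaling argument behind this trade-off, assuming the standard quadratic drag model (here the water density \(\rho\), drag coefficient \(C_d\) and frontal area \(A\) simply stand in for the swimmer's overall resistance):

\[
F_d = \tfrac{1}{2}\rho C_d A v^{2}, \qquad
P = F_d\,v = \tfrac{1}{2}\rho C_d A v^{3}
\;\Rightarrow\;
v = \left(\frac{2P}{\rho C_d A}\right)^{1/3}
\]

For small changes this gives \(\Delta v/v \approx \tfrac{1}{3}\,\Delta P/P\), so each 1% gain in speed costs roughly a 3% increase in power at a fixed level of drag (or, equivalently, a 3% reduction in the drag term); because streamlining is usually far easier to achieve than sustaining extra power, reducing resistance tends to be the more efficient strategy.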
Just before plunging into the pool, swimmers may perform exercises such as squatting. Squatting helps enhance a swimmer's start by warming up the thigh muscles.
Infant swimming
Human babies demonstrate an innate swimming or diving reflex from newborn until approximately ten months. Other mammals also demonstrate this phenomenon (see mammalian diving reflex). The diving response involves apnea, reflex bradycardia, and peripheral vasoconstriction; in other words, babies immersed in water spontaneously hold their breath, slow their heart rate, and reduce blood circulation to the extremities (fingers and toes).
Because infants show these innate responses, swimming classes for babies from about six months old are offered in many locations. These can help build muscle memory and water confidence from a young age.
Technique
Swimming can be undertaken using a wide range of styles, known as 'strokes,' and which are used for different purposes or to distinguish between classes in competitive swimming. Using a defined stroke for propulsion through the water is unnecessary, and untrained swimmers may use a 'doggy paddle' of arm and leg movements, similar to how four-legged animals swim.
Four main strokes are used in competition and recreational swimming: the front crawl, breaststroke, backstroke, and butterfly. Competitive swimming in Europe started around 1800, mostly using the breaststroke. In 1873, John Arthur Trudgen introduced the trudgen to Western swimming competitions. The butterfly was developed in the 1930s and was considered a variant of the breaststroke until it was accepted as a separate style in 1953. Butterfly is considered by many to be the hardest stroke, but it is the most effective for all-around toning and muscle building. It also burns the most calories and, when swum proficiently, can be the second fastest stroke.
In non-competitive swimming, other strokes such as the sidestroke are also used. Toward the end of the 19th century the sidestroke was modified so that one arm, and then the other, was recovered above the water in turn. It is still used in lifesaving and recreational swimming.
Other strokes exist for particular purposes, such as training, school lessons, and rescue. It is often possible to change strokes to avoid using parts of the body, either to isolate specific body parts, such as swimming with only the arms or only the legs to exercise them harder, or to accommodate amputees or those affected by paralysis.
History
Swimming has been recorded since prehistoric times, and the earliest records of swimming date back to Stone Age paintings from around 7,000 years ago. Written references date from 2000 BCE. Some earliest references include the Epic of Gilgamesh, the Iliad, the Odyssey, the Bible (Ezekiel 47:5, Acts 27:42, Isaiah 25:11), Beowulf, and other sagas.
In 450 BC, Herodotus described a failed seaborne expedition of Mardonius with the words "…those who could not swim perished from that cause, others from the cold".
The coastal tribes living in the volatile Low Countries were known as excellent swimmers by the Romans. Men and horses of the Batavi tribe could cross the Rhine without losing formation, according to Tacitus. Dio Cassius describes one surprise tactic employed by Aulus Plautius against the Celts at the Battle of the Medway:
The [British Celts] thought that Romans would not be able to cross it without a bridge, and consequently bivouacked in rather careless fashion on the opposite bank; but he sent across a detachment of [Batavii], who were accustomed to swim easily in full armour across the most turbulent streams. ... Thence the Britons retired to the river Thames at a point near where it empties into the ocean and at flood-tide forms a lake. This they easily crossed because they knew where the firm ground and the easy passages in this region were to be found, but the Romans in attempting to follow them were not so successful. However, the [Batavii] swam across again and some others got over by a bridge a little way up-stream, after which they assailed the barbarians from several sides at once and cut down many of them.
The Talmud, a compendium of Jewish law compiled c. 500 CE, requires fathers to teach their sons how to swim.
In 1538, Nikolaus Wynmann, a Swiss–German professor of languages, wrote the earliest known complete book about swimming, Colymbetes, sive de arte natandi dialogus et festivus et iucundus lectu (The Swimmer, or A Dialogue on the Art of Swimming and Joyful and Pleasant to Read).
Purpose
There are many reasons why people swim, from recreation to swimming as a necessary part of a job or other activity. Swimming may also be used to rehabilitate injuries, especially various cardiovascular and muscle injuries. Some people pursue swimming as a career or competitive interest, and a few compete professionally at the national or international level.
Recreation
Many swimmers swim for recreation, with swimming consistently ranking as one of the physical activities people are most likely to take part in. Recreational swimming can also be used for exercise, relaxation, or rehabilitation. The support of the water and the reduction in impact make swimming accessible for people unable to undertake activities such as running. Many people also find swimming relaxing, and time in the water can help reduce stress.
Health
Swimming is primarily a cardiovascular/aerobic exercise because of the long exercise time and the constant oxygen supply it requires, except during short sprints, when the muscles work anaerobically. Swimming can also help tone and strengthen muscles, and regular swimming can contribute to weight management and maintaining a healthy body weight (Robinson 2022). Swimming allows people with arthritis to exercise affected joints without worsening their symptoms, and it is often recommended for individuals with joint conditions or injuries because the buoyancy of water reduces stress on the joints; however, swimmers with arthritis may wish to avoid breaststroke, as improper technique can exacerbate arthritic knee pain. As with most aerobic exercise, swimming reduces the harmful effects of stress; it also improves health for people with cardiovascular problems and chronic illnesses, has been shown to have a positive effect on the mental health of pregnant women and mothers, and can improve mood. One drawback is bone health: although many forms of physical activity improve bone density, studies have shown that, because of the low-impact nature of the sport, bone mass acquisition can be negatively affected, which may be an issue for adolescent athletes in particular.
Disabled swimmers
Since 2010, the Americans with Disabilities Act has required that swimming pools in the United States be accessible to disabled swimmers.
Elderly swimmers
"Water-based exercise can benefit older adults by improving quality of life and decreasing disability. It also improves or maintains the bone health of post-menopausal women."
Swimming is an ideal workout for the elderly, as it is a low-impact sport with very little risk of injury. Exercise in the water works all muscle groups, helping to maintain muscle mass, which commonly declines with age. It is also a common way to relieve pain from arthritis.
Sport
Swimming as a sport predominantly involves participants competing to be the fastest over a given distance. Competitors swim different distances at different levels of competition. For example, swimming has been an Olympic sport since 1896, and the current program includes events from 50 m to 1500 m in length across the four main strokes and the medley. During the season, competitive swimmers typically train several times per week, often multiple times per day, to build endurance and strength and to preserve fitness. When this cycle of work is completed, swimmers go through a stage called taper, in which training volume and intensity are reduced in preparation for the competition season. During taper, the focus is on power and feel for the water.
The sport is governed internationally by the Fédération Internationale de Natation (FINA), and competition pools for FINA events are 25 or 50 meters in length. In the United States, a pool 25 yards in length is commonly used for competition.
Other swimming and water-related sporting disciplines include open water swimming, diving, synchronized swimming, water polo, triathlon, and the modern pentathlon.
Safety
It is important to prioritize safety when swimming. This includes having lifeguards present, swimming in designated areas, and being aware of potential hazards such as currents and underwater obstacles.
As a popular leisure activity done all over the world, one of the primary risks of swimming is drowning. Drowning may occur from a variety of factors, from swimming fatigue to simply inexperience in the water. From 2005 to 2014, an average of 3,536 fatal unintentional drownings occurred in the United States, approximating 10 deaths a day.
To minimize the risk and prevent potential drownings from occurring, lifeguards are often employed to supervise swimming locations such as pools, waterparks, lakes and beaches. Lifeguards receive different training depending on the sites where they are employed; for example, a waterfront lifeguard receives more rigorous training than a poolside lifeguard. Well-known aquatic training services include the National Lifesaving Society and the Canadian Red Cross, which specialize in training lifeguards in North America.
Learning basic water safety skills, such as swimming with a buddy and knowing how to respond to emergencies, is essential for swimmers of all levels.
Occupation
Some occupations require workers to swim, such as abalone and pearl diving, and spearfishing.
Swimming is used to rescue people in the water who are in distress, including exhausted swimmers, non-swimmers who have accidentally entered the water, and others who have come to harm on the water. Lifeguards or volunteer lifesavers are deployed at many pools and beaches worldwide to fulfil this purpose, and they, as well as rescue swimmers, may use specific swimming styles for rescue purposes.
Swimming is also used in marine biology to observe plants and animals in their natural habitat. Other sciences use swimming; for example, Konrad Lorenz swam with geese as part of his studies of animal behavior.
Swimming also has military purposes. Military swimming is usually done by special operation forces, such as Navy SEALs and US Army Special Forces. Swimming is used to approach a location, gather intelligence, engage in sabotage or combat, and subsequently depart. This may also include airborne insertion into water or exiting a submarine while it is submerged. Due to regular exposure to large bodies of water, all recruits in the United States Navy, Marine Corps, and Coast Guard are required to complete basic swimming or water survival training.
Swimming is also a professional sport. Companies sponsor swimmers who have the skills to compete at the international level. Many swimmers compete competitively to represent their home countries in the Olympics. Professional swimmers may also earn a living as entertainers, performing in water ballets.
Locomotion
Locomotion by swimming over brief distances is frequent when alternatives are precluded. There have been cases of political refugees swimming in the Baltic Sea and of people jumping in the water and swimming ashore from vessels not intended to reach land where they planned to go.
Risks
There are many risks associated with voluntary or involuntary human presence in water, which may result in death directly or through drowning asphyxiation. Swimming is both the goal of much voluntary presence and the prime means of regaining land in accidental situations.
Most recorded water deaths fall into these categories:
Panic occurs when an inexperienced swimmer or a nonswimmer becomes mentally overwhelmed by the circumstances of their immersion, leading to sinking and drowning. Occasionally, panic kills through hyperventilation, even in shallow water.
Exhaustion can make a person unable to sustain efforts to swim or tread water, often leading to death through drowning. An adult with fully developed and extended lungs has generally positive or at least neutral buoyancy, and can float with modest effort when calm and in still water. A small child has negative buoyancy and must make a sustained effort to avoid sinking rapidly.
Hypothermia, in which a person loses critical core temperature, can lead to unconsciousness or heart failure.
Dehydration from prolonged exposure to hypertonic salt water—or, less frequently, salt water aspiration syndrome where inhaled salt water creates foam in the lungs that restricts breathing—can cause loss of physical control or kill directly without actual drowning. Hypothermia and dehydration also kill directly, without causing drowning, even when the person wears a life vest.
Blunt trauma in a fast moving flood or river water can kill a swimmer outright, or lead to their drowning.
Adverse effects of swimming can include:
Exostosis, an abnormal bony overgrowth narrowing the ear canal due to frequent, long-term splashing or filling of cold water into the ear canal, also known as surfer's ear
Infection from water-borne bacteria, viruses, or parasites
Chlorine inhalation (in swimming pools)
Heart attacks while swimming (the primary cause of sudden death among triathlon participants, occurring at a rate of 1 to 2 per 100,000 participations; a worked example of what this rate implies follows this list)
Adverse encounters with aquatic life:
Stings from sea lice, jellyfish, fish, seashells, and some species of coral
Puncture wounds caused by crabs, lobsters, sea urchins, zebra mussels, stingrays, flying fish, sea birds, and debris
Hemorrhaging bites from fish, marine mammals, and marine reptiles, occasionally resulting from predation
Venomous bites from sea snakes and certain species of octopus
Electrocution or mild shock from electric eels and electric rays
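As a rough illustration of what a rate of 1 to 2 per 100,000 participations means in practice, the short Python sketch below converts it into an expected event count and a cumulative risk. The race size, the number of career starts, and the mid-range rate of 1.5 per 100,000 are hypothetical round numbers chosen for the example, not figures from the sources above.

    # Illustrative probability sketch for a per-participation event rate.
    # The rate (1 to 2 per 100,000 participations) comes from the text above;
    # the race size and number of career starts below are hypothetical examples.

    def expected_events(rate_per_100k: float, participations: int) -> float:
        """Expected number of events among a given number of participations."""
        return rate_per_100k / 100_000 * participations

    def cumulative_risk(rate_per_100k: float, participations: int) -> float:
        """Probability of at least one event over independent participations."""
        p = rate_per_100k / 100_000
        return 1 - (1 - p) ** participations

    if __name__ == "__main__":
        # A hypothetical mass-participation event with 5,000 swimmers:
        print(f"expected events in a 5,000-entrant race: {expected_events(1.5, 5_000):.3f}")
        # A hypothetical athlete with 500 race starts over a career:
        print(f"cumulative risk over 500 starts: {cumulative_risk(1.5, 500):.5f}")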
Around any pool area, safety equipment is often important, and is a zoning requirement for most residential pools in the United States. Supervision by personnel trained in rescue techniques is required at most competitive swimming meets and public pools.
Lessons
Traditionally, children were not considered able to swim independently until 4 years of age, although infant swimming lessons are now recommended to prevent drowning.
In Sweden, Denmark, Norway, Estonia and Finland, the curriculum for the fifth grade (fourth grade in Estonia) states that all children should learn to swim as well as how to handle emergencies near water. Most commonly, children are expected to be able to swim a set distance, at least part of it on their back, after first falling into deep water and getting their head under water. Even though about 95 percent of Swedish school children know how to swim, drowning remains the third most common cause of death among children.
In both the Netherlands and Belgium swimming lessons under school time (schoolzwemmen, school swimming) are supported by the government. Most schools provide swimming lessons. There is a long tradition of swimming lessons in the Netherlands and Belgium, the Dutch translation for the breaststroke swimming style is even schoolslag (schoolstroke). In France, swimming is a compulsory part of the curriculum for primary schools. Children usually spend one semester per year learning swimming during CP/CE1/CE2/CM1 (1st, 2nd, 3rd and 4th grade).
In many places, swimming lessons are provided by local swimming pools, both those run by the local authority and by private leisure companies. Many schools also include swimming lessons into their Physical Education curricula, provided either in the schools' own pool or in the nearest public pool.
In the UK, the "Top-ups scheme" calls for school children who cannot swim by the age of 11 to receive intensive daily lessons. Children who have not reached Great Britain's National Curriculum standard of swimming 25 meters by the time they leave primary school receive a half-hour lesson every day for two weeks during term-time.
In Canada and Mexico there has been a call to include swimming in public school curriculum.
In the United States, the Infant Swimming Resource (ISR) initiative provides lessons for infants to cope with an emergency in which they have fallen into the water. They are taught the roll-back-to-float sequence (holding their breath underwater, rolling onto their back, floating unassisted, and resting and breathing until help arrives), both clothed and unclothed. Rolling while clothed is practiced as a simulation of an accidental fall into the water while walking or crawling nearby.
In Switzerland, swimming lessons for babies are popular as a way of helping them get used to being in the water. At the competitive level, unlike in other countries such as the Commonwealth countries, swimming teams are not affiliated with educational institutions (high schools and universities), but rather with cities or regions.
Clothing and equipment
Swimsuits
Standard everyday clothing is usually impractical for swimming and is unsafe under some circumstances. Most cultures today expect swimmers to wear swimsuits.
Men's swimsuits commonly resemble shorts, or briefs. Men's casual swimsuits (for example, boardshorts) are rarely skintight, unlike competitive swimwear, like jammers or diveskins. In most cases, boys and men swim with their upper body exposed, except in countries where custom or law prohibits it in a public setting, or for practical reasons such as sun protection.
Modern women's swimsuits are generally skintight, covering the pubic region and the breasts (see bikini). Women's swimwear may also cover the midriff. Women's swimwear is often a fashion statement, and whether it is modest or not is a subject of debate by many groups, religious and secular.
Competitive swimwear is built so that the wearer can swim faster and more efficiently. Modern competitive swimwear is skintight and lightweight. There are many kinds of competitive swimwear for each gender. It is used in aquatic competitions, such as water polo, swim racing, diving, and rowing.
Wetsuits provide both thermal insulation and flotation. Many swimmers lack buoyancy in the leg. The wetsuit provides additional volume at a lower density and therefore improves buoyancy and trim while swimming. It provides insulation between the skin and water which reduces heat loss. The wetsuit is the usual choice for those who swim in cold water for long periods of time, as it reduces susceptibility to hypothermia.
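Archimedes' principle makes the buoyancy claim concrete: a wetsuit adds volume whose density is far below that of water, so the extra displaced water outweighs the added suit mass. The Python sketch below is a rough illustration only; the swimmer's mass and volume, the neoprene density, and the suit volume are assumed round numbers rather than measured values.

    # Rough Archimedes'-principle sketch of why a wetsuit adds buoyancy.
    # All densities and volumes are assumed illustrative values, not measurements.

    WATER_DENSITY = 1000.0    # kg/m^3 (fresh water; sea water is closer to 1025)
    NEOPRENE_DENSITY = 170.0  # kg/m^3, assumed typical foamed neoprene
    G = 9.81                  # m/s^2

    def net_buoyant_force(body_mass_kg: float, body_volume_m3: float,
                          suit_volume_m3: float = 0.0) -> float:
        """Net upward force in newtons on a fully submerged swimmer (positive = floats)."""
        displaced_mass = (body_volume_m3 + suit_volume_m3) * WATER_DENSITY
        total_mass = body_mass_kg + suit_volume_m3 * NEOPRENE_DENSITY
        return (displaced_mass - total_mass) * G

    if __name__ == "__main__":
        # Hypothetical swimmer: 75 kg with 0.073 m^3 of body volume (slightly negative buoyancy).
        print("no suit:  %+.1f N" % net_buoyant_force(75.0, 0.073))
        # Adding roughly 4 litres of neoprene tips the balance clearly positive.
        print("wetsuit:  %+.1f N" % net_buoyant_force(75.0, 0.073, suit_volume_m3=0.004))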
Some people also choose to wear no clothing while swimming. In some European countries public pools allow clothes-free swimming and many countries have beaches where one can swim naked. It is legal to swim naked in the sea at all UK beaches. It was common for males to swim naked in a public setting up to the early 20th century. Today, swimming naked can be a rebellious activity or merely a casual one.
Accessories
Ear plugs can prevent water from getting in the ears.
Noseclips can prevent water from getting in the nose. However, using a noseclip in competitive swimming can put the swimmer at a disadvantage, so many competitive swimmers choose not to use one. For this reason, nose clips are primarily used for synchronized swimming and recreational swimming.
Goggles protect the eyes from chlorinated water, and improve underwater visibility. Tinted goggles protect the eyes from sunlight that reflects from the bottom of the pool.
Swim caps keep the body streamlined and protect the hair from chlorinated water, though they are not entirely watertight.
Kickboards are used to keep the upper body afloat while exercising the lower body.
Pull buoys are used to keep the lower body afloat while exercising the upper body.
Swimfins are used in training to elongate the kick and improve technique and speed. Fins also build upper calf muscles. Fins provide a significantly greater and more efficient conversion of muscle power to thrust than available from the feet, and allow the powerful leg muscles to be used effectively for propulsion through water. The value of fins as an active aid in the teaching, learning and practice of swimming has long been recognised. In the US, as early as 1947, they were used experimentally to build the confidence of reluctant beginners in swimming, while a 1950 YMCA lifesaving and water safety manual reminded swimming instructors how "flippers can be used to great advantage for treading water, surface diving, towing, underwater searching and supporting a tired swimmer". In 1967, research was conducted on fin use in teaching the crawl stroke. During the 1970s, the so-called "flipper-float" method came into vogue in Europe with the aim of helping beginners learn to swim faster and more safely.
Hand paddles are used to increase resistance during arm movements, with the goal of improving technique and power.
Finger paddles have a similar effect to hand paddles, but due to their smaller size they create less resistance. They also help improve a swimmer's 'catch' in the water.
Snorkels are used to help improve and maintain a good head position in the water. They may also be used by some during physical therapy.
Pool noodles are used to keep the user afloat during the time in the water.
Safety fencing and equipment is mandatory at public pools and a zoning requirement at most residential pools in the United States.
Swimming parachutes are used in competitive training to add resistance in the water, helping athletes build power in the central movements of the stroke.
Inflatable armbands are swimming aids designed to provide buoyancy, helping the wearer float.
See also
Aquatic ape hypothesis
Aquatic locomotion
List of swimmers
List of water sports
Microswimmer
Mixed bathing
Resistance swimming
Stunt swimming
Swimhiking
Swimming machine
Total Immersion
Winter swimming
References
Bibliography
Maniscalco F., Il nuoto nel mondo greco romano, Naples 1993.
Mehl H., Antike Schwimmkunst, Munchen 1927.
Schuster G., Smits W. & Ullal J., Thinkers of the Jungle. Tandem Verlag 2008.
WebMD (n.d.). "Health Benefits of Swimming". https://www.webmd.com/fitness-exercise/a-z/swimming-for-fitness
"The Benefits of Swimming". Swim England.
"Swimming and Arthritis". Arthritis Foundation.
"Water Safety Tips". American Red Cross.
"Water Safety". Safe Kids Worldwide.
External links
Swimmingstrokes.info, Overview of 150 historical and less known swimming-strokes
Thiamine deficiency
Thiamine deficiency is a medical condition of low levels of thiamine (vitamin B1). A severe and chronic form is known as beriberi. The name beriberi was possibly borrowed in the 18th century from the Sinhalese phrase බැරි බැරි (bæri bæri, “I cannot, I cannot”), owing to the weakness caused by the condition. The two main types in adults are wet beriberi and dry beriberi. Wet beriberi affects the cardiovascular system, resulting in a fast heart rate, shortness of breath, and leg swelling. Dry beriberi affects the nervous system, resulting in numbness of the hands and feet, confusion, trouble moving the legs, and pain. A form with loss of appetite and constipation may also occur. Another type, acute beriberi, found mostly in babies, presents with loss of appetite, vomiting, lactic acidosis, changes in heart rate, and enlargement of the heart.
Risk factors include a diet of mostly white rice, alcoholism, dialysis, chronic diarrhea, and taking high doses of diuretics. In rare cases, it may be due to a genetic condition that results in difficulties absorbing thiamine found in food. Wernicke encephalopathy and Korsakoff syndrome are forms of dry beriberi. Diagnosis is based on symptoms, low levels of thiamine in the urine, high blood lactate, and improvement with thiamine supplementation.
Treatment is by thiamine supplementation, either by mouth or by injection. With treatment, symptoms generally resolve in a few weeks. The disease may be prevented at the population level through the fortification of food.
Thiamine deficiency is rare in the United States. It remains relatively common in sub-Saharan Africa. Outbreaks have been seen in refugee camps. Thiamine deficiency has been described for thousands of years in Asia, and became more common in the late 1800s with the increased processing of rice.
Signs and symptoms
Symptoms of beriberi include weight loss, emotional disturbances, impaired sensory perception, weakness and pain in the limbs, and periods of irregular heart rate. Edema (swelling of bodily tissues) is common. It may increase the amount of lactic acid and pyruvic acid within the blood. In advanced cases, the disease may cause high-output cardiac failure and death.
Symptoms may occur concurrently with those of Wernicke's encephalopathy, a primarily neurological thiamine deficiency-related condition.
Beriberi is divided into four categories. The first three are historical and the fourth, gastrointestinal beriberi, was recognized in 2004:
Dry beriberi especially affects the peripheral nervous system.
Wet beriberi especially affects the cardiovascular system and other bodily systems.
Infantile beriberi affects the babies of malnourished mothers.
Gastrointestinal beriberi affects the digestive system and other bodily systems.
Dry beriberi
Dry beriberi causes wasting and partial paralysis resulting from damaged peripheral nerves. It is also referred to as endemic neuritis. It is characterized by:
Difficulty with walking
Tingling or loss of sensation (numbness) in hands and feet
Loss of tendon reflexes
Loss of muscle function or paralysis of the lower legs
Mental confusion/speech difficulties
Pain
Involuntary eye movements (nystagmus)
Vomiting
A selective impairment of the large proprioceptive sensory fibers without motor impairment can occur and present as a prominent sensory ataxia, which is a loss of balance and coordination due to loss of the proprioceptive inputs from the periphery and loss of position sense.
Brain disease
Wernicke's encephalopathy (WE), Korsakoff syndrome (also called alcohol amnestic disorder), and Wernicke–Korsakoff syndrome are forms of dry beriberi.
Wernicke's encephalopathy is the most frequently encountered manifestation of thiamine deficiency in Western society, though it may also occur in patients with impaired nutrition from other causes, such as gastrointestinal disease, those with HIV/AIDS, and with the injudicious administration of parenteral glucose or hyperalimentation without adequate B-vitamin supplementation. This is a striking neuro-psychiatric disorder characterized by paralysis of eye movements, abnormal stance and gait, and markedly deranged mental function.
Korsakoff syndrome, in general, is considered to occur with deterioration of brain function in patients initially diagnosed with WE. This is an amnestic-confabulatory syndrome characterized by retrograde and anterograde amnesia, impairment of conceptual functions, and decreased spontaneity and initiative.
Alcoholics may have thiamine deficiency because of:
Inadequate nutritional intake: Alcoholics tend to consume less than the recommended amount of thiamine.
Decreased uptake of thiamine from the GI tract: Active transport of thiamine into enterocytes is disturbed during acute alcohol exposure.
Liver thiamine stores are reduced due to hepatic steatosis or fibrosis.
Impaired thiamine utilization: Magnesium, which is required for the binding of thiamine to thiamine-using enzymes within the cell, is also deficient due to chronic alcohol consumption. The inefficient use of any thiamine that does reach the cells will further exacerbate the thiamine deficiency.
Ethanol per se inhibits thiamine transport in the gastrointestinal system and blocks phosphorylation of thiamine to its cofactor form (ThDP).
Following improved nutrition and the removal of alcohol consumption, some impairments linked with thiamine deficiency are reversed, in particular poor brain functionality, although in more severe cases, Wernicke–Korsakoff syndrome leaves permanent damage. (See delirium tremens.)
Wet beriberi
Wet beriberi affects the heart and circulatory system. It is sometimes fatal, as it causes a combination of heart failure and weakening of the capillary walls, which causes the peripheral tissues to become edematous. Wet beriberi is characterized by:
Increased heart rate
Vasodilation leading to decreased systemic vascular resistance, and high-output heart failure
Elevated jugular venous pressure
Dyspnea (shortness of breath) on exertion
Paroxysmal nocturnal dyspnea
Peripheral edema (swelling of lower legs) or generalized edema (swelling throughout the body)
Dilated cardiomyopathy
Gastrointestinal beriberi
Gastrointestinal beriberi causes abdominal pain. It is characterized by:
Abdominal pain
Nausea
Vomiting
Lactic acidosis
Infants
Infantile beriberi usually occurs between two and six months of age in children whose mothers have inadequate thiamine intake. It may present as either wet or dry beriberi.
In the acute form, the baby develops dyspnea and cyanosis and soon dies of heart failure. These symptoms may be described in infantile beriberi:
Hoarseness, where the child appears to cry or moan but emits no sound or only faint moans, caused by nerve paralysis
Weight loss, becoming thinner and then marasmic as the disease progresses
Vomiting
Diarrhea
Pale skin
Edema
Ill temper
Alterations of the cardiovascular system, especially tachycardia (rapid heart rate)
Convulsions occasionally observed in the terminal stages
Cause
Beriberi is often caused by eating a diet with a very high proportion of calorie-rich polished rice (common in Asia) or cassava root (common in sub-Saharan Africa), with few if any thiamine-containing animal products or vegetables.
It may also be caused by shortcomings other than inadequate intake – diseases or operations on the digestive tract, alcoholism, dialysis or genetic deficiencies. All those causes mainly affect the central nervous system, and provoke the development of Wernicke's encephalopathy.
Wernicke's disease is one of the most prevalent neurological or neuropsychiatric diseases. In autopsy series, features of Wernicke lesions are observed in approximately 2% of general cases. Medical record research shows that about 85% had not been diagnosed, although only 19% would be asymptomatic. In children, only 58% were diagnosed. In alcohol abusers, autopsy series showed neurological damage at rates of 12.5% or more. Mortality caused by Wernicke's disease is reported to reach 17% of cases, corresponding to about 3.4 per 1,000 people, or roughly 25 million people alive today. The number of people with Wernicke's disease may be even higher, considering that early stages may have dysfunctions prior to the production of observable lesions at necropsy. In addition, uncounted numbers of people can experience fetal damage and subsequent diseases.
Genetics
Genetic diseases of thiamine transport are rare but serious. Thiamine responsive megaloblastic anemia syndrome (TRMA) with diabetes mellitus and sensorineural deafness is an autosomal recessive disorder caused by mutations in the gene SLC19A2, a high affinity thiamine transporter. TRMA patients do not show signs of systemic thiamine deficiency, suggesting redundancy in the thiamine transport system. This has led to the discovery of a second high-affinity thiamine transporter, SLC19A3. Leigh disease (subacute necrotising encephalomyelopathy) is an inherited disorder that affects mostly infants in the first years of life and is invariably fatal. Pathological similarities between Leigh disease and WE led to the hypothesis that the cause was a defect in thiamine metabolism. One of the most consistent findings has been an abnormality of the activation of the pyruvate dehydrogenase complex.
Mutations in the SLC19A3 gene have been linked to biotin-thiamine responsive basal ganglia disease, which is treated with pharmacological doses of thiamine and biotin, another B vitamin.
Other disorders in which a putative role for thiamine has been implicated include subacute necrotising encephalomyelopathy, opsoclonus myoclonus syndrome (a paraneoplastic syndrome), and Nigerian seasonal ataxia (or African seasonal ataxia). In addition, several inherited disorders of ThDP-dependent enzymes have been reported, which may respond to thiamine treatment.
Pathophysiology
Thiamine in the human body has a half-life of 17 days and is quickly exhausted, particularly when metabolic demands exceed intake. A derivative of thiamine, thiamine pyrophosphate (TPP), is a cofactor involved in the citric acid cycle, as well as connecting the breakdown of sugars with the citric acid cycle. The citric acid cycle is a central metabolic pathway involved in the regulation of carbohydrate, lipid, and amino acid metabolism, and its disruption due to thiamine deficiency inhibits the production of many molecules including the neurotransmitters glutamic acid and GABA. Additionally, thiamine may also be directly involved in neuromodulation.
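The 17-day half-life can be read as simple first-order elimination. The Python sketch below assumes ideal exponential decay with no further intake, which is a simplification of real thiamine kinetics; it is meant only to show the time scale on which body stores shrink once intake stops.

    # First-order decay sketch for thiamine stores, assuming the 17-day half-life
    # quoted above and zero intake. Real kinetics are more complex; this only
    # illustrates the time scale on which stores are depleted.
    import math

    HALF_LIFE_DAYS = 17.0

    def fraction_remaining(days: float, half_life: float = HALF_LIFE_DAYS) -> float:
        """Fraction of the initial thiamine store left after `days` with no intake."""
        return math.exp(-math.log(2) * days / half_life)

    if __name__ == "__main__":
        for d in (7, 17, 34, 60):
            print(f"after {d:3d} days: {fraction_remaining(d):5.1%} of the store remains")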
Diagnosis
A positive diagnosis test for thiamine deficiency involves measuring the activity of the enzyme transketolase in erythrocytes (Erythrocyte transketolase activation assay). Alternatively, thiamine and its phosphorylated derivatives can directly be detected in whole blood, tissues, foods, animal feed, and pharmaceutical preparations following the conversion of thiamine to fluorescent thiochrome derivatives (thiochrome assay) and separation by high-performance liquid chromatography (HPLC). Capillary electrophoresis (CE) techniques and in-capillary enzyme reaction methods have emerged as alternative techniques in quantifying and monitoring thiamine levels in samples.
The normal thiamine concentration in EDTA-blood is about 20–100 μg/L.
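In practice, assays such as the thiochrome/HPLC method are quantified against a calibration curve of standards. The Python sketch below fits a straight line to hypothetical standard readings, estimates a sample concentration, and flags it against the 20–100 μg/L reference interval quoted above; the standard concentrations, signal values, and patient reading are invented for illustration only.

    # Illustrative calibration-curve workflow for a thiamine assay.
    # The standard concentrations and signal values are hypothetical; only the
    # 20-100 ug/L reference interval comes from the text above.
    import numpy as np

    REFERENCE_RANGE_UG_L = (20.0, 100.0)

    # Hypothetical calibration standards (ug/L) and their measured signals (arbitrary units)
    standards_ug_l = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
    signals = np.array([0.02, 0.26, 0.49, 1.01, 1.98])

    # Least-squares straight-line fit: signal = slope * concentration + intercept
    slope, intercept = np.polyfit(standards_ug_l, signals, 1)

    def concentration_from_signal(signal: float) -> float:
        """Invert the calibration line to estimate concentration in ug/L."""
        return (signal - intercept) / slope

    def interpret(conc_ug_l: float) -> str:
        low, high = REFERENCE_RANGE_UG_L
        if conc_ug_l < low:
            return "below reference range (possible deficiency)"
        if conc_ug_l > high:
            return "above reference range"
        return "within reference range"

    if __name__ == "__main__":
        measured_signal = 0.15  # hypothetical patient sample
        conc = concentration_from_signal(measured_signal)
        print(f"estimated thiamine: {conc:.1f} ug/L -> {interpret(conc)}")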
Treatment
Many people with beriberi can be treated with thiamine alone. When thiamine is given intravenously (and later orally), rapid and dramatic recovery generally occurs within 24 hours.
Improvements of peripheral neuropathy may require several months of thiamine treatment.
Epidemiology
Beriberi is a recurrent nutritional disease in detention houses, even in this century. In 1999, an outbreak of beriberi occurred in a detention center in Taiwan. High rates of illness and death from beriberi in overcrowded Haitian jails in 2007 were traced to the traditional practice of washing rice before cooking; this removed a nutritious coating which had been applied to the rice after processing (enriched white rice). In the Ivory Coast, among a group of prisoners with heavy punishment, 64% were affected by beriberi. Before beginning treatment, prisoners exhibited symptoms of dry or wet beriberi with neurological signs (tingling: 41%), cardiovascular signs (dyspnoea: 42%, thoracic pain: 35%), and edemas of the lower limbs (51%). With treatment, the rate of healing was about 97%.
Populations under extreme stress may be at higher risk for beriberi. Displaced populations, such as refugees from war, are susceptible to micronutritional deficiency, including beriberi. The severe nutritional deprivation caused by famine also can cause beriberi, although symptoms may be overlooked in clinical assessment or masked by other famine-related problems. An extreme weight-loss diet can, rarely, induce a famine-like state and the accompanying beriberi.
Workers on Chinese squid ships are at elevated risk of beriberi due to the simple, carbohydrate-rich diet they are fed and the long periods spent at sea between shore visits. Between 2013 and 2021, 15 workers on 14 ships died with symptoms of beriberi.
History
Earliest written descriptions of thiamine deficiency are from ancient China in the context of Chinese medicine. One of the earliest is by Ge Hong in his book Zhou hou bei ji fang (Emergency Formulas to Keep up Your Sleeve) written sometime during the third century. Hong called the illness by the name jiao qi, which can be interpreted as "foot qi". He described the symptoms to include swelling, weakness, and numbness of the feet. He also acknowledged that the illness could be deadly, and claimed that it could be cured by eating certain foods, such as fermented soybeans in wine. Better known examples of early descriptions of "foot qi" are by Chao Yuanfang (who lived during 550–630) in his book Zhu bing yuan hou lun (Sources and Symptoms of All Diseases) and by Sun Simiao (581–682) in his book Bei ji qian jin yao fang (Essential Emergency Formulas Worth a Thousand in Gold).
In the mid-19th century, interest in beriberi steadily rose as the disease became more noticeable with changes in diet in East and Southeast Asia. There was a steady uptick in medical publications, reaching one hundred and eighty-one publications between 1880 and 1889, and hundreds more in the following decades. The link to white rice was clear to Western doctors, but a confounding factor was that some other foods like meat failed to prevent beriberi, so it could not be easily explained as a lack of known chemicals like carbon or nitrogen. With no knowledge of vitamins, the etiology of beriberi was among the most hotly debated subjects in Victorian medicine.
The first successful preventative measure against beriberi was discovered by Takaki Kanehiro, a British-trained Japanese medical doctor of the Imperial Japanese Navy, in the mid-1880s. Beriberi was a serious problem in the Japanese navy; sailors fell ill an average of four times a year in the period 1878 to 1881, and 35% were cases of beriberi. In 1882, Takaki learned of a very high incidence of beriberi among cadets on a training mission from Japan to Hawaii, via New Zealand and South America. The voyage lasted more than nine months and resulted in 169 cases of sickness and 25 deaths on a ship of 376 men. Takaki observed that beriberi was common among low-ranking crew who were often provided free rice, thus ate little else, but not among crews of Western navies, nor among Japanese officers who consumed a more varied diet. With the support of the Japanese Navy, he conducted an experiment in which another ship was deployed on the same route, except that its crew was fed a diet of meat, fish, barley, rice, and beans. At the end of the voyage, this crew had only 14 cases of beriberi and no deaths. This emphasis on varied diet contradicted observations by other doctors, and Takaki's carbon-based etiology was just as incorrect as similar theories before him, but the results of his experiment impressed the Japanese Navy, which adopted his proposed solution. By 1887 beriberi had been completely eliminated on Navy ships.
Takaki's experiment was described favorably in The Lancet, but his incorrect etiology was not taken seriously. In 1897, Christiaan Eijkman, a Dutch physician and pathologist, published his mid-1880s experiments showing that feeding unpolished rice (instead of the polished variety) to chickens helped to prevent beriberi. This was the first experiment to show that not a major chemical but some minor nutrient was the true cause of beriberi. The following year, Sir Frederick Hopkins postulated that some foods contained "accessory factors"—in addition to proteins, carbohydrates, fats, and salt—that were necessary for the functions of the human body. In 1901, Gerrit Grijns, a Dutch physician and assistant to Christiaan Eijkman in the Netherlands, correctly interpreted beriberi as a deficiency syndrome, and between 1910 and 1913, Edward Bright Vedder established that an extract of rice bran is a treatment for beriberi. In 1929, Eijkman and Hopkins were awarded the Nobel Prize for Physiology or Medicine for their discoveries.
Japanese Army denialism
Although the identification of beriberi as a deficiency syndrome was proven beyond a doubt by 1913, a Japanese group headed by Mori Ōgai and backed by Tokyo Imperial University continued to deny this conclusion until 1926. In 1886, Mori, then working in the Japanese Army Medical Bureau, asserted that white rice was sufficient as a diet for soldiers. Simultaneously, Navy surgeon general Takaki Kanehiro published the groundbreaking results described above. Mori, who had been educated under German doctors, responded that Takaki was a "fake doctor" due to his lack of prestigious medical background, while Mori himself and his fellow graduates of Tokyo Imperial University constituted the only "real doctors" in Japan and that they alone were capable of "experimental induction", although Mori himself had not conducted any beriberi experiments.
The Japanese Navy sided with Takaki and adopted his suggestions. In order to prevent himself and the Army from losing face, Mori assembled a team of doctors and professors from Tokyo Imperial University and the Japanese Army who proposed that beriberi was caused by an unknown pathogen, which they described as etowasu (from the German Etwas, meaning "something"). They employed various social tactics to denounce vitamin deficiency experiments and prevent them from being published, while beriberi ravaged the Japanese Army. During the First Sino-Japanese War and Russo-Japanese War, Army soldiers continued to die in mass numbers from beriberi, while Navy sailors survived. In response to this severe loss of life, in 1907, the Army ordered the formation of a Beriberi Emergency Research Council, headed by Mori. Its members pledged to find the cause of beriberi. By 1919, with most Western doctors acknowledging that beriberi was a deficiency syndrome, the Emergency Research Council began conducting experiments using various vitamins, but stressed that "more research was necessary". During this period, more than 300,000 Japanese soldiers contracted beriberi and over 27,000 died.
Mori died in 1922. The Beriberi Research Council disbanded in 1925, and by the time Eijkman and Hopkins were awarded the Nobel Prize, all of its members had acknowledged that beriberi was a deficiency syndrome.
Etymology
Although according to the Oxford English Dictionary, the term "beriberi" comes from a Sinhalese phrase meaning "weak, weak" or "I cannot, I cannot", the word being duplicated for emphasis, the origin of the phrase is questionable. It has also been suggested to come from Hindi, Arabic, and a few other languages, with many meanings like "weakness", "sailor", and even "sheep". Such suggested origins were listed by Heinrich Botho Scheube, among others. Edward Vedder wrote in his book Beriberi (1913) that "it is impossible to definitely trace the origin of the word beriberi". The word berbere was used in writing at least as early as 1568 by Diogo do Couto, when he described the deficiency in India.
Kakke, which is a Japanese synonym for thiamine deficiency, comes from the way "jiao qi" is pronounced in Japanese. "Jiao qi" is an old term used in Chinese medicine to describe beriberi. "Kakke" is supposed to have entered the Japanese language sometime between the sixth and eighth centuries.
Other animals
Poultry
As most feedstuffs used in poultry diets contain adequate quantities of vitamins to meet the requirements of this species, deficiencies of this vitamin do not occur with commercial diets. This was, at least, the opinion in the 1960s.
Mature chickens show signs three weeks after being fed a deficient diet. In young chicks, it can appear before two weeks of age. Onset is sudden in young chicks, with anorexia and an unsteady gait. Later on, locomotor signs begin, with an apparent paralysis of the flexor of the toes. The characteristic position is called "stargazing", with the affected animal sitting on its hocks with its head thrown back in a posture called opisthotonos. Response to administration of the vitamin is rather quick, occurring a few hours later.
Ruminants
Polioencephalomalacia (PEM) is the most common thiamine deficiency disorder in young ruminant and nonruminant animals. Symptoms of PEM include a profuse, but transient, diarrhea, listlessness, circling movements, stargazing or opisthotonus (head drawn back over neck), and muscle tremors. The most common cause is high-carbohydrate feeds, leading to the overgrowth of thiaminase-producing bacteria, but dietary ingestion of thiaminase (e.g., in bracken fern), or inhibition of thiamine absorption by high sulfur intake are also possible. Another cause of PEM is Clostridium sporogenes or Bacillus aneurinolyticus infection. These bacteria produce thiaminases that can cause an acute thiamine deficiency in the affected animal.
Snakes
Snakes that consume a diet largely composed of goldfish and feeder minnows are susceptible to developing thiamine deficiency. This is often a problem observed in captivity when keeping garter and ribbon snakes that are fed a goldfish-exclusive diet, as these fish contain thiaminase, an enzyme that breaks down thiamine.
Wild birds and fish
Thiamine deficiency has been identified as the cause of a paralytic disease affecting wild birds in the Baltic Sea area dating back to 1982. In this condition, there is difficulty in keeping the wings folded along the side of the body when resting, loss of the ability to fly and voice, with eventual paralysis of the wings and legs and death. It affects primarily 0.5–1 kg-sized birds such as the European herring gull (Larus argentatus), common starling (Sturnus vulgaris), and common eider (Somateria mollissima). Researchers noted, "Because the investigated species occupy a wide range of ecological niches and positions in the food web, we are open to the possibility that other animal classes may develop thiamine deficiency, as well." (p. 12006)
In the counties of Blekinge and Skåne, mass deaths of several bird species, especially the European herring gull, have been observed since the early 2000s. More recently, species of other classes seem to be affected. High mortality of salmon (Salmo salar) in the river Mörrumsån is reported, and mammals such as the Eurasian elk (Alces alces) have died in unusually high numbers. Lack of thiamine is the common denominator where analysis has been done. In April 2012, the County Administrative Board of Blekinge found the situation so alarming that they asked the Swedish government to set up a closer investigation.
References
Further reading
External links
Nerve agent
Nerve agents, sometimes also called nerve gases, are a class of organic chemicals that disrupt the mechanisms by which nerves transfer messages to organs. The disruption is caused by the blocking of acetylcholinesterase (AChE), an enzyme that catalyzes the breakdown of acetylcholine, a neurotransmitter. Nerve agents are irreversible acetylcholinesterase inhibitors used as poison.
Poisoning by a nerve agent leads to constriction of pupils, profuse salivation, convulsions, and involuntary urination and defecation, with the first symptoms appearing in seconds after exposure. Death by asphyxiation or cardiac arrest may follow in minutes due to the loss of the body's control over respiratory and other muscles. Some nerve agents are readily vaporized or aerosolized, and the primary portal of entry into the body is the respiratory system. Nerve agents can also be absorbed through the skin, requiring that those likely to be subjected to such agents wear a full body suit in addition to a respirator.
Nerve agents are generally colorless and tasteless liquids. Nerve agents evaporate at varying rates depending on the substance. None are gases in normal environments. The popular term "nerve gas" is inaccurate.
Agents Sarin and VX are odorless; Tabun has a slightly fruity odor and Soman has a slight camphor odor.
Biological effects
Nerve agents attack the nervous system. All such agents function the same way resulting in cholinergic crisis: they inhibit the enzyme acetylcholinesterase, which is responsible for the breakdown of acetylcholine (ACh) in the synapses between nerves that control whether muscle tissues are to relax or contract. If the agent cannot be broken down, muscles are prevented from receiving 'relax' signals and they are effectively paralyzed. It is the compounding of this paralysis throughout the body that quickly leads to more severe complications, including the heart and the muscles used for breathing. Because of this, the first symptoms usually appear within 30 seconds of exposure and death can occur via asphyxiation or cardiac arrest in a few minutes, depending upon the dose received and the agent used.
Initial symptoms following exposure to nerve agents (like Sarin) are a runny nose, tightness in the chest, and constriction of the pupils. Soon after, the victim will have difficulty breathing and will experience nausea and salivation. As the victim continues to lose control of bodily functions, involuntary salivation, lacrimation, urination, defecation, gastrointestinal pain and vomiting will be experienced. Blisters and burning of the eyes and/or lungs may also occur. This phase is followed by initially myoclonic jerks (muscle jerks) followed by status epilepticus–type epileptic seizure. Death then comes via complete respiratory depression, most likely via the excessive peripheral activity at the neuromuscular junction of the diaphragm.
The effects of nerve agents are long-lasting and increase with continued exposure. Survivors of nerve agent poisoning almost invariably develop chronic neurological damage and related psychiatric effects. Possible effects that can last for at least two to three years after exposure include blurred vision, tiredness, declined memory, hoarse voice, palpitations, sleeplessness, shoulder stiffness and eye strain. In people exposed to nerve agents, serum and erythrocyte acetylcholinesterase in the long-term are noticeably lower than normal and tend to be lower the worse the persisting symptoms are.
Mechanism of action
When a normally functioning motor nerve is stimulated, it releases the neurotransmitter acetylcholine, which transmits the impulse to a muscle or organ. Once the impulse is sent, the enzyme acetylcholinesterase immediately breaks down the acetylcholine in order to allow the muscle or organ to relax.
Nerve agents disrupt the nervous system by inhibiting the function of the enzyme acetylcholinesterase by forming a covalent bond with its active site, where acetylcholine would normally be broken down (undergo hydrolysis). Acetylcholine thus builds up and continues to act so that any nerve impulses are continually transmitted and muscle contractions do not stop. This same action also occurs at the gland and organ levels, resulting in uncontrolled drooling, tearing of the eyes (lacrimation) and excess production of mucus from the nose (rhinorrhea).
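A toy model can make the build-up mechanism concrete: if acetylcholine is released at a steady rate and cleared at a rate proportional to the amount of active acetylcholinesterase, then reducing the active-enzyme fraction raises the level at which the transmitter settles. The Python sketch below uses arbitrary illustrative rate constants, not physiological parameters, and is not a model of any specific agent.

    # Toy model of acetylcholine (ACh) build-up when acetylcholinesterase (AChE)
    # is inhibited: dA/dt = release_rate - k * active_fraction * A.
    # All rate constants are arbitrary illustrative values, not physiological data.

    def steady_state_ach(release_rate: float, k: float, active_fraction: float) -> float:
        """Steady-state ACh level for a given fraction of uninhibited AChE."""
        return release_rate / (k * active_fraction)

    def simulate_ach(active_fraction: float, release_rate: float = 1.0,
                     k: float = 1.0, dt: float = 0.01, steps: int = 5000) -> float:
        """Simple Euler integration of the toy equation; returns the final ACh level."""
        a = 0.0
        for _ in range(steps):
            a += (release_rate - k * active_fraction * a) * dt
        return a

    if __name__ == "__main__":
        for frac in (1.0, 0.5, 0.1):
            print(f"active AChE {frac:>4.0%}: simulated ACh {simulate_ach(frac):6.2f} "
                  f"(steady state {steady_state_ach(1.0, 1.0, frac):6.2f})")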
The reaction products of the most important nerve agents, including Soman, Sarin, Tabun and VX, with acetylcholinesterase were solved by the U.S. Army using X-ray crystallography in the 1990s. The reaction products have been confirmed subsequently using different sources of acetylcholinesterase and the closely related target enzyme, butyrylcholinesterase. The X-ray structures clarify important aspects of the reaction mechanism (e.g., stereochemical inversion) at atomic resolution and provide a key tool for antidote development.
Treatment
Standard treatment for nerve agent poisoning is a combination of an anticholinergic to manage the symptoms, and an oxime as an antidote. Anticholinergics treat the symptoms by reducing the effects of acetylcholine, while oximes displace phosphate molecules from the active site of the cholinesterase enzymes, allowing the breakdown of acetylcholine. Military personnel are issued the combination in an autoinjector (e.g. ATNAA), for ease of use in stressful conditions.
Atropine is the standard anticholinergic drug used to manage the symptoms of nerve agent poisoning. It acts as an antagonist to muscarinic acetylcholine receptors, blocking the effects of excess acetylcholine. Some synthetic anticholinergics, such as biperiden, may counteract the central symptoms of nerve agent poisoning more effectively than atropine, since they pass the blood–brain barrier better. While these drugs will save the life of a person affected by nerve agents, that person may be incapacitated briefly or for an extended period, depending on the extent of exposure. The endpoint of atropine administration is the clearing of bronchial secretions.
Pralidoxime chloride (also known as 2-PAMCl) is the standard oxime used to treat nerve agent poisoning. Rather than counteracting the initial effects of the nerve agent on the nervous system as does atropine, pralidoxime chloride reactivates the poisoned enzyme (acetylcholinesterase) by scavenging the phosphoryl group attached to the functional hydroxyl group of the enzyme, counteracting the nerve agent itself. Revival of acetylcholinesterase with pralidoxime chloride works more effectively on nicotinic receptors while blocking acetylcholine receptors with atropine is more effective on muscarinic receptors.
Anticonvulsants, such as diazepam, may be administered to manage seizures, improving long term prognosis and reducing risk of brain damage. This is not usually self-administered as its use is for actively seizing patients.
Countermeasures
Pyridostigmine bromide was used by the US military in the first Gulf War as a pretreatment for Soman as it increased the median lethal dose. It is only effective if taken prior to exposure and in conjunction with Atropine and Pralidoxime, issued in the Mark I NAAK autoinjector, and is ineffective against other nerve agents. While it reduces fatality rates, there is an increased risk of brain damage; this can be mitigated by administration of an anticonvulsant. Evidence suggests that the use of pyridostigmine may be responsible for some of the symptoms of Gulf War syndrome.
Butyrylcholinesterase is under development by the U.S. Department of Defense as a prophylactic countermeasure against organophosphate nerve agents. It binds nerve agent in the bloodstream before the poison can exert effects in the nervous system.
Both purified acetylcholinesterase and butyrylcholinesterase have demonstrated success in animal studies as "biological scavengers" (and universal targets) to provide stoichiometric protection against the entire spectrum of organophosphate nerve agents. Butyrylcholinesterase currently is the preferred enzyme for development as a pharmaceutical drug primarily because it is a naturally circulating human plasma protein (superior pharmacokinetics) and its larger active site compared with acetylcholinesterase may permit greater flexibility for future design and improvement of butyrylcholinesterase to act as a nerve agent scavenger.
Classes
There are two main classes of nerve agents. The members of the two classes share similar properties and are given both a common name (such as Sarin) and a two-character NATO identifier (such as GB).
G-series
The G-series is thus named because German scientists first synthesized them. G series agents are known as non-persistent, meaning that they evaporate shortly after release, and do not remain active in the dispersal area for very long. All of the compounds in this class were discovered and synthesized during or prior to World War II, led by Gerhard Schrader (later under the employment of IG Farben).
This series is the first and oldest family of nerve agents. The first nerve agent ever synthesized was GA (Tabun) in 1936. GB (Sarin) was discovered next in 1939, followed by GD (Soman) in 1944, and finally the more obscure GF (Cyclosarin) in 1949. GB was the only G agent that was fielded by the US as a munition, in rockets, aerial bombs, and artillery shells.
V-series
The V-series is the second family of nerve agents and contains five well known members: VE, VG, VM, VR, and VX, along with several more obscure analogues.
The most studied agent in this family, VX (it is thought that the 'X' in its name comes from its overlapping isopropyl radicals), was invented in the 1950s at Porton Down in Wiltshire, England. Ranajit Ghosh, a chemist at the Plant Protection Laboratories of Imperial Chemical Industries (ICI) was investigating a class of organophosphate compounds (organophosphate esters of substituted aminoethanethiols). Like Schrader, Ghosh found that they were quite effective pesticides. In 1954, ICI put one of them on the market under the trade name Amiton. It was subsequently withdrawn, as it was too toxic for safe use. The toxicity did not escape military notice and some of the more toxic materials had been sent to Porton Down for evaluation. After the evaluation was complete, several members of this class of compounds became a new group of nerve agents, the V agents (depending on the source, the V stands for Victory, Venomous, or Viscous). The best known of these is probably VX, with VR ("Russian V-gas") coming a close second (Amiton is largely forgotten as VG, with G probably coming from "G"hosh). All of the V-agents are persistent agents, meaning that these agents do not degrade or wash away easily and can therefore remain on clothes and other surfaces for long periods. In use, this allows the V-agents to be used to blanket terrain to guide or curtail the movement of enemy ground forces. The consistency of these agents is similar to oil; as a result, the contact hazard for V-agents is primarily – but not exclusively – dermal. VX was the only V-series agent that was fielded by the US as a munition, in rockets, artillery shells, airplane spray tanks, and landmines.
An analysis of the structure of thirteen V agents shows that the defining feature of the group is the absence of halides; the agent need not be a phosphonate, but it does carry a dialkylaminoethyl group. On this basis, many notoriously toxic agricultural pesticides can be considered V agents, although high toxicity is not strictly required, since the VT agent and its salts (VT-1 and VT-2) are "non-toxic". Replacing the sulfur atom with selenium increases the toxicity of the agent by orders of magnitude.
Novichok agents
The Novichok (Russian: , "newcomer") agents, a series of organophosphate compounds, were developed in the Soviet Union and in Russia from the mid-1960s to the 1990s. The Novichok program aimed to develop and manufacture highly deadly chemical weapons that were unknown to the West. The new agents were designed to be undetectable by standard NATO chemical-detection equipment and overcome contemporary chemical-protective equipment.
In addition to the newly developed "third generation" weapons, binary versions of several Soviet agents were developed and were designated as "Novichok" agents.
Carbamates
Contrary to some claims, not all nerve agents are organophosphates. The starting compound studied by the United States was the carbamate EA-1464, of notorious toxicity. Compounds similar in structure and effect to EA-1464 formed a large group, including compounds such as EA-3990 and EA-4056. The Family Practice Notebook claims carbamate-based nerve agents can be three times as toxic as VX. Both the United States and the Soviet Union developed carbamate-based nerve agents during the Cold War. Carbamate-based nerve agents are sometimes grouped in academic literature with Fourth Generation Novichok agents, as they were added to the CWC schedule on banned agents at the same time, despite their significant differences in chemical makeup and mechanisms of action. Carbamate-based nerve agents have been identified as Schedule 1 Nerve Agents, the highest classification possible under the CWC, reserved for agents with no identified alternate use, and those that can cause the most harm.
Insecticides
Some insecticides, including carbamates and organophosphates such as dichlorvos, malathion and parathion, are nerve agents. The metabolism of insects is sufficiently different from mammals that these compounds have little effect on humans and other mammals at proper doses, but there is considerable concern about the effects of long-term exposure to these chemicals by farm workers and animals alike. At high enough doses, acute toxicity and death can occur through the same mechanism as other nerve agents. Some insecticides such as demeton, dimefox and paraoxon are sufficiently toxic to humans that they have been withdrawn from agricultural use, and were at one stage investigated for potential military applications.
Paraoxon was allegedly used as an assassination weapon by the apartheid South African government as part of Project Coast. Organophosphate pesticide poisoning is a major cause of disability in many developing countries and is often the preferred method of suicide.
Methods of dissemination
Many methods exist for spreading nerve agents such as:
uncontrolled aerosol munitions
smoke generation
explosive dissemination
atomizers, humidifiers and foggers
The method chosen will depend on the physical properties of the nerve agent(s) used, the nature of the target, and the achievable level of sophistication.
History
Discovery
This first class of nerve agents, the G-series, was accidentally discovered in Germany on December 23, 1936, by a research team headed by Gerhard Schrader working for IG Farben. Since 1934, Schrader had been working in a laboratory in Leverkusen to develop new types of insecticides for IG Farben. While working toward his goal of improved insecticide, Schrader experimented with numerous compounds, eventually leading to the preparation of Tabun.
In experiments, Tabun was extremely potent against insects: as little as 5 ppm of Tabun killed all the leaf lice he used in his initial experiment. In January 1937, Schrader observed the effects of nerve agents on human beings first-hand when a drop of Tabun spilled onto a lab bench. Within minutes he and his laboratory assistant began to experience miosis (constriction of the pupils of the eyes), dizziness and severe shortness of breath. It took them three weeks to recover fully.
In 1935 the Nazi government had passed a decree that required all inventions of possible military significance to be reported to the Ministry of War, so in May 1937 Schrader sent a sample of Tabun to the chemical warfare (CW) section of the Army Weapons Office in Berlin-Spandau. Schrader was summoned to the Wehrmacht chemical lab in Berlin to give a demonstration, after which Schrader's patent application and all related research was classified as secret. Colonel Rüdiger, head of the CW section, ordered the construction of new laboratories for the further investigation of Tabun and other organophosphate compounds and Schrader soon moved to a new laboratory at Wuppertal-Elberfeld in the Ruhr valley to continue his research in secret throughout World War II. The compound was initially codenamed Le-100 and later Trilon-83.
Sarin was discovered by Schrader and his team in 1938 and named in honor of its discoverers, among them Gerhard Schrader, Otto Ambros, and Hans-Jürgen von der Linde. It was codenamed T-144 or Trilon-46. It was found to be more than ten times as potent as Tabun.
Soman was discovered by Richard Kuhn in 1944 as he worked with the existing compounds; the name is derived from either the Greek 'to sleep' or the Latin 'to bludgeon'. It was codenamed T-300.
Cyclosarin was also discovered during WWII but the details were lost and it was rediscovered in 1949.
The G-series naming system was created by the United States when it uncovered the German activities, labeling Tabun as GA (German Agent A), Sarin as GB and Soman as GD. Ethyl Sarin was tagged GE and CycloSarin as GF.
During World War II
In 1939, a pilot plant for Tabun production was set up at Munster-Lager, on Lüneburg Heath near the German Army proving grounds. In January 1940, construction began on a secret plant, code named "Hochwerk" (High factory), for the production of Tabun at Dyhernfurth an der Oder (now Brzeg Dolny in Poland), on the Oder River downstream of Breslau (now Wrocław) in Silesia.
The plant was large and completely self-contained, synthesizing all intermediates as well as the final product, Tabun. The factory even had an underground plant for filling munitions, which were then stored at Krappitz (now Krapkowice) in Upper Silesia. The plant was operated by a subsidiary of IG Farben, as were all other chemical weapon agent production plants in Germany at the time.
Because of the plant's deep secrecy and the difficult nature of the production process, it took from January 1940 until June 1942 for the plant to become fully operational. Many of Tabun's chemical precursors were so corrosive that reaction chambers not lined with quartz or silver soon became useless. Tabun itself was so hazardous that the final processes had to be performed while enclosed in double glass-lined chambers with a stream of pressurized air circulating between the walls.
Three thousand German nationals were employed at Hochwerk, all equipped with respirators and clothing constructed of a poly-layered rubber/cloth/rubber sandwich that was destroyed after the tenth wearing. Despite all precautions, there were over 300 accidents before production even began and at least ten workers died during the two and a half years of operation. Some incidents cited in A Higher Form of Killing: The Secret History of Chemical and Biological Warfare are as follows:
Four pipe fitters had liquid Tabun drain onto them and died before their rubber suits could be removed.
A worker had two liters of Tabun pour down the neck of his rubber suit. He died within two minutes.
Seven workers were hit in the face with a stream of Tabun of such force that the liquid was forced behind their respirators. Only two survived despite resuscitation measures.
At the end of the war, the plant was dismantled and moved, probably to Dzerzhinsk, USSR.
In 1940 the German Army Weapons Office ordered the mass production of Sarin for wartime use. A number of pilot plants were built and a high-production facility was under construction (but was not finished) by the end of World War II. Estimates for total Sarin production by Nazi Germany range from 500 kg to 10 tons.
During that time, German intelligence believed that the Allies also knew of these compounds, assuming that because these compounds were not discussed in the Allies' scientific journals information about them was being suppressed. Though Sarin, Tabun and Soman were incorporated into artillery shells, the German government ultimately decided not to use nerve agents against Allied targets. The Allies did not learn of these agents until shells filled with them were captured towards the end of the war. German forces used chemical warfare against partisans during the Battle of the Kerch Peninsula in 1942, but did not use any nerve agent.
This is detailed in Joseph Borkin's book The Crime and Punishment of IG Farben.
Post–World War II
Since World War II, Iraq's use of mustard gas against Iranian troops and Kurds (Iran–Iraq War of 1980–1988) has been the only large-scale use of any chemical weapons. On the scale of the single Kurdish village of Halabja within its own territory, Iraqi forces did expose the populace to some kind of chemical weapons, possibly mustard gas and most likely nerve agents.
Operatives of the Aum Shinrikyo religious group made and used Sarin several times on other Japanese, most notably the Tokyo subway sarin attack.
In the Gulf War, no nerve agents (nor other chemical weapons) were used, but a number of U.S. and UK personnel were exposed to them when the Khamisiyah chemical depot was destroyed. This and the widespread use of anticholinergic drugs as a protective treatment against any possible nerve gas attack have been proposed as a possible cause of Gulf War syndrome.
Sarin gas was deployed in a 2013 attack on Ghouta during the Syrian Civil War, killing several hundred people. Most governments contend that forces loyal to President Bashar al-Assad deployed the gas; however, the Syrian Government has denied responsibility.
On 13 February 2017, the nerve agent VX was used in the assassination of Kim Jong-nam, half-brother of the North Korean leader Kim Jong-un, at Kuala Lumpur International Airport in Malaysia.
On 4 March 2018, a former Russian agent (who was convicted of high treason but allowed to live in the United Kingdom via a spy swap agreement), Sergei Skripal, and his daughter, who was visiting from Moscow, were both poisoned by a Novichok nerve agent in the English city of Salisbury. They survived, and were subsequently released from hospital. In addition, a Wiltshire Police officer, Nick Bailey, was exposed to the substance. He was one of the first to respond to the incident. Twenty-one members of the public received medical treatment following exposure to the nerve agent. Despite this, only Bailey and the Skripals remained in critical condition. On 11 March 2018, Public Health England issued advice for the other people believed to have been in the Mill pub (the location where the attack is believed to have been carried out) or the nearby Zizzi Restaurant. On 12 March 2018, British Prime Minister Theresa May stated that the substance used was a Novichok nerve agent.
On 30 June 2018, two British nationals, Charlie Rowley and Dawn Sturgess, were poisoned by a Novichok nerve agent of the same kind that was used in the Skripal poisoning, which Rowley had found in a discarded perfume bottle and gifted to Sturgess. Whilst Rowley survived, Sturgess died on 8 July. Metropolitan Police believe that the poisoning was not a targeted attack, but a result of the way the nerve agent was disposed of after the poisoning in Salisbury.
Ocean disposal
In 1972, the United States Congress banned the practice of disposing chemical weapons into the ocean. Thirty-two thousand tons of nerve and mustard agents had already been dumped into the ocean waters off the United States by the U.S. Army, primarily as part of Operation CHASE. According to a 1998 report by William Brankowitz, a deputy project manager in the U.S. Army Chemical Materials Agency, the Army created at least 26 chemical weapons dump sites in the ocean off at least 11 states on both the west and east coasts. Due to poor records, they currently only know the rough whereabouts of half of them.
There is currently a lack of scientific data regarding the ecological and health effects of this dumping. In the event of leakage, many nerve agents are soluble in water and would dissolve in a few days, while other substances like sulfur mustard could last longer. There have also been a few incidents of chemical weapons washing ashore or being accidentally retrieved, for example during dredging or trawl fishing operations.
Detection
Detection of gaseous nerve agents
The methods of detecting gaseous nerve agents include but are not limited to the following.
Laser photoacoustic spectroscopy
Laser photoacoustic spectroscopy (LPAS) is a method that has been used to detect nerve agents in the air. In this method, laser light is absorbed by gaseous matter. This causes a heating/cooling cycle and changes in pressure. Sensitive microphones detect the sound waves that result from the pressure changes. Scientists at the U.S. Army Research Laboratory engineered an LPAS system that can detect trace amounts of multiple toxic gases in one air sample.
This system contained three lasers modulated at different frequencies, each producing a different sound wave tone. The different wavelengths of light were directed into a sensor referred to as the photoacoustic cell, which contained the vapors of different nerve agents. The trace of each nerve agent had a signature effect on the "loudness" of the lasers' sound wave tones. Some overlap of the nerve agents' effects did occur in the acoustic results, but it was predicted that specificity would increase as additional lasers with unique wavelengths were added. Too many lasers set to different wavelengths, however, could result in overlap of absorption spectra. LPAS technology can identify gases at parts-per-billion (ppb) concentrations.
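Conceptually, the multiwavelength approach amounts to solving a small linear system: the photoacoustic signal at each laser wavelength is approximately a weighted sum of contributions from the gases present, with weights set by each gas's absorption strength at that wavelength. The sketch below illustrates this idea with an ordinary least-squares inversion; all absorption values, gas labels, and signal amplitudes are invented for illustration and are not taken from the Army Research Laboratory system.

```python
# Illustrative sketch: recovering gas concentrations from multiwavelength
# photoacoustic signals by least squares. All numbers are hypothetical.
import numpy as np

# Rows: laser wavelengths; columns: candidate gases.
# Entries are assumed relative absorption strengths at each wavelength.
absorption = np.array([
    [0.90, 0.20, 0.05],   # wavelength 1
    [0.15, 0.80, 0.10],   # wavelength 2
    [0.05, 0.10, 0.70],   # wavelength 3
    [0.30, 0.25, 0.20],   # wavelength 4 (an extra wavelength improves conditioning)
])

# Simulated photoacoustic signal amplitudes ("loudness") at each wavelength,
# generated here from a known concentration vector plus a little noise.
true_concentrations = np.array([2.0, 0.5, 1.0])   # arbitrary units (e.g. ppb)
rng = np.random.default_rng(0)
signal = absorption @ true_concentrations + rng.normal(0, 0.01, size=4)

# Least-squares inversion: estimate the concentration of each gas.
estimate, *_ = np.linalg.lstsq(absorption, signal, rcond=None)
for gas, value in zip(["gas A", "gas B", "gas C"], estimate):
    print(f"{gas}: {value:.2f} (arbitrary units)")
```

Adding another laser adds a row to the matrix, which is why extra wavelengths improve specificity, until the absorption spectra of the candidate gases overlap so strongly that the columns become nearly indistinguishable.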
The following nerve agent simulants have been identified with this multiwavelength LPAS:
dimethyl methyl phosphonate (DMMP)
diethyl methyl phosphonate (DEMP)
diisopropyl methyl phosphonate (DIMP)
dimethylpolysiloxane (DIME)
triethyl phosphate (TEP)
tributyl phosphate (TBP)
two volatile organic compounds (VOCs):
acetone (ACE)
isopropanol (ISO), a precursor used in the production of sarin
Other gases and air contaminants identified with LPAS include:
Carbon dioxide (CO2)
Benzene
Formaldehyde
Acetaldehyde
Ammonia
Nitrogen oxides (NOx)
Sulfur dioxide (SO2)
Ethylene glycol
TATP
TNT
Non-dispersive infrared
Non-dispersive infrared techniques have been reported to be used for gaseous nerve agent detection.
IR absorption
Traditional IR absorption has been reported to detect gaseous nerve agents.
Fourier transform infrared spectroscopy
Fourier transform infrared (FTIR) spectroscopy has been reported to detect gaseous nerve agents.
References
Sources
External links
ATSDR Case Studies in Environmental Medicine: Cholinesterase Inhibitors, Including Pesticides and Chemical Warfare Nerve Agents U.S. Department of Health and Human Services
Nervegas: America's Fifteen-year Struggle for Modern Chemical Weapons Army Chemical Review
History Note: The CWS Effort to Obtain German Chemical Weapons for Retaliation Against Japan CBIAC Newsletter
AChE inhibitors and substrates – 2wfz, 2wg0, 2wg1, 1som in Proteopedia
Acetylcholinesterase inhibitors
Pneumonia
Pneumonia is an inflammatory condition of the lung primarily affecting the small air sacs known as alveoli. Symptoms typically include some combination of productive or dry cough, chest pain, fever, and difficulty breathing. The severity of the condition is variable.
Pneumonia is usually caused by infection with viruses or bacteria, and less commonly by other microorganisms. Identifying the responsible pathogen can be difficult. Diagnosis is often based on symptoms and physical examination. Chest X-rays, blood tests, and culture of the sputum may help confirm the diagnosis. The disease may be classified by where it was acquired, such as community- or hospital-acquired or healthcare-associated pneumonia.
Risk factors for pneumonia include cystic fibrosis, chronic obstructive pulmonary disease (COPD), sickle cell disease, asthma, diabetes, heart failure, a history of smoking, a poor ability to cough (such as following a stroke), and a weak immune system.
Vaccines to prevent certain types of pneumonia (such as those caused by Streptococcus pneumoniae bacteria, linked to influenza, or linked to COVID-19) are available. Other methods of prevention include hand washing to prevent infection, and not smoking.
Treatment depends on the underlying cause. Pneumonia believed to be due to bacteria is treated with antibiotics. If the pneumonia is severe, the affected person is generally hospitalized. Oxygen therapy may be used if oxygen levels are low.
Each year, pneumonia affects about 450 million people globally (7% of the population) and results in about 4 million deaths. With the introduction of antibiotics and vaccines in the 20th century, survival has greatly improved. Nevertheless, pneumonia remains a leading cause of death in developing countries, and also among the very old, the very young, and the chronically ill. Pneumonia often shortens the period of suffering among those already close to death and has thus been called "the old man's friend".
Signs and symptoms
People with infectious pneumonia often have a productive cough, fever accompanied by shaking chills, shortness of breath, sharp or stabbing chest pain during deep breaths, and an increased rate of breathing. In elderly people, confusion may be the most prominent sign.
The typical signs and symptoms in children under five are fever, cough, and fast or difficult breathing. Fever is not very specific, as it occurs in many other common illnesses and may be absent in those with severe disease, malnutrition or in the elderly. In addition, a cough is frequently absent in children less than 2 months old. More severe signs and symptoms in children may include blue-tinged skin, unwillingness to drink, convulsions, ongoing vomiting, extremes of temperature, or a decreased level of consciousness.
Bacterial and viral cases of pneumonia usually result in similar symptoms. Some causes are associated with classic, but non-specific, clinical characteristics. Pneumonia caused by Legionella may occur with abdominal pain, diarrhea, or confusion. Pneumonia caused by Streptococcus pneumoniae is associated with rust-colored sputum. Pneumonia caused by Klebsiella may produce bloody sputum often described as "currant jelly". Bloody sputum (known as hemoptysis) may also occur with tuberculosis, Gram-negative pneumonia, lung abscesses, and, more commonly, acute bronchitis. Pneumonia caused by Mycoplasma pneumoniae may occur in association with swelling of the lymph nodes in the neck, joint pain, or a middle ear infection. Viral pneumonia presents more commonly with wheezing than bacterial pneumonia. Pneumonia was historically divided into "typical" and "atypical" based on the belief that the presentation predicted the underlying cause; however, evidence has not supported this distinction, so it is no longer emphasized.
Cause
Pneumonia is due to infections caused primarily by bacteria or viruses and less commonly by fungi and parasites. Although more than 100 strains of infectious agents have been identified, only a few are responsible for the majority of cases. Mixed infections with both viruses and bacteria may occur in roughly 45% of infections in children and 15% of infections in adults. A causative agent may not be isolated in about half of cases despite careful testing. In an active population-based surveillance for community-acquired pneumonia requiring hospitalization in five hospitals in Chicago and Nashville from January 2010 through June 2012, 2259 patients were identified who had radiographic evidence of pneumonia and specimens that could be tested for the responsible pathogen. Most patients (62%) had no detectable pathogens in their sample, and unexpectedly, respiratory viruses were detected more frequently than bacteria. Specifically, 23% had one or more viruses, 11% had one or more bacteria, 3% had both bacterial and viral pathogens, and 1% had a fungal or mycobacterial infection. "The most common pathogens were human rhinovirus (in 9% of patients), influenza virus (in 6%), and Streptococcus pneumoniae (in 5%)."
The term pneumonia is sometimes more broadly applied to any condition resulting in inflammation of the lungs (caused for example by autoimmune diseases, chemical burns or drug reactions); however, this inflammation is more accurately referred to as pneumonitis.
Factors that predispose to pneumonia include smoking, immunodeficiency, alcoholism, chronic obstructive pulmonary disease, sickle cell disease (SCD), asthma, chronic kidney disease, liver disease, and biological aging. Additional risks in children include not being breastfed, exposure to cigarette smoke and other air pollution, malnutrition, and poverty. The use of acid-suppressing medications – such as proton-pump inhibitors or H2 blockers – is associated with an increased risk of pneumonia. Approximately 10% of people who require mechanical ventilation develop ventilator-associated pneumonia, and people with a gastric feeding tube have an increased risk of developing aspiration pneumonia. Moreover, the misplacement of a feeding tube can lead to aspiration pneumonia; approximately 28% of tube malpositions result in pneumonia. For example, Avanos Medical's CORTRAK* 2 EAS feeding tube placement system was recalled by the FDA in May 2022 after reports of adverse events, including pneumonia, amounting to 60 injuries and 23 patient deaths. For people with certain variants of the FER gene, the risk of death from sepsis caused by pneumonia is reduced, while for those with TLR6 variants, the risk of getting Legionnaires' disease is increased.
Bacteria
Bacteria are the most common cause of community-acquired pneumonia (CAP), with Streptococcus pneumoniae isolated in nearly 50% of cases. Other commonly isolated bacteria include Haemophilus influenzae in 20%, Chlamydophila pneumoniae in 13%, and Mycoplasma pneumoniae in 3% of cases; Staphylococcus aureus; Moraxella catarrhalis; and Legionella pneumophila. A number of drug-resistant versions of the above infections are becoming more common, including drug-resistant Streptococcus pneumoniae (DRSP) and methicillin-resistant Staphylococcus aureus (MRSA).
The spread of organisms is facilitated by certain risk factors. Alcoholism is associated with Streptococcus pneumoniae, anaerobic organisms, and Mycobacterium tuberculosis; smoking facilitates the effects of Streptococcus pneumoniae, Haemophilus influenzae, Moraxella catarrhalis, and Legionella pneumophila. Exposure to birds is associated with Chlamydia psittaci; farm animals with Coxiella burnetii; aspiration of stomach contents with anaerobic organisms; and cystic fibrosis with Pseudomonas aeruginosa and Staphylococcus aureus. Streptococcus pneumoniae is more common in the winter, and it should be suspected in persons aspirating a large number of anaerobic organisms.
Viruses
In adults, viruses account for about one third of pneumonia cases, and in children for about 15% of them. Commonly implicated agents include rhinoviruses, coronaviruses, influenza virus, respiratory syncytial virus (RSV), adenovirus, and parainfluenza. Herpes simplex virus rarely causes pneumonia, except in groups such as newborns, persons with cancer, transplant recipients, and people with significant burns. After organ transplantation or in otherwise immunocompromised persons, there are high rates of cytomegalovirus pneumonia. Those with viral infections may be secondarily infected with the bacteria Streptococcus pneumoniae, Staphylococcus aureus, or Haemophilus influenzae, particularly when other health problems are present. Different viruses predominate at different times of the year; during flu season, for example, influenza may account for more than half of all viral cases. Outbreaks of other viruses also occur occasionally, including hantaviruses and coronaviruses. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) can also result in pneumonia.
Fungi
Fungal pneumonia is uncommon, but occurs more commonly in individuals with weakened immune systems due to AIDS, immunosuppressive drugs, or other medical problems. It is most often caused by Histoplasma capsulatum, Blastomyces, Cryptococcus neoformans, Pneumocystis jiroveci (pneumocystis pneumonia, or PCP), and Coccidioides immitis. Histoplasmosis is most common in the Mississippi River basin, and coccidioidomycosis is most common in the Southwestern United States. The number of cases of fungal pneumonia has been increasing in the latter half of the 20th century due to increasing travel and rates of immunosuppression in the population. For people infected with HIV/AIDS, PCP is a common opportunistic infection.
Parasites
A variety of parasites can affect the lungs, including Toxoplasma gondii, Strongyloides stercoralis, Ascaris lumbricoides, and Plasmodium malariae. These organisms typically enter the body through direct contact with the skin, ingestion, or via an insect vector. Except for Paragonimus westermani, most parasites do not specifically affect the lungs but involve the lungs secondarily to other sites. Some parasites, in particular those belonging to the Ascaris and Strongyloides genera, stimulate a strong eosinophilic reaction, which may result in eosinophilic pneumonia. In other infections, such as malaria, lung involvement is due primarily to cytokine-induced systemic inflammation. In the developed world, these infections are most common in people returning from travel or in immigrants. Around the world, parasitic pneumonia is most common in the immunodeficient.
Noninfectious
Idiopathic interstitial pneumonia or noninfectious pneumonia is a class of diffuse lung diseases. They include diffuse alveolar damage, organizing pneumonia, nonspecific interstitial pneumonia, lymphocytic interstitial pneumonia, desquamative interstitial pneumonia, respiratory bronchiolitis interstitial lung disease, and usual interstitial pneumonia. Lipoid pneumonia is another rare cause due to lipids entering the lung. These lipids can either be inhaled or spread to the lungs from elsewhere in the body.
Mechanisms
Pneumonia frequently starts as an upper respiratory tract infection that moves into the lower respiratory tract. It is a type of pneumonitis (lung inflammation). The normal flora of the upper airway give protection by competing with pathogens for nutrients. In the lower airways, reflexes of the glottis, actions of complement proteins and immunoglobulins are important for protection. Microaspiration of contaminated secretions can infect the lower airways and cause pneumonia. The progress of pneumonia is determined by the virulence of the organism; the amount of organism required to start an infection; and the body's immune response against the infection.
Bacterial
Most bacteria enter the lungs via small aspirations of organisms residing in the throat or nose. Half of normal people have these small aspirations during sleep. While the throat always contains bacteria, potentially infectious ones reside there only at certain times and under certain conditions. A minority of types of bacteria such as Mycobacterium tuberculosis and Legionella pneumophila reach the lungs via contaminated airborne droplets. Bacteria can also spread via the blood. Once in the lungs, bacteria may invade the spaces between cells and between alveoli, where the macrophages and neutrophils (defensive white blood cells) attempt to inactivate the bacteria. The neutrophils also release cytokines, causing a general activation of the immune system. This leads to the fever, chills, and fatigue common in bacterial pneumonia. The neutrophils, bacteria, and fluid from surrounding blood vessels fill the alveoli, resulting in the consolidation seen on chest X-ray.
Viral
Viruses may reach the lung by a number of different routes. Respiratory syncytial virus is typically contracted when people touch contaminated objects and then touch their eyes or nose. Other viral infections occur when contaminated airborne droplets are inhaled through the nose or mouth. Once in the upper airway, the viruses may make their way into the lungs, where they invade the cells lining the airways, alveoli, or lung parenchyma. Some viruses such as measles and herpes simplex may reach the lungs via the blood. The invasion of the lungs may lead to varying degrees of cell death. When the immune system responds to the infection, even more lung damage may occur. Primarily white blood cells, mainly mononuclear cells, generate the inflammation. As well as damaging the lungs, many viruses simultaneously affect other organs and thus disrupt other body functions. Viruses also make the body more susceptible to bacterial infections; in this way, bacterial pneumonia can occur at the same time as viral pneumonia.
Diagnosis
Pneumonia is typically diagnosed based on a combination of physical signs and often a chest X-ray. In adults with normal vital signs and a normal lung examination, the diagnosis is unlikely. However, the underlying cause can be difficult to confirm, as there is no definitive test able to distinguish between bacterial and non-bacterial cause. The overall impression of a physician appears to be at least as good as decision rules for making or excluding the diagnosis.
Diagnosis in children
The World Health Organization has defined pneumonia in children clinically based on either a cough or difficulty breathing and a rapid respiratory rate, chest indrawing, or a decreased level of consciousness. A rapid respiratory rate is defined as greater than 60 breaths per minute in children under 2 months old, greater than 50 breaths per minute in children 2 months to 1 year old, or greater than 40 breaths per minute in children 1 to 5 years old.
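As a minimal sketch of the age-banded cut-offs quoted above, the following function flags "fast breathing" under the WHO thresholds; the function name and structure are illustrative only, not an official WHO tool.

```python
def fast_breathing(age_months: float, breaths_per_minute: float) -> bool:
    """Return True if the respiratory rate exceeds the WHO age-banded
    threshold for fast breathing in a child (illustrative sketch only)."""
    if age_months < 2:
        threshold = 60
    elif age_months < 12:
        threshold = 50
    elif age_months <= 60:   # 1 to 5 years
        threshold = 40
    else:
        raise ValueError("Thresholds quoted here apply to children under 5 years")
    return breaths_per_minute > threshold

# Example: a 9-month-old breathing 54 times per minute meets the criterion.
print(fast_breathing(9, 54))   # True
```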
In children, low oxygen levels and lower chest indrawing are more sensitive than hearing chest crackles with a stethoscope or increased respiratory rate. Grunting and nasal flaring may be other useful signs in children less than five years old.
Lack of wheezing is an indicator of Mycoplasma pneumoniae in children with pneumonia, but as an indicator it is not accurate enough to decide whether or not macrolide treatment should be used. The presence of chest pain in children with pneumonia doubles the probability of Mycoplasma pneumoniae.
Diagnosis in adults
In general, in adults, investigations are not needed in mild cases. There is a very low risk of pneumonia if all vital signs and auscultation are normal. C-reactive protein (CRP) may help support the diagnosis. For those with CRP less than 20 mg/L without convincing evidence of pneumonia, antibiotics are not recommended.
Procalcitonin may help determine the cause and support decisions about who should receive antibiotics. Antibiotics are encouraged if the procalcitonin level reaches 0.25 μg/L, strongly encouraged if it reaches 0.5 μg/L, and strongly discouraged if the level is below 0.10 μg/L. In people requiring hospitalization, pulse oximetry, chest radiography and blood tests – including a complete blood count, serum electrolytes, C-reactive protein level, and possibly liver function tests – are recommended.
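A minimal sketch of the graded procalcitonin thresholds described above might look like the following; the wording of the returned advice is illustrative only and is not a substitute for clinical judgement (the CRP cut-off in the preceding paragraph could be handled with a similar one-line check).

```python
def procalcitonin_advice(pct_ug_per_l: float) -> str:
    """Map a procalcitonin level (micrograms per litre) to the graded
    antibiotic advice described above. Illustrative sketch only."""
    if pct_ug_per_l < 0.10:
        return "antibiotics strongly discouraged"
    if pct_ug_per_l >= 0.5:
        return "antibiotics strongly encouraged"
    if pct_ug_per_l >= 0.25:
        return "antibiotics encouraged"
    return "below the threshold at which antibiotics are encouraged"

print(procalcitonin_advice(0.3))   # antibiotics encouraged
print(procalcitonin_advice(0.05))  # antibiotics strongly discouraged
```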
The diagnosis of influenza-like illness can be made based on the signs and symptoms; however, confirmation of an influenza infection requires testing. Thus, treatment is frequently based on the presence of influenza in the community or a rapid influenza test.
Adults 65 years old or older, as well as cigarette smokers and people with ongoing medical conditions are at increased risk for pneumonia.
Physical exam
Physical examination may sometimes reveal low blood pressure, high heart rate, or low oxygen saturation. The respiratory rate may be faster than normal, and this may occur a day or two before other signs. Examination of the chest may be normal, but it may show decreased expansion on the affected side. Harsh breath sounds from the larger airways that are transmitted through the inflamed lung are termed bronchial breathing and are heard on auscultation with a stethoscope. Crackles (rales) may be heard over the affected area during inspiration. Percussion may be dulled over the affected lung, and increased, rather than decreased, vocal resonance distinguishes pneumonia from a pleural effusion.
Imaging
A chest radiograph is frequently used in diagnosis. In people with mild disease, imaging is needed only in those with potential complications, those not having improved with treatment, or those in which the cause is uncertain. If a person is sufficiently sick to require hospitalization, a chest radiograph is recommended. Findings do not always match the severity of disease and do not reliably separate between bacterial and viral infection.
X-ray presentations of pneumonia may be classified as lobar pneumonia, bronchopneumonia (also known as lobular pneumonia), and interstitial pneumonia. Bacterial community-acquired pneumonia classically shows consolidation of one lobe of a lung, which is known as lobar pneumonia. However, findings may vary, and other patterns are common in other types of pneumonia. Aspiration pneumonia may present with bilateral opacities, primarily in the bases of the lungs and on the right side. Radiographs of viral pneumonia may appear normal, appear hyper-inflated, have bilateral patchy areas, or present similarly to bacterial pneumonia with lobar consolidation. Radiologic findings may not be present in the early stages of the disease, especially in the presence of dehydration, or may be difficult to interpret in the obese or those with a history of lung disease. Complications such as pleural effusion may also be found on chest radiographs. Laterolateral chest radiographs can increase the diagnostic accuracy of lung consolidation and pleural effusion.
A CT scan can give additional information in indeterminate cases and provide more details in those with an unclear chest radiograph (for example occult pneumonia in chronic obstructive pulmonary disease). They can be used to exclude pulmonary embolism and fungal pneumonia, and detect lung abscesses in those who are not responding to treatments. However, CT scans are more expensive, have a higher dose of radiation, and cannot be done at bedside.
Lung ultrasound may also be useful in helping to make the diagnosis. Ultrasound is radiation free and can be done at bedside. However, ultrasound requires specific skills to operate the machine and interpret the findings. It may be more accurate than chest X-ray.
Microbiology
In people managed in the community, determining the causative agent is not cost-effective and typically does not alter management. For people who do not respond to treatment, sputum culture should be considered, and culture for Mycobacterium tuberculosis should be carried out in persons with a chronic productive cough. Microbiological evaluation is also indicated in severe pneumonia, alcoholism, asplenia, immunosuppression, HIV infection, and those being empirically treated for MRSA or Pseudomonas. Although positive blood and pleural fluid cultures definitively establish the diagnosis of the type of micro-organism involved, a positive sputum culture has to be interpreted with care because of the possibility of colonisation of the respiratory tract. Testing for other specific organisms may be recommended during outbreaks, for public health reasons. In those hospitalized for severe disease, both sputum and blood cultures are recommended, as well as testing the urine for antigens to Legionella and Streptococcus. Viral infections can be confirmed via detection of either the virus or its antigens with culture or polymerase chain reaction (PCR), among other techniques. Mycoplasma, Legionella, Streptococcus, and Chlamydia can also be detected using PCR techniques on bronchoalveolar lavage fluid and nasopharyngeal swabs. The causative agent is determined in only 15% of cases with routine microbiological tests.
Classification
Pneumonitis refers to lung inflammation; pneumonia refers to pneumonitis, usually due to infection but sometimes non-infectious, that has the additional feature of pulmonary consolidation. Pneumonia is most commonly classified by where or how it was acquired: community-acquired, aspiration, healthcare-associated, hospital-acquired, and ventilator-associated pneumonia. It may also be classified by the area of the lung affected: lobar, bronchial pneumonia and acute interstitial pneumonia; or by the causative organism. Pneumonia in children may additionally be classified based on signs and symptoms as non-severe, severe, or very severe.
The setting in which pneumonia develops is important to treatment, as it correlates to which pathogens are likely suspects, which mechanisms are likely, which antibiotics are likely to work or fail, and which complications can be expected based on the person's health status.
Community
Community-acquired pneumonia (CAP) is acquired in the community, outside of health care facilities. Compared with healthcare-associated pneumonia, it is less likely to involve multidrug-resistant bacteria. Although the latter are no longer rare in CAP, they are still less likely. Prior stays in healthcare-related environments such as hospitals, nursing homes, or hemodialysis centers or a history of receiving domiciliary care can increase patients' risk for CAP caused by multidrug-resistant bacteria.
Healthcare
Health care–associated pneumonia (HCAP) is an infection associated with recent exposure to the health care system, including hospitals, outpatient clinics, nursing homes, dialysis centers, chemotherapy treatment, or home care. HCAP is sometimes called MCAP (medical care–associated pneumonia).
People may become infected with pneumonia in a hospital; this is defined as pneumonia not present at the time of admission (symptoms must start at least 48 hours after admission). It is likely to involve hospital-acquired infections, with higher risk of multidrug-resistant pathogens. People in a hospital often have other medical conditions, which may make them more susceptible to pathogens in the hospital.
Ventilator-associated pneumonia occurs in people breathing with the help of mechanical ventilation. Ventilator-associated pneumonia is specifically defined as pneumonia that arises more than 48 to 72 hours after endotracheal intubation.
Differential diagnosis
Several diseases can present with signs and symptoms similar to those of pneumonia, such as chronic obstructive pulmonary disease, asthma, pulmonary edema, bronchiectasis, lung cancer, and pulmonary emboli. Unlike pneumonia, asthma and COPD typically present with wheezing, pulmonary edema presents with an abnormal electrocardiogram, cancer and bronchiectasis present with a cough of longer duration, and pulmonary emboli present with acute-onset sharp chest pain and shortness of breath. Mild pneumonia should be differentiated from upper respiratory tract infection (URTI). Severe pneumonia should be differentiated from acute heart failure. Pulmonary infiltrates that resolve after mechanical ventilation is given point to heart failure and atelectasis rather than pneumonia. In recurrent pneumonia, underlying lung cancer, metastasis, tuberculosis, a foreign body, immunosuppression, and hypersensitivity should be suspected.
Prevention
Prevention includes vaccination, environmental measures, and appropriate treatment of other health problems. It is believed that, if appropriate preventive measures were instituted globally, mortality among children could be reduced by 400,000; and, if proper treatment were universally available, childhood deaths could be decreased by another 600,000.
Vaccination
Vaccination protects against certain bacterial and viral pneumonias both in children and adults. Influenza vaccines are modestly effective at preventing symptoms of influenza. The Centers for Disease Control and Prevention (CDC) recommends yearly influenza vaccination for every person 6 months and older. Immunizing health care workers decreases the risk of viral pneumonia among their patients.
Vaccinations against Haemophilus influenzae and Streptococcus pneumoniae have good evidence to support their use. There is strong evidence for vaccinating children under the age of 2 against Streptococcus pneumoniae (pneumococcal conjugate vaccine). Vaccinating children against Streptococcus pneumoniae has led to a decreased rate of these infections in adults, because many adults acquire infections from children. A Streptococcus pneumoniae vaccine is available for adults, and has been found to decrease the risk of invasive pneumococcal disease by 74%, but there is insufficient evidence to suggest using the pneumococcal vaccine to prevent pneumonia or death in the general adult population. The CDC recommends that young children and adults over the age of 65 receive the pneumococcal vaccine, as well as older children or younger adults who have an increased risk of getting pneumococcal disease. The pneumococcal vaccine has been shown to reduce the risk of community acquired pneumonia in people with chronic obstructive pulmonary disease, but does not reduce mortality or the risk of hospitalization for people with this condition. People with COPD are recommended by a number of guidelines to have a pneumococcal vaccination. Other vaccines for which there is support for a protective effect against pneumonia include pertussis, varicella, and measles.
Medications
When influenza outbreaks occur, medications such as amantadine or rimantadine may help prevent the condition, but they are associated with side effects. Zanamivir or oseltamivir decrease the chance that people who are exposed to the virus will develop symptoms; however, it is recommended that potential side effects are taken into account.
Other
Smoking cessation and reducing indoor air pollution, such as that from cooking indoors with wood, crop residues or dung, are both recommended. Smoking appears to be the single biggest risk factor for pneumococcal pneumonia in otherwise-healthy adults. Hand hygiene and coughing into one's sleeve may also be effective preventative measures. Wearing surgical masks by the sick may also prevent illness.
Appropriately treating underlying illnesses (such as HIV/AIDS, diabetes mellitus, and malnutrition) can decrease the risk of pneumonia. In children less than 6 months of age, exclusive breast feeding reduces both the risk and severity of disease. In people with HIV/AIDS and a CD4 count of less than 200 cells/uL the antibiotic trimethoprim/sulfamethoxazole decreases the risk of Pneumocystis pneumonia and is also useful for prevention in those that are immunocompromised but do not have HIV.
Testing pregnant women for Group B Streptococcus and Chlamydia trachomatis, and administering antibiotic treatment, if needed, reduces rates of pneumonia in infants; preventive measures against HIV transmission from mother to child may also be effective. Suctioning the mouth and throat of infants with meconium-stained amniotic fluid has not been found to reduce the rate of aspiration pneumonia and may cause harm; thus, this practice is not recommended in the majority of situations. In the frail elderly, good oral health care may lower the risk of aspiration pneumonia, even though there is no good evidence that one approach to mouth care is better than others in preventing nursing home–acquired pneumonia. Zinc supplementation in children 2 months to five years old appears to reduce rates of pneumonia.
For people with low levels of vitamin C in their diet or blood, taking vitamin C supplements may be suggested to decrease the risk of pneumonia, although there is no strong evidence of benefit. There is insufficient evidence to recommend that the general population take vitamin C to prevent or treat pneumonia.
For adults and children in the hospital who require a respirator, there is no strong evidence indicating a difference between heat and moisture exchangers and heated humidifiers for preventing pneumonia. There is tentative evidence that lying flat on the back, compared with being semi-raised, increases the risk of pneumonia in people who are intubated.
Management
Antibiotics by mouth, rest, simple analgesics, and fluids usually suffice for complete resolution. However, those with other medical conditions, the elderly, or those with significant trouble breathing may require more advanced care. If the symptoms worsen, the pneumonia does not improve with home treatment, or complications occur, hospitalization may be required. Worldwide, approximately 7–13% of cases in children result in hospitalization, whereas in the developed world between 22 and 42% of adults with community-acquired pneumonia are admitted. The CURB-65 score is useful for determining the need for admission in adults. If the score is 0 or 1, people can typically be managed at home; if it is 2, a short hospital stay or close follow-up is needed; if it is 3–5, hospitalization is recommended. In children, those with respiratory distress or oxygen saturations of less than 90% should be hospitalized. The utility of chest physiotherapy in pneumonia has not yet been determined. Over-the-counter cough medicine has not been found to be effective, nor has the use of zinc supplementation in children. There is insufficient evidence for mucolytics. There is no strong evidence to recommend that children who have non-measles-related pneumonia take vitamin A supplements. Vitamin D, as of 2023, is of unclear benefit in children. Vitamin C administration in pneumonia needs further research, although it can be given to patients with low plasma vitamin C because it is inexpensive and low risk.
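As an illustration of how the CURB-65 disposition bands quoted above can be applied, the sketch below scores the five standard criteria (confusion, blood urea above 7 mmol/L, respiratory rate of 30 or more per minute, systolic blood pressure below 90 mmHg or diastolic at or below 60 mmHg, and age 65 or over) and maps the total to the home, short-stay, and hospitalization bands given in the text; it is an illustrative sketch, not a validated clinical tool.

```python
def curb65_score(confusion: bool, urea_mmol_per_l: float,
                 respiratory_rate: int, systolic_bp: int,
                 diastolic_bp: int, age_years: int) -> int:
    """Compute the CURB-65 score (0-5) from its five standard criteria."""
    score = 0
    score += confusion                               # C: new-onset confusion
    score += urea_mmol_per_l > 7                     # U: urea > 7 mmol/L
    score += respiratory_rate >= 30                  # R: respiratory rate >= 30/min
    score += systolic_bp < 90 or diastolic_bp <= 60  # B: low blood pressure
    score += age_years >= 65                         # 65: age 65 or over
    return int(score)

def disposition(score: int) -> str:
    """Map the score to the management bands described in the text."""
    if score <= 1:
        return "usually manageable at home"
    if score == 2:
        return "short hospital stay or close follow-up"
    return "hospitalization recommended"

s = curb65_score(confusion=False, urea_mmol_per_l=8.2, respiratory_rate=32,
                 systolic_bp=100, diastolic_bp=70, age_years=78)
print(s, disposition(s))   # 3 hospitalization recommended
```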
Pneumonia can cause severe illness in a number of ways, and pneumonia with evidence of organ dysfunction may require intensive care unit admission for observation and specific treatment. The main impact is on the respiratory and the circulatory system. Respiratory failure not responding to normal oxygen therapy may require heated humidified high-flow therapy delivered through nasal cannulae, non-invasive ventilation, or in severe cases mechanical ventilation through an endotracheal tube. Regarding circulatory problems as part of sepsis, evidence of poor blood flow or low blood pressure is initially treated with 30 mL/kg of crystalloid infused intravenously. In situations where fluids alone are ineffective, vasopressor medication may be required.
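The initial fluid dose described above is weight-based, so the arithmetic is straightforward: a 70 kg adult would receive roughly 30 mL/kg × 70 kg = 2,100 mL of crystalloid. A one-function sketch of that calculation (illustrative arithmetic only, not dosing software):

```python
def initial_crystalloid_ml(weight_kg: float, dose_ml_per_kg: float = 30) -> float:
    """Weight-based initial crystalloid volume for sepsis-related hypotension,
    as described above (illustrative arithmetic only)."""
    return weight_kg * dose_ml_per_kg

print(initial_crystalloid_ml(70))   # 2100.0 mL
```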
For adults with moderate or severe acute respiratory distress syndrome (ARDS) undergoing mechanical ventilation, there is a reduction in mortality when people lie on their front for at least 12 hours a day. However, this increases the risk of endotracheal tube obstruction and pressure sores.
Bacterial
Antibiotics improve outcomes in those with bacterial pneumonia. The first dose of antibiotics should be given as soon as possible. Increased use of antibiotics, however, may lead to the development of antimicrobial resistant strains of bacteria. Antibiotic choice depends initially on the characteristics of the person affected, such as age, underlying health, and the location the infection was acquired. Antibiotic use is also associated with side effects such as nausea, diarrhea, dizziness, taste distortion, or headaches. In the UK, treatment before culture results with amoxicillin is recommended as the first line for community-acquired pneumonia, with doxycycline or clarithromycin as alternatives. In North America, amoxicillin, doxycycline, and in some areas a macrolide (such as azithromycin or erythromycin) is the first-line outpatient treatment in adults. In children with mild or moderate symptoms, amoxicillin taken by mouth is the first line. The use of fluoroquinolones in uncomplicated cases is discouraged due to concerns about side-effects and generating resistance in light of there being no greater benefit.
For those who require hospitalization and acquired their pneumonia in the community, the use of a β-lactam such as cephazolin plus a macrolide such as azithromycin is recommended. A fluoroquinolone may replace azithromycin but is less preferred. Antibiotics by mouth and by injection appear to be similarly effective in children with severe pneumonia.
The duration of treatment has traditionally been seven to ten days, but increasing evidence suggests that shorter courses (3–5 days) may be effective for certain types of pneumonia and may reduce the risk of antibiotic resistance. Research in children showed that a shorter, 3-day course of amoxicillin was as effective as a longer, 7-day course for treating pneumonia in this population. For ventilator-associated pneumonia caused by non-fermenting Gram-negative bacilli (NF-GNB), however, a shorter course of antibiotics increases the risk that the pneumonia will return. Recommendations for hospital-acquired pneumonia include third- and fourth-generation cephalosporins, carbapenems, fluoroquinolones, aminoglycosides, and vancomycin. These antibiotics are often given intravenously and used in combination. In those treated in hospital, more than 90% improve with the initial antibiotics. For people with ventilator-associated pneumonia, the choice of antibiotic therapy will depend on the person's risk of being infected with a strain of bacteria that is multi-drug resistant. Once clinically stable, intravenous antibiotics should be switched to oral antibiotics. For those with methicillin-resistant Staphylococcus aureus (MRSA) or Legionella infections, prolonged antibiotics may be beneficial.
The addition of corticosteroids to standard antibiotic treatment appears to improve outcomes, reducing death and morbidity for adults with severe community acquired pneumonia, and reducing death for adults and children with non-severe community acquired pneumonia. A 2017 review therefore recommended them in adults with severe community acquired pneumonia. A 2019 guideline however recommended against their general use, unless refractory shock was present. Side effects associated with the use of corticosteroids include high blood sugar. There is some evidence that adding corticosteroids to the standard PCP pneumonia treatment may be beneficial for people who are infected with HIV.
The use of granulocyte colony stimulating factor (G-CSF) along with antibiotics does not appear to reduce mortality and routine use for treating pneumonia is not supported by evidence.
Viral
Neuraminidase inhibitors may be used to treat viral pneumonia caused by influenza viruses (influenza A and influenza B). No specific antiviral medications are recommended for other types of community acquired viral pneumonias including SARS coronavirus, adenovirus, hantavirus, and parainfluenza virus. Influenza A may be treated with rimantadine or amantadine, while influenza A or B may be treated with oseltamivir, zanamivir or peramivir. These are of most benefit if they are started within 48 hours of the onset of symptoms. Many strains of H5N1 influenza A, also known as avian influenza or "bird flu", have shown resistance to rimantadine and amantadine. The use of antibiotics in viral pneumonia is recommended by some experts, as it is impossible to rule out a complicating bacterial infection. The British Thoracic Society recommends that antibiotics be withheld in those with mild disease. The use of corticosteroids is controversial.
Aspiration
In general, aspiration pneumonitis is treated conservatively with antibiotics indicated only for aspiration pneumonia. The choice of antibiotic will depend on several factors, including the suspected causative organism and whether pneumonia was acquired in the community or developed in a hospital setting. Common options include clindamycin, a combination of a beta-lactam antibiotic and metronidazole, or an aminoglycoside.
Corticosteroids are sometimes used in aspiration pneumonia, but there is limited evidence to support their effectiveness.
Follow-up
The British Thoracic Society recommends that a follow-up chest radiograph be taken in people with persistent symptoms, smokers, and people older than 50. American guidelines vary, from generally recommending a follow-up chest radiograph to not mentioning any follow-up.
Prognosis
With treatment, most types of bacterial pneumonia will stabilize in 3–6 days. It often takes a few weeks before most symptoms resolve. X-ray findings typically clear within four weeks and mortality is low (less than 1%). In the elderly or people with other lung problems, recovery may take more than 12 weeks. In persons requiring hospitalization, mortality may be as high as 10%, and in those requiring intensive care it may reach 30–50%. Pneumonia is the most common hospital-acquired infection that causes death. Before the advent of antibiotics, mortality was typically 30% in those who were hospitalized. For those whose lung condition deteriorates within 72 hours, the problem is usually due to sepsis; if pneumonia deteriorates after 72 hours, it could be due to nosocomial infection or exacerbation of other underlying comorbidities. About 10% of those discharged from hospital are readmitted due to underlying comorbidities such as heart, lung, or neurological disorders, or due to new onset of pneumonia.
Complications may occur in particular in the elderly and those with underlying health problems. This may include, among others: empyema, lung abscess, bronchiolitis obliterans, acute respiratory distress syndrome, sepsis, and worsening of underlying health problems.
Clinical prediction rules
Clinical prediction rules have been developed to more objectively predict outcomes of pneumonia. These rules are often used to decide whether to hospitalize the person.
CURB-65 score, which takes into account the severity of symptoms, any underlying diseases, and age
Pneumonia severity index (or PSI Score)
Pleural effusion, empyema, and abscess
In pneumonia, a collection of fluid may form in the space that surrounds the lung. Occasionally, microorganisms will infect this fluid, causing an empyema. To distinguish an empyema from the more common simple parapneumonic effusion, the fluid may be collected with a needle (thoracentesis), and examined. If this shows evidence of empyema, complete drainage of the fluid is necessary, often requiring a drainage catheter. In severe cases of empyema, surgery may be needed. If the infected fluid is not drained, the infection may persist, because antibiotics do not penetrate well into the pleural cavity. If the fluid is sterile, it must be drained only if it is causing symptoms or remains unresolved.
In rare circumstances, bacteria in the lung will form a pocket of infected fluid called a lung abscess. Lung abscesses can usually be seen with a chest X-ray but frequently require a chest CT scan to confirm the diagnosis. Abscesses typically occur in aspiration pneumonia, and often contain several types of bacteria. Long-term antibiotics are usually adequate to treat a lung abscess, but sometimes the abscess must be drained by a surgeon or radiologist.
Respiratory and circulatory failure
Pneumonia can cause respiratory failure by triggering acute respiratory distress syndrome (ARDS), which results from a combination of infection and inflammatory response. The lungs quickly fill with fluid and become stiff. This stiffness, combined with severe difficulties extracting oxygen due to the alveolar fluid, may require long periods of mechanical ventilation for survival. Other causes of circulatory failure are hypoxemia, inflammation, and increased coagulability.
Sepsis is a potential complication of pneumonia but usually occurs in people with poor immunity or hyposplenism. The organisms most commonly involved are Streptococcus pneumoniae, Haemophilus influenzae, and Klebsiella pneumoniae. Other causes of the symptoms should be considered such as a myocardial infarction or a pulmonary embolism.
Epidemiology
Pneumonia is a common illness affecting approximately 450 million people a year and occurring in all parts of the world. It is a major cause of death among all age groups, resulting in 4 million deaths (7% of the world's total deaths) yearly. Rates are greatest in children less than five years old and in adults older than 75 years. It occurs about five times more frequently in the developing world than in the developed world. Viral pneumonia accounts for about 200 million cases. In the United States, pneumonia is the eighth leading cause of death.
Children
In 2008, pneumonia occurred in approximately 156 million children (151 million in the developing world and 5 million in the developed world). In 2010, it resulted in 1.3 million deaths, or 18% of all deaths in those under five years, of which 95% occurred in the developing world. Countries with the greatest burden of disease include India (43 million), China (21 million) and Pakistan (10 million). It is the leading cause of death among children in low income countries. Many of these deaths occur in the newborn period. The World Health Organization estimates that one in three newborn infant deaths is due to pneumonia. Approximately half of these deaths can be prevented, as they are caused by the bacteria for which an effective vaccine is available. The IDSA has recommended that children and infants with symptoms of CAP should be hospitalized so they have access to pediatric nursing care. In 2011, pneumonia was the most common reason for admission to the hospital after an emergency department visit in the U.S. for infants and children.
History
Pneumonia has been a common disease throughout human history. The word is from Greek πνεύμων (pneúmōn) meaning "lung". The symptoms were described by Hippocrates (c. 460–370 BC): "Peripneumonia, and pleuritic affections, are to be thus observed: If the fever be acute, and if there be pains on either side, or in both, and if expiration be if cough be present, and the sputa expectorated be of a blond or livid color, or likewise thin, frothy, and florid, or having any other character different from the common... When pneumonia is at its height, the case is beyond remedy if he is not purged, and it is bad if he has dyspnoea, and urine that is thin and acrid, and if sweats come out about the neck and head, for such sweats are bad, as proceeding from the suffocation, rales, and the violence of the disease which is obtaining the upper hand." However, Hippocrates referred to pneumonia as a disease "named by the ancients". He also reported the results of surgical drainage of empyemas. Maimonides (1135–1204 AD) observed: "The basic symptoms that occur in pneumonia and that are never lacking are as follows: acute fever, sticking pleuritic pain in the side, short rapid breaths, serrated pulse and cough." This clinical description is quite similar to those found in modern textbooks, and it reflected the extent of medical knowledge through the Middle Ages into the 19th century.
Edwin Klebs was the first to observe bacteria in the airways of persons having died of pneumonia in 1875. Initial work identifying the two common bacterial causes, Streptococcus pneumoniae and Klebsiella pneumoniae, was performed by Carl Friedländer and Albert Fraenkel in 1882 and 1884, respectively. Friedländer's initial work introduced the Gram stain, a fundamental laboratory test still used today to identify and categorize bacteria. Christian Gram's paper describing the procedure in 1884 helped to differentiate the two bacteria, and showed that pneumonia could be caused by more than one microorganism. In 1887, Jaccond demonstrated pneumonia may be caused by opportunistic bacteria always present in the lung.
Sir William Osler, known as "the father of modern medicine", appreciated the death and disability caused by pneumonia, describing it as the "captain of the men of death" in 1918, as it had overtaken tuberculosis as one of the leading causes of death at the time. This phrase was originally coined by John Bunyan in reference to "consumption" (tuberculosis). Osler also described pneumonia as "the old man's friend" as death was often quick and painless when there were much slower and more painful ways to die.
Viral pneumonia was first described by Hobart Reimann in 1938. Reimann, Chairman of the Department of Medicine at Jefferson Medical College, had established the practice of routinely typing the pneumococcal organism in cases where pneumonia presented. Out of this work, the distinction between viral and bacterial strains was noticed.
Several developments in the 1900s improved the outcome for those with pneumonia. With the advent of penicillin and other antibiotics, modern surgical techniques, and intensive care in the 20th century, mortality from pneumonia, which had approached 30%, dropped precipitously in the developed world. Vaccination of infants against Haemophilus influenzae type B began in 1988 and led to a dramatic decline in cases shortly thereafter. Vaccination against Streptococcus pneumoniae in adults began in 1977, and in children in 2000, resulting in a similar decline.
Society and culture
Awareness
Due to the relatively low awareness of the disease, 12 November was declared in 2009 as the annual World Pneumonia Day, a day for concerned citizens and policy makers to take action against the disease.
Costs
The global economic cost of community-acquired pneumonia has been estimated at $17 billion annually. Other estimates are considerably higher. In 2012 the estimated aggregate costs of treating pneumonia in the United States were $20 billion; the median cost of a single pneumonia-related hospitalization is over $15,000. According to data released by the Centers for Medicare and Medicaid Services, average 2012 hospital charges for inpatient treatment of uncomplicated pneumonia in the U.S. were $24,549 and ranged as high as $124,000. The average cost of an emergency room consult for pneumonia was $943 and the average cost for medication was $66. Aggregate annual costs of treating pneumonia in Europe have been estimated at €10 billion.
References
Footnotes
Citations
Bibliography
External links
Articles containing video clips
Coronavirus-associated diseases
Infectious diseases
Respiratory and cardiovascular disorders specific to the perinatal period
Starvation
Starvation is a severe deficiency in caloric energy intake, below the level needed to maintain an organism's life. It is the most extreme form of malnutrition. In humans, prolonged starvation can cause permanent organ damage and eventually, death. The term inanition refers to the symptoms and effects of starvation. Starvation by outside forces is a crime according to international criminal law and may also be used as a means of torture or execution.
According to the World Health Organization (WHO), hunger is the single gravest threat to the world's public health. The WHO also states that malnutrition is by far the biggest contributor to child mortality, present in half of all cases. Undernutrition is a contributory factor in the death of 3.1 million children under five every year. Although global hunger levels have stabilized, and despite some progress in specific areas such as stunting and exclusive breastfeeding, an alarming number of people still face food insecurity and malnutrition. The world has been set back about 15 years, with levels of undernourishment similar to those of 2008–2009: between 713 and 757 million people were undernourished in 2023 (a mid-range estimate of about 733 million), over 152 million more than in 2019.
A bloated stomach is characteristic of a form of malnutrition called kwashiorkor. The exact pathogenesis of kwashiorkor is not clear; it was initially thought to relate to diets high in carbohydrates (e.g. maize) but low in protein. While many patients have low albumin, this is thought to be a consequence of the condition. Possible causes such as aflatoxin poisoning, oxidative stress, immune dysregulation, and altered gut microbiota have been suggested. Treatment can help mitigate symptoms such as weight loss and muscle wasting; however, prevention is of utmost importance.
Without any food, humans usually die within about two months. In one exceptional case, a person survived a fast of 382 days under medical supervision. Lean people can usually survive a loss of up to 18% of their body mass; obese people can tolerate more, possibly over 20%. Biological females generally survive starvation longer than males.
Signs and symptoms
The following are some of the symptoms of starvation:
Changes in behaviour or mental status
The beginning stages of starvation affect mental status and behaviour. These symptoms appear as irritable mood, fatigue, trouble concentrating, and preoccupation with thoughts of food. People with these symptoms tend to be easily distracted and lacking in energy. Psychological effects are profound, including depression, anxiety, and a decline in cognitive function.
Physical signs
As starvation progresses, physical symptoms set in. Their timing depends on age, size, and overall health; they usually appear over days to weeks and include weakness, a fast heart rate, slow and shallow breathing, thirst, and constipation. There may also be diarrhea in some cases. The eyes begin to sink in and glaze over. The muscles shrink and muscle wasting sets in. Tiredness and dizziness also commonly occur, especially with any physical task. One prominent sign in children is a swollen belly. The skin loosens and turns pale, and there may be swelling of the feet and ankles.
Weakened immune system
Symptoms of starvation may also appear as a weakened immune system, slow wound healing, and poor response to infection. Rashes may develop on the skin. The body directs any nutrients available to keeping organs functioning.
Other symptoms
Other effects of starvation may include:
Anemia
Gallstones
Hypotension
Stomach disease
Cardiovascular and respiratory diseases
Irregular or absent menstrual periods in women
Kidney disease or failure
Electrolyte imbalance
Emaciation
Oliguria
Stages of starvation
The symptoms of starvation show up in three stages. Phase one and two can show up in anyone that skips meals, diets, and goes through fasting. Phase three is more severe, can be fatal, and results from long-term starvation.
Phase one: When meals are skipped, the body maintains blood sugar levels by degrading glycogen in the liver and breaking down stored fat and protein. The liver can provide glucose from glycogen for the first several hours. After that, the body increasingly breaks down fat and protein. The body uses fatty acids as an energy source for muscles but sends less glucose to the brain. Glycerol, another product of the breakdown of stored fat, can be converted into glucose for energy, but this supply eventually runs out.
Phase two: Phase two can last for weeks at a time. In this phase, the body mainly uses stored fat for energy. The breakdown occurs in the liver and turns fat into ketones. After fasting for one week, the brain will use these ketones and any available glucose. Using ketones lowers the need for glucose, and the body slows the breakdown of proteins.
Phase three: By this point, the fat stores are gone, and the body begins to turn to stored protein for energy. This means it needs to break down muscle tissues full of protein; the muscles break down very quickly. Protein is essential for cells to work correctly, and when it runs out, the cells can no longer function.
The cause of death due to starvation is usually an infection or the result of tissue breakdown. This is due to the body becoming unable to produce enough energy to fight off bacteria and viruses. The final stage of starvation includes signals like hair color loss, skin flaking, swelling in the extremities, and a bloated belly. Even though they may feel hunger, people in the final stage of starvation usually cannot eat enough food to recover without significant medical intervention.
Causes
Starvation occurs when the body expends more energy than it takes in for an extended period of time. This imbalance can arise from one or more medical conditions or circumstantial situations, which can include:
Medical reasons
Anorexia nervosa
Bulimia nervosa
Eating disorder, not otherwise specified
Celiac disease
Coma
Major depressive disorder
Diabetes mellitus
Digestive disease
Constant vomiting
Circumstantial causes
Child, elder, or dependent abuse
Famine for any reason, such as political strife and war
Hunger striking
Excessive fasting
Poverty
Torture
Biochemistry
With a typical high-carbohydrate diet, the human body relies on free blood glucose as its primary energy source. Glucose can be obtained directly from dietary sugars and by the breakdown of other carbohydrates. In the absence of dietary sugars and carbohydrates, glucose is obtained from the breakdown of stored glycogen. Glycogen is a readily-accessible storage form of glucose, stored in notable quantities in the liver and skeletal muscle.
After the exhaustion of the glycogen reserve, and for the next two to three days, fatty acids become the principal metabolic fuel. At first, the brain continues to use glucose. If a non-brain tissue is using fatty acids as its metabolic fuel, the use of glucose in the same tissue is switched off. Thus, when fatty acids are being broken down for energy, all of the remaining glucose is made available for use by the brain.
After two or three days of fasting, the liver begins to synthesize ketone bodies from precursors obtained from fatty acid breakdown. The brain uses these ketone bodies as fuel, thus cutting its requirement for glucose. After fasting for three days, the brain gets 30% of its energy from ketone bodies. After four days, this may increase to 70% or more. Thus, the production of ketone bodies cuts the brain's glucose requirement from 80 g per day to 30 g per day, about 35% of normal, with 65% derived from ketone bodies. But of the brain's remaining 30 g requirement, 20 g per day can be produced by the liver from glycerol (itself a product of fat breakdown). This still leaves a deficit of about 10 g of glucose per day that must be supplied from another source; this other source will be the body's own proteins.
After exhaustion of fat stores, the cells in the body begin to break down protein. This releases alanine and lactate produced from pyruvate, which can be converted into glucose by the liver. Since much of human muscle mass is protein, this phenomenon is responsible for the wasting away of muscle mass seen in starvation. However, the body is able to choose which cells will break down protein and which will not. About 2–3 g of protein has to be broken down to synthesize 1 g of glucose; about 20–30 g of protein is broken down each day to make 10 g of glucose to keep the brain alive. However, this number may decrease the longer the fasting period is continued, in order to conserve protein.
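The daily figures quoted above can be tied together with a short back-of-the-envelope calculation. The Python sketch below simply restates the text's round numbers (a 30 g/day brain glucose requirement under ketosis, 20 g/day supplied from glycerol, and 2–3 g of protein per gram of glucose); the variable names are illustrative and the values are not clinical measurements.
# Back-of-the-envelope glucose budget during prolonged fasting,
# using only the round figures quoted in the text above.
BRAIN_GLUCOSE_NEED_G = 30        # g/day once ketone use is established
GLYCEROL_GLUCOSE_G = 20          # g/day produced by the liver from glycerol
PROTEIN_PER_G_GLUCOSE = (2, 3)   # g of protein consumed per g of glucose (range)

glucose_deficit = BRAIN_GLUCOSE_NEED_G - GLYCEROL_GLUCOSE_G   # 10 g/day
protein_cost = [ratio * glucose_deficit for ratio in PROTEIN_PER_G_GLUCOSE]

print(f"Glucose still needed from protein: {glucose_deficit} g/day")
print(f"Protein broken down to supply it: {protein_cost[0]}-{protein_cost[1]} g/day")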
Starvation ensues when the fat reserves are completely exhausted and protein is the only fuel source available to the body. Thus, after periods of starvation, the loss of body protein affects the function of important organs, and death results, even if there are still fat reserves left. In a leaner person, the fat reserves are depleted sooner, protein breakdown begins earlier, and therefore death occurs sooner. Ultimately, the cause of death is in general cardiac arrhythmia or cardiac arrest, brought on by tissue degradation and electrolyte imbalances. Conditions like metabolic acidosis may also kill starving people.
Prevention
Starvation can be caused by factors beyond the control of the individual. The Rome Declaration on World Food Security outlines several policies aimed at increasing food security and, consequently, preventing starvation. These include:
Poverty reduction
Prevention of wars and political instability
Food aid
Agricultural sustainability
Reduction of economic inequality
Supporting farmers in areas of food insecurity through measures such as free or subsidized fertilizers and seeds increases food harvests and reduces food prices.
Starvation has commonly been used as a method of warfare; however, it has been outlawed and is now a crime. Notable incidents in history include the blockade of Germany and the blockade of Biafra.
Treatment
Patients who suffer from starvation can be treated, but this must be done cautiously to avoid refeeding syndrome. Rest and warmth must be provided and maintained. Food can be given gradually in small quantities, with the amount increased over time. Proteins may be administered intravenously to raise the level of serum proteins. In the most severe situations, hospice care and opioid medications can be used.
Organizations
Many organizations have been highly effective at reducing starvation in different regions. Aid agencies give direct assistance to individuals, while political organizations pressure political leaders to enact more macro-scale policies that will reduce famine and provide aid.
Statistics
According to estimates by the Food and Agriculture Organization, between 720 and 811 million people were affected by hunger globally in 2020. This was a decrease from an estimated 925 million in 2010 and roughly 1 billion in 2009. In 2007, 923 million people were reported as being undernourished, an increase of 80 million since 1990–92.
An estimated 820 million people did not have enough to eat in 2018, up from 811 million in the previous year, marking the third consecutive year of increase.
As the definitions of starving and malnourished people are different, the number of starving people is different from that of malnourished. Generally, far fewer people are starving than are malnourished.
The proportion of malnourished and starving people in the world has been more or less continually decreasing for at least several centuries. This is due to an increasing supply of food and to overall gains in economic efficiency. In 40 years, the proportion of malnourished people in the developing world has been more than halved. The proportion of starving people has decreased even faster.
Capital punishment
Historically, starvation has been used as a death sentence. From the beginning of civilization to the Middle Ages, people were immured and left to die for want of food.
In ancient Greco-Roman societies, starvation was sometimes used to dispose of guilty upper-class citizens, especially erring female members of patrician families. In the year 31, Livilla, the niece and daughter-in-law of Tiberius, was discreetly starved to death by her mother for her adulterous relationship with Sejanus and for her complicity in the murder of her own husband, Drusus the Younger.
Another daughter-in-law of Tiberius, named Agrippina the Elder (a granddaughter of Augustus and the mother of Caligula), also died of starvation, in 33 AD; however, it is unclear if her starvation was self-inflicted.
A son and daughter of Agrippina were also executed by starvation for political reasons; Drusus Caesar, her second son, was put in prison in 33 AD, and starved to death by orders of Tiberius (he managed to stay alive for nine days by chewing the stuffing of his bed); Agrippina's youngest daughter, Julia Livilla, was exiled on an island in 41 by her uncle, Emperor Claudius, and her death by starvation was arranged by the empress Messalina.
It is also possible that Vestal Virgins were starved when found guilty of breaking their vows of celibacy.
Ugolino della Gherardesca, his sons, and other members of his family were immured in the Muda, a tower of Pisa, and starved to death in the thirteenth century. Dante, his contemporary, wrote about Gherardesca in his masterpiece The Divine Comedy.
In 1317, King Birger of Sweden imprisoned his two brothers for a coup they had staged several years earlier (the Nyköping Banquet). According to legend, they died of starvation a few weeks later, since their brother had thrown the prison key into the castle moat.
In Cornwall in the UK in 1671, John Trehenban from St Columb Major was condemned to be starved to death in a cage at Castle An Dinas for the murder of two girls.
The Makah, a Native American tribe inhabiting the Pacific Northwest near the modern border of Canada and the United States, practiced death by starvation as a punishment for slaves.
See also
2007–2008 world food price crisis
Anorexia mirabilis
Cachexia
Global Hunger Index
Starvation mode
Famine scales
Hunger strike
List of famines
List of people who died of starvation
Marasmus
Protein poisoning
References
Further reading
U.N. Chief: Hunger Kills 17,000 Kids Daily - by CNN
Causes of death
Effects of external causes
Execution methods
Famines
Hunger
Malnutrition
Physical torture techniques
Weight loss
Suicide by starvation and dehydration
ASD
ASD most often refers to:
Autism spectrum disorder, a neurodevelopmental condition
Acute stress disorder, a psychological response
ASD may also refer to:
In science and technology
Biology
ASD (database), an online directory of allosteric proteins and their structure
Asd RNA motif, a structure in lactic-acid bacterium ribonucleic acid
Aspartate-semialdehyde dehydrogenase, an amino-acid-synthesising enzyme in plants, fungi and bacteria
Medicine
Antiseizure drug, an epilepsy medication
Antiseptic Dorogov's Stimulator, a Russian topical veterinary drug
Arthroscopic subacromial decompression, a surgical procedure on the shoulder
Atrial septal defect, a congenital heart defect
Computing
Accredited Symbian Developer, a computer programming qualification
Adaptive software development, a software development process
Aircraft and Scenery Designer, an add-on for the Microsoft Flight Simulator 4.0 video game
Application Specific Device, a Wi-Fi certification type
Other uses in science and technology
Active sound design, a technology used in cars to alter or enhance the sound inside and outside of the vehicle
Adjustable-speed drive, of an electric motor
Allowable stress design, a structural design methodology
Aspirating smoke detector, an indoor fire-protection device
Acceleration spectral density, a mechanical vibration test parameter
Transport
Aeronautical Systems Division (1961-1992), US Air Force technical division
Air Sinai, by ICAO code
Amsterdam Centraal railway station, station code
Andros Town International Airport, by IATA code
Slidell Airport, by FAA LID
Education
United States
Academy for Science and Design, Nashua, New Hampshire
Alabama School for the Deaf, part of the Alabama Institute for Deaf and Blind
Allentown School District, Pennsylvania
American School for the Deaf, West Hartford, Connecticut
Anchorage School District, Alaska
Armstrong School District (Pennsylvania)
Ashland School District (Oregon)
Avondale School District, Auburn Hills, Michigan
Other places
American School of Doha, Qatar
American School of Douala, Cameroon
American School of Dubai
Government and politics
AeroSpace and Defence Industries Association of Europe, a European business association
Alliance for Securing Democracy, a trans-Atlantic group
Alliance for Social Democracy, a political party in Benin
Architectural Services Department, Hong Kong
Australian Signals Directorate, intelligence agency
United States Assistant Secretary of Defense, one of several senior US Department of Defense officials
Other uses
ASD (album), 2015, by A Skylit Drive, an American band
Asas language, by its ISO 639 code
A. S. Byatt (born 1936), English critic, novelist, poet and short story writer, who was born Antonia Susan Drabble and whose married name is Antonia Susan Duffy
Association for the Study of Dreams
See also
Diarrhea
Diarrhea (American English), also spelled diarrhoea or diarrhœa (British English), is the condition of having at least three loose, liquid, or watery bowel movements in a day. It often lasts for a few days and can result in dehydration due to fluid loss. Signs of dehydration often begin with loss of the normal stretchiness of the skin and irritable behaviour. This can progress to decreased urination, loss of skin color, a fast heart rate, and a decrease in responsiveness as it becomes more severe. Loose but non-watery stools in babies who are exclusively breastfed, however, are normal.
The most common cause is an infection of the intestines due to a virus, bacterium, or parasite—a condition also known as gastroenteritis. These infections are often acquired from food or water that has been contaminated by feces, or directly from another person who is infected. The three types of diarrhea are: short duration watery diarrhea, short duration bloody diarrhea, and persistent diarrhea (lasting more than two weeks, which can be either watery or bloody). The short duration watery diarrhea may be due to cholera, although this is rare in the developed world. If blood is present, it is also known as dysentery. A number of non-infectious causes can result in diarrhea. These include lactose intolerance, irritable bowel syndrome, non-celiac gluten sensitivity, celiac disease, inflammatory bowel disease such as ulcerative colitis, hyperthyroidism, bile acid diarrhea, and a number of medications. In most cases, stool cultures to confirm the exact cause are not required.
Diarrhea can be prevented by improved sanitation, clean drinking water, and hand washing with soap. Breastfeeding for at least six months and vaccination against rotavirus is also recommended. Oral rehydration solution (ORS)—clean water with modest amounts of salts and sugar—is the treatment of choice. Zinc tablets are also recommended. These treatments have been estimated to have saved 50 million children in the past 25 years. When people have diarrhea it is recommended that they continue to eat healthy food, and babies continue to be breastfed. If commercial ORS is not available, homemade solutions may be used. In those with severe dehydration, intravenous fluids may be required. Most cases, however, can be managed well with fluids by mouth. Antibiotics, while rarely used, may be recommended in a few cases such as those who have bloody diarrhea and a high fever, those with severe diarrhea following travelling, and those who grow specific bacteria or parasites in their stool. Loperamide may help decrease the number of bowel movements but is not recommended in those with severe disease.
About 1.7 to 5 billion cases of diarrhea occur per year. It is most common in developing countries, where young children get diarrhea on average three times a year. Total deaths from diarrhea are estimated at 1.53 million in 2019—down from 2.9 million in 1990. In 2012, it was the second most common cause of deaths in children younger than five (0.76 million or 11%). Frequent episodes of diarrhea are also a common cause of malnutrition and the most common cause in those younger than five years of age. Other long term problems that can result include stunted growth and poor intellectual development.
Terminology
The word diarrhea is from the Ancient Greek διάρροια (diárrhoia), from διά ("through") and ῥέω ("flow").
Diarrhea is the spelling in American English, whereas diarrhoea is the spelling in British English.
Slang terms for the condition include "the runs", "the squirts" (or "squits" in Britain) and "the trots".
The word is often pronounced /ˌdaɪəˈriːə/.
Definition
Diarrhea is defined by the World Health Organization as having three or more loose or liquid stools per day, or as having more stools than is normal for that person.
Acute diarrhea is defined by the World Gastroenterology Organization as an abnormally frequent discharge of semisolid or fluid fecal matter from the bowel, lasting less than 14 days. Acute diarrhea that is watery may be known as AWD (acute watery diarrhea).
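Purely as a schematic restatement of the two definitions above (three or more loose or liquid stools in a day, and an acute episode lasting less than 14 days), the following Python sketch encodes them as simple threshold checks. The data class and field names are invented for illustration, and this is in no way a diagnostic tool.
from dataclasses import dataclass

@dataclass
class StoolDiary:
    loose_or_liquid_stools_per_day: int   # worst day of the episode
    duration_days: int

def meets_who_definition(episode: StoolDiary) -> bool:
    # WHO definition quoted above: three or more loose/liquid stools per day
    return episode.loose_or_liquid_stools_per_day >= 3

def duration_category(episode: StoolDiary) -> str:
    # "Acute" per the World Gastroenterology Organization: lasting < 14 days
    return "acute" if episode.duration_days < 14 else "persistent"

example = StoolDiary(loose_or_liquid_stools_per_day=4, duration_days=3)
print(meets_who_definition(example), duration_category(example))   # True acute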
Secretory
Secretory diarrhea means that there is an increase in the active secretion, or there is an inhibition of absorption. There is little to no structural damage. The most common cause of this type of diarrhea is a cholera toxin that stimulates the secretion of anions, especially chloride ions (Cl–). Therefore, to maintain a charge balance in the gastrointestinal tract, sodium (Na+) is carried with it, along with water. In this type of diarrhea intestinal fluid secretion is isotonic with plasma even during fasting. It continues even when there is no oral food intake.
Osmotic
Osmotic diarrhea occurs when too much water is drawn into the bowels. If a person drinks solutions with excessive sugar or excessive salt, these can draw water from the body into the bowel and cause osmotic diarrhea. Osmotic diarrhea can also result from maldigestion, e.g. pancreatic disease or coeliac disease, in which the nutrients are left in the lumen to pull in water. Or it can be caused by osmotic laxatives (which work to alleviate constipation by drawing water into the bowels). In healthy individuals, too much magnesium or vitamin C or undigested lactose can produce osmotic diarrhea and distention of the bowel. A person who has lactose intolerance can have difficulty absorbing lactose after an extraordinarily high intake of dairy products. In persons who have fructose malabsorption, excess fructose intake can also cause diarrhea. High-fructose foods that also have a high glucose content are more absorbable and less likely to cause diarrhea. Sugar alcohols such as sorbitol (often found in sugar-free foods) are difficult for the body to absorb and, in large amounts, may lead to osmotic diarrhea. In most of these cases, osmotic diarrhea stops when the offending agent, e.g. milk or sorbitol, is stopped.
Exudative
Exudative diarrhea occurs with the presence of blood and pus in the stool. This occurs with inflammatory bowel diseases, such as Crohn's disease or ulcerative colitis, and other severe infections such as E. coli or other forms of food poisoning.
Inflammatory
Inflammatory diarrhea occurs when there is damage to the mucosal lining or brush border, which leads to a passive loss of protein-rich fluids and a decreased ability to absorb these lost fluids. Features of all three of the other types of diarrhea can be found in this type of diarrhea. It can be caused by bacterial infections, viral infections, parasitic infections, or autoimmune problems such as inflammatory bowel diseases. It can also be caused by tuberculosis, colon cancer, and enteritis.
Dysentery
If there is blood visible in the stools, it is also known as dysentery. The blood is a trace of an invasion of bowel tissue. Dysentery is a symptom of, among others, Shigella, Entamoeba histolytica, and Salmonella.
Health effects
Diarrheal disease may have a negative impact on both physical fitness and mental development. "Early childhood malnutrition resulting from any cause reduces physical fitness and work productivity in adults", and diarrhea is a primary cause of childhood malnutrition. Further, evidence suggests that diarrheal disease has significant impacts on mental development and health; it has been shown that, even when controlling for helminth infection and early breastfeeding, children who had experienced severe diarrhea had significantly lower scores on a series of tests of intelligence.
Diarrhea can cause electrolyte imbalances, kidney impairment, dehydration, and defective immune system responses. Oral drugs given during diarrhea may fail to produce their therapeutic effect because the medication travels too quickly through the digestive system, limiting the time available for absorption. Clinicians try to manage this by reducing the dosage of medication, changing the dosing schedule, discontinuing the drug, and rehydrating the patient. The interventions to control the diarrhea are not often effective. Diarrhea can have a profound effect on quality of life, because fecal incontinence is one of the leading factors for placing older adults in long-term care facilities (nursing homes).
Causes
In the latter stages of human digestion, ingested materials are inundated with water and digestive fluids such as gastric acid, bile, and digestive enzymes in order to break them down into their nutrient components, which are then absorbed into the bloodstream via the intestinal tract in the small intestine. Prior to defecation, the large intestine reabsorbs the water and other digestive solvents in the waste product in order to maintain proper hydration and overall equilibrium. Diarrhea occurs when the large intestine is prevented, for any number of reasons, from sufficiently absorbing the water or other digestive fluids from fecal matter, resulting in a liquid, or "loose", bowel movement.
Acute diarrhea is most commonly due to viral gastroenteritis with rotavirus, which accounts for 40% of cases in children under five. In travelers, however, bacterial infections predominate. Various toxins such as mushroom poisoning and drugs can also cause acute diarrhea.
Chronic diarrhea can be the part of the presentations of a number of chronic medical conditions affecting the intestine. Common causes include ulcerative colitis, Crohn's disease, microscopic colitis, celiac disease, irritable bowel syndrome, and bile acid malabsorption.
Infections
There are many causes of infectious diarrhea, which include viruses, bacteria and parasites. Infectious diarrhea is frequently referred to as gastroenteritis. Norovirus is the most common cause of viral diarrhea in adults, but rotavirus is the most common cause in children under five years old. Adenovirus types 40 and 41, and astroviruses cause a significant number of infections. Shiga-toxin producing Escherichia coli, such as E. coli O157:H7, are the most common cause of infectious bloody diarrhea in the United States.
Campylobacter spp. are a common cause of bacterial diarrhea, but infections by Salmonella spp., Shigella spp. and some strains of Escherichia coli are also a frequent cause.
In the elderly, particularly those who have been treated with antibiotics for unrelated infections, a toxin produced by Clostridioides difficile often causes severe diarrhea.
Parasites, particularly protozoa e.g., Cryptosporidium spp., Giardia spp., Entamoeba histolytica, Blastocystis spp., Cyclospora cayetanensis, are frequently the cause of diarrhea that involves chronic infection. The broad-spectrum antiparasitic agent nitazoxanide has shown efficacy against many diarrhea-causing parasites.
Other infectious agents, such as parasites or bacterial toxins, may exacerbate symptoms. In sanitary living conditions where there is ample food and a supply of clean water, an otherwise healthy person usually recovers from viral infections in a few days. However, for ill or malnourished individuals, diarrhea can lead to severe dehydration and can become life-threatening.
Sanitation
Open defecation is a leading cause of infectious diarrhea leading to death.
Poverty is a good indicator of the rate of infectious diarrhea in a population. This association does not stem from poverty itself, but rather from the conditions under which impoverished people live. The absence of certain resources compromises the ability of the poor to defend themselves against infectious diarrhea. "Poverty is associated with poor housing, crowding, dirt floors, lack of access to clean water or to sanitary disposal of fecal waste (sanitation), cohabitation with domestic animals that may carry human pathogens, and a lack of refrigerated storage for food, all of which increase the frequency of diarrhea... Poverty also restricts the ability to provide age-appropriate, nutritionally balanced diets or to modify diets when diarrhea develops so as to mitigate and repair nutrient losses. The impact is exacerbated by the lack of adequate, available, and affordable medical care."
One of the most common causes of infectious diarrhea is a lack of clean water. Often, improper fecal disposal leads to contamination of groundwater. This can lead to widespread infection among a population, especially in the absence of water filtration or purification. Human feces contains a variety of potentially harmful human pathogens.
Nutrition
Proper nutrition is important for health and functioning, including the prevention of infectious diarrhea. It is especially important to young children who do not have a fully developed immune system. Zinc deficiency, a condition often found in children in developing countries can, even in mild cases, have a significant impact on the development and proper functioning of the human immune system. Indeed, this relationship between zinc deficiency and reduced immune functioning corresponds with an increased severity of infectious diarrhea. Children who have lowered levels of zinc have a greater number of instances of diarrhea, severe diarrhea, and diarrhea associated with fever. Similarly, vitamin A deficiency can cause an increase in the severity of diarrheal episodes. However, there is some discrepancy when it comes to the impact of vitamin A deficiency on the rate of disease. While some argue that a relationship does not exist between the rate of disease and vitamin A status, others suggest an increase in the rate associated with deficiency. Given that estimates suggest 127 million preschool children worldwide are vitamin A deficient, this population has the potential for increased risk of disease contraction.
Malabsorption
Malabsorption is the inability to absorb food fully, mostly from disorders in the small bowel, but also due to maldigestion from diseases of the pancreas.
Causes include:
enzyme deficiencies or mucosal abnormality, as in food allergy and food intolerance, e.g. celiac disease (gluten intolerance), lactose intolerance (intolerance to milk sugar, common in non-Europeans), and fructose malabsorption.
pernicious anemia, or impaired bowel function due to the inability to absorb vitamin B12,
loss of pancreatic secretions, which may be due to cystic fibrosis or pancreatitis,
structural defects, like short bowel syndrome (surgically removed bowel) and radiation fibrosis, such as usually follows cancer treatment and other drugs, including agents used in chemotherapy; and
certain drugs, like orlistat, which inhibits the absorption of fat.
Inflammatory bowel disease
The two overlapping types here are of unknown origin:
Ulcerative colitis is marked by chronic bloody diarrhea and inflammation mostly affects the distal colon near the rectum.
Crohn's disease typically affects fairly well demarcated segments of bowel in the colon and often affects the end of the small bowel.
Irritable bowel syndrome
Another possible cause of diarrhea is irritable bowel syndrome (IBS), which usually presents with abdominal discomfort relieved by defecation and unusual stool (diarrhea or constipation) for at least three days a week over the previous three months. Symptoms of diarrhea-predominant IBS can be managed through a combination of dietary changes, soluble fiber supplements and medications such as loperamide or codeine. About 30% of patients with diarrhea-predominant IBS have bile acid malabsorption diagnosed with an abnormal SeHCAT test.
Other diseases
Diarrhea can be caused by other diseases and conditions, namely:
Chronic ethanol ingestion
Hyperthyroidism
Certain medications
Bile acid malabsorption
Ischemic bowel disease: This usually affects older people and can be due to blocked arteries.
Microscopic colitis, a type of inflammatory bowel disease where changes are seen only on histological examination of colonic biopsies.
Bile salt malabsorption (primary bile acid diarrhea) where excessive bile acids in the colon produce a secretory diarrhea.
Hormone-secreting tumors: some hormones, e.g. serotonin, can cause diarrhea if secreted in excess (usually from a tumor).
Chronic mild diarrhea in infants and toddlers may occur with no obvious cause and with no other ill effects; this condition is called toddler's diarrhea.
Environmental enteropathy
Radiation enteropathy following treatment for pelvic and abdominal cancers.
Medications
Over 700 medications, such as penicillin, are known to cause diarrhea. The classes of medications that are known to cause diarrhea are laxatives, antacids, heartburn medications, antibiotics, anti-neoplastic drugs, anti-inflammatories as well as many dietary supplements.
Pathophysiology
Evolution
According to two researchers, Nesse and Williams, diarrhea may function as an evolved expulsion defense mechanism. As a result, if it is stopped, there might be a delay in recovery. In support of this argument, they cite research published in 1973 which found that treating Shigella infections with the anti-diarrheal drug co-phenotrope (Lomotil) caused people to stay feverish twice as long as those not so treated. The researchers themselves observed: "Lomotil may be contraindicated in shigellosis. Diarrhea may represent a defense mechanism".
Diagnostic approach
The following types of diarrhea may indicate further investigation is needed:
In infants
Moderate or severe diarrhea in young children
Associated with blood
Continues for more than two days
Associated non-cramping abdominal pain, fever, weight loss, etc.
In travelers
In food handlers, because of the potential to infect others;
In institutions such as hospitals, child care centers, or geriatric and convalescent homes.
A severity score is used to aid diagnosis in children.
When diarrhea lasts for more than four weeks a number of further tests may be recommended including:
Complete blood count and a ferritin if anemia is present
Thyroid stimulating hormone
Tissue transglutaminase for celiac disease
Fecal calprotectin to exclude inflammatory bowel disease
Stool tests for ova and parasites as well as for Clostridioides difficile
A colonoscopy or fecal immunochemical testing for cancer, including biopsies to detect microscopic colitis
Testing for bile acid diarrhea with SeHCAT, 7α-hydroxy-4-cholesten-3-one or fecal bile acids depending on availability
Hydrogen breath test looking for lactose intolerance
Further tests if immunodeficiency, pelvic radiation disease or small intestinal bacterial overgrowth suspected.
A 2019 guideline recommended that testing for ova and parasites is only needed in people who are at high risk, though it recommended routine testing for giardia. Testing of erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP) was not recommended.
Epidemiology
Worldwide in 2004, approximately 2.5 billion cases of diarrhea occurred, which resulted in 1.5 million deaths among children under the age of five. Greater than half of these were in Africa and South Asia. This is down from a death rate of 4.5 million in 1980 for gastroenteritis. Diarrhea remains the second leading cause of infant mortality (16%) after pneumonia (17%) in this age group.
The majority of such cases occur in the developing world, with over half of the recorded cases of childhood diarrhea occurring in Africa and Asia, with 696 million and 1.2 billion cases, respectively, compared to only 480 million in the rest of the world.
Infectious diarrhea resulted in about 0.7 million deaths in children under five years old in 2011 and 250 million lost school days. In the Americas, diarrheal disease accounts for a total of 10% of deaths among children aged 1–59 months while in South East Asia, it accounts for 31.3% of deaths. It is estimated that around 21% of child mortalities in developing countries are due to diarrheal disease.
The World Health Organization has reported that "deaths due to diarrhoeal diseases have dropped by 45%, from sixth leading cause of death in 2000 to thirteenth in 2021."
Even though diarrhea is best known in humans, it affects many other species, notably among primates. The cecal appendix, when present, appears to afford some protection against diarrhea to young primates.
Prevention
Sanitation
Numerous studies have shown that improvements in drinking water and sanitation (WASH) lead to decreased risks of diarrhea. Such improvements might include, for example, the use of water filters, the provision of high-quality piped water, and sewer connections.
In institutions, communities, and households, interventions that promote hand washing with soap lead to significant reductions in the incidence of diarrhea. The same applies to preventing open defecation at a community-wide level and providing access to improved sanitation. This includes use of toilets and implementation of the entire sanitation chain connected to the toilets (collection, transport, disposal or reuse of human excreta).
There is limited evidence that safe disposal of child or adult feces can prevent diarrheal disease.
Hand washing
Basic sanitation techniques can have a profound effect on the transmission of diarrheal disease. The implementation of hand washing using soap and water, for example, has been experimentally shown to reduce the incidence of disease by approximately 30–48%. Hand washing in developing countries, however, is compromised by poverty as acknowledged by the CDC: "Handwashing is integral to disease prevention in all parts of the world; however, access to soap and water is limited in a number of less developed countries. This lack of access is one of many challenges to proper hygiene in less developed countries." Solutions to this barrier require the implementation of educational programs that encourage sanitary behaviours.
Water
Given that water contamination is a major means of transmitting diarrheal disease, efforts to provide a clean water supply and improved sanitation have the potential to dramatically cut the rate of disease incidence. In fact, it has been proposed that improved water, sanitation, and hygiene could reduce child mortality from diarrheal disease by as much as 88%. Similarly, a meta-analysis of numerous studies on improving water supply and sanitation shows a 22–27% reduction in disease incidence, and a 21–30% reduction in mortality rate associated with diarrheal disease.
Chlorine treatment of water, for example, has been shown to reduce both the risk of diarrheal disease, and of contamination of stored water with diarrheal pathogens.
Vaccination
Immunization against the pathogens that cause diarrheal disease is a viable prevention strategy, however it does require targeting certain pathogens for vaccination. In the case of Rotavirus, which was responsible for around 6% of diarrheal episodes and 20% of diarrheal disease deaths in the children of developing countries, use of a Rotavirus vaccine in trials in 1985 yielded a slight (2–3%) decrease in total diarrheal disease incidence, while reducing overall mortality by 6–10%. Similarly, a Cholera vaccine showed a strong reduction in morbidity and mortality, though the overall impact of vaccination was minimal as Cholera is not one of the major causative pathogens of diarrheal disease. Since this time, more effective vaccines have been developed that have the potential to save many thousands of lives in developing nations, while reducing the overall cost of treatment, and the costs to society.
Rotavirus vaccine decreases the rates of diarrhea in a population. New vaccines against rotavirus, Shigella, Enterotoxigenic Escherichia coli (ETEC), and cholera are under development, as well as other causes of infectious diarrhea.
Nutrition
Dietary deficiencies in developing countries can be combated by promoting better eating practices. Zinc supplementation proved successful showing a significant decrease in the incidence of diarrheal disease compared to a control group. The majority of the literature suggests that vitamin A supplementation is advantageous in reducing disease incidence. Development of a supplementation strategy should take into consideration the fact that vitamin A supplementation was less effective in reducing diarrhea incidence when compared to vitamin A and zinc supplementation, and that the latter strategy was estimated to be significantly more cost effective.
Breastfeeding
Breastfeeding practices have been shown to have a dramatic effect on the incidence of diarrheal disease in poor populations. Studies across a number of developing nations have shown that those who receive exclusive breastfeeding during their first 6 months of life are better protected against infection with diarrheal diseases. One study in Brazil found that non-breastfed infants were 14 times more likely to die from diarrhea than exclusively breastfed infants. Exclusive breastfeeding is currently recommended for the first six months of an infant's life by the WHO, with continued breastfeeding until at least two years of age.
Others
Probiotics decrease the risk of diarrhea in those taking antibiotics. Insecticide spraying may reduce fly numbers and the risk of diarrhea in children in a setting where there is seasonal variations in fly numbers throughout the year.
Management
In many cases of diarrhea, replacing lost fluid and salts is the only treatment needed. This is usually by mouth – oral rehydration therapy – or, in severe cases, intravenously. Diet restrictions such as the BRAT diet are no longer recommended. Research does not support the limiting of milk to children as doing so has no effect on duration of diarrhea. To the contrary, WHO recommends that children with diarrhea continue to eat as sufficient nutrients are usually still absorbed to support continued growth and weight gain, and that continuing to eat also speeds up recovery of normal intestinal functioning. CDC recommends that children and adults with cholera also continue to eat. There is no evidence that early refeeding in children increases the inappropriate use of intravenous fluids, episodes of vomiting, or the risk of persistent diarrhea.
Medications such as loperamide (Imodium) and bismuth subsalicylate may be beneficial; however they may be contraindicated in certain situations.
Fluids
Oral rehydration solution (ORS) (a slightly sweetened and salty water) can be used to prevent dehydration. Standard home solutions such as salted rice water, salted yogurt drinks, vegetable and chicken soups with salt can be given. Home solutions such as water in which cereal has been cooked, unsalted soup, green coconut water, weak tea (unsweetened), and unsweetened fresh fruit juices can have from half a teaspoon to a full teaspoon of salt (from one-and-a-half to three grams) added per liter. Clean plain water can also be one of several fluids given. There are commercial solutions such as Pedialyte, and relief agencies such as UNICEF widely distribute packets of salts and sugar. A WHO publication for physicians recommends a homemade ORS consisting of one liter water with one teaspoon salt (3 grams) and two tablespoons sugar (18 grams) added (approximately the "taste of tears"). Rehydration Project recommends adding the same amount of sugar but only one-half a teaspoon of salt, stating that this more dilute approach is less risky with very little loss of effectiveness. Both agree that drinks with too much sugar or salt can make dehydration worse.
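As a rough arithmetic check of the homemade recipe quoted above (one liter of water with about 3 g of salt and 18 g of sugar), the Python sketch below converts those amounts into approximate molar concentrations using standard molar masses. The comparison figure in the comments, roughly 75 mmol/L each of sodium and glucose in WHO low-osmolarity ORS, is an approximate and commonly cited value rather than something stated in this text.
# Approximate concentrations of the homemade ORS described above.
NACL_G_PER_MOL = 58.44       # molar mass of table salt (NaCl)
SUCROSE_G_PER_MOL = 342.3    # molar mass of household sugar (sucrose)

def homemade_ors_mmol_per_l(salt_g=3.0, sugar_g=18.0, water_l=1.0):
    sodium = salt_g / NACL_G_PER_MOL / water_l * 1000       # mmol/L of Na+
    sucrose = sugar_g / SUCROSE_G_PER_MOL / water_l * 1000  # mmol/L of sucrose
    return round(sodium), round(sucrose)

print(homemade_ors_mmol_per_l())   # about (51, 53)
# For comparison, commercial WHO low-osmolarity ORS contains roughly
# 75 mmol/L sodium and 75 mmol/L glucose (approximate figures), so the
# homemade mix errs on the dilute side, consistent with the advice above
# that too much salt or sugar can make dehydration worse.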
Appropriate amounts of supplemental zinc and potassium should be added if available. But the availability of these should not delay rehydration. As WHO points out, the most important thing is to begin preventing dehydration as early as possible. In another example of prompt ORS hopefully preventing dehydration, CDC recommends for the treatment of cholera continuing to give Oral Rehydration Solution during travel to medical treatment.
Vomiting often occurs during the first hour or two of treatment with ORS, especially if a child drinks the solution too quickly, but this seldom prevents successful rehydration since most of the fluid is still absorbed. WHO recommends that if a child vomits, to wait five or ten minutes and then start to give the solution again more slowly.
Drinks especially high in simple sugars, such as soft drinks and fruit juices, are not recommended in children under five as they may increase dehydration. A solution that is too rich in sugar or salt draws water from the rest of the body into the gut, just as if the person were to drink sea water. Plain water may be used if more specific and effective ORT preparations are unavailable or are not palatable. Additionally, a mix of plain water and drinks that are somewhat too rich in sugar and salt can be given to the same person, with the goal of providing a moderate amount of sodium overall. A nasogastric tube can be used in young children to administer fluids if warranted.
Eating
The WHO recommends a child with diarrhea continue to be fed. Continued feeding speeds the recovery of normal intestinal function. In contrast, children whose food is restricted have diarrhea of longer duration and recover intestinal function more slowly. The WHO states "Food should never be withheld and the child's usual foods should not be diluted. Breastfeeding should always be continued." In the specific example of cholera, the CDC makes the same recommendation. Breast-fed infants with diarrhea often choose to breastfeed more, and should be encouraged to do so. In young children who are not breast-fed and live in the developed world, a lactose-free diet may be useful to speed recovery.
Eating food containing soluble fibre may help, but insoluble fibre might make it worse.
Medications
Antidiarrheal agents can be classified into four different groups: antimotility, antisecretory, adsorbent, and anti-infectious. While antibiotics are beneficial in certain types of acute diarrhea, they are usually not used except in specific situations. There are concerns that antibiotics may increase the risk of hemolytic uremic syndrome in people infected with Escherichia coli O157:H7. In resource-poor countries, treatment with antibiotics may be beneficial. However, some bacteria are developing antibiotic resistance, particularly Shigella. Antibiotics can also cause diarrhea, and antibiotic-associated diarrhea is the most common adverse effect of treatment with general antibiotics.
While bismuth compounds (Pepto-Bismol) decreased the number of bowel movements in those with travelers' diarrhea, they do not decrease the length of illness. Anti-motility agents like loperamide are also effective at reducing the number of stools but not the duration of disease. These agents should be used only if bloody diarrhea is not present.
Diosmectite, a natural aluminomagnesium silicate clay, is effective in alleviating symptoms of acute diarrhea in children, and also has some effects in chronic functional diarrhea, radiation-induced diarrhea, and chemotherapy-induced diarrhea. Another absorbent agent used for the treatment of mild diarrhea is kaopectate.
Racecadotril, an antisecretory medication, may be used to treat diarrhea in children and adults. It has better tolerability than loperamide, as it causes less constipation and flatulence. However, it has little benefit in improving acute diarrhea in children.
Bile acid sequestrants such as cholestyramine can be effective in chronic diarrhea due to bile acid malabsorption. Therapeutic trials of these drugs are indicated in chronic diarrhea if bile acid malabsorption cannot be diagnosed with a specific test, such as SeHCAT retention.
Alternative therapies
Zinc supplementation may benefit children over six months old with diarrhea in areas with high rates of malnourishment or zinc deficiency. This supports the World Health Organization guidelines for zinc, but not in the very young.
A Cochrane Review from 2020 concludes that probiotics make little or no difference to people who have diarrhea lasting 2 days or longer and that there is no proof that they reduce its duration. The probiotic lactobacillus can help prevent antibiotic-associated diarrhea in adults but possibly not children. For those with lactose intolerance, taking digestive enzymes containing lactase when consuming dairy products often improves symptoms.
See also
References
External links
WHO fact sheet on diarrhoeal disease
Intestinal infectious diseases
Waterborne diseases
Diseases of intestines
Conditions diagnosed by stool test
Symptoms and signs: Digestive system and abdomen
Feces
Sanitation
Metabolic syndrome
Metabolic syndrome is a clustering of at least three of the following five medical conditions: abdominal obesity, high blood pressure, high blood sugar, high serum triglycerides, and low serum high-density lipoprotein (HDL).
Metabolic syndrome is associated with the risk of developing cardiovascular disease and type 2 diabetes. In the U.S., about 25% of the adult population has metabolic syndrome, a proportion increasing with age, particularly among racial and ethnic minorities.
Insulin resistance, metabolic syndrome, and prediabetes are closely related to one another and have overlapping aspects. The syndrome is thought to be caused by an underlying disorder of energy utilization and storage, but the cause of the syndrome is an area of ongoing medical research. Researchers debate whether a diagnosis of metabolic syndrome implies differential treatment or increases risk of cardiovascular disease beyond what is suggested by the sum of its individual components.
Signs and symptoms
The key sign of metabolic syndrome is central obesity, also known as visceral, male-pattern or apple-shaped adiposity. It is characterized by adipose tissue accumulation predominantly around the waist and trunk. Other signs of metabolic syndrome include high blood pressure, decreased fasting serum HDL cholesterol, elevated fasting serum triglyceride level, impaired fasting glucose, insulin resistance, or prediabetes. Associated conditions include hyperuricemia; fatty liver (especially in concurrent obesity) progressing to nonalcoholic fatty liver disease; polycystic ovarian syndrome in women and erectile dysfunction in men; and acanthosis nigricans.
Neck circumference
Neck circumference has been used as a simple and reliable surrogate index of upper-body subcutaneous fat accumulation. A neck circumference above a sex-specific cutoff (higher for men than for women) is considered high-risk for metabolic syndrome, and persons with large neck circumferences have a more-than-double risk of metabolic syndrome. In adults with overweight or obesity, clinically significant weight loss may protect against COVID-19, and neck circumference has been associated with the risk of being mechanically ventilated in COVID-19 patients, with a 26% increased risk for each centimeter increase in neck circumference. Moreover, hospitalized COVID-19 patients with a "large neck phenotype" on admission had a more than double risk of death.
Complications
Metabolic syndrome can lead to several serious and chronic complications, including type-2 diabetes, cardiovascular diseases, stroke, kidney disease and nonalcoholic fatty liver disease.
Furthermore, a 2023 systematic review and meta-analysis of over 13 million individuals found metabolic syndrome to be associated with a significantly increased risk of surgical complications across most types of surgery.
Causes
The mechanisms of the complex pathways of metabolic syndrome are under investigation. The pathophysiology is very complex and has been only partially elucidated. Most people affected by the condition are older, obese, sedentary, and have a degree of insulin resistance. Stress can also be a contributing factor. The most important risk factors are diet (particularly sugar-sweetened beverage consumption), genetics, aging, sedentary behavior or low physical activity, disrupted chronobiology/sleep, mood disorders/psychotropic medication use, and excessive alcohol use. The pathogenic role played in the syndrome by the excessive expansion of adipose tissue occurring under sustained overeating, and its resulting lipotoxicity was reviewed by Vidal-Puig.
Recent studies have highlighted the global prevalence of metabolic syndrome, driven by the rise in obesity and type 2 diabetes. The World Health Organization (WHO) and other major health organizations define metabolic syndrome with criteria that include central obesity, insulin resistance, hypertension, and dyslipidemia. As of 2015, metabolic syndrome affects approximately 25% of the global population, with rates significantly higher in urban areas due to increased consumption of high-calorie, low-nutrient diets and decreased physical activity. This condition is associated with a threefold increase in the risk of type 2 diabetes and cardiovascular disease, accounting for a substantial burden of non-communicable diseases globally (Saklayen, 2018).
There is debate regarding whether obesity or insulin resistance is the cause of the metabolic syndrome or if they are consequences of a more far-reaching metabolic derangement. Markers of systemic inflammation, including C-reactive protein, are often increased, as are fibrinogen, interleukin 6, tumor necrosis factor-alpha (TNF-α), and others. Some have pointed to a variety of causes, including increased uric acid levels caused by dietary fructose.
Research shows that Western diet habits are a factor in the development of metabolic syndrome, with high consumption of food that is not biochemically suited to humans. Weight gain is associated with metabolic syndrome. Rather than total adiposity, the core clinical component of the syndrome is visceral and/or ectopic fat (i.e., fat in organs not designed for fat storage) whereas the principal metabolic abnormality is insulin resistance. The continuous provision of energy via dietary carbohydrate, lipid, and protein fuels, unmatched by physical activity/energy demand, creates a backlog of the products of mitochondrial oxidation, a process associated with progressive mitochondrial dysfunction and insulin resistance.
Stress
Recent research indicates prolonged chronic stress can contribute to metabolic syndrome by disrupting the hormonal balance of the hypothalamic-pituitary-adrenal axis (HPA-axis). A dysfunctional HPA-axis causes high cortisol levels to circulate, which results in raising glucose and insulin levels, which in turn cause insulin-mediated effects on adipose tissue, ultimately promoting visceral adiposity, insulin resistance, dyslipidemia and hypertension, with direct effects on the bone, causing "low turnover" osteoporosis. HPA-axis dysfunction may explain the reported risk indication of abdominal obesity to cardiovascular disease (CVD), type 2 diabetes and stroke. Psychosocial stress is also linked to heart disease.
Obesity
Central obesity is a key feature of the syndrome, as both a sign and a cause, in that the increasing adiposity often reflected in high waist circumference may both result from and contribute to insulin resistance. However, despite the importance of obesity, affected people who are of normal weight may also be insulin-resistant and have the syndrome.
Sedentary lifestyle
Physical inactivity is a predictor of CVD events and related mortality. Many components of metabolic syndrome are associated with a sedentary lifestyle, including increased adipose tissue (predominantly central); reduced HDL cholesterol; and a trend toward increased triglycerides, blood pressure, and glucose in the genetically susceptible. Compared with individuals who watched television or videos or used their computers for less than one hour daily, those who carried out these behaviors for greater than four hours daily have a twofold increased risk of metabolic syndrome.
Aging
Metabolic syndrome affects 60% of the U.S. population older than age 50. With respect to that demographic, the percentage of women having the syndrome is higher than that of men. The age dependency of the syndrome's prevalence is seen in most populations around the world.
Diabetes mellitus type 2
The metabolic syndrome quintuples the risk of type 2 diabetes mellitus. Type 2 diabetes is considered a complication of metabolic syndrome. In people with impaired glucose tolerance or impaired fasting glucose, presence of metabolic syndrome doubles the risk of developing type 2 diabetes. It is likely that prediabetes and metabolic syndrome denote the same disorder, defined by different sets of biological markers.
The presence of metabolic syndrome is associated with a higher prevalence of CVD than found in people with type 2 diabetes or impaired glucose tolerance without the syndrome. Hypoadiponectinemia has been shown to increase insulin resistance and is considered to be a risk factor for developing metabolic syndrome.
Coronary heart disease
The approximate prevalence of the metabolic syndrome in people with coronary artery disease (CAD) is 50%, with a prevalence of 37% in people with premature coronary artery disease (age 45), particularly in women. With appropriate cardiac rehabilitation and changes in lifestyle (e.g., nutrition, physical activity, weight reduction, and, in some cases, drugs), the prevalence of the syndrome can be reduced.
Lipodystrophy
Lipodystrophic disorders in general are associated with metabolic syndrome. Both genetic (e.g., Berardinelli-Seip congenital lipodystrophy, Dunnigan familial partial lipodystrophy) and acquired (e.g., HIV-related lipodystrophy in people treated with highly active antiretroviral therapy) forms of lipodystrophy may give rise to severe insulin resistance and many of metabolic syndrome's components.
Rheumatic diseases
There is research that associates comorbidity with rheumatic diseases. Both psoriasis and psoriatic arthritis have been found to be associated with metabolic syndrome.
Chronic obstructive pulmonary disease
Metabolic syndrome is seen to be a comorbidity in up to 50 percent of those with chronic obstructive pulmonary disease (COPD). It may pre-exist or may be a consequence of the lung pathology of COPD.
Pathophysiology
It is common for there to be a development of visceral fat, after which the adipocytes (fat cells) of the visceral fat increase plasma levels of TNF-α and alter levels of other substances (e.g., adiponectin, resistin, and PAI-1). TNF-α has been shown to cause the production of inflammatory cytokines and also possibly trigger cell signaling by interaction with a TNF-α receptor that may lead to insulin resistance. An experiment with rats fed a diet with 33% sucrose has been proposed as a model for the development of metabolic syndrome. The sucrose first elevated blood levels of triglycerides, which induced visceral fat and ultimately resulted in insulin resistance. The progression from visceral fat to increased TNF-α to insulin resistance has some parallels to human development of metabolic syndrome. The increase in adipose tissue also increases the number of immune cells, which play a role in inflammation. Chronic inflammation contributes to an increased risk of hypertension, atherosclerosis and diabetes.
The involvement of the endocannabinoid system in the development of metabolic syndrome is indisputable. Endocannabinoid overproduction may induce reward system dysfunction and cause executive dysfunctions (e.g., impaired delay discounting), in turn perpetuating unhealthy behaviors. The brain is crucial in development of metabolic syndrome, modulating peripheral carbohydrate and lipid metabolism.
Metabolic syndrome can be induced by overfeeding with sucrose or fructose, particularly concomitantly with a high-fat diet. The resulting oversupply of omega-6 fatty acids, particularly arachidonic acid (AA), is an important factor in the pathogenesis of metabolic syndrome. Arachidonic acid (with its precursor, linoleic acid) serves as a substrate for the production of inflammatory mediators known as eicosanoids, whereas the arachidonic acid-containing compound diacylglycerol (DAG) is a precursor to the endocannabinoid 2-arachidonoylglycerol (2-AG) while fatty acid amide hydrolase (FAAH) mediates the metabolism of anandamide into arachidonic acid. Anandamide can also be produced from N-acylphosphatidylethanolamine via several pathways. Anandamide and 2-AG can also be hydrolyzed into arachidonic acid, potentially leading to increased eicosanoid synthesis.
Diagnosis
NCEP
As of 2023, the U.S. National Cholesterol Education Program Adult Treatment Panel III (2001) continues to be the most widely used clinical definition. It requires at least three of the following (a schematic check is sketched after the list):
Central obesity: waist circumference ≥ 102 cm or 40 inches (male), ≥ 88 cm or 35 inches (female)
Dyslipidemia: TG ≥ 1.7 mmol/L (150 mg/dL)
Dyslipidemia: HDL-C < 40 mg/dL (male), < 50 mg/dL (female)
Blood pressure ≥ 130/85 mmHg (or treated for hypertension)
Fasting plasma glucose ≥ 6.1 mmol/L (110 mg/dL)
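The sketch below restates the NCEP ATP III rule as an "at least three of five" check using the thresholds listed above. The dictionary layout, field names, and example values are invented for illustration; a real assessment requires validated measurements and clinical judgment.
def ncep_atp3_metabolic_syndrome(p: dict) -> bool:
    # Sex-specific cutoffs from the list above
    waist_cutoff_cm = 102 if p["sex"] == "male" else 88
    hdl_cutoff_mg_dl = 40 if p["sex"] == "male" else 50
    criteria = [
        p["waist_cm"] >= waist_cutoff_cm,      # central obesity
        p["triglycerides_mg_dl"] >= 150,       # dyslipidemia (triglycerides)
        p["hdl_mg_dl"] < hdl_cutoff_mg_dl,     # dyslipidemia (low HDL-C)
        p["systolic_bp"] >= 130 or p["diastolic_bp"] >= 85 or p["treated_for_hypertension"],
        p["fasting_glucose_mg_dl"] >= 110,
    ]
    return sum(criteria) >= 3

example = dict(sex="male", waist_cm=105, triglycerides_mg_dl=180, hdl_mg_dl=38,
               systolic_bp=128, diastolic_bp=82, treated_for_hypertension=False,
               fasting_glucose_mg_dl=95)
print(ncep_atp3_metabolic_syndrome(example))   # True: waist, triglyceride and HDL criteria met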
2009 Interim Joint Statement
The International Diabetes Federation Task Force on Epidemiology and Prevention; the National Heart, Lung, and Blood Institute; the American Heart Association; the World Heart Federation; the International Atherosclerosis Society; and the International Association for the Study of Obesity published an interim joint statement to harmonize the definition of the metabolic syndrome in 2009. According to this statement, the criteria for clinical diagnosis of the metabolic syndrome are three or more of the following:
Elevated waist circumference with population- and country-specific definitions
Elevated triglycerides (≥ 150 mg/dL (1.7 mmol/L))
Reduced HDL-C (≤40 mg/dL (1.0 mmol/L) in males, ≤50 mg/dL (1.3 mmol/L) in females)
Elevated blood pressure (systolic ≥130 and/or diastolic ≥85 mm Hg)
Elevated fasting glucose (≥ 100 mg/dL (5.55 mmol/L))
This definition recognizes that the risk associated with a particular waist measurement will differ in different populations. However, for international comparisons and to facilitate the etiology, the organizations agree that it is critical that a commonly agreed-upon set of criteria be used worldwide, with agreed-upon cut points for different ethnic groups and sexes. There are many people in the world of mixed ethnicity, and in those cases, pragmatic decisions will have to be made. Therefore, an international criterion of overweight may be more appropriate than ethnic specific criteria of abdominal obesity for an anthropometric component of this syndrome which results from an excess lipid storage in adipose tissue, skeletal muscle and liver.
The report notes that previous definitions of the metabolic syndrome by the International Diabetes Federation (IDF) and the revised National Cholesterol Education Program (NCEP) are very similar, and they identify individuals with a given set of symptoms as having metabolic syndrome. There are two differences, however: the IDF definition states that if body mass index (BMI) is greater than 30 kg/m2, central obesity can be assumed, and waist circumference does not need to be measured. However, this potentially excludes any subject without increased waist circumference if BMI is less than 30. Conversely, the NCEP definition indicates that metabolic syndrome can be diagnosed based on other criteria. Also, the IDF uses geography-specific cut points for waist circumference, while NCEP uses only one set of cut points for waist circumference regardless of geography.
WHO
The World Health Organization (1999) requires the presence of any one of diabetes mellitus, impaired glucose tolerance, impaired fasting glucose or insulin resistance, AND two of the following:
Blood pressure ≥ 140/90 mmHg
Dyslipidemia: triglycerides (TG) ≥ 1.695 mmol/L and HDL cholesterol ≤ 0.9 mmol/L (male), ≤ 1.0 mmol/L (female)
Central obesity: waist:hip ratio > 0.90 (male); > 0.85 (female), or BMI > 30 kg/m2
Microalbuminuria: urinary albumin excretion ratio ≥ 20 μg/min or albumin:creatinine ratio ≥ 30 mg/g
EGIR
The European Group for the Study of Insulin Resistance (1999) requires that subjects have insulin resistance (defined for purposes of clinical practicality as the top 25% of fasting insulin values among nondiabetic individuals) AND two or more of the following:
Central obesity: waist circumference ≥ 94 cm or 37 inches (male), ≥ 80 cm or 31.5 inches (female)
Dyslipidemia: TG ≥ 2.0 mmol/L (177 mg/dL) and/or HDL-C < 1.0 mmol/L (38.61 mg/dL) or treated for dyslipidemia
Blood pressure ≥ 140/90 mmHg or antihypertensive medication
Fasting plasma glucose ≥ 6.1 mmol/L (110 mg/dL)
Cardiometabolic index
The Cardiometabolic index (CMI) is a tool used to calculate risk of type 2 diabetes, non-alcoholic fatty liver disease, and metabolic issues. It is based on calculations from waist-to-height ratio and triglycerides-to-HDL cholesterol ratio.
CMI can also be used to explore connections between cardiovascular disease and erectile dysfunction. When following an anti-inflammatory diet (low-glycemic carbohydrates, fruits, vegetables, fish, and less red meat and processed food), these markers may fall, together with a significant reduction in body weight and adipose tissue.
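As a worked illustration of the calculation described above, the CMI is commonly given as the triglyceride-to-HDL-cholesterol ratio multiplied by the waist-to-height ratio. The sketch below assumes that formulation; the function and variable names are invented for illustration and this is not a clinical calculator.

```python
def cardiometabolic_index(waist_cm, height_cm, triglycerides, hdl_cholesterol):
    """Cardiometabolic index, assumed here as (TG / HDL-C) * (waist / height).

    Triglycerides and HDL-C must share the same unit (e.g., both mmol/L or
    both mg/dL) so that their ratio is dimensionless.
    """
    waist_to_height = waist_cm / height_cm
    tg_to_hdl = triglycerides / hdl_cholesterol
    return tg_to_hdl * waist_to_height

# Illustrative values (mmol/L): TG 1.8, HDL-C 1.1, waist 95 cm, height 170 cm
print(round(cardiometabolic_index(95, 170, 1.8, 1.1), 2))  # about 0.91
```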
Other
High-sensitivity C-reactive protein testing has been developed and used as a marker to predict coronary vascular disease in metabolic syndrome, and it was recently used as a predictor for nonalcoholic fatty liver disease (steatohepatitis) in correlation with serum markers that indicated lipid and glucose metabolism. Fatty liver disease and steatohepatitis can be considered manifestations of metabolic syndrome, indicative of abnormal energy storage as fat in an ectopic distribution.
Reproductive disorders (such as polycystic ovary syndrome in women of reproductive age), and erectile dysfunction or decreased total testosterone (low testosterone-binding globulin) in men can be attributed to metabolic syndrome.
Prevention
Various strategies have been proposed to prevent the development of metabolic syndrome. These include increased physical activity (such as walking 30 minutes every day), and a healthy, reduced calorie diet. Many studies support the value of a healthy lifestyle as above. However, one study stated these potentially beneficial measures are effective in only a minority of people, primarily because of a lack of compliance with lifestyle and diet changes. The International Obesity Taskforce states that interventions on a sociopolitical level are required to reduce development of the metabolic syndrome in populations.
The Caerphilly Heart Disease Study followed 2,375 male subjects over 20 years and suggested the daily intake of an Imperial pint (~568 mL) of milk or equivalent dairy products more than halved the risk of metabolic syndrome. Some subsequent studies support the authors' findings, while others dispute them. A systematic review of four randomized controlled trials said that, in the short term, a paleolithic nutritional pattern improved three of five measurable components of the metabolic syndrome in participants with at least one of the components.
Management
Medications
Generally, the individual disorders that compose the metabolic syndrome are treated separately. Diuretics and ACE inhibitors may be used to treat hypertension. Various cholesterol medications may be useful if LDL cholesterol, triglycerides, and/or HDL cholesterol is abnormal.
Diet
Dietary carbohydrate restriction reduces blood glucose levels, contributes to weight loss, and reduces the use of several medications that may be prescribed for metabolic syndrome. Studies suggest that meal timing and frequency can significantly impact the risk of developing metabolic syndrome. Research indicates that individuals who maintain regular meal timings and avoid eating late at night have a reduced risk of developing this condition (Alkhulaifi & Darkoh, 2022).
Epidemiology
Approximately 20–25 percent of the world's adult population has the cluster of risk factors that is metabolic syndrome. In 2000, approximately 32% of U.S. adults had metabolic syndrome. In more recent years that figure has climbed to 34%.
In young children, there is no consensus on how to measure metabolic syndrome since age-specific cut points and reference values that would indicate "high risk" have not been well established. A continuous cardiometabolic risk summary score is often used for children instead of a dichotomous measure of metabolic syndrome.
Other conditions, as well as particular patterns of microbiome diversity, appear to be associated with metabolic syndrome, with a certain degree of gender-specificity.
History
In 1921, Joslin first reported the association of diabetes with hypertension and hyperuricemia.
In 1923, Kylin reported additional studies on the above triad.
In 1947, Vague observed that upper body obesity appeared to predispose to diabetes, atherosclerosis, gout and calculi.
In the late 1950s, the term metabolic syndrome was first used.
In 1967, Avogaro, Crepaldi and coworkers described six moderately obese people with diabetes, hypercholesterolemia, and marked hypertriglyceridemia, all of which improved when the affected people were put on a hypocaloric, low-carbohydrate diet.
In 1977, Haller used the term metabolic syndrome for associations of obesity, diabetes mellitus, hyperlipoproteinemia, hyperuricemia, and hepatic steatosis when describing the additive effects of risk factors on atherosclerosis.
The same year, Singer used the term for associations of obesity, gout, diabetes mellitus, and hypertension with hyperlipoproteinemia.
In 1977 and 1978, Gerald B. Phillips developed the concept that risk factors for myocardial infarction concur to form a "constellation of abnormalities" (i.e., glucose intolerance, hyperinsulinemia, hypercholesterolemia, hypertriglyceridemia, and hypertension) associated not only with heart disease, but also with aging, obesity and other clinical states. He suggested there must be an underlying linking factor, the identification of which could lead to the prevention of cardiovascular disease; he hypothesized that this factor was sex hormones.
In 1988, in his Banting lecture, Gerald M. Reaven proposed insulin resistance as the underlying factor and named the constellation of abnormalities syndrome X. Reaven did not include abdominal obesity, which has also been hypothesized as the underlying factor, as part of the condition.
See also
Metabolic disorder
Portal-visceral hypothesis
References
Metabolic disorders
Endocrine diseases
Medical conditions related to obesity
Syndromes affecting the endocrine system
Syndromes with obesity
Exudate
An exudate is a fluid released by an organism through pores or a wound, a process known as exuding or exudation.
Exudate is derived from exude, 'to ooze', from Latin exsudare, 'to (ooze out) sweat' (ex- 'out' and sudare 'to sweat').
Medicine
An exudate is any fluid that filters from the circulatory system into lesions or areas of inflammation. It can be a pus-like or clear fluid. When an injury occurs, leaving skin exposed, fluid leaks out of the blood vessels and into nearby tissues. The fluid is composed of serum, fibrin, and leukocytes. Exudate may ooze from cuts or from areas of infection or inflammation.
Types
Purulent or suppurative exudate consists of plasma with both active and dead neutrophils, fibrinogen, and necrotic parenchymal cells. This kind of exudate is consistent with more severe infections, and is commonly referred to as pus.
Fibrinous exudate is composed mainly of fibrinogen and fibrin. It is characteristic of rheumatic carditis, but is also seen in severe infections such as strep throat and bacterial pneumonia. Fibrinous inflammation is often difficult to resolve due to blood vessels growing into the exudate and filling space that was occupied by fibrin. Often, large amounts of antibiotics are necessary for resolution.
Catarrhal exudate is seen in the nose and throat and is characterized by a high content of mucus.
Serous exudate (sometimes classified as serous transudate) is usually seen in mild inflammation, with relatively low protein. Its consistency resembles that of serum, and can usually be seen in certain disease states like tuberculosis. (See below for difference between transudate and exudate)
Malignant (or cancerous) pleural effusion is effusion where cancer cells are present. It is usually classified as exudate.
Types of exudates: serous, serosanguineous, sanguineous, hemorrhaging and purulent drainage.
Serous: Clear straw colored liquid that drains from the wound. This is a normal part of the healing process.
Serosanguineous: Small amount of blood is present in the drainage; it is pink in color due to the presence of red blood cells mixed with serous drainage. This is a normal part of the healing process.
Sanguineous: This type of drainage contains red blood due to trauma to blood vessels; this may occur while cleaning the wound. Sanguineous drainage is abnormal.
Hemorrhaging: This type of drainage contains frank blood from a leaking blood vessel. This will require emergency treatment to control the bleed. This type of drainage is abnormal.
Purulent drainage: This type of drainage is malodorous and can be yellow, gray, or greenish in color. This is an indication of an infection.
Exudates vs. transudates
There is an important distinction between transudates and exudates. Transudates are caused by disturbances of hydrostatic or colloid osmotic pressure, not by inflammation. They have a low protein content in comparison to exudates. The medical distinction between transudates and exudates is made by measuring the specific gravity of the extracted fluid. Specific gravity is used to measure the protein content of the fluid: the higher the specific gravity, the greater the likelihood of capillary permeability changes in relation to body cavities. For example, a transudate usually has a specific gravity of less than 1.012 and a protein content of less than 2 g/100 mL (2 g%). The Rivalta test may be used to differentiate an exudate from a transudate.
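The cut-offs quoted above lend themselves to a simple rule of thumb, sketched below in Python. The function and its thresholds come straight from the paragraph above, but real work-ups rely on additional criteria (for pleural fluid, typically Light's criteria), so treat this purely as an illustration.

```python
def classify_effusion(specific_gravity, protein_g_per_dl):
    """Rough transudate/exudate split using the cut-offs quoted above.

    Illustrative only; clinical practice uses further criteria such as
    fluid-to-serum protein and LDH ratios.
    """
    if specific_gravity < 1.012 and protein_g_per_dl < 2.0:
        return "transudate"
    return "exudate"

print(classify_effusion(1.010, 1.5))  # transudate
print(classify_effusion(1.020, 3.5))  # exudate
```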
It is not clear whether a comparable distinction between transudates and exudates exists in plants.
Plant exudates
Plant exudates include saps, gums, latex, and resin. Sometimes nectar is considered an exudate. Plant seeds exude a variety of molecules into the spermosphere, and roots exude into the rhizosphere; these exudates include acids, sugars, polysaccharides and ectoenzymes, and collectively account for 40% of root carbon. Exudation of these compounds has various benefits to the plant and to the microorganisms of the rhizosphere.
See also
Honeydew (secretion)
Guttation
Pleural effusion
Scarless wound healing
Surfactant leaching
References
External links
Cardiovascular physiology
Body fluids
CREST syndrome
CREST syndrome, also known as the limited cutaneous form of systemic sclerosis (lcSSc), is a multisystem connective tissue disorder. The acronym "CREST" refers to the five main features: calcinosis, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly, and telangiectasia.
CREST syndrome is associated with detectable antibodies against centromeres (a component of the cell nucleus), and usually spares the kidneys (a feature more common in the related condition systemic scleroderma). If the lungs are involved, it is usually in the form of pulmonary arterial hypertension.
Signs and symptoms
Calcinosis
CREST causes thickening and tightening of the skin with deposition of calcific nodules ("calcinosis").
Raynaud's phenomenon
Raynaud's phenomenon is frequently the first manifestation of CREST/lcSSc, preceding other symptoms by years. Stress and cold temperature induce an exaggerated vasoconstriction of the small arteries, arterioles, and thermoregulatory vessels of the skin of the digits. Clinically this manifests as a white-blue-red transition in skin color. Underlying this transition is pallor and cyanosis of the digits, followed by a reactive hyperemia as they rewarm. When extreme and frequent, this phenomenon can lead to digital ulcerations, gangrene, or amputation.
Ulceration can predispose to chronic infections of the involved site.
Esophageal dysmotility
Esophageal dysmotility presents as a sensation of food getting stuck (dysphagia) in the mid- or lower esophagus, atypical chest pain, or cough. People often state they must drink liquids to swallow solid food. This motility problem results from atrophy of the smooth muscle of the gastrointestinal tract wall. This change may occur with or without pathologic evidence of significant tissue fibrosis.
Sclerodactyly
Though it is the most easily recognizable manifestation, it is not prominent in all patients. Thickening generally only involves the skin of the fingers distal to the metacarpophalangeal joints in CREST. Early in the course of the disease, the skin may appear edematous and inflamed. Eventually, dermal fibroblasts overproduce extracellular matrix leading to increased tissue collagen deposition in the skin. Collagen cross-linking then causes a progressive skin tightening. Digital ischemic ulcers commonly form on the distal fingers in 30–50% of patients.
Telangiectasias
Marked telangiectasias (dilated capillaries) occur on the skin of the face, the palmar surface of the hands, and the mucous membranes. Telangiectasias tend to be more numerous in people with other scleroderma related vascular disease (i.e., pulmonary arterial hypertension). The number of telangiectasias and the sites involved tend to increase over time.
Other
Other symptoms of CREST syndrome can be exhaustion, weakness, difficulties with breathing, pain in hands and feet, dizziness and badly healing wounds.
Patients with lcSSc commonly develop pulmonary artery hypertension which may result in cor pulmonale (heart failure due to increased pulmonary artery pressure).
Cause
CREST syndrome involves the production of autoimmune anti-nuclear and anti-centromere antibodies, though their cause is not currently understood. There is no known infectious cause.
Diagnosis
CREST is not easily diagnosed, as it closely mimics symptoms of other connective tissue and autoimmune diseases. A diagnosis is usually given when a patient presents with two or more of the five main clinical features. Additionally, blood tests for antinuclear antibodies (ANAs) and anticentromere antibodies (ACAs), or skin biopsies, can help confirm the diagnosis.
Treatment
Disease progression may be slowed with immunosuppressives and other medications, and esophageal reflux, pulmonary hypertension and Raynaud phenomenon may benefit from symptomatic treatment. However, there is no cure for this disease as there is no cure for scleroderma in general.
Epidemiology
CREST syndrome can be noted in up to 10% of patients with primary biliary cholangitis.
History
The combination of symptoms was first reported in 1964 by R.H. Winterbauer, at that point a medical student at Johns Hopkins School of Medicine.
See also
Scleroderma
References
External links
Connective tissue diseases
Systemic connective tissue disorders
Autoimmune diseases
Syndromes
Immune disorder
An immune disorder is a dysfunction of the immune system. These disorders can be characterized in several different ways:
By the component(s) of the immune system affected
By whether the immune system is overactive or underactive
By whether the condition is congenital or acquired
According to the International Union of Immunological Societies, more than 150 primary immunodeficiency diseases (PIDs) have been characterized. However, the number of acquired immunodeficiencies exceeds the number of PIDs.
It has been suggested that most people have at least one primary immunodeficiency. Due to redundancies in the immune system, though, many of these are never detected.
Autoimmune diseases
An autoimmune disease is a condition arising from an abnormal immune response to a normal body part. There are at least 80 types of autoimmune diseases. Nearly any body part can be involved. Common symptoms include low-grade fever and feeling tired. Often symptoms come and go.
List of some autoimmune disorders
Lupus
Scleroderma
Certain types of hemolytic anemia
Vasculitis
Type 1 diabetes
Graves' disease
Rheumatoid arthritis
Multiple sclerosis (although it is thought to be an immune-mediated process)
Goodpasture syndrome
Pernicious anemia
Some types of myopathy
Lyme disease (Late)
Celiac disease
Alopecia Areata
Immunodeficiencies
Primary immune deficiency diseases are those caused by inherited genetic mutations. Secondary or acquired immune deficiencies are caused by something outside the body such as a virus or immune suppressing drugs.
People with primary immune diseases are at increased risk of, and often have recurrent, ear infections, pneumonia, bronchitis, sinusitis or skin infections. Less frequently, immunodeficient patients may develop abscesses of their internal organs, or autoimmune, rheumatologic and gastrointestinal problems.
Primary immune deficiencies
Severe combined immunodeficiency (SCID)
DiGeorge syndrome
Hyperimmunoglobulin E syndrome (also known as Job's Syndrome)
Common variable immunodeficiency (CVID): B cell levels are normal in circulation but IgG production declines over the years, so it is the only primary immune disorder that typically presents in the late teenage years.
Chronic granulomatous disease (CGD): a deficiency in the NADPH oxidase enzyme, which causes a failure to generate oxygen radicals. It classically causes recurrent infections with catalase-positive bacteria and fungi.
Wiskott–Aldrich syndrome (WAS)
Autoimmune lymphoproliferative syndrome (ALPS)
Hyper IgM syndrome: X-linked disorder that causes a deficiency in the production of CD40 ligand on activated T cells. This increases the production and release of IgM into circulation. The B cell and T cell numbers are within normal limits. Increased susceptibility to extracellular bacteria and opportunistic infections.
Leukocyte adhesion deficiency (LAD)
NF-κB Essential Modifier (NEMO) Mutations
Selective immunoglobulin A deficiency: the most common defect of humoral immunity, characterized by a deficiency of IgA. It produces recurrent sinopulmonary and gastrointestinal infections.
X-linked agammaglobulinemia (XLA; also known as Bruton-type agammaglobulinemia): characterized by a deficiency of the Bruton tyrosine kinase enzyme that blocks B cell maturation in the bone marrow. No B cells are released into the circulation and thus there are no immunoglobulins of any class, although cell-mediated immunity tends to be normal.
X-linked lymphoproliferative disease (XLP)
Ataxia–telangiectasia
Secondary immune deficiencies
AIDS
Allergies
An allergy is an abnormal immune reaction to a harmless antigen.
Seasonal allergy
Mastocytosis
Perennial allergy
Anaphylaxis
Food allergy
Allergic rhinitis
Atopic dermatitis
See also
Disorders of human immunity
Hypersplenism
References
External links
Abscess
An abscess is a collection of pus that has built up within the tissue of the body. Signs and symptoms of abscesses include redness, pain, warmth, and swelling. The swelling may feel fluid-filled when pressed. The area of redness often extends beyond the swelling. Carbuncles and boils are types of abscess that often involve hair follicles, with carbuncles being larger. A cyst is related to an abscess, but it contains a material other than pus, and a cyst has a clearly defined wall.
They are usually caused by a bacterial infection. Often many different types of bacteria are involved in a single infection. In many areas of the world, the most common bacterium present is methicillin-resistant Staphylococcus aureus. Rarely, parasites can cause abscesses; this is more common in the developing world. Diagnosis of a skin abscess is usually made based on what it looks like and is confirmed by cutting it open. Ultrasound imaging may be useful in cases in which the diagnosis is not clear. In abscesses around the anus, computed tomography (CT) may be important to look for deeper infection.
Standard treatment for most skin or soft tissue abscesses is incision and drainage. There appears to be some benefit from also using antibiotics. A small amount of evidence supports not packing the cavity that remains with gauze after drainage. Closing this cavity right after draining it, rather than leaving it open, may speed healing without increasing the risk of the abscess returning. Sucking out the pus with a needle is often not sufficient.
Skin abscesses are common and have become more common in recent years. Risk factors include intravenous drug use, with rates reported as high as 65% among users. In 2005, 3.2 million people went to American emergency departments for abscesses. In Australia, around 13,000 people were hospitalized in 2008 with the condition.
Signs and symptoms
Abscesses may occur in any kind of tissue but most frequently within the skin surface (where they may be superficial pustules known as boils or deep skin abscesses), in the lungs, brain, teeth, kidneys, and tonsils. Major complications may include spreading of the abscess material to adjacent or remote tissues, and extensive regional tissue death (gangrene).
The main symptoms and signs of a skin abscess are redness, heat, swelling, pain, and loss of function. There may also be high temperature (fever) and chills. If superficial, abscesses may be fluctuant when palpated; this wave-like motion is caused by movement of the pus inside the abscess.
An internal abscess is more difficult to identify, but signs include pain in the affected area, a high temperature, and generally feeling unwell.
Internal abscesses rarely heal themselves, so prompt medical attention is indicated if such an abscess is suspected. An abscess can potentially be fatal depending on where it is located.
Causes
Risk factors for abscess formation include intravenous drug use. Another possible risk factor is a prior history of disc herniation or other spinal abnormality, though this has not been proven.
Abscesses are caused by bacterial infection, parasites, or foreign substances.
Bacterial infection is the most common cause, particularly Staphylococcus aureus. The more invasive methicillin-resistant Staphylococcus aureus (MRSA) may also be a source of infection, though is much rarer. Among spinal subdural abscesses, methicillin-sensitive Staphylococcus aureus is the most common organism involved.
Rarely parasites can cause abscesses and this is more common in the developing world. Specific parasites known to do this include dracunculiasis and myiasis.
Anorectal abscess
Anorectal abscesses can be caused by non-specific obstruction and ensuing infection of the glandular crypts inside of the anus or rectum. Other causes include cancer, trauma, or inflammatory bowel diseases.
Incisional abscess
An incisional abscess is one that develops as a complication secondary to a surgical incision. It presents as redness and warmth at the margins of the incision with purulent drainage from it. If the diagnosis is uncertain, the wound should be aspirated with a needle, with aspiration of pus confirming the diagnosis and availing for Gram stain and bacterial culture.
Pathophysiology
An abscess is a defensive reaction of the tissue to prevent the spread of infectious materials to other parts of the body.
Organisms or foreign materials destroy the local cells, which results in the release of cytokines. The cytokines trigger an inflammatory response, which draws large numbers of white blood cells to the area and increases the regional blood flow.
The final structure of the abscess is an abscess wall, or capsule, that is formed by the adjacent healthy cells in an attempt to keep the pus from infecting neighboring structures. However, such encapsulation tends to prevent immune cells from attacking bacteria in the pus, or from reaching the causative organism or foreign object.
Diagnosis
An abscess is a localized collection of pus (purulent inflammatory tissue) caused by suppuration buried in a tissue, an organ, or a confined space, lined by the pyogenic membrane. Ultrasound imaging can help in a diagnosis.
Classification
Abscesses may be classified as either skin abscesses or internal abscesses. Skin abscesses are common; internal abscesses tend to be harder to diagnose, and more serious. Skin abscesses are also called cutaneous or subcutaneous abscesses.
IV drug use
For those with a history of intravenous drug use, an X-ray is recommended before treatment to verify that no needle fragments are present. If there is also a fever present in this population, infectious endocarditis should be considered.
Differential
Abscesses should be differentiated from empyemas, which are accumulations of pus in a preexisting, rather than a newly formed, anatomical cavity.
Other conditions that can cause similar symptoms include cellulitis, a sebaceous cyst, and necrotising fasciitis. Cellulitis typically also features an erythematous reaction, but does not produce any purulent drainage.
Treatment
The standard treatment for an uncomplicated skin or soft tissue abscess is the act of opening and draining. There does not appear to be any benefit from also using antibiotics in most cases. A small amount of evidence did not find a benefit from packing the abscess with gauze.
Incision and drainage
The abscess should be inspected to identify if foreign objects are a cause, which may require their removal. If foreign objects are not the cause, incising and draining the abscess is standard treatment.
Antibiotics
Most people who have an uncomplicated skin abscess should not use antibiotics. Antibiotics in addition to standard incision and drainage is recommended in persons with severe abscesses, many sites of infection, rapid disease progression, the presence of cellulitis, symptoms indicating bacterial illness throughout the body, or a health condition causing immunosuppression. People who are very young or very old may also need antibiotics. If the abscess does not heal only with incision and drainage, or if the abscess is in a place that is difficult to drain such as the face, hands, or genitals, then antibiotics may be indicated.
In those cases of abscess which do require antibiotic treatment, Staphylococcus aureus bacteria is a common cause and an anti-staphylococcus antibiotic such as flucloxacillin or dicloxacillin is used. The Infectious Diseases Society of America advises that the draining of an abscess is not enough to address community-acquired methicillin-resistant Staphylococcus aureus (MRSA), and in those cases, traditional antibiotics may be ineffective. Alternative antibiotics effective against community-acquired MRSA often include clindamycin, doxycycline, minocycline, and trimethoprim-sulfamethoxazole. The American College of Emergency Physicians advises that typical cases of abscess from MRSA get no benefit from having antibiotic treatment in addition to the standard treatment.
Culturing the wound is not needed if standard follow-up care can be provided after the incision and drainage. Performing a wound culture is unnecessary because it rarely gives information which can be used to guide treatment.
Packing
In North America, after drainage, an abscess cavity is usually packed, often with special iodoform-treated cloth. This is done to absorb and neutralize any remaining exudate as well as to promote draining and prevent premature closure. Prolonged draining is thought to promote healing. The hypothesis is that though the heart's pumping action can deliver immune and regenerative cells to the edge of an injury, an abscess is by definition a void in which no blood vessels are present. Packing is thought to provide a wicking action that continuously draws beneficial factors and cells from the body into the void that must be healed. Discharge is then absorbed by cutaneous bandages and further wicking promoted by changing these bandages regularly. However, evidence from emergency medicine literature reports that packing wounds after draining, especially smaller wounds, causes pain to the person and does not decrease the rate of recurrence, nor bring faster healing, or fewer physician visits.
Loop drainage
More recently, several North American hospitals have opted for less-invasive loop drainage over standard drainage and wound packing. In one study of 143 pediatric outcomes, a failure rate of 1.4% was reported in the loop group versus 10.5% in the packing group (P<.030), while a separate study reported a 5.5% failure rate among the loop group.
Primary closure
Closing an abscess immediately after draining it appears to speed healing without increasing the risk of recurrence. This may not apply to anorectal abscesses as while they may heal faster, there may be a higher rate of recurrence than those left open.
Prognosis
Even without treatment, skin abscesses rarely result in death, as they will naturally break through the skin. Other types of abscess are more dangerous. Brain abscesses may be fatal if untreated. When treated, the mortality rate reduces to 5–10%, but is higher if the abscess ruptures.
Epidemiology
Skin abscesses are common and have become more common in recent years. Risk factors include intravenous drug use, with rates reported as high as 65% among users. In 2005, in the United States 3.2 million people went to the emergency department for an abscess. In Australia around 13,000 people were hospitalized in 2008 for the disease.
Society and culture
The Latin medical aphorism "ubi pus, ibi evacua" expresses "where there is pus, there evacuate it" and is classical advice in the culture of Western medicine.
Needle exchange programmes often administer or provide referrals for abscess treatment to injection drug users as part of a harm reduction public health strategy.
Etymology
An abscess is so called because there is an abscessus (a going away or departure) of portions of the animal tissue from each other to make room for the suppurated matter lodged between them.
The word carbuncle is believed to have originated from the Latin: carbunculus, originally a small coal; diminutive of carbon-, carbo: charcoal or ember, but also a carbuncle stone, "precious stones of a red or fiery colour", usually garnets.
Other types
The following types of abscess are listed in the medical dictionary:
References
External links
General surgery
Cutaneous lesion
Influenza-like illness
Influenza-like illness (ILI), also known as flu-like syndrome or flu-like symptoms, is a medical diagnosis of possible influenza or other illness causing a set of common symptoms. These include fever, shivering, chills, malaise, dry cough, loss of appetite, body aches, nausea, and sneezing, typically in connection with a sudden onset of illness. In most cases, the symptoms are caused by cytokines released by immune system activation, and are thus relatively non-specific.
Common causes of ILI include the common cold and influenza, which tends to be less common but more severe than the common cold. Less common causes include side effects of many drugs and manifestations of many other diseases.
Definition
The term ILI can be used casually, but when used in the surveillance of influenza cases it has a strict definition. The World Health Organization classifies an illness as an ILI if the patient has a fever of 38 °C or greater and a cough, with onset within the last 10 days. If the patient requires hospitalisation, the illness is classified instead as a severe acute respiratory infection (SARI). Other organisations may have different definitions; for instance, the CDC defines it as a fever of 100 °F (37.8 °C) or greater and a cough or sore throat.
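The surveillance definitions above reduce to a small decision rule, sketched here in Python. The function and parameter names are invented for illustration, and the sketch ignores the practical caveats of real case definitions (measured versus reported fever, local reporting rules, and so on).

```python
def classify_respiratory_case(temp_c, has_cough, days_since_onset,
                              hospitalised=False):
    """Toy classifier for the WHO ILI/SARI surveillance definitions above."""
    meets_ili = temp_c >= 38.0 and has_cough and days_since_onset <= 10
    if not meets_ili:
        return "does not meet ILI definition"
    return "SARI" if hospitalised else "ILI"

print(classify_respiratory_case(38.4, True, 3))                     # ILI
print(classify_respiratory_case(38.4, True, 3, hospitalised=True))  # SARI
print(classify_respiratory_case(37.2, True, 3))                     # does not meet ILI definition
```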
Causes
The causes of influenza-like illness range from benign self-limited illnesses such as gastroenteritis, rhinoviral disease, and influenza, to severe, sometimes life-threatening, diseases such as meningitis, sepsis, and leukemia.
Influenza
Technically, any clinical diagnosis of influenza is a diagnosis of ILI, not of influenza. This distinction usually is of no great concern because, regardless of cause, most cases of ILI are mild and self-limiting. Furthermore, except perhaps during the peak of a major outbreak of influenza, most cases of ILI are not due to influenza. ILI is very common: in the United States each adult can average 1–3 episodes per year and each child can average 3–6 episodes per year.
Influenza in humans is subject to clinical surveillance by a global network of more than 110 National Influenza Centers. These centers receive samples obtained from patients diagnosed with ILI, and test the samples for the presence of an influenza virus. Not all patients diagnosed with ILI are tested, and not all test results are reported. Samples are selected for testing based on severity of ILI, and as part of routine sampling, and at participating surveillance clinics and laboratories. The United States has a general surveillance program, a border surveillance program, and a hospital surveillance program, all devoted to finding new outbreaks of influenza.
In most years, in the majority of samples tested, the influenza virus is not present (see figure above). In the United States during the 2008–9 influenza season through 18 April, out of 183,839 samples tested and reported to the CDC, only 25,925 (14.1%) were positive for influenza. The percent positive reached a maximum of about 25%. The percent positive increases with the incidence of infection, peaking with the peak incidence of influenza (see figure). During an epidemic, 60–70% of patients with a clear influenza-like illness actually have influenza.
Samples are respiratory samples, usually collected by a physician, nurse, or assistant, and sent to a hospital laboratory for preliminary testing. There are several methods of collecting a respiratory sample, depending on requirements of the laboratory that will test the sample. A sample may be obtained from around the nose simply by wiping with a dry cotton swab.
Other causes
Infectious diseases causing ILI include respiratory syncytial virus, malaria, acute HIV/AIDS infection, herpes, hepatitis C, Lyme disease, rabies, myocarditis, Q fever, dengue fever, poliomyelitis, pneumonia, measles, SARS, COVID-19, and many others.
Pharmaceutical drugs that may cause ILI include many biologics such as interferons and monoclonal antibodies. Chemotherapeutic agents also commonly cause flu-like symptoms. Other drugs associated with a flu-like syndrome include bisphosphonates, caspofungin, and levamisole. A flu-like syndrome can also be caused by an influenza vaccine or other vaccines, and by opioid withdrawal in physically dependent individuals.
Diagnosis
Influenza-like illness is a nonspecific respiratory illness characterized by fever, fatigue, cough, and other symptoms that stop within a few days. Most cases of ILI are caused not by influenza but by other viruses (e.g., rhinoviruses, coronaviruses, human respiratory syncytial virus, adenoviruses, and human parainfluenza viruses). Less common causes of ILI include bacteria such as Legionella, Chlamydia pneumoniae, Mycoplasma pneumoniae, and Streptococcus pneumoniae. Influenza, RSV, and certain bacterial infections are particularly important causes of ILI because these infections can lead to serious complications requiring hospitalization. Physicians who examine persons with ILI can use a combination of epidemiologic and clinical data (information about recent other patients and the individual patient) and, if necessary, laboratory and radiographic tests to determine the cause of the ILI. The use of multiplexed point-of-care testing such as CRP (C-reactive protein) along with an examination by a doctor may help to identify a bacterial infection and avoid an unnecessary antibiotic prescription.
During the 2009 flu pandemic, many thousands of cases of ILI were reported in the media as suspected swine flu. Most were false alarms. A differential diagnosis of probable swine flu requires not only symptoms but also a high likelihood of swine flu due to the person's recent history. During the 2009 flu pandemic in the United States, the CDC advised physicians to "consider swine influenza infection in the differential diagnosis of patients with acute febrile respiratory illness who have either been in contact with persons with confirmed swine flu, or who were in one of the five U.S. states that have reported swine flu cases or in Mexico during the 7 days preceding their illness onset." A diagnosis of confirmed swine flu required laboratory testing of a respiratory sample (a simple nose and throat swab).
In rare cases
If a person with ILI also has either a history of exposure or an occupational or environmental risk of exposure to Bacillus anthracis (anthrax), then a differential diagnosis requires distinguishing between ILI and anthrax. Other rare causes of ILI include leukemia and metal fume fever.
In horses
ILI occurs in some horses after intramuscular injection of vaccines. For these horses, light exercise speeds resolution of the ILI. Non-steroidal anti-inflammatory drugs (NSAIDs) may be given with the vaccine.
See also
Attack rate
Disease surveillance
Histamine glutarimide
References
Influenza
Symptoms
Medical signs
Syndromes
Hypothermia
Hypothermia is defined as a body core temperature below 35.0 °C (95.0 °F) in humans. Symptoms depend on the temperature. In mild hypothermia, there is shivering and mental confusion. In moderate hypothermia, shivering stops and confusion increases. In severe hypothermia, there may be hallucinations and paradoxical undressing, in which a person removes their clothing, as well as an increased risk of the heart stopping.
Hypothermia has two main types of causes. It classically occurs from exposure to cold weather and cold water immersion. It may also occur from any condition that decreases heat production or increases heat loss. Commonly, this includes alcohol intoxication but may also include low blood sugar, anorexia and advanced age. Body temperature is usually maintained near a constant level of 36.5–37.5 °C (97.7–99.5 °F) through thermoregulation. Efforts to increase body temperature involve shivering, increased voluntary activity, and putting on warmer clothing. Hypothermia may be diagnosed based on either a person's symptoms in the presence of risk factors or by measuring a person's core temperature.
The treatment of mild hypothermia involves warm drinks, warm clothing, and voluntary physical activity. In those with moderate hypothermia, heating blankets and warmed intravenous fluids are recommended. People with moderate or severe hypothermia should be moved gently. In severe hypothermia, extracorporeal membrane oxygenation (ECMO) or cardiopulmonary bypass may be useful. In those without a pulse, cardiopulmonary resuscitation (CPR) is indicated along with the above measures. Rewarming is typically continued until a person's temperature is greater than 32 °C (90 °F). If there is no improvement at this point or the blood potassium level is greater than 12 millimoles per litre at any time, resuscitation may be discontinued.
Hypothermia is the cause of at least 1,500 deaths a year in the United States. It is more common in older people and males. One of the lowest documented body temperatures from which someone with accidental hypothermia has survived was recorded in a 2-year-old boy from Poland named Adam. Survival after more than six hours of CPR has been described. In individuals for whom ECMO or bypass is used, survival is around 50%. Deaths due to hypothermia have played an important role in many wars.
The term is from Greek ῠ̔πο (ypo), meaning "under", and θέρμη (thérmē), meaning "heat". The opposite of hypothermia is hyperthermia, an increased body temperature due to failed thermoregulation.
Classification
Hypothermia is often defined as any body temperature below 35.0 °C (95.0 °F). With this method it is divided into degrees of severity based on the core temperature.
Another classification system, the Swiss staging system, divides hypothermia based on the presenting symptoms which is preferred when it is not possible to determine an accurate core temperature.
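The temperature cut-offs for the severity grades are not reproduced in the text above, so the sketch below supplies commonly cited values — 32–35 °C for mild, 28–32 °C for moderate, and below 28 °C for severe hypothermia — as an assumption; the function and its thresholds are illustrative rather than a quotation of this article's definition.

```python
def hypothermia_severity(core_temp_c):
    """Grade hypothermia by core temperature.

    The cut-offs (35, 32 and 28 degrees C) are commonly cited values supplied
    here as assumptions; they are not quoted from the surrounding text.
    """
    if core_temp_c >= 35.0:
        return "not hypothermic"
    if core_temp_c >= 32.0:
        return "mild"
    if core_temp_c >= 28.0:
        return "moderate"
    return "severe"

print(hypothermia_severity(33.5))  # mild
print(hypothermia_severity(26.0))  # severe
```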
Other cold-related injuries that can be present either alone or in combination with hypothermia include:
Chilblains: condition caused by repeated exposure of skin to temperatures just above freezing. The cold causes damage to small blood vessels in the skin. This damage is permanent and the redness and itching will return with additional exposure. The redness and itching typically occurs on cheeks, ears, fingers, and toes.
Frostbite: the freezing and destruction of tissue, which happens below the freezing point of water
Frostnip: a superficial cooling of tissues without cellular destruction
Trench foot or immersion foot: a condition caused by repetitive exposure to water at non-freezing temperatures
The normal human body temperature is often stated as 36.5–37.5 °C (97.7–99.5 °F). Hyperthermia and fevers are defined as a temperature of greater than 37.5–38.3 °C (99.5–100.9 °F).
Signs and symptoms
Signs and symptoms vary depending on the degree of hypothermia, and may be divided by the three stages of severity. People with hypothermia may appear pale and feel cold to touch.
Infants with hypothermia may feel cold when touched, with bright red skin and an unusual lack of energy.
Behavioural changes such as impaired judgement, an impaired sense of time and place, unusual aggression and numbness can be observed in individuals with hypothermia; they may also deny their condition and refuse help. A hypothermic person can be euphoric and hallucinating.
Cold stress refers to a near-normal body temperature with low skin temperature; signs include shivering. Cold stress is caused by cold exposure and it can lead to hypothermia and frostbite if not treated.
Mild
Symptoms of mild hypothermia may be vague, with sympathetic nervous system excitation (shivering, high blood pressure, fast heart rate, fast respiratory rate, and contraction of blood vessels). These are all physiological responses to preserve heat. Increased urine production due to cold, mental confusion, and liver dysfunction may also be present. Hyperglycemia may be present, as glucose consumption by cells and insulin secretion both decrease, and tissue sensitivity to insulin may be blunted. Sympathetic activation also releases glucose from the liver. In many cases, however, especially in people with alcoholic intoxication, hypoglycemia appears to be a more common cause. Hypoglycemia is also found in many people with hypothermia, as hypothermia may be a result of hypoglycemia.
Moderate
As hypothermia progresses, symptoms include: mental status changes such as amnesia, confusion, slurred speech, decreased reflexes, and loss of fine motor skills.
Severe
As the temperature decreases, further physiological systems falter and heart rate, respiratory rate, and blood pressure all decrease. At very low core temperatures this can result in an expected heart rate in the 30s.
There is often cold, inflamed skin, hallucinations, lack of reflexes, fixed dilated pupils, low blood pressure, pulmonary edema, and shivering is often absent. Pulse and respiration rates decrease significantly, but fast heart rates (ventricular tachycardia, atrial fibrillation) can also occur. Atrial fibrillation is not typically a concern in and of itself.
Paradoxical undressing
Twenty to fifty percent of hypothermia deaths are associated with paradoxical undressing. This typically occurs during moderate and severe hypothermia, as the person becomes disoriented, confused, and combative. They may begin discarding their clothing, which, in turn, increases the rate of heat loss.
Rescuers who are trained in mountain survival techniques are taught to expect this; however, people who die from hypothermia in urban environments who are found in an undressed state are sometimes incorrectly assumed to have been subjected to sexual assault.
One explanation for the effect is a cold-induced malfunction of the hypothalamus, the part of the brain that regulates body temperature. Another explanation is that the muscles contracting peripheral blood vessels become exhausted (known as a loss of vasomotor tone) and relax, leading to a sudden surge of blood (and heat) to the extremities, causing the person to feel overheated.
Terminal burrowing
An apparent self-protective behaviour, known as "terminal burrowing", or "hide-and-die syndrome", occurs in the final stages of hypothermia. Those affected will enter small, enclosed spaces, such as underneath beds or behind wardrobes. It is often associated with paradoxical undressing. Researchers in Germany claim this is "obviously an autonomous process of the brain stem, which is triggered in the final state of hypothermia and produces a primitive and burrowing-like behavior of protection, as seen in hibernating mammals". This happens mostly in cases where temperature drops slowly.
Causes
Hypothermia usually occurs from exposure to low temperatures, and is frequently complicated by alcohol consumption. Any condition that decreases heat production, increases heat loss, or impairs thermoregulation, however, may contribute. Thus, hypothermia risk factors include: substance use disorders (including alcohol use disorder), homelessness, any condition that affects judgment (such as hypoglycemia), the extremes of age, poor clothing, chronic medical conditions (such as hypothyroidism and sepsis), and living in a cold environment. Hypothermia occurs frequently in major trauma, and is also observed in severe cases of anorexia nervosa. Hypothermia is also associated with worse outcomes in people with sepsis: while most people with sepsis develop fevers (elevated body temperature), some develop hypothermia.
In urban areas, hypothermia frequently occurs with chronic cold exposure, such as in cases of homelessness, as well as with immersion accidents involving drugs, alcohol or mental illness. While studies have shown that people experiencing homelessness are at risk of premature death from hypothermia, the true incidence of hypothermia-related deaths in this population is difficult to determine. In more rural environments, the incidence of hypothermia is higher among people with significant comorbidities and less ability to move independently. With rising interest in wilderness exploration, and outdoor and water sports, the incidence of hypothermia secondary to accidental exposure may become more frequent in the general population.
Alcohol
Alcohol consumption increases the risk of hypothermia in two ways: vasodilation and temperature controlling systems in the brain. Vasodilation increases blood flow to the skin, resulting in heat being lost to the environment. This produces the effect of feeling warm, when one is actually losing heat. Alcohol also affects the temperature-regulating system in the brain, decreasing the body's ability to shiver and use energy that would normally aid the body in generating heat. The overall effects of alcohol lead to a decrease in body temperature and a decreased ability to generate body heat in response to cold environments. Alcohol is a common risk factor for death due to hypothermia. Between 33% and 73% of hypothermia cases are complicated by alcohol.
Water immersion
Hypothermia continues to be a major limitation to swimming or diving in cold water. The reduction in finger dexterity due to pain or numbness decreases general safety and work capacity, which consequently increases the risk of other injuries.
Other factors predisposing to immersion hypothermia include dehydration, inadequate rewarming between repetitive dives, starting a dive while wearing cold, wet dry suit undergarments, sweating with work, inadequate thermal insulation, and poor physical conditioning.
Heat is lost much more quickly in water than in air. Thus, water temperatures that would be quite reasonable as outdoor air temperatures can lead to hypothermia in survivors, although this is not usually the direct clinical cause of death for those who are not rescued. Water at around 10 °C (50 °F) can lead to death in as little as one hour, and water temperatures near freezing can cause death in as little as 15 minutes. During the sinking of the Titanic, most people who entered the water died in 15–30 minutes.
The actual cause of death in cold water is usually the bodily reactions to heat loss and to freezing water, rather than hypothermia (loss of core temperature) itself. For example, plunged into freezing seas, around 20% of victims die within two minutes from cold shock (uncontrolled rapid breathing, and gasping, causing water inhalation, massive increase in blood pressure and cardiac strain leading to cardiac arrest, and panic); another 50% die within 15–30 minutes from cold incapacitation: inability to use or control limbs and hands for swimming or gripping, as the body "protectively" shuts down the peripheral muscles of the limbs to protect its core. Exhaustion and unconsciousness cause drowning, claiming the rest within a similar time.
Pathophysiology
Heat is primarily generated in muscle tissue, including the heart, and in the liver, while it is lost through the skin (90%) and lungs (10%). Heat production may be increased two- to four-fold through muscle contractions (i.e. exercise and shivering). The rate of heat loss is determined, as with any object, by convection, conduction, and radiation. The rates of these can be affected by body mass index, body surface area to volume ratios, clothing and other environmental conditions.
Many changes to physiology occur as body temperatures decrease. These occur in the cardiovascular system leading to the Osborn J wave and other dysrhythmias, decreased central nervous system electrical activity, cold diuresis, and non-cardiogenic pulmonary edema.
Research has shown that glomerular filtration rates (GFR) decrease as a result of hypothermia. In essence, hypothermia increases preglomerular vasoconstriction, thus decreasing both renal blood flow (RBF) and GFR.
Diagnosis
Accurate determination of core temperature often requires a special low-temperature thermometer, as most clinical thermometers do not measure accurately at the low body temperatures seen in hypothermia. A low-temperature thermometer can be placed in the rectum, esophagus or bladder. Esophageal measurements are the most accurate and are recommended once a person is intubated. Other methods of measurement such as in the mouth, under the arm, or using an infrared ear thermometer are often not accurate.
As a hypothermic person's heart rate may be very slow, prolonged feeling for a pulse may be required before one is detected. In 2005, the American Heart Association recommended at least 30–45 seconds to verify the absence of a pulse before initiating CPR. Others recommend a 60-second check.
The classical ECG finding of hypothermia is the Osborn J wave. Also, ventricular fibrillation frequently occurs below 28 °C (82 °F) and asystole below 20 °C (68 °F). The Osborn J wave may look very similar to the changes of an acute ST elevation myocardial infarction. Thrombolysis as a reaction to the presence of Osborn J waves is not indicated, as it would only worsen the underlying coagulopathy caused by hypothermia.
Prevention
Staying dry and wearing proper clothing help to prevent hypothermia. Synthetic and wool fabrics are superior to cotton as they provide better insulation when wet and dry. Some synthetic fabrics, such as polypropylene and polyester, are used in clothing designed to wick perspiration away from the body, such as liner socks and moisture-wicking undergarments. Clothing should be loose fitting, as tight clothing reduces the circulation of warm blood. In planning outdoor activity, prepare appropriately for possible cold weather. Those who drink alcohol before or during outdoor activity should ensure at least one sober person responsible for safety is present.
Covering the head is effective, but no more effective than covering any other part of the body. While common folklore says that people lose most of their heat through their heads, heat loss from the head is no more significant than that from other uncovered parts of the body. However, heat loss from the head is significant in infants, whose head is larger relative to the rest of the body than in adults. Several studies have shown that for uncovered infants, lined hats significantly reduce heat loss and thermal stress. Children have a larger surface area per unit mass, and other things being equal should have one more layer of clothing than adults in similar conditions, and the time they spend in cold environments should be limited. However, children are often more active than adults, and may generate more heat. In both adults and children, overexertion causes sweating and thus increases heat loss.
Building a shelter can aid survival where there is danger of death from exposure. Shelters can be constructed out of a variety of materials. Metal can conduct heat away from the occupants and is sometimes best avoided. The shelter should not be too big so body warmth stays near the occupants. Good ventilation is essential especially if a fire will be lit in the shelter. Fires should be put out before the occupants sleep to prevent carbon monoxide poisoning. People caught in very cold, snowy conditions can build an igloo or snow cave to shelter.
The United States Coast Guard promotes using life vests to protect against hypothermia through the 50/50/50 rule: if someone is in 50 °F (10 °C) water for 50 minutes, they have a 50 percent better chance of survival if they are wearing a life jacket. A heat escape lessening position can be used to increase survival in cold water.
Babies should sleep at a room temperature of 16–20 °C (61–68 °F), and housebound people should be checked regularly to make sure the temperature of the home is at least 18 °C (64 °F).
Management
Aggressiveness of treatment is matched to the degree of hypothermia. Treatment ranges from noninvasive, passive external warming to active external rewarming, to active core rewarming. In severe cases resuscitation begins with simultaneous removal from the cold environment and management of the airway, breathing, and circulation. Rapid rewarming is then commenced. Moving the person as little and as gently as possible is recommended as aggressive handling may increase risks of a dysrhythmia.
Hypoglycemia is a frequent complication and needs to be tested for and treated. Intravenous thiamine and glucose is often recommended, as many causes of hypothermia are complicated by Wernicke's encephalopathy.
The UK National Health Service advises against putting a person in a hot bath, massaging their arms and legs, using a heating pad, or giving them alcohol. These measures can cause a rapid fall in blood pressure and potential cardiac arrest.
Rewarming
Rewarming can be done with a number of methods including passive external rewarming, active external rewarming, and active internal rewarming. Passive external rewarming involves the use of a person's own ability to generate heat by providing properly insulated dry clothing and moving to a warm environment. Passive external rewarming is recommended for those with mild hypothermia.
Active external rewarming involves applying warming devices externally, such as a heating blanket. These may function by warmed forced air (Bair Hugger is a commonly used device), chemical reactions, or electricity. In wilderness environments, hypothermia may be helped by placing hot water bottles in both armpits and in the groin. Active external rewarming is recommended for moderate hypothermia. Active core rewarming involves the use of intravenous warmed fluids, irrigation of body cavities with warmed fluids (the chest or abdomen), use of warm humidified inhaled air, or use of extracorporeal rewarming such as via a heart lung machine or extracorporeal membrane oxygenation (ECMO). Extracorporeal rewarming is the fastest method for those with severe hypothermia. When severe hypothermia has led to cardiac arrest, effective extracorporeal warming results in survival with normal mental function about 50% of the time. Chest irrigation is recommended if bypass or ECMO is not possible.
Rewarming shock (or rewarming collapse) is a sudden drop in blood pressure in combination with a low cardiac output which may occur during active treatment of a severely hypothermic person. There was a theoretical concern that external rewarming rather than internal rewarming may increase the risk. These concerns were partly believed to be due to afterdrop, a situation detected during laboratory experiments where there is a continued decrease in core temperature after rewarming has been started. Recent studies have not supported these concerns, and problems are not found with active external rewarming.
Fluids
For people who are alert and able to swallow, drinking warm (not hot) sweetened liquids can help raise the temperature. General medical consensus advises against alcohol and caffeinated drinks. As most hypothermic people are moderately dehydrated due to cold-induced diuresis, warmed intravenous fluids are often recommended.
Cardiac arrest
In those without signs of life, cardiopulmonary resuscitation (CPR) should be continued during active rewarming. For ventricular fibrillation or ventricular tachycardia, a single defibrillation should be attempted. However, people with severe hypothermia may not respond to pacing or defibrillation. It is not known whether further defibrillation should be withheld until the core temperature reaches 30 °C (86 °F). In Europe, epinephrine is not recommended until the person's core temperature reaches 30 °C (86 °F), while the American Heart Association recommends up to three doses of epinephrine before a core temperature of 30 °C (86 °F) is reached. Once this temperature has been reached, normal ACLS protocols should be followed.
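As a rough illustration of how such a temperature threshold gates resuscitation decisions, the sketch below encodes the logic described above as a simple Python function. It is purely illustrative and not clinical guidance; the threshold constant, function name, and guideline labels are assumptions made for the example.

```python
# Illustrative sketch only -- not clinical guidance. It simply encodes the
# core-temperature threshold described above for epinephrine use during
# rewarming; names and structure are assumptions for the example.

CORE_TEMP_THRESHOLD_C = 30.0  # approximate threshold discussed above

def epinephrine_guidance(core_temp_c: float, guideline: str) -> str:
    """Describe epinephrine use below or above the rewarming threshold."""
    if core_temp_c >= CORE_TEMP_THRESHOLD_C:
        return "follow normal ACLS dosing"
    if guideline == "European":
        return "withhold epinephrine until further rewarmed"
    if guideline == "AHA":
        return "up to three doses may be considered before rewarming"
    raise ValueError(f"unknown guideline: {guideline}")

if __name__ == "__main__":
    for temp in (26.0, 31.0):
        for g in ("European", "AHA"):
            print(f"{temp} C, {g}: {epinephrine_guidance(temp, g)}")
```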
Prognosis
It is usually recommended not to declare a person dead until their body is warmed to a near-normal body temperature of greater than 32 °C (90 °F), since extreme hypothermia can suppress heart and brain function. This is summarized in the common saying "You're not dead until you're warm and dead." Exceptions include if there are obvious fatal injuries or the chest is frozen so that it cannot be compressed. If a person was buried in an avalanche for more than 35 minutes and is found with a mouth packed full of snow without a pulse, stopping early may also be reasonable. This is also the case if a person's blood potassium is greater than 12 mmol/L.
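The narrow exceptions above can also be read as a short checklist, as in the sketch below. It is illustrative only, not a triage tool; the parameter names and structure are assumptions made for the example.

```python
# Illustrative sketch only -- not clinical guidance. It restates the exceptions
# listed above to "not dead until warm and dead"; parameter names are
# assumptions made for the example.

def early_termination_reasonable(obvious_fatal_injury: bool,
                                 chest_frozen: bool,
                                 avalanche_burial_minutes: float,
                                 airway_packed_with_snow: bool,
                                 pulseless: bool,
                                 potassium_mmol_per_l: float) -> bool:
    if obvious_fatal_injury or chest_frozen:
        return True
    if avalanche_burial_minutes > 35 and airway_packed_with_snow and pulseless:
        return True
    if potassium_mmol_per_l > 12:
        return True
    return False

print(early_termination_reasonable(False, False, 40, True, True, 5.0))  # True
```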
Those who are stiff with pupils that do not move may survive if treated aggressively. Survival with good function also occasionally occurs even after the need for hours of CPR. Children who have near-drowning accidents in near-freezing water can occasionally be revived, even over an hour after losing consciousness. The cold water lowers the metabolism, allowing the brain to withstand a much longer period of hypoxia. While survival is possible, mortality from severe or profound hypothermia remains high despite optimal treatment. Studies estimate mortality at between 38% and 75%.
In those who have hypothermia due to another underlying health problem, when death occurs it is frequently from that underlying health problem.
Epidemiology
Between 1995 and 2004 in the United States, an average of 1,560 cold-related emergency department visits occurred per year and in the years 1999 to 2004, an average of 647 people died per year due to hypothermia. Of deaths reported between 1999 and 2002 in the US, 49% of those affected were 65 years or older and two-thirds were male. Most deaths were not work related (63%) and 23% of affected people were at home. Hypothermia was most common during the autumn and winter months of October through March. In the United Kingdom, an estimated 300 deaths per year are due to hypothermia, whereas the annual incidence of hypothermia-related deaths in Canada is 8,000.
History
Hypothermia has played a major role in the success or failure of many military campaigns, from Hannibal's loss of nearly half his men in the Second Punic War (218 B.C.) to the near destruction of Napoleon's armies in Russia in 1812. Men wandered around confused by hypothermia, some lost consciousness and died, others shivered, later developed torpor, and tended to sleep. Others too weak to walk fell on their knees; some stayed that way for some time resisting death. The pulse of some was weak and hard to detect; others groaned; yet others had eyes open and wild with quiet delirium. Deaths from hypothermia in Russian regions continued through the first and second world wars, especially in the Battle of Stalingrad.
Civilian examples of deaths caused by hypothermia occurred during the sinkings of the RMS Titanic and RMS Lusitania, and more recently of the MS Estonia.
Antarctic explorers developed hypothermia; Ernest Shackleton and his team measured body temperatures "below 94.2°, which spells death at home", though this probably referred to oral temperatures rather than core temperature and corresponded to mild hypothermia. One of Scott's team, Atkinson, became confused through hypothermia.
Nazi human experimentation during World War II amounting to medical torture included hypothermia experiments, which killed many victims. There were 360 to 400 experiments and 280 to 300 subjects, indicating some had more than one experiment performed on them. Various methods of rewarming were attempted: "One assistant later testified that some victims were thrown into boiling water for rewarming".
Medical use
Various degrees of hypothermia may be deliberately induced in medicine, for example to treat brain injury or to lower metabolism so that total brain ischemia can be tolerated for a short time. Deep hypothermic circulatory arrest is a medical technique in which the brain is cooled as low as 10 °C, which allows the heart to be stopped and blood pressure to be lowered to zero, for the treatment of aneurysms and other circulatory problems that do not tolerate arterial pressure or blood flow. The time limit for this technique, as for accidental cardiac arrest in ice water (in which internal temperatures may drop to as low as 15 °C), is about one hour.
Other animals
Hypothermia can happen in most mammals in cold weather and can be fatal. Baby mammals such as kittens are unable to regulate their body temperatures and have a risk of hypothermia if they are not kept warm by their mothers.
Many animals other than humans often induce hypothermia during hibernation or torpor.
Water bears (tardigrades), microscopic multicellular organisms, can survive freezing by replacing most of their internal water with the sugar trehalose, preventing the crystallization that would otherwise damage cell membranes.
See also
To Build a Fire, two versions of a short story by Jack London portraying the effects of cold and hypothermia
The Little Match Girl, a short story by Hans Christian Andersen about a child dying of hypothermia
Dyatlov Pass incident
References
Bibliography
External links
CDC - NIOSH Workplace Safety & Health Topic: Cold Stress
Underwater diving medicine
Medical emergencies
Wilderness medical emergencies
Physiology
Causes of death
Cryobiology
Heat transfer
Effects of external causes
Cardiac arrhythmia
Thermoregulation
Cold waves
Weather and health
Motion sickness | Motion sickness occurs due to a difference between actual and expected motion. Symptoms commonly include nausea, vomiting, cold sweat, headache, dizziness, tiredness, loss of appetite, and increased salivation. Complications may rarely include dehydration, electrolyte problems, or a lower esophageal tear.
The cause of motion sickness is either real or perceived motion. This may include car travel, air travel, sea travel, space travel, or reality simulation. Risk factors include pregnancy, migraines, and Ménière's disease. The diagnosis is based on symptoms.
Treatment may include behavioral measures or medications. Behavioral measures include keeping the head still and focusing on the horizon. Three types of medications are useful: antimuscarinics such as scopolamine, H1 antihistamines such as dimenhydrinate, and amphetamines such as dexamphetamine. Side effects, however, may limit the use of medications. A number of medications used for nausea such as ondansetron are not effective for motion sickness.
Essentially all people can be affected given sufficient motion, and many will experience motion sickness at least once in their lifetime. Susceptibility varies, however: roughly one-third of the population is highly susceptible, while the remainder are affected only under more extreme conditions. Women are more easily affected than men. Motion sickness has been described since at least the time of Homer (eighth century BC).
Signs and symptoms
Symptoms commonly include nausea, vomiting, cold sweat, headache, dizziness, tiredness, loss of appetite, and increased salivation. Occasionally, tiredness can last for hours to days after an episode of motion sickness, known as "sopite syndrome". Rarely severe symptoms such as the inability to walk, ongoing vomiting, or social isolation may occur while rare complications may include dehydration, electrolyte problems, or a lower esophageal tear from severe vomiting.
Cause
Motion sickness can be divided into three categories:
Motion sickness caused by motion that is felt but not seen i.e. terrestrial motion sickness;
Motion sickness caused by motion that is seen but not felt i.e. space motion sickness;
Motion sickness caused when both systems detect motion but they do not correspond i.e. either terrestrial or space motion sickness.
Motion felt but not seen
In these cases, motion is sensed by the vestibular system and hence the motion is felt, but no motion or little motion is detected by the visual system, as in terrestrial motion sickness.
Carsickness
A specific form of terrestrial motion sickness, being carsick is quite common and evidenced by disorientation while reading a map, a book, or a small screen during travel. Carsickness results from the sensory conflict arising in the brain from differing sensory inputs. Motion sickness is caused by a conflict between signals arriving in the brain from the inner ear, which forms the base of the vestibular system, the sensory apparatus that deals with movement and balance, and which detects motion mechanically. If someone is looking at a stationary object within a vehicle, such as a magazine, their eyes will inform their brain that what they are viewing is not moving. Their inner ears, however, will contradict this by sensing the motion of the vehicle.
Varying theories exist as to the cause. The sensory conflict theory notes that the eyes view motion while riding in the moving vehicle while other body sensors sense stillness, creating a conflict between the eyes and the inner ear. Another suggests that the eyes mostly see the motionless interior of the car while the vestibular system of the inner ear senses motion as the vehicle goes around corners or over hills and even small bumps. Therefore, the effect is worse when looking down, but it may be lessened by looking outside the vehicle.
In the early 20th century, Austro-Hungarian scientist Róbert Bárány observed the back and forth movement of the eyes of railroad passengers as they looked out the side windows at the scenery whipping by. He called this "railway nystagmus", also known as "optokinetic nystagmus". His findings were published in the journal Laeger, 83:1516, Nov.17, 1921.
Airsickness
Air sickness is a kind of terrestrial motion sickness induced by certain sensations of air travel. It is a specific form of motion sickness and is considered a normal response in healthy individuals. It is essentially the same as carsickness but occurs in an airplane. An airplane may bank and tilt sharply, and unless passengers are sitting by a window, they are likely to see only the stationary interior of the plane due to the small window sizes and during flights at night. Another factor is that while in flight, the view out of windows may be blocked by clouds, preventing passengers from seeing the moving ground or passing clouds.
Seasickness
Seasickness is a form of terrestrial motion sickness characterized by a feeling of nausea and, in extreme cases, vertigo experienced after spending time on a boat. It is essentially the same as carsickness, though the motion of a watercraft tends to be more regular. It is typically brought on by the rocking motion of the craft or movement while the craft is immersed in water. As with airsickness, it can be difficult to visually detect motion even if one looks outside the boat since water does not offer fixed points with which to visually judge motion. Poor visibility conditions, such as fog, may worsen seasickness. The greatest contributor to seasickness is the tendency for people being affected by the rolling or surging motions of the craft to seek refuge below decks, where they are unable to relate themselves to the boat's surroundings and consequent motion. Some people with carsickness are resistant to seasickness and vice versa. Adjusting to the craft's motion at sea is called "gaining one's sea legs"; it can take a significant portion of the time spent at sea after disembarking to regain a sense of stability "post-sea legs".
Centrifuge motion sickness
Rotating devices such as centrifuges used in astronaut training and amusement park rides such as the Rotor, Mission: Space and the Gravitron can cause motion sickness in many people. While the interior of the centrifuge does not appear to move, one will experience a sense of motion. In addition, centrifugal force can cause the vestibular system to give one the sense that downward is in the direction away from the center of the centrifuge rather than the true downward direction.
Dizziness due to spinning
When one spins and stops suddenly, fluid in the inner ear continues to rotate causing a sense of continued spinning while one's visual system no longer detects motion.
Virtual reality
Usually, VR programs detect the motion of the user's head and adjust the rendered view to avoid dizziness. However, system lag or software crashes can delay screen updates. In such cases, even small head motions can trigger motion sickness through the defense mechanism described below: the inner ear transmits to the brain that it senses motion, but the eyes tell the brain that everything is still.
Motion seen but not felt
In these cases, motion is detected by the visual system and hence the motion is seen, but no motion or little motion is sensed by the vestibular system. Motion sickness arising from such situations has been referred to as "visually induced motion sickness" (VIMS).
Space motion sickness
Zero gravity interferes with the vestibular system's gravity-dependent operations, so that the two systems, vestibular and visual, no longer provide a unified and coherent sensory representation. This causes unpleasant disorientation sensations often quite distinct from terrestrial motion sickness, but with similar symptoms. The symptoms may be more intense because a condition caused by prolonged weightlessness is usually quite unfamiliar.
Space motion sickness was effectively unknown during the earliest spaceflights because the very cramped conditions of the spacecraft allowed for only minimal bodily motion, especially head motion. Space motion sickness seems to be aggravated by being able to freely move around, and so is more common in larger spacecraft. Around 60% of Space Shuttle astronauts experienced it on their first flight; the first case of space motion sickness is now thought to be the Soviet cosmonaut Gherman Titov, in August 1961 onboard Vostok 2, who reported dizziness, nausea, and vomiting. The first severe cases were in early Apollo flights; Frank Borman on Apollo 8 and Rusty Schweickart on Apollo 9. Both experienced identifiable and quite unpleasant symptoms—in the latter case causing the mission plan to be modified.
Screen images
This type of terrestrial motion sickness is particularly prevalent when susceptible people are watching films presented on very large screens such as IMAX, but may also occur in regular format theaters or even when watching TV or playing games. For the sake of novelty, IMAX and other panoramic type theaters often show dramatic motions such as flying over a landscape or riding a roller coaster.
In regular-format theaters, an example of a movie that caused motion sickness in many people is The Blair Witch Project. Theaters warned patrons of its possible nauseating effects, cautioning pregnant women in particular. Blair Witch was filmed with a handheld camcorder, which was subjected to considerably more motion than the average movie camera, and lacks the stabilization mechanisms of steadicams.
Home movies, often filmed with a cell phone camera, also tend to cause motion sickness in those who view them. The person holding the cell phone or other camera usually is unaware of this as the recording is being made since the sense of motion seems to match the motion seen through the camera's viewfinder. Those who view the film afterward only see the movement, which may be considerable, without any sense of motion. Using the zoom function seems to contribute to motion sickness as well since zooming is not a normal function of the eye. The use of a tripod or a camera or cell phone with image stabilization while filming can reduce this effect.
Virtual reality
Motion sickness due to virtual reality is very similar to simulation sickness and motion sickness due to films. In virtual reality the effect is made more acute because all external reference points are blocked from vision, the simulated images are three-dimensional, and in some cases stereo sound may also give a sense of motion. The NADS-1, a simulator located at the National Advanced Driving Simulator, is capable of accurately stimulating the vestibular system with a 360-degree horizontal field of view and a 13-degrees-of-freedom motion base. Studies have shown that exposure to rotational motions in a virtual environment can cause significant increases in nausea and other symptoms of motion sickness.
In a study conducted by the U.S. Army Research Institute for the Behavioral and Social Sciences in a report published May 1995 titled "Technical Report 1027 – Simulator Sickness in Virtual Environments", out of 742 pilot exposures from 11 military flight simulators, "approximately half of the pilots (334) reported post-effects of some kind: 250 (34%) reported that symptoms dissipated in less than one hour, 44 (6%) reported that symptoms lasted longer than four hours, and 28 (4%) reported that symptoms lasted longer than six hours. There were also four (1%) reported cases of spontaneously occurring flashbacks."
Motion that is seen and felt
When moving within a rotating reference frame, such as in a centrifuge or an environment where gravity is simulated with centrifugal force, the Coriolis effect causes a sense of motion in the vestibular system that does not match the motion that is seen.
Pathophysiology
There are various hypotheses that attempt to explain the cause of the condition.
Sensory conflict theory
Contemporary sensory conflict theory, referring to "a discontinuity between either visual, proprioceptive, and somatosensory input, or semicircular canal and otolith input", is probably the most thoroughly studied. According to this theory, when the brain presents the mind with two incongruous states of motion, the result is often nausea and other symptoms of disorientation known as motion sickness. Such conditions happen when the vestibular system and the visual system do not present a synchronized and unified representation of one's body and surroundings.
According to sensory conflict theory, the cause of terrestrial motion sickness is the opposite of the cause of space motion sickness. The former occurs when one perceives visually that one's surroundings are relatively immobile while the vestibular system reports that one's body is in motion relative to its surroundings. The latter can occur when the visual system perceives that one's surroundings are in motion while the vestibular system reports relative bodily immobility (as in zero gravity.)
Neural mismatch
A variation of the sensory conflict theory is known as neural mismatch, implying a mismatch occurring between ongoing sensory experience and long-term memory rather than between components of the vestibular and visual systems. This theory emphasizes "the limbic system in the integration of sensory information and long-term memory, in the expression of the symptoms of motion sickness, and the impact of anti-motion-sickness drugs and stress hormones on limbic system function. The limbic system may be the neural mismatch center of the brain."
Defense against poisoning
It has also been proposed that motion sickness could function as a defense mechanism against neurotoxins. The area postrema in the brain is responsible for inducing vomiting when poisons are detected, and for resolving conflicts between vision and balance. When feeling motion but not seeing it (for example, in the cabin of a ship with no portholes), the inner ear transmits to the brain that it senses motion, but the eyes tell the brain that everything is still. As a result of the incongruity, the brain concludes that the individual is hallucinating and further concludes that the hallucination is due to poison ingestion. The brain responds by inducing vomiting, to clear the supposed toxin. Treisman's indirect argument has recently been questioned via an alternative direct evolutionary hypothesis, as well as modified and extended via a direct poison hypothesis. The direct evolutionary hypothesis essentially argues that there are plausible means by which ancient real or apparent motion could have contributed directly to the evolution of aversive reactions, without the need for the co-opting of a poison response as posited by Treisman. Nevertheless, the direct poison hypothesis argues that there still are plausible ways in which the body's poison response system may have played a role in shaping the evolution of some of the signature symptoms that characterize motion sickness.
Nystagmus hypothesis
Yet another theory, known as the nystagmus hypothesis, has been proposed based on stimulation of the vagus nerve resulting from the stretching or traction of extra-ocular muscles co-occurring with eye movements caused by vestibular stimulation. There are three critical aspects to the theory: first is the close linkage between activity in the vestibular system, i.e., semicircular canals and otolith organs, and a change in tonus among various of each eye's six extra-ocular muscles. Thus, with the exception of voluntary eye movements, the vestibular and oculomotor systems are thoroughly linked. Second is the operation of Sherrington's Law describing reciprocal inhibition between agonist-antagonist muscle pairs, and by implication the stretching of extraocular muscle that must occur whenever Sherrington's Law is made to fail, thereby causing an unrelaxed (contracted) muscle to be stretched. Finally, there is the critical presence of afferent output to the Vagus nerves as a direct result of eye muscle stretch or traction. Thus, tenth nerve stimulation resulting from eye muscle stretch is proposed as the cause of motion sickness. The theory explains why labyrinthine-defective individuals are immune to motion sickness; why symptoms emerge when undergoing various body-head accelerations; why combinations of voluntary and reflexive eye movements may challenge the proper operation of Sherrington's Law, and why many drugs that suppress eye movements also serve to suppress motion sickness symptoms.
A recent theory argues that the main reason motion sickness occurs is due to an imbalance in vestibular outputs favoring the semicircular canals (nauseogenic) vs. otolith organs (anti-nauseogenic). This theory attempts to integrate previous theories of motion sickness. For example, there are many sensory conflicts that are associated with motion sickness and many that are not, but those in which canal stimulation occurs in the absence of normal otolith function (e.g., in free fall) are the most provocative. The vestibular imbalance theory is also tied to the different roles of the otoliths and canals in autonomic arousal (otolith output more sympathetic).
Diagnosis
The diagnosis is based on symptoms. Other conditions that may present similarly include vestibular disorders such as benign paroxysmal positional vertigo and vestibular migraine and stroke.
Treatment
Treatment may include behavioral measures or medications.
Behavioral measures
Behavioral measures to decrease motion sickness include holding the head still and lying on the back. Focusing on the horizon may also be useful. Listening to music, mindful breathing, being the driver, and not reading while moving are other techniques.
Habituation is the most effective technique but requires significant time. It is often used by the military for pilots. These techniques must be carried out at least every week to retain effectiveness.
A head-worn computer device with a transparent display can be used to mitigate the effects of motion sickness (and spatial disorientation) if visual indicators of the wearer's head position are shown. Such a device functions by providing the wearer with digital reference lines in their field of vision that indicate the horizon's position relative to the user's head. This is accomplished by combining readings from accelerometers and gyroscopes mounted in the device. This technology has been implemented in both standalone devices and Google Glass. One promising-looking treatment is to wear LCD shutter glasses that create stroboscopic vision at 4 Hz with a dwell of 10 milliseconds.
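The stroboscopic figures quoted above imply a very low shutter duty cycle, which the short calculation below makes explicit. It is simple arithmetic for illustration only, not a description of any particular device.

```python
# Simple arithmetic for the stroboscopic settings mentioned above
# (4 Hz flash rate, 10 ms open "dwell" per flash); illustrative only.

frequency_hz = 4.0
dwell_s = 0.010                   # 10 milliseconds per flash

period_s = 1.0 / frequency_hz     # 0.25 s between flashes
duty_cycle = dwell_s / period_s   # fraction of time the view is unblocked

print(f"period: {period_s * 1000:.0f} ms, duty cycle: {duty_cycle:.1%}")
# -> period: 250 ms, duty cycle: 4.0%
```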
Medication
Three types of medications are sometimes prescribed to improve symptoms of motion sickness: antimuscarinics such as scopolamine, H1 antihistamines such as dimenhydrinate, and amphetamines such as dexamphetamine. Benefits are greater if used before the onset of symptoms or shortly after symptoms begin. Side effects, however, may limit the use of medications. A number of medications used for nausea such as ondansetron and metoclopramide are not effective in motion sickness.
Scopolamine (antimuscarinic)
Scopolamine is the most effective medication. Evidence is best for when it is used preventatively. It is available as a skin patch. Side effects may include blurry vision.
Antihistamines
Antihistamine medications are sometimes given to prevent or treat motion sickness. This class of medication is often effective at reducing the risk of getting motion sickness while in motion, however, the effectiveness of antihistamines at treating or stopping motion sickness once a person is already experiencing it has not been well studied. Effective first generation antihistamines include doxylamine, diphenhydramine, promethazine, meclizine, cyclizine, and cinnarizine. In pregnancy meclizine, dimenhydrinate and doxylamine are generally felt to be safe. Side effects include sleepiness. Second generation antihistamines have not been found to be useful.
Amphetamines
Dextroamphetamine may be used together with an antihistamine or an antimuscarinic. Concerns include their addictive potential.
Those involved in high-risk activities, such as SCUBA diving, should evaluate the risks versus the benefits of medications. Promethazine combined with ephedrine to counteract the sedation is known as "the Coast Guard cocktail".
Alternative medicine
Alternative treatments include acupuncture and ginger, although their effectiveness against motion sickness is variable. Providing smells does not appear to have a significant effect on the rate of motion sickness.
Epidemiology
Roughly one-third of people are highly susceptible to motion sickness, and most of the rest get motion sick under extreme conditions. Around 80% of the general population is susceptible to cases of medium to high motion sickness. The rates of space motion sickness have been estimated at between forty and eighty percent of those who enter weightless orbit. Several factors influence susceptibility to motion sickness, including sleep deprivation and the cubic footage allocated to each space traveler. Studies indicate that women are more likely to be affected than men, and that the risk decreases with advancing age. There is some evidence that people with Asian ancestry may develop motion sickness more frequently than people of European ancestry, and there are situational and behavioral factors, such as whether a passenger has a view of the road ahead, and diet and eating behaviors.
See also
Mal de debarquement - disembarkment syndrome, usually follows a cruise or other motion experience
References
External links
Motion Sickness from MedlinePlus
Neurological disorders
Effects of external causes
Vomiting
Synovial joint | A synovial joint, also known as diarthrosis, join bones or cartilage with a fibrous joint capsule that is continuous with the periosteum of the joined bones, constitutes the outer boundary of a synovial cavity, and surrounds the bones' articulating surfaces. This joint unites long bones and permits free bone movement and greater mobility. The synovial cavity/joint is filled with synovial fluid. The joint capsule is made up of an outer layer of fibrous membrane, which keeps the bones together structurally, and an inner layer, the synovial membrane, which seals in the synovial fluid.
They are the most common and most movable type of joint in the body of a mammal. As with most other joints, synovial joints achieve movement at the point of contact of the articulating bones.
Structure
Synovial joints contain the following structures:
Synovial cavity: all diarthroses have the characteristic space between the bones that is filled with synovial fluid.
Joint capsule: the fibrous capsule, continuous with the periosteum of articulating bones, surrounds the diarthrosis and unites the articulating bones; the joint capsule consists of two layers - (1) the outer fibrous membrane that may contain ligaments and (2) the inner synovial membrane that secretes the lubricating, shock absorbing, and joint-nourishing synovial fluid; the joint capsule is highly innervated, but without blood and lymph vessels, and receives nutrition from the surrounding blood supply via either diffusion (slow), or via convection (fast, more efficient), induced through exercise.
Articular cartilage: the bones of a synovial joint are covered by a layer of hyaline cartilage that lines the epiphyses of the joint end of the bone with a smooth, slippery surface that prevents adhesion; articular cartilage functions to absorb shock and reduce friction during movement.
Many, but not all, synovial joints also contain additional structures:
Articular discs or menisci - the fibrocartilage pads between opposing surfaces in a joint
Articular fat pads - adipose tissue pads that protect the articular cartilage, as seen in the infrapatellar fat pad in the knee
Tendons - cords of dense regular connective tissue composed of parallel bundles of collagen fibers
Accessory ligaments (extracapsular and intracapsular) - the fibers of some fibrous membranes are arranged in parallel bundles of dense regular connective tissue that are highly adapted for resisting strains to prevent extreme movements that may damage the articulation
Bursae - sac-like structures that are situated strategically to alleviate friction in some joints (shoulder and knee) that are filled with fluid similar to synovial fluid
The bone surrounding the joint on the proximal side is sometimes called the plafond (French word for ceiling), especially in the talocrural joint. Damage to this structure is referred to as a Gosselin fracture.
Blood supply
The blood supply of a synovial joint is derived from the arteries sharing in the anastomosis around the joint.
Types
There are seven types of synovial joints. Some are relatively immobile, therefore more stable. Others have multiple degrees of freedom, but at the expense of greater risk of injury. In ascending order of mobility, they are:
Multiaxial joints
A multiaxial joint (polyaxial joint or triaxial joint) is a synovial joint that allows for several directions of movement. In the human body, the shoulder and hip joints are multiaxial joints. They allow the upper or lower limb to move in an anterior-posterior direction and a medial-lateral direction. In addition, the limb can also be rotated around its long axis. This third movement results in rotation of the limb so that its anterior surface is moved either toward or away from the midline of the body.
Function
The movements possible with synovial joints are:
abduction: movement away from the mid-line of the body
adduction: movement toward the mid-line of the body
extension: straightening limbs at a joint
flexion: bending the limbs at a joint
rotation: a circular movement around a fixed point
Clinical significance
The joint space equals the distance between the involved bones of the joint. A joint space narrowing is a sign of either (or both) osteoarthritis and inflammatory degeneration. The normal joint space is at least 2 mm in the hip (at the superior acetabulum), at least 3 mm in the knee, and 4–5 mm in the shoulder joint. For the temporomandibular joint, a joint space of between 1.5 and 4 mm is regarded as normal. Joint space narrowing is therefore a component of several radiographic classifications of osteoarthritis.
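For illustration, the quoted lower limits of normal joint space can be collected into a simple lookup, as in the sketch below. The values are approximations taken from the text above, and the function is an illustrative check only, not a validated radiographic scoring tool.

```python
# Minimal sketch, assuming the approximate lower limits of normal joint space
# quoted above; not a validated radiographic scoring tool.

NORMAL_MIN_JOINT_SPACE_MM = {
    "hip (superior acetabulum)": 2.0,
    "knee": 3.0,
    "shoulder": 4.0,
    "temporomandibular": 1.5,
}

def joint_space_narrowed(joint: str, measured_mm: float) -> bool:
    """Return True if the measured space falls below the quoted lower limit."""
    return measured_mm < NORMAL_MIN_JOINT_SPACE_MM[joint]

print(joint_space_narrowed("knee", 2.4))  # True: below the roughly 3 mm limit
```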
In rheumatoid arthritis, the clinical manifestations are primarily synovial inflammation and joint damage. The fibroblast-like synoviocytes, highly specialized mesenchymal cells found in the synovial membrane, have an active and prominent role in the pathogenic processes in the rheumatic joints. Therapies that target these cells are emerging as promising therapeutic tools, raising hope for future applications in rheumatoid arthritis.
References
Sources
Joints | 0.767172 | 0.996831 | 0.764741 |
Psychiatry | Psychiatry is the medical specialty devoted to the diagnosis, prevention, and treatment of deleterious mental conditions. These include various matters related to mood, behaviour, cognition, perceptions, and emotions.
Initial psychiatric assessment of a person begins with creating a case history and conducting a mental status examination. Physical examinations, psychological tests, and laboratory tests may be conducted. On occasion, neuroimaging or other neurophysiological studies are performed. Mental disorders are diagnosed in accordance with diagnostic manuals such as the International Classification of Diseases (ICD), edited by the World Health Organization (WHO), and the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association (APA). The fifth edition of the DSM (DSM-5), published in May 2013, reorganized the categories of disorders and added newer information and insights consistent with current research.
Treatment may include psychotropics (psychiatric medicines), interventional approaches and psychotherapy, and also other modalities such as assertive community treatment, community reinforcement, substance-abuse treatment, and supported employment. Treatment may be delivered on an inpatient or outpatient basis, depending on the severity of functional impairment or risk to the individual or community. Research within psychiatry is conducted on an interdisciplinary basis with other professionals, such as epidemiologists, nurses, social workers, occupational therapists, and clinical psychologists.
Etymology
The term psychiatry was first coined by the German physician Johann Christian Reil in 1808 and literally means the 'medical treatment of the soul' (ψυχή psych- 'soul' from Ancient Greek psykhē 'soul'; -iatry 'medical treatment' from Gk. ιατρικός iātrikos 'medical' from ιάσθαι iāsthai 'to heal'). A medical doctor specializing in psychiatry is a psychiatrist (for a historical overview, see: Timeline of psychiatry).
Theory and focus
Psychiatry refers to a field of medicine focused specifically on the mind, aiming to study, prevent, and treat mental disorders in humans. It has been described as an intermediary between the world from a social context and the world from the perspective of those who are mentally ill.
People who specialize in psychiatry often differ from most other mental health professionals and physicians in that they must be familiar with both the social and biological sciences. The discipline studies the operations of different organs and body systems as classified by the patient's subjective experiences and the objective physiology of the patient. Psychiatry treats mental disorders, which are conventionally divided into three general categories: mental illnesses, severe learning disabilities, and personality disorders. Although the focus of psychiatry has changed little over time, the diagnostic and treatment processes have evolved dramatically and continue to do so. Since the late 20th century, the field of psychiatry has continued to become more biological and less conceptually isolated from other medical fields.
Scope of practice
Though the medical specialty of psychiatry uses research in the field of neuroscience, psychology, medicine, biology, biochemistry, and pharmacology, it has generally been considered a middle ground between neurology and psychology. Because psychiatry and neurology are deeply intertwined medical specialties, all certification for both specialties and for their subspecialties is offered by a single board, the American Board of Psychiatry and Neurology, one of the member boards of the American Board of Medical Specialties. Unlike other physicians and neurologists, psychiatrists specialize in the doctor–patient relationship and are trained to varying extents in the use of psychotherapy and other therapeutic communication techniques. Psychiatrists also differ from psychologists in that they are physicians and have post-graduate training called residency (usually four to five years) in psychiatry; the quality and thoroughness of their graduate medical training is identical to that of all other physicians. Psychiatrists can therefore counsel patients, prescribe medication, order laboratory tests, order neuroimaging, and conduct physical examinations. As well, some psychiatrists are trained in interventional psychiatry and can deliver interventional treatments such as electroconvulsive therapy, transcranial magnetic stimulation, vagus nerve stimulation and ketamine.
Ethics
The World Psychiatric Association, like other bodies that set professional ethics, issues an ethical code to govern the conduct of psychiatrists. The psychiatric code of ethics, first set forth through the Declaration of Hawaii in 1977, has been expanded through a 1983 Vienna update and the broader Madrid Declaration in 1996. The code was further revised during the organization's general assemblies in 1999, 2002, 2005, and 2011.
The World Psychiatric Association code covers such matters as confidentiality, the death penalty, ethnic or cultural discrimination, euthanasia, genetics, the human dignity of incapacitated patients, media relations, organ transplantation, patient assessment, research ethics, sex selection, torture, and up-to-date knowledge.
In establishing such ethical codes, the profession has responded to a number of controversies about the practice of psychiatry, for example, surrounding the use of lobotomy and electroconvulsive therapy.
Discredited psychiatrists who operated outside the norms of medical ethics include Harry Bailey, Donald Ewen Cameron, Samuel A. Cartwright, Henry Cotton, and Andrei Snezhnevsky.
Approaches
Psychiatric illnesses can be conceptualised in a number of different ways. The biomedical approach examines signs and symptoms and compares them with diagnostic criteria. Mental illness can be assessed, conversely, through a narrative which tries to incorporate symptoms into a meaningful life history and to frame them as responses to external conditions. Both approaches are important in the field of psychiatry but have not been sufficiently reconciled to settle controversy over either the selection of a psychiatric paradigm or the specification of psychopathology. The notion of a "biopsychosocial model" is often used to underline the multifactorial nature of clinical impairment, although in this notion the word model is not used in a strictly scientific way. Alternatively, Niall McLaren acknowledges the physiological basis for the mind's existence but identifies cognition as an irreducible and independent realm in which disorder may occur. His biocognitive approach includes a mentalist etiology and provides a natural dualist (i.e., non-spiritual) revision of the biopsychosocial view, reflecting the efforts of the Australian psychiatrist to bring the discipline into scientific maturity in accordance with the paradigmatic standards of philosopher Thomas Kuhn.
Once a medical professional diagnoses a patient there are numerous ways that they could choose to treat the patient. Often psychiatrists will develop a treatment strategy that incorporates different facets of different approaches into one. Drug prescriptions are very commonly written to be regimented to patients along with any therapy they receive. There are three major pillars of psychotherapy that treatment strategies are most regularly drawn from. Humanistic psychology attempts to put the "whole" of the patient in perspective; it also focuses on self exploration. Behaviorism is a therapeutic school of thought that elects to focus solely on real and observable events, rather than mining the unconscious or subconscious. Psychoanalysis, on the other hand, concentrates its dealings on early childhood, irrational drives, the unconscious, and conflict between conscious and unconscious streams.
Practitioners
All physicians can diagnose mental disorders and prescribe treatments utilizing principles of psychiatry. Psychiatrists are trained physicians who specialize in psychiatry and are certified to treat mental illness. They may treat outpatients, inpatients, or both; they may practice as solo practitioners or as members of groups; they may be self-employed, be members of partnerships, or be employees of governmental, academic, nonprofit, or for-profit entities; employees of hospitals; they may treat military personnel as civilians or as members of the military; and in any of these settings they may function as clinicians, researchers, teachers, or some combination of these. Although psychiatrists may also go through significant training to conduct psychotherapy, psychoanalysis or cognitive behavioral therapy, it is their training as physicians that differentiates them from other mental health professionals.
As a career choice in the US
Psychiatry has not traditionally been a popular career choice among medical students, even though medical school placements are rated favorably. This has resulted in a significant shortage of psychiatrists in the United States and elsewhere. Strategies to address this shortfall have included the use of short 'taster' placements early in the medical school curriculum and attempts to extend psychiatry services further using telemedicine technologies and other methods. Recently, however, there has been an increase in the number of medical students entering psychiatry residencies. There are several reasons for this surge, including the intriguing nature of the field, growing interest in genetic biomarkers involved in psychiatric diagnoses, and newer pharmaceuticals on the drug market to treat psychiatric illnesses.
Subspecialties
The field of psychiatry has many subspecialties that require additional training and certification by the American Board of Psychiatry and Neurology (ABPN). Such subspecialties include:
Addiction psychiatry, addiction medicine
Brain injury medicine
Child and adolescent psychiatry
Consultation-liaison psychiatry
Forensic psychiatry
Geriatric psychiatry
Hospice and palliative medicine
Sleep medicine
Additional psychiatry subspecialties, for which the ABPN does not provide formal certification, include:
Biological psychiatry
Community psychiatry
Cross-cultural psychiatry
Emergency psychiatry
Evolutionary psychiatry
Global mental health
Learning disabilities
Military psychiatry
Neurodevelopmental disorders
Neuropsychiatry
Interventional Psychiatry
Social psychiatry
Addiction psychiatry focuses on evaluation and treatment of individuals with alcohol, drug, or other substance-related disorders, and of individuals with dual diagnosis of substance-related and other psychiatric disorders. Biological psychiatry is an approach to psychiatry that aims to understand mental disorders in terms of the biological function of the nervous system. Child and adolescent psychiatry is the branch of psychiatry that specializes in work with children, teenagers, and their families. Community psychiatry is an approach that reflects an inclusive public health perspective and is practiced in community mental health services. Cross-cultural psychiatry is a branch of psychiatry concerned with the cultural and ethnic context of mental disorder and psychiatric services. Emergency psychiatry is the clinical application of psychiatry in emergency settings. Forensic psychiatry utilizes medical science generally, and psychiatric knowledge and assessment methods in particular, to help answer legal questions. Geriatric psychiatry is a branch of psychiatry dealing with the study, prevention, and treatment of mental disorders in the elderly. Global mental health is an area of study, research and practice that places a priority on improving mental health and achieving equity in mental health for all people worldwide, although some scholars consider it to be a neo-colonial, culturally insensitive project. Liaison psychiatry is the branch of psychiatry that specializes in the interface between other medical specialties and psychiatry. Military psychiatry covers special aspects of psychiatry and mental disorders within the military context. Neuropsychiatry is a branch of medicine dealing with mental disorders attributable to diseases of the nervous system. Social psychiatry is a branch of psychiatry that focuses on the interpersonal and cultural context of mental disorder and mental well-being.
In larger healthcare organizations, psychiatrists often serve in senior management roles, where they are responsible for the efficient and effective delivery of mental health services for the organization's constituents. For example, the Chief of Mental Health Services at most VA medical centers is usually a psychiatrist, although psychologists occasionally are selected for the position as well.
In the United States, psychiatry is one of the few specialties which qualify for further education and board-certification in pain medicine, palliative medicine, and sleep medicine.
Research
Psychiatric research is, by its very nature, interdisciplinary, combining social, biological, and psychological perspectives in an attempt to understand the nature and treatment of mental disorders. Clinical and research psychiatrists study basic and clinical psychiatric topics at research institutions and publish articles in journals. Under the supervision of institutional review boards, psychiatric clinical researchers look at topics such as neuroimaging, genetics, and psychopharmacology in order to enhance diagnostic validity and reliability, to discover new treatment methods, and to classify new mental disorders.
Clinical application
Diagnostic systems
Psychiatric diagnoses take place in a wide variety of settings and are performed by many different health professionals. Therefore, the diagnostic procedure may vary greatly based upon these factors. Typically, though, a psychiatric diagnosis utilizes a differential diagnosis procedure where a mental status examination and physical examination is conducted, with pathological, psychopathological or psychosocial histories obtained, and sometimes neuroimages or other neurophysiological measurements are taken, or personality tests or cognitive tests administered. In some cases, a brain scan might be used to rule out other medical illnesses, but at this time relying on brain scans alone cannot accurately diagnose a mental illness or tell the risk of getting a mental illness in the future. Some clinicians are beginning to utilize genetics and automated speech assessment during the diagnostic process but on the whole these remain research topics.
Potential use of MRI/fMRI in diagnosis
In 2018, the American Psychological Association commissioned a review to reach a consensus on whether modern clinical MRI/fMRI will be able to be used in the diagnosis of mental health disorders. The criteria presented by the APA stated that the biomarkers used in diagnosis should:
"have a sensitivity of at least 80% for detecting a particular psychiatric disorder"
"should have a specificity of at least 80% for distinguishing this disorder from other psychiatric or medical disorders"
"should be reliable, reproducible, and ideally be noninvasive, simple to perform, and inexpensive"
"proposed biomarkers should be verified by 2 independent studies each by a different investigator and different population samples and published in a peer-reviewed journal"
The review concluded that although neuroimaging-based diagnosis may technically be feasible, very large studies are needed to evaluate specific biomarkers, which were not yet available.
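For illustration, the review's quantitative criteria can be expressed as a simple check, as sketched below. The thresholds follow the quoted text, while the dataclass fields, names, and structure are assumptions made for the example rather than part of any published specification.

```python
# Minimal sketch of the quoted review criteria as a simple check; field names
# and the dataclass structure are assumptions for the example.

from dataclasses import dataclass

@dataclass
class BiomarkerEvidence:
    sensitivity: float             # e.g. 0.85 means 85%
    specificity: float             # e.g. 0.82 means 82%
    independent_replications: int  # independent, peer-reviewed studies

def meets_review_criteria(b: BiomarkerEvidence) -> bool:
    return (b.sensitivity >= 0.80
            and b.specificity >= 0.80
            and b.independent_replications >= 2)

print(meets_review_criteria(BiomarkerEvidence(0.85, 0.82, 2)))  # True
```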
Diagnostic manuals
Three main diagnostic manuals used to classify mental health conditions are in use today. The ICD-11 is produced and published by the World Health Organization, includes a section on psychiatric conditions, and is used worldwide. The Diagnostic and Statistical Manual of Mental Disorders, produced and published by the American Psychiatric Association (APA), is primarily focused on mental health conditions and is the main classification tool in the United States. It is currently in its fifth revised edition and is also used worldwide. The Chinese Society of Psychiatry has also produced a diagnostic manual, the Chinese Classification of Mental Disorders.
The stated intention of diagnostic manuals is typically to develop replicable and clinically useful categories and criteria, to facilitate consensus and agreed upon standards, whilst being atheoretical as regards etiology. However, the categories are nevertheless based on particular psychiatric theories and data; they are broad and often specified by numerous possible combinations of symptoms, and many of the categories overlap in symptomology or typically occur together. While originally intended only as a guide for experienced clinicians trained in its use, the nomenclature is now widely used by clinicians, administrators and insurance companies in many countries.
The DSM has attracted praise for standardizing psychiatric diagnostic categories and criteria. It has also attracted controversy and criticism. Some critics argue that the DSM represents an unscientific system that enshrines the opinions of a few powerful psychiatrists. There are ongoing issues concerning the validity and reliability of the diagnostic categories; the reliance on superficial symptoms; the use of artificial dividing lines between categories and from 'normality'; possible cultural bias; medicalization of human distress and financial conflicts of interest, including with the practice of psychiatrists and with the pharmaceutical industry; political controversies about the inclusion or exclusion of diagnoses from the manual, in general or in regard to specific issues; and the experience of those who are most directly affected by the manual by being diagnosed, including the consumer/survivor movement.
Treatment
General considerations
Individuals receiving psychiatric treatment are commonly referred to as patients but may also be called clients, consumers, or service recipients. They may come under the care of a psychiatric physician or other psychiatric practitioners by various paths, the two most common being self-referral or referral by a primary care physician. Alternatively, a person may be referred by hospital medical staff, by court order, involuntary commitment, or, in countries such as the UK and Australia, by sectioning under a mental health law.
A psychiatrist or medical provider evaluates people through a psychiatric assessment for their mental and physical condition. This usually involves interviewing the person and often obtaining information from other sources such as other health and social care professionals, relatives, associates, law enforcement personnel, emergency medical personnel, and psychiatric rating scales. A mental status examination is carried out, and a physical examination is usually performed to establish or exclude other illnesses that may be contributing to the alleged psychiatric problems. A physical examination may also serve to identify any signs of self-harm; this examination is often performed by someone other than the psychiatrist, especially if blood tests and medical imaging are performed.
Like most medications, psychiatric medications can cause adverse effects in patients, and some require ongoing therapeutic drug monitoring, for instance full blood counts, serum drug levels, renal function, liver function or thyroid function. Electroconvulsive therapy (ECT) is sometimes administered for serious conditions, such as those unresponsive to medication. The efficacy and adverse effects of psychiatric drugs may vary from patient to patient.
Inpatient treatment
Psychiatric treatments have changed over the past several decades. In the past, psychiatric patients were often hospitalized for six months or more, with some cases involving hospitalization for many years.
Average inpatient psychiatric treatment stay has decreased significantly since the 1960s, a trend known as deinstitutionalization. Today in most countries, people receiving psychiatric treatment are more likely to be seen as outpatients. If hospitalization is required, the average hospital stay is around one to two weeks, with only a small number receiving long-term hospitalization. However, in Japan psychiatric hospitals continue to keep patients for long periods, sometimes even keeping them in physical restraints, strapped to their beds for periods of weeks or months.
Psychiatric inpatients are people admitted to a hospital or clinic to receive psychiatric care. Some are admitted involuntarily, perhaps committed to a secure hospital, or in some jurisdictions to a facility within the prison system. In many countries including the United States and Canada, the criteria for involuntary admission vary with local jurisdiction. They may be as broad as having a mental health condition, or as narrow as being an immediate danger to themselves or others. Bed availability is often the real determinant of admission decisions to hard pressed public facilities.
People may be admitted voluntarily if the treating doctor considers that safety is not compromised by this less restrictive option. For many years, controversy has surrounded the use of involuntary treatment and use of the term "lack of insight" in describing patients. Internationally, mental health laws vary significantly but in many cases, involuntary psychiatric treatment is permitted when there is deemed to be a significant risk to the patient or others due to the patient's illness. Involuntary treatment refers to treatment that occurs based on a treating physician's recommendations, without requiring consent from the patient.
Inpatient psychiatric wards may be secure (for those thought to have a particular risk of violence or self-harm) or unlocked/open. Some wards are mixed-sex whilst same-sex wards are increasingly favored to protect women inpatients. Once in the care of a hospital, people are assessed, monitored, and often given medication and care from a multidisciplinary team, which may include physicians, pharmacists, psychiatric nurse practitioners, psychiatric nurses, clinical psychologists, psychotherapists, psychiatric social workers, occupational therapists and social workers. If a person receiving treatment in a psychiatric hospital is assessed as at particular risk of harming themselves or others, they may be put on constant or intermittent one-to-one supervision and may be put in physical restraints or medicated. People on inpatient wards may be allowed leave for periods of time, either accompanied or on their own.
In many developed countries there has been a massive reduction in psychiatric beds since the mid 20th century, with the growth of community care. Italy has been a pioneer in psychiatric reform, particularly through the no-restraint initiative that began nearly fifty years ago. The Italian movement, heavily influenced by Franco Basaglia, emphasizes ethical treatment and the elimination of physical restraints in psychiatric care. A study examining the application of these principles in Italy found that 14 general hospital psychiatric units reported zero restraint incidents in 2022.
Standards of inpatient care remain a challenge in some public and private facilities, owing to levels of funding, and facilities in developing countries are typically grossly inadequate for the same reason. Even in developed countries, programs in public hospitals vary widely. Some offer structured activities and therapies from many perspectives, while others have funding only for medicating and monitoring patients. This is problematic because relatively little therapeutic work may actually take place in the hospital setting. For this reason, hospitals are increasingly used only in limited situations and moments of crisis where patients are a direct threat to themselves or others. Alternatives to psychiatric hospitals that may actively offer more therapeutic approaches include rehabilitation centers, popularly termed "rehab".
Outpatient treatment
Outpatient treatment involves periodic visits to a psychiatrist for consultation in his or her office, or at a community-based outpatient clinic. During initial appointments, a psychiatrist generally conducts a psychiatric assessment or evaluation of the patient. Follow-up appointments then focus on making medication adjustments, reviewing potential medication interactions, considering the impact of other medical disorders on the patient's mental and emotional functioning, and counseling patients regarding changes they might make to facilitate healing and remission of symptoms. The frequency with which a psychiatrist sees people in treatment varies widely, from once a week to twice a year, depending on the type, severity and stability of each person's condition, and depending on what the clinician and patient decide would be best.
Increasingly, psychiatrists are limiting their practices to psychopharmacology (prescribing medications), as opposed to previous practice in which a psychiatrist would provide traditional 50-minute psychotherapy sessions, of which psychopharmacology would be a part, but most of the consultation sessions consisted of "talk therapy". This shift began in the early 1980s and accelerated in the 1990s and 2000s. A major reason for this change was the advent of managed care insurance plans, which began to limit reimbursement for psychotherapy sessions provided by psychiatrists. The underlying assumption was that psychopharmacology was at least as effective as psychotherapy, and it could be delivered more efficiently because less time is required for the appointment. Because of this shift in practice patterns, psychiatrists often refer patients whom they think would benefit from psychotherapy to other mental health professionals, e.g., clinical social workers and psychologists.
Telepsychiatry
History
Earliest knowledge
The earliest known texts on mental disorders are from ancient India and include the Ayurvedic text, Charaka Samhita. The first hospitals for curing mental illness were established in India during the 3rd century BCE.
Greek philosophers, including Thales, Plato, and Aristotle (especially in his De Anima treatise), also addressed the workings of the mind. During the 5th century BCE, mental disorders, especially those with psychotic traits, were widely considered supernatural in origin, a view which persisted throughout ancient Greece and Rome, as well as Egyptian regions; religious leaders often turned to versions of exorcism, frequently using methods that many now consider cruel or barbaric, and trepanning was one such method used throughout history. As early as the 4th century BCE, however, the Greek physician Hippocrates theorized that mental disorders had physical rather than supernatural causes. In 387 BCE, Plato suggested that the brain is where mental processes take place. Hippocrates also wrote that he visited Democritus and found him in his garden cutting open animals; Democritus, who had with him a book on madness and melancholy, explained that he was attempting to discover the cause of madness and melancholy, and Hippocrates praised his work. Alcmaeon believed the brain, not the heart, was the "organ of thought". He tracked the ascending sensory nerves from the body to the brain, theorizing that mental activity originated in the central nervous system and that the cause of mental illness resided within the brain, and he applied this understanding to classify mental diseases and treatments.
In the 6th century AD, Lin Xie carried out an early psychological experiment, in which he asked people to draw a square with one hand and at the same time draw a circle with the other (ostensibly to test people's vulnerability to distraction). This has been cited as an early psychiatric experiment.
The Islamic Golden Age fostered early studies in Islamic psychology and psychiatry, with many scholars writing about mental disorders. The Persian physician Muhammad ibn Zakariya al-Razi, also known as "Rhazes", wrote texts about psychiatric conditions in the 9th century. As chief physician of a hospital in Baghdad, he was also the director of one of the first bimaristans in the world.
The first bimaristan was founded in Baghdad in the 9th century, and several others of increasing complexity were created throughout the Arab world in the following centuries. Some of the bimaristans contained wards dedicated to the care of mentally ill patients. During the Middle Ages, psychiatric hospitals and lunatic asylums were built and expanded throughout Europe. Specialist hospitals such as Bethlem Royal Hospital in London were built in medieval Europe from the 13th century to treat mental disorders, but were used only as custodial institutions and did not provide any type of treatment. Bethlem is the oldest extant psychiatric hospital in the world.
An ancient text known as The Yellow Emperor's Classic of Internal Medicine identifies the brain as the nexus of wisdom and sensation, includes theories of personality based on yin–yang balance, and analyzes mental disorder in terms of physiological and social disequilibria. Chinese scholarship that focused on the brain advanced during the Qing Dynasty with the work of Western-educated Fang Yizhi (1611–1671), Liu Zhi (1660–1730), and Wang Qingren (1768–1831). Wang Qingren emphasized the importance of the brain as the center of the nervous system, linked mental disorder with brain diseases, and investigated the causes of dreams, insomnia, psychosis, depression, and epilepsy.
Medical specialty
The beginning of psychiatry as a medical specialty is dated to the middle of the nineteenth century, although its germination can be traced to the late eighteenth century. In the late 17th century, privately run asylums for the insane began to proliferate and expand in size. In 1713, the Bethel Hospital Norwich was opened, the first purpose-built asylum in England. In 1656, Louis XIV of France created a public system of hospitals for those with mental disorders, but as in England, no real treatment was applied.
During the Enlightenment, attitudes towards the mentally ill began to change, and mental illness came to be viewed as a disorder that required compassionate treatment. In 1758, English physician William Battie wrote his Treatise on Madness on the management of mental disorder. It was a critique aimed particularly at the Bethlem Royal Hospital, where a conservative regime continued to use barbaric custodial treatment. Battie argued for a tailored management of patients entailing cleanliness, good food, fresh air, and distraction from friends and family. He argued that mental disorder originated from dysfunction of the material brain and body rather than the internal workings of the mind.
The introduction of moral treatment was initiated independently by the French doctor Philippe Pinel and the English Quaker William Tuke. In 1792, Pinel became the chief physician at the Bicêtre Hospital. Patients were allowed to move freely about the hospital grounds, and eventually dark dungeons were replaced with sunny, well-ventilated rooms. Pinel's student and successor, Jean Esquirol (1772–1840), went on to help establish 10 new mental hospitals that operated on the same principles.
Although Tuke, Pinel and others had tried to do away with physical restraint, it remained widespread into the 19th century. At the Lincoln Asylum in England, Robert Gardiner Hill, with the support of Edward Parker Charlesworth, pioneered a mode of treatment that suited "all types" of patients, so that mechanical restraints and coercion could be dispensed with—a situation he finally achieved in 1838. In 1839, Sergeant John Adams and Dr. John Conolly were impressed by the work of Hill, and introduced the method into their Hanwell Asylum, by then the largest in the country.
The modern era of institutionalized provision for the care of the mentally ill, began in the early 19th century with a large state-led effort. In England, the Lunacy Act 1845 was an important landmark in the treatment of the mentally ill, as it explicitly changed the status of mentally ill people to patients who required treatment. All asylums were required to have written regulations and to have a resident qualified physician. In 1838, France enacted a law to regulate both the admissions into asylums and asylum services across the country.
In the United States, the erection of state asylums began with the first law for the creation of one in New York, passed in 1842. The Utica State Hospital was opened around 1850. Many state hospitals in the United States were built in the 1850s and 1860s on the Kirkbride Plan, an architectural style meant to have curative effect.
At the turn of the 19th century, England and France combined had only a few hundred individuals in asylums. By the late 1890s and early 1900s, this number had risen to the hundreds of thousands. However, the idea that mental illness could be ameliorated through institutionalization ran into difficulties: psychiatrists were pressured by an ever-increasing patient population, and asylums again became almost indistinguishable from custodial institutions.
In the early 1800s, psychiatry made advances in the diagnosis of mental illness by broadening the category of mental disease to include mood disorders, in addition to disease-level delusion or irrationality. The 20th century introduced a new psychiatry into the world, with different perspectives on mental disorders. For Emil Kraepelin, the initial ideas behind biological psychiatry, namely that the different mental disorders are all biological in nature, evolved into a new concept of "nerves", and psychiatry became a rough approximation of neurology and neuropsychiatry. Following Sigmund Freud's pioneering work, ideas stemming from psychoanalytic theory also began to take root in psychiatry. Psychoanalytic theory became popular among psychiatrists because it allowed patients to be treated in private practices instead of being warehoused in asylums.
By the 1970s, however, the psychoanalytic school of thought had become marginalized within the field, and biological psychiatry reemerged. Psychopharmacology and neurochemistry became integral parts of psychiatry, starting with Otto Loewi's discovery of the neuromodulatory properties of acetylcholine, which identified it as the first known neurotransmitter. It has since been shown that different neurotransmitters have different and multiple functions in the regulation of behaviour. In a wide range of neurochemical studies using human and animal samples, individual differences in neurotransmitter production, reuptake, receptor density and location were linked to differences in dispositions for specific psychiatric disorders, supporting the idea that many psychiatric disorders have a neurochemical basis. For example, the discovery of chlorpromazine's effectiveness in treating schizophrenia in 1952 revolutionized treatment of the disorder, as did lithium carbonate's ability to stabilize mood highs and lows in bipolar disorder in 1948. Psychotherapy was still utilized, but as a treatment for psychosocial issues.
Neuroimaging, first utilized as a tool for psychiatry in the 1980s, offers another approach to the search for biomarkers of psychiatric disorders.
In 1963, US president John F. Kennedy introduced legislation directing the National Institute of Mental Health to administer Community Mental Health Centers for those being discharged from state psychiatric hospitals. Later, though, the focus of the Community Mental Health Centers shifted to providing psychotherapy for those with acute but less serious mental disorders. Ultimately, no arrangements were made for actively following and treating severely mentally ill patients who were being discharged from hospitals, resulting in a large population of chronically homeless people with mental illness.
Controversy and criticism
The institution of psychiatry has attracted controversy since its inception. Scholars, including those from social psychiatry, psychoanalysis, psychotherapy, and critical psychiatry, have produced critiques. It has been argued that psychiatry confuses disorders of the mind with disorders of the brain that can be treated with drugs; that its use of drugs is in part due to lobbying by drug companies, resulting in distortion of research; that the concept of "mental illness" is often used to label and control those with beliefs and behaviours that the majority of people disagree with; and that it is too influenced by ideas from medicine, causing it to misunderstand the nature of mental distress. Critique of psychiatry from within the field comes from the critical psychiatry group in the UK.
Double argues that most critical psychiatry is anti-reductionist. Rashed argues that the new mental health science has moved beyond this reductionist critique by seeking integrative and biopsychosocial models for conditions, and that much of critical psychiatry now coexists with orthodox psychiatry, but notes that many critiques remain unaddressed.
The term anti-psychiatry was coined by psychiatrist David Cooper in 1967 and was later popularized by Thomas Szasz; the word Antipsychiatrie had already been used in Germany in 1904. The basic premises of the anti-psychiatry movement are that psychiatrists attempt to classify "normal" people as "deviant"; that psychiatric treatments are ultimately more damaging than helpful to patients; and that psychiatry's history involves (what may now be seen as) dangerous treatments, such as psychosurgery, an example being the frontal lobotomy (commonly called simply a lobotomy). The use of lobotomies had largely disappeared by the late 1970s.
See also
Glossary of psychiatry
Medical psychology
Biopsychiatry controversy
Child and adolescent psychiatry
Telepsychiatry
Psychiatry Innovation Lab
Anti-psychiatry
Controversies about psychiatry
Notes
References
Citations
Cited texts
Further reading
Francis, Gavin, "Changing Psychiatry's Mind" (review of Anne Harrington, Mind Fixers: Psychiatry's Troubled Search for the Biology of Mental Illness, Norton, 366 pp.; and Nathan Filer, This Book Will Change Your Mind about Mental Health: A Journey into the Heartland of Psychiatry, London, Faber and Faber, 248 pp.), The New York Review of Books, vol. LXVIII, no. 1 (14 January 2021), pp. 26–29. "[M]ental disorders are different [from illnesses addressed by other medical specialties].... [T]o treat them as purely physical is to misunderstand their nature." "[C]are [needs to be] based on distress and [cognitive, emotional, and physical] need rather than [on psychiatric] diagnos[is]", which is often uncertain, erratic, and unreplicable. (p. 29.)
Halpern, Sue, "The Bull's-Eye on Your Thoughts" (review of Nita A. Farahany, The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology, St. Martin's, 2023, 277 pp.; and Daniel Barron, Reading Our Minds: The Rise of Big Data Psychiatry, Columbia Global Reports, 2023, 150 pp.), The New York Review of Books, vol. LXX, no. 17 (2 November 2023), pp. 60–62. Psychiatrist Daniel Barron deplores psychiatry's reliance largely on subjective impressions of a patient's condition – on behavioral-pattern recognition – whereas other medical specialties dispose of a more substantial armamentarium of objective diagnostic technologies. A psychiatric patient's diagnoses are arguably more in the eye of the physician: "An anti-psychotic 'works' if a [psychiatric] patient looks and feels less psychotic." Barron also posits that talking – an important aspect of psychiatric diagnostics and treatment – involves vague, subjective language and therefore cannot reveal the brain's objective workings. He trusts, though, that Big Data technologies will make psychiatric signs and symptoms more quantifiably objective. Sue Halpern cautions, however, that "When numbers have no agreed-upon, scientifically-derived, extrinsic meaning, quantification is unavailing." (p. 62.)
Singh, Manvir, "Read the Label: How psychiatric diagnoses create identities", The New Yorker, 13 May 2024, pp. 20-24. "[T]he Diagnostic and Statistical Manual of Mental Disorders, or DSM [...] guides how Americans [...] understand and deal with mental illness. [...] The DSM as we know it appeared in 1980, with the publication of the DSM-III [which] favored more precise diagnostic criteria and a more scientific approach [than the first two DSM editions]. [H]owever, the emerging picture is of overlapping conditions, of categories that blur rather than stand apart. No disorder has been tied to a specific gene or set of genes. Nearly [p. 20] all genetic vulnerabilities implicated in mental illness have been associated with many conditions. [...] As the philosopher Ian Hacking observed, labelling people is very different from labelling quarks or microbes. Quarks and microbes are indifferent to their labels; by contrast, human classifications change how 'individuals experience themselves – and may even lead people to evolve their feelings and behavior in part because they are so classified.' Hacking's best-known example is multiple personality disorder [now called dissociative identity disorder]. Between 1972 and 1986, the number of cases of patients with multiple personalities exploded from the double digits to an estimated six thousand. [...] [I]n 1955 [n]o such diagnosis [had] existed. [Similarly, o]ver the past twenty years, the prevalence of autism in the United States has quadrupled [...]. A major driver of this surge has been a broadening of the definition and a lowering of the diagnostic threshold. Among people diagnosed with autism [...] evidence of the psychological and neurological traits associated with the condition declined by up to eighty per cent between 2000 and 2015. Temple Grandin [has commented that] [p. 21] 'The spectrum is so broad it doesn't make much sense.' [Confusion has also surrounded the term "sociopathy", which] was dropped from the DSM-II with the arrival of 'antisocial personality disorder' [...]. Some scholars associated sociopathy with remorseless and impulsive behavior caused by a brain injury. Other people associated it with an antisocial personality. [T]he psychologist Martha Stout used it to mean a lack of conscience." (p. 22.) Yet another confusing nosological entity is borderline personality disorder, "defined by sudden swings in mood, self-image, and perceptions of others. [...] The concept is generally attributed to the psychoanalyst Adolph Stern, who used it in 1937 to describe patients who were neither neurotic nor psychotic and thus [were] 'borderline.' [It has been noted that] key symptoms such as identity disturbance, outbursts of anger, and unstable interpersonal relations also feature in narcissistic and histrionic personality disorders. [Medical sociologist] Allan Horwitz [...] asks why the DSM still treats B.P.D. as a disorder of personality rather than of mood. [p. 23.] [T]he process of labelling reifies categories [that is, endows them with a deceptive quality of "thingness"], especially in the age of the Internet. [...] [P]eople everywhere encounter models of illness that they unconsciously embody. [...] In 2006, a [Mexican] student [...] developed devastating leg pain and had trouble walking; soon hundreds of classmates were afflicted." (p. 24.)
Motility
Motility is the ability of an organism to move independently using metabolic energy. This biological concept encompasses movement at various levels, from whole organisms to cells and subcellular components.
Motility is observed in animals, microorganisms, and even some plant structures, playing crucial roles in activities such as foraging, reproduction, and cellular functions. It is genetically determined but can be influenced by environmental factors.
In multicellular organisms, motility is facilitated by systems like the nervous and musculoskeletal systems, while at the cellular level, it involves mechanisms such as amoeboid movement and flagellar propulsion. These cellular movements can be directed by external stimuli, a phenomenon known as taxis. Examples include chemotaxis (movement along chemical gradients) and phototaxis (movement in response to light).
Motility also includes physiological processes like gastrointestinal movements and peristalsis. Understanding motility is important in biology, medicine, and ecology, as it impacts processes ranging from bacterial behavior to ecosystem dynamics.
Definitions
Motility, the ability of an organism to move independently using metabolic energy, can be contrasted with sessility, the state of organisms that do not possess a means of self-locomotion and are normally immobile.
Motility differs from mobility, the ability of an object to be moved.
The term vagility describes a lifeform that can be moved, but only passively; sessile organisms, including plants and fungi, often have vagile parts such as fruits, seeds, or spores, which may be dispersed by other agents such as wind, water, or other organisms.
Motility is genetically determined, but may be affected by environmental factors such as toxins. The nervous system and musculoskeletal system provide the majority of mammalian motility.
Most animals are motile, though some are vagile, that is, capable only of passive locomotion. Many bacteria and other microorganisms, as well as multicellular organisms, are motile; some mechanisms of fluid flow in multicellular organs and tissue, such as gastrointestinal motility, are also considered instances of motility. Motile marine animals are commonly called free-swimming, and motile non-parasitic organisms are called free-living.
Motility includes an organism's ability to move food through its digestive tract. There are two types of intestinal motility – peristalsis and segmentation. This motility is brought about by the contraction of smooth muscles in the gastrointestinal tract which mix the luminal contents with various secretions (segmentation) and move contents through the digestive tract from the mouth to the anus (peristalsis).
Cellular level
At the cellular level, different modes of movement exist:
amoeboid movement, a crawling-like movement, which also makes swimming possible
filopodia, enabling movement of the axonal growth cone
flagellar motility, a swimming-like motion (observed for example in spermatozoa, propelled by the regular beat of their flagellum, or the E. coli bacterium, which swims by rotating a helical prokaryotic flagellum)
gliding motility
swarming motility
twitching motility, a form of motility used by bacteria to crawl over surfaces using grappling hook-like filaments called type IV pili.
Some cells are not motile, for example Klebsiella pneumoniae and Shigella; others, such as Yersinia pestis at 37 °C, are non-motile only under specific circumstances.
Movements
Movements can be directed in response to a variety of external stimuli (a minimal chemotaxis sketch follows this list):
along a chemical gradient (see chemotaxis)
along a temperature gradient (see thermotaxis)
along a light gradient (see phototaxis)
along a magnetic field line (see magnetotaxis)
along an electric field (see galvanotaxis)
along the direction of the gravitational force (see gravitaxis)
along a rigidity gradient (see durotaxis)
along a gradient of cell adhesion sites (see haptotaxis)
along other cells or biopolymers
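To make the idea of movement directed along a gradient concrete, the following is a minimal sketch of a biased random walk toward higher chemical concentration, loosely modelled on bacterial run-and-tumble chemotaxis. The linear concentration field, step size, and tumble probabilities are illustrative assumptions, not measurements of any real organism.

```python
import random

def concentration(x):
    # Hypothetical attractant field: concentration rises linearly with x.
    return max(0.0, x)

def chemotaxis_walk(steps=200, step_len=1.0, seed=0):
    """Toy run-and-tumble walker in one dimension.

    The cell tends to keep moving in its current direction ("run") while the
    attractant concentration is increasing, and changes direction at random
    ("tumble") more often when it is not.
    """
    rng = random.Random(seed)
    x, direction = 0.0, 1
    last_c = concentration(x)
    for _ in range(steps):
        x += direction * step_len
        c = concentration(x)
        # Tumble rarely when climbing the gradient, often otherwise.
        p_tumble = 0.1 if c > last_c else 0.7
        if rng.random() < p_tumble:
            direction = rng.choice([-1, 1])
        last_c = c
    return x

if __name__ == "__main__":
    # Averaged over several walkers, the net displacement is positive,
    # i.e. the population drifts up the concentration gradient.
    finals = [chemotaxis_walk(seed=s) for s in range(20)]
    print(sum(finals) / len(finals))
```

Even this crude biasing rule produces net drift up the gradient, which is the essential behaviour exploited by chemotactic cells.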
See also
Cell migration
References
Biomedical engineering
Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare applications (e.g., diagnostic or therapeutic purposes). BME combines engineering with biological and medical sciences to advance health care treatment, including diagnosis, monitoring, and therapy. Also included under the scope of a biomedical engineer is the management of current medical equipment in hospitals while adhering to relevant industry standards. This involves procurement, routine testing, preventive maintenance, and making equipment recommendations, a role also known as a Biomedical Equipment Technician (BMET) or as a clinical engineer.
Biomedical engineering has recently emerged as its own field of study, as compared to many other engineering fields. Such an evolution is common as a new field transitions from being an interdisciplinary specialization among already-established fields to being considered a field in itself. Much of the work in biomedical engineering consists of research and development, spanning a broad array of subfields (see below). Prominent biomedical engineering applications include the development of biocompatible prostheses, various diagnostic and therapeutic medical devices ranging from clinical equipment to micro-implants, imaging technologies such as MRI and EKG/ECG, regenerative tissue growth, and the development of pharmaceutical drugs including biopharmaceuticals.
Subfields and related fields
Bioinformatics
Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data.
Bioinformatics is considered both an umbrella term for the body of biological studies that use computer programming as part of their methodology, as well as a reference to specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single-nucleotide polymorphisms (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences.
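As an illustration of the kind of analysis such pipelines automate, the short sketch below compares a toy reference sequence with a sample sequence and reports candidate single-nucleotide variants. The sequences, and the assumption that they are already aligned and of equal length, are invented for the example; real SNP calling works on aligned sequencing reads and applies statistical quality filters.

```python
def call_snvs(reference: str, sample: str):
    """Report positions where an aligned sample differs from the reference.

    Assumes the two sequences are the same length and already aligned;
    real variant callers work on read alignments and model sequencing error.
    """
    if len(reference) != len(sample):
        raise ValueError("sequences must be aligned to the same length")
    variants = []
    for pos, (ref_base, alt_base) in enumerate(zip(reference, sample), start=1):
        if ref_base != alt_base and alt_base != "N":  # ignore unknown calls
            variants.append((pos, ref_base, alt_base))
    return variants

# Toy data: each variant is reported as (1-based position, reference base, alternative base).
reference = "ATGCGTACGTTAGC"
sample    = "ATGCGTACATTAGC"
print(call_snvs(reference, sample))  # [(9, 'G', 'A')]
```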
Biomechanics
Biomechanics is the study of the structure and function of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics.
Biomaterials
A biomaterial is any matter, surface, or construct that interacts with living systems. As a science, biomaterials is about fifty years old. The study of biomaterials is called biomaterials science or biomaterials engineering. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and materials science.
Biomedical optics
Biomedical optics combines the principles of physics, engineering, and biology to study the interaction of biological tissue and light, and how this can be exploited for sensing, imaging, and treatment. It has a wide range of applications, including optical imaging, microscopy, ophthalmoscopy, spectroscopy, and therapy. Examples of biomedical optics techniques and technologies include optical coherence tomography (OCT), fluorescence microscopy, confocal microscopy, and photodynamic therapy (PDT). OCT, for example, uses light to create high-resolution, three-dimensional images of internal structures, such as the retina in the eye or the coronary arteries in the heart. Fluorescence microscopy involves labeling specific molecules with fluorescent dyes and visualizing them using light, providing insights into biological processes and disease mechanisms. More recently, adaptive optics is helping imaging by correcting aberrations in biological tissue, enabling higher resolution imaging and improved accuracy in procedures such as laser surgery and retinal imaging.
Tissue engineering
Tissue engineering, like genetic engineering (see below), is a major segment of biotechnology – which overlaps significantly with BME.
One of the goals of tissue engineering is to create artificial organs (via biological material) for patients that need organ transplants. Biomedical engineers are currently researching methods of creating such organs. Researchers have grown solid jawbones and tracheas from human stem cells towards this end. Several artificial urinary bladders have been grown in laboratories and transplanted successfully into human patients. Bioartificial organs, which use both synthetic and biological components, are also a focus area in research, such as hepatic assist devices that use liver cells within an artificial bioreactor construct.
Genetic engineering
Genetic engineering, recombinant DNA technology, genetic modification/manipulation (GM) and gene splicing are terms that apply to the direct manipulation of an organism's genes. Unlike traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern tools such as molecular cloning and transformation to directly alter the structure and characteristics of target genes. Genetic engineering techniques have found success in numerous applications. Some examples include the improvement of crop technology (not a medical application, but see biological systems engineering), the manufacture of synthetic human insulin through the use of modified bacteria, the manufacture of erythropoietin in hamster ovary cells, and the production of new types of experimental mice such as the oncomouse (cancer mouse) for research.
Neural engineering
Neural engineering (also known as neuroengineering) is a discipline that uses engineering techniques to understand, repair, replace, or enhance neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs. Neural engineering can assist with numerous things, including the future development of prosthetics. For example, cognitive neural prosthetics (CNP) are being heavily researched and would allow for a chip implant to assist people who have prosthetics by providing signals to operate assistive devices.
Pharmaceutical engineering
Pharmaceutical engineering is an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, unit operations of Chemical Engineering, and Pharmaceutical Analysis. It may be deemed as a part of pharmacy due to its focus on the use of technology on chemical agents in providing better medicinal treatment.
Hospital and medical devices
This is an extremely broad category—essentially covering all health care products that do not achieve their intended results through predominantly chemical (e.g., pharmaceuticals) or biological (e.g., vaccines) means, and do not involve metabolism.
A medical device is intended for use in:
the diagnosis of disease or other conditions
the cure, mitigation, treatment, or prevention of disease.
Some examples include pacemakers, infusion pumps, the heart-lung machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants.
Stereolithography is a practical example of medical modeling being used to create physical objects. Beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative therapies, treatments, and patient monitoring of complex diseases.
Medical devices are regulated and classified (in the US) as follows (see also Regulation); a toy lookup of these classes appears after the list:
Class I devices present minimal potential for harm to the user and are often simpler in design than Class II or Class III devices. Devices in this category include tongue depressors, bedpans, elastic bandages, examination gloves, and hand-held surgical instruments, and other similar types of common equipment.
Class II devices are subject to special controls in addition to the general controls of Class I devices. Special controls may include special labeling requirements, mandatory performance standards, and postmarket surveillance. Devices in this class are typically non-invasive and include X-ray machines, PACS, powered wheelchairs, infusion pumps, and surgical drapes.
Class III devices generally require premarket approval (PMA) or premarket notification (510k), a scientific review to ensure the device's safety and effectiveness, in addition to the general controls of Class I. Examples include replacement heart valves, hip and knee joint implants, silicone gel-filled breast implants, implanted cerebellar stimulators, implantable pacemaker pulse generators and endosseous (intra-bone) implants.
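For illustration only, the sketch below encodes the three-tier classification described above as a simple lookup table. The example devices are taken from this section, but the mapping of each class to a typical regulatory pathway is deliberately simplified; real classification is determined by FDA product codes and has many exceptions.

```python
# Simplified pairing of the US device classes described above with their
# typical regulatory controls; not a substitute for the FDA classification
# database, which governs actual regulatory pathways.
DEVICE_CLASSES = {
    "I": "general controls (many devices exempt from premarket review)",
    "II": "general plus special controls, usually 510(k) premarket notification",
    "III": "general controls plus premarket approval (PMA)",
}

EXAMPLE_DEVICES = {  # examples drawn from the text above
    "tongue depressor": "I",
    "examination gloves": "I",
    "infusion pump": "II",
    "powered wheelchair": "II",
    "replacement heart valve": "III",
    "implantable pacemaker pulse generator": "III",
}

def regulatory_pathway(device_name: str) -> str:
    cls = EXAMPLE_DEVICES.get(device_name.lower())
    if cls is None:
        return "unknown device - consult the FDA product classification database"
    return f"Class {cls}: {DEVICE_CLASSES[cls]}"

print(regulatory_pathway("Infusion pump"))
```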
Medical imaging
Medical/biomedical imaging is a major segment of medical devices. This area deals with enabling clinicians to directly or indirectly "view" things not visible in plain sight (such as due to their size, and/or location). This can involve utilizing ultrasound, magnetism, UV, radiology, and other means.
Alternatively, navigation-guided equipment utilizes electromagnetic tracking technology, such as catheter placement into the brain or feeding tube placement systems. For example, ENvizion Medical's ENvue, an electromagnetic navigation system for enteral feeding tube placement. The system uses an external field generator and several EM passive sensors enabling scaling of the display to the patient's body contour, and a real-time view of the feeding tube tip location and direction, which helps the medical staff ensure the correct placement in the GI tract.
Imaging technologies are often essential to medical diagnosis, and are typically the most complex equipment found in a hospital including: fluoroscopy, magnetic resonance imaging (MRI), nuclear medicine, positron emission tomography (PET), PET-CT scans, projection radiography such as X-rays and CT scans, tomography, ultrasound, optical microscopy, and electron microscopy.
Medical implants
An implant is a kind of medical device made to replace and act as a missing biological structure (as compared with a transplant, which indicates transplanted biomedical tissue). The surface of implants that contact the body might be made of a biomedical material such as titanium, silicone or apatite depending on what is the most functional. In some cases, implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents.
Bionics
Artificial body part replacements are one of the many applications of bionics. Concerned with the intricate and thorough study of the properties and function of human body systems, bionics may be applied to solve some engineering problems. Careful study of the different functions and processes of the eyes, ears, and other organs paved the way for improved cameras, television, radio transmitters and receivers, and many other tools.
Biomedical sensors
In recent years, biomedical sensors based on microwave technology have gained attention. Different sensors can be manufactured for specific uses in both diagnosing and monitoring disease conditions; for example, microwave sensors can be used as a complementary technique to X-ray to monitor lower-extremity trauma. Such a sensor monitors the dielectric properties of tissue and can thus detect changes in tissue composition (bone, muscle, fat, etc.) under the skin, so when measurements are taken at different times during the healing process, the sensor response changes as the trauma heals.
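A rough way to see why changing tissue composition shows up in a microwave sensor's response is that the effective permittivity of the tissue changes the sensor's electrical environment and hence shifts its resonant frequency. The sketch below uses the simple scaling f proportional to 1/sqrt(relative permittivity) for an idealized resonator; the permittivity values and the lumped-resonator model are illustrative assumptions, not calibrated tissue data.

```python
import math

def resonant_frequency(f_vacuum_hz: float, epsilon_r: float) -> float:
    """Idealized resonator: frequency scales as 1/sqrt(relative permittivity)."""
    return f_vacuum_hz / math.sqrt(epsilon_r)

# Illustrative (not measured) relative permittivities in the low-GHz range.
TISSUE_EPS = {"bone": 12.0, "muscle": 52.0, "fat": 5.5}

f0 = 2.4e9  # assumed free-space resonance of the sensor, in Hz
for tissue, eps in TISSUE_EPS.items():
    print(f"{tissue:>6}: {resonant_frequency(f0, eps) / 1e9:.2f} GHz")
```

The point of the toy model is only that different tissue mixes give measurably different responses, which is what allows repeated measurements to track healing over time.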
Clinical engineering
Clinical engineering is the branch of biomedical engineering dealing with the actual implementation of medical equipment and technologies in hospitals or other clinical settings. Major roles of clinical engineers include training and supervising biomedical equipment technicians (BMETs), selecting technological products/services and logistically managing their implementation, working with governmental regulators on inspections/audits, and serving as technological consultants for other hospital staff (e.g. physicians, administrators, I.T., etc.). Clinical engineers also advise and collaborate with medical device producers regarding prospective design improvements based on clinical experiences, as well as monitor the progression of the state of the art so as to redirect procurement patterns accordingly.
Their inherent focus on the practical implementation of technology has tended to keep them oriented more towards incremental-level redesigns and reconfigurations, as opposed to revolutionary research & development or ideas that would be many years from clinical adoption; however, there is a growing effort to expand this time-horizon over which clinical engineers can influence the trajectory of biomedical innovation. In their various roles, they form a "bridge" between the primary designers and the end-users, combining the perspective of being close to the point-of-use with training in product and process engineering. Clinical engineering departments will sometimes hire not just biomedical engineers, but also industrial/systems engineers to help address operations research/optimization, human factors, cost analysis, etc. Also, see safety engineering for a discussion of the procedures used to design safe systems. A clinical engineering department is typically structured with a manager, supervisor, engineers, and technicians; a commonly cited staffing guideline is one clinical engineer per eighty hospital beds, so a 400-bed hospital would employ around five. Clinical engineers are also authorized to audit pharmaceutical and associated stores to monitor FDA recalls of invasive items.
Rehabilitation engineering
Rehabilitation engineering is the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, and activities associated with employment, independent living, education, and integration into the community.
While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of biomedical engineering, most rehabilitation engineers have undergraduate or graduate degrees in biomedical engineering, mechanical engineering, or electrical engineering. A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility. Qualification to become a rehabilitation engineer in the UK is possible via a university BSc Honours degree course such as that offered by the Health Design & Technology Institute, Coventry University.
The rehabilitation process for people with disabilities often entails the design of assistive devices, such as walking aids, intended to promote the inclusion of their users into the mainstream of society, commerce, and recreation.
Regulatory issues
Regulatory requirements have steadily increased in recent decades in response to the many incidents caused by devices to patients. For example, from 2008 to 2011, there were 119 FDA recalls of medical devices in the US classified as Class I. According to the U.S. Food and Drug Administration (FDA), a Class I recall is associated with "a situation in which there is a reasonable probability that the use of, or exposure to, a product will cause serious adverse health consequences or death".
Regardless of the country-specific legislation, the main regulatory objectives coincide worldwide. For example, under medical device regulations a product must be 1) safe, 2) effective, and 3) consistently so across all the manufactured devices.
A product is safe if patients, users, and third parties do not run unacceptable risks of physical hazards (death, injuries, ...) in its intended use. Protective measures have to be introduced on the devices to reduce residual risks at an acceptable level if compared with the benefit derived from the use of it.
A product is effective if it performs as specified by the manufacturer in the intended use. Effectiveness is achieved through clinical evaluation, compliance to performance standards or demonstrations of substantial equivalence with an already marketed device.
The previous features have to be ensured for all the manufactured items of the medical device. This requires that a quality system shall be in place for all the relevant entities and processes that may impact safety and effectiveness over the whole medical device lifecycle.
The medical device engineering area is among the most heavily regulated fields of engineering, and practicing biomedical engineers must routinely consult and cooperate with regulatory law attorneys and other experts. The Food and Drug Administration (FDA) is the principal healthcare regulatory authority in the United States, having jurisdiction over medical devices, drugs, biologics, and combination products. The paramount objectives driving policy decisions by the FDA are the safety and effectiveness of healthcare products, which have to be assured through a quality system as specified under the 21 CFR 820 regulation. In addition, because biomedical engineers often develop devices and technologies for "consumer" use, such as physical therapy devices (which are also "medical" devices), these may also be governed in some respects by the Consumer Product Safety Commission. The greatest hurdles tend to be 510(k) "clearance" (typically for Class II devices) or pre-market "approval" (typically for drugs and Class III devices).
In the European context, safety, effectiveness, and quality are ensured through the "Conformity Assessment", defined as "the method by which a manufacturer demonstrates that its device complies with the requirements of the European Medical Device Directive". The directive specifies different procedures according to the class of the device, ranging from the simple Declaration of Conformity (Annex VII) for Class I devices to EC verification (Annex IV), Production quality assurance (Annex V), Product quality assurance (Annex VI), and Full quality assurance (Annex II). The Medical Device Directive specifies detailed procedures for certification. In general terms, these procedures include tests and verifications that are to be contained in specific deliverables such as the risk management file, the technical file, and the quality system deliverables. The risk management file is the first deliverable and conditions the following design and manufacturing steps. The risk management stage shall drive the product so that product risks are reduced to an acceptable level with respect to the benefits expected for the patients from the use of the device. The technical file contains all the documentation data and records supporting medical device certification. The FDA technical file has similar content, although organized in a different structure. The quality system deliverables usually include procedures that ensure quality throughout the whole product life cycle. The same standard (ISO EN 13485) is usually applied for quality management systems in the US and worldwide.
In the European Union, there are certifying entities named "Notified Bodies", accredited by the European Member States. The Notified Bodies must ensure the effectiveness of the certification process for all medical devices apart from the class I devices where a declaration of conformity produced by the manufacturer is sufficient for marketing. Once a product has passed all the steps required by the Medical Device Directive, the device is entitled to bear a CE marking, indicating that the device is believed to be safe and effective when used as intended, and, therefore, it can be marketed within the European Union area.
The different regulatory arrangements sometimes result in particular technologies being developed first for either the U.S. or in Europe depending on the more favorable form of regulation. While nations often strive for substantive harmony to facilitate cross-national distribution, philosophical differences about the optimal extent of regulation can be a hindrance; more restrictive regulations seem appealing on an intuitive level, but critics decry the tradeoff cost in terms of slowing access to life-saving developments.
RoHS II
Directive 2011/65/EU, better known as RoHS 2, is a recast of legislation originally introduced in 2002. The original EU legislation, the "Restrictions of Certain Hazardous Substances in Electrical and Electronics Devices" (RoHS Directive 2002/95/EC), was replaced and superseded by 2011/65/EU, published in July 2011 and commonly known as RoHS 2.
RoHS seeks to limit the dangerous substances in circulation in electronics products, in particular toxins and heavy metals, which are subsequently released into the environment when such devices are recycled.
The scope of RoHS 2 is widened to include products previously excluded, such as medical devices and industrial equipment. In addition, manufacturers are now obliged to provide conformity risk assessments and test reports – or explain why they are lacking. For the first time, not only manufacturers but also importers and distributors share a responsibility to ensure Electrical and Electronic Equipment within the scope of RoHS complies with the hazardous substances limits and have a CE mark on their products.
IEC 60601
The international standard IEC 60601-1-11 (2010) defines the requirements for electro-medical devices used in the home healthcare environment. It must now be incorporated into the design and verification of a wide range of home use and point of care medical devices, along with other applicable standards in the IEC 60601 3rd edition series.
The mandatory date for implementation of the EN European version of the standard is June 1, 2013. The US FDA requires the use of the standard on June 30, 2013, while Health Canada recently extended the required date from June 2012 to April 2013. The North American agencies will only require these standards for new device submissions, while the EU will take the more severe approach of requiring all applicable devices being placed on the market to consider the home healthcare standard.
AS/NZS 3551:2012
AS/NZS 3551:2012 is the Australian and New Zealand standard for the management of medical devices. The standard specifies the procedures required to maintain a wide range of medical assets in a clinical setting (e.g. a hospital). The standard is based on the IEC 60601 standards.
The standard covers a wide range of medical equipment management elements including, procurement, acceptance testing, maintenance (electrical safety and preventive maintenance testing) and decommissioning.
Training and certification
Education
Biomedical engineers require considerable knowledge of both engineering and biology, and typically have a Bachelor's (B.Sc., B.S., B.Eng. or B.S.E.) or Master's (M.S., M.Sc., M.S.E., or M.Eng.) or a doctoral (Ph.D., or MD-PhD) degree in BME (Biomedical Engineering) or another branch of engineering with considerable potential for BME overlap. As interest in BME increases, many engineering colleges now have a Biomedical Engineering Department or Program, with offerings ranging from the undergraduate (B.Sc., B.S., B.Eng. or B.S.E.) to doctoral levels. Biomedical engineering has only recently been emerging as its own discipline rather than a cross-disciplinary hybrid specialization of other disciplines; and BME programs at all levels are becoming more widespread, including the Bachelor of Science in Biomedical Engineering which includes enough biological science content that many students use it as a "pre-med" major in preparation for medical school. The number of biomedical engineers is expected to rise as both a cause and effect of improvements in medical technology.
In the U.S., an increasing number of undergraduate programs are also becoming recognized by ABET as accredited bioengineering/biomedical engineering programs. As of 2023, 155 programs are currently accredited by ABET.
In Canada and Australia, accredited graduate programs in biomedical engineering are common. For example, McMaster University offers an M.A.Sc., an MD/PhD, and a PhD in biomedical engineering. The first Canadian undergraduate BME program was offered at the University of Guelph as a four-year B.Eng. program. Polytechnique Montréal also offers a bachelor's degree in biomedical engineering, as does Flinders University.
As with many degrees, the reputation and ranking of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have some tangible factors for rating, such as research funding and volume, publications and citations. With BME specifically, the ranking of a university's hospital and medical school can also be a significant factor in the perceived prestige of its BME department/program.
Graduate education is a particularly important aspect in BME. While many engineering fields (such as mechanical or electrical engineering) do not need graduate-level training to obtain an entry-level job in their field, the majority of BME positions do prefer or even require them. Since most BME-related professions involve scientific research, such as in pharmaceutical and medical device development, graduate education is almost a requirement (as undergraduate degrees typically do not involve sufficient research training and experience). This can be either a Masters or Doctoral level degree; while in certain specialties a Ph.D. is notably more common than in others, it is hardly ever the majority (except in academia). In fact, the perceived need for some kind of graduate credential is so strong that some undergraduate BME programs will actively discourage students from majoring in BME without an expressed intention to also obtain a master's degree or apply to medical school afterwards.
Graduate programs in BME, like in other scientific fields, are highly varied, and particular programs may emphasize certain aspects within the field. They may also feature extensive collaborative efforts with programs in other fields (such as the university's Medical School or other engineering divisions), owing again to the interdisciplinary nature of BME. M.S. and Ph.D. programs will typically require applicants to have an undergraduate degree in BME, or another engineering discipline (plus certain life science coursework), or life science (plus certain engineering coursework).
Education in BME also varies greatly around the world. By virtue of its extensive biotechnology sector, its numerous major universities, and relatively few internal barriers, the U.S. has progressed a great deal in its development of BME education and training opportunities. Europe, which also has a large biotechnology sector and an impressive education system, has encountered trouble in creating uniform standards as the European community attempts to supplant some of the national jurisdictional barriers that still exist. Recently, initiatives such as BIOMEDEA have sprung up to develop BME-related education and professional standards. Other countries, such as Australia, are recognizing and moving to correct deficiencies in their BME education. Also, as high technology endeavors are usually marks of developed nations, some areas of the world are prone to slower development in education, including in BME.
Licensure/certification
As with other learned professions, each state has certain (fairly similar) requirements for becoming licensed as a registered Professional Engineer (PE), but in the US such a license is not required to work as an engineer in industry in the majority of situations (due to an exception known as the industrial exemption, which effectively applies to the vast majority of American engineers). The US model has generally been to require licensure only of practicing engineers who offer engineering services that impact the public welfare, safety, safeguarding of life, health, or property, while engineers working in private industry without a direct offering of engineering services to the public or other businesses, education, and government need not be licensed. This is notably not the case in many other countries, where a license is as legally necessary to practice engineering as it is for law or medicine.
Biomedical engineering is regulated in some countries, such as Australia, but registration is typically only recommended and not required.
In the UK, mechanical engineers working in the areas of Medical Engineering, Bioengineering or Biomedical engineering can gain Chartered Engineer status through the Institution of Mechanical Engineers. The Institution also runs the Engineering in Medicine and Health Division. The Institute of Physics and Engineering in Medicine (IPEM) has a panel for the accreditation of MSc courses in Biomedical Engineering and Chartered Engineering status can also be sought through IPEM.
The Fundamentals of Engineering exam – the first (and more general) of the two licensure examinations for most U.S. jurisdictions – does now cover biology (although technically not BME). For the second exam, called the Principles and Practice of Engineering exam (the Professional Engineering exam), candidates may select a particular engineering discipline's content to be tested on; there is currently no option for BME, meaning that any biomedical engineers seeking a license must prepare to take this examination in another category (which does not affect the actual license, since most jurisdictions do not recognize discipline specialties anyway). However, the Biomedical Engineering Society (BMES) was, as of 2009, exploring the possibility of seeking to implement a BME-specific version of this exam to facilitate biomedical engineers pursuing licensure.
Beyond governmental registration, certain private-sector professional/industrial organizations also offer certifications with varying degrees of prominence. One such example is the Certified Clinical Engineer (CCE) certification for Clinical engineers.
Career prospects
In 2012 there were about 19,400 biomedical engineers employed in the US, and the field was predicted to grow by 5% (faster than average) from 2012 to 2022. Biomedical engineering has the highest percentage of female engineers compared to other common engineering professions.
Notable figures
Julia Tutelman Apter (deceased) – One of the first specialists in neurophysiological research and a founding member of the Biomedical Engineering Society
Earl Bakken (deceased) – Invented the first transistorised pacemaker, co-founder of Medtronic.
Forrest Bird (deceased) – aviator and pioneer in the invention of mechanical ventilators
Y.C. Fung (deceased) – professor emeritus at the University of California, San Diego, considered by many to be the founder of modern biomechanics
Leslie Geddes (deceased) – professor emeritus at Purdue University, electrical engineer, inventor, and educator of over 2000 biomedical engineers, received a National Medal of Technology in 2006 from President George Bush for his more than 50 years of contributions that have spawned innovations ranging from burn treatments to miniature defibrillators, ligament repair to tiny blood pressure monitors for premature infants, as well as a new method for performing cardiopulmonary resuscitation (CPR).
Willem Johan Kolff (deceased) – pioneer of hemodialysis as well as in the field of artificial organs
Robert Langer – Institute Professor at MIT, runs the largest BME laboratory in the world, pioneer in drug delivery and tissue engineering
John Macleod (deceased) – one of the co-discoverers of insulin at Case Western Reserve University.
Alfred E. Mann – Physicist, entrepreneur and philanthropist. A pioneer in the field of Biomedical Engineering.
J. Thomas Mortimer – Emeritus professor of biomedical engineering at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES)
Robert M. Nerem – professor emeritus at Georgia Institute of Technology. Pioneer in regenerative tissue, biomechanics, and author of over 300 published works. His works have been cited more than 20,000 times cumulatively.
P. Hunter Peckham – Donnell Professor of Biomedical Engineering and Orthopaedics at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES)
Nicholas A. Peppas – Chaired Professor in Engineering, University of Texas at Austin, pioneer in drug delivery, biomaterials, hydrogels and nanobiotechnology.
Robert Plonsey – professor emeritus at Duke University, pioneer of electrophysiology
Otto Schmitt (deceased) – biophysicist with significant contributions to BME, working with biomimetics
Ascher Shapiro (deceased) – Institute Professor at MIT, contributed to the development of the BME field, medical devices (e.g. intra-aortic balloons)
Gordana Vunjak-Novakovic – University Professor at Columbia University, pioneer in tissue engineering and bioreactor design
John G. Webster – professor emeritus at the University of Wisconsin–Madison, a pioneer in the field of instrumentation amplifiers for the recording of electrophysiological signals
Fred Weibell – coauthor of Biomedical Instrumentation and Measurements
U.A. Whitaker (deceased) – provider of the Whitaker Foundation, which supported research and education in BME by providing over $700 million to various universities, helping to create 30 BME programs and helping finance the construction of 13 buildings
See also
Biomedical Engineering and Instrumentation Program (BEIP)
References
Further reading
External links
Parenchyma
Parenchyma is the bulk of functional substance in an animal organ or structure such as a tumour. In zoology, it is the tissue that fills the interior of flatworms. In botany, it is the ground tissue of thin-walled living cells that makes up, among other things, the mesophyll layers seen in a cross-section of a leaf.
Etymology
The term parenchyma is Neo-Latin from the Ancient Greek παρέγχυμα (parénkhyma) meaning 'visceral flesh', and from παρεγχεῖν (parenkhein) meaning 'to pour in', from παρά (pará) 'beside' + ἐν (en) 'in' + χεῖν (khein) 'to pour'.
Originally, Erasistratus and other anatomists used it for certain human tissues. Later, it was also applied to plant tissues by Nehemiah Grew.
Structure
The parenchyma comprises the functional parts of an organ, or of a structure such as a tumour in the body. This is in contrast to the stroma, which refers to the structural tissue of organs or of structures, namely, the connective tissues.
Brain
The brain parenchyma refers to the functional tissue in the brain that is made up of the two types of brain cell, neurons and glial cells. It is also known to contain collagen proteins. Damage or trauma to the brain parenchyma often results in a loss of cognitive ability or even death. Bleeding into the parenchyma is known as intraparenchymal hemorrhage.
Lungs
Lung parenchyma is the substance of the lung that is involved with gas exchange and includes the pulmonary alveoli.
Liver
The liver parenchyma is the functional tissue of the organ, consisting of the hepatocytes, which make up around 80% of the liver's volume. The other main type of liver cells are non-parenchymal; these constitute 40% of the total number of liver cells but only 6.5% of the liver's volume.
Kidneys
The renal parenchyma is divided into two major structures: the outer renal cortex and the inner renal medulla. Grossly, these structures take the shape of 7 to 18 cone-shaped renal lobes, each containing renal cortex surrounding a portion of medulla called a renal pyramid.
Tumors
The tumor parenchyma is one of the two distinct compartments in a solid tumour and is made up of the neoplastic cells. The other compartment is the stroma, which is induced by the neoplastic cells and is needed for nutritional support and waste removal. In many types of tumour, clusters of parenchymal cells are separated by a basal lamina that can sometimes be incomplete.
Flatworms
Parenchyma is the tissue made up of cells and intercellular spaces that fills the interior of the body of a flatworm, which is an acoelomate. This is a spongy tissue also known as a mesenchymal tissue, in which several types of cells are lodged in their extracellular matrices. The parenchymal cells include myocytes, and many types of specialised cells. The cells are often attached to each other and also to their nearby epithelial cells mainly by gap junctions and hemidesmosomes. There is much variation in the types of cell in the parenchyma according to the species and anatomical regions. Its possible functions may include skeletal support, nutrient storage, movement, and many others.
References
External links
tissues (biology)
Joint
A joint or articulation (or articular surface) is the connection made between bones, ossicles, or other hard structures in the body which link an animal's skeletal system into a functional whole. They are constructed to allow for different degrees and types of movement. Some joints, such as the knee, elbow, and shoulder, are self-lubricating, almost frictionless, and are able to withstand compression and maintain heavy loads while still executing smooth and precise movements. Other joints such as sutures between the bones of the skull permit very little movement (only during birth) in order to protect the brain and the sense organs. The connection between a tooth and the jawbone is also called a joint, and is described as a fibrous joint known as a gomphosis. Joints are classified both structurally and functionally.
Classification
The number of joints depends on whether sesamoids are included, the age of the human, and the definition of joints. However, the number of sesamoids is the same in most people, with variations being rare.
Joints are mainly classified structurally and functionally. Structural classification is determined by how the bones connect to each other, while functional classification is determined by the degree of movement between the articulating bones. In practice, there is significant overlap between the two types of classifications.
Clinical, numerical classification
monoarticular – concerning one joint
oligoarticular or pauciarticular – concerning 2–4 joints
polyarticular – concerning 5 or more joints
Structural classification (binding tissue)
Structural classification names and divides joints according to the type of binding tissue that connects the bones to each other. There are four structural classifications of joints:
fibrous joint – joined by dense regular connective tissue that is rich in collagen fibers
cartilaginous joint – joined by cartilage. There are two types: primary cartilaginous joints composed of hyaline cartilage, and secondary cartilaginous joints composed of hyaline cartilage covering the articular surfaces of the involved bones with fibrocartilage connecting them.
synovial joint – not directly joined – the bones have a synovial cavity and are united by the dense irregular connective tissue that forms the articular capsule that is normally associated with accessory ligaments.
facet joint – a joint between the articular processes of two vertebrae.
Functional classification (movement)
Joints can also be classified functionally according to the type and degree of movement they allow. Joint movements are described with reference to the basic anatomical planes.
synarthrosis – permits little or no mobility. Most synarthrosis joints are fibrous joints, such as skull sutures. This lack of mobility is important, because the skull bones serve to protect the brain.
amphiarthrosis – permits slight mobility. Most amphiarthrosis joints are cartilaginous joints. An example is the intervertebral disc. Individual intervertebral discs allow for small movements between adjacent vertebrae, but when added together, the vertebral column provides the flexibility that allows the body to twist, or bend to the front, back, or side.
synovial joint (also known as a diarthrosis) – freely movable. Synovial joints can in turn be classified into six groups according to the type of movement they allow: plane joint, ball and socket joint, hinge joint, pivot joint, condyloid joint and saddle joint.
Joints can also be classified, according to the number of axes of movement they allow, into nonaxial (gliding, as between the proximal ends of the ulna and radius), monoaxial (uniaxial), biaxial and multiaxial. Another classification is according to the degrees of freedom allowed, and distinguished between joints with one, two or three degrees of freedom. A further classification is according to the number and shapes of the articular surfaces: flat, concave and convex surfaces. Types of articular surfaces include trochlear surfaces.
Biomechanical classification
Joints can also be classified based on their anatomy or on their biomechanical properties. According to the anatomic classification, joints are subdivided into simple and compound, depending on the number of bones involved, and into complex and combination joints:
Simple joint: two articulation surfaces (e.g. shoulder joint, hip joint)
Compound joint: three or more articulation surfaces (e.g. radiocarpal joint)
Complex joint: two or more articulation surfaces and an articular disc or meniscus (e.g. knee joint)
Anatomical
The joints may be classified anatomically into the following groups:
Joints of hand
Elbow joints
Wrist joints
Axillary joints
Sternoclavicular joints
Vertebral articulations
Temporomandibular joints
Sacroiliac joints
Hip joints
Knee joints
Articulations of foot
Unmyelinated nerve fibers are abundant in joint capsules and ligaments, as well as in the outer part of intra-articular menisci. These nerve fibers are responsible for pain perception when a joint is strained.
Clinical significance
Damaging the cartilage of joints (articular cartilage) or the bones and muscles that stabilize the joints can lead to joint dislocations and osteoarthritis. Swimming is a great way to exercise the joints with minimal damage.
A joint disorder is termed arthropathy, and when involving inflammation of one or more joints the disorder is called arthritis. Most joint disorders involve arthritis, but joint damage by external physical trauma is typically not termed arthritis.
Arthropathies are called polyarticular (multiarticular) when involving many joints and monoarticular when involving only a single joint.
Arthritis is the leading cause of disability in people over the age of 55. There are many different forms of arthritis, each of which has a different cause. The most common form of arthritis, osteoarthritis (also known as degenerative joint disease), occurs following trauma to the joint, following an infection of the joint or simply as a result of aging and the deterioration of articular cartilage. Furthermore, there is emerging evidence that abnormal anatomy may contribute to early development of osteoarthritis. Other forms of arthritis are rheumatoid arthritis and psoriatic arthritis, which are autoimmune diseases in which the body is attacking itself. Septic arthritis is caused by joint infection. Gouty arthritis is caused by deposition of uric acid crystals in the joint that results in subsequent inflammation. Additionally, there is a less common form of gout that is caused by the formation of rhomboidal-shaped crystals of calcium pyrophosphate. This form of gout is known as pseudogout.
Temporomandibular joint syndrome (TMJ) involves the jaw joints and can cause facial pain, clicking sounds in the jaw, or limitation of jaw movement, to name a few symptoms. It is caused by psychological tension and misalignment of the jaw (malocclusion), and may affect as many as 75 million Americans.
History
Etymology
The English word joint is a past participle of the verb join, and can be read as joined. Joint is derived from Latin iunctus, past participle of the Latin verb iungere, join, unite, connect, attach.
The English term articulation is derived from Latin articulatio.
Humans have also developed lighter, more fragile joint bones over time due to the decrease in physical activity compared to thousands of years ago.
See also
Arthrology
Cracking joints
Kinesiology
Ligament
Development of joints
References
External links
Synovial joints Illustrations and Classification
Skeletal system
Human behavior
Human behavior is the potential and expressed capacity (mentally, physically, and socially) of human individuals or groups to respond to internal and external stimuli throughout their life. Behavior is driven by genetic and environmental factors that affect an individual. Behavior is also driven, in part, by thoughts and feelings, which provide insight into individual psyche, revealing such things as attitudes and values. Human behavior is shaped by psychological traits, as personality types vary from person to person, producing different actions and behavior.
Social behavior accounts for actions directed at others. It is concerned with the considerable influence of social interaction and culture, as well as ethics, interpersonal relationships, politics, and conflict. Some behaviors are common while others are unusual. The acceptability of behavior depends upon social norms and is regulated by various means of social control. Social norms also condition behavior, whereby humans are pressured into following certain rules and displaying certain behaviors that are deemed acceptable or unacceptable depending on the given society or culture.
Cognitive behavior accounts for actions of obtaining and using knowledge. It is concerned with how information is learned and passed on, as well as creative application of knowledge and personal beliefs such as religion. Physiological behavior accounts for actions to maintain the body. It is concerned with basic bodily functions as well as measures taken to maintain health. Economic behavior accounts for actions regarding the development, organization, and use of materials as well as other forms of work. Ecological behavior accounts for actions involving the ecosystem. It is concerned with how humans interact with other organisms and how the environment shapes human behavior.
Study
Human behavior is studied by the social sciences, which include psychology, sociology, ethology, and their various branches and schools of thought. There are many different facets of human behavior, and no one definition or field of study encompasses it in its entirety. The nature versus nurture debate is one of the fundamental divisions in the study of human behavior; this debate considers whether behavior is predominantly affected by genetic or environmental factors. The study of human behavior sometimes receives public attention due to its intersection with cultural issues, including crime, sexuality, and social inequality.
Some natural sciences also place emphasis on human behavior. Neurology and evolutionary biology study how behavior is controlled by the nervous system and how the human mind evolved, respectively. In other fields, human behavior may be a secondary subject of study when considering how it affects another subject. Outside of formal scientific inquiry, human behavior and the human condition are also a major focus of philosophy and literature. Philosophy of mind considers aspects such as free will, the mind–body problem, and the malleability of human behavior.
Human behavior may be evaluated through questionnaires, interviews, and experimental methods. Animal testing may also be used to test behaviors that can then be compared to human behavior. Twin studies are a common method by which human behavior is studied. Twins with identical genomes can be compared to isolate genetic and environmental factors in behavior. Lifestyle, susceptibility to disease, and unhealthy behaviors have been shown through twin studies to have both genetic and environmental components.
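The arithmetic behind such comparisons is often summarized with Falconer's formula, which contrasts how strongly a trait correlates within identical (monozygotic) twin pairs versus fraternal (dizygotic) pairs. The short sketch below illustrates that calculation in Python; the correlation values are hypothetical and are not drawn from any particular study.

    # Minimal sketch of Falconer's formula for partitioning twin-study variance.
    # r_mz and r_dz are hypothetical trait correlations for identical
    # (monozygotic) and fraternal (dizygotic) twin pairs.

    def falconer(r_mz: float, r_dz: float) -> dict:
        """Split trait variance into heritability, shared environment, and unique environment."""
        a2 = 2 * (r_mz - r_dz)   # additive genetic component (heritability)
        c2 = 2 * r_dz - r_mz     # shared (family) environment
        e2 = 1 - r_mz            # non-shared environment plus measurement error
        return {"heritability": a2, "shared_env": c2, "unique_env": e2}

    if __name__ == "__main__":
        estimates = falconer(r_mz=0.70, r_dz=0.45)   # illustrative correlations only
        for component, value in estimates.items():
            print(f"{component}: {value:.2f}")

With these illustrative numbers, about half of the trait's variance would be attributed to genetic factors, a fifth to the shared family environment, and the remainder to non-shared experiences and measurement error.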
Social behavior
Human social behavior is the behavior that considers other humans, including communication and cooperation. It is highly complex and structured, based on advanced theory of mind that allows humans to attribute thoughts and actions to one another. Through social behavior, humans have developed society and culture distinct from other animals. Human social behavior is governed by a combination of biological factors that affect all humans and cultural factors that change depending on upbringing and societal norms. Human communication is based heavily on language, typically through speech or writing. Nonverbal communication and paralanguage can modify the meaning of communications by demonstrating ideas and intent through physical and vocal behaviors.
Social norms
Human behavior in a society is governed by social norms. Social norms are unwritten expectations that members of society have for one another. These norms are ingrained in the particular culture that they emerge from, and humans often follow them unconsciously or without deliberation. These norms affect every aspect of life in human society, including decorum, social responsibility, property rights, contractual agreement, morality, and justice. Many norms facilitate coordination between members of society and prove mutually beneficial, such as norms regarding communication and agreements. Norms are enforced by social pressure, and individuals that violate social norms risk social exclusion.
Systems of ethics are used to guide human behavior to determine what is moral. Humans are distinct from other animals in the use of ethical systems to determine behavior. Ethical behavior is human behavior that takes into consideration how actions will affect others and whether behaviors will be optimal for others. What constitutes ethical behavior is determined by the individual value judgments of the person and the collective social norms regarding right and wrong. Value judgments are intrinsic to people of all cultures, though the specific systems used to evaluate them may vary. These systems may be derived from divine law, natural law, civil authority, reason, or a combination of these and other principles. Altruism is an associated behavior in which humans consider the welfare of others equally or preferentially to their own. While other animals engage in biological altruism, ethical altruism is unique to humans.
Deviance is behavior that violates social norms. As social norms vary between individuals and cultures, the nature and severity of a deviant act is subjective. What is considered deviant by a society may also change over time as new social norms are developed. Deviance is punished by other individuals through social stigma, censure, or violence. Many deviant actions are recognized as crimes and punished through a system of criminal justice. Deviant actions may be punished to prevent harm to others, to maintain a particular worldview and way of life, or to enforce principles of morality and decency. Cultures also attribute positive or negative value to certain physical traits, causing individuals that do not have desirable traits to be seen as deviant.
Interpersonal relationships
Interpersonal relationships can be evaluated by the specific choices and emotions between two individuals, or they can be evaluated by the broader societal context of how such a relationship is expected to function. Relationships are developed through communication, which creates intimacy, expresses emotions, and develops identity. An individual's interpersonal relationships form a social group in which individuals all communicate and socialize with one another, and these social groups are connected by additional relationships. Human social behavior is affected not only by individual relationships, but also by how behaviors in one relationship may affect others. Individuals that actively seek out social interactions are extraverts, and those that do not are introverts.
Romantic love is a significant interpersonal attraction toward another. Its nature varies by culture, but it is often contingent on gender, occurring in conjunction with sexual attraction and being either heterosexual or homosexual. It takes different forms and is associated with many individual emotions. Many cultures place a higher emphasis on romantic love than other forms of interpersonal attraction. Marriage is a union between two people, though whether it is associated with romantic love is dependent on the culture. Individuals that are closely related by consanguinity form a family. There are many variations on family structures that may include parents and children as well as stepchildren or extended relatives. Family units with children emphasize parenting, in which parents engage in a high level of parental investment to protect and instruct children as they develop over a period of time longer than that of most other mammals.
Politics and conflict
When humans make decisions as a group, they engage in politics. Humans have evolved to engage in behaviors of self-interest, but this also includes behaviors that facilitate cooperation rather than conflict in collective settings. Individuals will often form in-group and out-group perceptions, through which individuals cooperate with the in-group and compete with the out-group. This causes behaviors such as unconsciously conforming, passively obeying authority, taking pleasure in the misfortune of opponents, initiating hostility toward out-group members, artificially creating out-groups when none exist, and punishing those that do not comply with the standards of the in-group. These behaviors lead to the creation of political systems that enforce in-group standards and norms.
When humans oppose one another, it creates conflict. It may occur when the involved parties have a disagreement of opinion, when one party obstructs the goals of another, or when parties experience negative emotions such as anger toward one another. Conflicts purely of disagreement are often resolved through communication or negotiation, but incorporation of emotional or obstructive aspects can escalate conflict. Interpersonal conflict is that between specific individuals or groups of individuals. Social conflict is that between different social groups or demographics. This form of conflict often takes place when groups in society are marginalized, do not have the resources they desire, wish to instigate social change, or wish to resist social change. Significant social conflict can cause civil disorder. International conflict is that between nations or governments. It may be solved through diplomacy or war.
Cognitive behavior
Human cognition is distinct from that of other animals. This is derived from biological traits of human cognition, but also from shared knowledge and development passed down culturally. Humans are able to learn from one another due to advanced theory of mind that allows knowledge to be obtained through education. The use of language allows humans to directly pass knowledge to one another. The human brain has neuroplasticity, allowing it to modify its features in response to new experiences. This facilitates learning in humans and leads to behaviors of practice, allowing the development of new skills in individual humans. Behavior carried out over time can be ingrained as a habit, where humans will continue to regularly engage in the behavior without consciously deciding to do so.
Humans engage in reason to make inferences with a limited amount of information. Most human reasoning is done automatically without conscious effort on the part of the individual. Reasoning is carried out by making generalizations from past experiences and applying them to new circumstances. Learned knowledge is acquired to make more accurate inferences about the subject. Deductive reasoning infers conclusions that are true based on logical premises, while inductive reasoning infers what conclusions are likely to be true based on context.
Emotion is a cognitive experience innate to humans. Basic emotions such as joy, distress, anger, fear, surprise, and disgust are common to all cultures, though social norms regarding the expression of emotion may vary. Other emotions come from higher cognition, such as love, guilt, shame, embarrassment, pride, envy, and jealousy. These emotions develop over time rather than instantly and are more strongly influenced by cultural factors. Emotions are influenced by sensory information, such as color and music, and moods of happiness and sadness. Humans typically maintain a standard level of happiness or sadness determined by health and social relationships, though positive and negative events have short-term influences on mood. Humans often seek to improve the moods of one another through consolation, entertainment, and venting. Humans can also self-regulate mood through exercise and meditation.
Creativity is the use of previous ideas or resources to produce something original. It allows for innovation, adaptation to change, learning new information, and novel problem solving. Expression of creativity also supports quality of life. Creativity includes personal creativity, in which a person presents new ideas authentically, but it can also be expanded to social creativity, in which a community or society produces and recognizes ideas collectively. Creativity is applied in typical human life to solve problems as they occur. It also leads humans to carry out art and science. Individuals engaging in advanced creative work typically have specialized knowledge in that field, and humans draw on this knowledge to develop novel ideas. In art, creativity is used to develop new artistic works, such as visual art or music. In science, those with knowledge in a particular scientific field can use trial and error to develop theories that more accurately explain phenomena.
Religious behavior is a set of traditions that are followed based on the teachings of a religious belief system. The nature of religious behavior varies depending on the specific religious traditions. Most religious traditions involve variations of telling myths, practicing rituals, making certain things taboo, adopting symbolism, determining morality, experiencing altered states of consciousness, and believing in supernatural beings. Religious behavior is often demanding and has high time, energy, and material costs, and it conflicts with rational choice models of human behavior, though it does provide community-related benefits. Anthropologists offer competing theories as to why humans adopted religious behavior. Religious behavior is heavily influenced by social factors, and group involvement is significant in the development of an individual's religious behavior. Social structures such as religious organizations or family units allow the sharing and coordination of religious behavior. These social connections reinforce the cognitive behaviors associated with religion, encouraging orthodoxy and commitment. According to a Pew Research Center report, 54% of adults around the world state that religion is very important in their lives as of 2018.
Physiological behavior
Humans undergo many behaviors common to animals to support the processes of the human body. Humans eat food to obtain nutrition. These foods may be chosen for their nutritional value, but they may also be eaten for pleasure. Eating often follows a food preparation process to make it more enjoyable. Humans dispose of waste through urination and defecation. Excrement is often treated as taboo, particularly in developed and urban communities where sanitation is more widely available and excrement has no value as fertilizer. Humans also regularly engage in sleep, based on homeostatic and circadian factors. The circadian rhythm causes humans to require sleep at a regular pattern and is typically calibrated to the day-night cycle and sleep-wake habits. Homeostasis is also maintained, causing longer sleep after periods of sleep deprivation. The human sleep cycle takes place over 90 minutes, and it repeats 3–5 times during normal sleep.
There are also unique behaviors that humans undergo to maintain physical health. Humans have developed medicine to prevent and treat illnesses. In industrialized nations, eating habits that favor better nutrition, hygienic behaviors that promote sanitation, medical treatment to eradicate diseases, and the use of birth control significantly improve human health. Humans can also engage in exercise beyond that required for survival to maintain health. Humans engage in hygiene to limit exposure to dirt and pathogens. Some of these behaviors are adaptive while others are learned. Basic behaviors of disgust evolved as an adaptation to prevent contact with sources of pathogens, resulting in a biological aversion to feces, body fluids, rotten food, and animals that are commonly disease vectors. Personal grooming, disposal of human corpses, use of sewerage, and use of cleaning agents are hygienic behaviors common to most human societies.
Humans reproduce sexually, engaging in sexual intercourse for both reproduction and sexual pleasure. Human reproduction is closely associated with human sexuality and an instinctive desire to procreate, though humans are unique in that they intentionally control the number of offspring that they produce. Humans engage in a large variety of reproductive behaviors relative to other animals, with various mating structures that include forms of monogamy, polygyny, and polyandry. How humans engage in mating behavior is heavily influenced by cultural norms and customs. Unlike most mammals, human women ovulate spontaneously rather than seasonally, with a menstrual cycle that typically lasts 25–35 days.
Humans are bipedal and move by walking. Human walking corresponds to the bipedal gait cycle, which involves alternating heel contact and toe off with the ground and slight elevation and rotation of the pelvis. Balance while walking is learned during the first 7–9 years of life, and individual humans develop unique gaits while learning to displace weight, adjust center of mass, and coordinate neural control with movement. Humans can achieve higher speed by running. The endurance running hypothesis proposes that humans can outpace most other animals over long distances through running, though human running causes a higher rate of energy exertion. The human body self-regulates through perspiration during periods of exertion, allowing humans more endurance than other animals. The human hand is prehensile and capable of grasping objects and applying force with control over the hand's dexterity and grip strength. This allows the use of complex tools by humans.
Economic behavior
Humans engage in predictable behaviors when considering economic decisions, and these behaviors may or may not be rational. Humans make basic decisions by weighing costs against benefits and seeking an acceptable rate of return at minimal risk. Human economic decision making is often reference dependent, in which options are weighed in reference to the status quo rather than in terms of absolute gains and losses. Humans are also loss averse, fearing losses more than they value equivalent gains. Advanced economic behavior developed in humans after the Neolithic Revolution and the development of agriculture. These developments led to a sustainable supply of resources that allowed specialization in more complex societies.
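Reference dependence and loss aversion are often formalized with the value function from prospect theory, in which outcomes are coded as gains or losses relative to a reference point and losses loom larger than equal-sized gains. The sketch below is a minimal illustration using the parameter estimates commonly cited from Tversky and Kahneman's 1992 work; it is a textbook formalization, not a description of any specific decision discussed here.

    def prospect_value(outcome: float, reference: float = 0.0,
                       alpha: float = 0.88, beta: float = 0.88,
                       loss_aversion: float = 2.25) -> float:
        """Prospect-theory value of an outcome relative to a reference point.

        Gains are valued as x**alpha and losses as -loss_aversion * (-x)**beta.
        Parameter values are the commonly cited Tversky & Kahneman (1992)
        estimates, used here purely for illustration.
        """
        x = outcome - reference              # code the outcome as a gain or a loss
        if x >= 0:
            return x ** alpha
        return -loss_aversion * ((-x) ** beta)

    if __name__ == "__main__":
        print(prospect_value(+100.0))        # roughly 57.5: the felt value of a 100-unit gain
        print(prospect_value(-100.0))        # roughly -129.5: the same-sized loss hurts about twice as much

Run as-is, a gain of 100 is valued at roughly 57.5 units while a loss of 100 is valued at roughly -129.5, reproducing the asymmetry described above.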
Work
The nature of human work is defined by the complexity of society. The simplest societies are tribes that work primarily for sustenance as hunter-gatherers. In this sense, work is not a distinct activity but a constant that makes up all parts of life, as all members of the society must work consistently to stay alive.
More advanced societies developed after the Neolithic Revolution, emphasizing work in agricultural and pastoral settings. In these societies, production is increased, ending the need for constant work and allowing some individuals to specialize and work in areas outside of food-production. This also created non-laborious work, as increasing occupational complexity required some individuals to specialize in technical knowledge and administration. Laborious work in these societies has variously been carried out by slaves, serfs, peasants, and guild craftsmen.
The nature of work changed significantly during the Industrial Revolution in which the factory system was developed for use by industrializing nations. In addition to further increasing general quality of life, this development changed the dynamic of work. Under the factory system, workers increasingly collaborate with others, employers serve as authority figures during work hours, and forced labor is largely eradicated. Further changes occur in post-industrial societies where technological advance makes industries obsolete, replacing them with mass production and service industries.
Humans approach work differently based on both physical and personal attributes, and some work with more effectiveness and commitment than others. Some find work to contribute to personal fulfillment, while others work only out of necessity. Work can also serve as an identity, with individuals identifying themselves based on their occupation. Work motivation is complex, both contributing to and subtracting from various human needs. The primary motivation for work is for material gain, which takes the form of money in modern societies. It may also serve to create self-esteem and personal worth, provide activity, gain respect, and express creativity. Modern work is typically categorized as laborious or blue-collar work and non-laborious or white-collar work.
Leisure
Leisure is activity or lack of activity that takes place outside of work. It provides relaxation, entertainment, and improved quality of life for individuals. Engaging in leisure can be beneficial for physical and mental health. It may be used to seek temporary relief from psychological stress, to produce positive emotions, or to facilitate social interaction. However, leisure can also facilitate health risks and negative emotions caused by boredom, substance abuse, or high-risk behavior.
Leisure may be defined as serious or casual. Serious leisure behaviors involve non-professional pursuit of arts and sciences, the development of hobbies, or career volunteering in an area of expertise. Casual leisure behaviors provide short-term gratification, but they do not provide long-term gratification or personal identity. These include play, relaxation, casual social interaction, volunteering, passive entertainment, active entertainment, and sensory stimulation. Passive entertainment is typically derived from mass media, which may include written works or digital media. Active entertainment involves games in which individuals participate. Sensory stimulation is immediate gratification from behaviors such as eating or sexual intercourse.
Consumption
Humans operate as consumers that obtain and use goods. All production is ultimately designed for consumption, and consumers adapt their behavior based on the availability of production. Mass consumption began during the Industrial Revolution, caused by the development of new technologies that allowed for increased production. Many factors affect a consumer's decision to purchase goods through trade. They may consider the nature of the product, its associated cost, the convenience of purchase, and the nature of advertising around the product. Cultural factors may influence this decision, as different cultures value different things, and subcultures may have different priorities when it comes to purchasing decisions. Social class, including wealth, education, and occupation may affect one's purchasing behavior. A consumer's interpersonal relationships and reference groups may also influence purchasing behavior.
Ecological behavior
Like all living things, humans live in ecosystems and interact with other organisms. Human behavior is affected by the environment in which a human lives, and environments are affected by human habitation. Humans have also developed man-made ecosystems such as urban areas and agricultural land. Geography and landscape ecology determine how humans are distributed within an ecosystem, both naturally and through planned urban morphology.
Humans exercise control over the animals that live within their environment. Domesticated animals are trained and cared for by humans. Humans can develop social and emotional bonds with animals in their care. Pets are kept for companionship within human homes, including dogs and cats that have been bred for domestication over many centuries. Livestock animals, such as cattle, sheep, goats, and poultry, are kept on agricultural land to produce animal products. Domesticated animals are also kept in laboratories for animal testing. Non-domesticated animals are sometimes kept in nature reserves and zoos for tourism and conservation.
Causes and factors
Human behavior is influenced by biological and cultural elements. The structure and agency debate considers whether human behavior is predominantly led by individual human impulses or by external structural forces. Behavioral genetics considers how human behavior is affected by inherited traits. Though genes do not guarantee certain behaviors, certain traits can be inherited that make individuals more likely to engage in certain behaviors or express certain personalities. An individual's environment can also affect behavior, often in conjunction with genetic factors. An individual's personality and attitudes affect how behaviors are expressed, formed in conjunction by genetic and environmental factors.
Age
Infants
Infants are limited in their ability to interpret their surroundings shortly after birth. Object permanence and understanding of motion typically develop within the first six months of an infant's life, though the specific cognitive processes are not understood. The ability to mentally categorize different concepts and objects that they perceive also develops within the first year. Infants are quickly able to discern their body from their surroundings and often take interest in their own limbs or actions they cause by two months of age.
Infants practice imitation of other individuals to engage socially and learn new behaviors. In young infants, this involves imitating facial expressions, and imitation of tool use takes place within the first year. Communication develops over the first year, and infants begin using gestures to communicate intention around nine to ten months of age. Verbal communication develops more gradually, taking form during the second year of age.
Children
Children develop fine motor skills shortly after infancy, in the range of three to six years of age, allowing them to use their hands with eye–hand coordination and perform basic activities of self-sufficiency. Children begin expressing more complex emotions in the three- to six-year-old range, including humor, empathy, and altruism, as well as engaging in creativity and inquiry. Aggressive behaviors also become varied at this age as children engage in increased physical aggression before learning to favor diplomacy over aggression. Children at this age can express themselves using language with basic grammar.
As children grow older, they develop emotional intelligence. Young children engage in basic social behaviors with peers, typically forming friendships centered on play with individuals of the same age and gender. Behaviors of young children are centered around play, which allows them to practice physical, cognitive, and social behaviors. Basic self-concept first develops as children grow, particularly centered around traits such as gender and ethnicity, and behavior is heavily affected by peers for the first time.
Adolescents
Adolescents undergo changes in behavior caused by puberty and the associated changes in hormone production. Production of testosterone increases sensation seeking and sensitivity to rewards in adolescents as well as aggression and risk-taking in adolescent boys. Production of estradiol causes similar risk-taking behavior among adolescent girls. The new hormones cause changes in emotional processing that allow for close friendships, stronger motivations and intentions, and adolescent sexuality.
Adolescents undergo social changes on a large scale, developing a full self-concept and making autonomous decisions independently of adults. They typically become more aware of social norms and social cues than children, causing an increase in self-consciousness and adolescent egocentrism that guides behavior in social settings throughout adolescence.
Culture and environment
Human brains, as with those of all mammals, are neuroplastic. This means that the structure of the brain changes over time as neural pathways are altered in response to the environment. Many behaviors are learned through interaction with others during early development of the brain. Human behavior is distinct from the behavior of other animals in that it is heavily influenced by culture and language. Social learning allows humans to develop new behaviors by following the example of others. Culture is also the guiding influence that defines social norms.
Physiology
Neurotransmitters, hormones, and metabolism are all recognized as biological factors in human behavior.
Physical disabilities can prevent individuals from engaging in typical human behavior or necessitate alternative behaviors. Accommodations and accessibility are often made available for individuals with physical disabilities in developed nations, including health care, assistive technology, and vocational services. Severe disabilities are associated with increased leisure time but also with a lower satisfaction in the quality of leisure time. Productivity and health both commonly undergo long term decline following the onset of a severe disability. Mental disabilities are those that directly affect cognitive and social behavior. Common mental disorders include mood disorders, anxiety disorders, personality disorders, and substance dependence.
See also
Behavioral modernity
Behaviorism
Cultural ecology
Human behavioral ecology
References
Bibliography
Further reading
Ardrey, Robert. 1970. The Social Contract: A Personal Inquiry into the Evolutionary Sources of Order and Disorder. Atheneum. .
Tissot, S. A. D. (1768), An essay on diseases incidental to literary and sedentary persons.
External links
Culture
Main topic articles
Dhātu (ayurveda)
Dhātus (dhä·tōōs), n.pl. (from Sanskrit धातु dhātu – layer, stratum, constituent part, ingredient, element, primitive matter) are, in Ayurveda, the seven fundamental principles (elements) that support the basic structure (and functioning) of the body.
They consist of:
Rasa dhatu (lymph) – the first dhatu, the substratum formed just after the digestion of food. Its main function is nourishment.
Rakta dhatu (blood) – the second dhatu, formed after digestion from the preceding dhatu, Rasa dhatu.
Mamsa dhatu (muscles) – the third dhatu, formed from the preceding dhatu, Rakta dhatu. Its main function is to cover the bones.
Medus dhatu (fat)
Asthi dhatu (bone)
Majja dhatu (marrow (bone and spinal))
Shukra dhatu (semen)
Traditional texts often refer to these as the Seven Dhātus (Saptadhātus). Ojas, meaning vigour or vitality, is known as the eighth Dhātu, or Mahādhātu (superior, or great dhātu).
See also
Dhātu (disambiguation) - a Buddhist technical term or a stupa, Pāli thūpa.
References
External links
The Dhatus
Ayurveda
Hindu philosophical concepts
Resource
A resource is any material available in our environment that is technologically accessible, economically feasible, and culturally sustainable and that helps us to satisfy our needs and wants. Resources can broadly be classified according to their availability as renewable or non-renewable; they can also be classified on the basis of ownership as individual, community, national, and international resources. An item may become a resource with technology. The benefits of resource utilization may include increased wealth, proper functioning of a system, or enhanced well-being. From a human perspective, a resource is anything obtained from the environment to satisfy human needs and wants.
The concept of resources has been developed across many established areas of work, in economics, biology and ecology, computer science, management, and human resources for example - linked to the concepts of competition, sustainability, conservation, and stewardship. In application within human society, commercial or non-commercial factors require resource allocation through resource management.
The concept of resources can also be tied to the direction of leadership over resources; leaders may be responsible for managing, supporting, or directing human resources issues and the actions that follow from them. Examples include professional groups, and the innovative leaders and technical experts found in archiving, academic management, association management, business management, healthcare management, military management, public administration, spiritual leadership, and social networking administration.
Definition of size asymmetry
Resource competition can vary from completely symmetric (all individuals receive the same amount of resources, irrespective of their size, known also as scramble competition) to perfectly size symmetric (all individuals exploit the same amount of resource per unit biomass) to absolutely size asymmetric (the largest individuals exploit all the available resource).
Economic versus biological
There are three fundamental differences between economic versus ecological views: 1) the economic resource definition is human-centered (anthropocentric) and the biological or ecological resource definition is nature-centered (biocentric or ecocentric); 2) the economic view includes desire along with necessity, whereas the biological view is about basic biological needs; and 3) economic systems are based on markets of currency exchanged for goods and services, whereas biological systems are based on natural processes of growth, maintenance, and reproduction.
Computer resources
A computer resource is any physical or virtual component of limited availability within a computer or information management system. Computer resources include means for input, processing, output, communication, and storage.
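As a concrete illustration, the sketch below queries a few such finite components (processing, storage, and open-file limits) using only Python's standard library. It is a generic example; the resource module it uses is available only on Unix-like systems, and the path checked for disk space is an arbitrary choice.

    import os
    import shutil

    def report_system_resources(path: str = "/") -> None:
        """Print a few finite computing resources visible to this process."""
        print(f"logical CPUs  : {os.cpu_count()}")                     # processing
        disk = shutil.disk_usage(path)                                 # storage
        print(f"disk at {path!r} : {disk.free // 2**30} GiB free of {disk.total // 2**30} GiB")
        try:
            import resource                                            # Unix-only standard module
            soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
            print(f"open files    : soft limit {soft}, hard limit {hard}")
        except ImportError:
            print("open files    : 'resource' module not available on this platform")

    if __name__ == "__main__":
        report_system_resources()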
Natural
Natural resources are derived from the environment. Many natural resources are essential for human survival, while others are used to satisfy human desire. Conservation is the management of natural resources with the goal of sustainability. Natural resources may be further classified in different ways.
Resources can be categorized based on origin:
Abiotic resources comprise non-living things (e.g., land, water, air, and minerals such as gold, iron, copper, silver).
Biotic resources are obtained from the biosphere. Forests and their products, animals, birds and their products, fish and other marine organisms are important examples. Fossil fuels such as coal and petroleum are sometimes included in this category because they formed from fossilized organic matter over long periods.
Natural resources are also categorized based on the stage of development:
Potential resources are known to exist and may be used in the future. For example, petroleum may exist in many parts of India and Kuwait that have sedimentary rocks, but until the time it is actually drilled out and put into use, it remains a potential resource.
Actual resources are those that have been surveyed, whose quantity and quality have been determined, and which are being used in the present. For example, petroleum and natural gas are actively being obtained from the Mumbai High Fields. The development of an actual resource, such as wood processing, depends on the technology available and the cost involved. The part of an actual resource that can be developed profitably with available technology is known as a reserve resource, while the part that cannot be developed profitably because of a lack of technology is known as a stock resource.
Natural resources can be categorized based on renewability:
Non-renewable resources are formed over very long geological periods. Minerals and fossil fuels are included in this category. Since their rate of formation is extremely slow, they cannot be replenished once they are depleted. Metals can be recycled and reused, whereas petroleum and gas cannot, but all are still considered non-renewable resources.
Renewable resources, such as forests and fisheries, can be replenished or reproduced relatively quickly. The highest rate at which a resource can be used sustainably is the sustainable yield. Some resources, such as sunlight, air, and wind, are called perpetual resources because they are available continuously, though at a limited rate. Human consumption does not affect their quantity. Many renewable resources can be depleted by human use, but may also be replenished, thus maintaining a flow. Some of these, such as crops, take a short time for renewal; others, such as water, take a comparatively longer time, while others, such as forests, need even longer periods.
Depending upon the speed and quantity of consumption, overconsumption can lead to depletion or the total and everlasting destruction of a resource. Important examples are agricultural areas, fish and other animals, forests, healthy water and soil, cultivated and natural landscapes. Such conditionally renewable resources are sometimes classified as a third kind of resource or as a subtype of renewable resources. Conditionally renewable resources are presently subject to excess human consumption and the only sustainable long-term use of such resources is within the so-called zero ecological footprint, where humans use less than the Earth's ecological capacity to regenerate.
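The notion of a sustainable yield for a conditionally renewable resource is often illustrated with the logistic growth model, in which a stock's surplus production peaks at half its carrying capacity and the maximum sustainable yield equals rK/4. The sketch below is a minimal simulation of that textbook model; the growth rate, carrying capacity, and harvest levels are purely illustrative, and real resource assessments are far more involved.

    def simulate_harvest(r: float, K: float, harvest: float, n0: float, years: int = 50) -> float:
        """Project a logistically growing stock under a constant annual harvest.

        Yearly update: N <- N + r*N*(1 - N/K) - harvest, floored at zero.
        Returns the stock size after the given number of years.
        """
        n = n0
        for _ in range(years):
            surplus = r * n * (1 - n / K)    # surplus production this year
            n = max(n + surplus - harvest, 0.0)
        return n

    if __name__ == "__main__":
        r, K = 0.4, 10_000.0                 # illustrative growth rate and carrying capacity
        msy = r * K / 4                      # maximum sustainable yield for the logistic model
        print(f"MSY = {msy:.0f} units/year")
        print("stock after harvesting at MSY    :", round(simulate_harvest(r, K, msy, n0=K / 2)))
        print("stock after harvesting at 1.5*MSY:", round(simulate_harvest(r, K, 1.5 * msy, n0=K / 2)))

With these numbers, harvesting exactly the maximum sustainable yield leaves the stock steady at half of carrying capacity, while harvesting 50% more drives it to collapse within the simulated horizon.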
Natural resources are also categorized based on distribution:
Ubiquitous resources are found everywhere (for example, air, light, and water).
Localized resources are found only in certain parts of the world (for example metal ores and geothermal power).
Actual vs. potential natural resources are distinguished as follows:
Actual resources are those resources whose location and quantity are known and we have the technology to exploit and use them.
Potential resources are those of which we have insufficient knowledge or do not have the technology to exploit them at present.
Based on ownership, resources can be classified as individual, community, national, and international.
Labour or human resources
In economics, labor or human resources refers to the human work in the production of goods and rendering of services. Human resources can be defined in terms of skills, energy, talent, abilities, or knowledge.
In a project management context, human resources are those employees responsible for undertaking the activities defined in the project plan.
Capital or infrastructure
In economics, capital goods or capital are "those durable produced goods that are in turn used as productive inputs for further production" of goods and services. A typical example is the machinery used in a factory. At the macroeconomic level, "the nation's capital stock includes buildings, equipment, software, and inventories during a given year." Capital is among the most important economic resources.
Tangible versus intangible
Whereas tangible resources such as equipment have an actual physical existence, intangible resources such as corporate images, brands, patents, and other intellectual property exist in abstraction.
Use and sustainable development
Typically, resources cannot be consumed in their original form; through resource development they must be processed into more usable commodities. The demand for resources is increasing as economies develop. There are marked differences in resource distribution and associated economic inequality between regions or countries, with developed countries using more natural resources than developing countries. Sustainable development is a pattern of resource use that aims to meet human needs while preserving the environment. Sustainable development means that we should exploit our resources carefully to meet our present requirements without compromising the ability of future generations to meet their own needs. The practice of the three R's – reduce, reuse, and recycle – must be followed to save and extend the availability of resources.
Various problems are related to the usage of resources:
Environmental degradation
Over-consumption
Resource curse
Resource depletion
Tragedy of the commons
Various benefits can result from the wise usage of resources:
Economic growth
Ethical consumerism
Prosperity
Quality of life
Sustainability
Wealth
See also
Natural resource management
Resource-based view
Waste management
References
Further reading
Elizabeth Kolbert, "Needful Things: The raw materials for the world we've built come at a cost" (largely based on Ed Conway, Material World: The Six Raw Materials That Shape Modern Civilization, Knopf, 2023; Vince Beiser, The World in a Grain; and Chip Colwell, So Much Stuff: How Humans Discovered Tools, Invented Meaning, and Made More of Everything, Chicago), The New Yorker, 30 October 2023, pp. 20–23. Kolbert mainly discusses the importance to modern civilization, and the finite sources of, six raw materials: high-purity quartz (needed to produce silicon chips), sand, iron, copper, petroleum (which Conway lumps together with another fossil fuel, natural gas), and lithium. Kolbert summarizes archeologist Colwell's review of the evolution of technology, which has ended up giving the Global North a superabundance of "stuff," at an unsustainable cost to the world's environment and reserves of raw materials.
External links
Resource economics
Ecology
Occupational hazard
An occupational hazard is a hazard experienced in the workplace. This encompasses many types of hazards, including chemical hazards, biological hazards (biohazards), psychosocial hazards, and physical hazards. In the United States, the National Institute for Occupational Safety and Health (NIOSH) conducts workplace investigations and research addressing workplace health and safety hazards, resulting in guidelines. The Occupational Safety and Health Administration (OSHA) establishes enforceable standards to prevent workplace injuries and illnesses. In the EU, a similar role is taken by EU-OSHA.
Occupational hazard, as a term, signifies both long-term and short-term risks associated with the workplace environment. It is a field of study within occupational safety and health and public health. Short-term risks may include physical injury (e.g., to the eyes, back, or head), while long-term risks may be an increased risk of developing occupational disease, such as cancer or heart disease. In general, adverse health effects caused by short-term risks are reversible, while those caused by long-term risks are irreversible.
Chemical hazards
Chemical hazards are a subtype of occupational hazards that involve a wide variety of chemicals. Exposure to chemicals in the workplace can cause acute or long-term detrimental health effects. There are many classifications of hazardous chemicals, including neurotoxins, immune agents, dermatologic agents, carcinogens, reproductive toxins, systemic toxins, asthmagens, pneumoconiotic agents, and sensitizers.
NIOSH sets recommended exposure limits (RELs) and recommends preventive measures for specific chemicals in order to reduce or eliminate negative health effects from exposure to those chemicals. Additionally, NIOSH keeps an index of chemical hazards based on their chemical name, Chemical Abstracts Service Registry Number (CAS No.), and RTECS Number. Furthermore, OSHA has set legally enforceable permissible exposure limits (PELs) for around 500 chemicals.
These exposure limits are based on evidence that exposure to a certain amount of a chemical is linked to one or more adverse health effects. For instance, heart disease is more prevalent in workers who are exposed to the chemicals found in engine exhausts. Exposure to carbon tetrachloride has been shown to cause liver and kidney damage. Exposure to benzene has been linked to leukemia.
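RELs and PELs for airborne chemicals are usually stated as an 8-hour time-weighted average (TWA) concentration, obtained by weighting each measured concentration by the time spent at it and dividing by the 8-hour shift. The sketch below shows that standard arithmetic; the monitoring results and the 50 ppm limit it compares against are assumptions used only for illustration.

    def eight_hour_twa(intervals) -> float:
        """8-hour time-weighted average exposure.

        `intervals` is a list of (concentration, hours) pairs covering the shift;
        unsampled time counts as zero exposure. TWA = sum(C_i * T_i) / 8.
        """
        return sum(conc * hours for conc, hours in intervals) / 8.0

    if __name__ == "__main__":
        shift = [(25.0, 2.0), (60.0, 3.0), (10.0, 3.0)]   # hypothetical readings in ppm and hours
        twa = eight_hour_twa(shift)
        limit = 50.0                                      # assumed 8-hour limit, for illustration only
        status = "exceeds" if twa > limit else "is within"
        print(f"8-hour TWA = {twa:.1f} ppm, which {status} the assumed {limit:.0f} ppm limit")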
Biological hazards
Biological agents, which create biological hazards, include bacteria, fungi, viruses, microorganisms, and toxins. These biological agents can cause adverse health effects in workers. Influenza is an example of a biological hazard which affects a broad population of workers.
Exposure to toxins generated by insects, spiders, snakes, scorpions, and similar organisms requires physical contact between the worker and the living organism. Skin exposure to biological agents can cause contact dermatitis (for example, from exposure to urushiol from poisonous Toxicodendron plants), Lyme disease, West Nile virus infection, and coccidioidomycosis (caused by exposure to fungi). According to NIOSH, outdoor workers at risk for these hazards "include farmers, foresters, landscapers, groundskeepers, gardeners, painters, roofers, pavers, construction workers, laborers, mechanics, and any other workers who spend time outside."
Health care professionals are at risk of exposure to blood-borne illnesses (such as HIV, hepatitis B, and hepatitis C) and particularly to emerging infectious diseases, especially when not enough resources are available to control the spread of the disease. Veterinary health workers, including veterinarians, are at risk of exposure to zoonotic disease. Those who do clinical work in the field or in a laboratory risk exposure to West Nile virus if they perform necropsies on birds affected by the virus or otherwise work with infected tissue.
Other occupations at risk of biological hazard exposure include poultry workers, who are exposed to bacteria, and tattooists and piercers, who risk exposure to blood-borne pathogens.
Psychosocial hazards
Psychosocial hazards are occupational hazards that affect someone's social life or psychological health. Psychosocial hazards in the workplace include occupational stress, which can lead to occupational burnout.
According to the Mayo Clinic, symptoms of occupational burnout include a cynical attitude towards work, severe lack of motivation at work, erratic sleeping habits, and disillusionment about one's occupation.
Physical hazards
Physical hazards are a subtype of occupational hazards that involve environmental hazards that can cause harm with or without contact. Physical hazards include ergonomic hazards, radiation, heat and cold stress, vibration hazards, and noise hazards.
Heat and cold stress
Heat and cold stress occur when the temperature differs significantly from room temperature (68–74 degrees Fahrenheit). When the body is exposed to heat stress, excess sweating can lead to a range of heat-related illnesses. Excessive cold can likewise lead to several cold-related illnesses, such as hypothermia and frostbite.
Vibration hazards
Occupational vibration hazards most often occur when a worker operates machinery that vibrates as part of its normal functioning (e.g., chainsaws, power drills). The most common type of vibration syndrome is hand-arm vibration syndrome (HAVS). Long-term exposure to hand-arm vibration can lead to damage to the blood vessels, nerves, muscles, and joints of the hand, wrist, and arm.
Noise
Each year in the US, twenty-two million workers are exposed to noise levels that could potentially harm their health. Occupational hearing loss is the most common occupational illness in the manufacturing sector. Workers in exceptionally high-noise environments, such as musicians, mine workers, and even those involved with stock car racing, are at a much higher risk of developing hearing loss than other workers (e.g., factory workers).
While permanent noise-induced hearing loss is often preventable through proper hearing protection, limiting the amount of time one is exposed to high levels of noise is still required. Because noise exposure is such a widespread issue, NIOSH has been committed to preventing future hearing loss for workers by establishing a recommended exposure limit (REL) of 85 dB(A) as an 8-hour time-weighted average (TWA). The Buy Quiet program was developed by NIOSH to encourage employers to reduce workplace noise levels by purchasing quieter models of tools and machinery. Additionally, a partnership with the National Hearing Conservation Association (NHCA) has resulted in the creation of the Safe-in-Sound Award to recognize excellence and innovation in the field of hearing loss prevention.
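As a rough, hedged illustration of how such a limit is applied, the sketch below (Python) estimates a daily noise dose and the equivalent 8-hour TWA using the 85 dB(A) criterion and the 3-dB exchange rate on which NIOSH's REL is based; the work-shift exposure levels and durations in the example are invented for illustration, and a real assessment would follow the full NIOSH criteria document.

import math

CRITERION_DB = 85.0      # NIOSH recommended exposure limit, dB(A)
CRITERION_HOURS = 8.0    # reference duration for the TWA
EXCHANGE_RATE_DB = 3.0   # NIOSH uses a 3-dB exchange rate

def allowed_hours(level_db):
    # Permissible exposure duration at a given level under a 3-dB exchange rate.
    return CRITERION_HOURS / (2 ** ((level_db - CRITERION_DB) / EXCHANGE_RATE_DB))

def daily_dose(segments):
    # segments: list of (level in dB(A), hours at that level); returns percent of allowable dose.
    return 100.0 * sum(hours / allowed_hours(level) for level, hours in segments)

def twa_from_dose(dose_percent):
    # Equivalent 8-hour TWA implied by a daily dose under the same exchange rate.
    return CRITERION_DB + (EXCHANGE_RATE_DB / math.log10(2)) * math.log10(dose_percent / 100.0)

shift = [(88.0, 4.0), (82.0, 4.0)]  # hypothetical work shift, not measured data
dose = daily_dose(shift)
print(f"Dose: {dose:.0f}% of allowable; equivalent TWA: {twa_from_dose(dose):.1f} dB(A)")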
Furthermore, OSHA's development and implementation of the Hearing Conservation Program (HCP) has required employers to more effectively protect their workers against noise levels that are too high. The HCP entitles workers not only to noise exposure monitoring and audiometric testing, but also to hearing protection devices adequate for the noise levels they are exposed to.
See also
Health hazards in semiconductor manufacturing occupations
Health and safety hazards of nanomaterials
Health and safety hazards of 3D printing
Hazards of synthetic biology
Precarious work
Occupational hazards in dentistry
Occupational hazards of fire debris cleanup
Occupational hazards of grain facilities
Occupational hazards of human nail dust
Occupational hazards of solar panel installation
References
Iatrogenesis
Iatrogenesis is the causation of a disease, a harmful complication, or other ill effect by any medical activity, including diagnosis, intervention, error, or negligence. First used in this sense in 1924, the term was introduced to sociology in 1976 by Ivan Illich, alleging that industrialized societies impair quality of life by overmedicalizing life. Iatrogenesis may thus include mental suffering via medical beliefs or a practitioner's statements. Some iatrogenic events are obvious, like amputation of the wrong limb, whereas others, like drug interactions, can evade recognition. In a 2013 estimate, about 20 million negative effects from treatment had occurred globally. In 2013, an estimated 142,000 persons died from adverse effects of medical treatment, up from an estimated 94,000 in 1990.
Iatrogenic avenues
Risk associated with medical interventions
Adverse effects of prescription drugs or vaccines
Overuse of drugs (causing, for example, antibiotic resistance in bacteria)
Prescription drug interaction
Medical errors
Incorrect prescription, perhaps due to illegible handwriting or computer typos
Faulty procedures, techniques, information, methods, or equipment
Negligence
Hospital-acquired infections
Causes and consequences
Medical error and negligence
Iatrogenic conditions need not result from medical errors, such as mistakes made in surgery, or the prescription or dispensing of the wrong therapy, such as a drug. In fact, intrinsic and sometimes adverse effects of a medical treatment are iatrogenic. For example, radiation therapy and chemotherapy (necessarily aggressive for therapeutic effect) frequently produce such iatrogenic effects as hair loss, hemolytic anemia, diabetes insipidus, vomiting, nausea, brain damage, lymphedema, infertility, etc. The loss of function resulting from the required removal of a diseased organ is iatrogenic, as in the case of diabetes consequential to the removal of all or part of the pancreas.
The incidence of iatrogenesis may be misleading in some cases. For example, a ruptured aortic aneurysm is fatal in most cases; the survival rate for treatment of a ruptured aortic aneurysm is under 25%. Patients who die during or after an operation will still be counted as iatrogenic deaths, but the procedure itself remains a better option than the high probability of death if the condition is left untreated.
Other situations may involve actual negligence or faulty procedures, such as when pharmacotherapists produce handwritten prescriptions for drugs.
Another situation may involve negligence, where patients are brushed off and not given proper care because providers hold prejudices based on factors such as sexual orientation, ethnicity, religion, or immigration status. This can cause mistrust between patients and providers, leading patients not to seek treatment and resulting in more deaths.
Adverse effects
Adverse reactions, such as allergic reactions to drugs, even when unexpected by pharmacotherapists, are also classified as iatrogenic.
The evolution of antibiotic resistance in bacteria is iatrogenic as well. Bacterial strains resistant to antibiotics have evolved in response to the overprescription of antibiotic drugs.
Certain drugs and vaccines are toxic in their own right in therapeutic doses because of their mechanism of action. Alkylating antineoplastic agents, for example, cause DNA damage, which is more harmful to cancer cells than regular cells. However, alkylation causes severe side-effects and is actually carcinogenic in its own right, with potential to lead to the development of secondary tumors. In a similar manner, arsenic-based medications like melarsoprol, used to treat trypanosomiasis, can cause arsenic poisoning.
Adverse effects can appear mechanically. The design of some surgical instruments may be decades old, hence certain adverse effects (such as tissue trauma) may never have been properly characterized.
Psychiatry
In psychiatry, iatrogenesis can occur due to misdiagnosis (including diagnosis with a false condition, as was the case of hystero-epilepsy). An example of a potentially iatrogenic circumstance is misdiagnosis of bipolar disorder for another disorder, especially in pediatric patients considered to have major depressive disorder and prescribed stimulants or antidepressants. Other conditions such as somatoform disorder are theorized to have significant sociocultural and iatrogenic components. Chronic Fatigue Syndrome/Myalgic Encephalomyelitis was historically viewed as a psychiatric/somatoform condition, and the now-outdated treatment of Graded Exercise Therapy is known to have caused iatrogenic harm. Post-traumatic stress disorder is hypothesized to be prone to iatrogenic complications based on treatment modality. Certain antipsychotics have been shown to reduce brain volumes in animals and in humans over long-term use.
Some populations may be at risk of underdiagnosis or misdiagnosis of psychiatric disorders, including those identified as having substance abuse disorders. At the other end of the spectrum, dissociative identity disorder is considered by a minority of theorists to be a wholly iatrogenic disorder with the bulk of diagnoses arising from a tiny fraction of practitioners.
The degree of association of any particular condition with iatrogenesis is unclear and in some cases controversial. The over-diagnosis of psychiatric conditions (with the assignment of mental illness terminology) may relate primarily to clinician dependence on subjective criteria. The assignment of pathological nomenclature is rarely a benign process and can easily rise to the level of emotional iatrogenesis, especially when no alternatives outside of the diagnostic naming process have been considered. Many former patients come to the conclusion that their difficulties are largely the result of the power relationships inherent in psychiatric treatment, which has led to the rise of the anti-psychiatry movement.
Iatrogenic poverty
Meessen et al. used the term "iatrogenic poverty" to describe impoverishment induced by medical care. Impoverishment is described for households exposed to catastrophic health expenditure or to hardship financing. Every year, worldwide, over 100,000 households fall into poverty due to health care expenses. A study reported that in the United States in 2001, illness and medical debt caused half of all personal bankruptcies. Especially in countries in economic transition, the willingness to pay for health care is increasing and the supply of services is expanding rapidly, but the regulatory and protective capacity in those countries often lags behind. Patients easily fall into a vicious cycle of illness, ineffective therapies, consumption of savings, indebtedness, sale of productive assets, and eventually poverty.
Social and cultural iatrogenesis
The 20th-century social critic Ivan Illich broadened the concept of medical iatrogenesis in his 1974 book Medical Nemesis: The Expropriation of Health by defining it at three levels.
First, clinical iatrogenesis is the injury done to patients by ineffective, unsafe, and erroneous treatments as described above. In this regard, he described the need for evidence-based medicine 20 years before the term was coined (the concept itself had been known and followed for centuries).
Second, at another level social iatrogenesis is the medicalization of life in which medical professionals, pharmaceutical companies, and medical device companies have a vested interest in sponsoring sickness by creating unrealistic health demands that require more treatments or treating non-diseases that are part of the normal human experience, such as age-related declines. In this way, aspects of medical practice and medical industries can produce social harm in which society members ultimately become less healthy or excessively dependent on institutional care. He argued that medical education of physicians contributes to medicalization of society because they are trained predominantly for diagnosing and treating illness, therefore they focus on disease rather than on health. Iatrogenic poverty (above) can be considered a specific manifestation of social iatrogenesis.
Third, cultural iatrogenesis refers to the destruction of traditional ways of dealing with, and making sense of, death, suffering, and sickness. In this way the medicalization of life leads to cultural harm as society members lose their autonomous coping skills. In these critiques, "Illich does not reject all benefits of modern society but rejects those that involve unwarranted dependency and exploitation."
Epidemiology
Globally it is estimated that 142,000 people died in 2013 from adverse effects of medical treatment, an increase of 51 percent from 94,000 in 1990. In the United States, estimated deaths per year include:
12,000 due to unnecessary surgery
7,000 due to medication errors in hospitals
20,000 due to other errors in hospitals
80,000 due to nosocomial infections in hospitals
106,000 due to non-error, negative effects of drugs
Based on these figures, iatrogenesis may cause as many as 225,000 deaths per year in the United States (excluding recognizable error). An earlier Institute of Medicine report estimated 230,000 to 284,000 iatrogenic deaths annually.
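A minimal arithmetic check of the per-category figures above (Python; the labels simply mirror the list, and the values are the estimates quoted in this section, not independent data):

us_iatrogenic_death_estimates = {
    "unnecessary surgery": 12_000,
    "medication errors in hospitals": 7_000,
    "other errors in hospitals": 20_000,
    "nosocomial infections in hospitals": 80_000,
    "non-error, negative effects of drugs": 106_000,
}
# Summing the categories reproduces the ~225,000 per-year figure cited above.
print(sum(us_iatrogenic_death_estimates.values()))  # 225000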
History
The term "iatrogenesis" means brought forth by a healer, from the Greek (, "healer") and (, "origin"); as such, in its earlier forms, it could refer to good or bad effects.
Since at least the time of Hippocrates, people have recognized the potentially damaging effects of medical intervention. "First do no harm" (primum non nocere) is a primary Hippocratic mandate in modern medical ethics. Iatrogenic illness or death caused purposefully or by avoidable error or negligence on the healer's part became a punishable offense in many civilizations.
The transfer of pathogens from the autopsy room to maternity patients, leading to shocking historical mortality rates of puerperal fever (also known as "childbed fever") at maternity institutions in the 19th century, was a major iatrogenic catastrophe of the era. The infection mechanism was first identified by Ignaz Semmelweis.
With the development of scientific medicine in the 20th century, it could be expected that iatrogenic illness or death might be more easily avoided. Antiseptics, anesthesia, antibiotics, better surgical techniques, evidence-based protocols and best practices continue to be developed to decrease iatrogenic side effects and mortality.
See also
Adverse drug reaction
Antifragile
Bioethics
Bloodletting
Cascade effect
Classification of Pharmaco-Therapeutic Referrals
Fatal Care: Survive in the U.S. Health System
Hospital-acquired infection
Journal of Negative Results in Biomedicine
List of medicine contamination incidents
Medical malpractice
Medicalization
Nassim Nicholas Taleb
Nocebo
Paradoxical reaction
Patient safety
Placebo
Polypharmacy
Pressure ulcer
Quaternary prevention
Risk–benefit ratio
Sentinel event
References
External links
Patient Safety Network (US)
Medical ethics
Health care quality
Medical error
Social problems in medicine
Biomedical model
The biomedical model of medical care is the medical model used in most Western healthcare settings, and is built on the perception that a state of health is defined purely by the absence of illness. The biomedical model contrasts with sociological theories of care.
Forms of the biomedical model have existed since before 400 BC, with Hippocrates advocating for physical etiologies of illness. Despite this, the model did not form the dominant view of health until the nineteenth century during the Scientific Revolution.
Criticism of the model generally surrounds its perception that health is independent of the social environment in which it occurs, and can be defined one way across all populations. The model is also criticised for its view of the health system as socially and politically neutral, and not as a source of social and political power or as embedded into the structure of society.
Features
In their book Society, Culture and Health: an Introduction to Sociology for Nurses, health sociologists Dr. Karen Willis and Dr. Shandell Elmer outline eight 'features' of the biomedical model's approach to illness and health:
doctrine of specific aetiology: that all illness and disease is attributable to a specific, physiological dysfunction
body as a machine: that the body is formed of machinery to be fixed by medical doctors
mind-body distinction: that the mind and body are separate entities that do not interrelate
reductionism
narrow definition of health: that a state of health is always the absence of a definable illness
individualistic: that sources of ill health are always in the individual, and not in the environment in which health occurs
treatment versus prevention: that the focus of health is on diagnosis and treatment of illness, not prevention
treatment imperative: that medicine can 'fix the broken machinery' of ill-health
neutral scientific process: that health care systems and agents of health are socially and culturally detached
See also
Biopsychosocial model
Medical model
Medical model of disability
Social model of disability
Trauma model of mental disorders
References
Medical models
Medical research
Medical research (or biomedical research), also known as health research, refers to the process of using scientific methods with the aim to produce knowledge about human diseases, the prevention and treatment of illness, and the promotion of health.
Medical research encompasses a wide array of research, extending from "basic research" (also called bench science or bench research), which involves fundamental scientific principles that may apply to a preclinical understanding, to clinical research, which involves studies of people who may be subjects in clinical trials. Within this spectrum is applied research, or translational research, conducted to expand knowledge in the field of medicine.
Both clinical and preclinical research phases exist in the pharmaceutical industry's drug development pipelines, where the clinical phase is denoted by the term clinical trial. However, only part of the clinical or preclinical research is oriented towards a specific pharmaceutical purpose. The need for fundamental and mechanism-based understanding, diagnostics, medical devices, and non-pharmaceutical therapies means that pharmaceutical research is only a small part of medical research.
Most of the research in the field is pursued by biomedical scientists, but significant contributions are made by other types of biologists. Medical research on humans has to strictly follow the medical ethics sanctioned in the Declaration of Helsinki and by the institutional review board where the research is conducted. In all cases, research ethics are expected.
Impact
The increased longevity of humans over the past century can be significantly attributed to advances resulting from medical research. Among the major benefits of medical research have been vaccines for measles and polio, insulin treatment for diabetes, classes of antibiotics for treating a host of maladies, medication for high blood pressure, improved treatments for AIDS, statins and other treatments for atherosclerosis, new surgical techniques such as microsurgery, and increasingly successful treatments for cancer. New, beneficial tests and treatments are expected as a result of the Human Genome Project. Many challenges remain, however, including the appearance of antibiotic resistance and the obesity epidemic.
Phases of medical research
Basic medical research
Example areas in basic medical research include: cellular and molecular biology, medical genetics, immunology, neuroscience, and psychology. Researchers, mainly in universities or government-funded research institutes, aim to establish an understanding of the cellular, molecular and physiological mechanisms of human health and disease.
Pre-clinical research
Pre-clinical research covers understanding of mechanisms that may lead to clinical research with people. Typically, the work requires no ethical approval, is supervised by scientists rather than physicians, and is carried out in a university or company, rather than a hospital.
Clinical research
Clinical research is carried out with people as the experimental subjects. It is generally supervised by physicians and conducted by nurses in a medical setting, such as a hospital or research clinic, and requires ethical approval.
Role of patients and the public
Besides being participants in a clinical trial, members of the public can actively collaborate with researchers in designing and conducting medical research. This is known as patient and public involvement (PPI). Public involvement involves a working partnership between patients, caregivers, people with lived experience, and researchers to shape and influence what is researched and how. PPI can improve the quality of research and make it more relevant and accessible. People with current or past experience of illness can provide a different perspective than professionals and complement their knowledge. Through their personal knowledge they can identify research topics that are relevant and important to those living with an illness or using a service. They can also help to make the research more grounded in the needs of the specific communities they are part of. Public contributors can also ensure that the research is presented in plain language that is clear to the wider society and the specific groups it is most relevant for.
Funding
Research funding in many countries derives from research bodies and private organizations which distribute money for equipment, salaries, and research expenses. The United States, Europe, Asia, Canada, and Australia combined spent $265.0 billion in 2011, reflecting growth of 3.5% annually from $208.8 billion in 2004. The United States contributed 49% of governmental funding from these regions in 2011, compared to 57% in 2004.
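For readers who want to see how such an annual growth figure is derived, here is a short sketch (Python) of the compound annual growth rate implied by the two totals quoted above; the dollar figures are those reported in this paragraph, not independent data.

# Compound annual growth rate implied by $208.8 billion (2004) growing to $265.0 billion (2011).
start_billion, end_billion, years = 208.8, 265.0, 2011 - 2004
cagr = (end_billion / start_billion) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 3.5% per year, matching the figure reported above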
In the United Kingdom, funding bodies such as the National Institute for Health and Care Research (NIHR) and the Medical Research Council derive their assets from UK tax payers, and distribute revenues to institutions by competitive research grants. The Wellcome Trust is the UK's largest non-governmental source of funds for biomedical research and provides over £600 million per year in grants to scientists and funds for research centres.
In the United States, data from ongoing surveys by the National Science Foundation (NSF) show that federal agencies provided only 44% of the $86 billion spent on basic research in 2015. The National Institutes of Health and pharmaceutical companies collectively contribute $26.4 billion and $27 billion, which constitute 28% and 29% of the total, respectively. Other significant contributors include biotechnology companies ($17.9 billion, 19% of total), medical device companies ($9.2 billion, 10% of total), other federal sources, and state and local governments. Foundations and charities, led by the Bill and Melinda Gates Foundation, contributed about 3% of the funding. These funders are attempting to maximize their return on investment in public health. One method proposed to maximize the return on investment in medicine is to fund the development of open source hardware for medical research and treatment.
The enactment of orphan drug legislation in some countries has increased funding available to develop drugs meant to treat rare conditions, resulting in breakthroughs that previously were uneconomical to pursue.
Government-funded biomedical research
Since the establishment of the National Institutes of Health (NIH) in the mid-1940s, the main source of U.S. federal support of biomedical research, investment priorities and levels of funding have fluctuated. From 1995 to 2010, NIH support of biomedical research increased from $11 billion to $27 billion. Despite the jump in federal spending, advancements measured by citations to publications and the number of drugs approved by the FDA remained stagnant over the same time span. Financial projections indicate federal spending will remain constant in the near future.
US federal funding trends
The National Institutes of Health (NIH) is the agency that is responsible for management of the lion's share of federal funding of biomedical research. It funds over 280 areas directly related to health. Over the past century there were two notable periods of NIH support.
From 1995 to 1996 funding increased from $8.877 billion to $9.366 billion, years which represented the start of what is considered the "doubling period" of rapid NIH support. The second notable period started in 1997 and ended in 2010, a period where the NIH moved to organize research spending for engagement with the scientific community.
Privately (industry) funded biomedical research
Since 1980 the share of biomedical research funding from industry sources has grown from 32% to 62%, which has resulted in the development of numerous life-saving medical advances. The relationship between industry and government-funded research in the US has seen great movement over the years. The 1980 Bayh–Dole Act was passed by Congress to foster a more constructive relationship between government- and industry-funded biomedical research. The Bayh–Dole Act gave private corporations the option of applying for government-funded grants for biomedical research, which in turn allowed the private corporations to license the technology. Both government and industry research funding increased rapidly between 1994 and 2003; industry saw a compound average annual growth rate of 8.1% a year, which slowed only slightly to a compound average annual growth rate of 5.8% from 2003 to 2008.
Conflicts of interests
"Conflict of interest" in the field of medical research has been defined as "a set of conditions in which professional judgment concerning a primary interest (such as a person's welfare or the validity of research) tends to be unduly influenced by a secondary interest (such as financial gain)."
Regulation of industry-funded biomedical research has seen great changes since Samuel Hopkins Adams' exposés. In 1906 Congress passed the Pure Food and Drugs Act, and in 1912 Congress passed the Shirley Amendment to prohibit the wide dissemination of false information on pharmaceuticals. The Food and Drug Administration was formally created in 1930 under the McNary-Mapes Amendment to oversee the regulation of food and drugs in the United States. In 1962 the Kefauver-Harris Amendments to the Food, Drug and Cosmetics Act required that before a drug was marketed in the United States the FDA must first approve that the drug was safe. The Kefauver-Harris Amendments also mandated that more stringent clinical trials be performed before a drug is brought to the market, and they were met with opposition from industry because the lengthier clinical trial periods would lessen the period of time in which the investor is able to see a return on their money. In the pharmaceutical industry patents are typically granted for a 20-year period, and most patent applications are submitted during the early stages of product development. According to Ariel Katz, on average it takes an additional 8 years after a patent application is submitted before the FDA approves a drug for marketing. As such, this would leave a company with only 12 years to market the drug and see a return on its investment. After a sharp decline in new drugs entering the US market following the 1962 Kefauver-Harris Amendments, economist Sam Peltzman concluded that the cost of lost innovation was greater than the savings recognized by consumers no longer purchasing ineffective drugs. In 1984 Congress passed the Hatch-Waxman Act, or the Drug Price Competition and Patent Term Restoration Act of 1984, with the idea that giving brand manufacturers the ability to extend their patent by an additional 5 years would create greater incentives for innovation and private-sector funding for investment.
The relationship that exists with industry-funded biomedical research is one in which industry is the financier for academic institutions, which in turn employ scientific investigators to conduct the research. A fear that exists when a project is funded by industry is that firms might decline to inform the public of negative effects in order to better promote their product.
A series of studies shows that public fear of the conflicts of interest that exist when biomedical research is funded by industry can be considered valid, following a 2003 publication of "Scope and Impact of Financial Conflicts of Interest in Biomedical Research" in The Journal of the American Medical Association. This publication included 37 different studies that met specific criteria to determine whether or not an academic institution or scientific investigator funded by industry had engaged in behavior that could be deduced to be a conflict of interest in the field of biomedical research. Survey results from one study concluded that 43% of scientific investigators employed by a participating academic institution had received research-related gifts and discretionary funds from industry sponsors. Another participating institution surveyed showed that 7.6% of investigators were financially tied to research sponsors, including paid speaking engagements (34%), consulting arrangements (33%), advisory board positions (32%), and equity (14%). A 1994 study concluded that 58% of 210 life science companies indicated that investigators were required to withhold information pertaining to their research so as to extend the life of the interested companies' patents. Rules and regulations regarding conflict of interest disclosures are being studied by experts in the biomedical research field to eliminate conflicts of interest that could possibly affect the outcomes of biomedical research.
Transparency laws
Two laws, both still in effect (one passed in 2006 and the other in 2010), were instrumental in defining funding reporting standards for biomedical research and defined for the first time reporting regulations that were previously not required. The 2006 Federal Funding Accountability and Transparency Act mandates that all entities receiving over $25,000 in federal funds must submit annual spending reports, including disclosure of executive salaries. The 2010 amendment to the act mandates that progress reports be submitted along with financial reporting. Data from the federal mandate is managed and made publicly available on usaspending.gov. Aside from this main source, other reporting mechanisms exist: data specifically on biomedical research funding from federal sources is made publicly available by the National Health Expenditure Accounts (NHEA), while data on health services research, approximately 0.1% of federal funding on biomedical research, is available through the Coalition for Health Services Research, the Agency for Healthcare Research and Quality, the Centers for Disease Control and Prevention, the Centers for Medicare & Medicaid Services, and the Veterans Health Administration.
Currently, there are no funding reporting requirements for industry-sponsored research, but there has been voluntary movement toward this goal. In 2014, major pharmaceutical stakeholders such as Roche and Johnson & Johnson made financial information publicly available, and the Pharmaceutical Research and Manufacturers of America (PhRMA), the most prominent professional association for biomedical research companies, has recently begun to provide limited public funding reports.
History
Ancient to 20th century in other regions
The earliest narrative describing a medical trial is found in the Book of Daniel, which says that Babylonian king Nebuchadnezzar ordered youths of royal blood to eat only red meat and wine for three years, while another group of youths ate only beans and water. The experiment was intended to determine if a diet of vegetables and water was healthier than a diet of wine and red meat. At the experiment's endpoint, the trial accomplished its objective: the youths who ate only beans and water were noticeably healthier. Scientific curiosity to understand health outcomes from varying treatments has been present for centuries, but it was not until the mid-19th century that an organizational platform was created to support and regulate this curiosity. In 1945, Vannevar Bush said that biomedical scientific research was "the pacemaker of technological progress", an idea which contributed to the initiative to found the National Institutes of Health (NIH) in 1948, a historical benchmark that marked the beginning of nearly a century of substantial investment in biomedical research.
20th and 21st century in the United States
The NIH provides more financial support for medical research than any other agency in the world to date and claims responsibility for numerous innovations that have improved global health. The historical funding of biomedical research has undergone many changes over the past century. Innovations such as the polio vaccine, antibiotics, and antipsychotic agents, developed in the early years of the NIH, led to social and political support of the agency. Political initiatives in the early 1990s led to a doubling of NIH funding, spurring an era of great scientific progress. There have been dramatic changes in the era since the turn of the 21st century; roughly around the start of the century, the cost of trials dramatically increased while the rate of scientific discoveries did not keep pace.
Biomedical research spending increased substantially faster than GDP over the past decade in the US: between 2003 and 2007, spending increased 14% per year, while GDP grew 1% over the same period (both measures adjusted for inflation). Spending by industry, not-for-profit entities, and state and federal sources combined accounted for an increase in funding from $75.5 billion in 2003 to $101.1 billion in 2007. Due to the immediacy of federal financing priorities and stagnant corporate spending during the recession, biomedical research spending decreased 2% in real terms in 2008. Despite an overall increase of investment in biomedical research, there has been stagnation, and in some areas a marked decline, in the number of drug and device approvals over the same time period.
As of 2010, industry sponsored research accounts for 58% of expenditures, NIH for 27% of expenditures, state governments for 5% of expenditures, non NIH-federal sources for 5% of expenditures and not-for-profit entities accounted for 4% of support. Federally funded biomedical research expenditures increased nominally, 0.7% (adjusted for inflation), from 2003 to 2007. Previous reports showed a stark contrast in federal investment, from 1994 to 2003, federal funding increased 100% (adjusted for inflation).
The NIH manages the majority, over 85%, of federal biomedical research expenditures. NIH support for biomedical research decreased from $31.8 billion in 2003 to $29.0 billion in 2007, a 25% decline in real terms (adjusted for inflation), while non-NIH federal funding allowed for the maintenance of government financial support levels through the era (the 0.7% four-year increase). Spending on industry-initiated research increased 25% (adjusted for inflation) over the same period, from $40 billion in 2003 to $58.6 billion in 2007. By contrast, from 1994 to 2003 industry sponsored research funding had increased 8.1% per year, a stark contrast to the 25% increase in recent years.
Of industry sponsored research, pharmaceutical firm spending was the greatest contributor of all industry sponsored biomedical research spending, but it increased only 15% (adjusted for inflation) from 2003 to 2007, while device and biotechnology firms accounted for the majority of the spending. The stock performance, a measure that can be an indication of future firm growth or technological direction, has substantially increased for both predominantly medical device and biotechnology producers. Contributing factors to this growth are thought to be less rigorous FDA approval requirements for devices as opposed to drugs, the lower cost of trials, the lower pricing and profitability of products, and the predictable influence of new technology due to a limited number of competitors. Another visible shift during the era was a shift in focus to late-stage research trials; formerly dispersed across phases, industry-sponsored research since 1994 has increasingly consisted of late-phase trials rather than early, experimental phases, and these late-phase trials now account for the majority of industry sponsored research. This shift is attributable to a lower-risk investment and a shorter development-to-market schedule. The low-risk preference is also reflected in the trend of large pharmaceutical firms acquiring smaller companies that hold patents to newly developed drug or device discoveries which have not yet passed federal regulation (large companies mitigate their risk by purchasing technology created by smaller companies in early-phase, high-risk studies). Medical research support from universities increased from $22 billion in 2003 to $27.7 billion in 2007, a 7.8% increase (adjusted for inflation). In 2007 the most heavily funded institutions received 20% of NIH medical research funding, and the top 50 institutions received 58% of NIH medical research funding; the percentage of funding allocated to the largest institutions has increased only slightly since 1994. Relative to federal and private funding, health policy and services research accounted for a nominal amount of sponsored research; health policy and services research was funded at $1.8 billion in 2003, which increased to $2.2 billion in 2008.
Stagnant rates of investment from the US government over the past decade may be in part attributable to challenges that plague the field. To date, only two-thirds of published drug trial findings have results that can be reproduced, which raises concerns from a US regulatory standpoint where great investment has been made in research ethics and standards, yet trial results remain inconsistent. Federal agencies have called for greater regulation to address these problems; a spokesman from the National Institute of Neurological Disorders and Stroke, an agency of the NIH, stated that there is "widespread poor reporting of experimental design in articles and grant applications, that animal research should follow a core set of research parameters, and that a concerted effort by all stakeholders is needed to disseminate best reporting practices and put them into practice".
Regulations and guidelines
Medical research is highly regulated. National regulatory authorities are appointed in most countries to oversee and monitor medical research, such as for the development and distribution of new drugs. In the United States, the Food and Drug Administration oversees new drug development; in Europe, the European Medicines Agency (see also EudraLex); and in Japan, the Ministry of Health, Labour and Welfare. The World Medical Association develops the ethical standards for medical professionals involved in medical research. The most fundamental of them is the Declaration of Helsinki. The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) works on the creation of rules and guidelines for the development of new medication, such as the guidelines for Good Clinical Practice (GCP). All ideas of regulation are based on a country's ethical standards code. This is why treatment of a particular disease in one country may not be allowed, but is in another.
Flaws and vulnerabilities
A major flaw and vulnerability in biomedical research appears to be the hypercompetition for the resources and positions that are required to conduct science. The competition seems to suppress the creativity, cooperation, risk-taking, and original thinking required to make fundamental discoveries. Other consequences of today's highly pressured environment for research appear to be a substantial number of research publications whose results cannot be replicated, and perverse incentives in research funding that encourage grantee institutions to grow without making sufficient investments in their own faculty and facilities. Other risky trends include a decline in the share of key research grants going to younger scientists, as well as a steady rise in the age at which investigators receive their first funding.
A significant flaw in biomedical research is the toxic culture that particularly impacts medical students and early career researchers. They face challenges such as bullying, harassment, and unethical authorship practices. Intense competition for funding and publication pressures fosters a climate of secrecy and self-protection, stifling creativity and collaboration. The power imbalance in academic hierarchies exacerbates these issues, with junior researchers often subjected to exploitative practices and denied proper recognition for their contributions.
Commercialization
After clinical research, medical therapies are typically commercialized by private companies, such as pharmaceutical companies or medical device companies. In the United States, one estimate found that in 2011, one-third of Medicare physician and outpatient hospital spending was on new technologies unavailable in the prior decade.
Medical therapies are constantly being researched, so the difference between a therapy which is investigational versus standard of care is not always clear, particularly given cost-effectiveness considerations. Payers have utilization management clinical guidelines which do not pay for "experimental or investigational" therapies, or may require that the therapy is medically necessary or superior to cheaper treatments. For example, proton therapy was approved by the FDA, but private health insurers in the United States considered it unproven or unnecessary given its high cost, although it was ultimately covered for certain cancers.
Fields of research
Fields of biomedical research include:
See also
References
SAMPLE history
SAMPLE history is a mnemonic acronym to remember key questions for a person's medical assessment. The SAMPLE history is sometimes used in conjunction with vital signs and OPQRST. The questions are most commonly used in the field of emergency medicine by first responders during the secondary assessment. It is used for alert (conscious) people, but often much of this information can also be obtained from the family or friend of an unresponsive person. In the case of severe trauma, this portion of the assessment is less important. A derivative of SAMPLE history is AMPLE history which places a greater emphasis on a person's medical history.
Meaning
The parts of the mnemonic are:
S – Signs/Symptoms (Symptoms are important but they are subjective.)
A – Allergies
M – Medications
P – Past Pertinent medical history
L – Last Oral Intake (Sometimes also Last Menstrual Cycle.)
E – Events Leading Up To Present Illness / Injury
See also
OPQRST
ABC (medicine)
Past Medical History
References
External links
Emergency medical services
First aid
Medical mnemonics
Mnemonic acronyms
Anemia in pregnancy
Anemia is a condition in which blood has a lower-than-normal amount of red blood cells or hemoglobin. Anemia in pregnancy is a decrease in the total red blood cells (RBCs) or hemoglobin in the blood during pregnancy. Anemia is an extremely common condition in pregnancy world-wide, conferring a number of health risks to mother and child. While anemia in pregnancy may be pathologic, in normal pregnancies, the increase in RBC mass is smaller than the increase in plasma volume, leading to a mild decrease in hemoglobin concentration referred to as physiologic (or dilutional) anemia. Maternal signs and symptoms are usually non-specific, but can include: fatigue, pallor, dyspnea, palpitations, and dizziness. There are numerous well-known maternal consequences of anemia including: maternal cardiovascular strain, reduced physical and mental performance, reduced peripartum blood reserves, increased risk for peripartum blood product transfusion, and increased risk for maternal mortality.
Signs and symptoms
Common symptoms are headache, fatigue, lethargy, tachycardia, tachypnea, paresthesia, pallor, glossitis, and cheilitis. Severe anemia is associated with complications such as congestive heart failure, placenta previa, abruptio placentae, and operative delivery.
Causes
Physiologic causes
Dilutional anemia: There is an increase in overall blood volume during pregnancy, and even though there is an increase in overall red blood cell mass, the larger increase in other components of the blood, such as plasma, decreases the overall percentage of red blood cells in circulation.
Non-physiologic causes
Iron deficiency anemia: this can occur from the increased production of red blood cells, which requires substantial iron, and also from inadequate intake of iron relative to the increased requirements of pregnancy.
Hemoglobinopathies: Thalassemia and sickle cell disease.
Dietary deficiencies: Folate deficiency and vitamin B12 deficiency are common causes of anemia in pregnancy. Folate deficiency occurs due to diets low in leafy green vegetables, and animal sources of protein. B12 deficiency tends to be more common in individuals with Crohn's disease or gastrectomies.
Cell membrane disorders: Hereditary spherocytosis
Autoimmune causes: these lead to hemolysis of red blood cells (e.g., autoimmune hemolytic anemia).
Hypothyroidism and chronic kidney disease
Parasitic infestations: some examples are hookworm or Plasmodium species
Bacterial or viral infections
Iron deficiency is the most common cause of anemia in the pregnant woman. During pregnancy, the average total iron requirement is about 1200 mg for a 55 kg woman over the course of the pregnancy. This iron is used for the increase in red cell mass, placental needs, and fetal growth. About 40% of women start their pregnancy with low to absent iron stores, and up to 90% have iron stores insufficient to meet the increased iron requirements during pregnancy and the postpartum period.
The majority of women presenting with postpartum anemia have pre-delivery iron deficiency anemia or iron deficiency anemia combined with acute blood loss during delivery.
Adverse outcomes
Maternal outcomes
Studies have suggested that severe maternal morbidity (SMM) is increased approximately twofold in antepartum maternal anemia. SMM is defined by maternal death, eclampsia, transfusion, hysterectomy, or intensive care unit admission at delivery. Additional complications may include postpartum haemorrhage, preeclampsia, cesarean delivery, and infections.
Fetal outcomes
Iron deficiency during pregnancy is linked to a number of harmful effects on the fetus such as intrauterine growth restriction, death in utero, infection, preterm delivery and neurodevelopmental damage, which may be irreversible.
Diagnosis
The most useful test with which to render a diagnosis of anemia is a low RBC count; however, hemoglobin and hematocrit values are most commonly used in making the initial diagnosis of anemia. Testing involved in diagnosing anemia in pregnant women must be tailored to each individual patient. Suggested tests include: hemoglobin and hematocrit (ratio of red blood cells to the total blood volume), mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), erythrocyte count (number of red blood cells in the blood), red cell distribution width (RDW), reticulocyte count, and a peripheral smear to assess red blood cell morphology. If iron deficiency is suspected, additional tests such as serum iron, total iron-binding capacity (TIBC), transferrin saturation, and plasma or serum ferritin may be warranted. It is important to note that reference ranges for these values are often not the same for pregnant women. Additionally, laboratory values for pregnancy often change throughout the duration of a woman's gestation. For example, the reference values for what level of hemoglobin is considered anemic vary in each trimester of pregnancy, as listed below.
- First trimester hemoglobin < 11 g/dL
- Second trimester hemoglobin < 10.5 g/dL
- Third trimester hemoglobin < 11 g/dL
- Postpartum hemoglobin < 10 g/dL
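As an illustrative sketch only (Python), the trimester-specific cutoffs above can be expressed as a simple lookup; hemoglobin is assumed to be in g/dL and the thresholds are the ones listed here.

# Illustrative classifier using the stage-specific hemoglobin cutoffs listed above (g/dL).
ANEMIA_CUTOFFS_G_DL = {
    "first trimester": 11.0,
    "second trimester": 10.5,
    "third trimester": 11.0,
    "postpartum": 10.0,
}

def is_anemic(hemoglobin_g_dl, stage):
    # True if the hemoglobin falls below the cutoff for the given stage of pregnancy.
    return hemoglobin_g_dl < ANEMIA_CUTOFFS_G_DL[stage]

print(is_anemic(10.7, "second trimester"))  # False
print(is_anemic(10.7, "third trimester"))   # True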
Listed below are normal ranges for important lab values in the diagnosis of anemia. Keep in mind that these ranges might change based on each patient's stage in pregnancy:
- Hemoglobin: men 13.6–16.9 g/dL, women 11.9–14.8 g/dL
- Hematocrit: men 40–50%, women 35–43%
- MCV: 82.5–98 fL
- Reticulocyte count: men 16–130 × 10^3/microL (i.e., × 10^9/L), women 16–98 × 10^3/microL (i.e., × 10^9/L)
Differential using MCV
MCV can be a useful measure for differentiating between different forms of anemia. MCV measures the average size of the red blood cells. There are three cut-off ranges for MCV: if the MCV is < 80 fL the anemia is considered microcytic, if the MCV is 80 to 100 fL it is considered normocytic, and if the MCV is > 100 fL it is considered macrocytic. Some causes of anemia can fall into different MCV ranges depending upon the severity of disease. Common causes of anemia organized by MCV are listed below.
MCV < 80 fL
- Iron deficiency
- Thalassemia
- Anemia of chronic disease or anemia of inflammation
MCV 80 - 100 fL
- Iron deficiency
- Infection
- Hypothyroidism
- Liver disease or alcohol use
- Drug-induced
- Hemolysis
- Vitamin B12 or folate deficiency
MCV > 100 fL
- Vitamin B12 or folate deficiency
- Drug induced
- Liver disease or alcohol use
- Hypothyroidism
- Myelodysplastic Syndromes
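As a sketch of how these cutoffs narrow the differential (illustrative only, not a diagnostic tool), the following Python snippet maps an MCV value in fL to its size category and the candidate causes listed above.

# Illustrative mapping from MCV (fL) to size category and the differentials listed above.
MICROCYTIC = ["iron deficiency", "thalassemia", "anemia of chronic disease or inflammation"]
NORMOCYTIC = ["iron deficiency", "infection", "hypothyroidism", "liver disease or alcohol use",
              "drug-induced", "hemolysis", "vitamin B12 or folate deficiency"]
MACROCYTIC = ["vitamin B12 or folate deficiency", "drug-induced", "liver disease or alcohol use",
              "hypothyroidism", "myelodysplastic syndromes"]

def mcv_differential(mcv_fl):
    # Classify the MCV and return the size category with its candidate causes.
    if mcv_fl < 80:
        return "microcytic", MICROCYTIC
    if mcv_fl <= 100:
        return "normocytic", NORMOCYTIC
    return "macrocytic", MACROCYTIC

category, causes = mcv_differential(74)
print(category, causes)  # microcytic ['iron deficiency', 'thalassemia', ...]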
Pregnancy
Pregnant women need almost twice as much iron as women who are not pregnant. Not getting enough iron during pregnancy raises the risk of premature birth or a low-birth-weight baby. Hormonal changes in the pregnant woman result in an increase in circulating blood volume to 100 mL/kg, with a total blood volume of approximately 6000–7000 mL. While red cell mass increases by 15–20% during pregnancy, plasma volume increases by 40%. Hemoglobin levels less than 11 g/dL during the first trimester, less than 10.5 g/dL during the second and third trimesters, and less than 10 g/dL in the postpartum period are considered anemic.
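A back-of-the-envelope sketch (Python) of why these changes lower the measured hemoglobin: the 20% and 40% figures are the upper ends of the ranges quoted above, while the starting hemoglobin and the assumed baseline plasma fraction are hypothetical values chosen only for illustration.

# Rough illustration of physiologic (dilutional) anemia using the figures above.
baseline_hb = 13.0        # hypothetical pre-pregnancy hemoglobin, g/dL
rbc_mass_increase = 0.20  # upper end of the quoted 15-20% rise in red cell mass
plasma_increase = 0.40    # quoted rise in plasma volume
plasma_fraction = 0.58    # assumed plasma share of baseline blood volume (illustrative)

# Hemoglobin concentration scales with total hemoglobin mass over total blood volume.
new_volume_factor = (1 - plasma_fraction) * (1 + rbc_mass_increase) + plasma_fraction * (1 + plasma_increase)
diluted_hb = baseline_hb * (1 + rbc_mass_increase) / new_volume_factor
print(f"{diluted_hb:.1f} g/dL")  # about 11.9 g/dL: a mild, 'physiologic' drop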
Prevention
Anemia is a very common complication of pregnancy. A mild form of anemia can be a result of dilution of blood. There is a relatively larger increase in blood plasma compared to total red cell mass in all pregnancies, which results in dilution of the blood and causes physiologic anemia. These changes take place to ensure an adequate amount of blood is supplied to the fetus and to prepare the body for the expected blood loss at the time of delivery.
Prevention of iron deficiency anemia
Iron deficiency is the most common cause of non-physiologic anemia. Iron deficiency anemia can be prevented with supplemental oral iron of 27–30 mg daily, a dose that typically corresponds to the amount of iron found in iron-containing prenatal vitamins. A medical provider can determine whether additional supplements are needed, and routine labs during pregnancy allow early detection of iron deficiency anemia.
Iron deficiency anemia can also be prevented by eating iron-rich foods. This includes dark green leafy vegetables, eggs, meat, fish, dried beans, and fortified grains.
Prevention of other causes of anemia
This may be only applicable to select individuals.
Vitamin B12: Women who consume strictly vegan diets are advised to take Vitamin B12 supplements; this helps prevent anemia due to low Vitamin B12 levels.
Folic acid: A folic acid supplement is recommended for women with a history of documented folate deficiency; folic acid supplementation is also recommended for prevention of neural tube defects in the fetus.
Treatment
For treatment of iron deficiency anemia in pregnant women, iron supplementation at doses higher than those in prenatal supplements is recommended. The standard dose of oral iron ranges from 40 mg to 200 mg of elemental iron daily. A medical provider can determine the exact dose needed for each patient's condition; higher-than-needed doses of iron supplements may lead to more adverse effects.
Iron supplements are easy to take; however, adverse effects in some cases may include gastrointestinal side effects such as nausea, diarrhea, and/or constipation. In cases when an oral iron supplement is not tolerated, other options include longer intervals between each oral dose, liquid iron supplements, or intravenous iron. Intravenous iron may also be used in cases of severe iron deficiency anemia during the second and third trimesters of pregnancy.
Anemias due to other deficiencies, such as folic acid or vitamin B12, can also be treated with supplementation; the dose may vary based on the level of deficiency.
Other forms of anemias, such as inherited or acquired anemias prior to pregnancy, will require continuous management during pregnancy as well.
Treatment should target the underlying disease or condition affecting the patient.
The majority of obstetric anemia cases can be treated based on their etiology if diagnosed in time. Oral iron supplementation is the gold standard for the treatment of iron deficiency anemia and intravenous iron can be used when oral iron is not effective or tolerated from the second trimester of pregnancy onwards.
Treatment of postpartum hemorrhage is multifactorial and includes medical management, surgical management along with blood product support.
Epidemiology
According to the WHO estimation, the global prevalence of anemia during pregnancy is over 40%, and the prevalence of anemia during pregnancy in North America is 6%. Prevalence of anemia in pregnancy is higher in developing countries compared to developed countries. 56% of pregnant women from low and middle income countries were reported to have anemia.
Guidelines
References
Human pregnancy
Anemias
Transfusion medicine
Miasma theory
The miasma theory (also called the miasmic theory) is an abandoned medical theory that held that diseases—such as cholera, chlamydia, or the Black Death—were caused by a miasma (from the Ancient Greek μίασμα, 'pollution'), a noxious form of "bad air", also known as night air. The theory held that epidemics were caused by miasma, emanating from rotting organic matter. Though miasma theory is typically associated with the spread of contagious diseases, some academics in the early nineteenth century suggested that the theory extended to other conditions as well, e.g. one could become obese by inhaling the odor of food.
The miasma theory was advanced by Hippocrates in the fourth century B.C., and accepted from ancient times in Europe and China. The theory was eventually abandoned by scientists and physicians after 1880, replaced by the germ theory of disease: specific germs, not miasma, caused specific diseases. However, cultural beliefs about getting rid of odor made the clean-up of waste a high priority for cities. It also encouraged the construction of well-ventilated hospital facilities, schools and other buildings.
Etymology
The word miasma comes from ancient Greek and, though there is no English word with exactly the same meaning, it can be loosely translated as 'stain' or 'pollution'.
The idea later gave rise to the name malaria (literally 'bad air' in Medieval Italian).
Views worldwide
Miasma was considered to be a poisonous vapor or mist filled with particles from decomposed matter (miasmata) that caused illnesses. The miasmatic position was that diseases were the product of environmental factors such as contaminated water, foul air, and poor hygienic conditions. Such infection was not passed between individuals but would affect individuals within the locale that gave rise to such vapors. It was identifiable by its foul smell. It was also initially believed that miasmas were propagated through worms from ulcers within those affected by a plague.
Europe
In the fifth or fourth century BC, Hippocrates wrote about the effects of the environment on human diseases:
In the 1st century BC, the Roman architectural writer Vitruvius described the potential effects of miasma from fetid swamplands when visiting a city:
The miasmatic theory of disease remained popular in the Middle Ages and a sense of effluvia contributed to Robert Boyle's Suspicions about the Hidden Realities of the Air.
In the 1850s, miasma was used to explain the spread of cholera in London and in Paris, partly justifying Haussmann's later renovation of the French capital. The disease was said to be preventable by cleansing and scouring of the body and items. Dr. William Farr, the assistant commissioner for the 1851 London census, was an important supporter of the miasma theory. He believed that cholera was transmitted by air, and that there was a deadly concentration of miasmata near the River Thames' banks. Such a belief was accepted in part because of the generally poor air quality in urbanized areas. The wide acceptance of miasma theory during the cholera outbreaks overshadowed the partially correct theory brought forth by John Snow that cholera was spread through water. This slowed the response to the major outbreaks in the Soho district of London and other areas. The Crimean War nurse Florence Nightingale (1820–1910) was a proponent of the theory and worked to make hospitals sanitary and fresh-smelling. It was stated in 'Notes on Nursing for the Labouring Classes' (1860) that Nightingale would "keep the air [the patient] breathes as pure as the external air."
Fear of miasma registered in many early nineteenth-century warnings concerning what was termed "unhealthy fog". The presence of fog was thought to strongly indicate the presence of miasma. The miasmas were thought to behave like smoke or mist, blown with air currents, wafted by winds. It was thought that miasma did not simply travel on air but changed the air through which it propagated; the atmosphere was infected by miasma, as diseased people were.
China
In China, miasma is an old concept of illness, used extensively in ancient Chinese local chronicles and works of literature. Miasma has different names in Chinese culture. Most of the explanations of miasma refer to it as a kind of sickness or poison gas.
The ancient Chinese thought that miasma was related to the environment of parts of Southern China. The miasma was thought to be caused by the heat, moisture and dead air in the Southern Chinese mountains. They thought that insects' waste polluted the air, the fog, and the water, and that the virgin forests provided an environment in which miasma could arise.
In descriptions of the phenomenon of miasma by ancient travelers, soldiers, and local officials (most of them men of letters), fog, haze, dust, gas, or poisonous geological emissions were frequently mentioned. Miasma was thought to have caused many diseases, such as colds, influenza, heat stroke, malaria, and dysentery. In the medical history of China, malaria was referred to by different names in different dynastic periods. Poisoning and psittacosis were also called miasma in ancient China because the causes of these diseases were not accurately understood.
In the Sui dynasty (581–618 CE), the physician Chao Yuanfang mentioned miasma in his book On Pathogen and Syndromes (諸病源候論). He thought that miasma in Southern China was similar to typhoid fever in Northern China. However, in his opinion, miasma was different from malaria and dysentery, which he discussed in separate chapters of the book. He also claimed that miasma caused various diseases, so he suggested that apt and specific remedies should be sought for each.
The concept of miasma developed in several stages. Before the Western Jin dynasty, the concept of miasma was gradually forming; at least in the Eastern Han dynasty, there was no description of miasma. During the Eastern Jin, large numbers of northern people moved south, and miasma was then recognized by men of letters and nobility. After the Sui and Tang dynasties, scholar-bureaucrats sent to serve as local officials recorded and investigated miasma. As a result, the government became concerned about the severe cases and causes of miasma, sending doctors to epidemic areas to research the disease and heal the patients. In the Ming and Qing dynasties, local chronicles recorded different miasmas in different places.
By the Ming and Qing dynasties, however, Southern China had become highly developed and its environment changed rapidly. After the 19th century, Western science and medical knowledge were introduced into China, and people learned how to distinguish and deal with these diseases. The concept of miasma therefore faded out due to the progress of medicine in China.
Influence in Southern China
The terrifying miasma diseases of the southern regions of China made the area a primary destination for banished officials and exiled criminals from the Qin and Han dynasties onwards. The Tang dynasty poet Han Yu, for example, wrote to his nephew, who had come to see him off after his banishment to Chao Prefecture, in his poem En Route (左遷至藍關示姪孫湘):
The prevalent belief and predominant fear of the southern region with its "poisonous air and gases" is evident in historical documents.
Similar topics and feelings toward the miasma-infected south are often reflected in early Chinese poetry and records. Most scholars of the time agreed that the geological environment in the south had a direct impact on population composition and growth. Many historical records reflect that females were less prone to miasma infection, and that mortality rates were much higher in the south, especially for men. This directly influenced agricultural cultivation and the southern economy, as men were the engine of agricultural production. Zhou Qufei, a local magistrate from the Southern Song dynasty, described in his treatise Representative Answers from the South: "... The men are short and tan, while the women were plump and seldom came down with illness," and remarked on the large female population in the Guangxi region.
This inherent environmental threat also prevented immigration from other regions. Hence, development in the damp and sultry south was much slower than in the north, where the dynasties' political power resided for much of early Chinese history.
India
India also had a miasma theory. Gambir was considered the first antimiasmatic application; the gambir tree is found in southern India and Sri Lanka.
Developments from the 19th century onwards
Zymotic theory
Based on zymotic theory, people believed vapors called miasmata (singular: miasma) rose from the soil and spread diseases. Miasmata were believed to come from rotting vegetation and foul water—especially in swamps and urban ghettos.
Many people, especially the weak or infirm, avoided breathing night air by going indoors and keeping windows and doors shut. In addition to ideas associated with zymotic theory, there was also a general fear that cold or cool air spread disease. The fear of night air gradually disappeared as understanding of disease increased and as home heating and ventilation improved. Particularly important was the understanding that the agent spreading malaria was the mosquito (active at night) rather than miasmata.
Contagionism versus miasmatism
Prior to the late 19th century, night air was considered dangerous in most Western cultures. Throughout the 19th century, the medical community was divided on the explanation for disease proliferation. On one side were the contagionists, who believed disease was passed through physical contact, while others believed disease was present in the air in the form of miasma, and thus could proliferate without physical contact. Two members of the latter group were Thomas Southwood Smith and Florence Nightingale.
Thomas Southwood Smith spent many years comparing the miasmatic theory to contagionism.
Florence Nightingale:
The current germ theory accounts for disease proliferation by both direct and indirect physical contact.
Influence on sanitary engineering reforms
In the early 19th century, the living conditions in industrialized cities in Britain were increasingly unsanitary. The population was growing at a much faster rate than the infrastructure could support. For example, the population of Manchester doubled within a single decade, leading to overcrowding and a significant increase in waste accumulation. The miasma theory of disease made sense to the sanitary reformers of the mid-19th century. Miasmas explained why cholera and other diseases were epidemic in places where the water was stagnant and foul-smelling. A leading sanitary reformer, London's Edwin Chadwick, asserted that "all smell is disease", and maintained that a fundamental change in the structure of sanitation systems was needed to combat increasing urban mortality rates.
Chadwick saw the problem of cholera and typhoid epidemics as being directly related to urbanization, and he proposed that new, independent sewerage systems should be connected to homes. Chadwick supported his proposal with reports from the London Statistical Society which showed dramatic increases in both morbidity and mortality rates since the beginning of urbanization in the early 19th century. Though Chadwick proposed reform on the basis of the miasma theory, his proposals did contribute to improvements in sanitation, such as preventing the reflux of noxious air from sewers back into houses by using separate drainage systems in the design of sanitation. That led, incidentally, to decreased outbreaks of cholera and thus helped to support the theory.
The miasma theory was consistent with the observation that disease was associated with poor sanitation, and hence foul odours, and that sanitary improvements reduced disease. However, it was inconsistent with the findings arising from microbiology and bacteriology in the later 19th century, which eventually led to the adoption of the germ theory of disease, although consensus was not reached immediately. Concerns over sewer gas, a major component of the miasma theory as developed by Galen and brought to prominence by the "Great Stink" in London in the summer of 1858, led proponents of the theory to observe that sewers enclosed the refuse of the human bowel, which medical science had discovered could teem with typhoid, cholera, and other microbes.
In 1846, the Nuisances Removal and Diseases Prevention Act was passed to identify whether the transmission of cholera was by air or by water. The act was used to encourage owners to clean their dwellings and connect them to sewers.
Even though eventually disproved by the understanding of bacteria and the discovery of viruses, the miasma theory helped establish the connection between poor sanitation and disease. That encouraged cleanliness and spurred public health reforms which, in Britain, led to the Public Health Acts of 1848 and 1858, and the Local Government Act of 1858. The latter of those enabled the instituting of investigations into the health and sanitary regulations of any town or place, upon the petition of residents or as a result of death rates exceeding the norm. Early medical and sanitary engineering reformers included Henry Austin, Joseph Bazalgette, Edwin Chadwick, Frank Forster, Thomas Hawksley, William Haywood, Henry Letheby, Robert Rawlinson, John Simon, John Snow and Thomas Wicksteed. Their efforts, and associated British regulatory improvements, were reported in the United States as early as 1865.
Particularly notable in 19th-century sanitation reform is the work of Joseph Bazalgette, chief engineer to London's Metropolitan Board of Works. Prompted by the Great Stink, Parliament sanctioned Bazalgette to design and construct a comprehensive system of sewers, which intercepted London's sewage and diverted it away from its water supply. The system helped purify London's water and saved the city from epidemics. In 1866, the last of the three great British cholera epidemics took hold in a small area of Whitechapel. That area was not yet connected to Bazalgette's system, and the confined extent of the epidemic acted as testament to the efficiency of the system's design.
Years later, the influence of those sanitary reforms on Britain was described by Richard Rogers:
The miasma theory did contribute to containing disease in urban settlements, but did not allow the adoption of a suitable approach to the reuse of excreta in agriculture. It was a major factor in the practice of collecting human excreta from urban settlements and reusing them in the surrounding farmland. That type of resource recovery scheme was common in major cities in the 19th century before the introduction of sewer-based sanitation systems. Nowadays, the reuse of excreta, when done in a hygienic manner, is known as ecological sanitation, and is promoted as a way of "closing the loop".
Throughout the 19th century, concern about public health and sanitation, along with the influence of the miasma theory, were reasons for the advocacy of the then-controversial practice of cremation. If infectious diseases were spread by noxious gases emitted from decaying organic matter, that included decaying corpses. The public health argument for cremation faded with the eclipsing of the miasma theory of disease.
Replacement by germ theory
Although the connection between germ and disease was proposed quite early, it was not until the late 1800s that the germ theory was generally accepted. The miasmatic theory was challenged by John Snow, who suggested that the disease was spread by some poison or morbid material in the water. He suggested this before and in response to a cholera epidemic on Broad Street in central London in 1854. Because of the miasmatic theory's predominance among Italian scientists, the discovery in the same year by Filippo Pacini of the bacillus that caused the disease was completely ignored. It was not until 1876 that Robert Koch proved that the bacterium Bacillus anthracis caused anthrax, which brought a definitive end to the miasma theory.
1854 Broad Street cholera outbreak
The work of John Snow is notable for helping to make the connection between cholera and typhoid epidemics and contaminated water sources, which contributed to the eventual demise of miasma theory. During the cholera epidemic of 1854, Snow traced high mortality rates among the citizens of Soho to a water pump in Broad Street. Snow convinced the local government to remove the pump handle, which resulted in a marked decrease in cases of cholera in the area. In 1857, Snow submitted a paper to the British Medical Journal which attributed high numbers of cholera cases to water sources that were contaminated with human waste. Snow used statistical data to show that citizens who received their water from upstream sources were considerably less likely to develop cholera than those who received their water from downstream sources. Though his research supported his hypothesis that contaminated water, not foul air, was the source of cholera epidemics, a review committee concluded that Snow's findings were not significant enough to warrant change, and they were summarily dismissed. Additionally, other interests intervened in the process of reform. Many water companies and civic authorities pumped water directly from contaminated sources such as the Thames to public wells, and the idea of changing sources or implementing filtration techniques was an unattractive economic prospect. In the face of such economic interests, reform was slow to be adopted.
In 1855, John Snow testified against an amendment to the Nuisances Removal and Diseases Prevention Act that regulated air pollution from some industries. He claimed that:
The same year, William Farr, who was then the major supporter of the miasma theory, issued a report to criticize the germ theory. Farr and the Committee wrote that:
Experiments by Louis Pasteur
The more formal experiments on the relationship between germ and disease were conducted by Louis Pasteur between 1860 and 1864. He discovered the pathology of puerperal fever and the pyogenic vibrio in the blood, and suggested using boric acid to kill these microorganisms before and after confinement.
By 1866, eight years after the death of John Snow, William Farr publicly acknowledged that the miasma theory of the transmission of cholera was wrong, on the basis of his own statistical analysis of the death rate.
Anthrax
Robert Koch is widely known for his work with anthrax, discovering the causative agent of the fatal disease to be Bacillus anthracis. He published the discovery in 1876, while working in Wöllstein, in a booklet translated as The Etiology of Anthrax Disease, Based on the Developmental History of Bacillus Anthracis. His 1877 publication on the structure of the anthrax bacterium included the first photograph of a bacterium. He discovered the formation of spores in anthrax bacteria, which could remain dormant under specific conditions. However, under optimal conditions, the spores were activated and caused disease. To determine this causative agent, he dry-fixed bacterial cultures onto glass slides, used dyes to stain the cultures, and observed them through a microscope. His work with anthrax is notable in that he was the first to link a specific microorganism with a specific disease, rejecting the idea of spontaneous generation and supporting the germ theory of disease.
See also
Germ theory of disease
Airborne disease
Homeopathy
Aromatherapy
Indoor air quality
References
Further reading
External links
Prevailing theories before the germ theory
Cholera theories
Term definition
Obsolete medical theories
Superstitions
Night
Wetlands in folklore
Aphorism
An aphorism (from Greek ἀφορισμός: aphorismos, denoting 'delimitation', 'distinction', and 'definition') is a concise, terse, laconic, or memorable expression of a general truth or principle. Aphorisms are often handed down by tradition from generation to generation.
The concept is generally distinct from those of an adage, brocard, chiasmus, epigram, maxim (legal or philosophical), principle, proverb, and saying; although some of these concepts may be construed as types of aphorism.
Often aphorisms are distinguished from other short sayings by the need for interpretation to make sense of them. In A Theory of the Aphorism, Andrew Hui defined an aphorism as "a short saying that requires interpretation".
A famous example is:
History
The word was first used in the Aphorisms of Hippocrates, a long series of propositions concerning the symptoms and diagnosis of disease and the art of healing and medicine. The often-cited first sentence of this work is "life is short, art is long", usually reversed in order (Ars longa, vita brevis).
This aphorism was later applied or adapted to physical science and then morphed into multifarious aphorisms of philosophy, morality, and literature. Currently, an aphorism is generally understood to be a concise and eloquent statement of truth.
Aphorisms are distinct from axioms: aphorisms generally originate from experience and custom, whereas axioms are self-evident truths and therefore require no additional proof. Aphorisms have been especially used in subjects to which no methodical or scientific treatment was originally applied, such as agriculture, medicine, jurisprudence, and politics.
Literature
Aphoristic collections, sometimes known as wisdom literature, have a prominent place in the canons of several ancient societies, such as the Sutra literature of India, the Biblical Ecclesiastes, Islamic hadiths, the golden verses of Pythagoras, Hesiod's Works and Days, the Delphic maxims, and Epictetus' Handbook. Aphoristic collections also make up an important part of the work of some modern authors. A 1559 oil-on-oak-panel painting, Netherlandish Proverbs (also called The Blue Cloak or The Topsy Turvy World) by Pieter Bruegel the Elder, artfully depicts a land populated with literal renditions of Flemish aphorisms (proverbs) of the day.
The first noted published collection of aphorisms is Adagia by Erasmus. Other important early aphorists were Baltasar Gracián, François de La Rochefoucauld, and Blaise Pascal.
Two influential collections of aphorisms published in the twentieth century were Unkempt Thoughts by Stanisław Jerzy Lec (in Polish) and Itch of Wisdom by Mikhail Turovsky (in Russian and English).
Society
Many societies have traditional sages or culture heroes to whom aphorisms are commonly attributed, such as the Seven Sages of Greece, Chanakya, Confucius, or King Solomon.
Misquoted or misadvised aphorisms are frequently used as a source of humour; for instance, wordplays of aphorisms appear in the works of P. G. Wodehouse, Terry Pratchett, and Douglas Adams. Aphorisms being misquoted by sports players, coaches, and commentators form the basis of Private Eye's Colemanballs section.
Philosophy
Professor of Humanities Andrew Hui, author of A Theory of the Aphorism, offered the following definition of an aphorism: "a short saying that requires interpretation". Hui showed that some of the earliest philosophical texts from traditions around the world used an aphoristic style. Some of the earliest texts in the western philosophical canon feature short statements requiring interpretation, as seen in the Pre-Socratics like Heraclitus and Parmenides. In early Hindu literature, the Vedas were composed of many aphorisms. Likewise, in early Chinese philosophy, Taoist texts like the Tao Te Ching and the Confucian Analects relied on an aphoristic style. Francis Bacon, Blaise Pascal, Desiderius Erasmus, and Friedrich Nietzsche rank among the most notable philosophers who employed them in modern times.
Andrew Hui argued that aphorisms played an important role in the history of philosophy, influencing the favored mediums of philosophical traditions. He argued, for example, that the Platonic Dialogues served as a response to the difficult-to-interpret fragments and phrases for which Pre-Socratic philosophers were famous. Hui proposes that aphorisms often arrive before, after, or in response to more systematic argumentative philosophy. For example, aphorisms may come before a systematic philosophy, because the systematic philosophy consists of the attempt to interpret and explain the aphorisms, as he argues is the case with Confucianism. Alternately, aphorisms may be written against systematic philosophy, as a form of challenge or irreverence, as seen in Nietzsche's work. Lastly, aphorisms may come after or follow from systematic philosophy, as was the case with Francis Bacon, who sought to bring an end to old ways of thinking.
Aphorists
Georges Bataille
George E. P. Box
Jean Baudrillard
Ambrose Bierce (The Devil's Dictionary)
Nicolás Gómez Dávila (Escolios a un texto implícito)
Theodor W. Adorno (Minima Moralia: Reflections from Damaged Life)
F. H. Bradley
Malcolm de Chazal
Emil Cioran
Arkady Davidowitz
Desiderius Erasmus
Gustave Flaubert (Dictionary of Received Ideas)
Benjamin Franklin
Andrzej Maksymilian Fredro
Robert A. Heinlein (The Notebooks of Lazarus Long)
Edmond Jabès
Tomáš Janovic
Joseph Joubert
Franz Kafka
Karl Kraus
Stanisław Jerzy Lec
Georg Christoph Lichtenberg
Andrzej Majewski
Juan Manuel (the second, third and fourth parts of his famous work El Conde Lucanor)
Friedrich Nietzsche
Mark Miremont
Oiva Paloheimo
Dorothy Parker
Patanjali
Petar II Petrović-Njegoš
Faina Ranevskaya
François de La Rochefoucauld
George Santayana
Arthur Schopenhauer
Seneca the Younger
George Bernard Shaw
Mikhail Turovsky
Lev Shestov
Nassim Nicholas Taleb (The Bed of Procrustes)
Lao Tze
Voltaire
Wasif Ali Wasif
Oscar Wilde
Alexander Woollcott
Burchard of Worms
Cheng Yen (Jing Si Aphorism)
Sun Tzu
See also
Adage
Adagia by Desiderius Erasmus Roterodamus
Brocard
Chiasmus
Cliché
Epigram
Epitaph
French moralists
Gospel of Thomas
Legal maxim
Mahavakya
Maxim
Platitude
Proverb
Pseudo-Phocylides
Sacred Scripture:
Book of Proverbs
Ecclesiastes
Hidden Words
Wisdom of Sirach
Saying
Sūtra
The Triads of Ireland, and the Welsh Triads
References
Further reading
Gopnik, Adam, "Brevity, Soul, Wit: The art of the aphorism" (includes discussion of Andrew Hui, A Theory of the Aphorism: From Confucius to Twitter, Princeton, 2019), The New Yorker, 22 July 2019, pp. 67–69. "The aphorism [...] is [...] always an epitome, and seeks an essence. The ability to elide the extraneous is what makes the aphorism bite, but the possibility of inferring backward to a missing text is what makes the aphorism poetic." (p.69.)
External links
Commentary on Hippocrates' Aphorisms
Narrative techniques
Paremiology
Phrases
Stimulant
Stimulants (also known as central nervous system stimulants, or psychostimulants, or colloquially as uppers) are a class of drugs that increase the activity of the brain. They are used for various purposes, such as enhancing alertness, attention, motivation, cognition, mood, and physical performance. Some of the most common stimulants are caffeine, nicotine, amphetamines, cocaine, methylphenidate, and modafinil.
Stimulants work by affecting the levels of certain neurotransmitters, such as dopamine, norepinephrine, serotonin, histamine and acetylcholine, in the synapses between neurons. Stimulants sometimes also work by binding to the receptors for neurotransmitters. These neurotransmitters regulate various functions, such as arousal, attention, the reward system, learning, memory, and emotion. By increasing their availability, stimulants can produce a range of effects, from mild stimulation to euphoria, depending on the specific drug, dose, route of administration, and individual factors.
Stimulants have a long history of use, both for medical and non-medical purposes. They have been used to treat various conditions, such as narcolepsy, attention deficit hyperactivity disorder (ADHD), obesity, depression, and fatigue. They have also been used as recreational drugs, performance-enhancing substances, and cognitive enhancers, by various groups of people, such as students, athletes, artists, workers, and soldiers.
However, stimulants also have potential risks and side effects, such as addiction, tolerance, withdrawal, psychosis, anxiety, insomnia, cardiovascular problems, and neurotoxicity. The misuse and abuse of stimulants can lead to serious health and social consequences, such as overdose, dependence, crime, and violence. Therefore, the use of stimulants is regulated by laws and policies in most countries, and requires medical supervision and prescription in some cases.
Definition
"Stimulant" is an overarching term that covers many drugs, including those that increase the activity of the central nervous system and the body, drugs that are pleasurable and invigorating, and drugs that have sympathomimetic effects. Sympathomimetic effects are those that mimic or copy the actions of the sympathetic nervous system, the part of the nervous system that prepares the body for action, for example by increasing the heart rate, blood pressure, and breathing rate. Stimulants can activate the same receptors as the natural chemicals released by the sympathetic nervous system (namely epinephrine and norepinephrine) and so cause similar effects.
Effects
Acute
Stimulants in therapeutic doses, such as those given to patients with attention deficit hyperactivity disorder (ADHD), increase the ability to focus, vigor, sociability, and libido, and may elevate mood. However, in higher doses, stimulants may actually decrease the ability to focus, a principle of the Yerkes-Dodson law. The Yerkes-Dodson law is a psychological theory describing how stress affects performance: there is an optimal level of stress at which performance peaks, while too much or too little stress impairs it. The relationship can be illustrated by an inverted U-shaped curve, where the peak of the curve represents the optimal level of stress and performance. The law was developed by psychologists Robert Yerkes and John Dillingham Dodson in 1908, based on experiments with mice. Drugs that stimulate the central nervous system, such as those used to treat ADHD, can improve the ability to focus and other aspects of mood and behavior when taken in appropriate doses; at higher doses, however, they push stress past the optimal level and impair performance. In higher doses, stimulants may also produce euphoria, vigor, and a decreased need for sleep.
Many, but not all, stimulants have ergogenic effects, that is, effects that enhance physical performance or endurance; for example, a drug that allows its user to run faster, lift heavier, or last longer is said to be ergogenic. Drugs such as ephedrine, pseudoephedrine, amphetamine and methylphenidate have well-documented ergogenic effects, while cocaine has the opposite effect. Neurocognitive enhancing effects of stimulants, specifically modafinil, amphetamine and methylphenidate, have been reported in healthy adolescents by some studies, and this is a commonly cited reason for use among illicit drug users, particularly college students in the context of studying. Still, the results of these studies are inconclusive: assessing the potential overall neurocognitive benefits of stimulants among healthy youth is challenging due to the diversity within the population, the variability in cognitive task characteristics, and the absence of replication of studies. Research on the cognitive enhancement effects of modafinil in healthy, non-sleep-deprived individuals has yielded mixed results, with some studies suggesting modest improvements in attention and executive functions while others show no significant benefits or even a decline in cognitive function.
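The inverted-U relationship described above can be illustrated with a small, purely illustrative calculation. The following Python sketch models performance as a simple quadratic function of arousal; the function name yerkes_dodson_curve, the quadratic form, and the parameter values are assumptions chosen for the example, not an empirical model of any stimulant.

# Toy illustration of an inverted-U (Yerkes-Dodson-style) relationship.
# The quadratic form and its parameters are illustrative assumptions,
# not an empirical dose-response model.

def yerkes_dodson_curve(arousal, optimum=0.5, peak=1.0):
    """Return a performance score that peaks at `optimum` arousal.

    arousal: normalized arousal/stress level in [0, 1].
    optimum: arousal level at which performance is maximal.
    peak:    performance value at the optimum.
    """
    # Quadratic penalty for distance from the optimum (inverted U).
    return peak - 4 * peak * (arousal - optimum) ** 2

if __name__ == "__main__":
    for level in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"arousal={level:.2f} -> performance={yerkes_dodson_curve(level):.2f}")
    # Performance rises from 0.00, peaks at 1.00 near arousal 0.50,
    # then falls back toward 0.00, mirroring the inverted U described above.

As with the law itself, this curve is only schematic: the location of the peak and the steepness of the decline would differ by drug, dose, task, and individual.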
In some cases, psychiatric phenomena such as stimulant psychosis, paranoia, and suicidal ideation may emerge. Acute toxicity has been reportedly associated with homicide, paranoia, aggressive behavior, motor dysfunction, and punding. The violent and aggressive behavior associated with acute stimulant toxicity may be driven partly by paranoia. Most drugs classified as stimulants are sympathomimetics; that is, they stimulate the sympathetic branch of the autonomic nervous system. This leads to effects such as mydriasis and increased heart rate, blood pressure, respiratory rate and body temperature. When these changes become pathological, they are called arrhythmia, hypertension, and hyperthermia, and may lead to rhabdomyolysis, stroke, cardiac arrest, or seizures. However, given the complexity of the mechanisms that underlie these potentially fatal outcomes of acute stimulant toxicity, it is impossible to determine what dose may be lethal.
Chronic
Assessment of the effects of stimulants is relevant given the large population currently taking stimulants. A systematic review of the cardiovascular effects of prescription stimulants found no association in children, but found a correlation between prescription stimulant use and ischemic heart attacks. A review covering a four-year period found few negative effects of stimulant treatment, but stressed the need for longer-term studies. A review of a year-long period of prescription stimulant use in those with ADHD found that cardiovascular side effects were limited to transient increases in blood pressure. However, a 2024 systematic review of the evidence found that stimulants overall improve ADHD symptoms and broadband behavioral measures in children and adolescents, though they carry risks of side effects such as appetite suppression and other adverse events. Initiation of stimulant treatment in those with ADHD in early childhood appears to carry benefits into adulthood with regard to social and cognitive functioning, and appears to be relatively safe.
Abuse of prescription stimulants (use not following physician instruction) or of illicit stimulants carries many negative health risks. Abuse of cocaine, depending upon the route of administration, increases the risk of cardiorespiratory disease, stroke, and sepsis. Some effects are dependent upon the route of administration, with intravenous use associated with the transmission of many diseases such as hepatitis C and HIV/AIDS and with potential medical emergencies such as infection, thrombosis or pseudoaneurysm, while inhalation may be associated with increased lower respiratory tract infection, lung cancer, and pathological restriction of lung tissue. Cocaine may also increase the risk of autoimmune disease and damage nasal cartilage. Abuse of methamphetamine produces similar effects as well as marked degeneration of dopaminergic neurons, resulting in an increased risk for Parkinson's disease.
Medical uses
Stimulants are widely used throughout the world as prescription medicines as well as without a prescription (either legally or illicitly) as performance-enhancing or recreational drugs. Among narcotics, stimulants produce a noticeable crash or comedown at the end of their effects. In the US, the most frequently prescribed stimulants as of 2013 were lisdexamfetamine (Vyvanse), methylphenidate (Ritalin), and amphetamine (Adderall). It was estimated in 2015 that the percentage of the world population that had used cocaine during a year was 0.4%. For the category "amphetamines and prescription stimulants" (with "amphetamines" including amphetamine and methamphetamine) the value was 0.7%, and for MDMA 0.4%.
Stimulants have been used in medicine for many conditions, including obesity, sleep disorders, mood disorders, impulse control disorders, asthma, nasal congestion and, in the case of cocaine, as local anesthetics. Drugs used to treat obesity are called anorectics and generally include drugs that follow the general definition of a stimulant, but other drugs such as cannabinoid receptor antagonists also belong to this group. Eugeroics are used in the management of sleep disorders characterized by excessive daytime sleepiness, such as narcolepsy, and include stimulants such as modafinil and pitolisant. Stimulants are used in impulse control disorders such as ADHD and off-label in mood disorders such as major depressive disorder to increase energy and focus and to elevate mood. Oral stimulants such as epinephrine, theophylline and salbutamol have been used to treat asthma, but inhaled adrenergic drugs are now preferred due to fewer systemic side effects. Pseudoephedrine is used to relieve nasal or sinus congestion caused by the common cold, sinusitis, hay fever and other respiratory allergies; it is also used to relieve ear congestion caused by ear inflammation or infection.
Depression
Stimulants were one of the first classes of drugs used in the treatment of depression, beginning after the introduction of the amphetamines in the 1930s. However, they were largely abandoned for the treatment of depression following the introduction of conventional antidepressants in the 1950s. More recently, there has been a resurgence of interest in stimulants for depression.
Stimulants produce a fast-acting and pronounced but transient and short-lived mood lift. In relation to this, they are minimally effective in the treatment of depression when administered continuously. In addition, tolerance to the mood-lifting effects of amphetamine has led to dose escalation and dependence. Although the efficacy for depression with continuous administration is modest, it may still reach statistical significance over placebo and provide benefits similar in magnitude to those of conventional antidepressants. The reasons for the short-term mood-improving effects of stimulants are unclear, but may relate to rapid tolerance. Tolerance to the effects of stimulants has been studied and characterized both in animals and humans. Stimulant withdrawal is remarkably similar in its symptoms to those of major depressive disorder.
Chemistry
Classifying stimulants is difficult, because of the large number of classes the drugs occupy, and the fact that they may belong to multiple classes; for example, ecstasy can be classified as a substituted methylenedioxyphenethylamine, a substituted amphetamine and consequently, a substituted phenethylamine.
Major stimulant classes include phenethylamines and their daughter class substituted amphetamines.
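To make the overlapping membership described above concrete, here is a minimal Python sketch that represents a few structural classes as sets and looks up every class containing a given compound. The assignments mirror the MDMA example in the text; the CLASSES mapping and the classes_of helper are illustrative names, and the taxonomy is deliberately incomplete.

# Minimal sketch of overlapping stimulant classes represented as sets.
# Membership follows the MDMA example in the text; this is an illustration
# of the classification problem, not a complete or authoritative taxonomy.

CLASSES = {
    "substituted phenethylamine": {"amphetamine", "methamphetamine", "MDMA"},
    "substituted amphetamine": {"amphetamine", "methamphetamine", "MDMA"},
    "substituted methylenedioxyphenethylamine": {"MDMA"},
}

def classes_of(compound):
    # Return every class in the toy taxonomy that contains the compound.
    return [name for name, members in CLASSES.items() if compound in members]

if __name__ == "__main__":
    print(classes_of("MDMA"))
    # ['substituted phenethylamine', 'substituted amphetamine',
    #  'substituted methylenedioxyphenethylamine']

A single compound appearing under several keys is exactly what makes a flat, single-parent classification of stimulants difficult.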
Amphetamines (class)
Substituted amphetamines are a class of compounds based upon the amphetamine structure; it includes all derivative compounds which are formed by replacing, or substituting, one or more hydrogen atoms in the amphetamine core structure with substituents. Examples of substituted amphetamines are amphetamine (itself), methamphetamine, ephedrine, cathinone, phentermine, mephentermine, bupropion, methoxyphenamine, selegiline, amfepramone, pyrovalerone, MDMA (ecstasy), and DOM (STP). Many drugs in this class work primarily by activating trace amine-associated receptor 1 (TAAR1); in turn, this causes reuptake inhibition and effluxion, or release, of dopamine, norepinephrine, and serotonin. An additional mechanism of some substituted amphetamines is the release of vesicular stores of monoamine neurotransmitters through VMAT2, thereby increasing the concentration of these neurotransmitters in the cytosol, or intracellular fluid, of the presynaptic neuron.
Amphetamine-type stimulants are often used for their therapeutic effects. Physicians sometimes prescribe amphetamine to treat major depression where subjects do not respond well to traditional SSRI medications, but evidence supporting this use is poor or mixed. Notably, two recent large phase III studies of lisdexamfetamine (a prodrug of amphetamine) as an adjunct to an SSRI or SNRI in the treatment of major depressive disorder showed no additional benefit relative to placebo. Numerous studies have demonstrated the effectiveness of drugs such as Adderall (a mixture of salts of amphetamine and dextroamphetamine) in controlling symptoms associated with ADHD. Due to their availability and fast-acting effects, substituted amphetamines are prime candidates for abuse.
Cocaine analogs
Hundreds of cocaine analogs have been created, most of them maintaining a benzyloxy group connected to the 3-carbon of a tropane. Various modifications include substitutions on the benzene ring, as well as additions or substitutions in place of the normal carboxylate on the tropane 2-carbon. Various compounds with similar structure-activity relationships to cocaine that are not technically analogs have been developed as well.
Mechanisms of action
Most stimulants exert their activating effects by enhancing catecholamine neurotransmission. Catecholamine neurotransmitters are employed in regulatory pathways implicated in attention, arousal, motivation, task salience and reward anticipation. Classical stimulants either block the reuptake or stimulate the efflux of these catecholamines, resulting in increased activity of their circuits. Some stimulants, specifically those with empathogenic and hallucinogenic effects, also affect serotonergic transmission. Some stimulants, such as some amphetamine derivatives and, notably, yohimbine, can decrease negative feedback by antagonizing regulatory autoreceptors. Adrenergic agonists, such as, in part, ephedrine, act by directly binding to and activating adrenergic receptors, producing sympathomimetic effects.
There are also more indirect mechanisms of action by which a drug can elicit activating effects. Caffeine is an adenosine receptor antagonist, and only indirectly increases catecholamine transmission in the brain. Pitolisant is a histamine 3 (H3) receptor inverse agonist. As histamine 3 (H3) receptors mainly act as autoreceptors, pitolisant decreases negative feedback to histaminergic neurons, enhancing histaminergic transmission.
The precise mechanism of action of some stimulants, such as modafinil, for treating symptoms of narcolepsy and other sleep disorders, remains unknown.
Notable stimulants
Amphetamine
Amphetamine is a potent central nervous system (CNS) stimulant of the phenethylamine class that is approved for the treatment of attention deficit hyperactivity disorder (ADHD) and narcolepsy. Amphetamine is also used off-label as a performance and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. Although it is a prescription medication in many countries, unauthorized possession and distribution of amphetamine is often tightly controlled due to the significant health risks associated with uncontrolled or heavy use. As a consequence, amphetamine is illegally manufactured in clandestine labs to be trafficked and sold to users. Based upon drug and drug precursor seizures worldwide, illicit amphetamine production and trafficking is much less prevalent than that of methamphetamine.
The first pharmaceutical amphetamine was Benzedrine, a brand of inhalers used to treat a variety of conditions. Because the dextrorotatory isomer has greater stimulant properties, Benzedrine was gradually discontinued in favor of formulations containing all or mostly dextroamphetamine. Presently, it is typically prescribed as mixed amphetamine salts, dextroamphetamine, and lisdexamfetamine.
Amphetamine is a norepinephrine-dopamine releasing agent (NDRA). It enters neurons through dopamine and norepinephrine transporters and facilitates neurotransmitter efflux by activating TAAR1 and inhibiting VMAT2. At therapeutic doses, this causes emotional and cognitive effects such as euphoria, change in libido, increased arousal, and improved cognitive control. Likewise, it induces physical effects such as decreased reaction time, fatigue resistance, and increased muscle strength. In contrast, supratherapeutic doses of amphetamine are likely to impair cognitive function and induce rapid muscle breakdown. Very high doses can result in psychosis (e.g., delusions and paranoia), which very rarely occurs at therapeutic doses even during long-term use. As recreational doses are generally much larger than prescribed therapeutic doses, recreational use carries a far greater risk of serious side effects, such as dependence, which only rarely arises with therapeutic amphetamine use.
Caffeine
Caffeine is a stimulant compound belonging to the xanthine class of chemicals naturally found in coffee, tea, and (to a lesser degree) cocoa or chocolate. It is included in many soft drinks, as well as a larger amount in energy drinks. Caffeine is the world's most widely used psychoactive drug and by far the most common stimulant. In North America, 90% of adults consume caffeine daily.
A few jurisdictions restrict the sale and use of caffeine. In the United States, the FDA has banned the sale of pure and highly concentrated caffeine products for personal consumption, due to the risk of overdose and death. The Australian Government has announced a ban on the sale of pure and highly concentrated caffeine food products for personal consumption, following the death of a young man from acute caffeine toxicity. In Canada, Health Canada has proposed to limit the amount of caffeine in energy drinks to 180 mg per serving, and to require warning labels and other safety measures on these products.
Caffeine is also included in some medications, usually for the purpose of enhancing the effect of the primary ingredient, or reducing one of its side-effects (especially drowsiness). Tablets containing standardized doses of caffeine are also widely available.
Caffeine's mechanism of action differs from that of many stimulants, as it produces its stimulant effects by inhibiting adenosine receptors. Adenosine receptors are thought to be a large driver of drowsiness and sleep, and their action increases with extended wakefulness. Caffeine has been found to increase striatal dopamine in animal models, as well as to inhibit the inhibitory effect of adenosine receptors on dopamine receptors; however, the implications for humans are unknown. Unlike most stimulants, caffeine has no addictive potential. Caffeine does not appear to be a reinforcing stimulus, and some degree of aversion may actually occur; a study on drug abuse liability published in an NIDA research monograph described a group preferring placebo over caffeine. In large telephone surveys only 11% reported dependence symptoms. However, when people were tested in labs, only half of those who claimed dependence actually experienced it, casting doubt on caffeine's ability to produce dependence and putting societal pressures in the spotlight.
Coffee consumption is associated with a lower overall risk of cancer. This is primarily due to a decrease in the risks of hepatocellular and endometrial cancer, but it may also have a modest effect on colorectal cancer. There does not appear to be a significant protective effect against other types of cancers, and heavy coffee consumption may increase the risk of bladder cancer. A protective effect of caffeine against Alzheimer's disease is possible, but the evidence is inconclusive. Moderate coffee consumption may decrease the risk of cardiovascular disease, and it may somewhat reduce the risk of type 2 diabetes. Drinking 1–3 cups of coffee per day does not affect the risk of hypertension compared to drinking little or no coffee. However, those who drink 2–4 cups per day may be at a slightly increased risk. Caffeine increases intraocular pressure in those with glaucoma but does not appear to affect normal individuals. It may protect people from liver cirrhosis. There is no evidence that coffee stunts a child's growth. Caffeine may increase the effectiveness of some medications, including ones used to treat headaches. Caffeine may lessen the severity of acute mountain sickness if taken a few hours prior to attaining a high altitude.
Ephedrine
Ephedrine is a sympathomimetic amine similar in molecular structure to the well-known drugs phenylpropanolamine and methamphetamine, as well as to the important neurotransmitter epinephrine (adrenaline). Ephedrine is commonly used as a stimulant, appetite suppressant, concentration aid, and decongestant, and to treat hypotension associated with anesthesia.
In chemical terms, it is an alkaloid with a phenethylamine skeleton found in various plants in the genus Ephedra (family Ephedraceae). It works mainly by increasing the activity of norepinephrine (noradrenaline) on adrenergic receptors. It is most usually marketed as the hydrochloride or sulfate salt.
The herb má huáng (Ephedra sinica), used in traditional Chinese medicine (TCM), contains ephedrine and pseudoephedrine as its principal active constituents. The same may be true of other herbal products containing extracts from other Ephedra species.
MDMA
3,4-Methylenedioxymethamphetamine (MDMA, ecstasy, or molly) is a euphoriant, empathogen, and stimulant of the amphetamine class. Briefly used by some psychotherapists as an adjunct to therapy, the drug became popular recreationally and the DEA listed MDMA as a Schedule I controlled substance, prohibiting most medical studies and applications. MDMA is known for its entactogenic properties. The stimulant effects of MDMA include hypertension, anorexia (appetite loss), euphoria, social disinhibition, insomnia (enhanced wakefulness/inability to sleep), improved energy, increased arousal, and increased perspiration, among others. Compared to classical stimulants like amphetamine, MDMA enhances serotonergic transmission significantly more relative to catecholaminergic transmission. MDMA does not appear to be significantly addictive or dependence forming.
Due to the relative safety of MDMA, some researchers such as David Nutt have criticized the scheduling level, writing a satirical article finding MDMA to be 28 times less dangerous than horseriding, a condition he termed "equasy" or "Equine Addiction Syndrome".
MDPV
Methylenedioxypyrovalerone (MDPV) is a psychoactive drug with stimulant properties that acts as a norepinephrine-dopamine reuptake inhibitor (NDRI). It was first developed in the 1960s by a team at Boehringer Ingelheim. MDPV remained an obscure stimulant until around 2004, when it was reported to be sold as a designer drug. Products labeled as bath salts containing MDPV were previously sold as recreational drugs in gas stations and convenience stores in the United States, similar to the marketing for Spice and K2 as incense.
Incidents of psychological and physical harm have been attributed to MDPV use.
Mephedrone
Mephedrone is a synthetic stimulant drug of the amphetamine and cathinone classes. Slang names include drone and MCAT. It is reported to be manufactured in China and is chemically similar to the cathinone compounds found in the khat plant of eastern Africa. It comes in the form of tablets or a powder, which users can swallow, snort, or inject, producing similar effects to MDMA, amphetamines, and cocaine.
Mephedrone was first synthesized in 1929, but did not become widely known until it was rediscovered in 2003. By 2007, mephedrone was reported to be available for sale on the Internet; by 2008 law enforcement agencies had become aware of the compound; and, by 2010, it had been reported in most of Europe, becoming particularly prevalent in the United Kingdom. Mephedrone was first made illegal in Israel in 2008, followed by Sweden later that year. In 2010, it was made illegal in many European countries, and, in December 2010, the EU ruled it illegal. In Australia, New Zealand, and the US, it is considered an analog of other illegal drugs and can be controlled by laws similar to the Federal Analog Act. In September 2011, the USA temporarily classified mephedrone as illegal, in effect from October 2011.
Mephedrone is neurotoxic and has abuse potential; its neurotoxicity is predominantly exerted on 5-hydroxytryptamine (5-HT) terminals, mimicking that of MDMA, with which it shares similar subjective effects in abusers.
Methamphetamine
Methamphetamine is a potent psychostimulant of the phenethylamine and amphetamine classes that is used to treat attention deficit hyperactivity disorder (ADHD) and obesity. Methamphetamine exists as two enantiomers, dextrorotatory and levorotatory. Dextromethamphetamine is a stronger CNS stimulant than levomethamphetamine; however, both are addictive and produce the same toxicity symptoms at high doses. Although rarely prescribed due to the potential risks, methamphetamine hydrochloride is approved by the United States Food and Drug Administration (USFDA) under the trade name Desoxyn. Recreationally, methamphetamine is used to increase sexual desire, lift the mood, and increase energy, allowing some users to engage in sexual activity continuously for several days straight.
Methamphetamine may be sold illicitly, either as pure dextromethamphetamine or in an equal parts mixture of the right- and left-handed molecules (i.e., 50% levomethamphetamine and 50% dextromethamphetamine). Both dextromethamphetamine and racemic methamphetamine are schedule II controlled substances in the United States. Also, the production, distribution, sale, and possession of methamphetamine is restricted or illegal in many other countries due to its placement in schedule II of the United Nations Convention on Psychotropic Substances treaty. In contrast, levomethamphetamine is an over-the-counter drug in the United States.
In low doses, methamphetamine can cause an elevated mood and increase alertness, concentration, and energy in fatigued individuals. At higher doses, it can induce psychosis, rhabdomyolysis, and cerebral hemorrhage. Methamphetamine is known to have a high potential for abuse and addiction. Recreational use of methamphetamine may result in psychosis or lead to post-withdrawal syndrome, a withdrawal syndrome that can persist for months beyond the typical withdrawal period. Unlike amphetamine and cocaine, methamphetamine is neurotoxic to humans, damaging both dopamine and serotonin neurons in the central nervous system (CNS). Unlike the long-term use of amphetamine in prescription doses, which may improve certain brain regions in individuals with ADHD, there is evidence that methamphetamine causes brain damage from long-term use in humans; this damage includes adverse changes in brain structure and function, such as reductions in gray matter volume in several brain regions and adverse changes in markers of metabolic integrity. However, recreational amphetamine doses may also be neurotoxic.
Methylphenidate
Methylphenidate is a stimulant drug that is often used in the treatment of ADHD and narcolepsy and occasionally to treat obesity in combination with diet restraints and exercise. Its effects at therapeutic doses include increased focus, increased alertness, decreased appetite, decreased need for sleep and decreased impulsivity. Methylphenidate is not usually used recreationally, but when it is used, its effects are very similar to those of amphetamines.
Methylphenidate acts as a norepinephrine-dopamine reuptake inhibitor (NDRI) by blocking the norepinephrine transporter (NET) and the dopamine transporter (DAT). Methylphenidate has a higher affinity for the dopamine transporter than for the norepinephrine transporter, and so its effects are mainly due to elevated dopamine levels caused by the inhibited reuptake of dopamine; however, increased norepinephrine levels also contribute to various effects caused by the drug.
Methylphenidate is sold under a number of brand names, including Ritalin. Other formulations include the long-acting tablet Concerta and the long-acting transdermal patch Daytrana.
Cocaine
Cocaine is a serotonin-norepinephrine-dopamine reuptake inhibitor (SNDRI). Cocaine is made from the leaves of the coca shrub, which grows in the mountain regions of South American countries such as Bolivia, Colombia, and Peru, regions in which it was cultivated and used for centuries mainly by the Aymara people. In Europe, North America, and some parts of Asia, the most common form of cocaine is a white crystalline powder. Cocaine is a stimulant but is not normally prescribed therapeutically for its stimulant properties, although it sees clinical use as a local anesthetic, in particular in ophthalmology. Most cocaine use is recreational and its abuse potential is high (higher than that of amphetamine), and so its sale and possession are strictly controlled in most jurisdictions. Other tropane-derivative drugs related to cocaine, such as troparil and lometopane, are also known but have not been widely sold or used recreationally.
Nicotine
Nicotine is the active chemical constituent in tobacco, which is available in many forms, including cigarettes, cigars, chewing tobacco, and smoking cessation aids such as nicotine patches, nicotine gum, and electronic cigarettes. Nicotine is used widely throughout the world for its stimulating and relaxing effects. Nicotine exerts its effects through the agonism of nicotinic acetylcholine receptors, resulting in multiple downstream effects such as an increase in the activity of dopaminergic neurons in the midbrain reward system; in addition, acetaldehyde, one of the constituents of tobacco, decreases the expression of monoamine oxidase in the brain. Nicotine is addictive and dependence forming. Tobacco, the most common source of nicotine, has an overall harm score (to users and others) 3 percent below that of cocaine and 13 percent above that of amphetamines, ranking it the 6th most harmful of the 20 drugs assessed, as determined by a multi-criteria decision analysis.
Phenylpropanolamine
Phenylpropanolamine (PPA; Accutrim; β-hydroxyamphetamine), also known as the stereoisomers norephedrine and norpseudoephedrine, is a psychoactive drug of the phenethylamine and amphetamine chemical classes that is used as a stimulant, decongestant, and anorectic agent. It is commonly used in prescription and over-the-counter cough and cold preparations. In veterinary medicine, it is used to control urinary incontinence in dogs under trade names Propalin and Proin.
In the United States, PPA is no longer sold without a prescription due to a possible increased risk of stroke in younger women. In a few countries in Europe, however, it is still available either by prescription or sometimes over-the-counter. In Canada, it was withdrawn from the market on 31 May 2001. In India, human use of PPA and its formulations were banned on 10 February 2011.
Lisdexamfetamine
Lisdexamfetamine (Vyvanse, etc.) is an amphetamine-type medication, sold for use in treating ADHD. Its effects typically last around 14 hours. Lisdexamfetamine is inactive on its own and is metabolized into dextroamphetamine in the body. Consequently, it has a lower abuse potential.
Pseudoephedrine
Pseudoephedrine is a sympathomimetic drug of the phenethylamine and amphetamine chemical classes. It may be used as a nasal/sinus decongestant, as a stimulant, or as a wakefulness-promoting agent.
The salts pseudoephedrine hydrochloride and pseudoephedrine sulfate are found in many over-the-counter preparations, either as a single ingredient or (more commonly) in combination with antihistamines, guaifenesin, dextromethorphan, and/or paracetamol (acetaminophen) or an NSAID (such as aspirin or ibuprofen). It is also used as a precursor chemical in the illegal production of methamphetamine.
Catha edulis (Khat)
Khat is a flowering plant native to the Horn of Africa and the Arabian Peninsula.
Khat contains a monoamine alkaloid called cathinone, a "keto-amphetamine". This alkaloid causes excitement, loss of appetite, and euphoria. In 1980, the World Health Organization (WHO) classified it as a drug of abuse that can produce mild to moderate psychological dependence (less than tobacco or alcohol), although the WHO does not consider khat to be seriously addictive. It is banned in some countries, such as the United States, Canada, and Germany, while its production, sale, and consumption are legal in other countries, including Djibouti, Ethiopia, Somalia, Kenya and Yemen.
Modafinil
Modafinil is a eugeroic medication, which means that it promotes wakefulness and alertness. Modafinil is sold under the brand name Provigil among others. Modafinil is used to treat excessive daytime sleepiness due to narcolepsy, shift work sleep disorder, or obstructive sleep apnea. While it has seen off-label use as a purported cognitive enhancer, the research on its effectiveness for this use is not conclusive. Despite being a CNS stimulant, the addiction and dependence liabilities of modafinil are considered very low. Although modafinil shares biochemical mechanisms with stimulant drugs, it is less likely to have mood-elevating properties, and its similarity to caffeine in this respect is not clearly established. Unlike other stimulants, modafinil does not typically induce euphoria, the intense feeling of pleasure or well-being that is a potential indicator of abuse liability. In clinical trials, modafinil has shown little evidence of abuse potential, which is why it is considered to have a low risk of addiction and dependence; caution is nonetheless advised.
Pitolisant
Pitolisant is an inverse agonist (antagonist) of the histamine 3 (H3) autoreceptor. As such, pitolisant is an antihistamine medication that also belongs to the class of CNS stimulants. Pitolisant is also considered a eugeroic, which means that it promotes wakefulness and alertness. Pitolisant is the first eugeroic drug that acts by blocking the H3 autoreceptor.
Pitolisant has been shown to be effective and well-tolerated for the treatment of narcolepsy with or without cataplexy.
Pitolisant is the only non-controlled anti-narcoleptic drug in the US. It has shown minimal abuse risk in studies.
Blocking the histamine 3 (H3) autoreceptor increases the activity of histamine neurons in the brain. The H3 autoreceptors regulate histaminergic activity in the central nervous system (and, to a lesser extent, the peripheral nervous system) by inhibiting histamine biosynthesis and release upon binding endogenous histamine. By preventing the binding of endogenous histamine at the H3 receptor, as well as producing a response opposite to that of endogenous histamine at the receptor (inverse agonism), pitolisant enhances histaminergic activity in the brain.
Recreational use and issues of abuse
Stimulants enhance the activity of the central and peripheral nervous systems. Common effects may include increased alertness, awareness, wakefulness, endurance, productivity, motivation, arousal, locomotion, heart rate, and blood pressure, as well as a diminished desire for food and sleep. Use of stimulants may cause the body to significantly reduce its production of natural body chemicals that fulfill similar functions. Once the effect of the ingested stimulant has worn off, and until the body reestablishes its normal state, the user may feel depressed, lethargic, confused, and miserable. This is referred to as a "crash", and may provoke reuse of the stimulant.
Abuse of central nervous system (CNS) stimulants is common. Addiction to some CNS stimulants can quickly lead to medical, psychiatric, and psychosocial deterioration. Drug tolerance, dependence, and sensitization, as well as a withdrawal syndrome, can occur. Stimulants may be screened for in animal drug-discrimination and self-administration models, which have high sensitivity but low specificity. Research using progressive ratio self-administration protocols has found that amphetamine, methylphenidate, modafinil, cocaine, and nicotine all have a higher break point than placebo, and that the break point scales with dose, indicating reinforcing effects. A progressive ratio self-administration protocol tests how strongly an animal or human wants a drug by requiring an action (such as pressing a lever or poking a nose device) to obtain it; the number of actions required increases each time, so the drug becomes progressively harder to obtain. The highest number of actions the subject is willing to perform for the drug is called the break point; the higher the break point, the stronger the reinforcing effect. In contrast to classical stimulants such as amphetamine, the reinforcing effects of modafinil depend on what the subjects have to do after receiving the drug. When a performance task follows (such as solving a puzzle or remembering something), subjects work harder for modafinil than for placebo and choose to self-administer it; when a relaxation task follows (such as listening to music or watching a video), they do not. This suggests that modafinil is rewarding mainly when it helps subjects do something better or faster, which is consistent with the observation that modafinil is not commonly abused or associated with dependence, unlike other stimulants.
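To make the break-point idea concrete, here is a minimal Python sketch of a progressive-ratio schedule; the demand values and step size are hypothetical illustrations, not figures from any study.

```python
# Illustrative sketch of a progressive-ratio schedule (hypothetical values,
# not data from any study). The "subject" keeps responding as long as its
# willingness to work covers the current response requirement; the last
# completed requirement is the break point.

def break_point(max_effort: int, step: int = 5) -> int:
    """Return the last response requirement the subject completes.

    max_effort -- the most lever presses the subject will emit for one dose
    step       -- how much the requirement grows after each earned dose
    """
    requirement = 1
    last_completed = 0
    while requirement <= max_effort:
        last_completed = requirement  # subject completes this ratio
        requirement += step           # schedule gets progressively harder
    return last_completed

# A drug that supports more effort (for example a higher dose of a
# reinforcing stimulant) yields a higher break point than placebo.
print(break_point(max_effort=12))   # placebo-like: low willingness to work
print(break_point(max_effort=90))   # reinforcing drug: higher break point
```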
Treatment for misuse
Psychosocial treatments, such as contingency management, have demonstrated improved effectiveness when added to treatment as usual consisting of counseling and/or case management, as shown by decreased dropout rates and longer periods of abstinence.
Testing
The presence of stimulants in the body may be tested by a variety of procedures. Serum and urine are the common sources of testing material although saliva is sometimes used. Commonly used tests include chromatography, immunologic assay, and mass spectrometry.
See also
Antidepressants
Depressants
Hallucinogens
Nootropics
Psychoanaleptics
Notes
References
External links
Asia & Pacific Amphetamine-Type Stimulants Information Centre (APAIC)
Drug classes defined by psychological effects
Psychopharmacology
Human pathogen
A human pathogen is a pathogen (microbe or microorganism such as a virus, bacterium, prion, or fungus) that causes disease in humans.
The human physiological defense against common pathogens (such as Pneumocystis) is mainly the responsibility of the immune system, with help from some of the body's normal microbiota. However, if the immune system or "good" microbiota are damaged in any way (such as by chemotherapy, human immunodeficiency virus (HIV), or antibiotics being taken to kill other pathogens), pathogenic bacteria that were being held at bay can proliferate and cause harm to the host. Such cases are called opportunistic infections.
Some pathogens (such as the bacterium Yersinia pestis, which may have caused the Black Death, the Variola virus, and the malaria protozoa) have been responsible for massive numbers of casualties and have had numerous effects on affected groups. Of particular note in modern times are HIV, which has infected tens of millions of humans globally, and the influenza virus. Today, while many medical advances have been made to safeguard against infection by pathogens, through the use of vaccination, antibiotics, and fungicides, pathogens continue to threaten human life. Social advances such as food safety, hygiene, and water treatment have reduced the threat from some pathogens.
Types
Viral
Pathogenic viruses are mainly those of the families Adenoviridae, Picornaviridae, Herpesviridae, Hepadnaviridae, Coronaviridae, Flaviviridae, Retroviridae, Orthomyxoviridae, Paramyxoviridae, Papovaviridae, Polyomaviridae, Poxviridae, Rhabdoviridae, and Togaviridae. Some notable pathogenic viruses cause smallpox, influenza, mumps, measles, chickenpox, ebola, and rubella. Viruses typically range between 20 and 300 nanometers in size.
This type of pathogen is not cellular; it is composed of either RNA (ribonucleic acid) or DNA (deoxyribonucleic acid) within a protein shell, the capsid. Pathogenic viruses infiltrate host cells and manipulate the organelles within the cell, such as the ribosomes, Golgi apparatus, and endoplasmic reticulum, in order to multiply, which commonly results in the death of the host cell. The newly produced viruses, previously contained within the lipid bilayer of the cell membrane, are then released into the intercellular matrix to infect neighboring cells and continue the viral life cycle.
White blood cells surround and consume virus particles in the extracellular matrix using a mechanism known as phagocytosis (a type of endocytosis), reducing and fighting the infection. Components within the white blood cell then destroy the virus and recycle its components for the body to use.
Bacterial
Although the vast majority of bacteria are harmless or beneficial to one's body, a few pathogenic bacteria can cause infectious diseases. The most common bacterial disease is tuberculosis, caused by the bacterium Mycobacterium tuberculosis, which affects about 2 million people mostly in sub-Saharan Africa. Pathogenic bacteria contribute to other globally important diseases, such as pneumonia, which can be caused by bacteria such as Streptococcus and Pseudomonas, and foodborne illnesses, which can be caused by bacteria such as Shigella, Campylobacter, and Salmonella. Pathogenic bacteria also cause infections such as tetanus, typhoid fever, diphtheria, syphilis, and Hansen's disease. They typically range between 1 and 5 micrometers in length.
Fungal
Fungi are a eukaryotic kingdom of microbes that are usually saprophytes, but can cause diseases in humans. Life-threatening fungal infections in humans most often occur in immunocompromised patients or vulnerable people with a weakened immune system, although fungi are common problems in the immunocompetent population as the causative agents of skin, nail, or yeast infections. Most antibiotics that function on bacterial pathogens cannot be used to treat fungal infections because fungi and their hosts both have eukaryotic cells. Most clinical fungicides belong to the azole group. The typical fungal spore size is 1-40 micrometers in length.
Other parasites
Protozoans are single-celled eukaryotes that feed on microorganisms and organic tissues. They have been described as "one-celled animals" because they display animal-like traits such as motility and predation and lack a cell wall. Many protozoan pathogens are considered human parasites, as they cause a variety of diseases such as malaria, amoebiasis, babesiosis, giardiasis, toxoplasmosis, cryptosporidiosis, trichomoniasis, Chagas disease, leishmaniasis, African trypanosomiasis (sleeping sickness), Acanthamoeba keratitis, and primary amoebic meningoencephalitis (naegleriasis).
Parasitic worms (helminths) are macroparasites that can be seen with the naked eye. The worms live and feed in their living host, receiving nourishment and shelter while affecting how the host digests nutrients. They also manipulate the host's immune system by secreting immunomodulatory products, which allows them to live in their host for years. Most parasitic worms are intestinal: they are soil-transmitted and infect the digestive tract; other parasitic worms live in the host's blood vessels. Parasitic worms can cause weakness and a range of diseases in both humans and animals; helminthiasis (worm infection), ascariasis, and enterobiasis (pinworm infection) are a few examples.
Prionic
Prions are misfolded proteins that are transmissible and can induce abnormal folding of normal proteins in the brain. They do not contain any DNA or RNA and cannot replicate other than by converting already existing normal proteins to the misfolded state. These abnormally folded proteins are characteristically found in many neurodegenerative diseases, as they aggregate in the central nervous system and create plaques that damage the tissue structure, essentially creating "holes" in the tissue. It has been found that prions are transmitted in three ways: acquired, familial, and sporadic. Plants have also been found to act as vectors for prions. There are eight prion diseases that affect mammals, such as scrapie, bovine spongiform encephalopathy (mad cow disease), and feline spongiform encephalopathy (FSE), and ten that affect humans, such as Creutzfeldt–Jakob disease (CJD) and fatal familial insomnia (FFI).
Animal pathogens
Animal pathogens are disease-causing agents of wild and domestic animal species, at times including humans.
Virulence
Virulence (the tendency of a pathogen to cause damage to a host's fitness) evolves when that pathogen can spread from a diseased host, despite that host being very debilitated. An example is the malaria parasite, which can spread from a person near death, by hitching a ride to a healthy person on a mosquito that has bitten the diseased person. This is called horizontal transmission in contrast to vertical transmission, which tends to evolve symbiosis (after a period of high morbidity and mortality in the population) by linking the pathogen's evolutionary success to the evolutionary success of the host organism.
Evolutionary medicine has found that under horizontal transmission, the host population might never develop tolerance to the pathogen.
Transmission
Transmission of pathogens occurs through many different routes, including airborne, direct or indirect contact, sexual contact, through blood, breast milk, or other body fluids, and through the fecal-oral route. One of the primary pathways by which food or water become contaminated is from the release of untreated sewage into a drinking water supply or onto cropland, with the result that people who eat or drink contaminated sources become infected. In developing countries, most sewage is discharged into the environment or on cropland; even in developed countries, some locations have periodic system failures that result in sanitary sewer overflows.
Examples
Bacillus anthracis — the causative agent of anthrax in humans and animals
Clostridium botulinum — releases the most powerful neurotoxin leading to death from botulism
Mycobacterium tuberculosis — the causative agent of most cases of tuberculosis
Mycobacterium leprae — the bacterium that causes leprosy (Hansen's disease)
Yersinia pestis — causes pneumonic, septicemic, and the notorious bubonic plague (Black Death)
Rickettsia prowazekii — the etiologic agent of typhus fever
Bartonella spp.
Spanish influenza virus
Entamoeba histolytica — an amoeba that causes amoebiasis
See also
Cancer bacteria
Emerging Pathogens Institute
Oncovirus
List of clinically important bacteria
Lists of diseases
List of human diseases associated with infectious pathogens
List of infectious diseases
List of parasites of humans
References
External links
Infectious Disease -- University of Arizona (microvet.arizona.edu)
Pronunciation Guide to Microorganisms
Microbiology
Emergency
An emergency is an urgent, unexpected, and usually dangerous situation that poses an immediate risk to health, life, property, or environment and requires immediate action. Most emergencies require urgent intervention to prevent a worsening of the situation, although in some situations, mitigation may not be possible and agencies may only be able to offer palliative care for the aftermath.
While some emergencies are self-evident (such as a natural disaster that threatens many lives), many smaller incidents require that an observer (or affected party) decide whether it qualifies as an emergency.
The precise definition of an emergency, the agencies involved and the procedures used, vary by jurisdiction, and this is usually set by the government, whose agencies (emergency services) are responsible for emergency planning and management.
Defining an emergency
An incident qualifies as an emergency if it meets one or more of the following criteria:
Poses an immediate threat to life, health, property, or environment
Has already caused loss of life, health detriments, property damage, or environmental damage
Has a high probability of escalating to cause immediate danger to life, health, property, or environment
In the United States, most states mandate that a notice be printed in each telephone book requiring a person to relinquish use of a telephone line (such as a party line) if someone else requests it to report an emergency. State statutes typically define an emergency as "...a condition where life, health, or property is in jeopardy, and the prompt summoning of aid is essential."
Whilst most emergency services agree on protecting human health, life and property, the environmental impacts are not considered sufficiently important by some agencies. This also extends to areas such as animal welfare, where some emergency organizations cover this element through the "property" definition, where animals owned by a person are threatened (although this does not cover wild animals). This means that some agencies do not mount an "emergency" response where it endangers wild animals or environment, though others respond to such incidents (such as oil spills at sea that threaten marine life). The attitude of the agencies involved is likely to reflect the predominant opinion of the government of the area.
Types of emergency
Dangers to life
Many emergencies cause an immediate danger to the life of people involved. This can range from emergencies affecting a single person, such as the entire range of medical emergencies including heart attacks, strokes, cardiac arrest and trauma, to incidents that affect large numbers of people such as natural disasters including tornadoes, hurricanes, floods, earthquakes, mudslides and outbreaks of diseases such as coronavirus, cholera, Ebola, and malaria.
Most agencies consider these the highest priority emergency, which follows the general school of thought that nothing is more important than human life.
Dangers to health
Some emergencies are not necessarily immediately threatening to life, but might have serious implications for the continued health and well-being of a person or persons (though a health emergency can subsequently escalate to life-threatening).
The causes of a health emergency are often very similar to the causes of an emergency threatening to life, which includes medical emergencies and natural disasters, although the range of incidents that can be categorized here is far greater than those that cause a danger to life (such as broken limbs, which do not usually cause death, but immediate intervention is required if the person is to recover properly). Many life emergencies, such as cardiac arrest, are also health emergencies.
Dangers to the environment
Some emergencies do not immediately endanger life, health or property, but do affect the natural environment and creatures living within it. Not all agencies consider this a genuine emergency, but it can have far-reaching effects on animals and the long term condition of the land. Examples would include forest fires and marine oil spills.
Systems of classifying emergencies
Agencies across the world have different systems for classifying incidents, but all of them serve to help allocate finite resources by prioritising between different emergencies.
The first stage of any classification is likely to define whether the incident qualifies as an emergency, and consequently whether it warrants an emergency response. Some agencies may still respond to non-emergency calls, depending on their remit and the availability of resources. An example of this would be a fire department responding to help retrieve a cat from a tree, where no life, health or property is immediately at risk.
Following this, many agencies assign a sub-classification to the emergency, prioritising incidents that have the most potential for risk to life, health or property (in that order). For instance, many ambulance services use a system called the Advanced Medical Priority Dispatch System (AMPDS) or a similar solution. The AMPDS categorises all calls to the ambulance service using it as either category 'A' (immediately life-threatening), category 'B' (immediately health-threatening) or category 'C' (non-emergency call that still requires a response). Some services have a fourth category, where they believe that no response is required after clinical questions are asked.
Another system for prioritizing medical calls is known as Emergency Medical Dispatch (EMD). Jurisdictions that use EMD typically assign a code of "alpha" (low priority), "bravo" (medium priority), "charlie" (requiring advanced life support), "delta" (high priority, requiring advanced life support) or "echo" (maximum possible priority, e.g., witnessed cardiac arrests) to each inbound request for service; these codes are then used to determine the appropriate level of response.
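As a minimal illustration of how such dispatch codes might map to response levels in software, here is a hypothetical Python lookup based only on the ordering described above; real EMD systems use far more detailed determinant codes.

```python
# Hypothetical mapping of EMD priority codes to response levels, based only
# on the ordering described above; real dispatch systems are more detailed.
EMD_PRIORITY = {
    "alpha":   "low priority",
    "bravo":   "medium priority",
    "charlie": "requires advanced life support",
    "delta":   "high priority, requires advanced life support",
    "echo":    "maximum priority (e.g. witnessed cardiac arrest)",
}

def response_level(code: str) -> str:
    """Look up the response level for an inbound call's EMD code."""
    return EMD_PRIORITY.get(code.lower(), "unknown code - default to triage")

print(response_level("Delta"))
```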
Other systems (especially as regards major incidents) use objective measures to direct resource. Two such systems are SAD CHALET and ETHANE, which are both mnemonics to help emergency services staff classify incidents, and direct resource. Each of these acronyms helps ascertain the number of casualties (usually including the number of dead and number of non-injured people involved), how the incident has occurred, and what emergency services are required.
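To show how a mnemonic such as ETHANE can structure an incident report, the sketch below models one commonly cited expansion of the acronym (Exact location, Type of incident, Hazards, Access, Number of casualties, Emergency services) as a simple data structure; since the text above does not spell the acronym out, these field names and the example values are assumptions for illustration.

```python
# A simple data structure for an ETHANE-style major-incident report.
# The field names follow one commonly cited expansion of the mnemonic and
# are an assumption; the surrounding text does not spell the acronym out.
from dataclasses import dataclass

@dataclass
class EthaneReport:
    exact_location: str        # E - where the incident is
    type_of_incident: str      # T - what has happened
    hazards: str               # H - present and potential hazards
    access: str                # A - best routes in and out
    number_of_casualties: int  # N - including dead and uninjured involved
    emergency_services: str    # E - services present and required

# Hypothetical example report.
report = EthaneReport(
    exact_location="motorway junction 18, westbound carriageway",
    type_of_incident="multi-vehicle road traffic collision",
    hazards="fuel spill, moving traffic",
    access="westbound hard shoulder",
    number_of_casualties=7,
    emergency_services="police and fire on scene; two more ambulances required",
)
print(report)
```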
Agencies involved in dealing with emergencies
Most developed countries have a number of emergency services operating within them, whose purpose is to provide assistance in dealing with any emergency. They are often government operated, paid for from tax revenue as a public service, but in some cases, they may be private companies, responding to emergencies in return for payment, or they may be voluntary organisations, providing the assistance from funds raised from donations.
Most developed countries operate three core emergency services:
Police – handle mainly crime-related emergencies.
Fire – handle fire-related emergencies and usually possess secondary rescue duties.
Medical – handle medical-related emergencies.
There may also be a number of specialized emergency services, which may be a part of one of the core agencies, or may be separate entities who assist the main agencies. This can include services, such as bomb disposal, search and rescue, and hazardous material operations.
The Military and the Amateur Radio Emergency Service (ARES) or Radio Amateur Civil Emergency Service (RACES) help in large emergencies such as a disaster or major civil unrest.
Summoning emergency services
Most countries have an emergency telephone number, also known as the universal emergency number, which can be used to summon the emergency services to any incident. This number varies from country to country (and in some cases by region within a country), but in most cases, they are in a short number format, such as 911 (United States and many parts of Canada), 999 (United Kingdom), 112 (Europe) and 000 (Australia).
To simplify the summoning of emergency services, EmerGa, a French first aid association recognized as being of public interest, began developing the e-mergency mobile application in 2024. The application is intended to offer quick access to emergency services worldwide through a visual interface, removing the need to memorize different emergency numbers.
The majority of mobile phones can also dial the emergency services, even if the phone keyboard is locked or if the phone has an expired or missing SIM card, although the provision of this service varies by country and network.
Civil emergency services
In addition to those services provided specifically for emergencies, there may be a number of agencies who provide an emergency service as an incidental part of their normal 'day job' provision. This can include public utility workers, such as in provision of electricity or gas, who may be required to respond quickly, as both utilities have a large potential to cause danger to life, health and property if there is an infrastructure failure.
Domestic emergency services
Domestic emergency services are generally pay-per-use services provided by small, medium or large businesses that attend to emergencies within the boundaries of their licensing or capabilities. These tend to be emergencies where health or property is perceived to be at risk but which may not qualify for an official emergency response. Domestic emergency services are in principle similar to civil emergency services in that workers perform corrective repairs to essential services and make themselves available at all times; however, the service is provided at a cost. An example would be an emergency plumber.
Emergency action principles (EAP)
Emergency action principles are key 'rules' that guide the actions of rescuers and potential rescuers. Because of the inherent nature of emergencies, no two are likely to be the same, so emergency action principles help to guide rescuers at incidents, by sticking to some basic tenets.
The adherence to (and contents of) the principles by would-be rescuers varies widely based on the training the people involved have received, the support available from emergency services (and the time it takes for them to arrive) and the emergency itself.
Key emergency principle
The key principle taught in almost all systems is that the rescuer, whether a lay person or a professional, should assess the situation for danger.
The reason that an assessment for danger is given such high priority is that it is core to emergency management that rescuers do not become secondary victims of any incident, as this creates a further emergency that must be dealt with.
A typical assessment for danger would involve observation of the surroundings, starting with the cause of the accident (e.g. a falling object) and expanding outwards to include any situational hazards (e.g. fast moving traffic) and history or secondary information given by witnesses, bystanders or the emergency services (e.g. an attacker still waiting nearby).
Once a primary danger assessment has been completed, this should not end the system of checking for danger, but should inform all other parts of the process.
If at any time the risk from any hazard poses a significant danger (as a factor of likelihood and seriousness) to the rescuer, they should consider whether they should approach the scene (or leave the scene if appropriate).
Managing an emergency
There are many emergency services protocols that apply in an emergency, which usually start with planning before an emergency occurs. One commonly used system describes four phases: preparedness, response, recovery, and mitigation.
The planning phase starts at preparedness, where the agencies decide how to respond to a given incident or set of circumstances. This should ideally include lines of command and control, and division of activities between agencies. This avoids potentially negative situations such as three separate agencies all starting an official emergency shelter for victims of a disaster.
Following an emergency occurring, the agencies then move to a response phase, where they execute their plans, and may end up improvising some areas of their response (due to gaps in the planning phase, which are inevitable due to the individual nature of most incidents).
Agencies may then be involved in recovery following the incident, where they assist in the clear up from the incident, or help the people involved overcome their mental trauma.
The final phase in the circle is mitigation, which involves taking steps to ensure no re-occurrence is possible, or putting additional plans in place to ensure less damage is done. This should feed back into the preparedness stage, with updated plans in place to deal with future emergencies, thus completing the circle.
State of emergency
In the event of a major incident, such as civil unrest or a major disaster, many governments maintain the right to declare a state of emergency, which gives them extensive powers over the daily lives of their citizens and may include temporary curtailment of certain civil rights, including the right to trial. For instance, to discourage looting of an evacuated area, a shoot-on-sight policy, however unlikely to be carried out, may be publicized.
See also
Certified first responder
First aid
Emergency Communication System
Emergency medical service
Emergency Response Information Network
Emergency sanitation
Lockdown
Prevention
Natural disaster
Maritime emergency
SWAT (Special Weapons And Tactics)
References
External links
Emergency management
Safety
Crisis
Legal doctrines and principles
Hypervitaminosis
Hypervitaminosis is a condition of abnormally high storage levels of vitamins, which can lead to various symptoms such as overexcitement, irritability, or even toxicity. Specific medical names of the different conditions are derived from the vitamin involved: an excess of vitamin A, for example, is called hypervitaminosis A. Hypervitaminoses are primarily caused by the fat-soluble vitamins (D and A), as these are stored by the body for longer than the water-soluble vitamins.
Generally, toxic levels of vitamins stem from high supplement intake rather than from natural sources alone, and often from the combination of natural sources, derived vitamins, and enhancers (vitamin boosters). Toxicities of fat-soluble vitamins can also be caused by a large intake of highly fortified foods, but natural foods eaten in modest amounts rarely deliver extreme or dangerous levels of fat-soluble vitamins. The Dietary Reference Intake recommendations from the United States Department of Agriculture define a "tolerable upper intake level" for most vitamins.
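As a sketch of how a "tolerable upper intake level" check could work in practice, the snippet below totals intake from several sources and compares it against an upper limit; the limit values used here are placeholders for illustration, not official Dietary Reference Intake figures.

```python
# Sketch: comparing total daily vitamin intake against a tolerable upper
# intake level (UL). The UL values below are placeholders for illustration,
# NOT official Dietary Reference Intake figures.
PLACEHOLDER_UL_MICROGRAMS = {"vitamin A": 3000, "vitamin D": 100}

def exceeds_upper_limit(vitamin: str, sources_ug: list[float]) -> bool:
    """Return True if combined daily intake from all sources exceeds the UL."""
    total = sum(sources_ug)
    return total > PLACEHOLDER_UL_MICROGRAMS[vitamin]

# Intake from diet, a multivitamin, and a separate booster supplement (micrograms/day).
print(exceeds_upper_limit("vitamin A", [900, 1500, 1200]))  # True - over the limit
print(exceeds_upper_limit("vitamin D", [15, 25]))           # False - within the limit
```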
For those who are entirely healthy and do not experience long periods of avitaminosis, vitamin overdose can be avoided by not taking more than the normal or recommended amount of multi-vitamin supplement shown on the bottle and not ingesting multiple vitamin-containing supplements concurrently.
Signs and symptoms
A few described symptoms:
Frequent urination and/or cloudy urine
Increased urine amount
Eye irritation and/or increased sensitivity to light
Irregular and/or rapid heartbeat
Bone and joint pain (associated with hypervitaminosis A)
Muscle pain
Confusion and mood changes (e.g. irritability, inability to focus)
Convulsions
Fatigue
Headache
Flushing of skin (associated with niacin (vitamin B3) overdose)
Skin disturbances (e.g. dryness, itching, cracking of skin, rashes, increased sensitivity to sun)
Changes of hair texture (e.g. thickening and/or clumping of hair)
Appetite loss
Constipation (associated with iron or calcium overdose)
Nausea and vomiting
Diarrhoea
Moderate weight loss (more commonly seen in long-term overdose cases)
Causes
With few exceptions, like some vitamins from B-complex, hypervitaminosis usually occurs with the fat-soluble vitamins A and D, which are stored, respectively, in the liver and fatty tissues of the body. These vitamins build up and remain for a longer time in the body than water-soluble vitamins. Conditions include:
Hypervitaminosis A
Hypervitaminosis D
Vitamin B3 § Toxicity
Megavitamin-B6 syndrome
Vitamin E toxicity
Prevention
In healthy individuals who have not experienced periods of avitaminosis or dietary vitamin lack for at least two years, prevention consists of not taking more than the expected normal or recommended amount of vitamin supplements.
Epidemiology
In the United States, overdose exposure to all formulations of "vitamins" (which includes multi-vitamin/mineral products) was reported by 62,562 individuals in 2004 with nearly 80% of these exposures in children under the age of 6, leading to 53 "major" life-threatening outcomes and 3 deaths (2 from vitamins D and E; 1 from a multivitamin with iron). This may be compared to the 19,250 people who died of unintentional poisoning of all kinds in the U.S. in the same year (2004). In 2016, overdose exposure to all formulations of vitamins and multi-vitamin/mineral formulations was reported by 63,931 individuals to the American Association of Poison Control Centers with 72% of these exposures in children under the age of five. No deaths were reported.
See also
Avitaminosis
Megavitamin therapy
Vitamin C megadosage
References
External links
Dietary reference intakes, official website.
Effects of external causes
Lung volumes
Lung volumes and lung capacities refer to the volume of air in the lungs at different phases of the respiratory cycle.
The average total lung capacity of an adult human male is about 6 litres of air.
Tidal breathing is normal, resting breathing; the tidal volume is the volume of air that is inhaled or exhaled in only a single such breath.
The average human respiratory rate is 30–60 breaths per minute at birth, decreasing to 12–20 breaths per minute in adults.
Factors affecting volumes
Several factors affect lung volumes; some can be controlled, and some cannot be controlled. Lung volumes vary with different people as follows:
A person who is born and lives at sea level will develop a slightly smaller lung capacity than a person who spends their life at a high altitude. This is because the partial pressure of oxygen is lower at higher altitude, which means that oxygen diffuses less readily into the bloodstream. In response to higher altitude, the body's diffusing capacity increases in order to process more air. Also, because the environmental air pressure is lower at higher altitudes, the air pressure within the breathing system must fall further in order to inhale; to meet this requirement, the thoracic diaphragm tends to lower to a greater extent during inhalation, which in turn causes an increase in lung volume.
When someone living at or near sea level travels to locations at high altitudes (e.g. the Andes; Denver, Colorado; Tibet; the Himalayas) that person can develop a condition called altitude sickness because their lungs remove adequate amounts of carbon dioxide but they do not take in enough oxygen. (In normal individuals, carbon dioxide is the primary determinant of respiratory drive.)
Lung function development is reduced in children who grow up near motorways although this seems at least in part reversible. Air pollution exposure affects FEV1 in asthmatics, but also affects FVC and FEV1 in healthy adults even at low concentrations.
Specific changes in lung volumes also occur during pregnancy. Functional residual capacity drops 18–20%, typically falling from 1.7 to 1.35 litres, due to the compression of the diaphragm by the uterus. The compression also causes a decreased total lung capacity (TLC) by 5% and decreased expiratory reserve volume by 20%. Tidal volume increases by 30–40%, from 0.5 to 0.7 litres, and minute ventilation by 30–40% giving an increase in pulmonary ventilation. This is necessary to meet the increased oxygen requirement of the body, which reaches 50 ml/min, 20 ml of which goes to reproductive tissues. Overall, the net change in maximum breathing capacity is zero.
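A short worked check of the percentage changes quoted above, using only the figures given in the text:

```python
# Worked check of the pregnancy-related changes quoted above,
# using only the figures given in the text.
def percent_change(before: float, after: float) -> float:
    return (after - before) / before * 100

frc_change = percent_change(1.7, 1.35)  # functional residual capacity, litres
tv_change = percent_change(0.5, 0.7)    # tidal volume, litres

print(f"FRC change: {frc_change:.1f}%")          # about -20.6%, matching the ~18-20% drop
print(f"Tidal volume change: {tv_change:.1f}%")  # +40%, within the quoted 30-40% range
```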
Values
The tidal volume, vital capacity, inspiratory capacity and expiratory reserve volume can be measured directly with a spirometer. These are the basic elements of a ventilatory pulmonary function test.
Determination of the residual volume is more difficult as it is impossible to "completely" breathe out. Therefore, measurement of the residual volume has to be done via indirect methods such as radiographic planimetry, body plethysmography, closed circuit dilution (including the helium dilution technique) and nitrogen washout.
In the absence of such measurements, estimates of residual volume have been prepared as a proportion of body mass for infants (18.1 ml/kg), as a proportion of vital capacity (0.24 for men and 0.28 for women), or in relation to height and age ((0.0275*Age [years] + 0.0189*Height [cm] − 2.6139) litres for normal-mass individuals and (0.0277*Age [years] + 0.0138*Height [cm] − 2.3967) litres for overweight individuals). Standard errors in prediction equations for residual volume have been measured at 579 ml for men and 355 ml for women, while the use of 0.24*FVC gave a standard error of 318 ml.
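The prediction equations quoted above translate directly into code. The following sketch implements them exactly as written; these are rough population estimates, not measurements, and the function names are ad hoc.

```python
# Residual volume (RV) estimates implementing the equations quoted above.
# These are rough predictions, not measurements.

def rv_from_body_mass_infant(mass_kg: float) -> float:
    """Infant estimate: 18.1 ml per kg of body mass, returned in litres."""
    return 18.1 * mass_kg / 1000

def rv_from_vital_capacity(vc_litres: float, sex: str) -> float:
    """RV as a fixed proportion of vital capacity (0.24 for men, 0.28 for women)."""
    factor = 0.24 if sex == "male" else 0.28
    return factor * vc_litres

def rv_from_age_height(age_years: float, height_cm: float,
                       overweight: bool = False) -> float:
    """RV in litres from age and height, using the quoted regression equations."""
    if overweight:
        return 0.0277 * age_years + 0.0138 * height_cm - 2.3967
    return 0.0275 * age_years + 0.0189 * height_cm - 2.6139

print(rv_from_age_height(40, 175))          # normal-mass adult estimate, litres
print(rv_from_vital_capacity(4.8, "male"))  # 0.24 * vital capacity
```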
Online calculators are available that can compute predicted lung volumes, and other spirometric parameters based on a patient's age, height, weight, and ethnic origin for many reference sources.
British rower and three-time Olympic gold medalist Pete Reed is reported to hold the largest recorded lung capacity of 11.68 litres; US swimmer Michael Phelps is also said to have a lung capacity of around 12 litres.
Weight of breath
The mass of one breath is approximately a gram (0.5-5 g). A litre of air weighs about 1.2 g (1.2 kg/m3). A half litre ordinary tidal breath weighs 0.6 g; a maximal 4.8 litre breath (average vital capacity for males) weighs approximately 5.8 g.
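The figures above follow from a single density value; a minimal sketch of the arithmetic:

```python
# Mass of a breath from air density (about 1.2 kg/m^3, i.e. 1.2 g per litre),
# reproducing the figures quoted above.
AIR_DENSITY_G_PER_L = 1.2

def breath_mass_g(volume_litres: float) -> float:
    return volume_litres * AIR_DENSITY_G_PER_L

print(breath_mass_g(0.5))  # ordinary tidal breath: 0.6 g
print(breath_mass_g(4.8))  # maximal breath (average male vital capacity): about 5.8 g
```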
Restrictive and obstructive
The results (in particular FEV1/FVC and FRC) can be used to distinguish between restrictive and obstructive pulmonary diseases: in obstructive disease the FEV1/FVC ratio is reduced, whereas in restrictive disease the ratio is typically normal or increased while lung volumes such as FRC and total lung capacity are reduced.
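As a rough sketch of how these measurements are commonly interpreted, the snippet below applies the widely used convention of a reduced FEV1/FVC ratio for obstruction and a preserved ratio with reduced lung volumes for restriction; the 0.70 cut-off and the total lung capacity criterion are common clinical conventions assumed here, not values given in this article.

```python
# Simplified classification sketch. The 0.70 FEV1/FVC cut-off and the use of
# a reduced total lung capacity (TLC) for restriction are common clinical
# conventions and are assumptions here, not values from the article above.
def classify(fev1_l: float, fvc_l: float, tlc_low: bool) -> str:
    ratio = fev1_l / fvc_l
    if ratio < 0.70:
        return "obstructive pattern (reduced FEV1/FVC)"
    if tlc_low:
        return "restrictive pattern (normal ratio, reduced lung volumes)"
    return "no obstructive or restrictive pattern on these values alone"

print(classify(fev1_l=2.0, fvc_l=3.8, tlc_low=False))  # obstructive
print(classify(fev1_l=2.4, fvc_l=2.9, tlc_low=True))   # restrictive
```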
See also
Pulmonary function testing (PFT)
Spirometry
References
External links
Lung function fundamentals (anaesthetist.com)
Volume of human lungs
Respiratory physiology
Pulmonary function testing
Skin
Skin is the layer of usually soft, flexible outer tissue covering the body of a vertebrate animal, with three main functions: protection, regulation, and sensation.
Other animal coverings, such as the arthropod exoskeleton, have a different developmental origin, structure and chemical composition. The adjective cutaneous means "of the skin" (from Latin cutis 'skin'). In mammals, the skin is an organ of the integumentary system made up of multiple layers of ectodermal tissue and guards the underlying muscles, bones, ligaments, and internal organs. Skin of a different nature exists in amphibians, reptiles, and birds. Skin (including cutaneous and subcutaneous tissues) plays crucial roles in the formation, structure, and function of extraskeletal apparatus such as the horns of bovids (e.g., cattle) and rhinos, cervids' antlers, giraffids' ossicones, armadillos' osteoderm, and the os penis/os clitoris.
All mammals have some hair on their skin, even marine mammals like whales, dolphins, and porpoises that appear to be hairless.
The skin interfaces with the environment and is the first line of defense from external factors. For example, the skin plays a key role in protecting the body against pathogens and excessive water loss. Its other functions are insulation, temperature regulation, sensation, and the synthesis of vitamin D and protection of vitamin B folates. Severely damaged skin may heal by forming scar tissue, which is sometimes discoloured and depigmented. The thickness of skin also varies from location to location on an organism. In humans, for example, the skin located under the eyes and around the eyelids is the thinnest skin on the body at 0.5 mm thick and is one of the first areas to show signs of aging such as "crow's feet" and wrinkles. The skin on the palms and the soles of the feet is the thickest skin on the body at 4 mm thick. The speed and quality of wound healing in skin is promoted by estrogen.
Fur is dense hair. Primarily, fur augments the insulation the skin provides but can also serve as a secondary sexual characteristic or as camouflage. On some animals, the skin is very hard and thick and can be processed to create leather. Reptiles and most fish have hard protective scales on their skin for protection, and birds have hard feathers, all made of tough beta-keratins. Amphibian skin is not a strong barrier, especially regarding the passage of chemicals via skin, and is often subject to osmosis and diffusive forces. For example, a frog sitting in an anesthetic solution would be sedated quickly as the chemical diffuses through its skin. Amphibian skin plays key roles in everyday survival and their ability to exploit a wide range of habitats and ecological conditions.
On 11 January 2024, biologists reported the discovery of the oldest known skin, fossilized about 289 million years ago, and possibly the skin from an ancient reptile.
Etymology
The word skin originally only referred to dressed and tanned animal hide and the usual word for human skin was hide.
Skin is a borrowing from Old Norse skinn "animal hide, fur", ultimately from the Proto-Indo-European root *sek-, meaning "to cut" (probably a reference to the fact that in those times animal hide was commonly cut off to be used as garment).
Structure in mammals
Mammalian skin is composed of two primary layers:
The epidermis, which provides waterproofing and serves as a barrier to infection.
The dermis, which serves as a location for the appendages of skin.
Epidermis
The epidermis is composed of the outermost layers of the skin. It forms a protective barrier over the body's surface, responsible for keeping water in the body and preventing pathogens from entering, and is a stratified squamous epithelium, composed of proliferating basal and differentiated suprabasal keratinocytes.
Keratinocytes are the major cells, constituting 95% of the epidermis, while Merkel cells, melanocytes and Langerhans cells are also present. The epidermis can be further subdivided into the following strata or layers (beginning with the outermost layer):
Stratum corneum
Stratum lucidum (only in palms and soles)
Stratum granulosum
Stratum spinosum
Stratum basale (also called the stratum germinativum)
Keratinocytes in the stratum basale proliferate through mitosis and the daughter cells move up the strata changing shape and composition as they undergo multiple stages of cell differentiation to eventually become anucleated. During that process, keratinocytes will become highly organized, forming cellular junctions (desmosomes) between each other and secreting keratin proteins and lipids which contribute to the formation of an extracellular matrix and provide mechanical strength to the skin. Keratinocytes from the stratum corneum are eventually shed from the surface (desquamation).
The epidermis contains no blood vessels, and cells in the deepest layers are nourished by diffusion from blood capillaries extending to the upper layers of the dermis.
Basement membrane
The epidermis and dermis are separated by a thin sheet of fibers called the basement membrane, which is made through the action of both tissues.
The basement membrane controls the traffic of the cells and molecules between the dermis and epidermis but also serves, through the binding of a variety of cytokines and growth factors, as a reservoir for their controlled release during physiological remodeling or repair processes.
Dermis
The dermis is the layer of skin beneath the epidermis that consists of connective tissue and cushions the body from stress and strain. The dermis provides tensile strength and elasticity to the skin through an extracellular matrix composed of collagen fibrils, microfibrils, and elastic fibers, embedded in hyaluronan and proteoglycans. Skin proteoglycans are varied and have very specific locations. For example, hyaluronan, versican and decorin are present throughout the dermis and epidermis extracellular matrix, whereas biglycan and perlecan are only found in the epidermis.
It harbors many mechanoreceptors (nerve endings) that provide the sense of touch and heat through nociceptors and thermoreceptors. It also contains the hair follicles, sweat glands, sebaceous glands, apocrine glands, lymphatic vessels and blood vessels. The blood vessels in the dermis provide nourishment and waste removal from its own cells as well as for the epidermis.
Dermis and subcutaneous tissues are thought to contain germinative cells involved in formation of horns, osteoderm, and other extra-skeletal apparatus in mammals.
The dermis is tightly connected to the epidermis through a basement membrane and is structurally divided into two areas: a superficial area adjacent to the epidermis, called the papillary region, and a deep thicker area known as the reticular region.
Papillary region
The papillary region is composed of loose areolar connective tissue. This is named for its fingerlike projections called papillae that extend toward the epidermis. The papillae provide the dermis with a "bumpy" surface that interdigitates with the epidermis, strengthening the connection between the two layers of skin.
Reticular region
The reticular region lies deep to the papillary region and is usually much thicker. It is composed of dense irregular connective tissue and receives its name from the dense concentration of collagenous, elastic, and reticular fibers that weave throughout it. These protein fibers give the dermis its properties of strength, extensibility, and elasticity.
Also located within the reticular region are the roots of the hair, sweat glands, sebaceous glands, receptors, nails, and blood vessels.
Subcutaneous tissue
The subcutaneous tissue (also hypodermis) is not part of the skin, and lies below the dermis. Its purpose is to attach the skin to underlying bone and muscle as well as supplying it with blood vessels and nerves. It consists of loose connective tissue and elastin. The main cell types are fibroblasts, macrophages and adipocytes (the subcutaneous tissue contains 50% of body fat). Fat serves as padding and insulation for the body.
Microorganisms like Staphylococcus epidermidis colonize the skin surface. The density of skin flora depends on region of the skin. The disinfected skin surface gets recolonized from bacteria residing in the deeper areas of the hair follicle, gut and urogenital openings.
Detailed cross section
Structure in fish, amphibians, birds, and reptiles
Fish
The epidermis of fish and of most amphibians consists entirely of live cells, with only minimal quantities of keratin in the cells of the superficial layer. It is generally permeable, and in the case of many amphibians, may actually be a major respiratory organ. The dermis of bony fish typically contains relatively little of the connective tissue found in tetrapods. Instead, in most species, it is largely replaced by solid, protective bony scales. Apart from some particularly large dermal bones that form parts of the skull, these scales are lost in tetrapods, although many reptiles do have scales of a different kind, as do pangolins. Cartilaginous fish have numerous tooth-like denticles embedded in their skin, in place of true scales.
Sweat glands and sebaceous glands are both unique to mammals, but other types of skin gland are found in other vertebrates. Fish typically have numerous individual mucus-secreting skin cells that aid in insulation and protection, but may also have poison glands, photophores, or cells that produce a more watery, serous fluid. In amphibians, the mucous cells are gathered together to form sac-like glands. Most living amphibians also possess granular glands in the skin, which secrete irritating or toxic compounds.
Although melanin is found in the skin of many species, in the reptiles, the amphibians, and fish, the epidermis is often relatively colorless. Instead, the color of the skin is largely due to chromatophores in the dermis, which, in addition to melanin, may contain guanine or carotenoid pigments. Many species, such as chameleons and flounders may be able to change the color of their skin by adjusting the relative size of their chromatophores.
Amphibians
Overview
Amphibians possess two types of glands, mucous and granular (serous). Both of these glands are part of the integument and thus considered cutaneous. Mucous and granular glands are each divided into three connected sections that together form the gland: the duct, the intercalary region, and the alveolus (sac). Structurally, the duct is derived from keratinocytes and passes through to the surface of the epidermal or outer skin layer, allowing the gland's secretions to reach the body surface. The gland alveolus is a sac-shaped structure found at the bottom or base of the granular gland; the cells in this sac specialize in secretion. Between the alveolar gland and the duct lies the intercalary region, a transitional region connecting the duct to the gland alveolus beneath the epidermal skin layer. In general, granular glands are larger in size than the mucous glands, which are greater in number.
Granular glands
Granular glands can be identified as venomous and often differ in the type of toxin as well as the concentrations of secretions across various orders and species within the amphibians. They are located in clusters differing in concentration depending on amphibian taxa. The toxins can be fatal to most vertebrates or have no effect against others. These glands are alveolar meaning they structurally have little sacs in which venom is produced and held before it is secreted upon defensive behaviors.
Structurally, the ducts of the granular gland initially maintain a cylindrical shape. When the ducts mature and fill with fluid, the base of the ducts become swollen due to the pressure from the inside. This causes the epidermal layer to form a pit like opening on the surface of the duct in which the inner fluid will be secreted in an upwards fashion.
The intercalary region of granular glands is more developed and mature in comparison with mucous glands. This region resides as a ring of cells surrounding the basal portion of the duct which are argued to have an ectodermal muscular nature due to their influence over the lumen (space inside the tube) of the duct with dilation and constriction functions during secretions. The cells are found radially around the duct and provide a distinct attachment site for muscle fibers around the gland's body.
The gland alveolus is a sac that is divided into three specific regions/layers. The outer layer or tunica fibrosa is composed of densely packed connective-tissue which connects with fibers from the spongy intermediate layer where elastic fibers, as well as nerves, reside. The nerves send signals to the muscles as well as the epithelial layers. Lastly, the epithelium or tunica propria encloses the gland.
Mucous glands
Mucous glands are non-venomous and offer a different functionality for amphibians than granular glands. Mucous glands cover the entire surface area of the amphibian body and specialize in keeping the body lubricated. There are many other functions of the mucous glands, such as controlling pH, thermoregulation, adhesive properties to the environment, anti-predator behaviors (slimy to the grasp), chemical communication, and even anti-bacterial/viral properties for protection against pathogens.
The ducts of the mucous gland appear as cylindrical vertical tubes that break through the epidermal layer to the surface of the skin. The cells lining the inside of the ducts are oriented with their longitudinal axis forming 90-degree angles surrounding the duct in a helical fashion.
Intercalary cells react identically to those of granular glands but on a smaller scale. Among the amphibians, there are taxa which contain a modified intercalary region (depending on the function of the glands), yet the majority share the same structure.
The alveolar portion of mucous glands is much simpler and consists only of an epithelium layer and connective tissue which forms a cover over the gland. This gland lacks a tunica propria and has delicate, intricate fibers which pass over the gland's muscle and epithelial layers.
Birds and reptiles
The epidermis of birds and reptiles is closer to that of mammals, with a layer of dead keratin-filled cells at the surface, to help reduce water loss. A similar pattern is also seen in some of the more terrestrial amphibians such as toads. In these animals, there is no clear differentiation of the epidermis into distinct layers, as occurs in humans, with the change in cell type being relatively gradual. The mammalian epidermis always possesses at least a stratum germinativum and stratum corneum, but the other intermediate layers found in humans are not always distinguishable.
Hair is a distinctive feature of mammalian skin, while feathers are (at least among living species) similarly unique to birds.
Birds and reptiles have relatively few skin glands, although there may be a few structures for specific purposes, such as pheromone-secreting cells in some reptiles, or the uropygial gland of most birds.
Development
Cutaneous structures arise from the epidermis and include a variety of features such as hair, feathers, claws and nails. During embryogenesis, the epidermis splits into two layers: the periderm (which is lost) and the basal layer. The basal layer is a stem cell layer and through asymmetrical divisions, becomes the source of skin cells throughout life. It is maintained as a stem cell layer through an autocrine signal, TGF alpha, and through paracrine signaling from FGF7 (keratinocyte growth factor) produced by the dermis below the basal cells. In mice, over-expression of these factors leads to an overproduction of granular cells and thick skin.
It is believed that the mesoderm defines the pattern. The epidermis instructs the mesodermal cells to condense and then the mesoderm instructs the epidermis of what structure to make through a series of reciprocal inductions. Transplantation experiments involving frog and newt epidermis indicated that the mesodermal signals are conserved between species but the epidermal response is species-specific meaning that the mesoderm instructs the epidermis of its position and the epidermis uses this information to make a specific structure.
Functions
Skin performs the following functions:
Protection: an anatomical barrier from pathogens and damage between the internal and external environment in bodily defense. (See Skin absorption.) Langerhans cells in the skin are part of the adaptive immune system.
Sensation: contains a variety of nerve endings that respond to heat and cold, touch, pressure, vibration, and tissue injury (see somatosensory system and haptic perception).
Thermoregulation: Eccrine (sweat) glands and dilated blood vessels (increased superficial perfusion) aid heat loss, while constricted vessels greatly reduce cutaneous blood flow and conserve heat. Erector pili muscles in mammals adjust the angle of hair shafts to change the degree of insulation provided by hair or fur.
Control of evaporation: the skin provides a relatively dry and semi-impermeable barrier to reduce fluid loss.
Storage and synthesis: acts as a storage center for lipids and water
Absorption through the skin: Oxygen, nitrogen and carbon dioxide can diffuse into the epidermis in small amounts; some animals use their skin as their sole respiration organ (in humans, the cells comprising the outermost 0.25–0.40 mm of the skin are "almost exclusively supplied by external oxygen", although the "contribution to total respiration is negligible") Some medications are absorbed through the skin.
Water resistance: the skin acts as a water-resistant barrier so essential nutrients are not washed out of the body. The nutrients and oils that help hydrate the skin are covered by the outermost skin layer, the epidermis, helped in part by the sebaceous glands, which release sebum, an oily liquid. Water by itself does not eliminate these oils, because the oils reside in the dermis and would only be affected by water if the epidermis were absent.
Camouflage, whether the skin is naked or covered in fur, scales, or feathers, skin structures provide protective coloration and patterns that help to conceal animals from predators or prey.
Mechanics
Skin is a soft tissue and exhibits key mechanical behaviors of these tissues. The most pronounced feature is the J-curve stress strain response, in which a region of large strain and minimal stress exists and corresponds to the microstructural straightening and reorientation of collagen fibrils. In some cases the intact skin is prestreched, like wetsuits around the diver's body, and in other cases the intact skin is under compression. Small circular holes punched on the skin may widen or close into ellipses, or shrink and remain circular, depending on preexisting stresses.
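To illustrate the J-curve response described above, here is a sketch using a simple exponential (Fung-type) stress-strain model; the functional form and parameter values are illustrative assumptions, not measured skin properties.

```python
# Illustrative J-curve stress-strain sketch using a simple exponential
# (Fung-type) model: stress = A * (exp(B * strain) - 1).
# The functional form and the parameters A and B are assumptions chosen to
# show the shape, not measured properties of skin.
import math

A = 0.05   # scale parameter (arbitrary stress units)
B = 12.0   # exponential stiffening parameter (dimensionless)

def stress(strain: float) -> float:
    return A * (math.exp(B * strain) - 1.0)

for strain in [0.0, 0.1, 0.2, 0.3, 0.4]:
    # Low stress over a large initial strain range (collagen fibrils
    # straightening and reorienting), then rapid stiffening: the "J" shape.
    print(f"strain {strain:.1f} -> stress {stress(strain):8.3f}")
```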
Aging
Tissue homeostasis generally declines with age, in part because stem/progenitor cells fail to self-renew or differentiate. Skin aging is caused in part by TGF-β, which blocks the conversion of dermal fibroblasts into the fat cells that provide support. Common changes in the skin as a result of aging range from wrinkles, discoloration, and skin laxity to more severe forms such as skin malignancies. Moreover, these changes may be worsened by sun exposure in a process known as photoaging.
See also
Cutaneous reflex in human locomotion
Cutaneous respiration – gas exchange conducted through skin
Moult
Role of skin in locomotion
Skinning
References
External links
Soft tissue
Leathermaking
Organs (anatomy)
Animal anatomy
Skin physiology