Biological system
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Exocrine system: various functions including lubrication and protection by exocrine glands such as sweat glands, mucous glands, lacrimal glands and mammary glands.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph, lymph nodes and lymph vessels. Its functions include immune responses and the development of antibodies.
Immune system: protects the organism from foreign bodies.
Nervous system: collecting, transferring and processing information with brain, spinal cord, peripheral nervous system and sense organs.
Sensory systems: visual system, auditory system, olfactory system, gustatory system, somatosensory system, vestibular system.
Muscular system: allows for manipulation of the environment, provides locomotion, maintains posture, and produces heat. Includes skeletal muscles, smooth muscles and cardiac muscle.
Reproductive system: the sex organs, such as ovaries, fallopian tubes, uterus, vagina, mammary glands, testes, vas deferens, seminal vesicles and prostate.
History
The notion of system (or apparatus) relies upon the concept of vital or organic function: a system is a set of organs with a definite function. This idea was already present in Antiquity (Galen, Aristotle), but the application of the term "system" is more recent. For example, the nervous system was named by Monro (1783), but Rufus of Ephesus (c. 90–120) was the first to clearly view the brain, spinal cord, and craniospinal nerves as an anatomical unit, although he wrote little about its function and gave no name to this unit.
The enumeration of the principal functions, and consequently of the systems, has remained almost the same since Antiquity, but their classification has varied considerably; compare, for example, Aristotle, Bichat and Cuvier.
The notion of physiological division of labor, introduced in the 1820s by the French physiologist Henri Milne-Edwards, made it possible to "compare and study living things as if they were machines created by the industry of man." Inspired by the work of Adam Smith, Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." In more differentiated organisms, the functional labor could be apportioned between different instruments or systems (which he called appareils).
Cellular organelle systems
The exact components of a cell are determined by whether the cell is a eukaryote or prokaryote.
Nucleus (eukaryotic only): storage of genetic material; control center of the cell.
Cytosol: the jelly-like fluid component of the cytoplasm in which organelles are suspended
Cell membrane (plasma membrane): selectively permeable boundary of the cell that regulates what enters and leaves it
Endoplasmic reticulum: a network of membrane channels continuous with the outer nuclear envelope, used for transport; consists of the rough endoplasmic reticulum and the smooth endoplasmic reticulum
Rough endoplasmic reticulum (RER): considered "rough" due to the ribosomes attached to its surface; made up of cisternae that allow for protein production
Smooth endoplasmic reticulum (SER): storage and synthesis of lipids and steroid hormones as well as detoxification
Ribosome: site of protein synthesis, which is essential for the cell's internal activity
Mitochondrion (mitochondria): powerhouse of the cell; site of cellular respiration producing ATP (adenosine triphosphate)
Lysosome: center of breakdown for unwanted/unneeded material within the cell
Peroxisome: contains digestive enzymes that break down toxic materials such as H2O2 (hydrogen peroxide)
Golgi apparatus (eukaryotic only): folded network involved in modification, transport, and secretion
Chloroplast: site of photosynthesis; storage of chlorophyll
See also
Biological network
Artificial life
Biological systems engineering
Evolutionary systems
Organ system
Systems biology
Systems ecology
Systems theory
External links
Systems Biology: An Overview by Mario Jardon: A review from the Science Creative Quarterly, 2005.
Synthesis and Analysis of a Biological System, by Hiroyuki Kurata, 1999.
It from bit and fit from bit. On the origin and impact of information in the average evolution, by Yves Decadt, 2000. Discusses how life forms and biological systems originate and evolve to become more and more complex, including the evolution of genes and memes into the complex memetics of organisations, multinational corporations and a "global brain". Book published in Dutch with an English paper summary in The Information Philosopher, http://www.informationphilosopher.com/solutions/scientists/decadt/
Schmidt-Rhaesa, A. 2007. The Evolution of Organ Systems. Oxford University Press, Oxford.
References
Disease
A disease is a particular abnormal condition that adversely affects the structure or function of all or part of an organism and is not immediately due to any external injury. Diseases are often known to be medical conditions that are associated with specific signs and symptoms. A disease may be caused by external factors such as pathogens or by internal dysfunctions. For example, internal dysfunctions of the immune system can produce a variety of different diseases, including various forms of immunodeficiency, hypersensitivity, allergies, and autoimmune disorders.
In humans, disease is often used more broadly to refer to any condition that causes pain, dysfunction, distress, social problems, or death to the person affected, or similar problems for those in contact with the person. In this broader sense, it sometimes includes injuries, disabilities, disorders, syndromes, infections, isolated symptoms, deviant behaviors, and atypical variations of structure and function, while in other contexts and for other purposes these may be considered distinguishable categories. Diseases can affect people not only physically but also mentally, as contracting and living with a disease can alter the affected person's perspective on life.
Death due to disease is called death by natural causes. There are four main types of disease: infectious diseases, deficiency diseases, hereditary diseases (including both genetic and non-genetic hereditary diseases), and physiological diseases. Diseases can also be classified in other ways, such as communicable versus non-communicable diseases. The deadliest diseases in humans are coronary artery disease (blood flow obstruction), followed by cerebrovascular disease and lower respiratory infections. In developed countries, the diseases that cause the most sickness overall are neuropsychiatric conditions, such as depression and anxiety.
The study of disease is called pathology, which includes the study of etiology, or cause.
Terminology
Concepts
In many cases, terms such as disease, disorder, morbidity, sickness and illness are used interchangeably; however, there are situations when specific terms are considered preferable.
Disease
The term disease broadly refers to any condition that impairs the normal functioning of the body. For this reason, diseases are associated with the dysfunction of the body's normal homeostatic processes. Commonly, the term is used to refer specifically to infectious diseases, which are clinically evident diseases that result from the presence of pathogenic microbial agents, including viruses, bacteria, fungi, protozoa, multicellular organisms, and aberrant proteins known as prions. An infection or colonization that does not and will not produce clinically evident impairment of normal functioning, such as the presence of the normal bacteria and yeasts in the gut, or of a passenger virus, is not considered a disease. By contrast, an infection that is asymptomatic during its incubation period, but expected to produce symptoms later, is usually considered a disease. Non-infectious diseases are all other diseases, including most forms of cancer, heart disease, and genetic disease.
Acquired disease
An acquired disease is one that began at some point during one's lifetime, as opposed to disease that was already present at birth, which is congenital disease. Acquired sounds like it could mean "caught via contagion", but it simply means acquired sometime after birth. It also sounds like it could imply secondary disease, but acquired disease can be primary disease.
Acute disease
An acute disease is one of a short-term nature (acute); the term sometimes also connotes a fulminant nature.
Chronic condition or chronic disease
A chronic disease is one that persists over time, often for at least six months, but may also include illnesses that are expected to last for the entirety of one's natural life.
Congenital disorder or congenital disease
A congenital disorder is one that is present at birth. It is often a genetic disease or disorder and can be inherited. It can also be the result of a vertically transmitted infection from the mother, such as HIV/AIDS.
Genetic disease
A genetic disorder or disease is caused by one or more genetic mutations. It is often inherited, but some mutations are random and de novo.
Hereditary or inherited disease
A hereditary disease is a type of genetic disease caused by genetic mutations that are hereditary (and can run in families).
Iatrogenic disease
An iatrogenic disease or condition is one that is caused by medical intervention, whether as a side effect of a treatment or as an inadvertent outcome.
Idiopathic disease
An idiopathic disease has an unknown cause or source. As medical science has advanced, many diseases with entirely unknown causes have had some aspects of their sources explained and therefore shed their idiopathic status. For example, when germs were discovered, it became known that they were a cause of infection, but particular germs and diseases had not been linked. In another example, it is known that autoimmunity is the cause of some forms of diabetes mellitus type 1, even though the particular molecular pathways by which it works are not yet understood. It is also common to know certain factors are associated with certain diseases; however, association does not necessarily imply causality. For example, a third factor might be causing both the disease, and the associated phenomenon.
Incurable disease
A disease that cannot be cured. Incurable diseases are not necessarily terminal diseases, and sometimes a disease's symptoms can be treated sufficiently for the disease to have little or no impact on quality of life.
Primary disease
A primary disease is a disease that is due to a root cause of illness, as opposed to secondary disease, which is a sequela, or complication that is caused by the primary disease. For example, a common cold is a primary disease, where rhinitis is a possible secondary disease, or sequela. A doctor must determine what primary disease, a cold or bacterial infection, is causing a patient's secondary rhinitis when deciding whether or not to prescribe antibiotics.
Secondary disease
A secondary disease is a disease that is a sequela or complication of a prior, causal disease, which is referred to as the primary disease or simply the underlying cause (root cause). For example, a bacterial infection can be primary, wherein a healthy person is exposed to bacteria and becomes infected, or it can be secondary to a primary cause, that predisposes the body to infection. For example, a primary viral infection that weakens the immune system could lead to a secondary bacterial infection. Similarly, a primary burn that creates an open wound could provide an entry point for bacteria, and lead to a secondary bacterial infection.
Terminal disease
A terminal disease is one that is expected to have the inevitable result of death. Previously, AIDS was a terminal disease; it is now incurable, but can be managed indefinitely using medications.
Illness
The terms illness and sickness are both generally used as synonyms for disease; however, the term illness is occasionally used to refer specifically to the patient's personal experience of their disease. In this model, it is possible for a person to have a disease without being ill (to have an objectively definable, but asymptomatic, medical condition, such as a subclinical infection, or to have a clinically apparent physical impairment but not feel sick or distressed by it), and to be ill without being diseased (such as when a person perceives a normal experience as a medical condition, or medicalizes a non-disease situation in their life – for example, a person who feels unwell as a result of embarrassment, and who interprets those feelings as sickness rather than normal emotions). Symptoms of illness are often not directly the result of infection, but a collection of evolved responses – sickness behavior by the body – that helps clear infection and promote recovery. Such aspects of illness can include lethargy, depression, loss of appetite, sleepiness, hyperalgesia, and inability to concentrate.
A disorder is a functional abnormality or disturbance that may or may not show specific signs and symptoms. Medical disorders can be categorized into mental disorders, physical disorders, genetic disorders, emotional and behavioral disorders, and functional disorders. The term disorder is often considered more value-neutral and less stigmatizing than the terms disease or illness, and therefore is preferred terminology in some circumstances. In mental health, the term mental disorder is used as a way of acknowledging the complex interaction of biological, social, and psychological factors in psychiatric conditions; however, the term disorder is also used in many other areas of medicine, primarily to identify physical disorders that are not caused by infectious organisms, such as metabolic disorders.
Medical condition or health condition
A medical condition or health condition is a broad concept that includes all diseases, lesions, disorders, and nonpathologic conditions that normally receive medical treatment, such as pregnancy or childbirth. While the term medical condition generally includes mental illnesses, in some contexts the term is used specifically to denote any illness, injury, or disease except for mental illnesses. The Diagnostic and Statistical Manual of Mental Disorders (DSM), the widely used psychiatric manual that defines all mental disorders, uses the term general medical condition to refer to all diseases, illnesses, and injuries except for mental disorders. This usage is also commonly seen in the psychiatric literature. Some health insurance policies also define a medical condition as any illness, injury, or disease except for psychiatric illnesses.
As it is more value-neutral than terms like disease, the term medical condition is sometimes preferred by people with health issues that they do not consider deleterious. However, by emphasizing the medical nature of the condition, this term is sometimes rejected, such as by proponents of the autism rights movement.
The term medical condition is also a synonym for medical state, in which case it describes an individual patient's current state from a medical standpoint. This usage appears in statements that describe a patient as being in critical condition, for example.
Morbidity is a diseased state, disability, or poor health due to any cause. The term may refer to the existence of any form of disease, or to the degree that the health condition affects the patient. Among severely ill patients, the level of morbidity is often measured by ICU scoring systems. Comorbidity, or co-existing disease, is the simultaneous presence of two or more medical conditions, such as schizophrenia and substance abuse.
In epidemiology and actuarial science, the term morbidity (also morbidity rate or morbidity frequency) can refer to either the incidence rate, the prevalence of a disease or medical condition, or the percentage of people who experience a given condition within a given timeframe (e.g., 20% of people will get influenza in a year). This measure of sickness is contrasted with the mortality rate of a condition, which is the proportion of people dying during a given time interval. Morbidity rates are used in actuarial professions, such as health insurance, life insurance, and long-term care insurance, to determine the premiums charged to customers. Morbidity rates help insurers predict the likelihood that an insured will contract or develop any number of specified diseases.
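To make the distinction between these rates concrete, here is a minimal sketch in Python using invented counts for a hypothetical population of 10,000 over one year; all figures are assumptions for illustration, and the formulas are simply the ratios described above.

```python
# Illustrative only: hypothetical counts for one year in a population of 10,000.
population = 10_000
new_cases = 2_000          # people who newly developed the condition during the year
existing_cases = 500       # people who already had the condition at the start of the year
deaths_from_condition = 50

# Morbidity as incidence: proportion of the population newly affected in the period.
incidence_rate = new_cases / population

# Morbidity as period prevalence: proportion affected at any time during the period.
period_prevalence = (new_cases + existing_cases) / population

# Mortality rate: proportion of the population dying from the condition in the period.
mortality_rate = deaths_from_condition / population

print(f"Incidence:  {incidence_rate:.1%}")     # 20.0% -- cf. "20% of people will get influenza in a year"
print(f"Prevalence: {period_prevalence:.1%}")  # 25.0%
print(f"Mortality:  {mortality_rate:.2%}")     # 0.50%
```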
Pathosis or pathology
Pathosis (plural pathoses) is synonymous with disease. The word pathology also has this sense, in which it is commonly used by physicians in the medical literature, although some editors prefer to reserve pathology to its other senses. Sometimes a slight connotative shade causes preference for pathology or pathosis implying "some [as yet poorly analyzed] pathophysiologic process" rather than disease implying "a specific disease entity as defined by diagnostic criteria being already met". This is hard to quantify denotatively, but it explains why cognitive synonymy is not invariable.
Syndrome
A syndrome is the association of several signs and symptoms, or other characteristics that often occur together, regardless of whether the cause is known. Some syndromes, such as Down syndrome, are known to have only one cause (an extra chromosome at birth). Others, such as Parkinsonian syndrome, are known to have multiple possible causes. Acute coronary syndrome, for example, is not a single disease itself but is rather the manifestation of any of several diseases, including myocardial infarction secondary to coronary artery disease. In yet other syndromes, however, the cause is unknown. A familiar syndrome name often remains in use even after an underlying cause has been found, or when there are a number of different possible primary causes. Examples of the first-mentioned type are Turner syndrome and DiGeorge syndrome, which are still often called by the "syndrome" name even though they can also be viewed as disease entities and not solely as sets of signs and symptoms.
Predisease
Predisease is a subclinical or prodromal vanguard of a disease. Prediabetes and prehypertension are common examples. The nosology or epistemology of predisease is contentious, though, because there is seldom a bright line differentiating a legitimate concern for subclinical or premonitory status and the conflict of interest–driven over-medicalization (e.g., by pharmaceutical manufacturers) or de-medicalization (e.g., by medical and disability insurers). Identifying legitimate predisease can result in useful preventive measures, such as motivating the person to get a healthy amount of physical exercise, but labeling a healthy person with an unfounded notion of predisease can result in overtreatment, such as taking drugs that only help people with severe disease or paying for treatments with a poor benefit–cost ratio.
One review proposed three criteria for predisease:
a high risk for progression to disease, making one "far more likely to develop" it than others are – for example, a pre-cancer will almost certainly turn into cancer over time
actionability for risk reduction – for example, removal of the precancerous tissue prevents it from turning into a potentially deadly cancer
benefit that outweighs the harm of any interventions taken – removing the precancerous tissue prevents cancer, and thus prevents a potential death from cancer.
Types by body system
Mental
Mental illness is a broad, generic label for a category of illnesses that may include affective or emotional instability, behavioral dysregulation, and cognitive dysfunction or impairment. Specific illnesses known as mental illnesses include major depression, generalized anxiety disorders, schizophrenia, and attention deficit hyperactivity disorder, to name a few. Mental illness can be of biological (e.g., anatomical, chemical, or genetic) or psychological (e.g., trauma or conflict) origin. It can impair the affected person's ability to work or study and can harm interpersonal relationships.
Organic
An organic disease is one caused by a physical or physiological change to some tissue or organ of the body. The term sometimes excludes infections. It is commonly used in contrast with mental disorders. It includes emotional and behavioral disorders if they are due to changes to the physical structures or functioning of the body, such as after a stroke or a traumatic brain injury, but not if they are due to psychosocial issues.
Stages
In an infectious disease, the incubation period is the time between infection and the appearance of symptoms. The latency period is the time between infection and the ability of the disease to spread to another person, which may precede, follow, or be simultaneous with the appearance of symptoms. Some viruses also exhibit a dormant phase, called viral latency, in which the virus hides in the body in an inactive state. For example, varicella zoster virus causes chickenpox in the acute phase; after recovery from chickenpox, the virus may remain dormant in nerve cells for many years, and later cause herpes zoster (shingles).
Acute disease
An acute disease is a short-lived disease, like the common cold.
Chronic disease
A chronic disease is one that lasts for a long time, usually at least six months. During that time, it may be constantly present, or it may go into remission and periodically relapse. A chronic disease may be stable (does not get any worse) or it may be progressive (gets worse over time). Some chronic diseases can be permanently cured. Most chronic diseases can be beneficially treated, even if they cannot be permanently cured.
Clinical disease
One that has clinical consequences; in other words, the stage of the disease that produces the characteristic signs and symptoms of that disease. AIDS is the clinical disease stage of HIV infection.
Cure
A cure is the end of a medical condition or a treatment that is very likely to end it, while remission refers to the disappearance, possibly temporarily, of symptoms. Complete remission is the best possible outcome for incurable diseases.
Flare-up
A flare-up can refer to either the recurrence of symptoms or an onset of more severe symptoms.
Progressive disease
Progressive disease is a disease whose typical natural course is the worsening of the disease until death, serious debility, or organ failure occurs. Slowly progressive diseases are also chronic diseases; many are also degenerative diseases. The opposite of progressive disease is stable disease or static disease: a medical condition that exists, but does not get better or worse.
A refractory disease is a disease that resists treatment, especially an individual case that resists treatment more than is normal for the specific disease in question.
Subclinical disease
Also called silent disease, silent stage, or asymptomatic disease. This is a stage in some diseases before the symptoms are first noted.
Terminal phase
If a person will die soon from a disease, regardless of whether that disease typically causes death, then the stage between the earlier disease process and active dying is the terminal phase.
Recovery
Recovery can refer to the repairing of physical processes (tissues, organs, etc.) and the resumption of healthy functioning after the damage-causing processes have been cured.
Extent
Localized disease
A localized disease is one that affects only one part of the body, such as athlete's foot or an eye infection.
Disseminated disease
A disseminated disease has spread to other parts; with cancer, this is usually called metastatic disease.
Systemic disease
A systemic disease is a disease that affects the entire body, such as influenza or high blood pressure.
Classification
Diseases may be classified by cause, pathogenesis (mechanism by which the disease is caused), or by symptoms. Alternatively, diseases may be classified according to the organ system involved, though this is often complicated since many diseases affect more than one organ.
A chief difficulty in nosology is that diseases often cannot be defined and classified clearly, especially when cause or pathogenesis are unknown. Thus diagnostic terms often only reflect a symptom or set of symptoms (syndrome).
Classical classification of human disease derives from the observational correlation between pathological analysis and clinical syndromes. Today it is preferred to classify them by their cause if it is known.
The best-known and most widely used classification of diseases is the World Health Organization's ICD, which is periodically updated. The current edition is ICD-11.
Causes
Diseases can be caused by any number of factors and may be acquired or congenital. Microorganisms, genetics, the environment or a combination of these can contribute to a diseased state.
Only some diseases, such as influenza, are contagious and commonly believed to be infectious. The microorganisms that cause these diseases are known as pathogens and include varieties of bacteria, viruses, protozoa, and fungi. Infectious diseases can be transmitted, for example, by hand-to-mouth contact with infectious material on surfaces, by bites of insects or other carriers of the disease, and from contaminated water or food (often via fecal contamination). There are also sexually transmitted diseases. In some cases, microorganisms that are not readily spread from person to person play a role, while other diseases can be prevented or ameliorated with appropriate nutrition or other lifestyle changes.
Some diseases, such as most (but not all) forms of cancer, heart disease, and mental disorders, are non-infectious diseases. Many non-infectious diseases have a partly or completely genetic basis (see genetic disorder) and may thus be transmitted from one generation to another.
Social determinants of health are the social conditions in which people live that determine their health. Illnesses are generally related to social, economic, political, and environmental circumstances. Social determinants of health have been recognized by several health organizations, such as the Public Health Agency of Canada and the World Health Organization, as greatly influencing collective and personal well-being. The World Health Organization's Social Determinants Council also recognizes social determinants of health in poverty.
When the cause of a disease is poorly understood, societies tend to mythologize the disease or use it as a metaphor or symbol of whatever that culture considers evil. For example, until the bacterial cause of tuberculosis was discovered in 1882, experts variously ascribed the disease to heredity, a sedentary lifestyle, depressed mood, and overindulgence in sex, rich food, or alcohol, all of which were social ills at the time.
When a disease is caused by a pathogenic organism (e.g., when malaria is caused by Plasmodium), one should not confuse the pathogen (the cause of the disease) with disease itself. For example, West Nile virus (the pathogen) causes West Nile fever (the disease). The misuse of basic definitions in epidemiology is frequent in scientific publications.
Types of causes
Airborne
An airborne disease is any disease that is caused by pathogens and transmitted through the air.
Foodborne
Foodborne illness or food poisoning is any illness resulting from the consumption of food contaminated with pathogenic bacteria, toxins, viruses, prions or parasites.
Infectious
Infectious diseases, also known as transmissible diseases or communicable diseases, comprise clinically evident illness (i.e., characteristic medical signs or symptoms of disease) resulting from the infection, presence and growth of pathogenic biological agents in an individual host organism. Included in this category are contagious diseases – an infection, such as influenza or the common cold, that commonly spreads from one person to another – and communicable diseases – a disease that can spread from one person to another, but does not necessarily spread through everyday contact.
Lifestyle
A lifestyle disease is any disease that appears to increase in frequency as countries become more industrialized and people live longer, especially if the risk factors include behavioral choices like a sedentary lifestyle or a diet high in unhealthful foods such as refined carbohydrates, trans fats, or alcoholic beverages.
Non-communicable
A non-communicable disease is a medical condition or disease that is non-transmissible. Non-communicable diseases cannot be spread directly from one person to another. Heart disease and cancer are examples of non-communicable diseases in humans.
Prevention
Many diseases and disorders can be prevented through a variety of means. These include sanitation, proper nutrition, adequate exercise, vaccinations and other self-care and public health measures.
Treatments
Medical therapies or treatments are efforts to cure or improve a disease or other health problems. In the medical field, therapy is synonymous with the word treatment. Among psychologists, the term may refer specifically to psychotherapy or "talk therapy". Common treatments include medications, surgery, medical devices, and self-care. Treatments may be provided by an organized health care system, or informally, by the patient or family members.
Preventive healthcare is a way to avoid an injury, sickness, or disease in the first place. A treatment or cure is applied after a medical problem has already started. A treatment attempts to improve or remove a problem, but treatments may not produce permanent cures, especially in chronic diseases. Cures are a subset of treatments that reverse diseases completely or end medical problems permanently. Many diseases that cannot be completely cured are still treatable. Pain management (also called pain medicine) is that branch of medicine employing an interdisciplinary approach to the relief of pain and improvement in the quality of life of those living with pain.
Treatment for medical emergencies must be provided promptly, often through an emergency department or, in less critical situations, through an urgent care facility.
Epidemiology
Epidemiology is the study of the factors that cause or encourage diseases. Some diseases are more common in certain geographic areas, among people with certain genetic or socioeconomic characteristics, or at different times of the year.
Epidemiology is considered a cornerstone methodology of public health research and is highly regarded in evidence-based medicine for identifying risk factors for diseases. In the study of communicable and non-communicable diseases, the work of epidemiologists ranges from outbreak investigation to study design, data collection, and analysis including the development of statistical models to test hypotheses and the documentation of results for submission to peer-reviewed journals. Epidemiologists also study the interaction of diseases in a population, a condition known as a syndemic. Epidemiologists rely on a number of other scientific disciplines such as biology (to better understand disease processes), biostatistics (the current raw information available), Geographic Information Science (to store data and map disease patterns) and social science disciplines (to better understand proximate and distal risk factors). Epidemiology can help identify causes as well as guide prevention efforts.
In studying diseases, epidemiology faces the challenge of defining them. Especially for poorly understood diseases, different groups might use significantly different definitions. Without an agreed-on definition, different researchers may report different numbers of cases and characteristics of the disease.
Some morbidity databases are compiled with data supplied by state and territory health authorities at national or larger scales (such as the European Hospital Morbidity Database (HMDB)), and may contain hospital discharge data by detailed diagnosis, age and sex. The European HMDB data were submitted by European countries to the World Health Organization Regional Office for Europe.
Burdens of disease
Disease burden is the impact of a health problem in an area measured by financial cost, mortality, morbidity, or other indicators.
There are several measures used to quantify the burden imposed by diseases on people. The years of potential life lost (YPLL) is a simple estimate of the number of years that a person's life was shortened due to a disease. For example, if a person dies at the age of 65 from a disease, and would probably have lived until age 80 without that disease, then that disease has caused a loss of 15 years of potential life. YPLL measurements do not account for how disabled a person is before dying, so the measurement treats a person who dies suddenly and a person who died at the same age after decades of illness as equivalent. In 2004, the World Health Organization calculated that 932 million years of potential life were lost to premature death.
The quality-adjusted life year (QALY) and disability-adjusted life year (DALY) metrics are similar but take into account whether the person was healthy after diagnosis. In addition to the number of years lost due to premature death, these measurements add part of the years lost to being sick. Unlike YPLL, these measurements show the burden imposed on people who are very sick, but who live a normal lifespan. A disease that has high morbidity, but low mortality, has a high DALY and a low YPLL. In 2004, the World Health Organization calculated that 1.5 billion disability-adjusted life years were lost to disease and injury. In the developed world, heart disease and stroke cause the most loss of life, but neuropsychiatric conditions like major depressive disorder cause the most years lost to being sick.
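As a rough illustration of how these measures treat the same cases differently, the sketch below computes YPLL and a simplified DALY for two hypothetical patients. The reference life expectancy, disability weight, and patient histories are invented for the example, and the DALY formula shown is the simplified sum of years of life lost and disability-weighted years lived sick, ignoring the discounting and age-weighting used in formal WHO calculations.

```python
# Illustrative only: simplified YPLL and DALY for two hypothetical patients.
REFERENCE_AGE = 80  # assumed standard life expectancy used as the reference


def ypll(age_at_death: int, reference_age: int = REFERENCE_AGE) -> int:
    """Years of potential life lost: years of life cut short by premature death."""
    return max(reference_age - age_at_death, 0)


def daly(age_at_death: int, years_sick: float, disability_weight: float,
         reference_age: int = REFERENCE_AGE) -> float:
    """Simplified DALY = years of life lost + (years lived with disability x weight)."""
    yll = max(reference_age - age_at_death, 0)
    yld = years_sick * disability_weight
    return yll + yld


# Patient A dies suddenly at 65 with no prior illness (the YPLL example above).
print(ypll(65))           # 15 years of potential life lost
print(daly(65, 0, 0.0))   # 15.0 -- identical, since no years were lived with disability

# Patient B lives a normal lifespan (dies at 80) but spends 20 years severely ill.
print(ypll(80))           # 0 -- YPLL ignores the decades of illness
print(daly(80, 20, 0.4))  # 8.0 -- the DALY captures the burden of being sick
```

This matches the contrast drawn above: a disease with high morbidity but low mortality yields a high DALY and a low YPLL.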
Society and culture
How a society responds to diseases is the subject of medical sociology.
A condition may be considered a disease in some cultures or eras but not in others. For example, obesity can represent wealth and abundance, and is a status symbol in famine-prone areas and some places hard-hit by HIV/AIDS. Epilepsy is considered a sign of spiritual gifts among the Hmong people.
Sickness confers the social legitimization of certain benefits, such as illness benefits, work avoidance, and being looked after by others. The person who is sick takes on a social role called the sick role. A person who responds to a dreaded disease, such as cancer, in a culturally acceptable fashion may be publicly and privately honored with higher social status. In return for these benefits, the sick person is obligated to seek treatment and work to become well once more. As a comparison, consider pregnancy, which is not interpreted as a disease or sickness, even if the mother and baby may both benefit from medical care.
Most religions grant exceptions from religious duties to people who are sick. For example, one whose life would be endangered by fasting on Yom Kippur or during the month of Ramadan is exempted from the requirement, or even forbidden from participating. People who are sick are also exempted from social duties. For example, ill health is the only socially acceptable reason for an American to refuse an invitation to the White House.
The identification of a condition as a disease, rather than as simply a variation of human structure or function, can have significant social or economic implications. The controversial recognition of diseases such as repetitive stress injury (RSI) and post-traumatic stress disorder (PTSD) has had a number of positive and negative effects on the financial and other responsibilities of governments, corporations, and institutions towards individuals, as well as on the individuals themselves. The social implication of viewing aging as a disease could be profound, though this classification is not yet widespread.
Lepers were people who were historically shunned because they had an infectious disease, and the term "leper" still evokes social stigma. Fear of disease can still be a widespread social phenomenon, though not all diseases evoke extreme social stigma.
Social standing and economic status affect health. Diseases of poverty are diseases that are associated with poverty and low social status; diseases of affluence are diseases that are associated with high social and economic status. Which diseases are associated with which states varies according to time, place, and technology. Some diseases, such as diabetes mellitus, may be associated with both poverty (poor food choices) and affluence (long lifespans and sedentary lifestyles), through different mechanisms. The term lifestyle diseases describes diseases associated with longevity that are more common among older people. For example, cancer is far more common in societies in which most members live until they reach the age of 80 than in societies in which most members die before they reach the age of 50.
Language of disease
An illness narrative is a way of organizing a medical experience into a coherent story that illustrates the sick individual's personal experience.
People use metaphors to make sense of their experiences with disease. The metaphors move disease from an objective thing that exists to an affective experience. The most popular metaphors draw on military concepts: Disease is an enemy that must be feared, fought, battled, and routed. The patient or the healthcare provider is a warrior, rather than a passive victim or bystander. The agents of communicable diseases are invaders; non-communicable diseases constitute internal insurrection or civil war. Because the threat is urgent, perhaps a matter of life and death, unthinkably radical, even oppressive, measures are society's and the patient's moral duty as they courageously mobilize to struggle against destruction. The War on Cancer is an example of this metaphorical use of language. This language is empowering to some patients, but leaves others feeling like they are failures.
Another class of metaphors describes the experience of illness as a journey: The person travels to or from a place of disease, and changes himself, discovers new information, or increases his experience along the way. He may travel "on the road to recovery" or make changes to "get on the right track" or choose "pathways". Some are explicitly immigration-themed: the patient has been exiled from the home territory of health to the land of the ill, changing identity and relationships in the process. This language is more common among British healthcare professionals than the language of physical aggression.
Some metaphors are disease-specific. Slavery is a common metaphor for addictions: The alcoholic is enslaved by drink, and the smoker is captive to nicotine. Some cancer patients treat the loss of their hair from chemotherapy as a metonymy or metaphor for all the losses caused by the disease.
Some diseases are used as metaphors for social ills: "Cancer" is a common description for anything that is endemic and destructive in society, such as poverty, injustice, or racism. AIDS was seen as a divine judgment for moral decadence, and only by purging itself from the "pollution" of the "invader" could society become healthy again. More recently, when AIDS seemed less threatening, this type of emotive language was applied to avian flu and type 2 diabetes mellitus. Authors in the 19th century commonly used tuberculosis as a symbol and a metaphor for transcendence. People with the disease were portrayed in literature as having risen above daily life to become ephemeral objects of spiritual or artistic achievement. In the 20th century, after its cause was better understood, the same disease became the emblem of poverty, squalor, and other social problems.
See also
Cryptogenic disease, a disease whose cause is currently unknown
Developmental disability, severe, lifelong disabilities attributable to mental or physical impairments
Environmental disease
Host–pathogen interaction
Lists of diseases
Mitochondrial disease
Philosophy of medicine
Plant pathology
Rare disease, a disease that affects very few people
Sociology of health and illness
Syndrome
References
External links
"Man and Disease", BBC Radio 4 discussion with Anne Hardy, David Bradley & Chris Dye (In Our Time, 15 December 2002)
CTD The Comparative Toxicogenomics Database is a scientific resource connecting chemicals, genes, and human diseases.
Free online health-risk assessment by Your Disease Risk at Washington University in St. Louis
Health Topics A–Z, fact sheets about many common diseases at the Centers for Disease Control
Health Topics, MedlinePlus descriptions of most diseases, with access to current research articles.
NLM Comprehensive database from the US National Library of Medicine
OMIM Comprehensive information on genes that cause disease at Online Mendelian Inheritance in Man
Report: The global burden of disease from the World Health Organization (WHO), 2004
The Merck Manual containing detailed description of most diseases
Human body
The human body is the entire structure of a human being. It is composed of many different types of cells that together create tissues and subsequently organs and then organ systems.
The external human body consists of a head, hair, neck, torso (which includes the thorax and abdomen), genitals, arms, hands, legs, and feet. The internal human body includes organs, teeth, bones, muscle, tendons, ligaments, blood vessels and blood, lymphatic vessels and lymph.
The study of the human body includes anatomy, physiology, histology and embryology. The body varies anatomically in known ways. Physiology focuses on the systems and organs of the human body and their functions. Many systems and mechanisms interact in order to maintain homeostasis, with safe levels of substances such as sugar, iron, and oxygen in the blood.
The body is studied by health professionals, physiologists, anatomists, and artists to assist them in their work.
Composition
The human body is composed of elements including hydrogen, oxygen, carbon, calcium and phosphorus. These elements reside in trillions of cells and non-cellular components of the body.
The adult male body is about 60% water by weight. This body water is divided between the extracellular fluid, which includes the blood plasma and the interstitial fluid, and the fluid inside cells. The content, acidity and composition of the water inside and outside cells is carefully maintained. The main electrolytes in body water outside cells are sodium and chloride, whereas within cells they are potassium and other phosphates.
Cells
The body contains trillions of cells, the fundamental unit of life. At maturity, there are roughly 30 trillion cells, and 38 trillion bacteria in the body, an estimate arrived at by totaling the cell numbers of all the organs of the body and cell types. The skin of the body is also host to billions of commensal organisms as well as immune cells. Not all parts of the body are made from cells. Cells sit in an extracellular matrix that consists of proteins such as collagen, surrounded by extracellular fluids.
Genome
Cells in the body function because of DNA. DNA sits within the nucleus of a cell. Here, parts of DNA are copied and sent to the body of the cell via RNA. The RNA is then used to create proteins, which form the basis for cells, their activity, and their products. Proteins dictate cell function and gene expression; a cell is able to self-regulate through the amount of protein it produces. However, not all cells have DNA; some cells, such as mature red blood cells, lose their nucleus as they mature.
Tissues
The body consists of many different types of tissue, defined as cells that act with a specialised function. The study of tissues is called histology and is often done with a microscope. The body consists of four main types of tissues. These are lining cells (epithelia), connective tissue, nerve tissue and muscle tissue.
Cells
Cells that line surfaces exposed to the outside world or gastrointestinal tract (epithelia) or internal cavities (endothelium) come in numerous shapes and forms – from single layers of flat cells, to cells with small beating hair-like cilia in the lungs, to column-like cells that line the stomach. Endothelial cells are cells that line internal cavities including blood vessels and glands. Lining cells regulate what can and cannot pass through them, protect internal structures, and function as sensory surfaces.
Organs
Organs, structured collections of cells with a specific function, mostly sit within the body, with the exception of skin. Examples include the heart, lungs and liver. Many organs reside within cavities within the body. These cavities include the abdomen (which contains the stomach, for example) and pleura, which contains the lungs.
Heart
The heart is an organ located in the thoracic cavity between the lungs and slightly to the left. It is surrounded by the pericardium, which holds it in place in the mediastinum, serves to protect it from blunt trauma and infection, and helps lubricate the movement of the heart via pericardial fluid. The heart works by pumping blood around the body, allowing oxygen, nutrients, waste, hormones and white blood cells to be transported.
The heart is composed of two atria and two ventricles. The primary purpose of the atria is to allow uninterrupted venous blood flow to the heart during ventricular systole. This allows enough blood to get into the ventricles during atrial systole. Consequently, the atria allow a cardiac output roughly 75% greater than would be possible without them. The purpose of the ventricles is to pump blood to the lungs through the right ventricle and to the rest of the body through the left ventricle.
The heart has an electrical conduction system to control the contraction and relaxation of the muscles. It starts in the sinoatrial node traveling through the atria causing them to pump blood into the ventricles. It then travels to the atrioventricular node, which makes the signal slow down slightly allowing the ventricles to fill with blood before pumping it out and starting the cycle over again.
Coronary artery disease is the leading cause of death worldwide, making up 16% of all deaths. It is caused by the buildup of plaque in the coronary arteries supplying the heart; eventually the arteries may become so narrow that not enough blood is able to reach the myocardium, a condition known as myocardial infarction or heart attack. This can cause heart failure or cardiac arrest and eventually death. Risk factors for coronary artery disease include obesity, smoking, high cholesterol, high blood pressure, lack of exercise and diabetes. Cancer can affect the heart, though it is exceedingly rare and has usually metastasized from another part of the body such as the lungs or breasts. This is because heart cells quickly stop dividing and all growth occurs through increase in cell size rather than cell division.
Gallbladder
The gallbladder is a hollow pear-shaped organ located posterior to the inferior middle part of the right lobe of the liver. It is variable in shape and size. It stores bile before it is released into the small intestine via the common bile duct to help with digestion of fats. It receives bile from the liver via the cystic duct, which connects to the common hepatic duct to form the common bile duct.
The gallbladder gets its blood supply from the cystic artery, which in most people, emerges from the right hepatic artery.
Gallstones are a common condition in which one or more stones form in the gallbladder or biliary tract. Most people are asymptomatic, but if a stone blocks the biliary tract it causes a gallbladder attack, whose symptoms may include sudden pain in the upper right abdomen or center of the abdomen. Nausea and vomiting may also occur. Typical treatment is removal of the gallbladder through a procedure called a cholecystectomy. Having gallstones is a risk factor for gallbladder cancer, which, although quite uncommon, is rapidly fatal if not diagnosed early.
Systems
Circulatory system
The circulatory system consists of the heart and blood vessels (arteries, veins and capillaries). The heart propels the circulation of the blood, which serves as a "transportation system" to transfer oxygen, fuel, nutrients, waste products, immune cells and signaling molecules (i.e. hormones) from one part of the body to another. Paths of blood circulation within the human body can be divided into two circuits: the pulmonary circuit, which pumps blood to the lungs to receive oxygen and leave carbon dioxide, and the systemic circuit, which carries blood from the heart off to the rest of the body. The blood consists of fluid that carries cells in the circulation, including some that move from tissue to blood vessels and back, as well as the spleen and bone marrow.
Digestive system
The digestive system consists of the mouth, including the tongue and teeth, the esophagus, stomach, small and large intestines, and rectum (together the gastrointestinal tract), as well as the liver, pancreas, gallbladder, and salivary glands. It converts food into small, nutritional, non-toxic molecules for distribution and absorption into the body. These molecules take the form of proteins (which are broken down into amino acids), fats, vitamins and minerals (the last of which are mainly ionic rather than molecular). After being swallowed, food moves through the gastrointestinal tract by means of peristalsis: the systematic expansion and contraction of muscles to push food from one area to the next.
Digestion begins in the mouth, which chews food into smaller pieces for easier digestion. Then it is swallowed, and moves through the esophagus to the stomach. In the stomach, food is mixed with gastric acids to allow the extraction of nutrients. What is left is called chyme; this then moves into the small intestine, which absorbs the nutrients and water from the chyme. What remains passes on to the large intestine, where it is dried to form feces; these are then stored in the rectum until they are expelled through the anus.
Endocrine system
The endocrine system consists of the principal endocrine glands: the pituitary, thyroid, adrenals, pancreas, parathyroids, and gonads, but nearly all organs and tissues produce specific endocrine hormones as well. The endocrine hormones serve as signals from one body system to another regarding an enormous array of conditions, resulting in a variety of changes in function.
Immune system
The immune system consists of the white blood cells, the thymus, lymph nodes and lymph channels, which are also part of the lymphatic system. The immune system provides a mechanism for the body to distinguish its own cells and tissues from outside cells and substances and to neutralize or destroy the latter by using specialized proteins such as antibodies, cytokines, and toll-like receptors, among many others.
Integumentary system
The integumentary system consists of the covering of the body (the skin), including hair and nails as well as other functionally important structures such as the sweat glands and sebaceous glands. The skin provides containment, structure, and protection for other organs, and serves as a major sensory interface with the outside world.
Lymphatic system
The lymphatic system extracts, transports and metabolizes lymph, the fluid found in between cells. The lymphatic system is similar to the circulatory system in terms of both its structure and its most basic function, to carry a body fluid.
Musculoskeletal system
The musculoskeletal system consists of the human skeleton (which includes bones, ligaments, tendons, joints and cartilage) and attached muscles. It gives the body basic structure and the ability for movement. In addition to their structural role, the larger bones in the body contain bone marrow, the site of production of blood cells. Also, all bones are major storage sites for calcium and phosphate. This system can be split up into the muscular system and the skeletal system.
Nervous system
The nervous system consists of the body's neurons and glial cells, which together form the nerves, ganglia and gray matter, which in turn form the brain and related structures. The brain is the organ of thought, emotion, memory, and sensory processing; it serves many aspects of communication and controls various systems and functions. The special senses consist of vision, hearing, taste, and smell. The eyes, ears, tongue, and nose gather information about the body's environment.
From a structural perspective, the nervous system is typically subdivided into two component parts: the central nervous system (CNS), composed of the brain and the spinal cord; and the peripheral nervous system (PNS), composed of the nerves and ganglia outside the brain and spinal cord. The CNS is mostly responsible for organizing motion, processing sensory information, thought, memory, cognition and other such functions. It remains a matter of some debate whether the CNS directly gives rise to consciousness. The peripheral nervous system (PNS) is mostly responsible for gathering information with sensory neurons and directing body movements with motor neurons.
From a functional perspective, the nervous system is again typically divided into two component parts: the somatic nervous system (SNS) and the autonomic nervous system (ANS). The SNS is involved in voluntary functions like speaking and sensory processes. The ANS is involved in involuntary processes, such as digestion and regulating blood pressure.
The nervous system is subject to many different diseases. In epilepsy, abnormal electrical activity in the brain can cause seizures. In multiple sclerosis, the immune system attacks the nerve linings, damaging the nerves' ability to transmit signals. Amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease, is a motor neuron disease which gradually reduces movement in patients. There are also many other diseases of the nervous system.
Reproductive system
The purpose of the reproductive system is to reproduce and nurture the growth of offspring. Its functions include the production of germ cells and hormones. The sex organs of the male reproductive system and the female reproductive system develop and mature at puberty. These systems include the internal and external genitalia.
Female puberty generally occurs between the ages of 9 and 13 and is characterized by ovulation and menstruation; the growth of secondary sex characteristics, such as growth of pubic and underarm hair, breast, uterine and vaginal growth, widening hips and increased height and weight, also occur during puberty. Male puberty sees the further development of the penis and testicles.
The female inner sex organs are the two ovaries, their fallopian tubes, the uterus, and the cervix. At birth there are about 70,000 immature egg cells that degenerate until at puberty there are around 40,000. No more egg cells are produced. Hormones stimulate the beginning of menstruation, and the ongoing menstrual cycles. The female external sex organs are the vulva (labia, clitoris, and vestibule).
The male external genitalia include the penis and scrotum that contains the testicles. The testicle is the gonad, the sex gland that produces the sperm cells. Unlike the egg cells in the female, sperm cells are produced throughout life. Other internal sex organs are the epididymides, vasa deferentia, and some accessory glands.
Diseases that affect the reproductive system include polycystic ovary syndrome, a number of disorders of the testicles including testicular torsion, and a number of sexually transmitted infections including syphilis, HIV, chlamydia, HPV and genital warts. Cancer can affect most parts of the reproductive system, including the penis, testicles, prostate, ovaries, cervix, vagina, fallopian tubes, uterus and vulva.
Respiratory system
The respiratory system consists of the nose, nasopharynx, trachea, and lungs. It brings oxygen from the air and excretes carbon dioxide and water back into the air. First, air is drawn through the trachea into the lungs as the diaphragm contracts and moves downward, lowering the pressure in the chest. Air is briefly held inside small sacs known as alveoli (sing.: alveolus) before being expelled from the lungs when the diaphragm relaxes. Each alveolus is surrounded by capillaries carrying deoxygenated blood, which absorbs oxygen out of the air and into the bloodstream.
For the respiratory system to function properly, there need to be as few impediments as possible to the movement of air within the lungs. Inflammation of the lungs and excess mucus are common sources of breathing difficulties. In asthma, the respiratory system is persistently inflamed, causing wheezing or shortness of breath. Pneumonia occurs through infection of the alveoli, and may be caused by tuberculosis. Emphysema, commonly a result of smoking, is caused by damage to connections between the alveoli.
Urinary system
The urinary system consists of the two kidneys, two ureters, bladder, and urethra. It removes waste materials from the blood through urine, which carries a variety of waste molecules and excess ions and water out of the body.
First, the kidneys filter the blood through their nephrons, removing waste products such as urea and creatinine, maintaining the proper balance of electrolytes, and turning the waste products into urine by combining them with water from the blood. The kidneys filter about 150 quarts (roughly 140 liters) of blood daily, but most of this fluid is returned to the blood stream, with only 1-2 quarts (about 1-2 liters) ending up as urine. The urine is carried by the ureters from the kidneys down to the bladder.
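As a rough check of these figures, the Python sketch below works through the arithmetic; the quart-to-litre factor and the midpoint urine volume are assumed values used only for illustration.

```python
# Worked arithmetic for the figures above (approximate, for illustration only):
# nearly all of the fluid the kidneys filter is returned to the blood.
QUART_L = 0.946                      # litres per US quart (assumed conversion factor)

filtered_l = 150 * QUART_L           # ~142 L filtered per day
urine_l = 1.5 * QUART_L              # ~1.4 L excreted as urine (midpoint of 1-2 quarts)
reabsorbed_fraction = 1 - urine_l / filtered_l

print(f"filtered ≈ {filtered_l:.0f} L/day, urine ≈ {urine_l:.1f} L/day")
print(f"fraction returned to the blood ≈ {reabsorbed_fraction:.1%}")   # about 99%
```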
The smooth muscle lining the ureter walls continuously tightens and relaxes in a process called peristalsis to force urine away from the kidneys and down into the bladder. Small amounts of urine are released into the bladder every 10–15 seconds.
The bladder is a hollow, balloon-shaped organ located in the pelvis. It stores urine until the brain signals it to relax the urinary sphincter and release the urine into the urethra, starting urination. A normal bladder can comfortably hold up to 16 ounces (about half a liter) of urine for 3–5 hours.
Numerous diseases affect the urinary system. Kidney stones form when materials in the urine concentrate enough to form a solid mass. Urinary tract infections can cause pain when urinating and frequent urination, and can be fatal if left untreated. Renal failure occurs when the kidneys fail to adequately filter waste from the blood and can lead to death if not treated with dialysis or kidney transplantation. Cancer can affect the bladder, kidneys, urethra and ureters, with the latter two being far rarer.
Anatomy
Human anatomy is the study of the shape and form of the human body. The human body has four limbs (two arms and two legs), a head and a neck, which connect to the torso. The body's shape is determined by a strong skeleton made of bone and cartilage, surrounded by fat (adipose tissue), muscle, connective tissue, organs, and other structures. The spine at the back of the skeleton contains the flexible vertebral column, which surrounds the spinal cord, which is a collection of nerve fibres connecting the brain to the rest of the body. Nerves connect the spinal cord and brain to the rest of the body. All major bones, muscles, and nerves in the body are named, with the exception of anatomical variations such as sesamoid bones and accessory muscles.
Blood vessels carry blood throughout the body, which moves because of the beating of the heart. Venules and veins collect blood low in oxygen from tissues throughout the body. These collect in progressively larger veins until they reach the body's two largest veins, the superior and inferior vena cava, which drain blood into the right side of the heart. From here, the blood is pumped into the lungs where it receives oxygen and drains back into the left side of the heart. From here, it is pumped into the body's largest artery, the aorta, and then progressively smaller arteries and arterioles until it reaches tissue. Here, blood passes from small arteries into capillaries, then small veins and the process begins again. Blood carries oxygen, waste products, and hormones from one place in the body to another. Blood is filtered at the kidneys and liver.
The body contains a number of body cavities, separated areas which house different organ systems. The brain and central nervous system reside in an area protected from the rest of the body by the blood brain barrier. The lungs sit in the pleural cavity. The intestines, liver, and spleen sit in the abdominal cavity.
Height, weight, shape and other body proportions vary individually and with age and sex. Body shape is influenced by the distribution of bones, muscle and fat tissue.
Physiology
Human physiology is the study of how the human body functions. This includes the mechanical, physical, bioelectrical, and biochemical functions of humans in good health, from organs to the cells of which they are composed. The human body consists of many interacting systems of organs. These interact to maintain homeostasis, keeping the body in a stable state with safe levels of substances such as sugar and oxygen in the blood.
Each system contributes to the homeostasis of itself, of other systems, and of the body as a whole. Some combined systems are referred to by joint names. For example, the nervous system and the endocrine system operate together as the neuroendocrine system. The nervous system receives information from the body, and transmits this to the brain via nerve impulses and neurotransmitters. At the same time, the endocrine system releases hormones that help to regulate, for example, blood pressure and blood volume. Together, these systems regulate the internal environment of the body, maintaining blood flow, posture, energy supply, temperature, and acid-base balance (pH).
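As a loose illustration of the negative feedback that underlies homeostasis, the toy Python sketch below pulls a disturbed variable back toward its set point; the gain, step count and temperature values are arbitrary assumptions rather than physiological parameters.

```python
# Toy negative-feedback loop (not a physiological model): a regulated variable
# is repeatedly corrected in proportion to its deviation from a set point.
def regulate(value, set_point, gain=0.3, steps=20):
    """Return the trajectory of `value` as feedback pulls it toward `set_point`."""
    trajectory = [round(value, 2)]
    for _ in range(steps):
        error = set_point - value      # sensed deviation from the set point
        value += gain * error          # corrective response proportional to the error
        trajectory.append(round(value, 2))
    return trajectory

# Example: a body temperature disturbed to 39.5 returns toward a 37.0 set point.
print(regulate(39.5, 37.0))
```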
Development
Development of the human body is the process of growth to maturity. The process begins with fertilisation, where an egg released from the ovary of a female is penetrated by sperm. The egg then lodges in the uterus, where an embryo and later fetus develop until birth. Growth and development occur after birth, and include both physical and psychological development, influenced by genetic, hormonal, environmental and other factors. Development and growth continue throughout life, through childhood, adolescence, and through adulthood to old age, and are referred to as the process of aging.
Society and culture
Professional study
Health professionals learn about the human body from illustrations, models, and demonstrations. Medical and dental students in addition gain practical experience, for example by dissection of cadavers. Human anatomy, physiology, and biochemistry are basic medical sciences, generally taught to medical students in their first year at medical school.
Depiction
In Western societies, the contexts for depictions of the human body include information, art and pornography. Information includes both science and education, such as anatomical drawings. Any ambiguous image not easily fitting into one of these categories may be misinterpreted, leading to disputes. The most contentious disputes are between fine art and erotic images, which define the legal distinction of which images are permitted or prohibited.
History of anatomy
In Ancient Greece, the Hippocratic Corpus described the anatomy of the skeleton and muscles. The 2nd century physician Galen of Pergamum compiled classical knowledge of anatomy into a text that was used throughout the Middle Ages. In the Renaissance, Andreas Vesalius (1514–1564) pioneered the modern study of human anatomy by dissection, writing the influential book De humani corporis fabrica. Anatomy advanced further with the invention of the microscope and the study of the cellular structure of tissues and organs. Modern anatomy uses techniques such as magnetic resonance imaging, computed tomography, fluoroscopy and ultrasound imaging to study the body in unprecedented detail.
History of physiology
The study of human physiology began with Hippocrates in Ancient Greece, around 420 BCE, and with Aristotle (384–322 BCE) who applied critical thinking and emphasis on the relationship between structure and function. Galen was the first to use experiments to probe the body's functions. The term physiology was introduced by the French physician Jean Fernel (1497–1558). In the 17th century, William Harvey (1578–1657) described the circulatory system, pioneering the combination of close observation with careful experiment. In the 19th century, physiological knowledge began to accumulate at a rapid rate with the cell theory of Matthias Schleiden and Theodor Schwann in 1838, that organisms are made up of cells. Claude Bernard (1813–1878) created the concept of the milieu interieur (internal environment), which Walter Cannon (1871–1945) later said was regulated to a steady state in homeostasis. In the 20th century, the physiologists Knut Schmidt-Nielsen and George Bartholomew extended their studies to comparative physiology and ecophysiology. Most recently, evolutionary physiology has become a distinct subdiscipline.
See also
Organ system
Outline of human anatomy
The Birth of the Clinic: An Archaeology of Medical Perception
Human body lists
List of skeletal muscles of the human body
List of organs of the human body
List of distinct cell types in the adult human body
List of human microbiota
References
Books
External links
The Book of Humans (from the late 18th and early 19th centuries) (archived 26 January 2014)
Inner Body (archived 10 December 1997)
Anatomia 1522–1867: Anatomical Plates from the Thomas Fisher Rare Book Library | 0.803969 | 0.999776 | 0.803789 |
Physiology | Physiology (; ) is the scientific study of functions and mechanisms in a living system. As a subdiscipline of biology, physiology focuses on how organisms, organ systems, individual organs, cells, and biomolecules carry out chemical and physical functions in a living system. According to the classes of organisms, the field can be divided into medical physiology, animal physiology, plant physiology, cell physiology, and comparative physiology.
Central to physiological functioning are biophysical and biochemical processes, homeostatic control mechanisms, and communication between cells. Physiological state is the condition of normal function. In contrast, pathological state refers to abnormal conditions, including human diseases.
The Nobel Prize in Physiology or Medicine is awarded by the Nobel Assembly at the Karolinska Institute for exceptional scientific achievements in physiology related to the field of medicine.
Foundations
Because physiology focuses on the functions and mechanisms of living organisms at all levels, from the molecular and cellular level to the level of whole organisms and populations, its foundations span a range of key disciplines:
Anatomy is the study of the structure and organization of living organisms, from the microscopic level of cells and tissues to the macroscopic level of organs and systems. Anatomical knowledge is important in physiology because the structure and function of an organism are often dictated by one another.
Biochemistry is the study of the chemical processes and substances that occur within living organisms. Knowledge of biochemistry provides the foundation for understanding cellular and molecular processes that are essential to the functioning of organisms.
Biophysics is the study of the physical properties of living organisms and their interactions with their environment. It helps to explain how organisms sense and respond to different stimuli, such as light, sound, and temperature, and how they maintain homeostasis, or a stable internal environment.
Genetics is the study of heredity and the variation of traits within and between populations. It provides insights into the genetic basis of physiological processes and the ways in which genes interact with the environment to influence an organism's phenotype.
Evolutionary biology is the study of the processes that have led to the diversity of life on Earth. It helps to explain the origin and adaptive significance of physiological processes and the ways in which organisms have evolved to cope with their environment.
Subdisciplines
There are many ways to categorize the subdisciplines of physiology:
based on the taxa studied: human physiology, animal physiology, plant physiology, microbial physiology, viral physiology
based on the level of organization: cell physiology, molecular physiology, systems physiology, organismal physiology, ecological physiology, integrative physiology
based on the process that causes physiological variation: developmental physiology, environmental physiology, evolutionary physiology
based on the ultimate goals of the research: applied physiology (e.g., medical physiology), non-applied (e.g., comparative physiology)
Subdisciplines by level of organisation
Cell physiology
Although there are differences between animal, plant, and microbial cells, the basic physiological functions of cells can be divided into the processes of cell division, cell signaling, cell growth, and cell metabolism.
Subdisciplines by taxa
Plant physiology
Plant physiology is a subdiscipline of botany concerned with the functioning of plants. Closely related fields include plant morphology, plant ecology, phytochemistry, cell biology, genetics, biophysics, and molecular biology. Fundamental processes of plant physiology include photosynthesis, respiration, plant nutrition, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, seed germination, dormancy, and stomata function and transpiration. Absorption of water by roots, production of food in the leaves, and growth of shoots towards light are examples of plant physiology.
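For reference, photosynthesis is often summarized by the simplified overall equation 6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2, with cellular respiration carrying out essentially the reverse conversion to release usable energy.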
Animal physiology
Human physiology
Human physiology is the study of how the human body's systems and functions work together to maintain a stable internal environment. It includes the study of the nervous, endocrine, cardiovascular, respiratory, digestive, and urinary systems, as well as cellular and exercise physiology. Understanding human physiology is essential for diagnosing and treating health conditions and promoting overall wellbeing.
It seeks to understand the mechanisms that work to keep the human body alive and functioning, through scientific enquiry into the nature of mechanical, physical, and biochemical functions of humans, their organs, and the cells of which they are composed. The principal level of focus of physiology is at the level of organs and systems within systems. The endocrine and nervous systems play major roles in the reception and transmission of signals that integrate function in animals. Homeostasis is a major aspect with regard to such interactions within plants as well as animals. Integration, the biological basis of the study of physiology, refers to the overlap of many functions of the systems of the human body, as well as its accompanying form. It is achieved through communication that occurs in a variety of ways, both electrical and chemical.
Changes in physiology can impact the mental functions of individuals. Examples of this would be the effects of certain medications or toxic levels of substances. Change in behavior as a result of these substances is often used to assess the health of individuals.
Much of the foundation of knowledge in human physiology was provided by animal experimentation. Due to the frequent connection between form and function, physiology and anatomy are intrinsically linked and are studied in tandem as part of a medical curriculum.
Subdisciplines by research objective
Comparative physiology
Involving evolutionary physiology and environmental physiology, comparative physiology considers the diversity of functional characteristics across organisms.
History
The classical era
The study of human physiology as a medical field originates in classical Greece, at the time of Hippocrates (late 5th century BC). Outside of Western tradition, early forms of physiology or anatomy can be reconstructed as having been present at around the same time in China, India and elsewhere. Hippocrates incorporated the theory of humorism, which consisted of four basic substances: earth, water, air and fire. Each substance is paired with a corresponding humor: black bile, phlegm, blood, and yellow bile, respectively. Hippocrates also noted some emotional connections to the four humors, on which Galen would later expand. The critical thinking of Aristotle and his emphasis on the relationship between structure and function marked the beginning of physiology in Ancient Greece. Like Hippocrates, Aristotle took to the humoral theory of disease, which also consisted of four primary qualities in life: hot, cold, wet and dry. Galen (c. 129–200 AD) was the first to use experiments to probe the functions of the body. Unlike Hippocrates, Galen argued that humoral imbalances can be located in specific organs, including the entire body. His modification of this theory better equipped doctors to make more precise diagnoses. Galen also built on Hippocrates' idea that emotions were tied to the humors, and added the notion of temperaments: sanguine corresponds with blood; phlegmatic with phlegm; choleric with yellow bile; and melancholic with black bile. Galen also saw the human body as consisting of three connected systems: the brain and nerves, which are responsible for thoughts and sensations; the heart and arteries, which give life; and the liver and veins, which can be attributed to nutrition and growth. Galen was also the founder of experimental physiology, and for the next 1,400 years Galenic physiology was a powerful and influential tool in medicine.
Early modern period
Jean Fernel (1497–1558), a French physician, introduced the term "physiology". Galen, Ibn al-Nafis, Michael Servetus, Realdo Colombo, Amato Lusitano and William Harvey are credited with making important discoveries in the circulation of the blood. Santorio Santorio, in the 1610s, was the first to use a device to measure the pulse rate (the pulsilogium) and a thermoscope to measure temperature.
In 1791 Luigi Galvani described the role of electricity in the nerves of dissected frogs. In 1811, César Julien Jean Legallois studied respiration through animal dissection and lesion experiments and located the center of respiration in the medulla oblongata. In the same year, Charles Bell finished work on what would later become known as the Bell–Magendie law, which compared functional differences between dorsal and ventral roots of the spinal cord. In 1824, François Magendie described the sensory roots and produced the first evidence of the cerebellum's role in equilibration, completing the Bell–Magendie law.
In the 1820s, the French physiologist Henri Milne-Edwards introduced the notion of the physiological division of labor, which made it possible to "compare and study living things as if they were machines created by the industry of man." Inspired by the work of Adam Smith, Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." In more differentiated organisms, the functional labor could be apportioned between different instruments or systems (which he called appareils).
In 1858, Joseph Lister studied the causes of blood coagulation and of the inflammation that followed injuries and surgical wounds. He later discovered and implemented antiseptics in the operating room, and as a result decreased the death rate from surgery substantially.
The Physiological Society was founded in London in 1876 as a dining club. The American Physiological Society (APS) is a nonprofit organization that was founded in 1887. The Society is "devoted to fostering education, scientific research, and dissemination of information in the physiological sciences."
In 1891, Ivan Pavlov performed research on "conditional responses" that involved dogs' saliva production in response to a bell and visual stimuli.
In the 19th century, physiological knowledge began to accumulate at a rapid rate, in particular with the 1838 appearance of the cell theory of Matthias Schleiden and Theodor Schwann, which radically stated that organisms are made up of units called cells. Claude Bernard's (1813–1878) further discoveries ultimately led to his concept of milieu interieur (internal environment), which would later be taken up and championed as "homeostasis" by American physiologist Walter B. Cannon in 1929. By homeostasis, Cannon meant "the maintenance of steady states in the body and the physiological processes through which they are regulated." In other words, homeostasis is the body's ability to regulate its internal environment. William Beaumont was the first American to put physiology to practical use.
Nineteenth-century physiologists such as Michael Foster, Max Verworn, and Alfred Binet, based on Haeckel's ideas, elaborated what came to be called "general physiology", a unified science of life based on the cell actions, later renamed in the 20th century as cell biology.
Late modern period
In the 20th century, biologists became interested in how organisms other than human beings function, eventually spawning the fields of comparative physiology and ecophysiology. Major figures in these fields include Knut Schmidt-Nielsen and George Bartholomew. Most recently, evolutionary physiology has become a distinct subdiscipline.
In 1920, August Krogh won the Nobel Prize for discovering how blood flow is regulated in capillaries.
In 1954, Andrew Huxley and Hugh Huxley, alongside their research teams, discovered the sliding filament mechanism of contraction in skeletal muscle, known today as the sliding filament theory.
Recently, there have been intense debates about the vitality of physiology as a discipline (Is it dead or alive?). If physiology is perhaps less visible nowadays than during the golden age of the 19th century, it is in large part because the field has given birth to some of the most active domains of today's biological sciences, such as neuroscience, endocrinology, and immunology. Furthermore, physiology is still often seen as an integrative discipline, which can put together into a coherent framework data coming from various different domains.
Notable physiologists
Women in physiology
Initially, women were largely excluded from official involvement in any physiological society. The American Physiological Society, for example, was founded in 1887 and included only men in its ranks. In 1902, the American Physiological Society elected Ida Hyde as the first female member of the society. Hyde, a representative of the American Association of University Women and a global advocate for gender equality in education, attempted to promote gender equality in every aspect of science and medicine.
Soon thereafter, in 1913, J.S. Haldane proposed that women be allowed to formally join The Physiological Society, which had been founded in 1876. On 3 July 1915, six women were officially admitted: Florence Buchanan, Winifred Cullis, Ruth Skelton, Sarah C. M. Sowton, Constance Leetham Terry, and Enid M. Tribe. The centenary of the election of women was celebrated in 2015 with the publication of the book "Women Physiologists: Centenary Celebrations And Beyond For The Physiological Society."
Prominent women physiologists include:
Bodil Schmidt-Nielsen, the first woman president of the American Physiological Society in 1975.
Gerty Cori, along with her husband Carl Cori, received the Nobel Prize in Physiology or Medicine in 1947 for their discovery of the course of the catalytic conversion of glycogen, the storage form of glucose, including identification of the phosphate-containing intermediate glucose 1-phosphate and its role in metabolic mechanisms for energy production. They also discovered the Cori cycle, also known as the lactic acid cycle, which describes how lactic acid formed from glycogen in muscle is carried to the liver and converted back into glucose.
Barbara McClintock was awarded the 1983 Nobel Prize in Physiology or Medicine for the discovery of genetic transposition. McClintock is the only woman to have received an unshared Nobel Prize in Physiology or Medicine.
Gertrude Elion, along with George Hitchings and Sir James Black, received the Nobel Prize for Physiology or Medicine in 1988 for their development of drugs employed in the treatment of several major diseases, such as leukemia, some autoimmune disorders, gout, malaria, and viral herpes.
Linda B. Buck, along with Richard Axel, received the Nobel Prize in Physiology or Medicine in 2004 for their discovery of odorant receptors and the complex organization of the olfactory system.
Françoise Barré-Sinoussi, along with Luc Montagnier, received the Nobel Prize in Physiology or Medicine in 2008 for their work on the identification of the Human Immunodeficiency Virus (HIV), the cause of Acquired Immunodeficiency Syndrome (AIDS).
Elizabeth Blackburn, along with Carol W. Greider and Jack W. Szostak, was awarded the 2009 Nobel Prize for Physiology or Medicine for the discovery of the genetic composition and function of telomeres and the enzyme called telomerase.
See also
Outline of physiology
Biochemistry
Biophysics
Cytoarchitecture
Defense physiology
Ecophysiology
Exercise physiology
Fish physiology
Insect physiology
Human body
Molecular biology
Metabolome
Neurophysiology
Pathophysiology
Pharmacology
Physiome
American Physiological Society
International Union of Physiological Sciences
The Physiological Society
Brazilian Society of Physiology
References
Bibliography
Human physiology
Widmaier, E.P., Raff, H., Strang, K.T. Vander's Human Physiology. 11th Edition, McGraw-Hill, 2009.
Marieb, E.N. Essentials of Human Anatomy and Physiology. 10th Edition, Benjamin Cummings, 2012.
Animal physiology
Hill, R.W., Wyse, G.A., Anderson, M. Animal Physiology, 3rd ed. Sinauer Associates, Sunderland, 2012.
Moyes, C.D., Schulte, P.M. Principles of Animal Physiology, second edition. Pearson/Benjamin Cummings. Boston, MA, 2008.
Randall, D., Burggren, W., and French, K. Eckert Animal Physiology: Mechanism and Adaptation, 5th Edition. W.H. Freeman and Company, 2002.
Schmidt-Nielsen, K. Animal Physiology: Adaptation and Environment. Cambridge & New York: Cambridge University Press, 1997.
Withers, P.C. Comparative animal physiology. Saunders College Publishing, New York, 1992.
Plant physiology
Larcher, W. Physiological plant ecology (4th ed.). Springer, 2001.
Salisbury, F.B, Ross, C.W. Plant physiology. Brooks/Cole Pub Co., 1992
Taiz, L., Zieger, E. Plant Physiology (5th ed.), Sunderland, Massachusetts: Sinauer, 2010.
Fungal physiology
Griffin, D.H. Fungal Physiology, Second Edition. Wiley-Liss, New York, 1994.
Protistan physiology
Levandowsky, M. Physiological Adaptations of Protists. In: Cell physiology sourcebook: essentials of membrane biophysics. Amsterdam; Boston: Elsevier/AP, 2012.
Levandowski, M., Hutner, S.H. (eds). Biochemistry and physiology of protozoa. Volumes 1, 2, and 3. Academic Press: New York, NY, 1979; 2nd ed.
Laybourn-Parry J. A Functional Biology of Free-Living Protozoa. Berkeley, California: University of California Press; 1984.
Algal physiology
Lobban, C.S., Harrison, P.J. Seaweed ecology and physiology. Cambridge University Press, 1997.
Stewart, W. D. P. (ed.). Algal Physiology and Biochemistry. Blackwell Scientific Publications, Oxford, 1974.
Bacterial physiology
El-Sharoud, W. (ed.). Bacterial Physiology: A Molecular Approach. Springer-Verlag, Berlin-Heidelberg, 2008.
Kim, B.H., Gadd, M.G. Bacterial Physiology and Metabolism. Cambridge, 2008.
Moat, A.G., Foster, J.W., Spector, M.P. Microbial Physiology, 4th ed. Wiley-Liss, Inc. New York, NY, 2002.
External links
physiologyINFO.org – public information site sponsored by the American Physiological Society
Biomedicine | Biomedicine (also referred to as Western medicine, mainstream medicine or conventional medicine) is a branch of medical science that applies biological and physiological principles to clinical practice. Biomedicine stresses standardized, evidence-based treatment validated through biological research, with treatment administered via formally trained doctors, nurses, and other such licensed practitioners.
Biomedicine also relates to many other categories in health and biology related fields. It has been the dominant system of medicine in the Western world for more than a century.
It includes many biomedical disciplines and areas of specialty that typically contain the "bio-" prefix such as molecular biology, biochemistry, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, neuroscience, microbiology, virology, immunology, parasitology, physiology, pathology, anatomy, toxicology, and many others that generally concern life sciences as applied to medicine.
Overview
Biomedicine is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of the HIV virus, from the understanding of molecular interactions to the study of carcinogenesis, from a single-nucleotide polymorphism (SNP) to gene therapy.
Biomedicine is based on molecular biology and combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome, physiome and metabolome with the particular point of view of devising new technologies for prediction, diagnosis and therapy.
Biomedicine involves the study of (patho-) physiological processes with methods from biology and physiology. Approaches range from understanding molecular interactions to the study of the consequences at the in vivo level. These processes are studied with the particular point of view of devising new strategies for diagnosis and therapy.
Depending on the severity of the disease, biomedicine aims to pinpoint the problem within a patient and to correct it through medical intervention. Its focus is on curing disease rather than on improving health more broadly.
In social sciences biomedicine is described somewhat differently. Through an anthropological lens biomedicine extends beyond the realm of biology and scientific facts; it is a socio-cultural system which collectively represents reality. While biomedicine is traditionally thought to have no bias due to the evidence-based practices, Gaines & Davis-Floyd (2004) highlight that biomedicine itself has a cultural basis and this is because biomedicine reflects the norms and values of its creators.
Molecular biology
Molecular biology is the study of the synthesis and regulation of a cell's DNA, RNA, and proteins. It employs a range of techniques, including the polymerase chain reaction, gel electrophoresis, and macromolecule blotting, to manipulate and analyze DNA.
Polymerase chain reaction (PCR) is performed by placing a mixture of the target DNA, a DNA polymerase, primers, and free nucleotide bases into a thermal cycler. The machine repeatedly heats and cools the mixture: heating breaks the hydrogen bonds holding the two DNA strands together, and cooling allows the primers to bind so that the polymerase can add nucleotide bases onto each of the separated DNA templates, copying both strands.
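Because each heating-and-cooling cycle can at most double the number of target molecules, the amplification is exponential. The minimal Python sketch below illustrates this; the copy numbers, cycle count and per-cycle efficiency are illustrative assumptions, not experimental values.

```python
# Back-of-the-envelope model of PCR amplification (illustrative numbers only):
# each cycle multiplies the copy number by (1 + efficiency), i.e. at best doubles it.
def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Copies after `cycles` rounds; efficiency 1.0 means perfect doubling."""
    return initial_copies * (1 + efficiency) ** cycles

print(pcr_copies(10, 30))          # perfect doubling: 10 * 2**30, about 1.1e10 copies
print(pcr_copies(10, 30, 0.9))     # 90% efficiency per cycle yields far fewer copies
```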
Gel electrophoresis is a technique used to identify similar DNA between two unknown samples. The process is done by first preparing an agarose gel; this jelly-like sheet has wells into which the DNA samples are loaded. An electric current is applied so that the DNA, which is negatively charged because of its phosphate groups, is attracted toward the positive electrode. Different DNA fragments move through the gel at different speeds because smaller pieces travel faster than larger ones. Thus, if two DNA samples show a similar pattern of bands on the gel, one can tell that these DNA samples match.
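The size-based sorting can be pictured with a common rule of thumb: migration distance falls roughly linearly with the logarithm of fragment length, so small fragments end up far from the wells. The Python sketch below uses arbitrary constants (a and b are illustrative, not calibrated to any real gel) to show the trend.

```python
import math

# Simplified empirical trend (illustrative constants, not calibration data):
# distance migrated drops roughly linearly with log10 of the fragment size.
def relative_migration(size_bp, a=3.0, b=0.7):
    """Relative distance travelled by a fragment of `size_bp` base pairs."""
    return max(a - b * math.log10(size_bp), 0.0)

for size in (100, 500, 1000, 5000, 10000):
    print(size, "bp ->", round(relative_migration(size), 2))
```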
Macromolecule blotting is a process performed after gel electrophoresis. An alkaline solution is prepared in a container. A sponge is placed into the solution and an agarose gel is placed on top of the sponge. Next, nitrocellulose paper is placed on top of the agarose gel and paper towels are added on top of the nitrocellulose paper to apply pressure. The alkaline solution is drawn upwards towards the paper towels. During this process, the DNA denatures in the alkaline solution and is carried upwards to the nitrocellulose paper. The paper is then placed into a plastic bag and filled with a solution containing DNA fragments, called the probe, taken from the desired sample of DNA. The probes anneal to the complementary DNA of the bands already transferred to the nitrocellulose paper. Afterwards, the probes are washed off, leaving only those that have annealed to complementary DNA on the paper. Next, the paper is exposed to X-ray film. The radioactivity of the probes creates black bands on the film, called an autoradiograph. As a result, only DNA patterns similar to that of the probe appear on the film. This allows DNA sequences from multiple samples to be compared, and the overall process gives a precise reading of the similarities between them.
Biochemistry
Biochemistry is the science of the chemical processes which takes place within living organisms. Living organisms need essential elements to survive, among which are carbon, hydrogen, nitrogen, oxygen, calcium, and phosphorus. These elements make up the four macromolecules that living organisms need to survive: carbohydrates, lipids, proteins, and nucleic acids.
Carbohydrates, made up of carbon, hydrogen, and oxygen, are energy-storing molecules. One of the simplest carbohydrates is glucose, C6H12O6, which is used in cellular respiration to produce ATP (adenosine triphosphate), the molecule that supplies cells with energy.
Proteins are chains of amino acids that function, among other things, in the contraction of skeletal muscle, as catalysts, as transport molecules, and as storage molecules. Protein catalysts (enzymes) can facilitate biochemical processes by lowering the activation energy of a reaction. Hemoglobin is also a protein, carrying oxygen to an organism's cells.
Lipids, also known as fats, are small molecules built from biochemical subunits of either the ketoacyl or isoprene groups, which gives eight distinct categories: fatty acids, glycerolipids, glycerophospholipids, sphingolipids, saccharolipids, and polyketides (derived from condensation of ketoacyl subunits); and sterol lipids and prenol lipids (derived from condensation of isoprene subunits). Their primary purpose is to store energy over the long term. Due to their structure, lipids provide more than twice the amount of energy per gram that carbohydrates do. Lipids can also be used as insulation. Moreover, lipids are used in hormone production, help maintain a healthy hormonal balance, and provide structure to cell membranes.
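The "more than twice" comparison follows from the widely quoted approximate caloric densities of about 9 kcal per gram for fats versus about 4 kcal per gram for carbohydrates; the short Python sketch below works through the arithmetic with an arbitrary 10 g example.

```python
# Rough energy comparison using widely quoted approximate caloric densities.
KCAL_PER_GRAM = {"lipid": 9, "carbohydrate": 4}

grams = 10
for nutrient, kcal in KCAL_PER_GRAM.items():
    print(f"{grams} g of {nutrient} ≈ {grams * kcal} kcal")

ratio = KCAL_PER_GRAM["lipid"] / KCAL_PER_GRAM["carbohydrate"]
print(f"lipids store about {ratio:.2f}x the energy of carbohydrates per gram")
```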
Nucleic acids include DNA, the main genetic information-storing substance, which is most often found in the cell nucleus and controls the metabolic processes of the cell. DNA consists of two complementary, antiparallel strands made up of varying patterns of nucleotides. RNA is a single-stranded nucleic acid that is transcribed from DNA and used in translation, the process of making proteins from RNA sequences.
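As a minimal sketch of the transcription step described above, the Python snippet below maps a DNA template strand to its complementary RNA; the six-base example sequence is invented purely for illustration.

```python
# Transcription in miniature: each DNA base on the template strand pairs with
# its complementary RNA base (A-U, T-A, G-C, C-G).
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand):
    """Return the RNA complementary to a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template_strand)

print(transcribe("TACGGT"))   # -> "AUGCCA", which begins with the start codon AUG
```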
See also
References
External links
Metabolic disorder | A metabolic disorder is a disorder that negatively alters the body's processing and distribution of macronutrients, such as proteins, fats, and carbohydrates. Metabolic disorders can happen when abnormal chemical reactions in the body alter the normal metabolic process. It can also be defined as inherited single gene anomaly, most of which are autosomal recessive.
Signs and symptoms
Some of the symptoms that can occur with metabolic disorders are lethargy, weight loss, jaundice and seizures. The symptoms expressed would vary with the type of metabolic disorder. There are four categories of symptoms: acute symptoms, late-onset acute symptoms, progressive general symptoms and permanent symptoms.
Causes
Inherited metabolic disorders are one cause of metabolic disorders, and occur when a defective gene causes an enzyme deficiency. These diseases, of which there are many subtypes, are known as inborn errors of metabolism. Metabolic diseases can also occur when the liver or pancreas do not function properly.
Types
The principal classes of metabolic disorders are:
Diagnosis
Metabolic disorders can be present at birth, and many can be identified by routine screening. If a metabolic disorder is not identified early, then it may be diagnosed later in life, when symptoms appear. Specific blood and DNA tests can be done to diagnose genetic metabolic disorders.
The gut microbiota, the population of microbes that live in the human digestive system, also plays an important part in metabolism and generally has a positive function for its host. In terms of pathophysiological mechanisms, an abnormal gut microbiota can play a role in obesity related to metabolic disorders.
Screening
Metabolic disorder screening can be done in newborns via blood, skin, or hearing tests.
Management
Metabolic disorders can be treatable by nutrition management, especially if detected early. It is important for dieticians to have knowledge of the genotype to create a treatment that will be more effective for the individual.
See also
Metabolic syndrome
Metabolic Myopathies
Lysosomal storage disease
Deficiency disease
Hypermetabolism
Citrullinemia
References
Further reading
External links
Confusion | In medicine, confusion is the quality or state of being bewildered or unclear. The term "acute mental confusion" is often used interchangeably with delirium in the International Statistical Classification of Diseases and Related Health Problems and the Medical Subject Headings publications to describe the pathology. These refer to the loss of orientation, or the ability to place oneself correctly in the world by time, location and personal identity. Mental confusion is sometimes accompanied by disordered consciousness (the loss of linear thinking) and memory loss (the inability to correctly recall previous events or learn new material).
Etymology
The word confusion derives from the Latin word, confundo, which means "confuse, mix, blend, pour together, disorder, embroil."
Causes
Confusion may result from drug side effects or from a relatively sudden brain dysfunction. Acute confusion is often called delirium (or "acute confusional state"), although delirium often includes a much broader array of disorders than simple confusion. These disorders include the inability to focus attention, various impairments in awareness, and temporal or spatial disorientation. Mental confusion can also result from chronic organic brain pathologies, such as dementia.
Other
Acute stress reaction
Alcoholism
Anemia
Anticholinergic toxicity
Anxiety
Brain damage
Brain tumor
Concussion
Dehydration
Encephalopathy
Epileptic seizure
Depression
Fatigue
Fever
Brain injury
Heat stroke
Hypoglycemia
Hypothermia
Hypothyroidism
Jet lag
Kidney failure
Kidney infection (pyelonephritis)
Lactic acidosis
Lassa fever
Lewy body dementia
Listeria
Lyme disease
Meningitis
Postpartum depression & Postpartum psychosis
Psychotic Disorder
Reye's syndrome
Rocky Mountain spotted fever (RMSF)
Schizophrenia
Sick building syndrome
Sleep apnea
Stroke
Yellow fever
STDs & STIs
Streptococcal Infections
Toxicity
Toxic shock syndrome
Transient ischemic attack (TIA, Mini-Stroke)
Vitamin B12 deficiency
Acute Porphyria
West Nile virus
Differential diagnosis
The most common causes of drug-induced acute confusion are dopaminergic drugs (used for the treatment of Parkinson's disease), diuretics, tricyclic and tetracyclic antidepressants, benzodiazepines, and alcohol. The elderly, and especially those with pre-existing dementia, are most at risk for drug-induced acute confusional states. New research is finding a link between vitamin D deficiency and cognitive impairment (which includes "foggy brain").
See also
Cognitive distortion
References
External links
National Library of Medicine - National Institutes of Health
Biological process | Biological processes are those processes that are necessary for an organism to live and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms.
Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule.
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature
Organization: being structurally composed of one or more cells – the basic units of life
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Interaction between organisms: the processes by which an organism has an observable effect on another organism of the same or different species.
Also: cellular differentiation, fermentation, fertilisation, germination, tropism, hybridisation, metamorphosis, morphogenesis, photosynthesis, transpiration.
See also
Chemical process
Life
Organic reaction
References
Immune system | The immune system is a network of biological systems that protects an organism from diseases. It detects and responds to a wide variety of pathogens, from viruses to parasitic worms, as well as cancer cells and objects such as wood splinters, distinguishing them from the organism's own healthy tissue. Many species have two major subsystems of the immune system. The innate immune system provides a preconfigured response to broad groups of situations and stimuli. The adaptive immune system provides a tailored response to each stimulus by learning to recognize molecules it has previously encountered. Both use molecules and cells to perform their functions.
Nearly all organisms have some kind of immune system. Bacteria have a rudimentary immune system in the form of enzymes that protect against viral infections. Other basic immune mechanisms evolved in ancient plants and animals and remain in their modern descendants. These mechanisms include phagocytosis, antimicrobial peptides called defensins, and the complement system. Jawed vertebrates, including humans, have even more sophisticated defense mechanisms, including the ability to adapt to recognize pathogens more efficiently. Adaptive (or acquired) immunity creates an immunological memory leading to an enhanced response to subsequent encounters with that same pathogen. This process of acquired immunity is the basis of vaccination.
Dysfunction of the immune system can cause autoimmune diseases, inflammatory diseases and cancer. Immunodeficiency occurs when the immune system is less active than normal, resulting in recurring and life-threatening infections. In humans, immunodeficiency can be the result of a genetic disease such as severe combined immunodeficiency, acquired conditions such as HIV/AIDS, or the use of immunosuppressive medication. Autoimmunity results from a hyperactive immune system attacking normal tissues as if they were foreign organisms. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Immunology covers the study of all aspects of the immune system.
Layered defense
The immune system protects its host from infection with layered defenses of increasing specificity. Physical barriers prevent pathogens such as bacteria and viruses from entering the organism. If a pathogen breaches these barriers, the innate immune system provides an immediate, but non-specific response. Innate immune systems are found in all animals. If pathogens successfully evade the innate response, vertebrates possess a second layer of protection, the adaptive immune system, which is activated by the innate response. Here, the immune system adapts its response during an infection to improve its recognition of the pathogen. This improved response is then retained after the pathogen has been eliminated, in the form of an immunological memory, and allows the adaptive immune system to mount faster and stronger attacks each time this pathogen is encountered.
Both innate and adaptive immunity depend on the ability of the immune system to distinguish between self and non-self molecules. In immunology, self molecules are components of an organism's body that can be distinguished from foreign substances by the immune system. Conversely, non-self molecules are those recognized as foreign molecules. One class of non-self molecules are called antigens (originally named for being antibody generators) and are defined as substances that bind to specific immune receptors and elicit an immune response.
Surface barriers
Several barriers protect organisms from infection, including mechanical, chemical, and biological barriers. The waxy cuticle of most leaves, the exoskeleton of insects, the shells and membranes of externally deposited eggs, and skin are examples of mechanical barriers that are the first line of defense against infection. Organisms cannot be completely sealed from their environments, so systems act to protect body openings such as the lungs, intestines, and the genitourinary tract. In the lungs, coughing and sneezing mechanically eject pathogens and other irritants from the respiratory tract. The flushing action of tears and urine also mechanically expels pathogens, while mucus secreted by the respiratory and gastrointestinal tract serves to trap and entangle microorganisms.
Chemical barriers also protect against infection. The skin and respiratory tract secrete antimicrobial peptides such as the β-defensins. Enzymes such as lysozyme and phospholipase A2 in saliva, tears, and breast milk are also antibacterials. Vaginal secretions serve as a chemical barrier following menarche, when they become slightly acidic, while semen contains defensins and zinc to kill pathogens. In the stomach, gastric acid serves as a chemical defense against ingested pathogens.
Within the genitourinary and gastrointestinal tracts, commensal flora serve as biological barriers by competing with pathogenic bacteria for food and space and, in some cases, changing the conditions in their environment, such as pH or available iron. As a result, the probability that pathogens will reach sufficient numbers to cause illness is reduced.
Innate immune system
Microorganisms or toxins that successfully enter an organism encounter the cells and mechanisms of the innate immune system. The innate response is usually triggered when microbes are identified by pattern recognition receptors, which recognize components that are conserved among broad groups of microorganisms, or when damaged, injured or stressed cells send out alarm signals, many of which are recognized by the same receptors as those that recognize pathogens. Innate immune defenses are non-specific, meaning these systems respond to pathogens in a generic way. This system does not confer long-lasting immunity against a pathogen. The innate immune system is the dominant system of host defense in most organisms, and the only one in plants.
Immune sensing
Cells in the innate immune system use pattern recognition receptors to recognize molecular structures that are produced by pathogens. They are proteins expressed, mainly, by cells of the innate immune system, such as dendritic cells, macrophages, monocytes, neutrophils, and epithelial cells, to identify two classes of molecules: pathogen-associated molecular patterns (PAMPs), which are associated with microbial pathogens, and damage-associated molecular patterns (DAMPs), which are associated with components of host's cells that are released during cell damage or cell death.
Recognition of extracellular or endosomal PAMPs is mediated by transmembrane proteins known as toll-like receptors (TLRs). TLRs share a typical structural motif, the leucine rich repeats (LRRs), which give them a curved shape. Toll-like receptors were first discovered in Drosophila and trigger the synthesis and secretion of cytokines and activation of other host defense programs that are necessary for both innate or adaptive immune responses. Ten toll-like receptors have been described in humans.
Cells in the innate immune system also have pattern recognition receptors inside the cell, which detect infection or cell damage from within. Three major classes of these "cytosolic" receptors are NOD-like receptors, RIG (retinoic acid-inducible gene)-like receptors, and cytosolic DNA sensors.
Innate immune cells
Some leukocytes (white blood cells) act like independent, single-celled organisms and are the second arm of the innate immune system. The innate leukocytes include the "professional" phagocytes (macrophages, neutrophils, and dendritic cells). These cells identify and eliminate pathogens, either by attacking larger pathogens through contact or by engulfing and then killing microorganisms. The other cells involved in the innate response include innate lymphoid cells, mast cells, eosinophils, basophils, and natural killer cells.
Phagocytosis is an important feature of cellular innate immunity performed by cells called phagocytes that engulf pathogens or particles. Phagocytes generally patrol the body searching for pathogens, but can be called to specific locations by cytokines. Once a pathogen has been engulfed by a phagocyte, it becomes trapped in an intracellular vesicle called a phagosome, which subsequently fuses with another vesicle called a lysosome to form a phagolysosome. The pathogen is killed by the activity of digestive enzymes or following a respiratory burst that releases free radicals into the phagolysosome. Phagocytosis evolved as a means of acquiring nutrients, but this role was extended in phagocytes to include engulfment of pathogens as a defense mechanism. Phagocytosis probably represents the oldest form of host defense, as phagocytes have been identified in both vertebrate and invertebrate animals.
Neutrophils and macrophages are phagocytes that travel throughout the body in pursuit of invading pathogens. Neutrophils are normally found in the bloodstream and are the most abundant type of phagocyte, representing 50% to 60% of total circulating leukocytes. During the acute phase of inflammation, neutrophils migrate toward the site of inflammation in a process called chemotaxis and are usually the first cells to arrive at the scene of infection. Macrophages are versatile cells that reside within tissues and produce an array of chemicals including enzymes, complement proteins, and cytokines. They can also act as scavengers that rid the body of worn-out cells and other debris and as antigen-presenting cells (APCs) that activate the adaptive immune system.
Dendritic cells are phagocytes in tissues that are in contact with the external environment; therefore, they are located mainly in the skin, nose, lungs, stomach, and intestines. They are named for their resemblance to neuronal dendrites, as both have many spine-like projections. Dendritic cells serve as a link between the bodily tissues and the innate and adaptive immune systems, as they present antigens to T cells, one of the key cell types of the adaptive immune system.
Granulocytes are leukocytes that have granules in their cytoplasm. In this category are neutrophils, mast cells, basophils, and eosinophils. Mast cells reside in connective tissues and mucous membranes and regulate the inflammatory response. They are most often associated with allergy and anaphylaxis. Basophils and eosinophils are related to neutrophils. They secrete chemical mediators that are involved in defending against parasites and play a role in allergic reactions, such as asthma.
Innate lymphoid cells (ILCs) are a group of innate immune cells that are derived from common lymphoid progenitor and belong to the lymphoid lineage. These cells are defined by the absence of antigen-specific B- or T-cell receptor (TCR) because of the lack of recombination activating gene. ILCs do not express myeloid or dendritic cell markers.
Natural killer cells (NK cells) are lymphocytes and a component of the innate immune system that does not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as "missing self". This term describes cells with low levels of a cell-surface marker called MHC I (major histocompatibility complex)—a situation that can arise in viral infections of host cells. Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin receptors, which essentially put the brakes on NK cells.
Inflammation
Inflammation is one of the first responses of the immune system to infection. The symptoms of inflammation are redness, swelling, heat, and pain, which are caused by increased blood flow into tissue. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells. Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation and leukotrienes that attract certain white blood cells (leukocytes). Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have antiviral effects, such as shutting down protein synthesis in the host cell. Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote the healing of any damaged tissue following the removal of pathogens. The pattern-recognition receptors called inflammasomes are multiprotein complexes (consisting of an NLR, the adaptor protein ASC, and the effector molecule pro-caspase-1) that form in response to cytosolic PAMPs and DAMPs, whose function is to generate active forms of the inflammatory cytokines IL-1β and IL-18.
Humoral defenses
The complement system is a biochemical cascade that attacks the surfaces of foreign cells. It contains over 20 different proteins and is named for its ability to "complement" the killing of pathogens by antibodies. Complement is the major humoral component of the innate immune response. Many species have complement systems, including non-mammals like plants, fish, and some invertebrates. In humans, this response is activated by complement binding to antibodies that have attached to these microbes or the binding of complement proteins to carbohydrates on the surfaces of microbes. This recognition signal triggers a rapid killing response. The speed of the response is a result of signal amplification that occurs after sequential proteolytic activation of complement molecules, which are also proteases. After complement proteins initially bind to the microbe, they activate their protease activity, which in turn activates other complement proteases, and so on. This produces a catalytic cascade that amplifies the initial signal by controlled positive feedback. The cascade results in the production of peptides that attract immune cells, increase vascular permeability, and opsonize (coat) the surface of a pathogen, marking it for destruction. This deposition of complement can also kill cells directly by disrupting their plasma membrane via the formation of a membrane attack complex.
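The signal amplification produced by such a proteolytic cascade can be illustrated with a toy calculation: if each activated protease in turn activates several downstream molecules, the number of active molecules grows geometrically. The Python sketch below uses made-up numbers (10 initial activations, a fan-out of 5, four rounds) purely to show the shape of the effect.

```python
# Toy cascade model (illustrative parameters only): each round of proteolytic
# activation multiplies the number of newly activated complement proteases.
def cascade_amplification(initial_activations, fanout, steps):
    """Total active molecules after `steps` rounds of activation."""
    active = initial_activations
    total = active
    for _ in range(steps):
        active *= fanout           # each active protease activates `fanout` more
        total += active
    return total

print(cascade_amplification(initial_activations=10, fanout=5, steps=4))  # 7,810 in total
```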
Adaptive immune system
The adaptive immune system evolved in early vertebrates and allows for a stronger immune response as well as immunological memory, where each pathogen is "remembered" by a signature antigen. The adaptive immune response is antigen-specific and requires the recognition of specific "non-self" antigens during a process called antigen presentation. Antigen specificity allows for the generation of responses that are tailored to specific pathogens or pathogen-infected cells. The ability to mount these tailored responses is maintained in the body by "memory cells". Should a pathogen infect the body more than once, these specific memory cells are used to quickly eliminate it.
Recognition of antigen
The cells of the adaptive immune system are special types of leukocytes, called lymphocytes. B cells and T cells are the major types of lymphocytes and are derived from hematopoietic stem cells in the bone marrow. B cells are involved in the humoral immune response, whereas T cells are involved in the cell-mediated immune response. Killer T cells only recognize antigens coupled to Class I MHC molecules, while helper T cells and regulatory T cells only recognize antigens coupled to Class II MHC molecules. These two mechanisms of antigen presentation reflect the different roles of the two types of T cell. A third, minor subtype are the γδ T cells that recognize intact antigens that are not bound to MHC receptors. The double-positive T cells are exposed to a wide variety of self-antigens in the thymus, whose development and activity depend on iodine. In contrast, the B cell antigen-specific receptor is an antibody molecule on the B cell surface that recognizes native (unprocessed) antigen without any need for antigen processing. Such antigens may be large molecules found on the surfaces of pathogens, but can also be small haptens (such as penicillin) attached to a carrier molecule. Each lineage of B cell expresses a different antibody, so the complete set of B cell antigen receptors represents all the antibodies that the body can manufacture. When B or T cells encounter their related antigens they multiply and many "clones" of the cells are produced that target the same antigen. This is called clonal selection.
Antigen presentation to T lymphocytes
Both B cells and T cells carry receptor molecules that recognize specific targets. T cells recognize a "non-self" target, such as a pathogen, only after antigens (small fragments of the pathogen) have been processed and presented in combination with a "self" receptor called a major histocompatibility complex (MHC) molecule.
Cell mediated immunity
There are two major subtypes of T cells: the killer T cell and the helper T cell. In addition there are regulatory T cells which have a role in modulating immune response.
Killer T cells
Killer T cells are a sub-group of T cells that kill cells that are infected with viruses (and other pathogens), or are otherwise damaged or dysfunctional. As with B cells, each type of T cell recognizes a different antigen. Killer T cells are activated when their T-cell receptor binds to this specific antigen in a complex with the MHC Class I receptor of another cell. Recognition of this MHC:antigen complex is aided by a co-receptor on the T cell, called CD8. The T cell then travels throughout the body in search of cells where the MHC I receptors bear this antigen. When an activated T cell contacts such cells, it releases cytotoxins, such as perforin, which form pores in the target cell's plasma membrane, allowing ions, water and toxins to enter. The entry of another toxin called granulysin (a protease) induces the target cell to undergo apoptosis. T cell killing of host cells is particularly important in preventing the replication of viruses. T cell activation is tightly controlled and generally requires a very strong MHC/antigen activation signal, or additional activation signals provided by "helper" T cells (see below).
Helper T cells
Helper T cells regulate both the innate and adaptive immune responses and help determine which immune responses the body makes to a particular pathogen. These cells have no cytotoxic activity and do not kill infected cells or clear pathogens directly. They instead control the immune response by directing other cells to perform these tasks.
Helper T cells express T cell receptors that recognize antigen bound to Class II MHC molecules. The MHC:antigen complex is also recognized by the helper cell's CD4 co-receptor, which recruits molecules inside the T cell (such as Lck) that are responsible for the T cell's activation. Helper T cells have a weaker association with the MHC:antigen complex than observed for killer T cells, meaning many receptors (around 200–300) on the helper T cell must be bound by an MHC:antigen to activate the helper cell, while killer T cells can be activated by engagement of a single MHC:antigen molecule. Helper T cell activation also requires longer duration of engagement with an antigen-presenting cell. The activation of a resting helper T cell causes it to release cytokines that influence the activity of many cell types. Cytokine signals produced by helper T cells enhance the microbicidal function of macrophages and the activity of killer T cells. In addition, helper T cell activation causes an upregulation of molecules expressed on the T cell's surface, such as CD40 ligand (also called CD154), which provide extra stimulatory signals typically required to activate antibody-producing B cells.
Gamma delta T cells
Gamma delta T cells (γδ T cells) possess an alternative T-cell receptor (TCR) as opposed to CD4+ and CD8+ (αβ) T cells and share the characteristics of helper T cells, cytotoxic T cells and NK cells. The conditions that produce responses from γδ T cells are not fully understood. Like other 'unconventional' T cell subsets bearing invariant TCRs, such as CD1d-restricted natural killer T cells, γδ T cells straddle the border between innate and adaptive immunity. On one hand, γδ T cells are a component of adaptive immunity as they rearrange TCR genes to produce receptor diversity and can also develop a memory phenotype. On the other hand, the various subsets are also part of the innate immune system, as restricted TCR or NK receptors may be used as pattern recognition receptors. For example, large numbers of human Vγ9/Vδ2 T cells respond within hours to common molecules produced by microbes, and highly restricted Vδ1+ T cells in epithelia respond to stressed epithelial cells.
Humoral immune response
A B cell identifies pathogens when antibodies on its surface bind to a specific foreign antigen. This antigen/antibody complex is taken up by the B cell and processed by proteolysis into peptides. The B cell then displays these antigenic peptides on its surface MHC class II molecules. This combination of MHC and antigen attracts a matching helper T cell, which releases lymphokines and activates the B cell. As the activated B cell then begins to divide, its offspring (plasma cells) secrete millions of copies of the antibody that recognizes this antigen. These antibodies circulate in blood plasma and lymph, bind to pathogens expressing the antigen and mark them for destruction by complement activation or for uptake and destruction by phagocytes. Antibodies can also neutralize challenges directly, by binding to bacterial toxins or by interfering with the receptors that viruses and bacteria use to infect cells.
Newborn infants have no prior exposure to microbes and are particularly vulnerable to infection. Several layers of passive protection are provided by the mother. During pregnancy, a particular type of antibody, called IgG, is transported from mother to baby directly through the placenta, so human babies have high levels of antibodies even at birth, with the same range of antigen specificities as their mother. Breast milk or colostrum also contains antibodies that are transferred to the gut of the infant and protect against bacterial infections until the newborn can synthesize its own antibodies. This is passive immunity because the fetus does not actually make any memory cells or antibodies—it only borrows them. This passive immunity is usually short-term, lasting from a few days up to several months. In medicine, protective passive immunity can also be transferred artificially from one individual to another.
Immunological memory
When B cells and T cells are activated and begin to replicate, some of their offspring become long-lived memory cells. Throughout the lifetime of an animal, these memory cells remember each specific pathogen encountered and can mount a strong response if the pathogen is detected again. T cells recognize pathogens by small protein-based infection signals, called antigens, that bind directly to T-cell surface receptors. B cells use a protein called immunoglobulin to recognize pathogens by their antigens. This is "adaptive" because it occurs during the lifetime of an individual as an adaptation to infection with that pathogen and prepares the immune system for future challenges. Immunological memory can be in the form of either passive short-term memory or active long-term memory.
Physiological regulation
The immune system is involved in many aspects of physiological regulation in the body. The immune system interacts intimately with other systems, such as the endocrine and the nervous systems. The immune system also plays a crucial role in embryogenesis (development of the embryo), as well as in tissue repair and regeneration.
Hormones
Hormones can act as immunomodulators, altering the sensitivity of the immune system. For example, female sex hormones are known immunostimulators of both adaptive and innate immune responses. Some autoimmune diseases such as lupus erythematosus strike women preferentially, and their onset often coincides with puberty. By contrast, male sex hormones such as testosterone seem to be immunosuppressive. Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone and vitamin D.
Vitamin D
Although cellular studies indicate that vitamin D has receptors and probable functions in the immune system, there is no clinical evidence to prove that vitamin D deficiency increases the risk for immune diseases or vitamin D supplementation lowers immune disease risk. A 2011 United States Institute of Medicine report stated that "outcomes related to ... immune functioning and autoimmune disorders, and infections ... could not be linked reliably with calcium or vitamin D intake and were often conflicting."
Sleep and rest
The immune system is affected by sleep and rest, and sleep deprivation is detrimental to immune function. Complex feedback loops involving cytokines, such as interleukin-1 and tumor necrosis factor-α produced in response to infection, appear to also play a role in the regulation of non-rapid eye movement (NREM) sleep. Thus the immune response to infection may result in changes to the sleep cycle, including an increase in slow-wave sleep relative to REM sleep.
In people with sleep deprivation, active immunizations may have a diminished effect and may result in lower antibody production, and a lower immune response, than would be noted in a well-rested individual. Additionally, proteins such as NFIL3, which have been shown to be closely intertwined with both T-cell differentiation and circadian rhythms, can be affected through the disturbance of natural light and dark cycles through instances of sleep deprivation. These disruptions can lead to an increase in chronic conditions such as heart disease, chronic pain, and asthma.
In addition to the negative consequences of sleep deprivation, sleep and the intertwined circadian system have been shown to have strong regulatory effects on immunological functions affecting both innate and adaptive immunity. First, during the early slow-wave-sleep stage, a sudden drop in blood levels of cortisol, epinephrine, and norepinephrine causes increased blood levels of the hormones leptin, pituitary growth hormone, and prolactin. These signals induce a pro-inflammatory state through the production of the pro-inflammatory cytokines interleukin-1, interleukin-12, TNF-alpha and IFN-gamma. These cytokines then stimulate immune functions such as immune cell activation, proliferation, and differentiation. During this time of a slowly evolving adaptive immune response, there is a peak in undifferentiated or less differentiated cells, like naïve and central memory T cells. In addition to these effects, the milieu of hormones produced at this time (leptin, pituitary growth hormone, and prolactin) supports the interactions between APCs and T-cells, a shift of the Th1/Th2 cytokine balance towards one that supports Th1, an increase in overall Th cell proliferation, and naïve T cell migration to lymph nodes. This is also thought to support the formation of long-lasting immune memory through the initiation of Th1 immune responses.
During wake periods, differentiated effector cells, such as cytotoxic natural killer cells and cytotoxic T lymphocytes, peak to elicit an effective response against any intruding pathogens. Anti-inflammatory molecules, such as cortisol and catecholamines, also peak during awake active times. Inflammation would cause serious cognitive and physical impairments if it were to occur during wake times, and inflammation may occur during sleep times due to the presence of melatonin. Inflammation causes a great deal of oxidative stress and the presence of melatonin during sleep times could actively counteract free radical production during this time.
Physical exercise
Physical exercise has a positive effect on the immune system and depending on the frequency and intensity, the pathogenic effects of diseases caused by bacteria and viruses are moderated. Immediately after intense exercise there is a transient immunodepression, where the number of circulating lymphocytes decreases and antibody production declines. This may give rise to a window of opportunity for infection and reactivation of latent virus infections, but the evidence is inconclusive.
Changes at the cellular level
During exercise there is an increase in circulating white blood cells of all types. This is caused by the frictional force of blood flowing on the endothelial cell surface and catecholamines affecting β-adrenergic receptors (βARs). The number of neutrophils in the blood increases and remains raised for up to six hours and immature forms are present. Although the increase in neutrophils ("neutrophilia") is similar to that seen during bacterial infections, after exercise the cell population returns to normal by around 24 hours.
The number of circulating lymphocytes (mainly natural killer cells) decreases during intense exercise but returns to normal after 4 to 6 hours. Although up to 2% of the cells die most migrate from the blood to the tissues, mainly the intestines and lungs, where pathogens are most likely to be encountered.
Some monocytes leave the blood circulation and migrate to the muscles, where they differentiate into macrophages. These cells differentiate into two types: proliferative macrophages, which are responsible for increasing the number of stem cells, and restorative macrophages, which are involved in their maturation into muscle cells.
Repair and regeneration
The immune system, particularly the innate component, plays a decisive role in tissue repair after an insult. Key actors include macrophages and neutrophils, but other cellular actors, including γδ T cells, innate lymphoid cells (ILCs), and regulatory T cells (Tregs), are also important. The plasticity of immune cells and the balance between pro-inflammatory and anti-inflammatory signals are crucial aspects of efficient tissue repair. Immune components and pathways are involved in regeneration as well, for example in amphibians such as in axolotl limb regeneration. According to one hypothesis, organisms that can regenerate (e.g., axolotls) could be less immunocompetent than organisms that cannot regenerate.
Disorders of human immunity
Failures of host defense occur and fall into three broad categories: immunodeficiencies, autoimmunity, and hypersensitivities.
Immunodeficiencies
Immunodeficiencies occur when one or more of the components of the immune system are inactive. The ability of the immune system to respond to pathogens is diminished in both the young and the elderly, with immune responses beginning to decline at around 50 years of age due to immunosenescence. In developed countries, obesity, alcoholism, and drug use are common causes of poor immune function, while malnutrition is the most common cause of immunodeficiency in developing countries. Diets lacking sufficient protein are associated with impaired cell-mediated immunity, complement activity, phagocyte function, IgA antibody concentrations, and cytokine production. Additionally, the loss of the thymus at an early age through genetic mutation or surgical removal results in severe immunodeficiency and a high susceptibility to infection. Immunodeficiencies can also be inherited or 'acquired'. Severe combined immunodeficiency is a rare genetic disorder characterized by the disturbed development of functional T cells and B cells caused by numerous genetic mutations. Chronic granulomatous disease, where phagocytes have a reduced ability to destroy pathogens, is an example of an inherited, or congenital, immunodeficiency. AIDS and some types of cancer cause acquired immunodeficiency.
Autoimmunity
Overactive immune responses form the other end of immune dysfunction, particularly the autoimmune diseases. Here, the immune system fails to properly distinguish between self and non-self, and attacks part of the body. Under normal circumstances, many T cells and antibodies react with "self" peptides. One of the functions of specialized cells (located in the thymus and bone marrow) is to present young lymphocytes with self antigens produced throughout the body and to eliminate those cells that recognize self-antigens, preventing autoimmunity. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus.
Hypersensitivity
Hypersensitivity is an immune response that damages the body's own tissues. It is divided into four classes (Type I – IV) based on the mechanisms involved and the time course of the hypersensitive reaction. Type I hypersensitivity is an immediate or anaphylactic reaction, often associated with allergy. Symptoms can range from mild discomfort to death. Type I hypersensitivity is mediated by IgE, which triggers degranulation of mast cells and basophils when cross-linked by antigen.
Type II hypersensitivity occurs when antibodies bind to antigens on the individual's own cells, marking them for destruction. This is also called antibody-dependent (or cytotoxic) hypersensitivity, and is mediated by IgG and IgM antibodies. Immune complexes (aggregations of antigens, complement proteins, and IgG and IgM antibodies) deposited in various tissues trigger Type III hypersensitivity reactions. Type IV hypersensitivity (also known as cell-mediated or delayed type hypersensitivity) usually takes between two and three days to develop. Type IV reactions are involved in many autoimmune and infectious diseases, but may also involve contact dermatitis. These reactions are mediated by T cells, monocytes, and macrophages.
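The four-class scheme above can be summarized as a simple lookup structure. The sketch below restates the mediators and time courses exactly as described in the two preceding paragraphs; the variable name and wording of the entries are illustrative choices, not part of any formal classification standard.

```python
# The four hypersensitivity classes, restating the description in the text above
HYPERSENSITIVITY_TYPES = {
    "Type I":   "immediate/anaphylactic; IgE cross-linking triggers degranulation of mast cells and basophils (e.g., allergy)",
    "Type II":  "antibody-dependent (cytotoxic); IgG and IgM bind antigens on the individual's own cells, marking them for destruction",
    "Type III": "immune complexes of antigens, complement proteins, and IgG/IgM antibodies deposit in tissues",
    "Type IV":  "cell-mediated/delayed type; mediated by T cells, monocytes, and macrophages, developing over two to three days",
}

print(HYPERSENSITIVITY_TYPES["Type IV"])
```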
Idiopathic inflammation
Inflammation is one of the first responses of the immune system to infection, but it can appear without known cause.
Manipulation in medicine
The immune response can be manipulated to suppress unwanted responses resulting from autoimmunity, allergy, and transplant rejection, and to stimulate protective responses against pathogens that largely elude the immune system (see immunization) or cancer.
Immunosuppression
Immunosuppressive drugs are used to control autoimmune disorders or inflammation when excessive tissue damage occurs, and to prevent rejection after an organ transplant.
Anti-inflammatory drugs are often used to control the effects of inflammation. Glucocorticoids are the most powerful of these drugs and can have many undesirable side effects, such as central obesity, hyperglycemia, and osteoporosis. Their use is tightly controlled. Lower doses of anti-inflammatory drugs are often used in conjunction with cytotoxic or immunosuppressive drugs such as methotrexate or azathioprine.
Cytotoxic drugs inhibit the immune response by killing dividing cells such as activated T cells. This killing is indiscriminate and other constantly dividing cells and their organs are affected, which causes toxic side effects. Immunosuppressive drugs such as cyclosporin prevent T cells from responding to signals correctly by inhibiting signal transduction pathways.
Immunostimulation
Claims made by marketers of various products and alternative health providers, such as chiropractors, homeopaths, and acupuncturists to be able to stimulate or "boost" the immune system generally lack meaningful explanation and evidence of effectiveness.
Vaccination
Long-term active memory is acquired following infection by activation of B and T cells. Active immunity can also be generated artificially, through vaccination. The principle behind vaccination (also called immunization) is to introduce an antigen from a pathogen to stimulate the immune system and develop specific immunity against that particular pathogen without causing disease associated with that organism. This deliberate induction of an immune response is successful because it exploits the natural specificity of the immune system, as well as its inducibility. With infectious disease remaining one of the leading causes of death in the human population, vaccination represents the most effective manipulation of the immune system mankind has developed.
Many vaccines are based on acellular components of micro-organisms, including harmless toxin components. Since many antigens derived from acellular vaccines do not strongly induce the adaptive response, most bacterial vaccines are provided with additional adjuvants that activate the antigen-presenting cells of the innate immune system and maximize immunogenicity.
Tumor immunology
Another important role of the immune system is to identify and eliminate tumors. This is called immune surveillance. The transformed cells of tumors express antigens that are not found on normal cells. To the immune system, these antigens appear foreign, and their presence causes immune cells to attack the transformed tumor cells. The antigens expressed by tumors have several sources; some are derived from oncogenic viruses like human papillomavirus, which causes cancer of the cervix, vulva, vagina, penis, anus, mouth, and throat, while others are the organism's own proteins that occur at low levels in normal cells but reach high levels in tumor cells. One example is an enzyme called tyrosinase that, when expressed at high levels, transforms certain skin cells (for example, melanocytes) into tumors called melanomas. A third possible source of tumor antigens are proteins normally important for regulating cell growth and survival, that commonly mutate into cancer inducing molecules called oncogenes.
The main response of the immune system to tumors is to destroy the abnormal cells using killer T cells, sometimes with the assistance of helper T cells. Tumor antigens are presented on MHC class I molecules in a similar way to viral antigens. This allows killer T cells to recognize the tumor cell as abnormal. NK cells also kill tumorous cells in a similar way, especially if the tumor cells have fewer MHC class I molecules on their surface than normal; this is a common phenomenon with tumors. Sometimes antibodies are generated against tumor cells allowing for their destruction by the complement system.
Some tumors evade the immune system and go on to become cancers. Tumor cells often have a reduced number of MHC class I molecules on their surface, thus avoiding detection by killer T cells. Some tumor cells also release products that inhibit the immune response; for example by secreting the cytokine TGF-β, which suppresses the activity of macrophages and lymphocytes. In addition, immunological tolerance may develop against tumor antigens, so the immune system no longer attacks the tumor cells.
Paradoxically, macrophages can promote tumor growth when tumor cells send out cytokines that attract macrophages, which then generate cytokines and growth factors such as tumor-necrosis factor alpha that nurture tumor development or promote stem-cell-like plasticity. In addition, a combination of hypoxia in the tumor and a cytokine produced by macrophages induces tumor cells to decrease production of a protein that blocks metastasis, thereby assisting the spread of cancer cells. Anti-tumor M1 macrophages are recruited in the early phases of tumor development but are progressively differentiated to M2 macrophages with a pro-tumor effect, an immunosuppressive switch. The hypoxia reduces the cytokine production needed for the anti-tumor response, and macrophages progressively acquire pro-tumor M2 functions driven by the tumor microenvironment, including IL-4 and IL-10. Cancer immunotherapy covers the medical ways to stimulate the immune system to attack cancer tumors.
Predicting immunogenicity
Some drugs can cause a neutralizing immune response, meaning that the immune system produces neutralizing antibodies that counteract the action of the drugs, particularly if the drugs are administered repeatedly, or in larger doses. This limits the effectiveness of drugs based on larger peptides and proteins (which are typically larger than 6000 Da). In some cases, the drug itself is not immunogenic, but may be co-administered with an immunogenic compound, as is sometimes the case for Taxol. Computational methods have been developed to predict the immunogenicity of peptides and proteins, which are particularly useful in designing therapeutic antibodies, assessing likely virulence of mutations in viral coat particles, and validation of proposed peptide-based drug treatments. Early techniques relied mainly on the observation that hydrophilic amino acids are overrepresented in epitope regions relative to hydrophobic amino acids; however, more recent developments rely on machine learning techniques using databases of existing known epitopes, usually on well-studied virus proteins, as a training set. A publicly accessible database has been established for the cataloguing of epitopes from pathogens known to be recognizable by B cells. The emerging field of bioinformatics-based studies of immunogenicity is referred to as immunoinformatics. Immunoproteomics is the study of large sets of proteins (proteomics) involved in the immune response.
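To make the early, hydrophilicity-based approach concrete, the sketch below scores a peptide with a sliding-window average of per-residue hydrophilicity values, the classic recipe behind scales such as Hopp–Woods. The function name, window length, and the toy scale values are illustrative assumptions; real use requires a published scale, and this is a minimal sketch of the idea rather than a validated epitope predictor.

```python
from typing import Dict, List, Tuple

def windowed_hydrophilicity(sequence: str,
                            scale: Dict[str, float],
                            window: int = 7) -> List[Tuple[int, float]]:
    """Score each window of `sequence` by its mean hydrophilicity.

    High-scoring windows are candidate B-cell epitope regions under the
    early assumption that hydrophilic residues are overrepresented in
    epitopes. `scale` maps one-letter amino acid codes to hydrophilicity
    values taken from a published table.
    """
    scores = []
    for start in range(len(sequence) - window + 1):
        segment = sequence[start:start + window]
        mean = sum(scale[aa] for aa in segment) / window
        scores.append((start, mean))
    return scores

# Illustrative usage; the scale values below are rough placeholders, not a real table.
toy_scale = {aa: 0.0 for aa in "ACDEFGHIKLMNPQRSTVWY"}
toy_scale.update({"D": 3.0, "E": 3.0, "K": 3.0, "R": 3.0, "F": -2.5, "L": -1.8})
peptide = "MKDERLLFAKDE"
best_start, best_score = max(windowed_hydrophilicity(peptide, toy_scale),
                             key=lambda t: t[1])
print(f"Top window starts at {best_start} with mean score {best_score:.2f}")
```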
Evolution and other mechanisms
Evolution of the immune system
It is likely that a multicomponent, adaptive immune system arose with the first vertebrates, as invertebrates do not generate lymphocytes or an antibody-based humoral response. Immune systems evolved in deuterostomes.
Many species, however, use mechanisms that appear to be precursors of these aspects of vertebrate immunity. Immune systems appear even in the structurally simplest forms of life, with bacteria using a unique defense mechanism, called the restriction modification system, to protect themselves from viral pathogens, called bacteriophages. Prokaryotes (bacteria and archaea) also possess acquired immunity, through a system that uses CRISPR sequences to retain fragments of the genomes of phage that they have come into contact with in the past, which allows them to block virus replication through a form of RNA interference. Prokaryotes also possess other defense mechanisms. Offensive elements of the immune systems are also present in unicellular eukaryotes, but studies of their roles in defense are few.
Pattern recognition receptors are proteins used by nearly all organisms to identify molecules associated with pathogens. Antimicrobial peptides called defensins are an evolutionarily conserved component of the innate immune response found in all animals and plants, and represent the main form of invertebrate systemic immunity. The complement system and phagocytic cells are also used by most forms of invertebrate life. Ribonucleases and the RNA interference pathway are conserved across all eukaryotes, and are thought to play a role in the immune response to viruses.
Unlike animals, plants lack phagocytic cells, but many plant immune responses involve systemic chemical signals that are sent through a plant. Individual plant cells respond to molecules associated with pathogens known as pathogen-associated molecular patterns or PAMPs. When a part of a plant becomes infected, the plant produces a localized hypersensitive response, whereby cells at the site of infection undergo rapid apoptosis to prevent the spread of the disease to other parts of the plant. Systemic acquired resistance is a type of defensive response used by plants that renders the entire plant resistant to a particular infectious agent. RNA silencing mechanisms are particularly important in this systemic response as they can block virus replication.
Alternative adaptive immune system
Evolution of the adaptive immune system occurred in an ancestor of the jawed vertebrates. Many of the classical molecules of the adaptive immune system (for example, immunoglobulins and T-cell receptors) exist only in jawed vertebrates. A distinct lymphocyte-derived molecule has been discovered in primitive jawless vertebrates, such as the lamprey and hagfish. These animals possess a large array of molecules called Variable lymphocyte receptors (VLRs) that, like the antigen receptors of jawed vertebrates, are produced from only a small number (one or two) of genes. These molecules are believed to bind pathogenic antigens in a similar way to antibodies, and with the same degree of specificity.
Manipulation by pathogens
The success of any pathogen depends on its ability to elude host immune responses. Therefore, pathogens evolved several methods that allow them to successfully infect a host, while evading detection or destruction by the immune system. Bacteria often overcome physical barriers by secreting enzymes that digest the barrier, for example, by using a type II secretion system. Alternatively, using a type III secretion system, they may insert a hollow tube into the host cell, providing a direct route for proteins to move from the pathogen to the host. These proteins are often used to shut down host defenses.
An evasion strategy used by several pathogens to avoid the innate immune system is to hide within the cells of their host (also called intracellular pathogenesis). Here, a pathogen spends most of its life-cycle inside host cells, where it is shielded from direct contact with immune cells, antibodies and complement. Some examples of intracellular pathogens include viruses, the food poisoning bacterium Salmonella and the eukaryotic parasites that cause malaria (Plasmodium spp.) and leishmaniasis (Leishmania spp.). Other bacteria, such as Mycobacterium tuberculosis, live inside a protective capsule that prevents lysis by complement. Many pathogens secrete compounds that diminish or misdirect the host's immune response. Some bacteria form biofilms to protect themselves from the cells and proteins of the immune system. Such biofilms are present in many successful infections, such as the chronic Pseudomonas aeruginosa and Burkholderia cenocepacia infections characteristic of cystic fibrosis. Other bacteria generate surface proteins that bind to antibodies, rendering them ineffective; examples include Streptococcus (protein G), Staphylococcus aureus (protein A), and Peptostreptococcus magnus (protein L).
The mechanisms used to evade the adaptive immune system are more complicated. The simplest approach is to rapidly change non-essential epitopes (amino acids and/or sugars) on the surface of the pathogen, while keeping essential epitopes concealed. This is called antigenic variation. An example is HIV, which mutates rapidly, so the proteins on its viral envelope that are essential for entry into its host target cell are constantly changing. These frequent changes in antigens may explain the failures of vaccines directed at this virus. The parasite Trypanosoma brucei uses a similar strategy, constantly switching one type of surface protein for another, allowing it to stay one step ahead of the antibody response. Masking antigens with host molecules is another common strategy for avoiding detection by the immune system. In HIV, the envelope that covers the virion is formed from the outermost membrane of the host cell; such "self-cloaked" viruses make it difficult for the immune system to identify them as "non-self" structures.
History of immunology
Immunology is a science that examines the structure and function of the immune system. It originates from medicine and early studies on the causes of immunity to disease. The earliest known reference to immunity was during the plague of Athens in 430 BC. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. In the 18th century, Pierre-Louis Moreau de Maupertuis experimented with scorpion venom and observed that certain dogs and mice were immune to this venom. In the 10th century, Persian physician al-Razi (also known as Rhazes) wrote the first recorded theory of acquired immunity, noting that a smallpox bout protected its survivors from future infections. Although he explained the immunity in terms of "excess moisture" being expelled from the blood—therefore preventing a second occurrence of the disease—this theory explained many observations about smallpox known during this time.
These and other observations of acquired immunity were later exploited by Louis Pasteur in his development of vaccination and his proposed germ theory of disease. Pasteur's theory was in direct opposition to contemporary theories of disease, such as the miasma theory. It was not until Robert Koch's 1891 proofs, for which he was awarded a Nobel Prize in 1905, that microorganisms were confirmed as the cause of infectious disease. Viruses were confirmed as human pathogens in 1901, with the discovery of the yellow fever virus by Walter Reed.
Immunology made a great advance towards the end of the 19th century, through rapid developments in the study of humoral immunity and cellular immunity. Particularly important was the work of Paul Ehrlich, who proposed the side-chain theory to explain the specificity of the antigen-antibody reaction; his contributions to the understanding of humoral immunity were recognized by the award of a joint Nobel Prize in 1908, along with the founder of cellular immunology, Elie Metchnikoff. In 1974, Niels Kaj Jerne developed the immune network theory; he shared a Nobel Prize in 1984 with Georges J. F. Köhler and César Milstein for theories related to the immune system.
See also
Disgust
Fc receptor
List of human cell types
Neuroimmune system
Original antigenic sin – when the immune system uses immunological memory upon encountering a slightly different pathogen
Plant disease resistance
Polyclonal response
References
Citations
General bibliography
Further reading
(The book's sources are only online.) A popular science explanation of the immune system.
External links | 0.787765 | 0.998305 | 0.78643 |
Biomedical sciences | Biomedical sciences are a set of sciences applying portions of natural science or formal science, or both, to develop knowledge, interventions, or technology that are of use in healthcare or public health. Such disciplines as medical microbiology, clinical virology, clinical epidemiology, genetic epidemiology, and biomedical engineering are medical sciences. In explaining physiological mechanisms operating in pathological processes, however, pathophysiology can be regarded as basic science.
Biomedical Sciences, as defined by the UK Quality Assurance Agency for Higher Education Benchmark Statement in 2015, includes those science disciplines whose primary focus is the biology of human health and disease and ranges from the generic study of biomedical sciences and human biology to more specialised subject areas such as pharmacology, human physiology and human nutrition. It is underpinned by relevant basic sciences including anatomy and physiology, cell biology, biochemistry, microbiology, genetics and molecular biology, pharmacology, immunology, mathematics and statistics, and bioinformatics. As such the biomedical sciences have a much wider range of academic and research activities and economic significance than that defined by hospital laboratory sciences. Biomedical Sciences are the major focus of bioscience research and funding in the 21st century.
Roles within biomedical science
A sub-set of biomedical sciences is the science of clinical laboratory diagnosis. This is commonly referred to in the UK as 'biomedical science' or 'healthcare science'. There are at least 45 different specialisms within healthcare science, which are traditionally grouped into three main divisions:
specialisms involving life sciences
specialisms involving physiological science
specialisms involving medical physics or bioengineering
Life sciences specialties
Molecular toxicology
Molecular pathology
Blood transfusion science
Cervical cytology
Clinical biochemistry
Clinical embryology
Clinical immunology
Clinical pharmacology and therapeutics
Electron microscopy
External quality assurance
Haematology
Haemostasis and thrombosis
Histocompatibility and immunogenetics
Histopathology and cytopathology
Molecular genetics and cytogenetics
Molecular biology and cell biology
Microbiology including mycology
Bacteriology
Tropical diseases
Phlebotomy
Tissue banking/transplant
Virology
Physiological science specialisms
Physics and bioengineering specialisms
Biomedical science in the United Kingdom
The healthcare science workforce is an important part of the UK's National Health Service. While people working in healthcare science are only 5% of the staff of the NHS, 80% of all diagnoses can be attributed to their work.
The volume of specialist healthcare science work is a significant part of the work of the NHS. Every year, NHS healthcare scientists carry out:
nearly 1 billion pathology laboratory tests
more than 12 million physiological tests
support for 1.5 million fractions of radiotherapy
The four governments of the UK have recognised the importance of healthcare science to the NHS, introducing the Modernising Scientific Careers initiative to make certain that the education and training for healthcare scientists ensures there is the flexibility to meet patient needs while keeping up to date with scientific developments.
Graduates of an accredited biomedical science degree programme can also apply for the NHS' Scientist training programme, which gives successful applicants an opportunity to work in a clinical setting whilst also studying towards an MSc or Doctoral qualification.
Biomedical Science in the 20th century
At this point in history the field of medicine was the most prevalent subfield of biomedical science, as several breakthroughs in how to treat diseases and support the immune system were made, along with the birth of body augmentations.
1910s
In 1912, the Institute of Biomedical Science was founded in the United Kingdom. The institute still exists today and, more than a century later, continues to publish work on major breakthroughs in disease treatment and other advances in the field. The IBMS today represents approximately 20,000 members employed mainly in National Health Service and private laboratories.
1920s
In 1928, British scientist Alexander Fleming discovered the first antibiotic, penicillin. This was a huge breakthrough in biomedical science because it allowed for the treatment of bacterial infections.
In 1926, the first artificial pacemaker was made by Australian physician Dr. Mark C. Lidwell. This portable machine was plugged into a lighting point. One pole was applied to a skin pad soaked with strong salt solution, while the other consisted of a needle insulated except at its point, which was plunged into the appropriate cardiac chamber before the machine was started. A switch was incorporated to change the polarity. The pacemaker rate ranged from about 80 to 120 pulses per minute, and the voltage was also variable, from 1.5 to 120 volts.
1930s
The 1930s were a major decade for biomedical research, as this was the era in which antibiotics became more widespread and vaccines started to be developed. In 1935, the idea of a polio vaccine was introduced by Dr. Maurice Brodie. Brodie prepared a killed poliomyelitis vaccine, which he then tested on chimpanzees, himself, and several children. Brodie's vaccine trials went poorly, since the poliovirus became active in many of the human test subjects, and many suffered severe side effects, including paralysis and death.
1940s
During and after World War II, the field of biomedical science saw a new age of technology and treatment methods. For instance, in 1941 the first hormonal treatment for prostate cancer was implemented by the urologist and cancer researcher Charles B. Huggins. Huggins discovered that removing the testicles of a man with prostate cancer deprived the tumor of the hormones it fed on, putting the disease into remission. This advancement led to the development of hormone-blocking drugs, which are less invasive and still used today. At the tail end of this decade, the first bone marrow transplant was performed on a mouse, in 1949. The surgery was conducted by Dr. Leon O. Jacobson, who discovered that he could transplant bone marrow and spleen tissues into a mouse whose bone marrow and spleen had been destroyed. The procedure is still used in modern medicine today and is responsible for saving countless lives.
1950s
The 1950s saw innovation in technology across all fields, but most importantly there were many breakthroughs that led to modern medicine. On 6 March 1953, Dr. Jonas Salk announced the completion of the first successful killed-virus polio vaccine. The vaccine was tested on about 1.6 million Canadian, American, and Finnish children in 1954. The vaccine was announced as safe on 12 April 1955.
See also
Biomedical research institution Austral University Hospital
References
External links
Extraordinary You: Case studies of Healthcare scientists in the UK's National Health Service
National Institute of Environmental Health Sciences
The US National Library of Medicine
National Health Service
Health sciences
Health care occupations
Science occupations | 0.789715 | 0.995015 | 0.785779 |
Hypovolemia | Hypovolemia, also known as volume depletion or volume contraction, is a state of abnormally low extracellular fluid in the body. This may be due to either a loss of both salt and water or a decrease in blood volume. Hypovolemia refers to the loss of extracellular fluid and should not be confused with dehydration.
Hypovolemia is caused by a variety of events, but these can be simplified into two categories: those that are associated with kidney function and those that are not. The signs and symptoms of hypovolemia worsen as the amount of fluid lost increases. Immediately or shortly after mild fluid loss (from blood donation, diarrhea, vomiting, bleeding from trauma, etc.), one may experience headache, fatigue, weakness, dizziness, or thirst. Untreated hypovolemia or excessive and rapid losses of volume may lead to hypovolemic shock. Signs and symptoms of hypovolemic shock include increased heart rate, low blood pressure, pale or cold skin, and altered mental status. When these signs are seen, immediate action should be taken to restore the lost volume.
Signs and symptoms
Signs and symptoms of hypovolemia progress with increased loss of fluid volume.
Early symptoms of hypovolemia include headache, fatigue, weakness, thirst, and dizziness. The more severe signs and symptoms are often associated with hypovolemic shock. These include oliguria, cyanosis, abdominal and chest pain, hypotension, tachycardia, cold hands and feet, and progressively altering mental status.
Causes
The causes of hypovolemia can be characterized into two categories:
Kidney
Loss of body sodium and consequent intravascular water (due to impaired reabsorption of salt and water in the tubules of the kidneys)
Osmotic diuresis: the increase in urine production due to an excess of osmotic (namely glucose and urea) load in the tubules of the kidneys
Overuse of pharmacologic diuretics
Impaired response to hormones controlling salt and water balance (see mineralocorticoids)
Impaired kidney function due to tubular injury or other diseases
Other
Loss of bodily fluids due to:
Gastrointestinal losses; e.g. vomiting and diarrhea
Skin losses; e.g. excessive sweating and burns
Respiratory losses; e.g. hyperventilation (breathing fast)
Build up of fluid in empty spaces (third spaces) of the body due to:
Acute pancreatitis
Intestinal obstruction
Increase in vascular permeability
Dysautonomia, such as vasovagal syncope or postural orthostatic tachycardia syndrome (POTS)
Hypoalbuminemia
Loss of blood (external or internal bleeding or blood donation)
Pathophysiology
The signs and symptoms of hypovolemia are primarily due to the consequences of decreased circulating volume and a subsequent reduction in the amount of blood reaching the tissues of the body. In order to properly perform their functions, tissues require the oxygen transported in the blood. A decrease in circulating volume can lead to a decrease in bloodflow to the brain, resulting in headache and dizziness.
Baroreceptors in the body (primarily those located in the carotid sinuses and aortic arch) sense the reduction of circulating fluid and send signals to the brain to increase sympathetic response (see also: baroreflex). This sympathetic response is to release epinephrine and norepinephrine, which results in peripheral vasoconstriction (reducing size of blood vessels) in order to conserve the circulating fluids for organs vital to survival (i.e. brain and heart). Peripheral vasoconstriction accounts for the cold extremities (hands and feet), increased heart rate, increased cardiac output (and associated chest pain). Eventually, there will be less perfusion to the kidneys, resulting in decreased urine output.
Diagnosis
Hypovolemia can be recognized by a fast heart rate, low blood pressure, and the absence of perfusion as assessed by skin signs (skin turning pale) and/or capillary refill on forehead, lips and nail beds. The patient may feel dizzy, faint, nauseated, or very thirsty. These signs are also characteristic of most types of shock.
In children, compensation can result in an artificially high blood pressure despite hypovolemia (a decrease in blood volume). Children typically are able to compensate (maintain blood pressure despite hypovolemia) for a longer period than adults, but deteriorate rapidly and severely once they are unable to compensate (decompensate). Consequently, any possibility of internal bleeding in children should be treated aggressively.
Signs of external bleeding should be assessed, noting that individuals can bleed internally without external blood loss or otherwise apparent signs.
Possible mechanisms of injury that may have caused internal bleeding, such as ruptured or bruised internal organs, should be considered. If trained to do so and if the situation permits, a secondary survey should be conducted and the chest and abdomen checked for pain, deformity, guarding, discoloration or swelling. Bleeding into the abdominal cavity can cause the classical bruising patterns of Grey Turner's sign (bruising along the sides) or Cullen's sign (around the navel).
Investigation
In a hospital, physicians respond to a case of hypovolemic shock by conducting these investigations:
Blood tests: U+Es/Chem7, full blood count, glucose, blood type and screen
Central venous catheter
Arterial line
Urine output measurements (via urinary catheter)
Blood pressure
SpO2 oxygen saturation monitoring
Stages
Untreated hypovolemia can lead to shock (see also: hypovolemic shock). Most sources state that there are 4 stages of hypovolemia and subsequent shock; however, a number of other systems exist with as many as 6 stages.
The 4 stages are sometimes known as the "Tennis" staging of hypovolemic shock, as the stages of blood loss (under 15% of volume, 15–30% of volume, 30–40% of volume and above 40% of volume) mimic the scores in a game of tennis: 15, 15–30, 30–40 and 40. It is basically the same as used in classifying bleeding by blood loss.
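As a quick illustration of the blood-loss thresholds above, the following sketch maps an estimated percentage of volume lost to one of the four classic stages. The function name and return strings are illustrative; the cut-offs are simply those quoted in the "Tennis" staging (under 15%, 15–30%, 30–40%, above 40%).

```python
def hypovolemic_shock_stage(percent_blood_loss: float) -> str:
    """Classify hypovolemia by estimated % of blood volume lost.

    Thresholds follow the four-stage ("Tennis") scheme described above:
    <15%, 15-30%, 30-40%, >40% of total blood volume.
    """
    if percent_blood_loss < 0 or percent_blood_loss > 100:
        raise ValueError("percentage must be between 0 and 100")
    if percent_blood_loss < 15:
        return "Stage 1 (under 15% of volume)"
    if percent_blood_loss <= 30:
        return "Stage 2 (15-30% of volume)"
    if percent_blood_loss <= 40:
        return "Stage 3 (30-40% of volume)"
    return "Stage 4 (above 40% of volume)"

print(hypovolemic_shock_stage(22))   # Stage 2 (15-30% of volume)
```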
The signs and symptoms of the major stages of hypovolemic shock include:
Treatment
Field care
The most important step in treatment of hypovolemic shock is to identify and control the source of bleeding.
Medical personnel should immediately supply emergency oxygen to increase efficiency of the patient's remaining blood supply. This intervention can be life-saving.
Also, the respiratory pump is especially important during hypovolemia as spontaneous breathing may help reduce the effect of this loss of blood pressure on stroke volume by increasing venous return.
The use of intravenous fluids (IVs) may help compensate for lost fluid volume, but IV fluids cannot carry oxygen the way blood does; however, researchers are developing blood substitutes that can. Infusing colloid or crystalloid IV fluids also dilutes clotting factors in the blood, increasing the risk of bleeding. Current best practice allows permissive hypotension in patients with hypovolemic shock, both to avoid overly diluting clotting factors and to avoid artificially raising blood pressure to a point where it "blows off" clots that have formed.
Hospital treatment
Fluid replacement is beneficial in hypovolemia of stage 2, and is necessary in stage 3 and 4. See also the discussion of shock and the importance of treating reversible shock while it can still be countered.
The following interventions are carried out:
IV access
Oxygen as required
Fresh frozen plasma or blood transfusion
Surgical repair at sites of bleeding
Vasopressors (such as dopamine and noradrenaline) should generally be avoided, as they may result in further tissue ischemia and don't correct the primary problem. Fluids are the preferred choice of therapy.
History
In cases where loss of blood volume is clearly attributable to bleeding (as opposed to, e.g., dehydration), most medical practitioners prefer the term exsanguination for its greater specificity and descriptiveness, with the effect that the latter term is now more common in the relevant context.
See also
Hypervolemia
Non-pneumatic anti-shock garment
Polycythemia, an increase of the hematocrit level, with the "relative polycythemia" being a decrease in the volume of plasma
Volume status
References
Blood disorders
Medical emergencies | 0.788235 | 0.996722 | 0.785651 |
Hemodynamics | Hemodynamics or haemodynamics are the dynamics of blood flow. The circulatory system is controlled by homeostatic mechanisms of autoregulation, just as hydraulic circuits are controlled by control systems. The hemodynamic response continuously monitors and adjusts to conditions in the body and its environment. Hemodynamics explains the physical laws that govern the flow of blood in the blood vessels.
Blood flow ensures the transportation of nutrients, hormones, metabolic waste products, oxygen, and carbon dioxide throughout the body to maintain cell-level metabolism, the regulation of the pH, osmotic pressure and temperature of the whole body, and the protection from microbial and mechanical harm.
Blood is a non-Newtonian fluid, and is most efficiently studied using rheology rather than hydrodynamics. Because blood vessels are not rigid tubes, classic hydrodynamics and fluid mechanics based on the use of classical viscometers are not capable of explaining haemodynamics.
The study of the blood flow is called hemodynamics, and the study of the properties of the blood flow is called hemorheology.
Blood
Blood is a complex liquid. Blood is composed of plasma and formed elements. The plasma contains 91.5% water, 7% proteins and 1.5% other solutes. The formed elements are platelets, white blood cells, and red blood cells. The presence of these formed elements and their interaction with plasma molecules are the main reasons why blood differs so much from ideal Newtonian fluids.
Viscosity of plasma
Normal blood plasma behaves like a Newtonian fluid at physiological rates of shear. The typical value for the viscosity of normal human plasma at 37 °C is 1.4 mN·s/m2. The viscosity of normal plasma varies with temperature in the same way as does that of its solvent, water; a 3 °C rise in temperature within the physiological range (36.5 °C to 39.5 °C) reduces plasma viscosity by about 10%.
Osmotic pressure of plasma
The osmotic pressure of a solution is determined by the number of particles present and by the temperature. For example, a 1 molar solution of a substance contains about 6.022 × 10^23 molecules per liter of that substance, and at 0 °C it has an osmotic pressure of approximately 2.27 MPa (22.4 atm). The osmotic pressure of the plasma affects the mechanics of the circulation in several ways. An alteration of the osmotic pressure difference across the membrane of a blood cell causes a shift of water and a change of cell volume. The changes in shape and flexibility affect the mechanical properties of whole blood. A change in plasma osmotic pressure alters the hematocrit, that is, the volume concentration of red cells in the whole blood, by redistributing water between the intravascular and extravascular spaces. This in turn affects the mechanics of the whole blood.
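The figure quoted above follows from the van 't Hoff relation π = cRT for a dilute ideal solution; the short calculation below reproduces it. The script and its variable names are illustrative, and the relation assumes ideal, fully dispersed solute particles.

```python
# Osmotic pressure of a 1 molar ideal solution at 0 degrees C (van 't Hoff: pi = c*R*T)
c = 1000.0          # mol/m^3  (1 mol/L)
R = 8.314           # J/(mol*K), gas constant
T = 273.15          # K (0 degrees C)

pi_pascal = c * R * T                                            # ~2.27e6 Pa
print(f"{pi_pascal/1e6:.2f} MPa = {pi_pascal/101325:.1f} atm")   # 2.27 MPa = 22.4 atm
```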
Red blood cells
The red blood cell is highly flexible and biconcave in shape. Its membrane has a Young's modulus in the region of 10⁶ Pa. Deformation in red blood cells is induced by shear stress. When a suspension is sheared, the red blood cells deform and spin because of the velocity gradient, with the rate of deformation and spin depending on the shear rate and the concentration.
This can influence the mechanics of the circulation and may complicate the measurement of blood viscosity. In a steady flow of a viscous fluid past a rigid spherical body immersed in the fluid, where inertia is assumed to be negligible, the downward gravitational force on the particle is balanced by the viscous drag force. From this force balance the speed of fall can be shown to be given by Stokes' law:
Us = 2a²(ρp − ρf)g / (9μ)
where a is the particle radius, ρp and ρf are the particle and fluid densities respectively, μ is the fluid viscosity, and g is the gravitational acceleration. From the above equation we can see that the sedimentation velocity of the particle depends on the square of the radius. If the particle is released from rest in the fluid, its sedimentation velocity Us increases until it attains the steady value called the terminal velocity (U), as shown above.
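For a rough sense of scale, the sketch below evaluates this settling velocity for a single particle of red-cell-like size. The radius and density values are illustrative assumptions rather than measured properties, so the result is only an order-of-magnitude estimate.

```python
# Stokes settling velocity: Us = 2*a^2*(rho_p - rho_f)*g / (9*mu)
a = 4e-6          # m, assumed effective particle radius (red-cell scale)
rho_p = 1100.0    # kg/m^3, assumed cell density
rho_f = 1025.0    # kg/m^3, assumed plasma density
mu = 1.4e-3       # Pa*s, plasma viscosity at 37 C (from the section above)
g = 9.81          # m/s^2

u_s = 2 * a**2 * (rho_p - rho_f) * g / (9 * mu)
print(f"Terminal settling velocity ~ {u_s*1e6:.2f} micrometers per second")
```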
Hemodilution
Hemodilution is the dilution of the concentration of red blood cells and plasma constituents by partially substituting the blood with colloids or crystalloids. It is a strategy to avoid exposure of patients to the potential hazards of homologous blood transfusions.
Hemodilution can be normovolemic, which implies the dilution of normal blood constituents by the use of expanders. During acute normovolemic hemodilution (ANH), blood subsequently lost during surgery contains proportionally fewer red blood cells per milliliter, thus minimizing intraoperative loss of the whole blood. Therefore, blood lost by the patient during surgery is not actually lost by the patient, for this volume is purified and redirected into the patient.
On the other hand, hypervolemic hemodilution (HVH) uses acute preoperative volume expansion without any blood removal. In choosing a fluid, however, it must be assured that when mixed, the remaining blood behaves in the microcirculation as in the original blood fluid, retaining all its properties of viscosity.
In presenting what volume of ANH should be applied, one study suggests a mathematical model of ANH which calculates the maximum possible RCM savings using ANH, given the patient's weight, Hi and Hm.
To maintain normovolemia, the withdrawal of autologous blood must be simultaneously replaced by a suitable hemodilute. Ideally, this is achieved by isovolemic exchange transfusion of a plasma substitute with a colloid osmotic pressure (OP). A colloid is a fluid containing particles that are large enough to exert an oncotic pressure across the microvascular membrane.
When debating the use of colloid or crystalloid, it is imperative to think about all the components of the Starling equation for fluid filtration across the capillary wall:

Jv = Kf [(Pc − Pi) − σ(πc − πi)]

where Jv is the net fluid filtration rate, Kf is the filtration coefficient, Pc and Pi are the capillary and interstitial hydrostatic pressures, σ is the reflection coefficient, and πc and πi are the capillary and interstitial oncotic pressures.
To identify the minimum safe hematocrit desirable for a given patient, the following equation is useful:

BLs = EBV × ln(Hi / Hm)

where EBV is the estimated blood volume (70 mL/kg was used in this model), Hi (initial hematocrit) is the patient's initial hematocrit, and Hm is the minimum safe hematocrit; BLs is then the maximum blood loss that can be sustained before the hematocrit falls below Hm.
From the equation above it is clear that the volume of blood that must be removed during ANH to reach the Hm is the same as the BLs.
How much blood is to be removed is usually based on the weight, not the volume. The number of units that need to be removed to hemodilute to the minimum safe hematocrit (ANHu) can be found by
This is based on the assumption that each unit removed by hemodilution has a volume of 450 mL (the actual volume of a unit will vary somewhat since completion of collection is dependent on weight and not volume).
The model assumes that the hematocrit after hemodilution is equal to the Hm prior to surgery; therefore, the re-transfusion of blood obtained by hemodilution must begin when SBL begins.
The RCM available for retransfusion after ANH (RCMm) can be calculated from the patient's Hi and the final hematocrit after hemodilution (Hm).
The maximum SBL that is possible when ANH is used without falling below Hm (BLH) is found by assuming that all the blood removed during ANH is returned to the patient at a rate sufficient to maintain the hematocrit at the minimum safe level.
If ANH is used, there will not be any need for blood transfusion as long as SBL does not exceed BLH. We can conclude from the foregoing that SBL should therefore not exceed BLH.
The difference between the BLH and the BLs therefore is the incremental surgical blood loss (BLi) possible when using ANH.
When expressed in terms of the RCM
Where RCMi is the red cell mass that would have to be administered using homologous blood to maintain the Hm if ANH is not used and blood loss equals BLH.
The model used assumes ANH used for a 70 kg patient with an estimated blood volume of 70 ml/kg (4900 ml). A range of Hi and Hm was evaluated to understand conditions where hemodilution is necessary to benefit the patient.
Result
The results of the model calculations are presented in a table given in the appendix for a range of Hi from 0.30 to 0.50, with ANH performed to minimum hematocrits from 0.30 to 0.15. Given an Hi of 0.40 and an assumed Hm of 0.25, the RCM count is still high and ANH is not necessary as long as BLs does not exceed 2303 ml, since the hematocrit will not fall below Hm, although five units of blood must be removed during hemodilution. Under these conditions, if ANH is used to achieve the maximum benefit from the technique, no homologous blood will be required to maintain the Hm provided blood loss does not exceed 2940 ml. In such a case, ANH can save a maximum of 1.1 packed red blood cell unit equivalents; beyond this blood loss, homologous blood transfusion is necessary to maintain Hm even if ANH is used.
This model can be used to identify when ANH may be used for a given patient and the degree of ANH necessary to maximize that benefit.
For example, if Hi is 0.30 or less it is not possible to save a red cell mass equivalent to two units of homologous PRBC even if the patient is hemodiluted to an Hm of 0.15. That is because, from the RCM equation, the patient's RCM falls short of the value required by the equation given above.
If Hi is 0.40, one must remove at least 7.5 units of blood during ANH, resulting in an Hm of 0.20, to save the equivalent of two units. Clearly, the greater the Hi and the greater the number of units removed during hemodilution, the more effective ANH is for preventing homologous blood transfusion. The model here is designed to allow doctors to determine where ANH may be beneficial for a patient based on their knowledge of the Hi, the potential for SBL, and an estimate of the Hm. Though the model used a 70 kg patient, the results can be applied to any patient. To apply these results to any body weight, any of the values BLs, BLH and ANHu or PRBC given in the table need to be multiplied by a factor we will call T.
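A small sketch of how such a scaling could be applied in practice. The definition of T is not given in the text above; here it is assumed to be the ratio of the patient's weight to the 70 kg reference weight, consistent with an estimated blood volume of 70 mL/kg, and the table value used is only a placeholder, not a figure from the study:

REFERENCE_WEIGHT_KG = 70.0

def scale_table_value(value_for_70kg, patient_weight_kg):
    """Scale a volume-based table entry (BLs, BLH, ANHu or PRBC) to a patient's weight.

    Assumes T = patient weight / 70 kg, since estimated blood volume is taken as 70 mL/kg."""
    t = patient_weight_kg / REFERENCE_WEIGHT_KG
    return value_for_70kg * t

# Example: a hypothetical BLs of 2303 ml tabulated for a 70 kg patient, rescaled for a 90 kg patient
print(scale_table_value(2303, 90))  # ~2961 ml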
Basically, the model considered above is designed to predict the maximum RCM that ANH can save.
In summary, the efficacy of ANH has been described mathematically by means of measurements of surgical blood loss and blood volume flow measurement. This form of analysis permits accurate estimation of the potential efficiency of the techniques and shows the application of measurement in the medical field.
Blood flow
Cardiac output
The heart is the driver of the circulatory system, pumping blood through rhythmic contraction and relaxation. The rate of blood flow out of the heart (often expressed in L/min) is known as the cardiac output (CO).
Blood being pumped out of the heart first enters the aorta, the largest artery of the body. It then proceeds to divide into smaller and smaller arteries, then into arterioles, and eventually capillaries, where oxygen transfer occurs. The capillaries connect to venules, and the blood then travels back through the network of veins to the venae cavae into the right heart. The micro-circulation (the arterioles, capillaries, and venules) constitutes most of the area of the vascular system and is the site of the transfer of O2, glucose, and enzyme substrates into the cells. The venous system returns the de-oxygenated blood to the right heart, where it is pumped into the lungs to become oxygenated, and CO2 and other gaseous wastes are exchanged and expelled during breathing. Blood then returns to the left side of the heart, where it begins the process again.
In a normal circulatory system, the volume of blood returning to the heart each minute is approximately equal to the volume that is pumped out each minute (the cardiac output). Because of this, the velocity of blood flow across each level of the circulatory system is primarily determined by the total cross-sectional area of that level.
Cardiac output is determined by two methods. One is to use the Fick equation, which computes cardiac output from oxygen consumption and the arteriovenous oxygen content difference:

CO = VO2 / (CaO2 − CvO2)

where VO2 is the rate of oxygen consumption, and CaO2 and CvO2 are the oxygen contents of arterial and mixed venous blood respectively.
The other, the thermodilution method, senses the change in temperature of a liquid injected into the proximal port of a Swan–Ganz catheter as it passes the distal port.
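An illustrative calculation using the Fick equation with typical resting values; the numbers below are assumed example values, not measurements:

def fick_cardiac_output(vo2_ml_per_min, cao2_ml_o2_per_l, cvo2_ml_o2_per_l):
    """Cardiac output (L/min) from oxygen consumption and arteriovenous O2 content difference."""
    return vo2_ml_per_min / (cao2_ml_o2_per_l - cvo2_ml_o2_per_l)

# Assumed resting values: VO2 ~250 ml O2/min, arterial content ~200 ml O2/L,
# mixed venous content ~150 ml O2/L.
print(fick_cardiac_output(250, 200, 150))  # 5.0 L/min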
Cardiac output is mathematically expressed by the following equation:

CO = SV × HR
where
CO = cardiac output (L/min)
SV = stroke volume (ml)
HR = heart rate (bpm)
The normal human cardiac output is 5–6 L/min at rest. Not all the blood that enters the left ventricle exits the heart. The volume remaining in the ventricle at the end of diastole (the end-diastolic volume, EDV) minus the stroke volume makes up the end-systolic volume (ESV): ESV = EDV − SV.
Anatomical features
The circulatory systems of species subjected to orthostatic blood pressure (such as arboreal snakes) have evolved physiological and morphological features to overcome the circulatory disturbance. For instance, in arboreal snakes the heart is closer to the head, in comparison with aquatic snakes. This facilitates blood perfusion to the brain.
Turbulence
Blood flow is also affected by the smoothness of the vessels, resulting in either turbulent (chaotic) or laminar (smooth) flow. Smoothness is reduced by the buildup of fatty deposits on the arterial walls.
The Reynolds number (denoted NR or Re) is a relationship that helps determine the behavior of a fluid in a tube, in this case blood in the vessel.
The equation for this dimensionless relationship is written as:

Re = ρvL / μ
ρ: density of the blood
v: mean velocity of the blood
L: characteristic dimension of the vessel, in this case diameter
μ: viscosity of blood
The Reynolds number is directly proportional to the mean velocity of the blood and to the diameter of the tube. A Reynolds number of less than 2300 indicates laminar fluid flow, which is characterized by smooth, constant flow motion, whereas a value of over 4000 indicates turbulent flow. Because of their small radius and low velocity compared with other vessels, the Reynolds number in the capillaries is very low, resulting in laminar instead of turbulent flow.
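A small sketch comparing Reynolds numbers in a large artery and a capillary; the diameters, velocities, density and viscosity used here are rough, assumed textbook-scale values chosen only to illustrate the orders of magnitude:

def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * v * L / mu (dimensionless)."""
    return density * velocity * diameter / viscosity

rho = 1060.0  # blood density, kg/m^3 (assumed)
mu = 3.5e-3   # whole-blood viscosity, Pa.s (assumed)

# Aorta: ~2.5 cm diameter, mean velocity ~0.2 m/s (assumed)
print(reynolds_number(rho, 0.2, 0.025, mu))  # ~1500, laminar under resting conditions

# Capillary: ~8 micrometre diameter, velocity ~1 mm/s (assumed)
print(reynolds_number(rho, 1e-3, 8e-6, mu))  # ~0.002, far into the laminar regime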
Velocity
Blood flow velocity is often expressed in cm/s. This value is inversely related to the total cross-sectional area of the blood vessel and also differs across a given cross-section, because under normal conditions the flow is laminar. For this reason, the blood flow velocity is fastest in the middle of the vessel and slowest at the vessel wall. In most cases, the mean velocity is used. There are many ways to measure blood flow velocity, such as video capillary microscopy with frame-to-frame analysis, or laser Doppler anemometry.
Blood velocities in arteries are higher during systole than during diastole. One parameter to quantify this difference is the pulsatility index (PI), which is equal to the difference between the peak systolic velocity and the minimum diastolic velocity divided by the mean velocity during the cardiac cycle. This value decreases with distance from the heart.
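Restated as a simple function (a direct transcription of the definition above, with assumed illustrative velocity values in cm/s):

def pulsatility_index(peak_systolic_velocity, min_diastolic_velocity, mean_velocity):
    """PI = (peak systolic velocity - minimum diastolic velocity) / mean velocity over the cardiac cycle."""
    return (peak_systolic_velocity - min_diastolic_velocity) / mean_velocity

# Assumed example velocities (cm/s) for an artery close to the heart
print(pulsatility_index(100.0, 10.0, 40.0))  # PI = 2.25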
Blood vessels
Vascular resistance
Resistance is also related to vessel radius, vessel length, and blood viscosity.
A first approach, based on ideal fluid behavior, is indicated by the Hagen–Poiseuille equation:

∆P = 8μlQ / (πr⁴)
∆P: pressure drop/gradient
μ: viscosity
l: length of tube. In the case of vessels with infinitely long lengths, l is replaced with diameter of the vessel.
Q: flow rate of the blood in the vessel
r: radius of the vessel
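A minimal sketch of the strong dependence of Poiseuille resistance on radius; the vessel dimensions and viscosity below are assumed round numbers used only for illustration:

import math

def poiseuille_resistance(viscosity, length, radius):
    """Hydraulic resistance R = 8 * mu * l / (pi * r**4), so that dP = Q * R."""
    return 8.0 * viscosity * length / (math.pi * radius**4)

mu = 3.5e-3  # blood viscosity, Pa.s (assumed)
l = 0.01     # vessel length, m (assumed)

r = 30e-6                                   # arteriole-scale radius (~30 micrometres)
print(poiseuille_resistance(mu, l, r))      # baseline resistance
print(poiseuille_resistance(mu, l, r / 2))  # halving the radius raises resistance 16-fold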
In a second approach, more realistic of the vascular resistance and based on experimental observations of blood flow, according to Thurston there is a plasma release-cell layering at the walls surrounding a plugged flow. It is a fluid layer in which, at a distance δ from the wall, the viscosity η is a function of δ, written as η(δ); these surrounding layers do not meet at the vessel centre in real blood flow. Instead, there is the plugged flow, which is hyperviscous because it holds a high concentration of RBCs. Thurston combined this layer with the flow resistance to describe blood flow by means of a viscosity η(δ) and a thickness δ of the wall layer.
The blood resistance law then appears as an expression for R adapted to the blood flow profile:
where
R = resistance to blood flow
c = constant coefficient of flow
L = length of the vessel
η(δ) = viscosity of blood in the wall plasma release-cell layering
r = radius of the blood vessel
δ = distance in the plasma release-cell layer
Blood resistance varies depending on blood viscosity and on the size of the plugged flow (or, equivalently, of the sheath flow, since the two are complementary across the vessel section), as well as on the size of the vessels.
Assuming steady, laminar flow in the vessel, the behavior of a blood vessel is similar to that of a pipe. For instance, if p1 and p2 are the pressures at the two ends of the tube, the pressure drop (gradient) is:

∆P = p1 − p2
The larger arteries, including all those large enough to see without magnification, are conduits with low vascular resistance (assuming no advanced atherosclerotic changes), carrying high flow rates that generate only small drops in pressure. The smaller arteries and arterioles have higher resistance, and account for the main blood pressure drop between the major arteries and the capillaries in the circulatory system.
In the arterioles blood pressure is lower than in the major arteries. This is due to bifurcations, which cause a drop in pressure. The more bifurcations, the greater the total cross-sectional area, and therefore the pressure across the surface drops. This is why the arterioles have the highest pressure drop. The pressure drop across the arterioles is the product of flow rate and resistance: ∆P = Q × resistance. The high resistance observed in the arterioles, which factors largely into the ∆P, is a result of their small radius of about 30 μm. The smaller the radius of a tube, the larger the resistance to fluid flow.
Immediately following the arterioles are the capillaries. Following the logic observed in the arterioles, we expect the blood pressure to be lower in the capillaries than in the arterioles. Since pressure is a function of force per unit area (P = F/A), the larger the surface area, the lower the pressure when an external force acts on it. Though the radii of the individual capillaries are very small, the network of capillaries has the largest total surface area (485 mm²) in the human vascular network. The larger the total cross-sectional area, the lower the mean velocity as well as the pressure.
Substances called vasoconstrictors can reduce the size of blood vessels, thereby increasing blood pressure. Vasodilators (such as nitroglycerin) increase the size of blood vessels, thereby decreasing arterial pressure.
If the blood viscosity increases (gets thicker), the result is an increase in arterial pressure. Certain medical conditions can change the viscosity of the blood. For instance, anemia (low red blood cell concentration) reduces viscosity, whereas increased red blood cell concentration increases viscosity. It had been thought that aspirin and related "blood thinner" drugs decreased the viscosity of blood, but instead studies found that they act by reducing the tendency of the blood to clot.
To determine the systemic vascular resistance (SVR), the general formula for calculating resistance, pressure drop divided by flow, is used.
This translates for SVR into:

SVR = (MAP − CVP) / CO
Where
SVR = systemic vascular resistance (mmHg/L/min)
MAP = mean arterial pressure (mmHg)
CVP = central venous pressure (mmHg)
CO = cardiac output (L/min)
This gives the resistance in mmHg·min/L, also known as Wood units. To convert the answer to dynes·s·cm−5, it is multiplied by 80.
Normal systemic vascular resistance is between 900 and 1440 dynes·s·cm−5.
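A short sketch of this calculation in both unit systems, using assumed typical resting values for the pressures and cardiac output:

def svr_wood_units(map_mmhg, cvp_mmhg, co_l_per_min):
    """Systemic vascular resistance in mmHg.min/L (Wood units)."""
    return (map_mmhg - cvp_mmhg) / co_l_per_min

def svr_dynes(map_mmhg, cvp_mmhg, co_l_per_min):
    """Systemic vascular resistance in dynes.s.cm^-5 (Wood units x 80)."""
    return 80.0 * svr_wood_units(map_mmhg, cvp_mmhg, co_l_per_min)

# Assumed values: MAP 93 mmHg, CVP 5 mmHg, CO 5 L/min
print(svr_wood_units(93, 5, 5))  # 17.6 Wood units
print(svr_dynes(93, 5, 5))       # ~1408 dynes.s.cm^-5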
Wall tension
Regardless of site, blood pressure is related to the wall tension of the vessel according to the Young–Laplace equation (assuming that the thickness of the vessel wall is very small as compared to the diameter of the lumen):

σ = Pr / t
where
P is the blood pressure
t is the wall thickness
r is the inside radius of the cylinder.
σ is the cylinder stress or "hoop stress".
For the thin-walled assumption to be valid the vessel must have a wall thickness of no more than about one-tenth (often cited as one twentieth) of its radius.
The cylinder stress, in turn, is the average force exerted circumferentially (perpendicular both to the axis and to the radius of the object) in the cylinder wall, and can be described as:

σ = F / (t l)
where:
F is the force exerted circumferentially on an area of the cylinder wall that has the following two lengths as sides:
t is the radial thickness of the cylinder
l is the axial length of the cylinder
Stress
When force is applied to a material it starts to deform or move. The force needed to deform a material (for example, to make a fluid flow) increases with the size of the surface on which it acts, so the magnitude of this force F is proportional to the area A of that portion of the surface. Therefore, the quantity F/A, the force per unit area, is called the stress. The shear stress at the wall that is associated with blood flow through an artery depends on the artery size and geometry and can range between 0.5 and 4 Pa.
Under normal conditions, shear stress maintains its magnitude and direction within an acceptable range, which avoids atherogenesis, thrombosis, smooth muscle proliferation and endothelial apoptosis. In some cases, such as during blood hammer, shear stress reaches larger values, and the direction of the stress may also be changed by reverse flow, depending on the hemodynamic conditions. Such situations can therefore promote atherosclerosis.
Capacitance
Veins are described as the "capacitance vessels" of the body because over 70% of the blood volume resides in the venous system. Veins are more compliant than arteries and expand to accommodate changing volume.
Blood pressure
The blood pressure in the circulation is principally due to the pumping action of the heart. The pumping action of the heart generates pulsatile blood flow, which is conducted into the arteries, across the micro-circulation and eventually, back via the venous system, to the heart. During each heartbeat, systemic arterial blood pressure varies between a maximum (systolic) and a minimum (diastolic) pressure. In physiology, these are often simplified into one value, the mean arterial pressure (MAP), which is calculated as follows:

MAP ≈ DP + PP/3
where:
MAP = Mean Arterial Pressure
DP = Diastolic blood pressure
PP = Pulse pressure which is systolic pressure minus diastolic pressure.
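A worked example of this approximation for a textbook reading of 120/80 mmHg:

def mean_arterial_pressure(systolic, diastolic):
    """Approximate MAP as diastolic pressure plus one third of the pulse pressure."""
    pulse_pressure = systolic - diastolic
    return diastolic + pulse_pressure / 3.0

print(mean_arterial_pressure(120, 80))  # ~93.3 mmHg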
Differences in mean blood pressure are responsible for blood flow from one location to another in the circulation. The rate of mean blood flow depends on both blood pressure and the resistance to flow presented by the blood vessels. Mean blood pressure decreases as the circulating blood moves away from the heart through arteries and capillaries due to viscous losses of energy. Mean blood pressure drops over the whole circulation, although most of the fall occurs along the small arteries and arterioles. Gravity affects blood pressure via hydrostatic forces (e.g., during standing), and valves in veins, breathing, and pumping from contraction of skeletal muscles also influence blood pressure in veins.
The relationship between pressure, flow, and resistance is expressed in the following equation:

Flow = Pressure / Resistance
When applied to the circulatory system, we get:

CO = (MAP − RAP) / SVR
where
CO = cardiac output (in L/min)
MAP = mean arterial pressure (in mmHg), the average pressure of blood as it leaves the heart
RAP = right atrial pressure (in mmHg), the average pressure of blood as it returns to the heart
SVR = systemic vascular resistance (in mmHg * min/L)
A simplified form of this equation assumes that right atrial pressure is approximately 0:

CO ≈ MAP / SVR
The ideal blood pressure in the brachial artery, where standard blood pressure cuffs measure pressure, is <120/80 mmHg. Other major arteries have similar levels of blood pressure recordings, indicating very low disparities among major arteries. In the innominate artery, the average reading is 110/70 mmHg, the right subclavian artery averages 120/80 mmHg, and the abdominal aorta is 110/70 mmHg. The relatively uniform pressure in the arteries indicates that these blood vessels act as a pressure reservoir for the fluids that are transported within them.
Pressure drops gradually as blood flows from the major arteries, through the arterioles and the capillaries, until blood is pushed back up into the heart via the venules and the veins through the vena cava, with the help of the muscles. At any given pressure drop, the flow rate is determined by the resistance to the blood flow. In the arteries, in the absence of disease, there is very little or no resistance to blood flow. Vessel diameter is the principal determinant of resistance. Compared with other, smaller vessels in the body, the artery has a much bigger diameter (4 mm), and therefore the resistance is low.
The arm–leg (blood pressure) gradient is the difference between the blood pressure measured in the arms and that measured in the legs. It is normally less than 10 mm Hg, but may be increased in e.g. coarctation of the aorta.
Clinical significance
Pressure monitoring
Hemodynamic monitoring is the observation of hemodynamic parameters over time, such as blood pressure and heart rate. Blood pressure can be monitored either invasively through an inserted blood pressure transducer assembly (providing continuous monitoring), or noninvasively by repeatedly measuring the blood pressure with an inflatable blood pressure cuff.
Hypertension is diagnosed by the presence of arterial blood pressures of 140/90 mmHg or greater on two clinical visits.
Pulmonary Artery Wedge Pressure can show if there is congestive heart failure, mitral and aortic valve disorders, hypervolemia, shunts, or cardiac tamponade.
Remote, indirect monitoring of blood flow by laser Doppler
Noninvasive hemodynamic monitoring of eye fundus vessels can be performed by laser Doppler holography with near-infrared light. The eye offers a unique opportunity for the non-invasive exploration of cardiovascular diseases. Laser Doppler imaging by digital holography can measure blood flow in the retina and choroid, whose Doppler responses exhibit a pulse-shaped profile with time. This technique enables non-invasive functional microangiography by high-contrast measurement of Doppler responses from endoluminal blood flow profiles in vessels in the posterior segment of the eye. Differences in blood pressure drive the flow of blood throughout the circulation. The rate of mean blood flow depends on both blood pressure and the hemodynamic resistance to flow presented by the blood vessels.
Glossary
ANH: Acute normovolemic hemodilution
ANHu: Number of units removed during ANH
BLH: Maximum blood loss possible when ANH is used before homologous blood transfusion is needed
BLi: Incremental blood loss possible with ANH (BLH − BLs)
BLs: Maximum blood loss without ANH before homologous blood transfusion is required
EBV: Estimated blood volume (70 mL/kg)
Hct: Haematocrit, always expressed here as a fraction
Hi: Initial haematocrit
Hm: Minimum safe haematocrit
PRBC: Packed red blood cell equivalent saved by ANH
RCM: Red cell mass
RCMH: Cell mass available for transfusion after ANH
RCMi: Red cell mass saved by ANH
SBL: Surgical blood loss
Etymology and pronunciation
The word hemodynamics uses combining forms of hemo- (which comes from the ancient Greek haima, meaning blood) and dynamics, thus "the dynamics of blood". The vowel of the hemo- syllable is variously written according to the ae/e variation.
See also
Blood hammer
Blood pressure
Cardiac output
Cardiovascular System Dynamics Society
Electrical cardiometry
Esophageal Doppler
Hemodynamics of the aorta
Impedance cardiography
Photoplethysmogram
Laser Doppler imaging
Windkessel effect
Functional near-infrared spectroscopy
Notes and references
External links
Learn hemodynamics
Fluid mechanics
Computational fluid dynamics
Cardiovascular physiology
Exercise physiology
Blood
Mathematics in medicine
Fluid dynamics
Blood
Blood is a body fluid in the circulatory system of humans and other vertebrates that delivers necessary substances such as nutrients and oxygen to the cells, and transports metabolic waste products away from those same cells.
Blood is composed of blood cells suspended in blood plasma. Plasma, which constitutes 55% of blood fluid, is mostly water (92% by volume), and contains proteins, glucose, mineral ions, and hormones. The blood cells are mainly red blood cells (erythrocytes), white blood cells (leukocytes), and (in mammals) platelets (thrombocytes). The most abundant cells are red blood cells. These contain hemoglobin, which facilitates oxygen transport by reversibly binding to it, increasing its solubility. Jawed vertebrates have an adaptive immune system, based largely on white blood cells. White blood cells help to resist infections and parasites. Platelets are important in the clotting of blood.
Blood is circulated around the body through blood vessels by the pumping action of the heart. In animals with lungs, arterial blood carries oxygen from inhaled air to the tissues of the body, and venous blood carries carbon dioxide, a waste product of metabolism produced by cells, from the tissues to the lungs to be exhaled. Blood is bright red when its hemoglobin is oxygenated and dark red when it is deoxygenated.
Medical terms related to blood often begin with hemo-, hemato-, haemo- or haemato- from the Greek word for "blood". In terms of anatomy and histology, blood is considered a specialized form of connective tissue, given its origin in the bones and the presence of potential molecular fibers in the form of fibrinogen.
Functions
Blood performs many important functions within the body, including:
Supply of oxygen to tissues (bound to hemoglobin, which is carried in red cells)
Supply of nutrients such as glucose, amino acids, and fatty acids (dissolved in the blood or bound to plasma proteins (e.g., blood lipids))
Removal of waste such as carbon dioxide, urea, and lactic acid
Immunological functions, including circulation of white blood cells, and detection of foreign material by antibodies
Coagulation, the response to a broken blood vessel, the conversion of blood from a liquid to a semisolid gel to stop bleeding
Messenger functions, including the transport of hormones and the signaling of tissue damage
Regulation of core body temperature
Hydraulic functions
Constituents
In mammals
Blood accounts for 7% of the human body weight, with an average density around 1060 kg/m³, very close to pure water's density of 1000 kg/m³. The average adult has a blood volume of roughly 5 litres (1.3 gallons), which is composed of plasma and formed elements. The formed elements are the two types of blood cell or corpuscle – the red blood cells (erythrocytes) and white blood cells (leukocytes) – and the cell fragments called platelets that are involved in clotting. By volume, the red blood cells constitute about 45% of whole blood, the plasma about 54.3%, and white cells about 0.7%.
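A small sketch of the kind of estimate these figures allow. The 7% mass fraction and 1060 kg/m³ density are taken from the text above; the 70 kg body weight is an assumed example value:

def estimated_blood_volume_litres(body_mass_kg, blood_mass_fraction=0.07, blood_density_kg_per_m3=1060.0):
    """Estimate blood volume (litres) from body mass, blood mass fraction and blood density."""
    blood_mass_kg = body_mass_kg * blood_mass_fraction
    volume_m3 = blood_mass_kg / blood_density_kg_per_m3
    return volume_m3 * 1000.0  # m^3 -> litres

print(estimated_blood_volume_litres(70))  # ~4.6 litres for an assumed 70 kg adult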
Whole blood (plasma and cells) exhibits non-Newtonian fluid dynamics.
Cells
One microliter of blood contains:
4.7 to 6.1 million (male), 4.2 to 5.4 million (female) erythrocytes: Red blood cells contain the blood's hemoglobin and distribute oxygen. Mature red blood cells lack a nucleus and organelles in mammals. The red blood cells (together with endothelial vessel cells and other cells) are also marked by glycoproteins that define the different blood types. The proportion of blood occupied by red blood cells is referred to as the hematocrit, and is normally about 45%. The combined surface area of all red blood cells of the human body would be roughly 2,000 times as great as the body's exterior surface.
4,000–11,000 leukocytes: White blood cells are part of the body's immune system; they destroy and remove old or aberrant cells and cellular debris, as well as attack infectious agents (pathogens) and foreign substances. The cancer of leukocytes is called leukemia.
200,000–500,000 thrombocytes: Also called platelets, they take part in blood clotting (coagulation). Fibrin from the coagulation cascade creates a mesh over the platelet plug.
Plasma
About 55% of blood is blood plasma, a fluid that is the blood's liquid medium, which by itself is straw-yellow in color. The blood plasma volume totals 2.7–3.0 liters (2.8–3.2 quarts) in an average human. It is essentially an aqueous solution containing 92% water, 8% blood plasma proteins, and trace amounts of other materials. Plasma circulates dissolved nutrients, such as glucose, amino acids, and fatty acids (dissolved in the blood or bound to plasma proteins), and removes waste products, such as carbon dioxide, urea, and lactic acid.
Other important components include:
Serum albumin
Blood-clotting factors (to facilitate coagulation)
Immunoglobulins (antibodies)
lipoprotein particles
Various other proteins
Various electrolytes (mainly sodium and chloride)
The term serum refers to plasma from which the clotting proteins have been removed. Most of the proteins remaining are albumin and immunoglobulins.
Acidity
Blood pH is regulated to stay within the narrow range of 7.35 to 7.45, making it slightly basic (alkaline). Extra-cellular fluid in blood that has a pH below 7.35 is too acidic, whereas blood pH above 7.45 is too basic. A pH below 6.9 or above 7.8 is usually lethal. Blood pH, partial pressure of oxygen (pO2), partial pressure of carbon dioxide (pCO2), and bicarbonate (HCO3−) are carefully regulated by a number of homeostatic mechanisms, which exert their influence principally through the respiratory system and the urinary system in order to control the acid–base balance and respiration; this regulation is called compensation. An arterial blood gas test measures these. Plasma also circulates hormones, transmitting their messages to various tissues. The list of normal reference ranges for various blood electrolytes is extensive.
In non-mammals
Human blood is typical of that of mammals, although the precise details concerning cell numbers, size, protein structure, and so on, vary somewhat between species. In non-mammalian vertebrates, however, there are some key differences:
Red blood cells of non-mammalian vertebrates are flattened and ovoid in form, and retain their cell nuclei.
There is considerable variation in the types and proportions of white blood cells; for example, acidophils are generally more common than in humans.
Platelets are unique to mammals; in other vertebrates, small nucleated, spindle cells called thrombocytes are responsible for blood clotting instead.
Physiology
Circulatory system
Blood is circulated around the body through blood vessels by the pumping action of the heart. In humans, blood is pumped from the strong left ventricle of the heart through arteries to peripheral tissues and returns to the right atrium of the heart through veins. It then enters the right ventricle and is pumped through the pulmonary artery to the lungs and returns to the left atrium through the pulmonary veins. Blood then enters the left ventricle to be circulated again. Arterial blood carries oxygen from inhaled air to all of the cells of the body, and venous blood carries carbon dioxide, a waste product of metabolism by cells, to the lungs to be exhaled. However, one exception includes pulmonary arteries, which contain the most deoxygenated blood in the body, while the pulmonary veins contain oxygenated blood.
Additional return flow may be generated by the movement of skeletal muscles, which can compress veins and push blood through the valves in veins toward the right atrium.
The blood circulation was famously described by William Harvey in 1628.
Cell production and degradation
In vertebrates, the various cells of blood are made in the bone marrow in a process called hematopoiesis, which includes erythropoiesis, the production of red blood cells; and myelopoiesis, the production of white blood cells and platelets. During childhood, almost every human bone produces red blood cells; as adults, red blood cell production is limited to the larger bones: the bodies of the vertebrae, the breastbone (sternum), the ribcage, the pelvic bones, and the bones of the upper arms and legs. In addition, during childhood, the thymus gland, found in the mediastinum, is an important source of T lymphocytes.
The proteinaceous component of blood (including clotting proteins) is produced predominantly by the liver, while hormones are produced by the endocrine glands and the watery fraction is regulated by the hypothalamus and maintained by the kidney.
Healthy erythrocytes have a plasma life of about 120 days before they are degraded by the spleen, and the Kupffer cells in the liver. The liver also clears some proteins, lipids, and amino acids. The kidney actively secretes waste products into the urine.
Oxygen transport
About 98.5% of the oxygen in a sample of arterial blood in a healthy human breathing air at sea-level pressure is chemically combined with the hemoglobin. About 1.5% is physically dissolved in the other blood liquids and not connected to hemoglobin. The hemoglobin molecule is the primary transporter of oxygen in mammals and many other species. Hemoglobin has an oxygen binding capacity between 1.36 and 1.40 ml O2 per gram of hemoglobin, which increases the total blood oxygen capacity seventyfold compared with the amount that could be carried by physical solubility alone, which is 0.03 ml O2 per liter of blood per mm Hg partial pressure of oxygen (about 100 mm Hg in arteries).
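A rough numerical sketch consistent with these figures. The hemoglobin concentration (150 g/L), arterial saturation (about 98%) and arterial oxygen tension (100 mm Hg) are assumed typical values used only for illustration:

def arterial_o2_content_ml_per_l(hb_g_per_l=150.0, sao2=0.98, pao2_mmhg=100.0,
                                 binding_ml_per_g=1.36, solubility_ml_per_l_per_mmhg=0.03):
    """Approximate O2 content of arterial blood (ml O2 per litre of blood)."""
    bound = hb_g_per_l * binding_ml_per_g * sao2           # carried on hemoglobin
    dissolved = solubility_ml_per_l_per_mmhg * pao2_mmhg   # physically dissolved
    return bound + dissolved

print(arterial_o2_content_ml_per_l())  # ~203 ml O2 per litre, roughly 70x the ~3 ml that is dissolved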
With the exception of pulmonary and umbilical arteries and their corresponding veins, arteries carry oxygenated blood away from the heart and deliver it to the body via arterioles and capillaries, where the oxygen is consumed; afterwards, venules and veins carry deoxygenated blood back to the heart.
Under normal conditions in adult humans at rest, hemoglobin in blood leaving the lungs is about 98–99% saturated with oxygen, achieving an oxygen delivery between 950 and 1150 ml/min to the body. In a healthy adult at rest, oxygen consumption is approximately 200–250 ml/min, and deoxygenated blood returning to the lungs is still roughly 75% (70 to 78%) saturated. Increased oxygen consumption during sustained exercise reduces the oxygen saturation of venous blood, which can reach less than 15% in a trained athlete; although breathing rate and blood flow increase to compensate, oxygen saturation in arterial blood can drop to 95% or less under these conditions. Oxygen saturation this low is considered dangerous in an individual at rest (for instance, during surgery under anesthesia). Sustained hypoxia (oxygenation less than 90%), is dangerous to health, and severe hypoxia (saturations less than 30%) may be rapidly fatal.
A fetus, receiving oxygen via the placenta, is exposed to much lower oxygen pressures (about 21% of the level found in an adult's lungs), so fetuses produce another form of hemoglobin with a much higher affinity for oxygen (hemoglobin F) to function under these conditions.
Carbon dioxide transport
CO2 is carried in blood in three different ways. (The exact percentages vary depending on whether it is arterial or venous blood.) Most of it (about 70%) is converted to bicarbonate ions by the enzyme carbonic anhydrase in the red blood cells by the reaction CO2 + H2O → H2CO3 → H+ + HCO3−; about 7% is dissolved in the plasma; and about 23% is bound to hemoglobin as carbamino compounds.
Hemoglobin, the main oxygen-carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the CO2 bound to hemoglobin does not bind to the same site as oxygen. Instead, it combines with the N-terminal groups on the four globin chains. However, because of allosteric effects on the hemoglobin molecule, the binding of CO2 decreases the amount of oxygen that is bound for a given partial pressure of oxygen. The decreased binding to carbon dioxide in the blood due to increased oxygen levels is known as the Haldane effect, and is important in the transport of carbon dioxide from the tissues to the lungs. A rise in the partial pressure of CO2 or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect.
Transport of hydrogen ions
Some oxyhemoglobin loses oxygen and becomes deoxyhemoglobin. Deoxyhemoglobin binds most of the hydrogen ions as it has a much greater affinity for more hydrogen than does oxyhemoglobin.
Lymphatic system
In mammals, blood is in equilibrium with lymph, which is continuously formed in tissues from blood by capillary ultrafiltration. Lymph is collected by a system of small lymphatic vessels and directed to the thoracic duct, which drains into the left subclavian vein, where lymph rejoins the systemic blood circulation.
Thermoregulation
Blood circulation transports heat throughout the body, and adjustments to this flow are an important part of thermoregulation. Increasing blood flow to the surface (e.g., during warm weather or strenuous exercise) causes warmer skin, resulting in faster heat loss. In contrast, when the external temperature is low, blood flow to the extremities and the surface of the skin is reduced to prevent heat loss, and blood is preferentially circulated to the important organs of the body.
Rate of flow
Rate of blood flow varies greatly between different organs. Liver has the most abundant blood supply with an approximate flow of 1350 ml/min. Kidney and brain are the second and the third most supplied organs, with 1100 ml/min and ~700 ml/min, respectively.
Relative rates of blood flow per 100 g of tissue are different, with kidney, adrenal gland and thyroid being the first, second and third most supplied tissues, respectively.
Hydraulic functions
The restriction of blood flow can also be used in specialized tissues to cause engorgement, resulting in an erection of that tissue; examples are the erectile tissue in the penis and clitoris.
Another example of a hydraulic function is the jumping spider, in which blood forced into the legs under pressure causes them to straighten for a powerful jump, without the need for bulky muscular legs.
Color
Hemoglobin is the principal determinant of the color of blood (hemochrome). Each molecule has four heme groups, and their interaction with various molecules alters the exact color. Arterial blood and capillary blood are bright red, as oxygen imparts a strong red color to the heme group. Deoxygenated blood is a darker shade of red; this is present in veins, and can be seen during blood donation and when venous blood samples are taken. This is because the spectrum of light absorbed by hemoglobin differs between the oxygenated and deoxygenated states.
Blood in carbon monoxide poisoning is bright red, because carbon monoxide causes the formation of carboxyhemoglobin. In cyanide poisoning, the body cannot use oxygen, so the venous blood remains oxygenated, increasing the redness. There are some conditions affecting the heme groups present in hemoglobin that can make the skin appear blue – a symptom called cyanosis. If the heme is oxidized, methemoglobin, which is more brownish and cannot transport oxygen, is formed. In the rare condition sulfhemoglobinemia, arterial hemoglobin is partially oxygenated, and appears dark red with a bluish hue.
Veins close to the surface of the skin appear blue for a variety of reasons. However, the factors that contribute to this alteration of color perception are related to the light-scattering properties of the skin and the processing of visual input by the visual cortex, rather than the actual color of the venous blood.
Skinks in the genus Prasinohaema have green blood due to a buildup of the waste product biliverdin.
Disorders
General medical
Disorders of volume
Injury can cause blood loss through bleeding. A healthy adult can lose almost 20% of blood volume (1 L) before the first symptom, restlessness, begins, and 40% of volume (2 L) before shock sets in. Thrombocytes are important for blood coagulation and the formation of blood clots, which can stop bleeding. Trauma to the internal organs or bones can cause internal bleeding, which can sometimes be severe.
Dehydration can reduce the blood volume by reducing the water content of the blood. This would rarely result in shock (apart from the very severe cases) but may result in orthostatic hypotension and fainting.
Disorders of circulation
Shock is the ineffective perfusion of tissues, and can be caused by a variety of conditions including blood loss, infection, poor cardiac output.
Atherosclerosis reduces the flow of blood through arteries, because atheroma lines arteries and narrows them. Atheroma tends to increase with age, and its progression can be compounded by many causes including smoking, high blood pressure, excess circulating lipids (hyperlipidemia), and diabetes mellitus.
Coagulation can form a thrombosis, which can obstruct vessels.
Problems with blood composition, the pumping action of the heart, or narrowing of blood vessels can have many consequences including hypoxia (lack of oxygen) of the tissues supplied. The term ischemia refers to tissue that is inadequately perfused with blood, and infarction refers to tissue death (necrosis), which can occur when the blood supply has been blocked (or is very inadequate).
Hematological
Anemia
Insufficient red cell mass (anemia) can be the result of bleeding, blood disorders like thalassemia, or nutritional deficiencies, and may require one or more blood transfusions. Anemia can also be due to a genetic disorder in which the red blood cells do not function effectively. Anemia can be confirmed by a blood test if the hemoglobin value is less than 13.5 g/dL in men or less than 12.0 g/dL in women. Several countries have blood banks to fill the demand for transfusable blood. A person receiving a blood transfusion must have a blood type compatible with that of the donor.
Sickle-cell anemia
Disorders of cell proliferation
Leukemia is a group of cancers of the blood-forming tissues and cells.
Non-cancerous overproduction of red cells (polycythemia vera) or platelets (essential thrombocytosis) may be premalignant.
Myelodysplastic syndromes involve ineffective production of one or more cell lines.
Disorders of coagulation
Hemophilia is a genetic illness that causes dysfunction in one of the blood's clotting mechanisms. This can allow otherwise inconsequential wounds to be life-threatening, but more commonly results in hemarthrosis, or bleeding into joint spaces, which can be crippling.
Ineffective or insufficient platelets can also result in coagulopathy (bleeding disorders).
Hypercoagulable state (thrombophilia) results from defects in regulation of platelet or clotting factor function, and can cause thrombosis.
Infectious disorders of blood
Blood is an important vector of infection. HIV, the virus that causes AIDS, is transmitted through contact with blood, semen or other body secretions of an infected person. Hepatitis B and C are transmitted primarily through blood contact. Owing to blood-borne infections, bloodstained objects are treated as a biohazard.
Bacterial infection of the blood is bacteremia or sepsis. Viral infection is viremia. Malaria and trypanosomiasis are blood-borne parasitic infections.
Carbon monoxide poisoning
Substances other than oxygen can bind to hemoglobin; in some cases, this can cause irreversible damage to the body. Carbon monoxide, for example, is extremely dangerous when carried to the blood via the lungs by inhalation, because carbon monoxide irreversibly binds to hemoglobin to form carboxyhemoglobin, so that less hemoglobin is free to bind oxygen, and fewer oxygen molecules can be transported throughout the blood. This can cause suffocation insidiously. A fire burning in an enclosed room with poor ventilation presents a very dangerous hazard, since it can create a build-up of carbon monoxide in the air. Some carbon monoxide binds to hemoglobin when smoking tobacco.
Treatments
Transfusion
Blood for transfusion is obtained from human donors by blood donation and stored in a blood bank. There are many different blood types in humans, the ABO blood group system, and the Rhesus blood group system being the most important. Transfusion of blood of an incompatible blood group may cause severe, often fatal, complications, so crossmatching is done to ensure that a compatible blood product is transfused.
Other blood products administered intravenously are platelets, blood plasma, cryoprecipitate, and specific coagulation factor concentrates.
Intravenous administration
Many forms of medication (from antibiotics to chemotherapy) are administered intravenously, as they are not readily or adequately absorbed by the digestive tract.
After severe acute blood loss, liquid preparations, generically known as plasma expanders, can be given intravenously, either solutions of salts (NaCl, KCl, CaCl2 etc.) at physiological concentrations, or colloidal solutions, such as dextrans, human serum albumin, or fresh frozen plasma. In these emergency situations, a plasma expander is a more effective life-saving procedure than a blood transfusion, because the metabolism of transfused red blood cells does not restart immediately after a transfusion.
Letting
In modern evidence-based medicine, bloodletting is used in management of a few rare diseases, including hemochromatosis and polycythemia. However, bloodletting and leeching were common unvalidated interventions used until the 19th century, as many diseases were incorrectly thought to be due to an excess of blood, according to Hippocratic medicine.
Etymology
English blood (Old English blod) derives from Germanic and has cognates with a similar range of meanings in all other Germanic languages (e.g. German Blut, Swedish blod, Gothic blōþ). There is no accepted Indo-European etymology.
History
Classical Greek medicine
Robin Fåhræus (a Swedish physician who devised the erythrocyte sedimentation rate) suggested that the Ancient Greek system of humorism, wherein the body was thought to contain four distinct bodily fluids (associated with different temperaments), was based upon the observation of blood clotting in a transparent container. When blood is drawn in a glass container and left undisturbed for about an hour, four different layers can be seen. A dark clot forms at the bottom (the "black bile"). Above the clot is a layer of red blood cells (the "blood"). Above this is a whitish layer of white blood cells (the "phlegm"). The top layer is clear yellow serum (the "yellow bile").
In general, Greek thinkers believed that blood was made from food. Plato and Aristotle are two important sources of evidence for this view, but it dates back to Homer's Iliad. Plato thinks that fire in our bellies transforms food into blood, and that the movements of air in the body as we exhale and inhale carry the fire as it transforms our food into blood. Aristotle believed that food is concocted into blood in the heart and transformed into our body's matter.
Types
The ABO blood group system was discovered in the year 1900 by Karl Landsteiner. Jan Janský is credited with the first classification of blood into the four types (A, B, AB, and O) in 1907, which remains in use today. In 1907 the first blood transfusion was performed that used the ABO system to predict compatibility. The first non-direct transfusion was performed on 27 March 1914. The Rhesus factor was discovered in 1937.
Culture and religion
Due to its importance to life, blood is associated with a large number of beliefs. One of the most basic is the use of blood as a symbol for family relationships through birth/parentage; to be "related by blood" is to be related by ancestry or descendence, rather than marriage. This bears closely to bloodlines, and sayings such as "blood is thicker than water" and "bad blood", as well as "Blood brother".
Blood is given particular emphasis in the Islamic, Jewish, and Christian religions, because Leviticus 17:11 says "the life of a creature is in the blood." This phrase is part of the Levitical law forbidding the drinking of blood or eating meat with the blood still intact instead of being poured off.
Mythic references to blood can sometimes be connected to the life-giving nature of blood, seen in such events as childbirth, as contrasted with the blood of injury or death.
Indigenous Australians
In many indigenous Australian Aboriginal peoples' traditions, ochre (particularly red) and blood, both high in iron content and considered Maban, are applied to the bodies of dancers for ritual. Lawlor states that blood employed in this fashion is held by these peoples to attune the dancers to the invisible energetic realm of the Dreamtime. Lawlor then connects these invisible energetic realms and magnetic fields, because iron is magnetic.
European paganism
Among the Germanic tribes, blood was used during their sacrifices; the Blóts. The blood was considered to have the power of its originator, and, after the butchering, the blood was sprinkled on the walls, on the statues of the gods, and on the participants themselves. This act of sprinkling blood was called blóedsian in Old English, and the terminology was borrowed by the Roman Catholic Church becoming to bless and blessing. The Hittite word for blood, ishar was a cognate to words for "oath" and "bond", see Ishara.
The Ancient Greeks believed that the blood of the gods, ichor, was a substance that was poisonous to mortals.
As a relic of Germanic Law, the cruentation, an ordeal where the corpse of the victim was supposed to start bleeding in the presence of the murderer, was used until the early 17th century.
Christianity
In Genesis 9:4, God prohibited Noah and his sons from eating blood (see Noahide Law). This command continued to be observed by the Eastern Orthodox Church.
It is also found in the Bible that when the Angel of Death came around to the Hebrew house that the first-born child would not die if the angel saw lamb's blood wiped across the doorway.
At the Council of Jerusalem, the apostles prohibited certain Christians from consuming blood – this is documented in Acts 15:20 and 29. This chapter specifies a reason (especially in verses 19–21): It was to avoid offending Jews who had become Christians, because the Mosaic Law Code prohibited the practice.
Christ's blood is the means for the atonement of sins. Also, "... the blood of Jesus Christ his [God] Son cleanseth us from all sin." (1 John 1:7), "... Unto him [God] that loved us, and washed us from our sins in his own blood." (Revelation 1:5), and "And they overcame him (Satan) by the blood of the Lamb [Jesus the Christ], and by the word of their testimony ..." (Revelation 12:11).
Some Christian churches, including Roman Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and the Assyrian Church of the East teach that, when consecrated, the Eucharistic wine actually becomes the blood of Jesus for worshippers to drink. Thus in the consecrated wine, Jesus becomes spiritually and physically present. This teaching is rooted in the Last Supper, as written in the four gospels of the Bible, in which Jesus stated to his disciples that the bread that they ate was his body, and the wine was his blood. "This cup is the new testament in my blood, which is shed for you.".
Most forms of Protestantism, especially those of a Methodist or Presbyterian lineage, teach that the wine is no more than a symbol of the blood of Christ, who is spiritually but not physically present. Lutheran theology teaches that the body and blood is present together "in, with, and under" the bread and wine of the Eucharistic feast.
Judaism
In Judaism, animal blood may not be consumed even in the smallest quantity (Leviticus 3:17 and elsewhere); this is reflected in Jewish dietary laws (Kashrut). Blood is purged from meat by rinsing and soaking in water (to loosen clots), salting and then rinsing with water again several times. Eggs must also be checked and any blood spots removed before consumption. Although blood from fish is biblically kosher, it is rabbinically forbidden to consume fish blood to avoid the appearance of breaking the Biblical prohibition.
Another ritual involving blood involves the covering of the blood of fowl and game after slaughtering (Leviticus 17:13); the reason given by the Torah is: "Because the life of the animal is [in] its blood" (ibid 17:14). In relation to human beings, Kabbalah expounds on this verse that the animal soul of a person is in the blood, and that physical desires stem from it.
Likewise, the mystical reason for salting temple sacrifices and slaughtered meat is to remove the blood of animal-like passions from the person. By removing the animal's blood, the animal energies and life-force contained in the blood are removed, making the meat fit for human consumption.
Islam
Consumption of food containing blood is forbidden by Islamic dietary laws. This is derived from the statement in the Qur'an, sura Al-Ma'ida (5:3): "Forbidden to you (for food) are: dead meat, blood, the flesh of swine, and that on which has been invoked the name of other than Allah."
Blood is considered unclean, hence there are specific methods to obtain physical and ritual status of cleanliness once bleeding has occurred. Specific rules and prohibitions apply to menstruation, postnatal bleeding and irregular vaginal bleeding. When an animal has been slaughtered, the animal's neck is cut in a way to ensure that the spine is not severed, hence the brain may send commands to the heart to pump blood to it for oxygen. In this way, blood is removed from the body, and the meat is generally now safe to cook and eat. In modern times, blood transfusions are generally not considered against the rules.
Jehovah's Witnesses
Based on their interpretation of scriptures such as Acts 15:28, 29 ("Keep abstaining...from blood."), many Jehovah's Witnesses neither consume blood nor accept transfusions of whole blood or its major components: red blood cells, white blood cells, platelets (thrombocytes), and plasma. Members may personally decide whether they will accept medical procedures that involve their own blood or substances that are further fractionated from the four major components.
Vampirism
Vampires are mythical creatures that drink blood directly for sustenance, usually with a preference for human blood. Cultures all over the world have myths of this kind; for example the 'Nosferatu' legend, a human who achieves damnation and immortality by drinking the blood of others, originates from Eastern European folklore. Ticks, leeches, female mosquitoes, vampire bats, and an assortment of other natural creatures do consume the blood of other animals, but only bats are associated with vampires. This has no relation to vampire bats, which are New World creatures discovered well after the origins of the European myths.
Invertebrates
In invertebrates, a body fluid analogous to blood called hemolymph is found, the main difference being that hemolymph is not contained in a closed circulatory system. Hemolymph may function to carry oxygen, although hemoglobin is not necessarily used. Crustaceans and mollusks use hemocyanin instead of hemoglobin. In most insects, their hemolymph does not contain oxygen-carrying molecules because their bodies are small enough for their tracheal system to suffice for supplying oxygen.
Other uses
Forensic and archaeological
Blood residue can help forensic investigators identify weapons, reconstruct a criminal action, and link suspects to the crime. Through bloodstain pattern analysis, forensic information can also be gained from the spatial distribution of bloodstains.
Blood residue analysis is also a technique used in archeology.
Artistic
Blood is one of the body fluids that has been used in art. In particular, the performances of Viennese Actionist Hermann Nitsch, Istvan Kantor, Franko B, Lennie Lee, Ron Athey, Yang Zhichao, Lucas Abela and Kira O'Reilly, along with the photography of Andres Serrano, have incorporated blood as a prominent visual element. Marc Quinn has made sculptures using frozen blood, including a cast of his own head made using his own blood.
Genealogical
The term blood is used in genealogical circles to refer to one's ancestry, origins, and ethnic background as in the word bloodline. Other terms where blood is used in a family history sense are blue-blood, royal blood, mixed-blood and blood relative.
See also
Autotransfusion
Blood as food
Blood pressure
Blood substitutes ("artificial blood")
Blood test
Hematology
Hemophobia
Luminol, a visual test for blood left at crime scenes.
Oct-1-en-3-one ("Smell" of blood)
Taboo food and drink: Blood
References
External links
Blood Groups and Red Cell Antigens. Free online book at NCBI Bookshelf ID: NBK2261
Blood Photomicrographs
Hematology
Tissues (biology)
Articles containing video clips
Pharmacology
Pharmacology is the science of drugs and medications, including a substance's origin, composition, pharmacokinetics, pharmacodynamics, therapeutic use, and toxicology. More specifically, it is the study of the interactions that occur between a living organism and chemicals that affect normal or abnormal biochemical function. If substances have medicinal properties, they are considered pharmaceuticals.
The field encompasses drug composition and properties, functions, sources, synthesis and drug design, molecular and cellular mechanisms, organ/systems mechanisms, signal transduction/cellular communication, molecular diagnostics, interactions, chemical biology, therapy, and medical applications and antipathogenic capabilities. The two main areas of pharmacology are pharmacodynamics and pharmacokinetics. Pharmacodynamics studies the effects of a drug on biological systems, and pharmacokinetics studies the effects of biological systems on a drug. In broad terms, pharmacodynamics discusses the interactions of chemicals with biological receptors, and pharmacokinetics discusses the absorption, distribution, metabolism, and excretion (ADME) of chemicals by biological systems.
Pharmacology is not synonymous with pharmacy, and the two terms are frequently confused. Pharmacology, a biomedical science, deals with the research, discovery, and characterization of chemicals which show biological effects and the elucidation of cellular and organismal function in relation to these chemicals. In contrast, pharmacy, a health services profession, is concerned with the application of the principles learned from pharmacology in clinical settings, whether in a dispensing or clinical care role. The primary contrast between the two fields is thus between direct patient care, the domain of pharmacy practice, and the science-oriented research field driven by pharmacology.
Etymology
The word pharmacology is derived from the Greek word pharmakon, meaning "drug" or "poison", together with the Greek suffix -logia, meaning "study of" or "knowledge of" (cf. the etymology of pharmacy). Pharmakon is related to pharmakos, the ritualistic sacrifice or exile of a human scapegoat or victim in Ancient Greek religion.
The modern term pharmacon is used more broadly than the term drug because it includes endogenous substances, and biologically active substances which are not used as drugs. Typically it includes pharmacological agonists and antagonists, but also enzyme inhibitors (such as monoamine oxidase inhibitors).
History
The origins of clinical pharmacology date back to the Middle Ages, with pharmacognosy and Avicenna's The Canon of Medicine, Peter of Spain's Commentary on Isaac, and John of St Amand's Commentary on the Antedotary of Nicholas. Early pharmacology focused on herbalism and natural substances, mainly plant extracts. Medicines were compiled in books called pharmacopoeias. Crude drugs have been used since prehistory as preparations of substances from natural sources. However, the active ingredients of crude drugs are not purified, and the preparations are adulterated with other substances.
Traditional medicine varies between cultures and may be specific to a particular culture, such as in traditional Chinese, Mongolian, Tibetan and Korean medicine. However, much of this has since been regarded as pseudoscience. Pharmacological substances known as entheogens may have spiritual and religious uses and historical context.
In the 17th century, the English physician Nicholas Culpeper translated and used pharmacological texts. Culpeper detailed plants and the conditions they could treat. In the 18th century, much of clinical pharmacology was established by the work of William Withering. Pharmacology as a scientific discipline did not further advance until the mid-19th century amid the great biomedical resurgence of that period. Before the second half of the nineteenth century, the remarkable potency and specificity of the actions of drugs such as morphine, quinine and digitalis were explained vaguely and with reference to extraordinary chemical powers and affinities to certain organs or tissues. The first pharmacology department was set up by Rudolf Buchheim in 1847, at University of Tartu, in recognition of the need to understand how therapeutic drugs and poisons produced their effects. Subsequently, the first pharmacology department in England was set up in 1905 at University College London.
Pharmacology developed in the 19th century as a biomedical science that applied the principles of scientific experimentation to therapeutic contexts. The advancement of research techniques propelled pharmacological research and understanding. The development of the organ bath preparation, where tissue samples are connected to recording devices, such as a myograph, and physiological responses are recorded after drug application, allowed analysis of drugs' effects on tissues. The development of the ligand binding assay in 1945 allowed quantification of the binding affinity of drugs at chemical targets. Modern pharmacologists use techniques from genetics, molecular biology, biochemistry, and other advanced tools to transform information about molecular mechanisms and targets into therapies directed against disease, defects or pathogens, and create methods for preventive care, diagnostics, and ultimately personalized medicine.
Divisions
The discipline of pharmacology can be divided into many sub disciplines each with a specific focus.
Systems of the body
Pharmacology can also focus on specific systems of the body. Divisions related to bodily systems study the effects of drugs in different systems of the body. These include neuropharmacology, for the central and peripheral nervous systems, and immunopharmacology, for the immune system. Other divisions include cardiovascular, renal and endocrine pharmacology. Psychopharmacology is the study of the use of drugs that affect the psyche, mind and behavior (e.g. antidepressants) in treating mental disorders (e.g. depression). It incorporates approaches and techniques from neuropharmacology, animal behavior and behavioral neuroscience, and is interested in the behavioral and neurobiological mechanisms of action of psychoactive drugs. The related field of neuropsychopharmacology focuses on the effects of drugs at the overlap between the nervous system and the psyche.
Pharmacometabolomics, also known as pharmacometabonomics, is a field which stems from metabolomics, the quantification and analysis of metabolites produced by the body. It refers to the direct measurement of metabolites in an individual's bodily fluids, in order to predict or evaluate the metabolism of pharmaceutical compounds, and to better understand the pharmacokinetic profile of a drug. Pharmacometabolomics can be applied to measure metabolite levels following the administration of a drug, in order to monitor the effects of the drug on metabolic pathways. Pharmacomicrobiomics studies the effect of microbiome variations on drug disposition, action, and toxicity. Pharmacomicrobiomics is concerned with the interaction between drugs and the gut microbiome. Pharmacogenomics is the application of genomic technologies to drug discovery and further characterization of drugs related to an organism's entire genome. For pharmacology regarding individual genes, pharmacogenetics studies how genetic variation gives rise to differing responses to drugs. Pharmacoepigenetics studies the underlying epigenetic marking patterns that lead to variation in an individual's response to medical treatment.
Clinical practice and drug discovery
Pharmacology can be applied within clinical sciences. Clinical pharmacology is the application of pharmacological methods and principles in the study of drugs in humans. An example of this is posology, which is the study of dosage of medicines.
Pharmacology is closely related to toxicology. Both pharmacology and toxicology are scientific disciplines that focus on understanding the properties and actions of chemicals. However, pharmacology emphasizes the therapeutic effects of chemicals, usually drugs or compounds that could become drugs, whereas toxicology is the study of chemicals' adverse effects and of risk assessment.
Pharmacological knowledge is used to advise pharmacotherapy in medicine and pharmacy.
Drug discovery
Drug discovery is the field of study concerned with creating new drugs. It encompasses the subfields of drug design and development. Drug discovery starts with drug design, which is the inventive process of finding new drugs. In the most basic sense, this involves the design of molecules that are complementary in shape and charge to a given biomolecular target. After a lead compound has been identified through drug discovery, drug development involves bringing the drug to the market. Drug discovery is related to pharmacoeconomics, which is the sub-discipline of health economics that considers the value of drugs. Pharmacoeconomics evaluates the costs and benefits of drugs in order to guide optimal healthcare resource allocation. The techniques used for the discovery, formulation, manufacturing and quality control of drugs are studied by pharmaceutical engineering, a branch of engineering. Safety pharmacology specialises in detecting and investigating potential undesirable effects of drugs.
Development of medication is a vital concern to medicine, but also has strong economic and political implications. To protect the consumer and prevent abuse, many governments regulate the manufacture, sale, and administration of medication. In the United States, the main body that regulates pharmaceuticals is the Food and Drug Administration, which enforces standards set by the United States Pharmacopoeia. In the European Union, the main body that regulates pharmaceuticals is the EMA, which enforces standards set by the European Pharmacopoeia.
The metabolic stability and the reactivity of a library of candidate drug compounds have to be assessed for drug metabolism and toxicological studies. Many methods have been proposed for quantitative predictions in drug metabolism; one example of a recent computational method is SPORCalc. A slight alteration to the chemical structure of a medicinal compound could alter its medicinal properties, depending on how the alteration relates to the structure of the substrate or receptor site on which it acts: this is called the structure-activity relationship (SAR). When a useful activity has been identified, chemists will make many similar compounds called analogues, to try to maximize the desired medicinal effect(s). This can take anywhere from a few years to a decade or more, and is very expensive. One must also determine how safe the medicine is to consume, its stability in the human body and the best form for delivery to the desired organ system, such as tablet or aerosol. After extensive testing, which can take up to six years, the new medicine is ready for marketing and selling.
Because of these long timescales, and because out of every 5000 potential new medicines typically only one will ever reach the open market, this is an expensive way of doing things, often costing over 1 billion dollars. To recoup this outlay pharmaceutical companies may do a number of things:
Carefully research the demand for their potential new product before spending an outlay of company funds.
Obtain a patent on the new medicine, preventing other companies from producing that medicine for a certain period of time.
The inverse benefit law describes the relationship between a drug's therapeutic benefits and its marketing.
When designing drugs, the placebo effect must be considered to assess the drug's true therapeutic value.
Drug development uses techniques from medicinal chemistry to chemically design drugs. This overlaps with the biological approach of finding targets and physiological effects.
Wider contexts
Pharmacology can be studied in relation to wider contexts than the physiology of individuals. For example, pharmacoepidemiology concerns the variations of the effects of drugs in or between populations; it is the bridge between clinical pharmacology and epidemiology. Pharmacoenvironmentology, or environmental pharmacology, is the study of the effects of used pharmaceuticals and personal care products (PPCPs) on the environment after their elimination from the body. Because human health and ecology are intimately related, environmental pharmacology examines the environmental effects of drugs and of pharmaceuticals and personal care products.
Drugs may also have ethnocultural importance, so ethnopharmacology studies the ethnic and cultural aspects of pharmacology.
Emerging fields
Photopharmacology is an emerging approach in medicine in which drugs are activated and deactivated with light. The energy of light is used to change the shape and chemical properties of the drug, resulting in different biological activity. This is done to ultimately achieve reversible control over when and where drugs are active, to prevent side effects and the release of drugs into the environment.
Theory of pharmacology
The study of chemicals requires intimate knowledge of the biological system affected. With the knowledge of cell biology and biochemistry increasing, the field of pharmacology has also changed substantially. It has become possible, through molecular analysis of receptors, to design chemicals that act on specific cellular signaling or metabolic pathways by affecting sites directly on cell-surface receptors (which modulate and mediate cellular signaling pathways controlling cellular function).
Chemicals can have pharmacologically relevant properties and effects. Pharmacokinetics describes the effect of the body on the chemical (e.g. half-life and volume of distribution), and pharmacodynamics describes the chemical's effect on the body (desired or toxic).
Systems, receptors and ligands
Pharmacology is typically studied with respect to particular systems, for example endogenous neurotransmitter systems. The major systems studied in pharmacology can be categorised by their ligands and include acetylcholine, adrenaline, glutamate, GABA, dopamine, histamine, serotonin, cannabinoid and opioid.
Molecular targets in pharmacology include receptors, enzymes and membrane transport proteins. Enzymes can be targeted with enzyme inhibitors. Receptors are typically categorised based on structure and function. Major receptor types studied in pharmacology include G protein coupled receptors, ligand gated ion channels and receptor tyrosine kinases.
Network pharmacology is a subfield of pharmacology that combines principles from pharmacology, systems biology, and network analysis to study the complex interactions between drugs and targets (e.g., receptors or enzymes) in biological systems. The topology of a biochemical reaction network determines the shape of the drug dose-response curve as well as the type of drug-drug interactions, and can thus help in designing efficient and safe therapeutic strategies. Network pharmacology utilizes computational tools and network analysis algorithms to identify drug targets, predict drug-drug interactions, elucidate signaling pathways, and explore the polypharmacology of drugs.
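As an illustration of the network representation used in this field, the short Python sketch below builds a tiny drug-target interaction network with the NetworkX library; the drug and target names are hypothetical placeholders, not examples taken from the text above.

    import networkx as nx

    # Hypothetical drug-target interactions: an edge means "drug acts on target".
    interactions = [
        ("drugA", "receptor1"), ("drugA", "enzyme2"),
        ("drugB", "receptor1"),
        ("drugC", "enzyme2"), ("drugC", "enzyme3"),
    ]
    network = nx.Graph()
    network.add_edges_from(interactions)

    # The degree of a target node counts how many drugs act on it, and shared
    # neighbours hint at possible interactions between drugs at the same target.
    for target in ("receptor1", "enzyme2", "enzyme3"):
        print(target, "is targeted by", sorted(network.neighbors(target)))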
Pharmacodynamics
Pharmacodynamics describes the effects of a drug on the body. Pharmacodynamic theory often investigates the binding of ligands to their receptors. Ligands can act as agonists, partial agonists or antagonists at specific receptors in the body: agonists bind to receptors and produce a biological response, partial agonists produce a biological response lower than that of a full agonist, and antagonists have affinity for a receptor but do not produce a biological response.
The ability of a ligand to produce a biological response is termed efficacy. In a dose-response profile, efficacy is indicated as a percentage on the y-axis, where 100% is the maximal response.
Binding affinity is the tendency of a ligand to form a ligand-receptor complex, either through weak attractive forces (reversible binding) or through covalent bonds (irreversible binding); the response observed for a ligand therefore depends on both its binding affinity and its efficacy.
The potency of a drug is a measure of how much of it is needed to produce a given effect. The EC50 is the concentration of a drug that produces 50% of its maximal response; the lower the EC50, the higher the potency, so EC50 values can be used to compare the potencies of drugs.
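As a worked illustration of efficacy, EC50 and potency, the short Python sketch below evaluates a simple Emax (Hill-type) concentration-response model for two hypothetical drugs; the EC50 values, Hill coefficient and concentrations are invented for illustration and are not taken from the text above.

    # Minimal Emax/Hill concentration-response sketch (hypothetical values only).
    def response(conc, ec50, emax=100.0, hill=1.0):
        # Percent of the maximal response produced at concentration `conc`.
        return emax * conc**hill / (ec50**hill + conc**hill)

    # Two hypothetical drugs with equal efficacy but different potency:
    # the drug with the lower EC50 reaches a given response at a lower dose.
    for conc in (0.1, 1.0, 10.0, 100.0):            # arbitrary concentration units
        print(conc,
              round(response(conc, ec50=1.0), 1),   # more potent drug
              round(response(conc, ec50=10.0), 1))  # less potent drug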
Medication is said to have a narrow or wide therapeutic index, certain safety factor or therapeutic window. This describes the ratio of desired effect to toxic effect. A compound with a narrow therapeutic index (close to one) exerts its desired effect at a dose close to its toxic dose. A compound with a wide therapeutic index (greater than five) exerts its desired effect at a dose substantially below its toxic dose. Those with a narrow margin are more difficult to dose and administer, and may require therapeutic drug monitoring (examples are warfarin, some antiepileptics, aminoglycoside antibiotics). Most anti-cancer drugs have a narrow therapeutic margin: toxic side-effects are almost always encountered at doses used to kill tumors.
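Using hypothetical doses, the therapeutic index described above can be illustrated as the ratio of the median toxic dose (TD50) to the median effective dose (ED50); the numbers below are invented purely to show the calculation.

    # Hypothetical doses, in milligrams, chosen only to illustrate the ratio.
    ed50_mg = 5.0      # dose producing the desired effect in 50% of subjects
    td50_mg = 50.0     # dose producing toxicity in 50% of subjects
    therapeutic_index = td50_mg / ed50_mg
    print(therapeutic_index)   # 10.0, a relatively wide therapeutic index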
The effect of drugs can be described with Loewe additivity, which is one of several common reference models.
Other models include the Hill equation, Cheng-Prusoff equation and Schild regression.
Pharmacokinetics
Pharmacokinetics is the study of the bodily absorption, distribution, metabolism, and excretion of drugs.
When describing the pharmacokinetic properties of the chemical that is the active ingredient or active pharmaceutical ingredient (API), pharmacologists are often interested in L-ADME:
Liberation – How is the API disintegrated (for solid oral forms, by breaking down into smaller particles), dispersed, or dissolved from the medication?
Absorption – How is the API absorbed (through the skin, the intestine, the oral mucosa)?
Distribution – How does the API spread through the organism?
Metabolism – Is the API converted chemically inside the body, and into which substances? Are these active as well? Could they be toxic?
Excretion – How is the API excreted (through the bile, urine, breath, skin)?
Drug metabolism is assessed in pharmacokinetics and is important in drug research and prescribing.
Pharmacokinetics is the movement of the drug in the body; it is usually described as 'what the body does to the drug'. The physico-chemical properties of a drug affect the rate and extent of its absorption, the extent of its distribution, and its metabolism and elimination. A drug needs an appropriate molecular weight, polarity and similar properties in order to be absorbed. The fraction of a dose that reaches the systemic circulation is termed the bioavailability; it is commonly estimated by comparing plasma drug exposure after oral administration with that after intravenous administration, in which the first-pass effect is avoided and no drug is lost before reaching the circulation. A drug must be sufficiently lipophilic (lipid soluble) to pass through biological membranes, because biological membranes are made up of a lipid bilayer (phospholipids and related lipids). Once the drug reaches the blood circulation it is distributed throughout the body, becoming more concentrated in highly perfused organs.
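To make these pharmacokinetic ideas concrete, the sketch below simulates a one-compartment model with first-order elimination after an oral dose; the dose, bioavailability, volume of distribution and elimination rate constant are hypothetical values chosen only for illustration, not data from the text.

    import math

    # One-compartment pharmacokinetic sketch (hypothetical parameters).
    dose_mg = 500.0          # administered oral dose
    bioavailability = 0.8    # fraction of the dose reaching the circulation
    vd_litres = 40.0         # apparent volume of distribution
    ke_per_hour = 0.173      # first-order elimination rate constant

    half_life = math.log(2) / ke_per_hour   # about 4 hours with these values
    print("half-life (h):", round(half_life, 1))

    # Plasma concentration over time, ignoring the absorption phase.
    for t in range(0, 25, 4):
        conc = (bioavailability * dose_mg / vd_litres) * math.exp(-ke_per_hour * t)
        print(t, "h:", round(conc, 2), "mg/L")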
Administration, drug policy and safety
Drug policy
In the United States, the Food and Drug Administration (FDA) is responsible for creating guidelines for the approval and use of drugs. The FDA requires that all approved drugs fulfill two requirements:
The drug must be found to be effective against the disease for which it is seeking approval (where 'effective' means only that the drug performed better than placebo or competitors in at least two trials).
The drug must meet safety criteria by being subject to animal and controlled human testing.
Gaining FDA approval usually takes several years. Testing done on animals must be extensive and must include several species to help in the evaluation of both the effectiveness and toxicity of the drug. The dosage of any drug approved for use is intended to fall within a range in which the drug produces a therapeutic effect or desired outcome.
The safety and effectiveness of prescription drugs in the U.S. are regulated by the federal Prescription Drug Marketing Act of 1987.
The Medicines and Healthcare products Regulatory Agency (MHRA) has a similar role in the UK.
Medicare Part D is a prescription drug plan in the U.S.
The Prescription Drug Marketing Act (PDMA) is an act related to drug policy.
Prescription drugs are drugs regulated by legislation.
Societies and education
Societies and administration
The International Union of Basic and Clinical Pharmacology, Federation of European Pharmacological Societies and European Association for Clinical Pharmacology and Therapeutics are organisations representing standardisation and regulation of clinical and scientific pharmacology.
Systems for medical classification of drugs with pharmaceutical codes have been developed. These include the National Drug Code (NDC), administered by the Food and Drug Administration; the Drug Identification Number (DIN), administered by Health Canada under the Food and Drugs Act; the Hong Kong Drug Registration, administered by the Pharmaceutical Service of the Department of Health (Hong Kong); and the National Pharmaceutical Product Index in South Africa. Hierarchical systems have also been developed, including the Anatomical Therapeutic Chemical Classification System (ATC, or ATC/DDD), administered by the World Health Organization; the Generic Product Identifier (GPI), a hierarchical classification number published by MediSpan; and the C axis of SNOMED. Ingredients of drugs have been categorised by Unique Ingredient Identifier.
Education
The study of pharmacology overlaps with biomedical sciences and is the study of the effects of drugs on living organisms. Pharmacological research can lead to new drug discoveries, and promote a better understanding of human physiology. Students of pharmacology must have a detailed working knowledge of aspects in physiology, pathology, and chemistry. They may also require knowledge of plants as sources of pharmacologically active compounds. Modern pharmacology is interdisciplinary and involves biophysical and computational sciences, and analytical chemistry. A pharmacist needs to be well-equipped with knowledge on pharmacology for application in pharmaceutical research or pharmacy practice in hospitals or commercial organisations selling to customers. Pharmacologists, however, usually work in a laboratory undertaking research or development of new products. Pharmacological research is important in academic research (medical and non-medical), private industrial positions, science writing, scientific patents and law, consultation, biotech and pharmaceutical employment, the alcohol industry, food industry, forensics/law enforcement, public health, and environmental/ecological sciences. Pharmacology is often taught to pharmacy and medicine students as part of a Medical School curriculum.
See also
References
External links
American Society for Pharmacology and Experimental Therapeutics
British Pharmacological Society
International Conference on Harmonisation
US Pharmacopeia
International Union of Basic and Clinical Pharmacology
IUPHAR Committee on Receptor Nomenclature and Drug Classification
IUPHAR/BPS Guide to Pharmacology
Further reading
Metabolism | Metabolism (, from metabolē, "change") is the set of life-sustaining chemical reactions in organisms. The three main functions of metabolism are: the conversion of the energy in food to energy available to run cellular processes; the conversion of food to building blocks of proteins, lipids, nucleic acids, and some carbohydrates; and the elimination of metabolic wastes. These enzyme-catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. The word metabolism can also refer to the sum of all chemical reactions that occur in living organisms, including digestion and the transportation of substances into and between different cells, in which case the above described set of reactions within the cells is called intermediary (or intermediate) metabolism.
Metabolic reactions may be categorized as catabolic—the breaking down of compounds (for example, of glucose to pyruvate by cellular respiration); or anabolic—the building up (synthesis) of compounds (such as proteins, carbohydrates, lipids, and nucleic acids). Usually, catabolism releases energy, and anabolism consumes energy.
The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. Enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy and will not occur by themselves, by coupling them to spontaneous reactions that release energy. Enzymes act as catalysts—they allow a reaction to proceed more rapidly—and they also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell's environment or to signals from other cells.
The metabolic system of a particular organism determines which substances it will find nutritious and which poisonous. For example, some prokaryotes use hydrogen sulfide as a nutrient, yet this gas is poisonous to animals. The basal metabolic rate of an organism is the measure of the amount of energy consumed by all of these chemical reactions.
A striking feature of metabolism is the similarity of the basic metabolic pathways among vastly different species. For example, the set of carboxylic acids that are best known as the intermediates in the citric acid cycle are present in all known organisms, being found in species as diverse as the unicellular bacterium Escherichia coli and huge multicellular organisms like elephants. These similarities in metabolic pathways are likely due to their early appearance in evolutionary history, and their retention is likely due to their efficacy. In various diseases, such as type II diabetes, metabolic syndrome, and cancer, normal metabolism is disrupted. The metabolism of cancer cells is also different from the metabolism of normal cells, and these differences can be used to find targets for therapeutic intervention in cancer.
Key biochemicals
Most of the structures that make up animals, plants and microbes are made from four basic classes of molecules: amino acids, carbohydrates, nucleic acid and lipids (often called fats). As these molecules are vital for life, metabolic reactions either focus on making these molecules during the construction of cells and tissues, or on breaking them down and using them to obtain energy, by their digestion. These biochemicals can be joined to make polymers such as DNA and proteins, essential macromolecules of life.
Amino acids and proteins
Proteins are made of amino acids arranged in a linear chain joined by peptide bonds. Many proteins are enzymes that catalyze the chemical reactions in metabolism. Other proteins have structural or mechanical functions, such as those that form the cytoskeleton, a system of scaffolding that maintains the cell shape. Proteins are also important in cell signaling, immune responses, cell adhesion, active transport across membranes, and the cell cycle. Amino acids also contribute to cellular energy metabolism by providing a carbon source for entry into the citric acid cycle (tricarboxylic acid cycle), especially when a primary source of energy, such as glucose, is scarce, or when cells undergo metabolic stress.
Lipids
Lipids are the most diverse group of biochemicals. Their main structural uses are as part of internal and external biological membranes, such as the cell membrane. Their chemical energy can also be used. Lipids contain a long, non-polar hydrocarbon chain with a small polar region containing oxygen. Lipids are usually defined as hydrophobic or amphipathic biological molecules but will dissolve in organic solvents such as ethanol, benzene or chloroform. The fats are a large group of compounds that contain fatty acids and glycerol; a glycerol molecule attached to three fatty acids by ester linkages is called a triacylglyceride. Several variations of the basic structure exist, including backbones such as sphingosine in sphingomyelin, and hydrophilic groups such as phosphate in phospholipids. Steroids such as sterol are another major class of lipids.
Carbohydrates
Carbohydrates are aldehydes or ketones, with many hydroxyl groups attached, that can exist as straight chains or rings. Carbohydrates are the most abundant biological molecules, and fill numerous roles, such as the storage and transport of energy (starch, glycogen) and structural components (cellulose in plants, chitin in animals). The basic carbohydrate units are called monosaccharides and include galactose, fructose, and most importantly glucose. Monosaccharides can be linked together to form polysaccharides in almost limitless ways.
Nucleotides
The two nucleic acids, DNA and RNA, are polymers of nucleotides. Each nucleotide is composed of a phosphate attached to a ribose or deoxyribose sugar group which is attached to a nitrogenous base. Nucleic acids are critical for the storage and use of genetic information, and its interpretation through the processes of transcription and protein biosynthesis. This information is protected by DNA repair mechanisms and propagated through DNA replication. Many viruses have an RNA genome, such as HIV, which uses reverse transcription to create a DNA template from its viral RNA genome. RNA in ribozymes such as spliceosomes and ribosomes is similar to enzymes as it can catalyze chemical reactions. Individual nucleosides are made by attaching a nucleobase to a ribose sugar. These bases are heterocyclic rings containing nitrogen, classified as purines or pyrimidines. Nucleotides also act as coenzymes in metabolic-group-transfer reactions.
Coenzymes
Metabolism involves a vast array of chemical reactions, but most fall under a few basic types of reactions that involve the transfer of functional groups of atoms and their bonds within molecules. This common chemistry allows cells to use a small set of metabolic intermediates to carry chemical groups between different reactions. These group-transfer intermediates are called coenzymes. Each class of group-transfer reactions is carried out by a particular coenzyme, which is the substrate for a set of enzymes that produce it, and a set of enzymes that consume it. These coenzymes are therefore continuously made, consumed and then recycled.
One central coenzyme is adenosine triphosphate (ATP), the energy currency of cells. This nucleotide is used to transfer chemical energy between different chemical reactions. There is only a small amount of ATP in cells, but as it is continuously regenerated, the human body can use about its own weight in ATP per day. ATP acts as a bridge between catabolism and anabolism. Catabolism breaks down molecules, and anabolism puts them together. Catabolic reactions generate ATP, and anabolic reactions consume it. It also serves as a carrier of phosphate groups in phosphorylation reactions.
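A rough back-of-the-envelope calculation illustrates the scale of this turnover; it assumes a 70 kg person and uses the molar mass of ATP (about 507 g/mol), neither of which is stated in the text above.

    # Rough illustration of daily ATP turnover (assumed values).
    body_weight_g = 70_000.0             # assumed 70 kg person
    atp_molar_mass_g_per_mol = 507.0     # molar mass of ATP
    avogadro = 6.022e23

    moles_per_day = body_weight_g / atp_molar_mass_g_per_mol   # roughly 138 mol
    molecules_per_day = moles_per_day * avogadro                # roughly 8e25
    print(round(moles_per_day), "mol of ATP recycled per day")
    print(f"{molecules_per_day:.1e} ATP molecules recycled per day")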
A vitamin is an organic compound needed in small quantities that cannot be made in cells. In human nutrition, most vitamins function as coenzymes after modification; for example, all water-soluble vitamins are phosphorylated or are coupled to nucleotides when they are used in cells. Nicotinamide adenine dinucleotide (NAD+), a derivative of vitamin B3 (niacin), is an important coenzyme that acts as a hydrogen acceptor. Hundreds of separate types of dehydrogenases remove electrons from their substrates and reduce NAD+ into NADH. This reduced form of the coenzyme is then a substrate for any of the reductases in the cell that need to transfer hydrogen atoms to their substrates. Nicotinamide adenine dinucleotide exists in two related forms in the cell, NADH and NADPH. The NAD+/NADH form is more important in catabolic reactions, while NADP+/NADPH is used in anabolic reactions.
Minerals and cofactors
Inorganic elements play critical roles in metabolism; some are abundant (e.g. sodium and potassium) while others function at minute concentrations. About 99% of a human's body weight is made up of the elements carbon, nitrogen, calcium, sodium, chlorine, potassium, hydrogen, phosphorus, oxygen and sulfur. Organic compounds (proteins, lipids and carbohydrates) contain the majority of the carbon and nitrogen; most of the oxygen and hydrogen is present as water.
The abundant inorganic elements act as electrolytes. The most important ions are sodium, potassium, calcium, magnesium, chloride, phosphate and the organic ion bicarbonate. The maintenance of precise ion gradients across cell membranes maintains osmotic pressure and pH. Ions are also critical for nerve and muscle function, as action potentials in these tissues are produced by the exchange of electrolytes between the extracellular fluid and the cell's fluid, the cytosol. Electrolytes enter and leave cells through proteins in the cell membrane called ion channels. For example, muscle contraction depends upon the movement of calcium, sodium and potassium through ion channels in the cell membrane and T-tubules.
Transition metals are usually present as trace elements in organisms, with zinc and iron being most abundant of those. Metal cofactors are bound tightly to specific sites in proteins; although enzyme cofactors can be modified during catalysis, they always return to their original state by the end of the reaction catalyzed. Metal micronutrients are taken up into organisms by specific transporters and bind to storage proteins such as ferritin or metallothionein when not in use.
Catabolism
Catabolism is the set of metabolic processes that break down large molecules. These include breaking down and oxidizing food molecules. The purpose of the catabolic reactions is to provide the energy and components needed by anabolic reactions which build molecules. The exact nature of these catabolic reactions differs from organism to organism, and organisms can be classified based on their sources of energy, hydrogen, and carbon (their primary nutritional groups). Organic molecules are used as a source of hydrogen atoms or electrons by organotrophs, while lithotrophs use inorganic substrates. Whereas phototrophs convert sunlight to chemical energy, chemotrophs depend on redox reactions that involve the transfer of electrons from reduced donor molecules such as organic molecules, hydrogen, hydrogen sulfide or ferrous ions to oxygen, nitrate or sulfate. In animals, these reactions involve complex organic molecules that are broken down to simpler molecules, such as carbon dioxide and water. Photosynthetic organisms, such as plants and cyanobacteria, use similar electron-transfer reactions to store energy absorbed from sunlight.
The most common set of catabolic reactions in animals can be separated into three main stages. In the first stage, large organic molecules, such as proteins, polysaccharides or lipids, are digested into their smaller components outside cells. Next, these smaller molecules are taken up by cells and converted to smaller molecules, usually acetyl coenzyme A (acetyl-CoA), which releases some energy. Finally, the acetyl group on acetyl-CoA is oxidized to water and carbon dioxide in the citric acid cycle and electron transport chain, releasing more energy while reducing the coenzyme nicotinamide adenine dinucleotide (NAD+) into NADH.
Digestion
Macromolecules cannot be directly processed by cells. Macromolecules must be broken into smaller units before they can be used in cell metabolism. Different classes of enzymes are used to digest these polymers. These digestive enzymes include proteases that digest proteins into amino acids, as well as glycoside hydrolases that digest polysaccharides into simple sugars known as monosaccharides.
Microbes simply secrete digestive enzymes into their surroundings, while animals only secrete these enzymes from specialized cells in their guts, including the stomach and pancreas, and in salivary glands. The amino acids or sugars released by these extracellular enzymes are then pumped into cells by active transport proteins.
Energy from organic compounds
Carbohydrate catabolism is the breakdown of carbohydrates into smaller units. Carbohydrates are usually taken into cells after they have been digested into monosaccharides such as glucose and fructose. Once inside, the major route of breakdown is glycolysis, in which glucose is converted into pyruvate. This process generates the energy-conveying molecule NADH from NAD+, and generates ATP from ADP for use in powering many processes within the cell. Pyruvate is an intermediate in several metabolic pathways, but the majority is converted to acetyl-CoA and fed into the citric acid cycle, which enables more ATP production by means of oxidative phosphorylation. This oxidation consumes molecular oxygen and releases water and the waste product carbon dioxide. When oxygen is lacking, or when pyruvate is temporarily produced faster than it can be consumed by the citric acid cycle (as in intense muscular exertion), pyruvate is converted to lactate by the enzyme lactate dehydrogenase, a process that also oxidizes NADH back to NAD+ for re-use in further glycolysis, allowing energy production to continue. The lactate is later converted back to pyruvate for ATP production where energy is needed, or back to glucose in the Cori cycle. An alternative route for glucose breakdown is the pentose phosphate pathway, which produces less energy but supports anabolism (biomolecule synthesis). This pathway reduces the coenzyme NADP+ to NADPH and produces pentose compounds such as ribose 5-phosphate for synthesis of many biomolecules such as nucleotides and aromatic amino acids.
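For reference, the overall stoichiometry of the glycolytic step described here (a standard textbook summary, not stated explicitly above) can be written as:

    Glucose + 2 NAD+ + 2 ADP + 2 Pi  →  2 pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O

The two NADH correspond to the NAD+ reduction mentioned above, and the two ATP are the net gain after subtracting the ATP invested in the early steps of the pathway.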
Fats are catabolized by hydrolysis to free fatty acids and glycerol. The glycerol enters glycolysis and the fatty acids are broken down by beta oxidation to release acetyl-CoA, which then is fed into the citric acid cycle. Fatty acids release more energy upon oxidation than carbohydrates. Steroids are also broken down by some bacteria in a process similar to beta oxidation, and this breakdown process involves the release of significant amounts of acetyl-CoA, propionyl-CoA, and pyruvate, which can all be used by the cell for energy. M. tuberculosis can also grow on the lipid cholesterol as a sole source of carbon, and genes involved in the cholesterol-use pathway(s) have been validated as important during various stages of the infection lifecycle of M. tuberculosis.
Amino acids are either used to synthesize proteins and other biomolecules, or oxidized to urea and carbon dioxide to produce energy. The oxidation pathway starts with the removal of the amino group by a transaminase. The amino group is fed into the urea cycle, leaving a deaminated carbon skeleton in the form of a keto acid. Several of these keto acids are intermediates in the citric acid cycle, for example α-ketoglutarate formed by deamination of glutamate. The glucogenic amino acids can also be converted into glucose, through gluconeogenesis.
Energy transformations
Oxidative phosphorylation
In oxidative phosphorylation, the electrons removed from organic molecules in areas such as the citric acid cycle are transferred to oxygen and the energy released is used to make ATP. This is done in eukaryotes by a series of proteins in the membranes of mitochondria called the electron transport chain. In prokaryotes, these proteins are found in the cell's inner membrane. These proteins use the energy from reduced molecules like NADH to pump protons across a membrane.
Pumping protons out of the mitochondria creates a proton concentration difference across the membrane and generates an electrochemical gradient. This force drives protons back into the mitochondrion through the base of an enzyme called ATP synthase. The flow of protons makes the stalk subunit rotate, causing the active site of the synthase domain to change shape and phosphorylate adenosine diphosphate—turning it into ATP.
Energy from inorganic compounds
Chemolithotrophy is a type of metabolism found in prokaryotes where energy is obtained from the oxidation of inorganic compounds. These organisms can use hydrogen, reduced sulfur compounds (such as sulfide, hydrogen sulfide and thiosulfate), ferrous iron (Fe(II)) or ammonia as sources of reducing power and they gain energy from the oxidation of these compounds. These microbial processes are important in global biogeochemical cycles such as acetogenesis, nitrification and denitrification and are critical for soil fertility.
Energy from light
The energy in sunlight is captured by plants, cyanobacteria, purple bacteria, green sulfur bacteria and some protists. This process is often coupled to the conversion of carbon dioxide into organic compounds, as part of photosynthesis, which is discussed below. The energy capture and carbon fixation systems can, however, operate separately in prokaryotes, as purple bacteria and green sulfur bacteria can use sunlight as a source of energy, while switching between carbon fixation and the fermentation of organic compounds.
In many organisms, the capture of solar energy is similar in principle to oxidative phosphorylation, as it involves the storage of energy as a proton concentration gradient. This proton motive force then drives ATP synthesis. The electrons needed to drive this electron transport chain come from light-gathering proteins called photosynthetic reaction centres. Reaction centers are classified into two types depending on the nature of photosynthetic pigment present, with most photosynthetic bacteria only having one type, while plants and cyanobacteria have two.
In plants, algae, and cyanobacteria, photosystem II uses light energy to remove electrons from water, releasing oxygen as a waste product. The electrons then flow to the cytochrome b6f complex, which uses their energy to pump protons across the thylakoid membrane in the chloroplast. These protons move back through the membrane as they drive the ATP synthase, as before. The electrons then flow through photosystem I and can then be used to reduce the coenzyme NADP+.
Anabolism
Anabolism is the set of constructive metabolic processes where the energy released by catabolism is used to synthesize complex molecules. In general, the complex molecules that make up cellular structures are constructed step-by-step from smaller and simpler precursors. Anabolism involves three basic stages. First, the production of precursors such as amino acids, monosaccharides, isoprenoids and nucleotides, secondly, their activation into reactive forms using energy from ATP, and thirdly, the assembly of these precursors into complex molecules such as proteins, polysaccharides, lipids and nucleic acids.
Anabolism in organisms can be different according to the source of constructed molecules in their cells. Autotrophs such as plants can construct the complex organic molecules in their cells such as polysaccharides and proteins from simple molecules like carbon dioxide and water. Heterotrophs, on the other hand, require a source of more complex substances, such as monosaccharides and amino acids, to produce these complex molecules. Organisms can be further classified by ultimate source of their energy: photoautotrophs and photoheterotrophs obtain energy from light, whereas chemoautotrophs and chemoheterotrophs obtain energy from oxidation reactions.
Carbon fixation
Photosynthesis is the synthesis of carbohydrates from sunlight and carbon dioxide (CO2). In plants, cyanobacteria and algae, oxygenic photosynthesis splits water, with oxygen produced as a waste product. This process uses the ATP and NADPH produced by the photosynthetic reaction centres, as described above, to convert CO2 into glycerate 3-phosphate, which can then be converted into glucose. This carbon-fixation reaction is carried out by the enzyme RuBisCO as part of the Calvin–Benson cycle. Three types of photosynthesis occur in plants, C3 carbon fixation, C4 carbon fixation and CAM photosynthesis. These differ by the route that carbon dioxide takes to the Calvin cycle, with C3 plants fixing CO2 directly, while C4 and CAM photosynthesis incorporate the CO2 into other compounds first, as adaptations to deal with intense sunlight and dry conditions.
In photosynthetic prokaryotes the mechanisms of carbon fixation are more diverse. Here, carbon dioxide can be fixed by the Calvin–Benson cycle, a reversed citric acid cycle, or the carboxylation of acetyl-CoA. Prokaryotic chemoautotrophs also fix CO2 through the Calvin–Benson cycle, but use energy from inorganic compounds to drive the reaction.
Carbohydrates and glycans
In carbohydrate anabolism, simple organic acids can be converted into monosaccharides such as glucose and then used to assemble polysaccharides such as starch. The generation of glucose from compounds like pyruvate, lactate, glycerol, glycerate 3-phosphate and amino acids is called gluconeogenesis. Gluconeogenesis converts pyruvate to glucose-6-phosphate through a series of intermediates, many of which are shared with glycolysis. However, this pathway is not simply glycolysis run in reverse, as several steps are catalyzed by non-glycolytic enzymes. This is important as it allows the formation and breakdown of glucose to be regulated separately, and prevents both pathways from running simultaneously in a futile cycle.
Although fat is a common way of storing energy, in vertebrates such as humans the fatty acids in these stores cannot be converted to glucose through gluconeogenesis as these organisms cannot convert acetyl-CoA into pyruvate; plants do, but animals do not, have the necessary enzymatic machinery. As a result, after long-term starvation, vertebrates need to produce ketone bodies from fatty acids to replace glucose in tissues such as the brain that cannot metabolize fatty acids. In other organisms such as plants and bacteria, this metabolic problem is solved using the glyoxylate cycle, which bypasses the decarboxylation step in the citric acid cycle and allows the transformation of acetyl-CoA to oxaloacetate, where it can be used for the production of glucose. Besides fat, glucose is stored in most tissues as glycogen through glycogenesis, providing an energy reserve within the tissue that is also used, particularly by the liver, to maintain blood glucose levels.
Polysaccharides and glycans are made by the sequential addition of monosaccharides by glycosyltransferase from a reactive sugar-phosphate donor such as uridine diphosphate glucose (UDP-Glc) to an acceptor hydroxyl group on the growing polysaccharide. As any of the hydroxyl groups on the ring of the substrate can be acceptors, the polysaccharides produced can have straight or branched structures. The polysaccharides produced can have structural or metabolic functions themselves, or be transferred to lipids and proteins by the enzymes oligosaccharyltransferases.
Fatty acids, isoprenoids and sterol
Fatty acids are made by fatty acid synthases that polymerize and then reduce acetyl-CoA units. The acyl chains in the fatty acids are extended by a cycle of reactions that add the acyl group, reduce it to an alcohol, dehydrate it to an alkene group and then reduce it again to an alkane group. The enzymes of fatty acid biosynthesis are divided into two groups: in animals and fungi, all these fatty acid synthase reactions are carried out by a single multifunctional type I protein, while in plant plastids and bacteria separate type II enzymes perform each step in the pathway.
Terpenes and isoprenoids are a large class of lipids that include the carotenoids and form the largest class of plant natural products. These compounds are made by the assembly and modification of isoprene units donated from the reactive precursors isopentenyl pyrophosphate and dimethylallyl pyrophosphate. These precursors can be made in different ways. In animals and archaea, the mevalonate pathway produces these compounds from acetyl-CoA, while in plants and bacteria the non-mevalonate pathway uses pyruvate and glyceraldehyde 3-phosphate as substrates. One important reaction that uses these activated isoprene donors is sterol biosynthesis. Here, the isoprene units are joined to make squalene and then folded up and formed into a set of rings to make lanosterol. Lanosterol can then be converted into other sterols such as cholesterol and ergosterol.
Proteins
Organisms vary in their ability to synthesize the 20 common amino acids. Most bacteria and plants can synthesize all twenty, but mammals can only synthesize eleven nonessential amino acids, so nine essential amino acids must be obtained from food. Some simple parasites, such as the bacterium Mycoplasma pneumoniae, lack all amino acid synthesis and take their amino acids directly from their hosts. All amino acids are synthesized from intermediates in glycolysis, the citric acid cycle, or the pentose phosphate pathway. Nitrogen is provided by glutamate and glutamine. Nonessential amino acid synthesis depends on the formation of the appropriate alpha-keto acid, which is then transaminated to form an amino acid.
Amino acids are made into proteins by being joined in a chain of peptide bonds. Each different protein has a unique sequence of amino acid residues: this is its primary structure. Just as the letters of the alphabet can be combined to form an almost endless variety of words, amino acids can be linked in varying sequences to form a huge variety of proteins. Proteins are made from amino acids that have been activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA precursor is produced in an ATP-dependent reaction carried out by an aminoacyl tRNA synthetase. This aminoacyl-tRNA is then a substrate for the ribosome, which joins the amino acid onto the elongating protein chain, using the sequence information in a messenger RNA.
Nucleotide synthesis and salvage
Nucleotides are made from amino acids, carbon dioxide and formic acid in pathways that require large amounts of metabolic energy. Consequently, most organisms have efficient systems to salvage preformed nucleotides. Purines are synthesized as nucleosides (bases attached to ribose). Both adenine and guanine are made from the precursor nucleoside inosine monophosphate, which is synthesized using atoms from the amino acids glycine, glutamine, and aspartic acid, as well as formate transferred from the coenzyme tetrahydrofolate. Pyrimidines, on the other hand, are synthesized from the base orotate, which is formed from glutamine and aspartate.
Xenobiotics and redox metabolism
All organisms are constantly exposed to compounds that they cannot use as foods and that would be harmful if they accumulated in cells, as they have no metabolic function. These potentially damaging compounds are called xenobiotics. Xenobiotics such as synthetic drugs, natural poisons and antibiotics are detoxified by a set of xenobiotic-metabolizing enzymes. In humans, these include cytochrome P450 oxidases, UDP-glucuronosyltransferases, and glutathione S-transferases. This system of enzymes acts in three stages to firstly oxidize the xenobiotic (phase I) and then conjugate water-soluble groups onto the molecule (phase II). The modified water-soluble xenobiotic can then be pumped out of cells and in multicellular organisms may be further metabolized before being excreted (phase III). In ecology, these reactions are particularly important in microbial biodegradation of pollutants and the bioremediation of contaminated land and oil spills. Many of these microbial reactions are shared with multicellular organisms, but due to the incredible diversity of types of microbes these organisms are able to deal with a far wider range of xenobiotics than multicellular organisms, and can degrade even persistent organic pollutants such as organochloride compounds.
A related problem for aerobic organisms is oxidative stress. Here, processes including oxidative phosphorylation and the formation of disulfide bonds during protein folding produce reactive oxygen species such as hydrogen peroxide. These damaging oxidants are removed by antioxidant metabolites such as glutathione and enzymes such as catalases and peroxidases.
Thermodynamics of living organisms
Living organisms must obey the laws of thermodynamics, which describe the transfer of heat and work. The second law of thermodynamics states that in any isolated system, the amount of entropy (disorder) cannot decrease. Although living organisms' amazing complexity appears to contradict this law, life is possible as all organisms are open systems that exchange matter and energy with their surroundings. Living systems are not in equilibrium, but instead are dissipative systems that maintain their state of high complexity by causing a larger increase in the entropy of their environments. The metabolism of a cell achieves this by coupling the spontaneous processes of catabolism to the non-spontaneous processes of anabolism. In thermodynamic terms, metabolism maintains order by creating disorder.
Regulation and control
As the environments of most organisms are constantly changing, the reactions of metabolism must be finely regulated to maintain a constant set of conditions within cells, a condition called homeostasis. Metabolic regulation also allows organisms to respond to signals and interact actively with their environments. Two closely linked concepts are important for understanding how metabolic pathways are controlled. Firstly, the regulation of an enzyme in a pathway is how its activity is increased and decreased in response to signals. Secondly, the control exerted by this enzyme is the effect that these changes in its activity have on the overall rate of the pathway (the flux through the pathway). For example, an enzyme may show large changes in activity (i.e. it is highly regulated) but if these changes have little effect on the flux of a metabolic pathway, then this enzyme is not involved in the control of the pathway.
There are multiple levels of metabolic regulation. In intrinsic regulation, the metabolic pathway self-regulates to respond to changes in the levels of substrates or products; for example, a decrease in the amount of product can increase the flux through the pathway to compensate. This type of regulation often involves allosteric regulation of the activities of multiple enzymes in the pathway. Extrinsic control involves a cell in a multicellular organism changing its metabolism in response to signals from other cells. These signals are usually in the form of water-soluble messengers such as hormones and growth factors and are detected by specific receptors on the cell surface. These signals are then transmitted inside the cell by second messenger systems that often involve the phosphorylation of proteins.
A very well understood example of extrinsic control is the regulation of glucose metabolism by the hormone insulin. Insulin is produced in response to rises in blood glucose levels. Binding of the hormone to insulin receptors on cells then activates a cascade of protein kinases that cause the cells to take up glucose and convert it into storage molecules such as fatty acids and glycogen. The metabolism of glycogen is controlled by activity of phosphorylase, the enzyme that breaks down glycogen, and glycogen synthase, the enzyme that makes it. These enzymes are regulated in a reciprocal fashion, with phosphorylation inhibiting glycogen synthase, but activating phosphorylase. Insulin causes glycogen synthesis by activating protein phosphatases and producing a decrease in the phosphorylation of these enzymes.
Evolution
The central pathways of metabolism described above, such as glycolysis and the citric acid cycle, are present in all three domains of living things and were present in the last universal common ancestor. This universal ancestral cell was prokaryotic and probably a methanogen that had extensive amino acid, nucleotide, carbohydrate and lipid metabolism. The retention of these ancient pathways during later evolution may be the result of these reactions having been an optimal solution to their particular metabolic problems, with pathways such as glycolysis and the citric acid cycle producing their end products highly efficiently and in a minimal number of steps. The first pathways of enzyme-based metabolism may have been parts of purine nucleotide metabolism, while previous metabolic pathways were a part of the ancient RNA world.
Many models have been proposed to describe the mechanisms by which novel metabolic pathways evolve. These include the sequential addition of novel enzymes to a short ancestral pathway, the duplication and then divergence of entire pathways, as well as the recruitment of pre-existing enzymes and their assembly into a novel reaction pathway. The relative importance of these mechanisms is unclear, but genomic studies have shown that enzymes in a pathway are likely to have a shared ancestry, suggesting that many pathways have evolved in a step-by-step fashion with novel functions created from pre-existing steps in the pathway. An alternative model comes from studies that trace the evolution of protein structures in metabolic networks; these suggest that enzymes are pervasively recruited, with existing enzymes borrowed to perform similar functions in different metabolic pathways (as is evident in the MANET database). These recruitment processes result in an evolutionary enzymatic mosaic. A third possibility is that some parts of metabolism might exist as "modules" that can be reused in different pathways and perform similar functions on different molecules.
As well as the evolution of new metabolic pathways, evolution can also cause the loss of metabolic functions. For example, in some parasites metabolic processes that are not essential for survival are lost and preformed amino acids, nucleotides and carbohydrates may instead be scavenged from the host. Similar reduced metabolic capabilities are seen in endosymbiotic organisms.
Investigation and manipulation
Classically, metabolism is studied by a reductionist approach that focuses on a single metabolic pathway. Particularly valuable is the use of radioactive tracers at the whole-organism, tissue and cellular levels, which define the paths from precursors to final products by identifying radioactively labelled intermediates and products. The enzymes that catalyze these chemical reactions can then be purified and their kinetics and responses to inhibitors investigated. A parallel approach is to identify the small molecules in a cell or tissue; the complete set of these molecules is called the metabolome. Overall, these studies give a good view of the structure and function of simple metabolic pathways, but are inadequate when applied to more complex systems such as the metabolism of a complete cell.
An idea of the complexity of the metabolic networks in cells that contain thousands of different enzymes is given by a figure showing the interactions between just 43 proteins and 40 metabolites: the sequences of genomes provide lists containing anything up to 26,500 genes. However, it is now possible to use this genomic data to reconstruct complete networks of biochemical reactions and produce more holistic mathematical models that may explain and predict their behavior. These models are especially powerful when used to integrate the pathway and metabolite data obtained through classical methods with data on gene expression from proteomic and DNA microarray studies. Using these techniques, a model of human metabolism has now been produced, which will guide future drug discovery and biochemical research. These models are now used in network analysis, to classify human diseases into groups that share common proteins or metabolites.
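One widely used way to turn such reconstructed networks into predictive models is constraint-based analysis, for example flux balance analysis, in which the stoichiometry of the network is combined with a steady-state assumption, bounds on the reaction fluxes, and a linear objective. The sketch below is a minimal illustration of that idea in Python; the toy three-reaction network, the bounds and the objective are assumptions made purely for demonstration and are not drawn from any published reconstruction.

```python
# Minimal flux-balance-analysis sketch for a toy three-reaction network.
# The stoichiometric matrix S, the flux bounds and the objective are
# illustrative assumptions for demonstration only.
import numpy as np
from scipy.optimize import linprog

# Rows = metabolites (A, B); columns = reactions:
#   R1: -> A,   R2: A -> B,   R3: B -> (export / biomass)
S = np.array([
    [1, -1,  0],   # metabolite A
    [0,  1, -1],   # metabolite B
])

bounds = [(0, 10)] * S.shape[1]          # 0 <= v_i <= 10 for every flux
c = np.array([0, 0, -1])                 # maximise v3 (linprog minimises, so negate)

# Steady state: S @ v = 0
result = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", result.x)       # expected: [10. 10. 10.]
```

Genome-scale models follow the same pattern, only with thousands of reactions and one balance row per metabolite.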
Bacterial metabolic networks are a striking example of bow-tie organization, an architecture able to input a wide range of nutrients and produce a large variety of products and complex macromolecules using relatively few intermediate common currencies.
A major technological application of this information is metabolic engineering. Here, organisms such as yeast, plants or bacteria are genetically modified to make them more useful in biotechnology and aid the production of drugs such as antibiotics or industrial chemicals such as 1,3-propanediol and shikimic acid. These genetic modifications usually aim to reduce the amount of energy used to produce the product, increase yields and reduce the production of wastes.
History
The term metabolism is derived from the Ancient Greek word μεταβολή ("metabole"), meaning "a change", which in turn derives from μεταβάλλειν ("metaballein"), "to change".
Greek philosophy
Aristotle's The Parts of Animals sets out enough details of his views on metabolism for an open flow model to be made. He believed that at each stage of the process, materials from food were transformed, with heat being released as the classical element of fire, and residual materials being excreted as urine, bile, or faeces.
Ibn al-Nafis described metabolism in his 1260 AD work titled Al-Risalah al-Kamiliyyah fil Siera al-Nabawiyyah (The Treatise of Kamil on the Prophet's Biography) which included the following phrase "Both the body and its parts are in a continuous state of dissolution and nourishment, so they are inevitably undergoing permanent change."
Application of the scientific method and modern metabolic theories
The history of the scientific study of metabolism spans several centuries and has moved from examining whole animals in early studies, to examining individual metabolic reactions in modern biochemistry. The first controlled experiments in human metabolism were published by Santorio Santorio in 1614 in his book Ars de statica medicina. He described how he weighed himself before and after eating, sleep, working, sex, fasting, drinking, and excreting. He found that most of the food he took in was lost through what he called "insensible perspiration".
In these early studies, the mechanisms of these metabolic processes had not been identified and a vital force was thought to animate living tissue. In the 19th century, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that fermentation was catalyzed by substances within the yeast cells he called "ferments". He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells." This discovery, along with the publication by Friedrich Wöhler in 1828 of a paper on the chemical synthesis of urea, notable as the first organic compound prepared from wholly inorganic precursors, proved that the organic compounds and chemical reactions found in cells were no different in principle from any other part of chemistry.
It was the discovery of enzymes at the beginning of the 20th century by Eduard Buchner that separated the study of the chemical reactions of metabolism from the biological study of cells, and marked the beginnings of biochemistry. The mass of biochemical knowledge grew rapidly throughout the early 20th century. One of the most prolific of these modern biochemists was Hans Krebs who made huge contributions to the study of metabolism. He discovered the urea cycle and later, working with Hans Kornberg, the citric acid cycle and the glyoxylate cycle.
See also
, a "metabolism first" theory of the origin of life
Microphysiometry
Oncometabolism
References
Further reading
Introductory
Advanced
External links
General information
The Biochemistry of Metabolism (archived 8 March 2005)
Sparknotes SAT biochemistry Overview of biochemistry. School level.
MIT Biology Hypertextbook Undergraduate-level guide to molecular biology.
Human metabolism
Topics in Medical Biochemistry Guide to human metabolic pathways. School level.
THE Medical Biochemistry Page Comprehensive resource on human metabolism.
Databases
Flow Chart of Metabolic Pathways at ExPASy
IUBMB-Nicholson Metabolic Pathways Chart
SuperCYP: Database for Drug-Cytochrome-Metabolism
Metabolic pathways
Metabolism reference Pathway
Underwater diving physiology
Postorgasmic illness syndrome
Postorgasmic illness syndrome (POIS) is a syndrome in which human males have chronic physical and cognitive symptoms following ejaculation. The symptoms usually onset within seconds, minutes, or hours, and last for up to a week. The cause and prevalence are unknown; it is considered a rare disease.
Signs and symptoms
The distinguishing characteristics of POIS are:
the rapid onset of symptoms after ejaculation;
the presence of an overwhelming systemic reaction.
POIS symptoms, which are called a "POIS attack", can include some combination of the following: cognitive dysfunction, aphasia, severe muscle pain throughout the body, severe fatigue, weakness, and flu-like or allergy-like symptoms, such as sneezing, itchy eyes, and nasal irritation. Additional symptoms include headache, dizziness, lightheadedness, extreme hunger, sensory and motor problems, intense discomfort, irritability, anxiety, gastrointestinal disturbances, craving for relief, susceptibility to nervous system stresses, depressed mood, and difficulty communicating, remembering words, reading and retaining information, concentrating, and socializing. Affected individuals may also experience intense warmth or cold. An online anonymous self-report study found that 80% of respondents always experienced the symptom cluster involving fatigue, insomnia, irritation, and concentration difficulties.
The symptoms usually begin within 30 minutes of ejaculation, and can last for several days, sometimes up to a week. In some cases, symptoms may be delayed by 2 to 3 days or may last up to 2 weeks.
In some men, the onset of POIS is in puberty, while in others, the onset is later in life. POIS that is manifest from the first ejaculations in adolescence is called primary type; POIS that starts later in life is called secondary type.
Many individuals with POIS report lifelong premature ejaculation, with intravaginal ejaculation latency time (IELT) of less than one minute.
The seven clusters of symptoms of criterion 1 are:
General: Extreme fatigue, exhaustion, palpitations, problems finding words, incoherent speech, dysarthria, concentration difficulties, quickly irritated, cannot stand noise, photophobia, depressed mood
Flu-like: Feverish, extreme warmth, perspiration, shivery, ill with flu, feeling sick, feeling cold
Head: Headache, foggy feeling in the head, heavy feeling in the head
Eyes: Burning, red injected eyes, blurred vision, watery, irritating, itching eyes, painful eyes
Nose: Congested nose, watery/runny nose, sneezing
Throat: Dirty taste in mouth, dry mouth, sore throat, tickling cough, hoarse voice
Muscle: Muscle tension behind neck, muscle weakness, pain in muscles, heavy legs, stiff muscles
Synonyms and related conditions
POIS has been called by a number of other names, including "postejaculatory syndrome", "postorgasm illness syndrome", "post ejaculation sickness", and "post orgasmic sick syndrome".
Dhat syndrome is a condition, first described and named by N. N. Wig in India in 1960, with symptoms similar to POIS. Dhat syndrome is thought to be a culture-bound psychiatric condition and is treated with cognitive behavioral therapy along with anti-anxiety and antidepressant drugs.
Post-coital tristesse (PCT) is a feeling of melancholy and anxiety after sexual intercourse that lasts anywhere from five minutes to two hours. PCT, which affects both men and women, occurs only after sexual intercourse, does not require an orgasm to occur, and its effects are primarily emotional rather than physiological. By contrast, POIS affects only men and consists primarily of physiological symptoms that are triggered by ejaculation and that can last, in some people, for up to a week. While PCT and POIS are distinct conditions, some doctors speculate that they could be related.
An array of more subtle and lingering symptoms after orgasm, which do not constitute POIS, may contribute to habituation between mates. They may show up as restlessness, irritability, increased sexual frustration, apathy, sluggishness, neediness, dissatisfaction with a mate, or weepiness over the days or weeks after intense sexual stimulation. Such phenomena may be part of human mating physiology itself.
Sexual headache is a distinct condition characterized by headaches that usually begin before or during orgasm.
Mechanism
The cause of POIS is unknown. Some doctors hypothesize that POIS is caused by an auto-immune reaction. Other doctors suspect a hormone imbalance as the cause. Different causes have also been proposed. None of the proposed causes can fully explain the disease.
Allergy hypothesis
According to one hypothesis, "POIS is caused by Type-I and Type-IV allergy to the males' own semen". This hypothesis was questioned by another study, which stated that "IgE-mediated semen allergy in men may not be the potential mechanism of POIS".
Alternatively, POIS could be caused by an auto-immune reaction not to semen, but to a different substance released during ejaculation, such as cytokines.
Hormone hypothesis
According to another hypothesis, POIS is caused by a hormone imbalance, such as low progesterone, low cortisol, low testosterone, elevated prolactin, hypothyroidism, or low DHEA.
POIS could be caused by a defect in neurosteroid precursor synthesis. If so, the same treatment may not be effective for all individuals. Different individuals could have different missing precursors leading to a deficiency of the same neurosteroid, causing similar symptoms.
Withdrawal hypothesis
The majority of POIS symptoms, such as fatigue, muscle pains, sweating, mood disturbances, irritability, and poor concentration, are also caused by withdrawal from different drug classes and natural reinforcers. It is unknown whether there is a relationship between hypersexuality, pornography addiction, compulsive sexual behavior and POIS. Some evidence indicates that POIS patients have a history of excessive masturbation, suggesting that POIS could be a consequence of sex addiction. There is anecdotal evidence on pornography-addiction internet forums that many men experience POIS-like symptoms after ejaculation.
Other possibilities
POIS could be caused by hyperglycemia or by chemical imbalances in the brain.
Sexual activity for the first time may set the stage for an associated asthma attack or may aggravate pre-existing asthma. Intense emotional stimuli during sexual intercourse can lead to autonomic imbalance with parasympathetic over-reactivity, releasing mast cell mediators that can provoke postcoital asthma and / or rhinitis in these patients.
It is also possible that the causes of POIS are different in different individuals. POIS could represent "a spectrum of syndromes of differing" causes.
None of the proposed causes for POIS can fully explain the connection between POIS and lifelong premature ejaculation.
Diagnosis
There are no generally agreed-upon diagnostic criteria for POIS. One group has developed five preliminary criteria for diagnosing POIS. These are:
one or more of the following symptoms: sensation of a flu-like state, extreme fatigue or exhaustion, weakness of musculature, experiences of feverishness or perspiration, mood disturbances and / or irritability, memory difficulties, concentration problems, incoherent speech, congestion of nose or watery nose, itching eyes;
all symptoms occur immediately (e.g., seconds), soon (e.g., minutes), or within a few hours after ejaculation that is initiated by coitus, and / or masturbation, and / or spontaneously (e.g., during sleep);
symptoms occur always or nearly always, e.g., in more than 90% of ejaculation events;
most of these symptoms last for about 2–7 days; and
disappear spontaneously.
POIS is prone to being erroneously ascribed to psychological factors such as hypochondriasis or somatic symptom disorder.
An online survey study suggested that only a small number of self-reported POIS cases fulfill all five criteria. This study proposed changing Criterion 3 to: "In at least one ejaculatory setting (sex, masturbation, or nocturnal emission), symptoms occur after all or almost all ejaculations."
Management
There is no standard method of treating or managing POIS. Patients need to be thoroughly examined in an attempt to find the causes of their POIS symptoms, which are often difficult to determine, and which vary across patients. Once a cause is hypothesized, an appropriate treatment can be attempted. At times, more than one treatment is attempted, until one that works is found.
Affected individuals typically avoid sexual activity, especially ejaculation, or schedule it for times when they can rest and recover for several days afterwards. In case post-coital tristesse (PCT) is suspected, patients could be treated with selective serotonin reuptake inhibitors.
In one patient, the POIS symptoms were so severe that he decided to undergo removal of the testicles, prostate, and seminal vesicles in order to relieve them; this cured the POIS symptoms.
Another patient, in whom POIS was suspected to be caused by cytokine release, was successfully treated with nonsteroidal anti-inflammatory drugs (NSAIDs) just prior to and for a day or two after ejaculation. The patient took diclofenac 75 mg 1 to 2 hours prior to sexual activity with orgasm, and continued twice daily for 24 to 48 hours.
One POIS patient with erectile dysfunction and premature ejaculation had much lower severity of symptoms on those occasions when he was able to maintain penile erection long enough to achieve vaginal penetration and ejaculate inside his partner. The patient took tadalafil to treat his erectile dysfunction and premature ejaculation. This increased the number of occasions on which he was able to ejaculate inside his partner, and decreased the number of occasions on which he experienced POIS symptoms. This patient is thought to have Dhat syndrome rather than true POIS.
Two patients, in whom POIS was suspected to be caused by auto-immune reaction to their own semen, were successfully treated by allergen immunotherapy with their own autologous semen. They were given multiple subcutaneous injections of their own semen for three years. Treatment with autologous semen "might take 3 to 5 years before any clinically relevant symptom reduction would become manifest".
Treatments are not always successful, especially when the cause of POIS in a particular patient has not been determined. In one patient, all of whose routine laboratory tests were normal, the following were attempted, all without success: ibuprofen, 400 mg on demand; tramadol 50 mg one hour pre-coitally; and escitalopram 10 mg daily at bedtime for 3 months.
Epidemiology
The prevalence of POIS is unknown. POIS is listed as a rare disease by the American National Institutes of Health and the European Orphanet. It is thought to be underdiagnosed and underreported. POIS seems to affect mostly men from around the world, of various ages and relationship statuses.
Women
It is possible that a similar disease exists in women, though, as of 2016, there is only one documented female patient.
References
External links
Sexual health
Rare syndromes
Orgasm
Ejaculation
Urology
Autoimmune diseases
Convulsion
A convulsion is a medical condition where the body muscles contract and relax rapidly and repeatedly, resulting in uncontrolled shaking. Because epileptic seizures typically include convulsions, the term convulsion is often used as a synonym for seizure. However, not all epileptic seizures result in convulsions, and not all convulsions are caused by epileptic seizures. Non-epileptic convulsions have no relation with epilepsy, and are caused by non-epileptic seizures.
Convulsions can be caused by epilepsy, infections (including a severe form of listeriosis caused by eating food contaminated with Listeria monocytogenes), brain trauma, or other medical conditions. They can also occur from an electric shock or from improperly enriched air for scuba diving.
The word fit is sometimes used to mean a convulsion or epileptic seizure.
Signs and symptoms
A person having a convulsion may experience several different symptoms, such as a brief blackout, confusion, drooling, loss of bowel or bladder control, sudden shaking of the entire body, uncontrollable muscle spasms, or temporary cessation of breathing. Symptoms usually last from a few seconds to several minutes, although they can last longer.
Convulsions in children are not necessarily benign, and may lead to brain damage if prolonged. In these patients, the frequency of occurrence should not downplay their significance, as a worsening seizure state may reflect the damage caused by successive attacks. Symptoms may include:
Lack of awareness
Loss of consciousness
Eyes rolling back
Changes to breathing
Stiffening of the arms, legs, or whole body
Jerky movements of the arms, legs, body, or head
Lack of control over movements
Inability to respond
Causes
Most convulsions are the result of abnormal electrical activity in the brain. Often, a specific cause is not clear. Numerous conditions can cause a convulsion.
Convulsions can be caused by specific chemicals in the blood, as well as infections like meningitis or encephalitis. Other possibilities include celiac disease, head trauma, stroke, or lack of oxygen to the brain. Sometimes the convulsion can be caused by genetic defects or brain tumors. Convulsions can also occur when the blood sugar is too low or there is a deficiency of vitamin B6 (pyridoxine). The pathophysiology of convulsions remains poorly understood.
Convulsions are often caused by epileptic seizures, febrile seizures, non-epileptic seizures, or paroxysmal kinesigenic dyskinesia. In rare cases, it may be triggered by reactions to certain medications, such as antidepressants, stimulants, and antihistamines.
Epileptic seizures
Epilepsy is a neuronal disorder with multifactorial manifestations. It is a noncontagious illness and is usually associated with sudden attacks of seizures, abrupt anomalies in the electrical activity of the brain that disrupt part or all of the body. Various areas of the brain can be disturbed by epileptic events. Epileptic seizures can have widely varying clinical features. Epileptic seizures can also have long-lasting effects on cerebral blood flow.
Various kinds of epileptic seizures affect 60 million people worldwide.
Generalized seizures
The most common type of seizure is called a generalized seizure, also known as a generalized convulsion. This is characterized by a loss of consciousness which may lead to the person collapsing. The body stiffens for about a minute and then jerks uncontrollably for the next minute. During this, the patient may fall and injure themselves, bite their tongue, roll their eyes back, and lose control of their bladder. A familial history of seizures puts a person at a greater risk of developing them. Generalized seizures have been broadly classified into two categories: motor and non-motor.
A generalized tonic-clonic seizure (GTCS), also known as a grand mal seizure, is a whole-body seizure that has a tonic phase followed by clonic muscle contractions. GTCSs can happen in people of all ages. GTCSs are very hazardous, and they increase the risk of injuries and sudden unexpected death in epilepsy (SUDEP). SUDEP is a sudden, unexpected, nontraumatic death in patients with epilepsy. Strong convulsions that are related to GTCSs can also cause falls and severe injuries.
Not all generalized seizures produce convulsions. For example, in an absence seizure, also known as a petit mal seizure, the brain experiences electrical disturbances but the body remains motionless and unresponsive.
Febrile convulsion
A common cause of convulsions in children is febrile seizures, a type of seizure associated with a high body temperature. This high temperature is a usual immune response to infection, and in febrile convulsions, the reason for the fever is extra-cranial (such as a body-wide viral infection). In Nigeria, malaria—which can cause sudden, high fevers—is a significant cause of convulsions among children under 5 years of age.
Febrile seizures fall into two categories: simple and complex. A simple febrile seizure is generalized, occurs singularly, and lasts less than 15 minutes. A complex febrile seizure can be focused in an area of the body, occur more than once, and lasts for more than 15 minutes. Febrile seizures affect 2–4% of children in the United States and Western Europe and are the most common type of childhood seizure. The exact cause of febrile convulsions is unknown, though they may result from the interplay between environmental and genetic factors.
Psychogenic non-epileptic seizures
Psychogenic non-epileptic seizures (PNES) are described as neurobehavioral conditions or "psychogenic illnesses" which occur not due to electrical disturbances in a person's brain but due to mental and emotional stress. PNES are an important differential diagnosis and a common occurrence in epilepsy centers. According to the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), PNES is classified as a "conversion disorder" or Functional Neurologic Symptom Disorder characterized by alterations in behavior, motor activity, consciousness, and sensation. A few neuroimaging (functional and structural) studies suggest that PNES may reflect alterations in sensorimotor function, emotional regulation, cognitive control, and the integration of neural circuits.
Paroxysmal kinesigenic dyskinesia
There is a linkage between infantile convulsions and paroxysmal dyskinesia. Paroxysmal kinesigenic dyskinesia (PKD) is characterized by attacks of sudden involuntary movement triggered by sudden voluntary movement, stress, or excitement. The relationship between convulsions and PKD is thought to reflect a shared pathophysiological mechanism.
Notes
References
Symptoms and signs: Nervous system
Medical terminology
Human skin
The human skin is the outer covering of the body and is the largest organ of the integumentary system. The skin has up to seven layers of ectodermal tissue guarding muscles, bones, ligaments and internal organs. Human skin is similar to most of the other mammals' skin, and it is very similar to pig skin. Though nearly all human skin is covered with hair follicles, it can appear hairless. There are two general types of skin: hairy and glabrous skin (hairless). The adjective cutaneous literally means "of the skin" (from Latin cutis, skin).
Skin plays an important immunity role in protecting the body against pathogens and excessive water loss. Its other functions are insulation, temperature regulation, sensation, synthesis of vitamin D, and the protection of vitamin B folates. Severely damaged skin will try to heal by forming scar tissue. This is often discoloured and depigmented.
In humans, skin pigmentation (affected by melanin) varies among populations, and skin type can range from dry to non-dry and from oily to non-oily. Such skin variety provides a rich and diverse habitat for the approximately one thousand species of bacteria from nineteen phyla which have been found on human skin.
Structure
Human skin shares anatomical, physiological, biochemical and immunological properties with other mammalian lines. Pig skin especially shares similar epidermal and dermal thickness ratios to human skin: pig and human skin share similar hair follicle and blood vessel patterns; biochemically the dermal collagen and elastin content is similar in pig and human skin; and pig skin and human skin have similar physical responses to various growth factors.
Skin has mesodermal cells which produce pigmentation, such as melanin provided by melanocytes, which absorb some of the potentially dangerous ultraviolet radiation (UV) in sunlight. It contains DNA repair enzymes that help reverse UV damage. People lacking the genes for these enzymes have high rates of skin cancer. One form, malignant melanoma, is predominantly produced by UV light; it is particularly invasive, spreads quickly, and can often be deadly. Human skin pigmentation varies substantially between populations; this has led to the classification of people(s) on the basis of skin colour.
In terms of surface area, the skin is the second largest organ in the human body (the inside of the small intestine is 15 to 20 times larger). For the average adult human, the skin has a surface area of . The thickness of the skin varies considerably over all parts of the body, and between men and women, and young and old. An example is the skin on the forearm, which is on average in males and in females. of skin holds 650 sweat glands, 20 blood vessels, 60,000 melanocytes, and more than 1,000 nerve endings. The average human skin cell is about in diameter, but there are variants. A skin cell usually ranges from , depending on a variety of factors.
Skin is composed of three primary layers: the epidermis, the dermis and the hypodermis.
Epidermis
The epidermis, "epi" coming from the Greek language meaning "over" or "upon", is the outermost layer of the skin. It forms the waterproof, protective wrap over the body's surface, which also serves as a barrier to infection and is made up of stratified squamous epithelium with an underlying basal lamina.
The epidermis contains no blood vessels, and cells in the deepest layers are nourished almost exclusively by diffused oxygen from the surrounding air and to a far lesser degree by blood capillaries extending to the outer layers of the dermis. The main types of cells that make up the epidermis are keratinocytes, with Merkel cells, melanocytes and Langerhans cells also present. The epidermis can be further subdivided into the following strata (beginning with the outermost layer): corneum, lucidum (only in palms of hands and bottoms of feet), granulosum, spinosum, and basale. Cells are formed through mitosis at the basale layer. The daughter cells (see cell division) move up the strata changing shape and composition as they die due to isolation from their blood source. The cytoplasm is released and the protein keratin is inserted. They eventually reach the corneum and slough off (desquamation). This process is called "keratinization". This keratinized layer of skin is responsible for keeping water in the body and keeping other harmful chemicals and pathogens out, making skin a natural barrier to infection.
The epidermis contains no blood vessels and is nourished by diffusion from the dermis. The main types of cells that make up the epidermis are keratinocytes, melanocytes, Langerhans cells, and Merkel cells. The epidermis also helps the skin regulate body temperature.
Layers
The skin has up to seven layers of ectodermal tissue and guards the underlying muscles, bones, ligaments and internal organs. The epidermis is divided into several layers, where cells are formed through mitosis at the innermost layers. They move up the strata changing shape and composition as they differentiate and become filled with keratin. After reaching the top layer stratum corneum they are eventually 'sloughed off', or desquamated. This process is called keratinization and takes place within weeks.
It was previously believed that the stratum corneum was "a simple, biologically inactive, outer epidermal layer comprising a fibrillar lattice of dead keratin". It is now understood that this is not true, and that the stratum corneum should be considered to be a live tissue. While it is true that the stratum corneum is mainly composed of terminally differentiated keratinocytes called corneocytes that are anucleated, these cells remain alive and metabolically functional until desquamated.
Sublayers
The epidermis is divided into the following 5 sublayers or strata:
Stratum corneum
Stratum lucidum
Stratum granulosum
Stratum spinosum
Stratum basale (also called "stratum germinativum")
Blood capillaries are found beneath the epidermis and are linked to an arteriole and a venule. Arterial shunt vessels may bypass the network in ears, the nose and fingertips.
Genes and proteins expressed in the epidermis
About 70% of all human protein-coding genes are expressed in the skin. Almost 500 genes have an elevated pattern of expression in the skin. There are fewer than 100 genes that are specific for the skin, and these are expressed in the epidermis. An analysis of the corresponding proteins shows that these are mainly expressed in keratinocytes and have functions related to squamous differentiation and cornification.
Dermis
The dermis is the layer of skin beneath the epidermis that consists of connective tissue and cushions the body from stress and strain. The dermis is tightly connected to the epidermis by a basement membrane. It also harbours many nerve endings that provide the sense of touch and heat. It contains the hair follicles, sweat glands, sebaceous glands, apocrine glands, lymphatic vessels and blood vessels. The blood vessels in the dermis provide nourishment and waste removal from its own cells as well as from the stratum basale of the epidermis.
The dermis is structurally divided into two areas: a superficial area adjacent to the epidermis, called the papillary region, and a deep thicker area known as the reticular region.
Papillary region
The papillary region is composed of loose areolar connective tissue. It is named for its finger-like projections called papillae, which extend toward the epidermis. The papillae provide the dermis with a "bumpy" surface that interdigitates with the epidermis, strengthening the connection between the two layers of skin.
In the palms, fingers, soles, and toes, the influence of the papillae projecting into the epidermis forms contours in the skin's surface. These epidermal ridges occur in patterns (see: fingerprint) that are genetically and epigenetically determined and are therefore unique to the individual, making it possible to use fingerprints or footprints as a means of identification.
Reticular region
The reticular region lies deep in the papillary region and is usually much thicker. It is composed of dense irregular connective tissue, and receives its name from the dense concentration of collagenous, elastic, and reticular fibres that weave throughout it. These protein fibres give the dermis its properties of strength, extensibility, and elasticity.
Also located within the reticular region are the roots of the hairs, sebaceous glands, sweat glands, receptors, nails, and blood vessels.
Tattoo ink is held in the dermis. Stretch marks, often from pregnancy and obesity, are also located in the dermis.
Subcutaneous tissue
The subcutaneous tissue (also hypodermis and subcutis) is not part of the skin, but lies below the dermis of the cutis. Its purpose is to attach the skin to underlying bone and muscle as well as supplying it with blood vessels and nerves. It consists of loose connective tissue, adipose tissue and elastin. The main cell types are fibroblasts, macrophages and adipocytes (subcutaneous tissue contains 50% of body fat). Fat serves as padding and insulation for the body.
Cross-section
Cell count and cell mass
Skin cell table
The below table identifies the skin cell count and aggregate cell mass estimates for a 70 kg adult male (ICRP-23; ICRP-89, ICRP-110).
Tissue mass is defined as 3.3 kg (ICRP-89, ICRP-110) and covers the skin's epidermis, dermis, hair follicles, and glands. The cell data is extracted from 'The Human Cell Count and Cell Size Distribution', Tissue-Table tab in the Supporting Information SO1 Dataset (xlsx). The 1200-record dataset is supported by extensive references for cell size, cell count, and aggregate cell mass.
Detailed data for below cell groups are further subdivided into all the cell types listed in the above sections and categorized by epidermal, dermal, hair follicle, and glandular subcategories in the dataset and on the dataset's graphical website interface. While adipocytes in the hypodermal adipose tissue are treated separately in the ICRP tissue categories, fat content (minus cell-membrane-lipids) resident in the dermal layer (Table-105, ICRP-23) is addressed by the below interstitial-adipocytes in the dermal layer.
Development
Skin colour
Human skin shows a wide variety of skin colours, from the darkest brown to the lightest pinkish-white hues. Human skin shows greater variation in colour than any other single mammalian species; this is the result of natural selection. Skin pigmentation in humans evolved primarily to regulate the amount of ultraviolet radiation (UVR) penetrating the skin, controlling its biochemical effects.
The actual skin colour of different humans is affected by many substances, although the single most important substance determining human skin colour is the pigment melanin. Melanin is produced within the skin in cells called melanocytes and it is the main determinant of the skin colour of darker-skinned humans. The skin colour of people with light skin is determined mainly by the bluish-white connective tissue under the dermis and by the haemoglobin circulating in the veins of the dermis. The red colour underlying the skin becomes more visible, especially in the face, when, as a consequence of physical exercise or stimulation of the nervous system (anger, fear), arterioles dilate.
There are at least five different pigments that determine the colour of the skin. These pigments are present at different levels and places.
Melanin: It is brown in colour and present in the basal layer of the epidermis.
Melanoid: It resembles melanin but is present diffusely throughout the epidermis.
Carotene: This pigment is yellow to orange in colour. It is present in the stratum corneum and fat cells of dermis and superficial fascia.
Hemoglobin (also spelled haemoglobin): It is found in blood and is not a pigment of the skin but develops a purple colour.
Oxyhemoglobin: It is also found in blood and is not a pigment of the skin. It develops a red colour.
There is a correlation between the geographic distribution of ultraviolet radiation (UVR) and the distribution of indigenous skin pigmentation around the world. Areas that receive higher amounts of UVR generally have darker-skinned populations, located nearer the equator. Areas that are far from the tropics and closer to the poles have a lower concentration of UVR, which is reflected in lighter-skinned populations.
In the same population it has been observed that adult human females are considerably lighter in skin pigmentation than males. Females need more calcium during pregnancy and lactation, and vitamin D, which is synthesized from sunlight, helps in absorbing calcium. For this reason it is thought that females may have evolved to have lighter skin in order to help their bodies absorb more calcium.
The Fitzpatrick scale is a numerical classification schema for human skin colour, developed in 1975 as a way to classify the typical response of different types of skin to ultraviolet (UV) light; it ranges from type I (always burns, never tans) to type VI (never burns).
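As a rough illustration of how such a classification schema can be represented in software, the following Python sketch maps the six phototypes to their commonly cited burn-and-tan descriptions; the helper function and the exact wording of the descriptions are assumptions for demonstration, not text taken from the original 1975 publication.

```python
# Hypothetical helper mapping Fitzpatrick phototypes (1-6) to the commonly
# cited descriptions of sunburn/tanning response; wording is illustrative.
FITZPATRICK_SCALE = {
    1: "Type I - always burns, never tans",
    2: "Type II - usually burns, tans minimally",
    3: "Type III - sometimes burns mildly, tans uniformly",
    4: "Type IV - burns minimally, tans easily",
    5: "Type V - rarely burns, tans darkly easily",
    6: "Type VI - never burns",
}

def describe_phototype(phototype: int) -> str:
    """Return the textbook description for a Fitzpatrick phototype (1-6)."""
    if phototype not in FITZPATRICK_SCALE:
        raise ValueError("Fitzpatrick phototypes run from 1 (I) to 6 (VI)")
    return FITZPATRICK_SCALE[phototype]

print(describe_phototype(3))  # Type III - sometimes burns mildly, tans uniformly
```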
Ageing
As skin ages, it becomes thinner and more easily damaged. Intensifying this effect is the decreasing ability of skin to heal itself as a person ages.
Among other things, skin ageing is noted by a decrease in volume and elasticity. There are many internal and external causes to skin ageing. For example, ageing skin receives less blood flow and lower glandular activity.
A validated comprehensive grading scale has categorized the clinical findings of skin ageing as laxity (sagging), rhytids (wrinkles), and the various facets of photoageing, including erythema (redness), and telangiectasia, dyspigmentation (brown discolouration), solar elastosis (yellowing), keratoses (abnormal growths) and poor texture.
Cortisol causes degradation of collagen, accelerating skin ageing.
Anti-ageing supplements are used to treat skin ageing.
Photoageing
Photoageing has two main concerns: an increased risk for skin cancer and the appearance of damaged skin. In younger skin, sun damage will heal faster since the cells in the epidermis have a faster turnover rate, while in the older population the skin becomes thinner and the epidermis turnover rate for cell repair is lower, which may result in the dermis layer being damaged.
UV-induced DNA damage
UV-irradiation of human skin cells generates damages in DNA through direct photochemical reactions at adjacent thymine or cytosine residues on the same strand of DNA. Cyclobutane pyrimidine dimers formed by two adjacent thymine bases, or by two adjacent cytosine bases, in DNA are the most frequent types of DNA damage induced by UV. Humans, as well as other organisms, are capable of repairing such UV-induced damages by the process of nucleotide excision repair. In humans this repair process protects against skin cancer.
Types
Though most human skin is covered with hair follicles, some parts can be hairless. There are two general types of skin, hairy and glabrous skin (hairless). The adjective cutaneous means "of the skin" (from Latin cutis, skin).
Functions
Skin performs the following functions:
Protection: an anatomical barrier from pathogens and damage between the internal and external environment in bodily defence; Langerhans cells in the skin are part of the adaptive immune system. Perspiration contains lysozyme, which breaks the bonds within the cell walls of bacteria.
Sensation: contains a variety of nerve endings that react to heat and cold, touch, pressure, vibration, and tissue injury; see somatosensory system and haptics.
Heat regulation: the skin contains a blood supply far greater than its requirements, which allows precise control of energy loss by radiation, convection and conduction. Dilated blood vessels increase perfusion and heat loss, while constricted vessels greatly reduce cutaneous blood flow and conserve heat.
Control of evaporation: the skin provides a relatively dry and semi-impermeable barrier to fluid loss. Loss of this function contributes to the massive fluid loss in burns.
Aesthetics and communication: others see our skin and can assess our mood, physical state and attractiveness.
Storage and synthesis: acts as a storage centre for lipids and water, as well as a means of synthesis of vitamin D by action of UV on certain parts of the skin.
Excretion: sweat contains urea; however, its concentration is 1/130th that of urine, so excretion by sweating is at most a secondary function to temperature regulation.
Absorption: the cells comprising the outermost 0.25–0.40 mm of the skin are "almost exclusively supplied by external oxygen", although the "contribution to total respiration is negligible". In addition, medicine can be administered through the skin, by ointments or by means of adhesive patch, such as the nicotine patch or iontophoresis. The skin is an important site of transport in many other organisms.
Water resistance: The skin acts as a water-resistant barrier so essential nutrients are not washed out of the body.
Skin flora
The human skin is a rich environment for microbes. Around 1,000 species of bacteria from 19 bacterial phyla have been found. Most come from only four phyla: Actinomycetota (51.8%), Bacillota (24.4%), Pseudomonadota (16.5%), and Bacteroidota (6.3%). Propionibacteria and Staphylococci species were the main species in sebaceous areas. There are three main ecological areas: moist, dry and sebaceous. In moist places on the body, Corynebacteria together with Staphylococci dominate. In dry areas, there is a mixture of species, dominated by Betaproteobacteria and Flavobacteriales. Ecologically, sebaceous areas had greater species richness than moist and dry ones. The areas with the least similarity between people in species were the spaces between fingers, the spaces between toes, axillae, and the umbilical cord stump. The most similar were beside the nostril, the nares (inside the nostril), and on the back.
Reflecting upon the diversity of the human skin, researchers on the human skin microbiome have observed: "hairy, moist underarms lie a short distance from smooth dry forearms, but these two niches are likely as ecologically dissimilar as rainforests are to deserts."
The NIH conducted the Human Microbiome Project to characterize the human microbiota, which includes that on the skin and the role of this microbiome in health and disease.
Microorganisms like Staphylococcus epidermidis colonize the skin surface. The density of skin flora depends on region of the skin. The disinfected skin surface gets recolonized from bacteria residing in the deeper areas of the hair follicle, gut and urogenital openings.
Clinical significance
Diseases of the skin include skin infections and skin neoplasms (including skin cancer). Dermatology is the branch of medicine that deals with conditions of the skin.
There are seven cervical, twelve thoracic, five lumbar, and five sacral dermatomes, each an area of skin supplied by a single spinal nerve. Certain diseases, like shingles, caused by varicella-zoster infection, produce pain sensations and eruptive rashes involving a dermatomal distribution. Dermatomes are helpful in the diagnosis of vertebral spinal injury levels. Aside from the dermatomes, the epidermis cells are susceptible to neoplastic changes, resulting in various cancer types.
The skin is also valuable for diagnosis of other conditions, since many medical signs show through the skin. Skin color affects the visibility of these signs, a source of misdiagnosis in unaware medical personnel.
Society and culture
Hygiene and skin care
The skin supports its own ecosystems of microorganisms, including yeasts and bacteria, which cannot be removed by any amount of cleaning. Estimates of the number of individual bacteria on the surface of human skin vary greatly across different regions of the skin, and oily surfaces, such as the face, may harbour especially dense populations. Despite these vast quantities, all of the bacteria found on the skin's surface would fit into a volume the size of a pea. In general, the microorganisms keep one another in check and are part of a healthy skin. When the balance is disturbed, there may be an overgrowth and infection, such as when antibiotics kill microbes, resulting in an overgrowth of yeast. The skin is continuous with the inner epithelial lining of the body at the orifices, each of which supports its own complement of microbes.
Cosmetics should be used carefully on the skin because these may cause allergic reactions. Each season requires suitable clothing in order to facilitate the evaporation of the sweat. Sunlight, water and air play an important role in keeping the skin healthy.
Oily skin
Oily skin is caused by over-active sebaceous glands that produce a substance called sebum, a naturally healthy skin lubricant. Consumption of a high glycemic-index diet and of dairy products (except for cheese) increases IGF-1 generation, which in turn increases sebum production. Overwashing the skin does not cause sebum overproduction but may cause dryness.
When the skin produces excessive sebum, it becomes heavy and thick in texture, known as oily skin. Oily skin is typified by shininess, blemishes and pimples. The oily-skin type is not necessarily bad, since such skin is less prone to wrinkling, or other signs of ageing, because the oil helps to keep needed moisture locked into the epidermis (outermost layer of skin). The negative aspect of the oily-skin type is that oily complexions are especially susceptible to clogged pores, blackheads, and buildup of dead skin cells on the surface of the skin. Oily skin can be sallow and rough in texture and tends to have large, clearly visible pores everywhere, except around the eyes and neck.
Permeability
Human skin has a low permeability; that is, most foreign substances are unable to penetrate and diffuse through the skin. Skin's outermost layer, the stratum corneum, is an effective barrier to most inorganic nanosized particles. This protects the body from external particles such as toxins by not allowing them to come into contact with internal tissues. However, in some cases it is desirable to allow particles entry to the body through the skin. Potential medical applications of such particle transfer has prompted developments in nanomedicine and biology to increase skin permeability. One application of transcutaneous particle delivery could be to locate and treat cancer. Nanomedical researchers seek to target the epidermis and other layers of active cell division where nanoparticles can interact directly with cells that have lost their growth-control mechanisms (cancer cells). Such direct interaction could be used to more accurately diagnose properties of specific tumours or to treat them by delivering drugs with cellular specificity.
Nanoparticles
Nanoparticles 40 nm in diameter and smaller have been successful in penetrating the skin. Research confirms that nanoparticles larger than 40 nm do not penetrate the skin past the stratum corneum. Most particles that do penetrate will diffuse through skin cells, but some will travel down hair follicles and reach the dermis layer.
The permeability of skin relative to different shapes of nanoparticles has also been studied. Research has shown that spherical particles have a better ability to penetrate the skin compared to oblong (ellipsoidal) particles because spheres are symmetric in all three spatial dimensions. One study compared the two shapes and recorded data that showed spherical particles located deep in the epidermis and dermis whereas ellipsoidal particles were mainly found in the stratum corneum and epidermal layers. Nanorods are used in experiments because of their unique fluorescent properties but have shown mediocre penetration.
Nanoparticles of different materials have shown the skin's permeability limitations. In many experiments, gold nanoparticles 40 nm in diameter or smaller are used and have been shown to penetrate to the epidermis. Titanium oxide (TiO2), zinc oxide (ZnO), and silver nanoparticles are ineffective in penetrating the skin past the stratum corneum. Cadmium selenide (CdSe) quantum dots have proven to penetrate very effectively when they have certain properties. Because CdSe is toxic to living organisms, the particles must be covered in a surface group. An experiment comparing the permeability of quantum dots coated in polyethylene glycol (PEG), PEG-amine, and carboxylic acid concluded that the PEG and PEG-amine surface groups allowed for the greatest penetration of particles. The carboxylic acid coated particles did not penetrate past the stratum corneum.
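To make the size and material dependence described above concrete, the sketch below encodes these observations as a rough Python heuristic; the function, the hard 40 nm cut-off and the material lists simply restate the studies summarised in this section and are not a validated permeability model.

```python
# Rule-of-thumb sketch restating the penetration observations summarised above;
# thresholds and material lists are simplifications, not a validated model.
from typing import Optional

BLOCKED_MATERIALS = {"TiO2", "ZnO", "Ag"}               # stopped at the stratum corneum
PENETRATING_QUANTUM_DOT_COATINGS = {"PEG", "PEG-amine"}

def may_penetrate_stratum_corneum(material: str, diameter_nm: float,
                                  coating: Optional[str] = None) -> bool:
    """Very rough heuristic: might a nanoparticle pass the stratum corneum?"""
    if diameter_nm > 40:                                 # particles larger than 40 nm do not penetrate
        return False
    if material in BLOCKED_MATERIALS:                    # TiO2, ZnO and Ag are blocked regardless of size
        return False
    if material == "CdSe":                               # quantum dots need a suitable surface coating
        return coating in PENETRATING_QUANTUM_DOT_COATINGS
    return True                                          # e.g. gold particles 40 nm or smaller

print(may_penetrate_stratum_corneum("Au", 30))            # True
print(may_penetrate_stratum_corneum("CdSe", 20, "COOH"))  # False
```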
Increasing permeability
Scientists previously believed that the skin was an effective barrier to inorganic particles. Damage from mechanical stressors was believed to be the only way to increase its permeability.
Recently, simpler and more effective methods for increasing skin permeability have been developed. Ultraviolet radiation (UVR) slightly damages the surface of skin and causes a time-dependent defect allowing easier penetration of nanoparticles. The UVR's high energy causes a restructuring of cells, weakening the boundary between the stratum corneum and the epidermal layer. The damage of the skin is typically measured by the transepidermal water loss (TEWL), though it may take 3–5 days for the TEWL to reach its peak value. When the TEWL reaches its highest value, the maximum density of nanoparticles is able to permeate the skin. While the effect of increased permeability after UVR exposure can lead to an increase in the number of particles that permeate the skin, the specific permeability of skin after UVR exposure relative to particles of different sizes and materials has not been determined.
There are other methods to increase nanoparticle penetration by skin damage: tape stripping is the process in which tape is applied to skin then lifted to remove the top layer of skin; skin abrasion is done by shaving the top 5–10 μm off the surface of the skin; chemical enhancement applies chemicals such as polyvinylpyrrolidone (PVP), dimethyl sulfoxide (DMSO), and oleic acid to the surface of the skin to increase permeability; electroporation increases skin permeability by the application of short pulses of electric fields. The pulses are high voltage and on the order of milliseconds when applied. Charged molecules penetrate the skin more frequently than neutral molecules after the skin has been exposed to electric field pulses. Results have shown molecules on the order of 100 μm to easily permeate electroporated skin.
Applications
A large area of interest in nanomedicine is the transdermal patch because of the possibility of a painless application of therapeutic agents with very few side effects. Transdermal patches have been limited to administer a small number of drugs, such as nicotine, because of the limitations in permeability of the skin. Development of techniques that increase skin permeability has led to more drugs that can be applied via transdermal patches and more options for patients.
Increasing the permeability of skin allows nanoparticles to penetrate and target cancer cells. Nanoparticles along with multi-modal imaging techniques have been used as a way to diagnose cancer non-invasively. Skin with high permeability allowed quantum dots with an antibody attached to the surface for active targeting to successfully penetrate and identify cancerous tumours in mice. Tumour targeting is beneficial because the particles can be excited using fluorescence microscopy and emit light energy and heat that will destroy cancer cells.
Sunblock and sunscreen
Sunblock and sunscreen are different important skin-care products, though both offer protection from the sun.
Sunblock—Sunblock is opaque and stronger than sunscreen, since it is able to block most of the UVA/UVB rays and radiation from the sun, and does not need to be reapplied several times in a day. Titanium dioxide and zinc oxide are two of the important ingredients in sunblock.
Sunscreen—Sunscreen is more transparent once applied to the skin and also has the ability to protect against UVA/UVB rays, although the sunscreen's ingredients have the ability to break down at a faster rate once exposed to sunlight, and some of the radiation is able to penetrate to the skin. In order for sunscreen to be more effective it is necessary to consistently reapply and use one with a higher sun protection factor.
Diet
Vitamin A, also known as retinoids, benefits the skin by normalizing keratinization, downregulating sebum production, which contributes to acne, and reversing and treating photodamage, striae, and cellulite.
Vitamin D and analogues are used to downregulate the cutaneous immune system and epithelial proliferation while promoting differentiation.
Vitamin C is an antioxidant that regulates collagen synthesis, forms barrier lipids, regenerates vitamin E, and provides photoprotection.
Vitamin E is a membrane antioxidant that protects against oxidative damage and also provides protection against harmful UV rays.
Several scientific studies have confirmed that changes in baseline nutritional status affect skin condition.
Mayo Clinic lists foods they state help the skin: fruits and vegetables, whole-grains, dark leafy greens, nuts, and seeds.
See also
Acid mantle
Adam and Eve
Anthropodermic bibliopegy
Artificial skin
Callus – thick area of skin
List of cutaneous conditions
Cutaneous structure development
Fingerprint – skin on fingertips
Human body
Hyperpigmentation – about excess skin colour
Intertriginous
Meissner's corpuscle
Nude beaches
Nude swimming
Nudity
Pacinian corpuscle
Polyphenol antioxidant
Skin cancer
Skin lesion
Skin repair
Sunbathing
References
External links
Organs (anatomy)
Structure
A structure is an arrangement and organization of interrelated elements in a material object or system, or the object or system so organized. Material structures include man-made objects such as buildings and machines and natural objects such as biological organisms, minerals and chemicals. Abstract structures include data structures in computer science and musical form. Types of structure include a hierarchy (a cascade of one-to-many relationships), a network featuring many-to-many links, or a lattice featuring connections between components that are neighbors in space.
Load-bearing
Buildings, aircraft, skeletons, anthills, beaver dams, bridges and salt domes are all examples of load-bearing structures. The results of construction are divided into buildings and non-building structures, and make up the infrastructure of a human society. Built structures are broadly divided by their varying design approaches and standards, into categories including building structures, architectural structures, civil engineering structures and mechanical structures.
The effects of loads on physical structures are determined through structural analysis, which is one of the tasks of structural engineering. The structural elements can be classified as one-dimensional (ropes, struts, beams, arches), two-dimensional (membranes, plates, slab, shells, vaults), or three-dimensional (solid masses). Three-dimensional elements were the main option available to early structures such as Chichen Itza. A one-dimensional element has one dimension much larger than the other two, so the other dimensions can be neglected in calculations; however, the ratio of the smaller dimensions and the composition can determine the flexural and compressive stiffness of the element. Two-dimensional elements with a thin third dimension have little of either but can resist biaxial traction.
The structural elements are combined in structural systems. The majority of everyday load-bearing structures are section-active structures like frames, which are primarily composed of one-dimensional (bending) structures. Other types are vector-active structures such as trusses, surface-active structures such as shells and folded plates, form-active structures such as cable or membrane structures, and hybrid structures.
Load-bearing biological structures such as bones, teeth, shells, and tendons derive their strength from a multilevel hierarchy of structures employing biominerals and proteins, at the bottom of which are collagen fibrils.
Biological
In biology, one of the properties of life is its highly ordered structure, which can be observed at multiple levels such as in cells, tissues, organs, and organisms.
In another context, structure can also be observed in macromolecules, particularly proteins and nucleic acids. The function of these molecules is determined by their shape as well as their composition, and their structure has multiple levels. Protein structure has a four-level hierarchy. The primary structure is the sequence of amino acids that make it up. It has a peptide backbone made up of a repeated sequence of a nitrogen and two carbon atoms. The secondary structure consists of repeated patterns determined by hydrogen bonding. The two basic types are the α-helix and the β-pleated sheet. The tertiary structure is a back and forth bending of the polypeptide chain, and the quaternary structure is the way that tertiary units come together and interact. Structural biology is concerned with the biomolecular structure of macromolecules.
Chemical
Chemical structure refers to both molecular geometry and electronic structure. The structure can be represented by a variety of diagrams called structural formulas. Lewis structures use a dot notation to represent the valence electrons for an atom; these are the electrons that determine the role of the atom in chemical reactions. Bonds between atoms can be represented by lines with one line for each pair of electrons that is shared. In a simplified version of such a diagram, called a skeletal formula, only carbon-carbon bonds and functional groups are shown.
Atoms in a crystal have a structure that involves repetition of a basic unit called a unit cell. The atoms can be modeled as points on a lattice, and one can explore the effect of symmetry operations that include rotations about a point, reflections about symmetry planes, and translations (movements of all the points by the same amount). Each crystal has a finite group, called the space group, of such operations that map it onto itself; there are 230 possible space groups. By Neumann's law, the symmetry of a crystal determines what physical properties, including piezoelectricity and ferromagnetism, the crystal can have.
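To illustrate such symmetry operations, the following minimal Python sketch (an illustrative aid with hypothetical function names, not a crystallographic tool) checks, on a finite sample of integer lattice points, whether a rotation about a lattice point sends lattice points to lattice points; a 90° (four-fold) rotation does, whereas a 72° (five-fold) rotation does not:

```python
import math

def rotate(point, degrees):
    """Rotate a 2-D point about the origin by the given angle in degrees."""
    theta = math.radians(degrees)
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def maps_lattice_to_itself(degrees, sample_points):
    """Check, on a finite sample, whether the rotation sends integer
    lattice points to integer lattice points."""
    for p in sample_points:
        x, y = rotate(p, degrees)
        if not (math.isclose(x, round(x), abs_tol=1e-9)
                and math.isclose(y, round(y), abs_tol=1e-9)):
            return False
    return True

sample = [(i, j) for i in range(-2, 3) for j in range(-2, 3)]
print(maps_lattice_to_itself(90, sample))  # True: a four-fold rotation is a symmetry of the square lattice
print(maps_lattice_to_itself(72, sample))  # False: five-fold rotational symmetry does not map the lattice onto itself
```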
Mathematical
Musical
A large part of musical analysis involves identifying and interpreting the structure of musical works. Structure can be found at the level of part of a work, the entire work, or a group of works. Elements of music such as pitch, duration and timbre combine into small elements like motifs and phrases, and these in turn combine in larger structures. Not all music (for example, that of John Cage) has a hierarchical organization, but hierarchy makes it easier for a listener to understand and remember the music.
In analogy to linguistic terminology, motifs can be combined to make complete musical ideas such as sentences and phrases. A larger form is known as the period. One such form that was widely used between 1600 and 1900 has two phrases, an antecedent and a consequent, with a half cadence in the middle and a full cadence at the end providing punctuation. On a larger scale are single-movement forms such as the sonata form and the contrapuntal form, and multi-movement forms such as the symphony.
Social
A social structure is a pattern of relationships among individuals: a society can be viewed as a system organized by a characteristic pattern of relationships, known as the social organization of the group. Sociologists have studied the changing structure of these groups. Structure and agency are two confronted theories about human behaviour, and the debate surrounding the influence of structure and agency on human thought is one of the central issues in sociology. In this context, agency refers to the individual human capacity to act independently and make free choices, while structure refers to factors such as social class, religion, gender, ethnicity, customs, etc. that seem to limit or influence individual opportunities.
Data
In computer science, a data structure is a way of organizing information in a computer so that it can be used efficiently. Data structures are built out of two basic types. An array has an index that can be used for immediate access to any data item (some programming languages require the size of an array to be fixed when it is created). A linked list can be reorganized, grown or shrunk, but its elements must be accessed with a pointer that links them together in a particular order. Out of these, any number of other data structures can be created, such as stacks, queues, trees and hash tables.
In solving a problem, a data structure is generally an integral part of the algorithm. In modern programming style, algorithms and data structures are encapsulated together in an abstract data type.
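As a minimal illustration of these two basic types and of building further structures on top of them, the following hypothetical Python sketch (names are illustrative, not from the text) contrasts indexed access to an array-like list with pointer-following in a linked list, and shows a stack built from a list:

```python
from dataclasses import dataclass
from typing import Any, Optional

# Array-like structure: a Python list gives immediate access by index.
scores = [70, 85, 92]
print(scores[1])  # direct access to the second item

# A minimal singly linked list: each node holds data and a pointer to the next node.
@dataclass
class Node:
    data: Any
    next: Optional["Node"] = None

head = Node(70, Node(85, Node(92)))   # build the list
node = head
while node is not None:               # items are reached by following pointers in order
    print(node.data)
    node = node.next

# Other structures can be built from these primitives, e.g. a stack on top of a list:
stack = []
stack.append("task A")   # push
stack.append("task B")
print(stack.pop())       # pop returns "task B" (last in, first out)
```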
Software
Software architecture is the specific choices made between possible alternatives within a framework. For example, a framework might require a database and the architecture would specify the type and manufacturer of the database. The structure of software is the way in which it is partitioned into interrelated components. A key structural issue is minimizing dependencies between these components. This makes it possible to change one component without requiring changes in others. The purpose of structure is to optimise for qualities such as brevity, readability, traceability, isolation and encapsulation, maintainability, extensibility, performance and efficiency; it is reflected in choices such as language, code organization, functions, libraries, builds, system evolution, and diagrams for flow logic and design. Structural elements reflect the requirements of the application: for example, if the system requires a high fault tolerance, then a redundant structure is needed so that if a component fails it has backups. High redundancy is an essential part of the design of several systems in the Space Shuttle.
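As a small, hypothetical Python sketch of the dependency-minimization idea described above (the class and function names are illustrative, not from the text), a component can depend only on an abstract interface so that the concrete storage component can be swapped without changing the caller:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The rest of the program depends only on this interface,
    not on any particular storage component."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

class InMemoryStorage(Storage):
    def __init__(self) -> None:
        self._data = {}  # simple in-memory key-value store
    def save(self, key: str, value: str) -> None:
        self._data[key] = value

class FileStorage(Storage):
    def __init__(self, path: str) -> None:
        self._path = path
    def save(self, key: str, value: str) -> None:
        with open(self._path, "a") as f:
            f.write(f"{key}={value}\n")

def record_event(storage: Storage, event: str) -> None:
    # This component needs no changes when the storage component is swapped.
    storage.save("event", event)

record_event(InMemoryStorage(), "user logged in")
```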
Logical
As a branch of philosophy, logic is concerned with distinguishing good arguments from poor ones. A chief concern is with the structure of arguments. An argument consists of one or more premises from which a conclusion is inferred. The steps in this inference can be expressed in a formal way and their structure analyzed. Two basic types of inference are deduction and induction. In a valid deduction, the conclusion necessarily follows from the premises, regardless of whether they are true or not. An invalid deduction contains some error in the analysis. An inductive argument claims that if the premises are true, the conclusion is likely.
See also
Abstract structure
Mathematical structure
Structural geology
Structure (mathematical logic)
Structuralism (philosophy of science)
References
Further reading
External links
Acclimatization | Acclimatization or acclimatisation (also called acclimation or acclimatation) is the process in which an individual organism adjusts to a change in its environment (such as a change in altitude, temperature, humidity, photoperiod, or pH), allowing it to maintain fitness across a range of environmental conditions. Acclimatization occurs in a short period of time (hours to weeks), and within the organism's lifetime (compared to adaptation, which is evolution, taking place over many generations). This may be a discrete occurrence (for example, when mountaineers acclimate to high altitude over hours or days) or may instead represent part of a periodic cycle, such as a mammal shedding heavy winter fur in favor of a lighter summer coat. Organisms can adjust their morphological, behavioral, physical, and/or biochemical traits in response to changes in their environment. While the capacity to acclimate to novel environments has been well documented in thousands of species, researchers still know very little about how and why organisms acclimate the way that they do.
Names
The nouns acclimatization and acclimation (and the corresponding verbs acclimatize and acclimate) are widely regarded as synonymous, both in general vocabulary and in medical vocabulary. The synonym acclimatation is less commonly encountered, and fewer dictionaries enter it.
Methods
Biochemical
In order to maintain performance across a range of environmental conditions, there are several strategies organisms use to acclimate. In response to changes in temperature, organisms can change the biochemistry of cell membranes making them more fluid in cold temperatures and less fluid in warm temperatures by increasing the number of membrane proteins. In response to certain stressors, some organisms express so-called heat shock proteins that act as molecular chaperones and reduce denaturation by guiding the folding and refolding of proteins. It has been shown that organisms which are acclimated to high or low temperatures display relatively high resting levels of heat shock proteins so that when they are exposed to even more extreme temperatures the proteins are readily available. Expression of heat shock proteins and regulation of membrane fluidity are just two of many biochemical methods organisms use to acclimate to novel environments.
Morphological
Organisms are able to change several characteristics relating to their morphology in order to maintain performance in novel environments. For example, birds often increase their organ size to increase their metabolism. This can take the form of an increase in the mass of nutritional organs or heat-producing organs, like the pectorals (with the latter being more consistent across species).
The theory
While the capacity for acclimatization has been documented in thousands of species, researchers still know very little about how and why organisms acclimate in the way that they do. Since researchers first began to study acclimation, the overwhelming hypothesis has been that all acclimation serves to enhance the performance of the organism. This idea has come to be known as the beneficial acclimation hypothesis. Despite such widespread support for the beneficial acclimation hypothesis, not all studies show that acclimation always serves to enhance performance (See beneficial acclimation hypothesis). One of the major objections to the beneficial acclimation hypothesis is that it assumes that there are no costs associated with acclimation. However, there are likely to be costs associated with acclimation. These include the cost of sensing the environmental conditions and regulating responses, producing structures required for plasticity (such as the energetic costs in expressing heat shock proteins), and genetic costs (such as linkage of plasticity-related genes with harmful genes).
Given the shortcomings of the beneficial acclimation hypothesis, researchers are continuing to search for a theory that will be supported by empirical data.
The degree to which organisms are able to acclimate is dictated by their phenotypic plasticity or the ability of an organism to change certain traits. Recent research in the study of acclimation capacity has focused more heavily on the evolution of phenotypic plasticity rather than acclimation responses. Scientists believe that when they understand more about how organisms evolved the capacity to acclimate, they will better understand acclimation.
Examples
Plants
Many plants, such as maple trees, irises, and tomatoes, can survive freezing temperatures if the temperature gradually drops lower and lower each night over a period of days or weeks. The same drop might kill them if it occurred suddenly. Studies have shown that tomato plants that were acclimated to higher temperature over several days were more efficient at photosynthesis at relatively high temperatures than were plants that were not allowed to acclimate.
In the orchid Phalaenopsis, phenylpropanoid enzymes are enhanced in the process of plant acclimatisation at different levels of photosynthetic photon flux.
Animals
Animals acclimatize in many ways. Sheep grow very thick wool in cold, damp climates. Fish are able to adjust only gradually to changes in water temperature and quality. Tropical fish sold at pet stores are often kept in acclimatization bags until this process is complete. Lowe & Vance (1995) were able to show that lizards acclimated to warm temperatures could maintain a higher running speed at warmer temperatures than lizards that were not acclimated to warm conditions. Fruit flies that develop at relatively cooler or warmer temperatures have increased cold or heat tolerance as adults, respectively (See Developmental plasticity).
Humans
The salt content of sweat and urine decreases as people acclimatize to hot conditions. Plasma volume, heart rate, and capillary activation are also affected.
Acclimatization to high altitude continues for months or even years after initial ascent, and ultimately enables humans to survive in an environment that, without acclimatization, would kill them. Humans who migrate permanently to a higher altitude naturally acclimatize to their new environment by developing an increase in the number of red blood cells to increase the oxygen carrying capacity of the blood, in order to compensate for lower levels of oxygen intake.
See also
Acclimatisation society
Beneficial acclimation hypothesis
Heat index
Introduced species
Phenotypic plasticity
Wind chill
References
Physiology
Ecological processes
Climate
Biology terminology | 0.787664 | 0.994293 | 0.783168 |
Sepsis | Sepsis is a potentially life-threatening condition that arises when the body's response to infection causes injury to its own tissues and organs.
This initial stage of sepsis is followed by suppression of the immune system. Common signs and symptoms include fever, increased heart rate, increased breathing rate, and confusion. There may also be symptoms related to a specific infection, such as a cough with pneumonia, or painful urination with a kidney infection. The very young, old, and people with a weakened immune system may have no symptoms of a specific infection, and their body temperature may be low or normal instead of constituting a fever. Severe sepsis causes poor organ function or blood flow. The presence of low blood pressure, high blood lactate, or low urine output may suggest poor blood flow. Septic shock is low blood pressure due to sepsis that does not improve after fluid replacement.
Sepsis is caused by many organisms including bacteria, viruses and fungi. Common locations for the primary infection include the lungs, brain, urinary tract, skin, and abdominal organs. Risk factors include being very young or old, a weakened immune system from conditions such as cancer or diabetes, major trauma, and burns. Previously, a sepsis diagnosis required the presence of at least two systemic inflammatory response syndrome (SIRS) criteria in the setting of presumed infection. In 2016, a shortened sequential organ failure assessment score (SOFA score), known as the quick SOFA score (qSOFA), replaced the SIRS system of diagnosis. qSOFA criteria for sepsis include at least two of the following three: increased breathing rate, change in the level of consciousness, and low blood pressure. Sepsis guidelines recommend obtaining blood cultures before starting antibiotics; however, the diagnosis does not require the blood to be infected. Medical imaging is helpful when looking for the possible location of the infection. Other potential causes of similar signs and symptoms include anaphylaxis, adrenal insufficiency, low blood volume, heart failure, and pulmonary embolism.
Sepsis requires immediate treatment with intravenous fluids and antimicrobials. Ongoing care often continues in an intensive care unit. If an adequate trial of fluid replacement is not enough to maintain blood pressure, then the use of medications that raise blood pressure becomes necessary. Mechanical ventilation and dialysis may be needed to support the function of the lungs and kidneys, respectively. A central venous catheter and an arterial catheter may be placed for access to the bloodstream and to guide treatment. Other helpful measurements include cardiac output and superior vena cava oxygen saturation. People with sepsis need preventive measures for deep vein thrombosis, stress ulcers, and pressure ulcers unless other conditions prevent such interventions. Some people might benefit from tight control of blood sugar levels with insulin. The use of corticosteroids is controversial, with some reviews finding benefit, and others not.
Disease severity partly determines the outcome. The risk of death from sepsis is as high as 30%, while for severe sepsis it is as high as 50%, and septic shock 80%. Sepsis affected about 49 million people in 2017, with 11 million deaths (1 in 5 deaths worldwide). In the developed world, approximately 0.2 to 3 people per 1000 are affected by sepsis yearly, resulting in about a million cases per year in the United States. Rates of disease have been increasing. Some data indicate that sepsis is more common among males than females; however, other data show a greater prevalence of the disease among women. Descriptions of sepsis date back to the time of Hippocrates.
Signs and symptoms
In addition to symptoms related to the actual cause, people with sepsis may have a fever, low body temperature, rapid breathing, a fast heart rate, confusion, and edema. Early signs include a rapid heart rate, decreased urination, and high blood sugar. Signs of established sepsis include confusion, metabolic acidosis (which may be accompanied by a faster breathing rate that leads to respiratory alkalosis), low blood pressure due to decreased systemic vascular resistance, higher cardiac output, and disorders in blood-clotting that may lead to organ failure. Fever is the most common presenting symptom in sepsis, but fever may be absent in some people such as the elderly or those who are immunocompromised.
The drop in blood pressure seen in sepsis can cause lightheadedness and is part of the criteria for septic shock.
Oxidative stress is observed in septic shock, with circulating levels of copper and vitamin C being decreased.
Diastolic blood pressure falls during the early stages of sepsis, causing a widening/increasing of pulse pressure, which is the difference between the systolic and diastolic blood pressures. If sepsis becomes severe and hemodynamic compromise advances, the systolic pressure also decreases, causing a narrowing/decreasing of pulse pressure. A pulse pressure of over 70 mmHg in patients with sepsis is correlated with an increased chance of survival. A widened pulse pressure is also correlated with an increased chance that someone with sepsis will benefit from and respond to IV fluids.
Cause
Infections leading to sepsis are usually bacterial but may be fungal, parasitic or viral. Gram-positive bacteria were the primary cause of sepsis before the introduction of antibiotics in the 1950s. After the introduction of antibiotics, gram-negative bacteria became the predominant cause of sepsis from the 1960s to the 1980s. After the 1980s, gram-positive bacteria, most commonly staphylococci, are thought to cause more than 50% of cases of sepsis. Other commonly implicated bacteria include Streptococcus pyogenes, Escherichia coli, Pseudomonas aeruginosa, and Klebsiella species. Fungal sepsis accounts for approximately 5% of severe sepsis and septic shock cases; the most common cause of fungal sepsis is an infection by Candida species of yeast, a frequent hospital-acquired infection. The most common causes for parasitic sepsis are Plasmodium (which leads to malaria), Schistosoma and Echinococcus.
The most common sites of infection resulting in severe sepsis are the lungs, the abdomen, and the urinary tract. Typically, 50% of all sepsis cases start as an infection in the lungs. In one-third to one-half of cases, the source of infection is unclear.
Pathophysiology
Sepsis is caused by a combination of factors related to the particular invading pathogen(s) and to the status of the immune system of the host. The early phase of sepsis characterized by excessive inflammation (sometimes resulting in a cytokine storm) may be followed by a prolonged period of decreased functioning of the immune system. Either of these phases may prove fatal. On the other hand, systemic inflammatory response syndrome (SIRS) occurs in people without the presence of infection, for example, in those with burns, polytrauma, or the initial state in pancreatitis and chemical pneumonitis. However, sepsis also produces a response similar to SIRS.
Microbial factors
Bacterial virulence factors, such as glycocalyx and various adhesins, allow colonization, immune evasion, and establishment of disease in the host. Sepsis caused by gram-negative bacteria is thought to be largely due to a response by the host to the lipid A component of lipopolysaccharide, also called endotoxin. Sepsis caused by gram-positive bacteria may result from an immunological response to cell wall lipoteichoic acid. Bacterial exotoxins that act as superantigens also may cause sepsis. Superantigens simultaneously bind major histocompatibility complex and T-cell receptors in the absence of antigen presentation. This forced receptor interaction induces the production of pro-inflammatory chemical signals (cytokines) by T-cells.
There are a number of microbial factors that may cause the typical septic inflammatory cascade. An invading pathogen is recognized by its pathogen-associated molecular patterns (PAMPs). Examples of PAMPs include lipopolysaccharides and flagellin in gram-negative bacteria, muramyl dipeptide in the peptidoglycan of the gram-positive bacterial cell wall, and CpG bacterial DNA. These PAMPs are recognized by the pattern recognition receptors (PRRs) of the innate immune system, which may be membrane-bound or cytosolic. There are four families of PRRs: the toll-like receptors, the C-type lectin receptors, the NOD-like receptors, and the RIG-I-like receptors. Invariably, the association of a PAMP and a PRR will cause a series of intracellular signalling cascades. Consequently, transcription factors such as nuclear factor-kappa B and activator protein-1 will up-regulate the expression of pro-inflammatory and anti-inflammatory cytokines.
Host factors
Upon detection of microbial antigens, the host systemic immune system is activated. Immune cells not only recognise pathogen-associated molecular patterns but also damage-associated molecular patterns from damaged tissues. An uncontrolled immune response is then activated because leukocytes are not recruited to the specific site of infection, but instead they are recruited all over the body. Then, an immunosuppression state ensues when the proinflammatory T helper cell 1 (TH1) is shifted to TH2, mediated by interleukin 10, which is known as "compensatory anti-inflammatory response syndrome". The apoptosis (cell death) of lymphocytes further worsens the immunosuppression. Neutrophils, monocytes, macrophages, dendritic cells, CD4+ T cells, and B cells all undergo apoptosis, whereas regulatory T cells are more apoptosis resistant. Subsequently, multiple organ failure ensues because tissues are unable to use oxygen efficiently due to inhibition of cytochrome c oxidase.
Inflammatory responses cause multiple organ dysfunction syndrome through various mechanisms as described below. Increased permeability of the lung vessels causes leaking of fluids into alveoli, which results in pulmonary edema and acute respiratory distress syndrome (ARDS). Impaired utilization of oxygen in the liver impairs bile salt transport, causing jaundice (yellowish discoloration of the skin). In kidneys, inadequate oxygenation results in tubular epithelial cell injury (of the cells lining the kidney tubules), and thus causes acute kidney injury (AKI). Meanwhile, in the heart, impaired calcium transport, and low production of adenosine triphosphate (ATP), can cause myocardial depression, reducing cardiac contractility and causing heart failure. In the gastrointestinal tract, increased permeability of the mucosa alters the microflora, causing mucosal bleeding and paralytic ileus. In the central nervous system, direct damage of the brain cells and disturbances of neurotransmissions causes altered mental status. Cytokines such as tumor necrosis factor, interleukin 1, and interleukin 6 may activate procoagulation factors in the cells lining blood vessels, leading to endothelial damage. The damaged endothelial surface inhibits anticoagulant properties as well as increases antifibrinolysis, which may lead to intravascular clotting, the formation of blood clots in small blood vessels, and multiple organ failure.
The low blood pressure seen in those with sepsis is the result of various processes, including excessive production of chemicals that dilate blood vessels such as nitric oxide, a deficiency of chemicals that constrict blood vessels such as vasopressin, and activation of ATP-sensitive potassium channels. In those with severe sepsis and septic shock, this sequence of events leads to a type of circulatory shock known as distributive shock.
Diagnosis
Early diagnosis is necessary to properly manage sepsis, as the initiation of rapid therapy is key to reducing deaths from severe sepsis. Some hospitals use alerts generated from electronic health records to bring attention to potential cases as early as possible.
Within the first three hours of suspected sepsis, diagnostic studies should include white blood cell counts, measuring serum lactate, and obtaining appropriate cultures before starting antibiotics, so long as this does not delay their use by more than 45 minutes. To identify the causative organism(s), at least two sets of blood cultures using bottles with media for aerobic and anaerobic organisms are necessary. At least one should be drawn through the skin and one through each vascular access device (such as an IV catheter) that has been in place more than 48 hours. Bacteria are present in the blood in only about 30% of cases. Another possible method of detection is by polymerase chain reaction. If other sources of infection are suspected, cultures of these sources, such as urine, cerebrospinal fluid, wounds, or respiratory secretions, also should be obtained, as long as this does not delay the use of antibiotics.
Within six hours, if blood pressure remains low despite initial fluid resuscitation of 30 mL/kg, or if initial lactate is ≥ 4 mmol/L (36 mg/dL), central venous pressure and central venous oxygen saturation should be measured. Lactate should be re-measured if the initial lactate was elevated. Evidence for point-of-care lactate measurement over usual methods of measurement, however, is poor.
Within twelve hours, it is essential to diagnose or exclude any source of infection that would require emergent source control, such as a necrotizing soft tissue infection, an infection causing inflammation of the abdominal cavity lining, an infection of the bile duct, or an intestinal infarction. A pierced internal organ (free air on an abdominal X-ray or CT scan), an abnormal chest X-ray consistent with pneumonia (with focal opacification), or petechiae, purpura, or purpura fulminans may indicate the presence of an infection.
Definitions
Previously, SIRS criteria had been used to define sepsis. If the SIRS criteria are negative, it is very unlikely the person has sepsis; if they are positive, there is just a moderate probability that the person has sepsis. According to SIRS, there were different levels of sepsis: sepsis, severe sepsis, and septic shock. The definition of SIRS is shown below:
SIRS is the presence of two or more of the following: abnormal body temperature, heart rate, respiratory rate or blood gas, and white blood cell count.
Sepsis is defined as SIRS in response to an infectious process.
Severe sepsis is defined as sepsis with sepsis-induced organ dysfunction or tissue hypoperfusion (manifesting as hypotension, elevated lactate, or decreased urine output). Severe sepsis is an infectious disease state associated with multiple organ dysfunction syndrome (MODS)
Septic shock is severe sepsis plus persistently low blood pressure, despite the administration of intravenous fluids.
In 2016 a new consensus was reached to replace screening by systemic inflammatory response syndrome (SIRS) with the sequential organ failure assessment (SOFA score) and the abbreviated version (qSOFA). The three criteria for the qSOFA score include a respiratory rate greater than or equal to 22 breaths per minute, systolic blood pressure of 100 mmHg or less, and altered mental status. Sepsis is suspected when 2 of the qSOFA criteria are met. The SOFA score was intended to be used in the intensive care unit (ICU), where it is administered upon admission to the ICU and then repeated every 48 hours, whereas the qSOFA could be used outside the ICU. Some advantages of the qSOFA score are that it can be administered quickly and does not require labs. However, the American College of Chest Physicians (CHEST) raised concerns that qSOFA and SOFA criteria may lead to delayed diagnosis of serious infection, leading to delayed treatment. Although SIRS criteria can be too sensitive and not specific enough in identifying sepsis, SOFA also has its limitations and is not intended to replace the SIRS definition. qSOFA has also been found to be poorly sensitive though decently specific for the risk of death, with SIRS possibly better for screening. Note that the Surviving Sepsis Campaign 2021 guidelines recommend "against using qSOFA compared with SIRS, NEWS, or MEWS as a single screening tool for sepsis or septic shock".
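As an illustration of how the qSOFA criteria above translate into a simple count, the following minimal Python sketch (an illustrative aid only, not a clinical tool; the function name and threshold handling are assumptions) scores the three criteria:

```python
def qsofa_score(respiratory_rate: float, systolic_bp: float, altered_mental_status: bool) -> int:
    """Count how many of the three qSOFA criteria described above are met."""
    score = 0
    if respiratory_rate >= 22:   # breaths per minute
        score += 1
    if systolic_bp <= 100:       # mmHg
        score += 1
    if altered_mental_status:
        score += 1
    return score

score = qsofa_score(respiratory_rate=24, systolic_bp=95, altered_mental_status=False)
print(score, "- sepsis suspected" if score >= 2 else "- below the qSOFA threshold")
```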
End-organ dysfunction
Examples of end-organ dysfunction include the following:
Lungs: acute respiratory distress syndrome (ARDS) (PaO2/FiO2 ratio < 300); a different ratio is used in pediatric acute respiratory distress syndrome
Brain: encephalopathy symptoms including agitation, confusion, coma; causes may include ischemia, bleeding, formation of blood clots in small blood vessels, microabscesses, multifocal necrotizing leukoencephalopathy
Liver: disruption of protein synthetic function manifests acutely as progressive disruption of blood clotting due to an inability to synthesize clotting factors, while disruption of metabolic functions leads to impaired bilirubin metabolism, resulting in elevated unconjugated serum bilirubin levels
Kidney: low urine output or no urine output, electrolyte abnormalities, or volume overload
Heart: systolic and diastolic heart failure, likely due to chemical signals that depress myocyte function, and cellular damage, manifesting as a troponin leak (although not necessarily ischemic in nature)
More specific definitions of end-organ dysfunction exist for SIRS in pediatrics.
Cardiovascular dysfunction (after fluid resuscitation with at least 40 mL/kg of crystalloid)
hypotension with blood pressure < 5th percentile for age or systolic blood pressure < 2 standard deviations below normal for age, or
vasopressor requirement, or
two of the following criteria:
unexplained metabolic acidosis with base deficit > 5 mEq/L
lactic acidosis: serum lactate 2 times the upper limit of normal
oliguria (urine output )
prolonged capillary refill > 5 seconds
core to peripheral temperature difference
Respiratory dysfunction (in the absence of a cyanotic heart defect or a known chronic respiratory disease)
the ratio of the arterial partial-pressure of oxygen to the fraction of oxygen in the gases inspired (PaO2/FiO2) < 300 (the definition of acute lung injury), or
arterial partial-pressure of carbon dioxide (PaCO2) > 65 torr (20 mmHg) over baseline PaCO2 (evidence of hypercapnic respiratory failure), or
supplemental oxygen requirement of greater than FiO2 0.5 to maintain oxygen saturation ≥ 92%
Neurologic dysfunction
Glasgow Coma Score (GCS) ≤ 11, or
altered mental status with drop in GCS of 3 or more points in a person with developmental delay/intellectual disability
Hematologic dysfunction
platelet count or 50% drop from maximum in chronically thrombocytopenic, or
international normalized ratio (INR) > 2
Disseminated intravascular coagulation
Kidney dysfunction
serum creatinine ≥ 2 times the upper limit of normal for age or 2-fold increase in baseline creatinine in people with chronic kidney disease
Liver dysfunction (only applicable to infants > 1 month)
total serum bilirubin ≥ 4 mg/dL, or
alanine aminotransferase (ALT) ≥ 2 times the upper limit of normal
Consensus definitions, however, continue to evolve, with the latest expanding the list of signs and symptoms of sepsis to reflect clinical bedside experience.
Biomarkers
Biomarkers can help diagnosis because they can point to the presence or severity of sepsis, although their exact role in the management of sepsis remains undefined. A 2013 review concluded moderate-quality evidence exists to support the use of the procalcitonin (PCT) level as a method to distinguish sepsis from non-infectious causes of SIRS. The same review found the sensitivity of the test to be 77% and the specificity to be 79%. The authors suggested that procalcitonin may serve as a helpful diagnostic marker for sepsis, but cautioned that its level alone does not definitively make the diagnosis. More current literature recommends using PCT levels to direct antibiotic therapy for improved antibiotic stewardship and better patient outcomes.
A 2012 systematic review found that soluble urokinase-type plasminogen activator receptor (SuPAR) is a nonspecific marker of inflammation and does not accurately diagnose sepsis. This same review concluded, however, that SuPAR has prognostic value, as higher SuPAR levels are associated with an increased rate of death in those with sepsis. Serial measurement of lactate levels (approximately every 4 to 6 hours) may guide treatment and is associated with lower mortality in sepsis.
Differential diagnosis
The differential diagnosis for sepsis is broad and has to examine (to exclude) the non-infectious conditions that may cause the systemic signs of SIRS: alcohol withdrawal, acute pancreatitis, burns, pulmonary embolism, thyrotoxicosis, anaphylaxis, adrenal insufficiency, and neurogenic shock. Hyperinflammatory syndromes such as hemophagocytic lymphohistiocytosis (HLH) may have similar symptoms and are on the differential diagnosis.
Neonatal sepsis
In common clinical usage, neonatal sepsis refers to a bacterial blood stream infection in the first month of life, such as meningitis, pneumonia, pyelonephritis, or gastroenteritis, but neonatal sepsis also may be due to infection with fungi, viruses, or parasites. Criteria with regard to hemodynamic compromise or respiratory failure are not useful because they present too late for intervention.
Management
Early recognition and focused management may improve the outcomes in sepsis. Current professional recommendations include a number of actions ("bundles") to be followed as soon as possible after diagnosis. Within the first three hours, someone with sepsis should have received antibiotics and, intravenous fluids if there is evidence of either low blood pressure or other evidence for inadequate blood supply to organs (as evidenced by a raised level of lactate); blood cultures also should be obtained within this time period. After six hours the blood pressure should be adequate, close monitoring of blood pressure and blood supply to organs should be in place, and the lactate should be measured again if initially it was raised. A related bundle, the "Sepsis Six", is in widespread use in the United Kingdom; this requires the administration of antibiotics within an hour of recognition, blood cultures, lactate, and hemoglobin determination, urine output monitoring, high-flow oxygen, and intravenous fluids.
Apart from the timely administration of fluids and antibiotics, the management of sepsis also involves surgical drainage of infected fluid collections and appropriate support for organ dysfunction. This may include hemodialysis in kidney failure, mechanical ventilation in lung dysfunction, transfusion of blood products, and drug and fluid therapy for circulatory failure. Ensuring adequate nutrition—preferably by enteral feeding, but if necessary, by parenteral nutrition—is important during prolonged illness. Medication to prevent deep vein thrombosis and gastric ulcers also may be used.
Antibiotics
Two sets of blood cultures (aerobic and anaerobic) are recommended without delaying the initiation of antibiotics. Cultures from other sites such as respiratory secretions, urine, wounds, cerebrospinal fluid, and catheter insertion sites (in-situ more than 48 hours) are recommended if infections from these sites are suspected. In severe sepsis and septic shock, broad-spectrum antibiotics (usually two, a β-lactam antibiotic with broad coverage, or broad-spectrum carbapenem combined with fluoroquinolones, macrolides, or aminoglycosides) are recommended. The choice of antibiotics is important in determining the survival of the person. Some recommend they be given within one hour of making the diagnosis, stating that for every hour of delay in the administration of antibiotics, there is an associated 6% rise in mortality. Others did not find a benefit with early administration.
Several factors determine the most appropriate choice for the initial antibiotic regimen. These factors include local patterns of bacterial sensitivity to antibiotics, whether the infection is thought to be a hospital or community-acquired infection, and which organ systems are thought to be infected. Antibiotic regimens should be reassessed daily and narrowed if appropriate. Treatment duration is typically 7–10 days with the type of antibiotic used directed by the results of cultures. If the culture result is negative, antibiotics should be de-escalated according to the person's clinical response or stopped altogether if an infection is not present to decrease the chances that the person is infected with multidrug-resistant organisms. For people at high risk of infection with multidrug-resistant organisms such as Pseudomonas aeruginosa or Acinetobacter baumannii, the addition of an antibiotic specific to the gram-negative organism is recommended. For methicillin-resistant Staphylococcus aureus (MRSA), vancomycin or teicoplanin is recommended. For Legionella infection, the addition of a macrolide or fluoroquinolone is chosen. If fungal infection is suspected, an echinocandin, such as caspofungin or micafungin, is chosen for people with severe sepsis, followed by a triazole (fluconazole or itraconazole) for less ill people. Prolonged antibiotic prophylaxis is not recommended in people who have SIRS without any infectious origin such as acute pancreatitis and burns unless sepsis is suspected.
Once-daily dosing of aminoglycosides is sufficient to achieve peak plasma concentration for a clinical response without kidney toxicity. Meanwhile, for antibiotics with a low volume of distribution (vancomycin, teicoplanin, colistin), a loading dose is required to achieve an adequate therapeutic level to fight infections. Frequent infusions of beta-lactam antibiotics without exceeding the total daily dose help to keep the antibiotic level above the minimum inhibitory concentration (MIC), thus providing a better clinical response. Giving beta-lactam antibiotics continuously may be better than giving them intermittently. Access to therapeutic drug monitoring is important to ensure an adequate therapeutic drug level while at the same time preventing the drug from reaching a toxic level.
Intravenous fluids
The Surviving Sepsis Campaign has recommended 30 mL/kg of fluid to be given in adults in the first three hours followed by fluid titration according to blood pressure, urine output, respiratory rate, and oxygen saturation with a target mean arterial pressure (MAP) of 65 mmHg. In children an initial amount of 20 mL/kg is reasonable in shock. In cases of severe sepsis and septic shock where a central venous catheter is used to measure blood pressures dynamically, fluids should be administered until the central venous pressure reaches 8–12 mmHg. Once these goals are met, the central venous oxygen saturation (ScvO2), i.e., the oxygen saturation of venous blood as it returns to the heart as measured at the vena cava, is optimized. If the ScvO2 is less than 70%, blood may be given to reach a hemoglobin of 10 g/dL and then inotropes are added until the ScvO2 is optimized. In those with acute respiratory distress syndrome (ARDS) and sufficient tissue blood fluid, more fluids should be given carefully.
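As a simple worked example of the weight-based volumes quoted above (an arithmetic illustration only, with a hypothetical function name, not clinical guidance):

```python
def initial_fluid_bolus_ml(weight_kg: float, is_child: bool = False) -> float:
    """Initial crystalloid volume using the figures quoted above:
    30 mL/kg for adults and 20 mL/kg for children in shock."""
    ml_per_kg = 20 if is_child else 30
    return weight_kg * ml_per_kg

print(initial_fluid_bolus_ml(70))         # 2100 mL for a 70 kg adult
print(initial_fluid_bolus_ml(15, True))   # 300 mL for a 15 kg child
```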
Crystalloid solution is recommended as the fluid of choice for resuscitation. Albumin can be used if a large amount of crystalloid is required for resuscitation. Crystalloid solutions show little difference from hydroxyethyl starch in terms of risk of death. Starches also carry an increased risk of acute kidney injury and need for blood transfusion. Various colloid solutions (such as modified gelatin) carry no advantage over crystalloids. Albumin also appears to be of no benefit over crystalloids.
Blood products
The Surviving Sepsis Campaign recommended packed red blood cell transfusion for hemoglobin levels below 70 g/L if there is no myocardial ischemia, hypoxemia, or acute bleeding. In a 2014 trial, blood transfusions to keep target hemoglobin above 70 or 90 g/L did not make any difference to survival rates; meanwhile, those with a lower threshold of transfusion received fewer transfusions in total. Erythropoietin is not recommended in the treatment of anemia with septic shock because it may precipitate blood clotting events. Fresh frozen plasma transfusion usually does not correct the underlying clotting abnormalities before a planned surgical procedure. However, platelet transfusion is suggested for platelet counts below 10 × 10⁹/L without any risk of bleeding, below 20 × 10⁹/L with a high risk of bleeding, or below 50 × 10⁹/L with active bleeding or before a planned surgery or an invasive procedure. IV immunoglobulin is not recommended because its beneficial effects are uncertain. Monoclonal and polyclonal preparations of intravenous immunoglobulin (IVIG) do not lower the rate of death in newborns and adults with sepsis. Evidence for the use of IgM-enriched polyclonal preparations of IVIG is inconsistent. On the other hand, the use of antithrombin to treat disseminated intravascular coagulation is also not useful. Meanwhile, blood purification techniques (such as hemoperfusion, plasma filtration, and coupled plasma filtration adsorption) to remove inflammatory mediators and bacterial toxins from the blood also do not demonstrate any survival benefit for septic shock.
Vasopressors
If the person has been sufficiently fluid resuscitated but the mean arterial pressure is not greater than 65 mmHg, vasopressors are recommended. Norepinephrine (noradrenaline) is recommended as the initial choice. Delaying initiation of vasopressor therapy during septic shock is associated with increased mortality.
Norepinephrine is often used as a first-line treatment for hypotensive septic shock; it raises blood pressure through a vasoconstriction effect, with little effect on stroke volume and heart rate. In some people, the required dose of vasopressor needed to increase the mean arterial pressure can become so high that it becomes toxic. In order to reduce the required dose of vasopressor, epinephrine may be added. Epinephrine is not often used as a first-line treatment for hypotensive shock because it reduces blood flow to the abdominal organs and increases lactate levels. Vasopressin can be used in septic shock because studies have shown that there is a relative deficiency of vasopressin when shock continues for 24 to 48 hours. However, vasopressin reduces blood flow to the heart, fingers and toes, and abdominal organs, resulting in a lack of oxygen supply to these tissues. Dopamine is typically not recommended. Although dopamine is useful to increase the stroke volume of the heart, it causes more abnormal heart rhythms than norepinephrine and also has an immunosuppressive effect. Dopamine is not proven to have protective properties on the kidneys. Dobutamine can also be used in hypotensive septic shock to increase cardiac output and correct blood flow to the tissues. Dobutamine is not used as often as epinephrine due to its associated side effects, which include reducing blood flow to the gut. Additionally, dobutamine increases the cardiac output by abnormally increasing the heart rate.
Steroids
The use of steroids in sepsis is controversial. Studies do not give a clear picture as to whether and when glucocorticoids should be used. The 2016 Surviving Sepsis Campaign recommends low dose hydrocortisone only if both intravenous fluids and vasopressors are not able to adequately treat septic shock. The 2021 Surviving Sepsis Campaign recommends IV corticosteroids for adults with septic shock who have an ongoing requirement for vasopressor therapy. A 2019 Cochrane review found low-quality evidence of benefit, as did two 2019 reviews.
During critical illness, a state of adrenal insufficiency and tissue resistance to corticosteroids may occur. This has been termed critical illness–related corticosteroid insufficiency. Treatment with corticosteroids might be most beneficial in those with septic shock and early severe ARDS, whereas its role in others such as those with pancreatitis or severe pneumonia is unclear. However, the exact way of determining corticosteroid insufficiency remains problematic. It should be suspected in those poorly responding to resuscitation with fluids and vasopressors. Neither ACTH stimulation testing nor random cortisol levels are recommended to confirm the diagnosis. The method of stopping glucocorticoid drugs is variable, and it is unclear whether they should be slowly decreased or simply abruptly stopped. However, the 2016 Surviving Sepsis Campaign recommended to taper steroids when vasopressors are no longer needed.
Anesthesia
A target tidal volume of 6 mL/kg of predicted body weight (PBW) and a plateau pressure less than 30 cm H2O is recommended for those who require ventilation due to sepsis-induced severe ARDS. High positive end expiratory pressure (PEEP) is recommended for moderate to severe ARDS in sepsis as it opens more lung units for oxygen exchange. Predicted body weight is calculated based on sex and height, and tools for this are available. Recruitment maneuvers may be necessary for severe ARDS by briefly raising the transpulmonary pressure. It is recommended that the head of the bed be raised if possible to improve ventilation. However, β2 adrenergic receptor agonists are not recommended to treat ARDS because it may reduce survival rates and precipitate abnormal heart rhythms. A spontaneous breathing trial using continuous positive airway pressure (CPAP), T piece, or inspiratory pressure augmentation can be helpful in reducing the duration of ventilation. Minimizing intermittent or continuous sedation is helpful in reducing the duration of mechanical ventilation.
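The paragraph above states that predicted body weight is derived from sex and height; the following minimal Python sketch assumes the commonly cited ARDSNet-style formula (which is not given in the text above) and illustrates the 6 mL/kg tidal volume target:

```python
def predicted_body_weight_kg(height_cm: float, male: bool) -> float:
    """Predicted body weight from sex and height.
    Assumes the commonly cited ARDSNet-style formula; the source text
    only states that PBW is calculated from sex and height."""
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def target_tidal_volume_ml(height_cm: float, male: bool, ml_per_kg: float = 6.0) -> float:
    """Target tidal volume of 6 mL per kg of predicted body weight."""
    return ml_per_kg * predicted_body_weight_kg(height_cm, male)

print(round(target_tidal_volume_ml(175, male=True)))  # about 423 mL for a 175 cm man
```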
General anesthesia is recommended for people with sepsis who require surgical procedures to remove the infective source. Usually, inhalational and intravenous anesthetics are used. Requirements for anesthetics may be reduced in sepsis. Inhalational anesthetics can reduce the level of proinflammatory cytokines, altering leukocyte adhesion and proliferation, inducing apoptosis (cell death) of the lymphocytes, possibly with a toxic effect on mitochondrial function. Although etomidate has a minimal effect on the cardiovascular system, it is often not recommended as a medication to help with intubation in this situation due to concerns it may lead to poor adrenal function and an increased risk of death. The small amount of evidence there is, however, has not found a change in the risk of death with etomidate.
Paralytic agents are not suggested for use in sepsis cases in the absence of ARDS, as a growing body of evidence points to reduced durations of mechanical ventilation and of ICU and hospital stays when they are avoided. However, paralytic use in ARDS cases remains controversial. When appropriately used, paralytics may aid successful mechanical ventilation; however, evidence has also suggested that mechanical ventilation in severe sepsis does not improve oxygen consumption and delivery.
Source control
Source control refers to physical interventions to control a focus of infection and reduce conditions favorable to microorganism growth or host defense impairment, such as drainage of pus from an abscess. It is one of the oldest procedures for control of infections, giving rise to the Latin phrase Ubi pus, ibi evacua, and remains important despite the emergence of more modern treatments.
Early goal directed therapy
Early goal directed therapy (EGDT) is an approach to the management of severe sepsis during the initial 6 hours after diagnosis. It is a step-wise approach, with the physiologic goal of optimizing cardiac preload, afterload, and contractility. It includes giving early antibiotics. EGDT also involves monitoring of hemodynamic parameters and specific interventions to achieve key resuscitation targets which include maintaining a central venous pressure between 8–12 mmHg, a mean arterial pressure of between 65 and 90 mmHg, a central venous oxygen saturation (ScvO2) greater than 70% and a urine output of greater than 0.5 mL/kg/hour. The goal is to optimize oxygen delivery to tissues and achieve a balance between systemic oxygen delivery and demand. An appropriate decrease in serum lactate may be equivalent to ScvO2 and easier to obtain.
In the original trial, early goal-directed therapy was found to reduce mortality from 46.5% to 30.5% in those with sepsis, and the Surviving Sepsis Campaign has been recommending its use. However, three more recent large randomized control trials (ProCESS, ARISE, and ProMISe), did not demonstrate a 90-day mortality benefit of early goal-directed therapy when compared to standard therapy in severe sepsis. It is likely that some parts of EGDT are more important than others. Following these trials the use of EGDT is still considered reasonable.
Newborns
Neonatal sepsis can be difficult to diagnose as newborns may be asymptomatic. If a newborn shows signs and symptoms suggestive of sepsis, antibiotics are immediately started and are either changed to target a specific organism identified by diagnostic testing or discontinued after an infectious cause for the symptoms has been ruled out. Despite early intervention, death occurs in 13% of children who develop septic shock, with the risk partly based on other health problems. For those without multiple organ system failures or who require only one inotropic agent, mortality is low.
Other
Treating fever in sepsis, including people in septic shock, has not been associated with any improvement in mortality over a period of 28 days. Treatment of fever still occurs for other reasons.
A 2012 Cochrane review concluded that N-acetylcysteine does not reduce mortality in those with SIRS or sepsis and may even be harmful.
Recombinant activated protein C (drotrecogin alpha) was originally introduced for severe sepsis (as identified by a high APACHE II score), where it was thought to confer a survival benefit. However, subsequent studies showed that it increased adverse events—bleeding risk in particular—and did not decrease mortality. It was removed from sale in 2011. Another medication known as eritoran also has not shown benefit.
In those with high blood sugar levels, insulin to bring it down to 7.8–10 mmol/L (140–180 mg/dL) is recommended with lower levels potentially worsening outcomes. Glucose levels taken from capillary blood should be interpreted with care because such measurements may not be accurate. If a person has an arterial catheter, arterial blood is recommended for blood glucose testing.
Intermittent or continuous renal replacement therapy may be used if indicated. However, sodium bicarbonate is not recommended for a person with lactic acidosis secondary to hypoperfusion. Low-molecular-weight heparin (LMWH), unfractionated heparin (UFH), and mechanical prophylaxis with intermittent pneumatic compression devices are recommended for any person with sepsis at moderate to high risk of venous thromboembolism. Stress ulcer prevention with proton-pump inhibitors (PPIs) and H2 antagonists is useful in a person with risk factors for developing upper gastrointestinal bleeding (UGIB), such as mechanical ventilation for more than 48 hours, coagulation disorders, liver disease, and renal replacement therapy. Achieving partial or full enteral feeding (delivery of nutrients through a feeding tube) is chosen as the best approach to provide nutrition, compared with intravenous nutrition, for a person in whom oral intake is contraindicated or who cannot tolerate it in the first seven days of sepsis. However, omega-3 fatty acids are not recommended as immune supplements for a person with sepsis or septic shock. The usage of prokinetic agents such as metoclopramide, domperidone, and erythromycin is recommended for those who are septic and unable to tolerate enteral feeding. However, these agents may precipitate prolongation of the QT interval and consequently provoke a ventricular arrhythmia such as torsades de pointes. The usage of prokinetic agents should be reassessed daily and stopped if no longer indicated.
People in sepsis may have micronutrient deficiencies, including low levels of vitamin C. Reviews mention that an intake of 3.0 g/day, which requires intravenous administration, may be needed to maintain normal plasma concentrations in people with sepsis or severe burn injury.
Prognosis
Sepsis will prove fatal in approximately 24.4% of people, and septic shock will prove fatal in 34.7% of people within 30 days (32.2% and 38.5% after 90 days).
Lactate is a useful method of determining prognosis, with those who have a level greater than 4 mmol/L having a mortality of 40% and those with a level of less than 2 mmol/L having a mortality of less than 15%.
There are a number of prognostic stratification systems, such as APACHE II and Mortality in Emergency Department Sepsis. APACHE II factors in the person's age, underlying condition, and various physiologic variables to yield estimates of the risk of dying of severe sepsis. Of the individual covariates, the severity of the underlying disease most strongly influences the risk of death. Septic shock is also a strong predictor of short- and long-term mortality. Case-fatality rates are similar for culture-positive and culture-negative severe sepsis. The Mortality in Emergency Department Sepsis (MEDS) score is simpler and useful in the emergency department environment.
Some people may experience severe long-term cognitive decline following an episode of severe sepsis, but the absence of baseline neuropsychological data in most people with sepsis makes the incidence of this difficult to quantify or to study.
Epidemiology
Sepsis causes millions of deaths globally each year and is the most common cause of death in people who have been hospitalized. The number of new cases worldwide of sepsis is estimated to be 18 million cases per year. In the United States sepsis affects approximately 3 in 1,000 people, and severe sepsis contributes to more than 200,000 deaths per year.
Sepsis occurs in 1–2% of all hospitalizations and accounts for as much as 25% of ICU bed utilization. Due to it rarely being reported as a primary diagnosis (often being a complication of cancer or other illness), the incidence, mortality, and morbidity rates of sepsis are likely underestimated. A study of U.S. states found approximately 651 hospital stays per 100,000 population with a sepsis diagnosis in 2010. It is the second-leading cause of death in non-coronary intensive care unit (ICU) and the tenth-most-common cause of death overall (the first being heart disease). Children under 12 months of age and elderly people have the highest incidence of severe sepsis. Among people from the U.S. who had multiple sepsis hospital admissions in 2010, those who were discharged to a skilled nursing facility or long-term care following the initial hospitalization were more likely to be readmitted than those discharged to another form of care. A study of 18 U.S. states found that, amongst people with Medicare in 2011, sepsis was the second most common principal reason for readmission within 30 days.
Several medical conditions increase a person's susceptibility to infection and developing sepsis. Common sepsis risk factors include age (especially the very young and old); conditions that weaken the immune system such as cancer, diabetes, or the absence of a spleen; and major trauma and burns.
From 1979 to 2000, data from the United States National Hospital Discharge Survey showed that the incidence of sepsis increased fourfold, to 240 cases per 100,000 population, with a higher incidence in men when compared to women. However, the global prevalence of sepsis has been estimated to be higher in women. During the same time frame, the in-hospital case fatality rate was reduced from 28% to 18%. However, according to the nationwide inpatient sample from the United States, the incidence of severe sepsis increased from 200 per 10,000 population in 2003 to 300 cases in 2007 for population aged more than 18 years. The incidence rate is particularly high among infants, with an incidence of 500 cases per 100,000 population. Mortality related to sepsis increases with age, from less than 10% in the age group of 3 to 5 years to 60% by sixth decade of life. The increase in the average age of the population, alongside the presence of more people with chronic diseases or on immunosuppressive medications, and also the increase in the number of invasive procedures being performed, has led to an increased rate of sepsis.
History
The term "σήψις" (sepsis) was introduced by Hippocrates in the fourth century BC, and it meant the process of decay or decomposition of organic matter. In the eleventh century, Avicenna used the term "blood rot" for diseases linked to severe purulent process. Though severe systemic toxicity had already been observed, it was only in the 19th century that the specific term – sepsis – was used for this condition.
The terms "septicemia", also spelled "septicaemia", and "blood poisoning" referred to the microorganisms or their toxins in the blood. The International Statistical Classification of Diseases and Related Health Problems (ICD) version 9, which was in use in the US until 2013, used the term septicemia with numerous modifiers for different diagnoses, such as "Streptococcal septicemia". All those diagnoses have been converted to sepsis, again with modifiers, in ICD-10, such as "Sepsis due to streptococcus".
The current terms are dependent on the microorganism that is present: bacteremia if bacteria are present in the blood at abnormal levels and are the causative issue, viremia for viruses, and fungemia for a fungus.
By the end of the 19th century, it was widely believed that microbes produced substances that could injure the mammalian host and that soluble toxins released during infection caused the fever and shock that were commonplace during severe infections. Pfeiffer coined the term endotoxin at the beginning of the 20th century to denote the pyrogenic principle associated with Vibrio cholerae. It was soon realized that endotoxins were expressed by most and perhaps all gram-negative bacteria. The lipopolysaccharide character of enteric endotoxins was elucidated in 1944 by Shear. The molecular character of this material was determined by Luderitz et al. in 1973.
It was discovered in 1965 that a strain of C3H/HeJ mouse was immune to endotoxin-induced shock. The genetic locus for this effect was dubbed Lps. These mice were also found to be hypersusceptible to infection by gram-negative bacteria. These observations were finally linked in 1998 by the discovery of the toll-like receptor 4 gene (TLR4). Genetic mapping work, performed over a period of five years, showed that TLR4 was the sole candidate locus within the Lps critical region; this strongly implied that a mutation within TLR4 must account for the lipopolysaccharide resistance phenotype. The defect in the TLR4 gene that led to the endotoxin-resistant phenotype was discovered to be due to a mutation in the cytoplasmic domain of the receptor.
Controversy arose in the scientific community over the use of mouse models in sepsis research in 2013, when scientists published a review comparing the mouse immune system to the human immune system and showed that, on a systems level, the two worked very differently. The authors noted that, as of the date of their article, over 150 clinical trials of sepsis had been conducted in humans, almost all of them supported by promising data in mice, and that all of them had failed. The authors called for abandoning the use of mouse models in sepsis research; others rejected that conclusion but called for more caution in interpreting the results of mouse studies and more careful design of preclinical studies. One approach is to rely more on studying biopsies and clinical data from people who have had sepsis, to try to identify biomarkers and drug targets for intervention.
Society and culture
Economics
Sepsis was the most expensive condition treated in United States' hospital stays in 2013, at an aggregate cost of $23.6 billion for nearly 1.3 million hospitalizations. Costs for sepsis hospital stays more than quadrupled since 1997 with an 11.5 percent annual increase. By payer, it was the most costly condition billed to Medicare and the uninsured, the second-most costly billed to Medicaid, and the fourth-most costly billed to private insurance.
Education
A large international collaboration entitled the "Surviving Sepsis Campaign" was established in 2002 to educate people about sepsis and to improve outcomes with sepsis. The Campaign has published an evidence-based review of management strategies for severe sepsis, with the aim to publish a complete set of guidelines in subsequent years. The guidelines were updated in 2016 and again in 2021.
Sepsis Alliance is a charitable organization based in the United States that was created to raise sepsis awareness among both the general public and healthcare professionals.
Research
Some authors suggest that initiating sepsis by the normally mutualistic (or neutral) members of the microbiome may not always be an accidental side effect of the deteriorating host immune system. Rather, it is often an adaptive microbial response to a sudden decline in the host's chances of survival. Under this scenario, the microbial species provoking sepsis benefit from monopolizing the future cadaver, utilizing its biomass as decomposers, and then being transmitted through soil or water to establish mutualistic relations with new individuals. The bacteria Streptococcus pneumoniae, Escherichia coli, Proteus spp., Pseudomonas aeruginosa, Staphylococcus aureus, Klebsiella spp., Clostridium spp., Lactobacillus spp., Bacteroides spp. and the fungi Candida spp. are all capable of such a high level of phenotypic plasticity. Evidently, not all cases of sepsis arise through such adaptive microbial strategy switches.
Paul E. Marik's "Marik protocol", also known as the "HAT" protocol, proposed a combination of hydrocortisone, vitamin C, and thiamine as a treatment for preventing sepsis in people in intensive care. Marik's own initial research, published in 2017, showed dramatic evidence of benefit, leading to the protocol becoming popular among intensive care physicians, especially after it received attention on social media and National Public Radio; the wider medical community criticized this as science by press conference. Subsequent independent research failed to replicate Marik's positive results, indicating the possibility that they had been compromised by bias. A systematic review of trials in 2021 found that the claimed benefits of the protocol could not be confirmed.
Overall, the evidence for any role for vitamin C in the treatment of sepsis remains unclear.
See also
Capnocytophaga canimorsus (bacteria that can lead to purpura fulminans and severe acute sepsis after a dog bite)
References
External links
SIRS, Sepsis, and Septic Shock Criteria
Articles containing video clips
Infectious diseases
Intensive care medicine
Medical emergencies
Causes of amputation
Neonatology
Wikipedia medicine articles ready to translate
Wikipedia emergency medicine articles ready to translate
Convalescence

Convalescence is the gradual recovery of health and strength after illness or injury.
Details
It refers to the later stage of an infectious disease or illness when the patient recovers and returns to previous health, but may continue to be a source of infection to others even if feeling better. In this sense, "recovery" can be considered a synonymous term. Convalescence also sometimes includes care of the patient after major surgery, during which the patient is required to visit the doctor for regular check-ups.
Convalescent care facilities are sometimes recognized by the acronym TCF (Transitional Convalescent Facilities).
Traditionally, time has been allowed for convalescence to happen. Nowadays, in some instances, where there is a shortage of hospital beds or of trained staff, medical settings can feel rushed and may have drifted away from a focus on convalescence.
See also
Rehabilitation, therapy to control a medical condition such as an addiction
Recuperation (recovery), a period of physical or mental recovery
Recuperation (sociology), a sociological concept
Relapse, reappearance of symptoms
Remission, absence of symptoms in chronic diseases
References
External links
Health care
Medical phenomena
Humorism

Humorism, the humoral theory, or humoralism, was a system of medicine detailing a supposed makeup and workings of the human body, adopted by Ancient Greek and Roman physicians and philosophers.
Humorism began to fall out of favor in the 17th century and it was definitively disproved in the 1850s with the advent of germ theory, which was able to show that many diseases previously thought to be humoral were in fact caused by microbes.
Origin
The concept of "humors" may have origins in Ancient Egyptian medicine, or Mesopotamia, though it was not systemized until ancient Greek thinkers. The word humor is a translation of Greek , (literally 'juice' or 'sap', metaphorically 'flavor'). Early texts on Indian Ayurveda medicine presented a theory of three humors (doṣas), which they sometimes linked with the five elements: earth, water, fire, air, and space.
The concept of "humors" (chemical systems regulating human behaviour) became more prominent from the writing of medical theorist Alcmaeon of Croton (c. 540–500 BC). His list of humors was longer and included fundamental elements described by Empedocles, such as water, earth, fire, air, etc. Hippocrates is usually credited with applying this idea to medicine. In contrast to Alcmaeon, Hippocrates suggested that humors are the vital bodily fluids: blood, phlegm, yellow bile, and black bile. Alcmaeon and Hippocrates posited that an extreme excess or deficiency of any of the humors (bodily fluid) in a person can be a sign of illness. Hippocrates, and then Galen, suggested that a moderate imbalance in the mixture of these fluids produces behavioral patterns. One of the treatises attributed to Hippocrates, On the Nature of Man, describes the theory as follows:
The human body contains blood, phlegm, yellow bile, and black bile. These are the things that make up its constitution and cause its pains and health. Health is primarily that state in which these constituent substances are in the correct proportion to each other, both in strength and quantity, and are well mixed. Pain occurs when one of the substances presents either a deficiency or an excess, or is separated in the body and not mixed with others. The body depends heavily on the four humors because their balanced combination helps to keep people in good health. Having the right amount of humor is essential for health. The pathophysiology of disease is consequently brought on by humor excesses and/or deficiencies.
Although current science has moved away from the four Hippocratic humors, the idea that health depends on fundamental biochemical substances and structural components in the body remains a point shared with Hippocratic beliefs.
Although the theory of the four humors does appear in some Hippocratic texts, other Hippocratic writers accepted the existence of only two humors, while some refrained from discussing the humoral theory at all. Humoralism, or the doctrine of the four temperaments, as a medical theory retained its popularity for centuries, largely through the influence of the writings of Galen (129–201 AD). The four essential elements—humors—that make up the human body, according to Hippocrates, are in harmony with one another and act as a catalyst for preserving health. Hippocrates' theory of four humors was linked with the popular theory of the four elements (earth, fire, water, and air) proposed by Empedocles, but this link was not proposed by Hippocrates or Galen, who referred primarily to bodily fluids. While Galen thought that humors were formed in the body, rather than ingested, he believed that different foods had varying potential to act upon the body to produce different humors. Warm foods, for example, tended to produce yellow bile, while cold foods tended to produce phlegm. Seasons of the year, periods of life, geographic regions, and occupations also influenced the nature of the humors formed. As such, certain seasons and geographic areas were understood to cause imbalances in the humors, leading to varying types of disease across time and place. For example, cities exposed to hot winds were seen as having higher rates of digestive problems as a result of excess phlegm running down from the head, while cities exposed to cold winds were associated with diseases of the lungs, acute diseases, and "hardness of the bowels", as well as ophthalmies (issues of the eyes), and nosebleeds. Cities to the west, meanwhile, were believed to produce weak, unhealthy, pale people that were subject to all manners of disease. In the treatise, On Airs, Waters, and Places, a Hippocratic physician is described arriving to an unnamed city where they test various factors of nature including the wind, water, and soil to predict the direct influence on the diseases specific to the city based on the season and the individual.
A fundamental idea of Hippocratic medicine was the endeavor to pinpoint the origins of illnesses in both the physiology of the human body and the influence of potentially hazardous environmental variables like air, water, and nutrition, and every humor has a distinct composition and is secreted by a different organ. Aristotle's concept of eucrasia—a state resembling equilibrium—and its relationship to the right balance of the four humors allow for the maintenance of human health, offering a more mathematical approach to medicine.
The imbalance of humors, or dyscrasia, was thought to be the direct cause of all diseases. Health was associated with a balance of humors, or eucrasia. The qualities of the humors, in turn, influenced the nature of the diseases they caused. Yellow bile caused warm diseases and phlegm caused cold diseases. In On the Temperaments, Galen further emphasized the importance of the qualities. An ideal temperament involved a proportionally balanced mixture of the four qualities. Galen identified four temperaments in which one of the qualities (warm, cold, moist, or dry) predominated, and four more in which a combination of two (warm and moist, warm and dry, cold and dry, or cold and moist) dominated. These last four, named for the humors with which they were associated—sanguine, choleric, melancholic and phlegmatic—eventually became better known than the others. While the term temperament came to refer just to psychological dispositions, Galen used it to refer to bodily dispositions, which determined a person's susceptibility to particular diseases, as well as behavioral and emotional inclinations.
Disease could also be the result of the "corruption" of one or more of the humors, which could be caused by environmental circumstances, dietary changes, or many other factors. These deficits were thought to be caused by vapors inhaled or absorbed by the body. Greeks and Romans, and the later Muslim and Western European medical establishments that adopted and adapted classical medical philosophy, believed that each of these humors would wax and wane in the body, depending on diet and activity. When a patient was suffering from a surplus or imbalance of one of the four humors, then said patient's personality and/or physical health could be negatively affected.
Therefore, the goal of treatment was to rid the body of some of the excess humor through techniques like purging, bloodletting, catharsis, diuresis, and others. Bloodletting was already a prominent medical procedure by the first century, but venesection took on even more significance once Galen of Pergamum declared blood to be the most prevalent humor. The volume of blood extracted ranged from a few drops to several litres over the course of several days, depending on the patient's condition and the doctor's practice.
Four humors
Even though humorism theory had several models that used two, three, and five components, the most famous model consists of the four humors described by Hippocrates and developed further by Galen. The four humors of Hippocratic medicine are black bile (Greek: μέλαινα χολή, melaina chole), yellow bile (Greek: ξανθή χολή, xanthe chole), phlegm (Greek: φλέγμα, phlegma), and blood (Greek: αἷμα, haima). Each corresponds to one of the traditional four temperaments. Based on Hippocratic medicine, it was believed that for a body to be healthy, the four humors should be balanced in amount and strength. The proper blending and balance of the four humors was known as eucrasia (Greek: εὐκρασία).
Humorism theory was improved by Galen, who incorporated his understanding of the humors into his interpretation of the human body. He believed the interactions of the humors within the body were the key to investigating the physical nature and function of the organ systems. Galen combined his interpretation of the humors with his collection of ideas concerning nature from past philosophers in order to draw conclusions about how the body works. For example, Galen maintained the idea of the presence of the Platonic tripartite soul, consisting of spiritedness, desire (directed spiritedness), and wisdom. Through this, Galen found a connection between these three parts of the soul and the three major organs that were recognized at the time: the brain, the heart, and the liver. This idea of connecting vital parts of the soul to vital parts of the body was derived from Aristotle's sense of explaining physical observations, and Galen utilized it to build his view of the human body. The organs had specific functions that contributed to the maintenance of the human body, and the expression of these functions is shown in the characteristic activities of a person. While the correspondence of parts of the body to the soul was an influential concept, Galen decided that the interaction of the four humors with natural bodily mechanisms was responsible for human development, and this connection inspired his understanding of the nature of the components of the body.
Galen recalls the correspondence between humors and seasons in his On the Doctrines of Hippocrates and Plato, and says that, "As for ages and the seasons, the child corresponds to spring, the young man to summer, the mature man to autumn, and the old man to winter". He also related a correspondence between humors and seasons based on the properties of both. Blood, as a humor, was considered hot and wet. This gave it a correspondence to spring. Yellow bile was considered hot and dry, which related it to summer. Black bile was considered cold and dry, and thus related to autumn. Phlegm, cold and wet, was related to winter.
Galen also believed that the characteristics of the soul follow the mixtures of the body, but he did not apply this idea to the Hippocratic humors. He believed that phlegm did not influence character. In his On Hippocrates The Nature of Man, Galen stated: "Sharpness and intelligence are caused by yellow bile in the soul, perseverance and consistency by the melancholic humor, and simplicity and naivety by blood. But the nature of phlegm has no effect on the character of the soul." He further said that blood is a mixture of the four elements: water, air, fire, and earth.
These terms only partly correspond to modern medical terminology, in which there is no distinction between black and yellow bile, and phlegm has a very different meaning. It was believed that the humors were the basic substances from which all liquids in the body were made. Robin Fåhræus (1921), a Swedish physician who devised the erythrocyte sedimentation rate, suggested that the four humors were based upon the observation of blood clotting in a transparent container. When blood is drawn in a glass container and left undisturbed for about an hour, four different layers can be seen: a dark clot forms at the bottom (the "black bile"); above the clot is a layer of red blood cells (the "blood"); above this is a whitish layer of white blood cells (the "phlegm"); the top layer is clear yellow serum (the "yellow bile").
Many Greek texts were written during the golden age of the theory of the four humors in Greek medicine after Galen. One of those texts was an anonymous treatise called On the Constitution of the Universe and of Man, published in the mid-19th century by J. L. Ideler. In this text, the author establishes the relationship between elements of the universe (air, water, earth, fire) and elements of the man (blood, yellow bile, black bile, phlegm). He said that:
The people who have red blood are friendly. They joke and laugh about their bodies, and they are rose-tinted, slightly red, and have pretty skin.
The people who have yellow bile are bitter, short tempered, and daring. They appear greenish and have yellow skin.
The people who are composed of black bile are lazy, fearful, and sickly. They have black hair and black eyes.
Those who have phlegm are low spirited, forgetful, and have white hair.
The English playwright Ben Jonson (1572–1637) wrote comedies of humours, in which character types were based on their humoral complexion.
Blood
It was thought that the nutritional value of the blood was the source of energy for the body and the soul. Blood was believed to consist of small proportional amounts of the other three humors. This meant that taking a blood sample would allow for determination of the balance of the four humors in the body. It was associated with a sanguine nature (enthusiastic, active, and social). Blood is considered to be hot and wet, sharing these characteristics with the season of spring.
Yellow bile
Yellow bile was associated with a choleric nature (ambitious, decisive, aggressive, and short-tempered). It was thought to be fluid found within the gallbladder, or in excretions such as vomit and feces. The associated qualities for yellow bile are hot and dry with the natural association of summer and fire. It was believed that an excess of this humor in an individual would result in emotional irregularities such as increased anger or irrational behaviour.
Black bile
Black bile was associated with a melancholy nature, the word melancholy itself deriving from the Greek for 'black bile', μέλαινα χολή (melaina chole). Depression was attributed to excess or unnatural black bile secreted by the spleen.
Cancer was also attributed to an excess of black bile concentrated in a specific area. The seasonal association of black bile was to autumn as the cold and dry characteristics of the season reflect the nature of man.
Phlegm
Phlegm was associated with a phlegmatic nature, thought to be linked with reserved behavior. The phlegm of humorism is far from phlegm as it is defined today. Phlegm was used as a general term to describe white or colorless secretions such as pus, mucus, saliva, sweat, or semen. Phlegm was also associated with the brain, possibly due to the color and consistency of brain tissue. The French physiologist and Nobel laureate Charles Richet, when describing humorism's "phlegm or pituitary secretion" in 1910, asked rhetorically, "this strange liquid, which is the cause of tumours, of chlorosis, of rheumatism, and cacochymia – where is it? Who will ever see it? Who has ever seen it? What can we say of this fanciful classification of humors into four groups, of which two are absolutely imaginary?" The seasonal association of phlegm is winter due to its natural properties of being cold and wet.
Humor production
Humors were believed to be produced via digestion as the final products of hepatic digestion. Digestion is a continuous process taking place in every animal, and it can be divided into four sequential stages: the gastric digestion stage, the hepatic digestion stage, the vascular digestion stage, and the tissue digestion stage. Each stage digests food until it becomes suitable for use by the body. In gastric digestion, food is made into chyle, which is suitable for the liver to absorb and carry on digestion. Chyle is changed into chyme in the hepatic digestion stage. Chyme is composed of the four humors: blood, phlegm, yellow bile, and black bile. These four humors then circulate in the blood vessels. In the last stage of digestion, tissue digestion, food becomes similar to the organ tissue for which it is destined.
If anything goes wrong leading up to the production of humors, there will be an imbalance leading to disease. Proper organ functioning is necessary for the production of good humor. The stomach and liver also have to function normally for proper digestion. If there are any abnormalities in gastric digestion, the liver, blood vessels, and tissues cannot be provided with normal raw chyle, which can cause abnormal humor and blood composition. Even a healthy, functioning liver is not capable of converting abnormal chyle into normal chyle and normal humors.
Humors are the end product of hepatic digestion, but they are not the end product of the digestion cycle, so an abnormal humor produced by hepatic digestion will affect other digestive organs.
Relation to jaundice
Jaundice appears in the Hippocratic Corpus and was explained in terms of humoral theory. Some of the first descriptions of jaundice (icterus) come from the Hippocratic physicians. The ailment appears multiple times in the Corpus, where its genesis, description, prognosis, and therapy are given. The five kinds of jaundice mentioned in the Hippocratic Corpus all share a yellow or greenish skin color.
A modern doctor reading the clinical symptoms of each variety of jaundice listed in the Hippocratic Corpus will readily recognize the symptoms listed in contemporary atlases of medicine. Although the Hippocratic physicians' therapeutic approaches have little to do with contemporary medical practice, their capacity for observation as they described the various forms of jaundice is remarkable. At that time, jaundice was viewed as an illness unto itself rather than a symptom brought on by a disease.
Unification with Empedocles's model
Empedocles's theory suggested that there are four elements: earth, fire, water, and air, with the earth producing the natural systems. Since this theory was influential for centuries, later scholars paired qualities associated with each humor as described by Hippocrates/Galen with seasons and "basic elements" as described by Empedocles.
The four humors, with their corresponding elements, seasons, sites of formation, and resulting temperaments, are as follows:

Blood: air; spring; formed in the liver; sanguine temperament.
Yellow bile: fire; summer; formed in the gallbladder; choleric temperament.
Black bile: earth; autumn; formed in the spleen; melancholic temperament.
Phlegm: water; winter; formed in the brain and lungs; phlegmatic temperament.
Influence and legacy
Islamic medicine
Medieval medical tradition in the Golden Age of Islam adopted the theory of humorism from Greco-Roman medicine, notably via the Persian polymath Avicenna's The Canon of Medicine (1025), in which Avicenna summarized the four humors and their associated temperaments.
Perso-Arabic and Indian medicine
The Unani school of medicine, practiced in Perso-Arabic countries, India, and Pakistan, is based on Galenic and Avicennian medicine in its emphasis on the four humors as a fundamental part of the methodologic paradigm.
Western medicine
The humoralist system of medicine was highly individualistic, for all patients were said to have their own unique humoral composition. From Hippocrates onward, the humoral theory was adopted by Greek, Roman and Islamic physicians, and dominated the view of the human body among European physicians until at least 1543 when it was first seriously challenged by Andreas Vesalius, who mostly criticized Galen's theories of human anatomy and not the chemical hypothesis of behavioural regulation (temperament).
Typical 18th-century practices such as bleeding a sick person or applying hot cups to a person were based on the humoral theory of imbalances of fluids (blood and bile in those cases). Methods of treatment like bloodletting, emetics and purges were aimed at expelling a surplus of a humor. Apocroustics were medications intended to stop the flux of malignant humors to a diseased body part.
16th-century Swiss physician Paracelsus further developed the idea that beneficial medical substances could be found in herbs, minerals and various alchemical combinations thereof. These beliefs were the foundation of mainstream Western medicine well into the 17th century. Specific minerals or herbs were used to treat ailments simple to complex, from an uncomplicated upper respiratory infection to the plague. For example, chamomile was used to decrease heat, and lower excessive bile humor. Arsenic was used in a poultice bag to 'draw out' the excess humor(s) that led to symptoms of the plague. Apophlegmatisms, in pre-modern medicine, were medications chewed in order to draw away phlegm and humors.
Although advances in cellular pathology and chemistry had called humoralism into question by the 17th century, the theory had dominated Western medical thinking for more than 2,000 years. Only in some instances did the theory of humoralism wane into obscurity. One such instance occurred in the sixth and seventh centuries in the Byzantine Empire, when traditional secular Greek culture gave way to Christian influences. Though the use of humoralist medicine continued during this time, its influence was diminished in favor of religion. The revival of Greek humoralism, owing in part to changing social and economic factors, did not begin until the early ninth century. Use of the practice in modern times is pseudoscience.
Modern use
Humoral theory was the grand unified theory of medicine, before the advent of modern medicine, for more than 2,000 years. The theory was one of the fundamental tenets of the teachings of the Greek physician-philosopher Hippocrates (460–370 BC), who is often referred to as the "Father of Medicine".
The humoral theory's demise was hastened further by the advent of the doctrine of specific etiology, which holds that every diagnosed sickness or disorder has one precise, specific cause. Nevertheless, the identification of messenger molecules such as hormones, growth factors, and neurotransmitters suggests that the broader humoral idea of regulatory substances circulating in the body has not been made fully moribund. Humoral theory is still present in modern medical terminology, which refers to humoral immunity when discussing elements of immunity that circulate in the bloodstream, such as hormones and antibodies.
Modern medicine refers to humoral immunity or humoral regulation when describing substances such as hormones and antibodies, but this is not a remnant of the humor theory. It is merely a literal use of humoral, i.e. pertaining to bodily fluids (such as blood and lymph).
The concept of humorism was not definitively disproven until 1858. No studies were performed to prove or disprove the impact of dysfunction in the bodily organs producing the named fluids (humors) on temperament traits, simply because the list of temperament traits was not defined until the end of the 20th century.
Culture
Theophrastus and others developed a set of characters based on the humors. Those with too much blood were sanguine. Those with too much phlegm were phlegmatic. Those with too much yellow bile were choleric, and those with too much black bile were melancholic. The idea of human personality based on humors contributed to the character comedies of Menander and, later, Plautus.
Through the neo-classical revival in Europe, the humor theory dominated medical practice, and the theory of humoral types made periodic appearances in drama. The humors were an important and popular iconographic theme in European art, found in paintings, tapestries, and sets of prints.
The humors can be found in Elizabethan works, such as in The Taming of the Shrew, in which the character Petruchio, a choleric man, uses humoral therapy techniques on Katherina, a choleric woman, in order to tame her into the socially acceptable phlegmatic woman. Some examples include: he yells at the servants for serving mutton, a choleric food, to two people who are already choleric; he deprives Katherina of sleep; and he, Katherina and their servant Grumio endure a cold walk home, for cold temperatures were said to tame choleric temperaments.
The theory of the four humors features prominently in Rupert Thomson's 2005 novel Divided Kingdom.
See also
Classical element
Comedy of humours
Three Doshas of Ayurveda
Five temperaments
Mitama
Wu Xing (Five Principles of Chinese philosophy)
References
External links
BBC Radio4's In Our Time. Episode on the four humors in MP3 format, 45 minutes.
Humorism
Obsolete medical theories
Ancient Greek medicine
Mythological substances
Organ system

An organ system is a biological system consisting of a group of organs that work together to perform one or more functions. Each organ has a specialized role in the organism's body and is made up of distinct tissues.
Humans
Main article: List of systems of the human body
There are 11 distinct organ systems in human beings, which form the basis of human anatomy and physiology. The 11 organ systems are the respiratory system, digestive and excretory system, circulatory system, urinary system, integumentary system, skeletal system, muscular system, endocrine system, lymphatic system, nervous system, and reproductive system. There are other systems in the body that are not organ systems—for example, the immune system protects the organism from infection, but it is not an organ system since it is not composed of organs. Some organs are in more than one system—for example, the nose is in the respiratory system and also serves as a sensory organ in the nervous system; the testes and ovaries are both part of the reproductive and endocrine systems.
Other animals
Other animals have similar organ systems to humans although simpler animals may have fewer organs in an organ system or even fewer organ systems.
Plants
Plants have two major organ systems. Vascular plants have two distinct organ systems: a shoot system and a root system. The shoot system consists of stems, leaves, and the reproductive parts of the plant (flowers and fruits). The shoot system generally grows above ground, where it absorbs the light needed for photosynthesis. The root system, which supports the plant and absorbs water and minerals, is usually underground.
See also
References
Organ systems
Clothing

Clothing (also known as clothes, garments, dress, apparel, or attire) is any item worn on the body. Typically, clothing is made of fabrics or textiles, but over time it has included garments made from animal skin and other thin sheets of materials and natural products found in the environment, put together. The wearing of clothing is mostly restricted to human beings and is a feature of all human societies. The amount and type of clothing worn depends on gender, body type, social factors, and geographic considerations. Garments cover the body, footwear covers the feet, gloves cover the hands, while hats and headgear cover the head, and underwear covers the private parts.
Clothing serves many purposes: it can serve as protection from the elements, rough surfaces, sharp stones, rash-causing plants, and insect bites, by providing a barrier between the skin and the environment. Clothing can insulate against cold or hot conditions, and it can provide a hygienic barrier, keeping infectious and toxic materials away from the body. It can protect feet from injury and discomfort or facilitate navigation in varied environments. Clothing also provides protection from ultraviolet radiation. It may be used to prevent glare or increase visual acuity in harsh environments, such as brimmed hats. Clothing is used for protection against injury in specific tasks and occupations, sports, and warfare. Fashioned with pockets, belts, or loops, clothing may provide a means to carry things while freeing the hands.
Clothing has significant social factors as well. Wearing clothes is a variable social norm. It may connote modesty. Being deprived of clothing in front of others may be embarrassing. In many parts of the world, not wearing clothes in public so that genitals, breasts, or buttocks are visible could be considered indecent exposure. Pubic area or genital coverage is the most frequently encountered minimum found cross-culturally and regardless of climate, implying social convention as the basis of customs. Clothing also may be used to communicate social status, wealth, group identity, and individualism.
Some forms of personal protective equipment amount to clothing, such as coveralls, chaps or a doctor's white coat, with similar requirements for maintenance and cleaning as other textiles (boxing gloves function both as protective equipment and as a sparring weapon, so the equipment aspect rises above the glove aspect). More specialized forms of protective equipment, such as face shields are classified as protective accessories. At the far extreme, self-enclosing diving suits or space suits are form-fitting body covers, and amount to a form of dress, without being clothing per se, while containing enough high technology to amount to more of a tool than a garment. This line will continue to blur as wearable technology embeds assistive devices directly into the fabric itself; the enabling innovations are ultra low power consumption and flexible electronic substrates.
Clothing also hybridizes into a personal transportation system (ice skates, roller skates, cargo pants, other outdoor survival gear, one-man band) or concealment system (stage magicians, hidden linings or pockets in tradecraft, integrated holsters for concealed carry, merchandise-laden trench coats on the black market — where the purpose of the clothing often carries over into disguise). A mode of dress fit to purpose, whether stylistic or functional, is known as an outfit or ensemble.
Origin and history
Early use
Estimates of when humans began wearing clothes vary from 40,000 to as many as 3 million years ago, but recent studies suggest humans were wearing clothing at least 100,000 years ago.
Recent studies by Ralf Kittler, Manfred Kayser and Mark Stoneking—anthropologists at the Max Planck Institute for Evolutionary Anthropology—have attempted to constrain the most recent date of the introduction of clothing with an indirect method relying on lice. The rationale for this method of dating stems from the fact that the human body louse cannot live outside of clothing, dying after only a few hours without shelter. This strongly implies that the date of the body louse's speciation from its parent, Pediculus humanus, can have taken place no earlier than the earliest human adoption of clothing. This date, at which the body louse (P. humanus corporis) diverged from both its parent species and its sibling subspecies, the head louse (P. humanus capitis), can be determined by the number of mutations each has developed during the intervening time. Such mutations occur at a known rate and the date of last-common-ancestor for two species can therefore be estimated from their frequency. These studies have produced dates from 40,000 to 170,000 years ago, with a greatest likelihood of speciation lying at about 107,000 years ago.
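The underlying molecular-clock arithmetic is simple: if mutations accumulate at a roughly constant, known rate, the number of differences between two diverged lineages is proportional to the time since their split. The following sketch illustrates that calculation with entirely hypothetical numbers; the mutation rate, sequence length, and difference count are placeholders for illustration, not the data used by Kittler, Kayser and Stoneking.

```python
# Illustrative molecular-clock estimate (hypothetical numbers only).
# After two lineages split, each accumulates mutations independently,
# so the expected differences per site between them are about 2 * rate * time.

def divergence_time_years(observed_differences: int,
                          sites_compared: int,
                          mutations_per_site_per_year: float) -> float:
    """Estimate the years since two lineages shared a common ancestor."""
    differences_per_site = observed_differences / sites_compared
    return differences_per_site / (2.0 * mutations_per_site_per_year)

# Hypothetical example: 12 differences over 600 compared sites, at a rate of
# 1e-7 mutations per site per year, gives an estimated split ~100,000 years ago.
print(divergence_time_years(12, 600, 1e-7))
```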
Kittler, Kayser and Stoneking suggest that the invention of clothing may have coincided with the northward migration of modern Homo sapiens away from the warm climate of Africa, which is thought to have begun between 100,000 and 50,000 years ago. A second group of researchers, also relying on the genetic clock, estimate that clothing originated between 30,000 and 114,000 years ago.
Dating with direct archeological evidence produces dates consistent with those of lice. In September 2021, scientists reported evidence of clothes being made 120,000 years ago based on findings in deposits in Morocco.
The development of clothing is deeply connected to human evolution, with early garments likely consisting of animal skins and natural fibers adapted for protection and social signaling. According to anthropologists and archaeologists, the earliest clothing likely consisted of fur, leather, leaves, or grass that was draped, wrapped, or tied around the body. Knowledge of such clothing remains inferential, as clothing materials deteriorate quickly compared with stone, bone, shell, and metal artifacts. Archeologists have identified very early sewing needles of bone and ivory from about 30,000 BC, found near Kostenki, Russia in 1988, and in 2016 a needle at least 50,000 years old from Denisova Cave in Siberia made by Denisovans. Dyed flax fibers that date back to 34,000 BC and could have been used in clothing have been found in a prehistoric cave in Georgia.
Making clothing
Several distinct human cultures, including those residing in the Arctic Circle, have historically crafted their garments exclusively from treated and adorned animal furs and skins. In contrast, numerous other societies have complemented or substituted leather and skins with textiles woven, knitted, or twined from a diverse array of animal and plant fibers, such as wool, linen, cotton, silk, hemp, and ramie.
Although modern consumers may take the production of clothing for granted, making fabric by hand is a tedious and labor-intensive process involving fiber making, spinning, and weaving. The textile industry was the first to be mechanized – with the powered loom – during the Industrial Revolution.
Different cultures have evolved various ways of creating clothes out of cloth. One approach involves draping the cloth. Many people wore, and still wear, garments consisting of rectangles of cloth wrapped to fit – for example, the dhoti for men and the sari for women in the Indian subcontinent, the Scottish kilt, and the Javanese sarong. The clothes may be tied up (dhoti and sari) or implement pins or belts to hold the garments in place (kilt and sarong). The cloth remains uncut, and people of various sizes can wear the garment.
Another approach involves measuring, cutting, and sewing the cloth by hand or with a sewing machine. Clothing can be cut from a sewing pattern and adjusted by a tailor to the wearer's measurements. An adjustable sewing mannequin or dress form is used to create form-fitting clothing. If the fabric is expensive, the tailor tries to use every bit of the cloth rectangle in constructing the clothing; perhaps cutting triangular pieces from one corner of the cloth, and adding them elsewhere as gussets. Traditional European patterns for shirts and chemises take this approach. These remnants can also be reused to make patchwork pockets, hats, vests, and skirts.
Modern European fashion treats cloth much less conservatively, typically cutting in such a way as to leave various odd-shaped cloth remnants. Industrial sewing operations sell these as waste; domestic sewers may turn them into quilts.
In the thousands of years that humans have been making clothing, they have created an astonishing array of styles, many of which have been reconstructed from surviving garments, photographs, paintings, mosaics, etc., as well as from written descriptions. Costume history can inspire current fashion designers, as well as costumiers for plays, films, television, and historical reenactment.
Clothing as comfort
Comfort is related to various perceptions, physiological, social, and psychological needs, and after food, it is clothing that satisfies these comfort needs. Clothing provides aesthetic, tactile, thermal, moisture, and pressure comfort.
Aesthetic comfort Visual perception is influenced by color, fabric construction, style, garment fit, fashion compatibility, and finish of clothing material. Aesthetic comfort is necessary for psychological and social comfort.
Thermoregulation and thermophysiological comfort Thermophysiological comfort is the capacity of the clothing material to maintain the balance of moisture and heat between the body and the environment. It is a property of textile materials that creates ease by maintaining moisture and thermal levels in a human's resting and active states. The selection of textile material significantly affects the comfort of the wearer. Different textile fibers have unique properties that make them suitable for use in various environments. Natural fibers are breathable and absorb moisture, and synthetic fibers are hydrophobic; they repel moisture and do not allow air to pass. Different environments demand a diverse selection of clothing materials. Hence, the appropriate choice is important. The major determinants that influence thermophysiological comfort are permeable construction, heat, and moisture transfer rate.
Thermal comfort One primary criterion for our physiological needs is thermal comfort. The heat dissipation effectiveness of clothing gives the wearer a neither very hot nor very cold feel. The optimum temperature for thermal comfort of the skin surface is between 28 and 30 °C, i.e., a neutral temperature. Thermophysiology reacts whenever the temperature falls below or exceeds the neutral point on either side; it is discomforting below 28 °C and above 30 °C. Clothing maintains a thermal balance; it keeps the skin dry and cool. It helps to keep the body from overheating while avoiding heat from the environment. (A simple illustration of this neutral band is sketched after this list.)
Moisture comfort Moisture comfort is the prevention of a damp sensation. According to Hollies' research, it feels uncomfortable when more than "50% to 65% of the body is wet."
Tactile comfort Tactile comfort is a resistance to the discomfort related to the friction created by clothing against the body. It is related to the smoothness, roughness, softness, and stiffness of the fabric used in clothing. The degree of tactile discomfort may vary between individuals, which is possible due to various factors including allergies, tickling, prickling, skin abrasion, coolness, and the fabric's weight, structure, and thickness. There are specific surface finishes (mechanical and chemical) that can enhance tactile comfort. Fleece sweatshirts and velvet clothing, for example. Soft, clingy, stiff, heavy, light, hard, sticky, scratchy, prickly are all terms used to describe tactile sensations.
Pressure comfort Pressure comfort is the comfort of the sensory response of the body's pressure receptors (present in the skin) to clothing. Fabric with lycra feels more comfortable because of this response and its superior pressure comfort. The sensory response is influenced by the material's structure: snug, loose, heavy, light, soft, or stiff structuring.
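As a toy illustration of the 28–30 °C neutral band described under thermal comfort above, the snippet below classifies a skin-surface temperature as cool, neutral, or warm. The thresholds follow the text; the labels and function name are illustrative assumptions, not terms from the clothing-comfort literature.

```python
# Toy classifier for the skin-surface "neutral zone" described in the text.
# The 28-30 degree Celsius thresholds come from the passage above; the labels
# are illustrative only.

def thermal_comfort(skin_temp_c: float) -> str:
    if skin_temp_c < 28.0:
        return "uncomfortably cool"
    if skin_temp_c > 30.0:
        return "uncomfortably warm"
    return "neutral"

for temp in (26.5, 29.0, 31.2):
    print(f"{temp:.1f} degC -> {thermal_comfort(temp)}")
```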
Functions
The most obvious function of clothing is to protect the wearer from the elements. It serves to prevent wind damage and provides protection from sunburn. In the cold, it offers thermal insulation. Shelter can reduce the functional need for clothing. For example, coats, hats, gloves, and other outer layers are normally removed when entering a warm place. Similarly, clothing has seasonal and regional aspects so that thinner materials and fewer layers of clothing generally are worn in warmer regions and seasons than in colder ones. Boots, hats, jackets, ponchos, and coats designed to protect from rain and snow are specialized clothing items.
Clothing has been made from a wide variety of materials, ranging from leather and furs to woven fabrics, to elaborate and exotic natural and synthetic fabrics. Not all body coverings are regarded as clothing. Articles carried rather than worn normally are considered accessories rather than clothing (such as handbags), as are items worn on a single part of the body and easily removed (scarves), worn purely for adornment (jewelry), or items that do not serve a protective function. For instance, corrective eyeglasses, Arctic goggles, and sunglasses would not be considered accessories because of their protective functions.
Clothing protects against many things that might injure or irritate the naked human body, including rain, snow, wind, and other weather, as well as from the sun. Garments that are too sheer, thin, small, or tight offer less protection. Appropriate clothes can also reduce risk during activities such as work or sport. Some clothing protects from specific hazards, such as insects, toxic chemicals, weather, weapons, and contact with abrasive substances.
Humans have devised clothing solutions to environmental or other hazards: such as space suits, armor, diving suits, swimsuits, bee-keeper gear, motorcycle leathers, high-visibility clothing, and other pieces of protective clothing. The distinction between clothing and protective equipment is not always clear-cut since clothes designed to be fashionable often have protective value, and clothes designed for function often have corporate fashion in their design.
The choice of clothes also has social implications. They cover parts of the body that social norms require to be covered, act as a form of adornment, and serve other social purposes. Someone who lacks the means to procure appropriate clothing, whether because of poverty or lack of inclination, is sometimes said to be dressed in worn, ragged, or shabby clothes.
Clothing performs a range of social and cultural functions, such as individual, occupational, gender differentiation, and social status. In many societies, norms about clothing reflect standards of modesty, religion, gender, and social status. Clothing may also function as adornment and an expression of personal taste or style.
Scholarship
Function of clothing
Serious books on clothing and its functions appear from the nineteenth century as European colonial powers interacted with new environments such as tropical ones in Asia. Some scientific research into the multiple functions of clothing was carried out in the first half of the twentieth century, with publications such as J.C. Flügel's Psychology of Clothes in 1930 and Newburgh's seminal Physiology of Heat Regulation and The Science of Clothing in 1949. By 1968, the field of environmental physiology had advanced and expanded significantly, but the science of clothing in relation to environmental physiology had changed little. There has since been considerable research, and the knowledge base has grown significantly, but the main concepts remain unchanged, and indeed, Newburgh's book continues to be cited by contemporary authors, including those attempting to develop thermoregulatory models of clothing development.
History of clothing
Clothing reveals much about human history. According to Professor Kiki Smith of Smith College, garments preserved in collections are resources for study similar to books and paintings. Scholars around the world have studied a wide range of clothing topics, including the history of specific items of clothing, clothing styles in different cultural groups, and the business of clothing and fashion. The textile curator Linda Baumgarten writes that "clothing provides a remarkable picture of the daily lives, beliefs, expectations, and hopes of those who lived in the past".
Clothing presents a number of challenges to historians. Clothing made of textiles or skins is subject to decay, and the erosion of physical integrity may be seen as a loss of cultural information. Costume collections often focus on important pieces of clothing considered unique or otherwise significant, limiting the opportunities scholars have to study everyday clothing.
Cultural aspects
Clothing has long served as a marker of social status, gender, and cultural identity, reflecting broader societal structures and values.
Gender differentiation
In most cultures, gender differentiation of clothing is considered appropriate. The differences are in styles, colors, fabrics, and types.
In contemporary Western societies, skirts, dresses, and high-heeled shoes are usually seen as women's clothing, while neckties usually are seen as men's clothing. Trousers were once seen as exclusively men's clothing, but nowadays are worn by both genders. Men's clothes are often more practical (that is, they can function well under a wide variety of situations), but a wider range of clothing styles is available for women. Typically, men are allowed to bare their chests in a greater variety of public places. It is generally common for a woman to wear clothing perceived as masculine, while the opposite is seen as unusual. Contemporary men may sometimes choose to wear men's skirts such as togas or kilts in particular cultures, especially on ceremonial occasions. In previous times, such garments often were worn as normal daily clothing by men.
In some cultures, sumptuary laws regulate what men and women are required to wear. Islam requires women to wear certain forms of attire, usually hijab. Which items are required varies among different Muslim societies; however, women are usually required to cover more of their bodies than men. Articles of clothing Muslim women wear under these laws or traditions range from the head-scarf to the burqa.
Some contemporary clothing styles designed to be worn by either gender, such as T-shirts, have started out as menswear, but some articles, such as the fedora, originally were a style for women.
Social status
During the early modern period, individuals utilized their attire as a significant method of conveying and asserting their social status. Individuals employed the utilization of high-quality fabrics and trendy designs as a means of communicating their wealth and social standing, as well as an indication of their knowledge and understanding of current fashion trends to the general public. As a result, clothing played a significant role in making the social hierarchy perceptible to all members of society.
In some societies, clothing may be used to indicate rank or status. In ancient Rome, for example, only senators could wear garments dyed with Tyrian purple. In traditional Hawaiian society, only high-ranking chiefs could wear feather cloaks and palaoa, or carved whale teeth. In China, before the establishment of the republic, only the emperor could wear yellow. History provides many examples of elaborate sumptuary laws that regulated what people could wear. In societies without such laws, which includes most modern societies, social status is signaled by the purchase of rare or luxury items that are limited by cost to those with wealth or status. In addition, peer pressure influences clothing choice.
Religion
Some religious clothing might be considered a special case of occupational clothing. Sometimes it is worn only during the performance of religious ceremonies. However, it may be worn every day as a marker for special religious status. Sikhs wear a turban as it is a part of their religion.
In some religions such as Hinduism, Sikhism, Buddhism, and Jainism the cleanliness of religious dresses is of paramount importance and considered to indicate purity. Jewish ritual requires rending (tearing) of one's upper garment as a sign of mourning. The Quran says about husbands and wives, regarding clothing: "...They are clothing/covering (Libaas) for you; and you for them" (chapter 2:187). Christian clergy members wear religious vestments during liturgical services and may wear specific non-liturgical clothing at other times.
Clothing appears in numerous contexts in the Bible. The most prominent passages are: the story of Adam and Eve who made coverings for themselves out of fig leaves, Joseph's coat of many colors, and the clothing of Judah and Tamar, Mordecai and Esther. Furthermore, the priests officiating in the Temple in Jerusalem had very specific garments, the lack of which made one liable to death.
Contemporary clothing
Western dress code
The Western dress code has changed over the past 500+ years. The mechanization of the textile industry made many varieties of cloth widely available at affordable prices. Styles have changed, and the availability of synthetic fabrics has changed the definition of what is "stylish". In the latter half of the twentieth century, blue jeans became very popular, and are now worn to events that normally demand formal attire. Activewear has also become a large and growing market.
In the Western dress code, jeans are worn by both men and women. Several distinct styles of jeans are found, including high-rise, mid-rise, low-rise, bootcut, straight, cropped, skinny, cuffed, boyfriend, and capri jeans.
The licensing of designer names was pioneered by designers such as Pierre Cardin, Yves Saint Laurent, and Guy Laroche in the 1960s and has been a common practice within the fashion industry from about the 1970s. Among the more popular brands are Marc Jacobs and Gucci, named for Marc Jacobs and Guccio Gucci respectively.
Spread of western styles
By the early years of the twenty-first century, western clothing styles had, to some extent, become international styles. This process began hundreds of years earlier, during the periods of European colonialism. The process of cultural dissemination has been perpetuated over the centuries, spreading Western culture and styles, most recently as Western media corporations have penetrated markets throughout the world. Fast fashion clothing has also become a global phenomenon. These garments are less expensive, mass-produced Western clothing. Also, donated used clothing from Western countries is delivered to people in poor countries by charity organizations.
Ethnic and cultural heritage
People may wear ethnic or national dress on special occasions or in certain roles or occupations. For example, most Korean men and women have adopted Western-style dress for daily wear, but still wear traditional hanboks on special occasions, such as weddings and cultural holidays. Also, items of Western dress may be worn or accessorized in distinctive, non-Western ways. A Tongan man may combine a used T-shirt with a Tongan wrapped skirt, or tupenu.
Sport and activity
For practical, comfort or safety reasons, most sports and physical activities are practised wearing special clothing. Common sportswear garments include shorts, T-shirts, tennis shirts, leotards, tracksuits, and trainers. Specialized garments include wet suits (for swimming, diving, or surfing), salopettes (for skiing), and leotards (for gymnastics). Also, spandex materials often are used as base layers to soak up sweat. Spandex is preferable for active sports that require form fitting garments, such as volleyball, wrestling, track and field, dance, gymnastics, and swimming.
Fashion
Paris set the fashion trends for Europe and North America from 1900 to 1940. In the 1920s, the goal was a loose silhouette. Women wore dresses all day, every day. Day dresses had a drop waist, with a sash or belt around the low waist or hip and a skirt that hung anywhere from the ankle up to the knee, never above. Day wear had sleeves (long to mid-bicep) and a skirt that was straight, pleated, handkerchief-hemmed, or tiered. Jewelry was not conspicuous. Hair was often bobbed, giving a boyish look.
In the early twenty-first century a diverse range of styles exists in fashion, varying by geography, exposure to modern media, economic conditions, and ranging from expensive haute couture, to traditional garb, to thrift store grunge. Fashion shows are events for designers to show off new and often extravagant designs.
Political issues
Working conditions in the garments industry
Although mechanization had transformed most aspects of the clothing industry by the mid-twentieth century, garment workers have continued to labor under challenging conditions that demand repetitive manual work. Often, mass-produced clothing is made in what are considered by some to be sweatshops, typified by long work hours, lack of benefits, and lack of worker representation. While most examples of such conditions are found in developing countries, clothes made in industrialized nations may also be manufactured under similar conditions.
Coalitions of NGOs, designers (including Katharine Hamnett, American Apparel, Veja, Quiksilver, eVocal, and Edun), and campaign groups such as the Clean Clothes Campaign (CCC) and the Institute for Global Labour and Human Rights as well as textile and clothing trade unions have sought to improve these conditions by sponsoring awareness-raising events, which draw the attention of both the media and the general public to the plight of the workers.
Outsourcing production to low-wage countries such as Bangladesh, China, India, Indonesia, Pakistan, and Sri Lanka became possible when the Multi Fibre Arrangement (MFA) was abolished. The MFA, which placed quotas on textile imports, was deemed a protectionist measure. Although many countries recognize conventions of the International Labour Organization, which attempt to set standards for worker safety and rights, many countries have made exceptions to certain parts of the conventions or have failed to enforce them thoroughly. India, for example, has not ratified Conventions 87 and 98.
The production of textiles has functioned as a consistent industry for developing nations, providing work and wages, whether construed as exploitative or not, to millions of people.
Fur
The use of animal fur in clothing dates to prehistoric times. Currently, although fur is still used by indigenous people in arctic zones and higher elevations for its warmth and protection, in developed countries it is associated with expensive designer clothing. Once uncontroversial, it has more recently become the focus of campaigns on the grounds that it is cruel and unnecessary. PETA and other animal rights and animal liberation groups have called attention to fur farming and other practices they consider cruel.
Real fur in fashion is contentious, with Copenhagen (2022) and London (2018) fashion weeks banning real fur from their runway shows following protests and government attention to the issue. Fashion houses such as Gucci and Chanel have banned the use of fur in their garments. Versace and Furla also stopped using fur in their collections in early 2018. In 2020, the outdoor brand Canada Goose announced it would discontinue the use of new coyote fur on parka trims following protests.
Governing bodies have issued legislation banning the sale of new real fur garments. In 2021, Israel was the first government to ban the sale of real fur garments, with the exception of those worn as part of a religious faith. In 2019, the state of California banned fur trapping, with a total ban on the sale of all new fur garments except those made of sheep, cow, and rabbit fur going into effect on January 1, 2023.
Life cycle
Clothing maintenance
Clothing suffers assault both from within and without. The human body sheds skin cells and body oils, and it exudes sweat, urine, and feces that may soil clothing. From the outside, sun damage, moisture, abrasion, and dirt assault garments. Fleas and lice can hide in seams. If not cleaned and refurbished, clothing becomes worn and loses its aesthetics and functionality (as when buttons fall off, seams come undone, fabrics thin or tear, and zippers fail).
Often, people wear an item of clothing until it falls apart. Some materials present problems. Cleaning leather is difficult, and bark cloth (tapa) cannot be washed without dissolving it. Owners may patch tears and rips, and brush off surface dirt, but materials such as these inevitably age.
Most clothing consists of cloth, however, and most cloth can be laundered and mended (patching, darning, but compare felt).
Laundry, ironing, storage
Humans have developed many specialized methods for laundering clothing, ranging from early methods of pounding clothes against rocks in running streams, to the latest in electronic washing machines and dry cleaning (dissolving dirt in solvents other than water). Hot water washing (boiling), chemical cleaning, and ironing are all traditional methods of sterilizing fabrics for hygiene purposes.
Many kinds of clothing are designed to be ironed before they are worn to remove wrinkles. Most modern formal and semi-formal clothing is in this category (for example, dress shirts and suits). Ironed clothes are believed to look clean, fresh, and neat. Much contemporary casual clothing is made of knit materials that do not readily wrinkle, and do not require ironing. Some clothing is permanent press, having been treated with a coating (such as polytetrafluoroethylene) that suppresses wrinkles and creates a smooth appearance without ironing. Excess lint or debris may end up on the clothing in between launderings. In such cases, a lint remover may be useful.
Once clothes have been laundered and possibly ironed, usually they are hung on clothes hangers or folded, to keep them fresh until they are worn. Clothes are folded to allow them to be stored compactly, to prevent creasing, to preserve creases, or to present them in a more pleasing manner, for instance, when they are put on sale in stores.
Certain types of insects and larvae feed on clothing and textiles, such as the black carpet beetle and clothing moths. To deter such pests, clothes may be stored in cedar-lined closets or chests, or placed in drawers or containers with materials having pest repellent properties, such as lavender or mothballs. Airtight containers (such as sealed, heavy-duty plastic bags) may deter insect pest damage to clothing materials as well.
Non-iron
A resin used for making non-wrinkle shirts releases formaldehyde, which could cause contact dermatitis for some people; no disclosure requirements exist, and in 2008 the U.S. Government Accountability Office tested formaldehyde in clothing and found that generally the highest levels were in non-wrinkle shirts and pants. In 1999, a study of the effect of washing on the formaldehyde levels found that after six months of routine washing, 7 of 27 shirts still had levels in excess of 75 ppm (the safe limit for direct skin exposure).
Mending
When the raw material – cloth – was worth more than labor, it made sense to expend labor in saving it. In past times, mending was an art. A meticulous tailor or seamstress could mend rips with thread raveled from hems and seam edges so skillfully that the tear was practically invisible. Today clothing is considered a consumable item. Mass-manufactured clothing is less expensive than the labor required to repair it. Many people buy a new piece of clothing rather than spend time mending. The thrifty still replace zippers and buttons and sew up ripped hems, however. Other mending techniques include darning and invisible mending, as well as upcycling through visible mending inspired by Japanese sashiko.
Recycling
It is estimated that 80 billion to 150 billion garments are produced annually. Used, unwearable clothing can be repurposed for quilts, rags, rugs, bandages, and many other household uses. Neutral colored or undyed cellulose fibers can be recycled into paper. In Western societies, used clothing is often thrown out or donated to charity (such as through a clothing bin). It is also sold to consignment shops, dress agencies, flea markets, and in online auctions. Also, used clothing often is collected on an industrial scale to be sorted and shipped for re-use in poorer countries. Globally, used clothes are worth $4 billion, with the U.S. as the leading exporter at $575 million.
Synthetics, which come primarily from petrochemicals, are not renewable or biodegradable.
Excess inventory of clothing is sometimes destroyed to preserve brand value.
Global trade
EU member states imported €166 billion of clothes in 2018; 51% came from outside the EU (€84 billion). EU member states exported €116 billion of clothes in 2018, including 77% to other EU member states.
According to the World Trade Organization (WTO) report, the value of global clothing exports in 2022 reached US$790.1 billion, up 10.6% from 2021. China is the world's largest clothing exporter, with a value of US$178.4 billion, accounting for 22.6% of the global market share. Next are Bangladesh (US$40.8 billion), Vietnam (US$39.8 billion), India (US$36.1 billion), and Turkey (US$29.7 billion).
In Vietnam, clothing exports continue to be one of the leading export sectors, contributing significantly to the export turnover and economic growth of the country. According to the General Department of Customs of Vietnam, the value of Vietnam's clothing exports in 2022 reached US$39.8 billion, up 14.2% from 2021. Of which, clothing exports to the United States reached US$18.8 billion, accounting for 47.3% of the market share; exports to the EU reached US$9.8 billion, accounting for 24.6% of the market share.
See also
Children's clothing
Clothing fetish
Clothing laws by country
Cotton recycling
Global trade of secondhand clothing
Higg Index
List of individual dresses
Organic cotton
Reconstructed clothing
Right to clothing
Sustainable fashion
Textile recycling
Vintage clothing
Zero-waste fashion
References
External links
Official website of the Textile and Apparel Association – scholarly publications (archived 16 February 2008)
Body (biology)
A body is the physical material of an organism. The term is used only for organisms whose material forms a single, connected whole. Some organisms change from single cells into whole multicellular organisms, for example slime molds; for them, the term 'body' refers to the multicellular stage. Other uses:
Plant body: plants are modular, with modules created by meristems; the body generally consists of both the shoot system and the root system, and its development is influenced by the environment.
Cell body: the term may be used for cells such as neurons, which have long axons (nerve fibres); the cell body is the part that contains the nucleus.
The body of a dead person is also called a corpse or cadaver. The dead bodies of vertebrate animals and insects are sometimes called carcasses.
The human body has a head, neck, torso, two arms, two legs and the genitals of the groin, which differ between males and females.
The branch of biology dealing with the study of bodies and their structural features is called morphology. Anatomy is a branch of morphology that deals with the structure of the body at a level higher than tissues. Anatomy is closely related to histology, which studies the structure of tissues, and to cytology, which studies the structure and function of the individual cells from which the tissues and organs of the organism are built. Taken together, anatomy, histology, cytology and embryology constitute morphology.
The study of functions and mechanisms in a body is physiology.
Human body
Labelled diagrams show the names of the body parts of a woman and a man.
References
Human microbiome
The human microbiome is the aggregate of all microbiota that reside on or within human tissues and biofluids along with the corresponding anatomical sites in which they reside, including the gastrointestinal tract, skin, mammary glands, seminal fluid, uterus, ovarian follicles, lung, saliva, oral mucosa, conjunctiva, and the biliary tract. Types of human microbiota include bacteria, archaea, fungi, protists, and viruses. Though micro-animals can also live on the human body, they are typically excluded from this definition. In the context of genomics, the term human microbiome is sometimes used to refer to the collective genomes of resident microorganisms; however, the term human metagenome has the same meaning.
The human body hosts many microorganisms, with approximately the same order of magnitude of non-human cells as human cells. Some microorganisms that humans host are commensal, meaning they co-exist without harming humans; others have a mutualistic relationship with their human hosts. Conversely, some non-pathogenic microorganisms can harm human hosts via the metabolites they produce, like trimethylamine, which the human body converts to trimethylamine N-oxide via FMO3-mediated oxidation. Certain microorganisms perform tasks that are known to be useful to the human host, but the role of most of them is not well understood. Those that are expected to be present, and that under normal circumstances do not cause disease, are sometimes deemed normal flora or normal microbiota.
During early life, the establishment of a diverse and balanced human microbiota plays a critical role in shaping an individual's long-term health. Studies have shown that the composition of the gut microbiota during infancy is influenced by various factors, including mode of delivery, breastfeeding, and exposure to environmental factors. There are several beneficial species of bacteria and potential probiotics present in breast milk. Research has highlighted the beneficial effects of a healthy microbiota in early life, such as the promotion of immune system development, regulation of metabolism, and protection against pathogenic microorganisms. Understanding the complex interplay between the human microbiota and early life health is crucial for developing interventions and strategies to support optimal microbiota development and improve overall health outcomes in individuals.
The Human Microbiome Project (HMP) took on the project of sequencing the genome of the human microbiota, focusing particularly on the microbiota that normally inhabit the skin, mouth, nose, digestive tract, and vagina. It reached a milestone in 2012 when it published its initial results.
Terminology
Though widely known as flora or microflora, this is a misnomer in technical terms, since the word root flora pertains to plants, and biota refers to the total collection of organisms in a particular ecosystem. Recently, the more appropriate term microbiota is applied, though its use has not eclipsed the entrenched use and recognition of flora with regard to bacteria and other microorganisms. Both terms are being used in different literature.
Relative numbers
The number of bacterial cells in the human body is estimated to be around 38 trillion, while the estimate for human cells is around 30 trillion. The number of bacterial genes is estimated to be 2 million, 100 times the number of approximately 20,000 human genes.
Study
The problem of elucidating the human microbiome is essentially identifying the members of a microbial community, which includes bacteria, eukaryotes, and viruses. This is done primarily using deoxyribonucleic acid (DNA)-based studies, though ribonucleic acid (RNA), protein and metabolite based studies are also performed. DNA-based microbiome studies typically can be categorized as either targeted amplicon studies or, more recently, shotgun metagenomic studies. The former focuses on specific known marker genes and is primarily informative taxonomically, while the latter is an entire metagenomic approach which can also be used to study the functional potential of the community. One of the challenges that is present in human microbiome studies, but not in other metagenomic studies, is to avoid including the host DNA in the study.
Aside from simply elucidating the composition of the human microbiome, one of the major questions involving the human microbiome is whether there is a "core", that is, whether there is a subset of the community that is shared among most humans. If there is a core, then it would be possible to associate certain community compositions with disease states, which is one of the goals of the HMP. It is known that the human microbiome (such as the gut microbiota) is highly variable both within a single subject and among different individuals, a phenomenon which is also observed in mice.
On 13 June 2012, a major milestone of the HMP was announced by the National Institutes of Health (NIH) director Francis Collins. The announcement was accompanied by a series of coordinated articles published in Nature and several journals in the Public Library of Science (PLoS) on the same day. By mapping the normal microbial make-up of healthy humans using genome sequencing techniques, the researchers of the HMP created a reference database and defined the boundaries of normal microbial variation in humans. From 242 healthy U.S. volunteers, more than 5,000 samples were collected from 15 (men) to 18 (women) body sites such as the mouth, nose, skin, lower intestine (stool), and vagina. All the DNA, human and microbial, was analyzed with DNA sequencing machines. The microbial genome data were extracted by identifying the bacteria-specific ribosomal RNA, 16S rRNA. The researchers calculated that more than 10,000 microbial species occupy the human ecosystem, and they have identified 81–99% of the genera.
Analysis after the processing
Statistical analysis is essential to validate the obtained results (ANOVA, for example, can be used to assess the size of the differences between groups); when paired with graphical tools, the outcome is easily visualized and understood.
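As a minimal illustration (not a prescribed pipeline), a one-way ANOVA on the relative abundance of a single taxon across sample groups can be run with SciPy; the group values below are invented for the example.

```python
# Minimal sketch: one-way ANOVA testing whether the mean relative abundance
# of one taxon differs between sample groups (all values are invented).
from scipy import stats

healthy   = [12.1, 10.4, 11.8, 13.0, 12.5]   # relative abundance (%) per sample
disease_a = [7.2, 6.9, 8.1, 7.5, 6.4]
disease_b = [11.9, 12.3, 10.8, 11.1, 12.0]

f_stat, p_value = stats.f_oneway(healthy, disease_a, disease_b)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests that at least one group mean differs; post-hoc
# pairwise tests with multiple-testing correction identify which groups.
```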
Once a metagenome is assembled, it is possible to infer the functional potential of the microbiome. The computational challenges for this type of analysis are greater than for single genomes, because metagenome assemblies are usually of poorer quality and many recovered genes are incomplete or fragmented. After the gene identification step, the data can be used to carry out a functional annotation by means of multiple alignment of the target genes against ortholog databases.
Marker gene analysis
Marker gene analysis exploits primers to target a specific genetic region and enables determination of microbial phylogenies. The targeted genetic region is characterized by a highly variable region, which can confer detailed identification, delimited by conserved regions that function as binding sites for the primers used in PCR. The main gene used to characterize bacteria and archaea is the 16S rRNA gene, while fungal identification is based on the Internal Transcribed Spacer (ITS). The technique is fast and relatively inexpensive and yields a low-resolution classification of a microbial sample; it is optimal for samples that may be contaminated by host DNA. Primer affinity varies among DNA sequences, which may introduce biases during the amplification reaction; low-abundance taxa are susceptible to over-amplification errors, since contaminating microorganisms become over-represented as the number of PCR cycles increases. Optimization of primer selection can help to decrease such errors, although it requires complete knowledge of the microorganisms present in the sample and their relative abundances.
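The idea of conserved primer-binding sites flanking a variable region can be sketched as a toy in-silico PCR; the template and primer sequences below are invented and are not real 16S or ITS primers.

```python
# Toy in-silico PCR: find the conserved primer-binding sites that flank a
# variable region and extract the amplicon between them.
# Template and primers are invented for illustration only.

def reverse_complement(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def amplify(template, fwd_primer, rev_primer):
    """Return the amplicon delimited by the two primer sites, or None."""
    start = template.find(fwd_primer)
    # The reverse primer anneals to the opposite strand, so search the given
    # strand for its reverse complement.
    rev_site = template.find(reverse_complement(rev_primer))
    if start == -1 or rev_site == -1 or rev_site <= start:
        return None
    return template[start:rev_site + len(rev_primer)]

#            conserved      variable region     conserved
template = "AAGGTTCC" + "GATTACAGATTACA" + "TTACCGGA"
fwd = "AAGGTTCC"                  # forward primer (5'->3')
rev = "TCCGGTAA"                  # reverse primer (5'->3', opposite strand)

print(amplify(template, fwd, rev))  # prints the conserved-variable-conserved span
```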
Marker gene analysis can be influenced by the choice of primers; in this kind of analysis it is desirable to use a well-validated protocol (such as the one used in the Earth Microbiome Project). The first step in a marker gene amplicon analysis is to remove sequencing errors; many sequencing platforms are very reliable, but much of the apparent sequence diversity is still due to errors introduced during the sequencing process. To reduce this phenomenon, a first approach is to cluster sequences into Operational Taxonomic Units (OTUs): this process consolidates similar sequences (a 97% similarity threshold is usually adopted) into a single feature that can be used in further analysis steps; however, this method discards SNPs because they are clustered into a single OTU. Another approach is oligotyping, which includes position-specific information from 16S rRNA sequencing to detect small nucleotide variations and to discriminate between closely related distinct taxa. These methods give as output a table of DNA sequences and counts of the different sequences per sample rather than OTUs.
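As a rough sketch of the OTU idea, greedy clustering at a 97% identity threshold might look like the following; the per-position identity function assumes equal-length sequences and is only a stand-in for the pairwise alignment used by real tools.

```python
# Minimal sketch of greedy OTU clustering: each sequence joins the first
# cluster whose representative it matches at >= 97% identity, otherwise it
# founds a new cluster.

def identity(a, b):
    assert len(a) == len(b), "toy metric assumes equal-length sequences"
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cluster_otus(sequences, threshold=0.97):
    clusters = []   # each cluster: {"rep": representative, "members": [...]}
    for seq in sequences:
        for c in clusters:
            if identity(seq, c["rep"]) >= threshold:
                c["members"].append(seq)
                break
        else:
            clusters.append({"rep": seq, "members": [seq]})
    return clusters

reads = [
    "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGT",
    "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGA",  # 1 mismatch  -> same OTU
    "TTTTACGTACGTACGTACGTACGTACGTACGTACGTACGT",  # 4 mismatches -> new OTU
]
for i, c in enumerate(cluster_otus(reads)):
    print(f"OTU {i}: {len(c['members'])} sequence(s)")
```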
Another important step in the analysis is to assign a taxonomic name to the microbial sequences in the data. This can be done using machine learning approaches that can reach a genus-level accuracy of about 80%. Other popular analysis packages provide support for taxonomic classification using exact matches to reference databases, which should provide greater specificity but poorer sensitivity. Unclassified microorganisms should be further checked for organelle sequences.
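One way such a classifier can be sketched is as a naive Bayes model over k-mer counts (in the spirit of classic amplicon classifiers); the training sequences and genus labels below are invented, and a real classifier would be trained on a curated reference database.

```python
# Minimal sketch: genus-level classification of marker-gene fragments with a
# naive Bayes model over 4-mer counts. Training data are invented.
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB

def kmer_counts(seq, k=4):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

train = [
    ("GenusA", "ACGTACGTGGCCTTAAACGTACGTGGCC"),
    ("GenusA", "ACGTACGTGGCCTTAAACGTACGTGGCA"),
    ("GenusB", "TTGGCCAATTGGCCAATTGGCCAATTGG"),
    ("GenusB", "TTGGCCAATTGGCCAATTGGCCAATAGG"),
]
labels = [label for label, _ in train]
vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform([kmer_counts(seq) for _, seq in train])

model = MultinomialNB().fit(X, labels)

query = "ACGTACGTGGCCTTAAACGTACGTGGCG"
x_query = vectorizer.transform([kmer_counts(query)])
print(model.predict(x_query)[0])                         # expected: GenusA
print(dict(zip(model.classes_, model.predict_proba(x_query)[0])))
```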
Phylogenetic analysis
Many methods that exploit phylogenetic inference use the 16S rRNA gene for Archaea and Bacteria and the 18S rRNA gene for eukaryotes. Phylogenetic comparative methods (PCS) are based on the comparison of multiple traits among microorganisms; the principle is that the more closely related they are, the more traits they share. Usually PCS are coupled with phylogenetic generalized least squares (PGLS) or other statistical analyses to obtain more significant results. Ancestral state reconstruction is used in microbiome studies to impute trait values for taxa whose traits are unknown; this is commonly performed with PICRUSt, which relies on available databases. Phylogenetic variables are chosen by researchers according to the type of study: by selecting variables that carry significant biological information, it is possible to reduce the dimensionality of the data to be analysed.
Phylogeny-aware distances are usually computed with UniFrac or similar measures, such as Sørensen's index or Rao's D, to quantify the differences between communities. All these methods are negatively affected by horizontal gene transfer (HGT), since it can generate errors and lead to the correlation of distant species. There are different ways to reduce the negative impact of HGT: using multiple genes, or using computational tools to assess the probability of putative HGT events.
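A self-contained, simplified unweighted UniFrac calculation on a tiny hand-built tree is sketched below; production analyses use dedicated libraries and full phylogenies, so this only shows how unique and shared branch lengths enter the distance.

```python
# Simplified unweighted UniFrac on a small rooted tree.
# A node is (branch_length, children_list) or (branch_length, leaf_name);
# communities are sets of leaf names.

def unifrac(tree, comm_a, comm_b):
    unique = 0.0     # branch length leading to only one of the communities
    observed = 0.0   # branch length leading to either community

    def walk(node):
        nonlocal unique, observed
        length, payload = node
        if isinstance(payload, str):               # leaf
            in_a, in_b = payload in comm_a, payload in comm_b
        else:                                      # internal node
            in_a = in_b = False
            for child in payload:
                child_a, child_b = walk(child)
                in_a, in_b = in_a or child_a, in_b or child_b
        if in_a or in_b:
            observed += length
            if in_a != in_b:
                unique += length
        return in_a, in_b

    walk(tree)
    return unique / observed if observed else 0.0

# ((L1:1,L2:1):1,(L3:1,L4:1):1); in Newick-like notation
tree = (0.0, [
    (1.0, [(1.0, "L1"), (1.0, "L2")]),
    (1.0, [(1.0, "L3"), (1.0, "L4")]),
])

print(unifrac(tree, {"L1", "L2"}, {"L3", "L4"}))  # 1.0: no shared branches
print(unifrac(tree, {"L1", "L3"}, {"L1", "L4"}))  # 0.4: partial overlap
```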
Ecological Network analysis
Microbial communities develop through complex dynamics and can be viewed and analyzed as an ecosystem. The ecological interactions between microbes govern the community's change, equilibrium and stability, and can be represented by a population dynamics model.
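One commonly used population dynamics model for microbial communities is the generalized Lotka–Volterra model; the sketch below integrates it with a simple forward Euler step, using invented growth rates and interaction coefficients.

```python
# Minimal sketch: generalized Lotka-Volterra dynamics for a 3-species
# community, dx_i/dt = x_i * (r_i + sum_j A_ij * x_j). All parameters are
# invented; real studies fit r and A to time-series abundance data.
import numpy as np

r = np.array([1.0, 0.8, 1.2])          # intrinsic growth rates
A = np.array([[-1.0, -0.2, -0.1],      # interaction matrix; negative diagonal
              [-0.1, -1.0, -0.3],      # terms encode self-limitation
              [-0.2, -0.1, -1.0]])
x = np.array([0.1, 0.1, 0.1])          # initial abundances

dt, steps = 0.01, 5000                 # forward Euler integration
for _ in range(steps):
    x = x + dt * x * (r + A @ x)

print("steady-state abundances:", np.round(x, 3))
```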
The ongoing study of the ecological features of the microbiome is growing rapidly and is helping to reveal the fundamental properties of the microbiome. Understanding the underlying rules of microbial communities could help in treating diseases related to unstable microbial communities.
A basic question is whether different humans, who harbor different microbial communities, share the same underlying microbial dynamics. Increasing evidence indicates that the dynamics are indeed universal. Answering this question is a basic step that will allow scientists to develop treatment strategies based on the complex dynamics of human microbial communities.
Other important properties must also be taken into account when developing intervention strategies for controlling human microbial dynamics; controlling microbial communities could help resolve serious and harmful diseases.
Types
Bacteria
Populations of microbes (such as bacteria and yeasts) inhabit the skin and mucosal surfaces in various parts of the body. Their role forms part of normal, healthy human physiology, however if microbe numbers grow beyond their typical ranges (often due to a compromised immune system) or if microbes populate (such as through poor hygiene or injury) areas of the body normally not colonized or sterile (such as the blood, or the lower respiratory tract, or the abdominal cavity), disease can result (causing, respectively, bacteremia/sepsis, pneumonia, and peritonitis).
The Human Microbiome Project found that individuals host thousands of bacterial types, different body sites having their own distinctive communities. Skin and vaginal sites showed smaller diversity than the mouth and gut, these showing the greatest richness. The bacterial makeup for a given site on a body varies from person to person, not only in type, but also in abundance. Bacteria of the same species found throughout the mouth are of multiple subtypes, preferring to inhabit distinctly different locations in the mouth. Even the enterotypes in the human gut, previously thought to be well understood, are from a broad spectrum of communities with blurred taxon boundaries.
It is estimated that 500 to 1,000 species of bacteria live in the human gut but belong to just a few phyla: Bacillota and Bacteroidota dominate but there are also Pseudomonadota, Verrucomicrobiota, Actinobacteriota, Fusobacteriota, and "Cyanobacteria".
A number of types of bacteria, such as Actinomyces viscosus and A. naeslundii, live in the mouth, where they are part of a sticky substance called plaque. If this is not removed by brushing, it hardens into calculus (also called tartar). The same bacteria also secrete acids that dissolve tooth enamel, causing tooth decay.
The vaginal microflora consists mostly of various Lactobacillus species. It was long thought that the most common of these species was Lactobacillus acidophilus, but it has later been shown that L. iners is in fact most common, followed by L. crispatus. Other lactobacilli found in the vagina are L. jensenii, L. delbrueckii and L. gasseri. Disturbance of the vaginal flora can lead to infections such as bacterial vaginosis and candidiasis.
Archaea
Archaea are present in the human gut, but, in contrast to the enormous variety of bacteria in this organ, the numbers of archaeal species are much more limited. The dominant group are the methanogens, particularly Methanobrevibacter smithii and Methanosphaera stadtmanae. However, colonization by methanogens is variable, and only about 50% of humans have easily detectable populations of these organisms.
As of 2007, no clear examples of archaeal pathogens were known, although a relationship has been proposed between the presence of some methanogens and human periodontal disease. Methane-dominant small intestinal bacterial overgrowth (SIBO) is also predominantly caused by methanogens, and Methanobrevibacter smithii in particular.
Fungi
Fungi, in particular yeasts, are present in the human gut. The best-studied of these are Candida species due to their ability to become pathogenic in immunocompromised and even in healthy hosts. Yeasts are also present on the skin, such as Malassezia species, where they consume oils secreted from the sebaceous glands.
Viruses
Viruses, especially bacterial viruses (bacteriophages), colonize various body sites. These colonized sites include the skin, gut, lungs, and oral cavity. Virus communities have been associated with some diseases, and do not simply reflect the bacterial communities.
In January 2024, biologists reported the discovery of "obelisks", a new class of viroid-like elements, and "oblins", their related group of proteins, in the human microbiome.
Anatomical areas
Skin
A study of 20 skin sites on each of ten healthy humans found 205 identified genera in 19 bacterial phyla, with most sequences assigned to four phyla: Actinomycetota (51.8%), Bacillota (24.4%), Pseudomonadota (16.5%), and Bacteroidota (6.3%). A large number of fungal genera are present on healthy human skin, with some variability by region of the body; however, during pathological conditions, certain genera tend to dominate in the affected region. For example, Malassezia is dominant in atopic dermatitis and Acremonium is dominant on dandruff-affected scalps.
The skin acts as a barrier to deter the invasion of pathogenic microbes. The human skin contains microbes that reside either in or on the skin and can be residential or transient. Resident microorganism types vary in relation to skin type on the human body. A majority of microbes reside on superficial cells of the skin or prefer to associate with glands. These glands, such as oil or sweat glands, provide the microbes with water, amino acids, and fatty acids. In addition, resident bacteria that are associated with oil glands are often Gram-positive and can be pathogenic.
Conjunctiva
A small number of bacteria and fungi are normally present in the conjunctiva. Bacteria present include Gram-positive cocci (e.g., Staphylococcus and Streptococcus) and Gram-negative rods and cocci (e.g., Haemophilus and Neisseria). Fungal genera include Candida, Aspergillus, and Penicillium. The lachrymal glands continuously secrete tears, keeping the conjunctiva moist, while intermittent blinking lubricates the conjunctiva and washes away foreign material. Tears contain bactericides such as lysozyme, so that microorganisms have difficulty surviving the lysozyme and settling on the epithelial surfaces.
Gastrointestinal tract
In humans, the composition of the gastrointestinal microbiome is established during birth. Birth by Cesarean section or vaginal delivery also influences the gut's microbial composition. Babies born through the vaginal canal have non-pathogenic, beneficial gut microbiota similar to those found in the mother. However, the gut microbiota of babies delivered by C-section harbors more pathogenic bacteria such as Escherichia coli and Staphylococcus and it takes longer to develop non-pathogenic, beneficial gut microbiota.
The relationship between some gut microbiota and humans is not merely commensal (a non-harmful coexistence), but rather a mutualistic relationship. Some human gut microorganisms benefit the host by fermenting dietary fiber into short-chain fatty acids (SCFAs), such as acetic acid and butyric acid, which are then absorbed by the host. Intestinal bacteria also play a role in synthesizing vitamin B and vitamin K as well as metabolizing bile acids, sterols, and xenobiotics. The SCFAs and other compounds they produce act much like hormones, and the gut flora itself appears to function like an endocrine organ; dysregulation of the gut flora has been correlated with a host of inflammatory and autoimmune conditions.
The composition of human gut microbiota changes over time, when the diet changes, and as overall health changes. A systematic review of 15 human randomized controlled trials from July 2016 found that certain commercially available strains of probiotic bacteria from the Bifidobacterium and Lactobacillus genera (B. longum, B. breve, B. infantis, L. helveticus, L. rhamnosus, L. plantarum, and L. casei), when taken by mouth in daily doses of 10⁹–10¹⁰ colony-forming units (CFU) for 1–2 months, possess treatment efficacy (i.e., improve behavioral outcomes) in certain central nervous system disorders – including anxiety, depression, autism spectrum disorder, and obsessive–compulsive disorder – and improve certain aspects of memory.
Urethra and bladder
The genitourinary system appears to have a microbiota, which is an unexpected finding in light of the long-standing use of standard clinical microbiological culture methods to detect bacteria in urine when people show signs of a urinary tract infection; it is common for these tests to show no bacteria present. It appears that common culture methods do not detect many kinds of bacteria and other microorganisms that are normally present. As of 2017, sequencing methods were used to identify these microorganisms to determine if there are differences in microbiota between people with urinary tract problems and those who are healthy. To properly assess the microbiome of the bladder as opposed to the genitourinary system, the urine specimen should be collected directly from the bladder, which is often done with a catheter.
Vagina
Vaginal microbiota refers to those species and genera that colonize the vagina. These organisms play an important role in protecting against infections and maintaining vaginal health. The most abundant vaginal microorganisms found in premenopausal women are from the genus Lactobacillus, which suppress pathogens by producing hydrogen peroxide and lactic acid. Bacterial species composition and ratios vary depending on the stage of the menstrual cycle. Ethnicity also influences vaginal flora. The occurrence of hydrogen peroxide-producing lactobacilli is lower in African American women and vaginal pH is higher. Other influential factors such as sexual intercourse and antibiotics have been linked to the loss of lactobacilli. Moreover, studies have found that sexual intercourse with a condom does appear to change lactobacilli levels, and does increase the level of Escherichia coli within the vaginal flora. Changes in the normal, healthy vaginal microbiota is an indication of infections, such as candidiasis or bacterial vaginosis. Candida albicans inhibits the growth of Lactobacillus species, while Lactobacillus species which produce hydrogen peroxide inhibit the growth and virulence of Candida albicans in both the vagina and the gut.
Fungal genera that have been detected in the vagina include Candida, Pichia, Eurotium, Alternaria, Rhodotorula, and Cladosporium, among others.
Placenta
Until recently the placenta was considered to be a sterile organ, but commensal, nonpathogenic bacterial species and genera have been identified that reside in the placental tissue. However, the existence of a microbiome in the placenta is controversial and has been criticized in several studies. The so-called "placental microbiome" is likely derived from contamination of reagents, because low-biomass samples are easily contaminated.
Uterus
Until recently, the upper reproductive tract of women was considered to be a sterile environment. A variety of microorganisms inhabit the uterus of healthy, asymptomatic women of reproductive age. The microbiome of the uterus differs significantly from that of the vagina and gastrointestinal tract.
Oral cavity
The environment present in the human mouth allows the growth of characteristic microorganisms found there. It provides a source of water and nutrients, as well as a moderate temperature. Resident microbes of the mouth adhere to the teeth and gums to resist mechanical flushing from the mouth to stomach where acid-sensitive microbes are destroyed by hydrochloric acid.
Anaerobic bacteria in the oral cavity include: Actinomyces, Arachnia, Bacteroides, Bifidobacterium, Eubacterium, Fusobacterium, Lactobacillus, Leptotrichia, Peptococcus, Peptostreptococcus, Propionibacterium, Selenomonas, Treponema, and Veillonella. Genera of fungi that are frequently found in the mouth include Candida, Cladosporium, Aspergillus, Fusarium, Glomus, Alternaria, Penicillium, and Cryptococcus, among others.
Bacteria accumulate on both the hard and soft oral tissues in biofilms, allowing them to adhere and thrive in the oral environment while protected from environmental factors and antimicrobial agents. Saliva plays a key role in biofilm homeostasis, allowing recolonization by bacteria and controlling biofilm growth by detaching buildup. It also provides nutrients and temperature regulation. The location of the biofilm determines the type of nutrients it is exposed to.
Oral bacteria have evolved mechanisms to sense their environment and evade or modify the host. However, a highly efficient innate host defense system constantly monitors the bacterial colonization and prevents bacterial invasion of local tissues. A dynamic equilibrium exists between dental plaque bacteria and the innate host defense system.
This dynamic between host oral cavity and oral microbes plays a key role in health and disease as it provides entry into the body.
A healthy equilibrium presents a symbiotic relationship in which oral microbes limit the growth and adherence of pathogens while the host provides an environment for them to flourish. Ecological changes, such as a change of immune status, a shift in resident microbes, or altered nutrient availability, can shift this relationship from mutualistic to parasitic, leaving the host prone to oral and systemic disease. Systemic diseases such as diabetes and cardiovascular disease have been correlated with poor oral health. Of particular interest is the role of oral microorganisms in the two major dental diseases: dental caries and periodontal disease. Pathogen colonization at the periodontium causes an excessive immune response, resulting in a periodontal pocket, a deepened space between the tooth and gingiva. This acts as a protected, blood-rich reservoir with nutrients for anaerobic pathogens. Systemic disease at various sites of the body can result from oral microbes entering the blood through periodontal pockets and oral membranes.
Persistent, proper oral hygiene is the primary method for preventing oral and systemic disease. It reduces the density of biofilm and the overgrowth of potentially pathogenic bacteria that result in disease. However, proper oral hygiene may not be enough, as the oral microbiome, genetics, and changes to the immune response all play a role in developing chronic infections. Use of antibiotics can treat an already spreading infection but is ineffective against bacteria within biofilms.
Nasal cavity
The healthy nasal microbiome is dominated by Corynebacterium and Staphylococcus. The mucosal microbiome plays a critical role in modulating viral infection.
Lung
Much like the oral cavity, the upper and lower respiratory system possess mechanical deterrents to remove microbes. Goblet cells produce mucus which traps microbes and moves them out of the respiratory system via continuously moving ciliated epithelial cells. In addition, a bactericidal effect is generated by nasal mucus which contains the enzyme lysozyme. The upper and lower respiratory tract appears to have its own set of microbiota. Pulmonary bacterial microbiota belong to 9 major bacterial genera: Prevotella, Sphingomonas, Pseudomonas, Acinetobacter, Fusobacterium, Megasphaera, Veillonella, Staphylococcus, and Streptococcus. Some of the bacteria considered "normal biota" in the respiratory tract can cause serious disease especially in immunocompromised individuals; these include Streptococcus pyogenes, Haemophilus influenzae, Streptococcus pneumoniae, Neisseria meningitidis, and Staphylococcus aureus. Fungal genera that compose the pulmonary mycobiome include Candida, Malassezia, Neosartorya, Saccharomyces, and Aspergillus, among others.
Unusual distributions of bacterial and fungal genera in the respiratory tract are observed in people with cystic fibrosis. Their bacterial flora often contains antibiotic-resistant and slow-growing bacteria, and the frequency of these pathogens changes in relation to age.
Biliary tract
Traditionally the biliary tract has been considered to be normally sterile, and the presence of microorganisms in bile has been regarded as a marker of a pathological process. This assumption was supported by failures to isolate bacterial strains from the normal bile duct. Papers began emerging in 2013 showing that the normal biliary microbiota is a separate functional layer which protects the biliary tract from colonization by exogenous microorganisms.
Disease and death
Human bodies rely on the innumerable bacterial genes as the source of essential nutrients. Both metagenomic and epidemiological studies indicate vital roles for the human microbiome in preventing a wide range of diseases, from type 2 diabetes and obesity to inflammatory bowel disease, Parkinson's disease, and even mental health conditions like depression. A symbiotic relationship between the gut microbiota and different bacteria may influence an individual's immune response. Metabolites generated by gut microbes appear to be causative factors in type 2 diabetes. Although in its infancy, microbiome-based treatment is also showing promise, most notably for treating drug-resistant C. difficile infection and in diabetes treatment.
Clostridioides difficile infection
An overwhelming presence of the bacterium C. difficile leads to an infection of the gastrointestinal tract, normally associated with dysbiosis of the microbiota believed to have been caused by the administration of antibiotics. Use of antibiotics eradicates the beneficial gut flora within the gastrointestinal tract, which normally prevents pathogenic bacteria from establishing dominance. Traditional treatment for C. difficile infections includes an additional regimen of antibiotics; however, efficacy rates average between 20 and 30%. Recognizing the importance of healthy gut bacteria, researchers turned to a procedure known as fecal microbiota transplant (FMT), in which patients experiencing gastrointestinal diseases, such as C. difficile infection (CDI), receive fecal content from a healthy individual in hopes of restoring a normally functioning intestinal microbiota. Fecal microbiota transplant is approximately 85–90% effective in people with CDI for whom antibiotics have not worked or in whom the disease recurs following antibiotics. Most people with CDI recover with one FMT treatment.
Cancer
Although cancer is generally a disease of host genetics and environmental factors, microorganisms are implicated in some 20% of human cancers. Colon cancer illustrates the potential role of microbes: bacterial density in the colon is roughly one million times higher than in the small intestine, and approximately 12-fold more cancers occur in the colon than in the small intestine, possibly suggesting a pathogenic role for microbiota in colon and rectal cancers. Microbial density may be used as a prognostic tool in the assessment of colorectal cancers.
The microbiota may affect carcinogenesis in three broad ways: (i) altering the balance of tumor cell proliferation and death, (ii) regulating immune system function, and (iii) influencing the metabolism of host-produced factors, foods and pharmaceuticals. Tumors arising at boundary surfaces, such as the skin, oropharynx and respiratory, digestive and urogenital tracts, harbor a microbiota. Substantial microbe presence at a tumor site does not establish association or causal links; instead, microbes may find tumor oxygen tension or nutrient profile supportive. Decreased populations of specific microbes or induced oxidative stress may also increase risks. Of the roughly 10³⁰ microbes on Earth, ten are designated by the International Agency for Research on Cancer as human carcinogens. Microbes may secrete proteins or other factors that directly drive cell proliferation in the host, or may up- or down-regulate the host immune system, including driving acute or chronic inflammation in ways that contribute to carcinogenesis.
Concerning the relationship of immune function and development of inflammation, mucosal surface barriers are subject to environmental risks and must rapidly repair to maintain homeostasis. Compromised host or microbiota resiliency also reduce resistance to malignancy, possibly inducing inflammation and cancer. Once barriers are breached, microbes can elicit proinflammatory or immunosuppressive programs through various pathways. For example, cancer-associated microbes appear to activate NF-κΒ signaling within the tumor microenvironment. Other pattern recognition receptors, such as nucleotide-binding oligomerization domain–like receptor (NLR) family members NOD-2, NLRP3, NLRP6 and NLRP12, may play a role in mediating colorectal cancer. Likewise Helicobacter pylori appears to increase the risk of gastric cancer, due to its driving a chronic inflammatory response in the stomach.
Inflammatory bowel disease
Inflammatory bowel disease consists of two different diseases: ulcerative colitis and Crohn's disease and both of these diseases present with disruptions in the gut microbiota (also known as dysbiosis). This dysbiosis presents itself in the form of decreased microbial diversity in the gut, and is correlated to defects in host genes that changes the innate immune response in individuals.
Human immunodeficiency virus
The HIV disease progression influences the composition and function of the gut microbiota, with notable differences between HIV-negative, HIV-positive, and post-ART HIV-positive populations. HIV decreases the integrity of the gut epithelial barrier function by affecting tight junctions. This breakdown allows for translocation across the gut epithelium, which is thought to contribute to increases in inflammation seen in people with HIV.
Vaginal microbiota plays a role in the infectivity of HIV, with an increased risk of infection and transmission when the woman has bacterial vaginosis, a condition characterized by an abnormal balance of vaginal bacteria. The enhanced infectivity is seen with the increase in pro-inflammatory cytokines and CCR5 + CD4+ cells in the vagina. However, a decrease in infectivity is seen with increased levels of vaginal Lactobacillus, which promotes an anti-inflammatory condition.
Gut microbiome of centenarians
Humans who are 100 years old or older, called centenarians, have a distinct gut microbiome. This microbiome is characteristically enriched in microorganisms that are able to synthesize novel secondary bile acids. These secondary bile acids include various isoforms of lithocholic acid that may contribute to healthy aging.
Death
With death, the microbiome of the living body collapses and a different composition of microorganisms named necrobiome establishes itself as an important active constituent of the complex physical decomposition process. Its predictable changes over time are thought to be useful to help determine the time of death.
Environmental health
Studies in 2009 questioned whether the decline in biota (including microfauna) as a result of human intervention might impede human health, hospital safety procedures, food product design, and treatments of disease.
Changes, modulation and transmission
Hygiene, probiotics, prebiotics, synbiotics, microbiota transplants (fecal or skin), antibiotics, exercise, diet, breastfeeding, and aging can change the human microbiome across various anatomical systems or regions such as the skin and gut.
Person-to-person transmission
The human microbiome is transmitted between a mother and her children, as well as between people living in the same household.
Research
Migration
Primary research indicates that immediate changes in the microbiota may occur when a person migrates from one country to another, such as when Thai immigrants settled in the United States or when Latin Americans immigrated into the United States. Losses of microbiota diversity were greater in obese individuals and children of immigrants.
Cellulose digestion
A 2024 study suggests that gut microbiota capable of digesting cellulose can be found in the human microbiome, and they are less abundant in people living in industrialized societies.
See also
Human Microbiome Project
Human milk microbiome
Human virome
Hygiene hypothesis
Initial acquisition of microbiota
Microbiome
Microbiome Immunity Project
Microorganism
Bibliography
Ed Yong. I Contain Multitudes: The Microbes Within Us and a Grander View of Life. 368 pages. Published 9 August 2016 by Ecco.
References
External links
The Secret World Inside You Exhibit 2015–2016, American Museum of Natural History
FAQ: Human Microbiome, January 2014 American Society For Microbiology
Anatomy
Anatomy is the branch of morphology concerned with the study of the internal structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine, and is often studied alongside physiology.
Anatomy is a complex and dynamic field that is constantly evolving as discoveries are made. In recent years, there has been a significant increase in the use of advanced imaging techniques, such as MRI and CT scans, which allow for more detailed and accurate visualizations of the body's structures.
The discipline of anatomy is divided into macroscopic and microscopic parts. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells.
The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th-century medical imaging techniques, including X-ray, ultrasound, and magnetic resonance imaging.
Etymology and definition
Derived from the Greek anatomē "dissection" (from anatémnō "I cut up, cut open" from ἀνά aná "up", and τέμνω témnō "I cut"), anatomy is the scientific study of the structure of organisms including their systems, organs and tissues. It includes the appearance and position of the various parts, the materials from which they are composed, and their relationships with other parts. Anatomy is quite distinct from physiology and biochemistry, which deal respectively with the functions of those parts and the chemical processes involved. For example, an anatomist is concerned with the shape, size, position, structure, blood supply and innervation of an organ such as the liver; while a physiologist is interested in the production of bile, the role of the liver in nutrition and the regulation of bodily functions.
The discipline of anatomy can be subdivided into a number of branches, including gross or macroscopic anatomy and microscopic anatomy. Gross anatomy is the study of structures large enough to be seen with the naked eye, and also includes superficial anatomy or surface anatomy, the study by sight of the external body features. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition). Regional anatomy is the study of the interrelationships of all of the structures in a specific body region, such as the abdomen. In contrast, systemic anatomy is the study of the structures that make up a discrete body system—that is, a group of structures that work together to perform a unique body function, such as the digestive system.
Anatomy can be studied using both invasive and non-invasive methods with the goal of obtaining information about the structure and organization of organs and systems. Methods used include dissection, in which a body is opened and its organs studied, and endoscopy, in which a video camera-equipped instrument is inserted through a small incision in the body wall and used to explore the internal organs and other structures. Angiography using X-rays or magnetic resonance angiography are methods to visualize blood vessels.
The term "anatomy" is commonly taken to refer to human anatomy. However, substantially similar structures and tissues are found throughout the rest of the animal kingdom, and the term also includes the anatomy of other animals. The term zootomy is also sometimes used to specifically refer to non-human animals. The structure and tissues of plants are of a dissimilar nature and they are studied in plant anatomy.
Animal tissues
The kingdom Animalia contains multicellular organisms that are heterotrophic and motile (although some have secondarily adopted a sessile lifestyle). Most animals have bodies differentiated into separate tissues and these animals are also known as eumetazoans. They have an internal digestive chamber, with one or two openings; the gametes are produced in multicellular sex organs, and the zygotes include a blastula stage in their embryonic development. Metazoans do not include the sponges, which have undifferentiated cells.
Unlike plant cells, animal cells have neither a cell wall nor chloroplasts. Vacuoles, when present, are more numerous and much smaller than those in the plant cell. The body tissues are composed of numerous types of cells, including those found in muscles, nerves and skin. Each typically has a cell membrane formed of phospholipids, cytoplasm and a nucleus. All of the different cells of an animal are derived from the embryonic germ layers. Those simpler invertebrates which are formed from two germ layers of ectoderm and endoderm are called diploblastic and the more developed animals whose structures and organs are formed from three germ layers are called triploblastic. All of a triploblastic animal's tissues and organs are derived from the three germ layers of the embryo, the ectoderm, mesoderm and endoderm.
Animal tissues can be grouped into four basic types: connective, epithelial, muscle and nervous tissue.
Connective tissue
Connective tissues are fibrous and made up of cells scattered among inorganic material called the extracellular matrix. Often called fascia (from the Latin "fascia," meaning "band" or "bandage"), connective tissues give shape to organs and hold them in place. The main types are loose connective tissue, adipose tissue, fibrous connective tissue, cartilage and bone. The extracellular matrix contains proteins, the chief and most abundant of which is collagen. Collagen plays a major part in organizing and maintaining tissues. The matrix can be modified to form a skeleton to support or protect the body. An exoskeleton is a thickened, rigid cuticle which is stiffened by mineralization, as in crustaceans, or by the cross-linking of its proteins, as in insects. An endoskeleton is internal and present in all developed animals, as well as in many of those less developed.
Epithelium
Epithelial tissue is composed of closely packed cells, bound to each other by cell adhesion molecules, with little intercellular space. Epithelial cells can be squamous (flat), cuboidal or columnar and rest on a basal lamina, the upper layer of the basement membrane; the lower layer, the reticular lamina, lies next to the connective tissue in the extracellular matrix secreted by the epithelial cells. There are many different types of epithelium, modified to suit a particular function. In the respiratory tract there is a type of ciliated epithelial lining; in the small intestine there are microvilli on the epithelial lining and in the large intestine there are intestinal villi. Skin consists of an outer layer of keratinized stratified squamous epithelium that covers the exterior of the vertebrate body. Keratinocytes make up to 95% of the cells in the skin. The epithelial cells on the external surface of the body typically secrete an extracellular matrix in the form of a cuticle. In simple animals this may just be a coat of glycoproteins. In more advanced animals, many glands are formed of epithelial cells.
Muscle tissue
Muscle cells (myocytes) form the active contractile tissue of the body. Muscle tissue functions to produce force and cause motion, either locomotion or movement within internal organs. Muscle is formed of contractile filaments and is separated into three main types: smooth muscle, skeletal muscle and cardiac muscle. Smooth muscle has no striations when examined microscopically. It contracts slowly but maintains contractility over a wide range of stretch lengths. It is found in such organs as sea anemone tentacles and the body wall of sea cucumbers. Skeletal muscle contracts rapidly but has a limited range of extension. It is found in the movement of appendages and jaws. Obliquely striated muscle is intermediate between the other two. The filaments are staggered and this is the type of muscle found in earthworms that can extend slowly or make rapid contractions. In higher animals striated muscles occur in bundles attached to bone to provide movement and are often arranged in antagonistic sets. Smooth muscle is found in the walls of the uterus, bladder, intestines, stomach, oesophagus, respiratory airways, and blood vessels. Cardiac muscle is found only in the heart, allowing it to contract and pump blood round the body.
Nervous tissue
Nervous tissue is composed of many nerve cells known as neurons which transmit information. In some slow-moving radially symmetrical marine animals such as ctenophores and cnidarians (including sea anemones and jellyfish), the nerves form a nerve net, but in most animals they are organized longitudinally into bundles. In simple animals, receptor neurons in the body wall cause a local reaction to a stimulus. In more complex animals, specialized receptor cells such as chemoreceptors and photoreceptors are found in groups and send messages along neural networks to other parts of the organism. Neurons can be connected together in ganglia. In higher animals, specialized receptors are the basis of sense organs and there is a central nervous system (brain and spinal cord) and a peripheral nervous system. The latter consists of sensory nerves that transmit information from sense organs and motor nerves that influence target organs. The peripheral nervous system is divided into the somatic nervous system which conveys sensation and controls voluntary muscle, and the autonomic nervous system which involuntarily controls smooth muscle, certain glands and internal organs, including the stomach.
Vertebrate anatomy
All vertebrates have a similar basic body plan and at some point in their lives, mostly in the embryonic stage, share the major chordate characteristics: a stiffening rod, the notochord; a dorsal hollow tube of nervous material, the neural tube; pharyngeal arches; and a tail posterior to the anus. The spinal cord is protected by the vertebral column and is above the notochord, and the gastrointestinal tract is below it. Nervous tissue is derived from the ectoderm, connective tissues are derived from mesoderm, and gut is derived from the endoderm. At the posterior end is a tail which continues the spinal cord and vertebrae but not the gut. The mouth is found at the anterior end of the animal, and the anus at the base of the tail. The defining characteristic of a vertebrate is the vertebral column, formed in the development of the segmented series of vertebrae. In most vertebrates the notochord becomes the nucleus pulposus of the intervertebral discs. However, a few vertebrates, such as the sturgeon and the coelacanth, retain the notochord into adulthood. Jawed vertebrates are typified by paired appendages, fins or legs, which may be secondarily lost. The limbs of vertebrates are considered to be homologous because the same underlying skeletal structure was inherited from their last common ancestor. This is one of the arguments put forward by Charles Darwin to support his theory of evolution.
Fish anatomy
The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage, in cartilaginous fish, or bone in bony fish. The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays, which with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk. The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and on round the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, and these respond to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with numerous primitive anatomical features similar to those of ancient fish, including skeletons composed of cartilage. Their bodies tend to be dorso-ventrally flattened, they usually have five pairs of gill slits and a large mouth set on the underside of the head. The dermis is covered with separate dermal placoid scales. They have a cloaca into which the urinary and genital passages open, but not a swim bladder. Cartilaginous fish produce a small number of large, yolky eggs. Some species are ovoviviparous and the young develop internally but others are oviparous and the larvae develop externally in egg cases.
The bony fish lineage shows more derived anatomical traits, often with major evolutionary changes from the features of ancient fish. They have a bony skeleton, are generally laterally flattened, have five pairs of gills protected by an operculum, and a mouth at or near the tip of the snout. The dermis is covered with overlapping scales. Bony fish have a swim bladder which helps them maintain a constant depth in the water column, but not a cloaca. They mostly spawn a large number of small eggs with little yolk which they broadcast into the water column.
Amphibian anatomy
Amphibians are a class of animals comprising frogs, salamanders and caecilians. They are tetrapods, but the caecilians and a few species of salamander have either no limbs or their limbs are much reduced in size. Their main bones are hollow and lightweight and are fully ossified, and the vertebrae interlock with each other and have articular processes. Their ribs are usually short and may be fused to the vertebrae. Their skulls are mostly broad and short, and are often incompletely ossified. Their skin contains little keratin and lacks scales, but contains many mucous glands and in some species, poison glands. The hearts of amphibians have three chambers, two atria and one ventricle. They have a urinary bladder and nitrogenous waste products are excreted primarily as urea. Amphibians breathe by means of buccal pumping, a pump action in which air is first drawn into the buccopharyngeal region through the nostrils. The nostrils are then closed and the air is forced into the lungs by contraction of the throat. They supplement this with gas exchange through the skin, which needs to be kept moist.
In frogs the pelvic girdle is robust and the hind legs are much longer and stronger than the forelimbs. The feet have four or five digits and the toes are often webbed for swimming or have suction pads for climbing. Frogs have large eyes and no tail. Salamanders resemble lizards in appearance; their short legs project sideways, the belly is close to or in contact with the ground and they have a long tail. Caecilians superficially resemble earthworms and are limbless. They burrow by means of zones of muscle contractions which move along the body and they swim by undulating their body from side to side.
Reptile anatomy
Reptiles are a class of animals comprising turtles, tuataras, lizards, snakes and crocodiles. They are tetrapods, but the snakes and a few species of lizard either have no limbs or their limbs are much reduced in size. Their bones are better ossified and their skeletons stronger than those of amphibians. The teeth are conical and mostly uniform in size. The surface cells of the epidermis are modified into horny scales which create a waterproof layer. Reptiles are unable to use their skin for respiration as amphibians do, and have a more efficient respiratory system, drawing air into their lungs by expanding their chest walls. The heart resembles that of the amphibian but there is a septum which more completely separates the oxygenated and deoxygenated bloodstreams. The reproductive system has evolved for internal fertilization, with a copulatory organ present in most species. The eggs are surrounded by amniotic membranes which prevent them from drying out, and are laid on land or develop internally in some species. The bladder is small as nitrogenous waste is excreted as uric acid.
Turtles are notable for their protective shells. They have an inflexible trunk encased in a horny carapace above and a plastron below. These are formed from bony plates embedded in the dermis which are overlain by horny ones and are partially fused with the ribs and spine. The neck is long and flexible and the head and the legs can be drawn back inside the shell. Many turtles are herbivorous, and the typical reptile teeth have been replaced by sharp, horny plates. In aquatic species, the front legs are modified into flippers.
Tuataras superficially resemble lizards but the lineages diverged in the Triassic period. There is one living species, Sphenodon punctatus. The skull has two openings (fenestrae) on either side and the jaw is rigidly attached to the skull. There is one row of teeth in the lower jaw and this fits between the two rows in the upper jaw when the animal chews. The teeth are merely projections of bony material from the jaw and eventually wear down. The brain and heart are more primitive than those of other reptiles, and the lungs have a single chamber and lack bronchi. The tuatara has a well-developed parietal eye on its forehead.
Lizards have skulls with only one fenestra on each side, the lower bar of bone below the second fenestra having been lost. This results in the jaws being less rigidly attached which allows the mouth to open wider. Lizards are mostly quadrupeds, with the trunk held off the ground by short, sideways-facing legs, but a few species have no limbs and resemble snakes. Lizards have moveable eyelids, eardrums are present and some species have a central parietal eye.
Snakes are closely related to lizards, having branched off from a common ancestral lineage during the Cretaceous period, and they share many of the same features. The skeleton consists of a skull, a hyoid bone, spine and ribs though a few species retain a vestige of the pelvis and rear limbs in the form of pelvic spurs. The bar under the second fenestra has also been lost and the jaws have extreme flexibility allowing the snake to swallow its prey whole. Snakes lack moveable eyelids, the eyes being covered by transparent "spectacle" scales. They do not have eardrums but can detect ground vibrations through the bones of their skull. Their forked tongues are used as organs of taste and smell and some species have sensory pits on their heads enabling them to locate warm-blooded prey.
Crocodilians are large, low-slung aquatic reptiles with long snouts and large numbers of teeth. The head and trunk are dorso-ventrally flattened and the tail is laterally compressed. It undulates from side to side to force the animal through the water when swimming. The tough keratinized scales provide body armour and some are fused to the skull. The nostrils, eyes and ears are elevated above the top of the flat head enabling them to remain above the surface of the water when the animal is floating. Valves seal the nostrils and ears when it is submerged. Unlike other reptiles, crocodilians have hearts with four chambers allowing complete separation of oxygenated and deoxygenated blood.
Bird anatomy
Birds are tetrapods, but though their hind limbs are used for walking or hopping, their front limbs are wings covered with feathers and adapted for flight. Birds are endothermic, have a high metabolic rate, a light skeletal system and powerful muscles. The long bones are thin, hollow and very light. Air sac extensions from the lungs occupy the centre of some bones. The sternum is wide and usually has a keel and the caudal vertebrae are fused. There are no teeth and the narrow jaws are adapted into a horn-covered beak. The eyes are relatively large, particularly in nocturnal species such as owls. They face forwards in predators and sideways in ducks.
The feathers are outgrowths of the epidermis and are found in localized bands from where they fan out over the skin. Large flight feathers are found on the wings and tail, contour feathers cover the bird's surface and fine down occurs on young birds and under the contour feathers of water birds. The only cutaneous gland is the single uropygial gland near the base of the tail. This produces an oily secretion that waterproofs the feathers when the bird preens. There are scales on the legs and feet, and claws on the tips of the toes.
Mammal anatomy
Mammals are a diverse class of animals, mostly terrestrial but some are aquatic and others have evolved flapping or gliding flight. They mostly have four limbs, but some aquatic mammals have no limbs or limbs modified into fins, and the forelimbs of bats are modified into wings. The legs of most mammals are situated below the trunk, which is held well clear of the ground. The bones of mammals are well ossified and their teeth, which are usually differentiated, are coated in a layer of prismatic enamel. The teeth are shed once (milk teeth) during the animal's lifetime or not at all, as is the case in cetaceans. Mammals have three bones in the middle ear and a cochlea in the inner ear. They are clothed in hair and their skin contains glands which secrete sweat. Some of these glands are specialized as mammary glands, producing milk to feed the young. Mammals breathe with lungs and have a muscular diaphragm separating the thorax from the abdomen which helps them draw air into the lungs. The mammalian heart has four chambers, and oxygenated and deoxygenated blood are kept entirely separate. Nitrogenous waste is excreted primarily as urea.
Mammals are amniotes, and most are viviparous, giving birth to live young. Exceptions to this are the egg-laying monotremes, the platypus and the echidnas of Australia. Most other mammals have a placenta through which the developing foetus obtains nourishment, but in marsupials, the foetal stage is very short and the immature young is born and finds its way to its mother's pouch where it latches on to a teat and completes its development.
Human anatomy
Humans have the overall body plan of a mammal. Humans have a head, neck, trunk (which includes the thorax and abdomen), two arms and hands, and two legs and feet.
Generally, students of certain biological sciences, paramedics, prosthetists and orthotists, physiotherapists, occupational therapists, nurses, podiatrists, and medical students learn gross anatomy and microscopic anatomy from anatomical models, skeletons, textbooks, diagrams, photographs, lectures and tutorials. In addition, medical students generally also learn gross anatomy through practical experience of dissection and inspection of cadavers. The study of microscopic anatomy (or histology) can be aided by practical experience examining histological preparations (or slides) under a microscope.
Human anatomy, physiology and biochemistry are complementary basic medical sciences, which are generally taught to medical students in their first year at medical school. Human anatomy can be taught regionally or systemically; that is, respectively, studying anatomy by bodily regions such as the head and chest, or studying by specific systems, such as the nervous or respiratory systems. The major anatomy textbook, Gray's Anatomy, has been reorganized from a systems format to a regional format, in line with modern teaching methods. A thorough working knowledge of anatomy is required by physicians, especially surgeons and doctors working in some diagnostic specialties, such as histopathology and radiology.
Academic anatomists are usually employed by universities, medical schools or teaching hospitals. They are often involved in teaching anatomy, and research into certain systems, organs, tissues or cells.
Invertebrate anatomy
Invertebrates constitute a vast array of living organisms ranging from the simplest unicellular eukaryotes such as Paramecium to such complex multicellular animals as the octopus, lobster and dragonfly. They constitute about 95% of the animal species. By definition, none of these creatures has a backbone. The cells of single-cell protozoans have the same basic structure as those of multicellular animals but some parts are specialized into the equivalent of tissues and organs. Locomotion is often provided by cilia or flagella or may proceed via the advance of pseudopodia, food may be gathered by phagocytosis, energy needs may be supplied by photosynthesis and the cell may be supported by an endoskeleton or an exoskeleton. Some protozoans can form multicellular colonies.
Metazoans are multicellular organisms, with different groups of cells serving different functions. The most basic types of metazoan tissues are epithelium and connective tissue, both of which are present in nearly all invertebrates. The outer surface of the epidermis is normally formed of epithelial cells and secretes an extracellular matrix which provides support to the organism. An endoskeleton derived from the mesoderm is present in echinoderms, sponges and some cephalopods. Exoskeletons are derived from the epidermis and are composed of chitin in arthropods (insects, spiders, ticks, shrimps, crabs, lobsters). Calcium carbonate constitutes the shells of molluscs, brachiopods and some tube-building polychaete worms, and silica forms the exoskeleton of the microscopic diatoms and radiolaria. Other invertebrates may have no rigid structures, but the epidermis may secrete a variety of surface coatings such as the pinacoderm of sponges, the gelatinous cuticle of cnidarians (polyps, sea anemones, jellyfish) and the collagenous cuticle of annelids. The outer epithelial layer may include cells of several types including sensory cells, gland cells and stinging cells. There may also be protrusions such as microvilli, cilia, bristles, spines and tubercles.
Marcello Malpighi, the father of microscopical anatomy, discovered that plants had tubules similar to those he saw in insects like the silk worm. He observed that when a ring-like portion of bark was removed on a trunk a swelling occurred in the tissues above the ring, and he correctly interpreted this as growth stimulated by food coming down from the leaves and being captured above the ring.
Arthropod anatomy
Arthropods comprise the largest phylum of invertebrates in the animal kingdom with over a million known species.
Insects possess segmented bodies supported by a hard-jointed outer covering, the exoskeleton, made mostly of chitin. The segments of the body are organized into three distinct parts, a head, a thorax and an abdomen. The head typically bears a pair of sensory antennae, a pair of compound eyes, one to three simple eyes (ocelli) and three sets of modified appendages that form the mouthparts. The thorax has three pairs of segmented legs, one pair each for the three segments that compose the thorax, and one or two pairs of wings. The abdomen is composed of eleven segments, some of which may be fused, and houses the digestive, respiratory, excretory and reproductive systems. There is considerable variation between species and many adaptations to the body parts, especially wings, legs, antennae and mouthparts.
Spiders, an order of arachnids, have four pairs of legs and a body of two segments—a cephalothorax and an abdomen. Spiders have no wings and no antennae. They have mouthparts called chelicerae which are often connected to venom glands, as most spiders are venomous. They have a second pair of appendages called pedipalps attached to the cephalothorax. These have similar segmentation to the legs and function as taste and smell organs. At the end of each male pedipalp is a spoon-shaped cymbium that acts to support the copulatory organ.
Other branches of anatomy
Surface anatomy is important as the study of anatomical landmarks that can be readily seen from the exterior contours of the body. It enables medics and veterinarians to gauge the position and anatomy of the associated deeper structures. Superficial is a directional term that indicates that structures are located relatively close to the surface of the body.
Comparative anatomy relates to the comparison of anatomical structures (both gross and microscopic) in different animals.
Artistic anatomy relates to anatomic studies of body proportions for artistic reasons.
History
Ancient
In 1600 BCE, the Edwin Smith Papyrus, an Ancient Egyptian medical text, described the heart and its vessels, as well as the brain and its meninges and cerebrospinal fluid, and the liver, spleen, kidneys, uterus and bladder. It showed the blood vessels diverging from the heart. The Ebers Papyrus features a "treatise on the heart", with vessels carrying all the body's fluids to or from every member of the body.
Ancient Greek anatomy and physiology underwent great changes and advances throughout the early medieval world. Over time, this medical practice expanded due to a continually developing understanding of the functions of organs and structures in the body. Phenomenal anatomical observations of the human body were made, which contributed to the understanding of the brain, eye, liver, reproductive organs, and nervous system.
The Hellenistic Egyptian city of Alexandria was the stepping-stone for Greek anatomy and physiology. Alexandria not only housed the biggest library for medical records and books of the liberal arts in the world during the time of the Greeks but was also home to many medical practitioners and philosophers. Great patronage of the arts and sciences from the Ptolemaic dynasty of Egypt helped raise Alexandria up, further rivalling other Greek states' cultural and scientific achievements.
Some of the most striking advances in early anatomy and physiology took place in Hellenistic Alexandria. Two of the most famous anatomists and physiologists of the third century were Herophilus and Erasistratus. These two physicians helped pioneer human dissection for medical research, using the cadavers of condemned criminals, which was considered taboo until the Renaissance—Herophilus was recognized as the first person to perform systematic dissections. Herophilus became known for his anatomical works, making impressive contributions to many branches of anatomy and many other aspects of medicine. Some of the works included classifying the system of the pulse, the discovery that human arteries had thicker walls than veins, and that the atria were parts of the heart. Herophilus's knowledge of the human body has provided vital input towards understanding the brain, eye, liver, reproductive organs, and nervous system and characterizing the course of disease. Erasistratus accurately described the structure of the brain, including the cavities and membranes, and made a distinction between its cerebrum and cerebellum. During his study in Alexandria, Erasistratus was particularly concerned with studies of the circulatory and nervous systems. He could distinguish the human body's sensory and motor nerves and believed air entered the lungs and heart, which was then carried throughout the body. His distinction between the arteries and veins, with the arteries carrying air through the body while the veins carried blood from the heart, was a great anatomical discovery. Erasistratus was also responsible for naming and describing the function of the epiglottis and the heart's valves, including the tricuspid. During the third century, Greek physicians were able to differentiate nerves from blood vessels and tendons and to realize that the nerves convey neural impulses. It was Herophilus who made the point that damage to motor nerves induced paralysis. Herophilus named the meninges and ventricles in the brain, appreciated the division between cerebellum and cerebrum and recognized that the brain was the "seat of intellect" and not a "cooling chamber" as propounded by Aristotle. Herophilus is also credited with describing the optic, oculomotor, motor division of the trigeminal, facial, vestibulocochlear and hypoglossal nerves.
Notable advances were also made during the third century BCE in the understanding of the digestive and reproductive systems. Herophilus discovered and described not only the salivary glands but also the small intestine and liver. He showed that the uterus is a hollow organ and described the ovaries and uterine tubes. He recognized that spermatozoa were produced by the testes and was the first to identify the prostate gland.
The anatomy of the muscles and skeleton is described in the Hippocratic Corpus, an Ancient Greek medical work written by unknown authors. In the 4th century BCE, Aristotle described vertebrate anatomy based on animal dissection, and Praxagoras identified the difference between arteries and veins. In the early 3rd century BCE, Herophilos and Erasistratus produced more accurate anatomical descriptions based on vivisection of criminals in Alexandria during the Ptolemaic period.
In the 2nd century, Galen of Pergamum, an anatomist, clinician, writer, and philosopher, wrote the final and highly influential anatomy treatise of ancient times. He compiled existing knowledge and studied anatomy through the dissection of animals. He was one of the first experimental physiologists through his vivisection experiments on animals. Galen's drawings, based mostly on dog anatomy, became effectively the only anatomical textbook for the next thousand years. His work was known to Renaissance doctors only through Islamic Golden Age medicine until it was translated from Greek sometime in the 15th century.
Medieval to early modern
Anatomy developed little from classical times until the sixteenth century; as the historian Marie Boas writes, "Progress in anatomy before the sixteenth century is as mysteriously slow as its development after 1500 is startlingly rapid". Between 1275 and 1326, the anatomists Mondino de Luzzi, Alessandro Achillini and Antonio Benivieni at Bologna carried out the first systematic human dissections since ancient times. Mondino's Anatomy of 1316 was the first textbook in the medieval rediscovery of human anatomy. It describes the body in the order followed in Mondino's dissections, starting with the abdomen, thorax, head, and limbs. It was the standard anatomy textbook for the next century.
Leonardo da Vinci (1452–1519) was trained in anatomy by Andrea del Verrocchio. He made use of his anatomical knowledge in his artwork, making many sketches of skeletal structures, muscles and organs of humans and other vertebrates that he dissected.
Andreas Vesalius (1514–1564), professor of anatomy at the University of Padua, is considered the founder of modern human anatomy. Originally from Brabant, Vesalius published the influential book De humani corporis fabrica ("the structure of the human body"), a large format book in seven volumes, in 1543. The accurate and intricately detailed illustrations, often in allegorical poses against Italianate landscapes, are thought to have been made by the artist Jan van Calcar, a pupil of Titian.
In England, anatomy was the subject of the first public lectures given in any science; these were provided by the Company of Barbers and Surgeons in the 16th century, joined in 1583 by the Lumleian lectures in surgery at the Royal College of Physicians.
Late modern
Medical schools began to be set up in the United States towards the end of the 18th century. Classes in anatomy needed a continual stream of cadavers for dissection, and these were difficult to obtain. Philadelphia, Baltimore, and New York were all renowned for body snatching activity as criminals raided graveyards at night, removing newly buried corpses from their coffins. A similar problem existed in Britain where demand for bodies became so great that grave-raiding and even anatomy murder were practised to obtain cadavers. Some graveyards were, in consequence, protected with watchtowers. The practice was halted in Britain by the Anatomy Act of 1832, while in the United States, similar legislation was enacted after the physician William S. Forbes of Jefferson Medical College was found guilty in 1882 of "complicity with resurrectionists in the despoliation of graves in Lebanon Cemetery".
The teaching of anatomy in Britain was transformed by Sir John Struthers, Regius Professor of Anatomy at the University of Aberdeen from 1863 to 1889. He was responsible for setting up the system of three years of "pre-clinical" academic teaching in the sciences underlying medicine, including especially anatomy. This system lasted until the reform of medical training in 1993 and 2003. As well as teaching, he collected many vertebrate skeletons for his museum of comparative anatomy, published over 70 research papers, and became famous for his public dissection of the Tay Whale. From 1822 the Royal College of Surgeons regulated the teaching of anatomy in medical schools. Medical museums provided examples in comparative anatomy, and were often used in teaching. Ignaz Semmelweis investigated puerperal fever and he discovered how it was caused. He noticed that the frequently fatal fever occurred more often in mothers examined by medical students than by midwives. The students went from the dissecting room to the hospital ward and examined women in childbirth. Semmelweis showed that when the trainees washed their hands in chlorinated lime before each clinical examination, the incidence of puerperal fever among the mothers could be reduced dramatically.
Before the modern medical era, the primary means for studying the internal structures of the body were dissection of the dead and inspection, palpation, and auscultation of the living. The advent of microscopy opened up an understanding of the building blocks that constituted living tissues. Technical advances in the development of achromatic lenses increased the resolving power of the microscope, and around 1839, Matthias Jakob Schleiden and Theodor Schwann identified that cells were the fundamental unit of organization of all living things. The study of small structures involved passing light through them, and the microtome was invented to provide sufficiently thin slices of tissue to examine. Staining techniques using artificial dyes were established to help distinguish between different tissue types. Advances in the fields of histology and cytology began in the late 19th century along with advances in surgical techniques allowing for the painless and safe removal of biopsy specimens. The invention of the electron microscope brought a significant advance in resolution power and allowed research into the ultrastructure of cells and the organelles and other structures within them. About the same time, in the 1950s, the use of X-ray diffraction for studying the crystal structures of proteins, nucleic acids, and other biological molecules gave rise to a new field of molecular anatomy.
Equally important advances have occurred in non-invasive techniques for examining the body's interior structures. X-rays can be passed through the body and used in medical radiography and fluoroscopy to differentiate interior structures that have varying degrees of opaqueness. Magnetic resonance imaging, computed tomography, and ultrasound imaging have all enabled the examination of internal structures in unprecedented detail to a degree far beyond the imagination of earlier generations.
See also
Anatomical model
Outline of human anatomy
Plastination
References
External links
Anatomy, In Our Time. BBC Radio 4. Melvyn Bragg with guests Ruth Richardson, Andrew Cunningham and Harold Ellis.
"Anatomy of the Human Body". 20th edition. 1918. Henry Gray
Anatomia Collection: anatomical plates 1522 to 1867 (digitized books and images)
Lyman, Henry Munson. The Book of Health (1898). Science History Institute Digital Collections .
Gunther von Hagens True Anatomy for New Ways of Teaching.
Sources
Protection | Protection is any measure taken to guard a thing against damage caused by outside forces. Protection can be provided to physical objects, including organisms, to systems, and to intangible things like civil and political rights. Although the mechanisms for providing protection vary widely, the basic meaning of the term remains the same. This is illustrated by an explanation found in a manual on electrical wiring:
Some kind of protection is a characteristic of all life, as living things have evolved at least some protective mechanisms to counter damaging environmental phenomena, such as ultraviolet light. Biological membranes such as bark on trees and skin on animals offer protection from various threats, with skin playing a key role in protecting organisms against pathogens and excessive water loss. Additional structures like scales and hair offer further protection from the elements and from predators, with some animals having features such as spines or camouflage serving exclusively as anti-predator adaptations. Many animals supplement the protection afforded by their physiology by burrowing or otherwise adopting habitats or behaviors that insulate them from potential sources of harm. Humans originally began wearing clothing and building shelters in prehistoric times for protection from the elements. Both humans and animals are also often concerned with the protection of others, with adult animals being particularly inclined to seek to protect their young from elements of nature and from predators.
In the human sphere of activity, the concept of protection has been extended to nonliving objects, including technological systems such as computers, and to intangible things such as intellectual property, beliefs, and economic systems. Humans seek to protect locations of historical and cultural significance through historic preservation efforts, and are also concerned with protecting the environment from damage caused by human activity, and with protecting the Earth as a whole from potentially harmful objects from space.
Physical protection
Protection of objects
Fire protection, including passive fire protection measures such as physical firewalls and fireproofing, and active fire protection measures, such as fire sprinkler systems.
Waterproofing, through application of surface layers that repel water.
Rot-proofing and rustproofing
Thermal conductivity resistance
Impact resistance
Radiation protection, protection of people and the environment from radiation
Dust resistance
Conservation and restoration of immovable cultural property, including a large number of techniques to preserve sites of historical or archaeological value
Protection of persons
Close protection, physical protection and security from danger of very important persons
Climbing protection, safety measures in climbing
Diplomatic protection
Humanitarian protection, the protection of civilians, in conflict zones and other humanitarian crises
Journalism source protection
Personal protective equipment
Safe sex practices to afford sexual protection against pregnancy and disease, particularly the use of condoms
Executive protection, security measures taken to ensure the safety of important persons
Protection racket, a criminal scheme in which money is exchanged for "protection" against violence
Right of asylum, protection for those seeking asylum from persecution by political groups and to ensure safe passage
Workplace or employment retaliation protection, which shields individuals in the workplace from, for example, being fired for opposing, complaining about, or aiding complaints about workplace practices
Protection of systems
Protection of technological systems
Protection of technological systems is often symbolized by the use of a padlock icon, such as "🔒", or a padlock image.
Protection mechanism, in computer science. In computer science, the separation of protection and security is a design choice; William Wulf has identified protection as a mechanism and security as a policy.
Power-system protection, in power engineering
A way of encapsulation in object-oriented programming
Protection of ecological systems
Environmental protection, the practice of protecting the natural environment
Protection of social systems
Consumer protection, laws governing sales and credit practices involving the public.
Protectionism, an economic policy of protecting a country's market from competitors.
Protection of rights, with respect to civil and political rights.
Data protection through information privacy measures.
Intellectual property protection.
See also
Safety
Security
References
Connective tissue disease | Connective tissue disease, also known as connective tissue disorder, or collagen vascular diseases, refers to any disorder that affect the connective tissue. The body's structures are held together by connective tissues, consisting of two distinct proteins: elastin and collagen. Tendons, ligaments, skin, cartilage, bone, and blood vessels are all made of collagen. Skin and ligaments contain elastin. The proteins and the body's surrounding tissues may suffer damage when these connective tissues become inflamed.
The two main categories of connective tissue diseases are (1) a set of relatively rare genetic disorders affecting the primary structure of connective tissue, and (2) a variety of acquired diseases where the connective tissues are the site of multiple, more or less distinct immunological and inflammatory reactions.
Diseases in which inflammation or weakness of collagen tends to occur are also referred to as collagen diseases. Collagen vascular diseases can be (but are not necessarily) associated with collagen and blood vessel abnormalities that are autoimmune in nature.
Some connective tissue diseases have strong or weak genetic inheritance risks. Others may be due to environmental factors, or a combination of genetic and environmental influences.
Classification
Connective tissue diseases can be classified into two groups: (1) a group of relatively rare genetic disorders affecting the primary structure of connective tissue; and (2) a number of acquired conditions where the connective tissues are the site of multiple, more or less distinct immune and inflammatory reactions.
Heritable connective tissue disorders
Hereditary connective tissue disorders are a diverse set of broad, single-gene disorders that impact one or more of the main components of connective tissues, such as ground substance (glycosaminoglycans), collagen, or elastin. Many result in anomalies of the skeleton and joints, which can substantially impair normal growth and development. In contrast to acquired connective tissue diseases, these conditions are uncommon.
Marfan syndrome - inherited as an autosomal dominant characteristic, due to mutations in the FBN1 gene that encodes fibrillin 1.
Homocystinuria - condition of methionine metabolism brought on by a cystathionine β-synthase deficit that causes a build-up of homocysteine and its metabolites in the urine and blood.
Ehlers–Danlos syndrome - diverse collection of disorders distinguished by the fragility of soft connective tissues and widespread symptoms affecting the skin, ligaments, joints, blood vessels, and internal organs.
Osteogenesis imperfecta - hereditary condition marked by reduced bone mass, weakened bones, increased brittleness, and short stature.
Alkaptonuria - inborn error of metabolism caused by mutations in the HGO gene and homogentisate 1,2-dioxygenase deficiency.
Pseudoxanthoma elasticum - rare multisystem disease marked by gradual calcification and fragmentation of elastic fibres.
Mucopolysaccharidosis - a class of hereditary illnesses distinguished by the excretion of mucopolysaccharide in the urine.
Fibrodysplasia ossificans progressiva - rare and debilitating hereditary disorder characterized by progressive heterotopic ossification and congenital skeletal malformations.
Familial osteochondritis dissecans - separation of the subchondral bone and cartilage from the surrounding tissue.
Stickler syndrome - autosomal dominant disorder distinguished by skeletal, ocular, and orofacial abnormalities.
Alport syndrome - hereditary kidney disease distinguished by structural abnormalities and malfunction in the glomerular basement membrane, as well as basement membranes in other organs such as the eye and ear.
Congenital contractural arachnodactyly - autosomal dominant disorder defined by arachnodactyly, multiple flexion contractures, abnormal pinnae, severe kyphoscoliosis, and muscular hypoplasia.
Epidermolysis bullosa - hereditary, diverse grouping of rare genetic dermatoses that are marked by blisters and mucocutaneous fragility.
Loeys–Dietz syndrome - autosomal dominant condition linked to a wide range of systemic manifestations, such as skeletal, cutaneous, vascular, and craniofacial abnormalities.
Hypermobility spectrum disorder - a variety of connective tissue diseases that are marked by ongoing pain and joint hypermobility.
Autoimmune connective tissue disorders
Acquired connective tissue diseases share certain clinical features, such as joint inflammation, inflammation of serous membranes, and vasculitis, as well as a high frequency of involvement of various internal organs that are particularly rich in connective tissue.
Rheumatoid arthritis - autoimmune disease with an unclear cause that manifests as symmetric, erosive synovitis and, occasionally, extraarticular involvement.
Systemic lupus erythematosus - chronic, complex autoimmune inflammatory disorder that can affect every organ in the body.
Scleroderma - diverse collection of autoimmune fibrosing conditions.
Dermatomyositis and polymyositis - autoimmune myopathies that are clinically characterized by extramuscular symptoms, muscle inflammation, proximal muscle weakening, and oftentimes the detection of autoantibodies.
Vasculitis - disease that results in blood vessel inflammation.
Sjögren syndrome - a systemic autoimmune illness that mostly affects the exocrine glands and causes mucosal surfaces, especially those in the mouth and eyes, to become extremely dry.
Rheumatic fever - multisystem inflammatory illness that develops after group A streptococcal pharyngitis.
Amyloidosis - uncommon condition caused by protein mutations or changes in the body that result in twisted clusters of malformed proteins accumulating on organs and tissues.
Osteoarthritis - common articular cartilage degenerative disease linked to hypertrophic bone abnormalities.
Thrombotic thrombocytopenic purpura - uncommon and potentially fatal thrombotic microangiopathy characterized by severe thrombocytopenia, organ ischemia connected to diffuse microvascular platelet rich-thrombi, and microangiopathic hemolytic anemia.
Relapsing polychondritis - uncommon multisystem autoimmune disease with an unclear etiology that is marked by progressive cartilaginous tissue loss and recurring episodes of inflammation.
Mixed connective tissue disease - systemic autoimmune disease that shares characteristics with two or more other systemic autoimmune diseases, such as rheumatoid arthritis, polymyositis/dermatomyositis, systemic lupus erythematosus, and systemic sclerosis.
Undifferentiated connective tissue disease - unclassifiable systemic autoimmune disorders that do not meet any of the current classification requirements for connective tissue diseases yet have clinical and serological signs similar to connective tissue diseases.
Psoriatic arthritis - inflammatory musculoskeletal condition linked to psoriasis.
See also
Overlap syndrome
Connective tissue
References
Further reading
External links | 0.782297 | 0.997249 | 0.780145 |
Fat | In nutrition, biology, and chemistry, fat usually means any ester of fatty acids, or a mixture of such compounds, most commonly those that occur in living beings or in food.
The term often refers specifically to triglycerides (triple esters of glycerol), which are the main components of vegetable oils and of fatty tissue in animals; or, even more narrowly, to triglycerides that are solid or semisolid at room temperature, thus excluding oils. The term may also be used more broadly as a synonym of lipid—any substance of biological relevance, composed of carbon, hydrogen, and oxygen, that is insoluble in water but soluble in non-polar solvents. In this sense, besides the triglycerides, the term would include several other types of compounds like mono- and diglycerides, phospholipids (such as lecithin), sterols (such as cholesterol), waxes (such as beeswax), and free fatty acids, which are usually present in human diet in smaller amounts.
Fats are one of the three main macronutrient groups in human diet, along with carbohydrates and proteins, and the main components of common food products like milk, butter, tallow, lard, salt pork, and cooking oils. They are a major and dense source of food energy for many animals and play important structural and metabolic functions in most living beings, including energy storage, waterproofing, and thermal insulation. The human body can produce the fat it requires from other food ingredients, except for a few essential fatty acids that must be included in the diet. Dietary fats are also the carriers of some flavor and aroma ingredients and vitamins that are not water-soluble.
Biological importance
In humans and many animals, fats serve both as energy sources and as stores for energy in excess of what the body needs immediately. Each gram of fat when burned or metabolized releases about nine food calories (37 kJ = 8.8 kcal).
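As an illustrative check on these figures, using the standard conversion factor of 4.184 kJ per kcal:

\[ \frac{37~\text{kJ/g}}{4.184~\text{kJ/kcal}} \approx 8.8~\text{kcal/g} \approx 9~\text{food calories per gram} \]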
Fats are also sources of essential fatty acids, an important dietary requirement. Vitamins A, D, E, and K are fat-soluble, meaning they can only be digested, absorbed, and transported in conjunction with fats.
Fats play a vital role in maintaining healthy skin and hair, insulating body organs against shock, maintaining body temperature, and promoting healthy cell function. Fat also serves as a useful buffer against a host of diseases. When a particular substance, whether chemical or biotic, reaches unsafe levels in the bloodstream, the body can effectively dilute—or at least maintain equilibrium of—the offending substance by storing it in new fat tissue. This helps to protect vital organs, until such time as the offending substance can be metabolized or removed from the body by such means as excretion, urination, accidental or intentional bloodletting, sebum excretion, and hair growth.
Adipose tissue
In animals, adipose tissue, or fatty tissue is the body's means of storing metabolic energy over extended periods of time. Adipocytes (fat cells) store fat derived from the diet and from liver metabolism. Under energy stress these cells may degrade their stored fat to supply fatty acids and also glycerol to the circulation. These metabolic activities are regulated by several hormones (e.g., insulin, glucagon and epinephrine). Adipose tissue also secretes the hormone leptin.
Production and processing
A variety of chemical and physical techniques are used for the production and processing of fats, both industrially and in cottage or home settings. They include:
Pressing to extract liquid fats from fruits, seeds, or algae, e.g. olive oil from olives
Solvent extraction using solvents like hexane or supercritical carbon dioxide
Rendering, the melting of fat in adipose tissue, e.g. to produce tallow, lard, fish oil, and whale oil
Churning of milk to produce butter
Hydrogenation to increase the degree of saturation of the fatty acids
Interesterification, the rearrangement of fatty acids across different triglycerides
Winterization to remove oil components with higher melting points
Clarification of butter
Metabolism
The pancreatic lipase acts at the ester bond, hydrolyzing the bond and "releasing" the fatty acid. In triglyceride form, lipids cannot be absorbed by the duodenum. Fatty acids, monoglycerides (one glycerol, one fatty acid), and some diglycerides are absorbed by the duodenum, once the triglycerides have been broken down.
In the intestine, following the secretion of lipases and bile, triglycerides are split into monoacylglycerol and free fatty acids in a process called lipolysis. They are subsequently moved to absorptive enterocyte cells lining the intestines. The triglycerides are rebuilt in the enterocytes from their fragments and packaged together with cholesterol and proteins to form chylomicrons. These are excreted from the cells and collected by the lymph system and transported to the large vessels near the heart before being mixed into the blood. Various tissues can capture the chylomicrons, releasing the triglycerides to be used as a source of energy. Liver cells can synthesize and store triglycerides. When the body requires fatty acids as an energy source, the hormone glucagon signals the breakdown of the triglycerides by hormone-sensitive lipase to release free fatty acids. As the brain cannot utilize fatty acids as an energy source (unless converted to a ketone), the glycerol component of triglycerides can be converted into glucose, via gluconeogenesis by conversion into dihydroxyacetone phosphate and then into glyceraldehyde 3-phosphate, for brain fuel when it is broken down. Fat cells may also be broken down for that reason if the brain's needs ever outweigh the body's.
Triglycerides cannot pass through cell membranes freely. Special enzymes on the walls of blood vessels called lipoprotein lipases must break down triglycerides into free fatty acids and glycerol. Fatty acids can then be taken up by cells via fatty acid transport proteins (FATPs).
Triglycerides, as major components of very-low-density lipoprotein (VLDL) and chylomicrons, play an important role in metabolism as energy sources and transporters of dietary fat. They contain more than twice as much energy (approximately 9 kcal/g or 38 kJ/g) as carbohydrates (approximately 4 kcal/g or 17 kJ/g).
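As an illustrative check, the "more than twice" comparison follows directly from the quoted energy densities:

\[ \frac{38~\text{kJ/g}}{17~\text{kJ/g}} \approx \frac{9~\text{kcal/g}}{4~\text{kcal/g}} \approx 2.2 \]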
Nutritional and health aspects
The most common type of fat, in human diet and most living beings, is a triglyceride, an ester of the triple alcohol glycerol and three fatty acids. The molecule of a triglyceride can be described as resulting from a condensation reaction (specifically, esterification) between each of glycerol's –OH groups and the HO– part of the carboxyl group of each fatty acid, forming an ester bridge with elimination of a water molecule.
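As an illustrative general scheme (R, R′ and R″ denote the hydrocarbon chains of the three fatty acids; this is a generic representation rather than a formula taken from a specific source), the overall esterification can be written as:

\[ \text{C}_3\text{H}_5(\text{OH})_3 + \text{RCOOH} + \text{R}'\text{COOH} + \text{R}''\text{COOH} \longrightarrow \text{C}_3\text{H}_5(\text{OOCR})(\text{OOCR}')(\text{OOCR}'') + 3\,\text{H}_2\text{O} \]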
Other less common types of fats include diglycerides and monoglycerides, where the esterification is limited to two or just one of glycerol's –OH groups. Other alcohols, such as cetyl alcohol (predominant in spermaceti), may replace glycerol. In the phospholipids, one of the fatty acids is replaced by phosphoric acid or a monoester thereof.
The benefits and risks of various amounts and types of dietary fats have been the object of much study, and are still highly controversial topics.
Essential fatty acids
There are two essential fatty acids (EFAs) in human nutrition: alpha-Linolenic acid (an omega-3 fatty acid) and linoleic acid (an omega-6 fatty acid). The adult body can synthesize other lipids that it needs from these two.
Dietary sources
Saturated vs. unsaturated fats
Different foods contain different amounts of fat with different proportions of saturated and unsaturated fatty acids. Some animal products, like beef and dairy products made with whole or reduced fat milk like yogurt, ice cream, cheese and butter have mostly saturated fatty acids (and some have significant contents of dietary cholesterol). Other animal products, like pork, poultry, eggs, and seafood have mostly unsaturated fats. Industrialized baked goods may use fats with high unsaturated fat contents as well, especially those containing partially hydrogenated oils, and processed foods that are deep-fried in hydrogenated oil are high in saturated fat content.
Plants and fish oil generally contain a higher proportion of unsaturated acids, although there are exceptions such as coconut oil and palm kernel oil. Foods containing unsaturated fats include avocado, nuts, olive oils, and vegetable oils such as canola.
Many scientific studies have found that replacing saturated fats with cis unsaturated fats in the diet reduces risk of cardiovascular diseases (CVDs), diabetes, or death. These studies prompted many medical organizations and public health departments, including the World Health Organization (WHO), to officially issue that advice. Some countries with such recommendations include:
United Kingdom
United States
India
Canada
Australia
Singapore
New Zealand
Hong Kong
A 2004 review concluded that "no lower safe limit of specific saturated fatty acid intakes has been identified" and recommended that the influence of varying saturated fatty acid intakes against a background of different individual lifestyles and genetic backgrounds should be the focus in future studies.
This advice is often oversimplified by labeling the two kinds of fats as bad fats and good fats, respectively. However, since the fats and oils in most natural and traditionally processed foods contain both unsaturated and saturated fatty acids, the complete exclusion of saturated fat is unrealistic and possibly unwise. For instance, some foods rich in saturated fat, such as coconut and palm oil, are an important source of cheap dietary calories for a large fraction of the population in developing countries.
Concerns were also expressed at a 2010 conference of the American Dietetic Association that a blanket recommendation to avoid saturated fats could drive people to also reduce the amount of polyunsaturated fats, which may have health benefits, and/or replace fats by refined carbohydrates — which carry a high risk of obesity and heart disease.
For these reasons, the U.S. Food and Drug Administration, for example, recommends consuming no more than 10% (7% for high-risk groups) of calories from saturated fat, with 30% or less of total calories coming from all fat. A general 7% limit was recommended also by the American Heart Association (AHA) in 2006.
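As an illustrative calculation (assuming a reference intake of 2,000 kcal per day, a figure used here only for illustration, together with the approximate 9 kcal/g energy density of fat), a 10% limit works out to roughly:

\[ 0.10 \times 2000~\text{kcal} = 200~\text{kcal} \quad\Rightarrow\quad \frac{200~\text{kcal}}{9~\text{kcal/g}} \approx 22~\text{g of saturated fat per day} \]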
The WHO/FAO report also recommended replacing fats so as to reduce the content of myristic and palmitic acids, specifically.
The so-called Mediterranean diet, prevalent in many countries in the Mediterranean Sea area, includes more total fat than the diet of Northern European countries, but most of it is in the form of unsaturated fatty acids (specifically, monounsaturated and omega-3) from olive oil and fish, vegetables, and certain meats like lamb, while consumption of saturated fat is minimal in comparison.
A 2017 review found evidence that a Mediterranean-style diet could reduce the risk of cardiovascular diseases, overall cancer incidence, neurodegenerative diseases, diabetes, and mortality rate. A 2018 review showed that a Mediterranean-like diet may improve overall health status, such as reduced risk of non-communicable diseases. It also may reduce the social and economic costs of diet-related illnesses.
A small number of contemporary reviews have challenged this negative view of saturated fats. For example, an evaluation of evidence from 1966 to 1973 of the observed health impact of replacing dietary saturated fat with linoleic acid found that it increased rates of death from all causes, coronary heart disease, and cardiovascular disease. These studies have been disputed by many scientists, and the consensus in the medical community is that saturated fat and cardiovascular disease are closely related. Still, these discordant studies fueled debate over the merits of substituting polyunsaturated fats for saturated fats.
Cardiovascular disease
The effect of saturated fat on cardiovascular disease has been extensively studied. The general consensus is that there is moderate-quality evidence of a strong, consistent, and graded relationship between saturated fat intake, blood cholesterol levels, and the incidence of cardiovascular disease. The relationships are accepted as causal, including by many government and medical organizations.
A 2017 review by the AHA estimated that replacement of saturated fat with polyunsaturated fat in the American diet could reduce the risk of cardiovascular diseases by 30%.
The consumption of saturated fat is generally considered a risk factor for dyslipidemia—abnormal blood lipid levels, including high total cholesterol, high levels of triglycerides, high levels of low-density lipoprotein (LDL, "bad" cholesterol) or low levels of high-density lipoprotein (HDL, "good" cholesterol). These parameters in turn are believed to be risk indicators for some types of cardiovascular disease. These effects were observed in children too.
Several meta-analyses (reviews and consolidations of multiple previously published experimental studies) have confirmed a significant relationship between saturated fat and high serum cholesterol levels, which in turn have been claimed to have a causal relation with increased risk of cardiovascular disease (the so-called lipid hypothesis). However, high cholesterol may be caused by many factors. Other indicators, such as high LDL/HDL ratio, have proved to be more predictive. In a study of myocardial infarction in 52 countries, the ApoB/ApoA1 (related to LDL and HDL, respectively) ratio was the strongest predictor of CVD among all risk factors. There are other pathways involving obesity, triglyceride levels, insulin sensitivity, endothelial function, and thrombogenicity, among others, that play a role in CVD, although it seems, in the absence of an adverse blood lipid profile, the other known risk factors have only a weak atherogenic effect. Different saturated fatty acids have differing effects on various lipid levels.
Cancer
The evidence for a relation between saturated fat intake and cancer is significantly weaker, and there does not seem to be a clear medical consensus about it.
Several reviews of case–control studies have found that saturated fat intake is associated with increased breast cancer risk.
Another review found limited evidence for a positive relationship between consuming animal fat and incidence of colorectal cancer.
Other meta-analyses found evidence for increased risk of ovarian cancer by high consumption of saturated fat.
Some studies have indicated that serum and dietary levels of myristic and palmitic acid, and serum palmitic acid combined with alpha-tocopherol supplementation, are associated with increased risk of prostate cancer in a dose-dependent manner. These associations may, however, reflect differences in intake or metabolism of these fatty acids between the precancer cases and controls, rather than being an actual cause.
Bones
Various animal studies have indicated that the intake of saturated fat has a negative effect on the mineral density of bones. One study suggested that men may be particularly vulnerable.
Disposition and overall health
Studies have shown that substituting monounsaturated fatty acids for saturated ones is associated with increased daily physical activity and resting energy expenditure. More physical activity, less anger, and less irritability were associated with a diet higher in oleic acid than with one higher in palmitic acid.
Monounsaturated vs. polyunsaturated fat
The most common fatty acids in human diet are unsaturated or mono-unsaturated. Monounsaturated fats are found in animal flesh such as red meat, whole milk products, nuts, and high fat fruits such as olives and avocados. Olive oil is about 75% monounsaturated fat. The high oleic variety sunflower oil contains at least 70% monounsaturated fat. Canola oil and cashews are both about 58% monounsaturated fat. Tallow (beef fat) is about 50% monounsaturated fat, and lard is about 40% monounsaturated fat. Other sources include hazelnut, avocado oil, macadamia nut oil, grapeseed oil, groundnut oil (peanut oil), sesame oil, corn oil, popcorn, whole grain wheat, cereal, oatmeal, almond oil, hemp oil, and tea-oil camellia.
Polyunsaturated fatty acids can be found mostly in nuts, seeds, fish, seed oils, and oysters.
Food sources of polyunsaturated fats include walnuts, sunflower seeds, flax seeds, salmon and other oily fish, and vegetable oils such as soybean, corn, and safflower oil.
Insulin resistance and sensitivity
MUFAs (especially oleic acid) have been found to lower the incidence of insulin resistance; PUFAs (especially large amounts of arachidonic acid) and SFAs (such as arachidic acid) increased it. These ratios can be indexed in the phospholipids of human skeletal muscle and in other tissues as well. This relationship between dietary fats and insulin resistance is presumed secondary to the relationship between insulin resistance and inflammation, which is partially modulated by dietary fat ratios (omega−3/6/9) with both omega−3 and −9 thought to be anti-inflammatory, and omega−6 pro-inflammatory (as well as by numerous other dietary components, particularly polyphenols and exercise, with both of these anti-inflammatory). Although both pro- and anti-inflammatory types of fat are biologically necessary, fat dietary ratios in most US diets are skewed towards omega−6, with subsequent disinhibition of inflammation and potentiation of insulin resistance. This is contrary to the suggestion that polyunsaturated fats are shown to be protective against insulin resistance.
The large scale KANWU study found that increasing MUFA and decreasing SFA intake could improve insulin sensitivity, but only when the overall fat intake of the diet was low. However, some MUFAs may promote insulin resistance (like the SFAs), whereas PUFAs may protect against it.
Cancer
Levels of oleic acid along with other MUFAs in red blood cell membranes were positively associated with breast cancer risk. The saturation index (SI) of the same membranes was inversely associated with breast cancer risk. MUFAs and low SI in erythrocyte membranes are predictors of postmenopausal breast cancer. Both of these variables depend on the activity of the enzyme delta-9 desaturase (Δ9-d).
Results from observational clinical trials on PUFA intake and cancer have been inconsistent and vary by numerous factors of cancer incidence, including gender and genetic risk. Some studies have shown associations between higher intakes and/or blood levels of omega-3 PUFAs and a decreased risk of certain cancers, including breast and colorectal cancer, while other studies found no associations with cancer risk.
Pregnancy disorders
Polyunsaturated fat supplementation was found to have no effect on the incidence of pregnancy-related disorders, such as hypertension or preeclampsia, but may slightly increase the length of gestation and decrease the incidence of early premature births.
Expert panels in the United States and Europe recommend that pregnant and lactating women consume higher amounts of polyunsaturated fats than the general population to enhance the DHA status of the fetus and newborn.
"Cis fat" vs. "trans fat"
In nature, unsaturated fatty acids generally have double bonds in cis configuration (with the adjacent C–C bonds on the same side) as opposed to trans. Nevertheless, trans fatty acids (TFAs) occur in small amounts in meat and milk of ruminants (such as cattle and sheep), typically 2–5% of total fat. Natural TFAs, which include conjugated linoleic acid (CLA) and vaccenic acid, originate in the rumen of these animals. CLA has two double bonds, one in the cis configuration and one in trans, which makes it simultaneously a cis- and a trans-fatty acid.
Concerns about trans fatty acids in human diet were raised when they were found to be an unintentional byproduct of the partial hydrogenation of vegetable and fish oils. While these trans fatty acids (popularly called "trans fats") are edible, they have been implicated in many health problems.
The hydrogenation process, invented and patented by Wilhelm Normann in 1902, made it possible to turn relatively cheap liquid fats such as whale or fish oil into more solid fats and to extend their shelf-life by preventing rancidification. (The source fat and the process were initially kept secret to avoid consumer distaste.) This process was widely adopted by the food industry in the early 1900s; first for the production of margarine, a replacement for butter and shortening, and eventually for various other fats used in snack food, packaged baked goods, and deep fried products.
Full hydrogenation of a fat or oil produces a fully saturated fat. However, hydrogenation generally was interrupted before completion, to yield a fat product with specific melting point, hardness, and other properties. Partial hydrogenation turns some of the cis double bonds into trans bonds by an isomerization reaction. The trans configuration is favored because it is the lower energy form.
This side reaction accounts for most of the trans fatty acids consumed today, by far. An analysis of some industrialized foods in 2006 found up to 30% "trans fats" in artificial shortening, 10% in breads and cake products, 8% in cookies and crackers, 4% in salty snacks, 7% in cake frostings and sweets, and 26% in margarine and other processed spreads. Another 2010 analysis however found only 0.2% of trans fats in margarine and other processed spreads. Up to 45% of the total fat in those foods containing man-made trans fats formed by partially hydrogenating plant fats may be trans fat. Baking shortenings, unless reformulated, contain around 30% trans fats compared to their total fats. High-fat dairy products such as butter contain about 4%. Margarines not reformulated to reduce trans fats may contain up to 15% trans fat by weight, but some reformulated ones are less than 1% trans fat.
High levels of TFAs have been recorded in popular "fast food" meals. An analysis of samples of McDonald's French fries collected in 2004 and 2005 found that fries served in New York City contained twice as much trans fat as in Hungary, and 28 times as much as in Denmark, where trans fats are restricted. For Kentucky Fried Chicken products, the pattern was reversed: the Hungarian product contained twice the trans fat of the New York product. Even within the United States, there was variation, with fries in New York containing 30% more trans fat than those from Atlanta.
Cardiovascular disease
Numerous studies have found that consumption of TFAs increases risk of cardiovascular disease. The Harvard School of Public Health advises that replacing TFAs and saturated fats with cis monounsaturated and polyunsaturated fats is beneficial for health.
Consuming trans fats has been shown to increase the risk of coronary artery disease in part by raising levels of low-density lipoprotein (LDL, often termed "bad cholesterol"), lowering levels of high-density lipoprotein (HDL, often termed "good cholesterol"), increasing triglycerides in the bloodstream and promoting systemic inflammation.
The primary health risk identified for trans fat consumption is an elevated risk of coronary artery disease (CAD). A 1994 study estimated that over 30,000 cardiac deaths per year in the United States are attributable to the consumption of trans fats. By 2006 upper estimates of 100,000 deaths were suggested. A comprehensive review of studies of trans fats published in 2006 in the New England Journal of Medicine reports a strong and reliable connection between trans fat consumption and CAD, concluding that "On a per-calorie basis, trans fats appear to increase the risk of CAD more than any other macronutrient, conferring a substantially increased risk at low levels of consumption (1 to 3% of total energy intake)".
The major evidence for the effect of trans fat on CAD comes from the Nurses' Health Study – a cohort study that has been following 120,000 female nurses since its inception in 1976. In this study, Hu and colleagues analyzed data from 900 coronary events from the study's population during 14 years of followup. He determined that a nurse's CAD risk roughly doubled (relative risk of 1.93, CI: 1.43 to 2.61) for each 2% increase in trans fat calories consumed (instead of carbohydrate calories). By contrast, for each 5% increase in saturated fat calories (instead of carbohydrate calories) there was a 17% increase in risk (relative risk of 1.17, CI: 0.97 to 1.41). "The replacement of saturated fat or trans unsaturated fat by cis (unhydrogenated) unsaturated fats was associated with larger reductions in risk than an isocaloric replacement by carbohydrates." Hu also reports on the benefits of reducing trans fat consumption. Replacing 2% of food energy from trans fat with non-trans unsaturated fats more than halves the risk of CAD (53%). By comparison, replacing a larger 5% of food energy from saturated fat with non-trans unsaturated fats reduces the risk of CAD by 43%.
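As a rough illustration of why trans fat is considered more harmful per calorie, the relative risks reported above can be rescaled to a per-1%-of-energy basis. The sketch below is a back-of-the-envelope comparison only; it assumes the relative risk scales log-linearly with the share of energy, which is a simplification of how such cohort data are actually modeled.

```python
import math

# Reported relative risks from the Nurses' Health Study paragraph above:
# RR of 1.93 per 2% of energy from trans fat (replacing carbohydrate)
# RR of 1.17 per 5% of energy from saturated fat (replacing carbohydrate)
rr_trans_per_2pct = 1.93
rr_sat_per_5pct = 1.17

# Assuming log-linearity, convert each to a relative risk per 1% of energy.
rr_trans_per_1pct = math.exp(math.log(rr_trans_per_2pct) / 2)
rr_sat_per_1pct = math.exp(math.log(rr_sat_per_5pct) / 5)

print(f"Trans fat:     RR ~{rr_trans_per_1pct:.2f} per 1% of energy")  # ~1.39
print(f"Saturated fat: RR ~{rr_sat_per_1pct:.2f} per 1% of energy")    # ~1.03
```

Under this simplification, each percentage point of energy from trans fat is associated with a markedly larger increase in risk than the same energy from saturated fat, which is the sense in which trans fat is said to be more harmful "on a per-calorie basis".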
Another study considered deaths due to CAD, with consumption of trans fats being linked to an increase in mortality, and consumption of polyunsaturated fats being linked to a decrease in mortality.
Trans fat has been found to act like saturated fat in raising the blood level of LDL ("bad cholesterol"); but, unlike saturated fat, it also decreases levels of HDL ("good cholesterol"). The net increase in LDL/HDL ratio with trans fat, a widely accepted indicator of risk for coronary artery disease, is approximately double that due to saturated fat. One randomized crossover study published in 2003, comparing the effects on blood lipids of eating (relatively) cis-fat-rich and trans-fat-rich meals, showed that cholesteryl ester transfer (CET) was 28% higher after the trans meal than after the cis meal and that lipoprotein concentrations were enriched in apolipoprotein(a) after the trans meals.
The cytokine test is a potentially more reliable indicator of CAD risk, although it is still being studied. A study of over 700 nurses showed that those in the highest quartile of trans fat consumption had blood levels of C-reactive protein (CRP) that were 73% higher than those in the lowest quartile.
Breast feeding
It has been established that trans fats in human breast milk fluctuate with maternal consumption of trans fat, and that the amount of trans fats in the bloodstream of breastfed infants fluctuates with the amounts found in their milk. In 1999, reported percentages of trans fats (compared to total fats) in human milk ranged from 1% in Spain, 2% in France, 4% in Germany, and 7% in Canada and the United States.
Other health risks
There are suggestions that the negative consequences of trans fat consumption go beyond the cardiovascular risk. In general, there is much less scientific consensus asserting that eating trans fat specifically increases the risk of other chronic health problems:
Alzheimer's disease: A study published in Archives of Neurology in February 2003 suggested that the intake of both trans fats and saturated fats promotes the development of Alzheimer disease, although this was not confirmed in an animal model. Trans fats have been found to impair memory and learning in middle-aged rats. The brains of rats that ate trans fats had fewer proteins critical to healthy neurological function, and showed inflammation in and around the hippocampus, the part of the brain responsible for learning and memory. These are the exact types of changes normally seen at the onset of Alzheimer's, but they were seen after only six weeks, even though the rats were still young.
Cancer: There is no scientific consensus that consuming trans fats significantly increases cancer risks across the board. The American Cancer Society states that a relationship between trans fats and cancer "has not been determined." One study has found a positive connection between trans fat and prostate cancer. However, a larger study found a correlation between trans fats and a significant decrease in high-grade prostate cancer. An increased intake of trans fatty acids may raise the risk of breast cancer by 75%, suggest the results from the French part of the European Prospective Investigation into Cancer and Nutrition.
Diabetes: There is a growing concern that the risk of type 2 diabetes increases with trans fat consumption. However, consensus has not been reached. For example, one study found that risk is higher for those in the highest quartile of trans fat consumption. Another study has found no diabetes risk once other factors such as total fat intake and BMI were accounted for.
Obesity: Research indicates that trans fat may increase weight gain and abdominal fat, despite a similar caloric intake. A 6-year experiment revealed that monkeys fed a trans fat diet gained 7.2% of their body weight, as compared to 1.8% for monkeys on a mono-unsaturated fat diet. Although obesity is frequently linked to trans fat in the popular media, this is generally in the context of eating too many calories; there is not a strong scientific consensus connecting trans fat and obesity, although the 6-year experiment did find such a link, concluding that "under controlled feeding conditions, long-term TFA consumption was an independent factor in weight gain. TFAs enhanced intra-abdominal deposition of fat, even in the absence of caloric excess, and were associated with insulin resistance, with evidence that there is impaired post-insulin receptor binding signal transduction."
Infertility in women: One 2007 study found, "Each 2% increase in the intake of energy from trans unsaturated fats, as opposed to that from carbohydrates, was associated with a 73% greater risk of ovulatory infertility...".
Major depressive disorder: Spanish researchers analysed the diets of 12,059 people over six years and found that those who ate the most trans fats had a 48 per cent higher risk of depression than those who did not eat trans fats. One mechanism may be trans-fats' substitution for docosahexaenoic acid (DHA) levels in the orbitofrontal cortex (OFC). Very high intake of trans-fatty acids (43% of total fat) in mice from 2 to 16 months of age was associated with lowered DHA levels in the brain (p=0.001). When the brains of 15 major depressive subjects who had committed suicide were examined post-mortem and compared against 27 age-matched controls, the suicidal brains were found to have 16% less (male average) to 32% less (female average) DHA in the OFC. The OFC controls reward, reward expectation, and empathy (all of which are reduced in depressive mood disorders) and regulates the limbic system.
Behavioral irritability and aggression: a 2012 observational analysis of subjects of an earlier study found a strong relation between dietary trans fat acids and self-reported behavioral aggression and irritability, suggesting but not establishing causality.
Diminished memory: In a 2015 article, researchers re-analyzing results from the 1999-2005 UCSD Statin Study argue that "greater dietary trans fatty acid consumption is linked to worse word memory in adults during years of high productivity, adults age <45".
Acne: According to a 2015 study, trans fats are one of several components of Western pattern diets which promote acne, along with carbohydrates with high glycemic load such as refined sugars or refined starches, milk and dairy products, and saturated fats, while omega-3 fatty acids, which reduce acne, are deficient in Western pattern diets.
Biochemical mechanisms
The exact biochemical process by which trans fats produce specific health problems are a topic of continuing research. Intake of dietary trans fat perturbs the body's ability to metabolize essential fatty acids (EFAs, including omega-3) leading to changes in the phospholipid fatty acid composition of the arterial walls, thereby raising risk of coronary artery disease.
Trans double bonds are claimed to induce a linear conformation to the molecule, favoring its rigid packing as in plaque formation. The geometry of the cis double bond, in contrast, is claimed to create a bend in the molecule, thereby precluding rigid formations.
While the mechanisms through which trans fatty acids contribute to coronary artery disease are fairly well understood, the mechanism for their effects on diabetes is still under investigation. They may impair the metabolism of long-chain polyunsaturated fatty acids (LCPUFAs). However, maternal trans fatty acid intake during pregnancy has been inversely associated with LCPUFA levels in infants at birth, levels that are thought to underlie the positive association between breastfeeding and intelligence.
Trans fats are processed by the liver differently than other fats. They may cause liver dysfunction by interfering with delta 6 desaturase, an enzyme involved in converting essential fatty acids to arachidonic acid and prostaglandins, both of which are important to the functioning of cells.
Natural "trans fats" in dairy products
Some trans fatty acids occur in natural fats and traditionally processed foods. Vaccenic acid occurs in breast milk, and some isomers of conjugated linoleic acid (CLA) are found in meat and dairy products from ruminants. Butter, for example, contains about 3% trans fat.
The U.S. National Dairy Council has asserted that the trans fats present in animal foods are of a different type than those in partially hydrogenated oils, and do not appear to exhibit the same negative effects. A review agrees with the conclusion (stating that "the sum of the current evidence suggests that the Public health implications of consuming trans fats from ruminant products are relatively limited") but cautions that this may be due to the low consumption of trans fats from animal sources compared to artificial ones.
In 2008 a meta-analysis found that all trans fats, regardless of natural or artificial origin equally raise LDL and lower HDL levels. Other studies though have shown different results when it comes to animal-based trans fats like conjugated linoleic acid (CLA). Although CLA is known for its anticancer properties, researchers have also found that the cis-9, trans-11 form of CLA can reduce the risk for cardiovascular disease and help fight inflammation.
Two Canadian studies have shown that vaccenic acid, a TFA that naturally occurs in dairy products, could be beneficial compared to hydrogenated vegetable shortening, or a mixture of pork lard and soy fat, by lowering total LDL and triglyceride levels. A study by the US Department of Agriculture showed that vaccenic acid raises both HDL and LDL cholesterol, whereas industrial trans fats only raise LDL with no beneficial effect on HDL.
Official recommendations
In light of recognized evidence and scientific agreement, nutritional authorities consider all trans fats equally harmful for health and recommend that their consumption be reduced to trace amounts. In 2003, the WHO recommended that trans fats make up no more than 0.9% of a person's diet and, in 2018, introduced a 6-step guide to eliminate industrially-produced trans-fatty acids from the global food supply.
The National Academy of Sciences (NAS) advises the U.S. and Canadian governments on nutritional science for use in public policy and product labeling programs. Their 2002 Dietary Reference Intakes for Energy, Carbohydrate, Fiber, Fat, Fatty Acids, Cholesterol, Protein, and Amino Acids contains their findings and recommendations regarding consumption of trans fat.
Their recommendations are based on two key facts. First, "trans fatty acids are not essential and provide no known benefit to human health", whether of animal or plant origin. Second, given their documented effects on the LDL/HDL ratio, the NAS concluded "that dietary trans fatty acids are more deleterious with respect to coronary artery disease than saturated fatty acids". A 2006 review published in the New England Journal of Medicine (NEJM) states that "from a nutritional standpoint, the consumption of trans fatty acids results in considerable potential harm but no apparent benefit."
Because of these facts and concerns, the NAS has concluded there is no safe level of trans fat consumption. There is no adequate level, recommended daily amount or tolerable upper limit for trans fats. This is because any incremental increase in trans fat intake increases the risk of coronary artery disease.
Despite this concern, the NAS dietary recommendations have not included eliminating trans fat from the diet. This is because trans fat is naturally present in many animal foods in trace quantities, and thus its removal from ordinary diets might introduce undesirable side effects and nutritional imbalances. The NAS has, thus, "recommended that trans fatty acid consumption be as low as possible while consuming a nutritionally adequate diet". Like the NAS, the WHO has tried to balance public health goals with a practical level of trans fat consumption, recommending in 2003 that trans fats be limited to less than 1% of overall energy intake.
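To put the 1% limit in concrete terms, it can be converted into grams for an illustrative 2,000 kcal daily intake, using the standard value of roughly 9 kcal per gram of fat (the 2,000 kcal figure is only an example, not part of the WHO recommendation):

\[
0.01 \times 2000\ \text{kcal} = 20\ \text{kcal} \quad\Rightarrow\quad \frac{20\ \text{kcal}}{9\ \text{kcal/g}} \approx 2.2\ \text{g of trans fat per day.}
\]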
Regulatory action
In the last few decades, there has been a substantial amount of regulation in many countries, limiting the trans fat content of industrialized and commercial food products.
Alternatives to hydrogenation
The negative public image and strict regulations have led to interest in replacing partial hydrogenation. In fat interesterification, the fatty acids are redistributed among a mix of triglycerides. When applied to a suitable blend of oils and saturated fats, possibly followed by separation of unwanted solid or liquid triglycerides, this process could conceivably achieve results similar to those of partial hydrogenation without affecting the fatty acids themselves; in particular, without creating any new "trans fat".
Hydrogenation can be achieved with only a small production of trans fat. The high-pressure methods produced margarine containing 5 to 6% trans fat. Based on current U.S. labeling requirements (see below), the manufacturer could claim the product was free of trans fat. The level of trans fat may also be altered by modification of the temperature and the length of time during hydrogenation.
One can mix oils (such as olive, soybean, and canola), water, monoglycerides, and fatty acids to form a "cooking fat" that acts the same way as trans and saturated fats.
Omega-three and omega-six fatty acids
The ω−3 fatty acids have received substantial attention. Among omega-3 fatty acids, neither long-chain nor short-chain forms were consistently associated with breast cancer risk. High levels of docosahexaenoic acid (DHA), however, the most abundant omega-3 polyunsaturated fatty acid in erythrocyte (red blood cell) membranes, were associated with a reduced risk of breast cancer. The DHA obtained through the consumption of polyunsaturated fatty acids is positively associated with cognitive and behavioral performance. In addition, DHA is vital for the grey matter structure of the human brain, as well as retinal stimulation and neurotransmission.
Interesterification
Some studies have investigated the health effects of interesterified (IE) fats, by comparing diets with IE and non-IE fats with the same overall fatty acid composition.
Several experimental studies in humans found no statistical difference in fasting blood lipids between a diet with large amounts of IE fat, having 25–40% C16:0 or C18:0 at the 2-position, and a similar diet with non-IE fat, having only 3–9% C16:0 or C18:0 at the 2-position. A negative result was also obtained in a study that compared the effects on blood cholesterol levels of an IE fat product mimicking cocoa butter and the real non-IE product.
A 2007 study funded by the Malaysian Palm Oil Board claimed that replacing natural palm oil by other interesterified or partially hydrogenated fats caused adverse health effects, such as higher LDL/HDL ratio and plasma glucose levels. However, these effects could be attributed to the higher percentage of saturated acids in the IE and partially hydrogenated fats, rather than to the IE process itself.
Role in disease
In the human body, high levels of triglycerides in the bloodstream have been linked to atherosclerosis, heart disease and stroke. However, the relative negative impact of raised levels of triglycerides compared to that of LDL:HDL ratios is as yet unknown. The risk can be partly accounted for by a strong inverse relationship between triglyceride level and HDL-cholesterol level. But the risk is also due to high triglyceride levels increasing the quantity of small, dense LDL particles.
Guidelines
The National Cholesterol Education Program has set guidelines for triglyceride levels: normal is less than 150 mg/dL, borderline-high is 150–199 mg/dL, high is 200–499 mg/dL, and very high is 500 mg/dL or above.
These levels are tested after fasting 8 to 12 hours. Triglyceride levels remain temporarily higher for a period after eating.
The AHA recommends an optimal triglyceride level of 100 mg/dL (1.1 mmol/L) or lower to improve heart health.
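As an illustration of how such cutoffs are applied, the following sketch classifies a fasting triglyceride measurement using the NCEP-style categories listed above; the function name and structure are hypothetical, and thresholds in practice should be taken from current clinical guidance rather than from this example.

```python
def classify_fasting_triglycerides(mg_per_dl: float) -> str:
    """Classify a fasting triglyceride level (mg/dL) using NCEP-style cutoffs."""
    if mg_per_dl < 150:
        return "normal"
    elif mg_per_dl < 200:
        return "borderline-high"
    elif mg_per_dl < 500:
        return "high"
    else:
        return "very high"

# Example: a fasting level of 180 mg/dL falls in the borderline-high range.
print(classify_fasting_triglycerides(180))  # borderline-high
```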
Reducing triglyceride levels
Fat digestion and metabolism
Fats are broken down in the healthy body to release their constituents, glycerol and fatty acids. Glycerol itself can be converted to glucose by the liver and so become a source of energy. Fats and other lipids are broken down in the body by enzymes called lipases produced in the pancreas.
Many cell types can use either glucose or fatty acids as a source of energy for metabolism. In particular, heart and skeletal muscle prefer fatty acids. Despite long-standing assertions to the contrary, fatty acids can also be used as a source of fuel for brain cells through mitochondrial oxidation.
See also
Animal fat
Monounsaturated fat
Diet and heart disease
Fatty acid synthesis
Food composition data
Western pattern diet
Oil
Lipid
References
Nutrients
Macromolecules
Signs and symptoms
Signs and symptoms are the observed or detectable signs, and experienced symptoms of an illness, injury, or condition.
Signs are objective and externally observable; symptoms are a person's reported subjective experiences.
A sign for example may be a higher or lower temperature than normal, raised or lowered blood pressure or an abnormality showing on a medical scan. A symptom is something out of the ordinary that is experienced by an individual such as feeling feverish, a headache or other pains in the body.
Signs and symptoms
Signs
A medical sign is an objective observable indication of a disease, injury, or medical condition that may be detected during a physical examination. These signs may be visible, such as a rash or bruise, or otherwise detectable such as by using a stethoscope or taking blood pressure. Medical signs, along with symptoms, help in forming a diagnosis. Some examples of signs are nail clubbing of either the fingernails or toenails, an abnormal gait, and a limbal ring a darkened ring around the iris of the eye.
Indications
A sign is different from an "indication" – the activity of a condition 'pointing to' (thus "indicating") a remedy, not the reverse (viz., it is not a remedy 'pointing to' a condition) – which is a specific reason for using a particular treatment.
Symptoms
A symptom is something felt or experienced, such as pain or dizziness. Signs and symptoms are not mutually exclusive, for example a subjective feeling of fever can be noted as sign by using a thermometer that registers a high reading. The CDC lists various diseases by their signs and symptoms such as for measles which includes a high fever, conjunctivitis, and cough, followed a few days later by the measles rash.
Cardinal signs and symptoms
Cardinal signs and symptoms are very specific even to the point of being pathognomonic. A cardinal sign or cardinal symptom can also refer to the major sign or symptom of a disease. Abnormal reflexes can indicate problems with the nervous system. Signs and symptoms are also applied to physiological states outside the context of disease, as for example when referring to the signs and symptoms of pregnancy, or the symptoms of dehydration. Sometimes a disease may be present without showing any signs or symptoms when it is known as being asymptomatic. The disorder may be discovered through tests including scans. An infection may be asymptomatic but still be transmissible.
Syndrome
Signs and symptoms are often non-specific, but some combinations can be suggestive of certain diagnoses, helping to narrow down what may be wrong. A particular set of characteristic signs and symptoms that may be associated with a disorder is known as a syndrome. In cases where the underlying cause is known the syndrome is named as for example Down syndrome and Noonan syndrome. Other syndromes such as acute coronary syndrome may have a number of possible causes.
Terms
Symptomatic
When a disease is evidenced by symptoms it is known as symptomatic. There are many conditions including subclinical infections that display no symptoms, and these are termed asymptomatic.
Signs and symptoms may be mild or severe, brief or longer-lasting when they may become reduced (remission), or then recur (relapse or recrudescence) known as a flare-up. A flare-up may show more severe symptoms.
The term chief complaint, also "presenting problem", is used to describe the initial concern of an individual when seeking medical help, and once this is clearly noted a history of the present illness may be taken. The symptom that ultimately leads to a diagnosis is called a cardinal symptom. Some symptoms can be misleading as a result of referred pain, where for example a pain in the right shoulder may be due to an inflamed gallbladder and not to presumed muscle strain.
Prodrome
Many diseases have an early prodromal stage where a few signs and symptoms may suggest the presence of a disorder before further specific symptoms may emerge. Measles for example has a prodromal presentation that includes a hacking cough, fever, and Koplik's spots in the mouth. Over half of migraine episodes have a prodromal phase. Schizophrenia has a notable prodromal stage, as has dementia.
Nonspecific symptoms
Some symptoms are specific, that is, they are associated with a single, specific medical condition.
Nonspecific symptoms, sometimes also called equivocal symptoms, are not specific to a particular condition. They include unexplained weight loss, headache, pain, fatigue, loss of appetite, night sweats, and malaise. A group of three particular nonspecific symptoms – fever, night sweats, and weight loss – over a period of six months are termed B symptoms associated with lymphoma and indicate a poor prognosis.
Other sub-types of symptoms include:
constitutional or general symptoms, which affect general well-being or the whole body, such as a fever;
concomitant symptoms, which are symptoms that occur at the same time as the primary symptom;
prodromal symptoms, which are the first symptoms of a bigger set of problems;
delayed symptoms, which happen some time after the trigger; and
objective symptoms, which are symptoms whose existence can be observed and confirmed by a healthcare provider.
Vital signs
Vital signs are the four signs that can give an immediate measurement of the body's overall functioning and health status. They are temperature, heart rate, breathing rate, and blood pressure. The ranges of these measurements vary with age, weight, gender and with general health.
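A minimal sketch of how vital signs might be screened against reference ranges is shown below. The ranges used are commonly cited resting values for healthy adults and are illustrative only; real ranges vary with age, weight, gender and general health, as noted above, and the function and thresholds here are assumptions, not clinical advice.

```python
# Illustrative adult resting reference ranges (assumed for this sketch).
VITAL_RANGES = {
    "temperature_c": (36.1, 37.2),      # body temperature, degrees Celsius
    "heart_rate_bpm": (60, 100),        # beats per minute
    "respiratory_rate_bpm": (12, 20),   # breaths per minute
    "systolic_bp_mmhg": (90, 120),      # systolic blood pressure
}

def flag_out_of_range(vitals: dict) -> list:
    """Return the names of any measurements outside the illustrative ranges."""
    flags = []
    for name, value in vitals.items():
        low, high = VITAL_RANGES[name]
        if not (low <= value <= high):
            flags.append(name)
    return flags

# Example: an elevated heart rate is flagged, the other readings are not.
print(flag_out_of_range({
    "temperature_c": 36.8,
    "heart_rate_bpm": 110,
    "respiratory_rate_bpm": 16,
    "systolic_bp_mmhg": 118,
}))  # ['heart_rate_bpm']
```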
A digital application has been developed for use in clinical settings that measures three of the vital signs (not temperature) using just a smartphone, and has been approved by NHS England. The application is registered as Lifelight First, and Lifelight Home is under development (2020) for monitoring-use by people at home using just the camera on their smartphone or tablet. This will additionally measure oxygen saturation and atrial fibrillation. Other devices are then not needed.
Syndromes
Many conditions are indicated by a group of known signs, or signs and symptoms. These can be a group of three known as a triad; a group of four ("tetrad"); or a group of five ("pentad").
An example of a triad is Meltzer's triad, presenting with purpura (a rash), arthralgia (painful joints), and myalgia (painful and weak muscles). Meltzer's triad indicates the condition cryoglobulinemia. Huntington's disease is a neurodegenerative disease that is characterized by a triad of motor, cognitive, and psychiatric signs and symptoms. A group of signs and symptoms that is characteristic of a particular disease is known as a syndrome. Noonan syndrome, for example, has a diagnostic set of unique facial and musculoskeletal features. Some syndromes such as nephrotic syndrome may have a number of underlying causes that are all related to diseases that affect the kidneys.
Sometimes a child or young adult may have symptoms suggestive of a genetic disorder that cannot be identified even after genetic testing. In such cases the term SWAN (syndrome without a name) may be used. Often a diagnosis may be made at some future point when other more specific symptoms emerge but many cases may remain undiagnosed. The inability to diagnose may be due to a unique combination of symptoms or an overlap of conditions, or to the symptoms being atypical of a known disorder, or to the disorder being extremely rare.
It is possible that a person with a particular syndrome might not display every single one of the signs and/or symptoms that compose/define a syndrome.
Positive and negative
Sensory symptoms can also be described as positive symptoms, or as negative symptoms depending on whether the symptom is abnormally present such as tingling or itchiness, or abnormally absent such as loss of smell. The following terms are used for negative symptoms – hypoesthesia is a partial loss of sensitivity to moderate stimuli, such as pressure, touch, warmth, cold. Anesthesia is the complete loss of sensitivity to stronger stimuli, such as pinprick. Hypoalgesia (analgesia) is loss of sensation to painful stimuli.
Symptoms are also grouped into negative and positive for some mental disorders such as schizophrenia. Positive symptoms are those that are present in the disorder but are not normally experienced by most individuals, and reflect an excess or distortion of normal functions; examples are hallucinations, delusions, and bizarre behavior. Negative symptoms are functions that are normally found but that are diminished or absent, such as apathy and anhedonia.
Dynamic and static
Dynamic symptoms are capable of change depending on circumstance, whereas static symptoms are fixed or unchanging regardless of circumstance. For example, the symptoms of exercise intolerance are dynamic as they are brought on by exercise, but alleviate during rest. Fixed muscle weakness is a static symptom as the muscle will be weak regardless of exercise or rest.
A majority of patients with metabolic myopathies have dynamic rather than static findings, typically experiencing exercise intolerance, muscle pain, and cramps with exercise rather than fixed weakness. Those with the metabolic myopathy of McArdle's disease (GSD-V) and some individuals with phosphoglucomutase deficiency (CDG1T/GSD-XIV), initially experience exercise intolerance during mild-moderate aerobic exercise, but the symptoms alleviate after 6–10 minutes in what is known as "second wind".
Neuropsychiatric
Neuropsychiatric symptoms are present in many degenerative disorders including dementia, and Parkinson's disease. Symptoms commonly include apathy, anxiety, and depression. Neurological and psychiatric symptoms are also present in some genetic disorders such as Wilson's disease. Symptoms of executive dysfunction are often found in many disorders including schizophrenia, and ADHD.
Radiologic
Radiologic signs are abnormal medical findings on imaging scanning. These include the Mickey Mouse sign and the Golden S sign. When using imaging to find the cause of a complaint, another unrelated finding may be found known as an incidental finding.
Cardinal
Cardinal signs and symptoms are those that may be diagnostic, and pathognomonic – of a certainty of diagnosis. Inflammation for example has a recognised group of cardinal signs and symptoms, as does exacerbations of chronic bronchitis, and Parkinson's disease.
In contrast to a pathognomonic cardinal sign, the absence of a sign or symptom can often rule out a condition. This is known by the Latin term sine qua non. For example, the absence of known genetic mutations specific for a hereditary disease would rule out that disease. Another example is where the vaginal pH is less than 4.5, a diagnosis of bacterial vaginosis would be excluded.
Reflexes
A reflex is an automatic response in the body to a stimulus. Its absence, reduced (hypoactive), or exaggerated (hyperactive) response can be a sign of damage to the central nervous system or peripheral nervous system. In the patellar reflex (knee-jerk) for example, its reduction or absence is known as Westphal's sign and may indicate damage to lower motor neurons. When the response is exaggerated damage to the upper motor neurons may be indicated.
Facies
A number of medical conditions are associated with a distinctive facial expression or appearance known as a facies. An example is elfin facies, which has facial features like those of an elf, and this may be associated with Williams syndrome, or Donohue syndrome. The most well-known facies is probably the Hippocratic facies that is seen on a person as they near death.
Anamnestic signs
Anamnestic signs (from anamnēstikós, ἀναμνηστικός, "able to recall to mind") are signs that indicate a past condition, for example paralysis in an arm may indicate a past stroke.
Asymptomatic
Some diseases, including cancers and infections, may be present but show no signs or symptoms, and these are known as asymptomatic. A gallstone may be asymptomatic and only discovered as an incidental finding. Easily spreadable viral infections such as COVID-19 may be asymptomatic but may still be transmissible.
History
Symptomatology
A symptom (from Greek σύμπτωμα, "accident, misfortune, that which befalls", from συμπίπτω, "I befall", from συν- "together, with" and πίπτω, "I fall") is a departure from normal function or feeling. Symptomatology (also called semiology) is a branch of medicine dealing with the signs and symptoms of a disease. This study also includes the indications of a disease. It was first described as semiotics by Henry Stubbe in 1670, a term now used for the study of sign communication.
Prior to the nineteenth century there was little difference in the powers of observation between physician and patient. Most medical practice was conducted as a co-operative interaction between the physician and patient; this was gradually replaced by a "monolithic consensus of opinion imposed from within the community of medical investigators". Whilst each noticed much the same things, the physician had a more informed interpretation of those things: "the physicians knew what the findings meant and the layman did not".
Development of medical testing
A number of advances introduced mostly in the 19th century, allowed for more objective assessment by the physician in search of a diagnosis, and less need of input from the patient. During the 20th century the introduction of a wide range of imaging techniques and other testing methods such as genetic testing, clinical chemistry tests, molecular diagnostics and pathogenomics have made a huge impact on diagnostic capability.
In 1761 the percussion technique for diagnosing respiratory conditions was discovered by Leopold Auenbrugger. This method of tapping body cavities to note any abnormal sounds had already been in practice for a long time in cardiology. Percussion of the thorax became more widely known after 1808 with the translation of Auenbrugger's work from Latin into French by Jean-Nicolas Corvisart.
In 1819 the introduction of the stethoscope by René Laennec began to replace the centuries-old technique of immediate auscultation – listening to the heart by placing the ear directly on the chest, with mediate auscultation using the stethoscope to listen to the sounds of the heart and respiratory tract. Laennec's publication was translated into English, 1824, by John Forbes.
The 1846 introduction by surgeon John Hutchinson (1811–1861) of the spirometer, an apparatus for assessing the mechanical properties of the lungs via measurements of forced exhalation and forced inhalation. (The recorded lung volumes and air flow rates are used to distinguish between restrictive disease (in which the lung volumes are decreased: e.g., cystic fibrosis) and obstructive diseases (in which the lung volume is normal but the air flow rate is impeded; e.g., emphysema).)
The 1851 invention by Hermann von Helmholtz (1821–1894) of the ophthalmoscope, which allowed physicians to examine the inside of the human eye.
The immediate widespread clinical use of Sir Thomas Clifford Allbutt's (1836–1925) six-inch (rather than twelve-inch) pocket clinical thermometer, which he had devised in 1867.
The 1882 introduction of bacterial cultures by Robert Koch, initially for tuberculosis, being the first laboratory test to confirm bacterial infections.
The 1895 clinical use of X-rays which began almost immediately after they had been discovered that year by Wilhelm Conrad Röntgen (1845–1923).
The 1896 introduction of the sphygmomanometer, designed by Scipione Riva-Rocci (1863–1937), to measure blood pressure.
Diagnosis
The recognition of signs and the noting of symptoms may lead to a diagnosis. Otherwise a physical examination may be carried out, and a medical history taken. Further diagnostic medical tests such as blood tests, scans, and biopsies may be needed. An X-ray, for example, would soon be diagnostic of a suspected bone fracture. A significant observation made during an examination or from a medical test may be known as a medical finding.
Examples
Ascites (build-up of fluid in the abdomen)
Nail clubbing (deformed nails)
Cough
Death rattle (last moments of life)
Hemoptysis (blood-stained sputum)
Jaundice
Organomegaly (an enlarged organ, such as the liver in hepatomegaly)
Palmar erythema (reddening of hands)
Hypersalivation (excessive saliva)
Unintentional weight loss
See also
Biomarker (medicine)
Focal neurologic signs
References
Emaciation
Emaciation is defined as the state of extreme thinness from absence of body fat and muscle wasting usually resulting from malnutrition. It is often seen as the opposite of obesity.
Characteristics
Emaciation manifests physically as thin limbs, pronounced and protruding bones, sunken eyes, dry skin, thinning hair, a bloated stomach, and a dry or coated tongue in humans. Emaciation is often accompanied by halitosis, hyponatremia, hypokalemia, anemia, improper function of lymph and the lymphatic system, and pleurisy and edema.
Causes
Emaciation can be caused by undernutrition, malaria and cholera, tuberculosis and other infectious diseases with prolonged fever, parasitic infections, many forms of cancer and their treatments, lead poisoning, and eating disorders like anorexia nervosa.
Emaciation is widespread in least developed countries and was a major cause of death in Nazi concentration camps during World War II.
Animals
A lack of resources in the habitat, disease, or neglect and cruelty from humans in captivity can result in emaciation in animals. In the rehabilitation of emaciated animals, the specific dietary needs of each animal have to be considered to avoid causing harm.
See also
Cachexia
Malnutrition
References
External links
Malnutrition
Edema
Edema (American English), also spelled oedema (British English), and also known as fluid retention, dropsy and hydropsy, is the build-up of fluid in the body's tissue, a type of swelling. Most commonly, the legs or arms are affected. Symptoms may include skin that feels tight, the area feeling heavy, and joint stiffness. Other symptoms depend on the underlying cause.
Causes may include venous insufficiency, heart failure, kidney problems, low protein levels, liver problems, deep vein thrombosis, infections, angioedema, certain medications, and lymphedema. It may also occur in immobile patients (stroke, spinal cord injury, aging), or with temporary immobility such as prolonged sitting or standing, and during menstruation or pregnancy. The condition is more concerning if it starts suddenly, or pain or shortness of breath is present.
Treatment depends on the underlying cause. If the underlying mechanism involves sodium retention, decreased salt intake and a diuretic may be used. Elevating the legs and support stockings may be useful for edema of the legs. Older people are more commonly affected. The word is from the Ancient Greek oídēma meaning 'swelling'.
Signs and symptoms
Specific area
An edema will occur in specific organs as part of inflammations, tendinitis or pancreatitis, for instance. Certain organs develop edema through tissue specific mechanisms. Examples of edema in specific organs:
Peripheral edema (“dependent” edema of legs) is extracellular fluid accumulation in the lower extremities caused by the effects of gravity, and occurs when fluid pools in the lower parts of the body, including the feet, legs, or hands. This often occurs in immobile patients, such as paraplegics or quadriplegics, pregnant women, or in otherwise healthy people due to hypervolemia or maintaining a standing or seated posture for an extended period of time. It can occur due to diminished venous return of blood to the heart due to congestive heart failure or pulmonary hypertension. It can also occur in patients with increased hydrostatic venous pressure or decreased oncotic venous pressure, due to obstruction of lymphatic or venous vessels draining the lower extremity. Certain drugs (for example, amlodipine) can cause pedal edema.
Cerebral edema is extracellular fluid accumulation in the brain. It can occur in toxic or abnormal metabolic states and conditions such as systemic lupus or reduced oxygen at high altitudes. It causes drowsiness or loss of consciousness, leading to brain herniation and death.
Pulmonary edema occurs when the pressure in blood vessels in the lung is raised because of obstruction to the removal of blood via the pulmonary veins. This is usually due to failure of the left ventricle of the heart. It can also occur in altitude sickness or on inhalation of toxic chemicals. Pulmonary edema produces shortness of breath. Pleural effusions may occur when fluid also accumulates in the pleural cavity.
Edema may also be found in the cornea of the eye with glaucoma, severe conjunctivitis, keratitis, or after surgery. Affected people may perceive coloured haloes around bright lights.
Edema surrounding the eyes is called periorbital edema (puffy eyes). The periorbital tissues are most noticeably swollen immediately after waking, perhaps as a result of the gravitational redistribution of fluid in the horizontal position.
Common appearances of cutaneous edema are observed with mosquito bites, spider bites, bee stings (wheal and flare), and skin contact with certain plants such as poison ivy or western poison oak, the latter of which are termed contact dermatitis.
Another cutaneous form of edema is myxedema, which is caused by increased deposition of connective tissue. In myxedema (and a variety of other rarer conditions) edema is caused by an increased tendency of the tissue to hold water within its extracellular space. In myxedema, this is due to an increase in hydrophilic carbohydrate-rich molecules (perhaps mostly hyaluronan) deposited in the tissue matrix. Edema forms more easily in dependent areas in the elderly (sitting in chairs at home or on aeroplanes) and this is not well understood. Estrogens alter body weight in part through changes in tissue water content. There may be a variety of poorly understood situations in which transfer of water from tissue matrix to lymphatics is impaired because of changes in the hydrophilicity of the tissue or failure of the 'wicking' function of terminal lymphatic capillaries.
Myoedema is localized mounding of muscle tissue due to percussive pressure, such as flicking the relaxed muscle with the forefinger and thumb. It produces a mound, visible, firm and non-tender at the point of tactile stimulus approximately 1-2 seconds after stimulus, subsiding back to normal after 5-10 seconds. It is a sign in hypothyroid myopathy, such as Hoffmann syndrome.
In lymphedema, abnormal removal of interstitial fluid is caused by failure of the lymphatic system. This may be due to obstruction from, for example, pressure from a cancer or enlarged lymph nodes, destruction of lymph vessels by radiotherapy, or infiltration of the lymphatics by infection (such as elephantiasis). It is most commonly due to a failure of the pumping action of muscles due to immobility, most strikingly in conditions such as multiple sclerosis, or paraplegia. It has been suggested that the edema that occurs in some people following use of aspirin-like cyclo-oxygenase inhibitors such as ibuprofen or indomethacin may be due to inhibition of lymph heart action.
Generalized
A rise in hydrostatic pressure occurs in cardiac failure. A fall in osmotic pressure occurs in nephrotic syndrome and liver failure.
Causes of edema that are generalized to the whole body can cause edema in multiple organs and peripherally. For example, severe heart failure can cause pulmonary edema, pleural effusions, ascites and peripheral edema. Such severe systemic edema is called anasarca. In rare cases, a parvovirus B19 infection may cause generalized edemas.
Although a low plasma oncotic pressure is widely cited for the edema of nephrotic syndrome, most physicians note that the edema may occur before there is any significant protein in the urine (proteinuria) or fall in plasma protein level. Most forms of nephrotic syndrome are due to biochemical and structural changes in the basement membrane of capillaries in the kidney glomeruli, and these changes occur, if to a lesser degree, in the vessels of most other tissues of the body. Thus the resulting increase in permeability that leads to protein in the urine can explain the edema if all other vessels are more permeable as well.
As well as the previously mentioned conditions, edemas often occur during the late stages of pregnancy in some women. This is more common in those with a history of pulmonary problems or poor circulation, and is intensified if arthritis is already present. Women who already have arthritic problems most often have to seek medical help for pain caused by the excessive swelling. Edemas that occur during pregnancy are usually found in the lower part of the leg, usually from the calf down.
Hydrops fetalis is a condition in a baby characterized by an accumulation of fluid in at least two body compartments.
Cause
Heart
The pumping force of the heart should help to keep a normal pressure within the blood vessels. But if the heart begins to fail (a condition known as congestive heart failure) the pressure changes can cause very severe water retention. In this condition water retention is mostly visible in the legs, feet and ankles, but water also collects in the lungs, where it causes a chronic cough. This condition is usually treated with diuretics; otherwise, the water retention may cause breathing problems and additional stress on the heart.
Kidneys
Another cause of severe water retention is kidney failure, where the kidneys are no longer able to filter fluid out of the blood and turn it into urine. Kidney disease often starts with inflammation, for instance in the case of diseases such as nephrotic syndrome or lupus. This type of water retention is usually visible in the form of swollen legs and ankles.
Liver
Cirrhosis (scarring) of the liver is a common cause of edema in the legs and abdominal cavity.
Veins
Phlebetic lymphedema (or phlebolymphedema) is seen in untreated chronic venous insufficiency and is the most common type of edema (approx. 90%). It is a combined venous/lymphatic disorder that originates in defective "leaky" veins that allow the blood to flow backwards (venous reflux), slowing the return of the blood to the heart (venous stasis). The venous pressure in the legs changes dramatically while standing compared to lying down. How much pressure there is depends on the person's height; in the average adult, it is 8 mm Hg while lying down and 100 mm Hg while standing.
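The standing value is consistent with a simple hydrostatic estimate. Taking an illustrative vertical blood column of about 1.3 m from the heart to the ankle (an assumed figure for an average adult) and a blood density of roughly 1060 kg/m³:

\[
p = \rho g h \approx 1060\ \text{kg/m}^3 \times 9.81\ \text{m/s}^2 \times 1.3\ \text{m} \approx 13.5\ \text{kPa} \approx 100\ \text{mm Hg}.
\]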
In venous insufficiency, venous stasis results in abnormally high venous pressure (venous hypertension) and greater permeability of blood capillaries (capillary hyperpermeability), so that excess fluid must be drained through the lymphatic system. The lymphatic system slowly removes excess fluid and proteins from the veins in the lower legs towards the upper body; however, as it is not as efficient as an unimpaired circulation, swelling (edema) becomes visible, particularly in the ankles and lower leg. The chronic excess of fluid in the lymphatic system and the capillary hyperpermeability cause an inflammatory response, which leads to tissue fibrosis of both the veins and the lymphatic vessels and to the opening of arteriovenous shunts, all of which worsens the condition in a vicious cycle.
Others
Swollen legs, feet and ankles are common in late pregnancy. The problem is partly caused by the weight of the uterus on the major veins of the pelvis. It usually clears up after delivery of the baby, and is mostly not a cause for concern, though it should always be reported to a doctor.
Lack of exercise is another common cause of water retention in the legs. Exercise helps the leg veins work against gravity to return blood to the heart. If blood travels too slowly and starts to pool in the leg veins, the pressure can force too much fluid out of the leg capillaries into the tissue spaces. The capillaries may break, leaving small blood marks under the skin. The veins themselves can become swollen, painful and distorted – a condition known as varicose veins. Muscle action is needed not only to keep blood flowing through the veins but also to stimulate the lymphatic system to fulfil its "overflow" function. Long-haul flights, lengthy bed-rest, immobility caused by disability and so on, are all potential causes of water retention. Even very small exercises such as rotating ankles and wiggling toes can help to reduce it.
Certain medications are prone to causing water retention. These include estrogens, thereby including drugs for hormone replacement therapy or the combined oral contraceptive pill, as well as non-steroidal anti-inflammatory drugs and beta-blockers.
Premenstrual water retention, causing bloating and breast tenderness, is common.
Mechanism
Six factors can contribute to the formation of edema:
increased hydrostatic pressure;
reduced colloidal or oncotic pressure within blood vessels;
increased tissue colloidal or oncotic pressure;
increased blood vessel wall permeability (such as inflammation);
obstruction of fluid clearance in the lymphatic system;
changes in the water-retaining properties of the tissues themselves. Raised hydrostatic pressure often reflects retention of water and sodium by the kidneys.
Generation of interstitial fluid is regulated by the forces of the Starling equation. Hydrostatic pressure within blood vessels tends to cause water to filter out into the tissue. This leads to a difference in protein concentration between blood plasma and tissue. As a result, the colloidal or oncotic pressure of the higher level of protein in the plasma tends to draw water back into the blood vessels from the tissue. Starling's equation states that the rate of leakage of fluid is determined by the difference between the two forces and also by the permeability of the vessel wall to water, which determines the rate of flow for a given force imbalance. Most water leakage occurs in capillaries or post-capillary venules, which have a semi-permeable membrane wall that allows water to pass more freely than protein. (The protein is said to be reflected, and the efficiency of reflection is given by a reflection coefficient of up to 1.) If the gaps between the cells of the vessel wall open up, then permeability to water is increased first, but as the gaps increase in size, permeability to protein also increases, with a fall in the reflection coefficient.
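In symbols, the Starling relationship is commonly written in the following textbook form (a standard formulation, shown here for illustration rather than quoted from a specific source):

J_v = K_f [(P_c - P_i) - σ(π_c - π_i)]

where J_v is the net rate of fluid filtration, K_f the filtration coefficient (reflecting the wall's permeability to water), P_c and P_i the capillary and interstitial hydrostatic pressures, π_c and π_i the corresponding colloid osmotic (oncotic) pressures, and σ the reflection coefficient for protein, between 0 and 1. Edema tends to form when P_c rises, π_c falls, K_f increases, or σ falls, since each change increases filtration beyond what the lymphatic system can return.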
Changes in the variables in Starling's equation can contribute to the formation of edemas either by an increase in hydrostatic pressure within the blood vessel, a decrease in the oncotic pressure within the blood vessel or an increase in vessel wall permeability. The latter has two effects. It allows water to flow more freely and it reduces the colloidal or oncotic pressure difference by allowing protein to leave the vessel more easily.
Another set of vessels known as the lymphatic system acts like an "overflow" and can return much excess fluid to the bloodstream. But even the lymphatic system can be overwhelmed, and if there is simply too much fluid, or if the lymphatic system is congested, then the fluid will remain in the tissues, causing swellings in legs, ankles, feet, abdomen or any other part of the body.
Molecular biology
The excessive extracellular fluid (interstitial fluid) in edema is to a substantial degree caused by an increased permeability of the smallest blood vessels (capillaries). This permeability is modulated by numerous biochemical chain reactions and can therefore be unbalanced by many influences.
Involved in these processes are, among others, the transmembrane proteins occludin, claudins, tight junction protein ZO-1, cadherins, catenins and actinin, which are directed by intracellular signal chains, in particular in connection with the enzyme protein kinase C.
Diagnosis
Edema may be described as pitting edema or non-pitting edema. Pitting edema is when, after pressure is applied to a small area, the indentation persists after the release of the pressure. Peripheral pitting edema is the more common type, resulting from water retention. It can be caused by systemic diseases, pregnancy in some women, either directly or as a result of heart failure, or by local conditions such as varicose veins, thrombophlebitis, insect bites, and dermatitis.
Non-pitting edema is observed when the indentation does not persist. It is associated with such conditions as lymphedema, lipedema, and myxedema.
Edema caused by malnutrition defines kwashiorkor, an acute form of childhood protein-energy malnutrition characterized by edema, irritability, anorexia, ulcerating dermatoses, and an enlarged liver with fatty infiltrates.
Treatment
When possible, treatment involves resolving the underlying cause. Many cases of heart or kidney disease are treated with diuretics.
Treatment may also involve positioning the affected body parts to improve drainage. For example, swelling in feet or ankles may be reduced by having the person lie down in bed or sit with the feet propped up on cushions. Intermittent pneumatic compression can be used to pressurize tissue in a limb, forcing fluids—both blood and lymph—to flow out of the compressed area.
References
External links
Causes of death
Medical signs
Lymphatic vessel diseases
Heat exhaustion | Heat exhaustion is a heat-related illness characterized by the body's inability to effectively cool itself, typically occurring in high ambient temperatures or during intense physical exertion. In heat exhaustion, core body temperature ranges from 37 °C to 40 °C (98.6 °F to 104 °F). Symptoms include profuse sweating, weakness, dizziness, headache, nausea, and lowered blood pressure, resulting from dehydration and serum electrolyte depletion. Heat-related illnesses lie on a spectrum of severity, where heat exhaustion is considered less severe than heat stroke but more severe than heat cramps and heat syncope.
Climate change and increasing global temperatures have led to more frequent and intense heat waves, raising the incidence of heat exhaustion. Risk factors include hot and humid weather, prolonged heat exposure, intense physical exertion, limited access to water or cooling, and certain medications that can exacerbate fluid and serum electrolyte losses including diuretics, antihypertensives, anticholinergics, and antidepressants. Children, older adults, and individuals with certain pre-existing health conditions are more susceptible to heat exhaustion due to their reduced ability to regulate core body temperature.
Prevention strategies include wearing loose and lightweight clothing, avoiding strenuous activity in extreme heat, maintaining adequate hydration, and gradually acclimatizing to hot conditions. Public health measures, such as heat warnings and community cooling centers, also help prevent heat exhaustion during extreme weather events. Treatment involves moving to a cooler environment, rehydrating, and cooling the body. Untreated heat exhaustion can progress to heat stroke, a life-threatening condition characterized by a core body temperature above 40 °C (104 °F) and central nervous system dysfunction.
Signs and symptoms
Common
Elevated heart rate
Lowered blood pressure
Elevated core body temperature (not exceeding 40 °C or 104 °F)
Elevated respiratory rate
Profuse sweating
Dehydration
Serum electrolyte depletion
Weakness and fatigue
Persistent muscle cramps
Skin tingling
Nausea and vomiting
Dizziness and light-headedness
Irritability
Headache
Less common
Pallor
Hot and dry skin
Core body temperature exceeding 40 °C or 104 °F
Syncope
Central nervous system dysfunction (e.g., altered mental status, loss of spatial awareness, loss of bodily movement control, seizures, etc.)
Comparison with other heat-related illnesses
Common signs and symptoms of heat exhaustion can also be observed in other heat-related illnesses such as heat cramps, heat syncope, and heat stroke. Heat cramps, a mild form of heat-related illness, is characterized by persistent abdominal, quadricipital, and calf muscle contractions. Heat syncope, also referred to as exercise-associated collapse, is a moderate form of heat-related illness characterized by a temporary loss of consciousness. Unlike heat exhaustion, heat cramps and heat syncope do not have systemic effects.
Heat exhaustion is a precursor to heat stroke, a severe form of heat-related illness. Heat stroke is more likely than heat exhaustion to cause pallor, hot and dry skin, syncope, and dysfunction of the central nervous system (e.g., altered mental status, loss of spatial awareness, loss of bodily movement control, seizures, etc.). Central nervous system dysfunction and a core body temperature exceeding 40 °C or 104 °F are the primary differentiators between heat exhaustion and heat stroke. One of the earliest indicators of heat stroke is altered mental status, which can manifest as delirium, confusion, reduced alertness, or loss of consciousness. Prompt recognition and treatment are crucial to prevent multi-organ failure and death.
Physiology
The human body maintains a core body temperature at around 37 °C or 98.6 °F through mechanisms controlled by the thermoregulatory center within the hypothalamus. When the body is exposed to high ambient temperatures, intense physical exertion, or both, the thermoregulatory center will initiate several processes to dissipate more heat:
Blood vessels near the skin surface dilate, increasing blood flow to the skin to facilitate heat loss through radiation and convection
Heart rate increases to support elevated blood flow to the skin
Eccrine sweat glands in the skin produce sweat, which evaporates from the skin surface
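These heat-dissipation processes can be summarized with a simplified heat-balance relation, given here as an illustrative textbook sketch rather than a formula taken from the sources above:

S = M - W - E ± R ± C ± K

where S is the rate of heat storage in the body, M the metabolic heat production, W external mechanical work, E evaporative heat loss (sweating and respiration), and R, C and K heat exchange with the environment by radiation, convection and conduction. Heat-related illness develops when the loss terms can no longer keep S near zero and core temperature rises.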
Heat cramps and heat syncope
Heat-related illnesses lie on a spectrum of severity. Conditions on the lower end of this spectrum include heat cramps and heat syncope. The electrolyte depletion theory proposes that increased sweating during intense physical exertion in high ambient temperatures results in a depletion of serum electrolytes (e.g., sodium, potassium, etc.) that causes sustained involuntary muscle contractions, or heat cramps. However, the contribution of intense physical exertion and high ambient temperatures to serum electrolyte depletion in the absence of significant dehydration has been contested by more recent research, which proposes an alternative theory. The neuromuscular theory proposes that muscle fatigue increases the excitability of α1 muscle spindles and decreases the inhibitory input from Golgi tendon organs, leading to sustained involuntary muscle contractions.
In heat syncope, or exercise-associated collapse, there is an increased dilation of blood vessels near the skin's surface and a pooling of blood in the lower extremities due to a decrease in vasomotor tone, which is the extent of control over the constriction and dilation of blood vessels. This results in a drop in blood pressure when not lying down and a temporary reduction in blood flow to the brain, leading to fainting.
Heat exhaustion
Heat exhaustion is a moderate form of heat-related illness characterized by increasingly overwhelmed thermoregulatory mechanisms. In heat exhaustion, the core body temperature rises to between 37 °C and 40 °C (98.6 °F and 104 °F). To dissipate heat, blood flow to the skin can increase up to 8 liters per minute, accounting for a significant proportion of the cardiac output. This increase in peripheral circulation leads to a reduction in central blood volume (the volume of blood contained within the heart, lungs, and large blood vessels). The heart rate further increases, but the cardiac output and blood pressure continue to drop. At the same time, profuse sweating occurs, with losses of up to 1–2 liters of sweat per hour. This sweating exacerbates the reduction in central blood volume and leads to dehydration and serum electrolyte depletion, particularly hyponatremia (low serum sodium) and hypokalemia (low serum potassium). The combination of decreased blood flow to vital organs and serum electrolyte losses results in the various symptoms mentioned in "Signs and symptoms." Additionally, the body's respiratory rate increases to aid in heat dissipation through the lungs.
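To put the sweating figure in perspective, an illustrative calculation (using an approximate latent heat of vaporization of sweat of about 2.4 kJ per gram, a value assumed here rather than taken from the sources above) shows how much heat fully evaporated sweat can remove:

1 L/h ≈ 1000 g/h × 2.4 kJ/g ≈ 2400 kJ/h ≈ 670 W

So sweat rates of 1–2 liters per hour correspond to a potential heat loss of roughly 670–1300 W, several times the resting metabolic heat production of roughly 100 W. In humid air, however, much of the sweat drips off without evaporating, so this cooling capacity is not fully realized while fluid and electrolyte losses continue.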
Heat stroke
Heat exhaustion can progress to heat stroke, a severe form of heat-related illness characterized by complete failure of thermoregulatory mechanisms. Heat stroke is defined by two key features: a core body temperature above 40 °C (104 °F) and central nervous system dysfunction. In classic heat stroke, sweating ceases due to sweat gland dysfunction or depletion. This loss of evaporative cooling further accelerates heat accumulation. The resulting hyperthermia leads to widespread cellular dysfunction, including:
Alterations in enzyme function
Protein denaturation
Disruption of cellular membranes
Hyperthermia causes direct cellular damage, triggering a systemic inflammatory response. This inflammatory cascade can result in multi-organ dysfunction, potentially leading to:
Acute kidney injury
Liver failure
Disseminated intravascular coagulation
Causes
There is increasing evidence linking higher temperatures to a variety of diseases and disorders as well as elevated mortality and morbidity rates. The Intergovernmental Panel on Climate Change (IPCC) projects that temperatures will rise by up to 1.5 °C in the future due to ongoing greenhouse gas emissions. Climate change exacerbates extreme temperatures, resulting in more intense and frequent heat waves. As this trend continues, populations with greater susceptibility to heat exhaustion, such as children, older adults, and individuals with chronic diseases, are at an increased risk.
Common causes of heat exhaustion and other heat-related illnesses include:
Prolonged exposure to hot, sunny, or humid weather conditions
Extended time spent in high-temperature environments without adequate cooling
Engaging in strenuous activities through work, exercise, or sports, particularly in hot conditions
Insufficient fluid intake leading to dehydration
Overconsumption of fluids without adequate electrolyte replacement, leading to serum electrolyte depletion
Wearing tight or non-breathable clothing that does not allow heat to escape, trapping heat close to the body
Use of certain medications that impair thermoregulation, such as diuretics, antihypertensives, anticholinergics, and antidepressants
Sudden exposure to high temperatures without gradual acclimatization
Risk factors
Risk factors for heat exhaustion include:
Wearing dark, padded, or insulated clothing, hats, and helmets (e.g., football pads, turnout gear, etc.) that trap heat and impede cooling
Higher body fat percentage, which can hinder heat dissipation
Presence of fever, which elevates body temperature and lowers heat tolerance
Children younger than four years old and adults older than 65 are at a higher risk of serious heat illness due to impaired thermoregulation, even at rest, especially in hot and humid conditions without adequate cooling
Insufficient access to water, air conditioning, or other cooling methods
Use of medications that increase the risk of heat exhaustion, including diuretics, first-generation antihistamines, beta-blockers, antipsychotics, MDMA ('Ecstasy', 'Molly'), and other amphetamines
Medication impact
Medications such as diuretics, antihypertensives, anticholinergics, and antidepressants can cause electrolyte imbalances, drug-induced hypohidrosis (reduced sweating), or drug-induced hyperhidrosis (excessive sweating). This disrupts the body's ability to regulate core temperature and increases the risk of heat exhaustion.
Anticholinergic medications inhibit the parasympathetic arm of the autonomic nervous system acting through the muscarinic M3 acetylcholine receptors, which often results in dry mouth, increased thirst, and an increased risk of dehydration. Other medications with anticholinergic properties, such as certain antidepressants and first-generation antihistamines, have comparable side effects. For patients at risk of or experiencing heat exhaustion, taking these medications can further increase their risk.
Certain antidepressants, such as tricyclic antidepressants and selective serotonin reuptake inhibitors (SSRIs), as well as opioids that stimulate histamine release, can cause hyperhidrosis, leading to significant fluid and serum electrolyte depletion. Though the mechanisms are not fully understood, antihypertensives such as ACE inhibitors, beta-blockers, and diuretics have been shown to decrease heat tolerance. In addition, ACE inhibitors and diuretics can cause electrolyte imbalances, increase thirst, and increase the risk of dehydration. Beta-blockers limit the body's ability to redirect hyperthermic blood away from the body's core and towards the skin for cooling. If dehydration and electrolyte imbalances are left untreated, they can lead to severe complications, progress to a more severe heat-related illness such as heat stroke, and can potentially be fatal.
The management of drug-induced hypohidrosis and hyperhidrosis should be thoroughly evaluated and discussed with a healthcare professional. Treatment options may include discontinuation of the medication, a dose adjustment, a drug substitution to a different drug-class, adaptation to new behavioral and environmental changes, or the addition of another agent that can counteract the side effects.
Special populations
Pediatrics
Children (under the age of 18) have a lower heat tolerance than adults due to less developed homeostatic regulatory systems, higher metabolic rates, and lower cardiac output. Strenuous exercise in high-temperature conditions is the leading cause of heat-related illness in children. The dehydration stemming from heat-related illness is what puts children at risk for thermoregulatory dysfunction, which in turn further impairs their ability to cope with heat exhaustion because it decreases sweating capacity and increases the core temperature response. As in adults, the best ways to combat and prevent heat exhaustion in children are to condition properly before exertion, hydrate, allow time for temperature acclimatization, and dress appropriately.
Pregnancy
Although there are not many studies on how rates of heat exhaustion differ in the pregnant population, its adverse effects in pregnancy can be fatal. Heat exhaustion becomes much more common in pregnant women performing the same tasks they did before pregnancy. While the symptoms are no different from the most common ones, such as dizziness, fatigue, and dehydration, the serious adverse effects include increased rates of preterm birth, miscarriage, and birth defects. The reason for these more serious adverse effects is that pregnancy imposes higher metabolic and cardiovascular demands, and heat exhaustion amplifies these demands further. Overcoming the dehydration component of heat exhaustion is vital because adequate hydration is necessary for proper fetal development and metabolic activity. To combat dehydration, water intake must be increased from the pre-pregnancy amount, and hot environments should be avoided to limit sweating.
Prevention
Ways to prevent and lower risk of heat exhaustion include:
Public widespread announcements of heat waves or rapid increases in temperature
Staying up to date on daily weather reports
Heat shelters throughout communities
Wearing loose fitting and lighter fabric clothing
Staying well hydrated, unless fluid intake is medically restricted
Resting in shaded, cool areas when performing strenuous activities or work
Avoid prolonged exposure to hot environments, such as tropical sunshine in the middle of the day, Mediterranean forests, or a boiler room
Drink adequate fluids
Avoid exertion and exercise in hot weather
Avoid medications that can be detrimental to the regulation of body heat
Diagnosis
Heat exhaustion is most commonly diagnosed by medical professionals through a physical examination, during which the person's temperature is checked and they are asked about their recent activity. If the medical professionals suspect that a person's heat exhaustion has progressed to heat stroke, they may order the following tests:
Blood test, to look for low blood sugar or low potassium and to check the gas content of the blood.
Urinalysis, a urine test that measures color, clarity, pH, glucose concentration, and protein levels; it can also assess kidney function, which is commonly affected in classic heat stroke.
Muscle function tests, used to check for rhabdomyolysis, which is severe damage to a person's skeletal muscle tissue.
Treatment
First aid
First aid for heat exhaustion or heat stroke includes:
Moving the person to a shaded, fanned, or air-conditioned place
Removing any excess or tight clothing to facilitate cooling
Applying wet towels or ice packs wrapped in cloth to the forehead, neck, armpits, and groin, and using a fan to cool the person down
Laying the person down on their back and elevating their feet above head level to improve blood circulation
Having the person drink cool water or sports drinks, also referred to as electrolyte drinks, provided they are conscious, alert, and not vomiting (Only applies to heat exhaustion)
Turning the person on their side if they are vomiting to prevent choking
Monitoring the person's vital signs, which includes their heart rate, blood pressure, breathing rate, and body temperature
Monitoring the person's mental status (i.e., confusion, delirium, reduced alertness etc.)
Contacting emergency medical services if their situation does not improve rapidly or worsens
Emergency medical treatment
If an individual with heat exhaustion receives medical treatment, Emergency Medical Technicians (EMTs), doctors, and/or nurses may also:
Provide supplemental oxygen
Administer intravenous fluids and electrolytes if they are too confused to drink and/or are vomiting
Do Not
If an individual is experiencing heat exhaustion or any other heat related illness DO NOT:
Administer fever medications such as aspirin or Tylenol as they can be harmful for the individual
Administer salt tablets as they can worsen dehydration
Use alcohol or caffeine containing products as they can make it harder for the individual to control their body temperature
Give anything by mouth if the person is vomiting or unconscious
Heat warning resources
With high temperatures becoming more frequent, there are resources available to stay up to date on sudden changes in the weather. In the United States, OSHA, in collaboration with NIOSH, has a Heat Safety Tool app that provides users with real-time weather forecasts for a given location, common signs of heat-related illnesses, and what the temperature feels like outside, allowing individuals to safely plan their day around the weather. Additional resources include monitoring the weather in your area of the United States by zip code using weather.gov, being aware of cooling centers in your area, knowing how to save and use less energy within your household, and being well informed about populations who are more vulnerable to heat-related illnesses than others. Apart from these resources, radio stations and news weather forecasts continue to provide information on changes in weather and temperature both globally and within your area.
Prognosis
After adequate rest and rehydration, most individuals recover from heat exhaustion. However, when heat exhaustion is left untreated, the most common disease progression is heat stroke. According to the CDC, a typical sign that a person is having a heat stroke is a body temperature reaching 104 °F or higher within a span of 10 to 15 minutes. In addition to a high body temperature, they will also experience central nervous system dysfunction such as altered mental status and slurred speech. Another possible illness that heat stroke can lead to is rhabdomyolysis, the rapid injury of skeletal muscle, especially when heat stroke is caused by physical exertion. When an individual experiences rhabdomyolysis, the damaged skeletal muscle releases toxic components such as myoglobin into the bloodstream, which can cause problems such as dark, cola-colored urine, myalgia, and kidney damage from blocked renal tubules. If a person experiencing heat stroke is not properly treated, the condition can progress further to metabolic abnormalities, irreversible damage to multiple organs, and death.
See also
Occupational heat stress
References
Effects of external causes
Wilderness medical emergencies | 0.783257 | 0.994908 | 0.779269 |
Fluid compartments | The human body and even its individual body fluids may be conceptually divided into various fluid compartments, which, although not literally anatomic compartments, do represent a real division in terms of how portions of the body's water, solutes, and suspended elements are segregated. The two main fluid compartments are the intracellular and extracellular compartments. The intracellular compartment is the space within the organism's cells; it is separated from the extracellular compartment by cell membranes.
About two-thirds of the total body water of humans is held in the cells, mostly in the cytosol, and the remainder is found in the extracellular compartment. The extracellular fluids may be divided into three types: interstitial fluid in the "interstitial compartment" (surrounding tissue cells and bathing them in a solution of nutrients and other chemicals), blood plasma and lymph in the "intravascular compartment" (inside the blood vessels and lymphatic vessels), and small amounts of transcellular fluid such as ocular and cerebrospinal fluids in the "transcellular compartment".
The normal processes by which life self-regulates its biochemistry (homeostasis) produce fluid balance across the fluid compartments. Water and electrolytes are continuously moving across barriers (e.g., cell membranes, vessel walls), albeit often in small amounts, to maintain this healthy balance. The movement of these molecules is controlled and restricted by various mechanisms. When illnesses upset the balance, electrolyte imbalances can result.
The interstitial and intravascular compartments readily exchange water and solutes, but the third extracellular compartment, the transcellular, is thought of as separate from the other two and not in dynamic equilibrium with them.
The science of fluid balance across fluid compartments has practical application in intravenous therapy, where doctors and nurses must predict fluid shifts and decide which IV fluids to give (for example, isotonic versus hypotonic), how much to give, and how fast (volume or mass per minute or hour).
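As an illustration of the kind of bedside arithmetic this involves, plasma osmolarity is often estimated with a commonly taught approximation (shown here as a sketch, not a treatment guideline), with sodium in mmol/L and glucose and blood urea nitrogen (BUN) in mg/dL:

estimated osmolarity ≈ 2 × [Na+] + glucose/18 + BUN/2.8

For example, [Na+] = 140 mmol/L, glucose = 90 mg/dL and BUN = 14 mg/dL give 280 + 5 + 5 = 290 mOsm/L, within the commonly quoted normal range of about 285–295 mOsm/kg. Fluids that are effectively hypotonic relative to this value tend to shift water into cells, while isotonic fluids remain largely within the extracellular compartment.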
Intracellular compartment
The intracellular fluid (ICF) is all fluid contained inside the cells, consisting of cytosol and fluid in the cell nucleus. The cytosol is the matrix in which cellular organelles are suspended. The cytosol and organelles together compose the cytoplasm. The cell membranes are the outer barrier. In humans, the intracellular compartment contains on average about 28 liters of fluid, and under ordinary circumstances remains in osmotic equilibrium. It contains moderate quantities of magnesium and sulfate ions.
In the cell nucleus, the fluid component of the nucleoplasm is called the nucleosol.
Extracellular compartment
The interstitial, intravascular and transcellular compartments comprise the extracellular compartment. Its extracellular fluid (ECF) contains about one-third of total body water.
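These proportions are often remembered with the approximate "60-40-20" teaching rule (a rough rule of thumb for a lean adult, used here as a worked illustration rather than a figure from the sources above): total body water is about 60% of body weight, intracellular fluid about 40%, and extracellular fluid about 20%. For a 70 kg adult, taking 1 kg of water as 1 liter, this gives:

total body water ≈ 0.60 × 70 kg ≈ 42 L; ICF ≈ 0.40 × 70 ≈ 28 L; ECF ≈ 0.20 × 70 ≈ 14 L

of which roughly 3 liters is plasma and about 10–11 liters is interstitial fluid, with a small transcellular remainder.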
Intravascular compartment
The main intravascular fluid in mammals is blood, a complex mixture with elements of a suspension (blood cells), colloid (globulins), and solutes (glucose and ions). The blood represents both the intracellular compartment (the fluid inside the blood cells) and the extracellular compartment (the blood plasma). The volume of plasma in the average adult male is approximately 3 liters. The volume of the intravascular compartment is regulated in part by hydrostatic pressure gradients, and by reabsorption by the kidneys.
Interstitial compartment
The interstitial compartment (also called "tissue space") surrounds tissue cells. It is filled with interstitial fluid, including lymph. Interstitial fluid provides the immediate microenvironment that allows for movement of ions, proteins and nutrients across the cell barrier. This fluid is not static, but is continually being refreshed by the blood capillaries and recollected by lymphatic capillaries. In the average male human body, the interstitial space contains approximately 11 liters of fluid.
Transcellular compartment
The transcellular fluid is the portion of total body fluid that is formed by the secretory activity of epithelial cells and is contained within specialized epithelial-lined compartments. Fluid does not normally collect in larger amounts in these spaces, and any significant fluid collection in these spaces is physiologically nonfunctional. Examples of transcellular spaces include the eye, the central nervous system, the peritoneal and pleural cavities, and the joint capsules. A small amount of fluid, called transcellular fluid, does exist normally in such spaces. For example, the aqueous humor, the vitreous humor, the cerebrospinal fluid, the serous fluid produced by the serous membranes, and the synovial fluid produced by the synovial membranes are all transcellular fluids. They are all very important, yet there is not much of each. For example, there is only about 150 milliliters of cerebrospinal fluid in the entire central nervous system at any moment. All of the above-mentioned fluids are produced by active cellular processes working with blood plasma as the raw material, and they are all more or less similar to blood plasma except for certain modifications tailored to their function. For example, the cerebrospinal fluid is made by various cells of the CNS, mostly the ependymal cells, from blood plasma.
Fluid shift
Fluid shifts occur when the body's fluids move between the fluid compartments. Physiologically, this occurs by a combination of hydrostatic pressure gradients and osmotic pressure gradients. Water will move from one space into the next passively across a semi-permeable membrane until the hydrostatic and osmotic pressure gradients balance each other. Many medical conditions can cause fluid shifts. When fluid moves out of the intravascular compartment (the blood vessels), blood pressure can drop to dangerously low levels, endangering critical organs such as the brain, heart and kidneys; when it shifts out of the cells (the intracellular compartment), cellular processes slow down or cease from intracellular dehydration; when excessive fluid accumulates in the interstitial space, edema develops; and fluid shifts into the brain cells can cause increased cranial pressure. Fluid shifts may be compensated by fluid replacement or diuretics.
Third spacing
"Third spacing" is the abnormal accumulation of fluid into an extracellular and extravascular space. In medicine, the term is often used with regard to loss of fluid into interstitial spaces, such as with burns or edema, but it can also refer to fluid shifts into a body cavity (transcellular space), such as ascites and pleural effusions. With regard to severe burns, fluids may pool on the burn site (i.e. fluid lying outside of the interstitial tissue, exposed to evaporation) and cause depletion of the fluids. With pancreatitis or ileus, fluids may "leak out" into the peritoneal cavity, also causing depletion of the intracellular, interstitial or vascular compartments.
Patients who undergo long, difficult operations in large surgical fields can collect third-space fluids and become intravascularly depleted despite large volumes of intravenous fluid and blood replacement.
The precise volume of fluid in a patient's third spaces changes over time and is difficult to accurately quantify.
Third spacing conditions may include peritonitis, pyometritis, and pleural effusions. Hydrocephalus and glaucoma are theoretically forms of third spacing, but the volumes are too small to induce significant shifts in blood volumes, or overall body volumes, and thus are generally not referred to as third spacing.
See also
Blood–brain barrier
Compartment (pharmacokinetics)
Distribution (pharmacology) and volume of distribution
References
Physiology
Cell biology | 0.788544 | 0.98803 | 0.779105 |
Neurological disorder | A neurological disorder is any disorder of the nervous system. Structural, biochemical or electrical abnormalities in the brain, spinal cord or other nerves can result in a range of symptoms. Examples of symptoms include paralysis, muscle weakness, poor coordination, loss of sensation, seizures, confusion, pain, tauopathies, and altered levels of consciousness. There are many recognized neurological disorders, some are relatively common, but many are rare.
Interventions for neurological disorders include preventive measures, lifestyle changes, physiotherapy or other therapy, neurorehabilitation, pain management, medication, operations performed by neurosurgeons, or a specific diet. The World Health Organization estimated in 2006 that neurological disorders and their sequelae (direct consequences) affect as many as one billion people worldwide, and identified health inequalities and social stigma/discrimination as major factors contributing to the associated disability and their impact.
Causes
Although the brain and spinal cord are surrounded by tough membranes, enclosed in the bones of the skull and spinal vertebrae, and chemically isolated by the blood–brain barrier, they are very susceptible to damage if these protections are compromised. Nerves tend to lie deep under the skin but can still become exposed to damage. Individual neurons, and the neural circuits and nerves into which they form, are susceptible to electrochemical and structural disruption. Neuroregeneration may occur in the peripheral nervous system and thus overcome or work around injuries to some extent, but it is thought to be rare in the brain and spinal cord.
The specific causes of neurological problems vary, but can include genetic disorders, congenital abnormalities or disorders, infections, lifestyle, or environmental health problems such as pollution, malnutrition, brain damage, spinal cord injury, nerve injury, or gluten sensitivity (with or without intestinal damage or digestive symptoms). Metal poisoning, where metals accumulate in the human body and disrupt biological processes, has been reported to induce neurological problems, at least in the case of lead. The neurological problem may start in another body system that interacts with the nervous system. For example, cerebrovascular disease involves brain injury due to problems with the blood vessels (cardiovascular system) supplying the brain; autoimmune disorders involve damage caused by the body's own immune system; lysosomal storage diseases such as Niemann–Pick disease can lead to neurological deterioration. The National Institute for Health and Care Excellence recommends considering the evaluation of underlying coeliac disease in people with unexplained neurological symptoms, particularly peripheral neuropathy or ataxia.
In a substantial minority of cases of neurological symptoms, no neurological cause can be identified using current testing procedures, and such "idiopathic" conditions can invite different theories about what is occurring. Generally speaking, a substantial number of neurological disorders may originate from a previous, clinically unrecognized viral infection. For example, it is thought that infection with the Hepatitis E virus, which is often initially asymptomatic, may provoke neurological disorders, but there are many other examples as well.
Numerous examples have been described of neurological disorders associated with mutated DNA repair genes. Inadequate repair of DNA damage can lead directly to cell death and neuron depletion, as well as to disruptions in the pattern of epigenetic alterations required for normal neuronal function.
DNA damage
Neurons are highly oxygenated cells, and as a consequence DNA damage caused by chronic exposure to endogenous reactive oxygen species is a substantial challenge for them. Germline mutations that impair the repair of DNA damage cause neuronal dysfunction and are etiologically linked to many neurological disorders. For example, the neurological disorders amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) are linked to DNA damage accumulation and DNA repair deficiency.
Classification
Neurological disorders can be categorized according to the primary location affected, the primary type of dysfunction involved, or the primary type of cause. The broadest division is between central nervous system disorders and peripheral nervous system disorders. The Merck Manual lists brain, spinal cord disorders, and nerve disorders in the following overlapping categories:
Brain:
Brain dysfunction according to type:
Apraxia (patterns or sequences of movements)
Agnosia (identifying things or people)
Amnesia (memory)
Aphasia (language)
Dysarthria (speech)
Spinal cord disorders
Peripheral nervous system disorders (e.g., Peripheral neuropathy)
Cranial nerve disorder (e.g., trigeminal neuralgia)
Autonomic nervous system disorders (e.g., dysautonomia, multiple system atrophy)
Epilepsy
Movement disorders of the central and peripheral nervous system such as Parkinson's disease, essential tremor, amyotrophic lateral sclerosis (ALS), and Tourette syndrome
Sleep disorders (e.g., narcolepsy)
Some speech disorders (e.g., stuttering)
Headaches (e.g., migraine, cluster headache, tension headache)
Pain (e.g., complex regional pain syndrome, fibromyalgia)
Delirium
Dementia (e.g., Alzheimer's disease)
Coma and impaired consciousness, (e.g., stupor)
Stroke
Tumors of the nervous system (e.g., cancer)
Multiple sclerosis and other demyelinating diseases
Brain infections
Meningitis
Prion diseases (a type of infectious agent)
Many of the diseases and disorders listed above have neurosurgical treatments available, such as Tourette syndrome, Parkinson's disease, and essential tremor.
Neurological disorders in non-human animals are treated by veterinarians.
Mental functioning
A neurological examination can, to some extent, assess the impact of neurological damage and disease on brain function in terms of behavior, memory, or cognition. Behavioral neurology specializes in this area. In addition, clinical neuropsychology uses neuropsychological assessment to precisely identify and track problems in mental functioning, usually after some sort of brain injury or neurological impairment.
Alternatively, a condition might first be detected through the presence of abnormalities in mental functioning, and further assessment may indicate an underlying neurological disorder. There are sometimes unclear boundaries in the distinction between disorders treated within neurology, and mental disorders treated within the other medical specialty of psychiatry, or other mental health professions such as clinical psychology. In practice, cases may present as one type, but be assessed as more appropriate to the other. Neuropsychiatry deals with mental disorders arising from specific identified diseases of the nervous system.
One area that can be contested is in cases of idiopathic neurological symptoms, conditions where the cause cannot be established. It can be decided in some cases, perhaps by exclusion of any accepted diagnosis, that higher-level brain/mental activity is causing symptoms, referred to as functional symptoms, rather than the symptoms originating in the area of the nervous system from which they may appear to originate. Cases involving these symptoms are classified as functional disorders ("functional" in this context is usually contrasted with the old term "organic disease"). For example, in functional neurologic disorder (FND), those affected present with various neurological symptoms such as functional seizures, numbness, paresthesia, and weakness, among others. Such cases may be contentiously interpreted as being "psychological" rather than "neurological." If the onset of functional symptoms appears to be causally linked to emotional states or responses to social stress or social contexts, the condition may be referred to as conversion disorder.
On the other hand, dissociation refers to partial or complete disruption of the integration of a person's conscious functioning, such that a person may feel detached from one's emotions, body and/or immediate surroundings. In extreme cases, this may be diagnosed as depersonalization-derealization disorder. There are also conditions viewed as neurological where a person appears to consciously register neurological stimuli that cannot possibly be coming from the part of the nervous system to which they would normally be attributed, such as phantom pain or synesthesia, or where limbs act without conscious direction, as in alien hand syndrome.
Conditions that are classed as mental disorders, learning disabilities, and forms of intellectual disability, are not themselves usually dealt with as neurological disorders. Biological psychiatry seeks to understand mental disorders in terms of their basis in the nervous system, however. In clinical practice, mental disorders are usually indicated by a mental state examination, or other type of structured interview or questionnaire process. At the present time, neuroimaging (brain scans) alone cannot accurately diagnose a mental disorder or tell the risk of developing one; however, it can be used to rule out other medical conditions such as a brain tumor. In research, neuroimaging and other neurological tests can show correlations between reported and observed mental difficulties and certain aspects of neural function or differences in brain structure. In general, numerous fields intersect to try to understand the basic processes involved in mental functioning, many of which are brought together in cognitive science. The distinction between neurological and mental disorders can be a matter of some debate, either in regard to specific facts about the cause of a condition or in regard to the general understanding of brain and mind.
See also
Central nervous system
European Brain Council
Human brain
Mental disorder
Neuroplasticity
Peripheral nervous system
Proctalgia fugax
Hypokalemic sensory overstimulation
References
External links
Disorder Index of the National Institute of Neurological Disorders and Stroke | 0.780363 | 0.998314 | 0.779047 |
Human iron metabolism | Human iron metabolism is the set of chemical reactions that maintain human homeostasis of iron at the systemic and cellular level. Iron is both necessary to the body and potentially toxic. Controlling iron levels in the body is a critically important part of many aspects of human health and disease. Hematologists have been especially interested in systemic iron metabolism, because iron is essential for red blood cells, where most of the human body's iron is contained. Understanding iron metabolism is also important for understanding diseases of iron overload, such as hereditary hemochromatosis, and iron deficiency, such as iron-deficiency anemia.
Importance of iron regulation
Iron is an essential bioelement for most forms of life, from bacteria to mammals. Its importance lies in its ability to mediate electron transfer. In the ferrous state (Fe2+), iron acts as an electron donor, while in the ferric state (Fe3+) it acts as an acceptor. Thus, iron plays a vital role in the catalysis of enzymatic reactions that involve electron transfer (reduction and oxidation, redox). Proteins can contain iron as part of different cofactors, such as iron–sulfur clusters (Fe-S) and heme groups, both of which are assembled in mitochondria.
Cellular respiration
Human cells require iron in order to obtain energy as ATP from a multi-step process known as cellular respiration, more specifically from oxidative phosphorylation at the mitochondrial cristae. Iron is present in the iron–sulfur cluster and heme groups of the electron transport chain proteins that generate a proton gradient that allows ATP synthase to synthesize ATP (chemiosmosis).
Heme groups are part of hemoglobin, a protein found in red blood cells that serves to transport oxygen from the lungs to other tissues. Heme groups are also present in myoglobin to store and diffuse oxygen in muscle cells.
Oxygen transport
The human body needs iron for oxygen transport. Oxygen (O2) is required for the functioning and survival of nearly all cell types. Oxygen is transported from the lungs to the rest of the body bound to the heme group of hemoglobin in red blood cells. In muscle cells, iron binds oxygen to myoglobin, which regulates its release.
Toxicity
Iron is also potentially toxic. Its ability to donate and accept electrons means that it can catalyze the conversion of hydrogen peroxide into free radicals. Free radicals can cause damage to a wide variety of cellular structures, and ultimately kill the cell.
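The underlying chemistry is the Fenton reaction, written here in its simplest textbook form:

Fe2+ + H2O2 → Fe3+ + •OH + OH−
Fe3+ + H2O2 → Fe2+ + •OOH + H+

The hydroxyl radical (•OH) produced in the first step is highly reactive and damages lipids, proteins and DNA, while the second, "Fenton-like" step regenerates ferrous iron, so even small amounts of labile iron can catalytically sustain radical production.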
Iron bound to proteins or cofactors such as heme is safe. Also, there are virtually no truly free iron ions in the cell, since they readily form complexes with organic molecules. However, some of the intracellular iron is bound to low-affinity complexes, and is termed labile iron or "free" iron. Iron in such complexes can cause damage as described above.
To prevent that kind of damage, all life forms that use iron bind the iron atoms to proteins. This binding allows cells to benefit from iron while also limiting its ability to do harm. Typical intracellular labile iron concentrations in bacteria are 10–20 micromolar, though they can be 10-fold higher in anaerobic environments, where free radicals and reactive oxygen species are scarcer. In mammalian cells, intracellular labile iron concentrations are typically smaller than 1 micromolar, less than 5 percent of total cellular iron.
Bacterial protection
In response to a systemic bacterial infection, the immune system initiates a process known as "iron withholding". If bacteria are to survive, then they must obtain iron from their environment. Disease-causing bacteria do this in many ways, including releasing iron-binding molecules called siderophores and then reabsorbing them to recover iron, or scavenging iron from hemoglobin and transferrin. The harder the bacteria have to work to get iron, the greater the metabolic price they must pay. That means that iron-deprived bacteria reproduce more slowly. So, control of iron levels appears to be an important defense against many bacterial infections. Certain bacterial species have developed strategies to circumvent this defense: the bacteria that cause tuberculosis can reside within macrophages, which present an iron-rich environment, and Borrelia burgdorferi uses manganese in place of iron. People with increased amounts of iron, as, for example, in hemochromatosis, are more susceptible to some bacterial infections.
Although this mechanism is an elegant response to short-term bacterial infection, it can cause problems when it goes on so long that the body is deprived of needed iron for red cell production. Inflammatory cytokines stimulate the liver to produce the iron metabolism regulator protein hepcidin, that reduces available iron. If hepcidin levels increase because of non-bacterial sources of inflammation, like viral infection, cancer, auto-immune diseases or other chronic diseases, then the anemia of chronic disease may result. In this case, iron withholding actually impairs health by preventing the manufacture of enough hemoglobin-containing red blood cells.
Body iron stores
Most well-nourished people in industrialized countries have 4 to 5 grams of iron in their bodies (~38 mg iron/kg body weight for women and ~50 mg iron/kg body weight for men). Of this, about 2.5 grams is contained in the hemoglobin needed to carry oxygen through the blood (around 0.5 mg of iron per mL of blood), and most of the rest (approximately 2 grams in adult men, and somewhat less in women of childbearing age) is contained in ferritin complexes that are present in all cells, but most common in bone marrow, liver, and spleen. The liver's stores of ferritin are the primary physiologic source of reserve iron in the body. The reserves of iron in industrialized countries tend to be lower in children and women of child-bearing age than in men and in the elderly. Women who must use their stores to compensate for iron lost through menstruation, pregnancy or lactation have lower non-hemoglobin body stores, which may consist of around 500 mg, or even less.
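The hemoglobin figure can be cross-checked with a simple back-of-the-envelope calculation using the approximate values in the paragraph above (and a typical blood volume of about 5 liters, an assumption not stated in the text):

5 L of blood × ~0.5 mg of iron per mL ≈ 2,500 mg ≈ 2.5 g

which is consistent with roughly half of a 4–5 g total body iron content residing in circulating hemoglobin.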
Of the body's total iron content, about 400 mg is devoted to cellular proteins that use iron for important cellular processes like storing oxygen (myoglobin) or performing energy-producing redox reactions (cytochromes). A relatively small amount (3–4 mg) circulates through the plasma, bound to transferrin. Because of its toxicity, free soluble iron is kept in low concentration in the body.
Iron deficiency first affects the storage of iron in the body, and depletion of these stores is thought to be relatively asymptomatic, although some vague and non-specific symptoms have been associated with it. Since iron is primarily required for hemoglobin, iron deficiency anemia is the primary clinical manifestation of iron deficiency. Iron-deficient people will suffer or die from organ damage well before their cells run out of the iron needed for intracellular processes like electron transport.
Macrophages of the reticuloendothelial system store iron as part of the process of breaking down and processing hemoglobin from engulfed red blood cells. Iron is also stored as a pigment called hemosiderin, which is an ill-defined deposit of protein and iron, created by macrophages where excess iron is present, either locally or systemically, e.g., among people with iron overload due to frequent blood cell destruction and the necessary transfusions their condition calls for. If systemic iron overload is corrected, over time the hemosiderin is slowly resorbed by the macrophages.
Mechanisms of iron regulation
Human iron homeostasis is regulated at two different levels. Systemic iron levels are balanced by the controlled absorption of dietary iron by enterocytes, the cells that line the interior of the intestines, and the uncontrolled loss of iron from epithelial sloughing, sweat, injuries and blood loss. In addition, systemic iron is continuously recycled. Cellular iron levels are controlled differently by different cell types due to the expression of particular iron regulatory and transport proteins.
Systemic iron regulation
Dietary iron uptake
The absorption of dietary iron is a variable and dynamic process. The amount of iron absorbed compared to the amount ingested is typically low, but may range from 5% to as much as 35% depending on circumstances and type of iron. The efficiency with which iron is absorbed varies depending on the source. Generally, the best-absorbed forms of iron come from animal products. Absorption of dietary iron in iron salt form (as in most supplements) varies somewhat according to the body's need for iron, and is usually between 10% and 20% of iron intake. Absorption of iron from animal products, and some plant products, is in the form of heme iron, and is more efficient, allowing absorption of from 15% to 35% of intake. Heme iron in animals is from blood and heme-containing proteins in meat and mitochondria, whereas in plants, heme iron is present in mitochondria in all cells that use oxygen for respiration.
Like most mineral nutrients, the majority of the iron absorbed from digested food or supplements is absorbed in the duodenum by enterocytes of the duodenal lining. These cells have special molecules that allow them to move iron into the body. To be absorbed, dietary iron must either be taken up as part of a protein, such as heme protein, or be in its ferrous (Fe2+) form. A ferric reductase enzyme on the enterocytes' brush border, duodenal cytochrome B (Dcytb), reduces ferric Fe3+ to Fe2+. A protein called divalent metal transporter 1 (DMT1), which can transport several divalent metals across the plasma membrane, then transports iron across the enterocyte's cell membrane into the cell. If the iron is bound to heme, it is instead transported across the apical membrane by heme carrier protein 1 (HCP1). Heme is then catabolized by microsomal heme oxygenase into biliverdin, releasing Fe2+.
These intestinal lining cells can then either store the iron as ferritin, which is accomplished by Fe2+ binding to apoferritin (in which case the iron will leave the body when the cell dies and is sloughed off into feces), or the cell can release it into the body via the only known iron exporter in mammals, ferroportin. Hephaestin, a ferroxidase that can oxidize Fe2+ to Fe3+ and is found mainly in the small intestine, helps ferroportin transfer iron across the basolateral end of the intestine cells. Upon release into the bloodstream, Fe3+ binds transferrin and circulates to tissues. In contrast, ferroportin is post-translationally repressed by hepcidin, a 25-amino acid peptide hormone. The body regulates iron levels by regulating each of these steps. For instance, enterocytes synthesize more Dcytb, DMT1 and ferroportin in response to iron deficiency anemia. Iron absorption from diet is enhanced in the presence of vitamin C and diminished by excess calcium, zinc, or manganese.
The human body's rate of iron absorption appears to respond to a variety of interdependent factors, including total iron stores, the extent to which the bone marrow is producing new red blood cells, the concentration of hemoglobin in the blood, and the oxygen content of the blood. The body also absorbs less iron during times of inflammation, in order to deprive bacteria of iron. Recent discoveries demonstrate that hepcidin regulation of ferroportin is responsible for the syndrome of anemia of chronic disease.
Iron recycling and loss
Most of the iron in the body is hoarded and recycled by the reticuloendothelial system, which breaks down aged red blood cells. In contrast to iron uptake and recycling, there is no physiologic regulatory mechanism for excreting iron. People lose a small but steady amount by gastrointestinal blood loss, sweating and by shedding cells of the skin and the mucosal lining of the gastrointestinal tract. The total amount of loss for healthy people in the developed world amounts to an estimated average of 1 mg a day for men, and 1.5–2 mg a day for women with regular menstrual periods. People with gastrointestinal parasitic infections, more commonly found in developing countries, often lose more. Those who cannot regulate absorption well enough get disorders of iron overload. In these diseases, the toxicity of iron starts overwhelming the body's ability to bind and store it.
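The steady state can be illustrated with a rough balance calculation, using the absorption efficiencies quoted earlier in this article (an illustrative sketch, not a dietary recommendation):

required dietary iron ≈ daily loss ÷ fractional absorption ≈ 1 mg ÷ 0.10–0.20 ≈ 5–10 mg per day

for a man losing about 1 mg per day, and roughly twice that for a woman losing 1.5–2 mg per day through menstruation; this is broadly in line with typical recommended intakes on the order of 8 mg/day for adult men and 18 mg/day for premenopausal women.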
Cellular iron regulation
Iron import
Most cell types take up iron primarily through receptor-mediated endocytosis via transferrin receptor 1 (TFR1), transferrin receptor 2 (TFR2) and GAPDH. TFR1 has a 30-fold higher affinity for transferrin-bound iron than TFR2 and thus is the main player in this process. The higher order multifunctional glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH) also acts as a transferrin receptor. Transferrin-bound ferric iron is recognized by these transferrin receptors, triggering a conformational change that causes endocytosis. Iron then enters the cytoplasm from the endosome via importer DMT1 after being reduced to its ferrous state by a STEAP family reductase.
Alternatively, iron can enter the cell directly via plasma membrane divalent cation importers such as DMT1 and ZIP14 (Zrt-Irt-like protein 14). Again, iron enters the cytoplasm in the ferrous state after being reduced in the extracellular space by a reductase such as STEAP2, STEAP3 (in red blood cells), Dcytb (in enterocytes) and SDR2.
Iron import in some cancer cells
Iron can also enter cells via CD44 in complexes bound to hyaluronic acid during epithelial–mesenchymal transition (EMT). In this process, epithelial cells transform into mesenchymal cells with detachment from the basement membrane, to which they’re normally anchored, paving the way for the newly differentiated motile mesenchymal cells to begin migration away from the epithelial layer.
While EMT plays a crucial role in physiological processes like implantation, where it enables the embryo to invade the endometrium to facilitate placental attachment, its dysregulation can also fuel the malignant spread of tumors empowering them to invade surrounding tissues and establish distant colonies (metastasis).
Malignant cells often exhibit a heightened demand for iron, fueling their transition towards a more invasive mesenchymal state. This iron is necessary for the expression of mesenchymal genes, like those encoding transforming growth factor beta (TGF-β), crucial for EMT. Notably, iron’s unique ability to catalyze protein and DNA demethylation plays a vital role in this gene expression process.
Conventional iron uptake pathways, such as those using the transferrin receptor 1 (TfR1), often prove insufficient to meet these elevated iron demands in cancer cells. As a result, various cytokines and growth factors trigger the upregulation of CD44, a surface molecule capable of internalizing iron bound to the hyaluronan complex. This alternative pathway, relying on CD44-mediated endocytosis, becomes the dominant iron uptake mechanism compared to the traditional TfR1-dependent route.
The labile iron pool
In the cytoplasm, ferrous iron is found in a soluble, chelatable state which constitutes the labile iron pool (~0.001 mM). In this pool, iron is thought to be bound to low-mass compounds such as peptides, carboxylates and phosphates, although some might be in a free, hydrated form (aqua ions). Alternatively, iron ions might be bound to specialized proteins known as metallochaperones. Specifically, poly-r(C)-binding proteins PCBP1 and PCBP2 appear to mediate transfer of free iron to ferritin (for storage) and non-heme iron enzymes (for use in catalysis). The labile iron pool is potentially toxic due to iron's ability to generate reactive oxygen species. Iron from this pool can be taken up by mitochondria via mitoferrin to synthesize Fe-S clusters and heme groups.
The storage iron pool
Iron can be stored in ferritin as ferric iron due to the ferroxidase activity of the ferritin heavy chain. Dysfunctional ferritin may accumulate as hemosiderin, which can be problematic in cases of iron overload. The ferritin storage iron pool is much larger than the labile iron pool, ranging in concentration from 0.7 mM to 3.6 mM.
Iron export
Iron export occurs in a variety of cell types, including neurons, red blood cells, macrophages and enterocytes. The latter two are especially important since systemic iron levels depend upon them. There is only one known iron exporter, ferroportin. It transports ferrous iron out of the cell, generally aided by ceruloplasmin and/or hephaestin (mostly in enterocytes), which oxidize iron to its ferric state so it can bind transferrin in the extracellular medium. Hepcidin causes the internalization of ferroportin, decreasing iron export. In addition, hepcidin seems to downregulate both TFR1 and DMT1 through an unknown mechanism. Another player assisting ferroportin in effecting cellular iron export is GAPDH. A specific post-translationally modified isoform of GAPDH is recruited to the surface of iron-loaded cells where it recruits apo-transferrin in close proximity to ferroportin so as to rapidly chelate the iron extruded.
The expression of hepcidin, which only occurs in certain cell types such as hepatocytes, is tightly controlled at the transcriptional level and it represents the link between cellular and systemic iron homeostasis due to hepcidin's role as "gatekeeper" of iron release from enterocytes into the rest of the body. Erythroblasts produce erythroferrone, a hormone which inhibits hepcidin and so increases the availability of iron needed for hemoglobin synthesis.
Translational control of cellular iron
Although some control exists at the transcriptional level, the regulation of cellular iron levels is ultimately controlled at the translational level by iron-responsive element-binding proteins IRP1 and especially IRP2. When iron levels are low, these proteins are able to bind to iron-responsive elements (IREs). IREs are stem loop structures in the untranslated regions (UTRs) of mRNA.
Both ferritin and ferroportin contain an IRE in their 5' UTRs, so that under iron deficiency their translation is repressed by IRP2, preventing the unnecessary synthesis of storage protein and the detrimental export of iron. In contrast, TFR1 and some DMT1 variants contain 3' UTR IREs, which bind IRP2 under iron deficiency, stabilizing the mRNA, which guarantees the synthesis of iron importers.
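The logic described in these two paragraphs can be summarised in a short sketch. This is purely an illustrative toy model of the rule of thumb above (under iron deficiency, IRP2 bound to a 5' UTR IRE represses translation, while IRP2 bound to a 3' UTR IRE stabilises the mRNA); it is not a quantitative model or real software.

```python
# Toy summary of the IRE/IRP rule described above (illustrative only).
# Under low iron, IRP2 binds IREs: a 5' UTR IRE blocks translation,
# while a 3' UTR IRE stabilises the mRNA so more importer is made.

TRANSCRIPTS = {
    "ferritin": "5'UTR",               # storage protein
    "ferroportin": "5'UTR",            # exporter
    "TFR1": "3'UTR",                   # importer
    "DMT1 (some variants)": "3'UTR",   # importer
}

def response_to_low_iron(ire_location: str) -> str:
    if ire_location == "5'UTR":
        return "translation repressed (IRP2 bound)"
    if ire_location == "3'UTR":
        return "mRNA stabilised, translation proceeds (IRP2 bound)"
    return "no IRE: unaffected by IRP2"

for name, ire in TRANSCRIPTS.items():
    print(f"{name}: {response_to_low_iron(ire)}")
```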
Pathology
Iron deficiency
Functional or actual iron deficiency can result from a variety of causes. These causes can be grouped into several categories:
Increased demand for iron, which the diet cannot accommodate.
Increased loss of iron (usually through loss of blood).
Nutritional deficiency. This can result from a lack of dietary iron or from consumption of foods that inhibit iron absorption. Inhibition of absorption has been observed with phytates in bran, calcium from supplements or dairy products, and tannins from tea, although in all three of these studies the effect was small, and the authors of the studies on bran and tea note that the effect will probably only have a noticeable impact when most iron is obtained from vegetable sources.
Acid-reducing medications: Acid-reducing medications reduce the absorption of dietary iron. These medications are commonly used for gastritis, reflux disease, and ulcers. Proton pump inhibitors (PPIs), H2 antihistamines, and antacids all reduce iron absorption.
Damage to the intestinal lining. Examples of causes of this kind of damage include surgery involving the duodenum or diseases like Crohn's or celiac sprue which severely reduce the surface area available for absorption. Helicobacter pylori infections also reduce the availability of iron.
Inflammation leading to hepcidin-induced restriction on iron release from enterocytes (see above).
Iron deficiency is also a common occurrence in pregnant women and in growing adolescents due to poor diets.
Acute blood loss or acute liver cirrhosis creates a lack of transferrin, thereby causing iron to be lost from the body.
Iron overload
The body is able to substantially reduce the amount of iron it absorbs across the mucosa. It does not seem to be able to entirely shut down the iron transport process. Also, in situations where excess iron damages the intestinal lining itself (for instance, when children eat a large quantity of iron tablets produced for adult consumption), even more iron can enter the bloodstream and cause a potentially deadly syndrome of iron overload. Large amounts of free iron in the circulation will cause damage to critical cells in the liver, the heart and other metabolically active organs.
Iron toxicity results when the amount of circulating iron exceeds the amount of transferrin available to bind it, but the body is able to vigorously regulate its iron uptake. Thus, iron toxicity from ingestion is usually the result of extraordinary circumstances like iron tablet over-consumption rather than variations in diet. This type of acute toxicity from iron ingestion causes severe mucosal damage in the gastrointestinal tract, among other problems.
Excess iron has been linked to higher rates of disease and mortality. For example, breast cancer patients with low ferroportin expression (leading to higher concentrations of intracellular iron) survive for a shorter period of time on average, while high ferroportin expression predicts 90% 10-year survival in breast cancer patients. Similarly, genetic variations in iron transporter genes known to increase serum iron levels also reduce lifespan and the average number of years spent in good health. It has been suggested that mutations that increase iron absorption, such as the ones responsible for hemochromatosis (see below), were selected for during Neolithic times as they provided a selective advantage against iron-deficiency anemia. The increase in systemic iron levels becomes pathological in old age, which supports the notion that antagonistic pleiotropy or "hyperfunction" drives human aging.
Chronic iron toxicity is usually the result of more chronic iron overload syndromes associated with genetic diseases, repeated transfusions or other causes. In such cases the iron stores of an adult may reach 50 grams (10 times normal total body iron) or more. The most common diseases of iron overload are hereditary hemochromatosis (HH), caused by mutations in the HFE gene, and the more severe disease juvenile hemochromatosis (JH), caused by mutations in either hemojuvelin (HJV) or hepcidin (HAMP). The exact mechanisms of most of the various forms of adult hemochromatosis, which make up most of the genetic iron overload disorders, remain unsolved. So, while researchers have been able to identify genetic mutations causing several adult variants of hemochromatosis, they now must turn their attention to the normal function of these mutated genes.
See also
Iron in biology
References
External links
A comprehensive NIH factsheet on iron and nutrition
Iron Disorders Institute: A nonprofit group concerned with iron disorders; site has helpful links and information on iron-related medical disorders.
An interactive medical learning portal on iron metabolism
Information about iron outside the body
Hematology
Human homeostasis
Biology and pharmacology of chemical elements
Human biology
Human biology is an interdisciplinary area of academic study that examines humans through the influences and interplay of many diverse fields such as genetics, evolution, physiology, anatomy, epidemiology, anthropology, ecology, nutrition, population genetics, and sociocultural influences. It is closely related to the biomedical sciences, biological anthropology and other biological fields tying in various aspects of human functionality. It was not until the 20th century that the biogerontologist Raymond Pearl, founder of the journal Human Biology, used the term "human biology" to describe a distinct field of study apart from biology.
It is also a portmanteau term that describes all biological aspects of the human body, typically using the human body as a type organism for Mammalia, and in that context it is the basis for many undergraduate University degrees and modules.
Most aspects of human biology are identical or very similar to general mammalian biology. In particular, and as examples, humans:
maintain their body temperature
have an internal skeleton
have a circulatory system
have a nervous system to provide sensory information and operate and coordinate muscular activity.
have a reproductive system in which they bear live young and produce milk.
have an endocrine system and produce and eliminate hormones and other bio-chemical signalling agents
have a respiratory system where air is inhaled into lungs and oxygen is used to produce energy.
have an immune system to protect against disease
excrete waste as urine and feces.
History
The study of integrated human biology started in the 1920s, sparked by Charles Darwin's theories, which were re-conceptualized by many scientists. Human attributes such as child growth and genetics came under investigation, and human biology thus emerged as a distinct field of study.
Typical human attributes
The key aspects of human biology are those ways in which humans are substantially different from other mammals.
Humans have a very large brain in a head that is very large for the size of the animal. This large brain has enabled a range of unique attributes including the development of complex languages and the ability to make and use a complex range of tools.
The upright stance and bipedal locomotion is not unique to humans but humans are the only species to rely almost exclusively on this mode of locomotion. This has resulted in significant changes in the structure of the skeleton including the articulation of the pelvis and the femur and in the articulation of the head.
In comparison with most other mammals, humans are very long-lived, with an average age at death in the developed world of nearly 80 years. Humans also have the longest childhood of any mammal, with sexual maturity taking 12 to 16 years on average to be reached.
Humans lack fur. Although there is a residual covering of fine hair, which may be more developed in some people, and localised hair covering on the head, axillary and pubic regions, in terms of protection from cold, humans are almost naked. The reason for this development is still much debated.
The human eye can see objects in colour but is not well adapted to low light conditions. The senses of smell and taste are present but are relatively inferior to those of a wide range of other mammals. Human hearing is efficient but lacks the acuity of some other mammals. Similarly, the human sense of touch is well developed, especially in the hands where dextrous tasks are performed, but the sensitivity is still significantly less than in other animals, particularly those equipped with sensory bristles such as cats.
Scientific investigation
As a scientific discipline, human biology seeks to understand humans as living beings and to promote research on them. It makes use of various scientific methods, such as experiments and observations, to detail the biochemical and biophysical foundations of human life, and to describe and model the underlying processes. As a basic science, it provides the knowledge base for medicine. Its sub-disciplines include anatomy, cytology, histology and morphology.
Medicine
The capabilities of the human brain and human dexterity in making and using tools have enabled humans to understand their own biology through scientific experiment, including dissection, autopsy and prophylactic medicine, which has, in turn, enabled humans to extend their life-span by understanding and mitigating the effects of diseases.
Understanding human biology has enabled and fostered a wider understanding of mammalian biology and by extension, the biology of all living organisms.
Nutrition
Human nutrition is typical of mammalian omnivorous nutrition requiring a balanced input of carbohydrates, fats, proteins, vitamins, and minerals. However, the human diet has a few very specific requirements. These include two essential fatty acids, alpha-linolenic acid and linoleic acid, without which life is not sustainable in the medium to long term. All other fatty acids can be synthesized from dietary fats. Similarly, human life requires a range of vitamins to be present in food and if these are missing or are supplied at unacceptably low levels, metabolic disorders result which can end in death. The human metabolism is similar to most other mammals except for the need to have an intake of Vitamin C to prevent scurvy and other deficiency diseases. Unusually amongst mammals, a human can synthesize Vitamin D3 using natural UV light from the sun on the skin. This capability may be widespread in the mammalian world but few other mammals share the almost naked skin of humans. The darker the human's skin, the less it can manufacture Vitamin D3.
Other organisms
Human biology also encompasses all those organisms that live on or in the human body. Such organisms range from parasitic insects such as fleas and ticks, and parasitic helminths such as liver flukes, through to bacterial and viral pathogens. Many of the organisms associated with human biology belong to the specialised microbiome of the large intestine and the biotic flora of the skin and of the pharyngeal and nasal regions. Many of these biotic assemblages help protect humans from harm and assist in digestion, and are now known to have complex effects on mood and well-being.
Social behaviour
Humans in all civilizations are social animals and use their language skills and tool making skills to communicate.
These communication skills enable civilizations to grow and allow for the production of art, literature and music, and for the development of technology. All of these are wholly dependent on the human biological specialisms.
The deployment of these skills has allowed the human race to dominate the terrestrial biome to the detriment of most of the other species.
References
External links
Human Biology Association
Biology Dictionary
Humans
Exercise physiology
Exercise physiology is the physiology of physical exercise. It is one of the allied health professions, and involves the study of the acute responses and chronic adaptations to exercise. Exercise physiologists are the highest qualified exercise professionals and utilise education, lifestyle intervention and specific forms of exercise to rehabilitate and manage acute and chronic injuries and conditions.
Understanding the effect of exercise involves studying specific changes in muscular, cardiovascular, and neurohumoral systems that lead to changes in functional capacity and strength due to endurance training or strength training. The effect of training on the body has been defined as the reaction to the adaptive responses of the body arising from exercise or as "an elevation of metabolism produced by exercise".
Exercise physiologists study the effect of exercise on pathology, and the mechanisms by which exercise can reduce or reverse disease progression.
History
British physiologist Archibald Hill introduced the concepts of maximal oxygen uptake and oxygen debt in 1922. Hill and German physician Otto Meyerhof shared the 1922 Nobel Prize in Physiology or Medicine for their independent work related to muscle energy metabolism. Building on this work, scientists began measuring oxygen consumption during exercise. Notable contributions were made by Henry Taylor at the University of Minnesota, Scandinavian scientists Per-Olof Åstrand and Bengt Saltin in the 1950s and 60s, the Harvard Fatigue Laboratory, German universities, and the Copenhagen Muscle Research Centre among others.
In some countries, exercise physiologists are recognised as primary health care providers. Accredited Exercise Physiologists (AEPs) are university-trained professionals who prescribe exercise-based interventions to treat various conditions, using dose–response prescriptions specific to each individual.
Energy expenditure
Humans have a high capacity to expend energy for many hours during sustained exertion. For example, one individual cycling over 50 consecutive days expended a total of 1,145 MJ (273,850 kcal) with an average power output of 173.8 W.
Skeletal muscle burns 90 mg (0.5 mmol) of glucose each minute during continuous activity (such as when repetitively extending the human knee), generating ≈24 W of mechanical power and, since muscle energy conversion is only 22–26% efficient, ≈76 W of heat. Resting skeletal muscle has a basal metabolic rate (resting energy consumption) of 0.63 W/kg, making a 160-fold difference between the energy consumption of inactive and active muscles. For short-duration muscular exertion, energy expenditure can be far greater: an adult human male jumping up from a squat can mechanically generate 314 W/kg. Such rapid movement can generate twice this amount in nonhuman animals such as bonobos, and in some small lizards.
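A quick back-of-the-envelope check of these figures, assuming the quoted efficiency range applies to the ≈24 W mechanical output (the rounding is illustrative):

```python
# Rough consistency check of the figures above (illustrative arithmetic only).
mechanical_power_w = 24.0          # ≈24 W of mechanical power, from the text
for efficiency in (0.22, 0.26):    # quoted efficiency range of muscle energy conversion
    total_chemical_power_w = mechanical_power_w / efficiency
    heat_power_w = total_chemical_power_w - mechanical_power_w
    print(f"efficiency {efficiency:.0%}: total ≈{total_chemical_power_w:.0f} W, "
          f"heat ≈{heat_power_w:.0f} W")
# At ~24% efficiency the heat output is ≈76 W, matching the value in the text.
```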
This energy expenditure is very large compared to the basal resting metabolic rate of the adult human body. This rate varies somewhat with size, gender and age but is typically between 45 W and 85 W.
Total energy expenditure (TEE) due to muscular expended energy is much higher and depends upon the average level of physical work and exercise done during the day. Thus exercise, particularly if sustained for very long periods, dominates the energy metabolism of the body. Physical activity energy expenditure correlates strongly with the gender, age, weight, heart rate, and VO2 max of an individual, during physical activity.
Metabolic changes
Rapid energy sources
Energy needed to perform short lasting, high intensity bursts of activity is derived from anaerobic metabolism within the cytosol of muscle cells, as opposed to aerobic respiration which utilizes oxygen, is sustainable, and occurs in the mitochondria. The quick energy sources consist of the phosphocreatine (PCr) system, fast glycolysis, and adenylate kinase. All of these systems re-synthesize adenosine triphosphate (ATP), which is the universal energy source in all cells. The most rapid source, but the most readily depleted of the above sources is the PCr system which utilizes the enzyme creatine kinase. This enzyme catalyzes a reaction that combines phosphocreatine and adenosine diphosphate (ADP) into ATP and creatine. This resource is short lasting because oxygen is required for the resynthesis of phosphocreatine via mitochondrial creatine kinase. Therefore, under anaerobic conditions, this substrate is finite and only lasts between approximately 10 to 30 seconds of high intensity work. Fast glycolysis, however, can function for approximately 2 minutes prior to fatigue, and predominately uses intracellular glycogen as a substrate. Glycogen is broken down rapidly via glycogen phosphorylase into individual glucose units during intense exercise. Glucose is then oxidized to pyruvate and under anaerobic conditions is reduced to lactic acid. This reaction oxidizes NADH to NAD, thereby releasing a hydrogen ion, promoting acidosis. For this reason, fast glycolysis can not be sustained for long periods of time.
Plasma glucose
Plasma glucose is said to be maintained when there is an equal rate of glucose appearance (entry into the blood) and glucose disposal (removal from the blood). In the healthy individual, the rates of appearance and disposal are essentially equal during exercise of moderate intensity and duration; however, prolonged exercise or sufficiently intense exercise can result in an imbalance leaning towards a higher rate of disposal than appearance, at which point glucose levels fall, producing the onset of fatigue. Rate of glucose appearance is dictated by the amount of glucose being absorbed at the gut as well as liver (hepatic) glucose output. Although glucose absorption from the gut is not typically a source of glucose appearance during exercise, the liver is capable of catabolizing stored glycogen (glycogenolysis) as well as synthesizing new glucose from specific reduced carbon molecules (glycerol, pyruvate, and lactate) in a process called gluconeogenesis. The ability of the liver to release glucose into the blood from glycogenolysis is unique, since skeletal muscle, the other major glycogen reservoir, is incapable of doing so. Unlike skeletal muscle, liver cells contain the enzyme glucose-6-phosphatase, which removes a phosphate group from glucose-6-P to release free glucose. In order for glucose to exit a cell membrane, the removal of this phosphate group is essential. Although gluconeogenesis is an important component of hepatic glucose output, it alone cannot sustain exercise. For this reason, when glycogen stores are depleted during exercise, glucose levels fall and fatigue sets in. Glucose disposal, the other side of the equation, is controlled by the uptake of glucose by the working skeletal muscles. During exercise, despite decreased insulin concentrations, muscle increases GLUT4 translocation and glucose uptake. The mechanism for increased GLUT4 translocation is an area of ongoing research.
Glucose control
As mentioned above, insulin secretion is reduced during exercise, and does not play a major role in maintaining normal blood glucose concentration during exercise, but its counter-regulatory hormones appear in increasing concentrations. Principal among these are glucagon, epinephrine, and growth hormone. All of these hormones stimulate liver (hepatic) glucose output, among other functions. For instance, both epinephrine and growth hormone also stimulate adipocyte lipase, which increases non-esterified fatty acid (NEFA) release. By oxidizing fatty acids, this spares glucose utilization and helps to maintain blood sugar level during exercise.
Exercise for diabetes
Exercise is a particularly potent tool for glucose control in those who have diabetes mellitus. In a situation of elevated blood glucose (hyperglycemia), moderate exercise can induce greater glucose disposal than appearance, thereby decreasing total plasma glucose concentrations. As stated above, the mechanism for this glucose disposal is independent of insulin, which makes it particularly well-suited for people with diabetes. In addition, there appears to be an increase in sensitivity to insulin for approximately 12–24 hours post-exercise. This is particularly useful for those who have type II diabetes and are producing sufficient insulin but demonstrate peripheral resistance to insulin signaling. However, during extreme hyperglycemic episodes, people with diabetes should avoid exercise due to potential complications associated with ketoacidosis. Exercise could exacerbate ketoacidosis by increasing ketone synthesis in response to increased circulating NEFA's.
Type II diabetes is also intricately linked to obesity, and there may be a connection between type II diabetes and how fat is stored within pancreatic, muscle, and liver cells. Likely due to this connection, weight loss from both exercise and diet tends to increase insulin sensitivity in the majority of people. In some people, this effect can be particularly potent and can result in normal glucose control. Although nobody is technically cured of diabetes, individuals can live normal lives without the fear of diabetic complications; however, regain of weight would assuredly result in diabetes signs and symptoms.
Oxygen
Vigorous physical activity (such as exercise or hard labor) increases the body's demand for oxygen. The first-line physiologic response to this demand is an increase in heart rate, breathing rate, and depth of breathing.
Oxygen consumption (VO2) during exercise is best described by the Fick equation: VO2 = Q × (a−vO2 difference), which states that the amount of oxygen consumed is equal to cardiac output (Q) multiplied by the difference between arterial and venous oxygen concentrations. More simply put, oxygen consumption is dictated by the quantity of blood distributed by the heart as well as the working muscle's ability to take up the oxygen within that blood; however, this is a bit of an oversimplification. Although cardiac output is thought to be the limiting factor of this relationship in healthy individuals, it is not the only determinant of VO2 max. That is, factors such as the ability of the lung to oxygenate the blood must also be considered. Various pathologies and anomalies cause conditions such as diffusion limitation, ventilation/perfusion mismatch, and pulmonary shunts that can limit oxygenation of the blood and therefore oxygen distribution. In addition, the oxygen-carrying capacity of the blood is also an important determinant of the equation. Oxygen-carrying capacity is often the target of ergogenic aids used in endurance sports to increase the volume percentage of red blood cells (hematocrit), such as through blood doping or the use of erythropoietin (EPO). Furthermore, peripheral oxygen uptake is reliant on a rerouting of blood flow from relatively inactive viscera to the working skeletal muscles, and within the skeletal muscle, the capillary-to-muscle-fiber ratio influences oxygen extraction.
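A minimal numerical illustration of the Fick equation follows; the cardiac output and arteriovenous oxygen difference values are assumed, typical example numbers rather than figures taken from the text.

```python
# Fick equation: VO2 = Q x (a-v O2 difference).
# Example values (assumed, typical): Q in L/min, a-v difference in mL O2 per 100 mL of blood.

def vo2_ml_per_min(cardiac_output_l_per_min: float, avo2_diff_ml_per_100ml: float) -> float:
    """Oxygen consumption in mL O2/min from cardiac output and the a-v O2 difference."""
    # L/min * 10 converts to dL/min; multiplying by mL O2 per dL gives mL O2/min.
    return cardiac_output_l_per_min * 10 * avo2_diff_ml_per_100ml

print(vo2_ml_per_min(5, 5))    # ≈250 mL O2/min, a typical resting value
print(vo2_ml_per_min(25, 16))  # ≈4000 mL O2/min, an assumed heavy-exercise example
```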
Dehydration
Dehydration refers both to hypohydration (dehydration induced prior to exercise) and to exercise-induced dehydration (dehydration that develops during exercise). The latter reduces aerobic endurance performance and results in increased body temperature, heart rate, perceived exertion, and possibly increased reliance on carbohydrate as a fuel source. Although the negative effects of exercise-induced dehydration on exercise performance were clearly demonstrated in the 1940s, athletes continued to believe for years thereafter that fluid intake was not beneficial. More recently, negative effects on performance have been demonstrated with modest (<2%) dehydration, and these effects are exacerbated when the exercise is performed in a hot environment. The effects of hypohydration may vary, depending on whether it is induced through diuretics or sauna exposure, which substantially reduce plasma volume, or prior exercise, which has much less impact on plasma volume. Hypohydration reduces aerobic endurance, but its effects on muscle strength and endurance are not consistent and require further study. Intense prolonged exercise produces metabolic waste heat, and this is removed by sweat-based thermoregulation. A male marathon runner loses each hour around 0.83 L in cool weather and 1.2 L in warm (losses in females are about 68 to 73% lower). People doing heavy exercise may lose two and half times as much fluid in sweat as urine. This can have profound physiological effects. Cycling for 2 hours in the heat (35 °C) with minimal fluid intake causes body mass decline by 3 to 5%, blood volume likewise by 3 to 6%, body temperature to rise constantly, and in comparison with proper fluid intake, higher heart rates, lower stroke volumes and cardiac outputs, reduced skin blood flow, and higher systemic vascular resistance. These effects are largely eliminated by replacing 50 to 80% of the fluid lost in sweat.
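As a rough check on the body-mass figures above, the sketch below converts the quoted sweat rates into a percentage of body mass for a hypothetical 70 kg athlete over two hours; the body mass and the assumption that 1 L of sweat weighs about 1 kg are assumptions for the example, while the sweat rates come from the text.

```python
# Illustrative arithmetic: sweat loss as a fraction of body mass (assumes 1 L of sweat ≈ 1 kg).
body_mass_kg = 70.0                       # assumed example body mass
for sweat_rate_l_per_h in (0.83, 1.2):    # sweat rates quoted above (cool vs warm weather)
    loss_kg = sweat_rate_l_per_h * 2      # two hours of exercise with minimal fluid intake
    print(f"{sweat_rate_l_per_h} L/h -> {loss_kg:.1f} kg "
          f"≈ {100 * loss_kg / body_mass_kg:.1f}% of body mass")
# Roughly 2.4-3.4% over two hours, in line with the 3-5% decline quoted for cycling in the heat.
```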
Other
Plasma catecholamine concentrations increase 10-fold in whole body exercise.
Ammonia is produced by exercised skeletal muscles from ADP (the precursor of ATP) by purine nucleotide deamination and amino acid catabolism of myofibrils.
Interleukin-6 (IL-6) increases in blood circulation due to its release from working skeletal muscles. This release is reduced if glucose is taken, suggesting it is related to energy depletion stresses.
Sodium absorption is affected by the release of interleukin-6 as this can cause the secretion of arginine vasopressin which, in turn, can lead to exercise-associated dangerously low sodium levels (hyponatremia). This loss of sodium in blood plasma can result in swelling of the brain. This can be prevented by awareness of the risk of drinking excessive amounts of fluids during prolonged exercise.
Brain
At rest, the human brain receives 15% of total cardiac output, and uses 20% of the body's energy consumption. The brain is normally dependent for its high energy expenditure upon aerobic metabolism. The brain as a result is highly sensitive to failure of its oxygen supply with loss of consciousness occurring within six to seven seconds, with its EEG going flat in 23 seconds. Therefore, the brain's function would be disrupted if exercise affected its supply of oxygen and glucose.
Protecting the brain from even minor disruption is important since exercise depends upon motor control. Because humans are bipeds, motor control is needed for keeping balance. For this reason, brain energy consumption is increased during intense physical exercise due to the demands in the motor cognition needed to control the body.
Exercise Physiologists treat a range of neurological conditions including (but not limited to): Parkinson's, Alzheimer's, Traumatic Brain Injury, Spinal Cord Injury, Cerebral Palsy and mental health conditions.
Cerebral oxygen
Cerebral autoregulation usually ensures the brain has priority to cardiac output, though this is impaired slightly by exhaustive exercise. During submaximal exercise, cardiac output increases and cerebral blood flow increases beyond the brain's oxygen needs. However, this is not the case for continuous maximal exertion: "Maximal exercise is, despite the increase in capillary oxygenation [in the brain], associated with a reduced mitochondrial O2 content during whole body exercise". The autoregulation of the brain's blood supply is impaired particularly in warm environments.
Glucose
In adults, exercise depletes the plasma glucose available to the brain: short intense exercise (35 min ergometer cycling) can reduce brain glucose uptake by 32%.
At rest, energy for the adult brain is normally provided by glucose but the brain has a compensatory capacity to replace some of this with lactate. Research suggests that this can be raised, when a person rests in a brain scanner, to about 17%, with a higher percentage of 25% occurring during hypoglycemia. During intense exercise, lactate has been estimated to provide a third of the brain's energy needs. There is evidence that the brain might, however, in spite of these alternative sources of energy, still suffer an energy crisis since IL-6 (a sign of metabolic stress) is released during exercise from the brain.
Hyperthermia
Humans use sweat thermoregulation for body heat removal, particularly to remove the heat produced during exercise. Moderate dehydration as a consequence of exercise and heat is reported to impair cognition. These impairments can start after a loss of body mass greater than 1%. Cognitive impairment, particularly that due to heat and exercise, is likely to result from loss of integrity of the blood-brain barrier. Hyperthermia can also lower cerebral blood flow and raise brain temperature.
Fatigue
Intense activity
Researchers once attributed fatigue to a build-up of lactic acid in muscles. However, this is no longer believed. Rather, lactate may stop muscle fatigue by keeping muscles fully responding to nerve signals. The available oxygen and energy supply, and disturbances of muscle ion homeostasis are the main factors determining exercise performance, at least during brief very intense exercise.
Each muscle contraction involves an action potential that activates voltage sensors, and so releases Ca2+ ions from the muscle fibre's sarcoplasmic reticulum. The action potentials that cause this also require ion changes: Na+ influxes during the depolarization phase and K+ effluxes during the repolarization phase. Cl− ions also diffuse into the sarcoplasm to aid the repolarization phase. During intense muscle contraction, the ion pumps that maintain homeostasis of these ions are inactivated and this (with other ion-related disruption) causes ionic disturbances. This causes cellular membrane depolarization, inexcitability, and so muscle weakness. Ca2+ leakage from type 1 ryanodine receptor channels has also been identified with fatigue.
Endurance failure
After intense prolonged exercise, there can be a collapse in body homeostasis. Some famous examples include:
Dorando Pietri in the 1908 Summer Olympic men's marathon ran the wrong way and collapsed several times.
Jim Peters in the marathon of the 1954 Commonwealth Games staggered and collapsed several times, and though he had a five-kilometre (three-mile) lead, failed to finish. Though it was formerly believed that this was due to severe dehydration, more recent research suggests it was the combined effects upon the brain of hyperthermia, hypertonic hypernatraemia associated with dehydration, and possibly hypoglycaemia.
Gabriela Andersen-Schiess, in the women's marathon at the 1984 Summer Olympics in Los Angeles, struggled through the race's final 400 meters, stopping occasionally and showing signs of heat exhaustion. Though she fell across the finish line, she was released from medical care only two hours later.
Central governor
Tim Noakes, based on an earlier idea by the 1922 Nobel Prize in Physiology or Medicine winner Archibald Hill, has proposed the existence of a central governor. In this, the brain continuously adjusts the power output by muscles during exercise in regard to a safe level of exertion. These neural calculations factor in prior length of strenuous exercise, the planned duration of further exertion, and the present metabolic state of the body. This adjusts the number of activated skeletal muscle motor units, and is subjectively experienced as fatigue and exhaustion. The idea of a central governor rejects the earlier idea that fatigue is only caused by mechanical failure of the exercising muscles ("peripheral fatigue"). Instead, the brain models the metabolic limits of the body to ensure that whole body homeostasis is protected, in particular that the heart is guarded from hypoxia, and an emergency reserve is always maintained. The idea of the central governor has been questioned since ‘physiological catastrophes’ can and do occur, suggesting that, if it did exist, athletes (such as Dorando Pietri, Jim Peters and Gabriela Andersen-Schiess) could override it.
Other factors
Exercise fatigue has also been suggested to be affected by:
brain hyperthermia
glycogen depletion in brain cells
depletion of muscle and liver glycogen (see "hitting the wall")
reactive oxygen species impairing skeletal muscle function
reduced level of glutamate secondary to uptake of ammonia in the brain
fatigue in diaphragm and abdominal respiratory muscles limiting breathing
impaired oxygen supply to muscles
ammonia effects upon the brain
serotonin pathways in the brain
Cardiac biomarkers
Prolonged exercise such as marathons can increase cardiac biomarkers such as troponin, B-type natriuretic peptide (BNP), and ischemia-modified albumin (IMA). This can be misinterpreted by medical personnel as signs of myocardial infarction or cardiac dysfunction. In these clinical conditions, such cardiac biomarkers are produced by irreversible injury of muscles. In contrast, the processes that create them after strenuous exertion in endurance sports are reversible, with their levels returning to normal within 24 hours (further research, however, is still needed).
Human adaptations
Humans are specifically adapted to engage in prolonged strenuous muscular activity (such as efficient long distance bipedal running). This capacity for endurance running may have evolved to allow the running down of game animals by persistent slow but constant chase over many hours.
Central to the success of this is the ability of the human body to effectively remove muscle heat waste. In most animals, this is stored by allowing a temporary increase in body temperature. This allows them to escape from animals that quickly speed after them for a short duration (the way nearly all predators catch their prey). Humans, unlike other animals that catch prey, remove heat with a specialized thermoregulation based on sweat evaporation. One gram of sweat can remove 2,598 J of heat energy. Another mechanism is increased skin blood flow during exercise that allows for greater convective heat loss that is aided by our upright posture. This skin based cooling has resulted in humans acquiring an increased number of sweat glands, combined with a lack of body fur that would otherwise stop air circulation and efficient evaporation. Because humans can remove exercise heat, they can avoid the fatigue from heat exhaustion that affects animals chased in a persistent manner, and so eventually catch them.
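Combining the 2,598 J per gram figure with the sweat rates quoted earlier for a marathon runner gives a rough sense of the cooling power involved; the arithmetic below is illustrative and assumes that all of the sweat evaporates.

```python
# Illustrative arithmetic: evaporative cooling from the sweat rates quoted earlier.
HEAT_PER_GRAM_J = 2598                     # heat removed per gram of evaporated sweat (from the text)
for sweat_rate_l_per_h in (0.83, 1.2):     # marathon-runner sweat rates, cool vs warm weather
    grams_per_hour = sweat_rate_l_per_h * 1000   # assume 1 L of sweat ≈ 1000 g
    heat_removed_j_per_h = grams_per_hour * HEAT_PER_GRAM_J
    watts = heat_removed_j_per_h / 3600
    print(f"{sweat_rate_l_per_h} L/h ≈ {watts:.0f} W of evaporative cooling")
# Roughly 600-870 W, on the order of the heat produced during sustained hard exercise.
```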
Selective breeding experiments with rodents
Rodents have been specifically bred for exercise behavior or performance in several different studies. For example, laboratory rats have been bred for high or low performance on a motorized treadmill with electrical stimulation as motivation. The high-performance line of rats also exhibits increased voluntary wheel-running behavior as compared with the low-capacity line. In an experimental evolution approach, four replicate lines of laboratory mice have been bred for high levels of voluntary exercise on wheels, while four additional control lines are maintained by breeding without regard to the amount of wheel running. These selected lines of mice also show increased endurance capacity in tests of forced endurance capacity on a motorized treadmill. However, in neither selection experiment have the precise causes of fatigue during either forced or voluntary exercise been determined.
Exercise-induced muscle pain
Physical exercise may cause pain both as an immediate effect, which may result from stimulation of free nerve endings by low pH, and as delayed onset muscle soreness. The delayed soreness is fundamentally the result of ruptures within the muscle, although apparently not involving the rupture of whole muscle fibers.
Muscle pain can range from a mild soreness to a debilitating injury depending on intensity of exercise, level of training, and other factors.
There is some preliminary evidence to suggest that moderate-intensity continuous training can increase a person's pain threshold.
Education in exercise physiology
Accreditation programs exist with professional bodies in most developed countries, ensuring the quality and consistency of education. In Canada, one may obtain the professional certification title of Certified Exercise Physiologist for those working with clients (both clinical and non-clinical) in the health and fitness industry. In Australia, one may obtain the professional certification title of Accredited Exercise Physiologist (AEP) through the professional body Exercise and Sports Science Australia (ESSA). In Australia, it is common for an AEP to also have the qualification of an Accredited Exercise Scientist (AES). In the United States, the premier governing body is the American College of Sports Medicine.
An exercise physiologist's area of study may include but is not limited to biochemistry, bioenergetics, cardiopulmonary function, hematology, biomechanics, skeletal muscle physiology, neuroendocrine function, and central and peripheral nervous system function. Furthermore, exercise physiologists range from basic scientists, to clinical researchers, to clinicians, to sports trainers.
Colleges and universities offer exercise physiology as a program of study at various levels, including undergraduate and graduate degrees, certificates, and doctoral programs. The basis of exercise physiology as a major is to prepare students for a career in the field of health sciences. Such a program focuses on the scientific study of the physiological processes involved in physical or motor activity, including sensorimotor interactions, response mechanisms, and the effects of injury, disease, and disability. Instruction covers muscular and skeletal anatomy; the molecular and cellular basis of muscle contraction; fuel utilization; the neurophysiology of motor mechanics; systemic physiological responses (respiration, blood flow, endocrine secretions, and others); fatigue and exhaustion; muscle and body training; the physiology of specific exercises and activities; the physiology of injury; and the effects of disabilities and disease. Careers available with a degree in exercise physiology can include non-clinical, client-based work; strength and conditioning specialist roles; cardiopulmonary treatment; and clinical research.
In order to gauge the multiple areas of study, students are taught processes in which to follow on a client-based level. Practical and lecture teachings are instructed in the classroom and in a laboratory setting. These include:
Health and risk assessment: In order to work safely with a client, a practitioner must first know the benefits and risks associated with physical activity. Examples include knowing the specific injuries the body can experience during exercise, how to properly screen a client before their training begins, and what factors to look for that may inhibit their performance.
Exercise testing: Coordinating exercise tests in order to measure body composition, cardiorespiratory fitness, muscular strength/endurance, and flexibility. Functional tests are also used in order to gain understanding of a more specific part of the body. Once the information is gathered about a client, exercise physiologists must also be able to interpret the test data and decide what health-related outcomes have been discovered.
Exercise prescription: Forming training programs that best meet an individual's health and fitness goals. Must be able to take into account different types of exercises, the reasons/goal for a client's workout, and pre-screened assessments. Knowing how to prescribe exercises for special considerations and populations is also required. These may include age differences, pregnancy, joint diseases, obesity, pulmonary disease, etc.
Curriculum
The curriculum for exercise physiology includes biology, chemistry, and the applied sciences. The purpose of the classes selected for this major is to give students a proficient understanding of human anatomy, human physiology, and exercise physiology. Not only is a full class schedule needed to complete a degree in exercise physiology, but a minimum amount of practicum experience is also required, and internships are recommended.
See also
Bioenergetics
Excess post-exercise oxygen consumption (EPOC)
Hill's model
Physical therapy
Sports science
Sports medicine
References
External links
Athletic training
Endurance games
Evolutionary biology
Human evolution
Physiology
Strength training
Physical exercise
Circulatory system
The circulatory system is a system of organs that includes the heart, blood vessels, and blood, which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with circulatory system.
The network of blood vessels comprises the great vessels of the heart, including the large elastic arteries and large veins; other arteries; smaller arterioles; capillaries that join with venules (small veins); and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges and comb jellies lack a circulatory system.
Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH.
In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the blood circulatory system; without it the blood would become depleted of fluid.
The lymphatic system also works with the immune system. The circulation of lymph takes much longer than that of blood and, unlike the closed (blood) circulatory system, the lymphatic system is an open system. Some sources describe it as a secondary circulatory system.
The circulatory system can be affected by many cardiovascular diseases. Cardiologists are medical professionals who specialise in the heart, and cardiothoracic surgeons specialise in operating on the heart and its surrounding areas. Vascular surgeons focus on disorders of the blood vessels and lymphatic vessels.
Structure
The circulatory system includes the heart, blood vessels, and blood. The cardiovascular system in all vertebrates, consists of the heart and blood vessels. The circulatory system is further divided into two major circuits – a pulmonary circulation, and a systemic circulation. The pulmonary circulation is a circuit loop from the right heart taking deoxygenated blood to the lungs where it is oxygenated and returned to the left heart. The systemic circulation is a circuit loop that delivers oxygenated blood from the left heart to the rest of the body, and returns deoxygenated blood back to the right heart via large veins known as the venae cavae. The systemic circulation can also be defined as two parts – a macrocirculation and a microcirculation. An average adult contains five to six quarts (roughly 4.7 to 5.7 liters) of blood, accounting for approximately 7% of their total body weight. Blood consists of plasma, red blood cells, white blood cells, and platelets. The digestive system also works with the circulatory system to provide the nutrients the system needs to keep the heart pumping.
Further circulatory routes are associated, such as the coronary circulation to the heart itself, the cerebral circulation to the brain, renal circulation to the kidneys, and bronchial circulation to the bronchi in the lungs.
The human circulatory system is closed, meaning that the blood is contained within the vascular network. Nutrients travel through tiny blood vessels of the microcirculation to reach organs. The lymphatic system is an essential subsystem of the circulatory system consisting of a network of lymphatic vessels, lymph nodes, organs, tissues and circulating lymph. This subsystem is an open system. A major function is to carry the lymph, draining and returning interstitial fluid into the lymphatic ducts back to the heart for return to the circulatory system. Another major function is working together with the immune system to provide defense against pathogens.
Heart
The heart pumps blood to all parts of the body, providing nutrients and oxygen to every cell and removing waste products. The left heart pumps oxygenated blood returned from the lungs to the rest of the body in the systemic circulation. The right heart pumps deoxygenated blood to the lungs in the pulmonary circulation. In the human heart there is one atrium and one ventricle for each circulation, and with both a systemic and a pulmonary circulation there are four chambers in total: left atrium, left ventricle, right atrium and right ventricle. The right atrium is the upper chamber of the right side of the heart. The blood that is returned to the right atrium is deoxygenated (poor in oxygen) and passed into the right ventricle to be pumped through the pulmonary artery to the lungs for re-oxygenation and removal of carbon dioxide. The left atrium receives newly oxygenated blood from the lungs through the pulmonary veins; this blood is passed into the strong left ventricle to be pumped through the aorta to the different organs of the body.
Pulmonary circulation
The pulmonary circulation is the part of the circulatory system in which oxygen-depleted blood is pumped away from the heart, via the pulmonary artery, to the lungs and returned, oxygenated, to the heart via the pulmonary vein.
Oxygen-deprived blood from the superior and inferior vena cava enters the right atrium of the heart and flows through the tricuspid valve (right atrioventricular valve) into the right ventricle, from which it is then pumped through the pulmonary semilunar valve into the pulmonary artery to the lungs. Gas exchange occurs in the lungs, whereby carbon dioxide is released from the blood, and oxygen is absorbed. The pulmonary vein returns the now oxygen-rich blood to the left atrium.
A separate circuit from the systemic circulation, the bronchial circulation supplies blood to the tissue of the larger airways of the lung.
Systemic circulation
The systemic circulation is a circuit loop that delivers oxygenated blood from the left heart to the rest of the body through the aorta. Deoxygenated blood is returned in the systemic circulation to the right heart via two large veins, the inferior vena cava and superior vena cava, where it is pumped from the right atrium into the pulmonary circulation for oxygenation. The systemic circulation can also be defined as having two parts – a macrocirculation and a microcirculation.
Blood vessels
The blood vessels of the circulatory system are the arteries, veins, and capillaries. The large arteries and veins that take blood to, and away from the heart are known as the great vessels.
Arteries
Oxygenated blood enters the systemic circulation when leaving the left ventricle, via the aortic semilunar valve. The first part of the systemic circulation is the aorta, a massive and thick-walled artery. The aorta arches and gives off branches supplying the upper part of the body; after passing through the aortic opening of the diaphragm at the level of the tenth thoracic vertebra, it enters the abdomen. It later descends and supplies branches to the abdomen, pelvis, perineum and the lower limbs.
The walls of the aorta are elastic. This elasticity helps to maintain the blood pressure throughout the body. When the aorta receives almost five litres of blood from the heart each minute, it recoils and is responsible for pulsating blood pressure. As the aorta branches into smaller arteries, their elasticity decreases and their compliance increases.
Capillaries
Arteries branch into small passages called arterioles and then into the capillaries. The capillaries merge to bring blood into the venous system.
Veins
Capillaries merge into venules, which merge into veins. The venous system feeds into the two major veins: the superior vena cava – which mainly drains tissues above the heart – and the inferior vena cava – which mainly drains tissues below the heart. These two large veins empty into the right atrium of the heart.
Portal veins
The general rule is that arteries from the heart branch out into capillaries, which collect into veins leading back to the heart. Portal veins are a slight exception to this. In humans, the only significant example is the hepatic portal vein which combines from capillaries around the gastrointestinal tract where the blood absorbs the various products of digestion; rather than leading directly back to the heart, the hepatic portal vein branches into a second capillary system in the liver.
Coronary circulation
The heart itself is supplied with oxygen and nutrients through a small "loop" of the systemic circulation and derives very little from the blood contained within the four chambers.
The coronary circulation system provides a blood supply to the heart muscle itself. The coronary circulation begins near the origin of the aorta with two coronary arteries: the right coronary artery and the left coronary artery. After nourishing the heart muscle, blood returns through the coronary veins into the coronary sinus and from there into the right atrium. Backflow of blood through its opening during atrial systole is prevented by the Thebesian valve. The smallest cardiac veins drain directly into the heart chambers.
Cerebral circulation
The brain has a dual blood supply, an anterior and a posterior circulation from arteries at its front and back. The anterior circulation arises from the internal carotid arteries to supply the front of the brain. The posterior circulation arises from the vertebral arteries, to supply the back of the brain and brainstem. The circulations from the front and the back join (anastomose) at the circle of Willis. The neurovascular unit, composed of various cells and vasculature channels within the brain, regulates the flow of blood to activated neurons in order to satisfy their high energy demands.
Renal circulation
The renal circulation is the blood supply to the kidneys; it contains many specialized blood vessels and receives around 20% of the cardiac output. It branches from the abdominal aorta and returns blood to the inferior vena cava.
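As a rough worked figure, assuming a resting cardiac output of about 5 L/min (an assumption, consistent with the volume quoted earlier for the aorta), the kidneys' share works out as follows.

cardiac_output_l_per_min = 5.0   # assumed resting cardiac output
renal_fraction = 0.20            # ~20% of cardiac output, as stated above
renal_blood_flow = cardiac_output_l_per_min * renal_fraction
print(renal_blood_flow)          # ≈ 1.0 L/min delivered to the kidneys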
Development
The development of the circulatory system starts with vasculogenesis in the embryo. The human arterial and venous systems develop from different areas in the embryo. The arterial system develops mainly from the aortic arches, six pairs of arches that develop on the upper part of the embryo. The venous system arises from three bilateral veins during weeks 4–8 of embryogenesis. Fetal circulation begins within the 8th week of development. Fetal circulation does not include the lungs, which are bypassed via the ductus arteriosus. Before birth the fetus obtains oxygen (and nutrients) from the mother through the placenta and the umbilical cord.
Arteries
The human arterial system originates from the aortic arches and from the dorsal aortae starting from week 4 of embryonic life. The first and second aortic arches regress and form only the maxillary arteries and stapedial arteries respectively. The arterial system itself arises from aortic arches 3, 4 and 6 (aortic arch 5 completely regresses).
The dorsal aortae, present on the dorsal side of the embryo, are initially present on both sides of the embryo. They later fuse to form the basis for the aorta itself. Approximately thirty smaller arteries branch from this at the back and sides. These branches form the intercostal arteries, arteries of the arms and legs, lumbar arteries and the lateral sacral arteries. Branches to the sides of the aorta will form the definitive renal, suprarenal and gonadal arteries. Finally, branches at the front of the aorta consist of the vitelline arteries and umbilical arteries. The vitelline arteries form the celiac, superior and inferior mesenteric arteries of the gastrointestinal tract. After birth, the umbilical arteries will form the internal iliac arteries.
Veins
The human venous system develops mainly from the vitelline veins, the umbilical veins and the cardinal veins, all of which empty into the sinus venosus.
Function
About 98.5% of the oxygen in a sample of arterial blood in a healthy human, breathing air at sea-level pressure, is chemically combined with hemoglobin molecules. About 1.5% is physically dissolved in the liquid components of the blood and not bound to hemoglobin. The hemoglobin molecule is the primary transporter of oxygen in vertebrates.
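This split between bound and dissolved oxygen is often summarised by the standard arterial oxygen content formula, CaO2 = 1.34 × Hb × SaO2 + 0.003 × PaO2 (per dL of blood). The sketch below uses typical textbook values for hemoglobin concentration, saturation and partial pressure; these inputs are assumptions for illustration rather than figures from this article.

def arterial_o2_content(hb_g_per_dl, sao2_fraction, pao2_mmhg):
    """Arterial O2 content (mL O2 per dL blood): hemoglobin-bound plus dissolved."""
    bound = 1.34 * hb_g_per_dl * sao2_fraction   # mL O2 carried per g of saturated Hb
    dissolved = 0.003 * pao2_mmhg                # mL O2 dissolved per dL per mmHg
    return bound, dissolved

bound, dissolved = arterial_o2_content(hb_g_per_dl=15, sao2_fraction=0.98, pao2_mmhg=100)
total = bound + dissolved
print(round(bound, 1), round(dissolved, 1))          # ~19.7 bound, ~0.3 dissolved
print(round(100 * bound / total, 1))                 # ~98.5% of content is hemoglobin-bound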
Clinical significance
Many diseases affect the circulatory system. These include a number of cardiovascular diseases, affecting the heart and blood vessels; hematologic diseases that affect the blood, such as anemia; and lymphatic diseases affecting the lymphatic system. Cardiologists are medical professionals who specialise in the heart, and cardiothoracic surgeons specialise in operating on the heart and its surrounding areas. Vascular surgeons focus on the blood vessels.
Cardiovascular disease
Diseases affecting the cardiovascular system are called cardiovascular disease.
Many of these diseases are called "lifestyle diseases" because they develop over time and are related to a person's exercise habits, diet, whether they smoke, and other lifestyle choices. Atherosclerosis is the precursor to many of these diseases. It is a condition in which small atheromatous plaques build up in the walls of medium and large arteries. These plaques may eventually grow or rupture and occlude the arteries. Atherosclerosis is also a risk factor for acute coronary syndromes, which are diseases characterised by a sudden deficit of oxygenated blood to the heart tissue, and it is associated with problems such as aneurysm formation or splitting ("dissection") of arteries.
Another major cardiovascular disease involves the creation of a clot, called a "thrombus". These can originate in veins or arteries. Deep venous thrombosis, in which clots form in the veins of the legs, occurs particularly when a person has been stationary for a long time. These clots may embolise, meaning they travel to another location in the body. The results of this may include pulmonary embolus, transient ischaemic attacks, or stroke.
Cardiovascular diseases may also be congenital in nature, such as heart defects or persistent fetal circulation, where the circulatory changes that are supposed to happen after birth do not. Not all congenital changes to the circulatory system are associated with disease; a large number are anatomical variations.
Investigations
The function and health of the circulatory system and its parts are measured in a variety of manual and automated ways. These include simple methods such as those that are part of the cardiovascular examination, including the taking of a person's pulse as an indicator of a person's heart rate, the taking of blood pressure through a sphygmomanometer or the use of a stethoscope to listen to the heart for murmurs which may indicate problems with the heart's valves. An electrocardiogram can also be used to evaluate the way in which electricity is conducted through the heart.
Other more invasive means can also be used. A cannula or catheter inserted into an artery may be used to measure pulse pressure or pulmonary wedge pressures. Angiography, which involves injecting a dye into an artery to visualise an arterial tree, can be used in the heart (coronary angiography) or brain. At the same time as the arteries are visualised, blockages or narrowings may be fixed through the insertion of stents, and active bleeds may be managed by the insertion of coils. An MRI may be used to image arteries, called an MRI angiogram. For evaluation of the blood supply to the lungs a CT pulmonary angiogram may be used. Vascular ultrasonography may be used to investigate vascular diseases affecting the venous system and the arterial system including the diagnosis of stenosis, thrombosis or venous insufficiency. An intravascular ultrasound using a catheter is also an option.
Surgery
There are a number of surgical procedures performed on the circulatory system:
Coronary artery bypass surgery
Coronary stent used in angioplasty
Vascular surgery
Vein stripping
Cosmetic procedures
Cardiovascular procedures are more likely to be performed in an inpatient setting than in an ambulatory care setting; in the United States, only 28% of cardiovascular surgeries were performed in the ambulatory care setting.
Other animals
While humans, as well as other vertebrates, have a closed blood circulatory system (meaning that the blood never leaves the network of arteries, veins and capillaries), some invertebrate groups have an open circulatory system containing a heart but limited blood vessels. The most primitive, diploblastic animal phyla lack circulatory systems.
An additional transport system, the lymphatic system, which is only found in animals with a closed blood circulation, is an open system providing an accessory route for excess interstitial fluid to be returned to the blood.
The blood vascular system first appeared probably in an ancestor of the triploblasts over 600 million years ago, overcoming the time-distance constraints of diffusion, while endothelium evolved in an ancestral vertebrate some 540–510 million years ago.
Open circulatory system
In arthropods, the open circulatory system is a system in which a fluid in a cavity called the hemocoel bathes the organs directly with oxygen and nutrients, with there being no distinction between blood and interstitial fluid; this combined fluid is called hemolymph or haemolymph. Muscular movements by the animal during locomotion can facilitate hemolymph movement, but diverting flow from one area to another is limited. When the heart relaxes, blood is drawn back toward the heart through open-ended pores (ostia).
Hemolymph fills all of the interior hemocoel of the body and surrounds all cells. Hemolymph is composed of water, inorganic salts (mostly sodium, chloride, potassium, magnesium, and calcium), and organic compounds (mostly carbohydrates, proteins, and lipids). The primary oxygen transporter molecule is hemocyanin.
There are free-floating cells, the hemocytes, within the hemolymph. They play a role in the arthropod immune system.
Closed circulatory system
The circulatory systems of all vertebrates, as well as of annelids (for example, earthworms) and cephalopods (squids, octopuses and relatives) always keep their circulating blood enclosed within heart chambers or blood vessels and are classified as closed, just as in humans. Still, the systems of fish, amphibians, reptiles, and birds show various stages of the evolution of the circulatory system. Closed systems permit blood to be directed to the organs that require it.
In fish, the system has only one circuit, with the blood being pumped through the capillaries of the gills and on to the capillaries of the body tissues. This is known as single cycle circulation. The heart of fish is, therefore, only a single pump (consisting of two chambers).
In amphibians and most reptiles, a double circulatory system is used, but the heart is not always completely separated into two pumps. Amphibians have a three-chambered heart.
In reptiles, the ventricular septum of the heart is incomplete and the pulmonary artery is equipped with a sphincter muscle. This allows a second possible route of blood flow. Instead of blood flowing through the pulmonary artery to the lungs, the sphincter may be contracted to divert this blood flow through the incomplete ventricular septum into the left ventricle and out through the aorta. This means the blood flows from the capillaries to the heart and back to the capillaries instead of to the lungs. This process is useful to ectothermic (cold-blooded) animals in the regulation of their body temperature.
Mammals, birds and crocodilians show complete separation of the heart into two pumps, for a total of four heart chambers; it is thought that the four-chambered heart of birds and crocodilians evolved independently from that of mammals. Double circulatory systems permit blood to be repressurized after returning from the lungs, speeding up delivery of oxygen to tissues.
No circulatory system
Circulatory systems are absent in some animals, including flatworms. Their body cavity has no lining or enclosed fluid. Instead, a muscular pharynx leads to an extensively branched digestive system that facilitates direct diffusion of nutrients to all cells. The flatworm's dorso-ventrally flattened body shape also restricts the distance of any cell from the digestive system or the exterior of the organism. Oxygen can diffuse from the surrounding water into the cells, and carbon dioxide can diffuse out. Consequently, every cell is able to obtain nutrients, water and oxygen without the need of a transport system.
Some animals, such as jellyfish, have more extensive branching from their gastrovascular cavity (which functions as both a place of digestion and a form of circulation), this branching allows for bodily fluids to reach the outer layers, since the digestion begins in the inner layers.
History
The earliest known writings on the circulatory system are found in the Ebers Papyrus (16th century BCE), an ancient Egyptian medical papyrus containing over 700 prescriptions and remedies, both physical and spiritual. The papyrus acknowledges the connection of the heart to the arteries. The Egyptians thought air came in through the mouth and into the lungs and heart. From the heart, the air travelled to every member through the arteries. Although this concept of the circulatory system is only partially correct, it represents one of the earliest accounts of scientific thought.
In the 6th century BCE, the knowledge of circulation of vital fluids through the body was known to the Ayurvedic physician Sushruta in ancient India. He also seems to have possessed knowledge of the arteries, described as 'channels' by Dwivedi & Dwivedi (2007). The first major ancient Greek research into the circulatory system was completed by Plato in the Timaeus; he argues that blood circulates around the body in accordance with the general rules that govern the motions of the elements in the body, and accordingly does not place much importance on the heart itself. The valves of the heart were discovered by a physician of the Hippocratic school around the early 3rd century BC. However, their function was not properly understood then. Because blood pools in the veins after death, arteries look empty. Ancient anatomists assumed they were filled with air and that they were for the transport of air.
The Greek physician Herophilus distinguished veins from arteries but thought that the pulse was a property of arteries themselves. The Greek anatomist Erasistratus observed that arteries that were cut during life bleed. He ascribed this to air escaping from the artery being replaced with blood entering through very small vessels between veins and arteries. Thus he apparently postulated capillaries, but with reversed flow of blood.
In 2nd-century AD Rome, the Greek physician Galen knew that blood vessels carried blood and identified venous (dark red) and arterial (brighter and thinner) blood, each with distinct and separate functions. Growth and energy were derived from venous blood created in the liver from chyle, while arterial blood gave vitality by containing pneuma (air) and originated in the heart. Blood flowed from both creating organs to all parts of the body where it was consumed and there was no return of blood to the heart or liver. The heart did not pump blood around, the heart's motion sucked blood in during diastole and the blood moved by the pulsation of the arteries themselves.
Galen believed that the arterial blood was created by venous blood passing from the right ventricle to the left through 'pores' in the interventricular septum, while air passed from the lungs via the pulmonary artery to the left side of the heart. As the arterial blood was created, 'sooty' vapors were produced and passed to the lungs, also via the pulmonary artery, to be exhaled.
In 1025, The Canon of Medicine by the Persian physician, Avicenna, "erroneously accepted the Greek notion regarding the existence of a hole in the ventricular septum by which the blood traveled between the ventricles." Despite this, Avicenna "correctly wrote on the cardiac cycles and valvular function", and "had a vision of blood circulation" in his Treatise on Pulse. While also refining Galen's erroneous theory of the pulse, Avicenna provided the first correct explanation of pulsation: "Every beat of the pulse comprises two movements and two pauses. Thus, expansion : pause : contraction : pause. [...] The pulse is a movement in the heart and arteries ... which takes the form of alternate expansion and contraction."
In 1242, the Arab physician Ibn al-Nafis described the process of pulmonary circulation in greater, more accurate detail than his predecessors, though he believed, as they did, in the notion of vital spirit (pneuma), which he believed was formed in the left ventricle. Ibn al-Nafis stated in his Commentary on Anatomy in Avicenna's Canon:
...the blood from the right chamber of the heart must arrive at the left chamber but there is no direct pathway between them. The thick septum of the heart is not perforated and does not have visible pores as some people thought or invisible pores as Galen thought. The blood from the right chamber must flow through the vena arteriosa (pulmonary artery) to the lungs, spread through its substances, be mingled there with air, pass through the arteria venosa (pulmonary vein) to reach the left chamber of the heart and there form the vital spirit...
In addition, Ibn al-Nafis had an insight into what would become a larger theory of the capillary circulation. He stated that "there must be small communications or pores (manafidh in Arabic) between the pulmonary artery and vein," a prediction that preceded the discovery of the capillary system by more than 400 years. Ibn al-Nafis' theory, however, was confined to blood transit in the lungs and did not extend to the entire body.
Michael Servetus was the first European to describe the function of pulmonary circulation, although his achievement was not widely recognized at the time, for a few reasons. He first described it in the "Manuscript of Paris" (around 1546), but this work was never published. He later published the description, but in a theological treatise, Christianismi Restitutio, rather than in a book on medicine. Only three copies of the book survived, and these remained hidden for decades; the rest were burned shortly after its publication in 1553 because of the persecution of Servetus by religious authorities.
A better known discovery of pulmonary circulation was by Vesalius's successor at Padua, Realdo Colombo, in 1559.
Finally, the English physician William Harvey, a pupil of Hieronymus Fabricius (who had earlier described the valves of the veins without recognizing their function), performed a sequence of experiments and published his Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus in 1628, which "demonstrated that there had to be a direct connection between the venous and arterial systems throughout the body, and not just the lungs. Most importantly, he argued that the beat of the heart produced a continuous circulation of blood through minute connections at the extremities of the body. This is a conceptual leap that was quite different from Ibn al-Nafis' refinement of the anatomy and bloodflow in the heart and lungs." This work, with its essentially correct exposition, slowly convinced the medical world. However, Harvey was not able to identify the capillary system connecting arteries and veins; these were later discovered by Marcello Malpighi in 1661.
See also
References
External links
Circulatory Pathways in Anatomy and Physiology by OpenStax
The Circulatory System
Michael Servetus Research Study on the Manuscript of Paris by Servetus (1546 description of the Pulmonary Circulation)
Central nervous system disease | Central nervous system diseases or central nervous system disorders are a group of neurological disorders that affect the structure or function of the brain or spinal cord, which collectively form the central nervous system (CNS). These disorders may be caused by such things as infection, injury, blood clots, age related degeneration, cancer, autoimmune disfunction, and birth defects. The symptoms vary widely, as do the treatments.
Central nervous system tumors are the most common forms of pediatric cancer. Brain tumors are the most frequent and have the highest mortality.
Some disorders, such as substance addiction, autism, and ADHD may be regarded as CNS disorders, though the classifications are not without dispute.
Signs and symptoms
Every disease has different signs and symptoms. Some of them are persistent headache; pain in the face, back, arms, or legs; an inability to concentrate; loss of feeling; memory loss; loss of muscle strength; tremors; seizures; increased reflexes, spasticity, tics; paralysis; and slurred speech. One should seek medical attention if affected by these.
Causes
Trauma
Any type of traumatic brain injury (TBI) or injury done to the spinal cord can result in a wide spectrum of disabilities in a person. Depending on the section of the brain or spinal cord that experiences the trauma, the outcome may be anticipated.
Infections
Infectious diseases are transmitted in several ways. Some of these infections may affect the brain or spinal cord directly. Generally, an infection is a disease that is caused by the invasion of a microorganism or virus.
Degeneration
Degenerative spinal disorders involve a loss of function in the spine. Pressure on the spinal cord and nerves may be associated with herniation or disc displacement. Brain degeneration also causes central nervous system diseases (e.g., Alzheimer's, Lewy body dementia, Parkinson's, and Huntington's diseases). Studies have shown that obese people may have severe degeneration in the brain due to loss of tissue affecting cognition.
Structural defects
Common structural defects include birth defects such as anencephaly and spina bifida. Children born with structural defects may have malformed limbs, heart problems, and facial abnormalities.
Defects in the formation of the cerebral cortex include microgyria, polymicrogyria, bilateral frontoparietal polymicrogyria, and pachygyria.
CNS Tumors
A tumor is an abnormal growth of body tissue. Tumors may initially be noncancerous (benign), but they can become malignant (cancerous). In general, they appear when there is a problem with cellular division. Problems with the body's immune system can also lead to tumors.
Autoimmune disorders
An autoimmune disorder is a condition in which the immune system attacks and destroys healthy body tissue. This is caused by a loss of tolerance to proteins in the body, resulting in immune cells recognising these as 'foreign' and directing an immune response against them.
Stroke
A stroke is an interruption of the blood supply to the brain. Approximately every 40 seconds, someone in the US has a stroke. This can happen when a blood vessel is blocked by a blood clot or when a blood vessel ruptures, causing blood to leak into the brain. If the brain cannot get enough oxygen and blood, brain cells can die, leading to permanent damage.
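The "every 40 seconds" figure can be converted into an approximate annual count; the arithmetic below is a back-of-the-envelope check rather than an epidemiological estimate.

seconds_per_year = 365 * 24 * 60 * 60   # 31,536,000 seconds in a non-leap year
strokes_per_year = seconds_per_year / 40  # one stroke roughly every 40 seconds
print(round(strokes_per_year))            # ≈ 790,000 strokes per year in the US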
Functions
Spinal cord
The spinal cord transmits sensory reception from the peripheral nervous system. It also conducts motor information to the body's skeletal muscles, cardiac muscles, smooth muscles, and glands. There are 31 pairs of spinal nerves along the spinal cord, all of which consist of both sensory and motor neurons. The spinal cord is protected by vertebrae and connects the peripheral nervous system to the brain, and it acts as a "minor" coordinating center.
Brain
The brain serves as the organic basis of cognition and exerts centralized control over the other organs of the body. The brain is protected by the skull; however, if the brain is damaged, significant impairments in cognition and physiological function or death may occur.
Diagnosis
Types of CNS disorders
Addiction
Addiction is a disorder of the brain's reward system which arises through transcriptional and epigenetic mechanisms and occurs over time from chronically high levels of exposure to an addictive stimulus (e.g., morphine, cocaine, sexual intercourse, gambling, etc.).
Arachnoid cysts
Arachnoid cysts are sacs of cerebrospinal fluid covered by arachnoidal cells that may develop on the brain or spinal cord. They are a congenital disorder, and in some cases may not show symptoms. However, if there is a large cyst, symptoms may include headache, seizures, ataxia (lack of muscle control), hemiparesis, and several others. Macrocephaly and ADHD are common among children, while presenile dementia, hydrocephalus (an abnormality of the dynamics of the cerebrospinal fluid), and urinary incontinence are symptoms for elderly patients (65 and older).
Attention deficit/hyperactivity disorder (ADHD)
ADHD is an organic disorder of the nervous system. ADHD, which in severe cases can be debilitating, has symptoms thought to be caused by structural as well as biochemical imbalances in the brain; in particular, low levels of the neurotransmitters dopamine and norepinephrine, which are responsible for controlling and maintaining attention and movement. Many people with ADHD continue to have symptoms well into adulthood. Also of note are an increased risk of developing dementia with Lewy bodies (DLB) and a direct genetic association between attention deficit disorder and Parkinson's disease, two progressive and serious neurological diseases whose symptoms often occur in people over age 65.
Autism
Autism is a neurodevelopmental disorder that is characterized by repetitive patterns of behavior and persistent deficits in social interaction and communication.
Brain tumors
Tumors of the central nervous system constitute around 2% of all cancer in the United States.
Catalepsy
Catalepsy is a nervous disorder characterized by immobility and muscular rigidity, along with a decreased sensitivity to pain. Catalepsy is considered a symptom of serious diseases of the nervous system (e.g., Parkinson's disease, epilepsy) rather than a disease by itself. Cataleptic fits can range in duration from several minutes to weeks. Catalepsy often responds to benzodiazepines (e.g., lorazepam) in pill and intravenous form.
Encephalitis
Encephalitis is an inflammation of the brain. It is usually caused by a foreign substance or a viral infection. Symptoms of this disease include headache, neck pain, drowsiness, nausea, and fever. If caused by the West Nile virus, it may be lethal to humans, as well as birds and horses.
Epilepsy/Seizures
Epilepsy is an unpredictable, serious, and potentially fatal disorder of the nervous system, thought to be the result of faulty electrical activity in the brain. Epileptic seizures result from abnormal, excessive, or hypersynchronous neuronal activity in the brain. About 50 million people worldwide have epilepsy, and nearly 80% of epilepsy occurs in developing countries. Epilepsy becomes more common as people age. Onset of new cases occurs most frequently in infants and the elderly. Epileptic seizures may occur in recovering patients as a consequence of brain surgery.
Infection
A number of different pathogens (i.e., certain viruses, bacteria, protozoa, fungi, and prions) can cause infections that adversely affect the brain or spinal cord.
Locked-in syndrome
Locked-in syndrome is a medical condition, usually resulting from a stroke that damages part of the brainstem, in which the body and most of the facial muscles are paralysed but consciousness remains and the ability to perform certain eye movements is preserved.
Meningitis
Meningitis is an inflammation of the meninges (membranes) of the brain and spinal cord. It is most often caused by a bacterial or viral infection. Fever, vomiting, and a stiff neck are all symptoms of meningitis.
Migraine
Migraine is a chronic, often debilitating neurological disorder characterized by recurrent moderate to severe headaches, often in association with a number of autonomic nervous system symptoms.
Multiple sclerosis
Multiple sclerosis (MS) is a chronic, inflammatory demyelinating disease, meaning that the myelin sheath of neurons is damaged. Symptoms of MS include visual and sensation problems, muscle weakness, numbness and tingling all over, muscle spasms, poor coordination, and depression. Also, patients with MS have reported extreme fatigue and dizziness, tremors, and bladder leakage.
Myelopathy
Myelopathy is an injury to the spinal cord due to severe compression that may result from trauma, congenital stenosis, degenerative disease or disc herniation. The spinal cord is a group of nerves housed inside the spine that runs almost its entire length.
Tourette's
Tourette's syndrome is an inherited neurological disorder. Early onset may be during childhood, and it is characterized by physical and verbal tics. Tourette's often also includes symptoms of both OCD and ADHD indicating a link between the three disorders. The exact cause of Tourette's, other than genetic factors, is unknown.
Neurodegenerative disorders
Alzheimer's
Alzheimer's is a neurodegenerative disease typically found in people over the age of 65 years. Worldwide, approximately 24 million people have dementia; 60% of these cases are due to Alzheimer's. The ultimate cause is unknown. The clinical sign of Alzheimer's is progressive cognition deterioration.
Huntington's disease
Huntington's disease is a degenerative neurological disorder that is inherited. Degeneration of neuronal cells occurs throughout the brain, especially in the striatum. There is a progressive decline that results in abnormal movements. Statistics show that Huntington's disease may affect 10 per 100,000 people of Western European descent.
Lewy body dementia
Parkinson's
Parkinson's disease, or PD, is a progressive illness of the nervous system caused by the death of dopamine-producing brain cells, which affects motor skills and speech. Symptoms may include bradykinesia (slow physical movement), muscle rigidity, and tremors. Behavior, thinking and sensation disorders, and the sometimes co-morbid skin condition seborrheic dermatitis, are just some of PD's numerous nonmotor symptoms. Parkinson's disease, attention deficit/hyperactivity disorder (ADHD) and bipolar disorder all appear to have some connection to one another, as all three nervous system disorders involve lower than normal levels of the brain chemical dopamine (in ADHD, Parkinson's, and the depressive phase of bipolar disorder) or too much dopamine (in manic states of bipolar disorder) in different areas of the brain.
Treatments
There is a wide range of treatments for central nervous system diseases, from surgery to neural rehabilitation to prescribed medications. Among the most valued companies worldwide whose leading products are in CNS care are CSPC Pharma (Hong Kong), Biogen (United States), UCB (Belgium) and Otsuka (Japan), which are active in treatment areas such as multiple sclerosis, Alzheimer's disease, epilepsy and psychiatry.
See also
Neurodegenerative disease
List of central nervous system infections
References
External links
Syndrome | A syndrome is a set of medical signs and symptoms which are correlated with each other and often associated with a particular disease or disorder. The word derives from the Greek σύνδρομον, meaning "concurrence". When a syndrome is paired with a definite cause this becomes a disease. In some instances, a syndrome is so closely linked with a pathogenesis or cause that the words syndrome, disease, and disorder end up being used interchangeably for them. This substitution of terminology often confuses the reality and meaning of medical diagnoses. This is especially true of inherited syndromes. About one third of all phenotypes that are listed in OMIM are described as dysmorphic, which usually refers to the facial gestalt. For example, Down syndrome, Wolf–Hirschhorn syndrome, and Andersen–Tawil syndrome are disorders with known pathogeneses, so each is more than just a set of signs and symptoms, despite the syndrome nomenclature. In other instances, a syndrome is not specific to only one disease. For example, toxic shock syndrome can be caused by various toxins; another medical syndrome named as premotor syndrome can be caused by various brain lesions; and premenstrual syndrome is not a disease but simply a set of symptoms.
If an underlying genetic cause is suspected but not known, a condition may be referred to as a genetic association (often just "association" in context). By definition, an association indicates that the collection of signs and symptoms occurs in combination more frequently than would be likely by chance alone.
Syndromes are often named after the physician or group of physicians that discovered them or initially described the full clinical picture. Such eponymous syndrome names are examples of medical eponyms. Recently, there has been a shift towards naming conditions descriptively (by symptoms or underlying cause) rather than eponymously, but the eponymous syndrome names often persist in common usage.
The defining of syndromes has sometimes been termed syndromology, but it is usually not a separate discipline from nosology and differential diagnosis generally, which inherently involve pattern recognition (both sentient and automated) and differentiation among overlapping sets of signs and symptoms. Teratology (dysmorphology) by its nature involves the defining of congenital syndromes that may include birth defects (pathoanatomy), dysmetabolism (pathophysiology), and neurodevelopmental disorders.
Subsyndromal
A condition is described as subsyndromal when there are a number of symptoms suggesting a particular disease or condition, but the presentation does not meet the defined criteria used to make a diagnosis of that disease or condition. This can be somewhat subjective, because it is ultimately up to the clinician to make the diagnosis. It may be because the condition has not advanced to the level of, or passed, a diagnostic threshold, or because similar symptoms are caused by other issues. Subclinical is synonymous, since one of its definitions is "where some criteria are met but not enough to achieve clinical status"; but subclinical is not always interchangeable, since it can also mean "not detectable or producing effects that are not detectable by the usual clinical tests", i.e., asymptomatic.
Usage
General medicine
In medicine, a broad definition of syndrome is used, which describes a collection of symptoms and findings without necessarily tying them to a single identifiable pathogenesis. Examples of infectious syndromes include encephalitis and hepatitis, which can both have several different infectious causes. The more specific definition employed in medical genetics describes a subset of all medical syndromes.
Psychiatry and psychopathology
Psychiatric syndromes are often called psychopathological syndromes (psychopathology refers both to psychic dysfunctions occurring in mental disorders, and to the study of the origin, diagnosis, development, and treatment of mental disorders).
In Russia, these psychopathological syndromes are used in modern clinical practice and are described in detail in the psychiatric literature: asthenic syndrome, obsessive syndrome, emotional syndromes (for example, manic syndrome, depressive syndrome), Cotard's syndrome, catatonic syndrome, hebephrenic syndrome, delusional and hallucinatory syndromes (for example, paranoid syndrome, paranoid-hallucinatory syndrome, Kandinsky–Clérambault's syndrome also known as syndrome of psychic automatism, hallucinosis), paraphrenic syndrome, psychopathic syndromes (which include all personality disorders), clouding of consciousness syndromes (for example, twilight clouding of consciousness, amential syndrome also known as amentia, delirious syndrome, stunned consciousness syndrome, oneiroid syndrome), hysteric syndrome, neurotic syndrome, Korsakoff's syndrome, hypochondriacal syndrome, paranoiac syndrome, senestopathic syndrome, and encephalopathic syndrome.
Some examples of psychopathological syndromes used in modern Germany are psychoorganic syndrome, depressive syndrome, paranoid-hallucinatory syndrome, obsessive-compulsive syndrome, autonomic syndrome, hostility syndrome, manic syndrome, apathy syndrome.
Münchausen syndrome, Ganser syndrome, neuroleptic-induced deficit syndrome, olfactory reference syndrome are also well-known.
History
The most important psychopathological syndromes were classified into three groups ranked in order of severity by German psychiatrist Emil Kraepelin (1856—1926). The first group, which includes the mild disorders, consists of five syndromes: emotional, paranoid, hysterical, delirious, and impulsive. The second, intermediate, group includes two syndromes: schizophrenic syndrome and speech-hallucinatory syndrome. The third includes the most severe disorders, and consists of three syndromes: epileptic, oligophrenic and dementia. In Kraepelin's era, epilepsy was viewed as a mental illness; Karl Jaspers also considered "genuine epilepsy" a "psychosis", and described "the three major psychoses" as schizophrenia, epilepsy, and manic-depressive illness.
Medical genetics
In the field of medical genetics, the term "syndrome" is traditionally only used when the underlying genetic cause is known. Thus, trisomy 21 is commonly known as Down syndrome.
Until 2005, CHARGE syndrome was most frequently referred to as "CHARGE association". When the major causative gene (CHD7) for the condition was discovered, the name was changed. The consensus underlying cause of VACTERL association has not been determined, and thus it is not commonly referred to as a "syndrome".
Other fields
In biology, "syndrome" is used in a more general sense to describe characteristic sets of features in various contexts. Examples include behavioral syndromes, as well as pollination syndromes and seed dispersal syndromes.
In orbital mechanics and astronomy, Kessler syndrome refers to the effect where the density of objects in low Earth orbit (LEO) is high enough that collisions between objects could cause a cascade in which each collision generates space debris that increases the likelihood of further collisions.
In quantum error correction theory, syndromes correspond to errors in code words and are determined with syndrome measurements, which collapse the state only onto an error state, so that the error can be corrected without affecting the quantum information stored in the code words.
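The same bookkeeping can be illustrated with a classical linear block code, where the syndrome is the product of a parity-check matrix with the received word. The Python sketch below uses the (7,4) Hamming code; the example codeword is an arbitrary valid word chosen for illustration.

import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is the binary form of j.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(received):
    """Syndrome of a received 7-bit word: H·r mod 2 (all zeros means no detected error)."""
    return H.dot(received) % 2

codeword = np.array([1, 0, 1, 1, 0, 1, 0])    # an arbitrary valid codeword
assert not syndrome(codeword).any()           # valid word -> zero syndrome

received = codeword.copy()
received[4] ^= 1                              # flip bit 5 (1-based) to simulate an error
s = syndrome(received)
error_position = int("".join(map(str, s)), 2) # syndrome read as binary gives the error position
received[error_position - 1] ^= 1             # correct the flagged bit
assert (received == codeword).all()

In the quantum setting, an analogous role is played by measuring check (stabilizer) operators, whose outcomes identify the error without revealing the encoded information itself.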
Naming
There is no set common convention for the naming of newly identified syndromes. In the past, syndromes were often named after the physician or scientist who identified and described the condition in an initial publication. These are referred to as "eponymous syndromes". In some cases, diseases are named after the patient who initially presented with symptoms, or after a place associated with the condition (Stockholm syndrome). There have been isolated cases of patients being eager to have their syndromes named after them, while their physicians are hesitant. When a syndrome is named after a person, there is some difference of opinion as to whether it should take the possessive form or not (e.g. Down syndrome vs. Down's syndrome). North American usage has tended to favor the non-possessive form, while European references often use the possessive. A 2009 study demonstrated a trend away from the possessive form in Europe in medical literature from 1970 through 2008.
History
Avicenna, in The Canon of Medicine (published 1025) helped lay the groundwork for the idea of a syndrome and pioneered in the diagnosis of a specific disease. The concept of a medical syndrome was further developed in the 17th century by Thomas Sydenham.
Underlying cause
Even in syndromes with no known etiology, the presence of the associated symptoms with a statistically improbable correlation normally leads the researchers to hypothesize that there exists an unknown underlying cause for all the described symptoms.
See also
List of syndromes
Toxidrome
Symptom
Sequence (medicine)
Characteristics of syndromic ASD conditions
References
External links
Whonamedit.com - a repository of medical eponyms
Serum sickness | Serum sickness in humans is a reaction to proteins in antiserum derived from a non-human animal source, occurring 5–10 days after exposure. Symptoms often include a rash, joint pain, fever, and lymphadenopathy. It is a type of hypersensitivity, specifically immune complex hypersensitivity (type III). The term serum sickness–like reaction (SSLR) is occasionally used to refer to similar illnesses that arise from the introduction of certain non-protein substances, such as penicillin.
Serum sickness may be diagnosed based on the symptoms, and using a blood test and a urine test. It may be prevented by not using an antitoxin derived from animal serum, and through prophylactic antihistamines or corticosteroids. It usually resolves naturally, but may be treated with corticosteroids, antihistamines, analgesics, and (in severe cases) prednisone. It was first characterized in 1906.
Signs and symptoms
Signs and symptoms can take as long as 14 days after exposure to appear. They may include signs and symptoms commonly associated with hypersensitivity or infections. Common symptoms include:
rashes and redness.
itching and urticaria.
joint pain (arthralgia), especially in finger and toe joints.
fever, usually appears before rash. This may be as high as 40 °C (104 °F).
lymphadenopathy (swelling of lymph nodes), particularly near the site of injection.
malaise.
Other symptoms include glomerulonephritis, blood in the urine, splenomegaly (enlarged spleen), hypotension (decreased blood pressure), and in serious cases circulatory shock.
Complications
Rarely, serum sickness can have severe complications. These include neuritis, myocarditis, laryngeal oedema, pleurisy, and Guillain–Barré syndrome.
Causes
Serum sickness is a type III hypersensitivity reaction, caused by immune complexes. When an antiserum is given, the human immune system can mistake the proteins present for harmful antigens. The body produces antibodies, which combine with these proteins to form immune complexes. These complexes precipitate, enter the walls of blood vessels, and activate the complement cascade, initiating an inflammatory response and consuming much of the available complement component 3 (C3). They can be found circulating in the blood, which differentiates serum sickness from serum sickness-like reaction. The result is a leukocytoclastic vasculitis. This results in hypocomplementemia, a low C3 level in serum. They can also cause more reactions, causing the typical symptoms of serum sickness. This is similar to a generalised Arthus reaction.
Antitoxins and antisera
Serum sickness is usually a result of exposure to antibodies derived from animals. These sera or antitoxins are generally given to prevent or treat an infection or envenomation (venomous bite).
Drugs
Serum sickness may be caused by some routine medications. Some of the drugs associated with serum sickness are:
allopurinol
barbiturates
captopril
cephalosporins
crofab
griseofulvin
penicillins
phenytoin
procainamide
quinidine
streptokinase
sulfonamides
rituximab
ibuprofen
infliximab
oxycodone
Others
Allergenic extracts, hormones and vaccines can also cause serum sickness. However, according to the Johns Hopkins Bloomberg School of Public Health, vaccinations routinely recommended to the general population in the U.S. have not been shown to cause serum sickness, as of 2012.
Diagnosis
Diagnosis is based on the history given by the patient, including recent medications. A blood sample may be taken and tested, which will show thrombocytopenia (low platelets), leukopenia (low white blood cells), an elevated erythrocyte sedimentation rate, and a decrease in the complement proteins C3 and C4. A urine sample may be taken and tested, which will show proteinuria, and sometimes hematuria (blood in the urine, with hemoglobinuria).
Differential diagnosis
Similar skin symptoms may be caused by lupus, erythema multiforme, and hives.
Prevention
Avoidance of antitoxins that may cause serum sickness is the best way to prevent serum sickness. Sometimes, the benefits of using an antitoxin outweigh the risks in the case of a life-threatening bite or sting. Prophylactic antihistamines or corticosteroids may be used with an antitoxin. Skin testing may be used beforehand in order to identify individuals who may be at risk of a reaction. Physicians should make their patients aware of the drugs or antitoxins to which they are allergic if there is a reaction. The physician will then choose an alternate antitoxin if it is appropriate, or continue with prophylactic measures. This is important if a patient has received an antitoxin before, as the serum sickness caused can be worse and occur more quickly.
Treatment
Antiserum or drug treatment should be stopped as soon as possible. Once treatment has stopped, symptoms usually resolve within seven days. Outcomes are generally good.
Corticosteroids, antihistamines, and analgesics are the main line of treatment. The choice depends on the severity of the reaction. Prednisone may be used in severe cases.
Use of plasmapheresis has also been described.
Epidemiology
Serum sickness is becoming less common over time. Many drugs based on animal serum have been replaced with artificial drugs.
History
Serum sickness was first characterized by Clemens von Pirquet and Béla Schick in 1906.
See also
Hypersensitivity
Arthus reaction
Serum sickness-like reaction
References
External links
Serum sickness-like reactions
Food | Food is any substance consumed by an organism for nutritional support. Food is usually of plant, animal, or fungal origin and contains essential nutrients such as carbohydrates, fats, proteins, vitamins, or minerals. The substance is ingested by an organism and assimilated by the organism's cells to provide energy, maintain life, or stimulate growth. Different species of animals have different feeding behaviours that satisfy the needs of their metabolisms and have evolved to fill a specific ecological niche within specific geographical contexts.
Omnivorous humans are highly adaptable and have adapted to obtain food in many different ecosystems. Humans generally use cooking to prepare food for consumption. The majority of the food energy required is supplied by the industrial food industry, which produces food through intensive agriculture and distributes it through complex food processing and food distribution systems. This system of conventional agriculture relies heavily on fossil fuels, which means that the food and agricultural systems are one of the major contributors to climate change, accounting for as much as 37% of total greenhouse gas emissions.
The food system has significant impacts on a wide range of other social and political issues, including sustainability, biological diversity, economics, population growth, water supply, and food security. Food safety and security are monitored by international agencies like the International Association for Food Protection, the World Resources Institute, the World Food Programme, the Food and Agriculture Organization, and the International Food Information Council.
Definition and classification
Food is any substance consumed to provide nutritional support and energy to an organism. It can be raw, processed, or formulated and is consumed orally by animals for growth, health, or pleasure. Food is mainly composed of water, lipids, proteins, and carbohydrates. Minerals (e.g., salts) and organic substances (e.g., vitamins) can also be found in food. Plants, algae, and some microorganisms use photosynthesis to make some of their own nutrients. Water is found in many foods and has been defined as a food by itself. Water and fiber have low energy densities, or calories, while fat is the most energy-dense component. Some inorganic (non-food) elements are also essential for plant and animal functioning.
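The differing energy densities can be made concrete with the widely used Atwater factors (roughly 4 kcal per gram for carbohydrate and protein, 9 kcal per gram for fat, and essentially zero for water). In the Python sketch below, the portion's composition is invented purely for illustration.

ATWATER_KCAL_PER_G = {"carbohydrate": 4, "protein": 4, "fat": 9, "water": 0}

def energy_kcal(grams_by_component):
    """Approximate food energy from macronutrient masses using Atwater factors."""
    return sum(ATWATER_KCAL_PER_G[c] * g for c, g in grams_by_component.items())

# Hypothetical 100 g portion: mostly water, modest carbohydrate and protein, little fat.
portion = {"carbohydrate": 15, "protein": 3, "fat": 1, "water": 81}
print(energy_kcal(portion))   # 4*15 + 4*3 + 9*1 + 0 = 81 kcal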
Human food can be classified in various ways, either by related content or by how it is processed. The number and composition of food groups can vary. Most systems include four basic groups that describe their origin and relative nutritional function: Vegetables and Fruit, Cereals and Bread, Dairy, and Meat. Studies that look into diet quality group food into whole grains/cereals, refined grains/cereals, vegetables, fruits, nuts, legumes, eggs, dairy products, fish, red meat, processed meat, and sugar-sweetened beverages. The Food and Agriculture Organization and World Health Organization use a system with nineteen food classifications: cereals, roots, pulses and nuts, milk, eggs, fish and shellfish, meat, insects, vegetables, fruits, fats and oils, sweets and sugars, spices and condiments, beverages, foods for nutritional uses, food additives, composite dishes and savoury snacks.
Food sources
In a given ecosystem, food forms a web of interlocking chains with primary producers at the bottom and apex predators at the top. Other aspects of the web include detritivores (that eat detritus) and decomposers (that break down dead organisms). Primary producers include algae, plants, bacteria and protists that acquire their energy from sunlight. Primary consumers are the herbivores that consume the plants, and secondary consumers are the carnivores that consume those herbivores. Some organisms, including most mammals and birds, have a diet consisting of both animals and plants, and they are considered omnivores. The chain ends with the apex predators, the animals that have no known predators in their ecosystem. Humans are considered apex predators.
Humans are omnivores, finding sustenance in vegetables, fruits, cooked meat, milk, eggs, mushrooms and seaweed. Cereal grain is a staple food that provides more food energy worldwide than any other type of crop. Corn (maize), wheat, and rice account for 87% of all grain production worldwide. Just over half of the world's crops are used to feed humans (55 percent), with 36 percent grown as animal feed and 9 percent for biofuels. Fungi and bacteria are also used in the preparation of fermented foods like bread, wine, cheese and yogurt.
Photosynthesis
During photosynthesis, energy from the sun is absorbed and used to transform water and carbon dioxide in the air or soil into oxygen and glucose. The oxygen is then released, and the glucose stored as an energy reserve. Photosynthetic plants, algae and certain bacteria often represent the lowest point of the food chains, making photosynthesis the primary source of energy and food for nearly all life on earth.
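In simplified overall form, the process can be summarised by the familiar balanced equation:

6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2

That is, six molecules each of carbon dioxide and water yield one molecule of glucose and six molecules of oxygen.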
Plants also absorb important nutrients and minerals from the air, natural waters, and soil. Carbon, oxygen and hydrogen are absorbed from the air or water and are the basic nutrients needed for plant survival. The three main nutrients absorbed from the soil for plant growth are nitrogen, phosphorus and potassium, with other important nutrients including calcium, sulfur, magnesium, iron, boron, chlorine, manganese, zinc, copper, molybdenum and nickel.
Microorganisms
Bacteria and other microorganisms also form the lower rungs of the food chain. They obtain their energy from photosynthesis or by breaking down dead organisms, waste or chemical compounds. Some form symbiotic relationships with other organisms to obtain their nutrients. Bacteria provide a source of food for protozoa, who in turn provide a source of food for other organisms such as small invertebrates. Other organisms that feed on bacteria include nematodes, fan worms, shellfish and a species of snail.
In the marine environment, plankton (which includes bacteria, archaea, algae, protozoa and microscopic fungi) provide a crucial source of food to many small and large aquatic organisms.
Without bacteria, life would scarcely exist because bacteria convert atmospheric nitrogen into nutritious ammonia. Ammonia is the precursor to proteins, nucleic acids, and most vitamins. Since the advent of the industrial process for nitrogen fixation, the Haber–Bosch process, the majority of ammonia in the world is human-made.
Plants
Plants as a food source are divided into seeds, fruits, vegetables, legumes, grains and nuts. Where plants fall within these categories can vary, with botanically described fruits such as the tomato, squash, pepper and eggplant or seeds like peas commonly considered vegetables. Food is a fruit if the part eaten is derived from the reproductive tissue, so seeds, nuts and grains are technically fruit. From a culinary perspective, fruits are generally considered the remains of botanically described fruits after grains, nuts, seeds and fruits used as vegetables are removed. Grains can be defined as seeds that humans eat or harvest, with cereal grains (oats, wheat, rice, corn, barley, rye, sorghum and millet) belonging to the Poaceae (grass) family and pulses coming from the Fabaceae (legume) family. Whole grains are foods that contain all the elements of the original seed (bran, germ, and endosperm). Nuts are dry fruits, distinguishable by their woody shell.
Fleshy fruits (distinguishable from dry fruits like grain, seeds and nuts) can be further classified as stone fruits (cherries and peaches), pome fruits (apples, pears), berries (blackberry, strawberry), citrus (oranges, lemon), melons (watermelon, cantaloupe), Mediterranean fruits (grapes, fig), tropical fruits (banana, pineapple). Vegetables refer to any other part of the plant that can be eaten, including roots, stems, leaves, flowers, bark or the entire plant itself. These include root vegetables (potatoes and carrots), bulbs (onion family), flowers (cauliflower and broccoli), leaf vegetables (spinach and lettuce) and stem vegetables (celery and asparagus).
The carbohydrate, protein and lipid content of plants is highly variable. Carbohydrates are mainly in the form of starch, fructose, glucose and other sugars. Most vitamins are found in plant sources, with the exceptions of vitamin D and vitamin B12. Minerals can also be plentiful or not. Fruit can consist of up to 90% water, contain high levels of simple sugars that contribute to their sweet taste, and have a high vitamin C content. Compared to fleshy fruit (except bananas), vegetables are high in starch, potassium, dietary fiber, folate and vitamins and low in fat and calories. Grains are more starch-based, and nuts have a high protein, fibre, vitamin E and B content. Seeds are a good source of food for animals because they are abundant and contain fibre and healthful fats, such as omega-3 fats. Complicated chemical interactions can enhance or depress bioavailability of certain nutrients. Phytates can prevent the release of some sugars and vitamins.
Animals that eat only plants are called herbivores; those that mostly eat fruits are known as frugivores, leaf and shoot eaters are folivores (such as pandas), and wood eaters are termed xylophages (such as termites). Frugivores include a diverse range of species from annelids to elephants, chimpanzees and many birds. About 182 fish species consume seeds or fruit. Domesticated and wild animals use many types of grasses, which have adapted to different locations, as their main source of nutrients.
Humans eat thousands of plant species; there may be as many as 75,000 edible species of angiosperms, of which perhaps 7,000 are often eaten. Plants can be processed into breads, pasta, cereals, juices and jams, or raw ingredients such as sugar, herbs, spices and oils can be extracted. Oilseeds are pressed to produce rich oils: sunflower, flaxseed, rapeseed (including canola oil) and sesame.
Many plants and animals have coevolved in such a way that the fruit is a good source of nutrition to the animal who then excretes the seeds some distance away, allowing greater dispersal. Even seed predation can be mutually beneficial, as some seeds can survive the digestion process. Insects are major eaters of seeds, with ants being the only real seed dispersers. Birds, although being major dispersers, only rarely eat seeds as a source of food and can be identified by their thick beak that is used to crack open the seed coat. Mammals eat a more diverse range of seeds, as they are able to crush harder and larger seeds with their teeth.
Animals
Animals are used as food either directly or indirectly. This includes meat, eggs, shellfish and dairy products like milk and cheese. They are an important source of protein and are considered complete proteins for human consumption as they contain all the essential amino acids that the human body needs. One steak, chicken breast or pork chop contains about 30 grams of protein; one large egg has 7 grams; a serving of cheese has about 15 grams; and 1 cup of milk has about 8 grams. Other nutrients found in animal products include calories, fat, essential vitamins (including B12) and minerals (including zinc, iron, calcium and magnesium).
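Using the per-serving figures quoted above, a rough day's tally can be added up as follows; the combination of servings in this sketch is an arbitrary example, not a dietary recommendation.

# Approximate protein per serving, taken from the figures quoted above (grams).
protein_g_per_serving = {
    "meat_portion": 30,   # steak, chicken breast or pork chop
    "large_egg": 7,
    "cheese_serving": 15,
    "cup_of_milk": 8,
}

# Hypothetical day: one egg, a cup of milk, a cheese serving and one meat portion.
day = ["large_egg", "cup_of_milk", "cheese_serving", "meat_portion"]
total_protein = sum(protein_g_per_serving[item] for item in day)
print(total_protein)   # 7 + 8 + 15 + 30 = 60 grams of protein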
Food products produced by animals include milk produced by mammary glands, which in many cultures is drunk or processed into dairy products (cheese, butter, etc.). Eggs laid by birds and other animals are eaten and bees produce honey, a reduced nectar from flowers that is used as a popular sweetener in many cultures. Some cultures consume blood, such as in blood sausage, as a thickener for sauces, or in a cured, salted form for times of food scarcity, and others use blood in stews such as jugged hare.
Taste
Animals, specifically humans, typically have five different types of tastes: sweet, sour, salty, bitter, and umami. The differing tastes are important for distinguishing between foods that are nutritionally beneficial and those which may contain harmful toxins. As animals have evolved, the tastes that provide the most energy are the most pleasant to eat while others are not enjoyable, although humans in particular can acquire a preference for some substances which are initially unenjoyable. Water, while important for survival, has no taste.
Sweetness is almost always caused by a type of simple sugar such as glucose or fructose, or disaccharides such as sucrose, a molecule combining glucose and fructose. Sourness is caused by acids, such as vinegar in alcoholic beverages. Sour foods include citrus, specifically lemons and limes. Sour is evolutionarily significant as it can signal a food that may have gone rancid due to bacteria. Saltiness is the taste of alkali metal ions such as sodium and potassium. It is found in almost every food in low to moderate proportions to enhance flavor. Bitter taste is a sensation considered unpleasant characterised by having a sharp, pungent taste. Unsweetened dark chocolate, caffeine, lemon rind, and some types of fruit are known to be bitter. Umami, commonly described as savory, is a marker of proteins and characteristic of broths and cooked meats. Foods that have a strong umami flavor include cheese, meat and mushrooms.
While most animals' taste buds are located in their mouths, some insects' taste receptors are located on their legs and some fish have taste buds along their entire bodies. Dogs, cats and birds have relatively few taste buds (chickens have about 30), adult humans have between 2,000 and 4,000, while catfish can have more than a million. Herbivores generally have more than carnivores, as they need to tell which plants may be poisonous. Not all mammals share the same tastes: some rodents can taste starch, cats cannot taste sweetness, and several carnivores (including hyenas, dolphins, and sea lions) have lost the ability to sense up to four of the five taste modalities found in humans.
Digestion
Food is broken into nutrient components through the digestive process. Proper digestion consists of mechanical processes (chewing, peristalsis) and chemical processes (digestive enzymes and microorganisms). The digestive systems of herbivores and carnivores are very different, as plant matter is harder to digest. Carnivores' mouths are designed for tearing and biting, compared to the grinding action found in herbivores. Herbivores, however, have comparatively longer digestive tracts and larger stomachs to aid in digesting the cellulose in plants.
Food safety
According to the World Health Organization (WHO), about 600 million people worldwide get sick and 420,000 die each year from eating contaminated food. Diarrhea is the most common illness caused by consuming contaminated food, with about 550 million cases and 230,000 deaths from diarrhea each year. Children under five years of age account for 40% of the burden of foodborne illness, with 125,000 deaths each year.
A 2003 World Health Organization (WHO) report concluded that about 30% of reported food poisoning outbreaks in the WHO European Region occur in private homes. According to the WHO and CDC, in the USA alone, annually, there are 76 million cases of foodborne illness leading to 325,000 hospitalizations and 5,000 deaths.
From 2011 to 2016, on average, there were 668,673 cases of foodborne illness and 21 deaths each year. In addition, during this period, 1,007 food poisoning outbreaks with 30,395 cases of food poisoning were reported.
See also
Food pairing
List of food and drink monuments
References
Further reading
Collingham, E. M. (2011). The Taste of War: World War Two and the Battle for Food.
Katz, Solomon (2003). The Encyclopedia of Food and Culture. Scribner.
Mobbs, Michael (2012). Sustainable Food. Sydney: NewSouth Publishing.
Nestle, Marion (2007). Food Politics: How the Food Industry Influences Nutrition and Health. University Presses of California, revised and expanded edition.
The Future of Food (2015). A panel discussion at the 2015 Digital Life Design (DLD) Annual Conference. "How can we grow and enjoy food, closer to home, further into the future? MIT Media Lab's Kevin Slavin hosts a conversation with food artist, educator, and entrepreneur Emilie Baltz, professor Caleb Harper from MIT Media Lab's CityFarm project, the Barbarian Group's Benjamin Palmer, and Andras Forgacs, the co-founder and CEO of Modern Meadow, who is growing 'victimless' meat in a lab. The discussion addresses issues of sustainable urban farming, ecosystems, technology, food supply chains and their broad environmental and humanitarian implications, and how these changes in food production may change what people may find delicious ... and the other way around." Posted on the official YouTube Channel of DLD
External links
Food Timeline
Food, BBC Radio 4 discussion with Rebecca Spang, Ivan Day and Felipe Fernandez-Armesto (In Our Time, 27 December 2001)
Food watchlist articles
Asymptomatic
Asymptomatic (or clinically silent) is an adjective describing medical conditions (i.e., injuries or diseases) that patients carry without experiencing their symptoms, despite an explicit diagnosis (e.g., a positive medical test).
Pre-symptomatic is the adjective describing the time period during which a medical condition is asymptomatic.
Subclinical and paucisymptomatic are other adjectives, describing either asymptomatic infections (i.e., subclinical infections), or psychosomatic illnesses and mental disorders that express only a subset of the symptoms required for an explicit medical diagnosis.
Examples
An example of an asymptomatic disease is cytomegalovirus (CMV) which is a member of the herpes virus family. "It is estimated that 1% of all newborns are infected with CMV, but the majority of infections are asymptomatic." (Knox, 1983; Kumar et al. 1984) In some diseases, the proportion of asymptomatic cases can be important. For example, in multiple sclerosis it is estimated that around 25% of the cases are asymptomatic, with these cases detected postmortem or just by coincidence (as incidental findings) while treating other diseases.
Importance
Knowing that a condition is asymptomatic is important because:
It may be contagious, and the contribution of asymptomatic and pre-symptomatic infections to the transmission level of a disease helps set the required control measures to keep it from spreading.
It is not required that a person undergo treatment. It does not cause later medical problems such as high blood pressure and hyperlipidaemia.
Be alert to possible problems: asymptomatic hypothyroidism makes a person vulnerable to Wernicke–Korsakoff syndrome or beri-beri following intravenous glucose.
For some conditions, treatment during the asymptomatic phase is vital. If one waits until symptoms develop, it is too late for survival or to prevent damage.
Mental health
Subclinical or subthreshold conditions are those for which the full diagnostic criteria are not met and have not been met in the past, although symptoms are present. This can mean that symptoms are not severe enough to merit a diagnosis, or that symptoms are severe but do not meet the criteria of a condition.
List
These are conditions for which there are enough documented asymptomatic individuals that the asymptomatic form is clinically noted. For a complete list of asymptomatic infections see subclinical infection.
Balanitis xerotica obliterans
Benign lymphoepithelial lesion
Cardiac shunt
Carotid artery dissection
Carotid bruit
Cavernous hemangioma
Chloromas (Myeloid sarcoma)
Cholera
Chronic myelogenous leukemia
Coeliac disease
Coronary artery disease
Coronavirus disease 2019
Cowpox
Diabetic retinopathy
Essential fructosuria
Flu or Influenza strains
Folliculosebaceous cystic hamartoma
Glioblastoma multiforme (occasionally)
Glucocorticoid remediable aldosteronism
Glucose-6-phosphate dehydrogenase deficiency
Hepatitis
Hereditary elliptocytosis
Herpes
Heterophoria
Human coronaviruses (common cold germs)
Hypertension (high blood pressure)
Histidinemia
HIV (AIDS)
HPV
Hyperaldosteronism
Hyperlipidaemia
Hyperprolinemia type I
Hypothyroidism
Hypoxia (some cases)
Idiopathic thrombocytopenic purpura
Iridodialysis (when small)
Lesch–Nyhan syndrome (female carriers)
Levo-Transposition of the great arteries
Measles
Meckel's diverticulum
Microvenular hemangioma
Mitral valve prolapse
Monkeypox
Monoclonal B-cell lymphocytosis
Myelolipoma
Nonalcoholic fatty liver disease
Optic disc pit
Osteoporosis
Pertussis (whooping cough)
Pes cavus
Poliomyelitis
Polyorchidism
Pre-eclampsia
Prehypertension
Protrusio acetabuli
Pulmonary contusion
Renal tubular acidosis
Rubella
Smallpox (eradicated since 1980)
Spermatocele
Sphenoid wing meningioma
Spider angioma
Splenic infarction (though not typically)
Subarachnoid hemorrhage
Tonsillolith
Tuberculosis
Type II diabetes
Typhus
Vaginal intraepithelial neoplasia
Varicella (chickenpox)
Wilson's disease
Some women report a lack of symptoms during pregnancy until the point of childbirth or the beginning of labor, not knowing they were pregnant; this phenomenon is known as cryptic pregnancy.
See also
Symptomatic
Subclinical infection
References
Medical terminology
Symptoms
Heat illness
Heat illness is a spectrum of disorders due to increased body temperature. It can be caused by either environmental conditions or by exertion. It includes minor conditions such as heat cramps, heat syncope, and heat exhaustion as well as the more severe condition known as heat stroke. It can affect any or all anatomical systems. Heat illnesses include: heat stroke, heat exhaustion, heat syncope, heat edema, heat cramps, heat rash, heat tetany.
Prevention includes avoiding medications that can increase the risk of heat illness, gradual adjustment to heat, and sufficient fluids and electrolytes.
Classification
A number of heat illnesses exist including:
Heat stroke - Defined by a body temperature of greater than 40.0 °C (104 °F) due to environmental heat exposure with lack of thermoregulation. Symptoms include dry skin, a rapid, strong pulse, and dizziness.
Heat exhaustion - Can be a precursor of heatstroke; the symptoms include heavy sweating, rapid breathing and a fast, weak pulse.
Heat syncope - Fainting or dizziness as a result of overheating.
Heat edema - Swelling of extremities due to water retention following dilation of blood vessels in response to heat.
Heat cramps - Muscle pains that happen during heavy exercise in hot weather.
Heat rash - Skin irritation from excessive sweating.
Heat tetany - Usually results from short periods of stress in intense heat. Symptoms may include hyperventilation, respiratory problems, numbness or tingling, or muscle spasms.
Overview of diseases
Hyperthermia, also known as heat stroke, becomes commonplace during periods of sustained high temperature and humidity. Older adults, very young children, and those who are sick or overweight are at a higher risk for heat-related illness. The chronically ill and elderly are often taking prescription medications (e.g., diuretics, anticholinergics, antipsychotics, and antihypertensives) that interfere with the body's ability to dissipate heat.
Heat edema presents as a transient swelling of the hands, feet, and ankles and is generally secondary to increased aldosterone secretion, which enhances water retention. When combined with peripheral vasodilation and venous stasis, the excess fluid accumulates in the dependent areas of the extremities. The heat edema usually resolves within several days after the patient becomes acclimated to the warmer environment. No treatment is required, although wearing support stockings and elevating the affected legs will help minimize the edema.
Heat rash, also known as prickly heat, is a maculopapular rash accompanied by acute inflammation and blocked sweat ducts. The sweat ducts may become dilated and may eventually rupture, producing small pruritic vesicles on an erythematous base. Heat rash affects areas of the body covered by tight clothing. If this continues for a duration of time it can lead to the development of chronic dermatitis or a secondary bacterial infection. Prevention is the best therapy. It is also advised to wear loose-fitting clothing in the heat. Once heat rash has developed, the initial treatment involves the application of chlorhexidine lotion to remove any desquamated skin. The associated itching may be treated with topical or systemic antihistamines. If infection occurs a regimen of antibiotics is required.
Heat cramps are painful, often severe, involuntary spasms of the large muscle groups used in strenuous exercise. Heat cramps tend to occur after intense exertion. They usually develop in people performing heavy exercise while sweating profusely and replenishing fluid loss with non-electrolyte containing water. This is believed to lead to hyponatremia that induces cramping in stressed muscles. Rehydration with salt-containing fluids provides rapid relief. Patients with mild cramps can be given oral 0.2% salt solutions, while those with severe cramps require IV isotonic fluids. The many sport drinks on the market are a good source of electrolytes and are readily accessible.
Heat syncope is related to heat exposure that produces orthostatic hypotension. This hypotension can precipitate a near-syncopal episode. Heat syncope is believed to result from intense sweating, which leads to dehydration, followed by peripheral vasodilation and reduced venous blood return in the face of decreased vasomotor control. Management of heat syncope consists of cooling and rehydration of the patient using oral rehydration therapy (sport drinks) or isotonic IV fluids. People who experience heat syncope should avoid standing in the heat for long periods of time. They should move to a cooler environment and lie down if they recognize the initial symptoms. Wearing support stockings and engaging in deep knee-bending movements can help promote venous blood return.
Heat exhaustion is considered by experts to be the forerunner of heat stroke (hyperthermia). It may even resemble heat stroke, with the difference being that the neurologic function remains intact. Heat exhaustion is marked by excessive dehydration and electrolyte depletion. Symptoms may include diarrhea, headache, nausea and vomiting, dizziness, tachycardia, malaise, and myalgia. Definitive therapy includes removing patients from the heat and replenishing their fluids. Most patients will require fluid replacement with IV isotonic fluids at first. The salt content is adjusted as necessary once the electrolyte levels are known. After discharge from the hospital, patients are instructed to rest, drink plenty of fluids for 2–3 hours, and avoid the heat for several days. If this advice is not followed it may then lead to heat stroke.
Symptoms
Increased temperatures have been reported to cause heat stroke, heat exhaustion, heat syncope, and heat cramps. Some studies have also looked at how severe heat stroke can lead to permanent damage to organ systems. This damage can increase the risk of early mortality because the damage can cause severe impairment in organ function. Other complications of heat stroke include respiratory distress syndrome in adults and disseminated intravascular coagulation. Some researchers have noted that any compromise to the human body's ability to thermoregulate would in theory increase risk of mortality. This includes illnesses that may affect a person's mobility, awareness, or behavior.
Prevention
Prevention includes avoiding medications that can increase the risk of heat illness (e.g. antihypertensives, diuretics, and anticholinergics), gradual adjustment to heat, and sufficient fluids and electrolytes.
Some common medications that have an effect on thermoregulation can also increase the risk of mortality. Specific examples include anticholinergics, diuretics, phenothiazines and barbiturates.
Epidemiology
Heat stroke is relatively common in sports. About 2 percent of sports-related deaths that occurred in the United States between 1980 and 2006 were caused by exertional heat stroke. Football in the United States has the highest rates. The month of August, which is associated with pre-season football camps across the country, accounts for 66.3% of exertional heat-related illness time-loss events. Heat illness is not limited geographically and is widely distributed throughout the United States. An average of 5,946 persons were treated annually in US hospital emergency departments (2 visits per 100,000 population), with a hospitalization rate of 7.1%. Males account for 72.5% of cases, and persons 15–19 years of age for 35.6%. Among all high school athletes, heat illness occurs at a rate of 1.2 per 100,000 athletes. When comparing risk by sport, football players were 11.4 times more likely than athletes in all other sports combined to experience an exertional heat illness.
Between 1999 and 2003, the US had a total of 3,442 deaths from heat illness. Those who work outdoors are at particular risk for heat illness, though those who work in poorly-cooled indoor spaces are also at risk. Between 1992 and 2006, 423 workers died from heat illness in the US. In 2015, exposure to environmental heat led to 37 work-related deaths and 2,830 nonfatal occupational injuries and illnesses involving days away from work. Kansas had the highest rate of heat-related injury on the job, at 1.3 per 10,000 workers, while Texas had the most cases overall; due to Texas's much larger population, its rate was only 0.4 per 10,000 (4 per 100,000). Of the 37 reported deaths from heat illness, 33 occurred between the summer months of June and September. The most affected occupation group was transportation and material moving, which accounted for 720 of the 2,830 reported nonfatal occupational injuries (25.4 percent), followed by production; protective services; installation, maintenance, and repair; and construction.
Effects of climate change
A 2016 U.S. government report said that climate change could result in "tens of thousands of additional premature deaths per year across the United States by the end of this century." Indeed, between 2014 and 2017, heat exposure deaths tripled in Arizona (76 deaths in 2014; 235 deaths in 2017) and increased fivefold in Nevada (29 deaths in 2014; 139 deaths in 2017).
History
Heat illness used to be blamed on a tropical fever named calenture.
See also
Occupational heat stress
References
External links
"Heat Exhaustion" on Medicine.net
Emergency medicine
Effects of external causes
Thermoregulation
Encephalopathy
Encephalopathy means any disorder or disease of the brain, especially chronic degenerative conditions. In modern usage, encephalopathy does not refer to a single disease, but rather to a syndrome of overall brain dysfunction; this syndrome has many possible organic and inorganic causes.
Types
There are many types of encephalopathy. Some examples include:
Mitochondrial encephalopathy: Metabolic disorder caused by dysfunction of mitochondrial DNA. Can affect many body systems, particularly the brain and nervous system.
Acute necrotizing encephalopathy, rare disease that occurs following a viral infection.
Glycine encephalopathy: A genetic metabolic disorder involving excess production of glycine.
Hepatic encephalopathy: Arising from advanced cirrhosis of the liver.
Hypoxic ischemic encephalopathy: Permanent or transitory encephalopathy arising from severely reduced oxygen delivery to the brain.
Static encephalopathy: Unchanging, or permanent, brain damage, usually caused by prenatal exposure to ethanol.
Uremic encephalopathy: Arising from high levels of toxins normally cleared by the kidneys—rare where dialysis is readily available.
Wernicke's encephalopathy: Arising from thiamine (B1) deficiency, usually in the setting of alcoholism.
Hashimoto's encephalopathy: Arising from an auto-immune disorder.
Anti-NMDA receptor encephalitis: An auto-immune encephalitis.
Hyperammonemia: A condition caused by high levels of ammonia, which is due to inborn errors of metabolism (including urea cycle disorder or multiple carboxylase deficiency), a diet with excessive levels of protein, deficiencies of specific nutrients such as arginine or biotin, or organ failure.
Hypertensive encephalopathy: Arising from acutely increased blood pressure.
Chronic traumatic encephalopathy: A progressive degenerative disease associated with repeated head trauma, often linked to contact sports.
Lyme encephalopathy: Arising from Lyme disease bacteria, including Borrelia burgdorferi.
Toxic encephalopathy: A form of encephalopathy caused by chemicals and prescription drugs, often resulting in permanent brain damage.
Toxic-metabolic encephalopathy: A catch-all for brain dysfunction caused by infection, organ failure, or intoxication.
Transmissible spongiform encephalopathy: A collection of diseases all caused by prions, and characterized by "spongy" brain tissue (riddled with holes), impaired locomotion or coordination, and a 100% mortality rate. Includes bovine spongiform encephalopathy (mad cow disease), scrapie, and kuru among others.
Neonatal encephalopathy (hypoxic-ischemic encephalopathy): An obstetric form, often occurring due to lack of oxygen in bloodflow to brain-tissue of the fetus during labour or delivery.
Salmonella encephalopathy: A form of encephalopathy caused by food poisoning (especially from peanuts and rotten meat), often resulting in permanent brain damage and nervous system disorders.
Encephalomyopathy: A combination of encephalopathy and myopathy. Causes may include mitochondrial disease (particularly MELAS) or chronic hypophosphatemia, as may occur in cystinosis.
Creutzfeldt–Jakob disease (CJD; transmissible spongiform encephalopathy).
HIV encephalopathy (encephalopathy associated with HIV infection and AIDS, characterized by atrophy and ill-defined white matter hyperintensity).
Sepsis-associated encephalopathy (this type can occur in the setting of apparent sepsis, severe burns, or trauma, even without clear identification of an infection).
Epileptic encephalopathies:
Early infantile epileptic encephalopathy (acquired or congenital abnormal cortical development).
Early myoclonic epileptic encephalopathy (possibly due to metabolic disorders).
Gluten encephalopathy: Focal abnormalities of the white matter (generally area of low perfusion) are appreciated through magnetic resonance. Migraine is the most common symptom reported.
BRAT1 Encephalopathy: An ultra-rare autosomal recessive neonatal encephalopathy.
Toxicity from chemotherapy
Chemotherapy medication, for example fludarabine, can cause a permanent severe global encephalopathy. Ifosfamide can cause a severe encephalopathy (but it can be reversible with stopping use of the drug and starting the use of methylene blue). Bevacizumab and other anti–vascular endothelial growth factor medication can cause posterior reversible encephalopathy syndrome.
Signs and symptoms
The hallmark of encephalopathy is an altered mental state or delirium. Characteristic of the altered mental state is impairment of the cognition, attention, orientation, sleep–wake cycle and consciousness. An altered state of consciousness may range from failure of selective attention to drowsiness. Hypervigilance may be present; with or without: cognitive deficits, headache, epileptic seizures, myoclonus (involuntary twitching of a muscle or group of muscles) or asterixis ("flapping tremor" of the hand when wrist is extended).
Depending on the type and severity of encephalopathy, common neurological symptoms are loss of cognitive function, subtle personality changes, and an inability to concentrate. Other neurological signs may include dysarthria, hypomimia, problems with movements (they can be clumsy or slow), ataxia, tremor. Other neurological signs may include involuntary grasping and sucking motions, nystagmus (rapid, involuntary eye movement), jactitation (restlessness while in bed), and respiratory abnormalities such as Cheyne-Stokes respiration (cyclic waxing and waning of tidal volume), apneustic respirations and post-hypercapnic apnea. Focal neurological deficits are less common.
Wernicke encephalopathy can co-occur with Korsakoff alcoholic syndrome, characterized by amnestic-confabulatory syndrome: retrograde amnesia, anterograde amnesia, confabulations (invented memories), poor recall and disorientation.
Anti-NMDA receptor encephalitis is the most common autoimmune encephalitis. It can cause paranoid and grandiose delusions, agitation, hallucinations (visual and auditory), bizarre behavior, fear, short-term memory loss, and confusion.
HIV encephalopathy can lead to dementia.
Diagnosis
Blood tests, cerebrospinal fluid examination by lumbar puncture (also known as spinal tap), brain imaging studies, electroencephalography (EEG), neuropsychological testing and similar diagnostic studies may be used to differentiate the various causes of encephalopathy.
Diagnosis is frequently clinical. That is, no set of tests give the diagnosis, but the entire presentation of the illness with nonspecific test results informs the experienced clinician of the diagnosis.
Treatment
Treatment varies according to the type and severity of the encephalopathy. Anticonvulsants may be prescribed to reduce or halt any seizures. Changes to diet and nutritional supplements may help some people. In severe cases, dialysis or organ replacement surgery may be needed.
Sympathomimetic drugs can increase motivation, cognition, motor performance and alertness in persons with encephalopathy caused by brain injury, chronic infections, strokes, brain tumors.
When the encephalopathy is caused by untreated celiac disease or non-celiac gluten sensitivity, the gluten-free diet stops the progression of brain damage and improves the headaches.
Prognosis
Treating the underlying cause of the disorder may improve or reverse symptoms. However, in some cases, the encephalopathy may cause permanent structural changes and irreversible damage to the brain. These permanent deficits can be considered a form of stable dementia. Some encephalopathies can be fatal.
Terminology
Encephalopathy is a difficult term because it can be used to denote either a disease or finding (i.e., an observable sign in a person).
When referring to a finding, encephalopathy refers to permanent (or degenerative) brain injury, or a reversible one. It can be due to direct injury to the brain, or illness remote from the brain. The individual findings that cause a clinician to refer to a person as having encephalopathy include intellectual disability, irritability, agitation, delirium, confusion, somnolence, stupor, coma and psychosis. As such, describing a person as having a clinical picture of encephalopathy is not a very specific description.
When referring to a disease, encephalopathy refers to a wide variety of brain disorders with very different etiologies, prognoses and implications. For example, prion diseases, all of which cause transmissible spongiform encephalopathies, are invariably fatal, but other encephalopathies are reversible and can have a number of causes including nutritional deficiencies and toxins.
See also
Brain damage
Neuroscience
Neurological disorder
Psychoorganic syndrome
References
Adapted from:
Further reading
The Diagnosis of Stupor and Coma by Plum and Posner, , remains one of the best detailed observational references to the condition.
Brain disorders
Breathing
Breathing (spiration or ventilation) is the rhythmical process of moving air into (inhalation) and out of (exhalation) the lungs to facilitate gas exchange with the internal environment, mostly to flush out carbon dioxide and bring in oxygen.
All aerobic creatures need oxygen for cellular respiration, which extracts energy from the reaction of oxygen with molecules derived from food and produces carbon dioxide as a waste product. Breathing, or external respiration, brings air into the lungs where gas exchange takes place in the alveoli through diffusion. The body's circulatory system transports these gases to and from the cells, where cellular respiration takes place.
The breathing of all vertebrates with lungs consists of repetitive cycles of inhalation and exhalation through a highly branched system of tubes or airways which lead from the nose to the alveoli. The number of respiratory cycles per minute is the breathing or respiratory rate, and is one of the four primary vital signs of life. Under normal conditions the breathing depth and rate are automatically, and unconsciously, controlled by several homeostatic mechanisms which keep the partial pressures of carbon dioxide and oxygen in the arterial blood constant. Keeping the partial pressure of carbon dioxide in the arterial blood unchanged under a wide variety of physiological circumstances contributes significantly to tight control of the pH of the extracellular fluids (ECF). Over-breathing (hyperventilation) decreases the arterial partial pressure of carbon dioxide, causing a rise in the pH of the ECF, while under-breathing (hypoventilation) increases it and lowers the pH of the ECF. Both cause distressing symptoms.
Breathing has other important functions. It provides a mechanism for speech, laughter and similar expressions of the emotions. It is also used for reflexes such as yawning, coughing and sneezing. Animals that cannot thermoregulate by perspiration, because they lack sufficient sweat glands, may lose heat by evaporation through panting.
Mechanics
The lungs are not capable of inflating themselves, and will expand only when there is an increase in the volume of the thoracic cavity. In humans, as in the other mammals, this is achieved primarily through the contraction of the diaphragm, but also by the contraction of the intercostal muscles which pull the rib cage upwards and outwards. During forceful inhalation the accessory muscles of inhalation, which connect the ribs and sternum to the cervical vertebrae and base of the skull, in many cases through an intermediary attachment to the clavicles, exaggerate the pump handle and bucket handle movements of the ribs, bringing about a greater change in the volume of the chest cavity. During exhalation (breathing out), at rest, all the muscles of inhalation relax, returning the chest and abdomen to a position called the "resting position", which is determined by their anatomical elasticity. At this point the lungs contain the functional residual capacity of air, which, in the adult human, has a volume of about 2.5–3.0 liters.
During heavy breathing (hyperpnea) as, for instance, during exercise, exhalation is brought about by relaxation of all the muscles of inhalation, (in the same way as at rest), but, in addition, the abdominal muscles, instead of being passive, now contract strongly causing the rib cage to be pulled downwards (front and sides). This not only decreases the size of the rib cage but also pushes the abdominal organs upwards against the diaphragm which consequently bulges deeply into the thorax. The end-exhalatory lung volume is now less than the resting "functional residual capacity". However, in a normal mammal, the lungs cannot be emptied completely. In an adult human, there is always still at least one liter of residual air left in the lungs after maximum exhalation.
Diaphragmatic breathing causes the abdomen to rhythmically bulge out and fall back. It is, therefore, often referred to as "abdominal breathing". These terms are often used interchangeably because they describe the same action.
When the accessory muscles of inhalation are activated, especially during labored breathing, the clavicles are pulled upwards, as explained above. This external manifestation of the use of the accessory muscles of inhalation is sometimes referred to as clavicular breathing, seen especially during asthma attacks and in people with chronic obstructive pulmonary disease.
Passage of air
Upper airways
Ideally, air is breathed first out and secondly in through the nose. The nasal cavities (between the nostrils and the pharynx) are quite narrow, firstly by being divided in two by the nasal septum, and secondly by lateral walls that have several longitudinal folds, or shelves, called nasal conchae, thus exposing a large area of nasal mucous membrane to the air as it is inhaled (and exhaled). This causes the inhaled air to take up moisture from the wet mucus, and warmth from the underlying blood vessels, so that the air is very nearly saturated with water vapor and is at almost body temperature by the time it reaches the larynx. Part of this moisture and heat is recaptured as the exhaled air moves out over the partially dried-out, cooled mucus in the nasal passages, during exhalation. The sticky mucus also traps much of the particulate matter that is breathed in, preventing it from reaching the lungs.
Lower airways
The anatomy of a typical mammalian respiratory system, below the structures normally listed among the "upper airways" (the nasal cavities, the pharynx, and larynx), is often described as a respiratory tree or tracheobronchial tree. Larger airways give rise to branches that are slightly narrower, but more numerous than the "trunk" airway that gives rise to the branches. The human respiratory tree may consist of, on average, 23 such branchings into progressively smaller airways, while the respiratory tree of the mouse has up to 13 such branchings. Proximal divisions (those closest to the top of the tree, such as the trachea and bronchi) function mainly to transmit air to the lower airways. Later divisions such as the respiratory bronchioles, alveolar ducts and alveoli are specialized for gas exchange.
The trachea and the first portions of the main bronchi are outside the lungs. The rest of the "tree" branches within the lungs, and ultimately extends to every part of the lungs.
The alveoli are the blind-ended terminals of the "tree", meaning that any air that enters them has to exit the same way it came. A system such as this creates dead space, a term for the volume of air that fills the airways at the end of inhalation, and is breathed out, unchanged, during the next exhalation, never having reached the alveoli. Similarly, the dead space is filled with alveolar air at the end of exhalation, which is the first air to be breathed back into the alveoli during inhalation, before any fresh air which follows after it. The dead space volume of a typical adult human is about 150 ml.
Gas exchange
The primary purpose of breathing is to refresh air in the alveoli so that gas exchange can take place in the blood. The equilibration of the partial pressures of the gases in the alveolar blood and the alveolar air occurs by diffusion. After exhaling, adult human lungs still contain 2.5–3 L of air, their functional residual capacity or FRC. On inhalation, only about 350 mL of new, warm, moistened atmospheric air is brought in and is well mixed with the FRC. Consequently, the gas composition of the FRC changes very little during the breathing cycle. This means that the pulmonary capillary blood always equilibrates with a relatively constant air composition in the lungs and the diffusion rate with arterial blood gases remains equally constant with each breath. Body tissues are therefore not exposed to large swings in oxygen and carbon dioxide tensions in the blood caused by the breathing cycle, and the peripheral and central chemoreceptors measure only gradual changes in dissolved gases. Thus the homeostatic control of the breathing rate depends only on the partial pressures of oxygen and carbon dioxide in the arterial blood, which then also maintains a constant pH of the blood.
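The arithmetic behind this stability can be illustrated with a minimal sketch in Python, assuming a single well-mixed alveolar compartment and using the roughly 350 mL of fresh air per breath and 2.5–3.0 L functional residual capacity quoted above; the variable names and the mid-range FRC value are illustrative assumptions.

```python
FRESH_AIR_PER_BREATH_ML = 350   # new atmospheric air reaching the alveoli per breath (from the text)
FRC_ML = 2750                   # functional residual capacity, mid-range of the 2.5-3.0 L quoted above

# Fraction of the alveolar gas replaced by fresh air in one breath, assuming
# the fresh air mixes completely with the functional residual capacity.
refresh_fraction = FRESH_AIR_PER_BREATH_ML / (FRC_ML + FRESH_AIR_PER_BREATH_ML)
print(f"{refresh_fraction:.1%}")  # roughly 11% of the alveolar gas is renewed per breath
```

On these assumptions only about a tenth of the alveolar gas is renewed with each breath, which is why the alveolar, and hence arterial, gas tensions change so little over a breathing cycle.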
Control
The rate and depth of breathing is automatically controlled by the respiratory centers that receive information from the peripheral and central chemoreceptors. These chemoreceptors continuously monitor the partial pressures of carbon dioxide and oxygen in the arterial blood. The first of these sensors are the central chemoreceptors on the surface of the medulla oblongata of the brain stem which are particularly sensitive to pH as well as the partial pressure of carbon dioxide in the blood and cerebrospinal fluid. The second group of sensors measure the partial pressure of oxygen in the arterial blood. Together the latter are known as the peripheral chemoreceptors, and are situated in the aortic and carotid bodies. Information from all of these chemoreceptors is conveyed to the respiratory centers in the pons and medulla oblongata, which responds to fluctuations in the partial pressures of carbon dioxide and oxygen in the arterial blood by adjusting the rate and depth of breathing, in such a way as to restore the partial pressure of carbon dioxide to 5.3 kPa (40 mm Hg), the pH to 7.4 and, to a lesser extent, the partial pressure of oxygen to 13 kPa (100 mm Hg). For example, exercise increases the production of carbon dioxide by the active muscles. This carbon dioxide diffuses into the venous blood and ultimately raises the partial pressure of carbon dioxide in the arterial blood. This is immediately sensed by the carbon dioxide chemoreceptors on the brain stem. The respiratory centers respond to this information by causing the rate and depth of breathing to increase to such an extent that the partial pressures of carbon dioxide and oxygen in the arterial blood return almost immediately to the same levels as at rest. The respiratory centers communicate with the muscles of breathing via motor nerves, of which the phrenic nerves, which innervate the diaphragm, are probably the most important.
Automatic breathing can be overridden to a limited extent by simple choice, or to facilitate swimming, speech, singing or other vocal training. It is impossible to suppress the urge to breathe to the point of hypoxia but training can increase the ability to hold one's breath. Conscious breathing practices have been shown to promote relaxation and stress relief but have not been proven to have any other health benefits.
Other automatic breathing control reflexes also exist. Submersion, particularly of the face, in cold water, triggers a response called the diving reflex. This has the initial result of shutting down the airways against the influx of water. The metabolic rate slows down. This is coupled with intense vasoconstriction of the arteries to the limbs and abdominal viscera, reserving the oxygen that is in blood and lungs at the beginning of the dive almost exclusively for the heart and the brain. The diving reflex is an often-used response in animals that routinely need to dive, such as penguins, seals and whales. It is also more effective in very young infants and children than in adults.
Composition
Inhaled air is by volume 78% nitrogen, 20.95% oxygen and small amounts of other gases including argon, carbon dioxide, neon, helium, and hydrogen.
The gas exhaled is 4% to 5% by volume of carbon dioxide, about a hundredfold increase over the inhaled amount. The volume of oxygen is reduced by about a quarter of its inhaled value, a fall of 4 to 5 percentage points of the total air volume. The typical composition is:
5.0–6.3% water vapor
79% nitrogen
13.6–16.0% oxygen
4.0–5.3% carbon dioxide
1% argon
parts per million (ppm) of hydrogen, from the metabolic activity of microorganisms in the large intestine.
ppm of carbon monoxide from degradation of heme proteins.
4.5 ppm of methanol
1 ppm of ammonia.
Traces of many hundreds of volatile organic compounds, especially isoprene and acetone. The presence of certain organic compounds indicates disease.
In addition to air, underwater divers practicing technical diving may breathe oxygen-rich, oxygen-depleted or helium-rich breathing gas mixtures. Oxygen and analgesic gases are sometimes given to patients under medical care. The atmosphere in space suits is pure oxygen. However, this is kept at around 20% of Earthbound atmospheric pressure to regulate the rate of inspiration.
Effects of ambient air pressure
Breathing at altitude
Atmospheric pressure decreases with the height above sea level (altitude) and since the alveoli are open to the outside air through the open airways, the pressure in the lungs also decreases at the same rate with altitude. At altitude, a pressure differential is still required to drive air into and out of the lungs as it is at sea level. The mechanism for breathing at altitude is essentially identical to breathing at sea level but with the following differences:
The atmospheric pressure decreases exponentially with altitude, roughly halving with every 5,500 m rise in altitude. The composition of atmospheric air is, however, almost constant below 80 km, as a result of the continuous mixing effect of the weather. The concentration of oxygen in the air (mmols O2 per liter of air) therefore decreases at the same rate as the atmospheric pressure. At sea level, where the ambient pressure is about 100 kPa, oxygen constitutes 21% of the atmosphere and the partial pressure of oxygen is 21 kPa (i.e. 21% of 100 kPa). At the summit of Mount Everest, at 8,848 m, where the total atmospheric pressure is 33.7 kPa, oxygen still constitutes 21% of the atmosphere but its partial pressure is only 7.1 kPa (i.e. 21% of 33.7 kPa = 7.1 kPa). Therefore, a greater volume of air must be inhaled at altitude than at sea level in order to breathe in the same amount of oxygen in a given period.
During inhalation, air is warmed and saturated with water vapor as it passes through the nose and pharynx before it enters the alveoli. The saturated vapor pressure of water is dependent only on temperature; at a body core temperature of 37 °C it is 6.3 kPa (47.0 mmHg), regardless of any other influences, including altitude. Consequently, at sea level, the tracheal air (immediately before the inhaled air enters the alveoli) consists of: water vapor (partial pressure 6.3 kPa), nitrogen (74.0 kPa), oxygen (19.7 kPa) and trace amounts of carbon dioxide and other gases, a total of 100 kPa. In dry air, the partial pressure of oxygen at sea level is 21.0 kPa, compared to 19.7 kPa in the tracheal air (21% of [100 – 6.3] = 19.7 kPa). At the summit of Mount Everest tracheal air has a total pressure of 33.7 kPa, of which 6.3 kPa is water vapor, reducing the partial pressure of oxygen in the tracheal air to 5.8 kPa (21% of [33.7 – 6.3] = 5.8 kPa), beyond what is accounted for by a reduction of atmospheric pressure alone (7.1 kPa). These figures are reproduced in the short sketch after this list of differences.
The pressure gradient forcing air into the lungs during inhalation is also reduced by altitude. Doubling the volume of the lungs halves the pressure in the lungs at any altitude. Halving the sea level air pressure (100 kPa) results in a pressure gradient of 50 kPa, but doing the same at 5,500 m, where the atmospheric pressure is 50 kPa, a doubling of the volume of the lungs results in a pressure gradient of only 25 kPa. In practice, because we breathe in a gentle, cyclical manner that generates pressure gradients of only 2–3 kPa, this has little effect on the actual rate of inflow into the lungs and is easily compensated for by breathing slightly deeper. The lower viscosity of air at altitude allows air to flow more easily and this also helps compensate for any loss of pressure gradient.
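The partial-pressure arithmetic in the preceding paragraphs can be written out as a minimal sketch, assuming dry air containing 21% oxygen and a saturated water vapour pressure of 6.3 kPa at body temperature (both figures taken from the text); the function names and the choice of example pressures are illustrative.

```python
O2_FRACTION = 0.21        # oxygen fraction of dry atmospheric air (from the text)
P_H2O_37C_KPA = 6.3       # saturated water vapour pressure at body core temperature (from the text)

def ambient_po2_kpa(ambient_pressure_kpa):
    """Partial pressure of oxygen in dry ambient air."""
    return O2_FRACTION * ambient_pressure_kpa

def tracheal_po2_kpa(ambient_pressure_kpa):
    """Partial pressure of oxygen after the inhaled air is saturated with water vapour."""
    return O2_FRACTION * (ambient_pressure_kpa - P_H2O_37C_KPA)

for label, pressure in [("sea level", 100.0), ("Mount Everest summit", 33.7)]:
    print(label, round(ambient_po2_kpa(pressure), 1), round(tracheal_po2_kpa(pressure), 1))
# sea level: 21.0 kPa ambient, 19.7 kPa tracheal
# Everest summit: 7.1 kPa ambient, 5.8 kPa tracheal
```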
All of the above effects of low atmospheric pressure on breathing are normally accommodated by increasing the respiratory minute volume (the volume of air breathed in, or out, per minute), and the mechanism for doing this is automatic. The exact increase required is determined by the respiratory gases homeostatic mechanism, which regulates the arterial partial pressures of carbon dioxide and oxygen. This homeostatic mechanism prioritizes the regulation of the arterial partial pressure of carbon dioxide over that of oxygen at sea level. That is to say, at sea level the arterial partial pressure of carbon dioxide is maintained at very close to 5.3 kPa (or 40 mmHg) under a wide range of circumstances, at the expense of the arterial partial pressure of oxygen, which is allowed to vary within a very wide range of values before eliciting a corrective ventilatory response. However, when the atmospheric pressure (and therefore the atmospheric partial pressure of oxygen) falls to below 75% of its value at sea level, oxygen homeostasis is given priority over carbon dioxide homeostasis. This switch-over occurs at an elevation of about 2,500 m. If this switch occurs relatively abruptly, the hyperventilation at high altitude will cause a severe fall in the arterial partial pressure of carbon dioxide with a consequent rise in the pH of the arterial plasma, leading to respiratory alkalosis. This is one contributor to high altitude sickness. On the other hand, if the switch to oxygen homeostasis is incomplete, then hypoxia may complicate the clinical picture with potentially fatal results.
Breathing at depth
Pressure increases with the depth of water at the rate of about one atmosphere – slightly more than 100 kPa, or one bar, for every 10 meters. Air breathed underwater by divers is at the ambient pressure of the surrounding water and this has a complex range of physiological and biochemical implications. If not properly managed, breathing compressed gasses underwater may lead to several diving disorders which include pulmonary barotrauma, decompression sickness, nitrogen narcosis, and oxygen toxicity. The effects of breathing gasses under pressure are further complicated by the use of one or more special gas mixtures.
Air is provided by a diving regulator, which reduces the high pressure in a diving cylinder to the ambient pressure. The breathing performance of regulators is a factor when choosing a suitable regulator for the type of diving to be undertaken. It is desirable that breathing from a regulator requires low effort even when supplying large amounts of air. It is also recommended that it supplies air smoothly without any sudden changes in resistance while inhaling or exhaling. Breathing performance curves for a regulator typically show an initial spike in pressure on exhaling, needed to open the exhaust valve, and an initial drop in pressure on inhaling that is soon overcome by the Venturi effect designed into the regulator to allow an easy draw of air. Many regulators have an adjustment to change the ease of inhaling so that breathing is effortless.
Respiratory disorders
Abnormal breathing patterns include Kussmaul breathing, Biot's respiration and Cheyne–Stokes respiration.
Other breathing disorders include shortness of breath (dyspnea), stridor, apnea, sleep apnea (most commonly obstructive sleep apnea), mouth breathing, and snoring. Many conditions are associated with obstructed airways. Chronic mouth breathing may be associated with illness. Hypopnea refers to overly shallow breathing; hyperpnea refers to fast and deep breathing brought on by a demand for more oxygen, as for example by exercise. The terms hypoventilation and hyperventilation also refer to shallow breathing and fast and deep breathing respectively, but under inappropriate circumstances or disease. However, this distinction (between, for instance, hyperpnea and hyperventilation) is not always adhered to, so that these terms are frequently used interchangeably.
A range of breath tests can be used to diagnose diseases such as dietary intolerances.
A rhinomanometer uses acoustic technology to examine the air flow through the nasal passages.
Society and culture
The word "spirit" comes from the Latin spiritus, meaning breath. Historically, breath has often been considered in terms of the concept of life force. The Hebrew Bible refers to God breathing the breath of life into clay to make Adam a living soul (nephesh). It also refers to the breath as returning to God when a mortal dies. The terms spirit, prana, the Polynesian mana, the Hebrew ruach and the psyche in psychology are related to the concept of breath.
In tai chi, aerobic exercise is combined with breathing exercises to strengthen the diaphragm muscles, improve posture and make better use of the body's qi. Different forms of meditation, and yoga advocate various breathing methods. A form of Buddhist meditation called anapanasati meaning mindfulness of breath was first introduced by Buddha. Breathing disciplines are incorporated into meditation, certain forms of yoga such as pranayama, and the Buteyko method as a treatment for asthma and other conditions.
In music, some wind instrument players use a technique called circular breathing. Singers also rely on breath control.
Common cultural expressions related to breathing include: "to catch my breath", "took my breath away", "inspiration", "to expire", "get my breath back".
Breathing and mood
Certain breathing patterns have a tendency to occur with certain moods. Due to this relationship, practitioners of various disciplines consider that they can encourage the occurrence of a particular mood by adopting the breathing pattern that it most commonly occurs in conjunction with. For instance, and perhaps the most common recommendation is that deeper breathing which utilizes the diaphragm and abdomen more can encourage relaxation. Practitioners of different disciplines often interpret the importance of breathing regulation and its perceived influence on mood in different ways. Buddhists may consider that it helps precipitate a sense of inner-peace, holistic healers that it encourages an overall state of health and business advisers that it provides relief from work-based stress.
Breathing and physical exercise
During physical exercise, a deeper breathing pattern is adopted to facilitate greater oxygen absorption. An additional reason for the adoption of a deeper breathing pattern is to strengthen the body's core. During the process of deep breathing, the thoracic diaphragm adopts a lower position in the core and this helps to generate intra-abdominal pressure which strengthens the lumbar spine. Typically, this allows for more powerful physical movements to be performed. As such, it is frequently recommended when lifting heavy weights to take a deep breath or adopt a deeper breathing pattern.
See also
References
Further reading
External links
Respiration
Reflexes
Human body
Gases
Articles containing video clips
Metabolic acidosis
Metabolic acidosis is a serious electrolyte disorder characterized by an imbalance in the body's acid-base balance. Metabolic acidosis has three main root causes: increased acid production, loss of bicarbonate, and a reduced ability of the kidneys to excrete excess acids. Metabolic acidosis can lead to acidemia, which is defined as arterial blood pH that is lower than 7.35. Acidemia and acidosis are not mutually exclusive – pH and hydrogen ion concentrations also depend on the coexistence of other acid-base disorders; therefore, pH levels in people with metabolic acidosis can range from low to high.
Acute metabolic acidosis, lasting from minutes to several days, often occurs during serious illnesses or hospitalizations, and is generally caused when the body produces an excess amount of organic acids (ketoacids in ketoacidosis, or lactic acid in lactic acidosis). A state of chronic metabolic acidosis, lasting several weeks to years, can be the result of impaired kidney function (chronic kidney disease) and/or bicarbonate wasting. The adverse effects of acute versus chronic metabolic acidosis also differ, with acute metabolic acidosis impacting the cardiovascular system in hospital settings, and chronic metabolic acidosis affecting muscles, bones, kidney and cardiovascular health.
Signs and symptoms
Acute metabolic acidosis
Symptoms are not specific, and diagnosis can be difficult unless patients present with clear indications for blood gas sampling. Symptoms may include palpitations, headache, altered mental status such as severe anxiety due to hypoxia, decreased visual acuity, nausea, vomiting, abdominal pain, altered appetite and weight gain, muscle weakness, bone pain, and joint pain. People with acute metabolic acidosis may exhibit deep, rapid breathing called Kussmaul respirations which is classically associated with diabetic ketoacidosis. Rapid deep breaths increase the amount of carbon dioxide exhaled, thus lowering the serum carbon dioxide levels, resulting in some degree of compensation. Overcompensation via respiratory alkalosis to form an alkalemia does not occur.
Extreme acidemia can also lead to neurological and cardiac complications:
Neurological: lethargy, stupor, coma, seizures
Cardiac: Abnormal heart rhythms (e.g., ventricular tachycardia) and decreased response to epinephrine, both leading to low blood pressure
Physical examination can occasionally reveal signs of the disease, but is often otherwise normal. Cranial nerve abnormalities are reported in ethylene glycol poisoning, and retinal edema can be a sign of methanol intoxication.
Chronic metabolic acidosis
Chronic metabolic acidosis has non-specific clinical symptoms but can be readily diagnosed by testing serum bicarbonate levels in patients with chronic kidney disease (CKD) as part of a comprehensive metabolic panel. Patients with CKD Stages G3–G5 should be routinely screened for metabolic acidosis.
Diagnostic approach and causes
Metabolic acidosis results in a reduced serum pH that is due to metabolic and not respiratory dysfunction. Typically the serum bicarbonate concentration will be <22 mEq/L, below the normal range of 22 to 29 mEq/L, the standard base excess will be more negative than -2 (a base deficit), and the pCO2 will be reduced as a result of hyperventilation in an attempt to restore the pH closer to normal. Occasionally, in a mixed acid-base disorder where metabolic acidosis is not the primary disorder present, the pH may be normal or high. In the absence of chronic respiratory alkalosis, metabolic acidosis can be clinically diagnosed by analysis of the calculated serum bicarbonate level.
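The laboratory thresholds quoted above (serum bicarbonate below 22 mEq/L with an arterial pH below 7.35) can be expressed as a minimal screening sketch; the function name and the simple two-condition rule are illustrative assumptions, and, as noted above, a mixed acid-base disorder can hide a metabolic acidosis behind a normal or high pH, so such a rule is not a substitute for full acid-base analysis.

```python
def suggests_metabolic_acidosis(ph, hco3_meq_l):
    """Flag the simple case described in the text: low bicarbonate with acidemia.

    Mixed acid-base disorders can produce a normal or high pH despite a
    metabolic acidosis, so a negative result here does not exclude it.
    """
    low_bicarbonate = hco3_meq_l < 22   # below the 22-29 mEq/L reference range
    acidemia = ph < 7.35
    return low_bicarbonate and acidemia

print(suggests_metabolic_acidosis(7.28, 16))   # True
print(suggests_metabolic_acidosis(7.38, 25))   # False
```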
Causes
Generally, metabolic acidosis occurs when the body produces too much acid (e.g., lactic acidosis, see below section), there is a loss of bicarbonate from the blood, or when the kidneys are not removing enough acid from the body.
Chronic metabolic acidosis is most often caused by a decreased capacity of the kidneys to excrete excess acids through renal ammoniagenesis. The typical Western diet generates 75–100 mEq of acid daily, and individuals with normal kidney function increase the production of ammonia to get rid of this dietary acid. As kidney function declines, the tubules lose the ability to excrete excess acid, and this results in buffering of acid using serum bicarbonate, as well as bone and muscle stores.
There are many causes of acute metabolic acidosis, and thus it is helpful to group them by the presence or absence of a normal anion gap.
Increased anion gap
Causes of increased anion gap include:
Lactic acidosis
Ketoacidosis (e.g., Diabetic, alcoholic, or starvation)
Chronic kidney failure
5-oxoprolinemia due to long-term ingestion of high-doses of acetaminophen with glutathione depletion (often seen with sepsis, liver failure, kidney failure, or malnutrition)
Intoxication:
Salicylates, methanol, ethylene glycol
Organic acids, paraldehyde, ethanol, formaldehyde
Carbon monoxide, cyanide, ibuprofen, metformin
Propylene glycol (metabolized to L and D-lactate and is often found in infusions for certain intravenous medications used in the intensive care unit)
Massive rhabdomyolysis
Isoniazid, iron, phenelzine, tranylcypromine, valproic acid, verapamil
Topiramate
Sulfates
Normal anion gap
Causes of normal anion gap include:
Inorganic acid addition
Infusion/ingestion of HCl,
Gastrointestinal base loss
Diarrhea
Small bowel fistula/drainage
Surgical diversion of urine into gut loops
Renal base loss/acid retention:
Proximal renal tubular acidosis
Distal renal tubular acidosis
Hyperalimentation
Addison disease
Acetazolamide
Spironolactone
Saline infusion
To distinguish between the main types of metabolic acidosis, a clinical tool called the anion gap is very useful. The anion gap is calculated by subtracting the sum of the serum concentrations of the major measured anions, chloride and bicarbonate, from the serum concentration of the major cation, sodium. (The serum potassium concentration may be added to the calculation, but this merely changes the normal reference range for what is considered a normal anion gap.)
Because the concentration of serum sodium is greater than the combined concentrations of chloride and bicarbonate, an 'anion gap' is noted. In reality serum is electroneutral because of the presence of other minor cations (potassium, calcium and magnesium) and anions (albumin, sulphate and phosphate) that are not measured in the equation that calculates the anion gap.
The normal value for the anion gap is 8–16 mmol/L (12±4). An elevated anion gap (i.e. > 16 mmol/L) indicates the presence of excess 'unmeasured' anions, such as lactic acid in anaerobic metabolism resulting from tissue hypoxia, glycolic and formic acid produced by the metabolism of toxic alcohols, ketoacids produced when acetyl-CoA undergoes ketogenesis rather than entering the tricarboxylic (Krebs) cycle, and failure of renal excretion of products of metabolism such as sulphates and phosphates.
Adjunctive tests are useful in determining the aetiology of a raised anion gap metabolic acidosis, including detection of an osmolar gap indicative of the presence of a toxic alcohol, measurement of serum ketones indicative of ketoacidosis, and renal function tests and urinalysis to detect renal dysfunction.
Elevated protein (albumin, globulins) may theoretically increase the anion gap but high levels are not usually encountered clinically. Hypoalbuminaemia, which is frequently encountered clinically, will mask an anion gap. As a rule of thumb, a decrease in serum albumin by 1 g/L will decrease the anion gap by 0.25 mmol/L.
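The anion gap calculation and the albumin rule of thumb described above can be summarised in a minimal sketch; the function names, the potassium-free form of the gap, and the assumed normal serum albumin of 40 g/L are illustrative assumptions rather than a standard clinical library.

```python
def anion_gap(na_mmol_l, cl_mmol_l, hco3_mmol_l):
    """Anion gap = [Na+] - ([Cl-] + [HCO3-]), all in mmol/L."""
    return na_mmol_l - (cl_mmol_l + hco3_mmol_l)

def albumin_corrected_gap(gap_mmol_l, albumin_g_l, normal_albumin_g_l=40):
    """Rule of thumb from the text: each 1 g/L fall in serum albumin
    masks about 0.25 mmol/L of anion gap, so add it back."""
    return gap_mmol_l + 0.25 * (normal_albumin_g_l - albumin_g_l)

# Example: Na 140, Cl 100, HCO3 15 mmol/L, albumin 28 g/L
gap = anion_gap(140, 100, 15)               # 25 mmol/L, raised (> 16 mmol/L reference limit)
corrected = albumin_corrected_gap(gap, 28)  # 28 mmol/L after the albumin correction
print(gap, corrected)
```

On these assumptions, a hypoalbuminaemic patient with an apparently borderline gap can still have a clearly raised corrected gap, consistent with the masking effect described above.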
Pathophysiology
Compensatory mechanisms
Metabolic acidosis is characterized by a low concentration of bicarbonate, which can happen with increased generation of acids (such as ketoacids or lactic acid), excess loss of bicarbonate by the kidneys or gastrointestinal tract, or an inability to generate sufficient bicarbonate. This demonstrates the importance of maintaining a balance between acids and bases in the body for the optimal functioning of organs, tissues and cells.
The body regulates the acidity of the blood by four buffering mechanisms.
Bicarbonate buffering system
Intracellular buffering by absorption of hydrogen ions by various molecules, including proteins, phosphates and carbonate in bone.
Respiratory compensation. Hyperventilation will cause more carbon dioxide to be removed from the body, thereby increasing pH.
Kidney compensation
Buffer
The decreased bicarbonate that distinguishes metabolic acidosis is therefore due to two separate processes: the buffer (from water and carbon dioxide) and additional renal generation. The buffer reactions are:
H⁺ + HCO₃⁻ ⇌ H₂CO₃ ⇌ CO₂ + H₂O
The Henderson–Hasselbalch equation mathematically describes the relationship between blood pH and the components of the bicarbonate buffering system:
pH = pKa + log10([HCO₃⁻] / [H₂CO₃]), where pKa ≈ 6.1 for the bicarbonate system.
In clinical practice, the H₂CO₃ (dissolved carbon dioxide) concentration is usually determined via Henry's law from PaCO₂, the carbon dioxide partial pressure in arterial blood:
[H₂CO₃] ≈ 0.03 mmol/L per mmHg × PaCO₂
For example, blood gas machines usually determine bicarbonate concentrations from measured pH and PaCO₂ values. Mathematically, the algorithm substitutes the Henry's law formula into the Henderson–Hasselbalch equation and then rearranges:
[HCO₃⁻] = 0.03 × PaCO₂ × 10^(pH − 6.1)
At sea level, normal numbers might be pH = 7.40 and PaCO₂ = 40 mmHg; these then imply [HCO₃⁻] ≈ 24 mmol/L.
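A minimal sketch of this back-calculation, assuming the conventional clinical constants (pKa of 6.1 and a CO₂ solubility of 0.03 mmol/L per mmHg); the example inputs are illustrative rather than measurements taken from the article.

```python
PKA = 6.1          # apparent pKa of the bicarbonate buffer system
SOLUBILITY = 0.03  # mmol/L of dissolved CO2 per mmHg of PaCO2 (Henry's law)

def bicarbonate_from_gas(ph, paco2_mmhg):
    """Back-calculate [HCO3-] (mmol/L) from measured pH and PaCO2,
    as blood gas analysers do: [HCO3-] = 0.03 * PaCO2 * 10**(pH - pKa)."""
    return SOLUBILITY * paco2_mmhg * 10 ** (ph - PKA)

# Normal sea-level numbers: pH 7.40, PaCO2 40 mmHg -> about 24 mmol/L
print(round(bicarbonate_from_gas(7.40, 40), 1))

# Metabolic acidosis with respiratory compensation: pH 7.25, PaCO2 26 mmHg -> about 11 mmol/L
print(round(bicarbonate_from_gas(7.25, 26), 1))
```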
Consequences
Acute metabolic acidosis
Acute metabolic acidosis most often occurs during hospitalizations and acute critical illnesses. It is often associated with a poor prognosis, with a mortality rate as high as 57% if a pH of 7.20 is left untreated. At lower pH levels, acute metabolic acidosis can lead to impaired circulation and end organ function.
Chronic metabolic acidosis
Chronic metabolic acidosis commonly occurs in people with chronic kidney disease (CKD) with an eGFR of less than 45 mL/min/1.73 m², most often with mild to moderate severity; however, metabolic acidosis can manifest earlier in the course of CKD. Multiple animal and human studies have shown that metabolic acidosis in CKD, given its chronic nature, has a profound adverse impact on cellular function, contributing to high overall morbidity in patients.
The most adverse consequences of chronic metabolic acidosis in people with CKD, and in particular, for those who have end-stage renal disease (ESRD), are detrimental changes to the bones and muscles. Acid buffering leads to loss of bone density, resulting in an increased risk of bone fractures, renal osteodystrophy, and bone disease; as well, increased protein catabolism leads to muscle wasting. Furthermore, metabolic acidosis in CKD is also associated with a reduction in eGFR; it is both a complication of CKD, as well as an underlying cause of CKD progression.
Treatment
Treatment of metabolic acidosis depends on the underlying cause, and should target reversing the main process. When considering course of treatment, it is important to distinguish between acute versus chronic forms.
Acute metabolic acidosis
Bicarbonate therapy is generally administered in patients with severe acute acidemia (pH < 7.11), or with less severe acidemia (pH 7.1–7.2) who have severe acute kidney injury. Bicarbonate therapy is not recommended for people with less severe acidosis (pH ≥ 7.1), unless severe acute kidney injury is present. In the BICAR-ICU trial, bicarbonate therapy for maintaining a pH >7.3 had no overall effect on the composite outcome of all-cause mortality and the presence of at least one organ failure at day 7. However, amongst the sub-group of patients with severe acute kidney injury, bicarbonate therapy significantly decreased the primary composite outcome, 28-day mortality, and the need for dialysis.
Chronic metabolic acidosis
For people with chronic kidney disease (CKD), treating metabolic acidosis slows the progression of CKD. Dietary interventions for treatment of chronic metabolic acidosis include base-inducing fruits and vegetables that assist with reducing the urine net acid excretion, and increase TCO2. Recent research has also suggested that dietary protein restriction, through ketoanalogue-supplemented vegetarian very low protein diets are also a nutritionally safe option for correction of metabolic acidosis in people with CKD.
Currently, the most commonly used treatment for chronic metabolic acidosis is oral bicarbonate. The NKF/KDOQI guidelines recommend starting treatment when serum bicarbonate levels are <22 mEq/L, in order to maintain levels ≥ 22 mEq/L. Studies investigating the effects of oral alkali therapy demonstrated improvements in serum bicarbonate levels, resulting in a slower decline in kidney function, and reduction in proteinuria – leading to a reduction in the risk of progressing to kidney failure. However, side effects of oral alkali therapy include gastrointestinal intolerance, worsening edema, and worsening hypertension. Furthermore, large doses of oral alkali are required to treat chronic metabolic acidosis, and the pill burden can limit adherence.
Veverimer (TRC 101) is a promising investigational drug designed to treat metabolic acidosis by binding with the acid in the gastrointestinal tract and removing it from the body through excretion in the feces, in turn decreasing the amount of acid in the body, and increasing the level of bicarbonate in the blood. Results from a Phase 3, double-blind placebo-controlled 12-week clinical trial in people with CKD and metabolic acidosis demonstrated that Veverimer effectively and safely corrected metabolic acidosis in the short-term, and a blinded, placebo-controlled, 40-week extension of the trial assessing long-term safety, demonstrated sustained improvements in physical function and a combined endpoint of death, dialysis, or 50% decline in eGFR.
See also
Delta ratio
Metabolic alkalosis
Pseudohypoxia
Respiratory acidosis
Respiratory alkalosis
Trauma triad of death
Winters' formula
Intravenous bicarbonate
References
External links
Acid–base disturbances
Infection
An infection is the invasion of tissues by pathogens, their multiplication, and the reaction of host tissues to the infectious agent and the toxins they produce. An infectious disease, also known as a transmissible disease or communicable disease, is an illness resulting from an infection.
Infections can be caused by a wide range of pathogens, most prominently bacteria and viruses. Hosts can fight infections using their immune systems. Mammalian hosts react to infections with an innate response, often involving inflammation, followed by an adaptive response.
Specific medications used to treat infections include antibiotics, antivirals, antifungals, antiprotozoals, and antihelminthics. Infectious diseases resulted in 9.2 million deaths in 2013 (about 17% of all deaths). The branch of medicine that focuses on infections is referred to as infectious diseases.
Types
Infections are caused by infectious agents (pathogens) including:
Bacteria (e.g. Mycobacterium tuberculosis, Staphylococcus aureus, Escherichia coli, Clostridium botulinum, and Salmonella spp.)
Viruses and related agents such as viroids (e.g. HIV, Rhinovirus, Lyssaviruses such as Rabies virus, Ebolavirus, and Severe acute respiratory syndrome coronavirus 2)
Fungi, further subclassified into:
Ascomycota, including yeasts such as Candida (the most common fungal infection); filamentous fungi such as Aspergillus; Pneumocystis species; and dermatophytes, a group of organisms causing infection of skin and other superficial structures in humans.
Basidiomycota, including the human-pathogenic genus Cryptococcus.
Parasites, which are usually divided into:
Unicellular organisms (e.g. malaria, Toxoplasma, Babesia)
Macroparasites (worms or helminths) including nematodes such as parasitic roundworms and pinworms, tapeworms (cestodes), and flukes (trematodes, such as schistosomes). Diseases caused by helminths are sometimes termed infestations, but are sometimes called infections.
Arthropods such as ticks, mites, fleas, and lice, can also cause human disease, which conceptually are similar to infections, but invasion of a human or animal body by these macroparasites is usually termed infestation.
Prions (although they do not secrete toxins)
Signs and symptoms
The signs and symptoms of an infection depend on the type of disease. Some signs of infection affect the whole body generally, such as fatigue, loss of appetite, weight loss, fevers, night sweats, chills, aches and pains. Others are specific to individual body parts, such as skin rashes, coughing, or a runny nose.
In certain cases, infectious diseases may be asymptomatic for much or even all of their course in a given host. In the latter case, the disease may only be defined as a "disease" (which by definition means an illness) in hosts who secondarily become ill after contact with an asymptomatic carrier. An infection is not synonymous with an infectious disease, as some infections do not cause illness in a host.
Bacterial or viral
As bacterial and viral infections can both cause the same kinds of symptoms, it can be difficult to distinguish which is the cause of a specific infection. Distinguishing the two is important, since viral infections cannot be cured by antibiotics whereas bacterial infections can.
Pathophysiology
There is a general chain of events that applies to infections, sometimes called the chain of infection or transmission chain. The chain of events involves several steps, which include the infectious agent, reservoir, entry into a susceptible host, exit and transmission to new hosts. Each of the links must be present in chronological order for an infection to develop. Understanding these steps helps health care workers target the infection and prevent it from occurring in the first place.
Colonization
Infection begins when an organism successfully enters the body, grows and multiplies. This is referred to as colonization. Most humans are not easily infected. Those with compromised or weakened immune systems have an increased susceptibility to chronic or persistent infections. Individuals who have a suppressed immune system are particularly susceptible to opportunistic infections. Entrance to the host at host–pathogen interface, generally occurs through the mucosa in orifices like the oral cavity, nose, eyes, genitalia, anus, or the microbe can enter through open wounds. While a few organisms can grow at the initial site of entry, many migrate and cause systemic infection in different organs. Some pathogens grow within the host cells (intracellular) whereas others grow freely in bodily fluids.
Wound colonization refers to non-replicating microorganisms within the wound, while in infected wounds, replicating organisms exist and tissue is injured. All multicellular organisms are colonized to some degree by extrinsic organisms, and the vast majority of these exist in either a mutualistic or commensal relationship with the host. An example of the former is the anaerobic bacteria species, which colonizes the mammalian colon, and an example of the latter are the various species of staphylococcus that exist on human skin. Neither of these colonizations are considered infections. The difference between an infection and a colonization is often only a matter of circumstance. Non-pathogenic organisms can become pathogenic given specific conditions, and even the most virulent organism requires certain circumstances to cause a compromising infection. Some colonizing bacteria, such as Corynebacteria sp. and Viridans streptococci, prevent the adhesion and colonization of pathogenic bacteria and thus have a symbiotic relationship with the host, preventing infection and speeding wound healing.
The variables involved in the outcome of a host becoming inoculated by a pathogen and the ultimate outcome include:
the route of entry of the pathogen and the access to host regions that it gains
the intrinsic virulence of the particular organism
the quantity or load of the initial inoculant
the immune status of the host being colonized
As an example, several staphylococcal species remain harmless on the skin, but, when present in a normally sterile space, such as in the capsule of a joint or the peritoneum, multiply without resistance and cause harm.
An interesting fact that gas chromatography–mass spectrometry, 16S ribosomal RNA analysis, omics, and other advanced technologies have made more apparent to humans in recent decades is that microbial colonization is very common even in environments that humans think of as being nearly sterile. Because it is normal to have bacterial colonization, it is difficult to know which chronic wounds can be classified as infected and how much risk of progression exists. Despite the huge number of wounds seen in clinical practice, there are limited quality data for evaluated symptoms and signs. A review of chronic wounds in the Journal of the American Medical Association's "Rational Clinical Examination Series" quantified the importance of increased pain as an indicator of infection. The review showed that the most useful finding is an increase in the level of pain (likelihood ratio (LR) range, 11–20), which makes infection much more likely; the absence of pain (negative likelihood ratio range, 0.64–0.88), however, does not rule out infection.
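The likelihood ratios quoted above can be turned into a post-test probability with the standard odds form of Bayes' rule. The sketch below assumes a hypothetical 20% pre-test probability of wound infection purely for illustration; that probability is not taken from the review.

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert pre-test probability to post-test probability using
    post-test odds = pre-test odds * likelihood ratio."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative: assume a 20% pre-test probability of wound infection.
print(post_test_probability(0.20, 11))    # increased pain, LR ~11   -> ~0.73
print(post_test_probability(0.20, 0.64))  # no increased pain, LR ~0.64 -> ~0.14
```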
Disease
Disease can arise if the host's protective immune mechanisms are compromised and the organism inflicts damage on the host. Microorganisms can cause tissue damage by releasing a variety of toxins or destructive enzymes. For example, Clostridium tetani releases a toxin that paralyzes muscles, and staphylococcus releases toxins that produce shock and sepsis. Not all infectious agents cause disease in all hosts. For example, less than 5% of individuals infected with polio develop disease. On the other hand, some infectious agents are highly virulent. The prion causing mad cow disease and Creutzfeldt–Jakob disease invariably kills all animals and people that are infected.
Persistent infections occur because the body is unable to clear the organism after the initial infection. Persistent infections are characterized by the continual presence of the infectious organism, often as latent infection with occasional recurrent relapses of active infection. There are some viruses that can maintain a persistent infection by infecting different cells of the body. Some viruses once acquired never leave the body. A typical example is the herpes virus, which tends to hide in nerves and become reactivated when specific circumstances arise.
Persistent infections cause millions of deaths globally each year. Chronic infections by parasites account for a high morbidity and mortality in many underdeveloped countries.
Transmission
For infecting organisms to survive and repeat the infection cycle in other hosts, they (or their progeny) must leave an existing reservoir and cause infection elsewhere. Infection transmission can take place via many potential routes:
Droplet contact, also known as the respiratory route, and the resultant infection can be termed airborne disease. If an infected person coughs or sneezes on another person the microorganisms, suspended in warm, moist droplets, may enter the body through the nose, mouth or eye surfaces.
Fecal-oral transmission, wherein foodstuffs or water become contaminated (by people not washing their hands before preparing food, or untreated sewage being released into a drinking water supply) and the people who eat and drink them become infected. Common fecal-oral transmitted pathogens include Vibrio cholerae, Giardia species, rotaviruses, Entamoeba histolytica, Escherichia coli, and tape worms. Most of these pathogens cause gastroenteritis.
Sexual transmission, with the result being called sexually transmitted infection.
Oral transmission, diseases that are transmitted primarily by oral means may be caught through direct oral contact such as kissing, or by indirect contact such as by sharing a drinking glass or a cigarette.
Transmission by direct contact, with diseases transmissible this way including athlete's foot, impetigo and warts.
Vehicle transmission, transmission by an inanimate reservoir (food, water, soil).
Vertical transmission, directly from the mother to an embryo, fetus or baby during pregnancy or childbirth. It can occur as a result of a pre-existing infection or one acquired during pregnancy.
Iatrogenic transmission, due to medical procedures such as injection or transplantation of infected material.
Vector-borne transmission, transmitted by a vector, which is an organism that does not cause disease itself but that transmits infection by conveying pathogens from one host to another.
The relationship between virulence and transmissibility is complex; studies have shown no clear relationship between the two. There is still a small body of evidence that partially suggests a link between virulence and transmissibility.
Diagnosis
Diagnosis of infectious disease sometimes involves identifying an infectious agent either directly or indirectly. In practice most minor infectious diseases such as warts, cutaneous abscesses, respiratory system infections and diarrheal diseases are diagnosed by their clinical presentation and treated without knowledge of the specific causative agent. Conclusions about the cause of the disease are based upon the likelihood that a patient came in contact with a particular agent, the presence of a microbe in a community, and other epidemiological considerations. Given sufficient effort, all known infectious agents can be specifically identified.
Diagnosis of infectious disease is nearly always initiated by medical history and physical examination. More detailed identification techniques involve the culture of infectious agents isolated from a patient. Culture allows identification of infectious organisms by examining their microscopic features, by detecting the presence of substances produced by pathogens, and by directly identifying an organism by its genotype.
Many infectious organisms are identified without culture and microscopy. This is especially true for viruses, which cannot grow in culture. For some suspected pathogens, doctors may conduct tests that examine a patient's blood or other body fluids for antigens or antibodies that indicate presence of a specific pathogen that the doctor suspects.
Other techniques (such as X-rays, CAT scans, PET scans or NMR) are used to produce images of internal abnormalities resulting from the growth of an infectious agent. The images are useful in detection of, for example, a bone abscess or a spongiform encephalopathy produced by a prion.
The benefits of identification, however, are often greatly outweighed by the cost, as often there is no specific treatment, the cause is obvious, or the outcome of an infection is likely to be benign.
Symptomatic diagnostics
The diagnosis is aided by the presenting symptoms in any individual with an infectious disease, yet it usually needs additional diagnostic techniques to confirm the suspicion. Some signs are specifically characteristic and indicative of a disease and are called pathognomonic signs; but these are rare. Not all infections are symptomatic.
In children the presence of cyanosis, rapid breathing, poor peripheral perfusion, or a petechial rash increases the risk of a serious infection by greater than 5 fold. Other important indicators include parental concern, clinical instinct, and temperature greater than 40 °C.
Microbial culture
Many diagnostic approaches depend on microbiological culture to isolate a pathogen from the appropriate clinical specimen. In a microbial culture, a growth medium is provided for a specific agent. A sample taken from potentially diseased tissue or fluid is then tested for the presence of an infectious agent able to grow within that medium. Many pathogenic bacteria are easily grown on nutrient agar, a form of solid medium that supplies carbohydrates and proteins necessary for growth, along with copious amounts of water. A single bacterium will grow into a visible mound on the surface of the plate called a colony, which may be separated from other colonies or melded together into a "lawn". The size, color, shape and form of a colony is characteristic of the bacterial species, its specific genetic makeup (its strain), and the environment that supports its growth. Other ingredients are often added to the plate to aid in identification. Plates may contain substances that permit the growth of some bacteria and not others, or that change color in response to certain bacteria and not others. Bacteriological plates such as these are commonly used in the clinical identification of infectious bacterium. Microbial culture may also be used in the identification of viruses: the medium, in this case, being cells grown in culture that the virus can infect, and then alter or kill. In the case of viral identification, a region of dead cells results from viral growth, and is called a "plaque". Eukaryotic parasites may also be grown in culture as a means of identifying a particular agent.
In the absence of suitable plate culture techniques, some microbes require culture within live animals. Bacteria such as Mycobacterium leprae and Treponema pallidum can be grown in animals, although serological and microscopic techniques make the use of live animals unnecessary. Viruses are also usually identified using alternatives to growth in culture or animals. Some viruses may be grown in embryonated eggs. Another useful identification method is Xenodiagnosis, or the use of a vector to support the growth of an infectious agent. Chagas disease is the most significant example, because it is difficult to directly demonstrate the presence of the causative agent, Trypanosoma cruzi in a patient, which therefore makes it difficult to definitively make a diagnosis. In this case, xenodiagnosis involves the use of the vector of the Chagas agent T. cruzi, an uninfected triatomine bug, which takes a blood meal from a person suspected of having been infected. The bug is later inspected for growth of T. cruzi within its gut.
Microscopy
Another principal tool in the diagnosis of infectious disease is microscopy. Virtually all of the culture techniques discussed above rely, at some point, on microscopic examination for definitive identification of the infectious agent. Microscopy may be carried out with simple instruments, such as the compound light microscope, or with instruments as complex as an electron microscope. Samples obtained from patients may be viewed directly under the light microscope, and can often rapidly lead to identification. Microscopy is often also used in conjunction with biochemical staining techniques, and can be made exquisitely specific when used in combination with antibody based techniques. For example, antibodies made artificially fluorescent (fluorescently labeled antibodies) can be directed to bind to and identify specific antigens present on a pathogen. A fluorescence microscope is then used to detect fluorescently labeled antibodies bound to internalized antigens within clinical samples or cultured cells. This technique is especially useful in the diagnosis of viral diseases, where the light microscope is incapable of identifying a virus directly.
Other microscopic procedures may also aid in identifying infectious agents. Almost all cells readily stain with a number of basic dyes due to the electrostatic attraction between negatively charged cellular molecules and the positive charge on the dye. A cell is normally transparent under a microscope, and using a stain increases the contrast of a cell with its background. Staining a cell with a dye such as Giemsa stain or crystal violet allows a microscopist to describe its size, shape, internal and external components and its associations with other cells. The response of bacteria to different staining procedures is used in the taxonomic classification of microbes as well. Two methods, the Gram stain and the acid-fast stain, are the standard approaches used to classify bacteria and to diagnose disease. The Gram stain identifies the bacterial groups Bacillota and Actinomycetota, both of which contain many significant human pathogens. The acid-fast staining procedure identifies the Actinomycetota genera Mycobacterium and Nocardia.
Biochemical tests
Biochemical tests used in the identification of infectious agents include the detection of metabolic or enzymatic products characteristic of a particular infectious agent. Since bacteria ferment carbohydrates in patterns characteristic of their genus and species, the detection of fermentation products is commonly used in bacterial identification. Acids, alcohols and gases are usually detected in these tests when bacteria are grown in selective liquid or solid media.
The isolation of enzymes from infected tissue can also provide the basis of a biochemical diagnosis of an infectious disease. For example, humans can make neither RNA replicases nor reverse transcriptase, and the presence of these enzymes is characteristic of specific types of viral infections. The ability of the viral protein hemagglutinin to bind red blood cells together into a detectable matrix may also be characterized as a biochemical test for viral infection, although strictly speaking hemagglutinin is not an enzyme and has no metabolic function.
Serological methods are highly sensitive, specific and often extremely rapid tests used to identify microorganisms. These tests are based upon the ability of an antibody to bind specifically to an antigen. The antigen, usually a protein or carbohydrate made by an infectious agent, is bound by the antibody. This binding then sets off a chain of events that can be visibly obvious in various ways, dependent upon the test. For example, "Strep throat" is often diagnosed within minutes, and is based on the appearance of antigens made by the causative agent, S. pyogenes, that is retrieved from a patient's throat with a cotton swab. Serological tests, if available, are usually the preferred route of identification, however the tests are costly to develop and the reagents used in the test often require refrigeration. Some serological methods are extremely costly, although when commonly used, such as with the "strep test", they can be inexpensive.
Complex serological techniques have been developed into what are known as immunoassays. Immunoassays can use the basic antibody – antigen binding as the basis to produce an electro-magnetic or particle radiation signal, which can be detected by some form of instrumentation. Signal of unknowns can be compared to that of standards allowing quantitation of the target antigen. To aid in the diagnosis of infectious diseases, immunoassays can detect or measure antigens from either infectious agents or proteins generated by an infected organism in response to a foreign agent. For example, immunoassay A may detect the presence of a surface protein from a virus particle. Immunoassay B on the other hand may detect or measure antibodies produced by an organism's immune system that are made to neutralize and allow the destruction of the virus.
Instrumentation can be used to read extremely small signals created by secondary reactions linked to the antibody – antigen binding. Instrumentation can control sampling, reagent use, reaction times, signal detection, calculation of results, and data management to yield a cost-effective automated process for diagnosis of infectious disease.
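As a toy illustration of comparing an unknown sample's signal against standards to quantitate an antigen, the sketch below interpolates a made-up calibration curve with NumPy. Real instruments typically fit more elaborate curve models (for example a four-parameter logistic), and the standard values here are invented.

```python
import numpy as np

# Hypothetical calibration standards: known antigen concentration vs measured signal.
standard_conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0, 100.0])    # e.g. ng/mL
standard_signal = np.array([0.02, 0.10, 0.45, 0.90, 3.8, 7.1])  # instrument units

def quantitate(signal):
    """Estimate antigen concentration by interpolating the standard curve.
    np.interp requires the x-values (signals) to be increasing."""
    return float(np.interp(signal, standard_signal, standard_conc))

print(quantitate(0.60))  # unknown sample signal -> roughly 6-7 ng/mL on this toy curve
```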
PCR-based diagnostics
Technologies based upon the polymerase chain reaction (PCR) method will become nearly ubiquitous gold standards of diagnostics of the near future, for several reasons. First, the catalog of infectious agents has grown to the point that virtually all of the significant infectious agents of the human population have been identified. Second, an infectious agent must grow within the human body to cause disease; essentially it must amplify its own nucleic acids to cause a disease. This amplification of nucleic acid in infected tissue offers an opportunity to detect the infectious agent by using PCR. Third, the essential tools for directing PCR, primers, are derived from the genomes of infectious agents, and with time those genomes will be known if they are not already.
Thus, the technological ability to detect any infectious agent rapidly and specifically is currently available. The only remaining blockades to the use of PCR as a standard tool of diagnosis are in its cost and application, neither of which is insurmountable. The diagnosis of a few diseases will not benefit from the development of PCR methods, such as some of the clostridial diseases (tetanus and botulism). These diseases are fundamentally biological poisonings by relatively small numbers of infectious bacteria that produce extremely potent neurotoxins. A significant proliferation of the infectious agent does not occur, which limits the ability of PCR to detect the presence of any bacteria.
Metagenomic sequencing
Given the wide range of bacterial, viral, fungal, protozoal, and helminthic pathogens that cause debilitating and life-threatening illnesses, the ability to quickly identify the cause of infection is important yet often challenging. For example, more than half of cases of encephalitis, a severe illness affecting the brain, remain undiagnosed, despite extensive testing using the standard of care (microbiological culture) and state-of-the-art clinical laboratory methods. Metagenomic sequencing-based diagnostic tests are currently being developed for clinical use and show promise as a sensitive, specific, and rapid way to diagnose infection using a single all-encompassing test. This test is similar to current PCR tests; however, an untargeted whole genome amplification is used rather than primers for a specific infectious agent. This amplification step is followed by next-generation sequencing or third-generation sequencing, alignment comparisons, and taxonomic classification using large databases of thousands of pathogen and commensal reference genomes. Simultaneously, antimicrobial resistance genes within pathogen and plasmid genomes are sequenced and aligned to the taxonomically classified pathogen genomes to generate an antimicrobial resistance profile – analogous to antibiotic sensitivity testing – to facilitate antimicrobial stewardship and allow for the optimization of treatment using the most effective drugs for a patient's infection.
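A deliberately simplified sketch of the taxonomic-classification step described above: a read is assigned to whichever reference genome shares the most k-mers with it. Clinical pipelines use dedicated aligners and classifiers against databases of thousands of genomes; the sequences, names and k-mer size below are invented for illustration only.

```python
def kmers(seq, k=8):
    """Set of all k-mers (substrings of length k) in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify_read(read, references, k=8):
    """Assign a read to the reference genome sharing the most k-mers with it."""
    read_kmers = kmers(read, k)
    scores = {name: len(read_kmers & kmers(ref, k)) for name, ref in references.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

# Tiny invented "reference genomes" and a sequencing read:
references = {
    "pathogen_A": "ATGCGTACGTTAGCCGATCGATCGGATCCATGCA",
    "commensal_B": "TTGACCGGTATTCAGGCTAAGGCTTACCGGTTAA",
}
read = "CGATCGATCGGATCC"
print(classify_read(read, references))  # -> pathogen_A
```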
Metagenomic sequencing could prove especially useful for diagnosis when the patient is immunocompromised. An ever-wider array of infectious agents can cause serious harm to individuals with immunosuppression, so clinical screening must often be broader. Additionally, the expression of symptoms is often atypical, making a clinical diagnosis based on presentation more difficult. Thirdly, diagnostic methods that rely on the detection of antibodies are more likely to fail. A rapid, sensitive, specific, and untargeted test for all known human pathogens that detects the presence of the organism's DNA rather than antibodies is therefore highly desirable.
Indication of tests
There is usually an indication for a specific identification of an infectious agent only when such identification can aid in the treatment or prevention of the disease, or to advance knowledge of the course of an illness prior to the development of effective therapeutic or preventative measures. For example, in the early 1980s, prior to the appearance of AZT for the treatment of AIDS, the course of the disease was closely followed by monitoring the composition of patient blood samples, even though the outcome would not offer the patient any further treatment options. In part, these studies on the appearance of HIV in specific communities permitted the advancement of hypotheses as to the route of transmission of the virus. By understanding how the disease was transmitted, resources could be targeted to the communities at greatest risk in campaigns aimed at reducing the number of new infections. The specific serological diagnostic identification, and later genotypic or molecular identification, of HIV also enabled the development of hypotheses as to the temporal and geographical origins of the virus, as well as a myriad of other hypotheses. The development of molecular diagnostic tools has enabled physicians and researchers to monitor the efficacy of treatment with anti-retroviral drugs. Molecular diagnostics are now commonly used to identify HIV in healthy people long before the onset of illness and have been used to demonstrate the existence of people who are genetically resistant to HIV infection. Thus, while there still is no cure for AIDS, there is great therapeutic and predictive benefit to identifying the virus and monitoring the virus levels within the blood of infected individuals, both for the patient and for the community at large.
Classification
Subclinical versus clinical (latent versus apparent)
Symptomatic infections are apparent and clinical, whereas an infection that is active but does not produce noticeable symptoms may be called inapparent, silent, subclinical, or occult. An infection that is inactive or dormant is called a latent infection. An example of a latent bacterial infection is latent tuberculosis. Some viral infections can also be latent, examples of latent viral infections are any of those from the Herpesviridae family.
The word infection can denote any presence of a particular pathogen at all (no matter how little) but also is often used in a sense implying a clinically apparent infection (in other words, a case of infectious disease). This fact occasionally creates some ambiguity or prompts some usage discussion; to get around this it is common for health professionals to speak of colonization (rather than infection) when they mean that some of the pathogens are present but that no clinically apparent infection (no disease) is present.
Course of infection
Different terms are used to describe how and where infections present over time. In an acute infection, symptoms develop rapidly; its course can either be rapid or protracted. In chronic infection, symptoms usually develop gradually over weeks or months and are slow to resolve. In subacute infections, symptoms take longer to develop than in acute infections but arise more quickly than those of chronic infections. A focal infection is an initial site of infection from which organisms travel via the bloodstream to another area of the body.
Primary versus opportunistic
Among the many varieties of microorganisms, relatively few cause disease in otherwise healthy individuals. Infectious disease results from the interplay between those few pathogens and the defenses of the hosts they infect. The appearance and severity of disease resulting from any pathogen depend upon the ability of that pathogen to damage the host as well as the ability of the host to resist the pathogen. However, a host's immune system can also cause damage to the host itself in an attempt to control the infection. Clinicians, therefore, classify infectious microorganisms or microbes according to the status of host defenses – either as primary pathogens or as opportunistic pathogens.
Primary pathogens
Primary pathogens cause disease as a result of their presence or activity within the normal, healthy host, and their intrinsic virulence (the severity of the disease they cause) is, in part, a necessary consequence of their need to reproduce and spread. Many of the most common primary pathogens of humans only infect humans; however, many serious diseases are caused by organisms acquired from the environment or that infect non-human hosts.
Opportunistic pathogens
Opportunistic pathogens can cause an infectious disease in a host with depressed resistance (immunodeficiency) or if they have unusual access to the inside of the body (for example, via trauma). Opportunistic infection may be caused by microbes ordinarily in contact with the host, such as pathogenic bacteria or fungi in the gastrointestinal or the upper respiratory tract, and they may also result from (otherwise innocuous) microbes acquired from other hosts (as in Clostridioides difficile colitis) or from the environment as a result of traumatic introduction (as in surgical wound infections or compound fractures). An opportunistic disease requires impairment of host defenses, which may occur as a result of genetic defects (such as chronic granulomatous disease), exposure to antimicrobial drugs or immunosuppressive chemicals (as might occur following poisoning or cancer chemotherapy), exposure to ionizing radiation, or as a result of an infectious disease with immunosuppressive activity (such as with measles, malaria or HIV disease). Primary pathogens may also cause more severe disease in a host with depressed resistance than would normally occur in an immunosufficient host.
Secondary infection
While a primary infection can practically be viewed as the root cause of an individual's current health problem, a secondary infection is a sequela or complication of that root cause. For example, an infection due to a burn or penetrating trauma (the root cause) is a secondary infection. Primary pathogens often cause primary infection and often cause secondary infection. Usually, opportunistic infections are viewed as secondary infections (because immunodeficiency or injury was the predisposing factor).
Other types of infection
Other types of infection include mixed, iatrogenic, nosocomial, and community-acquired infections. A mixed infection is caused by two or more pathogens; an example is appendicitis, which is caused by Bacteroides fragilis and Escherichia coli. An iatrogenic infection is one transmitted from a health care worker to a patient. A nosocomial infection also occurs in a health care setting, being acquired during a hospital stay. Lastly, a community-acquired infection is one acquired in the general community, outside of a health care setting.
Infectious or not
One manner of proving that a given disease is infectious, is to satisfy Koch's postulates (first proposed by Robert Koch), which require that first, the infectious agent be identifiable only in patients who have the disease, and not in healthy controls, and second, that patients who contract the infectious agent also develop the disease. These postulates were first used in the discovery that Mycobacteria species cause tuberculosis.
However, Koch's postulates cannot usually be tested in modern practice for ethical reasons. Proving them would require experimental infection of a healthy individual with a pathogen produced as a pure culture. Conversely, even clearly infectious diseases do not always meet the infectious criteria; for example, Treponema pallidum, the causative spirochete of syphilis, cannot be cultured in vitro – however the organism can be cultured in rabbit testes. It is less clear that a pure culture has been obtained when the organism is grown in an animal host than when it is derived from plate culture.
Epidemiology, or the study and analysis of who, why and where disease occurs, and what determines whether various populations have a disease, is another important tool used to understand infectious disease. Epidemiologists may determine differences among groups within a population, such as whether certain age groups have a greater or lesser rate of infection; whether groups living in different neighborhoods are more likely to be infected; and by other factors, such as gender and race. Researchers also may assess whether a disease outbreak is sporadic, or just an occasional occurrence; endemic, with a steady level of regular cases occurring in a region; epidemic, with a fast arising, and unusually high number of cases in a region; or pandemic, which is a global epidemic. If the cause of the infectious disease is unknown, epidemiology can be used to assist with tracking down the sources of infection.
Contagiousness
Infectious diseases are sometimes called contagious diseases when they are easily transmitted by contact with an ill person or their secretions (e.g., influenza). Thus, a contagious disease is a subset of infectious disease that is especially infective or easily transmitted. Other types of infectious, transmissible, or communicable diseases with more specialized routes of infection, such as vector transmission or sexual transmission, are usually not regarded as "contagious", and often do not require medical isolation (sometimes loosely called quarantine) of those affected. However, this specialized connotation of the word "contagious" and "contagious disease" (easy transmissibility) is not always respected in popular use.
Infectious diseases are commonly transmitted from person to person through direct contact. The types of contact are person-to-person contact and droplet spread. Indirect contact, such as airborne transmission, contaminated objects, food and drinking water, animal-to-person contact, animal reservoirs, insect bites, and environmental reservoirs, is another way infectious diseases are transmitted.
By anatomic location
Infections can be classified by the anatomic location or organ system infected, including:
Urinary tract infection
Skin infection
Respiratory tract infection
Odontogenic infection (an infection that originates within a tooth or in the closely surrounding tissues)
Vaginal infections
Intra-amniotic infection
In addition, locations of inflammation where infection is the most common cause include pneumonia, meningitis and salpingitis.
Prevention
Techniques like hand washing, wearing gowns, and wearing face masks can help prevent infections from being passed from one person to another. Aseptic technique was introduced in medicine and surgery in the late 19th century and greatly reduced the incidence of infections caused by surgery. Frequent hand washing remains the most important defense against the spread of unwanted organisms. There are other forms of prevention such as avoiding the use of illicit drugs, using a condom, wearing gloves, and having a healthy lifestyle with a balanced diet and regular exercise. Cooking foods well and avoiding foods that have been left outside for a long time is also important.
Antimicrobial substances used to prevent transmission of infections include:
antiseptics, which are applied to living tissue/skin
disinfectants, which destroy microorganisms found on non-living objects.
antibiotics, called prophylactic when given as prevention rather than as treatment of infection. However, long-term use of antibiotics leads to bacterial resistance. While humans do not become immune to antibiotics, the bacteria do. Thus, avoiding the use of antibiotics for longer than necessary helps prevent bacteria from acquiring mutations that aid antibiotic resistance.
One of the ways to prevent or slow down the transmission of infectious diseases is to recognize the different characteristics of various diseases. Some critical disease characteristics that should be evaluated include virulence, distance traveled by those affected, and level of contagiousness. The human strains of Ebola virus, for example, incapacitate those infected extremely quickly and kill them soon after. As a result, those affected by this disease do not have the opportunity to travel very far from the initial infection zone. Also, this virus must spread through skin lesions or permeable membranes such as the eye. Thus, the initial stage of Ebola is not very contagious since its victims experience only internal hemorrhaging. As a result of the above features, the spread of Ebola is very rapid and usually stays within a relatively confined geographical area. In contrast, the human immunodeficiency virus (HIV) kills its victims very slowly by attacking their immune system. As a result, many of its victims transmit the virus to other individuals before even realizing that they are carrying the disease. Also, the relatively low virulence allows its victims to travel long distances, increasing the likelihood of an epidemic.
Another effective way to decrease the transmission rate of infectious diseases is to recognize the effects of small-world networks. In epidemics, there are often extensive interactions within hubs or groups of infected individuals and other interactions within discrete hubs of susceptible individuals. Despite the low interaction between discrete hubs, the disease can jump and spread in a susceptible hub via a single or few interactions with an infected hub. Thus, infection rates in small-world networks can be reduced somewhat if interactions between individuals within infected hubs are eliminated. However, infection rates can be drastically reduced if the main focus is on the prevention of transmission jumps between hubs. The use of needle exchange programs in areas with a high density of drug users with HIV is an example of the successful implementation of this approach. Another example is the use of ring culling or vaccination of potentially susceptible livestock in adjacent farms to prevent the spread of the foot-and-mouth virus in 2001.
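A toy simulation of the small-world effect described above, assuming the networkx library is available: on a ring lattice with no long-range shortcuts the infection creeps locally, while a few rewired "shortcut" edges let it jump between distant hubs. The parameters and the simple susceptible–infected model are illustrative choices, not taken from the cited studies.

```python
import random
import networkx as nx

def si_spread(graph, steps=15, p_transmit=0.2, seed=0):
    """Simple susceptible-infected process: each step, every infected node
    transmits to each susceptible neighbour with probability p_transmit."""
    rng = random.Random(seed)
    infected = {0}
    for _ in range(steps):
        new = set()
        for node in infected:
            for nbr in graph.neighbors(node):
                if nbr not in infected and rng.random() < p_transmit:
                    new.add(nbr)
        infected |= new
    return len(infected)

n, k = 500, 6
ring = nx.watts_strogatz_graph(n, k, p=0.0, seed=1)         # only local (within-hub) links
small_world = nx.watts_strogatz_graph(n, k, p=0.1, seed=1)  # a few long-range shortcuts

print("no shortcuts:  ", si_spread(ring))
print("with shortcuts:", si_spread(small_world))  # typically reaches far more nodes
```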
A general method to prevent transmission of vector-borne pathogens is pest control.
In cases where infection is merely suspected, individuals may be quarantined until the incubation period has passed and the disease manifests itself or the person remains healthy. Groups may undergo quarantine, or in the case of communities, a cordon sanitaire may be imposed to prevent infection from spreading beyond the community, or in the case of protective sequestration, into a community. Public health authorities may implement other forms of social distancing, such as school closings, lockdowns or temporary restrictions (e.g. circuit breakers) to control an epidemic.
Immunity
Infection with most pathogens does not result in death of the host and the offending organism is ultimately cleared after the symptoms of the disease have waned. This process requires immune mechanisms to kill or inactivate the inoculum of the pathogen. Specific acquired immunity against infectious diseases may be mediated by antibodies and/or T lymphocytes. Immunity mediated by these two factors may be manifested by:
a direct effect upon a pathogen, such as antibody-initiated complement-dependent bacteriolysis, opsonization, phagocytosis and killing, as occurs for some bacteria,
neutralization of viruses so that these organisms cannot enter cells,
or by T lymphocytes, which will kill a cell parasitized by a microorganism.
The immune system response to a microorganism often causes symptoms such as a high fever and inflammation, and has the potential to be more devastating than direct damage caused by a microbe.
Resistance to infection (immunity) may be acquired following a disease, by asymptomatic carriage of the pathogen, by harboring an organism with a similar structure (crossreacting), or by vaccination. Knowledge of the protective antigens and specific acquired host immune factors is more complete for primary pathogens than for opportunistic pathogens. There is also the phenomenon of herd immunity which offers a measure of protection to those otherwise vulnerable people when a large enough proportion of the population has acquired immunity from certain infections.
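The "large enough proportion" mentioned above has a standard back-of-the-envelope expression in the simplest epidemic models: a herd immunity threshold of 1 − 1/R0, where R0 is the basic reproduction number. This formula is a textbook simplification rather than something stated in this article; the sketch below simply evaluates it for a few illustrative R0 values.

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune so that, on average,
    each infection causes fewer than one new infection (simple SIR model)."""
    return 1 - 1 / r0

for r0 in (1.5, 3, 12):
    print(f"R0={r0}: ~{herd_immunity_threshold(r0):.0%} immune needed")
```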
Immune resistance to an infectious disease requires a critical level of either antigen-specific antibodies and/or T cells when the host encounters the pathogen. Some individuals develop natural serum antibodies to the surface polysaccharides of some agents although they have had little or no contact with the agent; these natural antibodies confer specific protection to adults and are passively transmitted to newborns.
Host genetic factors
The organism that is the target of an infecting action of a specific infectious agent is called the host. The host harbouring an agent in its mature or sexually reproducing stage is called the definitive host. The intermediate host harbours the agent during its larval stage. A host can be any living organism and may support either the asexual or the sexual reproduction of the agent.
The clearance of pathogens, either treatment-induced or spontaneous, can be influenced by the genetic variants carried by individual patients. For instance, for genotype 1 hepatitis C treated with Pegylated interferon-alpha-2a or Pegylated interferon-alpha-2b (brand names Pegasys or PEG-Intron) combined with ribavirin, it has been shown that genetic polymorphisms near the human IL28B gene, encoding interferon lambda 3, are associated with significant differences in the treatment-induced clearance of the virus. This finding, originally reported in Nature, showed that genotype 1 hepatitis C patients carrying certain genetic variant alleles near the IL28B gene are more likely to achieve sustained virological response after treatment than others. A later report in Nature demonstrated that the same genetic variants are also associated with the natural clearance of the genotype 1 hepatitis C virus.
Treatments
When infection attacks the body, anti-infective drugs can suppress the infection. Several broad types of anti-infective drugs exist, depending on the type of organism targeted; they include antibacterial (antibiotic; including antitubercular), antiviral, antifungal and antiparasitic (including antiprotozoal and antihelminthic) agents. Depending on the severity and the type of infection, the antibiotic may be given by mouth or by injection, or may be applied topically. Severe infections of the brain are usually treated with intravenous antibiotics. Sometimes, multiple antibiotics are used in case there is resistance to one antibiotic. Antibiotics only work for bacteria and do not affect viruses. Antibiotics work by slowing down the multiplication of bacteria or killing the bacteria. The most common classes of antibiotics used in medicine include penicillin, cephalosporins, aminoglycosides, macrolides, quinolones and tetracyclines.
Not all infections require treatment, and for many self-limiting infections the treatment may cause more side-effects than benefits. Antimicrobial stewardship is the concept that healthcare providers should treat an infection with an antimicrobial that specifically works well for the target pathogen for the shortest amount of time and to only treat when there is a known or highly suspected pathogen that will respond to the medication.
Susceptibility to infection
Pandemics such as COVID-19 show that people dramatically differ in their susceptibility to infection. This may be because of general health, age, or their immune status, e.g. when they have been infected previously. However, it has also become clear that there are genetic factors which determine susceptibility to infection. For instance, up to 40% of SARS-CoV-2 infections may be asymptomatic, suggesting that many people are naturally protected from disease. Large genetic studies have defined risk factors for severe SARS-CoV-2 infections, and genome sequences from 659 patients with severe COVID-19 revealed genetic variants that appear to be associated with life-threatening disease. One pathway identified in these studies involves type I interferon (IFN). Autoantibodies against type I IFNs were found in up to 13.7% of patients with life-threatening COVID-19, indicating that a complex interaction between genetics and the immune system is important for natural resistance to COVID-19.
Similarly, mutations in the ERAP2 gene, encoding endoplasmic reticulum aminopeptidase 2, seem to increase the susceptibility to the plague, the disease caused by an infection with the bacteria Yersinia pestis. People who inherited two copies of a complete variant of the gene were twice as likely to have survived the plague as those who inherited two copies of a truncated variant.
Susceptibility also determines the epidemiology of infection, given that different populations have different genetic and environmental conditions that affect infections.
Epidemiology
An estimated 1,680 million people died of infectious diseases in the 20th century and about 10 million in 2010.
The World Health Organization collects information on global deaths by International Classification of Disease (ICD) code categories. The following table lists the top infectious diseases by number of deaths in 2002; data from 1993 are included for comparison.
The top three single agent/disease killers are HIV/AIDS, TB and malaria. While the number of deaths due to nearly every disease has decreased, deaths due to HIV/AIDS have increased fourfold. Childhood diseases include pertussis, poliomyelitis, diphtheria, measles and tetanus. Children also make up a large percentage of lower respiratory and diarrheal deaths. In 2012, approximately 3.1 million people died due to lower respiratory infections, making them the fourth leading cause of death in the world.
Historic pandemics
With their potential for unpredictable and explosive impacts, infectious diseases have been major actors in human history. A pandemic (or global epidemic) is a disease that affects people over an extensive geographical area. For example:
Plague of Justinian, from 541 to 542, killed between 50% and 60% of Europe's population.
The Black Death of 1347 to 1352 killed 25 million in Europe over five years. The plague reduced the old world population from an estimated 450 million to between 350 and 375 million in the 14th century.
The introduction of smallpox, measles, and typhus to the areas of Central and South America by European explorers during the 15th and 16th centuries caused pandemics among the native inhabitants. Between 1518 and 1568 disease pandemics are said to have caused the population of Mexico to fall from 20 million to 3 million.
The first European influenza epidemic occurred between 1556 and 1560, with an estimated mortality rate of 20%.
Smallpox killed an estimated 60 million Europeans during the 18th century (approximately 400,000 per year). Up to 30% of those infected, including 80% of the children under 5 years of age, died from the disease, and one-third of the survivors went blind.
In the 19th century, tuberculosis killed an estimated one-quarter of the adult population of Europe; by 1918 one in six deaths in France were still caused by TB.
The Influenza Pandemic of 1918 (or the Spanish flu) killed 25–50 million people (about 2% of world population of 1.7 billion). Today Influenza kills about 250,000 to 500,000 worldwide each year.
Emerging diseases
In most cases, microorganisms live in harmony with their hosts via mutual or commensal interactions. Diseases can emerge when existing parasites become pathogenic or when new pathogenic parasites enter a new host.
Coevolution between parasite and host can lead to hosts becoming resistant to the parasites or the parasites may evolve greater virulence, leading to immunopathological disease.
Human activity is involved with many emerging infectious diseases, such as environmental change enabling a parasite to occupy new niches. When that happens, a pathogen that had been confined to a remote habitat has a wider distribution and possibly a new host organism. Parasites jumping from nonhuman to human hosts are known as zoonoses. Under disease invasion, when a parasite invades a new host species, it may become pathogenic in the new host.
Several human activities have led to the emergence of zoonotic human pathogens, including viruses, bacteria, protozoa, and rickettsia, and spread of vector-borne diseases, see also globalization and disease and wildlife disease:
Encroachment on wildlife habitats. The construction of new villages and housing developments in rural areas force animals to live in dense populations, creating opportunities for microbes to mutate and emerge.
Changes in agriculture. The introduction of new crops attracts new crop pests and the microbes they carry to farming communities, exposing people to unfamiliar diseases.
The destruction of rain forests. As countries make use of their rain forests, by building roads through forests and clearing areas for settlement or commercial ventures, people encounter insects and other animals harboring previously unknown microorganisms.
Uncontrolled urbanization. The rapid growth of cities in many developing countries tends to concentrate large numbers of people into crowded areas with poor sanitation. These conditions foster transmission of contagious diseases.
Modern transport. Ships and other cargo carriers often harbor unintended "passengers" that can spread diseases to faraway destinations, while with international jet-airplane travel, people infected with a disease can carry it to distant lands, or home to their families, before their first symptoms appear.
Germ theory of disease
In Antiquity, the Greek historian Thucydides (c. 460 – c. 400 BC) was the first person to write, in his account of the plague of Athens, that diseases could spread from an infected person to others. In his On the Different Types of Fever, the Greco-Roman physician Galen speculated that plagues were spread by "certain seeds of plague", which were present in the air. In the Sushruta Samhita, the ancient Indian physician Sushruta theorized: "Leprosy, fever, consumption, diseases of the eye, and other infectious diseases spread from one person to another by sexual union, physical contact, eating together, sleeping together, sitting together, and the use of same clothes, garlands and pastes." This book has been dated to about the sixth century BC.
A basic form of contagion theory was proposed by Persian physician Ibn Sina (known as Avicenna in Europe) in The Canon of Medicine (1025), which later became the most authoritative medical textbook in Europe up until the 16th century. In Book IV of the Canon, Ibn Sina discussed epidemics, outlining the classical miasma theory and attempting to blend it with his own early contagion theory. He mentioned that people can transmit disease to others by breath, noted contagion with tuberculosis, and discussed the transmission of disease through water and dirt. The concept of invisible contagion was later discussed by several Islamic scholars in the Ayyubid Sultanate, who referred to contagious substances as najasat ("impure substances"). The fiqh scholar Ibn al-Haj al-Abdari (died 1336), while discussing Islamic diet and hygiene, gave warnings about how contagion can contaminate water, food, and garments, and could spread through the water supply, and may have implied contagion to be unseen particles.
When the Black Death bubonic plague reached Al-Andalus in the 14th century, the Arab physicians Ibn Khatima and Ibn al-Khatib (1313–1374) hypothesised that infectious diseases were caused by "minute bodies" and described how they can be transmitted through garments, vessels and earrings. Ideas of contagion became more popular in Europe during the Renaissance, particularly through the writing of the Italian physician Girolamo Fracastoro. Anton van Leeuwenhoek (1632–1723) advanced the science of microscopy by being the first to observe microorganisms, allowing for easy visualization of bacteria.
In the mid-19th century John Snow and William Budd did important work demonstrating the contagiousness of typhoid and cholera through contaminated water. Both are credited with decreasing epidemics of cholera in their towns by implementing measures to prevent contamination of water. Louis Pasteur proved beyond doubt that certain diseases are caused by infectious agents, and developed a vaccine for rabies. Robert Koch provided the study of infectious diseases with a scientific basis known as Koch's postulates. Edward Jenner, Jonas Salk and Albert Sabin developed effective vaccines for smallpox and polio, which would later result in the eradication and near-eradication of these diseases, respectively. Alexander Fleming discovered the world's first antibiotic, penicillin, which Florey and Chain then developed. Gerhard Domagk developed sulphonamides, the first broad spectrum synthetic antibacterial drugs.
Medical specialists
The medical treatment of infectious diseases falls within the field of infectious disease medicine, and in some cases the study of their propagation pertains to the field of epidemiology. Generally, infections are initially diagnosed by primary care physicians or internal medicine specialists. For example, an "uncomplicated" pneumonia will generally be treated by the internist or the pulmonologist (lung physician). The work of the infectious diseases specialist therefore entails working with both patients and general practitioners, as well as laboratory scientists, immunologists, bacteriologists and other specialists.
An infectious disease team may be alerted when:
The disease has not been definitively diagnosed after an initial workup
The patient is immunocompromised (for example, in AIDS or after chemotherapy)
The infectious agent is of an uncommon nature (e.g., tropical diseases)
The disease has not responded to first-line antibiotics
The disease might be dangerous to other patients, and the patient might have to be isolated
Society and culture
Several studies have reported associations between pathogen load in an area and human behavior. Higher pathogen load is associated with decreased size of ethnic and religious groups in an area. This may be due to high pathogen load favoring avoidance of other groups, which may reduce pathogen transmission, or to high pathogen load preventing the creation of large settlements and armies that enforce a common culture. Higher pathogen load is also associated with more restricted sexual behavior, which may reduce pathogen transmission. It is also associated with higher preferences for health and attractiveness in mates. Higher fertility rates and shorter or less parental care per child are another association, which may be a compensation for the higher mortality rate. There is also an association with polygyny, which may be due to higher pathogen load making the selection of males with high genetic resistance increasingly important. Higher pathogen load is also associated with more collectivism and less individualism, which may limit contacts with outside groups and infections. There are alternative explanations for at least some of these associations, although some of these explanations may in turn ultimately be due to pathogen load. Thus, polygyny may also be due to a lower male-to-female ratio in these areas, but this may ultimately be due to male infants having increased mortality from infectious diseases. Another example is that poor socioeconomic factors may ultimately be due in part to high pathogen load preventing economic development.
Fossil record
Evidence of infection in fossil remains is a subject of interest for paleopathologists, scientists who study occurrences of injuries and illness in extinct life forms. Signs of infection have been discovered in the bones of carnivorous dinosaurs. When present, however, these infections seem to have been confined to small regions of the body. A skull attributed to the early carnivorous dinosaur Herrerasaurus ischigualastensis exhibits pit-like wounds surrounded by swollen and porous bone. The unusual texture of the bone around the wounds suggests they were affected by a short-lived, non-lethal infection. Scientists who studied the skull speculated that the bite marks were received in a fight with another Herrerasaurus. Other carnivorous dinosaurs with documented evidence of infection include Acrocanthosaurus, Allosaurus, Tyrannosaurus and a tyrannosaur from the Kirtland Formation. The infections from both tyrannosaurs were received by being bitten during a fight, like the Herrerasaurus specimen.
Outer space
A 2006 Space Shuttle experiment found that Salmonella typhimurium, a bacterium that can cause food poisoning, became more virulent when cultivated in space. On April 29, 2013, scientists at Rensselaer Polytechnic Institute, funded by NASA, reported that, during spaceflight on the International Space Station, microbes seem to adapt to the space environment in ways "not observed on Earth" and in ways that "can lead to increases in growth and virulence". More recently, in 2017, bacteria were found to be more resistant to antibiotics and to thrive in the near-weightlessness of space. Microorganisms have been observed to survive the vacuum of outer space.
See also
Biological hazard
Blood-borne disease
Coinfection
Copenhagen Consensus
Cordon sanitaire
Epidemiological transition
Foodborne illness
Hospital-acquired infection
Eradication of infectious diseases
Infection control
Isolation (health care)
List of causes of death by rate
List of diseases caused by insects
List of infectious diseases
Mathematical modelling of infectious disease
Multiplicity of infection
Neglected tropical diseases
Outline of infectious disease concepts
Sentinel surveillance
Spillover infection
Threshold host density
Transmission (medicine)
Vaccine-preventable diseases
Waterborne diseases
References
External links
European Center for Disease Prevention and Control
U.S. Centers for Disease Control and Prevention
Infectious Disease Society of America (IDSA)
Vaccine Research Center Information concerning vaccine research clinical trials for Emerging and re-Emerging Infectious Diseases.
Microbes & Infection (journal)
Epidemiology
Pathogenesis

In pathology, pathogenesis is the process by which a disease or disorder develops. It can include factors which contribute not only to the onset of the disease or disorder, but also to its progression and maintenance. The word comes from the Greek πάθος (pathos, "suffering, disease") and γένεσις (genesis, "creation").
Description
Types of pathogenesis include microbial infection, inflammation, malignancy and tissue breakdown. For example, bacterial pathogenesis is the process by which bacteria cause infectious illness.
Most diseases are caused by multiple processes. For example, certain cancers arise from dysfunction of the immune system (such as skin tumors and lymphoma after a renal transplant, which requires immunosuppression). As another example, Streptococcus pneumoniae is spread through contact with respiratory secretions, such as saliva, mucus, or cough droplets from an infected person; it then colonizes the upper respiratory tract and begins to multiply.
The pathogenic mechanisms of a disease (or condition) are set in motion by the underlying causes, which if controlled would allow the disease to be prevented. Often, a potential cause is identified by epidemiological observations before a pathological link can be drawn between the cause and the disease. The pathological perspective can be directly integrated into an epidemiological approach in the interdisciplinary field of molecular pathological epidemiology. Molecular pathological epidemiology can help to assess pathogenesis and causality by means of linking a potential risk factor to molecular pathologic signatures of a disease. Thus, the molecular pathological epidemiology paradigm can advance the area of causal inference.
See also
Causal inference
Epidemiology
Molecular pathological epidemiology
Molecular pathology
Pathology
Pathophysiology
Salutogenesis
References
Further reading
Pathology
Community-acquired pneumonia

Community-acquired pneumonia (CAP) refers to pneumonia (any of several lung diseases) contracted by a person outside of the healthcare system. In contrast, hospital-acquired pneumonia (HAP) is seen in patients who have recently visited a hospital or who live in long-term care facilities. CAP is common, affecting people of all ages, and its symptoms occur as a result of oxygen-absorbing areas of the lung (alveoli) filling with fluid. This inhibits lung function, causing dyspnea, fever, chest pains and cough.
CAP, the most common type of pneumonia, is a leading cause of illness and death worldwide. Its causes include bacteria, viruses, fungi and parasites. CAP is diagnosed by assessing symptoms, performing a physical examination, by x-ray or by sputum examination. Patients with CAP sometimes require hospitalization, and it is treated primarily with antibiotics, antipyretics and cough medicine. Some forms of CAP can be prevented by vaccination and by abstaining from tobacco products.
Signs and symptoms
Common symptoms
Coughing which produces greenish or yellow sputum
A high fever, accompanied by sweating, chills and shivering
Sharp, stabbing chest pains
Rapid, shallow, often painful breathing
Less-common symptoms
Coughing up blood (hemoptysis)
Headaches, including migraines
Loss of appetite
Excessive fatigue
Bluish skin (cyanosis)
Nausea
Vomiting
Diarrhea
Joint pain (arthralgia)
Muscle aches (myalgia)
Rapid heartbeat
Dizziness or lightheadedness
In the elderly
New or worsening confusion
Hypothermia
Poor coordination, which may lead to falls
In infants
Unusual sleepiness
Yellowing of the skin (jaundice)
Difficulty feeding
Complications
Major complications of CAP include:
Sepsis - A life-threatening reaction to infection. A common cause of sepsis is bacterial pneumonia, frequently the result of infection with Streptococcus pneumoniae. Patients with sepsis require intensive care with blood pressure monitoring and support against hypotension. Sepsis can cause liver, kidney and heart damage.
Respiratory failure - CAP patients often have dyspnea, which may require support. Non-invasive machines (such as bilevel positive airway pressure), a tracheal tube or a ventilator may be used.
Pleural effusion and empyema - Microorganisms from the lung may trigger fluid collection in the pleural cavity, or empyema. Pleural fluid, if present, should be collected with a needle and examined. Depending on the results, complete drainage of the fluid with a chest tube may be necessary to prevent proliferation of the infection. Antibiotics, which do not penetrate the pleural cavity well, are less effective.
Abscess - A pocket of fluid and bacteria may appear on X-ray as a cavity in the lung. Abscesses, typical of aspiration pneumonia, usually contain a mixture of anaerobic bacteria. Although antibiotics can usually cure abscesses, sometimes they require drainage by a surgeon or radiologist.
Causes
Many different microorganisms can cause CAP. However, the most common cause is Streptococcus pneumoniae. Certain groups of people are more susceptible to CAP-causing pathogens - infants, adults with chronic conditions (such as chronic obstructive pulmonary disease), and senior citizens. Alcoholics and others with compromised immune systems are more likely to develop CAP from Haemophilus influenzae or Pneumocystis jirovecii. A definitive cause is identified in only half the cases.
Neonates and infants
It is possible for a fetus to develop a lung infection before birth by aspirating infected amniotic fluid or through a blood-borne infection which crossed the placenta. Infants can also inhale contaminated fluid from the vagina at birth. The most prevalent pathogen causing CAP in newborns is Streptococcus agalactiae, also known as group-B streptococcus (GBS). GBS causes more than half of CAP in the first week after birth. Other bacterial causes of neonatal CAP include Listeria monocytogenes and a variety of mycobacteria. CAP-causing viruses may also be transferred from mother to child; herpes simplex virus, the most common, is life-threatening, and adenoviridae, mumps and enterovirus can also cause pneumonia. Another cause of neonatal CAP is Chlamydia trachomatis, which, though acquired at birth, does not cause pneumonia until two to four weeks later. It usually presents with no fever and a characteristic, staccato cough.
CAP in older infants reflects increased exposure to microorganisms, with common bacterial causes including Streptococcus pneumoniae, Escherichia coli, Klebsiella pneumoniae, Moraxella catarrhalis and Staphylococcus aureus. Maternally-derived syphilis is also a cause of CAP in infants. Viral causes include human respiratory syncytial virus (RSV), human metapneumovirus, adenovirus, human parainfluenza viruses, influenza and rhinovirus, and RSV is a common source of illness and hospitalization in infants. CAP caused by fungi or parasites is not usually seen in otherwise-healthy infants.
Children
Although children older than one month tend to be at risk for the same microorganisms as adults, children under five years of age are much less likely to have pneumonia caused by Mycoplasma pneumoniae, Chlamydophila pneumoniae or Legionella pneumophila than older children. In contrast, older children and teenagers are more likely to acquire Mycoplasma pneumoniae and Chlamydophila pneumoniae than adults.
Adults
A full spectrum of microorganisms is responsible for CAP in adults, and patients with certain risk factors are more susceptible to infections by certain groups of microorganisms. Identifying people at risk for infection by these organisms aids in appropriate treatment.
Many less-common organisms can cause CAP in adults; these may be determined by identifying specific risk factors, or when treatment for more common causes fails.
Risk factors
Some patients have an underlying problem which increases their risk of infection. Some risk factors are:
Obstruction - When part of the airway (bronchus) leading to the alveoli is obstructed, the lung cannot eliminate fluid; this can lead to pneumonia. One cause of obstruction, especially in young children, is inhalation of a foreign object such as a marble or toy. The object lodges in a small airway, and pneumonia develops in the obstructed area of the lung. Another cause of obstruction is lung cancer, which can block the flow of air.
Lung disease - Patients with underlying lung disease are more likely to develop pneumonia. Diseases such as emphysema and habits such as smoking result in more frequent and more severe bouts of pneumonia. In children, recurrent pneumonia may indicate cystic fibrosis or pulmonary sequestration.
Immune problems - Immune-deficient patients, such as those with HIV/AIDS, are more likely to develop pneumonia. Other immune problems that increase the risk of developing pneumonia range from severe childhood immune deficiencies, such as Wiskott–Aldrich syndrome, to the less severe common variable immunodeficiency.
Pathophysiology
The symptoms of CAP are the result of lung infection by microorganisms and the response of the immune system to the infection. Mechanisms of infection are different for viruses and other microorganisms.
Viruses
Up to 20 percent of CAP cases can be attributed to viruses. The most common viral causes are influenza, parainfluenza, human respiratory syncytial virus, human metapneumovirus and adenovirus. Less common viruses which may cause serious illness include chickenpox, SARS, avian flu and hantavirus.
Typically, a virus enters the lungs through the inhalation of water droplets and invades the cells lining the airways and the alveoli. This leads to cell death; the cells are killed by the virus or they self-destruct. Further lung damage occurs when the immune system responds to the infection. White blood cells, particularly lymphocytes, activate chemicals known as cytokines which cause fluid to leak into the alveoli. The combination of cell destruction and fluid-filled alveoli interrupts the transportation of oxygen into the bloodstream. In addition to their effects on the lungs, many viruses affect other organs. Viral infections weaken the immune system, making the body more susceptible to bacterial infection, including bacterial pneumonia.
Bacteria and fungi
Although most cases of bacterial pneumonia are caused by Streptococcus pneumoniae, infections by atypical bacteria such as Mycoplasma pneumoniae, Chlamydophila pneumoniae, and Legionella pneumophila can also cause CAP. Enteric gram-negative bacteria, such as Escherichia coli and Klebsiella pneumoniae, are a group of bacteria that typically live in the large intestine; contamination of food and water by these bacteria can result in outbreaks of pneumonia. Pseudomonas aeruginosa, an uncommon cause of CAP, is a bacterium that is difficult to treat.
Bacteria and fungi typically enter the lungs by inhalation of water droplets, although they can reach the lung through the bloodstream if an infection is present. In the alveoli, bacteria and fungi travel into the spaces between cells and adjacent alveoli through connecting pores. The immune system responds by releasing neutrophil granulocytes, white blood cells responsible for attacking microorganisms, into the lungs. The neutrophils engulf and kill the microorganisms, releasing cytokines which activate the entire immune system. This response causes fever, chills and fatigue, common symptoms of CAP. The neutrophils, bacteria and fluids leaked from surrounding blood vessels fill the alveoli, impairing oxygen transport. Bacteria may travel from the lung to the bloodstream, causing septic shock (very low blood pressure which damages the brain, kidney, and heart).
Parasites
A variety of parasites can affect the lungs, generally entering the body through the skin or by being swallowed. They then travel to the lungs through the blood, where the combination of cell destruction and immune response disrupts oxygen transport.
Diagnosis
Patients with symptoms of CAP require evaluation. Diagnosis of pneumonia is made clinically, rather than on the basis of a particular test. Evaluation begins with a physical examination by a health provider, which may reveal fever, an increased respiratory rate (tachypnea), low blood pressure (hypotension), a fast heart rate (tachycardia) and changes in the amount of oxygen in the blood. Palpating the chest as it expands and tapping the chest wall to identify dull, non-resonant areas can identify stiffness and fluid, signs of CAP. Listening to the lungs with a stethoscope (auscultation) can also reveal signs associated with CAP. A lack of normal breath sounds or the presence of crackles can indicate fluid consolidation. Increased vibration of the chest when speaking, known as tactile fremitus, and increased volume of whispered speech during auscultation can also indicate the presence of fluid.
Several tests can identify the cause of CAP. Blood cultures can isolate bacteria or fungi in the bloodstream. Sputum Gram staining and culture can also reveal the causative microorganism. In severe cases, bronchoscopy can collect fluid for culture. Special tests, such as urinalysis, can be performed if an uncommon microorganism is suspected.
Chest X-rays and X-ray computed tomography (CT) can reveal areas of opacity (seen as white), indicating consolidation. CAP does not always appear on x-rays, sometimes because the disease is in its initial stages or involves a part of the lung not clearly visible on x-ray. In some cases, chest CT can reveal pneumonia not seen on x-rays. However, congestive heart failure or other types of lung damage can mimic CAP on x-ray.
When signs of pneumonia are discovered during evaluation, chest X-rays and examination of the blood and sputum for infectious microorganisms may be done to support a diagnosis of CAP. The diagnostic tools employed will depend on the severity of illness, local practices and concern about complications of the infection. All patients with CAP should have their blood oxygen monitored with pulse oximetry. In some cases, arterial blood gas analysis may be required to determine the amount of oxygen in the blood. A complete blood count (CBC) may reveal extra white blood cells, indicating infection.
Prevention
CAP may be prevented by treating underlying illnesses that increase its risk, by smoking cessation, and by vaccination. Vaccination against Haemophilus influenzae and Streptococcus pneumoniae in the first year of life has been protective against childhood CAP. A vaccine against Streptococcus pneumoniae, available for adults, is recommended for healthy individuals over 65 and all adults with COPD, heart failure, diabetes mellitus, cirrhosis, alcoholism, cerebrospinal fluid leaks or who have had a splenectomy. Re-vaccination may be required after five or ten years.
Patients who have been vaccinated against Streptococcus pneumoniae, health professionals, nursing-home residents and pregnant women should be vaccinated annually against influenza. During an outbreak, drugs such as amantadine, rimantadine, zanamivir and oseltamivir have been demonstrated to prevent influenza.
Treatment
CAP is treated with an antibiotic that kills the infecting microorganism; treatment also aims at managing complications. If the causative microorganism is unidentified, which is often the case, the laboratory identifies the most effective antibiotic; this may take several days.
Health professionals consider a person's risk factors for various organisms when choosing an initial antibiotic. Additional consideration is given to the treatment setting; most patients are cured by oral medication, while others must be hospitalized for intravenous therapy or intensive care.
Current treatment guidelines recommend a beta-lactam, like amoxicillin, and a macrolide, like azithromycin or clarithromycin, or a quinolone, such as levofloxacin. Doxycycline is the antibiotic of choice in the UK for atypical bacteria, due to increased Clostridioides difficile infection in hospital patients linked to the increased use of clarithromycin.
Ceftriaxone and azithromycin are often used to treat community-acquired pneumonia, which usually presents with a few days of cough, fever, and shortness of breath. Chest x-ray typically reveals a lobar infiltrate (rather than diffuse).
Newborns
Most newborn infants with CAP are hospitalized, receiving IV ampicillin and gentamicin for at least ten days to treat the common causative agents Streptococcus agalactiae, Listeria monocytogenes and Escherichia coli. To treat the herpes simplex virus, IV acyclovir is administered for 21 days.
Children
Treatment of CAP in children depends on the child's age and the severity of illness. Children under five are not usually treated for atypical bacteria. If hospitalization is not required, a seven-day course of amoxicillin is often prescribed, with co-trimoxazole as an alternative when there is allergy to penicillins. Further studies are needed to confirm the efficacy of newer antibiotics. With the increase in drug-resistant Streptococcus pneumoniae, antibiotics such as cefpodoxime may become more popular. Hospitalized children receive intravenous ampicillin, ceftriaxone or cefotaxime, and a recent study found that a three-day course of antibiotics seems sufficient for most mild-to-moderate CAP in children.
Adults
In 2001 the American Thoracic Society, drawing on the work of the British and Canadian Thoracic Societies, established guidelines for the management of adult CAP by dividing patients into four categories based on common organisms:
Healthy outpatients without risk factors: This group (the largest) is composed of otherwise-healthy patients without risk factors for DRSP, enteric gram-negative bacteria, Pseudomonas or other, less common, causes of CAP. Primary microorganisms are viruses, atypical bacteria, penicillin-sensitive Streptococcus pneumoniae and Haemophilus influenzae. Recommended drugs are macrolide antibiotics, such as azithromycin or clarithromycin, for seven to ten days. A shorter course of these antibiotics has been investigated; however, there is not sufficient evidence to make a recommendation.
Outpatients with underlying illness or risk factors: Although this group does not require hospitalization, they have underlying health problems such as emphysema or heart failure or are at risk for DRSP or enteric gram-negative bacteria. They may be treated with a quinolone active against Streptococcus pneumoniae (such as levofloxacin) or a β-lactam antibiotic (such as cefpodoxime, cefuroxime, amoxicillin or amoxicillin/clavulanic acid) and a macrolide antibiotic, such as azithromycin or clarithromycin, for seven to ten days.
Hospitalized patients without risk for Pseudomonas: This group requires intravenous antibiotics: either a quinolone active against Streptococcus pneumoniae (such as levofloxacin), or a β-lactam antibiotic (such as cefotaxime, ceftriaxone, ampicillin/sulbactam or high-dose ampicillin) plus a macrolide antibiotic (such as azithromycin or clarithromycin), for seven to ten days.
Intensive-care patients at risk for Pseudomonas aeruginosa: These patients require antibiotics targeting this difficult-to-eradicate bacterium. One regimen is an intravenous antipseudomonal beta-lactam such as cefepime, imipenem, meropenem or piperacillin/tazobactam, plus an IV antipseudomonal fluoroquinolone such as levofloxacin. Another is an IV antipseudomonal beta-lactam such as cefepime, imipenem, meropenem or piperacillin/tazobactam, plus an aminoglycoside such as gentamicin or tobramycin, plus a macrolide (such as azithromycin) or a nonpseudomonal fluoroquinolone such as ciprofloxacin.
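As a schematic, non-authoritative summary of the four 2001 guideline categories described above, the sketch below encodes them as a simple data structure. The drug names are copied from the text above; the grouping labels are paraphrases, and none of this is current prescribing advice.

```python
# Schematic summary of the four adult CAP categories described in the text
# above (2001 ATS guideline wording); illustrative only, not prescribing advice.
cap_categories = {
    "healthy outpatient, no risk factors":
        "macrolide (azithromycin or clarithromycin), 7-10 days",
    "outpatient with underlying illness or risk factors":
        "anti-pneumococcal quinolone (e.g. levofloxacin), or "
        "beta-lactam plus macrolide, 7-10 days",
    "hospitalized, no Pseudomonas risk":
        "IV anti-pneumococcal quinolone, or IV beta-lactam plus macrolide",
    "intensive care, Pseudomonas risk":
        "IV antipseudomonal beta-lactam plus antipseudomonal quinolone, or "
        "antipseudomonal beta-lactam plus aminoglycoside plus macrolide/quinolone",
}

for group, regimen in cap_categories.items():
    print(f"{group}: {regimen}")
```

Such a lookup is only a reading aid for the prose above; the actual guideline text contains qualifications that a flat table cannot capture.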
For mild-to-moderate CAP, shorter courses of antibiotics (3–7 days) seem to be sufficient.
Some patients with CAP will be at increased risk of death despite antimicrobial treatment. A key reason for this is the host's exaggerated inflammatory response. There is a tension between controlling the infection on one hand and minimizing damage to other tissues on the other. Some recent research focuses on immunomodulatory therapy that can modulate the immune response in order to reduce injury to the lung and other affected organs such as the heart. Although the evidence for these agents has not resulted in their routine use, their potential benefits are promising.
Hospitalization
Some CAP patients require intensive care, with clinical prediction rules such as the pneumonia severity index and CURB-65 guiding the decision whether or not to hospitalize. Factors increasing the need for hospitalization include:
Age greater than 65
Underlying chronic illnesses
Respiratory rate greater than 30 per minute
Systolic blood pressure less than 90 mmHg
Heart rate greater than 125 per minute
Temperature below 35 or over 40 °C
Confusion
Evidence of infection outside the lung
Laboratory results indicating hospitalization include:
Arterial oxygen tension less than 60 mm Hg
Carbon dioxide over 50 mmHg or pH under 7.35 while breathing room air
Hematocrit under 30 percent
Creatinine over 1.2 mg/dl or blood urea nitrogen over 20 mg/dl
White-blood-cell count under 4 × 10^9/L or over 30 × 10^9/L
Neutrophil count under 1 x 10^9/L
X-ray findings indicating hospitalization include:
Involvement of more than one lobe of the lung
Presence of a cavity
Pleural effusion
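The hospitalization criteria above overlap with the CURB-65 rule mentioned earlier. The following is a minimal sketch of CURB-65-style scoring, assuming the commonly published cutoffs (confusion, urea > 7 mmol/L, respiratory rate ≥ 30, systolic BP < 90 or diastolic ≤ 60 mmHg, age ≥ 65); it is illustrative only and not a substitute for the full published rule or for clinical judgment.

```python
# Illustrative CURB-65-style score (commonly published cutoffs assumed).
# Each criterion contributes one point; higher totals are generally
# associated with a greater need for hospitalization. Not a clinical tool.

def curb65(confusion, urea_mmol_l, resp_rate, sys_bp, dia_bp, age):
    score = 0
    score += 1 if confusion else 0
    score += 1 if urea_mmol_l > 7 else 0
    score += 1 if resp_rate >= 30 else 0
    score += 1 if sys_bp < 90 or dia_bp <= 60 else 0
    score += 1 if age >= 65 else 0
    return score

# Example: a 70-year-old with a respiratory rate of 32 and otherwise
# unremarkable values scores 2.
print(curb65(confusion=False, urea_mmol_l=6.0, resp_rate=32,
             sys_bp=120, dia_bp=80, age=70))
```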
Prognosis
The CAP outpatient mortality rate is less than one percent, with fever typically responding within the first two days of therapy, and other symptoms abating in the first week. However, X-rays may remain abnormal for at least a month. Hospitalized patients have an average mortality rate of 12 percent, with the rate rising to 40 percent for patients with bloodstream infections or those who require intensive care. Factors increasing mortality are identical to those indicating hospitalization.
When CAP does not respond to treatment, this may indicate a previously unknown health problem, a treatment complication, inappropriate antibiotics for the causative organism, a previously unsuspected microorganism (such as tuberculosis) or a condition mimicking CAP (such as granuloma with polyangiitis). Additional tests include X-ray computed tomography, bronchoscopy or lung biopsy.
Epidemiology
CAP is common worldwide, and is a major cause of death in all age groups. In children, most deaths (over two million a year) occur in the newborn period. According to a World Health Organization estimate, one in three newborn deaths result from pneumonia. Mortality decreases with age until late adulthood, with the elderly at risk for CAP and its associated mortality.
More CAP cases occur during the winter than at other times of the year. CAP is more common in males than females, and more common in black people than Caucasians. Patients with underlying illnesses (such as Alzheimer's disease, cystic fibrosis, COPD, tobacco smoking, alcoholism or immune-system problems) have an increased risk of developing pneumonia.
See also
Bacterial pneumonia
Viral pneumonia
Fungal pneumonia
Parasitic pneumonia
References
External links
Infectious Diseases Society of America/American Thoracic Society Consensus Guidelines on the Management of Community-Acquired Pneumonia in Adults PDF
Pneumonia
Infectious diseases
Pathogen

In biology, a pathogen (from Greek πάθος, pathos, "suffering", "passion", and -γενής, -genēs, "producer of"), in the oldest and broadest sense, is any organism or agent that can produce disease. A pathogen may also be referred to as an infectious agent, or simply a germ.
The term pathogen came into use in the 1880s. Typically, the term pathogen is used to describe an infectious microorganism or agent, such as a virus, bacterium, protozoan, prion, viroid, or fungus. Small animals, such as helminths and insects, can also cause or transmit disease. However, these animals are usually referred to as parasites rather than pathogens. The scientific study of microscopic organisms, including microscopic pathogenic organisms, is called microbiology, while parasitology refers to the scientific study of parasites and the organisms that host them.
There are several pathways through which pathogens can invade a host. The principal pathways have different episodic time frames, but soil has the longest or most persistent potential for harboring a pathogen.
Diseases in humans that are caused by infectious agents are known as pathogenic diseases. Not all diseases are caused by pathogens, such as black lung from exposure to the pollutant coal dust, genetic disorders like sickle cell disease, and autoimmune diseases like lupus.
Pathogenicity
Pathogenicity is the potential disease-causing capacity of pathogens, involving a combination of infectivity (pathogen's ability to infect hosts) and virulence (severity of host disease). Koch's postulates are used to establish causal relationships between microbial pathogens and diseases. Whereas meningitis can be caused by a variety of bacterial, viral, fungal, and parasitic pathogens, cholera is only caused by some strains of Vibrio cholerae. Additionally, some pathogens may only cause disease in hosts with an immunodeficiency. These opportunistic infections often involve hospital-acquired infections among patients already combating another condition.
Infectivity involves pathogen transmission through direct contact with the bodily fluids or airborne droplets of infected hosts, indirect contact involving contaminated areas/items, or transfer by living vectors like mosquitos and ticks. The basic reproduction number of an infection is the expected number of subsequent cases it is likely to cause through transmission.
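To make the basic reproduction number concrete, the sketch below shows how it arises in the standard SIR (susceptible–infected–recovered) compartment model, where R0 = beta / gamma. The transmission rate beta and recovery rate gamma used here are illustrative assumptions, not measurements for any particular pathogen.

```python
# Minimal SIR sketch: R0 = beta / gamma (illustrative parameters only).

def simulate_sir(beta, gamma, s0=0.999, i0=0.001, days=160, dt=0.1):
    """Integrate the SIR equations with simple Euler steps.

    beta  - transmission rate per day (assumed, illustrative)
    gamma - recovery rate per day (1 / infectious period)
    Returns the peak infected fraction and the final susceptible fraction.
    """
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak, s

beta, gamma = 0.5, 0.25      # illustrative: 0.5 contacts/day, 4-day infectious period
r0 = beta / gamma            # expected secondary cases in a fully susceptible population
peak_infected, final_susceptible = simulate_sir(beta, gamma)
print(f"R0 = {r0:.1f}")
print(f"peak infected fraction  ~ {peak_infected:.2f}")
print(f"never-infected fraction ~ {final_susceptible:.2f}")
```

In this toy model an R0 above 1 lets the infection spread; lowering beta (fewer effective contacts) or raising gamma (faster recovery) pushes R0 down.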
Virulence involves pathogens extracting host nutrients for their survival and evading host immune systems by producing microbial toxins and causing immunosuppression. Optimal virulence describes a theorized equilibrium in which a pathogen spreads to additional hosts to parasitize their resources while keeping its virulence low enough that hosts survive long enough to transmit it, including vertically to their offspring.
Types
Algae
Algae are single-celled eukaryotes that are generally non-pathogenic. Green algae from the genus Prototheca lack chlorophyll and are known to cause the disease protothecosis in humans, dogs, cats, and cattle, typically involving the soil-associated species Prototheca wickerhamii.
Bacteria
Bacteria are single-celled prokaryotes that range in size from about 0.15 to 700 μm. While the vast majority are either harmless or beneficial to their hosts, such as members of the human gut microbiome that support digestion, a small percentage are pathogenic and cause infectious diseases. Bacterial virulence factors include adherence factors to attach to host cells, invasion factors supporting entry into host cells, capsules to prevent opsonization and phagocytosis, toxins, and siderophores to acquire iron.
The bacterial disease tuberculosis, primarily caused by Mycobacterium tuberculosis, has one of the highest disease burdens, killing 1.6 million people in 2021, mostly in Africa and Southeast Asia. Bacterial pneumonia is primarily caused by Streptococcus pneumoniae, Staphylococcus aureus, Klebsiella pneumoniae, and Haemophilus influenzae. Foodborne illnesses typically involve Campylobacter, Clostridium perfringens, Escherichia coli, Listeria monocytogenes, and Salmonella. Other infectious diseases caused by pathogenic bacteria include tetanus, typhoid fever, diphtheria, and leprosy.
Fungi
Fungi are eukaryotic organisms that can function as pathogens. There are approximately 300 known fungi that are pathogenic to humans, including Candida albicans, which is the most common cause of thrush, and Cryptococcus neoformans, which can cause a severe form of meningitis. Typical fungal spores are 4.7 μm long or smaller.
Prions
Prions are misfolded proteins that transmit their abnormal folding pattern to other copies of the protein without using nucleic acids. Besides obtaining prions from others, these misfolded proteins arise from genetic differences, either due to family history or sporadic mutations. Plants can take up prions from contaminated soil and transport them into their stems and leaves, potentially transmitting the prions to herbivorous animals. Additionally, wood, rocks, plastic, glass, cement, stainless steel, and aluminum have been shown to bind, retain, and release prions, showing that the proteins resist environmental degradation.
Prions are best known for causing transmissible spongiform encephalopathy (TSE) diseases like Creutzfeldt–Jakob disease (CJD), variant Creutzfeldt–Jakob disease (vCJD), Gerstmann–Sträussler–Scheinker syndrome (GSS), fatal familial insomnia (FFI), and kuru in humans.
While prions are typically viewed as pathogens that cause protein amyloid fibers to accumulate into neurodegenerative plaques, Susan Lindquist led research showing that yeast use prions to pass on evolutionarily beneficial traits.
Viroids
Not to be confused with virusoids or viruses, viroids are the smallest known infectious pathogens. Viroids are small single-stranded, circular RNA that are only known to cause plant diseases, such as the potato spindle tuber viroid that affects various agricultural crops. Viroid RNA is not protected by a protein coat, and it does not encode any proteins, only acting as a ribozyme to catalyze other biochemical reactions.
Viruses
Viruses are generally between 20 and 200 nm in diameter. For survival and replication, viruses inject their genome into host cells, insert those genes into the host genome, and hijack the host's machinery to produce hundreds of new viruses until the cell bursts open to release them for additional infections. The lytic cycle describes this active state of rapidly killing hosts, while the lysogenic cycle describes potentially hundreds of years of dormancy while integrated in the host genome. Alongside the taxonomy organized by the International Committee on Taxonomy of Viruses (ICTV), the Baltimore classification separates viruses by seven classes of mRNA production:
I: dsDNA viruses (e.g., Adenoviruses, Herpesviruses, and Poxviruses) cause herpes, chickenpox, and smallpox
II: ssDNA viruses (+ strand or "sense") DNA (e.g., Parvoviruses) include parvovirus B19
III: dsRNA viruses (e.g., Reoviruses) include rotaviruses
IV: (+)ssRNA viruses (+ strand or sense) RNA (e.g., Coronaviruses, Picornaviruses, and Togaviruses) cause COVID-19, dengue fever, Hepatitis A, Hepatitis C, rubella, and yellow fever
V: (−)ssRNA viruses (− strand or antisense) RNA (e.g., Orthomyxoviruses and Rhabdoviruses) cause ebola, influenza, measles, mumps, and rabies
VI: ssRNA-RT viruses (+ strand or sense) RNA with DNA intermediate in life-cycle (e.g., Retroviruses) cause HIV/AIDS
VII: dsDNA-RT viruses DNA with RNA intermediate in life-cycle (e.g., Hepadnaviruses) cause Hepatitis B
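As a compact way to view the grouping above, the sketch below re-encodes the Baltimore classes as a small data structure keyed by class number; the genome labels and example families are taken from the list above.

```python
# Baltimore classes keyed by genome type, with example families drawn
# from the list above.
baltimore = {
    "I":   ("dsDNA",     ["Herpesviruses", "Poxviruses"]),
    "II":  ("ssDNA (+)", ["Parvoviruses"]),
    "III": ("dsRNA",     ["Reoviruses"]),
    "IV":  ("(+)ssRNA",  ["Coronaviruses", "Picornaviruses"]),
    "V":   ("(-)ssRNA",  ["Orthomyxoviruses", "Rhabdoviruses"]),
    "VI":  ("ssRNA-RT",  ["Retroviruses"]),
    "VII": ("dsDNA-RT",  ["Hepadnaviruses"]),
}

for cls, (genome, examples) in baltimore.items():
    print(f"Class {cls:>3}: {genome:<9} e.g. {', '.join(examples)}")
```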
Other parasites
Protozoans are single-celled eukaryotes that feed on microorganisms and organic tissues. Many protozoans act as pathogenic parasites to cause diseases like malaria, amoebiasis, giardiasis, toxoplasmosis, cryptosporidiosis, trichomoniasis, Chagas disease, leishmaniasis, African trypanosomiasis (sleeping sickness), Acanthamoeba keratitis, and primary amoebic meningoencephalitis (naegleriasis).
Parasitic worms (helminths) are macroparasites that can be seen by the naked eye. Worms live and feed in their living host, acquiring nutrients and shelter in the digestive tract or bloodstream of their host. They also manipulate the host's immune system by secreting immunomodulatory products which allows them to live in their host for years. Helminthiasis is the generalized term for parasitic worm infections, which typically involve roundworms, tapeworms, and flatworms.
Pathogen hosts
Bacteria
While bacteria are typically viewed as pathogens, they serve as hosts to bacteriophage viruses (commonly known as phages). The bacteriophage life cycle involves the viruses injecting their genome into bacterial cells, inserting those genes into the bacterial genome, and hijacking the bacteria's machinery to produce hundreds of new phages until the cell bursts open to release them for additional infections. Typically, bacteriophages are only capable of infecting a specific species or strain.
Streptococcus pyogenes uses a Cas9 nuclease to cleave foreign DNA matching the Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) associated with bacteriophages, removing the viral genes to avoid infection. This mechanism has been modified for artificial CRISPR gene editing.
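As a toy illustration of the targeting step described above, the sketch below scans a DNA string for 20-nucleotide protospacers followed by the canonical SpCas9 "NGG" PAM motif. The example sequence and the function name are made up for illustration; real target selection also considers the reverse strand, off-target similarity, and other factors.

```python
# Toy sketch of SpCas9-style target selection: find 20-nt protospacers
# immediately upstream of an "NGG" PAM (N = any base). Illustrative only.
import re

def find_targets(dna, guide_len=20):
    """Return (protospacer, pam, pam_position) tuples for every NGG PAM site."""
    dna = dna.upper()
    targets = []
    # Zero-width lookahead so overlapping PAM sites are all reported.
    for m in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = m.start(1)
        if pam_start >= guide_len:
            protospacer = dna[pam_start - guide_len:pam_start]
            targets.append((protospacer, dna[pam_start:pam_start + 3], pam_start))
    return targets

example = "ATGCTAGCTAGGCTTACGATCGATCGTACGGATCCTAGCTAGCTAAGG"  # made-up sequence
for spacer, pam, pos in find_targets(example):
    print(pos, spacer, pam)
```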
Plants
Plants can play host to a wide range of pathogen types, including viruses, bacteria, fungi, nematodes, and even other plants. Notable plant viruses include the papaya ringspot virus, which has caused millions of dollars of damage to farmers in Hawaii and Southeast Asia, and the tobacco mosaic virus which caused scientist Martinus Beijerinck to coin the term "virus" in 1898. Bacterial plant pathogens cause leaf spots, blight, and rot in many plant species. The most common bacterial pathogens for plants are Pseudomonas syringae and Ralstonia solanacearum, which cause leaf browning and other issues in potatoes, tomatoes, and bananas.
Fungi are another major pathogen type for plants. They can cause a wide variety of issues such as shorter plant height, growths or pits on tree trunks, root or seed rot, and leaf spots. Common and serious plant fungi include the rice blast fungus, Dutch elm disease, chestnut blight and the black knot and brown rot diseases of cherries, plums, and peaches. It is estimated that pathogenic fungi alone cause up to a 65% reduction in crop yield.
Overall, plants have a wide array of pathogens and it has been estimated that only 3% of the disease caused by plant pathogens can be managed.
Animals
Animals often get infected with many of the same or similar pathogens as humans including prions, viruses, bacteria, and fungi. While wild animals often get illnesses, the larger danger is for livestock animals. It is estimated that in rural settings, 90% or more of livestock deaths can be attributed to pathogens. Animal transmissible spongiform encephalopathy (TSEs) involving prions include bovine spongiform encephalopathy (mad cow disease), chronic wasting disease, scrapie, transmissible mink encephalopathy, feline spongiform encephalopathy, and ungulate spongiform encephalopathy. Other animal diseases include a variety of immunodeficiency disorders caused by viruses related to human immunodeficiency virus (HIV), such as BIV and FIV.
Humans
Humans can be infected with many types of pathogens, including prions, viruses, bacteria, and fungi, causing symptoms like sneezing, coughing, fever, vomiting, and potentially lethal organ failure. While some symptoms are caused by the pathogenic infection, others are caused by the immune system's efforts to kill the pathogen, such as feverishly high body temperatures meant to denature pathogenic cells.
Treatment
Prions
Despite many attempts, no therapy has been shown to halt the progression of prion diseases.
Viruses
A variety of prevention and treatment options exist for some viral pathogens. Vaccines are one common and effective preventive measure against a variety of viral pathogens. Vaccines prime the immune system of the host, so that when the potential host encounters the virus in the wild, the immune system can defend against infection quickly. Vaccines designed against viruses include annual influenza vaccines and the two-dose MMR vaccine against measles, mumps, and rubella. Vaccines are not available against the viruses responsible for HIV/AIDS, dengue, and chikungunya.
Treatment of viral infections often involves treating the symptoms of the infection, rather than providing medication to combat the viral pathogen itself. Treating the symptoms of a viral infection gives the host immune system time to develop antibodies against the viral pathogen. However, for HIV, highly active antiretroviral therapy (HAART) is conducted to prevent the viral disease from progressing into AIDS as immune cells are lost.
Bacteria
Much like viral pathogens, infection by certain bacterial pathogens can be prevented via vaccines. Vaccines against bacterial pathogens include the anthrax vaccine and pneumococcal vaccine. Many other bacterial pathogens lack vaccines as a preventive measure, but infection by these bacteria can often be treated or prevented with antibiotics. Common antibiotics include amoxicillin, ciprofloxacin, and doxycycline. Each antibiotic has different bacteria that it is effective against and has different mechanisms to kill that bacteria. For example, doxycycline inhibits the synthesis of new proteins in both gram-negative and gram-positive bacteria, which makes it a broad-spectrum antibiotic capable of killing most bacterial species.
Due to misuse of antibiotics, such as prematurely ended prescriptions exposing bacteria to evolutionary pressure under sublethal doses, some bacterial pathogens have developed antibiotic resistance. For example, a genetically distinct strain of Staphylococcus aureus called MRSA is resistant to the commonly prescribed beta-lactam antibiotics. A 2013 report from the Centers for Disease Control and Prevention (CDC) estimated that in the United States, at least 2 million people get an antibiotic-resistant bacterial infection annually, with at least 23,000 of those patients dying from the infection.
Due to their indispensability in combating bacteria, new antibiotics are required for medical care. One target for new antimicrobial medications involves inhibiting DNA methyltransferases, as these proteins control the levels of expression for other genes, such as those encoding virulence factors.
Fungi
Infection by fungal pathogens is treated with anti-fungal medication. Athlete's foot, jock itch, and ringworm are fungal skin infections that are treated with topical anti-fungal medications like clotrimazole. Infections involving the yeast species Candida albicans cause oral thrush and vaginal yeast infections. These internal infections can either be treated with anti-fungal creams or with oral medication. Common anti-fungal drugs for internal infections include the echinocandin family of drugs and fluconazole.
Algae
While algae are commonly not thought of as pathogens, the genus Prototheca causes disease in humans. Treatment for protothecosis is currently under investigation, and there is no consistency in clinical treatment.
Sexual interactions
Many pathogens are capable of sexual interaction. Among pathogenic bacteria, sexual interaction occurs between cells of the same species by the process of genetic transformation. Transformation involves the transfer of DNA from a donor cell to a recipient cell and the integration of the donor DNA into the recipient genome through genetic recombination. The bacterial pathogens Helicobacter pylori, Haemophilus influenzae, Legionella pneumophila, Neisseria gonorrhoeae, and Streptococcus pneumoniae frequently undergo transformation to modify their genome for additional traits and evasion of host immune cells.
Eukaryotic pathogens are often capable of sexual interaction by a process involving meiosis and fertilization. Meiosis involves the intimate pairing of homologous chromosomes and recombination between them. Examples of eukaryotic pathogens capable of sex include the protozoan parasites Plasmodium falciparum, Toxoplasma gondii, Trypanosoma brucei, Giardia intestinalis, and the fungi Aspergillus fumigatus, Candida albicans and Cryptococcus neoformans.
Viruses may also undergo sexual interaction when two or more viral genomes enter the same host cell. This process involves pairing of homologous genomes and recombination between them by a process referred to as multiplicity reactivation. The herpes simplex virus, human immunodeficiency virus, and vaccinia virus undergo this form of sexual interaction.
These processes of sexual recombination between homologous genomes supports repairs to genetic damage caused by environmental stressors and host immune systems.
See also
Antigenic escape
Ecological competence
Emerging Pathogens Institute
Human pathogen
Pathogen-Host Interaction Database (PHI-base)
References
External links
Pronunciation Guide to Microorganisms (1)
Pronunciation Guide to Microorganisms (2)
Infectious diseases
Microbiology
Hazardous materials
Malaise

In medicine, malaise is a feeling of general discomfort, uneasiness or lack of wellbeing, and is often the first sign of an infection or other disease. The word has existed in French since at least the 12th century.
The term is often used figuratively in other contexts, in addition to its meaning as a general state of angst or melancholia.
Cause
Malaise is a non-specific symptom and can be present in anything from the slightest ailment, such as an emotion (causing fainting through a vasovagal response) or hunger (mild hypoglycemia), to the most serious conditions (cancer, stroke, heart attack, internal bleeding, etc.).
Malaise expresses a patient's uneasiness that "something is not right" that may need a medical examination to determine the significance.
Malaise is thought to be caused by the activation of an immune response, and the associated pro-inflammatory cytokines.
Figurative use
"Economic malaise" refers to an economy that is stagnant or in recession (compare depression). The term is particularly associated with the 1973–75 United States recession. An era of American automotive history, centered around the 1970s, is similarly called the "malaise era."
The "Crisis of Confidence" speech made by US President Jimmy Carter in 1979 is commonly referred to as the "malaise speech", although the word itself was not actually in the speech.
See also
Ennui
Fatigue (medical)
Malaise Créole
Post-exertional malaise
Prodrome
Torpor
Notes and references
External links
Symptoms and signs
Emotions
French medical phrases
Composition of the human body

Body composition may be analyzed in various ways. This can be done in terms of the chemical elements present, or by molecular structure, e.g., water, protein, fats (or lipids), hydroxylapatite (in bones), carbohydrates (such as glycogen and glucose) and DNA. In terms of tissue type, the body may be analyzed into water, fat, connective tissue, muscle, bone, etc. In terms of cell type, the body contains hundreds of different types of cells, but notably, the largest number of cells contained in a human body (though not the largest mass of cells) are not human cells, but bacteria residing in the normal human gastrointestinal tract.
Elements
About 99% of the mass of the human body is made up of six elements: oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus. Only about 0.85% is composed of another five elements: potassium, sulfur, sodium, chlorine, and magnesium. All 11 are necessary for life. The remaining elements are trace elements, of which more than a dozen are thought on the basis of good evidence to be necessary for life. All of the mass of the trace elements put together (less than 10 grams for a human body) does not add up to the body mass of magnesium, the least common of the 11 non-trace elements.
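As a rough consistency check of the ~99% figure, commonly cited textbook mass fractions for the six major elements can be summed; the individual percentages below are assumed approximate values and vary from person to person with body composition.

```python
# Approximate mass fractions of the six major elements in the adult human
# body (assumed textbook values; actual proportions vary by individual).
major = {"O": 65.0, "C": 18.5, "H": 9.5, "N": 3.2, "Ca": 1.5, "P": 1.0}
total = sum(major.values())
print(f"six major elements together: ~{total:.0f}% of body mass")  # ~99%
```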
Other elements
Not all elements which are found in the human body in trace quantities play a role in life. Some of these elements are thought to be simple common contaminants without function (examples: caesium, titanium), while many others are thought to be active toxins, depending on amount (cadmium, mercury, lead, radioactives). In humans, arsenic is toxic, and its levels in foods and dietary supplements are closely monitored to reduce or eliminate its intake.
Some elements (silicon, boron, nickel, vanadium) are probably needed by mammals also, but in far smaller doses. Bromine is used by some (though not all) bacteria, fungi, diatoms, and seaweeds, and opportunistically in eosinophils in humans. One study has indicated bromine to be necessary to collagen IV synthesis in humans. Fluorine is used by a number of plants to manufacture toxins but in humans its only known function is as a local topical hardening agent in tooth enamel.
Elemental composition list
The average adult human body contains approximately 7 × 10^27 atoms and detectable traces of at least 60 chemical elements. About 29 of these elements are thought to play an active positive role in life and health in humans.
The relative amounts of each element vary by individual, mainly due to differences in the proportion of fat, muscle and bone in their body. Persons with more fat will have a higher proportion of carbon and a lower proportion of most other elements (the proportion of hydrogen will be about the same).
The numbers in the table are averages of different numbers reported by different references.
The adult human body averages ~53% water. This varies substantially by age, sex, and adiposity. In a large sample of adults of all ages and both sexes, the figure for water fraction by weight was found to be 48 ±6% for females and 58 ±8% water for males. Water is ~11% hydrogen by mass but ~67% hydrogen by atomic percent, and these numbers along with the complementary % numbers for oxygen in water, are the largest contributors to overall mass and atomic composition figures. Because of water content, the human body contains more oxygen by mass than any other element, but more hydrogen by atom-fraction than any element.
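The mass-versus-atom distinction for hydrogen in water can be checked with simple arithmetic; the sketch below uses standard atomic masses (about 1.008 for hydrogen and 15.999 for oxygen) and reproduces the ~11% by mass and ~67% by atom count figures quoted above.

```python
# Water (H2O): hydrogen fraction by mass vs. by atom count,
# using standard atomic masses.
m_h, m_o = 1.008, 15.999
mass_fraction_h = 2 * m_h / (2 * m_h + m_o)  # ~0.112 -> ~11% by mass
atom_fraction_h = 2 / 3                      # ~0.667 -> ~67% by atom count
print(f"H by mass: {mass_fraction_h:.1%}, H by atoms: {atom_fraction_h:.1%}")
```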
The elements listed below as "Essential in humans" are those listed by the US Food and Drug Administration as essential nutrients, as well as six additional elements: oxygen, carbon, hydrogen, and nitrogen (the fundamental building blocks of life on Earth), sulfur (essential to all cells) and cobalt (a necessary component of vitamin B12). Elements listed as "Possibly" or "Probably" essential are those cited by the US National Research Council as beneficial to human health and possibly or probably essential.
*Iron = ~3 g in males, ~2.3 g in females
Of the 94 naturally occurring chemical elements, 61 are listed in the table above. Of the remaining 33, it is not known how many occur in the human body.
Most of the elements needed for life are relatively common in the Earth's crust. Aluminium, the third most common element in the Earth's crust (after oxygen and silicon), serves no function in living cells, but is toxic in large amounts, depending on its physical and chemical forms and magnitude, duration, frequency of exposure, and how it was absorbed by the human body. Transferrins can bind aluminium.
Periodic table
Composition
The composition of the human body can be classified as follows:
Water
Proteins
Fats (or lipids)
Hydroxyapatite in bones
Carbohydrates such as glycogen and glucose
DNA and RNA
Inorganic ions such as sodium, potassium, chloride, bicarbonate, phosphate
Gases mainly being oxygen, carbon dioxide
Many cofactors.
The estimated contents of a typical 20-micrometre human cell is as follows:
Tissues
Body composition can also be expressed in terms of various types of material, such as:
Muscle
Fat
Bone and teeth
Nervous tissue (brain and nerves)
Hormones
Connective tissue
Body fluids (blood, lymph, urine)
Contents of digestive tract, including intestinal gas
Air in lungs
Epithelium
Composition by cell type
There are many species of bacteria and other microorganisms that live on or inside the healthy human body. In fact, there are roughly as many microbial cells as human cells in the human body by number (though they account for much less of its mass or volume). Some of these symbionts are necessary for our health. Those that neither help nor harm humans are called commensal organisms.
See also
List of organs of the human body
Hydrostatic weighing
Dietary element
Composition of blood
List of human blood components
Body composition
Abundance of elements in Earth's crust
Abundance of the chemical elements
References
Biochemistry
Human anatomy
Human physiology
Dysentery

[Image: a person with dysentery in a Burmese POW camp, 1943]

Synonyms: bloody diarrhea
Medical field: infectious disease
Symptoms: bloody diarrhea, abdominal pain, fever
Complications: dehydration
Duration: less than a week
Causes: usually Shigella or Entamoeba histolytica
Risk factors: contamination of food and water with feces due to poor sanitation
Diagnosis: based on symptoms, stool test
Prevention: hand washing, food safety
Treatment: drinking sufficient fluids, antibiotics (severe cases)
Frequency: occurs often in many parts of the world
Deaths: 1.1 million a year
Dysentery, historically known as the bloody flux, is a type of gastroenteritis that results in bloody diarrhea. Other symptoms may include fever, abdominal pain, and a feeling of incomplete defecation. Complications may include dehydration.
Dysentery is usually caused by bacteria of the genus Shigella, in which case it is known as shigellosis, or by the amoeba Entamoeba histolytica, in which case it is called amoebiasis. Other causes may include certain chemicals, other bacteria, other protozoa, or parasitic worms. It may spread between people. Risk factors include contamination of food and water with feces due to poor sanitation. The underlying mechanism involves inflammation of the intestine, especially of the colon.
Efforts to prevent dysentery include hand washing and food safety measures while traveling in countries of high risk. While the condition generally resolves on its own within a week, drinking sufficient fluids such as oral rehydration solution is important. Antibiotics such as azithromycin may be used to treat cases associated with travelling in the developing world. While medications used to decrease diarrhea such as loperamide are not recommended on their own, they may be used together with antibiotics.
Shigella results in about 165 million cases of diarrhea and 1.1 million deaths a year, with nearly all cases in the developing world. In areas with poor sanitation, nearly half of cases of diarrhea are due to Entamoeba histolytica. Entamoeba histolytica affects millions of people and results in more than 55,000 deaths a year. It commonly occurs in less developed areas of Central and South America, Africa, and Asia. Dysentery has been described at least since the time of Hippocrates.
Signs and symptoms
The most common form of dysentery is bacillary dysentery, which is typically a mild sickness causing symptoms that normally consist of mild abdominal pains and frequent passage of loose stools or diarrhea. Symptoms normally present themselves after 1–3 days and are usually no longer present after a week. The frequency of urges to defecate, the large volume of liquid feces ejected, and the presence of blood, mucus, or pus depend on the pathogen causing the disease. Temporary lactose intolerance can occur as well. On some occasions, severe abdominal cramps, fever, shock, and delirium can all be symptoms.
In extreme cases, people may pass more than one liter of fluid per hour. More often, individuals will complain of diarrhea with blood, accompanied by extreme abdominal pain, rectal pain and a low-grade fever. Rapid weight loss and muscle aches sometimes also accompany dysentery, while nausea and vomiting are rare.
On rare occasions, the amoebic parasite will invade the body through the bloodstream and spread beyond the intestines. In such cases, it may more seriously infect other organs such as the brain, lungs, and most commonly the liver.
Cause
Dysentery results from bacterial or parasitic infections. Viruses do not generally cause the disease. These pathogens typically reach the large intestine after entering orally, through ingestion of contaminated food or water, oral contact with contaminated objects or hands, and so on. Each specific pathogen has its own mechanism or pathogenesis, but in general the result is damage to the intestinal linings, leading to inflammatory immune responses. This can cause an elevated body temperature, painful spasms of the intestinal muscles (cramping), swelling due to fluid leaking from capillaries of the intestine (edema), and further tissue damage by the body's immune cells and the chemicals, called cytokines, that are released to fight the infection. The result can be impaired nutrient absorption, excessive water and mineral loss through the stools due to breakdown of the control mechanisms in the intestinal tissue that normally remove water from the stools, and, in severe cases, the entry of pathogenic organisms into the bloodstream. Anemia may also arise due to blood loss through diarrhea.
Bacterial infections that cause bloody diarrhea are typically classified as either invasive or toxigenic. Invasive species cause damage directly by invading the mucosa. Toxigenic species do not invade, but cause cellular damage by secreting toxins, resulting in bloody diarrhea. This is in contrast to toxins that cause watery diarrhea, which usually do not cause cellular damage but rather take over the cell's machinery for a portion of its life.
Definitions of dysentery can vary by region and by medical specialty. The U.S. Centers for Disease Control and Prevention (CDC) limits its definition to "diarrhea with visible blood". Others define the term more broadly. These differences in definition must be taken into account when describing mechanisms. For example, using the CDC definition requires that intestinal tissue be so severely damaged that blood vessels have ruptured, allowing visible quantities of blood to be lost with defecation. Other definitions require less specific damage.
Amoebic dysentery
Amoebiasis, also known as amoebic dysentery, is caused by an infection from the amoeba Entamoeba histolytica, which is found mainly in tropical areas. Proper treatment of the underlying infection of amoebic dysentery is important; insufficiently treated amoebiasis can lie dormant for years and subsequently lead to severe, potentially fatal, complications.
When amoebae inside the bowel of an infected person are ready to leave the body, they group together and form a shell that surrounds and protects them. This group of amoebae is known as a cyst, which is then passed out of the person's body in the feces and can survive outside the body. If hygiene standards are poor – for example, if the person does not dispose of the feces hygienically – then it can contaminate the surroundings, such as nearby food and water.
If another person then eats or drinks food or water that has been contaminated with feces containing the cyst, that person will also become infected with the amoebae. Amoebic dysentery is particularly common in parts of the world where human feces are used as fertilizer.
After entering the person's body through the mouth, the cyst travels down into the stomach. The amoebae inside the cyst are protected from the stomach's digestive acid. From the stomach, the cyst travels to the intestines, where it breaks open and releases the amoebae, causing the infection. The amoebae can burrow into the walls of the intestines and cause small abscesses and ulcers to form. The cycle then begins again.
Bacillary dysentery
Dysentery may also be caused by shigellosis, an infection by bacteria of the genus Shigella, and is then known as bacillary dysentery (or Marlow syndrome). The term bacillary dysentery etymologically might seem to refer to any dysentery caused by any bacilliform bacteria, but its meaning is restricted by convention to Shigella dysentery.
Other bacteria
Some strains of Escherichia coli cause bloody diarrhea. The typical culprits are enterohemorrhagic Escherichia coli, of which O157:H7 is the best known. These types of E. coli also make Shiga toxin.
Diagnosis
A diagnosis may be made by taking a history and doing a brief examination. Dysentery should not be confused with hematochezia, which is the passage of fresh blood through the anus, usually in or with stools.
Physical exam
The mouth, skin, and lips may appear dry due to dehydration. Lower abdominal tenderness may also be present.
Stool and blood tests
Cultures of stool samples are examined to identify the organism causing dysentery. Usually, several samples must be obtained because the number of amoebae shed in the stool changes daily. Blood tests can be used to measure abnormalities in the levels of essential minerals and salts.
Prevention
Efforts to prevent dysentery include hand washing and food safety measures while traveling in areas of high risk.
Vaccine
Although there is currently no vaccine that protects against Shigella infection, several are in development. Vaccination may eventually become a part of the strategy to reduce the incidence and severity of diarrhea, particularly among children in low-resource settings. For example, Shigella is a longstanding World Health Organization (WHO) target for vaccine development, and sharp declines in age-specific diarrhea/dysentery attack rates for this pathogen indicate that natural immunity does develop following exposure; thus, vaccination to prevent this disease should be feasible. The development of vaccines against these types of infection has been hampered by technical constraints, insufficient support for coordination, and a lack of market forces for research and development. Most vaccine development efforts are taking place in the public sector or as research programs within biotechnology companies.
Treatment
Dysentery is managed by maintaining fluids using oral rehydration therapy. If this treatment cannot be adequately maintained due to vomiting or the profuseness of diarrhea, hospital admission may be required for intravenous fluid replacement. In ideal situations, no antimicrobial therapy should be administered until microbiological microscopy and culture studies have established the specific infection involved. When laboratory services are not available, it may be necessary to administer a combination of drugs, including an amoebicidal drug to kill the parasite and an antibiotic to treat any associated bacterial infection. Laudanum (deodorized tincture of opium) may be used for severe pain and to combat severe diarrhea.
If shigellosis is suspected and it is not too severe, letting it run its course may be reasonable – usually less than a week. If the case is severe, antibiotics such as ciprofloxacin or TMP-SMX may be useful. However, many strains of Shigella are becoming resistant to common antibiotics, and effective medications are often in short supply in developing countries. If necessary, a doctor may have to reserve antibiotics for those at highest risk for death, including young children, people over 50, and anyone suffering from dehydration or malnutrition.
Amoebic dysentery is often treated with two antimicrobial drugs such as metronidazole and paromomycin or iodoquinol.
Prognosis
With correct treatment, most cases of amoebic and bacterial dysentery subside within 10 days, and most individuals achieve a full recovery within two to four weeks after beginning proper treatment. If the disease is left untreated, the prognosis varies with the immune status of the individual patient and the severity of disease. Extreme dehydration can delay recovery and significantly raises the risk for serious complications including death.
Epidemiology
Insufficient data exists, but Shigella is estimated to have caused the death of 34,000 children under the age of five in 2013, and 40,000 deaths in people over five years of age. Amoebiasis infects over 50 million people each year, of whom 50,000 die (one per thousand).
History
Shigella evolved with the human expansion out of Africa 50,000 to 200,000 years ago.
The seed, leaves, and bark of the kapok tree have been used in traditional medicines by indigenous peoples of the rainforest regions in the Americas, west-central Africa, and Southeast Asia in the treatment of this disease.
In 1915, Australian bacteriologist Fannie Eleanor Williams was serving as a medic in Greece with the Australian Imperial Force, receiving casualties directly from Gallipoli. In Gallipoli, dysentery was severely affecting soldiers and causing significant loss of manpower. Williams carried out serological investigations into dysentery, co-authoring several groundbreaking papers with Sir Charles Martin, director of the Lister Institute. The result of their work into dysentery was increased demand for specific diagnostics and curative sera.
Bacillus subtilis was marketed throughout America and Europe from 1946 as an immunostimulatory aid in the treatment of gut and urinary tract diseases such as rotavirus and Shigella, but declined in popularity after the introduction of consumer antibiotics.
Notable cases
580: Childesinda, son of Chilperic I, Frankish king, died of dysentery as a child
580: Austregilde, Frankish queen, died of dysentery. According to Gregory of Tours she blamed her doctors for her death and asked her husband, King Guntram, to kill them after she died, which he did.
685: Constantine IV, the Byzantine emperor, died of dysentery in September 685.
1183: Henry the Young King died of dysentery at the castle of Martel on 11 June 1183.
1216: John, King of England died of dysentery at Newark Castle on 19 October 1216.
1270: Louis IX of France died of dysentery in Tunis while commanding his troops for the Eighth Crusade on 25 August 1270.
1307: Edward I of England caught dysentery on his way to the Scottish border and died in his servants' arms on 7 July 1307.
1322: Philip V of France died of dysentery on 3 January 1322 at the Abbey of Longchamp (site of the present hippodrome in the Bois de Boulogne) in Paris, while visiting his daughter Blanche, who had taken her vows as a nun there.
1376: Edward the Black Prince, son of Edward III of England and heir to the English throne, died of apparent dysentery in June, in his 46th year, after a months-long period of illness during which he predicted his own imminent death.
1422: King Henry V of England died suddenly on 31 August 1422 at the Château de Vincennes, apparently from dysentery, which he had contracted during the siege of Meaux. He was 35 years old and had reigned for nine years.
1536: Erasmus, Dutch Renaissance humanist and theologian, died of dysentery at Basel.
1596: Sir Francis Drake, vice admiral, died of dysentery on 28 January 1596 whilst anchored off the coast of Portobelo.
1605: Akbar, ruler of the Mughal Empire of South Asia, died of dysentery. On 3 October 1605, he fell ill with an attack of dysentery, from which he never recovered. He is believed to have died on or about 27 October 1605, after which his body was buried in a mausoleum in Agra, present-day India.
1675: Jacques Marquette died of dysentery on his way north from what is today Chicago, traveling to the mission where he intended to spend the rest of his life.
1676: Nathaniel Bacon died of dysentery after taking control of Virginia following Bacon's Rebellion. He is believed to have died in October 1676, allowing Virginia's ruling elite to regain control.
1680: Shivaji, founder and ruler of the Maratha Empire of South Asia, fell ill with fever and dysentery and died around 3–5 April 1680, at the age of 52, on the eve of Hanuman Jayanti. He was cremated at Raigad Fort, where his samadhi stands in Mahad, Raigad district of Maharashtra, India.
1827: Queen Nandi kaBhebhe (mother of Shaka Zulu) died of dysentery on 10 October 1827.
1873: The explorer David Livingstone died of dysentery on 1 May 1873.
1896: Phan Đình Phùng, a Vietnamese revolutionary who led rebel armies against French colonial forces in Vietnam, died of dysentery as the French surrounded his forces on 21 January 1896.
1910: Luo Yixiu, first wife of Mao Zedong, died of dysentery on 11 February 1910. She was 20 years old.
1930: The French explorer and writer Michel Vieuchange died of dysentery in Agadir on 30 November 1930, on his return from the "forbidden city" of Smara. He was nursed by his brother, Doctor Jean Vieuchange, who was unable to save him. The notebooks and photographs, edited by Jean Vieuchange, went on to become bestsellers.
1942: The Selarang Barracks incident in the summer of 1942 during World War II involved the forced crowding of 17,000 Anglo-Australian prisoners-of-war (POWs) by their Japanese captors in the areas around the barracks square for nearly five days with little water and no sanitation after the Selarang Barracks POWs refused to sign a pledge not to escape. The incident ended with the surrender of the Australian commanders due to the spreading of dysentery among their men.
See also
Cholera
References
Works cited
Conditions diagnosed by stool test
Diarrhea
Intestinal infectious diseases
Waterborne diseases
Shock (circulatory)
Shock is the state of insufficient blood flow to the tissues of the body as a result of problems with the circulatory system. Blood flow is critically important to body tissues, particularly for the delivery of oxygen, which is vital to sustain metabolic processes. Blood flow, or cardiac output, is difficult to measure directly, so in most situations the usual, easily measured indicator of the status of the circulatory system is blood pressure. Initial symptoms of shock may include weakness, fast heart rate, fast breathing, sweating, anxiety, and increased thirst. This may be followed by confusion, unconsciousness, or cardiac arrest, as complications worsen.
Shock is divided into four main types based on the underlying cause: hypovolemic, cardiogenic, obstructive, and distributive shock. Hypovolemic shock, also known as low volume shock, may be from bleeding, diarrhea, or vomiting. Cardiogenic shock may be due to a heart attack or cardiac contusion. Obstructive shock may be due to cardiac tamponade or a tension pneumothorax. Distributive shock may be due to sepsis, anaphylaxis, injury to the upper spinal cord, or certain overdoses.
The diagnosis is generally based on a combination of symptoms, physical examination, and laboratory tests. A decreased pulse pressure (systolic blood pressure minus diastolic blood pressure) or a fast heart rate raises concerns.
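The pulse pressure mentioned above is a simple subtraction; the Python sketch below illustrates it with hypothetical example readings. The numbers are for demonstration only and are not clinical thresholds taken from this article.

```python
def pulse_pressure(systolic_mmhg: float, diastolic_mmhg: float) -> float:
    """Pulse pressure = systolic minus diastolic blood pressure, in mmHg."""
    return systolic_mmhg - diastolic_mmhg

# A hypothetical reading of 100/85 mmHg gives a pulse pressure of only 15 mmHg,
# narrower than the roughly 40 mmHg of a typical 120/80 reading.
print(pulse_pressure(100, 85))  # 15
print(pulse_pressure(120, 80))  # 40
```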
Signs and symptoms
The presentation of shock is variable, with some people having only minimal symptoms such as confusion and weakness. While the general signs for all types of shock are low blood pressure, decreased urine output, and confusion, these may not always be present. While a fast heart rate is common, in those on β-blockers, those who are athletic, and in 30% of cases of those with shock due to intra-abdominal bleeding, the heart rate may be normal or slow. Specific subtypes of shock may have additional symptoms.
Dry mucous membranes, reduced skin turgor, prolonged capillary refill time, weak peripheral pulses, and cold extremities can be early signs of shock.
Low volume
Hypovolemic shock is the most common type of shock and is caused by insufficient circulating volume. The most common cause of hypovolemic shock is hemorrhage (internal or external); however, vomiting and diarrhea are more common causes in children. Other causes include burns, as well as excess urine loss due to diabetic ketoacidosis and diabetes insipidus.
Signs and symptoms of hypovolemic shock include:
A rapid, weak, thready pulse due to decreased blood flow combined with tachycardia
Cool skin due to vasoconstriction
Rapid and shallow breathing due to sympathetic nervous system stimulation and acidosis
Hypothermia due to decreased perfusion and evaporation of sweat
Thirst and dry mouth, due to fluid depletion
Cold and mottled skin (livedo reticularis), especially extremities, due to insufficient perfusion of the skin
The severity of hemorrhagic shock can be graded on a 1–4 scale based on physical signs. The shock index (heart rate divided by systolic blood pressure) is a stronger predictor of the impact of blood loss than heart rate or blood pressure alone. This relationship has not been well established in pregnancy-related bleeding.
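As an illustration of the shock index described above, the following minimal Python sketch divides heart rate by systolic blood pressure. The example values and the reference range in the comment are assumptions for demonstration, not figures from this article.

```python
def shock_index(heart_rate_bpm: float, systolic_bp_mmhg: float) -> float:
    """Shock index = heart rate divided by systolic blood pressure."""
    if systolic_bp_mmhg <= 0:
        raise ValueError("systolic blood pressure must be positive")
    return heart_rate_bpm / systolic_bp_mmhg

# Example: a heart rate of 120 bpm with a systolic pressure of 90 mmHg gives a
# shock index of about 1.33, well above the ~0.5-0.7 range commonly quoted for
# healthy adults (illustrative threshold, not taken from this article).
print(round(shock_index(120, 90), 2))  # 1.33
```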
Cardiogenic
Cardiogenic shock is caused by the failure of the heart to pump effectively. This can be due to damage to the heart muscle, most often from a large myocardial infarction. Other causes of cardiogenic shock include dysrhythmias, cardiomyopathy/myocarditis, congestive heart failure (CHF), myocardial contusion, or valvular heart disease problems.
Symptoms of cardiogenic shock include:
Distended jugular veins due to increased jugular venous pressure
Weak or absent pulse
Abnormal heart rhythms, often a fast heart rate
Pulsus paradoxus in case of tamponade
Reduced blood pressure
Shortness of breath, due to pulmonary congestion
Obstructive
Obstructive shock is a form of shock associated with physical obstruction of the great vessels of the systemic or pulmonary circulation. Several conditions can result in this form of shock.
Cardiac tamponade, in which fluid in the pericardium prevents inflow of blood into the heart (venous return).
Constrictive pericarditis, in which the pericardium shrinks and hardens, is similar in presentation.
Tension pneumothorax, in which increased intrathoracic pressure impedes venous return to the heart.
Pulmonary embolism is thromboembolism of the lungs, hindering oxygenation and return of blood to the heart.
Aortic stenosis hinders circulation by obstructing the outflow of blood from the heart.
Hypertrophic sub-aortic stenosis is overly thick ventricular muscle that dynamically occludes the ventricular outflow tract.
Abdominal compartment syndrome, defined as an increase in intra-abdominal pressure to more than 20 mmHg with organ dysfunction. Increased intra-abdominal pressure can result from sepsis and severe abdominal trauma. This increased pressure reduces venous return, thereby compromising cardiopulmonary function and producing the signs and symptoms of shock.
Many of the signs of obstructive shock are similar to cardiogenic shock, although treatments differ. Symptoms of obstructive shock include:
Abnormal heart rhythms, often a fast heart rate.
Reduced blood pressure.
Cool, clammy, mottled skin, often due to low blood pressure and vasoconstriction.
Decreased urine output.
Distributive
Distributive shock is low blood pressure due to a dilation of blood vessels within the body. This can be caused by systemic infection (septic shock), a severe allergic reaction (anaphylaxis), or spinal cord injury (neurogenic shock).
Septic shock is the most common cause of distributive shock. It is caused by an overwhelming systemic infection resulting in vasodilation leading to hypotension. Septic shock can be caused by Gram-negative bacteria such as (among others) Escherichia coli, Proteus species, and Klebsiella pneumoniae, which have an endotoxin on their surface that produces adverse biochemical, immunological and occasionally neurological effects harmful to the body; by Gram-positive cocci, such as pneumococci and streptococci; by certain fungi; and by Gram-positive bacterial toxins. Septic shock also includes some elements of cardiogenic shock. In 1992, the ACCP/SCCM Consensus Conference Committee defined septic shock as "sepsis-induced hypotension (systolic blood pressure < 90 mmHg or a reduction of 40 mmHg from baseline) despite adequate fluid resuscitation along with the presence of perfusion abnormalities that may include, but are not limited to: lactic acidosis, oliguria, or an acute alteration in mental status. Patients who are receiving inotropic or vasopressor agents may have a normalized blood pressure at the time that perfusion abnormalities are identified." The pathophysiology behind septic shock is as follows: 1) systemic leukocyte adhesion to endothelial cells; 2) reduced contractility of the heart; 3) activation of the coagulation pathways, resulting in disseminated intravascular coagulation; and 4) increased levels of neutrophils.
The main manifestations of septic shock are due to the massive release of inflammatory mediators, which cause intense dilation of the blood vessels. People with septic shock will also likely be positive for SIRS criteria. The most generally accepted treatment for these patients is early recognition of symptoms and early administration of broad-spectrum and organism-specific antibiotics.
Signs of septic shock include:
Abnormal heart rhythms, often a fast heart rate
Reduced blood pressure
Decreased urine output
Altered mental status
Anaphylactic shock is caused by a severe anaphylactic reaction to an allergen, antigen, drug, or foreign protein causing the release of histamine which causes widespread vasodilation, leading to hypotension and increased capillary permeability. Signs typically occur after exposure to an allergen and may include:
Skin changes, such as hives, itching, flushing, and swelling.
Wheezing and shortness of breath.
Abdominal pain, diarrhea, and vomiting.
Lightheadedness, confusion, headaches, loss of consciousness.
High spinal injuries may cause neurogenic shock, which is commonly classified as a subset of distributive shock. The classic symptoms include a slow heart rate due to loss of cardiac sympathetic tone and warm skin due to dilation of the peripheral blood vessels. (This term can be confused with spinal shock which is a recoverable loss of function of the spinal cord after injury and does not refer to the hemodynamic instability.)
Endocrine
Although not officially classified as a subcategory of shock, many endocrinological disturbances in their severe form can result in shock.
Hypothyroidism (which can be considered a form of cardiogenic shock) in critically ill patients reduces cardiac output and can lead to hypotension and respiratory insufficiency.
Thyrotoxicosis (cardiogenic shock) may induce a reversible cardiomyopathy.
Acute adrenal insufficiency (distributive shock) is frequently the result of discontinuing corticosteroid treatment without tapering the dosage. However, surgery and intercurrent disease in patients on corticosteroid therapy without adjusting the dosage to accommodate increased requirements may also result in this condition.
Relative adrenal insufficiency (distributive shock) in critically ill patients where present hormone levels are insufficient to meet the higher demands.
Cause
Shock is a common end point of many medical conditions. Shock triggered by a serious allergic reaction is known as anaphylactic shock, shock triggered by severe dehydration or blood loss is known as hypovolemic shock, shock caused by sepsis is known as septic shock, etc. Shock itself is a life-threatening condition as a result of compromised body circulation. It can be divided into four main types based on the underlying cause: hypovolemic, distributive, cardiogenic, and obstructive. A few additional classifications are occasionally used, such as endocrinologic shock.
Pathophysiology
Shock is a complex and continuous condition, and there is no sudden transition from one stage to the next. At a cellular level, shock is the process of oxygen demand becoming greater than oxygen supply.
One of the key dangers of shock is that it progresses by a positive feedback loop. Poor blood supply leads to cellular damage, which results in an inflammatory response to increase blood flow to the affected area. Normally, this causes the blood supply to match tissue demand for nutrients. However, if the increased demand in some areas is great enough, it can deprive other areas of sufficient supply, which then start demanding more. This leads to an ever-escalating cascade.
As such, shock is a runaway condition of homeostatic failure, where the usual corrective mechanisms relating to oxygenation of the body no longer function in a stable way. When it occurs, immediate treatment is critical in order to return an individual's metabolism into a stable, self-correcting trajectory. Otherwise the condition can become increasingly difficult to correct, surprisingly quickly, and then progress to a fatal outcome. In the particular case of anaphylactic shock, progression to death might take just a few minutes.
Initial
During the Initial stage (Stage 1), the state of hypoperfusion causes hypoxia. Due to the lack of oxygen, the cells perform lactic acid fermentation. Since oxygen, the terminal electron acceptor in the electron transport chain, is not abundant, this slows down entry of pyruvate into the Krebs cycle, resulting in its accumulation. The accumulating pyruvate is converted to lactate (lactic acid) by lactate dehydrogenase. The accumulating lactate causes lactic acidosis.
Compensatory
The Compensatory stage (Stage 2) is characterised by the body employing physiological mechanisms, including neural, hormonal, and bio-chemical mechanisms, in an attempt to reverse the condition. As a result of the acidosis, the person will begin to hyperventilate in order to rid the body of carbon dioxide (CO2) since it indirectly acts to acidify the blood; the body attempts to return to acid–base homeostasis by removing that acidifying agent. The baroreceptors in the arteries detect the hypotension resulting from large amounts of blood being redirected to distant tissues, and cause the release of epinephrine and norepinephrine. Norepinephrine causes predominately vasoconstriction with a mild increase in heart rate, whereas epinephrine predominately causes an increase in heart rate with a small effect on the vascular tone; the combined effect results in an increase in blood pressure. The renin–angiotensin axis is activated, and arginine vasopressin (anti-diuretic hormone) is released to conserve fluid by reducing its excretion via the renal system. These hormones cause the vasoconstriction of the kidneys, gastrointestinal tract, and other organs to divert blood to the heart, lungs and brain. The lack of blood to the renal system causes the characteristic low urine production. However, the effects of the renin–angiotensin axis take time and are of little importance to the immediate homeostatic mediation of shock.
Progressive/decompensated
The Progressive stage (stage 3) results if the underlying cause of the shock is not successfully treated. During this stage, compensatory mechanisms begin to fail. Due to the decreased perfusion of the cells in the body, sodium ions build up within the intracellular space while potassium ions leak out. Due to lack of oxygen, cellular respiration diminishes and anaerobic metabolism predominates. As anaerobic metabolism continues, the arteriolar smooth muscle and precapillary sphincters relax such that blood remains in the capillaries. Due to this, the hydrostatic pressure will increase and, combined with histamine release, will lead to leakage of fluid and protein into the surrounding tissues. As this fluid is lost, the blood concentration and viscosity increase, causing sludging of the micro-circulation. The prolonged vasoconstriction will also cause the vital organs to be compromised due to reduced perfusion. If the bowel becomes sufficiently ischemic, bacteria may enter the blood stream, resulting in the increased complication of endotoxic shock.
Refractory
At the Refractory stage (Stage 4), the vital organs have failed and the shock can no longer be reversed. Brain damage and cell death are occurring, and death will occur imminently. One of the primary reasons that shock is irreversible at this point is that much of the cellular ATP (the basic energy source for cells) has been degraded into adenosine in the absence of oxygen as an electron acceptor in the mitochondrial matrix. Adenosine easily diffuses out of cells into the extracellular fluid, furthering capillary vasodilation, and is then transformed into uric acid. Because cells can only produce adenosine at a rate of about 2% of the cell's total need per hour, even restoring oxygen is futile at this point because there is no adenosine to phosphorylate into ATP.
Diagnosis
The diagnosis of shock is commonly based on a combination of symptoms, physical examination, and laboratory tests. Many signs and symptoms are not sensitive or specific for shock, thus many clinical decision-making tools have been developed to identify shock at an early stage. A high degree of suspicion is necessary for the proper diagnosis of shock.
Shock is, hemodynamically speaking, inadequate blood flow or cardiac output. Unfortunately, the measurement of cardiac output requires an invasive catheter, such as a pulmonary artery catheter. Mixed venous oxygen saturation (SmvO2) is one of the methods of calculating cardiac output with a pulmonary artery catheter. Central venous oxygen saturation (ScvO2) as measured via a central line correlates well with SmvO2 and is easier to acquire.
Tissue oxygenation
Tissue oxygenation is critically dependent on blood flow. When the oxygenation of tissues is compromised, anaerobic metabolism begins and lactic acid is produced.
Management
Treatment of shock is based on the likely underlying cause. An open airway and sufficient breathing should be established. Any ongoing bleeding should be stopped, which may require surgery or embolization. Intravenous fluid, such as Ringer's lactate or packed red blood cells, is often given. Efforts to maintain a normal body temperature are also important. Vasopressors may be useful in certain cases. Shock is both common and has a high risk of death. In the United States about 1.2 million people present to the emergency room each year with shock and their risk of death is between 20 and 50%.
The best evidence exists for the treatment of septic shock in adults. However, the pathophysiology of shock in children appears to be similar, so treatment methodologies have been extrapolated to children. Management may include securing the airway via intubation if necessary, to decrease the work of breathing and to guard against respiratory arrest. Oxygen supplementation, intravenous fluids, and passive leg raising (not the Trendelenburg position) should be started, with blood transfusions added if blood loss is severe. In select cases, compression devices like non-pneumatic anti-shock garments (or the deprecated military anti-shock trousers) can be used to prevent further blood loss and concentrate fluid in the body's head and core. It is important to keep the person warm to avoid hypothermia, and to adequately manage pain and anxiety, as these can increase oxygen consumption. The negative effects of shock are reversible if it is recognized and treated early.
Fluids
Aggressive intravenous fluids are recommended in most types of shock (e.g. a 1–2 liter normal saline bolus over 10 minutes, or 20 mL/kg in a child), usually instituted as the person is being further evaluated. Colloids and crystalloids appear to be equally effective with respect to outcomes. Balanced crystalloids and normal saline also appear to be equally effective in critically ill patients. If the person remains in shock after initial resuscitation, packed red blood cells should be administered to keep the hemoglobin greater than 100 g/L.
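To make the weight-based arithmetic above concrete, here is a minimal Python sketch of the 20 mL/kg calculation for a child. The helper name and example weight are hypothetical, and the figure is used purely to illustrate the arithmetic, not as dosing guidance.

```python
def pediatric_fluid_bolus_ml(weight_kg: float, ml_per_kg: float = 20) -> float:
    """Weight-based crystalloid bolus volume in mL (20 mL/kg used for illustration)."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return weight_kg * ml_per_kg

# Example: a hypothetical 15 kg child -> 15 * 20 = 300 mL bolus.
print(pediatric_fluid_bolus_ml(15))  # 300.0
```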
For those with hemorrhagic shock, the current evidence supports limiting the use of fluids for penetrating thorax and abdominal injuries allowing mild hypotension to persist (known as permissive hypotension). Targets include a mean arterial pressure of 60 mmHg, a systolic blood pressure of 70–90 mmHg, or until the patient has adequate mentation and peripheral pulses. Hypertonic fluid may also be an option in this group.
Medications
Vasopressors may be used if blood pressure does not improve with fluids. Common vasopressors used in shock include: norepinephrine, phenylephrine, dopamine, and dobutamine.
There is no evidence of substantial benefit of one vasopressor over another; however, using dopamine leads to an increased risk of arrhythmia when compared with norepinephrine. Vasopressors have not been found to improve outcomes when used for hemorrhagic shock from trauma but may be of use in neurogenic shock. Activated protein C (Xigris), while once aggressively promoted for the management of septic shock, has been found not to improve survival and is associated with a number of complications. Activated protein C was withdrawn from the market in 2011, and clinical trials were discontinued. The use of sodium bicarbonate is controversial as it has not been shown to improve outcomes. If used at all it should only be considered if the blood pH is less than 7.0.
People with anaphylactic shock are commonly treated with epinephrine. Antihistamines, such as Benadryl (diphenhydramine) or ranitidine are also commonly administered. Albuterol, normal saline, and steroids are also commonly given.
Mechanical support
Intra-aortic balloon pump (IABP) – a device inserted into the aorta that mechanically raises the blood pressure. Use of intra-aortic balloon pumps is not recommended in cardiogenic shock.
Ventricular assist device (VAD) – A mechanical pump that helps pump blood throughout the body. Commonly used in short term cases of refractory primary cardiogenic shock.
Total artificial heart (TAH)
Extracorporeal membrane oxygenation (ECMO) – an external device that temporarily takes over the work of the heart and lungs.
Treatment goals
The goal of treatment is to achieve a urine output of greater than 0.5 mL/kg/h, a central venous pressure of 8–12 mmHg and a mean arterial pressure of 65–95 mmHg. In trauma the goal is to stop the bleeding which in many cases requires surgical interventions. A good urine output indicates that the kidneys are getting enough blood flow.
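As a worked example of the weight-scaled urine output target above, the sketch below multiplies body weight by 0.5 mL/kg/h. The function name and the 70 kg example are hypothetical and for illustration only.

```python
def minimum_urine_output_ml_per_h(weight_kg: float, target_ml_per_kg_h: float = 0.5) -> float:
    """Urine output target scaled to body weight, using the 0.5 mL/kg/h figure from the text."""
    return weight_kg * target_ml_per_kg_h

# Example: for a hypothetical 70 kg adult the target is 70 * 0.5 = 35 mL per hour.
print(minimum_urine_output_ml_per_h(70))  # 35.0
```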
Epidemiology
Septic shock (a form of distributive shock) is the most common form of shock. Shock from blood loss occurs in about 1–2% of trauma cases. Overall, up to one-third of people admitted to the intensive care unit (ICU) are in circulatory shock. Of these, cardiogenic shock accounts for approximately 20%, hypovolemic about 20%, and septic shock about 60% of cases.
Prognosis
The prognosis of shock depends on the underlying cause and the nature and extent of concurrent problems. Low volume, anaphylactic, and neurogenic shock are readily treatable and respond well to medical therapy. Septic shock, however, especially when treatment is delayed or the antimicrobial drugs are ineffective, has a mortality rate between 30% and 80%; cardiogenic shock has a mortality rate of up to 70% to 90%, though quick treatment with vasopressors and inotropic drugs, cardiac surgery, and the use of assistive devices can lower the mortality.
History
There is no evidence of the word shock being used in its modern-day form prior to 1743. However, there is evidence that Hippocrates used the word exemia to signify a state of being "drained of blood". Shock or "choc" was first described in a trauma victim in the English translation of Henri-François LeDran's 1740 text, Traité ou Réflexions tirées de la pratique sur les playes d'armes à feu (A treatise, or reflections, drawn from practice on gun-shot wounds). In this text he describes "choc" as a reaction to the sudden impact of a missile. However, the first English writer to use the word shock in its modern-day connotation was James Latta, in 1795.
Prior to World War I, there were several competing hypotheses behind the pathophysiology of shock. Of the various theories, the most well regarded was that of George W. Crile, who suggested in his 1899 monograph, "An Experimental Research into Surgical Shock", that shock was quintessentially defined as a state of circulatory collapse (vasodilation) due to excessive nervous stimulation. Other competing theories around the turn of the century included one penned by Malcom in 1907, which asserted that prolonged vasoconstriction led to the pathophysiological signs and symptoms of shock. During World War I, research concerning shock resulted in experiments by Walter B. Cannon of Harvard and William M. Bayliss of London in 1919 showing that an increase in the permeability of the capillaries in response to trauma or toxins was responsible for many clinical manifestations of shock. In 1972, Hinshaw and Cox suggested the classification system for shock which is still used today.
References
External links
Intensive care medicine
Medical emergencies
Causes of death
Functional disorder
Functional disorders are a group of recognisable medical conditions which are due to changes to the functioning of the systems of the body rather than due to a disease affecting the structure of the body.
Functional disorders are common and complex phenomena that pose challenges to medical systems. Traditionally in western medicine, the body is thought of as consisting of different organ systems, but it is less well understood how the systems interconnect or communicate. Functional disorders can affect the interplay of several organ systems (for example gastrointestinal, respiratory, musculoskeletal or neurological) leading to multiple and variable symptoms. Less commonly there is a single prominent symptom or organ system affected.
Most symptoms that are caused by structural disease can also be caused by a functional disorder. Because of this, individuals often undergo many medical investigations before the diagnosis is clear. Though research is growing to support explanatory models of functional disorders, structural scans such as MRI and laboratory investigations such as blood tests do not usually explain the symptoms or the symptom burden. This difficulty in 'seeing' the processes underlying the symptoms of functional disorders has often resulted in these conditions being misunderstood and sometimes stigmatised within medicine and society.
Despite being associated with high disability, functional symptoms are not a threat to life, and are considered modifiable with appropriate treatment.
Definition
Functional disorders are mostly understood as conditions characterised by:
persistent and troublesome symptoms
associated with impairment or disability
where the pathophysiological basis is related to problems with the functioning and communication of the body systems (as opposed to disease affecting the structure of organs or tissues)
Examples
There are many different functional disorder diagnoses that might be given depending on the symptom or syndrome that is most troublesome. There are many examples of symptoms that individuals may experience; some of these include persistent or recurrent pain, fatigue, weakness, shortness of breath or bowel problems. Single symptoms may be assigned a diagnostic label, such as "functional chest pain", "functional constipation" or "functional seizures". Characteristic collections of symptoms might be described as one of the functional somatic syndromes. A syndrome is a collection of symptoms. Somatic means 'of the body'. Examples of functional somatic syndromes include: irritable bowel syndrome; cyclic vomiting syndrome; some persistent fatigue and chronic pain syndromes, such as fibromyalgia (chronic widespread pain), or chronic pelvic pain; interstitial cystitis; functional neurologic disorder; and multiple chemical sensitivity.
Overlap
Most medical specialties define their own functional somatic syndrome, and a patient may end up with several of these diagnoses without understanding how they are connected. There is overlap in symptoms between all the functional disorder diagnoses. For example, it is not uncommon to have a diagnosis of irritable bowel syndrome (IBS) and chronic widespread pain/fibromyalgia. All functional disorders share risk factors and factors that contribute to their persistence. Increasingly researchers and clinicians are recognising the relationships between these syndromes.
Classification
The terminology for functional disorders has been fraught with confusion and controversy, with many different terms used to describe them. Sometimes functional disorders are equated with, or mistakenly confused with, diagnoses and categories such as "somatoform disorders", "medically unexplained symptoms", "psychogenic symptoms" or "conversion disorders". Many historical terms are now no longer thought of as accurate, and are considered by many to be stigmatising.
Psychiatric illnesses have historically also been considered as functional disorders in some classification systems, as they often fulfil the criteria above. Whether a given medical condition is termed a functional disorder depends in part on the state of knowledge. Some diseases, such as epilepsy, were historically categorized as functional disorders but are no longer classified that way.
Prevalence
Functional disorders can affect individuals of all ages, ethnic groups and socioeconomic backgrounds. In clinical populations, functional disorders are common and have been found to present in around one-third of consultations in both specialist practice and primary care. Chronic courses of disorders are common and are associated with high disability, health-care usage and social costs.
Rates differ in the clinical population compared with the general population, and will vary depending on the criteria used to make the diagnosis. For example, irritable bowel syndrome is thought to affect 4.1%, and fibromyalgia 0.2–11.4% of the global population.
A recent large study carried out on population samples in Denmark showed the following: In total, 16.3% of adults reported symptoms fulfilling the criteria for at least one Functional Somatic Syndrome, and 16.1% fulfilled criteria for Bodily Distress Syndrome.
Diagnosis
The diagnosis of functional disorders is usually made in the healthcare setting most often by a doctor — this could be a primary care physician or family doctor, hospital physician or specialist in the area of psychosomatic medicine or a consultant-liaison psychiatrist. The primary care physician or family doctor will generally play an important role in coordinating treatment with a secondary care clinician if necessary.
The diagnosis is essentially clinical, whereby the clinician undertakes a thorough medical and mental health history and physical examination. Diagnosis should be based on the nature of the presenting symptoms, and is a "rule in" as opposed to "rule out" diagnosis — this means it is based on the presence of positive symptoms and signs that follow a characteristic pattern. There is usually a process of clinical reasoning to reach this point and assessment might require several visits, ideally with the same doctor.
In the clinical setting, there are no laboratory or imaging tests that can consistently be used to diagnose the conditions; however, as is the case with all diagnoses, often additional diagnostic tests (such as blood tests, or diagnostic imaging) will be undertaken to consider the presence of underlying disease. There are however diagnostic criteria that can be used to help a doctor assess whether an individual is likely to suffer from a particular functional syndrome. These are usually based on the presence or absence of characteristic clinical signs and symptoms. Self-report questionnaires may also be useful.
There has been a tradition of separate diagnostic classification systems for "somatic" and "mental" disorders. Currently, the 11th version of the International Classification of Diseases (ICD-11) has specific diagnostic criteria for certain disorders which would be considered by many clinicians to be functional somatic disorders, such as IBS or chronic widespread pain/fibromyalgia, and dissociative neurological symptom disorder.
In the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) the older term somatoform (DSM-IV) has been replaced by somatic symptom disorder, which is a disorder characterised by persistent somatic (physical) symptoms, and associated psychological problems to the degree that it interferes with daily functioning and causes distress. (APA, 2022). Bodily distress disorder is a related term in the ICD-11.
Somatic symptom disorder and bodily distress disorder have significant overlap with functional disorders and are often assigned if someone would benefit from psychological therapies addressing psychological or behavioural factors which contribute to the persistence of symptoms. However, people with symptoms partly explained by structural disease (for example, cancer) may also meet the criteria for diagnosis of functional disorders, somatic symptom disorder and bodily distress disorder.
It is not unusual for a functional disorder to coexist with another diagnosis (for example, functional seizures can coexist with epilepsy, or irritable bowel syndrome with inflammatory bowel disease). This is important to recognise, as additional treatment approaches might be indicated in order that the patient achieves adequate relief from their symptoms.
The diagnostic process is considered an important step in order for treatment to move forward successfully. When healthcare professionals are giving a diagnosis and carrying out treatment, it is important to communicate openly and honestly and not to fall into the trap of dualistic concepts (that is, "either mental or physical" thinking), or to attempt to "reattribute" symptoms to a predominantly psychosocial cause. It is often important to recognise the need to cease unnecessary additional diagnostic testing once a clear diagnosis has been established.
Causes
Explanatory models that support our understanding of functional disorders take into account the multiple factors involved in symptom development. A personalised, tailored approach is usually needed in order to consider the factors which relate to that individual's biomedical, psychological, social, and material environment.
More recent functional neuroimaging studies have suggested malfunctioning of neural circuits involved in stress processing, emotional regulation, self-agency, interoception, and sensorimotor integration. A recent article in Scientific American proposed that changes in brain activity suspected in the pathophysiology of functional neurological disorder include increased activity of the amygdala and decreased activity within the right temporoparietal junction.
Healthcare professionals might find it useful to consider three main categories of factors: predisposing, precipitating, and perpetuating (maintaining) factors.
Predisposing factors
These are factors that make the person more vulnerable to the onset of a functional disorder; and include biological, psychological and social factors. Like all health conditions, some people are probably predisposed to develop functional disorders due to their genetic make-up. However, no single genes have been identified that are associated with functional disorders. Epigenetic mechanisms (mechanisms that affect interaction of genes with their environment) are likely to be important, and have been studied in relation to the hypothalamic–pituitary–adrenal axis. Other predisposing factors include current or prior somatic/physical illness or injury, and endocrine, immunological or microbial factors.
Functional disorders are diagnosed more frequently in female patients. The reasons for this are complex and multifactorial, likely to include both biological and social factors. Female sex hormones might affect the functioning of the immune system, for example. Medical bias possibly contributes to the sex differences in diagnosis: women are more likely to be diagnosed than men with a functional disorder by doctors.
People with functional disorders also have higher rates of pre-existing mental and physical health conditions, including depression and anxiety disorders, Post-traumatic stress disorder, multiple sclerosis and epilepsy. Personality style has been suggested as a risk factor in the development of functional disorders but the effect of any individual personality trait is variable and weak. Alexithymia (difficulties recognising and naming emotions) has been widely studied in patients with functional disorders and is sometimes addressed as part of treatment. Migration, cultural and family understanding of illness, are also factors that influence the chance of an individual developing a functional disorder. Being exposed to illness in the family while growing up or having parents who are healthcare professionals are sometimes considered risk factors. Adverse childhood experiences and traumatic experiences of all kinds are known important risk factors. Newer hypotheses have suggested minority stressors may play a role in the development of functional disorders in marginalized communities.
Precipitating factors
These are the factors that for some patients appear to trigger the onset of a functional disorder. Typically, these involve either an acute cause of physical or emotional stress, for example an operation, a viral illness, a car accident, a sudden bereavement, or a period of intense and prolonged overload of chronic stressors (for example relationship difficulties, job or financial stress, or caring responsibilities). Not all affected individuals will be able to identify obvious precipitating factors and some functional disorders develop gradually over time.
Perpetuating factors
These are the factors that contribute to the development of functional disorder as a persistent condition and maintaining symptoms. These can include the condition of the physiological systems including the immune and neuroimmune systems, the endocrine system, the musculoskeletal system, the sleep-wake cycle, the brain and nervous system, the person's thoughts and experience, their experience of the body, social situation and environment. All these layers interact with each other. Illness mechanisms are important therapeutically as they are seen as potential targets of treatment.
The exact illness mechanisms that are responsible for maintaining an individual's functional disorder should be considered on an individual basis. However, various models have been suggested to account for how symptoms develop and continue. For some people there seems to be a process of central sensitisation, chronic low-grade inflammation, or altered stress reactivity mediated through the hypothalamic-pituitary-adrenal (HPA) axis (Fischer et al., 2022). For some people attentional mechanisms are likely to be important. Commonly, illness perceptions, behaviours and expectations (Henningsen, Van den Bergh et al., 2018) contribute to maintaining an impaired physiological condition.
Perpetuating illness mechanisms are often conceptualized as "vicious cycles", which highlights the non-linear patterns of causality characteristic of these disorders. Other people adopt a pattern of trying to achieve a lot on "good days" which results in exhaustion for days following and a flare up of symptoms, which has led to various energy management tools being used in the patient community, such as "Spoon Theory."
Depression, PTSD, sleep disorders, and anxiety disorders can also perpetuate functional disorders and should be identified and treated where they are present. Side effects or withdrawal effects of medication often need to be considered. Iatrogenic factors such as lack of a clear diagnosis, not feeling believed or not taken seriously by a healthcare professional, multiple (invasive) diagnostic procedures, ineffective treatments and not getting an explanation for symptoms can increase worry and unhelpful illness behaviours. Stigmatising medical attitudes and unnecessary medical interventions (tests, surgeries or drugs) can also cause harm and worsen symptoms.
Treatment
Functional disorders can be treated successfully and are considered reversible conditions. Treatment strategies should integrate biological, psychological and social perspectives. The body of research around evidence-based treatment in functional disorders is growing.
With regard to self-management, there are many basic things that can be done to optimise recovery. Learning about and understanding the condition is helpful in itself. Many people are able to use bodily complaints as a signal to slow down and reassess their balance between exertion and recovery. Bodily complaints can be used as a signal to begin incorporating stress reduction and balanced lifestyle measures (routine, regular activity and relaxation, diet, social engagement) that can help reduce symptoms and are central to improving quality of life. Mindfulness practice can be helpful for some people. Family members or friends can also be helpful in supporting recovery.
Most affected people benefit from support and encouragement in this process, ideally through a multi-disciplinary team with expertise in treating functional disorders. The aim of treatment overall is to first create the conditions necessary for recovery, and then plan a programme of rehabilitation to re-train mind-body connections, making use of the body's ability to change. Particular strategies can be taught to manage bowel symptoms, pain or seizures. Though medication alone should not be considered curative in functional disorders, medication to reduce symptoms might be indicated in some instances, for example where mood or pain is a significant issue preventing adequate engagement in rehabilitation. It is important to address accompanying factors such as sleep disorders, pain, depression and anxiety, and concentration difficulties.
Physiotherapy may be relevant for exercise and activation programs, or when weakness or pain is a problem. Psychotherapy might be helpful to explore a pattern of thoughts, actions and behaviours that could be driving a negative cycle – for example tackling illness expectations or preoccupations about symptoms. Some existing evidence-based treatments include cognitive behavioural therapy (CBT) for functional neurological disorder, physiotherapy for functional motor symptoms, and dietary modification or gut-targeting agents for irritable bowel syndrome.
Controversies and stigma
Despite some progress in the last decade, people with functional disorders continue to experience subtle and overt forms of discrimination from clinicians, researchers and the public. Stigma is a common experience for individuals who present with functional symptoms and is often driven by historical narratives and factual inaccuracies. Because functional disorders do not usually have specific biomarkers or findings on the structural imaging typically undertaken in routine clinical practice, symptoms can be misunderstood, invalidated or dismissed, leading to adverse experiences when individuals seek help.
Part of this stigma is also driven by "mind-body dualism", which frequently surfaces as an area of importance for patients, researchers and clinicians in the realm of functional disorders. Artificial separation of the mind, brain and body (for example the use of phrases such as "physical versus psychological" or "organic versus non-organic") furthers misunderstanding and misconceptions around these disorders, and only serves to hinder scientific progress and patients seeking treatment. Some patient groups have fought to have their illnesses not classified as functional disorders, because in some insurance-based health-care systems these have attracted lower insurance payments. Current research is moving away from dualistic theories and recognising the importance of the whole person, both mind and body, in the diagnosis and treatment of these conditions.
People with functional disorders frequently describe experiences of doubt, blame, and of being seen as less 'genuine' than those with other disorders. Some clinicians assume that individuals with functional disorders are imagining their symptoms or malingering, or doubt the level of voluntary control they have over their symptoms. As a result, individuals with these disorders often wait long periods to be seen by specialists and to receive appropriate treatment. Currently, there is a lack of specialised treatment services for functional disorders in many countries. However, research in this area is growing, and it is hoped that implementing the increased scientific understanding of functional disorders and their treatment will allow effective clinical services to develop. Patient membership organisations and advocacy groups have been instrumental in gaining recognition for individuals with these disorders.
Research
Directions for research involve understanding more about the processes underlying functional disorders, identifying what leads to symptom persistence and improving integrated care/treatment pathways for patients.
Research into the biological mechanisms which underpin functional disorders is ongoing. Understanding how stress affects the body over a lifetime, for example via the immune, endocrine and autonomic nervous systems, is important (Ying-Chih et al. 2020, Tak et al. 2011, Nater et al. 2011). Subtle dysfunctions of these systems, for example low-grade chronic inflammation or dysfunctional breathing patterns, are increasingly thought to underlie functional disorders and to inform their treatment. However, more research is needed before these theoretical mechanisms can be used clinically to guide treatment for an individual patient.
See also
Idiopathic disease
Functional gastrointestinal disorder
Functional neurologic disorder
Functional symptom
Psychosomatic medicine
References
Diseases and disorders
Medical terminology
Biomechanics
Biomechanics is the study of the structure, function and motion of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics. Biomechanics is a branch of biophysics.
Today computational mechanics goes far beyond pure mechanics, and involves other physical actions: chemistry, heat and mass transfer, electric and magnetic stimuli and many others.
Etymology
The word "biomechanics" (1899) and the related "biomechanical" (1856) come from the Ancient Greek βίος bios "life" and μηχανική, mēchanikē "mechanics", to refer to the study of the mechanical principles of living organisms, particularly their movement and structure.
Subfields
Biofluid mechanics
Biological fluid mechanics, or biofluid mechanics, is the study of both gas and liquid fluid flows in or around biological organisms. An often studied liquid biofluid problem is that of blood flow in the human cardiovascular system. Under certain mathematical circumstances, blood flow can be modeled by the Navier–Stokes equations. In vivo whole blood is assumed to be an incompressible Newtonian fluid. However, this assumption fails when considering forward flow within arterioles. At the microscopic scale, the effects of individual red blood cells become significant, and whole blood can no longer be modeled as a continuum. When the diameter of the blood vessel is just slightly larger than the diameter of the red blood cell the Fahraeus–Lindquist effect occurs and there is a decrease in wall shear stress. However, as the diameter of the blood vessel decreases further, the red blood cells have to squeeze through the vessel and often can only pass in a single file. In this case, the inverse Fahraeus–Lindquist effect occurs and the wall shear stress increases.
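To make the continuum description above concrete, the short sketch below computes the wall shear stress predicted by steady Poiseuille flow of a Newtonian fluid in a rigid cylindrical vessel. This is a minimal illustration only, not a model from the source; the vessel radius, flow rate and viscosity values are illustrative assumptions.

```python
import math

def poiseuille_wall_shear_stress(flow_rate_m3s: float, radius_m: float, viscosity_pa_s: float) -> float:
    """Wall shear stress tau = 4*mu*Q / (pi*R^3) for steady, laminar Poiseuille flow
    of a Newtonian fluid in a rigid cylindrical tube."""
    return 4.0 * viscosity_pa_s * flow_rate_m3s / (math.pi * radius_m ** 3)

# Illustrative values (assumptions, not measured data): a ~2 mm radius artery,
# a flow of ~5 mL/s, and a whole-blood viscosity of ~3.5 mPa*s.
tau = poiseuille_wall_shear_stress(flow_rate_m3s=5e-6, radius_m=2e-3, viscosity_pa_s=3.5e-3)
print(f"Predicted wall shear stress: {tau:.2f} Pa")
```

This kind of estimate only applies where the Newtonian, continuum assumptions hold; as noted above, it breaks down in the smallest vessels where individual red blood cells matter.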
An example of a gaseous biofluids problem is that of human respiration. Recently, respiratory systems in insects have been studied for bioinspiration for designing improved microfluidic devices.
Biotribology
Biotribology is the study of friction, wear and lubrication of biological systems, especially human joints such as hips and knees. In general, these processes are studied in the context of contact mechanics and tribology.
Additional aspects of biotribology include analysis of subsurface damage resulting from two surfaces coming in contact during motion, i.e. rubbing against each other, such as in the evaluation of tissue-engineered cartilage.
Comparative biomechanics
Comparative biomechanics is the application of biomechanics to non-human organisms, whether to gain greater insight into humans (as in physical anthropology) or into the functions, ecology and adaptations of the organisms themselves. Common areas of investigation are animal locomotion and feeding, as these have strong connections to an organism's fitness and impose high mechanical demands. Animal locomotion has many manifestations, including running, jumping and flying. Locomotion requires energy to overcome friction, drag, inertia and gravity, though which factor predominates varies with environment.
Comparative biomechanics overlaps strongly with many other fields, including ecology, neurobiology, developmental biology, ethology, and paleontology, to the extent of commonly publishing papers in the journals of these other fields. Comparative biomechanics is often applied in medicine (with regards to common model organisms such as mice and rats) as well as in biomimetics, which looks to nature for solutions to engineering problems.
Computational biomechanics
Computational biomechanics is the application of engineering computational tools, such as the finite element method, to study the mechanics of biological systems. Computational models and simulations are used to predict the relationship between parameters that are otherwise challenging to test experimentally, or to design more relevant experiments, reducing the time and cost of experiments. Mechanical modelling using finite element analysis has been used, for instance, to interpret experimental observations of plant cell growth in order to understand how cells differentiate. In medicine, over the past decade, the finite element method has become an established alternative to in vivo surgical assessment. One of the main advantages of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomy without being subject to ethical restrictions. This has led finite element modelling (and other discretization techniques) to become ubiquitous in several fields of biomechanics, and several projects have adopted an open-source philosophy (e.g., BioSpine and SOniCS, as well as the SOFA, FEniCS and FEBio frameworks).
Computational biomechanics is an essential ingredient in surgical simulation, which is used for surgical planning, assistance, and training. In this case, numerical (discretization) methods are used to compute, as fast as possible, a system's response to boundary conditions such as forces, heat and mass transfer, and electrical and magnetic stimuli.
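As a minimal illustration of what a finite element model involves, the sketch below assembles and solves a one-dimensional linear-elastic bar fixed at one end and loaded axially at the other. It is a toy example under stated assumptions (the material and load values are loosely bone-like and purely illustrative), not a surgical-simulation code.

```python
import numpy as np

def solve_1d_bar(n_elements: int, length: float, area: float, youngs_modulus: float, tip_force: float):
    """Minimal 1D finite element model of an elastic bar fixed at x=0 and loaded
    axially at its free end. Returns nodal displacements."""
    n_nodes = n_elements + 1
    le = length / n_elements                       # element length
    ke = youngs_modulus * area / le                # element axial stiffness
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elements):                    # assemble global stiffness matrix
        K[e:e + 2, e:e + 2] += ke * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f = np.zeros(n_nodes)
    f[-1] = tip_force                              # point load at the free end
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])      # fixed boundary condition at node 0
    return u

# Illustrative (assumed) values roughly in the range of cortical bone:
# E ~ 17 GPa, a 10 mm^2 cross-section, a 10 cm bar, and a 100 N axial load.
u = solve_1d_bar(n_elements=10, length=0.1, area=1e-5, youngs_modulus=17e9, tip_force=100.0)
print(f"Tip displacement: {u[-1] * 1e6:.1f} micrometres")
```

Real biomechanical FE models follow the same assemble-and-solve pattern but in three dimensions, with nonlinear materials and far more complex geometry and boundary conditions.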
Continuum biomechanics
The mechanical analysis of biomaterials and biofluids is usually carried forth with the concepts of continuum mechanics. This assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure. In other words, the mechanical characteristics of these materials rely on physical phenomena occurring in multiple levels, from the molecular all the way up to the tissue and organ levels.
Biomaterials are classified into two groups: hard and soft tissues. Mechanical deformation of hard tissues (like wood, shell and bone) may be analysed with the theory of linear elasticity. On the other hand, soft tissues (like skin, tendon, muscle, and cartilage) usually undergo large deformations, and thus, their analysis relies on the finite strain theory and computer simulations. The interest in continuum biomechanics is spurred by the need for realism in the development of medical simulation.
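To illustrate the distinction drawn above between small-strain linear elasticity (often used for hard tissues) and finite-strain models (used for soft tissues), the sketch below compares the uniaxial stress predicted by Hooke's law with that of an incompressible Neo-Hookean material. The modulus values are illustrative assumptions, not tissue data; the two models agree at small strains and diverge at large stretches.

```python
def linear_elastic_stress(stretch: float, youngs_modulus: float) -> float:
    """Small-strain Hooke's law: sigma = E * (lambda - 1)."""
    return youngs_modulus * (stretch - 1.0)

def neo_hookean_stress(stretch: float, shear_modulus: float) -> float:
    """Uniaxial Cauchy stress of an incompressible Neo-Hookean solid:
    sigma = mu * (lambda^2 - 1/lambda)."""
    return shear_modulus * (stretch ** 2 - 1.0 / stretch)

# Illustrative (assumed) moduli; for an incompressible solid E = 3*mu, so both
# models coincide in the small-strain limit.
mu = 0.1e6           # 0.1 MPa shear modulus, a soft-tissue-like order of magnitude
E = 3.0 * mu
for stretch in (1.01, 1.1, 1.5, 2.0):
    print(f"stretch {stretch}: linear {linear_elastic_stress(stretch, E):.0f} Pa, "
          f"neo-Hookean {neo_hookean_stress(stretch, mu):.0f} Pa")
```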
Neuromechanics
Neuromechanics uses a biomechanical approach to better understand how the brain and nervous system interact to control the body. During motor tasks, motor units activate a set of muscles to perform a specific movement, which can be modified via motor adaptation and learning. In recent years, neuromechanical experiments have been enabled by combining motion capture tools with neural recordings.
Plant biomechanics
The application of biomechanical principles to plants, plant organs and cells has developed into the subfield of plant biomechanics. Application of biomechanics for plants ranges from studying the resilience of crops to environmental stress to development and morphogenesis at cell and tissue scale, overlapping with mechanobiology.
Sports biomechanics
In sports biomechanics, the laws of mechanics are applied to human movement in order to gain a greater understanding of athletic performance and to reduce sport injuries as well. It focuses on the application of the scientific principles of mechanical physics to understand movements of action of human bodies and sports implements such as cricket bat, hockey stick and javelin etc. Elements of mechanical engineering (e.g., strain gauges), electrical engineering (e.g., digital filtering), computer science (e.g., numerical methods), gait analysis (e.g., force platforms), and clinical neurophysiology (e.g., surface EMG) are common methods used in sports biomechanics.
Biomechanics in sports can be described as the body's muscular, joint, and skeletal actions while executing a given task, skill, or technique. Understanding biomechanics relating to sports skills has major implications for sports performance, rehabilitation and injury prevention, and sports mastery. As noted by Doctor Michael Yessis, one could say that the best athlete is the one who executes his or her skill the best.
Vascular biomechanics
The main topic of vascular biomechanics is the description of the mechanical behaviour of vascular tissues.
Cardiovascular disease is the leading cause of death worldwide. The vascular system in the human body is the main component responsible for maintaining pressure and allowing blood flow and chemical exchanges. Studying the mechanical properties of these complex tissues improves the possibility of better understanding cardiovascular diseases and of drastically improving personalized medicine.
Vascular tissues are inhomogeneous and show strongly nonlinear behaviour. Their study generally involves complex geometry with intricate load conditions and material properties. The correct description of these mechanisms is based on the study of physiology and biological interaction, so it is necessary to study wall mechanics and hemodynamics together with their interaction.
It is also necessary to note that the vascular wall is a dynamic structure in continuous evolution. This evolution directly follows the chemical and mechanical environment in which the tissues are immersed, such as wall shear stress or biochemical signaling.
Immunomechanics
The emerging field of immunomechanics focuses on characterising the mechanical properties of immune cells and their functional relevance. The mechanics of immune cells can be characterised using various force spectroscopy approaches such as acoustic force spectroscopy and optical tweezers, and these measurements can be performed under physiological conditions (e.g. temperature). Furthermore, one can study the link between immune cell mechanics and immunometabolism and immune signalling. The term "immunomechanics" is sometimes used interchangeably with immune cell mechanobiology or cell mechanoimmunology.
Other applied subfields of biomechanics include
Allometry
Animal locomotion and Gait analysis
Biotribology
Biofluid mechanics
Cardiovascular biomechanics
Comparative biomechanics
Computational biomechanics
Ergonomy
Forensic Biomechanics
Human factors engineering and occupational biomechanics
Injury biomechanics
Implant (medicine), Orthotics and Prosthesis
Kinaesthetics
Kinesiology (kinetics + physiology)
Musculoskeletal and orthopedic biomechanics
Rehabilitation
Soft body dynamics
Sports biomechanics
History
Antiquity
Aristotle, a student of Plato, can be considered the first bio-mechanic because of his work with animal anatomy. Aristotle wrote the first book on the motion of animals, De Motu Animalium, or On the Movement of Animals. He saw animals' bodies as mechanical systems, and pursued questions such as the physiological difference between imagining performing an action and actually performing it. In another work, On the Parts of Animals, he provided an accurate description of how the ureter uses peristalsis to carry urine from the kidneys to the bladder.
With the rise of the Roman Empire, technology became more popular than philosophy and the next bio-mechanic arose. Galen (129 AD-210 AD), physician to Marcus Aurelius, wrote his famous work, On the Function of the Parts (about the human body). This would be the world's standard medical book for the next 1,400 years.
Renaissance
The next major biomechanic would not appear until the 1490s, with the studies of human anatomy and biomechanics by Leonardo da Vinci. He had a great understanding of science and mechanics and studied anatomy in a mechanical context, analysing muscle forces as acting along lines connecting origins and insertions and studying joint function; these studies could be considered work in the realm of biomechanics. Da Vinci is also known for mimicking some animal features in his machines. For example, he studied the flight of birds to find means by which humans could fly, and, because horses were the principal source of mechanical power in that time, he studied their muscular systems to design machines that would better benefit from the forces applied by this animal.
In 1543, Galen's work On the Function of the Parts was challenged by Andreas Vesalius, then aged 29, who published his own work, On the Structure of the Human Body. In it, Vesalius corrected many errors made by Galen, although his corrections would not be globally accepted for many centuries. In the same year, on his deathbed, Copernicus published On the Revolutions of the Heavenly Spheres, and with his death came a new desire to understand and learn about the world around people and how it works. This work revolutionized not only science and physics but also the development of mechanics and, later, biomechanics.
Galileo Galilei, the father of mechanics and part time biomechanic was born 21 years after the death of Copernicus. Over his years of science, Galileo made a lot of biomechanical aspects known. For example, he discovered that "animals' masses increase disproportionately to their size, and their bones must consequently also disproportionately increase in girth, adapting to loadbearing rather than mere size. The bending strength of a tubular structure such as a bone is increased relative to its weight by making it hollow and increasing its diameter. Marine animals can be larger than terrestrial animals because the water's buoyancy relieves their tissues of weight."
Galileo Galilei was particularly interested in the strength of bones and suggested that bones are hollow because this affords maximum strength with minimum weight. Because animals' bone masses increase disproportionately to their size, bones must also increase disproportionately in girth rather than merely in length; the bending strength of a tubular structure (such as a bone) is much more efficient relative to its weight. Mason suggests that this insight was one of the first grasps of the principles of biological optimization.
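Galileo's observation about hollow bones can be made quantitative with a short calculation. The sketch below is an illustration, not from the source: for two beams of equal cross-sectional area (and hence equal weight per unit length), the hollow tube has the larger section modulus and therefore the greater bending strength. The chosen radii are arbitrary assumptions.

```python
import math

def section_modulus_solid(radius: float) -> float:
    """Section modulus of a solid circular beam: Z = pi * r^3 / 4."""
    return math.pi * radius ** 3 / 4.0

def section_modulus_hollow(outer_radius: float, inner_radius: float) -> float:
    """Section modulus of a hollow circular beam: Z = pi * (ro^4 - ri^4) / (4 * ro)."""
    return math.pi * (outer_radius ** 4 - inner_radius ** 4) / (4.0 * outer_radius)

# Compare a solid rod with a hollow tube of the SAME cross-sectional area (same weight).
r_solid = 1.0                            # arbitrary units
ri = 1.0                                 # assumed inner radius of the hollow tube
ro = math.sqrt(r_solid ** 2 + ri ** 2)   # equal area: pi*(ro^2 - ri^2) = pi*r_solid^2
ratio = section_modulus_hollow(ro, ri) / section_modulus_solid(r_solid)
print(f"The hollow tube is about {ratio:.2f}x stronger in bending for the same weight")
```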
In the 17th century, Descartes suggested a philosophic system whereby all living systems, including the human body (but not the soul), are simply machines ruled by the same mechanical laws, an idea that did much to promote and sustain biomechanical study.
Industrial era
The next major bio-mechanic, Giovanni Alfonso Borelli, embraced Descartes' mechanical philosophy and studied walking, running, jumping, the flight of birds, the swimming of fish, and even the piston action of the heart within a mechanical framework. He could determine the position of the human center of gravity, calculate and measure inspired and expired air volumes, and he showed that inspiration is muscle-driven and expiration is due to tissue elasticity.
Borelli was the first to understand that "the levers of the musculature system magnify motion rather than force, so that muscles must produce much larger forces than those resisting the motion". Influenced by the work of Galileo, whom he personally knew, he had an intuitive understanding of static equilibrium in various joints of the human body well before Newton published the laws of motion. His work is often considered the most important in the history of bio-mechanics because he made so many new discoveries that opened the way for the future generations to continue his work and studies.
It was many years after Borelli before the field of bio-mechanics made any major leaps. After that time, more and more scientists took to learning about the human body and its functions. Few individual scientists from the 19th or 20th century can be singled out in bio-mechanics, because the field is now far too vast to attribute any one advance to one person. However, the field continues to grow every year and to make advances in discovering more about the human body. Because the field became so popular, many institutions and laboratories have opened over the last century and research continues. With the creation of the American Society of Biomechanics in 1977, the field continues to grow and make many new discoveries.
In the 19th century Étienne-Jules Marey used cinematography to scientifically investigate locomotion. He opened the field of modern 'motion analysis' by being the first to correlate ground reaction forces with movement. In Germany, the brothers Ernst Heinrich Weber and Wilhelm Eduard Weber hypothesized a great deal about human gait, but it was Christian Wilhelm Braune who significantly advanced the science using recent advances in engineering mechanics. During the same period, the engineering mechanics of materials began to flourish in France and Germany under the demands of the Industrial Revolution. This led to the rebirth of bone biomechanics when the railroad engineer Karl Culmann and the anatomist Hermann von Meyer compared the stress patterns in a human femur with those in a similarly shaped crane. Inspired by this finding Julius Wolff proposed the famous Wolff's law of bone remodeling.
Applications
The study of biomechanics ranges from the inner workings of a cell to the movement and development of limbs, to the mechanical properties of soft tissue, and bones. Some simple examples of biomechanics research include the investigation of the forces that act on limbs, the aerodynamics of bird and insect flight, the hydrodynamics of swimming in fish, and locomotion in general across all forms of life, from individual cells to whole organisms. With growing understanding of the physiological behavior of living tissues, researchers are able to advance the field of tissue engineering, as well as develop improved treatments for a wide array of pathologies including cancer.
Biomechanics is also applied to studying human musculoskeletal systems. Such research utilizes force platforms to study human ground reaction forces and infrared videography to capture the trajectories of markers attached to the human body to study human 3D motion. Research also applies electromyography to study muscle activation, investigating muscle responses to external forces and perturbations.
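In practice, both force-platform signals and motion-capture marker trajectories are usually low-pass filtered before further analysis. The sketch below is a hedged illustration of that common preprocessing step using a zero-lag Butterworth filter from SciPy; the sampling rate, cut-off frequency and synthetic signal are all assumptions chosen only for demonstration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_filter(signal: np.ndarray, sample_rate_hz: float, cutoff_hz: float, order: int = 4) -> np.ndarray:
    """Zero-lag Butterworth low-pass filter, a common preprocessing step for
    marker trajectories and force-platform signals in movement analysis."""
    b, a = butter(order, cutoff_hz / (0.5 * sample_rate_hz), btype="low")
    return filtfilt(b, a, signal)   # forward-backward pass avoids phase shift

# Illustrative synthetic data (assumption): a 1 Hz vertical marker oscillation
# sampled at 100 Hz with added high-frequency noise, filtered at 6 Hz.
t = np.arange(0, 2, 0.01)
raw = np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.random.randn(t.size)
smoothed = lowpass_filter(raw, sample_rate_hz=100.0, cutoff_hz=6.0)
print(smoothed[:5])
```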
Biomechanics is widely used in the orthopedic industry to design orthopedic implants for human joints, dental parts, external fixations and other medical purposes. Biotribology is a very important part of this: it is the study of the performance and function of biomaterials used for orthopedic implants, and it plays a vital role in improving design and producing successful biomaterials for medical and clinical purposes. One such example is tissue-engineered cartilage. The dynamic loading of joints, considered as impact, is discussed in detail by Emanuel Willert.
It is also tied to the field of engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems. Applied mechanics, most notably mechanical engineering disciplines such as continuum mechanics, mechanism analysis, structural analysis, kinematics and dynamics play prominent roles in the study of biomechanics.
Usually biological systems are much more complex than man-built systems. Numerical methods are hence applied in almost every biomechanical study. Research is done in an iterative process of hypothesis and verification, including several steps of modeling, computer simulation and experimental measurements.
See also
Biomechatronics
Biomedical engineering
Cardiovascular System Dynamics Society
Evolutionary physiology
Forensic biomechanics
International Society of Biomechanics
List of biofluid mechanics research groups
Mechanics of human sexuality
OpenSim (simulation toolkit)
Physical oncology
References
Further reading
External links
Biomechanics and Movement Science Listserver (Biomch-L)
Biomechanics Links
A Genealogy of Biomechanics
Motor control
Physical fitness
Physical fitness is a state of health and well-being and, more specifically, the ability to perform aspects of sports, occupations, and daily activities. Physical fitness is generally achieved through proper nutrition, moderate-vigorous physical exercise, and sufficient rest along with a formal recovery plan.
Before the Industrial Revolution, fitness was defined as the capacity to carry out the day's activities without undue fatigue or lethargy. However, with automation and changes in lifestyles, physical fitness is now considered a measure of the body's ability to function efficiently and effectively in work and leisure activities, to be healthy, to resist hypokinetic diseases, to improve immune system function, and to meet emergency situations.
Overview
Fitness is defined as the quality or state of being fit and healthy. Around 1950, perhaps consistent with the Industrial Revolution and the aftermath of World War II, use of the term "fitness" in western vernacular increased by a factor of ten. The modern definition of fitness describes either a person's or a machine's ability to perform a specific function, or a holistic definition of human adaptability to cope with various situations. This has led to an interrelation of human fitness and physical attractiveness that has mobilized global fitness and fitness-equipment industries. Regarding specific function, fitness is attributed to persons who possess significant aerobic or anaerobic ability (i.e., endurance or strength). A well-rounded fitness program improves a person in all aspects of fitness, compared to practicing only one, such as only cardio/respiratory endurance or only weight training.
A comprehensive fitness program tailored to an individual typically focuses on one or more specific skills, and on age- or health-related needs such as bone health. Many sources also cite mental, social and emotional health as an important part of overall fitness. This is often presented in textbooks as a triangle made up of three points, which represent physical, emotional, and mental fitness. Physical fitness has been shown to have benefits in preventing ill health and assisting recovery from injury or illness. Along with the physical health benefits of fitness, it has also been shown to have a positive impact on mental health as well by assisting in treating anxiety and depression.
Physical fitness can also prevent or treat many other chronic health conditions brought on by unhealthy lifestyle or aging as well and has been listed frequently as one of the most popular and advantageous self-care therapies. Working out can also help some people sleep better by building up sleeping pressure and possibly alleviate some mood disorders in certain individuals.
Developing research has demonstrated that many of the benefits of exercise are mediated through the role of skeletal muscle as an endocrine organ. That is, contracting muscles release multiple substances known as myokines, which promote the growth of new tissue, tissue repair, and various anti-inflammatory functions, which in turn reduce the risk of developing various inflammatory diseases.
Activity guidelines
The 2018 Physical Activity Guidelines for Americans were released by the U.S. Department of Health and Human Services to provide science-based guidance for people ages 3 years and older to improve their health by participating in regular physical activity. These guidelines recommend that all adults should move more and sit less throughout the day to improve health-related quality of life including mental, emotional, and physical health. For substantial health benefits, adults should perform at least 150 to 300 minutes of moderate-intensity, or 75 to 150 minutes per week of vigorous-intensity aerobic physical activity, or an equivalent combination of both spread throughout the week. The recommendation for physical activity to occur in bouts of at least 10 minutes has been eliminated, as new research suggests that bouts of any length contribute to the health benefits linked to the accumulated volume of physical activity. Additional health benefits may be achieved by engaging in more than 300 minutes (5 hours) of moderate-intensity physical activity per week. Adults should also do muscle-strengthening activities that are of moderate or greater intensity and involve all major muscle groups on two or more days a week, as these activities provide additional health benefits.
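The "equivalent combination" of moderate and vigorous activity mentioned above can be checked with simple arithmetic, counting each vigorous minute as roughly two moderate minutes (consistent with the 150- versus 75-minute thresholds). The sketch below is an illustration only; the function name and the example weekly log are assumptions, not part of the guidelines.

```python
def meets_aerobic_guideline(moderate_minutes: float, vigorous_minutes: float) -> bool:
    """Check the weekly aerobic target of 150 moderate-equivalent minutes,
    counting one vigorous minute as two moderate minutes."""
    moderate_equivalent = moderate_minutes + 2.0 * vigorous_minutes
    return moderate_equivalent >= 150.0

# Example week (assumed values): 90 minutes of brisk walking plus 40 minutes of running.
print(meets_aerobic_guideline(moderate_minutes=90, vigorous_minutes=40))  # True: 90 + 80 = 170
```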
Guidelines in the United Kingdom released in July 2011 include the following points:
The intensity at which a person exercises is key, and light activity such as strolling and house work is unlikely to have much positive impact on the health of most people. For aerobic exercise to be beneficial, it must raise the heart rate and cause perspiration. A person should do a minimum of 150 minutes a week of moderate-intensity aerobic exercise. There are more health benefits gained if a person exercises beyond 150 minutes.
Sedentary time (time spent not standing, such as when on a chair or in bed) is bad for a person's health, and no amount of exercise can negate the effects of sitting for too long.
These guidelines are now much more in line with those used in the U.S., which also includes recommendations for muscle-building and bone-strengthening activities such as lifting weights and yoga.
Exercise
Aerobic exercise
Cardiorespiratory fitness can be measured using VO2 max, a measure of the amount of oxygen the body can uptake and utilize. Aerobic exercise, which improves cardiorespiratory fitness and increases stamina, involves movement that increases the heart rate to improve the body's oxygen consumption. This form of exercise is an important part of all training regimens, whether for professional athletes or for the everyday person.
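VO2 max is normally measured in a laboratory, but field tests provide rough estimates. The sketch below applies the widely used Cooper 12-minute run formula; it is only an approximation, and the example distance is an assumed value for illustration.

```python
def cooper_test_vo2max(distance_m_in_12_min: float) -> float:
    """Estimate VO2 max (ml/kg/min) from the Cooper 12-minute run test:
    VO2max ~= (distance in metres - 504.9) / 44.73."""
    return (distance_m_in_12_min - 504.9) / 44.73

# Example: a runner covering 2,400 m in 12 minutes (assumed value).
print(f"Estimated VO2 max: {cooper_test_vo2max(2400):.1f} ml/kg/min")
```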
Prominent examples of aerobic exercises include:
Jogging – Running at a steady and gentle pace. This form of exercise is great for maintaining weight and building a cardiovascular base to later perform more intense exercises.
Working on elliptical trainer – This is a stationary exercise machine used to perform walking, or running without causing excessive stress on the joints. This form of exercise is perfect for people with achy hips, knees, and ankles.
Walking – Moving at a fairly regular pace for a short, medium or long distance.
Treadmill training – Many treadmills have programs set up that offer numerous different workout plans. One effective cardiovascular activity would be to switch between running and walking. Typically warm up first by walking and then switch off between walking for three minutes and running for three minutes.
Swimming – Using the arms and legs to keep oneself afloat in water and moving either forwards or backward. This is a good full-body exercise for those who are looking to strengthen their core while improving cardiovascular endurance.
Cycling – Riding a bicycle typically involves longer distances than walking or jogging. This is another low-impact exercise on the joints and is great for improving leg strength.
Anaerobic exercise
Anaerobic exercise features high-intensity movements performed in a short period of time. It is fast, high-intensity exercise that does not require the body to utilize oxygen to produce energy. It helps to promote strength, endurance, speed, and power, and is used by bodybuilders to build workout intensity. Anaerobic exercise is thought to increase the metabolic rate, allowing one to burn additional calories as the body recovers, due to an increase in body temperature and excess post-exercise oxygen consumption (EPOC) after the exercise has ended.
Prominent examples of anaerobic exercises include:
Weight training – A common type of strength training for developing the strength and size of skeletal muscles.
Isometric exercise – Helps to maintain strength. A muscle action in which no visible movement occurs and the resistance matches the muscular tension.
Sprinting – Running short distances as fast as possible, training for muscle explosiveness.
Interval training – Alternating short bursts (lasting around 30 seconds) of intense activity with longer intervals (three to four minutes) of less intense activity. This type of activity also builds speed and endurance.
Training
Specific or task-oriented fitness is a person's ability to perform in a specific activity, such as sports or military service, with a reasonable efficiency. Specific training prepares athletes to perform well in their sport. These include, among others:
100 m sprint: In a sprint, the athlete must be trained to work anaerobically throughout the race, an example of how to do this would be interval training.
Century ride: Cyclists must be prepared aerobically for a bike ride of 100 miles or more.
Middle distance running: Athletes require both speed and endurance to benefit from this training, as the working muscles must perform at a high level for an extended period of time.
Marathon: In this case, the athlete must be trained to work aerobically, and their endurance must be built-up to a maximum.
Many firefighters and police officers undergo regular fitness testing to determine if they are capable of the physically demanding tasks required of the job.
Members of armed forces are often required to pass a formal fitness test. For example, soldiers of the U.S. Army must be able to pass the Army Physical Fitness Test (APFT).
Hill sprints: Requires a high level of fitness to begin with; the exercise is particularly good for the leg muscles. The Army often trains to do mountain climbing and races.
Plyometric and isometric exercises: An excellent way to build strength and increase muscular endurance.
Sand running creates less strain on leg muscles than running on grass or concrete. This is because sand collapses beneath the foot, which softens the landing. Sand training is an effective way to lose weight and become fit, as more effort is needed (one and a half times more) to run on the soft sand than on a hard surface.
Aquajogging is a form of exercise that decreases strain on joints and bones. The water supplies minimal impact to muscles and bones, which is good for those recovering from injury. Furthermore, the resistance of the water as one jogs through it provides an enhanced effect of exercise (the deeper you are, the greater the force needed to pull your leg through).
Swimming: Squatting exercise helps in enhancing a swimmer's start.
For physical fitness activity to benefit an individual, the exertion must trigger a sufficient amount of stimuli. Exercise with the correct amount of intensity, duration, and frequency can produce a significant amount of improvement. The person may overall feel better, but the physical effects on the human body take weeks or months to notice, and possibly years for full development. For training purposes, exercise must provide a stress or demand on either a function or a tissue. To continue improving, this demand must be increased gradually over an extended period of time. This sort of exercise training rests on three basic principles: overload, specificity, and progression. These principles are related not only to health but also to the enhancement of physical working capacity.
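The progression principle can be illustrated with a simple schedule in which the training demand is increased by a small percentage at regular intervals. The sketch below is purely illustrative; the starting load and the 2.5% weekly increase are assumptions, not a prescription.

```python
def progressive_overload(start_load_kg: float, weekly_increase: float, weeks: int) -> list[float]:
    """Return a week-by-week load schedule that increases the training demand gradually."""
    return [round(start_load_kg * (1.0 + weekly_increase) ** w, 1) for w in range(weeks)]

# Example: start at 50 kg and add roughly 2.5% per week for 8 weeks (assumed values).
print(progressive_overload(start_load_kg=50.0, weekly_increase=0.025, weeks=8))
```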
High intensity interval training
High-intensity interval training (HIIT) consists of repeated, short bursts of exercise completed at a high level of intensity. These sets of intense activity are followed by a predetermined time of rest or low-intensity activity. Studies have shown that exercising at a higher intensity can increase cardiac benefits for humans compared with exercising at a low or moderate level. When a workout consists of a HIIT session, the body has to work harder to replace the oxygen it lost. Research into the benefits of HIIT has shown that it can be very successful for reducing fat, especially around the abdominal region. Furthermore, when compared to continuous moderate exercise, HIIT burns more calories and increases the amount of fat burned after the session. Lack of time is one of the main reasons stated for not exercising; HIIT is a great alternative for those people because the duration of a HIIT session can be as short as 10 minutes, making it much quicker than conventional workouts.
Effects
Controlling blood pressure
Physical fitness has been shown to support healthy blood pressure. Staying active and exercising regularly builds a stronger heart, the main organ responsible for systolic and diastolic blood pressure. Engaging in physical activity raises blood pressure temporarily; once the activity stops, blood pressure returns to normal. The more physical activity a person does, the easier this process becomes, resulting in a fitter cardiovascular profile. Through regular physical fitness, the heart does not have to work as hard to create a rise in blood pressure, which lowers the force on the arteries and lowers overall resting blood pressure.
Cancer prevention
The Centers for Disease Control and Prevention provide lifestyle guidelines for maintaining a balanced diet and engaging in physical activity to reduce the risk of disease. The World Cancer Research Fund (WCRF) and the American Institute for Cancer Research (AICR) published a list of recommendations that reflect the dietary and exercise behaviours shown to reduce the incidence of cancer.
The WCRF/AICR recommendations include the following:
Be as lean as possible without becoming underweight.
Each week, adults should engage in at least 150 minutes of moderate-intensity physical activity or 75 minutes of vigorous-intensity physical activity.
Children should engage in at least one hour of moderate or vigorous physical activity each day.
Be physically active for at least thirty minutes every day.
Avoid sugar, and limit the consumption of energy-packed foods.
Balance one's diet with a variety of vegetables, grains, fruits, legumes, etc.
Limit sodium intake and the consumption of red meats and processed meats.
Limit alcoholic drinks to two for men and one for women a day.
These recommendations are also widely supported by the American Cancer Society. The guidelines have been evaluated, and individuals with higher guideline-adherence scores have substantially reduced cancer risk as well as improved outcomes for a multitude of chronic health problems. Regular physical activity helps reduce an individual's blood pressure and improves cholesterol levels, two key components that correlate with heart disease and type 2 diabetes. The American Cancer Society encourages the public to "adopt a physically active lifestyle" by meeting the criteria through a variety of physical activities such as hiking, swimming, circuit training, resistance training, lifting, etc. It is understood that cancer is not a disease that can be cured by physical fitness alone; however, because it is a multifactorial disease, physical fitness is a controllable means of prevention. The strong associations between physical fitness and reduced cancer risk are enough to support a strategy of preventive interventions.
The American Cancer Society asserts different levels of activity ranging from moderate to vigorous to clarify the recommended time spent on a physical activity. These classifications of physical activity consider intentional exercise and basic activities performed on a daily basis and give the public a greater understanding of what fitness levels suffice as future disease prevention.
Inflammation
Studies have shown an association between increased physical activity and reduced inflammation. It produces both a short-term inflammatory response and a long-term anti-inflammatory effect. Physical activity reduces inflammation in conjunction with or independent of changes in body weight. However, the mechanisms linking physical activity to inflammation are unknown.
Immune system
Physical activity boosts the immune system. This effect depends on the concentration of endogenous factors (such as sex hormones, metabolic hormones and growth hormones), body temperature, blood flow, hydration status and body position. Physical activity has been shown to increase the levels of natural killer (NK) cells, NK T cells, macrophages, neutrophils and eosinophils, complement, cytokines, antibodies and cytotoxic T cells. However, the mechanism linking physical activity to the immune system is not fully understood.
Weight control
Achieving resilience through physical fitness promotes a vast and complex range of health-related benefits. Individuals who maintain their physical fitness generally regulate their distribution of body fat and prevent obesity. Running burns calories drawn from the macronutrients eaten daily. Abdominal fat, specifically visceral fat, is most directly affected by engaging in aerobic exercise. Strength training is known to increase the amount of muscle in the body; however, it can also reduce body fat. Sex steroid hormones, insulin, and appropriate immune responses are factors that mediate metabolism in relation to abdominal fat. Therefore, physical fitness provides weight control through regulation of these bodily functions.
Menopause and physical fitness
Menopause is often said to have occurred when a woman has had no vaginal bleeding for over a year since her last menstrual cycle. There are a number of symptoms connected to menopause, most of which can affect the quality of life of a woman involved in this stage of her life. One way to reduce the severity of the symptoms is to exercise and keep a healthy level of fitness. Prior to and during menopause, as the female body changes, there can be physical, physiological or internal changes to the body. These changes can be reduced or even prevented with regular exercise. These changes include:
Preventing weight gain: around menopause women tend to experience a reduction in muscle mass and an increase in fat levels. Increasing the amount of physical exercise undertaken can help to prevent these changes.
Reducing the risk of breast cancer: weight loss from regular exercise may offer protection from breast cancer.
Strengthening bones: physical activity can slow the bone loss associated with menopause, reducing the chance of bone fractures and osteoporosis.
Reducing the risk of disease: excess weight can increase the risk of heart disease and type 2 diabetes, and regular physical activity can counter these effects.
Boosting mood: being involved in regular activities can improve psychological health, an effect that can be seen at any age and not just during or after menopause.
The Melbourne Women's Midlife Health Project followed 438 women over an eight-year period. Although physical activity was not associated with vasomotor symptoms (more commonly known as hot flushes) in this cohort at baseline, women who reported being physically active every day at baseline were 49% less likely to report bothersome hot flushes. This is in contrast to women whose level of activity decreased, who were more likely to experience bothersome hot flushes.
Mental health
Studies have shown that physical activity can improve mental health and well-being. This improvement is due to an increase in blood flow to the brain, allowing for the release of hormones and a decrease in stress hormone levels (e.g., cortisol, adrenaline), while also stimulating the body's mood boosters and natural painkillers. Not only does exercise release these feel-good hormones, it can also help relieve stress and build confidence. Just as exercise can help people lead a healthier life, it can also improve sleep quality; studies suggest that even 10 minutes of exercise per day can help with insomnia. These effects strengthen when physical activity is performed on a consistent basis, which makes exercise effective in relieving symptoms of depression and anxiety, positively impacting mental health and bringing about several other benefits. For example:
Physical activity has been linked to the alleviation of depression and anxiety symptoms.
In patients with schizophrenia, physical fitness has been shown to improve their quality of life and decrease the effects of schizophrenia.
Being fit can improve one's self-esteem.
Working out can improve one's mental alertness and it can reduce fatigue.
Studies have shown a reduction in stress levels.
Increased opportunity for social interaction, allowing for improved social skills
To achieve some of these benefits, the Centers for Disease Control and Prevention suggest at least 30–60 minutes of exercise 3–5 times a week.
Different forms of exercise have been proven to improve mental health and reduce the risk of depression, anxiety, and suicide.
Benefits of exercise on mental health include improved sleep, stress relief, improvement in mood, increased energy and stamina, and reduced tiredness, which can increase mental alertness. Exercise therefore has beneficial effects for mental health as well as physical health.
History
In the 1940s, Hans Kraus, an emigrant M.D. from Austria, began testing children in the U.S. and Europe for what he termed "muscular fitness" (in other words, muscular functionality). Through his testing, he found children in the U.S. to be far less physically capable than European children. Kraus published some alarming papers in various journals and got the attention of some powerful people, including a senator from Pennsylvania who took the findings to President Dwight D. Eisenhower. President Eisenhower was "shocked." He set up a series of conferences and committees; then in July 1956, Eisenhower established the President's Council on Youth Fitness.
In ancient Greece, physical fitness was considered an essential component of a healthy life, and it was the norm for men to frequent a gymnasium. Physical fitness regimes were also considered to be of paramount importance in a nation's ability to train soldiers for an effective military force. Partly for these reasons, organized fitness regimes have existed throughout known history, and evidence of them can be found in many countries.
Gymnasiums which would seem familiar today began to become increasingly common in the 19th century. The industrial revolution had led to a more sedentary lifestyle for many people and there was an increased awareness that this had the potential to be harmful to health. This was a key motivating factor for the forming of a physical culture movement, especially in Europe and the USA. This movement advocated increased levels of physical fitness for men, women, and children and sought to do so through various forms of indoor and outdoor activity, and education. In many ways, it laid the foundations for modern fitness culture.
Education
The following is a list of some institutions that educate people about physical fitness:
American Council on Exercise (ACE)
National Academy of Sports Medicine (NASM)
International Sports Science Association (ISSA)
See also
References
External links
Physical exercise
Strength training
Anabolism
Anabolism is the set of metabolic pathways that construct macromolecules like DNA or RNA from smaller units. These reactions require energy; such processes are known as endergonic. Anabolism is the building-up aspect of metabolism, whereas catabolism is the breaking-down aspect. Anabolism is usually synonymous with biosynthesis.
Pathway
Polymerization, an anabolic pathway used to build macromolecules such as nucleic acids, proteins, and polysaccharides, uses condensation reactions to join monomers. Macromolecules are created from smaller molecules using enzymes and cofactors.
Energy source
Anabolism is powered by catabolism, where large molecules are broken down into smaller parts and then used up in cellular respiration. Many anabolic processes are powered by the cleavage of adenosine triphosphate (ATP). Anabolism usually involves reduction and decreases entropy, making it unfavorable without energy input. The starting materials, called the precursor molecules, are joined using the chemical energy made available from hydrolyzing ATP, reducing the cofactors NAD+, NADP+, and FAD, or performing other favorable side reactions. Occasionally it can also be driven by entropy without energy input, in cases like the formation of the phospholipid bilayer of a cell, where hydrophobic interactions aggregate the molecules.
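The statement that anabolic steps are "powered" by ATP can be illustrated with standard textbook free-energy arithmetic: an endergonic reaction becomes favourable when coupled to ATP hydrolysis because the standard free-energy changes add. The values below are approximate textbook figures (using glutamine synthesis as the example) and are given only for illustration.

```python
# Approximate standard free-energy changes (kJ/mol) under biochemical standard conditions.
DELTA_G_GLUTAMATE_TO_GLUTAMINE = +14.2   # endergonic on its own (textbook approximation)
DELTA_G_ATP_HYDROLYSIS = -30.5           # ATP -> ADP + Pi (textbook approximation)

coupled = DELTA_G_GLUTAMATE_TO_GLUTAMINE + DELTA_G_ATP_HYDROLYSIS
print(f"Coupled reaction delta-G: {coupled:.1f} kJ/mol")   # negative => thermodynamically favourable
```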
Cofactors
The reducing agents NADH, NADPH, and FADH2, as well as metal ions, act as cofactors at various steps in anabolic pathways. NADH, NADPH, and FADH2 act as electron carriers, while charged metal ions within enzymes stabilize charged functional groups on substrates.
Substrates
Substrates for anabolism are mostly intermediates taken from catabolic pathways during periods of high energy charge in the cell.
Functions
Anabolic processes build organs and tissues. These processes produce growth and differentiation of cells and increase in body size, a process that involves synthesis of complex molecules. Examples of anabolic processes include the growth and mineralization of bone and increases in muscle mass.
Anabolic hormones
Endocrinologists have traditionally classified hormones as anabolic or catabolic, depending on which part of metabolism they stimulate. The classic anabolic hormones are the anabolic steroids, which stimulate protein synthesis and muscle growth, and insulin.
Photosynthetic carbohydrate synthesis
Photosynthetic carbohydrate synthesis in plants and certain bacteria is an anabolic process that produces glucose, cellulose, starch, lipids, and proteins from CO2. It uses the energy produced from the light-driven reactions of photosynthesis, and creates the precursors to these large molecules via carbon assimilation in the photosynthetic carbon reduction cycle, a.k.a. the Calvin cycle.
Amino acid biosynthesis
All amino acids are formed from intermediates in the catabolic processes of glycolysis, the citric acid cycle, or the pentose phosphate pathway. From glycolysis, glucose 6-phosphate is a precursor for histidine; 3-phosphoglycerate is a precursor for glycine and cysteine; phosphoenolpyruvate, combined with the 3-phosphoglycerate-derivative erythrose 4-phosphate, forms tryptophan, phenylalanine, and tyrosine; and pyruvate is a precursor for alanine, valine, leucine, and isoleucine. From the citric acid cycle, α-ketoglutarate is converted into glutamate and subsequently glutamine, proline, and arginine; and oxaloacetate is converted into aspartate and subsequently asparagine, methionine, threonine, and lysine.
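The precursor relationships listed in this paragraph can be summarised as a simple lookup table. The sketch below merely restates the text above as a Python dictionary for clarity and adds nothing beyond it; the variable name is an arbitrary choice.

```python
# Precursor -> amino acids derived from it, restating the relationships described above.
AMINO_ACID_PRECURSORS = {
    # from glycolysis
    "glucose 6-phosphate": ["histidine"],
    "3-phosphoglycerate": ["glycine", "cysteine"],
    "phosphoenolpyruvate + erythrose 4-phosphate": ["tryptophan", "phenylalanine", "tyrosine"],
    "pyruvate": ["alanine", "valine", "leucine", "isoleucine"],
    # from the citric acid cycle
    "alpha-ketoglutarate": ["glutamate", "glutamine", "proline", "arginine"],
    "oxaloacetate": ["aspartate", "asparagine", "methionine", "threonine", "lysine"],
}

for precursor, amino_acids in AMINO_ACID_PRECURSORS.items():
    print(f"{precursor}: {', '.join(amino_acids)}")
```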
Glycogen storage
During periods of high blood sugar, glucose 6-phosphate from glycolysis is diverted to the glycogen-storing pathway. It is changed to glucose-1-phosphate by phosphoglucomutase and then to UDP-glucose by UTP--glucose-1-phosphate uridylyltransferase. Glycogen synthase adds this UDP-glucose to a glycogen chain.
Gluconeogenesis
Glucagon is traditionally a catabolic hormone, but also stimulates the anabolic process of gluconeogenesis by the liver, and to a lesser extent the kidney cortex and intestines, during starvation to prevent low blood sugar. It is the process of converting pyruvate into glucose. Pyruvate can come from the breakdown of glucose, lactate, amino acids, or glycerol. The gluconeogenesis pathway has many reversible enzymatic processes in common with glycolysis, but it is not the process of glycolysis in reverse. It uses different irreversible enzymes to ensure the overall pathway runs in one direction only.
Regulation
Anabolism operates with separate enzymes from catabolism, which undergo irreversible steps at some point in their pathways. This allows the cell to regulate the rate of production and prevent an infinite loop, also known as a futile cycle, from forming with catabolism.
The balance between anabolism and catabolism is sensitive to ADP and ATP, otherwise known as the energy charge of the cell. High amounts of ATP cause cells to favor the anabolic pathway and slow catabolic activity, while excess ADP slows anabolism and favors catabolism. These pathways are also regulated by circadian rhythms, with processes such as glycolysis fluctuating to match an animal's normal periods of activity throughout the day.
Etymology
The word anabolism is from Neo-Latin, with roots from the Greek ἀνά (ana), "upward", and βάλλειν (ballein), "to throw".
References
Metabolism
Chronic condition
A chronic condition (also known as chronic disease or chronic illness) is a health condition or disease that is persistent or otherwise long-lasting in its effects, or a disease that develops over time. The term chronic is often applied when the course of the disease lasts for more than three months. Common chronic diseases include diabetes, functional gastrointestinal disorder, eczema, arthritis, asthma, chronic obstructive pulmonary disease, autoimmune diseases, genetic disorders and some viral diseases such as hepatitis C and acquired immunodeficiency syndrome. An illness which is lifelong because it ends in death is a terminal illness. It is possible, and not unexpected, for an illness to change in definition from terminal to chronic: diabetes and HIV, for example, were once terminal yet are now considered chronic owing to the availability of insulin for diabetics and daily drug treatment for individuals with HIV, which allow these individuals to live while managing symptoms.
In medicine, chronic conditions are distinguished from those that are acute. An acute condition typically affects one portion of the body and responds to treatment. A chronic condition, on the other hand, usually affects multiple areas of the body, is not fully responsive to treatment, and persists for an extended period of time.
Chronic conditions may have periods of remission or relapse where the disease temporarily goes away, or subsequently reappears. Periods of remission and relapse are commonly discussed when referring to substance abuse disorders which some consider to fall under the category of chronic condition.
Chronic conditions are often associated with non-communicable diseases which are distinguished by their non-infectious causes. Some chronic conditions though, are caused by transmissible infections such as HIV/AIDS.
63% of all deaths worldwide are from chronic conditions. Chronic diseases constitute a major cause of mortality, and the World Health Organization (WHO) attributes 38 million deaths a year to non-communicable diseases. In the United States approximately 40% of adults have at least two chronic conditions. Living with two or more chronic conditions is referred to as multimorbidity.
Types
The term chronic condition has often been used to describe the various health-related states of the human body, such as syndromes, physical impairments, disabilities, and diseases. Epidemiologists have taken an interest in chronic conditions because they contribute to disease, disability, and diminished physical and/or mental capacity.
For example, high blood pressure or hypertension is considered to be not only a chronic condition itself but also correlated with diseases such as heart attack or stroke. Additionally, some socioeconomic factors may be considered as a chronic condition as they lead to disability in daily life. An important one that public health officials in the social science setting have begun highlighting is chronic poverty.
Researchers, particularly those studying the United States, utilize the Chronic Condition Indicator (CCI) which maps ICD codes as "chronic" or "non-chronic".
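In practice, a tool like the CCI amounts to a lookup from diagnosis codes to a chronic or non-chronic flag. The sketch below shows the general idea with a small, made-up mapping; the codes shown and the function are illustrative assumptions and are not the actual CCI tables.

```python
# Hypothetical, illustrative mapping only -- NOT the real Chronic Condition Indicator tables.
EXAMPLE_CCI_MAP = {
    "E11.9": "chronic",      # type 2 diabetes without complications (illustrative entry)
    "I10": "chronic",        # essential hypertension (illustrative entry)
    "J06.9": "non-chronic",  # acute upper respiratory infection (illustrative entry)
}

def classify_diagnoses(icd_codes: list[str], cci_map: dict[str, str]) -> dict[str, str]:
    """Label each ICD code as 'chronic', 'non-chronic', or 'unknown' if unmapped."""
    return {code: cci_map.get(code, "unknown") for code in icd_codes}

print(classify_diagnoses(["E11.9", "J06.9", "Z99.9"], EXAMPLE_CCI_MAP))
```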
The list below includes these chronic conditions and diseases:
In 2015 the World Health Organization produced a report on non-communicable diseases, citing the four major types as:
Cancers
Cardiovascular diseases, including cerebrovascular disease, heart failure, and ischemic cardiopathy
Chronic respiratory diseases, such as asthma and chronic obstructive pulmonary disease (COPD)
Diabetes mellitus (type 1, type 2, pre-diabetes, gestational diabetes)
Other examples of chronic diseases and health conditions include:
Alzheimer's disease
Atrial fibrillation
Attention deficit hyperactivity disorder
Autoimmune diseases, such as ulcerative colitis, lupus erythematosus, Crohn's disease, coeliac disease, Hashimoto's thyroiditis, and relapsing polychondritis
Blindness
Cerebral palsy (all types)
Chronic graft-versus-host disease
Chronic hepatitis
Chronic pancreatitis
Chronic kidney disease
Chronic osteoarticular diseases, such as osteoarthritis and rheumatoid arthritis
Chronic pain syndromes, such as post-vasectomy pain syndrome and complex regional pain syndrome
Dermatological conditions such as atopic dermatitis and psoriasis
Deafness and hearing impairment
Ehlers–Danlos syndrome (various types)
Endometriosis
Epilepsy
Fetal alcohol spectrum disorder
Fibromyalgia
HIV/AIDS
Hereditary spherocytosis
Huntington's disease
Hypertension
Mental illness
Migraines
Multiple sclerosis
Myalgic encephalomyelitis (chronic fatigue syndrome)
Narcolepsy
Obesity
Osteoporosis
Parkinson's disease
Periodontal disease
Polycystic ovary syndrome
Postural orthostatic tachycardia syndrome
Sickle cell anemia and other hemoglobin disorders
Substance abuse disorders
Sleep apnea
Thyroid disease
Tuberculosis
And many more.
Risk factors
While risk factors vary with age and gender, many of the common chronic diseases in the US are caused by dietary, lifestyle and metabolic risk factors. Therefore, these conditions might be prevented by behavioral changes, such as quitting smoking, adopting a healthy diet, and increasing physical activity. Social determinants are important risk factors for chronic diseases. Social factors, e.g., socioeconomic status, education level, and race/ethnicity, are a major cause of the disparities observed in the care of chronic disease. Lack of access and delay in receiving care result in worse outcomes for patients from minorities and underserved populations. These barriers to medical care complicate patient monitoring and continuity of treatment.
In the US, minorities and low-income populations are less likely to seek, access and receive preventive services necessary to detect conditions at an early stage.
The majority of US health care and economic costs associated with medical conditions are incurred by chronic diseases and conditions and associated health risk behaviors. Eighty-four percent of all health care spending in 2006 was for the 50% of the population who have one or more common chronic medical conditions (CDC, 2014).
There are several psychosocial risk and resistance factors among children with chronic illness and their family members. Adults with chronic illness were significantly more likely to report life dissatisfaction than those without chronic illness. Compared to their healthy peers, children with chronic illness have about a twofold increase in psychiatric disorders. Higher parental depression and other family stressors predicted more problems among patients. In addition, sibling problems along with the burden of illness on the family as a whole led to more psychological strain on the patients and their families.
Prevention
A growing body of evidence supports that prevention is effective in reducing the effect of chronic conditions; in particular, early detection results in less severe outcomes. Clinical preventive services include screening for the existence of the disease or a predisposition to its development, counseling, and immunizations against infectious agents. Despite their effectiveness, the utilization of preventive services is typically lower than that of regular medical services. In contrast to their apparent cost in time and money, the benefits of preventive services are not directly perceived by patients because their effects are long term or may be greater for society as a whole than at the individual level.
Therefore, public health programs are important in educating the public and promoting healthy lifestyles and awareness about chronic diseases. While those programs can benefit from funding at different levels (state, federal, private), their implementation is mostly the responsibility of local agencies and community-based organizations.
Studies have shown that public health programs are effective in reducing mortality rates associated with cardiovascular disease, diabetes and cancer, but the results are somewhat heterogeneous depending on the type of condition and the type of programs involved. For example, results from different approaches in cancer prevention and screening depended highly on the type of cancer.
The rising number of patients with chronic diseases has renewed interest in prevention and its potential role in helping control costs. In 2008, the Trust for America's Health produced a report estimating that investing $10 per person annually in community-based programs of proven effectiveness that promote healthy lifestyles (increased physical activity, healthier diet and preventing tobacco use) could save more than $16 billion annually within a period of just five years.
A 2017 review (updated in 2022) found that it is uncertain whether school-based policies targeting risk factors for chronic diseases, such as healthy eating policies, physical activity policies, and tobacco policies, can improve student health behaviours or the knowledge of staff and students. The 2022 update did find a slight improvement in measures of obesity and physical activity where improved strategies led to increased implementation of interventions, but it continued to call for additional research to address questions related to alcohol use and risk. Encouraging those with chronic conditions to continue with their outpatient (ambulatory) medical care and attend scheduled medical appointments may help improve outcomes and reduce medical costs due to missed appointments. Finding patient-centered alternatives to doctors or consultants scheduling medical appointments has been suggested as a means of reducing the number of people with chronic conditions who miss medical appointments; however, there is no strong evidence that these approaches make a difference.
Nursing
Nursing can play an important role in assisting patients with chronic diseases achieve longevity and experience wellness. Scholars point out that the current neoliberal era emphasizes self-care, in both affluent and low-income communities. This self-care focus extends to the nursing of patients with chronic diseases, replacing a more holistic role for nursing with an emphasis on patients managing their own health conditions. Critics note that this is challenging if not impossible for patients with chronic disease in low-income communities where health care systems, and economic and social structures do not fully support this practice.
A study in Ethiopia showcases a nursing-heavy approach to the management of chronic disease. Foregrounding the problem of distance from healthcare facilities, the study recommends that patients seek care more actively, and it uses nurses and health officers to fill, in a cost-efficient way, the large unmet need for chronic disease treatment. The health centers were staffed by nurses and health officers, so the specific training required for involvement in the program must be carried out regularly to ensure that new staff are educated in administering chronic disease care. The program shows that community-based care and education, primarily driven by nurses and health officers, works. It highlights the importance of nurses following up with individuals in the community, and of allowing nurses the flexibility to meet their patients' needs and educate them for self-care in their homes.
Epidemiology
The epidemiology of chronic disease is diverse, and the epidemiology of some chronic diseases can change in response to new treatments. In the treatment of HIV, the success of anti-retroviral therapies means that many patients will experience this infection as a chronic disease that, for many, will span several decades of their life.
Some epidemiology of chronic disease can apply to multiple diagnoses. Obesity and body fat distribution, for example, contribute to and are risk factors for many chronic diseases such as diabetes, heart disease, and kidney disease. Other epidemiological factors, such as social, socioeconomic, and environmental factors, do not have a straightforward cause-and-effect relationship with chronic disease diagnosis. While higher socioeconomic status is typically correlated with a lower occurrence of chronic disease, it is not known whether there is a direct cause-and-effect relationship between these two variables.
The epidemiology of communicable chronic diseases such as AIDS also differs from that of non-communicable chronic diseases. While social factors do play a role in AIDS prevalence, only exposure to the pathogen is strictly needed to contract this chronic disease. Communicable chronic diseases are also typically treatable only with medication, rather than with the lifestyle changes that can treat some non-communicable chronic diseases.
United States
As of 2003, there are a few programs which aim to gain more knowledge on the epidemiology of chronic disease using data collection. The hope of these programs is to gather epidemiological data on various chronic diseases across the United States and demonstrate how this knowledge can be valuable in addressing chronic disease.
In the United States, as of 2004 nearly one in two Americans (133 million) has at least one chronic medical condition, with most subjects (58%) between the ages of 18 and 64. The number is projected to increase by more than one percent per year by 2030, resulting in an estimated chronically ill population of 171 million. The most common chronic conditions are high blood pressure, arthritis, respiratory diseases like emphysema, and high cholesterol.
Based on data from the 2014 Medical Expenditure Panel Survey (MEPS), about 60% of adult Americans were estimated to have at least one chronic illness, with about 40% having more than one; this rate appears to be mostly unchanged from 2008. MEPS data from 1998 showed 45% of adult Americans had at least one chronic illness, and 21% had more than one.
According to research by the CDC, chronic disease is also especially a concern in the elderly population in America. Chronic diseases like stroke, heart disease, and cancer were among the leading causes of death among Americans aged 65 or older in 2002, accounting for 61% of all deaths among this subset of the population. It is estimated that at least 80% of older Americans are currently living with some form of a chronic condition, with 50% of this population having two or more chronic conditions. The two most common chronic conditions in the elderly are high blood pressure and arthritis, with diabetes, coronary heart disease, and cancer also being reported among the elder population.
In examining the statistics of chronic disease among the living elderly, it is also important to make note of the statistics pertaining to fatalities as a result of chronic disease. Heart disease is the leading cause of death from chronic disease for adults older than 65, followed by cancer, stroke, diabetes, chronic lower respiratory diseases, influenza and pneumonia, and, finally, Alzheimer's disease. Though the rates of chronic disease differ by race for those living with chronic illness, the statistics for leading causes of death among elderly are nearly identical across racial/ethnic groups.
Chronic illnesses cause about 70% of deaths in the US and in 2002 chronic conditions (heart disease, cancers, stroke, chronic respiratory diseases, diabetes, Alzheimer's disease, mental illness and kidney diseases) were six of the top ten causes of mortality in the general US population.
Economic impact
United States
Chronic diseases are a major factor in the continuous growth of medical care spending. In 2002, the U.S. Department of Health and Human Services stated that health care for chronic diseases cost the most among all health problems in the U.S. Healthy People 2010 reported that more than 75% of the $2 trillion spent annually on U.S. medical care is due to chronic conditions; spending is proportionally even higher for Medicare beneficiaries (aged 65 years and older). Furthermore, in 2017 it was estimated that 90% of the $3.3 trillion spent on healthcare in the United States was due to the treatment of chronic diseases and conditions. Spending growth is driven in part by the greater prevalence of chronic illnesses and the longer life expectancy of the population. Also, improvement in treatments has significantly extended the lifespans of patients with chronic diseases but results in additional costs over a long period of time. A striking success is the development of combined antiviral therapies that led to remarkable improvement in survival rates and quality of life of HIV-infected patients.
In addition to direct costs in health care, chronic diseases are a significant burden to the economy, through limitations in daily activities, loss in productivity and loss of days of work. A particular concern is the rising rates of overweight and obesity in all segments of the U.S. population. Obesity itself is a medical condition and not a disease, but it constitutes a major risk factor for developing chronic illnesses, such as diabetes, stroke, cardiovascular disease and cancers. Obesity results in significant health care spending and indirect costs, as illustrated by a recent study from the Texas comptroller reporting that obesity alone cost Texas businesses an extra $9.5 billion in 2009, including more than $4 billion for health care, $5 billion for lost productivity and absenteeism, and $321 million for disability.
Social and personal impact
Recent research has linked social factors to both the prevalence and the outcomes of chronic conditions.
Mental health
The connection between loneliness, overall health, and chronic conditions has recently been highlighted. Some studies have shown that loneliness has detrimental health effects similar to those of smoking and obesity. One study found that feelings of isolation are associated with higher self-reported rates of poor health, and that feelings of loneliness increase the likelihood of mental health disorders in individuals.
The connection between chronic illness and loneliness is established, yet oftentimes ignored in treatment. One study for example found that a greater number of chronic illnesses per individual were associated with feelings of loneliness. Some of the possible reasons for this listed are an inability to maintain independence as well as the chronic illness being a source of stress for the individual. A study of loneliness in adults over age 65 found that low levels of loneliness as well as high levels of familial support were associated with better outcomes of multiple chronic conditions such as hypertension and diabetes.
There are some recent movements in the medical sphere to address these connections when treating patients with chronic illness. The biopsychosocial approach, for example, developed in 2006, focuses on the "patient's personality, family, culture, and health dynamics." Physicians are leaning more towards a psychosocial approach to chronic illness to aid the increasing number of individuals diagnosed with these conditions. Despite this movement, there is still criticism that chronic conditions are not being treated appropriately and that there is not enough emphasis on the behavioral aspects of chronic conditions or psychological support for patients.
The intersection of mental health with chronic conditions is a large aspect often overlooked by doctors. Chronic illness therapists are available to provide support for the mental toll of chronic illness, which is often underestimated in society. Adults with chronic illness that restricts their daily life present with more depression and lower self-esteem than healthy adults and adults with non-restricting chronic illness. The emotional influence of chronic illness also has an effect on the intellectual and educational development of the individual. For example, people living with type 1 diabetes endure a lifetime of monotonous and rigorous health care management usually involving daily blood glucose monitoring, insulin injections, and constant self-care. This type of constant attention required by type 1 diabetes and other chronic illnesses can result in psychological maladjustment. There have been several theories, namely one called diabetes resilience theory, which posit that protective processes buffer the impact of risk factors on the individual's development and functioning.
Financial cost
People with chronic conditions pay more out-of-pocket; a study found that Americans spent $2,243 more on average. The financial burden can increase medication non-adherence.
In some countries, laws protect patients with chronic conditions from excessive financial responsibility; for example, as of 2008 France limited copayments for those with chronic conditions, and Germany limits cost sharing to 1% of income versus 2% for the general public.
Within the medical-industrial complex, chronic illnesses can shape the relationship between pharmaceutical companies and people with chronic conditions. The prices of life-saving or life-extending drugs can be inflated for profit. There is little regulation of the cost of chronic illness drugs, and the lack of a price cap can create a large market for drug revenue. Likewise, certain chronic conditions can last throughout a person's lifetime, creating pathways for pharmaceutical companies to take advantage of this.
Gender
Gender influences how chronic disease is viewed and treated in society. Women's chronic health issues are often considered most worthy of treatment, or most severe, when the chronic condition interferes with a woman's fertility. Historically, there has been less focus on a woman's chronic conditions when they interfere with other aspects of her life or well-being. Many women report feeling less than, or even "half of a woman," due to the pressures that society places on the importance of fertility and health when it comes to typically feminine ideals. These kinds of social barriers interfere with women's ability to perform various other activities in life and to work fully toward their aspirations.
Socioeconomic class and race
Race is also implicated in chronic illness, although there may be many other factors involved. Racial minorities are 1.5-2 times more likely than white individuals to have most chronic diseases. Non-Hispanic blacks are 40% more likely to have high blood pressure than non-Hispanic whites, diagnosed diabetes is 77% higher among non-Hispanic blacks, and American Indians and Alaska Natives are 60% more likely to be obese than non-Hispanic whites. Some of this prevalence has been suggested to stem in part from environmental racism. Flint, Michigan, for example, had high levels of lead in its drinking water after waste was dumped into low-value housing areas. There are also higher rates of asthma in children who live in lower-income areas due to an abundance of pollutants being released on a much larger scale in these areas.
Advocacy and research organizations
In Europe, the European Chronic Disease Alliance was formed in 2011, which represents over 100,000 healthcare workers.
In the United States, there are a number of nonprofits focused on chronic conditions, including entities focused on specific diseases such as the American Diabetes Association, Alzheimer's Association, or Crohn's and Colitis Foundation. There are also broader groups focused on advocacy or research into chronic illness in general, such as the National Association of Chronic Disease Directors, Partnership to Fight Chronic Disease, the Chronic Disease Coalition which arose in Oregon in 2015, and the Chronic Policy Care Alliance.
See also
Chronic care management
Chronic disease in China
Chronic disease in Northern Ontario
Chronic Illness (journal)
Chronic pain
Long COVID
Course (medicine)
Disability studies
Disease management (health)
Dynamic treatment regimes
Medical tattoo
Multimorbidity
Natural history of disease
Virtual Wards (a UK term)
References
Further reading
External links
Center for Managing Chronic Disease, University of Michigan
CHRODIS: EU Joint Action on Chronic Diseases and Promoting Healthy Ageing Across the Life-Cycle
MEDICC Review theme issue on Confronting Chronic Diseases: with longer life expectancies in most countries and the globalization of "Western" diets and sedentary lifestyles, the main burden of disease and death from these conditions is falling on already-disadvantaged developing countries and poor communities everywhere.
Public Health Agency of Canada: Chronic Disease
World Health Organization: Chronic Disease and Health Promotion
Medical terminology
Human diseases and disorders
Disability by type
Survival skills
Survival skills are techniques used to sustain life in any type of natural environment or built environment. These techniques are meant to provide basic necessities for human life, including water, food, and shelter. Survival skills also support proper knowledge and interactions with animals and plants to promote the sustaining of life over time.
Survival skills are basic ideas and abilities that ancient people invented and passed down for thousands of years. Today, survival skills are often associated with surviving in a disaster situation.
Outdoor activities such as hiking, backpacking, horseback riding, fishing, and hunting all require basic wilderness survival skills, especially to handle emergencies. Individuals who practice survival skills as a type of outdoor recreation or hobby may describe themselves as survivalists. Survival skills are often used by people living off-grid lifestyles such as homesteaders. Bushcraft and primitive living are most often self-implemented but require many of the same skills.
First aid
First aid (wilderness first aid in particular) can help a person survive and function with injuries and illnesses that would otherwise kill or compromise them. Common and dangerous injuries include:
Bites from snakes, spiders, and other wild animals
Bone fractures
Burns
Drowsiness
Headache
Heart attack
Hemorrhage
Hypothermia and hyperthermia
Infection from food, animal contact, or drinking non-potable water
Poisoning from poisonous plants or fungi
Sprains, particularly of the ankle
Vomiting
Wounds, which may become infected
The person may need to apply the contents of a first aid kit or, if possessing the required knowledge, naturally occurring medicinal plants, immobilize injured limbs, or even transport incapacitated comrades.
Shelter
Many people who are forced into survival situations often have an elevated risk of danger because of direct exposure to the elements. Many people in survival situations die of hypothermia or hyperthermia, or animal attacks. An effective shelter can range from a natural shelter, such as a cave, overhanging rock outcrop, or a fallen-down tree, to an intermediate form of man-made shelter such as a debris hut, tree pit shelter, or snow cave, to a completely man-made structure such as a tarp, tent, or a longhouse. Some common properties of these structures are:
Location (away from hazards, such as cliffs; and nearby materials, like food sources)
Insulation (from the ground, rain, wind, air, or sun)
Heat Source (either body heat or fire-heated)
Personal or Group Shelter (having multiple individuals)
Fire
Fire is a tool that helps meet many survival needs. A campfire can be used to boil water, rendering it safe to drink, and to cook food. Fire also creates a sense of safety and protection, which can provide an overlooked psychological boost. When temperatures are low, fire can postpone or prevent the risk of hypothermia. In a wilderness survival situation, fire can provide a sense of home in addition to being an essential energy source. Fire may deter wild animals from interfering with an individual, though some wild animals may also be attracted to the light and heat of a fire.
There are numerous methods for starting a fire in a survival situation. Fires are started either with a concentration of heat, as in the case of the solar spark lighter, or through a spark, as in the case of a flint striker. Fires will often be extinguished if either there is excessive wind, or if the fuel or environment is too wet. Lighting a fire without a lighter or matches, e.g. by using natural flint and metal with tinder, is a frequent subject of both books on survival and survival courses, because it allows an individual to start a fire with few materials in the event of a disaster. There is an emphasis placed on practicing fire-making skills before venturing into the wilderness. Producing fire under adverse conditions has been made much easier by the introduction of tools such as the magnesium striker, solar spark lighter, and the fire piston.
Water
A human being can survive an average of three to five days without water. Since the human body is composed of an average of 60% water, it should be no surprise that water is higher on the list than food. The need for water dictates that unnecessary water loss by perspiration should be avoided in survival situations. Perspiration and the need for water increase with exercise. Although human water intake varies greatly depending on factors like age and gender, the average human should drink about 13 cups or 3 liters per day. Many people in survival situations perish due to dehydration, and/or the debilitating effects of water-borne pathogens from untreated water.
A typical person will lose a minimum of two to four liters of water per day under ordinary conditions, and more in hot, dry, or cold weather. Four to six liters of water or other liquids are generally required each day in the wilderness to avoid dehydration and to keep the body functioning properly. The U.S. Army survival manual does not recommend drinking water only when thirsty, as this leads to inadequate hydration. Instead, water should be consumed at regular intervals. Other groups recommend rationing water through "water discipline."
A lack of water causes dehydration, which may result in lethargy, headaches, dizziness, confusion, and eventually death. Even mild dehydration reduces endurance and impairs concentration, which is dangerous in a survival situation where clear thinking is essential. Dark yellow or brown urine is a diagnostic indicator of dehydration. To avoid dehydration, a high priority is typically assigned to locating a supply of drinking water and making provisions to render that water as safe as possible.
Recent thinking is that boiling or commercial filters are significantly safer than the use of chemicals, with the exception of chlorine dioxide.
Food
Culinary root tubers, fruit, edible mushrooms, edible nuts, edible beans, edible cereals or edible leaves, edible cacti, ants and algae can be gathered and, if needed, prepared (mostly by boiling). With the exception of leaves, these foods are relatively high in calories, providing some energy to the body. Plants are some of the easiest food sources to find in the jungle, forest, or desert because they are stationary and can thus be obtained without exerting much effort. Animal trapping, hunting, and fishing allow a survivalist to acquire high-calorie meat but require certain skills and equipment (such as bows, snares, and nets).
Focusing on survival until rescued, the Boy Scouts of America especially discourages foraging for wild foods on the grounds that the knowledge and skills needed to make a safe decision are unlikely to be possessed by those finding themselves in a wilderness survival situation.
Navigation
When going on a hike or trip in an unfamiliar location, search and rescue experts advise notifying a trusted contact of your destination and your planned return time, and then notifying them when you return. If you do not return within the specified time frame (e.g., 12 hours of the scheduled return time), your contact can call the police to initiate search and rescue.
Survival situations can often be resolved by finding a way to safety, or a more suitable location to wait for rescue. Types of navigation include:
Celestial navigation, using the sun and the night sky to locate the cardinal directions and to maintain course of travel
Using a map, compass or GPS receiver
Dead reckoning
Natural navigation, using the condition of surrounding natural objects (i.e. moss on a tree, snow on a hill, direction of running water, etc.)
Mental preparedness
Mental clarity and preparedness are critical to survival. The will to live in a life-and-death situation often separates those that live and those that do not. Even well-trained survival experts may be mentally affected in disaster situations. It is critical to be calm and focused during a disaster.
To the extent that stress results from testing human limits, the benefits of learning to function under stress and determining those limits may outweigh the downside of stress. There are certain strategies and mental tools that can help people cope better in a survival situation, including focusing on manageable tasks, having a Plan B available, and recognizing denial.
Urban survival
Earthquake
Governments such as the United States and New Zealand advise that in an earthquake, one should "Drop, Cover, and Hold."
New Zealand Civil Defense explains it this way:
DROP down on your hands and knees. This protects you from falling but lets you move if you need to.
COVER your head and neck (or your entire body if possible) under a sturdy table or desk (if it is within a few steps of you). If there is no shelter nearby, cover your head and neck with your arms and hands.
HOLD on to your shelter (or your position to protect your head and neck) until the shaking stops. If the shaking shifts your shelter around, move with it.
The United States Federal Emergency Management Agency (FEMA) adds that in the event of a building collapse, it is advised that you:
Seek protection under a structure like a table
Cover your mouth with your shirt to filter out dust
Don't move until you are confident that something won't topple on you
Use your phone light to signal for help, or call
Important survival items
Survivalists often carry a "survival kit." The contents of these kits vary considerably, but generally consist of items that are necessary or useful in potential survival situations, depending on the anticipated needs and location. For wilderness survival, these kits often contain items like a knife, water vessel, fire-starting equipment, first aid equipment, tools to obtain food (such as snare wire, fish hooks, or firearms), a light source, navigational aids, and signaling or communications devices. Multi-purpose tools are often chosen because they serve multiple purposes, allowing the user to reduce weight and save space.
Preconstructed survival kits may be purchased from various retailers, or individual components may be bought and assembled into a kit.
Controversial survival skills
Some survival books promote the "Universal Edibility Test." Allegedly, it is possible to distinguish edible foods from toxic ones by exposing your skin and mouth to progressively greater amounts of the food in question, with waiting periods and checks for symptoms between these exposures. However, many experts reject this method, stating that even a small amount of some "potential foods" can cause physical discomfort, illness, or even death.
Many mainstream survival experts have recommended the act of drinking urine in times of dehydration and malnutrition. However, the U.S. Army Survival Field Manual (FM 21–76) instructs that this technique is a myth and should never be used. There are several reasons to avoid drinking urine, including the high salt content of urine, potential contaminants, and the risk of bacterial exposure, despite urine often being touted as "sterile."
Many classic western movies, classic survival books, and even some school textbooks suggest that using your mouth to suck the venom out of a venomous snake bite is an appropriate treatment. However, venom that has entered the bloodstream cannot be sucked out, and it may be dangerous for a rescuer to attempt to do so. Similarly, some survivalists promote the belief that when bitten by a venomous snake, drinking your urine provides natural anti-venom. Effective snakebite treatment involves pressure bandages and prompt medical treatment, and may require antivenom.
Seizonjutsu
Seizonjutsu (生存術) refers to survival skills such as gathering, hunting, and tracking used in Ninjutsu, together with expertise in meteorology and botany and training for the physical strength needed to endure hardships in the outback.
See also
Alone (TV show)
Bicycle touring
Bushcraft
Distress signal
Hazards of outdoor recreation
Mini survival kit
Survivalism
Ten Essentials
Woodcraft
References
Further reading
Mountaineering: The Freedom of the Hills; 8th Ed; Mountaineers Books; 596 pages; 1960 to 2010.
The Knowledge: How to Rebuild Our World from Scratch; Penguin Books; 352 pages; 2014.
External links
Media
Seizonjutsu - Ninja Survival Training Videos
Foraging
Biochemistry
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs as well as organism structure and function. Biochemistry is closely related to molecular biology, the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, functions, and interactions of biological macromolecules such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have been combined with problem-solving approaches from engineering to manipulate living systems in order to produce useful tools for research, industrial processes, and diagnosis and control of disease: the discipline of biotechnology.
History
At its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some argued that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others considered Eduard Buchner's first demonstration of a complex biochemical process alcoholic fermentation in cell-free extracts in 1897 to be the birth of biochemistry. Some might also point as its beginning to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism, or even earlier to the 18th century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F. Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists.
The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry. In 1877, Felix Hoppe-Seyler used the term ( in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry) where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg however is often cited to have coined the word in 1903, while some credited it to Franz Hofmeister.
It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life. In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level.
Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi) in the silencing of gene expression.
Starting materials: the chemical elements of life
Around two dozen chemical elements are essential to various kinds of biological life. Most rare elements on Earth are not needed by life (exceptions being selenium and iodine), while a few common ones (aluminum and titanium) are not used. Most organisms share element needs, but there are a few differences between plants and animals. For example, ocean algae use bromine, but land plants and animals do not seem to need any. All animals require sodium, but it is not an essential element for plants. Plants need boron and silicon, but animals may not (or may need only ultra-small amounts).
Just six elements—carbon, hydrogen, nitrogen, oxygen, calcium and phosphorus—make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list). In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more.
Biomolecules
The 4 main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small molecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble in larger complexes, often needed for biological activity.
Carbohydrates
Two of the main functions of carbohydrates are energy storage and providing structure. One of the common sugars known as glucose is a carbohydrate, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule; they are used to store energy and genetic information, as well as play important roles in cell to cell interactions and communications.
The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose (C6H12O6) is one of the most important carbohydrates; others include fructose (C6H12O6), the sugar commonly associated with the sweet taste of fruits, and deoxyribose (C5H10O4), a component of DNA. A monosaccharide can switch between acyclic (open-chain) form and a cyclic form. The open-chain form can be turned into a ring of carbon atoms bridged by an oxygen atom created from the carbonyl group of one end and the hydroxyl group of another. The cyclic molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose.
In these cyclic forms, the ring usually has 5 or 6 atoms. These forms are called furanoses and pyranoses, respectively—by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the carbon-carbon double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the hydroxyl on carbon 1 and the oxygen on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a 7-atom ring called heptoses are rare.
Two monosaccharides can be joined by a glycosidic or ester bond into a disaccharide through a dehydration reaction during which a molecule of water is released. The reverse reaction in which the glycosidic bond of a disaccharide is broken into two monosaccharides is termed hydrolysis. The best-known disaccharide is sucrose or ordinary sugar, which consists of a glucose molecule and a fructose molecule joined. Another important disaccharide is lactose found in milk, consisting of a glucose molecule and a galactose molecule. Lactose may be hydrolysed by lactase, and deficiency in this enzyme results in lactose intolerance.
When a few (around three to six) monosaccharides are joined, it is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses. Many monosaccharides joined form a polysaccharide. They can be joined in one long linear chain, or they may be branched. Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers. Cellulose is an important structural component of plant's cell walls and glycogen is used as a form of energy storage in animals.
Sugar can be characterized by having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom that can be in equilibrium with the open-chain aldehyde (aldose) or keto form (ketose). If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH-side-chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety forms a full acetal with the C4-OH group of glucose. Saccharose does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2).
Lipids
Lipids comprise a diverse range of molecules and to some extent are a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid derived phospholipids, sphingolipids, glycolipids, and terpenoids (e.g., retinoids and steroids). Some lipids are linear, open-chain aliphatic molecules, while others have ring structures. Some are aromatic (with a cyclic [ring] and planar [flat] structure) while others are not. Some are flexible, while others are rigid.
Lipids are usually made from one molecule of glycerol combined with other molecules. In triglycerides, the main group of bulk lipids, there is one molecule of glycerol and three fatty acids. Fatty acids are considered the monomer in that case, and may be saturated (no double bonds in the carbon chain) or unsaturated (one or more double bonds in the carbon chain).
Most lipids have some polar character in addition to being largely nonpolar. In general, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere –OH (hydroxyl or alcohol).
In the case of phospholipids, the polar groups are considerably larger and more polar, as described below.
Lipids are an integral part of our daily diet. Most oils and milk products that we use for cooking and eating like butter, cheese, ghee etc. are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken into fatty acids and glycerol, the final degradation products of fats and lipids. Lipids, especially phospholipids, are also used in various pharmaceutical products, either as co-solubilizers (e.g. in parenteral infusions) or else as drug carrier components (e.g. in a liposome or transfersome).
Proteins
Proteins are very large molecules—macro-biopolymers—made from monomers called amino acids. An amino acid consists of an alpha carbon atom attached to an amino group, –NH2, a carboxylic acid group, –COOH (although these exist as –NH3+ and –COO− under physiologic conditions), a simple hydrogen atom, and a side chain commonly denoted as "–R". The side chain "R" is different for each amino acid of which there are 20 standard ones. It is this "R" group that makes each amino acid different, and the properties of the side chains greatly influence the overall three-dimensional conformation of a protein. Some amino acids have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter. Amino acids can be joined via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues.
Proteins can have structural and/or functional roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules—they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. Antibodies are composed of heavy and light chains: two heavy chains are linked to two light chains through disulfide linkages between their amino acids. Antibodies are specific through variation based on differences in the N-terminal domain.
The enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. Virtually every reaction in a living cell requires an enzyme to lower the activation energy of the reaction. These molecules recognize specific reactant molecules called substrates; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a factor of 10^11 or more; a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole.
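As a quick back-of-the-envelope check of that claim (the 3,000-year and 10^11 figures come from the text above; the unit conversion is the only thing added), a minimal sketch:

```python
# Back-of-the-envelope check of the rate-enhancement figures quoted above.
seconds_per_year = 365.25 * 24 * 3600       # ~3.16e7 seconds in a year
uncatalyzed_time = 3000 * seconds_per_year  # ~9.5e10 s, i.e. about 3,000 years
rate_enhancement = 1e11                     # factor quoted in the text
catalyzed_time = uncatalyzed_time / rate_enhancement
print(f"{uncatalyzed_time:.2e} s uncatalyzed -> {catalyzed_time:.2f} s catalyzed")
# ~0.95 s, consistent with "less than a second with an enzyme"
```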
The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-...". Secondary structure is concerned with local morphology (morphology being the study of structure). Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet; some α-helices can be seen in schematic depictions of hemoglobin. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids. In fact, a single change can change the entire structure. The beta chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit.
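As a toy illustration of primary structure as a linear sequence, the sickle-cell substitution just described can be written as a single-character change in a one-letter amino acid string (the eight-residue fragment VHLTPEEK is the commonly quoted N-terminus of the hemoglobin beta chain; treat the snippet as illustrative rather than as an analysis tool):

```python
# Toy illustration: primary structure as a one-letter amino acid string.
# VHLTPEEK is the commonly quoted start of the hemoglobin beta chain;
# positions are 1-based, matching the text (Glu at position 6).
normal_beta = "VHLTPEEK"

def substitute(sequence: str, position: int, new_residue: str) -> str:
    """Replace the residue at a 1-based position with a new one-letter code."""
    return sequence[:position - 1] + new_residue + sequence[position:]

sickle_beta = substitute(normal_beta, 6, "V")  # Glu (E) -> Val (V) at position 6
print(normal_beta)   # VHLTPEEK
print(sickle_beta)   # VHLTPVEK
```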
Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine and then absorbed. They can then be joined to form new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to form all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can synthesize only half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Because they must be ingested, these are the essential amino acids. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce them in sufficient amounts for young, growing animals, and so these are often considered essential amino acids.
If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to form a protein.
A similar process is used to break down proteins: a protein is first hydrolyzed into its component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different tactics have evolved in different animals, depending on the animals' needs. Unicellular organisms release the ammonia into the environment. Likewise, bony fish can release ammonia into the water where it is quickly diluted. In general, mammals convert ammonia into urea via the urea cycle.
In order to determine whether two proteins are related, or in other words to decide whether they are homologous or not, scientists use sequence-comparison methods. Methods like sequence alignments and structural alignments are powerful tools that help scientists identify homologies between related molecules. The relevance of finding homologies among proteins goes beyond forming an evolutionary pattern of protein families. By finding how similar two protein sequences are, we acquire knowledge about their structure and therefore their function.
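As a minimal sketch of the simplest form of sequence comparison (percent identity over two sequences that are assumed to be already aligned to the same length; real homology searches additionally use substitution matrices, gap penalties, and significance statistics), one might compute:

```python
# Minimal sketch: percent identity between two already-aligned sequences.
# This is only the simplest comparison; real tools also handle gaps,
# substitution scores, and statistical significance.
def percent_identity(seq_a: str, seq_b: str) -> float:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b and a != "-")
    return 100.0 * matches / len(seq_a)

# Example: the normal and sickle beta-chain fragments from the Proteins section.
print(percent_identity("VHLTPEEK", "VHLTPVEK"))  # 87.5 (7 of 8 positions match)
```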
Nucleic acids
Nucleic acid, so called because of its prevalence in cellular nuclei, is the generic name for a family of biopolymers. They are complex, high-molecular-weight biochemical macromolecules that can convey genetic information in all living cells and viruses. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group.
The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). The phosphate group and the sugar of each nucleotide bond with each other to form the backbone of the nucleic acid, while the sequence of nitrogenous bases stores the information. The most common nitrogenous bases are adenine, cytosine, guanine, thymine, and uracil. The nitrogenous bases of each strand of a nucleic acid will form hydrogen bonds with certain other nitrogenous bases in a complementary strand of nucleic acid. Adenine binds with thymine (in DNA) or uracil (in RNA), thymine binds only with adenine, and cytosine and guanine can bind only with one another. Adenine-thymine and adenine-uracil pairs form two hydrogen bonds, while cytosine-guanine pairs form three.
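A minimal sketch of these pairing rules for DNA (the complement mapping and the hydrogen-bond counts of 2 for A-T and 3 for C-G follow the text above; the code itself is only illustrative):

```python
# Minimal sketch of DNA base-pairing: complementary strand and the number
# of hydrogen bonds per pair (A-T: 2, C-G: 3), as described above.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}
H_BONDS = {"A": 2, "T": 2, "C": 3, "G": 3}

def reverse_complement(strand: str) -> str:
    """Return the complementary strand read in the opposite (antiparallel) direction."""
    return "".join(PAIR[base] for base in reversed(strand))

def hydrogen_bonds(strand: str) -> int:
    """Total hydrogen bonds formed when the strand pairs with its complement."""
    return sum(H_BONDS[base] for base in strand)

print(reverse_complement("ATGC"))  # GCAT
print(hydrogen_bonds("ATGC"))      # 10 (2 + 2 + 3 + 3)
```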
Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate (ATP), the primary energy-carrier molecule found in all living organisms. Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA.
Metabolism
Carbohydrates as energy source
Glucose is an energy source in most life forms. For instance, polysaccharides are broken down into their monomers by enzymes (glycogen phosphorylase removes glucose residues from glycogen, a polysaccharide). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides.
Glycolysis (anaerobic)
Glucose is mainly metabolized by a very important ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate. This also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents, converting NAD+ (nicotinamide adenine dinucleotide, oxidized form) into NADH (nicotinamide adenine dinucleotide, reduced form). This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), the NAD+ is restored by converting the pyruvate to lactate (lactic acid) (e.g. in humans) or to ethanol plus carbon dioxide (e.g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway.
Aerobic
In aerobic cells with sufficient oxygen, as in most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide, generating another reducing equivalent as NADH. The two molecules acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (inner mitochondrial membrane in eukaryotes). Thus, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as proton gradient and converted to ATP via ATP synthase. This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), totaling to 32 molecules of ATP conserved per degraded glucose (two from glycolysis + two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen.
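A minimal sketch reproducing the ATP bookkeeping in the paragraph above (the per-carrier yields of 3 ATP per NADH and 2 ATP per quinol are those implied by the quoted figures; many textbooks instead use about 2.5 and 1.5, which gives a lower total of roughly 30):

```python
# Reproduces the ATP tally described above for one molecule of glucose.
# Per-carrier yields (3 per NADH, 2 per quinol) are those implied by the
# text; modern textbooks often use ~2.5 and ~1.5 instead.
substrate_level_atp = 2 + 2          # glycolysis + citric acid cycle
nadh_into_chain = 8                  # NADH fed to the respiratory chain, per the text
quinols = 2                          # reduced quinones formed via FADH2
oxidative_atp = nadh_into_chain * 3 + quinols * 2   # 24 + 4 = 28
total_atp = substrate_level_atp + oxidative_atp     # 32
print(total_atp)  # 32 ATP per glucose under these assumptions
```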
Gluconeogenesis
In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate.
Gluconeogenesis is the formation of glucose from noncarbohydrate precursors, such as fats and proteins. It occurs mainly when glycogen supplies in the liver are depleted. The pathway is essentially a reversal of glycolysis from pyruvate to glucose and can draw on many sources, such as amino acids, glycerol and Krebs cycle intermediates. Large-scale protein and fat catabolism usually occurs during starvation or certain endocrine disorders. The liver regenerates the glucose using this process. Gluconeogenesis is not simply the reverse of glycolysis, and it actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides. The combined pathway of glycolysis during exercise, transport of lactate via the bloodstream to the liver, subsequent gluconeogenesis and release of glucose back into the bloodstream is called the Cori cycle.
Relationship to other "molecular-scale" biological sciences
Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas developed in the fields of genetics, molecular biology, and biophysics. There is no sharply defined line between these disciplines. Biochemistry studies the chemistry required for biological activity of molecules; molecular biology studies their biological activity; genetics studies their heredity, which happens to be carried by their genome. This is shown in the following schematic, which depicts one possible view of the relationships between the fields:
Biochemistry is the study of the chemical substances and vital processes occurring in live organisms. Biochemists focus heavily on the role, function, and structure of biomolecules. The study of the chemistry behind biological processes and the synthesis of biologically active molecules are applications of biochemistry. Biochemistry studies life at the atomic and molecular level.
Genetics is the study of the effect of genetic differences in organisms. This can often be inferred from the absence of a normal component (e.g. one gene), as in the study of "mutants", organisms that lack one or more functional components with respect to the so-called "wild type" or normal phenotype. Genetic interactions (epistasis) can often confound simple interpretations of such "knockout" studies.
Molecular biology is the study of molecular underpinnings of the biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions. The central dogma of molecular biology, where genetic material is transcribed into RNA and then translated into protein, despite being oversimplified, still provides a good starting point for understanding the field. This concept has been revised in light of emerging novel roles for RNA.
Chemical biology seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules).
See also
Lists
Important publications in biochemistry (chemistry)
List of biochemistry topics
List of biochemists
List of biomolecules
See also
Astrobiology
Biochemistry (journal)
Biological Chemistry (journal)
Biophysics
Chemical ecology
Computational biomodeling
Dedicated bio-based chemical
EC number
Hypothetical types of biochemistry
International Union of Biochemistry and Molecular Biology
Metabolome
Metabolomics
Molecular biology
Molecular medicine
Plant biochemistry
Proteolysis
Small molecule
Structural biology
TCA cycle
Notes
References
Cited literature
Further reading
Fruton, Joseph S. Proteins, Enzymes, Genes: The Interplay of Chemistry and Biology. Yale University Press: New Haven, 1999.
Keith Roberts, Martin Raff, Bruce Alberts, Peter Walter, Julian Lewis and Alexander Johnson, Molecular Biology of the Cell
4th Edition, Routledge, March, 2002, hardcover, 1616 pp.
3rd Edition, Garland, 1994,
2nd Edition, Garland, 1989,
Kohler, Robert. From Medical Chemistry to Biochemistry: The Making of a Biomedical Discipline. Cambridge University Press, 1982.
External links
The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Biochemistry, 5th ed. Full text of Berg, Tymoczko, and Stryer, courtesy of NCBI.
SystemsX.ch – The Swiss Initiative in Systems Biology
Full text of Biochemistry by Kevin and Indira, an introductory biochemistry textbook.
Biotechnology
Molecular biology
Erythema
Erythema is redness of the skin or mucous membranes, caused by hyperemia (increased blood flow) in superficial capillaries. It occurs with any skin injury, infection, or inflammation. Examples of erythema not associated with pathology include nervous blushes.
Types
Erythema ab igne
Erythema chronicum migrans
Erythema induratum
Erythema infectiosum (or fifth disease)
Erythema marginatum
Erythema migrans
Erythema multiforme (EM)
Erythema nodosum
Erythema toxicum
Erythema elevatum diutinum
Erythema gyratum repens
Keratolytic winter erythema
Palmar erythema
Causes
It can be caused by infection, massage, electrical treatment, acne medication, allergies, exercise, solar radiation (sunburn), photosensitization, acute radiation syndrome, mercury toxicity, blister agents, niacin administration, or waxing and tweezing of the hairs—any of which can cause the affected capillaries to dilate, resulting in redness. Erythema is a common side effect of radiotherapy treatment due to patient exposure to ionizing radiation.
Diagnosis
Erythema disappears on finger pressure (blanching), whereas purpura or bleeding in the skin and pigmentation do not. There is no temperature elevation, unless it is associated with the dilation of arteries in the deeper layer of the skin.
See also
Hyperemia
Flushing (physiology)
List of cutaneous conditions
References
External links
Dermatologic terminology
Radiation health effects
Symptoms and signs: Skin and subcutaneous tissue
Human musculoskeletal system
The human musculoskeletal system (also known as the human locomotor system, and previously the activity system) is an organ system that gives humans the ability to move using their muscular and skeletal systems. The musculoskeletal system provides form, support, stability, and movement to the body.
It is made up of the bones of the skeleton, muscles, cartilage, tendons, ligaments, joints, and other connective tissue that supports and binds tissues and organs together. The musculoskeletal system's primary functions include supporting the body, allowing motion, and protecting vital organs. The skeletal portion of the system serves as the main storage system for calcium and phosphorus and contains critical components of the hematopoietic system.
This system describes how bones are connected to other bones and muscle fibers via connective tissue such as tendons and ligaments. The bones provide stability to the body. Muscles keep bones in place and also play a role in the movement of bones. To allow motion, different bones are connected by joints. Cartilage prevents the bone ends from rubbing directly onto each other. Muscles contract to move the bone attached at the joint.
There are, however, diseases and disorders that may adversely affect the function and overall effectiveness of the system. These diseases can be difficult to diagnose due to the close relation of the musculoskeletal system to other internal systems. The musculoskeletal system refers to the system having its muscles attached to an internal skeletal system and is necessary for humans to move to a more favorable position. Complex issues and injuries involving the musculoskeletal system are usually handled by a physiatrist (specialist in physical medicine and rehabilitation) or an orthopaedic surgeon.
Subsystems
Skeletal
The skeletal system serves many important functions: it provides the shape and form for the body, offers support and protection, allows bodily movement, produces blood for the body, and stores minerals. The number of bones in the human skeletal system is a controversial topic. Humans are born with over 300 bones; however, many bones fuse together between birth and maturity. As a result, an average adult skeleton consists of 206 bones. The number of bones varies according to the method used to derive the count. While some consider certain structures to be a single bone with multiple parts, others may see it as a single part with multiple bones. There are five general classifications of bones: long bones, short bones, flat bones, irregular bones, and sesamoid bones. The human skeleton is composed of both fused and individual bones supported by ligaments, tendons, muscles and cartilage. It is a complex structure with two distinct divisions: the axial skeleton, which includes the vertebral column, and the appendicular skeleton.
Function
The skeletal system serves as a framework for tissues and organs to attach themselves to. This system acts as a protective structure for vital organs. Major examples of this are the brain being protected by the skull and the lungs being protected by the rib cage.
Located in long bones are two types of bone marrow (yellow and red). The yellow marrow has fatty connective tissue and is found in the marrow cavity. During starvation, the body uses the fat in yellow marrow for energy. The red marrow of some bones is an important site of blood cell production, generating approximately 2.6 million red blood cells per second to replace existing cells that have been destroyed by the liver. Here all erythrocytes, platelets, and most leukocytes form in adults. From the red marrow, erythrocytes, platelets, and leukocytes migrate to the blood to perform their specialized tasks.
Another function of bones is the storage of certain minerals. Calcium and phosphorus are among the main minerals being stored. The importance of this storage "device" helps to regulate mineral balance in the bloodstream. When the fluctuation of minerals is high, these minerals are stored in the bone; when it is low they will be withdrawn from the bone.
Muscular
There are three types of muscles—cardiac, skeletal, and smooth. Smooth muscles are used to control the flow of substances within the lumens of hollow organs, and are not consciously controlled. Skeletal and cardiac muscles have striations that are visible under a microscope due to the components within their cells. Only skeletal and smooth muscles are part of the musculoskeletal system and only the muscles can move the body. Cardiac muscles are found in the heart and are used only to circulate blood; like the smooth muscles, these muscles are not under conscious control. Skeletal muscles are attached to bones and arranged in opposing groups around joints. Muscles are innervated, whereby nervous signals are communicated by nerves, which conduct electrical currents from the central nervous system and cause the muscles to contract.
Contraction initiation
In mammals, when a muscle contracts, a series of reactions occur. Muscle contraction is stimulated by the motor neuron sending a message to the muscles from the somatic nervous system. Depolarization of the motor neuron results in neurotransmitters being released from the nerve terminal. The space between the nerve terminal and the muscle cell is called the neuromuscular junction. These neurotransmitters diffuse across the synapse and bind to specific receptor sites on the cell membrane of the muscle fiber. When enough receptors are stimulated, an action potential is generated and the permeability of the sarcolemma is altered. This process is known as initiation.
Tendons
A tendon is a tough, flexible band of fibrous connective tissue that connects muscles to bones. The extra-cellular connective tissue between muscle fibers binds to tendons at the distal and proximal ends, and the tendon binds to the periosteum of individual bones at the muscle's origin and insertion. As muscles contract, tendons transmit the forces to the relatively rigid bones, pulling on them and causing movement. Tendons can stretch substantially, allowing them to function as springs during locomotion, thereby saving energy.
Joints, ligaments and bursae
Joints are structures that connect individual bones and may allow bones to move against each other to cause movement. There are three divisions of joints: diarthroses, which allow extensive mobility between two or more articular heads; amphiarthroses, which allow some movement; and synarthroses, or false joints, which are predominantly fibrous and allow little or no movement. Synovial joints, joints that are not directly joined, are lubricated by a solution called synovial fluid that is produced by the synovial membranes. This fluid lowers the friction between the articular surfaces and is kept within an articular capsule, binding the joint with its taut tissue.
Ligaments
A ligament is a small band of dense, white, fibrous elastic tissue. Ligaments connect the ends of bones together to form a joint. Most ligaments limit dislocation or prevent certain movements that may cause fractures. Because they have only limited elasticity, they progressively lengthen when placed under tension; when this occurs, the ligament becomes susceptible to rupture, resulting in an unstable joint.
Ligaments also restrict some actions: movements such as hyperextension and hyperflexion are limited by ligaments to an extent, and ligaments prevent movement in certain directions.
Bursae
A bursa is a small fluid-filled sac made of white fibrous tissue and lined with synovial membrane. A bursa may also be formed by a synovial membrane that extends outside of the joint capsule. It provides a cushion between bones and tendons or muscles around a joint; bursae are filled with synovial fluid and are found around almost every major joint of the body.
Clinical significance
Because many other body systems, including the vascular, nervous, and integumentary systems, are interrelated, disorders of one of these systems may also affect the musculoskeletal system and complicate the diagnosis of the disorder's origin. Diseases of the musculoskeletal system mostly encompass functional disorders or motion discrepancies; the level of impairment depends specifically on the problem and its severity. In a study of hospitalizations in the United States, the most common inpatient OR procedures in 2012 involved the musculoskeletal system: knee arthroplasty, laminectomy, hip replacement, and spinal fusion.
Articular (of or pertaining to the joints) disorders are the most common. However, also among the diagnoses are: primary muscular diseases, neurologic (related to the medical science that deals with the nervous system and disorders affecting it) deficits, toxins, endocrine abnormalities, metabolic disorders, infectious diseases, blood and vascular disorders, and nutritional imbalances.
Disorders of muscles from another body system can bring about irregularities such as: impairment of ocular motion and control, respiratory dysfunction, and bladder malfunction. Complete paralysis, paresis, or ataxia may be caused by primary muscular dysfunctions of infectious or toxic origin; however, the primary disorder is usually related to the nervous system, with the muscular system acting as the effector organ, an organ capable of responding to a stimulus, especially a nerve impulse.
One understated disorder that begins during pregnancy is pelvic girdle pain. It is complex and multi-factorial, and is likely to be represented by a series of sub-groups with pain driven by factors ranging from peripheral or central nervous system mechanisms, altered laxity or stiffness of muscles, and laxity or injury of tendinous and ligamentous structures, to maladaptive body mechanics.
See also
Skeletal muscles of the human body
Skeletal muscle
Muscular system
References
Dance science
Stress (biology)
Stress, whether physiological, biological or psychological, is an organism's response to a stressor such as an environmental condition. When stressed by stimuli that alter an organism's environment, multiple systems respond across the body. In humans and most mammals, the autonomic nervous system and hypothalamic-pituitary-adrenal (HPA) axis are the two major systems that respond to stress. Two well-known hormones that humans produce during stressful situations are adrenaline and cortisol.
The sympathoadrenal medullary (SAM) axis may activate the fight-or-flight response through the sympathetic nervous system, which dedicates energy to the bodily systems most relevant to acute adaptation to stress, while the parasympathetic nervous system returns the body to homeostasis.
The second major physiological stress-response center, the HPA axis, regulates the release of cortisol, which influences many bodily functions such as metabolic, psychological and immunological functions. The SAM and HPA axes are regulated by several brain regions, including the limbic system, prefrontal cortex, amygdala, hypothalamus, and stria terminalis. Through these mechanisms, stress can alter memory functions, reward, immune function, metabolism and susceptibility to diseases.
Disease risk is particularly pertinent to mental illnesses, whereby chronic or severe stress remains a common risk factor for several mental illnesses.
Psychology
Acute situations in which the stress experienced is severe can cause psychological change to the detriment of the individual's well-being, such that symptomatic derealization and depersonalization, anxiety and hyperarousal are experienced. The International Classification of Diseases includes a group of mental and behavioral disorders whose aetiology lies in the reaction to severe stress and the consequent adaptive response. Chronic stress, and a lack of coping resources available to or used by an individual, can often lead to the development of psychological issues such as delusions, depression and anxiety (see below for further information). Chronic stress also causes brain atrophy, the loss of neurons and the connections between them, affecting the parts of the brain that are important for learning, responding to stressors and cognitive flexibility.
Chronic stressors may not be as intense as acute stressors such as natural disaster or a major accident, but persist over longer periods of time and tend to have a more negative effect on health because they are sustained and thus require the body's physiological response to occur daily. This depletes the body's energy more quickly and usually occurs over long periods of time, especially when these microstressors cannot be avoided (i.e. stress of living in a dangerous neighborhood). See allostatic load for further discussion of the biological process by which chronic stress may affect the body. For example, studies have found that caregivers, particularly those of dementia patients, have higher levels of depression and slightly worse physical health than non-caregivers.
When humans are under chronic stress, permanent changes in their physiological, emotional, and behavioral responses may occur. Chronic stress can include events such as caring for a spouse with dementia, or may result from brief focal events that have long term effects, such as experiencing a sexual assault. Studies have also shown that psychological stress may directly contribute to the disproportionately high rates of coronary heart disease morbidity and mortality and its etiologic risk factors. Specifically, acute and chronic stress have been shown to raise serum lipids and are associated with clinical coronary events.
However, it is possible for individuals to exhibit hardiness—a term referring to the ability to be both chronically stressed and healthy. Even though psychological stress is often connected with illness or disease, most healthy individuals can still remain disease-free after being confronted with chronic stressful events. This suggests that there are individual differences in vulnerability to the potential pathogenic effects of stress; individual differences in vulnerability arise due to both genetic and psychological factors. In addition, the age at which the stress is experienced can dictate its effect on health. Research suggests chronic stress at a young age can have lifelong effects on the biological, psychological, and behavioral responses to stress later in life.
Etymology and historical usage
The term "stress" had none of its contemporary connotations before the 1920s. It is a form of the Middle English destresse, derived via Old French from the Latin stringere, "to draw tight". The word had long been in use in physics to refer to the internal distribution of a force exerted on a material body, resulting in strain. In the 1920s and '30s, biological and psychological circles occasionally used "stress" to refer to a physiological or environmental perturbation that could cause physiological and mental "strain". The amount of strain in reaction to stress depends on the resilience. Excessive strain would appear as illness.
Walter Cannon used it in 1926 to refer to external factors that disrupted what he called homeostasis. But "...stress as an explanation of lived experience is absent from both lay and expert life narratives before the 1930s". Physiological stress represents a wide range of physical responses that occur as a direct effect of a stressor causing an upset in the homeostasis of the body. Upon immediate disruption of either psychological or physical equilibrium the body responds by stimulating the nervous, endocrine, and immune systems. The reaction of these systems causes a number of physical changes that have both short- and long-term effects on the body.
The Holmes and Rahe stress scale was developed as a method of assessing the risk of disease from life changes. The scale lists both positive and negative changes that elicit stress. These include things such as a major holiday or marriage, or death of a spouse and firing from a job.
Biological need for equilibrium
Homeostasis is a concept central to the idea of stress. In biology, most biochemical processes strive to maintain equilibrium (homeostasis), a steady state that exists more as an ideal and less as an achievable condition. Environmental factors, internal or external stimuli, continually disrupt homeostasis; an organism's present condition is a state of constant flux moving about a homeostatic point that is that organism's optimal condition for living. Factors causing an organism's condition to diverge too far from homeostasis can be experienced as stress. A life-threatening situation such as a major physical trauma or prolonged starvation can greatly disrupt homeostasis. On the other hand, an organism's attempt at restoring conditions back to or near homeostasis, often consuming energy and natural resources, can also be interpreted as stress. The brain cannot sustain equilibrium under prolonged, unresolved demands; the accumulation of such an ever-deepening deficit is what is meant by chronic stress.
The ambiguity in defining this phenomenon was first recognized by Hans Selye (1907–1982) in 1926. In 1951 a commentator loosely summarized Selye's view of stress as something that "...in addition to being itself, was also the cause of itself, and the result of itself".
First to use the term in a biological context, Selye continued to define stress as "the non-specific response of the body to any demand placed upon it". Neuroscientists such as Bruce McEwen and Jaap Koolhaas believe that stress, based on years of empirical research, "should be restricted to conditions where an environmental demand exceeds the natural regulatory capacity of an organism". The brain does not cope well with a harsh family environment; it needs a degree of stability in its relationships with other people. People who report being raised in harsh environments involving verbal and physical aggression show greater immune dysfunction and metabolic dysfunction. Indeed, in 1995 Toates already defined stress as a "chronic state that arises only when defense mechanisms are either being chronically stretched or are actually failing," while according to Ursin (1988) stress results from an inconsistency between expected events ("set value") and perceived events ("actual value") that cannot be resolved satisfactorily, which also puts stress into the broader context of cognitive-consistency theory.
Biological background
Stress can have many profound effects on the human biological systems. Biology primarily attempts to explain major concepts of stress using a stimulus-response paradigm, broadly comparable to how a psychobiological sensory system operates. The central nervous system (brain and spinal cord) plays a crucial role in the body's stress-related mechanisms. Whether one should interpret these mechanisms as the body's response to a stressor or embody the act of stress itself is part of the ambiguity in defining what exactly stress is.
The central nervous system works closely with the body's endocrine system to regulate these mechanisms. The sympathetic nervous system becomes primarily active during a stress response, regulating many of the body's physiological functions in ways that ought to make an organism more adaptive to its environment. Below there follows a brief biological background of neuroanatomy and neurochemistry and how they relate to stress.
Stress, either severe, acute stress or chronic low-grade stress may induce abnormalities in three principal regulatory systems in the body: serotonin systems, catecholamine systems, and the hypothalamic-pituitary-adrenocortical axis. Aggressive behavior has also been associated with abnormalities in these systems.
Biology of stress
The brain endocrine interactions are relevant in the translation of stress into physiological and psychological changes. The autonomic nervous system (ANS), as mentioned above, plays an important role in translating stress into a response. The ANS responds reflexively to both physical stressors (for example baroreception), and to higher level inputs from the brain.
The ANS is composed of the parasympathetic nervous system and sympathetic nervous system, two branches that are both tonically active with opposing activities. The ANS directly innervates tissue through the postganglionic nerves, which is controlled by preganglionic neurons originating in the intermediolateral cell column. The ANS receives inputs from the medulla, hypothalamus, limbic system, prefrontal cortex, midbrain and monoamine nuclei.
The activity of the sympathetic nervous system drives what is called the "fight or flight" response. The fight or flight response to emergency or stress involves mydriasis, increased heart rate and force of contraction, vasoconstriction, bronchodilation, glycogenolysis, gluconeogenesis, lipolysis, sweating, decreased motility of the digestive system, secretion of epinephrine from the adrenal medulla and cortisol from the adrenal cortex, and relaxation of the bladder wall. The parasympathetic "rest and digest" response involves a return toward homeostasis, with miosis, bronchoconstriction, increased activity of the digestive system, and contraction of the bladder walls. Complex relationships between protective and vulnerability factors have been observed in the effect of childhood home stress on psychological illness, cardiovascular illness and adaptation. ANS-related mechanisms are thought to contribute to increased risk of cardiovascular disease after major stressful events.
The HPA axis is a neuroendocrine system that mediates a stress response. Neurons in the hypothalamus, particularly the paraventricular nucleus (PVN), release vasopressin and corticotropin-releasing hormone (CRH), which travel through the hypophysial portal vessels to the anterior pituitary gland, where they bind to the corticotropin-releasing hormone receptor. Multiple CRH peptides have been identified, and receptors have been identified in multiple areas of the brain, including the amygdala. CRH is the main regulatory molecule of the release of ACTH.
The secretion of ACTH into the systemic circulation allows it to bind to and activate the melanocortin receptor in the adrenal cortex, where it stimulates the release of steroid hormones. Steroid hormones bind to glucocorticoid receptors in the brain, providing negative feedback by reducing ACTH release. Some evidence supports a second long-term feedback that is not sensitive to cortisol secretion. The PVN of the hypothalamus receives inputs from the nucleus of the solitary tract and the lamina terminalis. Through these inputs, it receives and can respond to changes in the blood.
The PVN innervation from the brain stem nuclei, particularly the noradrenergic nuclei stimulate CRH release. Other regions of the hypothalamus both directly and indirectly inhibit HPA axis activity. Hypothalamic neurons involved in regulating energy balance also influence HPA axis activity through the release of neurotransmitters such as neuropeptide Y, which stimulates HPA axis activity. Generally, the amygdala stimulates, and the prefrontal cortex and hippocampus attenuate, HPA axis activity; however, complex relationships do exist between the regions.
The immune system may be heavily influenced by stress. The sympathetic nervous system innervates various immunological structures, such as bone marrow and the spleen, allowing it to regulate immune function. The adrenergic substances released by the sympathetic nervous system can also bind to and influence various immunological cells, further providing a connection between the systems. The HPA axis ultimately results in the release of cortisol, which generally has immunosuppressive effects. However, the effect of stress on the immune system is disputed, and various models have been proposed in an attempt to account for both the supposedly "immunodeficiency"-linked diseases and diseases involving hyperactivation of the immune system. One model proposed to account for this suggests a push towards an imbalance of cellular immunity (Th1) and humoral immunity (Th2). The proposed imbalance involves hyperactivity of the Th2 system, leading to some forms of immune hypersensitivity while also increasing the risk of some illnesses associated with decreased immune system function, such as infection and cancer.
Effects of chronic stress
Chronic stress is a term sometimes used to differentiate it from acute stress. Definitions differ, and may be along the lines of continual activation of the stress response, stress that causes an allostatic shift in bodily functions, or just as "prolonged stress". For example, results of one study demonstrated that individuals who reported relationship conflict lasting one month or longer have a greater risk of developing illness and show slower wound healing. It can also reduce the benefits of receiving common vaccines. Similarly, the effects that acute stressors have on the immune system may be increased when there is perceived stress and/or anxiety due to other events. For example, students who are taking exams show weaker immune responses if they also report stress due to daily hassles. While responses to acute stressors typically do not impose a health burden on young, healthy individuals, chronic stress in older or unhealthy individuals may have long-term effects that are detrimental to health.
Immunological
Acute time-limited stressors, or stressors that last less than two hours, result in an up-regulation of natural immunity and a down-regulation of specific immunity. This type of stress is associated with an increase in granulocytes, natural killer cells, IgA and interleukin 6, and an increase in cell cytotoxicity. Brief naturalistic stressors elicit a shift from Th1 (cellular) to Th2 (humoral) immunity, along with decreased T-cell proliferation and natural killer cell cytotoxicity. Stressful event sequences do not elicit a consistent immune response, although some observations have been reported, such as decreased T-cell proliferation and cytotoxicity, increases or decreases in natural killer cell cytotoxicity, and an altered response to the mitogen PHA. Chronic stress elicits a shift toward Th2 immunity, as well as decreased interleukin 2, T-cell proliferation, and antibody response to the influenza vaccine. Distant stressors do not consistently elicit a change in immune function. Long-lasting, high-impact chronic stress is also associated with greater immune dysfunction and metabolic dysfunction. Studies indicate that people who are continuously in stressful situations are more likely to become ill, and some claim that, under stress, the body metabolizes food in a way that effectively adds extra calories to a meal regardless of its nutritional value.
Infectious
Some studies have observed an increased risk of upper respiratory tract infection during chronic life stress. In patients with HIV, increased life stress and cortisol were associated with poorer progression of HIV. Studies have also provided evidence that increased levels of stress can reactivate latent herpes viruses.
Chronic disease
A link has been suggested between chronic stress and cardiovascular disease. Stress appears to play a role in hypertension, and may further predispose people to other conditions associated with hypertension. Stress may precipitate abuse of drugs and/or alcohol. Stress may also contribute to aging and chronic diseases in aging, such as depression and metabolic disorders.
The immune system also plays a role in stress and the early stages of wound healing. It is responsible for preparing the tissue for repair and promoting recruitment of certain cells to the wound area. Consistent with the fact that stress alters the production of cytokines, Graham et al. found that chronic stress associated with care giving for a person with Alzheimer's disease leads to delayed wound healing. Results indicated that biopsy wounds healed 25% more slowly in the chronically stressed group, or those caring for a person with Alzheimer's disease.
Development
Chronic stress has also been shown to impair developmental growth in children by lowering the pituitary gland's production of growth hormone, as in children raised in a home environment involving serious marital discord, alcoholism, or child abuse. Chronic stress is also associated with many illnesses and health problems beyond mental ones. Severe chronic stress sustained over long periods can increase the chance of developing illnesses such as diabetes, cancer, depression, heart disease and Alzheimer's disease. More generally, prenatal life, infancy, childhood, and adolescence are critical periods in which vulnerability to stressors is particularly high; stress during these periods can lead to psychiatric and physical diseases with long-term impacts on an individual.
Psychopathology
Chronic stress affects the parts of the brain where memories are processed and stored. When people feel stressed, stress hormones are over-secreted, which affects the brain. This secretion consists of glucocorticoids, including cortisol, steroid hormones released by the adrenal gland; although this can increase the storage of flashbulb memories, it decreases long-term potentiation (LTP). The hippocampus is important for storing certain kinds of memories, and damage to the hippocampus can cause trouble in storing new memories, but old memories, those stored before the damage, are not lost. High cortisol levels can also be tied to deterioration of the hippocampus and the decline of memory that many older adults start to experience with age. These mechanisms and processes may therefore contribute to age-related disease, or originate risk for earlier-onset disorders. For instance, extreme stress (e.g. trauma) is a requisite factor in producing stress-related disorders such as post-traumatic stress disorder.
Chronic stress also shifts learning, forming a preference for habit based learning, and decreased task flexibility and spatial working memory, probably through alterations of the dopaminergic systems. Stress may also increase reward associated with food, leading to weight gain and further changes in eating habits. Stress may contribute to various disorders, such as fibromyalgia, chronic fatigue syndrome, depression, as well as other mental illnesses and functional somatic syndromes.
Psychological concepts
Eustress
In 1975, Selye published a model dividing stress into eustress and distress. Where stress enhances function (physical or mental, such as through strength training or challenging work), it may be considered eustress. Persistent stress that is not resolved through coping or adaptation, deemed distress, may lead to anxiety or withdrawal (depression) behavior.
The difference between experiences that result in eustress and those that result in distress is determined by the disparity between an experience (real or imagined) and personal expectations, and resources to cope with the stress. Alarming experiences, either real or imagined, can trigger a stress response.
Coping
Responses to stress include adaptation, psychological coping such as stress management, anxiety, and depression. Over the long term, distress can lead to diminished health and/or increased propensity to illness; to avoid this, stress must be managed.
Stress management encompasses techniques intended to equip a person with effective coping mechanisms for dealing with psychological stress, with stress defined as a person's physiological response to an internal or external stimulus that triggers the fight-or-flight response. Stress management is effective when a person uses strategies to cope with or alter stressful situations.
There are several ways of coping with stress, such as controlling the source of stress or learning to set limits and to say "no" to some of the demands that bosses or family members may make.
A person's capacity to tolerate the source of stress may be increased by thinking about another topic such as a hobby, listening to music, or spending time in a wilderness.
A way to control stress is first dealing with what is causing the stress if it is something the individual has control over. Other methods to control stress and reduce it can be: to not procrastinate and leave tasks for the last minute, do things you like, exercise, do breathing routines, go out with friends, and take a break. Having support from a loved one also helps a lot in reducing stress.
One study showed that the power of having support from a loved one, or just having social support, lowered stress in individual subjects. Painful shocks were applied to married women's ankles. In some trials women were able to hold their husband's hand, in other trials they held a stranger's hand, and then held no one's hand. When the women were holding their husband's hand, the response was reduced in many brain areas. When holding the stranger's hand the response was reduced a little, but not as much as when they were holding their husband's hand. Social support helps reduce stress and even more so if the support is from a loved one.
Cognitive appraisal
Lazarus argued that, in order for a psychosocial situation to be stressful, it must be appraised as such. He argued that cognitive processes of appraisal are central in determining whether a situation is potentially threatening, constitutes a harm/loss or a challenge, or is benign.
Both personal and environmental factors influence this primary appraisal, which then triggers the selection of coping processes. Problem-focused coping is directed at managing the problem, whereas emotion-focused coping processes are directed at managing the negative emotions. Secondary appraisal refers to the evaluation of the resources available to cope with the problem, and may alter the primary appraisal.
In other words, primary appraisal includes the perception of how stressful the problem is and the secondary appraisal of estimating whether one has more than or less than adequate resources to deal with the problem that affects the overall appraisal of stressfulness. Further, coping is flexible in that, in general, the individual examines the effectiveness of the coping on the situation; if it is not having the desired effect, they will, in general, try different strategies.
Assessment
Health risk factors
Both negative and positive stressors can lead to stress. The intensity and duration of stress change depending on the circumstances and emotional condition of the person experiencing it (Arnold & Boggs, 2007). Some common categories and examples of stressors include:
Sensory input such as pain, bright light, noise, temperatures, or environmental issues such as a lack of control over environmental circumstances, such as food, air and/or water quality, housing, health, freedom, or mobility.
Social issues can also cause stress, such as struggles with conspecific or difficult individuals and social defeat, or relationship conflict, deception, or break ups, and major events such as birth and deaths, marriage, and divorce.
Life experiences such as poverty, unemployment, clinical depression, obsessive compulsive disorder, heavy drinking, or insufficient sleep can also cause stress. Students and workers may face performance pressure stress from exams and project deadlines.
Adverse experiences during development (e.g. prenatal exposure to maternal stress, poor attachment histories, sexual abuse) are thought to contribute to deficits in the maturity of an individual's stress response systems. One evaluation of the different stresses in people's lives is the Holmes and Rahe stress scale.
General adaptation syndrome
Physiologists define stress as how the body reacts to a stressor - a stimulus, real or imagined. Acute stressors affect an organism in the short term; chronic stressors over the longer term. The general adaptation syndrome (GAS), developed by Hans Selye, is a profile of how organisms respond to stress; GAS is characterized by three phases: a nonspecific alarm mobilization phase, which promotes sympathetic nervous system activity; a resistance phase, during which the organism makes efforts to cope with the threat; and an exhaustion phase, which occurs if the organism fails to overcome the threat and depletes its physiological resources.
Stage 1
Alarm is the first stage, which is divided into two phases: the shock phase and the antishock phase.
Shock phase: During this phase, the body can endure changes such as hypovolemia, hypoosmolarity, hyponatremia, hypochloremia, hypoglycemia—the stressor effect. This phase resembles Addison's disease. The organism's resistance to the stressor drops temporarily below the normal range and some level of shock (e.g. circulatory shock) may be experienced.
Antishock phase: When the threat or stressor is identified or realized, the body starts to respond and is in a state of alarm. During this stage, the locus coeruleus and sympathetic nervous system activate the production of catecholamines including adrenaline, engaging the popularly-known fight-or-flight response. Adrenaline temporarily provides increased muscular tonus, increased blood pressure due to peripheral vasoconstriction and tachycardia, and increased glucose in blood. There is also some activation of the HPA axis, producing glucocorticoids (cortisol, aka the S-hormone or stress-hormone).
Stage 2
Resistance is the second stage. During this stage, increased secretion of glucocorticoids intensifies the body's systemic response. Glucocorticoids can increase the concentration of glucose, fat, and amino acid in blood. In high doses, one glucocorticoid, cortisol, begins to act similarly to a mineralocorticoid (aldosterone) and brings the body to a state similar to hyperaldosteronism. If the stressor persists, it becomes necessary to attempt some means of coping with the stress. The body attempts to respond to stressful stimuli, but after prolonged activation, the body's chemical resources will be gradually depleted, leading to the final stage.
Stage 3
The third stage could be either exhaustion or recovery:
Recovery stage follows when the system's compensation mechanisms have successfully overcome the stressor effect (or have completely eliminated the factor which caused the stress). The high glucose, fat and amino acid levels in blood prove useful for anabolic reactions, restoration of homeostasis and regeneration of cells.
Exhaustion is the alternative third stage in the GAS model. At this point, all of the body's resources are eventually depleted and the body is unable to maintain normal function. The initial autonomic nervous system symptoms may reappear (panic attacks, muscle aches, sore eyes, difficulty breathing, fatigue, heartburn, high blood pressure, and difficulty sleeping, etc.). If stage three is extended, long-term damage may result (prolonged vasoconstriction results in ischemia which in turn leads to cell necrosis), as the body's immune system becomes exhausted, and bodily functions become impaired, resulting in decompensation.
The result can manifest itself in obvious illnesses, such as general trouble with the digestive system (e.g. occult bleeding, melena, constipation/obstipation), diabetes, or even cardiovascular problems (angina pectoris), along with clinical depression and other mental illnesses.
History in research
The current usage of the word stress arose out of Hans Selye's 1930s experiments. He started to use the term to refer not just to the agent but to the state of the organism as it responded and adapted to the environment. His theories of a universal non-specific stress response attracted great interest and contention in academic physiology and he undertook extensive research programs and publication efforts.
While the work attracted continued support from advocates of psychosomatic medicine, many in experimental physiology concluded that his concepts were too vague and unmeasurable. During the 1950s, Selye turned away from the laboratory to promote his concept through popular books and lecture tours. He wrote for both non-academic physicians and, in an international bestseller entitled The Stress of Life, for the general public.
A broad biopsychosocial concept of stress and adaptation offered the promise of helping everyone achieve health and happiness by successfully responding to changing global challenges and the problems of modern civilization. Selye coined the term "eustress" for positive stress, by contrast to distress. He argued that all people have a natural urge and need to work for their own benefit, a message that found favor with industrialists and governments. He also coined the term stressor to refer to the causative event or stimulus, as opposed to the resulting state of stress.
Selye was in contact with the tobacco industry from 1958 and they were undeclared allies in litigation and the promotion of the concept of stress, clouding the link between smoking and cancer, and portraying smoking as a "diversion", or in Selye's concept a "deviation", from environmental stress.
From the late 1960s, academic psychologists started to adopt Selye's concept; they sought to quantify "life stress" by scoring "significant life events", and a large amount of research was undertaken to examine links between stress and disease of all kinds. By the late 1970s, stress had become the medical area of greatest concern to the general population, and more basic research was called for to better address the issue. There was also renewed laboratory research into the neuroendocrine, molecular, and immunological bases of stress, conceived as a useful heuristic not necessarily tied to Selye's original hypotheses. The US military became a key center of stress research, attempting to understand and reduce combat neurosis and psychiatric casualties.
The psychiatric diagnosis post-traumatic stress disorder (PTSD) was coined in the mid-1970s, in part through the efforts of anti-Vietnam War activists and the Vietnam Veterans Against the War, and Chaim F. Shatan. The condition was added to the Diagnostic and Statistical Manual of Mental Disorders as posttraumatic stress disorder in 1980. PTSD was considered a severe and ongoing emotional reaction to an extreme psychological trauma, and as such often associated with soldiers, police officers, and other emergency personnel. The stressor may involve threat to life (or viewing the actual death of someone else), serious physical injury, or threat to physical or psychological integrity. In some cases, it can also be from profound psychological and emotional trauma, apart from any actual physical harm or threat. Often, however, the two are combined.
By the 1990s, "stress" had become an integral part of modern scientific understanding in all areas of physiology and human functioning, and one of the great metaphors of Western life. Focus grew on stress in certain settings, such as workplace stress, and stress management techniques were developed. The term also became a euphemism, a way of referring to problems and eliciting sympathy without being explicitly confessional, just "stressed out". It came to cover a huge range of phenomena from mild irritation to the kind of severe problems that might result in a real breakdown of health. In popular usage, almost any event or situation between these extremes could be described as stressful.
The American Psychological Association's 2015 Stress In America Study found that nationwide stress is on the rise and that the three leading sources of stress were "money", "family responsibility", and "work".
See also
Autonomic nervous system
Defense physiology
HPA axis
Inflammation
Plant stress measurement
Trier social stress test
Xenohormesis
Stress in early childhood
Weathering hypothesis
Endorphins
References
External links
The American Institute of Stress
"Research on Work-Related Stress", European Agency for Safety and Health at Work (EU-OSHA)
Coping With Stress
Endocrine system
Sympathetic nervous system
Electrolyte imbalance
Electrolyte imbalance, or water-electrolyte imbalance, is an abnormality in the concentration of electrolytes in the body. Electrolytes play a vital role in maintaining homeostasis in the body. They help to regulate heart and neurological function, fluid balance, oxygen delivery, acid–base balance and much more. Electrolyte imbalances can develop by consuming too little or too much electrolyte as well as excreting too little or too much electrolyte. Examples of electrolytes include calcium, chloride, magnesium, phosphate, potassium, and sodium.
Electrolyte disturbances are involved in many disease processes and are an important part of patient management in medicine. The causes, severity, treatment, and outcomes of these disturbances can differ greatly depending on the implicated electrolyte. The most serious electrolyte disturbances involve abnormalities in the levels of sodium, potassium or calcium. Other electrolyte imbalances are less common and often occur in conjunction with major electrolyte changes. The kidney is the most important organ in maintaining appropriate fluid and electrolyte balance, but other factors such as hormonal changes and physiological stress play a role.
Overview
Anions and cations
Calcium, magnesium, potassium, and sodium ions are cations (+), while chloride, and phosphate ions are anions (−).
Causes
Chronic laxative abuse or severe diarrhea or vomiting can lead to dehydration and electrolyte imbalance.
Malnutrition
People with malnutrition are at especially high risk for an electrolyte imbalance. Severe electrolyte imbalances must be treated carefully as there are risks with overcorrecting too quickly, which can result in arrhythmias, brain herniation, or refeeding syndrome depending on the cause of imbalance.
General function
Electrolytes are important because they are what cells (especially nerve, heart and muscle cells) use to maintain voltages across their cell membranes. Electrolytes have different functions, and an important one is to carry electrical impulses between cells. Kidneys work to keep the electrolyte concentrations in blood constant despite changes in the body. For example, during heavy exercise, electrolytes are lost in sweat, particularly in the form of sodium and potassium. The kidneys can also generate dilute urine to balance sodium levels. These electrolytes must be replaced to keep the electrolyte concentrations of the body fluids constant. Hyponatremia, or low sodium, is the most commonly seen type of electrolyte imbalance.
Treatment of electrolyte imbalance depends on the specific electrolyte involved and whether the levels are too high or too low. The level of aggressiveness of treatment and choice of treatment may change depending on the severity of the disturbance. If the levels of an electrolyte are too low, a common response to electrolyte imbalance may be to prescribe supplementation. However, if the electrolyte involved is sodium, the issue is often water excess rather than sodium deficiency. Supplementation for these people may correct the electrolyte imbalance but at the expense of volume overload. For newborn children, this has serious risks. Because each individual electrolyte affects physiological function differently, they must be considered separately when discussing causes, treatment, and complications.
Calcium
Though calcium is the most plentiful electrolyte in the body, a large percentage of it is used to form the bones. It is mainly absorbed and excreted through the GI system. The majority of calcium resides extracellularly, and it is crucial for the function of neurons, muscle cells, function of enzymes, and coagulation. The normal range for calcium concentration in the body is 8.5 - 10.5 mg/dL. The parathyroid gland is responsible for sensing changes in calcium concentration and regulating the electrolyte with parathyroid hormone.
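As a rough illustration of how such reference limits are applied, the sketch below classifies a total serum calcium value against the 8.5 - 10.5 mg/dL range quoted above. It is a hypothetical helper for illustration only, not a clinical decision tool, and the function name and threshold handling are assumptions.

```python
# Hypothetical illustration: compare a total serum calcium value (mg/dL)
# against the 8.5-10.5 mg/dL reference range quoted in the text.
def classify_calcium(ca_mg_dl: float) -> str:
    if ca_mg_dl < 8.5:
        return "hypocalcemia"
    if ca_mg_dl > 10.5:
        # the text notes severe symptoms may appear above roughly 14 mg/dL
        return "hypercalcemia"
    return "within the quoted normal range"

print(classify_calcium(9.6))    # within the quoted normal range
print(classify_calcium(11.2))   # hypercalcemia
```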
Hypercalcemia
Hypercalcemia describes when the concentration of calcium in the blood is too high. This occurs above 10.5 mg/dL.
Causes
The most common causes of hypercalcemia are certain types of cancer, hyperparathyroidism, hyperthyroidism, pheochromocytoma, excessive ingestion of vitamin D, sarcoidosis, and tuberculosis. Hyperparathyroidism and malignancy are the predominant causes. It can also be caused by muscle cell breakdown, prolonged immobilization, and dehydration.
Symptoms
The predominant symptoms of hypercalcemia are abdominal pain, constipation, extreme thirst, excessive urination, kidney stones, nausea and vomiting. In severe cases where the calcium concentration is >14 mg/dL, individuals may experience confusion, altered mental status, coma, and seizure.
Treatment
Primary treatment of hypercalcemia consists of administering IV fluids. If the hypercalcemia is severe and/or associated with cancer, it may be treated with bisphosphonates. For very severe cases, hemodialysis may be considered for rapid removal of calcium from the blood.
Hypocalcemia
Hypocalcemia describes when calcium levels are too low in the blood, usually less than 8.5 mg/dL.
Causes
Hypoparathyroidism and vitamin D deficiency are common causes of hypocalcemia. It can also be caused by malnutrition, blood transfusion, ethylene glycol intoxication, and pancreatitis.
Symptoms
Neurological and cardiovascular symptoms are the most common manifestations of hypocalcemia. Patients may experience muscle cramping or twitching, and numbness around the mouth and fingers. They may also have shortness of breath, low blood pressure, and cardiac arrhythmias.
Treatment
Patients with hypocalcemia may be treated with either oral or IV calcium. Typically, IV calcium is reserved for patients with severe hypocalcemia. It is also important to check magnesium levels in patients with hypocalcemia and to replace magnesium if it is low.
Chloride
Chloride, after sodium, is the second most abundant electrolyte in the blood and most abundant in the extracellular fluid. Most of the chloride in the body is from salt (NaCl) in the diet. Chloride is part of gastric acid (HCl), which plays a role in absorption of electrolytes, activating enzymes, and killing bacteria. The levels of chloride in the blood can help determine if there are underlying metabolic disorders. Generally, chloride has an inverse relationship with bicarbonate, an electrolyte that indicates acid-base status. Overall, treatment of chloride imbalances involve addressing the underlying cause rather than supplementing or avoiding chloride.
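One standard way clinicians combine chloride with sodium and bicarbonate when screening for metabolic acid-base disorders is the anion gap. The formula and the example values below are general clinical conventions rather than figures taken from this article, so treat this as an illustrative sketch only.

```python
# Anion gap (a common clinical convention, not a value stated in this article):
# gap = Na+ - (Cl- + HCO3-), all in mEq/L; commonly quoted reference values are
# on the order of 8-12 mEq/L, but laboratories differ.
def anion_gap(na_meq_l: float, cl_meq_l: float, hco3_meq_l: float) -> float:
    return na_meq_l - (cl_meq_l + hco3_meq_l)

print(anion_gap(na_meq_l=140, cl_meq_l=104, hco3_meq_l=24))  # 12 mEq/L
```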
Hyperchloremia
Causes
Hyperchloremia, or high chloride levels, is usually associated with excess chloride intake (e.g., saltwater drowning), fluid loss (e.g., diarrhea, sweating), and metabolic acidosis.
Symptoms
Patients are usually asymptomatic with mild hyperchloremia. Symptoms associated with hyperchloremia are usually caused by the underlying cause of this electrolyte imbalance.
Treatment
Treat the underlying cause, which commonly includes increasing fluid intake.
Hypochloremia
Causes
Hypochloremia, or low chloride levels, is commonly associated with gastrointestinal (e.g., vomiting) and kidney (e.g., diuretics) losses. Greater water or sodium intake relative to chloride also can contribute to hypochloremia.
Symptoms
Patients are usually asymptomatic with mild hypochloremia. Symptoms associated with hypochloremia are usually caused by the underlying cause of this electrolyte imbalance.
Treatment
Treat the underlying cause, which commonly includes increasing fluid intake.
Magnesium
Magnesium is mostly found in the bones and within cells. Approximately 1% of total magnesium in the body is found in the blood. Magnesium is important in control of metabolism and is involved in numerous enzyme reactions. A normal range is 0.70 - 1.10 mmol/L. The kidney is responsible for maintaining the magnesium levels in this narrow range.
Hypermagnesemia
Hypermagnesemia, or abnormally high levels of magnesium in the blood, is relatively rare in individuals with normal kidney function. This is defined by a magnesium concentration >2.5 mg/dL.
Causes
Hypermagnesemia typically occurs in individuals with abnormal kidney function. This imbalance can also occur with use of antacids or laxatives that contain magnesium. Most cases of hypermagnesemia can be prevented by avoiding magnesium-containing medications.
Symptoms
Mild symptoms include nausea, flushing, and tiredness. Neurologic symptoms are seen most commonly, including decreased deep tendon reflexes. Severe symptoms include paralysis, respiratory failure, and bradycardia progressing to cardiac arrest.
Treatment
If kidney function is normal, stopping the source of magnesium intake is sufficient. Diuretics can help increase magnesium excretion in the urine. Severe symptoms may be treated with dialysis to directly remove magnesium from the blood.
Hypomagnesemia
Hypomagnesemia, or low magnesium levels in the blood, can occur in up to 12% of hospitalized patients. Symptoms or effects of hypomagnesemia can occur after relatively small deficits.
Causes
Major causes of hypomagnesemia are gastrointestinal losses such as vomiting and diarrhea. Another major cause is kidney losses due to diuretics, alcohol use, hypercalcemia, and genetic disorders. Low dietary intake can also contribute to magnesium deficiency.
Symptoms
Hypomagnesemia is typically associated with other electrolyte abnormalities, such as hypokalemia and hypocalcemia. For this reason, there may be overlap in symptoms seen in these other electrolyte deficiencies. Severe symptoms include arrhythmias, seizures, and tetany.
Treatment
The first step in treatment is determining whether the deficiency is caused by a gastrointestinal or kidney problem. People with no or minimal symptoms are given oral magnesium; however, many people experience diarrhea and other gastrointestinal discomfort. Those who cannot tolerate or take oral magnesium, or those with severe symptoms, can receive intravenous magnesium.
Hypomagnesemia may prevent the normalization of other electrolyte deficiencies. If other electrolyte deficiencies are associated, normalizing magnesium levels may be necessary to treat the other deficiencies.
Phosphate
Hyperphosphatemia
Hypophosphatemia
Potassium
Potassium resides mainly inside the cells of the body, and its normal concentration in the blood ranges from 3.5 mEq/L to 5 mEq/L. The kidneys are responsible for excreting the majority of potassium from the body. This means their function is crucial for maintaining a proper balance of potassium in the blood stream.
Hyperkalemia
Hyperkalemia means the concentration of potassium in the blood is too high. This occurs when the concentration of potassium is >5 mEq/L. It can lead to cardiac arrhythmias and even death. As such it is considered to be the most dangerous electrolyte disturbance.
Causes
Hyperkalemia is typically caused by decreased excretion by the kidneys, shift of potassium to the extracellular space, or increased consumption of potassium rich foods in patients with kidney failure. The most common cause of hyperkalemia is lab error due to potassium released as blood cells from the sample break down. Other common causes are kidney disease, cell death, acidosis, and drugs that affect kidney function.
Symptoms
Part of the danger of hyperkalemia is that it is often asymptomatic, and only detected during normal lab work done by primary care physicians. As potassium levels get higher, individuals may begin to experience nausea, vomiting, and diarrhea. Patients with severe hyperkalemia, defined by levels above 7 mEq/L, may experience muscle cramps, numbness, tingling, absence of reflexes, and paralysis. Patients may experience arrhythmias that can result in death.
Treatment
There are three mainstays of treatment of hyperkalemia. These are stabilization of cardiac cells, shift of potassium into the cells, and removal of potassium from the body. Stabilization of cardiac muscle cells is done by administering calcium intravenously. Shift of potassium into the cells is done using both insulin and albuterol inhalers. Excretion of potassium from the body is done using either hemodialysis, loop diuretics, or a resin that causes potassium to be excreted in the fecal matter.
Hypokalemia
The most common electrolyte disturbance, hypokalemia means that the concentration of potassium is <3.5 mEq/L. It often occurs concurrently with low magnesium levels.
Causes
Low potassium is caused by increased excretion of potassium, decreased consumption of potassium rich foods, movement of potassium into the cells, or certain endocrine diseases. Excretion is the most common cause of hypokalemia and can be caused by diuretic use, metabolic acidosis, diabetic ketoacidosis, hyperaldosteronism, and renal tubular acidosis. Potassium can also be lost through vomiting and diarrhea.
Symptoms
Hypokalemia is often asymptomatic, and symptoms may not appear until potassium concentration is <2.5 mEq/L. Typical symptoms consist of muscle weakness and cramping. Low potassium can also cause cardiac arrhythmias.
Treatment
Hypokalemia is treated by replacing the body's potassium. This can occur either orally or intravenously. Because low potassium is usually accompanied by low magnesium, patients are often given magnesium alongside potassium.
Sodium
Sodium is the most abundant electrolyte in the blood. Sodium and its homeostasis in the human body is highly dependent on fluids. The human body is approximately 60% water, a percentage which is also known as total body water. The total body water can be divided into two compartments called extracellular fluid (ECF) and intracellular fluid (ICF). The majority of the sodium in the body stays in the extracellular fluid compartment. This compartment consists of the fluid surrounding the cells and the fluid inside the blood vessels. ECF has a sodium concentration of approximately 140 mEq/L. Because cell membranes are permeable to water but not sodium, the movement of water across membranes affects the concentration of sodium in the blood. Sodium acts as a force that pulls water across membranes, and water moves from places with lower sodium concentration to places with higher sodium concentration. This happens through a process called osmosis. When evaluating sodium imbalances, both total body water and total body sodium must be considered.
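As a rough illustration of the relationships described above, total body water and its compartments can be estimated from body weight. The short sketch below uses the common approximations that total body water is about 60% of body weight and that roughly two thirds of it is intracellular; these fractions and the example weight are illustrative assumptions rather than values taken from this article.

def body_water_compartments(weight_kg, tbw_fraction=0.6):
    """Estimate total body water (TBW) and its compartments in liters.

    Assumes 1 kg of body water is about 1 L, that TBW is ~60% of body
    weight, and the common approximation that ~2/3 of TBW is intracellular
    fluid (ICF) and ~1/3 is extracellular fluid (ECF). Illustrative only.
    """
    tbw = weight_kg * tbw_fraction
    icf = tbw * 2 / 3
    ecf = tbw / 3
    return tbw, icf, ecf

tbw, icf, ecf = body_water_compartments(70)  # hypothetical 70 kg adult
print(f"TBW ~{tbw:.0f} L, ICF ~{icf:.0f} L, ECF ~{ecf:.0f} L")
# TBW ~42 L, ICF ~28 L, ECF ~14 L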
Hypernatremia
Hypernatremia means that the concentration of sodium in the blood is too high. An individual is considered hypernatremic at sodium levels above 145 mEq/L. Hypernatremia is not common in individuals with no other health concerns. Most individuals with this disorder have either experienced loss of water from diarrhea, altered sense of thirst, inability to consume water, inability of kidneys to make concentrated urine, or increased salt intake.
Causes
There are three types of hypernatremia, each with different causes. The first is dehydration along with low total body sodium. This is most commonly caused by heatstroke, burns, extreme sweating, vomiting, and diarrhea. The second is low total body water with normal body sodium. This can be caused by diabetes insipidus, renal disease, hypothalamic dysfunction, sickle cell disease, and certain drugs. The third is increased total body sodium, which is caused by increased ingestion, Conn's syndrome, or Cushing's syndrome.
Symptoms
Symptoms of hypernatremia may vary depending on type and how quickly the electrolyte disturbance developed. Common symptoms are dehydration, nausea, vomiting, fatigue, weakness, increased thirst, and excess urination. Patients may be on medications that caused the imbalance such as diuretics or nonsteroidal anti-inflammatory drugs. Some patients may have no obvious symptoms at all.
Treatment
It is crucial to first assess the stability of the patient. If there are any signs of shock such as tachycardia or hypotension, these must be treated immediately with IV saline infusion. Once the patient is stable, it is important to identify the underlying cause of hypernatremia, as that may affect the treatment plan. The final step in treatment is to calculate the patient's free water deficit and to replace it at a steady rate using a combination of oral or IV fluids. The rate of replacement of fluids varies depending on how long the patient has been hypernatremic. Lowering the sodium level too quickly can cause cerebral edema.
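A worked example of the free water deficit calculation mentioned above is sketched below. It uses the commonly cited textbook formula, deficit = total body water x (serum sodium / 140 - 1); the total body water fraction and the example numbers are illustrative assumptions rather than values given in this article, and actual fluid prescriptions are a matter for the treating clinician.

def free_water_deficit(weight_kg, serum_na, tbw_fraction=0.6, target_na=140):
    """Estimate the free water deficit (liters) in hypernatremia.

    Uses the commonly cited formula: deficit = TBW * (serum Na / target Na - 1),
    where TBW = body weight * fraction (often 0.6 for men, 0.5 for women).
    Illustrative only; not a substitute for clinical judgment.
    """
    tbw = weight_kg * tbw_fraction
    return tbw * (serum_na / target_na - 1)

# Hypothetical example: 70 kg man with a serum sodium of 154 mEq/L
deficit = free_water_deficit(70, 154)
print(f"Estimated free water deficit: {deficit:.1f} L")  # about 4.2 L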
Hyponatremia
Hyponatremia means that the concentration of sodium in the blood is too low. It is generally defined as a concentration lower than 135 mEq/L. This relatively common electrolyte disorder can indicate the presence of a disease process, but in the hospital setting is more often due to administration of hypotonic fluids. The majority of hospitalized patients only experience mild hyponatremia, with levels above 130 mEq/L. Only 1-4% of patients experience levels lower than 130 mEq/L.
Causes
Hyponatremia has many causes including heart failure, chronic kidney disease, liver disease, treatment with thiazide diuretics, psychogenic polydipsia, and syndrome of inappropriate antidiuretic hormone secretion. It can also be found in the postoperative state, and in the setting of accidental water intoxication as can be seen with intense exercise. Common causes in pediatric patients may be diarrheal illness, frequent feedings with dilute formula, water intoxication via excessive consumption, and enemas. Pseudohyponatremia is a false low sodium reading that can be caused by high levels of fats or proteins in the blood. Dilutional hyponatremia can happen in diabetics as high glucose levels pull water into the blood stream, causing the sodium concentration to be lower. Diagnosis of the cause of hyponatremia relies on assessment of volume status, plasma osmolality, urine sodium levels, and urine osmolality.
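The dilutional effect of high glucose described above is often estimated with a simple correction: a frequently quoted rule adds about 1.6 mEq/L to the measured sodium for every 100 mg/dL of glucose above 100 mg/dL (some sources use a larger factor such as 2.4). The sketch below applies that rule; the correction factor and the example values are assumptions for illustration and are not taken from this article.

def corrected_sodium(measured_na, glucose_mg_dl, factor=1.6):
    """Estimate glucose-corrected serum sodium in mEq/L.

    Applies the commonly quoted rule of ~1.6 mEq/L per 100 mg/dL of glucose
    above 100 mg/dL; some sources use ~2.4. Illustrative only.
    """
    excess_glucose = max(glucose_mg_dl - 100, 0)
    return measured_na + factor * (excess_glucose / 100)

# Hypothetical example: measured sodium 128 mEq/L with a glucose of 600 mg/dL
print(f"Corrected sodium: {corrected_sodium(128, 600):.1f} mEq/L")  # about 136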
Symptoms
Many individuals with mild hyponatremia will not experience symptoms. Severity of symptoms is directly correlated with severity of hyponatremia and rapidness of onset. General symptoms include loss of appetite, nausea, vomiting, confusion, agitation, and weakness. More concerning symptoms involve the central nervous system and include seizures, coma, and death due to brain herniation. These usually do not occur until sodium levels fall below 120 mEq/L.
Treatment
Considerations for treatment include symptom severity, time to onset, volume status, underlying cause, and sodium levels. If the sodium level is <120 mEq/L, the person can be treated with hypertonic saline as extremely low levels are associated with severe neurological symptoms. In non-emergent situations, it is important to correct the sodium slowly to minimize risk of osmotic demyelination syndrome. If a person has low total body water and low sodium they are typically given fluids. If a person has high total body water (such as due to heart failure or kidney disease) they may be placed on fluid restriction, salt restriction, and treated with a diuretic. If a person has a normal volume of total body water, they may be placed on fluid restriction alone.
Dietary sources
Diet significantly contributes to electrolyte stores and blood levels. Below is a list of foods that are associated with higher levels of these electrolytes.
Sodium
It is recommended that an individual consumes less than 2,300 mg of sodium daily as part of a healthy diet. A significant portion of sodium intake comes from just a few types of food, which may be surprising, as large sources of sodium do not necessarily taste salty.
Breads
Soups
Cured meats and cold cuts
Cheese
Savory snacks (e.g., chips, crackers, pretzels)
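Note that the 2,300 mg recommendation above refers to sodium rather than to salt. Because sodium makes up roughly 40% of the mass of table salt (NaCl), the limit corresponds to only about 5.8 g of salt per day; the short conversion below is an illustration of that arithmetic, not a figure taken from this article.

SODIUM_FRACTION_OF_SALT = 22.99 / (22.99 + 35.45)  # ~0.39 by atomic mass

def salt_equivalent_grams(sodium_mg):
    """Convert an amount of sodium (mg) to the equivalent mass of table salt (g)."""
    return sodium_mg / 1000 / SODIUM_FRACTION_OF_SALT

print(f"2,300 mg of sodium is roughly {salt_equivalent_grams(2300):.1f} g of salt")  # ~5.8 g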
Phosphate
In minerals, phosphorus generally occurs as phosphate. Good sources of phosphorus include baking powder, instant pudding, cottonseed meal, hemp seeds, fortified beverages, and dried whey.
Potassium
Good sources of potassium are found in a variety of fruits and vegetables. Recommended potassium intake for adults ranges from 2,300 mg to 3,400 mg depending on age and gender.
Beans and lentils
Dark leafy greens (e.g., spinach, kale)
Apples
Apricots
Potatoes
Squash
Bananas
Dates
Calcium
Dairy is a major contributor of calcium to the diet in the United States. The recommended calcium intake for adults ranges from 1,000 mg to 1,300 mg depending on age and gender.
Yogurt
Cheese
Milk
Tofu
Canned sardines
Magnesium
Magnesium is found in a variety of vegetables, meats, and grains. Foods high in fiber are generally a source of magnesium. The recommended magnesium intake for adults ranges from 360 mg to 420 mg depending on age and gender.
Epsom salt
Nuts and seeds (e.g., pumpkin seeds, almonds, peanuts)
Dark leafy greens (e.g., spinach)
Beans
Fortified cereals
See also
Acidosis
Alkalosis
Dehydration
Malnutrition
Starvation
Sports drink
References
External links
Causes of death
Environmental hazard
Environmental hazards are those hazards that affect biomes or ecosystems. Well known examples include oil spills, water pollution, slash and burn deforestation, air pollution, ground fissures, and build-up of atmospheric carbon dioxide. Physical exposure to environmental hazards is usually involuntary.
Types
Environmental hazards can be categorized in many different ways. One common division is into chemical, physical, biological, and psychological hazards.
Chemical hazards are substances that can cause harm or damage to humans, animals, or the environment. They can be in the form of solids, liquids, gases, mists, dusts, fumes, and vapors. Exposure can occur through inhalation, skin absorption, ingestion, or direct contact. Chemical hazards include substances such as pesticides, solvents, acids, bases, reactive metals, and poisonous gases. Exposure to these substances can result in health effects such as skin irritation, respiratory problems, organ damage, neurological effects, and cancer.
Physical hazards are factors within the environment that can harm the body without necessarily touching it. They include a wide range of environmental factors such as noise, vibration, extreme temperatures, radiation, and ergonomic hazards. Physical hazards may lead to injuries like burns, fractures, hearing loss, vision impairment, or other physical harm. They can be present in many work settings such as construction sites, manufacturing plants, and even office spaces.
Biological hazards, also known as biohazards, are organic substances that pose a threat to the health of living organisms, primarily humans. This can include medical waste, samples of a microorganism, virus, or toxin (from a biological source) that can impact human health. Biological hazards can also include substances harmful to animals. Examples of biological hazards include bacteria, viruses, fungi, other microorganisms and their associated toxins. They may cause a myriad of diseases, from flu to more serious and potentially fatal diseases.
Psychological hazards are aspects of work and work environments that can cause psychological harm or mental ill-health. These include factors such as stress, workplace bullying, fatigue, burnout, and violence, among others. These hazards can lead to psychological issues like anxiety, depression, and post-traumatic stress disorder (PTSD). Psychological hazards can exist in any type of workplace, and their management is a crucial aspect of occupational health and safety.
Environmental hazard identification
Environmental hazard identification is the first step in environmental risk assessment, which is the process of assessing the likelihood, or risk, of adverse effects resulting from a given environmental stressor. Hazard identification is the determination of whether, and under what conditions, a given environmental stressor has the potential to cause harm.
In hazard identification, sources of data on the risks associated with prospective hazards are identified. For instance, if a site is known to be contaminated with a variety of industrial pollutants, hazard identification will determine which of these chemicals could result in adverse human health effects, and what effects they could cause. Risk assessors rely on both laboratory (e.g., toxicological) and epidemiological data to make these determinations.
Conceptual model of exposure
Hazards have the potential to cause adverse effects only if they come into contact with populations that may be harmed. For this reason, hazard identification includes the development of a conceptual model of exposure. Conceptual models communicate the pathway connecting sources of a given hazard to the potentially exposed population(s). The U.S. Agency for Toxic Substances and Disease Registry establishes five elements that should be included in a conceptual model of exposure:
The source of the hazard in question
Environmental fate and transport, or how the hazard moves and changes in the environment after its release
Exposure point or area, or the place at which an exposed person comes into contact with the hazard
Exposure route, or the manner by which an exposed person comes into contact with the hazard (e.g., orally, dermally, or by inhalation)
Potentially exposed populations.
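The five elements listed above lend themselves to a simple structured representation. The sketch below shows one way such a conceptual model of exposure might be recorded; the field names and the contaminated-well example are hypothetical and are not part of the ATSDR guidance summarized here.

from dataclasses import dataclass, field

@dataclass
class ConceptualModelOfExposure:
    """Five elements of a conceptual model of exposure (after the ATSDR list above)."""
    source: str                    # source of the hazard in question
    fate_and_transport: str        # how the hazard moves and changes after release
    exposure_point: str            # where an exposed person contacts the hazard
    exposure_route: str            # how contact occurs (oral, dermal, inhalation)
    exposed_populations: list[str] = field(default_factory=list)

# Hypothetical example for a contaminated drinking-water well
model = ConceptualModelOfExposure(
    source="industrial discharge to groundwater",
    fate_and_transport="migration of the contaminant through the aquifer",
    exposure_point="residential tap water drawn from the well",
    exposure_route="oral (drinking water)",
    exposed_populations=["households using the well"],
)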
Evaluating hazard data
Once a conceptual model of exposure is developed for a given hazard, measurements should be taken to determine the presence and quantity of the hazard. These measurements should be compared to appropriate reference levels to determine whether a hazard exists. For instance, if arsenic is detected in tap water from a given well, the detected concentrations should be compared with regulatory thresholds for allowable levels of arsenic in drinking water. If the detected levels are consistently lower than these limits, arsenic may not be a chemical of potential concern for the purposes of this risk assessment. When interpreting hazard data, risk assessors must consider the sensitivity of the instrument and method used to take these measurements, including any relevant detection limits (i.e., the lowest level of a given substance that an instrument or method is capable of detecting).
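The comparison just described can be expressed as a simple screening step. The sketch below flags a substance as a potential concern only when a measurement at or above the detection limit exceeds the reference level; the threshold values and sample results are hypothetical, and the logic illustrates the reasoning rather than any specific regulatory procedure.

def screen_hazard(measurements, reference_level, detection_limit):
    """Screen measured concentrations against a reference level.

    Values below the detection limit are treated as non-detects and are not
    compared. Returns True if any quantified value exceeds the reference
    level. Illustrative logic only.
    """
    quantified = [m for m in measurements if m >= detection_limit]
    return any(m > reference_level for m in quantified)

# Hypothetical arsenic results in micrograms per liter, screened against a
# hypothetical 10 ug/L reference level and a 1 ug/L detection limit
arsenic_ug_per_l = [2.0, 4.5, 12.3, 0.4]
print(screen_hazard(arsenic_ug_per_l, reference_level=10.0, detection_limit=1.0))
# True, because 12.3 ug/L exceeds the reference level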
Chemical
Chemical hazards are defined in the Globally Harmonized System and in the European Union chemical regulations. They are caused by chemical substances causing significant damage to the environment. The label is particularly applicable towards substances with aquatic toxicity. An example is zinc oxide, a common paint pigment, which is extremely toxic to aquatic life.
Toxicity or other hazards do not imply an environmental hazard, because elimination by sunlight (photolysis), water (hydrolysis) or organisms (biological elimination) neutralizes many reactive or poisonous substances. Persistence towards these elimination mechanisms combined with toxicity gives the substance the ability to do damage in the long term. Also, the lack of immediate human toxicity does not mean the substance is environmentally nonhazardous. For example, tanker truck-sized spills of substances such as milk can cause a lot of damage in the local aquatic ecosystems: the added biological oxygen demand causes rapid eutrophication, leading to anoxic conditions in the water body.
All hazards in this category are mainly anthropogenic, although a number of natural carcinogens exist, and chemical elements such as radon and lead may turn up in health-critical concentrations in the natural environment. Examples include:
Agents administered to animals destined for human consumption
Contaminants of fresh water sources (water wells)
Carcinogenic substances
Lead in paint
Radon and other natural sources of radioactivity
Physical
A physical hazard is a type of occupational hazard that involves environmental hazards that can cause harm with or without contact, such as noise, vibration, extreme temperatures, and radiation.
Biological
Biological hazards, also known as biohazards, refer to biological substances that pose a threat to the health of living organisms, primarily that of humans. This can include medical waste or samples of a microorganism, virus or toxin (from a biological source) that can affect human health. Examples include:
Allergens
Bovine spongiform encephalopathy (BSE)
Onchocerciasis (river blindness)
Severe acute respiratory syndrome (SARS)
Psychological
Psychological hazards include but are not limited to stress, violence and other workplace stressors. Work is generally beneficial to mental health and personal wellbeing. It provides people with structure and purpose and a sense of identity.
See also
References
Environmental health
Hazards
Public health
Organ (biology)
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together to perform a function. Tissues of different types combine to form an organ, which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The same is true for the musculoskeletal system because of the relationship between the muscular and skeletal systems.
Cardiovascular system: pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Digestive system: digestion and processing food with salivary glands, esophagus, stomach, liver, gallbladder, pancreas, intestines, colon, mesentery, rectum and anus.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroids and adrenals, i.e., adrenal glands.
Excretory system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream, the lymph and the nodes and vessels that transport it including the immune system: defending against disease-causing agents with leukocytes, tonsils, adenoids, thymus and spleen.
Integumentary system: skin, hair and nails of mammals. Also scales of fish, reptiles, and birds, and feathers of birds.
Muscular system: movement with muscles.
Nervous system: collecting, transferring and processing information with brain, spinal cord and nerves.
Reproductive system: the sex organs, such as ovaries, oviducts, uterus, vulva, vagina, testicles, vasa deferentia, seminal vesicles, prostate and penis.
Respiratory system: the organs used for breathing, the pharynx, larynx, trachea, bronchi, lungs and diaphragm.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Viscera
In the study of anatomy, viscera (singular: viscus) refers to the internal organs of the abdominal, thoracic, and pelvic cavities. The abdominal organs may be classified as solid organs or hollow organs. The solid organs are the liver, pancreas, spleen, kidneys, and adrenal glands. The hollow organs of the abdomen are the stomach, intestines, gallbladder, bladder, and rectum. In the thoracic cavity, the heart is a hollow, muscular organ. Splanchnology is the study of the viscera. The term "visceral" is contrasted with the term "parietal", meaning "of or relating to the wall of a body part, organ or cavity". The two terms are often used in describing a membrane or piece of connective tissue, referring to the opposing sides.
Origin and evolution
The organ level of organisation in animals can be first detected in flatworms and the more derived phyla, i.e. the bilaterians. The less-advanced taxa (i.e. Placozoa, Porifera, Ctenophora and Cnidaria) do not show consolidation of their tissues into organs.
More complex animals are composed of different organs, which have evolved over time. For example, the liver and heart evolved in the chordates about 550-500 million years ago, while the gut and brain are even more ancient, arising in the ancestor of vertebrates, insects, molluscs, and worms about 700–650 million years ago.
Given the ancient origin of most vertebrate organs, researchers have looked for model systems where organs have evolved more recently, and ideally have evolved multiple times independently. An outstanding model for this kind of research is the placenta, which has evolved more than 100 times independently in vertebrates, has evolved relatively recently in some lineages, and exists in intermediate forms in extant taxa. Studies on the evolution of the placenta have identified a variety of genetic and physiological processes that contribute to the origin and evolution of organs; these include the re-purposing of existing animal tissues, the acquisition of new functional properties by these tissues, and novel interactions of distinct tissue types.
Plants
The study of plant organs is covered in plant morphology. Organs of plants can be divided into vegetative and reproductive. Vegetative plant organs include roots, stems, and leaves. The reproductive organs are variable. In flowering plants, they are represented by the flower, seed and fruit. In conifers, the organ that bears the reproductive structures is called a cone. In other divisions (phyla) of plants, the reproductive organs are called strobili, in Lycopodiophyta, or simply gametophores in mosses. Common organ system designations in plants include the differentiation of shoot and root. All parts of the plant above ground (in non-epiphytes), including the functionally distinct leaf and flower organs, may be classified together as the shoot organ system.
The vegetative organs are essential for maintaining the life of a plant. While there can be 11 organ systems in animals, there are far fewer in plants, where some perform the vital functions, such as photosynthesis, while the reproductive organs are essential in reproduction. However, if there is asexual vegetative reproduction, the vegetative organs are those that create the new generation of plants (see clonal colony).
Society and culture
Many societies have a system for organ donation, in which a living or deceased donor's organs are transplanted into a person with a failing organ. The transplantation of larger solid organs often requires immunosuppression to prevent organ rejection or graft-versus-host disease.
There is considerable interest throughout the world in creating laboratory-grown or artificial organs.
Organ transplants
Beginning in the 20th century, organ transplants began to take place as scientists knew more about the anatomy of organs. These came later in time as procedures were often dangerous and difficult. Both the source and method of obtaining the organ to transplant are major ethical issues to consider, and because organs as resources for transplant are always more limited than demand for them, various notions of justice, including distributive justice, are developed in the ethical analysis. This situation continues as long as transplantation relies upon organ donors rather than technological innovation, testing, and industrial manufacturing.
History
The English word "organ" dates back to the twelfth century, when it referred to any musical instrument. By the late 14th century, the musical term's meaning had narrowed to refer specifically to the keyboard-based instrument. At the same time, a second meaning arose, in reference to a "body part adapted to a certain function".
Plant organs are made from tissues of different types. The three tissue types are ground, vascular, and dermal. When three or more organs are present, it is called an organ system.
The adjective visceral, also splanchnic, is used for anything pertaining to the internal organs. Historically, viscera of animals were examined by Roman pagan priests like the haruspices or the augurs in order to divine the future by their shape, dimensions or other factors. This practice remains an important ritual in some remote, tribal societies.
Antiquity
Aristotle used the word frequently in his philosophy, both to describe the organs of plants or animals (e.g. the roots of a tree, the heart or liver of an animal), because, in ancient Greek, the word 'organon' means 'tool', and Aristotle believed that the organs of the body were tools by means of which we can do things. For similar reasons, his logical works, taken as a whole, are referred to as the Organon because logic is a tool for philosophical thinking. Earlier thinkers, such as those who wrote texts in the Hippocratic corpus, generally did not believe that there were organs of the body but only different parts of the body.
Some alchemists (e.g. Paracelsus) adopted the Hermetic Qabalah assignment between the seven vital organs and the seven classical planets.
Chinese traditional medicine recognizes eleven organs, associated with the five Chinese traditional elements and with yin and yang.
The Chinese associated the five elements with the five planets (Jupiter, Mars, Venus, Saturn, and Mercury) similar to the way the classical planets were associated with different metals. The yin and yang distinction approximates the modern notion of solid and hollow organs.
See also
List of organs of the human body
Organoid
Organ-on-a-chip
Situs inversus
References
External links
Levels of organization (Biology)
Heat stroke
Heat stroke or heatstroke, also known as sun-stroke, is a severe heat illness that results in a body temperature greater than 40 °C (104 °F), along with red skin, headache, dizziness, and confusion. Sweating is generally present in exertional heatstroke, but not in classic heatstroke. The start of heat stroke can be sudden or gradual. Heatstroke is a life-threatening condition due to the potential for multi-organ dysfunction, with typical complications including seizures, rhabdomyolysis, or kidney failure.
Heat stroke occurs because of high external temperatures and/or physical exertion. It usually occurs under preventable prolonged exposure to extreme environmental or exertional heat. However, certain health conditions can increase the risk of heat stroke, and patients, especially children, with certain genetic predispositions are vulnerable to heatstroke under relatively mild conditions.
Preventive measures include drinking sufficient fluids and avoiding excessive heat. Treatment is by rapid physical cooling of the body and supportive care. Recommended methods include spraying the person with water and using a fan, putting the person in ice water, or giving cold intravenous fluids. Adding ice packs around a person is beneficial but does not by itself achieve the fastest possible cooling.
Heat stroke results in more than 600 deaths a year in the United States. Rates increased between 1995 and 2015. Purely exercise-induced heat stroke, though a medical emergency, tends to be self-limiting (the patient stops exercising from cramp or exhaustion) and fewer than 5% of cases are fatal. Non-exertional heatstroke is a much greater danger: even the healthiest person, if left in a heatstroke-inducing environment without medical attention, will continue to deteriorate to the point of death, and 65% of the most severe cases are fatal even with treatment.
Signs and symptoms
Heat stroke generally presents with a hyperthermia of greater than 40 °C (104 °F) in combination with disorientation. There is generally a lack of sweating in classic heatstroke, while sweating is generally present in exertional heatstroke.
Early symptoms of heat stroke include behavioral changes, confusion, delirium, dizziness, weakness, agitation, combativeness, slurred speech, nausea, and vomiting. In some individuals with exertional heatstroke, seizures and sphincter incontinence have also been reported. Additionally, in exertional heat stroke, the affected person may sweat excessively. Rhabdomyolysis, which is characterized by skeletal muscle breakdown with the products of muscle breakdown entering the bloodstream and causing organ dysfunction, is seen with exertional heatstroke.
If treatment is delayed, patients could develop vital organ damage, unconsciousness and even organ failure. In the absence of prompt and adequate treatment, heatstroke can be fatal.
Causes
Heat stroke occurs when thermoregulation is overwhelmed by a combination of excessive metabolic production of heat (exertion), excessive heat in the physical environment, and insufficient or impaired heat loss, resulting in an abnormally high body temperature. Substances that inhibit cooling and cause dehydration such as alcohol, stimulants, medications, and age-related physiological changes predispose to so-called "classic" or non-exertional heat stroke (NEHS), most often in elderly and infirm individuals in summer situations with insufficient ventilation.
Young children have age specific physiologic differences that make them more susceptible to heat stroke including an increased surface area to mass ratio (leading to increased environmental heat absorption), an underdeveloped thermoregulatory system, a decreased sweating rate and a decreased blood volume to body size ratio (leading to decreased compensatory heat dissipation by redirecting blood to the skin).
Exertional heat stroke
Exertional heat stroke (EHS) can happen in young people without health problems or medications most often in athletes, outdoor laborers, or military personnel engaged in strenuous hot-weather activity or in first responders wearing heavy personal protective equipment. In environments that are not only hot but also humid, it is important to recognize that humidity reduces the degree to which the body can cool itself by perspiration and evaporation. For humans and other warm-blooded animals, excessive body temperature can disrupt enzymes regulating biochemical reactions that are essential for cellular respiration and the functioning of major organs.
Cars
Even at moderate outside temperatures, the temperature inside a car parked in direct sunlight can quickly rise to dangerously high levels. Young children or elderly adults left alone in a vehicle are at particular risk of succumbing to heat stroke. "Heat stroke in children and in the elderly can occur within minutes, even if a car window is opened slightly." As these groups of individuals may not be able to open car doors or to express discomfort verbally (or audibly, inside a closed car), their plight may not be immediately noticed by others in the vicinity. In 2018, 51 children in the United States died in hot cars, more than the previous high of 49 in 2010.
Dogs are even more susceptible than humans to heat stroke in cars, as they cannot produce whole-body sweat to cool themselves. Leaving the dog at home with plenty of water on hot days is recommended instead, or, if a dog must be brought along, it can be tied up in the shade outside the destination and provided with a full water bowl.
Pathophysiology
The pathophysiology of heat stroke involves an intense heat overload followed by a failure of the body's thermoregulatory mechanisms. More specifically, heat stroke leads to inflammatory and coagulation responses that can damage the vascular endothelium and result in numerous platelet complications, including decreased platelet counts, platelet clumping, and suppressed platelet release from bone marrow.
Growing evidence also suggests the existence of a second pathway underlying heat stroke that involves heat and exercise-driven endotoxemia. Although its exact mechanism is not yet fully understood, this model theorizes that extreme exercise and heat disrupt the intestinal barrier by making it more permeable and allowing lipopolysaccharides (LPS) from gram-negative bacteria within the gut to move into the circulatory system. High blood LPS levels can then trigger a systemic inflammatory response and eventually lead to sepsis and related consequences like blood coagulation, multi-organ failure, necrosis, and central nervous system dysfunction.
Diagnosis
Heat stroke is a clinical diagnosis, based on signs and symptoms. It is diagnosed based on an elevated core body temperature (usually above 40 degrees Celsius), a history of heat exposure or physical exertion, and neurologic dysfunction. However, high body temperature does not necessarily indicate that heat stroke is present, such as with people in high-performance endurance sports or with people experiencing fevers. In others with heatstroke, the core body temperature is not always above 40 degrees Celsius. Therefore, heat stroke is more accurately diagnosed based on a constellation of symptoms rather than just a specific temperature threshold. Tachycardia (or a rapid heart rate), tachypnea (rapid breathing) and hypotension (low blood pressure) are common clinical findings. Those with classic heat stroke usually have dry skin, whereas those with exertional heat stroke usually have wet or sweaty skin.
A core body temperature (such as a rectal temperature) is the preferred method for monitoring body temperature in the diagnosis and management of heat stroke as it is more accurate than peripheral body temperatures (such as an oral or axillary temperatures).
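The clinical criteria above (an elevated core temperature, a history of heat exposure or exertion, and new neurological dysfunction) can be summarized as a simple screening rule. The sketch below is purely an illustration of how those criteria combine; the 40 °C cutoff is the one mentioned above, the function and example values are hypothetical, and this is not a validated diagnostic tool, since heat stroke can also occur below that temperature.

def possible_heat_stroke(core_temp_c, heat_exposure_or_exertion, neuro_dysfunction):
    """Screen for possible heat stroke using the criteria described above.

    Flags the combination of an elevated core (e.g., rectal) temperature, a
    history of heat exposure or exertion, and neurological dysfunction.
    Illustrative only; heat stroke remains a clinical diagnosis and may be
    present at lower core temperatures.
    """
    return core_temp_c >= 40.0 and heat_exposure_or_exertion and neuro_dysfunction

# Hypothetical example: 41.2 C rectal temperature after a summer run, with confusion
print(possible_heat_stroke(41.2, True, True))  # True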
Other conditions which may present similarly to heat stroke include meningitis, encephalitis, epilepsy, drug toxicity, severe dehydration, and certain metabolic syndromes such as serotonin syndrome, neuroleptic malignant syndrome, malignant hyperthermia and thyroid storm.
Prevention
The risk of heat stroke can be reduced by observing precautions to avoid overheating and dehydration. Light, loose-fitting clothes will allow perspiration to evaporate and cool the body. Wide-brimmed hats in light colors help prevent the sun from warming the head and neck. Vents on a hat will help cool the head, as will sweatbands wetted with cool water. Strenuous exercise should be avoided during hot weather, especially during peak sun hours. Strenuous exercise should also be avoided if a person is ill, and exercise intensity should match one's fitness level. Confined spaces (such as automobiles) without air-conditioning or adequate ventilation should also be avoided.
During heat waves and hot seasons further measures that can be taken to avoid classic heat stroke include staying in air conditioned areas, using fans, taking frequent cold showers, and increasing social contact and well being checks (especially for the elderly or disabled persons).
In hot weather, people need to drink plenty of cool liquids and mineral salts to replace fluids lost from sweating. Thirst is not a reliable sign that a person needs fluids. A better indicator is the color of urine. A dark yellow color may indicate dehydration.
Some measures that can help protect workers from heat stress include:
Know signs/symptoms of heat-related illnesses.
Block out direct sun and other heat sources.
Drink fluids often, and before you are thirsty.
Wear lightweight, light-colored, loose-fitting clothes.
Avoid beverages containing alcohol or caffeine.
Treatment
Treatment of heat stroke involves rapid mechanical cooling along with standard resuscitation measures.
The body temperature must be lowered quickly via conduction, convection, or evaporation. During cooling, the body temperature should be lowered to less than 39 degrees Celsius, ideally less than 38-38.5 degrees Celsius.
In the field, the person should be moved to a cool area, such as indoors or to a shaded area. Clothing should be removed to promote heat loss through passive cooling. Conductive cooling methods such as ice-water immersion should also be used, if possible. Evaporative and convective cooling by a combination of cool water spray or cold compresses with constant air flow over the body, such as with a fan or air-conditioning unit, is also an effective alternative.
In hospital mechanical cooling methods include ice water immersion, infusion of cold intravenous fluids, placing ice packs or wet gauze around the person, and fanning. Aggressive ice-water immersion remains the gold standard for exertional heat stroke and may also be used for classic heat stroke. This method may require the effort of several people and the person should be monitored carefully during the treatment process. Immersion should be avoided for an unconscious person but, if there is no alternative, it can be applied with the person's head above water. A rapid and effective cooling usually reverses concomitant organ dysfunction.
Immersion in very cold water was once thought to be counterproductive by reducing blood flow to the skin and thereby preventing heat from escaping the body core. However, research has shown that this mechanism does not play a dominant role in the decrease in core body temperature brought on by cold water.
Dantrolene, a muscle relaxant used to treat other forms of hyperthermia, is not an effective treatment for heat stroke. Antipyretics such as aspirin and acetaminophen are also not recommended as a means to lower body temperature in the treatment of heat stroke and their use may lead to worsening liver damage.
Cardiopulmonary resuscitation (CPR) may be necessary if the person goes into cardiac arrest.
The person's condition should be reassessed and stabilized by trained medical personnel, and the person's heart rate and breathing should be monitored. IV fluid resuscitation is usually needed for circulatory failure and organ dysfunction and is also indicated if rhabdomyolysis is present. In severe cases, hemodialysis and ventilator support may be needed.
Prognosis
In elderly people who experience classic heat stroke the mortality exceeds 50%. The mortality rate in exertional heat stroke is less than 5%.
It was long believed that heat strokes lead only rarely to permanent deficits and that convalescence is almost complete. However, following the 1995 Chicago heat wave, researchers from the University of Chicago Medical Center studied all 58 patients with heat stroke severe enough to require intensive care at 12 area hospitals between July 12 and 20, 1995, ranging in age from 25 to 95 years. Nearly half of these patients died within a year: 21 percent before and 28 percent after release from the hospital. Many of the survivors had permanent loss of independent function; one-third had severe functional impairment at discharge, and none of them had improved after one year. The study also recognized that because of overcrowded conditions in all the participating hospitals during the crisis, the immediate care, which is critical, was not as comprehensive as it should have been.
In rare cases, brain damage has been reported as a permanent sequela of severe heat stroke, most commonly cerebellar atrophy.
Epidemiology
Various aspects can affect the incidence of heat stroke, including sex, age, geographical location, and occupation. The incidence of heat stroke is higher among men; however, the incidence of other heat illnesses is higher among women. The incidence of other heat illnesses in women compared with men ranged from 1.30 to 2.89 per 1000 person-years versus 0.98 to 1.98 per 1000 person-years.
Different parts of the world also have different rates of heat stroke.
During the 2003 European heatwave more than 70,000 people died of heat related illnesses, and during the 2022 European heatwave (which saw the highest temperatures ever recorded in Europe), 61,672 people died from heat related illnesses.
Society and culture
In Slavic mythology, there is a personification of sunstroke, Poludnitsa (lady midday), a feminine demon clad in white that causes impairment or death to people working in the fields at midday. There was a traditional short break in harvest work at noon, to avoid attack by the demon. Antonín Dvořák's symphonic poem, The Noon Witch, was inspired by this tradition.
Other animals
Heatstroke can affect livestock, especially in hot, humid weather, or if the horse, cow, sheep or other animal is unfit, overweight, has a dense coat, is overworked, or is left in a horsebox in full sun. Symptoms include drooling, panting, high temperature, sweating, and rapid pulse.
The animal should be moved to shade, drenched in cold water, and offered water or an electrolyte solution to drink.
See also
Hyperthermia
Heat exhaustion
Occupational heat stress
References
External links
Heat stroke on MedicineNet.com
Effects of external causes
Medical emergencies
Thermoregulation
Fascia
A fascia (plural: fasciae or fascias; adjective: fascial) is a generic term for macroscopic membranous bodily structures. Fasciae are classified as superficial, visceral or deep, and further designated according to their anatomical location.
The knowledge of fascial structures is essential in surgery, as they create borders for infectious processes (for example Psoas abscess) and haematoma. An increase in pressure may result in a compartment syndrome, where a prompt fasciotomy may be necessary. For this reason, detailed descriptions of fascial structures have been available in anatomical literature since the 19th century.
Function
Fasciae were traditionally thought of as passive structures that transmit mechanical tension generated by muscular activities or external forces throughout the body. An important function of muscle fasciae is to reduce friction of muscular force. In doing so, fasciae provide a supportive and movable wrapping for nerves and blood vessels as they pass through and between muscles.
In the tradition of medical dissections it has been common practice to carefully clean muscles and other organs from their surrounding fasciae in order to study their detailed topography and function. However, this practice tends to ignore that e.g. many muscle fibers insert into their fascial envelopes and that the function of many organs is significantly altered when their related fasciae are removed. This insight contributed to several modern biomechanical concepts of the human body, in which fascial tissues take over important stabilizing and connecting functions, by distributing tensional forces across several joints in a network-like manner similar to the architectural concept of tensegrity.
Starting in 2018 this concept of the fascial tissue serving as a body-wide tensional support system has been successfully expressed as an educational model with the Fascial Net Plastination Project.
Fascial tissues are frequently innervated by sensory nerve endings. These include myelinated as well as unmyelinated nerves. Research indicates that fascia has proprioceptive (the ability to determine the body's orientation with respect to itself) as well as interoceptive (the ability to discern sensations within the body like the heartbeat) capabilities.
Fascial tissues – particularly those with tendinous or aponeurotic properties – are also able to store and release elastic potential energy.
Anatomical compartments
A fascial compartment is a section within the body that contains muscles and nerves and is surrounded by fascia. In the human body, the limbs can each be divided into two segments: The upper limb can be divided into the arm and the forearm and the sectional compartments of both of these – the fascial compartments of the arm and the fascial compartments of the forearm contain an anterior and a posterior compartment. The lower limbs can be divided into two segments – the leg and the thigh – and these contain the fascial compartments of the leg and the fascial compartments of the thigh.
Clinical significance
Fascia itself becomes clinically important when it loses stiffness, becomes too stiff, or has decreased shearing ability. When inflammatory fasciitis or trauma causes fibrosis and adhesions, fascial tissue fails to differentiate the adjacent structures effectively. This can happen after surgery, where the fascia has been incised and healing includes a scar that traverses the surrounding structures.
Fascial Net Plastination Project
The Fascial Net Plastination Project (FNPP) is an anatomical research initiative spearheaded by fascia researcher Robert Schleip. The project aims to enhance the study of fascia through the technique of plastination. Led by an international team of fascia experts and anatomists, the FNPP resulted in the creation of a full-body fascia plastinate known as FR:EIA (Fascia Revealed: Educating Interconnected Anatomy). This plastinate provides a detailed view of the human fascial network, allowing for a better understanding of its structure and function as an interconnected tissue throughout the body.
FR:EIA was unveiled at the 2021 Fascia Research Congress and is currently exhibited at the Body Worlds exhibition in Berlin. This project represents a significant contribution to the visualization of fascia and has the potential to influence future research in fields such as medicine, physical therapy, and movement science.
Terminology
There exists some controversy about what structures are considered "fascia" and how they should be classified.
The current terminology of the International Federation of Associations of Anatomists divides fasciae into:
Fascia craniocervicalis
Fascia trunci
Fascia parietalis
Fascia extraserosalis
Fascia visceralis
Fasciae membrorum
Fasciae musculorum
Fascia investiens
Fascia propria musculi
Previous terminology
Two former, rather commonly used systems are:
The one specified in the 1983 edition of Nomina Anatomica (NA 1983)
The one specified in the 1997 edition of Terminologia Anatomica (TA 1997)
Superficial
Superficial fascia is the lowermost layer of the skin in nearly all of the regions of the body; it blends with the reticular dermis layer. It is present on the face, over the upper portion of the sternocleidomastoid, at the nape of the neck and overlying the breastbone. It consists mainly of loose areolar and fatty adipose connective tissue and is the layer that primarily determines the shape of a body. In addition to its subcutaneous presence, superficial fascia surrounds organs, glands and neurovascular bundles, and fills otherwise empty space at many other locations. It serves as a storage medium of fat and water; as a passageway for lymph, nerve and blood vessels; and as a protective padding to cushion and insulate.
Superficial fascia is present, but does not contain fat, in the eyelid, ear, scrotum, penis and clitoris.
Due to its viscoelastic properties, superficial fascia can stretch to accommodate the deposition of adipose that accompanies both ordinary and prenatal weight gain. After pregnancy and weight loss, the superficial fascia slowly reverts to its original level of tension.
Visceral
Visceral fascia (also called subserous fascia) suspends the organs within their cavities and wraps them in layers of connective tissue membranes. Each of the organs is covered in a double layer of fascia; these layers are separated by a thin serous membrane.
The outermost wall of the organ is known as the parietal layer.
The skin of the organ is known as the visceral layer. The organs have specialized names for their visceral fasciae. In the brain, they are known as meninges; in the heart they are known as pericardia; in the lungs, they are known as pleurae; and in the abdomen, they are known as peritonea.
Visceral fascia is less extensible than superficial fascia. Due to its suspensory role for the organs, it needs to maintain its tone rather consistently. If it is too lax, it contributes to organ prolapse, yet if it is hypertonic, it restricts proper organ motility.
Deep
Deep fascia is a layer of dense fibrous connective tissue which surrounds individual muscles and divides groups of muscles into fascial compartments.
This fascia has a high density of elastin fibre that determines its extensibility or resilience. Deep fascia was originally considered to be essentially avascular but later investigations have confirmed a rich presence of thin blood vessels. Deep fascia is also richly supplied with sensory receptors. Examples of deep fascia are fascia lata, fascia cruris, brachial fascia, plantar fascia, thoracolumbar fascia and Buck's fascia.
See also
Clavipectoral fascia
Endothoracic fascia
Extracellular matrix
Interstitial cell
Pectoral fascia
Thoracolumbar fascia
Fascia (architecture)
References
External links
Fascia Research
Connective tissue
Inflammation
Inflammation (from Latin inflammatio) is part of the biological response of body tissues to harmful stimuli, such as pathogens, damaged cells, or irritants. The five cardinal signs are heat, pain, redness, swelling, and loss of function (Latin calor, dolor, rubor, tumor, and functio laesa).
Inflammation is a generic response, and therefore is considered a mechanism of innate immunity, whereas adaptive immunity is specific to each pathogen.
Inflammation is a protective response involving immune cells, blood vessels, and molecular mediators. The function of inflammation is to eliminate the initial cause of cell injury, clear out damaged cells and tissues, and initiate tissue repair. Too little inflammation could lead to progressive tissue destruction by the harmful stimulus (e.g. bacteria) and compromise the survival of the organism. However, inflammation can also have negative effects. Too much inflammation, in the form of chronic inflammation, is associated with various diseases, such as hay fever, periodontal disease, atherosclerosis, and osteoarthritis.
Inflammation can be classified as acute or chronic. Acute inflammation is the initial response of the body to harmful stimuli, and is achieved by the increased movement of plasma and leukocytes (in particular granulocytes) from the blood into the injured tissues. A series of biochemical events propagates and matures the inflammatory response, involving the local vascular system, the immune system, and various cells in the injured tissue. Prolonged inflammation, known as chronic inflammation, leads to a progressive shift in the type of cells present at the site of inflammation, such as mononuclear cells, and involves simultaneous destruction and healing of the tissue.
Inflammation has also been classified as Type 1 and Type 2 based on the type of cytokines and helper T cells (Th1 and Th2) involved.
Meaning
The earliest known reference to the term inflammation dates to around the early 15th century. The word root comes from Old French inflammation, around the 14th century, which in turn comes from Latin inflammatio or inflammationem. Literally, the term relates to the word "flame", as the property of being "set on fire" or "to burn".
The term inflammation is not a synonym for infection. Infection describes the interaction between the action of microbial invasion and the reaction of the body's inflammatory response—the two components are considered together in discussion of infection, and the word is used to imply a microbial invasive cause for the observed inflammatory reaction. Inflammation, on the other hand, describes just the body's immunovascular response, regardless of cause. But, because the two are often correlated, words ending in the suffix -itis (which means inflammation) are sometimes informally described as referring to infection: for example, the word urethritis strictly means only "urethral inflammation", but clinical health care providers usually discuss urethritis as a urethral infection because urethral microbial invasion is the most common cause of urethritis. However, the inflammation–infection distinction is crucial in situations in pathology and medical diagnosis that involve inflammation that is not driven by microbial invasion, such as cases of atherosclerosis, trauma, ischemia, and autoimmune diseases (including type III hypersensitivity).
Causes
Types
Appendicitis
Bursitis
Colitis
Cystitis
Dermatitis
Epididymitis
Encephalitis
Gingivitis
Meningitis
Myelitis
Myocarditis
Nephritis
Neuritis
Pancreatitis
Periodontitis
Pharyngitis
Phlebitis
Prostatitis
RSD/CRPS
Rhinitis
Sinusitis
Tendonitis
Tonsillitis
Urethritis
Vasculitis
Vaginitis
Acute
Acute inflammation is a short-term process, usually appearing within a few minutes or hours, which begins to cease upon removal of the injurious stimulus. It involves the coordinated local mobilization of various immune, endocrine and neurological mediators of acute inflammation. In a normal healthy response, it becomes activated, clears the pathogen, begins a repair process, and then ceases.
Acute inflammation occurs immediately upon injury, lasting only a few days. Cytokines and chemokines promote the migration of neutrophils and macrophages to the site of inflammation. Pathogens, allergens, toxins, burns, and frostbite are some of the typical causes of acute inflammation. Toll-like receptors (TLRs) recognize microbial pathogens. Acute inflammation can be a defensive mechanism to protect tissues against injury. Inflammation lasting 2–6 weeks is designated subacute inflammation.
Cardinal signs
Inflammation is characterized by five cardinal signs (the traditional names of which come from Latin):
Dolor (pain)
Calor (heat)
Rubor (redness)
Tumor (swelling)
Functio laesa (loss of function)
The first four (classical signs) were described by the Roman encyclopaedist Celsus in the 1st century AD.
Pain is due to the release of chemicals such as bradykinin and histamine that stimulate nerve endings. (Acute inflammation of the lung (usually as in response to pneumonia) does not cause pain unless the inflammation involves the parietal pleura, which does have pain-sensitive nerve endings.) Heat and redness are due to increased blood flow at body core temperature to the inflamed site. Swelling is caused by accumulation of fluid.
Loss of function
The fifth sign, loss of function, is believed to have been added later by Galen, Thomas Sydenham or Rudolf Virchow. Examples of loss of function include pain that inhibits mobility, severe swelling that prevents movement, having a worse sense of smell during a cold, or having difficulty breathing when bronchitis is present. Loss of function has multiple causes.
Acute process
The process of acute inflammation is initiated by resident immune cells already present in the involved tissue, mainly resident macrophages, dendritic cells, histiocytes, Kupffer cells and mast cells. These cells possess surface receptors known as pattern recognition receptors (PRRs), which recognize (i.e., bind) two subclasses of molecules: pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs). PAMPs are compounds that are associated with various pathogens, but which are distinguishable from host molecules. DAMPs are compounds that are associated with host-related injury and cell damage.
At the onset of an infection, burn, or other injuries, these cells undergo activation (one of the PRRs recognizes a PAMP or DAMP) and release inflammatory mediators responsible for the clinical signs of inflammation. Vasodilation and its resulting increased blood flow cause the redness (rubor) and increased heat (calor). Increased permeability of the blood vessels results in an exudation (leakage) of plasma proteins and fluid into the tissue (edema), which manifests itself as swelling (tumor). Some of the released mediators such as bradykinin increase the sensitivity to pain (hyperalgesia, dolor). The mediator molecules also alter the blood vessels to permit the migration of leukocytes, mainly neutrophils and macrophages, to flow out of the blood vessels (extravasation) and into the tissue. The neutrophils migrate along a chemotactic gradient created by the local cells to reach the site of injury. The loss of function (functio laesa) is probably the result of a neurological reflex in response to pain.
In addition to cell-derived mediators, several acellular biochemical cascade systems—consisting of preformed plasma proteins—act in parallel to initiate and propagate the inflammatory response. These include the complement system activated by bacteria and the coagulation and fibrinolysis systems activated by necrosis (e.g., burn, trauma).
Acute inflammation may be regarded as the first line of defense against injury. Acute inflammatory response requires constant stimulation to be sustained. Inflammatory mediators are short-lived and are quickly degraded in the tissue. Hence, acute inflammation begins to cease once the stimulus has been removed.
Chronic
Chronic inflammation is inflammation that lasts for months or years. Macrophages, lymphocytes, and plasma cells predominate in chronic inflammation, in contrast to the neutrophils that predominate in acute inflammation. Diabetes, cardiovascular disease, allergies, and chronic obstructive pulmonary disease (COPD) are examples of diseases mediated by chronic inflammation. Obesity, smoking, stress and insufficient diet are some of the factors that promote chronic inflammation. A 2014 study reported that 60% of Americans had at least one chronic inflammatory condition, and 42% had more than one.
Cardinal signs
Common signs and symptoms that develop during chronic inflammation are:
Body pain, arthralgia, myalgia
Chronic fatigue and insomnia
Depression, anxiety and mood disorders
Gastrointestinal complications such as constipation, diarrhea, and acid reflux
Weight gain or loss
Frequent infections
Vascular component
Vasodilation and increased permeability
As defined, acute inflammation is an immunovascular response to inflammatory stimuli, which can include infection or trauma. This means acute inflammation can be broadly divided into a vascular phase that occurs first, followed by a cellular phase involving immune cells (more specifically myeloid granulocytes in the acute setting). The vascular component of acute inflammation involves the movement of plasma fluid, containing important proteins such as fibrin and immunoglobulins (antibodies), into inflamed tissue.
Upon contact with PAMPs, tissue macrophages and mastocytes release vasoactive amines such as histamine and serotonin, as well as eicosanoids such as prostaglandin E2 and leukotriene B4 to remodel the local vasculature. Macrophages and endothelial cells release nitric oxide. These mediators vasodilate and permeabilize the blood vessels, which results in the net distribution of blood plasma from the vessel into the tissue space. The increased collection of fluid into the tissue causes it to swell (edema). This exuded tissue fluid contains various antimicrobial mediators from the plasma such as complement, lysozyme, antibodies, which can immediately deal damage to microbes, and opsonise the microbes in preparation for the cellular phase. If the inflammatory stimulus is a lacerating wound, exuded platelets, coagulants, plasmin and kinins can clot the wounded area using vitamin K-dependent mechanisms and provide haemostasis in the first instance. These clotting mediators also provide a structural staging framework at the inflammatory tissue site in the form of a fibrin lattice – as would construction scaffolding at a construction site – for the purpose of aiding phagocytic debridement and wound repair later on. Some of the exuded tissue fluid is also funneled by lymphatics to the regional lymph nodes, flushing bacteria along to start the recognition and attack phase of the adaptive immune system.
Acute inflammation is characterized by marked vascular changes, including vasodilation, increased permeability and increased blood flow, which are induced by the actions of various inflammatory mediators. Vasodilation occurs first at the arteriole level, progressing to the capillary level, and brings about a net increase in the amount of blood present, causing the redness and heat of inflammation. Increased permeability of the vessels results in the movement of plasma into the tissues, with resultant stasis due to the increase in the concentration of the cells within blood – a condition characterized by enlarged vessels packed with cells. Stasis allows leukocytes to marginate (move) along the endothelium, a process critical to their recruitment into the tissues. Normal flowing blood prevents this, as the shearing force along the periphery of the vessels moves cells in the blood into the middle of the vessel.
Plasma cascade systems
The complement system, when activated, creates a cascade of chemical reactions that promotes opsonization, chemotaxis, and agglutination, and produces the membrane attack complex (MAC).
The kinin system generates proteins capable of sustaining vasodilation and other physical inflammatory effects.
The coagulation system, or clotting cascade, forms a protective protein mesh over sites of injury.
The fibrinolysis system acts in opposition to the coagulation system, counterbalancing clotting and generating several other inflammatory mediators.
Plasma-derived mediators
Cellular component
The cellular component involves leukocytes, which normally reside in blood and must move into the inflamed tissue via extravasation to aid in inflammation. Some act as phagocytes, ingesting bacteria, viruses, and cellular debris. Others release enzymatic granules that damage pathogenic invaders. Leukocytes also release inflammatory mediators that develop and maintain the inflammatory response. In general, acute inflammation is mediated by granulocytes, whereas chronic inflammation is mediated by mononuclear cells such as monocytes and lymphocytes.
Leukocyte extravasation
Various leukocytes, particularly neutrophils, are critically involved in the initiation and maintenance of inflammation. These cells must be able to move to the site of injury from their usual location in the blood; therefore, mechanisms exist to recruit and direct leukocytes to the appropriate place. The process of leukocyte movement from the blood to the tissues through the blood vessels is known as extravasation and can be broadly divided into a number of steps:
Leukocyte margination and endothelial adhesion: The white blood cells within the vessels, which are generally centrally located, move peripherally towards the walls of the vessels. Activated macrophages in the tissue release cytokines such as IL-1 and TNFα, which in turn lead to the production of chemokines that bind to proteoglycans, forming a gradient in the inflamed tissue and along the endothelial wall. Inflammatory cytokines induce the immediate expression of P-selectin on endothelial cell surfaces, and P-selectin binds weakly to carbohydrate ligands on the surface of leukocytes and causes them to "roll" along the endothelial surface as bonds are made and broken. Cytokines released from injured cells induce the expression of E-selectin on endothelial cells, which functions similarly to P-selectin. Cytokines also induce the expression of integrin ligands such as ICAM-1 and VCAM-1 on endothelial cells, which mediate the adhesion and further slow leukocytes down. These weakly bound leukocytes are free to detach if not activated by chemokines produced in injured tissue after signal transduction via respective G protein-coupled receptors that activates integrins on the leukocyte surface for firm adhesion. Such activation increases the affinity of bound integrin receptors for ICAM-1 and VCAM-1 on the endothelial cell surface, firmly binding the leukocytes to the endothelium.
Migration across the endothelium, known as transmigration, via the process of diapedesis: Chemokine gradients stimulate the adhered leukocytes to move between adjacent endothelial cells. The endothelial cells retract and the leukocytes pass through the basement membrane into the surrounding tissue using adhesion molecules such as ICAM-1.
Movement of leukocytes within the tissue via chemotaxis: Leukocytes reaching the tissue interstitium bind to extracellular matrix proteins via expressed integrins and CD44 to prevent them from leaving the site. A variety of molecules behave as chemoattractants, for example, C3a or C5a (the anaphylatoxins), and cause the leukocytes to move along a chemotactic gradient towards the source of inflammation.
Phagocytosis
Extravasated neutrophils in the cellular phase come into contact with microbes at the inflamed tissue. Phagocytes express cell-surface endocytic pattern recognition receptors (PRRs) that have affinity for non-specific pathogen-associated molecular patterns (PAMPs). Most PAMPs that bind to endocytic PRRs and initiate phagocytosis are cell wall components, including complex carbohydrates such as mannans and β-glucans, lipopolysaccharides (LPS), peptidoglycans, and surface proteins. Endocytic PRRs on phagocytes reflect these molecular patterns, with C-type lectin receptors binding to mannans and β-glucans, and scavenger receptors binding to LPS.
Upon endocytic PRR binding, actin-myosin cytoskeletal rearrangement adjacent to the plasma membrane occurs in a way that endocytoses the plasma membrane containing the PRR-PAMP complex, along with the microbe. Phosphatidylinositol and Vps34-Vps15-Beclin1 signalling pathways have been implicated in trafficking the endocytosed phagosome to intracellular lysosomes, where fusion of the phagosome and the lysosome produces a phagolysosome. Reactive oxygen species, superoxide, and hypochlorite within the phagolysosome then kill microbes inside the phagocyte.
Phagocytic efficacy can be enhanced by opsonization. Plasma derived complement C3b and antibodies that exude into the inflamed tissue during the vascular phase bind to and coat the microbial antigens. As well as endocytic PRRs, phagocytes also express opsonin receptors Fc receptor and complement receptor 1 (CR1), which bind to antibodies and C3b, respectively. The co-stimulation of endocytic PRR and opsonin receptor increases the efficacy of the phagocytic process, enhancing the lysosomal elimination of the infective agent.
Cell-derived mediators
Morphologic patterns
Specific patterns of acute and chronic inflammation are seen during particular situations that arise in the body, such as when inflammation occurs on an epithelial surface, or pyogenic bacteria are involved.
Granulomatous inflammation: Characterised by the formation of granulomas, they are the result of a limited but diverse number of diseases, which include among others tuberculosis, leprosy, sarcoidosis, and syphilis.
Fibrinous inflammation: Inflammation resulting in a large increase in vascular permeability allows fibrin to pass through the blood vessels. If an appropriate procoagulative stimulus is present, such as cancer cells, a fibrinous exudate is deposited. This is commonly seen in serous cavities, where the conversion of fibrinous exudate into a scar can occur between serous membranes, limiting their function. The deposit sometimes forms a pseudomembrane sheet. During inflammation of the intestine (pseudomembranous colitis), pseudomembranous tubes can be formed.
Purulent inflammation: Inflammation resulting in large amount of pus, which consists of neutrophils, dead cells, and fluid. Infection by pyogenic bacteria such as staphylococci is characteristic of this kind of inflammation. Large, localised collections of pus enclosed by surrounding tissues are called abscesses.
Serous inflammation: Characterised by the copious effusion of non-viscous serous fluid, commonly produced by mesothelial cells of serous membranes, but may be derived from blood plasma. Skin blisters exemplify this pattern of inflammation.
Ulcerative inflammation: Inflammation occurring near an epithelium can result in the necrotic loss of tissue from the surface, exposing lower layers. The subsequent excavation in the epithelium is known as an ulcer.
Disorders
Inflammatory abnormalities are a large group of disorders that underlie a vast variety of human diseases. The immune system is often involved with inflammatory disorders, as demonstrated in both allergic reactions and some myopathies, with many immune system disorders resulting in abnormal inflammation. Non-immune diseases with causal origins in inflammatory processes include cancer, atherosclerosis, and ischemic heart disease.
Examples of disorders associated with inflammation include:
Acne vulgaris
Asthma
Autoimmune diseases
Autoinflammatory diseases
Celiac disease
Chronic prostatitis
Colitis
Diverticulitis
Familial Mediterranean Fever
Glomerulonephritis
Hidradenitis suppurativa
Hypersensitivities
Inflammatory bowel diseases
Interstitial cystitis
Lichen planus
Mast Cell Activation Syndrome
Mastocytosis
Otitis
Pelvic inflammatory disease
Peripheral ulcerative keratitis
Pneumonia
Reperfusion injury
Rheumatic fever
Rheumatoid arthritis
Rhinitis
Sarcoidosis
Transplant rejection
Vasculitis
Atherosclerosis
Atherosclerosis, formerly considered a lipid storage disease, is now understood as a chronic inflammatory condition involving the arterial walls. Research has established a fundamental role for inflammation in mediating all stages of atherosclerosis from initiation through progression and, ultimately, the thrombotic complications from it.
These new findings reveal links between traditional risk factors like cholesterol levels and the underlying mechanisms of atherogenesis. Clinical studies have shown that this emerging biology of inflammation in atherosclerosis applies directly to people. Elevation in markers of inflammation predicts outcomes of people with acute coronary syndromes, independently of myocardial damage. In addition, low-grade chronic inflammation, as indicated by levels of the inflammatory marker C-reactive protein, prospectively defines risk of atherosclerotic complications, thus adding to prognostic information provided by traditional risk factors. Moreover, certain treatments that reduce coronary risk also limit inflammation. In the case of lipid lowering with statins, the anti-inflammatory effect does not appear to correlate with reduction in low-density lipoprotein levels. These new insights into inflammation contribute to understanding the etiology of atherosclerosis and have practical clinical applications in risk stratification and the targeting of therapy for atherosclerosis.
Allergy
An allergic reaction, formally known as type 1 hypersensitivity, is the result of an inappropriate immune response triggering inflammation, vasodilation, and nerve irritation. A common example is hay fever, which is caused by a hypersensitive response by mast cells to allergens. Pre-sensitised mast cells respond by degranulating, releasing vasoactive chemicals such as histamine. These chemicals propagate an excessive inflammatory response characterised by blood vessel dilation, production of pro-inflammatory molecules, cytokine release, and recruitment of leukocytes. Severe inflammatory response may mature into a systemic response known as anaphylaxis.
Myopathies
Inflammatory myopathies are caused by the immune system inappropriately attacking components of muscle, leading to signs of muscle inflammation. They may occur in conjunction with other immune disorders, such as systemic sclerosis, and include dermatomyositis, polymyositis, and inclusion body myositis.
Leukocyte defects
Due to the central role of leukocytes in the development and propagation of inflammation, defects in leukocyte functionality often result in a decreased capacity for inflammatory defense with subsequent vulnerability to infection. Dysfunctional leukocytes may be unable to correctly bind to blood vessels due to surface receptor mutations, digest bacteria (Chédiak–Higashi syndrome), or produce microbicides (chronic granulomatous disease). In addition, diseases affecting the bone marrow may result in abnormal or few leukocytes.
Pharmacological
Certain drugs or exogenous chemical compounds are known to affect inflammation. Vitamin A deficiency, for example, causes an increase in inflammatory responses, and anti-inflammatory drugs work specifically by inhibiting the enzymes that produce inflammatory eicosanoids. Additionally, certain illicit drugs such as cocaine and ecstasy may exert some of their detrimental effects by activating transcription factors intimately involved with inflammation (e.g. NF-κB).
Cancer
Inflammation orchestrates the microenvironment around tumours, contributing to proliferation, survival and migration. Cancer cells use selectins, chemokines and their receptors for invasion, migration and metastasis. On the other hand, many cells of the immune system contribute to cancer immunology, suppressing cancer.
Molecular intersection between receptors of steroid hormones, which have important effects on cellular development, and transcription factors that play key roles in inflammation, such as NF-κB, may mediate some of the most critical effects of inflammatory stimuli on cancer cells. This capacity of a mediator of inflammation to influence the effects of steroid hormones in cells is very likely to affect carcinogenesis. On the other hand, due to the modular nature of many steroid hormone receptors, this interaction may offer ways to interfere with cancer progression, through targeting of a specific protein domain in a specific cell type. Such an approach may limit side effects that are unrelated to the tumor of interest, and may help preserve vital homeostatic functions and developmental processes in the organism.
A 2009 review suggested that cancer-related inflammation (CRI) may lead to the accumulation of random genetic alterations in cancer cells.
Role in cancer
In 1863, Rudolf Virchow hypothesized that the origin of cancer was at sites of chronic inflammation. As of 2012, chronic inflammation was estimated to contribute to approximately 15% to 25% of human cancers.
Mediators and DNA damage in cancer
An inflammatory mediator is a messenger that acts on blood vessels and/or cells to promote an inflammatory response. Inflammatory mediators that contribute to neoplasia include prostaglandins, inflammatory cytokines such as IL-1β, TNF-α, IL-6 and IL-15 and chemokines such as IL-8 and GRO-alpha. These inflammatory mediators, and others, orchestrate an environment that fosters proliferation and survival.
Inflammation also causes DNA damage due to the induction of reactive oxygen species (ROS) by various intracellular inflammatory mediators. In addition, leukocytes and other phagocytic cells attracted to the site of inflammation induce DNA damage in proliferating cells through their generation of ROS and reactive nitrogen species (RNS). ROS and RNS are normally produced by these cells to fight infection. ROS, alone, cause more than 20 types of DNA damage. Oxidative DNA damage causes both mutations and epigenetic alterations. RNS also cause mutagenic DNA damage.
A normal cell may undergo carcinogenesis to become a cancer cell if it is frequently subjected to DNA damage during long periods of chronic inflammation. DNA damage may cause genetic mutations due to inaccurate repair. In addition, mistakes in the DNA repair process may cause epigenetic alterations. Mutations and epigenetic alterations that are replicated and provide a selective advantage during somatic cell proliferation may be carcinogenic.
Genome-wide analyses of human cancer tissues reveal that a single typical cancer cell may possess roughly 100 mutations in coding regions, 10–20 of which are "driver mutations" that contribute to cancer development. However, chronic inflammation also causes epigenetic changes, such as DNA methylation, which are often more common than mutations. Typically, several hundreds to thousands of genes are methylated in a cancer cell (see DNA methylation in cancer). Sites of oxidative damage in chromatin can recruit complexes that contain DNA methyltransferases (DNMTs), a histone deacetylase (SIRT1), and a histone methyltransferase (EZH2), and thus induce DNA methylation. DNA methylation of a CpG island in a promoter region may cause silencing of its downstream gene (see CpG site and regulation of transcription in cancer). DNA repair genes, in particular, are frequently inactivated by methylation in various cancers (see hypermethylation of DNA repair genes in cancer). A 2018 report evaluated the relative importance of mutations and epigenetic alterations in progression to two different types of cancer. This report showed that epigenetic alterations were much more important than mutations in generating gastric cancers (associated with inflammation). However, mutations and epigenetic alterations were of roughly equal importance in generating esophageal squamous cell cancers (associated with tobacco chemicals and acetaldehyde, a product of alcohol metabolism).
HIV and AIDS
It has long been recognized that infection with HIV is characterized not only by development of profound immunodeficiency but also by sustained inflammation and immune activation. A substantial body of evidence implicates chronic inflammation as a critical driver of immune dysfunction, premature appearance of aging-related diseases, and immune deficiency. Many now regard HIV infection not only as an evolving virus-induced immunodeficiency, but also as a chronic inflammatory disease. Even after the introduction of effective antiretroviral therapy (ART) and effective suppression of viremia in HIV-infected individuals, chronic inflammation persists. Animal studies also support the relationship between immune activation and progressive cellular immune deficiency: SIVsm infection of its natural nonhuman primate host, the sooty mangabey, causes high-level viral replication but limited evidence of disease. This lack of pathogenicity is accompanied by a lack of inflammation, immune activation and cellular proliferation. In sharp contrast, experimental SIVsm infection of rhesus macaques produces immune activation and AIDS-like disease with many parallels to human HIV infection.
Delineating how CD4 T cells are depleted and how chronic inflammation and immune activation are induced lies at the heart of understanding HIV pathogenesis, one of the top priorities for HIV research by the Office of AIDS Research, National Institutes of Health. Recent studies demonstrated that caspase-1-mediated pyroptosis, a highly inflammatory form of programmed cell death, drives CD4 T-cell depletion and inflammation by HIV. These are the two signature events that propel HIV disease progression to AIDS. Pyroptosis appears to create a pathogenic vicious cycle in which dying CD4 T cells and other immune cells (including macrophages and neutrophils) release inflammatory signals that recruit more cells into the infected lymphoid tissues to die. The feed-forward nature of this inflammatory response produces chronic inflammation and tissue injury. Identifying pyroptosis as the predominant mechanism that causes CD4 T-cell depletion and chronic inflammation provides novel therapeutic opportunities, namely targeting caspase-1, which controls the pyroptotic pathway. In this regard, pyroptosis of CD4 T cells and secretion of pro-inflammatory cytokines such as IL-1β and IL-18 can be blocked in HIV-infected human lymphoid tissues by addition of the caspase-1 inhibitor VX-765, which has already proven to be safe and well tolerated in phase II human clinical trials. These findings could propel development of an entirely new class of "anti-AIDS" therapies that act by targeting the host rather than the virus. Such agents would almost certainly be used in combination with ART. By promoting "tolerance" of the virus instead of suppressing its replication, VX-765 or related drugs may mimic the evolutionary solutions occurring in multiple monkey hosts (e.g. the sooty mangabey) infected with species-specific lentiviruses that have led to a lack of disease, no decline in CD4 T-cell counts, and no chronic inflammation.
Resolution
The inflammatory response must be actively terminated when no longer needed to prevent unnecessary "bystander" damage to tissues. Failure to do so results in chronic inflammation and cellular destruction. Resolution of inflammation occurs by different mechanisms in different tissues.
Mechanisms that serve to terminate inflammation include:
Connection to depression
There is evidence for a link between inflammation and depression. Inflammatory processes can be triggered by negative cognitions or their consequences, such as stress, violence, or deprivation. Thus, negative cognitions can cause inflammation that can, in turn, lead to depression.
In addition, there is increasing evidence that inflammation can cause depression because of the increase of cytokines, setting the brain into a "sickness mode".
Classical symptoms of being physically sick, such as lethargy, show a large overlap with the behaviors that characterize depression. Levels of cytokines tend to increase sharply during the depressive episodes of people with bipolar disorder and drop off during remission. Furthermore, clinical trials have shown that anti-inflammatory medicines taken in addition to antidepressants not only significantly improve symptoms but also increase the proportion of subjects responding positively to treatment.
Inflammation that leads to serious depression could be caused by common infections, such as those caused by viruses, bacteria, or even parasites.
Connection to delirium
There is evidence for a link between inflammation and delirium based on the results of a longitudinal study investigating C-reactive protein (CRP) in COVID-19 patients.
Systemic effects
An infectious organism can escape the confines of the immediate tissue via the circulatory system or lymphatic system, where it may spread to other parts of the body. If an organism is not contained by the actions of acute inflammation, it may gain access to the lymphatic system via nearby lymph vessels. An infection of the lymph vessels is known as lymphangitis, and infection of a lymph node is known as lymphadenitis. When lymph nodes cannot destroy all pathogens, the infection spreads further. A pathogen can gain access to the bloodstream through lymphatic drainage into the circulatory system.
When inflammation overwhelms the host, systemic inflammatory response syndrome is diagnosed. When it is due to infection, the term sepsis is applied, with the term bacteremia applied specifically to bacterial sepsis and viremia to viral sepsis. Vasodilation and organ dysfunction are serious problems associated with widespread infection that may lead to septic shock and death.
Acute-phase proteins
Inflammation also is characterized by high systemic levels of acute-phase proteins. In acute inflammation, these proteins prove beneficial; however, in chronic inflammation, they can contribute to amyloidosis. These proteins include C-reactive protein, serum amyloid A, and serum amyloid P, which cause a range of systemic effects including:
Fever
Increased blood pressure
Decreased sweating
Malaise
Loss of appetite
Somnolence
Leukocyte numbers
Inflammation often affects the numbers of leukocytes present in the body:
Leukocytosis is often seen during inflammation induced by infection, where it results in a large increase in the number of leukocytes in the blood, especially immature cells. Leukocyte numbers usually increase to between 15,000 and 20,000 cells per microliter, but in extreme cases they can approach 100,000 cells per microliter (an illustrative sketch of these thresholds appears below). Bacterial infection usually results in an increase of neutrophils, creating neutrophilia, whereas diseases such as asthma, hay fever, and parasite infestation result in an increase in eosinophils, creating eosinophilia.
Leukopenia can be induced by certain infections and diseases, including viral infection, Rickettsia infection, some protozoa, tuberculosis, and some cancers.
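To make the count ranges quoted above concrete, here is a minimal, purely illustrative Python sketch that buckets a white-blood-cell count (in cells per microliter) using those figures. The function name and the cut-offs chosen for the "normal" and "leukopenic" ranges are assumptions added for illustration; they are not validated clinical reference ranges.

```python
def classify_wbc_count(cells_per_microliter: float) -> str:
    """Roughly bucket a white-blood-cell count using the figures quoted above.

    Purely illustrative; real laboratories use validated reference ranges
    and clinical context, not a simple threshold function.
    """
    if cells_per_microliter < 4_000:       # assumed lower bound (leukopenia)
        return "leukopenia (abnormally low count)"
    if cells_per_microliter <= 11_000:     # assumed upper bound of a typical normal range
        return "within a typical normal range"
    if cells_per_microliter <= 20_000:     # 15,000-20,000 quoted for infection-induced inflammation
        return "leukocytosis, as commonly seen in inflammation induced by infection"
    return "marked leukocytosis (extreme cases may approach 100,000)"


# Example usage
for count in (6_000, 17_500, 95_000):
    print(count, "->", classify_wbc_count(count))
```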
Interleukins and obesity
With the discovery of interleukins (IL), the concept of systemic inflammation developed. Although the processes involved are identical to tissue inflammation, systemic inflammation is not confined to a particular tissue but involves the endothelium and other organ systems.
Chronic inflammation is widely observed in obesity. Obese people commonly have many elevated markers of inflammation, including:
IL-6 (Interleukin-6)
Low-grade chronic inflammation is characterized by a two- to threefold increase in the systemic concentrations of cytokines such as TNF-α, IL-6, and CRP. Waist circumference correlates significantly with systemic inflammatory response.
Loss of white adipose tissue reduces levels of inflammation markers. As of 2017 the association of systemic inflammation with insulin resistance and type 2 diabetes, and with atherosclerosis was under preliminary research, although rigorous clinical trials had not been conducted to confirm such relationships.
C-reactive protein (CRP) is generated at a higher level in obese people, and may increase the risk for cardiovascular diseases.
Outcomes
The outcome in a particular circumstance is determined by the tissue in which the injury has occurred and by the injurious agent that is causing it. The possible outcomes of inflammation are:
Resolution: The complete restoration of the inflamed tissue back to a normal status. Inflammatory measures such as vasodilation, chemical production, and leukocyte infiltration cease, and damaged parenchymal cells regenerate. This is usually the outcome when limited or short-lived inflammation has occurred.
Fibrosis: When there has been substantial tissue destruction, or when damage occurs in tissues unable to regenerate, the body cannot restore the tissue completely. Fibrous scarring occurs in these areas of damage, forming a scar composed primarily of collagen. The scar will not contain any specialized structures, such as parenchymal cells, so functional impairment may occur.
Abscess formation: A cavity is formed containing pus, an opaque liquid containing dead white blood cells and bacteria, together with general debris from destroyed cells.
Chronic inflammation: If the injurious agent of acute inflammation persists, chronic inflammation will ensue. This process, marked by inflammation lasting many days, months or even years, may lead to the formation of a chronic wound. Chronic inflammation is characterised by the dominating presence of macrophages in the injured tissue. These cells are powerful defensive agents of the body, but the toxins they release, including reactive oxygen species, are injurious to the organism's own tissues as well as invading agents. As a consequence, chronic inflammation is almost always accompanied by tissue destruction.
Examples
Inflammation is usually indicated by adding the suffix -itis to the name of the affected organ or tissue. However, some conditions, such as asthma and pneumonia, do not follow this convention. More examples are available at List of types of inflammation.
See also
Notes
References
External links
Immunology
Animal physiology
Inflammations
Human physiology | 0.775109 | 0.999116 | 0.774424 |
Vitality | Vitality is the capacity to live, grow, or develop. Vitality is also the characteristic that distinguishes living from non-living things. To experience vitality is regarded as a basic psychological drive and, in philosophy, a component of the will to live. As such, people seek to maximize their vitality or their experience of vitality, that which corresponds to an enhanced physiological capacity and mental state.
Overview
The pursuit and maintenance of health and vitality have been at the forefront of medicine and natural philosophy throughout history. Life depends upon various biological processes known as vital processes. Historically, these vital processes have been viewed as having either mechanistic or non-mechanistic causes. The latter point of view is characteristic of vitalism, the doctrine that the phenomena of life cannot be explained by purely chemical and physical mechanisms.
Prior to the 19th century, theoreticians often held that human lifespan had been less limited in the past, and that aging was due to a loss of, and failure to maintain, vitality.
A commonly held view was that people are born with finite vitality, which diminishes over time until illness and debility set in, and finally death.
Religion
In traditional cultures, the capacity for life is often directly equated with the breath or the life force. This can be found in the Hindu concept of prana, where vitality in the body derives from a subtle principle in the air and in food, as well as in Hebrew and ancient Greek texts.
Jainism
Vitality and DNA damage
Low vitality or fatigue is a common complaint by older patients and may reflect an underlying medical illness. Vitality level was measured in 2,487 Copenhagen patients using a standardized, subjective, self-reported vitality scale and was found to be inversely related to DNA damage (as measured in peripheral blood mononuclear cells). DNA damage indicates cellular dysfunction.
See also
Urban vitality
Vitalism
References
Jain philosophical concepts
Natural philosophy
Philosophy of life
Quality of life | 0.78821 | 0.982506 | 0.774421 |
Delirium | Delirium (formerly acute confusional state, an ambiguous term which is now discouraged) is a specific state of acute confusion attributable to the direct physiological consequence of a medical condition, effects of a psychoactive substance, or multiple causes, which usually develops over the course of hours to days. As a syndrome, delirium presents with disturbances in attention, awareness, and higher-order cognition. People with delirium may experience other neuropsychiatric disturbances, including changes in psychomotor activity (e.g. hyperactive, hypoactive, or mixed level of activity), disrupted sleep-wake cycle, emotional disturbances, disturbances of consciousness or an altered state of consciousness, as well as perceptual disturbances (e.g. hallucinations and delusions), although these features are not required for diagnosis.
Diagnostically, delirium encompasses both the syndrome of acute confusion and its underlying organic process known as an acute encephalopathy. The cause of delirium may be either a disease process inside the brain or a process outside the brain that nonetheless affects the brain. Delirium may be the result of an underlying medical condition (e.g., infection or hypoxia), side effect of a medication, substance intoxication (e.g., opioids or hallucinogenic deliriants), substance withdrawal (e.g., alcohol or sedatives), or from multiple factors affecting one's overall health (e.g., malnutrition, pain, etc.). In contrast, the emotional and behavioral features due to primary psychiatric disorders (e.g., as in schizophrenia, bipolar disorder) do not meet the diagnostic criteria for 'delirium'.
Delirium may be difficult to diagnose without first establishing a person's usual mental function or 'cognitive baseline'. It is often confused with multiple psychiatric disorders or chronic organic brain syndromes, such as dementia, depression, schizophrenia, and psychosis, because of the many overlapping signs and symptoms. Delirium may also occur in persons with existing mental illness, baseline intellectual disability, or dementia, without being due to any of these conditions.
Treatment of delirium requires identifying and managing the underlying causes, managing delirium symptoms, and reducing the risk of complications. In some cases, temporary or symptomatic treatments are used to comfort the person or to facilitate other care (e.g., preventing people from pulling out a breathing tube). Antipsychotics are not supported for the treatment or prevention of delirium among those who are in hospital; however, they may be used in cases where a person has distressing experiences such as hallucinations or if the person poses a danger to themselves or others. When delirium is caused by alcohol or sedative-hypnotic withdrawal, benzodiazepines are typically used as a treatment. There is evidence that the risk of delirium in hospitalized people can be reduced by non-pharmacological care bundles. According to the text of DSM-5-TR, although delirium affects only 1–2% of the overall population, 18–35% of adults presenting to the hospital will have delirium, and delirium will occur in 29–65% of people who are hospitalized. Delirium occurs in 11–51% of older adults after surgery, in 81% of those in the ICU, and in 20–22% of individuals in nursing homes or post-acute care settings. Among those requiring critical care, delirium is a risk factor for death within the next year.
Because delirium shares signs and symptoms with other neuropsychiatric disorders such as schizophrenia and psychosis, it can be difficult to treat, and treatment with the wrong medications following misdiagnosis can even cause the death of the patient.
Definition
In common usage, delirium can refer to drowsiness, agitation, disorientation, or hallucinations. In medical terminology, however, the core features of delirium include an acute disturbance in attention, awareness, and global cognition.
Although slight differences exist between the definitions of delirium in the DSM-5-TR and ICD-10, the core features are broadly the same. In 2022, the American Psychiatric Association released the fifth edition text revision of the DSM (DSM-5-TR) with the following criteria for diagnosis:
A. Disturbance in attention and awareness. This is a required symptom and involves easy distraction, inability to maintain attentional focus, and varying levels of alertness.
B. Onset is acute (from hours to days), representing a change from baseline mentation and often with fluctuations throughout the day
C. At least one additional cognitive disturbance (in memory, orientation, language, visuospatial ability, or perception)
D. The disturbances (criteria A and C) are not better explained by another neurocognitive disorder
E. There is evidence that the disturbances above are a "direct physiological consequence" of another medical condition, substance intoxication or withdrawal, toxin, or various combinations of causes
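The five criteria above have a simple logical structure: A and C describe the required disturbances, B the time course, D an exclusion, and E the causal attribution. The following Python sketch encodes that structure as boolean checks. It is only an informal summary of how the criteria combine, not a diagnostic instrument, and the data class and field names are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class Assessment:
    """Hypothetical structured findings from a clinical assessment."""
    attention_awareness_disturbed: bool       # criterion A
    acute_onset_and_fluctuating: bool         # criterion B
    additional_cognitive_disturbance: bool    # criterion C (memory, orientation, language, ...)
    better_explained_by_other_disorder: bool  # criterion D (exclusion)
    physiological_cause_identified: bool      # criterion E (medical condition, substance, toxin, ...)


def meets_dsm5tr_delirium_criteria(a: Assessment) -> bool:
    """Return True only if all five criteria, as summarized above, are satisfied."""
    return (
        a.attention_awareness_disturbed
        and a.acute_onset_and_fluctuating
        and a.additional_cognitive_disturbance
        and not a.better_explained_by_other_disorder
        and a.physiological_cause_identified
    )
```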
Signs and symptoms
Delirium exists across a range of arousal levels, either as a state between normal wakefulness/alertness and coma (hypoactive) or as a state of heightened psychophysiological arousal (hyperactive). It can also alternate between the two (mixed level of activity). While requiring an acute disturbance in attention, awareness, and cognition, the syndrome of delirium encompasses a broad range of additional neuropsychiatric disturbances.
Inattention: A disturbance in attention is required for delirium diagnosis. This may present as an impaired ability to direct, focus, sustain, or shift attention.
Memory impairment: The memory impairment that occurs in delirium is often due to an inability to encode new information, largely as a result of having impaired attention. Older memories already in storage are retained without need of concentration, so previously formed long-term memories (i.e., those formed before the onset of delirium) are usually preserved in all but the most severe cases of delirium, though recall of such information may be impaired due to global impairment in cognition.
Disorientation: A person may be disoriented to self, place, or time. Additionally, a person may be 'disoriented to situation' and not recognize their environment or appreciate what is going on around them.
Disorganized thinking: Disorganized thinking is usually noticed with speech that makes limited sense with apparent irrelevancies, and can involve poverty of speech, loose associations, perseveration, tangentiality, and other signs of a formal thought disorder.
Language disturbances: Anomic aphasia, paraphasia, impaired comprehension, agraphia, and word-finding difficulties all involve impairment of linguistic information processing.
Sleep/wake disturbances: Sleep disturbances in delirium reflect disruption in both sleep/wake and circadian rhythm regulation, typically characterized by fragmented sleep or even sleep-wake cycle reversal (i.e., active at night, sleeping during the day), including as an early sign preceding the onset of delirium.
Psychotic and other erroneous beliefs: Symptoms of psychosis include suspiciousness, overvalued ideation and frank delusions. Delusions are typically poorly formed and less stereotyped than in schizophrenia or Alzheimer's disease. They usually relate to persecutory themes of impending danger or threat in the immediate environment (e.g., being poisoned by nurses).
Perceptual disturbances: These can include illusions, which involve the misperception of real stimuli in the environment, or hallucinations, which involve the perception of stimuli that do not exist.
Mood lability: Distortions to perceived or communicated emotional states as well as fluctuating emotional states can manifest in delirium (e.g., rapid changes between terror, sadness, joking, fear, anger, and frustration).
Motor activity changes: Delirium has been commonly classified into psychomotor subtypes of hypoactive, hyperactive, and mixed level of activity, though studies are inconsistent as to their prevalence. Hypoactive cases are prone to non-detection or misdiagnosis as depression. A range of studies suggests that motor subtypes differ regarding underlying pathophysiology, treatment needs, functional prognosis, and risk of mortality, though inconsistent subtype definitions and poorer detection of hypoactive subtypes may influence the interpretation of these findings. The notion of unifying hypoactive and hyperactive states under the construct of delirium is commonly attributed to Lipowski.
Hyperactive symptoms include hyper-vigilance, restlessness, fast or loud speech, irritability, combativeness, impatience, swearing, singing, laughing, uncooperativeness, euphoria, anger, wandering, easy startling, fast motor responses, distractibility, tangentiality, nightmares, and persistent thoughts (hyperactive sub-typing is defined with at least three of the above).
Hypoactive symptoms include decreased alertness, sparse or slow speech, lethargy, slowed movements, staring, and apathy.
Mixed level of activity describes instances of delirium where activity level is either normal or fluctuating between hyperactive and hypoactive.
Causes
Delirium arises through the interaction of a number of predisposing and precipitating factors.
Individuals with multiple and/or significant predisposing factors are at high risk for an episode of delirium after even a single and/or mild precipitating factor. Conversely, delirium may result in low-risk individuals only if they experience serious or multiple precipitating factors. These factors can change over time; thus, an individual's risk of delirium is modifiable.
Predisposing factors
Important predisposing factors include the following:
65 or more years of age
Cognitive impairment/dementia
Physical morbidity (e.g., biventricular failure, cancer, cerebrovascular disease)
Psychiatric morbidity (e.g., depression)
Sensory impairment (i.e., vision and hearing)
Functional dependence (e.g., requiring assistance for self-care or mobility)
Dehydration/malnutrition
Substance use disorder, including alcohol use disorder
Precipitating factors
Any serious, acute biological factor that affects neurotransmitter, neuroendocrine, or neuroinflammatory pathways can precipitate an episode of delirium in a vulnerable brain. Certain elements of the clinical environment have also been associated with the risk of developing delirium. Some of the most common precipitating factors are listed below:
Prolonged sleep restriction or deprivation
Environmental, psychophysiological stress (as found in acute care settings)
Inadequately controlled pain
Immobilization, use of physical restraints
Urinary retention, use of bladder catheter
Emotional stress
Severe constipation/fecal impaction
Medications
Sedatives (benzodiazepines, opioids), anticholinergics, dopaminergics, corticosteroids, polypharmacy
General anesthetic
Substance intoxication or withdrawal
Primary neurologic conditions
Severe drop in blood pressure, relative to the person's normal blood pressure (orthostatic hypotension) resulting in inadequate blood flow to the brain (cerebral hypoperfusion)
Stroke/transient ischemic attack (TIA)
Intracranial bleeding
Meningitis, encephalitis
Concurrent illness
Infections – especially respiratory (e.g. pneumonia, COVID-19) and urinary tract infections
Iatrogenic complications
Hypoxia, hypercapnia, anemia
Poor nutritional status, dehydration, electrolyte imbalances, hypoglycemia
Shock, heart attacks, heart failure
Metabolic derangements (e.g. SIADH, Addison's disease, hyperthyroidism)
Chronic/terminal illness (e.g. cancer)
Post-traumatic event (e.g. fall, fracture)
Mercury poisoning (e.g. erethism)
Major surgery (e.g. cardiac, orthopedic, vascular surgery)
Pathophysiology
The pathophysiology of delirium is still not well understood, despite extensive research.
Animal models
The lack of animal models that are relevant to delirium has left many key questions in delirium pathophysiology unanswered. Earliest rodent models of delirium used atropine (a muscarinic acetylcholine receptor blocker) to induce cognitive and electroencephalography (EEG) changes similar to delirium, and other anticholinergic drugs, such as biperiden and hyoscine, have produced similar effects. Along with clinical studies using various drugs with anticholinergic activity, these models have contributed to a "cholinergic deficiency hypothesis" of delirium.
Profound systemic inflammation occurring during sepsis is also known to cause delirium (often termed sepsis-associated encephalopathy). Animal models used to study the interactions between prior degenerative disease and overlying systemic inflammation have shown that even mild systemic inflammation causes acute and transient deficits in working memory among diseased animals. Prior dementia or age-associated cognitive impairment is the primary predisposing factor for clinical delirium and "prior pathology" as defined by these new animal models may consist of synaptic loss, abnormal network connectivity, and "primed microglia" brain macrophages stimulated by prior neurodegenerative disease and aging to amplify subsequent inflammatory responses in the central nervous system (CNS).
Cerebrospinal fluid
Studies of cerebrospinal fluid (CSF) in delirium are difficult to perform. Apart from the general difficulty of recruiting participants who are often unable to give consent, the inherently invasive nature of CSF sampling makes such research particularly challenging. However, a few studies have managed to sample CSF from persons undergoing spinal anesthesia for elective or emergency surgery.
A 2018 systematic review showed that, broadly, delirium may be associated with neurotransmitter imbalance (namely serotonin and dopamine signaling), a reversible fall in somatostatin, and increased cortisol. The leading "neuroinflammatory hypothesis" (where neurodegenerative disease and aging lead the brain to respond to peripheral inflammation with an exaggerated CNS inflammatory response) has been described, but current evidence is still conflicting and fails to concretely support this hypothesis.
Neuroimaging
Neuroimaging provides an important avenue to explore the mechanisms that are responsible for delirium. Despite progress in the development of magnetic resonance imaging (MRI), the large variety in imaging-based findings has limited our understanding of the changes in the brain that may be linked to delirium. Some challenges associated with imaging people diagnosed with delirium include participant recruitment and inadequate consideration of important confounding factors such as history of dementia and/or depression, which are known to be associated with overlapping changes in the brain also observed on MRI.
Evidence for changes in structural and functional markers includes: changes in white-matter integrity (white matter lesions); decreases in brain volume (likely as a result of tissue atrophy); abnormal functional connectivity of brain regions responsible for normal processing of executive function, sensory processing, attention, emotional regulation, memory, and orientation; differences in autoregulation of the blood vessels in the brain; reduction in cerebral blood flow; and possible changes in brain metabolism (including cerebral tissue oxygenation and glucose hypometabolism). Altogether, these changes in MRI-based measurements invite further investigation of the mechanisms that may underlie delirium, as a potential avenue to improve clinical management of people with this condition.
Neurophysiology
Electroencephalography (EEG) allows for continuous capture of global brain function and brain connectivity, and is useful in understanding real-time physiologic changes during delirium. Since the 1950s, delirium has been known to be associated with slowing of resting-state EEG rhythms, with abnormally decreased background alpha power and increased theta and delta frequency activity.
From such evidence, a 2018 systematic review proposed a conceptual model that delirium results when insults/stressors trigger a breakdown of brain network dynamics in individuals with low brain resilience (i.e. people who already have underlying problems of low neural connectivity and/or low neuroplasticity like those with Alzheimer's disease).
Neuropathology
Only a handful of studies exist in which there has been an attempt to correlate delirium with pathological findings at autopsy. One study reported on 7 people who died during ICU admission. Each case was admitted with a range of primary pathologies, but all had acute respiratory distress syndrome and/or septic shock contributing to the delirium; 6 showed evidence of low brain perfusion and diffuse vascular injury, and 5 showed hippocampal involvement. A case-control study showed that 9 delirium cases showed higher expression of HLA-DR and CD68 (markers of microglial activation), IL-6 (a cytokine with pro-inflammatory and anti-inflammatory activities) and GFAP (a marker of astrocyte activity) than age-matched controls; this supports a neuroinflammatory cause of delirium, but the conclusions are limited by methodological issues.
A 2017 retrospective study correlating autopsy data with mini–mental state examination (MMSE) scores from 987 brain donors found that delirium combined with a pathological process of dementia accelerated MMSE score decline more than either individual process.
Diagnosis
The DSM-5-TR criteria are often the standard for diagnosing delirium clinically. However, early recognition of delirium's features using screening instruments, along with taking a careful history, can help in making a diagnosis of delirium. A diagnosis of delirium generally requires knowledge of a person's baseline level of cognitive function. This is especially important for treating people who have neurocognitive or neurodevelopmental disorders, whose baseline mental status may be mistaken as delirium.
General settings
Guidelines recommend that delirium should be diagnosed consistently when present. Much evidence reveals that in most centers delirium is greatly under-diagnosed. A systematic review of large-scale routine data studies reporting data on delirium detection tools showed important variations in tool completion rates and tool positive score rates. Some tools, even if completed at high rates, showed delirium positive score rates that were much lower than the expected delirium occurrence level, suggesting low sensitivity in practice.
There is evidence that delirium detection and coding rates can show improvements in response to guidelines and education; for example, whole country data in England and Scotland (sample size 7.7M patients per year) show that there were large increases (3-4 fold) in delirium coding between 2012 and 2020. Delirium detection in general acute care settings can be assisted by the use of validated delirium screening tools. Many such tools have been published, and they differ in a variety of characteristics (e.g., duration, complexity, and need for training). It is also important to ensure that a given tool has been validated for the setting where it is being used.
Examples of tools in use in clinical practice include:
Confusion Assessment Method (CAM), including variants such as the 3-Minute Diagnostic Interview for the CAM (3D-CAM) and brief CAM (bCAM)
Delirium Observation Screening Scale (DOS)
Nursing Delirium Screening Scale (Nu-DESC)
Recognizing Acute Delirium As part of your Routine (RADAR)
4AT (4 A's Test)
Delirium Diagnostic Tool-Provisional (DDT-Pro), also for subsyndromal delirium
Intensive care unit
People who are in the ICU are at greater risk of delirium, and ICU delirium may lead to prolonged ventilation, longer stays in the hospital, increased stress on family and caregivers, and an increased chance of death. In the ICU, international guidelines recommend that every person admitted gets checked for delirium every day (usually twice or more a day) using a validated clinical tool. Key elements of detecting delirium in the ICU are whether a person can pay attention during a listening task and follow simple commands. The two most widely used tools are the Confusion Assessment Method for the ICU (CAM-ICU) and the Intensive Care Delirium Screening Checklist (ICDSC). Translations of these tools exist in over 20 languages and they are used in ICUs globally, with instructional videos and implementation tips available. For children in need of intensive care, there are validated clinical tools adjusted according to age. The recommended tools are the preschool and pediatric Confusion Assessment Methods for the ICU (ps/pCAM-ICU) and the Cornell Assessment for Pediatric Delirium (CAPD), which are the most valid and reliable delirium monitoring tools in critically ill children and adolescents.
More emphasis is placed on regular screening over the choice of tool used. This, coupled with proper documentation and informed awareness by the healthcare team, can affect clinical outcomes. Without using one of these tools, 75% of ICU delirium can be missed by the healthcare team, leaving the person without any likely interventions to help reduce the duration of delirium.
Differential diagnosis
There are conditions that might have similar clinical presentations to those seen in delirium. These include dementia, depression, psychosis, catatonia, and other conditions that affect cognitive function.
Dementia: This group of disorders is acquired (non-congenital), with usually irreversible cognitive and psychosocial functional decline. Dementia usually results from an identifiable degenerative brain disease (e.g., Alzheimer's disease or Huntington's disease), requires chronic impairment (versus the acute onset of delirium), and is typically not associated with changes in level of consciousness. Dementia differs from delirium in that dementia is a chronic, long-term condition, whereas delirium is acute and short-term.
Depression: Similar symptoms exist between depression and delirium (especially the hypoactive subtype). Gathering a history from other caregivers can clarify baseline mentation.
Psychosis: In general, people with primary psychosis have intact cognitive function; however, primary psychosis can mimic delirium when it presents with disorganized thoughts and mood dysregulation. This is particularly true in the condition known as delirious mania.
Other mental illnesses: Some mental illnesses, such as a manic episode of bipolar disorder, depersonalization disorder, or other dissociative conditions, can present with features similar to those of delirium. Such conditions, however, would not qualify for a diagnosis of delirium under DSM-5-TR criterion D, because fluctuating cognitive symptoms occurring as part of a primary mental disorder are attributed to that disorder itself, whereas physical disorders (e.g., infections, hypoxia) can precipitate delirium as a mental symptom.
Prevention
Treating delirium that is already established is challenging and for this reason, preventing delirium before it begins is ideal. Prevention approaches include screening to identify people who are at risk, and medication-based and non-medication based (non-pharmacological) treatments.
An estimated 30–40% of all cases of delirium could be prevented in cognitively at-risk populations, and high rates of delirium reflect negatively on the quality of care. Episodes of delirium can be prevented by identifying hospitalized people at risk of the condition. This includes individuals over age 65, with a cognitive impairment, undergoing major surgery, or with severe illness. Routine delirium screening is recommended in such populations. It is thought that a personalized approach to prevention that includes different approaches together can decrease rates of delirium by 27% among the elderly.
In 1999, Sharon K. Inouye at Yale University founded the Hospital Elder Life Program (HELP), which has since become recognized as a proven model for preventing delirium. HELP prevents delirium among the elderly through active participation and engagement with these individuals. The program has two working parts: medical professionals, such as a trained nurse, and volunteers, who are overseen by the nurse. The volunteer program equips each trainee with basic geriatric knowledge and the interpersonal skills to interact with patients. Volunteers provide range-of-motion exercises, cognitive stimulation, and general conversation for elderly patients staying in the hospital. Alternative effective delirium prevention programs have been developed, some of which do not require volunteers.
Prevention efforts often fall on caregivers, of whom much is expected, and this is where socioeconomic status plays a role in prevention. If prevention requires constant mental stimulation and daily exercise, it takes time out of the caregiver's day; depending on socioeconomic circumstances, this may be time otherwise needed for work to support the family. As a result, a disproportionate number of individuals who experience delirium come from marginalized groups. Programs such as the Hospital Elder Life Program can attempt to address these societal issues by providing additional support and education about delirium that may not otherwise be accessible.
Non-pharmacological
Delirium may be prevented and treated by using non-pharmacologic approaches focused on risk factors, such as constipation, dehydration, low oxygen levels, immobility, visual or hearing impairment, sleep disturbance, functional decline, and by removing or minimizing problematic medications. Ensuring a therapeutic environment (e.g., individualized care, clear communication, adequate reorientation and lighting during daytime, promoting uninterrupted sleep hygiene with minimal noise and light at night, minimizing room relocation, having familiar objects like family pictures, providing earplugs, and providing adequate nutrition, pain control, and assistance toward early mobilization) may also aid in preventing delirium. Research into pharmacologic prevention and treatment is weak and insufficient to make proper recommendations.
Pharmacological
Melatonin and other pharmacological agents have been studied for delirium prevention, but evidence is conflicting. Avoidance or cautious use of benzodiazepines has been recommended for reducing the risk of delirium in critically ill individuals. It is unclear if the medication donepezil, a cholinesterase inhibitor, reduces delirium following surgery. There is also no clear evidence to suggest that citicoline, methylprednisolone, or antipsychotic medications prevent delirium. A review of intravenous versus inhalational maintenance of anaesthesia for postoperative cognitive outcomes in elderly people undergoing non-cardiac surgery showed little or no difference in postoperative delirium according to the type of anaesthetic maintenance agents in five studies (321 participants). The authors of this review were uncertain whether maintenance of anaesthesia with propofol-based total intravenous anaesthesia (TIVA) or with inhalational agents can affect the incidence rate of postoperative delirium.
Interventions for preventing delirium in long-term care or hospital
Current evidence suggests that software-based interventions that identify medications that could contribute to delirium risk and recommend a pharmacist's medication review probably reduce the incidence of delirium in older adults in long-term care. The benefits of hydration reminders, and of education on risk factors and care-home solutions for reducing delirium, remain uncertain.
For inpatients in a hospital setting, numerous approaches have been suggested to prevent episodes of delirium, including targeting risk factors such as sleep deprivation, mobility problems, dehydration, and impairments to a person's sensory system. Often a 'multicomponent' approach by an interdisciplinary team of health care professionals is suggested for people in hospital at risk of delirium, and there is some evidence that this may decrease the incidence of delirium by up to 43% and may reduce the length of time that the person is hospitalized.
Treatment
Most often, delirium is reversible; however, people with delirium require treatment for the underlying cause(s) and often to prevent injury and other poor outcomes directly related to delirium.
Treatment of delirium requires attention to multiple domains including the following:
Identify and treat the underlying medical disorder or cause(s)
Address any other possible predisposing and precipitating factors that might be disrupting brain function
Optimize physiology and conditions for brain recovery (e.g., oxygenation, hydration, nutrition, electrolytes, metabolites, medication review)
Detect and manage distress and behavioral disturbances (e.g., pain control)
Maintain mobility
Provide rehabilitation through cognitive engagement and mobilization
Communicate effectively with the person experiencing delirium and their carers or caregivers
Provide adequate follow-up including consideration of possible dementia and post-traumatic stress.
Multidomain interventions
These interventions are the first steps in managing acute delirium, and they overlap considerably with delirium prevention strategies. In addition to treating immediate life-threatening causes of delirium (e.g., low oxygen, low blood pressure, low glucose, dehydration), interventions include optimizing the hospital environment by reducing ambient noise, providing proper lighting, offering pain relief, promoting healthy sleep-wake cycles, and minimizing room changes. Although multicomponent care and comprehensive geriatric care are more specialized for a person experiencing delirium, several studies have been unable to find evidence that they reduce the duration of delirium.
Family, friends, and other caregivers can offer frequent reassurance, tactile and verbal orientation, cognitive stimulation (e.g. regular visits, familiar objects, clocks, and calendars), and means to stay engaged (e.g. making hearing aids and eyeglasses readily available). Sometimes verbal and non-verbal de-escalation techniques may be required to offer reassurance and calm the person experiencing delirium. Restraints should rarely be used as an intervention for delirium; their use has been recognized as a risk factor for injury and for aggravating symptoms, especially in older hospitalized people with delirium. The only situation in which restraints should be used, sparingly, during delirium is to protect life-sustaining interventions, such as endotracheal tubes.
Another approach, called the "T-A-DA (tolerate, anticipate, don't agitate) method", can be an effective management technique for older people with delirium: abnormal behaviors (including hallucinations and delusions) are tolerated and left unchallenged, as long as the safety of caregivers and of the person experiencing delirium is not threatened. Implementation of this model may require a designated area in the hospital. All unnecessary attachments are removed in anticipation of greater mobility, and agitation is prevented by avoiding excessive reorientation and questioning.
Medications
The use of medications for delirium is generally restricted to managing its distressing or dangerous neuropsychiatric disturbances. Short-term use (one week or less) of low-dose haloperidol is among the more common pharmacological approaches to delirium. Evidence for the effectiveness of atypical antipsychotics (e.g. risperidone, olanzapine, ziprasidone, and quetiapine) is emerging, with the benefit of fewer side effects. Antipsychotic drugs should be used with caution, or not at all, in people with conditions such as Parkinson's disease or dementia with Lewy bodies. Evidence for the effectiveness of medications (including antipsychotics and benzodiazepines) in treating delirium is weak.
Benzodiazepines can cause or worsen delirium, and there is no reliable evidence of efficacy for treating non-anxiety-related delirium. Similarly, people with dementia with Lewy bodies may have significant side effects from antipsychotics, and should be treated either with no antipsychotics or with small doses of benzodiazepines.
The antidepressant trazodone is occasionally used in the treatment of delirium, but it carries a risk of over-sedation, and its use has not been well studied.
For adults with delirium who are in the ICU, medications are commonly used to improve the symptoms. Dexmedetomidine may shorten the length of delirium in adults who are critically ill, and rivastigmine is not suggested. For adults with delirium who are near the end of their life (on palliative care), high-quality evidence to support or refute the use of most medications to treat delirium is not available. Low-quality evidence indicates that the antipsychotic medications risperidone or haloperidol may make the delirium slightly worse in people who are terminally ill, when compared with a placebo treatment. There is also moderate- to low-quality evidence to suggest that haloperidol and risperidone may be associated with a slight increase in side effects, specifically extrapyramidal symptoms, if the person near the end of their life has delirium that is mild to moderate in severity.
Prognosis
A systematic review found substantial evidence that delirium results in long-term poor outcomes in older persons admitted to hospital. The review only included studies that looked for an independent effect of delirium (i.e., after accounting for other associations with poor outcomes, for example co-morbidity or illness severity).
In older persons admitted to hospital, individuals experiencing delirium are twice as likely to die as those who do not (meta-analysis of 12 studies). In the only prospective study conducted in the general population, older persons reporting delirium also showed higher mortality (a 60% increase). A large (N=82,770) two-centre study in an unselected older emergency department population found that delirium detected as part of normal care using the 4AT tool was strongly linked to 30-day mortality, hospital length of stay, and days at home in the year following the 4AT test date.
Institutionalization was also twice as likely after an admission with delirium (meta-analysis of seven studies). In a community-based study examining individuals after an episode of severe infection (though not specifically delirium), these persons acquired more functional limitations (i.e., required more assistance with their care needs) than those not experiencing infection. After an episode of delirium in the general population, functional dependence increased threefold.
The association between delirium and dementia is complex. The systematic review estimated a 13-fold increase in dementia after delirium (meta-analysis of two studies). However, it is difficult to be certain that this is accurate because the population admitted to hospital includes persons with undiagnosed dementia (i.e., the dementia was present before the delirium, rather than caused by it). In prospective studies, people hospitalised from any cause appear to be at greater risk of dementia and faster trajectories of cognitive decline, but these studies did not specifically look at delirium. In the only population-based prospective study of delirium, older persons had an eight-fold increase in dementia and faster cognitive decline. The same association is also evident in persons already diagnosed with Alzheimer's dementia.
Recent long-term studies showed that many people still meet criteria for delirium for a prolonged period after hospital discharge, with up to 21% of people showing persistent delirium at 6 months post-discharge.
Dementia in ICU survivors
Between 50% and 70% of people admitted to the ICU have permanent problems with brain dysfunction similar to those experienced by people with Alzheimer's or those with a traumatic brain injury, leaving many ICU survivors permanently disabled. This is a distressing personal and public health problem and continues to receive increasing attention in ongoing investigations.
The implications of such an "acquired dementia-like illness" can profoundly impair a person's everyday functioning, often disrupting life in practical ways such as the ability to find a car in a parking lot, complete shopping lists, or perform job-related tasks previously done for years. The societal implications can be enormous when considering work-force issues related to the inability of wage-earners to work due to their own ICU stay or that of someone else they must care for.
Epidemiology
The highest rates of delirium (often 50–75% of people) occur among those who are critically ill in the intensive care unit (ICU). This was historically referred to as "ICU psychosis" or "ICU syndrome"; however, these terms are now widely disfavored in favor of the operationalized term ICU delirium. Since the advent of validated and easy-to-implement delirium instruments for people admitted to the ICU, such as the Confusion Assessment Method for the ICU (CAM-ICU) and the Intensive Care Delirium Screening Checklist (ICDSC), it has been recognized that most ICU delirium is hypoactive and can easily be missed unless evaluated regularly. The causes of delirium depend on the underlying illnesses, new problems like sepsis and low oxygen levels, and the sedative and pain medicines that are nearly universally given to all ICU patients. Outside the ICU, on hospital wards and in nursing homes, delirium is also a very important medical problem, especially for older patients.
The most recent area of the hospital in which delirium is just beginning to be monitored routinely in many centers is the emergency department, where the prevalence of delirium among older adults is about 10%. A systematic review of delirium in general medical inpatients showed that estimates of delirium prevalence on admission ranged from 10% to 31%. About 5–10% of older adults who are admitted to hospital develop a new episode of delirium while in hospital. Rates of delirium vary widely across general hospital wards. Estimates of the prevalence of delirium in nursing homes are between 10% and 45%.
Society and culture
Delirium is one of the oldest forms of mental disorder known in medical history. The Roman author Aulus Cornelius Celsus used the term to describe mental disturbance from head trauma or fever in his work De Medicina. Sims (1995, p. 31) points out a "superb detailed and lengthy description" of delirium in "The Stroller's Tale" from Charles Dickens' The Pickwick Papers. Historically, delirium has also been noted for its cognitive sequelae. For instance, the English medical writer Philip Barrow noted in 1583 that if delirium (or "frensy") resolves, it may be followed by a loss of memory and reasoning power.
Costs
In the US, the cost of a hospital admission for people with delirium is estimated at between $16,000 and $64,000, suggesting the national burden of delirium may range from $38 billion to $150 billion per year (2008 estimate). In the UK, the cost is estimated at £13,000 per admission.
References
Further reading
External links
Cognitive disorders
Intensive care medicine
Psychopathological syndromes
Stupor | Stupor is the lack of critical mental function and a level of consciousness, in which an affected person is almost entirely unresponsive and responds only to intense stimuli such as pain. The word derives from the Latin stupor ("numbness, insensibility").
Signs and symptoms
Stupor is characterised by impaired reaction to external stimuli. Those in a stuporous state are rigid, mute and only appear to be conscious, as the eyes are open and follow surrounding objects. If not stimulated externally, a patient with stupor will appear to be in a sleepy state most of the time. In some extreme cases of severe depressive disorders, the patient can become motionless, lose their appetite and become mute. Brief periods of responsiveness may be elicited by intense stimulation (e.g. pain, bright light, loud noise, shock).
Causes
Stupor is associated with infectious diseases, complicated toxic states (e.g. heavy metals), severe hypothermia, mental illnesses (e.g. schizophrenia, major depressive disorder), epilepsy, vascular illnesses (e.g. hypertensive encephalopathy), acute stress reaction (shock), neoplasms (e.g. brain tumors), brain disorders (e.g. Alzheimer's disease, dementia, fatal insomnia), vitamin B12 deficiency, major trauma, alcohol poisoning, vitamin D excess, and other conditions.
Lesions of the ascending reticular activating system at the level of the pons and metencephalon have been shown to cause stupor. The incidence is higher after left-sided lesions.
Management
Because stupor is caused by an underlying health condition, treatment focuses on uncovering and treating that cause. Doctors may administer IV antibiotics or fluids to treat infections and nutritional deficits, or conduct an MRI to check for lesions on the brain.
See also
Torpor
Notes
References
C. Lafosse, Zakboek Neuropsychologische Symptomatologie, p. 37.
External links
Symptoms and signs of mental disorders
Anemia | Anemia or anaemia (British English) is a blood disorder in which the blood has a reduced ability to carry oxygen. This can be due to a lower than normal number of red blood cells, a reduction in the amount of hemoglobin available for oxygen transport, or abnormalities in hemoglobin that impair its function.
The name is derived from Ancient Greek anaimia, meaning "lack of blood", from an- ("without") and haima ("blood").
When anemia comes on slowly, the symptoms are often vague, such as tiredness, weakness, shortness of breath, headaches, and a reduced ability to exercise.
When anemia is acute, symptoms may include confusion, feeling like one is going to pass out, loss of consciousness, and increased thirst. Anemia must be significant before a person becomes noticeably pale. Additional symptoms may occur depending on the underlying cause. Anemia can be temporary or long term and can range from mild to severe.
Anemia can be caused by blood loss, decreased red blood cell production, and increased red blood cell breakdown. Causes of blood loss include bleeding due to inflammation of the stomach or intestines, bleeding from surgery, serious injury, or blood donation. Causes of decreased production include iron deficiency, folate deficiency, vitamin B12 deficiency, thalassemia and a number of bone marrow tumors. Causes of increased breakdown include genetic disorders such as sickle cell anemia, infections such as malaria, and certain autoimmune diseases like autoimmune hemolytic anemia.
Anemia can also be classified based on the size of the red blood cells and amount of hemoglobin in each cell. If the cells are small, it is called microcytic anemia; if they are large, it is called macrocytic anemia; and if they are normal sized, it is called normocytic anemia. The diagnosis of anemia in men is based on a hemoglobin of less than 130 to 140 g/L (13 to 14 g/dL); in women, it is less than 120 to 130 g/L (12 to 13 g/dL). Further testing is then required to determine the cause.
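As a rough illustration of how such sex-specific cut-offs are applied (a sketch only: the 130 g/L and 120 g/L thresholds below are taken from the lower bounds quoted above, and real laboratory reference ranges and guidelines vary), the check can be expressed as:

```python
def is_anemic(hemoglobin_g_per_l: float, sex: str) -> bool:
    """Return True if hemoglobin falls below a sex-specific cut-off.

    Thresholds (130 g/L for men, 120 g/L for women) are illustrative;
    laboratory reference ranges and guidelines differ.
    """
    threshold = 130.0 if sex.lower() == "male" else 120.0
    return hemoglobin_g_per_l < threshold


print(is_anemic(125, "male"))    # True  (below the 130 g/L cut-off for men)
print(is_anemic(125, "female"))  # False (above the 120 g/L cut-off for women)
```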
Treatment depends on the specific cause. Certain groups of individuals, such as pregnant women, can benefit from the use of iron pills for prevention. Dietary supplementation, without determining the specific cause, is not recommended. The use of blood transfusions is typically based on a person's signs and symptoms. In those without symptoms, they are not recommended unless hemoglobin levels are less than 60 to 80 g/L (6 to 8 g/dL). These recommendations may also apply to some people with acute bleeding. Erythropoiesis-stimulating agents are only recommended in those with severe anemia.
Anemia is the most common blood disorder, affecting about a fifth to a third of the global population. Iron-deficiency anemia is the most common cause of anemia worldwide, and affects nearly one billion people.
In 2013, anemia due to iron deficiency resulted in about 183,000 deaths – down from 213,000 deaths in 1990. The condition is most prevalent in children, with an above-average prevalence also in the elderly and in women of reproductive age (especially during pregnancy). Anemia is one of the six WHO global nutrition targets for 2025 and one of the diet-related global targets endorsed by the World Health Assembly in 2012 and 2013. Efforts to reach the global targets contribute to reaching the Sustainable Development Goals (SDGs), with anemia as one of the targets in SDG 2 for achieving zero world hunger.
Signs and symptoms
A person with anemia may have no symptoms, depending on the underlying cause; symptoms may go unnoticed while the anemia is mild and then become worse as the anemia worsens. A patient with anemia may report feeling tired or weak, having a decreased ability to concentrate, and sometimes shortness of breath on exertion. These symptoms are nonspecific, and none of them, alone or in combination, has good predictive value for the presence of anemia in non-clinical patients.
Symptoms of anemia can come on quickly or slowly. Early on there may be few or no symptoms. If the anemia continues slowly (chronic), the body may adapt and compensate for this change. In this case, no symptoms may appear until the anemia becomes more severe. Symptoms can include feeling tired, weak, dizziness, headaches, intolerance to physical exertion, shortness of breath, difficulty concentrating, irregular or rapid heartbeat, cold hands and feet, cold intolerance, pale or yellow skin, poor appetite, easy bruising and bleeding, and muscle weakness.
Anemia that develops quickly often has more severe symptoms, including feeling faint, chest pain, sweating, increased thirst, and confusion. There may also be additional symptoms depending on the underlying cause.
In more severe anemia, the body may compensate for the lack of oxygen-carrying capability of the blood by increasing cardiac output. The person may have symptoms related to this, such as palpitations, angina (if pre-existing heart disease is present), intermittent claudication of the legs, and symptoms of heart failure.
On examination, the signs exhibited may include pallor (pale skin, mucosa, conjunctiva and nail beds), but this is not a reliable sign. A blue coloration of the sclera may be noticed in some cases of iron-deficiency anemia. There may be signs of specific causes of anemia, e.g. koilonychia (in iron deficiency), jaundice (when anemia results from abnormal break down of red blood cells – in hemolytic anemia), nerve cell damage (vitamin B12 deficiency), bone deformities (found in thalassemia major) or leg ulcers (seen in sickle-cell disease). In severe anemia, there may be signs of a hyperdynamic circulation: tachycardia (a fast heart rate), bounding pulse, flow murmurs, and cardiac ventricular hypertrophy (enlargement). There may be signs of heart failure.
Pica, the consumption of non-food items such as ice, paper, wax, grass, hair or dirt, may be a symptom of iron deficiency; although it occurs often in those who have normal levels of hemoglobin.
Chronic anemia may result in behavioral disturbances in children as a direct result of impaired neurological development in infants, and reduced academic performance in children of school age. Restless legs syndrome is more common in people with iron-deficiency anemia than in the general population.
Causes
The causes of anemia may be classified as impaired red blood cell (RBC) production, increased RBC destruction (hemolytic anemia), blood loss and fluid overload (hypervolemia). Several of these may interplay to cause anemia. The most common cause of anemia is blood loss, but this usually does not cause any lasting symptoms unless a relatively impaired RBC production develops, in turn, most commonly by iron deficiency.
Impaired production
Disturbance of proliferation and differentiation of stem cells
Pure red cell aplasia
Aplastic anemia affects all kinds of blood cells. Fanconi anemia is a hereditary disorder or defect featuring aplastic anemia and various other abnormalities.
Anemia of kidney failure due to insufficient production of the hormone erythropoietin
Anemia of endocrine disease
Disturbance of proliferation and maturation of erythroblasts
Pernicious anemia is a form of megaloblastic anemia due to vitamin B12 deficiency dependent on impaired absorption of vitamin B12. Lack of dietary B12 causes non-pernicious megaloblastic anemia.
Anemia of folate deficiency, as with vitamin B12, causes megaloblastic anemia
Anemia of prematurity, by diminished erythropoietin response to declining hematocrit levels, combined with blood loss from laboratory testing, generally occurs in premature infants at two to six weeks of age.
Iron-deficiency anemia, resulting in deficient heme synthesis
Thalassemias, causing deficient globin synthesis
Congenital dyserythropoietic anemias, causing ineffective erythropoiesis
Anemia of kidney failure (also causing stem cell dysfunction)
Other mechanisms of impaired RBC production
Myelophthisic anemia or myelophthisis is a severe type of anemia resulting from the replacement of bone marrow by other materials, such as malignant tumors, fibrosis, or granulomas.
Myelodysplastic syndrome
Anemia of chronic inflammation
Leukoerythroblastic anemia is caused by space-occupying lesions in the bone marrow that prevent normal production of blood cells.
Increased destruction
Anemias of increased red blood cell destruction are generally classified as hemolytic anemias. These types generally feature jaundice, and elevated levels of lactate dehydrogenase.
Intrinsic (intracorpuscular) abnormalities cause premature destruction. All of these, except paroxysmal nocturnal hemoglobinuria, are hereditary genetic disorders.
Hereditary spherocytosis is a hereditary defect that results in defects in the RBC cell membrane, causing the erythrocytes to be sequestered and destroyed by the spleen.
Hereditary elliptocytosis is another defect in membrane skeleton proteins.
Abetalipoproteinemia, causing defects in membrane lipids
Enzyme deficiencies
Pyruvate kinase and hexokinase deficiencies, causing defective glycolysis
Glucose-6-phosphate dehydrogenase deficiency and glutathione synthetase deficiency, causing increased oxidative stress
Hemoglobinopathies
Sickle cell anemia
Hemoglobinopathies causing unstable hemoglobins
Paroxysmal nocturnal hemoglobinuria
Extrinsic (extracorpuscular) abnormalities
Antibody-mediated
Warm autoimmune hemolytic anemia is caused by autoimmune attack against red blood cells, primarily by IgG. It is the most common of the autoimmune hemolytic diseases. It can be idiopathic, that is, without any known cause, drug-associated or secondary to another disease such as systemic lupus erythematosus, or a malignancy, such as chronic lymphocytic leukemia.
Cold agglutinin hemolytic anemia is primarily mediated by IgM. It can be idiopathic or result from an underlying condition.
Rh disease, one of the causes of hemolytic disease of the newborn
Transfusion reaction to blood transfusions
Mechanical trauma to red blood cells
Microangiopathic hemolytic anemias, including thrombotic thrombocytopenic purpura and disseminated intravascular coagulation
Infections, including malaria
Heart surgery
Haemodialysis
Parasitic
Trypanosoma congolense alters the surface of its host's RBCs, which may explain T. congolense-induced anemia
Blood loss
Anemia of prematurity, from frequent blood sampling for laboratory testing, combined with insufficient RBC production
Trauma or surgery, causing acute blood loss
Gastrointestinal tract lesions, causing either acute bleeds (e.g. variceal lesions, peptic ulcers, hemorrhoids) or chronic blood loss (e.g. angiodysplasia)
Gynecologic disturbances, also generally causing chronic blood loss
From menstruation, mostly among young women or older women who have fibroids
Many types of cancer, including colorectal cancer and cancer of the urinary bladder, may cause acute or chronic blood loss, especially at advanced stages
Infection by intestinal nematodes feeding on blood, such as hookworms and the whipworm Trichuris trichiura
Iatrogenic anemia, blood loss from repeated blood draws and medical procedures.
The roots of the words anemia and ischemia both refer to the basic idea of "lack of blood", but anemia and ischemia are not the same thing in modern medical terminology. The word anemia used alone implies widespread effects from blood that either is too scarce (e.g., blood loss) or is dysfunctional in its oxygen-supplying ability (due to whatever type of hemoglobin or erythrocyte problem). In contrast, the word ischemia refers solely to the lack of blood (poor perfusion). Thus ischemia in a body part can cause localized anemic effects within those tissues.
Fluid overload
Fluid overload (hypervolemia) causes decreased hemoglobin concentration and apparent anemia:
General causes of hypervolemia include excessive sodium or fluid intake, sodium or water retention and fluid shift into the intravascular space.
From the 6th week of pregnancy, hormonal changes cause an increase in the mother's blood volume due to an increase in plasma.
Intestinal inflammation
Certain gastrointestinal disorders can cause anemia. The mechanisms involved are multifactorial and not limited to malabsorption but mainly related to chronic intestinal inflammation, which causes dysregulation of hepcidin that leads to decreased access of iron to the circulation.
Helicobacter pylori infection.
Gluten-related disorders: untreated celiac disease and non-celiac gluten sensitivity. Anemia can be the only manifestation of celiac disease, in absence of gastrointestinal or any other symptoms.
Inflammatory bowel disease.
Diagnosis
Definitions
There are a number of definitions of anemia; reviews provide comparison and contrast of them. A strict but broad definition is an absolute decrease in red blood cell mass; a broader definition is a lowered ability of the blood to carry oxygen. An operational definition is a decrease in whole-blood hemoglobin concentration of more than 2 standard deviations below the mean of an age- and sex-matched reference range.
It is difficult to directly measure RBC mass, so the hematocrit (amount of RBCs) or the hemoglobin (Hb) in the blood is often used instead to indirectly estimate the value. Hematocrit, however, is concentration-dependent and is therefore not completely accurate. For example, during pregnancy a woman's RBC mass is normal, but because of an increase in blood volume the hemoglobin and hematocrit are diluted and thus decreased. Another example would be bleeding, where the RBC mass would decrease but the concentrations of hemoglobin and hematocrit initially remain normal until fluids shift from other areas of the body to the intravascular space.
In adults, anemia is also classified by severity into mild (110 g/L to the lower limit of normal), moderate (80 g/L to 110 g/L), and severe (less than 80 g/L). Different values are used in pregnancy and in children.
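A minimal sketch of this severity grading for adults, using the bands quoted above (the lower limit of normal is an assumption here, since it varies by sex and laboratory, and values for pregnancy and children are not modeled):

```python
def anemia_severity(hemoglobin_g_per_l: float,
                    lower_limit_of_normal: float = 120.0) -> str:
    """Grade adult anemia severity using the bands given in the text."""
    if hemoglobin_g_per_l >= lower_limit_of_normal:
        return "not anemic"
    if hemoglobin_g_per_l >= 110:
        return "mild"      # 110 g/L up to the lower limit of normal
    if hemoglobin_g_per_l >= 80:
        return "moderate"  # 80 g/L to 110 g/L
    return "severe"        # below 80 g/L


print(anemia_severity(95))  # moderate
```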
Testing
Anemia is typically diagnosed on a complete blood count. Apart from reporting the number of red blood cells and the hemoglobin level, the automatic counters also measure the size of the red blood cells by flow cytometry, which is an important tool in distinguishing between the causes of anemia. Examination of a stained blood smear using a microscope can also be helpful, and it is sometimes a necessity in regions of the world where automated analysis is less accessible.
A blood test will provide counts of white blood cells, red blood cells and platelets. If anemia appears, further tests may determine what type it is and whether it has a serious cause; the person's family history and a physical examination also inform the diagnosis. These further tests may include serum ferritin, iron studies, vitamin B12, genetic testing, and a bone marrow sample, if needed.
Reticulocyte counts, and the "kinetic" approach to anemia, have become more common than in the past in the large medical centers of the United States and some other wealthy nations, in part because some automatic counters now have the capacity to include reticulocyte counts. A reticulocyte count is a quantitative measure of the bone marrow's production of new red blood cells. The reticulocyte production index is a calculation of the ratio between the level of anemia and the extent to which the reticulocyte count has risen in response. If the degree of anemia is significant, even a "normal" reticulocyte count actually may reflect an inadequate response.
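A sketch of one common way the reticulocyte production index (RPI) is calculated. The maturation-correction factors and the assumed normal hematocrit of 45% below are standard textbook conventions rather than values given in this article:

```python
def reticulocyte_production_index(retic_percent: float, hematocrit: float,
                                  normal_hematocrit: float = 45.0) -> float:
    """Estimate the RPI from the reticulocyte percentage and hematocrit.

    The raw reticulocyte percentage is first corrected for the degree of
    anemia, then divided by a maturation factor that accounts for
    reticulocytes circulating longer when anemia is more severe.
    The factors below are conventional approximations.
    """
    corrected = retic_percent * (hematocrit / normal_hematocrit)
    if hematocrit >= 40:
        maturation = 1.0
    elif hematocrit >= 30:
        maturation = 1.5
    elif hematocrit >= 20:
        maturation = 2.0
    else:
        maturation = 2.5
    return corrected / maturation


# Example: 6% reticulocytes at a hematocrit of 25% gives an RPI of about 1.7,
# conventionally read as an inadequate marrow response to that degree of anemia.
print(round(reticulocyte_production_index(6, 25), 1))
```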
If an automated count is not available, a reticulocyte count can be done manually following special staining of the blood film. In manual examination, activity of the bone marrow can also be gauged qualitatively by subtle changes in the numbers and the morphology of young RBCs by examination under a microscope. Newly formed RBCs are usually slightly larger than older RBCs and show polychromasia. Even where the source of blood loss is obvious, evaluation of erythropoiesis can help assess whether the bone marrow will be able to compensate for the loss and at what rate.
When the cause is not obvious, clinicians use other tests, such as: ESR, serum iron, transferrin, RBC folate level, hemoglobin electrophoresis, renal function tests (e.g. serum creatinine) although the tests will depend on the clinical hypothesis that is being investigated.
When the diagnosis remains difficult, a bone marrow examination allows direct examination of the precursors to red cells, although it is rarely used, as it is painful and invasive, and is hence reserved for cases where severe pathology needs to be determined or excluded.
Red blood cell size
In the morphological approach, anemia is classified by the size of red blood cells; this is either done automatically or on microscopic examination of a peripheral blood smear. The size is reflected in the mean corpuscular volume (MCV). If the cells are smaller than normal (under 80 fl), the anemia is said to be microcytic; if they are normal size (80–100 fl), normocytic; and if they are larger than normal (over 100 fl), the anemia is classified as macrocytic. This scheme quickly exposes some of the most common causes of anemia; for instance, a microcytic anemia is often the result of iron deficiency.
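A minimal sketch of the morphological classification by MCV, using the cut-offs quoted above:

```python
def classify_by_mcv(mcv_fl: float) -> str:
    """Classify anemia morphology from the mean corpuscular volume in femtolitres."""
    if mcv_fl < 80:
        return "microcytic"
    if mcv_fl <= 100:
        return "normocytic"
    return "macrocytic"


print(classify_by_mcv(72))   # microcytic (as often seen in iron deficiency)
print(classify_by_mcv(108))  # macrocytic
```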
In clinical workup, the MCV will be one of the first pieces of information available, so even among clinicians who consider the "kinetic" approach more useful philosophically, morphology will remain an important element of classification and diagnosis.
Limitations of MCV include cases where the underlying cause is due to a combination of factors – such as iron deficiency (a cause of microcytosis) and vitamin B12 deficiency (a cause of macrocytosis) where the net result can be normocytic cells.
Production vs. destruction or loss
The "kinetic" approach to anemia yields arguably the most clinically relevant classification of anemia. This classification depends on evaluation of several hematological parameters, particularly the blood reticulocyte (precursor of mature RBCs) count. This then yields the classification of defects by decreased RBC production versus increased RBC destruction or loss. Clinical signs of loss or destruction include abnormal peripheral blood smear with signs of hemolysis; elevated LDH suggesting cell destruction; or clinical signs of bleeding, such as guaiac-positive stool, radiographic findings, or frank bleeding.
A simplified schematic of this approach classifies anemia by the reticulocyte production index together with evidence of red cell destruction or loss; a second schematic uses the MCV as the starting point (neither schematic is reproduced here). Anemia may have more than one cause; for instance, sickle cell anemia with superimposed iron deficiency, or chronic gastric bleeding with B12 and folate deficiency. An ongoing combination of a low reticulocyte production index, a normal MCV, and hemolysis or blood loss (confirmed by repeating the reticulocyte count) may be seen in bone marrow failure or in anemia of chronic disease with superimposed or related hemolysis or blood loss.
Other characteristics visible on the peripheral smear may provide valuable clues about a more specific diagnosis; for example, abnormal white blood cells may point to a cause in the bone marrow.
Microcytic
Microcytic anemia is primarily a result of hemoglobin synthesis failure/insufficiency, which could be caused by several etiologies:
Iron-deficiency anemia is the most common type of anemia overall and it has many causes. RBCs often appear hypochromic (paler than usual) and microcytic (smaller than usual) when viewed with a microscope.
Iron-deficiency anemia is due to insufficient dietary intake or absorption of iron to meet the body's needs. Infants, toddlers, and pregnant women have higher than average needs. Increased iron intake is also needed to offset blood losses due to digestive tract issues, frequent blood donations, or heavy menstrual periods. Iron is an essential part of hemoglobin, and low iron levels result in decreased incorporation of hemoglobin into red blood cells. In the United States, 12% of all women of childbearing age have iron deficiency, compared with only 2% of adult men. The incidence is as high as 20% among African American and Mexican American women. In India it is even more than 50%. Studies have linked iron deficiency without anemia to poor school performance and lower IQ in teenage girls, although this may be due to socioeconomic factors. Iron deficiency is the most prevalent deficiency state on a worldwide basis. It is sometimes the cause of abnormal fissuring of the angular (corner) sections of the lips (angular stomatitis).
In the United States, the most common cause of iron deficiency is bleeding or blood loss, usually from the gastrointestinal tract. Fecal occult blood testing, upper endoscopy and lower endoscopy should be performed to identify bleeding lesions. In older men and women, the chances are higher that bleeding from the gastrointestinal tract could be due to colon polyps or colorectal cancer.
Worldwide, the most common cause of iron-deficiency anemia is parasitic infestation (hookworms, amebiasis, schistosomiasis and whipworms).
The Mentzer index (mean cell volume divided by the RBC count) predicts whether microcytic anemia may be due to iron deficiency or thalassemia, although it requires confirmation.
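A worked sketch of the Mentzer index. The decision threshold of about 13 mentioned in the comments is a commonly cited convention and an assumption here, not a value stated in this article; either way, the result is only a screening hint that requires confirmatory testing:

```python
def mentzer_index(mcv_fl: float, rbc_millions_per_ul: float) -> float:
    """Mentzer index: MCV (fl) divided by the RBC count (millions per microlitre)."""
    return mcv_fl / rbc_millions_per_ul


# Example: MCV 65 fl with an RBC count of 5.5 million/uL gives about 11.8.
# By convention, values below ~13 favour thalassemia trait and values above
# ~13 favour iron deficiency, pending confirmatory testing.
print(round(mentzer_index(65, 5.5), 1))
```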
Macrocytic
Megaloblastic anemia, the most common cause of macrocytic anemia, is due to a deficiency of either vitamin B12, folic acid, or both. Deficiency in folate or vitamin B12 can be due either to inadequate intake or insufficient absorption. Folate deficiency normally does not produce neurological symptoms, while B12 deficiency does.
Pernicious anemia is caused by a lack of intrinsic factor, which is required to absorb vitamin B12 from food. A lack of intrinsic factor may arise from an autoimmune condition targeting the parietal cells (atrophic gastritis) that produce intrinsic factor or against intrinsic factor itself. These lead to poor absorption of vitamin B12.
Macrocytic anemia can also be caused by the removal of the functional portion of the stomach, such as during gastric bypass surgery, leading to reduced vitamin B12/folate absorption. Therefore, one must always be aware of anemia following this procedure.
Hypothyroidism
Alcoholism commonly causes a macrocytosis, although not specifically anemia. Other types of liver disease can also cause macrocytosis.
Drugs such as methotrexate and zidovudine, and other substances such as heavy metals, may inhibit DNA replication
Macrocytic anemia can be further divided into "megaloblastic anemia" and "nonmegaloblastic macrocytic anemia". The cause of megaloblastic anemia is primarily a failure of DNA synthesis with preserved RNA synthesis, which results in restricted cell division of the progenitor cells. The megaloblastic anemias often present with neutrophil hypersegmentation (six to ten lobes). The nonmegaloblastic macrocytic anemias have different etiologies (i.e., DNA synthesis is unimpaired) and occur, for example, in alcoholism.
In addition to the nonspecific symptoms of anemia, specific features of vitamin B12 deficiency include peripheral neuropathy and subacute combined degeneration of the cord with resulting balance difficulties from posterior column spinal cord pathology. Other features may include a smooth, red tongue and glossitis.
The treatment for vitamin B12-deficient anemia was first devised by William Murphy, who bled dogs to make them anemic, and then fed them various substances to see what (if anything) would make them healthy again. He discovered that ingesting large amounts of liver seemed to cure the disease. George Minot and George Whipple then set about to isolate the curative substance chemically and ultimately were able to isolate the vitamin B12 from the liver. All three shared the 1934 Nobel Prize in Medicine.
Normocytic
Normocytic anemia occurs when the overall hemoglobin levels are decreased, but the red blood cell size (mean corpuscular volume) remains normal. Causes include acute blood loss, anemia of chronic disease, aplastic anemia, hemolytic anemia, and anemia of kidney failure.
Dimorphic
A dimorphic appearance on a peripheral blood smear occurs when there are two simultaneous populations of red blood cells, typically of different size and hemoglobin content (this last feature affecting the color of the red blood cell on a stained peripheral blood smear). For example, a person recently transfused for iron deficiency would have small, pale, iron deficient red blood cells (RBCs) and the donor RBCs of normal size and color. Similarly, a person transfused for severe folate or vitamin B12 deficiency would have two cell populations, but, in this case, the patient's RBCs would be larger and paler than the donor's RBCs.
A person with sideroblastic anemia (a defect in heme synthesis, commonly caused by alcoholism, but also drugs/toxins, nutritional deficiencies, a few acquired and rare congenital diseases) can have a dimorphic smear from the sideroblastic anemia alone. Evidence for multiple causes appears with an elevated RBC distribution width (RDW), indicating a wider-than-normal range of red cell sizes, also seen in common nutritional anemia.
Heinz body anemia
Heinz bodies form in the cytoplasm of RBCs and appear as small dark dots under the microscope. In animals, Heinz body anemia has many causes. It may be drug-induced, for example in cats and dogs by acetaminophen (paracetamol), or may be caused by eating various plants or other substances:
In cats and dogs after eating either raw or cooked plants from the genus Allium, for example, onions or garlic.
In dogs after ingestion of zinc, for example, after eating U.S. pennies minted after 1982.
In horses which eat dry or wilted red maple leaves.
Hyperanemia
Hyperanemia is a severe form of anemia, in which the hematocrit is below 10%.
Refractory anemia
Refractory anemia, an anemia which does not respond to treatment, is often seen secondary to myelodysplastic syndromes. Iron-deficiency anemia may also be refractory as a manifestation of gastrointestinal problems which disrupt iron absorption or cause occult bleeding.
Transfusion dependent
Transfusion-dependent anemia is a form of anemia in which ongoing blood transfusions are required. Most people with myelodysplastic syndrome develop this state at some point in time. Beta thalassemia may also result in transfusion dependence. Concerns from repeated blood transfusions include iron overload, which may require chelation therapy.
Treatment
The global market for anemia treatments is estimated at more than US$23 billion per year and is growing fast because of the rising prevalence and awareness of anemia. The types of anemia treated with drugs are iron-deficiency anemia, thalassemia, aplastic anemia, hemolytic anemia, sickle cell anemia, and pernicious anemia; the most important of these are deficiency anemias and sickle cell anemia, which together account for about 60% of the market share because of their high prevalence and higher treatment costs compared with other types. Treatment for anemia depends on cause and severity. Vitamin supplements given orally (folic acid or vitamin B12) or intramuscularly (vitamin B12) will replace specific deficiencies.
Apart from that, iron supplements, antibiotics, immunosuppressants, bone marrow stimulants, corticosteroids, gene therapy and iron-chelating agents are forms of anemia treatment drugs, with immunosuppressants and corticosteroids accounting for 58% of the market share. A paradigm shift towards gene therapy and monoclonal antibody therapies is being observed.
Oral iron
Nutritional iron deficiency is common in developing nations. An estimated two-thirds of children and of women of childbearing age in most developing nations have iron deficiency without anemia, and one-third of them have iron deficiency with anemia. Iron deficiency due to inadequate dietary iron intake is rare in men and postmenopausal women. The diagnosis of iron deficiency mandates a search for potential sources of blood loss, such as gastrointestinal bleeding from ulcers or colon cancer.
Mild to moderate iron-deficiency anemia is treated by oral iron supplementation with ferrous sulfate, ferrous fumarate, or ferrous gluconate. Daily iron supplements have been shown to be effective in reducing anemia in women of childbearing age. When taking iron supplements, stomach upset or darkening of the feces are commonly experienced. The stomach upset can be alleviated by taking the iron with food; however, this decreases the amount of iron absorbed. Vitamin C aids in the body's ability to absorb iron, so taking oral iron supplements with orange juice is of benefit.
In the anemia of chronic kidney disease, recombinant erythropoietin or epoetin alfa is recommended to stimulate RBC production, and if iron deficiency and inflammation are also present, concurrent parenteral iron is also recommended.
Injectable iron
In cases where oral iron has either proven ineffective, would be too slow (for example, pre-operatively), or where absorption is impeded (for example in cases of inflammation), parenteral iron preparations can be used. Parenteral iron can improve iron stores rapidly and is also effective for treating people with postpartum haemorrhage, inflammatory bowel disease, and chronic heart failure. The body can absorb up to 6 mg iron daily from the gastrointestinal tract. In many cases, the patient has a deficit of over 1,000 mg of iron which would require several months to replace. This can be given concurrently with erythropoietin to ensure sufficient iron for increased rates of erythropoiesis.
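A back-of-the-envelope illustration of why replacing a large deficit by the oral route takes months, using the figures quoted above (the 6 mg/day figure is an upper bound on absorption, so real-world repletion is usually slower):

```python
# Rough time to replace an iron deficit orally, assuming the maximal
# gastrointestinal absorption of about 6 mg/day quoted in the text.
deficit_mg = 1000          # example deficit mentioned in the text (often exceeded)
absorption_mg_per_day = 6  # upper bound on daily absorption
days = deficit_mg / absorption_mg_per_day
print(f"about {days:.0f} days (~{days / 30:.1f} months)")  # ~167 days, ~5.6 months
```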
Blood transfusions
Blood transfusions in those without symptoms are not recommended until the hemoglobin is below 60 to 80 g/L (6 to 8 g/dL). In those with coronary artery disease who are not actively bleeding, transfusions are only recommended when the hemoglobin is below 70 to 80 g/L (7 to 8 g/dL). Transfusing earlier does not improve survival. Transfusions otherwise should only be undertaken in cases of cardiovascular instability.
A 2012 review concluded that, when considering blood transfusions for anaemia in people with advanced cancer who have fatigue and breathlessness (not related to cancer treatment or haemorrhage), consideration should be given to whether alternative strategies can be tried before a blood transfusion.
Vitamin B12 intramuscular injections
Vitamin B12 is given by intramuscular injection in severe cases or in cases of malabsorption of dietary B12. Pernicious anemia caused by loss of intrinsic factor cannot be prevented. If there are other, reversible causes of low vitamin B12 levels, the cause must be treated.
Vitamin B12 deficiency anemia is usually easily treated by providing the necessary level of vitamin B12 supplementation. The injections are quick-acting, and symptoms usually go away within one to two weeks. As the condition improves, the dosing interval is extended to weekly and then to monthly. Intramuscular therapy leads to more rapid improvement and should be considered in patients with severe deficiency or severe neurologic symptoms. Treatment should begin rapidly when severe neurological symptoms are present, as some changes can become permanent. In some individuals lifelong treatment may be needed.
Erythropoiesis-stimulating agents
The objective of administering an erythropoiesis-stimulating agent (ESA) is to maintain hemoglobin at the lowest level that both minimizes transfusions and meets the individual person's needs. ESAs should not be used for mild or moderate anemia, and are not recommended in people with chronic kidney disease unless hemoglobin levels are less than 10 g/dL or they have symptoms of anemia. Their use should be accompanied by parenteral iron. A 2020 Cochrane Anaesthesia Review Group review of erythropoietin (EPO) plus iron versus control treatment (placebo or iron alone) for preoperative anaemic adults undergoing non-cardiac surgery found that patients were much less likely to require red cell transfusion; in those transfused, the volumes were unchanged (mean difference -0.09, 95% CI -0.23 to 0.05). Pre-operative hemoglobin concentration was increased in those receiving 'high-dose' EPO, but not 'low-dose' EPO.
Hyperbaric oxygen
Treatment of exceptional blood loss (anemia) is recognized as an indication for hyperbaric oxygen (HBO) by the Undersea and Hyperbaric Medical Society. The use of HBO is indicated when oxygen delivery to tissue is not sufficient in patients who cannot be given blood transfusions for medical or religious reasons. HBO may be used for medical reasons when the threat of blood product incompatibility or concern about transmissible disease is a factor, and the beliefs of some religions (e.g., Jehovah's Witnesses) may require the use of HBO instead of transfusion. A 2005 review of the use of HBO in severe anemia found that all publications reported positive results.
Preoperative anemia
An estimated 30% of adults who require non-cardiac surgery have anemia. In order to determine an appropriate preoperative treatment, it is suggested that the cause of anemia be first determined. There is moderate level medical evidence that supports a combination of iron supplementation and erythropoietin treatment to help reduce the requirement for red blood cell transfusions after surgery in those who have preoperative anemia.
Epidemiology
Anemia affects 27% of the world's population with iron-deficiency anemia accounting for more than 60% of it. A moderate degree of iron-deficiency anemia affected approximately 610 million people worldwide or 8.8% of the population. It is somewhat more common in females (9.9%) than males (7.8%). Mild iron-deficiency anemia affects another 375 million. Severe anaemia is prevalent globally, and especially in sub-Saharan Africa where it is associated with infections including malaria and invasive bacterial infections.
History
Signs of severe anemia in human bones from 4000 years ago have been uncovered in Thailand.
References
External links
WHO fact sheet on anaemia
Anemia, U.S. National Library of Medicine
Anemias
Hematopathology
Transfusion medicine
Injury | Injury is physiological damage to the living tissue of any organism, whether in humans, in other animals, or in plants.
Injuries can be caused in many ways, including mechanically with penetration by sharp objects such as teeth or with blunt objects, by heat or cold, or by venoms and biotoxins. Injury prompts an inflammatory response in many taxa of animals; this prompts wound healing. In both plants and animals, substances are often released to help to occlude the wound, limiting loss of fluids and the entry of pathogens such as bacteria. Many organisms secrete antimicrobial chemicals which limit wound infection; in addition, animals have a variety of immune responses for the same purpose. Both plants and animals have regrowth mechanisms which may result in complete or partial healing over the injury. Cells too can repair damage to a certain degree.
Taxonomic range
Animals
Injury in animals is sometimes defined as mechanical damage to anatomical structure, but it has a wider connotation of physical damage with any cause, including drowning, burns, and poisoning. Such damage may result from attempted predation, territorial fights, falls, and abiotic factors.
Injury prompts an inflammatory response in animals of many different phyla; this prompts coagulation of the blood or body fluid, followed by wound healing, which may be rapid, as in the cnidaria. Arthropods are able to repair injuries to the cuticle that forms their exoskeleton to some extent.
Animals in several phyla, including annelids, arthropods, cnidaria, molluscs, nematodes, and vertebrates are able to produce antimicrobial peptides to fight off infection following an injury.
Humans
Injury in humans has been studied extensively for its importance in medicine. Much of medical practice, including emergency medicine and pain management, is dedicated to the treatment of injuries. The World Health Organization has developed a classification of injuries in humans by categories including mechanism, objects/substances producing injury, place of occurrence, activity when injured and the role of human intent. In addition to physical harm, injuries can cause psychological harm, including post-traumatic stress disorder.
Plants
In plants, injuries result from the eating of plant parts by herbivorous animals including insects and mammals, from damage to tissues by plant pathogens such as bacteria and fungi, which may gain entry after herbivore damage or in other ways, and from abiotic factors such as heat, freezing, flooding, lightning, and pollutants such as ozone. Plants respond to injury by signalling that damage has occurred, by secreting materials to seal off the damaged area, by producing antimicrobial chemicals, and in woody plants by regrowing over wounds.
Cell injury
Cell injury is a variety of changes of stress that a cell suffers due to external as well as internal environmental changes. Amongst other causes, this can be due to physical, chemical, infectious, biological, nutritional or immunological factors. Cell damage can be reversible or irreversible. Depending on the extent of injury, the cellular response may be adaptive and where possible, homeostasis is restored. Cell death occurs when the severity of the injury exceeds the cell's ability to repair itself. Cell death is relative to both the length of exposure to a harmful stimulus and the severity of the damage caused.
References
Biological concepts
Cyanide poisoning

Cyanide poisoning is poisoning that results from exposure to any of a number of forms of cyanide. Early symptoms include headache, dizziness, fast heart rate, shortness of breath, and vomiting. This phase may then be followed by seizures, slow heart rate, low blood pressure, loss of consciousness, and cardiac arrest. Onset of symptoms usually occurs within a few minutes. Some survivors have long-term neurological problems.
Toxic cyanide-containing compounds include hydrogen cyanide gas and a number of cyanide salts. Poisoning is relatively common following breathing in smoke from a house fire. Other potential routes of exposure include workplaces involved in metal polishing, certain insecticides, the medication sodium nitroprusside, and certain seeds such as those of apples and apricots. Liquid forms of cyanide can be absorbed through the skin. Cyanide ions interfere with cellular respiration, resulting in the body's tissues being unable to use oxygen.
Diagnosis is often difficult. It may be suspected in a person following a house fire who has a decreased level of consciousness, low blood pressure, or high lactic acid. Blood levels of cyanide can be measured but take time. Levels of 0.5–1 mg/L are mild, 1–2 mg/L are moderate, 2–3 mg/L are severe, and greater than 3 mg/L generally result in death.
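For illustration only, the severity bands quoted above can be written down as a small classification helper; the cut-offs are simply those listed in this paragraph, the function name is invented, and no clinical decision would rest on a single blood level.

```python
def classify_cyanide_level(blood_cyanide_mg_per_l: float) -> str:
    """Rough severity banding for a whole-blood cyanide concentration (mg/L),
    using the cut-offs quoted above: 0.5-1 mild, 1-2 moderate,
    2-3 severe, >3 generally fatal. Illustrative only."""
    c = blood_cyanide_mg_per_l
    if c < 0.5:
        return "below the range usually considered toxic"
    elif c < 1:
        return "mild"
    elif c < 2:
        return "moderate"
    elif c < 3:
        return "severe"
    else:
        return "potentially fatal"

# Example: a level of 2.4 mg/L falls in the 'severe' band.
print(classify_cyanide_level(2.4))
```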
If exposure is suspected, the person should be removed from the source of the exposure and decontaminated. Treatment involves supportive care and giving the person 100% oxygen. Hydroxocobalamin (vitamin B12a) appears to be useful as an antidote and is generally first-line. Sodium thiosulphate may also be given. Historically, cyanide has been used for mass suicide and it was used for genocide by the Nazis.
Signs and symptoms
Acute exposure
If hydrogen cyanide is inhaled, it can cause a coma with seizures, apnea, and cardiac arrest, with death following in a matter of seconds. At lower doses, loss of consciousness may be preceded by general weakness, dizziness, headaches, vertigo, confusion, and perceived difficulty in breathing. At the first stages of unconsciousness, breathing is often sufficient or even rapid, although the state of the person progresses towards a deep coma, sometimes accompanied by pulmonary edema, and finally cardiac arrest. A cherry red skin color that darkens may be present as the result of increased venous hemoglobin oxygen saturation. Despite the similar name, cyanide does not directly cause cyanosis. A fatal dose for humans can be as low as 1.5 mg/kg body weight. Other sources claim a lethal dose is 1–3 mg per kg body weight for vertebrates.
Chronic exposure
Exposure to lower levels of cyanide over a long period (e.g., after use of improperly processed cassava roots; cassava is a staple food in various parts of West Africa) results in increased blood cyanide levels, which can result in weakness and a variety of symptoms, including permanent paralysis, nervous lesions, hypothyroidism, and miscarriages. Other effects include mild liver and kidney damage.
Causes
Cyanide poisoning can result from the ingestion of cyanide salts, imbibing pure liquid prussic acid, skin absorption of prussic acid, intravenous infusion of nitroprusside for hypertensive crisis, or the inhalation of hydrogen cyanide gas. The last typically occurs through one of three mechanisms:
The gas is directly released from canisters (e.g., as part of a pesticide, insecticide, or Zyklon B).
It is generated on site by reacting potassium cyanide or sodium cyanide with sulfuric acid (e.g., in a modern American gas chamber).
Fumes arise during a building fire or any similar scenario involving the burning of polyurethane, vinyl or other polymer products that required nitriles in their production.
As potential contributing factors, cyanide is present in:
Tobacco smoke.
Many seeds or kernels such as those of almonds, apricots, apples, oranges, and flaxseed.
Foods including cassava (also known as tapioca, yuca or manioc) and bamboo shoots.
As a potential harm-reduction factor, vitamin B12, in the form of hydroxocobalamin (also spelled hydroxycobalamin), might reduce the negative effects of chronic exposure, whereas a deficiency might worsen negative health effects following exposure to cyanide.
Mechanism
Cyanide is a potent cytochrome c oxidase (COX, a.k.a. Complex IV) inhibitor, causing asphyxiation of cells. As such, cyanide poisoning is a form of histotoxic hypoxia, because it interferes with the ability of cells to take or use oxygen via oxidative phosphorylation.
Specifically, cyanide binds to the heme a3-CuB binuclear center of COX (and thus is a non-competitive inhibitor of it). This prevents electrons passing through COX from being transferred to O2, which not only blocks the mitochondrial electron transport chain, it also interferes with the pumping of a proton out of the mitochondrial matrix which would otherwise occur at this stage. Therefore, cyanide interferes not only with aerobic respiration but also with the ATP synthesis pathway it facilitates, owing to the close relationship between those two processes.
One antidote for cyanide poisoning, nitrite (i.e., via amyl nitrite), works by converting ferrohemoglobin to ferrihemoglobin, which can then compete with COX for free cyanide (as the cyanide will bind to the iron in its heme groups instead). Ferrihemoglobin cannot carry oxygen, but the amount of ferrihemoglobin that can be formed without impairing oxygen transport is much greater than the amount of COX in the body.
Cyanide is a broad-spectrum poison because the reaction it inhibits is essential to aerobic metabolism; COX is found in many forms of life. However, susceptibility to cyanide is far from uniform across affected species; for instance, plants have an alternative electron transfer pathway available that passes electrons directly from ubiquinone to O2, which confers cyanide resistance by bypassing COX.
Diagnosis
Lactate is produced by anaerobic glycolysis when oxygen concentration becomes too low for the normal aerobic respiration pathway. Cyanide poisoning inhibits aerobic respiration and therefore increases anaerobic glycolysis, which causes a rise of lactate in the plasma. A lactate concentration above 10 mmol per liter is an indicator of cyanide poisoning, as defined by the presence of a blood cyanide concentration above 40 μmol per liter. Lactate levels greater than 6 mmol/L after reported or strongly suspected pure cyanide poisoning, such as cyanide-containing smoke exposure, suggest significant cyanide exposure. However, lactate alone is not diagnostic of cyanide poisoning because lactic acidosis is also triggered by many other conditions, including mitochondrial dysfunction.
Methods of detection include colorimetric assays such as the Prussian blue test, the pyridine-barbiturate assay (also known as the "Conway diffusion method"), and the taurine fluorescence-HPLC assay, but, like all colorimetric assays, these are prone to false positives. Lipid peroxidation resulting in "TBARS", an artifact of heart attack, produces dialdehydes that cross-react with the pyridine-barbiturate assay. Meanwhile, the taurine-fluorescence-HPLC assay used for cyanide detection is identical to the assay used to detect glutathione in spinal fluid.
Cyanide and thiocyanate assays have been run with liquid chromatography–tandem mass spectrometry (LC-MS/MS), which is considered a specific test. Since cyanide has a short half-life, its main metabolite, thiocyanate, is typically measured to determine exposure.
Treatment
Decontamination
Decontamination of people exposed to hydrogen cyanide gas only requires removal of the outer clothing and the washing of their hair. Those exposed to liquids or powders generally require full decontamination.
Antidote
The International Programme on Chemical Safety issued a survey (IPCS/CEC Evaluation of Antidotes Series) that lists the following antidotal agents and their effects: oxygen, sodium thiosulfate, amyl nitrite, sodium nitrite, 4-dimethylaminophenol, hydroxocobalamin, and dicobalt edetate ('Kelocyanor'), as well as several others. Another commonly-recommended antidote is 'solutions A and B' (a solution of ferrous sulfate in aqueous citric acid, and aqueous sodium carbonate, respectively).
The United States standard cyanide antidote kit first uses a small inhaled dose of amyl nitrite, followed by intravenous sodium nitrite, followed by intravenous sodium thiosulfate. Hydroxocobalamin was approved for use in the US in late 2006 and is available in Cyanokit antidote kits. Sulfanegen TEA, which could be delivered to the body through an intra-muscular (IM) injection, detoxifies cyanide and converts the cyanide into thiocyanate, a less toxic substance. Alternative methods of treating cyanide intoxication are used in other countries.
The Irish Health Service Executive (HSE) has recommended against the use of solutions A and B because of their limited shelf life, potential to cause iron poisoning, and limited applicability (effective only in cases of cyanide ingestion, whereas the main modes of poisoning are inhalation and skin contact). The HSE has also questioned the usefulness of amyl nitrite due to storage/availability problems, risk of abuse, and lack of evidence of significant benefits. It also states that the availability of kelocyanor at the workplace may mislead doctors into treating a patient for cyanide poisoning when this is an erroneous diagnosis. The HSE no longer recommends a particular cyanide antidote.
History
Fires
The República Cromañón nightclub fire broke out in Buenos Aires, Argentina on 30 December 2004, killing 194 people and leaving at least 1,492 injured. Most of the victims died from inhaling poisonous gases, including carbon monoxide. After the fire, the technical institution INTI found that the level of toxicity due to the materials and volume of the building was 225 ppm of cyanide in the air. A lethal dose for rats is between 150 ppm and 220 ppm, meaning the air in the building was highly toxic.
On 27 January 2013, a fire at the Kiss nightclub in the city of Santa Maria, in the south of Brazil, caused the poisoning of hundreds of young people by cyanide released by the combustion of soundproofing foam made with polyurethane. By March 2013, 245 fatalities were confirmed.
Gas chambers
Research of hydrogen cyanide by chemists Carl Wilhelm Scheele and Claude Bernard would become central to understanding the lethality of future gas chambers. In early 1942, Zyklon B, which contains hydrogen cyanide, emerged as the preferred killing tool of Nazi Germany for use in extermination camps during the Holocaust. The chemical was used to murder roughly one million people in gas chambers installed in extermination camps at Auschwitz-Birkenau, Majdanek, and elsewhere. Most of the people who were murdered were Jews, and by far the majority of these murders took place at Auschwitz. The constituents of Zyklon B were manufactured by several companies under licenses for Degesch, a corporation co-owned by IG Farben, Degussa and Th. Goldschmidt AG. It was sold to the German Army and the Schutzstaffel (SS) by the distributors Heli and Testa, with Heli supplying it to concentration camps at Mauthausen, Dachau, and Buchenwald and Testa to Auschwitz and Majdanek. Camps also occasionally bought Zyklon B directly from the manufacturers. Of the 729 tonnes of Zyklon B sold in Germany in 1942–44, 56 tonnes (about eight percent of domestic sales) were sold to concentration camps. Auschwitz received 23.8 tonnes, of which six tonnes were used for fumigation. The remainder was used in the gas chambers or lost to spoilage (the product had a stated shelf life of only three months). Testa conducted fumigations for the Wehrmacht and supplied them with Zyklon B. They also offered courses to the SS in the safe handling and use of the material for fumigation purposes. In April 1941, the German agriculture and interior ministries designated the SS as an authorized applier of the chemical, and thus they were able to use it without any further training or governmental oversight.
Hydrogen cyanide gas has been used for judicial execution in some states of the United States, where cyanide was generated by reaction between potassium cyanide (or sodium cyanide) dropped into a compartment containing sulfuric acid, directly below the chair in the gas chamber.
Suicide
Cyanide salts are sometimes used as fast-acting suicide agents. Cyanide acts more rapidly when stomach acidity is high.
On 26 January 1904, company promoter and swindler Whitaker Wright died by suicide by ingesting cyanide in a court anteroom immediately after being convicted of fraud.
In February 1937, the Uruguayan short story writer Horacio Quiroga died by suicide by drinking cyanide at a hospital in Buenos Aires.
In 1937, polymer chemist Wallace Carothers died by suicide by cyanide.
In the 1943 Operation Gunnerside to destroy the Vemork Heavy Water Plant in World War II (an attempt to stop or slow German atomic bomb progress), the commandos were given cyanide tablets (cyanide enclosed in rubber) kept in the mouth and were instructed to bite into them in case of German capture. The tablets ensured death within three minutes.
Cyanide, in the form of pure liquid prussic acid (a historical name for hydrogen cyanide), was the favored suicide agent of Nazi Germany. Erwin Rommel (1944), Adolf Hitler's wife, Eva Braun (1945), and Nazi leaders Heinrich Himmler (1945), possibly Martin Bormann (1945), and Hermann Göring (1946) all died by suicide by ingesting it.
It is speculated that, in 1954, Alan Turing used an apple that had been injected with a solution of cyanide to die by suicide after being convicted of having a homosexual relationship, which was illegal at the time in the United Kingdom, and forced to undergo hormonal castration to avoid prison. An inquest determined that Turing's death from cyanide poisoning was a suicide, although this has been disputed.
Members of the Sri Lankan Tamil (or Eelam Tamil) LTTE (Liberation Tigers of Tamil Eelam, whose insurgency lasted from 1983 to 2009), used to wear cyanide vials around their necks with the intention of dying by suicide if captured by the government forces.
On 22 June 1977, Moscow, Aleksandr Dmitrievich Ogorodnik, a Soviet diplomat accused of spying on behalf of the Colombian Intelligence Agency and the US Central Intelligence Agency, was arrested. During the interrogations, Ogorodnik offered to write a full confession and asked for his pen. Inside the pen cap was a hidden cyanide pill, which when bitten on, caused Ogorodnik to die before he hit the floor, according to the Soviets.
On 18 November 1978, Jonestown. A total of 909 individuals died in Jonestown, many from apparent cyanide poisoning, in an event termed "revolutionary suicide" by Jones and some members on an audio tape of the event and in prior discussions. The poisonings in Jonestown followed the murder of five others by Temple members at Port Kaituma, including United States Congressman Leo Ryan, an act that Jones ordered. Four other Temple members died by murder-suicide in Georgetown at Jones' command.
On 6 June 1985, serial killer Leonard Lake died in custody after having ingested cyanide pills he had sewn into his clothes.
On 28 June 2012, Wall Street trader Michael Marin ingested a cyanide pill seconds after a guilty verdict was read in his arson trial in Phoenix, Arizona; he died minutes after.
On 22 June 2015, John B. McLemore, a horologist and the central figure of the podcast S-Town, died after ingesting cyanide.
On 29 November 2017, Slobodan Praljak died from drinking potassium cyanide, after being convicted of war crimes by the International Criminal Tribunal for the former Yugoslavia.
Mining and industrial
In 1993, an illegal spill resulted in the death of seven people in Avellaneda, Argentina. In their memory, the National Environmental Conscious Day (Día Nacional de la Conciencia Ambiental) was established.
In 2000, a spill at Baia Mare, Romania, resulted in the worst environmental disaster in Europe since Chernobyl.
In 2000, Allen Elias, CEO of Evergreen Resources was convicted of knowing endangerment for his role in the cyanide poisoning of employee Scott Dominguez. This was one of the first successful criminal prosecutions of a corporate executive by the Environmental Protection Agency.
Murder
John Tawell, a murderer who in 1845 became the first person to be arrested as the result of telecommunications technology.
Grigori Rasputin (1916; attempted, later killed by gunshot)
The Goebbels children (1945)
Stepan Bandera (1959)
Jonestown, Guyana, was the site of a large mass murder–suicide, in which over 900 members of the Peoples Temple drank potassium cyanide–laced Flavor Aid in 1978.
Chicago Tylenol murders (1982)
Timothy Marc O'Bryan (1966–1974) died on October 31, 1974, by ingesting potassium cyanide placed into a giant Pixy Stix. His father, Ronald Clark O'Bryan, was convicted of Tim's murder plus four counts of attempted murder. O'Bryan put potassium cyanide into five giant Pixy Stix that he gave to his son and daughter along with three other children. Only Timothy ate the poisoned candy and died.
Bruce Nickell and Sue Snow (5 June 1986) Murdered by Stella Nickell who poisoned bottles of Excedrin.
Richard Kuklinski (1935–2006)
Janet Overton (1942–1988) Her husband, Richard Overton, was convicted of poisoning her, but Janet's symptoms did not match those of classic cyanide poisoning, the timeline was inconsistent with cyanide poisoning, and the amount found was just a trace. The diagnostic method used was prone to false positives. Richard Overton died in prison in 2009.
Urooj Khan (1966–2012), won the lottery and was found dead a few days later. A blood diagnostic reported a lethal level of cyanide in his blood, but the body did not display any classic symptoms of cyanide poisoning, and no link to cyanide could be found in Urooj's social circle. The diagnostic method used was the Conway diffusion method, prone to false positives with artifacts of heart attack and kidney failure. The chemistry of this and other false positives could be linked to the TBARS response following heart failure.
Autumn Marie Klein (20 April 2013), a prominent 41-year-old neuroscientist and physician, died from cyanide poisoning. Klein's husband, Robert J. Ferrante, also a prominent neuroscientist who used cyanide in his research, was convicted of murder and sentenced to life in prison for her death. Robert Ferrante is appealing his conviction, claiming the cyanide was a false positive.
Mirna Salihin died in hospital on 6 January 2016, after drinking a Vietnamese iced coffee at a cafe in a shopping mall in Jakarta. Police reports claim that cyanide poisoning was the most likely cause of her death.
Jolly Thomas of Kozhikode, Kerala, India, was arrested in 2019 for the murder of 6 family members. Murders took place over a 14-year period, and each victim ate a meal prepared by the killer. The murders were allegedly motivated by wanting control of the family finances and property.
Mei Xiang Li of Brooklyn, New York, collapsed and died in April 2017, with cyanide later reported to be in her blood. However, Mei never exhibited symptoms of cyanide poisoning and no link to cyanide could be found in her life. Another likely false positive.
Sararath "Am" Rangsiwutthiporn, who quickly became known as "Am Cyanide" in Thai media, was arrested by the Thai police for allegedly poisoning 11 of her friends and acquaintances between 2020 and 2023, with 10 deaths and one surviving supposed victim. According to an ongoing investigation, the number of victims is currently estimated at 20–30 persons, most of whom died, with several survivors.
Warfare or terrorism
In 1988, between 3,200 and 5,000 people died in the Halabja massacre from chemical agents that were not conclusively identified; hydrogen cyanide gas was strongly suspected.
In 1995, a device was discovered in a restroom in the Kayabachō Tokyo subway station, consisting of bags of sodium cyanide and sulfuric acid with a remote controlled motor to rupture them, in what was believed to be an attempt by the Aum Shinrikyo cult to produce toxic amounts of hydrogen cyanide gas.
In 2003, Al Qaeda reportedly planned to release cyanide gas into the New York City Subway system. The attack was supposedly aborted because there would not be enough casualties.
Research
Cobinamide is the final compound in the biosynthesis of cobalamin. It has a greater affinity for cyanide than cobalamin itself, which suggests that it could be a better option for emergency treatment.
See also
Amygdalin
Anaerobic glycolysis
Lactic acidosis
List of poisonings
Konzo
Murburn concept
References
Explanatory notes
Citations
Sources
Cyanides
Neurotoxins
Toxic effects of substances chiefly nonmedicinal as to source
Medical Subject Headings

Medical Subject Headings (MeSH) is a comprehensive controlled vocabulary for the purpose of indexing journal articles and books in the life sciences. It serves as a thesaurus that facilitates searching. Created and updated by the United States National Library of Medicine (NLM), it is used by the MEDLINE/PubMed article database and by NLM's catalog of book holdings. MeSH is also used by the ClinicalTrials.gov registry to classify which diseases are studied by trials registered in ClinicalTrials.
MeSH was introduced in the 1960s, with the NLM's own index catalogue and the subject headings of the Quarterly Cumulative Index Medicus (1940 edition) as precursors. The yearly printed version of MeSH was discontinued in 2007; MeSH is now available only online. It can be browsed and downloaded free of charge through PubMed. Originally in English, MeSH has been translated into numerous other languages and allows retrieval of documents from different origins.
Structure
MeSH vocabulary is divided into four types of terms. The main ones are the "headings" (also known as MeSH headings or descriptors), which describe the subject of each article (e.g., "Body Weight", "Brain Edema" or "Critical Care Nursing"). Most of these are accompanied by a short description or definition, links to related descriptors, and a list of synonyms or very similar terms (known as entry terms). MeSH contains approximately 30,000 entries and is updated annually to reflect changes in medicine and medical terminology. MeSH terms are arranged alphabetically and in a hierarchical structure by subject category, with more specific terms arranged beneath broader terms. When a MeSH term is searched, the more specific terms beneath it in the hierarchy are automatically included in the search; this is known as the extended search or "explosion" of that MeSH term. This additional information and the hierarchical structure (see below) make MeSH essentially a thesaurus, rather than a plain subject headings list.
The second type of term, MeSH subheadings or qualifiers (see below), can be used with MeSH terms to more completely describe a particular aspect of a subject, such as adverse, diagnostic or genetic effects. For example, the drug therapy of asthma is displayed as asthma/drug therapy.
The remaining two types of term are those that describe the type of material that the article represents (publication types), and supplementary concept records (SCR) which describes substances such as chemical products and drugs that are not included in the headings (see below as "Supplements").
Descriptor hierarchy
The descriptors or subject headings are arranged in a hierarchy. A given descriptor may appear at several locations in the hierarchical tree. The tree locations carry systematic labels known as tree numbers, and consequently one descriptor can carry several tree numbers. For example, the descriptor "Digestive System Neoplasms" has the tree numbers C06.301 and C04.588.274; C stands for Diseases, C06 for Digestive System Diseases and C06.301 for Digestive System Neoplasms; C04 for Neoplasms, C04.588 for Neoplasms By Site, and C04.588.274 also for Digestive System Neoplasms. The tree numbers of a given descriptor are subject to change as MeSH is updated. Every descriptor also carries a unique alphanumerical ID that will not change.
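Because every tree number encodes its own ancestry, the hierarchy can be walked by simple string manipulation. The sketch below uses the descriptor names and tree numbers quoted above; the dictionary is only an illustrative stand-in, not the NLM's actual file format (MeSH is distributed as XML/RDF).

```python
# Minimal illustration of how MeSH tree numbers encode ancestry.
# Sample data mirrors the example in the text; it is not the real MeSH file.
tree = {
    "C04.588.274": "Digestive System Neoplasms",
    "C06.301": "Digestive System Neoplasms",
    "C06": "Digestive System Diseases",
    "C04.588": "Neoplasms By Site",
    "C04": "Neoplasms",
}

def ancestors(tree_number):
    """Return the chain of ancestor tree numbers, nearest first.

    'C04.588.274' -> ['C04.588', 'C04']
    """
    parts = tree_number.split(".")
    return [".".join(parts[:i]) for i in range(len(parts) - 1, 0, -1)]

for anc in ancestors("C04.588.274"):
    print(anc, "=", tree.get(anc, "?"))
```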
Descriptions
Most subject headings come with a short description or definition. See the MeSH description for diabetes type 2 as an example. The explanatory text is written by the MeSH team based on their standard sources if not otherwise stated. References are mostly encyclopaedias and standard textbooks of the subject areas. References for specific statements in the descriptions are not given; instead, readers are referred to the bibliography.
Qualifiers
In addition to the descriptor hierarchy, MeSH contains a small number of standard qualifiers (also known as subheadings), which can be added to descriptors to narrow down the topic. For example, "Measles" is a descriptor and "epidemiology" is a qualifier; "Measles/epidemiology" describes epidemiological articles about measles. The "epidemiology" qualifier can be added to all other disease descriptors. Not all descriptor/qualifier combinations are allowed, since some of them may be meaningless. In all, there are 83 different qualifiers.
Supplements
In addition to the descriptors, MeSH also contains some 318,000 supplementary concept records. These do not belong to the controlled vocabulary as such; instead they enlarge the thesaurus and contain links to the closest fitting descriptor to be used in a MEDLINE search. Many of these records describe chemical substances.
Use in Medline/PubMed
In MEDLINE/PubMed, every journal article is indexed with about 10–15 subject headings, subheadings and supplementary concept records, with some of them designated as major and marked with an asterisk, indicating the article's major topics. When performing a MEDLINE search via PubMed, entry terms are automatically translated into (i.e., mapped to) the corresponding descriptors with a good degree of reliability; it is recommended to check the 'Details' tab in PubMed to see how a search formulation was translated. By default, a search for a descriptor will include all the descriptors in the hierarchy below the given one. PubMed does not apply automatic term mapping when the search phrase is quoted (e.g. "kidney allograft"), when a term is truncated with an asterisk, or when field labels are used.
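As a rough sketch of how a MeSH-based PubMed search can be issued programmatically, the snippet below calls NCBI's public E-utilities `esearch` endpoint with a `[MeSH Terms]` field tag. The endpoint, parameters and field tag are real, but result counts, rate limits and API-key requirements should be checked against NCBI's current documentation; this is an illustration, not the only or official way to query PubMed.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Search PubMed for articles indexed under the MeSH heading/qualifier
# combination "Asthma/drug therapy". By default the search "explodes"
# the heading, i.e. it also matches more specific descriptors below it.
params = urlencode({
    "db": "pubmed",
    "term": '"asthma/drug therapy"[MeSH Terms]',
    "retmode": "json",
    "retmax": 5,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urlopen(url) as response:
    result = json.load(response)["esearchresult"]

print("Matching records:", result["count"])
print("First PMIDs:", result["idlist"])
```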
Use at ClinicalTrials.gov
At ClinicalTrials.gov, each trial has keywords that describe the trial. The ClinicalTrials.gov team assigns each trial two sets of MeSH terms. One set is for the conditions studied by the trial and the other for the set of interventions used in the trial. The XML file that can be downloaded for each trial contains these MeSH keywords. The XML file also has a comment that says: "the assignment of MeSH keywords is done by imperfect algorithm".
Categories
The top-level categories in the MeSH descriptor hierarchy are:
Anatomy [A]
Organisms [B]
Diseases [C]
Chemicals and Drugs [D]
Analytical, Diagnostic and Therapeutic Techniques, and Equipment [E]
Psychiatry and Psychology [F]
Phenomena and Processes [G]
Disciplines and Occupations [H]
Anthropology, Education, Sociology and Social Phenomena [I]
Technology, Industry, and Agriculture [J]
Humanities [K]
Information Science [L]
Named Groups [M]
Health Care [N]
Publication Characteristics [V]
Geographicals [Z]
See also
Medical classification
Medical literature retrieval
References
External links
Medical Subject Heading Home provided by National Library of Medicine, National Institutes of Health (U.S.)
MeSH tutorials
Automatic Term Mapping
Browsing MeSH:
Entrez
MeSH Browser
Visual MeSH Browser mapping drug-disease relationships in research
Reference.MD
Biological databases
Library cataloging and classification
Medical classification
Thesauri
United States National Library of Medicine
Myopathy

In medicine, myopathy is a disease of the muscle in which the muscle fibers do not function properly. Myopathy means muscle disease (Greek : myo- muscle + patheia -pathy : suffering). This meaning implies that the primary defect is within the muscle, as opposed to the nerves ("neuropathies" or "neurogenic" disorders) or elsewhere (e.g., the brain).
This muscular defect typically results in myalgia (muscle pain), muscle weakness (reduced muscle force), or premature muscle fatigue (initially normal, but declining muscle force). Muscle cramps, stiffness, spasm, and contracture can also be associated with myopathy. Myopathy experienced over a long period (chronic) may result in the muscle becoming an abnormal size, such as muscle atrophy (abnormally small) or a pseudoathletic appearance (abnormally large).
Capture myopathy can occur in wild or captive animals, such as deer and kangaroos, and leads to morbidity and mortality. It usually occurs as a result of stress and physical exertion during capture and restraint.
Muscular disease can be classified as neuromuscular or musculoskeletal in nature. Some conditions, such as myositis, can be considered both neuromuscular and musculoskeletal. Different myopathies may be inherited, infectious, non-communicable, or idiopathic (cause unknown). The disease may be isolated to affecting only muscle (pure myopathy), or may be part of a systemic disease as is typical in mitochondrial myopathies.
Signs and symptoms
Common symptoms include muscle weakness, cramps, stiffness, and tetany.
Systemic diseases
Myopathies in systemic disease result from several different disease processes, including endocrine, inflammatory, paraneoplastic, infectious, drug- and toxin-induced, critical illness myopathy, metabolic, collagen-related, and myopathies with other systemic disorders. Patients with systemic myopathies often present acutely or subacutely. On the other hand, familial myopathies or dystrophies generally present in a chronic fashion, with the exception of metabolic myopathies, in which symptoms can on occasion be precipitated acutely. Metabolic myopathies, which affect the production of ATP within the muscle cell, typically present with dynamic (exercise-induced) rather than static symptoms. Most of the inflammatory myopathies can have a chance association with malignant lesions; the incidence appears to be specifically increased only in patients with dermatomyositis.
There are many types of myopathy. ICD-10 codes are provided here where available.
Inherited forms
(G71.0) Dystrophies (or muscular dystrophies) are a subgroup of myopathies characterized by muscle degeneration and regeneration. Clinically, muscular dystrophies are typically progressive, because the muscles' ability to regenerate is eventually lost, leading to progressive weakness, often leading to use of a wheelchair, and eventually death, usually related to respiratory weakness.
(G71.1) Myotonia
Neuromyotonia
(G71.2) The congenital myopathies do not show evidence for either a progressive dystrophic process (i.e., muscle death) or inflammation, but instead characteristic microscopic changes are seen in association with reduced contractile ability of the muscles. Congenital myopathies include, but are not limited to:
(G71.2) nemaline myopathy (characterized by presence of "nemaline rods" in the muscle),
(G71.2) multi/minicore myopathy (characterized by multiple small "cores" or areas of disruption in the muscle fibers),
(G71.2) centronuclear myopathy (or myotubular myopathy) (in which the nuclei are abnormally found in the center of the muscle fibers), a rare muscle wasting disorder
(G71.3) Mitochondrial myopathies, which are due to defects in mitochondria, which provide a critical source of energy for muscle
(G72.3) Familial periodic paralysis
(G72.4) Inflammatory myopathies, which are caused by problems with the immune system attacking components of the muscle, leading to signs of inflammation in the muscle
(G73.6) Metabolic myopathies, which result from defects in biochemical metabolism that primarily affect muscle
(G73.6/E74.0) Glycogen storage diseases, which may affect muscle
(G73.6/E75) Lipid storage disorder
(G72.89) Other myopathies
Brody myopathy
Congenital myopathy with abnormal subcellular organelles
Fingerprint body myopathy
Inclusion body myopathy 2
Megaconial myopathy
Myofibrillar myopathy
Rimmed vacuolar myopathy
Acquired
(G72.0 - G72.2) External substance induced myopathy
(G72.0) Drug-induced myopathy
Glucocorticoid myopathy is caused by this class of steroids increasing the breakdown of the muscle proteins leading to muscle atrophy.
(G72.1) Alcoholic myopathy
(G72.2) Myopathy due to other toxic agents - including atypical myopathy in horses caused by toxins in sycamore seeds and seedlings.
(M33.0-M33.1) Dermatomyositis produces muscle weakness and skin changes. The skin rash is reddish and most commonly occurs on the face, especially around the eyes, and over the knuckles and elbows. Ragged nail folds with visible capillaries can be present. It can often be treated by drugs like corticosteroids or immunosuppressants.
(M33.2) Polymyositis produces muscle weakness. It can often be treated by drugs like corticosteroids or immunosuppressants.
Inclusion body myositis is a slowly progressive disease that produces weakness of hand grip and straightening of the knees. No effective treatment is known.
(M60.9) Benign acute childhood myositis
(M61) Myositis ossificans
(M62.89) Rhabdomyolysis and (R82.1) myoglobinurias
The Food and Drug Administration is recommending that physicians restrict prescribing high-dose Simvastatin (Zocor, Merck) to patients, given an increased risk of muscle damage. The FDA drug safety communication stated that physicians should limit using the 80-mg dose unless the patient has already been taking the drug for 12 months and there is no evidence of myopathy.
"Simvastatin 80 mg should not be started in new patients, including patients already taking lower doses of the drug," the agency states.
Statin-associated autoimmune myopathy
Myocardium / cardiomyopathy
Acute myocarditis
Myocarditis in diseases classified elsewhere
Cardiomyopathy
Dilated cardiomyopathy
Obstructive hypertrophic cardiomyopathy
Other hypertrophic cardiomyopathy
Endomyocardial (eosinophilic) disease
Eosinophilic myocarditis
Endomyocardial (tropical) fibrosis
Löffler's endocarditis
Endocardial fibroelastosis
Other restrictive cardiomyopathy
Alcoholic cardiomyopathy
Other cardiomyopathies
Arrhythmogenic right ventricular dysplasia
Cardiomyopathy in diseases classified elsewhere
Differential diagnosis
At birth
No systemic causes; mainly hereditary
Onset in childhood
Inflammatory myopathies – dermatomyositis, polymyositis (rarely)
Infectious myopathies
Endocrine and metabolic disorders – hypokalemia, hypocalcemia, hypercalcemia
Onset in adulthood
Inflammatory myopathies – polymyositis, dermatomyositis, inclusion body myositis, viral (HIV)
Infectious myopathies
Endocrine myopathies – thyroid, parathyroid, adrenal, pituitary disorders
Toxic myopathies – alcohol, corticosteroids, narcotics, colchicines, chloroquine
Critical illness myopathy
Metabolic myopathies
Paraneoplastic myopathy
Treatments
Because different types of myopathies are caused by many different pathways, there is no single treatment for myopathy. Treatments range from treatment of the symptoms to very specific cause-targeting treatments. Drug therapy, physical therapy, bracing for support, surgery, and massage are all current treatments for a variety of myopathies.
References
External links
GeneReviews/NCBI/NIH/UW entry on Myopathy with Deficiency of ISCU
See http://neuromuscular.wustl.edu/ for medical descriptions.
Muscular disorders
Ecology

Ecology is the natural science of the relationships among living organisms, including humans, and their physical environment. Ecology considers organisms at the individual, population, community, ecosystem, and biosphere levels. Ecology overlaps with the closely related sciences of biogeography, evolutionary biology, genetics, ethology, and natural history.
Ecology is a branch of biology, and is the study of abundance, biomass, and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations; movement of materials and energy through living communities; successional development of ecosystems; cooperation, competition, and predation within and between species; and patterns of biodiversity and its effect on ecosystem processes.
Ecology has practical applications in conservation biology, wetland management, natural resource management (agroecology, agriculture, forestry, agroforestry, fisheries, mining, tourism), urban planning (urban ecology), community health, economics, basic and applied science, and human social interaction (human ecology).
The word ecology was coined in 1866 by the German scientist Ernst Haeckel. The science of ecology as we know it today began with a group of American botanists in the 1890s. Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory.
Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living (abiotic) components of their environment. Ecosystem processes, such as primary production, nutrient cycling, and niche construction, regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living (biotic) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production (food, fuel, fiber, and medicine), the regulation of climate, global biogeochemical cycles, water filtration, soil formation, erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value.
Levels, scope, and scale of organization
The scope of ecology contains a wide array of interacting levels of organization spanning micro-level (e.g., cells) to a planetary scale (e.g., biosphere) phenomena. Ecosystems, for example, contain abiotic resources and interacting life forms (i.e., individual organisms that aggregate into populations which aggregate into distinct ecological communities). Because ecosystems are dynamic and do not necessarily follow a linear successional route, changes might occur quickly or slowly over thousands of years before specific forest successional stages are brought about by biological processes. An ecosystem's area can vary greatly, from tiny to vast. A single tree is of little consequence to the classification of a forest ecosystem, but is critically relevant to organisms living in and on it. Several generations of an aphid population can exist over the lifespan of a single leaf. Each of those aphids, in turn, supports diverse bacterial communities. The nature of connections in ecological communities cannot be explained by knowing the details of each species in isolation, because the emergent pattern is neither revealed nor predicted until the ecosystem is studied as an integrated whole. Some ecological principles, however, do exhibit collective properties where the sum of the components explain the properties of the whole, such as birth rates of a population being equal to the sum of individual births over a designated time frame.
The main subdisciplines of ecology, population (or community) ecology and ecosystem ecology, exhibit a difference not only in scale but also in two contrasting paradigms in the field. The former focuses on organisms' distribution and abundance, while the latter focuses on materials and energy fluxes.
Hierarchy
The scale of ecological dynamics can operate like a closed system, such as aphids migrating on a single tree, while at the same time remaining open about broader scale influences, such as atmosphere or climate. Hence, ecologists classify ecosystems hierarchically by analyzing data collected from finer scale units, such as vegetation associations, climate, and soil types, and integrate this information to identify emergent patterns of uniform organization and processes that operate on local to regional, landscape, and chronological scales.
To structure the study of ecology into a conceptually manageable framework, the biological world is organized into a nested hierarchy, ranging in scale from genes, to cells, to tissues, to organs, to organisms, to species, to populations, to guilds, to communities, to ecosystems, to biomes, and up to the level of the biosphere. This framework forms a panarchy and exhibits non-linear behaviors; this means that "effect and cause are disproportionate, so that small changes to critical variables, such as the number of nitrogen fixers, can lead to disproportionate, perhaps irreversible, changes in the system properties."
Biodiversity
Biodiversity (an abbreviation of "biological diversity") describes the diversity of life from genes to ecosystems and spans every level of biological organization. The term has several interpretations, and there are many ways to index, measure, characterize, and represent its complex organization. Biodiversity includes species diversity, ecosystem diversity, and genetic diversity and scientists are interested in the way that this diversity affects the complex ecological processes operating at and among these respective levels. Biodiversity plays an important role in ecosystem services which by definition maintain and improve human quality of life. Conservation priorities and management techniques require different approaches and considerations to address the full ecological scope of biodiversity. Natural capital that supports populations is critical for maintaining ecosystem services and species migration (e.g., riverine fish runs and avian insect control) has been implicated as one mechanism by which those service losses are experienced. An understanding of biodiversity has practical applications for species and ecosystem-level conservation planners as they make management recommendations to consulting firms, governments, and industry.
Habitat
The habitat of a species describes the environment over which a species is known to occur and the type of community that is formed as a result. More specifically, "habitats can be defined as regions in environmental space that are composed of multiple dimensions, each representing a biotic or abiotic environmental variable; that is, any component or characteristic of the environment related directly (e.g. forage biomass and quality) or indirectly (e.g. elevation) to the use of a location by the animal." For example, a habitat might be an aquatic or terrestrial environment that can be further categorized as a montane or alpine ecosystem. Habitat shifts provide important evidence of competition in nature where one population changes relative to the habitats that most other individuals of the species occupy. For example, one population of a species of tropical lizard (Tropidurus hispidus) has a flattened body relative to the main populations that live in open savanna. The population that lives in an isolated rock outcrop hides in crevasses where its flattened body offers a selective advantage. Habitat shifts also occur in the developmental life history of amphibians, and in insects that transition from aquatic to terrestrial habitats. Biotope and habitat are sometimes used interchangeably, but the former applies to a community's environment, whereas the latter applies to a species' environment.
Niche
Definitions of the niche date back to 1917, but G. Evelyn Hutchinson made conceptual advances in 1957 by introducing a widely adopted definition: "the set of biotic and abiotic conditions in which a species is able to persist and maintain stable population sizes." The ecological niche is a central concept in the ecology of organisms and is sub-divided into the fundamental and the realized niche. The fundamental niche is the set of environmental conditions under which a species is able to persist. The realized niche is the set of environmental plus ecological conditions under which a species persists. The Hutchinsonian niche is defined more technically as a "Euclidean hyperspace whose dimensions are defined as environmental variables and whose size is a function of the number of values that the environmental values may assume for which an organism has positive fitness."
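Read literally, the Hutchinsonian niche is a region of environmental space, so a minimal sketch of it is a set of tolerated ranges, one per environmental axis, against which the conditions at a location can be checked. The variables and limits below are invented purely to illustrate the hypervolume idea, not data for any real species.

```python
# A Hutchinsonian niche sketched as a box-shaped hypervolume in environmental space.
# Each axis is an environmental variable with a tolerated (min, max) range.
# Variables and limits are invented for illustration.
fundamental_niche = {
    "temperature_c": (5.0, 30.0),
    "annual_rainfall_mm": (400.0, 1200.0),
    "soil_ph": (5.5, 7.5),
}

def within_niche(conditions, niche):
    """True if every environmental variable falls inside its tolerated range."""
    return all(lo <= conditions[var] <= hi for var, (lo, hi) in niche.items())

site = {"temperature_c": 18.0, "annual_rainfall_mm": 800.0, "soil_ph": 6.8}
print(within_niche(site, fundamental_niche))   # True: the site lies inside the niche
```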
Biogeographical patterns and range distributions are explained or predicted through knowledge of a species' traits and niche requirements. Species have functional traits that are uniquely adapted to the ecological niche. A trait is a measurable property, phenotype, or characteristic of an organism that may influence its survival. Genes play an important role in the interplay of development and environmental expression of traits. Resident species evolve traits that are fitted to the selection pressures of their local environment. This tends to afford them a competitive advantage and discourages similarly adapted species from having an overlapping geographic range. The competitive exclusion principle states that two species cannot coexist indefinitely by living off the same limiting resource; one will always out-compete the other. When similarly adapted species overlap geographically, closer inspection reveals subtle ecological differences in their habitat or dietary requirements. Some models and empirical studies, however, suggest that disturbances can stabilize the co-evolution and shared niche occupancy of similar species inhabiting species-rich communities. The habitat plus the niche is called the ecotope, which is defined as the full range of environmental and biological variables affecting an entire species.
Niche construction
Organisms are subject to environmental pressures, but they also modify their habitats. The regulatory feedback between organisms and their environment can affect conditions from local (e.g., a beaver pond) to global scales, over time and even after death, such as decaying logs or silica skeleton deposits from marine organisms. The process and concept of ecosystem engineering are related to niche construction, but the former relates only to the physical modifications of the habitat whereas the latter also considers the evolutionary implications of physical changes to the environment and the feedback this causes on the process of natural selection. Ecosystem engineers are defined as: "organisms that directly or indirectly modulate the availability of resources to other species, by causing physical state changes in biotic or abiotic materials. In so doing they modify, maintain and create habitats."
The ecosystem engineering concept has stimulated a new appreciation for the influence that organisms have on the ecosystem and evolutionary process. The term "niche construction" is more often used in reference to the under-appreciated feedback mechanisms of natural selection imparting forces on the abiotic niche. An example of natural selection through ecosystem engineering occurs in the nests of social insects, including ants, bees, wasps, and termites. There is an emergent homeostasis or homeorhesis in the structure of the nest that regulates, maintains and defends the physiology of the entire colony. Termite mounds, for example, maintain a constant internal temperature through the design of air-conditioning chimneys. The structure of the nests themselves is subject to the forces of natural selection. Moreover, a nest can survive over successive generations, so that progeny inherit both genetic material and a legacy niche that was constructed before their time.
Biome
Biomes are larger units of organization that categorize regions of the Earth's ecosystems, mainly according to the structure and composition of vegetation. There are different methods to define the continental boundaries of biomes dominated by different functional types of vegetative communities that are limited in distribution by climate, precipitation, weather, and other environmental variables. Biomes include tropical rainforest, temperate broadleaf and mixed forest, temperate deciduous forest, taiga, tundra, hot desert, and polar desert. Other researchers have recently categorized other biomes, such as the human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape. Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans.
Biosphere
The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. For example, the dynamic history of the planetary atmosphere's CO2 and O2 composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals. Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale: for example, the Gaia hypothesis is an example of holism applied in ecological theory. The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance.
Population ecology
Population ecology studies the dynamics of species populations and how these populations interact with the wider environment. A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat.
A primary law of population ecology is the Malthusian growth model which states, "a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant." Simplified population models usually start with four variables: death, birth, immigration, and emigration.
An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis which states that random processes create the observed data. In these island models, the rate of population change is described by:

dN/dt = bN − dN = (b − d)N = rN

where N is the total number of individuals in the population, b and d are the per capita rates of birth and death respectively, and r is the per capita rate of population change.
Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst:

dN(t)/dt = rN(t)(1 − αN(t))

where N(t) is the number of individuals measured as biomass density as a function of time, t, r is the maximum per-capita rate of change, commonly known as the intrinsic rate of growth, and α is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the rate of change in population size (dN(t)/dt) will grow to approach equilibrium (dN(t)/dt = 0) when the rates of increase and crowding are balanced, i.e. when αN(t) = 1. A common, analogous model fixes this equilibrium as K, which is known as the "carrying capacity."
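To make the behaviour of the two models concrete, the short simulation below integrates the exponential and logistic equations with a simple Euler step. The parameter values (r = 0.5 per year, α = 0.001 so K = 1000, N0 = 10) are arbitrary illustrative choices, not data from any study.

```python
# Euler integration of the exponential and logistic growth models.
# r, alpha and N0 are arbitrary illustrative values.
r = 0.5            # intrinsic per-capita growth rate (per year)
alpha = 0.001      # crowding coefficient; equilibrium K = 1/alpha = 1000
N0 = 10.0          # initial population size
dt = 0.1           # time step (years)

n_exp, n_log = N0, N0
for step in range(1, int(50 / dt) + 1):              # simulate 50 years
    n_exp += r * n_exp * dt                          # dN/dt = rN
    n_log += r * n_log * (1 - alpha * n_log) * dt    # dN/dt = rN(1 - aN)
    if step % 100 == 0:                              # report every 10 years
        print(f"t={step * dt:5.1f}  exponential={n_exp:14.1f}  logistic={n_log:8.1f}")
```

The exponential population keeps accelerating, while the logistic one levels off near the carrying capacity of 1000.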
Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analyzed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas. In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion, or use models that can become mathematically complex as "several competing hypotheses are simultaneously confronted with the data."
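One standard piece of that matrix algebra is the Leslie matrix, which projects an age-structured population forward one time step by combining age-specific fecundity and survivorship. The fecundity and survival values in the sketch below are invented purely to show the mechanics, not estimates for any real population.

```python
# Leslie-matrix-style projection of an age-structured population.
# Fecundity and survival values are invented for illustration.
fecundity = [0.0, 1.5, 2.0]        # offspring per individual in each age class
survival  = [0.5, 0.3]             # probability of surviving to the next class

def project(ages, years):
    """Advance the age-class vector 'ages' by 'years' time steps."""
    for _ in range(years):
        births = sum(f * n for f, n in zip(fecundity, ages))
        ages = [births] + [s * n for s, n in zip(survival, ages[:-1])]
    return ages

population = [100.0, 50.0, 20.0]   # individuals in age classes 0, 1, 2
for year in range(5):
    print(f"year {year}: total = {sum(population):7.1f}  by age = "
          + ", ".join(f"{n:6.1f}" for n in population))
    population = project(population, 1)
```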
Metapopulations and migration
The concept of metapopulations was defined in 1969 as "a population of populations which go extinct locally and recolonize". Metapopulation ecology is another statistical approach that is often used in conservation research. Metapopulation models simplify the landscape into patches of varying levels of quality, and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat. Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another. There is a larger taxonomy of movement, such as commuting, foraging, territorial behavior, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population.
In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favorable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favorable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure.
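The classic quantitative formulation of this extinction–colonization balance is the patch-occupancy model usually attributed to Richard Levins (1969): dp/dt = c·p(1 − p) − e·p, where p is the fraction of patches occupied, c the colonization rate and e the extinction rate. The short simulation below, with arbitrary rate values, shows occupancy settling toward the equilibrium p* = 1 − e/c; it is a sketch of the textbook model, not of any particular study.

```python
# Levins patch-occupancy model: dp/dt = c*p*(1 - p) - e*p
# c and e are arbitrary illustrative rates.
c = 0.4    # colonization rate
e = 0.1    # extinction rate
p = 0.05   # initial fraction of occupied patches
dt = 0.1

for step in range(1, 1001):
    p += (c * p * (1 - p) - e * p) * dt
    if step % 200 == 0:
        print(f"t={step * dt:6.1f}  occupancy={p:.3f}")

print("predicted equilibrium p* = 1 - e/c =", 1 - e / c)
```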
Community ecology
Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals.
Ecosystem ecology
Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. Ecosystem ecology is the science of determining the fluxes of materials (e.g. carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m^2) in a wetland in relation to decomposition and consumption rates (g C/m^2/y). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria).
The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh ("Man and Nature"). Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted. Ecosystems are complex adaptive systems in which the interaction of life processes forms self-organizing patterns across different scales of time and space. Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shape the biodiversity within each. A more recent addition to ecosystem ecology is the technoecosystem, which is affected by or primarily the result of human activity.
Food webs
A food web is the archetypal ecological network. Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. The simplified linear feeding pathways that move from a basal trophic species to a top consumer are called food chains. Food chains in an ecological community combine to create a complex food web. Food webs are a type of concept map that is used to illustrate and study pathways of energy and material flows.
Empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from small-scale studies are extrapolated to larger systems. Feeding relations require extensive investigations, e.g. into the gut contents of organisms, which can be difficult to decipher, or stable isotopes can be used to trace the flow of nutrient diets and energy through a food web. Despite these limitations, food webs remain a valuable tool in understanding community ecosystems.
Food webs illustrate important principles of ecology: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer stronger feeding links (e.g., primary predators). Such linkages explain how ecological communities remain stable over time and eventually can illustrate a "complete" web of life.
The disruption of food webs may have a dramatic impact on the ecology of individual species or whole ecosystems. For instance, the replacement of an ant species by another (invasive) ant species has been shown to affect how elephants reduce tree cover and thus the predation of lions on zebras.
Trophic levels
A trophic level (from Greek troph, τροφή, trophē, meaning "food" or "feeding") is "a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source." Links in food webs primarily connect feeding relations or trophism among species. Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level. When the relative abundance or biomass of each species is sorted into its respective trophic level, they naturally sort into a 'pyramid of numbers'.
Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production). Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators). Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because, compared to herbivores, they are relatively inefficient at grazing.
Trophic levels are part of the holistic or complex systems view of ecosystems. Each trophic level contains unrelated species that are grouped together because they share common ecological functions, giving a macroscopic view of the system. While the notion of trophic levels provides insight into energy flow and top-down control within food webs, it is troubled by the prevalence of omnivory in real ecosystems. This has led some ecologists to "reiterate that the notion that species clearly aggregate into discrete, homogeneous trophic levels is fiction." Nonetheless, recent studies have shown that real trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores."
Keystone species
A keystone species is a species that is connected to a disproportionately large number of other species in the food-web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds means that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects (termed trophic cascades) that alters trophic dynamics, other food web connections, and can cause the extinction of other species. The term keystone species was coined by Robert Paine in 1969 and is a reference to the keystone architectural feature as the removal of a keystone species can result in a community collapse just as the removal of the keystone in an arch can result in the arch's loss of stability.
Sea otters (Enhydra lutris) are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure. Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow (Hydrodamalis gigas). While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem. Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied.
Complexity
Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity stems from the interplay among levels of biological organization as energy, and matter is integrated into larger units that superimpose onto the smaller parts. "What were wholes on one level become parts on a higher one." Small scale patterns do not necessarily explain large scale phenomena, otherwise captured in the expression (coined by Aristotle) 'the sum is greater than the parts'.
"Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric." From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level. Ecological complexity relates to the dynamic resilience of ecosystems that transition to multiple shifting steady-states directed by random fluctuations of history. Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Network (LTER). The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856. Another example is the Hubbard Brook study, which has been in operation since 1960.
Holism
Holism remains a critical part of the theoretical foundation in contemporary ecological studies. Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts. "New properties emerge because the components interact, not because the basic nature of the components is changed."
Ecological studies are necessarily holistic as opposed to reductionistic. Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems, 2) the practical description of patterns in quantitative reductionist terms where correlations may be identified but nothing is understood about the causal relations without reference to the whole system, which leads to 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism that has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species. The reason for a thickness increase can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells.
Relation to evolution
Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to their functions and roles in different ecological circumstances. In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy. The two disciplines often appear together, such as in the title of the journal Trends in Ecology and Evolution. There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization. While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes, and evolution can be rapid, occurring on ecological timescales as short as one generation.
Behavioural ecology
All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication. Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba.
Adaptation is the central unifying concept in behavioural ecology. Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increase reproductive fitness.
Predator-prey interactions are an introductory concept into food-web studies as well as behavioural ecology. Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoid, flee, or defend. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat. Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency. For example, "[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk" or "[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk."
Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve a dual purpose of signalling healthy or well-adapted individuals and desirable genes. The displays are driven by sexual selection as an advertisement of quality of traits among suitors.
Cognitive ecology
Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effect that animal interaction with their habitat has on their cognitive systems and how those systems restrict behavior within an ecological and evolutionary framework. "Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition." As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism, a field based upon the view that "...we must see the organism and environment as bound together in reciprocal specification and selection...".
Social ecology
Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats where eusocialism has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony. In contrast, group selectionists find examples of altruism among non-genetic relatives and explain this through selection acting on the group; whereby, it becomes selectively advantageous for groups if their members express altruistic behaviours to one another. Groups with predominantly altruistic members survive better than groups with predominantly selfish members.
Coevolution
Ecological interactions can be classified broadly into a host and an associate relationship. A host is any entity that harbours another that is called the associate. Relationships between species that are mutually or reciprocally beneficial are called mutualisms. Examples of mutualism include fungus-growing ants employing agricultural symbiosis, bacteria living in the guts of insects and other organisms, the fig wasp and yucca moth pollination complex, lichens with fungi and photosynthetic algae, and corals with photosynthetic algae. If there is a physical connection between host and associate, the relationship is called symbiosis. Approximately 60% of all plants, for example, have a symbiotic relationship with arbuscular mycorrhizal fungi living in their roots forming an exchange network of carbohydrates for mineral nutrients.
Indirect mutualisms occur where the organisms live apart. For example, trees living in the equatorial regions of the planet supply oxygen into the atmosphere that sustains species living in distant polar regions of the planet. This relationship is called commensalism because many others receive the benefits of clean air at no cost or harm to the trees supplying the oxygen. If the associate benefits while the host suffers, the relationship is called parasitism. Although parasites impose a cost on their host (e.g., via damage to their reproductive organs or propagules, denying the services of a beneficial partner), their net effect on host fitness is not necessarily negative and, thus, is difficult to forecast. Co-evolution is also driven by competition among species or among members of the same species under the banner of reciprocal antagonism, such as grasses competing for growth space. The Red Queen Hypothesis, for example, posits that parasites track down and specialize on the locally common genetic defense systems of their hosts, which drives the evolution of sexual reproduction to diversify the genetic constituency of populations responding to the antagonistic pressure.
Biogeography
Biogeography (an amalgamation of biology and geography) is the comparative study of the geographic distribution of organisms and the corresponding evolution of their traits in space and time. The Journal of Biogeography was established in 1974. Biogeography and ecology share many of their disciplinary roots. For example, the theory of island biogeography, published by Robert MacArthur and Edward O. Wilson in 1967, is considered one of the fundamentals of ecological theory.
Biogeography has a long history in the natural sciences concerning the spatial distribution of plants and animals. Ecology and evolution provide the explanatory context for biogeographical studies. Biogeographical patterns result from ecological processes that influence range distributions, such as migration and dispersal, and from historical processes that split populations or species into different areas. The biogeographic processes that result in the natural splitting of species explain much of the modern distribution of the Earth's biota. The splitting of lineages within a species is called vicariance biogeography, and it is a sub-discipline of biogeography. There are also practical applications in the field of biogeography concerning ecological systems and processes. For example, the range and distribution of biodiversity and invasive species responding to climate change is a serious concern and active area of research in the context of global warming.
r/K selection theory
A population ecology concept is r/K selection theory, one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection.
In the r/K-selection model, the first variable r is the intrinsic rate of natural increase in population size and the second variable K is the carrying capacity of a population. Different species evolve different life-history strategies spanning a continuum between these two selective forces. An r-selected species is one that has high birth rates, low levels of parental investment, and high rates of mortality before individuals reach maturity. Evolution favours high rates of fecundity in r-selected species. Many kinds of insects and invasive species exhibit r-selected characteristics. In contrast, a K-selected species has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Humans and elephants are examples of species exhibiting K-selected characteristics, including longevity and efficiency in the conversion of more resources into fewer offspring.
Molecular ecology
The important relationship between ecology and genetic inheritance predates modern techniques for molecular analysis. Molecular ecological research became more feasible with the development of rapid and accessible genetic technologies, such as the polymerase chain reaction (PCR). The rise of molecular technologies and the influx of research questions into this new ecological field resulted in the publication Molecular Ecology in 1992. Molecular ecology uses various analytical techniques to study genes in an evolutionary and ecological context. In 1994, John Avise also played a leading role in this area of science with the publication of his book, Molecular Markers, Natural History and Evolution. Newer technologies opened a wave of genetic analysis into organisms once difficult to study from an ecological or evolutionary standpoint, such as bacteria, fungi, and nematodes. Molecular ecology engendered a new research paradigm for investigating ecological questions considered otherwise intractable. Molecular investigations revealed previously obscured details in the tiny intricacies of nature and improved resolution into probing questions about behavioural and biogeographical ecology. For example, molecular ecology revealed promiscuous sexual behaviour and multiple male partners in tree swallows previously thought to be socially monogamous. In a biogeographical context, the marriage between genetics, ecology, and evolution resulted in a new sub-discipline called phylogeography.
Human ecology
Ecology is as much a biological science as it is a human science. Human ecology is an interdisciplinary investigation into the ecology of our species. "Human ecology may be defined: (1) from a bioecological standpoint as the study of man as the ecological dominant in plant and animal communities and systems; (2) from a bioecological standpoint as simply another animal affecting and being affected by his physical environment; and (3) as a human being, somehow different from animal life in general, interacting with physical and modified environments in a distinctive and creative way. A truly interdisciplinary human ecology will most likely address itself to all three." The term was formally introduced in 1921, but many sociologists, geographers, psychologists, and other disciplines were interested in human relations to natural systems centuries prior, especially in the late 19th century.
The ecological complexities human beings are facing through the technological transformation of the planetary biome have brought on the Anthropocene. This unique set of circumstances has generated the need for a new unifying science called coupled human and natural systems, which builds upon, but moves beyond, the field of human ecology. Ecosystems tie into human societies through the critical and all-encompassing life-supporting functions they sustain. In recognition of these functions and the incapability of traditional economic valuation methods to see the value in ecosystems, there has been a surge of interest in social-natural capital, which provides the means to put a value on the stock and use of information and materials stemming from ecosystem goods and services. Ecosystems produce, regulate, maintain, and supply services that are critically necessary and beneficial to human health (cognitive and physiological) and economies; they even provide an information or reference function as a living library, giving opportunities for science and cognitive development in children engaged in the complexity of the natural world. Ecosystems relate importantly to human ecology because they are the ultimate base foundation of global economics: every commodity, and the capacity for exchange, ultimately stems from the ecosystems on Earth.
Restoration ecology
Ecology is an employed science of restoration, repairing disturbed sites through human intervention, in natural resource management, and in environmental impact assessments. Edward O. Wilson predicted in 1992 that the 21st century "will be the era of restoration in ecology". Ecological science has boomed in the industrial investment of restoring ecosystems and their processes in abandoned sites after disturbance. Natural resource managers, in forestry, for example, employ ecologists to develop, adapt, and implement ecosystem based methods into the planning, operation, and restoration phases of land-use. Another example of conservation is seen on the east coast of the United States in Boston, MA. The city of Boston implemented the Wetland Ordinance, improving the stability of their wetland environments by implementing soil amendments that will improve groundwater storage and flow, and trimming or removal of vegetation that could cause harm to water quality. Ecological science is used in the methods of sustainable harvesting, disease, and fire outbreak management, in fisheries stock management, for integrating land-use with protected areas and communities, and conservation in complex geo-political landscapes.
Relation to the environment
The environment of ecosystems includes both physical parameters and biotic attributes. It is dynamically interlinked and contains resources for organisms at any time throughout their life cycle. Like ecology, the term environment has different conceptual meanings and overlaps with the concept of nature. Environment "includes the physical world, the social world of human relations and the built world of human creation." The physical environment is external to the level of biological organization under investigation, including abiotic factors such as temperature, radiation, light, chemistry, climate and geology. The biotic environment includes genes, cells, organisms, members of the same species (conspecifics) and other species that share a habitat.
The distinction between external and internal environments, however, is an abstraction parsing life and environment into units or facts that are inseparable in reality. There is an interpenetration of cause and effect between the environment and life. The laws of thermodynamics, for example, apply to ecology by means of its physical state. With an understanding of metabolic and thermodynamic principles, a complete accounting of energy and material flow can be traced through an ecosystem. In this way, the environmental and ecological relations are studied through reference to conceptually manageable and isolated material parts. Once the effective environmental components are understood through reference to their causes, however, they conceptually link back together as an integrated whole, or holocoenotic system as it was once called. This is known as the dialectical approach to ecology. The dialectical approach examines the parts but integrates the organism and the environment into a dynamic whole (or umwelt). Change in one ecological or environmental factor can concurrently affect the dynamic state of an entire ecosystem.
Disturbance and resilience
A disturbance is any process that changes or removes biomass from a community, such as a fire, flood, drought, or predation. Disturbances are both the cause and product of natural fluctuations within an ecological community. Biodiversity can protect ecosystems from disturbances.
The effect of a disturbance is often hard to predict, but there are numerous examples in which a single species can massively disturb an ecosystem. For example, a single-celled protozoan has been able to kill up to 100% of sea urchins in some coral reefs in the Red Sea and Western Indian Ocean. Sea urchins enable complex reef ecosystems to thrive by eating algae that would otherwise inhibit coral growth. Similarly, invasive species can wreak havoc on ecosystems. For instance, invasive Burmese pythons have caused a 98% decline of small mammals in the Everglades.
Metabolism and the early atmosphere
The Earth was formed approximately 4.5 billion years ago. As it cooled and a crust and oceans formed, its atmosphere transformed from being dominated by hydrogen to one composed mostly of methane and ammonia. Over the next billion years, the metabolic activity of life transformed the atmosphere into a mixture of carbon dioxide, nitrogen, and water vapor. These gases changed the way that light from the sun hit the Earth's surface and greenhouse effects trapped heat. There were untapped sources of free energy within the mixture of reducing and oxidizing gasses that set the stage for primitive ecosystems to evolve and, in turn, the atmosphere also evolved.
Throughout history, the Earth's atmosphere and biogeochemical cycles have been in a dynamic equilibrium with planetary ecosystems. The history is characterized by periods of significant transformation followed by millions of years of stability. The evolution of the earliest organisms, likely anaerobic methanogen microbes, started the process by converting atmospheric hydrogen into methane (4H2 + CO2 → CH4 + 2H2O). Anoxygenic photosynthesis reduced hydrogen concentrations and increased atmospheric methane, by converting hydrogen sulfide into water or other sulfur compounds (for example, 2H2S + CO2 + hv → CH2O + H2O + 2S). Early forms of fermentation also increased levels of atmospheric methane. The transition to an oxygen-dominant atmosphere (the Great Oxidation) did not begin until approximately 2.4–2.3 billion years ago, but photosynthetic processes started 0.3 to 1 billion years prior.
Radiation: heat, temperature and light
The biology of life operates within a certain range of temperatures. Heat is a form of energy that regulates temperature. Heat affects growth rates, activity, behaviour, and primary production. Temperature is largely dependent on the incidence of solar radiation. The latitudinal and longitudinal spatial variation of temperature greatly affects climates and consequently the distribution of biodiversity and levels of primary production in different ecosystems or biomes across the planet. Heat and temperature relate importantly to metabolic activity. Poikilotherms, for example, have a body temperature that is largely regulated and dependent on the temperature of the external environment. In contrast, homeotherms regulate their internal body temperature by expending metabolic energy.
There is a relationship between light, primary production, and ecological energy budgets. Sunlight is the primary input of energy into the planet's ecosystems. Light is composed of electromagnetic energy of different wavelengths. Radiant energy from the sun generates heat, provides photons of light measured as active energy in the chemical reactions of life, and also acts as a catalyst for genetic mutation. Plants, algae, and some bacteria absorb light and assimilate the energy through photosynthesis. Organisms capable of assimilating energy by photosynthesis or through inorganic fixation of H2S are autotrophs. Autotrophs—responsible for primary production—assimilate light energy which becomes metabolically stored as potential energy in the form of biochemical enthalpic bonds.
Physical environments
Water
Diffusion of carbon dioxide and oxygen is approximately 10,000 times slower in water than in air. When soils are flooded, they quickly lose oxygen, becoming hypoxic (an environment with O2 concentration below 2 mg/liter) and eventually completely anoxic where anaerobic bacteria thrive among the roots. Water also influences the intensity and spectral composition of light as it reflects off the water surface and submerged particles. Aquatic plants exhibit a wide variety of morphological and physiological adaptations that allow them to survive, compete, and diversify in these environments. For example, their roots and stems contain large air spaces (aerenchyma) that regulate the efficient transportation of gases (for example, CO2 and O2) used in respiration and photosynthesis. Salt water plants (halophytes) have additional specialized adaptations, such as the development of special organs for shedding salt and osmoregulating their internal salt (NaCl) concentrations, to live in estuarine, brackish, or oceanic environments. Anaerobic soil microorganisms in aquatic environments use nitrate, manganese ions, ferric ions, sulfate, carbon dioxide, and some organic compounds; other microorganisms are facultative anaerobes and use oxygen during respiration when the soil becomes drier. The activity of soil microorganisms and the chemistry of the water reduces the oxidation-reduction potentials of the water. Carbon dioxide, for example, is reduced to methane (CH4) by methanogenic bacteria. The physiology of fish is also specially adapted to compensate for environmental salt levels through osmoregulation. Their gills form electrochemical gradients that mediate salt excretion in salt water and uptake in fresh water.
Gravity
The shape and energy of the land are significantly affected by gravitational forces. On a large scale, the distribution of gravitational forces on the earth is uneven and influences the shape and movement of tectonic plates as well as influencing geomorphic processes such as orogeny and erosion. These forces govern many of the geophysical properties and distributions of ecological biomes across the Earth. On the organismal scale, gravitational forces provide directional cues for plant and fungal growth (gravitropism), orientation cues for animal migrations, and influence the biomechanics and size of animals. Ecological traits, such as allocation of biomass in trees during growth are subject to mechanical failure as gravitational forces influence the position and structure of branches and leaves. The cardiovascular systems of animals are functionally adapted to overcome the pressure and gravitational forces that change according to the features of organisms (e.g., height, size, shape), their behaviour (e.g., diving, running, flying), and the habitat occupied (e.g., water, hot deserts, cold tundra).
Pressure
Climatic and osmotic pressure places physiological constraints on organisms, especially those that fly and respire at high altitudes, or dive to deep ocean depths. These constraints influence vertical limits of ecosystems in the biosphere, as organisms are physiologically sensitive and adapted to atmospheric and osmotic water pressure differences. For example, oxygen levels decrease with decreasing pressure and are a limiting factor for life at higher altitudes. Water transportation by plants is another important ecophysiological process affected by osmotic pressure gradients. Water pressure in the depths of oceans requires that organisms adapt to these conditions. For example, diving animals such as whales, dolphins, and seals are specially adapted to deal with changes in sound due to water pressure differences. Differences between hagfish species provide another example of adaptation to deep-sea pressure through specialized protein adaptations.
Wind and turbulence
Turbulent forces in air and water affect the environment and ecosystem distribution, form, and dynamics. On a planetary scale, ecosystems are affected by circulation patterns in the global trade winds. Wind power and the turbulent forces it creates can influence heat, nutrient, and biochemical profiles of ecosystems. For example, wind running over the surface of a lake creates turbulence, mixing the water column and influencing the environmental profile to create thermally layered zones, affecting how fish, algae, and other parts of the aquatic ecosystem are structured. Wind speed and turbulence also influence evapotranspiration rates and energy budgets in plants and animals. Wind speed, temperature and moisture content can vary as winds travel across different land features and elevations. For example, the westerlies come into contact with the coastal and interior mountains of western North America to produce a rain shadow on the leeward side of the mountain. The air expands and moisture condenses as the winds increase in elevation; this is called orographic lift and can cause precipitation. This environmental process produces spatial divisions in biodiversity, as species adapted to wetter conditions are range-restricted to the coastal mountain valleys and unable to migrate across the xeric ecosystems (e.g., of the Columbia Basin in western North America) to intermix with sister lineages that are segregated to the interior mountain systems.
Fire
Plants convert carbon dioxide into biomass and emit oxygen into the atmosphere. By approximately 350 million years ago (the end of the Devonian period), photosynthesis had brought the concentration of atmospheric oxygen above 17%, which allowed combustion to occur. Fire releases CO2 and converts fuel into ash and tar. Fire is a significant ecological parameter that raises many issues pertaining to its control and suppression. While the issue of fire in relation to ecology and plants has been recognized for a long time, Charles Cooper brought attention to the issue of forest fires in relation to the ecology of forest fire suppression and management in the 1960s.
Native North Americans were among the first to influence fire regimes by controlling their spread near their homes or by lighting fires to stimulate the production of herbaceous foods and basketry materials. Fire creates a heterogeneous ecosystem age and canopy structure, and the altered soil nutrient supply and cleared canopy structure opens new ecological niches for seedling establishment. Most ecosystems are adapted to natural fire cycles. Plants, for example, are equipped with a variety of adaptations to deal with forest fires. Some species (e.g., Pinus halepensis) cannot germinate until after their seeds have lived through a fire or been exposed to certain compounds from smoke. Environmentally triggered germination of seeds is called serotiny. Fire plays a major role in the persistence and resilience of ecosystems.
Soils
Soil is the living top layer of mineral and organic dirt that covers the surface of the planet. It is the chief organizing centre of most ecosystem functions, and it is of critical importance in agricultural science and ecology. The decomposition of dead organic matter (for example, leaves on the forest floor) results in soils containing minerals and nutrients that feed into plant production. The whole of the planet's soil ecosystems is called the pedosphere, where a large biomass of the Earth's biodiversity organizes into trophic levels. Invertebrates that feed on and shred larger leaves, for example, create smaller bits for smaller organisms in the feeding chain. Collectively, these organisms are the detritivores that regulate soil formation. Tree roots, fungi, bacteria, worms, ants, beetles, centipedes, spiders, mammals, birds, reptiles, amphibians, and other less familiar creatures all work to create the trophic web of life in soil ecosystems. Soils form composite phenotypes where inorganic matter is enveloped into the physiology of a whole community. As organisms feed and migrate through soils they physically displace materials, an ecological process called bioturbation. This aerates soils and stimulates heterotrophic growth and production. Soil microorganisms are influenced by and are fed back into the trophic dynamics of the ecosystem. No single axis of causality can be discerned to segregate the biological from geomorphological systems in soils. Paleoecological studies of soils place the origin of bioturbation before the Cambrian period. Other events, such as the evolution of trees and the colonization of land in the Devonian period, played a significant role in the early development of ecological trophism in soils.
Biogeochemistry and climate
Ecologists study and measure nutrient budgets to understand how these materials are regulated, flow, and recycled through the environment. This research has led to an understanding that there is global feedback between ecosystems and the physical parameters of this planet, including minerals, soil, pH, ions, water, and atmospheric gases. Six major elements (hydrogen, carbon, nitrogen, oxygen, sulfur, and phosphorus; H, C, N, O, S, and P) form the constitution of all biological macromolecules and feed into the Earth's geochemical processes. From the smallest scale of biology, the combined effect of billions upon billions of ecological processes amplify and ultimately regulate the biogeochemical cycles of the Earth. Understanding the relations and cycles mediated between these elements and their ecological pathways has significant bearing toward understanding global biogeochemistry.
The ecology of global carbon budgets gives one example of the linkage between biodiversity and biogeochemistry. It is estimated that the Earth's oceans hold 40,000 gigatonnes (Gt) of carbon, that vegetation and soil hold 2070 Gt, and that fossil fuel emissions are 6.3 Gt carbon per year. There have been major restructurings in these global carbon budgets during the Earth's history, regulated to a large extent by the ecology of the land. For example, through the early-mid Eocene volcanic outgassing, the oxidation of methane stored in wetlands, and seafloor gases increased atmospheric CO2 (carbon dioxide) concentrations to levels as high as 3500 ppm.
In the Oligocene, from twenty-five to thirty-two million years ago, there was another significant restructuring of the global carbon cycle as grasses evolved a new mechanism of photosynthesis, C4 photosynthesis, and expanded their ranges. This new pathway evolved in response to the drop in atmospheric CO2 concentrations below 550 ppm. The relative abundance and distribution of biodiversity alter the dynamics between organisms and their environment such that ecosystems can be both cause and effect in relation to climate change. Human-driven modifications to the planet's ecosystems (e.g., disturbance, biodiversity loss, agriculture) contribute to rising atmospheric greenhouse gas levels. Transformation of the global carbon cycle in the next century is projected to raise planetary temperatures, lead to more extreme fluctuations in weather, alter species distributions, and increase extinction rates. The effect of global warming is already being registered in melting glaciers, melting mountain ice caps, and rising sea levels. Consequently, species distributions are changing along waterfronts and in continental areas where migration patterns and breeding grounds are tracking the prevailing shifts in climate. Large sections of permafrost are also melting to create a new mosaic of flooded areas having increased rates of soil decomposition activity that raises methane (CH4) emissions. There is concern over increases in atmospheric methane in the context of the global carbon cycle, because methane is a greenhouse gas that is 23 times more effective at absorbing long-wave radiation than CO2 on a 100-year time scale. Hence, there is a relationship between global warming and decomposition and respiration in soils and wetlands, producing significant climate feedbacks and globally altered biogeochemical cycles.
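To make the methane comparison above concrete, the small sketch below converts a hypothetical methane emission into CO2-equivalents using the 100-year factor of 23 stated in the text; the emission figure itself is assumed for illustration only.

```python
# A small worked example of the CO2-equivalence arithmetic implied above,
# using the 100-year factor of 23 quoted in the text. The emission figure
# is a hypothetical value for illustration only.

GWP_100_CH4 = 23.0  # CH4 vs. CO2 effectiveness over 100 years, per the text

def co2_equivalent(methane_tonnes: float) -> float:
    """Convert a methane emission (tonnes of CH4) into tonnes of CO2-equivalent."""
    return methane_tonnes * GWP_100_CH4

if __name__ == "__main__":
    example_emission = 10.0  # tonnes of CH4 (assumed)
    print(f"{example_emission} t CH4 ~ {co2_equivalent(example_emission):.0f} t CO2e")
```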
History
Early beginnings
Ecology has a complex origin, due in large part to its interdisciplinary nature. Ancient Greek philosophers such as Hippocrates and Aristotle were among the first to record observations on natural history. However, they viewed life in terms of essentialism, where species were conceptualized as static unchanging things while varieties were seen as aberrations of an idealized type. This contrasts against the modern understanding of ecological theory where varieties are viewed as the real phenomena of interest and having a role in the origins of adaptations by means of natural selection. Early conceptions of ecology, such as a balance and regulation in nature can be traced to Herodotus (died c. 425 BC), who described one of the earliest accounts of mutualism in his observation of "natural dentistry". Basking Nile crocodiles, he noted, would open their mouths to give sandpipers safe access to pluck leeches out, giving nutrition to the sandpiper and oral hygiene for the crocodile. Aristotle was an early influence on the philosophical development of ecology. He and his student Theophrastus made extensive observations on plant and animal migrations, biogeography, physiology, and their behavior, giving an early analogue to the modern concept of an ecological niche.
Ernst Haeckel (left) and Eugenius Warming (right), two founders of ecology
Ecological concepts such as food chains, population regulation, and productivity were first developed in the 1700s, through the published works of microscopist Antonie van Leeuwenhoek (1632–1723) and botanist Richard Bradley (1688?–1732). Biogeographer Alexander von Humboldt (1769–1859) was an early pioneer in ecological thinking and was among the first to recognize ecological gradients, where species are replaced or altered in form along environmental gradients, such as a cline forming along a rise in elevation. Humboldt drew inspiration from Isaac Newton, as he developed a form of "terrestrial physics". In Newtonian fashion, he brought a scientific exactitude for measurement into natural history and even alluded to concepts that are the foundation of a modern ecological law on species-to-area relationships. Natural historians, such as Humboldt, James Hutton, and Jean-Baptiste Lamarck (among others) laid the foundations of the modern ecological sciences. The term "ecology" was coined by Ernst Haeckel in his book Generelle Morphologie der Organismen (1866). Haeckel was a zoologist, artist, writer, and later in life a professor of comparative anatomy.
Opinions differ on who was the founder of modern ecological theory. Some mark Haeckel's definition as the beginning; others say it was Eugenius Warming with the writing of Oecology of Plants: An Introduction to the Study of Plant Communities (1895), or Carl Linnaeus' principles on the economy of nature that matured in the early 18th century. Linnaeus founded an early branch of ecology that he called the economy of nature. His works influenced Charles Darwin, who adopted Linnaeus' phrase on the economy or polity of nature in The Origin of Species. Linnaeus was the first to frame the balance of nature as a testable hypothesis. Haeckel, who admired Darwin's work, defined ecology in reference to the economy of nature, which has led some to question whether ecology and the economy of nature are synonymous.
From Aristotle until Darwin, the natural world was predominantly considered static and unchanging. Prior to The Origin of Species, there was little appreciation or understanding of the dynamic and reciprocal relations between organisms, their adaptations, and the environment. An exception is the 1789 publication Natural History of Selborne by Gilbert White (1720–1793), considered by some to be one of the earliest texts on ecology. While Charles Darwin is mainly noted for his treatise on evolution, he was one of the founders of soil ecology, and he made note of the first ecological experiment in The Origin of Species. Evolutionary theory changed the way that researchers approached the ecological sciences.
Since 1900
Modern ecology is a young science that first attracted substantial scientific attention toward the end of the 19th century (around the same time that evolutionary studies were gaining scientific interest). The scientist Ellen Swallow Richards adopted the term "oekology" (which eventually morphed into home economics) in the U.S. as early as 1892.
In the early 20th century, ecology transitioned from a more descriptive form of natural history to a more analytical form of scientific natural history. Frederic Clements published the first American ecology book in 1905, presenting the idea of plant communities as a superorganism. This publication launched a debate between ecological holism and individualism that lasted until the 1970s. Clements' superorganism concept proposed that ecosystems progress through regular and determined stages of seral development that are analogous to the developmental stages of an organism. The Clementsian paradigm was challenged by Henry Gleason, who stated that ecological communities develop from the unique and coincidental association of individual organisms. This perceptual shift placed the focus back onto the life histories of individual organisms and how this relates to the development of community associations.
The Clementsian superorganism theory was an overextended application of an idealistic form of holism. The term "holism" was coined in 1926 by Jan Christiaan Smuts, a South African general and polarizing historical figure who was inspired by Clements' superorganism concept. Around the same time, Charles Elton pioneered the concept of food chains in his classical book Animal Ecology. Elton defined ecological relations using concepts of food chains, food cycles, and food size, and described numerical relations among different functional groups and their relative abundance. Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. Alfred J. Lotka brought in many theoretical concepts applying thermodynamic principles to ecology.
In 1942, Raymond Lindeman wrote a landmark paper on the trophic dynamics of ecology, which was published posthumously after initially being rejected for its theoretical emphasis. Trophic dynamics became the foundation for much of the work to follow on energy and material flow through ecosystems. Robert MacArthur advanced mathematical theory, predictions, and tests in ecology in the 1950s, which inspired a resurgent school of theoretical mathematical ecologists. Ecology also has developed through contributions from other nations, including Russia's Vladimir Vernadsky and his founding of the biosphere concept in the 1920s and Japan's Kinji Imanishi and his concepts of harmony in nature and habitat segregation in the 1950s. Scientific recognition of contributions to ecology from non-English-speaking cultures is hampered by language and translation barriers.
Ecology surged in popular and scientific interest during the 1960–1970s environmental movement. There are strong historical and scientific ties between ecology, environmental management, and protection. The historical emphasis and poetic naturalistic writings advocating the protection of wild places by notable ecologists in the history of conservation biology, such as Aldo Leopold and Arthur Tansley, have been seen as far removed from urban centres where, it is claimed, the concentration of pollution and environmental degradation is located. Palamar (2008) notes an overshadowing by mainstream environmentalism of pioneering women in the early 1900s who fought for urban health ecology (then called euthenics) and brought about changes in environmental legislation. Women such as Ellen Swallow Richards and Julia Lathrop, among others, were precursors to the more popularized environmental movements after the 1950s.
In 1962, marine biologist and ecologist Rachel Carson's book Silent Spring helped to mobilize the environmental movement by alerting the public to toxic pesticides, such as DDT, bioaccumulating in the environment. Carson used ecological science to link the release of environmental toxins to human and ecosystem health. Since then, ecologists have worked to bridge their understanding of the degradation of the planet's ecosystems with environmental politics, law, restoration, and natural resources management.
See also
Carrying capacity
Chemical ecology
Climate justice
Circles of Sustainability
Cultural ecology
Dialectical naturalism
Ecological death
Ecological empathy
Ecological overshoot
Ecological psychology
Ecology movement
Ecosophy
Ecopsychology
Human ecology
Industrial ecology
Information ecology
Landscape ecology
Natural resource
Normative science
Philosophy of ecology
Political ecology
Theoretical ecology
Sensory ecology
Sexecology
Spiritual ecology
Sustainable development
Lists
Glossary of ecology
Index of biology articles
List of ecologists
Outline of biology
Terminology of ecology
Notes
References
External links
The Nature Education Knowledge Project: Ecology
Biogeochemistry
Emergence
Insanity | Insanity, madness, lunacy, and craziness are behaviors caused by certain abnormal mental or behavioral patterns. Insanity can manifest as violations of societal norms, including a person or persons becoming a danger to themselves or to other people. Conceptually, mental insanity also is associated with the biological phenomenon of contagion (that mental illness is infectious) as in the case of copycat suicides. In contemporary usage, the term insanity is an informal, un-scientific term denoting "mental instability"; thus, the term insanity defense is the legal definition of mental instability. In medicine, the general term psychosis is used to include the presence of delusions and/or hallucinations in a patient; and psychiatric illness is "psychopathology", not mental insanity.
In English, the word "sane" derives from the Latin adjective sanus, meaning "healthy". Juvenal's phrase mens sana in corpore sano is often translated to mean a "healthy mind in a healthy body". From this perspective, insanity can be considered as poor health of the mind, not necessarily of the brain as an organ (although that can affect mental health), but rather refers to defective function of mental processes such as reasoning. Another Latin phrase related to our current concept of sanity is compos mentis ("sound of mind"), and a euphemistic term for insanity is non compos mentis. In law, mens rea means having had criminal intent, or a guilty mind, when the act (actus reus) was committed.
A more informal use of the term insanity is to denote something or someone considered highly unique, passionate or extreme, including in a positive sense. The term may also be used as an attempt to discredit or criticize particular ideas, beliefs, principles, desires, personal feelings, attitudes, or their proponents, such as in politics and religion.
Historical views and treatment
Madness, the non-legal word for insanity, has been recognized throughout history in every known society. Some traditional cultures have turned to witch doctors or shamans to apply magic, herbal mixtures, or folk medicine to rid deranged persons of evil spirits or bizarre behavior, for example. Archaeologists have unearthed skulls (at least 7000 years old) that have small, round holes bored in them using flint tools. It has been conjectured that the subjects may have been thought to have been possessed by spirits that the holes would allow to escape. More recent research on the historical practice of trepanning supports the hypothesis that this procedure was medical in nature and intended as means of treating cranial trauma.
Ancient Greece
The Greeks appeared to share something of the modern Western world's secular and holistic view, believing that afflictions of the mind did not differ from diseases of the body. Moreover, they saw mental and physical illness as a result of natural causes and an imbalance in bodily humors. Hippocrates frequently wrote that an excess of black bile resulted in irrational thinking and behavior.
Ancient Rome
The Romans made further contributions to psychiatry, in particular a precursor of some contemporary practice. They put forward the idea that strong emotions could lead to bodily ailments, the basis of today's theory of psychosomatic illness. The Romans also supported humane treatment of the mentally ill, and in so doing codified into law the principle of insanity as a mitigation of responsibility for criminal acts, although the criterion was set sharply: the defendant had to be found non compos mentis, a term meaning "not of sound mind".
From the Middle Ages onward
The Middle Ages witnessed the end of the progressive ideas of the Greeks and Romans.
During the 18th century, the French and the British introduced humane treatment of the clinically insane, though the criteria for diagnosis and placement in an asylum were considerably looser than today, often including such conditions as speech disorders and impediments, epilepsy, depression, or being pregnant out of wedlock.
Europe's oldest asylum was the precursor of today's Bethlem Royal Hospital in London, known then as Bedlam, which began admitting the mentally ill in 1403 and is mentioned in Chaucer's Canterbury Tales. The first American asylum was built in Williamsburg, Virginia, circa 1773. Before the 19th century, these hospitals were used to isolate the mentally ill or the socially ostracized from society rather than cure them or maintain their health. Pictures from this era portrayed patients bound with rope or chains, often to beds or walls, or restrained in straitjackets.
Medicine
Insanity is no longer considered a medical diagnosis but is a legal term in the United States, stemming from its original use in common law. The disorders formerly encompassed by the term covered a wide range of mental disorders now diagnosed as bipolar disorder, organic brain syndromes, schizophrenia, and other psychotic disorders.
Law
In United States criminal law, insanity may serve as an affirmative defense to criminal acts and thus does not need to negate an element of the prosecution's case such as general or specific intent. Each U.S. state differs somewhat in its definition of insanity, but most follow the guidelines of the Model Penal Code. All jurisdictions require a sanity evaluation to address first the question of whether the defendant has a mental illness.
Most courts accept a major mental illness such as psychosis but will not accept the diagnosis of a personality disorder for the purposes of an insanity defense. The second question is whether the mental illness interfered with the defendant's ability to distinguish right from wrong; that is, whether the defendant knew that the alleged behavior was against the law at the time the offense was committed.
Additionally, some jurisdictions add the question of whether or not the defendant was in control of their behavior at the time of the offense. For example, if the defendant was compelled by some aspect of their mental illness to commit the illegal act, the defendant could be evaluated as not in control of their behavior at the time of the offense.
The forensic mental health specialists submit their evaluations to the court. Since the question of sanity or insanity is a legal question and not a medical one, the judge and/or jury will make the final decision regarding the defendant's status with respect to an insanity defense.
In most jurisdictions within the United States, if the insanity plea is accepted, the defendant is committed to a psychiatric institution for at least 60 days for further evaluation, and then reevaluated at least yearly after that.
Insanity is generally no defense in a civil lawsuit, but a plaintiff's insanity can toll the statute of limitations for filing a suit until the plaintiff regains sanity, or until a statute of repose has run.
Feigning
Feigned insanity is the simulation of mental illness in order to deceive. Amongst other purposes, insanity is feigned in order to avoid or lessen the consequences of a confrontation or conviction for an alleged crime. A number of treatises on medical jurisprudence were written during the nineteenth century, the most famous of which was that of Isaac Ray in 1838 (fifth edition 1871); others include Ryan (1832), Taylor (1845), Wharton and Stille (1855), Ordronaux (1869), and Meymott (1882). The typical techniques as outlined in these works are the background for Dr. Neil S. Kaye's widely recognized guidelines that indicate an attempt to feign insanity.
One famous example of someone feigning insanity is Mafia boss Vincent Gigante, who pretended for years to be suffering from dementia, and was often seen wandering aimlessly around his neighborhood in his pajamas muttering to himself. Testimony from informants and surveillance showed that Gigante was in full control of his faculties the whole time, and ruled over his Mafia family with an iron fist.
Today, feigned insanity is considered malingering. In a 2005 court case, United States v. Binion, the defendant was prosecuted and convicted of obstruction of justice (adding to his original sentence) because he feigned insanity in a Competency to Stand Trial evaluation.
Insult
In modern times, labeling someone as insane often carries little or no medical meaning and is rather used as an insult or as a reaction to behavior perceived to be outside the bounds of accepted norms. For instance, the definition of insanity is sometimes colloquially purported to be "doing the same thing over and over again and expecting a different result." However, this does not match the legal definition of insanity.
See also
Rosenhan, David L.
References
External links
"On Being Sane in Insane Places"
Obsolete medical terms
Pejorative terms for people with disabilities
Fever
Fever, or pyrexia, in humans is a symptom of the body's anti-infection defense mechanism that appears when body temperature exceeds the normal range, due to an increase in the body's temperature set point in the hypothalamus. There is no single agreed-upon upper limit for normal temperature: sources use values ranging between in humans.
The increase in set point triggers increased muscle contractions and causes a feeling of cold or chills. This results in greater heat production and efforts to conserve heat. When the set point temperature returns to normal, a person feels hot, becomes flushed, and may begin to sweat. Rarely, a fever may trigger a febrile seizure; this is more common in young children. Fevers do not typically go higher than .
A fever can be caused by many medical conditions ranging from non-serious to life-threatening. These include viral, bacterial, and parasitic infections, such as influenza, the common cold, meningitis, urinary tract infections, appendicitis, Lassa fever, COVID-19, and malaria. Non-infectious causes include vasculitis, deep vein thrombosis, connective tissue disease, side effects of medication or vaccination, and cancer. Fever differs from hyperthermia in that hyperthermia is an increase in body temperature over the temperature set point, due to either too much heat production or not enough heat loss.
Treatment to reduce fever is generally not required. Treatment of associated pain and inflammation, however, may be useful and help a person rest. Medications such as ibuprofen or paracetamol (acetaminophen) may help with this as well as lower temperature. Children younger than three months require medical attention, as might people with serious medical problems such as a compromised immune system or people with other symptoms. Hyperthermia requires treatment.
Fever is one of the most common medical signs. It is part of about 30% of healthcare visits by children and occurs in up to 75% of adults who are seriously sick. While fever evolved as a defense mechanism, treating a fever does not appear to improve or worsen outcomes. Fever is often viewed with greater concern by parents and healthcare professionals than is usually deserved, a phenomenon known as "fever phobia."
Associated symptoms
A fever is usually accompanied by sickness behavior, which consists of lethargy, depression, loss of appetite, sleepiness, hyperalgesia, dehydration, and the inability to concentrate. Sleeping with a fever can often cause intense or confusing nightmares, commonly called "fever dreams". Mild to severe delirium (which can also cause hallucinations) may also present itself during high fevers.
Diagnosis
A range for normal temperatures has been found. Central temperatures, such as rectal temperatures, are more accurate than peripheral temperatures.
Fever is generally agreed to be present if the elevated temperature is caused by a raised set point and:
Temperature in the anus (rectum/rectal) is at or over . An ear (tympanic) or forehead (temporal) temperature may also be used.
Temperature in the mouth (oral) is at or over in the morning or over in the afternoon
Temperature under the arm (axillary) is usually about below core body temperature.
In adults, the normal range of oral temperatures in healthy individuals is among men and among women, while when taken rectally it is among men and among women, and for ear measurement it is among men and among women.
Normal body temperatures vary depending on many factors, including age, sex, time of day, ambient temperature, activity level, and more. Normal daily temperature variation has been described as 0.5 °C (0.9 °F). A raised temperature is not always a fever. For example, the temperature rises in healthy people when they exercise, but this is not considered a fever, as the set point is normal. On the other hand, a "normal" temperature may be a fever, if it is unusually high for that person; for example, medically frail elderly people have a decreased ability to generate body heat, so a "normal" temperature of may represent a clinically significant fever.
Hyperthermia
Hyperthermia is an elevation of body temperature over the temperature set point, due to either too much heat production or not enough heat loss. Hyperthermia is thus not considered a fever. Hyperthermia should not be confused with hyperpyrexia (which is a very high fever).
Clinically, it is important to distinguish between fever and hyperthermia as hyperthermia may quickly lead to death and does not respond to antipyretic medications. The distinction may however be difficult to make in an emergency setting, and is often established by identifying possible causes.
Types
Various patterns of measured patient temperatures have been observed, some of which may be indicative of a particular medical diagnosis:
Continuous fever, where temperature remains above normal and does not fluctuate more than in 24 hours (e.g. in bacterial pneumonia, typhoid fever, infective endocarditis, tuberculosis, or typhus).
Intermittent fever is present only for a certain period, later cycling back to normal (e.g., in malaria, leishmaniasis, pyemia, sepsis, or African trypanosomiasis).
Remittent fever, where the temperature remains above normal throughout the day and fluctuates more than in 24 hours (e.g., in infective endocarditis or brucellosis).
Pel–Ebstein fever is a cyclic fever that is rarely seen in patients with Hodgkin's lymphoma.
Undulant fever, seen in brucellosis.
Typhoid fever is a continuous fever showing a characteristic step-ladder pattern, a step-wise increase in temperature with a high plateau.
Among the types of intermittent fever are ones specific to cases of malaria caused by different pathogens (a minimal lookup sketch follows this list). These are:
Quotidian fever, with a 24-hour periodicity, typical of malaria caused by Plasmodium knowlesi (P. knowlesi);
Tertian fever, with a 48-hour periodicity, typical of later course malaria caused by P. falciparum, P. vivax, or P. ovale;
Quartan fever, with a 72-hour periodicity, typical of later course malaria caused by P. malariae.
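Read as data, the list above is a lookup from fever periodicity to the Plasmodium species typically responsible. The Python sketch below encodes exactly the associations stated in the list; it is illustrative only, not a diagnostic tool.

```python
# Illustrative only: maps the malaria-specific intermittent fever patterns
# described above (periodicity in hours between febrile peaks) to the
# Plasmodium species the text associates with them.
MALARIAL_FEVER_PATTERNS = {
    "quotidian": {"period_hours": 24, "typical_species": ["P. knowlesi"]},
    "tertian": {"period_hours": 48, "typical_species": ["P. falciparum", "P. vivax", "P. ovale"]},
    "quartan": {"period_hours": 72, "typical_species": ["P. malariae"]},
}

def pattern_for_period(period_hours):
    """Return the named fever pattern matching an observed periodicity, if any."""
    for name, info in MALARIAL_FEVER_PATTERNS.items():
        if info["period_hours"] == period_hours:
            return name
    return None

if __name__ == "__main__":
    print(pattern_for_period(48))  # -> "tertian"
```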
In addition, there is disagreement over whether a specific fever pattern, the Pel–Ebstein fever, is genuinely associated with Hodgkin's lymphoma; patients are described as presenting a high temperature for one week, followed by a low temperature for the next week, and so on, but the generality of this pattern is debated.
Persistent fever that cannot be explained after repeated routine clinical inquiries is called fever of unknown origin. A neutropenic fever, also called febrile neutropenia, is a fever in the absence of normal immune system function. Because of the lack of infection-fighting neutrophils, a bacterial infection can spread rapidly; this fever is, therefore, usually considered to require urgent medical attention. This kind of fever is more commonly seen in people receiving immune-suppressing chemotherapy than in apparently healthy people.
Hyperpyrexia
Hyperpyrexia is an extreme elevation of body temperature which, depending upon the source, is classified as a core body temperature greater than or equal to ; the range of hyperpyrexia includes cases considered severe (≥ 40 °C) and extreme (≥ 42 °C). It differs from hyperthermia in that one's thermoregulatory system's set point for body temperature is set above normal, then heat is generated to achieve it. In contrast, hyperthermia involves body temperature rising above its set point due to outside factors. The high temperatures of hyperpyrexia are considered medical emergencies, as they may indicate a serious underlying condition or lead to severe morbidity (including permanent brain damage), or to death. A common cause of hyperpyrexia is an intracranial hemorrhage. Other causes in emergency room settings include sepsis, Kawasaki syndrome, neuroleptic malignant syndrome, drug overdose, serotonin syndrome, and thyroid storm.
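To make the bands concrete, the sketch below classifies a core temperature reading using the figures given in this section: the severe (≥ 40 °C) and extreme (≥ 42 °C) hyperpyrexia cut-offs are taken from the text, while the fever threshold itself is not specified here, so FEVER_THRESHOLD_C is an assumed, commonly cited value. The routine is a minimal illustration, not a clinical tool.

```python
# Minimal sketch of the temperature bands discussed in this section.
# The severe (>= 40 C) and extreme (>= 42 C) hyperpyrexia cut-offs come from
# the text; FEVER_THRESHOLD_C is an assumed, commonly cited value, since the
# exact figure is not given in this document. Not for clinical use.

FEVER_THRESHOLD_C = 38.0        # assumption, not stated in this document
SEVERE_HYPERPYREXIA_C = 40.0    # from the text
EXTREME_HYPERPYREXIA_C = 42.0   # from the text

def classify_core_temperature(temp_c):
    """Classify a core body temperature reading into the bands above."""
    if temp_c >= EXTREME_HYPERPYREXIA_C:
        return "extreme hyperpyrexia"
    if temp_c >= SEVERE_HYPERPYREXIA_C:
        return "severe hyperpyrexia"
    if temp_c >= FEVER_THRESHOLD_C:
        return "fever"
    return "no fever"

if __name__ == "__main__":
    for reading in (37.0, 38.6, 40.3, 42.1):
        print(reading, "->", classify_core_temperature(reading))
```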
Differential diagnosis
Fever is a common symptom of many medical conditions:
Infectious disease, e.g., COVID-19, dengue, Ebola, gastroenteritis, HIV, influenza, Lyme disease, Rocky Mountain spotted fever, secondary syphilis, malaria, mononucleosis, as well as infections of the skin, e.g., abscesses and boils.
Immunological diseases, e.g., relapsing polychondritis, autoimmune hepatitis, granulomatosis with polyangiitis, Horton disease, inflammatory bowel diseases, Kawasaki disease, lupus erythematosus, sarcoidosis, Still's disease, rheumatoid arthritis, lymphoproliferative disorders and psoriasis;
Tissue destruction, as a result of cerebral bleeding, crush syndrome, hemolysis, infarction, rhabdomyolysis, surgery, etc.;
Cancers, particularly blood cancers such as leukemia and lymphomas;
Metabolic disorders, e.g., gout, and porphyria; and
Inherited metabolic disorder, e.g., Fabry disease.
Adult and pediatric manifestations for the same disease may differ; for instance, in COVID-19, one metastudy describes 92.8% of adults versus 43.9% of children presenting with fever.
In addition, fever can result from a reaction to an incompatible blood product.
Teething is not a cause of fever.
Function
Immune function
Fever is thought to contribute to host defense, as the reproduction of pathogens with strict temperature requirements can be hindered, and the rates of some important immunological reactions are increased by temperature. Fever has been described in teaching texts as assisting the healing process in various ways, including:
increased mobility of leukocytes;
enhanced leukocyte phagocytosis;
decreased endotoxin effects; and
increased proliferation of T cells.
Advantages and disadvantages
A fever response to an infectious disease is generally regarded as protective, whereas fever in non-infectious conditions may be maladaptive. Studies have not been consistent on whether treating fever generally worsens or improves mortality risk. Benefits or harms may depend on the type of infection, the health status of the patient, and other factors. Studies using warm-blooded vertebrates suggest that they recover more rapidly from infections or critical illness due to fever. In sepsis, fever is associated with reduced mortality.
Pathophysiology of fever induction
Hypothalamus
Temperature is regulated in the hypothalamus. The trigger of a fever, called a pyrogen, results in the release of prostaglandin E2 (PGE2). PGE2 in turn acts on the hypothalamus, which creates a systemic response in the body, causing heat-generating effects to match a new, higher temperature set point. There are four receptors to which PGE2 can bind (EP1–EP4), with previous work indicating that the EP3 subtype mediates the fever response. Hence, the hypothalamus can be seen as working like a thermostat. When the set point is raised, the body increases its temperature through both active generation of heat and retention of heat. Peripheral vasoconstriction both reduces heat loss through the skin and causes the person to feel cold. Norepinephrine increases thermogenesis in brown adipose tissue, and muscle contraction through shivering raises the metabolic rate.
If these measures are insufficient to make the blood temperature in the brain match the new set point in the hypothalamus, the brain orchestrates heat effector mechanisms via the autonomic nervous system or primary motor center for shivering. These may be:
Increased heat production by increased muscle tone, shivering (muscle movements to produce heat) and release of hormones like epinephrine; and
Prevention of heat loss, e.g., through vasoconstriction.
When the hypothalamic set point moves back to baseline—either spontaneously or via medication—normal functions such as sweating, and the reverse of the foregoing processes (e.g., vasodilation, end of shivering, and nonshivering heat production) are used to cool the body to the new, lower setting.
This contrasts with hyperthermia, in which the normal setting remains, and the body overheats through undesirable retention of excess heat or over-production of heat. Hyperthermia is usually the result of an excessively hot environment (heat stroke) or an adverse reaction to drugs. Fever can be differentiated from hyperthermia by the circumstances surrounding it and its response to anti-pyretic medications.
In infants, the autonomic nervous system may also activate brown adipose tissue to produce heat (non-shivering thermogenesis).
Increased heart rate and vasoconstriction contribute to increased blood pressure in fever.
Pyrogens
A pyrogen is a substance that induces fever. In the presence of an infectious agent, such as bacteria, viruses, or viroids, the immune response of the body is to inhibit their growth and eliminate them. The most common pyrogens are endotoxins, which are lipopolysaccharides (LPS) produced by Gram-negative bacteria such as E. coli, but pyrogens also include non-endotoxin substances, derived either from microorganisms other than Gram-negative bacteria or from chemical substances. Pyrogens may be internal (endogenous) or external (exogenous) to the body.
The "pyrogenicity" of given pyrogens varies: in extreme cases, bacterial pyrogens can act as superantigens and cause rapid and dangerous fevers.
Endogenous
Endogenous pyrogens are cytokines released from monocytes (which are part of the immune system). In general, they stimulate chemical responses, often in the presence of an antigen, leading to a fever. While they can be a product of external factors like exogenous pyrogens, they can also be induced by internal factors such as damage-associated molecular patterns, as in rheumatoid arthritis or lupus.
Major endogenous pyrogens are interleukin 1 (α and β) and interleukin 6 (IL-6). Minor endogenous pyrogens include interleukin-8, tumor necrosis factor-β, macrophage inflammatory protein-α and macrophage inflammatory protein-β, as well as interferon-α, interferon-β, and interferon-γ. Tumor necrosis factor-α (TNF) also acts as a pyrogen, mediated by interleukin 1 (IL-1) release. These cytokine factors are released into general circulation, where they migrate to the brain's circumventricular organs, where they are more easily absorbed than in areas protected by the blood–brain barrier. The cytokines then bind to endothelial receptors on vessel walls or to receptors on microglial cells, resulting in activation of the arachidonic acid pathway.
Of these, IL-1β, TNF, and IL-6 are able to raise the temperature setpoint of an organism and cause fever. These proteins produce a cyclooxygenase which induces the hypothalamic production of PGE2 which then stimulates the release of neurotransmitters such as cyclic adenosine monophosphate and increases body temperature.
Exogenous
Exogenous pyrogens are external to the body and are of microbial origin. In general, these pyrogens, including bacterial cell wall products, may act on Toll-like receptors in the hypothalamus and elevate the thermoregulatory setpoint.
An example of a class of exogenous pyrogens are bacterial lipopolysaccharides (LPS) present in the cell wall of gram-negative bacteria. According to one mechanism of pyrogen action, an immune system protein, lipopolysaccharide-binding protein (LBP), binds to LPS, and the LBP–LPS complex then binds to a CD14 receptor on a macrophage. The LBP-LPS binding to CD14 results in cellular synthesis and release of various endogenous cytokines, e.g., interleukin 1 (IL-1), interleukin 6 (IL-6), and tumor necrosis factor-alpha (TNFα). A further downstream event is activation of the arachidonic acid pathway.
PGE2 release
PGE2 release comes from the arachidonic acid pathway. This pathway (as it relates to fever), is mediated by the enzymes phospholipase A2 (PLA2), cyclooxygenase-2 (COX-2), and prostaglandin E2 synthase. These enzymes ultimately mediate the synthesis and release of PGE2.
PGE2 is the ultimate mediator of the febrile response. The setpoint temperature of the body will remain elevated until PGE2 is no longer present. PGE2 acts on neurons in the preoptic area (POA) through the prostaglandin E receptor 3 (EP3). EP3-expressing neurons in the POA innervate the dorsomedial hypothalamus (DMH), the rostral raphe pallidus nucleus in the medulla oblongata (rRPa), and the paraventricular nucleus (PVN) of the hypothalamus. Fever signals sent to the DMH and rRPa lead to stimulation of the sympathetic output system, which evokes non-shivering thermogenesis to produce body heat and skin vasoconstriction to decrease heat loss from the body surface. It is presumed that the innervation from the POA to the PVN mediates the neuroendocrine effects of fever through the pathway involving pituitary gland and various endocrine organs.
Management
Fever does not necessarily need to be treated, and most people with a fever recover without specific medical attention. Although it is unpleasant, fever rarely rises to a dangerous level even if untreated. Damage to the brain generally does not occur until temperatures reach , and it is rare for an untreated fever to exceed . Treating fever in people with sepsis does not affect outcomes. Small trials have shown no benefit of treating fevers of or higher in critically ill patients in ICUs, and one trial was terminated early because patients receiving aggressive fever treatment were dying more often.
According to the NIH, the two assumptions which are generally used to argue in favor of treating fevers have not been experimentally validated. These are that (1) a fever is noxious, and (2) suppression of a fever will reduce its noxious effect. Most of the other studies supporting the association of fever with poorer outcomes have been observational in nature. In theory, these critically ill patients and those faced with additional physiologic stress may benefit from fever reduction, but the evidence on both sides of the argument appears to be mostly equivocal.
Conservative measures
Limited evidence supports sponging or bathing feverish children with tepid water. The use of a fan or air conditioning may somewhat reduce the temperature and increase comfort. If the temperature reaches the extremely high level of hyperpyrexia, aggressive cooling is required (generally produced mechanically via conduction by applying numerous ice packs across most of the body or direct submersion in ice water). In general, people are advised to keep adequately hydrated. Whether increased fluid intake improves symptoms or shortens respiratory illnesses such as the common cold is not known.
Medications
Medications that lower fevers are called antipyretics. The antipyretic ibuprofen is effective in reducing fevers in children. It is more effective than acetaminophen (paracetamol) in children. Ibuprofen and acetaminophen may be safely used together in children with fevers. The efficacy of acetaminophen by itself in children with fevers has been questioned. Ibuprofen is also superior to aspirin in children with fevers. Additionally, aspirin is not recommended in children and young adults (those under the age of 16 or 19 depending on the country) due to the risk of Reye's syndrome.
Using both paracetamol and ibuprofen at the same time or alternating between the two is more effective at decreasing fever than using only paracetamol or ibuprofen. It is not clear if it increases child comfort. Response or nonresponse to medications does not predict whether or not a child has a serious illness.
With respect to the effect of antipyretics on the risk of death in those with infection, studies have found mixed results, as of 2019.
Epidemiology
Fever is one of the most common medical signs. It is part of about 30% of healthcare visits by children, and occurs in up to 75% of adults who are seriously sick. About 5% of people who go to an emergency room have a fever.
History
A number of types of fever were known as early as 460 BC to 370 BC, when Hippocrates was practicing medicine, including those due to malaria (tertian, or every 2 days, and quartan, or every 3 days). It also became clear around this time that fever was a symptom of disease rather than a disease in and of itself.
Infections presenting with fever were a major source of mortality in humans for about 200,000 years. Until the late nineteenth century, approximately half of all humans died from infections before the age of fifteen.
An older term, febricula (a diminutive form of the Latin word for fever), was once used to refer to a low-grade fever lasting only a few days. This term fell out of use in the early 20th century, and the symptoms it referred to are now thought to have been caused mainly by various minor viral respiratory infections.
Society and culture
Mythology
Febris (fever in Latin) is the goddess of fever in Roman mythology. People with fevers would visit her temples.
Tertiana and Quartana are the goddesses of tertian and quartan fevers of malaria in Roman mythology.
Jvarasura (fever-demon in Hindi) is the personification of fever and disease in Hindu and Buddhist mythology.
Pediatrics
Fever is often viewed with greater concern by parents and healthcare professionals than might be deserved, a phenomenon known as fever phobia, which is based in both caregiver's and parents' misconceptions about fever in children. Among them, many parents incorrectly believe that fever is a disease rather than a medical sign, that even low fevers are harmful, and that any temperature even briefly or slightly above the oversimplified "normal" number marked on a thermometer is a clinically significant fever. They are also afraid of harmless side effects like febrile seizures and dramatically overestimate the likelihood of permanent damage from typical fevers. The underlying problem, according to professor of pediatrics Barton D. Schmitt, is that "as parents we tend to suspect that our children's brains may melt." As a result of these misconceptions parents are anxious, give the child fever-reducing medicine when the temperature is technically normal or only slightly elevated, and interfere with the child's sleep to give the child more medicine.
Other species
Fever is an important metric for the diagnosis of disease in domestic animals. The body temperature of animals, which is taken rectally, is different from one species to another. For example, a horse is said to have a fever above . In species that allow the body to have a wide range of "normal" temperatures, such as camels, whose body temperature varies as the environmental temperature varies, the body temperature which constitutes a febrile state differs depending on the environmental temperature. Fever can also be behaviorally induced by invertebrates that do not have immune-system based fever. For instance, some species of grasshopper will thermoregulate to achieve body temperatures that are 2–5 °C higher than normal in order to inhibit the growth of fungal pathogens such as Beauveria bassiana and Metarhizium acridum. Honeybee colonies are also able to induce a fever in response to a fungal parasite Ascosphaera apis.
References
Further reading
External links
Fever and Taking Your Child's Temperature
US National Institute of Health factsheet
Drugs most commonly associated with the adverse event Pyrexia (Fever) as reported to the FDA
Fever at MedlinePlus
Why are We So Afraid of Fevers? at The New York Times
Symptoms and signs
Birth defect
A birth defect is an abnormal condition that is present at birth, regardless of its cause. Birth defects may result in disabilities that may be physical, intellectual, or developmental. The disabilities can range from mild to severe. Birth defects are divided into two main types: structural disorders, in which problems are seen with the shape of a body part, and functional disorders, in which problems exist with how a body part works. Functional disorders include metabolic and degenerative disorders. Some birth defects include both structural and functional disorders.
Birth defects may result from genetic or chromosomal disorders, exposure to certain medications or chemicals, or certain infections during pregnancy. Risk factors include folate deficiency, drinking alcohol or smoking during pregnancy, poorly controlled diabetes, and a mother over the age of 35 years old. Many are believed to involve multiple factors. Birth defects may be visible at birth or diagnosed by screening tests. A number of defects can be detected before birth by different prenatal tests.
Treatment varies depending on the defect in question. This may include therapy, medication, surgery, or assistive technology. Birth defects affected about 96 million people . In the United States, they occur in about 3% of newborns. They resulted in about 628,000 deaths in 2015, down from 751,000 in 1990. The types with the greatest numbers of deaths are congenital heart disease (303,000), followed by neural tube defects (65,000).
Classification
Much of the language used for describing congenital conditions antedates genome mapping, and structural conditions are often considered separately from other congenital conditions. Many metabolic conditions are now known to have subtle structural expression, and structural conditions often have genetic links. Still, congenital conditions are often classified on a structural basis, organized when possible by primary organ system affected.
Primarily structural
Several terms are used to describe congenital abnormalities. (Some of these are also used to describe noncongenital conditions, and more than one term may apply in an individual condition.)
Terminology
A congenital physical anomaly is an abnormality of the structure of a body part. It may or may not be perceived as a problem condition. Many, if not most, people have one or more minor physical anomalies if examined carefully. Examples of minor anomalies can include curvature of the fifth finger (clinodactyly), a third nipple, tiny indentations of the skin near the ears (preauricular pits), shortness of the fourth metacarpal or metatarsal bones, or dimples over the lower spine (sacral dimples). Some minor anomalies may be clues to more significant internal abnormalities.
Birth defect is a widely used term for a congenital malformation, i.e. a congenital, physical anomaly that is recognizable at birth, and which is significant enough to be considered a problem. According to the Centers for Disease Control and Prevention (CDC), most birth defects are believed to be caused by a complex mix of factors including genetics, environment, and behaviors, though many birth defects have no known cause. An example of a birth defect is cleft palate, which occurs during the fourth through seventh weeks of gestation. Body tissue and special cells from each side of the head grow toward the center of the face. They join to make the face. A cleft means a split or separation; the "roof" of the mouth is called the palate.
A congenital malformation is a physical anomaly that is deleterious, i.e. a structural defect perceived as a problem. A typical combination of malformations affecting more than one body part is referred to as a malformation syndrome.
Some conditions are due to abnormal tissue development:
A malformation is associated with a disorder of tissue development. Malformations often occur in the first trimester.
A dysplasia is a disorder at the organ level that is due to problems with tissue development.
Conditions also can arise after tissue is formed:
A deformation is a condition arising from mechanical stress to normal tissue. Deformations often occur in the second or third trimester, and can be due to oligohydramnios.
A disruption involves breakdown of normal tissues.
When multiple effects occur in a specified order, they are known as a sequence. When the order is not known, it is a syndrome.
Examples of primarily structural congenital disorders
A limb anomaly is called a dysmelia. These include all forms of limb anomalies, such as amelia, ectrodactyly, phocomelia, polymelia, polydactyly, syndactyly, polysyndactyly, oligodactyly, brachydactyly, achondroplasia, congenital aplasia or hypoplasia, amniotic band syndrome, and cleidocranial dysostosis.
Congenital heart defects include patent ductus arteriosus, atrial septal defect, ventricular septal defect, and tetralogy of Fallot.
Congenital anomalies of the nervous system include neural tube defects such as spina bifida, encephalocele, and anencephaly. Other congenital anomalies of the nervous system include the Arnold–Chiari malformation, the Dandy–Walker malformation, hydrocephalus, microencephaly, megalencephaly, lissencephaly, polymicrogyria, holoprosencephaly, and agenesis of the corpus callosum.
Congenital anomalies of the gastrointestinal system include numerous forms of stenosis and atresia, and perforation, such as gastroschisis.
Congenital anomalies of the kidney and urinary tract include anomalies of the renal parenchyma, the kidneys, and the urinary collecting system.
Defects can be bilateral or unilateral, and different defects often coexist in an individual child.
Primarily metabolic
A congenital metabolic disease is also referred to as an inborn error of metabolism. Most of these are single-gene defects, usually heritable. Many affect the structure of body parts, but some simply affect the function.
Other
Other well-defined genetic conditions may affect the production of hormones, receptors, structural proteins, and ion channels.
Causes
Alcohol exposure
The mother's consumption of alcohol during pregnancy can cause a continuum of various permanent birth defects: craniofacial abnormalities, brain damage, intellectual disability, heart disease, kidney abnormalities, skeletal anomalies, and ocular abnormalities.
The prevalence of affected children is estimated to be at least 1% in the U.S. as well as in Canada.
Very few studies have investigated the links between paternal alcohol use and offspring health.
However, recent animal research has shown a correlation between paternal alcohol exposure and decreased offspring birth weight. Behavioral and cognitive disorders, including difficulties with learning and memory, hyperactivity, and lowered stress tolerance have been linked to paternal alcohol ingestion. The compromised stress management skills of animals whose male parent was exposed to alcohol are similar to the exaggerated responses to stress that children with fetal alcohol syndrome display because of maternal alcohol use. These birth defects and behavioral disorders were found in cases of both long- and short-term paternal alcohol ingestion. In the same animal study, paternal alcohol exposure was correlated with a significant difference in organ size and the increased risk of the offspring displaying ventricular septal defects at birth.
Toxic substances
Substances whose toxicity can cause congenital disorders are called teratogens, and include certain pharmaceutical and recreational drugs in pregnancy, as well as many environmental toxins in pregnancy.
A review published in 2010 identified six main teratogenic mechanisms associated with medication use: folate antagonism, neural crest cell disruption, endocrine disruption, oxidative stress, vascular disruption, and specific receptor- or enzyme-mediated teratogenesis.
An estimated 10% of all birth defects are caused by prenatal exposure to a teratogenic agent. These exposures include medication or drug exposures, maternal infections and diseases, and environmental and occupational exposures. Paternal smoking has also been linked to an increased risk of birth defects and childhood cancer for the offspring, where the paternal germline undergoes oxidative damage due to cigarette use. Teratogen-caused birth defects are potentially preventable. Nearly 50% of pregnant women have been exposed to at least one medication during gestation. During pregnancy, a woman can also be exposed to teratogens from contaminated clothing or toxins within the seminal fluid of a partner. An additional study found that of 200 individuals referred for genetic counseling for a teratogenic exposure, 52% were exposed to more than one potential teratogen.
The United States Environmental Protection Agency studied 1,065 chemical and drug substances in their ToxCast program (part of the CompTox Chemicals Dashboard) using in silico modeling and a human pluripotent stem cell-based assay to predict in vivo developmental intoxicants based on changes in cellular metabolism following chemical exposure. Findings of the study published in 2020 were that 19% of the 1065 chemicals yielded a prediction of developmental toxicity.
Medications and supplements
Probably the most well-known teratogenic drug is thalidomide. It was developed near the end of the 1950s by Chemie Grünenthal as a sleep-inducing aid and antiemetic. Because of its ability to prevent nausea, it was prescribed for pregnant women in almost 50 countries worldwide between 1956 and 1962. Until William McBride published the study leading to its withdrawal from the market in 1961, about 8,000 to 10,000 severely malformed children were born. The most typical disorders induced by thalidomide were reductional deformities of the long bones of the extremities. Phocomelia, otherwise a rare deformity, therefore helped to recognize the teratogenic effect of the new drug. Among other malformations caused by thalidomide were those of the ears, eyes, brain, kidney, heart, and digestive and respiratory tracts; 40% of the prenatally affected children died soon after birth. As thalidomide is used today as a treatment for multiple myeloma and leprosy, several births of affected children have been described despite the strictly required use of contraception among female patients treated with it.
Vitamin A is the sole vitamin that is embryotoxic even in a therapeutic dose, for example in multivitamins, because its metabolite, retinoic acid, plays an important role as a signal molecule in the development of several tissues and organs. Its natural precursor, β-carotene, is considered safe, whereas the consumption of animal liver can lead to malformation, as the liver stores lipophilic vitamins, including retinol. Isotretinoin (13-cis-retinoic acid; brand name Roaccutane), a vitamin A analog often used to treat severe acne, is such a strong teratogen that just a single dose taken by a pregnant woman (even transdermally) may result in serious birth defects. Because of this effect, most countries have systems in place to ensure that it is not given to pregnant women and that the patient is aware of how important it is to prevent pregnancy during and at least one month after treatment. Medical guidelines also suggest that pregnant women should limit vitamin A intake to about 700 μg/day, as it has teratogenic potential when consumed in excess. Vitamin A and similar substances can induce spontaneous abortions, premature births, defects of the eyes (microphthalmia), ears, thymus, and face, as well as neurological defects (hydrocephalus, microcephaly), cardiovascular defects, and intellectual disability.
Tetracycline, an antibiotic, should never be prescribed to women of reproductive age or to children, because of its negative impact on bone mineralization and teeth mineralization. The "tetracycline teeth" have a brown or gray color as a result of defective development of both the dentin and the enamel of the teeth.
Several anticonvulsants are known to be highly teratogenic. Phenytoin, also known as diphenylhydantoin, along with carbamazepine, is responsible for the fetal hydantoin syndrome, which may typically include a broad nose base, cleft lip and/or palate, microcephaly, hypoplasia of the nails and fingers, intrauterine growth restriction, and intellectual disability. Trimethadione taken during pregnancy is responsible for the fetal trimethadione syndrome, characterized by craniofacial, cardiovascular, renal, and spine malformations, along with a delay in mental and physical development. Valproate has antifolate effects, leading to neural tube closure-related defects such as spina bifida. Lower IQ and autism have recently also been reported as a result of intrauterine valproate exposure.
Hormonal contraception is considered harmless for the embryo. Peterka and Novotná do, however, state that synthetic progestins used to prevent miscarriage in the past frequently caused masculinization of the outer reproductive organs of female newborns due to their androgenic activity. Diethylstilbestrol is a synthetic estrogen used from the 1940s to 1971, when prenatal exposure was linked to clear-cell adenocarcinoma of the vagina. Subsequent studies showed elevated risks for other tumors and congenital malformations of the sex organs for both sexes.
All cytostatics are strong teratogens; abortion is usually recommended when pregnancy is discovered during or before chemotherapy. Aminopterin, a cytostatic drug with an antifolate effect, was used during the 1950s and 1960s to induce therapeutic abortions. In some cases the abortion did not happen, but the newborns had a fetal aminopterin syndrome consisting of growth retardation, craniosynostosis, hydrocephalus, facial dysmorphism, intellectual disability, or leg deformities.
Toxic substances
Drinking water is often a medium through which harmful toxins travel. Heavy metals, elements, nitrates, nitrites, and fluoride can be carried through water and cause congenital disorders.
Nitrate, which is found mostly in drinking water from ground sources, is a powerful teratogen. A case-control study in rural Australia, conducted following frequent reports of prenatal mortality and congenital malformations, found that those who drank nitrate-containing groundwater, as opposed to rain water, ran the risk of giving birth to children with central nervous system disorders, musculoskeletal defects, and cardiac defects.
Chlorinated and aromatic solvents such as benzene and trichloroethylene sometimes enter the water supply due to oversights in waste disposal. A case-control study found that by 1986, leukemia was occurring in the children of Woburn, Massachusetts, at a rate four times the expected rate of incidence. Further investigation revealed a connection between the high occurrence of leukemia and an error in water distribution that delivered water significantly contaminated with manufacturing waste containing trichloroethylene to the town.
As an endocrine disruptor, DDT has been shown to induce miscarriages, interfere with the development of the female reproductive system, cause congenital hypothyroidism, and is suspected of contributing to childhood obesity.
Fluoride, when transmitted through water at high levels, can also act as a teratogen. Two reports on fluoride exposure from China, which were controlled to account for the education level of parents, found that children born to parents who were exposed to 4.12 ppm fluoride grew to have IQs that were, on average, seven points lower than their counterparts whose parents consumed water that contained 0.91 ppm fluoride. In studies conducted on rats, higher fluoride in drinking water led to increased acetylcholinesterase levels, which can alter prenatal brain development. The most significant effects were noted at a level of 5 ppm.
The fetus is even more susceptible to damage from carbon monoxide intake, which can be harmful when inhaled during pregnancy, usually through first- or second-hand tobacco smoke. The concentration of carbon monoxide in the infant born to a nonsmoking mother is around 2%, and this concentration drastically increases to a range of 6%–9% if the mother smokes tobacco. Other possible sources of prenatal carbon monoxide intoxication are exhaust gas from combustion motors, use of dichloromethane (paint thinner, varnish removers) in enclosed areas, defective gas water heaters, indoor barbeques, open flames in poorly ventilated areas, and atmospheric exposure in highly polluted areas. Exposure to carbon monoxide at toxic levels during the first two trimesters of pregnancy can lead to intrauterine growth restriction, resulting in a baby who has stunted growth and is born smaller than 90% of other babies at the same gestational age. The effect of chronic exposure to carbon monoxide can depend on the stage of pregnancy in which the mother is exposed. Exposure during the embryonic stage can have neurological consequences, such as telencephalic dysgenesis, behavioral difficulties during infancy, and reduction of cerebellum volume. Possible skeletal defects could also result from exposure to carbon monoxide during the embryonic stage, such as hand and foot malformations, hip dysplasia, hip subluxation, agenesis of a limb, and inferior maxillary atresia with glossoptosis. In addition, carbon monoxide exposure between days 35 and 40 of embryonic development can lead to an increased risk of the child developing a cleft palate. Exposure to carbon monoxide or polluted ozone can also lead to cardiac defects of the ventricular septum, pulmonary artery, and heart valves. The effects of carbon monoxide exposure are decreased later in fetal development, during the fetal stage, but they may still lead to anoxic encephalopathy.
Industrial pollution can also lead to congenital defects. Over a period of 37 years, the Chisso Corporation, a petrochemical and plastics company, contaminated the waters of Minamata Bay with an estimated 27 tons of methylmercury, contaminating the local water supply. This led many people in the area to develop what became known as the "Minamata disease". Because methylmercury is a teratogen, the mercury poisoning of those residing by the bay resulted in neurological defects in the offspring. Infants exposed to mercury poisoning in utero showed predispositions to cerebral palsy, ataxia, inhibited psychomotor development, and intellectual disability.
Landfill sites have been shown to have adverse effects on fetal development. Extensive research has shown that landfills have several negative effects on babies born to mothers living near landfill sites: low birth weight, birth defects, spontaneous abortion, and fetal and infant mortality. Studies done around the Love Canal site near Niagara Falls and the Lipari Landfill in New Jersey have shown a higher proportion of low birth-weight babies than communities farther away from landfills. A study done in California showed a positive correlation between time and quantity of dumping and low birth weights and neonatal deaths. A study in the United Kingdom showed a correlation between pregnant women living near landfill sites and an increased risk of congenital disorders, such as neural tube defects, hypospadias, epispadias, and abdominal wall defects, such as gastroschisis and exomphalos. A study conducted on a Welsh community also showed an increased incidence of gastroschisis. Another study on 21 European hazardous-waste sites showed that those living within 3 km had an increased risk of giving birth to infants with birth defects and that as distance from the landfill increased, the risk decreased. These birth defects included neural tube defects, malformations of the cardiac septa, anomalies of arteries and veins, and chromosomal anomalies. Looking at communities that live near landfill sites raises the issue of environmental justice. A vast majority of such sites are located near poor, mostly black, communities. For example, between the early 1920s and 1978, about 25% of Houston's population was black. However, over 80% of landfills and incinerators during this time were located in these black communities.
Another issue regarding environmental justice is lead poisoning. A fetus exposed to lead during the pregnancy can result in learning difficulties and slowed growth. Some paints (before 1978) and pipes contain lead. Therefore, pregnant women who live in homes with lead paint inhale the dust containing lead, leading to lead exposure in the fetus. When lead pipes are used for drinking water and cooking water, this water is ingested, along with the lead, exposing the fetus to this toxin. This issue is more prevalent in poorer communities because more well-off families are able to afford to have their homes repainted and pipes renovated.
Endometriosis
Endometriosis can impact a woman's fetus, causing a 30% higher risk for congenital malformations and a 50% higher risk of neonates being under-sized for their gestational age.
Smoking
Paternal smoking prior to conception has been linked with an increased risk of congenital abnormalities in offspring.
Smoking causes DNA mutations in the germline of the father, which can be inherited by the offspring. Cigarette smoke acts as a chemical mutagen on germ cell DNA. The germ cells suffer oxidative damage, and the effects can be seen in altered mRNA production, infertility issues, and side effects in the embryonic and fetal stages of development. This oxidative damage may result in epigenetic or genetic modifications of the father's germline. Fetal lymphocytes have been damaged as a result of a father's smoking habits prior to conception.
Correlations between paternal smoking and the increased risk of offspring developing childhood cancers (including acute leukemia, brain tumors, and lymphoma) before age five have been established. Little is currently known about how paternal smoking damages the fetus, and what window of time in which the father smokes is most harmful to offspring.
Infections
A vertically transmitted infection is an infection caused by bacteria, viruses, or in rare cases, parasites transmitted directly from the mother to an embryo, fetus, or baby during pregnancy or childbirth.
Congenital disorders were initially believed to be the result of only hereditary factors. However, in the early 1940s, Australian pediatric ophthalmologist Norman Gregg began recognizing a pattern in which infants arriving at his surgery were developing congenital cataracts at a higher rate than those who developed them from hereditary factors. On October 15, 1941, Gregg delivered a paper that explained his findings: 68 of the 78 children with congenital cataracts had been exposed in utero to rubella due to an outbreak in Australian army camps. These findings confirmed to Gregg that environmental causes for congenital disorders could exist.
Rubella is known to cause abnormalities of the eye, internal ear, heart, and sometimes the teeth. More specifically, fetal exposure to rubella during weeks five to ten of development (the sixth week particularly) can cause cataracts and microphthalmia in the eyes. If the mother is infected with rubella during the ninth week, a crucial week for internal ear development, destruction of the organ of Corti can occur, causing deafness. In the heart, the ductus arteriosus can remain open after birth, leading to hypertension. Rubella can also lead to atrial and ventricular septal defects in the heart. If exposed to rubella in the second trimester, the fetus can develop central nervous system malformations. However, because rubella infections may remain undetected, misdiagnosed, or unrecognized in the mother, and/or some abnormalities are not evident until later in the child's life, the precise incidence of birth defects due to rubella is not entirely known. The timing of the mother's infection during fetal development determines the risk and type of birth defect. As the embryo develops, the risk of abnormalities decreases. If the fetus is exposed to the rubella virus during the first four weeks, the risk of malformations is 47%; exposure during weeks five through eight carries a 22% risk, weeks 9–12 a 7% risk, and weeks 13–16 a 6% risk. Exposure during the first eight weeks of development can also lead to premature birth and fetal death. These numbers are calculated from immediate inspection of the infant after birth; mental defects are not accounted for in the percentages because they are not evident until later in the child's life. If they were included, these numbers would be much higher.
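The week-by-week risk figures quoted above can be expressed as a small lookup table. The Python sketch below encodes exactly the percentages stated in this paragraph; it is illustrative only and ignores the caveats just noted (undetected infections, and defects that only become evident later in life).

```python
# Illustrative only: the malformation-risk figures quoted above for maternal
# rubella infection, keyed by the gestational week at which exposure occurred.
RUBELLA_MALFORMATION_RISK = [
    (range(1, 5), 0.47),    # weeks 1-4
    (range(5, 9), 0.22),    # weeks 5-8
    (range(9, 13), 0.07),   # weeks 9-12
    (range(13, 17), 0.06),  # weeks 13-16
]

def malformation_risk(week_of_exposure):
    """Return the quoted malformation risk for a given week of exposure, or None."""
    for weeks, risk in RUBELLA_MALFORMATION_RISK:
        if week_of_exposure in weeks:
            return risk
    return None

if __name__ == "__main__":
    print(malformation_risk(6))  # -> 0.22
```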
Other infectious agents include cytomegalovirus, the herpes simplex virus, hyperthermia, toxoplasmosis, and syphilis. Maternal exposure to cytomegalovirus can cause microcephaly, cerebral calcifications, blindness, chorioretinitis (which can cause blindness), hepatosplenomegaly, and meningoencephalitis in fetuses. Microcephaly is a disorder in which the fetus has an atypically small head, cerebral calcification means certain areas of the brain have atypical calcium deposits, and meningoencephalitis is inflammation of the brain and its surrounding membranes. All three disorders cause abnormal brain function or intellectual disability. Hepatosplenomegaly is the enlargement of the liver and spleen, which causes digestive problems. Cytomegalovirus infection can also cause kernicterus and petechiae. Kernicterus causes yellow pigmentation of the skin, brain damage, and deafness. Petechiae occur when capillaries bleed, resulting in red or purple spots on the skin. However, cytomegalovirus infection is often fatal in the embryo. The Zika virus can also be transmitted from the pregnant mother to her baby and cause microcephaly.
The herpes simplex virus can cause microcephaly, microphthalmus (abnormally small eyeballs), retinal dysplasia, hepatosplenomegaly, and intellectual disability. Both microphthalmus and retinal dysplasia can cause blindness. However, the most common symptom in infants is an inflammatory response that develops during the first three weeks of life. Hyperthermia causes anencephaly, in which part of the brain and skull is absent in the infant. Maternal exposure to toxoplasmosis can cause cerebral calcification, hydrocephalus (which can cause mental disabilities), and intellectual disability in infants. Other birth abnormalities have been reported as well, such as chorioretinitis, microphthalmus, and ocular defects. Syphilis causes congenital deafness, intellectual disability, and diffuse fibrosis in organs such as the liver and lungs if the embryo is exposed.
Malnutrition
A lack of folic acid, a B vitamin, in the diet of the mother can cause cellular neural tube deformities that result in spina bifida. The risk of congenital disorders such as neural tube defects can be reduced by 72% if the mother consumes 4 mg of folic acid before conception and after twelve weeks of pregnancy. Folic acid, or vitamin B9, aids the development of the fetal nervous system.
Studies with mice have found that food deprivation of the male mouse prior to conception leads to the offspring displaying significantly lower blood glucose levels. This finding is important for future understanding of how genetics may predispose individuals to diseases such as obesity, diabetes, and cancer.
Physical restraint
External physical shocks or constraints due to growth in a restricted space may result in unintended deformation or separation of cellular structures, resulting in an abnormal final shape or damaged structures unable to function as expected. An example is Potter syndrome due to oligohydramnios.
For multicellular organisms that develop in a womb, the physical interference or presence of other similarly developing organisms such as twins can result in the two cellular masses being integrated into a larger whole, with the combined cells attempting to continue to develop in a manner that satisfies the intended growth patterns of both cell masses. The two cellular masses can compete with each other, and may either duplicate or merge various structures. This results in conditions such as conjoined twins, and the resulting merged organism may die at birth when it must leave the life-sustaining environment of the womb and must attempt to sustain its biological processes independently.
Genetics
Genetic causes of birth defects include inheritance of abnormal genes from the mother or the father, as well as new mutations in one of the germ cells that gave rise to the fetus. Male germ cells mutate at a much faster rate than female germ cells, so as the father ages, his germ-cell DNA accumulates more mutations. If an egg is fertilized by a sperm with damaged DNA, there is a possibility that the fetus could develop abnormally.
Genetic disorders are all congenital (present at birth), though they may not be expressed or recognized until later in life. Genetic disorders may be grouped into single-gene defects, multiple-gene disorders, or chromosomal defects. Single-gene defects may arise from abnormalities of both copies of an autosomal gene (a recessive disorder) or of only one of the two copies (a dominant disorder). Some conditions result from deletions or abnormalities of a few genes located contiguously on a chromosome. Chromosomal disorders involve the loss or duplication of larger portions of a chromosome (or an entire chromosome) containing hundreds of genes. Large chromosomal abnormalities always produce effects on many different body parts and organ systems.
Defective sperm
Non-genetic abnormalities of sperm cells, such as deformed centrioles and other components of the sperm tail and neck that are important for embryonic development, may also result in birth defects.
Socioeconomics
A low socioeconomic status in a deprived neighborhood may include exposure to "environmental stressors and risk factors". Socioeconomic inequalities are commonly measured by the Carstairs–Morris score, the Index of Multiple Deprivation, the Townsend deprivation index, and the Jarman score. The Jarman score, for example, considers "unemployment, overcrowding, single parents, under-fives, elderly living alone, ethnicity, low social class and residential mobility". In Vos' meta-analysis these indices are used to view the effect of low-SES neighborhoods on maternal health. In the meta-analysis, data from individual studies were collected from 1985 up until 2008. Vos concludes that a correlation exists between prenatal adversities and deprived neighborhoods. Other studies have shown that low SES is closely associated with impaired development of the fetus in utero and growth retardation. Studies also suggest that children born in low-SES families are "likely to be born prematurely, at low birth weight, or with asphyxia, a birth defect, a disability, fetal alcohol syndrome, or AIDS". Bradley and Corwyn also suggest that congenital disorders arise from the mother's lack of nutrition, a poor lifestyle, maternal substance abuse and "living in a neighborhood that contains hazards affecting fetal development (toxic waste dumps)". In a meta-analysis that viewed how inequalities influenced maternal health, it was suggested that deprived neighborhoods often promoted behaviors such as smoking, drug and alcohol use. After controlling for socioeconomic factors and ethnicity, several individual studies demonstrated an association with outcomes such as perinatal mortality and preterm birth.
Radiation
For the survivors of the atomic bombings of Hiroshima and Nagasaki, known as the hibakusha, no statistically demonstrable increase in birth defects or congenital malformations was found among their later-conceived children, nor among the later-conceived children of cancer survivors who had previously received radiotherapy.
The surviving women of Hiroshima and Nagasaki who were able to conceive, though exposed to substantial amounts of radiation, later had children with no higher incidence of abnormalities/birth defects than in the Japanese population as a whole.
Relatively few studies have researched the effects of paternal radiation exposure on offspring. Following the Chernobyl disaster, it was assumed in the 1990s that the germ line of irradiated fathers suffered minisatellite mutations in the DNA, which were inherited by descendants. More recently, however, the World Health Organization states, "children conceived before or after their father's exposure showed no statistically significant differences in mutation frequencies". A similar, statistically insignificant, increase was also seen by independent researchers analyzing the children of the liquidators. Animal studies have shown that very large doses of X-ray irradiation of male mice resulted in birth defects in the offspring.
In the 1980s, a relatively high prevalence of pediatric leukemia cases in children living near a nuclear processing plant in West Cumbria, UK, led researchers to investigate whether the cancer was a result of paternal radiation exposure. A significant association between paternal irradiation and offspring cancer was found, but further research in areas close to other nuclear processing plants did not produce the same results. This was later determined to be the Seascale cluster, for which the leading hypothesis is that an influx of foreign workers, who had a different background rate of leukemia from the British average, resulted in the observed cluster of six more affected children than expected around Cumbria.
Parent's age
Certain birth complications can occur more often with advanced maternal age (greater than 35 years). Complications include fetal growth restriction, preeclampsia, placental abruption, premature birth, and stillbirth. These complications may put not only the child but also the mother at risk.
The effects of the father's age on offspring are not yet well understood and are studied far less extensively than the effects of the mother's age. Fathers contribute proportionally more DNA mutations to their offspring via their germ cells than the mother, with the paternal age governing how many mutations are passed on. This is because, as humans age, male germ cells acquire mutations at a much faster rate than female germ cells.
Around a 5% increase in the incidence of ventricular septal defects, atrial septal defects, and patent ductus arteriosus in offspring has been found to be correlated with advanced paternal age. Advanced paternal age has also been linked to increased risk of achondroplasia and Apert syndrome. Offspring born to fathers under the age of 20 show increased risk of being affected by patent ductus arteriosus, ventricular septal defects, and the tetralogy of Fallot. It is hypothesized that this may be due to environmental exposures or lifestyle choices.
Research has found that there is a correlation between advanced paternal age and risk of birth defects such as limb anomalies, syndromes involving multiple systems, and Down syndrome. Recent studies have concluded that 5-9% of Down syndrome cases are due to paternal effects, but these findings are controversial.
There is concrete evidence that advanced paternal age is associated with the increased likelihood that a mother will have a miscarriage or that fetal death will occur.
Unknown
Although significant progress has been made in identifying the etiology of some birth defects, approximately 65% have no known or identifiable cause. These are referred to as sporadic, a term that implies an unknown cause, random occurrence regardless of maternal living conditions, and a low recurrence risk for future children. For 20-25% of anomalies there seems to be a "multifactorial" cause, meaning a complex interaction of multiple minor genetic anomalies with environmental risk factors. Another 10–13% of anomalies have a purely environmental cause (e.g. infections, illness, or drug abuse in the mother). Only 12–25% of anomalies have a purely genetic cause. Of these, the majority are chromosomal anomalies.
Congenital disorders are not limited to humans and can be found in a variety of other species, including cattle. One such condition is called schistosomus reflexus and is defined by spinal inversion, exposure of abdominal viscera, and limb abnormalities.
Prevention
Folate supplements decrease the risk of neural tube defects. Tentative evidence supports the role of L-arginine in decreasing the risk of intrauterine growth restriction.
Screening
Newborn screening tests were introduced in the early 1960s and initially dealt with just two disorders. Since then tandem mass spectrometry, gas chromatography–mass spectrometry, and DNA analysis have made it possible for a much larger range of disorders to be screened. Newborn screening mostly measures metabolite and enzyme activity using a dried blood spot sample. Screening tests are carried out in order to detect serious disorders that may be treatable to some extent. Early diagnosis allows therapeutic dietary advice, enzyme replacement therapy and organ transplants to be arranged promptly. Different countries support the screening for a number of metabolic disorders (inborn errors of metabolism (IEM)) and genetic disorders including cystic fibrosis and Duchenne muscular dystrophy.
Tandem mass spectrometry can also be used for IEM, and for the investigation of sudden infant death and shaken baby syndrome.
Screening can also be carried out prenatally and can include obstetric ultrasonography to give scans such as the nuchal scan. 3D ultrasound scans can give detailed information of structural anomalies.
Epidemiology
Congenital anomalies resulted in about 632,000 deaths per year in 2013, down from 751,000 in 1990. The types causing the greatest numbers of deaths are congenital heart defects (323,000), followed by neural tube defects (69,000).
Many studies have found that the frequency of occurrence of certain congenital malformations depends on the sex of the child (see table below). For example, pyloric stenosis occurs more often in males, while congenital hip dislocation is four to five times more likely to occur in females. Among children with one kidney, there are approximately twice as many males, whereas among children with three kidneys there are approximately 2.5 times more females. The same pattern is observed among infants with an excessive number of ribs, vertebrae, teeth and other organs which in the process of evolution have undergone reduction: among them there are more females, whereas among infants with a scarcity of these structures there are more males. Anencephaly occurs approximately twice as frequently in females. The number of boys born with six fingers is twice the number of girls. Various techniques are now available to detect congenital anomalies in the fetus before birth.
About 3% of newborns have a "major physical anomaly", meaning a physical anomaly that has cosmetic or functional significance.
Physical congenital abnormalities are the leading cause of infant mortality in the United States, accounting for more than 20% of all infant deaths. Seven to ten percent of all children will require extensive medical care to diagnose or treat a birth defect.
{| class="wikitable"
|+ The sex ratio of patients with congenital malformations
! Congenital anomaly !! Sex ratio, ♂♂:♀♀
|-
| Defects with female predominance ||
|-
| Congenital hip dislocation || 1 : 5.2; 1 : 5; 1 : 8; 1 : 3.7
|-
| Cleft palate || 1 : 3
|-
| Anencephaly || 1 : 1.9; 1 : 2
|-
| Craniocele || 1 : 1.8
|-
| Aplasia of lung || 1 : 1.51
|-
| Spinal herniation || 1 : 1.4
|-
| Diverticulum of the esophagus || 1 : 1.4
|-
| Diverticulum of the stomach || 1 : 1.4
|-
| Neutral defects ||
|-
| Hypoplasia of the tibia and femur || 1 : 1.2
|-
| Spina bifida || 1 : 1.2
|-
| Atresia of small intestine || 1 : 1
|-
| Microcephaly || 1.2 : 1
|-
| Esophageal atresia || 1.3 : 1; 1.5 : 1
|-
| Hydrocephalus || 1.3 : 1
|-
| Defects with male predominance ||
|-
| Diverticula of the colon || 1.5 : 1
|-
| Atresia of the rectum || 1.5 : 1; 2 : 1
|-
| Unilateral renal agenesis || 2 : 1; 2.1 : 1
|-
| Schistocystis || 2 : 1
|-
| Cleft lip and palate || 2 : 1; 1.47 : 1
|-
| Bilateral renal agenesis || 2.6 : 1
|-
| Congenital anomalies of the genitourinary system || 2.7 : 1
|-
| Pyloric stenosis, congenital || 5 : 1; 5.4 : 1
|-
| Meckel's diverticulum || More common in boys
|-
| Congenital megacolon || More common in boys
|-
| All defects || 1.22 : 1; 1.29 : 1
|}
Notes: some data were obtained on opposite-sex twins; some data were obtained in the period 1983–1994.
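Sex ratios in the table are written as males : females (for example, "1 : 5.2"). As a reading aid, the following minimal Python sketch, which is not part of any cited study, converts such a ratio string into the proportion of affected patients who are female.

<syntaxhighlight lang="python">
def fraction_female(ratio: str) -> float:
    """Convert a 'males : females' ratio string such as '1 : 5.2'
    into the fraction of affected patients who are female."""
    males, females = (float(part) for part in ratio.split(":"))
    return females / (males + females)


if __name__ == "__main__":
    # Congenital hip dislocation, one of the quoted ratios: 1 : 5.2
    print(round(fraction_female("1 : 5.2"), 2))  # about 0.84, i.e. ~84% female
    # Congenital pyloric stenosis, one of the quoted ratios: 5 : 1
    print(round(fraction_female("5 : 1"), 2))    # about 0.17, i.e. ~17% female
</syntaxhighlight>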
P. M. Rajewski and A. L. Sherman (1976) analyzed the frequency of congenital anomalies in relation to the organ system involved. A predominance of males was recorded for anomalies of phylogenetically younger organs and systems.
With respect to etiology, sex differences can be divided into those appearing before and those appearing after the differentiation of the male gonads during embryonic development, which begins around the eighteenth week. The testosterone level in male embryos rises considerably at that point. The subsequent hormonal and physiological differences between male and female embryos can explain some sex differences in the frequency of congenital defects. It is difficult to explain the observed differences in the frequency of birth defects between the sexes by the details of the reproductive functions or the influence of environmental and social factors.
United States
The CDC and National Birth Defect Project studied the incidence of birth defects in the US. Key findings include:
Down syndrome was the most common condition with an estimated prevalence of 14.47 per 10,000 live births, implying about 6,000 diagnoses each year.
About 7,000 babies are born with a cleft palate, cleft lip or both.
See also
Idiopathic
List of congenital disorders
List of ICD-9 codes 740-759: Congenital anomalies
Malformative syndrome
March of Dimes
Mitochondrial disease
National Birth Defects Prevention Network, founded 1997
Supernumerary body part
Notes
References
External links
WHO fact sheet on birth defects
CDC's National Center on Birth Defects and Developmental Disabilities
Animal developmental biology
Wikipedia medicine articles ready to translate | 0.776052 | 0.996912 | 0.773655 |
Medical procedure | A medical procedure is a course of action intended to achieve a result in the delivery of healthcare.
A medical procedure with the intention of determining, measuring, or diagnosing a patient condition or parameter is also called a medical test. Other common kinds of procedures are therapeutic (i.e., intended to treat, cure, or restore function or structure), such as surgical and physical rehabilitation procedures.
Definition
"An activity directed at or performed on an individual with the object of improving health, treating disease or injury, or making a diagnosis." - International Dictionary of Medicine and Biology
"The act or conduct of diagnosis, treatment, or operation." - Stedman's Medical Dictionary by Thomas Lathrop Stedman
"A series of steps by which a desired result is accomplished." - Dorland's Medical Dictionary by William Alexander Newman Dorland
"The sequence of steps to be followed in establishing some course of action." - Mosby's Medical, Nursing, & Allied Health Dictionary
List of medical procedures
Propaedeutic
Auscultation
Medical inspection (body features)
Palpation
Percussion (medicine)
Vital signs measurement, such as blood pressure, body temperature, or pulse (or heart rate)
Diagnostic
Lab tests
Biopsy test
Blood test
Stool test
Urinalysis
Cardiac stress test
Electrocardiography
Electrocorticography
Electroencephalography
Electromyography
Electroneuronography
Electronystagmography
Electrooculography
Electroretinography
Endoluminal capsule monitoring
Endoscopy
Colonoscopy
Colposcopy
Cystoscopy
Gastroscopy
Laparoscopy
Laryngoscopy
Ophthalmoscopy
Otoscopy
Sigmoidoscopy
Esophageal motility study
Evoked potential
Magnetoencephalography
Medical imaging
Angiography
Aortography
Cerebral angiography
Coronary angiography
Lymphangiography
Pulmonary angiography
Ventriculography
Chest photofluorography
Computed tomography
Echocardiography
Electrical impedance tomography
Fluoroscopy
Magnetic resonance imaging
Diffuse optical imaging
Diffusion tensor imaging
Diffusion-weighted imaging
Functional magnetic resonance imaging
Positron emission tomography
Radiography
Scintillography
SPECT
Ultrasonography
Contrast-enhanced ultrasound
Gynecologic ultrasonography
Intravascular ultrasound
Obstetric ultrasonography
Thermography
Virtual colonoscopy
Neuroimaging
Posturography
Therapeutic
Thrombosis prophylaxis
Precordial thump
Politzerization
Hemodialysis
Hemofiltration
Plasmapheresis
Apheresis
Extracorporeal membrane oxygenation (ECMO)
Cancer immunotherapy
Cancer vaccine
Cervical conization
Chemotherapy
Cytoluminescent therapy
Insulin potentiation therapy
Low-dose chemotherapy
Monoclonal antibody therapy
Photodynamic therapy
Radiation therapy
Targeted therapy
Tracheal intubation
Unsealed source radiotherapy
Virtual reality therapy
Physical therapy/Physiotherapy
Speech therapy
Phototherapy
Hydrotherapy
Heat therapy
Shock therapy
Insulin shock therapy
Electroconvulsive therapy
Symptomatic treatment
Fluid replacement therapy
Palliative care
Hyperbaric oxygen therapy
Oxygen therapy
Gene therapy
Enzyme replacement therapy
Intravenous therapy
Phage therapy
Respiratory therapy
Vision therapy
Electrotherapy
Transcutaneous electrical nerve stimulation (TENS)
Laser therapy
Combination therapy
Occupational therapy
Immunization
Vaccination
Immunosuppressive therapy
Psychotherapy
Drug therapy
Acupuncture
Antivenom
Magnetic therapy
Craniosacral therapy
Chelation therapy
Hormonal therapy
Hormone replacement therapy
Opiate replacement therapy
Cell therapy
Stem cell treatments
Intubation
Nebulization
Inhalation therapy
Particle therapy
Proton therapy
Fluoride therapy
Cold compression therapy
Animal-assisted therapy
Negative-pressure wound therapy
Nicotine replacement therapy
Oral rehydration therapy
Surgical
Ablation
Amputation
Biopsy
Cardiopulmonary resuscitation (CPR)
Cryosurgery
Endoscopic surgery
Facial rejuvenation
General surgery
Hand surgery
Hemilaminectomy
Image-guided surgery
Knee cartilage replacement therapy
Laminectomy
Laparoscopic surgery
Lithotomy
Lithotriptor
Lobotomy
Neovaginoplasty
Radiosurgery
Stereotactic surgery
Vaginoplasty
Xenotransplantation
Anesthesia
Dissociative anesthesia
General anesthesia
Local anesthesia
Topical anesthesia (surface)
Epidural (extradural) block
Spinal anesthesia (subarachnoid block)
Regional anesthesia
Other
Interventional radiology
Screening (medicine)
See also
Algorithm (medical)
Autopsy
Complication (medicine)
Consensus (medical)
Contraindication
Course (medicine)
Drug interaction
Extracorporeal
Guideline (medical)
Iatrogenesis
Invasive (medical)
List of surgical instruments
Medical error
Medical prescription
Medical test
Minimally invasive
Nocebo
Non-invasive
Physical examination
Responsible drug use
Surgical instruments
Vital signs
References
Medical terminology
Medical treatments | 0.786424 | 0.983524 | 0.773467 |
Encephalitis | Encephalitis is inflammation of the brain. The severity can be variable with symptoms including reduction or alteration in consciousness, headache, fever, confusion, a stiff neck, and vomiting. Complications may include seizures, hallucinations, trouble speaking, memory problems, and problems with hearing.
Causes of encephalitis include viruses such as herpes simplex virus and rabies virus as well as bacteria, fungi, or parasites. Other causes include autoimmune diseases and certain medications. In many cases the cause remains unknown. Risk factors include a weak immune system. Diagnosis is typically based on symptoms and supported by blood tests, medical imaging, and analysis of cerebrospinal fluid.
Certain types are preventable with vaccines. Treatment may include antiviral medications (such as acyclovir), anticonvulsants, and corticosteroids. Treatment generally takes place in hospital. Some people require artificial respiration. Once the immediate problem is under control, rehabilitation may be required. In 2015, encephalitis was estimated to have affected 4.3 million people and resulted in 150,000 deaths worldwide.
Signs and symptoms
Adults with encephalitis present with acute onset of fever, headache, confusion, and sometimes seizures. Younger children or infants may present with irritability, poor appetite and fever. Neurological examinations usually reveal a drowsy or confused person. Stiff neck, due to the irritation of the meninges covering the brain, indicates that the patient has either meningitis or meningoencephalitis.
Limbic encephalitis
Limbic encephalitis refers to inflammatory disease confined to the limbic system of the brain. The clinical presentation often includes disorientation, disinhibition, memory loss, seizures, and behavioral anomalies. MRI imaging reveals T2 hyperintensity in the structures of the medial temporal lobes, and in some cases, other limbic structures. Some cases of limbic encephalitis are of autoimmune origin.
Encephalitis lethargica
Encephalitis lethargica is identified by high fever, headache, delayed physical response, and lethargy. Individuals can exhibit upper body weakness, muscular pains, and tremors, though the cause of encephalitis lethargica is not currently known. From 1917 to 1928, an epidemic of encephalitis lethargica occurred worldwide.
Cause
In 30%-40% of encephalitis cases, the etiology remains unknown.
Viral
Viral infections are the usual cause of infectious encephalitis. Viral encephalitis can occur either as a direct effect of an acute infection, or as one of the sequelae of a latent infection. The majority of viral cases of encephalitis have an unknown cause; however, the most common identifiable cause of viral encephalitis is from herpes simplex infection. Other causes of acute viral encephalitis are rabies virus, poliovirus, and measles virus.
Additional possible viral causes are arboviral flavivirus (St. Louis encephalitis, West Nile virus), bunyavirus (La Crosse strain), arenavirus (lymphocytic choriomeningitis virus), reovirus (Colorado tick fever virus), and henipavirus infections. The Powassan virus is a rare cause of encephalitis.
Bacterial
It can be caused by a bacterial infection, such as bacterial meningitis, or may be a complication of a current infectious disease such as syphilis (secondary encephalitis).
Other bacterial pathogens, like Mycoplasma and those causing rickettsial disease, cause inflammation of the meninges and consequently encephalitis. Lyme disease or Bartonella henselae may also cause encephalitis.
Other infectious causes
Certain parasitic or protozoal infestations, such as toxoplasmosis and malaria can also cause encephalitis in people with compromised immune systems.
The rare but typically deadly forms of encephalitis, primary amoebic meningoencephalitis and granulomatous amoebic encephalitis, are caused by free-living amoebae.
Autoimmune encephalitis
Autoimmune encephalitis signs can include catatonia, psychosis, abnormal movements, and autonomic dysregulation. Antibody-mediated anti-N-methyl-D-aspartate-receptor encephalitis and Rasmussen encephalitis are examples of autoimmune encephalitis.
Anti-NMDA receptor encephalitis is the most common autoimmune form, and is accompanied by ovarian teratoma in 58 percent of affected women 18–45 years of age.
Another autoimmune cause includes acute disseminated encephalitis, a demyelinating disease which primarily affects children.
Diagnosis
People should only be diagnosed with encephalitis if they have a decreased or altered level of consciousness, lethargy, or personality change for at least twenty-four hours without any other explainable cause. Diagnosing encephalitis is done via a variety of tests:
Brain scan, done by MRI, can show inflammation and differentiate encephalitis from other possible causes.
EEG, which monitors brain activity; encephalitis produces abnormal signals.
Lumbar puncture (spinal tap), in which cerebrospinal fluid obtained from the lumbar region is tested to help determine the cause.
Blood test
Urine analysis
Polymerase chain reaction (PCR) testing of the cerebrospinal fluid, to detect the presence of viral DNA which is a sign of viral encephalitis.
Prevention
Vaccination is available against tick-borne and Japanese encephalitis and should be considered for at-risk individuals. Post-infectious encephalomyelitis complicating smallpox vaccination is avoidable, for all intents and purposes, as smallpox has been eradicated. Contraindication to pertussis immunization should be observed in patients with encephalitis.
Treatment
An ideal drug to treat brain infection should be a small molecule, moderately lipophilic at a pH of 7.4, with a low level of plasma protein binding and a volume of distribution of around one litre per kg, and should not have a strong affinity for P-glycoprotein or other efflux pumps on the surface of the blood–brain barrier. Some drugs, such as isoniazid, pyrazinamide, linezolid, metronidazole, fluconazole, and some fluoroquinolones, have good penetration of the blood–brain barrier. Treatment (which is based on supportive care) is as follows:
Pyrimethamine-based maintenance therapy is often used to treat toxoplasmic encephalitis (TE), which is caused by Toxoplasma gondii and can be life-threatening for people with weak immune systems. The use of highly active antiretroviral therapy (HAART), in conjunction with the established pyrimethamine-based maintenance therapy, decreases the chance of relapse in patients with HIV and TE from approximately 18% to 11%. This is a significant difference as relapse may impact the severity and prognosis of disease and result in an increase in healthcare expenditure.
The effectiveness of intravenous immunoglobulin for the management of childhood encephalitis is unclear. Systematic reviews have been unable to draw firm conclusions because of a lack of randomised double-blind studies with sufficient numbers of patients and sufficient follow-up. There is a possible benefit of intravenous immunoglobulin for some forms of childhood encephalitis on some indicators, such as length of hospital stay, time to stop spasms, time to regain consciousness, and time to resolution of neuropathic symptoms and fever. Intravenous immunoglobulin for Japanese encephalitis appeared to have no benefit when compared with placebo treatment.
Prognosis
Identified poor prognostic factors include cerebral edema, status epilepticus, and thrombocytopenia. In contrast, a normal electroencephalogram (EEG) at the early stages of diagnosis is associated with high rates of survival.
Epidemiology
The number of new cases a year of acute encephalitis in Western countries is 7.4 cases per 100,000 people per year. In tropical countries, the incidence is 6.34 per 100,000 people per year. The number of cases of encephalitis has not changed much over time, with about 250,000 cases a year from 2005 to 2015 in the US. Approximately seven per 100,000 people were hospitalized for encephalitis in the US during this time. In 2015, encephalitis was estimated to have affected 4.3 million people and resulted in 150,000 deaths worldwide. Herpes simplex encephalitis has an incidence of 2–4 per million of the population per year.
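To illustrate how annual incidence figures like those above translate into expected case counts, here is a minimal Python sketch; the population size used in the example is hypothetical and is not a figure from the text.

<syntaxhighlight lang="python">
def expected_cases(incidence_per_100k: float, population: int) -> float:
    """Expected number of new cases per year, given an annual incidence
    expressed as cases per 100,000 people and a population size."""
    return incidence_per_100k / 100_000 * population


if __name__ == "__main__":
    # Incidence of acute encephalitis in Western countries quoted above:
    # 7.4 cases per 100,000 people per year, applied to a hypothetical
    # population of 10 million.
    print(expected_cases(7.4, 10_000_000))  # 740.0 expected cases per year
</syntaxhighlight>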
Terminology
Encephalitis with meningitis is known as meningoencephalitis, while encephalitis with involvement of the spinal cord is known as encephalomyelitis.
The word is from Ancient Greek ἐγκέφαλος (enképhalos), 'brain', composed of ἐν (en), 'in', and κεφαλή (kephalḗ), 'head', and the medical suffix -itis, 'inflammation'.
See also
References
Further reading
External links
WHO: Viral Encephalitis
Infectious diseases
Inflammations
Acute pain
Wikipedia medicine articles ready to translate | 0.773929 | 0.999376 | 0.773446 |
Trauma triad of death | The trauma triad of death is a medical term describing the combination of hypothermia, acidosis, and coagulopathy. This combination is commonly seen in patients who have sustained severe traumatic injuries and results in a significant rise in the mortality rate. Commonly, when someone presents with these signs, damage control surgery is employed to reverse the effects.
The three conditions share a complex relationship; each factor can compound the others, resulting in high mortality if this positive feedback loop continues uninterrupted.
Severe bleeding in trauma diminishes oxygen delivery, and may lead to hypothermia. This in turn can halt the coagulation cascade, preventing blood from clotting. In the absence of blood-bound oxygen and nutrients (hypoperfusion), the body's cells burn glucose anaerobically for energy, causing the release of lactic acid, ketone bodies, and other acidic compounds into the blood stream, which lower the blood's pH, leading to metabolic acidosis. Such an increase in acidity damages the tissues and organs of the body and can reduce myocardial performance, further reducing the oxygen delivery.
References
External links
Blood disorders
Traumatology
Medical emergencies
Trauma surgery
Medical triads | 0.780483 | 0.990931 | 0.773405 |
Body fluid | Body fluids, bodily fluids, or biofluids, sometimes body liquids, are liquids within the body of an organism. In lean healthy adult men, the total body water is about 60% (60–67%) of the total body weight; it is usually slightly lower in women (52–55%). The exact percentage of fluid relative to body weight is inversely proportional to the percentage of body fat. A lean man, for example, has about 42 (42–47) liters of water in his body.
The total body water is divided into fluid compartments, between the intracellular fluid compartment (also called space, or volume) and the extracellular fluid (ECF) compartment (space, volume) in a two-to-one ratio: 28 (28–32) liters are inside cells and 14 (14–15) liters are outside cells.
The ECF compartment is divided into the interstitial fluid volume – the fluid outside both the cells and the blood vessels – and the intravascular volume (also called the vascular volume and blood plasma volume) – the fluid inside the blood vessels – in a three-to-one ratio: the interstitial fluid volume is about 12 liters; the vascular volume is about 4 liters.
The interstitial fluid compartment is divided into the lymphatic fluid compartment – about 2/3, or 8 (6–10) liters, and the transcellular fluid compartment (the remaining 1/3, or about 4 liters).
The vascular volume is divided into the venous volume and the arterial volume; and the arterial volume has a conceptually useful but unmeasurable subcompartment called the effective arterial blood volume.
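Because the compartments above are described as fixed fractions of total body water, their approximate volumes can be derived from body weight with simple arithmetic. The following minimal Python sketch applies the rough ratios quoted in the text (about 60% of body weight as water, a 2:1 intracellular-to-extracellular split, roughly 3:1 interstitial-to-plasma, and roughly 2:1 lymphatic-to-transcellular); the function and its defaults are illustrative only, the litre figures quoted in the text are rounded slightly differently, and real values vary with sex, age, and body composition.

<syntaxhighlight lang="python">
def fluid_compartments(body_weight_kg: float, water_fraction: float = 0.60) -> dict:
    """Approximate body-fluid compartment volumes (in litres) from body weight,
    using the rough ratios described in the text. Illustrative only."""
    total = body_weight_kg * water_fraction   # total body water
    intracellular = total * 2 / 3             # about two-thirds inside cells
    extracellular = total * 1 / 3             # about one-third outside cells
    interstitial = extracellular * 3 / 4      # roughly 3:1 interstitial : plasma
    plasma = extracellular * 1 / 4
    lymphatic = interstitial * 2 / 3          # roughly 2:1 lymph : transcellular
    transcellular = interstitial * 1 / 3
    return {
        "total body water": total,
        "intracellular": intracellular,
        "extracellular": extracellular,
        "interstitial": interstitial,
        "plasma": plasma,
        "lymphatic": lymphatic,
        "transcellular": transcellular,
    }


if __name__ == "__main__":
    # A 70 kg lean adult man: roughly 42 L total, 28 L intracellular, 14 L extracellular.
    for name, litres in fluid_compartments(70).items():
        print(f"{name}: {litres:.1f} L")
</syntaxhighlight>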
Compartments by location
intracellular fluid (ICF), which consist of cytosol and fluids in the cell nucleus
Extracellular fluid
Intravascular fluid (blood plasma)
Interstitial fluid
Lymphatic fluid (sometimes included in interstitial fluid)
Transcellular fluid
Health
Clinical samples
Clinical samples are generally defined as non-infectious human or animal materials including blood, saliva, excreta, body tissue and tissue fluids, and also FDA-approved pharmaceuticals that are blood products. In medical contexts, it is a specimen taken for diagnostic examination or evaluation, and for identification of disease or condition.
See also
Basic reproduction number
Blood-borne diseases
Clinical pathology
Humorism
Hygiene
Ritual cleanliness
References
Further reading
Paul Spinrad. (1999) The RE/Search Guide to Bodily Fluids. Juno Books.
John Bourke. (1891) Rites of All Nations. Washington, D.C.: W.H. Lowdermilk.
External links
Medical diagnosis
Medical terminology | 0.777025 | 0.995269 | 0.773349 |
Health care | Health care, or healthcare, is the improvement of health via the prevention, diagnosis, treatment, amelioration or cure of disease, illness, injury, and other physical and mental impairments in people. Health care is delivered by health professionals and allied health fields. Medicine, dentistry, pharmacy, midwifery, nursing, optometry, audiology, psychology, occupational therapy, physical therapy, athletic training, and other health professions all constitute health care. The term includes work done in providing primary care, secondary care, tertiary care, and public health.
Access to healthcare may vary across countries, communities, and individuals, influenced by social and economic conditions and health policies. Providing health care services means "the timely use of personal health services to achieve the best possible health outcomes". Factors to consider in terms of healthcare access include financial limitations (such as insurance coverage), geographical and logistical barriers (such as additional transportation costs and the ability to take paid time off work to use such services), sociocultural expectations, and personal limitations (lack of ability to communicate with health care providers, poor health literacy, low income). Limitations to health care services negatively affect the use of medical services, the efficacy of treatments, and overall outcomes (well-being, mortality rates).
Health systems are the organizations established to meet the health needs of targeted populations. According to the World Health Organization (WHO), a well-functioning healthcare system requires a financing mechanism, a well-trained and adequately paid workforce, reliable information on which to base decisions and policies, and well-maintained health facilities to deliver quality medicines and technologies.
An efficient healthcare system can contribute to a significant part of a country's economy, development, and industrialization. Health care is an important determinant in promoting the general physical and mental health and well-being of people around the world. An example of this was the worldwide eradication of smallpox in 1980, declared by the WHO, as the first disease in human history to be eliminated by deliberate healthcare interventions.
Delivery
The delivery of modern health care depends on groups of trained professionals and paraprofessionals coming together as interdisciplinary teams. This includes professionals in medicine, psychology, physiotherapy, nursing, dentistry, midwifery and allied health, along with many others such as public health practitioners, community health workers and assistive personnel, who systematically provide personal and population-based preventive, curative and rehabilitative care services.
While the definitions of the various types of health care vary depending on the different cultural, political, organizational, and disciplinary perspectives, there appears to be some consensus that primary care constitutes the first element of a continuing health care process and may also include the provision of secondary and tertiary levels of care. Health care can be defined as either public or private.
Primary care
Primary care refers to the work of health professionals who act as a first point of consultation for all patients within the health care system. The primary care model supports first-contact, accessible, continuous, comprehensive and coordinated person-focused care. Such a professional would usually be a primary care physician, such as a general practitioner or family physician. Another professional would be a licensed independent practitioner such as a physiotherapist, or a non-physician primary care provider such as a physician assistant or nurse practitioner. Depending on the locality and health system organization, the patient may see another health care professional first, such as a pharmacist or nurse. Depending on the nature of the health condition, patients may be referred for secondary or tertiary care.
Primary care is often used as the term for the health care services that play a role in the local community. It can be provided in different settings, such as urgent care centers that provide same-day appointments or services on a walk-in basis.
Primary care involves the widest scope of health care, including all ages of patients, patients of all socioeconomic and geographic origins, patients seeking to maintain optimal health, and patients with all types of acute and chronic physical, mental and social health issues, including multiple chronic diseases. Consequently, a primary care practitioner must possess a wide breadth of knowledge in many areas. Continuity is a key characteristic of primary care, as patients usually prefer to consult the same practitioner for routine check-ups and preventive care, health education, and every time they require an initial consultation about a new health problem. The International Classification of Primary Care (ICPC) is a standardized tool for understanding and analyzing information on interventions in primary care based on the reason for the patient's visit.
Common chronic illnesses usually treated in primary care may include, for example, hypertension, diabetes, asthma, COPD, depression and anxiety, back pain, arthritis or thyroid dysfunction. Primary care also includes many basic maternal and child health care services, such as family planning services and vaccinations. In the United States, the 2013 National Health Interview Survey found that skin disorders (42.7%), osteoarthritis and joint disorders (33.6%), back problems (23.9%), disorders of lipid metabolism (22.4%), and upper respiratory tract disease (22.1%, excluding asthma) were the most common reasons for accessing a physician.
In the United States, primary care physicians have begun to deliver primary care outside of the managed care (insurance-billing) system through direct primary care which is a subset of the more familiar concierge medicine. Physicians in this model bill patients directly for services, either on a pre-paid monthly, quarterly, or annual basis, or bill for each service in the office. Examples of direct primary care practices include Foundation Health in Colorado and Qliance in Washington.
In the context of global population aging, with increasing numbers of older adults at greater risk of chronic non-communicable diseases, rapidly increasing demand for primary care services is expected in both developed and developing countries. The World Health Organization attributes the provision of essential primary care as an integral component of an inclusive primary health care strategy.
Secondary care
Secondary care includes acute care: necessary treatment for a short period of time for a brief but serious illness, injury, or other health condition. This care is often found in a hospital emergency department. Secondary care also includes skilled attendance during childbirth, intensive care, and medical imaging services.
The term "secondary care" is sometimes used synonymously with "hospital care". However, many secondary care providers, such as psychiatrists, clinical psychologists, occupational therapists, most dental specialties or physiotherapists, do not necessarily work in hospitals. Some primary care services are delivered within hospitals. Depending on the organization and policies of the national health system, patients may be required to see a primary care provider for a referral before they can access secondary care.
In countries that operate under a mixed market health care system, some physicians limit their practice to secondary care by requiring patients to see a primary care provider first. This restriction may be imposed under the terms of the payment agreements in private or group health insurance plans. In other cases, medical specialists may see patients without a referral, and patients may decide whether self-referral is preferred.
In other countries patient self-referral to a medical specialist for secondary care is rare as prior referral from another physician (either a primary care physician or another specialist) is considered necessary, regardless of whether the funding is from private insurance schemes or national health insurance.
Allied health professionals, such as physical therapists, respiratory therapists, occupational therapists, speech therapists, and dietitians, also generally work in secondary care, accessed through either patient self-referral or through physician referral.
Tertiary care
Tertiary care is specialized consultative health care, usually for inpatients and on referral from a primary or secondary health professional, in a facility that has personnel and facilities for advanced medical investigation and treatment, such as a tertiary referral hospital.
Examples of tertiary care services are cancer management, neurosurgery, cardiac surgery, plastic surgery, treatment for severe burns, advanced neonatology services, palliative, and other complex medical and surgical interventions.
Quaternary care
The term quaternary care is sometimes used as an extension of tertiary care in reference to advanced levels of medicine which are highly specialized and not widely accessed. Experimental medicine and some types of uncommon diagnostic or surgical procedures are considered quaternary care. These services are usually only offered in a limited number of regional or national health care centers.
Home and community care
Many types of health care interventions are delivered outside of health facilities. They include many interventions of public health interest, such as food safety surveillance, distribution of condoms and needle-exchange programs for the prevention of transmissible diseases.
They also include the services of professionals in residential and community settings in support of self-care, home care, long-term care, assisted living, treatment for substance use disorders among other types of health and social care services.
Community rehabilitation services can assist with mobility and independence after the loss of limbs or loss of function. This can include prostheses, orthotics, or wheelchairs.
Many countries are dealing with aging populations, so one of the priorities of the health care system is to help seniors live full, independent lives in the comfort of their own homes. There is an entire section of health care geared to providing seniors with help in day-to-day activities at home, such as transportation to and from doctor's appointments, along with many other activities that are essential for their health and well-being. Although family members and care workers cooperate in providing home care for older adults, they may harbor diverging attitudes and values towards their joint efforts. This state of affairs presents a challenge for the design of ICT (information and communication technology) for home care.
Because statistics show that over 80 million Americans have taken time off from their primary employment to care for a loved one, many countries have begun offering programs such as the Consumer Directed Personal Assistant Program to allow family members to take care of their loved ones without giving up their entire income.
With obesity in children rapidly becoming a major concern, health services often set up programs in schools aimed at educating children about nutritional eating habits, making physical education a requirement and teaching young adolescents to have a positive self-image.
Ratings
Health care ratings are ratings or evaluations of health care used to evaluate the process of care and health care structures and/or outcomes of health care services. This information is translated into report cards that are generated by quality organizations, nonprofit, consumer groups and media. This evaluation of quality is based on measures of:
health plan quality
hospital quality
patient experience
physician quality
quality for other health professionals
Access to health care
Access to healthcare may vary across countries, communities, and individuals, influenced by social and economic conditions as well as health policies. Providing health care services means "the timely use of personal health services to achieve the best possible health outcomes". Factors to consider in terms of healthcare access include financial limitations (such as insurance coverage), geographical and logistical barriers (such as additional transportation costs and the ability to take paid time off work to use such services), sociocultural expectations, and personal limitations (lack of ability to communicate with health care providers, poor health literacy, low income). Limitations to health care services negatively affect the use of medical services, the efficacy of treatments, and overall outcomes (well-being, mortality rates).
Related sectors
Health care extends beyond the delivery of services to patients, encompassing many related sectors, and is set within a bigger picture of financing and governance structures.
Health system
A health system, also sometimes referred to as health care system or healthcare system, is the organization of people, institutions, and resources that deliver health care services to populations in need.
Industry
The healthcare industry incorporates several sectors that are dedicated to providing health care services and products. As a basic framework for defining the sector, the United Nations' International Standard Industrial Classification categorizes health care as generally consisting of hospital activities, medical and dental practice activities, and "other human health activities." The last class involves activities of, or under the supervision of, nurses, midwives, physiotherapists, scientific or diagnostic laboratories, pathology clinics, residential health facilities, patient advocates or other allied health professions.
In addition, according to industry and market classifications, such as the Global Industry Classification Standard and the Industry Classification Benchmark, health care includes many categories of medical equipment, instruments and services including biotechnology, diagnostic laboratories and substances, drug manufacturing and delivery.
For example, pharmaceuticals and other medical devices are the leading high technology exports of Europe and the United States. The United States dominates the biopharmaceutical field, accounting for three-quarters of the world's biotechnology revenues.
Research
The quantity and quality of many health care interventions are improved through the results of science, as advanced through the medical model of health, which focuses on the eradication of illness through diagnosis and effective treatment. Many important advances have been made through health research, biomedical research and pharmaceutical research, which form the basis for evidence-based medicine and evidence-based practice in health care delivery. Health care research frequently engages directly with patients, and as such, questions of whom to engage and how to engage with them become important to consider when seeking to actively include them in studies. While no single best practice exists, the results of a systematic review on patient engagement suggest that research methods for patient selection need to account for both patient availability and willingness to engage.
Health services research can lead to greater efficiency and equitable delivery of health care interventions, as advanced through the social model of health and disability, which emphasizes the societal changes that can be made to make populations healthier. Results from health services research often form the basis of evidence-based policy in health care systems. Health services research is also aided by initiatives in the field of artificial intelligence for the development of systems of health assessment that are clinically useful, timely, sensitive to change, culturally sensitive, low-burden, low-cost, built into standard procedures, and involve the patient.
Financing
There are generally five primary methods of funding health care systems:
General taxation to the state, county or municipality
Social health insurance
Voluntary or private health insurance
Out-of-pocket payments
Donations to health charities
In most countries, there is a mix of all five models, but this varies across countries and over time within countries. Aside from financing mechanisms, an important question should always be how much to spend on health care. For the purposes of comparison, this is often expressed as the percentage of GDP spent on health care. In OECD countries, for every extra $1,000 spent on health care, life expectancy falls by 0.4 years. A similar correlation is seen in the analysis carried out each year by Bloomberg. Clearly this kind of analysis is flawed in that life expectancy is only one measure of a health system's performance, but equally, the notion that more funding is better is not supported.
In 2011, the health care industry consumed an average of 9.3 percent of GDP, or US$3,322 (PPP-adjusted) per capita, across the 34 OECD member countries. The US (17.7%, or US$ PPP 8,508), the Netherlands (11.9%, 5,099), France (11.6%, 4,118), Germany (11.3%, 4,495), Canada (11.2%, 5,669), and Switzerland (11%, 5,634) were the top spenders; however, life expectancy of the total population at birth was highest in Switzerland (82.8 years), Japan and Italy (82.7), Spain and Iceland (82.4), France (82.2) and Australia (82.0), while the OECD average exceeded 80 years for the first time in 2011: 80.1 years, a gain of 10 years since 1970. The US (78.7 years) ranks only 26th among the 34 OECD member countries, but has by far the highest costs. All OECD countries have achieved universal (or almost universal) health coverage, except the US and Mexico. (See also international comparisons.)
In the United States, where around 18% of GDP is spent on health care, the Commonwealth Fund analysis of spend and quality shows a clear correlation between worse quality and higher spending.
OECD figures break health spending into two components: "government/compulsory" spending (government spending and compulsory health insurance) and "voluntary" spending (voluntary health insurance and private funds such as households' out-of-pocket payments, NGOs and private corporations); the two components together make up a country's total health expenditure.
Administration and regulation
The management and administration of health care is vital to the delivery of health care services. In particular, the practice of health professionals and the operation of health care institutions is typically regulated by national or state/provincial authorities through appropriate regulatory bodies for purposes of quality assurance. Most countries have credentialing staff in regulatory boards or health departments who document the certification or licensing of health workers and their work history.
Health information technology
Health information technology (HIT) is "the application of information processing involving both computer hardware and software that deals with the storage, retrieval, sharing, and use of health care information, data, and knowledge for communication and decision making."
Health information technology components:
Electronic health record (EHR) – An EHR contains a patient's comprehensive medical history, and may include records from multiple providers.
Electronic Medical Record (EMR) – An EMR contains the standard medical and clinical data gathered in one's provider's office.
Health information exchange (HIE) – Health Information Exchange allows health care professionals and patients to appropriately access and securely share a patient's vital medical information electronically.
Medical practice management software (MPM) – software designed to streamline the day-to-day tasks of operating a medical facility; also known as practice management software or a practice management system (PMS).
Personal health record (PHR) – A PHR is a patient's medical history that is maintained privately, for personal use.
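To make the distinction between these record types more concrete, here is a minimal, hypothetical Python sketch of an electronic health record aggregating encounters from multiple providers. The class names and fields are invented for illustration and do not correspond to any real EHR product; production systems typically follow interoperability standards such as HL7 FHIR.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field
from typing import List


@dataclass
class Encounter:
    """A single visit or interaction recorded by one provider."""
    date: str        # ISO date string, e.g. "2023-05-14"
    provider: str    # treating clinician or facility
    diagnosis: str
    notes: str = ""


@dataclass
class PatientRecord:
    """A toy electronic health record: unlike an EMR kept by a single
    practice, it aggregates encounters from multiple providers."""
    patient_id: str
    name: str
    encounters: List[Encounter] = field(default_factory=list)

    def add_encounter(self, encounter: Encounter) -> None:
        self.encounters.append(encounter)


if __name__ == "__main__":
    record = PatientRecord(patient_id="P0001", name="Jane Doe")
    record.add_encounter(Encounter("2023-05-14", "Dr. Smith (primary care)", "hypertension"))
    record.add_encounter(Encounter("2023-06-02", "City Hospital cardiology", "atrial septal defect"))
    print(len(record.encounters), "encounters on file for", record.name)
</syntaxhighlight>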
See also
:Category:Health care by country
Global health
Health equity
Health policy
Healthcare system / Health professionals
Tobacco control laws
Universal health care
References
External links
Primary care
Public services
Health
Public health
Universal health care
Health economics
Health sciences | 0.774133 | 0.998923 | 0.773299 |
Biotic | Biotics describe living or once living components of a community; for example organisms, such as animals and plants.
Biotic may refer to:
Life, the condition of living organisms
Biology, the study of life
Biotic material, which is derived from living organisms
Biotic components in ecology
Biotic potential, an organism's reproductive capacity
Biotic community, all the interacting organisms living together in a specific habitat
Biotic energy, a vital force theorized by biochemist Benjamin Moore
Biotic Baking Brigade, an unofficial group of pie-throwing activists
See also
Abiotic
Antibiotics are agents that either kill bacteria or inhibit their growth
Prebiotics are non-digestible food ingredients that stimulate the growth or activity of bacteria in the digestive system
Probiotics consist of a live culture of bacteria that inhibit or interfere with colonization by microbial pathogens
Synbiotics refer to nutritional supplements combining probiotics and prebiotics | 0.78837 | 0.980528 | 0.773018 |
Respiration (physiology) | In physiology, respiration is the movement of oxygen from the outside environment to the cells within tissues, and the removal of carbon dioxide in the opposite direction to the surrounding environment.
The physiological definition of respiration differs from the biochemical definition, which refers to a metabolic process by which an organism obtains energy (in the form of ATP and NADPH) by oxidizing nutrients and releasing waste products. Although physiologic respiration is necessary to sustain cellular respiration and thus life in animals, the processes are distinct: cellular respiration takes place in individual cells of the organism, while physiologic respiration concerns the diffusion and transport of metabolites between the organism and the external environment.
Exchange of gases in the lung occurs by ventilation and perfusion. Ventilation refers to the in-and-out movement of air of the lungs and perfusion is the circulation of blood in the pulmonary capillaries. In mammals, physiological respiration involves respiratory cycles of inhaled and exhaled breaths. Inhalation (breathing in) is usually an active movement that brings air into the lungs where the process of gas exchange takes place between the air in the alveoli and the blood in the pulmonary capillaries. Contraction of the diaphragm muscle causes a pressure variation, which is equal to the pressures caused by elastic, resistive and inertial components of the respiratory system. In contrast, exhalation (breathing out) is usually a passive process, though there are many exceptions: when generating functional overpressure (speaking, singing, humming, laughing, blowing, snorting, sneezing, coughing, powerlifting); when exhaling underwater (swimming, diving); at high levels of physiological exertion (running, climbing, throwing) where more rapid gas exchange is necessitated; or in some forms of breath-controlled meditation. Speaking and singing in humans requires sustained breath control that many mammals are not capable of performing.
The process of breathing does not fill the alveoli with atmospheric air during each inhalation (about 350 ml per breath); the inhaled air is instead diluted and thoroughly mixed with a large volume of gas (about 2.5 liters in adult humans) known as the functional residual capacity, which remains in the lungs after each exhalation and whose gaseous composition differs markedly from that of the ambient air. Physiological respiration involves the mechanisms that ensure that the composition of the functional residual capacity is kept constant and equilibrates with the gases dissolved in the pulmonary capillary blood, and thus throughout the body. Thus, in precise usage, the words breathing and ventilation are hyponyms, not synonyms, of respiration; but this prescription is not consistently followed, even by most health care providers, because the term respiratory rate (RR) is a well-established term in health care, even though it would need to be consistently replaced with ventilation rate if the precise usage were to be followed. During respiration, C–H bonds are broken by oxidation–reduction reactions, and carbon dioxide and water are produced; this cellular energy-yielding process is called cellular respiration.
Classifications of respiration
There are several ways to classify the physiology of respiration:
By species
Aquatic respiration
Buccal pumping
Cutaneous respiration
Intestinal respiration
Respiratory system
By mechanism
Breathing
Gas exchange
Arterial blood gas
Control of respiration
Apnea
By experiments
Huff and puff apparatus
Spirometry
Selected ion flow tube mass spectrometry
By intensive care and emergency medicine
CPR
Mechanical ventilation
Intubation
Iron lung
Intensive care medicine
Liquid breathing
ECMO
Oxygen toxicity
Medical ventilator
Life support
General anaesthesia
Laryngoscope
By other medical topics
Respiratory therapy
Breathing gases
Hyperbaric oxygen therapy
Hypoxia
Gas embolism
Decompression sickness
Barotrauma
Oxygen equivalent
Oxygen toxicity
Nitrogen narcosis
Carbon dioxide poisoning
Carbon monoxide poisoning
HPNS
See also
References
Nelsons VCE Units 1–2 Physical Education. Cengage, 2010.
External links
Overview at Johns Hopkins University
Further reading
C. Michael Hogan. 2011. Respiration. Encyclopedia of Earth. Eds. Mark McGinley and C. J. Cleveland. National Council for Science and the Environment, Washington, DC.
Botulism
Botulism is a rare and potentially fatal illness caused by botulinum toxin, which is produced by the bacterium Clostridium botulinum. The disease begins with weakness, blurred vision, feeling tired, and trouble speaking. This may then be followed by weakness of the arms, chest muscles, and legs. Vomiting, swelling of the abdomen, and diarrhea may also occur. The disease does not usually affect consciousness or cause a fever.
Botulism can occur in several ways. The bacterial spores which cause it are common in both soil and water and are very resistant. They produce the botulinum toxin when exposed to low oxygen levels and certain temperatures. Foodborne botulism happens when food containing the toxin is eaten. Infant botulism instead happens when the bacterium develops in the intestines and releases the toxin. This typically only occurs in children less than one year old, as protective mechanisms against development of the bacterium develop after that age. Wound botulism is found most often among those who inject street drugs. In this situation, spores enter a wound, and in the absence of oxygen, release the toxin. The disease is not passed directly between people. Its diagnosis is confirmed by finding the toxin or bacteria in the person in question.
Prevention is primarily by proper food preparation. The toxin, though not the spores, is destroyed by heating to more than 85 °C (185 °F) for longer than five minutes. The clostridial spores can be destroyed in an autoclave with moist heat (120 °C / 250 °F for at least 15 minutes), with dry heat (160 °C for 2 hours), or by irradiation. The spores of group I strains are inactivated by heating at 121 °C (250 °F) for 3 minutes during commercial canning. Spores of group II strains are less heat-resistant and are often damaged by 90 °C for 10 minutes, 85 °C for 52 minutes, or 80 °C for 270 minutes; however, these treatments may not be sufficient in some foods. Honey can contain the organism, and for this reason honey should not be fed to children under 12 months. Treatment is with an antitoxin. In those who lose their ability to breathe on their own, mechanical ventilation may be necessary for months. Antibiotics may be used for wound botulism. Death occurs in 5 to 10% of people. Botulism also affects many other animals. The word is from the Latin botulus, meaning 'sausage'.
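As an illustration of how the group II time-temperature figures above relate to one another, the sketch below fits the standard log-linear thermal-death-time model to those pairs; the model and the derived z-value are editorial illustrations rather than data from the original text, and real process validation depends on the specific food.

```python
import math

# Time-temperature pairs for group II spores quoted in the text:
# 90 °C for 10 min, 85 °C for 52 min, 80 °C for 270 min.
pairs = [(90.0, 10.0), (85.0, 52.0), (80.0, 270.0)]

# Log-linear thermal-death-time model: the required time t2 at temperature T2
# relates to t1 at T1 by t2 = t1 * 10 ** ((T1 - T2) / z).
temp_hot, time_hot = pairs[0]
temp_cold, time_cold = pairs[-1]
z_value = (temp_hot - temp_cold) / math.log10(time_cold / time_hot)
print(f"Implied z-value: {z_value:.1f} °C")  # about 7 °C for these figures

# Check: predicted time at 85 °C, to compare with the quoted 52 minutes.
predicted_85c = time_hot * 10 ** ((temp_hot - 85.0) / z_value)
print(f"Predicted time at 85 °C: {predicted_85c:.0f} min")
```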
Signs and symptoms
The muscle weakness of botulism characteristically starts in the muscles supplied by the cranial nerves—a group of twelve nerves that control eye movements, the facial muscles and the muscles controlling chewing and swallowing. Double vision, drooping of both eyelids, loss of facial expression and swallowing problems may therefore occur. In addition to affecting the voluntary muscles, it can also cause disruptions in the autonomic nervous system. This is experienced as a dry mouth and throat (due to decreased production of saliva), postural hypotension (decreased blood pressure on standing, with resultant lightheadedness and risk of blackouts), and eventually constipation (due to decreased forward movement of intestinal contents). Some of the toxins (B and E) also precipitate nausea, vomiting, and difficulty with talking. The weakness then spreads to the arms (starting in the shoulders and proceeding to the forearms) and legs (again from the thighs down to the feet).
Severe botulism leads to reduced movement of the muscles of respiration, and hence problems with gas exchange. This may be experienced as dyspnea (difficulty breathing), but when severe can lead to respiratory failure, due to the buildup of unexhaled carbon dioxide and its resultant depressant effect on the brain. This may lead to respiratory compromise and death if untreated.
Clinicians frequently think of the symptoms of botulism in terms of a classic triad: bulbar palsy and descending paralysis, lack of fever, and clear senses and mental status ("clear sensorium").
Infant botulism
Infant botulism (also referred to as floppy baby syndrome) was first recognized in 1976, and is the most common form of botulism in the United States. Infants are susceptible to infant botulism in the first year of life, with more than 90% of cases occurring in infants younger than six months. Infant botulism results from the ingestion of the C. botulinum spores, and subsequent colonization of the small intestine. The infant gut may be colonized when the composition of the intestinal microflora (normal flora) is insufficient to competitively inhibit the growth of C. botulinum and levels of bile acids (which normally inhibit clostridial growth) are lower than later in life.
The growth of the spores releases botulinum toxin, which is then absorbed into the bloodstream and taken throughout the body, causing paralysis by blocking the release of acetylcholine at the neuromuscular junction. Typical symptoms of infant botulism include constipation, lethargy, weakness, difficulty feeding, and an altered cry, often progressing to a complete descending flaccid paralysis. Although constipation is usually the first symptom of infant botulism, it is commonly overlooked.
Honey is a known dietary reservoir of C. botulinum spores and has been linked to infant botulism. For this reason, honey is not recommended for infants less than one year of age. Most cases of infant botulism, however, are thought to be caused by acquiring the spores from the natural environment. Clostridium botulinum is a ubiquitous soil-dwelling bacterium. Many infant botulism patients have been demonstrated to live near a construction site or an area of soil disturbance.
Infant botulism has been reported in 49 of 50 US states (all save for Rhode Island), and cases have been recognized in 26 countries on five continents.
Complications
Infant botulism has no long-term side effects.
Botulism can result in death due to respiratory failure. However, in the past 50 years, the proportion of patients with botulism who die has fallen from about 50% to 7% due to improved supportive care. A patient with severe botulism may require mechanical ventilation (breathing support through a ventilator) as well as intensive medical and nursing care, sometimes for several months. The person may require rehabilitation therapy after leaving the hospital.
Cause
Clostridium botulinum is an anaerobic, Gram-positive, spore-forming rod. Botulinum toxin is one of the most powerful known toxins: about one microgram is lethal to humans when inhaled. It acts by blocking nerve function (neuromuscular blockade) through inhibition of the excitatory neurotransmitter acetylcholine's release from the presynaptic membrane of neuromuscular junctions in the somatic nervous system. This causes paralysis. Advanced botulism can cause respiratory failure by paralysing the muscles of the chest; this can progress to respiratory arrest. Furthermore, acetylcholine release from the presynaptic membranes of muscarinic nerve synapses is blocked. This can lead to a variety of autonomic signs and symptoms described above.
In all cases, illness is caused by the botulinum toxin which the bacterium C. botulinum produces in anaerobic conditions and not by the bacterium itself. The pattern of damage occurs because the toxin affects nerves that fire (depolarize) at a higher frequency first.
Mechanisms of entry into the human body for botulinum toxin are described below.
Colonization of the gut
The most common form in Western countries is infant botulism. This occurs in infants who are colonized with the bacterium in the small intestine during the early stages of their lives. The bacterium then produces the toxin, which is absorbed into the bloodstream. The consumption of honey during the first year of life has been identified as a risk factor for infant botulism; it is a factor in a fifth of all cases. The adult form of infant botulism is termed adult intestinal toxemia, and is exceedingly rare.
Food
Toxin that is produced by the bacterium in containers of food that have been improperly preserved is the most common cause of food-borne botulism. Fish that has been pickled without the salinity or acidity of brine that contains acetic acid and high sodium levels, as well as smoked fish stored at too high a temperature, presents a risk, as does improperly canned food.
Food-borne botulism results from contaminated food in which C. botulinum spores have been allowed to germinate in low-oxygen conditions. This typically occurs in improperly prepared home-canned food substances and fermented dishes without adequate salt or acidity. Given that multiple people often consume food from the same source, it is common for more than a single person to be affected simultaneously. Symptoms usually appear 12–36 hours after eating, but can also appear within 6 hours to 10 days.
No withdrawal periods have been established for cows affected by botulism. Lactating cows injected with various doses of botulinum toxin C did not produce milk with detectable botulinum neurotoxin. Using mouse bioassays and immunostick ELISA tests, botulinum toxin was detected in whole blood and serum but not in milk samples, suggesting that botulinum type C toxin does not enter milk in detectable concentrations. Cooking and pasteurization denature botulinum toxin but do not necessarily eliminate spores. Botulinum spores or toxins can find their way into the dairy production chain from the environment. Despite the low risk of milk and meat contamination, the protocol for fatal bovine botulism cases appears to be incineration of carcasses and withholding any potentially contaminated milk from human consumption. It is also advised that raw milk from affected cows should not be consumed by humans or fed to calves.
There have been several reports of botulism from pruno, an illicit wine made from food scraps in prison. In a Mississippi prison in 2016, prisoners illegally brewed alcohol, leading to 31 cases of botulism. A study of these cases found that the symptoms of mild botulism matched those of severe botulism, though the outcomes and progression of the disease differed.
Wound
Wound botulism results from the contamination of a wound with the bacteria, which then secrete the toxin into the bloodstream. This has become more common in intravenous drug users since the 1990s, especially people using black tar heroin and those injecting heroin into the skin rather than the veins. Wound botulism can also arise from a minor wound that is not properly cleaned out: the skin grows over the wound, trapping the spores in an anaerobic environment and allowing botulism to develop. In one example, a person cut their ankle while using a weed eater; as the wound healed over, it trapped a blade of grass and a speck of soil under the skin, which led to severe botulism requiring hospitalization and months of rehabilitation. Wound botulism accounts for 29% of cases.
Inhalation
Isolated cases of botulism have been described after inhalation by laboratory workers.
Injection (iatrogenic botulism)
Symptoms of botulism may occur away from the injection site of botulinum toxin. These may include loss of strength, blurred vision, change of voice, or trouble breathing, which can result in death. Onset can be hours to weeks after an injection. This generally occurs only with inappropriate strengths of botulinum toxin for cosmetic use or with the larger doses used to treat movement disorders; however, there are cases in which off-label use of botulinum toxin resulted in severe botulism and death. Following a 2008 review, the FDA added these concerns as a boxed warning. NeverTox, an international grassroots effort, assembles people experiencing iatrogenic botulism poisoning (IBP) and provides education and emotional support; its Facebook group serves 39,000 people who report adverse events from botulinum toxin injections.
Lawsuits against pharmaceutical manufacturers
Prior to the boxed warning labels disclosing that botulinum toxin injections could cause botulism, there was a series of lawsuits against the pharmaceutical firms that manufactured injectable botulinum toxin. A Hollywood producer's wife brought a lawsuit after experiencing debilitating adverse events from migraine treatment. A lawsuit on behalf of a 3-year-old boy who was permanently disabled by a botulinum toxin injection was settled out of court during the trial. The family of a 7-year-old boy treated with botulinum toxin injections for leg spasms sued after the boy almost died. Several families of people who died after treatments with botulinum toxin injections brought lawsuits. One lawsuit prevailed for the plaintiff, who was awarded compensation of $18 million; the plaintiff was a physician who was diagnosed with botulism by thirteen neurologists at the NIH. Deposition video from that lawsuit quotes a pharmaceutical executive stating that "Botox doesn't cause botulism."
Mechanism
The toxin is the protein botulinum toxin produced under anaerobic conditions (where there is no oxygen) by the bacterium Clostridium botulinum.
Clostridium botulinum is a large anaerobic Gram-positive bacillus that forms subterminal endospores.
There are eight serological varieties of the bacterium denoted by the letters A to H. The toxin from all of these acts in the same way and produces similar symptoms: the motor nerve endings are prevented from releasing acetylcholine, causing flaccid paralysis and symptoms of blurred vision, ptosis, nausea, vomiting, diarrhea or constipation, cramps, and respiratory difficulty.
Botulinum toxin is broken into eight neurotoxins (labeled as types A, B, C [C1, C2], D, E, F, and G), which are antigenically and serologically distinct but structurally similar. Human botulism is caused mainly by types A, B, E, and (rarely) F. Types C and D cause toxicity only in other animals.
In October 2013, scientists released news of the discovery of type H, the first new botulism neurotoxin found in forty years. However, further studies showed type H to be a chimeric toxin composed of parts of types F and A (FA).
Some types produce a characteristic putrefactive smell and digest meat (types A and some of B and F); these are said to be proteolytic; type E and some types of B, C, D and F are nonproteolytic and can go undetected because there is no strong odor associated with them.
When the bacteria are under stress, they develop spores, which are inert. Their natural habitats are in the soil, in the silt that comprises the bottom sediment of streams, lakes, and coastal waters and ocean, while some types are natural inhabitants of the intestinal tracts of mammals (e.g., horses, cattle, humans), and are present in their excreta. The spores can survive in their inert form for many years.
Toxin is produced by the bacteria when environmental conditions are favourable for the spores to replicate and grow, but the gene that encodes for the toxin protein is actually carried by a virus or phage that infects the bacteria. Little is known about the natural factors that control phage infection and replication within the bacteria.
The spores require warm temperatures, a protein source, an anaerobic environment, and moisture in order to become active and produce toxin. In the wild, decomposing vegetation and invertebrates combined with warm temperatures can provide ideal conditions for the botulism bacteria to activate and produce toxin that may affect feeding birds and other animals. Spores are not killed by boiling, but botulism is uncommon because special, rarely obtained conditions are necessary for botulinum toxin production from C. botulinum spores, including an anaerobic, low-salt, low-acid, low-sugar environment at ambient temperatures.
Botulinum inhibits the release within the nervous system of acetylcholine, a neurotransmitter, responsible for communication between motor neurons and muscle cells. All forms of botulism lead to paralysis that typically starts with the muscles of the face and then spreads towards the limbs. In severe forms, botulism leads to paralysis of the breathing muscles and causes respiratory failure. In light of this life-threatening complication, all suspected cases of botulism are treated as medical emergencies, and public health officials are usually involved to identify the source and take steps to prevent further cases from occurring.
Botulinum toxin A and E specifically cleave the SNAP-25, whereas serotype B, D, F and G cut synaptobrevin. Serotype C cleaves both SNAP-25 and syntaxin. This causes blockade of neurotransmitter acetylcholine release, ultimately leading to paralysis.
Diagnosis
For botulism in babies, diagnosis should be made on signs and symptoms. Confirmation of the diagnosis is made by testing of a stool or enema specimen with the mouse bioassay.
In people whose history and physical examination suggest botulism, these clues are often not enough to allow a diagnosis. Other diseases such as Guillain–Barré syndrome, stroke, and myasthenia gravis can appear similar to botulism, and special tests may be needed to exclude these other conditions. These tests may include a brain scan, cerebrospinal fluid examination, nerve conduction test (electromyography, or EMG), and an edrophonium chloride (Tensilon) test for myasthenia gravis. A definite diagnosis can be made if botulinum toxin is identified in the food, stomach or intestinal contents, vomit or feces. The toxin is occasionally found in the blood in peracute cases. Botulinum toxin can be detected by a variety of techniques, including enzyme-linked immunosorbent assays (ELISAs), electrochemiluminescent (ECL) tests and mouse inoculation or feeding trials. The toxins can be typed with neutralization tests in mice. In toxicoinfectious botulism, the organism can be cultured from tissues. On egg yolk medium, toxin-producing colonies usually display surface iridescence that extends beyond the colony.
Prevention
Although the vegetative form of the bacteria is destroyed by boiling, the spore itself is not killed by the temperatures reached with normal sea-level-pressure boiling, leaving it free to grow and again produce the toxin when conditions are right.
A recommended prevention measure for infant botulism is to avoid giving honey to infants less than 12 months of age, as botulinum spores are often present. In older children and adults the normal intestinal bacteria suppress development of C. botulinum.
While commercially canned goods are required to undergo a "botulinum cook" in a pressure cooker at 121 °C (250 °F) for 3 minutes, and thus rarely cause botulism, there have been notable exceptions. Two were the 1978 Alaskan salmon outbreak and the 2007 Castleberry's Food Company outbreak. Foodborne botulism is the rarest form, accounting for only around 15% of cases (US), and has more frequently resulted from home-canned foods with low acid content, such as carrot juice, asparagus, green beans, beets, and corn. However, outbreaks of botulism have resulted from more unusual sources. In July 2002, fourteen Alaskans ate muktuk (whale meat) from a beached whale, and eight of them developed symptoms of botulism, two of them requiring mechanical ventilation.
Other, much rarer sources of infection (about every decade in the US) include garlic or herbs stored covered in oil without acidification, chili peppers, improperly handled baked potatoes wrapped in aluminum foil, tomatoes, and home-canned or fermented fish.
When canning or preserving food at home, attention should be paid to hygiene, pressure, temperature, refrigeration and storage. When making home preserves, only acidic fruit such as apples, pears, stone fruits and berries should be used. Tropical fruit and tomatoes are low in acidity and must have some acidity added before they are canned.
Low-acid foods have pH values higher than 4.6. They include red meats, seafood, poultry, milk, and all fresh vegetables except for most tomatoes. Most mixtures of low-acid and acid foods also have pH values above 4.6 unless their recipes include enough lemon juice, citric acid, or vinegar to make them acidic. Acid foods have a pH of 4.6 or lower. They include fruits, pickles, sauerkraut, jams, jellies, marmalades, and fruit butters.
Although tomatoes usually are considered an acid food, some are now known to have pH values slightly above 4.6. Figs also have pH values slightly above 4.6. Therefore, if they are to be canned as acid foods, these products must be acidified to a pH of 4.6 or lower with lemon juice or citric acid. Properly acidified tomatoes and figs are acid foods and can be safely processed in a boiling-water canner.
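The pH 4.6 boundary described above can be expressed as a simple decision rule. The sketch below is illustrative only; the example foods and pH values are assumptions for demonstration, and actual canning decisions should follow tested recipes and processing guidance.

```python
ACID_CUTOFF_PH = 4.6  # boundary between acid and low-acid foods (from the text)

def canning_route(ph: float) -> str:
    """Suggest a processing route based on the pH 4.6 rule described above."""
    if ph <= ACID_CUTOFF_PH:
        return "acid food: boiling-water canner is acceptable"
    return "low-acid food: pressure processing (botulinum cook) required"

# Illustrative, assumed pH values -- not measurements from the article.
for food, ph in [("pickles", 3.5), ("tomatoes (some varieties)", 4.7), ("green beans", 5.6)]:
    print(f"{food} (pH {ph}): {canning_route(ph)}")
```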
Oils infused with fresh garlic or herbs should be acidified and refrigerated. Potatoes which have been baked while wrapped in aluminum foil should be kept hot until served or refrigerated. Because the botulism toxin is destroyed by high temperatures, home-canned foods are best boiled for 10 minutes before eating. Metal cans containing food in which bacteria are growing may bulge outwards due to gas production from bacterial growth or the food inside may be foamy or have a bad odor; cans with any of these signs should be discarded.
Any container of food which has been heat-treated and then assumed to be airtight which shows signs of not being so, e.g., metal cans with pinprick holes from rust or mechanical damage, should be discarded. Contamination of a canned food solely with C. botulinum may not cause any visual defects to the container, such as bulging. Only assurance of sufficient thermal processing during production, and absence of a route for subsequent contamination, should be used as indicators of food safety.
The addition of nitrites and nitrates to processed meats such as ham, bacon, and sausages reduces growth and toxin production of C. botulinum.
Vaccine
Vaccines are under development, but they have disadvantages. As of 2017 work to develop a better vaccine was being carried out, but the US FDA had not approved any vaccine against botulism.
Treatment
Botulism is generally treated with botulism antitoxin and supportive care.
Supportive care for botulism includes monitoring of respiratory function. Respiratory failure due to paralysis may require mechanical ventilation for 2 to 8 weeks, plus intensive medical and nursing care. After this time, paralysis generally improves as new neuromuscular connections are formed.
In some abdominal cases, physicians may try to remove contaminated food still in the digestive tract by inducing vomiting or using enemas. Wounds should be treated, usually surgically, to remove the source of the toxin-producing bacteria.
Antitoxin
Botulinum antitoxin consists of antibodies that neutralize botulinum toxin in the circulatory system by passive immunization. This prevents additional toxin from binding to the neuromuscular junction, but does not reverse any already inflicted paralysis.
In adults, a trivalent antitoxin containing antibodies raised against botulinum toxin types A, B, and E is used most commonly; however, a heptavalent botulism antitoxin has also been developed and was approved by the U.S. FDA in 2013. In infants, horse-derived antitoxin is sometimes avoided for fear of infants developing serum sickness or lasting hypersensitivity to horse-derived proteins. To avoid this, a human-derived antitoxin has been developed and approved by the U.S. FDA in 2003 for the treatment of infant botulism. This human-derived antitoxin has been shown to be both safe and effective for the treatment of infant botulism. However, the danger of equine-derived antitoxin to infants has not been clearly established, and one study showed the equine-derived antitoxin to be both safe and effective for the treatment of infant botulism.
Trivalent (A,B,E) botulinum antitoxin is derived from equine sources utilizing whole antibodies (Fab and Fc portions). In the United States, this antitoxin is available from the local health department via the CDC. The second antitoxin, heptavalent (A,B,C,D,E,F,G) botulinum antitoxin, is derived from "despeciated" equine IgG antibodies which have had the Fc portion cleaved off leaving the F(ab')2 portions. This less immunogenic antitoxin is effective against all known strains of botulism where not contraindicated.
Prognosis
The paralysis caused by botulism can persist for two to eight weeks, during which supportive care and ventilation may be necessary to keep the patient alive. Botulism can be fatal in five to ten percent of people who are affected. However, if left untreated, botulism is fatal in 40 to 50 percent of cases.
Infant botulism typically has no long-term side effects but can be complicated by treatment-associated adverse events. The case fatality rate is less than two percent for hospitalized babies.
Epidemiology
Globally, botulism is fairly rare, with approximately 1,000 identified cases yearly.
United States
In the United States an average of 145 cases are reported each year. Of these, roughly 65% are infant botulism, 20% are wound botulism, and 15% are foodborne. Infant botulism is predominantly sporadic and not associated with epidemics, but great geographic variability exists. From 1974 to 1996, for example, 47% of all infant botulism cases reported in the U.S. occurred in California.
Between 1990 and 2000, the Centers for Disease Control and Prevention reported 263 individual foodborne cases from 160 botulism events in the United States with a case-fatality rate of 4%. Thirty-nine percent (103 cases and 58 events) occurred in Alaska, all of which were attributable to traditional Alaskan aboriginal foods. In the lower 49 states, home-canned food was implicated in 70 events (~69%) with canned asparagus being the most frequent cause. Two restaurant-associated outbreaks affected 25 people. The median number of cases per year was 23 (range 17–43), the median number of events per year was 14 (range 9–24). The highest incidence rates occurred in Alaska, Idaho, Washington, and Oregon. All other states had an incidence rate of 1 case per ten million people or less.
The number of cases of food borne and infant botulism has changed little in recent years, but wound botulism has increased because of the use of black tar heroin, especially in California.
All data regarding botulism antitoxin releases and laboratory confirmation of cases in the US are recorded annually by the Centers for Disease Control and Prevention and published on their website.
On 2 July 1971, the U.S. Food and Drug Administration (FDA) released a public warning after learning that a New York man had died and his wife had become seriously ill due to botulism after eating a can of Bon Vivant vichyssoise soup.
Between 31 March and 6 April 1977, 59 individuals developed type B botulism. All who fell ill had eaten at the same Mexican restaurant in Pontiac, Michigan, and had consumed a hot sauce made with improperly home-canned jalapeño peppers, either by adding it to their food, or by eating nachos that had been prepared with the hot sauce. The full clinical spectrum (mild symptomatology with neurologic findings through life-threatening ventilatory paralysis) of type B botulism was documented.
In April 1994, the largest outbreak of botulism in the United States since 1978 occurred in El Paso, Texas. Thirty people were affected; 4 required mechanical ventilation. All ate food from a Greek restaurant. The attack rate among people who ate a potato-based dip was 86% (19/22) compared with 6% (11/176) among people who did not eat the dip (relative risk [RR] = 13.8; 95% confidence interval [CI], 7.6–25.1). The attack rate among people who ate an eggplant-based dip was 67% (6/9) compared with 13% (24/189) among people who did not (RR = 5.2; 95% CI, 2.9–9.5). Botulism toxin type A was detected in patients and in both dips. Toxin formation resulted from holding aluminum foil-wrapped baked potatoes at room temperature, apparently for several days, before they were used in the dips. Food handlers should be informed of the potential hazards caused by holding foil-wrapped potatoes at ambient temperatures after cooking.
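For readers unfamiliar with the attack-rate arithmetic, the sketch below reproduces the relative-risk calculation for the potato-based dip from the counts quoted above (19 of 22 exposed people ill versus 11 of 176 unexposed); the log-scale Wald interval used here is an editorial choice, although it does reproduce the quoted 95% CI of 7.6–25.1.

```python
import math

# Counts from the El Paso outbreak paragraph above.
ill_exposed, n_exposed = 19, 22        # ate the potato-based dip
ill_unexposed, n_unexposed = 11, 176   # did not eat the dip

attack_rate_exposed = ill_exposed / n_exposed        # ~86%
attack_rate_unexposed = ill_unexposed / n_unexposed  # ~6%
relative_risk = attack_rate_exposed / attack_rate_unexposed  # ~13.8

# Approximate 95% confidence interval on the log scale (Wald method).
se_log_rr = math.sqrt(1 / ill_exposed - 1 / n_exposed + 1 / ill_unexposed - 1 / n_unexposed)
ci_low = math.exp(math.log(relative_risk) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(relative_risk) + 1.96 * se_log_rr)

print(f"Attack rates: {attack_rate_exposed:.0%} vs {attack_rate_unexposed:.0%}")
print(f"Relative risk: {relative_risk:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
```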
In 2002, fourteen Alaskans ate muktuk (whale blubber) from a beached whale, resulting in eight of them developing botulism, with two of the affected requiring mechanical ventilation.
Beginning in late June 2007, 8 people contracted botulism poisoning by eating canned food products produced by Castleberry's Food Company in its Augusta, Georgia plant. It was later identified that the Castleberry's plant had serious production problems on a specific line of retorts that had under-processed the cans of food. These issues included broken cooking alarms, leaking water valves and inaccurate temperature devices, all the result of poor management of the company. All of the victims were hospitalized and placed on mechanical ventilation. The Castleberry's Food Company outbreak was the first instance of botulism in commercial canned foods in the United States in over 30 years.
One person died, 21 cases were confirmed, and 10 more were suspected in Lancaster, Ohio when a botulism outbreak occurred after a church potluck in April 2015. The suspected source was a salad made from home-canned potatoes.
A botulism outbreak occurred in Northern California in May 2017 after 10 people consumed nacho cheese dip served at a gas station in Sacramento County. One man died as a result of the outbreak.
United Kingdom
The largest recorded outbreak of foodborne botulism in the United Kingdom occurred in June 1989. A total of 27 patients were affected; one patient died. Twenty-five of the patients had eaten one brand of hazelnut yogurt in the week before the onset of symptoms. Control measures included the cessation of all yogurt production by the implicated producer, the withdrawal of the firm's yogurts from sale, the recall of cans of the hazelnut conserve, and advice to the general public to avoid the consumption of all hazelnut yogurts.
China
From 1958 to 1983 there were 986 outbreaks of botulism in China involving 4,377 people with 548 deaths.
Qapqal disease
After the Chinese Communist Revolution in 1949, a mysterious plague (named Qapqal disease) was noticed to be affecting several Sibe villages in Qapqal Xibe Autonomous County. It was endemic with distinctive epidemic patterns, yet the underlying cause remained unknown for a long period of time. It caused a number of deaths and forced some people to leave the place.
In 1958, a team of experts were sent to the area by the Ministry of Health to investigate the cases. The epidemic survey conducted proved that the disease was primarily type A botulism, with several cases of type B. The team also discovered that the source of the botulinum was local fermented grain and beans, as well as a raw meat food called mi song hu hu. They promoted the improvement of fermentation techniques among local residents, and thus eliminated the disease.
Canada
From 1985 to 2005 there were outbreaks causing 91 confirmed cases of foodborne botulism in Canada, 85% of which were in Inuit communities, especially Nunavik, as well as First Nations of the coast of British Columbia, following consumption of traditionally prepared marine mammal and fish products.
Ukraine
In 2017, there were 70 cases of botulism with 8 deaths in Ukraine. The previous year there were 115 cases with 12 deaths. Most cases were the result of dried fish, a common local drinking snack.
Vietnam
In 2020, several cases of botulism were reported in Vietnam. All of them were related to a product containing contaminated vegetarian pâté. Some patients were put on life support.
Other susceptible species
Botulism can occur in many vertebrates and invertebrates. Botulism has been reported in such species as rats, mice, chicken, frogs, toads, goldfish, aplysia, squid, crayfish, drosophila and leeches.
Death from botulism is common in waterfowl; an estimated 10,000 to 100,000 birds die of botulism annually. The disease is commonly called "limberneck". In some large outbreaks, a million or more birds may die. Ducks appear to be affected most often. An enzootic form of duck botulism in the Western US and Canada is known as "western duck sickness". Botulism also affects commercially raised poultry. In chickens, the mortality rate varies from a few birds to 40% of the flock.
Botulism seems to be relatively uncommon in domestic mammals; however, in some parts of the world, epidemics with up to 65% mortality are seen in cattle. The prognosis is poor in large animals that are recumbent.
In cattle, the symptoms may include drooling, restlessness, incoordination, urine retention, dysphagia, and sternal recumbency. Laterally recumbent animals are usually very close to death. In sheep, the symptoms may include drooling, a serous nasal discharge, stiffness, and incoordination. Abdominal respiration may be observed and the tail may switch on the side. As the disease progresses, the limbs may become paralyzed and death may occur. Phosphorus-deficient cattle, especially in southern Africa, are inclined to ingest bones and carrion containing clostridial toxins and consequently develop lame sickness or lamsiekte.
The clinical signs in horses are similar to cattle. The muscle paralysis is progressive; it usually begins at the hindquarters and gradually moves to the front limbs, neck, and head. Death generally occurs 24 to 72 hours after initial symptoms and results from respiratory paralysis. Some foals are found dead without other clinical signs.
Clostridium botulinum type C toxin has been incriminated as the cause of grass sickness, a condition in horses which occurs in rainy and hot summers in Northern Europe. The main symptom is pharynx paralysis.
Domestic dogs may develop systemic toxemia after consuming C. botulinum type C exotoxin or spores within bird carcasses or other infected meat but are generally resistant to the more severe effects of C. botulinum type C. Symptoms include flaccid muscle paralysis, which can lead to death due to cardiac and respiratory arrest.
Pigs are relatively resistant to botulism. Reported symptoms include anorexia, refusal to drink, vomiting, pupillary dilation, and muscle paralysis.
In poultry and wild birds, flaccid paralysis is usually seen in the legs, wings, neck and eyelids. Broiler chickens with the toxicoinfectious form may also have diarrhea with excess urates.
Prevention in non-human species
One of the main routes of exposure for botulism is the consumption of food contaminated with C. botulinum. Food-borne botulism can be prevented in domestic animals through careful inspection of the feed, purchasing high-quality feed from reliable sources, and ensuring proper storage. Poultry litter and animal carcasses are places in which C. botulinum spores are able to germinate, so it is advised not to spread poultry litter or any carcass-containing material on fields producing feed, because of their potential for supporting C. botulinum growth. Additionally, water sources should be checked for dead or dying animals, and fields should be checked for animal remains prior to mowing for hay or silage. Correcting any dietary deficiencies can also prevent animals from consuming contaminated materials such as bones or carcasses. Raw materials used for silage or feed mixed on site should be checked for any sign of mold or a rotten appearance. Acidification of animal feed can reduce, but will not eliminate, the risk of toxin formation, especially in carcasses that remain whole.
Vaccines in animals
Vaccines have been developed for use in animals to prevent botulism. The availability and approval of these vaccines varies by location; places experiencing more cases generally have more vaccines available, and routine vaccination is more common there.
A variety of vaccines have been developed for the prevention of botulism in livestock. Most initial vaccinations require multiple doses at intervals of 2–6 weeks; however, some newer vaccines require only one shot. This mainly depends on the type of vaccine and the manufacturer's recommendations. All vaccines require annual boosters to maintain immunity. Many of these vaccines can be used in multiple species including cattle, sheep, and goats, with some labeled for use in horses and mules, as well as separate vaccines for mink. Additionally, vaccination during an outbreak is as beneficial as therapeutic treatment in cattle, and this method is also used in horses and pheasants.
The use of region-specific toxoids to immunize animals has been shown to be effective. Toxoid types C and D used to immunize cattle are a useful vaccination method in South Africa and Australia. Toxoid has also been shown to be an appropriate method of immunizing mink and pheasants. In endemic areas, for example Kentucky, vaccination with type B toxoid appears to be effective.
Use in biological warfare and terrorism
United States
Based on CIA research into biological warfare at Fort Detrick, anthrax and botulism were widely regarded as the two most effective options. During the 1950s, a highly lethal strain was discovered during the biological warfare program. The CIA continued to hold 5 grams of Clostridium botulinum even after Nixon's ban on biological warfare in 1969. During the Gulf War, when the United States was concerned about a potential biowarfare attack, the efforts around botulism turned to prevention. However, the only way to make antitoxin in America until the 1990s was by drawing antibodies from a single horse named First Flight, raising much concern among Pentagon health officials.
Iraq
Iraq has historically possessed many types of biological agents, including Clostridium botulinum. The American Type Culture Collection sold 5 variants of botulinum to the University of Baghdad in May 1986. CIA reports from 1991 also show that Iraq filled shells, warheads, and bombs with biological agents such as botulinum, though none were deployed. The Iraqi air force used the code name "tea" to refer to botulinum, and it was also referred to as bioweapon "A."
Japan
A Japanese cult called Aum Shinrikyo created laboratories that produced biological weapons, specifically botulinum, anthrax, and Q fever. From 1990 to 1995, the cult staged numerous unsuccessful bioterrorism attacks on civilians. They sprayed botulinum toxin from a truck in downtown Tokyo and in the Narita airport, but there are no reported cases of botulism as a result.
See also
List of foodborne illness outbreaks
Botulinum toxin
References
Further reading
External links
WHO fact sheet on botulism
Botulism in the United States, 1889–1996. Handbook for Epidemiologists, Clinicians and Laboratory Technicians. Centers for Disease Control and Prevention. National Center for Infectious Diseases, Division of Bacterial and Mycotic Diseases 1998.
NHS choices
CDC Botulism: Control Measures Overview for Clinicians
University of California, Santa Cruz Environmental toxicology – Botulism
CDC Botulism FAQ
Hospital
A hospital is a healthcare institution providing patient treatment with specialized health science and auxiliary healthcare staff and medical equipment. The best-known type of hospital is the general hospital, which typically has an emergency department to treat urgent health problems ranging from fire and accident victims to a sudden illness. A district hospital typically is the major health care facility in its region, with many beds for intensive care and additional beds for patients who need long-term care.
Specialized hospitals include trauma centers, rehabilitation hospitals, children's hospitals, geriatric hospitals, and hospitals for specific medical needs, such as psychiatric hospitals for psychiatric treatment and other disease-specific categories. Specialized hospitals can help reduce health care costs compared to general hospitals. Hospitals are classified as general, specialty, or government depending on the sources of income received.
A teaching hospital combines assistance to people with teaching to health science students and auxiliary healthcare students. A health science facility smaller than a hospital is generally called a clinic. Hospitals have a range of departments (e.g. surgery and urgent care) and specialist units such as cardiology. Some hospitals have outpatient departments and some have chronic treatment units. Common support units include a pharmacy, pathology, and radiology.
Hospitals are typically funded by public funding, health organizations (for-profit or nonprofit), health insurance companies, or charities, including direct charitable donations. Historically, hospitals were often founded and funded by religious orders, or by charitable individuals and leaders.
Hospitals are currently staffed by professional physicians, surgeons, nurses, and allied health practitioners. In the past, however, this work was usually performed by members of founding religious orders or by volunteers. Various Catholic religious orders, such as the Alexians and the Bon Secours Sisters, still focused on hospital ministry as late as the 1990s, and several other Christian denominations, including the Methodists and Lutherans, also run hospitals. In accordance with the original meaning of the word, hospitals were originally "places of hospitality", and this meaning is still preserved in the names of some institutions such as the Royal Hospital Chelsea, established in 1681 as a retirement and nursing home for veteran soldiers.
Etymology
During the Middle Ages, hospitals served different functions from modern institutions in that they were almshouses for the poor, hostels for pilgrims, or hospital schools. The word "hospital" comes from the Latin hospes, signifying a stranger or foreigner, hence a guest. Another noun derived from this, hospitium, came to signify hospitality, that is, the relation between guest and shelterer: hospitality, friendliness, and hospitable reception. By metonymy, the Latin word then came to mean a guest-chamber, guest's lodging, or an inn. Hospes is thus the root of the English words host (where the p was dropped for convenience of pronunciation), hospitality, hospice, hostel, and hotel. The last of these derives from Latin via the Old French romance word hostel, which developed a silent s that was eventually removed from the word, its loss signified by a circumflex in the modern French hôtel. The German word Spital shares similar roots.
Types
Some patients go to a hospital just for diagnosis, treatment, or therapy and then leave ("outpatients") without staying overnight; while others are "admitted" and stay overnight or for several days or weeks or months ("inpatients"). Hospitals are usually distinguished from other types of medical facilities by their ability to admit and care for inpatients whilst the others, which are smaller, are often described as clinics.
General and acute care
The best-known type of hospital is the general hospital, also known as an acute-care hospital. These facilities handle many kinds of disease and injury, and normally have an emergency department (sometimes known as "accident & emergency") or trauma center to deal with immediate and urgent threats to health. Larger cities may have several hospitals of varying sizes and facilities. Some hospitals, especially in the United States and Canada, have their own ambulance service.
District
A district hospital typically is the major health care facility in its region, with large numbers of beds for intensive care, critical care, and long-term care.
In California, "district hospital" refers specifically to a class of healthcare facility created shortly after World War II to address a shortage of hospital beds in many local communities. Even today, district hospitals are the sole public hospitals in 19 of California's counties, and are the sole locally accessible hospital within nine additional counties in which one or more other hospitals are present at a substantial distance from a local community. Twenty-eight of California's rural hospitals and 20 of its critical-access hospitals are district hospitals. They are formed by local municipalities, have boards that are individually elected by their local communities, and exist to serve local needs. They are a particularly important provider of healthcare to uninsured patients and patients with Medi-Cal (which is California's Medicaid program, serving low-income persons, some senior citizens, persons with disabilities, children in foster care, and pregnant women). In 2012, district hospitals provided $54 million in uncompensated care in California.
Specialized
A specialty hospital is primarily and exclusively dedicated to one or a few related medical specialties. Subtypes include rehabilitation hospitals, children's hospitals, seniors' (geriatric) hospitals, long-term acute care facilities, and hospitals for dealing with specific medical needs such as psychiatric problems (see psychiatric hospital), cancer treatment, certain disease categories such as cardiac, oncology, or orthopedic problems, and so forth.
In Germany, specialised hospitals are called Fachkrankenhaus; an example is Fachkrankenhaus Coswig (thoracic surgery). In India, specialty hospitals are known as super-specialty hospitals and are distinguished from multispecialty hospitals which are composed of several specialties.
Specialised hospitals can help reduce health care costs compared to general hospitals. For example, Narayana Health's cardiac unit in Bangalore specialises in cardiac surgery and allows for a significantly greater number of patients. It has 3,000 beds and performs 3,000 paediatric cardiac operations annually, the largest number in the world for such a facility. Surgeons are paid on a fixed salary instead of per operation, thus when the number of procedures increases, the hospital is able to take advantage of economies of scale and reduce its cost per procedure. Each specialist may also become more efficient by working on one procedure like a production line.
Teaching
A teaching hospital delivers healthcare to patients as well as training to prospective medical professionals such as medical students and student nurses. It may be linked to a medical school or nursing school, and may be involved in medical research. Students may also observe clinical work in the hospital.
Clinics
Clinics generally provide only outpatient services, but some may have a few inpatient beds and a limited range of services that may otherwise be found in typical hospitals.
Departments or wards
A hospital contains one or more wards that house hospital beds for inpatients. It may also have acute services such as an emergency department, operating theatre, and intensive care unit, as well as a range of medical specialty departments. A well-equipped hospital may be classified as a trauma center. They may also have other services such as a hospital pharmacy, radiology, pathology, and medical laboratories. Some hospitals have outpatient departments such as behavioral health services, dentistry, and rehabilitation services.
A hospital may also have a department of nursing, headed by a chief nursing officer or director of nursing. This department is responsible for the administration of professional nursing practice, research, and policy for the hospital.
Many units have both a nursing and a medical director that serve as administrators for their respective disciplines within that unit. For example, within an intensive care nursery, a medical director is responsible for physicians and medical care, while the nursing manager is responsible for all the nurses and nursing care.
Support units may include a medical records department, release of information department, technical support, clinical engineering, facilities management, plant operations, dining services, and security departments.
Remote monitoring
The COVID-19 pandemic stimulated the development of virtual wards across the British NHS. Patients are managed at home, monitoring their own oxygen levels using an oxygen saturation probe if necessary and supported by telephone. West Hertfordshire Hospitals NHS Trust managed around 1200 patients at home between March and June 2020 and planned to continue the system after COVID-19, initially for respiratory patients. Mersey Care NHS Foundation Trust started a COVID Oximetry@Home service in April 2020. This enables them to monitor more than 5000 patients a day in their own homes. The technology allows nurses, carers, or patients to record and monitor vital signs such as blood oxygen levels.
History
Early examples
In early India, Fa Xian, a Chinese Buddhist monk who travelled across India around 400 AD, recorded examples of healing institutions. According to the Mahavamsa, the ancient chronicle of Sinhalese royalty written in the sixth century AD, King Pandukabhaya of Sri Lanka (r. 437–367 BC) had lying-in homes and hospitals (Sivikasotthi-Sala). A hospital and medical training center also existed at Gundeshapur, a major city in the southwest of the Sassanid Persian Empire founded in AD 271 by Shapur I. In ancient Greece, temples dedicated to the healer-god Asclepius, known as Asclepeia, functioned as centers of medical advice, prognosis, and healing. The Asclepeia spread to the Roman Empire. While public healthcare was non-existent in the Roman Empire, military hospitals called valetudinaria did exist, stationed in military barracks, and served the soldiers and slaves within the fort. Evidence exists that some civilian hospitals, while unavailable to the wider Roman population, were occasionally built privately for extremely wealthy Roman households in the countryside for that family, although this practice seems to have ended in 80 AD.
Middle Ages
The declaration of Christianity as an accepted religion in the Roman Empire drove an expansion of the provision of care. Following the First Council of Nicaea in AD 325 construction of a hospital in every cathedral town was begun, including among the earliest hospitals by Saint Sampson in Constantinople and by Basil, bishop of Caesarea in modern-day Turkey. By the twelfth century, Constantinople had two well-organised hospitals, staffed by doctors who were both male and female. Facilities included systematic treatment procedures and specialised wards for various diseases.
The earliest general hospital in the Islamic world was built in 805 in Baghdad by Harun Al-Rashid. By the 10th century, Baghdad had five more hospitals, while Damascus had six hospitals by the 15th century, and Córdoba alone had 50 major hospitals by the end of the 15th century, many of them exclusively for the military. The Islamic bimaristan served as a center of medical treatment, as well as a nursing home and lunatic asylum. It typically treated the poor, as the rich would have been treated in their own homes. Hospitals in this era were the first to require medical licenses for doctors, and compensation for negligence could be awarded. Hospitals were forbidden by law to turn away patients who were unable to pay. These hospitals were financially supported by waqfs, as well as state funds.
In India, public hospitals existed at least since the reign of Firuz Shah Tughlaq in the 14th century. The Mughal emperor Jahangir in the 17th century established hospitals in large cities at government expense with records showing salaries and grants for medicine being paid for by the government.
In China, during the Song Dynasty, the state began to take on social welfare functions previously provided by Buddhist monasteries and instituted public hospitals, hospices and dispensaries.
Early modern and Enlightenment Europe
In Europe the medieval concept of Christian care evolved during the 16th and 17th centuries into a secular one. In England, after the dissolution of the monasteries in 1540 by King Henry VIII, the church abruptly ceased to be the supporter of hospitals, and only by direct petition from the citizens of London, were the hospitals St Bartholomew's, St Thomas's and St Mary of Bethlehem's (Bedlam) endowed directly by the crown; this was the first instance of secular support being provided for medical institutions.
In 1682, Charles II founded the Royal Hospital Chelsea as a retirement home for old soldiers known as Chelsea Pensioners, an instance of the use of the word "hospital" to mean an almshouse. Ten years later, Mary II founded the Royal Hospital for Seamen, Greenwich, with the same purpose.
The voluntary hospital movement began in the early 18th century, with hospitals being founded in London by the 1720s, including Westminster Hospital (1719) promoted by the private bank C. Hoare & Co and Guy's Hospital (1724) funded from the bequest of the wealthy merchant, Thomas Guy.
Other hospitals sprang up in London and other British cities over the century, many paid for by private subscriptions. St Bartholomew's in London was rebuilt from 1730 to 1759, and the London Hospital, Whitechapel, opened in 1752.
These hospitals represented a turning point in the function of the institution; they began to evolve from being basic places of care for the sick to becoming centers of medical innovation and discovery and the principal place for the education and training of prospective practitioners. Some of the era's greatest surgeons and doctors worked and passed on their knowledge at the hospitals. They also changed from being mere homes of refuge to being complex institutions for the provision and advancement of medicine and care for sick. The Charité was founded in Berlin in 1710 by King Frederick I of Prussia as a response to an outbreak of plague.
Voluntary hospitals also spread to Colonial America; Bellevue Hospital in New York City opened in 1736, first as a workhouse and then later as a hospital; Pennsylvania Hospital in Philadelphia opened in 1752, New York Hospital, now Weill Cornell Medical Center in New York City opened in 1771, and Massachusetts General Hospital in Boston opened in 1811.
When the Vienna General Hospital opened in 1784 as the world's largest hospital, physicians acquired a new facility that gradually developed into one of the most important research centers.
Another Enlightenment era charitable innovation was the dispensary; these would issue the poor with medicines free of charge. The London Dispensary opened its doors in 1696 as the first such clinic in the British Empire. The idea was slow to catch on until the 1770s, when many such organisations began to appear, including the Public Dispensary of Edinburgh (1776), the Metropolitan Dispensary and Charitable Fund (1779) and the Finsbury Dispensary (1780). Dispensaries were also opened in New York 1771, Philadelphia 1786, and Boston 1796.
The Royal Naval Hospital, Stonehouse, Plymouth, was a pioneer of hospital design in having "pavilions" to minimize the spread of infection. John Wesley visited in 1785, and commented "I never saw anything of the kind so complete; every part is so convenient, and so admirably neat. But there is nothing superfluous, and nothing purely ornamented, either within or without." This revolutionary design was made more widely known by John Howard, the philanthropist. In 1787 the French government sent two scholar administrators, Coulomb and Tenon, who had visited most of the hospitals in Europe. They were impressed and the "pavilion" design was copied in France and throughout Europe.
19th century
English physician Thomas Percival (1740–1804) wrote a comprehensive system of medical conduct, Medical Ethics; or, a Code of Institutes and Precepts, Adapted to the Professional Conduct of Physicians and Surgeons (1803) that set the standard for many textbooks. In the mid-19th century, hospitals and the medical profession became more professionalised, with a reorganisation of hospital management along more bureaucratic and administrative lines. The Apothecaries Act 1815 made it compulsory for medical students to practise for at least half a year at a hospital as part of their training.
Florence Nightingale pioneered the modern profession of nursing during the Crimean War when she set an example of compassion, commitment to patient care and diligent and thoughtful hospital administration. The first official nurses' training programme, the Nightingale School for Nurses, was opened in 1860, with the mission of training nurses to work in hospitals, to work with the poor and to teach. Nightingale was instrumental in reforming the nature of the hospital, by improving sanitation standards and changing the image of the hospital from a place the sick would go to die, to an institution devoted to recuperation and healing. She also emphasised the importance of statistical measurement for determining the success rate of a given intervention and pushed for administrative reform at hospitals.
By the late 19th century, the modern hospital was beginning to take shape with a proliferation of a variety of public and private hospital systems. By the 1870s, hospitals had more than trebled their original average intake of 3,000 patients. In continental Europe the new hospitals generally were built and run from public funds. The National Health Service, the principal provider of health care in the United Kingdom, was founded in 1948. During the nineteenth century, the Second Viennese Medical School emerged with the contributions of physicians such as Carl Freiherr von Rokitansky, Josef Škoda, Ferdinand Ritter von Hebra, and Ignaz Philipp Semmelweis. Basic medical science expanded and specialisation advanced. Furthermore, the first dermatology, eye, as well as ear, nose, and throat clinics in the world were founded in Vienna, being considered as the birth of specialised medicine.
20th century and beyond
By the late 19th and early 20th centuries, medical advancements such as anesthesia and sterile techniques that could make surgery less risky, and the availability of more advanced diagnostic devices such as X-rays, continued to make hospitals a more attractive option for treatment.
Modern hospitals measure various efficiency metrics such as occupancy rates, the average length of stay, time to service, patient satisfaction, physician performance, patient readmission rate, inpatient mortality rate, and case mix index.
In the United States, the number of hospitalizations grew to its peak in 1981 with 171 admissions per 1,000 Americans and 6,933 hospitals. This trend subsequently reversed, with the rate of hospitalization falling by more than 10% and the number of US hospitals shrinking from 6,933 in 1981 to 5,534 in 2016. Occupancy rates also dropped from 77% in 1980 to 60% in 2013. Among the reasons for this are the increasing availability of complex care elsewhere, such as at home or in physicians' offices, and the public's perception of hospitals as less therapeutic and more life-threatening. In the US, a patient may sleep in a hospital bed, but be considered outpatient and "under observation" if not formally admitted.
In the U.S., inpatient stays are covered under Medicare Part A, but a hospital might keep a patient under observation which is only covered under Medicare Part B, and subjects the patient to additional coinsurance costs. In 2013, the Center for Medicare and Medicaid Services (CMS) introduced a "two-midnight" rule for inpatient admissions, intended to reduce an increasing number of long-term "observation" stays being used for reimbursement. This rule was later dropped in 2018. In 2016 and 2017, healthcare reform and a continued decline in admissions resulted in US hospital-based healthcare systems performing poorly financially. Microhospitals, with bed capacities of between eight and fifty, are expanding in the United States. Similarly, freestanding emergency rooms, which transfer patients that require inpatient care to hospitals, were popularised in the 1970s and have since expanded rapidly across the United States.
The Catholic Church is the largest non-government provider of health care services in the world. It has around 18,000 clinics, 16,000 homes for the elderly and those with special needs, and 5,500 hospitals, with 65 percent of them located in developing countries. In 2010, the Church's Pontifical Council for the Pastoral Care of Health Care Workers said that the Church manages 26% of the world's health care facilities.
Funding
Modern hospitals derive funding from a variety of sources. They may be funded by private payment and health insurance, by public expenditure, or by charitable donations.
In the United Kingdom, the National Health Service delivers health care to legal residents funded by the state "free at the point of delivery", and emergency care free to anyone regardless of nationality or status. Due to the need for hospitals to prioritise their limited resources, there is a tendency in countries with such systems for 'waiting lists' for non-crucial treatment, so those who can afford it may take out private health care to access treatment more quickly.
In the United States, hospitals typically operate privately and in some cases on a for-profit basis, such as HCA Healthcare. The list of procedures and their prices is billed through a chargemaster; however, these prices may be lower for health care obtained within healthcare networks. Legislation requires hospitals to provide care to patients in life-threatening emergency situations regardless of the patient's ability to pay. Privately funded hospitals which admit uninsured patients in emergency situations therefore incur direct financial losses, as happened in the aftermath of Hurricane Katrina.
Quality and safety
As the quality of health care has increasingly become an issue around the world, hospitals have increasingly had to pay serious attention to this matter. Independent external assessment of quality is one of the most powerful ways to assess this aspect of health care, and hospital accreditation is one means by which this is achieved. In many parts of the world such accreditation is sourced from other countries, a phenomenon known as international healthcare accreditation, by groups such as Accreditation Canada in Canada, the Joint Commission in the U.S., the Trent Accreditation Scheme in Great Britain, and the Haute Autorité de santé (HAS) in France. In England, hospitals are monitored by the Care Quality Commission. In 2020, they turned their attention to hospital food standards after seven patient deaths from listeria linked to pre-packaged sandwiches and salads in 2019, saying "Nutrition and hydration is part of a patient's recovery."
The World Health Organization reported in 2011 that being admitted to a hospital was far riskier than flying. Globally, the chance of a patient being subject to a treatment error in a hospital was about 10%, and the chance of death resulting from an error was about one in 300, according to Liam Donaldson. 7% of hospitalised patients in developed countries, and 10% in developing countries, acquire at least one health care-associated infection. In the U.S., 1.7 million infections are acquired in hospital each year, leading to 100,000 deaths, figures much worse than in Europe, where there were 4.5 million infections and 37,000 deaths.
Architecture
Modern hospital buildings are designed to minimise the effort of medical personnel and the possibility of contamination while maximising the efficiency of the whole system. Travel time for personnel within the hospital and the transportation of patients between units is facilitated and minimised. The building also should be built to accommodate heavy departments such as radiology and operating rooms while space for special wiring, plumbing, and waste disposal must be allowed for in the design.
However, many hospitals, even those considered "modern", are the product of continual and often badly managed growth over decades or even centuries, with utilitarian new sections added on as needs and finances dictate. As a result, the Dutch architectural historian Cor Wagenaar has been sharply critical of the design of many hospitals.
Some newer hospitals now try to re-establish design that takes the patient's psychological needs into account, such as providing more fresh air, better views and more pleasant colour schemes. These ideas harken back to the late eighteenth century, when the concept of providing fresh air and access to the 'healing powers of nature' were first employed by hospital architects in improving their buildings.
Research by the British Medical Association shows that good hospital design can reduce patients' recovery times. Exposure to daylight is effective in reducing depression. Single-sex accommodation helps ensure that patients are treated in privacy and with dignity. Exposure to nature and hospital gardens is also important – looking out of windows improves patients' moods and reduces blood pressure and stress levels. Open windows in patient rooms have also shown some evidence of beneficial outcomes by improving airflow and increasing microbial diversity. Eliminating long corridors can reduce nurses' fatigue and stress.
Another ongoing major development is the change from a ward-based system (where patients are accommodated in communal rooms, separated by movable partitions) to one in which they are accommodated in individual rooms. The ward-based system has been described as very efficient, especially for the medical staff, but is considered to be more stressful for patients and detrimental to their privacy. A major constraint on providing all patients with their own rooms is however found in the higher cost of building and operating such a hospital; this causes some hospitals to charge for private rooms.
See also
Burn center
History of hospitals
History of medicine
Hospice
Hospital network
Lists of hospitals
Hospital information system
Trauma center
The Waiting Room
Walk-in clinic
GP Liaison
Notes
References
External links
WHO Hospitals https://www.who.int/hospitals/en/
Life

Life is a quality that distinguishes matter that has biological processes, such as signaling and self-sustaining processes, from matter that does not. It is defined descriptively by the capacity for homeostasis, organisation, metabolism, growth, adaptation, response to stimuli, and reproduction. All life over time eventually reaches a state of death, and none is immortal. Many philosophical definitions of living systems have been proposed, such as self-organizing systems. Viruses in particular make definition difficult as they replicate only in host cells. Life exists all over the Earth in air, water, and soil, with many ecosystems forming the biosphere. Some of these are harsh environments occupied only by extremophiles.
Life has been studied since ancient times, with theories such as Empedocles's materialism asserting that it was composed of four eternal elements, and Aristotle's hylomorphism asserting that living things have souls and embody both form and matter. Life originated at least 3.5 billion years ago, resulting in a universal common ancestor. This evolved into all the species that exist now, by way of many extinct species, some of which have left traces as fossils. Attempts to classify living things, too, began with Aristotle. Modern classification began with Carl Linnaeus's system of binomial nomenclature in the 1740s.
Living things are composed of biochemical molecules, formed mainly from a few core chemical elements. All living things contain two types of large molecule, proteins and nucleic acids, the latter usually both DNA and RNA: these carry the information needed by each species, including the instructions to make each type of protein. The proteins, in turn, serve as the machinery which carries out the many chemical processes of life. The cell is the structural and functional unit of life. Smaller organisms, including prokaryotes (bacteria and archaea), consist of small single cells. Larger organisms, mainly eukaryotes, can consist of single cells or may be multicellular with more complex structure. Life is only known to exist on Earth but extraterrestrial life is thought probable. Artificial life is being simulated and explored by scientists and engineers.
Definitions
Challenge
The definition of life has long been a challenge for scientists and philosophers. This is partially because life is a process, not a substance. This is complicated by a lack of knowledge of the characteristics of living entities, if any, that may have developed outside Earth. Philosophical definitions of life have also been put forward, with similar difficulties on how to distinguish living things from the non-living. Legal definitions of life have been debated, though these generally focus on the decision to declare a human dead, and the legal ramifications of this decision. At least 123 definitions of life have been compiled.
Descriptive
Since there is no consensus for a definition of life, most current definitions in biology are descriptive. Life is considered a characteristic of something that preserves, furthers or reinforces its existence in the given environment. This implies all or most of the following traits:
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature.
Organisation: being structurally composed of one or more cells – the basic units of life.
Metabolism: transformation of energy, used to convert chemicals into cellular components (anabolism) and to decompose organic matter (catabolism). Living things require energy for homeostasis and other activities.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size and structure.
Adaptation: the evolutionary process whereby an organism becomes better able to live in its habitat.
Response to stimuli: such as the contraction of a unicellular organism away from external chemicals, the complex reactions involving all the senses of multicellular organisms, or the motion of the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.
Physics
From a physics perspective, an organism is a thermodynamic system with an organised molecular structure that can reproduce itself and evolve as survival dictates. Thermodynamically, life has been described as an open system which makes use of gradients in its surroundings to create imperfect copies of itself. Another way of putting this is to define life as "a self-sustained chemical system capable of undergoing Darwinian evolution", a definition adopted by a NASA committee attempting to define life for the purposes of exobiology, based on a suggestion by Carl Sagan. This definition, however, has been widely criticised because according to it, a single sexually reproducing individual is not alive as it is incapable of evolving on its own.
Living systems
Others take a living systems theory viewpoint that does not necessarily depend on molecular chemistry. One systemic definition of life is that living things are self-organizing and autopoietic (self-producing). Variations of this include Stuart Kauffman's definition as an autonomous agent or a multi-agent system capable of reproducing itself, and of completing at least one thermodynamic work cycle. This definition is extended by the evolution of novel functions over time.
Death
Death is the termination of all vital functions or life processes in an organism or cell.
One of the challenges in defining death is in distinguishing it from life. Death would seem to refer to either the moment life ends, or when the state that follows life begins. However, determining when death has occurred is difficult, as cessation of life functions is often not simultaneous across organ systems. Such determination, therefore, requires drawing conceptual lines between life and death. This is problematic because there is little consensus over how to define life. The nature of death has for millennia been a central concern of the world's religious traditions and of philosophical inquiry. Many religions maintain faith in either a kind of afterlife or reincarnation for the soul, or resurrection of the body at a later date.
Viruses
Whether or not viruses should be considered as alive is controversial. They are most often considered as just gene coding replicators rather than forms of life. They have been described as "organisms at the edge of life" because they possess genes, evolve by natural selection, and replicate by making multiple copies of themselves through self-assembly. However, viruses do not metabolise and they require a host cell to make new products. Virus self-assembly within host cells has implications for the study of the origin of life, as it may support the hypothesis that life could have started as self-assembling organic molecules.
History of study
Materialism
Some of the earliest theories of life were materialist, holding that all that exists is matter, and that life is merely a complex form or arrangement of matter. Empedocles (430 BC) argued that everything in the universe is made up of a combination of four eternal "elements" or "roots of all": earth, water, air, and fire. All change is explained by the arrangement and rearrangement of these four elements. The various forms of life are caused by an appropriate mixture of elements.
Democritus (460 BC) was an atomist; he thought that the essential characteristic of life was having a soul (psyche), and that the soul, like everything else, was composed of fiery atoms. He elaborated on fire because of the apparent connection between life and heat, and because fire moves.
Plato, in contrast, held that the world was organised by permanent forms, reflected imperfectly in matter; forms provided direction or intelligence, explaining the regularities observed in the world. The mechanistic materialism that originated in ancient Greece was revived and revised by the French philosopher René Descartes (1596–1650), who held that animals and humans were assemblages of parts that together functioned as a machine. This idea was developed further by Julien Offray de La Mettrie (1709–1750) in his book L'Homme Machine. In the 19th century, advances in cell theory in biological science encouraged this view. The evolutionary theory of Charles Darwin (1859) is a mechanistic explanation for the origin of species by means of natural selection. At the beginning of the 20th century Stéphane Leduc (1853–1939) promoted the idea that biological processes could be understood in terms of physics and chemistry, and that their growth resembled that of inorganic crystals immersed in solutions of sodium silicate. His ideas, set out in his book La biologie synthétique, were widely dismissed during his lifetime, but have since seen a resurgence of interest through the work of Russell, Barge and colleagues.
Hylomorphism
Hylomorphism is a theory first expressed by the Greek philosopher Aristotle (322 BC). The application of hylomorphism to biology was important to Aristotle, and biology is extensively covered in his extant writings. In this view, everything in the material universe has both matter and form, and the form of a living thing is its soul (Greek psyche, Latin anima). There are three kinds of souls: the vegetative soul of plants, which causes them to grow and decay and nourish themselves, but does not cause motion and sensation; the animal soul, which causes animals to move and feel; and the rational soul, which is the source of consciousness and reasoning, which (Aristotle believed) is found only in man. Each higher soul has all of the attributes of the lower ones. Aristotle believed that while matter can exist without form, form cannot exist without matter, and that therefore the soul cannot exist without the body.
This account is consistent with teleological explanations of life, which account for phenomena in terms of purpose or goal-directedness. Thus, the whiteness of the polar bear's coat is explained by its purpose of camouflage. The direction of causality (from the future to the past) is in contradiction with the scientific evidence for natural selection, which explains the consequence in terms of a prior cause. Biological features are explained not by looking at future optimal results, but by looking at the past evolutionary history of a species, which led to the natural selection of the features in question.
Spontaneous generation
Spontaneous generation was the belief that living organisms can form without descent from similar organisms. Typically, the idea was that certain forms such as fleas could arise from inanimate matter such as dust or the supposed seasonal generation of mice and insects from mud or garbage.
The theory of spontaneous generation was proposed by Aristotle, who compiled and expanded the work of prior natural philosophers and the various ancient explanations of the appearance of organisms; it was considered the best explanation for two millennia. It was decisively dispelled by the experiments of Louis Pasteur in 1859, who expanded upon the investigations of predecessors such as Francesco Redi. Disproof of the traditional ideas of spontaneous generation is no longer controversial among biologists.
Vitalism
Vitalism is the belief that there is a non-material life-principle. This originated with Georg Ernst Stahl (17th century), and remained popular until the middle of the 19th century. It appealed to philosophers such as Henri Bergson, Friedrich Nietzsche, and Wilhelm Dilthey, anatomists like Xavier Bichat, and chemists like Justus von Liebig. Vitalism included the idea that there was a fundamental difference between organic and inorganic material, and the belief that organic material can only be derived from living things. This was disproved in 1828, when Friedrich Wöhler prepared urea from inorganic materials. This Wöhler synthesis is considered the starting point of modern organic chemistry. It is of historical significance because for the first time an organic compound was produced in inorganic reactions.
During the 1850s Hermann von Helmholtz, anticipated by Julius Robert von Mayer, demonstrated that no energy is lost in muscle movement, suggesting that there were no "vital forces" necessary to move a muscle. These results led to the abandonment of scientific interest in vitalistic theories, especially after Eduard Buchner's demonstration that alcoholic fermentation could occur in cell-free extracts of yeast. Nonetheless, belief still exists in pseudoscientific theories such as homoeopathy, which interprets diseases and sickness as caused by disturbances in a hypothetical vital force or life force.
Development
Origin of life
The age of Earth is about 4.54 billion years. Life on Earth has existed for at least 3.5 billion years, with the oldest physical traces of life dating back 3.7 billion years. Estimates from molecular clocks, as summarised in the TimeTree public database, place the origin of life around 4.0 billion years ago. Hypotheses on the origin of life attempt to explain the formation of a universal common ancestor from simple organic molecules via pre-cellular life to protocells and metabolism. In 2016, a set of 355 genes from the last universal common ancestor was tentatively identified.
The biosphere is postulated to have developed, from the origin of life onwards, at least some 3.5 billion years ago. The earliest evidence for life on Earth includes biogenic graphite found in 3.7 billion-year-old metasedimentary rocks from Western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone from Western Australia. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. In 2017, putative fossilised microorganisms (or microfossils) were announced to have been discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada that were as old as 4.28 billion years, the oldest record of life on Earth, suggesting "an almost instantaneous emergence of life" after ocean formation 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago.
Evolution
Evolution is the change in heritable characteristics of biological populations over successive generations. It results in the appearance of new species and often the disappearance of old ones. Evolution occurs when evolutionary processes such as natural selection (including sexual selection) and genetic drift act on genetic variation, resulting in certain characteristics increasing or decreasing in frequency within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation.
Fossils
Fossils are the preserved remains or traces of organisms from the remote past. The totality of fossils, both discovered and undiscovered, and their placement in layers (strata) of sedimentary rock is known as the fossil record. A preserved specimen is called a fossil if it is older than the arbitrary date of 10,000 years ago. Hence, fossils range in age from the youngest at the start of the Holocene Epoch to the oldest from the Archaean Eon, up to 3.4 billion years old.
Extinction
Extinction is the process by which a species dies out. The moment of extinction is the death of the last individual of that species. Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively after a period of apparent absence. Species become extinct when they are no longer able to survive in changing habitat or against superior competition. Over 99% of all the species that have ever lived are now extinct. Mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify.
Environmental conditions
The diversity of life on Earth is a result of the dynamic interplay between genetic opportunity, metabolic capability, environmental challenges, and symbiosis. For most of its existence, Earth's habitable environment has been dominated by microorganisms and subjected to their metabolism and evolution. As a consequence of these microbial activities, the physical-chemical environment on Earth has been changing on a geologic time scale, thereby affecting the path of evolution of subsequent life. For example, the release of molecular oxygen by cyanobacteria as a by-product of photosynthesis induced global changes in the Earth's environment. Because oxygen was toxic to most life on Earth at the time, this posed novel evolutionary challenges, and ultimately resulted in the formation of Earth's major animal and plant species. This interplay between organisms and their environment is an inherent feature of living systems.
Biosphere
The biosphere is the global sum of all ecosystems. It can also be termed the zone of life on Earth, a closed system (apart from solar and cosmic radiation and heat from the interior of the Earth), and largely self-regulating. Organisms exist in every part of the biosphere, including soil, hot springs, inside rocks deep underground, the deepest parts of the ocean, and high in the atmosphere. For example, spores of Aspergillus niger have been detected in the mesosphere at an altitude of 48 to 77 km. Under test conditions, life forms have been observed to survive in the vacuum of space. Life forms thrive in the deep Mariana Trench, inside rocks far below the sea floor off the coast of the northwestern United States, and beneath the seabed off Japan. In 2014, life forms were found living below the ice of Antarctica. Expeditions of the International Ocean Discovery Program found unicellular life in 120 °C sediment 1.2 km below the seafloor in the Nankai Trough subduction zone. According to one researcher, "You can find microbes everywhere—they're extremely adaptable to conditions, and survive wherever they are."
Range of tolerance
The inert components of an ecosystem are the physical and chemical factors necessary for life—energy (sunlight or chemical energy), water, heat, atmosphere, gravity, nutrients, and ultraviolet solar radiation protection. In most ecosystems, the conditions vary during the day and from one season to the next. To live in most ecosystems, then, organisms must be able to survive a range of conditions, called the "range of tolerance". Outside that are the "zones of physiological stress", where the survival and reproduction are possible but not optimal. Beyond these zones are the "zones of intolerance", where survival and reproduction of that organism is unlikely or impossible. Organisms that have a wide range of tolerance are more widely distributed than organisms with a narrow range of tolerance.
Extremophiles
To survive, some microorganisms have evolved to withstand freezing, complete desiccation, starvation, high levels of radiation exposure, and other physical or chemical challenges. These extremophile microorganisms may survive exposure to such conditions for long periods. They excel at exploiting uncommon sources of energy. Characterization of the structure and metabolic diversity of microbial communities in such extreme environments is ongoing.
Classification
Antiquity
The first classification of organisms was made by the Greek philosopher Aristotle (384–322 BC), who grouped living things as either plants or animals, based mainly on their ability to move. He distinguished animals with blood from animals without blood, which can be compared with the concepts of vertebrates and invertebrates respectively, and divided the blooded animals into five groups: viviparous quadrupeds (mammals), oviparous quadrupeds (reptiles and amphibians), birds, fishes and whales. The bloodless animals were divided into five groups: cephalopods, crustaceans, insects (which included the spiders, scorpions, and centipedes), shelled animals (such as most molluscs and echinoderms), and "zoophytes" (animals that resemble plants). This theory remained dominant for more than a thousand years.
Linnaean
In the late 1740s, Carl Linnaeus introduced his system of binomial nomenclature for the classification of species. Linnaeus attempted to improve the composition and reduce the length of the previously used many-worded names by abolishing unnecessary rhetoric, introducing new descriptive terms and precisely defining their meaning.
The fungi were originally treated as plants. For a short period Linnaeus had classified them in the taxon Vermes in Animalia, but later placed them back in Plantae. Herbert Copeland classified the Fungi in his Protoctista, including them with single-celled organisms and thus partially avoiding the problem but acknowledging their special status. The problem was eventually solved by Whittaker, when he gave them their own kingdom in his five-kingdom system. Evolutionary history shows that the fungi are more closely related to animals than to plants.
As advances in microscopy enabled detailed study of cells and microorganisms, new groups of life were revealed, and the fields of cell biology and microbiology were created. These new organisms were originally described separately in protozoa as animals and protophyta/thallophyta as plants, but were united by Ernst Haeckel in the kingdom Protista; later, the prokaryotes were split off in the kingdom Monera, which would eventually be divided into two separate groups, the Bacteria and the Archaea. This led to the six-kingdom system and eventually to the current three-domain system, which is based on evolutionary relationships. However, the classification of eukaryotes, especially of protists, is still controversial.
As microbiology developed, viruses, which are non-cellular, were discovered. Whether these are considered alive has been a matter of debate; viruses lack characteristics of life such as cell membranes, metabolism and the ability to grow or respond to their environments. Viruses have been classed into "species" based on their genetics, but many aspects of such a classification remain controversial.
The original Linnaean system has been modified many times.
The attempt to organise the Eukaryotes into a small number of kingdoms has been challenged. The Protozoa do not form a clade or natural grouping, and nor do the Chromista (Chromalveolata).
Metagenomic
The ability to sequence large numbers of complete genomes has allowed biologists to take a metagenomic view of the phylogeny of the whole tree of life. This has led to the realisation that the majority of living things are bacteria, and that all have a common origin.
Composition
Chemical elements
All life forms require certain core chemical elements for their biochemical functioning. These include carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur—the elemental macronutrients for all organisms. Together these make up nucleic acids, proteins and lipids, the bulk of living matter. Five of these six elements comprise the chemical components of DNA, the exception being sulfur. The latter is a component of the amino acids cysteine and methionine. The most abundant of these elements in organisms is carbon, which has the desirable attribute of forming multiple, stable covalent bonds. This allows carbon-based (organic) molecules to form the immense variety of chemical arrangements described in organic chemistry.
Alternative hypothetical types of biochemistry have been proposed that eliminate one or more of these elements, swap out an element for one not on the list, or change required chiralities or other chemical properties.
DNA
Deoxyribonucleic acid or DNA is a molecule that carries most of the genetic instructions used in the growth, development, functioning and reproduction of all known living organisms and many viruses. DNA and RNA are nucleic acids; alongside proteins and complex carbohydrates, they are one of the three major types of macromolecule that are essential for all known forms of life. Most DNA molecules consist of two biopolymer strands coiled around each other to form a double helix. The two DNA strands are known as polynucleotides since they are composed of simpler units called nucleotides. Each nucleotide is composed of a nitrogen-containing nucleobase—either cytosine (C), guanine (G), adenine (A), or thymine (T)—as well as a sugar called deoxyribose and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. According to base pairing rules (A with T, and C with G), hydrogen bonds bind the nitrogenous bases of the two separate polynucleotide strands to make double-stranded DNA. This has the key property that each strand contains all the information needed to recreate the other strand, enabling the information to be preserved during reproduction and cell division. Within cells, DNA is organised into long structures called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of chromosomes. Eukaryotes store most of their DNA inside the cell nucleus.
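As a simple illustration of these base-pairing rules, the following minimal sketch in Python reconstructs one strand from the other; the sequence used is an arbitrary made-up example, and the functions are illustrative only, not part of any bioinformatics library.

    # Watson-Crick base pairing: A pairs with T, and C pairs with G.
    PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def complement(strand: str) -> str:
        # Replace each base with its partner; either strand therefore carries
        # all the information needed to rebuild the other.
        return "".join(PAIRS[base] for base in strand)

    def reverse_complement(strand: str) -> str:
        # The two strands run antiparallel, so the partner strand read in its
        # own 5'-to-3' direction is the complement written in reverse.
        return complement(strand)[::-1]

    sequence = "ATGCGTAC"                 # arbitrary example sequence
    print(complement(sequence))           # TACGCATG
    print(reverse_complement(sequence))   # GTACGCAT

Running the sketch on any sequence of A, T, C and G shows that the complementary strand is fully determined, which is the property that allows the information to be preserved during replication and cell division.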
Cells
Cells are the basic unit of structure in every living thing, and all cells arise from pre-existing cells by division. Cell theory was formulated by Henri Dutrochet, Theodor Schwann, Rudolf Virchow and others during the early nineteenth century, and subsequently became widely accepted. The activity of an organism depends on the total activity of its cells, with energy flow occurring within and between them. Cells contain hereditary information that is carried forward as a genetic code during cell division.
There are two primary types of cells, reflecting their evolutionary origins. Prokaryote cells lack a nucleus and other membrane-bound organelles, although they have circular DNA and ribosomes. Bacteria and Archaea are two domains of prokaryotes. The other primary type is the eukaryote cell, which has a distinct nucleus bound by a nuclear membrane and membrane-bound organelles, including mitochondria, chloroplasts, lysosomes, rough and smooth endoplasmic reticulum, and vacuoles. In addition, their DNA is organised into chromosomes. All species of large complex organisms are eukaryotes, including animals, plants and fungi, though with a wide diversity of protist microorganisms. The conventional model is that eukaryotes evolved from prokaryotes, with the main organelles of the eukaryotes forming through endosymbiosis between bacteria and the progenitor eukaryotic cell.
The molecular mechanisms of cell biology are based on proteins. Most of these are synthesised by the ribosomes through an enzyme-catalyzed process called protein biosynthesis. A sequence of amino acids is assembled and joined based upon gene expression of the cell's nucleic acid. In eukaryotic cells, these proteins may then be transported and processed through the Golgi apparatus in preparation for dispatch to their destination.
Cells reproduce through a process of cell division in which the parent cell divides into two or more daughter cells. For prokaryotes, cell division occurs through a process of fission in which the DNA is replicated, then the two copies are attached to parts of the cell membrane. In eukaryotes, a more complex process of mitosis is followed. However, the result is the same; the resulting cell copies are identical to each other and to the original cell (except for mutations), and both are capable of further division following an interphase period.
Multicellular structure
Multicellular organisms may have first evolved through the formation of colonies of identical cells. These cells can form group organisms through cell adhesion. The individual members of a colony are capable of surviving on their own, whereas the members of a true multi-cellular organism have developed specialisations, making them dependent on the remainder of the organism for survival. Such organisms are formed clonally or from a single germ cell that is capable of forming the various specialised cells that form the adult organism. This specialisation allows multicellular organisms to exploit resources more efficiently than single cells. About 800 million years ago, a minor genetic change in a single molecule, the enzyme GK-PID, may have allowed organisms to go from being single-celled to multicellular.
Cells have evolved methods to perceive and respond to their microenvironment, thereby enhancing their adaptability. Cell signalling coordinates cellular activities, and hence governs the basic functions of multicellular organisms. Signaling between cells can occur through direct cell contact using juxtacrine signalling, or indirectly through the exchange of agents as in the endocrine system. In more complex organisms, coordination of activities can occur through a dedicated nervous system.
In the universe
Though life is confirmed only on Earth, many think that extraterrestrial life is not only plausible, but probable or inevitable, possibly resulting in a biophysical cosmology instead of a mere physical cosmology. Other planets and moons in the Solar System and other planetary systems are being examined for evidence of having once supported simple life, and projects such as SETI are trying to detect radio transmissions from possible alien civilisations. Other locations within the Solar System that may host microbial life include the subsurface of Mars, the upper atmosphere of Venus, and subsurface oceans on some of the moons of the giant planets.
Investigation of the tenacity and versatility of life on Earth, as well as an understanding of the molecular systems that some organisms utilise to survive such extremes, is important for the search for extraterrestrial life. For example, lichen could survive for a month in a simulated Martian environment.
Beyond the Solar System, the region around another main-sequence star that could support Earth-like life on an Earth-like planet is known as the habitable zone. The inner and outer radii of this zone vary with the luminosity of the star, as does the time interval during which the zone survives. Stars more massive than the Sun have a larger habitable zone, but remain on the Sun-like "main sequence" of stellar evolution for a shorter time interval. Small red dwarfs have the opposite problem, with a smaller habitable zone that is subject to higher levels of magnetic activity and the effects of tidal locking from close orbits. Hence, stars in the intermediate mass range such as the Sun may have a greater likelihood for Earth-like life to develop. The location of the star within a galaxy may also affect the likelihood of life forming. Stars in regions with a greater abundance of heavier elements that can form planets, in combination with a low rate of potentially habitat-damaging supernova events, are predicted to have a higher probability of hosting planets with complex life. The variables of the Drake equation are used to discuss the conditions in planetary systems where civilisation is most likely to exist, within wide bounds of uncertainty. A "Confidence of Life Detection" scale (CoLD) for reporting evidence of life beyond Earth has been proposed.
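For reference, the Drake equation mentioned above is conventionally written as the product below; the symbol glosses follow the standard formulation rather than anything specific to this article, and the habitable-zone scaling that follows it is only a rough first-order approximation.

    N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L

Here N is the expected number of detectable civilisations in the galaxy, R_* the average rate of star formation, f_p the fraction of stars with planets, n_e the mean number of potentially habitable planets per star with planets, f_l the fraction of those on which life appears, f_i the fraction of those that develop intelligence, f_c the fraction of those that release detectable signals, and L the average time over which such signals are released. As a crude estimate, the distance at which a planet receives Earth-equivalent stellar flux, often taken as a proxy for the middle of the habitable zone, scales as

    d \approx \sqrt{L_{*}/L_{\odot}}\ \text{AU}

where L_* is the star's luminosity and L_\odot that of the Sun, which is one way of seeing why the zone's inner and outer radii vary with luminosity.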
Artificial
Artificial life is the simulation of any aspect of life, as through computers, robotics, or biochemistry. Synthetic biology is a new area of biotechnology that combines science and biological engineering. The common goal is the design and construction of new biological functions and systems not found in nature. Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health and the environment.
See also
Biology, the study of life
Biosignature
Carbon-based life
Central dogma of molecular biology
History of life
Lists of organisms by population
Viable system theory
Notes
References
External links
Vitae (BioLib)
Wikispecies – a free directory of life
Biota (Taxonomicon) (archived 15 July 2014)
Entry on the Stanford Encyclopedia of Philosophy
What Is Life? – by Jaime Green, The Atlantic (archived 5 December 2023)
Exercise

Exercise is physical activity that enhances or maintains fitness and overall health. It is performed for various reasons, including weight loss or maintenance, to aid growth and improve strength, develop muscles and the cardiovascular system, hone athletic skills, improve health, or simply for enjoyment. Many individuals choose to exercise outdoors where they can congregate in groups, socialize, and improve well-being as well as mental health.
In terms of health benefits, 2.5 hours of moderate-intensity exercise per week is usually recommended to reduce the risk of health problems. At the same time, even a small amount of exercise is healthier than none: doing just an hour and a quarter of exercise per week (about 11 minutes a day) can reduce the risk of early death, cardiovascular disease, stroke, and cancer.
Classification
Physical exercises are generally grouped into three types, depending on the overall effect they have on the human body:
Aerobic exercise is any physical activity that uses large muscle groups and causes the body to use more oxygen than it would while resting. The goal of aerobic exercise is to increase cardiovascular endurance. Examples of aerobic exercise include running, cycling, swimming, brisk walking, skipping rope, rowing, hiking, dancing, playing tennis, continuous training, and long distance running.
Anaerobic exercise, which includes strength and resistance training, can firm, strengthen, and increase muscle mass, as well as improve bone density, balance, and coordination. Examples of strength exercises are push-ups, pull-ups, lunges, squats, and the bench press. Anaerobic exercise also includes weight training, functional training, eccentric training, interval training, sprinting, and high-intensity interval training, which increase short-term muscle strength.
Flexibility exercises stretch and lengthen muscles. Activities such as stretching help to improve joint flexibility and keep muscles limber. The goal is to improve the range of motion which can reduce the chance of injury.
Physical exercise can also include training that focuses on accuracy, agility, power, and speed.
Types of exercise can also be classified as dynamic or static. 'Dynamic' exercises such as steady running, tend to produce a lowering of the diastolic blood pressure during exercise, due to the improved blood flow. Conversely, static exercise (such as weight-lifting) can cause the systolic pressure to rise significantly, albeit transiently, during the performance of the exercise.
Health effects
Physical exercise is important for maintaining physical fitness and can contribute to maintaining a healthy weight, regulating the digestive system, building and maintaining healthy bone density, muscle strength, and joint mobility, promoting physiological well-being, reducing surgical risks, and strengthening the immune system. Some studies indicate that exercise may increase life expectancy and the overall quality of life. People who participate in moderate to high levels of physical exercise have a lower mortality rate compared to individuals who by comparison are not physically active. Moderate levels of exercise have been correlated with preventing aging by reducing inflammatory potential. The majority of the benefits from exercise are achieved with around 3500 metabolic equivalent (MET) minutes per week, with diminishing returns at higher levels of activity. For example, climbing stairs 10 minutes, vacuuming 15 minutes, gardening 20 minutes, running 20 minutes, and walking or bicycling for transportation 25 minutes on a daily basis would together achieve about 3000 MET minutes a week. A lack of physical activity causes approximately 6% of the burden of disease from coronary heart disease, 7% of type 2 diabetes, 10% of breast cancer, and 10% of colon cancer worldwide. Overall, physical inactivity causes 9% of premature mortality worldwide.
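A minimal sketch of the MET-minute arithmetic behind this example is given below, in Python; the MET intensity assigned to each activity is an illustrative assumption roughly in line with typical published values, not a figure taken from this article, so the computed total is only approximate.

    # Weekly MET-minutes = sum over activities of (MET intensity x minutes per day x 7 days).
    # The MET values below are illustrative assumptions, not authoritative figures.
    ASSUMED_METS = {
        "climbing stairs": 4.0,
        "vacuuming": 3.3,
        "gardening": 3.8,
        "running": 7.0,
        "walking or bicycling for transport": 4.0,
    }

    MINUTES_PER_DAY = {
        "climbing stairs": 10,
        "vacuuming": 15,
        "gardening": 20,
        "running": 20,
        "walking or bicycling for transport": 25,
    }

    weekly_met_minutes = sum(
        ASSUMED_METS[activity] * minutes * 7
        for activity, minutes in MINUTES_PER_DAY.items()
    )
    print(round(weekly_met_minutes))  # about 2,800 to 3,000, consistent with the example above

With these assumed intensities, the daily routine described in the text comes out close to 3,000 MET minutes a week, below the roughly 3,500 MET minutes at which most of the benefit is achieved.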
The American-British writer Bill Bryson wrote: "If someone invented a pill that could do for us all that a moderate amount of exercise achieves, it would instantly become the most successful drug in history."
Fitness
Most people can increase fitness by increasing physical activity levels. Increases in muscle size from resistance training are primarily determined by diet and testosterone, and individuals vary considerably in how much they improve from the same training. This genetic variation in improvement from training is one of the key physiological differences between elite athletes and the larger population. There is evidence that exercising in middle age may lead to better physical ability later in life.
Early motor skills and development is also related to physical activity and performance later in life. Children who are more proficient with motor skills early on are more inclined to be physically active, and thus tend to perform well in sports and have better fitness levels. Early motor proficiency has a positive correlation to childhood physical activity and fitness levels, while less proficiency in motor skills results in a more sedentary lifestyle.
The type and intensity of physical activity performed may have an effect on a person's fitness level. There is some weak evidence that high-intensity interval training may improve a person's VO2 max slightly more than lower intensity endurance training. However, unscientific fitness methods could lead to sports injuries.
Cardiovascular system
The beneficial effect of exercise on the cardiovascular system is well documented. There is a direct correlation between physical inactivity and cardiovascular disease, and physical inactivity is an independent risk factor for the development of coronary artery disease. Low levels of physical exercise increase the risk of cardiovascular diseases mortality.
Children who participate in physical exercise experience greater loss of body fat and increased cardiovascular fitness. Studies have shown that academic stress in youth increases the risk of cardiovascular disease in later years; however, these risks can be greatly decreased with regular physical exercise.
There is a dose-response relationship between the amount of exercise performed, measured as energy expenditure per week, and both all-cause mortality and cardiovascular disease mortality in middle-aged and elderly men. The greatest potential for reduced mortality is seen in sedentary individuals who become moderately active.
Studies have shown that since heart disease is the leading cause of death in women, regular exercise in aging women leads to healthier cardiovascular profiles.
The most beneficial effects of physical activity on cardiovascular disease mortality can be attained through moderate-intensity activity (40–60% of maximal oxygen uptake, depending on age). After a myocardial infarction, survivors who changed their lifestyle to include regular exercise had higher survival rates. Sedentary people are most at risk for mortality from cardiovascular and all other causes. According to the American Heart Association, exercise reduces the risk of cardiovascular diseases, including heart attack and stroke.
Some have suggested that increases in physical exercise might decrease healthcare costs, increase the rate of job attendance, and increase the amount of effort women put into their jobs.
Immune system
Although there have been hundreds of studies on physical exercise and the immune system, there is little direct evidence on its connection to illness. Epidemiological evidence suggests that moderate exercise has a beneficial effect on the human immune system; an effect which is modeled in a J curve. Moderate exercise has been associated with a 29% decreased incidence of upper respiratory tract infections (URTI), but studies of marathon runners found that their prolonged high-intensity exercise was associated with an increased risk of infection occurrence. However, another study did not find the effect. Immune cell functions are impaired following acute sessions of prolonged, high-intensity exercise, and some studies have found that athletes are at a higher risk for infections. Studies have shown that strenuous stress for long durations, such as training for a marathon, can suppress the immune system by decreasing the concentration of lymphocytes. The immune systems of athletes and nonathletes are generally similar. Athletes may have a slightly elevated natural killer cell count and cytolytic action, but these are unlikely to be clinically significant.
Vitamin C supplementation has been associated with a lower incidence of upper respiratory tract infections in marathon runners.
Biomarkers of inflammation such as C-reactive protein, which are associated with chronic diseases, are reduced in active individuals relative to sedentary individuals, and the positive effects of exercise may be due to its anti-inflammatory effects. In individuals with heart disease, exercise interventions lower blood levels of fibrinogen and C-reactive protein, an important cardiovascular risk marker. The depression in the immune system following acute bouts of exercise may be one of the mechanisms for this anti-inflammatory effect.
Cancer
A systematic review evaluated 45 studies that examined the relationship between physical activity and cancer survival rates. According to the review, "[there] was consistent evidence from 27 observational studies that physical activity is associated with reduced all-cause, breast cancer–specific, and colon cancer–specific mortality. There is currently insufficient evidence regarding the association between physical activity and mortality for survivors of other cancers." Evidence suggests that exercise may positively affect the quality of life in cancer survivors, including factors such as anxiety, self-esteem and emotional well-being. For people with cancer undergoing active treatment, exercise may also have positive effects on health-related quality of life, such as fatigue and physical functioning. This is likely to be more pronounced with higher intensity exercise.
Exercise may contribute to a reduction of cancer-related fatigue in survivors of breast cancer. Although there is only limited scientific evidence on the subject, people with cancer cachexia are encouraged to engage in physical exercise. Due to various factors, some individuals with cancer cachexia have a limited capacity for physical exercise. Compliance with prescribed exercise is low in individuals with cachexia and clinical trials of exercise in this population often have high drop-out rates.
There is low-quality evidence for an effect of aerobic physical exercises on anxiety and serious adverse events in adults with hematological malignancies. Aerobic physical exercise may result in little to no difference in the mortality, quality of life, or physical functioning. These exercises may result in a slight reduction in depression and reduction in fatigue.
Neurobiological
Depression
Continuous aerobic exercise can induce a transient state of euphoria, colloquially known as a "runner's high" in distance running or a "rower's high" in crew, through the increased biosynthesis of at least three euphoriant neurochemicals: anandamide (an endocannabinoid), β-endorphin (an endogenous opioid), and phenethylamine (a trace amine and amphetamine analog).
Sleep
Preliminary evidence from a 2012 review indicated that physical training for up to four months may increase sleep quality in adults over 40 years of age. A 2010 review suggested that exercise generally improved sleep for most people, and may help with insomnia, but there is insufficient evidence to draw detailed conclusions about the relationship between exercise and sleep. A 2018 systematic review and meta-analysis suggested that exercise can improve sleep quality in people with insomnia.
Libido
One 2013 study found that exercising improved sexual arousal problems related to antidepressant use.
Respiratory system
People who participate in physical exercise experience increased cardiovascular fitness.
There is some level of concern about additional exposure to air pollution when exercising outdoors, especially near traffic.
Mechanism of effects
Skeletal muscle
Resistance training and subsequent consumption of a protein-rich meal promote muscle hypertrophy and gains in muscle strength by stimulating myofibrillar muscle protein synthesis (MPS) and inhibiting muscle protein breakdown (MPB). The stimulation of muscle protein synthesis by resistance training occurs via phosphorylation of the mechanistic target of rapamycin (mTOR) and subsequent activation of mTORC1, which leads to protein biosynthesis in cellular ribosomes via phosphorylation of mTORC1's immediate targets (the p70S6 kinase and the translation repressor protein 4EBP1). The suppression of muscle protein breakdown following food consumption occurs primarily via increases in plasma insulin. Similarly, increased muscle protein synthesis (via activation of mTORC1) and suppressed muscle protein breakdown (via insulin-independent mechanisms) have also been shown to occur following ingestion of β-hydroxy β-methylbutyric acid.
Aerobic exercise induces mitochondrial biogenesis and an increased capacity for oxidative phosphorylation in the mitochondria of skeletal muscle, which is one mechanism by which aerobic exercise enhances submaximal endurance performance. These effects occur via an exercise-induced increase in the intracellular AMP:ATP ratio, thereby triggering the activation of AMP-activated protein kinase (AMPK) which subsequently phosphorylates peroxisome proliferator-activated receptor gamma coactivator-1α (PGC-1α), the master regulator of mitochondrial biogenesis.
Other peripheral organs
Developing research has demonstrated that many of the benefits of exercise are mediated through the role of skeletal muscle as an endocrine organ. That is, contracting muscles release multiple substances known as myokines which promote the growth of new tissue, tissue repair, and multiple anti-inflammatory functions, which in turn reduce the risk of developing various inflammatory diseases. Exercise reduces levels of cortisol, which causes many health problems, both physical and mental. Endurance exercise before meals lowers blood glucose more than the same exercise after meals. There is evidence that vigorous exercise (90–95% of VO2 max) induces a greater degree of physiological cardiac hypertrophy than moderate exercise (40 to 70% of VO2 max), but it is unknown whether this has any effects on overall morbidity and/or mortality. Both aerobic and anaerobic exercise work to increase the mechanical efficiency of the heart by increasing cardiac volume (aerobic exercise), or myocardial thickness (strength training). Ventricular hypertrophy, the thickening of the ventricular walls, is generally beneficial and healthy if it occurs in response to exercise.
Central nervous system
The effects of physical exercise on the central nervous system may be mediated in part by specific neurotrophic factor hormones released into the blood by muscles, including BDNF, IGF-1, and VEGF.
Public health measures
Community-wide and school campaigns are often used in an attempt to increase a population's level of physical activity. Studies to determine the effectiveness of these types of programs need to be interpreted cautiously as the results vary. There is some evidence that certain types of exercise programmes for older adults, such as those involving gait, balance, co-ordination and functional tasks, can improve balance. Following progressive resistance training, older adults also respond with improved physical function. Brief interventions promoting physical activity may be cost-effective; however, this evidence is weak and there are variations between studies.
Environmental approaches appear promising: signs that encourage the use of stairs, as well as community campaigns, may increase exercise levels. The city of Bogotá, Colombia, for example, blocks off roads on Sundays and holidays to make it easier for its citizens to get exercise. Such pedestrian zones are part of an effort to combat chronic diseases and to maintain a healthy BMI.
Parents can promote physical activity by modelling healthy levels of physical activity or by encouraging physical activity. According to the Centers for Disease Control and Prevention in the United States, children and adolescents should do 60 minutes or more of physical activity each day. Implementing physical exercise in the school system and ensuring an environment that reduces barriers for children to maintain a healthy lifestyle is essential.
The European Commission's Directorate-General for Education and Culture (DG EAC) has dedicated programs and funds for Health Enhancing Physical Activity (HEPA) projects within its Horizon 2020 and Erasmus+ program, as research showed that too many Europeans are not physically active enough. Financing is available for increased collaboration between players active in this field across the EU and around the world, the promotion of HEPA in the EU and its partner countries, and the European Sports Week. The DG EAC regularly publishes a Eurobarometer on sport and physical activity.
Exercise trends
Worldwide there has been a large shift toward less physically demanding work. This has been accompanied by increasing use of mechanized transportation, a greater prevalence of labor-saving technology in the home, and fewer active recreational pursuits. Personal lifestyle changes, however, can correct the lack of physical exercise.
Research published in 2015 suggests that incorporating mindfulness into physical exercise interventions increases exercise adherence and self-efficacy, and also has positive effects both psychologically and physiologically.
Social and cultural variation
Exercising looks different in every country, as do the motivations behind exercising. In some countries, people exercise primarily indoors (such as at home or health clubs), while in others, people primarily exercise outdoors. People may exercise for personal enjoyment, health and well-being, social interactions, competition or training, etc. These differences could potentially be attributed to a variety of reasons including geographic location and social tendencies.
In Colombia, for example, citizens value and celebrate the outdoor environments of their country. In many instances, they use outdoor activities as social gatherings to enjoy nature and their communities. In Bogotá, Colombia, a 70-mile stretch of road known as the Ciclovía is shut down each Sunday for bicyclists, runners, rollerbladers, skateboarders and other exercisers to work out and enjoy their surroundings.
Similarly to Colombia, citizens of Cambodia tend to exercise socially outside. In this country, public gyms have become quite popular. People will congregate at these outdoor gyms not only to use the public facilities, but also to organize aerobics and dance sessions, which are open to the public.
Sweden has also begun developing outdoor gyms, called utegym. These gyms are free to the public and are often placed in beautiful, picturesque environments. People will swim in rivers, use boats, and run through forests to stay healthy and enjoy the natural world around them. This works particularly well in Sweden due to its geographical location.
Exercise in some areas of China, particularly among those who are retired, seems to be socially grounded. In the mornings, square dances are held in public parks; these gatherings may include Latin dancing, ballroom dancing, tango, or even the jitterbug. Dancing in public allows people to interact with those with whom they would not normally interact, allowing for both health and social benefits.
These sociocultural variations in physical exercise show how people in different geographic locations and social climates have varying motivations and methods of exercising. Physical exercise can improve health and well-being, as well as enhance community ties and appreciation of natural beauty.
Nutrition and recovery
Proper nutrition is as important to health as exercise. When exercising, it becomes even more important to have a good diet to ensure that the body has the correct ratio of macronutrients while providing ample micronutrients, to aid the body with the recovery process following strenuous exercise.
Active recovery is recommended after participating in physical exercise because it removes lactate from the blood more quickly than inactive recovery. Removing lactate from circulation allows for an easy decline in body temperature, which can also benefit the immune system, as an individual may be vulnerable to minor illnesses if the body temperature drops too abruptly after physical exercise. Exercise physiologists recommend the "4-Rs framework":
Rehydration: Replacing any fluid and electrolyte deficits
Refuel: Consuming carbohydrates to replenish muscle and liver glycogen
Repair: Consuming high-quality protein sources with additional supplementation of creatine monohydrate
Rest: Getting long and high-quality sleep after exercise, additionally improved by consuming casein proteins, antioxidant-rich fruits, and high-glycemic-index meals
Exercise has an effect on appetite, but whether it increases or decreases appetite varies from individual to individual, and is affected by the intensity and duration of the exercise.
Excessive exercise
History
The benefits of exercise have been known since antiquity. Dating back to 65 BCE, it was Marcus Cicero, Roman politician and lawyer, who stated: "It is exercise alone that supports the spirits, and keeps the mind in vigor." Exercise was also seen to be valued later in history during the Early Middle Ages as a means of survival by the Germanic peoples of Northern Europe.
More recently, exercise was regarded as a beneficial force in the 19th century. In 1858, Archibald MacLaren opened a gymnasium at the University of Oxford and instituted a training regimen for Major Frederick Hammersley and 12 non-commissioned officers. This regimen was assimilated into the training of the British Army, which formed the Army Gymnastic Staff in 1860 and made sport an important part of military life. Several mass exercise movements were started in the early twentieth century as well. The first and most significant of these in the UK was the Women's League of Health and Beauty, founded in 1930 by Mary Bagot Stack, that had 166,000 members in 1937.
The link between physical health and exercise (or lack of it) was further established in 1949 and reported in 1953 by a team led by Jerry Morris. Morris noted that men of similar social class and occupation (bus conductors versus bus drivers) had markedly different rates of heart attacks, depending on the level of exercise they got: bus drivers had a sedentary occupation and a higher incidence of heart disease, while bus conductors were forced to move continually and had a lower incidence of heart disease.
Other animals
Animals such as chimpanzees, orangutans, gorillas and bonobos, which are closely related to humans, engage in considerably less physical activity than is required for human health without apparent ill effect, raising the question of how this is biochemically possible.
Studies of animals indicate that physical activity may be more adaptable than changes in food intake as a means of regulating energy balance.
Mice with access to activity wheels engaged in voluntary exercise and increased their propensity to run as adults. Artificial selection experiments in mice demonstrated significant heritability of voluntary exercise levels, with "high-runner" breeds having enhanced aerobic capacity, hippocampal neurogenesis, and skeletal muscle morphology.
The effects of exercise training appear to be heterogeneous across non-mammalian species. As examples, exercise training of salmon showed minor improvements of endurance, and a forced swimming regimen of yellowtail amberjack and rainbow trout accelerated their growth rates and altered muscle morphology favorable for sustained swimming. Crocodiles, alligators, and ducks showed elevated aerobic capacity following exercise training. No effect of endurance training was found in most studies of lizards, although one study did report a training effect. In lizards, sprint training had no effect on maximal exercise capacity, and muscular damage from over-training occurred following weeks of forced treadmill exercise.
See also
Active living
Behavioural change theories
Bodybuilding
Exercise hypertension
Exercise intensity
Exercise intolerance
Exercise-induced anaphylaxis
Exercise-induced asthma
Exercise-induced nausea
Kinesiology
Metabolic equivalent
Neurobiological effects of physical exercise
Non-exercise associated thermogenesis
Supercompensation
Unilateral training
Warming up
References
External links
Adult Compendium of Physical Activities – a website containing lists of Metabolic Equivalent of Task (MET) values for a number of physical activities, based upon
MedLinePlus Topic on Exercise and Physical Fitness
Physical activity and the environment – guidance on the promotion and creation of physical environments that support increased levels of physical activity.
Science Daily's reference on physical exercise
Hygiene
Hygiene is a set of practices performed to preserve health.
According to the World Health Organization (WHO), "Hygiene refers to conditions and practices that help to maintain health and prevent the spread of diseases." Personal hygiene refers to maintaining the body's cleanliness. Hygiene activities can be grouped into the following: home and everyday hygiene, personal hygiene, medical hygiene, sleep hygiene, and food hygiene. Home and everyday hygiene includes hand washing, respiratory hygiene, food hygiene at home, hygiene in the kitchen, hygiene in the bathroom, laundry hygiene, and medical hygiene at home, as well as environmental hygiene in the wider community to limit the spread of pathogens into the home.
Many people equate hygiene with "cleanliness", but hygiene is a broad term. It includes such personal habit choices as how frequently to take a shower or bath, wash hands, trim fingernails, and wash clothes. It also includes attention to keeping surfaces in the home and workplace clean, including bathroom facilities. Adherence to regular hygiene practices is often regarded as a socially responsible and respectable behavior, while neglecting proper hygiene can be perceived as unclean or unsanitary and may be considered socially unacceptable or disrespectful, as well as posing a risk to public health.
Definition and overview
Hygiene is a practice related to lifestyle, cleanliness, health, and medicine. In medicine and everyday life, hygiene practices are preventive measures that reduce the incidence and spread of germs leading to disease.
Hygiene practices vary from one culture to another.
In the manufacturing of food, pharmaceuticals, cosmetics, and other products, good hygiene is a critical component of quality assurance.
The terms cleanliness and hygiene are often used interchangeably, which can cause confusion. In general, hygiene refers to practices that prevent spread of disease-causing organisms. Cleaning processes (e.g., handwashing) remove infectious microbes as well as dirt and soil, and are thus often the means to achieve hygiene.
Other uses of the term are as follows: body hygiene, personal hygiene, sleep hygiene, mental hygiene, dental hygiene, and occupational hygiene, used in connection with public health.
Home hygiene overview
Home hygiene pertains to the hygiene practices that prevent or minimize the spread of disease at home and other everyday settings such as social settings, public transport, the workplace, public places, and more. Hygiene in a variety of settings plays an important role in preventing the spread of infectious diseases. It includes procedures like hand hygiene, respiratory hygiene, food and water hygiene, general home hygiene (hygiene of environmental sites and surfaces), care of domestic animals, and home health care (the care of those who are at greater risk of infection).
At present, these components of hygiene tend to be regarded as separate issues, although based on the same underlying microbiological principles. Preventing the spread of diseases means breaking the chain of infection transmission so that infection cannot spread. "Targeted hygiene" is based on identifying the routes of pathogen spread in the home and introducing hygiene practices at critical times to break the chain of infection. It uses a risk-based approach based on Hazard Analysis Critical Control Point (HACCP).
The main sources of infection in the home are people (who are carriers or are infected), foods (particularly raw foods), water, pets, and domestic animals. Sites that accumulate stagnant water – such as sinks, toilets, waste pipes, cleaning tools, and face cloths – readily support microbial growth and can become secondary reservoirs of infection, though species are mostly those that threaten "at risk" groups. Pathogens (such as potentially infectious bacteria and viruses – colloquially called "germs") are constantly shed via mucous membranes, feces, vomit, skin scales, and other means. When circumstances combine, people are exposed, either directly or via food or water, and can develop an infection.
The main "highways" for the spread of pathogens in the home are the hands, hand and food contact surfaces, and cleaning cloths and utensils (e.g. fecal–oral route of transmission). Pathogens can also be spread via clothing and household linens, such as towels. Utilities such as toilets and wash basins were invented to deal safely with human waste but still have risks associated with them. Safe disposal of human waste is a fundamental need; poor sanitation is a primary cause of diarrhea disease in low-income communities. Respiratory viruses and fungal spores spread via the air.
Good home hygiene means engaging in hygiene practices at critical points to break the chain of infection. Because the "infectious dose" for some pathogens can be very small (10–100 viable units or even less for some viruses), and infection can result from direct transfer of pathogens from surfaces via hands or food to the mouth, nasal mucous, or the eye, "hygienic cleaning" procedures should be adopted to eliminate pathogens from critical surfaces.
Hand washing
Respiratory hygiene
Correct respiratory and hand hygiene when coughing and sneezing reduces the spread of pathogens particularly during the cold and flu season:
Carry tissues and use them to catch coughs and sneezes, or sneeze into your elbow.
Dispose of tissues as soon as possible.
Hygiene in the kitchen, bathroom and toilet
Routine cleaning of hands, food, sites, and surfaces (such as toilet seats and flush handles, door and tap handles, work surfaces, and bath and basin surfaces) in the kitchen, bathroom, and toilet rooms reduces the spread of pathogens. The infection risk from flush toilets is not high, provided they are properly maintained, although some splashing and aerosol formation can occur during flushing, particularly when someone has diarrhea. Pathogens can survive in the scum or scale left behind on baths, showers, and washbasins after washing and bathing.
Thorough cleaning is important to prevent the spread of fungal infections. Molds can live on wall and floor tiles and on shower curtains. Mold can be responsible for infections, cause allergic reactions, deteriorate/damage surfaces, and cause unpleasant odors. Primary sites of fungal growth are inanimate surfaces, including carpets and soft furnishings. Airborne fungi are usually associated with damp conditions, poor ventilation, or closed air systems.
Hygienic cleaning can be done through:
Mechanical removal (i.e., cleaning) using a soap or detergent. To be effective as a hygiene measure, this process must be followed by thorough rinsing under running water to remove pathogens from the surface.
Using a process or product that inactivates the pathogens in situ. Pathogen kill is achieved using a "micro-biocidal" product, i.e., a disinfectant or antibacterial product; waterless hand sanitizer; or by application of heat.
In some cases, combined pathogen removal with kill is used, e.g., laundering of clothing and household linens such as towels and bed linen.
House deep-cleaning is an intensive cleaning process targeting often-neglected areas, enhancing aesthetics, and improving health by reducing allergens and bacteria. It typically includes tasks like detailed dusting, appliance cleaning, and carpet shampooing, recommended biannually to maintain a home's hygiene and air quality.
Laundry hygiene
Laundry hygiene involves practices that prevent disease and its spread via soiled clothing and household linens such as towels. Items most likely to be contaminated with pathogens are those that come into direct contact with the body, e.g., underwear, personal towels, facecloths, nappies. Cloths or other fabric items used during food preparation, or for cleaning the toilet or cleaning up material such as feces or vomit are a particular risk.
Microbiological and epidemiological data indicates that clothing and household linens are a risk factor for infection transmission in home and everyday life settings as well as institutional settings. The lack of quantitative data linking contaminated clothing to infection in the domestic setting makes it difficult to assess the extent of this risk. This also indicates that risks from clothing and household linens are somewhat less than those associated with hands, hand contact and food contact surfaces, and cleaning cloths, but even so these risks need to be managed through effective laundering practices. In the home, this should be carried out as part of a multibarrier approach to hygiene which includes hand, food, respiratory, and other hygiene practices.
Infectious disease risks from contaminated clothing can increase significantly under certain conditions - for example, in healthcare situations in hospitals, care homes, and the domestic setting where someone has diarrhoea, vomiting, or a skin or wound infection. The risk increases in circumstances where someone has reduced immunity to infection.
Hygiene measures, including laundry hygiene, are an important part of reducing spread of antibiotic-resistant strains of infectious organisms. In the community, otherwise-healthy people can become persistent skin carriers of MRSA, or faecal carriers of enterobacteria strains which can carry multi-antibiotic resistance factors (e.g. NDM-1 or ESBL-producing strains). The risks are not apparent until, for example, they are admitted to hospital, when they can become "self infected" with their own resistant organisms following a surgical procedure. As persistent nasal, skin, or bowel carriage in the healthy population spreads "silently" across the world, the risks from resistant strains in both hospitals and the community increases. In particular the data indicates that clothing and household linens are a risk factor for spread of S. aureus (including MRSA and PVL-producing MRSA strains), and that effectiveness of laundry processes may be an important factor in defining the rate of community spread of these strains. Experience in the United States suggests that these strains are transmissible within families and in community settings such as prisons, schools, and sport teams. Skin-to-skin contact (including unabraded skin) and indirect contact with contaminated objects such as towels, sheets, and sports equipment seem to represent the mode of transmission.
During laundering, temperature and detergent work to reduce microbial contamination levels on fabrics. Soil and microbes from fabrics are detached and suspended in the wash water. These are then "washed away" during the rinse and spin cycles. In addition to physical removal, micro-organisms can be killed by thermal inactivation, which increases as the temperature is increased. Chemical inactivation of microbes by the surfactants and activated oxygen-based bleach used in detergents contributes to the hygiene effectiveness of laundering. Adding hypochlorite bleach in the washing process achieves inactivation of microbes. A number of other factors can contribute, including drying and ironing.
Drying laundry on a line in direct sunlight is known to reduce pathogens.
In 2013, the International Scientific Forum on Home Hygiene reviewed 30 studies of the hygiene effectiveness of laundering at temperatures ranging from room temperature to , under varying conditions. A key finding was the lack of standardization and control within studies, and the variability in test conditions between studies such as wash cycle time, number of rinses, and other factors. The consequent variability in the data (i.e., the reduction in contamination on fabrics) in turn makes it extremely difficult to propose guidelines for laundering with any confidence. As a result, there is significant variability in the recommendations for hygienic laundering given by different agencies.
Medical hygiene at home
Medical hygiene pertains to hygiene practices that prevent or minimize disease and the spreading of disease in relation to administering medical care to those who are infected or who are more at risk of infection in the home. Members of "at-risk" groups are cared for at home by a carer who may be a household member and who requires a good knowledge of hygiene. People with reduced immunity to infection, who are looked after at home, make up an increasing proportion of the population (up to 20%). The largest proportion are the elderly who have co-morbidities that reduce their immunity to infection. It also includes the very young, patients discharged from hospital, those taking immuno-suppressive drugs, and those using invasive systems. For patients discharged from hospital, or being treated at home, special "medical hygiene" procedures may need to be performed for them, such as catheter or dressing replacement, which puts them at higher risk of infection.
Antiseptics may be applied to cuts, wounds, and abrasions of the skin to prevent the entry of harmful bacteria that can cause sepsis. Day-to-day hygiene practices, other than special medical hygiene procedures, are no different for those at increased risk of infection than for other family members. The difference is that, if hygiene practices are not correctly carried out, the risk of infection is much greater.
Disinfectants and antibacterials in home hygiene
Chemical disinfectants are products that kill pathogens. If the product is a disinfectant, the label on the product should say "disinfectant" or "kills" pathogens. Some commercial products, e.g. bleaches, even though they are technically disinfectants, say that they "kill pathogens" but are not actually labelled as "disinfectants". Not all disinfectants kill all types of pathogens. All disinfectants kill bacteria (called bactericidal). Some also kill fungi (fungicidal), bacterial spores (sporicidal), or viruses (virucidal).
An antibacterial product acts against bacteria in some unspecified way. Some products labelled "antibacterial" kill bacteria while others may contain a concentration of active ingredient that only prevents them from multiplying. It is, therefore, important to check whether the product label states that it "kills bacteria". An antibacterial is not necessarily anti-fungal or anti-viral unless this is stated on the label.
The term sanitizer has been used to define substances that both clean and disinfect. More recently this term has been applied to alcohol-based products that disinfect the hands (alcohol hand sanitizers). Alcohol hand sanitizers, however, are not considered to be effective on soiled hands.
The term biocide is a broad term for a substance that kills, inactivates or otherwise controls living organisms. It includes antiseptics and disinfectants, which combat micro-organisms, and pesticides.
Personal hygiene
Regular activities
Personal hygiene involves those practices performed by a person to care for their bodily health and well-being through cleanliness. Motivations for personal hygiene practice include reduction of personal illness, healing from illness, optimal health and sense of wellbeing, social acceptance, and prevention of spread of illness to others. What is considered proper personal hygiene can be culture-specific and may change over time.
Practices that are generally considered proper hygiene include showering or bathing regularly, washing hands regularly and especially before handling food, face washing, washing scalp hair, keeping hair short or removing hair, wearing clean clothing, brushing teeth, and trimming fingernails and toenails. Some practices are sex-specific, such as those performed by women during menstruation.
Toiletry bags hold body hygiene and toiletry supplies.
Anal hygiene is the practice that a person performs on their anal area after defecation. The anus and buttocks may be either washed with liquids or wiped with toilet paper, or by adding gel wipe to toilet tissue as an alternative to wet wipes or other solid materials in order to remove remnants of feces.
People tend to develop a routine for attending to their personal hygiene needs. Other personal hygienic practices include covering one's mouth when coughing, disposal of soiled tissues appropriately, making sure toilets are clean, and making sure food handling areas are clean, besides other practices. Some cultures do not kiss or shake hands in order to reduce transmission of bacteria by contact.
Personal grooming extends personal hygiene as it pertains to the maintenance of a good personal and public appearance, which need not necessarily be hygienic. It may involve, for example, using deodorants or perfume, shaving, or combing.
Hygiene of internal ear canals
Excessive cleaning of the ear canals can result in infection or irritation. The ear canals require less care than other parts of the body because they are sensitive and mostly self-cleaning. There is a slow and orderly migration of the skin lining the ear canal from the eardrum to the outer opening of the ear. Old earwax is constantly being transported from the deeper areas of the ear canal out to the opening where it usually dries, flakes, and falls out. Attempts to clean the ear canals through the removal of earwax can push debris and foreign material into the ear that the natural movement of ear wax out of the ear would have removed.
Oral hygiene
It is recommended that all healthy adults brush twice a day, softly, with the correct technique, replacing their toothbrush approximately every three months.
There are a number of common oral hygiene misconceptions. The National Health Service (NHS) of England recommends not rinsing the mouth with water after brushing – only to spit out excess toothpaste. They claim that this helps fluoride from toothpaste bond to teeth for its preventative effects against tooth decay. It is also not recommended to brush immediately after drinking acidic substances, including sparkling water. It is also recommended to floss once a day, with a different piece of floss at each flossing session. The effectiveness of amorphous calcium phosphate products, such as Tooth Mousse, is in debate. Visits to a dentist for a checkup every year at least are recommended.
Sleep hygiene
Sleep hygiene is the recommended behavioral and environmental practices that promote better quality sleep. These recommendations were developed in the late 1970s as a method to help people with mild to moderate insomnia, but the evidence for effectiveness of individual recommendations is "limited and inconclusive". Clinicians assess the sleep hygiene of people who present with insomnia and other conditions, such as depression, and offer recommendations based on the assessment. Sleep hygiene recommendations include establishing a regular sleep schedule, using naps with care, not exercising physically or mentally too close to bedtime, and avoiding alcohol as well as nicotine, caffeine, and other stimulants in the hours before bedtime. Further recommendations include limiting worry, limiting exposure to light in the hours before sleep, getting out of bed if sleep does not come, not using the bed for anything but sleep, and having a peaceful, comfortable, and dark sleep environment.
Personal care services hygiene
Personal care services hygiene pertains to the care and use of instruments used in the administration of personal care services to people:
Personal care hygiene practices include:
sterilization of instruments used by service providers including hairdressers, aestheticians, and other service providers
sterilization by autoclave of instruments used in body piercing and tattooing
cleaning hands
Challenges
Excessive body hygiene is a possible sign of obsessive–compulsive disorder. Neglecting bodily hygiene, or the cleanliness of one's environment, may be a sign of major depression and other psychological disorders.
Hygiene hypothesis and allergies
Although media coverage of the hygiene hypothesis has declined, popular folklore continues to sometimes assert that dirt is healthy and hygiene unnatural. This has caused health professionals to be concerned that hygiene behaviors which are the foundation of public health are being undermined. In response to the need for effective hygiene in home and everyday life settings, the International Scientific Forum on Home Hygiene developed a "risk-based" or targeted approach to home hygiene that seeks to ensure that hygiene measures are focused on the places and times most critical for infection transmission. While targeted hygiene was originally developed as an effective approach to hygiene practice, it also seeks, as far as possible, to sustain "normal" levels of exposure to the microbial flora of our environment to the extent that is important to build a balanced immune system.
Although there is substantial evidence that some microbial exposures in early childhood can in some way protect against allergies, there is no evidence that humans need exposure to harmful microbes (infection) or that it is necessary to develop a clinical infection. Nor is there evidence that hygiene measures such as hand washing, food hygiene, etc., are linked to increased susceptibility to atopic disease. If this is the case, there is no conflict between the goals of preventing infection and minimizing allergies. It may be that the answer lies in more fundamental changes in lifestyles that have led to decreased exposure to certain microbial or other species, such as helminths, that are important for development of immuno-regulatory mechanisms. There is still much uncertainty as to which lifestyle factors are involved.
Medical hygiene
Medical hygiene pertains to hygiene practices related to the administration of medicine and medical care that prevents or minimizes the spread of disease.
Medical hygiene practices include:
isolation of infectious persons or materials to prevent spread of infection
sterilization of instruments used in surgical procedures
proper bandaging and dressing of injuries
safe disposal of medical waste
disinfection of reusables (i.e., linen, pads, uniforms)
scrubbing up, handwashing, especially in an operating room, but in more general health-care settings as well, where diseases can be transmitted
ethanol-based sanitizers
Most of these practices were developed in the 19th century and were well-established by the mid-20th century. Some procedures (such as disposal of medical waste) were refined in response to late-20th century disease outbreaks, notably AIDS and Ebola.
Food hygiene
Culinary hygiene (or food hygiene) pertains to practices of food management and cooking that prevent food contamination, prevent food poisoning, and minimize the transmission of disease to other foods, humans, or animals. Culinary hygiene practices specify safe ways to handle, store, prepare, serve, and eat food.
Hygiene aspects in low- and middle-income countries
In developing countries (or low- and middle-income countries), universal access to water and sanitation, coupled with hygiene promotion, is essential in reducing infectious diseases. This approach has been integrated into the Sustainable Development Goal Number 6 whose second target states: "By 2030, achieve access to adequate and equitable sanitation and hygiene for all and end open defecation, paying special attention to the needs of women and girls and those in vulnerable situations". Due to their close linkages, water, sanitation, and hygiene are together abbreviated and funded under the term WASH in development cooperation.
About two million people die every year due to diarrheal diseases; most of them are children less than five years of age. The most affected are people in developing countries who live in extreme conditions of poverty, normally peri-urban dwellers or rural inhabitants. Providing access to sufficient quantities of safe water and facilities for a sanitary disposal of excreta, and introducing sound hygiene behaviors are important in order to reduce the burden of disease.
Research shows that, if widely practiced, hand washing with soap could reduce diarrhea by almost fifty percent and respiratory infections by nearly twenty-five percent. Hand washing with soap also reduces the incidence of skin diseases, eye infections like trachoma, and intestinal worms, especially ascariasis and trichuriasis. Other hygiene practices, such as safe disposal of waste, surface hygiene, and care of domestic animals, are important in low income communities to break the chain of infection transmission.
Cleaning of toilets and hand wash facilities is important to prevent odors and make them socially acceptable. Social acceptance is an important part of encouraging people to use toilets and wash their hands, in situations where open defecation is still seen as a possible alternative, e.g. in rural areas of some developing countries.
Household water treatment and safe storage
Household water treatment and safe storage ensure drinking water is safe for consumption. These interventions are part of the approach of self-supply of water for households. Drinking water quality remains a significant problem in developing and in developed countries; even in the European region it is estimated that 120 million people do not have access to safe drinking water. Point-of-use water quality interventions can reduce diarrheal disease in communities where water quality is poor or in emergency situations where there is a breakdown in water supply.
Since water can become contaminated during storage at home (e.g. by contact with contaminated hands or using dirty storage vessels), safe storage of water in the home is important.
Methods for treatment of drinking water at the household level include:
chemical disinfection using chlorine or iodine
boiling
filtration using ceramic filters
solar disinfection — Solar disinfection is an effective method, especially when no chemical disinfectants are available.
UV irradiation — Community or household UV systems may be batch or flow-through. The lamps can be suspended above the water channel or submerged in the water flow.
combined flocculation/disinfection systems — available as sachets of powder that act by coagulating and flocculating sediments in water followed by release of chlorine
multibarrier methods — Some systems use two or more of the above treatments in combination or in succession to optimize efficacy.
portable water purification devices
History
Asia
China
Bathing culture in Chinese literature can be traced back to the Shang dynasty, when Oracle bone inscriptions describe people washing their hair and body in a bath. The Book of Rites, a work regarding Zhou dynasty ritual, politics, and culture compiled during the Warring States period, recommends that people take a hot shower every five days, and wash their hair every three days. It was also considered good manners to take a bath provided by the host before a dinner. In the Han dynasty, bathing became a regular activity, and for government officials bathing was required every five days.
Ancient bath facilities have been found in ancient Chinese cities, such as Dongzhouyang archaeological site in Henan Province. Bathrooms were called , and bathtubs were made of bronze or timber. Bath beans – a powdery soap mixture of ground beans, cloves, eaglewood, flowers, and even powdered jade – were recorded in the Han Dynasty. Bath beans were considered luxury toiletries, while common people simply used powdered beans without spices mixed in. Luxurious bathhouses built around hot springs were recorded in Tang dynasty. While royal bathhouses and bathrooms were common among ancient Chinese nobles and commoners, public bathhouses were a relatively late development. In the Song dynasty, public bathhouses became popular and people could find them readily. Bathing became an essential part of social life and recreation. Bathhouses often provided massage, nail cutting service, rubdown service, ear cleaning, food, and beverages. Marco Polo, who traveled to China during the Yuan dynasty, noted Chinese bathhouses were using coal to heat the bathhouse, which he had never seen before in Europe. Coal was so plentiful that Chinese people of every social class had bathrooms in their houses, and people took showers every day in the winter for enjoyment.
A typical Ming dynasty bathhouse had slabbed floors and brick domed ceilings. A huge boiler would be installed in the back of the house, connected with the bathing pool through a tunnel. Water could be pumped into the pool by turning wheels attended by the staff.
Japan
The origin of Japanese bathing is , ritual purification with water.
In the Heian period, houses of prominent families, such as the families of court nobles or samurai, had baths. The bath had lost its religious significance and instead became a leisure activity. became (to bathe in a shallow wooden tub). In the 17th century, the first European visitors to Japan recorded the habit of daily baths in mixed sex groups.
Indian subcontinent
The earliest written account of elaborate codes of hygiene can be found in several Hindu texts, such as the Manusmriti and the Vishnu Purana. Bathing is one of the five (daily duties) in Hinduism, and not performing it leads to sin, according to some scriptures.
Ayurveda is a system of medicine developed in ancient times that is still practiced in India, mostly combined with conventional Western medicine. Contemporary Ayurveda stresses a sattvic diet and good digestion and excretion. Hygiene measures include oil pulling, and tongue scraping. Detoxification also plays an important role.
The Americas
Mesoamerica
Spanish chronicles describe the bathing habits of the peoples of Mesoamerica during and after the conquest.
Bernal Díaz del Castillo describes Moctezuma (the Mexica, or Aztec, emperor at the arrival of Cortés) in his as being "...Very neat and cleanly, bathing every day each afternoon...".
Bathing was not restricted to the elite, but was practiced by all people; the chronicler Tomás López Medel wrote after a journey to Central America that "Bathing and the custom of washing oneself is so quotidian [common] amongst the Indians, both of cold and hot lands, as is eating, and this is done in fountains and rivers and other water to which they have access, without anything other than pure water..."
The Mesoamerican bath, known as in Spanish, from the Nahuatl word , a compound of ("steam") and ("house"), consists of a room, often in the form of a small dome, with an exterior firebox known as that heats a small portion of the room's wall made of volcanic rocks; after this wall has been heated, water is poured on it to produce steam, an action known as . As the steam accumulates in the upper part of the room a person in charge uses a bough to direct the steam to the bathers who are lying on the ground, with which he later gives them a massage, then the bathers scrub themselves with a small flat river stone and finally the person in charge introduces buckets with water along with soap and grass used to rinse. This bath had also ritual importance, and was tied to the goddess Toci; it is also therapeutic when medicinal herbs are used in the water for the . It is still used in Mexico.
Europe
Antiquity
Regular bathing was a hallmark of Roman civilization. Elaborate baths were constructed in urban areas to serve the public, who typically demanded the infrastructure to maintain personal cleanliness. The complexes usually consisted of large, swimming pool-like baths, smaller cold and hot pools, saunas, and spa-like facilities where people could be depilated, oiled, and massaged. Water was constantly changed by an aqueduct-fed flow. Bathing outside of urban centers involved smaller, less elaborate bathing facilities, or simply the use of clean bodies of water. Roman cities also had large sewers, such as Rome's Cloaca Maxima, into which public and private latrines drained. Romans did not have demand-flush toilets but did have some toilets with a continuous flow of water under them. The Romans used scented oils (mostly from Egypt), among other alternatives.
Christianity has always placed a strong emphasis on hygiene. Despite rejecting mixed bathing, early Christian clergy encouraged believers to bathe, which contributed to hygiene and good health according to the Church Fathers Clement of Alexandria and Tertullian. The Church built public bathing facilities that were separated by sex near monasteries and pilgrimage sites.
Middle Ages
Contrary to popular belief, bathing and sanitation were not lost in Europe with the collapse of the Roman Empire. Starting in the early Middle Ages, popes situated baths within church basilicas and monasteries. Pope Gregory the Great promoted bathing as a bodily need. The use of water in many Christian countries is partly due to Biblical toilet etiquette which encourages washing after all instances of defecation. Bidet and bidet showers were used in regions where water was considered essential for anal cleansing. Public bathhouses were common in the larger towns and cities of medieval Christendom, such as Constantinople, Paris, Regensburg, Rome and Naples. Great bathhouses were built in Byzantine centers such as Constantinople and Antioch.
In the 11th and 12th centuries, bathing was essential to the Western European upper class: the Cluniac monasteries (popular centers for resorting and retiring) were always equipped with bathhouses. These baths were also used ritually when the monks took full immersion baths at the two Christian festivals of renewal. The rules of the Augustinians and Benedictines contained references to ritual purification, and, inspired by Benedict of Nursia, encouraged the practice of therapeutic bathing. Benedictine monks also played a role in the development and promotion of spas.
On the other hand, bathing also sparked erotic fantasies, played upon by the writers of romances intended for the upper class; in the tale of Melusine the bath was a crucial element of the plot.
Cities regulated public bathing – the 26 public baths of Paris in the late 13th century were strictly overseen by the civil authorities and guild laws banned prostitutes from bathhouse admission.
In 14th century Tuscany, newlywed couples commonly took a bath together and we find an illustration of this custom in a fresco in the town hall of San Gimignano.
As evident in Hans Folz' Bath Booklet (a late 15th-century guide to European baths) and various artistic depictions such as Albrecht Dürer's Women's Bath, public bathing continued to be a popular pastime in the Renaissance. In Britain, the rise of Protestantism also played a prominent role in the development of spa culture.
Modernity
Until the late 19th century, only the elite in Western cities typically possessed indoor facilities for relieving bodily functions. The poorer majority used communal facilities built above cesspools in backyards and courtyards. This changed after Dr. John Snow discovered that cholera was transmitted by the fecal contamination of water. Though it took decades for his findings to gain wide acceptance, governments and sanitary reformers were eventually convinced of the health benefits of using sewers to keep human waste from contaminating the water. This encouraged the widespread adoption of both the flush toilet and the moral imperative that bathrooms should be indoors and as private as possible.
Modern sanitation was not widely adopted until the 19th and 20th centuries. According to medieval historian Lynn Thorndike, people in Medieval Europe probably bathed more than people did in the 19th century. Some time after Louis Pasteur's experiments proved the germ theory of disease and Joseph Lister and others put this into practice in sanitation, hygienic practices came to be regarded as synonymous with health, as they are in modern times.
The importance of hand washing for human health, particularly for people in vulnerable circumstances like mothers who had just given birth or wounded soldiers in hospitals, was first recognized in the mid 19th century by two pioneers of hand hygiene: the Hungarian physician Ignaz Semmelweis who worked in Vienna, Austria, and Florence Nightingale, the English "founder of modern nursing". At that time most people still believed that infections were caused by foul odors called miasmas.
Middle East
Islam stresses the importance of cleanliness and personal hygiene. Islamic hygienical jurisprudence, which dates back to the 7th century, has a number of elaborate rules. (ritual purity) involves performing (ablution) for the five daily (prayers), as well as regularly performing (bathing), which led to bathhouses being built across the Islamic world. Islamic toilet hygiene also requires washing with water after using the toilet, for purity and to minimize pathogens.
In the Abbasid Caliphate (8th–13th centuries), its capital city of Baghdad (Iraq) had 65,000 baths, along with a sewer system. Cities and towns of the medieval Islamic world had water supply systems powered by hydraulic technology that supplied drinking water along with much greater quantities of water for ritual washing, mainly in mosques and hammams (baths). Bathing establishments in various cities were rated by Arabic writers in travel guides. Medieval Islamic cities such as Baghdad, Córdoba (Islamic Spain), Fez (Morocco), and Fustat (Egypt) also had sophisticated waste disposal and sewage systems with interconnected networks of sewers. The city of Fustat also had multi-storey tenement buildings (with up to six floors) with flush toilets, which were connected to a water supply system, and flues on each floor carrying waste to underground channels.
A basic form of contagion theory dates back to medieval Persian medicine, where it was proposed by the Persian physician Ibn Sina (also known as Avicenna) in The Canon of Medicine (1025), the most authoritative medical textbook of the Middle Ages. He mentioned that people can transmit disease to others by breath, noted contagion with tuberculosis, and discussed the transmission of disease through water and dirt. The concept of invisible contagion was eventually widely accepted by Islamic scholars. In the Ayyubid Sultanate, they referred to them as ("impure substances"). The scholar Ibn al-Haj al-Abdari, while discussing Islamic diet and hygiene, gave advice and warnings about how contagion can contaminate water, food, and garments, and could spread through the water supply.
In the 9th century, Ziryab invented a type of deodorant. He also promoted morning and evening baths, and emphasized the maintenance of personal hygiene. Ziryab is thought to have invented a type of toothpaste, which he popularized throughout Islamic Iberia. The exact ingredients of this toothpaste are not known, but it was reported to have been both "functional and pleasant to taste."
Sub-Saharan Africa
In West Africa, various ethnic groups such as the Yoruba have used black soap to treat skin diseases.
In Southern Africa, the Zulu people conducted methods of sanitation by using water stored in pottery at Ulundi. The Himba people of Namibia and Angola also utilized mixtures of smoke and otjize to treat skin diseases in regions where water is scarce.
Soap and soap makers
Hard toilet soap with a pleasant smell was invented in the Middle East during the Islamic Golden Age when soap-making became an established industry. Recipes for soap-making are described by Muhammad ibn Zakariya al-Razi, who also gave a recipe for producing glycerine from olive oil. In the Middle East, soap was produced from the interaction of fatty oils and fats with alkali. In Syria, soap was produced using olive oil together with alkali and lime. Soap was exported from Syria to other parts of the Muslim world and to Europe. Two key Islamic innovations in soapmaking were the invention of bar soap, described by al-Razi, and the addition of scents using perfume technology perfected in the Islamic world.
By the 15th century, the manufacture of soap in Christendom had become virtually industrialized, with sources in Antwerp, Castile, Marseille, Naples, and Venice. In the 17th century the Spanish Catholic manufacturers purchased the monopoly on Castile soap from the cash-strapped Carolinian government. Industrially-manufactured bar soaps became available in the late 18th century, as advertising campaigns in Europe and America promoted popular awareness of the relationship between cleanliness and health.
A major contribution of the Christian missionaries in Africa, China, Guatemala, India, Indonesia, Korea, and other places was better health care through hygiene and introducing and distributing soap, and "cleanliness and hygiene became an important marker of being identified as a Christian".
Society and culture
Religious hygienic customs
Many religions require or encourage ritual purification via bathing or immersing the hands in water. In Islam, washing oneself via or is necessary for performing prayer. Islamic tradition also lists a variety of rules concerning proper hygiene after using the bathroom. The Baháʼí Faith mandates the washing of the hands and face prior to the obligatory Baháʼí prayers. Orthodox Judaism requires a bath following menstruation and childbirth, while washing the hands is performed upon waking up and before eating bread. Water plays a role in Christian rituals as well; certain denominations of Christianity, such as the Ethiopian Orthodox Tewahedo Church, prescribe several kinds of ritual hand washing, for example after leaving the latrine, lavatory, or bathhouse, before prayer, or after eating a meal.
Etymology
First attested in English in 1676, the word hygiene comes from the French , the latinisation of the Greek , meaning "(art) of health", from , "good for the health, healthy", in turn from , "healthful, sound, salutary, wholesome". In ancient Greek religion, Hygeia was the personification of health, cleanliness, and hygiene.
See also
References
External links
Hygiene at the Centers for Disease Control and Prevention
Hand hygiene at the European Centre for Disease Prevention and Control
Water, Sanitation and Hygiene at the World Health Organization
Bathrooms
Hygiene
Epidemic
An epidemic (from Greek ἐπί epi "upon or above" and δῆμος demos "people") is the rapid spread of disease to a large number of hosts in a given population within a short period of time. For example, in meningococcal infections, an attack rate in excess of 15 cases per 100,000 people for two consecutive weeks is considered an epidemic.
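As a rough illustration of how such a threshold can be checked, the Python sketch below computes weekly attack rates per 100,000 people and flags two consecutive weeks above the 15-case figure quoted above for meningococcal infections. The population size, weekly case counts, and function names are hypothetical illustrations, not part of any official surveillance tool, and real surveillance definitions vary by disease and agency.

```python
# Minimal sketch with assumed, illustrative data: flag an epidemic when the
# attack rate exceeds 15 cases per 100,000 people in two consecutive weeks,
# as in the meningococcal example above.

def attack_rate_per_100k(cases: int, population: int) -> float:
    """Attack rate expressed as cases per 100,000 people."""
    return cases / population * 100_000

def exceeds_threshold(weekly_cases, population, threshold=15.0) -> bool:
    """True if the attack rate exceeds the threshold in two consecutive weeks."""
    rates = [attack_rate_per_100k(c, population) for c in weekly_cases]
    return any(a > threshold and b > threshold for a, b in zip(rates, rates[1:]))

if __name__ == "__main__":
    population = 250_000                 # hypothetical district population
    weekly_cases = [12, 30, 45, 41]      # hypothetical weekly case counts
    rates = [round(attack_rate_per_100k(c, population), 1) for c in weekly_cases]
    print(rates)                                        # [4.8, 12.0, 18.0, 16.4]
    print(exceeds_threshold(weekly_cases, population))  # True (weeks 3 and 4)
```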
Epidemics of infectious disease are generally caused by several factors including a change in the ecology of the host population (e.g., increased stress or increase in the density of a vector species), a genetic change in the pathogen reservoir or the introduction of an emerging pathogen to a host population (by movement of pathogen or host). Generally, an epidemic occurs when host immunity to either an established pathogen or newly emerging novel pathogen is suddenly reduced below that found in the endemic equilibrium and the transmission threshold is exceeded.
An epidemic may be restricted to one location; however, if it spreads to other countries or continents and affects a substantial number of people, it may be termed a pandemic. The declaration of an epidemic usually requires a good understanding of a baseline rate of incidence; epidemics for certain diseases, such as influenza, are defined as reaching some defined increase in incidence above this baseline. A few cases of a very rare disease may be classified as an epidemic, while many cases of a common disease (such as the common cold) would not. An epidemic can cause enormous damage through financial and economic losses in addition to impaired health and loss of life.
Definition
The United States Centers for Disease Control and Prevention defines epidemic broadly: "Epidemic refers to an increase, often sudden, in the number of cases of a disease above what is normally expected in that population in that area." The term "outbreak" can also apply, but is usually restricted to smaller events.
Any sudden increase in disease prevalence may generally be termed an epidemic. This may include contagious disease (i.e. easily spread between persons) such as influenza; vector-borne diseases such as malaria; water-borne diseases such as cholera; and sexually transmitted diseases such as HIV/AIDS. The term can also be used for non-communicable health issues such as obesity.
The term epidemic derives from a word form attributed to Homer's Odyssey, which later took its medical meaning from the Epidemics, a treatise by Hippocrates. Before Hippocrates, , , , and other variants had meanings similar to the current definitions of "indigenous" or "endemic". Thucydides' description of the Plague of Athens is considered one of the earliest accounts of a disease epidemic. By the early 17th century, the terms endemic and epidemic referred to contrasting conditions of population-level disease, with the endemic condition a "common sicknesse" and the epidemic "hapning in some region, or countrey, at a certaine time, ....... producing in all sorts of people, one and the same kind of sicknesse".
The term "epidemic" is often applied to diseases in non-human animals, although "epizootic" is technically preferable.
Causes
There are several factors that may contribute (individually or in combination) to causing an epidemic. There may be changes in a pathogen, in the population that it can infect, in the environment, or in the interaction between all three. Factors include the following:
Antigenic change
An antigen is a protein on the surface of a virus that host antibodies can recognize and attack. Changes in the antigenic characteristics of the agent make it easier for the changed virus to spread through a previously immune population. There are two natural mechanisms for change - antigenic drift and antigenic shift. Antigenic drift arises over a period of time as an accumulation of mutations in the virus genes, possibly through a series of hosts, and eventually gives rise to a new strain of virus which can evade existing immunity. Antigenic shift is abrupt: two or more different strains of a virus, coinfecting a single host, combine to form a new subtype with a mixture of characteristics of the original strains. The best known and best documented example of both processes is influenza. SARS-CoV-2 has demonstrated antigenic drift and possibly shift as well.
Drug resistance
Antibiotic resistance applies specifically to bacteria that become resistant to antibiotics. Resistance in bacteria can arise naturally by genetic mutation, or by one species acquiring resistance from another through horizontal gene transfer. Extended use of antibiotics appears to encourage selection for mutations which can render antibiotics ineffective. This is especially true of tuberculosis, with increasing occurrence of multiple drug-resistant tuberculosis (MDR-TB) worldwide.
Changes in transmission
Pathogen transmission is a term used to describe the mechanisms by which a disease-causing agent (virus, bacterium, or parasite) spreads from one host to another. Common modes of transmission include:
airborne (as with influenza and COVID-19),
fecal-oral (as with cholera and typhoid),
vector-borne (malaria, Zika) and
sexual (syphilis, HIV)
The first three of these require that the pathogen survive away from its host for a period of time; an evolutionary change which increases survival time outside the host may also result in increased virulence.
Another possibility, although rare, is that a pathogen may adapt to take advantage of a new mode of transmission.
Seasonality
Seasonal diseases arise from changes in environmental conditions, especially humidity and temperature, across the seasons. Many diseases display seasonality, which may be due to one or more of the following underlying factors (a minimal seasonal-forcing sketch follows this list):
The ability of the pathogen to survive outside the host - e.g. water-borne cholera which becomes prevalent in tropical wet seasons, or influenza which peaks in temperate regions during winter.
The behaviour of people susceptible to the disease - such as spending more time in close contact indoors.
Changes in immune function during winter - one possibility is a reduction in vitamin D, and another is the effect of cold on mucous membranes in the nose.
Abundance of vectors such as mosquitoes.
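In transmission models, seasonality of the kind listed above is often represented by letting the transmission rate vary over the year. The sketch below assumes a simple sinusoidal forcing term; the baseline rate, amplitude, and peak day are arbitrary illustrative values, not estimates for any real disease.

```python
import math

def seasonal_beta(day_of_year: int, beta0: float = 0.3, amplitude: float = 0.4,
                  peak_day: int = 15) -> float:
    """Transmission rate with sinusoidal seasonal forcing.

    beta0 is the annual mean rate, amplitude the relative seasonal swing,
    and peak_day the day of the year when transmission peaks (all assumed).
    """
    return beta0 * (1 + amplitude * math.cos(2 * math.pi * (day_of_year - peak_day) / 365))

print(round(seasonal_beta(15), 3))    # mid-January peak: 0.42
print(round(seasonal_beta(197), 3))   # mid-July trough: ~0.18
```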
Human behaviour
Changes in behaviour can affect the likelihood or severity of epidemics. The classic example is the 1854 Broad Street cholera outbreak, in which a cholera outbreak was mitigated by removing a supply of contaminated water - an event now regarded as the foundation of the science of epidemiology. Urbanisation and overcrowding (e.g. in refugee camps) increase the likelihood of disease outbreaks. A factor which contributed to the initial rapid increase in the 2014 Ebola virus epidemic was ritual bathing of (infective) corpses; one of the control measures was an education campaign to change behaviour around funeral rites.
Changes in the host population
The level of immunity to a disease in a population - herd immunity - is at its peak after a disease outbreak or a vaccination campaign. In the following years, immunity will decline, both within individuals and in the population as a whole as older individuals die and new individuals are born. Eventually, unless there is another vaccination campaign, an outbreak or epidemic will recur.
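This waning of herd immunity can be sketched with the standard threshold from simple epidemic models: outbreaks can grow once the immune fraction falls below 1 - 1/R0. The R0 value, starting immunity, and annual rate of immunity loss in the sketch below are assumptions chosen only to illustrate the calculation.

```python
def years_until_outbreak_possible(immune_fraction: float, r0: float = 5.0,
                                  annual_loss: float = 0.04) -> int:
    """Years until immunity drops below the herd-immunity threshold 1 - 1/R0.

    `annual_loss` lumps together waning individual immunity and population
    turnover (births replacing immune individuals); all numbers are illustrative.
    """
    threshold = 1 - 1 / r0
    years = 0
    while immune_fraction >= threshold:
        immune_fraction *= (1 - annual_loss)   # immunity erodes each year
        years += 1
    return years

# Starting from 95% immunity after a hypothetical vaccination campaign:
print(years_until_outbreak_possible(0.95))  # ~5 years before outbreaks can recur
```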
It is also possible for a disease which is endemic in one population to become epidemic if it is introduced into a novel setting where the host population is not immune. An example of this was the introduction of European diseases such as smallpox into indigenous populations during the 16th century.
Zoonosis
A zoonosis is an infectious disease of humans caused by a pathogen that can jump from a non-human host to a human. Major diseases such as Ebola virus disease and salmonellosis are zoonoses. HIV was a zoonotic disease transmitted to humans in the early part of the 20th century, though it has now evolved into a separate human-only disease. Some strains of bird flu and swine flu are zoonoses; these viruses occasionally recombine with human strains of the flu and can cause pandemics such as the 1918 Spanish flu or the 2009 swine flu.
Types
Common source outbreak
In a common source outbreak epidemic, the affected individuals had an exposure to a common agent. If the exposure is singular and all of the affected individuals develop the disease over a single exposure and incubation course, it can be termed a point source outbreak. If the exposure was continuous or variable, it can be termed a continuous outbreak or an intermittent outbreak, respectively.
Propagated outbreak
In a propagated outbreak, the disease spreads person-to-person. Affected individuals may become independent reservoirs leading to further exposures. Many epidemics have characteristics of both common source and propagated outbreaks (sometimes referred to as mixed outbreaks).
For example, secondary person-to-person spread may occur after a common source exposure, or an environmental vector may spread a zoonotic disease agent.
Preparation
Preparations for an epidemic include having a disease surveillance system; the ability to quickly dispatch emergency workers, especially local-based emergency workers; and a legitimate way to guarantee the safety and health of health workers.
Effective preparations for a response to a pandemic are multi-layered. The first layer is a disease surveillance system. Tanzania, for example, runs a national lab that performs testing for 200 health sites and tracks the spread of infectious diseases. The next layer is the actual response to an emergency. According to U.S.-based columnist Michael Gerson in 2015, only the U.S. military and NATO have the global capability to respond to such an emergency. Still, despite the most extensive preparatory measures, a fast-spreading pandemic may easily exceed and overwhelm existing health-care resources. Consequently, early and aggressive mitigation efforts, aimed at so-called "flattening of the epidemic curve", need to be taken. Such measures usually consist of non-pharmacological interventions such as social/physical distancing, aggressive contact tracing, "stay-at-home" orders, and appropriate personal protective equipment (i.e., masks, gloves, and other physical barriers to spread).
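The idea of "flattening the epidemic curve" can be illustrated with a minimal SIR model: reducing the effective transmission rate (for example through distancing) lowers and delays the peak of simultaneous infections. The parameter values below are arbitrary and the model is a toy discrete-time sketch, not a calibrated model of any real epidemic.

```python
def sir_peak_infected(beta: float, gamma: float = 0.1, days: int = 365,
                      i0: float = 1e-4) -> float:
    """Peak infected fraction from a discrete-time SIR model (toy parameters)."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i        # new infections this day
        new_rec = gamma * i           # recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

print(f"no intervention : peak {sir_peak_infected(beta=0.3):.1%} infected at once")
print(f"with distancing : peak {sir_peak_infected(beta=0.15):.1%} infected at once")
```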
Moreover, India has taken significant strides in its efforts to prepare for future respiratory pandemics through the development of the National Pandemic Preparedness Plan for Respiratory Viruses using a multisectoral approach.
Preceding this national effort, a regional workshop on the Preparedness and Resilience for Emerging Threats (PRET) initiative was organized by WHO's South-East Asia Regional Office on October 12-13, 2023. Recognizing that the same capacities and capabilities can be leveraged and applied for groups of pathogens based on their mode of transmission, the workshop aimed to facilitate pandemic planning efficiency for countries in the region. The participating countries, in the aftermath of the workshop, outlined their immediate next steps and sought support from WHO and its partners to bolster regional preparedness against respiratory pathogen pandemics.
See also
List of epidemics
Epidemiology
Endemic (epidemiology)
Pandemic
Syndemic
European Centre for Disease Prevention and Control
Centers for Disease Control and Prevention
Mathematical modelling of infectious disease
Epidemic model
Biosecurity
Pathogen transmission
References
Further reading
Brook, Timothy, et al. "Comparative pandemics: the Tudor–Stuart and Wanli–Chongzhen years of pestilence, 1567–1666." Journal of Global History 14.3 (2020): 363–379. Emphasis on Chinese history, compared to England.
Eisenberg, Merle, and Lee Mordechai. "The Justinianic Plague and Global Pandemics: The Making of the Plague Concept." American Historical Review 125.5 (2020): 1632–1667.
McKenna, Maryn. "Return of the Germs: For more than a century drugs and vaccines made astounding progress against infectious diseases. Now our best defenses may be social changes." Scientific American, vol. 323, no. 3 (September 2020), pp. 50–56. "What might prevent or lessen [the] possibility [of a virus emerging and finding a favorable human host] is more prosperity more equally distributed – enough that villagers in South Asia need not trap and sell bats to supplement their incomes and that low-wage workers in the U.S. need not go to work while ill because they have no sick leave." (p. 56.)
External links
Biological hazards
Psychosomatic medicine
Psychosomatic medicine is an interdisciplinary medical field exploring the relationships among social, psychological, and behavioral factors, bodily processes, and quality of life in humans and animals.
The academic forebear of the modern field of behavioral medicine and a part of the practice of consultation-liaison psychiatry, psychosomatic medicine integrates interdisciplinary evaluation and management involving diverse specialties including psychiatry, psychology, neurology, psychoanalysis, internal medicine, pediatrics, surgery, allergy, dermatology, and psychoneuroimmunology. Clinical situations where mental processes act as a major factor affecting medical outcomes are areas where psychosomatic medicine has competence.
Psychosomatic disorders
Some physical diseases are believed to have a mental component derived from the stresses and strains of everyday living. This has been suggested, for example, of lower back pain and high blood pressure, which some researchers believe may be related to stresses in everyday life. The psychosomatic framework additionally sees mental and emotional states as capable of significantly influencing the course of any physical illness. Psychiatry traditionally distinguishes between psychosomatic disorders, disorders in which mental factors play a significant role in the development, expression, or resolution of a physical illness, and somatoform disorders, disorders in which mental factors are the sole cause of a physical illness.
It is difficult to establish for certain whether an illness has a psychosomatic component. A psychosomatic component is often inferred when there are some aspects of the patient's presentation that are unaccounted for by biological factors, or some cases where there is no biological explanation at all. For instance, Helicobacter pylori causes 80% of peptic ulcers. However, most people living with Helicobacter pylori do not develop ulcers, and 20% of patients with ulcers have no H. pylori infection. Therefore, in these cases, psychological factors could still play some role. Similarly, in irritable bowel syndrome (IBS), there are abnormalities in the behavior of the gut. However, there are no actual structural changes in the gut, so stress and emotions might still play a role.
The strongest perspective on psychosomatic disorders is that attempting to distinguish between purely physical and mixed psychosomatic disorders is obsolete, as almost all physical illnesses have mental factors that determine their onset, presentation, maintenance, susceptibility to treatment, and resolution. According to this view, even the course of serious illnesses, such as cancer, can potentially be influenced by a person's thoughts, feelings and general state of mental health.
Addressing such factors is the remit of the applied field of behavioral medicine. In modern society, psychosomatic aspects of illness are often attributed to stress, making the remediation of stress one important factor in the development, treatment, and prevention of psychosomatic illness.
Connotations of the term "psychosomatic illness"
The term psychosomatic disease was most likely first used by Paul D. MacLean in his 1949 seminal paper ‘Psychosomatic disease and the “visceral brain”; recent developments bearing on the Papez theory of emotions.’ In the field of psychosomatic medicine, the phrase "psychosomatic illness" is used more narrowly than it is within the general population. For example, in lay language, the term often encompasses illnesses with no physical basis at all, and even illnesses that are faked (malingering). In contrast, in contemporary psychosomatic medicine, the term is normally restricted to those illnesses that do have a clear physical basis, but where it is believed that psychological and mental factors also play a role. Some researchers within the field believe that this overly broad interpretation of the term may have caused the discipline to fall into disrepute clinically. For this reason, among others, the field of behavioral medicine has taken over much of the remit of psychosomatic medicine in practice and there exist large areas of overlap in the scientific research.
Criticism
Studies have yielded mixed evidence regarding the impact of psychosomatic factors in illnesses. Early evidence suggested that patients with advanced-stage cancer may be able to survive longer if provided with psychotherapy to improve their social support and outlook. However, a major review published in 2007, which evaluated the evidence for these benefits, concluded that no studies meeting the minimum quality standards required in this field have demonstrated such a benefit. The review further argues that unsubstantiated claims that "positive outlook" or "fighting spirit" can help slow cancer may be harmful to the patients themselves if they come to believe that their poor progress results from "not having the right attitude".
Treatment
While in the U.S., psychosomatic medicine is considered a subspecialty of the fields of psychiatry and neurology, in Germany and other European countries it is considered a subspecialty of internal medicine. Thure von Uexküll and contemporary physicians following his thoughts regard the psychosomatic approach as a core attitude of medical doctors, thereby declaring it not as a subspecialty, but rather an integrated part of every specialty. Medical treatments and psychotherapy are used to treat illnesses believed to have a psychosomatic component.
History
In the medieval Islamic world, the Persian psychologist-physicians Ahmed ibn Sahl al-Balkhi (d. 934) and Haly Abbas (d. 994) developed an early model of illness that emphasized the interaction of the mind and the body. They proposed that a patient's physiology and psychology can influence one another.
Contrary to Hippocrates and Galen, Ahmed ibn Sahl al-Balkhi did not believe that mere regulation and modulation of the bodily temperament and medication would remedy mental disorders, because words play a vital and necessary role in emotional regulation. To change such behaviors, he used techniques such as belief altering, regular musing, rehearsal of experiences, and imagination.
In the beginnings of the 20th century, there was a renewed interest in psychosomatic concepts. Psychoanalyst Franz Alexander had a deep interest in understanding the dynamic interrelation between mind and body. Sigmund Freud pursued a deep interest in psychosomatic illnesses following his correspondence with Georg Groddeck who was, at the time, researching the possibility of treating physical disorders through psychological processes. Hélène Michel-Wolfromm applied psychosomatic medicine to the field of gynecology and sexual problems experienced by women.
In the 1970s, Thure von Uexküll and his colleagues in Germany and elsewhere proposed a biosemiotic theory (the umwelt concept) that was widely influential as a theoretical framework for conceptualizing mind-body relations. This model shows that life is a meaning or functional system. Farzad Goli further explains in Biosemiotic Medicine (2016), how signs in the form of matter (e.g., atoms, molecules, cells), energy (e.g., electrical signals in nervous system), symbols (e.g., words, images, machine codes), and reflections (e.g., mindful moments, metacognition) can be interpreted and translated into each other.
Henri Laborit, one of the founders of modern neuropsychopharmacology, carried out experiments in the 1970s showing that illness quickly occurred in rats when their action was inhibited. Rats in exactly the same stressful situations, but which were not inhibited in their behavior (those that could flee or fight, even if fighting was completely ineffective), had no negative health consequences. He proposed that psychosomatic illnesses in humans largely have their source in the constraints that society puts on individuals in order to maintain hierarchical structures of dominance. The film My American Uncle, directed by Alain Resnais and influenced by Laborit, explores the relationship between self and society and the effects of the inhibition of action.
In February 2005, the Boston Syndromic Surveillance System detected an increase in young men seeking medical treatment for stroke. Most of them did not actually experience a stroke, but the largest number presented a day after Tedy Bruschi, a local sports figure, was hospitalized for a stroke. Presumably they began misinterpreting their own harmless symptoms, a group phenomenon now known as Tedy Bruschi syndrome.
Robert Ader is credited with coining the term psychoneuroimmunology (PNI) to categorize a new field of study also known as mind-body medicine. The principles of mind-body medicine suggest that the mind and the emotional thoughts it produces have a significant impact on physiology, either positive or negative.
PNI integrates the mental/psychological, nervous, and immune systems, which are further linked together by ligands such as hormones, neurotransmitters and peptides. PNI studies how cells throughout the body are in constant communication, and its proponents hold that this signaling is responsible for 98% of all data transferred between the body and the brain.
Candace Pert, a professor and neuroscientist who co-discovered the opiate receptor, called this communication between cells the "molecules of emotion" because they produce feelings of bliss, hunger, anger, relaxation, or satiety. Pert maintained that the body is the subconscious mind, so what is going on in the subconscious mind is played out by the body.
See also
Somatic symptom disorder, also known as "somatoform disorder"
References
External links
Mind-Body Medicine: An Overview, US National Institutes of Health (NIH), Center for Complementary and Integrative Health
Academy of Psychosomatic Medicine
Psychosomatics, journal of the Academy of Psychosomatic Medicine
American Psychosomatic Society
Psychosomatic Medicine, journal of the American Psychosomatic Society
Medical specialties
Mind–body interventions
Stress (biological and psychological)
Anxiety disorder treatment
Immune system
Somatic psychology