Medical classification
A medical classification is used to transform descriptions of medical diagnoses or procedures into standardized statistical codes in a process known as clinical coding. Diagnosis classifications list diagnosis codes, which are used to track diseases and other health conditions, including chronic diseases such as diabetes mellitus and heart disease, and infectious diseases such as norovirus, the flu, and athlete's foot. Procedure classifications list procedure codes, which are used to capture interventional data. These diagnosis and procedure codes are used by health care providers, government health programs, private health insurance companies, workers' compensation carriers, software developers, and others for a variety of applications in medicine, public health and medical informatics, including:
- statistical analysis of diseases and therapeutic actions
- reimbursement (e.g., to process claims in medical billing based on diagnosis-related groups)
- knowledge-based and decision support systems
- direct surveillance of epidemic or pandemic outbreaks
- forensic science and judiciary settings
There are country-specific standards and international classification systems.

Classification types
Many different medical classifications exist, though they fall into two main groupings: statistical classifications and nomenclatures.
A statistical classification brings together similar clinical concepts and groups them into categories. The number of categories is limited so that the classification does not become too big. An example of this approach is the International Statistical Classification of Diseases and Related Health Problems (known as ICD). ICD-10 groups diseases of the circulatory system into one "chapter", known as Chapter IX, covering codes I00–I99. One of the codes in this chapter (I47.1) has the code title (rubric) Supraventricular tachycardia. However, several other clinical concepts are also classified here, among them paroxysmal atrial tachycardia, paroxysmal junctional tachycardia, auricular tachycardia and nodal tachycardia. Another feature of statistical classifications is the provision of residual categories for "other" and "unspecified" conditions that do not have a specific category in the particular classification.
In a nomenclature there is a separate listing and code for every clinical concept. So, in the previous example, each of the tachycardias listed would have its own code. This makes nomenclatures unwieldy for compiling health statistics.
Types of coding systems specific to health care include:
- Diagnostic codes, which are used to identify diseases, disorders, and symptoms, and can be used to measure morbidity and mortality. Examples: ICD-9-CM, ICD-10, ICD-11.
- Procedural codes, which are numbers or alphanumeric codes used to identify specific health interventions taken by medical professionals. Examples: CPT, HCPCS, ICPM, ICHI.
- Pharmaceutical codes, which are used to identify medications. Examples: ATC, NDC, ICD-11.
- Topographical codes, which indicate a specific location in the body. Examples: ICD-O, SNOMED, ICD-11.

WHO Family of International Classifications
The World Health Organization (WHO) maintains several internationally endorsed classifications designed to facilitate the comparison of health-related data within and across populations and over time, as well as the compilation of nationally consistent data.
This "Family of International Classifications" (FIC) include three main (or reference) classifications on basic parameters of health prepared by the organization and approved by the World Health Assembly for international use, as well as a number of derived and related classifications providing additional details. Some of these international standards have been revised and adapted by various countries for national use. Reference classifications International Statistical Classification of Diseases and Related Health Problems (ICD) ICD-10 (International classification of diseases, 10th revision) – effective from 1 January 1993. Although Version:2019 was the last update, and ICD-11 is now available, WHO are still accepting data reported using ICD-10 from member states yet to make the switch to ICD-11. ICD-11 (International classification of diseases, 11th revision) – available for reporting data to WHO since 1 January 2022 International Classification of Functioning, Disability and Health (ICF) International Classification of Health Interventions (ICHI) Derived classifications Derived classifications are based on the WHO reference classifications (i.e. ICD and ICF). They include the following: International Classification of Diseases for Oncology, Third Edition (ICD-O-3) The ICD-10 Classification of Mental and Behavioural Disorders – This publication deals exclusively with Chapter of ICD-10, and is available as two variants; Clinical descriptions and diagnostic guidelines, also known as the blue book. Diagnostic criteria for research, also known as the green book. Application of the International Classification of Diseases to Dentistry and Stomatology, 3rd Edition (ICD-DA) Application of the International Classification of Diseases to Neurology (ICD-10-NA) EUROCAT is an extension of the ICD-10 Chapter , which covers congenital disorders. National versions Several countries have developed their own version of WHO-FIC publications, which go beyond a local language translation. Many of these are based on the ICD: ICD-9-CM was the US' adaptation of ICD-9 and was maintained for use until September 2015. Starting on October 1, 2015, the Centers for Medicare and Medicaid Services (CMMS) granted physicians a one-year grace period to begin using ICD-10-CM, or they would be denied Medicare Part B claims. ICD-10-CM was developed by the US' Centers for Medicare and Medicaid Services (CMS) and the National Center for Health Statistics (NCHS), and has been in use in the US since October 2015replacing ICD-9-CM. ICD-10-AM was published by Australia's National Centre for Classification in Health in 1998 and has since been adopted by a number of other countries. Related classifications Related classifications in the WHO-FIC are those that partially refer to the reference classifications, e.g. only at specific levels. They include: International Classification of Primary Care (ICPC) ICPC-2 PLUS Anatomical Therapeutic Chemical Classification System with Defined Daily Doses (ATC/DDD) Assistive products — Classification and terminology (ISO9999:2022). WHO adopted ISO9999 as a related classification in 2003, however, the International Organization for Standardization (ISO) remains responsible for maintaining ISO9999. International Classification for Nursing Practice (ICNP) Historic FIC classifications ICD versions before ICD-9 are not in use anywhere. ICD-9 was published in 1977, and superseded by ICD-10 in 1994. The last version of ICD-10 was published in 2019, and it was replaced by ICD-11 on 1 January 2022. 
35 of the 194 member states have made the transition to the latest version of the ICD. The International Classification of Procedures in Medicine (ICPM) is a procedural classification that has not been updated since 1989 and will be replaced by ICHI. National adaptations of the ICPM include OPS-301, the official German procedural classification. The International Classification of External Causes of Injury (ICECI) was last updated in 2003 and, with the development of ICD-11, is no longer maintained. The concepts of ICECI are represented within ICD-11 as extension codes.

Other medical classifications

Diagnosis
The categories in a diagnosis classification classify diseases, disorders, symptoms and medical signs. In addition to the ICD and its national variants, they include:
- Diagnostic and Statistical Manual of Mental Disorders (DSM): DSM-IV Codes, DSM-5
- International Classification of Headache Disorders 2nd Edition (ICHD-II)
- International Classification of Sleep Disorders (ICSD)
- Online Mendelian Inheritance in Man, a database of genetic codes
- Orchard Sports Injury and Illness Classification System (OSIICS)
- Read codes
- SNOMED CT

Procedure
The categories in a procedure classification classify specific health interventions undertaken by health professionals. In addition to the ICHI and ICPC, they include:
- Australian Classification of Health Interventions (ACHI)
- Canadian Classification of Health Interventions (CCI)
- Current Procedural Terminology (CPT)
- Health Care Procedure Coding System (HCPCS)
- ICD-10 Procedure Coding System (ICD-10-PCS)
- OPCS Classification of Interventions and Procedures (OPCS-4)

Drugs
Drugs are often grouped into drug classes. Such classifications include:
- RxNorm
- Anatomical Therapeutic Chemical Classification System
- Medical Reference Terminology
- National Pharmaceutical Product Index
- National Drug File-Reference Terminology (NDF-RT)
National Drug File-Reference Terminology was a terminology maintained by the Veterans Health Administration (VHA). It groups drug concepts into classes. It was part of RxNorm until March 2018.

Medication Reference Terminology (MED-RT)
Medication Reference Terminology (MED-RT) is a terminology created and maintained by the Veterans Health Administration in the United States. In 2018 it replaced NDF-RT, which had been in use during 2005–2017. MED-RT is not included in RxNorm but is included in the National Library of Medicine's UMLS Metathesaurus. Prior to 2017, NDF-RT was included in RxNorm. The first release of MED-RT was in the spring of 2018. The United States Food and Drug Administration requires, in its Manual of Policies and Procedures (MaPP) 7400.13 dated July 18, 2013 and updated on July 25, 2018, that MED-RT be used for selecting an established pharmacologic class (EPC) for the Highlights of Prescribing Information in drug labeling. Each EPC text phrase is associated with a term known as an EPC concept. EPC concepts use a standardized format derived from the U.S. Department of Veterans Affairs, Veterans Health Administration (VHA) Medication Reference Terminology (MED-RT). Each EPC concept also has a unique standardized alphanumeric identifier code, used as the machine-readable tag for the concept. These codes enable SPL indexing. The exact EPC text phrase used in INDICATIONS AND USAGE in Highlights might not be identical to the wording used to describe the EPC concept, because the standardized language used for the EPC concept might not be considered sufficiently clear to the readers of the labeling.
Each active moiety also may be assigned MOA, PE, and CS standardized indexing concepts, which are also linked to unique standardized alphanumeric identifier codes. MOA, PE, and CS standardized indexing concepts may or may not be related to the therapeutic effect of the active moiety for a particular indication, but they should still be scientifically valid and clinically meaningful. Even if the MOA, PE, and CS standardized indexing concepts are not known with certainty to be related to the therapeutic effect, they may still be useful for identifying drug interactions and permitting other safety assessments for a moiety based upon appropriate and relevant considerations, such as enzyme inhibition and enzyme induction. MOA, PE, and CS concepts are maintained in a standardized format as part of the MED-RT hierarchy. https://www.fda.gov/media/86437/download The United States Food and Drug Administration Study Data Technical Conformance Guide dated July 2020 states, "6.5 Pharmacologic Class 6.5.1 Medication Reference Terminology 6.5.1.1 General Considerations The Veterans Administration's Medication Reference Terminology (MED-RT) should be used to identify the pharmacologic class(es) of all active investigational substances that are used in a study (either clinical or nonclinical). This information should be provided in the SDTM TS domain when a full TS is indicated. The information should be provided as one or more records in TS, where TSPARMCD= PCLAS. Pharmacologic class is a complex concept that is made up of one or more component concepts: mechanism of action (MOA), physiologic effect (PE), and chemical structure (CS).51 The established pharmacologic class is generally the MOA, PE, or CS term that is considered the most scientifically valid and clinically meaningful. Sponsors should include in TS (the full TS) the established pharmacologic class of all active moieties of investigational products used in a study. FDA maintains a list of established pharmacologic classes of approved moieties.52 If the established pharmacologic class is not available for an active moiety, then the sponsor should discuss the appropriate MOA, PE, and CS terms with the review division. For unapproved investigational active moieties where the pharmacologic class is unknown, the PCLAS record may not be available." https://www.fda.gov/media/136460/download The United States Food and Drug Administration publishes a Data Standards Catalog that lists the data standards and terminologies that FDA supports for use in regulatory submissions to better enable the evaluation of safety, effectiveness, and quality of FDA-regulated products. In addition, the FDA has the statutory and regulatory authority to require certain standards and terminologies and these are identified in the Catalog with the date the requirement begins and, as needed, the date the requirement ends, and information sources. The submission of data using standards or terminologies not listed in the Catalog should be discussed with the Agency in advance. Where the Catalog expresses support for more than one standard or terminology for a specific use, the sponsor or applicant may select one to use or can discuss, as appropriate, with their review division. 
Version 7.0 of the FDA Data Standards Catalog, dated 03-15-2021, specifies that MED-RT has been a required terminology under the White House Consolidated Health Informatics Initiative in various Federal Register Notices beginning as early as May 6, 2004; for NDAs, ANDAs, and certain BLAs beginning on December 17, 2016; and for certain INDs beginning on December 17, 2017. https://www.fda.gov/media/85137/download

Medical Devices
Global Medical Device Nomenclature (GMDN), the standard international naming system for medical devices.

Other
- Classification of Pharmaco-Therapeutic Referrals (CPR)
- Logical Observation Identifiers Names and Codes (LOINC), a standard for identifying medical laboratory observations
- MEDCIN, a point-of-care terminology intended for use in Electronic Health Record (EHR) systems
- Medical Dictionary for Regulatory Activities (MedDRA)
- Medical Subject Headings (MeSH); List of MeSH codes
- Nursing Interventions Classification (NIC)
- Nursing Outcomes Classification (NOC)
- TIME-ITEM, an ontology of topics in medical education
- TNM Classification of Malignant Tumors
- Unified Medical Language System (UMLS)
- Victoria Ambulatory Coding System (VACS) / Queensland Ambulatory Coding System (QACS), Australia
Library classifications that have medical components:
- Dewey Decimal Classification and Universal Decimal Classification (section 610–620)
- National Library of Medicine classification

ICD, SNOMED and Electronic Health Record (EHR)

SNOMED
The Systematized Nomenclature of Medicine (SNOMED) is the most widely recognised nomenclature in healthcare. Its current version, SNOMED Clinical Terms (SNOMED CT), is intended to provide a set of concepts and relationships that offers a common reference point for comparison and aggregation of data about the health care process. SNOMED CT is often described as a reference terminology. SNOMED CT contains more than 311,000 active concepts with unique meanings and formal logic-based definitions organised into hierarchies. SNOMED CT can be used by anyone with an Affiliate License; licence fees are waived for some 40 low-income countries defined by the World Bank and for qualifying research, humanitarian and charitable projects. SNOMED CT is designed to be managed by computer, and its concepts are linked by a complex network of relationships.

ICD
The International Classification of Disease (ICD) is the most widely recognized medical classification. Maintained by the World Health Organization (WHO), its primary purpose is to categorise diseases for morbidity and mortality reporting. However, the coded data are often used for other purposes too, including reimbursement practices such as medical billing. ICD has a hierarchical structure, and coding, in this context, is the term applied when representations are assigned to the words they represent. Coding diagnoses and procedures is the assignment of codes from a code set that follows the rules of the underlying classification or other coding guidelines. ICD-10 was endorsed by WHO in 1990, and WHO member states began using the ICD-10 classification system from 1994 for both morbidity and mortality reporting. The exception was the US, which began using it for mortality reporting only in 1999 while continuing to use ICD-9-CM for morbidity reporting; the US adopted its own version of ICD-10 only in October 2015. The delay meant that US morbidity data could not be compared with the rest of the world during this period.
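The range-based, hierarchical structure of ICD codes described above can be pictured with a short sketch. This is a minimal, hypothetical illustration that assumes a hand-written, deliberately truncated chapter table; it is not an implementation of any official ICD tooling or API.

```python
# Minimal sketch: resolving an ICD-10 code to its chapter by range lookup.
# The chapter table is deliberately truncated for illustration; a real tool
# would load the complete, officially published chapter ranges.

CHAPTERS = [
    ("A00", "B99", "I",  "Certain infectious and parasitic diseases"),
    ("I00", "I99", "IX", "Diseases of the circulatory system"),
    ("J00", "J99", "X",  "Diseases of the respiratory system"),
]

def chapter_of(icd10_code):
    """Return (chapter number, chapter title) for an ICD-10 code, if known."""
    category = icd10_code[:3].upper()      # e.g. "I47.1" -> category "I47"
    for start, end, number, title in CHAPTERS:
        if start <= category <= end:       # lexicographic range test
            return number, title
    return None

print(chapter_of("I47.1"))  # -> ('IX', 'Diseases of the circulatory system')
```

The same rubric (I47.1) covers several distinct clinical concepts, which is exactly the many-to-one behaviour that distinguishes a statistical classification from a nomenclature.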
The next major version of the ICD, ICD-11, was ratified by the 72nd World Health Assembly on 25 May 2019, and member countries have been able to report data using ICD-11 codes since 1 January 2022. ICD-11 is a fully digital product that integrates clinical terminology and classification and allows documentation at any level of detail. It includes extension codes and a terminology system covering medicaments, chemicals, infectious agents, histopathology, anatomy, mechanisms, objects and animals, and other elements that serve to describe sources of injury or harm.

Comparison
SNOMED CT and ICD were originally designed for different purposes, and each should be used for the purposes for which it was designed. As core terminologies for the EHR, SNOMED CT and ICD-11 provide a common language that enables a consistent way of capturing and sharing health data across specialties and sites of care. SNOMED CT is a highly detailed terminology designed for input rather than reporting, without a specific use case. Both ICD-11 and SNOMED CT are clinically based and can document whatever is needed for patient care. In contrast to SNOMED CT, ICD-11 allows full clinical documentation while also permitting internationally agreed statistical aggregation for specific use cases. The foundation of ICD-11, together with the WHO Classification of Health Interventions (ICHI) and the WHO Classification of Functioning, Disability and Health (ICF), and comprising also the WHO lists of anatomy, substances and more, forms a complete ecosystem for lossless documentation in digital records while at the same time addressing specific use cases for data aggregation in a multilingual, freely usable way. SNOMED CT and ICD are used directly by healthcare providers during the process of care; in addition, ICD can also be used for coding after the episode of care, in lower-technology environments. SNOMED CT has multiple hierarchies, whereas ICD-11 has a single primary hierarchy with alternative multiple hierarchies. SNOMED CT concepts are defined logically by their attributes, as is the case in ICD-11, which in addition has textual rules and definitions.

Data Mapping
SNOMED CT and ICD can be coordinated. The National Library of Medicine (NLM) maps ICD-9-CM, ICD-10-CM, ICD-10-PCS, and other classification systems to SNOMED CT. Data mapping is the process of identifying relationships between two distinct data models; a minimal illustrative sketch of such a mapping is shown at the end of this entry.

Veterinary medical coding
Veterinary medical codes include the VeNom Coding Group, the U.S. Animal Hospital Codes, and the Veterinary Extension to SNOMED CT (VetSCT).

See also
- Acronyms in healthcare
- Ambulatory Payment Classification, US billing system for outpatient services
- Biological database
- Classification of mental disorders
- Clinical coder
- German Institute for Medical Documentation and Information
- Health information management
- Health informatics
- Human resources for health information system
- List of international common standards
- Medical dictionary
- North American Nursing Diagnosis Association (professional organization)
- Nosology
- Pathology Messaging Implementation Project

References

External links
- WHO Family of International Classifications official site
- Medical terminologies at the National Library of Medicine
- The International Health Terminology Standards Development Organisation – SNOMED CT

Nursing classification
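The NLM mapping described under "Data Mapping" above can be thought of as a many-to-one lookup from fine-grained nomenclature concepts to broader statistical categories. The sketch below is purely illustrative and assumes invented SNOMED CT identifiers and a single hand-picked ICD-10 target code; it is not an extract of the official NLM map files.

```python
# Illustrative many-to-one map from nomenclature concepts (one code per
# clinical idea) to a single statistical category. The SNOMED CT concept IDs
# below are assumed placeholders, not entries from the official NLM/SNOMED map.

SNOMED_TO_ICD10 = {
    "100000001": "I47.1",  # hypothetical ID: supraventricular tachycardia
    "100000002": "I47.1",  # hypothetical ID: paroxysmal atrial tachycardia
    "100000003": "I47.1",  # hypothetical ID: nodal tachycardia
}

def to_statistical_category(snomed_id):
    """Collapse a SNOMED CT concept onto its mapped ICD-10 category, if any."""
    return SNOMED_TO_ICD10.get(snomed_id)

# Several distinct concepts collapse onto one rubric, which is what keeps a
# statistical classification compact and makes a nomenclature unwieldy for
# compiling statistics.
print({cid: to_statistical_category(cid) for cid in SNOMED_TO_ICD10})
```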
Spondylodiscitis
Spondylodiscitis is a combination of discitis (inflammation of one or more intervertebral disc spaces) and spondylitis (inflammation of one or more vertebrae), the latter generally involving the areas adjacent to the intervertebral disc space.

Causes
Spondylodiscitis is the most common complication of sepsis or local infection, usually in the form of an abscess. The main causative organisms are staphylococci, but the potential causes include a large number of bacteria and fungi, as well as zoonotic organisms. Spondylodiscitis frequently develops in immunocompromised individuals, for example those weakened by cancer or infection, or by immunosuppressive drugs used for organ transplantation.

Diagnosis
The main methods used to diagnose spondylodiscitis are magnetic resonance imaging (MRI), biopsy, and microbiological tests such as PCR to identify an infectious cause.

Treatment
Approximately 90% of cases can be treated conservatively. In the absence of spinal cord or nerve root compression, and with no evidence of instability of the inflamed segment, conservative treatment consists of:
Antibiotics – empirical treatment should start only after biopsy material for microbiological testing has been obtained (PMID 27082590). The following empirical regimen may be administered for a total of 6 weeks (PMID 26872859):
- Ceftriaxone 2x2g and Clindamycin 3x600mg i.v. for 2 weeks
- Ciprofloxacin 2x500mg and Clindamycin 4x300mg p.o. for 4 more weeks
If the pathogen can be identified, antibiotic treatment should be adapted to the susceptibilities of the microorganism.
Bed rest

References

External links

Vertebral column disorders
Airsickness
Airsickness is a specific form of motion sickness which is induced by air travel and is considered a normal response in healthy individuals. Airsickness occurs when the central nervous system receives conflicting messages from the body (including the inner ear, eyes and muscles) affecting balance and equilibrium. Whereas commercial airline passengers may simply feel poorly, the effect of airsickness on military aircrew may lead to a decrement in performance and adversely affect the mission. The inner ear is particularly important in the maintenance of balance and equilibrium because it contains sensors for both angular (rotational) and linear motion. Airsickness is usually a combination of spatial disorientation, nausea and vomiting. Signs and symptoms Common symptoms of airsickness include: Nausea, vomiting, vertigo, loss of appetite, cold sweating, skin pallor, difficulty concentrating, confusion, drowsiness, headache, and increased fatigue. Severe airsickness may cause a person to become completely incapacitated. Risk factors The following factors increase some people's susceptibility to airsickness: Fatigue, stress and anxiety are some factors that can increase susceptibility to motion sickness of any type. The use of alcohol, drugs, and medications may also contribute to airsickness. Additionally, airsickness is more common in women (especially during menstruation or pregnancy), young children, and individuals prone to other types of motion sickness. Although airsickness is uncommon among experienced pilots, it does occur with some frequency in student pilots. Prevention Travelers who are susceptible to motion sickness can minimize symptoms by: Choosing a window seat with a view of the Earth's surface or of lower clouds, such that motion can be detected and visually observed. Choosing seats with the smoothest ride in regards to pitch (the seats over the wings in an airplane). This may not be sufficient for sensitive individuals who need to see ground movement. Sitting facing forward while focusing on distant objects rather than trying to read or look at something inside the airplane. Treatment Medication Medications that may alleviate the symptoms of airsickness include: meclozine dimenhydrinate diphenhydramine scopolamine (available in both patch and oral form). Pilots who are susceptible to airsickness are usually advised not to take anti-motion sickness medications (prescription or over-the-counter). These medications can make one drowsy or affect brain functions in other ways. Non-medication based A method to increase pilot resistance to airsickness consists of repetitive exposure to the flying conditions that initially resulted in airsickness. In other words, repeated exposure to the flight environment decreases an individual's susceptibility to subsequent airsickness. The US Air Force and US Navy have an Air Sickness Management Program and use a device called a Barany chair to desensitize trainees over 3 days. This combined with progressive relaxation (diaphragmatic breathing and muscle tensing) yields a high success rate. The Italian Air Force also uses a similar spinning chair and psychologic relaxation techniques which yields an 82% long-term success rate, over a 10-day training period. Several devices have been introduced that are intended to reduce motion sickness through stimulation of various body parts (usually the wrist). Alternative medicine Alternative treatments include ginger and acupuncture, with variable effectiveness. 
See also Acclimatization Airsickness bag Motion sickness Space adaptation syndrome References External links Aviation medicine Symptoms and signs: Nervous system Effects of external causes Motion sickness
Soil structure
In geotechnical engineering, soil structure describes the arrangement of the solid parts of the soil and of the pore space located between them. It is determined by how individual soil granules clump, bind together, and aggregate, resulting in the arrangement of soil pores between them. Soil has a major influence on water and air movement, biological activity, root growth and seedling emergence. There are several different types of soil structure. It is inherently a dynamic and complex system that is affected by different factors. Overview Soil structure describes the arrangement of the solid parts of the soil and of the pore spaces located between them (Marshall & Holmes, 1979). Aggregation is the result of the interaction of soil particles through rearrangement, flocculation and cementation. It is enhanced by: the precipitation of oxides, hydroxides, carbonates and silicates; the products of biological activity (such as biofilms, fungal hyphae and glycoproteins); ionic bridging between negatively charged particles (both clay minerals and organic compounds) by multivalent cations; and interactions between organic compounds (hydrogen bonding and hydrophobic bonding). The quality of soil structure will decline under most forms of cultivation—the associated mechanical mixing of the soil compacts and shears aggregates and fills pore spaces; it also exposes organic matter to a greater rate of decay and oxidation. A further consequence of continued cultivation and traffic is the development of compacted, impermeable layers or 'pans' within the profile. The decline of soil structure under irrigation is usually related to the breakdown of aggregates and dispersion of clay material as a result of rapid wetting. This is particularly so if soils are sodic; that is, having a high exchangeable sodium percentage (ESP) of the cations attached to the clays. High sodium levels (compared to high calcium levels) cause particles to repel one another when wet, and the associated aggregates to disaggregate and disperse. The ESP will increase if irrigation causes salty water (even of low concentration) to gain access to the soil. A wide range of practices are undertaken to preserve and improve soil structure. For example, the NSW Department of Land and Water Conservation advocates: increasing organic content by incorporating pasture phases into cropping rotations; reducing or eliminating tillage and cultivation in cropping and pasture activities; avoiding soil disturbance during periods of excessive dry or wet when soils may accordingly tend to shatter or smear; and ensuring sufficient ground cover to protect the soil from raindrop impact. In irrigated agriculture, it may be recommended to: apply gypsum (calcium sulfate) to displace sodium cations with calcium and so reduce ESP or sodicity, avoid rapid wetting, and avoid disturbing soils when too wet or dry. Types Platy – The units are flat and platelike. They are generally oriented horizontally. Prismatic – The individual units are bounded by flat to rounded vertical faces. Units are distinctly longer vertically, and the faces are typically casts or molds of adjoining units. Vertices are angular or subrounded; the tops of the prisms are somewhat indistinct and normally flat. Columnar – The units are similar to prisms and bounded by flat or slightly rounded vertical faces. The tops of columns, in contrast to those of prisms, are very distinct and normally rounded. Blocky – The units are blocklike or polyhedral. 
They are bounded by flat or slightly rounded surfaces that are casts of the faces of surrounding peds. Typically, blocky structural units are nearly equidimensional but grade to prisms and plates. The structure is described as angular blocky if the faces intersect at relatively sharp angles, and as subangular blocky if the faces are a mixture of rounded and plane faces and the corners are mostly rounded. Granular – The units are approximately spherical or polyhedral. They are bounded by curved or very irregular faces that are not casts of adjoining peds. Wedge – The units are approximately elliptical with interlocking lenses that terminate in acute angles. They are commonly bounded by small slickensides. Lenticular – The units are overlapping lenses parallel to the soil surface. They are thickest in the middle and thin towards the edges. Lenticular structure is commonly associated with moist soils, texture classes high in silt or very fine sand (e.g., silt loam), and high potential for frost action.

Improving soil structure
The benefits of improving soil structure for the growth of plants, particularly in an agricultural setting, include: reduced erosion due to greater soil aggregate strength and decreased overland flow; improved root penetration and access to soil moisture and nutrients; improved emergence of seedlings due to reduced crusting of the surface; and greater water infiltration, retention and availability due to improved porosity. Productivity from irrigated no-tillage or minimum-tillage soil management in horticulture usually decreases over time due to degradation of the soil structure, which inhibits root growth and water retention. There are a few exceptions; why such exceptional fields retain structure is unknown, but it is associated with high organic matter. Improving soil structure in such settings can increase yields significantly. The NSW Department of Land and Water Conservation suggests that in cropping systems, wheat yields can be increased by 10 kg/ha for every extra millimetre of rain that is able to infiltrate due to soil structure.

Hardsetting soil
Hardsetting soils lose their structure when wet and then set hard as they dry out to form a structureless mass that is very difficult to cultivate. They can only be tilled when their moisture content is within a limited range. When they are tilled the result is often a very cloddy surface (poor tilth). As they dry out, the high soil strength often restricts seedling and root growth. Infiltration rates are low, and runoff of rain and irrigation limits the productivity of many hardsetting soils.

Definition
Hardsetting has been defined this way: "A hardsetting soil is one that sets to an almost homogeneous mass on drying. It may have occasional cracks, typically at a spacing of >0.1 m. Air dry hardset soil is hard and brittle, and it is not possible to push a forefinger into the profile face. Typically, it has a tensile strength of 90 kN m−2. Soils that crust are not necessarily hardsetting since a hardsetting horizon is thicker than a crust. (In cultivated soils the thickness of the hardsetting horizon is frequently equal to or greater than that of the cultivated layer.) Hardsetting soil is not permanently cemented and is soft when wet. The clods in a hardsetting horizon that has been cultivated will partially or totally disintegrate upon wetting. If the soil has been sufficiently wetted, it will revert to its hardset state on drying. This can happen after flood irrigation or a single intense rainfall event."
Soil structure dynamics
Soil structure is inherently a dynamic and complex system that is affected by different factors such as tillage, wheel traffic, roots, biological activity in soil, rainfall events, wind erosion, shrinking, swelling, freezing and thawing. In turn, soil structure affects root growth and function, soil fauna and biota, water and solute transport processes, gas exchange, thermal and electrical conductivity, traffic-bearing capacity, and many other soil-related processes. Ignoring soil structure, or viewing it as static, can lead to poor predictions of soil properties and may significantly affect soil management.

See also

References

Sources
- Australian Journal of Soil Research, 38(1), 61–70. Cited in: Land and Water Australia 2007, Ways to improve soil structure and improve the productivity of irrigated agriculture, viewed May 2007, <https://web.archive.org/web/20070930071224/http://npsi.gov.au/>
- Department of Land and Water Conservation 1991, "Field indicators of soil structure decline", viewed May 2007
- Leeper, GW & Uren, NC 1993, Soil science: an introduction, 5th edn, Melbourne University Press, Melbourne
- Marshall, TJ & Holmes, JW 1979, Soil Physics, Cambridge University Press
- Charman, PEV & Murphy, BW 1998, Soils: their properties and management, 5th edn, Oxford University Press, Melbourne
- Firuziaan, M & Estorff, O 2002, "Simulation of the Dynamic Behavior of Bedding-Foundation-Soil in the Time Domain", Springer Verlag

External links
- Jordán, Antonio. 2013. What is soil structure? European Geosciences Union Blog. Accessed 11 June 2017.
- Soil Survey Division Staff. 1993. Soil Survey Manual, Chapter 3: Examination and Description of Soils. USDA NRCS. Accessed 11 June 2017.

Soil
Land management
Acrodynia
Acrodynia is a medical condition which occurs due to mercury poisoning. The pain and dusky pink discoloration of the hands and feet are due to exposure to or ingestion of mercury. It was known as pink disease (due to these symptoms) before it was accepted that it was simply mercury poisoning. The word acrodynia is derived from the Greek ἄκρον (akron), which means end or extremity, and ὀδύνη (odynē), which means pain. As such, it might be (erroneously) used to indicate that a patient has pain in the hands or feet. The condition is known by various other names including hydrargyria, mercurialism, erythredema, erythredema polyneuropathy, Bilderbeck's, Selter's, Swift's and Swift-Feer disease.

Symptoms and signs
Besides peripheral neuropathy (presenting as paresthesia or itching, burning or pain) and discoloration, swelling (edema) and desquamation may occur. Since mercury blocks the degradation pathway of catecholamines, epinephrine excess causes profuse sweating (diaphoresis), tachycardia, salivation and elevated blood pressure. Mercury is suggested to inactivate S-adenosyl-methionine, which is necessary for catecholamine catabolism by catechol-O-methyltransferase. Affected children may show red cheeks and nose, red (erythematous) lips, loss of hair, teeth, and nails, transient rashes, hypotonia and photophobia. Other symptoms may include kidney dysfunction (e.g. Fanconi syndrome) or neuropsychiatric symptoms (emotional lability, memory impairment, insomnia). Thus, the clinical presentation may resemble pheochromocytoma or Kawasaki disease. There is some evidence that the same mercury poisoning may predispose to Young's syndrome (men with bronchiectasis and low sperm count).

Causes
Mercury compounds like calomel were historically used for various medical purposes: as laxatives, diuretics, antiseptics or antimicrobial drugs for syphilis, typhus and yellow fever. Teething powders were a widespread source of mercury poisoning until the recognition of mercury toxicity in the 1940s. However, mercury poisoning and acrodynia still exist today. Modern sources of mercury intoxication include broken thermometers.

Treatment
Removal of the inciting agent is the goal of treatment. Correcting fluid and electrolyte losses and rectifying any nutritional imbalances (vitamin-rich diets, vitamin-B complex) are of utmost importance in the treatment of the disease. The chelating agent meso-2,3-dimercaptosuccinic acid has been shown to be the preferred treatment modality. It can almost completely prevent methylmercury uptake by erythrocytes and hepatocytes. In the past, dimercaprol (British antilewisite; 2,3-dimercapto-1-propanol) and D-penicillamine were the most popular treatment modalities. Disodium edetate (Versene) was also used. Neither disodium edetate nor British antilewisite has proven reliable, and British antilewisite has now been shown to increase CNS mercury levels and exacerbate toxicity. N-acetyl-penicillamine has been successfully given to patients with mercury-induced neuropathies and chronic toxicity, although it is not approved for such uses; it has a less favorable adverse-effect profile than meso-2,3-dimercaptosuccinic acid. Hemodialysis, with and without the addition of L-cysteine as a chelating agent, has been used in some patients experiencing acute kidney injury from mercury toxicity. Peritoneal dialysis and plasma exchange may also be of benefit. Tolazoline (Priscoline) has been shown to offer symptomatic relief from sympathetic overactivity.
Antibiotics are necessary when massive hyperhidrosis, which may rapidly lead to miliaria rubra, is present. This can easily progress to bacterial secondary infection with a tendency for ulcerating pyoderma. References Occupational diseases Toxicology Pediatrics
Panchendriyas
Panchendriyas are the sense organs of the human body in Hinduism, grouped into organs of perception and organs of action, each group consisting of five subtypes: the five buddhi-indriyas or jnanendriyas ("organs of knowledge or sense perception") and the five karmendriyas ("sense organs that deal with bodily functions").

Five gyanendriyas
Gyanendriya is the organ of perception, the faculty of perceiving through the senses. The first five of the seventeen elements of the subtle body are the "organs of perception" or "sense organs". According to Hinduism and Vaishnavism there are five gyanendriyas or "sense organs" – ears, skin, eyes, tongue and nose.

Five karmendriyas
Karmendriya is an Indian philosophical concept. Karmendriya is the "organ of action" according to Hinduism and Jainism. The karmendriyas are five: hasta, pada, bak, anus, upastha. In Jainism these are the senses used by the experiencing soul to perform actions.

See also
- Panchakosha
- Kosha

References

External links
- Indriya Pancha Phanchaka: 5 Five Of Sense Organs
- The Five Senses – Pancha Indriya - Eat & Breathe
- Pancha Indriya Buddhi: Association cortices, Kshama Gupta, Prasad Mamidi

Indian philosophical concepts
Hindu philosophical concepts
Rivalta test
Certain diseases can cause excessive accumulations of fluid in areas of the body such as the abdomen (ascites), the pleural space around the lungs (pleural effusion), or the pericardial space around the heart. An estimate of the concentration of protein in such fluids can narrow the differential diagnosis and assist the clinician in establishing a diagnosis. For example, fluid accumulations due to congestive heart failure and liver failure (cirrhosis) are typically lower in protein content and are called transudates, whereas fluid accumulations due to cancer and tuberculosis are typically higher in protein content and are called exudates. The Rivalta test is a simple, inexpensive method that can be used in resource-limited settings to differentiate a transudate from an exudate; it does not require special laboratory equipment and can easily be performed in private practice. The test was originally developed by the Italian researcher Rivalta around 1900 and was used to differentiate transudates and exudates in human patients. It is also useful in cats to differentiate between effusions due to feline infectious peritonitis (FIP) and effusions caused by other diseases. Not only a high protein content but also high concentrations of fibrinogen and inflammatory mediators lead to a positive reaction.

Method
A test tube is filled with distilled water and acetic acid is added. To this mixture one drop of the effusion to be tested is added. If the drop dissipates, the test is negative, indicating a transudate. If the drop precipitates, the test is positive, indicating an exudate. Using a pH 4.0 acetic acid solution, eight types of proteins have been identified in Rivalta reaction-positive turbid precipitates: C-reactive protein (CRP), alpha-1-antitrypsin (alpha1-AT), orosomucoid (alpha-1-acid glycoprotein, AGP), haptoglobin (Hp), transferrin (Tf), ceruloplasmin (Cp), fibrinogen (Fg), and hemopexin (Hpx). Since these are acute-phase proteins, a positive Rivalta test may be suggestive of inflammation.

Procedure
To perform this test, a transparent reagent tube (volume 10 ml) is filled with approximately 7–8 ml distilled water, to which 1 drop of acetic acid (8%, plain white vinegar) is added and mixed thoroughly. On the surface of this solution, 1 drop of the effusion fluid is carefully layered. If the drop disappears and the solution remains clear, the Rivalta test is defined as negative. If the drop retains its shape, stays attached to the surface, or slowly floats down to the bottom of the tube (drop- or jellyfish-like), the Rivalta test is defined as positive. The Rivalta test had a high positive predictive value (86%) and a very high negative predictive value for FIP (96%) in a study of cats presenting with effusion (prevalence of FIP 51%); a short worked sketch of how such predictive values depend on prevalence follows at the end of this entry. Positive Rivalta test results can also occur in cats with bacterial peritonitis or lymphoma.

References
http://abcd-vets.org/guidelines/feline_infectious_peritonitis/chapter-5.asp

Medical tests
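The predictive values quoted above are tied to the prevalence of disease in the tested population. The sketch below recomputes positive and negative predictive values from sensitivity, specificity and prevalence using Bayes' rule; the sensitivity and specificity figures are assumed placeholders chosen only to land in the same range as the study cited above, not numbers taken from that study.

```python
# Worked sketch: PPV and NPV from sensitivity, specificity and prevalence.
# The sensitivity/specificity values below are assumed for illustration.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a binary test in a population with the given prevalence."""
    true_pos  = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg  = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# At ~51% prevalence, these assumed figures give predictive values of the same
# order as those reported for the Rivalta test in cats with effusion.
ppv, npv = predictive_values(sensitivity=0.96, specificity=0.85, prevalence=0.51)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # roughly 87% and 95%
```

The same function also shows why a test with fixed sensitivity and specificity yields a much lower PPV when applied to a low-prevalence population.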
Course (medicine)
In medicine, the term course generally takes one of two meanings, both reflecting the sense of a "path that something or someone moves along...process or sequence or steps":

A course of medication is a period of continual treatment with drugs, sometimes with variable dosage and in particular combinations. For instance, treatment with some drugs should not end abruptly; instead, their course should end with a tapering dosage.
- Antibiotics: taking the full course of antibiotics is important to prevent reinfection and/or the development of drug-resistant bacteria.
- Steroids: for both short-term and long-term steroid treatment, when stopping treatment, the dosage is tapered rather than abruptly ended. This permits the adrenal glands to resume the body's natural production of cortisol. Abrupt discontinuation can result in adrenal insufficiency and/or steroid withdrawal syndrome (a rebound effect in which exaggerated symptoms return). A small illustrative sketch of a tapering schedule follows at the end of this entry.

The course of a disease, also called its natural history, is the development of the disease in a patient, including the sequence and speed of the stages and forms it takes. Typical courses of diseases include:
- chronic
- recurrent or relapsing
- subacute: somewhere between an acute and a chronic course
- acute: beginning abruptly, intensifying rapidly, not lasting long
- fulminant or peracute: particularly acute, especially if unusually violent

A patient may be said to be at the beginning, the middle or the end, or at a particular stage, of the course of a disease or a treatment. A precursor is a sign or event that precedes the course, or a particular stage in the course, of a disease; for example, chills are often precursors to fevers.

References

Medical terminology
Pharmacodynamics
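As a purely illustrative sketch of what ending a course "with a tapering dosage" means, the snippet below generates a simple stepwise linear taper. The starting dose, step size, and interval are invented for the example; real tapering schedules are decided clinically and are not computed this way.

```python
# Illustrative only: a stepwise linear taper from a starting daily dose to zero.
# All numbers are hypothetical and not a dosing recommendation.

def linear_taper(start_dose_mg, step_mg, days_per_step):
    """Return a list of (day range, daily dose in mg) steps for a linear taper."""
    schedule = []
    dose, day = start_dose_mg, 1
    while dose > 0:
        schedule.append((f"days {day}-{day + days_per_step - 1}", dose))
        dose -= step_mg
        day += days_per_step
    return schedule

# Hypothetical example: start at 40 mg/day and reduce by 10 mg every 7 days.
for period, dose in linear_taper(40, 10, 7):
    print(period, f"{dose} mg/day")
```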
Endomysium
The endomysium, meaning within the muscle, is a wispy layer of areolar connective tissue that ensheaths each individual muscle fiber, or muscle cell. It also contains capillaries and nerves. It overlies the muscle fiber's cell membrane: the sarcolemma. Endomysium is the deepest and smallest component of muscle connective tissue. This thin layer helps provide an appropriate chemical environment for the exchange of calcium, sodium, and potassium, which is essential for the excitation and subsequent contraction of a muscle fiber. Endomysium combines with perimysium and epimysium to create the collagen fibers of tendons, providing the tissue connection between muscles and bones by indirect attachment. It connects with perimysium using intermittent perimysial junction plates. Collagen is the major protein that composes connective tissues like endomysium. Endomysium has been shown to contain mainly type I and type III collagen components, and type IV and type V in very minor amounts. Others have found type IV and type V more common. The term cardiac skeleton is sometimes considered synonymous with endomysium in the heart, but cardiac skeleton also refers to the combination of the endomysium and perimysium. Clinical significance Anti-endomysial antibodies (EMA) are present in celiac disease. They do not cause any direct symptoms to muscles, but detection of EMA is useful in the diagnosis of the disease. See also Connective tissue in skeletal muscle Epimysium Perimysium References External links Illustration at wku.edu Soft tissue Muscular system
Flash fire
A flash fire is a sudden, intense fire caused by ignition of a mixture of air and a dispersed flammable substance such as a solid (including dust), flammable or combustible liquid (such as an aerosol or fine mist), or a flammable gas. It is characterized by high temperature, short duration, and a rapidly moving flame front. Definition A flash fire is defined by NFPA 2112 (Standard on Flame-Resistant Clothing for Protection of Industrial Personnel Against Short-Duration Thermal Exposures from Fire) as: "A type of short-duration fire that spreads by means of a flame front rapidly through a diffuse fuel, such as dust, gas, or the vapors of an ignitable liquid, without the production of damaging pressure." Characterization Flash fires may occur in environments where fuel, typically flammable gas or dust, is mixed with air in concentrations suitable for combustion. In a flash fire, the flame spreads at subsonic velocity, so the overpressure damage is usually negligible and the bulk of the damage comes from the thermal radiation and secondary fires. When inhaled, the heated air resulting from a flash fire can cause serious damage to the tissue of the lungs, possibly leading to death by asphyxiation. Flash fires can lead to smoke burns. Flash fire is a particular danger in enclosed spaces, as even a relatively small fire can consume enough oxygen and produce enough smoke to cause death of the persons present, whether by asphyxiation or by smoke inhalation. Protective clothing made of fire-retardant materials (e.g. Nomex) reduces or prevents thermal injury in the body areas that are covered by the fire-retardant material. Even normal clothing can provide partial protection. Surgical Small flash fires can occur in the operating room during surgery where the presence of ignition sources such as electrical instruments or lasers, an oxygen-rich environment, and flammable vapors (e.g. alcohol-based disinfectants) may set the stage for such an accident. While apparently smaller fires go unreported, surgical flash fires have led to burn injuries and fatalities. Incidents of surgical fires are "significantly under-reported", according to The Joint Commission. More than half of surgical fires happen inside a patient's airway or on the patient's upper body; around 10 percent of surgical fires actually happen within the body cavity, and a quarter of surgical fires happen on other parts of the body. About 70 percent are ignited by electrosurgical tools commonly known as Bovies, devices that use a high-frequency electric current to cut tissue or stop bleeding. 20 percent of fires are sparked by hot wires, light sources, burrs or defibrillators. Another 10 percent are touched off by lasers. As far as the patients are concerned, some recover with scars and emotional damage. Some die from burns and smoke inhalation. See also 1996 Garley Building fire Air Canada Flight 797 Apollo 1 Boiling liquid expanding vapor explosion Explosion Flash flood Flashover Fuel-air explosive The Station nightclub fire Trench effect References External links Explosion Hazard Assessment Flash fire exposure analysis Firefighting Types of fire
Viral pathogenesis
Viral pathogenesis is the study of the process and mechanisms by which viruses cause diseases in their target hosts, often at the cellular or molecular level. It is a specialized field of study in virology. Pathogenesis is a qualitative description of the process by which an initial infection causes disease. Viral disease is the sum of the effects of viral replication on the host and the host's subsequent immune response against the virus. Viruses are able to initiate infection, disperse throughout the body, and replicate due to specific virulence factors. There are several factors that affect pathogenesis. Some of these factors include virulence characteristics of the virus that is infecting. In order to cause disease, the virus must also overcome several inhibitory effects present in the host. Some of the inhibitory effects include distance, physical barriers and host defenses. These inhibitory effects may differ among individuals due to the inhibitory effects being genetically controlled. Viral pathogenesis is affected by various factors: (1) transmission, entry and spread within the host, (2) tropism, (3) virus virulence and disease mechanisms, (4) host factors and host defense. Mechanisms of infection Viruses need to establish infections in host cells in order to multiply. For infections to occur, the virus has to hijack host factors and evade the host immune response for efficient replication. Viral replication frequently requires complex interactions between the virus and host factors that may result in deleterious effects in the host, which confers the virus its pathogenicity. Important steps of a virus life cycle that shape pathogenesis Transmission from a host with an infection to a second host Entry of the virus into the body Local replication in susceptible cells Dissemination and spread to secondary tissues and target organs Secondary replication in susceptible cells Shedding of the virus into the environment Onward transmission to third host Primary transmission Three requirements must be satisfied to ensure successful infection of a host. Firstly, there must be sufficient quantity of virus available to initiate infection. Cells at the site of infection must be accessible, in that their cell membranes display host-encoded receptors that the virus can exploit for entry into the cell, and the host anti-viral defense systems must be ineffective or absent. Entry to host Viruses causing disease in humans often enter through the mouth, nose, genital tract, or through damaged areas of skin, so cells of the respiratory, gastrointestinal, skin and genital tissues are often the primary site of infection. Some viruses are capable of transmission to a mammalian fetus through infected germ cells at the time of fertilization, later in pregnancy via the placenta, and by infection at birth. Local replication and spread Following initial entry to the host, the virus hijacks the host cell machinery to undergo viral amplification. Here, the virus must modulate the host innate immune response to prevent its elimination by the body while facilitating its replication. Replicated virus from the initially infected cell then disperse to infect neighbouring susceptible cells, possibly with spread to different cell types like leukocytes. This results in a localised infection, in which the virus mainly spreads and infects adjacent cells to the site of entry. Otherwise, the virus can be released into extracellular fluids. 
Examples of localised infections include: common cold (rhinovirus), flu (parainfluenza), gastrointestinal infections (rotavirus) or skin infections (papillomavirus). Dissemination and secondary replication In other cases, the virus can cause systemic disease through a disseminated infection spread throughout the body. The predominant mode of viral dissemination occurs through the blood or lymphatic system, some of which include viruses responsible for chickenpox (varicella zoster virus), smallpox (variola), HIV (human immunodeficiency virus). A minority of viruses can disseminate via the nervous system. Notably, the poliovirus can be transmitted via the fecal-oral route, where it initially replicates in its site of entry, the small intestine and spread to regional lymph nodes. Then, the virus disseminates via the bloodstream into different organs in the body (e.g. liver, spleen), followed by a secondary round of replication and dissemination into the central nervous system to damage motor neurons. Shedding and secondary transmission Finally, the viruses spread to sites where shedding into the environment can occur. The respiratory, alimentary and urogenital tracts and the blood are the most frequent sites of shedding in the form of bodily fluids, aerosols, skin, excrement. The virus would then go on to be transmitted to another person, and establish the infection cycle all over again. Factors affecting pathogenesis There are a few main overarching factors affecting viral diseases: Virus tropism Virus factors Host factors Molecular basis of virus tropism Virus tropism refers to the virus' preferential site of replication in discrete cell types within an organ. In most cases, tropism is determined by the ability of the viral surface proteins to fuse or bind to surface receptors of specific target cells to establish infection. Thus, the binding specificity of viral surface proteins dictates tropism as well as the destruction of particular cell populations, and is therefore a major determinant of virus pathogenesis. However, co-receptors are sometimes required in addition to the binding of cellular receptors on host cells to viral proteins in order to establish infection. For instance, HIV-1 requires target cells to express co-receptors CCR5 or CXCR4, on top of the CD4 receptor for productive viral attachment. Interestingly, HIV-1 can undergo a tropism switch, where the virus glycoprotein gp120 initially uses CCR5 (mainly on macrophages) as the primary co-receptor for entering the host cell. Subsequently, HIV-1 switches to bind to CXCR4 (mainly on T cells) as the infection progresses, in doing so transitions the viral pathogenicity to a different stage. Apart from cellular receptors, viral tropism can also governed by other intracellular factors, such as tissue-specific transcription factors. An example would be the JC polyomavirus, in which its tropism is limited to glial cells since its enhancer is only active in glial cells, and JC viral gene expression requires host transcription factors expressed exclusively in glial cells. The accessibility of host tissues and organs to the virus also regulates tropism. Accessibility is affected by physical barriers, such as in enteroviruses, which replicate in the intestine since they are able to withstand bile, digestive enzymes and acidic environments. Virus factors Viral genetics encoding viral factors will determine the degree of viral pathogenesis. 
This can be measured as virulence, which can be used to compare the quantitative degree of pathology between related viruses. In other words, different virus strains possessing different virus factors can lead to different degrees of virulence, which in turn can be exploited to study the differences in pathogenesis of viral variants with different virulence. Virus factors are largely influenced by viral genetics, which is the virulence determinant of structural or non-structural proteins and non-coding sequences. For a virus to successfully infect and cause disease in the host, it has to encode specific virus factors in its genome to overcome the preventive effects of physical barriers, and modulate host inhibition of virus replication. In the case of poliovirus, all vaccine strains found in the oral polio vaccine contain attenuating point mutations in the 5' untranslated region (5' UTR). Conversely, the virulent strain responsible for causing polio disease does not contain these 5' UTR point mutations and thus display greater viral pathogenicity in hosts. Virus factors encoded in the genome often control the tropism, routes of virus entry, shedding and transmission. In polioviruses, the attenuating point mutations are thought to induce a replication and translation defect to reduce the virus' ability of cross-linking to host cells and replicate within the nervous system. Viruses have also developed a variety of immunomodulation mechanisms to subvert the host immune response. This tend to feature virus-encoded decoy receptors that target cytokines and chemokines produced as part of the host immune response, or homologues of host cytokines. As such, viruses capable of manipulating the host cell response to infection as an immune evasion strategy exhibit greater pathogenicity. Host factors Viral pathogenesis is also largely dependent on host factors. Several viral infections have displayed a variety of effects, ranging from asymptomatic to symptomatic or even critical infection, solely based on differing host factors alone. In particular, genetic factors, age and immunocompetence play an important role is dictating whether the viral infection can be modulated by the host. Mice that possess functional Mx genes encode an Mx1 protein which can selectively inhibit influenza replication. Therefore, mice carrying a non-functional Mx allele fail to synthesise the Mx protein and are more susceptible to influenza infection. Alternatively, immunocompromised individuals due to existing illnesses may have a defective immune system which makes them more vulnerable to damage by the virus. Furthermore, a number of viruses display variable pathogenicity depending on the age of the host. Mumps, polio, and Epstein-Barr virus cause more severe disease in adults, while others like rotavirus cause more severe infection in infants. It is therefore hypothesized that the host immune system and defense mechanisms might differ with age. Disease mechanisms: How do viral infections cause disease? A viral infection does not always cause disease. A viral infection simply involves viral replication in the host, but disease is the damage caused by viral multiplication. An individual who has a viral infection but does not display disease symptoms is known as a carrier. Damage caused by the virus Once inside host cells, viruses can destroy cells through a variety of mechanisms. Viruses often induce direct cytopathic effects to disrupt cellular functions. 
This could be through releasing enzymes to degrade host metabolic precursors, or releasing proteins that inhibit the synthesis of important host factors, proteins, DNA and/or RNA. Namely, viral proteins of herpes simplex virus can degrade host DNA and inhibit host cell DNA replication and mRNA transcription. Poliovirus can inactivate proteins involved in host mRNA translation without affecting poliovirus mRNA translation. In some cases, expression of viral fusion proteins on the surface of the host cells can cause host cell fusion to form multinucleated cells. Notable examples include measles virus, HIV, respiratory syncytial virus. Importantly, viral infections can differ by the "lifestyle strategy". Persistent infections happen when cells continue to survive despite a viral infection and can be further classified into latent (only the viral genome is present, there is no replication occurring) and chronic (basal levels of viral replication without stimulating an immune response). In acute infections, lytic viruses are shed at high titres for rapid infection to a secondary tissue/host, whereas persistent viruses undergo shedding at lower titres for a longer duration of transmission (months to years). Lytic viruses are capable of destroying host cells by incurring and/or interfering with the specialised functions of host cells. An example would be the triggering of necrosis in host cells infected with the virus. Otherwise, signatures of viral infection, like the binding of HIV to co-receptors CCR5 or CXCR4, can also trigger cell death via apoptosis through host signalling cascades by immune cells. However, many viruses encode proteins that can modulate apoptosis depending on whether the infection is acute or persistent. Induction of apoptosis, such as through interaction with caspases, will promote viral shedding for lytic viruses to facilitate transmission, while viral inhibition of apoptosis could prolong the production of virus in cells, or allow the virus to remain hidden from the immune system in chronic, persistent infections. Nevertheless, induction of apoptosis in major immune cells or antigen-presenting cells may also act as a mechanism of immunosuppression in persistent infections like HIV. The primary cause of immunosuppression in HIV patients is due to the depletion of CD4+ T helper cells. Interestingly, adenovirus has an E1A protein to induce apoptosis by initiating the cell cycle, and an E1B protein to block the apoptotic pathway through inhibition of caspase interaction. Persistent viruses can sometimes transform host cells into cancer cells. Viruses such as the human papillomavirus (HPV), human T-lymphotropic virus (HTLV) etc., can stimulate growth of tumours in infected hosts, either by disrupting tumour suppressor gene expression (HPV) or upregulating proto-oncogene expression (HTLV). Damage caused by host immune system Sometimes, instead of cell death or cellular dysfunction caused by the virus, the host immune response can mediate disease and excessive inflammation. The stimulation of the innate and adaptive immune system in response to viral infections destroys infected cells, which may lead to severe pathological consequences to the host. This damage caused by the immune system is known as virus-induced immunopathology. Specifically, immunopathology is caused by the excessive release of antibodies, interferons and pro-inflammatory cytokines, activation of the complement system, or hyperactivity of cytotoxic T cells. 
Secretion of interferons and other cytokines can trigger cell damage, fever and flu-like symptoms. In severe cases of certain viral infections, as in avian H5N1 influenza in 2005, aberrant induction of the host immune response can elicit a massive release of cytokines known as a cytokine storm. In some instances, viral infection can initiate an autoimmune response, which occurs via different proposed mechanisms: molecular mimicry and the bystander mechanism. Molecular mimicry refers to an overlap in structural similarity between a viral antigen and a self-antigen. The bystander mechanism hypothesizes the initiation of a non-specific and overreactive antiviral response that tackles self-antigens in the process. Damage caused by the host itself due to autoimmunity has been observed in West Nile virus infection. Incubation period Viruses display variable incubation periods upon virus entry into the host. The incubation period refers to the time taken for the onset of disease after first contact with the virus. In rabies virus infection, the incubation period varies with the distance traversed by the virus to the target organ; but in most viruses the length of incubation depends on many factors. Surprisingly, generalised infections by togaviruses have a short incubation period due to the direct entry of the virus into target cells through insect bites. There are several other factors that affect the incubation period. The mechanisms behind long incubation periods, months or years for example, are not completely understood yet. Evolution of virulence Some relatively avirulent viruses in their natural host show increased virulence upon transfer to a new host species. When an emerging virus first invades a new host species, the hosts have little or no immunity against the virus and often experience high mortality. Over time, a decrease in virulence in the predominant strain can sometimes be observed. A successful pathogen needs to spread to at least one other host, and lower virulence can result in higher transmission rates under some circumstances. Likewise, genetic resistance against the virus can develop in a host population over time. An example of the evolution of virulence in emerging viruses is the case of myxomatosis in rabbits. The release of wild European rabbits in 1859 into Victoria, Australia for sport resulted in a rabbit plague. In order to curb rabbit overpopulation, myxoma virus, a lethal species-specific poxvirus responsible for myxomatosis in rabbits, was deliberately released in South Australia in 1950. This led to a 90% decrease in rabbit populations, and the disease became endemic in a span of five years. Significantly, severely attenuated strains of the myxoma virus were detected within merely two years of its release, and genetic resistance in rabbits emerged within seven years. See also Virology Glossary of virology Pathogen Pathogenesis List of human diseases associated with infectious pathogens References Virology
Hemotympanum
Hemotympanum, or hematotympanum, refers to the presence of blood in the tympanic cavity of the middle ear, the area behind the eardrum. Hemotympanum is often the result of a basilar skull fracture. In most cases, the blood is trapped behind the eardrum, so no discharge is visible. Treating hemotympanum depends on the underlying cause. Presentation The most common symptoms of hemotympanum are: pain sense of fullness in the ear hearing loss Causes Skull fracture A basal skull fracture is a fracture in one of the bones at the base of the skull. This is almost always caused by impact trauma such as a hard fall or a car crash. If the temporal bone is affected, one of the following may co-occur: Auricular cerebrospinal fluid discharge Dizziness Bruises around the eyes or behind the ears Facial weakness Difficulty seeing, smelling, or hearing Nasal packing Following nasal surgery or frequent nosebleeds, gauze or cotton may be inserted into the nose to stop the bleeding. This process is called therapeutic nasal packing. Nasal packing sometimes causes blood to back up into the middle ear, causing hemotympanum. Removing the packing may allow the blood to drain from the ear. Antibiotics can prevent an ear infection. Bleeding disorders Bleeding disorders, such as hemophilia or idiopathic thrombocytopenic purpura, can also cause hemotympanum. These disorders prevent proper blood clotting. In that circumstance, a mild head injury or a strong sneeze can cause hemotympanum. Anticoagulant medications Anticoagulants, often called blood thinners, are medications that keep blood from clotting too easily. In rare cases, anticoagulants can cause hemotympanum with no underlying cause or injury. Experiencing a head injury while taking anticoagulants increases the likelihood of hemotympanum. Ear infections Frequent ear infections, ongoing inflammation and fluid buildup can increase the risk of hemotympanum. Treatment Skull fractures usually heal on their own, but they can also cause several complications. Cerebrospinal fluid leaking out of the ear carries a higher risk of developing meningitis. Treatment may include corticosteroids, antibiotics, or surgery. References Diseases of the ear and mastoid process
Collagen loss
Collagen loss is the gradual decrease of levels of collagen in the body. Collagen is the main structural protein found in the body's various connective tissues (skin, bones, tendons, etc.) where it contributes to much of their strength and elasticity. Collagen loss occurs naturally as a part of aging, but can also be influenced by environmental factors such as exposure to ultraviolet light, tobacco, and excessive intake of sugar. Collagen loss is highly visible in the skin where it can cause the skin to lose elasticity, reduction of the thickness of the epidermis, an increase in the formation of wrinkles and sagging and also make the skin vulnerable and easily damaged. Prevalent throughout the body, loss of collagen can also contribute to numerous other disorders such as joint pain, weakened hair and nails, reduced bone density, gastrointestinal issues, and reduced muscle mass. Numerous interventions exist to address the loss of collagen with varying levels of efficacy and evidentiary support. Collagen Collagen is the main structural protein in the extracellular matrix found in the body's various connective tissues. It is a rigid, non-soluble, fibrous protein that adds up to one-third of the proteins in the human body. Collagen is mostly made up of molecules packed together to form long and thin fibrils that support each other and ensure the skin is strong and elastic. Various types of collagens have individual roles and structures. Most collagen belongs to types 1, 2, and 3. Collagen consists mainly of amino acids and can be mostly found in tendons, muscles, bones, skin, ligaments, and other fibrous tissues. It helps keep the skin strong and supple and sustains the renewal of skin cells and the replacement of damaged and dead body cells. Collagen tissues provide support for the formation of bones, tendons, and cartilage, which depends on their level of mineralization. Molecular mechanisms in skin aging Many dissimilar models have been used to explain skin aging on a molecular basis, such as the theory of cellular senescence, the reduction of the cells' DNA repair capacity, the loss of telomeres, oxidative stress, etc. It is believed that external factors cause a large portion of skin aging, while only 3% is caused by hereditary genetic influences. The following sections discuss prominent models and advancements in molecular mechanism studies related to skin aging. Oxidative stress Oxidative stress results from the lack of balance between the systemic production of reactive oxygen species (ROS) and the biological system's capacity to detoxify them or repair the resulting damage. It is known that reactive oxygen species take part in dermal changes taking place outside the cells in both aging caused by internal factors and those caused by external factors. ROS can be created within many dissimilar sources, which include the mitochondria, endoplasmic reticulum and peroxisomes. In normal conditions the binding of ligands to receptor tyrosine kinases (RTKs) activates them, while the various actions of RTKs on the cells' surface are repressed by receptor protein tyrosine phosphatases. DNA damage Exposure to ultraviolet rays damages DNA, which may disrupt the function of the genes that play a role in the skin stem cells' homeostasis. Mutations in DNA from frequent exposure to UV radiation may result in aging prematurely or carcinogenesis. When DNA absorbs photons in the UV-B range the nucleotide arrangement structurally changes which leads to the DNA strands having defects. 
Lower species can repair this DNA damage using the photolyase enzyme, but higher species lack this enzyme. In human cells, repair can be achieved through the nucleotide excision repair pathway; when the associated proteins are deficient, the skin becomes susceptible to premature aging. Telomere shortening Telomeres are repeating nucleotide sequences that cap chromosomes. They protect chromosomes from degradation and recombination abnormalities. Their length decreases with every division of the cell, which eventually results in cellular senescence. They are critical structures at the end of the eukaryotic chromosomes, consisting of many copies of G-rich repeats. Without telomeres, chromosomes fuse and cause genomic instability. The enzyme that lengthens telomeres to prevent them from becoming too short is called telomerase. Deficiency of this enzyme can hasten telomere shortening, which can impair regeneration of the tissue. This also suppresses the production of epidermal cells. Also, exposure to UV radiation causes mutations to telomeres, and sufficient exposure can result in the death of cells. "Inflammaging" "Inflammaging" refers to a chronic, sterile, low-grade inflammation that develops with advanced age. It affects the start and progression of diseases that occur due to aging, e.g., type 2 diabetes. It occurs in the skin because UV radiation damages the epidermal cells, which in turn triggers inflammation. Increase in age When an individual ages, the outermost layer of skin becomes thin despite the number of cell layers remaining unchanged. The number of cells that contain pigment decreases, and the melanocytes that remain increase in size. This is why aging skin looks thin, pale, and translucent. Large pigmented spots may appear in areas exposed to sunlight. The various alterations in the skin and underlying connective tissue may decrease its strength and elasticity. Also, the blood vessels in the outer skin become more delicate, which can result in bruises and bleeding under the skin's surface. The sebaceous glands also secrete decreased amounts of oil with age. Men experience this shortage mostly after reaching the age of 80 years. Women may slowly begin secreting less oil after menopause, making it difficult to keep the skin moist. The subcutaneous fat layer also decreases, reducing the insulation and padding capability of the skin. This can put the individual at risk of an injury and makes maintaining body temperature difficult. The sweat glands also reduce the amount of sweat they produce, making the individual's body harder to cool. Lifestyle habits Excessive sugar intake Excessive sugar intake can negatively impact the body, including damage to collagen. Excess sugar consumption results in glycation, which produces AGEs (advanced glycation end products). This occurs naturally, and when too much sugar is consumed, the AGE molecules stick to the collagen molecules, making them stiff and thus damaging them. The process of glycation not only damages the collagen existing in the body but also alters its stability. When an individual consumes excessive amounts of sugar, the glycation process converts collagen into an unstable type 1, which becomes more vulnerable and can be easily broken down, potentially leading to premature aging. Tobacco usage The use of tobacco can cause damage to the skin's collagen layer. 
It can cause the skin around the lips to lose collagen when in contact with the smoke or due to puckering of the lips around the cigarette. It may also cause blood vessels to constrict and reduce blood flow. Due to this, perioral collagen (connective tissue around the mouth below the skin) may show signs of damage. When collagen is lost in large amounts, it may cause wrinkles to emerge. Tobacco use can also result in slow collagen healing. Treatment for collagen loss There are various ways in which an individual can treat the loss of collagen. Dietary changes may increase the turnover of cells and increase the creation of collagen. One can also adopt exercises that stimulate the production of collagen and also increase their intake of vitamin D. Moreover, applying an adequate amount of sunscreen can prevent UV rays from the sun from causing damage to your skin. You can also protect yourself from some of the causatives that break down collagen. Avoid spending too much time in the sun, apply sunblock, avoid smoking tobacco, drink plenty of water to prevent dehydration, and participate in stress-relieving activities. Stress is known to cause skin aging. Various other interventions can aid in preserving healthy, youthful skin. Taking vitamins C and A can provide a boost to collagen production in the body. To maintain healthy skin, individuals can nurture and protect the collagen present in their bodies by consuming nutritious foods rich in the necessary vitamins, minerals, and amino acids. This promotes collagen production and reduces cellular damage within the body. References Aging-associated diseases
Prepuce
Prepuce, or as an adjective, preputial, refers to two homologous structures of male and female genitals: Foreskin, skin surrounding and protecting the head of the penis in humans Penile sheath, skin surrounding and protecting the head of the penis in other mammals Clitoral hood, skin surrounding and protecting the head of the clitoris in humans Clitoral sheath, skin surrounding and protecting the head of the clitoris in other mammals
Rape paralysis
In human sexuality, paralysis, also known as rape paralysis, involuntary paralysis, fright (or faint), or tonic immobility, is a natural bodily survival reaction which can be automatically activated by the brain of a person who feels threatened by sexual violence. During this paralysis, one cannot move and cannot say anything, until one feels safe enough again. This survival reaction is a reflex; it automatically occurs without one's conscious choice, and one cannot stop it from happening. Paralysis is a survival reaction which the brain applies to the body whenever all other options to avoid sexual violence (prevent, freeze (hypervigilance), flight, fight, compromise) have been exhausted. In modern science, more and more is understood about when, how, and why paralysis occurs. However, public awareness about paralysis is still limited, which has negative consequences for the prevention, punishment and processing of sexual violence. Paralysis is sometimes also called freezing, although scholars prefer avoiding this word usage to prevent confusion with the 'freeze' (hypervigilance) response that usually precedes it (see below). Scientific explanations In the scientific and scholarly literature, distinctions are made between several survival reactions which humans (and sometimes non-human animals) either consciously or unconsciously employ in order to survive when confronted with a potentially life-threatening situation. Terms used include: Prevent (or avoid) Freeze (also known as hypervigilance: to be cautious, aware or alert) Flight Fight Compromise (or keeping the peace) Fright, faint, paralysis, tonic immobility, or playing dead In 1988, psychologist J. A. Gray was the first to propose the sequence freeze, flight, fight, fright. He built on the existing concept in psychology (and later biology) of combining the responses flight and fight as a "fight-or-flight response" (first suggested by physiologist Walter Bradford Cannon in 1929; later scientists concluded that the usual sequence is first flight, and only then fight). A person sometimes still has the option of trying to keep the peace and negotiate a compromise with the person threatening them; by cooperating and offering concessions, the threatened person thereby tries to contain the damage that the aggressor is seeking to inflict on them. Paralysis or tonic immobility is the action threatened humans and animals perform whenever all other options have been exhausted: in physical contact with the aggressor, they pretend that they are dead, and thus attempt to survive the dangerous situation. Burgess & Holmstrom (1976) proposed the term rape paralysis as a synonym; in the early 21st century the term tonic immobility became more common. Dutch psychologist Agnes van Minnen (2017) proposed prevent or avoid (voorkomen) as an extra strategy which precedes freeze or hypervigilance: try to prevent/avoid ending up in dangerous situations in the first place. In child psychology, the terms freezing or freeze have sometimes been applied to the last phase of fright (tonic immobility, paralysis), but because the earlier phase of freeze (hypervigilance, being alert) has already been described by that word in ethology, this has caused a lot of confusion. There is a fixed logic behind this sequence of survival reactions: the brain automatically considers all available options, ordered from the reaction carrying the smallest risk of damage to the body to the reaction carrying the greatest risk. 
As soon as danger is detected, all possibilities are considered, and the safest available option is often employed unconsciously within milliseconds as a reflex. Paralysis is employed whenever all other options have been exhausted, and the brain decides to undergo the looming sexual violence in hopes of protecting the body against death. For example, if the threatened person would run too great a risk of being killed by trying to fight back against the aggressor, the brain could decide on paralysis in order to allow the body to survive. Aside from humans, tonic immobility is also a survival response in all other mammals, which is applied whenever fleeing or fighting would increase the risk of dying and would therefore not be the best options (anymore). Therefore, scientists think that tonic immobility as a survival response is the best explanation of why some humans paralyse when threatened by or during sexual violence. Prevalence A 2017 Scandinavian study reported that 70% of the 298 women who had visited an emergency clinic within a month of experiencing sexual violence had experienced 'significant tonic immobility' (paralysis) when it happened. 48% even reported 'extreme tonic immobility' during the sexual assault. Moreover, 189 (almost two-thirds) of the women developed post-traumatic stress disorder (PTSD) and major depression. Social issues Awareness of paralysis In modern societies, a large portion of the population does not yet know what paralysis (sometimes called freezing) is, when it happens and how often. For example, a 2021 Dutch survey by I&O Research, commissioned by Amnesty International and involving 1,059 Dutch-speaking students, showed that 22% had never heard of freezing (in the sense of 'paralysis') before, and 25% had heard of it, but did not know exactly what it meant; the remaining 53% did know. 59% of the students aged 18 or younger did not know what it was; 42% had never even heard of it. However, the older the students, the more they knew about it (61% of those aged 25 and older knew what paralysis was). Moreover, only 33% of the students who did not know from personal experience or from others what sexual penetration without consent was, knew what freezing was. The survey also showed that 29% of the men had never heard of it (26% had, but did not know what it was), while only 15% of women had never heard of it (23% did, without knowing what it was). Finally, many students felt that someone should clearly say 'no' if they do not want sexual penetration, even if they knew what paralysis was and that a paralysed person is unable to say 'no'. The biggest difference was between the 145 women who knew what paralysis was and felt that you (therefore) do not have to say 'no' if you do not want sex (36% of all women who knew what paralysis was) and the 91 men who had never heard of paralysis and felt that you should clearly say 'no' if you do not want sex (77% of all men who had never heard of paralysis). Although men can also be victims of sexual violence and can also be paralysed by fear, it happens to women more often, usually at the hands of male perpetrators, although there are also female perpetrators. 
There is a risk that, if the initiator has asked the other person verbally or non-verbally whether the other wanted to have sex, or if the initiator had indicated their own wish to have sex, the fact that the other becomes paralysed by fear and thus is unable to say 'no' or resist, will be interpreted by the initiator as meaning that the other person does not object to sex. The initiator could falsely believe that silence means consent and proceed to initiate sexual acts with the paralysed person. In this way, it is possible that people unintentionally rape or assault a paralysed person without realising it (known as 'negligent rape' and 'negligent sexual assault', respectively). Furthermore, the assumption that every person could say 'no' or resist at any moment if they did not want to have sex, could afterwards lead the perpetrator to blame the victim for not having objected to their advances. Consequences for potential victims On the other hand, potential victims are often unprepared for a scenario in which they will become paralysed, and unable to say 'no' or physically resist anymore. As soon as they find themselves in that situation, it is too late. Afterwards, many victims (also known as survivors) do not understand what happened, and why they could not say or do anything to communicate that they did not want to have sex. The consequence is that they often blame themselves for being sexually assaulted because they expected to have been able to do something about it but conclude that they failed to do so (self-victim-blaming). This could lead to great shame, the tendency to tell nobody what has happened, attempts to forget the traumatic experience and erase all traces of it (including matters which could have been used as evidence against the perpetrator). Legal issues A lack of knowledge about paralysis amongst legislators and lawyers can lead to a failure to consider sexual scenarios in which paralysis occurs. On the one hand, this could lead to legislation based on the idea that rape or assault is always accompanied by violence or coercion from the perpetrator and/or always accompanied by resistance from the victim. Such coercion-based legislation falls short in cases where paralysis prevents the victim from resisting and thus the perpetrator does not have to use force or coercion to perform sexual acts with the person who does not want to. According to such a law, no crime has been committed and the perpetrator cannot be prosecuted. As a result, there is often no legal protection for victims of sexual violence who become paralysed. To remediate this issue, several countries not only define sexual violence by force or coercion, but also by psychological pressure and/or the defenselessness of the victim. Such legislation not only captures paralysis, but also cases where the victim was intoxicated. Another possible solution to this problem is to base legislation about sexual violence on a lack of consent. According to this approach, the requirement that the other person communicates consent and actually does so, is the best way to ensure that the person actually wants to have sex. If the initiator does not get a response from the other person, then the initiator may decide it is better not to engage in sexual acts just to be on the safe side in case the silence is misinterpreted. 
Consent-based legislation eliminates the requirement to prove that rape or assault involved violence or coercion by the perpetrator, or resistance by the victim; such resistance is often made impossible by involuntary paralysis, which prevents that requirement from being met under coercion-based legislation. See also Bodily integrity, legal principle according to which each person decides for themselves what does and does not happen to their own body Catatonia, a cluster of differing symptoms including the inability to speak and to move one's body Lex Feri, the case of a Czech parliamentarian convicted of rape, which led to a change of the criminal code so that it also covers rape paralysis Muteness, the inability to speak in certain situations Post-assault treatment of sexual assault victims Rape Sexual consent, agreement to engage in sexual acts Sexual consent in law, legal relevance of consent Sexual violence, the cause of paralysis Stupor, brain state coupled with immobility of the body Notes References Literature Reflexes Sexual violence
Globus
Globus is Latin for sphere or globe. It may also refer to: Science and technology Globus pallidus, a sub-cortical structure in the brain Globus pharyngis (also globus sensation or globus hystericus), a feeling of a lump at the back of the throat GLOBUS, a radar system in Norway Voskhod Spacecraft "Globus" IMP navigation instrument Business Globus Medical, a medical device company in Audubon, PA Globus (clothing retailer), an Indian clothing retail store Globus (company), a Swiss department store chain Globus (hypermarket), a hypermarket chain in Germany, the Czech Republic and Russia Transportation Tata Globus, a range of buses by Tata Motors Globus Airlines, a Russian airline Globus family of brands, a group of travel package companies Media Globus (weekly), a political magazine published in Croatia Globus (Macedonian magazine) People Amy Globus, American artist and entrepreneur Globus, nickname of Odilo Globocnik, a World War II Nazi and SS leader Solomon Globus (born 1856), Lithuanian chess master Stephen Globus, New York City venture capitalist Yoram Globus (born 1941), Israeli film producer Other uses Globus (music), a movie trailer music-inspired band Globus cruciger, an orb topped with a cross, a Christian symbol Globus Institute for Globalisation and Sustainable Development at Tilburg University, the Netherlands See also Globe (disambiguation) Gobus (disambiguation)
Residue (chemistry)
In chemistry, residue is whatever remains or acts as a contaminant after a given class of events. Residue may be the material remaining after a process of preparation, separation, or purification, such as distillation, evaporation, or filtration. It may also denote the undesired by-products of a chemical reaction. Residues as an undesired by-product are a concern in agricultural and food industries. Food safety Toxic chemical residues, wastes or contamination from other processes, are a concern in food safety. The most common food residues originate from pesticides, veterinary drugs, and industrial chemicals. For example, the U.S. Food and Drug Administration (FDA) and the Canadian Food Inspection Agency (CFIA) have guidelines for detecting chemical residues that are possibly dangerous to consume. In the U.S., the FDA is responsible for setting guidelines while other organizations enforce them. Environmental concerns Similar to the food industry, in environmental sciences residue also refers to chemical contaminants. Residues in the environment are often the result of industrial processes, such as escaped chemicals from mining processing, fuel leaks during industrial transportation, trace amounts of radioactive material, and excess pesticides that enter the soil. Characteristic units within a molecule Residue may refer to an atom or a group of atoms that form part of a molecule, such as a methyl group. Biochemistry In biochemistry and molecular biology, a residue refers to a specific monomer within the polymeric chain of a polysaccharide, protein or nucleic acid. In proteins, the carboxyl group of one amino acid links with the amino group of another amino acid to form a peptide. This results in the removal of water and what remains is called the residue. Naming of residues is done by replacing "acid" with "residue". A residue's properties will influence interactions with other residues and the overall chemical properties of the protein it resides in. One might say, "This protein consists of 118 amino acid residues" or "The histidine residue is considered to be basic because it contains an imidazole ring." Note that a residue is different from a moiety, which, in the above example would be constituted by the imidazole ring or the imidazole moiety. References Distillation
Aerenchyma
Aerenchyma or aeriferous parenchyma or lacunae, is a modification of the parenchyma to form a spongy tissue that creates spaces or air channels in the leaves, stems and roots of some plants, which allows exchange of gases between the shoot and the root. The channels of air-filled cavities (see image to right) provide a low-resistance internal pathway for the exchange of gases such as oxygen, carbon dioxide and ethylene between the plant above the water and the submerged tissues. Aerenchyma is also widespread in aquatic and wetland plants which must grow in hypoxic soils. The word "aerenchyma" is Modern Latin derived from Latin for "air" and Greek for "infusion." Aerenchyma formation and hypoxia Aerenchyma (air-filled cavities) occur in two forms. Lysigenous aerenchyma form via apoptosis of particular cortical root cells to form air-filled cavities. Schizogenous aerenchyma form via decomposition of pectic substances in the middle lamellae with consequent cell separation. When soil is flooded, hypoxia develops, as soil microorganisms consume oxygen faster than diffusion occurs. The presence of hypoxic soils is one of the defining characteristics of wetlands. Many wetland plants possess aerenchyma, and in some, such as water-lilies, there is mass flow of atmospheric air through leaves and rhizomes. There are many other chemical consequences of hypoxia. For example, nitrification is inhibited as low oxygen occurs and toxic compounds are formed, as anaerobic bacteria use nitrate, manganese, and sulfate as alternative electron acceptors. The reduction-oxidation potential of the soil decreases and metal oxides such as iron and manganese dissolve, however, radial oxygen loss allows re-oxidation of these ions in the rhizosphere. In general, low oxygen stimulates trees and plants to produce ethylene. Advantages The large air-filled cavities provide a low-resistance internal pathway for the exchange of gases between the plant organs above the water and the submerged tissues. This allows plants to grow without incurring the metabolic costs of anaerobic respiration. Moreover, the degradation of cortical cells during aerenchyma formation reduce the metabolic costs of plants during stresses such as drought. Some of the oxygen transported through the aerenchyma leaks through root pores into the surrounding soil. The resulting small rhizosphere of oxygenated soil around individual roots support microorganisms that prevent the influx of potentially toxic soil components such as sulfide, iron, and manganese. References Plant physiology Plant cells Wetlands de:Parenchyma#hym
Bioorganic chemistry
Bioorganic chemistry is a scientific discipline that combines organic chemistry and biochemistry. It is the branch of life science that deals with the study of biological processes using chemical methods. Protein and enzyme function are examples of these processes. Sometimes biochemistry is used interchangeably with bioorganic chemistry; the distinction being that bioorganic chemistry is organic chemistry that is focused on the biological aspects. While biochemistry aims at understanding biological processes using chemistry, bioorganic chemistry attempts to expand organic-chemical research (that is, structures, synthesis, and kinetics) toward biology. When investigating metalloenzymes and cofactors, bioorganic chemistry overlaps with bioinorganic chemistry. Subdisciplines Biophysical organic chemistry is a term used when attempting to describe intimate details of molecular recognition by bioorganic chemistry. Natural product chemistry is the process of identifying compounds found in nature to determine their properties. Such discoveries have often led to medicinal uses and to the development of herbicides and insecticides. References Biochemistry
Deficiency (medicine)
In medicine, a deficiency is a lack or shortage of a functional entity, by less than normal or necessary supply or function. A person can have chromosomal deficiencies, mental deficiencies, nutritional deficiencies, complement deficiencies, or enzyme deficiencies. Nutritional deficiency Protein-energy malnutrition (PEM) is a condition where people consume very little in the way of energy, proteins, or both in their diets; as a result, it is common in developing nations. The two main illnesses associated with this condition are kwashiorkor, which is characterized by severe protein deficiency, and marasmus, which is total food deprivation with abnormally low amounts of protein and energy. Carbohydrates deficiency Certain human body cells, such as neurons, require high glucose concentrations. When there are insufficient carbohydrates in the diet, the breakdown of body proteins, dietary proteins, and glycerol from fats is what drives gluconeogenesis. Most gluconeogenesis occurs in the liver. A condition known as ketosis (increased ketones production), which is characterized by a strangely sweet-smelling patient, may result from a prolonged shortage of carbohydrates. Essential fatty acids deficiency The essential fatty acids (EFA) omega-3 and omega-6 are polyunsaturated. Clinical signs of an EFA deficiency include stunted growth in kids and babies, a scaly, dry rash, slowed wound healing and heightened susceptibility to infections. Enzyme deficiency Enzymes are unique protein subtypes that are needed during metabolism, the process by which the body obtains energy for regular growth and development, to break down food molecules into fuel. A variety of conditions that can change or even endanger life are caused by inherited defects known as enzyme deficiencies, or the lack of these enzymes. Enzyme deficiencies include Niemann-Pick disease, Lysosomal storage diseases, and Mucopolysaccharidoses. See also Complement deficiency References Medical terminology Human diseases and disorders
Acute muscle soreness
Acute muscle soreness (AMS) is the pain felt in muscles during, and immediately (up to 24 hours) after, strenuous physical exercise. The pain appears within a minute of contracting the muscle and it will disappear within two or three minutes or up to several hours after relaxing it. There are two causes of acute muscle soreness: Accumulation of chemical end products of exercise in muscle cells such as lactic acid and H+ Muscle fatigue (the muscle tires and cannot contract anymore) Cause Muscle soreness can stem from strain on the sarcomere, the muscle's functional unit, due to the mechanism by which nerves activate the unit, which causes calcium to accumulate and further degrade sarcomeres. This degradation initiates the body's inflammatory response, and the damaged fibres must be supported by the surrounding connective tissues. The inflammatory cells and cytokines stimulate the pain receptors that cause the acute pain associated with AMS. Repair of the sarcomere and the surrounding connective tissue leads to delayed onset muscle soreness, which peaks between 24 and 72 hours after exercise. AMS may also be caused by cramping following strenuous exercise, which has been theorized to be caused by two pathways: Dehydration Electrolyte imbalance Dehydration The dehydration theory states that the extracellular fluid (ECF) compartment contracts due to excessive sweating, causing its volume to decrease to the point where the muscles contract until the fluids can re-occupy the space. Excessive sweating also underlies the electrolyte imbalance theory, which holds that sweating disturbs the body's electrolyte balance, resulting in excitation of motor neurons and spontaneous discharge. The feeling of soreness can also be attributed to the lack of contraction from the muscle, which can lead to overexertion of the muscle. The decrease in contraction has been theorized to be caused by the high concentration of protons created by glycolysis. Excess protons displace calcium ions, which are used within the fibers to activate the sarcomere, resulting in a reduced contractile force. Electrolyte imbalance When exercising, lactic acid becomes lactate and H+ through glycolysis. With more lactic acid produced during the process, there will be a higher H+ concentration, thus lowering the blood's pH level. This low pH level will affect the energy production process through the inhibition of phosphofructokinase. Phosphofructokinase is a key enzyme in the glycolytic process, which produces energy. A higher concentration of H+ will also cause the loss of contractile force through the misplacement of calcium in muscle fiber, which will disturb the formation of the actin-myosin cross-bridge. Treatments There is conflicting research on treatments for muscle soreness. Stretching and muscle soreness Stretching immediately before or after a workout does provide some help, but is not significant enough to be considered a preventative measure. References Exercise physiology Acute pain
Disability in the United States
People with disabilities in the United States are a significant minority group, making up a fifth of the overall population and over half of Americans older than eighty. There is a complex history underlying the U.S. and its relationship with its disabled population, with great progress being made in the last century to improve the livelihood of disabled citizens through legislation providing protections and benefits. Most notably, the Americans with Disabilities Act is a comprehensive anti-discrimination policy that works to protect Americans with disabilities in public settings and the workplace. Definitions According to the Social Security Advisory Board, when the federal government first began provisioning funds for state-run disability assistance programs, eligible beneficiaries were defined as needing to be "totally and permanently disabled". In 1956, this definition was expanded by the Disability Insurance Program to describe disability as the "inability to engage in any substantial gainful activity by reason of any medically determinable physical or mental impairment which can be expected to result in death or to be of long-continued and indefinite duration." Critics indicated that this language limited the concept of disability to an occupational scope, and more holistic definitions were adopted as time passed. The modern consensus on disability within governmental, medical, sociological realms in the United States is that it includes impairments that either physically or mentally incapacitate individuals from engaging in significant life activities, or the perception of possessing such an impairment. For instance, in a 2013 study, the Centers for Disease Control and Prevention (CDC) evaluated disability across five dimensions: vision, cognition, mobility, self-care, and independent living. Specific conditions that fall under this umbrella vary circumstantially, however it is broadly accepted that disability includes, but is not limited to the following: Autism Autoimmune conditions (ex. lupus, fibromyalgia, rheumatoid arthritis, HIV/AIDS) Blindness or poor vision Cancer Cardiovascular or heart disease Celiac disease Cerebral palsy Deaf or hard of hearing Depression or anxiety Diabetes Epilepsy Gastrointestinal disorders (ex. Crohn's disease, Irritable bowel syndrome) Intellectual disability Missing limbs or partially missing limbs Nervous system conditions (ex. migraine headaches, MS, Parkinson's disease) Psychiatric conditions (ex. bipolar disorder, major depressive disorder, post-traumatic stress disorder, schizophrenia) History At a federal level, legislation pertaining to disability was limited in the 18th and 19th centuries, with notable laws at that time including an act for the relief of sick and disabled seamen, which was signed by John Adams in 1798. In the early 1900s eugenic sterilization laws were passed in several states, permitting governments to conduct forced sterilization on individuals with mental disorders. The 1927 Supreme Court case Buck v Bell upheld the constitutionality of such legislation, with such laws being banned nearly half a century later with the 1978 Federal Sterilization Legislation, although loopholes have been exploited with such sterilizations continuing into modern times. The burgeoning of disability rights legislation in the 1900s came after World War I with congress' establishment of the Rehabilitation Programs, which provided education and healthcare support to recovering veterans. 
Significant progress came in the 1930s with the presidential election of Franklin D. Roosevelt, who was physically disabled himself, and his signing of the Social Security Act. Progress towards disability justice came hand in hand with the civil rights movement in the latter half of the 20th century. In 1961 the American National Standards Institute published a document overviewing building accessibility limitations for physically disabled individuals, which supported the passage of the Architectural Barriers Act of 1968 and encouraged several states to adopt inclusive accessibility legislation in the '70s. Additionally in the 1960s, Medicaid and the Mental Retardation Facilities and Community Mental Health Centers Construction Act were passed, allocating funds towards healthcare and the development of statewide councils, advocacy frameworks, and post-secondary education pathways for disabled citizens. In the 1970s major anti-discrimination legislation was enacted with the repeal of the last "Ugly law", which permitted law enforcement to incarcerate people for appearing disabled, as well as the 1973 Rehabilitation Act and Individuals with Disabilities Education Act (IDEA), which prevented institutions that received public funding from discriminating on the basis of disability status. The 1980s saw an additional movement towards accessibility with the passage of the Air Carrier Access Act, the Fair Housing Amendments Act, and the Technology-Related Assistance for Individuals with Disabilities Act, and towards justice with the creation of the Civil Rights of Institutionalized Persons Act (CRIPA). In 1990, the Americans with Disabilities Act (ADA) served as a landmark bill outlining more comprehensive protections and accommodations for the disabled community, with other legislation introduced later that decade, such as the 1996 Telecommunications Act and the Ticket to Work and Work Incentives Improvement Act (TWWIIA), expanding upon the ADA. The turn of the millennium saw crucial Supreme Court cases such as Olmstead v. L.C. and Tennessee v. Lane, which upheld federally outlined disability rights. Demographics According to the Disability Status: 2019 - Census 2019 Brief, approximately 20 percent of Americans have one or more diagnosed psychological or physical disabilities. This percentage varies depending on how disabilities are defined. It may be helpful to note that disability in the United States is classified under different types of physical or mental impairments, which include one's ability to physically function, mental status (including decision-making skills and memory), the ability to see, whether one is self-sufficient, and finally, whether one depends on anyone to help with tasks. According to Census Brief 97-5, "About 1 in 5 Americans have some kind of disability, and 1 in 10 have a severe disability." These disadvantages do not only affect individuals who are disabled; their children, and possibly grandchildren, can also be left facing disadvantages in health care and education. The United States Census Bureau is legally charged with developing information on the type and prevalence of disability in the population. Statistics reveal that the highest percentage of individuals with a disability reside in southern states such as Texas, Florida, and Mississippi, and along the southern coast. The states with the fewest disabled individuals are in the west, including Wyoming, Colorado, and Utah. 
These disabled people are protected by three primary laws. They include the Americans with Disabilities Act, the Individuals with Disabilities Education Act, and Section 504 of the Rehabilitation Act. The primary purpose of collecting ACS data on disability is to help the U.S. Congress determine the allocation of federal funds and inform policies. It is also used to identify the characteristics of the disabled population of the United States. Determining the number and geographical location of people with disabilities is crucial for policies aimed at providing services like public transportation. There are other, smaller survey studies that provide some insight on disability in the U.S. Studies like the National Health Interview Survey, the Health and Retirement Study, the Behavioral Risk Factor Surveillance System, and the Health, Aging, and Body Composition (Health ABC) Study are used to infer valuable disability-related health characteristics in the U.S. population. While responses to these items are commonly referred to as "disability", it could be argued that the ACS does not directly measure disability; it uses self- and proxy-reports to evaluate perceived ability to perform functional tasks. Existing publications have delineated details on the U.S. population regarding disability by using information from the ACS. Publications have also outlined issues with disability data in the ACS. Research on disability continues to improve, and potential remedies are found for current methodological challenges. Because of its unique role regarding federal funding and policy, researchers from various fields (e.g., sociology, epidemiology, and government) make wide use of ACS data to better understand disability in the U.S. African Americans According to the 2000 U.S. Census, the African American community has the highest rate of disability in the United States at 20.8 percent, slightly higher than the overall disability rate of 19.4 percent. Given these statistics, it can be suggested that African Americans with disabilities experience the most severe underemployment, unemployment, and under-education compared to other disability groups. For instance, the 2015 American Community Survey indicates that African Americans who have disabilities live in poverty at a rate 1.5 to 2 times greater than other racial groups in America. Criminal justice Data obtained in the National Longitudinal Survey of Youth indicates that Black men with disabilities encounter the greatest cumulative probability of being arrested by age 28, in contrast to others with either different gender, disability, or race status. With respect to African American women re-entering society after serving time, a JHCPCU article identified disability, more specifically positive HIV/HCV status, as a major factor correlated with lowered usage of various modes of health care such as alternative and emergency care. Calls for civil rights and criminal justice reform with the Black Lives Matter movement have brought into the public eye how Black individuals with disabilities disproportionately experience police violence. Instances cited as police brutality, such as the 2018 killing of Marcus-David Peters, an unarmed Black man experiencing a mental health crisis, have motivated legislation such as Virginia's Mental Health Awareness Response and Community Understanding Services (MARCUS) alert bill, which would necessitate that cases of individual mental distress be attended to by both police and mental health professionals. 
Viral footage of the killing of Walter Wallace, a Black man with a history of mental illness who was killed during a police encounter, has helped bring the Black Disabled Lives Matter movement into the public eye. Education Work published in the Journal of African American History suggests that the enduring consequences of segregation and separation of special education classrooms can potentially have a negative compound effect on the quality of education for African American students with disabilities. An article in the Harvard Educational Review suggests that educational pedagogy designed to "cross-pollinate" across race and disability coalitions is an effective way to combat exclusion that particularly impacts Black disabled children. Employment According to a study published in the Journal of Disability Policy, Black people with disabilities experience significantly higher unemployment and lower monthly wages compared to the overall disabled community and general population. A research paper in the Journal of Applied Rehabilitation Counseling reported that counseling professionals identified that disability status can limit the employment prospects of Black and Latino offenders seeking work opportunities. Healthcare Disabled Black Americans face barriers to receiving comprehensive medical care to address their pre-existing health conditions. In studies published in the Journal of Applied Gerontology, it was noted that elderly African American women face a greater likelihood of acquiring disability in comparison to their white counterparts, and additionally are more likely to rely on Medicaid for coverage. The same journal articles established a correlation between being uninsured or utilizing Medicaid coverage and greater levels of disability. With respect to disabled youth, it was found that when controlling for socioeconomic and insurance status, Hispanic and Black children with disabilities were less likely to have received specialty medical care in comparison to children from other racial backgrounds, according to research published in the journal Pediatrics. Disparities Discrimination in employment The Rehabilitation Act of 1973 requires all organizations that receive government funding to provide accessibility programs and services. A more recent law, the Americans with Disabilities Act of 1990 (ADA), which came into effect in 1992, prohibits private employers, state and local governments, employment agencies, and labor unions from discriminating against qualified individuals with disabilities in job application procedures, hiring, firing, advancement, compensation, job training, or in the terms, conditions, and privileges of employment. This includes organizations like retail businesses, movie theaters, and restaurants. They must make reasonable accommodations for people with different needs. Protection is extended to anyone with (A) a physical or mental impairment that substantially limits one or more of the major life activities of an individual, (B) a record of such an impairment, or (C) being regarded as having such an impairment. The second and third criteria are seen as ensuring protection from unjust discrimination based on a perception of risk, just because someone has a record of impairment or appears to have a disability or illness (e.g. features which may be erroneously taken as signs of an illness). Employment protection laws make discrimination against qualified individuals with a disability illegal and may also require the provision of reasonable accommodation. 
Reasonable accommodations include changes in the physical environment, such as making facilities more accessible, but also include increasing job flexibility through job restructuring, part-time or modified work schedules, or reassignment to a vacant position. Though many hold attitudes that are more enlightened and informed than in past years, the word "disability" carries few positive connotations for most employers. Negative attitudes by employers toward potential employees with disabilities can lead to misunderstanding and discrimination. Healthcare disparities The disability paradox, a concept that recognizes the tendency for individuals without disabilities to perceive their disabled counterparts as having poorer livelihoods than disabled individuals would view themselves, is perpetuated in healthcare settings, with research published in the Handbook of Disability Studies identifying that practitioners award lower quality of life scores to disabled individuals than a member of the general population would. An article in the Kennedy Institute of Ethics Journal expresses that as a consequence physicians frequently go into consults with rigid, skewed perceptions of their patient's disabilities. Anita Ho, a bioethicist, argues that heightened practitioner confidence can harmfully impact disabled individuals in this regard as it increases the likelihood that patients may distrust their physicians or conversely place excessive faith in their care provider's insight into their condition. Disparities in healthcare access also impact disabled populations. For instance, most rural areas, especially in the Great Plains region, have little or no government-organized medical support infrastructure for the permanently disabled indigent population, which results in disability in the United States affecting individuals not only physically and mentally but socioeconomically as well. Poverty Investigations on the "poverty and disability nexus" have consistently shown that poverty and disability are correlated for all race-ethnic groups within the United States. Financial stability of people with disabilities would decrease the dependence on governmental support programs. Studies have been done with the U.S. Census Bureau data to examine the high prevalence of disabilities among welfare recipients. Thirteen percent of families with children under the age of 18, who are also receiving welfare benefits, had at least one child with a disability. Families with income below twice the poverty line were 50 percent more likely to have a child with a disability than those families with higher incomes. Children with disabilities from families with annual household incomes higher than $50,000 were more likely to attend higher education. Research suggests higher education does impact employment and income opportunities for people with disabilities. Near-equivalent employment opportunities and salaries for people with disabilities compared to their peers without disabilities have also been noted. While only one-fifth of people in the U.S. have at least a four-year college degree, some studies note possessing a four-year degree is the difference between absolute job security and joblessness. Public resources Social Security Administration The Social Security Administration (SSA) defines disability in terms of an individual's inability to perform substantial gainful activity (SGA), by which it means "work paying minimum wage or better". The agency pairs SGA with a list of medical conditions that qualify individuals for disability benefits. 
Individuals who are disabled may receive disability insurance benefits, which provide income when they must take time off work or cannot work because of the severity of their illness. SSDI and SSI are the two Social Security programs that assist with these payments. The SSA makes available to disabled Americans two forms of disability benefits: Social Security Disability Insurance (SSDI) and Supplemental Security Income (SSI). Briefly, SSDI operates somewhat like welfare, but applicants must have worked enough hours over their lifetime and must have paid Social Security taxes in order to be approved. This benefit is most useful for those whose disabilities or illnesses are less severe, because those who have been unable to work will not have accumulated the required work history. The benefit helps not only the individual but can also extend to their family members. To go more in depth, Social Security pays disability benefits to citizens who have worked long enough and have a medical condition that has prevented them from working, or is expected to prevent them from working, for at least 12 months or to end in death. SSI, the Supplemental Security Income program, pays benefits to individuals who have not been able to work or who have little income or capital. However, both programs require the same medical approval of the individual's disability. Some assistance ends if the beneficiary starts working. If that assistance (such as personal care or transportation) is necessary for work, it creates a welfare trap in which it is not possible to work despite potentially being willing and able. Some programs do provide incentives to work. Education K-12 Before the Individuals with Disabilities Education Act was passed, children with disabilities were at risk of not receiving a free, appropriate public education. Each law protecting disabled individuals covers different criteria, including how students are treated within schools, the standards schools must meet for K-12 education, and equal service from organizations that offer services to the public. For IDEA to apply, the child must first be determined to be able to benefit from public education. This benefit is not exclusively limited to school-aged children but applies to children with disabilities from infancy. Due to the societal stigma of disability, children with disabilities are sometimes excluded from activities in which other children are able to participate. Individuals with any type of disability, such as learning disabilities or physical disabilities, are covered by the Americans with Disabilities Act. The act ensures that no one is discriminated against outside their own home, including at work, at school, and anywhere in public; it does not cover private spaces such as one's home. Educators can hold students with disabilities to lower expectations, which impacts their future educational attainment. While protections for K-12 education are broad, Section 504 of the Rehabilitation Act helps clarify them: it ensures that individuals who attend school, play on school sports teams, or attend any on- or off-campus events are protected, provided the school is funded by the government. These students must be given accommodations, which other students may not receive, on account of their impairments. 
Under the Individuals with Disabilities Education Act, the school district must provide every disabled child with an Individualized Education Plan (IEP). The IEP is compiled by a team of school administrators and guardians and may include a child advocate, counselors, occupational therapists, or other specialists. The Individuals with Disabilities Education Act ensures that schooling is provided at no cost to the family, but children must qualify under IDEA and must fall within one of its specified categories of disability. The act allows any student or child to be assessed and to be provided with additional supports that apply to their condition. The IEP also evaluates the goals for the child and determines what needs to be done in order for those goals to be met. Children with disabilities who do not have a parent or guardian advocating on their behalf are not as well served in the education system as their peers with parent or guardian advocates. Transition preparation from K-12 education to post-secondary education or career was initially written into IDEA to begin at age 12, but in the existing law, transition preparation does not begin until age 16. While this law provides a maximum age at which to begin transition preparation, students with disabilities have been known to receive transition preparation at a younger age, as the states might mandate a younger age, or the IEP team might determine a younger age is appropriate to begin the transition preparation of the student. Some students with disabilities have noted not receiving any transition preparation at all. The transition services are to be designed to be results-oriented rather than outcome-oriented. This is to ensure the transition services are designed for the student's success. Students are intended to attend their transition planning meetings with the IEP team, yet not all students do. Some do attend, yet generally do not take a leadership role; only fourteen percent do. This places students with a disability in a passive role instead of an agentic role in their own life plans. In a 2007 study of a higher education institution located in the Midwestern United States, it was found that one-third of students with disabilities felt their transition preparation was lacking. Many in this group were unaware of laws that pertained to disability and higher education. This leaves them without an understanding of their learning needs and unable to advocate for themselves. Higher Education Self-advocacy plays an important role in the success of students with disabilities in higher education. While the examination of self-advocacy skills has been largely limited to their impact in academic settings, self-advocacy skills, or the lack thereof, also impact non-academic situations. A 2004 study noted that only 3 percent of students with disabilities had self-advocacy training. Students with disabilities who are confident about their disability identity and self-advocacy skills are more likely to disclose their disabilities and advocate for their needs when interacting with faculty and staff. Advocacy services are also provided to students: staff from different programs help college instructors understand the needs of the disabled students attending their classes. For example, when disabled students need extra time to complete their coursework, arrangements can be made to accommodate them. 
Students with disabilities who were embarrassed by their disability identity and did not understand their needs as learners looked to faculty and staff for solutions to accommodation needs. Education helps students with disabilities learn self-advocacy skills that affect their ability to advocate for their health, insurance, and other needs. In spite of IDEA and Section 504 providing support for the education of people with disabilities, the educational outcomes of people with disabilities vary significantly from the outcomes of people without disabilities. After high school, a 2005 study found students with disabilities enroll in postsecondary education, whether college, technical school, or vocational school, at a rate of 46 percent compared to a rate of 63 percent for students without disabilities. This rate is up 23 percent since 1990, when the Americans with Disabilities Act of 1990 was passed. At four-year degree-granting higher education institutions specifically, 27 percent of students with disabilities attend, compared to 54 percent of students without disabilities. High school completion and postsecondary education enrollment vary by disability type. Students with disabilities are responsible for advocating for their accommodations and needs as learners in higher education environments. Many higher education institutions have staff to work with students with disabilities on their accommodation requests. Higher education institutions vary in the process for obtaining accommodations and in the accommodations provided. The staff members at the higher education institutions can recommend accommodations. The faculty members, however, may choose to vary the accommodations or not implement them at all, based on concerns about weakening the academic integrity of the course or risking the possibility of endless accommodation requests. When working with faculty members about accommodations, nearly half of the students with disabilities recalled receiving a negative response, while the other half felt their faculty members were accommodating. For people with disabilities, having a four-year college degree provides significant employment and salary advantages. Employment and minimum wage exemption More than 56 million Americans are enrolled in Medicare. Disabled citizens in the United States receive Medicare insurance and Social Security benefits to varying degrees. For those who seek employment for therapeutic or economic reasons, the Fair Labor Standards Act of 1938 is applicable. This act was an attempt to facilitate the employment of the large number of disabled servicemen returning from the front lines "to the extent necessary to prevent curtailment of opportunities for employment". Section 14(c) provides employers with a method of paying their disabled employees less than the applicable federal minimum wage. The Secretary of Labor issues certificates that align wages with the employee's productivity. There are 420,000 §14(c) employees being paid less than the minimum wage of $7.25 per hour. Administratively, the wage for disabled people was informally set at 75 percent of the minimum wage. Those working in sheltered work centers have no minimum floor for their wage. The Fair Wages for Workers with Disabilities Act was proposed in 2013 to repeal §14(c) but was not passed. Insurance It is illegal for California insurers to refuse to provide car insurance to properly licensed drivers solely because they have a disability. 
It is also illegal for them to refuse to provide car insurance "on the basis that the owner of the motor vehicle to be insured is blind", but they are allowed to exclude coverage for injuries and damages incurred while a blind unlicensed owner is actually operating the vehicle (the law is apparently structured to allow blind people to buy and insure cars which their friends, family, and caretakers can drive for them). Policies and legislation The Department of Labor's 2014 rules for federal contractors, defined as companies that make more than $50,000/year from the federal government, required them to set a goal that 7 percent of their workforce be people with disabilities. In schools, the ADA requires that all classrooms be wheelchair accessible. The U.S. Architectural and Transportation Barriers Compliance Board, commonly known as the Access Board, was created by the Rehabilitation Act of 1973 to help offer guidelines for transportation and accessibility for physically disabled people. About 12.6 percent of the U.S. population are individuals who have a mental or physical disability. Many are unemployed because of prejudiced assumptions that a person with disabilities is unable to complete tasks that are commonly required in the workforce. This became a major human rights issue because of the discrimination that this group faced when trying to apply for jobs in the U.S. Many advocacy groups protested against such discrimination, asking the federal government to implement laws and policies that would help individuals with disabilities. Rehabilitation Act of 1973 The Rehabilitation Act of 1973 was enacted with the purpose of protecting individuals with disabilities from prejudicial treatment by government-funded programs, employers, and agencies. The Rehabilitation Act of 1973 has not only helped protect U.S. citizens from being discriminated against but has also created confidence among individuals to feel more comfortable with their disability. The Rehabilitation Act of 1973 contains many sections that detail what is covered by the policy. Section 501 An employer must hire an individual who meets the qualifications of a job description despite any preexisting disabilities. Section 503 Requires contractors and subcontractors who receive more than $10,000 from the government to hire people with disabilities and to provide the accommodations they need to succeed in the workforce. Section 504 States that receive federal money may not discriminate against any person with disabilities who qualifies for a program or job. On June 22, 1999, the United States Supreme Court issued a ruling in Olmstead v. L.C. that said unjustified segregation of persons with disabilities constitutes discrimination in violation of Title II of the Americans with Disabilities Act. This has been interpreted as meaning people with disabilities must be given all opportunities by the government to stay in their own homes as opposed to assisted living, nursing homes or, worse, institutions for disabled people. It has been interpreted as meaning the government must make all reasonable efforts to allow people with disabilities to be included in their respective communities and enjoy family and friends, work if possible, get married, own homes, and interact with nondisabled people. 
The Americans with Disabilities Act of 1990 The federal government enacted the Americans with Disabilities Act of 1990 (ADA), which was created to allow equal opportunity for jobs, access to private and government-funded facilities, and transportation for disabled people. This act was created with the purpose of ensuring that employers would not discriminate against any individual because of their disability. In 1990, data was gathered to show the percentage of disabled people who worked in the U.S. Out of the 13 percent who filled out the survey, only 53 percent of individuals with disabilities worked, while 90 percent of this group did not. The government wanted to change this; it wanted Americans with disabilities to have the same opportunities as those who did not have a disability. The ADA required corporations not only to hire qualified disabled people but also to accommodate them and their needs. Title I Employment An employer must give a qualified individual with disabilities the same opportunities as any other employee regardless of their disability. The employer must offer equal work privileges to someone who has a disability, including but not limited to pay, work hours, and training. The employer must also create accommodations suitable for the person and their physical or mental disabilities. Title II State and Local Government Activities Requires that the government give disabled people the same opportunities involving work, programs, building access, and services. Title II also requires that buildings provide easy access for disabled people and that communicators be available to help those with hearing or speaking impairments. Public entities are not, however, required to make accommodations that would fundamentally alter their services, as long as they can show that they have done all they could to prevent discrimination against disabled people. Title II Transportation Public transportation should be adapted so that disabled people may have easy access to public transit. Paratransit is a service that provides transportation to people who are unable to get from one destination to another due to their mental or physical disability. Title III Public Accommodations Public accommodations require that private businesses create accommodations that will allow disabled people easy access to buildings. Private businesses may not discriminate against disabled people and must provide accommodations that are reasonable; alterations may be made so that a person with disabilities can have equal access to the facilities provided, such as communicators for the hearing impaired, devices for the visually impaired, and wheelchair access. Facilities must comply with the ADA when altering or constructing building infrastructure so that it meets ADA regulations. Title IV Telecommunication Relay Services Requires telephone companies to provide TRS seven days a week, twenty-four hours a day. It requires telephone companies to create accommodations for deaf and hard-of-hearing people by providing a third party able to assist both parties in communicating with one another. Disability culture Media The National Center on Disability and Journalism (NCDJ) provides resources and support to journalists and communications professionals covering disability issues. The center is headquartered at the Walter Cronkite School of Journalism and Mass Communication at Arizona State University. Arts There are many government initiatives that support the participation of people with disabilities in arts and cultural programs. 
Most U.S. state governments include an accessibility coordinator with their state arts agency or regional arts organization. There are a variety of non-governmental organizations (NGOs) and non-profit groups that support initiatives for inclusive arts and culture. Office for Accessibility at the National Endowment for the Arts Media Access Group at WGBH WGBH is the Public Television broadcaster for the Boston region. It has three divisions: the Caption Center, Descriptive Video Services (DVS), and the Carl and Ruth Shapiro Family National Center for Accessible Media (NCAM). WGBH pioneered accessible television and video in the U.S. International Center on Deafness and the Arts provides education, training, and arts projects in areas such as theatre, arts festivals, museums, dance, distance learning, and children's programming. The development of disability arts in the U.S. is also tied to several non-profit organizations such as Creative Growth in Oakland, California, that serves adult artists with developmental, mental and physical disabilities, providing a professional studio environment for artistic development, gallery exhibition and representation and a social atmosphere among peers. Organizations with similar mandates in the Bay Area include Creativity Explored in San Francisco, and NIAD Art Center in Richmond, California. See also American Association of People with Disabilities List of disability organizations Timeline of disability rights in the United States Disability in American slavery Disability treatments in the United States References Further reading External links Census Bureau Data on Disability Unfit for Work: The startling rise of disability in America (2013) United States
0.768386
0.972122
0.746965
Protein toxicity
Protein toxicity is the effect of the buildup of protein metabolic waste compounds, like urea, uric acid, ammonia, and creatinine. Protein toxicity has many causes, including urea cycle disorders, genetic mutations, excessive protein intake, and insufficient kidney function, such as chronic kidney disease and acute kidney injury. Symptoms of protein toxicity include unexplained vomiting and loss of appetite. Untreated protein toxicity can lead to serious complications such as seizures, encephalopathy, further kidney damage, and even death. Definition Protein toxicity occurs when protein metabolic wastes build up in the body. During protein metabolism, nitrogenous wastes such as urea, uric acid, ammonia, and creatinine are produced. These compounds are not utilized by the human body and are usually excreted by the kidney. However, due to conditions such as renal insufficiency, the under-functioning kidney is unable to excrete these metabolic wastes, causing them to accumulate in the body and lead to toxicity. Although there are many causes of protein toxicity, this condition is most prevalent in people with chronic kidney disease who consume a protein-rich diet, specifically proteins from animal sources that are rapidly digested and metabolized, rapidly releasing a high concentration of protein metabolic wastes into the bloodstream. Causes and pathophysiology Protein toxicity has a significant role in neurodegenerative diseases, whether it is due to high protein intake, pathological disorders that lead to the accumulation of protein waste products, inefficient metabolism of proteins, or oligomerization of amino acids from proteolysis. The mechanisms by which protein can lead to well-known neurodegenerative diseases include transcriptional dysfunction, propagation, pathological cytoplasmic inclusions, and mitochondrial and stress granule dysfunction. Ammonia, one of the waste products of protein metabolism, is very harmful, especially to the brain, where it crosses the blood–brain barrier, leading to a range of neurological dysfunctions from cognitive impairment to death. The brain has a mechanism to counteract the presence of this waste metabolite. One of the mechanisms involved in impairment of the brain is the compromise of astrocyte potassium buffering, in which astrocytes play a key role. However, as more ammonia crosses, the system becomes saturated, leading to astrocyte swelling and brain edema. Urea is another waste product that originates from protein metabolism in humans. However, urea is used by the body as a source of nitrogen essential for growth and life. The most relevant urea cycle disorders are genetic, leading to defective enzymes or transporters that inhibit the reabsorption of urate, with a subsequent increase in ammonia levels, which is toxic. High protein intake can lead to high protein waste, and this is different from protein poisoning since the issue relates to the high level of the waste metabolites. This situation usually presents when protein makes up more than one-third of the food consumed. The liver has a limited capacity to deaminate proteins, leading to increased nitrogen in the body. The rate at which urea is excreted cannot keep up with the rate at which it is produced. The catabolism of amino acids can lead to toxic levels of ammonia. Furthermore, there is a limited rate at which the gastrointestinal tract can absorb amino acids from proteins. 
Uric acid is not a waste metabolite derived from protein metabolism, but many high protein diets also contain higher relative fractions of nucleic acids. Purines, one of the two classes of nitrogenous bases found in nucleic acids (the other being pyrimidines, which are not problematic), are metabolized to uric acid in humans when in excess, which can lead to problems, chiefly gout. The kidneys play an essential role in the reabsorption and excretion of uric acid. Certain transporters located on the apical and basolateral surfaces of the nephron regulate uric acid serum levels. Uric acid is not as toxic as other nitrogen derivatives. It has an antioxidant function in the blood at low levels. People with compromised kidneys will have a lower excretion of uric acid, leading to several diseases, including further renal damage, cardiovascular disease, diabetes, and gout. Creatinine might not be a direct indicator of protein toxicity; however, it is important to mention that creatinine could increase due to overwork by kidneys exposed to high levels of protein waste. Also, high serum creatinine levels could indicate a decreased renal filtration rate due to kidney disease, increased byproduct production as a consequence of muscle breakdown, or high protein intake. Effects of a high protein diet A high-protein diet is a health concern for those suffering from kidney disease. The main concern is that a high protein intake may promote further renal damage that can lead to protein toxicity. The physiological changes induced by an increased protein intake, such as increased glomerular pressure and hyperfiltration, place further strain on already damaged kidneys. This strain can lead to proteins being inadequately metabolized and subsequently causing toxicity. A high-protein diet can lead to complications for those with renal disease and has been linked to further progression of the disease. The well-known Nurses' Health Study found a correlation between the loss of kidney function and an increased dietary intake of animal protein by people who had already been diagnosed with renal disease. This association suggests that a total protein intake that exceeds the recommendations may accelerate renal disease and lead to risk of protein toxicity within a diseased individual. For this reason, dietary protein restriction is a common treatment for people with renal disease in which proteinuria is present. Protein-restricted individuals have been shown to have slower rates of progression of their renal diseases. Several studies, however, have found no evidence that high protein intakes cause protein toxicity or harm kidney function in healthy people. Diets that regularly exceed the recommendations for protein intake have been found to lead to an increased glomerular filtration rate in the kidneys and also have an effect on the hormone systems in the body. It is well established that these physiological effects are harmful to individuals with renal disease, but research has not found these responses to be detrimental to those who are healthy and demonstrate adequate renal activity. In people with healthy kidney function, the kidneys work continuously to excrete the by-products of protein metabolism, which prevents protein toxicity from occurring. In response to an increased consumption of dietary protein, the kidneys maintain homeostasis within the body by operating at an increased capacity, producing a higher amount of urea and subsequently excreting it from the body. 
Although some have proposed that this increase in waste production and excretion will cause increased strain on the kidneys, other research has not supported this. Currently, evidence suggests that changes in renal function that occur in response to an increased dietary protein intake are part of the normal adaptive system employed by the body to sustain homeostasis. In a healthy individual with well-functioning kidneys, there is no need for concern that an increased dietary protein intake will lead to protein toxicity and decreased renal function. Protein toxicity and other metabolic disorders associated with chronic kidney failure have been shown to be related to more systemic complications such as atherosclerosis, anemia, malnutrition, and hyperparathyroidism. Symptoms Unexplained vomiting and a loss of appetite are indicators of protein toxicity. If those two symptoms are accompanied by an ammonia quality on the breath, the onset of kidney failure is a likely culprit. People with kidney disease who are not on dialysis are advised to avoid consumption of protein if possible, as consuming too much accelerates the condition and can lead to death. Most of the problems stem from the accumulation of unfiltered toxins and wastes from protein metabolism. Kidney function naturally declines with age due to the gradual loss of nephrons (filters) in the kidney. Common causes of chronic kidney disease include diabetes, heart disease, and long-term untreated high blood pressure, as well as abuse of analgesics like ibuprofen, aspirin, and paracetamol. Kidney diseases such as polycystic kidney disease can be genetic in nature and progress as the individual ages. Diagnosis Under normal conditions in the body, ammonia, urea, uric acid, and creatinine are produced by protein metabolism and excreted through the kidney as urine. When these by-products cannot be excreted properly from the body, they will accumulate and become highly toxic. Protein consumption is a major source of these waste products. An accumulation of these waste products can occur in people with kidney insufficiency who eat a diet rich in protein and therefore cannot excrete the waste properly. The blood urea nitrogen (BUN) test measures the amount of urea nitrogen in the blood. Increased levels of urea in the blood (uremia) are an indicator of poor elimination of urea from the body, usually due to kidney damage. Increased BUN levels can be caused by kidney diseases, kidney stones, congestive heart failure, fever, and gastrointestinal bleeding. BUN levels can also be elevated in pregnant people and people whose diet consists mainly of protein. Increased creatinine levels in the blood can also be a sign of kidney damage and an inability to excrete protein waste by-products properly. A confirmation of kidney disease or kidney failure is often obtained by performing a blood test which measures the concentration of creatinine and urea (blood urea nitrogen). A creatinine blood test and BUN test are usually performed together along with other blood panels for diagnosis. Treatment Treatment options for protein toxicity can include renal replacement therapies like hemodialysis and hemofiltration. Lifestyle modifications like a diet low in protein, decreased sodium intake, and exercise can also be incorporated as part of a treatment plan. Medications may also be prescribed depending on symptoms. 
Common medications prescribed for kidney diseases include hypertension medications like angiotensin-converting enzyme inhibitors (ACEI) and angiotensin II receptor blockers (ARB), as they have been found to be kidney protective. Diuretics may also be prescribed to facilitate waste excretion and to manage any fluid retention. Kidney transplant surgery is another treatment option, in which a healthy kidney is donated from a living or deceased donor to the recipient. Complications Accumulation of protein metabolic waste products in the body can cause diseases and serious complications such as gout, uremia, acute renal failure, seizure, encephalopathy, and death. These products of protein metabolism, including urea, uric acid, ammonia, and creatinine, are compounds that the human body must eliminate in order for the body to function properly. The buildup of uric acid, causing a high amount of uric acid in the blood, is a condition called hyperuricemia. Long-standing hyperuricemia can cause deposition of monosodium urate crystals in or around joints, resulting in an arthritic condition called gout. When the body is unable to eliminate urea, it can cause a serious medical condition called uremia, which is a high level of urea in the blood. Symptoms of uremia include nausea, vomiting, fatigue, anorexia, weight loss, and change in mental status. If left untreated, uremia can lead to seizure, coma, cardiac arrest, and death. When the body is unable to process or eliminate ammonia, such as in protein toxicity, this will lead to the buildup of ammonia in the bloodstream, causing a condition called hyperammonemia. Symptoms of elevated blood ammonia include muscle weakness and fatigue. If left untreated, ammonia can cross the blood–brain barrier and affect brain tissues, leading to a spectrum of neuropsychiatric and neurological symptoms including impaired memory, seizure, confusion, delirium, excessive sleepiness, disorientation, brain edema, intracranial hypertension, coma, and even death. Epidemiology The prevalence of protein toxicity cannot be accurately quantified as there are numerous etiologies from which protein toxicity can arise. Many people have protein toxicity as a result of chronic kidney disease (CKD) or end-stage renal disease (ESRD). The prevalence of CKD (all stages) from 1988 to 2016 in the U.S. has remained relatively consistent at about 14.2% annually. The prevalence of people who have received treatment for ESRD increased to about 2,284 people per 1 million in 2018, up from 1,927 people per 1 million in 2007. Prevalence of treated ESRD increases with age, is higher in males than in females, and is higher in Native Hawaiians and Pacific Islanders than in any other racial group. However, the prevalence of protein toxicity specifically is difficult to quantify, as people who have diseases that cause protein metabolites to accumulate typically initiate hemodialysis before they become symptomatic. Urea cycle disorders also cause toxic buildup of protein metabolites, namely ammonia. As of 2013, in the U.S., the incidence of urea cycle disorders has been estimated to be 1 case in every 31,000 births, resulting in about 113 new cases annually. Special Populations Neonates Protein toxicity, specifically ammonia buildup, can affect preterm newborns who have serious defects in urea cycle enzymes, with almost no physical manifestations at birth. Clinical symptoms can manifest within a few days of birth, causing extreme illness and intellectual disability or death if left untreated. 
Hyperammonemia in newborns can be diagnosed from clinical cues such as a sepsis-like presentation, hyperventilation, fluctuating body temperature, and respiratory distress; blood panels can also be used to form differential diagnoses between hyperammonemia caused by urea cycle disorders and that caused by other disorders. Neurodegenerative diseases People who have neurodegenerative diseases like Huntington's disease, dementia, Parkinson's disease, and amyotrophic lateral sclerosis (ALS) also often show symptoms of protein toxicity. Cellular deficits and genetic mutations caused by these neurodegenerative diseases can pathologically alter gene transcription, negatively affecting protein metabolism. See also Protein poisoning – malnutrition from a diet with adequate protein but deficient in fat Proteopathy – damage caused by misfolded proteins References Further reading Educational resource for renal protein toxicity Symptoms and signs Nephrology Proteins as nutrients
0.761678
0.980672
0.746957
Monitoring (medicine)
In medicine, monitoring is the observation of a disease, condition, or one or several medical parameters over time. It can be performed by continuously measuring certain parameters using a medical monitor (for example, by continuously measuring vital signs with a bedside monitor), and/or by repeatedly performing medical tests (such as blood glucose monitoring with a glucose meter in people with diabetes mellitus). Transmitting data from a monitor to a distant monitoring station is known as telemetry or biotelemetry. Classification by target parameter Monitoring can be classified by the target of interest, including: Cardiac monitoring, which generally refers to continuous electrocardiography with assessment of the patient's condition relative to their cardiac rhythm. A small monitor worn by an ambulatory patient for this purpose is known as a Holter monitor. Cardiac monitoring can also involve cardiac output monitoring via an invasive Swan-Ganz catheter. Hemodynamic monitoring, which monitors the blood pressure and blood flow within the circulatory system. Blood pressure can be measured either invasively through an inserted blood pressure transducer assembly, or noninvasively with an inflatable blood pressure cuff. Respiratory monitoring, such as: Pulse oximetry, which involves measurement of the saturated percentage of oxygen in the blood, referred to as SpO2, measured by an infrared finger cuff Capnography, which involves CO2 measurements, referred to as EtCO2 or end-tidal carbon dioxide concentration (the respiratory rate monitored in this way is called AWRR, or airway respiratory rate) Respiratory rate monitoring through a thoracic transducer belt, an ECG channel, or via capnography Neurological monitoring, such as of intracranial pressure. Also, there are special patient monitors which incorporate the monitoring of brain waves (electroencephalography), gas anesthetic concentrations, bispectral index (BIS), etc. They are usually incorporated into anesthesia machines. In neurosurgery intensive care units, brain EEG monitors have a larger multichannel capability and can monitor other physiological events as well. Blood glucose monitoring Childbirth monitoring Body temperature monitoring through an adhesive pad containing a thermoelectric transducer. Cancer therapy monitoring through circulating tumor cells Vital parameters Monitoring of vital parameters can include several of the ones mentioned above, and most commonly includes at least blood pressure and heart rate, and preferably also pulse oximetry and respiratory rate. Multimodal monitors that simultaneously measure and display the relevant vital parameters are commonly integrated into the bedside monitors in critical care units and the anesthetic machines in operating rooms. These allow for continuous monitoring of a patient, with medical staff being continuously informed of changes in the patient's general condition. Some monitors can even warn of impending fatal cardiac conditions before visible signs are noticeable to clinical staff, such as atrial fibrillation or premature ventricular contraction (PVC). Medical monitor A medical monitor or physiological monitor is a medical device used for monitoring. It can consist of one or more sensors, processing components, display devices (which are sometimes themselves called "monitors"), as well as communication links for displaying or recording the results elsewhere through a monitoring network. Components Sensor Sensors of medical monitors include biosensors and mechanical sensors. 
For example, a photodiode is used in pulse oximetry, and a pressure sensor is used in non-invasive blood pressure measurement. Translating component The translating component of medical monitors is responsible for converting the signals from the sensors to a format that can be shown on the display device or transferred to an external display or recording device. Display device Physiological data are displayed continuously on a CRT, LED, or LCD screen as data channels along the time axis. They may be accompanied by numerical readouts of parameters computed from the original data, such as maximum, minimum, and average values, pulse and respiratory frequencies, and so on. Besides the tracings of physiological parameters along time (X axis), digital medical displays have automated numeric readouts of the peak and/or average parameters displayed on the screen. Modern medical display devices commonly use digital signal processing (DSP), which has the advantages of miniaturization, portability, and multi-parameter displays that can track many different vital signs at once. Old analog patient displays, in contrast, were based on oscilloscopes and had one channel only, usually reserved for electrocardiographic monitoring (ECG). Therefore, medical monitors tended to be highly specialized. One monitor would track a patient's blood pressure, while another would measure pulse oximetry, another the ECG. Later analog models had a second or third channel displayed on the same screen, usually to monitor respiration movements and blood pressure. These machines were widely used and saved many lives, but they had several restrictions, including sensitivity to electrical interference, base level fluctuations, and absence of numeric readouts and alarms. Communication links Several models of multi-parameter monitors are networkable, i.e., they can send their output to a central ICU monitoring station, where a single staff member can observe and respond to several bedside monitors simultaneously. Ambulatory telemetry can also be achieved by portable, battery-operated models which are carried by the patient and which transmit their data via a wireless data connection. Digital monitoring has created the possibility, which is being fully developed, of integrating the physiological data from patient monitoring networks into the emerging hospital electronic health record and digital charting systems, using appropriate health care standards which have been developed for this purpose by organizations such as IEEE and HL7. This newer method of charting patient data reduces the likelihood of human documentation error and will eventually reduce overall paper consumption. In addition, automated ECG interpretation incorporates diagnostic codes automatically into the charts. A medical monitor's embedded software can take care of the data coding according to these standards and send messages to the medical records application, which decodes them and incorporates the data into the appropriate fields. Long-distance connectivity can enable telemedicine, which involves provision of clinical health care at a distance. Other components A medical monitor can also produce an alarm (such as an audible signal) to alert the staff when certain criteria are met, such as when a parameter exceeds or falls below set limits. Mobile appliances An entirely new scope is opened with mobile, body-worn monitors, including devices carried under the skin. This class of monitors delivers information gathered through body-area networking (BAN) to, for example, 
smartphones and autonomous software agents. Interpretation of monitored parameters Monitoring of clinical parameters is primarily intended to detect changes (or the absence of changes) in the clinical status of an individual. For example, the parameter of oxygen saturation is usually monitored to detect changes in the respiratory capability of an individual. Change in status versus test variability When monitoring a clinical parameter, differences between test results (or values of a continuously monitored parameter after a time interval) can reflect an actual change in the status of the condition, test-retest variability of the test method, or both. In practice, the possibility that a difference is due to test-retest variability can almost certainly be excluded if the difference is larger than a predefined "critical difference". This "critical difference" (CD) is calculated as CD = K × √(CVa² + CVi²), where: K is a factor dependent on the preferred probability level. Usually, it is set at 2.77, which reflects a 95% prediction interval, in which case there is less than 5% probability that a test result would become higher or lower than the critical difference by test-retest variability in the absence of other factors. CVa is the analytical variation. CVi is the intra-individual variability. For example, if a patient has a hemoglobin level of 100 g/L, the analytical variation (CVa) is 1.8% and the intra-individual variability (CVi) is 2.2%, then the critical difference is 8.1 g/L. Thus, for changes of less than 8 g/L since a previous test, the possibility that the change is completely caused by test-retest variability may need to be considered in addition to considering effects of, for example, diseases or treatments. Critical differences for other tests include early morning urinary albumin concentration, with a critical difference of 40%. Delta check In a clinical laboratory, a delta check is a laboratory quality control method that compares a current test result with previous test results of the same person and detects whether there is a substantial difference, which can be defined as a critical difference as in the previous section, or by other pre-defined criteria. If the difference exceeds the pre-defined criteria, the result is reported only after manual confirmation by laboratory personnel, in order to exclude a laboratory error as a cause of the difference. In order to flag samples that deviate from previous results, the exact cutoff values are chosen to give a balance between sensitivity and the risk of being overwhelmed by false-positive flags. This balance, in turn, depends on the different kinds of clinical situations where the cutoffs are used, and hence different cutoffs are often used at different departments, even in the same hospital. Techniques in development The development of new techniques for monitoring is an advanced and developing field in smart medicine, biomedical-aided integrative medicine, alternative medicine, self-tailored preventive medicine, and predictive medicine that emphasizes monitoring of comprehensive medical data of patients, people at risk, and healthy people using advanced, smart, minimally invasive biomedical devices, biosensors, lab-on-a-chip (in the future, nanomedicine devices like nanorobots), and advanced computerized medical diagnosis and early warning tools over a short clinical interview and drug prescription. 
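The critical difference and delta check described above amount to a short computation. The following is a minimal sketch in Python; the function names and example values are illustrative assumptions rather than part of any standard laboratory software, and it simply applies the formula CD = K × √(CVa² + CVi²) to the hemoglobin example with K = 2.77, CVa = 1.8% and CVi = 2.2%.

import math

def critical_difference(previous_value, cv_analytical, cv_intra_individual, k=2.77):
    # CD = K * sqrt(CVa^2 + CVi^2), expressed in the same units as the measured value.
    combined_cv = math.sqrt(cv_analytical ** 2 + cv_intra_individual ** 2)  # combined CV, in percent
    return previous_value * (k * combined_cv) / 100.0

def delta_check(current_value, previous_value, cv_analytical, cv_intra_individual, k=2.77):
    # Flag the result for manual review if the change exceeds the critical difference.
    cd = critical_difference(previous_value, cv_analytical, cv_intra_individual, k)
    return abs(current_value - previous_value) > cd

# Hemoglobin example from the text: previous result 100 g/L, CVa 1.8%, CVi 2.2%.
cd = critical_difference(100.0, 1.8, 2.2)   # about 7.9 g/L with these inputs (the text cites roughly 8 g/L)
print(delta_check(93.0, 100.0, 1.8, 2.2))   # False: a 7 g/L drop is within test-retest variability
print(delta_check(88.0, 100.0, 1.8, 2.2))   # True: a 12 g/L drop exceeds the critical difference

In practice, as noted above, the cutoff used for a delta check is tuned per analyte and per department rather than taken from a single formula.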
As biomedical research, nanotechnology, and nutrigenomics advance, and as awareness grows of the human body's self-healing capabilities, of the limitations of a chemical drugs-only approach to medical treatment, and of new research showing the damage medications can cause, researchers are working to fulfill the need for comprehensive further study and personal, continuous clinical monitoring of health conditions, while keeping legacy medical intervention as a last resort. In many medical problems, drugs offer temporary relief of symptoms while the root of the problem remains unknown without enough data on all of the body's biological systems. The body is equipped with sub-systems for the purpose of maintaining balance and self-healing functions; intervention without sufficient data might damage those healing sub-systems. Monitoring medicine fills this gap, helping to prevent diagnostic errors, and can assist in future medical research by analyzing data from many patients. Examples and applications The development cycle in medicine is extremely long, up to 20 years, because of the need for U.S. Food and Drug Administration (FDA) approvals; therefore, many monitoring medicine solutions are not available today in conventional medicine. Blood glucose monitoring In vivo blood glucose monitoring devices can transmit data to a computer that can assist with daily lifestyle or nutrition suggestions and, together with the physician, can make suggestions for further study in people who are at risk, helping to prevent type 2 diabetes mellitus. Stress monitoring Biosensors may provide warnings when signs of rising stress appear before a person can notice them, and can provide alerts and suggestions. Deep neural network models using photoplethysmography imaging (PPGI) data from mobile cameras can assess stress levels with a high degree of accuracy (86%). Serotonin biosensor Future serotonin biosensors may assist with mood disorders and depression. Continuous blood test based nutrition In the field of evidence-based nutrition, a lab-on-a-chip implant that can run 24/7 blood tests may provide continuous results, from which a computer can provide nutrition suggestions or alerts. Psychiatrist-on-a-chip In the clinical brain sciences, drug delivery and in vivo Bio-MEMS-based biosensors may assist with the prevention and early treatment of mental disorders. Epilepsy monitoring In epilepsy, the next generations of long-term video-EEG monitoring may predict epileptic seizures and prevent them with changes in daily life activity like sleep, stress, nutrition, and mood management. Toxicity monitoring Smart biosensors may detect toxic materials such as mercury and lead and provide alerts. Minimum standards of monitoring Minimum acceptable monitoring 1. Clinical observation (one-to-one) 2. Pulse oximetry 3. Non-invasive blood pressure 4. ECG 5. Core temperature 6. End-tidal carbon dioxide (if tracheal tube or supraglottic airway device in situ) Additional monitoring which should be immediately available 1. Blood/capillary glucose 2. Nerve stimulator Additional monitoring which should be available 1. Urine output 2. Invasive pressure monitoring (arterial line, central venous pressure) 3. Cardiac output monitoring 4. Access to haematological and biochemical investigations Essential Monitoring Presence of the anaesthetist throughout anaesthesia A. Induction and maintenance of anaesthesia 1. Pulse oximeter 2. Non-invasive blood pressure monitoring 3. 
Inspired and expired oxygen, carbon dioxide, nitrous oxide and vapour 4. Airway pressure 5. A nerve stimulator whenever a muscle relaxant is used 6. Temperature (pre-op) and for any procedure >30 min anaesthesia duration B. Recovery from anaesthesia 1. Pulse oximeter 2. Non-invasive blood pressure monitor 3. Electrocardiograph 4. Capnograph if the patient has a tracheal tube or supraglottic airway device in situ, or is deeply sedated 5. Temperature C. Additional monitoring 1. Some patients will require additional monitoring: e.g. intravascular pressures, cardiac output. 2. Depth of anaesthesia monitors recommended when patients are anaesthetised with total intravenous techniques. D. Regional techniques & sedation for operative procedures 1. Pulse oximeter 2. Non-invasive blood pressure monitoring 3. Electrocardiograph 4. End-tidal carbon dioxide monitor if the patient is sedated. See also Medical equipment Medical test MECIF Protocol Nanoelectromechanical system (NEMS) Functional medicine Wireless ambulatory ECG References Further reading Monitoring Level of Consciousness During Anesthesia & Sedation , Scott D. Kelley, M.D., Healthcare Sensor Networks: Challenges Toward Practical Implementation, Daniel Tze Huei Lai (Editor), Marimuthu Palaniswami (Editor), Rezaul Begg (Editor), Blood Pressure Monitoring in Cardiovascular Medicine and Therapeutics (Contemporary Cardiology), William B. White, Physiological Monitoring and Instrument Diagnosis in Perinatal and Neonatal Medicine, Yves W. Brans, William W. Hay Jr, Medical Nanotechnology and Nanomedicine (Perspectives in Nanotechnology), Harry F. Tibbals, External links Nanomedicine Intensive care medicine Anesthesia
0.762334
0.979778
0.746918
Subspecialty
A subspecialty or subspeciality (see spelling differences) is a narrow field of professional knowledge/skills within a specialty of trade, and is most commonly used to describe the increasingly diverse medical specialties. A subspecialist is a specialist in a subspecialty. In medicine, subspecialization is particularly common in internal medicine, cardiology, neurology, pathology, and psychiatry, and has grown as medical practice has become more complex and as it has become clear that a physician's case volume is negatively associated with their complication rate; that is, complications tend to decrease as the volume of cases per physician goes up. See also Medical specialty Notes and references Medical specialties
0.772408
0.966963
0.74689
Psychosocial hazard
A psychosocial hazard or work stressor is any occupational hazard related to the way work is designed, organized and managed, as well as the economic and social contexts of work. Unlike the other three categories of occupational hazard (chemical, biological, and physical), they do not arise from a physical substance, object, or hazardous energy. Psychosocial hazards affect the psychological and physical well-being of workers, including their ability to participate in a work environment among other people. They cause not only psychiatric and psychological outcomes such as occupational burnout, anxiety disorders, and depression, but they can also cause physical injury or illness such as cardiovascular disease or musculoskeletal injury. Psychosocial risks are linked to the organization of work as well as workplace violence and are recognized internationally as major challenges to occupational safety and health as well as productivity. Types of hazard In general, workplace stress can be defined as an imbalance between the demands of a job, and the physical and mental resources available to cope with them. Several models of workplace stress have been proposed, including imbalances between work demands and employee control, between effort and reward, and general focuses on wellness. Psychosocial hazards may be divided into those that arise from the content or the context of work. Work content includes the amount and pace of work, including both too much and too little to do; the extent, flexibility, and predictability of work hours; and the extent of employee control and participation in decision-making. Work context includes impacts on career development and wages, organizational culture, interpersonal relationships, and work–life balance. According to a survey by the European Agency for Safety and Health at Work, the most important psychosocial hazards—work stressors—are: Job strain Effort-reward imbalance Lack of supervisor and co-worker support Long working hours Work intensification Lean production and outsourcing Emotional labor Work–life balance Job insecurity Precarious work Other psychosocial hazards are: Having a toxic workplace or hostile work environment Lack of perceived organizational support, including perceived psychological contract violation Lack of work–life balance, including work–family conflict Lack of person–environment fit Behavioral issues such as workplace aggression, workplace bullying, workplace harassment including sexual harassment, workplace incivility, workplace revenge, and workplace violence Personality issues such as narcissism in the workplace, Machiavellianism in the workplace, and psychopathy in the workplace Micromanagement Organizational conflict Incident stress Jury stress Shift work Information privacy issues regarding data derived from workers In addition, levels of noise or air quality that are considered acceptable from a physical or chemical hazard standpoint may still provide psychosocial hazards from being annoying, irritating, or causing fear of other health impacts from the environment. Assessment Psychosocial hazards are usually identified or assessed through inspecting how workers carry out work and interact with each other, having conversations with workers individually or in focus groups, using surveys, and reviewing records such as incident reports, workers' compensation claims, and worker absenteeism and turnover data. 
A more formal occupational risk assessment may be warranted if there is uncertainty about the hazards' potential severity, interactions, or the effectiveness of controls. There are several risk assessment survey tools for psychosocial hazards. These include the NIOSH Worker Well-Being Questionnaire (WellBQ) from the U.S. National Institute for Occupational Safety and Health's Total Worker Health program, the People at Work survey from Queensland Workplace Health and Safety, the Copenhagen Psychosocial Questionnaire from Denmark, and the Management Standards Indicator Tool from the UK Health and Safety Executive. Control According to the hierarchy of hazard controls, the most effective controls are eliminating hazards or, if that is impractical, minimizing them through good work design practices. These include measures to reduce overwork; providing workers with support, personal control, and clearly defined roles; and providing effective change management. In the context of psychosocial hazards, engineering controls are physical changes to the workplace that mitigate hazards or isolate workers from them. Engineering controls for psychosocial hazards include workplace design to affect the amount, type, and level of personal control of work, as well as access controls and alarms. The risk of workplace violence can be reduced through physical design of the workplace or by cameras. Proper manual handling equipment, measures to reduce noise exposure, and appropriate lighting levels have a positive effect on psychosocial hazards, in addition to their role in controlling physical hazards. Administrative controls include job rotation to reduce exposure time, clear policies on workplace bullying and sexual harassment, and proper consultation and training of employees. Personal protective equipment includes personal distress alarms, as well as equipment typically used for other types of hazards, such as eye and face protection and hearing protection. Health promotion activities can improve workers' general and mental health, but should not be used as an alternative or substitute for directly managing risk from psychosocial hazards. A recent Cochrane review – based on moderate-quality evidence – found that the addition of work-directed interventions for depressed workers receiving clinical interventions reduces the number of lost work days compared to clinical interventions alone. This review also demonstrated that the addition of cognitive behavioral therapy to primary or occupational care and the addition of a "structured telephone outreach and care management program" to usual care are both effective at reducing sick leave days. International Standards to manage psychosocial risk at work ISO 45003:2021 is an international standard developed by the International Organization for Standardization (ISO) that helps organizations manage psychosocial risk at work, in particular within occupational health and safety (OH&S) management systems based on ISO 45001, the standard for occupational health and safety management systems. Impact Exposure to psychosocial hazards in the workplace not only has the potential to produce psychological and physiological harm to individual employees, but can also produce further repercussions within society—reducing productivity in local and state economies, corroding familial and interpersonal relationships, and producing negative behavioral outcomes. Occupational burnout is a consequence of psychosocial hazards. 
Psychological and behavioral Occupational stress, anxiety, and depression can be directly correlated with psychosocial hazards in the workplace. Exposure to workplace psychosocial hazards has been strongly correlated with a wide spectrum of unhealthy behaviors such as physical inactivity, excessive alcohol and drug consumption, nutritional imbalance, and sleep disturbances. In 2003, a cross-sectional survey of 12,110 employees from 26 different workplace environments examined the relationship between subjective workplace stress and healthy activity. The survey quantified stress mainly through evaluation of an individual's perceived locus of control in the workplace (although other variables were also examined). The results showed that self-reported high levels of stress were associated with, across both sexes: diets with a higher concentration of fat, less exercise, cigarette smoking (and increasing use), and less self-efficacy to control smoking habits. Physiological Supported by strong evidence from cross-sectional and longitudinal studies, a link has been indicated between the psychosocial work environment and consequences for employees' physical health. Increasing evidence indicates that several main physiological systems are affected, with consequences including hypertension and heart disease, impaired wound-healing, musculoskeletal disorders, gastro-intestinal disorders, and impaired immuno-competence. Additional disorders generally recognized as stress-induced include: bronchitis, coronary heart disease, mental illness, thyroid disorders, skin diseases, certain types of rheumatoid arthritis, obesity, tuberculosis, headaches and migraine, peptic ulcers and ulcerative colitis, and diabetes. Economic Across the European Union, work-related stress alone affects over 40 million individuals, costing an estimated €20 billion a year in lost productivity. See also Industrial and organizational psychology Occupational health psychology Positive psychology in the workplace References External links Psychosocial issues on OSH-Wiki Occupational hazards Social psychology
Enanthem
Enanthem or enanthema is a rash (small spots) on the mucous membranes. It is characteristic of patients with viral infections causing hand, foot, and mouth disease, measles, and sometimes chickenpox or COVID-19. In addition, bacterial infections such as scarlet fever may also be a cause of enanthema. The aforementioned diseases usually present with both exanthema and enanthema. Enanthema can also indicate hypersensitivity. See also Koplik's spots Strawberry tongue Forchheimer spots References External links Symptoms and signs: Skin and subcutaneous tissue
Aquatic therapy
Aquatic therapy refers to treatments and exercises performed in water for relaxation, fitness, physical rehabilitation, and other therapeutic benefit. Typically a qualified aquatic therapist gives constant attendance to a person receiving treatment in a heated therapy pool. Aquatic therapy techniques include Ai Chi, Aqua Running, Bad Ragaz Ring Method, Burdenko Method, Halliwick, Watsu, and other aquatic bodywork forms. Therapeutic applications include neurological disorders, spine pain, musculoskeletal pain, postoperative orthopedic rehabilitation, pediatric disabilities, pressure ulcers, and disease conditions, such as osteoporosis. Overview Aquatic therapy refers to water-based treatments or exercises of therapeutic intent, in particular for relaxation, fitness, and physical rehabilitation. Treatments and exercises are performed while floating, partially submerged, or fully submerged in water. Many aquatic therapy procedures require constant attendance by a trained therapist, and are performed in a specialized temperature-controlled pool. Rehabilitation commonly focuses on improving the physical function associated with illness, injury, or disability. Aquatic therapy encompasses a broad set of approaches and techniques, including aquatic exercise, physical therapy, aquatic bodywork, and other movement-based therapy in water (hydrokinesiotherapy). Treatment may be passive, involving a therapist or giver and a patient or receiver, or active, involving self-generated body positions, movement, or exercise. Examples include Halliwick Aquatic Therapy, Bad Ragaz Ring Method, Watsu, and Ai chi. For orthopedic rehabilitation, aquatic therapy is considered to be synonymous with therapeutic aquatic exercise, aqua therapy, aquatic rehabilitation, water therapy, and pool therapy. Aquatic therapy can support restoration of function for many areas of orthopedics, including sports medicine, work conditioning, joint arthroplasty, and back rehabilitation programs. A strong aquatic component is especially beneficial for therapy programs where limited or non-weight bearing is desirable and where normal functioning is limited by inflammation, pain, guarding, muscle spasm, and limited range of motion (ROM). Water provides a controllable environment for reeducation of weak muscles and skill development for neurological and neuromuscular impairment, acute orthopedic or neuromuscular injury, rheumatological disease, or recovery from recent surgery. Various properties of water contribute to therapeutic effects, including the ability to use water for resistance in place of gravity or weights; thermal stability that permits maintenance of near-constant temperature; hydrostatic pressure that supports and stabilizes, and that influences heart and lung function; buoyancy that permits flotation and reduces the effects of gravity; and turbulence and wave propagation that allow gentle manipulation and movement. History The use of water for therapeutic purposes first dates back to 2400 B.C. in the form of hydrotherapy, with records suggesting that ancient Egyptian, Assyrian, and Mohammedan cultures utilized mineral waters which were thought to have curative properties through the 18th century. In 1911, Dr. Charles Leroy Lowman began to use therapeutic tubs to treat cerebral palsy and spastic patients in California at Orthopedic Hospital in Los Angeles. Lowman was inspired after a visit to Spaulding School for Crippled Children in Chicago, where wooden exercise tanks were used by paralyzed patients. 
The invention of the Hubbard Tank, developed by Leroy Hubbard, launched the evolution of modern aquatic therapy and the development of modern techniques including the Halliwick Concept and the Bad Ragaz Ring Method (BRRM). Throughout the 1930s, research and literature on aquatic exercise, pool treatment, and spa therapy began to appear in professional journals. Dr. Charles Leroy Lowman's Technique of Underwater Gymnastics: A Study in Practical Application, published in 1937, introduced underwater exercises that were used to help restore muscle function lost by bodily deformities. The National Foundation for Infantile Paralysis began utilizing corrective swimming pools and Lowman's techniques for treatment of poliomyelitis in the 1950s. The American Physical Therapy Association (APTA) recognized the aquatic therapy section within the APTA in 1992, after a vote within the House of Delegates of the APTA in Denver, CO after lobbying efforts spearheaded starting in 1989 by Judy Cirullo and Richard C. Ruoti. Techniques Techniques for aquatic therapy include the following: Ai Chi: Ai Chi, developed in 1993 by Jun Konno, uses diaphragmatic breathing and active progressive resistance training in water to relax and strengthen the body, based on elements of qigong and tai chi. Aqua running: Aqua running (Deep Water Running or Aquajogging) is a form of cardiovascular conditioning, involving running or jogging in water, useful for injured athletes and those who desire a low-impact aerobic workout. Aqua running is performed in deep water using a floatation device (vest or belt) to support the head above water. Bad Ragaz Ring Method: The Bad Ragaz Ring Method (BRRM) focuses on rehabilitation of neuromuscular function using patterns of therapist-assisted exercise performed while the patient lies horizontal in water, with support provided by rings or floats around the neck, arms, pelvis, and knees. BRRM is an aquatic version of Proprioceptive Neuromuscular Facilitation (PNF) developed by physiotherapists at Bad Ragaz, Switzerland, as a synthesis of aquatic exercises designed by a German physician in the 1930s and land-based PNF developed by American physiotherapists in the 1950s and 1960s. Burdenko Method: The Burdenko Method, originally developed by Soviet professor of sports medicine Igor Burdenko, is an integrated land-water therapy approach that develops balance, coordination, flexibility, endurance, speed, and strength using the same methods as professional athletes. The water-based therapy uses buoyant equipment to challenge the center of buoyancy in vertical positions, exercising with movement in multiple directions, and at multiple speeds ranging from slow to fast. Halliwick Concept: The Halliwick Concept, originally developed by fluid mechanics engineer James McMillan in the late 1940s and 1950s at the Halliwick School for Girls with Disabilities in London, focuses on biophysical principles of motor control in water, in particular developing sense of balance (equilibrioception) and core stability. The Halliwick Ten-Point-Program implements the concept in a progressive program of mental adjustment, disengagement, and development of motor control, with an emphasis on rotational control, and applies the program to teach physically disabled people balance control, swimming, and independence. Halliwick Aquatic Therapy (also known as Water Specific Therapy, WST), implements the concept in patient-specific aquatic therapy. 
Watsu: Watsu is a form of aquatic bodywork, originally developed in the early 1980s by Harold Dull at Harbin Hot Springs, California, in which an aquatic therapist continuously supports and guides the person receiving treatment through a series of flowing movements and stretches that induce deep relaxation and provide therapeutic benefit. In the late 1980s and early 1990s physiotherapists began to use Watsu for a wide range of orthopedic and neurologic conditions, and to adapt the techniques for use with injury and disability. Applications and effectiveness Applications of aquatic therapy include neurological disorders, spine pain, musculoskeletal pain, postoperative orthopedic rehabilitation, pediatric disabilities, pressure ulcers, and other disease conditions, such as osteoporosis. A 2006 systematic review of effects of aquatic interventions in children with neuromotor impairments found "substantial lack of evidence-based research evaluating the specific effects of aquatic interventions in this population". For musculoskeletal rehabilitation, aquatic therapy is typically used to treat acute injuries as well as subjective pain of chronic conditions, such as arthritis. Water immersion has compressive effects and reflexively regulates blood vessel tone. Muscle blood flow increases by about 225% during immersion, as increased cardiac output is distributed to skin and muscle tissue. Flotation counteracts the effects of gravitational force on joints, creating a low-impact environment in which joints can move. Temperature changes, increased systolic blood pressure in the extremities, and an overall increase in ambulation are factors that help immersion alleviate pain. Aquatic therapy helps with pain and stiffness, and can also improve quality of life, tone muscles, and improve movement in the knees and hips. Protocols using a combination of strengthening, flexibility, and balance exercises resulted in the greatest improvements in Childhood Health Assessment Questionnaire scores, whereas aerobic exercise did not result in greater improvements in CHAQ scores compared to a comparison group performing Qigong. Aquatic therapy not only helps with pain but can also benefit postural stability, strengthening balance functions, especially in people with neurological disorders. For people diagnosed with Parkinson's disease, aquatic exercise has been reported to be more beneficial than land-based exercise for two important outcome measures. Berg Balance Scale and Falls Efficacy Scale scores were reported to improve significantly more with aquatic exercise than with land-based exercise. These results suggest that aquatic exercise can be particularly helpful for Parkinson's disease patients with specific balance disorders and fear of falling. Aquatic therapy in warm water has been shown to have a positive effect on the aerobic capacity of people with fibromyalgia. It is still inconclusive whether land-based therapy is better than aquatic therapy; however, aquatic therapy has been demonstrated to be as effective as land-based therapy. There are advantageous outcomes for patients with fibromyalgia resulting from aquatic therapy, such as a decrease in articular load related to an individual's biomechanics. There is currently no standardized aquatic therapy protocol for people post-stroke; however, the available evidence suggests that aquatic therapy can be more effective than land-based therapy for improving balance and mobility.
There is insufficient evidence regarding improvements in functional independence of people post-stroke. From a cardiopulmonary standpoint, aquatic therapy is often used because its effects mirror land-based effects but at lower speeds. During immersion, blood is displaced upwards into the heart and there is an increase in pulse pressure due to increased cardiac filling. Cardiac volume increases 27–30%. Oxygen consumption increases with exercise, and heart rate increases at higher water temperatures and decreases at lower temperatures. However, immersion can worsen symptoms in cases of valvular insufficiency due to this increase in cardiac and stroke volume. The aquatic environment is also not recommended for those who experience severe or uncontrolled heart failure. Aquatic therapy can be used for younger populations or in a pediatric setting. Aquatic therapy improves the trunk control involved in gross motor function. The role of physical therapists is early intervention to improve children's physical, mental, and social recovery. There are different interventions or activity sequences that can be implemented using aquatic therapy to improve specific functions or address specific disabilities in children. With regard to children and aquatic therapy, studies show that aquatic therapy improves motor symptoms and increases physical activity levels (which can be maintained over a long period of time) in children with developmental or motor disabilities. It also has a positive influence on social interactions, behaviors, and participation in children with neurological disorders. Aquatic therapy is beneficial for people with spinal cord injuries or disorders, promoting both physical and psychosocial benefits. In one study, underwater treadmill training improved lower extremity strength, balance, and gait in people with partial damage to their spinal cord. Respiratory function also improved with underwater treadmill training in these individuals. Knowledge of how to apply aquatic therapy to people with spinal cord injuries or disorders is important because access to aquatic therapy is limited in this population, even though there is evidence of significant improvement in many body systems and in overall function. Multiple sclerosis (MS) is a disabling disease that affects the central nervous system. MS targets the protective sheath (myelin) that covers the nerves and allows signals to travel between the brain and the body; the destruction of myelin results in poor communication between the two. Those with MS experience neurological damage that impacts physical, cognitive, psychological, and emotional functioning, as well as quality of life. Aquatic therapy offers benefits for this population. By utilizing the physical properties of water such as buoyancy, turbulence, hydrostatic pressure, and hydrostatic resistance, MS patients are able to work on balance and coordination, abilities that are often compromised as the disease progresses. The viscosity (thickness) of water allows MS patients to take their time with their movements; the viscous environment results in slower, more careful movement. Aquatic therapy also allows patients to actively use their muscles to maintain stabilization in the water. Finally, another potential benefit of aquatic therapy for patients with MS is that the temperature of the water creates a comfortable environment.
Patients with MS experience increased body temperature. Some authors have recommended that water temperature be below 85 °F (29.4 °C) for MS patients. In the exercise program, a temperature range of 83 °F to 85 °F (28.3–29.4 °C) is recommended for low-repetition, low-resistance exercises. Aquatic therapy in this range provides a cooling effect, helping maintain a more optimal core temperature and ultimately increasing the ability to perform exercises effectively. Exercise has been shown to decrease the number of osteoporotic fractures in postmenopausal adults. However, the risk of falling, along with the intense weight-bearing (WB) and dynamic resistance exercises recommended to improve bone mineral density (BMD), typically conflicts with the abilities and willingness of many older and vulnerable individuals. Research shows that the properties of water utilized during aquatic therapy, such as buoyancy and water resistance, have produced statistically significant improvements in the BMD of patients' lumbar spine (LS) and proximal femoral neck (FN), the most important sites for osteoporotic fractures. Due to its safety, aquatic therapy is recommended for individuals unable, unmotivated, or afraid to perform intense land-based exercises. Further research is needed to determine the effects of specific aquatic exercise parameters, such as intensity, frequency, and duration, on BMD in order to provide effective aquatic program recommendations. Professional training and certification Aquatic therapy is performed by diverse professionals with specific training and certification requirements. An aquatic therapy specialization is an add-on certification for healthcare providers, mainly including physical therapists and athletic trainers. For medical purposes, aquatic therapy, as defined by the American Medical Association (AMA), can be performed by various legally regulated healthcare professionals who have scopes of practice that permit them to offer such services and who are permitted to use AMA Current Procedural Terminology (CPT) codes. Currently, aquatic therapy certification is provided by the Aquatic Therapy and Rehab Institute (ATRI), which aims to further education for therapists and healthcare professionals working in aquatic environments. The ATRI prerequisites for certification include 15 hours of Aquatic Therapy, Rehab and/or Aquatic Therapeutic Exercise education, which can be completed hands-on or online. After completing the prerequisites, those pursuing certification can take the Aquatic Therapy & Rehab Institute's Aquatic Therapeutic Exercise Certification exam. References Hydrotherapy Manual therapy Massage therapy Physical therapy Rehabilitation medicine
Hydrothorax
Hydrothorax is a type of pleural effusion in which serous fluid accumulates in the pleural cavity. This condition most commonly develops secondary to congestive heart failure, following an increase in hydrostatic pressure within the lungs. More rarely, hydrothorax develops in about 10% of patients with ascites, in which case it is called hepatic hydrothorax. It is often difficult to manage in end-stage liver failure and often fails to respond to therapy. Pleural effusions may also develop following the accumulation of other fluids within the pleural cavity; if the fluid is blood it is known as hemothorax (as in major chest injuries), if the fluid is pus it is known as pyothorax (resulting from chest infections), and if the fluid is lymph it is known as chylothorax (resulting from rupture of the thoracic duct). Treatment Treatment of hydrothorax is difficult for several reasons. The underlying condition needs to be corrected; however, the source of the hydrothorax is often end-stage liver disease, correctable only by transplant. Chest tube placement is generally avoided. Other measures such as a TIPS procedure are more effective, as they treat the cause of the hydrothorax, but have complications such as worsened hepatic encephalopathy. See also Pleural effusion Pneumothorax References https://www.sciencedirect.com/topics/pharmacology-toxicology-and-pharmaceutical-science/hydrothorax Garbuzenko D.V., Arefyev N.O. Hepatic hydrothorax: An update and review of the literature. World J. Hepatol. 2017; 9 (31): 1197-1204 External links Diseases of pleura
New World syndrome
New World syndrome is a set of non-communicable diseases brought on by consumption of junk food and a sedentary lifestyle, especially common among indigenous peoples of the Americas and Oceania and among circumpolar peoples. It is characterized by obesity, heart disease, diabetes, hypertension, and shortened life span. Causes New World syndrome is linked to a change from a traditional diet and exercise to a Western diet and a sedentary lifestyle. Traditional occupations of indigenous people—such as fishing, farming, and hunting—tended to involve constant activity, whereas modern office jobs do not. The introduction of modern transportation such as automobiles also decreased physical exertion. Meanwhile, Western foods rich in fat, salt, sugar, and refined starches have been imported into these regions, increasing the amount of carbohydrates in local diets. Diagnosis There are no specific diagnostic criteria. Obesity is often followed by complications such as hyperlipidemia, hypertension, and cardiac disease. See also Alcohol and Native Americans Diabetes in Indigenous Australians Genetics of obesity Human genetic variation Indigenous health in Australia Metabolic syndrome Native American health Obesity in the Pacific Thrifty gene hypothesis References External links Culture-bound syndromes Health in Greenland Health in Oceania Health in North America Indigenous health Indigenous health in Canada Indigenous health in Australia Medical conditions related to obesity Modernity Race and health Social issues
Mendelian traits in humans
Mendelian traits in humans are human traits that are substantially influenced by Mendelian inheritance. Most – if not all – Mendelian traits are also influenced by other genes, the environment, immune responses, and chance. Therefore no trait is purely Mendelian, but many traits are almost entirely Mendelian, including canonical examples, such as those listed below. Purely Mendelian traits are a minority of all traits, since most phenotypic traits exhibit incomplete dominance, codominance, and contributions from many genes. If a trait is genetically influenced, but not well characterized by Mendelian inheritance, it is non-Mendelian. Examples Albinism (recessive) Achondroplasia Alkaptonuria Ataxia telangiectasia Brachydactyly (shortness of fingers and toes) Colour blindness (monochromatism, dichromatism, anomalous trichromatism, tritanopia, deuteranopia, protanopia) Duchenne muscular dystrophy Ectrodactyly Ehlers–Danlos syndrome Fabry disease Galactosemia Gaucher's disease Some forms of Haemophilia Hereditary breast–ovarian cancer syndrome Hereditary nonpolyposis colorectal cancer HFE hereditary haemochromatosis Huntington's disease Hypercholesterolemia Krabbe disease Lactase persistence (dominant) Leber's hereditary optic neuropathy Lesch–Nyhan syndrome Marfan syndrome Niemann–Pick disease Phenylketonuria Porphyria Retinoblastoma Sickle-cell disease Sanfilippo syndrome Tay–Sachs disease Wet (dominant) or dry (recessive) earwax Non-Mendelian traits Most traits (including all complex traits) are non-mendelian. Some traits commonly thought of as Mendelian are not, including: Eye Color Psychiatric diseases Hair color Height References Further reading External links OMIM Online Mendelian Inheritance in Man Myths of Human Genetics Human genetics
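The dominant and recessive labels used in the examples above can be made concrete with a short sketch. The following is a minimal, hypothetical Punnett-square calculation for a single-gene recessive trait such as albinism; the allele symbols and the carrier-by-carrier cross are illustrative assumptions, not taken from the article itself.

```python
from collections import Counter
from itertools import product

def punnett(parent1: str, parent2: str) -> Counter:
    """Count offspring genotypes for a single-gene cross.

    Each parent genotype is a two-letter string, e.g. 'Aa'.
    Genotypes are normalized so that 'aA' and 'Aa' are counted together.
    """
    offspring = ("".join(sorted(pair)) for pair in product(parent1, parent2))
    return Counter(offspring)

# Cross of two carriers of a recessive trait (e.g. albinism):
# 'A' = functional allele (dominant), 'a' = non-functional allele (recessive).
counts = punnett("Aa", "Aa")
total = sum(counts.values())
for genotype, n in sorted(counts.items()):
    status = "affected" if genotype == "aa" else "unaffected"
    print(f"{genotype}: {n}/{total} ({status})")
# Prints the classic 1:2:1 genotype ratio, with one quarter of offspring affected.
```

This reproduces the textbook expectation for a monohybrid cross: only the homozygous recessive genotype shows the trait, which is why recessive conditions can appear in children of two unaffected carrier parents.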
Interaction
Interaction is action that occurs between two or more entities, generally used in philosophy and the sciences. It may refer to: Science Interaction hypothesis, a theory of second language acquisition Interaction (statistics), when three or more variables influence each other Interactions of actors theory, created by cybernetician Gordon Pask Fundamental interaction or fundamental force, the core interactions in physics Human–computer interaction, interfaces for people using computers Social interaction between people Biology Biological interaction Cell–cell interaction Drug interaction Gene–environment interaction Protein–protein interaction Chemistry Aromatic interaction Cation-pi interaction Metallophilic interaction Arts and media Interaction (album), 1963, by Art Farmer's Quartet ACM Interactions, a magazine published by the Association for Computing Machinery "Interactions" (The Spectacular Spider-Man), an episode of the animated television series 63rd World Science Fiction Convention, titled Interaction See also Interact (disambiguation)
Quality of life (healthcare)
In general, quality of life (QoL or QOL) is the perceived quality of an individual's daily life, that is, an assessment of their well-being or lack thereof. This includes all emotional, social, and physical aspects of the individual's life. In health care, health-related quality of life (HRQoL) is an assessment of how the individual's well-being may be affected over time by a disease, disability, or disorder. Measurement Early versions of healthcare-related quality of life measures referred to simple assessments of physical abilities by an external rater (for example, the patient is able to get up, eat and drink, and take care of personal hygiene without any help from others) or even to a single measurement (for example, the angle to which a limb could be flexed). The current concept of health-related quality of life acknowledges that subjects weigh their actual situation against their personal expectations. These expectations can vary over time and react to external influences such as the length and severity of illness, family support, etc. As with any situation involving multiple perspectives, patients' and physicians' ratings of the same objective situation have been found to differ significantly. Consequently, health-related quality of life is now usually assessed using patient questionnaires. These are often multidimensional and cover physical, social, emotional, cognitive, work- or role-related, and possibly spiritual aspects as well as a wide variety of disease-related symptoms, therapy-induced side effects, and even the financial impact of medical conditions. Although the terms are often used interchangeably, health-related quality of life and health status measure different concepts. Activities of daily living Activities of Daily Living (ADLs) are activities that are oriented toward taking care of one's own body and are completed daily. These include bathing/showering, toileting and toilet hygiene, dressing, eating, functional mobility, personal hygiene and grooming, and sexual activity. Many studies demonstrate the connection between ADLs and health-related quality of life (HRQOL). Most findings show that difficulties in performing ADLs are directly or indirectly associated with decreased HRQOL. Furthermore, some studies have found a graded relationship between ADL difficulties/disabilities and HRQOL: the less independent people are in ADLs, the lower their HRQOL. While ADLs are an excellent tool for objectively measuring quality of life, it is important to remember that quality of life goes beyond these activities. For more information about the complex concept of quality of life, see information regarding the disability paradox. In addition to ADLs, instrumental activities of daily living (IADLs) can be used as a relatively objective measure of health-related quality of life. IADLs, as defined by the American Occupational Therapy Association (AOTA), are "Activities to support daily life within the home and community that often require more complex interactions than those used in ADLs". IADLs include tasks such as care for others, communication management, community mobility, financial management, health management, and home management. Examples of IADL activities include grocery shopping, preparing food, housekeeping, using the phone, doing laundry, and managing transportation and finances. Research has found that an individual's ability to engage in IADLs can directly impact their quality of life.
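As a rough illustration of how ADL and IADL performance can be turned into a simple, relatively objective score, the sketch below tallies independence across a handful of activities. The activity list, the 0–2 scoring scale, and the cutoffs are hypothetical simplifications for illustration only; they do not correspond to any validated instrument such as the Katz Index or the Lawton IADL scale.

```python
from dataclasses import dataclass

# Hypothetical example: each activity is scored 0 (dependent), 1 (needs help), or 2 (independent).
ADLS = ["bathing", "dressing", "toileting", "eating", "functional mobility"]
IADLS = ["meal preparation", "housekeeping", "managing finances", "using the phone"]

@dataclass
class Assessment:
    scores: dict  # activity name -> 0, 1, or 2

    def total(self, activities):
        return sum(self.scores.get(a, 0) for a in activities)

    def summary(self):
        adl_total = self.total(ADLS)
        iadl_total = self.total(IADLS)
        return {
            "ADL score": f"{adl_total}/{2 * len(ADLS)}",
            "IADL score": f"{iadl_total}/{2 * len(IADLS)}",
            # Illustrative cutoff only: flag difficulty when fewer than half the points are reached.
            "ADL difficulty flag": adl_total < len(ADLS),
            "IADL difficulty flag": iadl_total < len(IADLS),
        }

patient = Assessment(scores={
    "bathing": 1, "dressing": 2, "toileting": 2, "eating": 2, "functional mobility": 1,
    "meal preparation": 1, "housekeeping": 0, "managing finances": 1, "using the phone": 2,
})
print(patient.summary())
```

A graded score like this mirrors the graded relationship described above: lower totals correspond to less independence, which studies associate with lower HRQoL.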
Pharmacology for older adults Taking more than five medications increases the risk of cognitive impairment in elderly patients, and is one consideration when assessing what factors impact the QoL, ADLs, and IADLs of older adults. Due to multiple chronic conditions, managing medications in this group of people is particularly challenging and complex. Recent studies have shown that polypharmacy is associated with ADL disability due to malnutrition, and is a risk factor for hospital admission due to falls, which can have severe consequences for a person's quality of life going forward. Thus, when assessing an elderly person's quality of life, it is important to consider the medications an older patient is taking, and whether they are adhering to their current prescription schedule. Occupational Therapy's Role Occupational therapists (OTs) are healthcare professionals who treat individuals to help them achieve their highest level of quality of life and independence through participation in everyday activities. OTs are trained to complete a person-centered evaluation of an individual's interests and needs, and tailor their treatment to specifically address the ADLs and IADLs that their patient values. In the AOTA's most recent vision statement, Vision 2025, the association states that occupational therapy, as an inclusive profession, maximizes quality of life through effective solutions that facilitate participation in everyday living. To learn more about occupational therapy, see the Wikipedia page dedicated to the profession. Special Considerations in Palliative Care HRQoL in patients with serious, progressive, life-threatening illness requires special consideration in both measurement and analysis. Oftentimes, as the level of functioning deteriorates, more emphasis is put on caregiver and proxy questionnaires or abbreviated questionnaires. Additionally, as diseases progress, patients and families often shift their priorities throughout the disease course. This can affect the measurement of HRQoL because patients often change the way they respond to questionnaires, which can result in HRQoL staying the same or even improving as their physical condition worsens. To address this issue, researchers have developed new instruments for measuring end-of-life HRQoL that incorporate factors such as sense of completion, relations with the healthcare system, preparation, symptom severity, and affective social support. Additionally, research is being conducted on the impact of existential QoL on palliative care patients, as terminal illness awareness and symptom burden may be associated with lower existential QoL. Examples Similar to other psychometric assessment tools, health-related quality of life questionnaires should meet certain quality criteria, most importantly with regard to their reliability and validity. Hundreds of validated health-related quality of life questionnaires have been developed, some of which are specific to various illnesses. The questionnaires can be generalized into two categories: Generic instruments CDC HRQOL–14 "Healthy Days Measure": A questionnaire with four base questions and ten optional questions used by the Centers for Disease Control and Prevention (CDC) (https://www.cdc.gov/hrqol/hrqol14_measure.htm). Short-Form Health Survey (SF-36, SF-12, SF-8): One example of a widely used questionnaire assessing physical and mental health-related quality of life. Used in clinical trials and population health assessments. Suitable for pharmacoeconomic analysis, benefiting healthcare rationing.
EQ-5D: a simple quality of life questionnaire (https://euroqol.org). AQoL-8D: a comprehensive questionnaire that assesses HRQoL over 8 domains (independent living, happiness, mental health, coping, relationships, self-worth, pain, and senses) (https://www.aqol.com.au). Disease, disorder or condition specific instruments King's Health Questionnaire (KHQ) and International Consultation on Incontinence Questionnaire-Short Form (ICIQ-SF) in urinary incontinence, the LC-13 Lung Cancer module from the EORTC Quality of Life questionnaire library, and the Hospital Anxiety and Depression Scale (HADS). Manchester Short Assessment of Quality of Life: 16-item questionnaire for use in psychiatric populations. ECOG scale, most commonly used to evaluate the impact of cancer on people. NYHA scale, most commonly used to evaluate the impact of heart disease on individuals. EORTC measurement system for use in clinical trials in oncology. These tools are robustly tested, validated, and translated. A large amount of reference data is now available. The field of HRQOL has grown significantly in the last decade, with hundreds of new studies and better reporting of clinical trials. HRQOL appears to be prognostic for survival in some diseases and patients. WHO Quality of Life-BREF (WHOQOL-BREF): A general quality of life survey validated for several countries. The Stroke Specific Quality of Life scale (SS-QOL): a patient-centered outcome measure intended to provide an assessment of health-related quality of life (HRQOL) specific to patients with stroke. It measures energy, family roles, language, mobility, mood, personality, self-care, social roles, thinking, upper extremity function, vision, and work productivity. In rheumatology, condition-specific instruments have been developed, such as RAQoL for rheumatoid arthritis, OAQoL for osteoarthritis, ASQoL for ankylosing spondylitis, SScQoL for systemic sclerosis, and PsAQoL for people with psoriatic arthritis. MOS-HIV (Medical Outcome Survey-HIV) was developed specifically for people living with HIV/AIDS. Utility A variety of validated surveys exist for healthcare providers to use for measuring a patient's health-related quality of life. The results are then used to help determine treatment options for the patient based on past results from other patients, and to measure intra-individual improvements in QoL pre- and post-treatment. When used as a longitudinal study device that surveys patients before, during, and after treatment, these surveys can help health care providers determine which treatment plan is the best option, thereby improving healthcare through an evolutionary process. Importance There is a growing field of research concerned with developing, evaluating, and applying quality of life measures within health-related research (e.g. within randomized controlled studies), especially in relation to Health Services Research. Well-executed health-related quality of life research informs those tasked with health rationing and anyone involved in the decision-making process of agencies such as the Food and Drug Administration, European Medicines Agency or National Institute for Clinical Excellence. Additionally, health-related quality of life research may be used as the final step in clinical trials of experimental therapies.
The understanding of quality of life is recognized as an increasingly important healthcare topic because the relationship between cost and value raises complex problems, often with high emotional attachment because of the potential impact on human life. For instance, healthcare providers must refer to cost-benefit analysis to make economic decisions about access to expensive drugs that may prolong life by a short amount of time and/or provide a minimal increase in quality of life. Additionally, these treatment drugs must be weighed against the cost of alternative treatments or preventative medicine. In the case of chronic and/or terminal illness where no effective cure is available, an emphasis is placed on improving health-related quality of life through interventions such as symptom management, adaptive technology, and palliative care. Another example of why understanding quality of life is important comes from a randomized study of 151 patients with metastatic non-small-cell lung cancer who were assigned to either early palliative care or standard care. The early palliative care group not only had better quality of life based on the Functional Assessment of Cancer Therapy-Lung scale and the Hospital Anxiety and Depression Scale, but also had fewer depressive symptoms (16% vs. 38%, P=0.01) despite receiving less aggressive end-of-life care (33% vs. 54%, P=0.05), and had longer median overall survival than the standard care group (11.6 months vs. 8.9 months, P=0.02). By having a quality of life measure, it is possible to evaluate early palliative care and see its value in terms of improved quality of care, less aggressive treatment and consequently lower costs, and greater quality and quantity of life. In the realm of elder care, research indicates that improvements in quality of life ratings may also improve resident outcomes, which can lead to substantial cost savings over time. Research has shown that evaluating an elderly person's functional status, in addition to other aspects of their health, helps improve geriatric quality of life and decrease caregiver burden. Research has also shown that quality of life ratings can be successfully used as a key performance metric when designing and implementing organizational change initiatives in nursing homes. Research Research on health-related quality of life is important because of the implications it can have for current and future treatments and health protocols. Validated health-related quality of life questionnaires can thereby become an integral part of clinical trials, helping to determine a trial drug's value in a cost-benefit analysis. For example, the Centers for Disease Control and Prevention (CDC) uses its health-related quality of life survey, the Healthy Days Measure, as part of research to identify health disparities, track population trends, and build broad coalitions around a measure of population health. This information can then be used by multiple levels of government or other officials to "increase quality and years of life" and to "eliminate health disparities" for equal opportunity. Within the field of childhood cancer, quality of life is often measured both during and after treatment. International comparisons of both outcomes and predictors are hindered by the use of a large number of different measurements. Recently, a first step toward a joint international consensus statement for measuring quality of survival in patients with childhood cancer has been established.
Ethics The quality of life ethic refers to an ethical principle that uses assessments of the quality of life that a person could potentially experience as a foundation for making decisions about the continuation or termination of life. It is often used in contrast to or in opposition to the sanctity of life ethic. While measuring tools can be a way to scientifically quantify quality of life in an objective manner across a broad range of topics and circumstances, there are limitations and potential negative consequences to their use. First, such tools assume that an assessment can quantify domains such as physical, emotional, and social well-being with a single quantitative score. Furthermore, how are these domains weighted? Will they be weighted equally for each person, or will the final score take into account how important each domain is to the individual? Each person has their own specific set of experiences and values, and one argument is that these need to be taken into account; however, ranking quality of life domains would be a difficult task for the person. Another point to keep in mind is that people's values and experiences change over time, and their quality of life domain rankings may change with them; this caveat should be taken into account when interpreting and understanding the results from a quality of life measuring tool. Quality of life measuring tools can also promote a negative and pessimistic view among clinicians, patients, and families, especially when used at baseline at the time of diagnosis. Quality of life measuring tools can fail to account for effective therapeutic strategies that can alleviate health burdens, and thus can promote a self-fulfilling prophecy for patients. On a societal level, the concept of low quality of life can also perpetuate negative prejudices experienced by people with disabilities or chronic illnesses. Analysis Statistical biases Statistical anomalies are not uncommon during data analysis. Among those most frequently seen in health-related quality of life analysis are the ceiling effect, the floor effect, and response shift bias. The ceiling effect refers to patients who start with a higher quality of life than the average patient and therefore have little room for improvement when treated; the floor effect is the opposite, where patients with a lower than average quality of life have much more room for improvement. Consequently, if the spectrum of quality of life before treatment is too unbalanced, there is a greater potential for skewing the end results, creating the possibility of incorrectly portraying a treatment's effectiveness or lack thereof. Response shift bias Response shift bias is an increasing problem within longitudinal studies that rely on patient-reported outcomes. It refers to the potential for a subject's views, values, or expectations to change over the course of a study, thereby adding another source of change to the end results. Clinicians and healthcare providers must recalibrate surveys over the course of a study to account for response shift bias. The degree of recalibration varies with the area of investigation and the length of the study. Statistical variation In a study by Norman et al. of health-related quality of life surveys, it was found that most survey results were within a half standard deviation. Norman et al.
theorized that this is due to limited human discrimination ability, as identified by George A. Miller in 1956. Using the magic number 7 ± 2, Miller theorized that when a survey scale extends beyond 7 ± 2 steps, humans become inconsistent and lose the ability to differentiate individual steps on the scale because of limited channel capacity. Norman et al. proposed that health-related quality of life surveys use half a standard deviation as the threshold for a meaningful benefit of a treatment, instead of calculating survey-specific "minimally important differences", which are the supposed real-life improvements reported by the subjects. In other words, Norman et al. proposed that all health-related quality of life survey scales adopt a half standard deviation as the benchmark, instead of calculating, for each survey validation study, a scale whose steps are referred to as "minimally important differences" (a numerical sketch of this rule follows below). See also Medical law Patient-reported outcome Pharmacoeconomics Medical ethics References External links ProQolid (Patient-Reported Outcome & Quality of Life Instruments Database) Mapi Research Trust ("Non-profit organization involved in Patient-Centered Outcomes") PROLabels (Database on Patient-Reported Outcome claims in marketing authorizations) Quality-of-Life-Recorder (Project to bring QoL measurement to routine practice. Platform & library of electronic questionnaires, Shareware/Freeware) The International Society for Quality of Life Health and Quality of Life Outcomes The Healthcare Center. Better Health for Everyone Health care Medical terminology Healthcare
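To make the half-standard-deviation rule discussed above concrete, here is a minimal numerical sketch. The survey scores are invented and the rule of thumb (treat a mean change of at least 0.5 standard deviations of the baseline scores as meaningful) is applied exactly as stated, but this is an illustration of the idea rather than a validated analysis procedure.

```python
import statistics

# Invented baseline and follow-up scores on a hypothetical 0-100 HRQoL scale.
baseline  = [62, 55, 70, 48, 66, 58, 73, 51, 60, 64]
follow_up = [68, 57, 74, 55, 70, 61, 79, 52, 66, 71]

baseline_sd = statistics.stdev(baseline)
threshold = 0.5 * baseline_sd  # half a standard deviation, per the rule of thumb

changes = [after - before for before, after in zip(baseline, follow_up)]
mean_change = statistics.mean(changes)

print(f"Baseline SD:      {baseline_sd:.2f}")
print(f"0.5 SD threshold: {threshold:.2f}")
print(f"Mean change:      {mean_change:.2f}")
print("Meaningful benefit?", mean_change >= threshold)
```

The point of the rule is that the benchmark comes from the spread of the baseline scores themselves, so no survey-specific "minimally important difference" needs to be derived separately for each instrument.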
Skin fold
Skin folds or skinfolds are areas of skin that are naturally folded. Many skin folds are distinct, heritable anatomical features, and may be used for identification of animal species, while others are non-specific and may be produced either by individual development of an organism or by arbitrary application of force to skin, either by the actions of the muscles of the body or by external force, e.g., gravity. Anatomical folds can also be found in other structures and tissues besides the skin, such as the ileocecal fold beneath the terminal ileum of the cecum. Skin folds are of interest for cosmetology, as some kinds may be considered aesthetically undesirable, and for medicine, because some of them are susceptible to inflammation and infection. Skin creases, skin folds and lines The skin creases of the human body are features of great anatomical, morphological, and surgical interest and are important for maintaining the contour of each anatomic area. In the literature, terms other than "crease", such as "fold" and "sulcus", are used variably when referring to a skin crease, but these terms do not accurately reflect its histological structure or its function. Reviews of the literature have attempted to record the creases of the human body for each anatomic area, including the synonyms used for each crease. A skin crease is a fixed and permanent line that, according to its histology, is related to connective tissue attachments to the underlying structures or to extensions of the underlying muscle fibers into the dermis at the crease site. A skin fold is characterized by skin redundancy that is partly responsible, often in combination with connective tissue attachments, for the skin crease. It is essential to use appropriate terms that accurately reflect the anatomic structure and histology when referring to skin lines. Human skin folds The following distinct skin fold types are among the roughly 100 identified in human anatomy: Nasolabial fold Epicanthal fold Interdigital folds (Plica Interdigitalis) Inframammary fold Triceps skinfold Webbed neck See also Wrinkle Skin line Pannus Dimple Intertrigo Irritant diaper dermatitis Inverse psoriasis References fold Human anatomy
Immersion foot syndromes
Immersion foot syndromes are a class of foot injury caused by water absorption in the outer layer of skin. There are different subclass names for this condition based on the temperature of the water to which the foot is exposed. These include trench foot, tropical immersion foot, and warm water immersion foot. In one 3-day military study, it was found that submersion in water allowing for a higher skin temperature resulted in worse skin maceration and pain. Causes Trench foot Trench foot is a medical condition caused by prolonged exposure of the feet to damp, unsanitary, and cold conditions. The use of the word trench in the name of this condition is a reference to trench warfare, mainly associated with World War I. Affected feet may become numb and affected by erythrosis (turning red) or cyanosis (turning blue) as a result of poor vascular supply, and feet may begin to have a decaying odour if the early stages of necrosis set in. As the condition worsens, feet may also begin to swell. Advanced trench foot often involves blisters and open sores, which lead to fungal infections; this is sometimes called tropical ulcer (jungle rot). If left untreated, trench foot usually results in gangrene, which can create the need for amputation. If trench foot is treated properly, complete recovery is normal, though it is marked by severe short-term pain when feeling returns. As with other cold-related injuries, trench foot leaves those affected more susceptible to it in the future. Tropical immersion foot Tropical immersion foot (also known as "paddy foot" or "paddy-field foot") is a skin condition of the feet seen after continuous immersion of the feet in water or mud of temperature above for two to ten days. Warm water immersion foot Warm water immersion foot is a skin condition of the feet that results after exposure to warm, wet conditions for 48 hours or more and is characterized by maceration ("pruning"), blanching, and wrinkling of the soles, padding of the toes (especially the big toe), and padding of the sides of the feet. Foot maceration occurs whenever the feet are exposed to moist conditions for prolonged periods. Large watery blisters appear, which are painful when they open and begin to peel away from the foot itself. The heels, sides, and bony prominences are left with large areas of extremely sensitive, red tissue, exposed and prone to infection. As the condition worsens, more blisters develop due to prolonged dampness, eventually covering the entire heel and/or other large, padded sections of the foot, especially the undersides, as well as the toes. Each layer in turn peels away, resulting in deep, extremely tender, red ulcers. Healing occurs only when the feet are cleansed, dried, and exposed to air for weeks. Scarring is permanent, with dry, thin skin that appears red for up to a year or more. The padding of the feet returns, but healing can be painful as the nerves repair, with characteristics similar to diabetic neuropathy. Antibiotics and/or antifungals are sometimes prescribed. Immersion foot is a common problem among homeless individuals who wear one pair of socks and shoes for extended periods of time, especially shoes and sneakers that stay wet from rain and snow. The condition is exacerbated by excessive dampness of the feet for prolonged periods of time. Fungal and bacterial infections thrive in the warm, dark, wet conditions and are characterized by a sickly odor that is distinctive of immersion foot.
Prevention In the British Army, policies were developed to help the soldiers keep their feet dry—the surest way of preventing the disease. Soldiers were told to dry their feet, and keep them dry by changing socks several times a day. After the first year of the First World War, British troops were instructed to keep at least three pairs of socks with them and to frequently change them. The use of whale oil was also successful in combating trench foot. A British battalion in front line positions could be expected to use ten gallons of whale oil every day. References External links Skin conditions resulting from physical factors de:Immersionsfuß he:רגל חפירות nl:Loopgravenvoeten no:Skyttergravsfot sv:Skyttegravsfot
Primary and secondary gain
Primary gain and secondary gain, and more rarely tertiary gain, are terms used in medicine and psychology to describe the significant subconscious psychological motivators patients may have when presenting with symptoms. If these motivators are recognized by the patient, and especially if symptoms are fabricated or exaggerated for personal gain, then this is instead considered malingering. The difference between primary and secondary gain is that with primary gain, the reason a person may not be able to go to work is because they are injured or ill, whereas with secondary gain, the reason that person is injured or ill is so that they cannot go to work. Primary gain Primary gain produces positive internal motivations. For example, a patient might feel guilty about being unable to perform some task. If a medical condition justifying an inability is present, it may lead to decreased psychological stress. Primary gain can be a component of any disease, but is most typically demonstrated in conversion disorder — a psychiatric disorder in which stressors manifest themselves as physical symptoms without organic causes, such as a person who becomes blind after seeing a murder. The "gain" may not be particularly evident to an outside observer. Secondary gain Secondary gain can also be a component of any disease, but is an external motivator. If a patient's disease allows them to miss work, avoid military duty, obtain financial compensation, obtain drugs, avoid a jail sentence, etc., these would be examples of a secondary gain. For instance, an individual having household chores completed by someone else because they have stomach cramps would be a secondary gain. In the context of a person with a significant psychiatric disability, this effect is sometimes called "secondary handicap". Tertiary gain Tertiary gain, a less well-studied process, is the benefit that a third-party receives from the patient's symptoms. It includes gaslighting wherein a person, such as a family member or healthcare worker for financial or other reasons, manipulates a patient into believing that they are ill. Tertiary gain can also be received when, for example, a pharmaceutical company runs advertisements to convince viewers they have symptoms which require treatment with the company's drug. References Psychiatric diagnosis
Health assessment
A health assessment is a plan of care that identifies the specific needs of a person and how those needs will be addressed by the healthcare system or skilled nursing facility. Health assessment is the evaluation of health status by performing a physical exam after taking a health history. It is done to detect diseases early in people who may look and feel well. Evidence does not support routine health assessments in otherwise healthy people. Health assessment is the evaluation of the health status of an individual along the health continuum. The purpose of the assessment is to establish where on the health continuum the individual is, because this guides how to approach and treat the individual. The health care approaches range from preventive, to treatment, to palliative care in relation to the individual's status on the health continuum. It is not the treatment or treatment plan. The plan related to the findings is a care plan, which is preceded by the specialty, such as medical, physical therapy, nursing, etc. Corporate health assessments Research by Data Bridge Market Research shows that the market for corporate health assessments, valued at USD 291,272.4 million in 2023, is likely to reach USD 823,374.65 million by 2031 and is expected to undergo a CAGR of 12.5% during the forecast period. Healthcare providers such as Bupa and Nuffield now routinely offer health assessments to individuals and corporate clients, building on the growing market for these services. Definitions of health assessment vary, with some using the terms health assessment and health check interchangeably. UK healthcare provider Verve Healthcare draws a clear distinction: A staff health check is a routine examination conducted by a health professional to assess an individual's overall health status. The primary aim is to identify health issues early, to monitor ongoing health conditions and to monitor future health risks. Health assessments are more detailed than regular health checks. They provide a holistic view of an individual's health and can identify underlying health conditions. History Health assessment has been distinguished by some authors from physical assessment in order to include, as a fundamental teaching, the focus on health occurring on a continuum. In the healthcare industry it is understood that health occurs on a continuum, so the term used is assessment, which may be prefaced by the specialty's focus, such as nursing, physical therapy, etc. In healthcare, the assessment's focus is biopsychosocial, but the intensity of focus may vary by the type of healthcare practitioner. For example, in the emergency room the focus is the chief complaint and how to help that person with the perceived problem. If the problem is a heart attack, then the initial focus is on the biological/physical problem. See also Nursing assessment References Diagnosis codes Medical terminology
Socioecology
Socioecology is the scientific study of how social structure and organization are influenced by an organism's environment. Socioecology is primarily related to anthropology, geography, sociology, and ecology. Specifically, the term is used in human ecology, the study of the interaction between humans and their environment. Socioecological models of human health examine the interaction of many factors, ranging from the narrowest (individual behaviors) to the broadest (federal policies). The factors of socioecological models consist of individual behaviors, sociodemographic factors (race, education, socioeconomic status), interpersonal factors (romantic, family, and coworker relationships), community factors (physical and social environment), and societal factors (local, state, and federal policies). References External links Socioecology Research Today (free online) Environmental social science
Biophotonics
The term biophotonics denotes a combination of biology and photonics, with photonics being the science and technology of generation, manipulation, and detection of photons, quantum units of light. Photonics is related to electronics and photons. Photons play a central role in information technologies, such as fiber optics, the way electrons do in electronics. Biophotonics can also be described as the "development and application of optical techniques, particularly imaging, to the study of biological molecules, cells and tissue". One of the main benefits of using the optical techniques which make up biophotonics is that they preserve the integrity of the biological cells being examined. Biophotonics has therefore become the established general term for all techniques that deal with the interaction between biological items and photons. This refers to emission, detection, absorption, reflection, modification, and creation of radiation from biomolecular, cells, tissues, organisms, and biomaterials. Areas of application are life science, medicine, agriculture, and environmental science. Similar to the differentiation between "electric" and "electronics," a difference can be made between applications such as therapy and surgery, which use light mainly to transfer energy, and applications such as diagnostics, which use light to excite matter and to transfer information back to the operator. In most cases, the term biophotonics refers to the latter type of application. Applications Biophotonics is an interdisciplinary field involving the interaction between electromagnetic radiation and biological materials including: tissues, cells, sub-cellular structures, and molecules in living organisms. Recent biophotonics research has created new applications for clinical diagnostics and therapies involving fluids, cells, and tissues. These advances are allowing scientists and physicians opportunities for superior, non-invasive diagnostics for vascular and blood flow, as well as tools for better examination of skin lesions. In addition to new diagnostic tools, the advancements in biophotonics research have provided new photothermal, photodynamic, and tissue therapies. Raman and FT-IR based diagnostics Raman and FTIR spectroscopy can be applied in many different ways towards improved diagnostics. For example: Identifying bacterial and fungal infections Tissue tumor assessment in: skin, liver, bones, bladder etc. Identifying antibiotic resistances Other applications Dermatology By observing the numerous and complex interactions between light and biological materials, the field of biophotonics presents a unique set of diagnostic techniques that medical practitioners can utilize. Biophotonic imaging provides the field of dermatology with the only non-invasive technique available for diagnosing skin cancers. Traditional diagnostic procedures for skin cancers involve visual assessment and biopsy, but a new laser-induced fluorescence spectroscopy technique allow dermatologists to compare spectrographs of a patient's skin with spectrographs known to correspond with malignant tissue. This provides doctors with earlier diagnosis and treatment options. "Among optical techniques, an emerging imaging technology based on laser scanning, the optical coherence tomography or OCT imaging is considered to be a useful tool to differentiate healthy from malignant skin tissue". The information is immediately accessible and eliminates the need for skin excision. 
This also eliminates the need for the skin samples to be processed in a lab, which reduces labor costs and processing time. Furthermore, these optical imaging technologies can be used during traditional surgical procedures to determine the boundaries of lesions and ensure that the entirety of the diseased tissue is removed. This is accomplished by exposing nanoparticles that have been dyed with a fluorescing substance to the appropriate light photons. Nanoparticles that are functionalized with fluorescent dyes and marker proteins congregate in a chosen tissue type. When the particles are exposed to wavelengths of light that correspond to the fluorescent dye, the unhealthy tissue glows. This allows the attending surgeon to quickly and visually identify the boundaries between healthy and unhealthy tissue, resulting in less time on the operating table and improved patient recovery. "Using dielectrophoretic microarray devices, nanoparticles and DNA biomarkers were rapidly isolated and concentrated onto specific microscopic locations where they were easily detected by epifluorescent microscopy".

Optical tweezers
Optical tweezers (or traps) are scientific tools employed to maneuver microscopic particles such as atoms, DNA, bacteria, viruses, and other types of nanoparticles. They use the light's momentum to exert small forces on a sample. This technique allows for the organizing and sorting of cells, the tracking of the movement of bacteria, and the changing of cell structure.

Laser micro-scalpel
Laser micro-scalpels are a combination of fluorescence microscopy and a femtosecond laser, which "can penetrate up to 250 micrometers into tissue and target single cells in 3-D space." The technology, which was patented by researchers at the University of Texas at Austin, means that surgeons can excise diseased or damaged cells without disturbing or damaging healthy surrounding cells in delicate surgeries involving areas such as the eyes and vocal cords.

Photoacoustic microscopy (PAM)
Photoacoustic microscopy (PAM) is an imaging technology that utilizes both laser technology and ultrasound technology. This dual imaging modality is far better at imaging deep tissue and vascular tissue than previous imaging technologies. The improvement in resolution provides higher-quality images of deep tissues and vascular systems, allowing non-invasive differentiation of cancerous tissue from healthy tissue by observing such things as "water content, oxygen saturation level, and hemoglobin concentration". Researchers have also been able to use PAM to diagnose endometriosis in rats.

Low level laser therapy (LLLT)
Although low-level laser therapy's (LLLT) efficacy is somewhat controversial, the technology can be used to treat wounds by repairing tissue and preventing tissue death. However, more recent studies indicate that LLLT is more useful for reducing inflammation and assuaging chronic joint pain. In addition, it is believed that LLLT could possibly prove to be useful in the treatment of severe brain injury or trauma, stroke, and degenerative neurological diseases.

Photodynamic therapy (PDT)
Photodynamic therapy (PDT) uses photosensitizing chemicals and oxygen to induce a cellular reaction to light. It can be used to kill cancer cells, treat acne, and reduce scarring. PDT can also kill bacteria, viruses, and fungi. The technology provides treatment with little to no long-term side effects, is less invasive than surgery and can be repeated more often than radiation.
Treatment is limited, however, to surfaces and organs that can be exposed to light, which rules out deep-tissue cancer treatments.

Photothermal therapy
Photothermal therapy most commonly uses nanoparticles made of a noble metal to convert light into heat. The nanoparticles are engineered to absorb light in the 700–1000 nm range, where the human body is optically transparent. When the particles are hit by light they heat up, disrupting or destroying the surrounding cells via hyperthermia. Because the light used does not interact with tissue directly, photothermal therapy has few long-term side effects and can be used to treat cancers deep within the body.

FRET
Fluorescence resonance energy transfer, also known as Förster resonance energy transfer (FRET in both cases), is the term given to the process whereby two excited "fluorophores" pass energy from one to the other non-radiatively (i.e., without exchanging a photon). By carefully selecting the excitation of these fluorophores and detecting the emission, FRET has become one of the most widely used techniques in the field of biophotonics, giving scientists the chance to investigate sub-cellular environments.

Biofluorescence
Biofluorescence describes the absorption of ultraviolet or visible light and the subsequent emission of photons at a lower energy level (the S_1 excited state relaxes to the S_0 ground state) by intrinsically fluorescent proteins or by synthetic fluorescent molecules covalently attached to a biomarker of interest. Biomarkers are molecules indicative of disease or distress and are typically monitored systemically in a living organism, or by using an ex vivo tissue sample for microscopy, or in vitro: in the blood, urine, sweat, saliva, interstitial fluid, aqueous humor, or sputum. Stimulating light excites an electron, raising its energy to an unstable level. This instability is unfavorable, so the energized electron returns to a stable state almost as soon as it becomes unstable. Because some energy is lost between excitation and re-emission, the re-emitted photon is a different color than the excitation light that was absorbed: the electron relaxes to a lower energy, so the emitted photon is at a longer wavelength, as governed by the Planck–Einstein relation. This return to stability corresponds with the release of excess energy in the form of fluorescent light. This emission of light is only observable while the excitation light is still providing photons to the fluorescent molecule; the molecule is typically excited by blue or green light and emits purple, yellow, orange, green, cyan, or red. Biofluorescence is often confused with the following forms of biotic light: bioluminescence and biophosphorescence.

Bioluminescence
Bioluminescence differs from biofluorescence in that it is the natural production of light by chemical reactions within an organism, whereas biofluorescence and biophosphorescence are the absorption and reemission of light from the natural environment.

Biophosphorescence
Biophosphorescence is similar to biofluorescence in its requirement of light at specified wavelengths as a provider of excitation energy. The difference here lies in the relative stability of the energized electron.
Unlike with biofluorescence, here the electron retains stability in the forbidden triplet state (unpaired spins), with a longer delay in emitting light, so that the material continues to "glow in the dark" even long after the stimulating light source has been removed.

Biolasing
A biolaser is a laser in which the light is generated by or from within a living cell. Imaging in biophotonics often relies on laser light, and integration with biological systems is seen as a promising route to enhancing sensing and imaging techniques. Biolasers, like any lasers, require three components: a gain medium, an optical feedback structure and a pump source. For the gain medium, a variety of naturally produced fluorescent proteins can be used in different laser structures. Enclosing an optical feedback structure in a cell has been demonstrated using cell vacuoles, as well as using fully enclosed laser systems such as dye-doped polymer microspheres or semiconductor nanodisk lasers.

Light sources
The predominantly used light sources are lasers and other beam sources; LEDs and superluminescent diodes also play an important role. Typical wavelengths used in biophotonics are between 600 nm (visible) and 3,000 nm (near infrared).

Lasers
Lasers play an increasingly important role in biophotonics. Their unique intrinsic properties, such as precise wavelength selection, wide wavelength coverage, high focusability (and thus good spectral resolution), high power densities and a broad range of excitation periods, make them the most universal light tool for a wide spectrum of applications. As a consequence, a variety of different laser technologies from a broad number of suppliers can be found on the market today.

Gas lasers
Major gas lasers used for biophotonics applications, and their most important wavelengths, are:
- Argon ion laser: 457.8 nm, 476.5 nm, 488.0 nm, 496.5 nm, 501.7 nm, 514.5 nm (multi-line operation possible)
- Krypton ion laser: 350.7 nm, 356.4 nm, 476.2 nm, 482.5 nm, 520.6 nm, 530.9 nm, 568.2 nm, 647.1 nm, 676.4 nm, 752.5 nm, 799.3 nm
- Helium–neon laser: 632.8 nm (543.5 nm, 594.1 nm, 611.9 nm)
- HeCd lasers: 325 nm, 442 nm
Other commercial gas lasers such as carbon dioxide, carbon monoxide, nitrogen, oxygen, xenon-ion, excimer or metal vapor lasers have little or no importance in biophotonics. The major advantages of gas lasers in biophotonics are their fixed wavelengths, their excellent beam quality and their low linewidth/high coherence. Argon ion lasers can also operate in multi-line mode. The major disadvantages are high power consumption, mechanical noise due to fan cooling and limited laser powers. Key suppliers are Coherent, CVI/Melles Griot, JDSU, Lasos, LTB and Newport/Spectra Physics.

Diode lasers
The most commonly integrated laser diodes, which are used for diode lasers in biophotonics, are based on either GaN or GaAs semiconductor material. GaN covers a wavelength spectrum from 375 to 488 nm (commercial products at 515 nm have been announced recently), whereas GaAs covers a wavelength spectrum starting from 635 nm. The most commonly used wavelengths from diode lasers in biophotonics are: 375, 405, 445, 473, 488, 515, 640, 643, 660, 675 and 785 nm.
Laser diodes are available in four classes:
- Single edge emitter/broad stripe/broad area
- Surface emitter/VCSEL
- Edge emitter/ridge waveguide
- Grating stabilized (DFB, DBR, ECDL)
For biophotonic applications, the most commonly used laser diodes are edge-emitting/ridge-waveguide diodes, which are single transverse mode and can be optimized to an almost perfect TEM00 beam quality. Due to the small size of the resonator, digital modulation can be very fast (up to 500 MHz). Coherence length is low (typically < 1 mm) and the typical linewidth is in the nm range. Typical power levels are around 100 mW (depending on wavelength and supplier). Key suppliers are: Coherent, Melles Griot, Omicron, Toptica, JDSU, Newport, Oxxius, Power Technology. Grating-stabilized diode lasers have either a lithographically incorporated grating (DFB, DBR) or an external grating (ECDL). As a result, the coherence length rises into the range of several meters, whereas the linewidth drops to well below a picometer (pm). Biophotonic applications that make use of these characteristics are Raman spectroscopy (which requires a linewidth below 1 cm−1) and spectroscopic gas sensing.

Solid-state lasers
Solid-state lasers are lasers based on solid-state gain media such as crystals or glasses doped with rare earth or transition metal ions, or semiconductor lasers. (Although semiconductor lasers are of course also solid-state devices, they are often not included in the term solid-state lasers.) Ion-doped solid-state lasers (also sometimes called doped insulator lasers) can be made in the form of bulk lasers, fiber lasers, or other types of waveguide lasers. Solid-state lasers may generate output powers between a few milliwatts and (in high-power versions) many kilowatts.

Ultrachrome lasers
Many advanced applications in biophotonics require individually selectable light at multiple wavelengths. As a consequence, a series of new laser technologies has been introduced for which precise terminology is still being settled. The most commonly used term is supercontinuum lasers, which emit visible light over a wide spectrum simultaneously. This light is then filtered, e.g. via acousto-optic modulators (AOM, AOTF), into one or up to eight different wavelengths. Typical suppliers of this technology were NKT Photonics and Fianium; NKT Photonics has since acquired Fianium and remains the major supplier of supercontinuum technology on the market. In another approach, the supercontinuum is generated in the infrared and then converted at a single selectable wavelength into the visible regime. This approach does not require AOTFs and has background-free spectral purity. Since both concepts have major importance for biophotonics, the umbrella term "ultrachrome lasers" is often used.

Swept sources
Swept sources are designed to continuously change ('sweep') the emitted light frequency in time. They typically cycle continuously through a pre-defined range of wavelengths (e.g., 800 ± 50 nm). Swept sources in the terahertz regime have been demonstrated. A typical application of swept sources in biophotonics is optical coherence tomography (OCT) imaging.

THz sources
Vibrational spectroscopy in the terahertz (THz) frequency range, 0.1–10 THz, is a fast-emerging technique for fingerprinting biological molecules and species. For more than 20 years, theoretical studies have predicted multiple resonances in absorption (or transmission) spectra of biological molecules in this range.
THz radiation interacts with the low-frequency internal molecular vibrations by exciting these vibrations.

Single photon sources
Single photon sources are novel types of light sources, distinct from coherent light sources (lasers) and thermal light sources (such as incandescent light bulbs and mercury-vapor lamps), that emit light as single particles or photons.

References

Photonics Light therapy Bioelectromagnetics
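The biofluorescence discussion above invokes the Planck–Einstein relation (E = hc/λ) to explain why emitted photons are red-shifted relative to the excitation light. As a minimal sketch of that relation (the two example wavelengths are assumed for illustration and are not taken from the text), the following Python snippet computes photon energies and shows that a lower-energy emitted photon corresponds to a longer wavelength.

# Planck-Einstein relation E = h * c / wavelength, used above to explain the
# red shift (Stokes shift) between excitation and fluorescence emission.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_nm: float) -> float:
    """Energy of a photon of the given wavelength (nm), in joules."""
    return h * c / (wavelength_nm * 1e-9)

excitation_nm = 488.0   # assumed blue-green excitation wavelength
emission_nm = 520.0     # assumed (longer) emission wavelength

e_exc = photon_energy_joules(excitation_nm)
e_emi = photon_energy_joules(emission_nm)
print(f"Excitation photon: {e_exc:.3e} J, emission photon: {e_emi:.3e} J")
print(f"Energy lost to the fluorophore per photon: {e_exc - e_emi:.3e} J")
# The emitted photon carries less energy, consistent with its longer wavelength.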
Psammoma body
A psammoma body is a round collection of calcium, seen microscopically. The term is derived from the Greek psámmos, meaning "sand".

Cause
Psammoma bodies are associated with papillary (nipple-like) histomorphology and are thought to arise from:
Infarction and calcification of papillae tips.
Calcification of intralymphatic tumor thrombi.

Association with lesions
Psammoma bodies are commonly seen in certain tumors such as:
Papillary thyroid carcinoma
Papillary renal cell carcinoma
Ovarian papillary serous cystadenoma and cystadenocarcinoma
Endometrial adenocarcinomas (papillary serous carcinoma ~3%-4%)
Meningiomas, in the central nervous system
Peritoneal and pleural mesothelioma
Somatostatinoma (pancreas)
Prolactinoma of the pituitary
Glucagonoma
Micropapillary subtype of lung adenocarcinoma

Benign lesions
Psammoma bodies may be seen in:
Endosalpingiosis
Psammomatous melanotic schwannoma
Melanocytic nevus

Appearance
Psammoma bodies usually have a laminar appearance and are circular, acellular and basophilic.

References

External links
Slides: Meningioma, Thyroid cancer, Endometriosis (peritoneum)
Video of psammoma bodies in meningioma

Histopathology
Micronutrient deficiency
Micronutrient deficiency is defined as the sustained insufficient supply of vitamins and minerals needed for growth and development, as well as to maintain optimal health. Since some of these compounds are considered essentials (we need to obtain them from the diet), micronutrient deficiencies are often the result of an inadequate intake. However, it can also be associated to poor intestinal absorption, presence of certain chronic illnesses and elevated requirements. Prevalence Micronutrient deficiencies are considered a public health problem worldwide. For over 30 years it has been estimated that more than two billion people of all ages are affected by this burden, while a recently published study based on individual-level biomarker data estimated that there are 372 million children aged 5 years and younger, and 1.2 billion non-pregnant women of reproductive age with one or more micronutrient deficiencies globally, affecting greatly Asia and sub-Saharan Africa. Women of reproductive age (including pregnant and lactating) as well as children and adolescents are at higher risk of micronutrient deficiencies due to their higher demands. Similarly, the elderly are among the most vulnerable populations, associated to reduced absorption and utilization, as well as poorer diets. Vegans and people reducing animal-source foods in their diets, as recommended by many scientific studies and experts, are also at greater risk of some micronutrient deficiencies if they don't adequately consume supplements or foods substituting animal-sourced micronutrients. The most commonly analyzed micronutrient deficiencies, and therefore the most prevalent, include iodine, iron, zinc, calcium, selenium, fluorine, and vitamins A, B6, B12, B9 (folate) and D, with large variations between countries and populations. Impact Micronutrient deficiencies are associated to short- and long-term consequences as clinical symptoms and signs will manifest in relation to the body stores for the specific micronutrient and the magnitude of the deficiency. Nonetheless, it has been well established that micronutrient deficiencies are major contributors to impaired growth and neurodevelopment, perinatal complications and increased risk of morbidity and mortality. It has also been associated with 10% of all children's deaths, and are therefore of special concern to those involved with child welfare. Early childhood micronutrient deficiency leads to stunted growth and impaired cognitive development, which in turn can translate into reduced work capacity, productivity and overall well-being during adulthood. Deficiencies can constrain physical and (neurocognitive) development and compromise health in various ways. Beyond dangerous health conditions, they can also lead to less clinically notable reductions in energy level, mental clarity and overall capacity. They not only affect the cognition of elderly and children but also that of adults. Micronutrients help to resist or to recover from infectious diseases which can have extensive health impacts. Causes Deficiencies of essential vitamins or minerals such as Vitamin A, iron, and zinc may be caused by long-term shortages of nutritious food or by infections such as intestinal worms. They may also be caused or exacerbated when illnesses (such as diarrhoea or malaria) cause rapid loss of nutrients through feces or vomit. Interventions There are several interventions to improve the micronutrient status including fortification of foods, supplementation and treatment of underlying infections. 
Implementation of appropriate micronutrient interventions has several benefits, including improved cognitive development or enhanced cognition, increased child survival, and reduced prevalence of low birth weight. Plants In plants a micronutrient deficiency (or trace mineral deficiency) is a physiological plant disorder which occurs when a micronutrient is deficient in the soil in which a plant grows. Micronutrients are distinguished from macronutrients (nitrogen, phosphorus, sulfur, potassium, calcium and magnesium) by the relatively low quantities needed by the plant. A number of elements are known to be needed in these small amounts for proper plant growth and development. Nutrient deficiencies in these areas can adversely affect plant growth and development. Some of the best known trace mineral deficiencies include: zinc deficiency, boron deficiency, iron deficiency, and manganese deficiency. List of essential trace minerals for plants Boron is believed to be involved in carbohydrate transport in plants; it also assists in metabolic regulation. Boron deficiency will often result in bud dieback. Chlorine is necessary for osmosis and ionic balance; it also plays a role in photosynthesis. Copper is a component of some enzymes and of vitamin A. Symptoms of copper deficiency include browning of leaf tips and chlorosis. Iron is essential for chlorophyll synthesis, which is why an iron deficiency results in chlorosis. Manganese activates some important enzymes involved in chlorophyll formation. Manganese deficient plants will develop chlorosis between the veins of its leaves. The availability of manganese is partially dependent on soil pH. Molybdenum is essential to plant health. Molybdenum is used by plants to reduce nitrates into usable forms. Some plants use it for nitrogen fixation, thus it may need to be added to some soils before seeding legumes. Nickel is essential for activation of urease, an enzyme involved with nitrogen metabolism that is required to process urea. Zinc participates in chlorophyll formation, and also activates many enzymes. Symptoms of zinc deficiency include chlorosis and stunted growth. See also Screening (medicine) Blood test References External links Physiological plant disorders Nutrition
Body force
In physics, a body force is a force that acts throughout the volume of a body. Forces due to gravity, electric fields and magnetic fields are examples of body forces. Body forces contrast with contact forces or surface forces, which are exerted at the surface of an object. Fictitious forces such as the centrifugal force, Euler force, and the Coriolis effect are other examples of body forces.

Definition
Qualitative
A body force is simply a type of force, and so it has the same dimensions as force, [M][L][T]−2. However, it is often convenient to talk about a body force in terms of either the force per unit volume or the force per unit mass. If the force per unit volume is of interest, it is referred to as the force density throughout the system. A body force is distinct from a contact force in that the force does not require contact for transmission. Thus, common forces associated with pressure gradients and conductive and convective heat transmission are not body forces, as they require contact between systems to exist. Radiation heat transfer, on the other hand, is a perfect example of a body force. More examples of common body forces include:
Gravity,
Electric forces acting on an object charged throughout its volume,
Magnetic forces acting on currents within an object, such as the braking force that results from eddy currents,
Fictitious forces (or inertial forces), which can be viewed as body forces. Common inertial forces are:
Centrifugal force,
Coriolis force,
Euler force (or transverse force), which occurs in a rotating reference frame when the rate of rotation of the frame is changing.
However, fictitious forces are not actually forces. Rather they are corrections to Newton's second law when it is formulated in an accelerating reference frame. (Gravity can also be considered a fictitious force in the context of general relativity.)

Quantitative
The body force density f is defined so that its volume integral (throughout a volume of interest) gives the total force acting throughout the body:

F_body = ∫_V f(r) dV

where dV is an infinitesimal volume element, and f is the external body force density field acting on the system.

Acceleration
Like any other force, a body force will cause an object to accelerate. For a non-rigid object, Newton's second law applied to a small volume element is

f(r) = ρ(r) a(r),

where ρ(r) is the mass density of the substance, f the force density, and a(r) the acceleration, all at point r.

The case of gravity
In the case of a body in the gravitational field on a planet surface, a(r) is nearly constant (g) and uniform. Near the Earth, g ≈ 9.81 m/s². In this case the total body force is simply

F = m g,

where m is the mass of the body.

See also
Action at a distance
Fictitious force
Force density
Non-contact force
Normal force
Surface force

References

Force
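As a minimal numerical sketch of the definitions above (the density, block dimensions and discretization are assumed purely for illustration), the following Python snippet integrates a gravitational body force density f = ρg over a discretized block of uniform density and checks that the result matches F = mg.

import numpy as np

# Sketch: total body force as the volume integral of the force density f = rho * g.
rho = 2700.0                      # assumed uniform mass density, kg/m^3
g = np.array([0.0, 0.0, -9.81])   # gravitational acceleration near Earth, m/s^2

# Discretize an assumed 1 m x 1 m x 2 m block into small volume elements dV.
nx, ny, nz = 20, 20, 40
dV = (1.0 / nx) * (1.0 / ny) * (2.0 / nz)

# The force density is rho * g in every element; sum f dV over the whole volume.
total_force = np.zeros(3)
for _ in range(nx * ny * nz):
    total_force += rho * g * dV

mass = rho * 1.0 * 1.0 * 2.0
print(total_force)   # approximately [0, 0, -52974] N
print(mass * g)      # the same vector, since the density field is uniform

Because the density field here is uniform, the discretized integral reduces exactly to F = mg; with a spatially varying ρ(r) or a(r), the element-by-element sum is the general recipe implied by the quantitative definition above.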
Biosignature
A biosignature (sometimes called chemical fossil or molecular fossil) is any substance – such as an element, isotope, molecule, or phenomenon – that provides scientific evidence of past or present life on a planet. Measurable attributes of life include its physical or chemical structures, its use of free energy, and the production of biomass and wastes. The field of astrobiology uses biosignatures as evidence for the search for past or present extraterrestrial life. Types Biosignatures can be grouped into ten broad categories: Isotope patterns: Isotopic evidence or patterns that require biological processes. Chemistry: Chemical features that require biological activity. Organic matter: Organics formed by biological processes. Minerals: Minerals or biomineral-phases whose composition and/or morphology indicate biological activity (e.g., biomagnetite). Microscopic structures and textures: Biologically-formed cements, microtextures, microfossils, and films. Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms. Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicates life's presence. Surface reflectance features: Large-scale reflectance features due to biological pigments. Atmospheric gases: Gases formed by metabolic processes, which may be present on a planet-wide scale. Technosignatures: Signatures that indicate a technologically advanced civilization. Viability Determining whether an observed feature is a true biosignature is complex. There are three criteria that a potential biosignature must meet to be considered viable for further research: Reliability, survivability, and detectability. Reliability A biosignature must be able to dominate over all other processes that may produce similar physical, spectral, and chemical features. When investigating a potential biosignature, scientists must carefully consider all other possible origins of the biosignature in question. Many forms of life are known to mimic geochemical reactions. One of the theories on the origin of life involves molecules developing the ability to catalyse geochemical reactions to exploit the energy being released by them. These are some of the earliest known metabolisms (see methanogenesis). In such case, scientists might search for a disequilibrium in the geochemical cycle, which would point to a reaction happening more or less often than it should. A disequilibrium such as this could be interpreted as an indication of life. Survivability A biosignature must be able to last for long enough so that a probe, telescope, or human can be able to detect it. A consequence of a biological organism's use of metabolic reactions for energy is the production of metabolic waste. In addition, the structure of an organism can be preserved as a fossil and we know that some fossils on Earth are as old as 3.5 billion years. These byproducts can make excellent biosignatures since they provide direct evidence for life. However, in order to be a viable biosignature, a byproduct must subsequently remain intact so that scientists may discover it. Detectability A biosignature must be detectable with the most latest technology to be relevant in scientific investigation. This seems to be an obvious statement, however, there are many scenarios in which life may be present on a planet yet remain undetectable because of human-caused limitations. 
False positives Every possible biosignature is associated with its own set of unique false positive mechanisms or non-biological processes that can mimic the detectable feature of a biosignature. An important example is using oxygen as a biosignature. On Earth, the majority of life is centred around oxygen. It is a byproduct of photosynthesis and is subsequently used by other life forms to breathe. Oxygen is also readily detectable in spectra, with multiple bands across a relatively wide wavelength range, therefore, it makes a very good biosignature. However, finding oxygen alone in a planet's atmosphere is not enough to confirm a biosignature because of the false-positive mechanisms associated with it. One possibility is that oxygen can build up abiotically via photolysis if there is a low inventory of non-condensable gasses or if the planet loses a lot of water. Finding and distinguishing a biosignature from its potential false-positive mechanisms is one of the most complicated parts of testing for viability because it relies on human ingenuity to break an abiotic-biological degeneracy, if nature allows. False negatives Opposite to false positives, false negative biosignatures arise in a scenario where life may be present on another planet, but some processes on that planet make potential biosignatures undetectable. This is an ongoing problem and area of research in preparation for future telescopes that will be capable of observing exoplanetary atmospheres. Human limitations There are many ways in which humans may limit the viability of a potential biosignature. The resolution of a telescope becomes important when vetting certain false-positive mechanisms, and many current telescopes do not have the capabilities to observe at the resolution needed to investigate some of these. In addition, probes and telescopes are worked on by huge collaborations of scientists with varying interests. As a result, new probes and telescopes carry a variety of instruments that are a compromise to everyone's unique inputs. For a different type of scientist to detect something unrelated to biosignatures, a sacrifice may have to be made in the capability of an instrument to search for biosignatures. General examples Geomicrobiology The ancient record on Earth provides an opportunity to see what geochemical signatures are produced by microbial life and how these signatures are preserved over geologic time. Some related disciplines such as geochemistry, geobiology, and geomicrobiology often use biosignatures to determine if living organisms are or were present in a sample. These possible biosignatures include: (a) microfossils and stromatolites; (b) molecular structures (biomarkers) and isotopic compositions of carbon, nitrogen and hydrogen in organic matter; (c) multiple sulfur and oxygen isotope ratios of minerals; and (d) abundance relationships and isotopic compositions of redox-sensitive metals (e.g., Fe, Mo, Cr, and rare earth elements). For example, the particular fatty acids measured in a sample can indicate which types of bacteria and archaea live in that environment. Another example is the long-chain fatty alcohols with more than 23 atoms that are produced by planktonic bacteria. When used in this sense, geochemists often prefer the term biomarker. Another example is the presence of straight-chain lipids in the form of alkanes, alcohols, and fatty acids with 20–36 carbon atoms in soils or sediments. Peat deposits are an indication of originating from the epicuticular wax of higher plants. 
Life processes may produce a range of biosignatures such as nucleic acids, lipids, proteins, amino acids, kerogen-like material and various morphological features that are detectable in rocks and sediments. Microbes often interact with geochemical processes, leaving features in the rock record indicative of biosignatures. For example, bacterial micrometer-sized pores in carbonate rocks resemble inclusions under transmitted light, but have distinct sizes, shapes, and patterns (swirling or dendritic) and are distributed differently from common fluid inclusions. A potential biosignature is a phenomenon that may have been produced by life, but for which alternate abiotic origins may also be possible. Morphology Another possible biosignature might be morphology since the shape and size of certain objects may potentially indicate the presence of past or present life. For example, microscopic magnetite crystals in the Martian meteorite ALH84001 are one of the longest-debated of several potential biosignatures in that specimen. The possible biomineral studied in the Martian ALH84001 meteorite includes putative microbial fossils, tiny rock-like structures whose shape was a potential biosignature because it resembled known bacteria. Most scientists ultimately concluded that these were far too small to be fossilized cells. A consensus that has emerged from these discussions, and is now seen as a critical requirement, is the demand for further lines of evidence in addition to any morphological data that supports such extraordinary claims. Currently, the scientific consensus is that "morphology alone cannot be used unambiguously as a tool for primitive life detection". Interpretation of morphology is notoriously subjective, and its use alone has led to numerous errors of interpretation. Chemistry No single compound will prove life once existed. Rather, it will be distinctive patterns present in any organic compounds showing a process of selection. For example, membrane lipids left behind by degraded cells will be concentrated, have a limited size range, and comprise an even number of carbons. Similarly, life only uses left-handed amino acids. Biosignatures need not be chemical, however, and can also be suggested by a distinctive magnetic biosignature. Chemical biosignatures include any suite of complex organic compounds composed of carbon, hydrogen, and other elements or heteroatoms such as oxygen, nitrogen, and sulfur, which are found in crude oils, bitumen, petroleum source rock and eventually show simplification in molecular structure from the parent organic molecules found in all living organisms. They are complex carbon-based molecules derived from formerly living organisms. Each biomarker is quite distinctive when compared to its counterparts, as the time required for organic matter to convert to crude oil is characteristic. Most biomarkers also usually have high molecular mass. Some examples of biomarkers found in petroleum are pristane, triterpanes, steranes, phytane and porphyrin. Such petroleum biomarkers are produced via chemical synthesis using biochemical compounds as their main constituents. For instance, triterpenes are derived from biochemical compounds found on land angiosperm plants. The abundance of petroleum biomarkers in small amounts in its reservoir or source rock make it necessary to use sensitive and differential approaches to analyze the presence of those compounds. The techniques typically used include gas chromatography and mass spectrometry. 
Petroleum biomarkers are highly important in petroleum inspection as they help indicate the depositional territories and determine the geological properties of oils. For instance, they provide more details concerning their maturity and the source material. In addition to that they can also be good parameters of age, hence they are technically referred to as "chemical fossils". The ratio of pristane to phytane (pr:ph) is the geochemical factor that allows petroleum biomarkers to be successful indicators of their depositional environments. Geologists and geochemists use biomarker traces found in crude oils and their related source rock to unravel the stratigraphic origin and migration patterns of presently existing petroleum deposits. The dispersion of biomarker molecules is also quite distinctive for each type of oil and its source; hence, they display unique fingerprints. Another factor that makes petroleum biomarkers more preferable than their counterparts is that they have a high tolerance to environmental weathering and corrosion. Such biomarkers are very advantageous and often used in the detection of oil spillage in the major waterways. The same biomarkers can also be used to identify contamination in lubricant oils. However, biomarker analysis of untreated rock cuttings can be expected to produce misleading results. This is due to potential hydrocarbon contamination and biodegradation in the rock samples. Atmospheric The atmospheric properties of exoplanets are of particular importance, as atmospheres provide the most likely observables for the near future, including habitability indicators and biosignatures. Over billions of years, the processes of life on a planet would result in a mixture of chemicals unlike anything that could form in an ordinary chemical equilibrium. For example, large amounts of oxygen and small amounts of methane are generated by life on Earth. An exoplanet's color—or reflectance spectrum—can also be used as a biosignature due to the effect of pigments that are uniquely biologic in origin such as the pigments of phototrophic and photosynthetic life forms. Scientists use the Earth as an example of this when looked at from far away (see Pale Blue Dot) as a comparison to worlds observed outside of our solar system. Ultraviolet radiation on life forms could also induce biofluorescence in visible wavelengths that may be detected by the new generation of space observatories under development. Some scientists have reported methods of detecting hydrogen and methane in extraterrestrial atmospheres. Habitability indicators and biosignatures must be interpreted within a planetary and environmental context. For example, the presence of oxygen and methane together could indicate the kind of extreme thermochemical disequilibrium generated by life. Two of the top 14,000 proposed atmospheric biosignatures are dimethyl sulfide and chloromethane. An alternative biosignature is the combination of methane and carbon dioxide. The detection of phosphine in the atmosphere of Venus is being investigated as a possible biosignature. Atmospheric disequilibrium A disequilibrium in the abundance of gas species in an atmosphere can be interpreted as a biosignature. Life has greatly altered the atmosphere on Earth in a way that would be unlikely for any other processes to replicate. Therefore, a departure from equilibrium is evidence for a biosignature. 
For example, the abundance of methane in the Earth's atmosphere is orders of magnitude above the equilibrium value due to the constant methane flux that life on the surface emits. Depending on the host star, a disequilibrium in the methane abundance on another planet may indicate a biosignature. Agnostic biosignatures Because the only form of known life is that on Earth, the search for biosignatures is heavily influenced by the products that life produces on Earth. However, life that is different from life on Earth may still produce biosignatures that are detectable by humans, even though nothing is known about their specific biology. This form of biosignature is called an "agnostic biosignature" because it is independent of the form of life that produces it. It is widely agreed that all life–no matter how different it is from life on Earth–needs a source of energy to thrive. This must involve some sort of chemical disequilibrium, which can be exploited for metabolism. Geological processes are independent of life, and if scientists can constrain the geology well enough on another planet, then they know what the particular geologic equilibrium for that planet should be. A deviation from geological equilibrium can be interpreted as an atmospheric disequilibrium and agnostic biosignature. Antibiosignatures In the same way that detecting a biosignature would be a significant discovery about a planet, finding evidence that life is not present can also be an important discovery about a planet. Life relies on redox imbalances to metabolize the resources available into energy. The evidence that nothing on an earth is taking advantage of the "free lunch" available due to an observed redox imbalance is called antibiosignatures. Polyelectrolytes The Polyelectrolyte theory of the gene is a proposed generic biosignature. In 2002, Steven A. Benner and Daniel Hutter proposed that for a linear genetic biopolymer dissolved in water, such as DNA, to undergo Darwinian evolution anywhere in the universe, it must be a polyelectrolyte, a polymer containing repeating ionic charges. Benner and others proposed methods for concentrating and analyzing these polyelectrolyte genetic biopolymers on Mars, Enceladus, and Europa. Specific examples Methane on Mars The presence of methane in the atmosphere of Mars is an area of ongoing research and a highly contentious subject. Because of its tendency to be destroyed in the atmosphere by photochemistry, the presence of excess methane on a planet can indicate that there must be an active source. With life being the strongest source of methane on Earth, observing a disequilibrium in the methane abundance on another planet could be a viable biosignature. Since 2004, there have been several detections of methane in the Mars atmosphere by a variety of instruments onboard orbiters and ground-based landers on the Martian surface as well as Earth-based telescopes. These missions reported values anywhere between a 'background level' ranging between 0.24 and 0.65 parts per billion by volume (p.p.b.v.) to as much as 45 ± 10 p.p.b.v. However, recent measurements using the ACS and NOMAD instruments on board the ESA-Roscosmos ExoMars Trace Gas Orbiter have failed to detect any methane over a range of latitudes and longitudes on both Martian hemispheres. These highly sensitive instruments were able to put an upper bound on the overall methane abundance at 0.05 p.p.b.v. 
This nondetection is a major contradiction to what was previously observed with less sensitive instruments, and it will remain a strong argument in the ongoing debate over the presence of methane in the Martian atmosphere. Furthermore, current photochemical models cannot explain the presence of methane in the atmosphere of Mars or its reported rapid variations in space and time. Neither its fast appearance nor its fast disappearance can be explained yet. To rule out a biogenic origin for the methane, a future probe or lander hosting a mass spectrometer will be needed, as the isotopic proportions of carbon-12 to carbon-13 in methane could distinguish between a biogenic and a non-biogenic origin, similarly to the use of the δ13C standard for recognizing biogenic methane on Earth.

Martian atmosphere
The Martian atmosphere contains high abundances of photochemically produced CO and H2, which are reducing molecules. Mars' atmosphere is otherwise mostly oxidizing, leading to a source of untapped energy that life could exploit if it used a metabolism compatible with one or both of these reducing molecules. Because these molecules can be observed, scientists use this as evidence for an antibiosignature. Scientists have used this concept as an argument against life on Mars.

Missions inside the Solar System
Astrobiological exploration is founded upon the premise that biosignatures encountered in space will be recognizable as extraterrestrial life. The usefulness of a biosignature is determined not only by the probability of life creating it but also by the improbability of non-biological (abiotic) processes producing it. Concluding that evidence of an extraterrestrial life form (past or present) has been discovered requires proving that a possible biosignature was produced by the activities or remains of life. As with most scientific discoveries, discovery of a biosignature will require evidence building up until no other explanation exists. Possible examples of a biosignature include complex organic molecules or structures whose formation is virtually unachievable in the absence of life:
Cellular and extracellular morphologies
Biomolecules in rocks
Bio-organic molecular structures
Chirality
Biogenic minerals
Biogenic isotope patterns in minerals and organic compounds
Atmospheric gases
Photosynthetic pigments

The Viking missions to Mars
The Viking missions to Mars in the 1970s conducted the first experiments explicitly designed to look for biosignatures on another planet. Each of the two Viking landers carried three life-detection experiments which looked for signs of metabolism; however, the results were declared inconclusive.

Mars Science Laboratory
The Mars Science Laboratory mission, with its Curiosity rover, is currently assessing the potential past and present habitability of the Martian environment and attempting to detect biosignatures on the surface of Mars. Considering the MSL instrument payload package, the following classes of biosignatures are within the MSL detection window: organism morphologies (cells, body fossils, casts), biofabrics (including microbial mats), diagnostic organic molecules, isotopic signatures, evidence of biomineralization and bioalteration, spatial patterns in chemistry, and biogenic gases. The Curiosity rover targets outcrops to maximize the probability of detecting 'fossilized' organic matter preserved in sedimentary deposits.
ExoMars Orbiter The 2016 ExoMars Trace Gas Orbiter (TGO) is a Mars telecommunications orbiter and atmospheric gas analyzer mission. It delivered the Schiaparelli EDM lander and then began to settle into its science orbit to map the sources of methane on Mars and other gases, and in doing so, will help select the landing site for the Rosalind Franklin rover to be launched in 2022. The primary objective of the Rosalind Franklin rover mission is the search for biosignatures on the surface and subsurface by using a drill able to collect samples down to a depth of , away from the destructive radiation that bathes the surface. Mars 2020 Rover The Mars 2020 rover, which launched in 2020, is intended to investigate an astrobiologically relevant ancient environment on Mars, investigate its surface geological processes and history, including the assessment of its past habitability, the possibility of past life on Mars, and potential for preservation of biosignatures within accessible geological materials. In addition, it will cache the most interesting samples for possible future transport to Earth. Titan Dragonfly NASA's Dragonfly lander/aircraft concept is proposed to launch in 2025 and would seek evidence of biosignatures on the organic-rich surface and atmosphere of Titan, as well as study its possible prebiotic primordial soup. Titan is the largest moon of Saturn and is widely believed to have a large subsurface ocean consisting of a salty brine. In addition, scientists believe that Titan may have the conditions necessary to promote prebiotic chemistry, making it a prime candidate for biosignature discovery. Europa Clipper NASA's Europa Clipper probe is designed as a flyby mission to Jupiter's smallest Galilean moon, Europa. The mission launched in October 2024 and is set to reach Europa in April 2030, where it will investigate the potential for habitability on Europa. Europa is one of the best candidates for biosignature discovery in the Solar System because of the scientific consensus that it retains a subsurface ocean, with two to three times the volume of water on Earth. Evidence for this subsurface ocean includes: Voyager 1 (1979): The first close-up photos of Europa are taken. Scientists propose that a subsurface ocean could cause the tectonic-like marks on the surface. Galileo (1997): The magnetometer aboard this probe detected a subtle change in the magnetic field near Europa. This was later interpreted as a disruption in the expected magnetic field due to the current induction in a conducting layer on Europa. The composition of this conducting layer is consistent with a salty subsurface ocean. Hubble Space Telescope (2012): An image was taken of Europa which showed evidence for a plume of water vapor coming off the surface. The Europa Clipper probe includes instruments to help confirm the existence and composition of a subsurface ocean and thick icy layer. In addition, the instruments will be used to map and study surface features that may indicate tectonic activity due to a subsurface ocean. Enceladus Although there are no set plans to search for biosignatures on Saturn's sixth-largest moon, Enceladus, the prospects of biosignature discovery there are exciting enough to warrant several mission concepts that may be funded in the future. Similar to Jupiter's moon Europa, there is much evidence for a subsurface ocean to also exist on Enceladus. Plumes of water vapor were first observed in 2005 by the Cassini mission and were later determined to contain salt as well as organic compounds. 
In 2014, more evidence was presented using gravimetric measurements on Enceladus to conclude that there is in fact a large reservoir of water underneath an icy surface. Mission design concepts include: Enceladus Life Finder (ELF) Enceladus Life Signatures and Habitability Enceladus Organic Analyzer Enceladus Explorer (En-Ex) Explorer of Enceladus and Titan (E2T) Journey to Enceladus and Titan (JET) Life Investigation For Enceladus (LIFE) Testing the Habitability of Enceladus's Ocean (THEO) All of these concept missions have similar science goals: To assess the habitability of Enceladus and search for biosignatures, in line with the strategic map for exploring the ocean-world Enceladus. Searching outside of the Solar System At 4.2 light-years (1.3 parsecs, 40 trillion km, or 25 trillion miles) away from Earth, the closest potentially habitable exoplanet is Proxima Centauri b, which was discovered in 2016. This means it would take more than 18,100 years to get there if a vessel could consistently travel as fast as the Juno spacecraft (250,000 kilometers per hour or 150,000 miles per hour). It is currently not feasible to send humans or even probes to search for biosignatures outside of the Solar System. The only way to search for biosignatures outside of the Solar System is by observing exoplanets with telescopes. There have been no plausible or confirmed biosignature detections outside of the Solar System. Despite this, it is a rapidly growing field of research due to the prospects of the next generation of telescopes. The James Webb Space Telescope, which launched in December 2021, will be a promising next step in the search for biosignatures. Although its wavelength range and resolution will not be compatible with some of the more important atmospheric biosignature gas bands like oxygen, it will still be able to detect some evidence for oxygen false positive mechanisms. The new generation of ground-based 30-meter class telescopes (Thirty Meter Telescope and Extremely Large Telescope) will have the ability to take high-resolution spectra of exoplanet atmospheres at a variety of wavelengths. These telescopes will be capable of distinguishing some of the more difficult false positive mechanisms such as the abiotic buildup of oxygen via photolysis. In addition, their large collecting area will enable high angular resolution, making direct imaging studies more feasible. See also Bioindicator MERMOZ (remote detection of lifeforms) Taphonomy Technosignature References Astrobiology Astrochemistry Bioindicators Biology terminology Search for extraterrestrial intelligence Petroleum geology
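The travel-time estimate quoted above for Proxima Centauri b follows from straightforward arithmetic. As a minimal check using the distance and spacecraft speed figures given in the text, the following Python snippet reproduces the "more than 18,100 years" claim.

# Check of the quoted travel time to Proxima Centauri b at the Juno spacecraft's speed.
distance_km = 40e12          # about 40 trillion km (roughly 4.2 light-years)
speed_km_per_h = 250_000     # Juno's speed as quoted above, km/h

hours = distance_km / speed_km_per_h
years = hours / (24 * 365.25)
print(f"Travel time: about {years:,.0f} years")
# Prints roughly 18,250 years, consistent with the "more than 18,100 years" figure.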
Mouth infection
Mouth infections, also known as oral infections, are a group of infections that occur around the oral cavity. They include dental infection, dental abscess, and Ludwig's angina. Mouth infections typically originate from dental caries at the root of molars and premolars that spread to adjacent structures. In otherwise healthy patients, removing the offending tooth to allow drainage will usually resolve the infection. In cases that spread to adjacent structures or in immunocompromised patients (cancer, diabetes, transplant immunosuppression), surgical drainage and systemic antibiotics may be required in addition to tooth extraction. Since bacteria that normally reside in the oral cavity cause mouth infections, proper dental hygiene can prevent most cases of infection. As such, mouth infections are more common in populations with poor access to dental care (homeless, uninsured, etc.) or populations with health-related behaviors that damage one's teeth and oral mucosa (tobacco, methamphetamine, etc.). This is a common problem, representing nearly 36% of all encounters within the emergency department related to dental conditions. Patients with mouth infections usually complain of pain at the affected tooth with or without fevers. The inability to fully open one's mouth, also known as trismus, suggests that the infection has spread to spaces between the jaw and muscles of mastication (masseter, medial pterygoid, and temporalis). If an abscess has formed, swelling, redness, and tenderness will be present. Depending on the location of the abscess, it will be visible intraorally, extraorally, or both. Severe infections with significant swelling may cause airway obstruction by shifting/enlarging soft tissue structures (floor of mouth, tongue, etc.) or by causing dysphagia that prevents adequate clearance of saliva. This is a medical emergency and may require endonasal intubation or tracheotomy to protect one's airway. The development of stridor, shortness of breath, and pooling oral secretions may indicate impending airway compromise due to a worsening mouth infection. Other rare but dangerous complications include osteomyelitis, cavernous sinus thrombosis, and deep neck space infection. Signs and symptoms Dental pain and swelling are the two hallmark symptoms of a mouth infection. Fever is sometimes present, but not as frequently as tooth pain or persistent swelling. The swelling will occur at the tooth root or at the spaces occupied by the infection. Other symptoms that usually accompany an infection like increased heart rate, low energy, chills, and sweating may also be present. If infection spreads to the space between the muscles of mastication, then trismus, the inability to completely open one's mouth, will also be present. Severe mouth infections become dangerous when breathing or swallowing are impaired. Since the primary and secondary spaces extend towards the back of the throat, significant swelling can lead to airway obstruction. Signs and symptoms of airway obstruction are difficulty breathing, stridor, low oxygen saturation measured by a pulse oximeter, blue discoloration of the skin or lips, and stridor. Similarly, infections that spread to adjacent structures may also impair swallowing or cause significant pain with swallowing. Individuals with long-standing infections may lose significant weight because pain blunts their desire and impairs their ability to eat food. 
When infections affect swallowing, one may not be able to swallow saliva and other oral secretions as quickly as they are produced, causing drooling. Pooling secretions at the back of the throat increases the likelihood of the saliva traveling down the windpipe and into the lungs instead of through the esophagus and into the stomach. This process of breathing in material that should be swallowed is known as aspiration, and can lead to more infections like pneumonia. Complications The complications that arise from mouth infections depend on how long the infection has persisted and where the infection has spread. The three main, albeit rare, complications of mouth infections are osteomyelitis, cavernous sinus thrombosis, and deep neck space infections. Osteomyelitis Mouth infections that persist for months have the potential to cause a chronic infection of the surrounding bone, also known as osteomyelitis. Cavernous sinus thrombosis Although rare, mouth infections may also spread through the nasal and facial veins that drain into a reservoir of deoxygenated blood called the cavernous sinus. Once the infection has spread to the cavernous sinus, it can compress important nerves (cranial nerves III, IV, V1, V2, and VI) within this space and obstruct venous drainage from the upper face. The main symptoms are swelling and pain of both eyes, fever, changes in vision, and headaches. On exam, redness and decreased range of motion of the eyes are present in about 90% of cases. Treatment includes antibiotics and antithrombotics to treat the infection and blood clot. This is a serious complication that leads to death or serious morbidity if not diagnosed within the first week of symptoms. Deep neck space infection Deep neck space infections are mouth infections that have spread to the spaces between the connective tissue that separates the compartments of the neck, also known as the deep cervical fascia. When an infection involves the deep neck spaces, patients may report a wide variety of symptoms, including fever, pain with swallowing, inability to swallow, confusion, reduced mobility of the neck, chest pain, shortness of breath, and many other alarming symptoms. If the infection remains untreated or undertreated, then even more serious complications can occur, such as descending necrotizing mediastinitis (infection of the soft tissues that encase the heart) and cervical necrotizing fasciitis (infection of the soft tissues along the throat and cervical spine). The mortality rate of mouth infections that affect the deep neck space and lead to necrotizing mediastinitis or necrotizing fasciitis is high, at around 40–60%. Causes Mouth infections are most commonly caused by an overgrowth of bacteria that normally populate the oral cavity. In a healthy adult, billions of bacteria, viruses, and fungi reside within the oral cavity and represent more than 500 different species. They are collectively known as the oral microbiome. When healthy, the oral microbiome is in dynamic equilibrium such that no one bacterium or group of organisms dominates. However, certain situations, like a decaying tooth root or a penetrating puncture wound from a fish bone, can generate an environment that disrupts the normal oral microbiome and promotes the growth of pathogenic bacteria. Although sore throats (pharyngitis) are usually caused by viruses and oral yeast infections (candidiasis) are caused by fungi, most mouth infections that lead to swelling and abscesses are caused by bacteria. 
The bacteria of the oral microbiome consist of a wide variety of gram positive cocci and rods, gram negative cocci and rods, obligate anaerobes, and facultative anaerobes. The most common bacteria that cause mouth infections are Streptococcus species. Poor dental hygiene promotes the accumulation of these bacteria at the tooth root, eventually causing a cavity or dental caries. The decaying tooth root provides bacteria with an enclosed environment with low oxygen content. Consequently, the obligate and facultative anaerobes present within the oral cavity flourish and outcompete the other bacteria at the site of tooth decay, causing the dental caries to escalate into a mouth infection. The corrosive enzymes released by the anaerobes erode the surrounding bone and enable the infection to invade surrounding structures. Given the natural history of a mouth infection, the vast majority of clinically treated oral infections are polymicrobial, or caused by multiple different species of bacteria at the same time. Until the source of the infection is controlled with some form of drainage and antibiotics, a mouth infection will likely not resolve on its own. Anatomy of mouth The anatomy of the oral cavity affects the progression of infection and dictates the severity of disease. In other words, where the infection starts will determine the pattern of its spread and its catastrophic potential based on the surrounding anatomy. Oral cavity The oral cavity serves as the starting point of the digestive tract and facilitates breathing as a channel for airflow to the lungs. The borders of the oral cavity include the lips in the front, cheeks on the side, mylohyoid muscle/associated soft tissue below, soft and hard palate above, and the oropharynx at the back. The most important structures within the mouth include teeth for chewing and the tongue for speech and assistance with swallowing. The oral cavity is lined with specialized mucosa containing salivary glands that moisten food, break down sugars, and humidify air before it enters the lungs. The roots of the upper teeth are anchored into a bone called the maxilla, more commonly known as the upper jaw, at ridges called the alveolar process. The roots of the lower teeth are anchored into a bone called the mandible, more commonly known as the jaw, at their respective alveolar processes. The surface of the oral cavity between the teeth and the inner side of the lips is called the oral vestibule. Surrounding the oral cavity, there are many different muscles that facilitate chewing, opening the mouth, and swallowing. Each muscle, group of muscles, or separate anatomical compartment is encased in a thin fibrous layer of connective tissue called fascia. Normally, the fascia of adjacent structures are in direct contact with each other. However, air or pus can occupy the space between adjacent fascia, known as fascial planes, and collect over time. As the air pocket or pus enlarges within the fascial planes, the structures surrounding the abnormality can become compressed or shifted out of their normal place. These phenomena of compression and deviation due to a growing infection/air pocket drive the progression of disease into potentially life-threatening situations. Spread of oral infection Mouth infections spread from the root of the infected tooth through the jaw bones and into potential spaces between the fascial planes of surrounding soft tissue, eventually forming an abscess. 
These potential spaces are usually empty, but can expand and form a pocket of pus when an infection drains into them. The potential spaces are categorized into primary and secondary spaces. Primary space A primary space is a potential space between adjacent soft tissue structures that communicate directly with the infected tooth through the eroded bone. In the upper jaw (maxilla), the primary spaces are the buccal and vestibular spaces. The most clinically significant structures that dictate the pattern of infectious spread are the buccinator muscle and the maxillary sinus. Infection that originates above the buccinator's attachment point with the maxilla will spread laterally into the buccal space. Infection that begins below the buccinator's attachment point with the maxilla will spread inferiorly into the vestibular space. Rarely, the infection will spread upwards into the maxillary sinus and cause a sinusitis. In the lower jaw (mandible), the primary spaces are the sublingual, submandibular, and submental spaces. The location of the mylohyoid dictates the spread of infection. It attaches to the mandible along a line that separates the sublingual and submandibular space. If an infection begins above the mylohyoid's point of attachment, then the infection will spread to the sublingual space. If the infection originates below the mylohyoid's point of attachment, then the infection will spread to the submandibular space. The submental space is located behind the mentalis muscles, and infections spread to this space when the oral infection begins at the roots of the mandibular incisors because they are so long. Secondary space Primary spaces are the result of direct spread from the infected tooth, while secondary spaces are the result of spread from primary spaces. In the oral cavity, mouth infections from primary spaces can spread to fascial planes between the muscles of mastication (masseter, medial pterygoid, and temporalis) or within the deep neck spaces. The space between the muscles of mastication is collectively known as the masticator space and they are all connected with each other at the back of the throat. Therefore, when an infection spreads to the masticator space, significant swelling, tenderness, and trismus are usually present. Deep neck spaces, another set of secondary spaces, are located between fascial planes that separate the deeper structures of the neck into discrete compartments. They are important because they begin at the back of the throat and depending on the space, can track downwards to the chest cavity or encase the windpipe. Infections that involve the deep neck spaces are rare, but must be treated immediately with surgery to washout the infection because they can compromise the airway and lead to fatal complications like mediastinitis. Diagnosis Mouth infections are usually diagnosed on history and physical exam in the dental office or at a clinic visit with an otolaryngologist. Swelling within the oral cavity or cheeks, along with a history of progressively worsening tooth pain and fevers, is usually enough evidence to support the diagnosis of a mouth infection. Depending on the severity of the infection, further tests may include x-rays and CT scans of the mouth to better characterize the location and extent of the infection. If the infection is drained with a needle or scalpel, then a swab of the infection is collected to identify the microbes present in the abscess and to determine their respective susceptibilities to antibiotics. 
Other lab tests may include a complete blood count with differential, serum electrolyte concentrations, and other routine assays for an infectious workup. Treatment Although mouth infections can present in many different ways, they are managed according to the same guiding principles - protect the airway, drain the abscess, and treat with antibiotics if necessary. Securing a patient's airway is the most important part of initial treatment because loss of airway is emergently life-threatening. Inflammation and large abscesses, particularly those within the floor of the mouth, may block airflow into the lungs. To pre-emptively protect a patient's airway, placing flexible plastic tubing through the nasal cavity and into the trachea, called endonasal intubation, is typically the first option. It can be performed with or without direct visualization with laryngoscopy, a small camera with a live video feed to ensure the tubing is placed in the proper location. If attempts to intubate through the nasal cavity are unsuccessful or if the airway must be re-established quickly, then an incision can be made through the front of the neck to gain access into the trachea, also known as a tracheotomy. After stabilizing the patient's airway, extracting the infected tooth will typically promote adequate drainage and the infection will resolve shortly thereafter. If the infection involves multiple primary spaces or any of the secondary spaces previously mentioned, then incision and drainage with culture-guided antibiotics may be indicated. Since most mouth infections are polymicrobial, penicillin is an appropriate initial choice of antibiotic because of its activity against Streptococcus and gram negative anaerobes. If the patient has a penicillin allergy, then clindamycin with or without metronidazole are also effective empiric antibiotic regimens. Additionally, empiric antibiotics should be initiated in patients with a compromised immune system, like those on immunosuppressive medications, with diabetes, or with cancer. In situations where the infection worsens or fails to improve after multiple days, washing out the wound in the operating room should control the source of infection and promote healing. References Diseases of oral cavity, salivary glands and jaws
0.761539
0.979639
0.746033
Anatomical variation
An anatomical variation, anatomical variant, or anatomical variability is a presentation of body structure with morphological features different from those that are typically described in the majority of individuals. Anatomical variations are categorized into three types: morphometric (size or shape), consistency (present or absent), and spatial (proximal/distal or right/left). Variations are seen as normal in the sense that they are found consistently among different individuals, are mostly without symptoms, and are termed anatomical variations rather than abnormalities. Anatomical variations are mainly caused by genetics and may vary considerably between different populations. The rate of variation considerably differs between single organs, particularly in muscles. Knowledge of anatomical variations is important in order to distinguish them from pathological conditions. A very early paper, published in 1898, presented anatomical variations as having a wide range and significance, and before the use of X-ray technology, anatomical variations were mostly found only in cadaver studies. The use of imaging techniques has defined many such variations. Some variations, such as polydactyly (having more than the usual number of digits), are found in different species. Variants of structures Muscles Kopsch gave a detailed listing of muscle variations. These included the absence of muscles; muscles that were doubled; muscles that were divided into two or more parts; an increase or decrease in the origin or insertion of the muscle; and joining to adjacent organs. The palmaris longus muscle in the forearm is sometimes absent, as is the plantaris muscle in the leg. The sternalis muscle is a variant that lies in front of the pectoralis major and may show up on a mammogram. Bones Usually there are five lumbar vertebrae but sometimes there are six, and sometimes there are four. Joints A discoid meniscus is a rare thickened lateral meniscus in the knee joint that can sometimes be swollen and painful. Organs The lungs are subject to anatomical variations. Clinical significance Accessory small bones called ossicles may be mistaken for avulsion fractures. See also Supernumerary body part Visible difference References External links Atlas of human anatomical variations Anatomy
0.762053
0.978822
0.745915
Clinical physiology
Clinical physiology is an academic discipline within the medical sciences and a clinical medical specialty for physicians in the health care systems of Sweden, Denmark and Finland. Clinical physiology is characterized as a branch of physiology that uses a functional approach to understand the pathophysiology of a disease. Overview As a specialty for medical doctors, clinical physiology is a diagnostic specialty in which patients are subjected to specialized tests for the functions of the heart, blood vessels, lungs, kidneys and gastrointestinal tract, and other organs. Testing methods include evaluation of electrical activity (e.g. electrocardiogram of the heart), blood pressure (e.g. ankle brachial pressure index), and air flow (e.g. pulmonary function testing using spirometry). In addition, clinical physiologists measure movements, velocities, and metabolic processes through imaging techniques such as ultrasound, echocardiography, magnetic resonance imaging (MRI), x-ray computed tomography (CT), and nuclear medicine scanners (e.g. single photon emission computed tomography (SPECT) and positron emission tomography (PET) with and without CT or MRI). History The field of clinical physiology was originally founded by Professor Torgny Sjöstrand in Sweden, and it has since spread to hospitals and academic environments around the world. Sjöstrand was the first to establish departments for clinical physiology separate from those of physiology, during his work at the Karolinska Hospital in Stockholm. Along with Sjöstrand, another influential name in clinical physiology was P. K. Anokhin. Anokhin contributed heavily to the field, applying his theory of functional systems to clinical problems in his patients. In Sweden, clinical physiology was originally a discipline in its own right; however, between 2008 and 2015 it was categorized as a sub-discipline of radiology. For this reason, those pursuing a career in clinical physiology had to first become registered and certified radiologists before becoming clinical physiologists. Since 2015, clinical physiology has once again been a separate discipline, independent of radiology. Role Human physiology is the study of bodily functions. Clinical physiology examinations typically involve assessments of such functions as opposed to assessments of structures and anatomy. The specialty encompasses the development of new physiological tests for medical diagnostics. Using equipment to measure, monitor and record patients' physiological function is valuable in many hospitals, and it helps doctors reach a correct diagnosis. Some clinical physiology departments perform tests from related medical specialties including nuclear medicine, clinical neurophysiology, and radiology. In the health care systems of countries that lack this specialty, the tests performed in clinical physiology are often performed by the various organ-specific specialties in internal medicine, such as cardiology, pulmonology, nephrology, and others. In Australia, the United Kingdom, and many other commonwealth and European countries, clinical physiology is not a medical specialty for physicians. It is instead a non-medical allied health profession; practitioners (scientists, physiologists or technologists) may practice as a cardiac scientist, vascular scientist, respiratory scientist, sleep scientist, or in Ophthalmic and Vision Science as an Ophthalmic Science Practitioner (UK). 
These professionals also aid in the diagnosis of disease and manage patients, with an emphasis on understanding physiological and pathophysiological pathways. Disciplines within clinical physiology field include audiologists, cardiac physiologists, gastro-intestinal physiologists, neurophysiologists, respiratory physiologists, and sleep physiologists. References External links Scandinavian Society of Clinical Physiology and Nuclear Medicine (SSCPNM) http://www.sscpnm.com/ The official journal of the SSCPNM: Clinical Physiology and Functional Imaging http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1475-097X Physiology Academic disciplines Medical specialties
0.761979
0.978886
0.74589
Viremia
Viremia is a medical condition where viruses enter the bloodstream and hence have access to the rest of the body. It is similar to bacteremia, a condition where bacteria enter the bloodstream. The name comes from combining the word "virus" with the Greek word for "blood" (haima). It usually lasts for 4 to 5 days in the primary condition. Primary versus secondary Primary viremia refers to the initial spread of virus in the blood from the first site of infection. Secondary viremia occurs when primary viremia has resulted in infection of additional tissues via bloodstream, in which the virus has replicated and once more entered the circulation. Usually secondary viremia results in higher viral shedding and viral loads within the bloodstream due to the possibility that the virus is able to reach its natural host cell from the bloodstream and replicate more efficiently than the initial site. An excellent example to profile this distinction is the rabies virus. Usually the virus will replicate briefly within the first site of infection, within the muscle tissues. Viral replication then leads to viremia and the virus spreads to its secondary site of infection, the central nervous system (CNS). Upon infection of the CNS, secondary viremia results and symptoms usually begin. Vaccination at this point is useless, as the spread to the brain is unstoppable. Vaccination must be done before secondary viremia takes place for the individual to avoid brain damage or death. Active versus passive Active viremia is caused by the replication of viruses which results in viruses being introduced into the bloodstream. Examples include the measles, in which primary viremia occurs in the epithelial lining of the respiratory tract before replicating and budding out of the cell basal layer (viral shedding), resulting in viruses budding into capillaries and blood vessels. Passive viremia is the introduction of viruses in the bloodstream without the need of active viral replication. Examples include direct inoculation from mosquitoes, through physical breaches or via blood transfusions. See also Septicemia References External links Virology Abnormal clinical and laboratory findings for blood
0.760233
0.981096
0.745861
Cancer and nausea
Cancer and nausea are associated in about fifty percent of people affected by cancer. This may be as a result of the cancer itself, or as an effect of the treatment such as chemotherapy, radiation therapy, or other medication such as opiates used for pain relief. About 70–80% of people undergoing chemotherapy experience nausea or vomiting. Nausea and vomiting may also occur in people not receiving treatment, often as a result of the disease involving the gastrointestinal tract, electrolyte imbalance, or as a result of anxiety. Nausea and vomiting may be experienced as the most unpleasant side effects of cytotoxic drugs and may result in patients delaying or refusing further radiotherapy or chemotherapy. The strategies of management or therapy of nausea and vomiting depend on the underlying causes. Medical treatments or conditions associated with a high risk of nausea and/or vomiting include chemotherapy, radiotherapy, and malignant bowel obstruction. Anticipatory nausea and vomiting may also occur. Nausea and vomiting may lead to further medical conditions and complications including: dehydration, electrolyte imbalance, malnutrition, and a decrease in quality of life. Nausea may be defined as an unpleasant sensation of the need to vomit. It may be accompanied by symptoms such as salivation, feeling faint, and a fast heart rate. Vomiting is the forceful ejection of stomach contents through the mouth. Although nausea and vomiting are closely related, some patients experience one symptom without the other and it may be easier to eliminate vomiting than nausea. The vomiting reflex (also called emesis) is thought to have evolved in many animal species as a protective mechanism against ingested toxins. In humans, the vomiting response may be preceded by an unpleasant sensation termed nausea, but nausea may also occur without vomiting. The central nervous system is the primary site where a number of emetic stimuli (input) are received, processed and efferent signals (output) are generated as a response and sent to various effector organs or tissues, leading to processes that eventually end in vomiting. The detection of emetic stimuli, the central processing by the brain and the resulting response by organs and tissues that lead to nausea and vomiting are referred to as the emetic pathway or emetic arch. Causes Some medical conditions that arise as a result of cancer or as a complication of its treatment are known to be associated with a high risk of nausea and/or vomiting. These include malignant bowel obstruction (MBO), chemotherapy-induced nausea and vomiting (CINV), anticipatory nausea and vomiting (ANV), and radiotherapy-induced nausea and vomiting (RINV). Malignant bowel obstruction Malignant bowel obstruction (MBO) of the gastrointestinal tract is a common complication of advanced cancer, especially in patients with bowel or gynaecological cancer. These include colorectal cancer, ovarian cancer, breast cancer, and melanoma. Three percent of all advanced cancers lead to malignant bowel obstruction, and 25 to 50 percent of patients with ovarian cancer experience at least one episode of malignant bowel obstruction. The mechanisms of action that may lead to nausea in MBO include mechanical compression of the gut, motility disorders, gastrointestinal secretion accumulation, decreased gastrointestinal absorption, and inflammation. Bowel obstruction and the resulting nausea may also occur as a result of anti-cancer therapy such as radiation, or adhesion after surgery. 
Impaired gastric emptying as a result of bowel obstruction may not respond to drugs alone, and surgical intervention is sometimes the only means of symptom relief. Some constipating drugs used in cancer therapy such as opioids may cause a slowing of peristalsis of the gut, which may lead to a functional bowel obstruction. Chemotherapy Chemotherapy-induced nausea and vomiting (CINV) is one of the most feared side effects of chemotherapy and is associated with a significant deterioration in quality of life. CINV is classified into three categories: early onset (occurring within 24 hours of initial exposure to chemotherapy) delayed onset (occurring 24 hours to several days after treatment) anticipatory (triggered by taste, odor, sight, thoughts, or anxiety) Risk factors that predict the occurrence and severity of CINV include sex and age, with females, younger people and people who have a high pretreatment expectation of nausea being at a higher risk, while people with a history of high alcohol consumption being at a lower risk. Other person-related variables, such as chemotherapy dose, rate and route of administration, hydration status, prior history of CINV, emesis during pregnancy or motion sickness, tumour burden, concomitant medication and medical conditions also play a role in the degree of CINV experienced by a person. By far the most important factor which determines the degree of CINV is the emetogenic potential of the chemotherapeutic agents used. Chemotherapeutic agents are classified into four groups according to their degree of emetogenicity: high, moderate, low and minimal. The European Society of Medical Oncology (ESMO) and the Multinational Association of Supportive Care in Cancer (MASCC) in 2010 as well as the American Society of Clinical Oncology (ASCO) (2011) recommend a prophylaxis to prevent acute vomiting and nausea following chemotherapy with high emetic risk drugs by using a three-drug regimen including a 5-HT3 receptor antagonist, dexamethasone and aprepitant (a neurokinin-1 antagonist) given before chemotherapy. Anticipatory A common consequence of cancer treatment is the development of anticipatory nausea and vomiting (ANV). This kind of nausea is usually elicited by the re-exposure of the patients to the clinical context they need to attend to be treated. Approximately 20% of people undergoing chemotherapy are reported to develop anticipatory nausea and vomiting. Once developed, ANV is difficult to control by pharmacological means. Benzodiazepines are the only drugs that have been found to reduce the occurrence of ANV but their efficacy decreases with time. Recently, clinical trials suggests that cannabidiolic acid suppresses conditioned gaping (ANV) in shrews. Because ANV is widely believed to be a learned response, the best approach is to avoid the development of ANV by adequate prophylaxis and treatment of acute vomiting and nausea from the first exposure to therapy. Behavioral treatment techniques, such as systematic desensitization, progressive muscle relaxation, and hypnosis have been shown to be effective against ANV. Radiation therapy The incidence and severity of radiation therapy-induced nausea and vomiting (RINV) depends on a number of factors including therapy related factors such as irradiated site, single and total dose, fractionation, irradiated volume and radiotherapy techniques. 
Also involved are person related factors such as gender, general health of the person, age, concurrent or recent chemotherapy, alcohol consumption, previous experience of nausea, vomiting, anxiety as well as the tumor stage. The emetogenic potential of radiotherapy is classified into high, moderate, low and minimal risk depending on the site of irradiation: High risk: total body irradiation (TBI) is associated with a high risk of RINV Moderate risk: radiation of the upper abdomen, half body irradiation and upper body irradiation Low risk: radiation of the cranium, spine, head and neck, lower thorax region and pelvis Minimal risk: radiation of extremities and breast Pathophysiology Nausea and vomiting may have a number of causes in people with cancer. While more than one cause may exist in the same person stimulating symptoms via more than one pathway, the actual cause of nausea and vomiting may be unknown in some people. The underlying causes of nausea and vomiting may in some cases not be directly related to the cancer. The causes may be categorized as disease-related and treatment-related. The stimuli which lead to emesis are received and processed in the brain. It is thought that a number of loosely organized neuronal networks within the medulla oblongata probably interact to coordinate the emetic reflex. Some of the brain stem nuclei which have been identified as important in the coordination of the emetic reflex include the parvicellular reticular formation, the Bötzinger complex and the nucleus tractus solitarii. The nuclei coordinating emesis had formerly been referred to as the vomiting complex, but it is no longer thought to represent a single anatomical structure. Efferent outputs which transmit the information from the brain leading to the motoric response of retching and vomiting include vagal efferents to the esophagus, stomach and intestine as well as spinal somatomotor neurones to the abdominal muscles and phrenic motor neurones (C3–C5) to the diaphragm. Autonomic efferents also supply the heart and airways (vagus), salivary glands (chorda tympani) and skin and are responsible for many of the prodromal signs such as salivation and skin pallor. Nausea and vomiting may be initiated by various stimuli, through different neuronal pathways. A stimulus may act on more than one pathway. Stimuli and pathways include: Toxic substances in the gastrointestinal tract: toxic substances (including drugs which are used in the treatment of cancer) in the lumen of the gastrointestinal tract stimulate vagal afferent nerves in the gut mucosa which communicate to the nucleus tractus solitarii and the area postrema to initiate vomiting and nausea. A number of receptors on the terminal ends of the vagal afferent nerves have been identified as being involved in this process, including the 5-hydroxytryptamine3 (5-HT3), neurokinin-1, and cholecystokinin-1 receptors. Various local mediators located in enterochromaffin cells of the gut mucosa play a role in stimulating these receptors. Of these 5-hydroxytryptamine seems to play the dominating role. This pathway has been postulated to be the mechanism by which some anti-cancer drugs such as cisplatin induce emesis. Toxic substances in the blood: toxic substances which have been absorbed into the blood (including cytostatics) or endogenous toxic (waste) material released by body or cancer cells into the blood can be detected directly in the area postrema of the brain and trigger the emetic reflex. 
The area postrema is a structure located on the floor of the fourth ventricle around which the blood–brain barrier is permeable, thus allowing for the detection of humoral or pharmacological stimuli in the blood or cerebrospinal fluid. This structure contains receptors which form a chemoreceptor trigger zone. Some of the receptors and neurotransmitters involved in the regulation of this emetic pathway include dopamine type D2, serotonin types 2–4 (5HT2–4), histamine type 1(H1), and acetylcholine (muscarinic receptors type 1 to 5, M1–5). Some other receptors such as substance P, cannabinoid type 1 (CB1) and the endogenous opioids may also be involved. Pathological conditions of the gastrointestinal tract: diseases and pathological conditions of the GIT may also lead to nausea and vomiting through direct or indirect stimulation of the above named pathways. Such conditions may include malignant bowel obstruction, hypertrophic pyloric stenosis and gastritis. Pathological conditions in other organs which are linked to the above named emetic pathways may also lead to nausea and vomiting, such as the myocardial infarction (through stimulation of cardiac vagal afferents) and kidney failure. Stimulation of the central nervous system: certain stimuli of the central nervous system may induce the emetic reflex. These include fear, anticipation, brain trauma and increased intracranial pressure. Of particular relevance to cancer patients in this regard are the stimuli of fear and anticipation. Evidence suggests that cancer patients may develop the side effects of nausea and vomiting in anticipation of chemotherapy. In some patients, re-exposure to cues such as smell, sounds or sight associated with the clinic or previous treatment may evoke anticipatory nausea and vomiting. Pathological conditions of the vestibular system: a disturbance of the vestibular system such as in motion sickness or Ménière's disease can induce the emetic reflex. Such disturbances of the vestibular system could also be cancer related such as in cerebral or vestibular secondaries (metastasis), or cancer treatment related such as the use of opioids. Patient Reported Outcomes Patient reported outcomes (PROs) allow patients to voice their perspective on health and behavioral status through self administered questionnaires. Cancer and nausea have been measured with the Patient Reported Outcomes Measurement System (PROMIS) using surveys with questions such as "in the last 7 days, how severe was your nausea?". PROs can aid clinicians in tailoring nausea treatment specific to variations in high or low emetogenic chemotherapy from patient to patient. One notable benefit of PROs is that surveys can be administered electronically, meaning patients who are too sick to go to the doctor can do it from home. Limitations: While helpful, PROs are subject to bias since they are reported after the symptoms are experienced. Errors in patients' memories can influence their PROs compared to if they had been asked while experiencing nausea rather than afterwards. This can lead to ratings which may not accurately reflect how patients perceive their nausea at the moment. Management The strategies of management or prevention of nausea and vomiting depend on the underlying causes, whether they are reversible or treatable, stage of the illness, the person's prognosis and other person specific factors. Anti emetic drugs are chosen according to previous effectiveness and side effects. 
Medication Drugs that are used in the prophylaxis and therapy of nausea and vomiting in cancer include: 5-HT3 antagonists: 5-HT3 antagonists produce their anti emetic effect by blocking of the amplifying effect of serotonin on peripheral and central 5-HT3 receptors located on the various vagal afferent nerve endings and the chemoreceptor trigger zone. They are effective in the treatment and prophylaxis of CINV as well as in malignant bowel obstruction and kidney failure which are associated with elevated serotonin levels. These substances include dolasetron, granisetron, ondansetron, palonosetron, and tropisetron. They are often used in combination with other anti emetic drugs in people with high risk of emesis or nausea and are recommended as the most effective anti emetics in the prophylaxis of acute CINV. Corticosteroids: such as dexamethasone are used in the treatment of emesis as a result of chemotherapy, malignant bowel obstruction, raised intracranial pressure and in the chronic nausea of advanced cancer, though their exact mode of action remain unclear. Dexamethasone is recommended for use in the acute prevention of highly, moderately, and low emetogenic chemotherapy and in combination with aprepitant for the prevention of delayed emesis in highly emetogenic chemotherapy. NK1 receptor antagonists: such as Aprepitant block the NK1 receptor in the brainstem and gastrointestinal tract. Their antiemetic activity when added to a 5-HT3 receptor antagonist plus dexamethasone has been shown in several phase II double-blind studies. Cannabinoids: are a useful adjunct to modern anti emetic therapy in selected patients. They show a combination of weak anti emetic efficacy with potentially beneficial side effects such as sedation and euphoria. However, their usefulness is generally limited by the high incidence of toxic effects, such as dizziness, dysphoria, and hallucinations. Some studies have shown that cannabinoids are slightly better than conventional anti emetics such as metoclopramide, phenothiazines and haloperidol in the prevention of nausea and vomiting. Cannabinoids are an option in affected people who are intolerant or refractory to 5-HT3 antagonists or steroids and aprepitant as well as in refractory nausea and vomiting and rescue anti emetic therapy. Prokinetic agents such as metoclopramide Dopamine receptor antagonists such as phenothiazines (prochlorperazine and chlorpromazine), haloperidol, olanzapine, and levomepromazine, block D2 receptors found in the chemoreceptor trigger zone Antihistaminic agents like promethazine block H1 receptors in the vomiting center of the medulla, the vestibular nucleus, and the chemoreceptor trigger zone Anticholinergic agents such as scopolamine (hyoscine) are used as anti emetics as they relax smooth muscle and reduce gastrointestinal secretions by blockade of muscarinic receptors. They may be useful in the management of terminal bowel obstruction Somatostatin analoga such as octreotide are used for the palliation of malignant bowel obstruction, especially when there is high output vomiting not responding to other measures Cannabidiol is used as a palliative treatment (non-curative symptomatic treatment) and improves numerous symptoms that frequently appear during chemotherapy like nausea, vomiting, loss of appetite, physical pain or insomnia. 
Due to the large number of cannabinoid receptors (CB1 and CB2) distributed throughout the gastrointestinal (GI) tract, these substances can help to control and treat many GI diseases where vomiting and nausea are frequent. Side Effects Side effects of antiemetic drugs are relatively mild. Depending on the type of drug and dosage prescribed, common side effects may include: headache, constipation, diarrhea, insomnia, agitation, acne, weight loss/weight gain, dizziness, or drowsiness. In addition, although cannabis has proven extremely beneficial for emetic relief, a small percentage of patients opting to use medical cannabis have shown to become dependent on it after treatment concludes. Nonmedical Interventions Other non-drug measures may include: Diet: Small palatable meals are normally tolerated better than big meals in people affected by nausea and vomiting in cancer. Carbohydrate meals are better tolerated than spicy, fatty and sweet foods. Cool, fizzy drinks are found to be more palatable than still or hot drinks. The avoidance of environmental stimuli, such as sights, sounds, or smells that may initiate nausea. Patients who have become conditioned to feel nauseas after chemotherapy by treatment setting, sights, sounds, or smells associated with chemotherapy can be treated (albeit with varying levels of effectiveness) by introducing a new flavor or odor to unpair the conditioned stimuli. Instructional placebo interventions have shown varying outcomes, but experiments have not found any clinically significant changes in nausea levels. Behavioral approaches, such as distraction, relaxation training and cognitive behavioural therapy, yoga, and guided imagery may also be useful. Alternative medicine: acupuncture, and ginger have been shown to have some anti emetic effects on chemotherapy-induced emesis and anticipatory nausea, but have not been evaluated in the nausea of far advanced disease. Additionally, the effectiveness of ginger may be dampened by perceptions of it being a weak antiemetic. Palliative surgery Palliative care is the active care of people with advanced, progressive illness such as cancer. The World Health Organization (WHO) defines it as an approach that improves the quality of life of patients and their families facing the problems associated with life-threatening illness, through the prevention and relief of suffering by means of early identification and impeccable assessment and treatment of pain and other problems (such as nausea or vomiting), physical, psychosocial, and spiritual. Sometimes it is possible or necessary to provide relief for cancer-caused nausea and vomiting through palliative surgical intervention. Surgery is however not routinely carried out when there are poor prognostic criteria for surgical intervention such as intra-abdominal carcinomatosis, poor performance status and massive ascites. The surgical approach proves beneficial in affected people with operable lesions, a life expectancy greater than two months and good performance status. Often a malignant bowel obstruction is the cause of the symptoms in which case the purpose of palliative surgery is to relieve the symptoms of bowel obstruction by means of several procedures including: Stoma formation Bypass of the obstruction Resection of bowel segments Placement of stents. Percutaneous endoscopic gastrostomy (PEG) tube placement to enable gastric venting. 
Gastric venting through a nasogastric tube is a semi-invasive possibility for palliation of nausea and vomiting due to gastrointestinal obstruction in people with abdominal malignancies who decline surgery or where surgery may not be indicated. However, nasogastric tubes are not recommended for long-term use because of the high risk of displacement, poor tolerance, restriction of daily routine activities, and interference with coughing and clearing pulmonary secretions, and because they can be cosmetically unacceptable and confining. Complications of nasogastric tubes include aspiration, hemorrhage, gastric erosion, necrosis, sinusitis and otitis. Epidemiology In 2008, 12.7 million new cancer cases and 7.6 million cancer deaths were estimated worldwide. Nausea or vomiting occurs in 50–70% of people with advanced cancer. 50–80% of people undergoing radiotherapy experience nausea and/or vomiting, depending on the site of irradiation. Anticipatory nausea and vomiting is experienced by approximately 20–30% of people undergoing chemotherapy. Chemotherapy-induced nausea and vomiting resulting from treatment with highly emetogenic cytotoxic drugs can be prevented or effectively treated in 70 to 80% of affected people. Financial implications Individual: CINV has been shown to place a heavy financial burden on cancer patients. These costs may discourage patients from seeking treatment or purchasing medication despite nausea being one of the most debilitating side effects of chemotherapy. In addition to hospital fees, studies have found that costs incurred for prescription antiemetics averaged between $100 and $1,400 per chemotherapy cycle depending on the drugs prescribed. Healthcare system: In addition to patient costs, CINV also takes a heavy financial toll on the healthcare system at large. General cancer symptom management has been shown to make up 5% of annual hospital expenses, with the cost of CINV changing with antiemetic treatment. It was found that people receiving prophylactic treatment posed a significantly lower burden on the healthcare system. In contrast, patients who received no prophylactic treatment were shown to pose a substantial cost to the healthcare system. These additional costs have been shown to be associated with repeated hospital visits and emergency medication for uncontrolled CINV. See also Antiemetic Chemotherapy-induced nausea and vomiting Management of cancer References Oncology Cancer Symptoms and signs: Digestive system and abdomen Vomiting
0.772754
0.965182
0.745848
Ultrastructure
Ultrastructure (or ultra-structure) is the architecture of cells and biomaterials that is visible at higher magnifications than found on a standard optical light microscope. This traditionally meant the resolution and magnification range of a conventional transmission electron microscope (TEM) when viewing biological specimens such as cells, tissue, or organs. Ultrastructure can also be viewed with scanning electron microscopy and super-resolution microscopy, although TEM is a standard histology technique for viewing ultrastructure. Such cellular structures as organelles, which allow the cell to function properly within its specified environment, can be examined at the ultrastructural level. Ultrastructure, along with molecular phylogeny, is a reliable phylogenetic way of classifying organisms. Features of ultrastructure are used industrially to control material properties and promote biocompatibility. History In 1931, German engineers Max Knoll and Ernst Ruska invented the first electron microscope. With the development and invention of this microscope, the range of observable structures that were able to be explored and analyzed increased immensely, as biologists became progressively interested in the submicroscopic organization of cells. This new area of research concerned itself with substructure, also known as the ultrastructure. Applications Many scientists use ultrastructural observations to study the following, including but not limited to: Human Tumors Chloroplasts Bone Platelets Sperm Biology A common ultrastructural feature found in plant cells is the formation of calcium oxalate crystals. It has been theorized that these crystals function to store calcium within the cell until it is needed for growth or development. Calcium oxalate crystals can also form in animals, and kidney stones are a form of these ultrastructural features. Theoretically, nanobacteria could be used to decrease the formation of calcium oxalate kidney stones. Engineering Controlling ultrastructure has engineering uses for controlling the behavior of cells. Cells respond readily to changes in their extracellular matrix (ECM), so manufacturing materials to mimic ECM allows for increased control over the cell cycle and protein expression. Many cells, such as plants, produce calcium oxalate crystals, and these crystals are usually considered ultrastructural components of plant cells. Calcium oxalate is a material that is used to manufacture ceramic glazes [6], and it also has biomaterial properties. For culturing cells and tissue engineering, this crystal is found in fetal bovine serum, and is an important aspect of the extracellular matrix for culturing cells. Ultrastructure is an important factor to consider when engineering dental implants. Since these devices interface directly with bone, their incorporation to surrounding tissue is necessary to optimal device function. It has been found that applying a load to a healing dental implant allows for increased osseointegration with facial bones. Analyzing the ultrastructure surrounding an implant is useful in determining how biocompatible it is and how the body reacts to it. One study found implanting granules of a biomaterial derived from pig bone caused the human body to incorporate the material into its ultrastructure and form new bone. Hydroxyapatite is a biomaterial used to interface medical devices directly to bone by ultrastructure. 
Grafts can be created along with β-tricalcium phosphate, and it has been observed that surrounding bone tissue will incorporate the new material into its extracellular matrix. Hydroxyapatite is a highly biocompatible material, and its ultrastructural features, such as crystalline orientation, can be controlled carefully to ensure optimal biocompatibility. Proper crystal fiber orientation can make introduced minerals, like hydroxyapatite, more similar to the biological materials they are intended to replace. Controlling ultrastructural features makes obtaining specific material properties possible. References External links Electron microscopy Cell anatomy
0.768311
0.970742
0.745832
Population structure (genetics)
Population structure (also called genetic structure and population stratification) is the presence of a systematic difference in allele frequencies between subpopulations. In a randomly mating (or panmictic) population, allele frequencies are expected to be roughly similar between groups. However, mating tends to be non-random to some degree, causing structure to arise. For example, a barrier like a river can separate two groups of the same species and make it difficult for potential mates to cross; if a mutation occurs, over many generations it can spread and become common in one subpopulation while being completely absent in the other. Genetic variants do not necessarily cause observable changes in organisms, but can be correlated by coincidence because of population structure—a variant that is common in a population that has a high rate of disease may erroneously be thought to cause the disease. For this reason, population structure is a common confounding variable in medical genetics studies, and accounting for and controlling its effect is important in genome wide association studies (GWAS). By tracing the origins of structure, it is also possible to study the genetic ancestry of groups and individuals. Description The basic cause of population structure in sexually reproducing species is non-random mating between groups: if all individuals within a population mate randomly, then the allele frequencies should be similar between groups. Population structure commonly arises from physical separation by distance or barriers, like mountains and rivers, followed by genetic drift. Other causes include gene flow from migrations, population bottlenecks and expansions, founder effects, evolutionary pressure, random chance, and (in humans) cultural factors. Even in the absence of these factors, individuals tend to stay close to where they were born, which means that alleles will not be distributed at random with respect to the full range of the species. Measures Population structure is a complex phenomenon and no single measure captures it entirely. Understanding a population's structure requires a combination of methods and measures. Many statistical methods rely on simple population models in order to infer historical demographic changes, such as the presence of population bottlenecks, admixture events or population divergence times. Often these methods rely on the assumption of panmixia, or homogeneity in an ancestral population. Misspecification of such models, for instance by not taking into account the existence of structure in an ancestral population, can give rise to heavily biased parameter estimates. Simulation studies show that historical population structure can even have genetic effects that can easily be misinterpreted as historical changes in population size, or the existence of admixture events, even when no such events occurred. Heterozygosity One of the results of population structure is a reduction in heterozygosity. When populations split, alleles have a higher chance of reaching fixation within subpopulations, especially if the subpopulations are small or have been isolated for long periods. This reduction in heterozygosity can be thought of as an extension of inbreeding, with individuals in subpopulations being more likely to share a recent common ancestor. The scale is important — an individual with both parents born in the United Kingdom is not inbred relative to that country's population, but is more inbred than two humans selected from the entire world. 
This motivates the derivation of Wright's F-statistics (also called "fixation indices"), which measure inbreeding through observed versus expected heterozygosity. For example, F_IS measures the inbreeding coefficient at a single locus for an individual I relative to some subpopulation S: F_IS = 1 - H_I^obs / H_S^exp. Here, H_I^obs is the fraction of individuals in subpopulation S that are heterozygous. Assuming there are two alleles, a and A, that occur at respective frequencies p and q, it is expected that under random mating the subpopulation S will have a heterozygosity rate of H_S^exp = 2pq. Then: F_IS = 1 - H_I^obs / (2pq). Similarly, for the total population T, we can define H_T^exp, allowing us to compute the expected heterozygosity of subpopulation S and the F_ST value as: F_ST = 1 - H_S^exp / H_T^exp. If F_ST is 0, then the allele frequencies between populations are identical, suggesting no structure. The theoretical maximum value of 1 is attained when an allele reaches total fixation, but most observed maximum values are far lower. FST is one of the most common measures of population structure and there are several different formulations depending on the number of populations and the alleles of interest. Although it is sometimes used as a genetic distance between populations, it does not always satisfy the triangle inequality and thus is not a metric. It also depends on within-population diversity, which makes interpretation and comparison difficult.
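As a worked illustration of these quantities, the short Python sketch below computes F_IS and F_ST for a toy example with two subpopulations; the allele frequencies and heterozygote fractions are invented purely for illustration.

```python
# Toy illustration of Wright's F-statistics (illustrative numbers only).
# Two subpopulations with frequencies p1, p2 for the same allele.

p1, p2 = 0.1, 0.9            # allele frequency in subpopulations 1 and 2
h_obs = [0.18, 0.18]         # observed heterozygote fractions in each subpopulation

# Expected heterozygosity under random mating within each subpopulation: 2pq
h_exp_s = [2 * p * (1 - p) for p in (p1, p2)]   # [0.18, 0.18]
H_S = sum(h_exp_s) / 2                          # mean within-subpopulation expected heterozygosity

# Expected heterozygosity if the total population were panmictic
p_bar = (p1 + p2) / 2
H_T = 2 * p_bar * (1 - p_bar)                   # 0.5

F_IS = 1 - (sum(h_obs) / 2) / H_S               # 0.0  -> random mating within groups
F_ST = 1 - H_S / H_T                            # 0.64 -> strong structure between groups
print(F_IS, F_ST)
```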
Admixture inference An individual's genotype can be modelled as an admixture between K discrete clusters of populations. Each cluster is defined by the frequencies of its genotypes, and the contribution of a cluster to an individual's genotypes is measured via an estimator. In 2000, Jonathan K. Pritchard introduced the STRUCTURE algorithm to estimate these proportions via Markov chain Monte Carlo, modelling allele frequencies at each locus with a Dirichlet distribution. Since then, algorithms (such as ADMIXTURE) have been developed using other estimation techniques. Estimated proportions can be visualized using bar plots — each bar represents an individual, and is subdivided to represent the proportion of an individual's genetic ancestry from one of the K populations. Varying K can illustrate different scales of population structure; using a small K for the entire human population will subdivide people roughly by continent, while using a large K will partition populations into finer subgroups. Though clustering methods are popular, they are open to misinterpretation: for non-simulated data, there is never a "true" value of K, but rather an approximation considered useful for a given question. They are sensitive to sampling strategies, sample size, and close relatives in data sets; there may be no discrete populations at all; and there may be hierarchical structure where subpopulations are nested. Clusters may be admixed themselves, and may not have a useful interpretation as source populations. Dimensionality reduction Genetic data are high dimensional and dimensionality reduction techniques can capture population structure. Principal component analysis (PCA) was first applied in population genetics in 1978 by Cavalli-Sforza and colleagues and resurged with high-throughput sequencing. Initially PCA was used on allele frequencies at known genetic markers for populations, though later it was found that by coding SNPs as integers (for example, as the number of non-reference alleles) and normalizing the values, PCA could be applied at the level of individuals. One formulation considers n individuals and m bi-allelic SNPs. For each individual i, the value at locus l is g_i,l, the number of non-reference alleles (one of 0, 1 or 2). If the allele frequency at l is p_l, then the resulting n × m matrix of normalized genotypes M has entries M_i,l = (g_i,l - 2p_l) / sqrt(2p_l(1 - p_l)); a small numerical sketch of this normalization is given at the end of this subsection. PCA transforms data to maximize variance; given enough data, when each individual is visualized as a point on a plot, discrete clusters can form. Individuals with admixed ancestries will tend to fall between clusters, and when there is homogeneous isolation by distance in the data, the top PC vectors will reflect geographic variation. The eigenvectors generated by PCA can be explicitly written in terms of the mean coalescent times for pairs of individuals, making PCA useful for inference about the population histories of groups in a given sample. PCA cannot, however, distinguish between different processes that lead to the same mean coalescent times. Multidimensional scaling and discriminant analysis have been used to study differentiation, population assignment, and to analyze genetic distances. Neighborhood graph approaches like t-distributed stochastic neighbor embedding (t-SNE) and uniform manifold approximation and projection (UMAP) can visualize continental and subcontinental structure in human data. With larger datasets, UMAP better captures multiple scales of population structure; fine-scale patterns can be hidden or split with other methods, and these are of interest when the range of populations is diverse, when there are admixed populations, or when examining relationships between genotypes, phenotypes, and/or geography. Variational autoencoders can generate artificial genotypes with structure representative of the input data, though they do not recreate linkage disequilibrium patterns.
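The following is a minimal numpy sketch of the genotype normalization and PCA step described above. The genotype matrix here is randomly generated for illustration only; in practice one would run dedicated tools on real data, and the randomly drawn matrix will not show real clusters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 500                        # individuals x bi-allelic SNPs (toy sizes)
G = rng.integers(0, 3, size=(n, m))    # g_{i,l} in {0, 1, 2}: non-reference allele counts

p = G.mean(axis=0) / 2                 # per-locus allele frequency p_l
keep = (p > 0) & (p < 1)               # drop monomorphic loci to avoid division by zero
G, p = G[:, keep], p[keep]

# Normalized genotype matrix M_{i,l} = (g_{i,l} - 2 p_l) / sqrt(2 p_l (1 - p_l))
M = (G - 2 * p) / np.sqrt(2 * p * (1 - p))

# PCA via singular value decomposition: project individuals onto the top components
U, s, Vt = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
pcs = U[:, :2] * s[:2]                 # top two principal components per individual
print(pcs.shape)                       # (n, 2); structured data would separate into clusters here
```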
To construct a score, researchers first enroll participants in an association study to estimate the contribution of each genetic variant. Then, they can use the estimated contributions of each genetic variant to calculate a score for the trait for an individual who was not in the original association study. If structure in the study population is correlated with environmental variation, then the polygenic score is no longer measuring the genetic component alone. Several methods can at least partially control for this confounding effect. The genomic control method was introduced in 1999 and is a relatively nonparametric method for controlling the inflation of test statistics. It is also possible to use unlinked genetic markers to estimate each individual's ancestry proportions from some K subpopulations, which are assumed to be unstructured. More recent approaches make use of principal component analysis (PCA), as demonstrated by Alkes Price and colleagues, or by deriving a genetic relationship matrix (also called a kinship matrix) and including it in a linear mixed model (LMM). PCA and LMMs have become the most common methods to control for confounding from population structure. Though they are likely sufficient for avoiding false positives in association studies, they are still vulnerable to overestimating effect sizes of marginally associated variants and can substantially bias estimates of polygenic scores and trait heritability. If environmental effects are related to a variant that exists in only one specific region (for example, a pollutant is found in only one city), it may not be possible to correct for this population structure effect at all. For many traits, the role of structure is complex and not fully understood, and incorporating it into genetic studies remains a challenge and is an active area of research. References Genetic epidemiology Medical genetics Population genetics
Evolutionary anthropology
Evolutionary anthropology, the interdisciplinary study of the evolution of human physiology and human behaviour and of the relation between hominids and non-hominid primates, builds on natural science and on social science. Various fields and disciplines of evolutionary anthropology include: human evolution and anthropogeny paleoanthropology and paleontology of both human and non-human primates primatology and primate ethology the sociocultural evolution of human behavior, including phylogenetic approaches to historical linguistics the cultural anthropology and sociology of humans the archaeological study of human technology and of its changes over time and space human evolutionary genetics and changes in the human genome over time the neuroscience, endocrinology, and neuroanthropology of human and primate cognition, culture, actions and abilities human behavioural ecology and the interaction between humans and the environment studies of human anatomy, physiology, molecular biology, biochemistry, and differences and changes between species, variation between human groups, and relationships to cultural factors Evolutionary anthropology studies both the biological and the cultural evolution of humans, past and present. Based on a scientific approach, it brings together fields such as archaeology, behavioral ecology, psychology, primatology, and genetics. As a dynamic and interdisciplinary field, it draws on many lines of evidence to understand the human experience, past and present. Studies of human biological evolution generally focus on the evolution of the human form. Cultural evolution involves the study of cultural change over time and space and frequently incorporates cultural-transmission models. Cultural evolution is not the same as biological evolution: human culture involves the transmission of cultural information (compare memetics), and such transmission can behave in ways quite distinct from human biology and genetics. The study of cultural change increasingly takes place through cladistics and genetic models. See also References Anthropology Anthropology
Psychoorganic syndrome
Psychoorganic syndrome (POS), also known as organic psychosyndrome, is a progressive disease comparable to presenile dementia. It consists of a psychopathological complex of symptoms that are caused by organic brain disorders and that involve a reduction in memory and intellect. Psychoorganic syndrome is often accompanied by asthenia. Psychoorganic syndrome occurs during atrophy of the brain, most commonly during presenile and senile age (e.g. Alzheimer's disease, senile dementia). There are many causes, including cerebrovascular diseases, CNS damage due to traumatic brain injury, intoxication, exposure to organic solvents such as toluene, chronic metabolic disorders, tumors and abscesses of the brain, and encephalitis; it can also be found in cases of diseases accompanied by convulsive seizures. Psychoorganic syndrome may occur at any age but is most pronounced in elderly and senile age. Depending on the nosological entity, the main symptoms of psychoorganic syndrome are expressed differently. For example, in atrophic cases such as Alzheimer's disease, the symptoms are more geared towards a memory disorder, while in Pick's disease, mental disorders are more commonly expressed. Symptoms Patients with psychoorganic syndrome often complain about headaches, dizziness, unsteadiness when walking, poor tolerance of heat, stuffiness, atmospheric pressure changes, loud sounds, and neurological symptoms. The commonly reported psychological symptoms include loss of memory and concentration, emotional lability, clinical fatigue, long-term major depression, severe anxiety, and reduced intellectual ability. The cognitive and behavioral symptoms are chronic and have little response to treatment. Depending on lesion location, some patients may experience visual complications. Cause Psycho-organic syndrome is typically diagnosed in individuals following 5–10 years of consistent exposure to chemicals like xylene, toluene, and styrene, which are generally found in paint, plastic and degreasing products. Patients' work and environmental history must be evaluated for exposure to organic chemicals. A traumatic brain injury may also lead to POS. Although the cause varies in each individual case, localization of the atrophy in the brain can occur due to aging and without external causes. Prevention includes proper and regular use of personal protective equipment (PPE) in work environments that involve organic chemicals and limiting alcohol and drug substance intake. Mechanism Psychoorganic syndrome is a combination of various symptoms that are caused by organic changes in the brain. The exact component of the solvents that causes the neurological disorder is difficult to isolate because workers are generally exposed to mixtures of various grades, compositions, and purities of solvents. At the initial stage, asthenia is prevalent and the progress of the disorder is slow. Acute onset can be diagnosed when a large number of psychological symptoms surface. The final stage of the disorder is made up of numerous disorders, including dementia and Korsakov's syndrome, and includes severe personality change such as depression, anxiety, memory loss, and drastic change in intellect. Levels of kindness, happiness, and insight are greatly affected in the final stage. The disorder stems from a defect in brain tissue, usually atrophy from another neurological disorder. Although the exact mechanisms by which solvents act on the nervous system are not fully understood, the metabolic processes in the body that turn solvents into toxic intermediates are important.
Some evidence shows that genetic polymorphisms affect the activity of metabolic enzymes that metabolize foreign chemicals. Diagnosis Along with occupational and environmental evaluation, a neurological exam, ECHO, EEG, CT scan, and X-ray of the brain may be conducted to identify the disorder. Neuroimaging that detects cerebral atrophy or cardiovascular subcortical alterations can help point to psychoorganic syndrome. Strong CNS lesions are detected in POS patients. However, this is found to be difficult as many psychiatric disorders, like dementia, share common diagnostic features. Diagnosis of POS is an ongoing and developing area of medical and psychiatric practice. Exact diagnosis is difficult because many symptoms mirror other psychological disorders in older patients. Various symptom diagnosis CT scan or MRI can confirm dementia via observation of ventricular dilation and cortical substance degeneration. Pick's disease can be confirmed via CT scan or MRI with atrophy of frontal and temporal lobe roots. Alzheimer's is a disease confirmed by atrophy of the parietal and temporal lobe ganglia along with changes in the cortical ganglia found in a CT scan or MRI. Treatment In a confirmed medical diagnosis, therapy is used to isolate and begin treating the cause of the disorder. Thereafter, psychiatric medication is used as a secondary step in treatment. Medications include antipsychotic, antidepressant, or sedation-inducing drugs, varying with the severity of the patient's condition. Treatment of psychoorganic syndrome is directed at the main disease. Nootropics like piracetam have had positive effects on patients. Vitamin therapy, antioxidants, and neurotropic and cerebroprotective agents have also been found to be effective when given as repeated courses. History POS was suggested to be associated with long-term and high-level solvent exposure in early studies conducted in Scandinavia. These studies found neurological deficits such as personality changes and memory loss were tied to these exposures. However, these studies were highly criticized and found biased, causing doubt in the existence of the syndrome. Furthermore, various health organizations had difficulty coming to an agreement on the definition of the syndrome. In 1985, the syndrome was defined with clear criteria that could be used by patients and medical professionals to help identify the syndrome and isolate ways of prevention. Recent research In a 2007 clinical study conducted in Sweden on 128 subjects who had constant high exposure to solvents in their work environments, a definite link to POS could not be determined. However, subjects with a diagnosis of POS showed increased neurological symptoms with increased brain atrophy in as little as 3 years of exposure. See also Neurotoxicity References Neurological disorders Psychopathological syndromes Mental disorders due to brain damage
Anorectal disorder
Anorectal disorders include conditions involving the anorectal junction. They are painful but common conditions like hemorrhoids, tears, fistulas, or abscesses that affect the anal region. Most people experience some form of anorectal disorder during their lifetime. Primary care physicians can treat most of these disorders; however, high-risk individuals include those with HIV, roughly half of whom need surgery to remedy the disorders. Likelihood of malignancy should also be considered in high-risk individuals. This is why it is important to perform a full history and physical exam on each patient. Because these disorders affect the rectum, people are often embarrassed or afraid to consult a medical professional. Common conditions Symptoms and signs Itchiness, a burning sensation, pus discharge, blood, and swelling in and around the rectum and anus, and diarrhea. Other common symptoms include anal spasm, bleeding with defecation, and painful defecation. Diagnosis Doctors use a variety of tools and techniques to evaluate the type of anorectal disorder, including digital and anoscopic investigations and palpation. The initial examination can be painful because a gastroenterologist will need to spread the buttocks and probe the painful area, which may require a local anesthetic. Treatment Treatments range from recommendations for over-the-counter products to more invasive surgical procedures. The most common outpatient advice given to patients with less severe disorders includes a high-fiber diet, application of ointment, and increased water intake. More serious procedures include the removal of affected tissue, injection of botulinum toxin, or surgically opening the fistula tract in the sphincter muscle. Notes Gastrointestinal tract disorders Anal diseases Rectal diseases
Musculoskeletal injury
Musculoskeletal injury refers to damage of muscular or skeletal systems, which is usually due to a strenuous activity and includes damage to skeletal muscles, bones, tendons, joints, ligaments, and other affected soft tissues. In one study, roughly 25% of approximately 6300 adults received a musculoskeletal injury of some sort within 12 months—of which 83% were activity-related. Musculoskeletal injury spans into a large variety of medical specialties including orthopedic surgery (with diseases such as arthritis requiring surgery), sports medicine, emergency medicine (acute presentations of joint and muscular pain) and rheumatology (in rheumatological diseases that affect joints such as rheumatoid arthritis). Musculoskeletal injuries can affect any part of the human body including; bones, joints, cartilages, ligaments, tendons, muscles, and other soft tissues. Symptoms include mild to severe aches, low back pain, numbness, tingling, atrophy and weakness. These injuries are a result of repetitive motions and actions over a period of time. Tendons connect muscle to bone whereas ligaments connect bone to bone. Tendons and ligaments play an active role in maintain joint stability and controls the limits of joint movements, once injured tendons and ligaments detrimentally impact motor functions. Continuous exercise or movement of a musculoskeletal injury can result in chronic inflammation with progression to permanent damage or disability. In many cases, during the healing period after a musculoskeletal injury, a period in which the healing area will be completely immobile, a cast-induced muscle atrophy can occur. Routine sessions of physiotherapy after the cast is removed can help return strength in limp muscles or tendons. Alternately, there exist different methods of electrical stimulation of the immobile muscles which can be induced by a device placed underneath a cast, helping prevent atrophies Preventative measures include correcting or modifying one's postures and avoiding awkward and abrupt movements. It is beneficial to rest post injury to prevent aggravation of the injury. There are three stages of progressing from a musculoskeletal injury; Cause, Disability and Decision. The first stage arises from the injury itself whether it be overexertion, fatigue or muscle degradation. The second stage involves how the individual's ability is detrimentally affected as disability affects both physical and cognitive functions of an individual. The final stage, decision, is the individual's decision to return to work post recovery as Musculoskeletal injuries compromise movement and physical ability which ultimately degrades one's professional career. Repetitive use injuries Injury can be described as a ‘mechanical disruption of tissues resulting in pain.' Despite the fact tissues can self-repair, muscle degradation occurs after repeated and prolonged use. Overuse and strain injuries can occur at work, physical activity and daily life. Repetitive motions strain our musculoskeletal systems, if continued in an improper form can result in chronic inflammation with progression to permanent damage. These injuries can compromise an individual's posture or other physical abilities, including fine motor movements. Nerves play an important role in repetitive strain injuries as it is nerves that get pulled in injured soft tissues ultimately affecting motor functions. Pressure on the nerve will impair blood flow which can impair either distal or proximal points to the first injury and cause pain. 
Injuries associated with repetitive-use activities include: tennis elbow, tendonitis, wrist injuries, myelopathy, low back injuries and lower leg and ankle injuries. Repetitive use injuries are a result of rapid and continuous movements and long-duration postures without adequate support. Excessive muscle use results in fatigue which limits movement of limbs. Forms of musculoskeletal injuries An acute injury can be traced back to a specific incident, causing immediate pain and often swelling. On the other hand, a chronic injury does not have a distinct origin; it develops slowly, is persistent and long lasting, and is accompanied by dull pain, aches or soreness. Neck and shoulder injury The shoulder is a joint which allows the arm to move. Poor posture can lead to nerve damage. Repetitive shoulder movements, such as overhead, swinging, throwing or circling movements, can cause musculoskeletal injury. Some cases can result in spinal cord damage at the C3-C5 levels, producing a myelopathy which can dramatically compromise overall movement in the arms and legs as well as other fine motor functions. Injury to the rotator cuff This results from trauma and old age; complete and partial tears are more frequent in older patients and are caused by degeneration of the tendons. Wrist and hand injury Wrist mobility is often restricted due to inflammation of the forearm muscles as they contract and tighten due to injury. Most wrist dislocations occur between the capitate and the lunate. Carpal fractures are caused by falling on an outstretched hand while the wrist is hyper-extended in ulnar deviation with a component of rotation. Swelling of the median nerve tissue leads to nerve entrapment, ultimately resulting in restriction of movement; other symptoms include pain, numbness and weakness. DeQuervain's tenosynovitis is a form of tendinitis of the muscles that move the thumb. Leg and foot injury Most leg pain is transferred pain from the back or hips. Foot injuries, including plantar fasciitis, are another source of pain and are associated with standing for long periods. There are three major tendons that maintain stability at the ankle joint (anterior extensors, medial flexors and lateral peroneal); these tendons facilitate movement around the ankle, foot and toes. Malleolar fractures are related to ankle twisting or shearing injury; these fractures are often associated with ligament injury. An ankle sprain can lead to a spectrum of soft tissue impingement reducing motion in the ankle. Spinal and neck injury The spinal column has five sections consisting of thirty-three individual vertebrae separated by cushioning discs; the upper three sections are movable and the lower two are fixed. Nerve compression is a result of poor posture; prolonged computer use is an example of a repetitive strain injury which affects the musculoskeletal system. Whiplash injury occurs when force causes strain to the capsule and ligaments of the apophyseal joints of the cervical spine. Hyper-flexion is a common mechanism of injury in the cervical spine associated with an anterior compression vector and a posterior distraction vector. These injuries are associated with diving injuries, falls and car accidents.
Anterior compression vector results in mild height loss, whereas hyper-extension often occurs with the posterior displacement of the head in car crashes. Severe hyper-extension injury leads to pinching of the spinal cord along the posterior margin of the body. Elbow injury The upper arm and the forearm meet to form the elbow joint. Examples of injuries affected on an elbow include; Carpal tunnel syndrome, Radial Tunnel Syndrome and tennis elbow, all of which are due to tendon and ligament damage from overuse or strain. Distal humeral fractures are related to high energy trauma from falling from a height or in a motor vehicle accident, this results in stiffness and restricted range of motion. Elbow dislocation and radial head or neck fractures are common when one falls on an outstretched hand. Elbow Dislocations are divided into two categories; Simple and complex. Simple dislocations are defined as soft tissue injury whereas complex involves a fracture. Injury prevention Preventing injuries to workers is essential to maintain an effective organisational management. Repetitive injuries can be prevented by early medical intervention as an effective way to prevent permanent injury. Injuries can be prevented by understanding proper body mechanics. Correcting one's postures, avoiding abrupt and awkward movements will avoid acute injury. Taking breaks to change your position and moving about instead of remaining static can also reduce risk of injury. Daily body stretches can help elevate pain from hamstrings, back and neck. Creating healthy awareness through social media and celebrities further allow individuals to create healthy practices which ultimately prevent injury. It is essential for a work environment to comply with safety standards. Workplaces should have upper management implement safety precautions making health and safety the primary goal. Implementation of company policies and procedures in case of serious incident or fatality. Other strategies such as substances abuse programs are effective at reducing the potential for injuries. If musculoskeletal injuries are not prevented, they can develop and become debilitating. Heat and cold are used to facilitate the healing process, if applied immediately after an acute injury or overuse strain, it will reduce pain and swelling. A healthy workspace is also substantially important including; floor surfaces, ergonomic seating, working heights, working rates and task variability. Understanding the symptoms of repetitive strain injuries such as; Numbness of arms, hands or legs, aches and pains of joints, shoulder and back pain and tingling or burning of arms, legs and feet, allow an individual to self-diagnose and seek medical attention to prevent further aggravation. Pain is the body's natural way to alert an individual to rest. It is important to rest, if ignored can lead to further problems. It is crucial not to further aggravate the injury and compromise one's physical movement as it can detrimentally impact general health. Sustaining a secondary injury has a large risk whilst recovering from an initial injury. Injury recovery Injuries often limit physical activity and result in immobilisation which is a significant factor in recovery. Symptoms vary from, numbness, tingling, atrophy and weakness which can ultimately lead to permanent damage and disability. Neural injury recovery in acute strokes are compensated with the help of medical drugs. 
Repeating motions and actions whilst performing an activity increases an individual's risk of accumulating acute musculoskeletal injuries. Factors that affect sustaining these injuries include; duration of activity, the force required to complete the activity, the environment of the workplace and work postures. Although, specially advised exercises with stretching promotes blood circulation and increase range of motion and ultimately help decrease muscle tension. Our immune system is our natural mechanism which manages injuries to the musculoskeletal system. Inflammation, redness, swollen tissue are all part of the healing process, during this process new cells are generated to form new tissue. Macro-nutrients are essential components for tissue regeneration. Proteins, carbohydrates and fats are crucial for new muscle tissues. Water allows all biochemical processes to take place including, elimination of waste and toxins via sweat and urination. On the other hand, Micro nutrients include; vitamins, minerals, enzymes, protect cells and DNA from oxidation damages which is evident in the inflammation response and recovery process. Decision to return to work Recovery is enhanced by doing activities that make an individual feel better. Recovery from an injury also consists of returning to work or physical exercise. Employers are legally required to provide suitable duties for the person returning to work. It is important to get medical advice on when to return to work. It is important to consider the physical demands of the job, the work environment when deciding to return to work. Once you are approved to return to work or physical exercise it is crucial to maintain both physical and psychological relapse. See also Musculoskeletal disorder Human musculoskeletal system Sprain Muscular system References External links Prevention of Musculoskeletal Disorders in the Workplace - U.S. Occupational Safety and Health Administration Musculoskeletal disorders Single Entry Point, European Agency for Safety and Health at Work (OSHA) Good Practices to prevent Musculoskeletal disorders, European Agency for Safety and Health at Work (OSHA) Musculoskeletal disorders homepage Health and Safety Executive (HSE) Hazards and risks associated with manual handling of loads in the workplace, European Agency for Safety and Health at Work (OSHA) National Institute for Occupational Safety and Health Musculoskeletal Health Program Injuries Ergonomics Occupational diseases Overuse injuries Inflammations Tennis terminology Sports injuries Soft tissue disorders Tennis culture
Chemical pneumonitis
Chemical pneumonitis is inflammation of the lung caused by aspirating or inhaling irritants. It is sometimes called a "chemical pneumonia", though it is not infectious. There are two general types of chemical pneumonitis: acute and chronic. Irritants capable of causing chemical pneumonitis include vomitus, barium used in gastro-intestinal imaging, chlorine gas (among other pulmonary agents), ingested gasoline or other petroleum distillates, ingested or skin absorbed pesticides, gases from electroplating, smoke and others. It may also be caused by the use of inhalants. Mendelson's syndrome is a type of chemical pneumonitis. Mineral oil should not be given internally to young children, pets, or anyone with a cough, hiatal hernia, or nocturnal reflux, because it can cause complications such as lipoid pneumonia. Due to its low density, it is easily aspirated into the lungs, where it cannot be removed by the body. In children, if aspirated, the oil can work to prevent normal breathing, resulting in death of brain cells and permanent paralysis and/or brain damage. Signs and symptoms Acute: Cough Difficulty Breathing Abnormal lung sounds (wet or gurgling sounds when breathing) Chest pain, tightness or burning sensation Chronic: Persistent cough Shortness of breath Increased susceptibility to respiratory illness Symptoms of chronic chemical pneumonitis may or may not be present, and can take months or years to develop to the point of noticeability. Diagnosis The pragmatic challenge is to distinguish from aspiration pneumonia with an infectious component because the former does not require antibiotics while the latter does. While some issues, such as a recent history of exposure to substantive toxins, can foretell the diagnosis, for a patient with dysphagia the diagnosis may be less obvious, as the dysphagic patient may have caustic gastric contents damaging the lungs which may or may not have progressed to bacterial infection. The following tests help determine how severely the lungs are affected: Blood gases (measurement of how much oxygen and carbon dioxide are in your blood) CT scan of chest Lung function studies (tests to measure breathing and how well the lungs are functioning) X-ray of the chest Swallowing studies to check if stomach acid is the cause of pneumonitis Treatment Treatment is focused on reversing the cause of inflammation and reducing symptoms. Corticosteroids may be given to reduce inflammation, often before long-term scarring occurs. Antibiotics are usually not helpful or needed, unless there is a secondary infection. Oxygen therapy may be helpful. References External links Respiratory diseases
Motor disorder
Motor disorders are disorders of the nervous system that cause abnormal and involuntary movements. They can result from damage to the motor system. Motor disorders are defined in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) – published in 2013 to replace the fourth text revision (DSM-IV-TR) – as a new sub-category of neurodevelopmental disorders. The DSM-5 motor disorders include developmental coordination disorder, stereotypic movement disorder, and the tic disorders including Tourette syndrome. Signs and symptoms Motor disorders are malfunctions of the nervous system that cause involuntary or uncontrollable movements or actions of the body. These disorders can cause lack of intended movement or an excess of involuntary movement. Symptoms of motor disorders include tremors, jerks, twitches, spasms, contractions, or gait problems. Tremor is the uncontrollable shaking of an arm or a leg. Twitches or jerks of body parts may occur due to a startling sound or unexpected, sudden pain. Spasms and contractions are temporary abnormal resting positions of hands or feet. Spasms are temporary while contractions could be permanent. Gait problems are problems with the way one walks or runs. This can mean an unsteady pace or dragging of the feet along with other possible irregularities. Causes Pathological changes of certain areas of the brain are the main causes of most motor disorders. Causes of motor disorders by genetic mutation usually affect the cerebrum. The way humans move requires many parts of the brain to work together to perform a complex process. The brain must send signals to the muscles instructing them to perform a certain action. There are constant signals being sent to and from the brain and the muscles that regulate the details of the movement such as speed and direction, so when a certain part of the brain malfunctions, the signals can be incorrect or uncontrollable causing involuntary or uncontrollable actions or movements. Diagnosis References Extrapyramidal and movement disorders Neurological disorders Symptoms and signs: Nervous and musculoskeletal systems Neurodevelopmental disorders
Pyaemia
Pyaemia (or pyemia) is a type of sepsis that leads to widespread abscesses of a metastatic nature. It is usually caused by pus-forming organisms in the blood, typically staphylococcus bacteria. Apart from the distinctive abscesses, pyaemia exhibits the same symptoms as other forms of septicaemia. It was almost universally fatal before the introduction of antibiotics. Sir William Osler included a three-page discussion of pyaemia, including a definition of the condition, in his textbook The Principles and Practice of Medicine, published in 1892. Earlier still, Ignaz Semmelweis – who later died of the disease – included a section titled "Childbed fever is a variety of pyaemia" in his treatise, The Etiology of Childbed Fever (1861). Jane Grey Swisshelm, in her autobiography titled Half a Century, describes the treatment of pyaemia in 1862 during the American Civil War. Types arterial p. Pyaemia resulting from dissemination of emboli from a thrombus in cardiac vessels. cryptogenic p. Pyaemia of an origin that is hidden in the deeper tissues. metastatic p. Multiple abscesses resulting from infected pyaemic thrombi. portal p. Suppurative inflammation of the portal vein. Symptoms The disease is characterized by intermittent high temperature with recurrent chills; metastatic processes in various parts of the body, especially in the lungs; septic pneumonia; empyema. It may be fatal. Clinical signs and symptoms can differ according to the system involved. Diagnosis Features of systemic inflammatory response syndrome: tachycardia >90 beats/min, tachypnea >24 breaths/min, temperature >38 °C or <36 °C. Treatment Antibiotics are effective. Prophylactic treatment consists of the prevention of suppuration. Cultural references Ignaz Semmelweis, the original proponent of hand-washing in the practice of medicine, was widely scorned for his belief and was committed to an insane asylum where he died at age 47 of pyaemia, after being beaten by the guards, only 14 days after he was committed. The nihilistic character Bazarov in Ivan Turgenev's Fathers and Sons dies of pyaemia. Miller Huggins, manager of the New York Yankees, died of pyaemia while managing the team during the 1929 season. Blind Boy Fuller died at his home in Durham, North Carolina on February 13, 1941, at 5 p.m. of pyemia due to an infected bladder, gastrointestinal tract and perineum, plus kidney failure. Casper, a wounded soldier in "Nostalgia," by Dennis McFarland, is dying of pyemia after his lower arm is amputated. References External links Bacterial diseases
Diving medicine
Diving medicine, also called undersea and hyperbaric medicine (UHB), is the diagnosis, treatment and prevention of conditions caused by humans entering the undersea environment. It includes the effects on the body of pressure on gases, the diagnosis and treatment of conditions caused by marine hazards and how aspects of a diver's fitness to dive affect the diver's safety. Diving medical practitioners are also expected to be competent in the examination of divers and potential divers to determine fitness to dive. Hyperbaric medicine is a corollary field associated with diving, since recompression in a hyperbaric chamber is used as a treatment for two of the most significant diving-related illnesses, decompression sickness and arterial gas embolism. Diving medicine deals with medical research on issues of diving, the prevention of diving disorders, treatment of diving accidents and diving fitness. The field includes the effect of breathing gases and their contaminants under high pressure on the human body and the relationship between the state of physical and psychological health of the diver and safety. In diving accidents it is common for multiple disorders to occur together and interact with each other, both causatively and as complications. Diving medicine is a branch of occupational medicine and sports medicine, and at first aid level, an important part of diver education. Range and scope of diving medicine The scope of diving medicine must necessarily include conditions that are specifically connected with the activity of diving, and not found in other contexts, but this categorization excludes almost everything, leaving only deep water blackout, isobaric counterdiffusion and high pressure nervous syndrome. A more useful grouping is conditions that are associated with exposure to variations of ambient pressure. These conditions are largely shared by aviation and space medicine. Further conditions associated with diving and other aquatic and outdoor activities are commonly included in books which are aimed at the diver, rather than the specialist medical practitioner, as they are useful background to diver first aid training. The scope of knowledge necessary for a practitioner of diving medicine includes the medical conditions associated with diving and their treatment, physics and physiology relating to the underwater and pressurised environment, the standard operating procedures and equipment used by divers which can influence the development and management of these conditions, and the specialised equipment used for treatment. Scope of knowledge for diving medicine The ECHM-EDTC Educational and Training Standards for Diving and Hyperbaric Medicine (2011) specify the following scope of knowledge for Diving Medicine: Physiology and pathology of diving and hyperbaric exposure. 
Human physiology of underwater diving Hyperbaric physics Diving related physiology Hyperbaric pathophysiology of immersion Pathophysiology of decompression A brief introduction to acute dysbaric disorders Chronic dysbaric disorders Hyperbaric oxygen therapy basis – Physiology and pathology Oxygen toxicity Pressure and inert gas effects Nitrogen narcosis High pressure neurological syndrome Medication under pressure Non-dysbaric diving pathologies Diving technology and safety Basic safety planning Compressed air work Diving procedures Wet bells and stages Scuba diving on air and mixed gas Surface supplied diving Standard diving (copper helmet) Rebreather diving (semi-closed and closed circuit) Other diving procedures Characteristics of various divers Diving equipment as used to c.50m and Chambers Diving tables and computers Regulations and standards for diving Saturation diving Saturation mode Physiology of deep exposure Compression At depth in a living chamber Bell excursions Fitness to dive Fitness to dive criteria and contraindications for divers, compressed air workers and HBOT chamber personnel Fitness to dive assessment Fitness to dive standards (professional and recreational) Diving accidents Diving incidents and accidents Emergency medical support with no chamber on site Barotrauma: ENT; dental; cutaneous, conjunctival, etc. Physical injuries Decompression illness Pathophysiological basis and mechanisms of DCI Differential diagnosis of decompression illness Management of decompression incidents at the surface Immediate management, recompression tables and strategies Rehabilitation of disabled divers Diving accident investigation Clinical HBO Recompression chambers Scope of knowledge for hyperbaric medicine The ECHM-EDTC Educational and Training Standards for Diving and Hyperbaric Medicine (2011) specify the following scope of knowledge for Hyperbaric Medicine additional to that for Diving medicine: Physiology and pathology of diving and hyperbaric exposure. HBO-Basics – effects of hyperbaric oxygen – physiology and pathology Clinical HBO Chamber technique (multiplace, monoplace, transport chambers, wet recompression) Mandatory indications HBO Recommended indications HBO Experimental and anecdotal indications HBO Data collection / statistics / evaluation HBO General basic treatment (nursing) HBO Diagnostic, monitoring and therapeutic devices in chambers Risk assessment, incidents monitoring and safety plan in HBO chambers HBO Safety regulations Diagnostics The signs and symptoms of diving disorders may present during a dive, on surfacing, or up to several hours after a dive. Divers have to breathe a gas which is at the same pressure as their surroundings, which can be much greater than on the surface. The ambient pressure underwater increases by for every of depth. The principal conditions are: decompression illness (which covers decompression sickness and arterial gas embolism); nitrogen narcosis; high pressure nervous syndrome; oxygen toxicity; and pulmonary barotrauma (burst lung). Although some of these may occur in other settings, they are of particular concern during diving activities. The disorders are caused by breathing gas at the high pressures encountered at depth, and divers will often breathe a gas mixture different from air to mitigate these effects. Nitrox, which contains more oxygen and less nitrogen is commonly used as a breathing gas to reduce the risk of decompression sickness at recreational depths (up to about ). 
Helium may be added to reduce the amount of nitrogen and oxygen in the gas mixture when diving deeper, to reduce the effects of narcosis and to avoid the risk of oxygen toxicity. This is complicated at depths beyond about , because a helium–oxygen mixture (heliox) then causes high pressure nervous syndrome. More exotic mixtures such as hydreliox, a hydrogen–helium–oxygen mixture, are used at extreme depths to counteract this. Decompression sickness Decompression sickness (DCS) occurs when gas, which has been breathed under high pressure and dissolved into the body tissues, forms bubbles as the pressure is reduced on ascent from a dive. The results may range from pain in the joints where the bubbles form to blockage of an artery leading to damage to the nervous system, paralysis or death. While bubbles can form anywhere in the body, DCS is most frequently observed in the shoulders, elbows, knees, and ankles. Joint pain occurs in about 90% of DCS cases reported to the U.S. Navy, with neurological symptoms and skin manifestations each present in 10% to 15% of cases. Pulmonary DCS is very rare in divers. Pulmonary barotrauma and arterial gas embolism If the breathing gas in a diver's lungs cannot freely escape during an ascent, the lungs may be expanded beyond their compliance, and the lung tissues may rupture, causing pulmonary barotrauma (PBT). The gas may then enter the arterial circulation producing arterial gas embolism (AGE), with effects similar to severe decompression sickness. Gas bubbles within the arterial circulation can block the supply of blood to any part of the body, including the brain, and can therefore manifest a vast variety of symptoms. Nitrogen narcosis Nitrogen narcosis is caused by the pressure of dissolved gas in the body and produces temporary impairment to the nervous system. This results in alteration to thought processes and a decrease in the diver's ability to make judgements or calculations. It can also decrease motor skills, and worsen performance in tasks requiring manual dexterity. As depth increases, so does the pressure and hence the severity of the narcosis. The effects may vary widely from individual to individual, and from day to day for the same diver. Because of the perception-altering effects of narcosis, a diver may not be aware of the symptoms, but studies have shown that impairment occurs nevertheless. The narcotic effects dissipate without lasting effect as the pressure decreases during ascent. High-pressure nervous syndrome Helium is the least narcotic of all gases, and divers may use breathing mixtures containing a proportion of helium for dives exceeding about deep. In the 1960s it was expected that helium narcosis would begin to become apparent at depths of . However, it was found that different symptoms, such as tremors, occurred at shallower depths around . This became known as high-pressure nervous syndrome, and its effects are found to result from both the absolute depth and the speed of descent. Although the effects vary from person to person, they are stable and reproducible for the individual. Oxygen toxicity Although oxygen is essential to life, in concentrations significantly greater than normal it becomes toxic, overcoming the body's natural defences (antioxidants), and causing cell death in any part of the body. The lungs and brain are particularly affected by high partial pressures of oxygen, such as are encountered in diving. 
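As a rough worked illustration of why the oxygen fraction of the breathing gas matters at depth, the sketch below assumes the common approximation of about 1 bar at the surface plus about 1 bar of pressure per 10 metres of seawater; the specific pressure values and exposure limits discussed elsewhere in this article are not reproduced here.

```python
# Rough oxygen partial pressure for a given depth and breathing-gas mix.
# Assumes about 1 bar at the surface plus about 1 bar per 10 m of seawater,
# a common rule-of-thumb approximation rather than a precise standard.
def ambient_pressure_bar(depth_m):
    return 1.0 + depth_m / 10.0

def ppo2_bar(depth_m, o2_fraction):
    """Dalton's law: partial pressure = gas fraction x ambient pressure."""
    return o2_fraction * ambient_pressure_bar(depth_m)

for depth in (0, 20, 40):
    air = ppo2_bar(depth, 0.21)    # air is roughly 21% oxygen
    ean32 = ppo2_bar(depth, 0.32)  # "nitrox 32", a common enriched-air mix
    print(f"{depth} m: ppO2 on air = {air:.2f} bar, on nitrox 32 = {ean32:.2f} bar")
```

The richer mix delivers more oxygen (and proportionally less nitrogen) at any given depth, which is why enriched mixes can reduce decompression stress but approach oxygen-toxicity limits at shallower depths than air.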
The body can tolerate partial pressures of oxygen around indefinitely, and up to for many hours, but higher partial pressures rapidly increase the chance of the most dangerous effect of oxygen toxicity, a convulsion resembling an epileptic seizure. Susceptibility to oxygen toxicity varies dramatically from person to person, and to a smaller extent from day to day for the same diver. Prior to convulsion, several symptoms may be present – most distinctly that of an aura. Treatments Treatment of diving disorders depends on the specific disorder or combination of disorders, but two treatments are commonly associated with first aid and definitive treatment where diving is involved. These are first aid oxygen administration at high concentration, which is seldom contraindicated, and generally recommended as a default option in diving accidents where there is any significant probability of hypoxia, and hyperbaric oxygen therapy, which is the definitive treatment for most conditions of decompression illness. Oxygen therapy The administration of oxygen as a medical intervention is common in diving medicine, both for first aid and for longer term treatment. Normobaric oxygen administration at the highest available concentration is frequently used as first aid for any diving injury that may involve inert gas bubble formation in the tissues. There is epidemiological support for its use from a statistical study of cases recorded in a long term database. Recompression and hyperbaric oxygen therapy Recompression treatment in a hyperbaric chamber was initially used as a life-saving tool to treat decompression sickness in caisson workers and divers who stayed too long at depth and developed decompression sickness. In the 21st century, it is a highly specialized treatment modality found to be effective for treating many conditions where the administration of oxygen under pressure is beneficial. Hyperbaric oxygen treatment is generally preferred when effective, as it is usually a more efficient and lower risk method of reducing symptoms of decompression illness, but in some cases recompression to pressures where oxygen toxicity is unacceptable may be required to eliminate the bubbles in the tissues in severe cases of decompression illness. Availability of recompression treatment is limited. Some countries have no facilities at all, and in others which have facilities, such as the US, some hospitals do not make them available for emergency treatment. Medical examination for fitness to dive Fitness to dive, (or medical fitness to dive), is the medical and physical suitability of a person to function safely in the underwater environment using underwater diving equipment and procedures. Depending on the circumstances it may be established by a signed statement by the diver that he or she does not suffer from any of the listed disqualifying conditions and is able to manage the ordinary physical requirements of diving, to a detailed medical examination by a physician registered as a medical examiner of divers following a procedural checklist, and a legal document of fitness to dive issued by the medical examiner. The most important medical examination is the one before starting diving, as the diver can be screened to prevent exposure when a dangerous condition exists. The other important medicals are after some significant illness, where medical intervention is needed there and has to be done by a doctor who is competent in diving medicine, and can not be done by prescriptive rules. 
Psychological factors can affect fitness to dive, particularly where they affect response to emergencies, or risk taking behaviour. The use of medical and recreational drugs, can also influence fitness to dive, both for physiological and behavioural reasons. In some cases prescription drug use may have a net positive effect, when effectively treating an underlying condition, but frequently the side effects of effective medication may have undesirable influences on the fitness of diver, and most cases of recreational drug use result in an impaired fitness to dive, and a significantly increased risk of sub-optimal response to emergencies. Education and registration of practitioners Specialist training in underwater and hyperbaric medicine is available from several institutions, and registration is possible both with professional associations and governmental registries. Education NOAA/UHMS Physicians Training Course in Diving Medicine This course has been presented since 1977, and has been influenced by internationally accepted training objectives recommended by the Diving Medical Advisory Committee, the European Diving Technology Committee, and the European Committee for Hyperbaric Medicine. The course is designed for qualified medical practitioners, but may be useful to others who work in the field of diving safety and operations. The course is to train physicians to recognize and treat diving medical emergencies. Subject matter includes: Basic physics and physiology of the hyperbaric environment: the laws and principles; the differences between hyperbaric and hypobaric pressure; hyperbaric gases and their effects under pressure; links between the physiological effects of the hyperbaric environment and the pathology of the disease Basic decompression theory: historical development to current concepts factors affecting decompression safety including acceptable risks and thermal issues distinguish decompression sickness from barotrauma and arterial gas embolism; Dive computer theories and types, and comparison to dive tables Introduction to commercial diving and comparison to recreational and technical diving, including differences in procedures, equipment, and diver categories The clinical application of hyperbaric oxygen therapy and the treatment tables used; Participation in surface-supplied diving operation and hyperbaric chamber operations; Components, types, operational and safety hazards associated with hyperbaric chambers; Diving-related conditions resulting from the effects of long-term effects of diving, flying after diving, altitude, thermal conditions, age and gender Neurologic assessment on a diver with signs and/or symptoms of DCI Medical and fitness standards for diving, including: contraindications for both commercial and recreational divers differences in medical standards for recreational versus occupational diving communities legal implications for approving and denying fitness to dive in an occupational setting approaches for determining the safety of prescription and OTC medications used by divers Fellowship in Undersea and Hyperbaric Medicine The Accreditation Council for Graduate Medical Education (ACGME) and the American Osteopathic Association (AOA) offer 12-month programs in undersea and hyperbaric medicine associated with ACGME or AOA accredited programs in emergency medicine, family medicine, internal medicine, occupational medicine, preventive medicine, or anesthesiology. 
ECHM-EDTC Educational and Training Standards The standard drawn up jointly by the European Committee for Hyperbaric Medicine and the European Diving Technical Committee defines job descriptions for several levels of diving and hyperbaric physician: Education and assessment to these standards may be provided by institutions of higher education under the leadership of a Level 3 Hyperbaric Medicine Expert as defined below. Certificates of competence may be issued by a nationally accredited institution or an internationally acknowledged agency, and periodic recertification is required. Level 1. (MED) minimum 28 teaching hours. The MED must be competent to perform the assessments of medical fitness to dive of occupational and recreational divers and compressed air workers, except the assessment of medical fitness to resume diving after major decompression incidents. Level 2D. (DMP) minimum 80 teaching hours. A DMP must be competent to perform the initial and all other assessments of medical fitness to dive of working and recreational divers or compressed air workers, and manage diving accidents and advise diving contractors and others on diving medicine and physiology (with the backup of a diving medical expert or consultant). A DMP should have knowledge in relevant aspects of occupational health, but is not required to be a certified specialist in occupational medicine. A DMP should have certified skills and basic practical experience in assessment of medical fitness to dive, management of diving accidents, safety planning for professional diving operations, advanced life support, acute trauma care and general wound care. Level 2H. Hyperbaric Medicine Physician (HMP) minimum 120 teaching hours An HMP will be responsible for hyperbaric treatment sessions (with the backup of a hyperbaric medicine expert or consultant) An HMP should have appropriate experience in anaesthesia and intensive care in order to manage the HBO patients, but is not required to be a certified specialist in anaesthesia and intensive care. An HMP must be competent to assess and manage clinical patients for hyperbaric oxygen therapy treatment Level 3. or consultant (hyperbaric and diving medicine) is a physician who has been assessed as competent to: manage a hyperbaric facility (HBO centre) or the medical and physiological aspects of complex diving activities. manage research programs on diving medicine. supervise a team of HBO doctors and personnel, health professionals and others. teach relevant aspects of hyperbaric medicine and physiology to all members of staff. Gesellschaft für Tauch- und Überdruckmedizin e. V. Society for Diving and Hyperbaric medicine German standards for education and assessment of diving medical practitioners are similar to the ECHM-EDTC Standards and are controlled by the Gesellschaft für Tauch- und Überdruckmedizin e. V. They include Medical Examiner of Divers, Diving Medicine Physician, Hyperbaric Medicine Physician, Chief Hyperbaric Medicine Physician and Hyperbaric Medicine Consultant. Schweizerische Gesellschaft für Unterwasser- und Hyperbarmedizin Swiss Society for underwater and hyperbaric medicine. Swiss standards for education and assessment of diving medical practitioners are controlled by the Schweizerische Gesellschaft für Unterwasser- und Hyperbarmedizin. They include Medical Examiner of Divers, Diving Medicine Physician and Hyperbaric Medicine Physician. Österreichische Gesellschaft für Tauch- und Hyperbarmedizin Austrian Society for Diving and Hyperbaric medicine. 
Austrian standards for education and assessment of diving medical practitioners are controlled by the Österreichische Gesellschaft für Tauch- und Hyperbarmedizin They include Medical Examiner of Divers, Diving Medicine Physician, Hyperbaric Medicine Physician, Chief Hyperbaric Medicine Physician and Hyperbaric Medicine Consultant. Registration The American Medical Association recognises the sub-speciality Undersea and Hyperbaric Medicine held by someone who is already Board Certified in some other speciality. The South African Department of Employment and Labour registers two levels of Diving Medical Practitioner. Level 1 is qualified to conduct annual examinations and certification of medical fitness to dive, on commercial divers (equivalent to ECHM-EDTC Level 1. Medical Examiner of Divers), and Level 2 is qualified to provide medical advice to a diving contractor and hyperbaric treatment for diving injuries (equivalent to ECHM-EDTC Level 2D Diving Medicine Physician) Australia has a four tier system: In 2007 there was no recognised equivalence with the European standard. GPs completing the first tier four- to five-day course on how to examine divers for 'fitness to dive' can then add their names to the SPUMS Diving Doctors List GPs completing the second tier two-week diving medicine courses provided by the Royal Australian Navy and the Royal Adelaide Hospital, or the two-week course in Diving and Hyperbaric Medicine provided by the ANZ Hyperbaric Medicine Group, qualify to do commercial-diving medicals. The third tier is the SPUMS Diploma in Diving and Hyperbaric Medicine. The candidate must attend a two-week course, write a dissertation related to DHM and have the equivalent of six months' full-time experience working in a hyperbaric medicine unit. The fourth tier is the Certificate in Diving and Hyperbaric Medicine from the ANZ College of Anaesthetists. Training of divers and support staff in relevant first aid Divers A basic knowledge understanding of the causes, symptoms and first aid treatment of diving related disorders is part of the basic training for most recreational and professional divers, both to help the diver avoid the disorders, and to allow appropriate action in case of an incident resulting in injury. Recreational divers A recreational diver has the same duty of care to other divers as any ordinary member of the public, and therefore there is no obligation to train recreational divers in first aid or other medical skills. Nevertheless, first aid training is recommended by most, if not all, recreational diver training agencies. Recreational diving instructors and divemasters, on the other hand, are to a greater or lesser extent responsible for the safety of divers under their guidance, and therefore are generally required to be trained and certified to some level of rescue and first aid competence, as defined in the relevant training standards of the certifying body. In many cases this includes certification in cardiopulmonary resuscitation and first aid oxygen administration for diving accidents. Professional divers Professional divers usually operate as members of a team with a duty of care for other members of the team. Divers are expected to act as standby divers for other members of the team and the duties of a standby diver include rescue attempts if the working diver gets into difficulties. 
Consequently, professional divers are generally required to be trained in rescue procedures appropriate to the modes of diving they are certified in, and to administer first aid in emergencies. The specific training, competence and registration for these skills varies, and may be specified by state or national legislation or by industry codes of practice. Diving supervisors have a similar duty of care, and as they are responsible for operational planning and safety, generally are also expected to manage emergency procedures, including the first aid that may be required. The level of first aid training, competence and certification will generally take this into account. In South Africa, registered commercial and scientific divers must hold current certification in first aid at the national Level 1, with additional training in oxygen administration for diving accidents, and registered diving supervisors must hold Level 2 first aid certification. Offshore diving contractors frequently follow the IMCA recommendations. Diver medic A diver medic or diving medical technician is a member of a dive team who is trained in advanced first aid. A diver medic recognised by IMCA must be capable of administering first aid and emergency treatment, and carrying out the directions of a physician, and be familiar with diving procedures and compression chamber operation. The diver medic must also be able to assist the diving supervisor with decompression procedures, and provide treatment in a hyperbaric chamber in an emergency. The diver medic must hold, at a minimum, a valid certificate of medical fitness to operate in a pressurized environment, and a certificate of medical fitness to dive. Training standards for diver medic are described in the IMCA Scheme for Recognition of Diver Medic Training. Ethical and medicolegal issues Experimental work on human subjects is often ethically and/or legally impracticable. Tests where the endpoint is symptomatic decompression sickness are difficult to authorise and this makes the accumulation of adequate and statistically valid data difficult. The precautionary principle may be applied in the absence of information allowing a realistic assessment of risk. Analysis of investigations into accidents is useful when reliable results are available, which is less often than would be desirable, but privacy concerns prevent a large mount of information potentially useful to the general diving population from being made available to researchers. History of diving medical research Timeline November 1992: The first examination for certification in Undersea Medicine by the American Board of Preventive Medicine. November 1999: The first examination for Undersea and Hyperbaric Medicine qualification. Notable researchers Arthur J. Bachrach Albert R. Behnke Peter B. Bennett Thomas E. Berghage Paul Bert George F. Bond Alf O. Brubakk Albert A. Bühlmann Carl Edmonds William Paul Fife John Scott Haldane Robert William Hamilton Jr. Leonard Erskine Hill Brian Andrew Hills F.J. Keays Christian J. Lambertsen Joseph B. MacInnis Simon Mitchell Richard E. Moon František Novomeský John Rawlins Charles Wesley Shilling Edward D. Thalmann Richard D. Vann James Vorosmarti R.M. Wong Robert D. Workman Research organisations See also References Further reading External links Scubadoc's Diving Medicine Online Diving Diseases Research Centre (DDRC) Diving Medical Literature SCUBA Diving and Asthma infos scuba diving restrictions – free download of complete text Military medicine Medical specialties
Skin appendage
Skin appendages (or adnexa of skin) are anatomical skin-associated structures that serve particular functions, including sensation, contractility, lubrication and heat loss, in animals. In humans, some of the more common skin appendages are hairs (sensation, heat loss, filtering of breathed air, protection), arrector pili (smooth muscles that pull hairs straight), sebaceous glands (secrete sebum onto the hair follicle, which oils the hair), sweat glands (which can secrete sweat with a strong odour (apocrine) or with a faint odour (merocrine or eccrine)), and nails (protection). Skin appendages are derived from the skin, and are usually adjacent to it. Types of appendages include hair, glands, and nails.
Glands
Sweat glands are distributed all over the body except the nipples and outer genitalia. The nipples do, however, contain mammary glands, which are modified sweat glands.
Sebaceous glands typically open into the shafts of hairs. They are not found on the palms of the hands or the soles of the feet. These glands secrete an oily, antibacterial substance known as sebum, which also softens the skin. Secretory activity is related to hormonal release. Acne occurs when these gland ducts become blocked.
Eccrine (merocrine) glands are the most common type; their secretions are watery and contain some electrolytes.
Apocrine glands produce a fatty secretion that gives off an odour. They are located in the inguinal and axillary regions of the body, and include the mammary glands.
References
appendage
Feldsher
A feldsher is a health care professional who provides various medical services limited to emergency treatment and ambulance practice. As such, a feldsher is one kind of mid-level medical practitioner. In Russia, Ukraine and other countries of the former Soviet Union, feldshers provide primary-, obstetric- and surgical-care services in many rural medical centres and clinics across Russia, Armenia, Kazakhstan, Kyrgyzstan, Mongolia and Uzbekistan.
Similar types of mid-level practitioners are known by different titles in different countries, including advanced practitioner (United Kingdom), clinical associate/clinical officer (in parts of sub-Saharan Africa), community health officer (India), medical assistant (United States), nurse practitioner (Australia, Canada and US), and physician assistant (Canada and US). The International Standard Classification of Occupations, 2008 revision, collectively groups such workers under the category paramedical practitioners.
History
The word feldsher is derived from the German Feldscher, which was coined in the 15th century. Feldscher (or Feldscherer) literally means "(battle-)field shearer" and was the term used for barber surgeons in the German and Swiss armies from the 17th century until professional military medical services were established, first by Prussia in the early 18th century. Feldschers no longer exist in Germany, but the term was exported to Russia with Prussian officers and nobles.
An All-Russia Union of Feldshers was founded in 1905. Its members were regarded as "Middle Medical Workers". The feldsher system of rural primary care provided some of the inspiration for China's barefoot doctors. Today feldshers can be found in every medical setting from primary to intensive care. They are often the first point of contact with health professionals for people in rural areas.
Education and training
Training for feldshers can include up to four years of post-secondary education, including medical diagnosis and prescribing. They have clinical responsibilities that may be considered midway between those of physicians and those of nurses. They do not have full professional qualifications as physicians. The training program typically includes basic pre-clinical sciences: anatomy, physiology, pharmacology, microbiology, laboratory subjects, etc.; and advanced clinical sciences: internal medicine and therapeutics, neurology and psychiatry, obstetrics, infectious diseases and epidemiology, preventive medicine, surgery and trauma, anesthesiology and intensive care, pediatrics, and other clinical subjects such as ophthalmology, otolaryngology, dermatology and sexually transmitted diseases, ambulance service and pre-hospital emergency medical care, and army field medical-surgical training.
See also
Allied health professions
Clinical officer, a similar category of health care provider in sub-Saharan Africa
Health care providers
Medical assistant
Mid-level practitioner
Nurse practitioner
Physician assistant, a similar category of health care provider in the United States
References
Kossoy E. & Ohry A. The Feldsher: Medical, Sociological and Historical Aspects of Practitioners of Medicine with below University Level Education, the Magnes Press, the Hebrew University, Jerusalem, 1992.
Health care occupations
Squeamishness
Squeamishness typically refers to feelings of faintness, repulsion, disgust, or physical illness brought on by exposure to certain external stimuli.
Causes
Almost anything can cause someone to feel squeamish. Some common triggers are the sight of blood or other bodily fluids, witnessing a person endure pain, the sight of insects, strong smells, and general ideas such as war, hospitals, or death. While these triggers are common, there is no limit to the stimuli that can provoke the reaction, as it depends on the subjective perception of the person experiencing it. The feeling can also be triggered by traumatic experiences from the past. People can feel squeamish while witnessing, thinking of, or speaking about any particularly unpleasant topic. Squeamishness is often associated with medical phobias, as some of the most common triggers include sights or experiences one may encounter during a medical emergency.
Symptoms
Symptoms of squeamishness may include dizziness, lightheadedness, nausea, shaking, and pallor. In extreme instances it can also cause vomiting and fainting.
See also
List of phobias
References
External links
Discussion of the unclear etymology of squeamish
Symptoms and signs of mental disorders
Mental health first aid
Mental health first aid is an extension of the concept of traditional first aid to cover mental health conditions. Mental health first aid is the first and immediate assistance given to any person experiencing or developing a mental health condition, such as depression or anxiety disorders, or experiencing a mental health crisis situation such as suicidal ideation or panic attack. Mental health first aid training Mental health first aid training teaches members of the public how to help a person who is experiencing varying degrees of worsening mental health issues. Like traditional first aid training, mental health first aid training does not teach people to treat or diagnose mental health or substance use conditions. Instead, the training teaches people how to offer initial support until appropriate professional help is received or until the crisis resolves. History The first mental health first aid training program was developed in Australia in 2001 by a research team led by Betty Kitchener and Anthony Jorm. The program was created to teach members of the general public how to provide initial support to people experiencing mental health problems, as well as to connect them with appropriate professional help and community resources. They tested the idea that giving first aid for mental health could lessen the effects of mental health problems, speed up recovery, and make suicide less likely by educating students on common mental health crises including feelings of suicide, deliberate self-harm, panic attacks, or symptoms of psychosis, and how to deal with these situations. The idea was to reduce the stigma associated with mental illness and make it more likely that people with mental health problems would seek help, which would reduce the risk of the person coming to harm. Mental health first aid training programs are provided by different organizations around the world, many of them non-profit. They have been implemented in countries such as the United States, Canada, the United Kingdom, Ireland, and a number of other European, Asian, and African countries. Public reception General media articles and videos indicate that mental health first aid training has political and celebrity proponents, such as former US president Barack Obama, former US first lady Michelle Obama, and singer/actress Lady Gaga. A few bills of law have been proposed by politicians in countries such as Australia and the United Kingdom to make mental health first aid training compulsory in schools and other organizations. Although considered good practice in several countries, mental health first aid training is not legally imposed for organizations anywhere in the world. Curriculum The curriculum for mental health first aid training typically includes the following topics: Symptoms associated with common mental health conditions such as depression, anxiety, schizophrenia, bipolar disorder, and eating disorders, as well as a general overview of mental health and mental illnesses. Common warning signs of mental illnesses, such as mood, behavior, and cognitive changes. Information about local counseling and psychiatric services, and how to help others gain access to them. 
Using the knowledge from those topics, participants are trained in a step-by-step action plan for providing mental health first aid, including how to:
Evaluate the risk of suicide or harm
Approach safely and appropriately
Listen non-judgmentally
Provide reassurance
Encourage appropriate professional assistance
Promote self-help
Additional support strategies
Depending on the program, there may be additional modules that target specific populations, such as children and adolescents, the elderly, or veterans, or conditions such as substance use disorder and its related issues and challenges. All of these topics are covered in order to develop participants' mental health literacy, which consists of the knowledge, skills, and confidence necessary to recognize and respond appropriately to signs of mental illness and substance use disorders.
Research on mental health first aid training
A number of systematic reviews and meta-analyses have been carried out to review data concerning the effectiveness of mental health first aid training on participants' knowledge of mental health conditions and subsequent helping behaviors.
A meta-analysis conducted in 2014 concluded that mental health first aid training increases participants' knowledge of mental health, reduces their negative views, and increases their supportive behaviors toward people with mental health issues.
A meta-analysis conducted in 2018 concluded that mental health first aid training enhances participants' knowledge, awareness, and beliefs about successful treatments for mental illnesses. At follow-up, there were slight improvements in the amount of assistance provided to a person with a mental health problem, but the nature of the change in the offered behaviors was unclear.
A systematic review conducted in 2020 showed that mental health first aid training had conflicting effects on how trainees applied the skills they learned, but no influence on how beneficial their actions were for the mental health of the recipients.
A systematic review conducted in 2020 focused on youth and adolescent mental health first aid training and found significant improvements in the understanding, recognition, stigmatizing perceptions, helping motivations, and helping behavior of youth and adolescent participants. The most frequently reported improvement was in knowledge and confidence, while the least frequently reported improvement was in helping behavior.
As of 2024, the mental health first aid programme has been exported to over 25 countries and has trained over 6 million people worldwide, with over 1 million trained within Australia.
See also
Emotional First Aid
First aid
Mental health
Mental disorder
Mental health triage – A brief overview of the Australian concept for dealing with psychiatric emergencies, similar to regular triage
References and notes
Clinical psychology First aid Mental disorders Mental health Emergency mental health services
Nonpathogenic organisms
Nonpathogenic organisms are those that do not cause disease, harm or death to another organism. The term is usually used to describe bacteria, and it denotes a property of a bacterium – its inability to cause disease. Most bacteria are nonpathogenic. The term can also describe non-disease-causing bacteria that normally reside on the surface of vertebrates and invertebrates as commensals. Some nonpathogenic microorganisms are commensals on and inside the body of animals and are called microbiota. Some of these same nonpathogenic microorganisms have the potential to cause disease, that is, to become pathogenic, if they enter the body, multiply and cause symptoms of infection. Immunocompromised individuals are especially vulnerable to bacteria that are typically nonpathogenic; because of a compromised immune system, disease occurs when these bacteria gain access to the body's interior. Genes have been identified that predispose a small number of people to disease and infection with normally nonpathogenic bacteria. Nonpathogenic Escherichia coli strains normally found in the gastrointestinal tract have the ability to stimulate the immune response in humans, though further studies are needed to determine clinical applications.
A particular bacterial strain can be nonpathogenic in one host species but pathogenic in another. One bacterial species can have many different types or strains, and one strain of a species can be nonpathogenic while another strain of the same species is pathogenic.
References
Bacteriology Gram-positive bacteria Gram-negative bacteria Immune system
Carrion's disease
Carrion's disease is an infectious disease produced by Bartonella bacilliformis infection. It is named after Daniel Alcides Carrión. Signs and symptoms The clinical symptoms of bartonellosis are pleomorphic and some patients from endemic areas may be asymptomatic. The two classical clinical presentations are the acute phase and the chronic phase, corresponding to the two different host cell types invaded by the bacterium (red blood cells and endothelial cells). An individual can be affected by either or both phases. Acute phase The acute phase is also called the hematic phase or Oroya fever. The most common findings are fever (usually sustained, but with temperature no greater than ), pale appearance, malaise, painless liver enlargement, jaundice, enlarged lymph nodes, and enlarged spleen. This phase is characterized by severe hemolytic anemia and transient immunosuppression. The case fatality ratios of untreated patients exceeded 40% but reach around 90% when opportunistic infection with Salmonella spp. occurs. In a recent study, the attack rate was 13.8% (123 cases) and the case-fatality rate was 0.7%. Other symptoms include a headache, muscle aches, and general abdominal pain. Some studies have suggested a link between Carrion's disease and heart murmurs due to the disease's impact on the circulatory system. In children, symptoms of anorexia, nausea, and vomiting have been investigated as possible symptoms of the disease. Most of the mortality of Carrion's disease occurs during the acute phase. Studies vary in their estimates of mortality. In one study, mortality has been estimated as low as just 1% in studies of hospitalized patients, to as high as 88% in untreated, unhospitalized patients. In developed countries, where the disease rarely occurs, it is recommended to seek the advice of a specialist in infectious disease when diagnosed. Mortality is often thought to be due to subsequent infections due to the weakened immune system and opportunistic pathogen invasion, or consequences of malnutrition due to weight loss in children. In a study focusing on pediatric and gestational effects of the disease, mortality rates for pregnant women with the acute phase were estimated at 40% and rates of spontaneous abortion in another 40%. Chronic phase The chronic phase is also called the eruptive phase or tissue phase, in which the patients develop a cutaneous rash produced by a proliferation of endothelial cells, known as "Peruvian warts" or "verruga peruana". Depending on the size and characteristics of the lesions, there are three types: miliary (1–4 mm), nodular or subdermic, and mular (>5mm). Miliary lesions are the most common. The lesions often ulcerate and bleed. The most common findings are bleeding of verrugas, fever, malaise, arthralgias (joint pain), anorexia, myalgias, pallor, lymphadenopathy, and liver and spleen enlargement. On microscopic examination, the chronic phase and its rash are produced by angioblastic hyperplasia, or the increased rates and volume of cell growth in the tissues that form blood vessels. This results in a loss of contact between cells and a loss of normal functioning. The chronic phase is the more common phase. Mortality during the chronic phase is very low. Cause Carrion's disease is caused by Bartonella bacilliformis. Recent investigations show that Bartonella ancashensis may cause verruga peruana, although it may not meet all of Koch's postulates. 
There has been no experimental reproduction of the Peruvian wart in animals apart from Macaca mulatta, and there is little research on the disease's natural spread or impact in native animals. Diagnosis Diagnosis during the acute phase can be made by obtaining a peripheral blood smear with Giemsa stain, Columbia blood agar cultures, immunoblot, indirect immunofluorescence, and PCR. Diagnosis during the chronic phase can be made using a Warthin–Starry stain of wart biopsy, PCR, and immunoblot. Treatment Because Carrion's disease is often comorbid with Salmonella infections, chloramphenicol has historically been the treatment of choice. Fluoroquinolones (such as ciprofloxacin) or chloramphenicol in adults and chloramphenicol plus beta-lactams in children are the antibiotic regimens of choice during the acute phase of Carrion's disease. Chloramphenicol-resistant B. bacilliformis has been observed. During the eruptive phase, in which chloramphenicol is not useful, azithromycin, erythromycin, and ciprofloxacin have been used successfully for treatment. Rifampin or macrolides are also used to treat both adults and children. Because of the high rates of comorbid infections and conditions, multiple treatments are often required. These have included the use of corticosteroids for respiratory distress, red blood cell transfusions for anemia, pericardiectomies for pericardial tamponades, and other standard treatments. Society and culture The disease was featured in an episode of The WB supernatural drama Charmed that aired on February 3, 2000. In the episode Piper Halliwell becomes infected with the condition after importing a crate of Kiwano for her club, P3. She is bitten by a sandfly that was alive in the crate, infecting her. Piper slowly begins to die of the condition as her sisters Prue and Phoebe rush to find a magical way to save her. References Bacterial diseases Bacterium-related cutaneous conditions Insect-borne diseases Tropical diseases
Zymotic disease
Zymotic disease was a 19th-century medical term for acute infectious diseases, especially "chief fevers and contagious diseases (e.g. typhus and typhoid fevers, smallpox, scarlet fever, measles, erysipelas, cholera, whooping-cough, diphtheria, etc.)". Zyme or microzyme was the name of the organism presumed to be the cause of the disease.
As originally employed by William Farr, of the British Registrar-General's department, the term included the diseases which were "epidemic, endemic and contagious," and were regarded as owing their origin to the presence of a morbific principle in the system, acting in a manner analogous to, although not identical with, the process of fermentation.
In the late 19th century, Antoine Béchamp proposed that tiny organisms he termed microzymas, and not cells, are the fundamental building blocks of life. Béchamp claimed these microzymas are present in all things—animal, vegetable, and mineral—whether living or dead. Microzymas coalesce to form blood clots and bacteria. Depending upon the condition of the host, microzymas assume various forms. In a diseased body, the microzymas become pathological bacteria and viruses. In a healthy body, microzymas form healthy cells. When a plant or animal dies, the microzymas live on. His ideas did not gain acceptance.
The word zymotic comes from the Greek word ζυμοῦν (zumoûn), which means "to ferment". The term was in British official use from 1839, and was used extensively in the English Bills of Mortality as a cause of death from 1842. In 1877, Thomas Watson described contagion as the origin of infectious diseases in a Scientific American article, "Zymotic Disease". Robert Newstead (1859–1947) used the term in a 1908 publication in the Annals of Tropical Medicine and Parasitology to describe the contribution of house flies (Musca domestica) to the spread of infectious diseases. However, by the early 1900s, bacteriology "displaced the old fermentation theory", and the term became obsolete.
In her Diagram of the causes of mortality in the army in the East, Florence Nightingale explains the shading as follows: "The blue wedges measured from the centre of the circle represent area for area the deaths from Preventible or Mitigable Zymotic diseases; the red wedges measured from the centre the deaths from wounds, & the black wedges measured from the centre the deaths from all other causes."
References
Obsolete medical theories
Immunochemistry
Immunochemistry is the study of the chemistry of the immune system. This involves the study of the properties, functions, interactions and production of the chemical components of the immune system (antibodies/immunoglobulins, toxins, epitopes of proteins such as CD4, antitoxins, cytokines/chemokines, and antigens). It also includes the study of immune responses and the determination of immune materials/products by immunochemical assays. In addition, immunochemistry is the study of the identities and functions of the components of the immune system. Immunochemistry is also used to describe the application of immune system components, in particular antibodies, to chemically labelled antigen molecules for visualization.
Various methods in immunochemistry have been developed and refined, and are used in scientific study, from virology to molecular evolution. Immunochemical techniques include: enzyme-linked immunosorbent assay, immunoblotting (e.g., the Western blot assay), precipitation and agglutination reactions, immunoelectrophoresis, immunophenotyping, immunochromatographic assay and flow cytometry.
One of the earliest examples of immunochemistry is the Wassermann test to detect syphilis. Svante Arrhenius was also one of the pioneers in the field; he published Immunochemistry in 1907, which described the application of the methods of physical chemistry to the study of the theory of toxins and antitoxins. Immunochemistry is also studied from the aspect of using antibodies to label epitopes of interest in cells (immunocytochemistry) or tissues (immunohistochemistry).
References
Branches of immunology
MAH
Mah, the Avestan language word for both the moon and for the Zoroastrian divinity
Maharashtra, a state in western India (postal code MAH)
Malév Hungarian Airlines (ICAO code), the flag carrier airline of Hungary
Mansion House tube station, London, London Underground station code
Menorca Airport (IATA airport code), the airport serving the Balearic island of Minorca in the Mediterranean Sea
Milli Emniyet Hizmeti, former Turkish government intelligence agency
milliampere-hour, often abbreviated as mAh or mA·h, a unit of electric charge (see the conversion sketch below)
Monocyclic aromatic hydrocarbon, a type of chemical compound
My American Heart, an American band
M.A.H., an honorary master's degree granted ad eundem
Santa Cruz Museum of Art and History, Santa Cruz, California
See also
Mah (disambiguation)
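As a brief illustration of the milliampere-hour entry above: a milliampere-hour is a unit of electric charge, and converting it to the SI unit (the coulomb) is simple arithmetic, since 1 mAh = 0.001 A × 3600 s = 3.6 C. The minimal Python sketch below shows the conversion; the function name and the example battery capacity are illustrative choices, not taken from the entry.

```python
def mah_to_coulombs(charge_mah: float) -> float:
    """Convert electric charge from milliampere-hours to coulombs.

    1 mAh = 0.001 ampere * 3600 seconds = 3.6 coulombs.
    """
    return charge_mah * 3.6


if __name__ == "__main__":
    # Hypothetical example: a 2000 mAh battery rating corresponds to 7200 coulombs.
    print(mah_to_coulombs(2000.0))  # prints 7200.0
```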
Osmolyte
Osmolytes are low-molecular-weight organic compounds that influence the properties of biological fluids. Osmolytes are a class of organic molecules that play a significant role in regulating osmotic pressure and maintaining cellular homeostasis in various organisms, particularly in response to environmental stressors. Their primary role is to maintain the integrity of cells by affecting the viscosity, melting point, and ionic strength of the aqueous solution. When a cell swells due to external osmotic pressure, membrane channels open and allow efflux of osmolytes carrying water, restoring normal cell volume. These molecules are involved in counteracting the effects of osmotic stress, which occurs when there are fluctuations in the concentration of solutes (such as ions and sugars) inside and outside cells. Osmolytes help cells adapt to changing osmotic conditions, thereby ensuring their survival and functionality. Osmolytes also interact with the constituents of the cell, e.g., they influence protein folding. Common osmolytes include amino acids, sugars and polyols, methylamines, methylsulfonium compounds, and urea. Case studies Natural osmolytes that can act as osmoprotectants include trimethylamine N-oxide (TMAO), dimethylsulfoniopropionate, sarcosine, betaine, glycerophosphorylcholine, myo-inositol, taurine, glycine, and others. Bacteria accumulate osmolytes for protection against a high osmotic environment. The osmolytes are neutral non-electrolytes, except in bacteria that can tolerate salts. In humans, osmolytes are of particular importance in the renal medulla. Osmolytes are present in the cells of fish, and function to protect the cells from water pressure. As the osmolyte concentration in fish cells scales linearly with pressure and therefore depth, osmolytes have been used to calculate the maximum depth where a fish can survive. Fish cells reach a maximum concentration of osmolytes at depths of approximately , with no fish ever being observed beyond . References Further reading Diffusion Solutions
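The depth-limit estimate mentioned above relies on the observation that osmolyte concentration in fish cells scales roughly linearly with depth, so a maximum survivable depth can be estimated by extrapolating the linear trend to the point where the required concentration would exceed a physiological ceiling. The Python sketch below illustrates only the arithmetic of that extrapolation; the intercept, slope, and ceiling values are placeholder assumptions invented for the example, not measurements from the literature.

```python
def max_survivable_depth_m(surface_conc: float,
                           conc_per_metre: float,
                           ceiling_conc: float) -> float:
    """Extrapolate the depth at which a linearly increasing osmolyte
    concentration would reach a physiological ceiling.

    Assumes concentration(depth) = surface_conc + conc_per_metre * depth.
    Concentrations are in arbitrary but consistent units (e.g. mmol/kg).
    """
    return (ceiling_conc - surface_conc) / conc_per_metre


if __name__ == "__main__":
    # All three inputs are made-up placeholders used only to show the calculation.
    depth = max_survivable_depth_m(surface_conc=50.0,
                                   conc_per_metre=0.025,
                                   ceiling_conc=300.0)
    print(round(depth))  # 10000 (metres, for these invented inputs)
```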
Blastocystosis
Blastocystosis refers to a medical condition caused by infection with Blastocystis. Blastocystis is a protozoal, single-celled parasite that inhabits the gastrointestinal tracts of humans and other animals. Many different types of Blastocystis exist, and they can infect humans, farm animals, birds, rodents, amphibians, reptiles, fish, and even cockroaches. Blastocystosis has been found to be a possible risk factor for development of irritable bowel syndrome. Signs and symptoms Researchers have published conflicting reports concerning whether Blastocystis causes symptoms in humans, with one of the earliest reports in 1916. The incidence of reports associated with symptoms began to increase in 1984, with physicians from Saudi Arabia reporting symptoms in humans and US physicians reporting symptoms in individuals with travel to less developed countries. A lively debate ensued in the early 1990s, with some physicians objecting to publication of reports that Blastocystis caused disease. Some researchers believe the debate has been resolved by finding of multiple species of Blastocystis that can infect humans, with some causing symptoms and others being harmless (see Genetics and Symptoms). A few of most commonly reported symptoms are: abdominal pain itching, usually anal itching constipation diarrhea watery or loose stools weight loss fatigue flatulence Some less commonly reported symptoms include: Skin rash Arthritic symptoms and joint pain Intestinal inflammation Variation in severity Researchers have sought to develop models to understand the variety of symptoms seen in humans. Some patients do not have symptoms, while others report severe diarrhea and fatigue. A number of researchers have investigated the possibility that some species of Blastocystis are more virulent than others. An Italian researcher reported differences in the protein profiles of isolates associated with chronic and acute infection. A research team from Malaysia reported that isolates from symptomatic patients produced large amoeboid forms that were not present in isolates from asymptomatic patients. The development of a classification system for Blastocystis in 2007 produced a series of studies investigating this possibility. The studies that followed generally found that no specific "pathogenic" or nonpathogenic species of Blastocystis exists. One study investigated the subtypes found in patients with irritable bowel syndrome (IBS), inflammatory bowel disease (IBD), and chronic diarrhea, and found the subtypes in these diseases were similar (subtypes 2 and 3), and have also been found in asymptomatic carriers. The researchers concluded that host factors, such as age and genetics, may play the dominant role in determining the symptoms seen in the disease. Associations Blastocystis colonisation is positively associated with IBS and is a possible risk factor for developing IBS. A study of IBS patients in the Middle East showed a "significantly increased" immune reaction in IBS patients to Blastocystis, even when the organism could not be identified in stool samples. The following reports have linked Blastocystis infection to inflammatory bowel disease: A study using riboprinting identified specific types of Blastocystis as associated with inflammation. A case report described IBD in conjunction with Blastocystis infection. Three research groups have reported experimental infection of mice with Blastocystis produces intestinal inflammation. 
Transmission and risk factors Humans contract Blastocystis infection by drinking water or eating food contaminated with feces from an infected human or animal. Blastocystis infection can be spread from animals to humans, from humans to other humans, from humans to animals, and from animals to animals. Risk factors for infection have been reported as following: International travel: Travel to less developed countries has been cited in development of symptomatic Blastocystis infection. A 1986 study in the United States found that all individuals symptomatically infected with Blastocystis reported recent travel history to less developed countries. In the same study, all hospital employees working in New York who were screened for Blastocystis were found to have asymptomatic infections. Military service: Several studies have identified high rates of infection in military personnel. An early account described infection of British troops in Egypt in 1916 who recovered following treatment with emetine. A 1990 study published in Military Medicine from Lackland AFB in Texas concluded symptomatic infection was more common in foreign nationals, children, and immunocompromised individuals. A 2002 study published in Military Medicine of army personnel in Thailand identified a 44% infection rate. Infection rates were highest in privates who had served the longest at the army base. A follow-up study found a significant correlation between infection and symptoms, and identified the most likely cause as contaminated water. A 2007 newspaper article suggested the infection rate of US military personnel returning from the Gulf War was 50%, quoting the head of Oregon State University's Biomedicine department. Consumption of Untreated Water (well water): Many studies have linked Blastocystis infection with contaminated drinking water. A 1993 study of children infected symptomatically with Blastocystis in Pittsburgh indicated that 75% of them had a history of drinking well water or travel in less developed countries. Two studies in Thailand linked Blastocystis infection in military personnel and families to drinking of unboiled and untreated water. A book published in 2006 noted that in an Oregon community, infections are more common in winter months during heavy rains. A research study published in 1980 reported bacterial contamination of well water in the same community during heavy rainfall. A 2007 study from China specifically linked infection with Blastocystis sp. subtype 3 with drinking untreated water. Recreational contact with untreated water, for example through boating, has also been identified as a risk factor. Studies have shown that Blastocystis survives sewage treatment plants in both the United Kingdom and Malaysia. Blastocystis cysts have been shown to be resistant to chlorination as a treatment method and are among the most resistant cysts to ozone treatment. Contaminated Food: Contamination of leafy vegetables has been implicated as a potential source for transmission of Blastocystis infection, as well as other gastrointestinal protozoa. A Chinese study identified infection with Blastocystis sp. subtype 1 as specifically associated with eating foods grown in untreated water. Daycare facilities: A Canadian study identified an outbreak of Blastocystis associated with daycare attendance. Prior studies have identified outbreaks of similar protozoal infections in daycares. 
Geography: Infection rates vary geographically, and variants which produce symptoms may be less common in industrialized countries. For example, a low incidence of Blastocystis infection has been reported in Japan. A study of individuals infected with Blastocystis in Japan found that many (43%, 23/54) carried Blastocystis sp. subtype 2, which was found to produce no symptoms in 93% (21/23) of patients studied, in contrast to other variants which were less common but produced symptoms in 50% of Japanese individuals. Studies in urban areas of industrialized countries have found Blastocystis infection associated with a low incidence of symptoms. In contrast, studies in developing countries generally show Blastocystis to be associated with symptoms. In the United States, a higher incidence of Blastocystis infection has been reported in California and West Coast states. Prevalence over Time: A 1989 study of the prevalence of Blastocystis in the United States found an infection rate of 2.6% in samples submitted from all 48 states. The study was part of the CDC's MMWR Report. A more recent study, in 2006, found an infection rate of 23% in samples submitted from all 48 states. However, the more recent study was performed by a private laboratory located in the Western US, and emphasized samples from Western states, which have previously been reported to have a higher infection rate. Research studies have suggested the following items are not risk factors for contracting Blastocystis infection: Consumption of municipal water near water plant (not a risk factor): One study showed that municipal water was free of Blastocystis, even when drawn from a polluted source. However, samples taken far away from the treatment plant showed cysts. The researchers suggested that aging pipes may permit intrusion of contaminated water into the distribution system. Human-to-Human transmission among adults (not a risk factor): Some research suggests that direct human-to-human transmission is less common even in households and between married partners. One study showed different members of the same household carried different subtypes of Blastocystis. Pathogeneses Pathogenesis refers to the mechanism by which an organism causes disease. The following disease-causing mechanisms have been reported in studies of Blastocystis infection: Barrier disruption: In isolates from Blastocystis sp. subtype 4, study has demonstrated that Blastocystis has the ability to alter the arrangement of F-actin in intestinal epithelial cells. Actin filaments are important in stabilizing tight junctions; they in turn stabilize the barrier, which is a layer of cells, between the intestinal epithelial cells and the intestinal content. The parasite causes the actin filaments to rearrange, and so compromising barrier function. This has been suggested to contribute to the diarrheal symptoms sometimes observed in Blastocystis patients. Invasiveness: Invasive infection has been reported in humans and animal studies. Immune modulation: Blastocystis has been shown to provoke cells from the human colon to produce inflammatory cytokines interleukin-8 and GM-CSF. Interleukin-8 plays a role in rheumatoid arthritis. Protease secretion: Blastocystis secretes a protease that breaks up antibodies produced and secreted into the gastrointestinal tract lumen. 
These antibodies, known as immunoglobulin A (IgA), make up the immune defense system of human by preventing the growth of harmful microorganisms in the body and by neutralizing toxins secreted by these microorganisms. By breaking up the antibodies, it allows the persistence of Blastocystis in the human gut. Another more recent study has also shown and proposed that, in response to the proteases secreted by Blastocystis, the intestinal host cells would signal a series of events to be carried out, eventually leading to the self-destruction of the host cells – a phenomenon known as apoptosis. Other secretory mechanism: A study of a different protozoan which produces similar symptoms, Entamoeba histolytica, found that organism secretes several neurologically active chemicals, such as serotonin and Substance P. Serum levels of serotonin have been found to be elevated in patients with Entamoeba histolytica. Diagnosis Clinically available Diagnosis is performed by determining if the infection is present, and then making a decision as to whether the infection is responsible for the symptoms. Diagnostic methods in clinical use have been reported to be of poor quality and more reliable methods have been reported in research papers. For identification of infection, the only method clinically available in most areas is the ova and parasite (O&P) exam, which identifies the presence of the organism by microscopic examination of a chemically preserved stool specimen. This method is sometimes called direct microscopy. In the United States, pathologists are required to report the presence of Blastocystis when found during an O&P exam, so a special test does not have to be ordered. Direct microscopy is inexpensive, as the same test can identify a variety of gastrointestinal infections, such as Giardia, Entamoeba histolytica, and Cryptosporidium. However, one laboratory director noted that pathologists using conventional microscopes failed to identify many Blastocystis infections, and indicated the necessity for special microscopic equipment for identification. The following table shows the sensitivity of Direct Microscopy in detecting Blastocystis when compared to stool culture, a more sensitive technique. Stool culture was considered by some researchers to be the most reliable technique, but a recent study found stool culture only detected 83% of individuals infected when compared to polymerase chain reaction (PCR) testing. Reasons given for the failure of Direct Microscopy include: (1) Variable Shedding: The quantity of Blastocystis organisms varies substantially from day to day in infected humans and animals; (2) Appearance: Some forms of Blastocystis resemble fat cells or white blood cells, making it difficult to distinguish the organism from other cells in the stool sample; (3) Large number of morphological forms: Blastocystis cells can assume a variety of shapes, some have been described in detail only recently, so it is possible that additional forms exist but have not been identified. Several methods have been cited in literature for determination of the significance of the finding of Blastocystis: Diagnosis only when large numbers of organism present: Some physicians consider Blastocystis infection to be a cause of illness only when large numbers are found in stool samples. Researchers have questioned this approach, noting that it is not used with any other protozoal infections, such as Giardia or Entamoeba histolytica. 
Some researchers have reported no correlation between number of organisms present in stool samples and the level of symptoms. A study using polymerase chain reaction testing of stool samples suggested that symptomatic infection can exist even when sufficient quantities of the organism do not exist for identification through Direct Microscopy. Diagnosis-by-exclusion: Some physicians diagnose Blastocystis infection by excluding all other causes, such as infection with other organisms, food intolerances, colon cancer, etc. This method can be time-consuming and expensive, requiring many tests such as endoscopy and colonoscopy. Disregarding Blastocystis : In the early to mid-1990s, some US physicians suggested all findings of Blastocystis are insignificant. No recent publications expressing this opinion could be found. Not clinically available The following diagnostic methods are not routinely available to patients. Researchers have reported that they are more reliable at detecting infection, and in some cases can provide the physician with information to help determine whether Blastocystis infection is the cause of the patient's symptoms: Serum antibody testing: A 1993 research study performed by the NIH with United States patients suggested that it was possible to distinguish symptomatic and asymptomatic infection with Blastocystis using serum antibody testing. The study used blood samples to measure the patient's immune reaction to chemicals present on the surface of the Blastocystis cell. It found that patients diagnosed with symptomatic Blastocystis infection exhibited a much higher immune response than controls who had Blastocystis infection but no symptoms. The study was repeated in 2003 at Ain Shams University in Egypt with Egyptian patients with equivalent results. Fecal antibody testing: A 2003 study at Ain Shams University in Egypt indicated that patients symptomatically infected could be distinguished with a fecal antibody test. The study compared patients diagnosed with symptomatic Blastocystis infection to controls who had Blastocystis infection but no symptoms. In the group with symptoms, IgA antibodies to Blastocystis were detected in fecal specimens that were not present in the healthy control group. Stool culture: Culturing has been shown to be a more reliable method of identifying infection. In 2006, researchers reported the ability to distinguish between disease causing and non-disease causing isolates of Blastocystis using stool culture. Blastocystis cultured from patients who were sick and diagnosed with Blastocystis infection produced large, highly adhesive amoeboid forms in culture. These cells were absent in Blastocystis cultures from healthy controls. Subsequent genetic analysis showed the Blastocystis from healthy controls was genetically distinct from that found in patients with symptoms. Protozoal culture is unavailable in most countries due to the cost and lack of trained staff able to perform protozoal culture. Genetic analysis of isolates: Researchers have used techniques which allow the DNA of Blastocystis to be isolated from fecal specimens. This method has been reported to be more reliable at detecting Blastocystis in symptomatic patients than stool culture. This method also allows the species group of Blastocystis to be identified. Research is continuing into which species groups are associated with symptomatic (see Genetics and Symptoms) blastocystosis. 
Immuno-fluorescence (IFA) stain: An IFA stain causes Blastocystis cells to glow when viewed under a microscope, making the diagnostic method more reliable. IFA stains are in use for Giardia and Cryptosporidium for both diagnostic purposes and water quality testing. A 1991 paper from the NIH described the laboratory development of one such stain. However, no company currently offers this stain commercially. Classification Reports conflict regarding whether Blastocystis causes disease in humans. These reports resulted in a brief debate in medical journals in the early 1990s between some physicians in the United States who believed that Blastocystis was harmless, and physicians in the United States and overseas who believed it could cause disease. At the time, it was common practice to identify all Blastocystis from humans as Blastocystis hominis, while Blastocystis from animals was identified differently (e.g. Blastocystis ratti from rats). Research performed since then has shown that the concept of Blastocystis hominis as a unique species of Blastocystis infecting humans is not supported by microbiological findings. Although one species group associated with primates was found, it was also discovered that humans can acquire infection from any one of nine species groups of Blastocystis which are also carried by cattle, pigs, rodents, chickens, pheasants, monkeys, dogs, and other animals. Research has suggested that some types produce few or no symptoms, while others produce illness and intestinal inflammation. Researchers have suggested conflicting reports may be due to the practice of naming all Blastocystis from humans Blastocystis hominis and have proposed discontinuing the use of that term. A standard naming system for Blastocystis organisms from humans and animals has been proposed which names Blastocystis isolates according to the genetic identity of the Blastocystis organism rather than the host. The naming system used identifies all isolates as Blastocystis sp. subtype nn where nn is a number from 1 to 9 indicating the species group of the Blastocystis organism. The identification of the species can not be performed with a microscope at this time, because the different species look alike. Identification requires equipment for genetic analysis that is common in microbiology laboratories, but not available to most physicians. Some new scientific papers have begun using the standard naming system. Treatment There is a lack of scientific study to support the efficacy of any particular treatment. An additional review published in 2009 made a similar conclusion, noting that because the diagnostics in use have been unreliable, it has been impossible to determine whether a drug has eradicated the infection, or just made the patient feel better. Historical reports, such as one from 1916, note difficulty associated with eradication of Blastocystis from patients, describing it as "an infection that is hard to get rid of." A 1999 in vitro study from Pakistan found 40% of isolates are resistant to common antiprotozoal drugs. A study of isolates from patients diagnosed with IBS found 40% of isolates resistant to metronidazole and 32% resistant to furazolidone. Drugs reported in studies to be effective in eradicating Blastocystis infection have included metronidazole, trimethoprim, TMP-SMX (only trimethoprim is active with sulphamethoxazole demonstrating no activity), tetracycline, doxycycline, nitazoxanide, pentamidine, paromomycin and iodoquinol. 
Iodoquinol has been found to be less effective in practice than in-vitro. Miconazole and quinacrine have been reported as effective agents against Blastocystis growth in-vitro. Rifaximin, and albendazole have shown promise as has ivermectin which demonstrated high effectiveness against blastocystis hominis isolates in an in vitro study. There is also evidence that the probiotic yeast Saccharomyces boulardii, and the plant mallotus oppositifolius may be effective against Blastocystis infections. Physicians have described the successful use of a variety of discontinued antiprotozoals in treatment of Blastocystis infection. Emetine was reported as successful in cases in early 20th century with British soldiers who contracted Blastocystis infection while serving in Egypt. In vitro testing showed emetine was more effective than metronidazole or furazolidone. Emetine is available in the United States through special arrangement with the Center for Disease Control. Clioquinol (Entero-vioform) was noted as successful in treatment of Blastocystis infection but removed from the market following an adverse event in Japan. Stovarsol and Narsenol, two arsenic-based antiprotozoals, were reported to be effective against the infection. Carbarsone was available as an anti-infective compound in the United States as late as 1991, and was suggested as a possible treatment. The reduction in the availability of antiprotozoal drugs has been noted as a complicating factor in treatment of other protozoal infections. For example, in Australia, production of diloxanide furoate ended in 2003, paromomycin is available under special access provisions, and the availability of iodoquinol is limited. Epidemiology Like other protozoal infections, the prevalence of Blastocystis infection varies depending on the area investigated and the population selected. A number of different species groups of Blastocystis infect humans, with some being reported to cause disease while others do not. To date, surveys have not distinguished between different types of Blastocystis in humans, so the significance of findings may be difficult to evaluate. Developing countries have been reported to have higher incidences, but recent studies suggest that symptomatic infection with Blastocystis may be prevalent in certain industrialized countries, as well. A study on parasites in stool samples in the United States during 2000 found blastocystosis to be the most common parasitic infection in the population, occurring in 23% of individuals. A Canadian study of samples received in 2005 identified Blastocystis as the most prevalent protozoal infection identified. A study in Pakistan identified Blastocystis infection in 7% of the general population and 46% of patients with irritable bowel syndrome. The study used stool culture for identification. A 2014 study of samples from 93 children from the Senegal River basin found that 100% of the population was infected with Blastocystis. Other animals Experimental infection in immunocompetent and immunocompromised mice has produced intestinal inflammation, altered bowel habits, lethargy, and death. Chronic diarrhea has been reported in non-human higher primates. Research While many enteric protists are the subject of research, Blastocystis is unusual in that basic questions concerning how it should be diagnosed and treated and how it causes disease remain unsettled. 
The following groups have ongoing research programs directed at these questions: See also Blastocystis List of parasites (human) History of emerging infectious diseases References External links CDC description of Blastocystis hominis Badbugs.org: Dientamoeba fragilis and Blastocystis hominis resources Protozoal diseases Waterborne diseases Conditions diagnosed by stool test Abdominal pain
End organ damage
End organ damage is severe impairment of major body organs due to systemic disease. The term is most commonly used in the context of diabetes, high blood pressure, and states of low blood pressure or low blood volume. It can present as a heart attack or heart failure, pulmonary edema, neurologic deficits including stroke, or acute kidney failure.
Pathophysiology
End organ damage typically occurs when systemic disease causes cell death in most or all organs.
Hypertensive
When blood pressures are critically high (>180/120 mm Hg) or the rate of rise in blood pressure is rapid, a large volume of blood circulating in a small space creates turbulence and can damage the inner lining of blood vessels. The body's repair systems are activated by this damage, and circulating blood components, such as platelets, work on repair. The deposition of platelets can clutter the vessel space and impair the body's natural ability to produce nitric oxide, which would otherwise dilate blood vessels and help lower blood pressure. When high pressure pushes on the walls of narrowed blood vessels, fluid leaves the inside of the vessels and moves to the space just outside them. This impairs necessary blood flow and cuts off circulating oxygen, which can lead to tissue death and permanent damage to the brain, heart, arteries, and kidneys. This may occur as a result of chronic or poorly controlled hypertension, illicit drug use, or as a complication of pregnancy. Recent studies have shown that activation of the immune system may also be closely involved in the development of end organ damage in hypertensive states.
Shock
Shock occurs when the body does not have adequate circulation to provide oxygen to body tissues. Hypovolemic shock occurs due to a low circulating volume of fluid in the blood vessels. Distributive shock, which can occur due to anaphylaxis or sepsis, results in widespread dilation of blood vessels in the body, resulting in lower blood pressure. In cases of extremely low circulating volume or an inability to maintain an adequate blood pressure, body tissues do not receive enough oxygen and nutrients. When tissues lack oxygen and adequate circulation, organs can fail.
Diabetes
In diabetes, the dysregulation of insulin and blood glucose levels damages end organ cells, and as the body compensates by regulating fluid volume to adjust glucose concentration, it also incurs collateral damage to organs. Microvascular and macrovascular complications include nephropathy, retinopathy, neuropathy, and ASCVD events. In diabetic neuropathy, glucose promotes oxidative stress leading to nerve damage. Chronically high insulin levels are also associated with early development of atherosclerotic plaques inside blood vessel walls.
Clinical presentation
Hypertensive
Important definitions:
Hypertensive crisis - blood pressure >180/120 mm Hg with or without signs of end organ damage
Hypertensive urgency - blood pressure >180/120 mm Hg without signs of end organ damage
Hypertensive emergency - blood pressure >180/120 mm Hg with signs of end organ damage
Presentation:
Altered mental status
Shortness of breath
Chest pain
Lower extremity swelling
New heart murmur
Unequal blood pressures - may be a sign of aortic dissection
Headache or dizziness
Neurologic deficits - may be due to a stroke or transient ischemic attack
Vision changes
Shock
Important definitions:
Systemic inflammatory response syndrome (SIRS) is present when any two of the following criteria are met:
Body temperature >38 or <36 degrees Celsius
Heart rate >90 beats per minute
Respiratory rate >20 breaths per minute or partial pressure of CO2 <32 mm Hg
White blood cell count >12,000 or <4,000 per microliter, or >10% immature forms or bands
The qSOFA score helps predict organ dysfunction outside of the intensive care unit by assessing 3 components:
Systolic blood pressure <100 mm Hg
Maximum respiratory rate >21 breaths per minute
Decreased Glasgow coma score (<15)
Presentation:
Altered mental status - person may not be oriented to person, place, or time
Delayed capillary refill - skin may be pale or mottled, limbs may be cool
Little or no urine output
Absent bowel sounds
Diabetes
End organ damage can occur at any diagnostic stage of diabetes, including pre-diabetes.
Presentation:
Urinating often
Feeling very thirsty
Feeling very hungry—even though you are eating
Extreme fatigue
Blurry vision
Cuts/bruises that are slow to heal
Weight loss—even though you are eating more (Type 1 diabetes)
Tingling, pain, or numbness in the hands/feet (Type 2 diabetes)
Evaluation and work-up
Physical examination:
Heart - evaluate for new-onset heart failure (leg swelling, new murmur)
Lungs - fluid overload or infection can cause shortness of breath
Neurologic - a detailed neurologic exam should be performed to evaluate for stroke and peripheral vascular disease
Fundoscopy - exam of the eye that can show signs of hypertension, including papilledema and retinal hemorrhages
Labs:
Complete blood count - check for low red blood cell count or elevated white blood cell count
Basic metabolic panel - evaluate kidney function with creatinine and blood urea nitrogen
Urinalysis - may show excess protein (hypertensive) or bacteria or white blood cells in the urine (infection)
Urine drug screen - illicit drugs like cocaine and PCP can increase blood pressure rapidly
Cardiac enzymes - elevated troponin and brain natriuretic peptide may indicate stress on the heart
Pregnancy test - pre-eclampsia in pregnancy can cause dangerously high blood pressure
Lactate - rising lactate in the blood indicates that areas of the body are not getting enough oxygen
Cultures - blood cultures and source-specific cultures (urine, sputum, etc.) should be collected when septic shock is suspected in order to identify the source and target treatment
Fasting blood glucose, A1C, oral glucose tolerance test - for diabetes diagnosis
Imaging:
Chest X-ray - may show signs of infection, fluid build-up, or an enlarged heart
Electrocardiogram - check for heart dysfunction
Echocardiogram - may show signs of left ventricular muscle thickening due to heart failure
CT head - may show signs of stroke
CT angiogram - evaluate for signs of aortic dissection
OCT - evaluate for signs of diabetic retinopathy
Management
Hypertensive
When there is concern for the presence or development of end organ damage, blood pressure should be lowered emergently with intravenous antihypertensive medications. Patients should be admitted to the hospital to be closely monitored for complications of end organ damage, notably strokes. Blood pressure should be lowered by a maximum of 10% over the first hour and 25% over the first two hours, as rapid lowering of blood pressure can decrease blood flow in the brain and cause an ischemic stroke. Once blood pressure is stabilized, patients can be changed from intravenous medications to oral ones. For patients with long-standing hypertension, patient education on the importance of consistently taking prescribed medications and keeping blood pressure well controlled is critical. Additionally, future treatments may focus not only on blood pressure control but also on the reduction of local inflammation that can lead to end organ damage.
In pregnant patients in whom there is concern for pre-eclampsia, magnesium sulfate should be given and the patient admitted. Urine output, breathing, and reflexes should be monitored closely for the development of worsening kidney function and magnesium toxicity. Systolic blood pressure should be treated with antihypertensive medications only if it is higher than 160 mm Hg.
Shock
When a patient is in shock, the development of end organ damage is typically due to a circulating blood volume or blood pressure that is not high enough to maintain oxygen and nutrient supply to vital organs. Initial treatment is focused on stabilizing the patient. Fluids are given to increase circulating blood volume. Vasopressors, medications that constrict blood vessels, can also be given in order to maintain a higher blood pressure and help vital organs receive enough oxygen and nutrients. High-dose steroids, like hydrocortisone, may also help maintain blood pressure. Close monitoring in the critical care unit is often necessary to measure blood pressures. The next step in treating end organ damage due to septic shock is to identify the source of the infection and treat it. Broad-spectrum antibiotics can be started to cover many potential bacteria before cultures grow the specific organism causing the infection. Once cultures identify the culprit, antibiotic therapy can be changed so that it covers only what needs to be treated. Treatment of the source of infection should resolve the low blood pressures that compromise vital organ function. Complications, including acute respiratory distress syndrome, acute kidney injury, and electrolyte abnormalities, can be treated proactively and managed on an individual basis.
Diabetes
Lifelong treatment and monitoring are often necessary for glucose control. Glucose levels should be maintained at 90 to 130 mg/dL and HbA1c at less than 7%.
Medical treatment includes the use of insulin and/or other medications to control glucose levels. Monitoring for complications of end organ damage is recommended in guidelines issued by different regional medical bodies.
References
Medical terminology
Copper in biology
Copper is an essential trace element that is vital to the health of all living things (plants, animals and microorganisms). In humans, copper is essential to the proper functioning of organs and metabolic processes. Also, in humans, copper helps maintain the nervous system, immune system, brain development, and activates genes, as well as assisting in the production of connective tissues, blood vessels, and energy. The human body has complex homeostatic mechanisms which attempt to ensure a constant supply of available copper, while eliminating excess copper whenever this occurs. However, like all essential elements and nutrients, too much or too little nutritional ingestion of copper can result in a corresponding condition of copper excess or deficiency in the body, each of which has its own unique set of adverse health effects. Daily dietary standards for copper have been set by various health agencies around the world. Standards adopted by some nations recommend different copper intake levels for adults, pregnant women, infants, and children, corresponding to the varying need for copper during different stages of life. Biochemistry Copper proteins have diverse roles in biological electron transport and oxygen transportation, processes that exploit the easy interconversion of Cu(I) and Cu(II). Copper is essential in the aerobic respiration of all eukaryotes. In mitochondria, it is found in cytochrome c oxidase, which is the last protein in oxidative phosphorylation. Cytochrome c oxidase is the protein that binds the O2 between a copper and an iron; the protein transfers 4 electrons to the O2 molecule to reduce it to two molecules of water. Copper is also found in many superoxide dismutases, proteins that catalyze the decomposition of superoxides by converting it (by disproportionation) to oxygen or hydrogen peroxide: Cu+-SOD + O2− + 2H+ → Cu2+-SOD + H2O2 (oxidation of copper; reduction of superoxide) Cu2+-SOD + O2− → Cu+-SOD + O2 (reduction of copper; oxidation of superoxide) The protein hemocyanin is the oxygen carrier in most mollusks and some arthropods such as the horseshoe crab (Limulus polyphemus). Because hemocyanin is blue, these organisms have blue blood rather than the red blood of iron-based hemoglobin. Structurally related to hemocyanin are the laccases and tyrosinases. Instead of reversibly binding oxygen, these proteins hydroxylate substrates, illustrated by their role in the formation of lacquers. The biological role for copper commenced with the appearance of oxygen in Earth's atmosphere. Several copper proteins, such as the "blue copper proteins", do not interact directly with substrates; hence they are not enzymes. These proteins relay electrons by the process called electron transfer. A unique tetranuclear copper center has been found in nitrous-oxide reductase. Chemical compounds which were developed for treatment of Wilson's disease have been investigated for use in cancer therapy. Optimal copper levels Copper deficiency and toxicity can be either of genetic or non-genetic origin. The study of copper's genetic diseases, which are the focus of intense international research activity, has shed insight into how human bodies use copper, and why it is important as an essential micronutrient. The studies have also resulted in successful treatments for genetic copper excess conditions, empowering patients whose lives were once jeopardized. 
Researchers specializing in the fields of microbiology, toxicology, nutrition, and health risk assessments are working together to define the precise copper levels that are required for essentiality, while avoiding deficient or excess copper intakes. Results from these studies are expected to be used to fine-tune governmental dietary recommendation programs which are designed to help protect public health. Essentiality Copper is an essential trace element (i.e., micronutrient) that is required for plant, animal, and human health. It is also required for the normal functioning of aerobic (oxygen-requiring) microorganisms. Copper's essentiality was first discovered in 1928, when it was demonstrated that rats fed a copper-deficient milk diet were unable to produce sufficient red blood cells. The anemia was corrected by the addition of copper-containing ash from vegetable or animal sources. Fetuses, infants, and children Human milk is relatively low in copper, and the neonate's liver stores fall rapidly after birth, supplying copper to the fast-growing body during the breast feeding period. These supplies are necessary to carry out such metabolic functions as cellular respiration, melanin pigment and connective tissue synthesis, iron metabolism, free radical defense, gene expression, and the normal functioning of the heart and immune systems in infants. Since copper availability in the body is hindered by an excess of iron and zinc intake, pregnant women prescribed iron supplements to treat anemia or zinc supplements to treat colds should consult physicians to be sure that the prenatal supplements they may be taking also have nutritionally-significant amounts of copper. When newborn babies are breastfed, the babies' livers and the mothers' breast milk provide sufficient quantities of copper for the first 4–6 months of life. When babies are weaned, a balanced diet should provide adequate sources of copper. Cow's milk and some older infant formulas are depleted in copper. Most formulas are now fortified with copper to prevent depletion. Most well-nourished children have adequate intakes of copper. Health-compromised children, including those who are premature, malnourished, have low birth weights, develop infections, and who experience rapid catch-up growth spurts, are at elevated risk for copper deficiencies. Fortunately, diagnosis of copper deficiency in children is clear and reliable once the condition is suspected. Supplements under a physician's supervision usually facilitate a full recovery. Homeostasis Copper is absorbed, transported, distributed, stored, and excreted in the body according to complex homeostatic processes which ensure a constant and sufficient supply of the micronutrient while simultaneously avoiding excess levels. If an insufficient amount of copper is ingested for a short period of time, copper stores in the liver will be depleted. Should this depletion continue, a copper health deficiency condition may develop. If too much copper is ingested, an excess condition can result. Both of these conditions, deficiency and excess, can lead to tissue injury and disease. However, due to homeostatic regulation, the human body is capable of balancing a wide range of copper intakes for the needs of healthy individuals. Many aspects of copper homeostasis are known at the molecular level. Copper's essentiality is due to its ability to act as an electron donor or acceptor as its oxidation state fluxes between Cu1+(cuprous) and Cu2+ (cupric). 
As a component of about a dozen cuproenzymes, copper is involved in key redox (i.e., oxidation-reduction) reactions in essential metabolic processes such as mitochondrial respiration, synthesis of melanin, and cross-linking of collagen. Copper is an integral part of the antioxidant enzyme copper-zinc superoxide dismutase, and has a role in iron homeostasis as a cofactor in ceruloplasmin. The transport and metabolism of copper in living organisms is currently the subject of much active research. Copper transport at the cellular level involves the movement of extracellular copper across the cell membrane and into the cell by specialized transporters. In the bloodstream, copper is carried throughout the body by albumin, ceruloplasmin, and other proteins. The majority of blood copper (or serum copper) is bound to ceruloplasmin. The proportion of ceruloplasmin-bound copper can range from 70 to 95% and differs between individuals, depending, for example, on hormonal cycle, season, and copper status. Intracellular copper is routed to sites of synthesis of copper-requiring enzymes and to organelles by specialized proteins called metallochaperones. Another set of these transporters carries copper into subcellular compartments. Certain mechanisms exist to release copper from the cell. Specialized transporters return excess unstored copper to the liver for additional storage and/or biliary excretion. These mechanisms ensure that free unbound toxic ionic copper is unlikely to exist in the majority of the population (i.e., those without genetic copper metabolism defects).
Absorption
In mammals copper is absorbed in the stomach and small intestine, although there appear to be differences among species with respect to the site of maximal absorption. Copper is absorbed from the stomach and duodenum in rats and from the lower small intestine in hamsters. The site of maximal copper absorption is not known for humans, but is assumed to be the stomach and upper intestine because of the rapid appearance of Cu in the plasma after oral administration. Absorption of copper ranges from 15 to 97%, depending on copper content, form of the copper, and composition of the diet. Various factors influence copper absorption. For example, copper absorption is enhanced by ingestion of animal protein, citrate, and phosphate. Copper salts, including copper gluconate, copper acetate, and copper sulfate, are easily absorbed; copper oxide, by contrast, is not well absorbed. Elevated levels of dietary zinc and cadmium, as well as high intakes of phytate and simple sugars (fructose, sucrose), inhibit dietary absorption of copper. Furthermore, low levels of dietary copper appear to inhibit iron absorption. Some forms of copper are not soluble in stomach acids and cannot be absorbed from the stomach or small intestine. Also, some foods may contain indigestible fiber that binds with copper. High intakes of zinc can significantly decrease copper absorption. Extreme intakes of vitamin C or iron can also affect copper absorption, a reminder that micronutrients need to be consumed as a balanced mixture. This is one reason why extreme intakes of any one single micronutrient are not advised. Individuals with chronic digestive problems may be unable to absorb sufficient amounts of copper, even though the foods they eat are copper-rich. Several copper transporters have been identified that can move copper across cell membranes.
Other intestinal copper transporters may exist. Intestinal copper uptake may be catalyzed by Ctr1. Ctr1 is expressed in all cell types so far investigated, including enterocytes, and it catalyzes the transport of Cu+1 across the cell membrane. Excess copper (as well as other heavy metal ions like zinc or cadmium) may be bound by metallothionein and sequestered within intracellular vesicles of enterocytes (i.e., the predominant cells in the small intestinal mucosa).
Distribution
Copper released from intestinal cells moves to the serosal (i.e., thin membrane lining) capillaries, where it binds to albumin, glutathione, and amino acids in the portal blood. There is also evidence for a small protein, transcuprein, with a specific role in plasma copper transport. Several or all of these copper-binding molecules may participate in serum copper transport. Copper from portal circulation is primarily taken up by the liver. Once in the liver, copper is incorporated into copper-requiring proteins, which are subsequently secreted into the blood. Most of the copper (70–95%) excreted by the liver is incorporated into ceruloplasmin, the main copper carrier in blood. Copper is transported to extra-hepatic tissues by ceruloplasmin, albumin and amino acids, or excreted into the bile. By regulating copper release, the liver exerts homeostatic control over extra-hepatic copper.
Excretion
Bile is the major pathway for the excretion of copper and is vitally important in the control of liver copper levels. Most fecal copper results from biliary excretion; the remainder is derived from unabsorbed copper and copper from desquamated mucosal cells.
Dietary recommendations
Various national and international organizations concerned with nutrition and health have standards for copper intake at levels judged to be adequate for maintaining good health. These standards are periodically changed and updated as new scientific data become available. The standards sometimes differ among countries and organizations.
Adults
The World Health Organization recommends a minimal acceptable intake of approximately 1.3 mg/day. These values are considered to be adequate and safe for most of the general population. In North America, the U.S. Institute of Medicine (IOM) set the Recommended Dietary Allowance (RDA) for copper for healthy adult men and women at 0.9 mg/day. As for safety, the IOM also sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of copper, the UL is set at 10 mg/day. The European Food Safety Authority reviewed the same safety question and set its UL at 5 mg/day.
Adolescents, children, and infants
Full-term and premature infants are more sensitive to copper deficiency than adults. Since the fetus accumulates copper during the last 3 months of pregnancy, infants that are born prematurely have not had sufficient time to store adequate reserves of copper in their livers and therefore require more copper at birth than full-term infants. For full-term infants, the North American recommended safe and adequate intake is approximately 0.2 mg/day. For premature babies, it is considerably higher: 1 mg/day. The World Health Organization has recommended similar minimum adequate intakes and advises that premature infants be given formula supplemented with extra copper to prevent the development of copper deficiency.
Pregnant and lactating women
In North America, the IOM has set the RDA for pregnancy at 1.0 mg/day and for lactation at 1.3 mg/day. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA; the PRI is 1.6 mg/day for both pregnancy and lactation – higher than the U.S. RDAs.
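To make the North American reference values quoted above easier to compare, the sketch below encodes them as a small lookup table with a range check. This is a minimal illustration assuming only the IOM figures given in this section; the group labels, function name, and classification strings are hypothetical, and dietary assessment in practice involves far more than a threshold check.

```python
# Illustrative sketch: the IOM copper reference values quoted above (mg/day).
# Group labels and the helper function are hypothetical, not an official API.

IOM_COPPER_RDA_MG = {
    "adult": 0.9,
    "pregnancy": 1.0,
    "lactation": 1.3,
}
IOM_ADULT_UL_MG = 10.0  # EFSA sets its UL lower, at 5 mg/day, per the text above.

def assess_intake(group, intake_mg):
    """Classify a daily copper intake against the quoted RDA and adult UL."""
    if intake_mg < IOM_COPPER_RDA_MG[group]:
        return "below RDA"
    if intake_mg > IOM_ADULT_UL_MG:
        return "above tolerable upper intake level"
    return "within the quoted range"

print(assess_intake("adult", 1.1))   # -> within the quoted range
print(assess_intake("adult", 12.0))  # -> above tolerable upper intake level
```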
Food sources
Foods contribute virtually all of the copper consumed by humans. In both developed and developing countries, adults, young children, and adolescents who consume diets of grain, millet, tuber, or rice along with legumes (beans) or small amounts of fish or meat, some fruits and vegetables, and some vegetable oil are likely to obtain adequate copper if their total food consumption is adequate in calories. In developed countries where consumption of red meat is high, copper intake may also be adequate. As a natural element in the Earth's crust, copper exists in most of the world's surface water and groundwater, although the actual concentration of copper in natural waters varies geographically. Drinking water can comprise 20–25% of dietary copper. In many regions of the world, copper tubing that conveys drinking water can be a source of dietary copper. Copper tube can leach a small amount of copper, particularly in its first year or two of service. Afterwards, a protective surface usually forms on the inside of copper tubes that slows leaching. In France and some other countries, copper bowls are traditionally used for whipping egg white, as the copper helps stabilise bonds in the white as it is beaten and whipped. Small amounts of copper may leach from the bowl during the process and enter the egg white.
Supplementation
Copper supplements can prevent copper deficiency. Copper supplements are not prescription medicines and are available at vitamin and herb stores, grocery stores, and online retailers. Different forms of copper supplementation have different absorption rates. For example, the absorption of copper from cupric oxide supplements is lower than that from copper gluconate, copper sulfate, or carbonate. Supplementation is generally not recommended for healthy adults who consume a well-balanced diet which includes a wide range of foods. However, supplementation under the care of a physician may be necessary for premature infants or those with low birth weights, infants fed unfortified formula or cow's milk during the first year of life, and malnourished young children. Physicians may consider copper supplementation for 1) illnesses that reduce digestion (e.g., children with frequent diarrhea or infections; alcoholics), 2) insufficient food consumption (e.g., the elderly, the infirm, those with eating disorders or on diets), 3) patients taking medications that block the body's use of copper, 4) anemia patients who are treated with iron supplements, 5) anyone taking zinc supplements, and 6) those with osteoporosis. Many popular vitamin supplements include copper as small inorganic molecules such as cupric oxide. These supplements can result in excess free copper in the brain, as this copper can cross the blood-brain barrier directly. Normally, organic copper in food is first processed by the liver, which keeps free copper levels under control.
Copper deficiency and excess health conditions (non-genetic)
If insufficient quantities of copper are ingested, copper reserves in the liver will become depleted and a copper deficiency can develop, leading to disease or tissue injury (and, in extreme cases, death). Copper deficiency can be treated with a balanced diet or supplementation under the supervision of a doctor.
Conversely, like all substances, excess copper intake at levels far above World Health Organization limits can become toxic. Acute copper toxicity is generally associated with accidental ingestion; its symptoms abate when the high-copper food source is no longer ingested. In 1996, the International Program on Chemical Safety, a World Health Organization-associated agency, stated "there is greater risk of health effects from deficiency of copper intake than from excess copper intake". This conclusion was confirmed in recent multi-route exposure surveys. The health conditions of non-genetic copper deficiency and copper excess are described below.
Copper deficiency
There are conflicting reports on the extent of deficiency in the U.S. One review indicates that approximately 25% of adolescents, adults, and people over 65 do not meet the Recommended Dietary Allowance for copper. Another source suggests deficiency is less common: a federal survey of food consumption determined that, for women and men over the age of 19, average consumption from foods and beverages was 1.11 and 1.54 mg/day, respectively. For women, 10% consumed less than the Estimated Average Requirement; for men, fewer than 3%. Acquired copper deficiency has recently been implicated in adult-onset progressive myeloneuropathy and in the development of severe blood disorders including myelodysplastic syndrome. Fortunately, copper deficiency can be confirmed by very low serum metal and ceruloplasmin concentrations in the blood. Other conditions linked to copper deficiency include osteoporosis, osteoarthritis, rheumatoid arthritis, cardiovascular disease, colon cancer, and chronic conditions involving the bone, connective tissue, heart and blood vessels, nervous system, and immune system. Copper deficiency alters the role of other cellular constituents involved in antioxidant activities, such as iron, selenium, and glutathione, and therefore plays an important role in diseases in which oxidant stress is elevated. A marginal, i.e., 'mild', copper deficiency, believed to be more widespread than previously thought, can impair human health in subtle ways. Populations susceptible to copper deficiency include those with genetic defects for Menkes disease, low-birth-weight infants, infants fed cow's milk instead of breast milk or fortified formula, pregnant and lactating mothers, patients receiving total parenteral nutrition, individuals with "malabsorption syndrome" (impaired dietary absorption), diabetics, individuals with chronic diseases that result in low food intake, such as alcoholics, and persons with eating disorders. The elderly and athletes may also be at higher risk for copper deficiency due to special needs that increase the daily requirements. Vegetarians may have decreased copper intake due to the consumption of plant foods in which copper bioavailability is low. On the other hand, Bo Lönnerdal commented that Gibson's study showed that vegetarian diets provided larger quantities of copper. Fetuses and infants of severely copper-deficient women have increased risk of low birth weights, muscle weaknesses, and neurological problems. Copper deficiencies in these populations may result in anemia, bone abnormalities, impaired growth, weight gain, frequent infections (colds, flu, pneumonia), poor motor coordination, and low energy.
Copper excess
Copper excess is a subject of much current research.
Distinctions have emerged from studies that copper excess factors are different in normal populations versus those with increased susceptibility to adverse effects and those with rare genetic diseases. This has led to statements from health organizations that could be confusing to the uninformed. For example, according to a U.S. Institute of Medicine report, the intake levels of copper for a significant percentage of the population are lower than recommended levels. On the other hand, the U.S. National Research Council concluded in its report Copper in Drinking Water that there is concern for copper toxicity in susceptible populations and recommended that additional research be conducted to identify and characterize copper-sensitive populations. Excess copper intake causes stomach upset, nausea, and diarrhea and can lead to tissue injury and disease. The oxidation potential of copper may be responsible for some of its toxicity in excess ingestion cases. At high concentrations copper is known to produce oxidative damage to biological systems, including peroxidation of lipids or other macromolecules. While the cause and progression of Alzheimer's disease are not well understood, research indicates that, among several other key observations, iron, aluminum, and copper accumulate in the brains of Alzheimer's patients. However, it is not yet known whether this accumulation is a cause or a consequence of the disease. Research has been ongoing over the past two decades to determine whether copper is a causative or a preventive agent of Alzheimer's disease. For example, as a possible causative agent or an expression of a metal homeostasis disturbance, studies indicate that copper may play a role in increasing the growth of protein clumps in Alzheimer's disease brains, possibly by damaging a molecule that removes the toxic buildup of amyloid beta (Aβ) in the brain. There is an association between a diet rich in copper and iron together with saturated fat and Alzheimer's disease. On the other hand, studies also demonstrate potential beneficial roles of copper in treating rather than causing Alzheimer's disease. For example, copper has been shown to 1) promote the non-amyloidogenic processing of amyloid beta precursor protein (APP), thereby lowering amyloid beta (Aβ) production in cell culture systems 2) increase lifetime and decrease soluble amyloid production in APP transgenic mice, and 3) lower Aβ levels in cerebral spinal fluid in Alzheimer's disease patients. Furthermore, long-term copper treatment (oral intake of 8 mg copper (Cu-(II)-orotate-dihydrate)) was excluded as a risk factor for Alzheimer's disease in a noted clinical trial on humans and a potentially beneficial role of copper in Alzheimer's disease has been demonstrated on cerebral spinal fluid levels of Aβ42, a toxic peptide and biomarker of the disease. More research is needed to understand metal homeostasis disturbances in Alzheimer's disease patients and how to address these disturbances therapeutically. Since this experiment used Cu-(II)-orotate-dihydrate, it does not relate to the effects of cupric oxide in supplements. Copper toxicity from excess exposures In humans, the liver is the primary organ of copper-induced toxicity. Other target organs include bone and the central nervous and immune systems. Excess copper intake also induces toxicity indirectly by interacting with other nutrients. For example, excess copper intake produces anemia by interfering with iron transport and/or metabolism. 
The identification of genetic disorders of copper metabolism leading to severe copper toxicity (i.e., Wilson disease) has spurred research into the molecular genetics and biology of copper homeostasis (for further information, refer to the following section on copper genetic diseases). Much attention has focused on the potential consequences of copper toxicity in normal and potentially susceptible populations. Potentially susceptible subpopulations include hemodialysis patients and individuals with chronic liver disease. Recently, concern was expressed about the potential sensitivity to liver disease of individuals who are heterozygote carriers of Wilson disease genetic defects (i.e., those having one normal and one mutated Wilson copper ATPase gene) but who do not have the disease (which requires defects in both relevant genes). However, to date, no data are available that either support or refute this hypothesis. Acute exposures In case reports of humans intentionally or accidentally ingesting high concentrations of copper salts (doses usually not known but reported to be 20–70 grams of copper), a progression of symptoms was observed including abdominal pain, headache, nausea, dizziness, vomiting and diarrhea, tachycardia, respiratory difficulty, hemolytic anemia, hematuria, massive gastrointestinal bleeding, liver and kidney failure, and death. Episodes of acute gastrointestinal upset following single or repeated ingestion of drinking water containing elevated levels of copper (generally above 3–6 mg/L) are characterized by nausea, vomiting, and stomach irritation. These symptoms resolve when copper in the drinking water source is reduced. Three experimental studies were conducted that demonstrate a threshold for acute gastrointestinal upset of approximately 4–5 mg/L in healthy adults, although it is not clear from these findings whether symptoms are due to acutely irritant effects of copper and/or to metallic, bitter, salty taste. In an experimental study with healthy adults, the average taste threshold for copper sulfate and chloride in tap water, deionized water, or mineral water was 2.5–3.5 mg/L. This is just below the experimental threshold for acute gastrointestinal upset. Chronic exposures The long-term toxicity of copper has not been well studied in humans, but it is infrequent in normal populations that do not have a hereditary defect in copper homeostasis. There is little evidence to indicate that chronic human exposure to copper results in systemic effects other than liver injury. Chronic copper poisoning leading to liver failure was reported in a young adult male with no known genetic susceptibility who consumed 30–60 mg/d of copper as a mineral supplement for 3 years. Individuals residing in U.S. households supplied with tap water containing >3 mg/L of copper exhibited no adverse health effects. No effects of copper supplementation on serum liver enzymes, biomarkers of oxidative stress, and other biochemical endpoints have been observed in healthy young human volunteers given daily doses of 6 to 10 mg/d of copper for up to 12 weeks. Infants aged 3–12 months who consumed water containing 2 mg Cu/L for 9 months did not differ from a concurrent control group in gastrointestinal tract (GIT) symptoms, growth rate, morbidity, serum liver enzyme and bilirubin levels, and other biochemical endpoints.) 
Serum ceruloplasmin was transiently elevated in the exposed infant group at 9 months and similar to controls at 12 months, suggesting homeostatic adaptation and/or maturation of the homeostatic response. Dermal exposure has not been associated with systemic toxicity but anecdotal reports of allergic responses may be a sensitization to nickel and cross-reaction with copper or a skin irritation from copper. Workers exposed to high air levels of copper (resulting in an estimated intake of 200 mg Cu/d) developed signs suggesting copper toxicity (e.g., elevated serum copper levels, hepatomegaly). However, other co-occurring exposures to pesticidal agents or in mining and smelting may contribute to these effects. Effects of copper inhalation are being thoroughly investigated by an industry-sponsored program on workplace air and worker safety. This multi-year research effort is expected to be finalized in 2011. Measurements of elevated copper status Although a number of indicators are useful in diagnosing copper deficiency, there are no reliable biomarkers of copper excess resulting from dietary intake. The most reliable indicator of excess copper status is liver copper concentration. However, measurement of this endpoint in humans is intrusive and not generally conducted except in cases of suspected copper poisoning. Increased serum copper or ceruolplasmin levels are not reliably associated with copper toxicity as elevations in concentrations can be induced by inflammation, infection, disease, malignancies, pregnancy, and other biological stressors. Levels of copper-containing enzymes, such as cytochrome c oxidase, superoxide dismutase, and diaminase oxidase, vary not only in response to copper state but also in response to a variety of other physiological and biochemical factors and therefore are inconsistent markers of excess copper status. A new candidate biomarker for copper excess as well as deficiency has emerged in recent years. This potential marker is a chaperone protein, which delivers copper to the antioxidant protein SOD1 (copper, zinc superoxide dismutase). It is called "copper chaperone for SOD1" (CCS), and excellent animal data supports its use as a marker in accessible cells (e.g., erythrocytes) for copper deficiency as well as excess. CCS is currently being tested as a biomarker in humans. Hereditary copper metabolic diseases Several rare genetic diseases (Wilson disease, Menkes disease, idiopathic copper toxicosis, Indian childhood cirrhosis) are associated with the improper use of copper in the body. All of these diseases involve mutations of genes containing the genetic codes for the production of specific proteins involved in the absorption and distribution of copper. When these proteins are dysfunctional, copper either builds up in the liver or the body fails to absorb copper. These diseases are inherited and cannot be acquired. Adjusting copper levels in the diet or drinking water will not cure these conditions (although therapies are available to manage symptoms of genetic copper excess disease). The study of genetic copper metabolism diseases and their associated proteins are enabling scientists to understand how human bodies use copper and why it is important as an essential micronutrient. The diseases arise from defects in two similar copper pumps, the Menkes and the Wilson Cu-ATPases. 
The Menkes ATPase is expressed in tissues like skin-building fibroblasts, kidneys, placenta, brain, gut and vascular system, while the Wilson ATPase is expressed mainly in the liver, but also in mammary glands and possibly in other specialized tissues. This knowledge is leading scientists towards possible cures for genetic copper diseases. Menkes disease Menkes disease, a genetic condition of copper deficiency, was first described by John Menkes in 1962. It is a rare X-linked disorder that affects approximately 1/200,000 live births, primarily boys. Livers of Menkes disease patients cannot absorb essential copper needed for patients to survive. Death usually occurs in early childhood: most affected individuals die before the age of 10 years, although several patients have survived into their teens and early 20s. The protein produced by the Menkes gene is responsible for transporting copper across the gastrointestinal tract (GIT) mucosa and the blood–brain barrier. Mutational defects in the gene encoding the copper ATPase cause copper to remain trapped in the lining of the small intestine. Hence, copper cannot be pumped out of the intestinal cells and into the blood for transport to the liver and consequently to rest of the body. The disease therefore resembles a severe nutritional copper deficiency despite adequate ingestion of copper. Symptoms of the disease include coarse, brittle, depigmented hair and other neonatal problems, including the inability to control body temperature, intellectual disability, skeletal defects, and abnormal connective tissue growth. Menkes patients exhibit severe neurological abnormalities, apparently due to the lack of several copper-dependent enzymes required for brain development, including reduced cytochrome c oxidase activity. The brittle, kinky hypopigmented hair of steely appearance is due to a deficiency in an unidentified cuproenzyme. Reduced lysyl oxidase activity results in defective collagen and elastin polymerization and corresponding connective-tissue abnormalities including aortic aneurisms, loose skin, and fragile bones. With early diagnosis and treatment consisting of daily injections of copper histidine intraperitoneally and intrathecally to the central nervous system, some of the severe neurological problems may be avoided and survival prolonged. However, Menkes disease patients retain abnormal bone and connective-tissue disorders and show mild to severe intellectual disability. Even with early diagnosis and treatment, Menkes disease is usually fatal. Ongoing research into Menkes disease is leading to a greater understanding of copper homeostasis, the biochemical mechanisms involved in the disease, and possible ways to treat it. Investigations into the transport of copper across the blood/brain barrier, which are based on studies of genetically altered mice, are designed to help researchers understand the root cause of copper deficiency in Menkes disease. The genetic makeup of transgenic mice is altered in ways that help researchers garner new perspectives about copper deficiency. The research to date has been valuable: genes can be turned off gradually to explore varying degrees of deficiency. Researchers have also demonstrated in test tubes that damaged DNA in the cells of a Menkes patient can be repaired. In time, the procedures needed to repair damaged genes in the human body may be found. 
Wilson's disease
Wilson's disease is a rare autosomal (chromosome 13) recessive genetic disorder of copper transport that causes an excess of copper to build up in the liver. This results in liver toxicity, among other symptoms. The disease is now treatable. Wilson's disease is produced by mutational defects of a protein that transports copper from the liver to the bile for excretion. The disease involves poor incorporation of copper into ceruloplasmin and impaired biliary copper excretion and is usually induced by mutations impairing the function of the Wilson copper ATPase. These genetic mutations produce copper toxicosis due to excess copper accumulation, predominantly in the liver and brain and, to a lesser extent, in the kidneys, eyes, and other organs. The disease, which affects about 1/30,000 infants of both genders, may become clinically evident at any time from infancy through early adulthood. The age of onset of Wilson's disease ranges from 3 to 50 years of age. Initial symptoms include hepatic, neurologic, or psychiatric disorders and, rarely, kidney, skeletal, or endocrine symptomatology. The disease progresses with deepening jaundice and the development of encephalopathy, severe clotting abnormalities, occasionally associated with intravascular coagulation, and advanced chronic kidney disease. A peculiar type of tremor in the upper extremities, slowness of movement, and changes in temperament become apparent. Kayser–Fleischer rings, a rusty brown discoloration at the outer rims of the iris due to copper deposition, noted in 90% of patients, become evident as copper begins to accumulate and affect the nervous system. Almost always, death occurs if the disease is untreated. Fortunately, identification of the mutations in the Wilson ATPase gene underlying most cases of Wilson's disease has made DNA testing for diagnosis possible. If diagnosed and treated early enough, patients with Wilson's disease may live long and productive lives. Wilson's disease is managed by copper chelation therapy with D-penicillamine (which picks up and binds copper and enables patients to excrete excess copper accumulated in the liver), therapy with zinc sulfate or zinc acetate, and restricted dietary metal intake, such as the elimination of chocolate, oysters, and mushrooms. Zinc therapy is now the treatment of choice. Zinc produces a mucosal block by inducing metallothionein, which binds copper in mucosal cells until they slough off and are eliminated in the feces, and it competes with copper for absorption in the intestine by DMT1 (divalent metal transporter 1). More recently, experimental treatments with tetrathiomolybdate have shown promising results. Tetrathiomolybdate appears to be an excellent form of initial treatment in patients who have neurologic symptoms. In contrast to penicillamine therapy, initial treatment with tetrathiomolybdate rarely allows further, often irreversible, neurologic deterioration. Over 100 different genetic defects leading to Wilson's disease have been described and are catalogued online. Some of the mutations show geographic clustering. Many Wilson's patients carry different mutations on each copy of chromosome 13 (i.e., they are compound heterozygotes). Even in individuals who are homozygous for a mutation, the onset and severity of the disease may vary. Individuals homozygous for severe mutations (e.g., those truncating the protein) have earlier disease onset.
Disease severity may also be a function of environmental factors, including the amount of copper in the diet or variability in the function of other proteins that influence copper homeostasis. It has been suggested that heterozygote carriers of the Wilson's disease gene mutation may be potentially more susceptible to elevated copper intake than the general population. A heterozygotic frequency of 1 in 90 people has been estimated in the overall population. However, there is no evidence to support this speculation. Further, a review of the data on single-allelic autosomal recessive diseases in humans does not suggest that heterozygote carriers are likely to be adversely affected by their altered genetic status.
Other copper-related hereditary syndromes
Other diseases in which abnormalities in copper metabolism appear to be involved include Indian childhood cirrhosis (ICC), endemic Tyrolean copper toxicosis (ETIC), and idiopathic copper toxicosis (ICT), also known as non-Indian childhood cirrhosis. ICT is a genetic disease recognized in the early twentieth century, primarily in the Tyrolean region of Austria and in the Pune region of India. ICC, ICT, and ETIC are infancy syndromes that are similar in their apparent etiology and presentation. All appear to have a genetic component and a contribution from elevated copper intake. In cases of ICC, the elevated copper intake is due to heating and/or storing milk in copper or brass vessels. ICT cases, on the other hand, are due to elevated copper concentrations in water supplies. Although exposures to elevated concentrations of copper are commonly found in both diseases, some cases appear to develop in children who are exclusively breastfed or who receive only low levels of copper in water supplies. The currently prevailing hypothesis is that ICT is due to a genetic lesion resulting in impaired copper metabolism combined with high copper intake. This hypothesis was supported by the frequency of parental consanguinity in most of these cases, which is absent in areas with elevated copper in drinking water and in which these syndromes do not occur. ICT appears to be vanishing as a result of greater genetic diversity within the affected populations, in conjunction with educational programs to ensure that tinned cooking utensils are used instead of copper pots and pans that expose cooked foods directly to copper. The preponderance of cases of early childhood cirrhosis identified in Germany over a period of 10 years were not associated with either external sources of copper or with elevated hepatic metal concentrations. Only occasional spontaneous cases of ICT arise today.
Cancer
The role of copper in angiogenesis associated with different types of cancers has been investigated. A copper chelator, tetrathiomolybdate, which depletes copper stores in the body, is under investigation as an anti-angiogenic agent in pilot and clinical trials. The drug may inhibit tumor angiogenesis in hepatocellular carcinoma, pleural mesothelioma, colorectal cancer, head and neck squamous cell carcinoma, breast cancer, and kidney cancer. The copper complex of a synthetic salicylaldehyde pyrazole hydrazone (SPH) derivative induced apoptosis in human umbilical vein endothelial cells (HUVEC) and showed an anti-angiogenic effect in vitro. The trace element copper has been found to promote tumor growth. Several lines of evidence from animal models indicate that tumors concentrate high levels of copper, and elevated copper has been found in some human cancers.
Recently, therapeutic strategies targeting copper in the tumor have been proposed. Upon administration with a specific copper chelator, copper complexes would be formed at a relatively high level in tumors. Copper complexes are often toxic to cells, therefore tumor cells were killed, while normal cells in the whole body remained alive for the lower level of copper. Researchers have also recently found that cuproptosis, a copper-induced mechanism of mitochondrial-related cell death, has been implicated as a breakthrough in the treatment of cancer and has become a new treatment strategy. Some copper chelators get more effective or novel bioactivity after forming copper-chelator complexes. It was found that Cu2+ was critically needed for PDTC induced apoptosis in HL-60 cells. The copper complex of salicylaldehyde benzoylhydrazone (SBH) derivatives showed increased efficacy of growth inhibition in several cancer cell lines, when compared with the metal-free SBHs. SBHs can react with many kinds of transition metal cations and thereby forming a number of complexes. Copper-SBH complexes were more cytotoxic than complexes of other transitional metals (Cu > Ni > Zn = Mn > Fe = Cr > Co) in MOLT-4 cells, an established human T-cell leukemia cell line. SBHs, especially their copper complexes appeared to be potent inhibitors of DNA synthesis and cell growth in several human cancer cell lines, and rodent cancer cell lines. Salicylaldehyde pyrazole hydrazone (SPH) derivatives were found to inhibit the growth of A549 lung carcinoma cells. SPH has identical ligands for Cu2+ as SBH. The Cu-SPH complex was found to induce apoptosis in A549, H322 and H1299 lung cancer cells. Contraception with copper IUDs A copper intrauterine device (IUD) is a type of long-acting reversible contraception that is considered to be one of the most effective forms of birth control. Plant and animal health In addition to being an essential nutrient for humans, copper is vital for the health of animals and plants and plays an important role in agriculture. Plant health Copper concentrations in soil are not uniform around the world. In many areas, soils have insufficient levels of copper. Soils that are naturally deficient in copper often require copper supplements before agricultural crops, such as cereals, can be grown. Copper deficiencies in soil can lead to crop failure. Copper deficiency is a major issue in global food production, resulting in losses in yield and reduced quality of output. Nitrogen fertilizers can worsen copper deficiency in agricultural soils. The world's two most important food crops, rice and wheat, are highly susceptible to copper deficiency. So are several other important foods, including citrus, oats, spinach and carrots. On the other hand, some foods including coconuts, soybeans and asparagus, are not particularly sensitive to copper-deficient soils. The most effective strategy to counter copper deficiency is to supplement the soil with copper, usually in the form of copper sulfate. Sewage sludge is also used in some areas to replenish agricultural land with organics and trace metals, including copper. Animal health In livestock, cattle and sheep commonly show indications when they are copper deficient. Swayback, a sheep disease associated with copper deficiency, imposes enormous costs on farmers worldwide, particularly in Europe, North America, and many tropical countries. For pigs, copper has been shown to be a growth promoter. 
See also
Dietary mineral
Essential nutrient
List of micronutrients
Micronutrients
Nutrition
References
De Medicina
De Medicina is a 1st-century medical treatise by Aulus Cornelius Celsus, a Roman encyclopedist and possibly (but not likely) a practicing physician. It is the only surviving section of a much larger encyclopedia; only small parts still survive from sections on agriculture, military science, oratory, jurisprudence and philosophy. De Medicina draws upon knowledge from ancient Greek works, and is considered the best surviving treatise on Alexandrian medicine. It is also the first complete textbook on medicine to be printed, and has an "encyclopedic arrangement that follows the tripartite division of medicine at the time as established by Hippocrates and Asclepiades – diet, pharmacology, and surgery." This work also covers the topics of disease and therapy. Sections detail the removal of missile weapons, stopping bleeding, preventing inflammation, diagnosis of internal maladies, removal of kidney stones, the amputation of limbs and so forth. The original work was published some time before 47 CE. It consisted of eight books in highly regarded Latin text. The subject matter is divided as follows: Book I – Diet, hygiene, and the benefits of exercise. Book II – The cause of disease, its symptoms and prognosis. Book III – Treatment of diseases, including the common cold and pneumonia. He classified mental disorders into: Phrenitis, delirium with fever; Melancholia, depression; one due to false images and disordered judgment, presumably schizophrenia; Delirium due to fear; Lethargus, coma; and Morbus comitialis, epilepsy. The term insania, insanity, was first used by him. The methods of treatment included bleeding, frightening the patient, emetics, enemas, total darkness, and decoctions of poppy or henbane, and pleasant ones such as music therapy, travel, sport, reading aloud, and massage. He was aware of the importance of the doctor-patient relationship. Book IV – Anatomical descriptions of selected diseases. Book V – Medicines, including opiates, diuretics, purgatives and laxatives. Book VI – Ulcers, skin lesions and diseases. Book VII – Classical operations, such as lithotomy and removal of cataracts. Book VIII – Treatment of dislocations and fractures. De Medicina was known during the Middle Ages up to the 9th or 10th centuries, but was later lost up until the 15th century. It was the first medical book to be printed, in Florence, 1478. References External links De Medicina at LacusCurtius (Latin original and English translation) 1st-century books in Latin Classical Latin literature Medical manuals Traditional medicine Encyclopedias in Latin Encyclopedias of medicine Encyclopedias in classical antiquity
Zangfu
The zangfu organs are functional entities stipulated by traditional Chinese medicine (TCM). These classifications are based on East Asian cosmological observations rather than the biomedical definitions used in Western evidence-based medical models. In TCM theory they are energetic representations of the internal organs rather than the anatomical viscera referred to in Western medicine. Each zang is paired with a fu, and each pair is assigned to one of the wuxing (Five Elements). The zangfu are also connected to the twelve standard meridians – each yang meridian is attached to a fu organ and each yin meridian is attached to a zang. The zang comprise five systems: Heart, Liver, Spleen, Lung, and Kidney. To highlight the fact that the zangfu are not equivalent to the anatomical organs, their names are often capitalized.
Anatomical organs
To understand the zangfu it is important to realize that their concept did not primarily develop out of anatomical or biological considerations but from cosmological patterns and influences. The need to describe and systematize bodily functions was more significant to ancient Chinese physicians than opening up a cadaver (dead body) and examining the morphological structures that were actually there. For example, viewing the Heart or pericardium was traditionally forbidden. Thus, the zangfu are functional, relational entities first and foremost, and only loosely tied to (rudimentary) anatomical assumptions.
Yin/yang and the Five Elements
Each zangfu organ has a yin and a yang aspect, but overall, the zang organs are considered to be yin, and the fu organs yang. Since the concept of the zangfu was developed on the basis of wuxing philosophy, they are incorporated into a system of allocation to one of five elemental qualities (i.e., the Five Elements or Five Phases). The zangfu share their respective element's allocations (e.g., diagnostics of colour, sound, odour, emotion, etc.) and interact with each other cyclically in the same way the Five Elements do: each zang organ has one corresponding zang organ that it disperses (sedates) and one that it reinforces (tonifies). The correspondence between the zangfu and the Five Elements is stipulated as:
Fire = Heart and Small Intestine (and, secondarily, Sanjiao ["Triple Burner"] and Pericardium)
Earth = Spleen and Stomach
Metal = Lung and Large Intestine
Water = Kidney and Bladder
Wood = Liver and Gallbladder
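The correspondence just listed is essentially a small lookup table; the sketch below encodes it as a Python dictionary with a reverse lookup. This is an illustrative data structure only, assuming exactly the pairings stated above; the names ZANGFU_BY_ELEMENT and element_of are hypothetical and are not part of any TCM standard.

```python
# Illustrative sketch: the element-to-organ correspondences listed above as a
# simple lookup table. Keys and helper names are hypothetical, not a TCM standard.

ZANGFU_BY_ELEMENT = {
    "Fire":  {"zang": ["Heart", "Pericardium"], "fu": ["Small Intestine", "San Jiao"]},
    "Earth": {"zang": ["Spleen"],               "fu": ["Stomach"]},
    "Metal": {"zang": ["Lung"],                 "fu": ["Large Intestine"]},
    "Water": {"zang": ["Kidney"],               "fu": ["Bladder"]},
    "Wood":  {"zang": ["Liver"],                "fu": ["Gallbladder"]},
}

def element_of(organ):
    """Return the element to which a given zang or fu organ is assigned above."""
    for element, organs in ZANGFU_BY_ELEMENT.items():
        if organ in organs["zang"] or organ in organs["fu"]:
            return element
    return None

print(element_of("Liver"))  # -> Wood
```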
Details
The zang organs' essential functions consist in manufacturing and storing qi and blood (and, in the case of the Kidney, essence). The hollow fu organs' main purpose is to transmit and digest (传化) substances (like waste, food, etc.).
Zang
Each zang has a corresponding "orifice" it "opens" into. This means the functional entity of a given zang includes the corresponding orifice's functions (e.g. blurry vision is primarily seen as a dysfunction of the Liver zang because the Liver channel "opens" into the eyes). In listing the functions of the zang organs, TCM regularly uses the term "governing" – indicating that the main responsibility for regulating something (e.g. blood, qi, water metabolism, etc.) lies with a certain zang. Although the zang are functional entities in the first place, TCM gives vague locations for them – namely, the general area where the anatomical organ of the same name would be found. One could argue that this (or any) positioning of the zang is irrelevant for the TCM system; there is some relevance, however, in whether a certain zang would be attributed to the upper, middle or lower jiao.
Heart
The Heart:
"Stores" the shen (usually translated as "mind"); paired with the Small Intestine
Governs xuě (blood) and the vessels/meridians
Opens into the tongue
Reflects in the facial complexion
Pericardium
Since there are only five zang organs but six yin channels, the remaining meridian is assigned to the Pericardium. Its concept is closely related to the Heart, and its stipulated main function is to protect the Heart from attacks by Exterior Pathogenic Factors. Like the Heart, the Pericardium governs blood and stores the mind. The Pericardium's corresponding yang channel is assigned to the San Jiao ("Triple Burner").
Spleen
The Spleen:
"Stores" the yi
Governs "transportation and transformation", i.e. the extraction of jing wei (usually translated as food essence, sometimes also called jing qi [essence qi]) – and water – from food and drink, and its successive distribution to the other zang organs
Is the source of "production and mutual transformation" of qi and xue (blood)
"Contains" the blood inside the vessels
Opens into the lips (and mouth)
Governs muscles and limbs
Liver
The Liver:
"Stores" blood and the hun (Ethereal Soul); paired with the Gallbladder
Governs "unclogging and deflation", primarily of qi; the free flow and harmony of qi in turn ensures the free flow of emotions, blood, and water
Opens into the eyes
Governs the tendons
Reflects in the nails
Lung
Yin Metal. Home of the po (Corporeal Soul); paired with the yang organ, the Large Intestine. The function of the Lung is to disperse and descend qi throughout the body. It receives qi through the breath, exhales the waste, and helps the peristaltic action of the gastrointestinal tract. The Lung governs the skin and hair and also governs the exterior (one part of immunity) and the closing of the skin pores. A properly functioning Lung organ will ensure the skin and hair are of good quality and that the immune system is strong and able to fight disease. The normal direction of Lung qi is descending; when Lung qi "rebels" it goes upwards, causing coughing and wheezing. When the Lung is weak, there can be skin conditions such as eczema, thin or brittle hair, and a propensity to catching colds and flu. The Lung is weakened by dryness and the emotion of grief or sadness.
Kidney
Water. Home of the zhi (Will); paired with the Bladder. The Kidneys store jing (Essence) and govern birth, growth, reproduction, and development. They also produce the Marrow, which fills the spinal cord and brain, and control the bones. The Kidneys are often referred to as the "Root of Life" or the "Root of the Pre-Heaven Qi".
Fu
Large intestine
Gall bladder
Urinary bladder
Stomach
Small intestine
San Jiao (Triple Burner)
Criticism
The concept of the zangfu is not recognized by evidence-based medicine – the underlying assumptions and theory have not been verified or falsified by controlled experiments. Because scientific study of the mechanisms proposed by traditional Chinese medicine is comparatively recent in the West, the zangfu model has been criticized as pseudoscientific.
See also
Traditional Chinese medicine
Wuxing
References
Citations
Sources
Cultural China (2007), "Chinese Medicine: Basic Zang Fu Theory", "Kaleidoscope → Health", retrieved 2010-12-21
Kaptchuk, T. (2000). The Web That Has No Weaver: Understanding Chinese Medicine, 2nd ed. McGraw-Hill.
"The Web That Has No Weaver: Understanding Chinese Medicine, 2nd ed." Mcgraw-Hill. Oguamanam C. (2006). "International Law and Indigenous Knowledge: Intellectual Property, Plant Biodiversity, and Traditional Medicine" University of Toronto Press Agnes Fatrai, Stefan Uhrig (eds.). Chinese Ophthalmology – Acupuncture, Herbal Therapy, Dietary Therapy, Tuina and Qigong. Tipani-Verlag, Wiesbaden 2015, . External links The Zang Fu – Information on the functions of the Zang Fu Organs. Syndrome differentiation according to zang-fu – Chinese medicine diagnosis on organ diseases. Traditional Chinese medicine
Chemotaxonomy
Merriam-Webster defines chemotaxonomy as the method of biological classification based on similarities and dissimilarity in the structure of certain compounds among the organisms being classified. Advocates argue that, as proteins are more closely controlled by genes and less subjected to natural selection than the anatomical features, they are more reliable indicators of genetic relationships. The compounds studied most are proteins, amino acids, nucleic acids, peptides etc. Physiology is the study of working of organs in a living being. Since working of the organs involves chemicals of the body, these compounds are called biochemical evidences. The study of morphological change has shown that there are changes in the structure of animals which result in evolution. When changes take place in the structure of a living organism, they will naturally be accompanied by changes in the physiological or biochemical processes. John Griffith Vaughan and Victor Plouvier were among the pioneers of chemotaxonomy. Biochemical products The body of any animal in the animal kingdom is made up of a number of chemicals. Of these, only a few biochemical products have been taken into consideration to derive evidence for evolution. Protoplasm: Every living cell, from a bacterium to an elephant, from grasses to the blue whale, has protoplasm. Though the complexity and constituents of the protoplasm increases from lower to higher living organism, the basic compound is always the protoplasm. Evolutionary significance: From this evidence, it is clear that all living things have a common origin point or a common ancestor, which in turn had protoplasm. Its complexity increased due to changes in the mode of life and habitat. Nucleic acids: DNA and RNA are the two types of nucleic acids present in all living organisms. They are present in the chromosomes. The structure of these acids has been found to be similar in all animals. DNA always has two chains forming a double helix, and each chain is made up of nucleotides. Each nucleotide has a pentose sugar, a phosphate group, and nitrogenous bases like adenine, guanine, cytosine, and thymine. RNA contains uracil instead of thymine. It has been proved in the laboratory that a single strand of DNA of one species can match with the other strand from another species. If the alleles of the strands of any two species are close, then it can be concluded that these two species are more closely related. Digestive enzymes are chemical compounds that help in digestion. Proteins are always digested by a particular type of enzymes like pepsin, trypsin, etc., in all animals from a single celled amoeba to a human being. The complexity in the composition of these enzymes increases from lower to higher organisms but are fundamentally the same. Likewise, carbohydrates are always digested by amylase, and fats by lipase. End products of digestion: Irrespective of the type of animal, the end products of protein, carbohydrates and fats are amino acids, simple sugars, and fatty acids respectively. It can thus be comfortably concluded that the similarity of the end products is due to common ancestry. Hormones are secretions from ductless glands called the endocrine glands like the thyroid, pituitary, adrenal, etc. Their chemical nature is the same in all animals. For example, thyroxine is secreted from the thyroid gland, irrespective of what the animal is. It is used to control metabolism in all animals. 
If a human being is deficient in thyroxine, the hormone does not have to be obtained from another human being; it can be extracted from any mammal and injected into humans for normal metabolism to take place. Likewise, insulin is secreted from the pancreas. If the thyroid gland of a tadpole is removed and replaced with a bovine thyroid gland, normal metabolism will take place and the tadpole will metamorphose into a frog. Such an exchange of hormones or glands is possible because there is a fundamental relationship among these animals. Nitrogenous excretory products: Mainly three types of nitrogenous waste are excreted by living organisms; ammonia is characteristic of aquatic life forms, urea is formed by animals living both on land and in water, and uric acid is excreted by terrestrial life forms. A frog in its tadpole stage excretes ammonia, just like a fish. When it turns into an adult frog and moves onto land, it excretes urea instead of ammonia. Thus an aquatic ancestry for land animals is established. A chick excretes ammonia up to its fifth day of development, urea from its fifth to ninth day, and uric acid thereafter. Based on these findings, Baldwin sought a biochemical recapitulation in the development of vertebrates with reference to nitrogenous excretory products. Phosphagens are energy reservoirs of animals. They are present in the muscles and supply energy for the synthesis of ATP. Generally, there are two types of phosphagens in animals: phosphoarginine (PA) in invertebrates and phosphocreatine (PC) in vertebrates. Among the echinoderms and prochordates, some have PA and others PC, and only a few have both PA and PC. Biochemically, therefore, these two groups are related. This is the most basic evidence that the first chordates should have been derived from echinoderm-like ancestors. Body fluid of animals: When the body fluids of both aquatic and terrestrial animals are analyzed, they are found to resemble sea water in their ionic composition. There is ample evidence that primitive members of most of the phyla lived in the sea in Paleozoic times. It is clear that the first life appeared in the sea and later evolved onto land. A further point of interest is that the body fluids of most animals contain less magnesium and more potassium than the water of the present-day ocean. In the past, the ocean also contained less magnesium and more potassium. Animals' body fluids retained this ancient composition, and the characteristic remains so today. When the first life forms appeared in the sea, they acquired the composition of the contemporary sea water and retained it even after their evolution onto land, as it was a favorable trait. Opsins: In the vertebrates, vision is controlled by two distinct types of opsins, porphyropsin and rhodopsin, which are present in the rods of the retina. Freshwater fishes have porphyropsin; marine fishes and land vertebrates have rhodopsin. In amphibians, a tadpole living in fresh water has porphyropsin, while the adult frog, which lives on land most of the time, has rhodopsin. In catadromous fish, which migrate from fresh water to the sea, porphyropsin is replaced by rhodopsin; in anadromous fish, which migrate from the sea to fresh water, rhodopsin is replaced by porphyropsin. These examples point to the freshwater origin of vertebrates, which then diverged into two lines, one leading to marine life and the other to terrestrial life. 
Serological evidence: In recent years, experiments on the composition of blood have offered good evidence for evolution. It has been found that blood can be transfused only between animals that are closely related, and the degree of relationship between animals can be determined by what is known as serological evidence. There are various methods of doing so; the method employed by George Nuttall is called the precipitation method. In this method, an anti-serum against the animal under study has to be prepared. For human study, human blood is collected and allowed to clot, and the serum is separated from the erythrocytes. A rabbit is then injected with a small amount of this serum at regular intervals and allowed to incubate for a few days, during which antibodies form in the rabbit's body. The rabbit's blood is then drawn and clotted, and the serum separated from the red blood cells is called anti-human serum. When such a serum is treated with the blood of monkeys or apes, a clear white precipitate is formed. When it is treated with the blood of other animals such as dogs, cats, or cows, no precipitate appears. It can thus be concluded that humans are more closely related to monkeys and apes. By the same approach, it has been determined that lizards are closely related to snakes, horses to donkeys, dogs to cats, and so on. The systematic position of Limulus was controversial for a long time, but serological tests have shown that it is more closely related to the arachnids than to the crustaceans. The field of biochemistry has developed greatly since Darwin's time, and this serological work is one of the most recent lines of evidence for evolution. A number of biochemical products such as nucleic acids, enzymes, hormones, and phosphagens clearly show the relationship of all life forms. The composition of body fluids indicates that the first life originated in the oceans. Nitrogenous waste products reveal the aquatic ancestry of vertebrates, and the nature of the visual pigments points to the freshwater ancestry of land vertebrates. Serological tests indicate relationships within and among animal phyla. Paleontology When only fragments of fossils or some biomarkers remain in a rock or oil deposit, the class of organisms that produced them can often be determined using Fourier transform infrared spectroscopy. References External links http://www.merriam-webster.com/dictionary/chemotaxonomy Phylogenetics
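The idea described above under Nucleic acids — that the more closely the DNA strands of two species pair, the more closely related the species are — can be illustrated with a toy calculation. The following sketch is purely illustrative and is not part of the original text: the sequences and the simple identity measure are hypothetical assumptions, and real chemotaxonomic work relies on alignment and hybridization data rather than raw string comparison.

# Toy illustration: percent identity between two pre-aligned DNA fragments.
# The sequences below are made up for demonstration purposes only.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Return the fraction of matching positions between two equal-length,
    pre-aligned sequences (gaps are not handled in this toy version)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be pre-aligned to the same length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
    return matches / len(seq_a)

species_x = "ATGGCCATTGTAATGGGCCGC"   # hypothetical fragment from species X
species_y = "ATGGCCATTGTGATGGGCCGT"   # hypothetical fragment from species Y
species_z = "ATGACCCTTGTGTTGAGCCAT"   # hypothetical fragment from species Z

print("X vs Y:", percent_identity(species_x, species_y))  # higher -> more closely related
print("X vs Z:", percent_identity(species_x, species_z))  # lower  -> more distantly related

Under this toy measure, the pair with the higher identity would be interpreted as the more closely related, mirroring the hybridization argument in the text above.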
0.775882
0.960311
0.745088
Disease model of addiction
The disease model of addiction describes an addiction as a disease with biological, neurological, genetic, and environmental sources of origin. The traditional medical model of disease requires only that an abnormal condition be present that causes discomfort, dysfunction, or distress to the affected individual. The contemporary medical model attributes addiction, in part, to changes in the brain's mesolimbic pathway. The medical model also takes into consideration that such disease may be the result of other biological, psychological or sociological entities, despite an incomplete understanding of the mechanisms of these entities. The common biomolecular mechanisms underlying all forms of addiction – CREB and ΔFosB – were reviewed by Eric J. Nestler in a 2013 review. Genetic factors and mental disorders can contribute to the severity of drug addiction. Approximately fifty percent of the chance that a person will develop an addiction can be attributed to genetic factors. Criticism Critics of the disease model, particularly those who subscribe to the life-process model of addiction, argue that labeling people as addicts keeps them from developing self-control and stigmatizes them, as noted by the harm reduction specialist Andrew Tatarsky. See also Addiction psychology References Addiction psychiatry
0.767797
0.970272
0.744972
Facility
A facility is a place for doing something, or a place that facilitates an activity. Types of facility include: A commercial or institutional building, such as a hotel, resort, school, office complex, sports arena, or convention center Medical facility Post-production facility Telecommunications facility Public toilet, euphemistically called "facilities" See also Faculty (disambiguation) Broad-concept articles Buildings and structures by type
0.768602
0.969089
0.744844
Human thermoregulation
As in other mammals, human thermoregulation is an important aspect of homeostasis. In thermoregulation, body heat is generated mostly in the deep organs, especially the liver, brain, and heart, and by contraction of skeletal muscles. Humans have been able to adapt to a great diversity of climates, including hot humid and hot arid. High temperatures pose serious stress for the human body, placing it in great danger of injury or even death. For humans, adaptation to varying climatic conditions includes both physiological mechanisms resulting from evolution and behavioural mechanisms resulting from conscious cultural adaptations. There are four avenues of heat loss: convection, conduction, radiation, and evaporation. If skin temperature is greater than that of the surroundings, the body can lose heat by radiation and conduction. But if the temperature of the surroundings is greater than that of the skin, the body actually gains heat by radiation and conduction. In such conditions, the most efficient means by which the body can rid itself of heat is by evaporation. So, when the surrounding temperature is higher than the skin temperature, anything that prevents adequate evaporation will cause the internal body temperature to rise. During sports activities, evaporation becomes the main avenue of heat loss. Humidity affects thermoregulation by limiting sweat evaporation and thus heat loss. Humans cannot survive prolonged exposure to a wet-bulb temperature above about 35 °C. Such a temperature used to be thought not to occur on Earth's surface but has been recorded in some parts of the Indus Valley and Persian Gulf. Occurrence of conditions too hot and humid for human life is expected to increase in the future due to global warming. Control system The core temperature of a human is regulated and stabilized primarily by the hypothalamus, a region of the brain linking the endocrine system to the nervous system, and more specifically by the anterior hypothalamic nucleus and the adjacent preoptic area regions of the hypothalamus. As core temperature deviates from the set point, endocrine production initiates control mechanisms to increase or decrease energy production and dissipation as needed to return the temperature toward the set point. In hot conditions Eccrine sweat glands under the skin secrete sweat (a fluid containing mostly water with some dissolved ions), which travels up the sweat duct, through the sweat pore and onto the surface of the skin. This causes heat loss via evaporative cooling; however, a lot of essential water is lost. The hairs on the skin lie flat, preventing heat from being trapped by the layer of still air between them. This is caused by tiny muscles under the surface of the skin, called arrector pili muscles, relaxing so that their attached hair follicles are not erect. These flat hairs increase the flow of air next to the skin, increasing heat loss by convection. When the environmental temperature is above core body temperature, sweating is the only physiological way for humans to lose heat. Arteriolar vasodilation occurs: the smooth muscle walls of the arterioles relax, allowing increased blood flow through the artery. This redirects blood into the superficial capillaries in the skin, increasing heat loss by convection and conduction. In hot and humid conditions In general, humans appear physiologically well adapted to hot dry conditions. 
However, effective thermoregulation is reduced in hot, humid environments such as the Red Sea and Persian Gulf (where moderately hot summer temperatures are accompanied by unusually high vapor pressures), tropical environments, and deep mines where the atmosphere can be water-saturated. In hot-humid conditions, clothing can impede efficient evaporation. In such environments, it helps to wear light clothing such as cotton, which is pervious to sweat but impervious to radiant heat from the sun. This minimizes the gain of radiant heat while allowing as much evaporation as the environment will permit. Clothing such as plastic fabrics, which are impermeable to sweat and thus do not facilitate heat loss through evaporation, can actually contribute to heat stress. In cold conditions Heat is lost mainly through the hands and feet. Sweat production is decreased. The minute muscles under the surface of the skin called arrector pili muscles (attached to an individual hair follicle) contract (piloerection), lifting the hair follicle upright. This makes the hairs stand on end, which acts as an insulating layer, trapping heat. This is also what causes goose bumps, since humans do not have very much hair and the contracted muscles can easily be seen. Arterioles carrying blood to superficial capillaries under the surface of the skin can constrict, thereby rerouting blood away from the skin and towards the warmer core of the body. This prevents blood from losing heat to the surroundings and also prevents the core temperature from dropping further. This process is called vasoconstriction. It is impossible to prevent all heat loss from the blood, only to reduce it. In extremely cold conditions, excessive vasoconstriction leads to numbness and pale skin. Frostbite occurs only when water within the cells begins to freeze; this destroys the cells, causing damage. Muscles can also receive messages from the thermoregulatory center of the brain (the hypothalamus) to cause shivering. This increases heat production, as respiration is an exothermic reaction in muscle cells. Shivering is more effective than exercise at producing heat because the animal (including humans) remains still, so less heat is lost to the environment through convection. There are two types of shivering: low-intensity and high-intensity. During low-intensity shivering, animals shiver constantly at a low level for months during cold conditions. During high-intensity shivering, animals shiver violently for a relatively short time. Both processes consume energy; however, high-intensity shivering uses glucose as a fuel source while low-intensity shivering tends to use fats. This is a primary reason why animals store up food in the winter. Brown adipocytes are also capable of producing heat via a process called non-shivering thermogenesis, in which triglycerides are burned to release heat, thereby increasing body temperature. Related factors Fitness The more physically fit a person is, the greater their ability to adjust to temperature variation. This includes adapting for heat (keeping cool) and for cold (keeping warm). Age Age can be a factor in a person's ability to adapt to temperature variations. Studies have shown that younger people adapt more efficiently to contact with cold surfaces than elderly people. Notably, a good level of fitness allowed the elderly participants to cope better and partly offset the age-related decline in their ability to thermoregulate. 
Body mass A high body mass has been found to help with thermoregulation in regard to adapting for hot environments. This is considered on the basis that the levels of body fat were within healthy ranges i.e. the person's muscle-to-fat ratio was healthy. However, extra body fat has been shown to offer some benefit in terms of keeping warm, especially during immersion in cold water. For this reason long distance outdoor swimmers often have a generous layer of body fat. This is not necessarily always the case though, and high levels of physical fitness can allow thinner swimmers to also perform effectively in cold water environments. Uses of hypothermia Adjusting the human body temperature downward has been used therapeutically, in particular, as a method of stabilizing a body following trauma. It has been suggested that adjusting the adenosine A1 receptor of the hypothalamus may allow humans to enter a hibernation-like state of reduced body temperature, which could be useful for applications such as long-duration space flight. Related testing The thermoregulatory sweat test (TST) can be used to diagnose certain conditions that cause abnormal temperature regulation and defects in sweat production in the body. To perform the test, the patient is placed in a chamber that slowly rises in temperature. Before the chamber is heated, the patient is coated with a special kind of indicator powder that will change in color when sweat is produced. This powder, when changing color, will be useful in visualizing which skin is sweating versus not sweating. Results of the patient's sweat pattern will be documented by digital photography, and abnormal TST patterns can indicate if there is dysfunction in the autonomic nervous system. Certain differentials can be made depending on the type of sweat pattern found from the TST (along with history and clinical presentation) including hyperhidrosis, small fiber and autonomic neuropathies, multiple system atrophy, Parkinson disease with autonomic dysfunction, and pure autonomic failure. Related physiological processes, diseases and syndromes Hypothermia Hyperthermia Heat stroke Raynaud's phenomenon (Raynaud's disease) Endocrine system disorders (hyperthyroidism, hypothyroidism) Induced hypothermia Erythromelalgia (hyperthermia) Hypohidrotic ectodermal dysplasia Thermogenesis Poikilothermia References Thermoregulation Human homeostasis Heat transfer
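The set-point regulation described above under Control system can be pictured as a simple feedback loop. The following is a minimal sketch, not part of the original article: the linear model and all coefficients are illustrative assumptions, not measured physiological values, and the corrective term merely stands in for the sweating/vasodilation and shivering/vasoconstriction responses described in the text.

# Minimal sketch of set-point feedback regulation of core temperature.
# All coefficients are illustrative assumptions, not physiological data.

SET_POINT = 37.0  # target core temperature in degrees Celsius

def simulate(core_temp: float, ambient_temp: float, steps: int = 60) -> float:
    """Crude discrete-time loop: passive heat exchange with the environment
    plus a corrective response proportional to the error from the set point."""
    for _ in range(steps):
        passive = 0.02 * (ambient_temp - core_temp)   # environmental drift
        error = SET_POINT - core_temp
        regulation = 0.5 * error                       # proportional correction
        core_temp += passive + regulation
    return core_temp

print(round(simulate(37.0, ambient_temp=45.0), 2))  # hot environment: stays close to 37
print(round(simulate(37.0, ambient_temp=5.0), 2))   # cold environment: stays close to 37

The steady-state value sits slightly off the set point in both cases, which is the usual behaviour of a purely proportional controller and only a caricature of the hypothalamic mechanism.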
0.768719
0.96894
0.744842
Structural bioinformatics
Structural bioinformatics is the branch of bioinformatics that is related to the analysis and prediction of the three-dimensional structure of biological macromolecules such as proteins, RNA, and DNA. It deals with generalizations about macromolecular 3D structures such as comparisons of overall folds and local motifs, principles of molecular folding, evolution, binding interactions, and structure/function relationships, working both from experimentally solved structures and from computational models. The term structural has the same meaning as in structural biology, and structural bioinformatics can be seen as a part of computational structural biology. The main objective of structural bioinformatics is the creation of new methods of analysing and manipulating biological macromolecular data in order to solve problems in biology and generate new knowledge. Introduction Protein structure The structure of a protein is directly related to its function. The presence of certain chemical groups in specific locations allows proteins to act as enzymes, catalyzing several chemical reactions. In general, protein structures are classified into four levels: primary (sequences), secondary (local conformation of the polypeptide chain), tertiary (three-dimensional structure of the protein fold), and quaternary (association of multiple polypeptide structures). Structural bioinformatics mainly addresses interactions among structures taking into consideration their space coordinates. Thus, the primary structure is better analyzed in traditional branches of bioinformatics. However, the sequence implies restrictions that allow the formation of conserved local conformations of the polypeptide chain, such as alpha-helix, beta-sheets, and loops (secondary structure). Also, weak interactions (such as hydrogen bonds) stabilize the protein fold. Interactions could be intrachain, i.e., when occurring between parts of the same protein monomer (tertiary structure), or interchain, i.e., when occurring between different structures (quaternary structure). Finally, the topological arrangement of interactions, whether strong or weak, and entanglements is being studied in the field of structural bioinformatics, utilizing frameworks such as circuit topology. Structure visualization Protein structure visualization is an important issue for structural bioinformatics. It allows users to observe static or dynamic representations of the molecules, also allowing the detection of interactions that may be used to make inferences about molecular mechanisms. The most common types of visualization are: Cartoon: this type of protein visualization highlights the secondary structure differences. In general, α-helix is represented as a type of screw, β-strands as arrows, and loops as lines. Lines: each amino acid residue is represented by thin lines, which allows a low cost for graphic rendering. Surface: in this visualization, the external shape of the molecule is shown. Sticks: each covalent bond between amino acid atoms is represented as a stick. This type of visualization is most used to visualize interactions between amino acids... DNA structure The classic DNA duplexes structure was initially described by Watson and Crick (and contributions of Rosalind Franklin). The DNA molecule is composed of three substances: a phosphate group, a pentose, and a nitrogen base (adenine, thymine, cytosine, or guanine). 
The DNA double helix structure is stabilized by hydrogen bonds formed between base pairs: adenine with thymine (A-T) and cytosine with guanine (C-G). Many structural bioinformatics studies have focused on understanding interactions between DNA and small molecules, which have been the target of several drug design studies. Interactions Interactions are contacts established between parts of molecules at different levels. They are responsible for stabilizing protein structures and take part in a varied range of activities. In biochemistry, interactions are characterized by the proximity of groups of atoms or regions of molecules that exert an effect upon one another, such as electrostatic forces, hydrogen bonding, and the hydrophobic effect. Proteins can take part in several types of interactions, such as protein-protein interactions (PPI), protein-peptide interactions, protein-ligand interactions (PLI), and protein-DNA interactions. Calculating contacts Calculating contacts is an important task in structural bioinformatics, being relevant for the correct prediction of protein structure and folding, thermodynamic stability, protein-protein and protein-ligand interactions, docking and molecular dynamics analyses, and so on. Traditionally, computational methods have used a threshold distance between atoms (also called a cutoff) to detect possible interactions. This detection is performed based on the Euclidean distance and the angles between atoms of determined types. However, most methods based on simple Euclidean distance cannot detect occluded contacts. Hence, cutoff-free methods, such as Delaunay triangulation, have gained prominence in recent years. In addition, the combination of a set of criteria, for example physicochemical properties, distance, geometry, and angles, has been used to improve contact determination. Protein Data Bank (PDB) The Protein Data Bank (PDB) is a database of 3D structure data for large biological molecules, such as proteins, DNA, and RNA. The PDB is managed by an international organization called the Worldwide Protein Data Bank (wwPDB), which is composed of several local organizations, such as PDBe, PDBj, RCSB, and BMRB. They are responsible for keeping copies of PDB data available on the internet at no charge. The number of structures available in the PDB has increased each year; they are typically obtained by X-ray crystallography, NMR spectroscopy, or cryo-electron microscopy. Data format The PDB format (.pdb) is the legacy textual file format used to store information on the three-dimensional structures of macromolecules in the Protein Data Bank. Because of restrictions in the conception of the format, the PDB format does not allow large structures containing more than 62 chains or 99,999 atom records. The PDBx/mmCIF (macromolecular Crystallographic Information File) is a standard text file format for representing crystallographic information. Since 2014, the PDB format has been superseded as the standard PDB archive distribution by the PDBx/mmCIF file format (.cif). While the PDB format contains a set of records identified by a keyword of up to six characters, the PDBx/mmCIF format uses a structure based on keys and values, where the key is a name that identifies some feature and the value is the variable information. Other structural databases In addition to the Protein Data Bank (PDB), there are several databases of protein structures and other macromolecules. Examples include: MMDB: Experimentally determined three-dimensional structures of biomolecules derived from the Protein Data Bank (PDB). 
Nucleic Acid Database (NDB): Experimentally determined information about nucleic acids (DNA, RNA). Structural Classification of Proteins (SCOP): Comprehensive description of the structural and evolutionary relationships between structurally known proteins. TOPOFIT-DB: Protein structural alignments based on the TOPOFIT method. Electron Density Server (EDS): Electron-density maps and statistics about the fit of crystal structures and their maps. CASP: Community-wide, worldwide experiment for protein structure prediction (CASP Prediction Center). PISCES server for creating non-redundant lists of proteins: Generates PDB lists by sequence identity and structural quality criteria. The Structural Biology Knowledgebase: Tools to aid in protein research design. ProtCID: The Protein Common Interface Database, a database of similar protein-protein interfaces in crystal structures of homologous proteins. AlphaFold: the AlphaFold Protein Structure Database. Structure comparison Structural alignment Structural alignment is a method for comparison between 3D structures based on their shape and conformation. It can be used to infer the evolutionary relationship among a set of proteins even with low sequence similarity. Structural alignment implies superimposing a 3D structure over a second one, rotating and translating atoms into corresponding positions (in general, using the Cα atoms or even the backbone heavy atoms C, N, O, and Cα). Usually, the alignment quality is evaluated based on the root-mean-square deviation (RMSD) of atomic positions, i.e., the average distance between atoms after superimposition: \mathrm{RMSD} = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N}\delta_i^{2}}, where δi is the distance between atom i and either the corresponding reference atom in the other structure or the mean coordinate of the N equivalent atoms. In general, the RMSD is measured in ångström (Å) units, equivalent to 10⁻¹⁰ m. The nearer to zero the RMSD value, the more similar the structures are. Graph-based structural signatures Structural signatures, also called fingerprints, are macromolecule pattern representations that can be used to infer similarities and differences. Comparisons among a large set of proteins using RMSD remain a challenge due to the high computational cost of structural alignments. Structural signatures based on graph distance patterns among atom pairs have been used to determine protein identifying vectors and to detect non-trivial information. Furthermore, linear algebra and machine learning can be used for clustering protein signatures, detecting protein-ligand interactions, predicting ΔΔG, and proposing mutations based on Euclidean distance. Structure prediction The atomic structures of molecules can be obtained by several methods, such as X-ray crystallography (XRC), NMR spectroscopy, and 3D electron microscopy; however, these processes can be costly, and some structures, such as membrane proteins, can be difficult to determine. Hence, it is necessary to use computational approaches for determining the 3D structures of macromolecules. The structure prediction methods are classified into comparative modeling and de novo modeling. Comparative modeling Comparative modeling, also known as homology modeling, corresponds to the methodology of constructing three-dimensional structures from the amino acid sequence of a target protein and a template with known structure. The literature has described that evolutionarily related proteins tend to present a conserved three-dimensional structure. 
In addition, sequences of distantly related proteins with identity lower than 20% can present different folds. De novo modeling In structural bioinformatics, de novo modeling, also known as ab initio modeling, refers to approaches for obtaining three-dimensional structures from sequences without the need for a homologous known 3D structure. Despite the new algorithms and methods proposed in recent years, de novo protein structure prediction is still considered one of the remaining outstanding problems in modern science. Structure validation After structure modeling, an additional step of structure validation is necessary, since many comparative and de novo modeling algorithms and tools use heuristics to try to assemble the 3D structure, which can generate many errors. Some validation strategies consist of calculating energy scores and comparing them with experimentally determined structures. For example, the DOPE score is an energy score used by the MODELLER tool to determine the best model. Another validation strategy is calculating the φ and ψ backbone dihedral angles of all residues and constructing a Ramachandran plot. The side chains of amino acids and the nature of the interactions in the backbone restrict these two angles, so the visualization of allowed conformations can be performed based on the Ramachandran plot. A high number of amino acids allocated in non-permissive positions of the plot is an indication of low-quality modeling. Prediction tools A list of commonly used software tools for protein structure prediction, including comparative modeling, protein threading, de novo protein structure prediction, and secondary structure prediction, is available in the list of protein structure prediction software. Molecular docking Molecular docking (also referred to simply as docking) is a method used to predict the orientation coordinates of a molecule (ligand) when bound to another one (receptor or target). The binding is mostly through non-covalent interactions, although covalently linked binding can also be studied. Molecular docking aims to predict possible poses (binding modes) of the ligand when it interacts with specific regions of the receptor. Docking tools use force fields to estimate a score for ranking the best poses, favoring better interactions between the two molecules. In general, docking protocols are used to predict the interactions between small molecules and proteins. However, docking can also be used to detect associations and binding modes among proteins, peptides, DNA or RNA molecules, carbohydrates, and other macromolecules. Virtual screening Virtual screening (VS) is a computational approach used for the fast screening of large compound libraries for drug discovery. Usually, virtual screening uses docking algorithms to rank small molecules with the highest affinity for a target receptor. In recent times, several tools have been used to evaluate the use of virtual screening in the process of discovering new drugs. However, problems such as missing information, inaccurate understanding of drug-like molecular properties, weak scoring functions, and insufficient docking strategies hinder the docking process. Hence, the literature has described that it is still not considered a mature technology. Molecular dynamics Molecular dynamics (MD) is a computational method for simulating interactions between molecules and their atoms during a given period of time. 
This method allows the observation of the behavior of molecules and their interactions, considering the system as a whole. To calculate the behavior of the systems and, thus, determine the trajectories, an MD can use Newton's equation of motion, in addition to using molecular mechanics methods to estimate the forces that occur between particles (force fields). Applications Informatics approaches used in structural bioinformatics are: Selection of Target - Potential targets are identified by comparing them with databases of known structures and sequence. The importance of a target can be decided on the basis of published literature. Target can also be selected on the basis of its protein domain. Protein domains are building blocks that can be rearranged to form new proteins. They can be studied in isolation initially. Tracking X-ray crystallography trials - X-Ray crystallography can be used to reveal three-dimensional structure of a protein. But, in order to use X-ray for studying protein crystals, pure proteins crystals must be formed, which can take a lot of trials. This leads to a need for tracking the conditions and results of trials. Furthermore, supervised machine learning algorithms can be used on the stored data to identify conditions that might increase the yield of pure crystals. Analysis of X-Ray crystallographic data - The diffraction pattern obtained as a result of bombarding X-rays on electrons is Fourier transform of electron density distribution. There is a need for algorithms that can deconvolve Fourier transform with partial information ( due to missing phase information, as the detectors can only measure amplitude of diffracted X-rays, and not the phase shifts ). Extrapolation technique such as Multiwavelength anomalous dispersion can be used to generate electron density map, which uses the location of selenium atoms as a reference to determine rest of the structure. Standard Ball-and-stick model is generated from the electron density map. Analysis of NMR spectroscopy data - Nuclear magnetic resonance spectroscopy experiments produce two (or higher) dimensional data, with each peak corresponding to a chemical group within the sample. Optimization methods are used to convert spectra into three dimensional structures. Correlating Structural information with functional information - Structural studies can be used as probe for structural-functional relationship. Tools See also References Further reading
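The RMSD measure given above under Structural alignment can be computed directly once two structures have been superimposed and their equivalent atoms paired. The following is a minimal sketch, not part of the original article, which assumes the input coordinates are already superimposed Cα positions (no rotation or translation fitting is performed, and the coordinates are made up):

import math

# Minimal sketch: RMSD between two already-superimposed coordinate sets.
# The coordinates below are invented Cα positions used only for demonstration.

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation over N paired atoms:
    sqrt( (1/N) * sum_i ||a_i - b_i||^2 )."""
    if len(coords_a) != len(coords_b) or not coords_a:
        raise ValueError("need two equal-length, non-empty coordinate lists")
    total = 0.0
    for (xa, ya, za), (xb, yb, zb) in zip(coords_a, coords_b):
        total += (xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
    return math.sqrt(total / len(coords_a))

structure_a = [(0.0, 0.0, 0.0), (1.5, 0.2, 0.1), (3.1, 0.4, 0.3)]
structure_b = [(0.1, 0.0, 0.1), (1.4, 0.3, 0.0), (3.0, 0.5, 0.4)]
print(round(rmsd(structure_a, structure_b), 3))  # small value (in Å) -> similar structures

In practice the superposition itself (for example with the Kabsch algorithm) must be carried out first; otherwise the RMSD value is not meaningful.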
0.775483
0.960363
0.744745
Systems medicine
Systems medicine is an interdisciplinary field of study that looks at the systems of the human body as part of an integrated whole, incorporating biochemical, physiological, and environmental interactions. Systems medicine draws on systems science and systems biology, and considers complex interactions within the human body in light of a patient's genomics, behavior and environment. The earliest uses of the term systems medicine appeared in 1992, in an article on systems medicine and pharmacology by T. Kamada. An important topic in systems medicine and systems biomedicine is the development of computational models that describe disease progression and the effect of therapeutic interventions. More recent approaches include the redefinition of disease phenotypes based on common mechanisms rather than symptoms; these then provide therapeutic targets, including network pharmacology and drug repurposing. Since 2018, there has been a dedicated scientific journal, Systems Medicine. Fundamental schools of systems medicine Essentially, the issues dealt with by systems medicine can be addressed in two basic ways, molecular (MSM) and organismal systems medicine (OSM): Molecular systems medicine (MSM) This approach relies on omics technologies (genomics, proteomics, transcriptomics, phenomics, metabolomics, etc.) and tries to understand physiological processes and the evolution of disease in a bottom-up strategy, i.e. by simulating, synthesising and integrating the description of molecular processes to deliver an explanation of an organ system or even the organism as a whole. Organismal systems medicine (OSM) This branch of systems medicine, going back to the traditions of Ludwig von Bertalanffy's systems theory and biological cybernetics, is a top-down strategy that starts with the description of large, complex processing structures (i.e. neural networks, feedback loops and other motifs) and tries to find sufficient and necessary conditions for the corresponding functional organisation at the molecular level. A common challenge for both schools is the translation between the molecular and the organismal level. This can be achieved e.g. by affine subspace mapping and sensitivity analysis, but it also requires some preparative steps on both ends of the epistemic gap. Systems Medicine Education Georgetown University is the first in the nation to launch an MS program in systems medicine, for which it has developed a rigorous curriculum. The program was developed and is led by Dr. Sona Vasudevan, PhD. List of research groups See also Biocybernetics Medical cybernetics Systems biology Systems science References Clinical medicine Medicine Concepts in alternative medicine Evidence-based medicine Health care Medicine
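The computational models of disease progression and therapeutic effect mentioned above can be as simple as a one-variable growth law with a treatment term. The following is a minimal sketch, not part of the original article: the exponential model and every parameter value are illustrative assumptions, not validated clinical quantities.

# Minimal sketch of a disease-progression model with a treatment effect.
# Model form (exponential growth reduced by a treatment term) and all
# parameter values are illustrative assumptions only.

def progression(initial_burden: float, growth_rate: float,
                treatment_effect: float, days: int) -> list:
    """Return the daily disease burden under net growth = growth - treatment."""
    burden = initial_burden
    trajectory = []
    for _ in range(days):
        burden += burden * (growth_rate - treatment_effect)
        trajectory.append(burden)
    return trajectory

untreated = progression(1.0, growth_rate=0.05, treatment_effect=0.0, days=30)
treated = progression(1.0, growth_rate=0.05, treatment_effect=0.08, days=30)
print(round(untreated[-1], 2), round(treated[-1], 2))  # treated burden declines

Real systems-medicine models are typically multi-compartment systems of differential equations fitted to omics and clinical data; this sketch only conveys the basic idea of comparing trajectories with and without an intervention.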
0.783281
0.950735
0.744693
Biological activity
In pharmacology, biological activity or pharmacological activity describes the beneficial or adverse effects of a drug on living matter. When a drug is a complex chemical mixture, this activity is exerted by the substance's active ingredient or pharmacophore but can be modified by the other constituents. Among the various properties of chemical compounds, pharmacological/biological activity plays a crucial role since it suggests uses of the compounds in medical applications. However, chemical compounds may show adverse and toxic effects which may prevent their use in medical practice. Biological activity is usually measured by a bioassay, and the activity is generally dosage-dependent, which is investigated via dose-response curves. Further, it is common for one substance to show effects ranging from beneficial to adverse as the dose increases from low to high. Activity depends critically on fulfillment of the ADME criteria. To be an effective drug, a compound not only must be active against a target, but must also possess the appropriate ADME (Absorption, Distribution, Metabolism, and Excretion) properties necessary to make it suitable for use as a drug. Because of the costs of measurement, biological activities are often predicted with computational methods, so-called QSAR models. Bioactivity is a key property that promotes osseointegration for bonding and better stability of dental implants. Bioglass coatings exhibit high surface area and reactivity, leading to an effective interaction between the coating material and the surrounding bone tissue. In the biological environment, the formation of a layer of carbonated hydroxyapatite (CHA) initiates bonding to the bone tissue. The bioglass surface coating undergoes leaching/exchange of ions, dissolution of glass, and formation of a hydroxyapatite layer that promotes the cellular response of tissues. The high specific surface area of bioactive glasses is likely to induce quicker solubility of the material, availability of ions in the surrounding area, and enhanced protein adsorption ability. These factors together contribute toward the bioactivity of bioglass coatings. In addition, tissue mineralization (bone, teeth) is promoted while tissue-forming cells are in direct contact with bioglass materials. Whereas a material is considered bioactive if it has an interaction with or effect on any cell tissue in the human body, pharmacological activity is usually taken to describe beneficial effects, i.e. the effects of drug candidates, as well as a substance's toxicity. In the study of biomineralisation, bioactivity is often taken to mean the formation of calcium phosphate deposits on the surface of objects placed in simulated body fluid, a buffer solution with an ion content similar to blood. See also Chemical property Chemical structure Lipinski's rule of five, describing molecular properties of drugs Molecular property Physical property QSAR, quantitative structure-activity relationship References Pharmacodynamics Bioactivity
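The dose-response relationship mentioned above is commonly summarised with a sigmoidal, Hill-type curve. The following is a minimal sketch, not part of the original article: the Hill equation is a standard way of expressing such curves, but the EC50 and Hill coefficient used here are hypothetical values chosen only for illustration.

# Minimal sketch: fraction of maximal response predicted by a Hill-type
# dose-response equation. The EC50 and Hill coefficient below are hypothetical.

def hill_response(dose: float, ec50: float = 10.0, hill_coefficient: float = 1.5) -> float:
    """Fractional response = dose^n / (EC50^n + dose^n)."""
    if dose < 0:
        raise ValueError("dose must be non-negative")
    return dose ** hill_coefficient / (ec50 ** hill_coefficient + dose ** hill_coefficient)

for dose in (0.1, 1.0, 10.0, 100.0):  # arbitrary doses in arbitrary units
    print(dose, round(hill_response(dose), 3))  # response rises with dose and saturates

In a real bioassay the curve parameters would be fitted to measured responses, and a separate curve would typically be fitted for adverse effects to estimate the margin between beneficial and toxic doses.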
0.762598
0.976494
0.744673
Case report
In medicine, a case report is a detailed report of the symptoms, signs, diagnosis, treatment, and follow-up of an individual patient. Case reports may contain a demographic profile of the patient, but usually describe an unusual or novel occurrence. Some case reports also contain a literature review of other reported cases. Case reports are professional narratives that provide feedback on clinical practice guidelines and offer a framework for early signals of effectiveness, adverse events, and cost. They can be shared for medical, scientific, or educational purposes. Types Most case reports are on one of six topics: An unexpected association between diseases or symptoms. An unexpected event in the course of observing or treating a patient. Findings that shed new light on the possible pathogenesis of a disease or an adverse effect. Unique or rare features of a disease. Unique therapeutic approaches. A positional or quantitative variation of the anatomical structures. Roles in research and education A case report is generally considered a type of anecdotal evidence. Given their intrinsic methodological limitations, including lack of statistical sampling, case reports are placed at the bottom of the hierarchy of clinical evidence, together with case series. Nevertheless, case reports do have genuinely useful roles in medical research and evidence-based medicine. In particular, they have facilitated recognition of new diseases and adverse effects of treatments (e.g., recognition of the link between administration of thalidomide to mothers and malformations in their babies was triggered by a case report). Case reports have a role in pharmacovigilance. They can also help understand the clinical spectrum of rare diseases as well as unusual presentations of common diseases. They can help generate study hypotheses, including plausible mechanisms of disease. Case reports may also have a role to play in guiding the personalization of treatments in clinical practice. Proponents of case reports have outlined some particular advantages of the format. Case reports and series have a high sensitivity for detecting novelty and therefore remain one of the cornerstones of medical progress; they provide many new ideas in medicine. Whereas randomized clinical trials usually only inspect one variable or very few variables, rarely reflecting the full picture of a complicated medical situation, the case report can detail many different aspects of the patient's medical situation (e.g. patient history, physical examination, diagnosis, psychosocial aspects, follow up). Because typical, unremarkable cases are less likely to be published, use of case reports as scientific evidence must take into account publication bias. Some case reports also contain an extensive review of the relevant literature on the topic at-hand (and sometimes a systematic review of available evidence). Reports adopting this sort of approach can be identified by terms such as a "case report and review of the literature". Reports containing broader active research such as this might be considered case studies in the true definition of the term. Case reports can also play a relevant role in medical education, providing a structure for case-based learning. 
A particular attraction of case reports is the possibility of quick publication (with respect to more extensive studies such as randomized control trials), allowing them to act as a kind of rapid short communication between busy clinicians who may not have the time or resources to conduct large scale research. Reporting guidelines The quality of the scientific reporting of case reports is variable, and sub-optimal reporting hinders the use of case reports to inform research design or help guide clinical practice. In response to these issues, reporting guidelines are under development to facilitate greater transparency and completeness in the provision of relevant information for individual cases. The CARE (i.e. CAse REport) guidelines include a reporting checklist that is listed on the EQUATOR Network, an international initiative aimed at promoting transparent and accurate reporting of health research studies to enhance the value and reliability of medical research literature. This 13-item checklist includes indications regarding the title, key words, abstract, introduction, patient information, clinical findings, timeline, diagnostic assessment, therapeutic interventions, follow-up and outcomes, discussion, patient perspective, and informed consent. An explanation and elaboration article (a manual for writing case reports following the CARE guidelines) was published in the Journal of Clinical Epidemiology in 2017. Publishing Many international journals publish case reports, but they restrict the number that appear in the print run because this has an adverse effect on the journal's impact factor. Case reports are often published online, and there is often still a requirement for a subscription to access them. However, an increasing number of journals are devoted to publishing case reports alone, most of which are open access. The first of these to start publishing, in 2001, was Grand Rounds. There are a number of websites that allow patients to submit and share their own patient case reports with other people. PatientsLikeMe and Treatment Report are two such sites. Use of terminology outside science The term is also used to describe non-scientific reports usually prepared for educational reasons. Famous scientific case reports Sigmund Freud reported on numerous cases, including Anna O., Dora, Little Hans, Rat Man, and Wolf Man Frederick Treves reported on "The Elephant Man" Paul Broca reported on language impairment following left hemisphere lesions in the 1860s. Joseph Jules Dejerine reported on a case of pure alexia. William MacIntyre reported on a case of multiple myeloma (described in the 1840s). Christiaan Barnard described the world's first heart transplant as a case report W. G. McBride, Thalidomide Case Report (1961). The Lancet 2:1358. See also Case series Case presentation References Further reading External links Case reports – The CARE guidelines Medical terminology Medical literature Clinical research Reports
0.761012
0.978473
0.74463
Primary nutritional groups
Primary nutritional groups are groups of organisms, divided according to their mode of nutrition, that is, according to the sources of energy and carbon needed for living, growth and reproduction. The sources of energy can be light or chemical compounds; the sources of carbon can be of organic or inorganic origin. The terms aerobic respiration, anaerobic respiration and fermentation (substrate-level phosphorylation) do not refer to primary nutritional groups, but simply reflect the different use of possible electron acceptors in particular organisms, such as oxygen in aerobic respiration, nitrate, sulfate or fumarate in anaerobic respiration, or various metabolic intermediates in fermentation. Primary sources of energy Phototrophs absorb light in photoreceptors and transform it into chemical energy. Chemotrophs release chemical energy. The freed energy is stored as potential energy in ATP, carbohydrates, or proteins. Eventually, the energy is used for life processes such as movement, growth and reproduction. Plants and some bacteria can alternate between phototrophy and chemotrophy, depending on the availability of light. Primary sources of reducing equivalents Organotrophs use organic compounds as electron/hydrogen donors. Lithotrophs use inorganic compounds as electron/hydrogen donors. The electrons or hydrogen atoms from reducing equivalents (electron donors) are needed by both phototrophs and chemotrophs in reduction-oxidation reactions that transfer energy in the anabolic processes of ATP synthesis (in heterotrophs) or biosynthesis (in autotrophs). The electron or hydrogen donors are taken up from the environment. Organotrophic organisms are often also heterotrophic, using organic compounds as sources of both electrons and carbon. Similarly, lithotrophic organisms are often also autotrophic, using inorganic sources of electrons and carbon dioxide as their inorganic carbon source. Some lithotrophic bacteria can utilize diverse sources of electrons, depending on the availability of possible donors. The organic or inorganic substances (e.g., oxygen) used as electron acceptors needed in the catabolic processes of aerobic or anaerobic respiration and fermentation are not taken into account here. For example, plants are lithotrophs because they use water as their electron donor for the electron transport chain across the thylakoid membrane. Animals are organotrophs because they use organic compounds as electron donors to synthesize ATP (plants also do this, but this is not taken into account). Both use oxygen in respiration as the electron acceptor, but this character is not used to define them as lithotrophs. Primary sources of carbon Heterotrophs metabolize organic compounds to obtain carbon for growth and development. Autotrophs use carbon dioxide as their source of carbon. 
Energy and carbon Classification of organisms based on their metabolism (each prefix combines with the common suffix -troph): Energy source: light → photo-; chemical compounds → chemo-. Electron donor: organic compounds → organo-; inorganic compounds → litho-. Carbon source: organic compounds → hetero-; carbon dioxide → auto-. A chemoorganoheterotrophic organism is one that requires organic substrates to get its carbon for growth and development, and that obtains its energy from the decomposition of an organic compound. This group of organisms may be further subdivided according to what kind of organic substrate and compound they use. Decomposers are examples of chemoorganoheterotrophs which obtain carbon and electrons or hydrogen from dead organic matter. Herbivores and carnivores are examples of organisms that obtain carbon and electrons or hydrogen from living organic matter. Chemoorganotrophs are organisms which use the chemical energy in organic compounds as their energy source and obtain electrons or hydrogen from the organic compounds, including sugars (e.g. glucose), fats and proteins. Chemoheterotrophs also obtain the carbon atoms that they need for cellular function from these organic compounds. All animals are chemoheterotrophs (meaning they oxidize chemical compounds as a source of energy and carbon), as are fungi, protozoa, and some bacteria. The important differentiation amongst this group is that chemoorganotrophs oxidize only organic compounds while chemolithotrophs instead use the oxidation of inorganic compounds as a source of energy. Primary metabolism table The following table gives some examples for each nutritional group: *Some authors use -hydro- when the source is water. The common final part -troph is from Ancient Greek "nutrition". Mixotrophs Some, usually unicellular, organisms can switch between different metabolic modes, for example between photoautotrophy, photoheterotrophy, and chemoheterotrophy in Chroococcales. Rhodopseudomonas palustris – another example – can grow with or without oxygen and can use light, inorganic compounds, or organic compounds for energy. Such mixotrophic organisms may dominate their habitat, owing to their capability to use more resources than either photoautotrophic or organoheterotrophic organisms. Examples All sorts of combinations may exist in nature, but some are more common than others. For example, most plants are photolithoautotrophic, since they use light as an energy source, water as electron donor, and carbon dioxide as a carbon source. All animals and fungi are chemoorganoheterotrophic, since they use organic substances both as chemical energy sources and as electron/hydrogen donors and carbon sources. Some eukaryotic microorganisms, however, are not limited to just one nutritional mode. For example, some algae live photoautotrophically in the light, but shift to chemoorganoheterotrophy in the dark. Even higher plants retained their ability to respire heterotrophically on starch at night, which had been synthesised phototrophically during the day. 
Prokaryotes show a great diversity of nutritional categories. For example, cyanobacteria and many purple sulfur bacteria can be photolithoautotrophic, using light for energy, water or sulfide as electron/hydrogen donors, and carbon dioxide as a carbon source, whereas green non-sulfur bacteria can be photoorganoheterotrophic, using organic molecules as both electron/hydrogen donors and carbon sources. Many bacteria are chemoorganoheterotrophic, using organic molecules as energy, electron/hydrogen and carbon sources. Some bacteria are limited to only one nutritional group, whereas others are facultative and switch from one mode to the other, depending on the nutrient sources available. Sulfur-oxidizing bacteria, iron bacteria, and anammox bacteria, as well as methanogens, are chemolithoautotrophs, using inorganic energy, electron, and carbon sources. Chemolithoheterotrophs are rare because heterotrophy implies the availability of organic substrates, which can also serve as easy electron sources, making lithotrophy unnecessary. Photoorganoautotrophs are uncommon since their organic source of electrons/hydrogen would also provide an easy carbon source, resulting in heterotrophy. Synthetic biology efforts have enabled the transformation of the trophic mode of two model microorganisms from heterotrophy to chemoorganoautotrophy: Escherichia coli was genetically engineered and then evolved in the laboratory to use carbon dioxide as the sole carbon source while using the one-carbon molecule formate as the source of electrons. The methylotrophic yeast Pichia pastoris was genetically engineered to use carbon dioxide as the carbon source instead of methanol, while the latter remained the source of electrons for the cells. See also Autotrophic Chemosynthesis Chemotrophic Heterotrophic Lithotrophic Metabolism Mixotrophic Organotrophic Phototrophic Notes and references Trophic ecology Physiology
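The naming scheme summarised above under Energy and carbon can be read as a simple lookup: one prefix per criterion (energy source, electron donor, carbon source), joined with the suffix -troph. The following is a minimal sketch, not part of the original article; the input labels are assumptions chosen only to mirror the classification described in the text.

# Minimal sketch of the prefix-composition scheme described above:
# energy source + electron donor + carbon source + "troph".

ENERGY = {"light": "photo", "chemical": "chemo"}
DONOR = {"organic": "organo", "inorganic": "litho"}
CARBON = {"organic": "hetero", "carbon dioxide": "auto"}

def trophic_name(energy: str, donor: str, carbon: str) -> str:
    """Compose a primary nutritional group name from the three criteria."""
    return ENERGY[energy] + DONOR[donor] + CARBON[carbon] + "troph"

# Most plants: light energy, inorganic (water) electron donor, CO2 carbon source.
print(trophic_name("light", "inorganic", "carbon dioxide"))  # photolithoautotroph
# Animals and fungi: chemical energy, organic donor, organic carbon source.
print(trophic_name("chemical", "organic", "organic"))        # chemoorganoheterotroph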
0.764041
0.97457
0.744611
Social medicine
Social medicine is an interdisciplinary field that focuses on the profound interplay between socio-economic factors and individual health outcomes. Rooted in the challenges of the Industrial Revolution, it seeks to: Understand how specific social, economic, and environmental conditions directly impact health, disease, and the delivery of medical care. Promote conditions and interventions that address these determinants, aiming for a healthier and more equitable society. Social medicine began to emerge gradually as a scientific field in the early 19th century, when the Industrial Revolution and the subsequent increase in poverty and disease among workers raised concerns about the effect of social processes on the health of the poor. The field of social medicine is most commonly addressed today by efforts to understand what are known as social determinants of health. Scope The major emphasis on biomedical science in medical education, health care, and medical research has resulted in a gap in our understanding and acknowledgement of far more important social determinants of health and individual disease: social-economic inequalities, war, illiteracy, detrimental life-styles (smoking, obesity), and discrimination because of race, gender and religion. Farmer et al. (2006) gave the following explanation for this gap: The holy grail of modern medicine remains the search for a molecular basis of disease. While the practical yield of such circumscribed inquiry has been enormous, exclusive focus on molecular-level phenomena has contributed to the increasing "desocialization" of scientific inquiry: a tendency to ask only biological questions about what are in fact biosocial phenomena. They further concluded that "Biosocial understandings of medical phenomena are urgently needed". Social medicine is a vast and evolving field, and its scope can cover a wide range of topics that touch on the intersection of society and health. The scope of social medicine includes: Social Determinants of Health: Investigation of how factors like income, education, employment, race, gender, housing, and social support impact health outcomes. Health Equity and Disparities: Studying the disparities in health outcomes among different groups based on racial, economic, gender, or other sociodemographic factors and creating strategies to promote equal health opportunities for all. Health Systems and Policies: Evaluating how different healthcare systems, structures, and policies impact health outcomes. This includes assessing the effectiveness of public health campaigns, insurance models, and health-related legislation. Environmental Health: Understanding how environmental factors such as pollution, climate change, and access to clean water and sanitation affect health. Global Health: Addressing health concerns that transcend national borders, such as epidemics, pandemics, or the health impacts of globalization. Cultural Competency: Training healthcare professionals to understand and respect cultural differences in patient care. This involves understanding diverse health beliefs, values, and behaviors. Migration and Health: Studying the health implications of migration, whether it is due to conflict, economic reasons, or other factors. This includes looking at issues like refugee health, healthcare access for undocumented migrants, and more. Urbanization and Health: Analyzing the impact of urban living conditions, urban development, and city policies on health. 
Mental Health: Delving into how social factors like stigma, discrimination, social isolation, and traumatic events impact mental health and well-being. Violence and Health: Investigating the health implications of different forms of violence, including domestic violence, community violence, and structural violence, and developing strategies to prevent and address these impacts. Occupational Health: Examining the health impacts of different work environments, job roles, and organizational structures. Substance Use and Addiction: Analyzing the social determinants and implications of substance use, including policies and societal attitudes toward different substances. Community Engagement and Empowerment: Working with communities to identify their health needs, co-create interventions, and mobilize resources to promote health. Medical Education: Integrating social medicine topics into medical curricula to ensure that healthcare professionals are equipped to address the social aspects of health and illness. Interdisciplinary Collaboration: Working with professionals from diverse fields, such as anthropology, sociology, economics, and urban planning, to address complex health challenges. Comparison with Public Health While there is some overlap between social medicine and public health , there are distinctions between the two fields. Distinct from public health, which concentrates on the health of entire populations and encompasses broad strategies for disease prevention and health promotion, social medicine dives deeper into the societal structures and conditions that lead to health disparities among different groups. Its approach is often more qualitative, honing in on the lived experiences of individuals within their social contexts. While public health might launch broad-spectrum interventions like vaccination campaigns or sanitation drives, social medicine probes the underlying socio-economic reasons why certain communities might be disproportionately affected by health challenges. The ultimate goal of social medicine is to ensure that societal structures support the health of all members, particularly those most vulnerable or marginalized. Social Medicine: Focus: Primarily on the socio-economic factors that affect health and how these can be addressed to promote better health outcomes. Approach: It delves deeper into the relationship between society and individual health. This includes the impacts of discrimination, inequality, poverty, and other social determinants. Historical Context: Originated during the Industrial Revolution as a response to the health challenges faced by the working class due to industrialization. Goal: To use the understanding of socio-economic factors to influence healthcare practices and policy to bring about a healthier society. Public Health: Focus: On the health of the general population, aiming to prevent disease and promote health at a community or population level. Approach: It encompasses a broader set of tools and strategies, ranging from disease surveillance, health education, policy recommendations, and health promotion initiatives. Historical Context: Has its roots in controlling infectious diseases, ensuring clean water and sanitation, and other community-wide health initiatives. Goal: To improve health outcomes through community interventions, policy, and education, often utilizing epidemiological studies and data analysis. To visualize the difference: Imagine a city facing an outbreak of a disease. 
A public health approach might involve vaccination campaigns, public health advisories, and quarantine measures. A social medicine approach might delve into why certain communities within the city are more affected than others, looking at housing conditions, employment status, racial or socio-economic discrimination, and other societal factors, and then proposing solutions based on these insights. Both fields recognize the importance of the social determinants of health but approach the topic from slightly different angles and with varying emphases. In practice, there's a lot of collaboration and overlap between social medicine and public health, as both are essential for a holistic approach to health and wellness. Social care Social care aims to promote wellness and emphasizes preventive, ameliorative, and maintenance efforts during illness, impairment, or disability. It adopts a holistic perspective on health and encompasses a variety of practices and viewpoints aimed at disease prevention and reduction of the economic, social, and psychological burdens associated with prolonged illnesses and diseases. The social model was developed as a direct response to the medical model; it sees barriers (physical, attitudinal and behavioural) not just as a biomedical issue, but as caused in part by the society we live in – as a product of the physical, organizational and social worlds that lead to discrimination (Oliver 1996; French 1993; Oliver and Barnes 1993). Social care advocates equality of opportunities for vulnerable sections of society. History German physician Rudolf Virchow (1821–1902) laid foundations for this model. Other prominent figures in the history of social medicine, beginning from the 20th century, include Salvador Allende, Henry E. Sigerist, Thomas McKeown, Victor W. Sidel, Howard Waitzkin, and more recently Paul Farmer and Jim Yong Kim. In The Second Sickness, Waitzkin traces the history of social medicine from Engels, through Virchow and Allende. Waitzkin has sought to educate North Americans about the contributions of Latin American Social Medicine. In 1976, the British public health scientist and health care critic Thomas McKeown, MD, published "The role of medicine: Dream, mirage or nemesis?", wherein he summarized facts and arguments that supported what became known as McKeown's thesis, i.e. that the growth of population can be attributed to a decline in mortality from infectious diseases, primarily thanks to better nutrition, later also to better hygiene, and only marginally and late to medical interventions such as antibiotics and vaccines. McKeown was heavily criticized for his controversial ideas, but is nowadays remembered as "the founder of social medicine". Occupational Health & Social Medicine The world of work played a fundamental role in the development of a social approach to health during the first industrial revolution, as exemplified by Virchow's work on typhus and coal miners. Over the past 50 years, however, Occupational Safety and Health (OSH) has largely developed as a specialized field focused on work-related risks. The resulting distinction between work-related and non-work-related risks and outcomes has served as an artificial line of demarcation between OSH and the rest of public health. However, growing social inequality, the fundamental reorganization of the world of work, and a broadening of our understanding of the relationship between work and health have blurred this line of demarcation and highlight the need to expand and complement the reductionist view of cause and effect. 
In response, OSH is reintegrating a social approach to account for the social, political, and economic interactions that contribute to occupational health outcomes. See also Epidemiology Medical anthropology Medical sociology Social determinants of health in poverty Social epidemiology Social psychology Socialized medicine Society for Social Medicine References Bibliography Social Medicine: http://journals.sfu.ca/socialmedicine/index.php/socialmedicine/index Social Medicine Portal: http://www.socialmedicine.org/ Matthew R. Anderson, Lanny Smith, and Victor W. Sidel. What is Social Medicine? Monthly Review: 56(8). http://www.monthlyreview.org/0105anderson.htm King NMP, Strauss RP, Churchill LR, Estroff SE, Henderson GE, et al. editors (2005) Patients, doctors, and illness. Volume I: The social medicine reader 2nd edition Durham: Duke University Press. Henderson GE, Estroff SE, Churchill LR, King NMP, Oberlander J, et al. editors (2005) Social and cultural contributions to health, difference, and inequality. Volume II: The social medicine reader 2nd edition Durham: Duke University Press. Oberlander J, Churchill LR, Estroff SE, Henderson GE, King NMP, et al. editors (2005) Health policy, markets, and medicine. Volume III: The social medicine reader 2nd edition Durham: Duke University Press. External links Introduction to the journal: Social Medicine What is social medicine? Anthropology Determinants of health Medical terminology History of medicine Medical sociology Public health Social philosophy
0.762904
0.975985
0.744583
Health physics
Health physics, also referred to as the science of radiation protection, is the profession devoted to protecting people and their environment from potential radiation hazards, while making it possible to enjoy the beneficial uses of radiation. Health physicists normally require a four-year bachelor’s degree and qualifying experience that demonstrates a professional knowledge of the theory and application of radiation protection principles and closely related sciences. Health physicists principally work at facilities where radionuclides or other sources of ionizing radiation (such as X-ray generators) are used or produced; these include research, industry, education, medical facilities, nuclear power, military, environmental protection, enforcement of government regulations, and decontamination and decommissioning—the combination of education and experience for health physicists depends on the specific field in which the health physicist is engaged. Sub-specialties There are many sub-specialties in the field of health physics, including Ionising radiation instrumentation and measurement Internal dosimetry and external dosimetry Radioactive waste management Radioactive contamination, decontamination and decommissioning Radiological engineering (shielding, holdup, etc.) Environmental assessment, radiation monitoring and radon evaluation Operational radiation protection/health physics Particle accelerator physics Radiological emergency response/planning - (e.g., Nuclear Emergency Support Team) Industrial uses of radioactive material Medical health physics Public information and communication involving radioactive materials Biological effects/radiation biology Radiation standards Radiation risk analysis Nuclear power Radioactive materials and homeland security Radiation protection Nanotechnology Operational health physics The subfield of operational health physics, also called applied health physics in older sources, focuses on field work and the practical application of health physics knowledge to real-world situations, rather than basic research. Medical physics The field of Health Physics is related to the field of medical physics and they are similar to each other in that practitioners rely on much of the same fundamental science (i.e., radiation physics, biology, etc.) in both fields. Health physicists, however, focus on the evaluation and protection of human health from radiation, whereas medical health physicists and medical physicists support the use of radiation and other physics-based technologies by medical practitioners for the diagnosis and treatment of disease. Radiation protection instruments Practical ionising radiation measurement is essential for health physics. It enables the evaluation of protection measures, and the assessment of the radiation dose likely, or actually received by individuals. The provision of such instruments is normally controlled by law. In the UK it is the Ionising Radiation Regulations 1999. The measuring instruments for radiation protection are both "installed" (in a fixed position) and portable (hand-held or transportable). Installed instruments Installed instruments are fixed in positions which are known to be important in assessing the general radiation hazard in an area. Examples are installed "area" radiation monitors, Gamma interlock monitors, personnel exit monitors, and airborne contamination monitors. 
The area monitor will measure the ambient radiation, usually X-ray, gamma or neutron radiation; these are radiations which can have significant radiation levels over a range in excess of tens of metres from their source, and thereby cover a wide area. Interlock monitors are used in applications to prevent inadvertent exposure of workers to an excess dose by preventing personnel access to an area when a high radiation level is present. Airborne contamination monitors measure the concentration of radioactive particles in the atmosphere to guard against radioactive particles being deposited in the lungs of personnel. Personnel exit monitors are used to monitor workers who are exiting a "contamination controlled" or potentially contaminated area. These can be in the form of hand monitors, clothing frisk probes, or whole body monitors. These monitor the surface of the worker's body and clothing to check if any radioactive contamination has been deposited. These generally measure alpha or beta or gamma, or combinations of these. The UK National Physical Laboratory has published a good practice guide through its Ionising Radiation Metrology Forum concerning the provision of such equipment and the methodology of calculating the alarm levels to be used. Portable instruments Portable instruments are hand-held or transportable. The hand-held instrument is generally used as a survey meter to check an object or person in detail, or assess an area where no installed instrumentation exists. They can also be used for personnel exit monitoring or personnel contamination checks in the field. These generally measure alpha, beta or gamma, or combinations of these. Transportable instruments are generally instruments that would have been permanently installed, but are temporarily placed in an area to provide continuous monitoring where it is likely there will be a hazard. Such instruments are often installed on trolleys to allow easy deployment, and are associated with temporary operational situations. Instrument types A number of commonly used detection instruments are listed below. ionization chambers proportional counters Geiger counters Semiconductor detectors Scintillation detectors The links should be followed for a fuller description of each. Guidance on use In the United Kingdom the HSE has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. This covers all ionising radiation instrument technologies, and is a useful comparative guide. Radiation dosimeters Dosimeters are devices worn by the user which measure the radiation dose that the user is receiving. Common types of wearable dosimeters for ionizing radiation include: Quartz fiber dosimeter Film badge dosimeter Thermoluminescent dosimeter Solid state (MOSFET or silicon diode) dosimeter Units of measure Absorbed dose The fundamental units do not take into account the amount of damage done to matter (especially living tissue) by ionizing radiation. This is more closely related to the amount of energy deposited rather than the charge. This is called the absorbed dose. The gray (Gy), with units J/kg, is the SI unit of absorbed dose, which represents the amount of radiation required to deposit 1 joule of energy in 1 kilogram of any kind of matter. The rad (radiation absorbed dose) is the corresponding traditional unit, which is 0.01 J deposited per kg. 100 rad = 1 Gy. Equivalent dose Equal doses of different types or energies of radiation cause different amounts of damage to living tissue. 
For example, 1 Gy of alpha radiation causes about 20 times as much damage as 1 Gy of X-rays. Therefore, the equivalent dose was defined to give an approximate measure of the biological effect of radiation. It is calculated by multiplying the absorbed dose by a weighting factor WR, which is different for each type of radiation (see table at Relative biological effectiveness#Standardization). This weighting factor is also called the Q (quality factor), or RBE (relative biological effectiveness of the radiation). The sievert (Sv) is the SI unit of equivalent dose. Although it has the same units as the gray, J/kg, it measures something different. For a given type and dose of radiation(s) applied to a certain body part(s) of a certain organism, it measures the magnitude of an X-ray or gamma radiation dose applied to the whole body of the organism such that the probabilities of the two scenarios inducing cancer are the same according to current statistics. The rem (Roentgen equivalent man) is the traditional unit of equivalent dose. 1 sievert = 100 rem. Because the rem is a relatively large unit, typical equivalent dose is measured in millirem (mrem), 10⁻³ rem, or in microsievert (μSv), 10⁻⁶ Sv. 1 mrem = 10 μSv. A unit sometimes used for low-level doses of radiation is the BRET (Background Radiation Equivalent Time). This is the number of days of an average person's background radiation exposure the dose is equivalent to. This unit is not standardized, and depends on the value used for the average background radiation dose. Using the 2000 UNSCEAR value (below), one BRET unit is equal to about 6.6 μSv. For comparison, the average 'background' dose of natural radiation received by a person per day, based on the 2000 UNSCEAR estimate, is about 6.6 μSv (660 μrem), which is the value of one BRET unit. However, local exposures vary, with the yearly average in the US being around 3.6 mSv (360 mrem), and in a small area in India as high as 30 mSv (3 rem). The lethal full-body dose of radiation for a human is around 4–5 Sv (400–500 rem). History In 1898, the Röntgen Society (currently the British Institute of Radiology) established a committee on X-ray injuries, thus initiating the discipline of radiation protection. The term "health physics" According to Paul Frame: "The term Health Physics is believed to have originated in the Metallurgical Laboratory at the University of Chicago in 1942, but the exact origin is unknown. The term was possibly coined by Robert Stone or Arthur Compton, since Stone was the head of the Health Division and Arthur Compton was the head of the Metallurgical Laboratory. The first task of the Health Physics Section was to design shielding for reactor CP-1 that Enrico Fermi was constructing, so the original HPs were mostly physicists trying to solve health-related problems. The explanation given by Robert Stone was that '...the term Health Physics has been used on the Plutonium Project to define that field in which physical methods are used to determine the existence of hazards to the health of personnel.' A variation was given by Raymond Finkle, a Health Division employee during this time frame. 'The coinage at first merely denoted the physics section of the Health Division... the name also served security: 'radiation protection' might arouse unwelcome interest; 'health physics' conveyed nothing.'" Radiation-related quantities Radiation quantities can be expressed in both SI and non-SI units, as with the gray and the rad for absorbed dose and the sievert and the rem for equivalent dose. 
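As a brief illustrative sketch, not part of the original text, the following Python snippet encodes the unit relations stated above (100 rad = 1 Gy, 1 Sv = 100 rem, 1 mrem = 10 μSv) and the equivalent-dose relation H = WR × D. The alpha weighting factor of 20 is an assumption consistent with the statement that 1 Gy of alpha radiation causes about 20 times the damage of 1 Gy of X-rays (WR = 1 for photons).

```python
GY_PER_RAD = 0.01     # 100 rad = 1 Gy
SV_PER_REM = 0.01     # 100 rem = 1 Sv
USV_PER_MREM = 10.0   # 1 mrem = 10 microsievert

def equivalent_dose_sv(absorbed_dose_gy: float, w_r: float) -> float:
    """Equivalent dose in sievert: H = w_R * D, with D the absorbed dose in gray."""
    return w_r * absorbed_dose_gy

if __name__ == "__main__":
    # 1 Gy of alpha radiation with an assumed weighting factor w_R = 20 -> 20 Sv.
    print(equivalent_dose_sv(1.0, 20))   # 20.0
    print(350 * GY_PER_RAD)              # 350 rad expressed in gray  -> 3.5
    print(2.5 / SV_PER_REM)              # 2.5 Sv expressed in rem    -> 250.0
    print(1.0 * USV_PER_MREM)            # 1 mrem expressed in uSv    -> 10.0
```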
Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union's units of measurement directives required that their use for "public health ... purposes" be phased out by 31 December 1985. See also Health Physics Society Certified Health Physicist Radiological Protection of Patients Radiation protection Society for Radiological Protection The principal UK body concerned with promoting the science and practice of radiation protection. It is the UK national body affiliated to IRPA. IRPA The International Radiation Protection Association, the international body concerned with promoting the science and practice of radiation protection. References External links The Health Physics Society, a scientific and professional organization whose members specialize in occupational and environmental radiation safety. "The confusing world of radiation dosimetry" - M.A. Boyd, 2009, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems. Q&A: Health effects of radiation exposure, BBC News, 21 July 2011. Nuclear safety and security Medical physics Radiation health effects Health physicists
0.766139
0.971842
0.744566
Functio laesa
Functio laesa is a term used in medicine to refer to a loss of function or a disturbance of function. It was identified as the fifth sign of acute inflammation by Galen, who added it to the four signs identified by Celsus (tumor, rubor, calor, and dolor). The attribution to Galen is disputed; the addition of this fifth sign has also been variously attributed to Thomas Sydenham and Rudolf Virchow. References Medical signs
0.765397
0.972731
0.744526
Complications of prolonged standing
The complications of prolonged standing are conditions that may arise after standing, walking, or running for prolonged periods. Many of the complications come from prolonged standing (more than 60% of a work day) that is repeated several times a week. Many jobs require prolonged standing, such as "retail staff, baristas, bartenders, assembly line workers, security staff, engineers, catering staff, library assistants, hair stylists and laboratory technicians". The basic physiological change that occurs in the body during prolonged standing, or on suddenly standing up from a supine position, is increased pooling of blood in the legs. This decreases venous return and therefore cardiac output, which ultimately causes systolic blood pressure to fall (hypotension). This hypotension may lead the subject to faint or to have other symptoms of hypotension. Standing requires about 10% more energy than sitting. Prevalence There are no exact measures of how prevalent the complications are. However, European studies report that between one third and one half of all workers spend at least four hours of their working time (for an average workday of eight hours) standing or walking. One estimate from the United Kingdom stated that over 11 million people stand for long periods of time without rest. Complications Slouching Proper posture is often referred to as a "neutral spine"; slouching is an improper posture or a "nonneutral spine". Slouching is often described as improper posture, movement or rigidity of the spine, especially the cervical and thoracic regions, in relation to other parts of the body. Varicose veins Varicose veins are veins that have become enlarged and twisted, especially within the legs, ankles and feet of an affected individual. When standing, gravity pulls the blood downwards to the lower part of the body. Body mechanisms, such as vasoconstriction and valves of the veins, assist in pumping blood upwards. As blood is pumped through the body, the valves within the veins prevent the blood from flowing backwards. After extensive, prolonged standing, these valves can become weak and eventually fail. When this happens, blood is no longer being prevented from flowing backward. Gravity will pull the blood back into an individual's legs, ankles and feet. This forces the veins to expand or "balloon" to accommodate this extra blood. The valves of the veins work best in concert with accompanying muscle contractions that force the blood to continue moving up the leg. Standing with some muscles constantly strained weakens these muscles and therefore the strength of the contractions. Varicose veins have also been associated with chronic heart and circulatory disorders and hypertension as well as complications related to pregnancy. Prolonged standing increases the risk for hospitalization from varicose veins. Among the working-age population, one out of five hospitalizations from varicose veins is a result of prolonged standing. Prolonged standing leads to impeded blood flow and stasis in the veins in the lower limbs, which can cause varicose veins. Cardiovascular disorders Standing for prolonged periods can lead to certain cardiovascular disorders. In a study by Krause et al. (2000), the authors examined the relationship between standing at work and the progression of carotid atherosclerosis in men. Standing for long periods can change the distribution of blood in the extremities. 
This in turn causes the blood to pool and reduces the circulating blood plasma volume, leading to hemodynamic changes that impact the body. The authors reported that long periods of standing at work were significantly associated with atherosclerotic progression. This study provides evidence that hemodynamic changes from standing can influence the progression of carotid atherosclerosis. The authors also found that men with carotid stenosis or ischemic heart disease were at greater risk for the progression of atherosclerosis. Atherosclerosis can lead to coronary artery disease, carotid artery disease, peripheral artery disease, and aneurysms. Joint compression Standing places significant pressure on the joints of the hips, knees, ankles and feet without any significant movement of them. This reduces the normal lubrication and cushioning of synovial joints, causing them to tear. The combined effect of pressure and tearing can cause extensive amounts of pain and make it difficult to move or walk. Muscle fatigue Muscles kept in a constant stress position quickly become exhausted and can result in pain and swelling in the lower back, legs, ankles and feet. The Occupational Safety and Health Administration (OSHA) has stated that muscle fatigue and musculoskeletal disorders account for "33% of all worker injury and illness". Considerable research has been conducted on the extent of muscle injuries, and all of it has concluded that these are expected but can be reduced with breaks and the availability of chairs. Research has shown that the body experiences muscle fatigue after standing for five hours; this fatigue persists for more than 30 minutes after the end of the work day according to electronic measurements of fatigue. The perception of fatigue is subjective and does not necessarily correlate with the experimental indicators of fatigue. Pregnancy Walking or standing more than six hours per day has been linked with pre-term births, low birth weights as well as high blood pressure for the mother. Researchers have found that working more than 25 hours a week has been "associated with slower rates of fetal growth". They also found that, on average, there are no "negative effects of working up to 36 weeks into pregnancy". Productivity A systematic review from Karakolis and Callaghan found that sit-stand workstations did not reduce worker productivity. Three of the reviewed studies found increased productivity when workers used sit-stand stations, four reported no impact on the productivity of workers, and one reported mixed results. Intervention There is no real way to prevent standing, but there are ways to mitigate time spent standing in the workplace. Experts suggest moving around and changing positions throughout the day. It is best not to sit in one position for more than 20 minutes, or to stand in one position for more than 8 minutes. If prolonged sitting in the workplace is required or desired, individuals should gradually transition to significant periods of standing. When transitioning from sitting to standing, individuals might experience musculoskeletal discomfort or fatigue while the body adapts. Companies should design workstations that are conducive to good health. Workstations should allow workers to choose between several working positions and to move easily between them. Additionally, workers should be able to adjust the height of their workstations to fit their body size. 
Other helpful aspects of workstations are footrests, elbow rests, and seats so workers can sit when they need to. Footwear The choice of footwear can change the impact of prolonged standing. Shoes should support the foot, have shock-absorbing cushions, fit the foot well, and be comfortable. Shoes should not be flat, have heels higher than 5 cm, or change the shape of the foot. There are also special insoles for shoes that can be used when soft mats or chairs are not available. Additionally, the floors in a work area should not be metal or concrete. It is best to have cork- or rubber-covered floors. Floors should not be slippery. Training and education are important components of avoiding complications from standing. Employees trained in ergonomics experience less muscle discomfort and are more productive when using sit-stand workstations than workers who are not trained. Floor mats Floor mats or anti-fatigue mats are used to prevent the complications associated with prolonged standing. A University of Loughborough study conducted by George Havenith and Lucy E. Dorman showed that "(dis)comfort sensations did show statistically significant improvements related to mat use." Proper floor mats can also be used to reduce stress on feet and knees. Anti-fatigue matting is recommended and launderable matting is preferred. A study investigating the effects of 4 different standing conditions on assembly workers showed that using mats and shoes with insoles was perceived as more comfortable for the workers than standing on hard floors without them. See also Neutral spine Ergonomics Sitting - Health risks References Injuries Spinal cord Physical exercise Ergonomics
0.760627
0.978784
0.74449
Fascial spaces of the head and neck
Fascial spaces (also termed fascial tissue spaces or tissue spaces) are potential spaces that exist between the fasciae and underlying organs and other tissues. In health, these spaces do not exist; they are only created by pathology, e.g. the spread of pus or cellulitis in an infection. The fascial spaces can also be opened during the dissection of a cadaver. The fascial spaces are different from the fasciae themselves, which are bands of connective tissue that surround structures, e.g. muscles. The opening of fascial spaces may be facilitated by pathogenic bacterial release of enzymes which cause tissue lysis (e.g. hyaluronidase and collagenase). The spaces filled with loose areolar connective tissue may also be termed clefts. Other contents such as salivary glands, blood vessels, nerves and lymph nodes are dependent upon the location of the space. Those containing neurovascular tissue (nerves and blood vessels) may also be termed compartments. Generally, the spread of infection is determined by barriers such as muscle, bone and fasciae. Pus moves by the path of least resistance, e.g. the fluid will more readily dissect apart loosely connected tissue planes, such the fascial spaces, than erode through bone or muscles. In the head and neck, potential spaces are primarily defined by the complex attachment of muscles, especially mylohyoid, buccinator, masseter, medial pterygoid, superior constrictor and orbicularis oris. Infections involving fascial spaces of the head and neck may give varying signs and symptoms depending upon the spaces involved. Trismus (difficulty opening the mouth) is a sign that the muscles of mastication (the muscles that move the jaw) are involved. Dysphagia (difficulty swallowing) and dyspnoea (difficulty breathing) may be a sign that the airway is being compressed by the swelling. Classification Different classifications are used. One method distinguishes four anatomic groups: The mandible and below The buccal vestibule The body of the mandible The mental space The submental space The sublingual space The submandibular space The cheek and lateral face The buccal vestibule of the maxilla The buccal space The submasseteric space The temporal space The pharyngeal and cervical areas The pterygomandibular space The parapharyngeal spaces The cervical spaces The midface The palate The base of the upper lip The canine spaces (infraorbital spaces) The periorbital spaces Since the hyoid bone is the most important anatomic structure in the neck that limits the spread of infection, the spaces can be classified according to their relation to the hyoid bone: Suprahyoid (above the hyoid) Infrahyoid (below the hyoid) Fascial spaces traversing the length of the neck In oral and maxillofacial surgery, the fascial spaces are almost always of relevance due to the spread of odontogenic infections. As such, the spaces can also be classified according to their relation to the upper and lower teeth, and whether infection may directly spread into the space (primary space), or must spread via another space (secondary space): Primary maxillary spaces Canine space Buccal space Infratemporal space Primary mandibular spaces Submental space Buccal space Submandibular space Sublingual space Submasseteric space Cervical spaces Perimandibular spaces The submaxillary space is a historical term for the combination of the submandibular, submental and sublingual spaces, which in modern practice are referred to separately or collectively termed the perimandibular spaces. 
The term submaxillary may be confusing to modern students and clinicians since these spaces are located below the mandible, but historically the maxilla and mandible together were termed "maxillae", and sometimes the mandible was termed the "inferior maxilla". Sometimes the term submaxillary space is used synonymously with submandibular space. Confusion exists, as some sources describe the sublingual and the submandibular spaces as compartments of the "submandibular space". Submandibular space Submental space Sublingual space Mental space Buccal space Canine space (infra-orbital space) Masticator space This term is sometimes used, and is a collective name for the submasseteric (masseteric), pterygomandibular, superficial temporal and deep temporal spaces. The infratemporal space is the inferior portion of the deep temporal space. The superficial temporal and the deep temporal spaces are sometimes together called the temporal spaces. The masticator spaces are paired structures on either side of the head. The muscles of mastication are enclosed in a layer of fascia, formed by cervical fascia ascending from the neck which divides at the inferior border of the mandible to envelope the area. Each masticator space also contains the sections of the mandibular division of the trigeminal nerve and the internal maxillary artery. The masticator space could therefore be described as a potential space with four separate compartments. Infections usually only occupy one of these compartments, but severe or long standing infections can spread to involve the entire masticator space. The compartments of the masticator space are located on either side of the mandibular ramus and on either side of the temporalis muscle. Submasseteric space This is also referred to as the masseter space or the superifical masticator space. The submasseteric space is logically located under (deep to) the masseter muscle, created by the insertions of masseter onto the lateral surface of the mandibular ramus. Submasseteric abscesses are rare and are associated with marked trismus. Pterygomandibular space The pterygomandibular space lies between the medial side of the ramus of the mandible and the lateral surface of the medial pterygoid muscle. Deep temporal space (infra-temporal space) The infra-temporal space is the inferior portion of the deep temporal space. Superficial temporal space History Modern understanding of the fascial spaces of the head and neck developed from the landmark research of Grodinsky and Holyoke in the 1930s. They injected a dye into cadavers to simulate pus. Their hypothesis was that infection in the head and neck mainly spread by hydrostatic pressure. This is now accepted to be true for most infections in the head and neck, with the exception of actinomycosis which tends to burrow into the skin, and mycotuberculoid infections which tend to spread via the lymphatics. References Human anatomy
0.767947
0.969431
0.744472
Mechanobiology
Mechanobiology is an emerging field of science at the interface of biology, engineering, chemistry and physics. It focuses on how physical forces and changes in the mechanical properties of cells and tissues contribute to development, cell differentiation, physiology, and disease. Cells experience mechanical forces and can interpret them to produce biological responses. The movement of joints, compressive loads on cartilage and bone during exercise, and shear pressure on blood vessels during blood circulation are all examples of mechanical forces in human tissues. A major challenge in the field is understanding mechanotransduction—the molecular mechanisms by which cells sense and respond to mechanical signals. While medicine has typically looked for the genetic and biochemical basis of disease, advances in mechanobiology suggest that changes in cell mechanics, extracellular matrix structure, or mechanotransduction may contribute to the development of many diseases, including atherosclerosis, fibrosis, asthma, osteoporosis, heart failure, and cancer. There is also a strong mechanical basis for many generalized medical disabilities, such as lower back pain, foot and postural injury, deformity, and irritable bowel syndrome. Load sensitive cells Fibroblasts Skin fibroblasts are vital in development and wound repair, and they are affected by mechanical cues like tension, compression and shear pressure. Fibroblasts synthesize structural proteins, some of which are mechanosensitive and form an integral part of the extracellular matrix (ECM), e.g. collagen types I, III, IV, V and VI, elastin and laminin. In addition to the structural proteins, fibroblasts make tumor necrosis factor-alpha (TNF-α), transforming growth factor-beta (TGF-β) and matrix metalloproteases, which play a role in tissue maintenance and remodeling. Chondrocytes Articular cartilage is the connective tissue that protects the bones of load-bearing joints such as the knee and shoulder by providing a lubricated surface. It deforms in response to compressive load, thereby reducing stress on bones. This mechanical responsiveness of articular cartilage is due to its biphasic nature; it contains both solid and fluid phases. The fluid phase is made up of water, which contributes 80% of the wet weight, and inorganic ions such as sodium, calcium and potassium ions. The solid phase is made up of porous ECM. The proteoglycans and interstitial fluids interact to give compressive strength to the cartilage through negative electrostatic repulsive forces. The difference in ion concentration between the extracellular and intracellular compositions of chondrocytes results in hydrostatic pressure. During development, the mechanical environment of the joint determines the surface and topology of the joint. In adults, moderate mechanical loading is required to maintain cartilage; immobilization of the joint leads to loss of proteoglycans and cartilage atrophy, while excess mechanical loading results in degeneration of the joint. Nuclear mechanobiology The nucleus is also responsive to mechanical signals, which are relayed from the extracellular matrix through the cytoskeleton with the help of Linker of Nucleoskeleton and Cytoskeleton (LINC) complex-associated proteins such as KASH and SUN. 
Examples of mechanical responses in the nucleus include the following: hyperosmotic challenge results in chromosome condensation and in the translocation and activation of Ataxia Telangiectasia and Rad3-related (ATR) kinase at the nuclear peripheral region, while mechanical stretching due to hypo-osmotic challenge and compression re-localizes and activates cPLA2 at the nuclear membrane. High nuclear tension on Lamin A hinders the access of kinases, thereby suppressing its degradation. Mechanobiology of embryogenesis The embryo is formed by self-assembly, through which cells differentiate into tissues performing specialized functions. It was previously believed that only chemical signals give cues that control spatially oriented changes in cell growth, differentiation and fate switching that mediate morphogenetic controls. This is based on the ability of chemical signals to induce biochemical responses like tissue patterning in distant cells. However, it is now known that mechanical forces generated within cells and tissues provide regulatory signals. During the division of the fertilized oocyte, cells aggregate and the compactness between cells increases with the help of actomyosin-dependent cytoskeletal traction forces and their application to adhesive receptors in neighboring cells, thereby leading to the formation of a solid ball of cells called the morula. The spindle positioning within symmetrically and asymmetrically dividing cells in the early embryo is controlled by mechanical forces mediated by microtubules and the actin microfilament system. Local variations in physical forces and mechanical cues, such as the stiffness of the ECM, also control the expression of genes that give rise to the embryonic developmental process of blastulation. The loss of the stiffness-controlled transcription factor Cdx leads to the ectopic expression of inner cell mass markers in the trophectoderm, and the pluripotency transcription factor Oct-4 may be negatively expressed, thereby inducing lineage switching. This cell fate switching is regulated by the mechanosensitive Hippo pathway. Applications The effectiveness of many of the mechanical therapies already in clinical use shows how important physical forces can be in physiological control. Several examples illustrate this point. Pulmonary surfactant promotes lung development in premature infants; modifying the tidal volumes of mechanical ventilators reduces morbidity and death in patients with acute lung injury. Expandable stents physically prevent coronary artery constriction. Tissue expanders increase the skin area available for reconstructive surgery. Surgical tension application devices are used for bone fracture healing, orthodontics, cosmetic breast expansion and closure of non-healing wounds. Insights into the mechanical basis of tissue regulation may also lead to development of improved medical devices, biomaterials, and engineered tissues for tissue repair and reconstruction. The known contributors to cellular mechanotransduction form a growing list and include stretch-activated ion channels, caveolae, integrins, cadherins, growth factor receptors, myosin motors, cytoskeletal filaments, nuclei, extracellular matrix, and numerous other signaling molecules. Endogenous cell-generated traction forces also contribute significantly to these responses by modulating the tensional prestress within cells, tissues, and organs that governs their mechanical stability, as well as mechanical signal transmission from the macroscale to the nanoscale. 
See also Biophysics References Branches of biology Biological engineering
0.77586
0.959507
0.744443
Metabolic bone disease
Metabolic bone disease is an abnormality of bones caused by a broad spectrum of disorders. Most commonly these disorders are caused by deficiencies of minerals such as calcium, phosphorus and magnesium, or of vitamin D, leading to dramatic clinical disorders that are commonly reversible once the underlying defect has been treated. These disorders are to be differentiated from a larger group of genetic bone disorders where there is a defect in a specific signaling system or cell type that causes the bone disorder. There may be overlap. For example, genetic or hereditary hypophosphatemia may cause the metabolic bone disorder osteomalacia. Although there is currently no treatment for the genetic condition, replacement of phosphate often corrects or improves the metabolic bone disorder. Metabolic bone disease in captive reptiles is also common, and is typically caused by calcium deficiency in a reptile's diet. Conditions considered to be metabolic bone disorders osteoporosis osteopenia osteomalacia (adults) & rickets (children) osteitis fibrosa cystica Paget's disease of bone pyramiding (turtles) Osteoporosis is due to causal factors such as disuse atrophy and gonadal deficiency. Hence osteoporosis is common in postmenopausal women and in men over 50 years of age. Hypercorticism may also be a causal factor, as osteoporosis may be seen as a feature of Cushing's syndrome. References External links Osteopathies
0.760124
0.979186
0.744303
The body in traditional Chinese medicine
The model of the body in traditional Chinese medicine (TCM) has the following elements: the Fundamental Substances (Qi or Energy, Jing or Essence, and Shen or Spirit) that nourish and protect the Zang-Fu organs; and the meridians (jing-luo) which connect and unify the body. Every diagnosis is a "Pattern of disharmony" that affects one or more organs, such as "Spleen Qi Deficiency" or "Liver Fire Blazing" or "Invasion of the Stomach by Cold", and every treatment is centered on correcting the disharmony. The traditional Chinese model is concerned with function. Thus, the TCM Spleen is not a specific piece of flesh, but an aspect of function related to transformation and transportation within the body, as well as the mental functions of thinking and studying. Indeed, the San Jiao or Triple Burner has no anatomical correspondent at all, and is said to be completely a functional entity. Chinese medicine and its model of the body are founded on the balance of the five elements: Earth, Metal, Water, Wood, and Fire. The elements are infinitely linked, consuming and influencing each other. Each element corresponds to different organs in the body. The organs act as representatives of the qualities of different elements, which impact the physical and mental body in respective ways. Each organ is categorized as either Yin or Yang. The energies of Yin and Yang are conflicting yet inter-reliant. When the two (Yin+Yang) forces are united they create a divine energy, which supports the flow of all life. Yin organs represent femininity, coldness, compression, darkness, and submission. Yang organs represent masculinity, expansion, heat, motion, and action. This duality (yin+yang) must be in balance or else disease of the mind and body will occur. Each organ governs energy channels, which distribute qi and connect all parts of the body to one another. These channels are called meridians. Wood Wood is an element of growth, originality, creativity, and evolution. The Liver (1) and the Gallbladder (2) are the two wood-governed organs in the body. (1) The Liver, a Yin organ, influences emotional flexibility and the flow of energy on a cellular level. The organ has a strong impact on the efficiency and effectiveness of the immune system along with storing the body's blood, a physical manifestation of one's true self. The Liver rules one's direction, vision, sense of self-purpose and opens into the eyes. Lastly, the Liver absorbs what is not digested and regulates blood sugar. Imbalance in the Liver can lead to great problems. Moodiness, anger, pain, poor self-esteem, lack of direction, addiction, and indecision are all associated with the Liver organ. Muscle spasms, numbness, tremors, eye diseases, hypertension, allergies, arthritis, and multiple sclerosis are also a result of Liver imbalances. The Liver Meridian begins on the big toe, runs along the inner leg through the genitals and ends on the chest. (2) The Gallbladder, a wood-controlled Yang organ, governs decisiveness and judgement. The Gallbladder also stores bile. Imbalance of the Gallbladder can lead to indecisiveness along with obesity. The Gallbladder meridian begins at the outer edge of the eye, moves to the side of the head and trunk, and ends on the outside of the fourth toe. Fire Fire is an element of transformation, demolition, primal power, and divinity. The Heart (1) and the Small Intestine (2) are the organs that fire controls. The Heart Protector (3) and the Triple Heater (4) are the organs that secondary (ministerial) fire controls. 
(1) The Heart, a Yin organ, regulates the pulse, manifests in the face and tongue, and bridges the connection between the human and the celestial. Dysfunction of the Heart leads to insomnia, disturbance of the spirit, and an irregular pulse. The Heart Meridian begins in the chest moves to the inner aspect of the arm down to the palm of the hand and ends on the pinky. (2)The Small Intestine, a Yang organ, separates pure food and fluid essences from the polluted. The pure essences are distributed to the spleen while the polluted are sent to the bladder and the large intestine. Dysfunction of the Small Intestine can lead to bowel problems and a sense of distrust of one's self. The Small Intestine Meridian begins on the pinky, moves to the underside of the arm, up to the top of the shoulder blade, the neck, and ends on the front of the ear. (3) The Heart Protector, a Yin organ, shields the heart. It filters psychic inclinations and stabilizes emotions. A problem with the Heart Protector can lead to anxiety and heart palpitations. The Heart Protector Meridian begins on the chest, travels through the armpit to the arm and ends on the top of the middle finger. (4) The Triple Heater, a Yang organ, disperses fluids throughout the body and regulates the relationship between all organs. The Triple Heater Meridian begins on the ring finger, moves up the back of the arm to the side of the neck, goes around the ear and ends of the eyebrow. Earth Earth is an element of fertility, cultivation, femininity, and wrath. Earth governs the Spleen (1) and the Stomach (2). (1) The Spleen, a Yin organ, regulates digestion and the metabolism. It also holds the flesh and organs in their proper place while directing the movement of ascending fluids and essences. Mentally, the Spleen aids in concentration. Imbalance of the Spleen leads worry and pensive behaviour, chi deficiencies, diarrhea, organ prolapses, and headaches, The Spleen Meridian begins at the big toe, moves to the inner aspect of the leg, up to the front of the torso, and ends on the side of the trunk. (2) The Stomach, the most active yang organ, breaks down food and controls the descending movement of chi. Imbalance of the stomach leads to vomiting and belching. The Stomach Meridian begins below the eye, moves down the front of the face, torso, to the outer part of the leg, and ends on the third toe. Metal Metal is an element of purity, treasure, and masculinity. Metal controls the Lungs (1) and the Large Intestine (2). (1) The Lungs, a Yin organ, draws in pure chi by inhalation and eliminates impurities by exhalation. The lungs also disperse bodily fluids, defend the body from a cold or flu, govern the sense of smell, and open in the nose. Dysfunction of the Lungs leads to colds, the flu, phlegm, and asthma. The Lung Meridian begins at the chest moves to the inner arm, palm, and ends on the thumb. (2) The Large Intestine, a Yang organ, controls the removal of waste and feces. Imbalance in the Large Intestine leads to constipation, diarrhea and the inability to emotionally detach and let go. The Large Intestine Meridian begins on the forefinger, moves to the back of the arm, shoulder, side of the neck, cheek, and ends beside the opposite nostril. Water Water is an element of life and death. Water governs the Kidneys (1) and the Bladder (2). (1) The Kidneys, a Yin organ, are the source of all the Yin and Yang energy in the body. 
The Kidneys also govern the endocrine system, receive air from the lungs, govern bones, govern teeth, control water in the body, and store essence. Dysfunction of the Kidneys leads to deficiencies of Yin or Yang. It also leads to imbalanced hormones, weak bones, an impaired sex drive, and dizziness. Water in excess leads to bipolar disorder. Depressive episodes are characterized by Kidney Yin excess while manic episodes are characterized by Kidney Yang excess. The Kidney Meridian begins on the sole, moves up the inner leg to the groin, up the trunk, and ends under the collarbone. (2) The Bladder, a Yang organ, stores and removes fluid from the body by receiving Kidney chi. Imbalance of the Bladder leads to frequent or uncontrolled urination. The Bladder Meridian begins in the corner of the eye, moves down the back, and ends on the back of the knee. The Bladder also has another line, which starts alongside the previous line, moves down to the outer edge of the foot and ends on the small toe. See also Traditional Chinese medicine References Traditional Chinese medicine Human anatomy
0.776253
0.958835
0.744298
Hutchinson's mask
Hutchinson's mask is a patient's sensation that the face is covered with a mask or a gauzy network like cobwebs. This medical sign is associated with tabes dorsalis affecting the trigeminal nerve (fifth cranial nerve CN V). It is named in honour of the English physician Sir Jonathan Hutchinson (1828–1913). References Medical signs Symptoms and signs of mental disorders
0.763567
0.97459
0.744165
Posterior circulation infarct
Posterior circulation infarct (POCI) is a type of cerebral infarction affecting the posterior circulation supplying one side of the brain. Posterior circulation stroke syndrome (POCS) refers to the symptoms of a patient who clinically appears to have had a posterior circulation infarct, but who has not yet had any diagnostic imaging (e.g. CT scan) to confirm the diagnosis. It can cause the following symptoms: Cranial nerve palsy AND contralateral motor/sensory defect Bilateral motor or sensory defect Eye movement problems (e.g. nystagmus) Cerebellar dysfunction Isolated homonymous hemianopia Vertigo It has also been associated with deafness. See also Stroke Artery of Percheron References External links Types of stroke
0.762048
0.976494
0.744136
Urologic disease
Urologic diseases or conditions include urinary tract infections, kidney stones, bladder control problems, and prostate problems, among others. Some urologic conditions affect a person only for a short time, while others are lifelong conditions. Kidney diseases are normally investigated and treated by nephrologists, while the specialty of urology deals with problems in the other organs. Gynecologists may deal with problems of incontinence in women. Diseases of other bodily systems also have a direct effect on urogenital function. For instance, it has been shown that protein released by the kidneys in diabetes mellitus sensitizes the kidney to the damaging effects of hypertension. Diabetes also can have a direct effect on urination due to peripheral neuropathies, which occur in some individuals with poorly controlled diabetes. Kidney disease Kidney disease, or renal disease, also known as nephropathy, is damage to or disease of a kidney. Nephritis is an inflammatory kidney disease and has several types according to the location of the inflammation. Inflammation can be diagnosed by blood tests. Nephrosis is non-inflammatory kidney disease. Nephritis and nephrosis can give rise to nephritic syndrome and nephrotic syndrome respectively. Kidney disease usually causes a loss of kidney function to some degree and can result in kidney failure, the complete loss of kidney function. Kidney failure is known as the end-stage of kidney disease, where dialysis or a kidney transplant is the only treatment option. Chronic kidney disease causes the gradual loss of kidney function over time. Acute kidney disease is now termed acute kidney injury and is marked by the sudden reduction in kidney function over seven days. About one in eight Americans (as of 2007) has chronic kidney disease. Primary renal cell carcinomas as well as metastatic cancers can affect the kidney. Kidney failure Kidney failure is defined by functional impairment of the kidney, in which the kidneys function at 15% or less of normal capability. It is divided into acute kidney failure (cases that develop rapidly) and chronic kidney failure (those that are long term). Symptoms may include leg swelling, feeling tired, vomiting, loss of appetite, and confusion. Complications of acute disease may include uremia, high blood potassium, and volume overload. Complications of chronic disease may include heart disease, high blood pressure, and anemia. Pre-renal kidney failure refers to impairment of the supply of blood to the functional nephrons, including renal artery stenosis. Intrinsic kidney diseases are the classic diseases of the kidney, including drug toxicity and nephritis. Post-renal kidney failure is outlet obstruction after the kidney, such as a kidney stone or prostatic bladder outlet obstruction. Kidney failure may require medication, dietary and lifestyle modifications, and dialysis. 
Symptoms from a lower urinary tract infection include pain with urination, frequent urination, and feeling the need to urinate despite having an empty bladder. Symptoms of a kidney infection include fever and flank pain, usually in addition to the symptoms of a lower UTI. Rarely, the urine may appear bloody. In the very old and the very young, symptoms may be vague or non-specific.
Interstitial cystitis (IC), also known as bladder pain syndrome (BPS), is a type of chronic pain that affects the bladder. Symptoms include feeling the need to urinate right away, needing to urinate often, and pain with sex. IC/BPS is associated with depression and lower quality of life. Many of those affected also have irritable bowel syndrome and fibromyalgia.
Urinary incontinence (UI), also known as involuntary urination, is any uncontrolled leakage of urine. It is a common and distressing problem, which may have a large impact on quality of life. It has been identified as an important issue in geriatric health care. The term enuresis is often used to refer to urinary incontinence primarily in children, as in nocturnal enuresis (bed wetting).
Benign prostatic hyperplasia (BPH), also called prostate enlargement, is a noncancerous increase in the size of the prostate gland. Symptoms may include frequent urination, trouble starting to urinate, a weak stream, inability to urinate, or loss of bladder control. Complications can include urinary tract infections, bladder stones, and chronic kidney problems.
Prostatitis is inflammation of the prostate gland. The condition is classified as acute prostatitis, chronic prostatitis, asymptomatic inflammatory prostatitis, or chronic pelvic pain syndrome. It may occur as an appropriate physiological response to an infection, or it may occur in the absence of infection. In the United States, prostatitis is diagnosed in 8 percent of all urologist visits and 1 percent of all primary care physician visits.
Urinary retention is an inability to completely empty the bladder. Onset can be sudden or gradual. When of sudden onset, symptoms include an inability to urinate and lower abdominal pain. When of gradual onset, symptoms may include loss of bladder control, mild lower abdominal pain, and a weak urine stream. Those with long-term problems are at risk of urinary tract infections. Causes include blockage of the urethra, nerve problems, certain medications, and weak bladder muscles. Blockage can be caused by benign prostatic hyperplasia (BPH), urethral strictures, bladder stones, a cystocele, constipation, or tumors. Nerve problems can occur from diabetes, trauma, spinal cord problems, stroke, or heavy metal poisoning. Medications that can cause problems include anticholinergics, antihistamines, tricyclic antidepressants, decongestants, cyclobenzaprine, diazepam, NSAIDs, amphetamines, and opioids. Diagnosis is typically based on measuring the amount of urine left in the bladder after urinating. Treatment is typically with a catheter placed either through the urethra or the lower abdomen.
Bladder cancer, most commonly transitional cell carcinoma, is any of several types of cancer arising from the tissues of the urinary bladder. It is a disease in which cells grow abnormally and have the potential to spread to other parts of the body. Symptoms include blood in the urine, pain with urination, and low back pain.
Renal cell carcinoma (RCC) is a kidney cancer that originates in the lining of the proximal convoluted tubule, part of the very small tubes in the kidney that transport primary urine.
RCC is the most common type of kidney cancer in adults, responsible for approximately 90–95% of cases.
Prostate cancer is the development of cancer in the prostate, a gland in the male reproductive system. Most prostate cancers are slow growing; however, some grow relatively quickly. The cancer cells may spread from the prostate to other areas of the body, particularly the bones and lymph nodes. It may initially cause no symptoms. In later stages, it can lead to difficulty urinating, blood in the urine, or pain in the pelvis, back, or during urination. A disease known as benign prostatic hyperplasia may produce similar symptoms. Other late symptoms may include feeling tired due to low levels of red blood cells.
Urinary tract obstruction is a urologic disease consisting of a decrease in the free passage of urine through one or both ureters and/or the urethra. It is a cause of urinary retention. Complete obstruction of the urinary tract requires prompt treatment for renal preservation. Any sign of infection, such as fever and chills, in the context of obstruction to urine flow constitutes a urologic emergency.

Testing
Biochemical blood tests determine the amount of typical markers of renal function in the blood serum, for instance serum urea, serum uric acid, and serum creatinine. Biochemistry can also be used to determine serum electrolytes. Special biochemical tests (arterial blood gas) can determine the amount of dissolved gases in the blood, indicating whether pH imbalances are acute or chronic.
Urinalysis is a test that studies urine for abnormal substances such as protein or signs of infection. A Full Ward Test, also known as dipstick urinalysis, involves dipping a biochemically active test strip into the urine specimen to determine levels of tell-tale chemicals in the urine. Urinalysis may also involve MC&S (microscopy, culture, and sensitivity).
Urodynamic tests evaluate the storage of urine in the bladder and the flow of urine from the bladder through the urethra. They may be performed in cases of incontinence or neurological problems affecting the urinary tract. However, the American Urogynecologic Society does not recommend urodynamics as part of the initial diagnosis of uncomplicated overactive bladder.
Ultrasound is routinely used in urology. In a pelvic sonogram, organs of the pelvic region are imaged. This includes the uterus and ovaries or the urinary bladder. Males are sometimes given a pelvic sonogram to check on the health of their bladder, the prostate, or their testicles (for example, to distinguish epididymitis from testicular torsion). In young males, it is used to distinguish more benign masses (varicocele or hydrocele) from testicular cancer, which is highly curable but which must be treated to preserve health and fertility. There are two methods of performing pelvic sonography: externally or internally. The internal pelvic sonogram is performed either transvaginally (in a woman) or transrectally (in a man). Sonographic imaging of the pelvic floor can produce important diagnostic information regarding the precise relationship of abnormal structures to other pelvic organs, and it is a useful aid in treating patients with symptoms related to pelvic prolapse, double incontinence, and obstructed defecation. Ultrasound is also used to diagnose and, at higher frequencies, to treat (break up) kidney stones or kidney crystals (nephrolithiasis).

Radiology-based testing
KUB stands for Kidneys, Ureters, and Bladder. The projection does not necessarily include the diaphragm.
The projection includes the entire urinary system, from the pubic symphysis to the superior aspects of the kidneys. The anteroposterior (AP) abdomen projection, in contrast, includes both halves of the diaphragm. Despite its name, a KUB is not typically used to investigate pathology of the kidneys, ureters, or bladder, since these structures are difficult to assess (for example, the kidneys may not be visible due to overlying bowel gas). To assess these structures radiographically, a technique called an intravenous pyelogram was historically used, and today at many institutions CT urography is the technique of choice.
An intravenous pyelogram (IVP), also called an intravenous urogram (IVU), is a radiological procedure used to visualize abnormalities of the urinary system, including the kidneys, ureters, and bladder. Unlike a kidneys, ureters, and bladder (KUB) x-ray, which is a plain (that is, noncontrast) radiograph, an IVP uses contrast to highlight the urinary tract.
CT urography (CTU) is commonly used in the evaluation of hematuria and is specifically tailored to image the renal collecting system, ureters, and bladder in addition to the renal parenchyma. Initial imaging includes a noncontrast phase to detect renal calculi as a source of hematuria; dual-energy CT may eventually allow the noncontrast phase to be eliminated. Contrast enhancement techniques for CTU vary from institution to institution. A common technique is a double-bolus, single-phase imaging algorithm. This hybrid contrast injection strategy results in opacification of the renal parenchyma as well as the collecting system, ureters, and bladder. A small contrast bolus is administered initially, followed 10 minutes later by a larger bolus that is imaged in the corticomedullary phase. Excretory-phase imaging allows evaluation not only of the ureteral lumen but also of periureteral abnormalities, including external masses and lymphadenopathy.
MRI is the investigation of choice in the preoperative staging of prostate cancer.
A voiding cystogram is a functional study in which contrast "dye" is injected through a catheter into the bladder. Under x-ray, the radiologist asks the patient (usually a young child) to void and watches the contrast exiting the body on the x-ray monitor. This examines the child's bladder and lower urinary tract, typically looking for vesicoureteral reflux, in which urine flows backward up into the kidneys.
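The kidney-failure threshold described earlier in this article (kidney function at 15% or less of normal) corresponds to the lowest category of chronic kidney disease staging by estimated glomerular filtration rate (eGFR). The sketch below is a minimal illustration, not a clinical tool: the stage boundaries follow the commonly cited KDIGO eGFR categories, which this article does not itself enumerate, and the function and label names are assumptions made for illustration.

```python
def ckd_stage(egfr_ml_min_per_173m2: float) -> str:
    """Map an estimated GFR (mL/min/1.73 m^2) to a CKD stage label.

    Illustrative only: thresholds follow the commonly cited KDIGO
    G1-G5 categories; an eGFR below 15 corresponds to kidney failure,
    the "15% or less of normal capacity" described in this article.
    """
    if egfr_ml_min_per_173m2 >= 90:
        return "G1 (normal or high)"
    if egfr_ml_min_per_173m2 >= 60:
        return "G2 (mildly decreased)"
    if egfr_ml_min_per_173m2 >= 45:
        return "G3a (mildly to moderately decreased)"
    if egfr_ml_min_per_173m2 >= 30:
        return "G3b (moderately to severely decreased)"
    if egfr_ml_min_per_173m2 >= 15:
        return "G4 (severely decreased)"
    return "G5 (kidney failure)"


# Example: an eGFR of 12 falls in the kidney-failure range, where
# dialysis or transplantation is the only treatment option.
print(ckd_stage(12))  # G5 (kidney failure)
```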
Obligate
As an adjective, obligate means "by necessity" (antonym facultative) and is used mainly in biology in phrases such as:
Obligate aerobe, an organism that cannot survive without oxygen
Obligate anaerobe, an organism that cannot survive in the presence of oxygen
Obligate air-breather, a term used in fish physiology to describe those that respire entirely from the atmosphere
Obligate biped, an animal that must walk on two legs
Obligate carnivore, an organism dependent for survival on a diet of animal flesh
Obligate chimerism, an organism that always carries two distinct sets of DNA
Obligate hibernation, a state of inactivity by which some organisms survive periods of insufficiently available resources
Obligate intracellular parasite, a parasitic microorganism that cannot reproduce without entering a suitable host cell
Obligate parasite, a parasite that cannot reproduce without exploiting a suitable host
Obligate photoperiodic plant, a plant that requires sufficiently long or short nights before it initiates flowering, germination, or similar functions
Obligate symbionts, organisms that can only live together in a symbiosis
See also
Opportunism (biological)
Immunomodulation
Immunomodulation is modulation (regulatory adjustment) of the immune system. It has natural and human-induced forms, and thus the word can refer to the following:
Homeostasis in the immune system, whereby the system self-regulates to adjust immune responses to adaptive rather than maladaptive levels (using regulatory T cells, cell signaling molecules, and so forth)
Immunomodulation as part of immunotherapy, in which immune responses are induced, amplified, attenuated, or prevented according to therapeutic goals
See also
Immunomodulation in osseointegration
Contrast bath therapy
Contrast bath therapy is a form of treatment in which a limb or the entire body is immersed in hot (but not boiling) water, followed by the immediate immersion of the limb or body in ice water. This procedure is repeated several times, alternating hot and cold. The only evidence of benefit is anecdotal, and no plausible mechanism has been confirmed.

Theory
The theory behind contrast bath therapy is that the hot water causes vasodilation of the blood flow in the limb or body, followed by the cold water, which causes vasoconstriction. The lymph system, unlike the circulatory system, lacks a central pump. By alternating hot and cold, it is believed that lymph vessels dilate and contract to "pump" and move stagnant fluid out of the injured area, and that this positively affects the inflammation process, which is the body's primary mechanism for healing damaged tissue.

Treatment
Contrast bathing can be used to reduce swelling around injuries or to aid recovery from exercise. It has also been reported to improve muscle recovery following exercise by reducing blood lactate concentration. For any injury presenting with palpable swelling, heat, and visible redness, such as a strain or sprain, contrast baths are contraindicated during the acute inflammation stage. Acute inflammation begins at the time of injury and lasts for approximately 72 hours.

Effectiveness in athletic recovery
The current evidence base suggests that contrast water therapy (CWT) is superior to passive recovery or rest after exercise; the magnitude of these effects may be most relevant to an elite sporting population. There seems to be little difference in recovery outcome between CWT and other popular recovery interventions such as cold water immersion and active recovery. In a review of immersion therapy in general, Ian Wilcock, John Cronin, and Wayne Hing suggest that most of the benefits of contrast therapy result from the hydrostatic pressure of the water, not the variations in temperature.

See also
Ice bath
Brief resolved unexplained event
Brief resolved unexplained event (BRUE), previously called an apparent life-threatening event (ALTE), is a medical term in pediatrics that describes an event occurring during infancy. The event is noted by an observer, typically the infant's caregiver. It is characterized by one or more concerning symptoms such as a change in skin color, lack of breathing, weakness, or poor responsiveness. By definition, by the time the infant is assessed in a healthcare environment, the infant must be back to normal, with no obvious explanation found after the clinician takes an appropriate clinical history and performs a physical examination. The American Academy of Pediatrics (AAP) clarified the use of both terms in a 2016 consensus statement that recommended the term BRUE be used whenever possible, as it is more specifically defined and thus more useful for assessing the risk of further events. The cause of BRUEs is often unknown, although some of the more common causes include gastroesophageal reflux, seizure, and child maltreatment. Evaluation after an ALTE or BRUE is diagnostically important, as some events represent the first sign or symptom of an underlying medical condition. In most cases, assuming the infant is otherwise healthy and no underlying medical issue is found, infants who have a BRUE are unlikely to have a second event and have an even smaller risk of death.

Presentation
A BRUE is a description of a self-limited episode. Usually a BRUE lasts for less than 1 minute. By definition, the episode must have resolved by the time the infant is evaluated by a medical professional. The caregiver may report observation of bluish skin discoloration, called cyanosis. Breathing abnormalities, such as lack of breathing, slow breathing, or irregular breathing, may be noted. Differences in muscle tone, such as transient floppiness or rigidity, can also be characterized as a BRUE. Changes in level of responsiveness, such as abnormal eye contact or inability to interact, can also fulfill the classification. A BRUE is a term used by a clinician to characterize an infant's self-limited episode witnessed by someone else. The AAP defines a BRUE as a sudden, brief episode that occurs in infants less than 1 year of age, lasts less than one minute, and resolves completely on its own prior to evaluation by a health professional. The event must include at least one of the following:
skin color change to blue (cyanosis) or pale (pallor)
abnormal breathing
muscle weakness
decreased responsiveness

Causes
Most infants who have a BRUE are never diagnosed with a definitive cause for the event. However, the more extensive literature on ALTEs is used to help explain the causes of a BRUE. These causes may also be considered conditions that can be confused with a BRUE.

Gastroesophageal reflux
Vomiting or choking during feeding can trigger laryngospasm that leads to a BRUE or ALTE. This is a likely cause if the infant had vomiting or regurgitation just prior to the event, or if the event occurred while the infant was awake and lying down. In healthy infants with a suggestive gastroesophageal reflux (GER) event, no additional testing is typically done. In infants with repeated episodes of choking or repeated acute events, evaluation with a swallowing study can be helpful.
Other causes
Other, less common causes include meningitis, urinary tract infection, breath-holding spells, congenital central hypoventilation syndrome, cancer, intracranial bleed, apnea of infancy, periodic breathing of infancy, choking, obstructive sleep apnea, and factitious disorder imposed on another (formerly Munchausen syndrome).

Diagnosis
Taking the history of the event is vital in the evaluation of a BRUE. The first step is determining whether the episode is truly a BRUE by looking for the presence of abnormal symptoms or vital signs. If any are present, the episode cannot be labelled a BRUE and the healthcare professional should treat accordingly.

Low-risk infants
The next step in evaluation is distinguishing whether the BRUE is low- or high-risk. The American Academy of Pediatrics classifies an infant as low risk if they have had a BRUE and meet all of the following criteria (rendered as a simple rule-based sketch after the Prognosis section below):
infant is older than 60 days
gestational age greater than or equal to 32 weeks
infant has had no prior BRUEs
this BRUE did not occur in a cluster
cardiopulmonary resuscitation (CPR) by a medical provider was not required
no concerning features on history
no concerning physical examination findings
duration less than 20 seconds

High-risk infants
If the infant does not meet all of these criteria, the BRUE is considered high-risk and more likely represents an underlying medical condition. Characteristics that make this more likely include a history of similar events or clustering, a history of unexpected death in a sibling, the need for CPR by a trained medical professional, ongoing lethargy, suspicion of child abuse or maltreatment, or the existence of a genetic syndrome or congenital anomalies.

Management
If the infant meets the criteria for a low-risk BRUE and the clinician finds no other concerning findings, treatment often involves a short period of observation in the emergency department with pulse oximetry. In cases where parents report specific symptoms at the time of the event, follow-up testing may be done for the related conditions or diseases. Other tests are not typically recommended for low-risk infants. For infants that have concerning features on history or physical examination, and are thus categorized as high-risk, further evaluation is warranted. This will vary greatly depending on the infant's symptoms, but may include urinalysis, a complete blood count, chest x-ray imaging, and laboratory screening for ingestion of medications or poisons. Also, for infants in the high-risk category, clinicians should consider admission to the hospital for extended observation, weighing the benefits and risks of each case. The course of the admission provides an opportunity to witness a second event, to better characterize it, and to narrow the list of possible diagnoses. Home observation of infants with medical devices after discharge is not recommended.

Prognosis
The risk of death for patients who have had a BRUE has been studied using the literature on ALTEs, since these data are more abundant. The studies concluded that there is no increased risk of death for these patients compared to the rest of the infant population. As for the prognosis of these infants into adulthood, research has yet to assess any long-term health effects.
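The low-risk criteria listed above lend themselves to a simple rule-based check. The sketch below is a minimal, illustrative rendering of those criteria exactly as this article lists them; the class name, function name, and field names are assumptions made for illustration, and the result is not a substitute for the clinical history and examination that the guideline requires.

```python
from dataclasses import dataclass


@dataclass
class BrueEpisode:
    age_days: int
    gestational_age_weeks: int
    prior_brue: bool               # any previous BRUE
    occurred_in_cluster: bool      # this event was part of a cluster
    cpr_by_medical_provider: bool  # CPR by a trained medical provider was required
    concerning_history: bool       # concerning features on history
    concerning_exam: bool          # concerning physical examination findings
    duration_seconds: float


def is_low_risk(e: BrueEpisode) -> bool:
    """Apply the low-risk criteria listed above; any failed criterion
    makes the episode high-risk and warrants further evaluation."""
    return (
        e.age_days > 60
        and e.gestational_age_weeks >= 32
        and not e.prior_brue
        and not e.occurred_in_cluster
        and not e.cpr_by_medical_provider
        and not e.concerning_history
        and not e.concerning_exam
        and e.duration_seconds < 20
    )


# Example: a 3-month-old, term infant with a first event lasting ~15 seconds
episode = BrueEpisode(90, 39, False, False, False, False, False, 15)
print("low risk" if is_low_risk(episode) else "high risk")
```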
History
In 1986, the National Institutes of Health defined an apparent life-threatening event (ALTE) as an observed frightening event in an infant that includes at least one component of lack of breathing (apnea), skin color change (such as cyanosis), weakness, choking, or gagging. The term was introduced to avoid previously used terms such as "near-miss SIDS" and to dissociate the event from SIDS, a separate condition of infancy. There had been discussion in the earlier literature about an increased risk of SIDS in these infants, but more recent research has concluded that there is no direct relationship between an ALTE and SIDS. The term was also defined as part of an attempt to characterize the different forms of apnea, or sudden lack of breathing, in infants. In 2016, the American Academy of Pediatrics (AAP) published a clinical practice guideline recommending the replacement of ALTE with a new term, brief resolved unexplained event (BRUE). The guideline states that the term ALTE is still applicable, with key differences between ALTE and BRUE. The biggest difference is whether the infant is symptomatic at the time of presentation to a health professional. If the infant is still showing symptoms, the condition is termed an ALTE. In order to be considered a BRUE, the infant must be completely asymptomatic at the time of presentation, which is the more common situation. Because of this, a BRUE can also be considered a subset of ALTE. The term change was also recommended in large part because of the "life-threatening" suggestion of the older term. Death in infants following a BRUE is rare, occurring in about 1 in 800 cases.
Mental protuberance
The mandibular symphysis on the external surface of the mandible divides below and encloses a triangular eminence, the mental protuberance, the base of which is depressed in the center but raised on either side to form the mental tubercles. The size and shape of the bones making up this structure are responsible for the size and shape of a person's chin. Synonyms of mental protuberance include mental process and protuberantia mentalis. Mental in this sense derives from Latin mentum (chin), not mens (mind), the source of the more common meaning of mental.
Generalist
A generalist is a person with a wide array of knowledge on a variety of subjects, useful or not. It may also refer to:

Occupations
a physician who provides general health care, as opposed to a medical specialist; see also:
General practitioner, a medical doctor who treats acute and chronic illnesses and provides preventive care and health education to patients
Family medicine, comprehensive health care for people of all ages
Information Technology Generalist, a technology professional proficient in many facets of information technology without any specific specialty

Biology
Generalist species, a species which can survive in multiple habitats or eat food from multiple sources
Generalist Genes Hypothesis, a theory of learning abilities and disabilities

Other
"Jack of all trades, master of none", a figure of speech about generalists
Multipotentialite, someone having exceptional interest and talent in two or more fields
Philomath, someone who loves learning
Polymath, someone whose knowledge spans a substantial number of subjects
Generalist channel, a TV or radio channel without a particular target audience

See also
Encyclopedism, an outlook that aims to include a wide range of knowledge in a single work
Interdisciplinarity, the combining of two or more academic disciplines in one activity
Laity, religious group members who are not clerics, sometimes also used metaphorically to describe non-specialists
Jack of all trades (disambiguation)
Specialist (disambiguation)
Faget sign
In medicine, the Faget sign, sometimes called sphygmothermic dissociation, is the unusual pairing of fever with bradycardia (slow pulse). Fever is usually accompanied by tachycardia (rapid pulse), an association known by the eponym "Liebermeister's rule". The Faget sign is named after the Louisiana physician Jean Charles Faget, who studied yellow fever in Louisiana. The Faget sign is often seen in:
Yellow fever
Typhoid fever
Brain abscess
Tularaemia
Brucellosis
Colorado tick fever
Some pneumonias (Legionella pneumonia and Mycoplasma pneumonia)
Drug fever (e.g. beta-blockers, known as the Beta-Faget sign)
Of note, the Faget sign in bacterial infections is consistently associated with bacteria that have an intracellular life cycle.
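As a simple quantitative illustration, sphygmothermic dissociation can be expressed as a fever co-occurring with a slow pulse. The thresholds in the sketch below (fever at or above 38.0 °C, bradycardia below 60 beats per minute in an adult) are conventional illustrative values that this article does not itself specify, and such a check would only flag a pattern, not establish a diagnosis.

```python
def faget_sign(temperature_c: float, heart_rate_bpm: float) -> bool:
    """Return True when fever is paired with bradycardia (illustrative
    adult thresholds: >= 38.0 C and < 60 bpm), i.e. the pulse is slow
    where Liebermeister's rule would predict tachycardia."""
    febrile = temperature_c >= 38.0
    bradycardic = heart_rate_bpm < 60
    return febrile and bradycardic


print(faget_sign(39.2, 54))   # True: fever with relative bradycardia
print(faget_sign(39.2, 110))  # False: the usual fever-tachycardia pairing
```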