{"text": "In the molecular world, researchers act as detectives working hard to unravel the mysteries surrounding cells. One of the researchers' greatest tools in this endeavor has been Raman spectroscopy. Raman spectroscopy is a spectroscopic technique that measures the unique Raman spectra for every type of biological molecule. As such, Raman spectroscopy has the potential to provide scientists with a library of spectra that can be used to unravel the makeup of an unknown molecule. However, this technique is limited in that it is not able to manipulate particular structures without disturbing their unique environment. Recently, a novel technology that combines Raman spectroscopy with optical tweezers, termed Raman tweezers, evades this problem due to its ability to manipulate a sample without physical contact. As such, Raman tweezers has the potential to become an incredibly effective diagnostic tool for differentially distinguishing tissue, and therefore holds great promise in the field of virology for distinguishing between various virally infected cells. This review provides an introduction for a virologist into the world of spectroscopy and explores many of the potential applications of Raman tweezers in virology. In today's world of increasingly complex and refined biological analytical techniques, spectroscopy has maintained its place at the forefront. One type of spectroscopy in particular, Raman spectroscopy, has proven especially useful in providing detailed analysis of a staggering variety of biological samples. Raman spectroscopy is able to detect and analyze extremely small molecular objects with high resolution while eliminating outside interference .Recently, a derivative of Raman spectroscopy, termed Raman tweezers, has allowed for an even greater degree of analytical capability. Raman tweezers use optical tweezers to suspend and manipulate a molecule without direct contact, so that the molecule's Raman spectra may be recorded while it is in its most natural state. As such, the spectra collected are more reflective of the true nature of the molecule under study and therefore of more significance. Even with today's advances, we are only beginning to scratch the surface of a technique that holds the promise of far-reaching and highly significant future applications.One such field that stands to benefit greatly from Raman tweezers is virology. The high resolution, lack of sample preparation, and very short data collection time required make the technology ideal for use in the study of viruses and virally infected cells. However, because of the newness of the approach, this review has been written in such a manner that those unfamiliar with optical physics not become lost and lose interest in a technology that holds such incredible potential.Spectroscopy was born in 1801, when the British scientist William Wollaston discovered the existence of dark lines in the solar spectrum. Thirteen years later, Jospeh von Fraunhofer repeated Wollaston's work and hypothesized that the dark lines were caused by an absence of certain wavelengths of light . It was The end of the nineteenth and beginning of the twentieth centuries was marked by significant efforts to quantify and explain the origin of spectral phenomena. Beginning with the simplest atom, hydrogen, scientists including Johann Balmer and Johannes Rydberg developed equations to explain the atom's frequency spectrum. 
It was not until Niels Bohr developed his famous model in 1913 that the energy levels of the hydrogen spectrum could be accurately calculated. However, Bohr's model failed when applied to elements with more than one electron. It took the development of quantum mechanics by Werner Heisenberg and Erwin Schrödinger in 1925 to universally explain the spectra of most elements. Modern spectroscopy developed from the discovery of these unique atomic spectra. The three main varieties of spectroscopy in use today are absorption, emission, and scattering spectroscopy. Absorption spectroscopy, including infrared and ultraviolet spectroscopy, measures the wavelengths of light that a substance absorbs to give information about its structure. Emission spectroscopy, such as fluorescence and laser spectroscopy, measures the amount of light of a certain wavelength that a substance emits. Lastly, scattering spectroscopy, to which Raman spectroscopy belongs, is similar to emission spectroscopy but detects and analyzes all of the wavelengths that a substance scatters upon excitation. Raman spectroscopy is named after the famous Indian physicist Sir Chandrasekhara Venkata Raman, who in 1928, along with K.S. Krishnan, found that when a beam of light traverses a transparent chemical compound, a small fraction of that beam will emerge from the compound at right angles to, and of a different wavelength from, the original beam. Normally, when a beam of light is shined through a transparent substance, the molecules of the substance that absorb those light wavelengths are excited into a partial quantum state and emit wavelengths of equal frequency to the incoming wavelengths, such that there is no net change in energy between the light and the substance. Such light wavelengths are said to be elastically scattered in a process known as Rayleigh scattering. On rare occasions, however, a photon is scattered inelastically and emerges with an energy different from that of the incident photon; this is Raman scattering. Raman spectroscopy is performed by illuminating a sample with a laser. The scattered light is collected with a lens and sent through a monochromator that typically employs holographic diffraction gratings and multiple dispersion stages to achieve a high degree of resolution of the desired wavelengths. A charge-coupled device (CCD) then detects and records the dispersed light. While initial Raman spectroscopy was unable to analyze most biological samples due to the interference from the background fluorescence of water, buffers, and/or mediums present in the sample, two new types of Raman spectroscopy have been developed that solve this problem. Both types, near-infrared (NIR) and ultraviolet (UV) Raman spectroscopy, rely on using wavelengths well away from those of fluorescence. Near-infrared Raman spectroscopy relies on long near-infrared wavelengths while ultraviolet Raman spectroscopy relies on short wavelengths to avoid interference from mid-wavelength fluorescence. There are four major types of Raman spectroscopy in use today: surface enhanced Raman spectroscopy (SERS), resonance Raman spectroscopy (RRS), confocal Raman microspectroscopy, and coherent anti-Stokes Raman scattering (CARS). The second two types, confocal Raman microspectroscopy and coherent anti-Stokes Raman scattering (CARS), are not only able to analyze nearly all biological samples, but also avoid any fluorescent interference. Both confocal Raman microspectroscopy and CARS spectroscopy get around this problem of fluorescence in unique ways.
Confocal Raman microspectroscopy eliminates any lingering fluorescence by measuring the Raman spectra of micro regions of a sample one at a time, such that the effects of fluorescence are eliminated while high resolution is maintained. With the issue of background fluorescence solved, Raman spectroscopic analysis has become an analytical method of choice in an extremely wide range of biological applications. Some of the more obscure applications of this technique include everything from determining the molecular structure of the skin of a 5200-year-old frozen man to the analysis and authentication of foods such as olive oil and Japanese sake. Of even greater consequence, perhaps, has been Raman spectroscopy's contribution to detailed cellular analysis. Modern techniques have allowed for the Raman spectroscopic analysis of cells in vivo without the need of fixatives, thereby providing extremely detailed analysis of cells in their natural state. Of particular interest has been the application of Raman spectroscopy in medicine. The technique's ability to provide detailed images of cells has allowed for the comparative analysis between numerous healthy tissues and their diseased states. Such analytical potential has been especially suited to the diagnosis of numerous cancers, including intestinal, stomach, laryngeal, brain, breast, mouth, skin, and others. Recently, Raman spectroscopy has been coupled with modern fiber optic technology to accurately measure tissue spectra in vivo without the need of biopsy. This method employs a small fiber optic probe that has the capability to reach less accessible organs and requires less than two seconds to collect spectra. The use of Raman spectroscopy in differential medicine is not limited to tissues and cells; it also has applications in virology. The technique has been put to good use in determining the structures and stereochemistry of both the protein and nucleic acid components of viruses, even going so far as being able to distinguish between different types of right-handed DNA helixes. The analytical capabilities of Raman spectroscopy are limited by its inability to manipulate, and therefore thoroughly analyze, the biological molecules under study without making physical contact. This limitation has been resolved by coupling Raman spectroscopy with a technology called optical tweezers. The new method, termed Raman tweezers, uses optical tweezers to manipulate a sample without contacting it, so that it remains unchanged for Raman spectroscopic analysis. Raman tweezers is a relatively new technology that couples Raman spectroscopy with optical tweezers to achieve previously unheard of sample control and resolution. Optical tweezers is a system that focuses a near-infrared laser on a sample to fix it in an optical trap, from which it may then be maneuvered and controlled. The technique, which was first developed by Arthur Ashkin et al. in 1986, has the ability to control objects ranging in size from 5 nm to over 100 μm, whether they be atoms, viruses, bacteria, proteins, cells, or other biological molecules.
The one major drawback of using Raman tweezers instead of Raman spectroscopy, however, is its inability to be used with fiber optic probes and therefore be applied to in vivo tissue analyses. Despite this drawback, Raman tweezers is a highly useful marriage of Raman spectroscopy and optical tweezers that further enhances Raman spectroscopy's analytical capabilities. The potential of Raman tweezers is staggering. The technique holds all the promise of Raman spectroscopy, including the potential to identify almost any biological molecule and disease, and adds to it both a greater level of control and analytical capability as well as the capability of observing a sample in its natural state. As such, Raman tweezers is likely to surpass Raman spectroscopy in use for biological analysis. Although Raman tweezers cannot be applied to in vivo tissue analysis, its ability to manipulate a sample without physically coming into contact with it has allowed a degree of detailed analysis not possible with Raman spectroscopy alone. To date, only a handful of biological molecules and processes, including red blood cells, lipoproteins, cell membrane components, and T cell activation, have been studied with Raman tweezers. Raman tweezers, while it has yet to prove itself an enlightening diagnostic tool in virology, is still in its infancy. With proper nurturing, this technique has the potential to blossom into a truly brilliant and highly useful tool in the virologist's arsenal. As the resolution of Raman spectrographs increases, so will their analytical capabilities. It is likely, in the not too distant future, that this technology will allow scientists to go beyond their current capability of distinguishing infected from healthy cells to being able to distinguish between differentially infected cells. Given a detailed library of spectra, a researcher could potentially even characterize an unknown virus' structure, components, and lytic or latent state of infection. Furthermore, the technique's optical tweezers would allow for the study of the more temperamental cell lines, such as 293, that die more easily upon physical contact. All of these analytical capabilities would give the virologist a much clearer window through which to study viruses. One could also use this technique's capabilities not only to characterize a virus, but also to monitor the efficacy of antiviral treatments and determine viral load, among other applications. While all of these potential applications can be accomplished today through alternative means, those processes must be completed separately and are time consuming. Raman tweezers greatly simplifies this work by providing a comprehensive analytical system that is able to collect all the necessary data at once and to do so in a very short time, thereby making it extremely cost effective. In conclusion, Raman tweezers is an extremely powerful analytical tool that provides biologists with a fingerprint of the agent they are studying and whose immense future applications are only now being fully understood. It is up to virologists, however, to realize the full scope and magnitude of these applications and to press for the development of this seemingly unrelated technology in virology. The author(s) declare that they have no competing interests. SMA conceived the idea, designed the outline, coordinated the project, and helped to draft this review.
PJL, AGW, and OFD collected intellectual materials towards different sections of the review. In addition, PJL was instrumental in writing the first draft. All authors read and approved the final version of the manuscript."} {"text": "The cause of chest pain in patients presenting to the emergency room often remains unclear. We present a case of essential thrombocythemia as a novel cause of atypical chest pain, which responded dramatically to a simple treatment intervention. A 54-year-old patient presenting with atypical chest pain was found to have essential thrombocythemia as the cause of her chest pain. She responded dramatically to aspirin therapy and had no recurrence of symptoms over 3 months. Essential thrombocythemia should be considered as a differential cause in patients presenting with atypical chest pain, vasomotor symptoms, and high platelet counts. These symptoms are generally more bothersome than dangerous and are usually controlled by low dose aspirin therapy. Acute chest pain (CP) is one of the most common reasons for ER visits, and its management involves establishing the cause while excluding potentially life-threatening conditions. It, therefore, poses a significant diagnostic challenge. Despite extensive work up to determine the cause of chest pain, the diagnosis often remains unclear, leaving both the patient and the physician unsatisfied. We present a case of atypical chest pain due to a relatively unremarkable cause, essential thrombocythemia (ET), which responded dramatically to a simple treatment intervention. A 54-year-old Caucasian female presented to the ER with episodes of CP over the last 4 hours. The pain was characterised as acute onset, sharp, non-exertional, non-radiating pain localised to the left side of the chest. It was 10/10 in severity with no discernable aggravating or relieving factors, with each episode lasting for 2-3 minutes. Certain sensory symptoms had preceded the onset of CP. These comprised sudden onset numbness of the left thumb followed by a tingling sensation in the shoulders with radiation distally into both hands. Subsequently, the patient also developed a severe, generalized headache along with lights flashing across her eyes. By the time she reached the ER, her symptoms had completely resolved. Her past medical history was significant only for the diagnosis of essential thrombocythemia (ET) based on a bone marrow biopsy and a positive JAK2 mutation test. She had not required any previous treatment for this condition.
Her baseline platelet count had ranged between 600 × 10⁹/l and 750 × 10⁹/l over the last one year. Initial laboratory investigations in the ER revealed a normal basic metabolic panel and complete blood count except for a platelet count of 935 × 10⁹/l. Her chest pain was evaluated with three sets of cardiac enzymes and an exercise stress echocardiogram, which were all normal. Her chest X-ray did not reveal any infiltrates or pneumothorax. A CT scan of the chest with contrast showed no evidence of pulmonary embolism or aortic dissection. Her EKG revealed normal sinus rhythm. Her 24 hour continuous telemetry also did not show any arrhythmias. Her atypical, non-localizing sensory symptoms with a negative cardiopulmonary work up were subsequently attributed to vasomotor manifestations of ET. The patient was reassured and started on low dose aspirin therapy. At 12 weeks follow up, she reported no subsequent similar episodes. Essential thrombocythemia (ET) is a clonal stem cell disorder characterized by a persistent, nonreactive thrombocythemic state that is not accounted for by any of the other chronic myeloproliferative disorders. Our patient presented with classic vasomotor manifestations of ET, which may be present in approximately half of the patients with this myeloproliferative disorder. The common vasomotor symptoms experienced by these patients include headache, palpitations, atypical chest pain, distal paresthesias, and transient visual disturbances. Whereas the work up of acute chest pain to rule out potentially life threatening conditions is essential, the recognition and appropriate treatment of relatively unremarkable causes of chest pain like ET can relieve the patients' symptoms and may also reduce the excessive utilization of health care resources. ASA: acetylsalicylic acid; CP: chest pain; CT: computed tomography; ER: emergency room; ET: essential thrombocythemia; EKG: electrocardiogram. Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal. The author(s) declare that they have no competing interests. KP assembled, analyzed and interpreted the patient data regarding the hematological disease. All authors contributed to writing the manuscript. All authors read and approved the final manuscript."} {"text": "There is an error in the citation and copyright statement. The correct citation is: Tirado-González I, Barrientos G, Freitag N, Otto T, Thijssen VLJL, et al. (2012) Uterine NK Cells Are Critical in Shaping DC Immunogenic Functions Compatible with Pregnancy Progression. PLoS ONE 7(10): e46755. doi:10.1371/journal.pone.0046755. The correct copyright is: © Tirado-González et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited."} {"text": "The United States Public Health Service (USPHS) Guideline for Treating Tobacco Use and Dependence includes ten key recommendations regarding the identification and the treatment of tobacco users seen in all health care settings. To our knowledge, the impact of system-wide brief interventions with cigarette smokers on smoking prevalence and health care utilization has not been examined using patient population-based data. Data on clinical interventions with cigarette smokers were examined for primary care office visits of 104,639 patients at 17 Harvard Vanguard Medical Associates (HVMA) sites. An operational definition of "systems change" was developed. It included thresholds for intervention frequency and sustainability. Twelve sites met the criteria. Five did not. Decreases in self-reported smoking prevalence were 40% greater at sites that achieved systems change. On average, the likelihood of quitting increased by 2.6% per occurrence of brief intervention. For patients with a recent history of current smoking whose home site experienced systems change, the likelihood of an office visit for smoking-related diagnoses decreased by 4.3% on an annualized basis after systems change occurred.
There was no change in the likelihood of an office visit for smoking-related diagnoses following systems change among non-smokers. The clinical practice data from HVMA suggest that a systems approach can lead to significant reductions in smoking prevalence and in the rate of office visits for smoking-related diseases. Most comprehensive tobacco intervention strategies focus on the provider or the tobacco user, but these results argue that health systems should be included as an integral component of a comprehensive tobacco intervention strategy. The HVMA results also give us an indication of the potential health impacts when meaningful use core tobacco measures are widely adopted. The United States Public Health Service (USPHS) Guideline for Treating Tobacco Use and Dependence includes ten key recommendations regarding the identification and the treatment of tobacco users seen in all health care settings. Research clearly shows that systems-level changes can reduce smoking prevalence among enrollees of managed health care plans. This study took place at Harvard Vanguard Medical Associates, a large health care provider network based in eastern Massachusetts. HVMA has more than 20 offices, primarily in Boston and the surrounding suburban areas, providing primary and specialty health care to more than 400,000 patients. In 2007, HVMA leadership established a clinical quality goal "to intervene" with patients who smoke. A multidisciplinary design team, comprised of clinical and administrative personnel, defined "intervention" as identification of cigarette smokers at every office visit and delivery of a brief intervention to each identified smoker during that office visit. HVMA used a team approach to complete the equivalent of the PHS Guideline recommended "5A" tobacco intervention (Ask, Advise, Assess, Assist, Arrange). There are many international correlates to the 5A model. For example, the National Health Service Stop Smoking program in the United Kingdom recommends a 4A model. In New Zealand, the system is titled ABC, which stands for Ask, Brief advice, and Cessation support. What all these models have in common is that the recommended physician interventions are brief (<10 minutes) and that they include offers of counseling as well as prescriptions for tobacco cessation medications. The data recorded at HVMA focused exclusively on cigarette smokers instead of the broader definition of tobacco users. In this work flow, the medical assistant was charged with recording smoking status in the EHR during each office visit and with assessing readiness to quit. The clinician was responsible for advising each smoker to stop and for assisting each smoker according to his/her stage of change. Decision support tools for clinicians were embedded into the EHR to promote use of evidence-based medications to strengthen quit attempts. An option to refer smokers to a community-based, state-funded "stop-smoking service" that provides free telephone counseling could be ordered through the EHR. Advance Practice Clinicians (Nurse Practitioners and Physician Assistants) were trained to provide counseling and were educated on the use of stop-smoking medications. Many intervention sites identified a tobacco champion to lead the work locally.
Feedback reports of medical assistant performance were delivered to clinical staff and administrative supervisors monthly. De-identified encounter level data for all primary care office visits for all adult patients at 17 HVMA sites were prepared by analysts from Harvard Vanguard Medical Associates (HVMA). Records covered the period from 1/1/2005 through 11/30/2010. Evaluation plans were reviewed and approved by the Institutional Review Board for the Massachusetts Department of Public Health. Harvard Vanguard obtains written consent from patients for the type of analysis conducted here. The consent form includes the following 2 statements. Harvard Vanguard may use or disclose your Health Information in order to conduct its business of providing health care. These "health care operations" may include quality assessment, training of medical students, credentialing and various other activities that are necessary to run our practice and to improve the quality and cost effectiveness of the care that we deliver to you. Some of these activities occur in conjunction and cooperation with other Atrius Health groups. Other of these business operations may be performed by outside parties ("Business Associates") on Harvard Vanguard's behalf. Our Business Associates must agree to maintain the confidentiality of your Health Information. Harvard Vanguard may disclose your Health Information for public health activities. The Legal Department at Harvard Vanguard carefully reviewed this project and determined that the analysis fell within the realm of public health work related to quality improvement. The de-identified data set prepared by HVMA analysts contained demographics and encounter level data for 310,577 adult patients. Demographics for each patient included a randomly defined patient ID, age, race/ethnicity, marital status, town of residence, and patient's "home" clinic site. All HVMA patients have a "home site", which is the location of their primary care provider. Since patients can change doctors and/or move from one home site to another, the "home" for this data set was the clinic site associated with the patient's primary care provider on 11/30/2010. Data for all office visits at 17 HVMA office sites between 1/3/2005 and 11/30/2010 also were prepared. Included in the office visit encounter data were the unique patient ID, a primary and four secondary diagnoses, and recorded components of the brief intervention with cigarette smokers. For the 310,577 unique patients, there were 2,561,782 unique single-day patient encounters in the data set prepared by HVMA analysts. Nearly all (96.6%) interventions with cigarette smokers occurred at a patient's home site. To be included in our analysis, patients had to be Massachusetts residents between the ages of 22 and 64 on 11/30/2010 who were screened for smoking status at least once. At least 3 years had to elapse between the first and last office visit. The requirement that patients be at least 22 years of age by 11/30/2010 was to ensure that all patients in our data set were at least 18 when the first HVMA site began to systematically intervene with cigarette smokers (1/1/2007). Although some children were screened for smoking status by HVMA, the intervention program for cigarette smokers was aimed almost exclusively at adults. The upper age limit was set to 64 because it was thought that older patients might be receiving a greater proportion of their care outside the primary care system. Seasonal residents and college students might also receive care outside the primary care system, but it was impossible to screen only for full-time/non-college residents with the data available.
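To make the selection concrete, the inclusion criteria above can be expressed as a short filter over the encounter table. This is only an illustrative sketch; the column names (`patient_id`, `birth_date`, `visit_date`, `state`, `smoking_status`) are hypothetical stand-ins, not the actual HVMA field names:

```python
import pandas as pd

CENSUS_DATE = pd.Timestamp("2010-11-30")

def select_study_population(encounters: pd.DataFrame) -> pd.Index:
    """Return the IDs of patients meeting the stated inclusion criteria."""
    g = encounters.groupby("patient_id")
    # Age 22-64 on the census date.
    age = (CENSUS_DATE - g["birth_date"].first()).dt.days / 365.25
    # At least 3 years between first and last office visit.
    years_span = (g["visit_date"].max() - g["visit_date"].min()).dt.days / 365.25
    # Massachusetts residency.
    ma_resident = g["state"].first().eq("MA")
    # Screened for smoking status at least once.
    screened = g["smoking_status"].agg(
        lambda s: s.isin(["Yes", "Quit", "Never"]).any())
    keep = ma_resident & age.between(22, 64) & (years_span >= 3) & screened
    return keep[keep].index
```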
In the original data set, 104,639 of the 310,577 patients met the criteria described above. Of this total, 15,286 had some history of self-reported smoking recorded in the EHR between 1/3/2005 and 11/30/2010, while 89,353 had no recorded history of current smoking in that time. When recorded, smoking status was listed as either "Yes", "Quit", "Never", "Passive", or "Not Asked." Since the "Passive" and "Not Asked" categorizations could not be used to specifically define a patient's use of cigarettes, these categories were ignored. Therefore, smoker identification for this study was defined as an office visit where smoking status was listed as "Yes", "Quit", or "Never." The recorded smoking status was attached to the dated office visit. This date was not overwritten, as is often the case when smoking status is stored in a patient's social history. As a result, the full complement of patient encounter records provides a discontinuous but longitudinal history of smoking status. In addition to smoking status, information could be recorded about a patient's interest in quitting, readiness to quit, smoking pattern, referrals to telephone counseling, and prescriptions for medications covered by insurance. For this study, a brief intervention with smokers was defined as any evidence at a specific visit that the conversation about cigarette smoking went beyond the identification of smoking status. This could include information about interest in quitting, readiness to quit, smoking pattern, referrals to telephone counseling, and prescriptions for medications covered by insurance. Therefore, any visit in which a patient was identified as a smoker could also include a brief intervention about smoking. To assess data quality, we focused on all office visits throughout the data set where a patient's smoking status was recorded as "Never." Of the three primary categories of smoking status, "Never" is the only absolute classification. Logically, no visit where status is recorded as "Never" should have any other status recorded at a prior visit. To obtain our quality assessment score, we identified all visits at which smoking status was recorded as "Never" and at which a status had also been recorded at an earlier visit, and we then computed the percentage of those visits whose earlier recorded status was also "Never." Of the 384,338 such visits, 381,917 (99.4%) also had smoking status recorded as "Never" at the earlier visit. To our knowledge, there is no common standard for defining systems change using real-world office encounter records. Operationally, we defined "health systems change" as beginning in the first month when more than half of all office visits at a given site included an identification for cigarette smoking. In all months following that date, the rate of cigarette smoker identifications could never drop below 50%. Furthermore, there had to be at least 12 consecutive months with rates above 50%.
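Stated as code, the criterion might look like the following sketch (hypothetical, assuming a chronologically ordered list of monthly identification rates for one site; `None` means the site never achieved systems change):

```python
def systems_change_month(monthly_rates: list[float]) -> int | None:
    """Index of the first month whose identification rate exceeds 50%
    and after which the rate never falls back to 50% or below, with the
    sustained run lasting at least 12 months."""
    n = len(monthly_rates)
    for start in range(n):
        tail = monthly_rates[start:]
        if len(tail) >= 12 and all(rate > 0.5 for rate in tail):
            return start
    return None
```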
By this definition, 12 of the 17 HVMA sites had achieved "systems change." Changes in self-reported smoking behavior were examined by computing the proportion of all patients who were recorded as smokers at the earliest possible visit and then comparing this to the proportion of all patients recorded as smokers at the latest possible visit. Group comparisons were made between sites that achieved systems change and those that did not. Changes in the rate of smoking-related office visits were analyzed using generalized estimating equations (GEEs) with a logistic link function and patient as the unit of analysis. The period between 1/3/2005 and 11/30/2010 was divided into 77 twenty-eight-day segments for 104,639 patients: 15,286 patients with histories of recent smoking and 89,353 patients with no recent history of self-reported smoking. The dependent variable was the presence or absence of a smoking-related office visit during the 28-day period. If any of the first five recorded ICD9 codes for any visit in a 28-day period matched the list of smoking-related diagnoses from the Surgeon General's 2004 report on Smoking and Health, the period was coded as containing a smoking-related office visit. To avoid biasing the results, data prior to and including the period of the first recorded visit were not included in the analysis. Patients were divided into those who had some history of self-reported smoking between 1/5/2005 and 11/30/2010 and those who had no history of self-reported smoking. Any patient who reported current smoking at any visit between 1/5/2005 and 11/30/2010 was grouped with the smokers. Longitudinal data for smokers and non-smokers were evaluated separately. There were three temporal variables in each model: time since 1/1/2005, time since the first recorded office visit, and time since systems change occurred at the patient's home site. This last variable was the primary focus of our analysis. We hypothesized that there would be a decrease in the rate of smoking-related office visits following systems change and that this effect would only be seen in patients with a recent history of self-reported smoking. Our model adjusted for the seasonality of office visits using sines and cosines. As defined above, "systems change" occurred at 12 of the 17 sites between 1/1/2007 and 4/1/2009. For all 12 sites, there was a dramatic and significant increase in the identification rate of cigarette smokers after the date of achieving systems change. All but one site achieved an 80% identification rate within 9 months of that date. The median time between the date of "systems change" and an 80% identification rate was just 4.5 months. At 11 of the 12 sites, there was also a significant increase in the rate of brief intervention for identified cigarette smokers. Identification rates for the remaining 5 sites remained relatively low throughout the study period. At sites that achieved systems change, 82.5% of visits where patients were identified as smokers included evidence of a further clinical intervention. At sites that did not achieve systems change, this rate was only 59.4%. The proportion of home-site interventions was slightly higher for patients seen at the 12 sites where systems change took place (97.2% versus 93.1%). Most demographics were similar for smokers and non-smokers. However, patients with a recent history of self-reported smoking were more likely to be younger, male, of white race, and to live alone. Smokers also had a significantly higher average number of office visits.
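As a rough illustration of the GEE specification described in the methods above, the sketch below fits a logistic-link GEE on synthetic patient-period data, with the temporal variables simplified and seasonality encoded with one sine/cosine pair (roughly 13 twenty-eight-day periods per year). It is not the study's actual code, and all variable names are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic long-format data: one row per patient per 28-day period.
n_patients, n_periods = 200, 77
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_periods),
    "period": np.tile(np.arange(n_periods), n_patients),
})
df["time_since_2005"] = df["period"]
df["time_since_systems_change"] = (df["period"] - 30).clip(lower=0)
# Binary outcome: any smoking-related office visit in the period.
df["smoking_visit"] = rng.binomial(1, 0.05, len(df))

# Seasonality via one annual sine/cosine pair (~13 periods per year).
df["sin_annual"] = np.sin(2 * np.pi * df["period"] / 13)
df["cos_annual"] = np.cos(2 * np.pi * df["period"] / 13)

model = smf.gee(
    "smoking_visit ~ time_since_2005 + time_since_systems_change"
    " + sin_annual + cos_annual",
    groups="patient_id",
    data=df,
    family=sm.families.Binomial(),            # logistic link
    cov_struct=sm.cov_struct.Exchangeable(),  # within-patient correlation
)
print(model.fit().summary())
```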
Changes in smoking prevalence were examined by focusing on patient visits where smoking status was recorded. Of the 104,639 patients in our study group, 13,517 (12.9%) were current smokers at the first visit where smoking status was recorded. On the last visit where smoking status was recorded, 11,817 (11.3%) were current smokers. Overall, there were 1,700 (12.6%) fewer smokers at the last visit. The decrease in self-reported smoking prevalence was 40% larger at the 12 sites that achieved systems change. As one would expect given our operational definition of systems change, patients received more clinical interventions about cigarette smoking at sites that achieved systems change. The impact per encounter of brief clinical intervention on the likelihood of quitting was examined using a logistic model. The outcome variable was the final recorded smoking status for a patient. The model included as predictors the total number of office visits where the patient's smoking status was recorded and the number of office visits where smoking status was not recorded. The analysis was restricted to 1,255 patients who had at least 4 years between the first and last visit, at least 3 years between the first and last confirmation, and at least one visit in which smoking status was recorded as "Yes." Each encounter where a smoker's smoking status was recorded increased the likelihood of quitting by 2.6%. Most office visits did not include a smoking-related diagnosis code. On average, smokers had 0.90 smoking-related office visits throughout the time period studied, while non-smokers had an average of 0.69 visits with a smoking-related diagnosis code. Changes in the rate of smoking-related office visits were computed using generalized estimating equations (GEEs). Separate models were developed for patients with recent histories of self-reported smoking and all other patients. We refer to these groups as smokers and non-smokers. The independent variable of primary interest was the time since systems change occurred at the patient's home site. After adjusting for temporal effects, seasonality, previous visit pattern, flu-related visits, the date of health reform, and cigarette taxes, the annualized rate of smoking-related office visits following systems change for smokers decreased by 4.3% (95% CI: 0.5% to 8.1%). The difference between the unadjusted and the adjusted rates is likely due to the fact that several of the independent variables had strong positive relationships with the dependent variables. For non-smokers, there was a non-significant decrease in the annualized rate of smoking-related office visits following systems change (95% CI: -0.4% to 2.0%). To explore whether demographics could explain the reduction in the rate of smoking-related office visits, six primary and interaction terms were added to the smoker and non-smoker models. The primary terms were the six demographic variables. An operational definition of systems change was established for clinical interventions with cigarette smokers. This definition included thresholds for frequency and sustainability. Based on this definition, 12 of 17 HVMA sites achieved systems change between 1/1/2007 and 12/1/2009. These 12 sites had significant increases in rates of smoker identifications and further clinical interventions with smokers.
Decreases in smoking prevalence were found across all sites; however, the reduction in prevalence was 40% greater at sites achieving systems change. We estimate that each clinical intervention with a smoker increased the likelihood of quitting by 2.6%. The likelihood of an office visit for a smoking-related diagnosis also decreased, but only for smokers at sites that achieved systems change (4.3%). Among non-smokers, there was no significant change in the rate of office visits for smoking-related diagnoses following systems change. Patient demographics did not appear to strongly affect the likelihood of a smoking-related office visit following systems change. The health care system should be viewed as central to any tobacco intervention strategy. As recommended in the USPHS Guideline, health care administrators like practice managers and chief medical officers, as much as individual clinicians, must be responsible for ensuring that tobacco interventions become an integrated component of health care delivery. Yet, despite the well-known consequences of tobacco use and conclusive research on the effectiveness of tobacco treatment, many healthcare facilities still lack the policies and clinical systems needed to achieve consistent and effective treatment. However, this landscape is changing rapidly. Recent federal legislation, including PPACA, ARRA (the American Recovery and Reinvestment Act), and HITECH, includes provisions that increasingly incentivize physician providers and hospitals to identify tobacco users, assess use, and conduct interventions. The HVMA data support and shine the spotlight on strategies in healthcare that focus on the system, rather than the individual clinician. In the case of HVMA, data captured in the EHR are retrieved and reported back monthly. Administrators and clinicians are informed about their own performance with comparisons to other sites. This analysis also demonstrates that data sharing with clinicians may go beyond the rate of brief tobacco interventions and enter the realm of behavioral change and improvements in patient population health. In contrast to strategies that target only the clinician or the tobacco user, with systems strategies, tobacco use interventions are likely to become a fully integrated and routine part of patient care. Bolstered increasingly by meaningful use of EHRs, they may become easier to perform than not. If results such as those realized within HVMA can be replicated across the primary care delivery system, significant strides can be made towards reducing tobacco use prevalence and improving health. A number of limitations should be noted. Although we endeavored to assess data quality, no independent measure of quality was available, and thus inaccuracy in the electronic medical records may lead to variability as well as potential bias in the analysis. The size of this potential issue cannot be known. However, our test of internal consistency showed that patients who were recorded as never having smoked were also listed as never smokers 99.4% of the time at prior visits. Furthermore, the sites that had increased rates of brief intervention also had simultaneous decreases in the number of smokers and in the likelihood of office visits for smokers. This is consistent with literature showing relationships between likelihood of quitting and tobacco use interventions with medical doctors.
Had the data quality been poor, it is unlikely that this relationship would have existed in the HVMA patient histories. This analysis also relied on patient self-reported smoking status, which is subject to reporting bias, especially among certain populations like pregnant women. The self-report bias may have affected the estimates of smoking prevalence, but it is unlikely to have affected the estimates of pre-post changes. The percentage of women with ICD9 diagnosis codes (V22) for pregnancies varied across sites from 2% to 5%. There was no discernible pattern between these percentages and sites that achieved systems change. Without any measure of continuity of care, we also cannot know whether patients visited non-HVMA providers for any period of time. We attempted to deal with this limitation by requiring that at least 3 years elapse between the first and last visit for all patients in our study group. We have no reason to assume that patients who sought routine care elsewhere and then returned at a later date to receive care at HVMA would bias the results in any way. Similarly, the likelihood of office visits for smoking-related diagnoses could be impacted by patients seeing non-HVMA providers for their care. Smokers, in particular, have more health problems and may require care from specialists or more ED or hospital visits for smoking-related diagnoses. Furthermore, there is extensive literature on what has been called the "ill-quitter effect" or "quitting while sick." Future research in this area should also examine data sets collected in settings other than the primary care setting. Without a better understanding of changes in hospitalization rates, it would be impossible to claim that there have been significant health improvements or to develop adequate return on investment estimates for brief tobacco intervention in real world settings. Nonetheless, the success of the HVMA program of brief tobacco use interventions demonstrates the value of system-wide adoption of MU core tobacco measures. When systems routinely meet the MU criteria, there will be real opportunities to improve healthcare quality, among them tailored feedback systems to motivate clinicians, new ways to identify and address health disparities, and development of payment systems that tie bonuses to reliable measures of improving population health. The forces driving healthcare in the United States to adopt a systems approach to tobacco interventions are quite large. These include significant federal legislation, tobacco-related meaningful use rules, and the move toward Accountable Care Organizations, Alternative Quality Contracts, and Value Based Purchasing. With these tailwinds, the rates of tobacco interventions in the United States are likely to increase significantly in the coming years, ultimately leading to substantial savings from the decreased utilization of health care services related to tobacco use."} {"text": "Depression increases the risk of disability pension and represents a health related strain that pushes people out of the labour market. Although early voluntary retirement is an important alternative to disability pension, few studies have examined whether depressive symptoms incur early voluntary retirement.
This study examined whether depressive symptoms and changes in depressive symptoms over time were associated with early retirement intentions. We used a cross-sectional (n = 4041) and a prospective (n = 2444) population from a longitudinal study on employees of the Danish eldercare sector. Depressive symptoms were measured by the Major Depression Inventory, and the impact of different levels of depressive symptoms and of changes in depressive symptoms on early retirement intentions was analysed with multinomial logistic regression. In the cross-sectional analysis, all levels of depressive symptoms were significantly associated with retirement intentions before the age of 62 years. Similar associations were found prospectively. Depressive symptoms and worsened depressive symptoms in the two-year period from baseline to follow-up were also significantly associated with early retirement intentions before age 62. The prospective associations lost statistical significance when controlling for early retirement intentions at baseline. The whole spectrum of depressive symptoms represents a health related strain that can incur intentions to retire early by early voluntary retirement. In order to change the intentions to retire early, the work related consequences of depressive symptoms should be addressed as early in the treatment process as possible. The ageing work force challenges the economy of most Western countries and calls for new strategies to maintain people in the work force. Depression is a common mental disorder. EVR can be defined as early retirement by either self-financed or publicly financed non-illness based pensions. Publicly financed EVR is available in a number of Western countries and is generally available at the age of 60 years or earlier. Both 'pull' and 'push' factors can help explain people's incentives to retire early. 'Pull' factors, e.g., being female and having a spouse that is retired, have been associated with EVR, as have 'push' factors, e.g., experienced health and medical problems or unemployment. Depressive symptoms can also occur below the diagnostic threshold, i.e., in people experiencing symptoms not severe enough to reach the threshold of a clinical depression. Depression is mostly characterised by an episodic course, but there is a high risk of relapse. Previous prospective studies were based on between-group differences, i.e., people with depression were more likely to retire early than people without depression. By contrast, a study of within-person changes would measure whether changes in depression in the same person from one point in time to another affected early retirement. Early retirement plans are likely to change over the course of one's life and can be viewed as a transitional process, one which often begins with considerations and early retirement intentions, then proceeds to a decision to retire early, and eventually ends with actual retirement.
This study examined the association between depressive symptoms and early retirement intentions through four research questions: (1) Is the severity of depressive symptoms associated with immediate early retirement intentions? (2) Is the severity of depressive symptoms associated with early retirement intentions over a two-year period? (3) Are changes in depressive symptoms associated with early retirement intentions over a two-year period? (4) Do depressive symptoms affect changes in retirement intentions over time? This study used data from a prospective study of eldercare workers' health and work characteristics. In the cross-sectional study population, we included participants who responded at T2 (n = 8431). We excluded those participants who were below 45 or above 59 years at T2 (n = 3490) and those who had missing values (n = 145) or who answered "I don't know" to the question about early retirement intentions (n = 755). The final cross-sectional population amounted to 4041. We did not have information on actual retirement, but in order to account for the possible bias of people who left the population because of EVR, participants were excluded at age 60 at T2. In Denmark, most employees can receive publicly financed EVR at the age of 60 (Danish: Efterløn) and the publicly financed old age pension at age 65 (Danish: Folkepension). Due to high Danish taxes on income, self-financed retirement before the age of 60 is rare. In the prospective study population, we included those participants who responded at both T1 and T2 (n = 5206). We excluded participants who were below 45 or above 57 years old at T1 , participants who had turned 60 years at T2 (n = 11), and participants who had missing values (n = 25) or answered "I don't know" (n = 390) to the question about early retirement intentions. The final prospective population consisted of 2444 employees. We measured depressive symptoms by the Major Depression Inventory (MDI). The MDI is a widely used self-rating scale to assess depression. The individual responses to the items are summed up to a severity scale that ranges from 0 to 50; the higher the score, the more severe the depressive symptoms. A cut-off score of ≥20 indicates a probable major depression. We categorised the MDI scores at T1 into five different categories indicating the severity of depressive symptoms: 'Severe depressive symptoms' (scores ≥20); 'Moderately severe depressive symptoms' (scores between 15 and 19); 'Moderate depressive symptoms' (scores between 10 and 14); 'Mild depressive symptoms' (scores between 5 and 9); 'No depressive symptoms' (scores between 0 and 4). Although a self-report questionnaire is able to indicate the presence of a clinical depression, a valid diagnosis can only be made by clinical assessment performed by a health care professional. We therefore refer to depressive symptoms rather than to clinical depression. Severity of depressive symptoms at T1 and changes in depressive symptoms from T1 to T2 were used as predictor variables in the longitudinal analyses. Changes in depressive symptoms had three categories: "worsened", "improved", or "unaffected" symptoms. Participants were classified with "worsened symptoms" if their MDI score had increased by five points or more and with "improved symptoms" if their MDI score had decreased by five points or more. All other participants were classified with "unaffected symptoms".
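A minimal sketch of these scoring rules in code (illustrative only; the function names are mine, not the study's):

```python
def mdi_severity(score: int) -> str:
    """Map an MDI total score (0-50) to the five severity categories."""
    if score >= 20:
        return "severe"
    if score >= 15:
        return "moderately severe"
    if score >= 10:
        return "moderate"
    if score >= 5:
        return "mild"
    return "none"

def symptom_change(mdi_t1: int, mdi_t2: int) -> str:
    """Classify the T1-to-T2 change using the five-point threshold."""
    delta = mdi_t2 - mdi_t1
    if delta >= 5:
        return "worsened"
    if delta <= -5:
        return "improved"
    return "unaffected"
```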
The intention to retire early was measured with a single item: "When would you like to retire from the labour market?" Five response categories indicated age of intended retirement: "I would like to work until I turn 65 years old", "I would like to receive early retirement pension, when I am between 62 and 65 years old", "I would like to receive early retirement pension, when I am between 60 and 62 years old", "I would like to retire, before I turn 60 years old", or "I don't know". These were recoded into three categories: 1. Very early retirement intentions, which included the responses 'I would like to retire before I turn 60 years old' and 'when I am between 60 and 62 years old'; 2. Early retirement intentions, which included the response 'I would like to retire when I am between 62 and 64 years old'; 3. Normal retirement intentions, which included the response 'I would like to work until I turn 65 years old'. Participants who answered "I don't know" were excluded from the study populations, as described previously. Statistical analysis was conducted in PASW 18. Using multinomial logistic regression for the cross-sectional and prospective analyses, we examined the association of depressive symptoms with the probability of very early and early retirement intentions. A p-value of <0.05 was regarded as statistically significant in these analyses. The cross-sectional analysis was adjusted for gender, age, marital status, working hours, seniority and type of occupation. We tested two models in the prospective analyses. Model 1 was adjusted for the same variables as the cross-sectional analysis. Model 2 was further adjusted for early retirement intentions at T1. The drop-out analyses were conducted using Pearson chi-square tests (χ2). Of the non-responders at follow-up, 37% (n = 541) did not return the questionnaire because they no longer worked at the particular workplace. Chi-square tests revealed that these non-responders were more likely to be married (p < 0.05), to be male (p < 0.001), and to have fewer years of work experience (p < 0.001) compared to the responders from the prospective study population. No differences in depressive symptoms, age, shift work, or type of occupation were found. Chi-square tests revealed that the non-responders who remained at the same work place (n = 926) were more likely to be male (p < 0.001), to be younger (p < 0.05), to have fewer years of work experience (p < 0.05), and to have more severe depressive symptoms (p < 0.001) compared to the responders from the selected prospective study population. No differences in marital status, shift work, or type of occupation were found.
Chi-square tests revealed that the participants who were excluded because they had answered "I don't know" to the question about early retirement intentions from both the cross-sectional and longitudinal study populations were significantly younger (p < 0.05) than the study participants, but did not significantly differ with regards to depressive symptoms, gender, marital status, shift work, or work experience. The results from the cross-sectional analysis showed that all levels of elevated depressive symptoms doubled the chance of very early retirement intentions (before age 62), but not of early retirement intentions (between ages 62 and 64). The same association was also found when examined over time, except that 'severe depressive symptoms' did not significantly affect intentions to retire before age 62. In addition, those who experienced worsened depressive symptoms over time were also more likely to have very early retirement intentions compared to those with no change in depressive symptoms. However, when adjusting for early retirement intentions at T1 in the longitudinal analyses, the results were no longer significant. Our findings are in line with previous studies showing that depressive symptoms increase the risk of early retirement by either EVR or disability pension. We were also able to examine the association between depressive symptoms and changes in retirement intentions from T1 to T2. The prospective associations were no longer statistically significant when adjusting for retirement intentions at T1, regardless of the severity of depressive symptoms at T1 or whether the depressive symptoms had worsened or improved. A significant result after adjusting for retirement intentions at T1 would require that the retirement intentions increased from T1 to T2 among those participants who already had depressive symptoms and early retirement intentions at T1. Our finding indicated that the association between depressive symptoms and retirement intentions remained the same or was weakened in the two-year period. In accordance with other recent studies, our study indicated that the presence of even mild depressive symptoms had significant work related consequences. It remains unknown whether the impact on early retirement of depressive symptoms that are below the threshold of a clinical depression should be understood as the effect of residual symptoms from a previous depression or as symptoms that reoccur independently. In either case, our results show that it is important to consider the whole spectrum of depression when analysing the impact of depression on EVR over time. Our results may have been confounded by 'push' and 'pull' factors at T1. Studies have identified several factors that play an important role in both early retirement intentions and actual early retirement, such as an adverse working environment. Another important limitation of this study was the selective attrition. The response rate from the original study was fairly high, but attrition between waves may still have introduced bias. Our attrition analyses showed that 37% of non-responders no longer worked at the particular work place. Early retirement before the age of 60 is rare in Denmark. The study population consisted of mainly female eldercare workers, an occupational group that is known for reporting a lower quality of the work environment compared to other sectors. Thus, it is uncertain whether these findings generalise to other occupational groups.
The whole spectrum of depressive symptoms represents a health-related strain that can influence EVR. Our findings suggest that the severity of depressive symptoms can have an immediate impact on the intentions to retire early and that these intentions are maintained, but weakened, over time. These findings also suggest that in order to change the intentions to retire early, the work-related consequences of depressive symptoms should be addressed as early in the treatment process as possible.

Along with the development of science and technology, lanthanide-doped upconversion nanostructures have, as a new type of material, taken their place in the field of nanomaterials. Upconversion luminescence is a nonlinear optical phenomenon in which two or more photons are absorbed and one photon is emitted. Compared with traditional luminescence materials, upconversion nanostructures have many advantages, such as weak background interference, long lifetime, low excitation energy, and strong tissue penetration. These interesting nanostructures can be applied in anticounterfeiting, solar cells, detection, bioimaging, therapy, and so on. This review focuses on current advances in lanthanide-doped upconversion nanostructures, covering not only the basic luminescence mechanism and the synthesis and modification methods, but also the design and fabrication of upconversion nanostructures such as core–shell nanoparticles and nanocomposites. Finally, this review emphasizes the application of upconversion nanostructures in detection, bioimaging, and therapy. Learning more about the advances in upconversion nanostructures can help us better exploit their excellent performance and use them in practice.

Because the luminescence mechanism of upconversion materials differs from that of traditional fluorescent ones, many research teams have been committed to its study, which has led to a steadily developing understanding. In most cases, upconversion luminescence (UCL) derives from more than one fundamental mechanism; among them, excited state absorption (ESA) and energy transfer upconversion (ETU) are the most influential. Dopants and matrices are the essential elements for upconversion. Activators act as dopants that absorb and then release energy in the form of fluorescence. The common activators for upconversion are Er3+, Tm3+, and Ho3+. The main emission bands of Er3+ are centered at about 540 nm (green) and 650 nm (red); through adjusting the relative intensity of these particular emission bands, emission from green to red can be obtained. The two characteristic emission bands of Tm3+ lie at about 800 and 480 nm, respectively, and the visible output appears blue because 800 nm light is not visible. Ho3+ possesses characteristic emission bands similar to those of Er3+. As the other kind of dopant, sensitizers are responsible for absorbing energy from the excitation source and transferring it to the activators. Yb3+ and Nd3+ are the most commonly used sensitizers: they not only have large absorption cross-sections at 980 and 808 nm, respectively, but can also be in resonance with the activators. In addition to the dopants, the host matrix provides a platform for energy transfer.
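As a quick energy-budget check of this two-photon picture (simple photon arithmetic, not a result from the review): two 980 nm pump photons together carry more energy than one green 540 nm photon, with the ≈0.2 eV balance dissipated nonradiatively.

```latex
E_{980} = \frac{1240\ \mathrm{eV\,nm}}{980\ \mathrm{nm}} \approx 1.27\ \mathrm{eV},
\qquad
2 \times 1.27\ \mathrm{eV} \approx 2.53\ \mathrm{eV} > E_{540} \approx 2.30\ \mathrm{eV}
```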
Synthesis plays a critical role in determining the structure, composition, and properties of the resulting materials. The resulting upconversion nanostructures may be either hydrophobic or hydrophilic, each with its merits and demerits, though the former surface character is more common. For example, upconversion nanostructures with controllable shape and size can be prepared by thermal decomposition, high-temperature co-precipitation, and solvothermal methods.

Utilizing newly developed materials for particular applications is the ultimate goal of scientific research, and this applies to upconversion fluorescent nanomaterials. Given their low background noise and excitation located in the IR region, lanthanide-doped upconversion nanostructures can be used in many fields, such as physics, chemistry, biology, and medicine, especially as probes for detection, bioimaging, and therapy.

This review aims to discuss the existing advances in the rational design and synthesis of lanthanide-doped upconversion nanostructures. To begin with, we introduce the mechanism of lanthanide-doped upconversion nanostructures, from host matrices to dopants, in Section 2.

2
UCL is an anti-Stokes optical phenomenon. Simple upconversion nanostructures normally contain dopants and host matrices, which are the key factors determining luminescence efficiency. Dopants, including sensitizers and activators, provide the luminescence center, while host matrices supply a platform for energy transfer between the dopants and drive them into optimal positions.

2.1
It is well known that lanthanide ions exhibit a 4f^n5s^25p^6 (n varies from 0 to 14) electronic configuration. The partially filled 4f electronic shell, which is critically relevant to photoluminescence, is protected by the outer 5s and 5p electronic shells from external environmental disturbances. The variation of n between 0 and 14 gives lanthanide ions energy-enriched levels, which contributes greatly to their broadband spectra. Upconversion is a nonlinear optical phenomenon whose fundamental mechanisms comprise ESA, ETU, photon avalanche (PA), cooperative energy transfer, and cross-relaxation (Figure 1). Of these, ESA and ETU are chiefly responsible for UCL efficiency.

Er3+, Tm3+, and Ho3+ commonly play the activating role in single-doped systems, which can be explained by ESA. For single-doped systems, Er3+ has a relatively high quantum yield because the energy gap between the 4I15/2 and 4I11/2 states closely matches that between the 4I11/2 and 4F7/2 states (both ≈980 nm), so a single pump wavelength can drive both absorption steps. For Tm3+, the strong NIR emission at ≈800 nm is generated by the 3H4–3H6 transition, which is favorable for biological tissues because of the deep tissue penetration and low heat effect; three additional primary upconversion emission bands lie around 350, 450, and 479 nm, corresponding to the 1D2–3H6, 1D2–3F4, and 1G4–3H6 transitions, respectively. In terms of Ho3+ ions, there are two main upconversion emission bands: the red, centered at 650 nm, originating from the 5F5–5I8 transition, and the green, centered at 540 nm, originating from the 5S2/5F4–5I8 transitions. Low doping concentrations are common in single-doped systems (less than 3% Er3+ and no more than 1% Tm3+) to avoid concentration quenching, which is incurred by the increase in harmful nonradiative transitions at high doping concentrations. ESA, which normally requires a low lanthanide doping concentration (<1%), is responsible for this single-ion-based process.
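Because ESA and ETU are multiphoton processes, the upconversion intensity at low pump power typically scales as I ∝ P^n, where n is the number of pump photons consumed per emitted photon, so the slope of a log–log plot estimates n. A minimal sketch with synthetic data chosen to mimic a two-photon process (not measurements from any cited work):

```python
import numpy as np

# Pump power (arbitrary units) and upconversion intensity; the values are
# synthetic, generated to follow I ~ P^2 as expected for a two-photon process.
power = np.array([0.1, 0.2, 0.4, 0.8, 1.6])
intensity = np.array([0.9, 3.7, 15.2, 60.5, 241.0])

# Fit log(I) = n*log(P) + c; the slope n estimates the photon number.
n, c = np.polyfit(np.log(power), np.log(intensity), 1)
print(f"estimated photon number n = {n:.2f}")  # ~2 here
```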
Unlike ESA, which is based on single ions, ETU always occurs between two neighboring ions, regardless of their chemical nature. Dopant ions can accordingly be categorized into activators and sensitizers. Yb3+ ions, with their simple energy level structure, are the typical sensitizer, given their large absorption cross-section at about 980 nm, corresponding to the 2F7/2–2F5/2 transition, and their good energy match with a large number of f–f transitions of the typical activators. The doping concentration of Yb3+ ions should be controlled at a medium level (20%–40%) to eliminate hazardous concentration quenching. Selection of various dopants or doping concentrations leads to diverse colors of upconversion emission: Liu and co-workers utilized Yb3+/Er3+ and Yb3+/Tm3+ co-doped NaYF4 NCs to yield a series of colors of light. Besides Yb3+, the Nd3+ sensitizer has also increasingly been sought. Compared to Yb3+, the thermal effect of Nd3+ is lower and, more importantly, unlike the 980 nm-centered absorption peak of Yb3+, the absorption peak of Nd3+ does not overlap the absorption peak of water, which makes it more suitable for biological applications. In some cases, Nd3+ ions and Yb3+ ions act jointly as sensitizers: Nd3+ ions are excited to the metastable level 4F5/2 from the ground level 4I9/2 by absorbing a pump photon, reach the energy level 4F3/2 by nonradiative transition, and then transfer the energy to a Yb3+ ion, which finally transfers the energy to the activator. However, although doping with two sensitizers improves UCL efficiency and the feasibility of further applications, such upconversion systems, like core/shell structures, have a more complicated reaction mechanism (Figure 2).

2.2
UCL efficiency is closely related to the host matrix, given its critical role in determining the surrounding environment of the dopant ions, such as spatial distance, coordination number, and energy transfer efficiency. Selection of a proper host material is of paramount importance, and a few basic requirements are highly demanded, including optical stability and an ionic size resembling that of the dopant ions. Thus, inorganic compounds containing alkaline earth ions (for instance Ca2+) and similar cations are commonly considered. The symmetry of the rare-earth-doped host crystal field can also be tailored to change the spatial distance between the luminescent centers and to open up other energy transfer processes.

In general, radiative transitions of rare earth ions are forbidden by quantum mechanical selection rules. However, this forbidden nature can be broken by the crystal field of the host matrix. When lanthanide doping ions are introduced into an asymmetrical crystal field, their 4f states mix with higher electronic configurations; a higher degree of asymmetry of the host matrix consequently gives a better UCL efficiency. A good example can be seen in lanthanide-doped β-NaYF4, whose UCL efficiency is substantially reinforced by the transformation from the α phase to the more asymmetric β phase. Doping with Gd3+ ions in NaYF4 facilitates this crystal phase transition from cubic to hexagonal. Optically inert alkali ions such as Li+ have also been extensively used to modify the host crystal field: Li+ ions, having the smallest cationic radius, are expected to enter lattice sites or interstices randomly, which makes Li+ ions more suitable to modify the host crystal field.
Zhang and co-workers reported, for the first time, a significant enhancement (by 25 times) of the visible upconversion emissions of Y2O3:Yb,Er nanoparticles through Li+ doping, and Li+ doping produces a more than 30-fold increase in upconversion emission for NaYF4:Yb,Er upconversion nanoparticles (UCNPs) (Figure 2a).

In general, concentrated luminescence centers shorten the spatial distances between luminescence centers, which leads to detrimental energy transfer and luminescence quenching effects. As such, the concentrations of activators and sensitizers have to be strictly controlled for them to perform their desired functions. However, a recent report claims that a new class of KYb2F7 host material, which adopts an orthorhombic crystallographic structure, constructs a "dopant ion spatial separation" arrangement at the sub-lattice level with enhanced UCL efficiency. The KYb2F7 material plays the dual role of host matrix and sensitizer, which is favorable for the generation of multiphoton upconversion.

The color of upconversion emission is normally adjusted by controlling the concentration or species of the doped rare earth ions. In some particular cases, however, the host material also influences the upconversion emission color, as indicated by the latest report of Liu's group, where KMnF3 was adopted as the host matrix to obtain pure single-band upconversion. Energy transfer between the Mn2+ ions of the host matrix and the dopant ions yields a pure single-band upconversion emission centered in the red and NIR spectral regions. In addition to host materials containing Mn2+, a red single band can be observed in NaScF4, NaSc2F7, and YOF host materials.

Furthermore, the crystal grain size of the host matrix is one of the primary factors affecting luminescence efficiency. For upconversion materials, a smaller grain size gives rise to a lower UCL efficiency, which is attributed to the higher density of surface defects and the more serious energy transfer losses associated with smaller grains.

2.3
Energy transfer, a central physical concept, applies to every upconversion process. Here, we discuss energy transfer in terms of three main topics: energy transfer in core@shell nanostructures, localized surface plasmon resonance (LSPR) assisted energy transfer, and luminescence resonance energy transfer (LRET).

Core@shell systems such as NaYF4:Yb,Er@KYF4, NaYF4:Yb,Er(Tm)@NaGdF4, NaYF4:Ln@CaF2, and NaYF4:Tm@CaF2 have been reported, and enhanced UCL of NaYF4:Yb,Er@NaYF4:Yb,Tm was reported relative to that of NaYF4:Yb,Er,Tm (Figure 3). Shell layers in core@shell nanostructures are either active or inactive in terms of UCL. In energy-migration designs, excitation energy is absorbed by Yb3+ ions and accumulated by Tm3+ ions in the core area; the energy then transfers from the Tm3+ ions to Gd3+ ions in the intermediate layer and is finally captured by X3+ ions in the outer shell for UCL under 980 nm irradiation. Apart from host matrices containing Gd3+ ions, Yb3+-doped host materials have a similar property. Zhao and co-workers designed an Nd3+-sensitized core–shell–shell nanostructure of NaYF4:Yb,X@NaYF4:Yb@NaNdF4:Yb: the Nd3+ ions in the outer shell are excited by 800 nm irradiation and, with the aid of Yb3+ ions, energy migrates from the Nd3+ ions to the X3+ ions.

In recent years, noble metals have been extensively studied due to their excellent optical properties, such as strong visible-light absorption and scattering. LSPR can occur between noble metals, like Ag or Au, and phosphors when the confined free electrons of the noble metal are resonant with frequencies close to those of the passing photons. In general, the enhancement effect of LSPR derives from increases in the excitation and emission rates. The excitation rate is elevated by the amplification of the local incident electromagnetic field when the excitation band of the upconversion nanostructure couples with the LSPR band of the noble metal. An increase in emission rate, which occurs when the emission band of the upconversion nanostructure is resonant with the LSPR frequencies of the noble metal, promotes not only the radiative decay rate but also the nonradiative decay rate, which can give rise to emission quenching. Various nanostructures of noble metals, including nanoparticles, nanowires, and nanoarrays, have been employed to investigate the effect of LSPR. When NaYF4:Yb,Tm NCs were combined with plasmonic gold nanostructures, the emission intensities of the 1D2–3F4 and 1G4–3H6 transitions were increased by more than 150%, while an increase of only ≈50% was observed for the 1G4–3F4 transition. The enhancement could be attributed to the increase in radiative decay rate and emission efficiency.
On the contrary, a quenching effect may be caused by considerable scattering of the excitation irradiation when NaYF4:Yb,Tm nanocrystals are embraced by gold shells. LSPR can also assist the energy transfer from Yb3+ ions to Er3+ ions, which is enhanced at least 6-fold on a fabricated gold pyramid pattern (Figure 3).

LRET, which involves an energy donor and an energy acceptor, requires a certain degree of overlap between the absorption band of the acceptor and the emission band of the donor, as well as a close spatial distance between donor and acceptor, for energy transfer. In UCNP-based LRET nanocomposites, UCNPs are usually used as the energy donor, while dyes, quantum dots (QDs), or metal nanoparticles act as the acceptor; the UCL is absorbed by the acceptor and subsequently produces new emission colors. Zhang et al. reported a modulation of upconversion emission through hetero-integration of NaYF4:Yb,Er with semiconductor nanoparticles, and Perepichka and co-workers reported novel CdSe/NaYF4:Yb,Er nanoheterostructures (CSNY), in which the CdSe nanoparticles act as an acceptor, absorbing the green upconversion band and emitting red light under 980 nm excitation (Figure 3e,f).
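If one assumes Förster-type dipole–dipole coupling for the LRET step (a common approximation, not a claim made in the cited works), the transfer efficiency E falls off steeply with the donor–acceptor distance r, which is why a close spatial distance is emphasized above:

```latex
E = \frac{1}{1 + (r / R_0)^6}
```

Here R_0, the Förster distance at which E = 50%, is typically only a few nanometers, so acceptors must sit essentially at the UCNP surface to be driven efficiently.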
3
Due to the development of nanoscience and nanotechnology, the synthesis of UCNPs with controllable size, crystalline phase, and composition has gradually matured. Synthetic UCNPs can be divided into oil-dispersible and water-dispersible types based on their dispersion behavior. Oil-dispersible UCNPs are produced by employing oleic acid (OA) and oleylamine (OM) as surfactants. Such hydrophobic UCNPs exhibit excellent dispersibility, uniform size distribution, high crystallinity, and superior UCL properties. However, some UCL-related applications, including bioimaging and therapies, require water-dispersible UCNPs, which inspires studies of converting hydrophobic UCNPs into hydrophilic counterparts. To facilitate biological applications, the direct synthesis of water-dispersible UCNPs has also been developed; the surface of water-dispersible UCNPs is normally covered by hydrophilic polymers or molecules to achieve sufficient stability in service. In this section, we will discuss the synthesis and modification of UCNPs from three aspects: synthesis of hydrophobic UCNPs, direct synthesis of hydrophilic UCNPs, and conversion of hydrophobic UCNPs to hydrophilic UCNPs.

3.1
Considerable effort has been devoted to the synthesis of hydrophobic UCNPs with controllable shape, size, and phase, which have a great impact on upconversion emission efficiency and applications. In thermal decomposition routes, LaF3 triangular nanoplates were prepared by decomposing La(CF3COO)3 precursors in an OA/ODE solution at high temperature; NaYF4:Yb,Er and NaYF4:Yb,Tm UCNPs were obtained by thermal decomposition of sodium and lanthanide trifluoroacetates in OM; and monodisperse NaYF4 and NaYF4:Yb,Er/Tm NCs with controllable size were synthesized in OA/OM/ODE. For NaYF4:Yb,Er UCNPs, a scheme of the nucleation stages was proposed by combining transmission electron microscopy, X-ray diffraction, and upconversion emission spectroscopy (Figure 4). The formation of β-NaYF4:Yb,Er from the α counterparts proceeds by two routes. In route A, the α-NaYF4:Yb,Er precursors dissolve in the beginning period, and the Ostwald-ripening process is enhanced during the α → β phase transition owing to the broad size distribution of the dissolved α-NaYF4:Yb,Er. In route B, the addition of CF3COONa to the system suppresses Ostwald ripening (nucleation initiates at low temperature and regrowth occurs at high temperature), which is ascribed to the high concentration of monomers; at the stage of the α → β phase transition, the size increases uniformly. Murray et al. obtained highly uniform β-NaYF4 UCNPs with a diverse family of morphologies by adjusting the synthetic conditions, including the reaction time, the ratio of OA to ODE, and the concentration of precursors.

In high-temperature co-precipitation, nanocrystals of high crystallinity are formed through Ostwald ripening. For example, Zhang et al. reported the synthesis of lanthanide-doped pure hexagonal NaYF4 UCNPs via high-temperature coprecipitation; NaYF4-based UCNPs with different shapes could be obtained by adjusting the amounts of OA and ODE, and pure hexagonal-phase products (e.g., NaScF4) could likewise be achieved. The raw materials, rare earth chlorides, can be replaced by other rare earth inorganic salts: recently, Liu's group reported the preparation of lanthanide-doped NaGdF4 nanoparticles by a similar process that simply uses the acetate counterparts as starting materials.

Hydro(solvo)thermal synthesis has also attracted much attention for synthesizing high-quality NCs with well-controlled size and shape, due to its low cost. At a certain temperature and pressure, water or another solvent reaches a critical or supercritical state, which leads to elevated reaction activity; the physical and chemical properties of the substances in the solvent are thereby altered greatly. This route has been applied to NaMF4 (M = rare earth) materials; for example, using Ln(NO3)3 as precursors and OA as a stabilizing agent, nanostructured arrays composed of hexagonal nanotubes with lengths of ≈500 nm and outer diameters of ≈250 nm were obtained. The most popular procedure for preparing hydrophobic NCs is the liquid–solid–solution (LSS) process proposed by Li's group (Figure 6a). Some other synthetic methods have also been developed to obtain hydrophobic UCNPs, such as ionic liquid-based routes.

3.2
Regarding bioapplications, UCNPs with high water solubility are desired; secondary modifications are essential to convert hydrophobic UCNPs into hydrophilic ones, which has inspired the study of the direct synthesis of hydrophilic UCNPs.
Direct synthesis of hydrophilic UCNPs often employs water or polyol as the solvent and hydrophilic polymers or molecules as the surfactant. The capping ligands of the finally obtained hydrophilic UCNPs normally contain special reactive groups and can be further conjugated with biomolecules or functional groups. For example, hydrophilic NaYF4:Yb/Er upconversion phosphors with carboxyl-functionalized surfaces have been prepared via a one-step hydrothermal method; the size of the NaYF4:Yb/Er phosphors can be controlled, and the carboxyl-functionalized surfaces can directly conjugate antibodies for biodetection. Our group has also conducted investigations into the preparation of hydrophilic lanthanide-doped UCNPs: NaGdF4 UCNPs were obtained via a fast, simple, and environmentally friendly microwave-assisted modified polyol process with PEI as the surfactant, a pure-phase NaGdF4 transition from cubic to hexagonal was achieved by modulating the Gd3+:F− ratio, and the upconversion emission was tuned from the visible to the near-IR, even to white light, by adjusting the doping concentrations of the rare earth luminescent centers.

The above-mentioned hydro(solvo)thermal synthesis is a typical synthetic method that is also applicable to the direct synthesis of hydrophilic UCNPs. Hollow α-NaYF4:Yb,Er upconversion nanospheres were fabricated by Lin's group using Y(OH)CO3:Yb,Er nanospheres as sacrificial templates via a surface-protected "etching" and hydrothermal ion-exchange process. PEI-coated Y(OH)CO3:Yb,Er precursor nanospheres were first prepared; after the addition of NaBF4, the as-obtained solution was transferred into a Teflon autoclave for a certain time period at a certain temperature. At high temperature and pressure, the fluoride source, i.e., NaBF4, gradually releases H+ and F− ions. The H+ ions corrode the Y(OH)CO3 nanospheres and liberate a large quantity of Y3+ ions; the Na+, Y3+, and F− ions then react to generate α-NaYF4. The PEI coated on the surface of the Y(OH)CO3 effectively protects the spheres against rapid dissolution by the H+ ions; as a result, α-NaYF4:Yb,Er upconversion nanospheres with a hollow structure are obtained. Furthermore, FA was conjugated on the surface of the α-NaYF4:Yb,Er nanospheres through the free amine groups present. Using the identical synthetic process, hollow CaF2, GdVO4, and NaREF4 microspheres can be fabricated. Y2O3:Yb,Er hollow spheres with uniform morphology and controllable inner structure have likewise been prepared via a hydrothermal method followed by temperature-programmed calcination.

3.3
Several strategies convert hydrophobic UCNPs into hydrophilic ones. One ligand-free strategy uses a tetrafluoroborate reagent to replace the original organic ligands on the nanocrystal surface, although this strategy does not work well with other diazonium tetrafluoroborate compounds. The BF4− anions, attaching weakly to the surface after the replacement, readily give the NCs high dispersibility in various polar, hydrophilic solvents with a marginal impact on particle size and shape. This approach is widely applicable to a great number of NCs of varying shape and size, and it is important because the obtained ligand-free NCs can be stored in solvents over a long time period without evident aggregation. Alternatively, when an acidic pH is applied, the oleate is protonated completely and detached from the UCNP surface, and the charge repulsion on the surface then stabilizes the bare UCNPs.
Liu's group adopted a similar strategy to prepare bare sandwich-structured UCNPs with short spatial distances for energy transfer.

3.3.3
Ligand exchange is an effective technique for replacing the original hydrophobic ligands coating the UCNP surface with hydrophilic ones. The process is easy to operate and has a negligible effect on the morphology of the resulting UCNPs. The driving force behind this reaction is that the hydrophilic ligand has a stronger coordination ability to lanthanide ions than the original hydrophobic ligand. A variety of organic molecules and polymers have been employed, such as PAA and P(MA-co-SEMA) copolymers as the new capping ligand, and in some ligand-exchange procedures the luminescence intensity can even be reinforced.

3.3.4
Ligand interaction can be categorized into ligand layer-by-layer assembly and ligand attraction. This approach deposits a hydrophilic shell coating on the OA-capped UCNPs, converting hydrophobic UCNPs into hydrophilic ones. Surface NH2 groups can then attach to biotin for further fluorescence resonance energy transfer assays. Li and co-workers developed water-dispersible UCNPs through the self-assembly interaction between the host molecule alpha-cyclodextrin (α-CD) and the guest molecule OA (Figure e), and layer-by-layer assembly with polyelectrolytes such as PSS (poly(styrene sulfonate)) has also been applied (Figure f).

Above all, the modifications for converting hydrophobic UCNPs into hydrophilic ones each have merits and drawbacks. Some properties of the UCNPs, including morphology, monodispersity, and UCL, may be affected to a greater or lesser extent. Ideal modifications, with control of particle size and homogeneity and with preserved UCL, remain to be developed.

4
For upconversion nanostructures, it is important to improve the UCL efficiency. So far, numerous efforts have been devoted to this goal, such as seeking optimal matrices, tuning the doping concentration, and constructing core–shell nanostructures. Core–shell nanostructures play an important role in upconversion nanomaterials because they can not only improve the optical properties but also combine discrete functional units. Core–shell structures are generally divided into two classes: epitaxial and nonepitaxial. The shell of an epitaxial core–shell nanostructure must have a low lattice mismatch with the core, which decreases surface defects and improves UCL efficiency. The shell layer can be the same host matrix as the core, as in NaYF4@NaYF4, or another substance whose lattice is similar to that of the core host matrix, as in α-NaYF4@CaF2. For example, NaYF4 shells have been grown on NaYF4:Yb,Tm nanocrystals through thermal decomposition, and the epitaxial growth of shells on NaYF4 nanoparticles via the heating-up method was reported by Liu et al. The heating-up method is a commonly used synthetic route to core–shell structures through seed-mediated epitaxial growth. The synthesis of NaGdF4:Yb,Er/NaYF4 UCNPs was achieved by a successive layer-by-layer method, with a quantum yield of 0.47 ± 0.05% for the heterogeneously doped NaGdF4:Yb,Er/NaYF4 UCNPs, and the upconversion emission of NaGdF4:Yb,Tm/NaGdF4:A was also improved by 20%–30% by the successive layer-by-layer method.
Chen's group reported the synthesis of LiLuF4:Ln3+ core/shell UCNPs through the successive layer-by-layer method, and high absolute upconversion quantum yields of 5.0% and 7.6% were achieved for the LiLuF4:Er and LiLuF4:Tm core–shell structures, respectively.

The heating-up method has a few shortcomings, such as volatile solvent removal, prolonged heating, and the centrifugation and washing of core nanoparticles; the hot-injection technique was therefore developed to obtain upconversion core–shell nanostructures. In this approach, small sacrificial NaYF4 NCs (SNCs) are first prepared via thermal decomposition. The SNCs, serving as shell precursors, are then injected into a hot solution of core NCs, where they dissolve (defocusing) and deposit onto the larger core NCs (self-focusing), yielding core–shell nanostructures; if the cycle of defocusing and self-focusing is repeated, multilayer core–shell nanostructures can form. The smaller nanocrystals dissolve and deposit on the surface of the larger nanocrystal cores because of their high surface energy. Veggel and co-workers synthesized core–shell nanostructures through this method (Figure e–i): NaYF4:Yb,Er@NaGdF4 multilayer core–shell UCNPs were synthesized for the first time, the NaYF4:Yb,Er core nanocrystals being about 16.3 nm in size, while the diameter of the NaYF4:Yb,Er@NaGdF4 core–shell nanocrystals with a 4-layer NaGdF4 shell increased to ≈30 nm. With shell growth, the upconversion optical properties were optimized and improved under the same excitation power. By doping luminescent centers into the shells, core–shell nanostructures can realize up-/down-conversion dual-mode luminescence.

The prior three methods are generally used to prepare hydrophobic core–shell upconversion nanostructures; our group synthesized hydrophilic lanthanide-doped core–shell UCNPs by microwave-assisted polyol processes.

SiO2 and TiO2 are the most common inorganic shell materials. The reverse microemulsion method is suitable for coating a SiO2 shell on oleate- or OM-capped UCNPs, and numerous functional groups, for example N-hydroxysuccinimide ester groups, can be encapsulated in SiO2 shells or connected to their surface for further applications. Mesoporous silica (mSiO2) has also attracted massive attention owing to its large surface area and tunable pore size; nanocomposites combining mSiO2 and UCNPs are very promising for biological imaging, drug delivery, PDT, and chemical detection. For example, Shi and Bu synthesized azobenzene-modified UCNP@mSiO2 and loaded the anticancer drug doxorubicin (DOX) into the mesoporous silica. In another design, a thin SiO2 shell was first coated on hydrophobic UCNPs via the reverse microemulsion method, and another dense SiO2 shell was then grown via a water-phase regrowth method; hot-water etching, with PVP as the protecting agent, was conducted to obtain upconversion core/hollow porous silica shell nanostructures (UCSNs), which are very attractive for drug delivery. TiO2 shells are also coated on the surface of UCNPs for particular applications such as photocatalysis, for instance by growing a TiO2 layer on UCNPs after modification with a surfactant layer; the resulting NaYF4:Yb,Tm@TiO2 nanocomposites showed obvious photocatalytic activity under NIR light. Lin coated TiO2 on UCNPs for NIR-light-triggered PDT, and Zhao et al.
designed novel UCNP@SiO2@TiO2 nanocomposites for high-performance dye-sensitized solar cells. The shells of nonepitaxial core–shell structures, whose crystal lattices mismatch those of the cores, are usually inorganic materials or noble metals: Au-based NaYF4:Yb,Tm hybrid nanostructures have been prepared via a solution method, and NaYF4:Yb,Er@Ag core/shell nanocomposites were demonstrated to be a promising combined upconversion imaging and PTT agent.

5
In recent years, the development of rare-earth-doped upconversion nanomaterials for various applications, ranging from the biomedical to the electro-optic field, and especially bioapplications, has been a research focus, in addition to the study of their light emission mechanisms, synthesis routes, and optical properties. First of all, the safety of upconversion nanostructures is a common concern: not only bioimaging and therapy but also some detection schemes require upconversion nanostructures with biocompatibility and low biotoxicity. Therefore, at the beginning of this chapter, the biosafety of upconversion nanostructures is identified as an issue to consider, although it has been reviewed by Capobianco et al. and Li et al.

5.1
As previously mentioned, UCNPs, with their large anti-Stokes shift, weak background interference, and absence of photobleaching, are suitable for detecting target species such as pH, various di- and trivalent metal ions, CO2, and H2S.

5.1.1
Organic dyes such as the chromophoric Ru(II) complex (N719) and Ir(III) complexes (e.g., Ir-9), which have an absorption band well matched with the upconversion emission, can act as the energy acceptor in an upconversion turn-on probe (Figure 10a). An iridium(III) complex (Ir1) was combined with NaYF4:20%Yb,1.6%Er,0.4%Tm UCNPs for the detection and bioimaging of the cyanide anion: CN− anions change the absorption band of Ir1, so when CN− anions are added to the LRET system, the energy transfer is blocked and the upconversion emission is recovered. By comparing the ratios of UCL in the absence and presence of CN− anions, a low detection limit of 0.18 μM CN− was obtained. The same group also designed an LRET nanoprobe combining hCy7 (an organic dye) and UCNPs to detect methylmercury (MeHg+), which can cause language and memory impairments. The UCNPs show emission bands centered at 800 nm (NIR), 540 nm (green), and 650 nm (red). The hCy7-UCNPs emit green and NIR light in the absence of MeHg+; upon meeting MeHg+, hCy7 is converted into hCy7′, which exhibits an NIR absorption band (centered at 800 nm) rather than a red absorption band (centered at 660 nm). By detecting the ratio of the upconversion emission at 660 nm to that at 800 nm, MeHg+ can be monitored. Vetrone et al. developed a nanothermometer based on poly(N-isopropylacrylamide) (pNIPAM)-modified NaGdF4:Yb,Er UCNPs combined with an organic dye (FluoProbe532A) to detect temperature at the subcellular level.

5.1.2
Inorganic nanomaterials can likewise serve as acceptors. One probe used NaYF4:Yb,Tm@NaYF4 UCNPs as the energy donor and MnO2 nanosheets as the energy acceptor, the MnO2 nanosheets being deposited on the surface of the UCNPs by reduction of a potassium permanganate solution. The UCL of the MnO2-nanosheet-modified UCNPs was quenched owing to LRET between the Tm3+ ions and MnO2. When facing GSH, however, the MnO2 nanosheets are reduced to Mn2+ ions; the MnO2-induced UCL quenching is inhibited, the LRET is terminated, and the UCL is thereby recovered. The recovered upconversion emission is a function of the GSH concentration, and the authors further monitored GSH levels in living cells.
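Readouts from such turn-on probes are generally converted to concentrations through a calibration curve, with the detection limit often estimated by the common 3σ/slope rule. A generic sketch with synthetic numbers (not data from the works above):

```python
import numpy as np

# Synthetic calibration: recovered UCL signal versus analyte concentration (uM).
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
signal = np.array([0.02, 0.26, 0.51, 1.01, 2.03, 3.98])

slope, intercept = np.polyfit(conc, signal, 1)

# 3-sigma detection limit, using the standard deviation of repeated blank
# measurements (a made-up value here).
sigma_blank = 0.02
lod = 3 * sigma_blank / slope
print(f"sensitivity = {slope:.3f} per uM, LOD ~ {lod:.2f} uM")
```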
Liu's group reported an LRET probe for biothiols that uses few-atom silver nanoclusters (Ag NCs) as the energy acceptor of the UCNPs (Figure 10b). If the absorption bands of inorganic nanoparticles match the upconversion emission bands of the UCNPs and the distance is sufficiently close, the LRET process occurs. NaYF4 UCNPs have also been applied in an assay for the detection of β-hCG, an important disease marker (Figure 10c).

5.2
Bioimaging technology has attracted much attention given its power of visualization. Fluorescence imaging, MRI, and CT are the most common bioimaging techniques. We first consider fluorescence bioimaging based on UCNPs. Compared with traditional fluorescent probes, such as semiconductor quantum dots or organic dyes, UCNPs, with advantages such as weak background autofluorescence, deep tissue penetration, low photobleaching, and large anti-Stokes shifts, are a good fluorescence agent for in vitro or in vivo imaging. Autofluorescence is eliminated because biological samples have no upconversion emission under NIR excitation. For bioimaging, it is optimal that the excitation and emission wavelengths lie in the NIR spectral range (700–1100 nm) and the red region (600–700 nm), which together are termed the "optical window" of biological tissues. UCNPs containing Er3+ ions have two emission bands centered at 540 and 650 nm, respectively, so single-band red upconversion light (650 nm) is preferred for bioimaging. Tm3+-doped UCNPs, with an upconversion emission peak centered at 800 nm under 980 nm excitation, have both excitation and emission bands located in the optical window and provide a high penetration depth. Prasad et al. demonstrated high-contrast UCL imaging of deep tissues (UCL was imaged through 3.2 cm of pork tissue) based on α-NaYbF4:Yb,Tm@CaF2 core–shell nanoparticles. To reduce tissue heating, Nd3+ was introduced into upconversion nanostructures as a sensitizer: Yan's group demonstrated that using a shorter-wavelength excitation band centered at 808 nm rather than 980 nm can greatly minimize the tissue-overheating effect. NaYF4:Yb,Tm UCNPs have been used to image C. elegans, the worms being fed a mixture of B-growth media and NaYF4:Yb,Tm UCNPs, and UCNPs have been employed for UCL bioimaging in a small black mouse (Figure 11).

Lanthanide dopants in UCNPs are also of importance for certain other imaging techniques, making this kind of UCNP an excellent multimodal imaging agent. Gd3+ ions, with seven unpaired electrons in the ground state, show a large paramagnetic moment and are commonly used in T1-weighted MRI contrast agents. In Gd3+-based host materials, such as NaGdF4, the Gd3+ ions can exist in the host matrix itself or in a shell coated on the core nanoparticles, and Gd3+ ions can also be incorporated into the host matrix as a dopant. Li and co-workers reported NaYbF4:Gd,Yb,Er nanophosphors with suitable magnetic properties (r1 = 0.41 s−1 mM−1) for both MRI and upconversion imaging of Kunming mice. Nanostructures can also be endowed with Gd3+ ions by modification with gadopentetic acid (Gd-DTPA). For example, Li's group obtained a core–shell UCNP-based nanostructure with NaLuF4:Yb3+,Tm3+ as the core, SiO2 as the shell, and the Gd complex Gd-DTPA as the capping surfactant.
These nanostructures had excellent UCL, a high r1 value of 6.35 s−1 mM−1, and strong X-ray attenuation, displayed relatively low biotoxicity, and were suitable for upconversion/MRI/CT tri-modal bioimaging. Magnetic species containing, for example, Fe3+ or Co3+ ions can also be combined with UCNPs as T2-weighted MRI contrast agents. NaYF4:Yb,Tm@FexOy nanostructures were constructed that exhibit excellent NIR-to-NIR UCL and a saturation magnetization of ≈12 emu g−1, and in vivo T2-weighted MRI/upconversion bioimaging of the lymphatic system was performed with them; the design might be of great help to clinical lymph node study and diagnosis.

To achieve temporal and spatial sensitivity and accuracy of diagnosis, multimodal bioimaging was developed. Multimodal bioimaging based on UCNPs, which combines UCL imaging with other imaging technologies such as MRI and CT, can make up for the shortcomings of the individual modalities and exploit their respective advantages, and it therefore contributes greatly to further clinical treatment. In addition to UCL, specific radionuclides extend UCNPs to nuclear imaging. 18F has been the most commonly used radionuclide for PET bioimaging; PET, which produces 3D images, is used for biodistribution investigations. Li et al. fabricated 18F-labeled magnetic upconversion nanophosphors for in vivo upconversion/MRI/PET bioimaging. Besides 18F, 153Sm, with a half-life of 46.3 h, is commonly incorporated into upconversion nanostructures as a radionuclide for SPECT imaging: Li's group reported the synthesis of NaLuF4:153Sm,Yb,Tm nanoparticles, which were utilized for in vivo SPECT imaging with high sensitivity, and the ex vivo biodistribution of the nanoparticles was easily quantified by this method.

The lanthanides, especially Lu, having higher atomic numbers than iodine, show excellent X-ray attenuation, which makes them an optimal option as CT contrast agents; Au and other high-atomic-number materials can serve similarly. One reported nanostructure used NaLuF4:Yb,Tm as the core and a 4 nm 153Sm3+-doped NaGdF4 shell; in this nanostructure, Yb and Tm, Lu, Gd, and 153Sm were responsible for UCL, CT, MRI, and SPECT imaging, respectively. The combination of these four imaging modalities can provide detailed information, which shows its value in tumor angiogenesis imaging. In another design, porphyrin–phospholipid (PoP) was combined with NaYF4:Tm@NaYF4 UCNPs with a particle diameter below 50 nm; the two active imaging components, PoP and UCNPs, could be used in no fewer than six different imaging techniques, including fluorescence (FL), photoacoustic, Cerenkov luminescence (CL), PET, CT, and UCL imaging (Figure 12).

5.3
Compared with surgery, chemotherapy, and radiotherapy, PDT is a noninvasive cancer treatment in which PDT drugs (photosensitizers) are activated by UV, visible, or NIR light to generate cytotoxic reactive oxygen species (ROS) that kill the target cells. Organic molecules, such as methylene blue (MB), zinc(II) phthalocyanine (ZnPc), and Chlorin e6 (Ce6), and semiconductor nanomaterials are the most common photosensitizers for PDT. Because the low penetration ability of UV and visible light limits the development of PDT, utilizing the UV and visible light converted from NIR by UCNPs can overcome this limitation. In an early design, photosensitizer-loaded UCNPs were coated with a SiO2 layer via a water-in-oil reverse microemulsion technique. Furthermore, to improve the efficiency of photosensitizer loading, Zhang's group designed mesoporous-silica-coated UCNPs and incorporated ZnPc (the photosensitizer) into the mesoporous silica shell: under 980 nm NIR irradiation, the UCNPs convert NIR light to visible light, which further activates the ZnPc to produce reactive singlet oxygen for killing cancer cells, demonstrated with 978 nm light applied for 45 min. The photosensitizer can also be attached to the surface of UCNPs via covalent bonding: a NaYF4:Yb,Er@CaF2@SiO2-PS nanostructure was constructed for PDT in vitro, in which the photosensitizer (PS) was covalently grafted into the mesoporous channels of the mesoporous silica. To raise the 1O2 production efficiency, the 660 nm upconversion emission of the NaYF4:Yb,Er UCNPs was enhanced by doping with 25% Yb3+, and the efficiency of energy transfer was improved because the energy transfer distance was shortened by the covalent assembly.
A platform with a liver tumor inhibitory ratio of ≈80.1% under 980 nm irradiation for 15 min has also been designed by Shi's group (Figure 13a,b).

5.3.2
Although chemotherapy is invasive, it is still an effective method of disease treatment. Chemotherapy based on UCNPs includes two aspects: imaging-guided chemotherapy and phototrigger-induced chemotherapy. Imaging-guided chemotherapy indirectly observes and monitors the extent of drug release by using the UCL of the UCNPs. SiO2-, mesoporous SiO2-, or polymer-coated UCNPs with porous or hollow structures are good drug and siRNA carriers for chemotherapy; Yb/Tm-doped UCNPs, for example, have been used as drug carriers. One UCNP@mSiO2@DOX-ZnO nanostructure loaded DOX into the mesoporous SiO2, with ZnO as a gatekeeper blocking the mesopores of the SiO2 against ineffective drug leakage. Because ZnO dissolves in acidic solution, the drug could be released effectively at the tumor site with very low side effects, and multimodal bioimaging including UCL, CT, and MRI provided detailed and exact information on the drug release; higher HeLa cell death was demonstrated for the UCNP@mSiO2@DOX-ZnO nanostructures than for free DOX (Figure 13c,d). Phototrigger-induced chemotherapy based on UCNPs utilizes the visible or ultraviolet light converted from NIR by the UCNPs to control drug release, making the UCNPs a switch for drug release. For example, Shi's group reported the synthesis of hollow mesoporous-silica-coated NaYF4:Yb/Tm@NaGdF4 (UCNP@hmSiO2) nanostructures with the mesoporous SiO2 as the drug carrier.

5.3.3
UCNPs, Fe3O4 nanoparticles, and Au nanoparticles have been combined via layer-by-layer self-assembly for multifunctional bioimaging and PTT. GO and CuxS have also been used as photothermal agents for tumor therapy: Zhang et al. reported the synthesis of GO-covalently-grafted UCNPs with ZnPc loaded on the GO, which acts as a theranostic platform for UCL bioimaging and PTT/PDT of cancer. In many cases, these three treatments are used together. Y2O3:Yb/Er−CuxS multifunctional nanostructures into which DOX can be loaded achieve simultaneous bioimaging and chemotherapy/PTT, with the CuxS acting as the photothermal component, the DOX as the chemotherapeutic drug, and the UCL providing the bioimaging. In another nanostructure, Au25(SR)18 clusters 2.5 nm in size produce PDT and PTT effects by receiving energy from the UCNPs, while the pH/temperature-responsive P(NIPAm-MAA) controls the DOX release; in vivo anticancer therapy demonstrated that this design can markedly improve the therapeutic efficacy.

5.4
In addition to the applications demonstrated above, upconversion nanostructures can also be used for solar energy harvesting, anticounterfeiting, and displays. TiO2 is a semiconductor that is frequently used for solar energy harvesting. However, with its high band gap (3.2 eV), TiO2 can only absorb UV light, and most of the solar energy is wasted.
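The UV-only absorption follows directly from the band gap; converting 3.2 eV to a cutoff wavelength shows that only light below roughly 390 nm, a small fraction of the solar spectrum, can excite TiO2 directly:

```latex
\lambda_{\mathrm{cutoff}} = \frac{hc}{E_g}
\approx \frac{1240\ \mathrm{eV\,nm}}{3.2\ \mathrm{eV}}
\approx 388\ \mathrm{nm}
```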
Therefore, incorporating upconversion materials and TiO2 together into solar energy harvesting systems, where the TiO2 absorbs the visible or ultraviolet light converted from NIR by the upconversion material, can increase the utilization of solar energy. For anticounterfeiting, t-BOC-coated α-NaYF4:Yb/Er and α-NaYF4:Yb/Tm UCNPs have been demonstrated, NaYF4 upconversion microrods can act as multicolor barcodes, NaYF4:Yb/Tm UCNPs have been used to develop a new temporal-domain approach to multiplexing, and NaYF4:Yb,Er,Gd UCNPs have been shown to work in 3D displays.

6
In this review, we have surveyed recent advances in upconversion nanostructures in terms of mechanism, design and synthesis, and selected applications, providing a number of typical examples to demonstrate the various aspects of upconversion nanostructures in detail. Although the development of upconversion nanostructures has grown substantially in recent years, critical challenges remain that impose great barriers to further optimizing upconversion nanostructures for commercialization. First, controllable preparation of upconversion nanostructures is highly desired, covering morphology and composition as well as surface properties and the capacity for chemical modification. As one kind of optical material, the main issue of upconversion nanostructures is that their upconversion fluorescence efficiency is far from sufficient, which largely limits their applications; an urgent task is to generate upconversion nanostructures with high fluorescence quantum yield. Because upconversion is a nonlinear optical phenomenon, moreover, a conventional quantum yield is not directly applicable for characterizing the fluorescence efficiency, so a new and sound way to evaluate upconversion fluorescence efficiency needs to be established. In addition to fluorescence efficiency, the color of the upconversion emission, such as single-band and full-color emission, should be a research focus. On the other hand, the dispersibility, chemical stability, biocompatibility, and long-term toxicity are still issues that should be well addressed during development. Last but not least, upconversion nanostructures should be combined with other functional materials to create new and promising structures and properties.

The purpose of the study was to establish a mathematical model for correlating the combination of ultrasonography and noncontrast helical computerized tomography (NCHCT) with the total energy of Holmium laser lithotripsy. In this study, from March 2013 to February 2014, 180 patients with a single urinary calculus were examined using ultrasonography and NCHCT before Holmium laser lithotripsy. The calculus location and size, acoustic shadowing (AS) level, twinkling artifact intensity (TAI), and CT value were all documented. The total energy of lithotripsy (TEL) and the calculus composition were also recorded postoperatively. Data were analyzed using Spearman's rank correlation coefficient with the SPSS 17.0 software package; multiple linear regression was also used for further statistical analysis. A significant difference in the TEL was observed between renal calculi and ureteral calculi (r = −0.565, P < 0.001), and there was a strong correlation between the calculus size and the TEL. The difference in the TEL between the calculi with and without AS was highly significant, and the CT value of the calculi was significantly correlated with the TEL.
A correlation between the TAI and TEL was also observed, and multiple linear regression analysis revealed that the location, size, and TAI of the calculi were related to the TEL, with location and size being statistically significant predictors. A mathematical model correlating the combination of ultrasonography and NCHCT with TEL was thus established; this model may provide a foundation to guide the use of energy in Holmium laser lithotripsy, as the TEL can be estimated from the location, size, and TAI of the calculus.

After being investigated and demonstrated to be safe and effective in 1960, a variety of laser techniques have been used in urological procedures. Because of its precision and strong decomposing power, the Holmium laser has become one of the most popular tools in urological procedures, including lithotripsy. One major advantage of the Holmium laser is that it can fragment urinary calculi efficiently regardless of their size, hardness, chemical composition, and physical consistency; accordingly, a high stone-free rate can be achieved. Thus, the Holmium laser has become the primary choice for fragmenting calculi.

The mechanism of Holmium laser lithotripsy has been investigated for many years. Many studies have suggested that it might result from the photoacoustic effect generated by pulsed lasers: when laser pulses are fired, cavitation bubbles are produced at the water–calculus interface, and the shock waves produced by the continuous rebound and bursting of the bubbles are transmitted to the calculus and can cause fragmentation. However, Dushinski and Lingeman have proposed that the mechanism of fragmentation should be explained by the photothermal effect on urinary calculi. Chan et al have also reported that the Holmium laser lithotripsy mechanism is primarily a photothermal effect, through which the laser raises the temperature of the irradiated area, causing a chemical breakdown of the calculus and weakening the physical strength of the irradiated area, whereas interstitial water and vapor expansion merely facilitate the drainage of the fragments within the area.

Several parameters that affect the fragmentation efficiency of Holmium laser lithotripsy have been assessed, including the pulse energy, pulse duration, frequency, and power setting. Sea et al have reported that the fragmentation rate does not increase when the frequency increases while the pulse energy is held constant. Chawla et al have shown that the fragmentation rate increases with the pulse energy but does not consistently increase with the pulse frequency. It has been reported that for high-density and fixed calculi, a higher pulse energy and a shorter pulse duration lead to increased fragmentation efficacy. In an in vitro study, Peter and Olivier observed that a low-frequency, high-pulse-energy setting is more efficient than a high-frequency, low-pulse-energy setting at the same power level; they also found linear correlations between the pulse energy and the fragment size, as well as the fissure width and depth.

Different energies are needed to fragment calculi with different characteristics, and the unguided use of energy in laser lithotripsy has disadvantages: insufficient energy cannot fragment calculi effectively, whereas excessive energy may lead to a higher incidence of complications and to overconsumption. Preoperative evaluation of the total energy of Holmium laser lithotripsy can predict the difficulty of the operation, estimate the operation time, and allow an emergency plan to be prepared in advance for high-risk patients. Therefore, guiding the use of energy plays a key role in Holmium laser lithotripsy and may improve the safety and efficiency of laser lithotripsy and increase the cost-effectiveness of the operation.
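For orientation, the pulse parameters discussed above are linked by simple arithmetic: the average power is the pulse energy times the pulse repetition rate (so, for example, 0.5 J at 20 Hz and 1.0 J at 10 Hz both deliver 10 W), and the TEL recorded in a procedure is the pulse energy accumulated over the total lasing time:

```latex
P_{\mathrm{avg}} = E_{\mathrm{pulse}} \times f,
\qquad
\mathrm{TEL} = \sum_i E_{\mathrm{pulse},i} \approx P_{\mathrm{avg}} \times t_{\mathrm{lasing}}
```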
Although the mechanism of Holmium laser lithotripsy, first introduced in the early 1990s, and the parameters governing fragmentation efficiency have been frequently investigated, no in vivo studies have reported a correlation between the combination of ultrasonography and noncontrast helical computerized tomography (NCHCT) and the total energy of lithotripsy (TEL), and in particular the potential role of ultrasonographic findings such as the twinkling artifact and acoustic shadowing (AS) in the assessment of TEL. The purpose of this study was to establish a mathematical model to correlate the combination of ultrasonography and NCHCT with the total energy of Holmium laser lithotripsy.

2
2.1
This study proposal was approved by the institutional review board. A total of 180 patients with a single renal or ureteral calculus were enrolled in the study from March 2013 to February 2014. All patients provided written informed consent prior to study enrollment.

2.2
All examinations were performed by a single ultrasonologist (with 7 years of experience) using a Philips iU22 ultrasound scanner with a C5–1 curvilinear array probe. All patients underwent ultrasonography the day before lithotripsy. No special preparation was required for patients with kidney or proximal/middle ureter calculi; patients with calculi in the distal ureter were requested to moderately fill their bladders. Scanning started with gray-scale imaging to measure the calculus size and to confirm the calculus location and the presence or absence of AS. The twinkling artifact was then recorded, and its intensity was quantified by calculating the number of color pixels in each artifact. In Adobe Photoshop CS5, the background gray pixels were deleted, the background was masked in black, and the images containing only the color pixels from twinkling were saved. ImageJ, an image editing application for medical image processing, was then used to select a region of interest that included all color pixels in the artifact and some background pixels and to draw a histogram of the region. The number of color pixels in the artifact was calculated by subtracting the number of background pixels from the number of all pixels in the region. All data were obtained by measuring 5 different images, and the average of the 5 measurements was used as the final result.

Philips DICOM Viewer R2.5L1-SP3 software was used for data processing. Five static images were randomly chosen from each cine loop replay for the statistical analysis. In gray-scale imaging, the calculus location was confirmed and classified as renal or ureteral. The maximum calculus size was measured and classified into 4 grades: smaller than 10 mm was grade 1, between 10 and 20 mm was grade 2, between 20 and 30 mm was grade 3, and larger than 30 mm was grade 4. AS was assessed and classified into 2 levels: a calculus without AS was level 0, and a calculus with AS was level 1. There are no uniform or strict criteria for evaluating the intensity of twinkling artifacts, which makes the evaluation subjective and operator-dependent; referring to Gao et al, we therefore graded the TAI by the number of color pixels in the artifact.
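The color-pixel counting described above can be sketched as follows, assuming the Doppler frame has already been background-masked (gray pixels removed and the background set to black, as was done in Photoshop in the study); the file names and ROI coordinates are hypothetical:

```python
import numpy as np
from PIL import Image

def count_color_pixels(masked_image_path: str, roi: tuple) -> int:
    """Count non-black (color) pixels inside a rectangular ROI.

    roi = (left, upper, right, lower) in pixel coordinates. This mirrors
    'all pixels in the region minus background pixels' from the Methods.
    """
    img = Image.open(masked_image_path).convert("RGB")
    region = np.asarray(img.crop(roi))
    background = np.all(region == 0, axis=-1)  # pure-black background pixels
    total = region.shape[0] * region.shape[1]
    return int(total - background.sum())

# Average over five frames, as in the study's measurement protocol.
frames = [f"twinkle_frame_{i}.png" for i in range(1, 6)]  # hypothetical files
roi = (100, 80, 260, 200)                                 # hypothetical ROI
tai = np.mean([count_color_pixels(f, roi) for f in frames])
```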
2.3
Immediately after undergoing ultrasonography, all patients were examined using noncontrast helical computerized tomography (NCHCT) with a Philips Brilliance iCT 256 scanner. The CT value of each calculus was generated as follows: Hounsfield units were measured for 3 different 0.02 cm² regions of interest on each calculus, and the average of the 3 measurements was used as the final CT value for that calculus. The CT values of the calculi were classified into 4 grades: below 400 HU was grade 1, between 400 and 800 HU was grade 2, between 800 and 1200 HU was grade 3, and over 1200 HU was grade 4.

2.4
All lithotripsies were performed by 2 surgical urologists (1 with 5 years of experience and the other with 7 years of experience) using a dual-wavelength Holmium laser therapeutic machine. The following laser settings were used: a 200 µm laser fiber at an output energy of 0.5/0.6 J and a pulse repetition rate of 20/35 Hz. During lithotripsy, fragments were targeted to obtain the smallest size possible; using the laser fiber as a frame of reference, all fragments >4 mm were removed with a basket catheter. The total energy of each completed laser lithotripsy was recorded. The criterion for a completed laser lithotripsy was that no residual fragment was left or, if any remained, the fragment size was ≤4 mm and spontaneous excretion of the fragment was expected. The efficacy of lithotripsy was evaluated 1 to 2 days after the procedure through ultrasonography and a plain film of the kidney–ureter–bladder.

2.5
After the completion of each laser lithotripsy, 1 fragment was randomly chosen for composition analysis through automatic infrared spectrophotometry with the Medical Automated FTIR Human Calculi Analysis System.

2.6
Statistical analyses were performed with the SPSS 17.0 software package. Spearman's rank correlation analysis was used to assess the correlations between the TEL and the calculus location, size, AS level, TAI, and CT value, as well as the calculus composition. Multiple linear regression was performed to formulate a mathematical model to estimate the TEL. A P-value of <0.05 was considered statistically significant.
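The analyses in 2.6 were run in SPSS; purely as an illustration of the same workflow, a Python sketch with hypothetical column names might look like the following (any coefficients it produces are illustrative, not the published model):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data set, one row per patient; column names are stand-ins
# for the variables described in the Methods.
df = pd.read_csv("lithotripsy.csv")  # location, as_level, size_mm, tai, ct_hu, tel_j

# Spearman's rank correlations of each predictor with the TEL.
print(df[["size_mm", "tai", "ct_hu", "tel_j"]].corr(method="spearman"))

# Multiple linear regression of TEL on location, AS level, size, TAI and CT value.
model = smf.ols(
    "tel_j ~ C(location) + C(as_level) + size_mm + tai + ct_hu", data=df
).fit()
print(model.summary())
```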
A correlation between the CT value of the calculus and the TEL was also observed, and the energy required for lithotripsy was proportional to the CT value of the calculus. There were no statistically significant differences between the TEL values for the different calculus compositions. To further eliminate confounding factors and to estimate the TEL in patients with different characteristics, a multiple linear regression model was built with the following parameters: calculus location, AS level, size of calculus, TAI, and CT value. The resulting equation shows that more energy is needed for lithotripsy in patients with larger calculi, renal calculi, or calculi with greater TAI.

4 Discussion. A mathematical model correlating the combination of ultrasonography and NCHCT with TEL was established in our study, and it may provide a foundation to guide the use of energy in Holmium laser lithotripsy. The TEL can be estimated from the location, size, and TAI of the calculus. By providing an initial estimate of the total energy required in Holmium laser lithotripsy, the combination of ultrasonography and NCHCT is likely to improve the safety and efficiency of laser lithotripsy, as well as the cost-effectiveness of the procedure, by avoiding overconsumption of pulse energy during the operation.

A previous study analyzed the differences in cumulative Holmium laser energy at different locations and concluded that renal calculi required more energy than ureteral calculi. We reached a similar conclusion: the required TEL differed significantly between the kidney and the ureter. Although the exact mechanism is still not clear, 1 conceivable explanation for the phenomenon is that hydronephrosis or hydrocalycosis around the renal calculus makes it more mobile than a ureteral calculus, decreasing the total amount of contact between the calculus and the tip of the laser fiber. Therefore, more pulses are likely to be fired inefficiently, and more energy may be wasted during the fragmentation of renal calculi.

In a previous retrospective study, Molina et al performed a systematic review of Holmium laser use for calculus lithotripsy. They reported that the required cumulative energy of pulses increased with the calculus size and mass. This is consistent with our finding that the correlation between the calculus size and the TEL is statistically significant.

The characteristics of calculi play an important part in the laser lithotripsy procedure. In terms of size, a larger calculus requires more energy for fragmentation. In terms of hardness, Blomley et al observed that the test material was much denser and harder than plaster of Paris and that it was more difficult to fragment in SWL than plaster of Paris. In an in vitro study, Wezel et al reported that, at all tested settings, the fragmentation efficiency was remarkably higher for soft calculi than for hard calculi. Kronenberg and Traxer compared the fragmentation efficiency among artificial calculi constructed from different materials. They found that the ablation rate was higher in calculi made of plaster of Paris than in those made from BegoStone Plus, and they also concluded that a hard calculus material was more difficult to ablate than a soft calculus material at the same laser lithotripter settings. Based on these studies, it is reasonable to conclude that the energy required for calculus fragmentation through laser lithotripsy is proportional to the density of the calculus.

Note that, for the first time, our study demonstrated a significant difference between the TEL at different AS levels.
This phenomenon has 2 explanations. First, acoustic impedance plays an important role in the intensity of AS: a larger difference in acoustic impedance between materials leads to a stronger reflection at the interface and a greater AS, and the higher the calculus density, the larger its acoustic impedance and the greater its AS. Second, a higher density calculus requires more energy from laser lithotripsy for fragmentation.

We also noticed a significant correlation between TAI and TEL that has not been previously reported. However, owing to the many factors that can influence the twinkling artifact, we could not clearly define the correlation. One possible explanation is the correlation between the calculus size and TAI. After investigating the characteristics of BegoStone, Liu and Zhong found a significant correlation between the twinkling artifact grade and the calculus size and deduced that a higher artifact grade should be expected for larger calculi. Considering that both TAI and TEL are correlated with calculus size, a correlation between TAI and TEL should also reasonably be expected.

In alignment with previous studies, a significant correlation between TEL and the CT values of the calculus was also observed in our study. In a previous study, Louvet divided patients into low and high attenuation coefficient groups based on the preoperative median absolute CT values for average calculus density and found that the fragmentation efficiency was significantly higher in the low attenuation coefficient group than in the high group. Molina et al have also reported that calculus size and hardness (measured by NCHCT) were important predictors of cumulative laser energy.

The role that calculus composition plays in laser lithotripsy efficiency remains less well defined. In a retrospective study, Ito et al observed that when the laser fiber and the energy level remained unchanged, the fragmentation efficiency of the Holmium laser varied among different calculus compositions; they attributed this phenomenon to the different temperature thresholds of the compositions. In an in vitro study, Teichman et al likewise examined how calculus composition influences Holmium laser fragmentation. Molina et al reported that all calcium calculi required less energy than uric acid calculi in a univariate analysis. However, we failed to find a statistically significant difference in the TEL among different compositions of calculi. Although the reason for this observation is not clear, the possible explanations are as follows. First, only 1 fragment was collected per calculus for the composition analysis, which might not accurately represent the calculus as a whole. Second, although we could determine the composition of a calculus, we were unable to define the proportion of each component because of the functional limits of the calculi analysis system that we used. Therefore, the statistical outcome of the correlation between the TEL and the calculus composition could be affected.

This study has several limitations. First, a patient's BMI might affect the TAI because of the change in depth to the calculus, which we failed to consider in our study. Second, we could not pre-set and optimize the machine settings to evaluate the TAI more sensitively and specifically. In addition, we performed the study using equipment from only 1 manufacturer, and it is possible that different results could be obtained with different machines.
The factors above may have caused deviations; thus, further investigation is needed.5A mathematical model correlating the combination of ultrasonography and NCHCT with TEL was established; this model may provide a foundation to guide the use of energy in Holmium laser lithotripsy. The TEL can be estimated by the location, size, and TAI of the calculus.This manuscript was edited for English language by American Journal Experts (AJE). The authors are particularly grateful to Yuanyuan Liu for her assistance with the statistical analysis. They also thank Mingjie Li and Xiangtao Wang for their technical advice about the Holmium laser lithotripsy."} {"text": "A cyber-physical attack in the industrial Internet of Things can cause severe damage to physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding command aggregators like programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different types of command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to wrong actuators. We describe three attack models to implement these modes with going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We have designed components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate different levels of impact from various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework. A large-scale industrial Internet of Things (IIoT) is deploHowever, with the wide openness of communication infrastructure which is used to improve efficiency, reliability, and sustainability of services such as command disaggregation attack. The attack may result in disruptions of physical process. In this paper, we focus on the process of launching the command disaggregation attack and its detection method. Previous studies, such as [When commands reach sub-controllers, malicious entities remotely attack sub-controllers to generate wrong executed commands called such as , introdu such as can not Driven by the above considerations, we depict two different command disaggregation attack modes: (1) false command sequence; and (2) wrong command allocation. The former refers to the situation that attackers delay the disaggregation of some commands to disorder its logic, thereby resulting in disruptions of physical process; the latter refers to the situation that disaggregated commands are issued to other than the expected or planned actuators, causing the failure of control objective or physical damages. We also describe three attack models to implement command disaggregation attacks in two kinds of modes. When attackers manipulate the disaggregation of commands, they simultaneously inject false feedback data to confuse security detectors to ensure that the attack goes undetected. 
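As a toy contrast of the two attack modes, the snippet below shows what each mode corrupts: the order of sub-commands versus their mapping to actuators. The identifiers are illustrative and are not taken from the paper's implementations.

```python
# Planned control logic (illustrative names only).
planned_sequence = ["open_valve_A", "start_pump_B"]      # correct order
planned_allocation = {"open_valve_A": "actuator_1",
                      "start_pump_B": "actuator_2"}

# Mode 1 - false command sequence: disaggregation of a command is
# delayed, so sub-commands execute in a disordered sequence.
attacked_sequence = ["start_pump_B", "open_valve_A"]

# Mode 2 - wrong command allocation: sub-commands are routed to
# unintended actuators while forged feedback data hides the change.
attacked_allocation = {"open_valve_A": "actuator_2",
                       "start_pump_B": "actuator_1"}
```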
To deal with the threats above, we provide a detection framework based on correlations among two-tier command sequences, which collects two-tier commands including those issued from the central controller and the sub-commands executed by actuators. We design components of the detection framework and explain the method of mining correlations among commands and using the correlations to detect attacks. Finally, two cases are studied to demonstrate the different levels of impact from various attack models and the effectiveness of proposed detection framework.The rest of the paper is organized as follows. We introduce the related work and summarize our contributions in In this section, we first survey the state of the art of attacks that cause disruptions of physical process. Then, we review the works about attack detection.Three methods, namely, false command injection, false data injection, and time-delay attacks, can be used to disrupt the physical system. In ,16,17, aAlthough many detection methods have been proposed to detect anomalies caused by attacks, they are not effective to identify false command disaggregation attacks. For example, in , some co(i)We introduce two kinds of command disaggregation attack modes, namely, false command sequence and wrong command allocation.(ii)We describe three attack models to implement command disaggregation attacks in two modes. Attacks based on the three models can not be detected by the existing detection methods.(iii)We provide an effective detection framework based on correlations among two-tier command sequences. Detecting command disaggregation attacks with false feedback data injection is still an unexplored topic and our method is the first to effectively identify command disaggregation attacks before a disruption occurs.After summarizing the related work in attack and in detection, we clarify our contributions asIn this section, we first introduce a simplified model of IIoT control system. Second, we unveil two kinds of command disaggregation attack modes, including wrong command allocation and false command sequence, wherein we depict attack models.t. After multi-tier sub-controllers disaggregate these commands, sub-commands kth kind of command. t.ith sensor. ith sensor at time instant l. jth kind of state and k, k, and under normal circumstances, k.kth actuator. N means the number of actuators. An actuator only executes a sub-command in unit time, and a sub-controller only disaggregates one command from the upper-tier sub-controller during once outflow of the central controller. ith actuator. The system state at time t, t when current commands are issued and the time of its last outflow.AC(t\u2212dt) , which cS. A disruption occurs when the system state is The system model is described using six-tuple:The model is based on the assumption that the information and physical systems have not yet been attacked, and all observed states and commands can be regarded as a representation of normal system behavior. From the above process, we can know that the accurate feedback data and commands are critical for the normal running of systems. When security mechanisms such as authentication and crypWe also use To describe the attack models, we define two operations about sets, \u201c\u2212\u201d and \u201c+\u201d. For any two sets In this section, we will disclose two kinds of attack modes and describe the corresponding attack models in details. 
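Before turning to the attack modes, the six-tuple system model introduced above can be sketched as follows. The field names are our shorthand for the paper's symbols, and the "−" and "+" operations on command sets are read here as plain set difference and union, since their formal definition is abridged in the text.

```python
from dataclasses import dataclass

# Minimal sketch of the six-tuple IIoT control model described above.
@dataclass
class IIoTSystem:
    commands: set          # commands issued by the central controller
    sub_commands: set      # sub-commands produced by disaggregation
    actuators: list        # N actuators; one sub-command per unit time
    sensors: list          # sources of feedback data
    state: dict            # physical system state S(t)
    disruptions: set       # states regarded as disruptions

# The "-" and "+" set operations, read as difference and union:
A, B = {"c1", "c2"}, {"c2", "c3"}
print(A - B)   # {'c1'}
print(A | B)   # {'c1', 'c2', 'c3'}
```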
During the implementation of the attack models, attackers usually inject false data into sensors or feed back false data to detectors to hide signs of attacks.When a command Next, we depict the attack models that implement the above two situations.(1) Attack model based on wrong command inner allocation (WCIA)\u2022 Information collectionAttackers first find a set of issued commands Equation .(4)ci\u2208C\u2022 False data injectionith sub-controllers, they also need to inject the same false feedback data to manipulate the corresponding (i + 1)th tier sub-controllers. The disaggregation of commands is influenced and the executed sub-commands are changed from When attackers discover that the current state is (2) Attack model based on wrong command outer allocation (WCOA)\u2022 Information collectionAttackers first find a set of issued commands Equation .(5)ci\u2208C\u2022 Command modificationWe use \u2022 False data injectionWhen disaggregated commands have been modified, attackers need to inject bad feedback data to the next-tier sub-controllers. Bad data informs the next-tier sub-controllers that the current state is When the sub-commands Under normal situations, if As shown in \u2022 Information collectionAttackers first find command sequence Equation .(6)ci\u2208C\u2022 Time-delay attackAttackers manipulate the sub-controllers to delay the disaggregation of \u2022 False data injectionCommands WCIA, WCOA, and FCS change executed commands during the process of disaggregation, meanwhile, inject false data to confuse detectors. The existing detection methods in The detection framework is in charge of collecting command sequences, mining correlations, and identifying anomalies. As shown in \u2022 Command Collectorith actuator. Data is then transferred to two other components, namely, correlation analyzer and exception detector.Command collector is responsible for collecting commands from IIoTs. Command collector gets commands from two sites, as shown in \u2022 Correlation AnalyzerCorrelation analyzer tries to discover whether correlations exist among commands and sub-commands. Correlation analyzer mines correlations by using the recently collected history data. Once in a while the analyzer will update the correlations in correlation database. We will discuss which correlations and how they are mined in the next subsection.\u2022 Correlation DatabaseCorrelation information is stored in the correlation database. Correlation information includes discovered correlations and the time and number of occurrences of commands and sub-commands. Correlation database provides the corresponding information when the correlation analyzer or exception detector requires.\u2022 Exception DetectorException detector examines anomalies of the input four-tuple based on correlation information. The exception detector directly utilizes correlations in database, instead of waiting for knowledge from the correlation analyzer, to identify anomalies. Therefore, the time that the detector spends in identifying anomalies is not related to correlation mining. The detector can provide the real-time result when a 4-tuple is input.We mainly mine two kinds of correlations including correlations between a command and sub-commands, and correlations between executed sub-commands.If executed sub-command Latter support ratio denotes the ratio of the number of occurrences that kth actuator. The value of At the beginning of correlation mining, there exist many 4-tuples Phase I: verified correlation selection. 
In this phase, the correlation analyzer only needs to find a command C when correlation mining between sub-command Phase II: correlation validation. In the second phase, the correlation analyzer judges whether there exists a correlation between We use Equation , the corEquation , the corThe two phases are executed repetitively until set k) and x(k) indicate the number of occurrences that ith actuator and the number of occurrences that kth outflow. If Equation is satisThe flowchart of correlation mining among sub-commands is given in The correlation analyzer can obtain Equation by applyAfter computing ehaviors . At the Lastly, we introduce the detection process of the exception detector.Exception detector identifies anomalies based on broken correlations. For a sub-command In this section, we investigate two cases about tank system and energy trading system in the smart grid to illustrate the impact of attacks and the effectiveness of our detection framework.A tank system ,32 with We only describe a sub-system to illustrate the control process. turning on the pump that outputs ingredient A at time 0 s (turning off the pump that outputs ingredient A at time M s (turning on the pump that outputs ingredient B at time M + 60 s (turning off the pump that outputs ingredient B at time opening the valve that outputs liquid C at time closing the valve that outputs liquid C at time A plan of producing The above neutralization process depicted is simulated in Java, where the central controller, actuators, and sub-controllers are designed as components by using Java Class. Every switch and sensor are seen as attributes of related actuators. When some attributes occur a change, the central controller issues new commands. Some executed sub-commands can cause the changes of the attributes. Different components communicate with each other by function call with parameters. The parameters include commands and feedback data. The central controller automatically keeps running and issues commands based on the users\u2019 input and the designed control process. During the operation of the system, values of sensors, sub-commands, and commands are written into different files per unit time. Moreover, every sub-controller component provides an interface for users. When users call the interface and input parameters, sub-controllers have been compromised and commands and feedback data can be modified.With the increasing proliferation of new energy, many users can become suppliers who sell energy to other users called consumers. Every supplier has an energy storage system that stores extra energy. When consumers need to buy energy, energy is routed to these consumers from suppliers based on energy routing schemes.A simplified model of energy trading system in the smart grid ,34 is shIn the model, there are 12 sub-commands and 6 sensors, which are shown in turning on the switch that outputs energy at time 0 s (turning on the switch that inputs energy at time 0 s (turning off the switch that outputs energy at time K s (turning off the switch that inputs energy at time K w energy, it will turn on the switch with the largest amount of energy until the output is equal to K w. If multiple users request power, the sub-controller will turn on other switches to output energy. The initial volume of every storage system is At the beginning of every circle, consumers sent their demands In this subsection, we introduce six attack cases based on WCOA, WCIA and FCS. 
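Before turning to the attack cases, the two-phase mining procedure and support-ratio test described above can be sketched as follows. The 4-tuple encoding and the validation threshold are illustrative assumptions, not the paper's Java implementation.

```python
from collections import Counter

def support_ratios(records):
    """records: iterable of (command, executed_sub_commands) pairs, one
    per outflow of the central controller. Returns, for every observed
    (command, sub_command) pair, the fraction of the command's outflows
    in which that sub-command was executed -- our reading of the
    support ratio validated in Phase II above."""
    cmd_count, pair_count = Counter(), Counter()
    for cmd, subs in records:
        cmd_count[cmd] += 1
        for s in subs:
            pair_count[(cmd, s)] += 1
    return {pair: n / cmd_count[pair[0]] for pair, n in pair_count.items()}

# Toy history; a pair is accepted as a correlation when its ratio
# exceeds a validation threshold (0.9 here is an assumption). The
# exception detector then flags any observed pair that breaks a mined
# correlation.
history = [("c1", {"s1", "s2"}), ("c1", {"s1"}), ("c2", {"s3"})]
ratios = support_ratios(history)
correlations = {p for p, r in ratios.items() if r >= 0.9}
print(ratios)         # ('c1','s1'): 1.0, ('c1','s2'): 0.5, ('c2','s3'): 1.0
print(correlations)   # {('c1', 's1'), ('c2', 's3')}
```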
Under normal circumstances of scenario 1, users randomly receive orders of goods including Six attack cases are described as follows:Attack case 1 in scenario 1: When the controller issues command Attack case 2 in scenario 1: At Attack case 3 in scenario 1: At Attack case 4 in scenario 2: When the controller issues command Attack case 5 in scenario 2: At Attack case 6 in scenario 2: At During the above attack processes, attackers also modify data of sensors to confuse the central controller and detectors, thereby resulting in sensory data same to those in The above six cases demonstrate that command disaggregation attacks can lead to disruptions of physical process and create great impact.We employed java to implement the detection framework described in We check whether the proposed detection framework can effectively identify six attack cases. Two kinds of correlations are obtained by analyzing data. In We also implement two other detection methods in ,35 to deTo better illustrate the performance of the proposed detection framework, we randomly launch attacks based on WCIA, WCOA, and FCA in scenario 1 and scenario 2. Every type of attack is launched many times at the different time. We find that attacks based on FCA and WCOA can be identified with 100% accuracy in two scenarios. Attacks based on WCIA in scenario 1 can be identified with 95% accuracy because some elaborately constructed attacks enable the correlation between two sub-commands not to be broken. An example will be described in The experiments demonstrate that the detection framework can effectively identify many command disaggregation attacks, and can find anomalies before disruptions of physical process occur.This section discusses further improvement measures for the defects of our detection framework.Difficulties of correlation mining. A large number of linear relationships exist among data of complex IIoTs [ex IIoTs , howeverex IIoTs to identThe futility of detecting elaborately constructed attack sequences. Experiments in In this study, we focus on the command disaggregation attack and its detection method. We describe three attack models to implement command disaggregation attacks in two kinds of modes. The examples of the tank system and energy trading system demonstrate that command disaggregation attacks in two modes can cause severe damage to physical process and an effective detection method is necessary. We also provide a novel framework to detect command disaggregation attacks. The framework utilizes the correlations between commands and sub-commands to identify anomalies. The two cases demonstrate that our detection framework can identify undetected command disaggregation attacks by the existing detection methods with high accuracy if there exist corresponding correlations among commands and sub-commands. Besides that, our method can identify anomalies before a fault occurs. In future, we will strengthen the detection framework to detect command disaggregation attacks in more complex IIoTs."} {"text": "Escherichia coli (ETEC) are bacterial pathogens that are frequently associated with diarrhoeal disease, and are a significant cause of mortality and morbidity worldwide. The Global Burden of Diseases, Injuries, and Risk Factors study 2016 (GBD 2016) is a systematic, scientific effort to quantify the morbidity and mortality due to over 300 causes of death and disability. 
We aimed to analyse the global burden of shigella and ETEC diarrhoea according to age, sex, geography, and year from 1990 to 2016.Shigella and enterotoxigenic We modelled shigella and ETEC-related mortality using a Bayesian hierarchical modelling platform that evaluates a wide range of covariates and model types on the basis of vital registration and verbal autopsy data. We used a compartmental meta-regression tool to model the incidence of shigella and ETEC, which enforces an association between incidence, prevalence, and remission on the basis of scientific literature, population representative surveys, and health-care data. We calculated 95% uncertainty intervals (UIs) for the point estimates.Shigella was the second leading cause of diarrhoeal mortality in 2016 among all ages, accounting for 212\u2008438 deaths (95% UI 136\u2008979\u2013326\u2008913) and about 13\u00b72% (9\u00b72\u201317\u00b74) of all diarrhoea deaths. Shigella was responsible for 63\u2008713 deaths (41\u2008191\u201393\u2008611) among children younger than 5 years and was frequently associated with diarrhoea across all adult age groups, increasing in elderly people, with broad geographical distribution. ETEC was the eighth leading cause of diarrhoea mortality in 2016 among all age groups, accounting for 51\u2008186 deaths (26\u2008757\u201383\u2008064) and about 3\u00b72% (1\u00b78\u20134\u00b77) of diarrhoea deaths. ETEC was responsible for about 4\u00b72% (2\u00b72\u20136\u00b78) of diarrhoea deaths in children younger than 5 years.The health burden of bacterial diarrhoeal pathogens is difficult to estimate. Despite existing prevention and treatment options, they remain a major cause of morbidity and mortality globally. Additional emphasis by public health officials is needed on a reduction in disease due to shigella and ETEC to reduce disease burden.Bill & Melinda Gates Foundation. Escherichia coli .13According to recent global disease burden estimates, diarrhoea accounts for more than 1 million deaths and about 4% of the total global disability-adjusted life-years (DALYS) per year across all age groups.ETEC is one of the first symptomatic enteric illnesses for many children, causing several hundred million cases of diarrhoea each year, mostly in young children.Evidence before this studyEscherichia coli (ETEC) diarrhoea include population representative surveys, scientific literature, and health-care utilisation data. We searched PubMed, with no language restrictions, for studies published between Jan 1, 1990, and Dec 31, 2017, with the following search string: (diarrhoea [title] OR diarrhoea [MeSH terms] OR diarrhoea [title] OR diarrhoea [MeSH terms] AND (shigell* [title/abstract] OR enterotoxigenic e. coli [title/abstract]) AND (aetiolog* [title/abstract] OR aetiology [MeSH Terms] OR cause [title/abstract] OR pathogen [title/abstract]) NOT (colitis [title/abstract] OR enterocolitis [title/abstract] OR inflammatory bowel [title/abstract] OR irritable [title/abstract] OR Crohn* [title/abstract] OR HIV [title] OR treatment [title] OR therapy [title]) NOT (appendicitis [title/abstract] OR esophag* [title/abstract] OR surger* [title/abstract] OR gastritis [title/abstract] OR liver [title/abstract] OR case report [title] OR case-report [title] OR therapy [title] OR treatment [title]) AND humans [Mesh]). The Maternal and Child Epidemiology Estimation group (MCEE) estimated 42\u2008000 deaths among children younger than 5 years due to ETEC and 28\u2008000 deaths due to shigella. 
The MCEE modelling approach was categorical, meaning that if a pathogen was present in a diarrhoeal stool sample, diarrhoea was attributed to that pathogen, and used conventional bacterial culture methods for diagnostic detection. The Global Burden of Diseases, Injuries, and Risk Factors (GBD) study 2016 used molecular diagnostics.Sources for this analysis of the global burden of shigella and enterotoxigenic Added value of this studyOur analysis uses the GBD study to estimate shigella and ETEC incidence, disability-adjusted life-years, and mortality across every country for each sex and all ages from 1990 to 2016. We estimated that shigella was responsible for about 210\u2008000 deaths among all ages, including about 63\u2008700 among children younger than 5 years, and that ETEC was responsible for about 51\u2008200 deaths among all ages and about 18\u2008700 deaths in children younger than 5 years. Our results challenge some previous estimates with regard to the relative and absolute magnitude of the health burden associated with diarrhoea caused by shigella and ETEC.Implications of all the available evidenceOur study calls for a widespread improvement in the quality and quantity of data, including improved surveillance systems and utilisation of standard reporting mechanisms and case definitions. Refined burden estimates for the acute and long-term burden of shigella and ETEC are needed to guide funders and public health officials to make evidence-based decisions for the alleviation of diarrhoeal diseases, with particular attention to the development of effective and attainable vaccines. Data on the burden of diarrhoeal diseases caused by shigella and ETEC will help public health officials to identify proper age appropriate vaccination schedules and target regions where the burden of these pathogens is substantial.Although shigellosis occurs worldwide, the greatest burden is among children in low-income countries. Repeated infection is not uncommon because of the multiple serotypes associated with illness, and the decrease in the incidence of disease with increasing age shows that immunity eventually develops.Both shigella and ETEC are important causes of diarrhoea and dysentery in people older than 5 years, with an estimated 100 million episodes occurring annually among those aged 5\u201314 years.38Shigella spp are antigenically diverse, encompassing two toxins and over 25 colonisation factors for ETEC, and 50 serotypes or subtypes for shigella, which makes the development of vaccines challenging.Shigella flexneri 2a, 6, 3a, and Shigella sonnei.45Shigella and ETEC vaccine candidates are currently in various phases of research and development.To inform vaccine development priorities, the disease burdens of shigella and ETEC need to be characterised at regional and national levels. 
Co-infecting pathogens, asymptomatic infections, antigenic diversity, and variability of diagnostic methods can complicate the determination of diarrhoeal aetiology for children in LMICs.Detailed methods on the Global Burden of Disease (GBD) Study and on diarrhoea estimation in GBD have already been published.Diarrhoea-related mortality was modelled in the Cause of Death Ensemble model (CODEm) platform.Diarrhoea-related morbidity, including incidence and prevalence, was modelled in DisMod-MR (version 2.1).The cause of diarrhoea is estimated separately from mortality and morbidity.Shigella spp and for both heat stable (ST)-ETEC and heat labile (LT)-ETEC.Diarrhoea aetiologies are based on molecular diagnostic case definitions. We did a systematic reanalysis of the GEMS samples using real-time PCR. Our modelling strategy requires that the continuous real-time PCR test results be dichotomised into positive and negative results. To do this, we identified the lowest cycle threshold at which the diagnostic accuracy, defined as the ability to discriminate between cases and controls, was maximised. We fitted a Loess curve to each cycle threshold distribution of aetiology and the proportion of diarrhoea cases that were correctly identified .estA or eltB E coli genes in the primary GEMS laboratory results and the lower cycle threshold score for ST (both STh and STp genes) or LT gene targets in the real-time PCR reanalysis and the highest rates of mortality due to shigella in this age group were in sub-Saharan Africa, where mortality rates were greater than 10 per 100\u2008000 people per year in northern, western, eastern, and central regions . Under-5ETEC was the eighth leading cause of diarrhoea mortality in 2016 among all age groups globally, accounting for an estimated 51\u2008186 deaths and the global mortality rate among children younger than 5 years ranged from less than 0\u00b71 per 100\u2008000 in many regions to 8\u00b78 per 100\u2008000 (4\u00b76\u201314\u00b73) in eastern sub-Saharan Africa . The greThe burden of shigella and ETEC varied by geographical region . DiffereThe use of bacterial culture to detect shigella and ETEC in diarrhoeal stool samples is likely to miss a substantial proportion of infections.r2 \u22120\u00b733, 95% UI \u22120\u00b742 to \u22120\u00b724). Shigella is strongly correlated with a highly negative slope, indicating that these causes, shigella especially, are focused in low-income countries and enterotoxin production within the small intestine. ETEC produce ST or LT enterotoxins, or both, which stimulate the release of fluid and electrolytes from the intestinal epithelium, resulting in watery diarrhoea.Campylobacter spp, and adenovirus.Diarrhoea early in childhood can impede the absorption of nutrients in the gut, leading to malnutrition.Shigella affects people of all ages and is a predominant cause of diarrhoea mortality throughout adolescence and adulthood. Our analysis shows that shigella was the leading cause of death among adults older than 70 years. Although routine immunisation programmes are an attractive option for the prevention of shigella, our results suggest that such programmes might miss a substantial burden of shigella mortality in this age group.The long-term solution for disease reduction is an integrated approach that includes improved water quality, sanitation and handwashing, optimised nutrition, better access to medical care, and vaccines. 
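The cycle-threshold dichotomisation step described earlier in the Methods can be sketched as a scan over candidate cutoffs. Balanced accuracy is used below as a simple stand-in for the Loess-based accuracy criterion in the text, and the variable names are hypothetical; this is not the GBD analysis code.

```python
import numpy as np

def best_ct_cutoff(ct_cases, ct_controls, lo=15.0, hi=35.0, step=0.5):
    """Return the cycle-threshold (Ct) cutoff that best separates
    diarrhoea cases from controls. A sample is called positive when its
    Ct is below the cutoff (earlier amplification = more target)."""
    ct_cases = np.asarray(ct_cases)
    ct_controls = np.asarray(ct_controls)
    def balanced_accuracy(cut):
        sensitivity = np.mean(ct_cases < cut)
        specificity = np.mean(ct_controls >= cut)
        return (sensitivity + specificity) / 2
    cutoffs = np.arange(lo, hi, step)
    return max(cutoffs, key=balanced_accuracy)
```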
A combined shigella\u2013ETEC vaccine is also being investigated, partly because both pathogens affect similar geographical settings and populations.30Our results differ from previous estimates in some respects. The Child Health Epidemiology Research Group\u2014now called the Maternal and Child Epidemiology Estimation group (MCEE)A systematic reanalysisOur findings have several limitations. First, our estimates of mortality, morbidity, and aetiological attribution for shigella and ETEC are restricted by availability of data, particularly data sparsity in regions of the world with a high diarrhoea burden. In addition, scarce data are available for the neonatal age group. Although adjustment for factors such as maternal immunity might help to improve our model estimates, quantification of the effect of maternal immunity is restricted by the availability of data. We account for this limitation by including UIs with each of our estimates, and our modelling approach allows us to make inferences for places and times with little data, based on more reliable estimates from other periods and regions to generate the best possible estimates. There is also a general scarcity of data on diarrhoea among populations older than 5 years and, although we model causes for diarrhoea in these age groups, the ORs from the oldest age group in GEMS\u2014under 5 years old\u2014are assumed to be representative in older ages. Second, this analysis only accounts for the acute phase of diarrhoea in our YLD estimates for the two pathogens. Consequently, our DALY estimates severely underestimate diarrhoea-associated long-term sequaelae, such as stunting and cognitive impairment.In summary, our findings give an insight into the global burden of shigella and ETEC diarrhoea globally, spanning over 25 years for both sexes and all ages. Such refined burden estimates for the mortality, morbidity, and long-term effects of shigella and ETEC are needed to guide funders, public health officials, and policy makers. Refined burden estimates will help these individuals and organisations to make evidence-based decisions for the allocation of resources and the promotion of vaccine development and other effective strategies to reduce the unacceptable burden of diarrhoea worldwide.list of all GBD 2015 data sources for each country see http://ghdx.healthdata.org/gbd-2015/data-input-sourcesFor a online results see https://vizhub.healthdata.org/gbdcompare/ and https://ghdx.healthdata.org/gbd-2016/For code see http://ghdx.healthdata.org/global-burdendisease-study-2016-gbd-2016-causes-death-3For the"} {"text": "OF) conditions and, a high, by using greenhouses (GH). For OF, data belong to five municipalities of the Guanent\u00e1 province (Santander department), while for GH, data belong to five municipalities of the Alto Ricaurte province (Boyac\u00e1 department). The data presented here includes information on soil parental materials and climate variables (averages\u00a0\u00b1\u00a0standard deviations) relevant from the agricultural point of view, which were calculated from historical climate series. Soils natural fertility data, obtained by sampling the production areas, are also presented. After filtering the data, 67 samples were obtained for OF and 70 for the GH. For GH, a dataset with the results of 38 soil samples taken inside greenhouses were paired with the results of samples taken outside these greenhouses in uncropped areas. 
In the case of these soil analyses, the data correspond to tables with the results reported by the laboratory for both, chemical and physical variables, for each location in which soil samples were taken. In this work, the main dataset is one that contains the inputs of fertilizers and water, and the corresponding yields of tomato production cycles managed by local growers. This information was collected through two data collection tools: surveys (SVY) to growers about these aspects in their last production cycle, and through detailed follow-ups of selected production cycles (FWU). For the OF, we collected data from 71 cycles through the surveys and 22 through the follow-ups, while for the GH, information from 138 to 38 tomato cycles was collected through surveys and follow-ups, respectively. A table with the results aggregated by tomato cycle is attached.Datasets presented here were employed in the main work \u201cUnderstanding the heterogeneity of smallholder production systems in the Andean tropics \u2013 The case of Colombian tomato growers\u201d Gil, et al., 2019. In this region, tomato crop is developed under two technological levels: low, carried out under open field ( The distance between the two production zones is 115 km. The landscape of the Guanent\u00e1 province is formed by mountains and hills crossed by rivers, which in some sectors form small riverbanks . The solar radiation was estimated from the hours of bright sunshine (hours day\u22121) since this variable was not included in the original data. The incoming solar radiation was estimated from extraterrestrial solar radiation and relative sunshine hours, following the equation proposed by Angstrom \u22122 day\u22121); \u22122 day\u22121); \u22121); DL is the day length (hours). The coefficients For each production area, the elevation profiles were constructed with data extracted from Google Earth and supplemented with information about the soil parental materials obtained from previous studies 2.23), ammonium (N\u2014NH4), phosphorus (P) and potassium (K) contents, pH, electrical conductivity (EC), soil organic carbon (SOC); and physical properties such as clay, silt and sand contents (%). Exchangeable N\u2014NH4 and N\u2014NO3 were determined by extraction with KCl, and the solution was analyzed as described by Bremner and Keeney 2O5 and K2O by multiplying them by 2.292 and 1.205, respectively.Initially, in each production area, 75 soils samples were collected at 30 cm depth on fallow plots between May and July 2015. Sampling spots were determined by a non-aligned random sampling procedure and adjusted once on the field to sample only uncropped soils. Based on the geographic coordinates of the sampling points, we determined the altitude and slope using a 30 m digital elevation model (DEM). Soil samples were processed at a certificated soils laboratory, and the analysis included chemical properties such as nitrate . The sampling sites were randomly selected based on satellite images on which GH locations were clearly identified. Samples were analysed including the same variables used to describe the soil natural fertility in a certified soils laboratory and following the aforementioned methods. We took 38 pairs of soil samples during June 2013.As part of the characterization, we determined the effect of 2.4SVY) and a direct follow-up observation procedure (FWU). 
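The solar-radiation estimate described above can be sketched as follows. The Angström(-Prescott) form is standard; the coefficient values a_s = 0.25 and b_s = 0.50 are the common FAO-56 defaults and are an assumption here, not the values calibrated in the original study.

```python
def incoming_solar_radiation(n_sun, day_length, ra, a_s=0.25, b_s=0.50):
    """Angstrom(-Prescott) estimate of incoming solar radiation:
        Rs = (a_s + b_s * n/N) * Ra
    n_sun: bright sunshine hours per day (n)
    day_length: maximum possible sunshine hours, i.e. the day length DL (N)
    ra: extraterrestrial radiation (MJ m-2 day-1)
    a_s, b_s: regression coefficients (FAO-56 defaults assumed here)."""
    return (a_s + b_s * n_sun / day_length) * ra

print(incoming_solar_radiation(6.5, 12.1, 33.0))  # ~17.1 MJ m-2 day-1
```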
SVY consisted on a questionnaire of closed-ended questions about technical aspects related to the last tomato growing cycle such as: cropped area, plant density, cycle length, type and amount of fertilizers applied, crop management practices, water input by irrigation and yield. Questions were redacted by the research team and subsequently were tested through a simulacrum on local growers to improve its comprehension. Once on the field, previously trained undergraduate students conducted the interviews. Between 2009 and 2010, a total of 80 and 174 surveys were carried out to randomly selected smallholder of the OF and GH technological levels, respectively.In the present work, two data collection tools were employed: surveys for the OF and 30 months (from September 2010 to March 2013) for the GH. From both data collection tools, the data recorded were aggregated in order to obtain the total inputs (e.g. fertilizers) employed for the tomato production along with the total yield achieved. For commercial fertilizers, the nutrient elements contribution was obtained from the official information showed in the label. In the case of organic fertilizers, samples were taken and analyzed to determine the concentration of nitrogen, phosphorus and potassium.In GH fertilization strategies on the soils and the data about the management practices in relation to the fertilization are georeferenced. In each table, samples have associated the coordinates in decimal degrees taken in the official geodesic datum for Colombia (MAGNA-SIRGAS).The datasets of soil natural fertility, the effect of the"} {"text": "Aspergillus flavus was isolated from the haemocoel of worker bees. Observations on the metabolomic profile of this strain showed kojic acid to be the dominant product in cultures on Czapek-Dox broth. However, an accurate review of papers documenting secondary metabolite production in A. flavus also showed that an isomer of kojic acid, identified as 5-(hydroxymethyl)-furan-3-carboxylic acid and named flufuran is reported from this species. The spectroscopic data of kojic acid were almost identical to those reported in the literature for flufuran. This motivated a comparative study of commercial kojic acid and 5-(hydroxymethyl)-furan-3-carboxylic acid, highlighting some differences, for example in the 13C-NMR and UV spectra for the two compounds, indicating that misidentification of the kojic acid as 5-(hydroxymethyl)-furan-3-carboxylic acid has occurred in the past.In the course of investigations on the complex phenomenon of bee decline, Fungi have evolved the capability to produce a great number of secondary metabolites involved in the improvement of their ecological fitness, and many of them play important biological roles as virulence factors, chemical defense agents, developmental regulators, insect attractants, and chemical signals for communication with other organisms. On these properties is founded the pharmacological exploitation of many products as antibiotic, antiviral, antitumor, antihypercholesterolemic, and immunosuppressant agents ,2,3,4,5.Aspergillus, well-known for its ubiquity and cosmopolitan distribution is hydrogenionic concentration, CH is the total acid concentration, CL is the compound concentration and Kw is ionic product . Experimental data were processed with Hyperquad softare +, 125 [M-OH]+, 113 [M-CHO]+, 97 [M-COOH]+, 69 [M-CH2OH-COO]+. Kojic acid (2). 
UV \u03bbmax nm (log \u03b5): (H2O) 240 (3.39); (MeOH) 240 (3.38); (pH 2.5) 243 (2.97); (pH 3.0) 243 (3.07); (pH 4.0) 243 (3.07); (pH 5.0) 243 (3.12). MALDI TOF/MS: m/z 143 [M+H]+, 125 [M-OH]+, 113 [M-CHO]+, 97 [M-COOH]+.5-(Hydroxymethyl)furan-3-carboxylic acid ] were dissolved in MeOH (1.5 mL); an ethereal solution of CH2N2 was slowly added until a yellow color became persistent. The reaction mixtures were stirred at room temperature for 4 h. The solvent was evaporated under a N2 stream at room temperature. Residues of each reaction were analyzed by TLC on silica gel; 6, was evidenced at Rf 0.37 by eluting with EtOAc-MeOH (9:1), while Rf 0.54 and 0.82 corresponded to 7 and 9 respectively, as eluted with CHCl3-i-PrOH (92:8).Fifteen milligrams of samples , dissolved in pyridine (30 \u03bcL), were converted into the corresponding acetyl derivatives by acetylation with Ac2O (30 \u03bcL) at room temperature overnight. The reaction was stopped by addition of MeOH, and the azeotrope formed by addition of benzene was evaporated in a N2 stream at 40 \u00b0C. Residues of each reaction were analyzed by TLC on silica gel; 5 was evidenced at Rf 0.44 by eluting with CHCl3-i-PrOH (95:5), while Rf 0.54 and 0.82 corresponded to 7 and 9 respectively, as eluted with CHCl3-i-PrOH (92:8).Ten mg of samples [KA, 5-(hydroxymethyl)furan-3-carboxylic acid, 5-"} {"text": "Upbringing in a high environmental risk locale increases the risk for schizophrenia by 122%. Individuals living in a high gene-by-environmental risk locale have a 78% increased risk compared to those who have the same genetic liability but live in a low-risk locale. Effects of specific locales vary substantially within the most densely populated city of Denmark, with hazard ratios ranging from 0.26 to 9.26 for environment and from 0.20 to 5.95 for gene-by-environment. These findings indicate the critical synergism of gene and environment on the etiology of schizophrenia and demonstrate the potential of incorporating geolocation in genetic studies.Spatial mapping is a promising strategy to investigate the mechanisms underlying the incidence of psychosis. We analyzed a case-cohort study ( Schizophrenia (SCZ) risk is influenced by genetic and environmental factors. Here, the authors develop a statistical method for analyzing gene-by-environment effects in SCZ risk across Denmark with fine spatial resolution. Take as an example one of the best-established environmental risks for schizophrenia, childhood upbringing in an urban area. Persons born and raised in urban areas have an approximately twofold increased risk of schizophrenia compared to those born and raised in rural areas5. Researchers have examined potentially causal elements of urban upbringing, such as accessibility to health care6, selective migration of individuals8, air-pollution9, infections10, and socioeconomic inequality13. Yet none of these factors have substantially explained the risk associated with urbanicity14, nor are they highly correlated with instruments used in defining urbanicity, such as population density15. The conditional relationships between genetic liabilities and putative environmental factors are even harder to detect despite some cohort studies suggesting an interaction between urban upbringing and family history of schizophrenia20.For public mental health, it is critical to know which environmental factors can be modified to mitigate the risk of psychiatric disorders. 
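As a cross-check of the mass-spectrometry data above: kojic acid and 5-(hydroxymethyl)furan-3-carboxylic acid share the molecular formula C6H6O4, so both should show [M+H]+ at m/z 143 and the same nominal losses. A short verification, with average atomic masses:

```python
# Nominal m/z values for the MALDI-TOF ions listed above (C6H6O4 isomers).
AVG = {"C": 12.011, "H": 1.008, "O": 15.999}

def mass(counts):
    """Average molecular mass from an {element: count} dict."""
    return sum(AVG[el] * n for el, n in counts.items())

M = mass({"C": 6, "H": 6, "O": 4})            # ~142.1 for C6H6O4
for label, mz in [("[M+H]+", M + 1.008),
                  ("[M-OH]+", M - 17.007),
                  ("[M-CHO]+", M - 29.018),
                  ("[M-COOH]+", M - 45.017)]:
    print(f"{label}: m/z ~ {mz:.0f}")          # 143, 125, 113, 97
```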
However, identifying modifiable environmental factors has been a contentious issue21, the candidate environment approach suffers from the \u201cspotlight effect\u201d, ignoring the likely complexity of many environmental factors interacting with each other and with genetic liabilities to determine overall risk for illness. The environmental impact can even be a joint holistic effects from multiple environmental factors3. Measurement of the specific environmental factor may also be imprecise, masking its relationship to the illness. For example, many instruments have been devised to characterize socioeconomic inequality, yet have not shown consistent effects on incidence of schizophrenia. Given the complexity of real-life socioeconomic forces, lack of association with schizophrenia could be caused by instrument measurement error or because the instrument does not capture the relevant social-economic factors12.The difficulty in isolating specific environmental risk elements underlying urbanicity effects on schizophrenia incidence exemplifies a serious methodological challenge. The process for discovering environmental risk factors typically relies on a hypothesis-driven \u201ccandidate environmental factor\u201d approach. Researchers need to formulate a carefully constructed environmental hypothesis, measure it, and then determine if it associates with risk of the disease. Analyses is usually performed in a study of selected participants not necessarily representative of the entire population of interest. Similar to the candidate gene approach before the dawning of genome-wide association studies (GWAS)22, identifying spatially localized disease \u201chot spots\u201d can assist in the discovery of latent environmental factors. Advanced methods for disease mapping have been developed within the field of geostatistics, particularly in applying spatial random effect models to infer latent environmental variation in causal risk factors23. As the urbanicity-related increase in risk for schizophrenia was first noted through spatial clustering of disease incidence24, inferring risk hot spots to a finer resolution may provide insight into potential risk-modulating environmental elements before investing substantial resources in active measurement.An alternative to the candidate environment approach is to assess spatial patterns of disease risk without directly measuring environmental factors. As with John Snow isolating the environmental source of cholera outbreak via mapping the cases27, our method differs by utilizing spatial fine-mapping and enabling the partition of risk into E and GxE components without the need for candidate environmental factors.With this concept in mind, we develop a disease mapping strategy to address the need for discovering environmental factors without direct measurement. We use spatial random effects to map the geographic distribution of genetic liabilities (G), locale of upbringing (E), and their synergistic effects (GxE) on disease risk. By treating E and GxE as \u201clatent random fields\u201d on the map of Denmark, we avoid methodological issues inherent in the candidate environment approach. Although several studies have utilized random effect models to examine spatially localized risk for schizophreniaAs a proof of concept, we examine geospatial variation in schizophrenia risk across Denmark. 
To do so, we apply this novel analytical approach to data from a population-based case-cohort study that includes subject genotyping and detailed residential information from birth up to age 7 years. We are thus able to assess locale of upbringing effects on schizophrenia risk with a resolution beyond conventionally defined levels of urbanicity, allowing us to assess variation in spatial risk, and to ask whether spatially localized environmental factors modulate genetic liability of risk for schizophrenia.We utilize the entire population cohort of iPSYCH, excluding cases, to derive locales. The resulting map contains 186 non-overlapping locales, with the number of cohort members ranging from 65 to 197 individuals in each locale (median\u2009=\u2009105). Figure\u00a05. The inclusion of spatial random effects (E) reduces the urbanicity effect to hazard ratio\u2009=\u20091.64 with confidence interval encompassing 1. Model 3 with both E and GxE effects significantly contributes explanatory power to the variation in risk for schizophrenia (Log-likelihood ratio tests p\u2009<\u20092\u2009\u00d7\u200910\u221216), while the urbanicity effect is further reduced (hazard ratio\u2009=\u20091.46). Due to the concerns of residual confounds from interaction effects, Model 3 contains full pairwise interaction terms of fixed-effect covariates included in the model, i.e., PRS, genetic principal components, gender, and family history1. Median hazard ratios for E and GxE components, defined as the median absolute difference in hazard ratios for all possible combinations of pairs of locales28, are 2.22 and 1.78, respectively, representing a 122 and 78% expected change in risk if living in a high-risk locale.Table\u00a0The geographical distribution of E and GxE are shown in Fig.\u00a0Our novel spatial mapping analysis strategy transforms the \"candidate environment\u201d approach for disease risk into a search for environmental hot spots, localizing where environmental factors appear to have a strong impact. The flexibility of this approach enables the estimation of the amount variance accounted for by E and GxE effects without direct measurement of environmental risk factors. Both simulations and empirical application demonstrate the utility of this strategy as an alternative to the candidate environment approach.Applying this strategy to nationwide, population-based longitudinal data enriched with genetic information, we recapitulate the well-known urban-rural gradient in schizophrenia risk based on the residential information alone. Furthermore, we show that locale of upbringing significantly contributes to the risk for schizophrenia even after controlling for population density. Both E and GxE spatial effects demonstrate substantial variation within city boundaries and account for a higher proportion of schizophrenia risk than simple urban-rural contrasts. In terms of schizophrenia risk, results indicate that the locale an individual was born and raised in is more important than urban-rural differences per se, even within the confines of a single city. Our patterns of E and GxE across Denmark can be regarded as reference distribution. The partitioned risk contour serves as an initial guide to find the true risk element. Further comparisons with putative environmental factors can reveal the underlying elements that are highly relevant for the etiology of schizophrenia.29. Thus, the PRS we used may be biased toward older patients, reducing the predictive power of the already weak biological instrument. 
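The median hazard ratio defined above can be computed directly from locale-level log-hazard effects. The sketch below uses simulated toy effects, not the study's estimates; the standard deviation of 0.6 is illustrative.

```python
import numpy as np
from itertools import combinations

def median_hazard_ratio(log_hr):
    """Median HR across locales, following the definition above: the
    median absolute difference in (log) hazard ratios over all pairs of
    locales, exponentiated back to the hazard-ratio scale."""
    diffs = [abs(a - b) for a, b in combinations(log_hr, 2)]
    return float(np.exp(np.median(diffs)))

# 186 locales, as in the Danish map; effects are simulated here.
rng = np.random.default_rng(0)
print(median_hazard_ratio(rng.normal(0.0, 0.6, size=186)))  # ~1.8
```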
Third, the diagnostic uncertainty of very early-onset schizophrenia (onset age lesser than 13-years-old) can impact observed associations. However, a recent validation study of schizophrenia diagnoses using the Danish registry has shown good reliability in both early-onset (age 13 years to 18 years) and very early-onset (age\u2009<\u200913 years) schizophrenia, with diagnostic concordance greater than 82 percent30. Another concern with the relatively young age of the iPSYCH sample is the inclusion of cohort members younger than 10-years-old who have very low-risk of being diagnosed as schizophrenia. These subjects are handled in the Cox proportional hazards model by treating their potential future diagnoses as right-censored outcomes, and hence have little impact on the model outputs. To verify this, we performed a sensitivity analysis on Model 3. We removed anyone younger than age 10 at study end and re-ran Model 3. As expected, the results are almost identical, with the E component on-average increasing risk by 127 percent and GxE component on-average increasing the risk by 77 percent . Fourth, as shown in our simulations, the size of the GxE effect depends upon the predictive accuracy of the G effect. Because the PRS is a weak instrument of G, the true size of the GxE effect is probably several times larger than our current estimate, as suggested by our simulations. Fifth, we did not examine the impact of migration on locale effects. Since we cannot differentiate GxE from the gene by environment correlation introduced by migration, we restricted our analyses to individuals who have Danish parents and defined the locales as the place of birth. Although by this we intended to reduce the influence of migration, migration itself can be an important contributor for spatially-embedded risk8, as many migrants tend to live in clusters, especially in urban areas. A recent study on community samples across several countries shown that individuals with higher genetic risks of schizophrenia tend to migrate to urban area8. However, the spatial patterns we observe are unlikely due to the confounding effects of within generational drift4 since locale of upbringing was assessed before age 7, at which age no one had yet been diagnosed with schizophrenia. Inter-generational drift might still cause spatial aggregation of individuals with high genetic liabilities. A Swedish family-based study suggested urbanicity effects on schizophrenia can be explained by familial aggregation of risk13. Nevertheless, familial risk might not be the result of genetic liability but shared environmental risks within families. Danish registry studies using a cohort independent of our sample showed no evident urban aggregation of polygenic risk20, and the polygenic risk scores associated with incidence of schizophrenia independent of family history31. Therefore, there is little evidence to suggest that the identified spatial patterns is driven by inter-generational drift of families with high genetic liability for schizophrenia. Finally, we did not investigate a variety of possible socioeconomic factors in our current analyses. The potential importance such factors mandates in-depth examination in the future research; however, obtaining, validating, and analyzing socioeconomic variables as potential candidate environmental factors in the iPSYCH sample needs to be handled carefully and is beyond the scope of current paper.As a proof of concept study, our current analysis is not without limitations. 
Given the uncertainty involved, invalid constructs or measurement error could contribute to low power to detect risk associations with specific environmental factors; our spatial mapping strategy is an alternative approach, since finding high-risk locales does not depend on the correct specification of a purported environmental risk factor. Despite these caveats, we demonstrate that locale effects, and the modulating effects of locale on genetic risk, account for a substantial proportion of urbanicity effects in Denmark. Living in a locale with a high E component increases the risk for schizophrenia by as much as 122%, independent of genetic liability and family history, while living in a locale with a high GxE component can increase the risk attributable to genetic liability for schizophrenia by as much as 78%. Because our results demonstrate risk variation at finer resolution and with stronger effects than the urban-rural demarcation, there must be specific factors underlying the previously observed urban effects; however, identification of the factors explaining urban risk has so far been unsuccessful. In the nineteenth century, the epidemiology pioneer John Snow mapped high-density regions of cholera cases onto London streets and thus identified the water source as the key infectious medium. By demonstrating that the locale of upbringing significantly contributes to risk and modulates genetic susceptibility to schizophrenia, we hope to take a first step toward isolating the sources of spatial risk variation, facilitating the design of future public health interventions for severe mental disorders.

Our spatial mapping approach follows three steps: (1) defining neighboring locales to characterize the latent environment field, (2) estimating the random effects associated with each locale, and (3) mapping the spatial distribution based on the realized effects on locales. These three steps are calibrated to ensure a good balance between fine spatial resolution and adequate statistical power. Furthermore, the modeling strategy partitions the observed effects on schizophrenia risk into distinct components: locale of upbringing (E), genetics (G), and the synergistic effect of spatial locale and genetics (GxE). Locales are defined via Voronoi tessellation [32], ensuring that each defined locale has enough study subjects to be well powered while achieving a fine spatial resolution. The Voronoi tessellation partitions the whole map into smaller units based on individuals' coordinates, making sure that every point in a given unit is closer to its own centroid than to any other; the neighborhood relationships are defined simultaneously, because the centroids are connected by the dual of the Voronoi tessellation, i.e., the Delaunay triangulation.
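A minimal sketch of this construction with scipy.spatial follows; the coordinates are simulated, whereas in practice the points would be the projected residential coordinates of cohort members.

```python
import numpy as np
from scipy.spatial import Voronoi, Delaunay

# Hypothetical residential coordinates (projected, e.g. metres).
rng = np.random.default_rng(1)
points = rng.uniform(0, 100_000, size=(500, 2))

vor = Voronoi(points)        # one cell per individual
tri = Delaunay(points)       # its dual defines neighbour relationships
print(f"{len(vor.point_region)} Voronoi cells constructed")

# Neighbour pairs: points sharing a Delaunay edge.
indptr, indices = tri.vertex_neighbor_vertices
neighbors_of_0 = indices[indptr[0]:indptr[1]]
print("Delaunay neighbours of point 0:", neighbors_of_0)
```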
After defining the neighborhood relationships, individuals are grouped with their closest neighbors, each locale growing in size until the number of individuals in it reaches a pre-defined range. Under this design we obtain an unbiased estimation of the spatial effects (E), while the GxE effects are conservatively bounded by the predictive power of the genetic instrument.

Previous genetic studies of psychiatric disorders have lacked information on locale of upbringing, while population registry studies with detailed residential locales have not yet implemented polygenic data analyses. By linking with the Danish Civil Registration System, iPSYCH provides a nationally representative sample with whole-genome genotyping and detailed chronological residential information. Together with the case-cohort design [17], these characteristics of iPSYCH enable us to obtain nationally representative estimates of the locale effects and of the modulating effects of locale on genetic risk. We demonstrate the feasibility of our spatial mapping approach by characterizing the E and GxE effects for schizophrenia in the Danish population.

To map the synergistic effects of locale of upbringing and genetic liability to schizophrenia, chronological residential information and genotyping data from the same population-based cohort are needed; the Danish Lundbeck Foundation Initiative for Integrative Psychiatric Research (iPSYCH) case-cohort study provides a unique opportunity for this aim [34]. The aim of the iPSYCH study was to combine biobank and national registry data to comprehensively examine the genetic and environmental risk factors of mental illness [34]. Cohort members were randomly sampled from the entire Danish population born between 1981 and 2005 and surviving past 1 year of age. Individuals with a diagnosis of selected mental disorders were ascertained through the Danish Psychiatric Central Research Register, using diagnostic classifications based on the International Classification of Diseases, 10th revision, Diagnostic Criteria for Research. The use of these samples is protected under strict regulation by Danish legislation; informed consent was obtained from all participants, and the study was approved by the Danish Scientific Ethics Committee, the Danish Health Data Authority, the Danish Data Protection Agency, and the Danish Neonatal Screening Biobank Steering Committee. Here, we focus on the subset of cases diagnosed with schizophrenia; a flow chart of recruitment can be found in the Supplementary Information.

For this analysis, we extracted genotyped schizophrenia cases and a population random-sample cohort from the iPSYCH study [36]. To prevent confounding due to recent emigration/immigration and large-scale ethnic differences, we restricted our analyses to unrelated individuals of European descent, as determined by genetic ancestry [37], with both parents born in Denmark according to Danish registry information. The final analyses include 24,028 case-cohort members who met the above criteria and passed genotyping quality control (Supplementary Table ).

Next, we implemented the spatial mixed-effects model to identify the sources of variation in the observed risk across locales. Given the concern of potential confounding, all models include fixed effects of gender, the first three genetic principal components, and family history as covariates; the genetic principal components were included to reduce potential spatial confounding due to population history [35].
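One plausible formalization of this mixed-effects hazard model is given below; this is our reading of the description, and the source's exact parameterization may differ.

$$
h_i(t) = h_0(t)\,\exp\!\big(\mathbf{x}_i^\top\boldsymbol{\beta}
+ b^{E}_{\ell(i)} + \mathrm{PRS}_i \cdot b^{G\times E}_{\ell(i)}\big),
\qquad
b^{E}_{\ell}\sim\mathcal{N}(0,\sigma^2_{E}),\quad
b^{G\times E}_{\ell}\sim\mathcal{N}(0,\sigma^2_{G\times E}),
$$

where x_i collects the fixed covariates (PRS, leading genetic principal components, gender, family history) and ℓ(i) is the locale of upbringing of individual i. The E random effect shifts every resident's log-hazard equally, while the GxE effect scales with the individual's polygenic score, which is what allows the two components to be partitioned.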
Family history of psychosis was also covaried, to avoid clustering of high-risk families and unmodeled rare genetic mutations [31]; family history was obtained by querying parents' records in the registry. Survival models were used to account for the age distribution [34], and observations were weighted by the inverse of each subject's probability of sampling into iPSYCH [38]. Time-to-event was defined as age at first hospital contact for schizophrenia for cases and, for cohort members without schizophrenia, as the minimum of age at death, disappearance, emigration, or age at the date of registry information collection (31 December 2013). Because locale of upbringing, and especially place of birth, has been consistently associated with a twofold increase in schizophrenia risk [17], we defined the locale based on place of birth. To reduce potential confounding caused by differences in time spent in the defined locale, we added the duration of residence in the same locale as a stratifying factor, so that only subjects residing for the same time in a given locale are compared (5 years or 7 years, depending on the sampling time frames). For comparison purposes, we also fit a model with fixed effects of the covariates and no random effects (Model 1).

We analyzed the iPSYCH case-cohort with a sequence of models intended to demonstrate the magnitude of the partitioned E and GxE effects in the context of the well-researched urbanicity effect. First, we examined the risk distribution produced by our locale-definition algorithm without multilevel modeling; this represents the overall risk distribution without partitioning the risk into components. We used the Mantel-Haenszel approach to estimate risk ratios (RR) while correcting for age differences [4]. To determine whether we reproduce the urbanicity effects previously reported in Danish cohorts, the effect measure for population density was contrasted between 55 persons/km² and 5,220 persons/km² (the urban category). Sensitivity analyses indicate that the effect measures remain the same if locale at age 5 or 7 years is used instead of locale at birth. As a byproduct of our locale-defining algorithm, the population density of each locale is calculated automatically, since the size of each locale is inversely proportional to its population density; in the statistical analyses, population density is a continuous instrument, derived by dividing the number of individuals by the area of the defined locale (using the locale at birth).
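A small sketch of the age-stratified Mantel-Haenszel risk ratio used for this unpartitioned comparison follows; the stratum counts below are toy numbers, not the cohort's.

```python
import numpy as np

def mantel_haenszel_rr(a, n1, b, n0):
    """Stratified Mantel-Haenszel risk ratio.

    a, n1 : cases and totals in the exposed group, one entry per age stratum
    b, n0 : cases and totals in the unexposed group
    """
    a, n1, b, n0 = map(np.asarray, (a, n1, b, n0))
    t = n1 + n0                                   # stratum totals
    return np.sum(a * n0 / t) / np.sum(b * n1 / t)

# Illustrative counts for three age strata.
rr = mantel_haenszel_rr(a=[12, 20, 9], n1=[400, 500, 300],
                        b=[30, 55, 25], n0=[2000, 2400, 1500])
print(f"age-adjusted RR = {rr:.2f}")
```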
Eleven million single-nucleotide polymorphisms (SNPs) were imputed from the genotyped SNPs that passed the following criteria: minor allele frequency greater than 1%, frequencies in Hardy-Weinberg equilibrium, and autosomal, bi-allelic SNPs. SHAPEIT3 was used for phasing [39], IMPUTE2 was used for imputation [40], and the reference panel was 1000 Genomes Project phase 3 [41].

To control for potential confounding due to distant shared ancestry within the sample, we calculated genetic principal components (PCs) for the iPSYCH samples. Genetic PCs were derived from a principal component analysis of a set of 43,769 independent SNPs that were genotyped and passed quality control; we used flashPCA [42] to perform the calculation because of its computational speed. Including the leading PCs in the models reduces the risk of spurious findings arising from population stratification [35]. Here, we used the first three genetic PCs, since none of the remaining PCs showed an association with schizophrenia in the iPSYCH sample.

The polygenic risk score (PRS) was calculated using the summary statistics for 34,129 cases and 45,512 controls from the Psychiatric Genomics Consortium (PGC) schizophrenia GWAS. The PRS is the sum of the products of the SNP effect sizes estimated in this independent GWAS and the dosages of those SNPs in the iPSYCH case-cohort. The included SNPs were pruned to ensure independence, while no significance threshold was set to filter SNPs; the parameters for calculating the PRS include clumping and pruning settings. Nonetheless, the PRS is inherently a weak genetic instrument, so our estimate of GxE should be read as a conservative lower bound on the interaction effect.

All analyses were implemented in R [43]; the R packages employed include spatstat [44] and coxme [45]. Geographical visualization was done with ggmap [46], which extracts geographical information from Google Maps, and an interactive version of the risk map was generated using leaflet [47] and shiny [48]. The interactive disease map is available on a web portal [https://chunchiehfan.shinyapps.io/iPSYCH_geo_tess_SZ/], from which all relevant code can be downloaded; the code used for simulations, empirical analysis, and visualization can also be found in the Supplementary Information.
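A minimal numpy sketch of the PRS construction described above (effect size × dosage, summed over SNPs); the matrix dimensions, the simulated values, and the final standardization step are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

# dosages: individuals x SNPs matrix of imputed allele dosages in [0, 2];
# betas: per-SNP effect sizes from the external (PGC) GWAS summary statistics.
rng = np.random.default_rng(2)
n_ind, n_snp = 1000, 5000                 # toy dimensions
dosages = rng.uniform(0, 2, size=(n_ind, n_snp))
betas = rng.normal(0, 0.01, size=n_snp)

prs = dosages @ betas                     # sum over SNPs of beta * dosage
prs = (prs - prs.mean()) / prs.std()      # standardize before modelling (our choice)
print(prs[:5])
```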
There is limited information available regarding the clinical management of intravenous immunoglobulin-resistant Kawasaki disease (KD). We aimed to evaluate the optimal treatment options for patients with refractory KD through an indirect-comparison meta-analysis. PubMed, EMBASE, Web of Science, and the Cochrane Database were searched on August 31, 2018; unpublished studies were also sought in ProQuest Dissertations & Theses and through manual retrieval strategies. Randomized concurrent controlled trials (RCTs), high-quality non-randomized concurrent controlled trials (non-RCTs), and retrospective studies reporting AEs were included. The quality of all eligible studies was assessed using the Cochrane Collaboration's tool and non-randomized study guidelines. Risk ratios (RR) with 95% confidence intervals (CIs) for dichotomous outcomes were estimated in our analysis, and GRADE profiler 3.6.1 was used to assess the evidence profile. Twelve studies involving 372 immunoglobulin-resistant KD patients were identified and analyzed. Neither infliximab nor intravenous pulse methylprednisolone (IVMP) was significantly more effective than a second IVIG infusion with respect to lowering coronary artery lesions (CALs) or treatment resistance. No significant differences were found between infliximab and IVMP in the incidence of CALs, treatment resistance, coronary artery aneurysm, or coronary artery dilatation. Furthermore, compared with a second IVIG infusion, both infliximab and IVMP showed significantly greater antipyretic effects, and infliximab was noninferior to IVMP in this respect. IVMP was significantly associated with fewer AEs than either a second IVIG infusion or infliximab, whereas no significant differences in AEs were noted between infliximab and a second IVIG infusion. In summary, infliximab, IVMP, and a second IVIG infusion showed no significant differences in cardioprotective effect or in the rate of treatment resistance; infliximab and IVMP were more effective than a second IVIG infusion with regard to antipyretic effects, and IVMP may have an advantage owing to its lower total rate of infusion-associated AEs. The study has been registered on PROSPERO (CRD42016039693). The online version of this article (10.1186/s12887-019-1504-9) contains supplementary material, which is available to authorized users.

Kawasaki disease (KD) is an acute, self-limited systemic vasculitis that occurs mainly in infants and children. Intravenous pulse methylprednisolone is the most commonly used steroid regimen; it rapidly inhibits inflammation and suppresses cytokine levels in KD patients, and several clinical trials have investigated the efficacy of steroids in IVIG nonresponders [9-12]. Currently, infliximab, IVMP, and a second IVIG infusion constitute conventional care for immunoglobulin-resistant KD patients in whom initial standard therapy has failed. However, the efficacy of, and adverse effects (AEs) associated with, these regimens are not well characterized. In the absence of trials directly assessing the efficacy and AEs of infliximab and methylprednisolone in immunoglobulin-resistant KD, one way to evaluate them is to conduct an adjusted indirect comparison of data from existing trials with a common control. An indirect comparison is an appropriate method when no direct evidence is available from current clinical trials: if direct evidence for both α versus γ and β versus γ is available, an indirect comparison of α versus β can be conducted using the same intervention γ as the common comparator. This meta-analysis defined the second IVIG infusion as the common comparator and aimed to evaluate the safety and effectiveness of the three therapies for children with immunoglobulin-resistant KD, in the hope of providing evidence-based clinical advice.

Ethical approval was not required because this was a meta-analysis of previously published trials and no real patients were included. The meta-analysis conformed to standard guidelines and was written according to the PRISMA statement. We searched PubMed, EMBASE, Web of Science, and the Cochrane Database for articles published from each database's inception to August 31, 2018, using a combination of free-text and MeSH terms. Specifically, we performed a MeSH search using 'mucocutaneous lymph node syndrome' and a keyword search using the phrase 'Kawasaki disease' and terms related to intravenous immunoglobulin; this search strategy was modified to fit each database. In addition, unpublished studies were sought in ProQuest Dissertations & Theses and through manual retrieval strategies: we reviewed (1) references from published articles to identify additional relevant studies, (2) conference proceedings likely to contain trials relevant to the analysis, and (3) unpublished data or incomplete trials from the relevant trial authors. All searches included non-English-language literature. Inclusion criteria were (1) RCTs and high-quality non-randomized concurrent controlled trials (non-RCTs), together with retrospective studies reporting AEs, and (2) studies whose patient populations included children with immunoglobulin-resistant KD according to the criteria of the Japanese Ministry of Health and Welfare or the AHA. Studies were selected by two independent reviewers (H. You and H. Chi) according to the above inclusion criteria, and disputes regarding the studies were resolved by H. Chan.
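The α/β/γ logic above is the Bucher adjusted indirect comparison. A small sketch under the usual assumptions (risk ratios reported with 95% CIs, approximate normality on the log scale) follows; the numbers fed in are toy values, not the study's.

```python
import numpy as np

def indirect_rr(rr_ac, ci_ac, rr_bc, ci_bc, z=1.96):
    """Bucher adjusted indirect comparison of A vs B via common comparator C.

    rr_ac, ci_ac : RR and (lower, upper) 95% CI for A vs C
    rr_bc, ci_bc : RR and (lower, upper) 95% CI for B vs C
    """
    se_ac = (np.log(ci_ac[1]) - np.log(ci_ac[0])) / (2 * z)
    se_bc = (np.log(ci_bc[1]) - np.log(ci_bc[0])) / (2 * z)
    log_rr = np.log(rr_ac) - np.log(rr_bc)       # log RR_AB = log RR_AC - log RR_BC
    se = np.hypot(se_ac, se_bc)                  # sqrt(se_ac**2 + se_bc**2)
    return np.exp(log_rr), (np.exp(log_rr - z * se), np.exp(log_rr + z * se))

rr, ci = indirect_rr(0.8, (0.5, 1.3), 1.1, (0.7, 1.8))
print(f"indirect RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```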
Data extracted from each study included the publication year, age, setting, design, number of cases, initial course of the disease, initial treatment, retreatment, and the follow-up time points at which echocardiographic assessments were performed. The primary outcomes were CALs and the rate of treatment resistance; the secondary outcomes were AEs associated with drug infusion and antipyretic effects. The methodological quality of the included RCTs was assessed using the Cochrane Collaboration tool for risk of bias, while the quality of the non-randomized studies was assessed using the guidelines available at http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp, under their three main categories. A traditional pair-wise meta-analysis was conducted, with all statistical analyses performed in Stata 14.0. Risk ratios with 95% CIs were estimated for dichotomous outcomes. Heterogeneity was assessed with the I² test and considered significant at I² > 50% or P < 0.1; GRADE profiler 3.6.1 was used to assess the evidence profile.

Twelve studies involving 372 immunoglobulin-resistant KD patients were identified and analyzed, of which nine were RCTs and the remainder non-RCTs according to the Cochrane Handbook. Neither infliximab nor IVMP was significantly more beneficial than a second IVIG infusion with respect to reducing the total incidence of CALs in patients with immunoglobulin-resistant KD. The indirect-comparison relative risk (RR) of the total incidence of CALs for infliximab versus IVMP was 0.70, and no significant difference between infliximab and IVMP was found in the rates of coronary artery aneurysm or coronary artery dilatation. The rate of treatment resistance was not higher in the infliximab group than in the second IVIG infusion group; accordingly, no significant difference in treatment resistance was found between infliximab and IVMP. Infliximab was associated with significantly greater antipyretic effects than a second IVIG infusion, and the indirect-comparison RR of antipyretic effects for infliximab versus IVMP was 1.18, indicating that the antipyretic effects of the two treatments were not significantly different. Compared with a second IVIG infusion, IVMP was associated with fewer AEs; the indirect-comparison RR of the total rate of AEs for infliximab versus IVMP was 2.34, indicating a lower total rate of infusion-associated AEs with IVMP than with infliximab. Significant heterogeneity was observed among the included studies for antipyretic effects and for the rate of treatment resistance in the IVMP treatment group (I² = 78%), while heterogeneity for the other outcomes was low (I² = 6%).
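For the heterogeneity assessment mentioned above, the following is a minimal sketch of Cochran's Q and I² computed from per-study log risk ratios and standard errors; the inputs are illustrative only.

```python
import numpy as np

def i_squared(log_rr, se):
    """Cochran's Q and I^2 for per-study log risk ratios."""
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w = 1.0 / se**2                               # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (log_rr - pooled) ** 2)
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

q, i2 = i_squared(log_rr=[0.1, 0.5, -0.2, 0.4], se=[0.2, 0.25, 0.3, 0.2])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```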
Strengthened by the GRADE system, the working-group grades of evidence were high for CALs in the infliximab group, moderate for CALs in the IVMP group, high for the rate of treatment resistance in the infliximab group, moderate for the rate of treatment resistance in the IVMP group, moderate for antipyretic action in both groups, moderate for the total rate of infusion-associated AEs in the infliximab group, and low for the total rate of infusion-associated AEs in the IVMP group. For the indirect comparisons, the suggested evidence grades were moderate for CALs (infliximab versus IVMP), low for treatment resistance (infliximab versus IVMP), moderate for antipyretic action (infliximab versus IVMP), and low for infusion-associated AEs among infliximab, IVMP, and second IVIG infusion. Tests for funnel-plot asymmetry and meta-regression analyses were not conducted, because the number of studies included in each pair-wise meta-analysis was fewer than 10, in keeping with the Cochrane Handbook.

TNF-α is elevated in the acute phase of KD and may be a contributing factor in patients who subsequently develop a coronary artery aneurysm. Infliximab, a chimeric monoclonal antibody against TNF-α, has been used to treat patients with immunoglobulin-resistant KD for the past 10 years, and several studies have suggested that treatment with infliximab results in faster fever resolution, shorter hospitalization, and even improved coronary artery outcomes compared with a second IVIG infusion [27]. A previous traditional pair-wise meta-analysis of IVMP was published by Yang et al. in 2015. Our meta-analysis suggests that IVMP and infliximab may have limited ability to prevent or treat CALs in immunoglobulin-resistant KD patients, as they showed the same cardioprotective effects as a second IVIG infusion; neither initial IVIG nonresponders nor patients treated with early initial IVIG plus methylprednisolone pulse therapy are at lower risk for coronary artery abnormalities. The results also revealed that transient hepatomegaly was most likely associated with infliximab treatment [16]. Millar et al. suggested that corticosteroid use in the acute phase of KD in patients with evolving coronary artery aneurysms might be associated with worsened aneurysms and impaired vascular remodeling. Certain laboratory parameters in KD patients are considered useful markers of inflammation that may reflect disease severity and treatment effects; such parameters include leukocyte and platelet counts, erythrocyte sedimentation rate, and the levels of hemoglobin, C-reactive protein, albumin, TNF-α, monocyte chemoattractant protein-1 (MCP-1), aspartate aminotransferase (AST), and alanine aminotransferase (ALT). Previous studies revealed that IVIG nonresponders have a higher neutrophil differential, higher C-reactive protein levels, and lower cholesterol levels than responders, and that the risk of CALs is high in patients with more severe and persistent inflammation [47]. Cardiovascular manifestations and complications are closely connected to the morbidity and mortality associated with severe KD, during both the acute illness and long-term follow-up; early diagnosis and early IVIG infusion in incomplete KD patients could reduce the risk of CALs [49]. To date, because few clinical trials have assessed the efficacy of medications other than a second IVIG treatment, neither the AHA nor the Research Committee of the Japanese Society of Pediatric Cardiology (RCJSPC) has reached consensus on the treatment options for IVIG-resistant KD; both mostly recommend a second IVIG treatment as the most reasonable therapy in IVIG-resistant patients, followed by IVMP and then infliximab [26, 50].

Nevertheless, this meta-analysis had several limitations. First, the use of an indirect comparison might have created differences in the clinical outcomes assessed herein.
However, in the absence of sufficient head-to-head data on the different treatments, an adjusted indirect comparison of the treatments in question can produce reasonable results; some clinicians have even argued that adjusted indirect comparisons produce less bias than direct comparisons [51]. In conclusion, neither infliximab nor IVMP differed from a second IVIG infusion in cardioprotective effect or in the rate of treatment resistance, and both treatments were more effective than a second IVIG infusion with regard to antipyretic effects. Additionally, IVMP may have an advantage owing to its lower total rate of infusion-associated AEs. However, the results of this meta-analysis should be interpreted with caution because of its potential limitations. Until data from direct clinical trials comparing infliximab with IVMP are available, our meta-analysis provides preliminary evidence for the optimal management of immunoglobulin-resistant KD patients. Additional file 1: Search strategy. (DOCX 16 kb)

Real-world evidence on second-line treatment and beyond with immune checkpoint inhibitors (ICIs) in Chinese patients is lacking. Here, we aimed to assess the efficacy, responses, and immune-related side effects of anti-PD-1 agents in real-life practice. We retrospectively analyzed consecutive patients who received nivolumab or pembrolizumab monotherapy at Peking Union Medical College Hospital, collecting baseline characteristics, evaluating treatment efficacy, and categorizing immune-related adverse effects (irAEs); predictive factors of treatment response were also determined. The study included 97 patients with a median age of 64 years. The majority of patients were male, had nonsquamous histology and advanced-stage tumors, and had a history of smoking; most received ICIs as second-line therapy. Expression of PD-L1 was detected in 34.11% of patients. The overall response rate (ORR) and disease control rate (DCR) were 16.49% and 60.82%, respectively, and none of the patients achieved a complete response (CR). The median PFS and OS were 150 days and 537 days, respectively. The incidence of immune-related toxicities was similar to that previously reported. Patients with driver gene mutations had shorter PFS than patients without them, while patients who experienced irAEs had relatively longer PFS. The real-world clinical outcome of ICIs in second- and further-line NSCLC therapy is thus promising; several characteristics may have predictive value for efficacy, and the occurrence of irAEs during treatment was acceptable and could be an independent positive predictor of PFS. In summary: the efficacy and safety profile of ICIs as second-line treatment or above for patients with NSCLC is promising in real-world circumstances; the incidence of, and median time to, irAEs vary between organs; driver gene mutations are associated with shorter progression-free survival; and the occurrence of irAEs is associated with longer progression-free survival.

Advances in immuno-oncology have caused a dramatic shift in the treatment landscape of advanced non-small cell lung cancer (NSCLC) in recent years.
Immune checkpoint inhibition, whose mechanism of action differs profoundly from that of targeted therapy or chemotherapy, restores the efficacy of tumor-specific T cells within the tumor microenvironment, thereby enhancing the immune response, and has shown promising outcomes in NSCLC. Although significant responses of NSCLC to PD-1 inhibitors have been demonstrated in clinical trials, there is a paucity of real-world data. In real-world settings, patient cohorts are more heterogeneous, and some patients are unsuitable for clinical trials; real-world evidence (RWE) includes data from patients of different backgrounds and can help improve the management of individual patients. The aim of this study was to assess the efficacy, responses, and immune-related side effects of anti-PD-1 agents in real-life practice after the approval of anti-PD-1 therapy in China. We also analyzed treatment alternatives to PD-1 inhibitors after tumor progression. To our knowledge, this is the largest single-site retrospective real-world study in China.

This study was conducted at Peking Union Medical College Hospital (PUMCH); patients were drawn from a prospective cohort database (the CAPTRA-Lung study). During the treatment cycles, disease assessments were performed every six weeks, and the Response Evaluation Criteria in Solid Tumors (RECIST) v1.1 were used to evaluate disease responses. Progression-free survival (PFS) was calculated from the beginning of anti-PD-1 treatment to the date of disease progression or death; overall survival (OS) was calculated from the beginning of anti-PD-1 treatment to death. Adverse effects (AEs) with an immunological basis were defined as immune-related adverse effects (irAEs), and all irAEs were classified and graded according to the National Cancer Institute Common Terminology Criteria for Adverse Events. We conducted descriptive analyses of clinical and pathological variables, compared variables potentially associated with clinical efficacy using univariate and multivariate Cox proportional hazards regression, and present PFS and OS as Kaplan-Meier curves. Statistical analyses were performed with SPSS 20 and GraphPad Prism 8.0. Every patient in the study signed informed consent, and the retrospective analysis was approved by the ethics board of PUMCH.

A total of 2,430 patients received treatment at the lung cancer center of PUMCH from 1 April 2017 to 31 December 2019. Among these, 97 patients had received anti-PD-1 treatment as second-line therapy or beyond. The majority were men (male-to-female ratio 2.03:1), and the median age was 64 years. Most patients had nonsquamous histology (59.79%) and metastatic disease (77.32%); the most frequent metastatic site was the contralateral lung, followed by bone, liver, and adrenal gland. Most patients had a smoking history (58.76%). The checkpoint inhibitor was given as second-line treatment in 72 patients (74.23%) and as third- or fourth-line treatment in 25 patients (25.77%); nivolumab was given to the majority of patients (63.92%). The patients' characteristics are summarized in Table .
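PFS and OS in this study are summarized with Kaplan-Meier curves; the product-limit estimator behind such curves can be sketched as follows, using toy follow-up data rather than the cohort's.

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit survival estimate.

    time  : follow-up in days; event : 1 = progression/death, 0 = censored.
    Returns a list of (event_time, survival_probability) pairs.
    """
    time, event = np.asarray(time), np.asarray(event)
    taus = np.unique(time[event == 1])       # distinct event times
    surv, s = [], 1.0
    for t in taus:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk          # S(t) = prod(1 - d_i / n_i)
        surv.append((t, s))
    return surv

curve = kaplan_meier([90, 150, 150, 200, 249, 300], [1, 1, 0, 1, 0, 1])
print(curve)
```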
EGFR mutations, ALK fusions, ROS1 fusions, MET exon 14 skipping, RET rearrangements, and KRAS mutations had been tested by next-generation sequencing or amplification-refractory mutation system PCR in 74 patients. The analysis showed that 21 patients had driver gene mutations, including 15 cases (15.46%) of EGFR 19-del or 21-L858R mutations, three cases (3.09%) of ROS1 fusion, two cases (2.06%) of RET rearrangement, and one case (1.03%) of MET exon 14 skipping; KRAS mutations were detected in eight patients (8.25%). PD-L1 expression ranged between 1% and 49% in 23 patients (23.71%) and was negative in 31 patients (31.96%).

A subset of patients experienced irAEs, of whom 19 had irAEs involving more than one organ; the organ most commonly involved was the skin, followed by the endocrine system and the liver. The median time from the start of immunotherapy to the first irAE was 63 days, and the median time to occurrence varied between organs and systems (Fig. ). Most irAEs were limited to grade 2, whereas grade 3 or 4 irAEs occurred in nine cases (9.4%). Patients were given systemic glucocorticoids for irAEs of grade 3 or higher, except for endocrine irAEs, for which replacement therapies were given; cyclosporin A, cyclophosphamide, anti-IL-6 antibody, and anti-TNFα antibody were given to selected patients with critical and refractory disease. The incidence and grades of irAEs are reported in Table . Nine patients had dose interruptions, and six permanently stopped immunotherapy because of myocarditis (two cases), pneumonia (two cases), myocarditis plus pneumonia (one case), or grade 4 bullae (one case). Most patients experienced improvement or resolution of toxicity; three patients died, presumably as a consequence of irAEs, the causes of death being myocarditis, pneumonia, and pneumonia plus myocarditis and hepatitis.

The median follow-up for all patients was 249 days, during which 72 patients (74.22%) had disease progression and 42 (43.30%) died; the median PFS and OS were 150 days and 537 days, respectively. Subsequent regimens included single-drug chemotherapy (23.71%), continued immunotherapy (15.46%), and targeted therapy (15.46%). The most commonly used mono-chemotherapies were docetaxel (12.37%), gemcitabine (4.12%), and paclitaxel (3.09%). Targeted therapies were used only in patients positive for driver gene mutations and consisted of second- or third-generation tyrosine kinase inhibitors (TKIs) or rechallenge with first-generation TKIs. Eight of the patients who continued immunotherapy received concurrent local treatment such as radiofrequency ablation, localized radiotherapy, or interventional embolotherapy.

We further investigated the clinicopathological factors that might affect the efficacy of PD-1 inhibitors. Univariate analysis showed that median PFS was significantly longer in patients with an Eastern Cooperative Oncology Group (ECOG) performance status (PS) score of 0-1 and in those who experienced irAEs during therapy (Fig. ); conversely, patients harboring driver gene mutations had shorter PFS.
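A hedged sketch of the univariate/multivariate Cox modelling described above, using the lifelines package, is shown below; the data frame, its column names, and the small penalty term are illustrative assumptions, not the study's actual coding.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis frame; values are invented for illustration.
df = pd.DataFrame({
    "pfs_days":        [90, 150, 150, 200, 249, 300, 120, 400],
    "progressed":      [1,  1,   0,   1,   0,   1,   1,   0],
    "driver_mutation": [1,  0,   1,   0,   0,   1,   1,   0],
    "irae":            [0,  1,   1,   0,   1,   0,   0,   1],
    "ecog_0_1":        [1,  1,   0,   1,   1,   1,   0,   1],
})

# A small ridge penalty stabilizes the fit on tiny toy data.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="pfs_days", event_col="progressed")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratios and p-values
```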
Patients with recurrent or advanced NSCLC in whom first-line chemotherapy and/or targeted therapy fails generally have a poor prognosis. ICIs, which can restore the patient's antitumor immunity, are becoming the new choice for these patients; in several clinical trials, ICIs have shown significantly higher response rates and more durable clinical responses than chemotherapy in patients with advanced NSCLC. However, most of the evidence to date comes from clinical trials and cannot be generalized to real-world patients, and the few existing retrospective analyses include smaller cohorts of Chinese patients. This study retrospectively analyzed the efficacy, outcomes, side effects, and clinical factors associated with prognosis in a longitudinal cohort of real-world NSCLC patients receiving ICI monotherapy as second-line treatment and above. To the best of our knowledge, this is one of the largest comprehensive retrospective studies of real-world patients from mainland China treated with second-line PD-1 inhibitor monotherapy.

In published clinical trials, the ORR of second-line ICI monotherapy ranged from 18% to 37%. The incidence of total irAEs, the incidence of irAEs of grade 3 or higher, and the median time to irAEs in our study were comparable to those in previous reports. Patients with EGFR-sensitizing mutations or ALK fusions did not respond well to ICI therapy; EGFR mutations activate PD-L1 expression and induce immune escape. Data on EGFR- or ALK-mutated patients receiving ICIs are scarce: a retrospective study showed that only one of 28 EGFR-mutant or ALK-positive patients achieved PR as the best response, and in our cohort the responses of EGFR- or ALK-mutated patients were not significant. Recently, a systematic analysis showed no OS benefit of second-line nivolumab, pembrolizumab, or atezolizumab over docetaxel in EGFR-mutated patients. Previous studies have reported irAEs with an incidence of 40% to 51%. Furthermore, the pattern of follow-up therapies revealed that most patients received further lines of therapy, including chemotherapy, and a few continued immunotherapy beyond progression; the latter group had oligoprogressive disease and was treated with localized therapy along with maintenance ICIs. Although such regimens may prolong the PFS2 of driver-gene-mutated NSCLC with CNS and/or limited systemic disease progression on targeted therapies, their value in this setting remains to be established.

There are several limitations of this study. First, it is a retrospectively designed, non-randomized study conducted at a single center; the patient number is therefore limited, and biases could exist in patient inclusion criteria and efficacy evaluations. Second, most patients with EGFR mutations in this study had received only first-generation TKI therapy, and the efficacy of ICIs in patients resistant to third-generation TKIs still needs to be defined. Third, tumor mutation burden, which may affect the efficacy of nivolumab, was not tested in most patients; it would be worthwhile to collect these data, and to analyze cases from different cancer centers comprehensively, in future studies.

In conclusion, in heterogeneous real-world settings, ICI monotherapy showed promising clinical outcomes and acceptable side effects as second- and further-line treatment for patients with advanced NSCLC. Clinical factors such as driver gene mutations and the appearance of irAEs were independent predictive factors for PFS. Further prospective studies are required to understand the underlying mechanisms and the relationship between clinical factors and ICI response. The authors declare no potential conflicts of interest. Table S1: The treatment choices of all 97 patients.

In developing countries, Pakistan is one of the countries where access to health and health-related indicators is a major concern.
Improving these indicators would reduce inequalities among the various communities/districts or groups of communities. A Community Health Index (CHI) is estimated here to explore the inequality ratio, the inequality slope, and the spatial pattern of inequalities among all communities at the regional and geographical levels. Data from the Pakistan Social and Living Standards Measurement (PSLM) survey, Round VI, 2014-15, were used to construct the CHI. The index was constructed in two steps: in the first step, the study indicators were standardized, while in the second step the standardized indicators were aggregated into a single metric by applying a non-linear geometric mean formula. The inequality ratio of 16.59 estimated for Pakistan was higher than the ratio for the city of Atlanta, GA (5.92), whereas a lower slope coefficient was estimated for Pakistan than for Atlanta (0.38 < 0.54). The ratio of disparity was also lower for urban regions than for rural ones (7.78 < 17.54), while the slope coefficient was slightly higher for urban regions (0.45 > 0.43). The spatial analysis revealed different patterns of inequality: a cluster of healthy districts was found in Punjab province, whereas districts of Baluchistan formed a bunch of deprived/unhealthy districts in terms of CHI scores. In addition, separate maps for all provinces showed that the capital districts of all provinces are relatively well-off/developed. These results lead to the conclusion that inequalities in access to health and health-related indicators exist across countries as well as across geographical regions; to reduce or eradicate these inequalities, government and public health workers are recommended to set priorities based on access to the composite index. The online version contains supplementary material available at 10.1186/s12889-020-09870-4.

Improvement toward equitable access to health and health-related determinants is a global concern and fundamental to the advancement of the Sustainable Development Goals (SDGs). Nations throughout the globe lack access to health and health-related indicators, but more so in low- and middle-income countries: according to the UNDP's report, about 103 million of the world's population have no literacy skills. Pakistan, being a low- and middle-income country, faces a lack of access to health and health-related socioeconomic indicators, with about 39% of the population suffering from multidimensional poverty. In terms of access to health and health-related socioeconomic, housing, clean-drinking-water, and environmental factors, the government of Pakistan has made extraordinary progress in accessibility and sustainability since 2000. There are several approaches to measuring geographical disparities, one popular approach being the recently developed Urban Health Index (UHI). As the aim of the composite index is to examine health/wellbeing disparities among districts at the geographical level, this paper contributes to the existing literature in several respects. First, such a composite index is constructed very rarely and has not been constructed before in Pakistan. Second, in Pakistan there has been no study of district health/wellbeing disparities at the regional and geographical levels.
Finally, a spatial analysis of health inequalities has not previously been conducted in Pakistan; here we analyze the data spatially using the ArcGIS and GeoDa applications [30, 31].

Before describing the step-wise methodology of the CHI, the selection of appropriate domains (and their sub-domains) was the main problem in index construction. The dimensions were therefore selected on certain important grounds, the first being that, based on previous literature, these indicators significantly influence health status [32-40]. To construct the Community Health Index (CHI), we applied the WHO methodology recently used to measure the Urban Health Index. The selected indicators were standardized by taking the difference between the indicator's actual value and its lowest value, divided by the difference between the highest and lowest values; the lowest and highest values are also known as the minimum and maximum goalposts, respectively. Mathematically,

Z = (I - Min(I)) / (Max(I) - Min(I)),

where I is the actual value of an indicator, Max(I) is the maximum value of indicator I, and Min(I) is the minimum value of indicator I minus a small chosen value. The standardized indicators were then aggregated using the geometric mean formula

CHI = (Z_1 × Z_2 × ... × Z_j)^(1/j),

where j is the total number of indicators and 1/j is the power applied to their product: the index score is calculated by multiplying the standardized indicator values together for each community (district) and taking the j-th root. This simple formula assigns equal weights to all sub-dimensions and is known as the unweighted geometric mean. However, because in the present study all dimensions (and not their sub-dimensions) were assumed to be weighted equally, the simple formula was converted to the weighted geometric mean,

CHI = Π_k Z_k^(w_k),  with Σ_k w_k = 1.

The disparity/inequality ratio was computed as the ratio of the mean of the upper 10% of the distribution (first decile) to the mean of the lower 10% of the distribution (last decile) of CHI scores. The disparity/inequality slope of the middle 80% of CHI scores was estimated by simple linear regression through the ordinary least squares (OLS) technique: the districts/communities were ranked in ascending order of CHI score, and the CHI scores were regressed on the rank variable,

CHI_i = α + β · Rank_i + ε_i,

where CHI_i is the index score of the i-th district and Rank_i is that district's rank. The coefficient β is the disparity slope, showing the average CHI heterogeneity between adjacent districts (communities); it is influenced by the ranking of the districts. Because the number of communities varied across provinces, the rank variable was rescaled by dividing each area's rank by the total number of units in the midsection, and the CHI scores were regressed again on the rescaled rank variable. The slope estimated in this way can then be compared across regions and geographical groupings.
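Putting the two construction steps and the two disparity measures together, the following is a minimal numpy sketch; the indicator matrix is simulated, equal weights are assumed, and details such as tie handling are simplified.

```python
import numpy as np

# rows = districts, cols = indicators (oriented so that higher = better);
# toy data standing in for the PSLM-derived indicators.
rng = np.random.default_rng(3)
X = rng.uniform(20, 95, size=(113, 6))

# Step 1: min-max standardization against goalposts
# (minimum shifted down slightly so no standardized score is exactly 0).
lo = X.min(axis=0) - 1e-6
hi = X.max(axis=0)
Z = (X - lo) / (hi - lo)

# Step 2: aggregate with a (weighted) geometric mean; equal weights here.
w = np.full(X.shape[1], 1 / X.shape[1])
chi = np.exp((w * np.log(Z)).sum(axis=1))     # = prod(Z ** w) per district

# Inequality ratio: mean of the top decile over mean of the bottom decile.
s = np.sort(chi)
k = len(chi) // 10
ratio = s[-k:].mean() / s[:k].mean()

# Disparity slope: OLS of the middle 80% of scores on their rescaled rank.
mid = s[k:-k]
rank = np.arange(1, len(mid) + 1) / len(mid)
slope = np.polyfit(rank, mid, 1)[0]
print(f"ratio = {ratio:.2f}, slope = {slope:.2f}")
```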
As one of the objectives is to assess the patterns of spatial geographical disparity among the districts of Pakistan, the CHI scores were displayed on choropleth maps. Spatial representation is useful because one can directly compare the extent of health/wellbeing quality across districts in different regions; moreover, it provides essential information about the different patterns of health (wellbeing) to public health workers, who can make useful plans on this basis. A step-by-step procedure for displaying CHI scores on maps is given in Additional file 1.

Data were extracted from the Pakistan Social and Living Standards Measurement (PSLM) survey, Round VI, 2014-15, conducted by the Pakistan Bureau of Statistics (PBS), Islamabad. For analysis purposes, Excel spreadsheets, the Statistical Package for Social Sciences (SPSS) version 20, and the ArcGIS and GeoDa applications were used throughout the study [30, 31]. As the study is based on the assumption that the selected indicators are positively correlated with one another, Pearson's correlation matrix is shown in Table . The distribution of the Community Health Index (CHI) for the 113 districts of Pakistan is shown in Fig. .

There has been significant improvement in access to health and health-related social indicators, but inequalities persist across nations as well as across geographical regions within countries. For instance, in our study the disparity ratio (16.59) is higher than the ratio of 5.92 found for the census tracts of the city of Atlanta, state of Georgia, USA. There is also a significant difference in disparity slopes across nations and across geographical groupings within a country: the census tracts of Atlanta are more heterogeneous and sensitive (slope of 0.54) than the districts of Pakistan.

Spatial analyses are useful for understanding the patterns of inequality among various geographical regions. To assess whether there are variations in the spatial patterns of the urban and rural regions of the districts, separate maps were constructed for both (Fig. ). A separate spatial analysis of each province provides information about where clusters of healthier or unhealthier districts, in terms of CHI scores, exist; for instance, in the Punjab map (Panel A), the northern districts form a cluster of relatively healthy districts.

Our study is not without limitations. First, the analysis is restricted to the selected dimensions, as the data set utilized does not contain rich information beyond the chosen indicators. Secondly, the analysis was conducted for only 113 districts of Pakistan; the remaining districts were omitted either because of law-and-order situations or because they had too few responding households to represent the whole district. Finally, rich information was available for the Islamabad district, but we excluded it from the present study: Islamabad is the capital of Pakistan, and its estimated CHI score placed it as highly developed and far removed from all the other study districts, which would have skewed our results.

To conclude, spatial inequalities in access to health and health-related social indicators persist across countries and across geographical regions within countries. This study provides a foundation for measuring the extent of inequality and heterogeneity among communities/districts, and it also offers a basis for assessing the spatial patterns of disparity across various geographical boundaries.
Governments and public health researchers/practitioners can use such outcomes to set priorities and work to eradicate these disparities: health-related socioeconomic attributes are interlinked, and improvement in any of these indicators, especially in deprived/unhealthy regions, will enhance the overall quality of health/wellbeing. Additional file 1: Steps for constructing a choropleth map using the ArcGIS application. Additional file 2: Table A, list of upper- and lower-decile districts of Pakistan with respect to CHI scores; Table B, regression analysis for the disparity slope; Table C, region-wise list of upper- and lower-decile districts of Pakistan with respect to CHI scores; Table D, regression analysis for the disparity slope; Table E, regression analysis for the disparity slope (all provinces of Pakistan). Additional file 3: Figure A, distribution of the Community Health Index of districts in all provinces; Figure B, spatial distribution of the CHI of districts in all provinces of Pakistan.

Early-psychosis researchers have documented that duration of untreated psychosis (DUP) is an important predictor of outcomes in first-episode psychosis. Very few cross-national studies have been conducted, and none have involved patients from both Mexico and the U.S. We collaborated to answer three questions: (1) Are DUP estimates similar in two very different settings and samples? (2) Are demographic variables, premorbid adjustment, and symptom severity similarly related to DUP in the two different settings? (3) Does the same set of variables account for a similar proportion of variance in DUP in the two settings? Data on sociodemographic characteristics, premorbid adjustment, symptom severity, and DUP were available for 145 Mexican and 247 U.S. first-episode psychosis patients; DUP was compared, and bivariate analyses and multiple linear regressions were carried out in each sample. DUP estimates were similar (medians of 35 weeks in Mexico and 38 weeks in the U.S.). In the Mexican sample, DUP was associated with gender, employment status, premorbid social adjustment, and positive symptom severity (explaining 18% of variance); in the U.S. sample, DUP was associated with age, employment status, premorbid social adjustment, and positive symptom severity (the latter in the opposite direction to that observed in the Mexican sample), accounting for 25% of variance. Additional cross-national collaborations examining key facets of early-course psychotic disorders, including DUP, will clarify the extent of generalizability of findings, strengthen partnerships for more internationally relevant studies, and support the global movement to help young people struggling with first-episode psychosis and their families.

Studies on first-episode psychosis from around the world have consistently shown duration of untreated psychosis (DUP)—often defined as the time interval from onset of frank psychotic symptoms to the first contact with a psychiatric facility to receive adequate pharmacological treatment—to be an important predictor of clinical and social outcomes in patients with first-episode psychosis. The specific mechanisms underlying the association between DUP and outcome variables have not yet been clearly identified.
It is still unknown whether a long DUP itself causes poor outcomes or whether individuals at risk for poor outcomes receive specialized treatment long after the onset of symptoms [8-10]. Although longer DUP can be considered a risk factor for poorer outcomes in patients with psychosis, little is known about the determinants of a prolonged DUP. Lower premorbid adjustment is reflected in poorer adaptation to school, lower academic performance, and limited social relationships during childhood and adolescence. If an insidious illness onset begins during this time, early manifestations of the disorder, such as predominant negative symptoms, are often misattributed to circumstances other than a serious mental illness, such as substance use, difficulties at school, or simply behaviors considered characteristic of adolescence [20-22].

In this study, we leveraged a collaboration between Mexican and U.S. early-psychosis researchers to study two samples with regard to key variables pertaining to first-episode psychosis. We specifically sought to answer three research questions about DUP. First, are DUP estimates similar or different in the two very different settings and samples? Second, are basic demographic variables, premorbid adjustment scores, and symptom severity scores similarly related to DUP in the two settings and samples? Third, does the same set of variables account for a similar portion of the variance in DUP in the two different settings and samples?

Patients in Mexico were consecutively recruited from both the inpatient and outpatient services of the Instituto Nacional de Psiquiatría Ramón de la Fuente Muñíz (INPRFM), a highly specialized mental health center in Mexico City dedicated to research, education, and the treatment of psychiatric patients; a total of 145 patients enrolled in the prospective Mexican First-Episode Psychotic Study were included. The 247 U.S. participants were consecutively recruited as part of a project examining the effects of premorbid marijuana use on early-course psychosis. At both sites, after a clinical interview with the patient and his or her relatives, a trained clinical researcher made a diagnosis using the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I) to confirm the diagnosis.

The Premorbid Adjustment Scale (PAS) assesses functioning in four age periods: childhood (≤11 years), early adolescence (12-15 years), late adolescence (16-18 years), and adulthood (≥19 years). Functioning in each of these age periods is assessed across five major psychosocial domains rated from 0 to 6 (severe impairment): sociability and withdrawal, peer relationships, scholastic performance, adaptation to school, and social-sexual functioning. Social-sexual functioning is not included as a psychosocial domain during the childhood period, while scholastic performance and adaptation to school are not measured during the adulthood period; the adulthood period was not assessed in the present study. Thus, in childhood, academic functioning includes scholastic performance and adaptation to school, and social functioning encompasses sociability and withdrawal plus peer relationships; in both early adolescence and late adolescence, academic functioning comprises scholastic performance and adaptation to school, and social functioning includes sociability and withdrawal, peer relationships, and social-sexual functioning. At both the Mexican and U.S. sites, functioning was not rated for an age period if prodromal or psychotic symptoms began during, or within one year of, that period, as described previously [31, 32].
Positive and negative symptom severity was assessed at both sites with the widely used Positive and Negative Syndrome Scale (PANSS). At the Mexican site, DUP was measured following the criteria proposed by Larsen; at the U.S. site, age at onset of psychosis and DUP were determined using the Symptom Onset in Schizophrenia (SOS) inventory [38-40].

Bivariate analyses comparing the Mexican and U.S. samples of first-episode psychosis patients were carried out using χ² tests of association for categorical variables, independent-samples Student's t tests for continuous variables with approximately normal distributions (based on examination of descriptive statistics and the Kolmogorov-Smirnov test), and Mann-Whitney U tests for continuous variables that were not normally distributed. Correlations were computed using Pearson and Spearman correlation coefficients as appropriate. To assess the independent effects of correlates of DUP in both samples, we computed a log transformation of DUP, ln(DUP+1), which normalized the distribution and allowed multiple linear regression models.

We first compared the two first-episode psychosis samples in terms of basic demographic variables. As shown in Table , the samples differed on several of these variables: Mexican patients were more likely to be unemployed (at a trend level) and differed significantly on another demographic measure (z = 3.99, p < 0.001), while patients in the U.S. sample had completed more years of education, were more likely to be male, and were more likely to be single. DUP did not differ between the samples (medians of 35 and 38 weeks; z = 0.02, p = 0.99); the distributions of DUP are shown in Fig. . In terms of PAS scores, patients in the Mexican sample had lower PAS academic adjustment scores (indicating better premorbid academic adjustment) across all three age groups, whereas PAS social adjustment scores did not differ.

Inter-correlations among the PAS subscale scores in both samples were statistically significant (p < 0.01). Correlations among the academic adjustment scores were in the moderate range (averaging 0.46 in the Mexican sample and 0.49 in the U.S. sample), and correlations among the three social adjustment scores were also statistically significant and larger (averaging 0.68 in the Mexican sample and 0.58 in the U.S. sample). On the other hand, correlations between the three academic adjustment scores and the three social adjustment scores were only modest (e.g., 0.17 in the U.S. sample). With regard to inter-correlations among the three PANSS symptom severity scores, findings were again consistent across the Mexican and U.S. samples: one pairing showed only modest correlations (r = 0.15 in Mexican patients and r = 0.23 in U.S. patients), while the negative symptom and general psychopathology subscale scores were the most correlated.

In terms of associations between DUP and the five demographic variables, in the U.S. sample age and employment status were associated with DUP: age was directly correlated with DUP (p < 0.001), though it was not associated with DUP in the Mexican sample, and employment status was associated with DUP at a trend level. In the Mexican sample, both gender and employment status were associated with DUP. In both samples, DUP was associated with neither negative symptom severity nor general psychopathology symptom severity.
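Ahead of the regression results reported below, the following is a minimal sketch of the ln(DUP+1) OLS model described in the methods; all values are simulated, so the R² here will be near zero, whereas with the real data this computation is what yields the 18% and 25% figures reported in the text.

```python
import numpy as np

# Hypothetical predictor matrix for one site: the columns only mirror the
# variables discussed (gender, employment, PAS social adjustment, PANSS
# positive score); the values are simulated noise.
rng = np.random.default_rng(4)
n = 145
X = np.column_stack([
    rng.integers(0, 2, n),     # gender
    rng.integers(0, 2, n),     # employed
    rng.normal(3, 1, n),       # PAS social adjustment
    rng.normal(15, 4, n),      # PANSS positive score
])
dup_weeks = rng.gamma(2.0, 20.0, n)

y = np.log(dup_weeks + 1)                   # ln(DUP + 1) transformation
A = np.column_stack([np.ones(n), X])        # add intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

fitted = A @ coef
r2 = 1 - ((y - fitted) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"R^2 = {r2:.2f}")  # near 0 on noise; ~0.18 / 0.25 on the real samples
```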
sample, longer DUP was associated with a greater severity of positive symptoms. Correlations among DUP, PAS academic adjustment (an average score across the three age periods), PAS social adjustment (also averaged across the three age periods), and PANSS scores were also examined. In each sample, multiple linear regressions were then run, including the variables found to be associated with DUP in the aforementioned bivariate tests. In the model pertaining to the Mexican sample, four independent variables accounted for 18% of the variance in DUP; similarly, in the model pertaining to the U.S. sample, the four independent variables accounted for 25% of the variance in DUP. Several interesting findings emerged from this analysis, which represents one of the first cross-national comparisons of DUP and predictors/correlates of DUP, and the first to do so among first-episode samples in Mexico and the U.S. First, we found that the distributions of DUP, and the medians of DUP, were remarkably similar in these two very different settings and samples. Second, we observed similarities in variables associated with DUP; for example, premorbid social adjustment was related to DUP in both samples, though premorbid academic adjustment was not. Third, interestingly, positive symptom severity was associated with DUP in the two samples, but in contrasting ways. Fourth, we found that a similar set of four variables accounts for approximately one-fifth to one-quarter of the variance in DUP in the two different settings and samples. With regard to the first of these findings, in our samples the median DUP is around 8–9 months, which is consistent with findings in several other studies. Our similarities in correlates of DUP add more evidence regarding the importance of poor premorbid social adjustment in understanding DUP; this variable may produce longer DUP, as poor social adjustment is related to decreased social support. Regarding the relationship between DUP and positive symptoms, as expected, given the difficulty for the patient and family to recognize and accept a psychiatric disorder that needs to be evaluated and treated when positive symptoms are less severe or not present, a longer DUP was associated with a lesser severity of this kind of symptoms in the Mexican sample. However, a longer DUP was associated with greater positive symptom severity in the U.S. sample. This apparently contradictory finding could be explained by the exacerbation of stigma in the presence of positive symptoms. In comparing DUP estimates, predictors of DUP, and the proportion of DUP explained by similar sets of predictors, we knew a priori that our two settings and samples were quite different; we view the findings as particularly informative in part because of this lack of similarity. In both samples, four predictors accounted for 18–25% of the variance in DUP. Several methodological limitations should be noted. First, this was a secondary analysis of two existing but similar datasets that were combined for the purposes of this collaborative analysis; as such, there were subtle differences in the exact measurement methods for key variables, including DUP. Second, as is true of any study of DUP, this is a difficult construct to measure, as it inevitably relies on retrospective recall of patients and their family members. Third, and also related to the fact that this was a secondary analysis, there are a host of other variables that would ideally have been measured, and which could have explained more variance in DUP.
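As a methodological aside, the regression approach described above, ln(DUP+1) regressed on the covariates retained from the bivariate tests, can be sketched as follows; statsmodels is assumed, and all variable names and values are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy first-episode dataset; DUP in weeks is right-skewed, so the model
# uses ln(DUP + 1), as described in the Methods. All values hypothetical.
df = pd.DataFrame({
    "dup_weeks":  [4, 30, 52, 8, 104, 16, 2, 60],
    "age":        [19, 24, 31, 22, 28, 26, 20, 35],
    "employed":   [1, 0, 0, 1, 0, 1, 1, 0],
    "pas_social": [1.2, 3.4, 4.1, 2.0, 3.8, 2.5, 1.0, 4.5],
    "panss_pos":  [18, 25, 22, 15, 28, 20, 14, 26],
})
df["ln_dup"] = np.log(df["dup_weeks"] + 1)  # normalizing transformation

model = smf.ols("ln_dup ~ age + employed + pas_social + panss_pos", data=df).fit()
print(model.rsquared)  # share of variance in ln(DUP+1) explained by the model
```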
Despite these limitations, our analysis reveals the value of cross-national collaborations in examining key facets of the early course of psychotic disorders, including DUP. Such studies will clarify the extent of broad generalizability of findings even in quite different samples; strengthen partnerships for more rigorous and internationally relevant studies; and support the now global movement to help young people struggling with early psychosis and their families, who must navigate complex systems of care while facing diverse social and psychological challenges."} {"text": "Ectopic expression of RUNX2 has been reported in several tumors. In melanoma cells, the RUNT domain of RUNX2 increases cell proliferation and migration. Due to the strong link between RUNX2 and skeletal development, we hypothesized that the RUNT domain may be involved in the modulation of mechanisms associated with melanoma bone metastasis. Therefore, we evaluated the expression of metastatic targets in wild type (WT) and RUNT KO melanoma cells by array and real-time PCR analyses. Western blot, ELISA, immunofluorescence, migration and invasion ability assays were also performed. Our findings showed that the expression levels of bone sialoprotein (BSP) and osteopontin (SPP1) genes, which are involved in malignancy-induced hypercalcemia, were reduced in RUNT KO cells. In addition, released PTHrP levels were lower in RUNT KO cells than in WT cells. The RUNT domain also contributes to increased osteotropism and bone invasion in melanoma cells. Importantly, we found that the ERK/p-ERK and AKT/p-AKT pathways are involved in RUNT-promoted bone metastases. On the basis of our findings, we concluded that the RUNX2 RUNT domain is involved in the mechanisms promoting bone metastasis of melanoma cells via complex interactions between multiple players involved in bone remodeling. SPP1 (secreted phosphoprotein 1 )gene product, OPN(osteopontin), was observed in bone metastases [Skeletal metastases occur when cancer cells from a primary tumor invade the bone. Generally, bone metastases are associated with breast, prostate and lung cancers . Bone metastases ; it was tastases . Importatastases . In parttastases . PTHrP wtastases in head tastases . In additastases . Recentltastases . Conside2 and in RMPI growth medium containing 10% fetal bovine serum (FBS) (Sigma-Aldrich), supplemented with antibiotics (1% penicillin/streptomycin) and 1% glutamine. All cell lines were tested negative for mycoplasma using the LookOut Mycoplasma PCR Detection Kit (Sigma-Aldrich).We used A375 and MELHO (DSMZ-Deutsche Sammlung von Mikroorganismen und Zellkulturen) human melanoma cells. The RUNT KO cells were obtained using CRISPR/Cas9 as we previously described . Cell liOnce 80% confluence was reached, cells were harvested, washed and counted using a Burker haemocytometer for all experiments.gatatcTTCGCCTCACAAACAACC-3\u2032) and the reverse primer Runx2R-XhoI (5\u2032-ggacctcgagATATGGTCGCCAAACAGAT-3\u2032); underlined nucleotides represent the restriction sites. The amplified fragment was inserted in the pCRTM2.1 cloning vector, then excised by EcoRV/XhoI digestion and finally cloned in pcDNA3-Flag-HA vector . The cloned fragment was sequenced at the BMR Genomics facility (http://www.bmr-genomics.it). RUNX-2 expression was validated by Western blot.The RUNX-2 gene was cloned into the pcDNA3 vector as previously described ,23. 
BrieThe exogenous PTHrp peptide was added to A375, 3G8, MELHO and 1F5 melanoma cells seeded into 24-well plates at a concentration of 100 \u00b5g and incubated for 24 h. Treated cells were then harvested to perform expression analyses.A375 and MELHO melanoma cells were plated in 96-well plates at a density of 1000 cells per well and incubated overnight. Cells were then treated with ERK1/2 and AKT inhibitors for 24 h at a final concentration of 2 \u00b5M in RPMI1640 10% FBS. Cultured media were collected to perform ELISA assays, while cells were stored for gene expression analysis.\u00ae Design and Analysis desktop software (Thermo Fisher Scientific).PCR arrays were performed using a TaqMan\u2122 Human Tumor Metastasis Array according to the manufacturer\u2019s instructions. The amplification reaction and the results analysis were carried out using a QuantStudio\u2122 3 Real-Time PCR System equipped with QuantStudioVEGFA, Hs00900055_m1; VEGFR, Hs01052961_m1; CD31, Hs01065279_m1; IBSP, Hs00173720, OPN, Hs00167093_m1). Gene expression for MMP9 was tested using the Power SYBR\u00ae Green PCR Master Mix (Thermo Fisher Scientific). Gene expression was normalized to the housekeeping \u03b22-microglobulin gene, and the relative fold expression differences were calculated. TaqMan SDS analysis software was used to analyze the Ct values. Three independent experiments with three replicates for each sample were performed.Total RNA extraction and RT were performed as previously reported . PCRs we\u00ae TGXTM precast gradient 4\u201320% gels and transferred onto polyvinylidene difluoride (PVDF) membranes (Thermo Fisher Scientific). PVDF membranes were then probed with the primary and secondary antibodies reported in RIPA buffer was used for protein extraction (Thermo Fisher Scientific) and protein concentrations were determined by BCA assay (Thermo Fisher Scientific). Protein samples were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) using mini-PROTEANSignals were detected using a chemiluminescence reagent , and images were acquired using an LAS4000 Digital Image Scanning System . Densitometric analyses were performed using the ImageJ software, and the relative protein band intensity was normalized to \u03b2-actin and expressed as the optical density (OD) ratio. The data were obtained from three independent experiments.\u00ae 488 anti-rabbit (Cat. #A-11034) secondary antibody, and nuclear staining was performed by using ProLong\u2122 Gold Antifade Mountant with DAPI (Thermo Fisher Scientific). Images were captured using a Leica DM2500 microscope . In particular, four different fields were measured for each sample in three independent experiments, and each field contained approximately 80\u2013100 total cells.Cells were fixed and processed according to the manufacturer\u2019s protocols. BSP primary antibodies were diluted (as reported in the datasheet) in Antibody Dilution Buffer and incubated overnight at 4 \u00b0C. The slides were then incubated with the Alexa Fluor4 cells were seeded onto the upper chamber of transwell plates of 8 \u00b5m diameter in the presence of RPMI supplemented with 1% FBS for 24 h (migration assay) or 48 h (invasion assay). The invasion assays were performed by coating the upper chamber with Matrigel. The lower chamber was filled with medium with or without bone fragments . After 48 h, cells adherent to the upper surface of the membrane were removed. 
Thereafter, cells in the membrane underside were fixed with 4% of paraformaldeide and stained with DAPI. Cells were then visualized under a Leica DM 2500 to take pictures and to evaluate the number of adherent cells. Cells were counted in ten random fields at 40X magnification.To assess bone tropism, we first compared cells\u2019 ability to migrate in the presence or absence of a bone fragment. Therefore, cells were seeded on a 6-well plate at a density of 500,000 per well. After adhesion, half of each well was scratched using a cell scraper, and the relative migration distance (RMD) was calculated in the absence or presence of a bovine bone slice placed at the same distance in all samples. Cultures were carried out for 2 days using DMEM (Dulbecco\u2019s Modified Eagle Medium) supplemented with 10% FBS and Glutamax at 37 \u00b0C and 5% CO. The migration ability assay was conducted with an EVOS\u2122 FL Auto Imaging System (Thermo Fisher Scientific) under time-lapse protocol for 48 h. Distances between the cell front and the bone slice or the signed blank space for every well were measured at the beginning and at the end of each experiment. Relative migration distances (RMDs) were calculated using the following formula: RMD = (t0\u2013t1)/t0, where t0 is the distance between the cell front and the bone slice at time zero of the assay and t1 is the same distance at the end, as previously reported . The RMDFor PTHrP protein detection, we performed an ELISA . WT and RUNT KO cell lines were plated onto 96-well plates at a density of 10,000 cells/well. After 3 days of culture, the medium was collected and centrifuged at 1000 g for 20 min at 4 \u00b0C. Standards were prepared following the manufacturer\u2019s instructions. Samples and standards were plated into the ELISA microplate, and the assay was conducted according to the manufacturer\u2019s instructions.https://string-db.org) for independent inspection related to their predicted connections.RUNX2, PTHrP, AKT and ERK proteins were submitted to the STRING portal for Windows, version 16.0 .Student\u2019s paired t-test was used to compare the variation of variables between two groups. Differences were considered statistically significant at Then, we evaluated the metastatic gene expression profile in A375 and RUNX2 RUNT KO (RUNT KO) melanoma cells by using a Human Tumor Metastasis Array. The data showed lower expression of several genes involved in the metastatic process in RUNT KO cells compared to A375 cells . To valiWe then tested RUNT domain influence in driving melanoma cell migration to the bone by analyzing the expression of IBSP and SPP1 genes. As shown in As PTHrP expression is regulated by RUNX2, we measured PTHrP levels in WT and RUNT-KO melanoma cell culture media. Interestingly, we observed a significant reduction in PTHrP concentration in RUNT KO cells media compared to the WT cell media A, as welIn order to confirm the RUNT domain role in inducing PTHrP expression, we cultured KO cells in the presence of exogenous PTHrP (+exPTHrP). As shown in Considering the modulatory role of VEGFR2 in the ERK pathway, we then looked for ERK pathway modifications in KO cells. The observed reduced levels of ERK and pERK proteins expression in RUNT KO cells compared to WT cells suggested an activating role of the RUNT domain A. ERK anTo evaluate the interaction between RUNT and AKT/ERK signaling pathways, we treated WT cells with either AKT or ERK inhibitors. 
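As an implementation note on the bone-tropism assay described above, the relative migration distance reduces to a one-line function; a minimal sketch:

```python
def relative_migration_distance(t0_um: float, t1_um: float) -> float:
    """RMD = (t0 - t1) / t0, per the formula given in the Methods above.

    t0_um: distance between the cell front and the bone slice (or the
    marked blank space) at the start of the assay; t1_um: the same
    distance at the end. A larger RMD indicates greater migration.
    """
    return (t0_um - t1_um) / t0_um

# Example: a front closing from 1000 um to 400 um gives RMD = 0.6; WT
# values can then be normalized against RUNT KO values as described above.
print(relative_migration_distance(1000.0, 400.0))
```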
Our data showed that the inhibition of AKT and ERK pathways did not affect RUNX2 gene expression A. HoweveImportantly, inhibition of both the AKT and ERK pathways reduced the amount of PTHrP released in the WT melanoma cell medium D. AccordAt first, we evaluated the migration ability of cells either in the presence or in absence of bone fragments in vitro. In particular, we calculated the levels of the relative migration distance of WT versus RUNX2 KO cells in the presence or absence of bone fragments. We observed that RMD levels of, W.T.; normalized with the RMD of RUNX2 KO cells, were higher in the presence of bone fragments . TherefoTo further analyze the role of the RUNT domain in driving melanoma cell migration to bone fragments, we tested the migration and invasion ability of WT and RUNT-KO melanoma cells, respectively, in a transwell system. The migration ability in the absence of bone fragments was lower for RUNT-KO cells compared to WT A. In theThe bone microenvironment regulates complex and relevant processes such as hematopoiesis, osteogenesis and osteolysis . VariousTranscription factor RUNX2 is the master gene of osteogenic differentiation. Its expression is high in pre-osteoblasts and in early osteoblasts, but decreases in mature osteoblasts . HoweverRUNX2 involvement in regulating the epithelial\u2013mesenchymal transition (EMT) process has been demonstrated in melanoma . RecentlIn lung cancers, RUNX2 overexpression was shown to promote EMT via direct regulation of vimentin along with other proteins ,37. ReceWe demonstrated that the RUNT domain promotes melanoma cell proliferation and migration . AccordiWe also observed a reduced expression of PTHrP, an autocrine/paracrine ligand involved in malignancy-induced hypercalcaemia and skeletal metastatic lesions, in RUNT KO cells . InteresTo evaluate the involvement of RUNX2 in osteotropic mechanisms, we performed in vitro experiments by using bone fragments as previously described by Mannavola and coworkers . EmploymFinally, since our results showed that the RUNT domain affects the expression and the activity of various molecules involved in bone metastasis, we conclude that RUNX2, via the RUNT domain, may promote bone metastasis of melanoma through a complex scenario affecting different and strongly associated pathways."} {"text": "Loxodonta africana) is listed as vulnerable, with wild populations threatened by habitat loss and poaching. Clinical pathology is used to detect and monitor disease and injury, however existing reference interval (RI) studies for this species have been performed with outdated analytical methods, small sample sizes or using only managed animals. The aim of this study was to generate hematology and clinical chemistry RIs, using samples from the free-ranging elephant population in the Kruger National Park, South Africa. Hematology RIs were derived from EDTA whole blood samples automatically analyzed (n = 23); manual PCV measured from 48 samples; and differential cell count results (n = 51) were included. Clinical chemistry RIs were generated from the results of automated analyzers on stored serum samples (n = 50). Reference intervals were generated according to American Society for Veterinary Clinical Pathology guidelines with a strict exclusion of outliers. 
Hematology RIs were: PCV 34\u201349%, RBC 2.80\u20133.96 \u00d7 1012/L, HGB 116\u2013163 g/L, MCV 112\u2013134 fL, MCH 35.5\u201345.2 pg, MCHC 314\u2013364 g/L, PLT 182\u2013386 \u00d7 109/L, WBC 7.5\u201315.2 \u00d7 109/L, segmented heterophils 1.5\u20134.0 \u00d7 109/L, band heterophils 0.0\u20130.2 \u00d7 109/L, total monocytes 3.6\u20137.6 \u00d7 109/L , lymphocytes 1.1\u20135.5 \u00d7 109/L, eosinophils 0.0\u20130.9 \u00d7 109/L, basophils 0.0\u20130.1 \u00d7 109/L. Clinical chemistry RIs were: albumin 41\u201355 g/L, ALP 30\u2013122 U/L, AST 9\u201334 U/L, calcium 2.56\u20133.02 mmol/L, CK 85\u2013322 U/L, GGT 7\u201316 U/L, globulin 30\u201359 g/L, magnesium 1.15\u20131.70 mmol/L, phosphorus 1.28\u20132.31 mmol/L, total protein 77\u2013109 g/L, urea 1.2\u20134.6 mmol/L. Reference intervals were narrower than those reported in other studies. These RI will be helpful in the future management of injured or diseased elephants in national parks and zoological settings.The African elephant ( Loxodonta africana) is a megaherbivore, which had an extensive range across the African continent until the 1930s. Loss of habitat and poaching led to the present classification of this species as Vulnerable by the International Union for Conservation of Nature (IUCN) (Loxodonta africana africana (South African bush elephant) based on the geographical distribution (The African elephant (e (IUCN) . Accordie (IUCN) . The subribution . Elephanribution . Additioribution .Elephas maximus), mainly in Sri Lanka, Thailand and Myanmar, presumably because of the easier access to this species, as these animals are at least partially under human care ] are one of the most valuable diagnostic tools in veterinary medicine and are used to help differentiate diseased from healthy individuals , 8. Theyman care \u201317. Some (ISIS)] .The objective of this study was to establish RI for hematology and selected clinical chemistry measurands for a free-ranging African elephant population. The RI were generated in accordance with the guidelines published by the American Society for Veterinary Clinical Pathology (ASVCP) with minEthical approval specifically for this study was obtained from the University of Pretoria Faculty of Veterinary Science Research Ethics Committee and Animal Ethics Committees (certificate number REC 132-19).via a 18G needle and direct vacutainer collection from an auricular vein at first handling after the venipuncture site was swabbed with alcohol (ethanol 70%). Disposable medical gloves were worn during the blood collection. Whole blood was collected in sealed EDTA and serum vacutainers . Serum samples were left to clot for at least 30 min standing upright in a cooler box. Samples were transported cooled, until they were processed in the VWS laboratory within 6 h of collection. EDTA whole blood was analyzed with the scil Vet abc or the Horiba ABX Micro VS60 hematology analyser. Automated hematology analysis was not routinely performed for all animals. Blood smears were made using a standard pushing (wedge) technique . Serum tubes were centrifuged at 1,300 g for 10 min, and serum aliquoted into cryotubes and frozen at \u221280\u00b0C.The samples for this study originate from the free-ranging African elephant population from the KNP, South Africa. The animals were immobilized for park management purposes or other unrelated studies. Immobilization was performed by Veterinary Wildlife Service (VWS) veterinarians according to SANParks Standard Operating Procedures (SOP). 
Elephants were darted from a helicopter using an air-pressurized dart propelled by a carbon dioxide powered rifle . Immobilization was induced with etorphine , azaperone , and hyaluronidase , with dose ranges based on subjective weight and age estimation by the same veterinarian (etorphine 0.003 mg/kg and azaperone 0.01 mg/kg). Once the elephant was recumbent, the ground crew approached and assisted the elephant into lateral recumbency, if required. At the end of the procedure, naltrexone was administered intravenously at 20 times the etorphine dose (mg), and the animal observed until it had fully recovered. Sample collection proceeded according to a standardized protocol as follows: blood was taken echnique and staiAt the time of immobilization, a physical examination was performed. Animals without injuries and free of clinical abnormalities were considered healthy. All data for the animal, including the sex, general condition, age and weight estimation, microchip number and geographical location of the immobilization site, were recorded in Excel spreadsheets. Notes were added for abnormal clinical findings or injuries, if present. Sample selection was made according to this information. All selected samples were collected between October 2014 and August 2019, meaning they were stored no longer than 5 years. This threshold was chosen as no studies could be found on stability beyond this time . SamplesData from the original hematology analyses were reviewed. Analysis was performed using EDTA whole blood after mixing at room temperature, with a Scil Vet ABC (first 25 results) and a Horiba ABX Micros ESV60 (last 11 results), using the domestic horse setting on both analyzers. These analyzers and settings have not been validated for elephant blood. Internal quality control using manufacturer-supplied quality control material was performed every day before analysis. These results from the original hematology analyses were reviewed for the present study. Firstly, the automated calculated hematocrit (HCT) was compared to a manual PCV performed at the same time. Only automated results with a HCT within 3% of the PCV were included in this study. The white blood cell count (WBC), red blood cell count (RBC) and platelet count (PLT) as measured by impedance , and the hemoglobin concentration (HGB), as measured by a cyanide-free photometric method were considered accurate enough to be used for this study. The erythrocyte indices mean cell volume (MCV), mean cell hemoglobin (MCH) and mean cell hemoglobin concentration (MCHC) were calculated using the following standard equations:12/L, PCV %; MCV fL, MCH pg, MCHC g/L) as well as the severity of toxicity (1+ to 4+) according to a standardized grading system described for domestic species . MorpholAnalysis was performed using the Abaxis VetScan VS2 according to the manufacturer's instructions. The Large Animal rotor was used and included the following measurands: albumin, alkaline phosphatase (ALP), aspartate aminotransferase (AST), calcium, creatine kinase (CK), gamma glutamyltransferase (GGT), globulin, magnesium, phosphorus, total protein (TP) and urea. The analytical methods for these measurands are shown in Statistical analyses were performed with MedCalc software version 19.1.7 and the Excel add-on Reference Value Advisor version 2.1 accordinp-value of >0.2 for the Shapiro-Wilk test, not >0.05, was used to define a Gaussian distribution or robust method . 
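The erythrocyte indices mentioned above follow the standard formulas; a minimal sketch in the units used in this study (PCV in %, RBC in 10^12/L, HGB in g/L):

```python
def red_cell_indices(pcv_pct: float, rbc_10e12_per_l: float, hgb_g_per_l: float):
    """Standard erythrocyte indices, in the units reported in this study."""
    mcv_fl   = pcv_pct * 10.0 / rbc_10e12_per_l   # mean cell volume, fL
    mch_pg   = hgb_g_per_l / rbc_10e12_per_l      # mean cell hemoglobin, pg
    mchc_g_l = hgb_g_per_l * 100.0 / pcv_pct      # MCH concentration, g/L
    return mcv_fl, mch_pg, mchc_g_l

# Example within the reported elephant RIs: PCV 40%, RBC 3.3 x 10^12/L, HGB 140 g/L
print(red_cell_indices(40.0, 3.3, 140.0))  # ~(121.2 fL, 42.4 pg, 350.0 g/L)
```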
The 90% confidence intervals (CI) of the lower and upper reference limits (RL) were calculated using a bootstrap method. The ratio of the upper or lower CI to the RI was calculated by dividing the former by the latter .To examine the potential effect of storage time on clinical chemistry measurands, the number of days from storage to analysis was enumerated, and the correlation between days in storage and measurand concentration or activity calculated using Pearson's correlation coefficient, Samples were selected from free-ranging elephants, with no overt clinical abnormalities, living in the KNP in South Africa and were collected between October 2014 and August 2019. Of the original 79 selected samples, two were excluded after further detailed analysis of their capture records, as one was found deceased subsequent to a road traffic accident, and one had a snare injury. After statistical analysis and removal of 28 complete data sets with outliers, 51 samples from apparently healthy animals remained. These consisted of 42 males and 9 females. All were sub adults or adults according to the original selection datasheet, except for one male calf which was estimated to be 4 years old. Results for PCV were available for 48 of these animals. Only 23 samples from the automated hematology analysis were finally included, after applying the described screening procedures. Results thereof are shown in The CVs obtained for the clinical chemistry measurands from the repeatability study with elephant serum on the VS2 were: albumin 1.1%, ALP 3.1%, AST 4.7%, calcium 0.8%, CK 5.2%, GGT 6.6%, globulin 3.7%, magnesium 0.9%, phosphorous 1.0%, TP 1.4%, urea 3.9%. Imprecision for all measurands was considered acceptable, when compared to total allowable analytical error guidelines for veterinary species .r = 0.69, p < 0.001; CK r = 0.50, p < 0.001). In other words, older samples appeared to have a decreased AST and CK activity compared to more recent samples, which may indicate a storage effect (No significant correlation between storage time (minimum 5 months to maximum 5 years) and measurand concentration/activity was found for any measurands apart from AST and CK , published between 1977 and 1980; all these data originated from the same elephant population in eastern Africa (n = 18\u201323), where on later examination, infestation with the bile duct hookworm and other parasites was discovered and a different subspecies of African elephant (Loxodonta africana cyclotis) report dStudies on Asian elephants are more common and involve both managed and free-ranging animals. Managed elephants often reside in tourist camps or are working elephants from the timber industry, from various locations in South East Asia, including India, Sri Lanka, Thailand and Myanmar , 17, 36.In the above mentioned studies on Asian elephants, the animals were described as clinically healthy or apparently healthy. 
Sample sizes varied greatly; the smallest one was performed on six individuals with longitudinal samples with the12) and PCV (49%) described in the East African study, were higher than those determined in the current study and the minimum-maximum (1.4\u20136.0 \u00d7 1012/L and 18\u201380%) ranges much wider, in comparison to the current reference intervals (12/L) and PCV (35.1%); MCV range (106\u2013122 fL) was slightly narrower and MCHC range (310\u2013390 g/L) wider but comparable to the current RIs , reported in the African forest elephant (n = 5) (9 /L) from captive and immobilized African elephants (9/L) in tuskers (9/L) in free-ranging elephants in Sri Lanka, the latter being the highest reported WBC count in apparently healthy elephants are lower than the current RIs (0.4 \u00d7 109/L) , 39. Int\u00d7 109/L) .9/L) (9/L) (9/L) . The lowL) (9/L) . The higelephant . The widMost of the results for clinical chemistry measurands reported in other studies of comparable geographical region were similar to the current measurements, even though analytical methods differed , 32, 43.Serum mineral concentrations in the African elephant vary by geographical location. The means of calcium, magnesium and phosphorus in the current study were higher compared to those from elephants at Sengwa Wildlife Research, Zimbabwe, but lower in comparison to elephants from Ruwenzori National Park in Uganda , 30. FirDetermination of AST and CK activity is especially important for captured animals, as increases in these enzymes are associated with muscle injury, intramuscular injection, trauma and capture stress in domestic and wild animals such as dogs, horses, some ruminants and rhinoceros \u201348. AnimLoxodonta africana are even lower whereas urea and globulin means were higher than the current results from the same geographical location (KNP), using the same drug combination and clinical chemistry panel, did not consider the influence of the anesthetics relevant without significant changes in the measurands , 62. In For future studies, the sample size should be increased, especially for the automated hematology analyses, as the current sample size was rather small and the analyzers are not validated for this species. Ideally, the same machine should be used for all samples, which in the best case scenario is validated for the species to be analyzed. Unfortunately, this was not possible for this study due to resource restrictions present in the laboratory. It is important to note that the automated hematology reference intervals are specific to the analyzers and settings used in this study. The desirable sample size for all measurands would be >120, but above 39 is considered reasonable for RI studies, which we were able to attain for blood smear analyses and WBC count as well as chemistry measurands . Larger This is the first RI study for hematology and a clinical chemistry panel relevant for the African elephant performed using appropriate statistical methods and a strict outlier elimination approach. In comparison with previously performed research on the same species and for Asian elephants, the current ranges are narrower, which will have a greater potential to identify normal and abnormal clinical pathology results in individuals of this species in the future. The established RI will function as an important tool for researchers and clinicians working with the species, provided that drugs, geographic location and nutrition are taken into consideration when interpreting results. 
Future studies will be needed with greater sample sizes to create RIs for different groups. The original contributions presented in the study are included in the article/supplementary material. The animal study was reviewed and approved by the Veterinary Science Research Ethics Committee and Animal Ethics Committee (certificate number REC 132-19), Faculty of Veterinary Science, University of Pretoria. CS contributed to conceptualization and study design, data collection, data curation, performed the data analysis, and wrote the first draft of the manuscript. EH contributed to conceptualization and study design, data collection, data curation, performed the data analysis, and acquired funding. JH contributed to data collection. MM contributed to conceptualization and design of the study, and assisted with sample provision. PB assisted with sample provision and data curation. All authors contributed to manuscript revision, and read and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Primary Care Networks (PCNs) are a new organisational hierarchy with wide-ranging responsibilities introduced in the National Health Service (NHS) Long Term Plan. The vision is that PCNs should represent 'natural' communities of general practices (GP practices) collaborating at scale and covering a geography that fits well with practices, other healthcare providers and local communities. Our study aims to identify natural communities of GP practices based on patient registration patterns using Markov Multiscale Community Detection, an unsupervised network-based clustering technique, to create catchments for these communities. This was a retrospective observational study using Hospital Episode Statistics, patient-level administrative records of attendances to hospital, set in the general practices of the 32 Clinical Commissioning Groups of Greater London. Participants were all adult patients resident in and registered to a GP practice in Greater London who had one or more outpatient encounters at NHS hospitals between 1st April 2017 and 31st March 2018. The outcome measures were the allocation of GP practices in Greater London to PCNs, based on the registrations of patients resident in each Lower Layer Super Output Area (LSOA) of Greater London, and the population size and coverage of each proposed PCN. 3 428 322 unique patients attended 1334 GPs in 4835 LSOAs in Greater London. Our model grouped 1291 GPs (96.8%) and 4721 LSOAs (97.6%) into 165 mutually exclusive PCNs. Median PCN list size was 53 490, with a lower quartile of 38 079 patients and an upper quartile of 72 982 patients. A median of 70.1% of patients attended a GP within their allocated PCN, ranging from 44.6% to 91.4%. With PCNs expected to take a role in population health management and with community providers expected to reconfigure around them, it is vital to recognise how PCNs represent their communities. Our method may be used by policymakers to understand the populations and geography shared between networks.
In the absence of data-driven approaches, Primary Care Networks (PCNs) have formed through interpersonal relationships between practices rather than through an understanding of the distribution of their registered patients. This study uses Markov Multiscale Community Detection, a data-driven, unsupervised clustering method, to identify 'naturally occurring' communities of general practices (GP practices) that collectively form 165 PCNs across London. In doing so, this technique produces PCNs that are most representative of the spatial communities of patients for whom PCNs provide care. National Health Service England has proposed that PCNs should contain 30 000 to 50 000 patients and be restricted to a single Clinical Commissioning Group; however, we find this may not represent patterns of care delivery in an urban setting. The use of Hospital Episode Statistics ensures that the obtained PCNs are related to secondary care utilisation; on the other hand, these PCNs may not reflect patients who rarely use healthcare services but remain registered to a GP practice. The introduction of Primary Care Networks in the National Health Service (NHS) Long Term Plan in 2019 marks one of the biggest changes to general practice in England. The new NHS Long Term Plan and GP contract announced in 2019 introduced the 'Primary Care Network' (PCN). While many practices are already members of networks or federations, or informally collaborate with other GPs, the new organisational structure may not match existing arrangements. According to NHS England, PCNs are specified to be networks of neighbouring practices covering a population of 30 000 to 50 000, which will not cross the boundaries of a CCG, although this is not an absolute requirement. In this article, we set out an approach to defining communities that conform to the criteria required of PCNs, based only on the registration of patients from a given geographical area to a GP. Our approach uses Markov Multiscale Community Detection (MMCD), which uses Louvain optimisation to detect and obtain robust partitions of a network without imposing a priori the number of partitions that should be produced. All adult patients presenting to outpatient secondary care in England from 1st April 2017 to 31st March 2018 were identified from Hospital Episode Statistics (HES). In cases where a patient was registered at more than one LSOA and GP practice combination within the 1-year time period of our study, the record with the highest frequency for that individual was chosen; where these were tied, the most recent combination was chosen. GP practices contributing fewer than 100 patients were excluded. In order to quantify the extent of overlap between areas covered by different GP practices in London, the equivalent market size (EMS) of each LSOA was calculated as the reciprocal of the Herfindahl-Hirschman Index. The geographical co-ordinates of each GP practice were identified from their registered postcode. For each of the 16 000 partitions produced, the pairwise geographical straight-line distances between all practices within each PCN were calculated. Where the median distance from a practice to all other practices within a PCN community was more than four times the median pairwise distance between all practices within the PCN community, this practice was excluded as a spatial outlier.
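MMCD as used here scans across Markov times to find robust multiscale partitions; the following is only a simplified, single-scale stand-in using plain Louvain modularity from networkx (2.8 or later). The toy network links GP practices that share registrations from the same LSOA; all names and counts are hypothetical, and the min-count edge weight is just one possible choice, not the weighting used in the study:

```python
import networkx as nx
from itertools import combinations

# Toy LSOA -> {practice: registered patients} map (values hypothetical).
registrations = {
    "LSOA_A": {"GP1": 120, "GP2": 80, "GP3": 10},
    "LSOA_B": {"GP2": 90, "GP3": 60},
    "LSOA_C": {"GP4": 150, "GP5": 70},
}

# Link practices drawing patients from the same LSOA, accumulating weights.
G = nx.Graph()
for counts in registrations.values():
    for (g1, n1), (g2, n2) in combinations(counts.items(), 2):
        w = min(n1, n2)
        if G.has_edge(g1, g2):
            G[g1][g2]["weight"] += w
        else:
            G.add_edge(g1, g2, weight=w)

# Single-scale Louvain partition; full MMCD would scan Markov times and
# keep partitions that are robust across scales.
communities = nx.community.louvain_communities(G, weight="weight", seed=1)
print(communities)  # e.g., [{'GP1', 'GP2', 'GP3'}, {'GP4', 'GP5'}]
```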
The number of practices within each PCN community was calculated, as was the proportion of practices contained within the polygon drawn around the outer geographical limits of the practices comprising the PCN community. PCN communities where the number of practices was less than 3 or greater than 20, or where the percentage of practices spatially located within the outer spatial limits (defined by the polygon drawn around each practice coordinate) of the PCN community was more than 25%, were excluded. The total number of practices present in the remaining PCN communities was calculated, and the partition with the highest number of included GP practices was taken as the optimal partition. For this optimal partition, the GP practice name, location and CCG were linked by practice code, using data from NHS Digital. Practice list sizes as of March 2018 were also linked using data from NHS Digital. LSOAs were subsequently assigned to a PCN community based on the PCN to which the highest number of patients within a given LSOA were registered. GP practices were mapped along with their corresponding assigned community allocation and LSOA boundaries. The proportion of patients resident within the same PCN as their registered GP was calculated for each PCN. Access to the data on which this study is based was granted following review by a panel including patient and lay representatives. Patients were not invited to contribute to the writing or editing of this document for readability or accuracy. Residents in the same LSOA in London were generally registered to a wide range of GP practices: a wide variation in LSOA-level equivalent market size between 1st April 2017 and 31st March 2018 was seen, from 1.1 to 20.6. 185 LSOAs (3.8%) had an equivalent market size of less than two GP practices, consistent with primary care provision by a single dominant GP practice. The median equivalent market size across London was 4.9 GP practices per LSOA, while 259 LSOAs (5.4%) had an equivalent market size of more than 10 GP practices. The median LSOA-level equivalent market sizes of CCGs in London ranged from 3.2 to 7.0. Overall, the median equivalent market size for LSOAs north of the river Thames was 23% higher than that south of the river (5.3 vs 4.3). An optimal configuration of GP communities was found at an RMST parameter of 5.5 and a Markov time of 0.054, with clear surrounding margins of suboptimal values. For this optimal clustering, only 43 GP practices were unassigned to a community according to our criteria: 28 practices lay in 20 communities with only 1 to 2 practices, and a further 15 practices were spatial outliers. Collectively, these 43 practices were the modal provider of primary care for 114 LSOAs with a total population of 187 101 (2.3%). Our optimal partition consisted of 165 PCNs grouping 1291 practices, which cover 4721 LSOAs in London and 97.7% of the estimated London population; a map of this optimal configuration displays GP practices superimposed on the LSOAs assigned to each PCN. The PCNs ranged in size from 3 to 18 practices, with a median of 8. Median list size of PCNs was 53 490 patients. Around two-thirds (67.9%) of PCNs contained practices from only one CCG, and the remaining PCNs contained practices from either two (23.0%) or three (9.1%) CCGs. Across all 4721 LSOAs, the median percentage of patients registered to GP practices located within their allocated PCN was 73.7%, ranging from a minimum of 24.8% to a maximum of 98.6%.
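Two of the simpler calculations above, the LSOA-level equivalent market size (the reciprocal of the Herfindahl-Hirschman Index of registration shares) and the modal assignment of LSOAs to PCNs, can be sketched as follows; the table and column names are hypothetical:

```python
import pandas as pd

# Toy registration table: patients per GP practice per LSOA, with each
# practice's allocated PCN attached (all names and counts hypothetical).
reg = pd.DataFrame({
    "lsoa":     ["A", "A", "A", "B", "B"],
    "pcn":      ["P1", "P1", "P2", "P2", "P3"],
    "practice": ["GP1", "GP2", "GP3", "GP4", "GP5"],
    "patients": [120, 80, 10, 90, 60],
})

# Equivalent market size per LSOA: EMS = 1 / HHI of practice shares.
ems = (reg.groupby("lsoa")["patients"]
          .apply(lambda s: 1.0 / ((s / s.sum()) ** 2).sum()))

# Modal assignment: each LSOA goes to the PCN holding most of its patients.
assignment = (reg.groupby(["lsoa", "pcn"])["patients"].sum()
                 .groupby(level="lsoa").idxmax()
                 .map(lambda idx: idx[1]))
print(ems)
print(assignment)
```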
As the health system develops to prioritise integrated and collaborative working, we need to improve our understanding of the relationships between providers and patients. With the requirement for GPs to form larger organisational units in the form of PCNs, there is a need to quantify how representative these communities are of their populations. The techniques used in our study provide an unsupervised, data-driven means of producing mutually exclusive PCNs formed by bringing together GP practices that frequently provide care to patients from the same geographical regions. In doing so, the method makes no assumptions regarding geography or organisational hierarchies and produces 'natural' communities. Using this technique, we showed that, despite having no prior geographical knowledge, 97.7% of the population of London may be assigned to a PCN of appropriate size that is spatially consistent. The factors that determine practices joining a PCN are complex and rely on the interpersonal and professional relationships between GPs as much as the shared relationships in registration of the local population. Furthermore, predefined boundaries cannot be ignored, with local authorities and CCGs taking responsibility for commissioning for a given geographical area and registered patients. However, networks must work for their population, and existing tools to guide decision-making are limited. While most practices have now joined PCNs, it is likely that boundaries will change as practices move in and out of PCNs, reflecting the complex dynamics of a system which needs to work for both the population and providers. In the context of the PCNs that are forming across London, our findings may in the future be directly compared with the actual PCNs that form. While it is hoped that the PCNs suggested by this study conform to those being formed across London, identifying discrepancies between the two configurations may offer a means of identifying PCNs which may not adequately represent local patterns of patient registration, and which therefore may benefit from reorganisation. Second, where practices struggle to align with one another to form PCNs, the findings of this study offer a tool by which to propose suitable PCN structures. Our findings also raise questions regarding the optimal size and configuration of PCNs across London. Our modelling suggests that the 30 000 to 50 000 recommended list size for PCNs may be too restrictive. The finding of a median list size for our optimal configuration of 53 490 in London, with an IQR from 38 000 to 73 000 patients, suggests that the current recommended range may not permit larger networks to form where underlying patient registration patterns would favour this. A wider range of list sizes also suggests that greater variability in PCN size should be allowed depending on local need. Similarly, while a clear majority of the PCNs we found (67.9%) are formed from practices within a single CCG, almost one-third contain practices in two or three CCGs, suggesting that there should be flexibility in conformity to CCG boundaries.
However, there may be additional problems for a PCN containing practices from more than one CCG where priorities do not align.If PCNs take on a role in population health management, and community services are required to reconfigure to match the footprints of PCNs,Population dispersion is likely to be a greater problem in a densely populated urban area such as London, where there are extensive transport links and a greater choice of providers within a shorter commuting distance. However, our estimates, which by design are modelled to represent optimal patterns of registration, are likely to overestimate coverage of the PCN compared with those PCNs that have already formed. The opportunity for patients to retain registration with a practice after moving outside of its historic catchment postcodes may reduce coverage of a PCN where a fixed relationship to a discrete geographical area is required.The emergence of digital NHS general practice, in the form of GP at Hand and its peers which allow patients access to video consultations as the first point of contact, disrupts the notion of a \u2018place-based\u2019 relationship between primary care providers and geographical communities of patients. Currently, the registered population for such services is disproportionately of working age, with 98.5% of registrants between 20 to 64 years of age, compared with a London-wide estimate of 75.3%.As such innovations in registration and GP provision mature and potentially scale across the health system, the significant differences between patients using digital primary care services and traditional general practices may undermine efforts to enforce relationships between primary care providers and discrete geographical communities of patients. This trend may signal the emergence of a differentiated system of primary care, where those with low care needs are served by essentially rootless digital primary care providers and those with higher care needs are attended to by well-integrated, accessible primary care providers with a nearby physical presence. In such cases, the creation of PCNs by providers of digital primary care may be orthogonal to the underlying ethos of vertical integration and investment in relationships with community care providers that underlies the current policy.One of the main limitations of our analysis is in use of secondary care data, rather than primary care data. While HES represented 3.4 million patients in London, this covers less than half of the estimated population of 8.8 million.A limitation with our method was that the optimal configuration was unable to fit every practice in London to a PCN, with 43 (out of 1334) unassigned as a result of our selection criteria. Fifteen practices were unassigned as spatial outliers, which is to say that their median distance to all other practices in their network was more than four times greater than the median pairwise distance between all practices in the network. These rare instances may reflect the statistical noise of the modelling technique which is agnostic to the spatial proximity of providers to one another. A further 28 practices were unassigned due to their proposed PCNs containing fewer than three practices. In these cases, allocation of unassigned practices in collaboration with practices and commissioners, to nearby larger PCNs could be an appropriate solution to ensure complete allocation of practices to PCNs. 
The finding that many unassigned practices were near the periphery of London suggests a boundary effect, where the exclusion of practices and of the population outside London may have affected the model in these regions. As health systems adapt towards closer integration across services, network analysis offers a data-driven and unbiased means of understanding the connections between PCNs and their patients. Our findings demonstrate that GP practices may be combined into communities reflecting their underlying populations in accordance with the specification of PCNs. At a time when integration of community, primary and secondary care is being prioritised, place-based primary care anchored in the local community is concurrently being challenged by the growth of online GP consultation providers, such as GP at Hand in London. Upscaling primary care into larger networks has the potential to weaken further the ties between providers and their communities. There is a pressing need to better understand how these networks will represent their geographies and patients, to identify who may gain and who may lose out, and to ensure a well-intentioned policy does not widen inequalities in health."} {"text": "This current study provides novel insights into the performance of rice genotypes under varying As stress during different growth stages for further use in ongoing breeding programs for the development of As-excluding rice varieties for As-polluted environments. Rice remains a major staple food source for the rapidly growing world population. However, regular occurrences of carcinogenic arsenic (As) minerals in waterlogged paddy topsoil pose a great threat to rice production and consumers across the globe. Although As contamination in rice has been well recognized over the past two decades, no suitable rice germplasm had been identified to exploit in adaptive breeding programs. Therefore, this current study identified suitable rice germplasm for As tolerance and exclusion based on a variety of traits and investigated the interlinkages of favorable traits during different growth stages. Fifty-three different genotypes were systematically evaluated for As tolerance and accumulation. A germination screening assay was carried out to identify the ability of individual germplasm to germinate under varying As stress. Seedling-stage screening was conducted in hydroponics under varying As stress to identify tolerant and excluder genotypes, and a field experiment was carried out to identify genotypes accumulating less As in grain. Irrespective of the rice genotype, plant health declined significantly with increasing As in the treatment. However, genotype-dependent variation in germination, tolerance, and As accumulation was observed among the genotypes. Some genotypes showed high tolerance by excluding As from the shoot system. Arsenic content in grain ranged from 0.12 to 0.48 mg kg−1 across genotypes. Rice (Oryza sativa L.)
is one of the most vital staple grains of the world, 90% of which is produced and consumed in Asian countries, and it influences the livelihoods of several billion people. In waterlogged paddy topsoil (anaerobic conditions), reduced arsenite As(III) species dominate, whereas in rainfed upland paddy soils (aerobic conditions), the less toxic pentavalent, oxidized arsenate As(V) dominates; the two species interconvert, As(III) ⇌ As(V), with As(V) dominating during drying and As(III) dominating during wetting. Arsenate As(V) is physicochemically analogous to the essential nutrient phosphorus (P) and can substitute for inorganic phosphate in a variety of biochemical processes that affect key metabolism in the cell. In contrast, arsenite As(III), which uses active uptake linked to aquaglyceroporin-mediated acquisition pathways to gain access into the plant system, has a high affinity toward sulfhydryl-containing enzymes and interferes with key enzymes in a deleterious way; As(III) constitutes the major As species loaded into rice plants. In this study, rice genotypes from the IRRI Green Super Rice (GSR) breeding program were systematically evaluated under varying As stress, aiming at understanding the genetic variation in germplasm for germination capacity, As tolerance, shoot exclusion, and low grain accumulation. We combined three different screening strategies to recognize appropriate genotypes for As tolerance and exclusion during different growth stages. We hypothesized that the selected rice genotypes would show a diverse range of morphological and As-accumulation trait variability under As-induced toxicity stress. The main objectives of this study were to (a) identify the germination ability of 53 genotypes facing varying As stress and cluster them into different As stress-tolerance groups, (b) explore the genetic potential of individual genotypes for As tolerance and exclusion at the early growth stage of rice and cluster them into shoot-exclusion and shoot-inclusion groups, and (c) identify the most significant genotypes that accumulate less As in grain and categorize them into different grain As-accumulating groups. By combining these three different screening strategies, we explored the interactions of As tolerance and exclusion traits during different growth stages. Fifty-three genotypes, involving seven IRRI-GSR advanced fixed lines and 46 different rice cultivars from the core breeding collection of the IRRI-GSR breeding program, were systematically evaluated. Sodium arsenate (Sigma-Aldrich, Singapore) was used to evaluate the germination ability of seeds. All the purified seeds of the rice genotypes were oven-dried for 5 days at 60°C to break any residual seed dormancy and were surface-sterilized with 1% sodium hypochlorite (NaOCl) for 1 min, followed by rinsing with deionized water several times. Germination was evaluated on a moist filter paper (Whatman No. 1) bed dampened with 10 ml of the respective As treatment placed on each of the 15-cm-diameter Petri dishes. Fifty sterilized healthy seeds per genotype were laid on each Petri dish and incubated at room temperature (28–32°C) for germination. Each treatment was replicated three times and the seeds were allowed to germinate for 10 complete days. During this period, the Petri dishes were moistened with the respective solutions of As when required. After 10 days of incubation, the number of seeds germinated was recorded by the emergence of the radicle and coleoptile for the respective treatments and controls.
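Germination tolerance in screens like this is typically expressed relative to the control; a minimal sketch, assuming tolerance % = (germination under treatment / germination in control) × 100 and the category thresholds reported in the classification described next (all counts hypothetical):

```python
import pandas as pd

# Hypothetical germination counts (out of 50 seeds per dish, as above).
counts = pd.DataFrame({
    "genotype":   ["G1", "G1", "G2", "G2"],
    "as_ppm":     [0, 20, 0, 20],
    "germinated": [48, 10, 46, 40],
})
counts["germination_pct"] = counts["germinated"] / 50 * 100

control = counts[counts.as_ppm == 0].set_index("genotype")["germination_pct"]
treated = counts[counts.as_ppm == 20].set_index("genotype")["germination_pct"]
tolerance_pct = treated / control * 100  # assumed relative-tolerance form

def category(t):  # thresholds as reported for the germination screen
    if t > 80: return "highly tolerant"
    if t > 50: return "moderately tolerant"
    if t >= 20: return "moderately susceptible"
    return "highly susceptible"

print(tolerance_pct.map(category))
```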
Based on the germination percentage under different As treatments, the genotypes were placed into four different categories: highly tolerant (>80% germination), moderately tolerant (>50% germination), moderately susceptible (<50% germination), and highly susceptible (<20% germination). The tolerance percentage of the genotypes was calculated with the following equation:The seed germination screening assay was carried out in the plant growth chamber facility at IRRI for evaluating the germination tolerance of genotypes under varying As treatments. Five concentrations of inorganic As treatments, 0 ppm (control), 5 ppm (low), 10 ppm (medium), 15 ppm (high), and 20 ppm (very high), were supplied in the form of sodium arsenate . The control treatment without As in the nutrient solution was maintained throughout the experiment. The plants were grown under As-toxic conditions for 18 days, with the pH adjusted to 5.4 every other day and nutrient solutions were completely renewed every seventh day. The experiment was laid out as a complete randomized design with three independent replicates, and five repeats per line in each replicate, leading to 32 hydroponic trays, each accommodating up to 100 seedlings. To compare the differences among the lines, the relative chlorophyll concentrations were measured non-destructively from the base, middle, and tip of the uppermost leaves of each plant, and the average values were expressed as SPAD units as the indicator of leaf senescence caused by toxic As treatment. The responses of the plants to the As treatments were evident at 18 days after treatment. Changes in the shoot and root length response to the As treatments were measured for each entry at 18 days after treatment. Shoot length was measured from the base of the plant to the tip of the longest leaf, whereas root length was measured from the base of the plant to the root tip. Three plants per entry per replicate were rinsed with deionized water to remove any excess nutrients sticking to the surface of the plants and then oven-dried for 3 days at 70\u00b0C to remove moisture. Before sample processing for As analysis, dry biomass was recorded.Seedling-stage As stress screening experiments were conducted in a hydroponic system in the controlled phytotron glasshouse facility at IRRI. Optimum rice-growing conditions were maintained throughout the experiment: 29/21\u00b0C (day/night) temperature, 70% relative humidity, and natural light. Seeds of the 53 genotypes were oven-dried for 5 days at 60\u00b0C to break any residual seed dormancy and incubated at 30\u00b0C for 48 h for pre-germination. One seedling per genotype was transferred per well with 1 cm diameter on a Styrofoam seedling tray with the size of 28 \u00d7 32 \u00d7 1.25 cm having 100 wells (10 \u00d7 10) with a nylon net bottom fixed in a dark plastic tray containing 8 L of full-strength Yoshida nutrient solution . Twenty-one-day-old seedlings were transplanted to the field with the experiment laid out in a randomized complete block design in two-row plots with 12 plants per row at a spacing of 20 \u00d7 20 cm. Standard agronomic practices with optimum fertilizer application, irrigation, and plant protection measures were carried out to ensure a good crop growth cycle for grain development. At maturity, eight middle plants of each genotype were harvested in bulk. 
The harvested seeds were oven-dried at 50\u00b0C for 3 complete days and de-husked brown rice was analyzed for As accumulation.Field screening for As accumulation in grain was conducted at the field station of IRRI during the 2014 dry season. IRRI is situated at latitude 14\u00b013\u2032N and longitude 121\u00b015\u2032E, and the paddy soil type is a Maahas clay loam, isohyperthermic mixed-type tropical soil with average total As content in soil ranging from 2.6 to 7.2 mg kg3), followed by 2 ml of hydrogen peroxide (H2O2), and 1 ml of deionized water was added and pre-digested overnight in the fume hood .The root and shoot samples from the hydroponic screening experiment and grain samples from the field experiment were analyzed for total As content. Dried samples were thoroughly homogenized by an ultra-centrifuge mill modified with a tungsten blade to avoid any cross-contamination. Ground 0.2\u20130.5 g of sample was added to a closed-vessel digester, with 5 ml of high-purity 69% concentrated nitric acid was carried out to observe the pattern of variation among the 53 rice genotypes, the relationship among individuals, and their characteristics. Originally, relative index values were derived by assessing the response to the control value for respectively measured traits and absolute values for As content in shoot, root, and grain. Then, the index and absolute values from different screening strategies were combined to identify the correlation of the response variable vectors and genotypes. The analyses were performed using JMP\u00ae, Version <16> design with three true treatment replicates (independent containers) and three randomized blocks within each treatment set containing all 53 genotypes. Two-way ANOVA was carried out to observe the effects of lines, treatment, and line-treatment interaction on different As stress-induced traits, and Tukey's honestly significant difference test was used for of means . For grap < 0.01) (< 5 ppm As (60.69%) < 10 ppm As (41.19%) < 15 ppm As (27.30%) < 20 ppm As (16.06%). The 53 genotypes were grouped into four clusters based on the relative tolerance percentage under varying As treatments . The meaeatments . Genotyps stress . Except \u22121), and As shoot excluders (<12 mg kg\u22121) to 27.85 mg kg\u22121 (Xing-Ying-Zhan) and root As content ranged from 119.86 mg kg\u22121 (Huang-Hua-Zhan) to 146.54 mg kg\u22121 (BR11). The overall results indicated that adequate genetic variability is present among the studied genotypes for As accumulation in young seedlings for As stress. The concentrations of As in the seedlings of the 53 genotypes are presented in Exposure to varying concentrations of As for 18 days induced a significantly negative response among the investigated 53 genotypes. Chlorophyll content, plant height, and root length showed significant variation between the control and 15 ppm As treatment. Root and shoot length and biomass accumulation were negatively influenced by As treatment. Accumulation of As in the shoot and root tissue increased significantly with the increase in As in the treatment . There wmg kg\u22121) . The 53 mg kg\u22121) . In the \u22121 also showed higher variation among the genotypes in grain As accumulation (\u22121 in Huang-Hua-Zhan (indica subspecies) from China to 0.48 mg kg\u22121 in IRAT 109 from Brazil. 
Based on As accumulation in the grain, the genotypes were classified into three groups: low As content rice (<0.2 mg kg−1), moderate As content rice (0.2–0.3 mg kg−1), and high As content rice (>0.3 mg kg−1). Genotypes from the indica subspecies and the japonica line NPT-IR68552-55-3-2 accumulated little As (<0.2 mg kg−1) in the grain, and the high-yielding mega-variety IR64 also accumulated less than 0.2 mg kg−1. Unpolished brown rice of the 53 genotypes was harvested from the field, where the soil As content was <7 mg kg−1.

The correlation coefficients among the various measured As stress-responsive traits, from germination tolerance in the early growth stage through grain As content, revealed a complex association between the studied traits. Principal component analysis (PCA) of the relative germination indices and the relative root-shoot morphological parameters of the 53 rice genotypes best described the response for identifying As-tolerant and low-accumulating genotypes: the first two principal components (PCs) accounted for 19.8% and 15% of the variation among the genotypes, respectively, or 34.8% of the total variation.

Arsenite, As(III), a neutral molecule, is the most dominant As species in flooded paddy soil. It enters plants through aquaporin channels, primarily the nodulin 26-like intrinsic proteins (NIPs). Arsenate, As(V), is the most prevalent As species in aerobic soils, and it accounts for just a small proportion of total As in flooded paddy soils. In rice, OsHAC1;1 (Loc_Os02g01220) and OsHAC1;2 (Loc_Os04g17660) are responsible for As(V) reduction to As(III), facilitating As(III) efflux into the external environment.

As(III) was used in the treatment at the germination stage for 10 days in this study; similar concentrations frequently occur in the topsoil of the rice-growing regions of Bangladesh and West Bengal, India. The results revealed the toxic effect of As: germination ability decreased significantly with increasing As concentration in the treatment [0 < 5 < 10 < 15 < 20 ppm As(III)], which is consistent with the results of previous studies, with high accumulation in the shoot at the strongest treatments.

Under hydroponic As stress, irrespective of the rice genotype, overall plant performance decreased with increasing concentration of As in the nutrient solution. indica cultivars tend to accumulate higher amounts of inorganic As in grain and shoot than japonica cultivars, and such cultivars displayed strong susceptibility in the germination and seedling-stage experiments, as they were unable to exclude As from the shoot system. Similar results were observed with the same genotypes in the field-based grain experiments; one genotype displayed moderate resistance in hydroponic screening by excluding As from the shoot, but was susceptible in germination tolerance. In our study, we explored the genotypes commonly used by breeders to develop varieties for Bangladesh and India. In developing rice-growing Asian countries, straw is either burned in the field or used as fodder for cattle, which further increases the risk of As exposure. The genotypes WTR1, GSR IR1-5-Y4-S1-Y1, OM997, and Zhong413 were tolerant of a high concentration of As and also accumulated very low concentrations of As in the grain. These identified As-tolerant genotypes would contribute greatly to the development of As-excluding varieties for contaminated ecosystems.
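The PCA step described above can be sketched as follows, assuming a hypothetical genotype-by-trait matrix of relative indices and absolute As contents. With the study's data, the first two components would be expected to explain roughly 19.8% and 15% of the variance.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: 53 genotypes; columns: relative indices and As contents (hypothetical file).
X = np.loadtxt("genotype_trait_matrix.csv", delimiter=",")

X_std = StandardScaler().fit_transform(X)   # standardize traits before PCA
pca = PCA(n_components=2).fit(X_std)
print(pca.explained_variance_ratio_)        # e.g. approx. [0.198, 0.150]
scores = pca.transform(X_std)               # genotype coordinates on PC1/PC2
```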
Recently, genotype WTR1 was nominated in Bangladesh and released as BRRI dhan69, making a significant impact on mitigating the problem of As contamination. Our study demonstrates screening strategies for recognizing suitable tolerant and As-excluding germplasm and offers great potential for use in targeted breeding programs for As-contaminated regions. Irrespective of the As concentration in the treatment, germination, overall plant health, and tolerance percentage declined drastically, signifying that the presence of As harms rice production. For future screening of rice germplasm, a treatment of 10 mg kg−1 As appears to be a suitable discriminating dose.

The original contributions presented in the study are included in the article and its supplementary material. JA, MF, and VM conceived the research. VM and JA created the rice diversity panel. VM conducted the As screening experiment and collected the data. VM, FZ, AW, AP, L-BW, and JM worked on the data analysis and drafting of the manuscript. VM, JA, MF, JM, and ZL reviewed the manuscript. All authors read and approved the manuscript.

The authors would like to thank and acknowledge the Bill & Melinda Gates Foundation for providing a research grant to ZL for the Green Super Rice project under ID OPP1130530. The work reported in this manuscript was part of VM's Ph.D. thesis, funded through the Green Super Rice project at IRRI (https://nbn-resolving.org/urn:nbn:de:hbz:5n-55609). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Electronic Health Records (EHRs) are now widely used to create a single, shared, and reliable source of patient data throughout healthcare organizations. However, health professionals continue to experience mismatches between their working practices and what the EHR allows or directs them to do. To overcome such mismatches, health professionals adopt working practices other than those imposed by the EHR, known as workarounds. Our study aims to inductively develop a typology of enduring EHR workarounds and explore their consequences by answering the question: what types of EHR workarounds persist, and what are the user-perceived consequences? This single case study was conducted within the Internal Medicine department of a Dutch hospital that had implemented an organization-wide, commercial EHR system over two years earlier. Data were collected through observations of six EHR users and 17 semi-structured interviews with physicians, nurses, administrators, and EHR support staff members; documents were analysed to contextualize these data. Through a qualitative analysis, 11 workarounds were identified, predominantly performed by physicians. These workarounds are categorized into three types, performed either while working with the system (in-system workflow sequence workarounds and in-system data entry workarounds) or by bypassing the system (out-system workarounds).
While these workarounds seem to offer short-term benefits for the performer, they often create threats for the user, the patient, the overall healthcare organization, and the system.This study increases our understanding of the enduring phenomenon of working around Electronic Health Records by presenting a typology of those workarounds that persist after adoption and by reflecting on the user-perceived risks and benefits. The typology helps EHR users and their managers to identify enduring types of workarounds and differentiate between the harmful and less harmful ones. This distinction can inform their decisions to discourage or obviate the need for certain workarounds, while legitimating others.The online version contains supplementary material available at 10.1186/s12911-021-01548-0. Electronic Health Records (EHRs) have been widely implemented because of their promise of improved patient service, quality, healthcare safety, and reduced costs . EHRs ar\u2018workarounds are behaviors that may differ from organizationally prescribed or intended procedures. They circumvent or temporarily \u2018fix\u2019 an evident or perceived workflow hindrance in order to meet a goal or to achieve it more readily\u2019 then I can copy-paste that to the in-basket and have the physician prescribe it. [\u2026] Otherwise, I have to figure it all out myself, Google it and so on.\u201d (I_MA3).The expected benefit of this workaround is for the user. First, an improved workflow and perceived time savings are achieved when copy-pasting instead of discretely registering the data. Second, a fuller overview is created. As one medical administrator explained: \u201cconvenience and support [the EHR] offers when registering discretely\u201d (I_PH5), and the hindrance it causes when conducting research with the data present in the EHR. As one physician explains: \u201cyou can create all kinds of reports if the data are registered discretely; you cannot do that if it is only plain text\u201d (I_PH4).The likely risks of copy-pasting affect the user, the system, and the hospital in terms of extra work due to the loss of system support as the performers of this workaround miss out on the \u201cThey are using something called a specialty comment to just to write when they should be discussing the patient.\u201d (I_SS2).The second in-system data entry workaround is using separate text fields in addition to the required data fields in the EHR. This workaround was not seen during observations but mentioned during interviews with three members of the support team, three physicians, and two nurses. A member of the EHR support team commented that she observed this workaround put into practice by physicians during multidisciplinary meetings: \u201cthe only option that is available now\u201d (I_SS2). One nurse explained that if she wants to report a patient\u2019s status, she \u201ccan only enter a fixed number of words or characters\u201d (I_NU2). The lack of space to make comments creates her need \u201cto add a separate note\u201d (I_NU2). Another reason for this workaround is the lack of an overview of the relevant data for a particular physician or specialism: \u201c[\u2026] a problem list is a problem list, but I also see about fifteen other problems which I might not find interesting at all\u201d (I_PH5). In this situation, the separate note field is used to restructure issues in the problem list. A final reason is the lack of knowledge about the system\u2019s functionality and which is the right button to click. 
As an alternative, \u201cthey just put it in a note\u201d (I_SS2).In some cases, the underlying reason for adding separate note fields is the lack of functionality in the EHR, and it is \u201cgood and concise summary of all the relevant information\u201d (I_PH5). The risks associated with this workaround affect the hospital and the user. Registering data as a separate piece of plain text makes it harder to generate data from the EHR for research purposes. A second risk is that the use of separate note fields leads to a potentially incomplete overview if applied extensively: \u201cIt is a like a toilet roll from which all the sheets have been torn, you have to read through 100 notes? 200 notes? The slightest inconvenience experienced by a patient will have a note allocated to it\u201d (I_PH3).A possible benefit of this workaround is that, through these separate notes, healthcare providers create a better overview for themselves in the form of a \u201cI use [the EHR] in the way I want to use it, within certain boundaries. I take a certain degree of freedom. [\u2026] There is always more than one way to skin a cat, and I know we have to work with [the EHR]. But, inside [the EHR], I believe everyone should be able to take their own paths, within the frameworks we set along the way\u201d (I_PH1).We found the third in-system data entry workaround was not entering certain patient-specific information\u2014deliberately leaving data fields empty. This workaround was observed with one physician and mentioned during interviews with five other physicians. A reason for this workaround is that users feel that the EHR restricts their autonomy and directs them to enter data they do not wish to enter: \u201cYesterday I registered a patient that was already known to another specialism. This took me two hours. Two hours to order all this data in the EHR. Imagine that. Normally I would do that too, but I just wrote it down, and it cost me 10\u00a0min\u201d (I_PH3). As is clear from the quote above, not filling in certain data fields in the EHR is expected to benefit the user by saving the physicians time to spend on other activities. On the other hand, there are associated risks for the system and the patient. One physician highlighted that the EHR \u201conly works well if we all use it correctly\u201d (I_PH5), meaning that this workaround damages the information quality that the EHR can provide. Moreover, not registering data potentially leads to cascading errors in medicine prescriptions: \u201cCertain disciplines order an antibiotic treatment and do not include an end date, so it just stays in the system. The patient probably stopped taking the antibiotics a while ago, but the system says otherwise. This, in turn, has consequences for my prescriptions\u201d (I_PH1).Another related reason is that entering data is time-consuming. A physician mentioned: \u201cI find this very regretful, but I know these things happen\u201d (I_SS4). Further, a physician explained a situation in which this would occur: \u201cThis is often over the phone: the physician is busy elsewhere. I can imagine that when he picks up the phone and is asked \u2018May this patient receive that medicine\u2019, he might say: Just fix it, I\u2019m busy\u201d (I_PH5).The fourth in-system data entry workaround identified is sharing login details with other employees, which is a clear violation of hospital policies. This workaround was not observed, and most interviewees denied sharing login details with colleagues. 
However, five members of the support team recognized this took place: \u201cthere is no computer to hand\u201d (I_NU2). All the interviewees were doubtful that this workaround has overall benefits. One physician felt \u201cthat it saves you a certain amount of work, but that does not stack up against the risks\u201d (I_PH5). While positive effects were hardly mentioned, the interviewees did identify the risks of sharing login details with co-workers. One member of the support team explained the implications for patient safety: \u201cAssume that a person with insufficient knowledge orders or gives the wrong medication. Imagine a patient being allergic to a type of medication but receives it anyway. In the worst case, they die. This can happen if you start acting in someone else\u2019s name. It is very dangerous\u201d (I_SS4).Most interviewees explained that sharing login details would be done either because of a lack of time or a lack of physical facilities\u2014when \u201cyou never know if these details will be used again without your knowledge\u201d (I_PH1).A risk for the user is that once login details are shared, there is no control over potential abuse of these details, \u201cI received the exact same pop-up as four weeks ago, to pay extra attention to the patient\u2019s medication. Now I can\u2019t continue, I have to enter new data into the system. Otherwise I cannot prescribe medicines. So what happens is, you are just going to make it up, you know, you just want to get on\u201d (I_PH1). The nurse in mentioning this workaround recognized it in the following situation: \u201ceven though I emptied the catheter bag at 11\u00a0pm, I enter this as emptied at 9.59\u00a0pm\u201d (I_NU1).The final in-system data entry workaround is entering data that does not represent reality. This workaround was not observed, but described by a physician and a nurse. As an illustration, the physician explained: \u201c[The EHR] should not rule over us, it should help us. I mean, we need to use [the EHR], but it should not be the case that it dictates how I should do my job\u201d (I_PH1). In the second, the nurse explained she has no other option than to use this workaround as in the EHR \u201ca day needs to be finalized at 10\u00a0pm [\u2026] That\u2019s just how it is designed\u201d (I_NU1).In the first instance, this workaround was used deliberately to bypass the restricting power of the system over the work process to avoid delays in ordering medicines. As a physician explained: \u201cother alarm bells starting to ring\u201d (I_NU1). The nurse did not expect this workaround to bring about dangerous situations since \u201c30\u00a0min or an hour\u2019s difference is negligible\u201d (I_NU1). The physician said that, as with ignoring pop-ups, entering incorrect data may lead to a false sense of safety, negatively affecting the patient.In the nurse\u2019s case, there is an expected benefit of registering incorrect data for the users: it avoids a distorted image of the previous day improving workflow the next day. All activities scheduled for the previous day need to be registered by 10\u00a0pm to avoid In addition to the several workarounds that are used within the EHR system described above, our data also demonstrate that people bypass the EHR system by using other systems or relying on other routines. 
These out-system workarounds include writing down information on paper, using one or more shadow systems, giving verbal consent for dispensing medication, and detaching a scanner from the COW (EHR- Computer-on-Wheels) to take it into a patient\u2019s room.\u201cWhen I do my round of patient visits, I always have a piece of paper with me. [\u2026] I would rather write some keywords on paper, sit behind my desk, think about it, and register the information in peace.\u201d (I_PH1). We also observed that nurses write patient information on their hands with the intention to enter it into the EHR at a later stage.First, many interviewees recognized the use of paper for making notes. This workaround was observed once and mentioned twelve times during interviews with nurses, physicians, members of the support team and medical administrators. In effect, users rely on paper in combination with the EHR: \u201cThe patient might be very nervous, in such cases, you want to make eye contact. An option would be to bring your computer, but then you would be talking to the screen instead of to the patient. [\u2026] You look less at the patient\u2019s face, so you don\u2019t see the impact of what you are saying. [\u2026] And the patient could feel less heard\u201d (I_NU1).As described above, the physician feels that first writing down information on paper and later registering it in the EHR helps to process the information, suggesting a lack of trust in their own abilities to directly register the data correctly. Another motive for using paper is to maintain eye contact with patients. As one nurse explained: \u201cget lost or lie around\u201d (I_NU3). Further, patient safety is jeopardized as there is a possibility \u201cto overlook items because they are not on the work lists on your computer, especially if you don\u2019t know that patient well\u201d (I_NU2).The expected benefits of paper are for the user and for the patient. First, the user saves time that can be used to process and reconsider the information provided by the patient. Second, patient contact is preserved. Risks for the user are a loss of overview, extra work and a potential loss of data as paper might The second out-system workaround identified is the use of a system other than the EHR. Two physicians, four members of the support team, and one medical administrator admitted to using Microsoft Word and Microsoft Excel either as a substitute for or as a complement to the EHR.\u201cEvery day there are three, four, five notes added for each patient. These are all separate, so I cannot just scroll through them. So, open, close, open, close. Then I also have to remember each note\u2019s content. Therefore, I open a Word document next to it, to create my own file\u201d (I_PH3). Another reason for using shadow systems is a lack of functionality in the EHR: \u201c[The EHR] does not support planning intakes. [The planners] keep an Excel file with the entire planning, while you really just want to be able to do this in the EHR\u201d (I_SS5). A final reason for this workaround is that the healthcare provider prefers a different layout than that proposed by the EHR: \u201cPeople make the entire letter layout in Word because they find letters generated by the EHR ugly\u201d (I_SS5).A reason given for using these shadow systems was the lack of overview presented by the EHR: \u201cimprovements in the system can be made\u201d (I_SS5). 
On the downside, shadow systems have the expected risk for users of creating extra work when physicians have to enter information in both the EHR and their shadow system. Also, there is a risk of forgetting to register data in the EHR alongside the shadow system, resulting in the EHR system not being up-to-date.An expected benefit for the users when using shadow systems is that better overviews will be created. Also, by keeping an Excel file, the intake planners can schedule patient intakes a few months ahead, which they could not do if they only use the EHR system, thereby improving their workflow. In terms of the system, by acknowledging the deficiencies of the EHR, \u201cWhat I sometimes try, and it depends a bit on the nurse to be honest, is that I say: here, you have my consent to carry it out. I will register the order later, or e-mail me at the moment\u00a0that I have to order it. That's how I do it\u201d (I_PH2). The physician continued by outlining situations in which this would happen, which are often due to inconveniences: \u201cImagine standing in the corridor, and you receive such a call, well, then you don\u2019t have [the EHR] at hand. Or you\u2019re in the middle of an outpatient visit and you are phoned. Right, it is always unexpected\u201d (I_PH2). That is, giving verbal consent occurs because of a lack of time or physical facilities. One of the nurses offered a third reason for this workaround: \u201cno time, no motivation. [\u2026] If they [physicians] are at home and don\u2019t feel like starting up the system they tend to give consent verbally\u201d (I_NU3). As such, she is implying that this workaround is related to a physician\u2019s willingness to order through the EHR directly.The third out-system workaround identified is a physician giving verbal consent to a nurse for dispensing medication, only to also order this medication in the EHR sometime later. This workaround was not observed but mentioned during interviews with two nurses and one physician: \u201cI think patient care is paramount, before the administrative part\u201d (I_NU3). There is a possible risk of this workaround for the system. If one forgets to record the verbal request for medication in the EHR later, then \u201caccording to the system, this patient did not receive the medication\u201d (I_NU3), leaving the system not up-to-date.Expected benefits of this workaround for the user are an improved workflow and time savings. For the patients, better care is expected to follow a verbal consent: The final out-system workaround entails detaching a scanner from its \u2018computer-on-wheels\u2019 (COW) to scan the wristbands of patients and the labels on infusion bags. The COW is a fairly large input/output device that forms part of the EHR.\u201cwake up patients by the COW\u2019s noise\u201d (I_NU3).Nurses are supposed to bring the COW into the patient room and scan each infusion bag for every patient separately while monitoring the COW\u2019s screen. This workaround was not observed but admitted by a nurse during an interview, who performs this workaround to not \u201cscanning the wrong infusion bag, or the wrong patient\u201d (I_NU3).The expected benefit of this workaround is that patients are not disturbed by the COW during the night. However, taking the scanner into the patient room and away from the COW, might jeopardize patient safety as the nurse will be unable to notice any errors that might appear on the screen when accidentally Overall, we were able to identify 11 workarounds. 
Two of them were in-system workflow sequence workarounds, five in-system data entry workarounds and four out-system workarounds. Each of the outlined workarounds were used by one or more occupational groups within the hospital. The expected consequences for each of these workarounds entailed benefits as well as risks and affected the user, the patient, the hospital, or the system. Table This study\u2019s aim was to develop a typology of EHR workarounds and explore their user-perceived consequences by answering the question: what types of EHR workarounds persist and what are the user-perceived consequences? Thus, we focused on enduring rather than on temporary workarounds. These workarounds persisted for two main reasons. First, many workarounds in this hospital signal that individual physicians and medical departments have the professional autonomy to deviate from system-enforced and prescribed work processes. Doctors are ultimately accountable for patients and can, for example, prescribe medicines by telephone and have them included in the EHR system afterwards if this is in the direct medical interest of the patient. Second, EHR systems can structurally hinder desired and established work processes, requiring adaptations that the EHR supplier does not support. In the latter case, health professionals and their departments can resolve the problem by explicitly accepting and institutionalizing a workaround. Sometimes it is also necessary to first treat a patient and update the system for completeness, reimbursement, or research. From this perspective, while a workaround may be highly legitimate, especially in acute and emergency situations, it requires post-hoc data registration and processing . Below win-system workflow sequence workarounds are created in response to the time consuming and impositional characteristics of the EHR. Expected consequences of this type of workaround are benefits for the user , risks for the patient and for the hospital (incorrect billing).We have identified three types of workarounds, two in-system and one out-system workarounds. The in-system data entry workarounds are created in response to a lack of time, fear of losing data, lack of knowledge, lack of functionality; lack of overview and the impositional features. The expected benefits for the user following this type of workaround are an improved workflow, time savings and a better overview. However, these workarounds also have foreseen risks: (1) for the user, such as additional work, loss of overview, excessive details, loss of potential system support; (2) for the patient, such as jeopardizing safety, false sense of safety and cascading errors; and (3) for the hospital, such as hindering research.Second, out-system workarounds, are responses to a lack of trust in one\u2019s own abilities, reduced patient contact, lack of time, and limited motivation. Out-system workarounds have expected benefits for users , for patients , and for the system (improvement of the system). However, out-system workarounds also carry possible risks for the users, such extra work, loss of overview and loss of data, for the patient, such as jeopardizing safety and cascading errors, and for the system, such as the system not being up-to-date.The third type, First, the data show that users expect each type to have consequences for both users and patients. In terms of the potential benefits, users recognize six different positive consequences. An improved workflow was most often mentioned. 
This benefit is expected to follow from 9 of the 11 workarounds, mainly the in-system workflow sequence and in-system data entry workarounds. As such, most of the reported workarounds are seen to contribute to an easier and more fluent workflow for users. Turning to risks, the study's participants mentioned 11 different risks, which were more evenly distributed across the workaround types. Jeopardizing patient safety was seen as the most common risk involved. Surprisingly, the interviewed users did not acknowledge any consequences of out-system workarounds for the hospital, although they showed themselves aware of the adverse effects on the system's integrity. Likewise, they did not think in-system workflow workarounds would affect the system, but did see risks for the hospital. Consequences for themselves and for patients may be more concrete or meaningful to them than those affecting the hospital or the system. For persisting workarounds, it will be important to also systematically weigh the hospital- and system-related benefits and risks; more research on user attributions regarding their workarounds is therefore called for.

Second, this study shows that users do not perceive their workarounds as producing only benefits or only risks. Indeed, this study reveals how seven workarounds are expected to yield both benefits and risks for the same stakeholder, mainly the users. Other workarounds are perceived to have both beneficial and risky implications, but for different stakeholders. Importantly, the workaround types that persist remain controversial overall: while users expect to benefit from all workaround types, each type is also expected to create risks, especially for patients in the longer term and for research purposes.

Only 2 of the 11 identified workarounds were not created by physicians: bringing only a scanner into the patient room and sharing login details, both instigated by nurses and support staff. Since scanning medication, blood bags, and infusion bags is part of the nurses' tasks, this is unsurprising. Physicians, by contrast, have been described as "masters of workarounds". Since working around the system would have to be deliberate, physicians might be reluctant to acknowledge it openly; a plausible explanation may also lie in age, which has been shown to play a key role in intentions regarding EHR use.

There are some limitations that may have affected this study's results. First, while a clear set of workarounds has been identified, some may have been overlooked: it is quite possible that not all workarounds were admitted or even recognized by the interviewees. As one physician commented: "I think a lot of people conduct workarounds and just think it is part of the job" (I_PH4). This may have limited the quantity and the range of workarounds presented in this paper. However, the proposed three main types seem sufficiently robust for transfer to other contexts. Second, this study was conducted at the Internal Medicine and supporting departments of a large hospital. Healthcare professionals in other departments or hospitals may create other workarounds for different reasons. Further, teaching hospitals, such as this one, tend to have more elaborate EHRs, which could also affect the transferability of the findings.

Considering the types of workarounds, the results record only a few in-system workflow sequence workarounds compared with in-system data entry and out-system workarounds. As such, the workarounds identified in this study are not predominantly responses to perceived misalignments in the workflow.
The users primarily created workarounds to deal with data registration rather than because the system was unsupportive of their workflow. This indicates that healthcare professionals deliberately work around EHR systems in order to avoid the extra administrative tasks that come with such a system, or as a form of resistance to information technology in general. This could be a relevant area for future research. Further, given the exploratory method used in this study, future research could focus on different medical specialties or on healthcare organizations other than hospitals. Finally, given the possibility that users did not voice all the workarounds they enact, we would suggest that future research on EHR workarounds employ direct and preferably relatively unobtrusive observations when examining the creation and application of workarounds, e.g. through participatory observation.

This study has increased our understanding of the persistence of working around Electronic Health Records through a typology of enduring workarounds coupled with their user-perceived risks and benefits. Our typology can promote awareness among EHR users and hospital managers of the different types of workarounds and enable them to distinguish harmful from less harmful workarounds. This may support them in their decisions to prohibit, discourage, or obviate the need for certain workarounds, while encouraging and possibly institutionalizing others.

Additional File 1: Observation scheme. Additional File 2: Interview protocol. Additional File 3: Codebook.

Modern machine learning (ML) technologies have great promise for automating diverse clinical and research workflows; however, training them requires extensive hand-labelled datasets. Disambiguating abbreviations is important for automated clinical note processing; however, broad deployment of ML for this task is restricted by the scarcity and imbalance of labeled training data. In this work we present a method that improves a model's ability to generalize through novel data augmentation techniques that utilize information from biomedical ontologies, in the form of related medical concepts, as well as global context information within the medical note. We train our model on a public dataset (MIMIC III) and test its performance on automatically generated and hand-labelled datasets from different sources. Together, these techniques boost the accuracy of abbreviation disambiguation by up to 17% on hand-labeled data, without sacrificing performance on a held-out test set from MIMIC III. Disambiguating abbreviations is important for automated clinical note processing; however, deploying machine learning for this task is restricted by a lack of good training data. Here, the authors show novel data augmentation methods that use biomedical ontologies to improve abbreviation disambiguation in many datasets.

Semi-supervised approaches have also been explored: Pakhomov et al. improved the contextual representation of senses in clinical notes by augmenting them with text from the Web and biomedical abstracts, but were able to validate their methods on only eight abbreviations2.

Health care practitioners typically abbreviate complex medical terminology when preparing clinical records, saving the time of writing out long terms or phrases while keeping the text clear to an experienced professional in the context.
Correctly disambiguating medical abbreviations is important to build comprehensive patient profiles, link clinical notes to ontological concepts, and allow easier interpretation of the unstructured text by practitioners from other disciplines. Expanding abbreviated terms into their long forms is nontrivial, since abbreviations can have many expansions. For example, "ra" can mean right atrium, rheumatoid arthritis, or room air, depending on both its local (adjoining words) and global (type of note and other information in it) context. While disambiguating abbreviations is typically simple for an expert in the field, it is a challenging task for automated processing, which has been addressed by a number of methods going back at least 20 years. These methods largely rely on supervised algorithms such as Naive Bayes classifiers trained on co-occurrence counts of senses with automatically tagged medical concepts in biomedical abstracts6. More recently, abbreviation disambiguation models have been fine-tuned using contextualized embeddings generated from BERT and ELMo model derivatives8. However, the development and deployment of methods for automated abbreviation disambiguation are limited by the availability of appropriate training data. Creating hand-labeled medical abbreviation datasets to train and test ML models is costly and difficult, and to the best of our knowledge, the only such publicly available dataset with training data and labels is the Clinical Abbreviation Sense Inventory (CASI)9, which contains just 75 abbreviations. The sparsity of these datasets makes methods built on them vulnerable to overfitting and inapplicable to abbreviations not present in the training data. This is evident in studies where training and testing models on different corpora can result in performance drops of 20–40%6. Moreover, the same studies typically disambiguate 50–2000 abbreviations, compared to the >80,000 medical abbreviations in AllAcronyms, a crowd-sourced database of abbreviations and their possible expansions10.

Modern methods that disambiguate abbreviations rely on the local context of the abbreviation to discern its meaning. A number of supervised machine learning (ML) models have been built for abbreviation disambiguation in medical notes, including ones based on support vector machines (SVM), Naive Bayes classifiers, and neural networks. Finley et al.11 utilized reverse substitution (RS) to auto-generate training data by replacing expansions with their corresponding abbreviations. For example, the phrase "Patient was administered intravenous fluid" in the training data was transformed to "Patient was administered ivf", and the label for this instance of the abbreviation "ivf" was "intravenous fluid". RS, however, creates imbalanced training sets, because the distributions of terms in their abbreviated and long forms are often different. Some phrases, due to their obvious meaning or because they are too long, are rarely written out fully; for example, milligrams (mg) next to a medication dosage, or "in vitro fertilization". Although additional work has improved RS by hand-labeling specific instances6, none of the existing methods for abbreviation disambiguation scale to the tens of thousands of medical abbreviations listed in resources such as AllAcronyms.
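A minimal sketch of reverse substitution as described above; the sense inventory, names, and example sentence here are illustrative only, not taken from the paper's actual pipeline.

```python
import re

# Toy sense inventory: abbreviation -> possible long forms.
SENSES = {"ivf": ["intravenous fluid", "in vitro fertilization"]}

def reverse_substitute(sentence: str):
    """Yield (auto-labelled sentence, abbreviation, expansion) training samples."""
    for abbrev, expansions in SENSES.items():
        for expansion in expansions:
            pattern = re.compile(re.escape(expansion), re.IGNORECASE)
            if pattern.search(sentence):
                yield pattern.sub(abbrev, sentence), abbrev, expansion

for sample in reverse_substitute("Patient was administered intravenous fluid."):
    print(sample)  # ('Patient was administered ivf.', 'ivf', 'intravenous fluid')
```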
Finley et al.12 and Kirchoff and Turner13 demonstrated that document contexts are useful in medical abbreviation disambiguation tasks. A study by Li et al.14 represented acronyms in scientific abstracts using the embeddings of words with the highest term frequency–inverse document frequency (TF-IDF) weights within a collection of documents. This was motivated by the idea that acronym expansions are related to the topic of the abstract and that topics can be described by the words with the highest TF-IDF weights.

An additional problem with medical abbreviation disambiguation is that the local context of a word is not always sufficient to disambiguate its meaning. For example, "rt" could represent "radiation therapy" or "respiratory therapy", and the phrase "the patient underwent rt" cannot be disambiguated without further information. Huang et al. showed that words can be better represented by jointly considering their local and global contexts.

In this work, we tackle the problem of disambiguating medical abbreviations in the absence of training data for any specific abbreviation, thus dramatically increasing the ability of such models to generalize to new texts. We took the following three-pronged approach. First, we used information from related medical concepts to create more balanced and representative examples of training data for RS approaches; we did this by sampling sentences of related concepts in the immediate vector space and adding them to our training cohort, which is especially beneficial for medical concepts that are rare or not written out in the training text. Second, we leveraged structural relationships in biomedical ontologies such as the Unified Medical Language System (UMLS) to pre-train our models15 by constraining medical concepts to be in the same vector space as their neighbors. Third, we defined a simple global context that combines medical knowledge from the entire note and used it in conjunction with the local context of an abbreviation to further improve the accuracy of abbreviation disambiguation.

Using these three techniques, we achieve an overall 17% improvement on CASI. Using automatically generated testing samples from i2b2, a collection of patient discharge summaries16, we show a 2% accuracy improvement on i2b2, and over a 7% increase on abbreviations with little training data. Finally, we recruited medical professionals and students to hand-label abbreviations in i2b2; tested on these abbreviations, our model shows a 16% improvement compared to the baseline.

Our pipeline first learns word embeddings from clinical text and maps medical concepts in UMLS into the resulting vector space to generate a word embedding for every medical concept17. Then, for a given abbreviation, we augment the training samples for each expansion with sentences containing closely related medical concepts, determined using embedding distance, and train a model to perform the classification task of predicting the correct expansion for an abbreviation given its local context (the neighboring words) and global context. Further details on each step are provided in the "Methods" section.

To evaluate the contribution of each component of our model, we compared its performance to models trained without the critical sub-components. The first model (Control) uses training samples acquired using RS without any alterations. The second is similar to the first, but samples training sentences with replacement such that each expansion has an equivalent number of training samples. The third model (Relatives) incorporates our novel data sampling technique by including relatives of expansions in the biomedical ontology (UMLS) in the training set; here, too, we sample concepts with replacement so that all expansions have an equivalent number of training samples.
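The Relatives augmentation can be sketched roughly as below, assuming hypothetical lookup tables mapping each concept to its embedding vector and to its RS-tagged sentences. The real pipeline additionally applies a temperature-controlled sampling distribution (see "Methods"), which is omitted here.

```python
import numpy as np

def nearest_concepts(expansion, concept_vecs, k=10):
    """Return the k concepts closest to an expansion in embedding space."""
    target = concept_vecs[expansion]
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = [(cosine(target, v), c) for c, v in concept_vecs.items() if c != expansion]
    return [c for _, c in sorted(scored, reverse=True)[:k]]

def augment_samples(expansion, concept_vecs, concept_sentences, n_needed, seed=0):
    """Borrow sentences from related concepts to balance a rare expansion."""
    pool = []
    for concept in nearest_concepts(expansion, concept_vecs):
        pool.extend(concept_sentences.get(concept, []))
    if not pool:
        return []
    rng = np.random.default_rng(seed)
    return list(rng.choice(pool, size=n_needed, replace=True))
```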
To evaluate whether using the structure of the ontology can improve the results, we also initialized a model with weights learned from the hierarchical pre-training task. We further trained the models using either only the local neighborhood of the abbreviation (the default) or additionally incorporating global context information for each sample. We used bootstrapping to obtain the mean for each abbreviation by resampling our predicted and true values 999 times.

We consider two forms of accuracy. Micro accuracy is the total number of abbreviations correctly disambiguated divided by the total number of samples in the test set, across all abbreviations with two or more possible expansions. Macro accuracy is the average of the individual abbreviation accuracies and gives a better reflection of performance on imbalanced datasets. A Wilcoxon signed-rank test was used to compare the macro accuracy results of different models (micro accuracy is a point estimate).

We evaluated our model on four datasets (see "Methods" for further detail): (1) a held-out test set consisting of RS samples of abbreviation expansions from MIMIC III (20% of the dataset); (2) an orthogonal dataset of 65 abbreviations from CASI with gold-standard annotations; (3) 1116 abbreviations from i2b2, generated by finding sentences with expansions from AllAcronyms using RS; and (4) 24 abbreviations from i2b2 hand-labeled by medical students.

Our data augmentation produced a significant (p = 1.2e−03) increase in accuracy on CASI compared to the control. Incorporating global context increased this value to 15% (p = 5.3e−05), and hierarchical pre-training improved it by another 2% (p = 1.4e−05). This demonstrates that the global context in which related terms appear and the hierarchical information both aid disambiguation. The p-values and performance differences between all models are displayed in Fig. 1. While the main goal of this paper is to evaluate the data augmentation and the use of ontological information for pre-training, which can be applied to any method, as an additional baseline we downloaded and installed the codebase of Finley et al.11 (https://github.com/gpfinley/towards_comprehensive) and utilized it on the CASI dataset; we found that our baseline model ("Control") achieves a 3% improvement over it.

The p-values and performance differences of the various model iterations were also measured on the larger test set of 1116 abbreviations from another orthogonal dataset, i2b2, with labels generated using RS. While the full model outperformed the control, the total performance gain was more modest (2%). The higher overall performance and smaller improvement indicate that i2b2 more closely resembles MIMIC III with respect to the frequency of different disambiguations. For example, in the case of "ivf", there are significantly fewer instances of the fully spelled-out "in vitro fertilization" compared to "intravenous fluids" in both MIMIC III (zero versus 2503) and i2b2 (2 versus 49). At the same time, "in vitro fertilization" is the more common expansion in CASI (294 versus 181). This could indicate either a difference between the datasets or human behavior: the RS method relies on the long form of an abbreviation being written out fully, and this may be less likely for abbreviations that are either clearer in context, or longer and hence rarely written out.
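For clarity, the two accuracy measures defined earlier in this section can be computed as in the following sketch, where the (hypothetical) `results` mapping collects prediction/label pairs per abbreviation.

```python
def micro_macro_accuracy(results):
    """`results` maps abbreviation -> list of (predicted, true) pairs."""
    correct = total = 0
    per_abbrev = []
    for pairs in results.values():
        hits = sum(pred == true for pred, true in pairs)
        correct += hits
        total += len(pairs)
        per_abbrev.append(hits / len(pairs))
    micro = correct / total                    # pooled over all test samples
    macro = sum(per_abbrev) / len(per_abbrev)  # mean of per-abbreviation accuracies
    return micro, macro
```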
Breaking down our model's performance on the i2b2 test set further, we see that it performed better on abbreviations with little training data; the p-values between all models are reported in the accompanying table. As an orthogonal metric, we manually labeled a dataset of 270 abbreviations from the i2b2 corpus. Of the 270, only 24 had multiple expansions, illustrating the strong bias in which abbreviations are written out fully (see "Methods").

Impressive advances have been made in a variety of natural language processing (NLP) tasks by learning context-dependent embedding representations19. One advantage of using such embeddings is that a single word can have a different embedding depending on its context. Specifically, attention-based models such as ClinicalBERT have shown dramatic improvements in clinical NLP tasks20. We fine-tuned the ClinicalBERT model for our abbreviation disambiguation task and compared it to our baseline CNN model (see "Experiments" for training specifications). We find that, in our case, using embeddings from ClinicalBERT does not offer any improvement over our baseline. We believe this is because the clinical notes in both the training and test sets lack the structure typical of BERT training corpora, such as Wikipedia, and because the average number of expansions per abbreviation is small enough that a single embedding per term is sufficient. The benefit of a simpler model such as our baseline is that BERT models are memory-intensive; because we train one model per abbreviation, this adds up quickly and becomes impractical.

To evaluate whether our data augmentation technique (Relatives) is applicable to more complex models, we applied ClinicalBERT to the CASI dataset and found a significant gain in performance. We also observed a significant performance increase on the i2b2 hand-labeled dataset. This demonstrates that our novel data augmentation technique can be useful regardless of the underlying model architecture.

We also tested the impact of expanding abbreviations in clinical notes on the prediction of required medical tests in an Emergency Department setting. We developed a model that extracts UMLS concepts from clinical notes from the Hospital for Sick Children and predicts whether the patient received a specific clinical test. Without expanding abbreviations, the model achieved an accuracy of 78.09% on an independent test set; after training the model on clinical notes with expanded abbreviations, performance increased to 78.51% (p < 0.05 using the standard bootstrap method), showing that there is value in disambiguating abbreviations in clinical notes before using them in downstream tasks, at least in this specific setting. See "Methods" for additional details on this experiment; this work will be described more fully in a separate paper.

Overall, our approach overcomes the lack of training data and achieves additional improvements by considering the relationships of the terms in the ontology, introducing a pre-training step to help embed concepts. For all samples, we are also able to generate better representations by considering the global context in which an abbreviation appears.
Because of these improvements, our overall framework demonstrates up to 17% higher accuracy of abbreviation disambiguation on auxiliary datasets.

In this work, we demonstrate a general algorithm for disambiguating medical abbreviations that scales to previously unseen medical acronyms by utilizing biomedical ontologies as prior medical knowledge. Our approach is based on ideas we introduced in an extended conference abstract.

One notable limitation of our approach is runtime. As was done in previous work6, we train one model per abbreviation; training is significantly more expensive with our data augmentation technique than for the baseline model, since we perform 25 rounds of Bayesian optimization to search for the optimal temperature. To give a concrete example, training 65 abbreviations (the size of the CASI dataset) on a single Tesla V-100 GPU takes ~25 h using our data augmentation technique but ~1 h for the baseline. Furthermore, given the variability among the various datasets used in this study, more work is required to create a unified corpus of possible abbreviations and related medical terms: while AllAcronyms gives an idea of which senses are possible, there exist senses in CASI that are not in AllAcronyms. Finally, to make the pipeline fully end-to-end, better abbreviation detection models should be developed. While some publicly available models do exist, they are trained on a small fraction of all possible medical abbreviations5.

Our approach has immediately led to better results for the abbreviation disambiguation problem and has further implications for the development of other ML-based methods. Utilizing examples of closely related concepts from an ontology has already shown improved results for named entity recognition23 and word sense disambiguation in biomedical texts24. We believe that such approaches can be useful for addressing a wide variety of biomedical problems.

We used clinical notes from MIMIC III as our training set. We collected sentences from MIMIC III containing abbreviation expansions, as well as concepts in UMLS, to augment our training set. We also used MIMIC III to pre-train word embeddings using FastText and to compute IDF weights. We augmented our training sets based on relationships between expansions and concepts defined by the UMLS Metathesaurus.

We used the medical section of AllAcronyms, a crowd-sourced database, to obtain a list of 80,000 medical abbreviations and 200,000 potential expansions. We removed abbreviations with only one disambiguation and those that do not appear in UMLS, resulting in 30,974 abbreviations.

We used the CASI dataset as an orthogonal test set to measure model generalizability. We removed expansions that are the same as the abbreviation, as well as abbreviations with a single expansion, since the disambiguation task is trivial in those cases. This left us with 65 abbreviations; on average, each abbreviation has 4 expansions and 459 test sentences.

As another test set, we used i2b2. This dataset does not have hand-labeled annotations, so we used RS to generate labels. There are 1116 abbreviations in i2b2 with more than one expansion; on average, each abbreviation has 4 expansions and 97 test sentences.

To ensure that in-distribution performance was not compromised by our augmentation techniques, and to gauge the level of overfitting, we also tested our model on a small held-out test set from MIMIC III.
We test this model on the same 1116 abbreviations as in the i2b2 test set.

We also generated a hand-labeled dataset to better reflect the frequency of abbreviations used in practice. Starting with 270 abbreviations whose expansions occurred with a similar frequency in i2b2, we sampled up to 50 sentences containing each abbreviation and developed a website that presented each sentence and its possible expansions to annotators.

For each expansion of a given abbreviation, we augmented the training samples with the ten most related medical concepts. The temperature T is a "sharpening" function25 applied when sampling these related concepts. For each abbreviation, we searched for a temperature that minimizes the loss on the MIMIC III validation set using Bayesian optimization, constraining the temperature to lie between 2^−1 and 2. We found that smaller values overfit to MIMIC III, while larger ones added too much noise. For each abbreviation, we performed 25 iterations of Bayesian optimization using the Tree-structured Parzen Estimator algorithm26 and took the model with the lowest validation loss.

We mapped an input sentence to a vector representation using a simple encoder similar to that used by Arbabi et al. The network consists of one convolution layer with a filter size of one word, followed by ELU activation27. Max-over-time pooling was used to combine the output into a single vector, and a fully connected layer with ReLU activation followed by L2 normalization maps the input x to the final encoded sentence representation v; W1 and b denote the weight matrices and bias vectors learned through training.

To create the global context vector g, we took the weighted average of the embedding vectors of each word in the document, weighted using IDF weights trained on the MIMIC III corpus:

g = ( Σ_{i≠j} idf_i · w_i ) / ( Σ_{i≠j} idf_i ), i = 1, …, d,

where j is the index of the abbreviation, i is the index of the i-th word in the document, d is the number of words in the document, idf_i is the IDF weight of the i-th word, and w_i is the embedding vector of the i-th word. We then concatenated g with the encoded sentence vector v and normalized the result to produce the final encoded sample embedding e. Our model was trained to minimize the distance between a target expansion embedding and its encoded context.

Our model represents expansion embeddings with an embedding matrix H, in which each row Hc corresponds to the embedding of one expansion c of a given abbreviation. To perform the classification task of assigning an expansion label c to an encoded input sentence e, we take the dot product of H and e and apply a softmax function to obtain p(c|e).

Ontologies are structured medical terminologies that link related concepts together. Incorporating structure in this form can entangle the embeddings of related concepts and generate more refined embedding clusters in the medical domain, and it has been shown to improve vector space representations for general language tasks28. For an abbreviation, we took all expansions and the closest medical concepts within a Euclidean distance of δ (we treated δ as a hyperparameter and found δ = 2.6 to work best). We linked these concepts using the lowest common ancestor in the UMLS hierarchy and trained a model to predict which concepts from UMLS best fit the context. This is similar to our abbreviation model; however, to take structural information into account, we first learn a raw embedding for each concept c, and Hc is then derived by taking the sum of the raw embeddings of c and its ancestors. The ancestors' embeddings project c to a global location, while the raw embedding of c learns a local location. During training, we backpropagated through both the raw and the ancestor embeddings.
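A compact numeric sketch of the two pieces of machinery just described, the IDF-weighted global context vector g and the softmax classification p(c|e) over the expansion embedding matrix H; the lookup tables and dimensions are assumptions for illustration.

```python
import numpy as np

def global_context(words, abbrev_idx, emb, idf):
    """IDF-weighted average embedding over all words i != j in the note."""
    keep = [i for i, w in enumerate(words) if i != abbrev_idx and w in emb]
    weights = np.array([idf.get(words[i], 0.0) for i in keep])
    vectors = np.array([emb[words[i]] for i in keep])
    return (vectors * weights[:, None]).sum(axis=0) / weights.sum()

def predict_expansion(H, e):
    """p(c | e): softmax over dot products with the expansion embeddings H."""
    z = H @ e
    z = z - z.max()                       # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return int(np.argmax(p)), p
```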
We linked related concepts together using a hierarchical medical ontology as a prior; the benefit of this is that concepts with insufficient training data are constrained to be close to their relatives.

We compare our embeddings to contextualized word embeddings generated from ClinicalBERT. In general, we find that ClinicalBERT performs significantly better than the original BERT model for this application. We fine-tune the ClinicalBERT model on our task by stacking two hidden layers, joined by a nonlinear activation function, which take as input the embedding from the ClinicalBERT model and output the probability of each expansion. We experimented with the number of hidden layers and found that two work best. BERT-style architectures output an embedding for each token in the input, as well as a "start of sentence" token that serves to encode the entire sentence18. We experimented with using the "start of sentence" token and the abbreviation token, and found that the "start of sentence" token performs better. We also tuned other hyperparameters such as sentence length, batch size, and learning rate.

We acquired 176,140 clinical notes from the emergency department at the Hospital for Sick Children, as well as the corresponding codes for the tests that the patients received. First, we detected abbreviations in the clinical notes using the CARD abbreviation detection model developed in ref. 17. While this model is publicly available, it was trained to detect only 500 medical abbreviations, which is a small proportion of all existing abbreviations, and it may therefore have missed abbreviations in our notes. 156,801 sentences (89%) were found to contain at least one abbreviation using the CARD model; we removed all sentences that did not contain any abbreviations (11%), since we are comparing the performance of expanded versus non-expanded sentences. We then expanded the detected abbreviations using our pipeline, extracted CUIs from both the original and the expanded notes, and encoded them using a one-hot representation. We trained a simple neural network, consisting of two fully connected layers linked by a ReLU nonlinear activation function, that took in the one-hot-encoded CUIs and predicted which of the tests the patient should have received. The model was evaluated on whether the corresponding tests were actually ordered by the physicians.

For the classification task, we built one model for each abbreviation. To train our models, we used a maximum of 1000 samples per expansion and found a context window of 3 words to work best. On average, each abbreviation had 3.46 expansions. We trained our models on 60% of the sample set, validated them on 20%, and kept 20% as a held-out test set. We trained all concept embedding models for 100 epochs with a learning rate of 0.01 and saved the epoch with the lowest validation loss. We ran the Bayesian optimization acquisition function using 15 random seeds and used the run with the median validation loss to obtain typical model performance; only 1 random seed was used for the i2b2-RS dataset due to extremely long runtimes.

We trained our model on sentences from MIMIC III. We collected sentences containing expansions from CASI and medical concepts from UMLS using RS; in total, 105,161 concepts in UMLS were found in MIMIC III.
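A hedged sketch of the downstream test-prediction network described above (two fully connected layers joined by a ReLU), written in PyTorch; the vocabulary size, hidden width, and number of tests are placeholders, not values from the paper.

```python
import torch.nn as nn

class TestPredictor(nn.Module):
    """One-hot encoded UMLS CUIs in, one logit per candidate clinical test out."""
    def __init__(self, n_cuis: int = 10000, hidden: int = 256, n_tests: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_cuis, hidden),   # first fully connected layer
            nn.ReLU(),                   # nonlinear activation between the layers
            nn.Linear(hidden, n_tests),  # second fully connected layer
        )

    def forward(self, x):
        return self.net(x)  # raw logits; apply a sigmoid/softmax as appropriate
```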
To pre-train our model using structural relationships from UMLS, we first learned a 1-D CNN encoder, similar to our abbreviation model, that predicted which medical concept is present given its context. The embeddings for each concept were calculated by summing the raw embeddings of the concept itself and its ancestors. We collected sentences for every concept in UMLS using the RS technique. To train our model, we used 1000 samples per concept and incorporated both a local context of 3 words and the global context; on average, the global context is 60 words per clinical note. We split the dataset into a training set (90%) and a validation set (10%) and used a learning rate of 0.002 and a batch size of 2048. We initialized the weights of the convolution and fully connected layers with the corresponding weights from the hierarchy model. If an expansion for a given abbreviation has a concept code in UMLS, we also initialized the expansion embedding in the abbreviation model with the corresponding embedding from the hierarchy. Further information on research design is available in the Supplementary Information.

Despite significant research efforts, treatment options for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) remain limited. This is due in part to a lack of therapeutics that increase host defense to the virus. Replication of SARS-CoV-2 in lung tissue is associated with marked infiltration of macrophages and activation of innate immune inflammatory responses that amplify tissue injury. Antagonists of the androgen (AR) and glucocorticoid (GR) receptors have shown efficacy in models of COVID-19 and in clinical studies because the cell surface proteins required for viral entry, angiotensin converting enzyme 2 (ACE2) and the transmembrane protease, serine 2 (TMPRSS2), are transcriptionally regulated by these receptors. We postulated that the GR and AR modulator, PT150, would reduce infectivity of SARS-CoV-2 and prevent inflammatory lung injury in the Syrian golden hamster model of COVID-19 by down-regulating expression of critical genes regulated through these receptors. Animals were infected intranasally with 2.5 × 10^4 TCID50 of SARS-CoV-2 and examined at 3, 5 and 7 days post-infection (DPI) for lung histopathology, viral load, and production of proteins regulating the progression of SARS-CoV-2 infection. Results indicated that oral administration of PT150 caused a dose-dependent decrease in replication of SARS-CoV-2 in lung, as well as in expression of ACE2 and TMPRSS2. Lung hypercellularity and infiltration of macrophages and CD4+ T-cells were dramatically decreased in PT150-treated animals, as were tissue damage and expression of IL-6. Molecular docking studies suggest that PT150 binds to the co-activator interface of the ligand-binding domain of both AR and GR, thereby acting as an allosteric modulator and transcriptional repressor of these receptors. Phylogenetic analysis of AR and GR revealed a high degree of sequence identity maintained across multiple species, including humans, suggesting that the mechanism of action and therapeutic efficacy observed in Syrian hamsters would likely be predictive of positive outcomes in patients. PT150 is therefore a strong candidate for further clinical development for the treatment of COVID-19 across variants of SARS-CoV-2.
Following the emergence of a number of idiopathic cases of severe pneumonia in December 2019 in Wuhan, China, deep sequencing of lower respiratory samples from these patients revealed a novel beta-coronavirus that was identified as the causative agent of COVID-19. Coronaviruses (Order: Nidovirales; Family: Coronaviridae) are enveloped, non-segmented, positive-sense RNA viruses that contain very large genomes of up to 33.5 kilobases (kb). The four genera of these viruses (Alphacoronavirus, Betacoronavirus, Gammacoronavirus, and Deltacoronavirus) share a highly conserved genome organization comprising a large replicase gene followed by structural and accessory genes. The SARS-CoV-2 genome is arranged from the 5'-leader-UTR, replicase, S (spike), E (envelope), M (membrane), and N (nucleocapsid) genes to the 3' UTR poly(A) tail. Notably, the spike protein mediates viral entry through binding to the host receptor ACE2, with proteolytic processing by TMPRSS2 required for membrane fusion.

Targeting ACE2 and TMPRSS2 has therefore emerged as an important therapeutic strategy for the treatment of COVID-19 by preventing entry of SARS-CoV-2 into cells, thereby limiting viral replication. Both ACE2 and TMPRSS2 are highly expressed in bronchiolar epithelial cells and are transcriptionally regulated by the androgen receptor (AR) through 5'-flanking promoter elements. However, complete antagonism of these receptors and their cis-acting transcription factors could also be problematic, due to excessive blockade of cortisol function. Allosteric modulators of both AR and GR that could dampen transcriptional activation through these receptors would be preferable as a means of downregulating expression of AR- and GR-regulated genes. Ligands that act as allosteric modulators of nuclear receptors tend to favor stabilization of transcriptional co-repressor proteins on chromatin, such as CoREST, HDAC2/3/4 and NCoR2, which prevent binding of co-activator proteins in response to receptor activation. Based on these considerations, we evaluated PT150 in the Syrian golden hamster model of COVID-19.

Cells used for virus work were maintained at 37°C with 5% CO2. Virus titrations from animal tissues were performed by plaque assay as described previously, and plaques were counted to determine viral titers. Hamsters were provided feed and water ad libitum prior to being moved to the BSL-3 containment facility for experimental infection. Hamsters were anesthetized by inhalation with isoflurane and then intranasally inoculated with 2.5 × 10^4 TCID50/ml equivalents of SARS-CoV-2 in sterile Dulbecco's modified Eagle's medium (DMEM). Hamsters not receiving SARS-CoV-2 were given a sham inoculation with the equivalent volume of DMEM vehicle. To assess the activity of PT150 against SARS-CoV-2, the experimental groups were as follows: control (sham inoculation + miglyol vehicle), SARS-CoV-2 + miglyol, SARS-CoV-2 + 30 mg/kg PT150, and SARS-CoV-2 + 100 mg/kg PT150. The experimental drug (PT150) was dissolved in 100% miglyol 812 and delivered by oral gavage at 8 µL/g body weight under isoflurane anesthesia. Animals were weighed daily to deliver an accurate dose of drug and were monitored for clinical severity of disease through daily health checks according to an approved clinical scoring matrix. The SARS-CoV-2 + vehicle group also received miglyol 812 by oral gavage. Animals were observed for clinical signs of disease at the time of dosing each day. Groups of animals were euthanized at 3, 5 and 7 days post-infection (DPI): eighteen hamsters were euthanized at each of 3 and 5 DPI, and on day 7 post-infection (7 DPI) the remaining 24 hamsters were euthanized.
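As a quick arithmetic check on the dosing scheme above (purely illustrative, not from the study): a fixed gavage volume of 8 µL/g equals 8 mL/kg, so the required PT150 concentration in miglyol follows directly from the target dose.

```python
# Illustrative arithmetic for the fixed-volume gavage described above.
GAVAGE_UL_PER_G = 8.0  # 8 uL per gram of body weight (numerically 8 mL/kg)

def formulation_mg_per_ml(dose_mg_per_kg: float) -> float:
    """PT150 concentration in miglyol needed to hit the dose at 8 uL/g."""
    return dose_mg_per_kg / GAVAGE_UL_PER_G  # (mg/kg) / (mL/kg) = mg/mL

def gavage_volume_ml(body_weight_g: float) -> float:
    """Daily gavage volume for a given body weight."""
    return body_weight_g * GAVAGE_UL_PER_G / 1000.0  # uL -> mL

print(formulation_mg_per_ml(30.0))   # 3.75 mg/mL for the 30 mg/kg group
print(formulation_mg_per_ml(100.0))  # 12.5 mg/mL for the 100 mg/kg group
print(gavage_volume_ml(120.0))       # 0.96 mL for a hypothetical 120 g hamster
```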
Animals were euthanized by decapitation under isoflurane anesthesia, and tissue was collected for immunohistochemistry, viral isolation, RNA analysis and histopathology. All animal protocols were approved by the Institutional Animal Care and Use Committee at Colorado State University (IACUC Protocol No. 996). Hamsters were used in compliance with the PHS Policy and Guide for the Care and Use of Laboratory Animals, and procedures were performed in accordance with National Institutes of Health guidelines. Male and female Syrian hamsters were divided equally and randomly assigned to treatment groups at 8 weeks of age. The animals were housed in the CSU animal facility and allowed access to standard pelleted feed and water.

The 2019-nCoV CDC qPCR Probe Assay, which targets regions within the nCoV nucleocapsid gene, was used and adapted here, with primers and probe (forward primer nCOV_N1, probe nCOV_N1, and reverse primer nCOV_N1) obtained from Integrated DNA Technologies. In brief, RNA was isolated from hamster lung by homogenizing tissue in TRIzol reagent using 5 mm stainless steel beads and a Qiagen TissueLyser II homogenizer under BSL-3 conditions. RNA was then extracted from individual samples using previously described methods, and RNA purity was verified. Viral RNA levels were interpolated from the standard curve (R² = 0.991) that was generated.

Lungs from 60 hamsters were extirpated en bloc and fixed whole in 10% neutral buffered formalin under BSL-3 containment for at least 72 hours before being transferred to the CSU Veterinary Diagnostic Laboratory BSL-2 necropsy area for tissue trimming and sectioning. Four transverse whole-lung sections were stained with hematoxylin and eosin (H&E). Tissue was sectioned at 5 µm thickness and mounted onto poly-ionic slides. Sections were then deparaffinized and immunostained using the Leica Bond RXm automated robotic staining system. Antigen retrieval was performed using Bond Epitope Retrieval Solution 1 for 20 minutes in conjunction with base plate heat application. Sections were then permeabilized (0.1% Triton X in 1X TBS) and blocked with 1% donkey serum. Primary antibodies were diluted to their optimized dilutions in tris-buffered saline and incubated on the tissue for 1 hour per antibody: rabbit anti-SARS nucleocapsid protein, goat anti-ionized calcium binding adaptor molecule 1 (IBA1), goat anti-angiotensin converting enzyme 2, rabbit anti-transmembrane serine protease 2, and mouse anti-interleukin 6. Sections were then stained with DAPI (ThermoFisher), mounted under glass coverslips using ProLong Gold Anti-Fade medium, and stored at 4°C until imaging.

The imaging studies described here were conducted by a single investigator. Images were captured using an automated-stage Olympus BX63 fluorescent microscope equipped with a Hamamatsu ORCA-Flash 4.0 LT CCD camera and collected using Olympus CellSens software. Quantification of protein was performed by acquiring five randomized images encompassing the pseudostratified columnar epithelium around bronchi at 400x magnification, all from different lung lobes. Regions of interest (ROI) were then drawn to enclose the epithelial layer and exclude the lumen, in order to accurately obtain average intensity measurements. The Count and Measure function of Olympus CellSens software was then used to threshold the entirety of the ROI and measure the given channel signal; positively stained cells were determined per 1 mm² area based on the previously drawn overall ROI area.
Quantification of invading inflammatory cells was performed by generating whole-lung montages, compiling 100x images acquired through automated stage coordinate mapping with an Olympus 10X air objective (0.40 N.A.). All images were obtained and analyzed under the same conditions of magnification, exposure time, lamp intensity, camera gain, and filter application. ROIs were drawn around the lung sections, and the co-localization function of Count and Measure within Olympus CellSens software was applied to the sections.

Quantification of the total affected pulmonary parenchyma, as well as counting of inflammatory cells per area (mm²), was determined in hematoxylin and eosin (H&E)-stained histological sections by digital image analysis. A digital montage was compiled at 100X magnification using an Olympus X-Apochromat 10X air objective (N.A. 0.40), consisting of approximately 1,200 individual frames per lung lobe. Affected regions of interest (ROI) were subsequently automatically identified using Olympus CellSens software by quantifying whole-lung montages scanned from each hamster for the total number of nuclei or nucleated cells (to exclude erythrocytes) stained with H&E, relative to the total area of the ROI for each lung.

Phylogenetic analysis was performed by aligning protein coding sequences across multiple species within the protein of interest. FASTA files were downloaded from the National Center for Biotechnology Information (NCBI) gene databases and input into Molecular Evolutionary Genetic Analysis (MEGA) software for alignment. Muscle alignment was performed, and the evolutionary history was inferred using Neighbor-Joining methodology.

Samples (n = 6/group) were quantified independently in technical triplicate. The resultant Cp values were used to calculate relative gene expression by use of the Pfaffl method, with a housekeeping gene serving as the internal standard reference gene; the primer pairs used in qRT-PCR are listed in the accompanying tables.
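For reference, a minimal sketch of the Pfaffl calculation just described (the Cp values and efficiencies below are invented for illustration; the study's actual primers and reference gene are listed in its tables):

```python
# Pfaffl relative expression: ratio = E_target**dCp_target / E_ref**dCp_ref,
# where dCp = Cp(control) - Cp(sample) and E is the amplification efficiency
# (E = 2.0 for perfect doubling per cycle; from a standard curve, E can be
# derived as 10**(-1/slope)).

def pfaffl_ratio(e_target, cp_target_control, cp_target_sample,
                 e_ref, cp_ref_control, cp_ref_sample):
    d_cp_target = cp_target_control - cp_target_sample
    d_cp_ref = cp_ref_control - cp_ref_sample
    return (e_target ** d_cp_target) / (e_ref ** d_cp_ref)

# Hypothetical numbers: the target gene amplifies ~3 cycles earlier in the
# treated sample, while the reference gene is essentially unchanged.
print(pfaffl_ratio(e_target=1.98, cp_target_control=27.0, cp_target_sample=24.0,
                   e_ref=2.01, cp_ref_control=20.0, cp_ref_sample=20.1))
# ~8.3-fold relative up-regulation in this hypothetical example
```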
Genes of interest were entered into the STRING protein-protein interaction database (V11.5) and visualized.

To model the temporal sequence of each cellular response in SARS-CoV-2-infected hamsters, normalized pathological overlays were generated for each parameter encompassing all time points examined. Parameters modeled included SARS-CoV-2 viral load, the number of CD4+ T-cells as a percent of lung area, the overall intensity of IL-6 expression as a percent of lung area, and the overall percent of lung area occupied by IBA1+ macrophages. These responses were normalized to control values.

All data are presented as mean +/- SEM, unless otherwise noted. Experimental values were analyzed with a ROUT (α = 0.05) test for exclusion of significant outliers. Differences between experimental groups were identified using an unpaired t-test or a one-way ANOVA with Tukey's post hoc test for multiple comparisons. Differences between two variables were identified using a two-way ANOVA followed by a Tukey post hoc multiple comparisons test. Significance is denoted as *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001. All statistical analyses were conducted using Prism.

Animals were infected with 2.5 × 10^4 TCID50 of SARS-CoV-2 (USA-WA1-2020 strain) by intranasal inoculation. Body weights were monitored daily for each animal, with a noted decline in average body weight in each experimental group that reached a maximum loss by day 5, with a total overall loss of eight percent body weight (main effects of treatment, F = 6.231, and of time post-infection, F = 16.39). Hamsters treated with PT150 at 30 and 100 mg/kg/day did not show a statistically significant difference in body weight from control hamsters at day 5 post-infection, at which the maximal extent of body weight loss is observed in Syrian hamsters infected with SARS-CoV-2 and treated with vehicle only.

Lungs were collected and examined by a veterinary pathologist blinded to the treatment groups, and representative lung sections from each experimental group are presented in the figures. Lungs were examined for the extent of immune cell infiltration at 3, 5 and 7 DPI by quantitative digital image analysis. Molecular docking studies were performed on the ligand-binding domains of the androgen receptor and the glucocorticoid receptor (PDB: 3CLD), using the Glide module within Schrödinger [9-11]. In the phylogenetic analysis, sequence identity between clustering pairs is visualized by purple boxes outlining the groupings generated.

Expression of IL-6 differed significantly by treatment (F = 14.71) and timepoint (p < 0.0001, F = 29.99), with the greatest differences evident at 3 DPI, where both PT150-treated groups differed from the SARS-CoV-2 + vehicle group, as well as from each other, indicating dose-dependent effects on the reduction of IL-6 in the bronchiolar epithelial layer of infected hamsters. IL-6 is produced by infected epithelial cells and activated macrophages and is known to be a critical mediator of lung injury in COVID-19 patients. This cytokine promotes CD4+ T-cell proliferation and further recruitment of circulating macrophages that initiate a severe innate immune response leading to broncho-interstitial pneumonia and consolidation of the lung parenchyma. Expression of IL-6 is necessary for recruitment of immune cells to the site of infection, which leads to the "cytokine storm" observed in patients that is associated with poorer outcomes.

This work was supported in part by the National Institute of Allergy and Infectious Disease (R01 AI140442, TS) and the National Science Foundation (2033260, TS). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Background: Nosocomial infections are a significant health concern. Following surgery, infections are most commonly associated with the surgical site, yet there are other potential sources of infections after surgical interventions. Identification of the source of infections can be very challenging.

Methodology: An outbreak of postoperative infections following surgery led to intensive care unit (ICU) admission of patients immediately after the surgical procedure. The blood cultures of two patients were positive for Citrobacter freundii. The only connection between all cases was the anesthesiologist. An epidemiological inquiry could not definitively identify the source of the outbreak. Therefore, we utilized an RNA sequencing technique to evaluate the microbiome of the anesthesiologist and compared the results to the bacteria cultured from the bloodstream of the two patients.

Results: The anesthesiologist's microbiome contained amplicons that were identical to those of the bacteria in the patients' bloodstream. Because Citrobacter freundii is an uncommon source of bloodstream infections and an uncommon finding in the normal human microbiome, the results establish the anesthesiologist as the source of the cluster of infections.
Conclusions: In cases of nosocomial infections, when conventional microbiological techniques do not clearly establish the source of the infection, the use of 16S rRNA sequencing should be considered.

Outbreaks of infections following surgery are caused by various sources, including healthcare personnel, environmental sources, contaminated medications, or the patients themselves. Several studies have examined various operating room parameters, including staff attire and hand hygiene. When several infectious episodes occur over a short period of time, an investigation into the source of the infections should be performed, and an epidemiological effort to identify the cause of the outbreak should be made. If a clear source is found, actions can be taken to prevent further risk. In some cases, identifying the source of the infections using conventional microbiological tools may not be sufficient, necessitating the use of more advanced molecular techniques.

In this study, when conventional microbiological methods failed, we used 16S rRNA sequencing to establish the source of infection during an outbreak following surgery. Using this technique, we found that the anesthesiologist who was involved in the care of all patients carried the organism causing the infection, Citrobacter freundii, in his microbiome. This enabled us to establish the source of the outbreak and to provide further training to the clinician to improve compliance with infection prevention guidelines.

Description of the episode

A patient who underwent dilatation and curettage (D&C) on January 16, 2020, for a missed abortion developed a clinical picture of sepsis, including fever (temperature >38°C) and hypotension. She was admitted to the intensive care unit (ICU) and treated with antibiotics for suspected sepsis due to the surgical procedure. A few hours later, another patient who had also undergone a D&C was admitted to the ICU due to sepsis. The following day, we learned that two more women who underwent the same procedure had signs of an infectious episode and were being treated at two other hospitals for a clinical picture suggestive of sepsis.

On January 17th, a male patient who had a revision of total knee replacement surgery developed a septic clinical picture shortly after the surgery, was admitted to the ICU, and required intubation and mechanical ventilation due to acute respiratory distress syndrome. On the following day, two patients who had undergone colporrhaphy the day before complained of fever, weakness, and hypotension. They were both admitted to the ICU and diagnosed with sepsis based on their clinical presentation and laboratory results (white blood cell count, C-reactive protein level, and procalcitonin levels). Details of the patients are presented in the table.

An epidemiological investigation was initiated to determine a possible connecting link between the cases. Staff members were questioned regarding personal medical issues, hand hygiene, procedures involving equipment, and medications. There was no change from usual practice. The investigation revealed that the anesthesiologist in all cases was the same physician. No other clinician was involved in the care of all patients. When questioned, he denied any current illness or unusual exposure. He had anesthetized six women on the same day as the four patients with D&C before these cases; none of them had any complaints, either immediately after the procedures or when questioned three days later by phone.
The two operating rooms in which the cases occurred are located on different floors of the hospital. The sterile equipment used in the procedures was similar for the first four cases but different in the last three cases.

As part of the clinical assessment of the patients, the Sequential Organ Failure Assessment (SOFA) score was calculated. The maximal score of the patients was correlated with the duration of surgery. Statistical analysis was performed with JMP software.

Blood cultures were obtained from the five patients who were treated in our hospital. Patients five and seven had positive cultures, and both cultures grew the same Gram-negative organism: Citrobacter freundii. Patient seven also had another Gram-negative bacillus, Acinetobacter baumannii, cultured from the same blood specimen. Because the anesthesiologist was the only identified common factor involved in the care of all patients, we decided to test the possibility of him being the source of the infection. Cultures of the anesthesiologist's mouth, nares, and hands were taken three times in the days following the events and were negative for Gram-positive or Gram-negative pathogenic bacteria.

We then obtained samples from different body sites to characterize his microbial composition using V4 16S rRNA amplicon metagenomic sequencing. Swabs were obtained from his nares, buccal area, hands, and feces (physician A), and from another anesthesiologist who served as a control (physician B). We also sequenced, using V4 16S rRNA amplicon metagenomic sequencing, the three isolated bacteria that were grown and identified: Citrobacter freundii (two isolates, from patients five and seven) and Acinetobacter baumannii (from patient seven).

16S rRNA gene amplicon sequencing and analyses

DNA extraction and polymerase chain reaction (PCR) amplification of the variable region 4 (V4) of the 16S rRNA gene, using Illumina-adapted universal primers 515F/806R, were conducted with a direct PCR protocol (Extract-N-Amp Plant PCR Kit), as previously described. Reads were processed in a data curation pipeline implemented in QIIME 2 version 2019.4. The representative V4 amplicon sequence variant (ASV) identified for Acinetobacter baumannii was "TACAGAGGGTGCGAGCGTTAATCGGATTTACTGGGCGTAAAGCGTGCGTAGGCGGCTTTTTAAGTCGGATGTGAAATCCCCGAGCTTAACTTGGGAATTGCATTCGATACTGGGAAGCTAGAGTATGGGAGAGGATGGTAGAATTCCAGG"; a corresponding ASV was identified for Citrobacter freundii. A heatmap was generated using Calour version 2018.10.1 with default parameters. We have uploaded the 16S files to the NCBI SRA (accession PRJNA803025). Approval for publication of this epidemiological event was provided by the Institutional Review Board of Assuta Medical Centers.

Clinical description

The clinical information regarding the patients is presented in the table. The ASVs obtained from the cultured Citrobacter freundii isolates were identical to the 16S sequence obtained by NCBI BLAST. The Citrobacter freundii ASV was identified in physician A's hand and fecal samples, but not in physician B's samples or in any of the negative procedural controls. The Acinetobacter baumannii ASV was identified only in the culture, and not in physician A or B samples or in any of the procedural blank samples. This clearly identified physician A as the source of the outbreak.
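The core of the comparison above is an exact-identity check between ASVs: the V4 fragments recovered from the physician's samples and from the blood-culture isolates either match base-for-base or they do not. A minimal sketch of that comparison (the sequences and sample names are invented placeholders, not the study's data):

```python
# Exact ASV identity check, as used to link samples to culture isolates.
# All sequences below are placeholders, not the study's actual ASVs.
culture_asvs = {
    "C_freundii_isolate": "TACAGAGGGTGCAAGCGTTAATCGGAATTACTGGGCGTAAAG",
}

sample_asvs = {
    "physician_A_hand":  {"TACAGAGGGTGCAAGCGTTAATCGGAATTACTGGGCGTAAAG", "AACGT"},
    "physician_A_feces": {"TACAGAGGGTGCAAGCGTTAATCGGAATTACTGGGCGTAAAG"},
    "physician_B_hand":  {"GGCATTCGATACTGGGAAGCTAGAG"},
    "blank_control":     set(),
}

for isolate, asv in culture_asvs.items():
    for sample, asvs in sample_asvs.items():
        hit = "identical ASV found" if asv in asvs else "absent"
        print(f"{isolate} vs {sample}: {hit}")
```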
Nosocomial infections are a common occurrence, and the source of infection is frequently ascribed to bacterial transfer from patient to patient, most commonly by staff. Other possible sources are the patient's own flora, and, in some cases, the possibility of a clinician being the source of the infectious organism has also been reported.

The possible contribution of anesthetic practice to nosocomial infections has received less attention than other possible sources. Although perioperative nosocomial infections constitute a significant portion of hospital-acquired infections, these are most commonly due to surgical technique, staff involved directly with the surgical wound, the patients themselves, or poorly sterilized equipment. There have been several reports of anesthetic medications, particularly propofol, as a source of infection, particularly when single-patient vials of the medication are used in more than one patient.

During routine anesthetic care, the most probable mechanism by which an anesthesiologist can cause infection in a patient is through contamination of the anesthesia workplace, particularly the anesthesia machine after airway management, and especially the areas that are frequently touched during anesthesia, namely the adjustable pressure limit valve and the ventilatory settings controls. In many cases, it may not be feasible to comply with the guidelines because of the sheer number of hand hygiene opportunities in a fast-paced scenario such as an anesthetic induction. An expert guidance protocol for the prevention of contamination of anesthesia workstations was recently published.

An interesting observation in our series is the fact that the severity of the infection was correlated with the length of the procedure. Citrobacter freundii is an uncommon pathogen of nosocomial infections, although it is frequently present in the environment. In hospitalized patients, Citrobacter freundii is the source of infection in only 0.7% of those with bacteremia.

Limitations

A major limitation of our epidemiological inquiry and assessment of the events is the fact that we have no clear explanation for the mechanism by which this infection occurred. Although breaches in aseptic technique were identified in the physician's work, we could not explain the occurrence of a cluster of patients over two days without any change in practice just before the outbreak. We could not identify any special event regarding the anesthesiologist's health in the days before the outbreak. Despite the fact that we have no mechanistic explanation for the infection of all patients, there is no doubt that the source of the infecting organism was the anesthesiologist.

The correlation between the severity of the infection and the duration of surgery, that is, the time when the anesthesiologist was caring for the patient, invariably leads to the conclusion that the number of contacts between the anesthesiologist and the patient was a contributing factor to the development of sepsis. This indicates that the source of infection was either the intravenous line or the airways. During anesthesia delivery, both airway management maneuvers and intravenous medication administration increase as the duration of the procedure increases. Therefore, we believe that the source of the infection was most likely intravenous, although we cannot rule out contact through the airways as a source.
Alverdy et al. recently reported that breaches of aseptic technique occur regularly in the operating room, yet infections following surgery are not that common, suggesting that the source of infection should often be sought elsewhere. They propose an approach of sequencing organisms cultured from infected wounds and comparing them to the patient's microbiome, which may identify the source of the infection in many cases and perhaps direct us toward different approaches to dealing with surgical site infections.

The complexity of infection control during anesthesia delivery is due to the intensity of the clinical activities, as well as the unique setup of the anesthesia workplace. When an infection occurs following surgery, its source may not always be apparent, even when an epidemiological investigation is conducted. Others have reported on the role of anesthesia personnel in the transmission of pathogenic organisms from one patient to another due to lapses in sterile technique. Our report establishes the role of the clinician's microbiome as the source of infections in patients. This source of infection will usually be missed when conventional microbiological techniques are used in an epidemiological inquiry.

When there is an outbreak of infections following surgery, we propose that, in addition to exploring the patients as a source of infection and performing standard cultures of the patients, the environment, and the clinicians involved in patient care, an additional approach should be utilized: evaluation of the microbiome of the clinicians involved in the patients' care. As in the outbreak described here, this may provide valuable information that can be significant in identifying the source of infections and decreasing nosocomial infections.

Based on the examination of its lectotype (here designated), Westermannia difficilis Dohrn, 1860, currently included in Dohrnemesa Wygodzinsky, 1945, is transferred to the genus Polauchenia McAtee & Malloch, 1925, with the resulting new combination Polauchenia difficilis (Dohrn, 1860), comb. nov. An updated key to the species of Polauchenia is provided.

The Emesinae are classified in four tribes in the Neotropics. Earlier catalogs listed "difficilis (Westermannia) Dohrn" and "tenerrima (Westermannia) Dohrn" as unplaced species in the genus Emesa; because the name Westermannia Dohrn, 1860 was preoccupied by an earlier use of the name ([1821]), Kirkaldy proposed a replacement name, Westermannias. Wygodzinsky tentatively retained Westermannia difficilis in Dohrnemesa because the types were not examined again; however, considering the generic synonymy of Westermannia and Westermannias with Emesa proposed by previous authors, he noted that the placement of the species might need to be reassessed in the future. When describing Dohrnemesa, Wygodzinsky considered it close to Polauchenia McAtee & Malloch, 1925 and pointed out that the main differences between them were the absence of tubercles or spined humeri and the presence of a free short vein emitted from the base of the basal cell of the forewing in Dohrnemesa. In his key to the genera of the Emesini, the number of veins emitted from the base of the basal cell, two in Dohrnemesa and one in Polauchenia, respectively, is the main character that separates these genera. Additionally, based on this characteristic alone, Polauchenia reimoseri Wygodzinsky, 1950 was transferred to Dohrnemesa, on the assumption that the possession of two veins emitted from the base of the basal cell indicated that this species belonged to the latter genus. In the lectotype examined here, by contrast, the base of the basal cell is pointed and emits a single longitudinal vein, there is no short free vein at this base, and the humeri are spined. Both differences are sufficient to show that Polauchenia difficilis, comb. nov.
does not belong to Dohrnemesa but to Polauchenia. Additionally, as commented below, several other diagnostic characteristics of Polauchenia are present in the specimen examined.

In the current work, the lectotype (here designated) of Westermannia difficilis Dohrn, deposited in the Hemimetabola Collection of the Museum für Naturkunde Berlin, Leibniz Institute for Evolution and Biodiversity Science, Berlin, Germany (MFNB), was directly examined (Figs 4-14). Photographs of the lectotype of W. difficilis were taken with a Canon EOS 6D camera. In the label transcriptions below, a slash (/) separates the lines of a label and a double slash (//) separates different labels.

Subfamily Emesinae

Polauchenia difficilis (Dohrn, 1860), comb. nov.
Taxon classification: Animalia, Hemiptera, Reduviidae. ZooBank: E0BAD3D1-3D24-55C7-9A60-A76373FABAE8.
Westermannia difficilis Dohrn, 1860: 251 [description]; 1863: 47-48 [redescription]. Dohrnemesa difficilis: Wygodzinsky, 1949: 34.

Type material. Westermannia difficilis, male lectotype (here designated): [handwritten label]: Leptol. / difficilis / Dohrn // [blue underlined handwritten label]: Columb; Moritz. // [printed label]: 3326 // [printed label]: [at right side]: QR CODE, [at left side]: http://coll.mfn-berlin.de/u/ /123b88 // [printed red label]: LECTOTYPE / Westermannia difficilis Dohrn, 1860 / designated by H. R. Gil-Santana & / J. Deckert 2020 (MFNB).

The collector, Moritz, collected in the Caribbean islands and Venezuela, but there is disagreement among some authors as to whether he collected in Colombia. It is possible that the records of his collecting from Colombia originated from confusion between Venezuela and Colombia, parts of the former having once belonged to the ancient vice-kingdom of "Nueva Granada".

Polauchenia difficilis, comb. nov. can be separated from other species of the genus by the combination of characters presented in the key below. It shares similarities with P. paraprotentor Gil-Santana & Ferreira, 2017 but differs from this species in several characteristics, such as: the pale markings of the antenna and of the middle and hind femora simple (P. paraprotentor) or bordered by darker markings (P. difficilis); those on the antenna narrow, with the pale annuli as long or only slightly longer than the width of the segment (P. paraprotentor) or distinctly longer (four and seven times) than the width of the segment (P. difficilis); fore coxa with a median pale annulus (P. difficilis) or two pale annuli, at the submedian basal portion and at approximately the midportion of the distal half of the segment (P. paraprotentor); distal portion of the forewings with (P. difficilis) or without (P. paraprotentor) a large whitish subdistal marking; petiole approximately 1.5 (P. paraprotentor) or 1.3 (P. difficilis) times as long as the fore lobe; humeri spined (P. difficilis) or not (P. paraprotentor); spine of the scutellum obliquely directed upwards (P. difficilis) or backwards (P. paraprotentor); spines of the scutellum and metanotum mostly pale (P. difficilis) or brownish (P. paraprotentor).

Male. Measurements (mm): total length: to tip of abdomen, 10.0; to tip of forewings, 10.6. Coloration: brownish to light brown, with yellowish to pale markings or portions. This total length of 10.6 mm allows us to state it as the actual minimum for the genus. Moreover, it is noteworthy that Dohrn recorded a total length of 11 mm for W. difficilis; it is not possible to know how accurately A. Dohrn measured the specimen or whether he rounded the measurement to an exact number.
It is possible that the specimen was originally 11 mm in length when examined by him but, due to the passing of time, the specimen may have shortened a little; the total length is thus the only small difference from the original description. On the other hand, some characteristics that are diagnostic of Dohrnemesa, the genus in which the species was currently included, are absent in this species, providing an additional argument to disregard the placement of P. difficilis, comb. nov. in Dohrnemesa. The type specimen of Westermannia difficilis was designated here as a lectotype following Art. 74.1 of the ICZN. After this change, nine species remain in Dohrnemesa (among them D. santosi Wygodzinsky, 1945).

Owing to the nature of health data, their sharing and reuse for research are limited by legal, technical, and ethical implications. In this sense, to address that challenge and to facilitate and promote the discovery of scientific knowledge, the Findable, Accessible, Interoperable, and Reusable (FAIR) principles help organizations to share research data in a secure, appropriate, and useful way for other researchers. The objective of this study was the FAIRification of existing health research data sets and the application of a federated machine learning architecture on top of the FAIRified data sets of different health research performing organizations. The entire FAIR4Health solution was validated through the assessment of a federated model for real-time prediction of 30-day readmission risk in patients with chronic obstructive pulmonary disease (COPD). The application of the FAIR principles to health research data sets in three different health care settings enabled a retrospective multicenter study for the development of specific federated machine learning models for the early prediction of 30-day readmission risk in patients with COPD. This predictive model was generated upon the FAIR4Health platform. Finally, an observational prospective study with 30 days of follow-up was conducted in two health care centers from different countries. The same inclusion and exclusion criteria were used in both the retrospective and prospective studies. Clinical validation was demonstrated through the implementation of federated machine learning models on top of the FAIRified data sets from different health research performing organizations. The federated model for predicting the 30-day hospital readmission risk was trained using retrospective data from 4,944 patients with COPD. The assessment of the predictive model was performed using the data of 100 recruited patients (22 from Spain and 78 from Serbia) out of 2,070 observed (records viewed) patients during the observational prospective study, which was executed from April 2021 to September 2021. Significant accuracy (0.98) and precision (0.25) of the predictive model generated upon the FAIR4Health platform were observed, and the generated prediction of 30-day readmission risk was confirmed in 87% (87/100) of the cases. Implementing a FAIR data policy in health research performing organizations to facilitate data sharing and reuse is relevant and needed, following the discovery, access, integration, and analysis of health research data. The FAIR4Health project proposes a technological solution in the health domain to facilitate alignment with the FAIR principles. FAIR4Health is a project that received funding from the European Union's (EU) Horizon 2020 research and innovation program under grant 824666. The project started in December 2018 and ended in November 2021.
The main objective of this European project was to promote and encourage the EU health research community to apply the Findable, Accessible, Interoperable, and Reusable (FAIR) principles. Despite strong concerns and challenges regarding data sharing in health research, and the related debates on open data and FAIR data, the movement toward FAIR health data has gained considerable momentum. In this context, the purpose of the FAIR4Health project was to demonstrate the feasibility and value of applying the FAIR principles to health research data sets.

The aim of the FAIR data principles is to enable research data to be found, accessed, interoperated, and reused by both humans and machines. Since their formal release via the FORCE11 community, the FAIR principles have been adopted by a diverse range of research disciplines, such as economics, the semantic web, and the environmental sciences, and several groups have assessed the uptake to date and the challenges encountered. FAIR4Health adds to the analysis and experience of the application of the FAIR principles in the health research field, specifically in health research data sets on COPD.

COPD is a respiratory disease characterized by persistent symptoms and chronic limitation of airflow. The disease is known to be underdiagnosed even though it affects almost 10% of adults worldwide. Previous studies have shown that several risk factors are associated with readmission in patients with COPD, such as significant deterioration of lung function, low oxygen saturation in pulse oximetry, decreased activity levels, comorbidities, and the absence of medication reconciliation during hospitalization. Regarding the comorbidities, several studies agree that the greater the number of comorbidities, the greater the risk of readmission for patients with COPD. COPD is therefore a major health problem that must be addressed and analyzed.

In this paper, the clinical validation of the FAIR4Health solution is described, including the development and selection of the most appropriate model for predicting 30-day readmission risk in patients with COPD and the assessment of such a model. This study builds upon the FAIRification of health research data sets of different health research performing organizations and a federated machine learning architecture on top of the FAIRified data sets of those organizations. The entire FAIR4Health solution was validated in real-world settings with the clinical use case described in this paper.

The use case designed in this study to validate the FAIR4Health solution was composed of two phases: (1) a retrospective multicenter observational study, including the training of the predictive models in the FAIR4Health platform, and (2) an observational prospective study with a 30-day follow-up. In the retrospective study, the population included patients aged >18 years diagnosed with COPD, considering that COPD-related comorbidities are observed at a younger age.
In the first phase, to train the federated machine learning models, three different organizations participated with their health care and health research data sets: (1) the Université de Genève from Switzerland provided health care data from the electronic health record (EHR) of the University Hospitals of Geneva; (2) the Virgen del Rocío University Hospital, as part of the Andalusian Health Service (SAS) from Spain, provided health care data from the EHR of the Virgen del Rocío University Hospital in Seville; and (3) the Instituto Aragonés de Ciencias de la Salud and the Instituto de Investigación Sanitaria Aragón from Spain provided a health research data set based on the EpiChron Cohort.

For organizations contributing health research data sets from previous research projects, the sample size was defined by taking into account the original size of the data sets in the previous research, whereas for organizations contributing health care data sets from the EHRs, it was defined by the number of patients fulfilling the inclusion and exclusion criteria.

The variables for the training and prediction processes were related to demographic, multimorbidity, comorbidity, polypharmacy, laboratory, and hospitalization data. The principal dependent variable was readmission, defined as unplanned hospitalization for any cause related to COPD within 30 days of hospital discharge.

Following the clinical protocol defined in the study, an observational prospective study with a 30-day follow-up was carried out after the retrospective study to assess the impact of the early predictive model by collecting data from a cohort of recruited patients. Patients aged ≥18 years with a diagnosis of COPD who were admitted to the hospital for this disease and who signed the informed consent form (ICF) were included in the observational prospective study, complying with the same inclusion and exclusion criteria as described for the retrospective study.

Two health care organizations participated in the observational prospective study in which the trained predictive model was tested: (1) the Internal Medicine Department of the Virgen del Rocío University Hospital in Seville, as part of the Andalusian Health Service (SAS), from Spain; and (2) the Clinic for Obstructive Pulmonary Diseases and Acute Pneumopathies of the Institute for Pulmonary Diseases of Vojvodina (IPBV) from Serbia. In both cases, the sample size was defined by the number of patients admitted to the hospital during the prospective study period who fulfilled the inclusion and exclusion criteria.

Regarding the study variables, the same variables were collected at the time of inclusion of each patient during the prospective study as in the retrospective study. As a monitoring variable, aiming to assess the prediction performance of the model on the patient's risk of readmission, it was recorded whether the patient with COPD had a readmission within 30 days of discharge.

Ethical approval for this study was obtained from all participating health research organizations, based on regional regulations, before involving them in the execution of the case studies. Technical and organizational measures were defined to safeguard the rights and freedoms of the data subjects, including the data minimization principle. Informed consent procedures were defined, including informed consent and information sheets. A data protection officer was appointed at each data owner institution.
To reinforce the appropriate coverage of these ethical aspects, an external ethics advisory board was established at the beginning of the study; its role involved reviewing deliverables, generating reports, and giving presentations to support the FAIR4Health Consortium.

Making health data FAIR opens up new horizons, especially for the secondary use of health care data and the reuse of health research data sets. The FAIR4Health project proposed a FAIRification workflow to be used in the health domain. To address the challenges of this domain, the proposed workflow adapted the generic FAIRification process defined by GO FAIR. The steps were (1) raw data analysis, (2) data curation and validation, (3) data deidentification and anonymization, (4) semantic modeling, (5) making data linkable, (6) license attribution, (7) data versioning, (8) indexing, (9) metadata aggregation, and (10) publishing. Steps 2, 3, 7, and 8 were newly introduced in the FAIR4Health FAIRification workflow. The FAIRification workflow was based on the HL7 Fast Healthcare Interoperability Resources (FHIR) standard.

Along with the onFHIR.io repositories, a Data Curation Tool (DCT) and a Data Privacy Tool (DPT) were installed at each organization, and these tools were used by the data managers and FAIR4Health researchers to FAIRify their existing data sets, collaborating to appropriately treat the databases. Following the FAIRification workflow, the raw data were first transformed into HL7 FHIR by creating the associated FHIR resources through the DCT. The DCT proved to be a valid software tool that meets the challenges of the raw data analysis, curation, and validation steps. Once the data sets were FAIRified, they could be exposed to the subsequent privacy-preserving analysis.
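As an illustration of what the FHIR-based FAIRification output looks like, the sketch below builds a minimal FHIR R4 Condition resource for a COPD diagnosis and posts it to an onFHIR.io-style FHIR endpoint. The endpoint URL and patient reference are invented placeholders, and the SNOMED CT code shown (13645005, chronic obstructive lung disease) is given for illustration; the project's actual profiles and mappings are defined by its common data model.

```python
import json
import urllib.request

# Minimal FHIR R4 Condition resource for a COPD diagnosis (illustrative only;
# the SNOMED CT coding and patient reference are placeholders).
condition = {
    "resourceType": "Condition",
    "subject": {"reference": "Patient/example-patient-id"},
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "13645005",
            "display": "Chronic obstructive lung disease",
        }]
    },
}

# Hypothetical FHIR repository endpoint; onFHIR.io deployments expose a
# standard FHIR REST API, but the base URL below is invented.
req = urllib.request.Request(
    "https://fhir.example.org/fhir/Condition",
    data=json.dumps(condition).encode("utf-8"),
    headers={"Content-Type": "application/fhir+json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to run against a real FHIR server
```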
The FAIR4Health project implemented the privacy-preserving distributed data mining (PPDDM) philosophy by designing and implementing a federated machine learning architecture. The ultimate aim of this architecture is to address the challenging security and privacy concerns of health data owners: the PPDDM architecture does not allow data to leave their servers. Partial machine learning models were trained on each FAIRified data set at each organization, and these partial models were then used to develop a boosted machine learning model on the central FAIR4Health platform. The platform provides a web-based graphical user interface so that researchers can define their features, create distributed data sets, and then train federated models. The PPDDM architecture was composed of an agent implementation and a central manager. The agents were deployed at each data source organization on top of their FAIRified data sets and communicated with their associated onFHIR.io repositories at each deployment site. The manager was deployed as a backend to the FAIR4Health platform graphical user interface so that the agents could be orchestrated to build distributed data sets and federated predictive models on top of those distributed data sets.

During the retrospective study, the researchers of the data owner organizations used the platform to train federated machine learning models on the retrospective data sets that had previously been made FAIR using the FAIRification tools. The PPDDM implementation provided a set of machine learning algorithms to be executed in a federated manner: (1) support vector machine, (2) logistic regression, (3) decision trees, (4) random forest, and (5) gradient-boosted trees.

A number of machine learning models were generated using the prediction algorithms listed above, trying out various values for different parameters. More focus was given to the tree-based algorithms because the data in the agents were skewed in one direction, and tree-based methods produce better results than the others when the data are unbalanced. In addition, k-fold cross-validation was used to split the data into a set of nonoverlapping training and test sets to obtain more reliable estimates. In the experiments, the best results were obtained with the predictive models generated using the random forest algorithm, configured as follows:

- Validation: 3-fold cross-validation with the area under the receiver operating characteristic curve as the evaluation metric
- Imputation strategy: median (replaces missing values using the approximate median value of the feature)
- Maximum depth of a tree: 5
- Minimum information gain: 0.0
- Impurity: gini
- Number of trees: 50
- Feature subset strategy: auto (the number of features considered at each tree node is the square root of the total number of features in the classification setting)

After the parameters of the algorithm were selected, the predictive model was generated using the retrospective data sets of 4,944 patients with COPD.
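The parameter names above match Apache Spark MLlib's random forest implementation, so a plausible single-site sketch of the configured training step might look like the following (PySpark). The input path, schema, and column names are simplifications, and in FAIR4Health the equivalent computation runs inside the distributed agents rather than centrally.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Imputer, VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

spark = SparkSession.builder.appName("copd-readmission-sketch").getOrCreate()

# Hypothetical FAIRified feature table: demographics, comorbidities, labs,
# and a binary 30-day readmission label.
df = spark.read.parquet("copd_features.parquet")  # placeholder path
features = ["age", "n_comorbidities", "n_drugs", "length_of_stay"]

imputer = Imputer(strategy="median", inputCols=features,
                  outputCols=[f + "_imp" for f in features])
assembler = VectorAssembler(inputCols=[f + "_imp" for f in features],
                            outputCol="features")
rf = RandomForestClassifier(labelCol="readmitted_30d", featuresCol="features",
                            maxDepth=5, minInfoGain=0.0, impurity="gini",
                            numTrees=50, featureSubsetStrategy="auto")

pipeline = Pipeline(stages=[imputer, assembler, rf])
evaluator = BinaryClassificationEvaluator(labelCol="readmitted_30d",
                                          metricName="areaUnderROC")
cv = CrossValidator(estimator=pipeline, evaluator=evaluator,
                    estimatorParamMaps=ParamGridBuilder().build(),
                    numFolds=3)  # 3-fold CV with AUROC, as described above

model = cv.fit(df)
print("mean AUROC:", model.avgMetrics[0])
```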
Subsequently, an observational prospective study was conducted to validate and evaluate the early predictive model for 30-day readmission risk in patients with COPD. In total, 100 patients were recruited and included in the observational prospective study with a 30-day follow-up, which ran from April 2021 to September 2021, including recruitment and follow-up. During that period, the study participants were recruited by performing weekly prevalence cuts in which all patients hospitalized because of COPD conditions were systematically evaluated, offering inclusion in the study to all those who met the inclusion criteria and did not meet any exclusion criteria. Clinicians and researchers performed functional and clinical validations of the FAIR4Health solution during the observational prospective study. As this was a multicenter observational study, the recruitment and inclusion of patients were carried out as follows.

For SAS, the clinical team reviewed 711 hospitalized patients during the study period, and 53 (7.5%) of them fulfilled the inclusion criteria and did not meet any exclusion criteria. Finally, 22 patients with COPD signed the ICF and were included in the observational prospective study; of these, 18% (4/22) were female and 82% (18/22) were male. In the case of IPBV, out of 2,070 hospitalized patients, 113 (5.46%) were hospitalized because of COPD exacerbation, and 83 (73.5%) met all inclusion criteria, did not meet any exclusion criteria, and signed the ICF; a total of 78 patients were included in the observational prospective study, of whom 47% (37/78) were female and 53% (41/78) were male.

All data gathered from patients with COPD were entered into the FAIR4Health platform to obtain the prediction generated by the predictive model for 30-day readmission risk and to assess its performance. When the prediction was obtained, a concordance analysis was performed to compare the real data with the predicted values. Concerning the actual readmissions among the 100 recruited patients, in both centers the patients were followed up during hospitalization and during the following 30 days. Out of the 22 patients recruited from SAS, 3 (14%) were readmitted within 30 days of discharge. Out of the 78 patients recruited from IPBV, 10 (15%) were readmitted during the follow-up period. Finally, from the 100 recruited patients: (1) the accuracy of the predictions generated by the FAIR4Health platform was confirmed in 87% (87/100) of the cases; that is, either the patient was readmitted to the hospital because of COPD in real life and the algorithm had predicted an early 30-day hospital readmission risk, or the patient was not readmitted and the algorithm had predicted no such risk; and (2) the prediction was not confirmed in 13% (13/100) of the cases; that is, the patient was readmitted within 30 days while the platform had predicted no early 30-day readmission risk, or the patient was not readmitted while the platform had predicted such a risk.
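For clarity on the reported metrics, the small sketch below computes accuracy and precision from a confusion matrix. The counts used are hypothetical: the published figures report overall concordance of 87/100 and 13 actual readmissions, but not the full TP/FP/TN/FN breakdown.

```python
# Accuracy = (TP + TN) / N ; Precision = TP / (TP + FP).
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    return tp / (tp + fp) if (tp + fp) else float("nan")

# Hypothetical split consistent with 100 patients, 13 of whom were readmitted,
# and 87 concordant predictions (not the study's actual breakdown).
tp, fn = 3, 10          # readmitted patients flagged / missed by the model
tn, fp = 84, 3          # non-readmitted patients correctly / wrongly flagged
print(accuracy(tp, tn, fp, fn))   # 0.87
print(precision(tp, fp))          # 0.5 in this hypothetical split
```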
The application of the FAIR principles to the health research data sets of health research performing organizations from different countries allowed the federated data analysis to accelerate the discovery of scientific outputs. The legal, technical, and ethical requirements of health research data were addressed during data FAIRification. Furthermore, a clinical decision support model for predicting 30-day readmission risk in patients with COPD at discharge, based on risk factors uncovered previously using data mining approaches, was implemented, deployed, and validated. Finally, through a multicenter study in which the rate of readmission of patients with COPD within 30 days after hospital discharge was analyzed, the clinical partners reached the use case objectives and obtained an early 30-day hospital readmission risk predictive model. Further details of the FAIR4Health pathfinder case studies can be found in the FAIR4Health public report on the demonstrators' performance.

It is important to highlight that the FAIR4Health solution was implemented with practical extensibility in mind, so that other research questions can be covered using the solution without the need for adaptations. Furthermore, to improve the reusability of the study, both the open-source code and the generated metadata are freely available on GitHub.

First, significant cross-cutting data-related challenges were addressed during data collection. Extracting data from EHRs and other types of health care sources and aligning that extraction with the FAIR4Health common data model was not trivial and required considerable conceptual and technical effort because of (1) the complexity of the raw data, (2) the free text used in some fields of the raw data sources, and (3) the differences between the types of raw data sources. To address the complexity of the raw data, each health research organization that participated in the data extraction involved colleagues who were experts in each source data model. To handle the information in free-text fields, natural language processing techniques were assessed; in some cases, structured information was extracted from unstructured text manually. To manage the differences between the natures of the raw data sources, each raw data set was analyzed in depth in a collaborative effort between each clinical partner and the technical partners to reach the required configuration in the FAIR4Health solution, achieving the FAIRification of all raw data and, finally, the generation of the PPDDM models using all sources.

Second, concerning the predictive model generated in this study, more efficient prediction parameters could be obtained if the distribution of the readmission variable in the data sets were better balanced. The readmission variable, which was the dependent variable, was not balanced in the data sets of the retrospective studies (the data sets used to generate the predictive model for this prospective study), so the generated results were good but not as strong as desired. For more effective models, a better-balanced distribution of the readmission variable, using data sets with more patients, will be addressed in the future to boost the application of predictive models in clinical practice. Most studies of predictive models based on machine learning show poor methodological quality and are at a high risk of bias; small study size, poor management of missing data, and failure to address overfitting are factors that contribute to this risk.

In addition, it is crucial to note that this study was carried out while the two health care organizations were experiencing the consequences of the COVID-19 pandemic, and the clinical researchers had to make significant efforts to properly conclude the prospective study:

IPBV, as a health care institution, was included in the national COVID-19 system of health care institutions caring for COVID-19-positive patients with severe clinical difficulties. Owing to this reorganization of the Serbian health care system, the likelihood of hospitalization of patients with COPD was reduced from March 2020 onward. Many of the researchers responsible for patient recruitment in the prospective study were engaged in COVID-19 departments, and the remaining researchers were overworked during the study period.

On the side of SAS, the institution was involved in the care of patients with suspected COVID-19 and COVID-19-positive patients with severe clinical difficulties, and all health professionals in SAS carried a higher health care workload. In fact, several clinical researchers participating in this observational study were transferred during the project to the COVID-19 Emergency Hospital in Seville (Spain), relieving each other, with an essential health care priority, looking after patients who did not meet the inclusion criteria of this study and could not be recruited. The clinical researchers also identified a low use of health care services (both emergency visits and outpatient consultations) by patients with COPD; presumably, patients waited for more severe symptoms before going to health care centers for fear of contact with COVID-19-positive patients. In addition, hospitalizations of patients with COPD were restricted, as happened in other pathologies, to reduce patient flow through health care centers.

Considering the final version of the FAIR4Health solution and the main outcomes of this study, some future advances can be taken into account:
The solution has been designed and developed with extensibility toward other data models in mind, so it is appropriate to continue the validation and testing with other data models in future clinical validations. The whole FAIR4Health solution covers alignment with relevant standards: HL7 FHIR, the International Classification of Diseases, SNOMED Clinical Terms, Logical Observation Identifiers Names and Codes, and the Anatomical Therapeutic Chemical classification system. Other standards, such as other HL7 standards, epidemiological standards, and W3C standards, could be considered for integration where viable. The FAIR4Health platform was validated using the following machine learning algorithms: frequent pattern growth, support vector machine, logistic regression, decision trees, random forest, and gradient-boosted trees. Deep learning algorithms such as neural networks can be considered in future studies to improve the capabilities of the FAIR4Health platform. From a scientific point of view, some researchers of the FAIR4Health Consortium contribute to the application of the FAIR principles in the health research field, being involved in international working groups that form part of the European Open Science Cloud, the European Federation for Medical Informatics, the Research Data Alliance, the GO FAIR initiative, and HL7 International. Despite the limitations mentioned above, the objective of this study was achieved: to validate the FAIR4Health solution through the assessment of a federated model that was generated by applying a federated machine learning architecture on top of the FAIRified data sets of different health research performing organizations for real-time prediction of 30-day readmission risk in patients with COPD. The clinical, technical, and functional validation of the FAIR4Health solution was achieved through (1) the application of FAIR principles through the FAIR4Health FAIRification tools in health research data sets of different health research performing organizations, FAIRifying data from 4944 patients with COPD; (2) development and use of a federated machine learning architecture on top of the FAIRified data sets; and (3) clinical, technical, and functional development and assessment of a federated model for predicting 30-day readmission risk in patients with COPD, with an accuracy of 0.98, a precision of 0.25, and a confirmed prediction in 87% (87/100) of the cases. In the retrospective study, where 3 different organizations participated with their health care and health research data sets, the federated model was generated with an accuracy of 98.6% and a precision of 25%. In the observational prospective study, in which 2 health care organizations participated, 100 patients were recruited for the federated model to predict their readmission risk to the hospital within 30 days because of COPD. The accuracy of predictions generated by the model, and hence the FAIR4Health platform, was confirmed in 87% (87/100) of the cases. Health research performing organizations are aware of the need to implement a FAIR data policy to facilitate data sharing and reuse through the discovery, access, integration, and analysis of health research data. One obvious example is the COVID-19 pandemic, where international cooperation allowed rapid sequencing and epidemiological studies to be carried out, demonstrating the need for and importance of data sharing to accelerate health research.
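Because the combination of near-perfect accuracy and low precision can look contradictory at first sight, a minimal sketch may help. The confusion-matrix counts below are hypothetical, chosen only to reproduce figures close to the reported accuracy of 98.6% and precision of 25% under a rare positive class (30-day readmission); they are not the study's actual cell counts.

```python
# Hypothetical confusion-matrix cells; positive class = "readmitted within 30 days".
tp, fp = 3, 9      # predicted readmission: 3 correct, 9 false alarms
fn, tn = 5, 983    # 5 missed readmissions, 983 correct "no readmission" calls

total = tp + fp + fn + tn
accuracy = (tp + tn) / total    # share of all predictions that are correct
precision = tp / (tp + fp)      # share of predicted readmissions that are real
recall = tp / (tp + fn)         # share of real readmissions that are caught

print(f"accuracy  = {accuracy:.3f}")   # 0.986: dominated by the many true negatives
print(f"precision = {precision:.3f}")  # 0.250: most positive predictions are false alarms
print(f"recall    = {recall:.3f}")     # 0.375 in this invented example
```

With a heavily imbalanced readmission variable, accuracy is driven almost entirely by true negatives, which is exactly why the authors point to rebalancing the dependent variable as the route to more effective models.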
The FAIR4Health project proposes a technological solution in the health domain to facilitate the use of larger and more heterogeneous data sets, thus increasing the variability of the data and the size of the data sets. Therefore, an increase in the scope of the research will be obtained, along with a significant improvement in the ability to generate more accurate predictive models."} {"text": "Secreted phospholipases of type A2 (sPLA2s) are proteins of 14–16 kDa present in mammals in different forms and at different body sites. They are involved in lipid transformation processes, and consequently in various immune, inflammatory, and metabolic processes. sPLA2s are also major components of snake venoms, endowed with various toxic and pharmacological properties. The activity of sPLA2s is not limited to the enzymatic one: through interaction with different types of molecules, they exert other activities that are still little known and explored, both outside and inside the cells, as they can be endocytosed. The aim of this review is to analyze three features of sPLA2s, as yet under-explored, knowledge of which could be crucial to understanding the activity of these proteins. The first feature is their disulphide bridge pattern, which has always been considered immutable and necessary for their stability, but which might instead be modulable. The second characteristic is their ability to undergo various post-translational modifications that would control their interaction with other molecules. The third feature is their ability to participate in active molecular condensates both on the surface and within the cell. Finally, the implications of these features in the design of anti-inflammatory drugs are discussed. A phospholipase A2 (PLA2) is an enzyme that hydrolyzes the second ester bond of a phospholipid, releasing a fatty acid and a lysophospholipid. Eukaryotic PLA2s are classified as intracellular or extracellular (secreted). Secretory PLA2s (sPLA2s) are translated with an N-terminal signal peptide that is removed in the endoplasmic reticulum, and then they are translocated through the Golgi and secretory vesicles to the extracellular space. They are abundant in numerous human body fluids, such as pancreatic juice, tears, and seminal fluid, and some of them are detectable also in blood serum. sPLA2s are also present in prokaryotes, and prokaryotic and eukaryotic sPLA2s share the same mechanism of action, in which a water molecule, activated by the histidine residue (H48) of the active site, performs a nucleophilic attack on the sn-2 carbonyl oxygen of the phospholipid. The best-known sPLA2 functions are the digestion of lipids, in the digestive system, and antibacterial defense at various body sites, e.g., in the intestines and in tears. However, these proteins are involved in various inflammatory and lipid metabolism processes and consequently in numerous diseases, from metabolic and cardiovascular to neurodegenerative and neuromuscular diseases. Important toxic components of snake venoms are sPLA2s of groups I and II. They are endowed with several toxic properties, among which the most widespread are hemotoxicity, myotoxicity, and neurotoxicity. The biological activity of PLA2s, although they are involved in numerous pathologies, has not yet been fully understood.
Firstly, their enzymatic activity, well characterized in vitro, is more difficult to assess in vivo because it is influenced by many local parameters, such as the type of lipids present, their accessibility, the micro-local calcium concentration, and interactions with other proteins and carbohydrates. Secondly, sPLA2s perform other activities besides their enzymatic function. Natural sPLA2 homologs of snake venom lacking catalytic activity, in which the aspartic acid residue (D49) in the active site is replaced by lysine (K49) or other amino acids, possess equal or even higher toxicity than their catalytically active counterparts. This non-catalytic activity of sPLA2s is even less well known because it depends on a complex network of interactions with many other types of molecules. Finally, the activity of sPLA2s is not only expressed outside the cell because, following interaction with various surface receptors and co-receptors, sPLA2s are internalized. In recent years, many reviews have been focused on these proteins, some addressing one group in particular, such as group IB. Conventional sPLA2s share a disulphide pattern comprising five to eight bridges. An additional bridge described in piratoxin-I from Bothrops pirajai is present in some but not all structures of the protein. This suggests that it can be an allosteric bridge, i.e., a bridge that confers different activities to the protein depending on its state, open or closed. In conclusion, to investigate the biological action of sPLA2s, it is important to consider that, when they are imported into the cytosol, they may undergo PTMs that affect their molecular interactions and biological activity. The pathological action of human variants of sPLA2s, and of toxic sPLA2s present in animal venoms, could also be due to mutations of their PTM sites. The sPLA2s interact with different types of proteins, both extra- and intracellular. Two-dimensional condensates can also be formed on the membrane of the cell or internal organelles. Because of interaction with the surface receptors described above, sPLA2s are internalized and transported to the paranuclear or, in some cases, even nuclear zones. The toxic action of snake sPLA2s is not yet fully understood. At the level of the plasma membrane, in the case of catalytically active phospholipases, one hypothesis is that the enzyme action may destabilize membranes, in the case of myotoxins, or induce the fusion of neurotransmitter vesicles with the synaptic terminal, in the case of neurotoxins. Bothrops asper myotoxin-II, for example, has a high affinity for phosphatidic acid. To summarize, the multiple interactions established by sPLA2s with different types of molecules, lipids, carbohydrates, membrane receptors, co-receptors, and intracellular enzymes mean that they can participate in the formation of molecular condensates both on the cell surface and intracellularly. Understanding how these complexes are formed and broken down, as well as their function, will be crucial to understanding the activity of sPLA2s. Most, if not all, diseases in which sPLA2s are involved have an inflammatory component, and A2-type phospholipases, mainly sPLA2s, are considered an important target in the development of anti-inflammatory drugs, as they mediate the release of fatty acids that are then converted into inflammatory mediators.
Today, inflammation is mainly treated with cyclooxygenase 1 and 2 inhibitors, which act downstream of the action of sPLA2s, or with cortisone drugs, which act instead upstream of sPLA2 production, as they regulate the expression of different inflammatory agents. Both types of treatment have important side effects, ranging from gastrointestinal and cardiovascular disorders in the case of cyclooxygenase inhibitors to disorders of various kinds in the case of cortisone drugs, as this class of proteins regulates various physiological processes that are consequently altered. Inhibitors of sPLA2 enzymatic activity have not yielded the desired results in clinical trials. Examples of molecules that interfere with the molecular interactions of sPLA2s already exist, such as an aptamer that binds to NCL, a disordered co-receptor protein of a snake venom sPLA2, preventing its internalization into the cell and protecting the cell from its toxic effects. In the future, we will see an increase in the development and creation of molecules acting on molecular condensates. To sum up, the development of new anti-inflammatory drugs targeting sPLA2s will have to consider the complexity of their action and regulation, as well as the multiple molecular interactions they can establish. To this end, new experimental models will be required to test the action of compounds that affect the formation and stability of the molecular complexes in which they take part."} {"text": "Still, there are large gaps in our knowledge, and large parts of modern enuresis management guidelines are (still) not based on firm evidence. In this review I will question several commonly made assumptions regarding enuresis evaluation and treatment, arguing that much of what we do with these children is based more on experience and well-meant but poorly supported assumptions than on evidence. Some advice and therapies are probably ineffective, whereas for other treatments we lack reliable predictors of treatment response. More research is obviously needed, but awaiting new results, enuresis management could be substantially simplified. Enuresis used to be viewed as a purely psychiatric disorder. Until the 1980s the evaluation of bedwetting children was focused on behavior, early trauma and other psychological factors, and therapy, if any therapy was advocated, was usually psychotherapy in various forms. But since the seminal work in the late 80s by the Aarhus group we know more. The recommended strategy for managing these children has changed accordingly, as reflected by international guidelines. Nowadays, children with enuresis are expected to be taken seriously and the wait-and-see attitude is no longer accepted, at least for children aged six years or more. Neither is psychotherapy advocated as a primary (or indeed secondary) therapy. Instead the recommendation is often given that the LUT function of these children be "normalised" by the institution of regular drinking and voiding habits. The new prevailing strategy for the management of children with enuresis is surely a great step forward compared with the views of several decades ago, but there are still problems. Much of what we now do is (still) based not on firm evidence but on experience and assumptions. These assumptions are not by any means unreasonable, just not properly tested. The aim of this review is to scrutinize some of the central assumptions underlying modern enuresis management.
By doing this I mean neither to polemize against or criticize the experts, nor to distance myself from my contribution to the current guideline documents but, hopefully, to underline fields needing more research and to suggest ways that, pending that research, enuresis management may be simplified, at least outside the university setting. According to the International Children's Continence Society (ICCS), nocturnal enuresis can, and should, be subdivided into monosymptomatic nocturnal enuresis (MNE) and nonmonosymptomatic nocturnal enuresis (NMNE) on the basis of whether daytime symptoms of LUT dysfunction are also present or not. The MNE/NMNE definitions were an update of the previous terminology document, which stated that MNE, defined only as enuresis without daytime incontinence, involved "urodynamically normal voidings" whereas NMNE did not. Still, we did make assumptions regarding the underlying pathogenesis and the expected response to therapy. The argument went something like this: (1) we know that detrusor overactivity is one crucial pathogenic factor behind enuresis, (2) we assume that symptoms such as daytime incontinence and urgency, and findings such as a high daytime micturition frequency, indicate underlying detrusor overactivity, and (3) we therefore expect the presence of such symptoms to predict a different response to therapy. We have usually assumed that children with NMNE constitute the minority, but this has been questioned. Urgency is a particularly problematic symptom. It is assumed to indicate underlying detrusor overactivity. Looking for studies addressing whether children with MNE and NMNE require different therapies, the result is meager. In very many studies, perhaps the majority, only children with presumed MNE have been included. And even though many of those studies have not had a verified "high" or "low" (daytime) micturition frequency among the exclusion criteria, the result is that we know very little about which therapy works and does not work in children with NMNE. Studies expressly including children with both MNE and NMNE while clearly characterizing them, and giving them the same therapy while looking for differences in therapy response between the groups, are very few indeed. Perhaps the only remaining argument for giving different treatment across the MNE/NMNE divide is that if there is nocturnal polyuria there is no need to assume concomitant detrusor overactivity, and desmopressin could be tested straight away. But even this assumption has not been properly tested, due to a lack of studies of desmopressin treatment in children with properly defined NMNE. In the latest enuresis guideline document from the ICCS these uncertainties have been acknowledged by joining the previously separate NMNE and MNE documents into one. In future guidelines, I suggest that much less emphasis is put on the MNE-NMNE subdivision. Consider what a voiding chart actually records: did the child void because (a) the parents told the child to void, (b) the child believed that it was supposed to void, (c) it was socially convenient to go to the toilet, (d) the bladder was full, or (e) there were uninhibited detrusor contractions? It is really only the last reason for voiding which is interesting for us; the others just obscure the picture. Sadly, the evidence for a link between voiding chart data, such as voided volumes or voiding frequency, and cystometric findings is very tenuous. The value of the voiding chart as a predictor of enuresis therapy response is also meager.
There are data suggesting that normal voided volumes are more common among desmopressin responders, but the chart may be of more value as a support for therapy, regardless of whether it is useful as a diagnostic tool or not: by documenting the micturitions in a chart it is probably easier to adhere to the voiding schedule according to the instructions given. It has also been assumed that documented nocturnal polyuria predicts a likelihood that desmopressin will work. A problem here is that nocturnal urine production measurements, to be reasonably reliable as predictors of desmopressin response, need to be performed several times; once or twice is not enough. It could also be questioned, given the less than perfect correlation between nocturnal urine production and desmopressin response, whether the absence of nocturnal polyuria in a child who does not respond to the enuresis alarm means that we should let anticholinergics or tricyclics be the next step before testing desmopressin. I think not. Although old psychodynamic explanations regarding enuresis pathogenesis are clearly obsolete, it has been convincingly shown that children with enuresis, especially if they also have daytime incontinence and/or fecal incontinence, are more prone to behavioral problems or neuropsychiatric disorders than their nonenuretic peers. The central motivation for this recommendation is neither that we consider the psychiatric issues to be causative nor that treatment of them will by itself make the children dry. Instead it rests on the assumption that concomitant problems such as ADHD will negatively influence treatment response. This assumption does not seem unreasonable regarding the enuresis alarm or urotherapy, treatments that demand much active cooperation from the child. But there is no reason to believe that response to pharmacological treatment is affected. And although it seems fair to suspect that successful alarm treatment is difficult to achieve in a child who scores positive on a screening tool for, say, ADHD, this has not been put to the test in prospective studies. Intriguingly, in an American study comparing 95 enuretic children with ADHD and 95 children without ADHD, no differences were found regarding alarm treatment results. It should be kept in mind that in many countries and settings child psychiatry and psychology are scarce resources. Can the healthcare system take care of all the new referrals, should this recommendation be followed? Thus, given the current state of evidence, I would suggest that if screening tools are used and indicate behavioral/psychiatric issues, then the child should not be automatically referred unless there are also substantial problems with social interaction apart from the wetting. Or perhaps wait until one serious alarm attempt has been tried and failed. This recommendation is also based on assumptions regarding detrusor overactivity, which is the main cause behind daytime incontinence. But we have no firm evidence for the truth of either the first or the second of these arguments. In fact, there are earlier studies indicating that it may be the other way around, i.e., that enuresis alarm treatment may work regardless of concomitant daytime incontinence. The third argument, about the impact of daytime incontinence being greater for the child, may certainly be true, but that is not for us to decide. Thus, pending new evidence, the best strategy is probably to let the families decide about which problem to address first. Or treat both conditions simultaneously.
As the evidence now stands we have no grounds for delaying the enuresis therapy just because the child also wets during daytime. And although successful daytime training may boost the child's self-esteem, it should be noted that it is time-consuming for both the family and the healthcare provider. Based on these considerations it is fair to say that daytime urotherapy has no place in the initial treatment of children with enuresis. The purpose of this review has not been to criticize the experts behind the existing guidelines. I have myself been very active in the creation of the relevant ICCS documents, a contribution which I do not regret. These documents have been based on the available evidence, and, whenever evidence has been unavailable, the collective clinical experiences and reasonable assumptions of the experts. My aim here has been to highlight the many existing uncertainties and the areas in which recent evidence contradicts our previous assumptions. My views and recommendations are summarized below.
It should be obvious that there are many areas in which new research is sorely needed, and these include such basic questions as what to focus on during the primary evaluation and how to choose first-line therapy for vast numbers of children. Most of the research needed is not hi-tech or expensive, but the impact for the many affected children will be considerable. Here are some suggested fields that deserve further study:
• The role (if any) of voiding charts in the evaluation of enuretic children
• Desmopressin and alarm response in enuretic children with concomitant daytime symptoms
• The effect of treatment for neuropsychiatric disorders on response to first-line enuresis therapy
• Studies on the need for, or benefit of, treatment of non-bothersome constipation in children with enuresis
Based on the available evidence today, I suggest that the following changes are made to the recommended enuresis management:
• Put less emphasis on the differentiation of enuresis into monosymptomatic and nonmonosymptomatic varieties
• If voiding charts are used, make sure that families who don't manage to complete them are not lost
• Don't let concomitant daytime incontinence be a contraindication to enuresis treatment
• Stop using (daytime) urotherapy as a treatment of enuresis
Situations with strained resources, for the families and/or the healthcare system, deserve special mention. Here, the perfect can be the enemy of the good. We cannot expect that all families of enuretic children who seek healthcare assistance for the first time are able to adhere to time-consuming or labor-intensive evaluation methods or therapies. Likewise, we cannot expect primary care healthcare professionals without expertise regarding the pediatric LUT to be able to conduct state-of-the-art enuresis management the way we experts would do it. In this setting, awaiting new research findings, I suggest the following cornerstones of a simplified, "bare-bones" enuresis management strategy for primary care:
• At the first visit, focus on warning signals that indicate serious underlying conditions
• No need for voiding charts or measurement of nocturnal urine production
• Do not let concomitant daytime incontinence delay enuresis therapy
• Treat constipation only if it bothers the child or if there is also daytime incontinence
• Start directly with alarm or desmopressin treatment according to family preferences
• Seek the help of a psychiatrist/psychologist if the child has substantial problems with social interaction or school, but do not let this delay enuresis therapy
This way, an immense benefit could be gained for millions of children while we keep doing research in order to make future management strategies more evidence-based."} {"text": "LSD is an important transboundary disease affecting the cattle industry worldwide. The objectives of this study were to determine trends and significant change points, and to forecast the number of LSD outbreak reports in Africa, Europe, and Asia. LSD outbreak report data (January 2005 to January 2022) from the World Organization for Animal Health were analyzed. We determined statistically significant change points in the data using binary segmentation, and forecast the number of LSD reports using autoregressive integrated moving average (ARIMA) and neural network auto-regressive (NNAR) models. Four significant change points were identified for each continent. The period between the third and fourth change points (2016–2019) in the African data had the highest mean number of LSD reports. All change points of LSD outbreaks in Europe corresponded with massive outbreaks during 2015–2017. Asia had the highest number of LSD reports in 2019, after the third detected change point in 2018. For the next three years (2022–2024), both ARIMA and NNAR forecast a rise in the number of LSD reports in Africa and a steady number in Europe. However, ARIMA predicts a stable number of outbreaks in Asia, whereas NNAR predicts an increase in 2023–2024. This study provides information that contributes to a better understanding of the epidemiology of LSD. Lumpy skin disease (LSD) is an emerging transboundary viral disease caused by the lumpy skin disease virus (LSDV), which belongs to the Capripoxvirus genus of the Poxviridae family. Cattle are the main hosts, and arthropod vectors are considered important in its transmission. In 1929, the first outbreak of LSD occurred in Zambia, and in the next decade, the virus extended to sub-Saharan Africa. Change point analysis and trend analysis are statistical methods that are generally utilized to determine and monitor the behavior of time series data. Several research publications have provided critical information on the global status and regional or country situation of LSD outbreaks, for example, the spread of LSD from Africa to Europe, the Middle East, and Asia, and the epidemiology of outbreaks in individual regions. Disease forecasting utilizing well-accepted prediction methods is critical for developing strategic plans to monitor and prevent disease outbreaks. Predictions of COVID-19, which appeared in hundreds of publications, are a prime example of the widespread application of forecasting methodology. Systematically, LSD outbreak reports from various regions around the globe have been published continuously by the WOAH. For a better understanding of LSD epidemiology, the trends, change points of disease trends, and forecasts of LSD outbreaks are worth investigating. Thus, the aims of this study were: (i) to determine the trends and change points in the time series data, and (ii) to forecast the number of LSD reports based on data from Africa, Europe, and Asia. In this study, data on the number of LSD reports in Africa, Europe, and Asia from January 2005 to January 2022, publicly available on the official WOAH website (https://wahis.woah.org, accessed on 14 August 2020), were imported and analyzed. Based on the WOAH report file, the numbers of LSD reports are shown as biannual data.
For instance, 2020 has two semesters, with the first semester covering total LSD reports from January to June 2020, and the second semester covering July to December 2020. Change points were identified using binary segmentation. Given m change points, which divide the time series into m + 1 segments delimited by the locations $\tau_1 < \tau_2 < \dots < \tau_m$ (with $\tau_0 = 0$ and $\tau_{m+1} = n$), change point detection based on this technique is achieved by minimizing the function

$$\sum_{i=1}^{m+1} \mathcal{C}\!\left(y_{(\tau_{i-1}+1):\tau_i}\right) + \beta f(m), \qquad (1)$$

where $\mathcal{C}$ is a cost function measuring the homogeneity of each segment and $\beta f(m)$ is a penalty term guarding against overfitting. The ARIMA and NNAR models were utilized to predict the number of LSD reports over the next 3 years (2022–2024) for each continent. The ARIMA technique is based on the principle that future values of a time series are generated from a linear function of past observations and white noise terms. ARIMA has three parameters and can be written as ARIMA(p, d, q), with p the order of the autoregressive part, d the degree of differencing, and q the order of the moving-average part. The NNAR model uses lagged values of the time series data as inputs to a neural network; for non-seasonal data, it has the notation NNAR(p, k), with p lagged inputs and k nodes in the hidden layer. Candidate ARIMA models were identified by (i) differencing the data until they became stationary, (ii) examining the ACF and PACF of the differenced data and selecting potential candidate models, and (iii) comparing the selected models using the Akaike information criterion (AIC). The nnetar function, an automatic algorithm in the forecast package, provides a procedure to determine the best-fitting NNAR model as output. The forecasting of LSD outbreak reports was carried out using R statistical software and the "dplyr", "xts", "tsbox", "TSstudio", and "forecast" packages. Additionally, the African data were split into two datasets: one covering the years 2005–2015 (training set) and another covering the years 2016–2020 (validation set). The training set was used to build an ARIMA and an NNAR model, both of which were utilized to generate forecast values. Further, the forecasted values were compared to the actual ones in the validation set. In addition, error metrics, including mean absolute percentage error (MAPE), mean absolute scaled error (MASE), and root mean square error (RMSE), were calculated using functions from the "Metrics" package in order to measure the predictive abilities of the ARIMA and NNAR models. Overall, Africa had 29,966, Asia had 8837, and Europe had 2471 outbreak reports during the study period. Africa had an undulating trend during 2005–2019, and by the end of 2020, outbreaks had dropped sharply and remained consistently low, whereas Europe had a peak in 2016, a sharp decline in 2017, and then became stable, and Asia had three peaks throughout the period. Regarding the top five African nations reporting the most LSD outbreaks, Zimbabwe reported the most (n = 18,072), with the highest number occurring in 2014 (n = 1915). Ethiopia, ranked second, has been reporting outbreaks for several years. In Europe, the highest number of LSD reports (n = 524) was observed in 2016. North Macedonia, Albania, Montenegro, Russia, and Greece were the top five European nations to report LSD outbreaks that year. In Asia, Oman reported LSD outbreaks during the whole study period, with the maximum in 2019. Turkey reported LSD outbreaks from 2013 to 2019, with notably high numbers of LSD epidemics in 2014 and 2015. Iran had its highest number in 2019. During the period from 2021 to January 2022, Thailand had the highest number of LSD reports. The time-series data of the number of LSD reports have four change points for each continent. Technically, once the change points have been identified, the segments that correspond to them are represented.
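The analysis itself was performed in R with binary segmentation and the forecast package; purely as an illustration of the same pipeline, the sketch below uses Python stand-ins (ruptures for binary segmentation and statsmodels for ARIMA). The biannual counts and the ARIMA order (1, 1, 1) are invented placeholders, not the series or models selected in the study.

```python
import numpy as np
import ruptures as rpt
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical biannual LSD report counts: two values per year, 2005-2021.
y = np.array([120, 150,  90, 110, 200, 340, 310, 280, 260, 400,
              900, 1500, 1100, 600, 300, 250, 700, 1900, 1600, 800,
              400, 350, 300, 280, 260, 240, 500, 450, 200, 150,
              120, 100,  90,  80], dtype=float)

# Binary segmentation with an L2 (mean-shift) cost, requesting 4 change points,
# mirroring the four change points reported per continent.
bkps = rpt.Binseg(model="l2").fit(y).predict(n_bkps=4)
print("change point indices:", bkps)   # ruptures appends len(y) as the last index

# Fit an ARIMA(p, d, q) model and forecast the next 3 years (6 biannual steps).
fit = ARIMA(y, order=(1, 1, 1)).fit()
print("forecast 2022-2024:", np.round(fit.forecast(steps=6), 1))

# Train/validation split in the spirit of the African-data evaluation,
# scored here with MAPE only, for brevity.
train, test = y[:-6], y[-6:]
pred = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=6)
mape = np.mean(np.abs((test - pred) / test)) * 100
print(f"MAPE on held-out semesters: {mape:.1f}%")
```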
For example, the second segment is found between the first and second change points. It was observed that the fourth segment of the African data had the highest mean number of LSD reports. For Europe, all four change points were detected during 2015–2017. For Asia, four change points and five segments corresponding to them were identified. The ARIMA(p, d, q) and NNAR specifications obtained from the analysis are shown in the accompanying table, and the forecasts of LSD outbreaks in Africa, Europe, and Asia generated by ARIMA and NNAR are shown in the accompanying figure. Change points in LSD outbreak time series data provide information on times when significant changes occurred in the data, which is essential information for epidemiology, particularly in the temporal dimension. Forecasts of the number of LSD reports based on well-accepted forecast methods offer useful baseline data that can assist authorities with planning disease surveillance and prevention efforts. After the first outbreak in Zambia in 1929, the disease became prevalent in several regions of Africa. Our findings further show that the fourth segment of the African data had the highest mean number of reports. In Europe, four change points and only two segments were identified. These change points correspond to the situation of the disease in 2015–2016. The first change point was found in early 2015, when the first LSD outbreak occurred in Greece. In Asia, Turkey reported numerous LSD outbreaks from 2013 to 2016. In this study, we applied ARIMA and NNAR models to forecast the numbers of LSD outbreak reports. Overall, the number of outbreaks in Africa is expected to be higher than that reported in 2020–2021, whereas the number of outbreaks in Europe is projected to remain consistent. Forecasts of LSD outbreaks in Asia show an increasing trend in 2023–2024 based on the NNAR model, whereas ARIMA predicts a larger number of outbreaks than what occurred in January 2022. Notably, the results demonstrate that the prediction capabilities of both ARIMA and NNAR models tested with African data are not highly accurate, which may be influenced by the limited number of observations employed for model training. A follow-up study with more observations would allow for the development of more accurate forecast models. Moreover, our results revealed that the prediction abilities of the ARIMA and NNAR models were approximately comparable. This could be due to the fact that the data set contains both linear and non-linear patterns, and, therefore, the strengths of one model may not provide an advantage over another. Our forecasts offer authorities useful information that can be incorporated into strategies to monitor and prevent future LSD outbreaks. Of note, the forecasts are generated from past observations; thus, they do not account for any future situation or implementation. If interventions such as more effective control measures are adopted, it is likely that fewer outbreak reports will be received than anticipated. In this aspect, we suggest using the forecast numbers as basic information or benchmarks, with the goal of keeping the number of outbreaks below these figures. The current study has several limitations. We were unable to determine the seasonality characteristics of the number of LSD outbreak reports due to the biannual format of the available data. Accordingly, it would be advantageous for future research if the data from WOAH were made public in a monthly format. Moreover, it is important to note that forecast results should be interpreted with caution.
Because forecasts are based on previous observations and patterns, interventions and changes in disease drivers in the future, which may change the patterns, will have an impact on the actual disease occurrences, and, therefore, our forecasts may be over- or underestimated. Furthermore, there may be underreporting of LSD outbreaks in some countries during certain periods, so the reports used in this study may not represent the actual situation. Moreover, forecasting was limited to two methods. Thus, follow-up studies to investigate other methods of forecasting the number of LSD outbreak reports are warranted. It is notable that LSDV isolates from outbreaks in some countries are genetically related. In this work, we used a statistical approach to identify major changes in the data underlying LSD outbreak reports. Additionally, we utilized time series models to forecast the number of LSD outbreak reports in Africa, Europe, and Asia during 2022–2024. Although LSD outbreak reports in Africa appear to have been decreasing since 2020, it is expected that the number of reports will increase slightly. The number of LSD outbreak reports in Europe is projected to continue the previous 5-year steady trend. Additionally, the forecast predicts an increase in the number of outbreak reports in Asia. These findings indicate that LSD remains a substantial threat to the cattle industry in various countries; thus, efforts should be made to monitor its spread within and between regions. Additionally, because LSD is regarded as a significant transboundary disease, strict disease prevention and control in every country are critical. Furthermore, coordination among nations to control and eradicate the disease is essential."} {"text": "Pseudomonas aeruginosa (Pa) (10^8 CFU ml^-1) was inoculated to wheat plants with and without foliar-applied MLEs at two different concentrations (MLE 1 = 1:15 v/v and MLE 2 = 1:30 v/v) twice, at 25 and 35 days after seed sowing (50 ml per plant), after the establishment of drought stress. Results revealed that Pa + MLE 2 significantly increased fresh weight (FW), dry weight (DW), lengths of roots and shoots, and photosynthetic pigment contents of wheat. A significant enhancement in total soluble sugars, total soluble proteins, calcium, potassium, phosphate, and nitrate contents validated the beneficial effect of Pa + MLE 2 over control-treated plants. Significant decreases in sodium, proline, glycine betaine, electrolyte leakage, malondialdehyde, hydrogen peroxide, superoxide dismutase (SOD), and peroxidase (POD) concentrations in wheat cultivated under drought stress conditions also represent the imperative role of Pa + MLE 2 over control. In conclusion, Pa + MLE 2 can alleviate nutritional stress and drought effects in wheat. More research in this field is required to confirm Pa + MLE 2 as the most effective amendment against drought stress in distinct agroecological zones, different soil types, and contrasting wheat cultivars worldwide. Low nutrient availability and drought stress are serious concerns in agriculture. Both biotic and abiotic stress factors have the potential to limit crop productivity. However, several organic extracts obtained from moringa leaves may induce immunity in plants under nutritional and drought stress, increasing their survival. Additionally, some rhizobacterial strains have the ability to enhance root growth for better nutrient and water uptake in stress conditions.
To cover the knowledge gap on the interactive effects of beneficial rhizobacteria and moringa leaf extracts (MLEs), this study was conducted. The aim of this experimental study was to investigate the effectiveness of sole and combined use of rhizobacteria and MLEs against nutritional and drought stress in wheat. Crop production is always under pressure to increase and sustain the food demands, considering the estimated increase in the global population from the current 7.7 billion to approximately 9.6 billion in the year 2050. Water constitutes approximately 80–90% of the total biomass of herbaceous plants and is crucial in almost all plant physiological processes, being a principal means of nutrient and metabolite transport. The induction of drought stress tolerance has been described by several physiological and biochemical changes. Several microorganisms, mainly bacteria, colonize the plant root zone. Moringa oleifera Lam belongs to the Moringaceae family and is native to the subcontinent. Moringa leaf extract (MLE), as a plant biostimulant in foliar application, enhances the growth of plants even under abiotic stress conditions. Wheat (Triticum aestivum L.) is one of the most important cereal crops worldwide, considering its production and human consumption, supplying food for almost one-third of the global population. Harvesting higher yields to meet the future food demands of the increasing population is a major agricultural concern at all times, and different environmental factors contribute to crop yields. In earlier studies, treating plants with rhizobacteria and the effects of foliar-sprayed MLE were assessed solely on wheat grown under drought stress conditions by considering their ameliorative implications on various physiological and biochemical attributes, but their combined implications are still not confirmed. This study covers that knowledge gap in the combined use of NFB and MLE under well-watered and drought stress conditions. Therefore, this study was conducted to analyze the ameliorative effects and the best co-application of the applied amendments in alleviating drought stress effects on wheat plants. The Anaj 17 genotype was selected because it is one of the latest genotypes developed in the past 5 years in Pakistan and it has performed well under abiotic stress conditions, so we wanted to check its response to the application of MLE and rhizobacteria. It is hypothesized that the combined application of MLE and NFB might be a more effective approach than their sole applications to improve wheat growth attributes under drought stress.
It is hypothesized that MLE and rhizobacterial combined application might be an effective approach to improve wheat attributes than their sole applications in drought effects.The wheat crop was sown at the experimental site of the Government College University Faisalabad . A greenhouse pot experiment was carried out from November to January. The experimental layout was a randomized complete block design (RCBD) that was replicated three times.\u20131), pH (water) 7.3; organic matter contents 1.38%; available N 0.032 ppm, available P 5.93 ppm, and available K 32.3 ppm.The clay loam soil (8\u201312 inches in depth) collected from the experimental site of the Government College University Faisalabad was air-dried and sieved through a 2-mm sieve. The collected soils were sterilized through solarization, by covering the soil with a thin layer of plastic sheet. The heat from the sun builds up the temperature of the soil to kill most of the bacteria, weeds, and pests . Some soA well-adapted wheat genotype Anaj 17, which performs well under abiotic stress conditions, was selected. Seeds were disinfected using 95% ethanol and washed using 70% sodium hypochlorite solution followed by rinsing with distilled water three times.Pseudomonas aeruginosa (Pa) (strain) [inoculation with Pa], (iii) MLE 1 [foliar sprayed MLE at 1:15], (iv) MLE 2 [foliar sprayed MLE at 1:30], (v) Pa + MLE 1 [inoculation with Pa bacteria + foliar sprayed MLE at 1:15], and (vi) Pa + MLE 2 [inoculation with Pa bacteria + foliar sprayed MLE at 1:30].The treatments (two sets of pots) were as follows: (i) control 2O), and diammonium phosphate [24]. Initially, ten seeds were sown in each pot containing 12 kg clay loamy soil (25 cm diameter \u00d7 30 cm height), and after complete emergence, they were thinned to six plants per pot.At the start of the experiment, the soil was fertilized with a basal dose of N\u2013P\u2013K fertilizer (0.51\u20130.45\u20130.38 N\u2013P\u2013K g) using urea (46% N), sulfate of potash (50% K\u20137 were spread on Luria-Bertani (LB) agar plates at 37\u00b0C temperature and inoculated overnight. Bacterial growth was determined by measuring optical density at 600 nm using a spectrophotometer was shifted to canopies for the imposition of drought stress. Well-watered conditions were maintained at 70 FC (70% field capacity), while drought stress was maintained at 45 FC (45% field capacity). The plants that remained were kept at different water regimes until harvesting.An aerobic PGPR strain of free-living soil nitrogen-fixing bacteria; Pa (Pa strain) isolated from the rhizosphere of wheat roots growing in local field areas by serial dilution method was used in this study. The serial dilutions up to 10otometer . Before \u20131 FW, carotenoids = 1.58 mg g\u20131 FW, total phenolics = 1.68 \u03bcmol g\u20131 FW, mg g\u20131 FW, nitrogen = 14.13 mg g\u20131 DW, phosphorous = 2.98 mg g\u20131 DW, potassium = 11.97 mg g\u20131 DW, and calcium = 16.8 mg g\u20131 DW. The extracts were freshly prepared before their application. Distilled water was sprayed on control plants in both applications. MLE was sprayed to a pot-grown wheat twice after 25 and 35 days of sowing date (50 ml per plant) after the establishment of drought stress. Tween-20 surfactant was used in a foliar spray (0.1% v/v).Fresh and disease-free moringa leaves were collected and rinsed with water. Notably, a 100-g leaf sample was extracted in 1 L distilled water (1:10 w/v) for 15 min . Later, Harvest was done after 45 days of planting. 
Root and shoot fresh weights (FWs) and lengths were measured immediately after harvest at the experimental site. Fresh samples were stored at -30\u00b0C in a biomedical refrigerator for fresh analysis. Three samples per treatment were oven-dried (65\u00b0C) for 3 days to determine their dry weights (DWs) and ionic content analysis using the acid digestion method.The chlorophyll contents of wheat leaves were determined as described by Stomatal conductance (gs) of fully developed leaves (three plants per treatment) was measured by putting them in a portable infrared gas analyzer chamber . The measurements were made 6 days after the first MLE foliar spray.Fully expanded leaves from each replicate were taken, wrapped in aluminum foils, immersed in liquid nitrogen, and transferred into plastic zipper bags. These samples were stored at \u201380\u00b0C for further analysis. Following, biochemical analysis was performed using a spectrophotometer .To determine osmolytes as sugars and non-enzymatic antioxidants, 50 mg of dried leaves were homogenized in 10 ml of 80% ethanol and filtered followed by the re-extraction in 10 ml ethanol, and a 20 ml of the final volume was maintained. This obtained solution was used to evaluate flavonoids , solubleFor the determination of P contents in wheat molybdate/ascorbic acid, the blue technique was used, and nitrate contents were assessed by post-hoc test, which was performed to measure specific differences between treatments using the Duncan\u2019s Multiple Range Test (DMRT) in a completely randomized block design. The significant diffidences between treatment means were determined using analysis of variance and mean separation at a 5% significance level (p \u2264 0.05). In addition, the Pearson correlation of different wheat attributes under drought stress and well-watered conditions was performed. Logarithmic data transformation to obtain near-normal distribution was implemented before analysis, where required.Statistical analysis of data was performed using a All applied treatments had a significant positive effect on root and shoot FWs, DWs, and lengths under control and drought stress conditions . In thisAll photosynthetic pigments such as chlorophyll a, chlorophyll b, total chlorophyll, and carotenoids were significantly affected when subjected to drought stress conditions (45 FC) as compared with well-irrigated conditions (70 FC) . All appFlavonoids, phenolics, total soluble sugars, total soluble proteins, and stomatal conductance of wheat plants were significantly affected in drought stress (45 FC) compared with well-watered plants (70 FC) . Both so2O2) increased significantly in plants subjected to 45 FC water conditions. Furthermore, antioxidant enzyme [superoxide dismutase (SOD) and peroxide (POD)] activities were increased in drought stress.Ionic nutrient contents increased significantly in plants that were irrigated at 70 FC compared with plants irrigated with water to reach only 45 FC . However2O2, and SOD and POD contents significantly. Also, they increased calcium, potassium, phosphate, and nitrate concentrations in plant tissues. It is worth mentioning in this study that sole and combined applications of either MLE or Pa were better in decreasing the oxidative stress indicators and increasing nutrient contents of wheat, and this signifies the success of these ameliorating treatments for drought stress. 
Pa + MLE 1 and Pa + MLE 2 recorded the lowest oxidative stress indicators and enzymatic antioxidant activities alongside greater nutrient assimilation in plants. [Figure legend: all values are means of three replicates ± SD; different letters indicate significant differences (least significant difference (LSD) test); 70 FC = well-irrigated conditions; 45 FC = drought stress conditions; C = control; Pa (strain) = P. aeruginosa-inoculated plants; MLE 1 = foliar-applied MLE at 1:15; MLE 2 = foliar-applied MLE at 1:30.] A significant positive correlation exists between plant morphological attributes and photosynthetic pigments. This study evaluates the effect of MLE (MLE 1 = 1:15 v/v and MLE 2 = 1:30 v/v) and the PGPR strain (Pa) on the drought stress tolerance ability of wheat (Anaj 17 genotype) under two irrigation regimes (45 FC and 70 FC). The use of organic or biofertilizers as a global initiative instead of costly chemical fertilizers has been addressed by several researchers. Earlier work with Bacillus sp. and Klebsiella sp. concluded that inoculated plants have more DW than non-inoculated ones. The highest dry matter contents in both root and shoot were obtained with the combined application of the Pa + MLE 2 mixture, which agrees with previous findings describing that beneficial soil bacteria (e.g., Pseudomonas sp. and Azospirillum sp.) can promote plant growth in abiotic stress conditions. Photosynthetic pigment contents are considered a valuable physiological indicator for evaluating the damage caused by stress intensity. The wheat genotype in this study had higher chlorophyll contents when supplemented with Pa and MLE, with the highest value recorded in their combined treatment at both water regimes. This alteration is directly related to plant macronutrient contents such as nitrogen. The reactive oxygen species (ROS) produced during abiotic stress cause the reduction of chlorophyll contents by damaging photosynthetic machinery. P. aeruginosa and MLE applied separately and in combination increased levels of calcium, potassium, phosphate, and nitrate in wheat submitted to drought. This might be due to nutrient solubilization by the added microorganisms and their production of ACC deaminase. Overall, rhizobacteria and foliar-sprayed MLE, either in sole or combined treatments, can alleviate nutrient and drought stress effects. Nevertheless, the combined application of Pa and MLE 2 (Pa + MLE 2) can efficiently improve wheat growth attributes, photosynthetic pigment contents, and nutrient uptake under drought stress. The combined supplementation of Pa and MLE had more significant positive effects compared with their sole applications. Pa + MLE 2 was also efficient in decreasing oxidative stress indicators and enzymatic antioxidants of wheat under drought stress. There is a need for more investigations at field level, considering the effect of other environmental constraints, to effectively validate these findings. The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. IL, SA-D, LA-A, RH, KAM, NM, SA, MA, AA, PP, KM, and TG: researching and writing. AA, SA, and LA-A: writing.
All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The reviewer FZ declared a shared affiliation with one of the authors, RH, to the handling editor at the time of the review. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "International guidance recognizes the shortcomings of the modified Duke Criteria (mDC) in diagnosing infective endocarditis (IE) when transoesophageal echocardiography (TOE) is equivocal. We investigated the incremental benefit of PET over the mDC in NVE. Dual-center retrospective study (2010-2018) of patients undergoing myocardial suppression PET for NVE and PVE. Cases were classified by mDC pre- and post-PET, and evaluated against discharge diagnosis. Receiver Operating Characteristic (ROC) analysis and net reclassification index (NRI) assessed diagnostic performance. Valve standardized uptake value (SUV) was recorded. 69/88 PET studies were evaluated across 668 patients. At discharge, 20/32 had confirmed NVE, 22/37 PVE, and 19/69 patients required surgery. PET accurately re-classified patients from possible to definite or rejected, with significant incremental benefit in both NVE (AUC 0.883 vs 0.750) and PVE (0.877 vs 0.633). Sensitivity and specificity were 75% and 92% in NVE; 87% and 86% in PVE. Duration of antibiotics and C-reactive protein level did not impact performance. No diagnostic SUV cut-off was identified. PET improves diagnostic certainty when combined with mDC in NVE and PVE. The online version contains supplementary material available at 10.1007/s12350-021-02689-5. Infective endocarditis (IE) is classically viewed as a rare diagnosis; however, its incidence has risen since the turn of the century, associated with a surge in cardiovascular intervention in an aging population. 18F-Fluorodeoxyglucose positron emission tomography with computed tomography (PET) to aid in the diagnosis of prosthetic valve endocarditis (PVE) has been increasingly reported, with high sensitivity and specificity. Barts Heart Centre (BHC) was formed in May 2015 following a merger of the cardiology departments of Barts Health NHS Trust and University College London Hospitals (UCLH) NHS Trust. This merger made BHC the single cardiac surgery referral center for North Central and East London, and resulted in a significant increase in the number of IE cases seen. In line with the European Society of Cardiology (ESC) IE guidelines, this prompted formalization of the UCLH model to form an Endocarditis Team. Under the terms of an overarching audit, a dual-center retrospective review identified all patients undergoing PET for IE from January 2010 to December 2018. Patients imaged early in the post-operative period following valve surgery for IE (< 3 months), those with CIED-only IE, and studies with failure of myocardial suppression were excluded in consensus.
The definitive discharge diagnosis was recorded by surgical specimen in those who underwent operative intervention, or by Endocarditis Team consensus in those medically managed (excluding the PET findings). The Endocarditis Team reviews all cases of IE referred to our Institution on a weekly basis. Prior to the formation of the BHC team, the core members led the clinical care of patients with IE at UCLH via ad hoc bedside discussion. Following administration of 18F-FDG (4.5 MBq/kg; mean activity 157 ± 39 MBq), we performed combined imaging with an MI DR PET-CT scanner at a mean time of 64 minutes (SD 13 minutes) after injection. An unenhanced, ungated CT was performed from vertex to thigh for attenuation correction. A subsequent PET was performed at a bed overlap of 49% and a time per bed position of 100s. The reconstruction method was VUE Point FX, with 2 iterations, 12 subsets and a 5 mm Gaussian filter. All studies were standardized for display and reading with an SUV window threshold of 0-10. All patients underwent a myocardial suppression technique to suppress metabolic activity in the myocardium. This was achieved using a > 24-hour high fat, carbohydrate-restricted diet, a > 12-hour fast and intravenous injection of unfractionated heparin (50 IU/kg) 60 minutes prior to assessment with PET. PET images were read in a blinded fashion by two independent investigators (CP & LM) with joint reading to resolve discrepancies in consensus, using attenuation and non-attenuation corrected images (the latter in particular for PVE). Myocardial suppression was graded as good, fair, poor or non-diagnostic. Studies were assessed for avidity of the culprit valve. The pattern and distribution of avidity was categorized as focal, heterogeneous, homogeneous or none. An overall verdict (yes/no) was given on a case-by-case basis as to whether the PET was suspicious for IE or not, with a study considered positive if uptake was either 'focal' or 'heterogeneous'. Note was made if PET suggested an alternative diagnosis. An elliptical region of interest (ROI) was placed over the valve, mediastinal blood pool and liver for semi-quantitative assessment of avidity using absolute mean and maximum standardized uptake values (SUV), allowing target-to-background analysis. In addition to avidity at the level of the cardiac valves, note was made of the presence of extracardiac uptake. Studies were analyzed using the freely available Horos (version 3.3.5). GraphPad Prism (version 7.0) and SPSS (version 25) were used for statistical analyses. Descriptive statistics were calculated for continuous variables, and χ2 for categorical data. Diagnostic performance was evaluated using Receiver Operating Characteristic (ROC) analysis and net reclassification index (NRI). Respective sensitivities, specificities, positive and negative predictive values for PET in both native and prosthetic valve disease were calculated using the discharge diagnoses, categorized as either confirmed or rejected IE as described above. PET was undertaken in 88/668 patients, with 69 studies (10.3%) eligible for inclusion; 59/404 (14.6%) following the formalization of the Endocarditis Team in October 2015. The cohort featured 48 male patients, with an overall mean age of 61 years (range 21-89 years). Thirty-two (46%) were native valve patients and 37 cases had prosthetic valves, of which 20 were tissue and 17 mechanical prostheses. All patients, except one NVE patient, underwent assessment with TOE.
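The diagnostic-performance metrics used here are simple to compute once each case has a final diagnosis and a pre-/post-PET classification. The sketch below, on invented labels, derives sensitivity and specificity and a basic two-category net reclassification index; it illustrates the metrics rather than reproducing the study's analysis.

```python
import numpy as np

def sens_spec(truth: np.ndarray, test: np.ndarray) -> tuple:
    """Sensitivity and specificity for binary arrays (1 = IE confirmed/positive)."""
    tp = np.sum((truth == 1) & (test == 1))
    tn = np.sum((truth == 0) & (test == 0))
    fn = np.sum((truth == 1) & (test == 0))
    fp = np.sum((truth == 0) & (test == 1))
    return tp / (tp + fn), tn / (tn + fp)

def nri(truth: np.ndarray, before: np.ndarray, after: np.ndarray) -> float:
    """Two-category NRI: net upward moves in events plus net downward in non-events."""
    events, nonevents = truth == 1, truth == 0
    up_e   = np.mean(after[events] > before[events])
    down_e = np.mean(after[events] < before[events])
    up_n   = np.mean(after[nonevents] > before[nonevents])
    down_n = np.mean(after[nonevents] < before[nonevents])
    return (up_e - down_e) + (down_n - up_n)

# Invented example: 10 patients; classification 0 = rejected, 1 = definite,
# scored before (mDC alone) and after adding PET.
truth  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
before = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])
after  = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])

print(sens_spec(truth, after))                 # (0.8, 1.0) in this toy example
print(f"NRI = {nri(truth, before, after):.2f}")
```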
Further baseline characteristics are described in the accompanying baseline table. Staphylococcus aureus was isolated in 20/69 (29.0%) patients, and 18/69 (26.1%) were peripheral blood culture-negative (BCNIE). At discharge, 20/32 (63%) had confirmed NVE and 22/37 (59%) confirmed PVE, giving a total prevalence of IE in our cohort of 61%. Nineteen (28%) patients required surgical intervention, 9 (28%) with NVE and 10 (27%) with PVE, with the remaining 50/69 cases managed medically, as per European Society of Cardiology (ESC) and British Society for Antimicrobial Chemotherapy (BSAC) guidance.

Mean CRP at the time of PET was 38.6 (SD ± 29.8) mg/L in 20/32 NVE cases and 39.3 (SD ± 45.4) mg/L in 29/37 PVE cases. Prolonged duration of antibiotic therapy was associated with a downward trend in CRP at the time of PET. There was no difference in PET performance with CRP < 40 versus ≥ 40 mg/L in NVE, PVE, or overall (P > 0.10). The median duration of IE-targeted antibiotic therapy pre-PET was 20.6 (IQR 9.5-25.0) days in 30/32 NVE cases and 17.0 (IQR 11.5-33.0) days in 33/37 PVE cases. Categorical analysis of median duration of antibiotics showed no significant impact on PET performance in NVE, PVE, or overall (P > 0.10).

During follow-up, there were 9 episodes of further IE in 8 patients, with 1 treatment failure in a medically managed patient unfit for surgery. Median IE-free duration was 379 days in these individuals (range 28-1095 days). Of these episodes, five required surgical intervention during the index admission and four were managed medically. Independent review of the mDC post-PET and the discharge diagnosis made by the Endocarditis Team showed PET to have correctly confirmed or refuted IE in all cases.

ROC analysis demonstrated significant incremental benefit of PET over the mDC alone in NVE (AUC 0.883 vs 0.750; P < 0.001) and PVE (0.877 vs 0.633) when compared against the discharge diagnosis. Three studies were excluded from analysis due to complete failure of myocardial suppression, rendering the scans non-diagnostic; all remaining studies were evaluable. PET sensitivity and specificity were 75% and 92% in NVE, and 87% and 86% in PVE. No diagnostic cut-off was identified for SUVmax or SUVmean, nor when these parameters were normalized to hepatic or mediastinal blood-pool uptake. Uptake was focal in 18 (49%) of PVE cases.

The climbing incidence of IE and the clear benefits of the Endocarditis Team mandate a guideline-driven approach to the diagnosis of IE. IE remains a diagnostic challenge, especially in the absence of a positive major criterion of the mDC. In our cohort, all patients had ongoing clinical suspicion of IE, following equivocal TOE in 68/69 cases, and 26% had likely BCNIE. Our data add further weight to the growing body of evidence supporting the sensitivity and specificity of PET in PVE, and corroborate our ability to utilize PET appropriately in IE. However, we have now additionally demonstrated comparable sensitivity and specificity in NVE, at 75% and 92%, respectively, in keeping with the one similar series to date. Given the lack of typical TOE findings of significant valvular insufficiency and/or vegetations in this cohort, it is unsurprising that fifty patients (72%) lacked a surgical indication and were managed medically. This is despite 99% of patients having at least one TOE, a high rate compared with 81%-88% of patients in other studies. Identifying the factors responsible for the high performance of PET in the current cohort is critical.
Factors thought responsible for the low sensitivity of PET in NVE are well summarized in the literature, arising from both technical limitations of the modality and the pathophysiology of NVE. Achieving adequate myocardial suppression is imperative to the successful use of PET in IE; ~85% of suppression was graded as good or fair in our cohort, and only 3/72 (4%) studies were found to be non-diagnostic and therefore excluded from analysis. Even when myocardial suppression was poor, meaningful valvular assessment was still possible in this cohort based on visual analysis. This compares with failure of myocardial suppression in 5%-32% of the cohort in similar studies of NVE and PVE.

In our practice, PET is used where the diagnosis remains unclear despite 99% utilization of high-quality TOE (with lower rates in other series), including in patients with NVE. This is particularly important because this group of patients typically lacks valve findings that mandate early surgery, hence the diagnostic equipoise. Our approach is therefore to repeat echocardiography prior to PET, in order to identify those in whom valve dysfunction may have developed. However, this is reflected in the long median duration of antibiotics prior to PET and the downward trend in CRP when compared with other studies.

We would suggest that the use of PET in IE requires involvement of an Endocarditis Team with high-volume throughput. This will both optimize case selection and improve technical reading of the study. This is particularly important for correctly distinguishing valve from myocardial tracer uptake, and for recognizing patterns of uptake typical and atypical for IE, especially following previous cardiac surgery. The results of this study suggest that the real-world application of PET to patients with IE has meaningful benefit. Nonetheless, a formal prospective multicenter diagnostic accuracy study with hard endpoints is warranted, and we would argue it should include patients with both NVE and PVE. Advances in IE have been hindered by its low incidence and the resulting lack of randomized trials.

The incidence of IE and the limitations of the mDC may introduce bias when relying on expert consensus to confirm or refute IE in medically managed patients; this is a significant issue in all IE studies without surgical specimens. However, this is precisely where current guidance supports the use of PET: where the diagnosis is not clear and surgical intervention is not necessarily mandated. Blinded scoring of the mDC and imaging analysis by the research team reduce the limitations of expert consensus in our study, and this is supported by the net reclassification following PET, highlighting the benefit of PET overall and for reclassification of individual patients. This impact is further mitigated by follow-up data suggesting the correct diagnosis was made, especially given the low incidence of recurrent episodes of IE over 212 patient-years. Despite these limitations, the incremental benefit of PET in both NVE and PVE described herein suggests meaningful benefit. However, we would only advocate the routine use of PET where the diagnosis remains equivocal after high-quality TOE and surgery is not mandated for another indication. In this retrospective analysis, we highlight the incremental benefit of PET for the diagnosis of IE in both native and prosthetic disease. PET performs well irrespective of inflammatory markers or duration of IE-focussed antibiotic treatment.
We advocate the use of PET by expert Endocarditis Teams where either NVE or PVE is suspected but TOE remains equivocal. The literature consistently identifies poor sensitivity of PET/CT for NVE. We highlight that, in a high-volume center, PET can contribute meaningfully to the diagnosis of both NVE and PVE. PET provides meaningful information at valve level in PVE and NVE, helping to confirm or refute the diagnosis (NRI), outperforms the mDC alone (AUC), and shows higher sensitivity in NVE than previously reported. We further explore duration of antibiotics, CRP at the time of PET, and time to PET to explain why our findings differ from the rest of the literature. Supplementary file 1 (PPTX 3408 kb) and supplementary file 2 (PPTX 3221 kb) accompany the online version of this article.

Food networks present varying food safety concerns because of the complexity of interactions, production, and handling practices. We investigated total bacteria counts (TBCs) and total coliform counts (TCCs) in various nodes of a Nairobi dairy value chain and identified practices that influence food safety. A value chain analysis framework facilitated qualitative data collection through 23 key informant interviews and 20 focus group discussions, and content thematic analysis identified food safety challenges. Cow milk products (N = 290) were collected from farms (N = 63), collection centers (N = 5), shops/kiosks (N = 37), milk bars (N = 17), roadside vendors (N = 14), restaurants (N = 3), milk vending machines (N = 2), mobile traders (N = 2), and a supermarket (N = 1). Mean values of colony-forming units for TBC and TCC were referenced to East African Standards (EAS). Logistic regression analysis assessed differences in milk acceptability based on EAS. The raw milk from farms and collection centers was relatively within acceptable EAS limits in terms of TBC (3.5 × 10⁵ and 1.4 × 10⁶ cfu/ml, respectively), but TCC in the milk from farms was 3 times higher than the EAS limit (1.5 × 10⁵ cfu/ml). Compared to farms, the odds of milk acceptability based on TBC were lower in milk bars (OR 0.02), restaurants (0.02), roadside vendors (0.03), shops/kiosks (0.07), and supermarkets (0.17). For TCC, the odds that milk samples from collection centers, milk bars, restaurants, roadside vendors, and shops/kiosks were acceptable were likewise lower than the odds for samples collected from farms. Comparison of raw milk across the nodes showed that the odds of milk samples from restaurants, roadside vendors, and shops/kiosks being acceptable were lower than for farm samples for TBC; for TCC, the odds of raw milk from collection centers, restaurants, roadside vendors, milk bars, and shops/kiosks being acceptable were lower than the odds of acceptability for farm samples. Practices with possible influence on milk bacterial quality included muddy cowsheds, unconventional animal feed sources, re-use of spoilt raw milk, milk adulteration, acceptance of low-quality milk for processing, and lack of a cold chain. Therefore, milk contamination occurs at various points, and the design of interventions should focus on every node.

The global dairy industry, comprising ~265 million cows, has continued to grow over the past decade, with milk production increasing from 590 million tons in 2009 to 683 million tons in 2018.
Increased demand for milk consumption, coupled with its predicted low supply, will put pressure on existing value chains and trigger the evolution of more milk supply chains, further complicating already complex food systems. Food systems present some of the most complicated networks, especially in urban areas where production and distribution run through simple to complex value chains. According to the national livestock production report of 2012, Nairobi, one of the fastest-growing urban cities in Africa, is itself a significant milk-producing area. Based on the annual per capita consumption and the city's projected growth rate of approximately 4%, demand for milk in the city can be expected to continue rising. Achieving food safety in such complex food systems is a challenge, particularly because milk is produced primarily by small-scale farmers and marketing channels are dominated by informal systems. There are many factors that may contribute to unsafe milk, including contamination with pathogens such as Mycobacterium species. This study was conducted within the Urban Zoo project in Nairobi, Kenya. The overall objective of the Urban Zoo project was to understand mechanisms leading to the introduction and transmission of pathogens to urban populations through livestock commodity value chains in 33 sub-locations of Nairobi. Most of the qualitative data used in this study were collected during the mapping of the dairy value chain implemented by the project.

To comprehensively map the bacteriological (TBC and TCC) landscape of milk in Nairobi, milk sampling was conducted at the nodes identified during the initial mapping by Kiambi et al. These included production, bulking, and retail nodes. The selection of participants for focus group discussions (FGDs) was facilitated by government animal health assistants (AHAs) in the sub-county of focus. This, however, contrasted with the FGD with KDB licensing officers, which was organized by the KDB head office, and the Kibera FGDs, which were facilitated by a community mobilizer. Guidance on the selection of participants was given to the AHAs/community mobilizer such that each group had adequate gender representation, wide geographical coverage, and participants well versed in the dairy systems of the area. Each group of participants was selected based on their specific type of business/enterprise as described by the stakeholder analysis. The FGDs conducted included dairy cow farmers from Kibera (an urban informal settlement), farmers in Kikuyu and Dagoretti (peri-urban areas), dairy cooperatives, traders not associated with the DTA, retailers, LPOs, PHOs, KDB officers in charge of licensing, and the city council of Nairobi. Key informant interviews were conducted with managers of a feed manufacturing company, officials of the DTA, various managers of large processing companies, the KDB, and representatives from the Directorate of Veterinary Services and the Department of Livestock Production. Twenty FGDs with 105 people and 23 key informant interviews (KIIs) with 35 people were conducted. For each of the FGDs, a checklist with open-ended questions was used for data collection.
Qualitative data were collected on: (i) production practices, including housing of the livestock, sourcing of livestock feeds, management of animal health, and management of milk obtained from cows undergoing treatment; (ii) milk handling practices in bulking centers, including assurance of milk safety/quality, management of milk rejected for poor quality or spoilage, transportation of raw milk from farms, and processing practices that could influence food safety; (iii) practices at the retail level, including sourcing of milk, transportation, value addition, and management of spoilt milk; and (iv) waste management and food safety management practices. In addition, all participants were asked to describe challenges that they or others working in the milk system experienced that could compromise food safety. The information gathered through FGDs was triangulated during KIIs. When discrepancies were detected, additional interviews were conducted with other experts working in, or conducting research on, the dairy value chain.

In addition to data from the FGDs and KIIs, epidemiological data were collected from the nodes of the value chain where milk samples were taken. At each node, a pretested questionnaire was administered to collect data on: (i) type of node, (ii) amount of milk handled per day, (iii) type of milk and milk products handled, (iv) main sources of milk, (v) methods of milk preservation, and (vi) costs related to buying and selling of milk. Documentation of the interview process was aided by video and voice recording (after participant consent) and note-taking.

The selection of sampling sites involved entering the 33 sub-locations (where the Urban Zoo project was working) into a Microsoft Excel worksheet and using its random-sampling functionality (http://www.wikihow.com/Create-a-Random-Sample-in-Excel) to facilitate random selection of one sub-location from a peri-urban area and another from an informal settlement, representing milk chains in the two different settings. The Uthiru and Korogocho areas were selected. Identification of milk chain segments within each sub-location was aided by the area administrative officer (chief). These segments included various production units (farms), collection centers, distributors, and milk retailers. The nearest dairy farm to the chief's office was the first to be enrolled in the study upon obtaining consent. Subsequently, the next nearest farm in the same village was identified, and the procedure was repeated up to a maximum of four farms within one single area in a village; this was considered cluster one. The research team then moved about 200-300 m in the Uthiru area, and about every 50 m in the Korogocho area (because of much higher household density), from the first cluster to the next within the same village, and the process was repeated. This was done throughout the village until the team returned to the starting point, before proceeding to the next village, where the same procedure was repeated. For retailers, up to four milk vendors identified between and within the clusters and between the villages were enrolled. These included shops/kiosks, restaurants, milk bars, roadside vendors, supermarkets, automated milk machines, milk collection centers, and mobile traders (hawkers).
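For reproducibility, the Excel-based random selection of sub-locations described above can equally be scripted. Below is a minimal Python sketch of that step; the pooled sub-location names are hypothetical placeholders, not the project's actual lists.

import random

# Hypothetical sub-location pools (placeholders, not the project's lists).
peri_urban = ["Uthiru", "Kikuyu", "Dagoretti", "Kabete"]
informal = ["Korogocho", "Kibera", "Mathare", "Mukuru"]

random.seed(2015)  # fixed seed so the draw can be reproduced

# One sub-location per setting, mirroring the Excel random-sample step.
selected = {
    "peri_urban": random.choice(peri_urban),
    "informal": random.choice(informal),
}
print(selected)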
Biological sampling of cow milk was conducted in farms, milk bars, shops/kiosks, supermarkets, restaurants, roadside vendors, milk vending machines, and milk collection centers. The milk types sampled included raw milk, pasteurized liquid milk, ultra-heat-treated (UHT) milk, fermented milk, and yogurt. On farms, milk samples were collected only in the morning hours when the temperature was cool. Each farmer was requested to milk about 50 ml directly into a sterile barcoded falcon tube; if the farmer was unable to milk for whatever reason, they were requested to provide whatever remained from the last milking. To obtain about 50 ml of milk from the other nodes (retail and bulking centers), participants were requested to transfer the sample directly into the sterile barcoded falcon tubes. If the milk was in packets or sealed bottles, the entire content was purchased. All milk samples were immediately placed in a cool box packed with ice packs and transported to the laboratory within 2-4 h of collection.

For the determination of TBC, samples were prepared according to the protocol described by Christen et al. For each sample, serial dilutions (10⁻¹-10⁻⁴) were prepared in a sterile phosphate-buffered diluent (0.0425 g of potassium dihydrogen phosphate per L of distilled water), pH 7.2. Enumeration of TBC was conducted using sterile standard plate count agar (SPCA) prepared according to the manufacturer's instructions. One milliliter of the undiluted milk sample and of each of the four serial dilutions was aseptically pipetted into a separate sterile, pre-labeled, disposable 90-mm-diameter Petri dish, onto which freshly prepared agar was poured. The mixture (sample plus media) was gently but thoroughly mixed by whirling to ensure even distribution of the sample in the culture medium. The content was left to solidify at room temperature, and the plates were incubated at 32 °C for 48 h. This was followed by an assessment of plates with countable colony-forming units (CFUs); plates with between 25 and 250 CFUs were selected for enumeration. Determination of TCC used a commercial kit following the manufacturer's instructions, and culture and isolation were carried out as described elsewhere.

Data cleaning, coding, and analysis of the quantitative data were conducted in Stata 16, with descriptive statistics calculated for continuous variables. For the qualitative material, data entry was complemented with data collected in notebooks. The first step was to collate data in pre-formatted Word documents, which allowed systematic organization of the emerging food safety themes. The second stage of analysis entailed thorough reading of the templates and organization of the data into distinct sections based on the emerging food safety themes, which were categorized as challenges. These included categories on what practice(s) was of food safety concern, who said it (during the interview), where the practice(s) was mentioned to occur, and why the practice was said to occur. To comprehensively explore factors that may impact food quality and safety, the qualitative analysis contextualized the main practices mentioned in milk production, bulking centers, processing, transportation, and retailing.
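As a worked illustration of the enumeration arithmetic above: only plates in the 25-250 CFU window are counted, and each count is scaled by the reciprocal of its dilution (1 ml was plated, so the volume factor is 1). The sketch below is a simplified back-calculation under those assumptions, not the exact formula used in the study.

def cfu_per_ml(plate_counts):
    """Estimate cfu/ml from {dilution_exponent: colony_count} for 1 ml pour plates.

    Keeps only plates in the countable 25-250 CFU window and averages the
    back-calculated estimates across eligible dilutions.
    """
    estimates = [
        count * (10 ** exponent)          # reciprocal of a 10^-exponent dilution
        for exponent, count in plate_counts.items()
        if 25 <= count <= 250
    ]
    if not estimates:
        raise ValueError("no plate in the countable 25-250 CFU range")
    return sum(estimates) / len(estimates)

# Hypothetical plate counts: only the 10^-2 plate is countable here.
print(cfu_per_ml({1: 700, 2: 182, 3: 21}))  # -> 18200.0 cfu/ml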
A logistic regression analysis was performed to detect differences in milk TBC and TCC between the various nodes and milk types. Two binary outcome variables indicated whether a sample was acceptable or not according to the EAS standards for TBC and TCC. Model coefficients are reported as odds ratios (OR), where values above 1 indicate an increase in odds and values below 1 a decrease. Because of the clustered nature of the data, the variance-covariance matrix corresponding to the parameter estimates was specified using a clustered sandwich estimator, i.e., the vce(cluster) command in Stata. This estimator allowed us to account for intragroup correlation in the estimation of standard errors. Model specification was performed in Stata 16.1.

Ethical approval for this study was obtained from the ILRI Institutional Research Ethics Committee (project reference: ILRIIREC2014-04/1). ILRI IREC is accredited by the National Commission for Science, Technology, and Innovation (NACOSTI) in Kenya. Ethical approval was also obtained from the Royal Veterinary College ethics committee (project reference: URN 2013 0084H).

One hundred and forty-four people were interviewed during milk sampling. Of these, 56.9% (N = 82) were women. The age of participants ranged from 18 to 86 years, with a mean of 41.69 and a mode of 45 years. Most respondents (≈85%) reported owning the enterprise, while the rest were either employees (≈12%) or relatives (≈3%). Among those who kept cows (farmers), the majority (≈84%) reared 2-3 milking cows, followed by those keeping 6-9 cows (≈13%); only a small proportion (≈3%) kept 10-13 milking cows. In terms of volumes of milk handled per day, the majority (≈59%) reported handling between 0.5 and 20 L, followed by 21-100 L (≈34%), while only a small proportion (≈7%) handled more than 100 L per day.

Two hundred and ninety (290) cow milk samples were collected from the various nodes represented by the respondents: farms (N = 63), milk collection centers (N = 5), kiosks (N = 37), milk bars (N = 17), roadside vendors (N = 14), restaurants (N = 3), mobile traders (N = 2), milk vending machines (N = 2), and a supermarket (N = 1). The sample types included raw milk (N = 203), homemade fermented milk (N = 12), pasteurized milk (N = 35), ultra-heat-treated milk (N = 13), processed yogurt (N = 13), and processed fermented milk (N = 11).

About 44% of the milk from which these samples were obtained was described as coming from within Nairobi County. The rest (≈50%) was sourced from Kiambu County (a peri-urban area neighboring Nairobi), about 3.4% from more distant rural areas, and a small proportion (≈2.6%) was of unknown origin. Delivery of milk from the various sources was reported to be mainly (≈88%) direct (own cows or own transport); the rest reported milk delivered by traders (≈9%), dairy cooperatives (≈3%), and a small proportion (≈0.5%) by processors (processed products). Information regarding recent antibiotic use in the source cows was unknown to 50% of the respondents, while the remainder said that antibiotics had not been used in the cows for about 2 weeks prior to sampling.

The mean TBC of raw milk from farms was 3.5 × 10⁵ cfu/ml, at grade II of the EAS (>2 × 10⁵-1 × 10⁶), while that from collection centers was within the limits of grade III (1 × 10⁶-2 × 10⁶) at 1.4 × 10⁶ cfu/ml. The mean values for milk from farms were thus better in terms of TBC than those from collection centers. Furthermore, the bacterial quality of milk deteriorated at the retail level: liquid milk from all nodes except the pasteurized and UHT products had higher mean values than the raw milk from farms, which had a mean of 3.5 × 10⁵ cfu/ml.
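To make the EAS referencing explicit, the following is a small Python sketch that assigns a TBC grade and an acceptability flag from the bands quoted above (grade II >2 × 10⁵-1 × 10⁶, grade III 1 × 10⁶-2 × 10⁶ cfu/ml) and the TCC upper limit of 5 × 10⁴ cfu/ml. The grade I boundary and the treatment of values above grade III as unacceptable are assumptions for illustration, not a restatement of the full standard.

def eas_tbc_grade(tbc_cfu_per_ml):
    """Grade raw milk TBC against the EAS bands quoted in the text."""
    if tbc_cfu_per_ml <= 2e5:        # assumed grade I boundary
        return "grade I", True
    if tbc_cfu_per_ml <= 1e6:        # >2e5 to 1e6: grade II
        return "grade II", True
    if tbc_cfu_per_ml <= 2e6:        # >1e6 to 2e6: grade III
        return "grade III", True
    return "above grade III", False  # assumed unacceptable

def eas_tcc_acceptable(tcc_cfu_per_ml, limit=5e4):
    """EAS coliform upper limit for raw milk, as quoted in the text."""
    return tcc_cfu_per_ml <= limit

print(eas_tbc_grade(3.5e5))       # farm mean TBC -> ('grade II', True)
print(eas_tcc_acceptable(1.5e5))  # farm mean TCC -> False (3x the limit)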
In absolute terms, the mean cfu/ml for raw milk from collection centers was four times higher than the EAS limit recommended for raw milk from farms, 11.4 times higher for restaurants, 12.3 times for milk bars, 22.6 times for roadside vendors, and 9.4 times for milk collected from shops/kiosks. Mean values for processed (pasteurized and UHT) products were within the EAS limits. When comparing the mean values of milk samples that exceeded the EAS standards, milk from roadside vendors was worst in terms of TBC, followed by milk bars, restaurants, and shops/kiosks, in that order.

Relative to the EAS limits for TCC (10³-5 × 10⁴), the TCC in milk from farms was three times higher, indicating unacceptable contamination of milk that should not be sold to consumers. The quality of milk from roadside vendors was 13 times poorer than the EAS limits, 8.2 times poorer for milk bars, 3.2 times for shops/kiosks, and 1.6 times for restaurants. Hence, the worst raw milk in terms of TCC relative to the EAS was from roadside vendors, followed by milk bars, shops/kiosks, farms, and restaurants, in that order.

For the analysis based on node type, the reference node was the farm, so coefficients represent differences between a farm and each other node type. For TBC, the results show that milk samples from milk bars and restaurants were less likely to be acceptable than samples collected from farms (OR 0.02). Similarly, the odds of samples from roadside vendors, shops/kiosks, and supermarkets being acceptable were lower than the odds for samples collected from farms. For TCC, the odds that milk samples from collection centers, milk bars, restaurants, roadside vendors, and shops/kiosks were acceptable were all lower than the odds for farm samples.

An analysis comparing the TBC in raw milk from the various nodes of the value chain showed that the odds that milk samples from restaurants and roadside vendors were acceptable were lower than the odds for farm samples (0.03). Similarly, the odds that samples from milk bars and shops/kiosks were acceptable were lower than the odds for milk samples from farms. For TCC, the results showed that the odds of acceptability for the various nodes were all lower than the odds of acceptability for farm samples: 0.12 for samples from collection centers, 0.12 for restaurants, 0.02 for roadside vendors, 0.06 for milk bars, and 0.05 for samples from shops/kiosks. For the analysis based on type of milk, the reference was raw milk; the homemade and processed fermented products showed lower odds of acceptability than raw milk, as detailed in the discussion below.
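The node-level odds ratios above come from the binary logistic model with cluster-robust standard errors specified in the methods (Stata's vce(cluster)). Below is a minimal, equivalent sketch in Python using statsmodels; the data frame, cluster identifiers, and acceptability probabilities are hypothetical placeholders, not the study data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per milk sample.
rng = np.random.default_rng(0)
nodes = ["farm", "milk_bar", "restaurant", "roadside", "shop_kiosk"]
df = pd.DataFrame({
    "node": rng.choice(nodes, size=290),
    "cluster_id": rng.integers(0, 40, size=290),  # sampling cluster
})
# Placeholder acceptability: farms more often acceptable than retail nodes.
df["acceptable"] = (rng.random(290) < np.where(df["node"] == "farm", 0.8, 0.4)).astype(int)

# Farm as the reference level, mirroring the Stata specification.
model = smf.logit("acceptable ~ C(node, Treatment('farm'))", data=df)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["cluster_id"]}, disp=False)

# Exponentiated coefficients are the odds ratios reported in the text.
print(np.exp(res.params))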
Several practices that could possibly influence food safety were mentioned during the key informant interviews and FGDs. In production, these related to cow housing, feed sourcing, and treatment of sick cows (detailed in the summary below). In bulking centers, the FGDs and KIIs with dairy cooperatives and some large processors revealed that they sometimes, especially during milk scarcity, accepted milk that should have been rejected. They argued that rejecting milk in such periods of scarcity, in the midst of a liberalized dairy sector, would set their competitors at an undue advantage of selling what they rejected. One of the managers in the bulking centers argued that bad milk would be neutralized (its unacceptable contents diluted) by good milk. He said: "Since not every farmer will have bad milk or will have used antibiotics at the farm, the good milk will neutralize the bad milk, and overall all the milk will be fairly good. So we don't reject all that needs to be rejected, except when it is grossly curdled or dirty. Our competitors who don't care about quality, especially the informal traders, will be waiting for it, and they will sell it, since the milk market is ever ready." Milk cooling and basic screening tests were said to be lacking in most collection centers, with screening relying on organoleptic tests. Regarding the disposal of milk rejected at bulking centers, dairy cooperatives and large processors reported that such milk was sent back to suppliers and that some of it was returned to the food chain.

In retail, milk adulteration by the addition of water was reported to be a frequent occurrence aimed at increasing the volume of milk. This was reported to occur mainly during dry seasons when milk production was low. According to traders and retailers, the practice occurred at the farm level, among traders (those selling milk to retailers), and among retailers, including roadside vendors, milk bars, restaurants, and shops/kiosks. Another food safety challenge mentioned in retail was the lack of cooling facilities during transportation and at sale points. Raw milk that spoiled either in transit or at sale points was said to be sold either as fresh liquid milk, at a price lower than that of raw milk, or converted into yogurt or fermented milk. Fermented milk is prepared by letting raw milk that has curdled stand for a few more days in a container to ferment further, while the preparation of yogurt entails the addition of flavors and color to the raw fermented curdled milk. Finally, there was a glaring gap in the regulation and enforcement of food safety practices in the value chain, as several businesses (traders and retailers) were reported and observed to be operating without the necessary government permits and licenses.

By 2050, the demand for milk will triple in Sub-Saharan Africa, with consumption increasingly concentrated in its growing cities. This study utilized the previously developed framework of Nairobi's dairy value chain to investigate milk quality and safety across its nodes. We found that TBC levels at the production nodes (farms and collection centers) were generally good and within the acceptable EAS limits, but that TCC exceeded the EAS limits on farms. This agrees with several other studies that have demonstrated good bacterial quality of milk at the farm level.

We noted that the bacterial quality of milk deteriorated at the retail level. For example, the logistic regression analysis for TBC showed that the odds that milk samples from retailers were acceptable were lower than for samples collected from farms: 0.02 (milk bars), 0.02 (restaurants), 0.07 (shops/kiosks), 0.03 (roadside vendors), and 0.11 (supermarkets). For TCC, the model results indicated that the odds of acceptability for milk from collection centers were lower than for farm samples (0.18), as were the odds for milk bars and restaurants. Similarly, the odds of samples from roadside vendors and shops/kiosks being acceptable were lower than those of farm samples. Our results agree with findings from other studies showing such bacterial deterioration of milk as it flows from farms through the retailing system.

Spoilt raw milk was described as having "accidentally curdled," and it was reported to be converted into fermented milk or yogurt (by adding flavors and colors to the fermented milk). This probably explains the high TBC and TCC in homemade fermented milk and homemade yogurt.
In the regression analysis, our results indicated that the odds that the homemade (yogurt and fermented) and processed (yogurt and fermented) types of milk were acceptable for TBC were 97% and 90% lower, respectively, than for raw milk, while for TCC the homemade products (yogurt and fermented) had 94% lower odds than raw milk. Most Kenyans normally boil milk before consumption. Mean counts were comparatively low in processed fermented milk (2.7 × 10¹) and processed yogurt (5.6 × 10⁴). Some practices elucidated during FGDs and KIIs with the dairy cooperatives and large processing companies that may contribute to high bacterial loads in processed milk included the lack of a cold chain in collection centers and the acceptance of milk that should be rejected so as not to benefit competitors who care less about quality. Unfair competition among value chain actors has been identified as one major factor that hinders achieving optimal food safety in the Nairobi dairy value chain.

In summary, the odds that milk samples from the downstream nodes were acceptable were lower than the odds for samples collected from farms for both TBC and TCC. Likewise, the odds that raw milk samples from retail were acceptable were lower than the odds for raw samples collected from farms for both TBC and TCC. With raw milk as the reference, the analysis by milk type showed few differences in the odds of TBC and TCC acceptability across milk types.

Several practices with possible influence on food safety were mentioned. In production, these related to keeping cows in very muddy cowsheds; obtaining animal feeds from dumpsites, market leftovers, and the roadside; treatment of cows by unqualified personnel, coupled with compromised withdrawal periods following treatment; resale of milk that had been rejected by dairy cooperatives, or allowing it to ferment further (and consuming it as fermented milk); and the addition of water to increase the volume of milk. In bulking centers, the practices related to accepting milk that should be rejected, lack of a cold chain and basic screening tests, and lack of procedures for the management of rejected milk. In retail, there was milk adulteration by the addition of water and other chemicals, lack of cooling facilities during transportation and at sale points, sale to consumers of raw milk that had spoilt, and its conversion into yogurt or fermented milk.

The analytical methodology presented in this study demonstrates a practical approach for strategic policy decisions. To achieve milk quality and safety, the authors suggest the implementation of more robust training of people involved in the milk system. However, this needs to be guided by a critical analysis of prevailing challenges in every segment of the value chain. Every node of the value chain should be considered prior to designing and implementing any intervention, as further underscored by Kiambi et al. and FAO reports.

The datasets presented in this study can be found in online repositories: the data repository of the University of Liverpool, available at https://doi.org/10.17638/datacat.liverpool.ac.uk/1639.

SK, EF, PA, JR, and EK designed the study and data collection tools. SK and JM collected data. SK, JO, PA, and EF drafted the manuscript. SK, GA, and NG developed the culture and isolation standard procedures, facilitated culture and isolation, and interpreted TBC and TCC results.
All authors read, commented on, and approved the final manuscript for publication. This study was supported by the United Kingdom Medical Research Council, the Biotechnology and Biological Sciences Research Council (United Kingdom), the Economic and Social Research Council (United Kingdom), and the Natural Environment Research Council (United Kingdom), through the Environmental & Social Ecology of Human Infectious Diseases Initiative (ESEI), grant reference G1100783/1. This study also received support from the CGIAR Research Program on Agriculture for Nutrition and Health (A4NH), led by the International Food Policy Research Institute (IFPRI). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Gynura procumbens is an edible flowering plant that has been used as a traditional therapy for numerous diseases. The current experiment investigates the hepatoprotective potential of the ethanol extract of Gynura procumbens leaf (EEGPL) against thioacetamide (TAA)-induced liver cirrhosis in rats. Thirty Sprague Dawley rats were randomly divided into 5 groups: A, rats received 10% Tween 80 orally and an intraperitoneal (i.p.) inoculation of sterile distilled water; B, rats received 10% Tween 80 orally; C, rats received a daily oral dose of 50 mg/kg silymarin; and D and E, rats received daily oral doses of 200 and 400 mg/kg EEGPL, respectively. In addition, groups B-E received 200 mg/kg thioacetamide (i.p.) three times a week for 60 days. The gross liver morphology of rats that received only TAA (B) revealed an irregular, rough surface compared with the smoother livers of rats that received EEGPL. Histopathology of group B revealed clear hepatic necrosis and fibrous connective tissue, both of which were significantly reduced in groups C-E. EEGPL treatment caused significant down-regulation of PCNA and α-SMA protein expression. Antioxidant (SOD and CAT) enzymes in liver homogenates were significantly lower, and MDA levels significantly higher, in TAA controls compared with groups C-E. Moreover, EEGPL treatment reduced TNF-α and IL-6 and increased expression of IL-10. The hepatoprotective potential of EEGPL may therefore be attributed to its modulation of detoxification enzymes and its anti-inflammatory and antioxidant activities.

In recent decades, liver dysfunction and fibrosis have increased notably, mainly due to environmental carcinogens and lifestyle changes. A 2020 data analysis showed that the prevalence of hepatic injury and fibrosis has increased by 13% since 2000, reaching more than 1.5 billion cases in 2016. Another cause of liver cirrhosis is thioacetamide (TAA), a laboratory chemical used by researchers to induce hepatocyte injury and cirrhosis in rats that resembles the liver injury seen in humans.
The pathological changes induced by TAA develop progressively and mirror those of human disease. Herbal medicine and its derivatives have a long history as traditional remedies for liver dysfunction, and these products are gaining renewed interest as alternative medicine owing to the numerous drawbacks of synthetic chemicals. In the last few decades, numerous studies have reported the hepatoprotective effects of traditional herbal medicines, including many species of the genus Gynura.

G. procumbens (Merr.), known in Malaysia as Sambung nyawa, is broadly distributed across South-East Asian countries. Various parts of this medicinal herb have been used conventionally for treating eruptive fevers, rash, constipation, hypertension, diabetes mellitus, rheumatism, viral diseases of the skin, kidney diseases, migraine, and cancer, with the leaves attracting particular attention for their reported pharmacological activities.

Inflammatory cytokines are major contributors to the progression or suppression of liver cirrhosis at every stage of the disease. Tumor necrosis factor (TNF-α), interleukin-1β (IL-1β), and interleukin-6 (IL-6) are well-known pro-inflammatory biomarkers that are significantly elevated during hepatocyte damage and that facilitate further disease progression through their contribution to many biological pathways, including lipid metabolism, protein synthesis (positive and negative acute phase), biliary obstruction, and fibrosis progression. In contrast, anti-inflammatory cytokines such as IL-10 counteract these processes. These mediators act in part through nuclear factor kappa B (NF-κB); scientists have shown that pro-inflammatory cytokines can induce NF-κB pathway activity, thereby creating a continuous self-reinforcing cycle that can perpetuate the inflammatory process for a prolonged time. Modulating this axis is therefore a plausible hepatoprotective mechanism.

Quite a number of studies have revealed that EEGPL displays numerous biological activities attributable to its phytoconstituents. Despite this, its hepatoprotective action against TAA-induced cirrhosis had not been fully characterized, which motivated the present study.

G. procumbens leaves were obtained from Ethno Resources Sdn Bhd, Malaysia. Plant identification and authentication were performed according to criteria recorded at the Herbarium of Rimba Ilmu, University of Malaya. The leaves were desiccated and ground into fine dust. Two hundred grams of the residue were immersed in 1 L of 95% ethanol for 4 days. The mixture was then sieved through Whatman paper (#1) and concentrated under reduced pressure in a rotary evaporator. The obtained extract was dissolved in 10% Tween 80 and delivered to rats by oral gavage.

Thioacetamide was purchased from Sigma-Aldrich and dissolved in 10% Tween 80, mixed well until all crystals were dissolved. Rats then received 200 mg/kg body weight by intraperitoneal injection in 3 doses weekly for 60 days. TAA injection produces significant tissue damage and biochemical changes in rats, analogous to the liver cirrhosis occurring in humans.

Mature male Sprague Dawley rats, aged 7-8 weeks and weighing 170-180 g, were obtained from the Animal House, Cihan University-Erbil. Rats were kept in separate cages (with wide-mesh wire bottoms) to prevent coprophagia and were provided tap water and a standard rat pellet diet. The experiment began after 7 days of acclimatization. For the acute toxicity procedure, rats were divided randomly into three groups: A, normal controls, received 10% Tween 80 (5 mL/kg); B received 2 g/kg of EEGPL; and C received 4 g/kg of EEGPL. Rats were fasted overnight before treatment delivery.
After supplementation, rats were fasted (food and water) for another 3-4 h. Observation began immediately after supplementation and continued for 48 h (every 8 h) for any possible toxic signs or abnormal changes. After two weeks, rats received an overdose of anaesthesia (ketamine and xylazine) and were sacrificed. Intracardiac blood punctures were obtained for biochemical analysis, and organs (liver and kidney) were dissected for histological evaluation.

For the main experiment, mature rats were randomly distributed into 5 groups (6 rats each): A, normal control rats, received a daily oral dose of 10% Tween 80 (5 mL/kg) and three weekly injections (5 mL/kg) of sterile distilled water; B, cirrhosis rats, received a daily oral dose of 10% Tween 80 (5 mL/kg) and three weekly injections of 200 mg/kg TAA; C, reference rats, received a daily oral dose of silymarin (50 mg/kg) and three weekly injections (200 mg/kg) of TAA; and D and E, EEGPL (ethanol extract of Gynura procumbens leaf)-treated rats, received 200 and 400 mg/kg orally, respectively, plus three weekly injections (200 mg/kg) of TAA. The experiment continued for 8 weeks, and the body masses of the animals in all groups were recorded weekly. At the end of the experiment, rats were sacrificed, and livers were excised and placed on filter paper for inspection of weight and gross pathological alterations.

The sliced liver tissues were transferred into 10% phosphate-buffered formalin for 24 h for fixation and then into an automated tissue processor. Liver tissue slices (3-5 μm thick) were mounted on slides for histological evaluation using hematoxylin and eosin (H&E) and Masson's trichrome stains.

Liver fibrosis was also evaluated immunohistochemically via estimation of PCNA and α-smooth muscle actin (α-SMA) protein expression. Staining intensity in liver tissue was determined by dividing the number of stained cells by 1000 liver cells, and the mitotic index was calculated from detected cellular mitoses.

Liver tissue samples were obtained from both lobes of the liver. Hepatic tissue (1 g) was placed in a flask containing 10 mL of 10% PBS solution (pH 7.2) before normalization and homogenization with a homogenizer at 5000 rpm (15 min at -4 °C). The supernatant was collected and stored in a -80 °C freezer. Antioxidant kits were purchased from Merck (Germany) for the evaluation of catalase (CAT) and superoxide dismutase (SOD) activities and malondialdehyde (MDA) content.

The evaluation of the pro-inflammatory cytokines TNF-α and IL-6 and the anti-inflammatory cytokine IL-10 in serum was performed using commercially available ELISA kits (Cusabio Biotech Co., China). Inter- and intra-assay consistency was assessed using wells from the same plate/assay kit: for intra-assay precision, three samples of known concentration were run three times on one plate (intra-assay CV < 8%); for inter-assay precision, three samples of known concentration were tested in several separate assays, with CV (%) = SD/mean × 100 (inter-assay CV < 12%). The procedure began with centrifugation of the blood samples at 3000 g for 15 min; the supernatant (serum) was separated and assayed for cytokine content using the ELISA kits following the manufacturer's guidelines for rat TNF-α, IL-6, and IL-10. Concentrations were determined against purified reference cytokine standards.
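The assay-precision check above is a simple coefficient-of-variation calculation (CV (%) = SD/mean × 100) compared against the kit's acceptance thresholds. Below is a minimal Python sketch; the replicate readings are hypothetical placeholders.

from statistics import mean, stdev

def percent_cv(replicates):
    """Coefficient of variation (%) for a set of replicate ELISA readings."""
    return stdev(replicates) / mean(replicates) * 100

# Hypothetical optical-density replicates for one control sample.
intra_plate = [0.52, 0.55, 0.53]  # same plate, run three times
inter_plate = [0.52, 0.58, 0.49]  # separate assays/plates

print(f"intra-assay CV: {percent_cv(intra_plate):.1f}% (accept if < 8%)")
print(f"inter-assay CV: {percent_cv(inter_plate):.1f}% (accept if < 12%)")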
The blood samples were centrifuged, and the isolated serum was analyzed for liver enzymes, including alanine aminotransferase (ALT), alkaline phosphatase (ALP), and aspartate aminotransferase (AST). The synthetic and excretory functions of the liver were further evaluated via total protein, albumin, and bilirubin levels.

The statistical procedure comprised one-way analysis of variance (ANOVA) in GraphPad Prism (version 9.0). Values are presented as mean ± S.E.M., and the significance level was set at p < 0.05.

The acute toxicity outcomes revealed the absence of any behavioural or physiological changes in rats after two weeks of administration of 2 g/kg or 4 g/kg EEGPL. Rats consumed equal amounts of food and water, without significant differences in body weight compared with normal controls. Biochemical analysis showed non-significant changes in the serum profiles, with comparable liver and renal parameters between normal control and supplemented rats. Histopathological examination of livers and kidneys from EEGPL-treated rats showed no structural changes, closely comparable to normal controls. Likewise, oral supplementation of EEGPL (2 and 4 g/kg) produced no significant biochemical changes in kidney parameters compared with normal controls.

In the main experiment, the body weight (BW) of experimental rats differed significantly from that of normal controls. Rats treated with TAA only had a statistically lower BW (178.33 ± 2.41 g) than the 322 ± 5.74, 295.42 ± 2.82, 226.30 ± 3.75, and 263.40 ± 4.33 g recorded for normal control, silymarin, 200 mg/kg EEGPL, and 400 mg/kg EEGPL-treated rats, respectively. The liver weight of rats in the TAA control group was significantly higher (13.29 ± 0.06 g) than that of the normal control, silymarin, or EEGPL groups. Animals given silymarin or EEGPL showed positive recovery of body mass and liver index.

Grossly, livers obtained from normal control rats showed a smooth surface with regular tissue layers. In contrast, livers dissected from TAA control rats showed numerous micronodules with rough irregularities of the hepatic surface. Rats treated with the standard drug (silymarin) or EEGPL (200 and 400 mg/kg) showed markedly less TAA-induced damage of the parenchyma, with more uniform tissue structure and fewer micronodules.

Histologically, normal control rats (A) had normal liver architecture with no notable signs of inflammation or necrosis. Liver sections from TAA control rats (B) showed significant tissue damage, with disrupted endothelium, indistinct nuclei, and increased cytoplasmic vacuolation, indicating severe tissue necrosis and inflammation. The parenchyma was significantly remodeled by fibrous septa aligning collagen bridges in the hepatic triangles, producing numerous micro- and macro-nodules in the hepatocytes.
Such nodules were surrounded by bundles of connective tissue dividing the liver into lobules, which exhibited significant inflammation and tissue necrosis. The liver histology of silymarin-treated rats (C) showed significant recovery from TAA-induced damage, with fewer infiltrating cells, fewer hepatic micronodules (necrosis), and reduced tissue disruption compared with TAA controls; the hepatic architecture was well preserved, with normal lobules and numerous veins spreading through the connective tissue. Liver tissue from rats receiving 200 mg/kg EEGPL (D) demonstrated significant recovery from TAA-induced damage, with lower scores for fibrotic tissue, less nuclear damage and vacuolization, and less parenchymal damage, fibrosis, and micronodule formation than TAA controls, although not as marked as in rats given silymarin. Finally, liver tissue from rats receiving 400 mg/kg EEGPL (E) showed less tissue infiltration and evidence of parenchymal regeneration, expressed as fewer necrotic zones and fewer vacuoles in the endothelial and sub-endothelial layers (H&E).

Masson's trichrome staining was used to evaluate fibrous collagen deposition in the hepatic tissue. Liver tissue from normal control rats showed no collagen deposition in the hepatocyte tissue, whereas deposition was marked in TAA controls and reduced in silymarin- or EEGPL-treated rats.

Liver fibrosis graded by immunohistochemical staining of α-SMA differed significantly between all groups. Histological evaluation showed the lowest α-SMA intensity in the hepatic parenchyma of normal control rats and the highest in TAA controls, with intermediate values in the treated groups. Hepatocyte proliferation was determined from the expression of immunohistochemical PCNA staining in parenchymal cells using a standard anti-PCNA antibody. Normal control rats showed no PCNA staining, indicating the absence of abnormal cell renewal, whereas TAA controls showed strong PCNA expression that was reduced by silymarin or EEGPL treatment.

Antioxidant levels in liver homogenates varied significantly between treated and control rats. Normal control rats (A) had the highest antioxidant activity in homogenized hepatic tissue, with elevated SOD (17.87 U/mg) and CAT (36.84 U/mg) and the lowest lipid peroxidation, reflected by the lowest MDA level (1.18 U/mg) of all groups. In contrast, TAA control rats (B) showed the lowest antioxidant status, with decreased SOD (8.35 U/mg) and CAT (19.95 U/mg) and increased lipid peroxidation, indicated by elevated MDA (4.62 U/mg).

Inflammatory and anti-inflammatory biomarkers also differed significantly between groups, consistent with the differing grades of liver damage. Normal rats (A) showed the lowest TNF-α and IL-6 and the highest IL-10 of all groups, while rats receiving only TAA showed significant up-regulation of TNF-α and IL-6 and down-regulation of IL-10, indicating severe TAA-induced inflammation. In contrast, rats treated with silymarin (C) or EEGPL (D and E) showed a significant reduction in serum TNF-α and IL-6 and a significant increase in IL-10 compared with TAA controls.
The serum biochemical results revealed significant up-regulation of liver enzymes in TAA control rats (B) compared with normal controls (A), indicating clear enzyme leakage due to severe TAA-induced hepatic injury. Rats given oral silymarin (C) or EEGPL (D and E) showed lower serum liver enzyme and bilirubin concentrations than TAA controls.

The EEGPL supplementation (2 and 4 g/kg) in the 14-day toxicity trial did not cause any abnormalities in serum biochemical (liver and kidney) parameters or organ tissue structure, with zero mortality even after the experimental period, suggesting that the toxic dose of this plant exceeds 4 g/kg. Similar results were reported previously, with no physiological changes or mortality among rats administered 5 g/kg of EEGPL for 2 weeks.

The present work revealed that rats treated only with TAA (B) had significantly lower body weight and increased liver weight compared with normal controls. Similar results have been reported by numerous researchers in different liver cirrhosis models. The current study showed that TAA control rats experienced significant hepatomegaly and an increased liver-to-body-weight ratio, which could be correlated with fat accumulation and deterioration in the hepatocytes. This effect is in line with previous reports of an increased liver weight/body weight ratio in hepatotoxic rats.

In the current study, TAA injection caused severe liver cirrhosis in rats, which could be due to its effects on liver enzymes: the results indicated significant up-regulation of ALP, ALT, and AST in TAA control rats. Studies have linked TAA's elevation of liver enzyme levels to its reaction with nucleic acids (DNA and RNA), causing extracellular liver damage that accelerates hepatic enzyme release and leakage into the extracellular fluid. In contrast, treatment with silymarin or EEGPL lowered these enzymes.

The biochemical evaluation also revealed that TAA injection significantly reduced albumin and total protein levels in TAA control rats. This has been linked to the effect of this chemical on transcription (mRNA) and on RNA export from the nucleus to the cytoplasm, initiating severe injury of cell membranes and leading to significant enzyme leakage; prolonged leakage consequently depletes proteins in cells and extracellular fluids. Similar findings have been reported elsewhere.

The gross examination showed increased formation of micro- and macro-nodules on the parenchymal surface of livers from TAA control rats. Numerous studies have likewise shown TAA to remodel the liver surface (producing an irregular surface), indicating proliferative lesions and severe hepatocyte injury. In contrast, silymarin- or EEGPL-treated rats had livers with a more regular, smooth surface, indicating significant protection against TAA-induced damage, as reported previously.

Histological examination of liver tissue stained with H&E revealed significant liver injury in rats that received only TAA. Moreover, livers from the TAA control group showed significant collagen deposition (by Masson's trichrome staining), indicating severe alteration of hepatocyte membrane permeability.
In contrast, rats administered silymarin or EEGPL showed lower staining intensity in their liver tissue, indicating less collagen deposition. Accordingly, numerous scientists have reported the efficacy of TAA in releasing collagen from liver cells and the potential of herbal medicines to reverse this effect.

The immunohistochemical outcomes showed that TAA controls experienced severe liver cirrhosis, fibrosis, and cellular proliferation, as shown by increased expression of α-SMA and PCNA proteins in their liver tissue. In contrast, silymarin- or EEGPL-treated rats showed significantly lower expression of these two proteins, indicating less liver tissue damage. Similar results have been reported on the up-regulation of these proteins by TAA and their down-regulation by medicinal plants or their natural products.

The evaluation of the antioxidants SOD and CAT in liver homogenates has become one of the main indicators of oxidative stress-related liver injury. The current findings may be attributable to the phenolic constituents of EEGPL (including O-hexoside, coumaroylquinic acid, and caffeoyl-O-hexoside derivatives), as previously reported in detail.

Altered immune defence is another physiological change caused by TAA in hepatotoxic rats. This immune modulation mainly comprises increased production of the pro-inflammatory cytokines TNF-α and IL-6 and decreased production of the anti-inflammatory cytokine IL-10, thereby promoting further ROS formation and oxidative tension. The current outcomes scientifically back up the therapeutic efficacy of G. procumbens, which could serve as a new medicinal source for better management of patients with liver cirrhosis.

In conclusion, the ethanolic leaf extract of Gynura procumbens showed significant hepatoprotective potential based on the histological, immunohistochemical, and biochemical evaluations. The molecular mechanism behind these actions can be explained through its positive modulation of inflammatory cytokines, antioxidant enzymes, and the immunohistochemical proteins α-SMA and PCNA, which together resulted in decreased liver cell proliferation and a lower mitotic index. The hepatoprotection of EEGPL can also be correlated with its induction of the SOD and CAT enzymes and its inhibitory effect on MDA levels. The current study faced several limitations, including a small sample size, the unavailability and expense of laboratory reagents, and an inability to isolate specific active ingredients of the plant owing to limited facilities and a lack of specialized instruments.

The study protocol was approved by the ethics committee of Cihan University-Erbil. The procedure followed national and international standards for the use of animals in the laboratory.

Ahmed A. J.
Jabbar: Conceived and designed the experiments; performed the experiments; analyzed and interpreted the data; wrote the paper. Zaenah Zuhair Alamri, Nur Ain Salehen, Zakia Salim Amur Al Sinawi, Soliman Mohammed Alfaif: Contributed reagents, materials, analysis tools or data. Mahmood Ameen Abdulla: Conceived and designed the experiments; performed the experiments; analyzed and interpreted the data.

This work received no specific funding or grant. Data will be made available on request. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Quorum sensing (QS) is a crucial regulatory mechanism controlling bacterial signalling and holds promise for novel therapies against antimicrobial resistance. In Gram-positive bacteria, such as Streptococcus pneumoniae, ComA is a conserved efflux pump responsible for the maturation and secretion of peptide signals, including the competence-stimulating peptide (CSP), yet its structure and function remain unclear. Here, we functionally characterize ComA as an ABC transporter with high ATP affinity and determine its cryo-EM structures in the presence or absence of CSP or nucleotides. Our findings reveal a network of strong electrostatic interactions unique to ComA at the intracellular gate, a putative binding pocket for two CSP molecules, and negatively charged residues facilitating CSP translocation. Mutations of these residues affect ComA's peptidase activity in-vitro and prevent CSP export in-vivo. We demonstrate that ATP-Mg2+, rather than ATP alone, triggers the outward-facing conformation of ComA for CSP release. Our study provides molecular insights into QS signal peptide secretion, highlighting potential targets for QS-targeting drugs.

Quorum sensing is a regulatory mechanism controlling bacterial signaling, and ComA, a conserved efflux pump, is responsible for the maturation and secretion of peptide signals. Here, the authors determine its 3D structure and demonstrate its function as an ABC transporter.

Processes regulated by QS include antibiotic production6, the CRISPR system7, biofilm formation10, sporulation12, and competence16. QS allows pathogens to synchronize gene expression in response to the ever-changing environment. Unlike QS signaling molecules in Gram-negative bacteria, which can freely diffuse across the cell membrane, an active transport mechanism is required to export peptide signals in Gram-positive QS systems4. For example, in the competence pathway of S. pneumoniae, the QS signal competence-stimulating peptide (CSP) is processed and secreted by the bi-functional efflux pump ComA17–19. One potential approach is to intercept QS, which can inhibit virulence and reduce the severity of infections25. Among the early Gram-positive QS systems targeted is the accessory gene regulator (agr) QS system in Staphylococcus aureus, which led to the development of autoinducing peptide-based agr QS inhibitors27. Another system being explored for its therapeutic potential is the fsr QS circuit in Enterococcus faecalis27. Additionally, efforts are underway to attenuate virulence in pneumococcal infections24. However, current QS modulator development mainly focuses on disrupting the interaction between signal peptides and their receptors, rather than targeting the upstream biosynthesis process, due to the lack of underpinning molecular structures.
The discovery of quorum sensing (QS) in bacteria has led to the development of new strategies for combating bacterial infections30. ComA, a highly conserved membrane transporter in Gram-positive bacteria, represents an ideal drug target, since signal peptide secretion is ubiquitous.

ComA is a member of the widespread PCAT (peptidase-containing ATP-binding cassette transporter) family. It consists of three domains: an N-terminal C39 cysteine peptidase domain (PEP), a C-terminal nucleotide-binding domain (NBD), and a transmembrane domain (TMD). The PCAT family is known for its role in peptide processing, which has been extensively studied in PCAT1. During this process, the immature peptide binds to the cytosolic protease domain via the leader peptide, while the core peptide is positioned in the central cavity of the TMD. The mature peptide is generated by cleaving off the leader peptide with the peptidase domain, followed by secretion across the membrane. The fact that all PCATs share the same architecture with a conserved peptidase domain suggests that the mechanism of peptide processing is conserved29. However, the process of peptide secretion may differ due to variations in the shape and size of the substrates, and this process is not yet fully understood because no structure has been obtained with the matured peptide bound.

Despite the availability of partial structures of the ComA PEP and NBD domains over a decade of studies33, elucidating the mechanism of CSP binding and secretion without a full-length transporter structure is challenging, thereby limiting the development of drugs targeting the QS signal peptide secretion process. In this study, we reconstituted full-length ComA, characterized it biochemically, and solved its structure by cryo-EM. ComA is unique in that it has a high affinity towards ATP. Additionally, we determined multiple structures of ComA in various conformational states, with and without CSP and nucleotides, covering critical stages of CSP export. We identify a putative noncanonical peptide binding site at the outer leaflet of the membrane. Critical residues in ComA were also validated by in-vitro peptidase assays and in-vivo CSP secretion assays in S. pneumoniae. Additionally, we report that ComA activities were inhibited by Zn2+, which trapped the molecule in the outward-facing conformation, and that ATP-Mg2+, but not ATP alone, triggers the outward-facing conformation of ComA and facilitates CSP release. Our work provides comprehensive mechanistic insight into the export of peptide signals for quorum sensing in Gram-positive bacteria.

We did not detect a significant difference in ATPase activity between LMNG-solubilized ComA and ComA reconstituted in peptidiscs or nanodiscs. The peptidase activity of full-length ComA was also notably higher than that of the truncated peptidase domain, and mutations of the catalytic residues abolished the ComA peptidase activity, confirming their role in substrate cleavage. The ATP affinity and the turnover rate of ComA were determined at 25 °C.

We further investigated the coordination between the catalytic activities of the PEP and NBD domains of ComA. First, the ATPase activity was measured in the presence and absence of the substrate peptide. Neither ComC nor matured CSP affected the ATPase activity of ComA. On the other hand, the peptidase activity of the ComA enzyme in the presence of ATP-Mg2+ was reduced compared to the enzyme in the presence of ComC alone.
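As a rough illustration of how steady-state parameters like the ATP affinity and turnover rate reported above are typically extracted, the sketch below fits the Michaelis-Menten equation to ATPase rate data. This is not the authors' analysis code, and the concentrations and rates are hypothetical placeholders.

```python
# Minimal sketch: estimating Km(ATP) and Vmax from steady-state ATPase data.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Steady-state rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Hypothetical data: ATP concentrations (mM) and measured rates
# (nmol Pi released per min per mg enzyme) at 25 degrees C.
atp_mM = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
rate   = np.array([12.0, 21.0, 38.0, 52.0, 63.0, 72.0, 76.0, 78.0])

(vmax, km), cov = curve_fit(michaelis_menten, atp_mM, rate, p0=(80.0, 0.1))
vmax_err, km_err = np.sqrt(np.diag(cov))

# The turnover rate (kcat) would follow as Vmax divided by the enzyme
# concentration used in the assay.
print(f"Vmax = {vmax:.1f} +/- {vmax_err:.1f}, Km(ATP) = {km:.3f} +/- {km_err:.3f} mM")
```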
This inhibition of the peptidase activity by ATP-Mg2+ is consistent with the mechanism proposed for PCAT128, where it was noted that the PEP domain associates with the main body of the ABC transporter when the two NBD domains are separated from one another, while ATP binding initiates the dissociation of the PEP domains, subsequently leading to their flexibility. We aimed to examine whether this hypothesis applies to ComA; hence, a cryo-electron microscopy (cryo-EM) study of ComA mutants was conducted in the presence or absence of the substrate CSP or ATP, complemented by cross-linking mass spectrometry (CLMS) experiments on ComA in the presence and absence of ATP, cross-linked with disuccinimidyl sulfoxide (DSSO), with 2 biological replicates digested with trypsin and 2 biological replicates digested with chymotrypsin. Our CLMS analysis with a 1% false discovery rate (FDR) cut-off identified a total of 46 unique cross-link sites across both states: 27 cross-links found in the ATP-free state only, 6 found in the ATP-bound state only, and 13 found in both the ATP-free and ATP-bound states. Note that the ATP-bound mutant sample was prepared in the absence of Mg2+. The conformations we observe are comparable to those reported for other ABC transporters39, Sav186640, PCAT129, and MsbA43.

Our structural analysis, utilizing the 3V server, examined the dimensions of the central cavity. Though an apparent correlation between central cavity size and substrate size may exist, it does not dictate a strict rule. For instance, transporters of smaller molecules, such as Rv1819c (transporting bleomycin and vitamin B12)45 and IrtAB46, have cavity sizes of 3780 Å³ and 4600 Å³, respectively. Thus, while cavity size might affect substrate accommodation, it is not always proportional to the size of the transported substrate. In addition, this cavity has a remarkable set of charged residues, which can be divided into three distinct regions: inner, middle, and outer. The negatively charged inner region is close to the IC gate.

We then performed cryo-EM experiments with ComA and ComC to understand the interaction between the mature CSP peptide and ComA; 13 out of the 17 residues of the CSP could be fitted acceptably into the density. In the ATP-Mg2+-bound state, D194 adopts a repositioned conformation that is further stabilized by hydrogen bonds with residue S420 in TM6. As D194 is likely crucial for CSP binding, its movement may disrupt the interaction between ComA and CSP. Moreover, TM2 squeezes inward by about 2 Å toward the central translocation pathway, which may destabilize the CSP binding pocket. State 2 drew our attention because it was in an OF conformation with the EC gate open, which may allow the bound CSPs to escape. To test whether ATP-Mg2+, rather than ATP alone, instigates the opening of the extracellular (EC) gate, we determined a structure of the ComA (E647Q) mutant in the presence of both ATP and Mg2+.

To monitor export in vivo, we used strains expressing a ComC-HiBiT fusion. The bacteriocin transporter BlpA was inactivated to prevent its interference in the assay52. Subsequently, the corresponding comA mutants were constructed in the ∆blpA comC-HiBiT background. comC-HiBiT expression was induced by adding CSP to the culture. If ComC-HiBiT was processed and exported by ComA or its variants, the CSP-HiBiT fusion would be detected in the culture supernatant when LgBiT and a luminescent substrate were added. Otherwise, the ComC-HiBiT peptide would be trapped in the cytoplasm, and the CSP-HiBiT fusion could only be detected in the cell pellet fraction. We next validated the critical residues of ComA and ComC by measuring CSP export in S. pneumoniae. As expected, changing residues Y216 and Y433 in ComA to alanine reduced CSP export by ~2-fold, likely because these variants are defective in CSP binding.

In summary, we determined structures of ComA in multiple states, including in the presence of ATP-Mg2+, ATP-Zn2+, and ComC. ComA has several distinct features, including an extensive electrostatic interaction network at the IC gate. This may account for the unusually high ATP affinity and thermostability, perhaps important for enabling robust QS in an ever-changing environment. Moreover, the inner and middle regions of the central cavity are charged, which may push the CSP molecules through the channel after leader peptide cleavage. The remodeling of the central translocation tunnel may also help CSP export.

Based on the ComA structures trapped in different states, and on the assumption that the peptide processing mechanism is likely conserved among PCATs, which has been elegantly studied in PCAT128, we propose the following model. 1) ComA begins in the apo state. 2) Then, in a process analogous to our presumed ComC-bound structure, which strongly mirrors the peptide-bound PCAT1 structure28, the substrate peptide binds to the PEP domain located at the intracellular site. 3) After cleavage of the leader peptide, the two mature CSP molecules enter the central cavity, a state captured in our ATP-absent, CSP-bound ComA structure. The negatively charged residues in the cytosolic region of the conduit facilitate the translocation of CSP molecules toward the outer-leaflet binding site. Two potential salt bridges between D194 (ComA) and R9 (CSP) stabilize the substrate binding. Once CSP binds, the intracellular gate closes, the PEP domains dissociate from the ABC transporter, and the two NBDs approach each other, preparing for ATP binding. 4) Subsequently, ATP and Mg2+ bind to the two NBDs, causing local reconfiguration of the putative CSP binding pocket, including disruption of the potential D194 (ComA)-R9 (CSP) ion pair and minor shifts in TM2. The opening at the extracellular gate, with a radius larger than 5.2 Å, enables the release of CSP from the central cavity. 5) Finally, after the CSP molecules are released, Mg2+ dissociates from the ATP binding site and the extracellular gate closes once more, resulting in an OF-occluded state, as also observed in PCAT129. This Mg2+-dissociated conformation may be facilitated by ATP hydrolysis in the cell, followed by ADP release from the NBD domain. ComA then reverts to the apo state, ready for the next cycle of CSP processing and secretion.

One limitation of this study is that the PEP domains of ComA could not be clearly resolved to high resolution. Why these domains are especially flexible in ComA is unclear, but it may be due to the transient fashion of CSP processing. Future directions will be to trap the pre-cleavage state using a noncleavable ComC analog and to investigate how ATP binding inhibits the peptidase activity of ComA. Nevertheless, the peptidase activity seems to couple with the transporter function.

The ComA, truncated peptidase domain (PEP) of ComA, and ComC genes were amplified from genomic DNA and cloned into the pRSFDuet-M2 or pET-15b vector with an MBP tag or His tag at the N-terminus. Vectors of ComA mutants and ComC mutants (E1Q and E1R) were constructed using the site-directed mutagenesis method. For ComA and its mutants, the reconstructed pRSFDuet-M2-ComA plasmid was transformed and expressed in BL21(DE3) E. coli cells; 0.5 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) was added when the OD600 reached 0.8, and cells were cultured for another 20 h at 18 °C.
Harvested cell pellets were resuspended in Buffer A in the presence of 1 mM dithiothreitol (DTT) and 1 mM phenylmethylsulfonyl fluoride (PMSF). The cell suspension was broken with a high-pressure homogenizer and ultracentrifuged to collect the cell membranes. Buffer A with 2% LMNG (Anatrace) was used to solubilize membranes for 2 h at 4 °C, and the supernatant was then subjected to further purification with anti-MBP beads (SMART Lifesciences). Buffer A in the presence of 60 mM maltose and 0.01% LMNG was used to elute the ComA protein. The MBP tag was cleaved by tobacco etch virus protease (TEVP) before size-exclusion chromatography (SEC) purification on a Superose 6 Increase 10/300 GL column (Cytiva). Fractions corresponding to ComA were pooled and concentrated to 4.5 mg/ml for the following studies.

For the truncated PEP domain, the reconstructed pRSFDuet-M2-PEP plasmid was transformed and expressed in BL21(DE3) E. coli cells; 0.5 mM IPTG was added when the OD600 reached 0.8, and cells were cultured for another 15 h at 18 °C. Cells were collected and resuspended in Buffer B with 1 mM DTT and 1 mM PMSF. The cell suspension was subsequently broken by a high-pressure homogenizer, and cell debris was removed by high-speed centrifugation. The supernatant was collected, applied to MBP affinity purification, and eluted with Buffer B in the presence of 60 mM maltose. The MBP tag was removed by incubation with 1 mg/ml TEVP at 30 °C for 30 min, and the protein was concentrated for further SEC purification on a Superdex 200 Increase 10/300 GL column. Fractions corresponding to the PEP domain were pooled and concentrated for the peptidase assay.

For ComC and its mutants, the corresponding pET-15b-ComC plasmids were transformed into E. coli BL21(DE3) and cultured to OD600 = 0.3-0.4; expression was induced with 0.5 mM IPTG for 1 h at 37 °C. Harvested cell pellets were resuspended in Buffer A and broken using a high-pressure homogenizer. The supernatant from high-speed centrifugation was further applied to His-affinity purification. Buffer B in the presence of 300 mM imidazole was used to elute the ComC protein, which was concentrated for size-exclusion chromatography on a Superdex 200 Increase 10/300 GL column (Cytiva). The concentration of purified ComC was determined with a microvolume UV-Vis spectrophotometer (Thermo Fisher) at 257 nm, based on the extinction coefficient of phenylalanine53. Tricine-SDS-PAGE54 was used for ComC characterization.

Peptidisc reconstitution was performed as previously described55. In short, ComA was solubilized using 2% n-dodecyl-β-D-maltoside (DDM) (Anatrace), followed by MBP affinity purification. After binding to the beads, the protein was washed with ten column volumes (CV) of Buffer B in the presence of 1 mg/ml peptidisc peptide (Peptidisc Biotech), and then eluted in Buffer C. The MBP tag was cleaved by TEVP for further SEC purification on a Superose 6 Increase 10/300 GL column (Cytiva). ComA reconstituted in peptidiscs was further concentrated for functional studies.
Purified ComC substrate was incubated with ComA enzymes at a molar ratio of 15:1 for the indicated time and temperature; the total volume of the reaction system was 10 μL. ComC cleavage was analyzed on 12% Tricine-SDS-PAGE gels, and ImageJ software56 was used to quantify the intensities of the protein bands. To test the effect of nucleotides on the peptidase activity, ATP, the ATP analogs ATPγS and AMP-PNP, vanadate, MgCl2, and EDTA were added at final concentrations of around 3 mM.

The membrane scaffold protein MSP1D1 was expressed in E. coli BL21(DE3) and subsequently purified via nickel-nitrilotriacetic acid (Ni-NTA) affinity and size-exclusion chromatography (SEC) on a Superdex 200 Increase 10/300 GL column (Cytiva) in a buffer containing 50 mM Tris-HCl (pH 7.5) and 100 mM NaCl. MSP1D1 was then concentrated to a final concentration of 5 mg/ml. E. coli total lipid was prepared by solubilization to a concentration of 10 mg/ml using 200 mM sodium cholate. In parallel, ComA with a cleaved MBP tag, suspended in 0.02% DDM, was concentrated to 2 mg/ml. The nanodisc reconstitution procedure employed in this study closely follows the previously described method57. Specifically, for ComA reconstitution, the molar ratio used was ComA:MSP1D1:E. coli total lipid = 1:2:150, and the final sodium cholate concentration in the mixture was set to 25 mM. Bio-Beads SM2 (Bio-Rad) were prepared at a ratio of 100 mg per 1 ml of the mixture and added to initiate the reconstitution process, which works by extracting detergents from the system. After an hour of incubation at 4 °C with constant rotation, the mixture was left to incubate overnight under the same conditions. Following the incubation period, the Bio-Beads were removed and the reconstituted sample was clarified by centrifugation. The sample then underwent separation on a Superose 6 Increase 10/300 GL SEC column in the same buffer used previously. The resulting samples were analyzed by SDS-PAGE and subsequently concentrated for further use.

All ATPase activity assays were performed using a previously described procedure58 with minor modifications. First, 2 µg of purified ComA or ComA mutants was added to 50 µl of reaction solution containing 50 mM HEPES, pH 7.5, 10% glycerol, 100 mM NaCl, 2.5 mM ATP, and 5 mM MgCl2, and incubated in a water bath at 37 °C for 30 min. The reaction was stopped by adding 50 µl of 12% (w/v) SDS. The reaction was then incubated at room temperature for 5 min after the addition of 100 µl of a solution containing 12% (w/v) ascorbic acid in 1 M HCl and 2% (w/v) ammonium molybdate in 1 M HCl. Finally, 150 µl of a solution containing 25 mM sodium citrate, 2% (w/v) sodium metaarsenite, and 2% (v/v) acetic acid was added and incubated at room temperature for 10 min. Absorbance at 848 nm was measured using a multimodal microplate reader (HIDEX), and a standard curve of potassium phosphate ranging from 0.05 mM to 0.6 mM was generated to quantitate the amount of released phosphate. Reaction reagents were purchased from Sigma.
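The colorimetric readout above is converted to enzyme activity through the phosphate standard curve. The following is a minimal sketch of that quantitation step; the absorbance values are hypothetical and this is not the authors' script.

```python
import numpy as np

# Phosphate standards (mM) spanning the 0.05-0.6 mM range used above,
# with hypothetical blank-corrected A848 readings.
std_mM   = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
std_a848 = np.array([0.07, 0.14, 0.29, 0.43, 0.57, 0.71, 0.86])

# Linear fit of the standard curve: A848 = slope * [Pi] + intercept.
slope, intercept = np.polyfit(std_mM, std_a848, 1)

def released_pi_nmol(a848, reaction_volume_l=50e-6):
    """Convert a blank-corrected A848 reading into nmol of released phosphate."""
    pi_mM = (a848 - intercept) / slope            # invert the standard curve
    return pi_mM * 1e-3 * reaction_volume_l * 1e9  # mM -> mol/L -> mol -> nmol

# Example: a 30-min, 50-ul reaction (as in the protocol) gives a rate in nmol/min.
rate = released_pi_nmol(0.35) / 30.0
print(f"ATPase rate: {rate:.2f} nmol Pi/min")
```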
For cryo-EM, 3 μl of purified ComA or ComA mutants in detergent at a concentration of 4.5 mg/ml was applied to glow-discharged Quantifoil holey carbon grids. For the ATP complex, final concentrations of 20 µM ComC and 2 mM ATP were added to ComA (E647Q) and incubated at room temperature for 15 min before freezing. For the CSP complex, a final concentration of 20 µM ComC was added and incubated at 25 °C for 25 min before applying the mixture to cryo-EM grids. For the ATPγS complex, final concentrations of 20 µM ComC and 2 mM ATPγS/Mg2+ were added and incubated at 37 °C for 25 min before applying the mixture to cryo-EM grids. For the ATP/Zn2+ complex, final concentrations of 2 mM ATP/Zn2+ were added and incubated at 37 °C for 25 min before applying the mixture to cryo-EM grids. Grids were blotted for 3-4.5 s at 100% relative humidity and plunge-frozen in liquid ethane cooled by liquid nitrogen using a Vitrobot System (Gatan).

Cryo-EM data were collected at liquid nitrogen temperature on a Titan Krios electron microscope (Thermo Fisher Scientific), equipped with a K3 Summit direct electron detector (Gatan) and a GIF Quantum energy filter. All cryo-EM movies were recorded in counting mode with SerialEM459, with a slit width of 20 eV on the energy filter. Movies were acquired at a nominal magnification of 81,000×, corresponding to a calibrated pixel size of 0.858 Å at the specimen level. The dose rate was set to 7.6 counts per physical pixel per second. The total exposure time of each movie was 6 s, resulting in a total dose of 46.4 electrons per Å2, fractionated into 40 frames (150 ms per frame). More details of the electron microscopy data collection parameters are listed in Supplementary Table .

EM data were processed as previously described42. Dose-fractionated movies collected using the K3 Summit direct electron detector were subjected to motion correction using the program MotionCor260. A sum of all frames of each movie was calculated following a dose-weighting scheme and used for all image-processing steps except defocus determination. CTFFIND461 was used to calculate defocus values of the summed images from all movie frames without dose weighting. Particle picking was performed using a semi-automated procedure with SAMUEL and SamViewer62. For particle picking, we used 2D averages from our prior FtsEX study as templates63, which included five different side views. We increased the contrast of the motion-corrected images by binning them four times, then applied a 48-pixel box for particle picking. An initial set of around 200 images was used for preliminary particle selection, followed by a basic 2D classification using SAMUEL64. The five most accurate 2D averages, exhibiting intact ABC transporter features, were chosen for another round of template-based particle picking on all motion-corrected images. The picked particles were screened by 2D classification using SAMUEL. We selected particles from the 2D averages displaying clear ABC transporter features, specifically a rectangular-shaped TMD region. Averages showing two distinct dots at the center, corresponding to top or bottom views of the sample, were also chosen, although side views were sufficient for subsequent 2D and 3D reconstructions. At each step, we only retained particles from the group displaying complete ABC transporter features and relatively high resolution compared to other classes for further processing. All refinements followed the gold-standard procedure, in which two half-data sets were refined independently.
Post-SAMUEL screening, selected particles were extracted from the dose-weighted, unbinned, motion-corrected images using a 256-pixel box size. The subsequent 2D and 3D classifications and 3D refinement were conducted using "relion_refine_mpi" in RELION365. The overall resolutions were estimated based on the gold-standard criterion of Fourier shell correlation (FSC) = 0.143. Local resolution variations were estimated from the two half-data maps using ResMap66. The amplitude information of the final maps was corrected by "relion_post_process" in RELION3.

EM data of ATP/Zn2+ ComA were processed with CryoSPARC67. Dose-fractionated movies collected using the K3 Summit direct electron detector were subjected to motion correction using the program MotionCor260. A sum of all frames of each movie was calculated following a dose-weighting scheme and used for all image-processing steps except defocus determination. CTFFIND461 was used to calculate defocus values of the summed images from all movie frames without dose weighting. Particle picking was performed using the blob picker followed by the template picker. 2D and 3D classification and 3D refinement were carried out using "2D Classification", "Ab-initio Reconstruction", and "Heterogeneous Refinement"; refinements were done using "Homogeneous Refinement" and "Non-Uniform Refinement". The overall resolutions were estimated based on the gold-standard criterion of FSC = 0.143, and local resolution was estimated by "Local Resolution Estimation".

The initial model was generated68 using the crystal structure of PCAT1 (PDB ID: 4S0F) as the template. This initial model was rigid-body fitted to our cryo-EM maps in UCSF Chimera69, extensively rebuilt in Coot70, and refined using real-space refinement in Phenix71. Restraints for ATP and ATPγS were generated with the phenix_elbow program using isomeric SMILES strings obtained from the PDB Chemical Component Dictionary through Ligand Expo. Ligands were manually docked into the cryo-EM maps in Coot, followed by iterative real-space refinements in Phenix. Final models were validated with statistics from Ramachandran plots, MolProbity scores, and clash scores in Phenix.
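For reference, the gold-standard resolution estimate used above can be reproduced from two half-maps with a few lines of NumPy. The sketch below assumes cubic half-maps of equal size and is a simplified stand-in for the RELION/CryoSPARC implementations, shown only to make the FSC = 0.143 criterion concrete.

```python
import numpy as np

def fsc_curve(half1, half2, voxel_size):
    """Fourier shell correlation between two half-maps (cubic arrays)."""
    f1, f2 = np.fft.fftn(half1), np.fft.fftn(half2)
    n = half1.shape[0]
    freq = np.fft.fftfreq(n)                          # cycles per voxel
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    shells = np.rint(np.sqrt(kx**2 + ky**2 + kz**2) * n).astype(int)
    fsc = []
    for s in range(1, n // 2):
        mask = shells == s
        num = np.real(np.sum(f1[mask] * np.conj(f2[mask])))
        den = np.sqrt(np.sum(np.abs(f1[mask])**2) * np.sum(np.abs(f2[mask])**2))
        fsc.append(num / den)
    spatial_freq = np.arange(1, n // 2) / (n * voxel_size)   # 1/Angstrom
    return spatial_freq, np.array(fsc)

def resolution_at_0143(spatial_freq, fsc):
    """Resolution where the FSC curve first drops below 0.143."""
    below = np.where(fsc < 0.143)[0]
    return 1.0 / spatial_freq[below[0]] if below.size else 1.0 / spatial_freq[-1]

# Demo with synthetic half-maps at the 0.858 A pixel size reported above.
rng = np.random.default_rng(0)
vol = rng.normal(size=(64, 64, 64))
half1 = vol + rng.normal(scale=0.5, size=vol.shape)
half2 = vol + rng.normal(scale=0.5, size=vol.shape)
sf, fsc = fsc_curve(half1, half2, voxel_size=0.858)
print(f"estimated resolution: {resolution_at_0143(sf, fsc):.2f} A")
```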
Strains were cultured in brain heart infusion (BHI) medium (Thermofisher Scientific) at 37 °C in 5% CO2. PCR products were synthesized using high-fidelity Phusion DNA polymerase (NEB M0530S) and purified with the QIAquick PCR purification kit (Qiagen 28106) following the manufacturer's protocol. Cells were transformed with cassettes assembled by isothermal assembly after the induction of natural competence. Transformants were selected on blood plates supplemented with the indicated antibiotics. Allelic replacements were performed using the Janus cassette (P-spec-rpsL)73. The resulting strains were validated by diagnostic PCR using GoTaq DNA polymerase and Sanger sequencing. Antibiotics were purchased from Sigma-Aldrich and used at final concentrations of 0.3 µg/ml for erythromycin (Erm), 150 µg/ml for spectinomycin (Spec), and 300 µg/ml for streptomycin (Str). Primers used for the synthesis of ComC-HiBiT-expressing strains are listed in Supplementary Table .

Strains were grown at 37 °C in 5% CO2. To prevent the secretion of CSP by the BlpAB transporter52, strains expressing ComC-HiBiT are in a ∆blpA background. When the optical density at 600 nm (OD600) reached 0.1 to 0.3, cultures were normalized to an OD600 of 0.1, and exogenous CSP was added to induce natural competence. After an hour of induction, cultures were immediately placed on ice for 5 minutes. Cells were pelleted by centrifugation at 16,100 × g for 2 minutes at 4 °C. The supernatant fraction was collected and stored at 4 °C. Pellets were washed twice with prechilled BHI and then resuspended in 1 ml of prechilled BHI. The suspension was transferred to a lysis matrix column homogenizer and disrupted over 3 rounds of lysis, each round consisting of 3 cycles at 6 m/s for 40 seconds. Suspensions were placed on ice for 5 minutes after every round of homogenization. Cell suspensions were centrifuged at 16,100 × g for 2 minutes at 4 °C to remove cellular debris. The ΔcomA and ΔcomC strains were used as negative controls. Supernatant and cell lysate samples of the ComC-HiBiT-expressing strains were aliquoted into a white 96-well plate (Corning). The HiBiT tag in the samples was quantified by adding an equal volume of the HiBiT Extracellular Detection Reagent to each well. Luminescence was measured using a Tecan plate reader with the following settings: 1 s integration time, gain 135. Experiments were repeated three times, and the differences between mutants were evaluated by the Mann-Whitney U test. Mutants of ComA and ComC expressing ComC-HiBiT were generated with primers listed in Supplementary Table .
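The statistical comparison described above reduces to a rank-based two-sample test. A minimal sketch using SciPy, with hypothetical luminescence readings rather than the study's data (the variant names are placeholders):

```python
from scipy.stats import mannwhitneyu

# Hypothetical supernatant HiBiT signals (relative luminescence units)
# from three repeated experiments for a wild-type strain and one mutant.
wild_type = [152000, 148000, 160000]
mutant    = [71000, 80000, 76000]   # e.g., a CSP-binding-deficient variant

stat, p = mannwhitneyu(wild_type, mutant, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```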
Purified ComA proteins were cross-linked in buffer A for the pre-cleavage state and buffer B for the post-cleavage state with the addition of 1 mM disuccinimidyl sulfoxide (DSSO), at a 26:1 molar ratio of DSSO:protein in DMSO, for 60 min at 25 °C with shaking. Cross-linking was then quenched by adding 1 M Tris to a final concentration of 80 µM. Cross-linked protein was then run into an SDS-PAGE gel, and the gel was stained with premixed colloidal Coomassie G-250 staining solution (Bio-Rad 1610786). Bands with molecular weight corresponding to cross-linked ComA were excised and processed by in-gel digestion. Briefly, gel bands were incubated three times with 50% ethanol in 100 mM triethylammonium bicarbonate (TEAB) with gentle agitation for 5 min. The gel was then rehydrated with 100 mM TEAB, followed by dehydration with 100% ethanol, performed successively two times. Excess ethanol was removed by brief vacuum centrifugation. The gel pieces were then rehydrated with 20 mM tris(2-carboxyethyl)phosphine (TCEP) in 100 mM TEAB, incubated at 55 °C for 60 min, and left to cool to room temperature. Next, 500 mM chloroacetamide (CAA) was added to a final concentration of 55 mM and incubated in the dark for 30 min. The gel was then washed twice with 100 mM TEAB and dried by vacuum centrifugation. Two gel replicates were digested by the addition of trypsin (Pierce 90058; 2 µg in 100 mM TEAB), and two gel replicates were digested by the addition of chymotrypsin (Promega V1061; 2 µg in 100 mM TEAB, 10 mM CaCl2), with incubation at 37 °C for 16 h. Protease digestion was quenched by the addition of trifluoroacetic acid (TFA) to a final concentration of 1% (v/v). Peptides were extracted by the addition of 30% acetonitrile (ACN), followed by 100% ACN. Extracted peptides were pooled and dried by vacuum centrifugation.

Dried peptides were then resuspended in water with 0.1% formic acid and desalted with C18 stage tips (Empore C18 discs). Briefly, stage tips were activated with 100% ACN and then equilibrated twice with water containing 0.1% formic acid. Cross-linked peptides were loaded, and the stage tip was washed twice with water containing 0.1% formic acid. Peptides were eluted with 65% ACN, 0.1% formic acid and dried by vacuum centrifugation.

Liquid chromatography-mass spectrometry (LC-MS) acquisition of cross-linked peptides was performed as previously described74. Desalted cross-linked peptides were resuspended in 0.5% acetic acid, 0.06% TFA, 2% acetonitrile in water. 1 µg of cross-linked peptide was injected on an Easy-nLC 1200 (Thermo) chromatography system coupled to an Orbitrap Fusion Lumos mass spectrometer (Thermo), using a 50 cm × 75 µm inner diameter Easy-Spray reverse-phase column over a 60 min gradient from 0.1% formic acid in water to 40% acetonitrile with 0.1% formic acid. MS acquisition was performed with MS1 in the Orbitrap at 60K resolution with a scan range of 350-1650 m/z. Precursor ions with 3-8 positive charges were selected for MS2 CID fragmentation with a normalized collision energy of 30% and the Orbitrap analyzer at 30K resolution. MS3 HCD fragmentation was triggered based on the targeted mass difference of DSSO (31.9721 Da) for 4 dependent scans, with a normalized collision energy of 30% and the ion trap analyzer in rapid mode.

Thermo raw files were searched against a fasta file containing the ComA protein sequence using MetaMorpheus v0.0.318, with a calibration search using a precursor mass tolerance of 10 ppm and a product mass tolerance of 20 ppm. The cross-link search was performed for DSSO on K, S, T, and Y amino acids for MS2 CID and MS3 HCD scans, with 3 maximum missed cleavages, trypsin protease, a fixed modification for carbamidomethyl (C), and variable modifications for oxidation (M), deamidation, DSSO hydrolyzed by water, DSSO hydrolyzed by Tris, and DSSO alkene and thiol. Cross-links from the intralinks result files were filtered for q value ≤ 0.01. Cross-links were visualized with xiVIEW. Raw mass spectrometry spectra and search data were uploaded to the jPost repository75. Further information on research design is available in the Supplementary Information.

Modern machine learning (ML) and deep learning (DL) techniques using high-dimensional data representations have helped accelerate the materials discovery process by efficiently detecting hidden patterns in existing datasets and linking input representations to output properties for a better understanding of the scientific phenomena. While deep neural networks comprised of fully connected layers have been widely used for materials property prediction, simply creating a deeper model with a large number of layers often runs into the vanishing gradient problem, causing a degradation in performance and thereby limiting usage. In this paper, we study and propose architectural principles to address the question of improving the performance of model training and inference under fixed parametric constraints. Here, we present a general deep-learning framework based on branched residual learning (BRNet) with fully connected layers that can work with any numerical vector-based representation as input to build accurate models for predicting materials properties.
We perform model training for materials properties using numerical vectors representing different composition-based attributes of the respective materials and compare the performance of the proposed models against traditional ML and existing DL architectures. We find that the proposed models are significantly more accurate than the ML/DL models for all data sizes when using different composition-based attributes as input. Further, branched learning requires fewer parameters and results in faster model training due to better convergence during the training phase than existing neural networks, thereby efficiently building accurate models for predicting materials properties.

Modern machine learning (ML) techniques using high-dimensional data representations have seen widespread success in the field of materials science owing to their ability to efficiently detect hidden patterns in existing datasets and link input representations to output properties, enabling a better understanding of the scientific phenomena and accelerating the materials discovery process. The process has been catalyzed by the increase in the availability of large-scale datasets from experiments and first-principles calculations, such as high-throughput density functional theory (DFT) computations17, and the ease of accessing and analyzing them using various data mining tools19. Such application of ML techniques has attracted significant attention throughout the materials science research community and has therefore led to the new paradigm of materials informatics25, which has helped materials scientists better understand materials and predict their properties.

Although limited, there has also been a growing application of more advanced deep learning (DL) techniques in recent years29. The Harvard Clean Energy Project by Pyzer-Knapp et al.8 used a three-layer network for predicting the power conversion efficiency of organic photovoltaic materials. Montavon et al.26,29 predicted multiple electronic ground-state and excited-state properties using a four-layer network trained on a database of around 7000 organic compounds. Zhou et al.27 used high-dimensional vectors learned using Atom2Vec along with a fully connected network with a single hidden layer to predict formation energy. ElemNet28 used a 17-layered architecture to learn formation energy from elemental composition but showed performance degradation beyond that depth. Some research has performed domain-knowledge-based model engineering within a deep learning context in materials science for predictive modeling33. SchNet30 incorporated continuous-filter convolutional layers to model quantum interactions in molecules for predicting the total energy and interatomic forces in a way that follows fundamental quantum-chemical principles. CheMixNet31 applies deep learning to learn molecular properties from the molecular structures of organic materials. Boomsma and Frellsen introduced the idea of spherical convolution in molecular modeling by making use of the structural environments within proteins. Jha et al.32 developed a deep learning framework to predict the crystal orientations of polycrystalline materials from their electron backscatter diffraction patterns.
Work in ref. 33 performs deep learning with deeper layered architectures, ranging from 10 to 48 layers and composed of skip connections after every layer, using composition- and structure-based representations to predict materials properties across different datasets. There have also been several efforts to learn either atomic interactions or material embeddings using graph-based networks from the crystal structure and composition38. SchNet is extended in ref. 34, where the authors used an edge-update network to allow neural message passing between atoms for better property prediction for molecules and materials. Crystal graph convolutional neural networks (CGCNN)35 directly learn material properties via the connection of atoms in the crystal structure of crystalline materials, providing an interpretable representation. MatErials Graph Network (MEGNet)36 was developed as a universal model for the property prediction of molecules and crystals. Goodall and Lee37 developed an architecture that takes stoichiometric attributes instead of crystal attributes as inputs, along with matscholar embeddings obtained from the materials science literature using advanced natural language processing algorithms, to learn appropriate materials descriptors from data using a graph-based neural network composed of message-passing layers and fully connected layers. The Atomistic Line Graph Neural Network (ALIGNN)38 combines atom-, bond-, and angle-based information obtained from the structure of materials to obtain high-accuracy models for improved materials property prediction.

In general, introducing complex input attributes, network components, and architecture designs has been shown to produce more accurate predictive models for materials property prediction tasks. However, these improvements require greater computational resources and training time, which is undesirable and makes it hard to leverage such complex components for building predictive models. Hence, rather than focusing on introducing complex input attributes, network components, and architectural designs in a bid to boost model performance, as done in recent works39, here we focus on addressing the general issue of how to efficiently build deep neural network architectures for more robust and accurate predictive performance by imposing a parametric constraint (17 layers in our case) and utilizing the available limited computational resources effectively and efficiently. For that, we analyze and propose design principles for a time- and parameter-efficient deep learning framework composed of deep neural networks that can predict materials properties using numerical vector-based representations. Since the model architectures for the regression problem are composed of fully connected layers, the model is highly nonlinear, and learning the mapping from input to output is comparatively more challenging than for classification problems. To maximize accuracy and minimize training time under parametric constraints using a neural network composed of fully connected layers, we present a novel approach based on a combination of residual learning, with skip connections around a stack of multiple layers42, and branched architectures45, which were originally proposed for text or image classification problems. We introduce this approach to leverage branching in neural networks with and without residual connections for each individual layer (BRNet and BNet). BNet comprises a series of stacks, each composed of a fully connected layer and LeakyReLU46, with a branched structure in the initial layers.
BRNet uses BNet as the base network and adds residual connections after each stack for better convergence during training. The BNet and BRNet architectures are designed for the prediction task of learning formation energy from a vector-based material representation composed of 86 features representing composition-based elemental fractions as the model input. When trained using the dataset of ref. 15, BNet and BRNet achieved mean absolute errors (MAE) of 0.042 eV/atom and 0.041 eV/atom, respectively, compared to an MAE of 0.149 eV/atom using AutoML47. A conference version of this work appeared in Gupta et al.48; the current article significantly expands on the conference paper with additional modeling experiments on more datasets and subsequent analysis of results and insights. We compare our proposed architectures against traditional ML models and multiple baseline deep neural network architectures for regression (made using 17 fully connected layers): ElemNet28, with dropout at variable intervals of fully connected layers, and the individual residual network (IRNet)33, with residual connections, batch normalization, and a ReLU activation function after each layer. We provide a detailed evaluation and analysis of BNet/BRNet on various publicly available DFT-computed and experimental materials datasets and show that branched networks consistently outperform other ML models and DL networks on the materials property prediction tasks. We also observe that the use of branching leads to faster convergence than existing approaches while significantly reducing the number of model parameters. BRNet and BNet leverage a simple and intuitive approach of introducing branching with/without residual connections after each layer, without using any domain-dependent model engineering, which makes them appealing for researchers working not only on materials but also in other scientific domains to leverage for their predictive modeling tasks.

We use six datasets of DFT-computed and experimental properties in this work: the Open Quantum Materials Database (OQMD)49 with four properties, the Automatic Flow of Materials Discovery Library (AFLOWLIB)50 with four properties, the Materials Project (MP)14 with four properties, the Joint Automated Repository for Various Integrated Simulations (JARVIS)17 with five properties, Kingsbury Experimental Formation Enthalpy (KEFE)51 with one property, and Kingsbury Experimental Band Gap (KEBG)51 with one property. The DFT-computed datasets were downloaded from the websites of the respective databases, and the experimental datasets (KEFE and KEBG) were obtained using Matminer18. The relevant information about the datasets used to evaluate our methods is shown in Table . In each of the datasets, materials property values correspond to the lowest formation energy among all compounds with the same composition, representing the most stable crystal structure. The datasets are randomly split with a fixed random seed into training, validation, and test sets in the ratio 81:9:10.
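To make the model input and data handling concrete, the sketch below builds composition-based elemental-fraction vectors from formula strings and applies the fixed-seed 81:9:10 split. The element vocabulary, formulas, and target values are illustrative placeholders (the actual EF vector spans 86 elements), and the simple parser ignores nested parentheses and hydrates.

```python
import re
import numpy as np
from sklearn.model_selection import train_test_split

# Demo element vocabulary; the paper's EF representation covers 86 elements.
ELEMENTS = ["H", "C", "N", "O", "F", "Na", "Mg", "Al", "Si", "Cl", "Fe"]

def elemental_fractions(formula):
    """Map a formula such as 'Fe2O3' to a fixed-length fraction vector."""
    vec = np.zeros(len(ELEMENTS))
    for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*\.?\d*)", formula):
        vec[ELEMENTS.index(symbol)] += float(count) if count else 1.0
    return vec / vec.sum()

# Hypothetical compositions and formation-energy targets (eV/atom).
formulas = ["Fe2O3", "NaCl", "SiO2", "Al2O3", "MgO",
            "CO2", "NaF", "Fe3O4", "SiC", "H2O"]
targets = np.array([-2.7, -2.1, -3.0, -3.4, -3.0, -1.9, -2.9, -2.6, -0.7, -1.5])

X = np.array([elemental_fractions(f) for f in formulas])
# 81:9:10 split with a fixed seed: hold out 10% for test, then 10% of the
# remainder (9% of the total) for validation, leaving 81% for training.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, targets, test_size=0.10,
                                                random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.10,
                                                  random_state=0)
```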
Given that computational resources are usually limited, and that we often see only marginal improvements in model accuracy compared to the exponential increase in the number of parameters added to the deep neural network architecture33, analyzing design principles that improve model accuracy under parametric constraints is a more practical and useful goal to work towards. We thus explore a novel approach of using branching at the early stages of a deep neural network architecture composed of fully connected layers to maximize model performance under a parametric constraint. Note that the parametric constraint in this work refers to using a fixed number of layers for constructing the architecture of the deep neural network, i.e., 17 layers in our case. We design two deep neural networks (BRNet and BNet) that contain branching with/without residual connections, where both proposed networks take a numerical vector-based representation as model input to predict the materials property of interest.

The BRNet and BNet architectures are designed for the prediction task of learning formation energy from a numerical vector-based representation composed of 86 features representing composition-based elemental fractions as the model input. The deep neural network architectures are composed of fully connected layers, where each fully connected layer is followed by LeakyReLU46 as the activation function, with (BRNet) and without (BNet) residual connections. To demonstrate the impact of our approach, we compare our proposed architectures against traditional ML models and multiple existing architectures (ElemNet and IRNet) comprised of the same number of layers (17 fully connected layers in our case) for a fair comparison in terms of the parametric constraint. In this study, we give the ElemNet and IRNet architectures different sets of inputs for model training than what was previously used in their respective works, to test the generalized performance of the different architectures. For a detailed description of the existing architectures (ElemNet and IRNet), the reader is referred to their respective publications28,33. We show the performance comparison of the proposed architectures with other existing deep neural networks, for formation energy as the materials property and composition-based elemental fractions as the model input, using various datasets in Table .

Next, we demonstrate the significance of branching on the prediction modeling tasks for other materials properties. We train BRNet and BNet to predict materials properties from a numerical vector-based representation composed of 86 features representing composition-based elemental fractions as the model input. To illustrate the impact of branching, we also compare the performance of our proposed networks against traditional ML algorithms, ElemNet, and IRNet. Next, we illustrate the versatility of leveraging branching in the deep neural network architecture by building models with different composition-based attributes as model input. We train BRNet, BNet, ElemNet, and IRNet as in the previous analysis, but use 145 composition-based physical attributes52 as model input instead of the 86 elemental fractions (EF)28.
Hence, we will only use the numerical vector-based representation composed of composition-based elemental fractions for further analysis. In our analysis, we generally observe the benefit of leveraging branched deep neural network architectures, which tend to perform better than other DL networks and traditional ML models. Here, we also investigate the performance of the proposed networks on the experimental datasets, which are usually small in size compared to the DFT-computed datasets. We train traditional ML models and DL models using the numerical vector-based representation composed of 86 features representing composition-based elemental fractions as the model input. Next, we perform a performance analysis using a bubble chart, a prediction error chart, and the cumulative distribution function (CDF) of the prediction errors. We mainly compare the accuracy and training time of the different deep neural networks comprised of the same number of layers when trained, using the numerical vector-based representation composed of composition-based elemental fractions, on formation energy from the four DFT-computed datasets.

We presented a novel approach to leverage the concept of branching in deep neural network architectures to enable better performance for materials property prediction under parametric constraints. To illustrate the benefit of the proposed approach, we built a general deep learning framework composed of the branched deep neural network architectures BRNet and BNet. To compare the performance of the proposed models, we used traditional ML algorithms and the existing deep neural networks ElemNet and IRNet, which consist of the same number of layers in their architectures, to ensure a fair comparison under the parametric constraint. The proposed BRNet and BNet architectures were designed (optimized) for the task of predicting formation energy using a numerical vector-based representation composed of 86 composition-derived elemental fractions as the model input. On the design problem, the models leveraging the proposed design approach significantly outperformed the traditional ML algorithms, ElemNet, and IRNet. We demonstrated the efficacy of the proposed approach by evaluating and comparing these DL model architectures against ElemNet, IRNet, and traditional ML algorithms on a variety of materials properties available across multiple materials datasets. Furthermore, we demonstrated that the presented DL model architectures are versatile in their vector-based model input by evaluating prediction models for different materials properties using different numerical vector-based representations, i.e., the 145 composition-derived physical attributes and the 86 composition-derived elemental fractions. The proposed approach outperforms other ML/DL models in terms of model accuracy irrespective of the size of the materials property dataset being analyzed, with branching providing a better capability to capture the mapping between the given input material representation and the output property. In general, the training time of a deep neural network model depends on the given prediction task (model inputs and model output), the size of the training dataset, and the architecture and depth of the neural networks (number of model parameters).
In our case, as the depth of the neural networks (the number of layers used to construct the architecture) is fixed, the complexity and components used to construct the architecture are the only factors that can affect the training time. We see that the use of the branched architecture significantly reduces training time compared to the other baseline architectures used for comparison. Additionally, to check the robustness of the proposed branched architectures even further, we performed empirical and statistical analyses to explore the benefits of branching in deep neural networks. In the empirical analysis, we performed predictive analysis by changing the location of the branch and the number of branches for a given location under fixed parametric constraints. Next, we performed a statistical analysis in which we used a one-tailed test to compare the test MAEs of BNet and BRNet, obtaining a p-value < 0.05 for 3 out of 4 cases and thus rejecting the null hypothesis. For the MP dataset, although BNet performed better than BRNet in terms of the mean ± standard deviation of the test MAE, we obtained a p-value > 0.05, which shows that the result is not statistically significant. This shows that, in general, BRNet tends to perform at least comparably to, or better than, BNet. Since the proposed approach of leveraging branching in the deep neural network architectures of BRNet and BNet does not depend on any particular material representation/embedding as model input, we expect that it can also be used to improve the performance of other DL work leveraging other types of materials representations in materials science and in other scientific domains. The proposed approach of branched deep neural network architecture is conceptually simple to implement and build upon.

The design approach and mathematical formulation for the branched deep neural network architecture are illustrated in Fig. . The simplest instantiation of this architecture adds no residual connections and thus simply learns the approximate mapping from input to output; we refer to it as the Branched Network (BNet). We also create a deep neural network architecture with a residual connection after every sequence, so that each sequence needs only to learn the residual mapping between its input and output. The residual connection has the effect of making the regression learning task easier and providing a smooth flow of gradients between layers. We refer to this deep neural network as the branched residual network (BRNet). The implementation of all the models used in this work is publicly available at https://github.com/GuptaVishu2002/BRNet.

We implemented the deep learning models with Python, TensorFlow 2, and Keras54,55. We found the best hyperparameters to be Adam56 as the optimizer with a mini-batch size of 32, a learning rate of 0.0001, mean absolute error as the loss function, and LeakyReLU46 as the activation function after every fully connected layer. Rather than training the model for a specific number of epochs, we used early stopping with a patience of 100 epochs, meaning that we stopped training when the performance did not improve in 100 epochs. For traditional ML models, we used the AutoML library hyperopt-sklearn47 to find the best-performing ML model implementations and employed mean absolute error (MAE) as the loss function and error metric.
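A schematic Keras sketch of a branched residual network trained with the reported settings (Adam with learning rate 0.0001, mini-batch size 32, MAE loss, LeakyReLU after every fully connected layer, and early stopping with patience 100) is shown below. The branch count, layer widths, and overall depth here are illustrative assumptions rather than the published BRNet configuration; the exact architecture is in the public repository.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def stack(x, units):
    """One stack: a fully connected layer followed by LeakyReLU, with a
    residual (skip) connection around it when the shapes allow (BRNet)."""
    h = layers.Dense(units)(x)
    h = layers.LeakyReLU()(h)
    if x.shape[-1] == units:
        h = layers.Add()([x, h])
    return h

inputs = layers.Input(shape=(86,))            # composition-based elemental fractions
# Branched initial layers: parallel stacks over the same input, then merged.
branches = [stack(inputs, 256) for _ in range(2)]
x = layers.Concatenate()(branches)
for units in (256, 128, 64, 32):              # deeper stacks after the merge
    x = stack(x, units)
    x = stack(x, units)
outputs = layers.Dense(1)(x)                  # predicted property, e.g., Ef (eV/atom)

model = Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mae")
stop = tf.keras.callbacks.EarlyStopping(patience=100, restore_best_weights=True)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           batch_size=32, epochs=3000, callbacks=[stop])
```

Omitting the residual `Add` in `stack` yields the plain branched network (BNet); the skip connection is the only difference between the two variants.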
For the number of model parameters and model size used by each of the deep learning models, please refer to the Supplementary Information.

Nally et al. conducted the study addressed by this comment. First, the results in Figures 3 and 4 do not match the methods and figure captions. The captions of Figures 3 and 4 read "Forest plot for standardised mean difference of change in BMI (kg/m2) between intervention and control groups […]" and "Forest plot for standardised mean difference of change in BMI z-score between intervention and control groups […]", respectively. However, the plotted results are not standardized mean differences of change scores, but rather raw mean differences of post-intervention values. The data shown in Figure 4 are post-intervention values and not change scores.

Health inequalities are rooted in historically unjust differences in economic opportunities, environment, access to health care services, and other social determinants. Owing to these health inequalities, the COVID-19 pandemic has disproportionately affected underserved populations, notably people of color, incarcerated and formerly incarcerated individuals, and those unable to physically distance themselves from others. However, people most strongly impacted by health disparities, and the pandemic, are not frequently engaged in research, either as researchers or as participants, resulting in slow progress toward improving health equity. Establishing ways to foster the engagement of historically excluded people is crucial to improving health equity through patient-centered health research.

This study aimed to assess the use of equity-centered design thinking (EDT) for engaging community members in research prioritization related to COVID-19. The co-design methods and subsequent production of a toolkit that can be used for engagement were assessed through process evaluation and qualitative methods. Process evaluation and qualitative inquiry, using reflexive thematic analysis, were undertaken to examine the use of EDT. Patient community members and stakeholders remotely partnered with design and health researchers in a year-long digital process to cocreate capacity-building tools for setting agendas for research regarding the impact of COVID-19 on health outcomes. Through a series of 3 workshops, 5 community partners engaged in EDT activities to identify critical challenges for the health and well-being of their communities. The subsequent tools were tested with 10 health researchers who provided critical input over the course of 2 workshops. Interviews with co-designers, project materials, and feedback sessions were used in the process evaluation and finalization of an equity-centered toolkit for community engagement in research. Data from the co-design process, meetings, workshops, and interviews were analyzed using reflexive thematic analysis to identify salient themes.

Process evaluation illustrated how the EDT co-design process offered an approach to engage patient partners and community stakeholders in health-related research around COVID-19. The participants expressed satisfaction with design thinking approaches, including creative activities and iterative co-design, as a means of working together.
Thematic analysis identified 3 key themes: the value of authentic partnerships, building trust and empathy through design, and fostering candid dialogue around health and social issues impacting historically underrepresented and underinvested communities.

The project addressed the need to test EDT strategies for fostering inclusive community engagement in health research agenda setting and provided an alternative to traditional top-down models. Despite the increasing use of human-centered design in health, few projects explicitly include equity in design thinking approaches. The use of methods and tools to intentionally engage underrepresented stakeholders in research agenda setting, and to equitably share power between researchers and community members, may improve health research, ultimately improving health equity.

Health inequalities are rooted in historically unjust differences in economic opportunities, environment, access to health care services, and other social determinants. In particular, structural racism continues to be a key determinant of health in the United States. Owing to these inequities, the COVID-19 pandemic has disproportionately affected underserved populations.

Patient-centered outcomes research (PCOR) and comparative effectiveness research (CER) are important approaches for identifying ways to prevent and mitigate ill health, where research priorities and agendas originate from authentic patient needs and preferences. Such approaches emphasize designing with, rather than for, the people who will use them, as one community partner described:

you get to know them as a real person, you get to know them not just a professor or...this lady who runs a research program. I got a chance to feel a part of her, get a real sense of her and that just brought on a different [way]...how you feel about participating in the workshop and how successful you want it to be because you know, people are being real.

Community Partner 1

During the second workshop on Rapid Critical Utopian Action Research activities, the participants were exposed to future thinking, which shifted their perspective toward "dreambuilding" a positive future. They thought of the advice they would give, such as urging people in 2019 to see their primary care physician, given how interrupted routine medical care would become owing to the pandemic, and to consider ways to protect their mental health. One of the participants focused on the financial stress that the pandemic placed on the members of her community:

I should've educated my community on financial freedom because I realized during the pandemic that that was the biggest part of everybody trying to be safe was being concerned about their finances.

Community Partner 3

In another future-focused activity, the participants imagined utopian and dystopian news headlines about health in the future. Through this activity, they reflected on some of their key takeaways from living through the COVID-19 pandemic and the introduction of the COVID-19 vaccine.
A headline proposed by one of the participants was "Due to the research for the COVID vaccines, other safe and effective vaccines were made more quickly in the future." Another participant proposed the following headline: "African Americans' disbelief in the vaccine because of past history and the fear of being used as guinea pigs caused more death!" Yet another participant proposed the headline "Americans no longer believe in the health care system." Issues of trust, and of lacking or lost trust in health care, emerged in most of the future headlines framed during this activity.

Through an analysis of workshop notes and transcripts after the first 2 workshops, and as part of the iterative design process, the research team sought to reduce technological challenges. The third workshop adopted a lower-technology approach, using paper materials and prototyping by hand rather than activities involving heavy use of a computer.

The "Mad Libs"-style activity in the third workshop, titled "A COVID story," allowed the stakeholders to share stories of their friends and loved ones that tapped into community perceptions and norms. One of the stakeholders crafted the following story:

My story is about Michelle...The biggest issue faced related to coronavirus is being separated from her family. She lost her father, and then she just lost her aunt within the past month from COVID-19. Her family is really going through a tough time with not being able to see her grandmother...having to minimize the funerals to just a few people was really devastating to their family. They just felt a lot of strain on their family. They feel like they should've been provided better services and instructions from the government.

Community Partner 1

The participants were particularly responsive to the storytelling activity, with several noting that it would be an effective way to obtain feedback from members of their communities. Others echoed the sentiment that personal stories about oneself or someone in one's life are always effective at getting people to speak more comfortably, particularly about sensitive topics.

Eco-mapping, the activity in which the participants identified the resources around them and their families, spurred dialogue about distrust of health care professionals and researchers. For the prototyping activity, a mainstay of design thinking approaches, the participants were provided a package of basic materials and asked to create something that could help with mental health during the pandemic. Creations included a bunker filled with food and medical supplies, a pod-finder phone app to locate nearby individuals to isolate with, a shrine to commemorate loved ones lost to COVID-19, and a time travel device to transport people to their happiest childhood memories. The participants described prototyping as follows:

I didn't think I was really artsy until I started doing these kinds of projects...I was like I'm not going to be able to do this, you guys, and they were like 'Ok we've got 8 minutes and we got all this stuff right here' and I'm like 'This is stuff that children play with, I don't know what to do with pipe cleaners,' and you sit there and you see your imagination runs so wild.
Like just in this moment, I thought of my god, this will be so great, a great idea to have this bunker underground...

Community Partner 1

However, one of the participants expressed being unsure what to do with the materials:

I don't know anything to do. Looking at this was like me looking at Latin. It was just like, I don't know what to do with this.

Community Partner 2

After the workshops, one of the stakeholders reported that she had already spoken with the board of her community organization about design thinking and wanting to incorporate it into their work. Another individual described how she shared her knowledge of and interest in design thinking with friends and peers with whom she worked in her community:

I always explain this to people, we were always taught to think inside the box, growing up in a society where your process was supposed to be the same—you learn your ABCs and your 123s and that was the way we were taught to live...now in design thinking you don't have to think in that process if you don't want to, it's not the only process that's the right process and that's what design thinking is. It's thinking outside the box. So whenever someone says 'what made you think of that?' I say 'design thinking made me think of that! I just thought about how it's always been done and I think about how we live today, and how can we enhance what's always been done to how we can do it today...I was just thinking outside of the box, that's all.'

Community Partner 1

Finally, the participants expressed that the EDT processes used for creating the toolkit could have value beyond the study for their own communities, for connectedness, and for the broader promotion of health equity. One of the community participants stated the following:

I know the ultimate goal from this is to take what we're bringing to this to the communities in some type of way, I'm very excited to see how that is actually going to come forth. The things we are discussing are actively the things people in our community are talking about behind closed doors anyway so if we could bring it to the actual community in some kind of way to have this kind of connectedness in the time of COVID where we can't really be connected, I think that could be amazing at this time and bring about a connectedness in communities that we don't really have...

Community Partner 3

The final step in the toolkit development process involved holding researcher usability and feedback workshops, with the support of partners at the Louisiana Public Health Institute and the patient-centered outcomes research network PCORnet.

On the basis of the insights from all participants and reflections on the design activities piloted, the research team and community partners used the information from the developmental evaluation to cocreate the Grounding Health Research in Design (GRID) Toolkit for use by health researchers and communities interested in adopting EDT. Stakeholders' inputs on the utility of the EDT activities, and their selections of the activities that would be most impactful for community work, informed the content included in the toolkit. The team reviewed the toolkit draft resulting from this process and contextualized the evaluation findings to ensure that the toolkit is community-engaged, stakeholder-driven, and adaptable to a variety of topical areas for research prioritization.
Throughout the co-design process, specifically while working together to determine activities for inclusion in the toolkit, the participants discussed various equity approaches. The equity-based VISIONS Inc communication framework (presented in the first meeting and used throughout the project) established an environment in which there was room for controversy and discussion.

The final GRID Toolkit includes the most salient activities identified in this study, with examples related to COVID-19. However, it can be used across various health topics and is intended to serve as a resource for improving the engagement of all people. Plain language and simplified text were used throughout the toolkit to improve usability and address questions raised by the community participants. The activities that were piloted through the workshops, refined through the process evaluation, and strengthened by health researcher feedback were included.

The toolkit may be downloaded in its entirety from the website of the body that funded this research, the Patient-Centered Outcomes Research Institute.

The thematic analysis resulted in the identification of the following 3 salient themes: the value of establishing authentic partnerships, the process of building trust and empathy through EDT, and the way this fostered candid dialogue around health and social issues. Broadly, the thematic analysis of experiences across the different categories of participants indicated the theme of the value of establishing authentic partnerships. Throughout their participation in the workshops and follow-up interviews, both researchers and community participants expressed the importance of engaging in a truly collaborative approach to research. In this context, the community participants vented their frustration with previous involvement in research that felt inauthentic and superficial in nature, as expressed in the following quote:

I'm so tired of researchers who get a grant, get a proposal, put it all together, get halfway through it and then you come to the community because you're not interacting with the population you need to interact with and here you come, now you need help, right? But you should've been there in the beginning and the people should've been there from the beginning...And then you get very little information at the end, that's the other problem, you know, they don't go back and say this is what we got out of this.

Community Partner 2

Similarly, the health researchers who participated in a usability workshop for the EDT toolkit expressed that they were often less successful than they wanted to be in building authentic and equitable partnerships with community members or patient partners. One of the researchers described this as follows:

That's also something that's very underestimated within research projects, within budgets, just that capacity, understanding that it takes hours of commitment to build that relationship and to build that trust and comfort with community partners. I think that was huge. We underestimate it...patient partners have been a part of [named health project] for years, but the same issue has been coming up frequently, which really is just them feeling underutilized, and also, they're just not understanding what the research process is. They're feeling like...the concepts that are being discussed aren't really tailored to include them in a way that's meaningful.
It\u2019s an ongoing conversation we\u2019ve had for years and so we\u2019ve tried different solutions to address those issues, but it\u2019s an evolving process.Health Researcher 3Another researcher spoke of the often-underestimated outcome of relationship building:We were all talking about how do you measure success and sometimes we\u2019re looking for outcomes, but sometimes the outcome is actually the relationship to be able to continue to do other workHealth Research 2At the conclusion of the year-long EDT process, the community participants described the level and type of engagement in this project as perspective changing. Being involved in both the crafting of the design process and prioritization of topics was described as changing their perceptions of research and programs intended for their communities:I would encourage everybody to participate in research workshops to be able to know the steps in how these decisions are made and what effect it has on you and the people that make the decisions like us who participated in the research. I love to know that a real person like myself participated in this study or that study, how they come up to decide that this particular program or process is a good process for me is because someone like myself participated in the process that contained that information.Community Partner 1building trust and empathy. The community participants expressed that having the space and ability to discuss health-related community issues in partnership with researchers through EDT allowed for an in-depth exploration to identify key issues and potential solutions. Empathy and trust would also allow for realistic discussions about the needs of the participants.The second theme identified was One of the community participants raised concerns about how researchers, including those using EDT methods, needed to consider the perspectives and lives of those who were not present in the room. Building empathy was described as follows:From working with communities and neighborhood associations...everybody don\u2019t have a tablet. Everybody just don\u2019t have these things that we\u2019re talking about. We have to think about, we truly have to think about the people who don\u2019t, because we don\u2019t really want to leave them out of certain things. We have to figure out how to include them in research and whatever. Just because they don\u2019t have\u2014they don\u2019t have all the things with them...they might be in their head. We have to figure out how to get this stuff to work for them. That\u2019s all I\u2019m saying.Community Partner 2A health researcher who engaged in usability testing of the EDT toolkit spoke of expanding the understanding of community relationships through empathy and trust:Your comment made me reflect on the context of the community. It\u2019s not just that interaction between the patient or the person that is interacting with the healthcare system. 
The fact that these people work in the community, they have jobs, they have different relationships with different people in their families, it's a bigger ecosystem than just that transactional bidirectional relationship of an interview...this helped me think about that.

Health Researcher 1

During one of the workshops with the community participants, where everyone shared photographs of ways in which the pandemic had changed their lives, a research team member shared a photograph of a bag of chips and a glass of wine, illustrating the difficulty of eating healthily during the pandemic owing to changes in routine. In a follow-up interview, a community participant shared that this moment stuck with her as one in which she felt greater trust and empathy between the researchers and community members:

I loved [researcher's] photo 'cause I was going to do something like that and thought I couldn't. She is one of the leaders of this, and she was really real about who she is and what the pandemic is and how she had to see herself in a new light. The photo just stayed with me and I shared it with my friends to let them know what I encountered [working on this project].

Community Partner 1

The participants conveyed that being engaged from the start of the EDT process signaled the value placed on their involvement and contributed to the trust and empathy that were built. Other issues of trust were discussed in terms of the experiences that the community participants had had with health care professionals, experiences that need to be addressed to build trust and empathy based on engagement and shared experiences:

...we have a lot of trust issues when it comes to healthcare professionals or people involved in the healthcare field. The more we get to know you as a person in my community, the better of a relationship you will have with the people because they get to know that you're a real person. The fact that you live in a community that you're servicing, that always makes a big…I want to know if you live around here. Are you breathing the same air I breathe? Your house's going to flood the same way mine flood or do you drive three hours to come here to my community [and] this person don't have a clue about what's going on? It does make a big impact on the people that serves for them to know who you are.

Community Partner 1

The final theme identified was fostering candid dialogue around health through the use of EDT activities, especially with regard to issues exposed by the COVID-19 pandemic. For example, the participants in the workshops spoke frankly about how the COVID-19 pandemic put a further strain on the trust of communities, particularly Black communities, in the individuals and institutions delivering health care. The following quotes illustrate the development of this theme:

...they feel like they should've been provided better services and instruction from the government. They're uncertain about whether they want to get a COVID-19 vaccine because, firstly, they don't trust the healthcare system that's provided by our government today. Barely anybody in my community relies on healthcare. I got to be honest. Most of them don't have a primary care doctor. They rely on old-fashioned remedies for healing. Through COVID-19, I think it's going to get more toward self healing in our Black communities because this has really put a strain on our trust to healthcare.
I do believe that it's going to become more like that for the Black community because this here was a horrible strain on our trust for healthcare from the government. They have mostly gotten their information, like I said, from each other and a few medical providers that they trust along the way.

Community Partner 1

Conversations in which health issues were candidly discussed also included dialogue around skepticism regarding health care providers and medicines:

I'm diagnosed with diabetes, high blood pressure and cholesterol. [After my wife died] I did lose about 20 pounds. I was about 210, I think, and I went down to 195 and some. I went for a three or four-month checkup, and then doc say, 'Oh, all your signs looking good. A1C is lower than what mine is.' He was saying, 'Whatever you've been doing, keep doing,' and I said, 'Okay, I'll keep not taking that meds. Oh, I ain't been taking these meds for about three to four months.' They say once you get diagnosed, it's hard to get undiagnosed.

Community Partner 4

The tone of equitable interactions fostered an open discussion of health. The participants described communities focusing more on self-healing and the use of traditional practices passed down through families:

I've never been in for taking shots, going to the hospital, because like I said, I come from old-fashioned remedies. New Orleans is a place where that has been our lifestyle. I got to be honest, I don't have a primary care doctor. I don't. 50 years old, I don't have a primary care doctor. I pay for services when I desperately need them, and I provide myself with my own medical services, but there are medical professionals I trust. I do have a medical professional that I do trust. Now, I trust him because he has had to deal with the hard time of me.

Community Partner 1

Within one of the community-partnered workshops, discussions revolved around how community members obtain COVID-19–related information. One of the participants reflected on frank discussions in her everyday life:

talking about the vaccines in the community and stuff...that made me think about when I go and get my hair done, I always talk about it in there because nobody in there wants to get [the vaccine]. None of the hairdressers are planning to get it, most of the other people who are in there getting their hair done as they're saying, they won't either. They asked me a lot of questions, though, because I've had both shots at this point. I was thinking about too like where can you or how are we going to get to more people of color to address their concerns and issues with wanting to get the vaccine or not trust in the vaccine and things like that, being that everyone needs to get it, really, the majority of people.

Community Partner 3

We have described the cocreation of an equity-focused, human-centered design process to improve the engagement of underrepresented people, particularly Black Americans, in patient-centered health research on COVID-19, with assessment through process evaluation and qualitative thematic analysis. A group of engaged community members worked collaboratively with health researchers to create tools for research agenda setting, which were tested and refined.
From the assessment and evaluation of stakeholders' experiences of the practice and process of co-design and research prioritization, the team incorporated what it learned to produce a toolkit suited to the needs of underrepresented communities for research agenda setting. The tools can be used by communities and health researchers to improve engagement in research prioritization. The key themes identified included the value of establishing authentic partnerships, building trust and empathy, and fostering candid dialogue around health.

In the United States, achieving health equity is highlighted as a primary focus of public health policies and practices.

In this study, we presented a process evaluation and thematic analysis assessing the engagement of community partners in health research prioritization using EDT. Our previous work established that design thinking, or human-centered design, has been increasingly used in health research, especially as a participatory means of working with community members on health research.

Focusing on equity while incorporating design into public health and PCOR may provide a more appropriate approach to addressing health equity. Few of the burgeoning efforts to incorporate design into health research explicitly center equity. Using an equity-based approach, this study sought to transform the research experience of the participants from a transactional engagement to a meaningful one.

The free availability of the toolkit for download ensures that patient engagement researchers and community members can use and implement similar strategies. Future projects include expanding the toolkit to address other health issues severely impacting minoritized communities, such as perinatal mortality, and to provide a broader array of research environments for patient engagement.

The limitations of this study include the challenges in replicating the specific personnel and resources used, including technological resources and capacity. Some technological challenges affected the full participation of all stakeholders in the workshop activities. Some stakeholders did not have access to reliable or stable Wi-Fi services, which necessitated adjustments. Workshop activities relied on video and screen sharing; however, the stakeholders sometimes had to join the workshop via telephone and were only able to listen in. Some stakeholders dialed in on a mobile phone while using iPads (Apple Inc) to view the screen via a Zoom link. In addition, several stakeholders contracted COVID-19 during the project period, making it challenging for them to attend and complete the workshop activities. These are real challenges in engaging community members as partners in research. In addition, inviting stakeholders through referrals may have resulted in individuals being more likely to respond positively to, and participate more actively in, the activities than would be the case when implementing these activities in community settings; however, social desirability bias is widespread in all study designs.

This study evaluated EDT strategies for community engagement in health research agenda setting, which provide an alternative to traditional top-down models and foster inclusive approaches. Despite the increasing use of human-centered design in health, only a few projects explicitly include equity in design thinking approaches.
The use of methods and tools to intentionally engage underrepresented stakeholders in research agenda setting, and to equitably share power between researchers and community members, may improve health research, ultimately improving health equity.